A Coordinator Node is the set of services you must build and host to interact with the Crunch Protocol.
It implements your logic and relies on the Model Nodes to run AI models remotely
(training, inference, etc.), so you do not have to run or manage the models yourself.
In practice, a Coordinator Node typically:
- Collects or hosts the required data
- Sends that data to the models and triggers training and inference/prediction runs
- Computes scores (the scoring function you define)
- Produces a leaderboard to compare and reward participating models
We provide a complete example you can use in our Getting Started Guide.
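To give a first intuition of that workflow, here is a minimal, hypothetical sketch of one coordination round in Python. The endpoint, payload shapes, and the `fetch_data`, `score`, and `run_round` helpers are placeholders invented for illustration, not part of the Crunch Protocol API; the Getting Started Guide shows the real integration.

```python
# Hypothetical sketch of one coordination round -- all names, endpoints, and
# payload shapes below are illustrative placeholders, not the real API.
import requests

MODEL_NODE_URL = "https://model-node.example.com"  # assumed Model Node endpoint


def fetch_data():
    """Collect or host the data your challenge needs (your responsibility)."""
    return [{"id": 1, "features": [0.1, 0.2]}, {"id": 2, "features": [0.3, 0.4]}]


def score(prediction, target):
    """The scoring function you define -- absolute error as a placeholder."""
    return abs(prediction - target)


def run_round(model_ids, targets):
    """Send data to each model, collect predictions, score them, rank them."""
    data = fetch_data()
    leaderboard = []
    for model_id in model_ids:
        # Trigger remote inference on a Model Node (illustrative call only).
        response = requests.post(
            f"{MODEL_NODE_URL}/models/{model_id}/predict",
            json={"data": data},
            timeout=60,
        )
        predictions = response.json()["predictions"]
        total = sum(score(p, t) for p, t in zip(predictions, targets))
        leaderboard.append({"model": model_id, "score": total})
    # Lower is better with this placeholder metric.
    return sorted(leaderboard, key=lambda row: row["score"])
```

The real services will differ (authentication, payload formats, persistence), but the responsibilities stay the same: data in, predictions back, scores out, leaderboard published.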
This diagram shows a typical Coordinator Node setup, outlining the functionality that teams need to implement.
You are responsible for hosting these services and everything around them: which data to use, when
to trigger calls, how to compute scores, and how to produce results.
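For instance, when and how often to trigger a round is entirely up to you; a plain timed loop or a cron job is often enough. The snippet below reuses the hypothetical `run_round` helper sketched above.

```python
# Minimal scheduling sketch -- run_round is the hypothetical helper from above.
import time

ROUND_INTERVAL_SECONDS = 24 * 60 * 60  # e.g. one scoring round per day

while True:
    leaderboard = run_round(model_ids=["model-a", "model-b"], targets=[0.15, 0.35])
    print(leaderboard)  # persist or publish the results here instead
    time.sleep(ROUND_INTERVAL_SECONDS)
```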
A simple server (cloud or on-premises) is usually enough; the required capacity mostly depends on what you do around
predictions (ingestion, scoring, aggregation, storage).
You do not need to size infrastructure to execute models: they are deployed, scaled, and managed
by Crunch Labs through the Model Nodes.