What is a Coordinator?
Coordinators design, launch, and operate Crunches — prediction challenges that tap into a
global network of ML engineers. As a Coordinator, you bring data, define the problem, and set
incentives. Crunch Labs provides the infrastructure, model orchestration, and contributor network.

See What is the Crunch Protocol? for a full overview.
What are the steps to launch a Crunch?
- Set up your local environment — Install crunch-node, scaffold a workspace, and run make deploy
- Define your prediction task — Specify the model interface, data, and scoring function
- Create a challenge package — Publish the model interface as a PyPI package with quickstarters
- Register on the protocol — Create a Solana wallet and register through the Coordinator Platform
- Wait for approval — The Foundation approves your Coordinator status
- Deploy — Use the Coordinator Platform to push to testnet, then mainnet
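To make the "define your prediction task" step concrete, here is a minimal sketch of what a model interface and scoring function might look like. The function names (`train`, `infer`, `score`) and signatures are illustrative assumptions; the actual interface is whatever you publish in your challenge package.

```python
# Hypothetical prediction-task sketch. The interface names and shapes
# below are assumptions for illustration, not the protocol's schema.

def train(x_train: list[list[float]], y_train: list[float]) -> dict:
    """Fit a trivial baseline: predict the mean of the training targets."""
    mean = sum(y_train) / len(y_train)
    return {"mean": mean}

def infer(model: dict, x_test: list[list[float]]) -> list[float]:
    """Produce one prediction per test row."""
    return [model["mean"] for _ in x_test]

def score(y_true: list[float], y_pred: list[float]) -> float:
    """Scoring function: negative mean squared error (higher is better)."""
    n = len(y_true)
    return -sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
```

Crunchers would implement `train` and `infer` against your published interface, while `score` stays under your control as the Coordinator.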
Can I run an ongoing (continuous) competition?
Yes. Coordinators can run two types of Crunches:

Continuous Crunches
- Predictions streamed in real-time
- Checkpoints and payouts at regular intervals (e.g., weekly)
- Ideal for price data, risk, or volatility forecasting

Time-boxed Crunches
- Fixed start and end dates
- Payout at the end of the competition
- Ideal for research problems, Kaggle-style challenges, or academic studies
What happens after a time-boxed competition ends?
Coordinators retain access to:
- All model outputs generated during the Crunch
- Rankings, metrics, and aggregated performance reports
- Coordinator dashboards for post-competition analysis
From there, you can:
- Switch to a continuous Crunch
- Launch a new season
- Integrate predictions via the Report Service API
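As a sketch of post-competition integration, the snippet below ranks models from a report payload. The JSON shape shown here is an assumption for illustration; consult the Report Service API for the real schema.

```python
import json

# Hypothetical report payload; the field names ("models", "cruncher",
# "score") are assumptions, not the documented Report Service schema.
REPORT_JSON = """
{
  "crunch": "example-crunch",
  "models": [
    {"cruncher": "alice", "score": 0.71},
    {"cruncher": "bob",   "score": 0.64},
    {"cruncher": "carol", "score": 0.82}
  ]
}
"""

def rank_models(payload: str) -> list[tuple[str, float]]:
    """Parse a report payload and rank models by score, best first."""
    report = json.loads(payload)
    return sorted(
        ((m["cruncher"], m["score"]) for m in report["models"]),
        key=lambda pair: pair[1],
        reverse=True,
    )
```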
What data does a Coordinator need to provide?
It depends on your use case:

Public market data (e.g., crypto, equities, FX)
- Your Crunch Node fetches data from public APIs or supported feeds
- You define the format and delivery schedule in your predict service
- Crunchers receive data through the model interface you define

Your own datasets
- You provide historical datasets for training and test data for local development
- You define features, targets, and evaluation rules in your public repository
- For sensitive data, the protocol supports privacy-preserving techniques including TEE and MPC (see Data security)
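When you provide your own dataset, a common preparation step is a chronological train/test split so the held-out test period never leaks into training. A minimal sketch, assuming rows carry a sortable `date` field (an illustrative name, not a protocol requirement):

```python
# Hypothetical dataset-preparation sketch: split time-ordered rows into
# a training set and a held-out test set for local development.

def chronological_split(rows: list[dict], test_fraction: float = 0.2):
    """Return (train, test), with the most recent rows held out."""
    ordered = sorted(rows, key=lambda r: r["date"])
    cut = int(len(ordered) * (1 - test_fraction))
    return ordered[:cut], ordered[cut:]
```

Splitting by time rather than at random matters for forecasting tasks: it mirrors how Crunchers' models will actually be evaluated, on data from after the training period.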
Who funds the prize pool?
Coordinators provide the prize pool, paid out in USDC. The protocol also supports CRNCH token emissions as an additional reward mechanism.

Funds are deposited into a smart-contract escrow before the competition starts. The Coordinator runs checkpoints to distribute rewards, and participants claim their prizes through the Tournament Hub.

See Crunch lifecycle for the full funding and payout flow.
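To illustrate the checkpoint step, here is a minimal sketch of a pro-rata payout: a checkpoint's USDC pool split proportionally across positive scores. The real reward logic lives in the Crunch Protocol's on-chain contracts; this is only an assumed, simplified distribution rule.

```python
# Hypothetical checkpoint-payout sketch. The pro-rata-by-positive-score
# rule is an assumption for illustration, not the protocol's formula.

def checkpoint_payout(pool_usdc: float, scores: dict[str, float]) -> dict[str, float]:
    """Split the pool proportionally to each participant's positive score."""
    positive = {name: max(s, 0.0) for name, s in scores.items()}
    total = sum(positive.values())
    if total == 0:
        return {name: 0.0 for name in scores}  # nothing to distribute
    return {name: pool_usdc * s / total for name, s in positive.items()}
```

For example, with a 1,000 USDC pool and scores of 3.0, 1.0, and -2.0, the first two participants receive 750 and 250 USDC and the negative scorer receives nothing.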
What infrastructure does Crunch Labs provide?
Crunch Labs provides:
- Model Nodes — Managed infrastructure for running participant models securely
- Model Orchestrator — Deployment, scaling, and lifecycle management of model containers
- Tournament Hub — Web platform where Crunchers discover and join competitions
- Crunch Protocol — On-chain smart contracts for registration, checkpoints, and payouts
- Privacy tools — TEE and MPC support for sensitive data (see Data security)
- Starter kit — Complete local environment with scoring, reporting, and model orchestration
How much does it cost to run a Crunch?
Your main costs are:
- Prize pool — The USDC you deposit for participant rewards
- Hosting — A server for your Crunch Node (a simple cloud instance is usually sufficient)
- SOL — Small transaction fees on Solana (less than 0.1 SOL for most operations)
Can I test before going to mainnet?
Yes. The recommended flow is:
- Local — Run the full stack on your machine with make deploy
- Testnet — Deploy to Solana devnet for end-to-end testing with real model submissions
- Mainnet — Go live once you have validated the full loop