Get answers to common questions about designing, launching, and operating Crunches on the Crunch Protocol.
Coordinators design, launch, and operate Crunches — prediction challenges that tap into a global network of ML engineers. As a Coordinator, you bring data, define the problem, and set incentives. Crunch Labs provides the infrastructure, model orchestration, and contributor network. See What is the Crunch Protocol? for a full overview.
  1. Set up your local environment — Install crunch-node, scaffold a workspace, and run make deploy
  2. Define your prediction task — Specify the model interface, data, and scoring function
  3. Create a challenge package — Publish the model interface as a PyPI package with quickstarters
  4. Register on the protocol — Create a Solana wallet and register through the Coordinator Platform
  5. Wait for approval — The Foundation approves your Coordinator status
  6. Deploy — Use the Coordinator Platform to push to testnet, then mainnet
The full walkthrough is in the Getting started guide.
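Step 2 above asks you to specify a model interface and a scoring function. A minimal sketch of what that pair might look like in Python — the function names, feature shape, and the use of mean squared error are illustrative assumptions, not the crunch-node API:

```python
from typing import List

def predict(features: List[float]) -> List[float]:
    # Hypothetical baseline model: repeat the last observed value
    # for a fixed 3-step horizon.
    return [features[-1]] * 3

def score(predictions: List[float], targets: List[float]) -> float:
    # Illustrative scoring function: mean squared error, lower is better.
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

preds = predict([1.0, 2.0, 3.0])
print(preds)                           # [3.0, 3.0, 3.0]
print(score(preds, [3.0, 3.0, 3.0]))  # 0.0
```

Your real interface is whatever you publish in your challenge package; the point is that Crunchers implement `predict` against your data format and you supply the `score` that ranks them.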
Yes. Coordinators can run two types of Crunches:
Continuous Crunches
  • Predictions streamed in real-time
  • Checkpoints and payouts at regular intervals (e.g., weekly)
  • Ideal for price data, risk, or volatility forecasting
Time-boxed Crunches
  • Fixed start and end dates
  • Payout at the end of the competition
  • Ideal for research problems, Kaggle-style challenges, or academic studies
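The two shapes differ only in whether there is an end date and how often payouts happen. A small illustrative sketch — `CrunchSchedule` and its fields are hypothetical names for this comparison, not part of the protocol's configuration schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class CrunchSchedule:
    # Hypothetical config object, for illustration only.
    start: date
    end: Optional[date]                     # None => continuous, runs until stopped
    checkpoint_every: Optional[timedelta]   # payout cadence for continuous Crunches

    @property
    def continuous(self) -> bool:
        return self.end is None

# Continuous: streamed predictions, weekly checkpoints and payouts.
weekly = CrunchSchedule(date(2025, 1, 6), None, timedelta(weeks=1))
# Time-boxed: fixed window, single payout at the end.
boxed = CrunchSchedule(date(2025, 1, 6), date(2025, 3, 31), None)

print(weekly.continuous, boxed.continuous)  # True False
```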
The Condor Game example in the starter kit demonstrates a continuous Crunch. See Crunch Node Example for details.
Coordinators retain access to:
  • All model outputs generated during the Crunch
  • Rankings, metrics, and aggregated performance reports
  • Coordinator dashboards for post-competition analysis
If you want ongoing predictions after the initial Crunch ends, you can:
  • Switch to a continuous Crunch
  • Launch a new season
  • Integrate predictions via the Report Service API
See Crunch lifecycle for details on the end-of-competition flow.
It depends on your use case:
Public market data (e.g., crypto, equities, FX)
  • Your Crunch Node fetches data from public APIs or supported feeds
  • You define the format and delivery schedule in your predict service
  • Crunchers receive data through the model interface you define
Custom or proprietary data
  • You provide historical datasets for training and test data for local development
  • You define features, targets, and evaluation rules in your public repository
  • For sensitive data, the protocol supports privacy-preserving techniques including TEE and MPC (see Data security)
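Either way, you control the shape of the data Crunchers receive through your model interface. A minimal sketch of packaging one data delivery as JSON — the payload fields here (`symbol`, `as_of`, `features`) are made-up examples, since the real format and schedule are whatever you define in your predict service:

```python
import json
from datetime import datetime, timezone

def build_delivery(symbol: str, prices: list) -> str:
    # Hypothetical payload: your predict service defines the actual
    # features, targets, and delivery schedule.
    payload = {
        "symbol": symbol,
        "as_of": datetime(2025, 1, 6, tzinfo=timezone.utc).isoformat(),
        "features": prices,
    }
    return json.dumps(payload)

record = json.loads(build_delivery("BTC-USD", [42000.0, 42100.0]))
print(record["symbol"], len(record["features"]))  # BTC-USD 2
```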
Coordinators provide the prize pool, paid out in USDC. The protocol also supports CRNCH token emissions as an additional reward mechanism.
Funds are deposited into a smart-contract escrow before the competition starts. The Coordinator runs checkpoints to distribute rewards, and participants claim their prizes through the Tournament Hub.
See Crunch lifecycle for the full funding and payout flow.
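To make the checkpoint mechanics concrete, here is an illustrative pro-rata split of a checkpoint's USDC pool by score. This is a toy model of the idea, not the on-chain payout formula — the actual distribution rules are defined by your Crunch's scoring and the protocol's smart contracts:

```python
def checkpoint_payouts(pool_usdc: float, scores: dict) -> dict:
    # Toy payout rule: each Cruncher receives a share of the checkpoint
    # pool proportional to their score (higher score = larger share).
    total = sum(scores.values())
    if total == 0:
        return {name: 0.0 for name in scores}
    return {name: pool_usdc * s / total for name, s in scores.items()}

payouts = checkpoint_payouts(1000.0, {"alice": 3.0, "bob": 1.0})
print(payouts)  # {'alice': 750.0, 'bob': 250.0}
```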
Crunch Labs provides:
  • Model Nodes — Managed infrastructure for running participant models securely
  • Model Orchestrator — Deployment, scaling, and lifecycle management of model containers
  • Tournament Hub — Web platform where Crunchers discover and join competitions
  • Crunch Protocol — On-chain smart contracts for registration, checkpoints, and payouts
  • Privacy tools — TEE and MPC support for sensitive data (see Data security)
  • Starter kit — Complete local environment with scoring, reporting, and model orchestration
Coordinators are responsible for hosting their Crunch Node and providing the data and scoring logic. See Crunch Nodes for details.
Your main costs are:
  • Prize pool — The USDC you deposit for participant rewards
  • Hosting — A server for your Crunch Node (a simple cloud instance is usually sufficient)
  • SOL — Small transaction fees on Solana (less than 0.1 SOL for most operations)
You do not pay for model execution infrastructure — that is managed by Crunch Labs through the Model Nodes.
Yes. The recommended flow is:
  1. Local — Run the full stack on your machine with make deploy
  2. Testnet — Deploy to Solana devnet for end-to-end testing with real model submissions
  3. Mainnet — Go live once you have validated the full loop
See Testnet deployment and Mainnet deployment.