This guide will help you set up a complete local development environment where you can run and test your own Crunch competition. By the end, you’ll have a fully functional Crunch running locally — complete with model orchestration, scoring, and a web-based dashboard.

Quick start

Install the crunch-node package, which includes a CLI for scaffolding workspaces:
pip install crunch-node

Prerequisites

Before starting, ensure you have:
  • Docker and Docker Compose installed
  • Python 3.10+ and uv installed
  • make command-line tool

Choose a pack

Packs are prebuilt overlays for common competition types. Start by listing available packs:
crunch-node list-packs
Current packs:
  • prediction: Fastest way to launch a simple forecasting challenge. Great for first-time Coordinators who want straightforward scoring, short prediction horizons, and minimal setup complexity.
  • trading: Signal-based competitions with PnL-style evaluation. Best when you want multi-asset workflows, trading-oriented metrics, and strategy-driven leaderboards.
  • tournament: Classic quant tournament format focused on ranking model quality over time (e.g., IC-style evaluation). Best for research-heavy competitions with deeper comparative analysis.

Scaffold your workspace

Create your workspace with a pack:
crunch-node init my-challenge --pack prediction
cd my-challenge
This creates:
my-challenge/
├── node/          ← Docker Compose, config, scripts (uses crunch-node from PyPI)
├── challenge/     ← Participant-facing package (tracker, scoring, examples)
└── Makefile
If you omit --pack, the default scaffold is used. We recommend starting with a pack to get a stronger baseline for scoring, examples, and UI configuration.
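To give a feel for what the pack-generated configuration centralizes, here is a minimal sketch in the spirit of node/config/crunch_config.py. Every field name below is a placeholder assumption, not the real schema — the actual fields depend on the pack you chose, so consult the generated file in your workspace.

```python
# Hypothetical sketch of node/config/crunch_config.py.
# All field names are illustrative assumptions; the real schema
# is defined by the pack and lives in your scaffolded workspace.
from dataclasses import dataclass
from datetime import timedelta


@dataclass(frozen=True)
class CrunchConfig:
    name: str                      # display name of the competition
    prediction_horizon: timedelta  # how far ahead models predict
    scoring_metric: str            # e.g. "rmse" for a prediction pack


CONFIG = CrunchConfig(
    name="my-challenge",
    prediction_horizon=timedelta(minutes=1),
    scoring_metric="rmse",
)
```

Keeping these competition-specific knobs in one frozen config object is the kind of single adaptation point the agentic loop below targets.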
The generated workspace includes .agent/ context and implementation guidance so you can build your challenge in an agentic loop instead of editing everything manually. A practical loop is:
  1. Choose a pack close to your use case
  2. Ask your coding agent to adapt node/config/crunch_config.py, challenge/starter_challenge/tracker.py, and challenge/starter_challenge/scoring.py
  3. Validate quickly with make test and make verify-e2e
  4. Iterate until leaderboard and metrics match your intended behavior
This has been the fastest path in recent team testing because you start from a working baseline and refine only the competition-specific pieces.
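As a rough illustration of the scoring adaptation in step 2, here is a minimal sketch of what a function in challenge/starter_challenge/scoring.py might look like. The real signature and metric are pack-specific; treat this RMSE example as an assumption about the shape of the work, not the actual interface.

```python
# Hypothetical sketch of a pack scoring function. The real
# signature in challenge/starter_challenge/scoring.py is
# pack-specific; this only shows the shape of the adaptation.
import math


def score(predictions: list[float], actuals: list[float]) -> float:
    """Root-mean-square error: lower is better on the leaderboard."""
    if len(predictions) != len(actuals):
        raise ValueError("predictions and actuals must align")
    squared = [(p - a) ** 2 for p, a in zip(predictions, actuals)]
    return math.sqrt(sum(squared) / len(squared))
```

Because make test and make verify-e2e exercise this function end to end, you can iterate on the metric alone while the rest of the pipeline stays at its working baseline.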

Start the stack

From the node/ directory:
cd node
make deploy

What gets started

This launches a complete Crunch ecosystem on your local machine — core workers handling the full pipeline from data ingestion through scoring and reporting, plus a Model Orchestrator, PostgreSQL, and a web dashboard.
[Diagram: the local environment with Crunch Node workers, Model Orchestrator, PostgreSQL, and the web dashboard]
For a detailed breakdown of each worker and service, see the Crunch Node Example.

Coordinator Platform

Once your local stack is running, access the Coordinator Platform at http://localhost:3000. This is the primary interface for managing your Crunch — both locally and in production.
[Screenshot: the Coordinator Platform showing the leaderboard and model status]
The platform provides complete control over your Crunch:
  • Leaderboard — customize ranking columns and display
  • Model management — upload, monitor, and manage participant models
  • Metrics — real-time performance analytics and scoring history
  • Checkpoints — create and manage reward distributions
  • Feeds — monitor active data feeds
  • Logs — view Coordinator and Model Node activity in real-time
When you move to testnet and mainnet, the same platform handles Coordinator registration, certificate enrollment, Crunch creation, and publishing — replacing most CLI operations with a guided web experience.
The leaderboard and metrics will be empty when you first start the local environment. Predictions cannot be scored until their resolution time arrives (e.g., a 1-minute horizon requires 1 minute before scoring begins). This is expected behavior.
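The timing rule above can be sketched as a single check. The function and field names here are assumptions for illustration, not the platform's API; they only show why the leaderboard starts empty until the first horizon elapses.

```python
# Illustrative only: a prediction becomes scorable once its
# resolution time (submission time + horizon) has passed.
# Names are assumptions, not the platform's actual API.
from datetime import datetime, timedelta, timezone


def is_scorable(submitted_at: datetime, horizon: timedelta,
                now: datetime) -> bool:
    """A prediction can be scored only after submitted_at + horizon."""
    return now >= submitted_at + horizon


t0 = datetime(2024, 1, 1, tzinfo=timezone.utc)
one_minute = timedelta(minutes=1)

# Immediately after submission the prediction is still unresolved:
is_scorable(t0, one_minute, t0)                # False
# Once the 1-minute horizon elapses, scoring can begin:
is_scorable(t0, one_minute, t0 + one_minute)   # True
```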

Next steps

With your local environment running, follow the remaining guides in order to understand the reference implementation, then build and deploy your own Crunch.
1. Understand the Crunch Node

The Crunch Node Example walks through the working implementation that runs by default — covering the worker pipeline, data feeds, prediction collection, scoring, and reporting.

2. Understand the challenge package

The Challenge package example explains the participant-facing repository — including the model interface, scoring helpers, and quickstarters.

3. Build your custom Crunch

Define your own Crunch by customizing the prediction task, scoring function, and challenge package.

4. Set up your wallet

Create a Solana wallet and register as a Coordinator through the Coordinator Platform.

5. Deploy

Use the Coordinator Platform to push your Crunch to Testnet for testing, then Mainnet for production.

Next: Crunch Node Example

Walk through the default implementation to see how all workers and services fit together.