- Update the model interface for your prediction task
- Build your scoring function
- Configure the Crunch Node
- Customize the challenge package for participants
- Test the full loop locally
Your workspace
When you ran `crunch-node init my-challenge`, you got this structure:
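A partial sketch of that layout, reconstructed from the paths referenced in the steps below (the full tree contains more files):

```
my-challenge/
├── challenge/
│   ├── pyproject.toml          # package metadata
│   └── starter_challenge/
│       ├── tracker.py          # model interface (base class)
│       ├── scoring.py          # evaluation logic
│       ├── config.py           # backtest data configuration
│       └── examples/           # quickstarter models
└── node/
    ├── .local.env              # competition settings
    └── config/
        └── crunch_config.py    # CrunchConfig
```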
Step 1: Define your prediction task
Your prediction task defines what participants need to predict and how their models interact with your Crunch Node.
Design the model interface
Edit `challenge/starter_challenge/tracker.py` to define the contract between your node and every Cruncher submission. Ask yourself:
- What data will models receive? (e.g., price ticks, tabular features, images)
- What should models return? (e.g., a class label, a probability distribution, a numeric value)
- What methods do models need? (e.g., `tick` + `predict`, or `train` + `infer`)
`TrackerBase` uses a `tick()` → `predict()` pattern for real-time streaming:
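As a sketch of that contract (the exact signatures in `challenge/starter_challenge/tracker.py` may differ; the `LastValueTracker` subclass is a hypothetical illustration):

```python
from abc import ABC, abstractmethod
from typing import Any


class TrackerBase(ABC):
    """Streaming model contract: the node pushes data in via tick()
    and pulls a prediction out via predict()."""

    @abstractmethod
    def tick(self, event: dict[str, Any]) -> None:
        """Receive one unit of streamed data (e.g. a price tick)."""

    @abstractmethod
    def predict(self) -> float:
        """Return the model's current prediction."""


class LastValueTracker(TrackerBase):
    """Trivial illustration: predict the most recently seen price."""

    def __init__(self) -> None:
        self.last_price = 0.0

    def tick(self, event: dict[str, Any]) -> None:
        self.last_price = float(event["price"])

    def predict(self) -> float:
        return self.last_price
```

The split between `tick` and `predict` lets the node stream data continuously and sample predictions on its own schedule.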
Define output types
Update your `CrunchConfig` in `node/config/crunch_config.py` to match the types your models will produce:
Step 2: Build your scoring function
Edit `challenge/starter_challenge/scoring.py` to implement your evaluation logic:
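A minimal sketch for a regression-style task (the hook name and signature that `scoring.py` must expose are assumptions, not the actual API):

```python
def score(prediction: float, actual: float) -> float:
    """Hypothetical per-prediction scorer: negative squared error,
    so higher scores are better and a perfect prediction scores 0."""
    return -((prediction - actual) ** 2)
```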
Choose your evaluation approach
| Approach | When to use | Example |
|---|---|---|
| Accuracy / F1 | Classification tasks | Iris classification |
| Log-likelihood | Probabilistic predictions | Density estimation |
| MSE / MAE | Regression tasks | Price forecasting |
| Custom metric | Domain-specific evaluation | Risk-adjusted returns |
Configure multi-metric scoring
Beyond your per-prediction scoring function, the engine computes portfolio-level metrics. Configure which ones in your `CrunchConfig`:
Choose your payout schedule
Set `CHECKPOINT_INTERVAL_SECONDS` in `node/.local.env`:
- Continuous payouts — e.g., `604800` (1 week) for ongoing competitions with live data
- One-time payout — set a long interval and trigger manually at competition end
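For example, a weekly checkpoint in `node/.local.env`:

```ini
# Score and pay out every 604800 seconds (7 days)
CHECKPOINT_INTERVAL_SECONDS=604800
```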
Step 3: Configure the Crunch Node
Edit `node/.local.env` with your competition settings:
| Env var | Purpose |
|---|---|
| `INFERENCE_INPUT_BUILDER` | Transform raw feed data into model input |
| `INFERENCE_OUTPUT_VALIDATOR` | Validate model output shape and values |
| `MODEL_SCORE_AGGREGATOR` | Aggregate per-model scores across predictions |
| `LEADERBOARD_RANKER` | Custom leaderboard ranking strategy |
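To make one of these hooks concrete: however `INFERENCE_OUTPUT_VALIDATOR` is wired in your node, its job is conceptually something like this sketch (function name and bounds are hypothetical):

```python
import math


def validate_output(prediction: float) -> bool:
    """Hypothetical validator: reject non-finite or wildly out-of-range
    predictions before they reach scoring."""
    return math.isfinite(prediction) and abs(prediction) <= 1e6
```

Rejecting bad outputs early keeps one broken model from polluting score aggregation.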
Step 4: Customize the challenge package
Update the participant-facing package:
Update the model interface
Edit `challenge/starter_challenge/tracker.py` with your base class (done in Step 1).
Write quickstarter examples
Replace the default examples in `challenge/starter_challenge/examples/` with working models for your competition. Provide at least one that participants can submit immediately.
Update the scoring function
Edit `challenge/starter_challenge/scoring.py` with your evaluation logic (done in Step 2). This lets participants score locally.
Configure backtest data
Update `challenge/starter_challenge/config.py` so the backtest harness knows where to fetch historical data from your Crunch Node.
Update package metadata
Edit `challenge/pyproject.toml` — change the package name, version, and description.
Step 5: Test the full loop locally
With your customizations in place, run the complete stack:
- Models connect — Open the Coordinator Platform at `http://localhost:3000` and check that test models appear
- Predictions flow — Submit a test model and confirm predictions are being collected
- Scoring works — Wait for the scoring interval to pass, then check the leaderboard
- API returns data — Hit `http://localhost:8000/reports/leaderboard` to verify results
Checklist
Before moving to deployment, confirm:
- Model interface defined in `challenge/starter_challenge/tracker.py`
- Scoring function implemented in `challenge/starter_challenge/scoring.py`
- `CrunchConfig` updated in `node/config/crunch_config.py`
- Environment variables set in `node/.local.env`
- Quickstarter examples work and are easy to modify
- Backtest harness fetches data and produces scores
- Full loop works end-to-end with `make preflight`
- Challenge package published to PyPI
Next: Wallet & CLI setup
Set up your Solana wallet and register as a Coordinator on the protocol.