Now that you understand the Crunch Node and the challenge package through the default scaffold, this guide walks you through customizing them to build your own Crunch. You will:
  1. Update the model interface for your prediction task
  2. Build your scoring function
  3. Configure the Crunch Node
  4. Customize the challenge package for participants
  5. Test the full loop locally

Your workspace

When you ran crunch-node init my-challenge, you got this structure:
my-challenge/
├── node/                          ← Crunch Node (Docker Compose, config, scripts)
│   ├── docker-compose.yml
│   ├── .local.env                 # Environment variables
│   ├── config/
│   │   ├── crunch_config.py       # CrunchConfig — type shapes and behavior
│   │   └── callables.env          # Callable overrides
│   ├── api/                       # Custom API endpoints
│   ├── extensions/                # Custom callables
│   └── Makefile
├── challenge/                     ← Participant-facing package
│   ├── starter_challenge/
│   │   ├── tracker.py             # Model base class
│   │   ├── scoring.py             # Scoring function
│   │   ├── backtest.py            # Backtest harness
│   │   └── examples/              # Quickstarter models
│   └── pyproject.toml
└── Makefile

Step 1: Define your prediction task

Your prediction task defines what participants need to predict and how their models interact with your Crunch Node.

Design the model interface

Edit challenge/starter_challenge/tracker.py to define the contract between your node and every Cruncher submission. Ask yourself:
  • What data will models receive? (e.g., price ticks, tabular features, images)
  • What should models return? (e.g., a class label, a probability distribution, a numeric value)
  • What methods do models need? (e.g., tick + predict, or train + infer)
The default TrackerBase uses a tick() / predict() pattern for real-time streaming:
from typing import Any

class TrackerBase:
    def tick(self, data: dict[str, Any]) -> None:
        """Receive latest market data. Override to maintain state."""
        ...

    def predict(self, subject: str, resolve_horizon_seconds: int, step_seconds: int) -> dict[str, Any]:
        """Return a prediction for the given scope."""
        raise NotImplementedError
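For instance, a last-value baseline fits this contract. This is a sketch, not part of the scaffold: the tick payload fields (`subject`, `price`) and the prediction payload are assumptions, and the base class is copied inline so the example runs standalone.

```python
from typing import Any


class TrackerBase:
    # Minimal copy of the base class above so this sketch runs standalone.
    def tick(self, data: dict[str, Any]) -> None:
        ...

    def predict(self, subject: str, resolve_horizon_seconds: int, step_seconds: int) -> dict[str, Any]:
        raise NotImplementedError


class LastPriceTracker(TrackerBase):
    """Hypothetical baseline: predict that the last observed price persists."""

    def __init__(self) -> None:
        self.last_price: dict[str, float] = {}

    def tick(self, data: dict[str, Any]) -> None:
        # Assumes each tick carries a subject and a price field.
        self.last_price[data["subject"]] = float(data["price"])

    def predict(self, subject: str, resolve_horizon_seconds: int, step_seconds: int) -> dict[str, Any]:
        # Fall back to 0.0 when no tick for this subject has been seen yet.
        return {"subject": subject, "value": self.last_price.get(subject, 0.0)}
```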
For a batch classification problem, you might use a completely different pattern:
import abc

import pandas as pd

class MyModelBase(abc.ABC):
    @abc.abstractmethod
    def train(self, train_data: pd.DataFrame) -> None:
        """Train on labeled data."""
        pass

    @abc.abstractmethod
    def predict(self, features: pd.DataFrame) -> pd.DataFrame:
        """Return predictions with a 'prediction' column."""
        pass
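A quickstarter under this batch pattern can be as simple as a majority-class baseline. This is a hypothetical example: the `label` column name is an assumption, and the abstract base is copied inline so the sketch runs standalone.

```python
import abc

import pandas as pd


class MyModelBase(abc.ABC):
    # Minimal copy of the base class above so this sketch runs standalone.
    @abc.abstractmethod
    def train(self, train_data: pd.DataFrame) -> None: ...

    @abc.abstractmethod
    def predict(self, features: pd.DataFrame) -> pd.DataFrame: ...


class MajorityClassModel(MyModelBase):
    """Hypothetical quickstarter: always predict the most common training label."""

    def __init__(self) -> None:
        self.majority = None

    def train(self, train_data: pd.DataFrame) -> None:
        # Assumes the label lives in a 'label' column.
        self.majority = train_data["label"].mode().iloc[0]

    def predict(self, features: pd.DataFrame) -> pd.DataFrame:
        # One row per input row, all carrying the majority label.
        return pd.DataFrame({"prediction": [self.majority] * len(features)}, index=features.index)
```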

Define output types

Update your CrunchConfig in node/config/crunch_config.py to match the types your models will produce:
from coordinator_node.crunch_config import CrunchConfig

contract = CrunchConfig(
    raw_input_type=RawInput,        # Shape of feed data
    output_type=InferenceOutput,    # Shape of model predictions
    score_type=ScoreResult,         # Shape of scoring results
)
The MODEL_BASE_CLASSNAME in node/.local.env must match your tracker class path. For example, if your package is my_challenge and the class is MyModelBase, set MODEL_BASE_CLASSNAME=my_challenge.model_base.MyModelBase.

Step 2: Build your scoring function

Edit challenge/starter_challenge/scoring.py to implement your evaluation logic:
def score_prediction(prediction, ground_truth):
    """Score a single prediction against ground truth.
    
    Return a dict matching your ScoreResult shape.
    """
    error = abs(prediction["value"] - ground_truth["value"])
    return {
        "value": 1.0 / (1.0 + error),
        "success": True,
        "failed_reason": None,
    }
Then point the score worker at your function in node/.local.env:
SCORING_FUNCTION=starter_challenge.scoring:score_prediction

Choose your evaluation approach

| Approach | When to use | Example |
| --- | --- | --- |
| Accuracy / F1 | Classification tasks | Iris classification |
| Log-likelihood | Probabilistic predictions | Density estimation |
| MSE / MAE | Regression tasks | Price forecasting |
| Custom metric | Domain-specific evaluation | Risk-adjusted returns |
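As an illustration of the first row, a classification Crunch might swap the error-based scorer above for an exact-match scorer returning the same ScoreResult shape. The `label` field name is an assumption.

```python
def score_prediction(prediction: dict, ground_truth: dict) -> dict:
    """Score a single classification prediction: 1.0 for a match, 0.0 otherwise."""
    correct = prediction["label"] == ground_truth["label"]
    return {
        "value": 1.0 if correct else 0.0,
        "success": True,
        "failed_reason": None,
    }
```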

Configure multi-metric scoring

Beyond your per-prediction scoring function, the engine computes portfolio-level metrics. Configure which ones in your CrunchConfig:
contract = CrunchConfig(
    metrics=["ic", "ic_sharpe", "hit_rate", "max_drawdown"],
    aggregation=Aggregation(ranking_key="ic_sharpe"),  # Rank by IC Sharpe
)

Choose your payout schedule

Set CHECKPOINT_INTERVAL_SECONDS in node/.local.env:
  • Continuous payouts — e.g., 604800 (1 week) for ongoing competitions with live data
  • One-time payout — set a long interval and trigger manually at competition end
See Crunch lifecycle for the full checkpoint and payout flow.

Step 3: Configure the Crunch Node

Edit node/.local.env with your competition settings:
# Competition identity
CRUNCH_ID=my-challenge

# Data source
FEED_SOURCE=pyth
FEED_SUBJECTS=BTC,ETH
FEED_KIND=tick
FEED_GRANULARITY=1s

# Model interface
MODEL_BASE_CLASSNAME=starter_challenge.tracker.TrackerBase

# Scoring
SCORING_FUNCTION=starter_challenge.scoring:score_prediction
CHECKPOINT_INTERVAL_SECONDS=604800
For advanced customization, you can override additional callables via environment variables:
| Env var | Purpose |
| --- | --- |
| INFERENCE_INPUT_BUILDER | Transform raw feed data into model input |
| INFERENCE_OUTPUT_VALIDATOR | Validate model output shape and values |
| MODEL_SCORE_AGGREGATOR | Aggregate per-model scores across predictions |
| LEADERBOARD_RANKER | Custom leaderboard ranking strategy |

Step 4: Customize the challenge package

Update the participant-facing package:
  1. Update the model interface: Edit challenge/starter_challenge/tracker.py with your base class (done in Step 1).
  2. Write quickstarter examples: Replace the default examples in challenge/starter_challenge/examples/ with working models for your competition. Provide at least one that participants can submit immediately.
  3. Update the scoring function: Edit challenge/starter_challenge/scoring.py with your evaluation logic (done in Step 2). This lets participants score locally.
  4. Configure backtest data: Update challenge/starter_challenge/config.py so the backtest harness knows where to fetch historical data from your Crunch Node.
  5. Update package metadata: Edit challenge/pyproject.toml to change the package name, version, and description.
  6. Publish to PyPI:
cd challenge
pip install build twine
python -m build
twine upload dist/*
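The backtest configuration from step 4 might point the harness at your node's API. The sketch below is hypothetical; the variable names and endpoint path are assumptions, since the real scaffold defines its own settings in challenge/starter_challenge/config.py.

```python
# Hypothetical backtest settings; the real scaffold defines its own names.
CRUNCH_NODE_URL = "http://localhost:8000"  # where the harness fetches historical data
HISTORY_ENDPOINT = f"{CRUNCH_NODE_URL}/reports/history"
SUBJECTS = ["BTC", "ETH"]  # should match FEED_SUBJECTS in node/.local.env
```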

Step 5: Test the full loop locally

With your customizations in place, run the complete stack:
cd node
make deploy
Then verify:
  1. Models connect — Open the Coordinator Platform at http://localhost:3000 and check that test models appear
  2. Predictions flow — Submit a test model and confirm predictions are being collected
  3. Scoring works — Wait for the scoring interval to pass, then check the leaderboard
  4. API returns data — Hit http://localhost:8000/reports/leaderboard to verify results
You can also run the full preflight check:
make preflight
This validates config, deploys, checks model connectivity, and runs end-to-end verification.
Scoring requires ground truth. If your predictions have a 1-minute horizon, you need to wait 1 minute before scores appear on the leaderboard.

Checklist

Before moving to deployment, confirm:
  • Model interface defined in challenge/starter_challenge/tracker.py
  • Scoring function implemented in challenge/starter_challenge/scoring.py
  • CrunchConfig updated in node/config/crunch_config.py
  • Environment variables set in node/.local.env
  • Quickstarter examples work and are easy to modify
  • Backtest harness fetches data and produces scores
  • Full loop works end-to-end with make preflight
  • Challenge package published to PyPI

Next: Wallet & CLI setup

Set up your Solana wallet and register as a Coordinator on the protocol.