A Crunch Node is the set of services you build and host to run your Crunch. It contains your competition logic — how data flows in, how models are called, how predictions are scored, and how results are reported. Your Crunch Node relies on Model Nodes to execute participant models remotely. You never run or manage models directly — you call them over the network via the Model Runner Client.
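The call-over-the-network pattern can be sketched as follows. This is an illustrative stand-in, not the real Model Runner Client API: the `ModelRunnerClient` and `predict` names are assumptions, and the remote model is simulated locally so the snippet is self-contained.

```python
# Hypothetical sketch: a Crunch Node never executes participant models
# itself; it sends data to a remote Model Node and receives predictions.
from dataclasses import dataclass


@dataclass
class Prediction:
    model_id: str
    values: list


class ModelRunnerClient:
    """Stand-in for a client that forwards data to a remote Model Node."""

    def __init__(self, endpoint: str):
        # In a real node this would point at the Model Orchestrator.
        self.endpoint = endpoint

    def predict(self, model_id: str, features: list) -> Prediction:
        # A real client would make an HTTP/gRPC call here; we simulate
        # the remote model with a trivial transformation.
        return Prediction(model_id, [x * 2 for x in features])


client = ModelRunnerClient("https://orchestrator.example/api")
pred = client.predict("model-123", [1.0, 2.5])
print(pred.values)
```

The point of the pattern is that the node only holds a network handle to each model, so model failures and scaling stay on the Model Node side.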

What a Crunch Node does

In practice, your node handles four responsibilities:
  1. Data ingestion — collect or host the data your competition needs (market feeds, datasets, APIs)
  2. Model orchestration — send data to connected models, trigger training and inference, and collect predictions
  3. Scoring — evaluate predictions against ground truth using your scoring function, aggregate results, and build a leaderboard
  4. Reporting — expose results via API so the dashboard, participants, and external systems can consume them
[Diagram: a Crunch Node with workers for data ingestion, prediction, scoring, and reporting, connected to the Model Orchestrator]
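The scoring and reporting responsibilities above can be sketched in a few lines. The scoring function (mean absolute error) and the data are illustrative; each Crunch defines its own metric and ground truth.

```python
# Evaluate predictions against ground truth, then aggregate into a
# leaderboard. Lower error is better, so we sort ascending.
def score(predictions: list, truth: list) -> float:
    """Example metric: mean absolute error."""
    return sum(abs(p - t) for p, t in zip(predictions, truth)) / len(truth)


truth = [1.0, 2.0, 3.0]
submissions = {
    "model-a": [1.1, 1.9, 3.2],
    "model-b": [0.5, 2.5, 2.0],
}

leaderboard = sorted(
    ((model_id, score(preds, truth)) for model_id, preds in submissions.items()),
    key=lambda row: row[1],
)

for rank, (model_id, err) in enumerate(leaderboard, start=1):
    print(rank, model_id, round(err, 4))
```

The reporting worker would then expose `leaderboard` through the node's API for the dashboard and participants to consume.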

Infrastructure requirements

You are responsible for hosting your Crunch Node and for every decision around it: which data to use, when to trigger calls, how to compute scores, and how to produce results. A simple cloud server or on-premises machine is usually enough. The required capacity depends primarily on the work you do around predictions — ingestion, scoring, aggregation, and storage. You do not need to size infrastructure for model execution: models are deployed, scaled, and managed by Crunch Labs through the Model Nodes.
The crunch-node package provides a production-ready engine with all four responsibilities built in. You configure behavior through environment variables and a CrunchConfig. See the Getting started guide.
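A configuration driven by environment variables might look like the sketch below. The variable names and fields here are assumptions for illustration, not the real `CrunchConfig` schema from the crunch-node package.

```python
# Hypothetical sketch: building a config object from environment
# variables, in the spirit of configuring the crunch-node engine.
import os
from dataclasses import dataclass


@dataclass
class CrunchConfig:
    orchestrator_url: str
    scoring_interval_seconds: int

    @classmethod
    def from_env(cls) -> "CrunchConfig":
        # Fall back to defaults when a variable is unset.
        return cls(
            orchestrator_url=os.environ.get(
                "CRUNCH_ORCHESTRATOR_URL", "https://orchestrator.example"
            ),
            scoring_interval_seconds=int(
                os.environ.get("CRUNCH_SCORING_INTERVAL", "3600")
            ),
        )


os.environ["CRUNCH_SCORING_INTERVAL"] = "600"
config = CrunchConfig.from_env()
print(config.scoring_interval_seconds)
```

Keeping all behavior in environment variables lets the same node image run unchanged across staging and production deployments.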

Next: Model Nodes

Understand the managed infrastructure that executes participant models securely.