The public GitHub repository is the entry point for your challenge. It presents the challenge and clearly explains what you expect from participants (Crunchers). Your repository should include:
  • Base Model Interface — Defines how your Coordinator Node interacts with Cruncher submissions
  • Scoring Function — Provides a scoring function that participants can use for local testing
  • Quickstarters — Notebooks that help participants join your challenge quickly
  • Data — A dataset participants can use locally to run and score their models
The end goal is to build and publish a PyPI package that any Cruncher can import to:
  1. Implement the official interface
  2. Access helper utilities (e.g., data access)
  3. Run scoring locally
This PyPI package is also what allows Crunchers to import the Base Model Interface inside the quickstarters, as illustrated in the sketch below.
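For illustration, a Cruncher-side workflow built on such a package might look like the following sketch. The package name mychallenge and every identifier in it are hypothetical placeholders, not a real API.
# Hypothetical sketch: "mychallenge" and all names below are placeholders for
# whatever your published PyPI package actually exposes.
from mychallenge import ModelBase, load_data, score

class MyModel(ModelBase):              # 1. implement the official interface
    def predict(self, features):
        return sum(features) / len(features)

data = load_data()                     # 2. use helper utilities such as data access
print(score(MyModel(), data))          # 3. run scoring locally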

Example: Condor Game

We’ll learn how to set up a public repository by examining the Condor Game implementation.

Condor Game Public Repository

This repository serves as the public entry point for the Condor Game challenge. It explains the goal of the game, defines what participants must implement, and provides everything needed to get started locally—including installation instructions, examples, and evaluation helpers.

Model Interface

Participants build a Tracker that implements the TrackerBase interface:
class TrackerBase:
    def tick(self, data):
        """Receive latest market data."""
        raise NotImplementedError

    def predict(self, asset, horizon, step):
        """Return a distribution or prediction."""
        raise NotImplementedError
The returned distributions must follow the density_pdf specification, ensuring a strict and standardized interface.
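As a concrete illustration, a minimal Tracker might look like the sketch below. It subclasses TrackerBase, but the shape of the returned value is only an assumption; the real format is whatever the density_pdf specification requires.
import numpy as np

class NaiveTracker(TrackerBase):
    """Illustrative sketch only: remembers the last price seen per asset and
    returns a flat placeholder distribution around it."""

    def __init__(self):
        self.last_price = {}

    def tick(self, data):
        # Assumes `data` maps asset names to their latest price (assumption).
        for asset, price in data.items():
            self.last_price[asset] = price

    def predict(self, asset, horizon, step):
        # Fall back to 1.0 so the band below never collapses to zero width.
        center = self.last_price.get(asset, 1.0)
        # Uniform density over a +/-1% band around the last observed price.
        # This dict format is a placeholder, not the actual density_pdf spec.
        edges = np.linspace(center * 0.99, center * 1.01, 11)
        densities = np.full(10, 1.0 / (edges[-1] - edges[0]))
        return {"bin_edges": edges.tolist(), "densities": densities.tolist()}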

Data

To simplify local development, the Condor Game ecosystem provides two helper utilities that fetch historical prices from a remote HTTP service at 1-minute resolution. Participants can load a time range and train or validate locally without first building their own data pipeline; a hypothetical usage sketch follows.
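The real helper names and signatures live in the repository; load_prices below is a placeholder used only to illustrate the intended workflow.
from datetime import datetime, timedelta, timezone

from condorgame import load_prices  # placeholder name, not the actual helper (assumption)

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

# Fetch one week of 1-minute historical prices for local training or validation.
prices = load_prices(start=start, end=end)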

Scoring

The repository includes a local evaluation workflow through TrackerEvaluator, which tracks:
  • Overall likelihood score
  • Recent likelihood score
This gives participants fast feedback before deploying to production.
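A local evaluation run might look like the sketch below; TrackerEvaluator comes from the repository, but the import path, constructor arguments, and method names shown here are assumptions.
from condorgame import TrackerEvaluator  # import path is an assumption

# NaiveTracker is the illustrative Tracker sketched in the Model Interface section.
evaluator = TrackerEvaluator(NaiveTracker())

# Replay historical data and collect scores (method name and return format assumed).
results = evaluator.run()
print(results["overall_likelihood"])  # overall likelihood score
print(results["recent_likelihood"])   # recent likelihood score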

Quickstarters and Examples

In the condorgame/examples directory you can find:
  • Quickstarter notebooks — Get started quickly with a working baseline
  • Self-contained examples — Ready to copy and adapt for your own implementation