Local Development Environment

The Condor backend provides a complete local development stack that includes:
  • Model Orchestrator (manages participant models)
  • PostgreSQL database (stores predictions and scores)
  • All three workers (Predict, Score, Report)
  • Reports UI (leaderboard and metrics dashboard)
  • Example models for testing
Everything runs in Docker containers, giving you a production-like environment locally.

Prerequisites

Before starting, ensure you have:
# Required software
- Docker (version 20.10+)
- Docker Compose (version 2.0+)
- Make (for convenience commands)
- Python 3.9+ (for local development)
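If you want to confirm these tools are on your PATH before running anything, a small standalone check (illustrative only, not part of the repository) could look like this:
# check_prereqs.py -- confirm the required tools are installed (illustrative helper)
import shutil
import subprocess

CHECKS = [
    ("Docker", ["docker", "--version"]),
    ("Docker Compose", ["docker", "compose", "version"]),
    ("Make", ["make", "--version"]),
    ("Python", ["python3", "--version"]),
]

for name, cmd in CHECKS:
    if shutil.which(cmd[0]) is None:
        print(f"MISSING: {name} ({cmd[0]} not found on PATH)")
        continue
    result = subprocess.run(cmd, capture_output=True, text=True)
    first_line = (result.stdout or result.stderr or "unknown").splitlines()[0]
    print(f"{name}: {first_line}")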

Quick Start

1. Clone and Deploy

# Clone the repository
git clone https://github.com/crunchdao/falcon2-backend.git
cd falcon2-backend

# Deploy the full stack
make deploy
This single command will:
  • Build all Docker images
  • Start PostgreSQL
  • Start Model Orchestrator
  • Start Predict, Score, and Report workers
  • Start the Reports UI
  • Load example models

2. Access the Reports UI

Once the stack is running, open the Reports Dashboard at http://localhost:3000. The UI shows:
  • Leaderboard (model rankings)
  • Performance metrics
  • Score trends over time
  • Per-model detailed views
⏱️ Scoring Delay: Scoring is not instant! Since predictions have resolution periods (e.g., 1 hour for the 1h horizon), you need to wait before scores appear.
  • For 1h horizon: Wait ~1 hour
  • For 4h horizon: Wait ~4 hours
  • For 24h horizon: Wait ~24 hours
If you open the UI immediately, it’s normal to see empty leaderboards.
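The wait is simply the prediction's creation time plus its horizon. As plain arithmetic (not project code, though it borrows the resolvable_at naming that appears in the database queries later on):
# When would a prediction made right now become scorable?
from datetime import datetime, timedelta, timezone

created_at = datetime.now(timezone.utc)
for horizon_hours in (1, 4, 24):
    resolvable_at = created_at + timedelta(hours=horizon_hours)
    print(f"{horizon_hours}h horizon -> scorable after {resolvable_at:%Y-%m-%d %H:%M} UTC")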

Development Modes

Full Stack Mode (Default)

Everything runs in Docker:
make deploy
Use when:
  • Testing the complete system
  • Verifying end-to-end functionality
  • Demonstrating to stakeholders

Dev Mode

Infrastructure runs in Docker, workers run locally:
# Start only infrastructure
make dev deploy

# In separate terminals, run workers from your IDE
python -m condorgame_backend.workers.predict_worker
python -m condorgame_backend.workers.score_worker
python -m condorgame_backend.workers.report_worker
Use when:
  • Debugging worker logic
  • Iterating on code quickly
  • Attaching debuggers
  • Testing code changes without rebuilding images
Benefits:
  • Fast iteration (no Docker rebuild)
  • Full IDE debugging support (one approach is sketched below)
  • Easy to test changes
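One way to get debugger attachment when running workers locally is debugpy. It is not part of the Condor stack (treat both the package and the DEBUGPY environment variable below as assumptions about your own tooling); the sketch shows a guard you could drop near the top of a worker's entry point:
# Optional debugger hook for local runs (requires: pip install debugpy)
import os

if os.environ.get("DEBUGPY") == "1":
    import debugpy
    debugpy.listen(5678)        # your IDE connects to localhost:5678
    debugpy.wait_for_client()   # block until the debugger attaches
Run the worker with DEBUGPY=1 set, then attach from your IDE's remote-debug configuration.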

Working with Local Models

Model Directory Structure

Place your test models in the submissions directory:
deployment/model-orchestrator-local/data/submissions/
├── my-test-model-1/
│   ├── main.py
│   ├── my_model.py
│   └── requirements.txt
└── my-test-model-2/
    ├── main.py
    ├── my_model.py
    └── requirements.txt
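If you prefer to script the scaffolding of the layout above, a small helper could create the folders and empty files for you (a convenience sketch only; fill in the file contents yourself):
# scaffold_submissions.py -- create the folder layout shown above
from pathlib import Path

SUBMISSIONS = Path("deployment/model-orchestrator-local/data/submissions")

for model_name in ("my-test-model-1", "my-test-model-2"):
    model_dir = SUBMISSIONS / model_name
    model_dir.mkdir(parents=True, exist_ok=True)
    for filename in ("main.py", "my_model.py", "requirements.txt"):
        (model_dir / filename).touch(exist_ok=True)
    print(f"Scaffolded {model_dir}")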

Registering Local Models

Edit deployment/model-orchestrator-local/config/models.dev.yml:
# models.dev.yml
models:
  - id: test-model-1
    submission_id: my-test-model-1 # Must match folder name
    crunch_id: condor
    desired_state: RUNNING
    cruncher_id: local-cruncher-1
    cruncher_name: "Test User 1"
    model_name: "My Test Model"

  - id: test-model-2
    submission_id: my-test-model-2
    crunch_id: condor
    desired_state: RUNNING
    cruncher_id: local-cruncher-2
    cruncher_name: "Test User 2"
    model_name: "Another Model"
Field descriptions:
Field            Description
id               Unique identifier for the model in the orchestrator
submission_id    Folder name in the submissions/ directory
crunch_id        Game identifier (use “condor” for Condor)
desired_state    RUNNING to start, STOPPED to ignore
cruncher_id      Simulated blockchain ID of the participant
cruncher_name    Display name for the participant
model_name       Display name for the model

Example Test Model

Create a simple test model:
# submissions/simple-test/my_model.py
from condorgame import TrackerBase

class SimpleTestModel(TrackerBase):
    """A simple model for testing."""

    def __init__(self):
        self.tick_count = 0
        print("SimpleTestModel initialized!")

    def tick(self, data):
        """Update state with market data."""
        self.tick_count += 1
        print(f"Tick #{self.tick_count} received")

    def predict(self, asset, horizon, step):
        """Return a simple uniform distribution."""
        print(f"Predict called: {asset}, {horizon}h, step {step}")

        # Return uniform distribution
        return {
            "distribution": {
                "-0.10": 0.11,
                "-0.05": 0.11,
                "-0.02": 0.11,
                "-0.01": 0.11,
                "0.00": 0.12,
                "0.01": 0.11,
                "0.02": 0.11,
                "0.05": 0.11,
                "0.10": 0.11
            }
        }
# submissions/simple-test/requirements.txt
condorgame==1.0.0
# submissions/simple-test/main.py
from my_model import SimpleTestModel

# Orchestrator will instantiate and manage
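Before handing the files to the orchestrator, you can exercise the model directly. The tick payload and asset name below are made-up stand-ins (the real shapes come from the orchestrator), so treat this as a rough local sanity check rather than a faithful simulation:
# submissions/simple-test/smoke_test.py  (local use only; not read by the orchestrator)
from my_model import SimpleTestModel

model = SimpleTestModel()
model.tick({"timestamp": "2024-01-01T00:00:00Z"})   # placeholder payload

result = model.predict(asset="BTC", horizon=1, step=0)
total = sum(result["distribution"].values())
assert abs(total - 1.0) < 1e-6, f"distribution sums to {total}, expected ~1.0"
print(f"OK: distribution sums to {total:.2f}")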

Testing Your Model

After adding your model to submissions/ and models.dev.yml:
# Restart to pick up new models
make restart

# Watch orchestrator logs
docker-compose logs -f model-orchestrator

# You should see:
# - Model container being built
# - Dependencies being installed
# - Model being started
# - Connection to orchestrator

Database Access

Connect to PostgreSQL

# Connect via psql
docker exec -it condorgame-db psql -U condor -d condor

# Or from host (if port 5432 is exposed)
psql -h localhost -U condor -d condor

Useful Queries

-- View recent predictions
SELECT
    model_id,
    asset,
    horizon,
    status,
    created_at
FROM predictions
ORDER BY created_at DESC
LIMIT 10;

-- Count predictions by status
SELECT
    status,
    COUNT(*) as count
FROM predictions
GROUP BY status;

-- View model scores
SELECT
    m.name,
    ms.recent_score,
    ms.steady_score,
    ms.anchor_score
FROM models m
LEFT JOIN model_scores ms ON m.id = ms.model_id
ORDER BY ms.recent_score DESC;

-- Check predictions ready to score
SELECT COUNT(*)
FROM predictions
WHERE resolvable_at <= NOW()
  AND score IS NULL;
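If you want the same checks programmatically (for a quick script or a test), a psycopg2 sketch is below. It assumes psycopg2 is installed, port 5432 is exposed to the host, and that the local password is available in the PGPASSWORD environment variable; adjust to whatever your compose file actually sets:
# query_predictions.py -- programmatic version of the "count by status" query
import os

import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="condor",
    user="condor",
    password=os.environ.get("PGPASSWORD", ""),
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT status, COUNT(*) FROM predictions GROUP BY status;")
    for status, count in cur.fetchall():
        print(f"{status}: {count}")
conn.close()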

Common Operations

Restart Everything

make restart

Stop Everything

make stop      # Stop containers (keep data)
make down      # Stop and remove containers (keep data)

Rebuild After Code Changes

make build     # Rebuild Docker images
make deploy    # Deploy with new images

Clean Slate (Remove All Data)

make down              # Stop containers
docker volume prune    # Remove volumes (WARNING: deletes all data)
make deploy            # Fresh start

View Running Containers

docker-compose ps
Expected output:
NAME                  COMMAND                  STATUS
model-orchestrator    "/app/entrypoint.sh"     Up
condorgame-db         "docker-entrypoint..."   Up
predict-worker        "python -m condorga..."  Up
score-worker          "python -m condorga..."  Up
report-worker         "python -m condorga..."  Up
reports-ui            "nginx -g 'daemon o..."  Up

Testing Workflow

Complete Test Cycle

# 1. Deploy stack
make deploy

# 2. Add your test model
mkdir -p deployment/model-orchestrator-local/data/submissions/my-model
# ... create model files ...

# 3. Register model in models.dev.yml
vim deployment/model-orchestrator-local/config/models.dev.yml

# 4. Restart to load new model
make restart

# 5. Monitor logs
docker-compose logs -f predict-worker

# 6. Watch for ticks and predictions
# You should see:
# - Tick events being sent
# - Predict calls being made
# - Predictions being stored

# 7. Wait for scoring period
# (1+ hours for 1h horizon)

# 8. Check Reports UI
open http://localhost:3000

# 9. Query database directly if needed
docker exec -it condorgame-db psql -U condor -d condor

Troubleshooting

Model Not Starting

Check orchestrator logs:
docker-compose logs model-orchestrator | grep your-model-id
Common issues:
  • requirements.txt has invalid dependencies
  • Model code has syntax errors (a quick local check is sketched below)
  • submission_id doesn’t match folder name
  • desired_state is STOPPED
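To rule out syntax errors before the orchestrator even tries to build the model, you can compile the file locally (the path below is an example; point it at your own submission):
# check_syntax.py -- compile the model file without executing it
import py_compile

path = "deployment/model-orchestrator-local/data/submissions/my-test-model-1/my_model.py"
try:
    py_compile.compile(path, doraise=True)
    print(f"Syntax OK: {path}")
except py_compile.PyCompileError as exc:
    print(f"Syntax error: {exc.msg}")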

No Predictions Appearing

Check predict worker logs:
docker-compose logs predict-worker
Common issues:
  • Models timing out (increase timeout in ModelRunnerClient)
  • Models returning invalid distributions (a quick local check is sketched below)
  • Predict worker not connected to orchestrator
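A quick local check of the distribution shape catches the most common mistakes (non-numeric keys, negative weights, probabilities that don't sum to 1). The exact validation rules live in the backend, so treat this as a rough pre-flight check only:
# validate_distribution.py -- rough pre-flight check for a predict() result
def check_distribution(result):
    dist = result.get("distribution")
    assert isinstance(dist, dict) and dist, "missing or empty 'distribution'"
    for key, weight in dist.items():
        float(key)                                   # keys must parse as returns
        assert weight >= 0, f"negative weight for {key}"
    total = sum(dist.values())
    assert abs(total - 1.0) < 1e-6, f"weights sum to {total}, expected ~1.0"
    print("Distribution looks OK")

check_distribution({"distribution": {"-0.10": 0.2, "0.00": 0.6, "0.10": 0.2}})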

Database Connection Errors

Verify database is running:
docker-compose ps condorgame-db
Test connection:
docker exec condorgame-db pg_isready

Scoring Not Happening

Check score worker logs:
docker-compose logs score-worker
Common issues:
  • Not enough time has passed (check resolvable_at timestamps)
  • Score worker crashed (check logs for exceptions)
  • Missing market data for realized returns

Development Best Practices

1. Start Simple

Begin with a basic model that just returns uniform distributions:
def predict(self, asset, horizon, step):
    return {"distribution": {"-0.10": 0.2, "0.00": 0.6, "0.10": 0.2}}
Verify the entire pipeline works before adding complexity.

2. Test Incrementally

  • ✅ First: Verify model starts and connects
  • ✅ Then: Verify tick events are received
  • ✅ Then: Verify predictions are generated
  • ✅ Then: Verify predictions are stored
  • ✅ Finally: Verify scoring works

3. Use Logs Extensively

Add logging to your model:
def tick(self, data):
    print(f"Received tick at {data['timestamp']}")
    # Your logic...

def predict(self, asset, horizon, step):
    print(f"Predicting {asset} {horizon}h")
    # Your logic...
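print() works fine inside the containers, but if you want levels and timestamps, the standard-library logging module is enough (a minimal setup, nothing project-specific):
# Minimal logging setup as an alternative to print()
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger("my_model")

# Inside your model's methods:
#   logger.info("Received tick at %s", data["timestamp"])
#   logger.info("Predicting %s %sh (step %s)", asset, horizon, step)
logger.info("Logging configured")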

4. Monitor Resource Usage

# Watch container resources
docker stats

# If models are using too much memory/CPU:
# - Optimize model code
# - Reduce state size
# - Use more efficient data structures

Next Steps

With a working local environment, you’re ready to:
  1. Develop your game - Customize the prediction task and scoring
  2. Test with models - Validate your coordinator logic
  3. Prepare for production - Plan your deployment strategy