Model Nodes are a managed cluster of AI models. Their role is to make models securely available over the network for a given Crunch, without exposing the model code. They are designed to:
  • Make models for a Crunch remotely accessible
  • Protect model intellectual property: models are only reachable via an API, and no direct code access is provided
  • Control access and permissions: who can call which model, and under what rules

Core Components

To achieve this, Model Nodes rely on a set of open-source components maintained by CrunchDAO, described in the sections that follow.

What happens in practice
  1. A Cruncher participates in a Crunch (challenge) run by a Coordinator and submits their model code
  2. Through the Crunch Protocol, an authorization is written on-chain, granting the right to run and call that model remotely.
  3. The Model Node, via the Model Orchestrator, continuously reads the blockchain and reacts to these authorizations by:
    • building an execution environment (for example, a Docker-based runtime) that makes the model callable remotely; a minimal sketch of this step follows the list
    • deploying this execution instance on a cloud platform
    • sharing the connection details with the Coordinator
  4. The Coordinator then uses the Model Runner Client (Python library) to connect to the models attached to their Crunch and:
    • fan out requests to many models
    • feed data into models
    • run inference (get predictions); a conceptual fan-out sketch follows the list
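As an illustration of step 3, the sketch below shows how an execution environment could be built and started with the Docker SDK for Python. This is a simplified example, not the actual Model Orchestrator code; the image name, port, and environment variables are assumptions made for the illustration.

```python
# Minimal sketch (not the actual Model Orchestrator implementation): start a
# containerized runtime that exposes a model over the network, using the
# Docker SDK for Python. Image name, port, and variables are assumptions.
import docker

client = docker.from_env()

container = client.containers.run(
    image="crunchdao/model-runner:latest",      # hypothetical runner image
    detach=True,                                # run in the background
    ports={"8000/tcp": 8000},                   # expose the inference API port
    environment={"MODEL_ID": "example-model"},  # hypothetical model identifier
)

print(f"Runtime started: {container.short_id}")
```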
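To make step 4 concrete, the sketch below shows, at a conceptual level, what the Model Runner Client does on behalf of the Coordinator: fan the same input data out to several remotely deployed models and collect their predictions. The endpoint URLs and payload shape are assumptions for illustration; in practice, the Python library handles the connection details shared by the Model Node.

```python
# Conceptual sketch of the fan-out performed by the Model Runner Client:
# send the same input data to every model attached to a Crunch and gather
# the predictions. URLs and payload format are illustrative assumptions.
import requests

# Hypothetical connection details shared by the Model Node for each model.
model_endpoints = {
    "model-a": "https://node.example.com/models/model-a/predict",
    "model-b": "https://node.example.com/models/model-b/predict",
}

payload = {"features": [[0.1, 1.0], [0.2, 2.0]]}  # example input data

predictions = {}
for model_id, url in model_endpoints.items():
    response = requests.post(url, json=payload, timeout=30)
    response.raise_for_status()
    predictions[model_id] = response.json()  # each model returns its predictions

print(predictions)
```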
Next sections

After this overview, the documentation will go deeper into:
  • Access Control & Permissions (an explanation of the CrunchDAO Secure Model Protocol)
  • Model Orchestrator (how to use it locally to simulate Model Nodes)
  • Model Runner (how it makes a model callable remotely)
  • Model Runner Client (how remote inference is done in practice)