Cowboy: Technical Whitepaper
| Status | Draft for internal review |
| Type | Standards Track |
| Category | Core |
| Author(s) | Cowboy Foundation |
| Created | 2025‑09‑17 |
| Updated | 2026‑01‑18 |
| License | CC0‑1.0 |
Note: This document provides complete technical specifications for Cowboy. For architectural rationale and design decisions, see the Design Decisions Overview.
Abstract
Cowboy is a general-purpose Layer-1 blockchain that combines a Python-based actor-model execution environment with a proof‑of‑stake consensus and a market for verifiable off‑chain computation. Smart contracts on Cowboy are actors: Python programs with private state, a mailbox for messages, and chain‑native timers for autonomous scheduling. For heavy tasks like LLM inference or web requests, Cowboy integrates a decentralized network of Runners who execute jobs and attest to results under selectable trust models: N-of-M consensus, TEEs, and (in V2) ZK-proofs. Cowboy introduces a dual-metered gas model, separating pricing for computation (Cycles) and data (Cells) into independent, EIP-1559-style fee markets. Security is provided by Simplex BFT consensus with proof‑of‑stake, fast finality, and mandatory proposer rotation. This document specifies Cowboy’s complete technical architecture, state transition function, economic mechanisms, consensus protocol, and all implementation parameters.
Introduction
Cowboy is designed to enable autonomous agents by providing a native blockchain execution environment optimized for asynchronous, Python-based applications. This document provides complete technical specifications for implementers, auditors, and protocol developers. For architectural rationale and design decisions, see the Design Decisions Overview.
Key Features
Cowboy implements four core technical features:
- Deterministic Python Actors: A sandboxed Python VM (PVM) with mailbox messaging, reentrancy (depth‑capped to 32), and deterministic execution guarantees.
- Native Timers & Scheduler: Protocol-level timer mechanism with tiered calendar queue, Gas Bidding Agent (GBA) for dynamic pricing, and dedicated execution lanes.
- Verifiable Off-Chain Compute: Marketplace for off-chain jobs with N-of-M consensus, TEE attestations, and (in V2) ZK-proofs. Supports LLM inference, HTTP requests, and custom job types.
- Dual-Metered Gas: Independent EIP-1559-style fee markets for compute (Cycles) and data/storage (Cells), with separate basefee adjustment mechanisms.
Accounts and State
Cowboy distinguishes two object types:
- External Accounts (EOAs): Controlled by private keys (secp256k1). They initiate transactions and hold balances of CBY and other assets.
- Actors: Autonomous Python programs executed in the PVM (Python Virtual Machine). Actors own storage, receive messages, and can send messages to other actors.
State : Address → { balance, nonce, code_hash?, storage?, metadata }
where actor storage is a key/value map with a quota (default 1 MiB) and rent. System actors and precompiles occupy a reserved prefix of the address space.
Transactions and Message Passing
A user interacts with Cowboy by sending a transaction (signed with secp256k1) specifying a destination, a payload, and resource limits: a cycles limit and a cells limit alongside maximum and tip prices for each. An actor interacts with other actors by sending messages. Messages carry a small payload, may transfer value, and may trigger further messages. Delivery is exactly‑once, and actors may schedule timers that insert messages at a future block height. To avoid denial‑of‑service through explosive fanout, Cowboy caps the number of messages any transaction (and its triggered cascades) can enqueue.
Native Timers and the Actor Scheduler
To enable true autonomy, Cowboy provides a protocol-native timer and scheduling mechanism, eliminating the need for external keeper networks. Actors can schedule messages to be sent to themselves or other actors at a future block height or on a recurring interval. The scheduler is designed to be scalable, economically rational, and fair.
Scalable Design: The Tiered Calendar Queue
The scheduler uses a multi-layered tiered calendar queue to manage timers efficiently across different time horizons without compromising performance. This architecture consists of three levels:
- Tier 1: Block Ring Buffer: An O(1) queue for imminent timers, organized as a ring buffer where each slot represents a single block. This handles near-term scheduling with maximum efficiency.
- Tier 2: Epoch Queue: A medium-term queue for timers scheduled in future epochs. Timers from this queue are efficiently migrated in batches to the Block Ring Buffer at the start of each new epoch.
- Tier 3: Overflow Sorted Set: A Merkleized binary search tree for very long-term timers that fall outside the Epoch Queue’s range, ensuring the protocol can handle any future-dated schedule.
Economic Rationality: The Gas Bidding Agent (GBA)
A key innovation in Cowboy’s scheduler is the Gas Bidding Agent (GBA). Instead of pre-paying a fixed gas fee, an actor designates a GBA (which is another actor) to dynamically bid for its timer’s execution when it becomes due. When a timer is ready to be executed, the protocol performs a read-only call to the actor’s GBA, providing it with a rich context object containing real-time data such as network congestion (current base fees), the timer’s urgency (how many blocks it has been delayed), and the owner’s balance. The GBA uses this context to return a competitive gas bid. This creates an intra-block auction for a dedicated portion of the block’s compute budget, ensuring that high-priority tasks can be executed even during periods of high network traffic. To keep the developer experience simple, actors that do not specify a GBA receive the network default.
Fairness and Liveness
Timers that are not executed due to low bids or network congestion are automatically deferred to the next block. To prevent “timer starvation” where an actor is perpetually outbid, the protocol tracks an actor’s scheduling history. It uses a weighted priority system with exponential decay to give a small boost to actors whose timers have been repeatedly deferred, ensuring eventual execution and maintaining network fairness.
Timer Rate Limiting and DoS Prevention
The timer system is a potential vector for denial-of-service attacks. An adversary could attempt to schedule millions of timers at a single block height, overwhelming execution capacity, or fill the timer queue with spam to crowd out legitimate users. Cowboy employs multiple layers of defense.
Per-Actor Timer Limits
Each actor is limited to a maximum of 1,024 active timers at any time. This hard cap prevents any single actor from monopolizing the timer queue. Attempts to schedule beyond this limit MUST revert.
Progressive Deposit Model
Creating a timer requires a deposit that scales with the actor’s total active timer count: deposit = base_deposit × (1 + floor(n / 100)), where n is the actor’s active timer count including the timer being created and base_deposit is a governance-tunable parameter (default: 10 CBY). This means:
- Timers 1-99: 10 CBY each
- Timers 100-199: 20 CBY each
- Timers 200-299: 30 CBY each
- …and so on
Exponential Same-Block Surcharge
Scheduling many timers for the same target block incurs a surcharge of 2^(k − 15) × base cost, where k is the number of timers this actor already has scheduled for the target block. The first 16 timers for any given block cost the base rate. Beyond that (a computational sketch follows the tiers below):
- Timer 17: 2× base cost
- Timer 18: 4× base cost
- Timer 19: 8× base cost
- Timer 32: 65,536× base cost
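A minimal sketch of how these two charges could be computed, assuming the progressive-deposit and same-block-surcharge formulas reconstructed above; the function names and parameter plumbing are illustrative, not part of the protocol:

```python
BASE_DEPOSIT_CBY = 10    # governance-tunable base_deposit
BLOCK_FREE_TIMERS = 16   # timers per target block charged at the base rate

def timer_deposit(n: int) -> int:
    """Progressive deposit for an actor's n-th active timer (n counts the new timer)."""
    return BASE_DEPOSIT_CBY * (1 + n // 100)

def same_block_multiplier(k: int) -> int:
    """Exponential surcharge once k timers are already scheduled for the target block."""
    return 2 ** (k - (BLOCK_FREE_TIMERS - 1)) if k >= BLOCK_FREE_TIMERS else 1

# The 150th active timer costs a 20 CBY deposit; scheduling it as the 18th timer
# for one block (17 already queued there) carries a 4x base-cost surcharge.
assert timer_deposit(150) == 20
assert same_block_multiplier(17) == 4
```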
Queue-Depth Timer Basefee
The timer basefee rises with total queue depth, using the same EIP-1559-style adjustment as the cycle and cell basefees:
- Q = current total timer queue depth (across all tiers)
- T = target queue depth (governance-tunable, default: 100,000)
- alpha = 8, delta = 0.125 (same as cycle/cell basefees)
Per-Block Execution Budget
- Timer budget: 20% of block cycle capacity (default: 2,000,000 cycles)
- Timers compete for this budget via the GBA auction
- Remaining 80% is available for user transactions
| Attack Vector | Mitigation |
|---|---|
| Schedule millions of timers | Progressive deposit (capital lockup) |
| Sybil attack across many actors | Per-block execution budget caps total work |
| Timer bomb (many timers, one block) | Exponential same-block surcharge |
| Fill queue far in advance | Timer basefee rises with queue depth |
| Outbid everyone perpetually | Anti-starvation boost for deferred timers |
| DoS then cancel for refund | Deposits are only refunded after fire/cancel; surcharges are not refunded |
Asynchronous Execution and Multi-Block Semantics
A fundamental property of the actor model is that message passing is inherently asynchronous. In Cowboy, this asynchrony becomes especially important when actors interact with off-chain Runners, as job execution may span multiple blocks. This section defines the execution semantics and the programming model developers must follow.
The Single-Block Atomicity Guarantee
Cowboy provides atomicity only within a single block. When an actor’s message handler executes, all state reads, writes, and outbound messages within that handler are atomic—they either all commit or all revert. However, there is no cross-block atomicity. Once a handler completes and the block is finalized, subsequent handlers (triggered by replies, timers, or new messages) execute in the context of potentially different world state.
Why Cross-Block Transactions Are Not Provided
Consider an actor that reads state, calls a Runner, and wants to continue execution when the result arrives:
- Stale State: Values read before the yield may have changed.
- Invalid Control Flow: Branches taken based on pre-yield state may no longer be appropriate.
- Composability Explosion: Nested yields and actor-to-actor calls create a tree of interleavings where each path depends on potentially invalidated assumptions.
- Adversarial Griefing: Attackers can deliberately mutate state between yield points to exploit stale assumptions.
The Message-Passing Continuation Model
Instead of implicit continuations, Cowboy uses explicit message passing for all asynchronous operations. When an actor needs to perform an off-chain job, it sends a message to the Runner system actor and receives the result as a separate message in a later block; a sketch of this pattern follows.
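A minimal illustration, assuming hypothetical cowboy_sdk handler decorators, a send() host call, and a RUNNER system-actor address (none of these names are normative):

```python
from cowboy_sdk import actor, send, storage, RUNNER  # hypothetical SDK surface

@actor.handler("start_analysis")
def start_analysis(msg):
    # Send the job request; the result arrives as a separate message in a later block.
    correlation_id = send(
        to=RUNNER,
        method="llm_job",
        params={"prompt": msg["prompt"], "max_tokens": 256},
    )
    # Persist only what the result handler will need (explicit continuation state).
    storage.set(f"pending/{correlation_id}", {"requested_by": msg["sender"]})

@actor.handler("runner_result")
def handle_analysis_result(msg):
    pending = storage.get(f"pending/{msg['correlation_id']}")
    if pending is None:
        return  # unknown or already-handled correlation ID
    # Re-validate state assumptions: this runs in a new block with new world state.
    storage.delete(f"pending/{msg['correlation_id']}")
    send(to=pending["requested_by"], method="analysis_done",
         params={"result": msg["result"]})
```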
Design Principles
This model embodies several important principles:
1. No Hidden Control Flow: Every state transition is triggered by an explicit message. There are no implicit callbacks or suspended coroutines. Developers can trace execution by following messages.
2. Runner Is Just Another Actor: The Runner system is not special syntax; it is a system actor that receives job requests and sends result messages. The same message-passing pattern applies to actor-to-actor communication, timer callbacks, and Runner results.
3. Explicit Continuation State: Any data the result handler needs must be passed explicitly in the context field. This forces developers to think about what data crosses the yield boundary and prevents accidental closure over stale references.
4. Re-Validation is Mandatory
The programming model makes it clear that when handle_analysis_result executes, it’s a new transaction in a new block. Developers must re-read and validate any state assumptions.
Correlation and Message Ordering
The protocol provides infrastructure for correlating requests and responses:
- Correlation IDs: Each outbound job request includes a unique correlation_id. The Runner system actor includes this ID in the response message, allowing actors to match responses to requests.
- No Ordering Guarantees: If an actor sends multiple Runner requests, responses may arrive in any order. Actors must handle out-of-order delivery.
Timeout and Failure Handling
Asynchronous operations can fail silently—a Runner may crash, a network partition may occur, or a job may simply take too long. Actors MUST implement timeout handling for any operation that depends on an external response. The recommended pattern combines correlation tracking with native timers (see the sketch after this list):
- Store the timer_id returned by set_timer() so it can be cancelled when the result arrives
- Always check for the pending request before processing—it may have been cleaned up by a timeout
- Clean up all associated state (pending request, timer reference) in both success and timeout paths
- Consider implementing retry logic with exponential backoff for transient failures
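A sketch of this timeout pattern under the same hypothetical SDK surface; set_timer, cancel_timer, and the message field names are assumptions:

```python
from cowboy_sdk import actor, send, storage, set_timer, cancel_timer, RUNNER  # hypothetical

TIMEOUT_BLOCKS = 100

@actor.handler("request_quote")
def request_quote(msg):
    correlation_id = send(to=RUNNER, method="http_job", params={"url": msg["url"]})
    timer_id = set_timer(blocks=TIMEOUT_BLOCKS, method="quote_timeout",
                         context={"correlation_id": correlation_id})
    storage.set(f"pending/{correlation_id}", {"timer_id": timer_id})

@actor.handler("runner_result")
def on_result(msg):
    pending = storage.get(f"pending/{msg['correlation_id']}")
    if pending is None:
        return                                # timed out earlier; already cleaned up
    cancel_timer(pending["timer_id"])         # success path: cancel the timeout timer
    storage.delete(f"pending/{msg['correlation_id']}")

@actor.handler("quote_timeout")
def on_timeout(msg):
    cid = msg["context"]["correlation_id"]
    if storage.get(f"pending/{cid}") is not None:
        storage.delete(f"pending/{cid}")      # timeout path: clean up, optionally retry
```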
SDK Ergonomics
While the protocol uses explicit message passing, the SDK provides ergonomic helpers that compile down to this pattern; a usage sketch follows the list below. The @runner.continuation decorator transforms an async function into:
- A request handler that sends the message and stores continuation state
- A result handler that retrieves continuation state and resumes execution
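An illustrative use of the decorator; the exact signature of @runner.continuation and the runner.llm call are assumptions about the SDK, shown only to indicate what the sugar compiles down to:

```python
from cowboy_sdk import actor, runner  # hypothetical SDK surface

@actor.handler("summarize")
@runner.continuation
async def summarize(msg):
    # Compiled into a request handler (sends the job, stores continuation state)
    # and a result handler (restores the context dict, resumes execution here).
    result = await runner.llm(prompt=msg["text"], max_tokens=128,
                              context={"reply_to": msg["sender"]})
    return {"summary": result["output"], "reply_to": msg["sender"]}
```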
Comparison with Other Models
| Model | Atomicity | Developer Burden | Griefing Resistance |
|---|---|---|---|
| Ethereum (sync calls) | Single TX | Low | High |
| Cross-block locks | Multi-block | Low | Low (deadlocks, lock griefing) |
| Optimistic + rollback | Multi-block | Medium | Low (rollback spam) |
| Cowboy (message passing) | Single block | Medium | High |
The Cowboy Actor VM (PVM)
Python as Execution Language
Cowboy uses Python as its execution language. The PVM (Python Virtual Machine) executes Python bytecode in a deterministic sandbox. Python 3.x syntax is supported, with restrictions and modifications to ensure deterministic execution (see Execution Environment and Determinism Guarantees below).
Execution Environment and Determinism Guarantees
Actors are Python programs executed in the PVM inside a deterministic sandbox. For the network to reach consensus, every node must produce the exact same result from the same code. This section specifies the comprehensive set of consensus-critical rules that guarantee determinism.
Runtime Environment
- No JIT Compilation: The PVM operates in pure interpretation mode. Just-In-Time compilation is forbidden, as JIT optimizations are a source of non-determinism across runs and platforms.
- Deterministic Memory Management: Memory is managed via deterministic reference counting. The cyclic garbage collector is disabled. Objects are deallocated immediately when their reference count reaches zero, ensuring predictable memory behavior.
- Fixed Recursion Limit: The recursion limit MUST be set to a consensus-defined constant (256 by default). Stack depth enforcement is integrated with cycle metering.
Numeric Determinism
- Floating-Point Operations: All floating-point operations MUST use a cross-platform, deterministic software-based math library (softfloat), not the host machine’s native FPU. This prevents micro-variations across different CPU architectures (x86 vs ARM, different FPU implementations).
- Integer Arithmetic: Python’s arbitrary-precision integers are deterministic. No overflow behavior varies across platforms.
- Decimal Module: If the decimal module is included in the whitelist, it MUST use a fixed rounding mode (ROUND_HALF_EVEN) and fixed precision, specified at the consensus level.
- Math Functions: Transcendental functions (sin, cos, log, exp, etc.) MUST use deterministic implementations from the softfloat library, not platform-native libm.
Hash Seed and Collection Ordering
Python’s default hash randomization (PYTHONHASHSEED) is a critical source of non-determinism. The PVM enforces:
- Fixed Hash Seed: PYTHONHASHSEED MUST be set to a consensus-defined constant (0). This ensures hash() returns identical values across all nodes.
- Dictionary Ordering: Python 3.7+ guarantees insertion-order iteration for dict. This is deterministic and permitted.
- Set Replacement: The built-in set and frozenset types have non-deterministic iteration order even with a fixed hash seed (due to hash collisions and table resizing). The PVM MUST replace set with ordered_set, an insertion-ordered set implementation provided by the standard library. Code using set syntax transparently receives ordered_set semantics.
- Forbidden Hash Operations: Actors MUST NOT rely on hash() values for anything persisted to storage or sent in messages, as hash values are not guaranteed stable across PVM versions.
String and Text Handling
- Unicode Normalization: All string comparisons MUST use NFC (Canonical Decomposition, followed by Canonical Composition) normalization. The PVM normalizes all input strings to NFC on ingestion.
- Fixed Locale: The locale MUST be fixed to C.UTF-8 (POSIX). Locale-dependent operations (collation, case conversion) use Unicode rules, not system locale.
- Case Folding: Case-insensitive comparisons MUST use Unicode case folding (str.casefold()), which is locale-independent.
- String Interning: Identity comparisons (is) on strings are forbidden in user code. The PVM MAY raise a warning or error. Use equality (==) for string comparison.
- Encoding: All strings are UTF-8. Other encodings MUST be explicitly converted via encode()/decode() with the errors='strict' policy.
Serialization
All data that crosses trust boundaries (storage, messages, Runner job parameters) MUST use a canonical serialization format:
- Format: CBOR (RFC 8949) with Core Deterministic Encoding Requirements (Section 4.2).
- Canonical Rules:
- Map keys MUST be sorted by byte-wise lexicographic order of their encoded form.
- Integers MUST use the shortest encoding.
- No indefinite-length arrays or maps.
- Floats MUST be encoded as 64-bit IEEE 754 (no float16/float32 downcasting).
- No duplicate map keys.
- Forbidden: The pickle module is forbidden. It is non-deterministic, insecure, and version-dependent.
- JSON: If JSON is needed for human-readable output, json.dumps() MUST use sort_keys=True, separators=(',', ':'), and ensure_ascii=False.
- Custom Types: User-defined classes that need serialization MUST implement the __cowboy_serialize__() and __cowboy_deserialize__() protocol methods.
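A short illustration of the canonical JSON settings and the custom-type protocol named above; the Order class is a hypothetical user type, not part of the SDK:

```python
import json

def canonical_json(obj) -> str:
    # Canonical JSON as required above: sorted keys, minimal separators, raw UTF-8.
    return json.dumps(obj, sort_keys=True, separators=(',', ':'), ensure_ascii=False)

class Order:  # hypothetical user-defined type
    def __init__(self, price_cells: int, qty: int):
        self.price_cells, self.qty = price_cells, qty

    def __cowboy_serialize__(self) -> dict:
        return {"price_cells": self.price_cells, "qty": self.qty}

    @classmethod
    def __cowboy_deserialize__(cls, data: dict) -> "Order":
        return cls(data["price_cells"], data["qty"])

assert canonical_json({"b": 1, "a": 2}) == '{"a":2,"b":1}'
```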
Module and Dependency Management
- Whitelisted Imports: Actors can only import modules from a strict, consensus-defined whitelist. Each module is pinned to an exact version.
- No C Extensions: C extension modules (numpy, pandas, etc.) are forbidden. They introduce hardware-dependent behavior, platform-specific optimizations, and are difficult to audit for determinism.
- No Dynamic Imports: importlib, __import__(), and dynamic module loading are forbidden.
- Initial Whitelist (v1):
  - collections, dataclasses, enum, functools, itertools
  - json (with canonical constraints), re, struct
  - math (deterministic implementation), decimal (fixed precision)
  - typing, abc
  - hashlib (for keccak256, sha256)
  - cowboy_sdk (Cowboy standard library)
Exception Handling
- Exception Types: Exception types and their inheritance hierarchy are deterministic.
- Exception Messages: Exception message strings MAY vary across platforms or Python versions. Actors MUST NOT branch on exception message text content.
- Tracebacks: Traceback objects are stripped before any on-chain storage or message passing. They are available only for local debugging.
Forbidden Operations and Patterns
The following operations are forbidden and will raise DeterminismError at parse time or runtime:
| Category | Forbidden |
|---|---|
| System | sys.exit(), os.environ, os.system(), subprocess.* |
| Time | time.time(), datetime.now(), time.sleep() |
| Randomness | random.* (use cowboy_sdk.vrf instead) |
| Networking | socket.*, urllib.*, http.*, requests.* |
| Filesystem | All except /tmp scratch space (256 KiB limit, wiped post-handler) |
| Reflection | eval(), exec(), compile(), globals() modification, setattr() on modules |
| Introspection | sys._getframe(), inspect.currentframe(), gc.* |
| Weak References | weakref.* (non-deterministic collection timing) |
| Threading | threading.*, multiprocessing.*, concurrent.* |
| Identity | is comparisons on strings or numbers (use ==) |
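Since random.* is forbidden, actor-visible randomness comes from the chain VRF instead. A sketch, assuming a hypothetical cowboy_sdk.vrf interface over the per-block beacon described later in this document:

```python
from cowboy_sdk import vrf  # hypothetical interface over the per-block VRF beacon

def pick_winner(participants: list) -> str:
    # Deterministic across all nodes: every validator derives the same beacon bytes.
    seed = vrf.current_beacon()                      # bytes from the block's VRF output
    index = int.from_bytes(seed[:8], "big") % len(participants)
    return participants[index]
```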
Determinism Testing
The reference PVM implementation includes a determinism test harness that:
- Executes actor code on multiple platforms (x86, ARM) and Python builds.
- Compares all outputs, state transitions, and cycle counts.
- Flags any divergence as a consensus-critical bug.
Each handler invocation receives a fixed amount of memory (10 MiB by default) and is metered in cycles and cells. Actor storage is persistent and subject to rent. This mechanism keeps full nodes compact and encourages efficient data lifecycle policies.
A New Security Model
The vast majority of wallet hacks in the Ethereum ecosystem stem from contract bugs that slip past code audits. While borrowing from the lessons of Ethereum, Solidity, and Bitcoin over the past decade, our new security model is simple: the code is easy to read. Python’s existing analysis and auditing tools, combined with Cowboy’s native guards and decorators, give it a natural advantage in preventing on-chain attacks.
Storage and State Persistence
Cowboy’s storage architecture is designed for verifiability, performance, and cross-VM compatibility. It is built on a three-layer model:
- The Ledger: An append-only log of blocks, serving as the sequential, historical source of truth for all transactions.
- The Triedb: The canonical state repository, which uses a Merkle-Patricia Trie (MPT), similar to Ethereum, to generate a verifiable state_root for each block. This layer holds the authoritative state of all accounts, code, and storage.
- Auxiliary Indexes: Rebuildable, read-optimized tables for data like transaction hashes or event topics. These indexes are derived from the Ledger and Triedb and allow for fast queries without being part of the consensus-critical state root.
Cross-VM Compatibility
To support both the native Python VM (PVM) and a future EVM execution environment, the state trie is designed to be VM-neutral.
- State Separation: A vm_ns (VM namespace) flag is embedded directly into storage keys. This allows PVM and EVM storage slots for the same address to coexist without collision, enabling a single actor address to have state in both environments.
- Cross-VM Calls: A standardized C-ABI (Application Binary Interface) wrapper is defined at the protocol level. This allows the storage layer to remain neutral while enabling seamless and predictable calls between the PVM and EVM.
Pricing: Cycles and Cells
Ethereum introduced gas as a single scalar. Cowboy splits pricing into two independent meters:
- Cycles measure compute: Python operations and host calls (e.g., send, set‑timer, blob‑commit) each have a fixed cost. Cycles resemble Erlang reductions: a budget of discrete steps that bounds how long a handler runs.
- Cells measure bytes: calldata, return data, blobs, and storage all consume cells.
On-Chain Metering
To ensure deterministic execution, Cowboy’s on-chain resource consumption is metered with precision:
- Cycles: Computational work is metered by instrumenting the Python VM at the bytecode level. Every instruction has a fixed Cycle cost, defined in a consensus-critical cost table. This approach ensures that all computational paths, including loops and function calls, are accurately measured.
- Cells: Data and storage work is metered at specific I/O boundaries. Cells (where 1 Cell = 1 byte) are consumed for transaction payloads, return data, state storage (storage_set), and temporary scratch space used by an actor during execution.
Off-Chain Fee Model
It is critical to distinguish on-chain gas from off-chain job fees. The protocol does not calculate gas for Runner execution. Instead, it facilitates a free market where Runners set their own prices. A Runner’s operational costs (CPU time, memory, data transfer) determine its market price for a given job. Runners are free to ignore jobs they deem underpriced. This model allows for efficient price discovery for real-world resources and accommodates a wide range of computational tasks, from simple data fetching to intensive AI model inference, without burdening the on-chain consensus with non-deterministic and complex cost calculations.
Off-Chain Compute: The Runner Marketplace
Many applications need access to web data, ML inference, or heavy transforms. Any actor can post a job with a price and latency target. Runners—off‑chain workers who stake CBY—pick up jobs, execute them, and post results. This market is verifiable: the chain accepts results under various trust models chosen by the developer. Runners who lie or miss deadlines risk being challenged and slashed.
Asynchronous Task Framework and Runner Reliability
To ensure that off-chain computation does not impact the stability of the core network, Cowboy implements a fully asynchronous and deferred task framework. The lifecycle of an off-chain job is decoupled from the main transaction flow:
- Task Submission: An actor submits a task by calling a dispatcher contract. The submission defines the task, the number of Runners required, and a result_schema that specifies the expected output format and constraints (e.g., max return size).
- Runner Selection & Health: A committee of Runners is implicitly and deterministically selected for the task using a Verifiable Random Function (VRF). This selection is made from a dynamic active runner list. To remain on this list, Runners must periodically send a heartbeat() transaction, ensuring that tasks are only assigned to nodes that are proven to be online and responsive.
- Execution and Submission: Selected Runners execute the task. If a Runner chooses not to perform the work, it can call a skip_task function, explicitly and verifiably passing responsibility to the next Runner in the deterministic sequence. Results are submitted to a dedicated contract.
- Deferred Callback: Once the required number of results are collected, the system constructs and signs a deferred transaction. This transaction, which contains the call to the original actor’s callback function, is then executed in a future block.
The result_schema provides clarity for Runners, while the health and skipping mechanisms create a robust and self-healing network of off-chain workers.
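A sketch of a dispatcher submission consistent with the lifecycle above; the dispatcher address, method, and field names are illustrative assumptions, not the normative ABI:

```python
from cowboy_sdk import actor, send, DISPATCHER  # hypothetical SDK surface

@actor.handler("fetch_filing")
def fetch_filing(msg):
    send(
        to=DISPATCHER,
        method="submit_task",
        params={
            "job_type": "http",
            "url": msg["url"],
            "runners_required": 3,                 # N-of-M committee size
            "result_schema": {                     # expected output shape and limits
                "type": "object",
                "properties": {"title": {"type": "string"}},
                "max_return_bytes": 16_384,
            },
            "callback": "on_filing_result",        # handler invoked by the deferred tx
        },
    )
```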
Cowboy’s Off-Chain Tiered Trust Model
| Mode | Level of Trust |
|---|---|
| N-of-M Quorum | Runners execute the job independently; the runtime accepts the consensus result from a committee. |
| N-of-M with Dispute | Runners stake a bond; disputers may prove an incorrect result within a fixed window. |
| TEE Attestation | An N-of-M committee or a single runner executes the job inside a Trusted Execution Environment and attests to the result. |
| ZK-Proof (v2) | Runners provide zk-SNARKs with results for cryptographic verification. |
Runner Resource Accounting and Pricing
Off-chain computation cannot be directly metered by the protocol—runners execute on their own hardware outside of consensus. This section specifies how Cowboy handles resource accounting, price discovery, and payment for off-chain jobs.
Resource Bounds
Every job submission MUST include explicit resource bounds specified by the actor (an illustrative bounds object follows this list). Explicit bounds provide:
- Cost ceiling: The actor knows their maximum exposure before submitting
- Runner filtering: Runners can evaluate whether they can fulfill the job within bounds
- Timeout enforcement: Jobs exceeding max_wall_time_seconds are considered failed
- DoS prevention: Unbounded jobs are rejected at submission
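An illustrative bounds object; the field names are assumptions consistent with the parameters referenced in this section (max_wall_time_seconds, max_tokens, max_price), and the units are hypothetical:

```python
job_bounds = {
    "max_tokens": 2_048,            # LLM output ceiling
    "max_wall_time_seconds": 30,    # jobs exceeding this are considered failed
    "max_output_bytes": 65_536,     # return-size ceiling enforced via result_schema
    "max_price": 5_000_000,         # cost ceiling, in the smallest CBY unit (illustrative)
}
```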
Price Discovery: Posted Prices with Priority Tips
Cowboy uses a hybrid pricing model combining posted prices with optional priority tips.
Runner Rate Cards
Runners publish rate cards to the Runner Registry, specifying their prices per resource unit. Actors cap their exposure with a max_price at submission. The actual payment is: payment = min(reported_usage × rate_card_prices, max_price) + tip (a settlement sketch follows the selection list below).
During runner selection, the protocol:
- Filters runners by entitlements (actor’s requirements ⊆ runner’s capabilities)
- Filters runners by supported models
- Filters runners by price (runner’s expected price ≤ actor’s max_price)
- Selects committee via VRF from eligible runners
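A sketch of the settlement arithmetic implied above: payment is capped by the actor’s max_price, and the tip is added on top. The rate-card field names are illustrative assumptions:

```python
def settle_payment(reported_usage: dict, rate_card: dict, max_price: int, tip: int) -> int:
    """payment = min(reported_usage x rates, max_price) + tip"""
    metered = sum(reported_usage[k] * rate_card[k] for k in reported_usage)
    return min(metered, max_price) + tip

# e.g. 1,500 output tokens at 10 units/token plus 20 s of wall time at 50 units/s
usage = {"tokens": 1_500, "wall_seconds": 20}
rates = {"tokens": 10, "wall_seconds": 50}
assert settle_payment(usage, rates, max_price=20_000, tip=500) == 16_500
```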
Trust Model for Resource Reporting
Runners report their actual resource usage when submitting results. The protocol uses a trust-but-verify model with escalating assurance levels.
Default: Reputation-Based Trust
For most jobs, the protocol trusts runner-reported usage, subject to:
- Reputation scores: Runners accumulate reputation based on successful job completions, disputes lost, and uptime. Low-reputation runners may be excluded from job selection.
- Anomaly detection: If reported usage is >2× the expected usage (based on job type and historical data), the result is automatically flagged for review.
- Slashing for fraud: If a runner is proven to have misreported usage (via challenge), they are slashed 30% of their stake.
TEE-Verified Metering
For jobs that warrant stronger assurance, actors can require tee_required: true. In TEE mode:
- The runner executes the job inside a Trusted Execution Environment (SGX, TDX, or SEV)
- The TEE measures actual resource consumption
- The runner submits an attestation report alongside the result
- The protocol verifies the attestation against known-good TEE measurements
- Reported usage in the attestation is authoritative
Attestation verification is handled by a dedicated system actor (0x07 TEE Verifier) that maintains:
- A registry of trusted TEE signing keys (updated via governance)
- Expected measurement hashes for approved runner software
- Revocation lists for compromised keys
Payment and Failure Handling
Payment depends on the outcome of the job and the reason for any failure:
| Outcome | Runner Payment | Actor Refund | Rationale |
|---|---|---|---|
| Success | min(reported_usage × rates, max_price) + tip | max_price - actual_payment | Normal completion |
| Runner fault (timeout, invalid result, crash) | 0 | 100% of escrow | Runner failed to perform |
| Impossible job (bounds too tight) | Pro-rata based on progress | Remainder of escrow | Actor set unrealistic bounds |
| Actor fault (malformed input) | Minimum fee (gas cost recovery) | Remainder of escrow | Actor submitted bad job |
| External fault (API down, model unavailable) | Pro-rata based on progress | Remainder of escrow | Neither party at fault |
- Runner fault: Runner accepted the job but failed to deliver a valid result within bounds. Evidence: timeout exceeded, result fails schema validation, or N-of-M quorum shows divergent results.
- Impossible job: Runner demonstrates that the job cannot be completed within bounds. Evidence: multiple runners report the same failure mode (e.g., “output exceeded max_tokens at 50% completion”).
- Actor fault: Job input is malformed or violates protocol rules. Evidence: schema validation failure on input, or runner returns standardized error code.
- External fault: Failure due to external dependencies. Evidence: runner provides proof of external failure (e.g., HTTP 503 response, API rate limit).
For pro-rata payments, work_completed is measured in tokens generated before failure for LLM jobs; for HTTP jobs, it may be measured in requests completed. The runner must provide evidence of partial completion (e.g., partial output, intermediate state hash).
Dispute Resolution
Any party can challenge a job outcome within the challenge window (15 minutes).
Actor challenges runner (overcharge):
- Actor posts 100 CBY bond
- Actor provides evidence: benchmark data, comparable job costs, statistical analysis
- Arbitration: If reported usage is >3σ above expected for job type, runner is presumed to have overcharged
- Resolution: Runner slashed, actor refunded difference + challenger reward
Runner challenges a fault determination:
- Runner posts 100 CBY bond
- Runner provides evidence: execution logs, TEE attestation, proof of external failure
- Arbitration: Review of evidence against fault criteria
- Resolution: If runner was wrongly faulted, receive payment + bond back; actor loses dispute bond
Third-party challenge:
- Anyone can challenge suspicious patterns (e.g., runner and actor colluding on fake jobs)
- Evidence: on-chain analysis, statistical anomalies
- Resolution: Both parties slashed if collusion proven, challenger rewarded
Anti-Gaming Measures
The resource accounting system includes safeguards against manipulation:
- Rate card cooldown: Runners cannot change rates more than once per epoch (1 hour). Prevents bait-and-switch.
- Minimum job value: Runners can set a minimum job value to avoid spam.
- Reputation decay: Reputation scores decay over time, requiring ongoing good behavior.
- Sybil resistance: New runners start with zero reputation and limited job allocation. Building reputation requires stake lockup time.
- Price bands: Governance can set acceptable price ranges for job types. Runners outside bands are flagged (not excluded, but visible to actors).
LLM Result Verification
LLM outputs present a unique verification challenge: unlike deterministic computation, the same prompt can produce semantically equivalent but byte-different outputs. This section defines how Cowboy achieves consensus on inherently non-deterministic results.
The Challenge of Non-Deterministic Outputs
For deterministic jobs (e.g., HTTP fetch, hash computation), verification is straightforward—all honest runners produce identical outputs. LLM inference breaks this assumption: the same prompt can yield semantically equivalent but byte-different outputs across runners.
Verification Modes
Cowboy provides multiple verification modes suited to different job types. Actors select the appropriate mode based on their correctness requirements and cost tolerance:
| Mode | Runners | Verification | Challenge Scope | Cost | Use Case |
|---|---|---|---|---|---|
none | 1 | None | Non-delivery only | Lowest | Prototyping, low-stakes |
economic_bond | 1 | Objective checks | Objective failures | Low | Subjective generation |
majority_vote | N-of-M | Vote on field value | Objective failures | Medium | Classification |
structured_match | N-of-M | Verifier functions | Objective failures | Medium | Structured extraction |
deterministic | N-of-M | Exact match + TEE | Full reproduction | High | Critical deterministic |
semantic_similarity | N-of-M | Embedding threshold | Objective failures | High | Subjective with similarity |
Verification Mode Details
none Mode
Single runner, no verification. The protocol only guarantees that a result was returned within bounds. No challenge window for output quality.
economic_bond Mode
Single runner posts a bond. Output is subject to objective checks only. The actor accepts subjective risk.
majority_vote Mode
N-of-M runners execute the job. A specified field must achieve majority consensus.
Consensus is computed over the designated vote_field. Other fields (e.g., reasoning) are taken from any agreeing runner.
Use for: classification, sentiment analysis, yes/no decisions, categorical outputs.
structured_match Mode
N-of-M runners execute the job. Results are compared using SDK verifier functions on specified fields.
deterministic Mode
N-of-M runners execute with pinned configuration. Outputs must match exactly. TEE attestation required.
semantic_similarity Mode
N-of-M runners execute the job. Outputs are compared using embedding similarity.
At least threshold runners must form a matching cluster.
Trust assumption: The security of this mode depends on community trust in the specified embedding model. A compromised or poorly-chosen embedding model could map semantically different outputs to similar vectors, undermining verification. Actors should use well-established, deterministic embedding models from the protocol’s approved set.
Use for: summaries, paraphrasing, translation—tasks where semantic equivalence matters more than exact wording.
SDK Verifier Functions
The SDK provides a standard library of verifier functions for structured_match mode. These execute on the runner alongside the main job:
| Function | Description | Parameters |
|---|---|---|
exact_match() | Byte-for-byte equality | — |
json_schema_valid(schema) | Validates against JSON schema | schema: JSON Schema object |
structured_match(fields) | Specified fields must match | fields: list of field names |
majority_vote(field) | Field value with >50% agreement | field: field name |
supermajority_vote(field, threshold) | Field value with >threshold agreement | field, threshold |
numeric_tolerance(field, tolerance) | Numbers within ±tolerance | field, tolerance |
numeric_range(field, min, max) | Number within bounds | field, min, max |
set_equality(field) | Unordered collection equality | field |
contains_all(substrings) | Output contains required strings | substrings: list |
contains_none(substrings) | Output excludes strings | substrings: list |
regex_match(pattern) | Output matches regex | pattern |
length_bounds(min, max) | Output length within bounds | min, max |
semantic_similarity(threshold) | Embedding cosine similarity | threshold |
no_prompt_leak() | Output doesn’t contain system prompt | — |
entropy_check(min_entropy) | Output isn’t repetitive/degenerate | min_entropy |
Custom verifier functions receive:
- The job spec
- All runner outputs
- Runner metadata (addresses, attestations)
They return one of:
- {valid: true, canonical_output: ...} — accept, with optional canonical output
- {valid: false, reason: ...} — reject all outputs
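A sketch of a custom verifier in the shape described above; the exact calling convention is an assumption:

```python
def verify_price_quotes(job_spec: dict, outputs: list, runner_meta: list) -> dict:
    """Accept if all runners agree on the symbol and prices sit within 1% of the median."""
    symbols = {o["symbol"] for o in outputs}
    if len(symbols) != 1:
        return {"valid": False, "reason": "symbol mismatch across runners"}
    prices = sorted(o["price"] for o in outputs)
    median = prices[len(prices) // 2]
    if any(abs(p - median) > 0.01 * median for p in prices):
        return {"valid": False, "reason": "price dispersion exceeds 1%"}
    return {"valid": True, "canonical_output": {"symbol": symbols.pop(), "price": median}}
```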
Objective Failure Criteria
Regardless of verification mode, certain failures are objectively verifiable and result in runner slashing:
| Failure | Detection | Penalty |
|---|---|---|
| Schema violation | Output fails declared JSON schema | Slash 10% |
| Timeout | No result within max_wall_time | Slash 5% |
| Empty/garbage output | Output below min_length or fails entropy check | Slash 10% |
| Wrong model | TEE attestation shows different model hash | Slash 30% |
| Non-delivery | Runner accepted job but never submitted | Slash 20% |
| Prompt injection leak | Output contains system prompt markers | Slash 15% |
Subjective Correctness and the Market
For subjective outputs (summaries, creative content, recommendations), Cowboy explicitly does not attempt to define “correct.” Instead:
- Actors accept risk when choosing economic_bond or none modes
- Reputation reflects quality — actors who receive poor outputs stop using that runner
- Competition drives quality — runners with better outputs earn more jobs
- Transparency enables choice — runner stats (completion rate, dispute rate, repeat usage) are public
Challenge Scope by Mode
| Mode | Challengeable | Evidence Required |
|---|---|---|
none | Non-delivery only | Timeout proof |
economic_bond | Objective failures | Schema/entropy/leak check |
majority_vote | Objective failures | Schema/entropy/leak check |
structured_match | Objective failures | Schema/verifier check |
deterministic | Full reproduction | Matching config + divergent output |
semantic_similarity | Objective failures | Schema/entropy/leak check |
In deterministic mode, challengers can dispute by providing reproduction evidence: the exact pinned config plus proof that re-execution produces different output. The protocol selects neutral runners to verify.
External Data and Oracle Semantics
Cowboy actors frequently need access to external data: price feeds, web APIs, public datasets, and web pages. Unlike on-chain computation, external data is inherently mutable and non-deterministic. This section defines how Cowboy handles verification of external data sources.
Sources of Non-Determinism
External data fetches can produce different results for legitimate reasons:
| Source | Example |
|---|---|
| Content changes | Website updated between runner requests |
| Geo-variation | Different content served to different regions |
| Time-sensitivity | Prices, news change by the second |
| Rate limiting | Some runners throttled, others not |
| CDN caching | Different edge nodes serve different versions |
| A/B testing | Site serves different versions to different users |
| Dynamic rendering | JS-rendered content varies by timing |
Data Source Classification
Different external data sources require different verification strategies:
| Type | Characteristics | Verification Strategy |
|---|---|---|
| Deterministic API | Versioned, stable, structured (blockchain RPC, static files) | Exact match |
| Semi-stable API | Structured with variable metadata (REST APIs with timestamps) | Structured match, ignore metadata |
| Time-series data | Values change over time (price feeds) | Median/majority with freshness bounds |
| Web scraping | Unstructured, highly variable (HTML pages) | Extraction-based matching |
| Authenticated endpoints | Requires credentials | Single runner + TEE + secrets management |
Freshness Requirements
Actors specify data freshness constraints:
- block — Data timestamp must be within max_age_seconds of the block timestamp when results are committed
- submission — Data timestamp must be within max_age_seconds of job submission time
- absolute — Actor specifies an exact timestamp; data must be from that point in time (± tolerance)
Runners enforce freshness as follows:
- Fetch data from the source
- Extract timestamp from specified field (or use fetch time if no field specified)
- Reject and retry if timestamp is outside freshness window
- Include fetch metadata in result attestation
Snapshot Modes
When multiple runners fetch mutable data, the protocol must select a canonical result. Actors specify snapshot semantics:
first_valid — First runner to submit a valid result sets the canonical snapshot. Other runners verify they could obtain similar data (within verification tolerance), but the first result is authoritative. Best for: web content, API responses where any valid snapshot is acceptable.
median — For numeric data, take the median value across all runner results. Outliers (beyond outlier_threshold) are flagged but don’t prevent consensus. Best for: price feeds, numeric measurements.
majority — For categorical or structured data, accept the value returned by a majority of runners. Best for: status checks, boolean conditions, categorical API responses.
latest — Accept the most recent valid result (by timestamp). Useful when fresher data is strictly preferred. Best for: rapidly-changing feeds where recency trumps consensus.
Extraction-Based Verification
For web scraping and unstructured sources, runners compare extracted data rather than raw responses (a sketch follows the steps below):
- Fetch the URL
- Apply extraction rules to raw response
- Validate extracted data against schema
- Submit extracted data (not raw HTML)
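A sketch of an extraction rule applied runner-side so that only the extracted, schema-checked value is compared; the regex, field name, and range check are illustrative:

```python
import re

PRICE_RE = re.compile(r'data-price="([0-9]+\.[0-9]{2})"')

def extract(raw_html: str) -> dict:
    """Pull the structured value out of an unstructured page before submission."""
    match = PRICE_RE.search(raw_html)
    if match is None:
        raise ValueError("extraction failed: price field not found")
    extracted = {"price": float(match.group(1))}
    assert 0 < extracted["price"] < 1_000_000   # simple schema/range validation
    return extracted                             # submitted instead of the raw HTML
```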
Domain Entitlements
HTTP access is governed by the Entitlements system. Actors declare which domains they need access to, and runners advertise which domains they can fetch:
| Domain Set | Contents |
|---|---|
price_feeds | Major exchange APIs, CoinGecko, etc. |
government_us | SEC, Congress, Federal Register |
social_apis | Twitter/X API, Reddit API (authenticated) |
blockchain_rpc | Ethereum, Bitcoin, major L2 RPC endpoints |
Source Attestation
Runners provide cryptographic evidence of data provenance. This evidence enables:
- After-the-fact auditing of data sources
- Dispute resolution when data changes
- Proof that runner connected to authentic server (not MITM)
Secrets Management
Authenticated API access requires credential handling. Cowboy provides a dedicated Secrets Manager system actor (0x08) for secure credential storage:
Architecture:
- Actor encrypts secret to the Secrets Manager’s public key
- Secret is stored on-chain (encrypted) with access policy
- When a job references the secret, protocol verifies runner meets access policy
- Runner’s TEE requests secret from Secrets Manager
- Secrets Manager verifies TEE attestation, releases secret encrypted to enclave
- Secret is decrypted only inside TEE, never exposed to runner operator
Guarantees:
- Secrets are never stored in plaintext on-chain
- Runner operators cannot access secrets (TEE isolation)
- Access policies are enforced by the protocol
- Secret rotation is supported (actors can update values)
- Audit log of secret access is maintained
Limitations:
- Requires TEE-capable runners (limits runner pool)
- Actor must trust TEE implementation
- Secrets Manager is a system actor (governance-controlled)
Verification Modes for HTTP Jobs
HTTP jobs support the same verification modes as LLM jobs, with adaptations for external data:
| Mode | Runners | Snapshot | Verification |
|---|---|---|---|
none | 1 | N/A | Non-delivery only |
economic_bond | 1 | N/A | Schema + freshness |
majority | N-of-M | majority | Extracted fields match |
median | N-of-M | median | Numeric tolerance |
structured_match | N-of-M | first_valid | Verifier functions |
deterministic | N-of-M | Exact | Byte-equality (static sources only) |
Example: Price Feed Oracle
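A sketch of such an oracle, reusing the hypothetical SDK surface and dispatcher fields from earlier sketches; the URL, field names, and handler names are illustrative. The actor refreshes an ETH/USD price on a recurring timer, using the median snapshot mode with a freshness bound:

```python
from cowboy_sdk import actor, send, storage, set_timer, DISPATCHER  # hypothetical

@actor.handler("init")
def init(msg):
    set_timer(blocks=60, method="refresh_price", recurring=True)   # roughly once a minute

@actor.handler("refresh_price")
def refresh_price(msg):
    send(to=DISPATCHER, method="submit_task", params={
        "job_type": "http",
        "url": "https://api.example-exchange.com/v1/price?pair=ETH-USD",  # illustrative URL
        "runners_required": 5,
        "verification": {"mode": "median", "outlier_threshold": 0.02},
        "freshness": {"mode": "block", "max_age_seconds": 60},
        "callback": "on_price",
    })

@actor.handler("on_price")
def on_price(msg):
    storage.set("price/ETH-USD", {"value": msg["result"]["price"], "height": msg["height"]})
```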
Randomness
Each block derives a random beacon from the previous quorum certificate using a threshold BLS VRF. Actors can access this for fair committee sampling, lotteries, and games.
Consensus and Networking
Cowboy uses Simplex consensus, a BFT protocol optimized for simplicity, fast finality, and MEV resistance through mandatory proposer rotation.
Protocol Overview
Simplex is a streamlined BFT protocol that achieves consensus with optimal latency while maintaining a simple design and provable liveness. Unlike protocols with stable leaders (PBFT), Simplex rotates proposers every block—a deliberate choice that reduces MEV extraction opportunities.
Consensus flow:
- Propose: The current proposer (selected by VRF) broadcasts a block proposal
- Vote: Validators vote on the proposal; votes are buffered until quorum
- Certify: Upon reaching 2f+1 votes, a Quorum Certificate (QC) is formed
- Finalize: Block with QC from the next round is final and irreversible
Protocol characteristics:
- Block time: ~1 second target
- Finality: ~2 seconds under normal conditions
- Fault tolerance: Tolerates up to f < n/3 Byzantine validators
Validator Set
The validator set is open and permissionless. Any account that meets the minimum stake threshold may register as a validator.
Requirements:
- Stake ≥ minimum_validator_stake (governance-tunable)
- Self-stake only (no delegation in v1)
- Must run compliant validator software
- Must maintain network connectivity
Validator lifecycle:
- Register: Stake CBY, submit validator public key (BLS12-381)
- Activate: Validator becomes active at next epoch boundary
- Operate: Propose blocks, vote, earn rewards
- Exit: Signal unbonding; stake locked for unbonding period
- Withdraw: After unbonding period, stake is returned
Epochs and Rotation
Epoch structure:
- Epoch duration: 3600 blocks (~1 hour)
- Validator set updates: Only at epoch boundaries
- Proposer selection: Per-block VRF, weighted by stake
At each epoch boundary:
- New validators who registered during the epoch are activated
- Validators who signaled exit are removed from active set
- Slashing penalties are applied
- Epoch randomness seed is derived from previous epoch’s final QC
Staking and Rewards
Staking:
- Minimum stake: Governance-tunable (e.g., 50,000 CBY at genesis)
- No maximum stake per validator
- Self-bonded only; delegation deferred to v2
- Unbonding period: 7 days
- During unbonding, stake is not counted for consensus
- Stake can still be slashed during unbonding (for offenses discovered late)
- After unbonding completes, stake is withdrawable
Slashing
Cowboy uses a conservative slashing model that prioritizes validator participation over punitive penalties. Most offenses result in jailing (temporary removal) rather than stake destruction:
| Offense | Detection | Penalty |
|---|---|---|
| Double signing | Two valid signatures for different blocks at same height | Jail + slash 1% of stake |
| Proposer equivocation | Two different valid proposals for same slot | Jail + slash 1% of stake |
| Extended downtime | Missing >50% of votes over 1000 blocks | Jail (no slash) |
| Invalid block proposal | Block fails consensus validation | Jail (no slash) |
Jailing mechanics:
- Jailed validators are removed from active set immediately
- Must wait jail period (24 hours) before unjailing
- Unjailing requires explicit transaction from validator
- Repeated offenses increase jail duration exponentially
Rationale for the conservative model:
- Encourages validator participation (lower risk)
- Protects against accidental slashing from bugs/misconfig
- Jailing still removes bad actors from consensus
- Severe offenses (double signing) still incur economic penalty
View Changes and Leader Failure
If the current proposer fails to produce a block (crash, network partition), the protocol executes a view change:
- Timeout: Validators waiting for a proposal trigger a timeout after block_time × 2
- New-View: Validators broadcast the highest QC they’ve seen
- Leader election: Next leader is determined by VRF (skipping failed proposer)
- Resume: New leader proposes block extending highest QC
Finality and Reorgs
Finality guarantee: Once a block has a Commit Certificate (CC), it is final and irreversible. No honest validator will vote for a conflicting block.
Pre-finality window: Blocks without a CC may theoretically be reverted (reorg). In practice, with 1-second blocks and a 2-round commit, the pre-finality window is ~2 seconds.
Runner handling of reorgs:
- Runners are stateless with respect to chain state
- Jobs reference block height, not block hash
- If a reorg occurs before finality, affected jobs may need to resubmit
- Actors should design handlers to be idempotent (safe to replay)
- For critical jobs, actors can wait for finality before considering results confirmed
Network Layer
Transport: QUIC over TLS 1.3 (required)
Gossip protocol:
- Transactions: Flood to all peers
- Blocks: Proposer broadcasts; validators relay
- Votes: Direct to proposer (reduces gossip overhead)
Dedicated Lanes
Block space is partitioned into dedicated lanes with reserved capacity, ensuring that autonomous actor operations (timers, runner results) are not crowded out by user transaction spikes:
| Lane | Reserved Capacity | Priority | Contents |
|---|---|---|---|
| System | 5% | Highest | Validator updates, governance, slashing |
| Timer | 20% | High | Scheduled timer executions |
| Runner | 25% | High | Runner job results and attestations |
| User | 50% | Normal | User-initiated transactions |
- Each lane has its own basefee, adjusted independently based on lane utilization
- Unused capacity in higher-priority lanes cascades down to lower-priority lanes
- Transactions are tagged by type at submission; the proposer cannot reassign lanes
- If a lane is full, excess transactions wait for the next block (no spillover to other lanes)
MEV Prevention
Cowboy takes a multi-layered approach to MEV mitigation, avoiding the latency cost of encrypted mempools while still providing strong guarantees.
1. Mandatory Proposer Rotation
Simplex consensus rotates proposers every block via VRF. Unlike protocols with stable leaders (where one validator might propose 10+ consecutive blocks), no single proposer observes transaction flow across multiple blocks. This fundamentally limits:
- Cross-block MEV strategies
- Block-builder collusion patterns
- Proposer-searcher relationships
2. VRF-Based Transaction Ordering
Transaction ordering within a block is derived from the VRF rather than left to proposer discretion. This prevents:
- Strategic transaction placement by proposers
- Insertion of proposer’s own transactions at advantageous positions
- Sandwich attack construction
3. Fast Finality
With ~1-second blocks and ~2-second finality:
- Observation window: Attackers have less than 1s from tx broadcast to block inclusion
- Reorg risk: Zero after finality; attacks requiring reorgs are impossible
- Front-running: Extremely difficult given the tight timing constraints
4. Dedicated Lanes
Reserved lane capacity ensures that:
- Timer-triggered trades execute regardless of user lane congestion
- Runner results post reliably even during activity spikes
- Adversaries cannot selectively delay transactions by lane type
Taken together:
- VRF ordering removes proposer discretion
- Rotation prevents multi-block observation
- Fast finality closes the timing window
- Lane separation prevents congestion attacks
Data Availability, State Rent, and Storage
This section specifies how Cowboy manages on-chain data, state growth, and the economic mechanisms that keep the network sustainable.
Inline Data vs. Blobs
Small outputs (≤ 64 KiB) are stored inline and paid for with cells. Larger artifacts MUST be stored as content-addressed blobs (e.g., IPFS) with the multihash referenced on-chain.
| Data Size | Storage Method | Payment |
|---|---|---|
| ≤ 64 KiB | Inline (on-chain) | Cells (one-time) + rent |
| > 64 KiB | External blob (IPFS, Arweave) | Cells for hash only |
State Rent Model
All persistent actor storage is subject to state rent—an ongoing fee for occupying space in the global state trie. Rent creates economic pressure to use storage efficiently and ensures that inactive or abandoned actors don’t bloat the network indefinitely.
Market-Based Rent Pricing
Rent rates adjust dynamically based on total network state size, similar to EIP-1559 fee adjustment:
- S = current total state size (bytes across all actors)
- T = target state size (governance-tunable, e.g., 100 GB)
- alpha = 8, delta = 0.125 (same as cycle/cell basefees)
Rent Payment Options
Actors can pay rent in two ways:
1. Auto-deduct (default): Each epoch, rent is automatically deducted from the actor’s CBY balance.
2. Prepay: Actors can prepay rent for a number of future epochs in advance.
Minimum Balance Reserve
To prevent actors from accidentally spending all their CBY and entering the grace period, each actor has a minimum balance reserve: reserve = reserve_multiplier × annual rent, where reserve_multiplier is governance-tunable (default: 0.1, i.e., ~5 weeks of rent).
The reserve:
- Cannot be spent on transactions or jobs
- Is automatically used for rent if main balance is insufficient
- Provides a buffer before grace period begins
- Can be withdrawn only when closing the actor
Grace Period and Eviction
When an actor cannot pay rent (balance and reserve exhausted, no prepaid epochs remaining), it enters a grace period.
Timeline:
- Actor remains fully functional
- Can still receive messages, execute handlers, modify storage
- Actor is flagged as “rent overdue” (visible on-chain)
- A catch-up fee accumulates (10% of missed rent)
- Same as grace period
- Actor is flagged as “eviction imminent”
- Events emitted to alert dependent actors
Eviction Mechanics
When eviction occurs:
What is evicted:
- Actor’s storage (all key-value data)
- Active timers associated with the actor
What is not evicted:
- Actor’s code (immutable, stored separately)
- Actor’s address (reserved, cannot be reused)
- Actor’s balance (if any remains)
- Storage root hash (for potential restoration)
Eviction procedure:
- Storage root hash is recorded on-chain
- All storage keys are marked for deletion
- Storage is pruned from active state trie at next epoch
- Actor enters “dormant” state
While dormant, the actor:
- Cannot execute handlers (no storage to read/write)
- Can still receive CBY transfers
- Can be restored if storage data is provided
Storage Restoration
Evicted storage can be restored if the original data is available.
Requirements:
- Original storage data (e.g., from backup, archive node, or third party)
- Data must hash to the recorded storage root
- Payment of all back-rent plus catch-up fees
This enables:
- Actors to backup their own storage and self-restore
- Third parties to restore important public infrastructure
- Recovery from accidental rent lapses
Ledger Growth and Pruning
Block Storage
Blocks are append-only and never pruned from the canonical chain.
State Storage
The state trie holds current account balances, actor code, and actor storage:
- Full nodes keep only the current state trie
- Historical state (old trie versions) can be pruned after finality
- State size is bounded by rent economics—high rent discourages bloat
Archive Nodes
Archive nodes keep full historical state for every block:
- Required for historical queries, indexing, block explorers
- Not required for consensus participation
- Can reconstruct any historical state
- Enable storage restoration for evicted actors
Node Types
| Node Type | Blocks | Current State | Historical State | Storage Est. (Year 1) |
|---|---|---|---|---|
| Light client | Headers only | Merkle proofs | No | < 1 GB |
| Full node | All | Yes | Pruned | ~50-100 GB |
| Archive node | All | Yes | All | ~500 GB+ |
Light Clients
Light clients enable trustless verification without storing full state.
Capabilities:
- Verify block headers form a valid chain
- Verify transaction inclusion via Merkle proofs
- Verify state queries via Merkle proofs against state root
- Submit transactions
Limitations:
- Cannot execute arbitrary queries without a full node
- Rely on full nodes for proof generation
Storage Quotas and Bonds
Each actor has a base storage quota and can extend it with bonds:
| Quota Tier | Storage Limit | Requirement |
|---|---|---|
| Base | 1 MiB | Default for all actors |
| Extended | Up to 8 MiB | Storage bond required |
Storage bonds are:
- Locked while quota is in use
- Returned when quota is reduced
- Forfeited if actor is evicted (incentivizes rent payment)
- Subject to rent on the full allocated quota (not just used storage)
Monetary Policy and Fees
The native asset is CBY. Cowboy launches with a genesis supply of 1 billion CBY and a declining issuance schedule to reward validators.
Inflation Schedule
Emission rate (decreasing over time):
- Year 1-2: 8% annual inflation
- Year 3-4: 5% annual inflation
- Year 5-6: 3% annual inflation
- Year 7-10: 2% annual inflation
- Year 10+: 1% terminal inflation
Fee Distribution
- Basefees (Cycles & Cells): 100% burned.
- Tips: Paid to block proposers.
- Off-Chain Job Payments: Flow to Runners, with a small percentage to the protocol treasury.
Governance and Upgrades
Early governance is conducted by a foundation multisig with a standard timelock, sunsetting into token‑weighted on‑chain governance. Upgrades are shipped as hot‑code upgrades, coordinated by governance.
Applications
Cowboy enables autonomous workloads including AI agents with verifiable LLM calls, DeFi automation without external keepers, games with VRF randomness, and decentralized oracles. For detailed application examples, see the Design Decisions Overview.
Architecture: Sovereign L1
Cowboy is implemented as a sovereign Layer-1 blockchain, not an Ethereum Layer-2 rollup. This architecture enables:
- Native PVM with Python execution (not constrained to EVM)
- Custom consensus (Simplex BFT) with 1s block time
- Protocol-level timers and Runner integration
- Sovereign governance and upgrade path
Ethereum Interoperability
Interoperability is a foundational design goal. The same secp256k1 key can control both a Cowboy account and an EVM address, letting agents hold ETH and ERC-20s, bridge assets, and sign EIP-1559 transactions under tight policy guards enforced by entitlements. A canonical bridge will carry funds and calldata, while Cowboy actors can subscribe to Ethereum events to trigger on-chain workflows.
The State Transition Function
At the heart of Cowboy lies a deterministic state transition function that takes a block and an input state and returns the next state. Let σ be the global state, B a block with transactions T_i, basefees (bf_c, bf_b), and randomness R.
- Header/Proposer: determined by Simplex; R derives from the parent QC.
- Execute Transactions (ordered):
- Validate signature, nonce, and balance.
- Initialize meters with user limits; charge intrinsic cells.
- Dispatch to target. Actor may send messages (fanout ≤ 1,024), schedule timers, and commit blobs; reentrancy depth ≤ 32.
- Enforce memory (10 MiB), mailbox (≤ 1,000,000), and storage quotas.
- Deduct fees:
cycles_used*(bf_c+tip_c) + cells_used*(bf_b+tip_b); burn basefees.
- Deliver Timers: Inject due timers at height(B).
- Resolve Jobs: Process commitments, reveals, challenges, and payouts.
- Adjust Basefees: Update (bf_c, bf_b) via EIP-1559 feedback.
- Mint Rewards: Distribute per-block inflation to validators.
Terminology
- Actor: A Python program with persistent key/value state and a mailbox.
- Message: A datagram delivered to an actor handler.
- Cycle: Unit of metered on‑chain compute.
- Cell: Unit of metered bytes (1 cell = 1 byte).
- Runner: Off‑chain worker that executes a job and returns an attested result.
- Entitlement: A permission governing an actor’s or runner’s capabilities.
- Model: A registry entry describing an off-chain compute model’s digest and metadata.
Normative Conventions
This document uses MUST/SHOULD/MAY as defined in RFC 2119. Parameters marked governance‑tunable can be changed by on‑chain governance (see §11).
1. Accounts, Addresses, and Keys
1.1 Signatures. External accounts MUST use secp256k1 (ECDSA) with chain‑id separation.
1.2 Actor address derivation (CREATE2‑style). New actor addresses MUST be: addr = last_20_bytes(keccak256(creator || salt || code_hash)), where code_hash = keccak256(python_source_bytes).
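As an informative illustration of §1.2 (not the reference implementation), the following sketch derives an actor address using the Keccak-256 primitive from pycryptodome; the creator and salt values are placeholders.

```python
# Illustrative sketch of the §1.2 CREATE2-style derivation.
from Crypto.Hash import keccak

def keccak256(data: bytes) -> bytes:
    return keccak.new(digest_bits=256, data=data).digest()

def derive_actor_address(creator: bytes, salt: bytes, python_source: bytes) -> bytes:
    code_hash = keccak256(python_source)                 # code_hash = keccak256(python_source_bytes)
    preimage = creator + salt + code_hash                # creator || salt || code_hash
    return keccak256(preimage)[-20:]                     # last 20 bytes

addr = derive_actor_address(creator=b"\x11" * 20, salt=b"\x00" * 32,
                            python_source=b"class Counter: ...")
print("0x" + addr.hex())
```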
1.3 System address space.
The range 0x0000…0100 is reserved for system actors and precompiles (see §10).
2. Transaction Types & Encoding
2.1 Typed tx (EIP‑1559 style, dual meters). A transaction MUST include:
chain_id, nonce, to, value, cycles_limit, cells_limit, max_fee_per_cycle, max_fee_per_cell, tip_per_cycle, tip_per_cell, access_list?, payload, signature.
2.2 Validity checks.
Nodes MUST reject a tx if: (a) limits exceed maxima (§13.1), (b) insufficient balance, (c) signature invalid, (d) access list invalid, or (e) payload decoding fails.
2.3 Fee accounting.
Let bc, bb be the block basefees for cycles/cells. Fees are:
fee = cycles_used * (bc + min(tip_per_cycle, max_fee_per_cycle - bc)) + cells_used * (bb + min(tip_per_cell, max_fee_per_cell - bb)).
Unused limits MUST be refunded at the user’s max_fee_* rates.
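A minimal sketch of the §2.3 accounting, using variable names that mirror the typed-transaction fields; the min() terms cap the effective tip so the per-unit price never exceeds the user's max_fee_*.

```python
# Illustrative §2.3 fee computation (not the reference implementation).
def tx_fee(cycles_used: int, cells_used: int,
           bc: int, bb: int,                                # block basefees (cycles, cells)
           max_fee_per_cycle: int, tip_per_cycle: int,
           max_fee_per_cell: int, tip_per_cell: int) -> int:
    eff_tip_c = min(tip_per_cycle, max_fee_per_cycle - bc)  # tip capped by remaining headroom
    eff_tip_b = min(tip_per_cell, max_fee_per_cell - bb)
    return cycles_used * (bc + eff_tip_c) + cells_used * (bb + eff_tip_b)

# Example: 50,000 cycles and 300 cells at basefees 100/20 with generous fee caps.
print(tx_fee(50_000, 300, bc=100, bb=20,
             max_fee_per_cycle=150, tip_per_cycle=5,
             max_fee_per_cell=40, tip_per_cell=2))          # 50_000*105 + 300*22 = 5_256_600
```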
2.4 EBNF (informative).
Tx = Header Body Sig
Header = chain_id nonce to value cycles_limit cells_limit max_fee_per_cycle max_fee_per_cell tip_per_cycle tip_per_cell [access_list]
Body = payload
Sig = secp256k1_signature_recoverable
3. Execution Model (Actors)
3.1 Runtime & Determinism.
- Official SDKs: Python SDK. The runtime MUST enforce determinism:
- Allowed operations: Standard Python operations, file I/O limited to /tmp, cooperative yields via async/await.
- Forbidden: sys.exit(), the random module (except chain VRF), time.time()/datetime.now(), os.environ access, socket/network operations, subprocess calls, path traversal outside /tmp.
- Floating point: Permitted; Cowboy provides a deterministic math library.
- Scratch space: /tmp MUST be per‑invocation, capped at 256 KiB (counts toward cells_used), wiped post‑handler.
- Per‑call memory limit: 10 MiB heap memory.
- Per‑actor persistent storage quota: 1 MiB (governance‑tunable) with state rent (§4.4).
- Quota extensions: An actor MAY post a storage bond up to 8 MiB total; rent applies to the full allocated quota.
- Delivery: Exactly‑once. Each message ID MUST be keccak256(sender||nonce||msg_hash) and recorded in a per‑actor dedup set.
- Mailbox: Capacity 1,000,000 items; enqueue beyond the limit MUST revert.
- Per‑tx fanout: A transaction (including all nested sends) MUST NOT enqueue more than 1,024 messages.
- Reentrancy: Allowed; recursion/await depth cap = 32.
- Timers (chain‑native): The following timer primitives are provided (a usage sketch follows the cost table below):
- timer_id = set_timer(height, handler, data) — Schedule a one-time timer for the specified block height. Returns a unique timer_id.
- timer_id = set_interval(every_n_blocks, handler, data) — Schedule a recurring timer. Returns a unique timer_id.
- cancel_timer(timer_id) — Cancel a pending timer by its ID. Returns the deposit if successful.
- Timer delivery is best‑effort; execution depends on the GBA auction (see §Timer Rate Limiting).
- Validators MUST generate a threshold‑BLS VRF per block: R_n = VRF_sk_epoch(QC_{n-1}). Actors MAY call an API which returns HKDF(R_n, label).
- Exact metering is consensus‑critical; implementers MUST match the reference cost table.
| Primitive | Cycles |
|---|---|
| Python arithmetic ops | 1 |
| Python function call | 10 |
| Dictionary get/set | 3 |
| List append/access | 2 |
| String operations (per char) | 1 |
| host: mailbox send (per msg excl. payload) | 80 |
| host: timer set/cancel | 200 |
| host: blob commit (per KiB) | 40 |
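The usage sketch below illustrates the §3.1 timer primitives from inside an actor; the SDK surface (cowboy_sdk module, Actor base class, @handler decorator) is hypothetical and shown only for orientation.

```python
# Hypothetical actor using the chain-native timer primitives from §3.1.
# The cowboy_sdk names are illustrative, not part of this specification.
from cowboy_sdk import Actor, handler, set_timer, set_interval, cancel_timer

class Heartbeat(Actor):
    @handler
    def start(self, msg):
        # One-shot timer: deliver `on_once` 100 blocks from now.
        self.state["once_id"] = set_timer(height=msg["now"] + 100, handler="on_once", data=b"")
        # Recurring timer: deliver `on_tick` every 60 blocks (~1 min at 1 s blocks).
        self.state["tick_id"] = set_interval(every_n_blocks=60, handler="on_tick", data=b"")

    @handler
    def on_tick(self, msg):
        self.state["beats"] = self.state.get("beats", 0) + 1

    @handler
    def stop(self, msg):
        cancel_timer(self.state["tick_id"])  # returns the deposit if the timer was still pending
```

Because timer delivery is best-effort under the GBA auction, handlers scheduled this way should be written to be idempotent.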
4. Fees, Metering, and Basefee Adjustment
4.1 Meters.
- Cycles: Deterministic step count over Python operations + host calls.
- Cells: Bytes used by calldata, return data, inline blobs (≤ 64 KiB), and /tmp.
4.2 Dual EIP‑1559 basefees. Let U_c, T_c be cycles usage/target and U_b, T_b be cells usage/target. With elasticity E = 2 (hard cap E*T_x per meter), adjustment uses:
basefee_{x,i+1} = max(1, basefee_{x,i} * (1 + clamp((U_x - T_x)/(T_x*alpha), -delta, +delta)))
where x ∈ {cycle, cell}, alpha = 8, delta = 0.125. Nodes MUST burn 100% of basefees; tips go to proposers/validators.
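A sketch of the update rule above, using floats for readability; a consensus implementation would use fixed-point integer arithmetic.

```python
# Illustrative §4.2 basefee update (applied independently to cycles and cells).
ALPHA, DELTA = 8, 0.125

def next_basefee(basefee: float, used: int, target: int) -> float:
    raw = (used - target) / (target * ALPHA)
    clamped = max(-DELTA, min(DELTA, raw))
    return max(1.0, basefee * (1 + clamped))

# A completely full block (2x target, the E = 2 hard cap) raises the basefee by 12.5%.
print(next_basefee(100.0, used=20_000_000, target=10_000_000))  # 112.5
# An empty block lowers it by the clamped maximum of 12.5%.
print(next_basefee(100.0, used=0, target=10_000_000))           # 87.5
```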
4.3 Targets (genesis defaults).
- T_c (cycles target): 10,000,000 cycles (cap 20,000,000).
- T_b (cells target): 500,000 bytes (cap 1,000,000).
4.4 State rent. Persistent storage incurs rent per byte per epoch, which is governance-tunable. If unpaid for 90 epochs, keys MAY be pruned after a 30‑epoch grace period.
5. Off‑Chain Compute
5.1 Model registry. model_id = keccak256(weights||arch||tokenizer||license) MUST uniquely identify a model revision. Publishing is permissionless with a refundable 1,000 CBY deposit. Governance MAY flag/ban models.
5.2 Runner staking.
Runners MUST stake max(10,000 CBY, 1.5 × declared_max_job_value) in the Runner Registry.
5.3 Job lifecycle.
- Post: Actor posts a job with escrowed price.
- Assign: For HTTP domains, a committee of M=5 is sampled; N=3 matching reveals finalize. LLM jobs MAY use committees or single-runner.
- Commit: Runner returns commit = keccak256(output||salt).
- Reveal: Runner reveals {output, salt, proof?}.
- Challenge: A challenge window of 15 min is opened, requiring a 100 CBY bond.
- Resolve: Proven fault ⇒ 30% slash of active stake (70% to challenger, 30% burned).
- Payout: On finalization, 99% of job payment to runner(s), 1% to Treasury.
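A minimal sketch of the commit–reveal check used in the lifecycle above; it assumes the same pycryptodome Keccak-256 helper as earlier examples and is illustrative only.

```python
# Illustrative §5.3 commit-reveal check (not the reference implementation).
from Crypto.Hash import keccak

def keccak256(data: bytes) -> bytes:
    return keccak.new(digest_bits=256, data=data).digest()

def make_commit(output: bytes, salt: bytes) -> bytes:
    return keccak256(output + salt)                      # commit = keccak256(output || salt)

def verify_reveal(commit: bytes, output: bytes, salt: bytes) -> bool:
    return make_commit(output, salt) == commit

salt = b"\x07" * 32
commit = make_commit(b'{"answer": 42}', salt)
assert verify_reveal(commit, b'{"answer": 42}', salt)       # honest reveal matches
assert not verify_reveal(commit, b'{"answer": 41}', salt)   # tampered output is rejected
```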
5.4 Determinism. Job execution MUST be pinned to a toolchain_digest and seed so that committee members can reproduce results. On‑chain return data MUST be ≤ 64 KiB.
5.5 TEE option.
A job MAY set tee_required=true. A valid attestation MUST match accepted policies.
6. Consensus, Randomness, and Networking
6.1 Consensus. Simplex BFT PoS; ~1s target block time; finality on commit (~2s). Proposers rotate every block using the VRF beacon (mandatory rotation for MEV resistance). Votes aggregate via BLS12‑381 with buffered batch verification.
6.2 P2P transport. Implementations MUST support QUIC over TLS 1.3.
6.3 Dedicated Lanes. Block space is partitioned into dedicated lanes with reserved capacity:
| Lane | Reserved Capacity | Priority | Contents |
|---|---|---|---|
| System | 5% | Highest | Validator updates, governance, slashing |
| Timer | 20% | High | Scheduled timer executions |
| Runner | 25% | High | Runner job results and attestations |
| User | 50% | Normal | User transactions |
- Timer and runner lanes prevent user transaction spam from blocking autonomous actor execution
- Unused capacity in higher-priority lanes cascades to lower-priority lanes
- Each lane has independent basefee tracking
6.4 Gossip (mempool). Transactions propagate via mempool gossip. Mandatory proposer rotation, the 1s block time, and fast finality limit common MEV strategies:
- Front-running (limited observation time)
- Sandwich attacks (high risk of failed execution)
- Time-bandit attacks (chain never reorgs past finality)
7. Data Availability & Blobs
7.1 Inline cap. Inline blob cap is 64 KiB per output.
7.2 External blobs. Larger data MUST be content‑addressed (e.g., IPFS). The on‑chain commitment MUST be a multihash.
8. Economics, Inflation, and Fees
8.1 Ticker & supply. CBY. Genesis supply 1,000,000,000 CBY.
8.2 Inflation. A decreasing inflation schedule is used to bootstrap network security.
- Year 1-2: 8% annual inflation
- Year 3-4: 5% annual inflation
- Year 5-6: 3% annual inflation
- Year 7-10: 2% annual inflation
- Year 10+: 1% terminal inflation
8.3 Distribution at genesis.
9. System Actors & Precompiles
- 0x01 Messaging: Enqueue and fanout messages.
- 0x02 Timers: Schedule/cancel timers.
- 0x03 Oracle/Runner: Manage off-chain jobs.
- 0x04 Blob store: Commit/retrieve blob multihashes.
- 0x05 Signer utils: secp/BLS/VRF helpers.
- 0x06 EventListener: Ethereum event subscriptions (see §16).
- 0x07 TEE Verifier: Verify TEE attestations against trusted measurements.
- 0x08 Secrets Manager: Secure credential storage and access control for TEE runners.
10. Developer Experience (DX)
- SDKs: A primary Python SDK (cowboy-py) is provided.
- Local dev: A suite of tools including a single-node devnet (cowboyd), runner simulator, faucet, and explorer will be available.
- Best practices: Reentrancy guards, capability-scoped handles, and idempotent message handling are encouraged via the SDK.
11. Governance & Upgrades
- Model: Foundation 5‑of‑9 multisig sunsets after ~12 months to token‑weighted on‑chain governance.
- Timelocks: Standard actions 7 days; emergency fast‑track 6 hours.
- Upgrades: Hot‑code upgrades coordinated by governance.
12. Security Considerations
12.1 DoS limits (consensus‑enforced; governance‑tunable).
- max_tx_size = 128 KiB
- max_message_depth_per_tx = 32
- per_actor_per_block_cycles = 1,000,000 (burstable)
13. Parameters (Genesis Defaults)
Execution:
memory_per_call = 10 MiB; storage_quota_per_actor = 1 MiB; reentrancy_depth = 32; fanout_per_tx = 1024.
Fees:
T_c = 10,000,000 cycles; T_b = 500,000 bytes; alpha = 8; delta = 0.125.
Consensus:
minimum_validator_stake = governance-tunable; epoch = 3600 blocks (~1 h); block_time = 1 s; finality = ~2 s; unbonding_period = 7 days; jail_period = 24 h; double_sign_slash = 1%; consensus_protocol = Simplex BFT.
Dedicated Lanes:
system_lane_capacity = 5%; timer_lane_capacity = 20%; runner_lane_capacity = 25%; user_lane_capacity = 50%.
Off‑chain:
committee M = 5; threshold N = 3; challenge_window = 15 min; challenge_bond = 100 CBY; runner_stake_floor = 10,000 CBY.
State Rent:
target_state_size = governance-tunable; grace_period = 168 epochs (7 days); warning_period = 72 epochs (3 days); catch_up_fee = 10%; reserve_multiplier = 0.1.
Economics:
supply = 1,000,000,000; inflation follows the schedule in §8.2; basefee burn = 100%; job fee to treasury = 1%.
14. Differences vs. Ethereum
- Execution: Python actors vs. EVM contracts.
- Fees: Dual meters (cycles/cells) vs. single gas scalar.
- Timers: Native timers vs. external keepers.
- Off‑chain compute: Native verifiable market vs. external oracles.
- State: Rent with eviction vs. indefinite storage.
15. Entitlements
A declarative, composable permissions system governs the capabilities of actors and runners. Entitlements control access to resources like networking, storage, and execution parameters, enforcing least-privilege by default. The system is enforced at deployment time, by the scheduler, and at the VM syscall gate.
15.1 Goals
- Least privilege by default.
- Deterministic enforcement.
- Declarative & composable.
- Auditable on-chain.
- Actor Entitlements: Permissions the actor requires.
- Runner Entitlements: Capabilities the runner provides.
- MUST: Actors require entitlements; runners provide them.
- MUST: Scheduler matches only if requires ⊆ provides.
- MUST: Syscalls fail if the corresponding entitlement is missing.
- MUST: Child actors only inherit entitlements marked inheritable: true.
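As an informative illustration of the matching rule (requires ⊆ provides), the sketch below uses hypothetical entitlement names; the actual entitlement namespace is not specified here.

```python
# Illustrative §15 scheduler matching check; entitlement strings are hypothetical.
def can_schedule(actor_requires: set[str], runner_provides: set[str]) -> bool:
    return actor_requires.issubset(runner_provides)   # match only if requires ⊆ provides

requires = {"net.http:api.example.com", "storage.extended", "timer.set"}
provides = {"net.http:api.example.com", "storage.extended", "timer.set", "tee.sgx"}

assert can_schedule(requires, provides)
assert not can_schedule({"net.http:other.example"}, provides)  # missing entitlement -> no match
```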
16. Ethereum Interoperability
Cowboy’s interoperability with Ethereum is a primary design goal, enabling seamless asset transfer and cross-chain communication. This is achieved through a combination of shared cryptographic primitives, a canonical bridge, and event subscription mechanisms.
16.1. Account Unification
- Cowboy external accounts (EOAs) MUST use the same secp256k1 elliptic curve for signatures as Ethereum. This allows a single private key to control accounts on both networks, simplifying key management for users and agents.
- An actor, through a host call, MAY verify an EIP-712 signed data structure against a given Ethereum address, enabling actors to validate off-chain authorizations from Ethereum users.
16.2. Canonical Bridge
A canonical, trust-minimized bridge contract deployed on both Cowboy and Ethereum SHALL facilitate the transfer of assets and arbitrary message data.
Asset Bridging:
- The bridge MUST support the locking of native ETH and ERC-20 tokens on Ethereum to mint a corresponding wrapped representation on Cowboy (wETH, wERC-20).
- Conversely, the bridge MUST support the burning of wrapped assets on Cowboy to unlock the corresponding native assets on Ethereum.
- Bridge operations SHALL be secured by a committee of validators running light clients of the counterparty chain, with security bonds slashable on-chain for malicious behavior.
Message Passing:
- The bridge protocol MUST allow a transaction on one chain to trigger a corresponding message call to a designated recipient actor/contract on the other.
- The payload of a cross-chain message MUST be included in the event logs of the source chain’s bridge contract, which the destination chain’s bridge validators can verify.
16.3. Event Subscription (Ethereum to Cowboy)
- Cowboy actors MAY subscribe to event logs emitted by specific contracts on the Ethereum blockchain.
- A system actor on Cowboy, 0x06 EventListener, SHALL manage these subscriptions. This actor relies on the bridge validator set to act as a decentralized oracle, monitoring the Ethereum chain for specified events.
- When a subscribed event is confirmed (i.e., finalized on Ethereum), the EventListener actor MUST enqueue a message to the subscribing Cowboy actor, delivering the event’s topic and data as the message payload.
- The cost of this subscription service SHALL be paid by the actor in CBY, covering the gas fees incurred by the oracle validators on Ethereum.
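A hypothetical sketch of an actor consuming Ethereum events via the 0x06 EventListener; the subscribe_event host call, handler names, and cowboy_sdk module are illustrative, not part of this specification.

```python
# Hypothetical §16.3 subscriber actor; SDK surface is illustrative only.
from cowboy_sdk import Actor, handler, subscribe_event

WATCHED_CONTRACT = "0x" + "ab" * 20   # placeholder Ethereum contract address

class DepositWatcher(Actor):
    @handler
    def init(self, msg):
        # Ask the EventListener system actor to forward finalized Transfer events.
        self.state["sub_id"] = subscribe_event(
            contract=WATCHED_CONTRACT,
            topics=["Transfer(address,address,uint256)"],
        )

    @handler
    def on_eth_event(self, msg):
        # Payload carries the event's topic and data, per §16.3.
        self.state["last_event"] = {"topic": msg["topic"], "data": msg["data"]}
```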
16.4. Policy and Security
- All interoperability functions available to an actor, such as bridge_asset or subscribe_event, MUST be governed by the Entitlements system (§15).
- An actor’s deployment manifest MUST declare the specific Ethereum contracts it is permitted to interact with and the types of assets it is allowed to bridge, enforcing the principle of least privilege.
17. Fee Model Specification
This section consolidates all fee and cost information into a single authoritative reference.
17.1. Overview
Cowboy uses a dual-metered fee system:
| Meter | Unit | Purpose |
|---|---|---|
| Cycles | Compute units | CPU time, opcode execution, actor API calls |
| Cells | Data units (bytes) | Storage writes, calldata, bandwidth |
- On-chain execution — Cycles consumed by transaction processing
- On-chain storage — Cells consumed by state writes + ongoing state rent
- Off-chain services — Direct CBY payments to Runners (LLM inference) and Providers (blob storage)
17.2. Transaction Intrinsic Costs
Every transaction pays a base cost before execution begins:
| Transaction Type | Base Cycles | Base Cells | Notes |
|---|---|---|---|
| Transfer | 21,000 | 0 | EOA-to-EOA value transfer |
| Deploy | 100,000 | code_size | Actor deployment |
| ActorMessage | 21,000 | calldata_size | Method invocation |
| LlmRequest | 10,000 | prompt_size | Off-chain inference request |
| TimerSchedule | 5,000 | 64 | Schedule future execution |
17.3. Execution Costs (Cycles)
Opcode Costs
Python opcode costs are implementation-defined and not protocol-specified. The runtime MUST ensure deterministic cycle consumption across all validators.
Actor API Costs
| Operation | Base Cost | Variable Cost |
|---|---|---|
| send_message() | 1,000 cycles | — |
| storage_read() | 500 cycles | +1 cycle/byte read |
| storage_write() | 5,000 cycles | +10 cycles/byte written |
| hash() | 100 cycles | +1 cycle/byte hashed |
| verify_signature() | 3,000 cycles | — |
| get_block_info() | 100 cycles | — |
| emit_event() | 500 cycles | +5 cycles/byte |
Platform Token Costs (CIP-20)
| Operation | Cycles | Cells |
|---|---|---|
| token_transfer() | 1,000 | 64 |
| token_transfer_from() | 1,500 | 96 |
| token_approve() | 500 | 32 |
| token_balance_of() | 100 | 0 |
| token_mint() | 1,000 | 64 |
| token_burn() | 500 | 64 |
| token_create() | 10,000 | 256 + name + symbol |
17.4. On-Chain Storage Costs (Cells)
| Operation | Cell Cost |
|---|---|
| State write | 1 cell/byte written |
| State read | 0.1 cells/byte (bandwidth metering) |
| Calldata | 1 cell/byte of transaction data |
| Event emission | 0.5 cells/byte of event data |
17.5. State Rent
Accounts exceeding the grace threshold pay ongoing rent:
- Accounts ≤10 KB: No rent charged
- Accounts >10 KB: Rent charged on excess bytes only
- Unpaid rent accumulates as debt against the account
- Eviction after 2 years of accumulated debt (state archived to blob storage, recoverable upon debt repayment)
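A sketch of how this rent rule composes, assuming a hypothetical per-byte rate (the actual rate is governance-tunable and not fixed here) and treating the 10 KB threshold as 10 KiB.

```python
# Illustrative §17.5 rent accrual; the rate and the exact KB convention are assumptions.
GRACE_BYTES = 10 * 1024   # assumed 10 KiB grace threshold

def epoch_rent(account_bytes: int, rent_rate_per_byte_per_epoch: int) -> int:
    excess = max(0, account_bytes - GRACE_BYTES)   # rent applies to excess bytes only
    return excess * rent_rate_per_byte_per_epoch

# Unpaid rent accumulates as debt; eviction follows roughly two years of accumulated debt.
debt = sum(epoch_rent(64 * 1024, rent_rate_per_byte_per_epoch=1) for _ in range(10))
print(debt)   # 10 epochs of rent on 54 KiB of excess storage -> 552,960
```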
17.6. Off-Chain Blob Storage (CIP-7)
Large data (images, datasets, AI inference traces) uses Retention Contracts:
| Cost Component | How Charged |
|---|---|
| BlobRef storage | ~64 bytes on-chain → Cell cost + state rent |
| Provider payments | Direct CBY to Provider via escrow (market rate) |
Retention Contracts cover:
- Retention policies and SLAs
- Provider staking and availability commitments
- Watchtower auditing and challenge mechanism
- Payment schedules and slashing conditions
17.7. Off-Chain Compute (Runner Marketplace)
LLM inference is not gas-metered. Runners operate in a competitive marketplace:
| Aspect | Specification |
|---|---|
| Pricing | Runners post quotes (CBY per token, per model) |
| Selection | Users specify max_price in LlmRequest; matching via auction or direct selection |
| Settlement | CBY payment upon verified result delivery |
| Collateral | runner_stake >= 10 × average_job_value |
| Verification | Attestation + random re-execution challenges |
17.8. Fee Adjustment (EIP-1559 Style)
Both Cycles and Cells use independent basefee adjustment:
Cycles basefee:
| Parameter | Value |
|---|---|
| Target | 10,000,000 cycles/block |
| Cap | 20,000,000 cycles/block |
| δ (delta) | 0.125 (12.5% max change) |
| α (smoothing) | 8 blocks |
Cells basefee:
| Parameter | Value |
|---|---|
| Target | 500,000 bytes/block |
| Cap | 1,000,000 bytes/block |
| δ (delta) | 0.125 (12.5% max change) |
| α (smoothing) | 8 blocks |
17.9. Reserved Capacity (Execution Lanes)
Block space is partitioned to guarantee execution for critical transaction types:
| Lane | Cycle Budget | Percentage | Purpose |
|---|---|---|---|
| Timer | 2,000,000 | 20% | Scheduled actor execution |
| Runner | 1,000,000 | 10% | LLM result callbacks |
| System | 500,000 | 5% | Governance, upgrades |
| User | 6,500,000 | 65% | Regular transactions |
- Unused capacity in reserved lanes spills to User lane
- User lane cannot borrow from reserved lanes
- Timer lane has highest priority (guaranteed execution)
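A sketch of the spill rule described above; the budgets mirror the table in this section, and the accounting shown is illustrative rather than a consensus rule.

```python
# Illustrative §17.9 lane accounting: reserved-lane leftovers spill to the User
# lane, but the User lane never borrows from reserved lanes.
BUDGETS = {"timer": 2_000_000, "runner": 1_000_000, "system": 500_000, "user": 6_500_000}

def user_lane_capacity(used_cycles: dict[str, int]) -> int:
    spill = sum(max(0, BUDGETS[lane] - used_cycles.get(lane, 0))
                for lane in ("timer", "runner", "system"))
    return BUDGETS["user"] + spill

# A quiet block (few timers, no runner callbacks) frees extra cycles for users.
print(user_lane_capacity({"timer": 400_000, "runner": 0, "system": 200_000}))  # 9_400_000
```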
17.10. Fee Estimation
Wallets and applications SHOULD estimate fees as:
estimated_fee = estimated_cycles * (basefee_cycles + tip_per_cycle) + estimated_cells * (basefee_cells + tip_per_cell)
End of specification.

