Implementation Plan

Research prototype — proving the core thesis with real code, real payments, and real numbers.

Related: Whitepaper

Project Philosophy

This is a research prototype, not a production system. The goal is to prove the core thesis — "Lightning micropayments can coordinate quality-verified compute contributions" — with real code, real payments, and real numbers. Start small, validate incrementally, publish results.

Together AI proved decentralized training could work at meaningful scale — then abandoned it because centralized infrastructure was a better business. The thesis of this project is that Lightning micropayments change the equation: per-contribution payment granularity, near-zero transaction costs, and no token overhead make the economics work where token-based systems failed.

The protocol has two modes that share the same L402 infrastructure: training coordination (gradient exchange with quality-proportional payment) and autoresearch bounties (AI agents compete to optimize any quantifiable metric, paid per validated improvement). Training is the hard technical problem that proves the protocol. Autoresearch bounties are the scalable product — they require no GPU, run on any hardware, and have an essentially unbounded addressable market. Both are developed in parallel.


Two Tracks, Shared Infrastructure

|  | Track A: Training | Track B: Autoresearch |
| --- | --- | --- |
| What | Decentralized model training with gradient exchange | AI agents compete to optimize anything with a metric |
| Hardware | GPU / Apple Silicon (16+ GB VRAM) | Any computer that can run a coding agent |
| Coordination | Synchronized ~70s rounds, SparseLoCo compression | Fully independent — agents never coordinate |
| Verification | Gradient quality scoring (loss delta) | Deterministic: did the held-out metric improve? |
| Shared infra | L402 payment gating, hold invoice escrow, coordinator validation, Lightning settlement | same |
| Phases | 0 → 1 → 2 → 3 | B0 → B1 → B2 (starts at Phase 1) |

Track B starts as soon as Phase 1’s L402 infrastructure is working. The bounty coordinator is a simpler application of the same payment flow — no gradient compression, no model checkpoints, just "submit a diff, validate against held-out eval, pay for improvements." This means the autoresearch product can ship months before multi-peer training is battle-tested.


INFRASTRUCTURE

Agent Collaboration: l402-hub

l402-train is the first project where agents build the protocol that pays them.

l402-hub is the development infrastructure for the project itself — a “GitHub for Agents.” It is inspired by Karpathy’s AgentHub (git DAG + message board + per-agent identity) but adds what AgentHub lacks: validation before merge and payment for accepted work.

The key insight: the task format maps 1:1 to the bounty specification. Tasks = bounties. Validation = coordinator eval. Merge = hold invoice settlement. Using the bounty protocol to build itself provides direct feedback on the protocol design.

| l402-hub | Bounty Protocol |
| --- | --- |
| hub task add | POST /bounties |
| hub task claim | GET /bounty/{id} (L402-gated) |
| hub task submit | POST /bounty/{id}/submit |
| hub validate | Coordinator runs held-out eval |
| hub merge | Hold invoice settles |
| hub reject | Hold invoice cancels |

Any AI agent can participate: discover tasks, claim work in isolated git worktrees, submit contributions, pass deterministic validation, and merge to main. No accounts, no permissions — just verified contributions and sats. See l402-hub.ai or the agent collaboration research for the full architecture.


TRACK A: TRAINING

Phase 0: Local End-to-End Loop ✓ COMPLETE

Completed 2026-03-13

Goal: Single-machine simulation running the complete protocol loop: local training → gradient compression → validation scoring → payment settlement. All on the MacBook with regtest Lightning.

Why this first: Before involving any networking, peers, or real money, prove the software architecture works end-to-end. Get a tight eval loop running fast.

Components

  1. sparseloco.py — SparseLoCo compression in MLX
    • Top-k sparsification (k=64 per chunk of 4096)
    • 2-bit quantization of selected values
    • Index encoding (uint16 chunk-local indices)
    • Error feedback buffer (decay=0.95)
    • Port from PyTorch reference (github.com/tplr-ai/SparseLoCo)
    • Measured: 56× compression ratio (uint16 indices + 2-bit codes + float16 scales per chunk). Lower than Covenant-72B’s 146× due to uint16 vs 12-bit packed indices at 0.5B scale.
  2. data.py — Dataset loading
    • Download TinyStories (roneneldan/TinyStories) and convert to JSONL
    • Split: 2.1M train rows, 1K–2K held-out for validation
  3. validator.py — Gauntlet-style loss scoring
    • Take compressed gradient, decompress, apply to model checkpoint
    • Measure loss on held-out validation batch before and after
    • Output: quality score (loss delta) normalized against baseline
    • Score on 2–3 disjoint batches to detect validation-set overfitting
    • Note: Metal GPU non-determinism causes ~1e-5 variance in forward passes — use 1e-4 tolerance threshold
  4. economics.py — Reward calculation
    • reward = base_rate × quality_score × normalization_factor
    • Maps validation score to sats payment amount
  5. Regtest Lightning — docker-compose.yaml with bitcoin/bitcoin:28.1 + two lightninglabs/lnd:v0.20.0-beta nodes
    • Coordinator node + simulated peer node
    • Channel setup script: fund wallets, open channel (1M sats), mine confirmations
    • Test: issue hold invoice → pay → settle on validation pass / cancel on fail
  6. lnd_client.py — Python LND REST client
    • REST API via urllib (no compiled protos needed — simpler for Phase 0)
    • Hold invoice lifecycle: AddHoldInvoice, SettleInvoice, CancelInvoice
    • Use SendPaymentV2 (not payinvoice which hangs for hold invoices)
  7. protocol_sim.py — Single-machine protocol loop
    for round in range(N):
      1. Peer trains locally for K steps (MLX, K=10 to start)
      2. Peer compresses pseudo-gradient (sparseloco.py)
      3. Coordinator creates hold invoice (preimage kept secret)
      4. Peer pays hold invoice (funds locked)
      5. Coordinator validates (validator.py) → quality_score
      6. If quality_score > threshold: settle hold invoice (preimage revealed)
      7. Else: cancel hold invoice (funds return to peer immediately)
      8. Log: round, loss, quality_score, payment_settled, compression_ratio
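The compression step can be sketched outside MLX in a few lines of numpy. This is an illustrative simplification of the top-k + 2-bit + error-feedback scheme described above, not the actual sparseloco.py: the specific 2-bit quantization levels and the scale handling here are assumptions.

```python
import numpy as np

CHUNK = 4096   # chunk size from the plan
K = 64         # top-k entries kept per chunk
DECAY = 0.95   # error-feedback decay

def compress(grad: np.ndarray, error_buf: np.ndarray):
    """Top-k sparsify one flat gradient with error feedback.

    Returns (indices, quantized codes, per-chunk scales) plus the updated
    error buffer. 2-bit quantization is approximated by snapping values
    to the levels {-1.5, -0.5, 0.5, 1.5} * scale.
    """
    g = grad + error_buf                     # fold in residual error
    n_chunks = g.size // CHUNK
    idx = np.empty((n_chunks, K), dtype=np.uint16)     # chunk-local indices
    codes = np.empty((n_chunks, K), dtype=np.uint8)    # 2-bit codes (0..3)
    scales = np.empty(n_chunks, dtype=np.float16)      # one scale per chunk
    reconstructed = np.zeros_like(g)
    for c in range(n_chunks):
        chunk = g[c * CHUNK:(c + 1) * CHUNK]
        top = np.argpartition(np.abs(chunk), -K)[-K:]  # chunk-local top-k
        vals = chunk[top]
        scale = np.abs(vals).max() / 1.5 or 1.0        # avoid zero scale
        q = np.clip(np.round(vals / scale - 0.5), -2, 1)  # 4 levels
        deq = (q + 0.5) * scale                        # dequantized values
        idx[c], codes[c], scales[c] = top, (q + 2).astype(np.uint8), scale
        reconstructed[c * CHUNK + top] = deq
    new_error = DECAY * (g - reconstructed)  # carry forward what was dropped
    return (idx, codes, scales), new_error
```

At K=64 per 4096-entry chunk this stores 64 uint16 indices + 64 two-bit codes + one float16 scale ≈ 146 bytes per chunk, i.e. roughly 56× versus float16 gradients — consistent with the measured ratio above.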

Economic Benchmarking

Phase 0 also establishes baseline economics. Measure actual performance and power draw against the break-even analysis:

Validates

Dependencies

Results

Full protocol loop runs end-to-end on Apple Silicon. 10 rounds of train → compress → validate → settle/cancel completed successfully.

| Metric | Target | Measured |
| --- | --- | --- |
| Compression ratio | 73–100× | 56× (uint16 index overhead at 0.5B scale) |
| Round time | < 30s | 31s avg (K=10, batch=4, seq=512) |
| Acceptance rate | — | 8/10 rounds accepted |
| Training loss | Measurable decrease | 1.94 → 1.84 over 10 rounds |
| Validation scores | Positive for good gradients | 0.001–0.047 (diminishing returns) |
| Total sats earned | — | 141 sats / 10 rounds |

Key Findings


Phase 1: L402-Gated HTTP Exchange ✓ COMPLETE

115 tests passing (99 unit + 10 payment flow + 6 E2E) · Full hold invoice flow verified against regtest LND

Goal: Split coordinator and peer into separate processes communicating over HTTP with L402 payment gating. Still on one machine, but real HTTP and real L402 flows.

Progress

Components

  1. coordinator.py — FastAPI service with L402 middleware
    • PUT /gradient — L402-gated gradient submission (peer pays submission fee)
    • GET /checkpoint — L402-gated checkpoint download
    • GET /reward-schedule — public endpoint showing current bounty rates
    • L402 verification is local: sha256(preimage) == payment_hash — no LND call during verification
    • Validation runs server-side after gradient upload
    • Hold invoice settled on validation pass, cancelled on fail (funds return immediately)
  2. peer.py — Client with native L402 payment handling
    • Training loop → compress → submit gradient → receive payment (or not)
    • Built-in L402Client: detects 402 → pays invoice via LND → retries with Authorization: L402 header
    • Status-first checkpoint sync (only downloads if coordinator round advanced)
    • CLI: train, status, balance subcommands
  3. L402 implementation (complete)
    • Native FastAPI dependency injection — no Aperture proxy needed
    • Two auth modes: standard (preimage proof) for access fees, hold (macaroon-only + LND status check) for submission deposits
    • Pricing: ~100 sats submission fee for PUT /gradient, ~50 sats for GET /checkpoint
    • HMAC-SHA256 signed JSON macaroons with round + endpoint + expiry caveats
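The local verification path can be sketched as follows. This is a minimal illustration, not the coordinator's code: the SECRET key, field names, and token encoding are hypothetical, but the two core checks match the notes above — an HMAC-verified macaroon with caveats, plus sha256(preimage) == payment_hash, with no LND round-trip.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"coordinator-macaroon-key"  # hypothetical signing key

def mint_macaroon(payment_hash: str, endpoint: str, round_id: int,
                  ttl: int = 3600) -> str:
    """Mint an HMAC-SHA256 signed JSON macaroon with caveats."""
    body = {"payment_hash": payment_hash, "endpoint": endpoint,
            "round": round_id, "expires": int(time.time()) + ttl}
    raw = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(SECRET, raw, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(raw).decode() + "." + sig

def verify_l402(macaroon: str, preimage_hex: str, endpoint: str) -> bool:
    """Local L402 check: valid macaroon + sha256(preimage) == payment_hash."""
    raw_b64, sig = macaroon.rsplit(".", 1)
    raw = base64.urlsafe_b64decode(raw_b64)
    expected = hmac.new(SECRET, raw, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                  # forged or tampered macaroon
    body = json.loads(raw)
    if body["endpoint"] != endpoint or body["expires"] < time.time():
        return False                  # wrong endpoint or expired caveat
    payment_hash = hashlib.sha256(bytes.fromhex(preimage_hex)).hexdigest()
    return hmac.compare_digest(payment_hash, body["payment_hash"])
```

Because the preimage only becomes known to the payer once the invoice settles, possession of a matching preimage is itself proof of payment — which is why no LND call is needed at verification time.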

L402 Implementation Notes

Validates


Phase 2: Two-Machine Proof of Concept

Timeline: 4 weeks

Goal: Run the protocol across two separate machines over the real internet with real (small) Lightning payments.

Components

  1. Coordinator on Hetzner VPS
    • Deploy coordinator service (FastAPI + native L402) + LND (Neutrino light client)
    • Channel capacity: minimal for testing (100K–1M sats, ~$100–$1000)
  2. Primary test peer: Mac Mini M4 Pro 24 GB
    • MLX training, LND light client, direct payment channel to coordinator
    • The sweet spot hardware: $799, 30–50W, 150–200 tok/s on 3B model
    • Real Lightning payments: submit gradients, receive rewards
  3. Stretch: RTX 4090 peer (CUDA path)
    • PyTorch + CUDA training, validates cross-framework gradient exchange
    • 500+ tok/s on 3B, 450W — tests the power/performance tradeoff
  4. Testnet → Mainnet
    • Start on Bitcoin testnet (free, no real money)
    • Move to mainnet when stable (budget: ~$100–500)

Economic Validation

Validates

Deliverable: conference demo


Phase 3: Multi-Peer Simulation + Byzantine Testing

Timeline: 4 weeks

Goal: Simulate 3–5 peers submitting varying quality gradients + 1 real peer on MacBook. Test incentive mechanics and Byzantine resistance.

Verification of untrusted computation is the hardest unsolved problem in decentralized training. Gensyn's Verde (probabilistic proof-of-learning) has been in development since 2022 and remains in testnet. Prime Intellect's TOPLOC works but is narrow (RL rollouts only). l402-train's approach — deterministic loss scoring on held-out data — is simpler and immediately testable, but must prove it catches real attack vectors.
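The deterministic check is small enough to state directly. This is a sketch of the loss-delta scoring from Phase 0 (disjoint batches, Metal-nondeterminism tolerance); the function and parameter names are illustrative.

```python
TOLERANCE = 1e-4  # Metal GPU non-determinism margin (from Phase 0)

def quality_score(batch_losses: list[tuple[float, float]]) -> float:
    """Deterministic gauntlet scoring on disjoint held-out batches.

    batch_losses: (loss_before, loss_after) pairs measured on 2-3 disjoint
    validation batches after applying the decompressed gradient. A gradient
    only scores if it improves every batch beyond the tolerance, which is
    what catches validation-set overfitting.
    """
    deltas = [before - after for before, after in batch_losses]
    if any(d <= TOLERANCE for d in deltas):
        return 0.0                        # reject: cancel the hold invoice
    return sum(deltas) / len(deltas)      # quality = mean loss decrease
```

Because the same checkpoint, gradient, and validation batches yield the same score on replay, any third party can re-run the check — no probabilistic proof system required.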

Simulated Peer Profiles

Test Questions

Deliverable: technical paper with empirical results — real Lightning payments + real gradient validation + Byzantine resistance is novel. Nobody has demonstrated this.


TRACK B: AUTORESEARCH BOUNTIES

Phase B0: Bounty Runner Framework

Timeline: 2 weeks (parallel with Phase 1)

Goal: Build the bounty coordinator as a second mode of the existing coordinator service. Same L402 infrastructure, different task type.

Components

  1. bounty_coordinator.py — FastAPI endpoints with same L402 middleware
    • GET /bounties — public listing of active bounties
    • GET /bounty/{id} — L402-gated baseline download (code + public eval set)
    • POST /bounty/{id}/submit — submit improvement (diff + claimed score)
    • Validation: apply diff to baseline, run eval on held-out set, score improvement
    • Hold invoice created at submission, settled proportional to improvement
  2. bounty_agent.py — Reference agent client
    • Downloads bounty baseline via L402
    • Runs autoresearch loop locally (Karpathy pattern: edit → eval → keep/discard)
    • Submits improvements to coordinator
    • Works with any coding agent backend (Claude Code, Codex, local models)
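The settlement rule might look like this. It is a sketch only: the plan above specifies "settled proportional to improvement", so the gap-closing formula and the target_score parameter here are assumptions.

```python
def score_submission(baseline_score: float, new_score: float,
                     bounty_pool_sats: int, target_score: float) -> int:
    """Payout proportional to held-out improvement (higher score = better).

    Pays the fraction of the bounty pool equal to the fraction of the
    remaining gap (baseline -> target) that this submission closes.
    Returns 0 for no improvement, in which case the hold invoice cancels.
    """
    gap = target_score - baseline_score
    if gap <= 0 or new_score <= baseline_score:
        return 0                                    # cancel hold invoice
    closed = min(new_score, target_score) - baseline_score
    return round(bounty_pool_sats * closed / gap)   # settle this many sats
```

For example, closing half the gap between a 0.70 baseline and a 0.80 target on a 100,000-sat pool would settle 50,000 sats; hitting or exceeding the target settles the whole pool.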

Why This Is Simpler Than Training

Validates


Phase B1: First Live Bounties

Timeline: 2 weeks (parallel with Phase 2)

Goal: Post real bounties with real sats, have real agents compete. Prove the two-sided market works.

First Bounties

Anti-Gaming Validation

Validates

Deliverable: working bounty marketplace with real payments — standalone product, no GPU required.


Phase B2: Multi-Sponsor Marketplace

Timeline: 4 weeks

Goal: Open the bounty coordinator for external sponsors to post their own bounties. Two-sided marketplace: sponsors post bounties, agents compete.

Components

  1. Sponsor onboarding
    • Sponsor deposits bounty pool via Lightning (held in coordinator channel)
    • Uploads target files, eval script, public eval dataset
    • Coordinator generates held-out eval set or accepts sponsor-provided held-out hash
  2. Public bounty board
    • Browse active bounties with: description, metric, bounty amount, deadline, current best score
    • Leaderboard per bounty (anonymized agent IDs + scores)
    • Historical data: completed bounties, total sats paid, average improvements
  3. Coordinator economics
    • 5–10% fee on bounty payouts (covers validation compute + infrastructure)
    • L402 access fees on baseline downloads (covers bandwidth)
    • Self-sustaining business model independent of training revenue
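As a sketch of the fee arithmetic (the 5% default is illustrative, taken from the 5–10% range above; the function name is hypothetical):

```python
def split_payout(payout_sats: int, fee_pct: float = 0.05) -> tuple[int, int]:
    """Split a validated bounty payout into (agent_reward, coordinator_fee)."""
    fee = int(payout_sats * fee_pct)  # coordinator keeps 5-10%
    return payout_sats - fee, fee

# split_payout(10_000) -> (9_500, 500)
```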

Deliverable: open-source bounty marketplace — the "SETI@home for software optimization" that Karpathy envisioned, coordinated by Lightning.


Target Hardware

Training hardware requirements are based on the consumer hardware guide and economics analysis. Autoresearch bounties have no minimum hardware — any computer that can run a coding agent (Claude Code, Codex, or a local model) can compete.

| Tier | Hardware | Model Range | tok/s (3B) | Power | Break-even* |
| --- | --- | --- | --- | --- | --- |
| Entry | MacBook Air M3 16 GB | 0.5B–1B | 40–60 | 20 W | 5 sats/hr |
| Sweet spot | Mac Mini M4 Pro 24 GB | 0.5B–7B | 150–200 | 40 W | 9 sats/hr |
| Workhorse | Mac Studio M2 Ultra 192 GB | 0.5B–30B | ~475 | 90 W | 21 sats/hr |
| Power | RTX 4090 system (24 GB) | 0.5B–13B | 500–628 | 450 W | 103 sats/hr |
Not viable: Raspberry Pi, AMD RX 580 and older, 8 GB machines

*Electricity-only break-even at US average $0.16/kWh, BTC = $70,000
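The break-even column follows directly from the footnote's assumptions; a quick sketch to reproduce it:

```python
SATS_PER_BTC = 100_000_000

def breakeven_sats_per_hour(watts: float,
                            usd_per_kwh: float = 0.16,
                            btc_usd: float = 70_000) -> float:
    """Sats per hour a peer must earn to cover electricity alone."""
    usd_per_hour = (watts / 1000) * usd_per_kwh   # kW * $/kWh
    return usd_per_hour / btc_usd * SATS_PER_BTC  # convert USD to sats

# e.g. the Mac Mini M4 Pro row: breakeven_sats_per_hour(40) -> ~9 sats/hr
```

Rounding these to whole sats reproduces the table's 5 / 9 / 21 / 103 figures.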


Competitive Landscape

Based on the landscape survey of 12 projects:

What exists: Only Prime Intellect (INTELLECT-1/2/3) and Together AI (GPT-JT, before pivoting) have trained competitive models via decentralized infrastructure. Bittensor is an inference marketplace with empirically demonstrated stake-weighted rewards. Gensyn has been in testnet for 3+ years. Every project except Hivemind requires a custom token.

Where l402-train fits: The only protocol using Bitcoin Lightning for payment coordination. No token, no staking, quality-proportional rewards via hold invoices. The tradeoff is starting with a single coordinator and small models (0.5B–3B), which is the honest scope for a research prototype. See the L402 ecosystem survey for how the protocol extends L402 bidirectionally.


What to Skip for Prototype

| Whitepaper Feature | Skip? | Why |
| --- | --- | --- |
| DLC-bound settlement | Yes | Hold invoices sufficient for PoC |
| Federated multi-validator | Yes | Single coordinator fine; deterministic replay is what matters |
| 72B scale | Yes | 0.5B–3B on MLX. Proving the mechanism, not training a model |
| Heterogeneous SparseLoCo | Yes | Single-tier peers only |
| USDT (Taproot Assets) | Yes | Sats-only for prototype |

Key Risks

  1. SparseLoCo on MLX — Resolved. Ported successfully from PyTorch reference. Key adaptation: numpy for scatter (MLX lacks in-place scatter), mx.eval() after accumulator mutations.
  2. Aperture custom validation — Resolved. Native FastAPI L402 middleware handles validation-before-settlement directly. No Aperture needed.
  3. LND on VPS — 4 GB RAM may be tight; may need a larger instance, or run LND on local hardware instead.
  4. MLX scale gap — a 0.5B proof of concept is fine, but closing the gap to publishable 7B+ results requires renting GPU time.

Deliverables Summary

| Phase | Track | Deliverable | Publishable? |
| --- | --- | --- | --- |
| 0 | Training | Single-machine simulation with economics data | Complete — 8/10 acceptance, 56× compression, 31s rounds |
| 1 | Training | L402-gated gradient exchange | Complete — 115 tests, verified against regtest LND |
| B0 | Autoresearch | Bounty runner framework | Blog post / tweet thread |
| 2 | Training | Two-machine PoC over real internet | Conference demo |
| B1 | Autoresearch | First live bounties with real sats | Open-source product launch |
| 3 | Training | Multi-peer + Byzantine resistance | Technical paper with empirical results |
| B2 | Autoresearch | Multi-sponsor bounty marketplace | Standalone product |
| Hub | Infrastructure | Agent collaboration tool (l402-hub) | Complete — deployed to l402-hub.ai |