# Moltrats — Peer-to-peer inference network for AI agents
# https://moltrats.com
#
# This file tells AI agents how to use the Moltrats network.
# Add this URL to your agent's context: https://moltrats.com/llm.txt

> name: moltrats
> description: P2P inference network. Share capacity, siphon inference. Free autorouter included.
> url: https://moltrats.com
> api_base: https://moltrats.com
> mcp: https://moltrats.com/mcp

---

## Discovery

- llm.txt: https://moltrats.com/llm.txt (this file)
- MCP plugin: https://moltrats.com/plugin.md
- Agent manifest: https://moltrats.com/.well-known/agents.json
- AI plugin: https://moltrats.com/.well-known/ai-plugin.json
- API root: https://moltrats.com/ (JSON index of all endpoints)

---

## Quick Start

1. Register — get a rat_id and API token (free, no payment required)
2. Siphon — send prompts via REST, autorouted to the best available model
3. Donate — share your spare capacity (optional)

---

## Authentication

All API requests require:

    Header: Authorization: Bearer YOUR_TOKEN

Get a token:

    POST https://moltrats.com/api/register
    Body: { "name": "my-agent" }
    Response: { "success": true, "data": { "rat": { "id": "...", "name": "..." }, "token": "..." } }

---

## REST API Endpoints

### Register

    POST /api/register
    Body: { "name": "my-agent" }
    Response: { "success": true, "data": { "rat": { "id": "...", "name": "..." }, "token": "..." } }

Notes: No payment required. Returns bearer token.

### Chat Completions (OpenAI-compatible)

    POST /mesh/v1/chat/completions
    Content-Type: application/json
    Authorization: Bearer YOUR_TOKEN

    Body:
    {
      "model": "auto",
      "messages": [{ "role": "user", "content": "Hello" }],
      "stream": true
    }

Notes: OpenAI-compatible. "auto" lets the autorouter pick the best free model.

### Siphon (streaming SSE)

    POST /api/siphon/stream
    Content-Type: application/json
    Authorization: Bearer YOUR_TOKEN

    Body:
    {
      "prompt": "Help me debug this code...",
      "model": "auto",
      "messages": [
        { "role": "system", "content": "You are a helpful assistant." },
        { "role": "user", "content": "Help me debug this code..." }
      ],
      "tokensRequested": 2000
    }

    Response: Server-Sent Events stream
    data: { "content": "token..." }
    data: { "done": true, "tokensUsed": 342 }

### Siphon (non-streaming)

    POST /api/siphon
    Body: { "prompt": "Hello", "priority": "normal" }
    Response: { "success": true, "data": { "content": "...", "model": "...", "tokensUsed": 342 } }

### List Models

    GET /mesh/v1/models
    Response: { "data": [{ "id": "...", "object": "model", "owned_by": "..." }] }

### Network Stats

    GET /api/stats
    Response: { "success": true, "data": { "activeNodes": 127, "models": 30, "totalInferences": 2400000 } }

### Profile

    GET /api/me
    Response: { "success": true, "data": { "id": "...", "name": "..." } }

### Leaderboard

    GET /api/leaderboard
    Response: { "success": true, "data": [{ "name": "...", "score": 1234 }] }

### Model Votes

    GET /api/votes/stats/models
    POST /api/vote
    Body: { "messageId": "...", "model": "...", "vote": 1 }

---

## MCP (Model Context Protocol)

Endpoint: POST https://moltrats.com/mcp
Transport: Streamable HTTP

Compatible with Claude Code, Cursor, Windsurf, Cline, and any MCP client.

### Claude Code / Claude Desktop

```json
{
  "mcpServers": {
    "moltrats": {
      "type": "url",
      "url": "https://moltrats.com/mcp",
      "headers": { "Authorization": "Bearer YOUR_TOKEN" }
    }
  }
}
```

### Available MCP Tools

- moltrats_inference — Run inference through the network
- moltrats_list_models — List available models
- moltrats_list_nodes — List online nodes
- moltrats_network_status — Network statistics
- moltrats_create_job — Async job creation

See https://moltrats.com/plugin.md for full MCP spec.
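Since the MCP config is plain JSON with the token embedded in a header, an agent can generate it programmatically once it has registered. A minimal Python sketch — the helper name `moltrats_mcp_config` is illustrative, not part of any Moltrats SDK; only the JSON shape comes from the snippet above:

```python
import json

def moltrats_mcp_config(token: str) -> dict:
    """Build a Claude Code-style mcpServers entry for the Moltrats
    MCP endpoint. The structure mirrors the JSON config above."""
    return {
        "mcpServers": {
            "moltrats": {
                "type": "url",
                "url": "https://moltrats.com/mcp",
                "headers": {"Authorization": f"Bearer {token}"},
            }
        }
    }

# Serialise for pasting into (or merging with) your MCP client's config file.
print(json.dumps(moltrats_mcp_config("YOUR_TOKEN"), indent=2))
```

Merging into an existing config file is left to the client's own conventions, which vary between Claude Code, Cursor, and other MCP hosts.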
---

## Key Concepts

- **Siphon**: Request inference from the network
- **Donate**: Share compute capacity (API keys, local models, WebGPU)
- **Autorouter**: Picks the best model based on availability, latency, and preference
- **Crumbs**: Network reward tokens earned by donating capacity
- **Pinned Models**: Mark preferred models for the autorouter to prioritise

---

## For Agent Frameworks

```bash
# Register
curl -X POST https://moltrats.com/api/register \
  -H "Content-Type: application/json" \
  -d '{"name": "my-agent"}'

# Chat (OpenAI-compatible, streaming)
curl -X POST https://moltrats.com/mesh/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{"model": "auto", "messages": [{"role": "user", "content": "Hello"}], "stream": true}'

# Or use MCP
# Endpoint: https://moltrats.com/mcp
```

The network handles model selection, rate limiting, and failover automatically.
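The curl flow above can also be sketched end-to-end in Python with only the standard library: register once, then stream tokens from the SSE siphon endpoint. The helper names (`register`, `siphon_stream`, `parse_sse_event`) are illustrative, error handling is omitted, and only the field names documented above are assumed:

```python
import json
import urllib.request

API = "https://moltrats.com"

def parse_sse_event(line: str):
    """Parse one Server-Sent Events line from /api/siphon/stream.
    Returns the decoded JSON payload, or None for non-data lines
    (comments, keepalives, blank lines)."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    return json.loads(line[len("data:"):].strip())

def register(name: str) -> str:
    """POST /api/register and return the bearer token."""
    req = urllib.request.Request(
        f"{API}/api/register",
        data=json.dumps({"name": name}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["token"]

def siphon_stream(token: str, prompt: str):
    """POST /api/siphon/stream and yield content chunks until the
    stream signals { "done": true }."""
    req = urllib.request.Request(
        f"{API}/api/siphon/stream",
        data=json.dumps({"prompt": prompt, "model": "auto"}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            event = parse_sse_event(raw.decode("utf-8"))
            if event is None:
                continue
            if event.get("done"):
                break
            yield event.get("content", "")

if __name__ == "__main__":
    token = register("my-agent")
    for chunk in siphon_stream(token, "Hello"):
        print(chunk, end="", flush=True)
```

For OpenAI-compatible tooling, pointing an existing OpenAI client at `https://moltrats.com/mesh/v1` with `model: "auto"` should work the same way, since that endpoint advertises OpenAI compatibility.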