Open Source · Apache-2.0 · Self-host

The pin that holds your agent stack together.

Linchpin is an open standard and self-hostable runtime for managed AI agents. Run them on your own infrastructure, against any model — Claude, GPT, Gemini, Llama, your local Ollama. No vendor lock-in.

Fig. 01 · Runtime Schematic (Linchpin / 2026 — Sheet A: Architecture). Client SDK and console speak HTTP/SSE to the orchestrator (policy, vaults, events over SSE and LISTEN/NOTIFY); a sandbox manager spawns per-session Docker containers (ubuntu:22.04) via docker-py; built-in tools run inside the sessions, and MCP (stdio) and HTTP tools are reached through the connector, with postgres:16 behind it all.

What's inside

§ 01 / The runtime
01

Any model, one adapter

OpenRouter routes to ~200 cloud models — Claude, GPT, Gemini, Llama, DeepSeek, Mistral, Qwen. Ollama runs anything you've pulled locally. Switch providers per agent.

02

Sandboxed sessions

Every session gets its own Docker container — Python, Node, git, ripgrep preinstalled. Networking is set per environment: none for no network access, open for unrestricted egress.

03

Eight built-in tools

bash · read · write · edit · glob · grep · web_fetch · web_search. They run inside the container the model can’t escape.

04

MCP & HTTP tools

Plug in Model Context Protocol servers via stdio. Or point at any HTTP endpoint. The connector handles process lifecycle and credential injection.
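A rough sketch of that stdio lifecycle: the connector launches the server as a subprocess and exchanges JSON-RPC messages over its pipes. The child script below is a hypothetical stand-in for a real MCP server, and the framing is simplified to one message per line.

```python
import json
import subprocess
import sys

# Hypothetical stand-in for an MCP server: reads one JSON-RPC request
# from stdin and echoes a response. A real server implements the full
# MCP handshake and tool catalog.
CHILD = (
    "import json,sys\n"
    "req=json.loads(sys.stdin.readline())\n"
    "print(json.dumps({'jsonrpc':'2.0','id':req['id'],'result':{'ok':True}}))\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
req = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
proc.stdin.write(json.dumps(req) + "\n")
proc.stdin.flush()
resp = json.loads(proc.stdout.readline())
proc.stdin.close()
proc.wait()
print(resp["result"])  # → {'ok': True}
```

The connector's job is everything around this sketch: restarting crashed subprocesses and injecting vault credentials into their environment.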

05

Credential vaults

Fernet-encrypted credential store. Reference secrets by name from agent configs; they decrypt at session start and never hit disk in plaintext.

06

Event streaming

Append-only event log per session with cursor pagination. Subscribe over SSE — replays anything past your cursor, then streams live. Perfect for live UIs.
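A minimal consumer-side sketch of that stream, assuming standard Server-Sent Events framing; the event name and payload below are illustrative, not Linchpin's wire format.

```python
def parse_sse(lines):
    """Yield (event, data) pairs from an iterable of SSE text lines.

    Minimal sketch of Server-Sent Events framing: fields accumulate
    until a blank line terminates the event.
    """
    event, data = None, []
    for line in lines:
        line = line.rstrip("\n")
        if not line:  # blank line ends the event
            if data:
                yield (event or "message", "\n".join(data))
            event, data = None, []
        elif line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())

sample = [
    "event: agent.message\n",
    'data: {"text": "hello"}\n',
    "\n",
]
print(list(parse_sse(sample)))  # → [('agent.message', '{"text": "hello"}')]
```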

Quickstart

§ 02 / Installation
01

Clone and configure

Set your API key, an encryption key for the vault, and an OpenRouter key (or skip it if you only use Ollama).

# Clone
$ git clone https://github.com/linchpinhq/linchpin.git
$ cd linchpin

# Configure
$ cat > .env <<EOF
LINCHPIN_API_KEY=$(openssl rand -hex 24)
VAULT_ENCRYPTION_KEY=$(python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())")
OPENROUTER_API_KEY="sk-or-v1-..."
EOF
02

Bring up the stack

One command spins up the API, the connector, the Postgres database, and the web console.

$ docker compose up --build

# Services
# → http://localhost:8000   API
# → http://localhost:3000   Console
# → http://localhost:8001   Connector (internal)
03

Create an agent, start a session

Create the agent over the API, or open the console at http://localhost:3000 and do it in the UI.

$ curl -sX POST http://localhost:8000/v1/agents \
    -H "Authorization: Bearer $LINCHPIN_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
      "name": "coder",
      "model": { "provider": "openrouter", "id": "anthropic/claude-sonnet-4" },
      "system": "You are a careful engineer.",
      "tools": [{"name": "bash", "permission": "always_ask"}]
    }'

How it works

§ 03 / Architecture
The orchestrator loop

One async task per live session. Builds context from the event log, calls the model, emits agent.message / agent.tool_use events. On tool use it evaluates policy: always_allow runs immediately; always_ask blocks on LISTEN/NOTIFY until a confirmation event arrives.
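The gate described above can be sketched like this; a Queue stands in for Postgres LISTEN/NOTIFY, and only the permission names come from the text.

```python
import queue

def gate_tool_call(permission: str, confirmations: queue.Queue) -> bool:
    """Decide whether a tool call may run.

    always_allow runs immediately; always_ask blocks until a
    confirmation arrives (a Queue stands in here for the
    LISTEN/NOTIFY wakeup in the real orchestrator).
    """
    if permission == "always_allow":
        return True
    if permission == "always_ask":
        return confirmations.get()  # blocks until confirmed or denied
    return False  # unknown policy: deny

q = queue.Queue()
q.put(True)  # user approved via the console
print(gate_tool_call("always_allow", q))  # → True
print(gate_tool_call("always_ask", q))    # → True (consumes the approval)
```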

The sandbox

Each session is a Docker container the API spawns via docker-py. Two Docker networks are pre-created on startup — linchpin-none and linchpin-open — and the environment's networking type decides which one the container joins.
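The selection logic amounts to a small mapping. The network names come from the text above; the docker-py call in the trailing comment is illustrative.

```python
# Networks pre-created at startup (names from the text above):
#   linchpin-none: internal-only bridge, no egress
#   linchpin-open: ordinary bridge, unrestricted egress
NETWORKS = {"none": "linchpin-none", "open": "linchpin-open"}

def network_for(env_networking: str) -> str:
    """Map an environment's networking type to the Docker network the
    session container joins. Raises on unknown types rather than
    silently picking an egress policy."""
    try:
        return NETWORKS[env_networking]
    except KeyError:
        raise ValueError(f"unknown networking type: {env_networking!r}")

print(network_for("none"))  # → linchpin-none
# With docker-py, the container would then join it, e.g.:
#   client.containers.run("ubuntu:22.04", network=network_for("none"), detach=True)
```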

The event log

Append-only. Each event has a monotonic sequence and an opaque cursor. SSE replays everything after the client's cursor before going live. Crash recovery replays the log to put non-terminal sessions back into the right state.
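An in-memory sketch of that replay behavior. The real cursor is opaque and the log lives in Postgres; plain integer sequence numbers stand in for both here.

```python
import itertools

class EventLog:
    """Append-only per-session event log sketch: monotonic sequence
    numbers double as cursors (the real cursor is opaque)."""

    def __init__(self):
        self._events = []
        self._seq = itertools.count(1)

    def append(self, kind: str, payload: dict) -> int:
        seq = next(self._seq)
        self._events.append({"seq": seq, "kind": kind, "payload": payload})
        return seq

    def after(self, cursor: int = 0):
        """Replay everything past the cursor: what SSE does before
        switching to live delivery."""
        return [e for e in self._events if e["seq"] > cursor]

log = EventLog()
log.append("agent.message", {"text": "hi"})
cursor = log.append("agent.tool_use", {"tool": "bash"})
log.append("agent.message", {"text": "done"})
print([e["kind"] for e in log.after(cursor)])  # → ['agent.message']
```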

The vault

Fernet-encrypted per-vault credential store. Sessions bind to one or more vault IDs; credentials are decrypted in-process and passed as api_key to the provider or as env vars to MCP subprocesses. Nothing on disk in plaintext.
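A toy version of that flow, using the same cryptography library the quickstart's key generation relies on. The Vault class and names here are illustrative, not Linchpin's API.

```python
from cryptography.fernet import Fernet

class Vault:
    """Sketch of a Fernet-backed credential store: secrets are held
    encrypted and only decrypted in-process when a session needs them."""

    def __init__(self, key: bytes):
        self._fernet = Fernet(key)
        self._store = {}  # name -> ciphertext bytes

    def put(self, name: str, secret: str) -> None:
        self._store[name] = self._fernet.encrypt(secret.encode())

    def resolve(self, name: str) -> str:
        """Decrypt on demand; plaintext never touches disk here."""
        return self._fernet.decrypt(self._store[name]).decode()

vault = Vault(Fernet.generate_key())         # VAULT_ENCRYPTION_KEY in .env
vault.put("openrouter", "sk-or-v1-example")  # illustrative value
print(vault.resolve("openrouter"))           # → sk-or-v1-example
```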

Deeper details: ARCHITECTURE.md.

§ 04 / Ownership

Your agents. Your infrastructure. Your bill.

Most managed-agent platforms route every call through their hosted control plane. Linchpin runs end-to-end on your VM. The API key is yours. The Postgres database is yours. The containers spawn on your Docker daemon. Your prompts go straight to the model provider you chose — no broker.

The codebase is small enough to read in an afternoon. Two Python services, a React console, and a Postgres schema. Apache-2.0.