
This guide covers running apps/mcp + the Headless agent stack on your own infrastructure. It supplements the top-level docs/SELF_HOSTING.md, which covers the rest of the orchestration shell.

What you’re standing up

| Service | Role | Required? |
| --- | --- | --- |
| apps/mcp (this repo) | MCP gateway: /read, /write, /treasury + tools/list discovery + tools/call invocation | yes |
| auth.<your-domain> (Ory or BYO) | OAuth Authorization Server — RFC 7591 dynamic client registration, RFC 8707 resource-indicator-bound tokens | yes |
| apps/web (this repo) | Origin app for the agent-skill consent flow + tenant DB host | yes |
| Postgres 16+ | Per-tenant DB. Migrations 0039–0044 add the agent tables | yes |
| Upstash Redis (or self-run) | Token-bucket rate-limit + step-up nonce store. Free tier sufficient for low-traffic prod; the project ships against the Vercel-Upstash integration | yes |
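The token-bucket semantics the Redis layer provides can be sketched in-memory. This is an illustrative sketch only: the capacity and refill numbers are assumptions, not the shipped limits, and in production the bucket state lives in Upstash Redis rather than process memory.

```typescript
// Minimal in-memory token bucket, illustrating the rate-limit semantics
// the gateway enforces via Redis. Capacity/refill values are assumptions.
interface Bucket {
  tokens: number;
  lastRefillMs: number;
}

const buckets = new Map<string, Bucket>();

export function allowRequest(
  key: string,        // e.g. "ratelimit:<tenant>:<endpoint>" (hypothetical key shape)
  capacity = 10,      // max burst (assumed)
  refillPerSec = 1,   // sustained rate (assumed)
  nowMs = Date.now(),
): boolean {
  const b = buckets.get(key) ?? { tokens: capacity, lastRefillMs: nowMs };
  const elapsed = (nowMs - b.lastRefillMs) / 1000;
  b.tokens = Math.min(capacity, b.tokens + elapsed * refillPerSec);
  b.lastRefillMs = nowMs;
  if (b.tokens < 1) {
    buckets.set(key, b);
    return false; // caller should respond 429
  }
  b.tokens -= 1;
  buckets.set(key, b);
  return true;
}
```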

OAuth Authorization Server adapter

OSS supports two adapter shapes per the Cathedral plan §M2.5:

Option A — Ory Hydra

Run a managed Ory Hydra instance with Postgres backend. Configure Privy as the upstream IDP (Ory delegates user auth to Privy and issues OAuth grants on top). Pros: RFC 7591 + RFC 8707 + RFC 9728 all native. Audit trail. Separate failure domain from apps/web. Cons: Operational cost (Ory Enterprise is paid; Ory Cloud free tier is rate-limited).

Option B — External BYO

Bring any RFC 9728-compliant OAuth AS — Auth0, Keycloak, Okta, etc. — and point apps/mcp at its discovery endpoint via AUTH_SERVER_PROVIDER=external + OAUTH_AS_DISCOVERY_URL. Pros: Use the IdP your org already pays for. Cons: Self-host responsibility for RFC 7591 dynamic client registration if your AS doesn’t support it natively.
Glide does NOT ship a custom in-house OAuth AS in OSS. Per the plan §M2.5 Codex review fix #2, shipping a minimal in-house AS for a banking/MCP platform is security-critical surface a solo team should refuse to own.
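Either adapter must issue RFC 8707 resource-indicator-bound tokens. A sketch of the authorization request that audience-binds a token to one MCP endpoint, assuming Hydra's default /oauth2/auth path; the client_id and redirect_uri are hypothetical, and the hosts mirror the examples used later in this guide:

```typescript
// Sketch: an authorization request carrying an RFC 8707 resource indicator,
// so the resulting access token is bound to exactly one MCP endpoint.
export function buildAuthorizeUrl(opts: {
  authServer: string;    // e.g. "https://auth.glide.example.com"
  clientId: string;      // hypothetical registered client
  redirectUri: string;
  resource: string;      // RFC 8707 resource indicator, e.g. the /mcp/read URL
  codeChallenge: string; // PKCE S256 challenge
}): string {
  const u = new URL("/oauth2/auth", opts.authServer); // Hydra's default authorize path
  u.searchParams.set("response_type", "code");
  u.searchParams.set("client_id", opts.clientId);
  u.searchParams.set("redirect_uri", opts.redirectUri);
  u.searchParams.set("resource", opts.resource); // audience binding
  u.searchParams.set("code_challenge", opts.codeChallenge);
  u.searchParams.set("code_challenge_method", "S256");
  return u.toString();
}
```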

apps/mcp transport

The MCP server uses a hand-rolled JSON-RPC envelope (NOT @modelcontextprotocol/sdk — the SDK’s HTTP transport is in flux per spec revision 2025-11-25). The wire format is pinned to MCP spec 2025-11-25. Endpoints:
  • POST /mcp/read — read-only tool calls (accounts, balances, transactions, agents, skills, audit stream)
  • POST /mcp/write — write tool calls (payments, cards, transfers, beneficiaries, x402)
  • POST /mcp/treasury — treasury tool calls (grant issuance, signer rotation, yield allocation, kill-switch)
  • GET /mcp/manifest — public capability discovery (no auth)
  • POST /mcp/{endpoint} with tools/list — public catalog discovery (no auth, per MCP spec)
  • GET /healthz, GET /readyz — ops probes (no auth)
Confused-deputy guard: a read token cannot call write or treasury tools, and vice versa. The check fires BEFORE auth so a sniffed token from one endpoint can’t probe the others. Auth:
  • dev — MCP_TOKEN_VERIFIER_DEV_SECRET HMAC-SHA256 (set this in .env.local).
  • prod — Ory Hydra JWKS via jose. Set AUTH_SERVER_PROVIDER=ory + OAUTH_AS_JWKS_URL=https://auth.<your-domain>/.well-known/jwks.json.
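The dev-mode verifier plus the confused-deputy guard can be sketched as below. The token layout here ("<base64url payload>.<hex mac>") is an assumption for illustration, not the wire format apps/mcp actually uses; the ordering matches the guard described above, with the endpoint-scope check firing before signature verification:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch only: an HMAC-SHA256-signed dev token whose payload carries its
// endpoint scope. Token layout is assumed, not the real apps/mcp format.
type Endpoint = "read" | "write" | "treasury";

export function signDevToken(secret: string, payload: { sub: string; endpoint: Endpoint }): string {
  const body = Buffer.from(JSON.stringify(payload)).toString("base64url");
  const mac = createHmac("sha256", secret).update(body).digest("hex");
  return `${body}.${mac}`;
}

export function verifyDevToken(secret: string, token: string, endpoint: Endpoint): boolean {
  const [body, mac] = token.split(".");
  if (!body || !mac) return false;
  // Confused-deputy guard fires before signature verification: a token
  // scoped to one endpoint is rejected outright on the others.
  let payload: { sub: string; endpoint: Endpoint };
  try {
    payload = JSON.parse(Buffer.from(body, "base64url").toString());
  } catch {
    return false;
  }
  if (payload.endpoint !== endpoint) return false;
  // Only then verify the HMAC, in constant time.
  const expected = createHmac("sha256", secret).update(body).digest("hex");
  const a = Buffer.from(mac);
  const b = Buffer.from(expected);
  return a.length === b.length && timingSafeEqual(a, b);
}
```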

Money-safety contracts (preserve in self-host)

The Headless platform encodes six “IRON RULE” money-safety contracts that EVERY money-touching tool path observes. Per the M2.5 plan, these are the named contracts that gate all agent activity:
  • F1 — Server-side RPC verify. x402.pay persists on_chain_tx + amount from serverFetchChainTx (RPC), NEVER from facilitator receipt.
  • F2 — CAS-claim before broadcast. agent_pending_payments rows are claimed via UPDATE ... WHERE status='pending' AND claimed_at IS NULL RETURNING id.
  • F3 — Fresh-read tenant verification. @repo/grant-wrapper re-reads tenant from DB on every tool invocation; cached grant alone never authorizes.
  • F4 — Append-only trigger on activity_log. UPDATE/DELETE/TRUNCATE rejected unless app.dsar_context_id session var is set (admin DSAR path only).
  • F5 — Atomic policy_version bump on signer rotation. vault.rotateSigner advances policy_version in the same transaction as the on-chain rotation.
  • F7 — Sigil first-use-only. URL-mode elicitation sigils are CAS-claimed on first use; race losers reject.
(F6 reserved; not assigned in PR153.) If you fork apps/mcp and remove any of these, you accept full responsibility for the money-safety posture of the resulting deployment. They are not optional.
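The F2 claim semantics can be illustrated with an in-memory model. In production F2 is a single SQL statement (the WHERE clause is quoted above; the SET clause shown in the comment is an assumption), so exactly one racing worker gets a row back and may broadcast:

```typescript
// Sketch of F2: compare-and-set claim on an agent_pending_payments row.
// The real path is one SQL statement, roughly:
//   UPDATE agent_pending_payments
//      SET status = 'claimed', claimed_at = now()   -- SET clause assumed
//    WHERE id = $1 AND status = 'pending' AND claimed_at IS NULL
//    RETURNING id;
// modelled here in memory for illustration only.
interface PendingPayment {
  id: string;
  status: "pending" | "claimed";
  claimedAt: Date | null;
}

export function casClaim(row: PendingPayment): boolean {
  // Only the transition pending + unclaimed -> claimed succeeds.
  if (row.status !== "pending" || row.claimedAt !== null) return false;
  row.status = "claimed";
  row.claimedAt = new Date();
  return true; // this caller may broadcast; every race loser must back off
}
```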

Agent-skill install saga

When a user installs an agent skill, the saga runs through these states:
STARTED → ENTITY_PICKED → POLICY_CONFIGURED → PARTNER_OAUTH_BOUND
       → PRIVY_POLICY_INSTALLED → SUB_VAULT_CREATED → GRANT_ISSUED → COMPLETE
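The linear state order above can be encoded as a transition check, sketched here. The state names come from this guide; the function shapes are illustrative, not the shipped saga implementation:

```typescript
// The install saga's legal transitions: single forward steps only.
const SAGA_ORDER = [
  "STARTED",
  "ENTITY_PICKED",
  "POLICY_CONFIGURED",
  "PARTNER_OAUTH_BOUND",
  "PRIVY_POLICY_INSTALLED",
  "SUB_VAULT_CREATED",
  "GRANT_ISSUED",
  "COMPLETE",
] as const;

type SagaState = (typeof SAGA_ORDER)[number];

export function canAdvance(from: SagaState, to: SagaState): boolean {
  // Anything other than the next state is a bug or a partial install.
  return SAGA_ORDER.indexOf(to) === SAGA_ORDER.indexOf(from) + 1;
}

export function isStale(startedAt: Date, state: SagaState, now = new Date()): boolean {
  // Reaper rule from this guide: roll back partial installs older than 30 min.
  const THIRTY_MIN = 30 * 60 * 1000;
  return state !== "COMPLETE" && now.getTime() - startedAt.getTime() > THIRTY_MIN;
}
```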
A reaper job runs every 10 minutes (an Inngest cron in apps/web/src/inngest/functions/) and rolls back partial installs older than 30 minutes. The reaper is not optional: without it, a worker crash mid-saga leaves the user with a partial install and a Privy policy on the wrong vault. Self-hosters who skip the reaper will hit exactly this failure mode in production.

Branch A’ policy enforcement (per Privy spike)

Per docs/designs/privy-policy-spike.md (the Headless v1 Privy spike result):
  • EVM: per_tx_max, counterparty_allowlist, time_window, daily_cap, velocity_caps all enforce on Privy programmable signing policy NATIVELY.
  • Solana: per_tx_max, counterparty_allowlist, time_window enforce natively. Stateful aggregation (daily_cap, velocity_caps) lives in the router Redis layer.
Self-hosted policy engine MUST support BOTH paths. @repo/policy-engine already does — see the evaluate() contract tests for the split.
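The split can be sketched as a partition function over policy keys. The key names mirror the spike doc; the function shape is an illustrative sketch, not @repo/policy-engine's actual evaluate() API:

```typescript
// Sketch of the Branch A' split: which caps go into the Privy programmable
// signing policy natively, and which must be aggregated in the router's
// Redis layer. Key names follow docs/designs/privy-policy-spike.md.
type Chain = "evm" | "solana";
type PolicyKey =
  | "per_tx_max"
  | "counterparty_allowlist"
  | "time_window"
  | "daily_cap"
  | "velocity_caps";

const NATIVE: Record<Chain, ReadonlySet<PolicyKey>> = {
  // EVM: everything enforces natively on the Privy policy.
  evm: new Set<PolicyKey>(["per_tx_max", "counterparty_allowlist", "time_window", "daily_cap", "velocity_caps"]),
  // Solana: stateful aggregation falls back to Redis.
  solana: new Set<PolicyKey>(["per_tx_max", "counterparty_allowlist", "time_window"]),
};

export function splitPolicy(chain: Chain, keys: PolicyKey[]): { privy: PolicyKey[]; redis: PolicyKey[] } {
  const privy: PolicyKey[] = [];
  const redis: PolicyKey[] = [];
  for (const k of keys) (NATIVE[chain].has(k) ? privy : redis).push(k);
  return { privy, redis };
}
```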

Deployment patterns

Pattern 1 — Single Vercel project (small operators)

apps/web and apps/mcp share a Vercel project. The MCP routes mount under /api/mcp/*. Simpler, but you cannot scale the two services independently.

Pattern 2 — Separate Vercel projects

apps/web ships at glide.example.com. apps/mcp ships at mcp.glide.example.com as its own Vercel project. Auto-deploy from the same monorepo, but with the mcp project's Root Directory set to apps/mcp.

Pattern 3 — Fly.io for apps/mcp

For operators who want region pinning. fly launch from apps/mcp/, set the env above, deploy. Postgres + Upstash stay wherever you have them.

Testing your deployment

Once apps/mcp is live, sanity-check with curl:
# 1. Public manifest (no auth)
curl https://mcp.glide.example.com/mcp/manifest

# 2. Tool catalog discovery (no auth, MCP spec)
curl -X POST https://mcp.glide.example.com/mcp/read \
  -H 'content-type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

# 3. Health probes
curl https://mcp.glide.example.com/healthz
curl https://mcp.glide.example.com/readyz
For end-to-end (with a real Privy-issued JWT):
  1. Register an OAuth client at auth.glide.example.com/oauth2/register (RFC 7591).
  2. Walk the authorization_code + PKCE flow per oauth-flow.md.
  3. Call a tool with the bearer grant; expect a JSON-RPC response.
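The tool call in step 3 is a plain JSON-RPC 2.0 POST. A sketch of the request envelope and headers; the tool name "accounts.list" and its arguments are hypothetical placeholders, while the jsonrpc/id/method/params envelope is what the pinned MCP wire format expects:

```typescript
// Sketch: building a tools/call request for POST /mcp/read.
let nextId = 1;

export function toolsCallRequest(tool: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id: nextId++,
    method: "tools/call" as const,
    params: { name: tool, arguments: args },
  };
}

export function bearerHeaders(token: string) {
  return {
    "content-type": "application/json",
    authorization: `Bearer ${token}`,
  };
}

// Usage (not executed here; host and tool name are placeholders):
// await fetch("https://mcp.glide.example.com/mcp/read", {
//   method: "POST",
//   headers: bearerHeaders(accessToken),
//   body: JSON.stringify(toolsCallRequest("accounts.list", {})),
// });
```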

What’s NOT in OSS today

  • The Trust Console UI (M4) — read-only DB-backed admin view of agent activity, anomaly detection, explainer LLM. Lands after Glide Cloud has soaked Trust Console v1 for 12 weeks per the plan.
  • The glide.co/skills public marketplace UI is OSS at apps/web/src/app/(public)/skills/ (PR153 Phase 4) — but the partner-PR flow + signed Trusted Skill Agreement land with M5.
  • The glide partner submit CLI lands with M5.5.

Where to file issues

  • Agent-platform bugs: GitHub issues with the agent-platform label.
  • OSS deploy questions: docs/SELF_HOSTING.md covers general self-host; this file is the agent-specific addendum.
  • Security vulnerabilities: security@axtior.com per SECURITY.md.