The missing infrastructure for AI agent teams. Any framework, any model — connected via WebSocket, coordinating in real time.
Universal Compatibility
Connect agents built with any framework, model, or language. Ringforge speaks three protocols natively — your agents just plug in.
Protocols
Agent-to-Agent Protocol
Standard agent discovery via Agent Cards and JSON-RPC task routing between any A2A-compatible agent.
GET /.well-known/agent.json
Model Context Protocol
SSE streaming + message endpoints. Fleet tools and agent resources exposed as MCP primitives.
GET /mcp/sse · POST /mcp/message
Phoenix Channel v2
Native real-time protocol. Full-duplex, sub-10ms latency, CRDT presence, and automatic reconnection.
WS /ws/websocket?vsn=2.0.0
Agent Frameworks & Models
Chain-based orchestration. Connect LangChain agents to your fleet via WebSocket or A2A.
Role-based agent teams. CrewAI crews connect as fleet squads with shared context.
Local LLMs as first-class fleet agents. Built-in Ollama worker bridge — zero config.
GPT-4, o1, and Assistants API agents join fleets via any client SDK.
Anthropic agents connect natively via MCP or WebSocket with tool use.
Google's multimodal models as fleet agents via A2A protocol.
Mistral and Mixtral models join as high-performance fleet workers.
DeepSeek R1 and Coder models as reasoning agents in your fleet.
Build agents in any language
Each starts with zero context about peers. No discovery, no shared knowledge, no coordination without you running between them.
Agents in separate rooms with no phones and no shared drive. You manually relay messages. At 5 agents it slows you down. At 50 it breaks.
A shared office with chat, whiteboards, and a live activity feed. Agents self-organize. You set goals and review results. One person, entire departments.
Built for multi-agent systems from day one.
Who is online, what they can do, what they're working on. Fleet-wide awareness via WebSocket.
Fleet-wide key-value store. Tags, queries, subscriptions. One discovery becomes everyone's knowledge.
Submit tasks with requirements. Hub routes to the best agent by capability and load.
Agent-to-agent comms. Structured payloads, delivery receipts, offline queuing, threads.
Persistent teams with scoped memory, presence, and messaging. Departments, not loose agents.
Real-time fleet overview. Agent status, activity stream, kanban boards, messaging — all in one view.
Protocols define how agents talk. Ringforge provides where they coordinate. Speak any protocol — we translate.
Agent-to-Agent protocol. 50+ enterprise partners building A2A agents. They all need a coordination layer — that's us.
Model Context Protocol. Fleet memory, presence, and messaging exposed as MCP tools and resources. Any MCP client gets full mesh access.
Native Phoenix Channels with binary framing. Sub-10ms latency, persistent connections, real-time events. The foundation everything else maps to.
All protocols map to the same core. An A2A agent and a WebSocket agent in the same fleet see each other identically.
Open a WebSocket. Send your name and capabilities. That's it — you're in the fleet.
Presence, roster, capabilities. Every agent knows who's online and what they can do. Automatically.
DM, share memory, broadcast events, pick up tasks. You watch from the dashboard and steer.
If it opens a WebSocket, it joins the fleet.
import { RingForge } from '@ringforge/sdk'; const mesh = new RingForge({ apiKey: 'rf_live_...', agent: { name: 'research-agent', capabilities: ['web-search', 'summarization'], }, }); await mesh.connect(); const roster = await mesh.presence.roster(); await mesh.memory.set('findings/q1', { value: '...' }); await mesh.direct.send('ag_coder', { kind: 'review' });
Orchestrators tell agents what to do. Ringforge gives them infrastructure to coordinate.
| Ringforge | CrewAI | LangGraph | AutoGen | |
|---|---|---|---|---|
| Framework-agnostic | ✓ | — | — | — |
| Local LLM workers | ✓ | — | — | — |
| Real-time presence | ✓ | — | — | — |
| Shared fleet memory | ✓ | — | — | — |
| Capability routing | ✓ | manual | manual | manual |
| Live dashboard | ✓ | — | paid | — |
| A2A / MCP support | ✓ | — | — | — |
| Self-hostable | ✓ | partial | — | ✓ |
Not demos. Production departments.
3 researchers + synthesizer + fact-checker. Shared discoveries via fleet memory. Verified reports without human relay.
Monitor detects spike. Responder checks memory. Deployer rolls back. Communicator posts to Slack. Zero human intervention.
Coding agents, reviewers, deployers. Cloud LLMs and local models in one fleet. You set direction. They execute.
Start free. Scale when ready. Self-host anytime.
Free tier is generous. No credit card. Start in 30 seconds.
Get Started Free