

Introducing AG-UI: The Protocol Where Agents Meet Users

By Nathan Tarbert
May 12, 2025

We're thrilled to announce AG-UI, the Agent-User Interaction Protocol, a streamlined bridge connecting AI agents to real-world applications.

What is AG-UI?

AG-UI is an open, lightweight protocol that streams a single JSON event sequence over standard HTTP or an optional binary channel. These events—messages, tool calls, state patches, lifecycle signals—flow seamlessly between your agent backend and front-end interface, maintaining perfect real-time synchronization.
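Concretely, a run surfaces as a handful of typed events that the UI folds into state as they arrive. The sketch below is illustrative Python, not the normative wire format; field names are close to the spec but you should treat them as assumptions and check docs.ag-ui.com for the exact schema.

```python
import json

# Hypothetical event sequence for one short run. The types shown
# (RUN_STARTED, TEXT_MESSAGE_CONTENT, RUN_FINISHED) mirror the protocol's
# naming, but the exact payload fields here are illustrative.
events = [
    {"type": "RUN_STARTED", "threadId": "t1", "runId": "r1"},
    {"type": "TEXT_MESSAGE_CONTENT", "messageId": "m1", "delta": "Hel"},
    {"type": "TEXT_MESSAGE_CONTENT", "messageId": "m1", "delta": "lo"},
    {"type": "RUN_FINISHED", "threadId": "t1", "runId": "r1"},
]

def render(stream):
    """Accumulate text deltas per message id, as a UI would while streaming."""
    messages = {}
    for event in stream:
        if event["type"] == "TEXT_MESSAGE_CONTENT":
            messages.setdefault(event["messageId"], "")
            messages[event["messageId"]] += event["delta"]
    return messages

print(render(events))  # {'m1': 'Hello'}
```

The key property is that the UI never waits for a complete response: each `TEXT_MESSAGE_CONTENT` delta can be painted the moment it arrives.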

Get started in minutes using our TypeScript or Python SDK with any agent backend (OpenAI, Ollama, LangGraph, or custom code). Visit docs.ag-ui.com for the specification, quick-start guide, and interactive playground.

__wf_reserved_inherit

Agent-User Interaction

Today’s AI Agent ecosystem is maturing. Agents are going from interesting viral demos to actual production use, including by some of the biggest enterprises in the world.

However, the ecosystem has largely focused on backend automation: processes that are triggered or run automatically, operate with limited user interaction, and produce output that is consumed after the fact.

Common use-cases include data migration, research and summarization, and form-filling. These are repeatable, simple workflows where accuracy can be ensured, or where 80% accuracy is good enough.

Such automations have already been big productivity boosters, taking time-consuming and tedious tasks off people's plates.

Where Agents Meet Users

Coding tools (Devin vs. Cursor)

Throughout the adoption of generative AI, coding tools have been canaries in the coal mine, and Cursor is the best example of a user-interactive agent: an AI agent that works alongside users in a shared workspace.

This contrasts with Devin, which promised a fully autonomous agent that automates high-level work end to end.

For many of the most important use-cases, Agents are helpful if they can work alongside users. This means users can see what the agent is doing, can co-work on the same output, and easily iterate together in a shared workspace.

The Challenges of Building a User-Interactive Agent

Creating these collaborative experiences presents significant technical challenges:

  • Real-time streaming: LLMs produce tokens incrementally; UIs need them instantly without blocking on the full response.
  • Tool orchestration: Modern agents call functions, run code, hit APIs. The UI must show progress and results, sometimes ask for human approval, and then resume the run—all without losing context.
  • Shared mutable state: Agents often generate plans, tables, or code folders that evolve step-by-step. Shipping entire blobs each time wastes bandwidth; sending diffs demands a clear schema.
  • Concurrency & cancellation: A user might fire off multiple queries, stop one mid-flight, or switch threads. The backend and front-end need thread IDs, run IDs, and an orderly shutdown path.
  • Security boundaries: Streaming arbitrary data over WebSockets is easy until you need CORS, auth tokens, and audit logs that an enterprise will sign off on.
  • Framework sprawl: LangChain, CrewAI, Mastra, AG2, home-grown scripts—all speak slightly different dialects. Without a standard, every UI must reinvent adapters and edge-case handling.
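The shared-mutable-state challenge in particular is why AG-UI streams diffs rather than whole snapshots. A minimal sketch of applying a JSON-Patch-style delta, assuming RFC 6902-like `add`/`replace` operations; the helper below is our illustration, not the SDK's implementation:

```python
import copy

def apply_patch(state, ops):
    """Apply a minimal subset of JSON Patch (RFC 6902) to a nested dict.
    Illustrative only: handles 'add' and 'replace' on object paths."""
    state = copy.deepcopy(state)  # leave the caller's snapshot untouched
    for op in ops:
        keys = op["path"].strip("/").split("/")
        target = state
        for key in keys[:-1]:
            target = target[key]
        if op["op"] in ("add", "replace"):
            target[keys[-1]] = op["value"]
    return state

state = {"plan": {"steps": 3, "done": 0}}
delta = [{"op": "replace", "path": "/plan/done", "value": 1}]
print(apply_patch(state, delta))  # {'plan': {'steps': 3, 'done': 1}}
```

Shipping a one-operation delta like this costs a few dozen bytes regardless of how large the plan, table, or code folder has grown.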

The AG-UI Solution


AG-UI addresses these challenges through a simple yet powerful approach:

Your client makes a single POST to the agent endpoint, then listens to a unified event stream. Each event has a type (e.g., TEXT_MESSAGE_CONTENT, TOOL_CALL_START, STATE_DELTA) and a minimal payload. Agents emit events as they occur, and UIs respond appropriately—displaying partial text, rendering visualizations when tools complete, or updating interfaces when state changes.
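On the client side, that usually reduces to a dispatch table keyed on the event type. A hypothetical sketch (the handler names and the `ui_log` stand-in are ours, not part of the SDK):

```python
def dispatch(event, handlers):
    """Route one decoded event to the handler registered for its type.
    Unknown event types are ignored, so newer servers don't break old clients."""
    handler = handlers.get(event["type"])
    if handler:
        handler(event)

ui_log = []  # stand-in for real UI updates
handlers = {
    "TEXT_MESSAGE_CONTENT": lambda e: ui_log.append(("text", e["delta"])),
    "TOOL_CALL_START": lambda e: ui_log.append(("tool", e["toolCallName"])),
    "STATE_DELTA": lambda e: ui_log.append(("state", len(e["delta"]))),
}

for event in [
    {"type": "TEXT_MESSAGE_CONTENT", "delta": "Hi"},
    {"type": "TOOL_CALL_START", "toolCallName": "search"},
    {"type": "STATE_DELTA", "delta": [{"op": "replace", "path": "/x", "value": 1}]},
    {"type": "SOME_FUTURE_EVENT"},  # safely ignored
]:
    dispatch(event, handlers)

print(ui_log)  # [('text', 'Hi'), ('tool', 'search'), ('state', 1)]
```

Because every framework emits the same event types, this one dispatch table works unchanged whichever backend sits behind the endpoint.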

Built on standard HTTP, AG-UI integrates smoothly with existing infrastructure while offering an optional binary serializer for performance-critical applications.

What This Enables

AG-UI establishes a consistent contract between agents and interfaces, eliminating the need for custom WebSocket formats and text-parsing hacks. With this unified protocol:

  • Components become interchangeable: Use CopilotKit's React components with any AG-UI source
  • Backend flexibility: Switch between cloud and local models without UI changes
  • Multi-agent coordination: Orchestrate specialized agents through a single interface
  • Enhanced development: Build faster with richer experiences and zero vendor lock-in

AG-UI isn't just a technical specification—it's the foundation for the next generation of AI-enhanced applications that enable seamless collaboration between humans and agents.

Want to learn more?

Book a call and connect with our team

Please include who you are, what you're building, and your company size in the meeting description, and we'll help you get started today!

We'd love to get your feedback. Please join our AG-UI Discord Community and join the conversation.

Start building today at docs.ag-ui.com
