

AG-UI and A2UI Explained: How the Emerging Agentic Stack Fits Together

By Nathan Tarbert
December 15, 2025

It's difficult to keep up with everything taking shape in the AI space today, but one thing is for sure: it's moving forward at lightning pace.

One major highlight is that Google will be releasing a protocol called A2UI. Although AG-UI and A2UI sound similar, they solve completely different, yet highly complementary, problems.

CopilotKit has been working closely with Google on the A2UI protocol, and we’ll be shipping full support when the A2UI spec launches. But before that happens, let’s break down how these pieces fit into the broader agentic landscape.

The Agentic Protocol Ecosystem Is Taking Shape

Across the industry, three open protocols are becoming foundational for agent-driven applications. Each sits at a distinct layer of the stack:

1. AG-UI — Agent ↔ UI Runtime (CopilotKit + ecosystem partners)

The general-purpose, bi-directional runtime connection between an agentic frontend and an agentic backend.
Think of AG-UI as the bridge that moves messages, events, and UI instructions reliably between the two.

2. MCP — Model Context Protocol (Anthropic)

Originally designed to let agents connect to tools — but those tools are becoming agentic themselves. MCP is now the standard for safely exposing capabilities to agents.

3. A2A — Agent-to-Agent Protocol (Google)

Designed so agents can communicate, delegate, and collaborate with other agents.

Even though these protocols were built by different groups, they form a complementary trio, each solving a different piece of the agentic puzzle.
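To make the AG-UI layer concrete, here is a minimal sketch of the kind of typed event envelope a bi-directional agent ↔ UI runtime streams between backend and frontend. The event names and shapes below are illustrative assumptions, not the official AG-UI type list:

```typescript
// Illustrative sketch of an agent <-> UI event envelope.
// Event names here are hypothetical, not the official AG-UI spec.
type AgentEvent =
  | { type: "RUN_STARTED"; runId: string }
  | { type: "TEXT_MESSAGE_CONTENT"; messageId: string; delta: string }
  | { type: "UI_INSTRUCTION"; payload: unknown } // e.g. an A2UI document
  | { type: "RUN_FINISHED"; runId: string };

// Fold a stream of events into the displayable message text.
function accumulateText(events: AgentEvent[]): string {
  return events
    .filter(
      (e): e is Extract<AgentEvent, { type: "TEXT_MESSAGE_CONTENT" }> =>
        e.type === "TEXT_MESSAGE_CONTENT"
    )
    .map((e) => e.delta)
    .join("");
}
```

The point of the sketch is the shape, not the names: the frontend never polls for a finished answer; it consumes a typed stream and updates as deltas arrive.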

__wf_reserved_inherit

Generative UI Is the New Frontier

Alongside protocols, a new family of Generative UI specifications is emerging. These specs allow agents to return UI - not just text - opening the door to fully dynamic, agent-generated interfaces.

The big ones:

A2UI - Published by Google; declarative generative UI, streamed as JSON, and platform-agnostic.

Open-JSON-UI - OpenAI's open standardization of its internal declarative UI schema.

MCP-Apps - An iframe-based standard for user-facing UIs inside the MCP ecosystem.

AG-UI Is Not a Generative UI Spec

Instead, AG-UI is the runtime layer that transports generative UI instructions, whether they come from A2UI, Open-JSON-UI, MCP-UI, or a custom format you define yourself.

This distinction is key:

  • A2UI / MCP-UI / Open-JSON-UI — what UI the agent wants to show
  • AG-UI — how that UI is delivered between backend ↔ frontend
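One way to picture the split: the spec layer defines a declarative UI document, and the runtime layer wraps it in a transport event. Both shapes below are hypothetical stand-ins, not the actual A2UI or AG-UI schemas:

```typescript
// Hypothetical shapes: an A2UI-style declarative document ("what")
// carried inside an AG-UI-style transport event ("how").
type UiDocument = {
  component: string;                // e.g. "form", "table"
  props: Record<string, unknown>;
};

type TransportEvent = {
  type: "UI_INSTRUCTION";
  source: string;                   // which agent produced the UI
  payload: UiDocument;              // the spec-layer document travels opaquely
};

function wrap(source: string, doc: UiDocument): TransportEvent {
  return { type: "UI_INSTRUCTION", source, payload: doc };
}
```

Because the runtime treats the payload as opaque, swapping A2UI for Open-JSON-UI (or a custom schema) changes `UiDocument`, not the transport.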

AG-UI Protocol: The Bridge Between Agents and Users

AG-UI’s purpose is simple:
Connect any agentic backend to any agentic frontend - reliably, bidirectionally, and with full event transparency.

AG-UI is pragmatic and built from real-world developer needs. It supports:

  • live event streaming
  • multi-agent interactions
  • long-running agents
  • agent reconnection
  • full command/event lifecycles
  • custom generative UI schemas

And because AG-UI integrates handshakes for MCP and A2A, UI instructions from subagents - even deeply nested ones - can be safely propagated all the way up to the user’s application.
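A minimal sketch of that propagation, assuming each hop simply prefixes itself onto a source path as it forwards the instruction upward (the field names are illustrative, not the official AG-UI handshake):

```typescript
// Hypothetical: a UI instruction from a nested subagent is forwarded
// up the agent tree, each hop recording itself on the source path.
type Instruction = { sourcePath: string[]; payload: unknown };

function forward(hop: string, inst: Instruction): Instruction {
  return { ...inst, sourcePath: [hop, ...inst.sourcePath] };
}
```

The user-facing app then knows exactly which subagent, however deeply nested, asked to render the UI.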

This ecosystem-level interoperability is one reason AG-UI is quickly becoming a foundation for production agentic apps.

Mixing and Matching: The Reality for Developers

The future won’t be one “winner.”
Developers will mix:

  • AG-UI for frontend ↔ backend runtime
  • MCP for tools
  • A2A for orchestrating subagents
  • A2UI (or other generative UI specs) for declarative UI
  • Custom schemas for special use-cases

CopilotKit embraces that future.

Our framework lets developers connect to any of these protocols or specs - individually or together. Whether your application uses agents that speak MCP, A2A, or a custom internal protocol, AG-UI ensures everything plays nicely inside a unified frontend experience.

CopilotKit: The Agentic Application Framework

AG-UI, MCP, and A2A solve different layers, but developers still need a higher-level framework that ties everything together.

That’s where CopilotKit sits:

  • above the protocols
  • above the generative UI specs
  • providing the full stack for building real agentic applications

CopilotKit gives developers:

  • agent runtime + orchestration
  • frontend components
  • backend integrations
  • observability
  • security & policy enforcement
  • production infra
  • cloud-hosted experience or open-source self-hosting

In other words:
CopilotKit is to agentic apps what React was to component-based UIs - a unified, opinionated layer that abstracts fragmentation while staying open to the ecosystem.

And with full A2UI support landing soon, CopilotKit apps will be able to natively render declarative UI generated by agents, alongside everything else AG-UI already supports.

Even though AG-UI and A2UI sound similar, they solve completely different - and highly complementary - problems:

  • A2UI defines what UI the agent wants to display.
  • AG-UI defines how the UI (and everything else) flows between the backend and the frontend.

Together with MCP and A2A, these pieces form the foundation of the next-generation agentic application stack — one that is interoperable, open, and rapidly maturing.

And CopilotKit is committed to supporting the entire ecosystem, not replacing it.

Happy building!
