A2UI Launch: CopilotKit has partnered with Google to deliver full support at launch in both CopilotKit and AG-UI!

So much is happening in AI right now that even people deep in the space are struggling to keep up. New frameworks, new specs, new protocols: it feels like every week the landscape shifts under our feet.
One of the next big shifts is Google’s upcoming A2UI protocol. And even though AG-UI and A2UI sound very similar, they’re built to solve completely different but deeply complementary problems.
CopilotKit has been working hands-on with Google as A2UI takes shape, and we’ll be shipping full support the moment the spec goes live. But before that happens, it’s worth slowing down for a minute to map out how these pieces connect, and what they actually mean for the future of agentic apps.
Let’s break it down.
Across the industry, three open protocols are becoming foundational for agent-driven applications. Each sits at a distinct layer of the stack:
AG-UI (Agent-User Interaction Protocol): the general-purpose, bi-directional runtime connection between an agentic frontend and an agentic backend.
Think of AG-UI as the “plumbing” layer that moves messages, events, and UI instructions reliably between the two (a minimal sketch of such an event stream follows this overview).
MCP (Model Context Protocol): originally designed to let agents connect to tools, but those tools are becoming agentic themselves. MCP is now the standard for safely exposing capabilities to agents.
A2A (Agent2Agent Protocol): designed so agents can communicate, delegate, and collaborate with other agents.
Even though these protocols were built by different groups, they form a complementary trio, each solving a different piece of the agentic puzzle.
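To make the AG-UI layer concrete, here is a minimal sketch of how a frontend might consume an event stream from an agent backend. The event names, payload shapes, and newline-delimited JSON transport below are illustrative assumptions, not the official AG-UI wire format; consult the spec for the real event types.

```typescript
// Illustrative sketch only: event names and payload shapes are assumptions,
// not the official AG-UI wire format.
type AgentEvent =
  | { type: "text_message"; content: string }                   // assistant text for the chat
  | { type: "tool_call"; name: string; args: unknown }          // agent invoking a capability
  | { type: "ui_instruction"; spec: string; payload: unknown }; // generative UI to render

// A hypothetical connection between frontend and agent backend: send the user's
// messages up, then stream every event the agent emits back down.
async function runAgentSession(endpoint: string, onEvent: (e: AgentEvent) => void) {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: "Plan my trip" }] }),
  });

  // Stream newline-delimited JSON events from the agent backend.
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";
    for (const line of lines) {
      if (line.trim()) onEvent(JSON.parse(line) as AgentEvent);
    }
  }
}
```

The transport details matter less than the shape of the idea: every message, tool call, and UI instruction flows through one observable stream between backend and frontend.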

Alongside protocols, a new family of Generative UI specifications is emerging. These specs allow agents to return UI, not just text, opening the door to fully dynamic, agent-generated interfaces.
Check out our recent writeup on The Three Types of Generative UI: Static, Declarative, and Fully Generated.
AG-UI is not another generative UI spec. Instead, it is the runtime layer that transports generative UI instructions, whether they come from A2UI, Open-JSON-UI, MCP-UI, or a custom format you define yourself.
This distinction is key: the spec describes what the UI is, while AG-UI handles how it gets delivered.
AG-UI’s purpose is simple:
Connect any agentic backend to any agentic frontend, reliably, bidirectionally, and with full event transparency.
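For example, a declarative UI spec might describe a component tree as plain data, and AG-UI simply carries that data to the frontend as one more event. The shape below is hypothetical (A2UI is not yet published, and the field names are invented for illustration); it only shows the separation between the spec that defines the UI and the protocol that delivers it.

```typescript
// Hypothetical declarative UI payload: field names are illustrative,
// not taken from the A2UI spec (which is not yet published).
const uiPayload = {
  component: "card",
  props: { title: "Flight options" },
  children: [
    { component: "list", props: { items: ["NYC -> SFO, 9:05am", "NYC -> SFO, 1:30pm"] } },
    { component: "button", props: { label: "Book", action: "book_flight" } },
  ],
};

// The agent backend wraps the payload in a transport event; AG-UI does not
// care which generative UI spec produced it, it just delivers it.
const event = {
  type: "ui_instruction", // assumed event name, matching the sketch above
  spec: "a2ui",           // could equally be "mcp-ui", "open-json-ui", or "custom"
  payload: uiPayload,
};

console.log(JSON.stringify(event, null, 2));
```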
AG-UI is pragmatic and built from real-world developer needs. It supports:
And because AG-UI integrates handshakes for MCP and A2A, UI instructions from subagents - even deeply nested ones - can be safely propagated all the way up to the user’s application.
This ecosystem-level interoperability is one reason AG-UI is quickly becoming a foundation for production-ready agentic apps.
The future won’t be one “winner.”
Developers will mix:
CopilotKit embraces that future.
Our framework lets developers connect to any of these protocols or specs individually or together. Whether your application uses agents that speak MCP, A2A, or a custom internal protocol, AG-UI ensures everything plays nicely inside a unified frontend experience.
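On the frontend, that typically means wiring a CopilotKit provider to your agent runtime and registering actions whose render functions produce UI. The snippet below is a minimal sketch using CopilotKit’s public React API; the runtime URL and the action are placeholders for your own setup, not part of any protocol or spec.

```tsx
// Minimal sketch using CopilotKit's React API; the runtime URL and the
// action below are placeholders, not part of any spec.
import { CopilotKit, useCopilotAction } from "@copilotkit/react-core";
import { CopilotSidebar } from "@copilotkit/react-ui";
import "@copilotkit/react-ui/styles.css";

function FlightBooker() {
  // Register a frontend action the agent can call; `render` returns
  // generative UI that appears inline in the chat.
  useCopilotAction({
    name: "showFlightOptions",
    description: "Display flight options for the user to pick from",
    parameters: [
      { name: "options", type: "string[]", description: "Human-readable flight options" },
    ],
    render: ({ args }) => (
      <ul>{(args.options ?? []).map((o) => <li key={o}>{o}</li>)}</ul>
    ),
  });
  return <div>Your app UI here</div>;
}

export default function App() {
  return (
    // runtimeUrl points at your CopilotKit runtime, which talks to whatever
    // agent backend you run (placeholder path).
    <CopilotKit runtimeUrl="/api/copilotkit">
      <FlightBooker />
      <CopilotSidebar />
    </CopilotKit>
  );
}
```

The division of labor mirrors the article: the frontend declares what it can render, and the protocols underneath decide how agent instructions reach it.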
AG-UI, MCP, and A2A solve different layers, but developers still need a higher-level framework that ties everything together.
That’s where CopilotKit sits:
CopilotKit gives developers:
In other words:
CopilotKit is to agentic apps what React was to component-based UIs: a unified, opinionated layer that abstracts fragmentation while staying open to the ecosystem.
And with full A2UI support landing soon, CopilotKit apps will be able to natively render declarative UI generated by agents, alongside everything else AG-UI already supports.
Together with MCP and A2A, these pieces form the foundation of the next-generation agentic application stack - one that is interoperable, open, and rapidly maturing.
And CopilotKit is committed to supporting the entire ecosystem, not replacing it.
Want to learn more?
→ Book a call and connect with our team
Please tell us who you are → what you're building → company size in the meeting description and we'll help you get started today!

