CoAgents: Connecting AI Agents to Realtime Application Context

By Atai Barkai
September 10, 2024

CoAgents: LangGraph and CopilotKit streaming an AI agent's state

We are quickly learning (and re-learning) that agents perform best when they work alongside people. It is far easier to get an agent to perform 70%, even 80% or 90%, of a given task than to have it complete the task fully autonomously (see: self-driving cars).

However, to facilitate seamless human-AI collaboration, we must connect AI agents to the real-time application context: agents should be able to see what the end-user sees and to do what the end-user can do in the context of applications.

End-users should be able to monitor an agent's execution and bring it back on track when it goes off the rails.

This is where CoAgents come into play: agents that expose their operations to end-users through custom, understandable UI, and that end-users can steer back on track should they go off the rails.

Human-in-the-loop is essential for creating effective and valuable AI agents. By integrating CoAgents with LangGraph, we elevate in-app human-in-the-loop agents to a new level. This is the next focus for CopilotKit: simplifying the development process for anyone looking to harness this technology. We're already building the infrastructure for CoAgents today and want to share our process and learnings as we go. Want early access to this technology? Sign up here.

A framework for building human-in-the-loop agents

CopilotKit is the simplest way to integrate production-ready copilots into any product, with both open-source tooling and an enterprise-ready cloud platform. When we look at CoAgents, a few critical use cases crop up most frequently:

  • Streaming intermediate agent state - provides insight into the agent's work in progress, rendering any component based on the LangGraph node currently being processed.
  • Shared state between the agent, the user, and the app - lets an agent and an end-user collaborate over the same data via bi-directional syncing between app state and agent state.
  • Agent-led Q&A - lets an agent intentionally ask the user a question, in conversation and app context, with support for responses and follow-ups via the UI. For example: "Your planned trip seems particularly expensive this week; would you like to reschedule for next week?"
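To make the first two patterns concrete, here is a minimal, framework-free Python sketch (the function and field names are illustrative stand-ins, not CopilotKit's actual API): the agent yields a snapshot of its state after every step, so the host app can render work in progress, while agent and app read and write the same shared state.

```python
# Illustrative sketch of streamed, shared agent state (not the real
# CopilotKit API): the agent yields a snapshot after each step, and
# the app can mutate the same dict to steer the agent mid-run.

def storybook_agent(state: dict):
    """Run the agent step by step, yielding state after each node."""
    state["outline"] = "A fox learns to share."    # stand-in for an LLM call
    yield dict(state)                              # UI renders work in progress
    state["characters"] = ["Fergus the fox", "Hazel the hare"]
    yield dict(state)

shared_state = {"outline": "", "characters": []}
snapshots = []
for snapshot in storybook_agent(shared_state):
    snapshots.append(snapshot)      # each snapshot drives a UI update

# Because the agent and the app hold the same dict, edits made by the
# app (or the user) between steps are visible to the agent on its next read.
```

The key design point is that the agent does not run opaquely to completion: every intermediate state is observable, which is what makes the UI able to show progress and the user able to intervene.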

At CopilotKit, we are dedicated to creating a platform that simplifies the development of copilots by providing developers with the most powerful tools at their disposal.

Our next step in this journey is to enable this through CoAgents, beginning with the introduction of streaming intermediate agent state.

Ready to walk through some of this in action?

Build an AI CoAgent children's storybook with LangGraph and CopilotKit

Let's see what streaming agent state looks like in our storybook app. We've built an AI agent that helps you write a children's storybook: the agent chats with the user to develop a story outline, generate characters, outline chapters, and produce image descriptions, which we then use to create illustrations with DALL-E 3.
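The pipeline above can be sketched as a chain of stages, each returning a partial state update, in the style LangGraph uses for graph nodes. All stage names and state fields here are hypothetical stand-ins, with stubbed output in place of the LLM and DALL-E 3 calls:

```python
# Hypothetical sketch of the storybook pipeline: each stage returns a
# partial state update; the runner merges updates and yields every
# intermediate state, mirroring how LangGraph streams node updates.

def develop_outline(state):
    return {"outline": "A shy dragon finds her roar."}    # stub for an LLM call

def generate_characters(state):
    return {"characters": ["Daphne the dragon", "Milo the mouse"]}

def outline_chapters(state):
    return {"chapters": ["The quiet cave", "A new friend", "The big roar"]}

def describe_images(state):
    # Stand-in for the prompts that would be sent to an image model.
    return {"image_prompts": [f"Watercolor scene: {c}" for c in state["chapters"]]}

PIPELINE = [develop_outline, generate_characters, outline_chapters, describe_images]

def run_pipeline(state):
    """Run each stage, merge its update, and yield the running state."""
    for stage in PIPELINE:
        state = {**state, **stage(state)}
        yield stage.__name__, state      # stage name + snapshot for the UI

states = list(run_pipeline({}))
final = states[-1][1]
```

Each `(stage_name, snapshot)` pair is what the frontend would consume to render a progress component for the node currently running, rather than waiting silently for the finished book.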

This first version of CoAgents, with support for LangGraph, will be released fully open source - sign up for first access here.
