January 17, 2025

TL;DR

In this tutorial, we’ll walk you through building an AI-powered RAG copilot for your product knowledge base, step by step, using the Anthropic API, the Pinecone API, and CopilotKit🪁.

Here’s what we’ll cover:

  • Building a simple product knowledge base using Next.js.
  • Adding CopilotKit UI components to your app client, along with CopilotKit’s API endpoint.
  • Integrating the Pinecone API to create searchable indexes for your knowledge base articles.

Let’s get started!

What is CopilotKit?

CopilotKit is the leading open-source framework for integrating production-ready AI-powered copilots into your applications. It provides a feature-rich SDK that supports various AI copilot use cases, including context awareness, copilot actions, and generative UIs.

copilotkit homepage

This means you can focus on defining your copilot's role, rather than getting bogged down in the technicalities of building one from scratch or dealing with complex integrations.

Check out CopilotKit's GitHub

Prerequisites

Before we get started, here are a few things you’ll need:

  • Familiarity with building web apps using React.js. A quick note: you’ll also need some knowledge of TypeScript, as we’ll be using it for this project.
  • Node.js v20 or later installed on your system (run node --version to check). If not, you can download and install it from the official Node.js website.

Once you’ve covered these, we’ll set up the development environment we’ll be working in.

Here’s a quick preview of what we’ll be building:

app preview

Setting Up the Project

To get started, run these commands to create a new directory for the project, and scaffold the Next.js application boilerplate source files and folders:

mkdir product-knowledge-base && cd product-knowledge-base
npx create-next-app product-knowledge-base


Follow the setup prompts. You can select as shown below (adjust based on your project requirements).

project setup

Once the project is created, navigate to the project directory, and verify everything works by running the development server:

cd product-knowledge-base
npm run dev


At this stage, your app should be running locally at http://localhost:3000.

Next, let’s install the necessary dependencies for this project. These include:

  • Mantine UI.
  • CopilotKit Packages.
  • Pinecone SDK.
  • Anthropic AI SDK.
  • Axios.

Run the following command to install them:

yarn add @anthropic-ai/sdk @mantine/core @mantine/hooks @copilotkit/react-core @copilotkit/react-ui @copilotkit/runtime lucide-react axios @pinecone-database/pinecone

Now, let’s set up the file structure for the project. Here is an overview of the main files and directories we’ll create:

  • src/app/ui/service/index.ts: Handles API calls to the backend to fetch the dummy posts.
  • src/app/ui/components/KnowledgeBase.tsx: The main UI component for the knowledge base.
  • src/app/lib/types/post.ts: The TypeScript interface describing a knowledge base post.
  • src/app/lib/data/data.ts: Dummy post data for the knowledge base.
  • src/app/api/copilotkit/route.ts: The CopilotKit API endpoint.
  • src/app/api/posts/route.ts: The dummy posts API endpoint.

Here’s how your project structure will look:

product-knowledge-base/
├── src/
│   ├── app/
│   │   ├── ui/
│   │   │   ├── service/
│   │   │   │   └── index.ts
│   │   │   ├── components/
│   │   │   │   └── KnowledgeBase.tsx
│   │   ├── lib/
│   │   │   ├── types/
│   │   │   │   └── post.ts
│   │   │   ├── data/
│   │   │   │   └── data.ts
│   │   ├── api/
│   │   │   ├── copilotkit/
│   │   │   │   └── route.ts
│   │   │   ├── posts/
│   │   │   │   └── route.ts

With this setup, you now have a working development environment for this tutorial.

Building the Knowledge Base Frontend

First, import the CopilotKit and Mantine UI providers, then wrap your entire app with them so they're available globally. Here's how you can update your layout.tsx file:

import { MantineProvider } from "@mantine/core";

import "@mantine/core/styles.css";
import "@copilotkit/react-ui/styles.css";

import { CopilotKit } from "@copilotkit/react-core";

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body>
        <CopilotKit runtimeUrl="/api/copilotkit">
          <MantineProvider>{children}</MantineProvider>
        </CopilotKit>
      </body>
    </html>
  );
}

When wrapping the app with these providers, remember to pass the runtimeUrl="<endpoint-url>" prop to the CopilotKit provider.

Designing the Knowledge Base Component

In this section, we’ll go over the code you need to build the knowledge base component. Let's start by defining the Post interface. Add the following code in your src/app/lib/types/post.ts:

export interface Post {
  id: number;
  title: string;
  summary: string;
  content: string;
  category: string;
  createdAt: string;
}
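
For reference, here’s what a single post conforming to this interface might look like. The values below are purely illustrative; the actual dummy data comes from the repository file we’ll copy in later:

const samplePost: Post = {
  id: 1, // illustrative placeholder values only
  title: "Getting Started with CopilotKit",
  summary: "A quick overview of installing and configuring CopilotKit.",
  content: "CopilotKit is an open-source framework for building AI copilots...",
  category: "Getting Started",
  createdAt: "2025-01-17T00:00:00.000Z",
};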

Next, navigate to the src/app/ui/service/index.ts file, and add the following code to handle API requests to fetch posts from the app’s backend:

import axios from 'axios';
import { Post } from '@/app/lib/types/post';

const API_BASE_URL = process.env.NEXT_PUBLIC_API_BASE_URL;

export const fetchPosts = async (): Promise<Post[]> => {
  const response = await axios.get(`${API_BASE_URL}/api/posts`);
  return response.data;
};

Let's create a .env file in the root directory of your project, and add the following base URL.

NEXT_PUBLIC_API_BASE_URL='http://localhost:3000'

Now, let’s create the Knowledge Base UI component. Inside the src/app/ui/components/KnowledgeBase.tsx, start by adding the following imports:

"use client"

import { useState, useEffect } from "react";
import {
  Container,
  Title,
  Grid,
  Card,
  Text,
  Badge,
  Group,
  Stack,
  Box,
  Modal,
  List,
} from "@mantine/core";
import { BookOpen } from "lucide-react";
import { Post } from "@/app/lib/types/post";
import { fetchPosts } from "@/app/ui/service";

Next, define the KnowledgeBase functional component and initialize the following states:

export function KnowledgeBase() {
  const [posts, setPosts] = useState<Post[]>([]);
  const [loading, setLoading] = useState(true);
  const [selectedPost, setSelectedPost] = useState<Post | null>(null);

  if (loading) {
    return <Text>Loading...</Text>;
  }

  return (
    <Container size="md" py="xl" ml="xl">
      <Stack gap="xl">
        <Group justify="center" align="center">
          <BookOpen size={32} />
          <Title order={1}>CopilotKit Product Knowledge Base</Title>
        </Group>
      </Stack>
    </Container>
  );
}

Now, let’s fetch the list of posts from the API. Add the following useEffect inside the component, above the early if (loading) return, so that React’s rules of hooks are satisfied (hooks must run on every render):

useEffect(() => {
  const loadPosts = async () => {
    try {
      const data = await fetchPosts();
      setPosts(data);
    } catch (error) {
      console.error("Error loading posts:", error);
    } finally {
      setLoading(false);
    }
  };

  loadPosts();
}, []);

To display the posts fetched from the app’s backend, add the following code. We will render the list of posts in a grid layout with cards:

{/* Display posts */}
<Grid>
  {posts.map((post) => (
    <Grid.Col key={post.id} span={{ base: 12, sm: 6, md: 4 }}>
      <Card
        shadow="sm"
        padding="lg"
        radius="md"
        withBorder
        onClick={() => setSelectedPost(post)}
        style={{ cursor: "pointer" }}
      >
        <Stack gap="md">
          <Title order={3}>{post.title}</Title>
          <Badge color="blue" variant="light">
            {post.category}
          </Badge>
          <Text size="sm" c="dimmed">
            {post.summary}
          </Text>
          <Text size="xs" c="dimmed">
            Posted on: {new Date(post.createdAt).toLocaleDateString()}
          </Text>
        </Stack>
      </Card>
    </Grid.Col>
  ))}
</Grid>

To show the content of a single post, we’ll keep things straightforward and display the dummy content in a modal component when an article card is clicked. To do that, add the following code:

{/* Modal for displaying selected post */}
{selectedPost && (
  <Modal
    opened={!!selectedPost}
    onClose={() => setSelectedPost(null)}
    title={selectedPost.title}
    centered
    size="xl"
  >
    <Stack gap="md">
      <List>
        {selectedPost.content
          .split("")
          .filter((item) => item.trim() !== "")
          .map((item, index) => (
            <List.Item key={index}>{item}</List.Item>
          ))}
      </List>
    </Stack>
  </Modal>
)}

The card’s onClick above already sets the selected post inline. If you’d rather use a named handler, define one and pass it to the card instead (onClick={() => handlePostClick(post)}):

const handlePostClick = (post: Post) => {
  setSelectedPost(post);
};

Finally, to render this component in the browser, import it in your src/app/page.tsx file with the following code (make sure to delete all the boilerplate Next.js code):

import { KnowledgeBase } from "@/app/ui/components/KnowledgeBase";

export default function Home() {
  return (
    <div>
      <KnowledgeBase />
    </div>
  );
}

Adding CopilotKit UI Components

The next step involves adding CopilotKit UI components into the knowledge base interface. CopilotKit’s React SDK provides neatly designed and easily customizable UI components. These include sidebars, pop-ups, text areas, and a headless UI component. For this example, we will use the `CopilotSidebar` component to render an in-app chatbot interface.

To add CopilotKit’s UI sidebar component, in your src/app/ui/components/KnowledgeBase.tsx, add the following import:

import { CopilotSidebar } from "@copilotkit/react-ui";

Once imported, add the component in the JSX return statement:

<Group justify="center" style={{ width: "100%" }}>
  <Box style={{ flex: 1, maxWidth: "350px" }}>
    <CopilotSidebar
      instructions="Help the user get the right knowledge base articles for their query"
      labels={{
        initial: "Welcome! Describe the query you need assistance with.",
      }}
      defaultOpen={true}
      clickOutsideToClose={false}
    />
  </Box>
</Group>

The component accepts various props, including instructions, labels, defaultOpen, and clickOutsideToClose. Importantly, the instructions prop allows you to provide additional context to help the underlying Copilot AI LLM better understand and respond to user queries.
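
As a side note, if you’d prefer a floating chat window instead of a docked sidebar, the same package also ships a popup variant that accepts the same props. Here’s a minimal sketch, assuming the CopilotPopup component from @copilotkit/react-ui:

import { CopilotPopup } from "@copilotkit/react-ui";

// Renders a floating chat popup instead of the docked sidebar
<CopilotPopup
  instructions="Help the user get the right knowledge base articles for their query"
  labels={{
    initial: "Welcome! Describe the query you need assistance with.",
  }}
/>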

Using CopilotKit Hooks for AI Actions

The CopilotKit React SDK also provides a set of useful hooks that let you define custom actions for your app’s AI copilot. For this example, we will use the useCopilotAction hook to define our intended action: retrieving knowledge base articles based on a user query.

To do that, start by importing the useCopilotAction hook in your KnowledgeBase component file as follows:

import { useCopilotAction } from "@copilotkit/react-core";

After importing, you can initialize the hook, and specify the action you want the copilot to perform. In this case, we’ll define an action named “FetchKnowledgebaseArticles” to retrieve relevant articles based on the supplied user query. Let's write the code for it:

useCopilotAction({
  name: "FetchKnowledgebaseArticles",
  description: "Fetch relevant knowledge base articles based on a user query",
  parameters: [
    {
      name: "query",
      type: "string",
      description: "User query for the knowledge base",
      required: true,
    },
  ],
  render: "Getting relevant answers to your query...",
});

This action setup includes several important elements. The name attribute provides a unique identifier for the action, while the description explains its purpose and when it should be used.

Additionally, the parameters array defines the inputs required for the action, such as the user’s query in this case. And lastly, the render property allows you to specify what will be displayed while the action is being executed. For this example, we’ll display a simple status message to keep the user informed about the ongoing process.
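
If you want the status message to react to the action’s lifecycle, render can also be a function instead of a string. Here’s a minimal sketch, assuming the render-callback form from CopilotKit’s docs (the exact status values may vary between versions):

useCopilotAction({
  name: "FetchKnowledgebaseArticles",
  description: "Fetch relevant knowledge base articles based on a user query",
  parameters: [
    {
      name: "query",
      type: "string",
      description: "User query for the knowledge base",
      required: true,
    },
  ],
  // Show a different message once the action has finished executing
  render: ({ status }) =>
    status === "complete"
      ? "Here's what I found in the knowledge base."
      : "Getting relevant answers to your query...",
});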

Integrating the App’s Backend

To complete the entire app’s workflow, let’s build the backend by adding endpoints for fetching posts, integrating CopilotKit functionality, and the Pinecone API to create searchable indexes for the knowledge base articles.

To do that, first, go to this GitHub repository file containing the dummy posts data, copy, and paste it into your local src/app/lib/data/data.ts file.

Next, in the src/app/api/posts/route.ts file, add the following code to set up the dummy posts API endpoint:

import { NextResponse } from 'next/server';
import { posts } from '@/app/lib/data/data';

export async function GET() {
  return NextResponse.json(posts);
}

At this point, start your dev server and head over to your app’s localhost URL. You should see the list of posts displayed, along with the CopilotKit sidebar component.

app demo

Let’s get back to the main intent of this guide: integrating an AI copilot for product knowledge bases. Typically, product knowledge bases are loaded with all sorts of content: blog articles, FAQs, internal SOPs, API documentation, and more. That’s a lot more than the three dummy posts we’re working with in this example.

Many knowledge bases integrate a search API such as Algolia to let users quickly search through resources. For the AI copilot, we want to go beyond simple “search and show”. Essentially, we want to take advantage of LLMs’ ability to understand natural language to implement conversational search across the knowledge base’s resources.

Instead of simply returning static search results, the Copilot will allow users to “chat” about the resources, ask follow-up questions, and get answers in a much more user-friendly way. This is arguably more intuitive.

To make this happen, we need to create LLM-searchable indexes—essentially, data sources that the AI can query to fetch the right information. For this, we’ll use Pinecone's vector database API to create a searchable index for the dummy data.

Creating Searchable Indexes Using the Pinecone SDK

Pinecone is a vector database service designed for fast, scalable, and accurate intelligent search capabilities in applications. It allows you to store and retrieve vector representations of data efficiently, making it ideal for tasks like semantic search and recommendation systems.

What this means is that instead of relying solely on the Anthropic LLM powering the Copilot to generate responses from its preexisting training knowledge, you can contextualize the LLM so that your Copilot addresses user queries and generates responses based on your app’s data. This is where Pinecone becomes useful: it allows you to create vector databases for your data that are easily searchable by large language models.

Essentially, in this example, we are simply integrating Pinecone to create indexes for our knowledge base data. This way, the Copilot can first search through the data and generate more relevant, contextualized responses, along with accurately addressing follow-up questions based on the same knowledge base data.

Here’s a quick overview of what we’ll do:

  1. Set up Pinecone.
  2. Generate embeddings for the sample knowledge base articles.
  3. Index and query the knowledge base data.

In the src/app/api/copilotkit/route.ts file, let’s start by making the following imports:

import { Pinecone } from '@pinecone-database/pinecone';
import { posts } from "@/app/lib/data/data";

Next, define your environment variables for the Pinecone and Anthropic API keys:

const ANTHROPIC_API_KEY = process.env.NEXT_PUBLIC_ANTHROPIC_API_KEY;
const PINECONE_API_KEY = process.env.NEXT_PUBLIC_PINECONE_API_KEY;

It’s good practice to add a check to ensure these keys are provided. Don’t worry; we’ll cover the steps for getting the Anthropic API key later on.

if (!ANTHROPIC_API_KEY || !PINECONE_API_KEY) {
  console.error('Missing required API keys.');
  process.exit(1);
}

Now, initialize the Pinecone SDK, and set up the necessary configurations:

const pinecone = new Pinecone({ apiKey: PINECONE_API_KEY });
const model = 'multilingual-e5-large';
const indexName = 'knowledge-base-data';

We can now create the Pinecone index. An index is essentially structured storage of numerical vector representations of your data, which allows you to efficiently search and retrieve records based on vector similarity.

In a production app, you would typically make API calls to retrieve your posts’ data dynamically. Here, however, we will use the dummy data to simulate this process.

To create a vector database for our knowledge base data, we need to initialize a Pinecone index. Here’s the function to achieve this:

// Function to create the Pinecone index
const initializePinecone = async () => {
  const maxRetries = 3;
  const retryDelay = 2000;

  for (let i = 0; i < maxRetries; i++) {
    try {
      const indexList = await pinecone.listIndexes();
      if (!indexList.indexes?.some(index => index.name === indexName)) {
        await pinecone.createIndex({
          name: indexName,
          dimension: 1024,
          metric: 'cosine',
          spec: {
            serverless: {
              cloud: 'aws',
              region: 'us-east-1',
            },
          },
        });
        await new Promise(resolve => setTimeout(resolve, 5000));
      }
      return pinecone.index(indexName);
    } catch (error) {
      if (i === maxRetries - 1) throw error;
      console.warn(`Retrying Pinecone initialization... (${i + 1}/${maxRetries})`);
      await new Promise(resolve => setTimeout(resolve, retryDelay));
    }
  }
  return null; 
};

Once it’s set up, you can use it to store and retrieve your knowledge base data.

Next, we need to generate embeddings for the knowledge base content. This step is crucial: it enables your application to efficiently store and search through large volumes of data using vector embeddings.

Embedding your knowledge base content simply means converting the raw data, typically text, into vectors that represent its semantic meaning. Pinecone then stores these vectors in the index.

With the index in place, your application can perform operations like similarity search, retrieving the most relevant data quickly based on the vector similarity between your queries and the stored content.

To do that, add the following function:

// Initialize Pinecone and prepare the index
(async () => {
  try {
    const index = await initializePinecone();
    if (index) {
      // Embed each post's content with the hosted embedding model
      const embeddings = await pinecone.inference.embed(
        model,
        posts.map(d => d.content),
        { inputType: 'passage', truncate: 'END' }
      );

      // Build one record per post: an id, the embedding values,
      // and the raw text stored as metadata
      const records = posts.map((d, i) => ({
        id: d.id.toString(),
        values: embeddings[i]?.values ?? [],
        metadata: { text: d.content },
      }));

      // Upsert all records into a dedicated namespace within the index
      await index.namespace('knowledge-base-data-namespace').upsert(records);
    }
  } catch (error) {
    console.error('Error initializing Pinecone:', error);
    process.exit(1);
  }
})();

There are a few things we need to note:

  • Embeddings: These are vector representations of content, such as text or articles, that capture the semantic meaning of the data. In this case, embeddings are generated using Pinecone’s model, multilingual-e5-large, which processes the content and converts it into vectors. Note that you can also use other models, such as OpenAI, which provides an embedding API for this task.
  • Namespace: A namespace in Pinecone is a logical partitioning of your index. It allows you to organize data within the index and perform operations within specific segments of your data. In this case, the namespace is set to ‘knowledge-base-data-namespace’, which groups the knowledge base content together.
  • Records: These represent the data being inserted into Pinecone. Each record consists of an id, values (the embedding), and metadata (e.g., the text of the article). The values are what Pinecone uses to perform similarity searches, while the metadata provides additional context for each record (see the example after this list).
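
To make the record shape concrete, here’s roughly what a single upserted record looks like. The values are truncated and the text is illustrative:

// One Pinecone record: the id ties back to a post, values hold the
// 1024-dimensional embedding, and metadata carries the original text
const exampleRecord = {
  id: "1",
  values: [0.021, -0.113, 0.047 /* ...1024 dimensions in total */],
  metadata: { text: "CopilotKit is an open-source framework..." },
};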

Now, for this setup to work, you will need a Pinecone API key.

How to Get Your Pinecone API Key:

To get your Pinecone API key, follow these steps:

  1. Head over to the Pinecone Developer Console.
pinecone dashboard
  2. Select the API Keys tab, then click Create API Key. You can either use the default key or create a new one.
pinecone key tab
  3. Specify the name of the key and click Create key.
create API key

Once you’ve completed these steps, return to your .env file, and paste the key:

NEXT_PUBLIC_PINECONE_API_KEY=your-api-key

Integrating the CopilotKit Node.js Endpoint

The final step is to add the CopilotKit Node.js endpoint to handle requests from the frontend.

Before proceeding, you’ll need to set up the Anthropic API to make requests to their service. To do that:

  1. Create an Anthropic account by visiting the Anthropic AI documentation, and set up billing.
anthropic
  2. After logging into the Anthropic API console, generate your API key.
anthropic2
  3. Once you have the API key, add it to your .env file in the root of your project:

NEXT_PUBLIC_ANTHROPIC_API_KEY=your-api-key

Be sure to set up billing and quotas so your app can make API requests.

Remember, we added the CopilotKit API URL in the app’s client to allow it to forward requests to the CopilotKit backend for processing. To make this work, we need to define the CopilotKit API endpoint to manage and handle those requests. Let’s start by importing the following in the src/app/api/copilotkit/route.ts file:

import { CopilotRuntime, AnthropicAdapter, copilotRuntimeNextJSAppRouterEndpoint } from "@copilotkit/runtime";

import Anthropic from "@anthropic-ai/sdk";
import { NextRequest } from 'next/server'

The Copilot Runtime is the backend engine of CopilotKit, allowing apps to interact with LLMs. With it, you can define backend actions for your copilots, and there are a ton of tasks you could specify, including executing internal database calls and managing different processes and workflows. For this specific example, however, we’ll define an action that queries the Pinecone index for relevant articles based on the user query.

To do that, add the following code:

const runtime = new CopilotRuntime({
  actions: () => [
    {
      name: 'FetchKnowledgebaseArticles',
      description: 'Fetch relevant knowledge base articles based on a user query',
      parameters: [
        {
          name: 'query',
          type: 'string',
          description: 'The user query to run against the knowledge base index',
          required: true,
        },
      ],
      handler: async ({ query }: { query: string }) => {
        try {
          const queryEmbedding = await pinecone.inference.embed(
            model,
            [query],
            { inputType: 'query' }
          );
          const queryResponse = await pinecone
            .index(indexName)
            .namespace('knowledge-base-data-namespace')
            .query({
              topK: 3,
              vector: queryEmbedding[0]?.values || [],
              includeValues: false,
              includeMetadata: true,
            });
          return { articles: queryResponse?.matches || [] };
        } catch (error) {
          console.error('Error fetching knowledge base articles:', error);
          throw new Error('Failed to fetch knowledge base articles.');
        }
      },
    },
  ],
});

Let’s break down this action handler; it’s the core engine of the entire integration. The handler receives the query parameter passed from the Copilot client.

Using Pinecone’s query operation, the query is converted into a vector representation, which is then compared to the indexed vectors stored in Pinecone to identify the top three most relevant articles in the index. The result includes the vector values and metadata, which contain the actual matching data for the query.

Most product knowledge bases have a ton of articles; some cover similar ideas, while others are totally different. It’s therefore practical to return multiple relevant pieces of data matching the query. (You can adjust the topK value to control how many matching data objects are returned, based on your intended use case.)
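
As an optional refinement, you could flatten the raw Pinecone matches into plain snippets before returning them, so the LLM receives only the text and score rather than the full match objects. Here’s a sketch of an alternative return statement inside the handler’s try block:

// Optional: map Pinecone matches to plain article snippets for the LLM
const articles = (queryResponse?.matches ?? []).map((match) => ({
  id: match.id,
  score: match.score, // cosine similarity; higher means more relevant
  text: (match.metadata as { text?: string })?.text ?? "",
}));
return { articles };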

Finally, go ahead, and add the following code to define the CopilotKit endpoint as follows:

const anthropic = new Anthropic({ apiKey: ANTHROPIC_API_KEY });
const serviceAdapter = new AnthropicAdapter({ anthropic: anthropic as any });

export const POST = async (req: NextRequest) => {
  const { handleRequest } = copilotRuntimeNextJSAppRouterEndpoint({
    runtime,
    serviceAdapter,
    endpoint: '/api/copilotkit',
  });

  return handleRequest(req);
};

Make sure the endpoint path defined here (/api/copilotkit) matches the runtimeUrl you passed to the CopilotKit provider on the frontend.

To test the entire app, navigate to your main project directory in the terminal and run the following command:

npm run dev

Then, head over to localhost:3000 in your browser and ask a question like “What features does CopilotKit offer?” in the Sidebar input field.

Users can continue asking follow-up questions. And if you have a multitude of articles, having the Copilot retrieve the information users need, instead of making them sift through different pages, makes the process a whole lot easier.

For reference, or if you want to build on top of what we’ve covered in this article, you can clone this project’s source code from the GitHub repository.

Summary

In this guide, we have walked through the steps of building an Anthropic-powered Copilot for product knowledge bases using CopilotKit, the Anthropic API, and the Pinecone API.

While we’ve explored a couple of features, we have barely scratched the surface of the countless use cases for CopilotKit, ranging from building interactive AI chatbots to building agentic solutions—in essence, CopilotKit lets you add a ton of useful AI capabilities to your products in minutes.

If you’re looking to integrate AI-powered Copilots into your existing application, consider scheduling a demo, joining the Discord community of developers, and getting started with the docs to try it out yourself!
