The Mem0 AI SDK Provider is a library developed by Mem0 to integrate with the Vercel AI SDK. This library brings enhanced AI interaction capabilities to your applications by introducing persistent memory functionality.
Mem0 AI SDK now supports Vercel AI SDK V5.
Overview
- Offers persistent memory storage for conversational AI
- Enables smooth integration with the Vercel AI SDK
- Ensures compatibility with multiple LLM providers
- Supports structured message formats for clarity
- Facilitates streaming response capabilities
Setup and Configuration
Install the SDK provider using npm:
```bash
npm install @mem0/vercel-ai-provider
```
Getting Started
Setting Up Mem0
1. Get your Mem0 API Key from the Mem0 Dashboard.

2. Initialize the Mem0 Client in your application:
```typescript
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0({
  provider: "openai",
  mem0ApiKey: "m0-xxx",
  apiKey: "provider-api-key",
  config: {
    // Options for LLM Provider
  },
  // Optional Mem0 Global Config
  mem0Config: {
    user_id: "mem0-user-id",
  },
});
```
Note: The `openai` provider is set as default. Consider using `MEM0_API_KEY` and `OPENAI_API_KEY` as environment variables for security.

Note: `mem0Config` is optional. It sets the global config for the Mem0 Client (e.g. `user_id`, `agent_id`, `app_id`, `run_id`, `org_id`, `project_id`, etc.).
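With those environment variables exported, the keys can be dropped from the call entirely, as the later examples do. A minimal sketch (the `user_id` value is a placeholder):

```typescript
import { createMem0 } from "@mem0/vercel-ai-provider";

// Keys are picked up from MEM0_API_KEY and OPENAI_API_KEY in the environment.
const mem0 = createMem0({
  provider: "openai", // the default; shown here for clarity
  mem0Config: {
    user_id: "mem0-user-id", // placeholder
  },
});
```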
3. Add Memories to Enhance Context:
```typescript
import { LanguageModelV2Prompt } from "@ai-sdk/provider";
import { addMemories } from "@mem0/vercel-ai-provider";

const messages: LanguageModelV2Prompt = [
  { role: "user", content: [{ type: "text", text: "I love red cars." }] },
];

await addMemories(messages, { user_id: "borat" });
```
Standalone Features:

```typescript
await addMemories(messages, { user_id: "borat", mem0ApiKey: "m0-xxx" });
await retrieveMemories(prompt, { user_id: "borat", mem0ApiKey: "m0-xxx" });
await getMemories(prompt, { user_id: "borat", mem0ApiKey: "m0-xxx" });
```
For standalone features such as `addMemories`, `retrieveMemories`, and `getMemories`, you must either set `MEM0_API_KEY` as an environment variable or pass it directly in the function call.

`getMemories` returns raw memories as an array of objects, while `retrieveMemories` returns a string: a system prompt with the retrieved memories ingested into it. When `enable_graph` is enabled, `getMemories` instead returns an object with two keys, `results` and `relations`; otherwise it returns an array of objects.
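A short sketch of consuming the two return shapes (with `enable_graph` off, so `getMemories` returns a plain array; the `memory` field name on each result is an assumption for illustration):

```typescript
import { getMemories, retrieveMemories } from "@mem0/vercel-ai-provider";

const prompt = "Suggest me a good car to buy.";

// Raw memories: an array of objects (field name assumed for illustration).
const rawMemories = await getMemories(prompt, { user_id: "borat" });
for (const m of rawMemories) {
  console.log(m.memory);
}

// A ready-made system prompt string with the memories ingested into it.
const systemPrompt = await retrieveMemories(prompt, { user_id: "borat" });
```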
1. Basic Text Generation with Memory Context
```typescript
import { generateText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const { text } = await generateText({
  model: mem0("gpt-4-turbo", { user_id: "borat" }),
  prompt: "Suggest me a good car to buy!",
});
```
2. Combining OpenAI Provider with Memory Utils
```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { retrieveMemories } from "@mem0/vercel-ai-provider";

const prompt = "Suggest me a good car to buy.";

const memories = await retrieveMemories(prompt, { user_id: "borat" });

const { text } = await generateText({
  model: openai("gpt-4-turbo"),
  prompt: prompt,
  system: memories,
});
```
You can also pass structured messages to the Mem0 provider directly:

```typescript
import { generateText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const { text } = await generateText({
  model: mem0("gpt-4-turbo", { user_id: "borat" }),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Suggest me a good car to buy." },
        { type: "text", text: "Why is it better than the other cars for me?" },
      ],
    },
  ],
});
```
3. Streaming Responses with Memory Context
```typescript
import { streamText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const { textStream } = streamText({
  model: mem0("gpt-4-turbo", {
    user_id: "borat",
  }),
  prompt:
    "Suggest me a good car to buy! Why is it better than the other cars for me? Give options for every price range.",
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```
4. Tool Calls with Memory Context

```typescript
import { generateText, tool } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";
import { z } from "zod";

const mem0 = createMem0({
  provider: "anthropic",
  apiKey: "anthropic-api-key",
  mem0Config: {
    // Global User ID
    user_id: "borat",
  },
});

const prompt = "What is the temperature in the city that I live in?";

const result = await generateText({
  model: mem0("claude-3-5-sonnet-20240620"),
  tools: {
    weather: tool({
      description: "Get the weather in a location",
      inputSchema: z.object({
        location: z.string().describe("The location to get the weather for"),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  prompt: prompt,
});

console.log(result);
```
5. Get Sources from Memory

The memories the Mem0 provider used for a response are surfaced through the result's `sources` field:
```typescript
import { generateText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();
const { text, sources } = await generateText({
  model: mem0("gpt-4-turbo"),
  prompt: "Suggest me a good car to buy!",
});
console.log(sources);
```
The same can be done for `streamText` as well.
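A sketch of the `streamText` variant, assuming the result exposes `sources` as a promise that resolves once the stream finishes (as in AI SDK v5):

```typescript
import { streamText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const result = streamText({
  model: mem0("gpt-4-turbo"),
  prompt: "Suggest me a good car to buy!",
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

// Assumption: `sources` resolves after the stream completes.
console.log(await result.sources);
```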
6. File Support with Memory Context
Mem0 AI SDK supports file processing with memory context. Here’s an example of analyzing a PDF file:
```typescript
import { streamText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";
import { readFileSync } from 'fs';
import { join } from 'path';

const mem0 = createMem0({
  provider: "google",
  mem0ApiKey: "m0-xxx",
  config: {
    apiKey: "google-api-key",
  },
  mem0Config: {
    user_id: "alice",
  },
});

async function main() {
  // Read the PDF file
  const filePath = join(process.cwd(), 'my_pdf.pdf');
  const fileBuffer = readFileSync(filePath);

  // Convert the file to a Base64 data URL
  const base64Data = fileBuffer.toString('base64');
  const fileDataUrl = `data:application/pdf;base64,${base64Data}`;

  const { textStream } = streamText({
    model: mem0("gemini-2.5-flash"),
    messages: [
      {
        role: 'user',
        content: [
          {
            type: 'text',
            text: 'Analyze the following PDF and generate a summary.',
          },
          {
            type: 'file',
            data: fileDataUrl,
            mediaType: 'application/pdf',
          },
        ],
      },
    ],
  });

  for await (const textPart of textStream) {
    process.stdout.write(textPart);
  }
}

main();
```
Note: File support is available with providers that support multimodal capabilities like Google’s Gemini models. The example shows how to process PDF files, but you can also work with images, text files, and other supported formats.
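A sketch of the same pattern with an image instead of a PDF (a vision-capable provider is assumed; in AI SDK v5 an image part accepts a raw `Buffer` directly, so no data URL is needed):

```typescript
import { streamText } from "ai";
import { readFileSync } from "fs";
// Reuses the `mem0` instance configured in the example above.

const { textStream } = streamText({
  model: mem0("gemini-2.5-flash"),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe this image." },
        // AI SDK v5 image parts accept a Buffer/Uint8Array directly.
        { type: "image", image: readFileSync("photo.jpg") },
      ],
    },
  ],
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```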
Graph Memory
Mem0 AI SDK now supports Graph Memory. You can enable it by setting `enable_graph` to `true` in the `mem0Config` object.
```typescript
const mem0 = createMem0({
  mem0Config: { enable_graph: true },
});
```
You can also pass `enable_graph` in the standalone functions: `getMemories`, `retrieveMemories`, and `addMemories`.
```typescript
const memories = await getMemories(prompt, { user_id: "borat", mem0ApiKey: "m0-xxx", enable_graph: true });
```
With `enable_graph` set to `true`, the `getMemories` function returns an object with two keys: `results` and `relations`. Otherwise, it returns an array of objects.
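A short sketch of handling the graph-enabled return shape (the contents of `results` and `relations` are assumptions for illustration):

```typescript
const graphMemories = await getMemories(prompt, {
  user_id: "borat",
  enable_graph: true,
});

// `results` holds the raw memory objects; `relations` holds graph
// relations between extracted entities (shapes assumed for illustration).
console.log(graphMemories.results);
console.log(graphMemories.relations);
```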
Supported LLM Providers
| Provider | Configuration Value |
|---|---|
| OpenAI | `openai` |
| Anthropic | `anthropic` |
| Google | `google` |
| Groq | `groq` |
Note: Use `google` as the provider for Gemini (Google) models; they are the same, and internally the provider uses the `@ai-sdk/google` package.
Key Features
- `createMem0()`: Initializes a new Mem0 provider instance.
- `retrieveMemories()`: Retrieves memory context for prompts.
- `getMemories()`: Gets memories from your profile in array format.
- `addMemories()`: Adds user memories to enhance contextual responses.
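To tie these together, a minimal sketch (assuming `MEM0_API_KEY` and `OPENAI_API_KEY` are exported): store a memory, then generate a response that can draw on it.

```typescript
import { generateText } from "ai";
import { createMem0, addMemories } from "@mem0/vercel-ai-provider";

// Store a preference as a memory.
await addMemories(
  [{ role: "user", content: [{ type: "text", text: "I prefer electric cars." }] }],
  { user_id: "borat" }
);

// Later, generate with that memory in context.
const mem0 = createMem0();
const { text } = await generateText({
  model: mem0("gpt-4-turbo", { user_id: "borat" }),
  prompt: "Suggest me a good car to buy!",
});
```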
Best Practices
- User Identification: Use a unique `user_id` for consistent memory retrieval.
- Memory Cleanup: Regularly clean up unused memory data.

Note: We also support `agent_id`, `app_id`, and `run_id`. Refer to the docs.
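A sketch of scoping memories with these identifiers via the global config (the identifier values below are placeholders):

```typescript
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0({
  mem0Config: {
    user_id: "borat",
    agent_id: "support-agent", // placeholder
    run_id: "session-42",      // placeholder
  },
});
```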
Conclusion
Mem0’s Vercel AI SDK provider enables the creation of intelligent, context-aware applications with persistent memory and seamless integration.