The Vercel AI SDK makes it simple to generate text, stream responses, and create structured outputs.

Setup in 3 Steps

1. Install Dependencies

Add the AI SDK and providers:
Terminal
npm install ai @ai-sdk/openai @ai-sdk/anthropic zod
2. Get API Keys

Grab your credentials.

OpenAI:
  1. Go to platform.openai.com
  2. Navigate to API keys
  3. Click Create new secret key
Anthropic:
  1. Go to console.anthropic.com
  2. Navigate to API Keys
  3. Click Create Key
Add to .env.local:
.env.local
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
Never commit API keys to git. Keep them in .env.local only.
3. You're Ready!

All utilities are in /lib/ai.ts and ready to use.
Import and start generating AI content immediately.
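For reference, here is a minimal sketch of what these utilities might look like under the hood, built on the AI SDK core functions (generateText, streamText, generateObject). This is an illustrative sketch, not the actual contents of /lib/ai.ts; model IDs and defaults are assumptions:

```typescript
// Hypothetical sketch of /lib/ai.ts, built on AI SDK core functions.
import { generateText, streamText, generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import type { CoreMessage, LanguageModel } from 'ai';
import type { z } from 'zod';

export const models = {
  gpt4: openai('gpt-4-turbo'),
  gpt35: openai('gpt-3.5-turbo'),
  claude: anthropic('claude-3-5-sonnet-latest'),
  claudeHaiku: anthropic('claude-3-5-haiku-latest'),
};

// Generate a full text completion with an optional model override.
export async function generateAI(prompt: string, model: LanguageModel = models.claude) {
  const { text } = await generateText({ model, prompt });
  return text;
}

// Stream a completion; the result exposes textStream and toDataStreamResponse().
export async function streamAI(prompt: string, model: LanguageModel = models.claude) {
  return streamText({ model, prompt });
}

// Generate an object validated against a Zod schema.
export async function generateStructured<T>(
  prompt: string,
  schema: z.ZodType<T>,
  model: LanguageModel = models.claude
) {
  const { object } = await generateObject({ model, schema, prompt });
  return object;
}

// Multi-turn chat over an array of role/content messages.
export async function chat(messages: CoreMessage[], model: LanguageModel = models.claude) {
  return streamText({ model, messages });
}
```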

Available Models

Choose the right model for your task:
import { models } from '@/lib/ai';

models.gpt4          // GPT-4 Turbo - Best for complex reasoning
models.gpt35         // GPT-3.5 Turbo - Fast and affordable
models.claude        // Claude 3.5 Sonnet - Balanced performance
models.claudeHaiku   // Claude 3.5 Haiku - Lightning fast
Claude models are faster and cheaper for most tasks. Use GPT-4 for complex reasoning.

Generate Text

Simple Generation

Basic text generation:
import { generateAI } from '@/lib/ai';

const text = await generateAI("Write a tagline for a SaaS product");
console.log(text);
// "Ship your MVP in days, not months"

Custom Model

Choose a specific model:
import { generateAI, models } from '@/lib/ai';

const text = await generateAI(
  "Explain React hooks in simple terms",
  models.claude
);

API Route Example

app/api/generate/route.ts
import { generateAI } from '@/lib/ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();
  const text = await generateAI(prompt);
  return Response.json({ text });
}

Stream Responses

Real-time streaming for chat interfaces:
import { streamAI } from '@/lib/ai';

const stream = await streamAI("Write a blog post about AI");

for await (const chunk of stream.textStream) {
  console.log(chunk);
}

In API Routes

Perfect for chat applications:
app/api/chat/route.ts
import { streamAI } from '@/lib/ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();
  const stream = await streamAI(prompt);
  
  return stream.toDataStreamResponse();
}
Streaming responses improve UX by showing content as it’s generated, like ChatGPT.
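On the client, the streamed response can be consumed with the useChat hook from ai/react, which posts to /api/chat by default. A minimal sketch (component structure is illustrative):

```typescript
'use client';

import { useChat } from 'ai/react';

// Minimal chat UI wired to the streaming /api/chat route above.
export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Say something..." />
      </form>
    </div>
  );
}
```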

Structured Output

Generate typed objects with Zod schemas:
import { generateStructured } from '@/lib/ai';
import { z } from 'zod';

const productSchema = z.object({
  title: z.string(),
  description: z.string(),
  price: z.number(),
  features: z.array(z.string()),
});

const product = await generateStructured(
  "Generate a product idea for developers",
  productSchema
);

console.log(product.title);      // Fully typed!
console.log(product.features);   // string[]

Complex Schemas

Build sophisticated structured data:
const blogSchema = z.object({
  title: z.string(),
  slug: z.string(),
  excerpt: z.string(),
  content: z.string(),
  tags: z.array(z.string()),
  seo: z.object({
    metaTitle: z.string(),
    metaDescription: z.string(),
  }),
});

const blog = await generateStructured(
  "Write a blog post about Next.js 15",
  blogSchema,
  models.gpt4
);
Use structured output for form generation, data extraction, or content creation.

Chat Conversations

Multi-turn conversations with context:
import { chat } from '@/lib/ai';

const response = await chat([
  { role: 'user', content: 'What is Next.js?' },
  { role: 'assistant', content: 'Next.js is a React framework...' },
  { role: 'user', content: 'How do I deploy it?' },
]);

for await (const chunk of response.textStream) {
  console.log(chunk);
}

Chat API Route

app/api/chat/route.ts
import { chat } from '@/lib/ai';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const response = await chat(messages);
  
  return response.toDataStreamResponse();
}

Edge Runtime

All functions work in Edge Runtime for faster responses:
app/api/generate/route.ts
import { generateAI } from '@/lib/ai';

export const runtime = 'edge';

export async function POST(req: Request) {
  const { prompt } = await req.json();
  const text = await generateAI(prompt);
  return Response.json({ text });
}
Edge runtime deploys globally for low-latency AI responses worldwide.

Error Handling

Always handle AI errors gracefully:
export async function POST(req: Request) {
  try {
    const text = await generateAI("Generate content");
    return Response.json({ text });
  } catch (error) {
    console.error("AI generation failed:", error);
    return Response.json(
      { error: "Failed to generate content" },
      { status: 500 }
    );
  }
}

Cost Optimization

Choose the right model:
  • Claude Haiku: Cheapest, fastest (simple tasks)
  • GPT-3.5 Turbo: Affordable, good quality
  • Claude Sonnet: Balanced (recommended)
  • GPT-4: Most expensive (complex reasoning only)
Reduce token usage:
  • Keep prompts concise
  • Use structured output instead of parsing text
  • Stream instead of generating full responses when possible
  • Set lower maxTokens limits
Cache responses for repeated prompts:
const cache = new Map<string, string>();

async function cachedGenerate(prompt: string) {
  const cached = cache.get(prompt);
  if (cached) return cached;

  const text = await generateAI(prompt);
  cache.set(prompt, text);
  return text;
}

Troubleshooting

API key not working? Check:
  • Key is in .env.local (not .env)
  • Correct prefix (sk- for OpenAI, sk-ant- for Anthropic)
  • Restart dev server after adding keys
  • No extra spaces in .env.local
Streaming not working? Make sure:
  • Using stream.toDataStreamResponse() in API routes
  • Client uses useChat hook from ai/react
  • Not calling .json() on streamed responses
Structured output failing? Common issues:
  • Schema too complex (simplify it)
  • Model hallucinating invalid data (add examples)
  • Missing required fields (make them optional)
Hitting rate limits? Solutions:
  • Add retry logic with exponential backoff
  • Implement request queuing
  • Upgrade API tier for higher limits
  • Use Claude (higher rate limits)
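The retry-with-exponential-backoff suggestion above can be sketched as a small generic helper (withRetry is a hypothetical name, not part of /lib/ai.ts):

```typescript
// Retry an async operation with exponential backoff.
// Hypothetical helper -- not part of /lib/ai.ts.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait 500ms, 1000ms, 2000ms, ... before the next attempt.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Usage: `const text = await withRetry(() => generateAI(prompt));`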

Build AI-powered features in minutes 🤖