Next.js AI Agent Integration: Full-Stack Tutorial
Why Next.js is Perfect for AI Applications
Next.js has become the ideal framework for AI-powered applications thanks to:
- Serverless API Routes: Easy backend for AI processing
- Edge Runtime: Low-latency AI responses
- Streaming Support: Real-time AI output
- React Integration: Seamless frontend AI features
- Vercel AI SDK: Purpose-built for AI apps
Project Setup
1. Create Next.js App with AI SDK
npx create-next-app@latest my-ai-app --typescript --tailwind --app
2. Install AI Dependencies
npm install ai openai-edge
3. Configure Environment Variables
Create a .env.local file in the project root:

OPENAI_API_KEY=sk-your-api-key-here
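A missing key otherwise only surfaces when the first request fails. A small guard can make it fail fast at startup instead. This is a generic sketch; requireEnv is not part of Next.js or the AI SDK:

```typescript
// Hypothetical helper: read a required environment variable or fail loudly.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example: validate the key once at module load.
// const apiKey = requireEnv('OPENAI_API_KEY');
```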
Building an AI Chatbot
API Route Setup
// app/api/chat/route.ts
import { OpenAIStream, StreamingTextResponse } from 'ai';
import { Configuration, OpenAIApi } from 'openai-edge';

const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(config);

// Run on the Edge runtime for low-latency streaming
export const runtime = 'edge';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Ask OpenAI for a streaming chat completion
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  // Convert the response into a text stream the client can consume
  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
Frontend Component
// components/Chat.tsx
'use client';
import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div className="max-w-2xl mx-auto p-4">
      <div className="space-y-4 mb-4">
        {messages.map(m => (
          <div key={m.id} className={m.role === 'user' ? 'text-right' : 'text-left'}>
            <span className={`inline-block p-3 rounded-lg ${
              m.role === 'user'
                ? 'bg-blue-500 text-white'
                : 'bg-gray-100 text-gray-800'
            }`}>
              {m.content}
            </span>
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask something..."
          className="flex-1 p-3 border rounded-lg"
        />
        <button type="submit" className="px-6 py-3 bg-blue-500 text-white rounded-lg">
          Send
        </button>
      </form>
    </div>
  );
}
Advanced AI Features
1. Streaming Responses
Streaming output token by token means users see a response immediately instead of waiting for the full completion. The useChat hook exposes lifecycle callbacks for each stage:
const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
  api: '/api/chat',
  onResponse: (response) => {
    console.log('Started streaming:', response);
  },
  onFinish: (message) => {
    console.log('Finished streaming:', message);
  },
});
2. Custom System Prompts
Tailor AI behavior for specific use cases:
const systemPrompt = `You are a helpful customer service AI for a SaaS company.
Rules:
- Be concise and professional
- Ask clarifying questions when needed
- Escalate technical issues to humans
- Never make up information about pricing`;
const response = await openai.createChatCompletion({
  model: 'gpt-4',
  messages: [
    { role: 'system', content: systemPrompt },
    ...messages,
  ],
});
3. Tool Integration
Connect AI to your business systems:
// AI can call your business functions
const tools = {
  checkOrderStatus: async (orderId: string) => {
    const order = await db.orders.findById(orderId);
    return order?.status || 'Not found';
  },
  scheduleDemo: async (email: string, date: string) => {
    const event = await calendar.create({
      summary: 'Product Demo',
      attendees: [email],
      start: date,
    });
    return event.link;
  },
};
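These tools still need to be wired to the model. With OpenAI function calling, the model replies with a function name plus a JSON string of arguments, and your code dispatches the call. The sketch below covers only the dispatch step, with the two tools stubbed in memory so it runs standalone (the real db and calendar clients are assumed elsewhere):

```typescript
// A function-calling response arrives as { name, arguments: '<JSON string>' }.
type ToolCall = { name: string; arguments: string };
type ToolFn = (...args: string[]) => Promise<string>;

// Stubbed versions of the tools above; the real ones hit your database/calendar.
const tools: Record<string, ToolFn> = {
  checkOrderStatus: async (orderId) =>
    orderId === '1001' ? 'shipped' : 'Not found',
  scheduleDemo: async (email, date) =>
    `https://calendar.example.com/demo?attendee=${email}&start=${date}`,
};

// Look up the requested tool, parse its JSON arguments, and invoke it.
async function dispatchToolCall(call: ToolCall): Promise<string> {
  const fn = tools[call.name];
  if (!fn) return `Unknown tool: ${call.name}`;
  const args = Object.values(JSON.parse(call.arguments)).map(String);
  return fn(...args);
}
```

The tool's return value is then appended to the conversation as a function/tool message so the model can compose its final answer from the real data.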
Production Considerations
Error Handling
try {
  const response = await openai.createChatCompletion({...});
  // Wrap the response before streaming it back to the client
  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
} catch (error) {
  console.error('OpenAI Error:', error);
  return new Response('AI service temporarily unavailable', {
    status: 503,
  });
}
Rate Limiting
Protect your API from abuse:
import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, '1 m'),
});

export async function POST(req: Request) {
  // A plain Request has no ip field; read the forwarded address header instead
  const ip = req.headers.get('x-forwarded-for') ?? '127.0.0.1';
  const { success } = await ratelimit.limit(ip);

  if (!success) {
    return new Response('Too many requests', { status: 429 });
  }

  // Process AI request
}
Cost Optimization
- Use GPT-3.5 for simple queries
- Cache common responses
- Implement request queues
- Monitor usage daily
Deployment
Deploy to Vercel with one command:
vercel --prod
Your app is now live, with AI inference running at the edge close to your users.
Conclusion
Next.js + AI is a powerful combination for modern web applications. Whether you're building chatbots, content generators, or intelligent automation, this stack delivers.
Next Steps:
- Add authentication with NextAuth.js
- Implement persistent chat history
- Build custom AI tools
- Add file upload capabilities
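For persistent chat history, one approach is to save each finished message from useChat's onFinish callback to an API route backed by a store. The ChatStore interface below is hypothetical; the in-memory version just shows the shape a database-backed implementation would fill in:

```typescript
type StoredMessage = { id: string; role: 'user' | 'assistant'; content: string };

// Hypothetical persistence interface; swap in Postgres, Redis, etc.
interface ChatStore {
  append(chatId: string, message: StoredMessage): Promise<void>;
  load(chatId: string): Promise<StoredMessage[]>;
}

// In-memory reference implementation for local development.
function createMemoryStore(): ChatStore {
  const chats = new Map<string, StoredMessage[]>();
  return {
    async append(chatId, message) {
      const history = chats.get(chatId) ?? [];
      history.push(message);
      chats.set(chatId, history);
    },
    async load(chatId) {
      return chats.get(chatId) ?? [];
    },
  };
}
```

On the client, useChat({ onFinish: (message) => fetch('/api/history', { method: 'POST', body: JSON.stringify(message) }) }) would forward each completed message to a route that calls store.append.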
Need help building your Next.js AI app? I offer development services.
Frequently Asked Questions
Do I need to be a Next.js expert to add AI features?
Basic Next.js knowledge is sufficient. I guide you through every step, from API routes setup to frontend integration. Even intermediate React developers can follow along and build impressive AI features.
What's the best AI library for Next.js?
For most use cases, the OpenAI SDK is the simplest. For complex agent workflows, LangChain is excellent. Vercel AI SDK is also great for streaming responses. I typically use a combination based on project needs.
How do I handle AI API costs in production?
I recommend implementing request caching, using GPT-3.5 for simple tasks (roughly 10x cheaper than GPT-4), and setting up usage monitoring. Most small-to-medium apps spend $50-200/month on AI APIs with proper optimization.