How to Build an AI Agent Using Next.js and Ollama

April 3, 2025


Developers often struggle with understanding complex code, especially when working with new programming concepts. Wouldn't it be great to have an AI-powered assistant that can analyze, execute, and explain any piece of code?

Introduction

In this step-by-step tutorial, we will build an AI Code Explainer using:

✔️ Next.js – A powerful React framework.
✔️ Ollama – A local AI runtime for running Large Language Models (LLMs).
✔️ LLaVA 13B – A multimodal LLM that we'll run locally and prompt for code explanations.
✔️ Monaco Editor – A VS Code-like experience in the browser.
✔️ Tailwind CSS – For a modern, responsive UI.

By the end of this guide, you'll have a fully functional AI-powered code explainer that can break down complex code step-by-step—just like a real mentor!

💡 The entire source code is available on GitHub.

Why Build an AI Code Explainer?

TL;DR: AI-driven code explanation can help junior developers learn faster and reduce debugging time.

💡 Problems Developers Face:

  • Complex code logic is hard to understand.
  • AI tools like ChatGPT and GitHub Copilot suggest code but don’t explain it well.
  • Debugging can take hours without proper understanding.

Solution? Our AI-powered Code Explainer can:

  • Break down code into step-by-step explanations.
  • Run static analysis to find potential errors.
  • Execute the code in a sandboxed environment for better understanding.

Feeding the model real execution output and lint results gives it far more context than a bare "explain this code" prompt! 🚀
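The three capabilities above map naturally onto a single JSON payload returned by our API. Here is a minimal sketch of that shape as a TypeScript type (the field names are this tutorial's own choice, not a standard):

```typescript
// Hypothetical response shape for the explainer API: one field per
// pipeline stage (execution, static analysis, AI explanation).
interface ExplainResponse {
  executionResult: string; // stdout/stderr from the sandboxed run
  lintReport: string;      // raw linter output
  explanation: string;     // step-by-step text from the LLM
}

// Example of what a populated response might look like:
const example: ExplainResponse = {
  executionResult: "Output: 42\n",
  lintReport: "Lint Report: []",
  explanation: "1. The function adds two numbers...",
};
```

Keeping each stage in its own field lets the frontend render execution output, lint findings, and the explanation as separate cards.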

Step 1: Setting Up the AI Tech Stack

Before we begin coding, let's set up our environment.

🛠️ Tools & Technologies Used

| Technology | Purpose |
|------------|---------|
| Next.js | Handles the frontend and API routes. |
| Ollama | Runs AI models locally. |
| LLaVA 13B | The model used for code explanation. |
| Monaco Editor | Provides a coding environment inside the app. |
| Tailwind CSS | Modern, responsive UI styling. |

Step 2: Install & Configure Ollama

Ollama allows us to run powerful AI models locally without needing an external API like OpenAI.

Install Ollama

🔹 Mac & Linux users:

curl -fsSL https://ollama.com/install.sh | sh

🔹 Windows users: download the installer from ollama.com.

Download & Run LLava 13B Model

ollama pull llava:13b
ollama serve

This starts a local AI server that our Next.js app will communicate with! 🚀
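Before wiring up the Next.js app, you can sanity-check the server from a quick script. The sketch below builds the JSON body that Ollama's `/api/chat` REST endpoint expects (the default port 11434 and the route are Ollama's; the helper names are mine):

```typescript
const OLLAMA_URL = "http://localhost:11434";

// Build the JSON body Ollama's /api/chat endpoint expects.
function buildChatRequest(model: string, prompt: string) {
  return {
    model,
    messages: [{ role: "user" as const, content: prompt }],
    stream: false, // ask for one complete response instead of a token stream
  };
}

// Fire a one-off request to verify the local server responds.
async function smokeTest() {
  const res = await fetch(`${OLLAMA_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest("llava:13b", "Say hello in one word.")),
  });
  const data = await res.json();
  console.log(data.message?.content);
}
```

If `smokeTest()` prints a greeting, the model is downloaded and the server is reachable.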

Step 3: Clone the GitHub Repository

Instead of building everything from scratch, you can clone the entire project from GitHub:

git clone your-github-repo-link
cd ai-code-explainer
npm install

If you'd rather build it step by step, continue following the tutorial. 🚀

Step 4: Build the AI API Route

Create a new file /app/api/explain/route.ts and add:

import { NextRequest, NextResponse } from "next/server";
import { Ollama } from "ollama";
import { exec } from "child_process";
import util from "util";

const ollama = new Ollama();
const execPromise = util.promisify(exec);

export async function POST(req: NextRequest) {
  try {
    const { code, language } = await req.json();
    if (!code || !language) return NextResponse.json({ error: "Code and language are required." }, { status: 400 });

    const executionResult = await runCodeSandbox(code, language);

    const lintReport = await fetchLinterReport(code, language);

    const explanation = await generateAIExplanation(code, language, executionResult, lintReport);

    return NextResponse.json({ executionResult, lintReport, explanation });
  } catch (error) {
    console.error("Error in AI explanation:", error);
    return NextResponse.json({ error: "Failed to process request." }, { status: 500 });
  }
}

async function runCodeSandbox(code: string, language: string): Promise<string> {
  try {
    // NOTE: escaping only double quotes is not a real sandbox; backticks and
    // $(...) can still break out of the shell string. Acceptable for a local
    // demo, but use a container or execFile with argv for anything public.
    let command = "";
    if (language === "javascript") command = `node -e "${code.replace(/"/g, '\\"')}"`;
    if (language === "python") command = `python3 -c "${code.replace(/"/g, '\\"')}"`;
    if (!command) return "Execution not supported for this language.";

    // Kill runaway snippets after 5 seconds.
    const { stdout, stderr } = await execPromise(command, { timeout: 5000 });
    return stderr ? `Error: ${stderr}` : `Output: ${stdout}`;
  } catch (error) {
    return `Execution failed. ${error}`;
  }
}

async function fetchLinterReport(code: string, language: string): Promise<string> {
  // Assumes `eslint` and `pylint` are installed and on the PATH.
  try {
    let command = "";
    if (language === "javascript") command = `echo "${code.replace(/"/g, '\\"')}" | eslint --stdin --format json`;
    // pylint's --from-stdin flag requires a module name for the report; "snippet.py" is arbitrary.
    if (language === "python") command = `echo "${code.replace(/"/g, '\\"')}" | pylint --from-stdin snippet.py`;
    if (!command) return "Linting not supported for this language.";

    const { stdout, stderr } = await execPromise(command, { timeout: 5000 });
    return stderr ? `Lint Error: ${stderr}` : `Lint Report: ${stdout}`;
  } catch (error) {
    return `Linting failed. ${error}`;
  }
}

async function generateAIExplanation(code: string, language: string, executionResult: string, lintReport: string): Promise<string> {
  const prompt = `
    Explain this ${language} code step by step for a new developer.
    - Execution Result: ${executionResult}
    - Linter Report: ${lintReport}
    - Code:\n\`\`\`${language}\n${code}\n\`\`\`
  `;
  const response = await ollama.chat({ model: "llava:13b", messages: [{ role: "user", content: prompt }] });
  return response.message.content.trim();
}

This API:

  • Sends the code to LLaVA 13B for analysis.
  • Queries Ollama's local AI server instead of a cloud API.
  • Returns structured results (execution output, lint report, and explanation) instead of a single blob of text.
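One caveat worth repeating: interpolating user code into a shell string, as `runCodeSandbox` does, is only acceptable for a local demo, since quotes and backticks can escape the command. A safer variant, sketched below under the assumption that `node` is on the PATH, passes the code as an argument via `execFile` so no shell parsing happens at all:

```typescript
import { execFile } from "child_process";
import { promisify } from "util";

const execFileP = promisify(execFile);

// Run a JavaScript snippet with `node -e`, passing the code via argv
// instead of a shell string, so shell metacharacters in the code are inert.
async function runNodeSafely(code: string): Promise<string> {
  try {
    const { stdout, stderr } = await execFileP("node", ["-e", code], {
      timeout: 5000, // kill runaway snippets
    });
    return stderr ? `Error: ${stderr}` : `Output: ${stdout}`;
  } catch (error) {
    return `Execution failed. ${error}`;
  }
}
```

The same pattern works for Python with `execFile("python3", ["-c", code])`. For anything deployed publicly, run snippets in a container or a dedicated sandbox service instead.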

Step 5: Build the Frontend UI

Inside /components/CodeExplainer.tsx, add:

"use client";

import { useRef, useState } from "react";
import Editor from "@monaco-editor/react";
import { Button } from "./ui/Button";
import { Card, CardContent } from "./ui/Card";
import { Select, SelectItem } from "./ui/Select";

export default function CodeExplainer() {
    const [code, setCode] = useState("");
    const [language, setLanguage] = useState("javascript");
    const [loading, setLoading] = useState(false);
    const [executionResult, setExecutionResult] = useState("");
    const [lintReport, setLintReport] = useState("");
    const [explanation, setExplanation] = useState("");
    const resultRef = useRef<HTMLDivElement>(null);

    async function submitCode() {
        setLoading(true);
        setExecutionResult("");
        setLintReport("");
        setExplanation("");

        try {
            const res = await fetch("/api/explain", {
                method: "POST",
                headers: { "Content-Type": "application/json" },
                body: JSON.stringify({ code, language }),
            });
            if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
            const data = await res.json();

            setExecutionResult(data.executionResult || "No execution result.");
            setLintReport(data.lintReport || "No lint report.");
            setExplanation(data.explanation || "No explanation available.");

            // Wait for the result cards to render before scrolling to them.
            setTimeout(() => {
                resultRef.current?.scrollIntoView({ behavior: "smooth", block: "start" });
            }, 200);
        } catch (error) {
            setExplanation(`Error processing request. ${error}`);
        } finally {
            setLoading(false);
        }
    }

    return (
        <div className="flex flex-col items-center p-6 space-y-6 bg-gray-900 min-h-screen text-white">
            <h1 className="text-2xl font-bold">AI Code Explainer</h1>
            <h2 className="text-xl font-bold">Breaking down complex code into simple terms</h2>

            <p className="text-sm mt-10 font-bold">Select language</p>

            <Select value={language} onChange={(e) => setLanguage(e.target.value)} className="w-64">
                <SelectItem value="javascript">JavaScript</SelectItem>
                <SelectItem value="python">Python</SelectItem>
            </Select>

            <Card className="w-full max-w-4xl">
                <CardContent className="p-4">
                    <Editor
                        height="300px"
                        language={language}
                        theme="vs-dark"
                        value={code}
                        onChange={(value) => setCode(value || "")}
                    />
                </CardContent>
            </Card>

            <Button onClick={submitCode} disabled={loading} className="bg-blue-500 hover:bg-blue-600">
                {loading ? "Processing..." : "Submit Code"}
            </Button>

            {executionResult && (
                <div ref={resultRef} className="w-full max-w-4xl">
                    <Card>
                        <CardContent className="p-4 bg-gray-800">
                            <h2 className="text-lg font-bold text-green-400">Execution Result</h2>
                            <pre className="whitespace-pre-wrap">{executionResult}</pre>
                        </CardContent>
                    </Card>
                </div>
            )}

            {lintReport && (
                <Card className="w-full max-w-4xl">
                    <CardContent className="p-4 bg-gray-800">
                        <h2 className="text-lg font-bold text-yellow-400">Lint Report</h2>
                        <pre className="whitespace-pre-wrap">{lintReport}</pre>
                    </CardContent>
                </Card>
            )}

            {explanation && (
                <Card className="w-full max-w-4xl">
                    <CardContent className="p-4 bg-gray-800">
                        <h2 className="text-lg font-bold text-blue-400">AI Explanation</h2>
                        <pre className="whitespace-pre-wrap">{explanation}</pre>
                    </CardContent>
                </Card>
            )}
        </div>
    );
}
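The `submitCode` handler above mixes the network call with React state updates. If you want to unit-test the API interaction, one option is to factor the fetch into a standalone helper like the sketch below (the function name and the injectable `fetchImpl` parameter are my own additions; the endpoint and fallback strings match the component):

```typescript
type ExplainResult = {
  executionResult: string;
  lintReport: string;
  explanation: string;
};

// Post code to the explain endpoint and normalize missing fields so the
// UI always receives strings. `fetchImpl` is injectable for testing.
async function requestExplanation(
  code: string,
  language: string,
  fetchImpl: typeof fetch = fetch
): Promise<ExplainResult> {
  const res = await fetchImpl("/api/explain", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ code, language }),
  });
  const data = await res.json();
  return {
    executionResult: data.executionResult || "No execution result.",
    lintReport: data.lintReport || "No lint report.",
    explanation: data.explanation || "No explanation available.",
  };
}
```

The component's `submitCode` would then shrink to a call to this helper followed by three `setState` calls, and tests can pass a fake `fetchImpl` instead of spinning up a server.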

📢 Want More AI Projects?


📌 You can get the complete project code from my GitHub repository.

Get in Touch

Want to collaborate or just say hi? Reach out!