⚡ Ollama + React Frontend Template: Build a Modern AI Chatbot with DeepSeek Models
In the evolving AI developer ecosystem of 2025, tools like Ollama and React have emerged as game-changers for local and lightweight AI applications. Whether you're looking to build a fully offline chatbot, prototype an agent interface, or create a production-grade frontend for DeepSeek, this guide will show you how to connect a React UI to an Ollama-powered backend running DeepSeek R1, Coder, or any GGUF-based LLM.
✅ Table of Contents
1. Introduction to Ollama and React
2. Why Use DeepSeek + Ollama + React
3. Project Architecture Overview
4. Installing and Running Ollama Locally
5. Installing DeepSeek Models in Ollama
6. Creating the React Frontend (Vite + Tailwind)
7. Connecting React to the Ollama API
8. Sending Prompts and Handling Streaming Responses
9. Adding Features: Chat History, Avatars, UI Themes
10. Deploying Your App
11. Performance Tips and Troubleshooting
12. Final Thoughts + GitHub Template Offer
1. 🎯 Introduction to Ollama and React
What is Ollama?
Ollama is a local LLM runner and model manager that:
- Supports open GGUF models (like DeepSeek R1 or DeepSeek-Coder)
- Offers a built-in HTTP REST API
- Runs on macOS, Linux, and Windows (WSL)
- Simplifies loading, serving, and switching between LLMs
Why React?
React is ideal for frontend UIs:
- Declarative UI with state handling
- Clean component structure
- Easily connects to APIs
- Highly customizable (supports Tailwind, animations, etc.)
2. 🧠 Why Use DeepSeek + Ollama + React
| Feature | Benefit |
|---|---|
| DeepSeek R1 | Free, open-weight LLM with 128K context |
| Ollama | Easy local serving of LLMs with a REST API |
| React | Beautiful, fast frontends with flexible logic |
| Local setup | Fully offline, no token fees; great for demos and privacy-sensitive apps |
3. 🏗️ Project Architecture Overview
```
📁 project-root/
├── /frontend   ← React app (Vite + Tailwind)
├── /backend    ← Ollama running locally
└── README.md
```
The frontend will send requests to:
```bash
http://localhost:11434/api/generate
```
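Before wiring up any UI, you can sanity-check this endpoint from a terminal. A minimal request (assuming `deepseek-coder` is already pulled; see section 5):

```bash
# Ask the local Ollama server for a single, non-streamed completion
curl http://localhost:11434/api/generate \
  -d '{"model": "deepseek-coder", "prompt": "Write a haiku about code", "stream": false}'
```

The JSON reply carries the generated text in its `response` field, which is exactly what the React code below reads.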
4. ⚙️ Installing and Running Ollama Locally
Step 1: Install Ollama
Download it from the official site: https://ollama.com
Or via CLI:
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
Step 2: Start Ollama
```bash
ollama run deepseek-coder
```
If the model isn’t installed, it will automatically be downloaded.
5. 📥 Installing DeepSeek Models
Popular options:
```bash
ollama pull deepseek-coder
ollama pull deepseek-r1
```
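Most library models also come in size-tagged variants, so you can match the model to your hardware. For example (exact tags depend on what the Ollama library publishes):

```bash
# Pull a specific size variant instead of the default tag
ollama pull deepseek-r1:7b
```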
Other models are available too: `mistral`, `llama2`, `gemma`, etc.
You can list installed models:
```bash
ollama list
```
6. 🌐 Creating the React Frontend (Vite + Tailwind)
Step 1: Set Up the Vite React Project
```bash
npm create vite@latest ai-chatbot -- --template react
cd ai-chatbot
npm install
```
Step 2: Install Tailwind CSS
```bash
npm install -D tailwindcss postcss autoprefixer
npx tailwindcss init -p
```
Update `tailwind.config.js`:
```js
content: ["./index.html", "./src/**/*.{js,ts,jsx,tsx}"],
```
Update `src/index.css`:
```css
@tailwind base;
@tailwind components;
@tailwind utilities;
```
7. 🔌 Connecting React to the Ollama API
Create `src/ChatBox.jsx`:
```jsx
import React, { useState } from 'react';

export default function ChatBox() {
  const [input, setInput] = useState('');
  const [chat, setChat] = useState([]);

  const sendPrompt = async () => {
    const res = await fetch('http://localhost:11434/api/generate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model: 'deepseek-coder', prompt: input, stream: false })
    });
    const data = await res.json();
    const message = data.response;
    setChat([...chat, { user: input, bot: message }]);
    setInput('');
  };

  return (
    <div className="max-w-xl mx-auto p-4">
      <h1 className="text-xl font-bold mb-4">DeepSeek Chatbot</h1>
      <div className="bg-white shadow-md p-4 rounded-md space-y-2 h-96 overflow-y-auto">
        {chat.map((entry, index) => (
          <div key={index}>
            <p><strong>You:</strong> {entry.user}</p>
            <p><strong>AI:</strong> {entry.bot}</p>
          </div>
        ))}
      </div>
      <div className="flex gap-2 mt-4">
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          className="flex-1 border p-2 rounded"
          placeholder="Ask something..."
        />
        <button onClick={sendPrompt} className="bg-blue-500 text-white px-4 py-2 rounded">
          Send
        </button>
      </div>
    </div>
  );
}
```
Add it to `App.jsx`:
```jsx
import ChatBox from './ChatBox';

function App() {
  return (
    <div className="bg-gray-100 min-h-screen">
      <ChatBox />
    </div>
  );
}

export default App;
```
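Start the Vite dev server and the chat UI is live (Vite defaults to port 5173):

```bash
npm run dev
```

Keep Ollama running in another terminal so the fetch calls have something to talk to.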
8. 🔁 Sending Prompts and Handling Streaming Responses
Ollama supports streaming, so you can enhance the chatbot by displaying tokens one-by-one.
Here’s how to handle the stream in React. Note that a network chunk can split a JSON line in half, so the code buffers partial lines instead of parsing blindly:

```jsx
const streamPrompt = async () => {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'deepseek-coder', prompt: input, stream: true })
  });

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  let message = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // Each complete line is one JSON object; keep any partial line for the next chunk
    const lines = buffer.split('\n');
    buffer = lines.pop();
    for (const line of lines) {
      if (!line.trim()) continue;
      const parsed = JSON.parse(line);
      message += parsed.response;
      setLiveResponse(message); // state hook holding the in-progress reply
    }
  }
};
```
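Each streamed line is a standalone JSON object. A token event looks roughly like this (fields abridged; the final object arrives with `done: true` plus timing stats):

```json
{"model":"deepseek-coder","created_at":"2025-01-01T12:00:00Z","response":"Hello","done":false}
```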
9. 🎨 Adding Features: Chat History, Avatars, UI Themes
a. Chat Avatars
```jsx
<p className="text-blue-700">🤖 AI:</p>
<p className="text-green-700">👤 You:</p>
```
b. UI Themes
Install Tailwind UI or daisyUI for dark-mode support.
```bash
npm install daisyui
```
Update `tailwind.config.js`:
```js
plugins: [require("daisyui")],
```
Enable dark mode in layout:
```html
<body class="dark:bg-gray-900">
```
10. 🚀 Deploying Your App
Local deployment:
```bash
npm run build
npx serve -s dist
```
Remote deployment:
- Render.com: React + Ollama Docker combo (a Compose sketch follows below)
- Fly.io: deploy Ollama + the frontend globally
- Tauri/Electron: wrap everything in a desktop app
- Local web app: run Ollama in the background, React on localhost
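For the Docker-based options, a minimal `docker-compose.yml` sketch could look like this (the `ollama/ollama` image is the official one; the frontend service assumes you add your own Dockerfile under `frontend/`):

```yaml
services:
  ollama:
    image: ollama/ollama          # official Ollama server image
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama # persist downloaded models across restarts
  frontend:
    build: ./frontend             # hypothetical Dockerfile serving the Vite build
    ports:
      - "3000:80"
    depends_on:
      - ollama
volumes:
  ollama-data:
```

You would still need to pull the model once inside the container, e.g. `docker exec -it <ollama-container> ollama pull deepseek-coder`.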
11. 🧠 Performance Tips and Troubleshooting
| Problem | Solution |
|---|---|
| Ollama slow to respond | Use quantized models (e.g. `Q4_0`, `Q5_1`) |
| CORS errors | Use a dev proxy (see the Vite sketch below) or enable CORS headers |
| Port conflict | Change Ollama's bind address (e.g. `OLLAMA_HOST=127.0.0.1:11435`) |
| Model load failure | Run `ollama list` to check, then pull again |
| Slow frontend | Debounce input, reduce render updates |
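For the CORS row above, the least invasive fix during development is Vite's built-in dev-server proxy. A minimal `vite.config.js` sketch (the `/api` prefix matches Ollama's endpoint paths):

```js
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  server: {
    proxy: {
      // Forward /api/* to the local Ollama server so the browser
      // only ever talks to the dev server's own origin.
      '/api': 'http://localhost:11434',
    },
  },
});
```

With the proxy in place, the frontend can call `fetch('/api/generate', …)` instead of hard-coding `http://localhost:11434`.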
12. ✅ Final Thoughts + GitHub Template
This combination of Ollama, DeepSeek, and React is one of the most powerful setups in 2025 for:
⚡ Building local AI chatbots
💬 Creating chat interfaces for customers or internal use
💻 Prototyping LLM products with zero cloud cost
🔐 Deploying private assistants for secure environments
📦 Want a GitHub Template?
Let me know and I’ll send:
- ✅ `frontend/` with the ChatBox UI
- ✅ `README.md` with setup instructions
- ✅ Docker setup for Ollama + Vite
- ✅ Bonus: PDF branding guide for a commercial version