Ollama
Installation Guide: CUDA 11.4 on Rocky Linux 8
Prerequisites:
- A Rocky Linux 8 system or a container based on Rocky Linux 8.
- Root privileges.
- Internet connectivity.
Steps:

1. Update the system: Start by updating the operating system packages.

   ```shell
   dnf -y update
   ```

2. Install the EPEL repository: The Extra Packages for Enterprise Linux (EPEL) repository is required for some dependencies.

   ```shell
   dnf -y install epel-release
   ```

3. Add the NVIDIA CUDA repository: Add the NVIDIA CUDA repository for RHEL 8 so that dnf can find and install the necessary CUDA packages.

   ```shell
   dnf -y config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo
   ```

4. Install the NVIDIA driver (version 470): Install NVIDIA driver 470 using DKMS (Dynamic Kernel Module Support) so the driver module is rebuilt automatically whenever the kernel is updated.

   ```shell
   dnf -y module install nvidia-driver:470-dkms
   ```

5. Install CUDA Toolkit 11.4:

   ```shell
   dnf -y install cuda-11-4
   ```

6. Set up CUDA environment variables (optional but recommended): Copy a script (cuda-11.4.sh) to /etc/profile.d/ that sets the environment variables. If performing this installation manually, make sure the PATH and LD_LIBRARY_PATH environment variables are configured correctly. The contents of cuda-11.4.sh would typically be created like this:

   ```shell
   # Create /etc/profile.d/cuda-11.4.sh
   echo "export PATH=/usr/local/cuda-11.4/bin:\$PATH" > /etc/profile.d/cuda-11.4.sh
   echo "export LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64:\$LD_LIBRARY_PATH" >> /etc/profile.d/cuda-11.4.sh
   ```

   To apply these changes, you can either:

   - Source the script: source /etc/profile.d/cuda-11.4.sh
   - Log out and log back in to refresh your shell environment.
   - Add the lines above to your .bashrc or equivalent shell configuration file.

Verification:

After installation, verify that CUDA is properly installed by running:

```shell
nvcc --version
```

This command should display the CUDA compiler version. You can also check the installed driver version with:

```shell
nvidia-smi
```
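Optionally, you can compile and run a tiny CUDA program as a smoke test of the new toolchain. The file name and kernel below are only an illustration (they are not part of the original guide); the -arch=sm_37 flag targets the Tesla K80's compute capability, which CUDA 11.4 still supports.

```shell
# Optional smoke test (illustrative, not from the original guide).
cat > /tmp/hello.cu <<'EOF'
#include <cstdio>

// Trivial kernel: each GPU thread prints its index.
__global__ void hello() { printf("Hello from GPU thread %d\n", threadIdx.x); }

int main() {
    hello<<<1, 4>>>();
    cudaDeviceSynchronize();
    return 0;
}
EOF

# sm_37 is the K80's compute capability (deprecated but still supported in CUDA 11.4).
nvcc -arch=sm_37 /tmp/hello.cu -o /tmp/hello && /tmp/hello
```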
GCC 10 Installation Guide
This guide details the steps to install GCC 10.
Steps:

1. Update and install prerequisites: Install wget, unzip, and lbzip2 (used to download and extract the GCC source code) along with the "Development Tools" group, which provides the necessary build tools.

   ```shell
   dnf -y install wget unzip lbzip2 \
     && dnf -y groupinstall "Development Tools"
   ```

2. Download the GCC 10 source code from the gcc-mirror GitHub repository.

   ```shell
   cd /usr/local/src \
     && wget https://github.com/gcc-mirror/gcc/archive/refs/heads/releases/gcc-10.zip
   ```

3. Extract the source code: Unzip the downloaded GCC 10 archive.

   ```shell
   unzip gcc-10.zip
   ```

4. Prepare the build environment: Change into the extracted GCC 10 directory.

   ```shell
   cd gcc-releases-gcc-10
   ```

5. Download prerequisites: Fetch the build prerequisites using the contrib/download_prerequisites script.

   ```shell
   contrib/download_prerequisites
   ```

6. Create a build directory: Create /usr/local/gcc-10 to build GCC in. The build happens outside the source tree; with the default prefix, the compiled toolchain is installed under /usr/local (hence /usr/local/bin/gcc later in this document).

   ```shell
   mkdir /usr/local/gcc-10
   ```

7. Configure the GCC build: Run the configure script from the build directory. The --disable-multilib flag disables multilib support, which simplifies the build.

   ```shell
   cd /usr/local/gcc-10 \
     && /usr/local/src/gcc-releases-gcc-10/configure --disable-multilib
   ```

8. Compile GCC: Build with make. The -j $(nproc) option runs the compilation in parallel on all available CPU cores, speeding up the process.

   ```shell
   make -j $(nproc)
   ```

9. Install GCC: Install the compiled GCC binaries.

   ```shell
   make install
   ```

10. Post-install configuration: Configure the system environment for GCC 10. Create /etc/profile.d/gcc-10.sh to put /usr/local/lib64 on LD_LIBRARY_PATH, and add /etc/ld.so.conf.d/gcc-10.conf so the dynamic linker's cache picks up the new libraries.

    ```shell
    # Create /etc/profile.d/gcc-10.sh
    echo "export LD_LIBRARY_PATH=/usr/local/lib64:\$LD_LIBRARY_PATH" > /etc/profile.d/gcc-10.sh

    # Create /etc/ld.so.conf.d/gcc-10.conf
    echo "/usr/local/lib64" > /etc/ld.so.conf.d/gcc-10.conf

    # Update the dynamic linker cache
    ldconfig
    ```
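Since the Ollama build later points CC and CXX at /usr/local/bin/gcc and /usr/local/bin/g++, a quick sanity check along these lines can confirm the new toolchain and library path are in place (the exact version strings depend on the GCC 10 snapshot that was built):

```shell
# Sanity check (illustrative): confirm the new compilers and the linker cache entry.
/usr/local/bin/gcc --version
/usr/local/bin/g++ --version
ldconfig -p | grep /usr/local/lib64/libstdc++
```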
CMake Installation Guide
Steps:

1. Update the package manager (optional but recommended): This is not part of the Dockerfile, but it is good practice to refresh your package metadata first so you install the latest available packages.

2. Install the OpenSSL development libraries:

   ```shell
   dnf -y install openssl-devel
   ```

   - Purpose: CMake uses OpenSSL for its TLS-enabled features, and the -devel package provides the header files and libraries needed to build against it.
   - dnf is the package manager for Fedora and related distributions (CentOS, Rocky Linux, etc.). On a different distribution, use the appropriate package manager (e.g., apt for Debian/Ubuntu, yum for older CentOS versions).
   - The -y flag automatically answers "yes" to any prompts, making the installation non-interactive.

3. Download the CMake source code:

   ```shell
   cd /usr/local/src
   wget https://github.com/Kitware/CMake/releases/download/v4.0.0/cmake-4.0.0.tar.gz
   ```

   - cd /usr/local/src changes into /usr/local/src, a common location for source trees.
   - wget downloads the CMake source archive from the given URL; make sure wget is installed.

4. Extract the archive:

   ```shell
   tar xvf cmake-4.0.0.tar.gz
   ```

   - tar is the GNU tape archiver, a common utility for creating and extracting archives.
   - The options mean: x extract files, v verbose (list the files being extracted), f use the named archive file.

5. Create a CMake build directory:

   ```shell
   mkdir /usr/local/cmake-4
   ```

   - mkdir creates the directory in which CMake will be built out of tree.

6. Configure CMake:

   ```shell
   cd /usr/local/cmake-4
   /usr/local/src/cmake-4.0.0/configure
   ```

   - cd /usr/local/cmake-4 changes into the build directory created above.
   - The configure script prepares the CMake sources for compilation on your system: it checks for dependencies and generates the Makefiles.

7. Compile CMake:

   ```shell
   make -j $(nproc)
   ```

   - make compiles the CMake source code.
   - -j $(nproc) runs the build in parallel; $(nproc) expands to the number of available processor cores.

8. Install CMake:

   ```shell
   make install
   ```

   - make install copies the compiled CMake binaries and related files into the system directories. This step usually requires root privileges (e.g., via sudo).
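To confirm the installation, check which cmake binary is now on the PATH; with the default install prefix it typically lands in /usr/local/bin:

```shell
# Verify the newly installed CMake.
cmake --version
which cmake
```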
Go Installation Guide
This guide installs Go version 1.24.2, as specified in the Dockerfile.
Steps:

1. Download the Go distribution:

   ```shell
   cd /usr/local
   wget https://go.dev/dl/go1.24.2.linux-amd64.tar.gz
   ```

   - cd /usr/local changes into /usr/local, the conventional install location for Go.
   - wget downloads the Go distribution archive from the given URL; ensure that wget is installed on your system.

2. Extract the archive:

   ```shell
   tar xvf go1.24.2.linux-amd64.tar.gz
   ```

   - tar with the xvf options extracts the archive and lists the extracted files, as described in the CMake installation guide. This creates the /usr/local/go directory.

3. Post-install configuration: After extracting the distribution under /usr/local, add /usr/local/go/bin to the PATH environment variable. One way is to create a file in /etc/profile.d/:

   ```shell
   echo 'export PATH=$PATH:/usr/local/go/bin' | sudo tee /etc/profile.d/go.sh
   ```

   - echo prints the export line to standard output.
   - sudo tee writes it to the file with superuser privileges; the -a option would append to the file instead of overwriting it.
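You can pick up the new PATH in the current shell and confirm the toolchain before moving on to the Ollama build:

```shell
# Apply the profile script in the current shell and verify the Go toolchain.
source /etc/profile.d/go.sh
go version   # expected to report go1.24.2 linux/amd64
```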
Compilation Guide: Ollama37
Prerequisites:
- Rocky Linux 8.
- git: for cloning the repository.
- cmake: for managing the C++ build process.
- go: the Go compiler and toolchain.
- gcc 10 and g++ (GNU Compiler Collection): for C++ compilation (the build points at them explicitly via the CC and CXX variables).
- CUDA Toolkit 11.4.

Steps:

1. Navigate to the build directory:

   ```shell
   cd /usr/local/src
   ```

2. Clone the repository:

   ```shell
   git clone https://github.com/dogkeeper886/ollama37
   ```

3. Change directory:

   ```shell
   cd ollama37
   ```

4. CMake configuration: This step configures the build system. The CC and CXX variables are explicitly set to /usr/local/bin/gcc and /usr/local/bin/g++ so the GCC 10 toolchain built above is used, which is critical if the system's default compilers are incompatible or need to be overridden.

   ```shell
   CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ cmake -B build
   ```

5. CMake build: This step compiles the C++ code using the configuration generated in the previous step.

   ```shell
   CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ cmake --build build
   ```

6. Go build: Finally, compile the Go code to produce the ollama executable.

   ```shell
   go build -o ollama .
   ```
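Once the build finishes, the resulting binary can be checked in place before starting the server (a locally built binary may report a development version string):

```shell
# Run from the ollama37 checkout.
./ollama --version
```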
Docker
The ollama37 Docker image, dogkeeper886/ollama37, is available on Docker Hub.
This Docker image provides a ready-to-use environment for running Ollama, a local Large Language Model (LLM) runner, specifically optimized to leverage the capabilities of an NVIDIA K80 GPU. This setup is ideal for AI researchers and developers looking to experiment with models in a controlled home lab setting.
To pull the image from Docker Hub, use:
docker pull dogkeeper886/ollama37
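A typical way to run it is sketched below; this assumes the image follows the same conventions as the upstream ollama/ollama image (server listening on port 11434, models stored under /root/.ollama) and that the NVIDIA Container Toolkit is installed so the container can see the K80.

```shell
# Illustrative run command (assumptions noted above); adjust names and paths as needed.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama37 \
  dogkeeper886/ollama37
```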
Libraries
Community
Quickstart
To run and chat with Llama 3.2:
ollama run llama3.2
Model library
Ollama supports a list of models available on ollama.com/library
Here are some example models that can be downloaded:
| Model | Parameters | Size | Download |
|---|---|---|---|
| Gemma 3 | 1B | 815MB | ollama run gemma3:1b |
| Gemma 3 | 4B | 3.3GB | ollama run gemma3 |
| Gemma 3 | 12B | 8.1GB | ollama run gemma3:12b |
| Gemma 3 | 27B | 17GB | ollama run gemma3:27b |
| QwQ | 32B | 20GB | ollama run qwq |
| DeepSeek-R1 | 7B | 4.7GB | ollama run deepseek-r1 |
| DeepSeek-R1 | 671B | 404GB | ollama run deepseek-r1:671b |
| Llama 4 | 109B | 67GB | ollama run llama4:scout |
| Llama 4 | 400B | 245GB | ollama run llama4:maverick |
| Llama 3.3 | 70B | 43GB | ollama run llama3.3 |
| Llama 3.2 | 3B | 2.0GB | ollama run llama3.2 |
| Llama 3.2 | 1B | 1.3GB | ollama run llama3.2:1b |
| Llama 3.2 Vision | 11B | 7.9GB | ollama run llama3.2-vision |
| Llama 3.2 Vision | 90B | 55GB | ollama run llama3.2-vision:90b |
| Llama 3.1 | 8B | 4.7GB | ollama run llama3.1 |
| Llama 3.1 | 405B | 231GB | ollama run llama3.1:405b |
| Phi 4 | 14B | 9.1GB | ollama run phi4 |
| Phi 4 Mini | 3.8B | 2.5GB | ollama run phi4-mini |
| Mistral | 7B | 4.1GB | ollama run mistral |
| Moondream 2 | 1.4B | 829MB | ollama run moondream |
| Neural Chat | 7B | 4.1GB | ollama run neural-chat |
| Starling | 7B | 4.1GB | ollama run starling-lm |
| Code Llama | 7B | 3.8GB | ollama run codellama |
| Llama 2 Uncensored | 7B | 3.8GB | ollama run llama2-uncensored |
| LLaVA | 7B | 4.5GB | ollama run llava |
| Granite-3.3 | 8B | 4.9GB | ollama run granite3.3 |
Note
You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
Customize a model
Import from GGUF
Ollama supports importing GGUF models in the Modelfile:
1. Create a file named Modelfile, with a FROM instruction pointing to the local file path of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama:

   ```shell
   ollama create example -f Modelfile
   ```

3. Run the model:

   ```shell
   ollama run example
   ```
Import from Safetensors
See the guide on importing models for more information.
Customize a prompt
Models from the Ollama library can be customized with a prompt. For example, to customize the llama3.2 model:
ollama pull llama3.2
Create a Modelfile:
FROM llama3.2
# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1
# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
Next, create and run the model:
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
For more information on working with a Modelfile, see the Modelfile documentation.
CLI Reference
Create a model
ollama create is used to create a model from a Modelfile.
ollama create mymodel -f ./Modelfile
Pull a model
ollama pull llama3.2
This command can also be used to update a local model. Only the diff will be pulled.
Remove a model
ollama rm llama3.2
Copy a model
ollama cp llama3.2 my-model
Multiline input
For multiline input, you can wrap text with """:
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
Multimodal models
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
Output: The image features a yellow smiley face, which is likely the central focus of the picture.
Pass the prompt as an argument
ollama run llama3.2 "Summarize this file: $(cat README.md)"
Output: Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
Show model information
ollama show llama3.2
List models on your computer
ollama list
List which models are currently loaded
ollama ps
Stop a model which is currently running
ollama stop llama3.2
Start Ollama
ollama serve is used when you want to start ollama without running the desktop application.
Building
See the developer guide
Running local builds
After building the ollama binary (see the compilation guide above), start the server:
./ollama serve
Finally, in a separate shell, run a model:
./ollama run llama3.2
REST API
Ollama has a REST API for running and managing models.
Generate a response
curl http://localhost:11434/api/generate -d '{
"model": "llama3.2",
"prompt":"Why is the sky blue?"
}'
Chat with a model
curl http://localhost:11434/api/chat -d '{
"model": "llama3.2",
"messages": [
{ "role": "user", "content": "why is the sky blue?" }
]
}'
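Both endpoints stream newline-delimited JSON responses by default. If a single consolidated response is easier to work with, the API also accepts a stream flag, for example:

```shell
# Request a single (non-streamed) JSON response instead of streaming chunks.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```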
See the API documentation for all endpoints.
Community Integrations
Web & Desktop
- Open WebUI
- SwiftChat (macOS with ReactNative)
- Enchanted (macOS native)
- Hollama
- Lollms-Webui
- LibreChat
- Bionic GPT
- HTML UI
- Saddle
- TagSpaces (A platform for file-based apps, utilizing Ollama for the generation of tags and descriptions)
- Chatbot UI
- Chatbot UI v2
- Typescript UI
- Minimalistic React UI for Ollama Models
- Ollamac
- big-AGI
- Cheshire Cat assistant framework
- Amica
- chatd
- Ollama-SwiftUI
- Dify.AI
- MindMac
- NextJS Web Interface for Ollama
- Msty
- Chatbox
- WinForm Ollama Copilot
- NextChat with Get Started Doc
- Alpaca WebUI
- OllamaGUI
- OpenAOE
- Odin Runes
- LLM-X (Progressive Web App)
- AnythingLLM (Docker + macOS/Windows/Linux native app)
- Ollama Basic Chat: Uses HyperDiv Reactive UI
- Ollama-chats RPG
- IntelliBar (AI-powered assistant for macOS)
- Jirapt (Jira Integration to generate issues, tasks, epics)
- QA-Pilot (Interactive chat tool that can leverage Ollama models for rapid understanding and navigation of GitHub code repositories)
- ChatOllama (Open Source Chatbot based on Ollama with Knowledge Bases)
- CRAG Ollama Chat (Simple Web Search with Corrective RAG)
- RAGFlow (Open-source Retrieval-Augmented Generation engine based on deep document understanding)
- StreamDeploy (LLM Application Scaffold)
- chat (chat web app for teams)
- Lobe Chat with Integrating Doc
- Ollama RAG Chatbot (Local Chat with multiple PDFs using Ollama and RAG)
- BrainSoup (Flexible native client with RAG & multi-agent automation)
- macai (macOS client for Ollama, ChatGPT, and other compatible API back-ends)
- RWKV-Runner (RWKV offline LLM deployment tool, also usable as a client for ChatGPT and Ollama)
- Ollama Grid Search (app to evaluate and compare models)
- Olpaka (User-friendly Flutter Web App for Ollama)
- Casibase (An open source AI knowledge base and dialogue system combining the latest RAG, SSO, ollama support, and multiple large language models.)
- OllamaSpring (Ollama Client for macOS)
- LLocal.in (Easy to use Electron Desktop Client for Ollama)
- Shinkai Desktop (Two click install Local AI using Ollama + Files + RAG)
- AiLama (A Discord User App that allows you to interact with Ollama anywhere in Discord)
- Ollama with Google Mesop (Mesop Chat Client implementation with Ollama)
- R2R (Open-source RAG engine)
- Ollama-Kis (A simple easy-to-use GUI with sample custom LLM for Drivers Education)
- OpenGPA (Open-source offline-first Enterprise Agentic Application)
- Painting Droid (Painting app with AI integrations)
- Kerlig AI (AI writing assistant for macOS)
- AI Studio
- Sidellama (browser-based LLM client)
- LLMStack (No-code multi-agent framework to build LLM agents and workflows)
- BoltAI for Mac (AI Chat Client for Mac)
- Harbor (Containerized LLM Toolkit with Ollama as default backend)
- PyGPT (AI desktop assistant for Linux, Windows, and Mac)
- Alpaca (An Ollama client application for Linux and macOS made with GTK4 and Adwaita)
- AutoGPT (AutoGPT Ollama integration)
- Go-CREW (Powerful Offline RAG in Golang)
- PartCAD (CAD model generation with OpenSCAD and CadQuery)
- Ollama4j Web UI - Java-based Web UI for Ollama built with Vaadin, Spring Boot, and Ollama4j
- PyOllaMx - macOS application capable of chatting with both Ollama and Apple MLX models.
- Cline - Formerly known as Claude Dev is a VSCode extension for multi-file/whole-repo coding
- Cherry Studio (Desktop client with Ollama support)
- ConfiChat (Lightweight, standalone, multi-platform, and privacy-focused LLM chat interface with optional encryption)
- Archyve (RAG-enabling document library)
- crewAI with Mesop (Mesop Web Interface to run crewAI with Ollama)
- Tkinter-based client (Python tkinter-based Client for Ollama)
- LLMChat (Privacy focused, 100% local, intuitive all-in-one chat interface)
- Local Multimodal AI Chat (Ollama-based LLM Chat with support for multiple features, including PDF RAG, voice chat, image-based interactions, and integration with OpenAI.)
- ARGO (Locally download and run Ollama and Huggingface models with RAG on Mac/Windows/Linux)
- OrionChat - OrionChat is a web interface for chatting with different AI providers
- G1 (Prototype of using prompting strategies to improve the LLM's reasoning through o1-like reasoning chains.)
- Web management (Web management page)
- Promptery (desktop client for Ollama.)
- Ollama App (Modern and easy-to-use multi-platform client for Ollama)
- chat-ollama (a React Native client for Ollama)
- SpaceLlama (Firefox and Chrome extension to quickly summarize web pages with ollama in a sidebar)
- YouLama (Webapp to quickly summarize any YouTube video, supporting Invidious as well)
- DualMind (Experimental app allowing two models to talk to each other in the terminal or in a web interface)
- ollamarama-matrix (Ollama chatbot for the Matrix chat protocol)
- ollama-chat-app (Flutter-based chat app)
- Perfect Memory AI (Productivity AI assists personalized by what you have seen on your screen, heard, and said in the meetings)
- Hexabot (A conversational AI builder)
- Reddit Rate (Search and Rate Reddit topics with a weighted summation)
- OpenTalkGpt (Chrome Extension to manage open-source models supported by Ollama, create custom models, and chat with models from a user-friendly UI)
- VT (A minimal multimodal AI chat app, with dynamic conversation routing. Supports local models via Ollama)
- Nosia (Easy to install and use RAG platform based on Ollama)
- Witsy (An AI Desktop application available for Mac/Windows/Linux)
- Abbey (A configurable AI interface server with notebooks, document storage, and YouTube support)
- Minima (RAG with on-premises or fully local workflow)
- aidful-ollama-model-delete (User interface for simplified model cleanup)
- Perplexica (An AI-powered search engine & an open-source alternative to Perplexity AI)
- Ollama Chat WebUI for Docker (Support for local docker deployment, lightweight ollama webui)
- AI Toolkit for Visual Studio Code (Microsoft-official VSCode extension to chat, test, evaluate models with Ollama support, and use them in your AI applications.)
- MinimalNextOllamaChat (Minimal Web UI for Chat and Model Control)
- Chipper AI interface for tinkerers (Ollama, Haystack RAG, Python)
- ChibiChat (Kotlin-based Android app to chat with Ollama and Koboldcpp API endpoints)
- LocalLLM (Minimal Web-App to run ollama models on it with a GUI)
- Ollamazing (Web extension to run Ollama models)
- OpenDeepResearcher-via-searxng (A Deep Research equivalent endpoint with Ollama support for running locally)
- AntSK (Out-of-the-box & Adaptable RAG Chatbot)
- MaxKB (Ready-to-use & flexible RAG Chatbot)
- yla (Web interface to freely interact with your customized models)
- LangBot (LLM-based instant messaging bots platform, with Agents, RAG features, supports multiple platforms)
- 1Panel (Web-based Linux Server Management Tool)
- AstrBot (User-friendly LLM-based multi-platform chatbot with a WebUI, supporting RAG, LLM agents, and plugins integration)
- Reins (Easily tweak parameters, customize system prompts per chat, and enhance your AI experiments with reasoning model support.)
- Ellama (Friendly native app to chat with an Ollama instance)
- screenpipe Build agents powered by your screen history
- Ollamb (Simple yet rich in features, cross-platform built with Flutter and designed for Ollama. Try the web demo.)
- Writeopia (Text editor with integration with Ollama)
- AppFlowy (AI collaborative workspace with Ollama, cross-platform and self-hostable)
- Lumina (A lightweight, minimal React.js frontend for interacting with Ollama servers)
Cloud
Terminal
- oterm
- Ellama Emacs client
- Emacs client
- neollama UI client for interacting with models from within Neovim
- gen.nvim
- ollama.nvim
- ollero.nvim
- ollama-chat.nvim
- ogpt.nvim
- gptel Emacs client
- Oatmeal
- cmdh
- ooo
- shell-pilot (Interact with models via pure shell scripts on Linux or macOS)
- tenere
- llm-ollama for Datasette's LLM CLI.
- typechat-cli
- ShellOracle
- tlm
- podman-ollama
- gollama
- ParLlama
- Ollama eBook Summary
- Ollama Mixture of Experts (MOE) in 50 lines of code
- vim-intelligence-bridge Simple interaction of "Ollama" with the Vim editor
- x-cmd ollama
- bb7
- SwollamaCLI bundled with the Swollama Swift package. Demo
- aichat All-in-one LLM CLI tool featuring Shell Assistant, Chat-REPL, RAG, AI tools & agents, with access to OpenAI, Claude, Gemini, Ollama, Groq, and more.
- PowershAI PowerShell module that brings AI to terminal on Windows, including support for Ollama
- DeepShell Your self-hosted AI assistant. Interactive Shell, Files and Folders analysis.
- orbiton Configuration-free text editor and IDE with support for tab completion with Ollama.
- orca-cli Ollama Registry CLI Application - Browse, pull, and download models from Ollama Registry in your terminal.
- GGUF-to-Ollama - Importing GGUF to Ollama made easy (multiplatform)
Apple Vision Pro
- SwiftChat (Cross-platform AI chat app supporting Apple Vision Pro via "Designed for iPad")
- Enchanted
Database
- pgai - PostgreSQL as a vector database (Create and search embeddings from Ollama models using pgvector)
- MindsDB (Connects Ollama models with nearly 200 data platforms and apps)
- chromem-go with example
- Kangaroo (AI-powered SQL client and admin tool for popular databases)
Package managers
Libraries
- LangChain and LangChain.js with example
- Firebase Genkit
- crewAI
- Yacana (User-friendly multi-agent framework for brainstorming and executing predetermined flows with built-in tool integration)
- Spring AI with reference and example
- LangChainGo with example
- LangChain4j with example
- LangChainRust with example
- LangChain for .NET with example
- LLPhant
- LlamaIndex and LlamaIndexTS
- LiteLLM
- OllamaFarm for Go
- OllamaSharp for .NET
- Ollama for Ruby
- Ollama-rs for Rust
- Ollama-hpp for C++
- Ollama4j for Java
- ModelFusion Typescript Library
- OllamaKit for Swift
- Ollama for Dart
- Ollama for Laravel
- LangChainDart
- Semantic Kernel - Python
- Haystack
- Elixir LangChain
- Ollama for R - rollama
- Ollama for R - ollama-r
- Ollama-ex for Elixir
- Ollama Connector for SAP ABAP
- Testcontainers
- Portkey
- PromptingTools.jl with an example
- LlamaScript
- llm-axe (Python Toolkit for Building LLM Powered Apps)
- Gollm
- Gollama for Golang
- Ollamaclient for Golang
- High-level function abstraction in Go
- Ollama PHP
- Agents-Flex for Java with example
- Parakeet is a GoLang library, made to simplify the development of small generative AI applications with Ollama.
- Haverscript with examples
- Ollama for Swift
- Swollama for Swift with DocC
- GoLamify
- Ollama for Haskell
- multi-llm-ts (A Typescript/JavaScript library allowing access to different LLM in a unified API)
- LlmTornado (C# library providing a unified interface for major FOSS & Commercial inference APIs)
- Ollama for Zig
- Abso (OpenAI-compatible TypeScript SDK for any LLM provider)
- Nichey is a Python package for generating custom wikis for your research topic
- Ollama for D
Mobile
- SwiftChat (Lightning-fast Cross-platform AI chat app with native UI for Android, iOS, and iPad)
- Enchanted
- Maid
- Ollama App (Modern and easy-to-use multi-platform client for Ollama)
- ConfiChat (Lightweight, standalone, multi-platform, and privacy-focused LLM chat interface with optional encryption)
- Ollama Android Chat (No need for Termux, start the Ollama service with one click on an Android device)
- Reins (Easily tweak parameters, customize system prompts per chat, and enhance your AI experiments with reasoning model support.)
Extensions & Plugins
- Raycast extension
- Discollama (Discord bot inside the Ollama discord channel)
- Continue
- Vibe (Transcribe and analyze meetings with Ollama)
- Obsidian Ollama plugin
- Logseq Ollama plugin
- NotesOllama (Apple Notes Ollama plugin)
- Dagger Chatbot
- Discord AI Bot
- Ollama Telegram Bot
- Hass Ollama Conversation
- Rivet plugin
- Obsidian BMO Chatbot plugin
- Cliobot (Telegram bot with Ollama support)
- Copilot for Obsidian plugin
- Obsidian Local GPT plugin
- Open Interpreter
- Llama Coder (Copilot alternative using Ollama)
- Ollama Copilot (Proxy that allows you to use Ollama as a copilot like GitHub Copilot)
- twinny (Copilot and Copilot chat alternative using Ollama)
- Wingman-AI (Copilot code and chat alternative using Ollama and Hugging Face)
- Page Assist (Chrome Extension)
- Plasmoid Ollama Control (KDE Plasma extension that allows you to quickly manage/control Ollama model)
- AI Telegram Bot (Telegram bot using Ollama in backend)
- AI ST Completion (Sublime Text 4 AI assistant plugin with Ollama support)
- Discord-Ollama Chat Bot (Generalized TypeScript Discord Bot w/ Tuning Documentation)
- ChatGPTBox: All in one browser extension with Integrating Tutorial
- Discord AI chat/moderation bot Chat/moderation bot written in python. Uses Ollama to create personalities.
- Headless Ollama (Scripts to automatically install ollama client & models on any OS for apps that depend on ollama server)
- Terraform AWS Ollama & Open WebUI (A Terraform module to deploy on AWS a ready-to-use Ollama service, together with its front-end Open WebUI service.)
- node-red-contrib-ollama
- Local AI Helper (Chrome and Firefox extensions that enable interactions with the active tab and customisable API endpoints. Includes secure storage for user prompts.)
- vnc-lm (Discord bot for messaging with LLMs through Ollama and LiteLLM. Seamlessly move between local and flagship models.)
- LSP-AI (Open-source language server for AI-powered functionality)
- QodeAssist (AI-powered coding assistant plugin for Qt Creator)
- Obsidian Quiz Generator plugin
- AI Summary Helper plugin
- TextCraft (Copilot in Word alternative using Ollama)
- Alfred Ollama (Alfred Workflow)
- TextLLaMA A Chrome Extension that helps you write emails, correct grammar, and translate into any language
- Simple-Discord-AI
- LLM Telegram Bot (Telegram bot, primarily for RP. Oobabooga-like buttons, A1111 API integration, etc.)
- mcp-llm (MCP Server to allow LLMs to call other LLMs)
Supported backends
- llama.cpp project founded by Georgi Gerganov.
Observability
- Opik is an open-source platform to debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards. Opik supports native integration with Ollama.
- Lunary is the leading open-source LLM observability platform. It provides a variety of enterprise-grade features such as real-time analytics, prompt templates management, PII masking, and comprehensive agent tracing.
- OpenLIT is an OpenTelemetry-native tool for monitoring Ollama Applications & GPUs using traces and metrics.
- HoneyHive is an AI observability and evaluation platform for AI agents. Use HoneyHive to evaluate agent performance, interrogate failures, and monitor quality in production.
- Langfuse is an open source LLM observability platform that enables teams to collaboratively monitor, evaluate and debug AI applications.
- MLflow Tracing is an open source LLM observability tool with a convenient API to log and visualize traces, making it easy to debug and evaluate GenAI applications.