1. Fix binary path resolution using symlink (docker/runtime/Dockerfile)
- Build binary to source directory (./ollama)
- Create symlink from /usr/local/bin/ollama to /usr/local/src/ollama37/ollama
- Allows ml/path.go to resolve libraries via filepath.EvalSymlinks()
- Fixes "total vram=0 B" issue without requiring -w flag
2. Add comprehensive logging for model loading phases (llm/server.go)
- Log runner subprocess startup and readiness
- Log each memory allocation phase (FIT, ALLOC, COMMIT)
- Log layer allocation adjustments during convergence
- Log when model weights are being loaded (slowest phase)
- Log progress during waitUntilRunnerLaunched (every 1s)
- Improves visibility during 1-2 minute first-time model loads
3. Fix flash attention compute capability check (ml/device.go)
- Changed DriverMajor to ComputeMajor for correct capability detection
- Flash attention requires compute capability >= 7.0, not driver version
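A minimal sketch of the corrected check, with illustrative field names (the real struct in ml/device.go differs):

```go
// Sketch only: field names are assumptions, not the actual ml/device.go types.
package main

import "fmt"

type deviceInfo struct {
	driverMajor  int
	computeMajor int
	computeMinor int
}

// supportsFlashAttention reports whether the GPU's compute capability is at
// least 7.0 (Volta), which the flash attention kernels require; the driver
// version is irrelevant to this decision.
func supportsFlashAttention(d deviceInfo) bool {
	return d.computeMajor >= 7
}

func main() {
	k80 := deviceInfo{driverMajor: 470, computeMajor: 3, computeMinor: 7}
	fmt.Println("K80 flash attention:", supportsFlashAttention(k80)) // false
}
```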
These changes improve user experience during model loading by providing
clear feedback at each stage, especially during the slow COMMIT phase
where GGUF weights are loaded and CUDA kernels compile.
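A rough sketch of the phase logging described above, using a hypothetical logPhase helper rather than the actual llm/server.go code:

```go
// Illustrative only: the helper name and log fields are assumptions.
package main

import (
	"log/slog"
	"time"
)

// logPhase wraps one loading phase (FIT, ALLOC, COMMIT, ...) so that both its
// start and its duration show up in the server log.
func logPhase(name string, fn func() error) error {
	start := time.Now()
	slog.Info("model load: phase starting", "phase", name)
	err := fn()
	slog.Info("model load: phase finished", "phase", name, "duration", time.Since(start), "err", err)
	return err
}

func main() {
	for _, phase := range []string{"FIT", "ALLOC", "COMMIT"} {
		_ = logPhase(phase, func() error {
			time.Sleep(50 * time.Millisecond) // stand-in for the real work
			return nil
		})
	}
}
```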
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit fixes the issue where large models (>10B parameters) fail to
load due to underestimated compute buffer memory requirements, causing
allocation failures when the model should use multiple GPUs.
Problem:
- deepseek-r1:14b (14B, qwen2 architecture) failed with "failed to allocate
compute buffers" error
- System has 2×Tesla K80 GPUs (24GB total) but tried to fit 12GB model in
1×11GB GPU
- Root cause: Memory estimation underestimated compute buffers by 3-4×
(estimated 916 MB, actual requirement ~3-4 GB)
Solution:
1. Added model-family-specific batch size defaults (llm/memory.go)
- Different architectures have different optimal batch sizes
- deepseek2: 2048/256, qwen2: 512/512, llama: 512/512, etc.
   - Ensures accurate memory estimation based on architecture (see the sketch after this list)
2. Updated server to use architecture-specific batch sizes (llm/server.go)
- Detects model architecture from GGUF metadata
- Uses family defaults when user doesn't specify
- Ensures consistency between estimation and allocation
3. Applied 3.5× safety margin to compute buffer estimates (llm/memory.go)
- Accounts for temporary tensors not captured in GraphSize formulas
- Conservative approach prevents allocation failures
- Documented with detailed analysis of underestimation causes
4. Implemented measurement API for future use (llama-context.cpp, llama.go)
- C++ function to measure actual memory requirements
- Go wrapper for integration into GPU selection
- Foundation for future measurement-based approach
- Currently unused but documented for future improvement
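A combined sketch of items 1 and 3 above, using hypothetical names (familyBatchDefaults, computeBufferSafetyFactor); the real llm/memory.go logic is more involved:

```go
package main

import "fmt"

// batchDefaults pairs a logical and a physical batch size; which of the two
// numbers in the text above (e.g. deepseek2: 2048/256) maps to which field is
// an assumption here.
type batchDefaults struct {
	numBatch  int
	numUBatch int
}

var familyBatchDefaults = map[string]batchDefaults{
	"deepseek2": {2048, 256},
	"qwen2":     {512, 512},
	"llama":     {512, 512},
}

func defaultsFor(arch string) batchDefaults {
	if d, ok := familyBatchDefaults[arch]; ok {
		return d
	}
	return batchDefaults{512, 512} // common fallback
}

// computeBufferSafetyFactor pads the formula-based compute buffer estimate to
// cover temporary tensors the GraphSize formulas do not capture.
const computeBufferSafetyFactor = 3.5

func padGraphEstimate(bytes uint64) uint64 {
	return uint64(float64(bytes) * computeBufferSafetyFactor)
}

func main() {
	fmt.Printf("qwen2 defaults: %+v\n", defaultsFor("qwen2"))
	est := uint64(916) << 20 // ~916 MiB, the underestimate cited above
	fmt.Printf("padded compute buffer: %.1f GiB\n", float64(padGraphEstimate(est))/(1<<30))
}
```

With the cited ~916 MiB underestimate, the 3.5× factor lands at roughly 3.1 GiB, matching the per-GPU compute buffers reported in the results below.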
Results:
- deepseek-r1:14b now loads successfully using both GPUs
- Proper distribution: 25 layers on GPU0, 24 layers on GPU1
- Total memory: 16.2 GB across 2×11 GB GPUs (8.4 + 7.8 GB)
- Compute buffers: 3.1 GB per GPU (with safety margin applied)
- All other models continue to work correctly
Comprehensive documentation added to all modified code explaining:
- Problem analysis with real examples
- Solution rationale and trade-offs
- Future improvement paths
Tested with: deepseek-r1:14b, deepseek-r1:8b, gemma3:4b, gpt-oss
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit represents a complete rework after pulling the latest changes from
official ollama/ollama repository and re-applying Tesla K80 compatibility patches.
## Key Changes
### CUDA Compute Capability 3.7 Support (Tesla K80)
- Added sm_37 (compute 3.7) to CMAKE_CUDA_ARCHITECTURES in CMakeLists.txt
- Updated CMakePresets.json to include compute 3.7 in "CUDA 11" preset
- Using 37-virtual (PTX with JIT compilation) for maximum compatibility
### Legacy Toolchain Compatibility
- **NVIDIA Driver**: 470.256.02 (last version supporting Kepler/K80)
- **CUDA Version**: 11.4.4 (last CUDA 11.x supporting compute 3.7)
- **GCC Version**: 10.5.0 (required by CUDA 11.4 host_config.h)
### CPU Architecture Trade-offs
Due to GCC 10.5 limitation, sacrificed newer CPU optimizations:
- Alderlake CPU variant enabled WITHOUT AVX_VNNI (requires GCC 11+)
- Still supports: SSE4.2, AVX, F16C, AVX2, BMI2, FMA
- Performance impact: ~3-7% on newer CPUs (acceptable for K80 compatibility)
### Build System Updates
- Modified ml/backend/ggml/ggml/src/ggml-cuda/CMakeLists.txt for compute 3.7
- Added -Wno-deprecated-gpu-targets flag to suppress warnings
- Updated ml/backend/ggml/ggml/src/CMakeLists.txt for Alderlake without AVX_VNNI
### Upstream Sync
Merged latest llama.cpp changes including:
- Enhanced KV cache management with ISWA and hybrid memory support
- Improved multi-modal support (mtmd framework)
- New model architectures (Gemma3, Llama4, Qwen3, etc.)
- GPU backend improvements for CUDA, Metal, and ROCm
- Updated quantization support and GGUF format handling
### Documentation
- Updated CLAUDE.md with comprehensive build instructions
- Documented toolchain constraints and CPU architecture trade-offs
- Removed outdated CI/CD workflows (tesla-k80-*.yml)
- Cleaned up temporary development artifacts
## Rationale
This fork maintains Tesla K80 GPU support (compute 3.7) which was dropped in
official Ollama due to legacy driver/CUDA requirements. The toolchain
constraints form a rigid dependency chain:
- K80 → Driver 470 → CUDA 11.4 → GCC 10 → No AVX_VNNI
We accept the loss of cutting-edge CPU optimizations to enable running modern
LLMs on legacy but still capable Tesla K80 hardware (12GB VRAM per GPU).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Problem: Phase 1 optimization (190 MiB for secondary GPUs) caused OOM
errors on large multi-GPU models due to insufficient runtime buffer:
- gemma3:27b: Estimated 10.9 GiB, used 10.8 GiB → only 400 MiB free
- Failed when allocating 6 MiB for KV cache during graph reservation
- Root cause: 190 MiB didn't account for runtime allocations
Investigation: Studied upstream Ollama code (upstream/main:llm/memory.go)
and confirmed official behavior allocates FULL graph to ALL GPUs with
layers, not reduced allocation for secondary GPUs.
Solution: Reverted llm/memory.go to upstream behavior:
- Removed gpuGraphAllocations map and per-GPU logic
- Restored original round-robin layer distribution (layerCount%j)
- All GPUs with layers now get full graph allocation
- Matches official Ollama for maximum stability
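A rough sketch of the restored round-robin placement (layer index modulo GPU count); the real loop in llm/memory.go also tracks free memory and graph allocations:

```go
package main

import "fmt"

func roundRobinLayers(numLayers, numGPUs int) []int {
	perGPU := make([]int, numGPUs)
	for layerCount := 0; layerCount < numLayers; layerCount++ {
		perGPU[layerCount%numGPUs]++
	}
	return perGPU
}

func main() {
	// 62 offloaded layers across 2 GPUs -> [31 31], matching the split in the
	// results below.
	fmt.Println(roundRobinLayers(62, 2))
}
```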
Results with revert:
- gemma3:27b: ✅ Works correctly with 31/31 layer split
- Memory allocation: [10.0 GiB, 9.8 GiB] with proper headroom
- nvidia-smi: GPU0 8.7 GiB, GPU1 8.7 GiB (even distribution)
- Graph allocation: Both GPUs get 300 MiB (actual, not estimate)
Trade-offs:
- ❌ gemma3:12b will use 2 GPUs instead of trying single-GPU (stable)
- ✅ Large models (27b+) work reliably with proper buffer
- ✅ Matches upstream behavior (easier to maintain)
- ✅ Conservative estimates prevent OOM errors
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Problem: The Phase 2 CC 3.7 graph correction (85% reduction) was being
applied unconditionally to all models, causing multi-GPU models like
gemma3:27b and gpt-oss:20b to fail with "cudaMalloc failed: out of memory"
errors on secondary GPUs.
Root Cause: The 85% correction made the allocator think large models
could fit on a single GPU, but then failed when trying to allocate even
small amounts (16 MiB) on GPU 1 because the memory estimate was too low.
Solution: Disabled Phase 2 correction factor in llm/memory.go:173-182.
Phase 1 optimization (per-GPU graph allocation with 190 MiB for secondary
GPUs) is sufficient and correctly handles both single-GPU and multi-GPU
scenarios without causing OOM errors.
Impact:
- gemma3:4b: Still runs on single GPU ✅
- gemma3:12b: May split across GPUs (acceptable trade-off) ✅
- gemma3:27b: Now works with multi-GPU split ✅
- gpt-oss:20b: Now works with multi-GPU split ✅
Files Modified:
- llm/memory.go: Commented out Phase 2 correction factor
- CLAUDE.md: Updated Phase 2 section with new status and lessons learned
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Problem: gemma3:12b (10.2 GiB actual) was splitting across 2 GPUs
despite fitting in single Tesla K80 (11.2 GiB available).
Root Cause: Graph memory estimates for CC 3.7 were 15-20% too high
(estimated 1.3 GiB, actual 1.1 GiB), causing single-GPU fit check
to fail by ~200 MiB margin.
Solution: Apply empirical 85% correction factor to graph estimates
for Tesla K80 (CC 3.7) based on measured actual usage.
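A minimal sketch of the correction, assuming a hypothetical applyGraphCorrection helper keyed on compute capability:

```go
package main

import "fmt"

// applyGraphCorrection scales the graph (compute buffer) estimate down to 85%
// on CC 3.7 devices, based on usage measured on Tesla K80.
func applyGraphCorrection(estimateBytes uint64, computeMajor, computeMinor int) uint64 {
	if computeMajor == 3 && computeMinor == 7 {
		return estimateBytes * 85 / 100
	}
	return estimateBytes
}

func main() {
	est := uint64(1.3 * float64(1<<30)) // ~1.3 GiB formula-based estimate
	corrected := applyGraphCorrection(est, 3, 7)
	fmt.Printf("corrected graph estimate: %.2f GiB\n", float64(corrected)/(1<<30)) // ~1.1 GiB
}
```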
Results:
- Memory estimate: 11.9 GiB → 11.0 GiB (-900 MiB)
- GPU split: 1,48 layers → single GPU (no split)
- GPU 0: 10,015 MiB (was 617 MiB)
- GPU 1: 7 MiB (was 9,866 MiB)
- Inference: 94% GPU utilization, no cross-GPU overhead
Testing: ✅ gemma3:12b loads on single GPU with correct inference
🤖 Generated with Claude Code (https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Implemented multi-GPU memory optimization to reduce unnecessary model splits
across dual Tesla K80 GPUs by fixing graph memory overestimation.
Changes:
1. Per-GPU graph allocation strategy
- Secondary GPUs: 190 MiB (empirically measured)
- Primary GPU: Full 1.3 GiB graph allocation
- Applied during layer distribution, not just final allocation
2. Reverse-order layer distribution
- Prefer loading all layers on last GPU (GPU 1) first
- Only use secondary GPUs when primary is full
- Changed from round-robin to reverse-order (j-1 instead of i%j)
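A simplified sketch of the reverse-order placement in change 2, with hypothetical names; the real code also reserves the per-GPU graph sizes from change 1 before placing layers:

```go
package main

import "fmt"

// placeLayers assigns each layer to the last GPU that still has room,
// spilling to earlier GPUs only when the later ones are full.
func placeLayers(layerSizes, gpuFree []uint64) []int {
	assignment := make([]int, len(layerSizes)) // layer index -> GPU index
	for i, size := range layerSizes {
		assignment[i] = -1 // -1 means "not offloaded" in this sketch
		for j := len(gpuFree) - 1; j >= 0; j-- {
			if gpuFree[j] >= size {
				gpuFree[j] -= size
				assignment[i] = j
				break
			}
		}
	}
	return assignment
}

func main() {
	layers := make([]uint64, 49)
	for i := range layers {
		layers[i] = 210 << 20 // ~210 MiB per layer, purely illustrative
	}
	free := []uint64{10 << 30, 10 << 30} // two GPUs with ~10 GiB usable each
	counts := map[int]int{}
	for _, gpu := range placeLayers(layers, free) {
		counts[gpu]++
	}
	fmt.Println(counts) // map[0:1 1:48] -- the 1,48 split described above
}
```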
Results:
✅ gemma3:4b: Single GPU (no split, was already working)
✅ gemma3:12b: 1,48 layer split (improved from 25,24 split)
- GPU 0: 1 layer, 610 MiB (down from 4156 MiB)
- GPU 1: 48 layers, 9857 MiB (primary)
- Total actual: 10.5 GiB (fits in single K80's 11.2 GiB)
Memory estimate reduced from 13.0 GiB → 11.9 GiB, enabling more models
to run on single GPU with better performance (no cross-GPU overhead).
Files modified:
- llm/memory.go: Core allocation logic (lines 230-288)
- llm/CLAUDE.md: Detailed implementation guide
- CLAUDE.md: Project status and results summary
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Phase 9 successfully resolved runtime loading issues where CUDA backend
failed to load due to undefined Flash Attention symbols.
Solution:
- Disabled flash attention helper functions (lines 126-274 in fattn.cu)
- Simplified ggml_cuda_flash_attn_ext() to abort immediately for CC 3.7
- Added GGML_UNUSED macros to prevent compiler warnings
- Added ggml_backend_cuda_score() function for backend selection
Testing Results:
✅ CUDA backend loads without undefined symbol errors
✅ GPU layers offload correctly (e.g., 35/35 for gemma3:4b)
✅ Fast GPU inference confirmed working
Flash Attention is not supported on CC 3.7 (requires Volta/Tensor Cores).
If attempted, it gracefully aborts with a clear error message.
All 9 phases of CC 3.7-only optimization now complete and tested.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add Gemma3n model support with text generation capabilities
- Add new CUDA mean operations for improved performance
- Add macOS documentation and performance tests
- Update LLAMA patches for ROCm/CUDA compatibility
- Fix various model conversion and processing issues
- Update CI workflows and build configurations
- Add library model tests and Shakespeare test data
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
"POST predict" basically means that the runner has crashed, which
can have many causes. However, many people think this is a specific
error and either report only this message or group unrelated bugs
together. This replaces it with a friendlier, more helpful message.
This is a partial revert of 0478d44 "Fixed over vram allcation dure to
small initial layer sizes."
Previously we used the size of the first layer as an extra reserved
amount of space to buffer our memory estimates. The above commit
changed this to use the largest layer. However, this had performance
impacts on more models than the original commit was trying to fix.
There is just a heuristic without an ideal solution so this goes back
to the historic behavior.
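A tiny sketch of the restored heuristic, with hypothetical names:

```go
package main

import "fmt"

// reservedBuffer returns the extra headroom added to the memory estimate.
// The historic behavior (restored here) uses the first layer's size; the
// reverted commit had switched this to the largest layer.
func reservedBuffer(layerSizes []uint64) uint64 {
	if len(layerSizes) == 0 {
		return 0
	}
	return layerSizes[0]
}

func main() {
	// Illustrative layer sizes in bytes; the reserve is the first entry.
	fmt.Println(reservedBuffer([]uint64{300 << 20, 512 << 20, 512 << 20}))
}
```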
Fixes: #10765, #10756, #10752, #10726
Currently, when the backend is created, the tensors are loaded at the
same time, which is a slow operation. This separates them to be two
steps:
- Create backend, including enumerating tensors and memory allocation
- Loading tensor data
This allows more flexibility in managing model loading.
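An illustrative sketch of the two-step flow; the interface and names are assumptions, not the actual ml.Backend API:

```go
package main

import "fmt"

type backend interface {
	// LoadTensorData streams the tensor contents after the backend (tensor
	// enumeration and memory allocation) has already been created.
	LoadTensorData() error
}

type dummyBackend struct{}

func (dummyBackend) LoadTensorData() error { return nil }

// newBackend creates the backend cheaply: it enumerates tensors and allocates
// memory but does not read the (large) tensor data yet.
func newBackend(modelPath string) (backend, error) {
	fmt.Println("created backend for", modelPath)
	return dummyBackend{}, nil
}

func main() {
	b, err := newBackend("model.gguf")
	if err != nil {
		panic(err)
	}
	// The slow part happens separately, giving callers flexibility over when
	// (and whether) the weights are actually loaded.
	if err := b.LoadTensorData(); err != nil {
		panic(err)
	}
}
```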
The Llama engine always places vision projectors on the first GPU
if one exists. However, the Ollama engine groups it with the output
layer, which means the projector is only offloaded if all other layers
are offloaded. The memory estimation code always assumes the former
layout - this changes it to use the correct layout based on the engine.
This addresses two impacts of the current behavior:
- In multi-GPU setups, we can crash with OOM errors when we try to
allocate memory on a full GPU while another still has space.
- If the vision projector is large, it may prevent us from offloading
anything when we could have fit some of the text layers.
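A condensed sketch of the layout difference, with assumed types; the real estimation code is considerably more detailed:

```go
package main

import "fmt"

type engine int

const (
	llamaEngine  engine = iota // places the vision projector on the first GPU
	ollamaEngine               // groups the projector with the output layer
)

// projectorGPU decides which GPU the projector's memory is attributed to when
// estimating, so the estimate matches what the engine will actually do.
func projectorGPU(e engine, firstGPU, outputGPU int) int {
	if e == llamaEngine {
		return firstGPU
	}
	return outputGPU
}

func main() {
	fmt.Println("llama engine:", projectorGPU(llamaEngine, 0, 1))
	fmt.Println("ollama engine:", projectorGPU(ollamaEngine, 0, 1))
}
```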
In some cases, if we fail to assign a piece of the model to a GPU then
we lose track of this data. Although it doesn't change the memory
allocation, it does affect the total size of the model reported by
tools such as ollama ps (and also the percent offloaded).
This makes it look like setting num_gpu isn't reflected in ollama ps.
That isn't true, but the offloaded percentage may appear not to change.
Spreading the model across more GPUs will continue to affect the
reported total size of the model.
If a model is loading, and the request context is canceled during the load
by a client closing the connection, and another request is inbound for the
same model with a different configuration (context size, etc.) thus requiring
a reload, two unload events can be in flight. The first shuts down the
original model load, but the second one caused the loss of the new
reloading runner reference, thus triggering the leak.
The primary fix is detecting the duplicate unload and ignoring the second
instance. The load routine is also hardened to ensure we detect
clobbering an already present runner and unload it with a warning.
This reduces the size of our Windows installer payloads by ~256M by dropping
support for nvidia drivers older than Feb 2023. Hardware support is unchanged.
Linux default bundle sizes are reduced by ~600M to 1G.
* Move quantization logic to GGML via new backend
This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.
* Remove "add model quantizations"
This is no longer needed now that quantization is implemented in Go+GGML code directly.
Some options listed in api/types.go are not supported in
newer models, or have been deprecated in the past. This is
the first of a series of PRs to clean up the API options
This hides the LlamaServer blank window when chatting outside of the terminal (for example with an app like Msty). This has no other side effects when invoking it the regular way.
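A hedged, Windows-only sketch of how a subprocess window can be suppressed; illustrative of the idea rather than the exact change:

```go
//go:build windows

// Assumed helper name; only the SysProcAttr.HideWindow mechanism is the point.
package main

import (
	"os/exec"
	"syscall"
)

// hiddenCommand builds a command whose console window is suppressed, so no
// blank window flashes up when the runner is started by a GUI application.
func hiddenCommand(name string, args ...string) *exec.Cmd {
	cmd := exec.Command(name, args...)
	cmd.SysProcAttr = &syscall.SysProcAttr{HideWindow: true}
	return cmd
}

func main() {
	_ = hiddenCommand("ollama-runner.exe", "--help")
}
```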
For all search path env vars make sure our dirs are first
to avoid potentially finding other incompatible libraries
on the users system.
Also fixes a minor build script glitch for windows rocm
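A minimal sketch of putting our directory first in a search-path variable so it wins over incompatible system copies (the directory shown is illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// prepend returns the value for a search-path variable with dir placed first,
// keeping any existing entries after it.
func prepend(key, dir string) string {
	if existing := os.Getenv(key); existing != "" {
		return dir + string(os.PathListSeparator) + existing
	}
	return dir
}

func main() {
	fmt.Println("LD_LIBRARY_PATH=" + prepend("LD_LIBRARY_PATH", "/usr/local/lib/ollama"))
}
```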
This enhances our logging in the scheduler. The initial "waiting for server" log
no longer claims an initial error state (now "not responding" which better reflects
the actual state). Runners now have slog wiring to report more details about the
runner, including PID.
No functional change. Many different done reasons can be set at the runner
level, so rather than obscuring them we should return them to the server
process and let it choose what to do with the done reason. This separates
the API concerns from the runner.
Gemma3 uses sliding windows for its context on 5/6 layers, significantly
reducing memory usage but leading to uneven usage across layers,
which makes allocation to the correct GPU difficult. We currently
estimate very conservatively by assuming all layers are consistent
at the max size.
Llama3.2-vision is also inconsistent between self attention and cross
attention layers - at the moment, we calculate the correct total size
and then average this across layers. In some cases, this may lead
to crashes if a large layer is placed on a GPU sized by the average.
This allows memory estimation to calculate per-layer KV cache size
and take this into account when placing layers onto GPUs. We already do
this for weights that vary per-tensor, so this is a logical extension.
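A simplified sketch of per-layer KV sizing, with assumed names and illustrative numbers: each layer's cache is summed individually instead of multiplying one max layer size by the layer count:

```go
package main

import "fmt"

func totalKV(perLayerKV []uint64) uint64 {
	var total uint64
	for _, kv := range perLayerKV {
		total += kv
	}
	return total
}

func maxKV(perLayerKV []uint64) uint64 {
	var m uint64
	for _, kv := range perLayerKV {
		if kv > m {
			m = kv
		}
	}
	return m
}

func main() {
	// Illustrative Gemma3-like pattern: 5 of every 6 layers use a small
	// sliding-window cache, 1 of 6 uses the full-context cache.
	perLayer := make([]uint64, 48)
	for i := range perLayer {
		if i%6 == 5 {
			perLayer[i] = 512 << 20 // full-context layer
		} else {
			perLayer[i] = 64 << 20 // sliding-window layer
		}
	}
	fmt.Printf("per-layer sum: %.1f GiB\n", float64(totalKV(perLayer))/(1<<30))
	fmt.Printf("old max-based estimate: %.1f GiB\n", float64(maxKV(perLayer)*uint64(len(perLayer)))/(1<<30))
}
```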
Fixes #9730
Fixes #9890
This commit refactors the LLM subsystem by removing internal subprocess
request and response types. It consolidates duplicate type definitions
across the codebase, moving them to centralized locations. The change also
standardizes interfaces between components, simplifies the ServerStatusResp
struct, and moves the ParseDurationMs function to a common package. This
cleanup reduces code duplication between different runner implementations
(llamarunner and ollamarunner).
We sometimes tokenize partial strings. For example, with
multimodal inputs, we split the input string around the images
and then tokenize each piece. In these cases, we should only add
the special tokens on the first piece.
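An illustrative sketch of the rule; the tokenizer interface here is an assumption:

```go
package main

import "fmt"

type tokenizer interface {
	Tokenize(text string, addSpecial bool) []int
}

func tokenizePieces(tok tokenizer, pieces []string) [][]int {
	out := make([][]int, len(pieces))
	for i, p := range pieces {
		// Only the first piece receives special tokens such as BOS; adding
		// them to every piece would corrupt the combined sequence.
		out[i] = tok.Tokenize(p, i == 0)
	}
	return out
}

// fakeTokenizer stands in for the real implementation.
type fakeTokenizer struct{}

func (fakeTokenizer) Tokenize(text string, addSpecial bool) []int {
	toks := make([]int, 0, len(text)+1)
	if addSpecial {
		toks = append(toks, 1) // pretend 1 is BOS
	}
	for range text {
		toks = append(toks, 42) // placeholder token id per rune
	}
	return toks
}

func main() {
	fmt.Println(tokenizePieces(fakeTokenizer{}, []string{"before image", "after image"}))
}
```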
* Include unified vision layers in memory prediction
For newer vision models with a single gguf, include
the projection estimates.
* Adjust CLI to handle both styles of vision model metadata
* Wire up new tokenizers for new engine
If we're loading the new engine, utilize the new model
text processor instead of calling into cgo wrappers for
llama.cpp. This also cleans up some tech debt from the
older tokenization flow for the C++ server which was
no longer used.
This also adjusts the grammar handling logic to pass
through to the new engine instead of utilizing the cgo
schema to grammar call.
* Lay foundation for auto selection of new engine
provides a better approach to #9088 that will attempt to
evaluate symlinks (important for macOS where 'ollama' is
often a symlink), but use the result of os.Executable()
as a fallback in scenarios where filepath.EvalSymlinks
fails due to permission errors or other issues
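A minimal sketch of that fallback order using the standard-library calls named above:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func executablePath() (string, error) {
	exe, err := os.Executable()
	if err != nil {
		return "", err
	}
	// EvalSymlinks can fail with permission errors or on long Windows paths;
	// in that case the unresolved path is still a usable fallback.
	if resolved, err := filepath.EvalSymlinks(exe); err == nil {
		return resolved, nil
	}
	return exe, nil
}

func main() {
	p, err := executablePath()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(p)
}
```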
In some cases, the directories in the executable path read by
filepath.EvalSymlinks are not accessible, resulting in permission
errors that prevent models from running. It also doesn't work well
with long paths on Windows, again resulting in errors. This change
removes filepath.EvalSymlinks when accessing os.Executable()
altogether.
This provides integration with the new Ollama engine
(5824541 next ollama runner (#7913)) and the rest of the Ollama
infrastructure such as the runner and Ollama server.
In addition, it also builds out the KV cache infrastructure to
support requirements of how Ollama runs models such as:
- Parallel processing
- Memory management for defragmentation and shifting
- Multi-modal models
Both old and new engines continue to be supported. By default, only
the old engine is used. To enable the new engine:
Start the server with the OLLAMA_NEW_ENGINE environment variable set:
OLLAMA_NEW_ENGINE=1 ./ollama serve
Start a model that is supported by the Ollama engine. This one is Llama 3.1 8b Q4_K_M:
./ollama run jessegross/llama3.1
feat: add new Ollama engine using ggml through cgo
This change introduces a new way to run pretrained models. It introduces 3 high level interfaces and a bunch of smaller helper interfaces to facilitate this.
- `model.Model` defines the interface for a model architecture. Models such as `llama` and `mllama`, which are provided as examples, can implement the model's forward propagation in the `Forward` method. This method will be called to generate completions. This interface can be found in `model/model.go`
- `ml.Backend` defines the interface for a backend tensor library, in this case `ggml`. Among other things, a Backend is responsible for loading a pretrained model into hardware (GPU, CPU, etc) and providing an interface for Models to access loaded tensors. This interface can be found in `ml/backend.go`
- `ml.Tensor` defines the interface for a tensor and tensor operations
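A hedged, condensed sketch of how the three interfaces relate; the real definitions in model/model.go and ml/backend.go carry many more methods:

```go
package main

// Tensor is a handle to an n-dimensional array plus the operations models
// compose in their forward pass (matmul, add, softmax, ...), elided here.
type Tensor interface {
	Shape() []int
}

// Backend loads a pretrained model into hardware and hands tensors to models.
type Backend interface {
	Get(name string) Tensor
}

// Model implements one architecture; Forward is called to generate
// completions. The parameters here are simplified assumptions.
type Model interface {
	Forward(b Backend, inputs []int32) (Tensor, error)
}

func main() {}
```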
This is the first implementation of the new engine. Follow up PRs will implement more features:
- non-greedy sampling (#8410)
- integration with Ollama and KV caching (#8301)
- more model support (#9080) with more coming soon
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
* add build to .dockerignore
* test: only build one arch
* add build to .gitignore
* fix ccache path
* filter amdgpu targets
* only filter if autodetecting
* Don't clobber gpu list for default runner
This ensures the GPU specific environment variables are set properly
* explicitly set CXX compiler for HIP
* Update build_windows.ps1
This isn't complete, but is close. Dependencies are missing, and it only builds the "default" preset.
* build: add ollama subdir
* add .git to .dockerignore
* docs: update development.md
* update build_darwin.sh
* remove unused scripts
* llm: add cwd and build/lib/ollama to library paths
* default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS
* add additional cmake output vars for msvc
* interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12
* remove unncessary filepath.Dir, cleanup
* add hardware-specific directory to path
* use absolute server path
* build: linux arm
* cmake install targets
* remove unused files
* ml: visit each library path once
* build: skip cpu variants on arm
* build: install cpu targets
* build: fix workflow
* shorter names
* fix rocblas install
* docs: clean up development.md
* consistent build dir removal in development.md
* silence -Wimplicit-function-declaration build warnings in ggml-cpu
* update readme
* update development readme
* llm: update library lookup logic now that there is one runner (#8587)
* tweak development.md
* update docs
* add windows cuda/rocm tests
---------
Co-authored-by: jmorganca <jmorganca@gmail.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>