Mirror of https://github.com/dogkeeper886/ollama37.git
This commit represents a complete rework after pulling the latest changes from the official ollama/ollama repository and re-applying the Tesla K80 compatibility patches.

## Key Changes

### CUDA Compute Capability 3.7 Support (Tesla K80)
- Added sm_37 (compute 3.7) to CMAKE_CUDA_ARCHITECTURES in CMakeLists.txt
- Updated CMakePresets.json to include compute 3.7 in the "CUDA 11" preset
- Using 37-virtual (PTX with JIT compilation) for maximum compatibility

### Legacy Toolchain Compatibility
- **NVIDIA Driver**: 470.256.02 (last version supporting Kepler/K80)
- **CUDA Version**: 11.4.4 (last CUDA 11.x release supporting compute 3.7)
- **GCC Version**: 10.5.0 (required by CUDA 11.4's host_config.h)

### CPU Architecture Trade-offs
Because of the GCC 10.5 ceiling, newer CPU optimizations are sacrificed:
- Alderlake CPU variant enabled WITHOUT AVX_VNNI (requires GCC 11+)
- Still supports: SSE4.2, AVX, F16C, AVX2, BMI2, FMA
- Performance impact: ~3-7% on newer CPUs (acceptable for K80 compatibility)

### Build System Updates
- Modified ml/backend/ggml/ggml/src/ggml-cuda/CMakeLists.txt for compute 3.7
- Added the -Wno-deprecated-gpu-targets flag to suppress warnings
- Updated ml/backend/ggml/ggml/src/CMakeLists.txt for Alderlake without AVX_VNNI

### Upstream Sync
Merged the latest llama.cpp changes, including:
- Enhanced KV cache management with ISWA and hybrid memory support
- Improved multi-modal support (mtmd framework)
- New model architectures (Gemma3, Llama4, Qwen3, etc.)
- GPU backend improvements for CUDA, Metal, and ROCm
- Updated quantization support and GGUF format handling

### Documentation
- Updated CLAUDE.md with comprehensive build instructions
- Documented toolchain constraints and CPU architecture trade-offs
- Removed outdated CI/CD workflows (tesla-k80-*.yml)
- Cleaned up temporary development artifacts

## Rationale
This fork maintains Tesla K80 GPU support (compute 3.7), which was dropped from official Ollama because of its legacy driver and CUDA requirements. The toolchain forms a rigid dependency chain:

- K80 → Driver 470 → CUDA 11.4 → GCC 10 → no AVX_VNNI

We accept the loss of cutting-edge CPU optimizations in exchange for running modern LLMs on legacy but still capable Tesla K80 hardware (12 GB VRAM per GPU).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
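For reference, the core of the compute 3.7 change is adding a `37-virtual` entry to the CUDA architecture list and silencing the resulting deprecation warning. Below is a minimal CMake sketch of what that looks like, assuming the architectures are controlled via `CMAKE_CUDA_ARCHITECTURES` as described above; the other architectures in the list are illustrative, not the exact set used in this repository.

```cmake
# Tesla K80 is compute capability 3.7. "37-virtual" embeds PTX only, so the
# 470-series driver JIT-compiles kernels for sm_37 at load time.
set(CMAKE_CUDA_ARCHITECTURES "37-virtual;50;61;70;75;80"
    CACHE STRING "CUDA architectures, including Tesla K80 via PTX/JIT")

# sm_37 is deprecated in CUDA 11.x; stop nvcc from warning on every translation unit.
string(APPEND CMAKE_CUDA_FLAGS " -Wno-deprecated-gpu-targets")
```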
Makefile
UPSTREAM=https://github.com/ggml-org/llama.cpp.git
WORKDIR=llama/vendor
FETCH_HEAD=7049736b2dd9011bf819e298b844ebbc4b5afdc9

.PHONY: help
help:
	@echo "Available targets:"
	@echo "  sync            Sync with upstream repositories"
	@echo "  checkout        Checkout upstream repository"
	@echo "  apply-patches   Apply patches to local repository"
	@echo "  format-patches  Format patches from local repository"
	@echo "  clean           Clean local repository"
	@echo
	@echo "Example:"
	@echo "  make -f $(lastword $(MAKEFILE_LIST)) clean apply-patches sync"
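
# sync: regenerate the files in this tree that are derived from the vendored
# llama.cpp checkout in $(WORKDIR).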
.PHONY: sync
sync: llama/build-info.cpp ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.metal
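
# Stamp the pinned upstream commit into build-info.cpp.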
llama/build-info.cpp: llama/build-info.cpp.in llama/llama.cpp
	sed -e 's|@FETCH_HEAD@|$(FETCH_HEAD)|' <$< >$@
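
# Re-embed the Metal shader source via go generate.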
ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.metal: ml/backend/ggml/ggml
	go generate ./$(@D)
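
# Copy the vendored sources into the tree; each destination's .rsync-filter
# controls which files are kept.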
.PHONY: llama/llama.cpp
llama/llama.cpp: llama/vendor
	rsync -arvzc --delete -f "include LICENSE" -f "merge $@/.rsync-filter" $(addprefix $<,/LICENSE /) $@

.PHONY: ml/backend/ggml/ggml
ml/backend/ggml/ggml: llama/vendor
	rsync -arvzc --delete -f "include LICENSE" -f "merge $@/.rsync-filter" $(addprefix $<,/LICENSE /ggml/) $@
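
# For each llama/patches/<name>.patch, a hidden sentinel
# llama/patches/.<name>.patched records that it has been applied.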
PATCHES=$(wildcard llama/patches/*.patch)
PATCHED=$(join $(dir $(PATCHES)), $(addsuffix ed, $(addprefix ., $(notdir $(PATCHES)))))

.PHONY: apply-patches
.NOTPARALLEL:
apply-patches: $(PATCHED)
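
# Apply one patch to the vendor checkout with git am -3 and create its
# sentinel; on conflict, print the recovery steps and stop.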
llama/patches/.%.patched: llama/patches/%.patch
	@if git -c user.name=nobody -c 'user.email=<>' -C $(WORKDIR) am -3 $(realpath $<); then \
		touch $@; \
	else \
		echo "Patch failed. Resolve any conflicts then continue."; \
		echo "1. Run 'git -C $(WORKDIR) am --continue'"; \
		echo "2. Run 'make -f $(lastword $(MAKEFILE_LIST)) format-patches'"; \
		echo "3. Run 'make -f $(lastword $(MAKEFILE_LIST)) clean apply-patches'"; \
		exit 1; \
	fi
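
# Fetch upstream and force-checkout the pinned commit in the vendor tree,
# cloning it first if needed.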
.PHONY: checkout
checkout: $(WORKDIR)
	git -C $(WORKDIR) fetch
	git -C $(WORKDIR) checkout -f $(FETCH_HEAD)

$(WORKDIR):
	git clone $(UPSTREAM) $(WORKDIR)
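
# Regenerate the .patch files in llama/patches from the commits applied on
# top of $(FETCH_HEAD) in the vendor tree.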
.PHONY: format-patches
format-patches: llama/patches
	git -C $(WORKDIR) format-patch \
		--no-signature \
		--no-numbered \
		--zero-commit \
		-o $(realpath $<) \
		$(FETCH_HEAD)
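
# Abort any in-progress git am in the vendor tree and drop the applied-patch
# sentinels (the checkout prerequisite runs first, restoring $(FETCH_HEAD)).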
.PHONY: clean
clean: checkout
	@git -C $(WORKDIR) am --abort || true
	$(RM) llama/patches/.*.patched