Mirror of https://github.com/dogkeeper886/ollama37.git (synced 2025-12-09 23:37:06 +00:00)
Set LD_LIBRARY_PATH in all workflow steps

Ensure the CUDA 11.4 libraries are found at both compile time and runtime. This fixes the issue where the GGML CUDA backend (libggml-cuda.so) fails to load when running 'ollama serve'.

Library paths added:
- /usr/local/cuda-11.4/lib64
- /usr/local/cuda-11.4/targets/x86_64-linux/lib
- /usr/lib64
- /usr/local/lib64

Updated workflows:
- tesla-k80-ci.yml: CMake configure, C++/CUDA build, Go build, binary verify
- tesla-k80-single-gpu-tests.yml: all test execution steps
- tesla-k80-multi-gpu-tests.yml: all test execution steps
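A minimal sketch of what each workflow step would set, assuming the paths listed above (the exact step names and the rest of the workflow are not shown here):

```shell
# Sketch: prepend the CUDA 11.4 and system library directories so the
# dynamic linker can resolve libggml-cuda.so at build time and when
# 'ollama serve' runs. Preserves any pre-existing LD_LIBRARY_PATH.
export LD_LIBRARY_PATH="/usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

echo "$LD_LIBRARY_PATH"
```

In a GitHub Actions workflow this line would typically appear either inline in each `run:` step or once via an `env:` block on the job, so the same search path applies to the CMake configure, C++/CUDA build, Go build, and test-execution steps alike.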