mirror of
https://github.com/dogkeeper886/ollama37.git
synced 2025-12-10 15:57:04 +00:00
Add LD_LIBRARY_PATH to GitHub Actions workflows for CUDA library discovery
Set LD_LIBRARY_PATH in all workflow steps to ensure the CUDA 11.4 libraries are found at both compile time and runtime. This fixes the issue where the GGML CUDA backend (libggml-cuda.so) fails to load when running 'ollama serve'.

Library paths added:
- /usr/local/cuda-11.4/lib64
- /usr/local/cuda-11.4/targets/x86_64-linux/lib
- /usr/lib64
- /usr/local/lib64

Updated workflows:
- tesla-k80-ci.yml: CMake configure, C++/CUDA build, Go build, binary verify
- tesla-k80-single-gpu-tests.yml: all test execution steps
- tesla-k80-multi-gpu-tests.yml: all test execution steps
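As a local sanity check (a sketch, not part of this commit), the same search path can be exported before running the binary. The dynamic loader tries the colon-separated directories of LD_LIBRARY_PATH in order, so the CUDA 11.4 directories listed in the commit message must appear on it; the paths below are taken from the commit message and assume that CUDA install location:

```shell
# Sketch: export the same library search path the workflows set.
# Paths are from the commit message; adjust for your CUDA install.
export LD_LIBRARY_PATH="/usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# The loader searches each directory in order; print the search list,
# one directory per line, to confirm the ordering.
echo "$LD_LIBRARY_PATH" | tr ':' '\n'
```

The `${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}` suffix preserves any pre-existing value rather than clobbering it, which matters on machines that already set library paths.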
This commit is contained in:

.github/workflows/tesla-k80-ci.yml (vendored): 7 additions
@@ -28,20 +28,27 @@ jobs:
           CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ cmake -B build
         env:
           CMAKE_BUILD_TYPE: Release
+          LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64

       - name: Build C++/CUDA components
         run: |
           CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ cmake --build build -j$(nproc)
         timeout-minutes: 30
+        env:
+          LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64

       - name: Build Go binary
         run: |
           go build -v -o ollama .
+        env:
+          LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64

       - name: Verify binary was created
         run: |
           ls -lh ollama
           ./ollama --version
+        env:
+          LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64

       - name: Upload ollama binary as artifact
         uses: actions/upload-artifact@v4
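A note on the pattern above: GitHub Actions also supports a job-level `env:` block, which every step in the job inherits, so the same path does not have to be repeated per step. A minimal sketch of that alternative (the job name and runner label are illustrative, not from this commit):

```yaml
jobs:
  build:
    runs-on: [self-hosted, tesla-k80]   # illustrative runner label
    env:
      # Inherited by every step in the job, including build and verify steps.
      LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64
    steps:
      - name: Build Go binary
        run: go build -v -o ollama .
```

Per-step `env:` (as in the commit) keeps each step self-describing and lets steps override the path individually; job-level `env:` trades that for less repetition.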