Add LD_LIBRARY_PATH to GitHub Actions workflows for CUDA library discovery

Set LD_LIBRARY_PATH in all workflow steps so that the CUDA 11.4 libraries
are found at both compile time and runtime. This fixes the issue where
the GGML CUDA backend (libggml-cuda.so) fails to load when running
'ollama serve'.
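
As an illustration, a verification step along these lines could confirm
that the backend's shared-library dependencies resolve once the variable
is set. This is a sketch only, not part of the commit; the step name and
the assumption that libggml-cuda.so sits somewhere under build/ are mine:

    - name: Check CUDA backend resolves
      run: |
        # Sketch: fail if any dependency of the built CUDA backend is
        # still unresolved under the new library search path.
        find build -name 'libggml-cuda.so' -exec ldd {} \; | grep 'not found' && exit 1 || true
      env:
        LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64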

Library paths added (a directory sanity check is sketched below):
- /usr/local/cuda-11.4/lib64
- /usr/local/cuda-11.4/targets/x86_64-linux/lib
- /usr/lib64
- /usr/local/lib64
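
The sanity check mentioned above would simply confirm that these
directories exist on the self-hosted runner; again a sketch, not part of
this commit:

    - name: Check CUDA library directories
      run: |
        # Warn about any expected library directory that is missing.
        for d in /usr/local/cuda-11.4/lib64 \
                 /usr/local/cuda-11.4/targets/x86_64-linux/lib \
                 /usr/lib64 /usr/local/lib64; do
          [ -d "$d" ] || echo "WARNING: missing library directory: $d"
        done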

Updated workflows (a job-level env alternative is sketched after this list):
- tesla-k80-ci.yml: CMake configure, C++/CUDA build, Go build, binary verify
- tesla-k80-single-gpu-tests.yml: All test execution steps
- tesla-k80-multi-gpu-tests.yml: All test execution steps
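
Because every step ends up with the same value, an equivalent option
would be a single job-level env block, which GitHub Actions applies to
all steps in a job. The snippet below sketches that alternative and is
not what this commit does; the job name and runner label are hypothetical:

    jobs:
      build:                   # hypothetical job name
        runs-on: self-hosted   # assumption: Tesla K80 runner is self-hosted
        env:
          LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64
        steps:
          - name: Build Go binary
            run: go build -v -o ollama .
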
Author: Shang Chieh Tseng
Date:   2025-10-30 13:28:44 +08:00
Parent: bc8992d014
Commit: c022e79e77
3 changed files with 25 additions and 0 deletions

tesla-k80-ci.yml

@@ -28,20 +28,27 @@ jobs:
           CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ cmake -B build
         env:
           CMAKE_BUILD_TYPE: Release
+          LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64
       - name: Build C++/CUDA components
         run: |
           CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ cmake --build build -j$(nproc)
         timeout-minutes: 30
+        env:
+          LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64
       - name: Build Go binary
         run: |
           go build -v -o ollama .
+        env:
+          LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64
       - name: Verify binary was created
         run: |
           ls -lh ollama
           ./ollama --version
+        env:
+          LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64
       - name: Upload ollama binary as artifact
         uses: actions/upload-artifact@v4