Mirror of https://github.com/dogkeeper886/ollama37.git
Fix GitHub Actions workflows to upload build libraries and remove LD_LIBRARY_PATH
Changes:
- Update tesla-k80-ci.yml to upload build/lib/ollama/, which contains the CUDA backend
- Remove all LD_LIBRARY_PATH environment variables (no longer needed with RPATH)
- Test workflows now receive libggml-cuda.so, enabling GPU offload

This fixes the issue where test workflows couldn't offload to the GPU because the CUDA backend library wasn't included in the artifact.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
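The upload side of the change is not shown in the hunks below. As a rough sketch only (the step and artifact names are illustrative, not taken from the repository), shipping the CUDA backend alongside the binary might look like this:

```yaml
      # Illustrative only: upload the binary together with the CUDA backend
      # directory (build/lib/ollama/, containing libggml-cuda.so) so that
      # downstream test jobs can offload work to the GPU.
      - name: Upload build artifact
        uses: actions/upload-artifact@v4
        with:
          name: ollama-build          # hypothetical artifact name
          path: |
            ollama
            build/lib/ollama/
```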
.github/workflows/tesla-k80-ci.yml
@@ -30,8 +30,6 @@ jobs:
           chmod +x ollama
           ls -lh ollama
           ./ollama --version
-        env:
-          LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64
 
       - name: Verify multi-GPU setup
         run: |
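The env: block is dropped because the binary is now expected to locate its CUDA libraries through the RPATH embedded at build time, as described in the commit message. A quick sanity check one could add (not part of this commit; the step name and exact commands are assumptions) is:

```yaml
      # Illustrative only: confirm the binary embeds an RPATH/RUNPATH entry,
      # which is what makes the LD_LIBRARY_PATH override unnecessary.
      - name: Check embedded RPATH
        run: |
          readelf -d ollama | grep -E 'RPATH|RUNPATH'
          ldd ollama | grep -iE 'cuda|ggml' || true
```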
@@ -51,21 +49,15 @@ jobs:
           go build -o ../../test-runner .
           cd ../..
           ls -lh test-runner
-        env:
-          LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64
 
       - name: Validate multi-GPU test configuration
         run: |
           ./test-runner validate --config test/config/models.yaml --profile multi-gpu
-        env:
-          LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64
 
       - name: Run multi-GPU tests
         run: |
           ./test-runner run --profile multi-gpu --config test/config/models.yaml --output test-report-multi-gpu --verbose
         timeout-minutes: 60
-        env:
-          LD_LIBRARY_PATH: /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64
 
       - name: Check multi-GPU test results
         run: |
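On the consuming side, the test job is described as receiving libggml-cuda.so through the uploaded artifact rather than through LD_LIBRARY_PATH. A minimal sketch of what that could look like (artifact name and directory layout are assumptions, not taken from the workflow) is:

```yaml
      # Illustrative only: download the same artifact so the CUDA backend
      # ships alongside the binary, then verify it is actually there before
      # running the GPU tests.
      - name: Download build artifact
        uses: actions/download-artifact@v4
        with:
          name: ollama-build          # hypothetical artifact name
      - name: Confirm CUDA backend is present
        run: |
          ls -lh build/lib/ollama/
          test -f build/lib/ollama/libggml-cuda.so
```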