Changes:
- Update tesla-k80-ci.yml to upload build/lib/ollama/, which contains the CUDA backend
- Remove all LD_LIBRARY_PATH environment variables (no longer needed now that RPATH is set)
- Test workflows now receive libggml-cuda.so, enabling GPU offload
This fixes the issue where test workflows couldn't offload to GPU because the
CUDA backend library wasn't included in the artifact.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
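A minimal sketch of the corresponding upload step in tesla-k80-ci.yml; the artifact name and the binary path alongside build/lib/ollama/ are assumptions for illustration, not taken from the actual workflow file:

```yaml
# Illustrative upload step: include the CUDA backend directory in the
# artifact so downstream test workflows can offload to GPU.
- name: Upload build artifacts
  uses: actions/upload-artifact@v4
  with:
    name: ollama-build        # assumed artifact name
    path: |
      ollama                  # assumed binary path
      build/lib/ollama/       # contains libggml-cuda.so
```

With RPATH baked into the binary at link time, no LD_LIBRARY_PATH is needed in the consuming steps as long as the directory layout is preserved.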
Set LD_LIBRARY_PATH in all workflow steps so that the CUDA 11.4 libraries
are found at both compile time and run time. This fixes the issue where
the GGML CUDA backend (libggml-cuda.so) fails to load when running
'ollama serve'.
Library paths added:
- /usr/local/cuda-11.4/lib64
- /usr/local/cuda-11.4/targets/x86_64-linux/lib
- /usr/lib64
- /usr/local/lib64
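Rather than repeating the variable in each step, the paths above can be set once at the job level; this is a sketch of that pattern, not the exact layout of the workflow files:

```yaml
# Illustrative job-level env block: applies to every step in the job,
# covering both the CMake/CUDA build steps and 'ollama serve' at runtime.
env:
  LD_LIBRARY_PATH: >-
    /usr/local/cuda-11.4/lib64:/usr/local/cuda-11.4/targets/x86_64-linux/lib:/usr/lib64:/usr/local/lib64
```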
Updated workflows:
- tesla-k80-ci.yml: CMake configure, C++/CUDA build, Go build, binary verify
- tesla-k80-single-gpu-tests.yml: All test execution steps
- tesla-k80-multi-gpu-tests.yml: All test execution steps
- Replace actions/download-artifact@v4 with dawidd6/action-download-artifact@v6
- The default download-artifact action only works within the same workflow run
- The third-party action enables downloading artifacts from a different workflow
- Both test workflows now download from latest successful tesla-k80-ci.yml run
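A hedged sketch of the download step in the test workflows, assuming the artifact name used by the CI workflow (the `workflow` and `workflow_conclusion` inputs are real options of dawidd6/action-download-artifact):

```yaml
# Illustrative download step: fetch the build artifact produced by the
# latest successful run of the CI workflow, not the current run.
- name: Download build artifact from CI
  uses: dawidd6/action-download-artifact@v6
  with:
    workflow: tesla-k80-ci.yml
    workflow_conclusion: success
    name: ollama-build        # assumed artifact name
```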
- Rename tesla-k80-tests.yml to tesla-k80-single-gpu-tests.yml for clarity
- Add new tesla-k80-multi-gpu-tests.yml workflow for large models
- Add multi-gpu profile to test/config/models.yaml with gemma3:27b and gpt-oss:20b
- Multi-GPU workflow includes GPU count verification and weekly schedule
- Profile-specific validation allows multi-GPU splits for large models
- Separate workflows keep CI efficient: quick single-GPU tests vs. thorough multi-GPU tests
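A hypothetical sketch of the multi-gpu profile added to test/config/models.yaml; only the model names come from this log, and the schema (profile and model keys) is an assumption:

```yaml
# Illustrative profile entry: large models that require splitting
# across both GPUs of a Tesla K80.
profiles:
  multi-gpu:
    models:
      - gemma3:27b
      - gpt-oss:20b
```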