Mirror of https://github.com/dogkeeper886/ollama37.git (synced 2025-12-09 23:37:06 +00:00)
Fix test-runner to inherit LD_LIBRARY_PATH for CUDA backend loading
The test-runner was starting the ollama server subprocess without inheriting environment variables, causing the GGML CUDA backend to fail to load even though LD_LIBRARY_PATH was set in the GitHub Actions workflow.

Changes:
- Added s.cmd.Env = os.Environ() to inherit all environment variables
- This ensures LD_LIBRARY_PATH is passed to the ollama server subprocess
- Fixes the GPU offloading failure where layers were not being loaded onto the GPU

Root cause analysis from logs:
- GPUs were detected: Tesla K80 with 11.1 GiB available
- The server scheduled 35 layers for GPU offload
- But the actual offload was 0/35 layers (all stayed on the CPU)
- The runner subprocess couldn't find the CUDA libraries without LD_LIBRARY_PATH

This fix ensures the runner subprocess can dynamically load libggml-cuda.so by inheriting the CUDA library paths from the parent process.
@@ -56,6 +56,9 @@ func (s *Server) Start(ctx context.Context, logPath string) error {
 	// Set working directory to binary location
 	s.cmd.Dir = filepath.Dir(binPath)
 
+	// Inherit environment variables (including LD_LIBRARY_PATH for CUDA libraries)
+	s.cmd.Env = os.Environ()
+
 	// Start server
 	if err := s.cmd.Start(); err != nil {