ollama37/tests/testcases/runtime/TC-RUNTIME-002.yml

id: TC-RUNTIME-002
name: GPU Detection
suite: runtime
priority: 2
timeout: 120000
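# timeout is presumably in milliseconds (120000 ms = 2 minutes)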
dependencies:
  - TC-RUNTIME-001
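  # TC-RUNTIME-001 presumably brings the ollama37 container up; the
  # docker exec steps below assume it is already running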
steps:
  - name: Check nvidia-smi inside container
    command: docker exec ollama37 nvidia-smi
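  # nvidia-smi passing only proves the driver sees the card; the CUDA
  # runtime also needs the libraries and UVM device node checked next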
  - name: Check CUDA libraries
    command: docker exec ollama37 ldconfig -p | grep -i cuda | head -5
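  # On some hosts /dev/nvidia-uvm is not created at boot, so CUDA fails
  # inside the container even though nvidia-smi works (see NOTE in criteria)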
  - name: Verify UVM device files
    command: |
      if [ ! -e /dev/nvidia-uvm ]; then
        echo "WARNING: UVM device missing, creating with nvidia-modprobe..."
        sudo nvidia-modprobe -u -c=0
        echo "Restarting container to pick up UVM devices..."
        cd docker && docker compose restart
        sleep 15
        echo "UVM device fix applied"
      else
        echo "SUCCESS: UVM device file present"
        ls -l /dev/nvidia-uvm
      fi
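  # nvidia-modprobe -u loads the nvidia-uvm module and creates /dev/nvidia-uvm
  # (and /dev/nvidia-uvm-tools); -c=0 ensures /dev/nvidia0 exists as well.
  # The restart lets the container re-map the newly created device nodes.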
  - name: Check Ollama GPU detection in logs
    command: |
      cd docker && docker compose logs 2>&1 | grep -E "(inference compute|GPU detected)" | tail -5
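  # a healthy run logs a line like "inference compute ... library=cuda";
  # "id=cpu library=cpu" (see criteria) means Ollama fell back to CPU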
criteria: |
  Tesla K80 GPU should be detected by both nvidia-smi AND Ollama CUDA runtime.
  Expected:
  - nvidia-smi shows Tesla K80 GPU(s) with Driver 470.x
  - CUDA libraries are available (libcuda, libcublas, etc.)
  - /dev/nvidia-uvm device file exists (required for CUDA runtime)
  - Ollama logs show GPU detection, NOT "id=cpu library=cpu"
  NOTE: If nvidia-smi works but Ollama shows only CPU, the UVM device
  files are missing. The test will auto-fix with nvidia-modprobe -u -c=0.
  The K80 has 12GB VRAM per GPU. Accept variations in reported memory.
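
# Manual recovery sketch (an assumption: run on the host, outside the test
# harness), mirroring what the "Verify UVM device files" step automates:
#
#   ls -l /dev/nvidia-uvm /dev/nvidia-uvm-tools   # both device nodes should exist
#   sudo nvidia-modprobe -u -c=0                  # recreate them if missing
#   (cd docker && docker compose restart)         # let the container re-map devices
#   docker exec ollama37 nvidia-smi               # confirm the K80 is still visible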