ollama37/tests/testcases/runtime/TC-RUNTIME-002.yml
Shang Chieh Tseng d11140c016 Add GitHub Actions CI/CD pipeline and test framework
- Add .github/workflows/build-test.yml for automated testing
- Add tests/ directory with TypeScript test runner
- Add docs/CICD.md documentation
- Remove .gitlab-ci.yml (migrated to GitHub Actions)
- Update .gitignore for test artifacts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-15 14:06:44 +08:00


id: TC-RUNTIME-002
name: GPU Detection
suite: runtime
priority: 2
timeout: 60000
dependencies:
  - TC-RUNTIME-001
steps:
  - name: Check nvidia-smi inside container
    command: docker exec ollama37 nvidia-smi
  - name: Check CUDA libraries
    command: docker exec ollama37 ldconfig -p | grep -i cuda | head -5
  - name: Check Ollama GPU detection
    command: cd docker && docker compose logs 2>&1 | grep -i gpu | head -10
criteria: |
  Tesla K80 GPU should be detected inside the container.
  Expected:
  - nvidia-smi shows Tesla K80 GPU(s)
  - Driver version 470.x (or compatible)
  - CUDA libraries are available (libcuda, libcublas, etc.)
  - Ollama logs mention GPU detection
  The K80 has 12GB VRAM per GPU. Accept variations in reported memory.
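The commit message above mentions a TypeScript test runner under `tests/`. As a rough illustration (not the project's actual code), a minimal runner for a case like this one needs two pieces: executing each `command` with the case's `timeout`, and ordering cases so that entries in `dependencies` (here, TC-RUNTIME-001) run first. The type names, `runStep`, and `executionOrder` below are assumptions for this sketch:

```typescript
// Hypothetical sketch of a runner for YAML test cases like TC-RUNTIME-002.
// Field names mirror the YAML above; everything else is assumed.
import { execSync } from "node:child_process";

interface Step {
  name: string;
  command: string;
}

interface TestCase {
  id: string;
  name: string;
  suite: string;
  priority: number;
  timeout: number; // milliseconds, e.g. 60000 in TC-RUNTIME-002
  dependencies: string[];
  steps: Step[];
  criteria: string;
}

// Run one shell step, capturing output; a non-zero exit or a timeout
// is reported as a failure rather than thrown.
function runStep(step: Step, timeoutMs: number): { ok: boolean; output: string } {
  try {
    const output = execSync(step.command, { timeout: timeoutMs, encoding: "utf8" });
    return { ok: true, output };
  } catch (err) {
    return { ok: false, output: String(err) };
  }
}

// Depth-first topological order so each case's dependencies execute first
// (TC-RUNTIME-001 before TC-RUNTIME-002). Assumes the dependency graph
// is acyclic, as a test suite's should be.
function executionOrder(cases: Pick<TestCase, "id" | "dependencies">[]): string[] {
  const byId = new Map(cases.map((c) => [c.id, c]));
  const order: string[] = [];
  const seen = new Set<string>();
  const visit = (id: string): void => {
    if (seen.has(id)) return;
    seen.add(id);
    for (const dep of byId.get(id)?.dependencies ?? []) visit(dep);
    order.push(id);
  };
  for (const c of cases) visit(c.id);
  return order;
}
```

A real runner would also parse the YAML (e.g. with a library such as js-yaml) and evaluate `criteria` against the collected step output; both are omitted here.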