Mirror of https://github.com/dogkeeper886/ollama37.git
- Add .github/workflows/build-test.yml for automated testing
- Add tests/ directory with TypeScript test runner
- Add docs/CICD.md documentation
- Remove .gitlab-ci.yml (migrated to GitHub Actions)
- Update .gitignore for test artifacts

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
30 lines · 761 B · YAML
id: TC-RUNTIME-002
name: GPU Detection
suite: runtime
priority: 2
timeout: 60000

dependencies:
  - TC-RUNTIME-001

steps:
  - name: Check nvidia-smi inside container
    command: docker exec ollama37 nvidia-smi

  - name: Check CUDA libraries
    command: docker exec ollama37 ldconfig -p | grep -i cuda | head -5

  - name: Check Ollama GPU detection
    command: cd docker && docker compose logs 2>&1 | grep -i gpu | head -10

criteria: |
  Tesla K80 GPU should be detected inside the container.

  Expected:
  - nvidia-smi shows Tesla K80 GPU(s)
  - Driver version 470.x (or compatible)
  - CUDA libraries are available (libcuda, libcublas, etc.)
  - Ollama logs mention GPU detection

  The K80 has 12GB VRAM per GPU. Accept variations in reported memory.
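The commit message above mentions a TypeScript test runner under tests/, but its internals are not shown on this page. As a rough illustration of how a runner could consume a case like TC-RUNTIME-002, here is a minimal sketch. The `TestCase` fields mirror the YAML above; everything else — the `yaml` npm dependency, the per-step application of `timeout`, and the example file path — is an assumption for illustration, not the repo's actual code.

```ts
// Hypothetical sketch of a runner for test cases shaped like TC-RUNTIME-002.
// Field names come from the YAML above; runner behavior is assumed.
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";
import { parse } from "yaml"; // npm "yaml" package (assumed dependency)

interface TestStep {
  name: string;
  command: string;
}

interface TestCase {
  id: string;
  name: string;
  suite: string;
  priority: number;
  timeout: number;         // milliseconds; applied per step in this sketch
  dependencies?: string[]; // IDs of cases that must pass first
  steps: TestStep[];
  criteria: string;        // free-text pass/fail criteria
}

function runTestCase(path: string): boolean {
  const tc = parse(readFileSync(path, "utf8")) as TestCase;
  console.log(`[${tc.id}] ${tc.name} (suite: ${tc.suite})`);

  for (const step of tc.steps) {
    try {
      // Run through a shell so pipelines like `ldconfig -p | grep ...` work.
      const out = execSync(step.command, {
        shell: "/bin/bash",
        timeout: tc.timeout,
        encoding: "utf8",
      });
      console.log(`  PASS ${step.name}\n${out.trim()}`);
    } catch (err) {
      // execSync throws on a non-zero exit code or on timeout.
      console.error(`  FAIL ${step.name}: ${(err as Error).message}`);
      return false;
    }
  }
  // The criteria block is prose, so it is printed for a human (or an
  // automated judge) to evaluate against the captured step output.
  console.log(`Criteria:\n${tc.criteria}`);
  return true;
}

// Hypothetical path; the repo's actual layout for test cases is not shown.
runTestCase(process.argv[2] ?? "tests/cases/runtime/gpu-detection.yaml");
```

Note the shell invocation: steps two and three rely on pipes and `cd ... &&` chaining, so a plain exec of the command string would fail without one. How the real runner evaluates the free-text `criteria` block is not visible from this file.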