Mirror of https://github.com/dogkeeper886/ollama37.git (synced 2025-12-17 19:27:00 +00:00)
Add model warmup step to TC-INFERENCE-001
The Tesla K80 needs roughly 60-180 s to load a model into VRAM on the first inference. Add a warmup step with a 5-minute timeout so the model is preloaded before the subsequent inference tests run.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
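Incidentally, the Ollama API also documents a way to preload a model without generating any tokens, which could make a warmup like this cheaper. The sketch below is an alternative under that assumption; the endpoint, port, and model name are taken from the diff, and the `keep_alive` value is illustrative, not part of this commit.

```sh
# Alternative warmup sketch, assuming the documented /api/generate
# behavior: a request with no prompt only loads the model and returns
# without generating tokens. keep_alive (illustrative value) keeps the
# model resident between test steps.
curl -s http://localhost:11434/api/generate \
  -d '{"model":"gemma3:4b","keep_alive":"30m"}'
```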
@@ -18,6 +18,13 @@ steps:
   - name: Verify model available
     command: docker exec ollama37 ollama list
 
+  - name: Warmup model (preload into GPU)
+    command: |
+      curl -s http://localhost:11434/api/generate \
+        -d '{"model":"gemma3:4b","prompt":"hi","stream":false}' \
+        | jq -r '.response' | head -c 100
+    timeout: 300000
+
 criteria: |
   The gemma3:4b model should be available for inference.
 
@@ -25,6 +32,9 @@ criteria: |
   - Model is either already present or successfully downloaded
   - "ollama list" shows gemma3:4b in the output
   - No download errors
+  - Warmup step loads model into GPU memory (may take up to 3 minutes on Tesla K80)
+  - Warmup returns a response from the model
 
   Accept if model already exists (skip download).
   Model size is ~3GB, download may take time.
+  First inference loads model into VRAM - subsequent inferences will be fast.
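A possible follow-up check, not part of this commit: after the warmup step, `ollama ps` lists the loaded models and whether they run on GPU or CPU, so the test could assert the model is actually resident.

```sh
# Hypothetical post-warmup assertion (not in this commit): verify that
# gemma3:4b appears among the loaded models. `ollama ps` reports each
# loaded model and its processor (GPU vs. CPU).
docker exec ollama37 ollama ps | grep gemma3:4b
```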