ollama37/tests/testcases/inference/TC-INFERENCE-003.yml
Commit 2c5094db92 by Shang Chieh Tseng: Add LogCollector for precise test log boundaries
Problem: Tests used `docker compose logs --since=5m` which caused:
- Log overlap between tests
- Logs from previous tests included
- Missing logs if test exceeded 5 minutes

Solution (see the sketch below):
- New LogCollector class runs `docker compose logs --follow`
- Marks test start/end boundaries
- Writes test-specific logs to /tmp/test-{testId}-logs.txt
- Test steps access via TEST_ID environment variable
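
A minimal sketch of the idea (illustrative only; the class shape and method names here are assumptions, not the actual tests/src/log-collector.ts code): one long-lived `docker compose logs --follow` process feeds an in-memory line buffer, each test records where its slice begins, and that slice is written to /tmp/test-{testId}-logs.txt when the test ends.

```typescript
// Illustrative sketch, not the real tests/src/log-collector.ts.
import { spawn, type ChildProcess } from "node:child_process";
import { writeFileSync } from "node:fs";

class LogCollector {
  private proc: ChildProcess | null = null;
  private lines: string[] = [];
  private marks = new Map<string, number>(); // testId -> index of its first log line

  // One long-lived `docker compose logs --follow` for the whole run.
  start(composeDir: string): void {
    this.proc = spawn("docker", ["compose", "logs", "--follow", "--no-color"], { cwd: composeDir });
    const append = (chunk: Buffer) => this.lines.push(...chunk.toString().split("\n"));
    this.proc.stdout?.on("data", append);
    this.proc.stderr?.on("data", append);
  }

  // Remember where this test's logs begin.
  markTestStart(testId: string): void {
    this.marks.set(testId, this.lines.length);
  }

  // Write only the lines collected since markTestStart() to the per-test file
  // that test steps locate via $TEST_ID.
  markTestEnd(testId: string): string {
    const file = `/tmp/test-${testId}-logs.txt`;
    writeFileSync(file, this.lines.slice(this.marks.get(testId) ?? 0).join("\n"));
    return file;
  }

  stop(): void {
    this.proc?.kill();
  }
}
```

In this sketch the executor would call markTestStart/markTestEnd around each test and export TEST_ID into the step commands' environment, which is how the YAML below can look for /tmp/test-${TEST_ID}-logs.txt before falling back to `docker compose logs`.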

Changes:
- tests/src/log-collector.ts: New LogCollector class
- tests/src/executor.ts: Integrate LogCollector, set TEST_ID env
- tests/src/cli.ts: Start/stop LogCollector for runtime/inference
- All test cases: Use log collector with fallback to docker compose

Also updated docs/CICD.md with:
- Test runner CLI documentation
- Judge modes (simple, llm, dual)
- Log collector integration
- Updated test case list (12b, 27b models)
- Model unload strategy
- Troubleshooting guide

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Committed 2025-12-17 17:46:49 +08:00


id: TC-INFERENCE-003
name: API Endpoint Test
suite: inference
priority: 3
timeout: 120000
dependencies:
  - TC-INFERENCE-001
steps:
  - name: Test generate endpoint (non-streaming)
    command: |
      curl -s http://localhost:11434/api/generate \
        -d '{"model":"gemma3:4b","prompt":"Say hello in one word","stream":false}' \
        | head -c 500
  - name: Test generate endpoint (streaming)
    command: |
      curl -s http://localhost:11434/api/generate \
        -d '{"model":"gemma3:4b","prompt":"Count from 1 to 3","stream":true}' \
        | head -5
  - name: Verify API requests logged successfully
    command: |
      # Use log collector file if available, fallback to docker compose logs
      if [ -f "/tmp/test-${TEST_ID}-logs.txt" ]; then
        LOGS=$(cat /tmp/test-${TEST_ID}-logs.txt)
      else
        LOGS=$(cd docker && docker compose logs --since=5m 2>&1)
      fi
      echo "=== API Request Log Verification ==="
      # Check for generate requests with 200 status
      GENERATE_200=$(echo "$LOGS" | grep -c '\[GIN\].*200.*POST.*/api/generate' || true)
      echo "Generate requests with 200 status: $GENERATE_200"
      if [ "$GENERATE_200" -gt 0 ]; then
        echo "SUCCESS: API generate requests completed successfully"
        echo "$LOGS" | grep '\[GIN\].*POST.*/api/generate' | tail -3
      else
        echo "WARNING: No successful generate requests found in recent logs"
      fi
  - name: Check for API errors in logs
    command: |
      # Use log collector file if available, fallback to docker compose logs
      if [ -f "/tmp/test-${TEST_ID}-logs.txt" ]; then
        LOGS=$(cat /tmp/test-${TEST_ID}-logs.txt)
      else
        LOGS=$(cd docker && docker compose logs --since=5m 2>&1)
      fi
      echo "=== API Error Check ==="
      # Check for 4xx/5xx errors on generate endpoint
      if echo "$LOGS" | grep -qE '\[GIN\].*(4[0-9]{2}|5[0-9]{2}).*POST.*/api/generate'; then
        echo "WARNING: API errors found on generate endpoint:"
        echo "$LOGS" | grep -E '\[GIN\].*(4[0-9]{2}|5[0-9]{2}).*POST.*/api/generate' | tail -3
      else
        echo "SUCCESS: No API errors on generate endpoint"
      fi
      # Check for any CUDA errors during API processing
      if echo "$LOGS" | grep -qE "(CUBLAS_STATUS_|CUDA error)"; then
        echo "CRITICAL: CUDA errors during API processing:"
        echo "$LOGS" | grep -E "(CUBLAS_STATUS_|CUDA error)"
        exit 1
      fi
      echo "SUCCESS: No critical errors during API processing"
  - name: Display API response times from logs
    command: |
      # Use log collector file if available, fallback to docker compose logs
      if [ -f "/tmp/test-${TEST_ID}-logs.txt" ]; then
        LOGS=$(cat /tmp/test-${TEST_ID}-logs.txt)
      else
        LOGS=$(cd docker && docker compose logs --since=5m 2>&1)
      fi
      echo "=== API Response Times ==="
      # Show recent generate request response times
      echo "$LOGS" | grep -E '\[GIN\].*POST.*/api/generate' | tail -5 | while read line; do
        # Extract response time from GIN log format
        echo "$line" | grep -oE '[0-9]+(\.[0-9]+)?(ms|s|m)' | head -1
      done
      echo ""
      echo "Recent API requests:"
      echo "$LOGS" | grep '\[GIN\]' | tail -5
  - name: Unload model after 4b tests complete
    command: |
      echo "Unloading gemma3:4b from VRAM..."
      curl -s http://localhost:11434/api/generate -d '{"model":"gemma3:4b","keep_alive":0}' || true
      sleep 2
      echo "Model unloaded"
criteria: |
  Ollama REST API should handle inference requests.
  Expected for non-streaming:
  - Returns JSON with "response" field
  - Response contains some greeting (hello, hi, etc.)
  Expected for streaming:
  - Returns multiple JSON lines
  - Each line contains partial response
  Log verification:
  - Generate API requests logged with 200 status
  - NO 4xx/5xx errors on generate endpoint
  - NO CUDA/CUBLAS errors during API processing
  Accept any valid JSON response. Content may vary.
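
For reference, the non-streaming expectation above amounts to a check like the following (a sketch assuming Node 18+ with built-in fetch, run as an ES module; it is not the project's judge implementation):

```typescript
// Sketch of the non-streaming criterion only: the response body must be JSON
// with a non-empty "response" field. Content is allowed to vary.
const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  body: JSON.stringify({ model: "gemma3:4b", prompt: "Say hello in one word", stream: false }),
});
const body = (await res.json()) as { response?: string };
if (typeof body.response !== "string" || body.response.trim() === "") {
  throw new Error("expected a non-empty 'response' field from /api/generate");
}
console.log("non-streaming response:", body.response);
```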