Mirror of https://github.com/dogkeeper886/ollama-k80-lab.git (synced 2025-12-10 07:46:59 +00:00)
# Prompts

This directory contains LLM prompt templates and example contexts for workflow automation in the Ollama K80 Lab environment.
## Structure

```
prompts/
├── templates/                  # Reusable prompt templates
│   └── professional-communication-assistant.md
├── examples/                   # Context examples for testing prompts
│   └── deadline-extension-request.md
└── responses/                  # Model responses organized by example
    └── deadline-extension-request/
        ├── qwen2.5-vl.md
        ├── gemma-3-12b.md
        ├── phi4-14b.md
        └── deepseek-r1-32b.md
```
## Usage

### Templates

Prompt templates are structured prompts designed for specific use cases. They include:

- Clear instructions and guidelines
- Configurable parameters (marked with placeholders)
- Multiple tone/style variations where applicable
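As a minimal sketch of how a template's placeholders get filled, the snippet below substitutes parameters into a template string. The placeholder names (`{tone}`, `{recipient}`, `{context}`) are hypothetical illustrations, not taken from `professional-communication-assistant.md`:

```python
# Hypothetical template with {placeholder} parameters; the real templates
# in templates/ define their own placeholder names.
TEMPLATE = (
    "You are a professional communication assistant.\n"
    "Write a {tone} message to {recipient} about the following situation:\n"
    "{context}\n"
)

def render(template: str, **params: str) -> str:
    """Substitute placeholder values into a template string."""
    return template.format(**params)

prompt = render(
    TEMPLATE,
    tone="polite but firm",
    recipient="my project manager",
    context="I need a two-day extension on the Q3 report deadline.",
)
print(prompt)
```

The rendered `prompt` is what gets sent to a model, so one template can be reused across many example contexts.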
### Examples

Example contexts provide sample inputs for testing and demonstrating prompt templates:

- Real-world scenarios
- Edge cases
- Different complexity levels
### Responses

Model responses are organized by example scenario, with each model's output saved to its own file. This makes it possible to:

- Compare different models on the same prompt/context
- Track model performance over time
- Analyze response quality and consistency

Filenames use kebab-case and match the model names (e.g. `deepseek-r1-32b.md`).
## Integration

These prompts integrate with:

- **Dify workflows** - for automated LLM-powered QA tasks
- **VS Code Continue plugin** - for development assistance
- **Ollama API** - running on K80-optimized containers
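A sketch of sending a rendered prompt to the Ollama API's `/api/generate` endpoint. The host/port is Ollama's default and the model tag is an example; adjust both for your K80 containers:

```python
import json
import urllib.request

# Default Ollama endpoint; your containerized setup may differ.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate."""
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_request("phi4:14b", "Draft a deadline-extension request.")

# Uncomment to send against a running Ollama instance:
# req = urllib.request.Request(
#     OLLAMA_URL,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

With `"stream": False`, Ollama returns the full completion in one JSON object under the `response` key rather than a stream of chunks.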
Adding New Prompts
- Create template in
templates/with descriptive kebab-case naming - Add corresponding examples in
examples/ - Test with your target LLM models
- Save model responses in
responses/example-name/model-name.md - Update this README if needed
## Testing Workflow

1. Combine a template with an example context to generate the full prompt
2. Run it against multiple models (Qwen2.5-VL, Gemma 3, Phi-4, DeepSeek-R1, etc.)
3. Save each model's response in the appropriate `responses/` folder
4. Compare outputs for quality, consistency, and usefulness
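Step 3 above can be sketched as a small helper that derives the kebab-case filename from the model name and writes each response under `responses/<example>/`. The slug rule (lowercase, spaces to hyphens) is an assumption inferred from the filenames in the tree above:

```python
from pathlib import Path
import tempfile

def response_path(base: Path, example: str, model: str) -> Path:
    """Kebab-case the model name and build responses/<example>/<model>.md."""
    slug = model.lower().replace(" ", "-")  # assumed naming convention
    return base / "responses" / example / f"{slug}.md"

# Stand-in for the repo root so the sketch is safe to run anywhere.
base = Path(tempfile.mkdtemp())

for model in ["Qwen2.5-VL", "Gemma 3 12B", "Phi4 14B", "DeepSeek-R1 32B"]:
    path = response_path(base, "deadline-extension-request", model)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(f"# {model} response\n\n(model output goes here)\n")
```

Keeping one file per model under a shared example directory is what makes the side-by-side comparison in step 4 straightforward.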
## Related Components

- `/dify/` - Workflow automation configurations
- `/ollama37/` - Docker runtime for LLM execution
- `CLAUDE.md` - Project development guidelines