Update documentation for v1.3.0 release

- Add v1.3.0 release notes with new model support (Qwen2.5-VL, Qwen3 Dense & Sparse, improved MLLama)
- Update both main README.md and ollama37/README.md for consistency
- Add CLAUDE.md for future Claude Code instances
- Enhanced Docker Hub documentation

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Shang Chieh Tseng
Date: 2025-07-20 09:42:26 +08:00
parent 8c3ff4a230
commit 5436af0189
3 changed files with 121 additions and 0 deletions


@@ -20,6 +20,19 @@ This repository includes a customized version of Ollama, specifically optimized
### 📦 Version History
#### v1.3.0 (2025-07-01)
This release expands model support while maintaining full Tesla K80 compatibility:
**New Model Support:**
- **Qwen2.5-VL**: Multi-modal vision-language model for image understanding (see the usage sketch below)
- **Qwen3 Dense & Sparse**: Enhanced Qwen3 model variants
- **Improved MLLama**: Better support for Meta's multimodal Llama models
**Documentation Updates:**
- Updated installation guides for Tesla K80 compatibility
- Enhanced Docker Hub documentation with latest model information
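
As an illustration of the new multi-modal support, the following is a minimal sketch of querying a Qwen2.5-VL model through the official `ollama` Python client. The model tag `qwen2.5vl:7b` and the image path are assumptions, not part of this release; substitute whatever tag your local install has pulled.

```python
# Minimal sketch (not part of this repository): sending an image to a
# Qwen2.5-VL model via the official `ollama` Python client.
# The model tag and image path below are assumptions -- adjust as needed.
import ollama

response = ollama.chat(
    model="qwen2.5vl:7b",  # assumed tag; pull it first, e.g. `ollama pull qwen2.5vl:7b`
    messages=[
        {
            "role": "user",
            "content": "Describe this image in one sentence.",
            "images": ["./example.jpg"],  # local image file attached to the prompt
        }
    ],
)
print(response["message"]["content"])
```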
#### v1.2.0 (2025-05-06)
This release introduces support for Qwen3 models, marking a significant step in our commitment to keeping the Tesla K80 usable with leading open-source language models. Testing includes successful execution of Gemma 3 12B, Phi-4 Reasoning 14B, and Qwen3 14B, ensuring compatibility with models expected to be widely used in May 2025.
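
As a rough illustration only, a smoke test along the following lines is one way to confirm that a model loads and generates on this build. It is not the project's actual test procedure, and the model tags are assumptions about how these models are named in the local library.

```python
# Rough smoke-test sketch (assumed model tags; not the project's test harness).
import ollama

for tag in ("gemma3:12b", "phi4-reasoning:14b", "qwen3:14b"):
    try:
        # Short prompt just to confirm the model loads and produces tokens.
        result = ollama.generate(model=tag, prompt="Say hello in five words.")
        print(f"{tag}: OK -> {result['response'][:60]!r}")
    except Exception as exc:  # e.g. model not pulled, or out of memory on the K80
        print(f"{tag}: FAILED ({exc})")
```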