mirror of
https://github.com/dogkeeper886/ollama-k80-lab.git
synced 2025-12-10 07:46:59 +00:00
Update documentation for v1.3.0 release
- Add v1.3.0 release notes with new model support (Qwen2.5-VL, Qwen3 Dense & Sparse, improved MLLama)
- Update both main README.md and ollama37/README.md for consistency
- Add CLAUDE.md for future Claude Code instances
- Enhanced Docker Hub documentation

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
@@ -14,8 +14,11 @@ This setup ensures that users can start experimenting with AI models without the
## Features
- **GPU Acceleration**: Fully supports NVIDIA K80 GPUs to accelerate model computations.
- **Multi-Modal AI**: Supports vision-language models like Qwen2.5-VL for image understanding.
- **Advanced Reasoning**: Built-in thinking support for enhanced AI reasoning capabilities.
- **Pre-built Binary**: Contains the compiled Ollama binary for immediate use.
- **CUDA Libraries**: Includes necessary CUDA libraries and drivers for GPU operations.
- **Enhanced Tool Support**: Improved tool calling and WebP image input support.
- **Environment Variables**: Configured to facilitate seamless interaction with the GPU and network settings.
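
Putting these features together, a typical launch command might look like the sketch below. The image name `dogkeeper886/ollama37`, the host port, and the volume path are assumptions for illustration; check the project's Docker Hub page for the exact image tag.

```shell
# Sketch: run the K80-enabled Ollama image with GPU access.
# Image name, port mapping, and volume path are assumed, not authoritative.
docker run -d \
  --name ollama37 \
  --gpus all \
  -p 11434:11434 \
  -v ./.ollama:/root/.ollama \
  dogkeeper886/ollama37
```

The `--gpus all` flag requires the NVIDIA Container Toolkit on the host, and the `-v` bind mount keeps downloaded models on the host so they survive container removal.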
## Usage
@@ -99,6 +102,19 @@ This will stop and remove the container, but the data stored in the `.ollama` di
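
The stop-and-remove step described above can be sketched as follows; the container name `ollama37` is an assumption for illustration.

```shell
# Sketch: stop and remove the container (name assumed).
# Model data in the bind-mounted .ollama directory is untouched.
docker stop ollama37
docker rm ollama37
```

Because `.ollama` lives on the host as a bind mount, a later `docker run` with the same `-v` flag sees all previously downloaded models.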
## 📦 Version History
### v1.3.0 (2025-07-01)
This release expands model support while maintaining full Tesla K80 compatibility:
**New Model Support:**
- **Qwen2.5-VL**: Multi-modal vision-language model for image understanding
- **Qwen3 Dense & Sparse**: Enhanced Qwen3 model variants
- **Improved MLLama**: Better support for Meta's LLaMA models
**Documentation Updates:**
- Updated installation guides for Tesla K80 compatibility
- Enhanced Docker Hub documentation with latest model information
### v1.2.0 (2025-05-06)
This release introduces support for Qwen3 models, marking a significant step in our commitment to keeping the Tesla K80 compatible with leading open-source language models. Testing includes successful execution of Gemma 3 12B, Phi-4 Reasoning 14B, and Qwen3 14B, ensuring compatibility with models expected to be widely used in May 2025.