Add Gemma 3n model support to documentation

Update README files to include Gemma 3n models, which are designed for
efficient execution on everyday devices such as laptops, tablets or phones.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

Author: Shang Chieh Tseng
Date: 2025-07-20 09:51:34 +08:00
Parent: 5436af0189
Commit: 4005589447
2 changed files with 2 additions and 5 deletions

@@ -108,12 +108,10 @@ This release expands model support while maintaining full Tesla K80 compatibilit
**New Model Support:**
- **Qwen2.5-VL**: Multi-modal vision-language model for image understanding
- **Qwen3 Dense & Sparse**: Enhanced Qwen3 model variants
- **Improved MLLama**: Better support for Meta's LLaMA models
- **Gemma 3n**: Efficient models designed for execution on everyday devices such as laptops, tablets or phones
**Documentation Updates:**
- Updated installation guides for Tesla K80 compatibility
- Enhanced Docker Hub documentation with latest model information
### v1.2.0 (2025-05-06)