Mirror of https://github.com/dogkeeper886/ollama37.git (synced 2025-12-10 15:57:04 +00:00)
docs: update documentation to reflect Gemma3n support in v1.3.0

Update README.md and CLAUDE.md to correctly reference the Gemma3n model support added in version 1.3.0, replacing generic "Gemma 3" references with the specific "Gemma3n" model name.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
```diff
@@ -122,7 +122,7 @@ Ollama is a local LLM server with Go backend and C++/CUDA acceleration:
 - Platform-specific files for Darwin, Linux, Windows
 
 **Model Layer** (`model/`): Handles model format conversion and tokenization:
-- `models/` - Model-specific implementations (Llama, Gemma, etc.)
+- `models/` - Model-specific implementations (Llama, Gemma3n, etc.)
 - `imageproc/` - Image processing for multimodal models
 - Tokenizer implementations (BPE, SentencePiece)
```
````diff
@@ -47,7 +47,7 @@ ollama run gemma3
 ```
 
 ### Supported Models
-All models from [ollama.com/library](https://ollama.com/library) including Llama 3.2, Gemma 3, Qwen 2.5, Phi-4, and Code Llama.
+All models from [ollama.com/library](https://ollama.com/library) including Llama 3.2, Gemma3n, Qwen 2.5, Phi-4, and Code Llama.
 
 ### REST API
 ```bash
````
||||
```diff
@@ -66,7 +66,7 @@ curl http://localhost:11434/api/chat -d '{"model": "gemma3, "messages": [{"role"
 - **Optimized Builds**: Tesla K80-specific performance tuning
 
 ### Recent Updates
-- **v1.3.0** (2025-07-19): Added Gemma 3, Qwen2.5VL, latest upstream sync
+- **v1.3.0** (2025-07-19): Added Gemma3n, Qwen2.5VL, latest upstream sync
 - **v1.2.0** (2025-05-06): Qwen3, Gemma 3 12B, Phi-4 14B support
 
 ## Building from Source
```
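As an aside, the hunk header above quotes a truncated curl line from the README whose JSON body is cut off and is missing the closing quote after `gemma3`. A minimal sketch of a well-formed request body, assuming the standard Ollama `/api/chat` endpoint and an example prompt (both the prompt text and the validation step here are illustrative, not from the commit):

```shell
# Hedged sketch: the complete form of the truncated /api/chat request.
# Note the closing quote after "gemma3" that the truncated README line omits.
payload='{"model": "gemma3", "messages": [{"role": "user", "content": "Why is the sky blue?"}]}'

# Check the body is well-formed JSON before sending it.
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload OK"

# Send it to a locally running Ollama server (requires `ollama serve`):
# curl http://localhost:11434/api/chat -d "$payload"
```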