Mirror of https://github.com/dogkeeper886/ollama37.git, synced 2025-12-10 15:57:04 +00:00
docs: update documentation to reflect Gemma3n support in v1.3.0
Update README.md and CLAUDE.md to correctly reference Gemma3n model support that was added in version 1.3.0, replacing generic "Gemma 3" references with the specific "Gemma3n" model name.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
@@ -122,7 +122,7 @@ Ollama is a local LLM server with Go backend and C++/CUDA acceleration:
 - Platform-specific files for Darwin, Linux, Windows

 **Model Layer** (`model/`): Handles model format conversion and tokenization:

-- `models/` - Model-specific implementations (Llama, Gemma, etc.)
+- `models/` - Model-specific implementations (Llama, Gemma3n, etc.)
 - `imageproc/` - Image processing for multimodal models
 - Tokenizer implementations (BPE, SentencePiece)
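To make the layering described in the diff concrete, here is a minimal Go sketch of how a per-architecture package under `model/models/` could plug into the model layer. The `Model` interface, the `Register` function, and the byte-counting `Encode` are illustrative assumptions for this sketch, not ollama37's actual API.

```go
// A minimal, illustrative sketch of the registration pattern a model layer
// like model/models/ could use. All names here (Model, Register, gemma3n)
// are assumptions for illustration, not ollama37's actual API.
package main

import "fmt"

// Model is a hypothetical interface each architecture-specific package
// (llama, gemma3n, ...) would implement.
type Model interface {
	// Encode turns text into token IDs (BPE or SentencePiece underneath).
	Encode(text string) []int32
}

// registry maps an architecture name to a constructor.
var registry = map[string]func() Model{}

// Register is how a per-model package would announce itself to the loader.
func Register(name string, ctor func() Model) { registry[name] = ctor }

// gemma3n is a stand-in implementation.
type gemma3n struct{}

func (gemma3n) Encode(text string) []int32 {
	// Real code would run a SentencePiece tokenizer; this just counts bytes.
	return []int32{int32(len(text))}
}

func main() {
	Register("gemma3n", func() Model { return gemma3n{} })

	ctor, ok := registry["gemma3n"]
	if !ok {
		panic("architecture not registered")
	}
	m := ctor()
	fmt.Println(m.Encode("hello ollama"))
}
```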