docs: update documentation to reflect Gemma3n support in v1.3.0

Update README.md and CLAUDE.md to correctly reference Gemma3n model
support that was added in version 1.3.0, replacing generic "Gemma 3"
references with the specific "Gemma3n" model name.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Shang Chieh Tseng
2025-07-20 09:47:05 +08:00
parent ef67ce4d2e
commit f337f53408
2 changed files with 3 additions and 3 deletions

CLAUDE.md

@@ -122,7 +122,7 @@ Ollama is a local LLM server with Go backend and C++/CUDA acceleration:
 - Platform-specific files for Darwin, Linux, Windows
 **Model Layer** (`model/`): Handles model format conversion and tokenization:
-- `models/` - Model-specific implementations (Llama, Gemma, etc.)
+- `models/` - Model-specific implementations (Llama, Gemma3n, etc.)
 - `imageproc/` - Image processing for multimodal models
 - Tokenizer implementations (BPE, SentencePiece)

README.md

@@ -47,7 +47,7 @@ ollama run gemma3
 ```
 ### Supported Models
-All models from [ollama.com/library](https://ollama.com/library) including Llama 3.2, Gemma 3, Qwen 2.5, Phi-4, and Code Llama.
+All models from [ollama.com/library](https://ollama.com/library) including Llama 3.2, Gemma3n, Qwen 2.5, Phi-4, and Code Llama.
 ### REST API
 ```bash
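
The `ollama run gemma3` line quoted in the hunk header above is the usual way to start an interactive session against the local server; for the model this commit documents, the analogous command would use the Gemma3n library tag (tag name assumed from upstream ollama.com/library, not verified against this fork's v1.3.0 build):

```bash
# Pull the model if it is not already present and open an interactive chat.
# "gemma3n" is the upstream library tag; this fork's tag is assumed to match.
ollama run gemma3n
```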
@@ -66,7 +66,7 @@ curl http://localhost:11434/api/chat -d '{"model": "gemma3, "messages": [{"role"
 - **Optimized Builds**: Tesla K80-specific performance tuning
 ### Recent Updates
-- **v1.3.0** (2025-07-19): Added Gemma 3, Qwen2.5VL, latest upstream sync
+- **v1.3.0** (2025-07-19): Added Gemma3n, Qwen2.5VL, latest upstream sync
 - **v1.2.0** (2025-05-06): Qwen3, Gemma 3 12B, Phi-4 14B support
 ## Building from Source
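
The REST API context line in the second hunk header is truncated by the diff display. For reference, a complete request against Ollama's standard `/api/chat` endpoint looks roughly like the sketch below; the default port 11434 is assumed, and the JSON is shown well-formed rather than as quoted in the truncated header line:

```bash
# Minimal sketch of a chat request to a locally running Ollama server.
# Assumes the default listen address (localhost:11434) and that the model
# has already been pulled, e.g. with `ollama pull gemma3n`.
curl http://localhost:11434/api/chat -d '{
  "model": "gemma3n",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ]
}'
```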