# Ollama37 🚀

**Tesla K80 Compatible Ollama Fork**

Run modern LLMs on NVIDIA Tesla K80 and other CUDA Compute Capability 3.7 GPUs. While official Ollama dropped legacy GPU support, Ollama37 keeps your Tesla K80 hardware functional with the latest models and features.

## Key Features

- ⚡ **Tesla K80 Support** - Full compatibility with CUDA Compute Capability 3.7
- 🛠️ **Optimized Build** - CUDA 11 toolchain for maximum legacy GPU compatibility

## Quick Start

### Docker (Recommended)

```bash
# Pull and run
docker pull dogkeeper886/ollama37
docker run --runtime=nvidia --gpus all -p 11434:11434 dogkeeper886/ollama37
```
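
Once the container is up, you can sanity-check it from the host. A minimal sketch, assuming the default port mapping above; `/api/version` is Ollama's standard version endpoint:

```bash
# Confirm the API is reachable on the published port
curl http://localhost:11434/api/version

# Optional: check GPU visibility inside the container
# (assumes the NVIDIA runtime injects nvidia-smi; replace <container-id> with yours)
docker exec -it <container-id> nvidia-smi
```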

### Docker Compose

```yaml
version: "3.8"

services:
  ollama:
    image: dogkeeper886/ollama37:latest
    container_name: ollama37
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "/usr/local/bin/ollama", "list"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 5s

volumes:
  ollama-data:
    name: ollama-data
```

```bash
docker-compose up -d
```
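
Standard Compose commands can then confirm the stack came up; for example:

```bash
# Container status; the healthcheck above reports healthy/unhealthy
docker-compose ps

# Follow server logs to watch model loads and GPU detection
docker-compose logs -f ollama
```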

## Usage

### Run Your First Model

```bash
# Download and run a model
ollama pull gemma3
ollama run gemma3 "Why is the sky blue?"

# Interactive chat
ollama run gemma3
```
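
The same model can also be driven over Ollama's HTTP API; a minimal non-streaming example against the default endpoint:

```bash
# Generate a one-shot completion via the REST API
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```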

### CLI Commands

```shell
ollama list           # List models
ollama show llama3.2  # Model info
ollama ps             # Running models
ollama stop llama3.2  # Stop model
ollama serve          # Start server
```
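
When the server runs inside the container, the same CLI is reachable through `docker exec`; a sketch using the `ollama37` container name from the Compose file above:

```bash
# Run Ollama CLI commands inside the running container
docker exec -it ollama37 ollama list
docker exec -it ollama37 ollama ps
```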

## Contributing

Found an issue or want to contribute? Check our [GitHub issues](https://github.com/dogkeeper886/ollama37/issues) or submit Tesla K80-specific bug reports and compatibility fixes.

## License

Same license as upstream Ollama. See LICENSE file for details.