mirror of https://github.com/dogkeeper886/ollama-k80-lab.git (synced 2025-12-10 07:46:59 +00:00)

Create README.md

ollama37/README.md (new file, 47 lines)

@@ -0,0 +1,47 @@

# Docker Image for Ollama on NVIDIA K80 GPU

## Description

This Docker image provides a ready-to-use environment for running Ollama, a local Large Language Model (LLM) runner, built specifically for the NVIDIA K80 GPU (compute capability 3.7). It is aimed at AI researchers and developers who want to experiment with models in a home lab setting.

The project repository, [dogkeeper886/ollama-k80-lab](https://github.com/dogkeeper886/ollama-k80-lab), documents how to configure and use the image. The Dockerfile follows a two-stage layout:

- **Build Stage**: Compiles Ollama from source using GCC and CMake.
- **Runtime Environment**: Runs on Rocky Linux 8 with the necessary CUDA libraries pre-configured (a sketch of the build stage follows this list).

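The exact commands live in the repository's Dockerfile; the following is a minimal sketch of what such a build stage might run, assuming upstream Ollama sources and a CUDA-enabled CMake toolchain. The flags and paths are illustrative, not copied from the image:

```bash
# Illustrative sketch only; the repository's Dockerfile is authoritative.
# CMAKE_CUDA_ARCHITECTURES=37 targets the K80's Kepler architecture
# (compute capability 3.7).
git clone https://github.com/ollama/ollama.git
cd ollama
cmake -B build -DCMAKE_CUDA_ARCHITECTURES=37
cmake --build build
go build -o ollama .
```

The runtime stage would then copy the resulting binary into a Rocky Linux 8 image alongside the CUDA libraries, which is what keeps the final image lean.
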
This setup lets users start experimenting with AI models without manual environment configuration, making it a convenient playground for AI research.

## Features

- **GPU Acceleration**: Full support for the NVIDIA K80 to accelerate model computation.
- **Pre-built Binary**: Ships the compiled Ollama binary for immediate use.
- **CUDA Libraries**: Includes the CUDA libraries required for GPU operation.
- **Environment Variables**: Pre-set for seamless GPU and network configuration (see the example after this list).

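As an illustration of the last point, Ollama's standard `OLLAMA_HOST` variable controls the address the server binds to, and the NVIDIA Container Toolkit reads `NVIDIA_VISIBLE_DEVICES`. The values below are assumed defaults, not taken from this image's Dockerfile:

```bash
# Assumed defaults; override them at run time if needed.
docker run --gpus all \
  -e OLLAMA_HOST=0.0.0.0:11434 \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -p 11434:11434 dogkeeper886/ollama37
```
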
## Usage

### Prerequisites

Ensure Docker is installed and that your NVIDIA K80 is properly set up on the host. You will also need the NVIDIA Container Toolkit to enable GPU support in Docker containers; a quick sanity check follows.

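A minimal way to verify the stack, assuming the NVIDIA driver and Container Toolkit are installed (the CUDA image tag is illustrative; the K80's driver branch tops out around CUDA 11.4):

```bash
# Confirm the host driver sees the K80
nvidia-smi

# Confirm Docker can pass the GPU into a container
docker run --rm --gpus all nvidia/cuda:11.4.3-base-ubuntu20.04 nvidia-smi
```
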
### Pulling the Image

To pull the image from Docker Hub, use:

```bash
docker pull dogkeeper886/ollama37
```

### Running the Container

To run the container with GPU support, execute:

```bash
docker run --runtime=nvidia --gpus all -p 11434:11434 dogkeeper886/ollama37
```

This command starts Ollama and exposes it on port `11434`, allowing you to interact with the service; two common ways to do so are shown below.

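For example, assuming a running container (the container name and model here are placeholders, not taken from the project):

```bash
# Run a model through the container's Ollama CLI
docker exec -it <container-name> ollama run llama2

# Or call the standard Ollama REST API
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```
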
---

For further assistance and community support, visit the [GitHub Issues page](https://github.com/dogkeeper886/ollama-k80-lab/issues).