Update command.

Shang Chieh Tseng
2025-10-29 14:21:03 +08:00
parent 6024408ea5
commit 135b799b13
2 changed files with 6 additions and 34 deletions

@@ -39,22 +39,6 @@ The project documentation is organized as follows:
### Building the Project
#### Quick Build
```bash
# Configure build (required on Linux/Intel macOS/Windows)
cmake -B build
cmake --build build
# For ROCm on Windows
cmake -B build -G Ninja -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++
cmake --build build --config Release
# Build Go binary
go build -o ollama .
```
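After the build, a quick smoke test is to start the server and query it, assuming this fork keeps the upstream `ollama` CLI subcommands:
```bash
# Start the server in one terminal
./ollama serve

# In a second terminal, confirm the server answers
./ollama list
```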
#### Tesla K80 Optimized Build
For Tesla K80 and other CUDA Compute Capability 3.7 hardware, pin the toolchain versions (GCC 10 with CUDA 11.4):
```bash
# Configure with GCC 10 and CUDA 11.4 support
CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ cmake -B build
```
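A hedged sketch of a full K80 configure-and-build, assuming the conventional CMake flag for compute capability 3.7 (`-DCMAKE_CUDA_ARCHITECTURES=37` is an assumption, not taken from this commit):
```bash
# Hypothetical: pin the CUDA architecture for K80 (compute 3.7), then build
CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ cmake -B build -DCMAKE_CUDA_ARCHITECTURES=37
cmake --build build
```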
@@ -90,18 +74,6 @@ go test ./integration/...
```bash
go test ./server/...
```
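When iterating on a single failure, standard `go test` flags (not specific to this repo) can scope the run; the `TestGenerate` name below is a placeholder:
```bash
# Run only tests matching a name pattern, with verbose output
go test -v -run TestGenerate ./server/...
```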
### Docker
```bash
# Build standard image
docker build .
# Build with ROCm support
docker build --build-arg FLAVOR=rocm .
# Build ollama37 image for Tesla K80/Compute 3.7 support
docker build -f ollama37.Dockerfile -t ollama37 .
```
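To run the resulting image, a minimal sketch assuming the NVIDIA Container Toolkit is installed and the server listens on Ollama's default port 11434:
```bash
# Expose the default API port and pass the GPUs through to the container
docker run --gpus all -p 11434:11434 ollama37
```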
## Architecture Overview
Ollama is a local LLM server with a Go backend and C++/CUDA acceleration:
@@ -155,7 +127,7 @@ The project supports multiple acceleration backends:
Libraries are dynamically loaded from the following locations; a quick probe sketch follows the list:
- `./lib/ollama` (Windows)
- `../lib/ollama` (Linux)
- `.` (macOS)
- `build/lib/ollama` (development)
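To check which of these directories exists on a given machine, a minimal shell sketch (running it from the binary's directory is an assumption):
```bash
# Probe each candidate library directory and report the ones that exist
for dir in ./lib/ollama ../lib/ollama . build/lib/ollama; do
  [ -d "$dir" ] && echo "found: $dir"
done
```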
@@ -170,4 +142,4 @@ Libraries are dynamically loaded from:
- Unit tests throughout the codebase (`*_test.go`)
- Integration tests in `integration/`, which require a running server
- Benchmark tests for performance validation (example invocation below)
- Platform-specific test files for GPU/hardware features
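To run only the benchmarks, the standard Go pattern selects them with `-bench` while `-run '^$'` matches no regular tests:
```bash
# Run all benchmarks, skip unit tests (the '^$' pattern matches no test names)
go test -bench=. -run '^$' ./...
```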