From 135b799b13bb5c8132bbe0753600b5543fb712db Mon Sep 17 00:00:00 2001
From: Shang Chieh Tseng
Date: Wed, 29 Oct 2025 14:21:03 +0800
Subject: [PATCH] Update command.

---
 CLAUDE.md            | 32 ++------------------------------
 docs/manual-build.md |  8 ++++----
 2 files changed, 6 insertions(+), 34 deletions(-)

diff --git a/CLAUDE.md b/CLAUDE.md
index 61cdcfe2..36cc1cfb 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -39,22 +39,6 @@ The project documentation is organized as follows:
 
 ### Building the Project
 
-#### Quick Build
-```bash
-# Configure build (required on Linux/Intel macOS/Windows)
-cmake -B build
-cmake --build build
-
-# For ROCm on Windows
-cmake -B build -G Ninja -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++
-cmake --build build --config Release
-
-# Build Go binary
-go build -o ollama .
-```
-
-#### Tesla K80 Optimized Build
-For Tesla K80 and CUDA Compute Capability 3.7 hardware, use specific compiler versions:
 ```bash
 # Configure with GCC 10 and CUDA 11.4 support
 CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ cmake -B build
@@ -90,18 +74,6 @@ go test ./integration/...
 go test ./server/...
 ```
 
-### Docker
-```bash
-# Build standard image
-docker build .
-
-# Build with ROCm support
-docker build --build-arg FLAVOR=rocm .
-
-# Build ollama37 image for Tesla K80/Compute 3.7 support
-docker build -f ollama37.Dockerfile -t ollama37 .
-```
-
 ## Architecture Overview
 
 Ollama is a local LLM server with Go backend and C++/CUDA acceleration:
@@ -155,7 +127,7 @@ The project supports multiple acceleration backends:
 
 Libraries are dynamically loaded from:
 - `./lib/ollama` (Windows)
-- `../lib/ollama` (Linux) 
+- `../lib/ollama` (Linux)
 - `.` (macOS)
 - `build/lib/ollama` (development)
 
@@ -170,4 +142,4 @@ Libraries are dynamically loaded from:
 - Unit tests throughout codebase (`*_test.go`)
 - Integration tests in `integration/` requiring running server
 - Benchmark tests for performance validation
-- Platform-specific test files for GPU/hardware features
\ No newline at end of file
+- Platform-specific test files for GPU/hardware features
diff --git a/docs/manual-build.md b/docs/manual-build.md
index ba1af1f1..44fb81b5 100644
--- a/docs/manual-build.md
+++ b/docs/manual-build.md
@@ -37,8 +37,8 @@ git clone https://github.com/dogkeeper886/ollama37
 cd ollama37
 
 # If compiling from source (requires GCC 10):
-cmake -B build
-cmake --build build -j$(nproc)
+CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ cmake -B build
+CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ cmake --build build -j$(nproc)
 go build -o ollama .
 
 # If using pre-built binary (GCC 10 not required):
@@ -522,13 +522,13 @@ go version
 3. **CMake Configuration:** Set compiler variables and configure the build system:
 
    ```bash
-   cmake -B build
+   CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ cmake -B build
    ```
 
 4. **CMake Build:** Compile the C++ components (parallel build):
 
    ```bash
-   cmake --build build -j$(nproc)
+   CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ cmake --build build -j$(nproc)
    ```
 
 > **Note:** `-j$(nproc)` enables parallel compilation using all available CPU cores. You can specify a number like `-j4` to limit the number of parallel jobs.