# Development

Install prerequisites:

- [Go](https://go.dev/doc/install)
- C/C++ Compiler, e.g. Clang on macOS, [TDM-GCC](https://github.com/jmeubank/tdm-gcc/releases/latest) (Windows amd64) or [llvm-mingw](https://github.com/mstorsjo/llvm-mingw) (Windows arm64), or GCC/Clang on Linux

Then build and run Ollama from the root directory of the repository:

```shell
go run . serve
```
## macOS (Apple Silicon)

macOS on Apple Silicon supports Metal, which is built into the Ollama binary. No additional steps are required.
## macOS (Intel)

Install prerequisites:

- [CMake](https://cmake.org/download/) or `brew install cmake`

Then, configure and build the project:

```shell
cmake -B build
cmake --build build
```

Lastly, run Ollama:

```shell
go run . serve
```
## Windows

Install prerequisites:

- [CMake](https://cmake.org/download/)
- [Visual Studio 2022](https://visualstudio.microsoft.com/downloads/) including the Native Desktop Workload
- (Optional) AMD GPU support
  - [ROCm](https://rocm.docs.amd.com/en/latest/)
  - [Ninja](https://github.com/ninja-build/ninja/releases)
- (Optional) NVIDIA GPU support
  - [CUDA SDK](https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=11&target_type=exe_network)

Then, configure and build the project:

```shell
cmake -B build
cmake --build build --config Release
```

> [!IMPORTANT]
> Building for ROCm requires additional flags:
> ```
> cmake -B build -G Ninja -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++
> cmake --build build --config Release
> ```

Lastly, run Ollama:

```shell
go run . serve
```
## Windows (ARM)

Windows ARM does not support additional acceleration libraries at this time. Do not use CMake; simply `go run` or `go build`.
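For example, a minimal sketch (the binary name comes from the Go module, so adjust it if yours differs):

```shell
go build .
.\ollama.exe serve
```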
## Linux

Install prerequisites:

- [CMake](https://cmake.org/download/) or `sudo apt install cmake` or `sudo dnf install cmake`
- (Optional) AMD GPU support
  - [ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/quick-start.html)
- (Optional) NVIDIA GPU support
  - [CUDA SDK](https://developer.nvidia.com/cuda-downloads)

> [!IMPORTANT]
> Ensure prerequisites are in `PATH` before running CMake.
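> For example, with the typical default install locations (a sketch only; adjust the paths to wherever ROCm and the CUDA toolkit are installed on your system):
> ```shell
> export PATH=/opt/rocm/bin:/usr/local/cuda/bin:$PATH
> ```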
Then, configure and build the project:

```shell
cmake -B build
cmake --build build
```

Lastly, run Ollama:

```shell
go run . serve
```
## Docker

```shell
docker build .
```
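To try the image you just built, it helps to tag it; this is only a sketch, the tag `ollama-dev` is arbitrary, and Ollama listens on port 11434 by default:

```shell
docker build -t ollama-dev .
docker run -p 11434:11434 ollama-dev
```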
### ROCm

```shell
docker build --build-arg FLAVOR=rocm .
```
## Running tests

To run tests, use `go test`:

```shell
go test ./...
```
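To iterate on one package or a single test, the usual `go test` flags apply; the package path and test name below are only placeholders:

```shell
go test ./server/... -run TestName -v
```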
## Library detection

Ollama looks for acceleration libraries in the following paths relative to the `ollama` executable:

* `./lib/ollama` (Windows)
* `../lib/ollama` (Linux)
* `.` (macOS)
* `build/lib/ollama` (for development)

If the libraries are not found, Ollama will run without any acceleration libraries.
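As a quick sanity check during development, you can list the build output directory after a CMake build (a sketch, assuming the project was configured with `cmake -B build` from the repository root; the exact library names depend on your platform and GPUs):

```shell
ls build/lib/ollama
```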