Ollama
An easy, fast runtime for large language models, powered by llama.cpp.
Note: this project is a work in progress. Certain models that can be run with Ollama are intended for research and/or non-commercial use only.
Install
Using pip:
pip install ollama
Using docker:
docker run ollama/ollama
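The bare docker run command starts the container, but to reach the API from the host and keep downloaded models between runs you will likely want to publish the port and mount a volume. A minimal sketch, assuming the default API port 11434 and the /root/.ollama model directory used by current Ollama images (both may differ in this snapshot):
# publish the API port and persist models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama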
Quickstart
To run a model, use ollama run:
ollama run orca-mini-3b
You can also run models from Hugging Face:
ollama run huggingface.co/TheBloke/orca_mini_3B-GGML
Or run a downloaded model file directly:
ollama run ~/Downloads/orca-mini-13b.ggmlv3.q4_0.bin
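Besides the interactive CLI, Ollama also exposes a local HTTP API you can script against. A minimal sketch, assuming the /api/generate endpoint and default port 11434 from current Ollama documentation, which may differ in this snapshot:
# send a single prompt to a running model over the local API
curl http://localhost:11434/api/generate -d '{
  "model": "orca-mini-3b",
  "prompt": "Why is the sky blue?"
}'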
Building
go generate ./...
go build .
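The build produces an ollama binary in the current directory, which can be used in place of an installed one. A short sketch, assuming the locally built binary supports the same run command shown in the quickstart:
# run a model with the freshly built binary
./ollama run orca-mini-3b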
Documentation