Mirror of https://github.com/dogkeeper886/ollama37.git, synced 2025-12-10 15:57:04 +00:00
move to contained directory
README.md (96 changed lines)
@@ -1,66 +1,82 @@
-# ollama
-
-## Running
-
-Install dependencies:
-
-```
-pip install -r requirements.txt
-```
-
-Put your model in `models/` and run:
-
-```
-python3 ollama.py serve
-```
-
-To run the app:
-
-```
-cd desktop
-npm install
-npm start
-```
-
-## Building
-
-If using Apple silicon, you need a Python version that supports arm64:
-
-```bash
-wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh
-bash Miniforge3-MacOSX-arm64.sh
-```
-
-Get the dependencies:
-
-```bash
-pip install -r requirements.txt
-```
-
-Then build a binary for your current platform:
-
-```bash
-python3 build.py
-```
-
-### Building the app
-
-```
-cd desktop
-npm run package
-```
-
-## API
-
-### `GET /models`
-
-Returns a list of available models
-
-### `POST /generate`
-
-Generates completions as a series of JSON objects
-
-model: `string` - The name of the model to use in the `models` folder.
-prompt: `string` - The prompt to use.
+# Ollama
+
+🙊
+
+- Run models, fast
+- Download, manage and import models
+
+## Install
+
+```
+pip install ollama
+```
+
+## Example quickstart
+
+```python
+import ollama
+model_name = "huggingface.co/thebloke/llama-7b-ggml"
+model = ollama.pull(model_name)
+ollama.load(model)
+ollama.generate(model_name, "hi")
+```
+
+## Reference
+
+### `ollama.load`
+
+Load a model from a path or a docker image
+
+```python
+ollama.load("model name")
+```
+
+### `ollama.generate("message")`
+
+Generate a completion
+
+```python
+ollama.generate(model, "hi")
+```
+
+### `ollama.models`
+
+List models
+
+```
+models = ollama.models()
+```
+
+### `ollama.serve`
+
+Serve the ollama http server
+
+## Coming Soon
+
+### `ollama.pull`
+
+Examples:
+
+```python
+ollama.pull("huggingface.co/thebloke/llama-7b-ggml")
+```
+
+### `ollama.import`
+
+Import an existing model into the model store
+
+```python
+ollama.import("./path/to/model")
+```
+
+### `ollama.search`
+
+Search for compatible models that Ollama can run
+
+```python
+ollama.search("llama-7b")
+```
+
+## Future CLI
+
+```
+ollama run huggingface.co/thebloke/llama-7b-ggml
+```
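The quickstart and reference entries added in this commit document the new Python API only one call at a time. Purely as illustration, here is a minimal end-to-end sketch built from those documented calls. The return types and the non-streaming behaviour of `generate` are assumptions, and `ollama.import` is left out because `import` is a reserved word in Python, so that call as written in the README would be a syntax error.

```python
# Minimal sketch combining the calls documented in the new README.
# Assumes `pip install ollama` exposes the module exactly as the
# reference above describes; the return values are assumptions.
import ollama

model_name = "huggingface.co/thebloke/llama-7b-ggml"

# Download the model into the local store (listed under "Coming Soon",
# so the published package may not ship this call yet).
model = ollama.pull(model_name)

# Load it, then request a completion.
ollama.load(model)
print(ollama.generate(model_name, "hi"))

# Enumerate everything in the local model store.
for m in ollama.models():
    print(m)
```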
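The removed `## API` section describes the prototype HTTP server's two endpoints but gives no request example. A hedged client sketch, assuming the server started by `python3 ollama.py serve` listens on localhost and frames its "series of JSON objects" one per line; the address, port, and response fields are assumptions, and only the endpoint paths and the `model`/`prompt` parameters come from the README:

```python
# Hypothetical client for the old HTTP API removed in this commit.
import json
import urllib.request

BASE = "http://localhost:8000"  # assumed address of `python3 ollama.py serve`

# GET /models - returns a list of available models.
with urllib.request.urlopen(f"{BASE}/models") as resp:
    print(json.load(resp))

# POST /generate - generates completions as a series of JSON objects.
payload = json.dumps({
    "model": "llama-7b-ggml",  # name of a model in the `models` folder
    "prompt": "hi",            # the prompt to use
}).encode()
req = urllib.request.Request(f"{BASE}/generate", data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    for line in resp:          # one JSON object per line (assumed framing)
        print(json.loads(line))
```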