Update README.md
@@ -13,9 +13,13 @@ This project explores running Ollama, a local LLM runner, with an NVIDIA K80 GPU
 * **Dify Integration:** Dify provides a robust framework for building LLM applications (chatbots, agents, etc.). We want to see how seamlessly Ollama and Dify can work together, allowing us to rapidly prototype and deploy LLM-powered solutions.
 
 * **Cost-Effective Experimentation:** Running LLMs locally avoids the costs associated with cloud-based APIs, enabling broader access and experimentation.
 
-## Contributing
+## Modified Version
 
-Contributions are welcome! Please feel free to submit pull requests or open issues.
+This repository includes a modified version of Ollama, specifically customized for running on a Tesla K80 GPU. For more details and contributions, visit our GitHub page:
+
+[ollama37](https://github.com/dogkeeper886/ollama37)
+
+This custom build aims to optimize performance and compatibility with the Tesla K80 hardware, ensuring smoother integration and enhanced efficiency in LLM applications.
 
 ## License
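The new section points readers at the ollama37 fork but shows no usage. A minimal smoke test like the sketch below can confirm that a local build is actually serving requests; it assumes the ollama37 build exposes the standard Ollama REST API on the default port 11434, and the model name `llama2` is a placeholder for whatever model you have pulled.

```python
# Minimal smoke test for a local Ollama instance (e.g. the ollama37 K80 build).
# Assumes the server listens on the default port 11434 and a model has already
# been pulled; "llama2" below is a placeholder, not part of this repository.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # standard Ollama endpoint

payload = json.dumps({
    "model": "llama2",          # placeholder: use whatever model you pulled
    "prompt": "Say hello in one short sentence.",
    "stream": False,            # ask for a single JSON object, not a stream
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])

# eval_count / eval_duration (nanoseconds) yield a rough tokens-per-second
# figure, useful for comparing the K80 build against a CPU-only run.
if body.get("eval_duration"):
    tps = body["eval_count"] / (body["eval_duration"] / 1e9)
    print(f"~{tps:.1f} tokens/s")
```

The throughput number is only a rough sanity check, but a large gap between this figure and a CPU-only run is a quick way to verify the GPU code path is being exercised.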