From 27daf6343831753e59ca4fca6ccb305bb37bf1ac Mon Sep 17 00:00:00 2001
From: dogkeeper886
Date: Fri, 28 Mar 2025 18:40:53 +0800
Subject: [PATCH] Create README.md

---
 README.md | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
 create mode 100644 README.md

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..6285248
--- /dev/null
+++ b/README.md
@@ -0,0 +1,22 @@
+# ollama-k80-lab
+
+[![License](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+
+## Overview
+
+This project explores running Ollama, a local LLM runner, with an NVIDIA K80 GPU and investigates its integration with Dify, a powerful framework for building LLM-powered applications. The goal is to assess performance, explore limitations, and demonstrate the potential of this combination for local LLM experimentation and deployment.
+
+## Motivation
+
+* **Local LLM Exploration:** Ollama makes it incredibly easy to run Large Language Models locally. This project aims to combine that ease of use with the power of a GPU.
+* **K80 Utilization:** The NVIDIA K80, while older, remains a viable GPU for LLM inference. This project aims to demonstrate its capability for running small to medium-sized LLMs.
+* **Dify Integration:** Dify provides a robust framework for building LLM applications (chatbots, agents, etc.). We want to see how seamlessly Ollama and Dify can work together, allowing us to rapidly prototype and deploy LLM-powered solutions.
+* **Cost-Effective Experimentation:** Running LLMs locally avoids the costs associated with cloud-based APIs, enabling broader access and experimentation.
+
+## Contributing
+
+Contributions are welcome! Please feel free to submit pull requests or open issues.
+
+## License
+
+This project is licensed under the [MIT License](LICENSE).