Add docker compose section in readme.

This commit is contained in:
Shang Chieh Tseng
2025-05-09 08:57:51 +08:00
parent 8dc8df36e0
commit e5f2fdb693
3 changed files with 58 additions and 66 deletions

idea.md

@@ -1,65 +0,0 @@
**Video Title:**
Run LLMs Directly in VS Code with Continue (An Alternative to Cline!)
**Video Structure & Script Ideas:**
* **You:** "Hey everyone, I'm going to introduce 'Continue,' a VS Code plugin that solves a common problem with other popular solutions like Cline, and lets you run LLMs locally, directly within your coding environment."
* **You:** "Lots of people are using plugins like Cline to bring LLMs into VS Code. Its a great concept! However, Cline has a significant issue: the context window. Cline sends your prompts and code to a remote server, and the size of that context window the amount of text the LLM can consider can be *huge*. "
* **You:** "This means running Cline often requires a powerful GPU. People with older gpu like me, find themselves forced to use smaller, less capable LLMs, like the 0.5B model, just to get it to work."
* **You:** "Thats where 'Continue' comes in. 'Continue' tackles this problem head-on by using Ollama to run the LLM *locally* on your machine."
* **You:** "This means you can leverage powerful LLMs without needing a cloud connection or a high-end GPU." (Emphasize the key benefit: local, powerful LLMs.)
* **You:** "Critically, Continue's interface lives *inside* VS Code. No more switching back and forth between your editor and a browser window. It's a seamless, integrated experience."
* **Briefly explain Ollama's role:** "Ollama makes running LLMs locally incredibly easy. You don't need to be an AI expert to get started."
**4. Demo (2:30 - 5:00): Show, Don't Just Tell!**
* **(Visual: Screen recording of you using "Continue" in VS Code.)**
* **You (Narrating):**
* "Let's walk through a quick example. I have this file open..." (Show the file in VS Code.)
* "To use Continue, I just right-click, and select 'Continue Chat'." (Show the context menu.)
* "You can type in a prompt, like 'Explain this code snippet', and press Enter." (Type a simple prompt and wait for the response.)
* **Show the response appearing directly within VS Code.**
* **Show how you can easily select a block of code and send it to the LLM for analysis or explanation.** "The real power here is the ability to easily select code and send it to the LLM."
* **Show how easy it is to experiment with different LLMs through Ollama.** (If time allows - demonstrates flexibility).
* **Keep it concise and focused on the core benefits (seamless integration, local LLMs).**
**5. Wrap Up & Call to Action (5:00 - 5:30)**
* **(Visual: End screen with links and social media handles.)**
* **You:** "So, if you're looking for a way to bring the power of LLMs into your VS Code workflow without the limitations of Cline, 'Continue' is definitely worth checking out.”
* **You:** "I'm going to put a link to the plugin in the description below. Go give it a try and let me know what you think in the comments!"
* **Encourage engagement:** "If you found this video helpful, please like and subscribe for more developer tools and tutorials!"
**Production Tips:**
* **Screen Recording Software:** OBS Studio (free), Camtasia (paid).
* **Microphone:** A decent USB microphone will significantly improve audio quality.
* **Lighting:** Good lighting makes you look more professional.
* **Edit!:** Cut out any unnecessary pauses or mistakes. Tight editing makes a big difference.
* **Music:** Background music can add atmosphere, but keep it subtle and non-distracting. Use royalty-free music.
**IMPORTANT CONSIDERATIONS (Read This!)**
* **Target Audience:** This video is for developers who are already familiar with VS Code and potentially have some interest in using LLMs. Don't assume *everyone* knows what an LLM is.
* **SEO (Search Engine Optimization):**
* **Keywords:** "VS Code", "LLM", "AI", "Local LLM", "Continue", "Cline", "Ollama" - use these naturally throughout your video title, description, and tags.
* **Thumbnail:** Create a visually appealing thumbnail that clearly communicates the video's topic. Include the "Continue" plugin icon and some text (e.g., "Local LLMs in VS Code").
* **Description:** Write a detailed description that includes keywords and a summary of the video's content.
* **Engagement is Key:** Respond to comments and questions. Building a community around your channel is crucial for growth.
* **Call to Action Placement:** Put the call to action (subscribe, like) in multiple places: at the beginning, middle, and end of the video.
* **Monetization:** Consider how you might monetize your channel (ads, sponsorships) once you have a decent amount of views.
To help me refine this further, can you tell me:
* What level of developer are you targeting? (Beginner, Intermediate, Advanced?)
* Do you want to include a section on how to install Ollama? (It adds complexity, but might be helpful.)
* Are there any specific features of "Continue" that you want to highlight?


@@ -42,6 +42,63 @@ docker run --runtime=nvidia --gpus all -p 11434:11434 dogkeeper886/ollama37
This command will start Ollama and expose it on port `11434`, allowing you to interact with the service.
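For example, once the container is running you can check from the host that the API responds; the `/api/tags` endpoint lists the models currently available locally:
```bash
# Returns a JSON list of local models (empty until a model is pulled)
curl http://localhost:11434/api/tags
```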
## Ollama37 Docker Compose
This `docker-compose.yml` file sets up the `dogkeeper886/ollama37` container for a more streamlined and persistent environment. It uses a volume to persist data and ensures the container automatically restarts if it fails.
### Prerequisites
* Docker
* Docker Compose
### Usage
1. **Save the `docker-compose.yml` file:** Copy the content shown below into a file named `docker-compose.yml` in a convenient directory.
2. **Run the container:** Open a terminal in the directory where you saved the file and run the following command:
```bash
docker-compose up -d
```
This command downloads the `dogkeeper886/ollama37` image (if not already present) and starts the Ollama container in detached mode.
```yml
version: '3.8'
services:
ollama:
image: dogkeeper886/ollama37
container_name: ollama37
ports:
- "11434:11434"
volumes:
- ./.ollama:/root/.ollama # Persist Ollama data
restart: unless-stopped # Automatically restart the container
runtime: nvidia # Utilize NVIDIA GPU runtime
```
**Explanation of key `docker-compose.yml` directives:**
* `version: '3.8'`: Specifies the Compose file format version (recent versions of Docker Compose ignore this field and may warn that it is obsolete).
* `services.ollama.image: dogkeeper886/ollama37`: Defines the Docker image to use.
* `ports: - "11434:11434"`: Maps port 11434 on the host machine to port 11434 inside the container, making Ollama accessible.
* `volumes: - ./.ollama:/root/.ollama`: **Important:** This mounts the `.ollama` directory (located next to the `docker-compose.yml` file) to `/root/.ollama` inside the container, so downloaded models and Ollama configuration data are preserved even if the container is stopped or removed. Create the `.ollama` directory first if it does not already exist (see the commands after this list).
* `restart: unless-stopped`: Ensures the container automatically restarts if it crashes or the Docker daemon restarts, but not after you stop it explicitly (for example with `docker-compose down` or `docker stop`).
* `runtime: nvidia`: Explicitly instructs Docker to use the NVIDIA runtime, enabling GPU acceleration inside the container (requires the NVIDIA Container Toolkit on the host).
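As a minimal sketch (run from the directory that contains `docker-compose.yml`), preparing the data directory and confirming the container is up might look like this:
```bash
# Create the bind-mounted data directory before the first start
mkdir -p .ollama

# After `docker-compose up -d`, confirm the container is running
docker-compose ps
```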
3. **Accessing Ollama:** After running the container, you can interact with Ollama using its API. Refer to the Ollama documentation for usage details.
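A quick smoke test might look like the following; the model name `llama3` is only an example and needs to be pulled into the container first:
```bash
# Pull an example model into the running container
docker exec ollama37 ollama pull llama3

# Request a completion from the Ollama HTTP API
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'
```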
### Stopping the Container
To stop the container, run:
```bash
docker-compose down
```
This will stop and remove the container, but the data stored in the `.ollama` directory will be preserved.
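To verify persistence, you can inspect the bind-mounted directory on the host; the `models` subdirectory assumed below follows Ollama's default on-disk layout:
```bash
# Downloaded models remain on the host after the container is removed
ls ./.ollama/models

# Bringing the stack back up reuses the same data
docker-compose up -d
```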
## 📦 Version History
### v1.2.0 (2025-05-06)


@@ -7,7 +7,7 @@ services:
ports:
- "11434:11434"
volumes:
- /home/jack/.ollama:/root/.ollama
- ./.ollama:/root/.ollama
restart: unless-stopped
runtime: nvidia
#volumes: