From 22340258f2bc4100a2ecf8d0932fb30aff69c53f Mon Sep 17 00:00:00 2001
From: AbdBarho
Date: Sun, 30 Oct 2022 09:57:07 +0100
Subject: [PATCH] Updated Usage (markdown)

---
 Usage.md | 30 ++++++++++++++++++++++++------
 1 file changed, 24 insertions(+), 6 deletions(-)

diff --git a/Usage.md b/Usage.md
index 0acf2b2..d327fe0 100644
--- a/Usage.md
+++ b/Usage.md
@@ -18,20 +18,38 @@ services:
 
 Possible configuration:
 
-## `auto`
+# `auto`
 
 By default, `--medvram` is given, which allows you to use this model on a 6GB GPU; you can also use `--lowvram` for lower-end GPUs. [You can find the full list of CLI arguments here.](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/shared.py)
 
-This also has support for custom models, put the weights in the folder `data/StableDiffusion`, you can then change the model from the settings tab. There is also a `services/AUTOMATIC1111/config.json` file which contains additional config for the UI.
-## `auto-cpu`
-CPU instance of the above, requires `--no-half --precision full` for it to run, which are already given
+### Custom models
+
+Custom models are also supported: put the weights in the `data/StableDiffusion` folder, then change the model from the settings tab.
+
+### General Config
+
+There are multiple files in `data/config/auto`, such as `config.json` and `ui-config.json`, which contain additional configuration for the UI.
+
+### Scripts
+
+Put your scripts in `data/config/auto/scripts` and restart the container.
+
+### Extensions
+
+Put your extensions in `data/config/auto/extensions`. You can also create a script `data/config/auto/startup.sh`, which will be called on container startup, in case you want to install additional dependencies for your extensions or anything else.
+
+### **DON'T OPEN AN ISSUE IF A SCRIPT OR AN EXTENSION IS NOT WORKING**
+
+I maintain neither the UI nor the extensions, so I can't help you.
-## `hlky`
+
+# `auto-cpu`
+
+CPU instance of the above; some features might not work, use at your own risk.
+
+# `hlky`
 
 By default: `--optimized-turbo` is given, which allows you to use this model on a 6GB GPU. However, some features might not be available in this mode. [You can find the full list of CLI arguments here.](https://github.com/sd-webui/stable-diffusion-webui/blob/2236e8b5854092054e2c30edc559006ace53bf96/scripts/webui.py)
 
-## `lstein`
+# `lstein`
 
 This fork might require a preload to work, see [#72](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/72)
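Note: the `--medvram`/`--lowvram` choice described under `auto` is normally passed through the compose file rather than on the command line. A minimal config sketch, assuming the project exposes the arguments via an environment variable named `CLI_ARGS` (verify against the repo's actual `docker-compose.yml`; the variable name and service shape are assumptions):

```yaml
# docker-compose override sketch (hypothetical; check the repo's compose
# file for the real service definition and variable name).
services:
  auto:
    environment:
      - CLI_ARGS=--lowvram  # swap --medvram for --lowvram on lower-end GPUs
```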
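Note: the host-side layout the patch describes for the `auto` service (model weights, scripts, extensions, and the optional startup hook) can be sketched as a shell snippet. The paths come from the patch; the package installed in `startup.sh` is a hypothetical placeholder, not a real dependency.

```shell
# Sketch of the data/ layout described above (paths as in the patch).
mkdir -p data/StableDiffusion data/config/auto/scripts data/config/auto/extensions

# Optional startup hook: executed inside the container on startup,
# e.g. to install extension dependencies. The package name below is
# a hypothetical placeholder.
cat > data/config/auto/startup.sh <<'EOF'
#!/bin/bash
pip install some-extension-dependency  # hypothetical example
EOF
chmod +x data/config/auto/startup.sh
```

Model weights then go into `data/StableDiffusion`, scripts into `data/config/auto/scripts`, and extensions into `data/config/auto/extensions`, as described in the patch.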