Updated Usage (markdown)

AbdBarho
2022-10-30 09:57:07 +01:00
parent e4c1cada2d
commit 22340258f2

Possible configuration:
# `auto`
By default, `--medvram` is given, which allows you to run this model on a 6GB GPU; you can also use `--lowvram` for lower-end GPUs.
[You can find the full list of cli arguments here.](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/shared.py)
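As a sketch, assuming (as in this repo's `docker-compose.yml`) that the container receives its webui flags through a `CLI_ARGS` environment variable, you could switch to `--lowvram` with a compose override file:

```yaml
# docker-compose.override.yml — hypothetical override, assuming the
# `auto` service reads its webui flags from CLI_ARGS
services:
  auto:
    environment:
      - CLI_ARGS=--lowvram
```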
### Custom models
This also has support for custom models: put the weights in the folder `data/StableDiffusion`, then you can change the model from the settings tab.
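For example (`my-model.ckpt` is a placeholder filename, not a model shipped with this repo):

```shell
# Place downloaded weights into the mounted models folder.
mkdir -p data/StableDiffusion
touch my-model.ckpt                     # stand-in for a real checkpoint file
mv my-model.ckpt data/StableDiffusion/
# Then pick the model from the settings tab in the UI.
```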
### General Config
There are multiple files in `data/config/auto`, such as `config.json` and `ui-config.json`, which contain additional configuration for the UI.
### Scripts
Put your scripts in `data/config/auto/scripts` and restart the container.
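A minimal sketch (`my_script.py` is a hypothetical script name, and the `auto` service name in the restart command is an assumption):

```shell
# Drop a custom script into the mounted scripts folder.
mkdir -p data/config/auto/scripts
printf 'print("placeholder script")\n' > data/config/auto/scripts/my_script.py
# docker compose restart auto   # then restart the container
```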
### Extensions
Put your extensions in `data/config/auto/extensions`. There is also the option to create a script `data/config/auto/startup.sh`, which will be called on container startup, in case you want to install any additional dependencies for your extensions or anything else.
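A minimal sketch of what `data/config/auto/startup.sh` might look like (`some-package` is a placeholder dependency name, not a real requirement of any extension):

```shell
# Create a startup script that installs extra dependencies for extensions.
mkdir -p data/config/auto
cat > data/config/auto/startup.sh <<'EOF'
#!/bin/bash
# Called on container startup; install extension dependencies here.
pip install some-package   # placeholder package name
EOF
chmod +x data/config/auto/startup.sh
```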
### **DON'T OPEN AN ISSUE IF A SCRIPT OR AN EXTENSION IS NOT WORKING**
I maintain neither the UI nor the extensions, so I can't help you.
# `auto-cpu`
CPU instance of the above; some stuff might not work, use at your own risk.
# `hlky`
By default, `--optimized-turbo` is given, which allows you to use this model on a 6GB GPU. However, some features might not be available in this mode. [You can find the full list of cli arguments here.](https://github.com/sd-webui/stable-diffusion-webui/blob/2236e8b5854092054e2c30edc559006ace53bf96/scripts/webui.py)
# `lstein`
This fork might require a preload to work; see [#72](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/72).