In the `docker-compose.yml` file, you can change the `CLI_ARGS` variable for all of the available UIs.
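
As a rough sketch of what that looks like (the exact service names and layout of your `docker-compose.yml` may differ), `CLI_ARGS` is an environment entry on the UI's service, so changing the flags is a one-line edit:

```yaml
services:
  auto:                      # same pattern for hlky, auto-cpu, and lstein
    environment:
      # everything in CLI_ARGS is passed to the UI on startup
      - CLI_ARGS=--medvram --opt-split-attention
```
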
## `auto`

By default, `--medvram --opt-split-attention` are given, which allows you to run this model on a 6GB GPU; you can also use `--lowvram` for lower-end GPUs.
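
For instance (hypothetical snippet, matching however `CLI_ARGS` is declared in your compose file), switching to low-VRAM mode is just an edit of that variable:

```yaml
    environment:
      # replace --medvram with --lowvram on lower-end GPUs
      - CLI_ARGS=--lowvram --opt-split-attention
```
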
[You can find the full list of cli arguments here.](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/shared.py)

This UI also supports custom models: put the weights in the `cache/custom-models` folder (create it if it's not there), then change the model from the settings tab. There is also a `services/AUTOMATIC1111/config.json` file which contains additional configuration for the UI.

## `auto-cpu`

A CPU-only instance of the above; it requires `--no-half --precision full` to run.
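
A hedged sketch of what that could look like for the `auto-cpu` service (service name and layout assumed, check your `docker-compose.yml`):

```yaml
services:
  auto-cpu:
    environment:
      # CPU-only: half precision is not supported, so run at full precision
      - CLI_ARGS=--no-half --precision full
```
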
## `hlky`

By default, `--optimized-turbo` is given, which allows you to run this model on a 6GB GPU. However, some features might not be available in this mode. [You can find the full list of cli arguments here.](https://github.com/sd-webui/stable-diffusion-webui/blob/2236e8b5854092054e2c30edc559006ace53bf96/scripts/webui.py)

## `lstein`

No config at this time!