Updated FAQ (markdown)

AbdBarho
2022-09-19 11:16:13 +02:00
parent 3bb7dbfe32
commit 2d47035902

FAQ.md

You have to use one of AWS's GPU-enabled VMs and their Deep Learning OS images. These have the right drivers, the toolkit, and all the rest already installed and optimized. [#70](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/70)
---
### lstein: OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'
Update `docker-compose.yaml` to [refresh the models](https://github.com/lstein/stable-diffusion/issues/34) by setting `PRELOAD=true`. [#72](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/72#issuecomment-1250382056)
```yml
lstein:
  <<: *base_service
  profiles: ["lstein"]
  build: ./services/lstein/
  environment:
    - PRELOAD=true
    - CLI_ARGS=
```
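After editing the compose file, the service needs to be rebuilt and restarted for `PRELOAD` to take effect. A minimal sketch, assuming the repository's standard profile-based compose workflow:

```shell
# Rebuild the lstein image and start the service so the new
# PRELOAD=true environment variable is picked up (assumed workflow)
docker compose --profile lstein up --build
```
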
---
### Output is always a green image