You have to use one of AWS's GPU-enabled VMs and their Deep Learning OS images. These have the right drivers, the toolkit, and everything else already installed and optimized. [#70](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/70)

---

### lstein: OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'
Just update `docker-compose.yaml` to [refresh the models](https://github.com/lstein/stable-diffusion/issues/34) (i.e. `PRELOAD=true`). [#72](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/72#issuecomment-1250382056)
```yml
lstein:
  <<: *base_service
  profiles: ["lstein"]
  build: ./services/lstein/
  environment:
    - PRELOAD=true
    - CLI_ARGS=
```
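After updating the file, the service needs to be rebuilt and restarted so the preload actually runs. A sketch of the invocation, assuming the repo's usual `docker compose --profile <name> up --build` pattern:

```shell
# Rebuild the lstein image and start it; with PRELOAD=true the
# container re-downloads the model files (including the tokenizer)
# on startup, which resolves the missing-tokenizer error.
docker compose --profile lstein up --build
```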

---

### Output is always a green image