From 2d47035902f97ebf6c06e96b7df9aa612599e530 Mon Sep 17 00:00:00 2001
From: AbdBarho
Date: Mon, 19 Sep 2022 11:16:13 +0200
Subject: [PATCH] Updated FAQ (markdown)

---
 FAQ.md | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/FAQ.md b/FAQ.md
index 4f33474..d6fa550 100644
--- a/FAQ.md
+++ b/FAQ.md
@@ -38,6 +38,21 @@ You can try forcing plain text auth creds storage by removing line with "credSto
 You have to use one of AWS's GPU-enabled VMs and their Deep Learning OS images. These have the right drivers, the toolkit and all the rest already installed and optimized. [#70](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/70)
 
+---
+### lstein: OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'
+
+Just update `docker-compose.yaml` to [refresh the models](https://github.com/lstein/stable-diffusion/issues/34) (i.e. set `PRELOAD=true`). [#72](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/72#issuecomment-1250382056)
+
+```yml
+  lstein:
+    <<: *base_service
+    profiles: ["lstein"]
+    build: ./services/lstein/
+    environment:
+      - PRELOAD=true
+      - CLI_ARGS=
+```
+
 ---
 ### Output is always a green image