
1. (Gradio GUI Version) Local Install of Stable Diffusion for Windows
2. (Non-GUI Version) Local Install of Stable Diffusion for Windows
3. Img2img Guide
4. Prompt Weighting
5. Prompt Modifiers
6. Common Errors
1. (Gradio GUI Version) Local Install of Stable Diffusion for Windows
Visit https://huggingface.co/CompVis/stable-diffusion-v-1-4-original (create a
Hugging Face account first if you don't have one), scroll down and select
"Authorize"
Download the checkpoint: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt
Download Stable Diffusion:
https://github.com/basujindal/stable-diffusion/archive/refs/heads/main.zip
Unzip the stable-diffusion-main.zip file to your preferred location, go to the
stable-diffusion-main/models/ldm folder, and make a new folder inside called
stable-diffusion-v1. Then rename the sd-v1-4.ckpt file you downloaded to
model.ckpt and move it into this folder
Go back to the root of the stable-diffusion-main folder, open environment.yaml
in Notepad, scroll down to dependencies:, and add the line - git so it looks
like:
dependencies:
- git
- python=3.8.5
- pip=20.3
Download Miniconda from here: https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe and install it
Open Anaconda Prompt (miniconda3) and type cd followed by the path to the
stable-diffusion-main folder; for example, if you saved it in Documents you
would type cd Documents/stable-diffusion-main
Run the command conda env create -f environment.yaml (you only need to do this
the first time; otherwise skip it)
Run conda activate ldm, then pip install gradio (first time only or when updates
are required), and then python optimizedSD/txt2img_gradio.py
Enter the local address shown in the command window (it will look like
http://127.0.0.1:7860) into your web browser's address bar, and there is your
GUI to create images!
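
Condensed, the terminal portion of the steps above looks like this (assuming the
repo was unzipped into Documents):

```shell
# One-time setup: build the conda environment from environment.yaml
cd Documents/stable-diffusion-main
conda env create -f environment.yaml

# Every later session: activate the environment and launch the Gradio GUI
conda activate ldm
pip install gradio                     # first run only, or when updating
python optimizedSD/txt2img_gradio.py
# Gradio prints a local address such as http://127.0.0.1:7860 -- open it in a browser
```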
2. (Non-GUI Version) Local Install of Stable Diffusion for Windows
Visit https://huggingface.co/ and create an account
Visit https://huggingface.co/CompVis/stable-diffusion-v-1-4-original, scroll down
and select "Authorize"
Download the checkpoint: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt
Download Stable Diffusion:
https://github.com/basujindal/stable-diffusion/archive/refs/heads/main.zip
Unzip the stable-diffusion-main.zip file to your preferred location, then go to
the stable-diffusion-main/models/ldm folder and make a new folder inside called
stable-diffusion-v1
Rename the downloaded sd-v1-4.ckpt to model.ckpt and move the file into the stable-
diffusion-v1 folder
Go back to the root of the stable-diffusion-main folder and open
environment.yaml in Notepad
Scroll down to dependencies: and add the line - git so it looks like:
dependencies:
- git
- python=3.8.5
- pip=20.3
Download Miniconda from here: https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe
Run Miniconda3-latest-Windows-x86_64.exe and install it
Open Anaconda Prompt (miniconda3)
Type cd followed by the path to the stable-diffusion-main folder; for example,
if you saved it in Documents you would type cd Documents/stable-diffusion-main
Run the command conda env create -f environment.yaml (you only need to do this
the first time; otherwise skip it)
Wait for it to process
Run conda activate ldm
Now you can create images using python scripts/txt2img.py --prompt "insert
prompt"!
NOTE: If you are receiving CUDA out of memory errors, use python
optimizedSD/optimized_txt2img.py instead of scripts/txt2img.py!
Your images are saved to stable-diffusion-main/outputs/txt2img-samples/<prompt
name> by default; you can change this with --outdir directory_name
3 images are created by default (5 for optimizedSD). If you would like fewer,
use --n_samples x
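
The whole non-GUI workflow condenses to a few commands (again assuming the repo
lives in Documents):

```shell
cd Documents/stable-diffusion-main
conda env create -f environment.yaml   # first time only
conda activate ldm

# Standard script
python scripts/txt2img.py --prompt "a red sports car in the rain"

# Low-VRAM alternative if you hit CUDA out-of-memory errors
python optimizedSD/optimized_txt2img.py --prompt "a red sports car in the rain"
```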
3. Img2img Guide
Complete setup for (Gradio GUI Version) Local Install of Stable Diffusion for
Windows above
Open Anaconda Prompt (miniconda3) and type cd followed by the path to the
stable-diffusion-main folder; for example, if you saved it in Documents you
would type cd Documents/stable-diffusion-main
Run conda activate ldm and then python optimizedSD/img2img_gradio.py
Enter the local address shown in the command window (it will look like
http://127.0.0.1:7860) into your web browser's address bar
Select an image to upload and then enter your details on the page for generation!
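
As with txt2img, the GUI launch boils down to a short command sequence:

```shell
cd Documents/stable-diffusion-main
conda activate ldm
python optimizedSD/img2img_gradio.py
# Open the printed http://127.0.0.1:... address, upload an image, and generate
```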
4. Prompt Weighting
While using either the Gradio GUI or manual prompting, you can use prompt
weighting to shift emphasis between parts of your prompt.
For example, if your prompt is broken-down car, rusted, red paint, you can
instead write broken-down car, rusted:0.25 red paint:0.75, which increases the
emphasis on red paint in the image and reduces the emphasis on rusted.

Another example is the prompt chicken:0.75 snake:0.25 mixed animal, which pushes
the result toward looking more like a chicken and less like a snake.
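On the command line, a weighted prompt is passed like any other prompt, with
quotes keeping it as a single argument (this assumes your copy of the scripts
supports the colon-weight syntax described above; check the repo README if in
doubt):

```shell
conda activate ldm
python optimizedSD/optimized_txt2img.py --prompt "broken-down car, rusted:0.25 red paint:0.75"
python optimizedSD/optimized_txt2img.py --prompt "chicken:0.75 snake:0.25 mixed animal"
```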

5. Prompt Modifiers
txt2img:
--prompt - The main and first one that you use to generate images with
--outdir - Specify the folder you wish to have your images saved to
--skip_grid - Saves the output as individual images instead of a grid
--ddim_steps - Specifies the number of processing steps used; the higher the
number, the more passes it spends rendering. Higher steps DO NOT necessarily
mean a better image, and processing time scales roughly linearly with the step
count (Default: 50)
--plms - Use PLMS sampling
--laion400m - Use the LAION400M model during creation
--n_samples - How many images should be created in one go (Default: 3, 5 for
optimizedSD)
--n_iter - How many times the batch of --n_samples images should be generated
--H - Specify the image height, multiples of 64. Warning: Higher values drastically
increase compute and VRAM usage (Default: 512)
--W - Specify the image width, multiples of 64. Warning: Higher values drastically
increase compute and VRAM usage (Default: 512)
--C - Latent channels used (Default: 4)
--scale - How closely the image should match the given prompt. Lower numbers
stray further from the prompt and higher numbers follow it more closely.
Recommended to stay at the default, or up to 15-20 (Default: 7.5)
--seed - Seed used during image generation
--precision - [full, autocast]
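A command combining several of the modifiers above might look like this (the
prompt and values are only illustrative):

```shell
python scripts/txt2img.py \
  --prompt "a lighthouse at sunset" \
  --ddim_steps 50 \
  --n_samples 1 \
  --H 512 --W 512 \
  --scale 7.5 \
  --seed 42 \
  --skip_grid \
  --outdir outputs/lighthouse
```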
img2img:
prompt - Description of what you want the new image to be based on
strength - How strongly the prompt should affect the image, on a scale of 0.0 to
1.0 (0.0 is basically the input image, 1.0 basically ignores the input image,
and around 0.5-0.75 is a middle ground)
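For command-line use, the optimized repo also includes an img2img script; a
hypothetical invocation using the two fields above might look like the following
(the flag names are assumptions, so check python
optimizedSD/optimized_img2img.py --help to confirm them):

```shell
python optimizedSD/optimized_img2img.py \
  --prompt "the same scene as an oil painting" \
  --init-img inputs/photo.png \
  --strength 0.6
```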

6. Common Errors
If you are running out of memory and you have a sufficient GPU, use --n_samples
1 to render only one image per batch, and keep the standard 512 width/height.
If you receive any "No module named 'cv2'", 'omegaconf', etc. errors, try pip
install for the missing module (for example, pip install opencv-python for cv2).
If that doesn't work, your installation may be broken and you should start over.
If you're running into a lot of errors trying to get Stable Diffusion to work on
your computer, chances are the environment needs to be cleared out and recreated
to start fresh. First, type conda activate into your console to switch back to
the base environment.
Next, you'll want to type conda info --envs to verify the name of the environment
you have installed.
After verifying the name of the environment, type conda remove --name myenv
--all, replacing myenv with the name of the environment you're uninstalling.
Enter yes when prompted to let it finish the removal process.
Enter the cd path command, replacing the word path with the path to your Stable
Diffusion folder where environment.yaml is located.
Follow installation instructions again, starting with conda env create -f
environment.yaml.
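
The full reset procedure from this section, as one command sequence (assuming
the environment is named ldm and the repo is in Documents):

```shell
conda activate                        # drop back to the base environment
conda info --envs                     # confirm the environment name (usually ldm)
conda remove --name ldm --all         # answer yes when prompted
cd Documents/stable-diffusion-main    # wherever environment.yaml lives
conda env create -f environment.yaml  # recreate from scratch
```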
Author: Kevi
