How to run Stable Diffusion locally with a GUI on Windows

You can install Stable Diffusion locally on your PC, but the typical process involves a lot of command line work to install and use. Lucky for us, the Stable Diffusion community has solved this problem. Here’s how to install a version of Stable Diffusion that runs locally with a GUI!

What is Stable Diffusion?

Stable Diffusion is an AI model that can generate images from text prompts, or modify existing images with a text prompt, much like MidJourney or DALL-E 2. It was first released in August 2022 by Stability.ai. It understands thousands of different words and can be used to create almost any image your imagination can conjure up, in almost any style.

However, there are two important differences that set Stable Diffusion apart from most other popular AI image generators:

  • It can be run locally on your PC
  • It’s an open-source project

The last point here is really important. Traditionally, Stable Diffusion is installed and run through a command line interface. That works, but it can be clumsy and unintuitive, and it presents a significant barrier to entry for people who would otherwise be interested. Since it’s an open-source project, though, the community quickly created a user interface for it and began adding improvements of their own, including optimizations to minimize video memory (VRAM) usage and built-in upscaling and masking.

What do you need to run this version of Stable Diffusion?

This version of Stable Diffusion is a fork of the main repository created and maintained by Stability.ai. It has a graphical user interface (GUI), which makes it easier to use than the standard Stable Diffusion (which only has a command line interface), plus an installer that handles most of the setup automatically.

Warning. As always, be careful with third-party software forks that you find on GitHub. We’ve been using this one for a while with no problems, as have thousands of others, so we’re inclined to say it’s safe. Fortunately, the code and changes here are small compared to those in some forks of open source projects.

This fork also contains various optimizations that should allow it to run on PCs with less video memory (VRAM), built-in upscaling and face restoration using GFPGAN, ESRGAN, RealESRGAN, and CodeFormer, and masking. Masking is a huge deal: it allows you to selectively apply AI image generation to certain parts of an image without distorting other parts, a process commonly referred to as inpainting.

How to install Stable Diffusion with a GUI

The installation process has been greatly simplified, but you still need to complete a few manual steps before you can use the installer.

Install Python first

The first thing you need to do is install Python 3.10.6, the version recommended by the author of the repository. Follow this link, scroll down the page, and click “Windows installer (64-bit)”.

Click the executable you downloaded and follow the prompts. If you already have Python installed (and you probably do), just click “Upgrade”. Otherwise, go along with the recommended prompts.

Note. Make sure you add Python 3.10.6 to your PATH if you can.
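If you want to double-check that the install worked, open Command Prompt and ask Python for its version. A quick sketch (the patch version in the output may differ on your machine):

    rem Confirms Python is installed and on your PATH
    python --version
    rem Expected output, give or take the patch version: Python 3.10.6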

Install Git and download the GitHub repository

You need to download and install Git on Windows before the Stable Diffusion installer can run. Just download the 64-bit Git executable, run it, and use the recommended settings unless you have something specific in mind.

Next, you need to download the files from the GitHub repository. Click the green “Code” button, then click “Download ZIP” at the bottom of the menu.

Open the ZIP file in File Explorer or your preferred archive program, then extract the contents anywhere you like. Just keep in mind that you’ll need to navigate to this folder to run Stable Diffusion. In this example, it’s extracted to the C:\ directory, but that isn’t required.
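Alternatively, since you just installed Git, you can clone the repository instead of downloading the ZIP. A sketch, assuming the fork in question is AUTOMATIC1111’s stable-diffusion-webui (the “stable-diffusion-webui-master” folder name suggests it, but verify the URL against the repository you’re actually following); note that cloning produces a folder named “stable-diffusion-webui” rather than “stable-diffusion-webui-master”:

    rem From Command Prompt, in the directory where you want the files
    cd C:\
    git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

Cloning has the side benefit that later updates are just a “git pull” inside that folder instead of a fresh ZIP download.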

Drag the "stable-diffusion-webui-master" folder wherever you want.

Note. Make sure you don’t accidentally drag “stable-diffusion-webui-master” onto another folder rather than into empty space; if you do, it will end up inside that folder instead of in the parent folder you intended.

Download all checkpoints

You’ll need a few checkpoints for this. The first, and most important, is the Stable Diffusion checkpoint. You need to create an account to download it, but it doesn’t take much: all they ask for is a name and email address, and you’re done.

Note. The checkpoint download is several gigabytes. Don’t expect it to finish instantly.

Copy and paste “sd-v1-4.ckpt” into the “stable-diffusion-webui-master” folder from the previous section, then right-click “sd-v1-4.ckpt” and click “Rename”. Type “model.ckpt” into the text field and press Enter. Be very sure it’s “model.ckpt” – otherwise it won’t work.

Note. In Windows 11, “Rename” appears as an icon in the right-click menu.
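If you’d rather rename the file from a terminal than in File Explorer, here’s a minimal sketch using Command Prompt (this assumes you extracted to C:\ as in this example; adjust the path if yours differs):

    rem Rename the Stable Diffusion checkpoint so the WebUI can find it
    cd C:\stable-diffusion-webui-master
    ren sd-v1-4.ckpt model.ckpt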

You also need to download the GFPGAN checkpoint. The author of the repository we’re using calls for the GFPGAN v1.3 checkpoint, but you can try v1.4 if you want. Scroll down the page and click “V1.3 model”.

Place this “GFPGANv1.3.pth” file in the “stable-diffusion-webui-master” folder as you did with the “sd-v1-4.ckpt” file, but don’t rename it. The “stable-diffusion-webui-master” folder should now contain the following files:

This is what the folder should look like after you've renamed the Stable Diffusion model and added the GFPGAN model.
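To recap, the files that matter for the next steps look roughly like this inside “stable-diffusion-webui-master” (the repository’s other folders and scripts will be there too):

    model.ckpt        (the renamed Stable Diffusion checkpoint)
    GFPGANv1.3.pth    (the GFPGAN face restoration model)
    webui-user.bat    (the launcher you’ll run in a moment)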

You can also download as many ESRGAN checkpoints as you like. They usually come as ZIP files. Once downloaded, open the ZIP file and then extract the “.pth” file to the “ESRGAN” folder. Here is an example:

Where the ESRGAN models go.

ESRGAN models tend to provide more specific functionality, so pick whichever ones appeal to you.

Now just double-click the “webui-user.bat” file in the main “stable-diffusion-webui-master” folder. A console window will appear and begin downloading all the other important files, building a Python environment, and setting up the web user interface. It will look like this:

Note. Expect the first launch to take at least a few minutes; it has to download a bunch of things from the Internet. If it seems to hang for an unreasonably long time at one step, try selecting the console window and pressing the Enter key.

The WebUI client downloads and installs all resources.

When this is done, the console will display:

Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`

How to generate images using Stable Diffusion with a GUI

Okay, you’ve installed the web-based variant of Stable Diffusion, and your console says it is “Running on local URL: http://127.0.0.1:7860”.

Note: What exactly does that mean? What’s going on? 127.0.0.1 is the localhost address: the IP address that your computer gives itself. This version of Stable Diffusion runs a server on your local PC that’s accessible via that address, but only if you connect on the correct port: 7860.
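If you’d like to confirm the server is actually listening before opening a browser, curl ships with modern builds of Windows 10 and 11; a quick sketch:

    rem Prints the WebUI's HTML if the server is up on port 7860
    curl http://127.0.0.1:7860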

Open a browser, type “127.0.0.1:7860” or “localhost:7860” into the address bar and press Enter. You will see this in the txt2img tab:

The first page of the WebUI client in Google Chrome.

If you’ve used Stable Diffusion before, these settings will be familiar to you, but here’s a quick overview of what the most important options mean:

  • Prompt: A description of what you want to create.
  • Roll button: Applies a random art style to your prompt.
  • Sampling steps: The number of times the image will be refined before you get output. Generally, more is better, but there are diminishing returns.
  • Sampling Method: The underlying math that determines how a sample is processed. You can use any of these, but euler_a and PLMS seem to be the most popular options. You can read more about PLMS in this article.
  • Restore Faces: Uses GFPGAN to try and fix strange or distorted faces.
  • Batch Count: The number of images to be generated.
  • Batch Size: The number of images generated per batch. Leave this at 1 unless you have an enormous amount of VRAM.
  • CFG Scale: How closely Stable Diffusion will follow your prompt. Larger numbers mean it follows the prompt very closely, while smaller numbers give it more creative freedom.
  • Width: The width of the image you want to create.
  • Height: The height of the image you want to create.
  • Seed: The number that provides the initial input for the random number generator. Leave it at -1 to randomly generate a new seed.

Let’s create five images based on the prompt “mountain cow in a magical forest, photograph on 35mm film, sharpness” and see what we get using the PLMS sampler, 50 sampling steps, and a CFG scale of 5.

Tip: You can always click the “Interrupt” button to stop generation if the job is taking too long.

The output window will look like this:

Output for the mountain cow prompt: five mountain cows, two of them black and white.

Note. Your images will be different.

The top middle image is the one we’ll use for masking a bit later. There’s no particular reason for that choice other than personal preference. Pick any image you like.

Charming mountain cow in the forest.

Select it and click Send to Inpaint.

How to mask the images you create for inpainting

Inpainting is a fantastic feature. Normally, Stable Diffusion is used to create entire images from a prompt, but inpainting lets you selectively generate (or regenerate) parts of an image. There are two critical options here: “Inpaint masked” and “Inpaint not masked”.

“Inpaint masked” uses the prompt to generate imagery within the area you select, while “Inpaint not masked” does the exact opposite: only the masked area is preserved.

First, let’s talk a little about “Inpaint masked”. Drag your mouse over the image while holding the left button and you’ll notice a white layer appearing on top of it. Draw the shape of the area you want to replace, and be sure to fill it in completely. You aren’t circling the area; you’re masking all of it.

Tip: If you’re just adding something to an existing image, it can help to align the masked area with the approximate shape you’re trying to create. Masking a triangular area when you want a circle, for example, is counterproductive.

Let’s take our mountain cow, for example, and put a chef’s hat on it. Mask out an area roughly shaped like a chef’s hat, and make sure the Batch Count is set to a value greater than 1; you’ll probably need several attempts to get a perfect result.

You should also select “Latent noise” rather than “Fill”, “Original”, or “Latent nothing”. It tends to give the best results when you want to generate an entirely new object in the scene.

Note: You’ll notice that the left edge of the hat removed part of the cow’s horn. That was caused by the “Mask blur” setting being too high. If you see that kind of thing in your images, try lowering the “Mask blur” value.

Mountain cow in a chef's hat.
Prompt: “chef’s hat”. Settings: Inpaint masked, Latent noise, CFG 9.5, Denoising strength 0.75, Sampling steps = 50, Sampling method = Euler_a.

Okay, maybe a chef’s hat isn’t the best choice for a mountain cow. This mountain cow gives off more of an early-20th-century vibe, so let’s give it a bowler hat instead.

Mountain cow in a bowler hat.
Prompt: “bowler hat”. Settings: Inpaint masked, Latent noise, CFG 9.5, Denoising strength 0.75, Sampling steps = 50, Sampling method = Euler_a.

How positively dapper.

Of course, you can also do the exact opposite with “Inpaint not masked”. It’s conceptually similar, except the areas you define are inverted: instead of marking the area you want to change, you mark the areas you want to keep. It’s often useful when you want to move a small object onto a different background.

How to fix “CUDA Out Of Memory” error

The larger the image you generate, the more video memory it requires, so the first thing to try is creating smaller images. Stable Diffusion produces decent, if noticeably different, images at 256×256.

If you’re itching to make larger images on a computer that has no trouble with 512×512 images, or you’re running into assorted “Out of Memory” errors, there are some configuration changes that should help.

Open “webui-user.bat” in Notepad or any other plain-text editor: right-click “webui-user.bat”, click “Edit”, and select Notepad. Find the line that reads set COMMANDLINE_ARGS=. That’s where you’ll put the commands that optimize Stable Diffusion.

If you just want to make huge images, or you’re running short of VRAM on a GTX 10XX-series GPU, try --opt-split-attention first. It will look like this:
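A sketch of roughly what the edited file looks like; the lines around COMMANDLINE_ARGS may differ slightly between versions of the fork, so only change that one line:

    @echo off

    set PYTHON=
    set GIT=
    set VENV_DIR=
    rem Add optimization flags after the equals sign, separated by spaces
    set COMMANDLINE_ARGS=--opt-split-attention

    call webui.bat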

Then click File > Save. Alternatively, you can press Ctrl+S on your keyboard.

If you are still getting memory errors, try adding --medvram to the command line argument list (COMMANDLINE_ARGS).

You can add --always-batch-cond-uncond to try to fix additional memory issues if the previous commands didn’t help. There’s also an alternative to --medvram that can reduce VRAM usage even further, --lowvram, but we can’t confirm whether it actually works.
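Since the flags are space-separated, they can be stacked on one line. A sketch assuming you ended up needing everything above (drop whichever flags you don’t):

    set COMMANDLINE_ARGS=--opt-split-attention --medvram --always-batch-cond-uncond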

Adding a user interface is an important step forward in making AI-based tools like this accessible to everyone. The possibilities are almost endless, and even a quick look at the online communities devoted to AI art will show you how powerful this technology is, even in its infancy. Of course, if you don’t have a gaming PC or don’t want to fuss with configuration, you can always use one of the online AI art generators. Just keep in mind that you can’t assume your prompts are private.
