ComfyUI is a powerful, modular GUI and backend for Stable Diffusion. It fully supports the latest Stable Diffusion models, including SD1.x, SD2.x, SDXL, Stable Video Diffusion, and Stable Cascade, through an intuitive visual workflow builder.
With ComfyUI, you can chain together operations like upscaling, inpainting, and model mixing, all within a single UI. The node-based workflow builder makes it easy to experiment with different generative pipelines and achieve state-of-the-art results. In this blog post, we will walk through installing ComfyUI in just a few minutes so you can have Stable Diffusion up and running.
GPU: A decent NVIDIA GPU with at least 8GB of VRAM (e.g., RTX 3060 Ti or higher).
CPU: A modern processor (e.g., Intel Core i5 / Xeon E5 or AMD Ryzen 5, or better).
RAM: 16GB or more.
Operating System: Windows 10/11 or Linux.
Storage: Sufficient free disk space for models and generated images (individual Stable Diffusion checkpoints typically run 2–7 GB).
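If you want to confirm that your GPU and VRAM meet these requirements before going any further, a quick sanity check from any Python environment with PyTorch installed looks like this (it is optional and not part of the install itself; the portable build described below bundles its own PyTorch):

import torch

# Report the first CUDA device and its VRAM, or warn that ComfyUI will fall back to CPU mode.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GB")
else:
    print("No CUDA-capable GPU detected; ComfyUI will run on the CPU, which is much slower.")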
The easiest way to install ComfyUI on Windows is to use the standalone installer available on the releases page. The standalone installer comes bundled with common dependencies like PyTorch and Hugging Face Transformers so you don’t have to install them separately. It’s an all-in-one package that lets you get up and running with ComfyUI quickly on Windows without any complex setup. Just download, extract, add models and run!
Step 1: Download the standalone version of ComfyUI from this direct download link.
Step 2: After downloading the latest comfyui-windows.zip file, extract it with a tool like 7-Zip or WinRAR.
Step 3: You need a checkpoint model to start using ComfyUI. Download a checkpoint model from here or Hugging Face (or fetch one from the command line, as sketched after this step) and put it in the folder:
ComfyUI_windows_portable\ComfyUI\models\checkpoints
Note: You can also share models with other Stable Diffusion GUIs such as AUTOMATIC1111.
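If you would rather download a checkpoint from the command line than through the browser, a minimal sketch using the huggingface_hub package looks like the following. The repo and filename are just an illustrative Stable Diffusion 1.5 example; substitute whichever model you actually want, and adjust the path if you extracted ComfyUI elsewhere.

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Example repo/filename only; replace with the checkpoint you actually want to use.
hf_hub_download(
    repo_id="runwayml/stable-diffusion-v1-5",
    filename="v1-5-pruned-emaonly.safetensors",
    local_dir=r"ComfyUI_windows_portable\ComfyUI\models\checkpoints",
)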
Step 4: Now simply run run_nvidia_gpu.bat (recommended if you have an NVIDIA GPU) or run_cpu.bat. A command window will open, start the server, and print the URL http://127.0.0.1:8188/. ComfyUI should open automatically in your browser; if it doesn't, open that URL yourself.
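If nothing opens, you can confirm the server is actually up before troubleshooting further. ComfyUI exposes a small HTTP API on the same port; a quick check against the /system_stats endpoint (assuming the default address above) is:

import json
import urllib.request

# Query the local ComfyUI server; adjust the address if you changed the port.
with urllib.request.urlopen("http://127.0.0.1:8188/system_stats") as resp:
    stats = json.load(resp)

# Prints OS/Python/torch details and the GPU devices ComfyUI detected.
print(json.dumps(stats, indent=2))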
The default graph that loads is designed to run any Stable Diffusion model. You can select the model via the Load Checkpoint node, which lists all the checkpoints you downloaded and placed in the checkpoints folder. Simply click Queue Prompt to generate images. The output images are saved to the ComfyUI/output folder.
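If you want to pick the results up from a script rather than the browser, the generated files are ordinary PNGs on disk. A small sketch, assuming the portable install path used above:

from pathlib import Path

# Default output location of the portable build; adjust if you installed elsewhere.
out_dir = Path(r"ComfyUI_windows_portable\ComfyUI\output")
for image_path in sorted(out_dir.glob("*.png")):
    print(image_path.name)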
After starting ComfyUI for the very first time, you should see the default text-to-image workflow. It should look like the screenshot above. If it doesn't, click Load Default on the right panel to restore the default text-to-image workflow.
Step 1. Select a model. First, select a Stable Diffusion checkpoint model in the Load Checkpoint node. Click the model name to show a list of available models.
Step 2. Enter a prompt and a negative prompt. You should see two nodes with the label CLIP Text Encode (Prompt). Enter your prompt in the top one and your negative prompt in the bottom one.
Step 3. Generate an image. Click Queue Prompt to run the workflow. After a short wait, you should see the first image generated.
Step 4. Save an image. Hover over the picture and right-click to bring up the image menu, then select Save Image to download the picture or Open Image to open it in the browser.
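Everything above goes through the UI, but the same Queue Prompt action can also be driven over ComfyUI's HTTP API, which is handy for batch jobs. Below is a minimal sketch; it assumes you have exported your workflow with the Save (API Format) option (shown after enabling dev mode options in the settings) to a file named workflow_api.json, which is just an example filename.

import json
import urllib.request

# Load a workflow exported from the UI via "Save (API Format)".
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# POST it to the local server; this queues the same job as clicking Queue Prompt.
payload = json.dumps({"prompt": workflow}).encode("utf-8")
request = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as resp:
    print(resp.read().decode())  # the response includes the prompt_id assigned to the queued job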