Stable Diffusion guide. Using embeddings in AUTOMATIC1111 is easy.
All of Stable Diffusion's upscaling tools are located in the "Extras" tab, so click it to open the upscaling menu. You can find the documentation and the original GitHub repo from AUTOMATIC1111 here. It's a really easy way to get started, so as your first step on NightCafe, go ahead and enter a text prompt (or click "Random" for some inspiration), choose one of the 3 styles, and start generating. Thankfully, by fine-tuning the base Stable Diffusion model using captioned images, the base model's ability to generate better-looking pictures in her style is greatly improved. Stable Diffusion is an AI model that converts text descriptions to realistic images. Step 2: Connect to one of the supported APIs, including KoboldAI, KoboldCpp, cloud LLM APIs, OpenAI (ChatGPT), and more. And those are the basic Stable Diffusion settings! I hope this guide has been helpful for you. 💡 Note: For now, we only allow DreamBooth fine-tuning of the SDXL UNet via LoRA. Make sure you are in the stable-diffusion-main folder with stuff in it. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. It's designed for designers, artists, and creatives who need quick and easy image creation. The subject's images are fitted alongside images from the subject's class, which are first generated using the same Stable Diffusion model. What kind of images a model generates depends on the training images. First, download an embedding file from Civitai or the Concept Library. Aitrepreneur - Step-by-Step Videos on DreamBooth and Image Creation. We covered 3 popular methods to do that, focused on images with a subject in a background. DreamBooth adjusts the weights of the model and creates a new checkpoint. Ideal for boosting creativity, it simplifies content creation for artists and designers. As usual, copy the picture back to Krita. This will let you run the model from your PC.
Img2Img is a technique that generates new images from an input image and a corresponding text prompt. Before we begin, it's always good practice to ensure that your system is up to date with the latest package versions. There are a few popular open-source repos that provide an easy-to-use web interface for typing in prompts, managing the settings, and seeing the images. Stable Diffusion 3 is an advanced AI image generator that turns text prompts into detailed, high-quality images. It is created by Stability AI. This is a beginner/intermediate guide to getting cool images, covering models with their own distinct styles and general-purpose capabilities up to the SDXL model, an upgrade boasting higher resolutions and quality. You can construct an image generation workflow by chaining different blocks (called nodes) together. These models are essentially de-noising models that have learned to take a noisy input image and clean it up. Stable Diffusion Web UI is a browser interface for Stable Diffusion based on the Gradio library. It has no token limit for prompts (the original Stable Diffusion lets you use up to 75 tokens), DeepDanbooru integration that creates danbooru-style tags for anime prompts, and xformers support for a major speed increase on select cards (add --xformers to the command-line args). You will learn how to train your own model and how to use ControlNet. Stable Diffusion models, or checkpoint models, are pre-trained Stable Diffusion weights for generating a particular style of images. Its popularity has surged due to its striking results and user-friendly interface. The Guidance Scale, or Classifier-Free Guidance (CFG) scale, influences the degree to which Stable Diffusion adheres to the provided text prompt during image generation.
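As a rough numeric sketch of what the CFG scale does (a toy illustration with plain floats standing in for latent tensors, not any library's actual code):

```python
# Toy numeric sketch of classifier-free guidance (CFG). Real pipelines apply
# this per sampling step to latent tensors; plain floats stand in here.

def cfg_combine(uncond_pred, cond_pred, guidance_scale):
    """Push the noise prediction away from 'unconditional' toward 'prompted'."""
    return uncond_pred + guidance_scale * (cond_pred - uncond_pred)

# A scale of 1.0 just reproduces the prompt-conditioned prediction;
# a typical scale like 7.5 extrapolates well beyond it, enforcing the prompt.
mild = cfg_combine(0.2, 0.8, 1.0)
strict = cfg_combine(0.2, 0.8, 7.5)
```

This is why very high CFG values follow the prompt strictly but can reduce variety: the prediction is pushed further and further from the unconditional one.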
The noise predictor then estimates the noise of the image. LoRAs (Low-Rank Adaptations) are smaller files (anywhere from 1 MB to 200 MB) that you combine with an existing Stable Diffusion checkpoint model to introduce new concepts, so that your model can generate them. A very basic guide to get Stable Diffusion web UI up and running on Windows 10/11 with an NVIDIA GPU. In the intricate world of Stable Diffusion, the ability to contextualize prompts effectively sets apart the ordinary from the extraordinary. Step 3: Set outpainting parameters. Select and download the desired model. model_id = "runwayml/stable-diffusion-v1-5". It is the most popular model because it has served as the basis for many other AI models. You will see a Motion tab on the bottom half of the page. Explore the key components, types, formats, and features of Stable Diffusion and its variants. Folders and source model. Source model: sd_xl_base_1.0_0.9vae.safetensors. Stable Diffusion is entirely free and Stable Diffusion is cool! Build Stable Diffusion "from scratch". MAT outpainting. It's good for creating fantasy, anime and semi-realistic images. I just released a video course about Stable Diffusion on the freeCodeCamp.org YouTube channel. You can also type in a specific seed number into this field. Stable Diffusion is a free AI model that turns text into images. We're going to create a folder named "stable-diffusion" using the command line. This guide will walk you through everything you need to know. It's significantly better than previous Stable Diffusion models at realism. Navigate to the Img2img page. Image folder: <path to your image folder>. Output folder: <path to the output folder>. Go to the stable-diffusion-main folder wherever you downloaded it, using "cd" to jump between folders. First of all you want to select your Stable Diffusion checkpoint, also known as a model. I'm still a beginner, so I would like to start getting into it a bit more.
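The predict-then-subtract denoising loop can be sketched in miniature. This is a toy, one-dimensional illustration: the real noise predictor is a large U-Net, and the `denoise` helper, the `target`, and the 0.3 step size here are invented for the example:

```python
import random

# Toy sketch of iterative denoising: the "noise predictor" here simply reports
# the distance from a fixed target, and each step subtracts part of that noise.

def denoise(sample, target, steps=20):
    for _ in range(steps):
        predicted_noise = sample - target        # "estimate the noise"
        sample = sample - 0.3 * predicted_noise  # subtract a fraction of it
    return sample

random.seed(0)
noisy = random.gauss(0.0, 10.0)      # start from pure noise
clean = denoise(noisy, target=1.0)   # converges toward the target step by step
```

The real model does the same thing conceptually: many small corrections, each removing a portion of the estimated noise, until a clean image remains.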
If you download the file from the concept library, the embedding is the file named learned_embedds. 5 model. from_pretrained(model_id, use_safetensors= True) Feb 13, 2024 · SD Upscale is a script that comes with AUTOMATIC1111 that performs upscaling with an upscaler followed by an image-to-image to enhance details. A. This component is the secret sauce of Stable Diffusion. Install Stable Diffusion on Ubuntu 22. They are all generated from simple prompts designed to show the effect of certain keywords. 0_0. They are used to generate images, most commonly as text to image models: you give it a text prompt, and it returns an image. The intricate undertaking of structuring, restoring, or reconstructing digital images to achieve desired outcomes cannot be undermined. Hello, Im a 3d charactrer artist, and recently started learning stable diffusion. 04 LTS Jammy Jellyfish. 1 model with which you can generate 768×768 images. May 16, 2024 · Once you’ve uploaded your image to the img2img tab we need to select a checkpoint and make a few changes to the settings. Download the sd. Step3: Install Stable Diffusion WebUI or APIs. We will inpaint both the right arm and the face at the same time. Enter the following commands in the terminal, followed by the enter key, to install Automatic1111 WebUI. Dec 26, 2023 · Step 2: Select an inpainting model. This is the interface for users to operate the generations. Failure example of Stable Diffusion outpainting. Aug 4, 2023 · The Mechanics of Stable Diffusion. Some commonly used blocks are Loading a Checkpoint Model, entering a prompt, specifying a sampler, etc. It’s where a lot of the performance gain over previous models is achieved. 9): 0. Here's a step-by-step guide: Load your images: Import your input images into the Img2Img model, ensuring they're properly preprocessed and compatible with the model architecture. 
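The diffusers fragments scattered through this guide (the `model_id` assignment and the `from_pretrained` call) assemble into a short script along these lines. A sketch: it assumes the `diffusers` and `torch` packages are installed, and the helper names `txt2img_settings` and `generate` are ours, not part of the library:

```python
# Sketch assembling the scattered diffusers fragments into one script.

def txt2img_settings(steps=20, cfg_scale=7.0):
    """Common txt2img knobs for the pipeline call."""
    return {"num_inference_steps": steps, "guidance_scale": cfg_scale}

def generate(prompt, model_id="runwayml/stable-diffusion-v1-5"):
    """Load the pipeline and run one generation (downloads the model on first use)."""
    from diffusers import DiffusionPipeline
    pipeline = DiffusionPipeline.from_pretrained(model_id, use_safetensors=True)
    # pipeline.to("cuda")  # uncomment on a supported GPU
    return pipeline(prompt, **txt2img_settings()).images[0]

# Example (not run here, since it downloads several GB of weights):
# generate("a photograph of an astronaut riding a horse").save("astronaut.png")
```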
Tons of other people started Feb 11, 2024 · To use a VAE in AUTOMATIC1111 GUI, click the Settings tab on the left and click the VAE section. Press the big red Apply Settings button on top. Safetensor file, simply place it in the Lora folder within the stable-diffusion-webui/models directory. To produce an image, Stable Diffusion first generates a completely random image in the latent space. This tutorial walks you through how to generate faster and better with the DiffusionPipeline. Some people have been using it with a few of their photos to place themselves in fantastic situations, while others are using it to incorporate new styles. - GitHub - Guizmus/sd-training-intro: This is a guide that presents how Fine tuning Stable diffusion's models work. Image below was generated on a fine-tuned Stable Diffusion 1. 0. Mar 20, 2024 · Guidance Scale. Principle of Diffusion models (sampling, learning) Diffusion for Images – UNet architecture. What do you guys think is a good way to start learning stable diffusion, like a whole course (free if possible). 1. Install AUTOMATIC1111’s Stable Diffusion WebUI. Jun 21, 2023 · This step-by-step guide will walk you through the entire process, from understanding the basics of stable diffusion to setting up your workspace and running the diffusion process. This article covers the anatomy, process, techniques, and tips of prompts, with examples and a prompt generator. Center an image. With your system updated, the next step is to download Stable Diffusion. A guide to Stable Diffusion . Jan 30, 2024 · Stable Diffusion is a text-to-image model, powered by AI, that uses deep learning to generate high-quality images from text. Tags vs Sentence. Nov 2, 2022 · The image generator goes through two stages: 1- Image information creator. 1. py script shows how to implement the training procedure and adapt it for Stable Diffusion XL. 
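Earlier the guide notes that generation starts from a completely random image in latent space. For SD v1 that latent is much smaller than the output image: the VAE compresses by 8x per side into 4-channel latents, so a 512x512 image becomes a 4x64x64 latent. A small helper makes the arithmetic concrete:

```python
# SD v1's VAE works in a compressed latent space: 8x smaller per side,
# with 4 channels. The sampler denoises this latent, not the pixels.

def latent_shape(width, height, downscale=8, channels=4):
    return (channels, height // downscale, width // downscale)

# 512x512 pixels -> (4, 64, 64) latent
shape = latent_shape(512, 512)
```

This compression is why latent diffusion is fast enough to run on consumer GPUs: the model works on 64x64 latents rather than half-million-pixel images.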
Its code and model weights have been released publicly, and it can run on most consumer hardware equipped with a modest GPU with at least 8 GB VRAM. In the SD VAE dropdown menu, select the VAE file you want to use. Once downloaded, create a new folder in your Google Drive titled "Stable Diffusion. Use the paintbrush tool to create a mask. Or, if you've just generated an image you want to upscale, click "Send to Extras" and you'll be taken to there with the image in place for upscaling. Open up your browser, enter "127. Jul 6, 2024 · ComfyUI is a node-based GUI for Stable Diffusion. It uses a variant of the diffusion model called latent diffusion. November 28, 2023 by Morpheus Emad. 5. It can create images in variety of aspect ratios without any problems. Stable Diffusion gets its name from the fact that it belongs to a class of generative machine learning called diffusion models. While a basic encoder-decoder can generate images from text, the results tend to be low-quality and nonsensical. The train_dreambooth_lora_sdxl. This guide aims to elevate your skills to the top 10% of Stable Diffusion users. We'll talk about txt2img, img2img, The Stable Diffusion model was initially trained on images with a resolution of 512×512, so in specific cases (large images) it needs to “split” the images up, and that causes the duplication in the result. Trusted by 1,000,000+ users worldwide. 0-pre we will update it to the latest webui version in step 3. In the ever-evolving field of digital technology, the realm of image processing plays a significant role. Dec 22, 2023 · Stable Diffusion is an algorithm that gradually reduces noise in imagery to create something new, a process referred to as diffusion. A model won’t be able to generate a cat’s image if there’s never a cat in the training data. Jun 22, 2023 · In this guide, we will show how to generate novel images based on a text prompt using the KerasCV implementation of stability. 
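The node idea ComfyUI is built around can be illustrated with a toy sketch. This is NOT ComfyUI's actual API: each "node" is just a function over a shared state dict, and the node names and values are invented for the example:

```python
# Toy illustration of node-style chaining, the idea behind ComfyUI's interface.

def load_checkpoint(state):
    state["model"] = "v1-5-pruned.safetensors"  # hypothetical checkpoint name
    return state

def set_prompt(state):
    state["prompt"] = "a castle at sunset"
    return state

def choose_sampler(state):
    state["sampler"] = "euler_a"
    return state

def run_workflow(nodes):
    """Apply each node in turn; reordering the list rearranges the workflow."""
    state = {}
    for node in nodes:
        state = node(state)
    return state

result = run_workflow([load_checkpoint, set_prompt, choose_sampler])
```

Rearranging or swapping entries in the list is the textual analogue of rewiring nodes on the ComfyUI canvas.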
This is an entry-level guide for newcomers that also establishes most of the concepts of training in a single place. It got extremely popular very quickly. Step 4: Enable the outpainting script. With the Stable Diffusion 2.1 models downloaded, you can find and use them in your Stable Diffusion Web UI. It was initially trained by people from CompVis at Ludwig Maximilian University of Munich and released in August 2022. And even the prompt is better followed. Using embeddings in AUTOMATIC1111 is easy. conda activate Automatic1111_olive. Convert to landscape size. With each step, it puts another layer of "paint" down on that latent space, first with blurry blocks of color to define what goes where. Introduction. Keep reading to start creating. (Alternatively, use the Send to Img2img button to send the image to the img2img canvas.) Step 3: Running Stable Diffusion locally. The course explores options for users with less powerful equipment. This model allows for image variations and mixing operations as described in Hierarchical Text-Conditional Image Generation with CLIP Latents, and, thanks to its modularity, can be combined with other models such as KARLO. It provides a user-friendly way to interact with Stable Diffusion, an open-source text-to-image generation model. Step 1: In the AUTOMATIC1111 GUI, navigate to the Deforum page. We provide a reference script for sampling, but there also exists a diffusers integration, which we expect to see more active community development around. Diffusion models work by taking noisy inputs and iteratively denoising them into cleaner outputs: start with a noise image. If you've ever felt frustrated that your generated images don't match the quality you see online, you're in the right place. Concept Art in 5 Minutes.
Aug 14, 2023 · Learn how to use Stable Diffusion to create art and images in this full course. " Step 5: Return to the Google Colab site and locate the "File" icon on the left-side panel. However, it also limits creative liberty, potentially yielding less diverse May 28, 2024 · Stable Diffusion is a text-to-image generative AI model, similar to DALL·E, Midjourney and NovelAI. Apr 29, 2024 · Welcome to the ultimate guide on Stable Diffusion prompts. Let words modulate diffusion – Conditional Diffusion, Cross Attention. Adding Characters into an Environment. This course focuses on teaching you how to use Nov 15, 2023 · You can verify its uselessness by putting it in the negative prompt. Open a terminal and run the following commands: sudo apt update sudo apt upgrade. Dec 5, 2023 · Stable Diffusion is a text-to-image model powered by AI that can create images from text and in this guide, I'll cover all the basics. Feb 22, 2024 · The Stable Diffusion 3 suite of models currently ranges from 800M to 8B parameters. 5] Since, I am using 20 sampling steps, what this means is using the as the negative prompt in steps 1 – 10, and (ear:1. Begin by loading the runwayml/stable-diffusion-v1-5 model: from diffusers import DiffusionPipeline. Generate AI image for free. NOTE: There are many versions of Stable Diffusion. This component runs for multiple steps to generate image information. Stable Diffusion is a powerful, open-source text-to-image generation model. What is Img2Img in Stable Diffusion. Setting a value higher than that can change the output image drastically so it’s a wise choice to stay between these values. Stable Diffusion operates on the basis of a deep learning model that crafts images from text descriptions. Mar 10, 2024 · Now that you have the Stable Diffusion 2. Andrew. A higher value on the Guidance Scale indicates stricter adherence to the input text. 
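The prompt-editing example above, [the:(ear:1.9):0.5] over 20 sampling steps, switches the prompt partway through sampling. The scheduling rule can be sketched as follows (an illustrative reimplementation, not the WebUI's actual code):

```python
# Sketch of how AUTOMATIC1111-style prompt editing [from:to:when] schedules
# the switch (illustrative reimplementation, not the WebUI's actual code).

def prompt_for_step(step, from_text, to_text, when, total_steps):
    """step is 1-indexed; when < 1 is a fraction of total_steps, else a step count."""
    switch_after = when * total_steps if when < 1 else when
    return from_text if step <= switch_after else to_text

# [the:(ear:1.9):0.5] with 20 sampling steps: "the" for steps 1-10,
# then "(ear:1.9)" for steps 11-20.
assert prompt_for_step(10, "the", "(ear:1.9)", 0.5, 20) == "the"
assert prompt_for_step(11, "the", "(ear:1.9)", 0.5, 20) == "(ear:1.9)"
```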
It uses "models" which function like the brain of the AI, and can make almost anything, given that someone has trained it to do it. Stable Diffusion is a latent diffusion model conditioned on the (non-pooled) text embeddings of a CLIP ViT-L/14 text encoder. This part of the stable diffusion guide delves into the nuanced approach of incorporating rich details into prompts without leading to confusion, a pivotal aspect of the prompt engineering process. That will save a webpage that it links to. I find it very useful and fun to work with. Feb 22, 2024 · Introduction. Stable Diffusion Web UI ( SDUI) is a user-friendly browser interface for the powerful Generative AI model known as Stable Diffusion. This compendium, which distills insights gleaned from a multitude of experiments and This is an entry level guide for newcomers, but also establishes most of the concepts of training in a single place. Mar 4, 2024 · The array of fine-tuned Stable Diffusion models is abundant and ever-growing. Oct 25, 2022 · Training approach. Otherwise, you can drag-and-drop your image into the Extras Mar 2, 2023 · In this post, you will see images with diverse styles generated with Stable Diffusion 1. Using Stable Diffusion out of the box won’t get you the results you need; you’ll need to fine tune the model to match your use case. Jun 21, 2023 · Running the Diffusion Process. The predicted noise is subtracted from the image. In Automatic1111, click on the Select Checkpoint dropdown at the top and select the v2-1_768-ema-pruned. Step 1: Open the Terminal App (Mac) or the PowerShell App (Windows). Curate an extensive collection spanning various scenarios, enriching your model with the versatility to create diverse Stable diffusion guide. pipeline = DiffusionPipeline. Step 2: Navigate to the keyframes tab. safetensors (you can also use stable-diffusion-xl-base-1. These new concepts generally fall under 1 of 2 categories: subjects or styles. Authors. 
Here I will be using the revAnimated model. Quick summary. Outpainting complex scenes. (I made that mistake, lol.) Stable Diffusion XL (SDXL) is an open-source diffusion model, the long-awaited upgrade to Stable Diffusion v2. Stable Diffusion is a collection of open-source models by Stability AI. The biggest uses are anime art, photorealism, and NSFW content. Step 2: Download Stable Diffusion. This will preserve your settings between reloads. In the Automatic1111 WebUI for Stable Diffusion, go to Settings > Optimization and set a value for Token Merging. Create the working folder from the command line:

cd C:/
mkdir stable-diffusion
cd stable-diffusion

There are a lot of places where you can find a custom model. Stable Diffusion operates in a similar fashion: when you give the AI a prompt to generate, it starts with nothing but a canvas of latent space. You will get the same image as if you didn't put anything. An example of deriving images from noise using diffusion. Stable Diffusion is an AI image generator developed by Stability AI, who strive to make AI image generation open source and ethically sound. With your images prepared and settings configured, it's time to run the stable diffusion process using Img2Img. In this article we're going to optimize Stable Diffusion XL, both to use the least amount of memory possible and to obtain maximum performance and generate images faster. Diffusion Model. Understanding prompts – word as vectors, CLIP. Mastering Stable Diffusion in Image Processing: A Guide. Background on diffusion models. Step 2: Navigate to the ControlNet extension's folder.
Embarking on a journey with Stable Diffusion prompts necessitates an exploratory approach towards crafting veritably articulate and distinctly specified prompts. Upload an Image. To aid your selection, we present a list of versatile models, from the widely celebrated Stable diffusion v1. Stable Diffusion is a very powerful AI image generation software you can run on your own home computer. Stable Diffusion Prompt: A Definitive Guide. CDCruz's Stable Diffusion Guide. Step 2. Enter the following command in the terminal: This command creates a directory named stable-diffusion-webui in your current directory. Name. DreamBooth is a method to personalize text2image models like stable diffusion given just a few (3~5) images of a subject. Fix details with inpainting. This is done by cloning the Stable Diffusion repository from GitHub. Its mainstay is a diffusion process, where an image is morphed from random noise into a coherent image via a series of steps. Feb 17, 2024 · Video generation with Stable Diffusion is improving at unprecedented speed. You should see the message. conda create --name Automatic1111_olive python=3. We would like to show you a description here but the site won’t allow us. Now let’s choose the “Bezier Curve Selection Tool”: With this, let’s make a selection over the right eye, copy and paste it to a new layer, and Stable Diffusion v1. Jan 5, 2024 · A robust Stable Diffusion model thrives on a diverse, top-tier dataset. In order to better understand Stable Diffusion, it is necessary to have a basic knowledge of its backbone — Diffusion models. 10. If you already have AUTOMATIC1111 WebGUI installed, you can skip this step. Jan 2, 2024 · This guide aims to take you on a journey through the installation process of Stable Diffusion on your Windows PC, allowing you to harness the full potential of this remarkable software. In AUTOMATIC1111 GUI, Select the img2img tab and select the Inpaint sub-tab. Training a Style Embedding with Textual Inversion. 
Stable Diffusion is an AI-powered tool that enables users to transform plain text into images. Learn the basics of Stable Diffusion, a powerful text-to-image model that can generate realistic images from text prompts. Extract the zip file at your desired location. It is easy to use for anyone with basic technical knowledge and a computer. Make sure not to right-click and save in the below screen. By default the UI downloads Stable Diffusion 1.4, but we can download a custom model to use with the UI. Upload the image to the inpainting canvas. But they can also be used for inpainting and outpainting, image-to-image (img2img) and a lot more. This page can act as an art reference. Stable Diffusion 3 combines a diffusion transformer architecture and flow matching. Settings: sd_vae applied. Stable Diffusion is an open-source latent diffusion model that was trained on billions of images to generate images given any prompt. This step is optional but will give you an overview of where to find the settings we will use. By the end of this guide, you'll be well-equipped to harness the power of stable diffusion and create your own amazing images. Stable Diffusion is a deep learning, text-to-image model that has been publicly released. Users can input text prompts, and the AI will then generate images based on those prompts. It's one of the most widely used text-to-image AI models, and it offers many great benefits. Step 8: Run the following command: "conda env create -f environment.yaml". End-to-End Python Guide for giving a Stable Diffusion model your own images for training and making inferences from text. Diffusion in latent space – AutoEncoderKL. The one in this guide is from Automatic1111. The default setting for Seed is -1, which means that Stable Diffusion will pull a random seed number to generate images off of your prompt. The dice button to the right of the Seed field will reset it to -1.
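The Seed field's behaviour can be mirrored in a few lines. A sketch: the 2**32 range is our assumption for illustration, not a documented WebUI constant:

```python
import random

# Sketch of the Seed field's behaviour: -1 means "pick a fresh random seed".
# The 2**32 range is an assumption for illustration, not a documented constant.

def resolve_seed(seed):
    if seed == -1:
        return random.randrange(2**32)
    return seed

# A fixed seed lets you reproduce an image; -1 gives a new one each run.
fixed = resolve_seed(12345)
fresh = resolve_seed(-1)
```

This is why noting the seed of an image you like matters: rerunning the same prompt, settings, and seed reproduces the image, while -1 explores new variations.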
Jun 7, 2023 · This article provides an intuitive but comprehensive guide on how Stable Diffusion works, as well as some accompanying code to better illustrated the concepts in practice. Now use this as a negative prompt: [the: (ear:1. Jul 9, 2023 · 1. This is an advanced AI model capable of generating images from text descriptions or modifying existing images based on textual prompts. " Proceed by uploading the downloaded model file into the newly created folder, "Stable Diffusion. This is the area you want Stable Diffusion to regenerate the image. One of the powerful ways to enhance the algorithm's overall performance is the use of LoRA. Text-to-Image with Stable Diffusion. If you are comfortable with the command line, you can use this option to update ControlNet, which gives you the comfort of mind that the Web-UI is not doing something else. This process is repeated a dozen times. Stable Diffusion Online is a free Artificial Intelligence image generator that efficiently creates high-quality images from simple text prompts. The Web UI offers various features, including generating images from text prompts (txt2img), image-to-image processing (img2img May 29, 2024 · How to install and get started with Stable Diffusion. ai's text-to-image model, Stable Diffusion. You will see the exact keyword applied to two classes of images: (1) a portrait and (2) a scene. Feb 29, 2024 · Thu Feb 29 2024. Stable diffusion makes it simple for people to create AI art with just text inputs. Double click the update. Jan 4, 2024 · Learn how to write effective prompts for Stable Diffusion, a text-to-image generation tool. Stable Diffusion's realistic AI images are In-Depth Stable Diffusion Guide for artists and non-artists. It has a base resolution of 1024x1024 pixels. 1-768. The super resolution component of the model (which upsamples the output images from 64 x 64 up to 1024 x 1024) is also fine-tuned, using the subject’s images exclusively. 
The main difference is that, Stable Diffusion is open source, runs locally, while being completely free to use. In this post, you will learn how to use AnimateDiff, a video production technique detailed in the article AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning by Yuwei Guo and coworkers. Whether you’re an artist, designer, or simply intrigued by the capabilities of AI, Stable Diffusion opens up a realm of possibilities for creating captivating Jul 7, 2024 · Option 2: Command line. Conclusion. This is where Stable Diffusion‘s diffusion model comes into play. You'll see this on the txt2img tab: Mar 19, 2024 · Creating an inpaint mask. We will be able to generate images with SDXL using only 4 GB of memory, so it will be possible to use a low-end graphics card. The web UI developed by AUTOMATIC1111 provides users In this tutorial I'll go through everything to get you started with #stablediffusion from installation to finished image. Nov 7, 2022 · Dreambooth is a technique to teach new concepts to Stable Diffusion using a specialized form of fine-tuning. 🧨 Diffusers provides a Dreambooth training script. Sep 8, 2023 · Here is how to generate Microsoft Olive optimized stable diffusion model and run it using Automatic1111 WebUI: Open Anaconda/Miniconda Terminal. Its key features include the innovative Multimodal Diffusion Transformer for enhanced text understanding and superior image generation capabilities. We advise a fragmented Prompt such as: “ 1girl, schoolgirl, white uniform “ rather than a full sentence like: “ a schoolgirl in white uniform “ Even though they give very similar results with very small prompts, long sentence-type prompts are prone to be partially dismissed or get interrupted by unintended filler words. Training stable diffusion from scratch. Mar 11, 2023 · By default the UI will download Stable Diffusion 1. or just type "cd" and then drag the folder into the Anaconda prompt. 
Mar 28, 2023 · The sampler is responsible for carrying out the denoising steps. This loads the 2. Aug 3, 2023 · This version of Stable Diffusion creates a server on your local PC that is accessible via its own IP address, but only if you connect through the correct port: 7860. Step4: Install and launch the Sep 29, 2022 · Diffusion steps. yaml". Apr 22, 2023 · Generate a test video. The green recycle button will populate the field with the seed number used in Feb 16, 2023 · Click the Start button and type "miniconda3" into the Start Menu search bar, then click "Open" or hit Enter. 2 to 0. Sep 16, 2023 · In this comprehensive guide, we’ll walk you through setting up the software, using the color sketch tool, and leveraging Img2Img to turn amateur sketches into professional artwork. Youtube Tutorials. Upload an image to the img2img canvas. 5 base model. This site offers easy-to-follow tutorials, workflows and structured courses to teach you everything you need to know about Stable Diffusion. 1:7860" or "localhost:7860" into the address bar, and hit Enter. This is meant to be read as a companion to the prompting guide to help you build a foundation for bigger and better generations. So let's get started on your journey to becoming a diffusion wizard. If you want to run Stable Diffusion locally, you can follow these simple steps. New stable diffusion finetune ( Stable unCLIP 2. After this you will grasp the basics of Stable Diffusion and be able to use it locally on your computer. The AUTOMATIC1111 webGUI is the most popular locally run user interface for Stable Diffusion, largely due to its ease of installation and frequent updates with new features. Stable Diffusion is a latent diffusion model, a kind of deep generative neural network. This approach aims to align with our core values and democratize access, providing users with a variety of options for scalability and quality to best meet their creative needs. 
This article will introduce you to the course and give important setup and reading links for the course. Option 2: Install the extension stable-diffusion-webui-state.