Learn How To Install & Use Stable Diffusion Via Automatic1111 Web UI & Realistic DreamBooth & Kohya LoRA Training
What you’ll learn
- Learn Stable Diffusion.
- Learn DreamBooth Training.
- Learn Generative AI.
- Learn How To Generate Studio Quality Images Of Yourself.
- Learn Kohya LoRA Training.
- Learn Automatic1111 Web UI For Stable Diffusion.
Course Content
- Full Course → 8 lectures • 4hr 7min.
Lecture 1: How To Install Python, Setup Virtual Environment VENV, Set Default Python System Path & Install Git
How to install Python, manage multiple Python installations, and set the system-wide default Python version. How to create a venv for any Python installation, change the default Python path, and install the SD Web UI properly.
0:00 Very comprehensive guide to Python installation on Windows
1:11 What is CMD – Command Prompt
1:56 How to open a cmd window and use it
2:04 How to run cmd as administrator
2:17 What is Git and why do we need Git
2:35 How to download and install Git
3:30 Why we need Git LFS (Large File Storage) and how to download and install it
3:50 Why do we need specific Python versions
4:03 How to download and install any Python version
4:32 How to verify whether Python is installed or not
4:55 How to customize Python installation
5:17 The Python “Add to PATH” checkbox during installation
6:20 How to verify your Python installed version
6:35 How to change or set system environment variables path of Python
7:15 How to install another Python version – multiple Python installations
8:30 How to change the default Python version when you have multiple Python installations
9:30 How to use a specific Python installation when you have multiple Python installations
9:35 What is Python venv and why do we need it
10:40 How to start cmd inside a certain directory
10:55 How to create a Python venv (see the sketch after this list)
11:19 How to activate a Python venv
11:58 How to create a venv from a different Python version
13:39 Demo of how packages installed inside a venv are separated from other Python installations
14:17 Where to find installed packages in Python installation folder
14:50 How to write a batch script to automatically activate the Python venv and start a cmd
15:24 How to view extensions of files in Windows
15:43 The script itself to activate venv and start cmd
17:11 How to install Stable Diffusion Automatic1111 web UI
17:30 How to use Git clone to download entire project from GitHub repo
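The Lecture 1 flow boils down to: create an isolated venv, then git clone the Web UI project. Below is a minimal Python sketch of those two steps, assuming git is installed and on PATH and using the official AUTOMATIC1111 repository URL; the folder names are examples, not necessarily the exact ones from the video:

```python
# Minimal sketch of the Lecture 1 steps, driven from Python instead of cmd.
# Assumes git is installed and on PATH; folder names are examples only.
import subprocess
import venv

# Create an isolated virtual environment (equivalent to: python -m venv venv)
venv.create("venv", with_pip=True)

# Download the whole Web UI project (equivalent to: git clone <repo-url>)
subprocess.run(
    ["git", "clone", "https://github.com/AUTOMATIC1111/stable-diffusion-webui.git"],
    check=True,
)
```

On Windows the venv is then activated with venv\Scripts\activate.bat, which is exactly what the small activation script from 14:50 automates.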
Lecture 2: Zero to Hero ControlNet Tutorial: Stable Diffusion Web UI Extension | Complete Feature Guide
In this video I show how to install the Automatic1111 Web UI and the ControlNet extension from scratch. Moreover, I show how to make amazing QR codes and how to use ControlNet inpainting and outpainting, which are very similar to Photoshop generative fill and Midjourney zoom out. Furthermore, I explain and demonstrate the Canny, Depth, Normal, OpenPose, MLSD, Lineart, SoftEdge, Scribble, Seg, Shuffle, Tile, Inpaint, IP2P, Reference, and T2IA features of ControlNet.
0:00 Introduction to most advanced zero to hero ControlNet tutorial
2:55 How to install Stable Diffusion Automatic1111 Web UI from scratch
5:05 How to see extensions of files like .bat
6:15 Where to find command line arguments of Automatic1111 and what are they
6:46 How to run Stable Diffusion and ControlNet on a weak GPU
7:37 Where to put downloaded Stable Diffusion model files
8:29 How to set a different folder path as the model path – you can store models on another drive
9:15 How to start using Stable Diffusion via Automatic1111 Web UI
10:00 Command line interface freezing behaviour
10:13 How to improve image generation of Stable Diffusion with better VAE file
11:39 Default VAE vs best VAE comparison
11:50 How to set quick shortcuts for VAE and Clip Skip for Automatic1111 Web UI
12:30 How to upgrade xFormers to the latest version in Automatic1111
13:40 What is xFormers and other optimizers
14:26 How to install the ControlNet extension of Automatic1111 Web UI (see the sketch after this list)
18:00 How to download ControlNet models
19:40 How to use custom Stable Diffusion models with Automatic1111 Web UI
21:24 How to update ControlNet extension to the latest version
22:53 Enable the “Allow other scripts to control this extension” setting
24:37 How to make amazing QR code images with ControlNet
30:59 Best settings for QR code image generation
31:44 What is Depth ControlNet option and how to use it
33:28 Depth_leres++ of ControlNet
34:15 Depth_zoe of ControlNet
34:22 Official information of Depth maps
34:49 ControlNet Normal map
35:34 Normal Midas map
36:05 Official information of Normal maps
36:49 ControlNet Canny model
37:42 Official information of Canny
37:55 ControlNet MLSD straight lines model
39:08 Official information of MLSD straight lines
39:18 ControlNet Scribble model
40:28 How to use your own scribble images and turn them into amazing artworks
40:45 When to select none in pre-processor section
41:20 My prompt is more important
41:36 ControlNet is more important
42:01 Official information of Scribble
42:11 ControlNet Softedge model
43:12 Official information of SoftEdge
43:22 ControlNet Segmentation (Seg) model
43:55 How to modify your prompt to properly utilize segmentation
44:10 Association of prompt with segments the ControlNet finds
44:41 How to turn your wall into a painting with ControlNet
45:33 Why I selected none preprocessor
46:06 Official information of segmentation (Seg)
46:16 Open pose module of ControlNet
46:40 How to install and use OpenPose editor
50:58 Official information of OpenPose
51:08 ControlNet Lineart model
51:36 Preprocessor preview bug
54:21 Real lineart into amazing art example
56:34 How to generate amazing logo images by using Lineart of ControlNet
58:16 Difference between just resize, crop and resize, and resize and fill
59:02 ControlNet Shuffle model
1:00:50 Official information of Shuffle
1:02:36 What is multi-ControlNet and how to use it
1:04:05 Instruct pix2pix of ControlNet
1:06:00 Inpainting feature of ControlNet
1:07:49 ControlNet inpainting vs Automatic1111 inpainting
1:07:59 How to get the true power of ControlNet inpainting (hint: with tiling)
1:09:00 How to upscale and add details to the images with inpainting + tiling
1:09:30 The tile color fix + sharp to obtain even better results
1:10:35 Tile color fix + sharp vs old tile resample result comparison
1:11:20 How to replicate Photoshop’s generative fill feature in ControlNet to remove objects
1:12:58 How to outpaint an image (the zoom out feature of Midjourney 5.2) with ControlNet
1:14:17 The logic of outpainting
1:14:40 How to continue outpainting easily
1:16:06 Tiling of ControlNet – ultimate game changer for upscaling
1:17:19 How to turn your image into a fully stylized image with tiling without training
1:20:57 Reference only feature of ControlNet
1:22:29 Official information of Reference mode
1:22:39 Style Transfer (T2IA) of ControlNet
1:26:54 How to install and use ControlNet on RunPod
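For reference, the in-UI installation shown at 14:26 is equivalent to cloning the extension into the Web UI’s extensions folder. A minimal sketch, assuming the widely used Mikubill/sd-webui-controlnet repository and a default install location (adjust paths to your setup):

```python
# Minimal sketch: install the ControlNet extension by cloning it into the
# Web UI's extensions folder. Paths are examples; adjust to your install.
import subprocess
from pathlib import Path

webui_dir = Path("stable-diffusion-webui")  # your Automatic1111 folder
target = webui_dir / "extensions" / "sd-webui-controlnet"

subprocess.run(
    ["git", "clone", "https://github.com/Mikubill/sd-webui-controlnet.git", str(target)],
    check=True,
)
# The downloaded ControlNet model files then go under:
#   extensions/sd-webui-controlnet/models
```

Restart the Web UI afterwards so the extension is picked up.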
Lecture 3: How To Find Best Stable Diffusion Generated Images By Using DeepFace AI – DreamBooth / LoRA Training
If you are getting tired of trying to find the good images among thousands of generated ones, you don’t have to search manually anymore. By using the DeepFace AI library, you can sort images by their similarity to your target images and quickly find the best images generated by your Stable Diffusion DreamBooth / LoRA trained model. I explain everything step by step, and this tutorial requires zero technical knowledge.
0:00 Introduction to what DeepFace does and how we are going to utilize it
0:58 Let’s say you have generated 2000 images – how to find the good ones
1:17 This approach can be used for professional business purposes
1:32 If you are new to Stable Diffusion or image generation
2:17 Beginning with creating a venv to install DeepFace
3:18 The training dataset images I have used for this tutorial
3:57 I have generated over 3000 images
4:06 The prompts I have used to generate images – how to use PNG info to find used prompts
5:23 How to write and use the DeepFace best-image-finding script (see the sketch after this list)
9:18 Demonstration of using the script after you have written and configured it
11:20 Explanation of the values displayed during the script runtime
12:18 Sorted images from best to worst
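The core idea of the script: measure the facial distance between each generated image and a reference photo, then sort ascending. A minimal sketch with the deepface library; the folder names and the VGG-Face model choice are illustrative assumptions, not necessarily the exact script from the video:

```python
# Minimal sketch: rank generated images by facial similarity to a target photo.
# Folder names and model choice are illustrative assumptions.
from pathlib import Path
from deepface import DeepFace

target = "targets/me.jpg"       # a reference photo of the subject
generated = Path("generated")   # folder of Stable Diffusion outputs

scores = []
for img in generated.glob("*.png"):
    result = DeepFace.verify(
        img1_path=target,
        img2_path=str(img),
        model_name="VGG-Face",
        enforce_detection=False,  # don't fail on images without a clear face
    )
    scores.append((result["distance"], img.name))

# Lower distance = more similar, so the best candidates print first
for distance, name in sorted(scores):
    print(f"{distance:.4f}  {name}")
```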
Lecture 4: Generate Studio Quality Realistic Photos By Kohya LoRA Stable Diffusion Training – Full Tutorial
A full tutorial on DreamBooth LoRA training with the Kohya SS web GUI. You don’t need technical knowledge to follow it. In this tutorial I explain how to generate professional, photo-studio-quality portrait / self images for free with Stable Diffusion training.
0:00 Introduction to Kohya LoRA Training and Studio Quality Realistic AI Photo Generation
2:40 How to download and install Kohya’s GUI to do Stable Diffusion training
5:04 How to install newer cuDNN dll files to increase training speed
6:43 How to upgrade a previously installed Kohya GUI to the latest version
7:02 How to start Kohya GUI via cmd
8:00 How to set DreamBooth LoRA training parameters correctly
8:10 How to use previously downloaded models to do Kohya LoRA training
8:35 How to download Realistic Vision V2 model
8:49 How to do training with Stable Diffusion 2.1 512px and 768px versions
9:44 Instance / activation and class prompt settings
10:18 What kind of training dataset you should use
11:46 Explanation of number of repeats in Kohya DreamBooth LoRA training
13:34 How to set best VAE file for better image generation quality
13:52 How to generate classification / regularization images via Automatic1111 Web UI
16:53 How to prepare captions to images and when you do need image captions
17:48 What kind of regularization images I have used
18:04 How to set training folders
18:57 Best LoRA training settings for GPUs with a minimal amount of VRAM
21:47 How to save state of training and continue later
22:44 How to save and load Kohya Training settings
23:31 How to calculate the step count of 1 epoch when the repeat count is considered (see the worked example after this list)
24:41 How to decide the number of epochs when the repeat count is considered
26:00 Explanation of command line parameters displayed during training
28:19 Caption extension changing
29:24 When we will get a checkpoint and where the checkpoints will be saved
29:57 How to use generated LoRA safetensors files in SD Automatic1111 Web UI
30:45 How to activate LoRA in Stable Diffusion web UI
31:30 How to do x/y/z checkpoint comparison of LoRA checkpoints to find best model
33:29 How to improve face quality of generated images with high res fix
36:00 The 18 different training-parameter experiments I made and a comparison of their results
36:42 How to test 18 different LoRA checkpoints with x/y/z plot
39:18 How to properly set number of epochs and save checkpoints when reducing repeating count
40:36 How to use checkpoints of Kohya DyLora, LoCon, LyCORIS/LoCon, LoHa in Automatic1111 Web UI
42:12 How to install Torch 1.13 instead of 1.12 and newer xFormers compatible with this version
43:06 How to make Kohya scripts use your second GPU instead of your primary GPU
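The step arithmetic from 23:31, as a worked example. All counts below are hypothetical; the doubling for regularization images reflects how Kohya’s scripts count steps when a reg folder is configured:

```python
# Worked example of Kohya epoch/step counting (hypothetical numbers).
train_images = 13       # images in the training folder
repeats = 40            # the repeat count from the folder name, e.g. "40_ohwx man"
batch_size = 1
use_reg_images = True   # Kohya doubles steps when regularization images are used

steps_per_epoch = train_images * repeats // batch_size
if use_reg_images:
    steps_per_epoch *= 2

epochs = 10
print(steps_per_epoch, steps_per_epoch * epochs)  # 1040 steps/epoch, 10400 total
```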
Lecture 5: The END of Photography – Use AI to Make Your Own Studio Photos, FREE Via DreamBooth Training
DreamBooth is the best training method for Stable Diffusion. In this tutorial, I show how to install the DreamBooth extension of Automatic1111 Web UI from scratch. Additionally, I demonstrate my months of work on the realism workflow, which enables you to produce studio-quality images of yourself through DreamBooth training. Furthermore, I share my automatic installer script for the DreamBooth extension.
0:00 DreamBooth training with Automatic1111 Web UI
1:44 How to install DreamBooth extension of Automatic1111 Web UI
2:37 Automatic installer script for DreamBooth extension
3:20 Manual installation of DreamBooth extension
3:30 How to use an older / specific version of Auto1111 or DreamBooth with git checkout
4:30 Main manual installation part of DreamBooth extension
4:57 How to manually update previously installed DreamBooth extension to the latest version
5:44 How to install requirements of DreamBooth extension
7:15 How to use DreamBooth extension
7:25 How to create your training model in the DreamBooth extension
7:35 Best base model and settings for realism training in DreamBooth
7:51 Where to find installed Python, xFormers, Torch, and Auto1111 versions
8:10 How to solve frozen / non-progressing CMD window
8:23 Where the DreamBooth generated training files (native diffusers) are stored
8:37 Where the Stable Diffusion training files are stored
8:57 Select training model and start setting parameters for best realism
9:07 How to continue training at a later time
9:38 Which configuration (settings tab) for best realism and best training
12:14 Concept tab settings
12:28 How to prepare your training images dataset with my human cropping script and pre-processing
13:43 What kind of training images you should have for DreamBooth training
14:52 Continue back setting parameters for concepts tab
15:02 Everything about classification / regularization images used during Dreambooth / LoRA training
16:07 The pre-prepared, real-photo-based classification images used for this tutorial
16:55 How to generate classification images by using the trained model
17:22 How to generate images with Automatic1111 forever until cancelled
18:09 How to use image captions with DreamBooth extension via filewords
18:25 How to automatically generate captions for training or class images
18:35 How to use BLIP or deepbooru for captioning
19:25 What happens when an image caption is read – the final output of the instance prompt (see the sketch after this list)
19:59 How to set class images per instance
20:32 What is the benefit of using real photos as classification images
21:42 How to start training after setting all configuration
23:05 Training started, displayed messages on CMD
23:47 When it generates new classification images
25:52 What if you don’t have such a powerful GPU for such quality training
26:55 How to do x/y/z checkpoint comparison to find best checkpoint
28:43 How checkpoints are named when saved – 1 epoch step count
30:05 The best VAE file I use for best quality
30:36 How to open x/y/z plot comparison results and evaluate them
33:20 How to sort thousands of generated images by best similarity and thus quality
34:39 How to improve generated image quality via 2 different inpainting methodologies
36:56 Improve results with inpainting + ControlNet
38:50 What is important to get good quality images after inpainting
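To make the filewords mechanic from 18:09–19:25 concrete: the extension substitutes each image’s caption file into the instance prompt wherever [filewords] appears. A simplified illustration of that substitution (not the extension’s actual code; the ohwx token and file names are example assumptions):

```python
# Simplified illustration of [filewords] substitution in the instance prompt
# (not the DreamBooth extension's actual implementation).
from pathlib import Path

instance_prompt = "photo of ohwx man, [filewords]"  # ohwx = example rare token

image = Path("train/img001.jpg")
caption = image.with_suffix(".txt").read_text().strip()  # e.g. "wearing a black suit"

final_prompt = instance_prompt.replace("[filewords]", caption)
print(final_prompt)  # -> photo of ohwx man, wearing a black suit
```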
Lecture 6: How To Use Stable Diffusion XL (SDXL 0.9) On Google Colab For Free
SDXL is currently in beta, and in this video I show you how to use it on Google Colab for free. Tutorials on how to use it on a PC and on RunPod are hopefully coming as well.
0:00 How to use SDXL On Google Colab for free
0:18 How to accept SDXL agreement and get weights
0:43 How to generate a Hugging Face access token (see the sketch after this list)
1:07 How to start Colab properly to get GPU and use SDXL
2:14 Advanced settings of Google Colab gradio for SDXL
2:48 it/s on Google Colab for SDXL
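Once the agreement is accepted, the notebook needs your Hugging Face token to download the gated SDXL 0.9 weights. A minimal sketch with the huggingface_hub library (the token string is a placeholder):

```python
# Minimal sketch: authenticate to Hugging Face inside a Colab cell so the
# gated SDXL 0.9 weights can be downloaded. The token value is a placeholder.
from huggingface_hub import login

login(token="hf_xxxxxxxxxxxxxxxxxxxx")  # paste your own access token here
```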
Lecture 7: Stable Diffusion XL (SDXL) Locally On Your PC – 8GB VRAM – Easy Tutorial With Automatic Installer
SDXL is currently in beta, and in this video I show you how to install and use it on your PC. This tutorial should work on all platforms, including Windows, Linux, and Mac; it may even work with AMD, but I couldn’t test that. I also show the settings for 8 GB VRAM, so don’t forget to watch that chapter.
0:00 How to use SDXL locally on your PC
1:01 How to install via Automatic installer script
1:35 Beginning manual installation
1:47 How to accept terms and conditions to access SDXL weights and model files (instantly approved)
2:08 What the agreement page looks like and how to fill the form for instant access
2:38 How to generate Hugging Face access token
2:53 Continuing the manual installation
3:36 Automatic installation is completed. How to start using SDXL
4:00 How to add your Hugging Face token so that Gradio will work
4:45 Continuing the manual installation
5:19 Manual installation is completed. How to start using SDXL
6:17 How to delete cached model and weight files
6:44 How the app will download weight files showing live
7:20 Advanced settings of the SDXL Gradio app
8:11 Speed of image generation with RTX 3090 TI
8:39 Where the generated images are saved
9:44 8 GB VRAM settings – minimum VRAM settings for SDXL (see the sketch after this list)
10:06 How to see file extensions on Windows
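The app in this lecture is diffusers-based; the core of such a setup looks roughly like the sketch below. It assumes the gated stabilityai/stable-diffusion-xl-base-0.9 weights (agreement accepted and token configured); enable_model_cpu_offload() is the kind of memory-saving switch that makes the 8 GB VRAM settings possible:

```python
# Minimal sketch of a diffusers-based local SDXL 0.9 setup (illustrative,
# not the exact app from the video). Requires the accepted SDXL agreement.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
# Offload submodules to CPU between steps; this keeps VRAM usage near 8 GB
pipe.enable_model_cpu_offload()

image = pipe(prompt="photo of a lighthouse at sunset, 8k").images[0]
image.save("sdxl_test.png")
```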
Lecture 8: How To Use SDXL On RunPod Tutorial. Auto Installer & Refiner & Amazing Native Diffusers Based Gradio
Don’t have a good GPU, or don’t want to use a weak Google Colab? Here is how to install and use Stable Diffusion XL (SDXL) on RunPod, step by step. I am also providing an auto-installer script. On RunPod with a cheap RTX 3090 GPU it works super fast. The shared Gradio app is based on native diffusers, so it works very fast and correctly.
0:00 How to install and use SDXL on RunPod Tutorial
1:12 Which Pod you should pick for SDXL with which settings
2:26 How to use auto installer script to install SDXL on RunPod
3:16 How to do manual installation of SDXL on RunPod step by step
3:47 How to accept the terms of service of the SDXL repository
5:24 How to download / clone SDXL repository
7:40 How to run SDXL after installation
8:15 How to use SDXL Gradio UI interface for generating images
9:03 SDXL base output vs refined output comparison
9:32 How to delete your Pod to not spend any money
9:48 How to try prompts of Midjourney and do comparison with SDXL
10:25 Advanced settings tab of the SDXL Gradio interface for batch size, refiner strength, and CFG value
12:00 Number of steps 100 experiment
12:29 Where the generated images are saved/stored
14:00 More image comparison between SDXL and Midjourney
15:18 How to turn off display of non-refiner / base model images
16:43 Explanation of the SDXL refiner and refiner strength
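For context on the base vs. refined comparison at 9:03 and the strength setting at 16:43: in the native diffusers workflow, the base model produces latents and the refiner re-denoises them. A minimal sketch, assuming the gated 0.9 base and refiner checkpoints and a 24 GB GPU such as the RTX 3090 mentioned above:

```python
# Minimal sketch of the SDXL 0.9 base + refiner workflow in diffusers
# (illustrative; not the exact Gradio app from the video).
import torch
from diffusers import DiffusionPipeline

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-0.9",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "studio portrait photo of a man, dramatic lighting"

# The base pass outputs latents instead of a decoded image
latents = base(prompt=prompt, output_type="latent").images

# The refiner re-denoises the latents; strength controls how much it changes them
image = refiner(prompt=prompt, image=latents, strength=0.3).images[0]
image.save("sdxl_refined.png")
```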