Generative AI Tutorials: Stable Diffusion, TTS, Voice Cloning, DeepFakes, LLMs, Training, Animation

Author: Furkan Gözükara
© Furkan Gözükara
Description
Best tutorials for: Stable Diffusion, Stable Diffusion XL (SDXL), Stable Diffusion 3, PixArt, Stable Cascade, Large Language Models (LLMs), Text to Speech, Speech to Text, ChatGPT, GPT-4, Image to Video Animation, Video to Video Animation, Deep Fakes, SwarmUI, ComfyUI, Fooocus, SUPIR, V-Express, InstantId, ControlNet, IP Adapters, RunPod, Massed Compute, Cloud, Kaggle, Google Colab, Automatic1111 SD Web UI, TensorRT, DreamBooth, LoRA, Training, Fine Tuning, Kohya, OneTrainer, Text to 3D, Image to 3D, Upscale, Inpainting, Outpainting, Superscale, AI-assisted Digital Art Generation and more
4 Episodes
The V3 update includes video-to-video functionality. This tutorial is ideal for anyone who wants to use LivePortrait but lacks a powerful GPU, for Mac users, and for anyone who prefers cloud usage. It guides you through installing and using LivePortrait with one click on #MassedCompute, #RunPod, and even a free #Kaggle account. After this tutorial, running LivePortrait on cloud services will be as straightforward as running it on your own computer. LivePortrait is the latest state-of-the-art static-image-to-talking-animation generator, surpassing paid services in both speed and quality.
🔗 Cloud (no-GPU) Installations Tutorial for Massed Compute, RunPod and free Kaggle Account ️⤵️▶️ https://youtu.be/wG7oPp01COg
🔗 LivePortrait Installers Scripts ⤵️▶️ https://www.patreon.com/posts/107609670
🔗 Windows Tutorial - Watch To Learn How To Use ⤵️▶️ https://youtu.be/FPtpNrmuwXk
🔗 Official LivePortrait GitHub Repository ⤵️▶️ https://github.com/KwaiVGI/LivePortrait
🔗 SECourses Discord Channel to Get Full Support ⤵️▶️ https://discord.com/servers/software-engineering-courses-secourses-772774097734074388
🔗 Paper of LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control ⤵️▶️ https://arxiv.org/pdf/2407.03168
🔗 Upload / download big files / models on cloud via Hugging Face tutorial ⤵️▶️ https://youtu.be/X5WVZ0NMaTg
🔗 How to use permanent storage system of RunPod (storage network volume) ⤵️▶️ https://youtu.be/8Qf4x3-DFf4
🔗 Massive RunPod tutorial (shows runpodctl) ⤵️▶️ https://youtu.be/QN1vdGhjcRc
0:00 Introduction to the state-of-the-art image-to-animation open-source application LivePortrait cloud tutorial
2:26 Installing and using LivePortrait on MassedCompute with an amazing discount coupon code
4:28 Entering the special Massed Compute coupon for a 50% discount
4:50 Setting up ThinLinc client to connect and use Massed Compute virtual machine
5:33 Configuring ThinLinc client's synchronization folder for file transfer between your computer and MassedCompute
6:20 Transferring installer files into Massed Compute sync folder
6:39 Connecting to initialized Massed Compute virtual machine and installing LivePortrait app
9:22 Starting and using LivePortrait application on MassedCompute post-installation
10:20 Launching a second instance of LivePortrait on the second GPU on Massed Compute
12:20 Locating generated animation videos and downloading them to your computer
13:23 Installing LivePortrait on RunPod cloud service
14:54 Selecting the appropriate RunPod template
15:20 Setting up RunPod proxy access ports
16:21 Uploading installer files to RunPod's JupyterLab interface and initiating installation
17:07 Starting LivePortrait app on RunPod after installation
17:17 Launching LivePortrait on the second GPU as a second instance
17:31 Connecting to LivePortrait via RunPod's proxy connection
17:55 Animating the first image on RunPod using a 73-second driving video
18:27 Time required to animate a 73-second video (impressive speed)
18:41 Understanding input upload errors with an example case
19:17 One-click download of all generated animations on RunPod
20:28 Monitoring the progress of animation generation
21:07 Installing and using LivePortrait for free on a Kaggle account with amazing speed
24:10 Generating the first animation on Kaggle after installing and starting LivePortrait
24:22 Waiting for input images and videos to upload fully to avoid errors
24:35 Monitoring animation status and progress on Kaggle
24:45 GPU, CPU, RAM, and VRAM usage during LivePortrait animation process on Kaggle
25:05 Downloading all generated animations on Kaggle with one click
26:12 Restarting LivePortrait app on Kaggle without reinstallation
26:36 Joining the SECourses Discord channel for chat and support
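The "one-click download of all generated animations" idea from the chapters above boils down to packing the output folder into a single archive on the cloud machine, so it can be fetched in one download from JupyterLab or the ThinLinc sync folder. A minimal sketch (the folder name "animations" is a placeholder, not LivePortrait's actual output path):

```python
# Sketch: zip an output folder so all generated animations can be
# downloaded in one click from JupyterLab / the sync folder.
# "animations" below is a hypothetical path, not LivePortrait's real one.
import os
import zipfile

def zip_outputs(out_dir: str, zip_path: str) -> int:
    """Pack every file under out_dir into one zip; return the file count."""
    count = 0
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(out_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to out_dir so the zip unpacks cleanly
                zf.write(full, arcname=os.path.relpath(full, out_dir))
                count += 1
    return count

# Example: zip_outputs("animations", "animations.zip")
```

Keep the zip outside the folder being walked, otherwise the archive would try to include itself.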
LivePortrait AI: Transform Static Photos into Talking Videos. Now supporting Video-to-Video Conversion and Superior Expression Transfer at Incredible Speed
A new tutorial is anticipated to showcase the latest changes and features in V3, including Video-to-Video capabilities and other enhancements.
This post provides information for both Windows (local) and Cloud installations (Massed Compute, RunPod, and free Kaggle Account).
The V3 update introduces video-to-video functionality. If you're seeking a one-click installation method for LivePortrait, an open-source zero-shot image-to-animation application on Windows, this tutorial is essential. It introduces the cutting-edge LivePortrait generator. Simply provide a static image and a driving video to create an impressive animation in seconds. LivePortrait is remarkably fast and adept at preserving facial expressions from the input video. The results are truly astonishing.
🔗 Windows Local Installation Tutorial ️⤵️▶️ https://youtu.be/FPtpNrmuwXk
🔗 LivePortrait Installers Scripts ⤵️▶️ https://www.patreon.com/posts/107609670
🔗 Requirements Step by Step Tutorial ⤵️▶️ https://youtu.be/-NjNy7afOQ0
🔗 Cloud Massed Compute, RunPod & Kaggle Tutorial (Mac users can follow this tutorial) ⤵️▶️ https://youtu.be/wG7oPp01COg
🔗 Official LivePortrait GitHub Repository ⤵️▶️ https://github.com/KwaiVGI/LivePortrait
🔗 SECourses Discord Channel to Get Full Support ⤵️▶️ https://discord.com/servers/software-engineering-courses-secourses-772774097734074388
🔗 Paper of LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control ⤵️▶️ https://arxiv.org/pdf/2407.03168
0:00 Introduction to the state-of-the-art image-to-animation open-source application LivePortrait
2:20 How to download and install LivePortrait Gradio application on your computer
3:27 Requirements for LivePortrait application and installation process
4:07 Verifying accurate installation of requirements
5:02 Confirming successful installation completion and saving installation logs
5:37 Starting the LivePortrait application post-installation
5:57 Additional materials provided, including portrait images, driving video, and rendered videos
7:28 Using the LivePortrait application
8:06 VRAM usage when generating a 73-second animation video
8:33 Animating the first image
8:50 Monitoring the animation process status
10:10 First animation video rendering completion
10:24 Resolution of rendered animation videos
10:45 Original output resolution of LivePortrait
11:27 Improvements and new features coded on top of the official demo app
11:51 Default save location for generated animated videos
12:35 Effect of the Relative Motion option
13:41 Effect of the Do Crop option
14:17 Effect of the Paste Back option
15:01 Effect of the Target Eyelid Open Ratio option
17:02 How to join the SECourses Discord channel
Tutorial Link : https://youtu.be/XFUZof6Skkw
In this video, I demonstrate how to install and use #SwarmUI on cloud services. If you lack a powerful GPU or wish to harness more GPU power, this video is essential. You’ll learn how to install and utilize SwarmUI, one of the most powerful Generative AI interfaces, on Massed Compute, RunPod, and Kaggle (which offers free dual T4 GPU access for 30 hours weekly). This tutorial will enable you to use SwarmUI on cloud GPU providers as easily and efficiently as on your local PC. Moreover, I will show how to use Stable Diffusion 3 (#SD3) on cloud. SwarmUI uses #ComfyUI backend.
🔗 The Public Post (no login or account required) Shown In The Video With The Links ➡️ https://www.patreon.com/posts/stableswarmui-3-106135985
🔗 Windows Tutorial to Learn How to Use SwarmUI ➡️ https://youtu.be/HKX8_F1Er_w
🔗 How to download models very fast to Massed Compute, RunPod and Kaggle and how to upload models or files to Hugging Face very fast tutorial ➡️ https://youtu.be/X5WVZ0NMaTg
🔗 SECourses Discord ➡️ https://discord.com/servers/software-engineering-courses-secourses-772774097734074388
🔗 Stable Diffusion GitHub Repo (Please Star, Fork and Watch) ➡️ https://github.com/FurkanGozukara/Stable-Diffusion
Coupon Code for Massed Compute: SECourses. The coupon works on Alt Config RTX A6000 and also RTX A6000 GPUs
0:00 Introduction to SwarmUI on cloud services tutorial (Massed Compute, RunPod & Kaggle)
3:18 How to install (pre-installed; we just 1-click update) and use SwarmUI on Massed Compute virtual Ubuntu machines like on your local PC
4:52 How to install and set up the ThinLinc client's synchronization folder to access and use the Massed Compute virtual machine
6:34 How to connect and start using the Massed Compute virtual machine after it is initialized and its status is running
7:05 How to 1-click update SwarmUI on Massed Compute before starting to use it
7:46 How to set up multiple GPUs on the SwarmUI backend to generate images on each GPU at the same time with the amazing queue system
7:57 How to see the status of all GPUs with the nvitop command
8:43 Which pre-installed Stable Diffusion models we have on Massed Compute
9:53 Downloading speed for new models on Massed Compute
10:44 How to notice a GPU backend setup error in a 4-GPU setup
11:42 How to monitor the status of all 4 running GPUs
12:22 Image generation speed (step speed) for SD3 on an RTX A6000 on Massed Compute
12:50 How to set up and use a CivitAI API key to download gated (behind a login) models from CivitAI
13:55 How to quickly download all of the generated images from Massed Compute
15:22 How to install the latest SwarmUI on RunPod with accurate template selection
16:50 Port setup to be able to connect to SwarmUI after installation
17:50 How to download and run the installer sh file to install SwarmUI on RunPod
19:47 How to restart the Pod once to fix the "backends loading forever" error
20:22 How to start SwarmUI again on RunPod
21:14 How to download and use Stable Diffusion 3 (SD3) on RunPod
22:01 How to set up a multiple-GPU backend system on RunPod
23:22 Generation speed on RTX 4090 (step speed for SD3)
24:04 How to quickly download all generated images on RunPod to your computer / device
24:50 How to install and use SwarmUI and Stable Diffusion 3 on a free Kaggle account
28:39 How to change the model root folder path in SwarmUI on Kaggle to use temporary disk space
29:21 Adding another backend to utilize the second T4 GPU on Kaggle
29:32 How to cancel the run and start SwarmUI again (restarting)
31:39 How to use the Stable Diffusion 3 model on Kaggle and generate images
33:06 Why we got an out-of-RAM error and how we fixed it on Kaggle
33:45 How to disable one of the backends to prevent the RAM error when using the T5 XXL text encoder twice
34:04 Stable Diffusion 3 image generation speed on a T4 GPU on Kaggle
34:35 How to download all of the generated images on Kaggle at once to your computer / device
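The CivitAI API key mentioned in the chapters above is needed because gated models sit behind a login. A minimal sketch of what such an authenticated download looks like (my assumption of the mechanism, not SwarmUI's own code; the model-version id 12345 is a made-up placeholder):

```python
# Sketch (an assumption, not SwarmUI's internals): fetching a gated
# CivitAI model by sending the API key as a bearer token.
# The model-version id 12345 is a made-up placeholder.
import urllib.request

def civitai_request(model_version_id: int, api_key: str) -> urllib.request.Request:
    """Build an authenticated download request for a CivitAI model version."""
    url = f"https://civitai.com/api/download/models/{model_version_id}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})

# Actual download (network required):
# with urllib.request.urlopen(civitai_request(12345, "MY_API_KEY")) as r:
#     with open("model.safetensors", "wb") as f:
#         f.write(r.read())
```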
Full tutorial is on : https://youtu.be/HKX8_F1Er_w
Do not skip any part of this tutorial to master how to use Stable Diffusion 3 (SD3) with SwarmUI, the most advanced open-source generative AI app. Automatic1111 SD Web UI and Fooocus do not support #SD3 yet, so I am starting to make tutorials for SwarmUI as well. #StableSwarmUI is officially developed by StabilityAI, and your mind will be blown after you watch this tutorial and learn its amazing features. StableSwarmUI uses #ComfyUI as the backend, so it has all the good features of ComfyUI and combines them with the easy-to-use features of the Automatic1111 #StableDiffusion Web UI. I really liked SwarmUI and am planning to do more tutorials for it.
🔗 The Public Post (no login or account required) Shown In The Video With The Links ➡️ https://www.patreon.com/posts/stableswarmui-3-106135985
0:00 Introduction to the Stable Diffusion 3 (SD3) and SwarmUI and what is in the tutorial
4:12 Architecture and features of SD3
5:05 What each of the different Stable Diffusion 3 model files means
6:26 How to download and install SwarmUI on Windows for SD3 and all other Stable Diffusion models
8:42 What kind of folder path you should use when installing SwarmUI
10:28 How to notice and fix an installation error if you get one
11:49 Installation has been completed and now how to start using SwarmUI
12:29 Which settings I change before starting to use SwarmUI and how to change your theme (dark, white, gray)
12:56 How to make SwarmUI save generated images as PNG
13:08 How to find the description of each setting and configuration option
13:28 How to download the SD3 model and start using it on Windows
13:38 How to use model downloader utility of SwarmUI
14:17 How to set models folder paths and link your existing models folders in SwarmUI
14:35 Explanation of Root folder path in SwarmUI
14:52 Do we need to download the VAE of SD3?
15:25 The Generate and Models sections of SwarmUI to generate images, and how to select your base model
16:02 Setting up parameters and what they do to generate images
17:06 Which sampling method is best for SD3
17:22 Information about SD3 text encoders and their comparison
18:14 First time generating an image with SD3
19:36 How to regenerate the same image
20:17 How to see image generation speed, step speed, and more information
20:29 Stable Diffusion 3 iterations-per-second speed on an RTX 3090 Ti
20:39 How to see VRAM usage on Windows 10
22:08 Testing and comparing different text encoders for SD3
22:36 How to use FP16 version of T5 XXL text encoder instead of default FP8 version
25:27 The image generation speed when using best config for SD3
26:37 Why the VAE of SD3 is many times better than in previous Stable Diffusion models: 4 vs 8 vs 16 vs 32 channel VAEs
27:40 How to and where to download best AI upscaler models
29:10 How to use refiner and upscaler models to improve and upscale generated images
29:21 How to restart and start SwarmUI
32:01 The folders where the generated images are saved
32:13 Image history feature of SwarmUI
33:10 Upscaled image comparison
34:01 How to download all upscaler models at once
34:34 Presets feature in depth
36:55 How to generate forever / infinite times
37:13 Issues caused by non-tiled upscaling
38:36 How to compare tiled vs non-tiled upscaling and decide which is best
39:05 The 275 SwarmUI presets (cloned from Fooocus) I prepared, the scripts I coded to prepare them, and how to import those presets
42:10 Model browser feature
43:25 How to generate TensorRT engine for huge speed up
43:47 How to update SwarmUI
44:27 Prompt syntax and advanced features
45:35 How to use Wildcards (random prompts) feature
46:47 How to see full details / metadata of generated images
47:13 Full guide for extremely powerful grid image generation (like X/Y/Z plot)
47:35 How to add all the downloaded upscalers from the zip file
51:37 How to see what is happening at the server logs
53:04 How to continue grid generation process after interruption
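The wildcards feature covered at 45:35 picks a random entry from a word list each time the prompt is rendered. A toy sketch of the idea (an illustration only; the <name> token syntax and function here are my own placeholders, not SwarmUI's implementation):

```python
# Toy sketch of the wildcard / random-prompt idea, not SwarmUI's code:
# each <name> token in a prompt is replaced with a random line from a
# matching word list.
import random

def expand_wildcards(prompt: str, lists: dict, seed=None) -> str:
    """Replace every <name> token with a random entry from lists[name]."""
    rng = random.Random(seed)  # fixed seed makes the expansion repeatable
    for name, options in lists.items():
        token = f"<{name}>"
        while token in prompt:
            # Replace one occurrence at a time so repeats can differ
            prompt = prompt.replace(token, rng.choice(options), 1)
    return prompt

# Example: expand_wildcards("photo of a <color> car", {"color": ["red", "blue"]})
```

Generating "forever / infinite times" then amounts to re-expanding the same template before each queued generation.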