Put Yourself Into A Universe Of Art - Using Your Images In Dreambooth Stable Diffusion

3,498 views

Ming Effect

1 day ago

Ming details how to use a Google Colab notebook for Dreambooth, Stable Diffusion, and Automatic1111, training on your own images to create a universe with you in it - for free. All of this is free, and you don't even have to use your own computer for processing.
Links mentioned in video:
Batch image processing: www.birme.net
Hugging Face site for free token: huggingface.co/
Dreambooth and Automatic1111: github.com/TheLastBen/fast-st...
Dreambooth colab for creating your own checkpoint datasets: colab.research.google.com/git...
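The Birme step linked above just center-crops each photo to a square and resizes it to 512x512 for training. The crop geometry behind that can be sketched in plain Python (the function name is illustrative, not part of any tool mentioned in the video):

```python
def center_crop_box(width, height):
    """Largest centered square inside a width x height image.

    Returns (left, top, right, bottom), the box you would pass to an
    image library's crop call before resizing to 512x512 for training.
    """
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# Example: a 1024x768 landscape photo crops to a centered 768x768 square.
print(center_crop_box(1024, 768))  # (128, 0, 896, 768)
```

Birme does this in the browser for a whole folder at once, which is why it is handy for batch preparation.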
00:00 Start
00:55 Birme - Batch process images
01:31 Copy Colab To Google Drive
03:19 Setting Up Button
03:52 Name Your Instance
04:41 How To Label Your Images
05:25 Optional Setting Up Class Images
06:45 Entering Dreambooth Info
07:43 Finished Training - How To Find Your CKPT File
08:10 Running Automatic1111
08:38 Test Your Trained Model Using Generated URL
09:42 Click Restore Faces
10:31 Changing Image Dimension
10:54 Entering Negative Prompts - What They Are
11:17 Totally Rad Image - Ha
11:51 Even Radder Image
12:13 CFG Scale - Why Use It
12:58 A Note About NSFW Images
14:20 Adding Artists Names To Prompts
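The labeling step at 04:41 amounts to renaming every training photo to your instance token plus an index. A minimal stdlib sketch of that renaming pass (the token and folder are placeholders, and the exact naming convention the colab expects may differ from this assumption):

```python
from pathlib import Path

def label_images(folder, token):
    """Rename every image in `folder` to '<token> (1).jpg', '<token> (2).png', ..."""
    images = sorted(p for p in Path(folder).iterdir()
                    if p.suffix.lower() in (".jpg", ".jpeg", ".png"))
    renamed = []
    for i, path in enumerate(images, start=1):
        target = path.with_name(f"{token} ({i}){path.suffix.lower()}")
        path.rename(target)
        renamed.append(target.name)
    return renamed
```

Running this once over your prepared 512x512 folder keeps every file tied to the same instance name before uploading to the colab.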
I hope this helps answer your questions regarding using your own images in Stable diffusion and dreambooth. Other tutorials will be available on the channel.
Check out our webpage: www.mingeffect.com
Facebook: / mingeffect
Ming Effect Instagram Channel: / mingeffects
===================================
#dreambooth #stablediffusion #googlecolab #freeimagegeneration #aiimagegeneration #aigenerator #mingeffect #stablediffusioncolab #ahowtousecolab #generativeimages #ai #automatic1111

Comments: 19
@martinkaiser5263 · 1 year ago
Hi, and thank you for this vid! The first vid I've seen that's at a proper speed to follow! Can't wait until you reveal the secret artist!!
@mingeffect · 1 year ago
You're welcome! I just released the video detailing that today.
@vacc06 · 1 year ago
Thanks for sharing! Eager to learn more!
@abhinavchhabra4947 · 1 year ago
Awesome content, thanks for putting out these videos. Keep posting!
@mingeffect · 1 year ago
Thank you :)
@atgaming5318 · 1 year ago
amazing, thanks!!!
@rikenshah1861 · 1 year ago
Thanks a lot!
@NirdeshakRao · 1 year ago
You are a great tutor, thanks for sharing your knowledge with all. 🙂🙏
@mingeffect · 1 year ago
So nice of you, thank you.
@ak_mits · 1 year ago
I think I have just found what I have been looking for! I tried training on my photos (one set with the same background and another version with the background removed), but I still got basic results. It didn't pick up the face from the photos; rather, it seems to have used the whole frame, so the results all showed the same t-shirt because the original photos all had the same t-shirt. Do you have any idea how to fix it?
@mingeffect · 1 year ago
The key is to have a variety of angles, from close up to far away. Then come the prompts themselves. Some of mine (with certain trained models) only produced good results of my face when I also included the word "viking" - go figure. I suppose it has something to do with my beard. Think of other words that might tune the render more closely to your face. Maybe add businessman or person, or even throw in a random 4-letter set, which will also change the render: something like sdd4
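The random 4-letter set mentioned above is trivial to generate. A sketch (the prompt and the instance token "mngfct" are made-up examples, not names from the video):

```python
import random
import string

def random_token(length=4, seed=None):
    """A short random alphanumeric token (e.g. 'sdd4') to nudge a render."""
    rng = random.Random(seed)
    return "".join(rng.choices(string.ascii_lowercase + string.digits, k=length))

# Tack the token onto a prompt to vary the output between runs.
prompt = f"portrait of mngfct, viking, {random_token()}"
```

Regenerating the token between runs gives you a cheap way to explore variations without rewording the rest of the prompt.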
@eugeniusvision · 1 year ago
Eric, when you have a chance, could you record a short video about the new fast-DreamBooth colab interface? I think they changed quite a few things compared to this video; the Dreambooth area looks different. Appreciate it.
@mingeffect · 1 year ago
That does seem to be the case; the GitHub repo sometimes updates with daily changes. I've got that video on the board for production. Thank you for your suggestion.
@lloydkeays7035 · 1 year ago
Hi, I believe the interface has changed quite a bit since your tutorial (which is great). I was able to get it all up and working. But now... how do I simply go back to the Stable Diffusion interface with my trained model directly? I can't seem to find how to do it without retraining the model. Thanks
@mingeffect · 1 year ago
Dreambooth produces a session folder on your Google Drive that contains the trained model. You can then move that model to any other folder (I create a "model" folder, and then point the standard Automatic1111 colab at that model folder).
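On the Drive side, the move described above is a one-liner. A stdlib sketch (the session and model folder layout here is illustrative, not the colab's actual directory structure):

```python
import shutil
from pathlib import Path

def move_model(session_dir, models_dir):
    """Move the first .ckpt found in a Dreambooth session folder into a models folder."""
    models_dir = Path(models_dir)
    models_dir.mkdir(parents=True, exist_ok=True)
    ckpt = next(Path(session_dir).glob("*.ckpt"))  # raises StopIteration if none
    return Path(shutil.move(str(ckpt), models_dir / ckpt.name))
```

You would then point the Automatic1111 colab's model path at that models folder instead of the session folder.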
@eugeniusvision · 1 year ago
@mingeffect With ChatGPT gaining so much buzz, do you think we will soon be able to train an AI in OpenAI the way we do in Stable Diffusion, and create our own avatars there as well? Or can we already do that?
@mingeffect · 1 year ago
I have a feeling it wouldn't take much for that to happen. As of yet, ChatGPT doesn't generate images (or even random numbers, for that matter), but it wouldn't take much for someone to have it generate code that accesses the API and produces images elsewhere. As it is, the chat isn't connected to the internet for pulling data. I have a feeling they don't want to test the waters against Google until they've resolved the appropriate legal issues.
@UnfilteredTrue · 1 year ago
How do you get 169 images of yourself? Do you copy the same image multiple times to basically overtrain the model? I am struggling with getting the face right.
@mingeffect · 1 year ago
Hi, thanks for the question. I have another channel where I am often filmed from different angles in different environments. This is key: different lighting conditions (just enough that the image is still well lit but with different ambient conditions) and different angles. Think of recording yourself the way certain 3D apps do. Start at a low angle looking up at your face and take photos as you circle around yourself side to side. Then set the camera at a higher level and do the same thing, and again, until you eventually get photos at a very high angle around your face/head. Different environments - as well as different clothing - will cause subtle lighting and color reflections on your face and the rest of your body. It's easy to get that number of images with this approach. Plus, the different styles of clothing, with shots close, medium, and far away, will help round out your training images. I hope this gives you a bit more to go on.