can you create a model from pictures instead of a video?
@dominimatic3674 3 months ago
Wtf, what is this doing on the planet of facts?
@allanbruce9473 3 months ago
Just downloaded what I think was a complete download, since the GitHub owner has made his files read-only. The extraction went OK, except the entire "workspace" folder is missing. Windows 11 did not report any setup files. I have an NVIDIA A3000 (laptop); perhaps this and the DirectX alternative will not run, or some unknown security issue is causing the folder to vanish. There is a .bat file which I ran, and it opens "DeepFaceLive". Is that it, the same as "workspace"?
@tcalleja74 3 months ago
Much more difficult when you use longer videos with way more angles. This is cake in comparison.
@ElDespertar 3 months ago
Thanks a million for this great tutorial. Eternally grateful!
@lenimann 3 months ago
The part where I should enter the Hugging Face token does not appear.
@innocentbystander6674 3 months ago
You don't need to do it anymore. Just skip that step.
@thatonegamer117 1 month ago
@@innocentbystander6674 It doesn't work if you skip that step.
@giokaios 3 months ago
Does this still work? V5 or v
@emrey2252 3 months ago
awesome. thank you <3
@ugurergun9816 4 months ago
How do I set up another face to swap with the face I want to replace in the aligned folder?
@achimschnick4890 4 months ago
Really cool video and very well described. I wanted to bring a still image to life, but it doesn't seem to work. I don't see any progress toward making the image look recognizable in the preview window, neither in SAEHD nor in Quick96 mode. But I will do more training later. ;-) ----- EDIT: To avoid misunderstanding: I turned the still image into an MP4 video with about 300 frames. The destination video had about the same number of frames. I also tried swapping dst and src with each other.
@user-hb7dh2hc9p 4 months ago
thank you for making this video. I wish I could find another video that discusses what each specific setting does in an advanced situation.
@thomasdavid-up3dq 4 months ago
I love the video, thank you, but please, how can I create my own models and use them in DeepFaceLive? Please teach me.
@Newinceptions 4 months ago
Great video! However, I'm running into a wall at the training section. When I run it, it says something went wrong after 40 seconds. (My goal is to make AI-created images of my wife and me, and then eventually others. So if there's a newer free option, I'm open to it!) Here's where it gets to in the execution:

Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/diffusers/examples/dreambooth/train_dreambooth.py', '--image_captions_filename', '--train_only_unet', '--save_starting_step=500', '--save_n_steps=0', '--Session_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jcpdb', '--pretrained_model_name_or_path=/content/stable-diffusion-custom', '--instance_data_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jcpdb/instance_images', '--output_dir=/content/models/jcpdb', '--captions_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jcpdb/captions', '--instance_prompt=', '--seed=565794', '--resolution=960', '--mixed_precision=fp16', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--gradient_checkpointing', '--use_8bit_adam', '--learning_rate=2e-06', '--lr_scheduler=linear', '--lr_warmup_steps=0', '--max_train_steps=3000']' returned non-zero exit status 1.
Something went wrong

Some potential issues:
- I'm following along via a newer version of fast-DreamBooth.ipynb; there wasn't a place to put in the token from Hugging Face.
- I'm doing this on a Mac and didn't have Stable Diffusion installed yet. (I'm using this tutorial to install it if I should have it: uxplanet.org/install-stable-diffusion-ui-on-mac-beginners-guide-351e40a9e8e2)
- Training Image size - I
- The runtime: I need to be connected to a hosted runtime for Colab. The only GPU I have available is the T4 GPU; when running these steps, I assume I select that one.
- My image files are 1000x1000px. Maybe that's causing an issue?
Thanks if you can help!
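One checkable detail in the comment above: the command passes --resolution=960 while the source photos are 1000x1000px. The training script may well handle resizing internally, so this is unlikely to be the crash by itself, but pre-resizing the instance images removes one variable. A minimal sketch using Pillow; the folder path, file extensions, and the 960px target are assumptions taken from that command line, not from the video:

```python
# Hypothetical helper: pre-resize DreamBooth instance images so they
# match the --resolution value passed to the training script.
from pathlib import Path

from PIL import Image


def resize_images(src_dir: str, size: int = 960) -> int:
    """Resize every image in src_dir to size x size in place; return how many."""
    count = 0
    for path in Path(src_dir).iterdir():
        if path.suffix.lower() in {".jpg", ".jpeg", ".png"}:
            with Image.open(path) as im:
                out = im.convert("RGB").resize((size, size), Image.LANCZOS)
            out.save(path)  # overwrite after the source handle is closed
            count += 1
    return count
```

Note that a plain resize distorts non-square originals; since the files here are already square 1000x1000, that is not a concern, but otherwise a center crop first would be safer.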
@Dogboy73 4 months ago
How do you put in your own videos and face images? All the tutorials I've watched show the included 'Robert Downey' files being used. I'm confused! :-/
@DigitalDazYT 4 months ago
Hey, at 3:09 I show exactly how to do this.
@Dogboy73 4 months ago
@@DigitalDazYT Cheers. I eventually figured it out 👍
@mahler666 5 months ago
Thanks a lot, indeed fascinating!! What if we want to face-swap only one of the characters in the movie? Can we replace Elon's face with a few pictures of other individuals' faces?
@user-cd4fb7nb9o 5 months ago
Just click model download at that step, since there is no need to get a Hugging Face token.
@user-cd4fb7nb9o 5 months ago
Never mind, my ckpt model didn't even load in Stable Diffusion.
@jlaltamrino2912 5 months ago
Very nice. How do I install Python?
@user-jf5uv9ir5k 5 months ago
What does the data sort .bat do?
@phily8020 5 months ago
How did the Tom Cruise deepfake achieve such incredible quality? I think they trained on tonnes of different video data. How do you do that?
@PhantomLord1235 5 months ago
Thank you for explaining it in a clear way.
@AIEntusiast_ 6 months ago
Can you do this with a source video, but instead of a destination video you just have pictures?
@Music_Light_Show 6 months ago
None of the download links are working for me.
@DigitalDazYT 6 months ago
Hi. Not quite sure what you mean. There is only one link in the description, and that's for the GitHub page. The download links on there are working fine.
@Music_Light_Show 6 months ago
@@DigitalDazYT Thanks, I managed to get one of the links to work. Can you use this to swap faces in images, or is it just for video?
@silver2697 7 months ago
Version 5.2 can give us the seed number just by clicking Apps in the upper right, then DM Results. Great video btw, keep it up buddy.
@johnbrennick8738 7 months ago
No envelope symbol in my version! :-(
@DigitalDazYT 7 months ago
You need to do what I showed in the video to get the envelope. There is only one version of Midjourney.
@catwithlonghair1850 7 months ago
fantastic! thank you so much!!
@DigitalDazYT 7 months ago
Glad it was helpful!
@krystiankowalewski9344 8 months ago
Can we save at the target iteration and continue later?
@user-gh6bk8el3p 8 months ago
perfect tutorial
@carolynluna-pu9lj 8 months ago
Thanks for the tutorial. I'm using a GTX 1060 and I don't know why my training window won't open after running the SAEHD trainer.
@DigitalDazYT 7 months ago
If you head over to the DeepFaceLab GitHub page they can help you with software issues.
@obaroleonard5721 9 months ago
I am getting an error when training SAEHD: "ImportError: DLL load failed: The paging file is too small for this operation to complete." How can it be fixed?
@DigitalDazYT 7 months ago
If you head over to the DeepFaceLab GitHub page they can help you with software issues.
@user-re3sf2gi1m 9 months ago
🎯 Key Takeaways for quick navigation:
00:00 🎨 Use Midjourney to create consistent characters for comic books, video games, or any project.
00:27 📝 Use tags and prompts to generate desired character variations in Midjourney.
01:24 🔗 In Midjourney 4, upscale the image and use the envelope icon to get the seed information for the character. In Midjourney 5, use the envelope reaction on the grid of four images.
02:46 ✍️ Write a formula in the prompt to instruct Midjourney to use a specific character with the desired seed.
04:01 ➡️ Change the character's actions, locations, and appearance by modifying the prompt and using different tags.
05:44 🙏 Thank you to viewers and subscribers for the support and engagement on the channel.
Made with HARPA AI
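To make the 02:46 step concrete, here is a hypothetical example of that prompt formula; the character description and seed number are invented for illustration, not taken from the video:

```text
full-length portrait of a young detective with short red hair and a green coat, comic book style --seed 3724150068
the same detective riding a train at night, comic book style --seed 3724150068
```

Reusing the same --seed value alongside a consistent description is what nudges Midjourney back toward the same character across prompts.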
@jriker1 9 months ago
The part that's missing, and it's a big one, is the SRC. When you extract faces, how many people have a src video with only one face in it? After extracting the aligned faces from the SRC and DST, you need to remove any that aren't the faces you are targeting.
@DigitalDazYT 7 months ago
It's not missing. It's common sense. You obviously only want to use a video showing your source face.
@itzme883 9 months ago
How do you do it if the video has multiple people in frame?
@DigitalDazYT 7 months ago
You would need to edit out the other faces.
@Itgyrl909 9 months ago
Excellent video! Would love to see a Photoshop crash-course tutorial for beginners if you're up to it!
@Itgyrl909 9 months ago
You do a great job with your video presentations! No fluff, long intros, or bs weird humor (which I find cringe, btw!). It's quite refreshing to have someone get down to the basics and straight to the point! These are the types of videos and media I lean towards in general. Subscribed!
@DigitalDazYT 7 months ago
Thank you. I try to make videos that I would watch myself. :)
@36547 9 months ago
Very clear. The only thing I'm unclear on: during training, do we have to continuously press P so the graph changes, or can we just leave training on for hours? (Currently my iteration count is 1 and I've left it on for 10 minutes.)
@OlaJunior-jo1kh 9 months ago
Please, I can't find the P (update) key... What's the alternative?
@spiffnoblade7731 8 months ago
You can leave it going for hours. Pressing P refreshes the images and the count displayed in the preview so you can view its progress.
@DigitalDazYT 7 months ago
Pressing P just refreshes the preview so you can see how it's going. No need to keep pressing it.
@secretforge 9 months ago
Something to try for "full body" would be to change your AR to be taller. Your widescreen AR is what is causing the AI to do half body. If you changed just your AR and put (full body photo) in parentheses, you might generate the full thing. Then you can outpaint if you want the image back to a wide 16:9. Just an idea.
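A hypothetical prompt along those lines; the subject is invented, and --ar 2:3 requests a taller, portrait-orientation frame instead of the 16:9 widescreen the comment describes:

```text
(full body photo) of a wandering knight in silver armor, standing in a wheat field --ar 2:3
```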
@FlexinVR 9 months ago
Amazing. I'm stoked. Worked like a charm using only my i5 11th Gen CPU! No more paying $20 for one video to these scammy websites 👏🥂🤝
@jomapelspirit 9 months ago
What's the name and model of your laptop, bro?
@NakedMatrix 9 months ago
@@jomapelspirit It's a PC I built. The only specs are an Intel i5 11th Gen CPU with 16GB of memory. No GPU.
@jomapelspirit 9 months ago
@@NakedMatrix And it handles DeepFaceLab that well?
@val_nightlily 9 months ago
Thanks! This is refreshingly clear.
@qbitsday3438 9 months ago
How do you extract a specific face from a video where there are many faces and replace a face in another video?
@DigitalDazYT 7 months ago
You would need to delete any faces you don't want changed.
@qbitsday3438 7 months ago
@@DigitalDazYT Great, understood. Thank you.
@MikeWilliams-uh8ii 9 months ago
Thanks, this was a great intro video to get started and hit the ground running!
@cyfisher8679 9 months ago
Awesome work.
@krishnapushpak8101 9 months ago
What are the prerequisites before using DeepFaceLab? Do I need to install Python and TensorFlow? Any other prerequisites?
@DigitalDazYT 9 months ago
Hello. It's all self-contained. Nothing else is needed other than the main files that download.
@user-bg7yx9pr4q 7 months ago
@@DigitalDazYT Hey. Can I save my progress, just in case I need to shut down my PC?
@patience99 9 months ago
This was a clear explanation; I wish more people would explain like this, with more specific information. The only problem for me is that you have to do more of the deepfake processing and editing, and it takes more time and patience just to get the best quality. It's really hard. Thank you!
@EEE2022 10 months ago
Could you please explain how to select a specific face in a multi-face video (source and dest)?
@DigitalDazYT 9 months ago
You will need to delete the faces in the dataset that you wish to remain the same.
@mattduncan5500 10 months ago
How do you keep the exact outfit, though? Say you're trying to make an AI photoshoot; you want the images to look like they're from the same place.
@DigitalDazYT 9 months ago
AI isn't capable of that right now. The only way to do that would be with something like Stable Diffusion and training a LoRA file on the outfit.
@mattduncan5500 9 months ago
@@DigitalDazYT Ah, dang. Yeah, I have been struggling!! I can get some similar shots but it takes so much time.
@DailyLifeBasics 10 months ago
How about the same outfit, same face, but different scenes and poses?
@DigitalDazYT 10 months ago
For the same outfit, you would have to be as specific as possible about the outfit. For example, "a red dress" isn't specific enough. However, if you said "a red lace thigh-length dress with a brown leather belt around the waist", it will do a better job with the same outfit. Remember this is AI, and no AI is 100% consistent yet, so we have to do what we can to get as close as possible. The best for consistency is Stable Diffusion with a DreamBooth-trained model; Midjourney requires a little more work.
@codrut_sere2018 10 months ago
One of the only good tutorials on this matter, nice work!
@jimajames 10 months ago
Thank you. This was very helpful.
@wagmi614 10 months ago
Can you make one for the After Detailer extension as well?