ComfyUI: Master Morphing Videos with Plug-and-Play AnimateDiff Workflow (Tutorial)

13,176 views

Abe aTech

A day ago

Push your creative boundaries with ComfyUI using a free plug-and-play workflow! Generate captivating loops, eye-catching intros, and more! This free and powerful tool is perfect for creators of all levels.
Chapters:
00:00 Sample Morphing Videos
01:15 Downloads
02:09 Folder locations
02:14 Workflow Overview
04:10 Generating first Morph
04:40 Running the Workflow
04:47 Quick bonus tips
06:35 Supercharge the Workflow
08:58 Getting more variation in batches
10:31 Scaling up
10:59 Scaling up with model
11:35 This is pretty cool
I'll show you how to make morphing videos and use images to create stunning animations and videos.
You'll also learn how to use text prompts to morph between anything you can imagine!
Plus there are some valuable tips and tricks to streamline the ComfyUI morphing video workflow and save time while creating your own mind-bending visuals.
#########
Links:
########
Workflow: Morpheus (modified workflow for text to image to video):
openart.ai/workflows/abeatech...
Tutorial for Batch Generating Text to Image using external text file:
• ComfyUI: Batch Generat...
Workflow: ipiv's Morph - img2vid AnimateDiff LCM:
civitai.com/models/372584?mod...
Note: See 02:09 in the video for model folder locations (a quick folder-check sketch follows the download list below).
AnimateDiff:
huggingface.co/wangfuyun/Anim...
VAE:
huggingface.co/stabilityai/sd...
AnimateLCM LORA:
huggingface.co/wangfuyun/Anim...
Clip Vision Model ViT-H:
Download and rename to CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors:
huggingface.co/h94/IP-Adapter...
Clip Vision Model ViT-G:
Download and rename to CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors:
huggingface.co/h94/IP-Adapter...
IPAdapter Model:
huggingface.co/h94/IP-Adapter...
ControlNet (QRCode):
huggingface.co/monster-labs/c...
Motion Animations for AnimateDiff: civitai.com/posts/2011230
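For reference, the downloads above usually end up in the following folders of a default ComfyUI install (see 02:09 in the video for the authoritative locations). The sketch below is a minimal sanity check, assuming common file names from the linked pages; adjust the paths and names to whatever you actually downloaded and what the workflow's nodes expect:

```python
# Hypothetical sanity check: confirm the downloaded models sit in the folders
# the workflow expects. Paths and file names are assumptions for a default
# ComfyUI install -- adjust them to your own setup.
from pathlib import Path

COMFY = Path("ComfyUI")

expected = {
    "models/checkpoints":  ["your_sd15_checkpoint.safetensors"],  # any SD 1.5 checkpoint
    "models/vae":          ["vae-ft-mse-840000-ema-pruned.safetensors"],
    "models/loras":        ["AnimateLCM_sd15_t2v_lora.safetensors"],
    "models/clip_vision":  ["CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",
                            "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"],
    "models/ipadapter":    ["ip-adapter-plus_sd15.safetensors"],
    "models/controlnet":   ["control_v1p_sd15_qrcode_monster.safetensors"],
    # AnimateDiff motion models often live under the AnimateDiff custom node instead:
    "custom_nodes/ComfyUI-AnimateDiff-Evolved/models": ["AnimateLCM_sd15_t2v.ckpt"],
}

for folder, files in expected.items():
    for name in files:
        path = COMFY / folder / name
        print(f"{'OK     ' if path.exists() else 'MISSING'} {path}")
```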
################
Music: Bensound.com/royalty-free-music
License code: LU8J6ZAOXHXNOAI4

Comments: 88
@amunlevy2721 · 1 month ago
Getting errors that nodes are missing even though I installed IPAdapter Plus... Missing nodes: IPAdapterBatch and IPAdapterUnifiedLoader.
@ted328 · 1 month ago
Literally the answer to my prayers, have been looking for exactly this for MONTHS
@alessandrogiusti1949 · 1 month ago
After following many tutorials, you are the only one getting me these results in a very clear way. Thank you so much!
@AlvaroFCelis · 1 month ago
Thank you so much! Very clear and organized. Subbed.
@SylvainSangla · 1 month ago
Thanks a lot for sharing this, a very precise and complete guide! 🥰 Cheers from France!
@mcqx4 · 1 month ago
Nice tutorial, thanks!
@abeatech · 1 month ago
Glad it was helpful!
@MSigh · 1 month ago
Excellent! 👍👍👍
@velvetjones8634 · 1 month ago
Very helpful, thanks!
@abeatech · 1 month ago
Glad it was helpful!
@popo-fd3fr · 1 month ago
Thanks man. I just subscribed
@TechWithHabbz · 1 month ago
You're about to blow up, bro. Keep it going. Btw, I was subscriber #48 😁
@abeatech · 1 month ago
Thanks for the sub!
@user-yo8pw8wd3z · 15 days ago
Good video. Where can I find the link to the additional video masks? I don't see it in the description.
@zarone9270 · 1 month ago
thx Abe!
@Injaznito1 · 15 days ago
NICE! I tried it and it works great. Thanks for the tut! Question though: I tried changing the 96 to a larger number so the change between pictures takes a bit longer, but I don't see any difference. Is there something I'm missing? Thanks!
@chinyewcomics · 1 day ago
Hi, does anybody know how to add more images to create a longer video?
@Ai_Gen_mayyit · 20 days ago
Error occurred when executing VHS_LoadVideoPath: module 'cv2' has no attribute 'VideoCapture'
@produccionesvoid · 4 days ago
When I click Install Missing Nodes in the Manager it doesn't work and says: "To apply the installed/updated/disabled/enabled custom node, please RESTART ComfyUI. And refresh browser." What can I do about that?
@Caret-ws1wo · 10 days ago
Hey, my animations come out super blurry and are nowhere near as clear as yours. I can barely make out the monkey; it's just a bunch of moving brown lol. Is there a reason for this?
@SF8008 · 1 month ago
Amazing! Thanks a lot for this!!! By the way, which nodes do I need to disable in order to get back to the original flow (the one that is based only on input images and not on prompts)?
@frankiematassa1689 · 16 days ago
Error occurred when executing IPAdapterBatch: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]). I followed this video exactly and am only using SD 1.5 checkpoints. I cannot find anywhere how to fix this.
@tetianaf5172 · 18 days ago
Hi! I get this error all the time: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm). Though I use an SD 1.5 checkpoint. Please help.
@kwondiddy · 21 days ago
I'm getting errors when trying to run... a few items that say "value not in list: ckpt_name:", "value not in list: lora_name:" and "value not in list: vae_name:". I'm certain I put all the downloads in the correct folders and named everything appropriately... Any thoughts?
@Halfgawd_Halfdevil · 1 month ago
Managed to get this running. It does okay, but I am not seeing much influence from the ControlNet motion video input. Any way to make that more apparent? I've also noticed a Shutterstock overlay near the bottom of the clip; it is translucent but noticeable and kind of ruins everything. Any way to eliminate that artifact?
@Ai_Gen_mayyit · 20 days ago
Error occurred when executing VHS_LoadVideoPath: module 'cv2' has no attribute 'VideoCapture' (your video timestamp: 04:20)
@aslgg8114 · 1 month ago
What should I do to make the reference image persistent?
@gorkemtekdal · 1 month ago
Great video! I want to ask: can we use an init image for this workflow like we do in Deforum? I need the video to start with a specific image on the first frame and then change through the prompts. Do you know how that is possible in ComfyUI / AnimateDiff? Thank you!
@abeatech · 1 month ago
I haven't personally used Deforum, but it sounds like the same concept. This workflow uses 4 init images at different points during the 96 frames to guide the animation. The IPAdapter and ControlNet nodes do most of the heavy lifting, so prompts aren't really needed, but I've used them to fine-tune outputs. I'd encourage you to try it out and see if it gives you the results you're looking for.
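For intuition, here is a rough sketch of the idea described in that reply: four reference images are pinned at evenly spaced points in a 96-frame batch, and each in-between frame crossfades between the two nearest references. This is only an illustration of the concept, not the exact math used by ipiv's mask and IPAdapter batch nodes:

```python
# Conceptual sketch: per-frame blend weights for 4 reference images spread
# across a 96-frame batch. Each frame is a crossfade of its two neighbours.
import numpy as np

TOTAL_FRAMES = 96
NUM_IMAGES = 4
keyframes = np.linspace(0, TOTAL_FRAMES - 1, NUM_IMAGES)  # e.g. [0, 31.7, 63.3, 95]

weights = np.zeros((NUM_IMAGES, TOTAL_FRAMES))
for f in range(TOTAL_FRAMES):
    # find the two keyframes surrounding this frame and crossfade between them
    right = int(np.searchsorted(keyframes, f, side="right"))
    left = max(right - 1, 0)
    right = min(right, NUM_IMAGES - 1)
    if left == right:
        weights[left, f] = 1.0
    else:
        t = (f - keyframes[left]) / (keyframes[right] - keyframes[left])
        weights[left, f] = 1.0 - t
        weights[right, f] = t

# every column sums to 1: each frame blends at most two reference images
print(np.round(weights[:, ::16], 2))
```

Stretching the morph (e.g. raising 96 to a larger batch) only changes where the keyframes land; the crossfade shape stays the same.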
@MACH_SDQ · 12 days ago
Goooooood
@paluruba · 1 month ago
Thank you for this video! Any idea what to do when the videos are blurry?
@jesseybijl2104 · 24 days ago
Same here, any answer?
@efastcruelx7880 · 21 days ago
Why is my generated animation very different from the reference images?
@MichaelL-mq4uw · 1 month ago
Why do you need ControlNet at all? Can it be skipped to morph without any mask?
@BrianDressel · 1 month ago
Excellent walkthrough of this, thanks.
@wagmi614 · 1 month ago
Could one add some kind of IPAdapter to put your own face into the transform?
@ImTheMan725 · 23 days ago
Why can't you morph 20/50 pictures?
@MariusBLid · 1 month ago
Great stuff man! Thank you 😀 What are your specs, btw? I only have 8GB VRAM.
@rowanwhile · 1 month ago
Brilliant video. Thanks so much for sharing your knowledge.
@saundersnp · 1 month ago
I've encountered this error: Error occurred when executing RIFE VFI: Tensor type unknown to einops
@TinyLLMDemos · 19 days ago
Where do I get your input images?
@brockpenner1 · 1 month ago
ComfyUI threw an error in the VRAM Debug node of Frame Interpolation: Error occurred when executing VRAM_Debug: VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'. Any help would be appreciated!
@user-vm1ul3ck6f · 1 month ago
Help! I encountered this error while running it: Error occurred when executing IPAdapterUnifiedLoader: module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
@abeatech · 1 month ago
Sounds like it could be a couple of things: a) you might be trying to use an SDXL checkpoint, in which case try an SD1.5 one; the AnimateDiff model in the workflow only works with SD1.5. Or b) an issue with your IPAdapter node: try making sure the IPAdapter model is downloaded and in the right folder, or reinstall the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub).
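If you are not sure which family a checkpoint belongs to, a quick heuristic (an assumption based on the usual key layouts, not an official ComfyUI API) is to look at the tensor names inside the file: SDXL checkpoints carry a second text encoder under conditioner.embedders.1.*, while SD 1.5 checkpoints use cond_stage_model.*:

```python
# Heuristic check (assumption, not an official API): guess whether a
# .safetensors checkpoint is SD 1.5 or SDXL by inspecting its tensor names.
from safetensors import safe_open

def guess_base_model(path: str) -> str:
    with safe_open(path, framework="pt", device="cpu") as f:
        keys = list(f.keys())
    if any(k.startswith("conditioner.embedders.1.") for k in keys):
        return "SDXL (won't work with this AnimateDiff workflow)"
    if any(k.startswith("cond_stage_model.") for k in keys):
        return "SD 1.x (compatible)"
    return "unknown layout"

# Placeholder path -- point this at your own checkpoint file.
print(guess_base_model("ComfyUI/models/checkpoints/my_checkpoint.safetensors"))
```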
@cabb_ · 1 month ago
ipiv did an incredible job with this workflow! Thanks for the tutorial.
@AlexDisciple · 4 days ago
Thanks for this. Do you know what could be causing this error: Error occurred when executing KSampler: Given groups=1, weight of size [320, 5, 3, 3], expected input[16, 4, 64, 36] to have 5 channels, but got 4 channels instead
@AlexDisciple · 4 days ago
I figured out the problem: I was using the wrong ControlNet. I am having a different issue though, where my initial output is very "noisy", as if there was latent noise all over it. Is it important for the source images to be in the same aspect ratio as the output?
@AlexDisciple · 4 days ago
OK, found the solution here too: I was using a photorealistic model, which somehow the workflow doesn't seem to like. Switching to Juggernaut fixed it.
@ComfyCott · 28 days ago
Dude, I loved this video! You explain things very well, and I love how you explain in detail as you build out strings of nodes! Subbed!
@pro_rock1910 · 1 month ago
❤‍🔥❤‍🔥❤‍🔥
@cohlsendk · 1 month ago
Is there a way to increase the frames/batch size for the FadeMask? Everything over 96 is messing up the FadeMask -.-''
@cohlsendk · 1 month ago
Got it :D
@axxslr8862 · 1 month ago
In my ComfyUI there is no Manager option...... help please.
@ESLCSDivyasagar · 17 days ago
Search on YouTube for how to install it.
@TinyLLMDemos · 19 days ago
How do I kick it off?
@yakiryyy · 1 month ago
Hey! I've managed to get this working but I was under the impression this workflow will animate between the given reference images. The results I get are pretty different from the reference images. Am I wrong in my assumption?
@abeatech · 1 month ago
You're right - it uses the reference images (4 frames vs 96 total frames) as a starting point and generates additional frames, but the results should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation.
@efastcruelx7880 · 21 days ago
@abeatech Is there any way to make the result look more like the reference images?
@devoiddesign · 1 month ago
Hi! Any suggestion for the missing IPAdapter? I am confused because I didn't get a prompt to install or update anything, and I have all of the IPAdapter nodes installed... The process stopped on the "IPAdapter Unified Loader" node.
!!! Exception during processing !!! IPAdapter model not found.
Traceback (most recent call last):
File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/workspace/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 453, in load_models
raise Exception("IPAdapter model not found.")
Exception: IPAdapter model not found.
@tilkitilkitam · 1 month ago
Same problem.
@tilkitilkitam · 1 month ago
ip-adapter_sd15_vit-G.safetensors - install this from the Manager.
@devoiddesign · 1 month ago
@tilkitilkitam Thank you for responding. I already had the model installed, but it was not seeing it. I ended up restarting ComfyUI completely after I updated everything from the Manager, instead of only doing a hard refresh, and that fixed it.
@TheNexusRealm · 28 days ago
Cool, how long did it take you?
@rooqueen6259 · 28 days ago
Has anyone run into the download of 2 new models stopping at 0%? In my case the download of 3 new models got to 9% and no longer continues. What is the problem? :c
@CoqueTornado · 1 month ago
Great tutorial. I am wondering... how much VRAM does this setup need?
@abeatech · 1 month ago
I've heard of people running this successfully on as little as 8GB of VRAM, but you'll probably need to turn off the frame interpolation. You can also try running this in the cloud at OpenArt (but your checkpoint options might be limited): openart.ai/workflows/abeatech/tutorial-morpheus---morphing-videos-using-text-or-images-txt2img2vid/fOrrmsUtKEcBfopPrMXi
@CoqueTornado · 1 month ago
@abeatech Thank you!! Will try the two suggestions! Congrats on the channel!
@Adrianvideoedits · 19 days ago
You didn't explain the most important part, which is how to run the same batch with and without upscaling. It generates new batches every time you queue the prompt, so the preview batch is a waste of time. I like the idea though.
@WalkerW2O · 21 days ago
Hi Abe aTech, very informative and I like your work very much.
@creed4788 · 1 month ago
VRAM required?
@Adrianvideoedits · 19 days ago
16GB for upscaled.
@creed4788 · 19 days ago
@Adrianvideoedits Could you generate the videos first, then close that and load the upscaler to improve the quality? Or does it have to be all together, and it can't be done in 2 separate workflows?
@Adrianvideoedits · 13 days ago
@creed4788 I don't see why not. But upscaling itself takes the most VRAM, so you would have to find an upscaler suited to lower-VRAM cards.
@ErysonRodriguez · 1 month ago
Noob question: why is my output so different from my input?
@ErysonRodriguez · 1 month ago
I mean, the images I loaded produce a different output instead of transitioning.
@abeatech · 1 month ago
The results will not be exactly the same, but they should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation. Also worth double-checking that you have the VAE and LCM LoRA selected in the settings module.
@3djramiclone · 1 month ago
This is not for beginners, put that in the description, mate.
@kaikaikikit · 24 days ago
What are you crying about... go find a beginner class if it's too hard to understand...
@zems_bongo · 5 days ago
I don't understand why it doesn't work for me; I get this type of message:
Error occurred when executing CheckpointLoaderSimple: 'NoneType' object has no attribute 'lower'
File "/home/ubuntu/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/ubuntu/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/ubuntu/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/ubuntu/ComfyUI/nodes.py", line 516, in load_checkpoint
out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
File "/home/ubuntu/ComfyUI/comfy/sd.py", line 446, in load_checkpoint_guess_config
sd = comfy.utils.load_torch_file(ckpt_path)
File "/home/ubuntu/ComfyUI/comfy/utils.py", line 13, in load_torch_file
if ckpt.lower().endswith(".safetensors"):
@miukatou · 8 days ago
I'm sorry, I need help. I'm a complete beginner. I can't find any SD 1.5 model anywhere. Where do I download it? Also, I cannot find the ipadapter folder in my models path. Do I need to create a folder named ipadapter myself? 🥲🥲
@user-vm1ul3ck6f · 1 month ago
Help! I encountered this error while running it:
@user-vm1ul3ck6f · 1 month ago
Error occurred when executing IPAdapterUnifiedLoader: module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
@abeatech · 1 month ago
Sounds like it could be a couple of things: a) you might be trying to use an SDXL checkpoint, in which case try an SD1.5 one; the AnimateDiff model in the workflow only works with SD1.5. Or b) an issue with your IPAdapter node: try making sure the IPAdapter model is downloaded and in the right folder, or reinstall the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub).
@Halfgawd_Halfdevil · 1 month ago
@abeatech It says in the note to install it in the clip_vision folder, but that is not it: none of the preloaded models are there, and the new one installed there does not appear in the dropdown selector. So if it is not that folder, where are you supposed to install it? And if the node is bad, why is it used in the workflow in the first place? Shouldn't it just have the IPAdapter Plus node?