ComfyUI AnimateDiff Prompt Travel: ControlNets and Video to Video!!!

25,139 views

c0nsumption · 9 months ago

This is a fast introduction to ‪@Inner-Reflections-AI‬'s workflow for AnimateDiff-powered video-to-video using ControlNet.
You can download the ControlNet models here:
huggingface.co/lllyasviel/Con...
The workflow file can be downloaded from here:
drive.google.com/file/d/14F6a...
The model (checkpoint) used for this tutorial series is here:
civitai.com/models/134442/hel...
The VAE used can be downloaded from:
huggingface.co/AIARTCHAN/aich...
The motion_modules and motion_loras can be found on the original AnimateDiff repo, which offers several sources to download them from:
github.com/guoyww/AnimateDiff
Or here's a quick link to civitai:
civitai.com/models/108836/ani...
civitai.com/models/153022
Socials:
x.com/c0nsumption_
/ consumeem
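The "prompt travel" idea the video demonstrates is keyframed prompts: a schedule maps frame indices to prompts, and the generation moves between them over time. The sketch below is illustrative only; the schedule text mimics the style used by BatchPromptSchedule-type nodes, but the parser and the hold-until-next-keyframe lookup are hypothetical stand-ins, not the node's actual code (the real node also interpolates conditioning between keyframes).

```python
# Hypothetical sketch of prompt-travel keyframes (not the real node code).
schedule_text = '''
"0":  "lush green forest, summer",
"48": "golden autumn leaves",
"96": "wintery storm, heavy snow"
'''

def parse_schedule(text):
    """Parse '"frame": "prompt",' lines into a sorted {frame: prompt} dict."""
    keyframes = {}
    for line in text.strip().splitlines():
        frame_part, prompt_part = line.split(":", 1)
        frame = int(frame_part.strip().strip('"'))
        keyframes[frame] = prompt_part.strip().rstrip(",").strip('"')
    return dict(sorted(keyframes.items()))

def prompt_at(keyframes, frame):
    """Return the most recent keyframed prompt at or before `frame`."""
    active = [f for f in keyframes if f <= frame]
    return keyframes[max(active)] if active else None

kf = parse_schedule(schedule_text)
print(prompt_at(kf, 60))  # the autumn prompt is the active keyframe at frame 60
```

This is why the demo clip drifts from lush greenery to a winter storm: each keyframe takes over at its frame index.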

Comments: 182
@yoyo2k149 · 9 months ago
Tested on an AMD RX 6800 XT (Ubuntu 22.04 + ROCm 5.7). It works flawlessly and stays close to 12 GB of VRAM. Really helpful, thanks a lot.
@c0nsumption · 9 months ago
Awesome. Will pin this for others. Mind giving a short guide on the r/animatediff subreddit? :)
@miaoa7414 · 9 months ago
@c0nsumption When loading the graph, the following node types were not found: BatchPromptSchedule. Nodes that have failed to load will show as red on the graph. 😭
@yoyo2k149 · 9 months ago
@c0nsumption I will try to post a small guide before the end of the weekend. :)
@Andro-Meta · 9 months ago
Converting the pretext, and seeing how to do that, completely blew my mind and opened doors to understanding what I could do. Thank you.
@victorhansson3410 · 9 months ago
Damn, glad I saw your channel recommended on Reddit. Fantastic video: calm, concise and well made!
@c0nsumption · 9 months ago
Thanks dude 🙏🏽 Happy to help elevate and educate the community.
@ronnykhalil · 9 months ago
Yea baby (edit: this is straight up the most valuable 10 minutes I've watched on KZfaq in a while, exactly the signal I needed amidst all the noise regarding Comfy and diff). You explained it really well and clearly. Thank ye kindly!
@SkyOrtizCreative · 9 months ago
Love your vids bro!!! I know it takes a lot of work to make these; really appreciate your efforts. 🙌
@c0nsumption · 9 months ago
Thanks for understanding 🧍🏽‍♂️ Legit takes so much time 😣 lol
@user-sk2mk2wp9e · 9 months ago
Awesome! Thank you very much for seeing it through all the way! It's a pity that I ran across this right before going to bed and have to wait until tomorrow to practice.
@Copperpot5 · 9 months ago
Nice job on these lately. In general I have a hard time watching video tutorials with people on screen talking, but you're hitting all the right notes on these so far. I haven't wanted to bother with Comfy, but I have definitely admired the generations some have been sharing. Thanks for making well-timed, friendly tutorials. Stick with it and you'll definitely build a good, active channel. Thanks!
@c0nsumption · 9 months ago
Thanks for the positivity hey 👏🏽
@JaredVBrown · 5 months ago
Very helpful and approachable tutorial. Thanks!
@keagoaki · 8 months ago
Straight to the point and clear, easy to follow, and no music is perfect: I can choose my own background if needed. Thanks a lot, you just made me a fortune haha
@calvinherbst304 · 6 months ago
Thank you. Excellent tutorial :) Keep them coming, subbed!
@58gpr · 9 months ago
I was waiting for this one! Thanks mate & keep 'em coming :)
@c0nsumption · 9 months ago
No worries 😉 Figured it'd be a quick way to introduce ControlNets but still give a lot of y'all what you're waiting for 🧍🏽‍♂️
@aminshallwani9369 · 9 months ago
Thanks for the video, very helpful. Well done 😍
@yuradanilov5244 · 9 months ago
Thanks for the tutorial, man! 🙌
@Inner-Reflections-AI · 9 months ago
Nicely done!
@c0nsumption · 9 months ago
Everyone, this is the original creator of this workflow. Amazing artist/creative. Please follow them! 🙏🏽
@edkenndy · 9 months ago
Awesome! Thanks for sharing the resources.
@c0nsumption · 9 months ago
Trying to get everyone up to speed on all the amazing workflows available 🙏🏽
@mikberg1824 · 7 months ago
Really good tutorial, thank you!
@banzai316 · 9 months ago
Good work! Thanks! 👏
@francaleu7777 · 9 months ago
Perfect tutorial! Thanks a lot!
@haydnmann · 9 months ago
This is sick, nice work dude. Subbed.
@wholeness · 9 months ago
Bro, we're on this journey together. Keep going!
@colaaluk · 5 months ago
Great video
@digidope · 9 months ago
Thanks! Straight to the point!
@c0nsumption · 9 months ago
Yes indeed. Hard to keep it that way with such complex topics, but I'm trying!
@Ekopop · 8 months ago
That, my friend, is a very nice video. Thanks a lot, I'll follow your stuff.
@LearningVikas · 4 months ago
Thanks, worked finally ❤❤
@UON · 9 months ago
Exciting! I hope this helps me figure out how to do a much longer vid2vid without running out of VRAM.
@c0nsumption · 9 months ago
I mention a note on VRAM: you can lower the image size to a smaller resolution and then upscale later. How much VRAM do you have? Have you considered using RunPod? They have a preset ComfyUI template.
@leretah · 9 months ago
Awesome, thank you. I really appreciate it.
@c0nsumption · 9 months ago
No worries. More on the way. Just super busy with work, sorry 🙏🏽
@samshan9321 · 8 months ago
Really helpful tutorial, thx
@Elliryk_ · 9 months ago
Great video, my friend!! Elliryk 😉
@c0nsumption · 9 months ago
Ahhhhhhhhhh shiiiii 🧍🏽‍♂️ Enjoy the video, my guy. Excited to see what you cook up 🍳🥘⏲️
@TheJPinder · 5 months ago
Good stuff
@DefinitelyNotMike · 8 months ago
This is so fucking cool and it worked with no issues! Thanks!
@Spajra-music · 9 months ago
Crushing bro
@danielvgl · 5 months ago
Great!!!
@ekke7995 · 8 months ago
This is it!!
@victorvaltchev42 · 9 months ago
Top!
@nelson5298 · 8 months ago
Thanks for sharing; I really learned a lot. Quick question: how do I change the model's clothes and keep the new clothes consistent? I type in "sweater", but some frames will change the sweater into a tank top...
@MrPlasmo · 9 months ago
Very helpful as always, thanks. Is there a way to make a "preview" video frame node so that you can view the progress of the render before it is completed? That way one could cancel the render if it looks terrible or isn't going the way you want, without wasting render time. This was one of the nice things about Deforum that saved me a lot of time.
@lovisodin8658 · 9 months ago
Just use a fixed seed, and in the "Load Video (Upload)" node change "select_every_nth", to 20 for example, if you want a 6-image preview.
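The quick-preview trick above is plain frame subsampling: keeping every nth frame gives a cheap low-frame-count test render on the same seed. A minimal sketch (the frame counts are hypothetical examples, not values from the video):

```python
# "select_every_nth" keeps frames 0, n, 2n, ... of the input clip.
def preview_frame_indices(total_frames, every_nth):
    """Indices of the frames a subsampled preview render would process."""
    return list(range(0, total_frames, every_nth))

frames = preview_frame_indices(120, 20)  # 120-frame clip, every 20th frame
print(len(frames), frames)               # 6 preview frames
```

Once the preview looks right, set select_every_nth back to 1 and re-run with the same fixed seed for the full render.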
@aoi_andorid · 9 months ago
This video will help many creators. Please set up a place where we can pay for a coffee.
@c0nsumption · 9 months ago
🥹 Will set up soon. I love y'all. Thanks for all the love 🙏🏽 I set up a Patreon, will be sharing soon. Also considering setting up subscriptions on X.
@alishkaBey · 7 months ago
Great tutorial bro! Could you make a video about morphing videos with IPAdapters?
@ywueeee · 9 months ago
Cool vid, you might be the best AnimateDiff channel now. What's coming next?
@c0nsumption · 9 months ago
IPAdapter, ControlNet keyframes, frame interpolation, refiner and upscaling, amongst others! Also Hotshot-XL tutorials. Thanks btw, I appreciate ya.
@ywueeee · 9 months ago
@c0nsumption 3- or 5-image interpolation, as in with start and end frames, please
@ucyuzaltms9324 · 9 months ago
I love the output
@BrandonFoy · 9 months ago
Whoa! This is awesome, thanks for sharing your workflow. I haven't used ComfyUI, just been in A1111. Can you recommend tutorials for Comfy? Or any you've made that'll be a solid start for learning this method? Thank you!!
@c0nsumption · 9 months ago
This one by me is a great way to get started; it's part of the playlist this current video is in: kzfaq.info/get/bejne/ia2ZqdyVxqjOYqs.html?si=MDwuANfnq6W_Wzul Also, this actually isn't my workflow: it's the work of @Inner-Reflections-AI here on KZfaq! I did make some modifications, though, to make things a bit easier :)
@BrandonFoy · 9 months ago
@c0nsumption oh man, thank you so much!!!!! 🙌🏾🙌🏾🙌🏾
@BrandonFoy · 9 months ago
@c0nsumption yeah, this is exactly what I'm looking for!! Awesome, thanks again!
@leandrogoethals6599 · 4 months ago
Nice tutorial. Have you found a way to upload a 3-minute video in one piece into the VHS Load Video node?
@zweiche · 9 months ago
I really appreciate this guide; it will help me a lot! However, I have one problem maybe you can help me with. I have done everything right: I see frames from the video and I see the ControlNet output with lines. However, after the KSampler my GIF and image outputs are all black screens. What do you think my problem could be?
@kaleabspica8437 · 4 months ago
What do I have to do if I want to change the look of it? Since yours is closer to anime style, I want to make it realism or sci-fi, etc.
@risewithgrace · 9 months ago
Thanks! I downloaded this workflow, but the output only has formats for image/gif or image/webp, even though I am inputting video. There is no video/h264 setting in the dropdown. Any idea how I can add that?
@c0nsumption · 9 months ago
Replace the output node with the "VHS Video Combine" node. You can double-click in the interface and search for it.
@benjaminbardouparis · 9 months ago
Wow. Huge thanks for this! Is it possible to use an SDXL model for generating a painting style? I'd like to use this one and I don't know if it's possible with your workflow. Btw, many thanks!!
@c0nsumption · 9 months ago
You can use Hotshot-XL: civitai.com/articles/2601/guide-comfyui-sdxl-animation-guide-using-hotshot-xl-an-inner-reflections-guide
@benjaminbardouparis · 9 months ago
Thanks!
@eraniopetruska5701 · 8 months ago
@benjaminbardouparis Hi! Did you manage to get it running?
@GamingDaveUK · 9 months ago
Nice, may have to try this after work. Is it the same process if you want to use more up-to-date models? (Can't go back to 1.5 after using SDXL lol)
@c0nsumption · 9 months ago
I've tested the Hotshot-XL workflow. Currently SD 1.5 is doing a lot better. But InnerReflections is creating some magnificent pieces using it and is supposedly about to share his workflow 🧍🏽‍♂️
@victorvaltchev42 · 9 months ago
What was the size of the video in the end? You showed 1024×576 at the beginning; is that the resolution at the end as well? Also, how do you load other formats of video? I only have webp and gif.
@c0nsumption · 9 months ago
Yes, that's what dictates the output resolution. Have upscaling coming up soon, but I have two jobs so very limited time!
@victorvaltchev42 · 9 months ago
@c0nsumption Great content, man! Thanks for the answer! I was a long-time Automatic1111 user, but after the past weeks with the advances of AnimateDiff in ComfyUI I'm definitely switching!
@l1far · 9 months ago
I use RunDiffusion and can't load your workflow JSON :( Can you upload the pic too? Maybe that can fix it.
@OffTheHorizon · 3 months ago
I'm using KSampler, but it takes 9 minutes for 1 of the 25 samples, which is obviously extremely slow. I'm working on a MacBook M1 Max; do you have any tips on making it quicker?
@JimDiMeo · 9 months ago
Hey man, love the tutorials!! Where do you add different video creation formats? I only have gif and webp. Thx
@c0nsumption · 9 months ago
🤔 There should be more. Search for the VHS Video Combine node in your ComfyUI and try that.
@JimDiMeo · 9 months ago
@c0nsumption yes! Found that last night. Thx for the reply though.
@vtchiew5937 · 9 months ago
Thanks! Got it working after a few tries, but I realize the prompts are not really working (at least I don't see them "travelling"); it seems the whole prompt is taken into consideration instead. Do you have similar issues? I see that the default workflow has 4 prompts, and in your generated video at least it traveled from green lush to wintery storm, whereas mine always started with wintery storm and remained like that throughout the video.
@c0nsumption · 9 months ago
Depends on various factors: keyframe distance, seed, CFG, sampler, inputs, etc. That's the artistic process, my friend; fiddle with it all. This was a quick output to get everyone involved. I'm just really busy testing all the new tech, working, and trying to formulate constructive tutorials for everyone to tag along.
@vtchiew5937 · 9 months ago
@c0nsumption thanks for the reply bro, been fiddling with it since then, great tutorial~
@Csarmedia · 9 months ago
The EbSynth of ComfyUI
@c0nsumption · 9 months ago
Honestly better than EbSynth, because it works on every frame. The only reason this changes is because of prompt travel; otherwise the first scene would have stayed 👍🏽
@norvsta · 9 months ago
@c0nsumption so cool. I faffed around for a couple of days trying to install Win 10 just to run EbSynth; now I don't have to bother. Thanks for the tut 🙌
@SuperDao · 9 months ago
Can you make a tutorial on how to upscale the render?
@Csarmedia · 9 months ago
The workflow file is giving me an error: TypeError: Cannot read properties of undefined (reading '0')
@Beedji · 9 months ago
Hey man, great tutorial! I have an error message that pops up, however. It says "Control type ControlNet may not support required features for sliding context window; use Control objects from Kosinkadink/Advanced-ControlNet nodes.", which is weird since I have Kosinkadink's model installed. Have you experienced this error as well?
@Beedji · 9 months ago
OK, I think I've found the problem. I wasn't using the same VAE as you (I was using an SD1.5 pruned one), and now that I've installed the same one as you (Berrysmix) it seems to work. No idea what difference this makes, but we'll see! haha
@mulleralmeida4844 · 3 months ago
Starting to learn ComfyUI. When I click on Queue Prompt, my computer takes a long time to process the KSampler node. I'm using a MacBook Pro 14 M2 Pro; is it normal for it to take so long?
@aaronv2photography · 9 months ago
You made a video (I think) about unlimited-length AnimateDiff animations. How would we incorporate that into this workflow so we can go past the 120-frame limit?
@c0nsumption · 9 months ago
I would imagine you just add in more than 120 frames and increase the max frames on the "BatchPromptSchedule" node past 120. If you don't include enough frames, I'm assuming the generation will just continue prompts from the point of missed frames, but who knows 🤷🏽‍♂️ Test it out; you'll probably make some cool stuff.
@voytakaleta · 9 months ago
Awesome! I have one question: how can I install / connect ffmpeg to ComfyUI? I get this error: "[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled". Thank you very much!
@JMcGrath · 9 months ago
I have the same issue
@voytakaleta · 9 months ago
@JMcGrath kzfaq.info/get/bejne/p7mcq9lnnb7Um6s.html
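The warning in that thread means the ffmpeg executable is not on the system PATH. A quick way to check from Python; `shutil.which` mirrors the kind of PATH lookup such node packs do, and the install commands in the comments are the usual platform defaults, not steps taken from the video:

```python
import shutil

# Look ffmpeg up on PATH, the same way a shell would resolve the command.
ffmpeg_path = shutil.which("ffmpeg")
if ffmpeg_path is None:
    # Typical fixes: `sudo apt install ffmpeg` (Debian/Ubuntu),
    # `brew install ffmpeg` (macOS), or download a Windows build
    # and add its bin folder to PATH, then restart ComfyUI.
    print("ffmpeg not found on PATH; video outputs will be disabled")
else:
    print(f"ffmpeg found at {ffmpeg_path}")
```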
@Spajra-music · 9 months ago
Followed this all the way through, and at the end my video output was just black. Any suggestions?
@luclaura1308 · 9 months ago
How would you go about adding a LoRA (not a motion one) to this workflow? I tried adding a Load LoRA after Load Checkpoint, but I'm getting black images.
@c0nsumption · 9 months ago
This tutorial: kzfaq.info/get/bejne/e9KdrKyk27PGnHk.html?si=Kk_dWXxGELq-Kemy
@luclaura1308 · 9 months ago
@c0nsumption Thanks!
@jiananlin · 9 months ago
How do I apply more than one ControlNet?
@RenoRivsan · 5 months ago
Can you show how to remove AnimateDiff from this workflow? I don't want my video to change style.
@itsjaysenofficial · 9 months ago
Will it work on a MacBook Pro M1 with 16 GB of RAM?
@hatakeventus · 9 months ago
Does this work with an AMD RX 6700?
@looneyideas · 9 months ago
Can you use RunPod or does it have to be local?
@c0nsumption · 9 months ago
RunPod has a ComfyUI template
@user-hb6dd9iu9g · 9 months ago
Thank you for this tutorial! I'm using the Colab version and I get totally black result pictures and video; could you give me a hint how I can fix it? Thanks. But most of the time I get this issue: "SD model must be either SD1.5-based for AnimateDiff or SDXL-based for HotShotXL". Need help... =\
@c0nsumption · 9 months ago
Are you using an SDXL model or SD1.5? Other models don't work for AnimateDiff/Hotshot. Can you let me know what model you are using and I'll do some research.
@DimiArt · 4 months ago
Weird, I'm getting preview images from the upscaler node and the lineart images from the ControlNet, but I'm not getting any actual output results.
@DimiArt · 4 months ago
OK, I realized my checkpoint and my VAE were set to the ones in the downloaded workflow, and I had to set them to the ones I actually had downloaded instead. My bad.
@kaleabspica8437 · 4 months ago
Do you know how to change the look of it?
@DimiArt · 4 months ago
@kaleabspica8437 Change the look of what?
@speaktruthtopower3222 · 9 months ago
Is there a way to point to different directories so we don't have to re-download models, LoRAs and other files?
@c0nsumption · 9 months ago
I use ComfyUI as my base for other repos, so I'm not sure. But try here: github.com/comfyanonymous/ComfyUI/discussions/72
@speaktruthtopower3222 · 9 months ago
@c0nsumption I figured it out: just change the root directory and point it to your SD install in the "extra_model_paths.yaml" file.
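The fix described above works by editing the extra_model_paths.yaml file in the ComfyUI folder so ComfyUI reads models from an existing install instead of duplicating them. The sketch below just writes an example of that file; the section name and keys follow the template ComfyUI ships as extra_model_paths.yaml.example, but the base_path shown is a made-up example, so adjust it to your own install:

```python
# Sketch: write an example extra_model_paths.yaml pointing ComfyUI at an
# existing A1111-style install. The base_path below is hypothetical.
from pathlib import Path

yaml_text = """a111:
    base_path: D:/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    controlnet: models/ControlNet
"""

out = Path("extra_model_paths.yaml")
out.write_text(yaml_text)
print(out.read_text().splitlines()[0])  # first line names the section
```

After saving the real file into the ComfyUI root, restart ComfyUI so the extra paths are picked up.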
@El__ANTI · 9 months ago
Error occurred when executing CheckpointLoaderSimpleWithNoiseSelect ...
@antonradacic2374 · 9 months ago
I've set everything up, but for some reason I get an error at the KSampler step: "Error occurred when executing KSampler: 'ModuleList' object has no attribute '1'"
@c0nsumption · 9 months ago
DM me on Twitter the actual error message and a screenshot of the nodes; it's too vague to answer. Either that or post on the r/animatediff subreddit.
@AI-nsanity · 9 months ago
I don't have the option for mp4 output; do you have any idea why?
@c0nsumption · 9 months ago
Change the output node to VHS Video Combine. I believe that solves it.
@philspitlerSF · 4 months ago
I don't see a link to download the workflow
@saiya3725 · 9 months ago
Hey, when I drag from the pre-text input I'm not getting the ttN text node option. What am I missing?
@saiya3725 · 9 months ago
I installed tinyterra and got it
@c0nsumption · 9 months ago
@saiya3725 👍🏽 Good job figuring it out
@VJSharpeyes · 9 months ago
The "realistic lineart" node is always missing when loading your CSV. Any tips on what I could have missed? I am warned about "LineArtPreprocessor" missing, and then in the install manager I only see Fannovel16's, which is already installed.
@VJSharpeyes · 9 months ago
Oh, hang on. There is an abandoned repo that looks like it contains it.
@jorgecucalonf · 9 months ago
Same issue here. The console gives me this: (IMPORT FAILED): C:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux
@jorgecucalonf · 9 months ago
Managed to get it working by reverting the comfyui_controlnet_aux folder to an older commit. Otherwise we must wait for the owner of the repository to update it with a fix.
@jorgecucalonf · 9 months ago
That was quick. It's fixed now :D
@c0nsumption · 9 months ago
Good job getting it working. If I have some spare time today or this week, I'll try to research it.
@pauliuscreative · 9 months ago
My original input video was 7 seconds and the output video I got is slower, at 12 seconds. Do you know why?
@c0nsumption · 9 months ago
Check your output frame rate.
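The slowdown in that thread is plain frame-rate arithmetic: the output node writes the same frames at its own fps, so a lower output rate stretches the clip. The numbers below are hypothetical but reproduce the 7-second-in, 12-second-out symptom:

```python
# duration_seconds = frame_count / fps; changing output fps changes duration.
frame_count = 168                # e.g. a 7 s clip captured at 24 fps
input_fps, output_fps = 24, 14

print(frame_count / input_fps)   # 7.0 s going in
print(frame_count / output_fps)  # 12.0 s coming out at the lower frame rate
```

Matching the output node's frame rate to the source (here, 24) keeps the duration identical.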
@bowaic9467 · 1 month ago
I don't know how to fix this problem: 'ControlNet' object has no attribute 'latent_format'
@Oscilatii · 9 months ago
Hello! I used your tutorial and workflow, but I don't know why my video is crap :) The background is modified and is cool, but my face is still like the original video with some modified colors. If I want to make my face a robot, for example, it just won't work... With OpenPose instead of lineart I got great results, but mouth movement is missing when I speak. If I use the same prompt in img2img, the results are amazing.
@c0nsumption · 9 months ago
You can adjust the ControlNet weight, try different ControlNets, or try mixing them. I'll drop a multi-ControlNet video soon.
@Oscilatii · 9 months ago
@c0nsumption thanks for your answer. One of my problems was that I was using a realistic model :) Now everything is OK. Thanks again for this tutorial, it really helped me.
@lanvinpierre · 9 months ago
Can you do CLI prompting in ComfyUI? Great tutorial btw!
@c0nsumption · 9 months ago
Sorry, confused about what you're asking. Are you asking if you can do prompt travel?
@lanvinpierre · 9 months ago
@c0nsumption the one where you used 3 different images to help with the animation, "frame 0 0001, frame 8 0002". I'm not sure what it's called, but can that be done through ComfyUI, or should it be done like your other tutorial?
@leretah · 9 months ago
Yesterday everything was OK, and today I have this error:
Error occurred when executing KSampler: unsupported operand type(s) for //: 'int' and 'NoneType'
File "C:\Users\lenin\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
Please help me; learning this is really frustrating at times, but I love it!!!
@c0nsumption · 9 months ago
Sounds like you have the wrong data in your KSampler somewhere. Try reloading the workflow from scratch. Consider posting your issue in the r/animatediff subreddit.
@nilshonegger · 9 months ago
Thank you so much for sharing your workflow! Is there a way to bypass the VAE nodes in order to use it with models that don't require a separate VAE (such as Dreamshaper, EpicRealism)?
@c0nsumption · 9 months ago
Plug the VAE from your checkpoint loader node into any slot that requires a VAE.
@ehsankholghi · 6 months ago
I upgraded to a 3090 Ti with 24 GB. How much CPU RAM do I need to do video-to-video SD? I have 32 GB.
@c0nsumption · 6 months ago
Should be fine with that. Don't upgrade your RAM till you hit your bottleneck. If you're doing really, really long sequences it'll bottleneck, but even then you can just split them up into smaller chunks.
@ehsankholghi · 5 months ago
@c0nsumption thanks so much. Is it possible to make a video with, like, 1000 frames (1000 PNGs) with your workflow? I got this error after 1.5 hours of render time: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32
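That allocation error is easy to sanity-check: a float32 frame stack of shape (976, 1024, 576, 3) really does need about 6.43 GiB in one contiguous block, which is why splitting a long clip into smaller chunks (as suggested above) keeps the peak manageable. The 250-frame chunk size below is an arbitrary example:

```python
# RAM needed for a float32 array of shape (frames, height, width, channels).
frames, h, w, c = 976, 1024, 576, 3
bytes_needed = frames * h * w * c * 4      # float32 = 4 bytes per element
gib = bytes_needed / 2**30
print(round(gib, 2))                       # ~6.43 GiB, matching the error

# Rendering in 250-frame chunks cuts the peak allocation roughly fourfold.
chunk = 250 * h * w * c * 4 / 2**30
print(round(chunk, 2))
```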
@arkelss4 · 9 months ago
Can this also work with Automatic1111?
@c0nsumption · 9 months ago
No idea, but most likely not. Best to start learning the newer tools and growing out of Auto1111. The developer experience isn't the greatest on Auto, so most development on state-of-the-art tech is happening on ComfyUI and other repos.
@frustasistumbleguys4900 · 9 months ago
Hey, why do I get noise with artifacts in my output? I followed you.
@c0nsumption · 9 months ago
DM me over X or Instagram. Send me an example image.
@terencechen5857 · 9 months ago
Have you tried this workflow + IPAdapter? It increases memory significantly.
@c0nsumption · 9 months ago
Yeah, it'll pull around 17 GB of VRAM. I have a RunPod tutorial coming for those lacking. Took a lot of debugging and studying, but I've ironed out the bugs and got it figured out. Then I can drop all the remaining workflows and tutorials 🙏🏽 This way, if anyone's lacking, I can redirect them to RunPod, where they pay as they go and for good cards, rather than Google Colab, which imo really isn't worth it.
@terencechen5857 · 9 months ago
@c0nsumption It's more than 17 GB in my case, depending on how many frames are to be generated; however, looking forward to seeing your update, thanks.
@terencechen5857 · 9 months ago
@c0nsumption I did some updates (ComfyUI, custom nodes like IPAdapter, etc.); the usage of VRAM is down to 11 GB at a resolution of 576×1024 😂
@AIPixelFusion · 9 months ago
How are you only using 11 GB of VRAM? Mine goes above 24 GB and has to use non-GPU RAM...
@c0nsumption · 9 months ago
How much VRAM do you have? How many frames are you using? What is the size of your frames? What size are you upscaling them to? How long is your generation? What do you have running in the background on your computer? Going above 24 GB of VRAM has to be for a reason.
@AIPixelFusion · 9 months ago
@c0nsumption I have: 24 GB VRAM; 30 frames; video frame size of 720x1280 (should I be lowering it to 576x1024?); values for upscaler: 576x1024 (are these ignored if smaller than the video frame size?)
@c0nsumption · 9 months ago
@AIPixelFusion 🤔 What the hell. Can you send me a photo of your node network over X? I don't understand how you're using that much VRAM if your upscaler is at 576 by 1024. How long is your actual input video / amount of frames? Did you make sure to cap them like I did? (Where I limited the amount of frames it would process.)
@Syzygyyy · 9 months ago
@c0nsumption same issue
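One concrete thing to check in that exchange: memory for the frame tensors scales with pixel count, and 720x1280 has noticeably more pixels per frame than 576x1024, so frame size alone can account for a large VRAM jump. This is a rough rule of thumb, not a measurement of the workflow:

```python
# Pixel-count ratio between the two frame sizes discussed above.
hi_res = 720 * 1280   # frames fed in at 720x1280
lo_res = 576 * 1024   # the tutorial's 576x1024 working size
print(round(hi_res / lo_res, 4))  # 1.5625x more pixels per frame
```

Downscaling the input to the working resolution first, then upscaling the finished result, keeps the sampler's memory footprint at the smaller size.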
@koalanation · 9 months ago
Great video! Just so you know: the models on Hugging Face are free to download; no need to open any account.
@c0nsumption · 9 months ago
Some require sign-in, especially upon initial release. It's all what the developers dictate when posting. Like when SDXL dropped, you had to have a Hugging Face account to download it.
@dnvman · 9 months ago
Hey, nice video 🫶 Where do I get the ttN text node?
@c0nsumption · 9 months ago
This video shows the process: kzfaq.info/get/bejne/e9KdrKyk27PGnHk.html?si=ej88H8_35b1N2cb9
@2amto3am · 9 months ago
Can we do image to image?
@c0nsumption · 9 months ago
This is image to image; it's just converting the video for you. If you want, just use the node from the beginning of the video. Am I reading your question correctly? 🤔
@bowaic9467 · 1 month ago
Do you know what happened with this error?
Error occurred when executing CheckpointLoaderSimpleWithNoiseSelect: 'model.diffusion_model.input_blocks.0.0.weight'
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_extras.py", line 52, in load_checkpoint
out = load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 511, in load_checkpoint_guess_config
model_config = model_detection.model_config_from_unet(sd, diffusion_model_prefix)
File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 239, in model_config_from_unet
unet_config = detect_unet_config(state_dict, unet_key_prefix)
File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 120, in detect_unet_config
model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]
@ywueeee · 9 months ago
Bro, it's been a week; where are some new vids? Eagerly waiting
@c0nsumption · 9 months ago
lol 😂 Been working on a RunPod setup video for people who don't have compute power. Was pretty difficult to figure it all out, but I got it. Posting in the next 30 minutes to an hour. Workflow vids coming now that I got that out of the way 🧍🏽‍♂️
@ywueeee · 9 months ago
@c0nsumption hope new workflows don't always involve RunPod from now on; would love to always get it working locally
@aoi_andorid · 8 months ago
Is anyone using AI to generate workflows for ComfyUI? Please let me know if you know of any useful links.
@c0nsumption · 8 months ago
I don't understand the question. ComfyUI is literally AI-powered software.
@aoi_andorid · 8 months ago
@c0nsumption I thought that if GPT could recognize and learn from a large number of JSON files and images showing workflows, it would be possible to generate workflows in natural language! (I used DeepL for the translation, so I apologize if my wording was rude.)
@skycladsquirrel · 9 months ago
Awesome tutorial. I'm using the ControlNet set for the next one. Here's my latest video: kzfaq.info/get/bejne/rteClr1jt623Zqc.html
@nft_bilder_art2098 · 9 months ago
Please tell me why I get this error when I launch ComfyUI...
D:\comfuUI\ComfyUI>python main.py
** ComfyUI start up time: 2023-10-17 05:30:32.177484
Prestartup times for custom nodes:
0.0 seconds: D:\comfuUI\ComfyUI\custom_nodes\ComfyUI-Manager
Traceback (most recent call last):
File "D:\comfuUI\ComfyUI\main.py", line 69, in <module>
import comfy.utils
File "D:\comfuUI\ComfyUI\comfy\utils.py", line 1, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
@nft_bilder_art2098 · 9 months ago
Before this there was no such error at start.
@nft_bilder_art2098 · 9 months ago
Maybe I'm launching it wrong somehow? Thank you in advance for your cooperation!
@nft_bilder_art2098 · 9 months ago
All okay: I watched your last video and figured it out, thank you very much
@c0nsumption · 9 months ago
Love that you internally said "I'm figuring this out, dammit!" lol. Good job 👍🏽
@yuxiang3147 · 9 months ago
Awesome video! Do you know how you can combine OpenPose, depth and lineart together to improve the results?
@c0nsumption · 9 months ago
Yeah, I'll make a follow-up video for multiple ControlNets.
@yuxiang3147 · 9 months ago
@c0nsumption Nice! Looking forward to it. You are doing awesome stuff, man, keep it up!
@c0nsumption · 9 months ago
@yuxiang3147 thanks for the positivity 🙏🏽