I just stumbled onto this and I can make all sorts of changes, but getting rid of the beard? Nup. No matter what I do, the beard will not go: in parentheses, in the negative prompt, reducing strength to 0, the beard stays on lol
@voneverick2999 9 days ago
Please do make more SVD tutorial videos like this.
@JohnSundayBigChin 17 days ago
Insane!... I already had all the nodes from other tutorials installed, but I never knew exactly what each one did. Thanks for sharing your workflow!
@valorantacemiyimben 28 days ago
Hello, how can we do professional face changing like this?
@soljr9175 1 month ago
Your workflow link doesn't work. It would have been nice if you had included it on Hugging Face.
@kevint.8553 1 month ago
I successfully installed the Manager, but don't see the manager options on the UI page.
@lukeovermind 1 month ago
Thanks, having a simple and an advanced face detailer is clever. Going to try it. Got a sub from me, keep going!
@bobwinberry 2 months ago
Thanks for your videos! They worked great, but now (due to updates?) this workflow no longer works; it seems to be missing the BNK_Unsampler. Is there a workaround for this? I've tried, but aside from stumbling around, this is way over my head. Thanks for any help you might have, and thanks again for the videos - well done!
@FiXANoNada 2 months ago
Finally, a guide that I can comprehend and follow, and then even play around with. You are so kind to even list all the resources in the description in a well-organized manner. Instant sub from me.
@meadow-maker 2 months ago
you don't explain how to set the node up?????
@nawafalhinai1643 2 months ago
Where should I put all the files in the links?
@IMedzon 2 months ago
Useful video, thanks!
@jbnrusnya_should_be_punished 2 months ago
Interesting, but the 2nd method does not work for me. No matter what the resolution, I always get this error:
Error occurred when executing FaceDetailer: The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1
File "C:\Users\Alex\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
@bentontramell 22 days ago
This sometimes happens when mixing SD and SDXL assets in the workflow.
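If you're not sure which family a checkpoint belongs to, here's a rough, untested heuristic that inspects its state-dict keys (the key names are common conventions rather than guarantees, and the filename is a placeholder):

```python
# Rough heuristic to tell SD1.x from SDXL checkpoints by their state-dict keys.
# Key names are common conventions, not guaranteed; verify against your own files.
from safetensors import safe_open

with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    keys = list(f.keys())

# SDXL checkpoints ship a second text encoder ("conditioner.embedders.1...")
# and their UNet carries "label_emb" weights; SD1.x checkpoints have neither.
is_sdxl = any("conditioner.embedders.1" in k or "label_emb" in k for k in keys)
print("Looks like SDXL" if is_sdxl else "Looks like SD1.x")
```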
@CornPMV 2 months ago
One question: what can I do if I have several people in my picture, e.g. in the background? Can I somehow influence FaceDetailer to only refine the main person in the middle?
@maxehrlich 1 month ago
Probably crop that section, run the fix, and image-composite it back in.
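If you want to do the crop and composite outside ComfyUI, here's a minimal Pillow sketch of the idea (the box coordinates and filenames are placeholders, not from the video):

```python
# Minimal crop -> fix -> composite sketch using Pillow.
# Box coordinates and filenames are placeholders.
from PIL import Image

img = Image.open("scene.png")
box = (600, 150, 900, 450)           # hypothetical face region (left, top, right, bottom)
img.crop(box).save("face_crop.png")  # run this crop through FaceDetailer / img2img

fixed = Image.open("face_fixed.png")                      # the detailed result
fixed = fixed.resize((box[2] - box[0], box[3] - box[1]))  # match the crop size
img.paste(fixed, (box[0], box[1]))
img.save("scene_fixed.png")
```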
@zhongxun2005 3 months ago
Thank you for sharing! Subscribed :) I have a question about the AIO Aux Preprocessor in the 2nd SDXL workflow. I don't see a LineartStandardPreprocessor option; the closest one is LineartPreprocessor, but it throws the error "Error occurred when executing AIO_Preprocessor: LineartDetector.from_pretrained() got an unexpected keyword argument 'cache_dir'"
@zhongxun2005 3 months ago
Never mind, I resolved it. I replaced both with "[Inference.Core] AIO Aux Preprocessor", which has the option. Hope this helps others.
@PavewayIII-gbu24 3 months ago
Great tutorial, thank you
3 months ago
This is a wonderfully good job! I just found it and it works amazingly well! Do you have a workflow that does the same thing with img2img?
@goactivemedia 3 months ago
When I run this I get: "The operator 'aten::upsample_bicubic2d.out' is not currently implemented for the MPS device"?
@Mranshumansinghr 3 months ago
Much better explanation of Cascade in ComfyUI. Thank you. Will try this today. The b-to-c-and-then-a setup is a bit confusing and only works sometimes. This is much simpler and requires fewer files.
@aliyilmaz852 3 months ago
Amazing share! Thanks again. I am old and have lots of b/w photos, so I will give it a try. And if I can, I will try to swap the faces with current ones :) Maybe you can teach us how to swap faces; I'd definitely appreciate it!
@PIQK.A1 3 months ago
how to facedetail vid2vid?
@cheezeebred 3 months ago
I'm missing the BNK_Unsampler and can't find it via Google search. What am I doing wrong? Can't find it in the Manager either.
@lumina36 4 months ago
I'm amazed that no one has ever thought of combining Stable Forge with both Krita and Cascade; it would actually solve a lot of problems
@SumNumber 4 months ago
This is cool, but it is just about impossible to see how you connected all these nodes together, so it did not help me at all. :O)
@HowDoTutorials 4 months ago
Yeah I’ve been working on making things a little easier to parse going forward. There’s a link to the workflow in the description if you want to load it up and poke around a bit.
@aliyilmaz852 4 months ago
Thanks for the great explanation, hope you do more videos like that.
@focus678 4 months ago
What GPU spec are you using?
@HowDoTutorials 4 months ago
I'm using a 3090 which is probably something I should mention going forward so people can set their expectations properly. 😅
@onurc.6944 4 months ago
When it comes to the SVD decoder, the connection is lost :(
@HowDoTutorials 4 months ago
Sorry to hear it's giving you trouble. Here are a couple of things to try:
1. Make sure you're using the correct decoder model for your SVD model (e.g. if using the "xt" model, be sure you're using the "xt" decoder).
2. You may be running out of memory. Try lowering the `video_frames` parameter. You might also try using the non-xt model and decoder.
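If you'd rather script the memory tweak, here's a rough, untested sketch that lowers `video_frames` across an API-format export of the workflow (the filenames are hypothetical; export via "Save (API Format)" with dev mode enabled):

```python
# Untested sketch: lower `video_frames` in an API-format export of the workflow.
# "svd_workflow_api.json" is a hypothetical filename.
import json

with open("svd_workflow_api.json") as f:
    workflow = json.load(f)  # API format: {node_id: {"class_type": ..., "inputs": {...}}}

for node in workflow.values():
    inputs = node.get("inputs", {})
    if "video_frames" in inputs:
        inputs["video_frames"] = 14  # e.g. 14 instead of 25 to reduce VRAM use

with open("svd_workflow_api_low_vram.json", "w") as f:
    json.dump(workflow, f, indent=2)
```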
@onurc.6944 4 months ago
@@HowDoTutorials Thanks for your help :) I can work without image_decoder
@RuinDweller 4 months ago
After I discovered ComfyUI, my life changed forever. It has been a dream of mine for 5 years now to be able to run models and manipulate their latent spaces locally. ...But then I discovered just how hard it is for a noob like me to get a lot of these workflows working - at all - even after downloading and installing all of the required models, in the proper versions, with all of the nodes loaded and running together normally. This was one of about 3 that actually worked for me, and it is BY FAR my favorite one. I downloaded it as a "color restorer" and it works beautifully for that purpose, but I was so excited to see it featured in this video, because it already works for me! Now I can unlock its full potential, and it turns out all I needed were the proper prompts! THANK YOU so much for making these workflows and these video tutorials; I can't tell you how much you've helped me! If you ever decide to update any of this to utilize SDXL, I am so on that...
@HowDoTutorials 4 months ago
I loved reading this comment and I'm so happy I could help make this tech a bit more accessible. Here's a version of the "Reimagine" workflow updated for SDXL: comfyworkflows.com/workflows/4fc27d23-faf3-4997-a387-2dd81ed9bcd1 You'll also need these additional controlnets for SDXL: huggingface.co/stabilityai/control-lora/tree/main/control-LoRAs-rank128 Have fun and don't hesitate to reach out here if you run into any issues!
@RuinDweller 4 months ago
@@HowDoTutorials I thought I had already responded to this, but apparently I didn't! Anyway THANK YOU for posting the link to that workflow! It's running, but I can't get it to colorize any more, which was my main use for it. :( Oh well, it can still edit B/W images, and then I can colorize them in the other workflow, but I would love to be able to do both things in one. I can colorize things, but not people. I've tried every conceivable prompt. :(
@HowDoTutorials 4 months ago
@@RuinDweller I've been having trouble getting it to work as well. Seems there's something about SDXL that doesn't play with that use case quite as well. I'll keep at it and let you know if I figure something out.
@jroc6745 4 months ago
This looks great, thanks for sharing. How can this be altered for img2img?
@HowDoTutorials 4 months ago
Here's a modified workflow: comfyworkflows.com/workflows/cd47fbe6-68cc-4f40-8646-dfc62d32eeb4
@mikrodizels 4 months ago
That FaceDetailer looks amazing, I like creating images with multiple people in them, so faces are the bane of my existence
@amorgan5844 4 months ago
It's the most discouraging part of making AI art
@greypsyche5255 4 months ago
try hands.
@MultiSunix 4 months ago
This is great and helpful, thank you!
@teenudahiya01 4 months ago
Hi, can you help me solve this error: "module diffusers has no attribute StableCascadeUnet"? I installed Cascade in Stable Diffusion, but I got this error after installing all the models on Windows 11
@HowDoTutorials 4 months ago
It sounds like your diffusers package may be out of date. If you haven’t already, try updating ComfyUI. If you have the Windows portable install you can go into ComfyUI_windows_portable/update folder and run `update_comfyui_and_python_dependencies.bat`.
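If you want to confirm diffusers is the culprit before updating, here's a quick check you can run from the same Python environment (the 0.27 threshold is from memory, so treat it as an assumption and check the diffusers release notes):

```python
# Quick sanity check that the installed diffusers build knows about Stable Cascade.
# StableCascadeUNet landed around diffusers 0.27 (from memory; verify in release notes).
import diffusers

print("diffusers version:", diffusers.__version__)

try:
    from diffusers import StableCascadeUNet  # noqa: F401
    print("StableCascadeUNet is available")
except (ImportError, AttributeError):
    print("Not found; try: pip install -U diffusers")
```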
@97BuckeyeGuy 4 months ago
You have an interesting cadence to your speech. Is this a real voice or AI?
@HowDoTutorials4 ай бұрын
A bit of both. I record the narration with my real voice, edit out the spaces and ums (mostly), and then pass it through ElevenLabs speech to speech.
@97BuckeyeGuy 4 months ago
@@HowDoTutorials That explains why I kept going back and forth with my opinion on this. Thank you 👍🏼
@lukeovermind 1 month ago
@@HowDoTutorials That's very clever. It's a very soothing voice
@jocg9168 4 months ago
Great workflow for the fix. For proper scenes where characters are actually not looking at the camera (like a 3/4 view, looking at a phone, using a tablet or something, not creepily staring into the camera), I wonder if I'm the only one who gets bad results on that type of image. But I will definitely try this new fix. Thanks for the tip.
@JonDankworth 4 months ago
Stable Cascade takes too long, only to create images that are not truly better
@HowDoTutorials 4 months ago
I agree for the most part. There are a few things it can do better than other models without special nodes, such as text and higher resolutions, but in general I think its strengths won’t really show until some fine tunes come out. That said, given its current licensing and the upcoming SD3 release, that may not matter much either.
@AngryApple 4 months ago
Would a Lightning model be a plug-and-play replacement for this, just because of the different license?
@HowDoTutorials 4 months ago
I've tested the JuggernautXL lightning model and it works great without any modification to the workflow. Some models may work better with different schedulers, cfg, etc., but in general they should work fine.
@AngryApple 4 months ago
@@HowDoTutorials I will try it, thanks
@JefHarrisnation 4 months ago
This was a huge help, especially showing where the models go. Running smoothly and producing some very nice results.
@kamruzzamanuzzal3764 4 months ago
So that's how you correctly use turbo models. Till now I used 20 steps with turbo models and just 1 pass; it seems using 2 passes with 5 steps each is much, much better. What about using Deep Shrink alongside it?
@HowDoTutorials 4 months ago
I just played around with it a bit and it doesn’t seem to have much of an effect on this workflow, likely because of the minimal upscaling and lower denoise value, but thanks for bringing that node to my attention! I can definitely see a lot of other uses for it. EDIT: I realized I was using it incorrectly by trying to inject it into the second pass. Once I figured out how to use it properly, I could definitely see the potential. It's hard to tell whether the Kohya by itself is better than the two pass or not, but Kohya into a second pass is pretty great. I noticed that reducing CFG and steps for the second pass is helpful to reduce the "overbaked" look.
@rovi-farmiigranhermanodela8693 4 months ago
What about all those videos where they use inpainting tools to edit pictures or to apply "filters"? Which AI can do that?
@HowDoTutorials 4 months ago
You can do that with ComfyUI too, though in-painting can be done a bit more easily with AUTOMATIC1111. I don’t have a video covering in-painting yet, but this method can give you something like the “filters” you mentioned: Reimagine Any Image in ComfyUI kzfaq.info/get/bejne/ebiFhdd60drKZWw.html
@AkoZoom 5 months ago
Very easy step-by-step tutorial! Thank you! But my RTX 3060 12GB takes nearly 2 min for the 4 images, and the last one (which has a special H) is also different (?)
@HowDoTutorials 4 months ago
You may want to try using the lite models or adjusting the resolution down to 1024x1024 to improve generation speed. You may also have better luck using the new models specifically for ComfyUI. Here's an updated tutorial: kzfaq.info/get/bejne/fbWegLuWz6ecdpc.html
@AkoZoom 4 months ago
@HowDoTutorials Oh yep, ty! So the models no longer go in the UNet folder but in the regular checkpoints folder.
@kamruzzamanuzzal3764 5 months ago
Question: what happened to Stable Cascade stage a (VAE)? I don't see it. Edit: OK, got the answer; another person already asked it before. Anyway, subscribed, because not many people are experimenting with Stable Cascade and sharing their findings like you
@WalidDingsdale 5 months ago
I really have not figured out the applicability of Cascade yet, but thanks for sharing this all the same
@HowDoTutorials 5 months ago
I’ve noticed its biggest strengths are composition and text while still allowing variety in output. There are some great fine tunes for SDXL out there that offer better composition for certain styles, but can be more limited in their breadth. Honestly though, I think the main upside of Stable Cascade is not the current checkpoint, but the method and how it allows for creating fine tunes at a reduced cost.
@andriiB_UA 5 months ago
Where is the "stage_a" VAE? Or is this not necessary?
@HowDoTutorials 5 months ago
Not necessary as a separate model for this method. It’s been baked in as the VAE of the stage b checkpoint for the ComfyUI-specific models.
@TinusvdMerwe 5 months ago
Fantastic, I appreciate the time taken to explain some concepts in detail, and the generally easy, unhurried tone
@Vectorr66 5 months ago
Are you on discord?
@HowDoTutorials 5 months ago
Not currently, but it's probably about time for me to make an account and get on there. 😅
@Vectorr66 5 months ago
I do wish you could make the noodles less noticeable ha
@HowDoTutorials 5 months ago
Usually I'll adjust it for myself to make things look cleaner, but it makes it harder to see what the connections are so I switch it to ultimate noodle mode for videos. You can change it by clicking the gear in the menu to the right and switching the Link Render mode.