I've tried it, but it's simply slow as hell for me on an RTX 2060...
@CodeCraftersCorner · 1 day ago
Thanks for sharing your experience. I'm really hoping for an optimized ComfyUI version.
@mappa-qu3lo · 2 days ago
Thank you for the subtitles, which made it easier for more people to understand.
@CodeCraftersCorner · 1 day ago
Happy to help!
@HerrSausB · 2 days ago
shiimizu/ComfyUI-PhotoMaker-Plus, news from 2024-07-26: support for PhotoMaker V2. It uses InsightFace, so make sure to use the new PhotoMakerLoaderPlus and PhotoMakerInsightFaceLoader nodes. I haven't tried it, just a hint. Your videos always inspire me to try something new.
@user-ty8eo3up3g · 3 days ago
I decided to test the Hugging Face demo. The first PhotoMaker didn't impress me, and this one is still bad; the similarity to the original didn't impress me at all. FaceID and InstantID are still my favorites. But thanks for the video!
@CodeCraftersCorner · 3 days ago
Thanks for sharing your experience. Same here, I was not able to get results similar to the demo. I'm hoping for a ComfyUI version for more testing.
@voxyloids8723 · 3 days ago
Can't run it in Comfy =(
@CodeCraftersCorner · 3 days ago
Yes, as I mentioned, only V1 will work in ComfyUI for now. I'll keep an eye on it and follow up if we get a ComfyUI version.
@bionicsidekick6604 · 3 days ago
I'm only interested in the PhotoMaker style, and it doesn't seem to work anymore. The old one used to give me great drawings; the drawings look ugly in V2.
@CodeCraftersCorner · 3 days ago
Thank you for your feedback! I also noticed that the results aren't quite like the demo. Let's hope for a ComfyUI version soon so we can experiment.
@TrungHieuVo-b1u · 3 days ago
Thank you. Your lesson is very clear.
@CodeCraftersCorner · 3 days ago
Glad it was helpful!
@sunlightlove1 · 3 days ago
Awesome
@CodeCraftersCorner · 3 days ago
Thanks!
@sigitpermana8644 · 3 days ago
It's always good to see someone like you who shares and explains the how-to in detail, with an easy-to-follow workflow. Respect. Thank you, sir!
@CodeCraftersCorner · 3 days ago
Thank you, I appreciate that!
@FalconWingz88 · 4 days ago
Came for a ComfyUI node explanation, got a very clear explanation of how Stable Diffusion works too. Thank you!
@CodeCraftersCorner · 4 days ago
Great to hear!
@sinuva · 4 days ago
Amazing content as always, thank you mate. Do more and more, I will follow everything.
@swannschilling474 · 5 days ago
Thanks for sharing!😊
@CodeCraftersCorner · 4 days ago
Thanks for watching!
@sunlightlove1 · 5 days ago
Always giving amazing content to the community. Thank you!
@CodeCraftersCorner · 4 days ago
I appreciate that!
@nguyenhai551 · 8 days ago
Thank you
@sinuva · 8 days ago
Dude, this is the best IPAdapter tutorial. You should make this type of video a model for your channel: explain many types of models and workflows... Best video!!!! I suggest videos on consistent characters, and on more than one character per picture, done in different workflows. I have seen many YouTube videos, but you have the best tutorials!!
@sinuva · 8 days ago
Thanks for the video, mate. Have you done a video on the differences between InstantID, IPAdapter, and face detailer?
@CodeCraftersCorner · 7 days ago
Not yet! IPAdapter has a lot of uses. InstantID is mainly to add style while maintaining face similarity. Face detailer ensures that the generated face is anatomically correct.
@walidkh-sansfiltre · 9 days ago
Hello, great, thank you. How do I build a web app on top of ComfyUI LivePortrait?
@CodeCraftersCorner · 7 days ago
Hello, I've replied to your other comment.
@walidkh-sansfiltre · 9 days ago
Hello, great, thank you. How do I build a web app on top of ComfyUI LivePortrait?
@CodeCraftersCorner · 7 days ago
Hello, you can do the same as in my previous videos on the ComfyUI API. Save the workflow (JSON) file and load it in your app.
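As a minimal sketch of that idea (untested; it assumes ComfyUI is running locally on its default port 8188, and that "workflow_api.json" and the "my-web-app" client ID are placeholder names for your own files):

```python
# Sketch: queue a saved ComfyUI workflow from your own app via the HTTP API.
# Assumptions: ComfyUI running at 127.0.0.1:8188, and the workflow exported
# from ComfyUI using "Save (API Format)" as workflow_api.json.
import json
import urllib.request

def build_payload(workflow: dict, client_id: str = "my-web-app") -> bytes:
    # The /prompt endpoint expects the workflow graph under the "prompt" key.
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict, server: str = "127.0.0.1:8188") -> dict:
    req = urllib.request.Request(
        f"http://{server}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The response contains a prompt_id; poll /history/<prompt_id>
        # to find the generated outputs.
        return json.loads(resp.read())

# Usage (with ComfyUI running):
#   with open("workflow_api.json") as f:
#       result = queue_prompt(json.load(f))
```

Your web app's backend can call something like this whenever a user submits a request, then fetch the resulting images from ComfyUI's output.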
@yngeneer · 9 days ago
Did you, or do you plan to, take a look at implementing LivePortrait in a video-to-video workflow? Is it even possible?
@CodeCraftersCorner · 9 days ago
Hello @yngeneer, LivePortrait primarily works by animating a single image, so integrating it directly into a video-to-video workflow might be challenging at the moment. The main purpose is to use a video to drive the animation of the head. These projects are advancing really fast, so it may be possible in the near future. Motion capture is already widely used to animate 3D models in the film and gaming industries; soon it might be available for 2D video animations.
@yngeneer · 9 days ago
@@CodeCraftersCorner OK, thank you
@CodeCraftersCorner · 9 days ago
👍
@sunlightlove1 · 9 days ago
Always a great contribution
@vickyrajeev9821 · 10 days ago
Thanks. Can I run it on CPU? I don't have a GPU.
@CodeCraftersCorner · 9 days ago
Not sure! I checked the resources used during generation: for me, it took about 2GB of VRAM and the CPU was at 100%. You can give it a try; it may work, although it will be slow. Alternatively, you can try the Hugging Face space. It is free for now and should be fast.
@thschied · 10 days ago
Many thanks for the good explanation. A very good video and a pleasant voice.
@CodeCraftersCorner · 10 days ago
Thank you very much!
@tatagrossevilaine1753 · 11 days ago
THANK YOU!!!!
@shashisolanki9602 · 11 days ago
Appreciated 👍👍
@yvann.mp4 · 11 days ago
Thanks a lot for all your help; it's really helpful and interesting.
@CodeCraftersCorner · 10 days ago
Glad to hear that!
@centurionstrengthandfitnes3694 · 11 days ago
At first, I thought, 'This is way above my tech ability to understand!' But you were so logical and clear in how you put it all across that I actually understood most of it. Thanks for a great lesson. Subbed!
@CodeCraftersCorner · 10 days ago
Glad it was helpful! Thanks for the sub!
@daxgarduno3335 · 11 days ago
How do I install it if I don't have the portable version?
@CodeCraftersCorner · 10 days ago
Hello, most of the steps will be the same. You just have to replace part of the command with your own Python environment: every time I said "python_embeded/python.exe", use your virtual environment's or conda environment's Python executable instead to run the command. Otherwise, it will be the same.
@profitsmimetiques8682 · 12 days ago
Hi! I wanted to know if you have any idea how Zia Fit on I.ns.sta is made? There's a specialized AI agency (themixedai) behind it, and I don't know what kind of workflow they use to get this level of quality and consistency. It seems that the base image is an existing one, but then maybe they use a 3D-posed character + OpenPose + a LoRA for the body + a LoRA for the face, but something is off. If you have any idea how to do that, it would be really interesting to know; they are kind of the best in their field.
@CodeCraftersCorner · 10 days ago
I have not looked into it. Thanks for letting me know. I will take a look.
@jeremyjones1525 · 13 days ago
THANK YOU!
@oncelife7499 · 13 days ago
Thank you, I will watch the useful video and learn from it. I will support you~
@CodeCraftersCorner · 13 days ago
Thank you too
@ShubzGhuman · 13 days ago
The facexlib error never goes away; I tried everything.
@CodeCraftersCorner · 13 days ago
Sorry to hear that! I understand it can be really frustrating. As a last resort, you might consider trying Python 3.10. However, this can lead to more complications with future updates.
@ShubzGhuman · 9 days ago
@@CodeCraftersCorner Yes, I did that. I was using Python 3.12, then I had to downgrade.
@CodeCraftersCorner · 9 days ago
@ShubzGhuman I see.
@itsmenord1993 · 13 days ago
Thank you, sir
@CodeCraftersCorner · 13 days ago
Most welcome
@ShubzGhuman · 13 days ago
Thank you, it's working perfectly.
@CodeCraftersCorner · 13 days ago
Glad this one worked!
@yngeneer · 13 days ago
What about LivePortrait? Isn't it better?
@深度伪造爱好者 · 13 days ago
Yes, it's far better
@CodeCraftersCorner · 13 days ago
I'd say it's better
@深度伪造爱好者 · 12 days ago
@@CodeCraftersCorner PuLID or LivePortrait??? The LivePortrait mix node one is superior
@CodeCraftersCorner · 10 days ago
They are for two different purposes, though. LivePortrait animates a static portrait, while PuLID creates a new image with the face.
@GARDNSOUND · 14 days ago
This was a great tutorial. I felt like I was back in class. Excellent work. I do not understand Python as well as C++, but you made this particular application very easy to understand.
@CodeCraftersCorner · 13 days ago
Glad it was helpful!
@cdrbroda · 14 days ago
ValueError: Query/Key/Value should either all have the same dtype, or (in the quantized case) Key/Value should have dtype torch.int32. query.dtype: torch.float32, key.dtype: torch.float16, value.dtype: torch.float16 🤔
@CodeCraftersCorner · 13 days ago
You can try running ComfyUI with --force-fp16: add it to the run_nvidia_gpu.bat (batch) file. It may or may not work, though!
@glibsonoran · 14 days ago
Nice tutorial for a fairly complicated installation. Thx :)
@CodeCraftersCorner · 13 days ago
Glad it helped
@kwlook90 · 14 days ago
Outstanding presentation. Thank you. 😀
@sonic55193 · 14 days ago
This video is very educational. Thank you.
@dadekennedy9712 · 14 days ago
Thank you for your super detailed videos!
@CodeCraftersCorner · 14 days ago
Glad you like them!
@mohdfaizan5365 · 14 days ago
Hey, thanks a lot for creating this tutorial! Really appreciate your effort, you're awesome!👍
@vj-baker-88 · 14 days ago
Thanks, the major nodes are present. A new update would be great.
@CodeCraftersCorner · 14 days ago
Thanks for mentioning it. I made a video on the SD3, Stable Audio, and AuraFlow updates; it's on the channel. Can you specify which new nodes you are referring to?
@AdrianMark · 15 days ago
Nice to see you back! Hope all is well with you, mate. Your tutorials are really some of the best!
@stable_contusion · 15 days ago
Thank you so much for your community news videos.
@ArrowKnow · 15 days ago
Thank you for this, especially the Stable Audio part, as I had not seen how to use it before! The AuraFlow model is very good at prompt adherence, I would say the best I have seen yet, but the quality is still fairly poor. It is in beta, as you said, so I hope for improvements in future versions. It's very useful for getting a base picture for further processing with other models, although on my 3080 Ti it takes a while to render even at low steps. Civitai posted a video with the creator explaining and exploring the model on their YouTube channel if you want to know more. Keep the great videos coming!
@CodeCraftersCorner · 14 days ago
Thank you for letting me know! I will take a look at the video.
@FiveBelowFiveUK · 15 days ago
Awesome, there isn't enough info around about Stable Audio in Comfy! Very helpful
@PriyanshuSingh-sd2dc · 5 days ago
Bro, waiting for your new video 😃
@MushroomFleet · 15 days ago
Nice!
@mohdfaizan5365 · 15 days ago
Thank you so much. Can you please make a similar video for the PuLID installation? 🙏
I can run ControlNet workflows with no problem, to good effect. I can run IPAdapter workflows with no problem, to good effect. I have NEVER been able to run them both together in the same workflow. :( I get the error: "An error occurred when executing KSamplerAdvanced: mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)" Do you know what's causing this issue?
@CodeCraftersCorner · 15 days ago
This usually happens when there is a mismatch between the main checkpoint and the ControlNet or IPAdapter model. If you are using SD1.5, then the ControlNet and IPAdapter models should be for 1.5. When using SDXL, make sure you are using the corresponding ControlNet and IPAdapter models.