Comments
@aivideos322 5 hours ago
Too bad it takes so long to render; it's a very nice model.
@TheFutureThinker 4 hours ago
It's all a GPU game. With the upcoming newer AI models, I believe it will become even more obvious.
@sdvbnmwwwe 5 hours ago
Can Stable Diffusion batch 1000 images at once???
@SFzip 5 hours ago
Thanks! Installing now. Does it support multiple GPUs?
@TheFutureThinker 4 hours ago
I suppose multi-GPU support depends on how ComfyUI is configured. You can experiment with the multi-GPU settings for this model.
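If you want to pin ComfyUI to one card while testing, here is a minimal sketch. It assumes you start ComfyUI from a small wrapper script next to main.py; the CUDA_VISIBLE_DEVICES variable is standard CUDA, and I think ComfyUI also has a --cuda-device launch flag that does something similar.
<code>
# Rough sketch: pin the ComfyUI process to one GPU before torch is imported.
# Assumes this wrapper sits next to ComfyUI's main.py - adjust the path if not.
import os
import runpy

os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # expose only the second GPU

import torch  # imported after the env var so it only sees that GPU
print("visible GPUs:", torch.cuda.device_count())

# hand off to ComfyUI's normal entry point
runpy.run_path("main.py", run_name="__main__")
</code>
Whether a single Flux model can actually be split across multiple GPUs is a separate question; this only controls which GPU the process uses.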
@giuseppedaizzole7025 5 hours ago
bad hands
@AgustinCaniglia1992 7 hours ago
12 GB VRAM, 32 GB RAM. 2 minutes per image 💁
@MilesBellas 7 hours ago
@AgustinCaniglia1992 Schnell or Dev?
@TheFutureThinker 7 hours ago
Need more VRAM.
@AgustinCaniglia1992 7 hours ago
@MilesBellas Schnell, 8fpt or something
@crazyleafdesignweb 7 hours ago
This model looks promising; as a base model it can already generate images like this. So all your images were generated with just one sampler?
@TheFutureThinker 7 hours ago
Yup, all images were generated with their basic txt2img workflow. :)
@MilesBellas 7 hours ago
How is it possible to use LoRAs?
@TheFutureThinker 7 hours ago
var model = sd; var lora; if (lora.baseModel == model.type) { wf.lora = lora; return true; } else { return false; } — in other words, a LoRA only plugs into the workflow if it was trained for the same base model.
@MilesBellas 7 hours ago
@TheFutureThinker Can complex workflows from SD3 be used with FLUX?
@TheFutureThinker 6 hours ago
You can try ;)
@beetwing 8 hours ago
Yeah, I've been playing with it since yesterday; it's night and day compared with Stable Diffusion. Can't wait to see it running in Automatic1111 too, I think it's easier to manipulate ControlNet and LoRAs there.
@TheFutureThinker 7 hours ago
Well, newer AI models beat older models, as expected. And this model is maybe the kind of quality we expected when SD3 launched.
@kalakala4803 8 hours ago
😂😂 The thumbnail, I believe it's generated by Flux, really.
@TheFutureThinker 8 hours ago
The image is made for expression 😉
@TheFutureThinker 8 hours ago
For all FLUX.1 models, demo page, and ComfyUI instructions: thefuturethinker.org/flux-1-black-forest-labs-ai-image/
Additional workflow, Flux image with SDXL refiner: www.patreon.com/posts/just-created-109459378?Link&
@shubhammate245 8 hours ago
Can you make a step-by-step document for installation?
@__________________________6910 11 hours ago
It's a very complex task.
@__________________________6910 11 hours ago
Your system config?
@OnlyNiceMusic007 19 hours ago
Nice explanation.
@inteligenciafutura 20 hours ago
It's very bad
@edwardbradshaw6850 21 hours ago
I get a "runtime error". So this was fun while it lasted. :(
@VisibleMRJ 1 day ago
Normal people are not going to be tracking bacteria or collecting scientific data, but they will definitely be segmenting Asian women's body parts.
@TheFutureThinker 1 day ago
😂 Many are doing that on Civitai.
@electroncommerce 1 day ago
I'm running ComfyUI on Runpod, and for some reason I am not able to open the checkpoints folder in Jupyter in order to put the aura_glow_0.1.safetensors file in there. Any ideas? I can open all the other folders within /ComfyUI/models/ and run a terminal, just not the checkpoints folder. Appreciate the video and any assistance.
@machanmobile4216 1 day ago
It's Saito-san!
@lionhearto6238 1 day ago
Hi, is there a way to output/save only the orange, instead of the mask of the orange?
@santicomp 2 days ago
I was thinking of this exact flow when SAM 2 was released. The combination of both is dynamite. This could also be used with PaliGemma or a fine-tuned version of Florence-2. Awesome job. 🎉
@TheFutureThinker 2 days ago
Florence-2 FT works well with this; you should give it a try.
@weirdscix 2 days ago
I tried this with several videos. For some it worked great: Florence tracked the dancer fine and SAM2 masked it well. For others, Florence once again tracked well, but SAM2 only masked part of the dancer, like their shorts. I'm not sure what causes this.
@TheFutureThinker 2 days ago
It's better to use SAM2 Large; it has more parameters to identify objects within the bbox. With SAM2 Small or Plus, I have experienced that problem in some videos and images too. I noticed it happens when an object moves through different angles.
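If you want to sanity-check the Large checkpoint outside ComfyUI, here is a minimal sketch using Meta's sam2 Python package with a box prompt. The config/checkpoint names, the image path, and the box coordinates are placeholders, so check the repo for the exact filenames.
<code>
# Rough sketch (not the ComfyUI node): box-prompting SAM2 Large with Meta's sam2 package.
# Config/checkpoint names, image path, and box values below are placeholders.
import numpy as np
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor(
    build_sam2("sam2_hiera_l.yaml", "checkpoints/sam2_hiera_large.pt")
)

image = np.array(Image.open("dancer.png").convert("RGB"))
predictor.set_image(image)

# Bounding box from a detector like Florence-2: [x0, y0, x1, y1]
box = np.array([120, 40, 680, 900])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print(masks.shape, scores)
</code>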
@aivideos322 1 day ago
Had the same issue; Large worked better, but it still wasn't perfect. Edit: yeah, something is wrong with this node set at the moment. I can put "person" in the text box and get only pants; if I put "face", it gives me a person. It doesn't seem to be working as it should.
@TheFutureThinker 1 day ago
@aivideos322 I wish there were one node created for both SAM 1 and 2. We could use a drop-down to select which version we want and simplify the node connections; it would need a text box for the SEG prompt, keeping that idea from the SAM1 custom node. Something like the rough skeleton below.
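A sketch of what that combined node could look like. Only the standard ComfyUI custom-node boilerplate (INPUT_TYPES, RETURN_TYPES, NODE_CLASS_MAPPINGS) is real; the node name and the idea of dispatching to SAM1/SAM2 backends are hypothetical.
<code>
# Hypothetical combined SAM1/SAM2 node skeleton for ComfyUI.
class SamVersionSelector:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",),
                "sam_version": (["SAM1", "SAM2"],),  # drop-down selector
                "prompt": ("STRING", {"default": "person", "multiline": False}),
            }
        }

    RETURN_TYPES = ("MASK",)
    FUNCTION = "segment"
    CATEGORY = "segmentation"

    def segment(self, image, sam_version, prompt):
        # Dispatch to a SAM1 or SAM2 backend here, depending on sam_version.
        raise NotImplementedError("backend dispatch goes here")


NODE_CLASS_MAPPINGS = {"SamVersionSelector": SamVersionSelector}
</code>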
@authorkevin 1 day ago
@aivideos322 Toggle the individual objects selector.
@darkmatter9583 2 days ago
Is that ComfyUI? Yes, I see now...
@aivideos322 2 days ago
Good video buddy, you have me opening Comfy and updating workflows... seems like a real upgrade over Impact's SAM 1. Edit: I needed to change my Manager's security level to "weak" to install this.
@MrDebranjandutta 2 days ago
Great stuff. The only little thing I can offer constructive criticism about is the mouth jitter.
@TheFutureThinker 2 days ago
Yes, that needs to improve.
@goodie2shoes 2 days ago
It's fun and interesting seeing the progress Kijai has made implementing this model in Comfy. Great explanation @benji!
@TheFutureThinker 2 days ago
Yes, he is very quick to implement; whenever a new model releases, he gets a custom node done. 😊
@AgustinCaniglia1992 2 days ago
Who?
@kalakala4803 2 days ago
Thanks, I will update my workflow to try SAM2.
@TheFutureThinker 2 days ago
Have fun 😉
@crazyleafdesignweb 2 days ago
Thanks; since you mentioned Segment Anything last time, I like to use it more than other SEG methods.
@TheFutureThinker 2 days ago
Nice! ☺️
@TheFutureThinker 2 days ago
Segment Anything 2: ai.meta.com/blog/segment-anything-2/
ComfyUI custom node: github.com/kijai/ComfyUI-segment-anything-2
Model: huggingface.co/Kijai/sam2-safetensors/tree/main
Save to ComfyUI/models/sam2
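If you prefer to fetch the weights from a script rather than the browser, a minimal sketch with huggingface_hub. The exact filename inside the repo is a placeholder here, so check the repo's file list first.
<code>
# Rough sketch: pull the SAM2 weights into ComfyUI/models/sam2 with huggingface_hub.
# The filename below is a placeholder - check the repo's file list for the real name.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Kijai/sam2-safetensors",
    filename="sam2_hiera_large.safetensors",  # placeholder filename
    local_dir="ComfyUI/models/sam2",
)
print("saved to", path)
</code>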
@LuckRenewal 2 days ago
This is good! Thanks for sharing!
@TheFutureThinker 2 days ago
Glad you liked it!
@SudeepKumarRana 2 days ago
I have watched a few of your videos and they are really informative, especially when you talk in detail and share the technical side too. Keep up the good work, and thank you for teaching.
@TheFutureThinker 2 days ago
Glad you like them!
@naemi7614 3 days ago
Great video, thank you! The a-person-mask-generator is bugging out for me and I can't understand why. One of them works but not the other, as you can see here: i.ibb.co/xLwkX3m/whaaaat.jpg
<code>
Error occurred when executing APersonMaskGenerator:

[Errno 13] Permission denied: 'F:\\AI-ALL\\ComfyUI_windows_portable\\ComfyUI\\models\\mediapipe\\selfie_multiclass_256x256.tflite'

File "F:\AI-ALL\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\AI-ALL\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\AI-ALL\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\AI-ALL\ComfyUI_windows_portable\ComfyUI\custom_nodes\a-person-mask-generator\a_person_mask_generator_comfyui.py", line 102, in generate_mask
    with open(a_person_mask_generator_model_path, "rb") as f:
</code>
Please help 😢
@naemi7614 2 days ago
Found it: models/mediapipe had a selfie_multiclass_256x256.tflite folder with the selfie_multiclass_256x256.tflite file inside it. I copied the file out and replaced the folder with it (a small script like the one below does the same thing).
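A sketch of that fix as a script, in case anyone hits the same thing: it checks whether the expected .tflite path is actually a folder holding the real file, then moves the file up. The base path is an assumption, adjust it to your ComfyUI install.
<code>
# Rough sketch of the same fix: if the .tflite path is actually a folder
# containing the real file, move the file up and remove the folder.
# Adjust the base path to your ComfyUI install.
import shutil
from pathlib import Path

target = Path("ComfyUI/models/mediapipe/selfie_multiclass_256x256.tflite")

if target.is_dir():
    inner = target / "selfie_multiclass_256x256.tflite"
    tmp = target.parent / (target.name + ".tmp")
    shutil.move(str(inner), str(tmp))   # pull the real file out
    target.rmdir()                      # drop the now-empty folder
    tmp.rename(target)                  # give the file the expected name
    print("fixed:", target)
else:
    print("already a file:", target)
</code>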
@patrickdevaney3361 3 days ago
For image generation, do you need the config, tokenizer, scheduler, and text encoder files as well? Or just model.safetensors to put in the checkpoint directory?
@TheFutureThinker 3 days ago
Yes, you need those files, as listed on the GitHub page.
@patrickdevaney3361 3 days ago
@TheFutureThinker Cool. I got it working with just the model file, so maybe ComfyUI handles some of the requirements? Do you know which directories in ComfyUI/models the other files should go in?
@justaguy-69 3 days ago
Can you do a video on changing the mouth movement in a video? Like, input a video, then change it to say what you want. This would be good in a standalone open-source format. Also something that works on AMD, not just NVIDIA.
@abaj006 3 days ago
Fantastic, thanks for the quick tutorial!
@TheFutureThinker 2 days ago
Glad it was helpful!
@yngeneer 4 days ago
Watch out! The fish wants to jump out!
@TheFutureThinker 4 days ago
ComfyUI custom node: github.com/kijai/ComfyUI-KwaiKolorsWrapper/
Hugging Face: huggingface.co/Kwai-Kolors/Kolors
ChatGLM3: huggingface.co/Kijai/ChatGLM3-safetensors/tree/main
@crazyleafdesignweb 4 days ago
Interesting, I will try the ControlNet soon 😊
@TheFutureThinker 4 days ago
You should! With your design skills, this tool won't go to waste; I've seen many people just download it and only play around.
@MilesBellas 4 days ago
I watch at 1.5x 😊
@RamonGuthrie 4 days ago
Have you got the new Kolors FaceID IPAdapter working?
@TheFutureThinker 4 days ago
Looks like the IPAdapter hasn't been updated yet for FaceID integration. It's a brand-new release, so we have to be patient.
@luisellagirasole7909 4 days ago
Thanks! I tested it, very nice! I don't know why it also gives me a third image (an old man) that is not in the prompt LOL.
@kalakala4803 4 days ago
Thanks, looking forward to a series of tutorials for Kolors!
@TheFutureThinker 4 days ago
Yes, will do 😉
@RoshanYadav-v2z 4 days ago
Second
@SeanietheSpaceman 4 days ago
First ;)
@TheFutureThinker 4 days ago
😆😆😆
@rkuo2000 4 days ago
Overlaying with a fixed background picture would be much better.
@chrisder1814 5 days ago
Hello, I'd like to ask you a question about AI image-editing software with an API.