Too bad it takes so long to render; it's a very nice model.
@TheFutureThinker • 4 hours ago
It's all about the GPU game. With upcoming newer AI models, I believe it will be even more obvious.
@sdvbnmwwwe • 5 hours ago
Can Stable Diffusion batch 1000 images at once???
@SFzip • 5 hours ago
Thanks! Installing now. Does it support multiple GPUs?
@TheFutureThinker • 4 hours ago
I suppose multi-GPU support depends on how it is configured in ComfyUI. You can experiment with the multi-GPU settings for this model.
@giuseppedaizzole7025 • 5 hours ago
bad hands
@AgustinCaniglia1992 • 7 hours ago
12 GB VRAM, 32 GB RAM. 2 minutes per image 💁
@MilesBellas • 7 hours ago
@@AgustinCaniglia1992 Schnell or Dev?
@TheFutureThinker • 7 hours ago
Need more VRAM
@AgustinCaniglia1992 • 7 hours ago
@@MilesBellas schnell 8fpt or something
@crazyleafdesignweb • 7 hours ago
This model looks promising; as a base model it can generate images like this. So all your images are generated with one sampler?
@TheFutureThinker • 7 hours ago
Yup, all images were generated with their basic txt2img workflow. :)
@MilesBellas • 7 hours ago
How is it possible to use LoRAs?
@TheFutureThinker • 7 hours ago
var ai1 = sd; var lora; if (lora.type == ai1.lora) { wf.lora = lora; return true; } else { return false; }
@MilesBellas • 7 hours ago
@@TheFutureThinker Can complex workflows from SD3 be used with FLUX?
@TheFutureThinker • 6 hours ago
you can try ;)
@beetwing • 8 hours ago
Yeah, been playing with it since yesterday; it's night and day compared with Stable Diffusion. Can't wait to also see it running in Automatic1111; I think it's easier to manipulate ControlNet and LoRAs there.
@TheFutureThinker • 7 hours ago
Well, newer AI models beat older models, as expected. And this model is maybe the kind of quality we expected when SD3 launched.
@kalakala4803 • 8 hours ago
😂😂 The thumbnail, I believe it's generated by Flux, really.
@TheFutureThinker • 8 hours ago
The image was made for expression 😉
@TheFutureThinker • 8 hours ago
For all FLUX.1 models, demo page, and ComfyUI instructions: thefuturethinker.org/flux-1-black-forest-labs-ai-image/
Additional workflow, Flux image with SDXL refiner: www.patreon.com/posts/just-created-109459378?Link&
@shubhammate245 • 8 hours ago
Can you provide a step-by-step document for the installation?
@__________________________6910 • 11 hours ago
It's a very complex task
@__________________________6910 • 11 hours ago
Your system config?
@OnlyNiceMusic007 • 19 hours ago
Nice explanation
@inteligenciafutura • 20 hours ago
It's very bad
@edwardbradshaw6850 • 21 hours ago
I get a "runtime error". So this was fun while it lasted. :(
@VisibleMRJ • 1 day ago
Normal people are not going to be tracking bacteria or collecting scientific data, but they will definitely be segmenting Asian women's body parts.
@TheFutureThinker • 1 day ago
😂 Many are doing that on Civitai
@electroncommerce • 1 day ago
I'm running ComfyUI on RunPod, and for some reason I am not able to open the checkpoints folder using Jupyter in order to put the aura_glow_0.1.safetensors file in there. Any ideas? All the other folders within /ComfyUI/models/ I can open and run a terminal in, just not the checkpoints folder. Appreciate the video and any assistance.
@machanmobile4216 • 1 day ago
It's Saitō-san!
@lionhearto6238 • 1 day ago
Hi, is there a way to output/save only the orange, instead of the mask of the orange?
@santicomp • 2 days ago
I was thinking of this exact flow when SAM 2 was released. The combination of both is dynamite. This could also be used with PaliGemma or a fine-tuned version of Florence 2. Awesome job. 🎉
@TheFutureThinker • 2 days ago
Florence 2 FT works well with this; you should give it a try.
@weirdscix • 2 days ago
I tried this with several videos. Some worked great: Florence tracked the dancer fine and SAM2 masked it well. In others, Florence once again tracked well, but SAM2 only masked part of the dancer, like their shorts. I'm not sure what causes this.
@TheFutureThinker • 2 days ago
It's better to use SAM2 Large; it has more parameters to identify objects within the bbox. With SAM2 Small or Plus, I have experienced that problem in some videos and images too. I noticed it happened when an object moves through different angles.
@aivideos322 • 1 day ago
Had the same issue; Large worked better, but it still was not perfect. Edit: yeah, something is wrong with this node set at the moment. I can put "person" in the text box and get only pants; if I put "face", it gives me a person. It doesn't seem to be working as it should.
@TheFutureThinker • 1 day ago
@@aivideos322 I wish there were a node created for both SAM 1 and 2. We could use a drop-down to select which version we want and simplify the node connections; it would need a textbox for the SEG prompt, keeping that idea from the SAM1 custom node.
@authorkevin • 1 day ago
@@aivideos322 Toggle the individual objects selector
@darkmatter9583 • 2 days ago
Is that ComfyUI? Yes, I see now...
@aivideos322 • 2 days ago
Good video buddy, you have me opening Comfy and updating workflows... seems like a real upgrade over Impact SAM 1. Edit: I needed to change the security level of my Manager to "weak" to install this.
@MrDebranjandutta • 2 days ago
Great stuff. The only little thing I can offer some positive criticism about is the mouth jitter.
@TheFutureThinker • 2 days ago
Yes, that needs to improve.
@goodie2shoes • 2 days ago
It's fun and interesting seeing the progress Kijai has made implementing this model in Comfy. Great explanation @benji!
@TheFutureThinker • 2 days ago
Yes, he is very quick to implement; whenever a new model releases, he gets a custom node done. 😊
@AgustinCaniglia1992 • 2 days ago
Who?
@kalakala4803 • 2 days ago
Thanks, I will update my workflow to try SAM2.
@TheFutureThinker • 2 days ago
Have fun 😉
@crazyleafdesignweb • 2 days ago
Thanks! Since you mentioned Segment Anything last time, I like using it more than other SEG methods.
@TheFutureThinker • 2 days ago
Nice! ☺️
@TheFutureThinker • 2 days ago
segment-anything-2: ai.meta.com/blog/segment-anything-2/
github.com/kijai/ComfyUI-segment-anything-2
Model: huggingface.co/Kijai/sam2-safetensors/tree/main
Save to ComfyUI/models/sam2
@LuckRenewal • 2 days ago
This is good! Thanks for sharing!
@TheFutureThinker • 2 days ago
Glad you liked it!
@SudeepKumarRana • 2 days ago
I have watched a few of your videos and they are really informative, especially when you go into detail and share the technicals too. Keep up the good work, and thank you for teaching.
@TheFutureThinker • 2 days ago
Glad you like them!
@naemi7614 • 3 days ago
Great video, thank you! The a-person-mask-generator is bugging for me, and I can't understand why. One of them works but not the other, as you can see here: i.ibb.co/xLwkX3m/whaaaat.jpg
<code>
Error occurred when executing APersonMaskGenerator:

[Errno 13] Permission denied: 'F:\\AI-ALL\\ComfyUI_windows_portable\\ComfyUI\\models\\mediapipe\\selfie_multiclass_256x256.tflite'

File "F:\AI-ALL\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\AI-ALL\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\AI-ALL\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\AI-ALL\ComfyUI_windows_portable\ComfyUI\custom_nodes\a-person-mask-generator\a_person_mask_generator_comfyui.py", line 102, in generate_mask
    with open(a_person_mask_generator_model_path, "rb") as f:
</code>
Please help 😢
@naemi7614 • 2 days ago
Found it: models/mediapipe had a selfie_multiclass_256x256.tflite folder with the selfie_multiclass_256x256.tflite file inside it. I copied the file out and replaced the folder with it.
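For anyone hitting the same quirk (a model file wrapped in a same-named directory), the manual fix can be scripted. A minimal sketch; the function name `unfold_model_file` is mine, and the paths are assumptions based on the comment above:

```python
from pathlib import Path
import shutil

def unfold_model_file(models_dir, name):
    """If models_dir/name is a directory that contains a file of the same
    name (a common download/unzip quirk), replace the directory with the
    file itself so loaders find it at the expected path."""
    target = Path(models_dir) / name
    inner = target / name
    if target.is_dir() and inner.is_file():
        tmp = target.parent / (name + ".tmp")
        shutil.move(str(inner), str(tmp))   # move the real file out first
        shutil.rmtree(target)               # remove the now-empty wrapper dir
        shutil.move(str(tmp), str(target))  # put the file where the dir was

# Example (hypothetical portable-install path):
# unfold_model_file(r"F:\AI-ALL\ComfyUI_windows_portable\ComfyUI\models\mediapipe",
#                   "selfie_multiclass_256x256.tflite")
```

If the layout is already correct (a plain file at that path), the function does nothing.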
@patrickdevaney3361 • 3 days ago
For image generation, do you need the config, tokenizer, scheduler, and text encoder files as well? Or just model.safetensors to put in the checkpoint directory?
@TheFutureThinker • 3 days ago
Yes, you need those files, as listed on the GitHub page.
@patrickdevaney3361 • 3 days ago
@@TheFutureThinker Cool. I got it working just with the model file, so maybe ComfyUI handles some of the requirements? Do you know which directories in ComfyUI/models to put the other files in?
@justaguy-69 • 3 days ago
Can you do a video on changing the mouth movement in a video? Like input a video, then change the video to say what you want; this would be good in a standalone open-source format. Also something that works on AMD, not just Nvidia.