Comments
@Warz-cx6zk · a day ago
Hi, I wanted to ask the following: 1) Is this LoRA inpainting process better than Fooocus or BrushNet inpainting? 2) Is it okay to stack LoRAs using Efficient Loader SDXL to inpaint with this inpainting process, Fooocus inpainting, and BrushNet inpainting? Thank you very much.
@TECHTOUR · 3 days ago
I downloaded the workflow and made no changes, but I'm getting 60 identical images. What could be the reason?
@vincema4018 · 7 days ago
Sorry to say, but given the outcomes in the video, I don't feel an urge to make the change. Tile upscale in SDXL is still very primitive, and I'd rather use Ultimate SD instead, which normally gives me much more stable and better results under both A1111 and ComfyUI. Actually, for images generated from SDXL, I always use high-res fix with an upscale model like 4xUltraSharp; it works great in terms of keeping the original details. However, for upscaling a random image using SDXL, I would prefer SUPIR, as it gives me cleaner and more creative results. One alternative way of refining images generated from SD 1.5, SDXL, or MJ is using SD3 with a low denoise and no ControlNet. I find the results much better than the original version. But be aware it may introduce too much detail, some of which we may not want.
@cgpixel6745 · 6 days ago
Well, thanks for the tips. I also wanted to try SUPIR, but I have low VRAM; that's why I tried other methods like Ultimate SD or tile upscale, since they consume less VRAM and the results are quite acceptable. But I will try your method and see how it goes — it seems like a quite promising tip.
@weirdscix · 8 days ago
Nice video. I used to use SUPIR quite a lot, but then I moved on to the McBoaty upscaler/refiner. It uses tiling (you can even edit the prompt/denoise per tile) and can use ControlNet models as well; it's from the MaraScott nodes.
@leiyangalable · 8 days ago
Do you know how to calculate the tile width and height? I want to tile images myself, but I get stuck at the upscaling step, when I want to put the tiled images back into a whole one.
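For the tiling question above, here is a minimal sketch of one way to compute overlapping tile boxes so each tile can later be pasted back at its origin. The tile size, overlap, and coverage rule are illustrative assumptions (not taken from McBoaty or any specific node), and it assumes the image is at least one tile in each dimension:

```python
def tile_boxes(width, height, tile=512, overlap=64):
    """Compute (x0, y0, x1, y1) boxes covering a width x height image
    with a fixed overlap, so tiles can be pasted back at (x0, y0).
    Assumes width >= tile and height >= tile."""
    step = tile - overlap
    xs = list(range(0, width - tile + 1, step))
    ys = list(range(0, height - tile + 1, step))
    # Make sure the last row/column reaches the image border.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y, x + tile, y + tile) for y in ys for x in xs]

boxes = tile_boxes(1280, 768)  # 3 columns x 2 rows of 512-px tiles
```

When merging back, paste each processed tile at its (scaled) top-left corner and blend the overlap band between neighbours so seams don't show.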
@leiyangalable · 8 days ago
I used McBoaty too, but I want to use a different ControlNet, so I do the tiling myself.
@arjuneswarrajendran · 8 days ago
After the generation, my storage decreased from 50 GB to 30 GB. Do you know why?
@cgpixel6745 · 6 days ago
It could be related to a ComfyUI update that downloaded other nodes' models.
@user-yx4yt5zq2y · 9 days ago
Error occurred when executing RIFE VFI: RIFE_VFI.vfi() missing 1 required positional argument: 'frames'
File "H:\MachineLearning\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "H:\MachineLearning\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "H:\MachineLearning\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
@cgpixel6745 · 6 days ago
Try updating those nodes.
@marcdevinci893 · 4 days ago
I get the same error, and I'm fully up to date too.
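For what it's worth, "missing 1 required positional argument: 'frames'" is Python's generic complaint when a required input never reaches a function — here, the node's frames input isn't being wired in (often a link broken by an update rather than bad data). A toy reproduction, with `ToyNode` as a made-up stand-in rather than the real RIFE node:

```python
class ToyNode:
    # Stand-in for a node whose function declares a required input.
    def vfi(self, frames):
        return len(frames)

node = ToyNode()
try:
    # Calling without the required input reproduces the same error shape:
    # "vfi() missing 1 required positional argument: 'frames'"
    node.vfi()
except TypeError as exc:
    msg = str(exc)
```

So when updating the nodes alone doesn't help, re-wiring the frames/image connection (or deleting and re-adding the node so its inputs regenerate) is the usual fix.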
@RoshanYadav-v2z · 10 days ago
I got this error — what do I do?
Error occurred when executing ImageScaleBy: ImageScaleBy.upscale() missing 1 required positional argument: 'image'
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in
@user-yx4yt5zq2y · 9 days ago
Same, but at line 152.
@RoshanYadav-v2z · 9 days ago
@user-yx4yt5zq2y I found a solution.
@cgpixel6745 · 6 days ago
It seems the latest update caused that issue; I'm facing the same problem. You can use the Upscale Image node instead.
@RoshanYadav-v2z · 6 days ago
@cgpixel6745 I fixed the problem.
@health_beaty · 12 days ago
class
@MisterCozyMelodies · 13 days ago
Where can I download depth_sdxl.safetensors?
@health_beaty · 14 days ago
OK
@MaghrabyANO · 15 days ago
Can you provide the workflow shown in the intro?
@cgpixel6745 · 6 days ago
You can find that workflow under the ic-light root folder.
@RoshanYadav-v2z · 15 days ago
I got this error — what do I do?
Error occurred when executing LoraLoader: 'NoneType' object has no attribute 'lower'
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\nodes.py", line 11, in load_lora
@cgpixel6745 · 15 days ago
Make sure you select a LoRA file in the LoRA loader; otherwise it will not work.
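That matches the traceback: "'NoneType' object has no attribute 'lower'" means the loader received None where it expected a filename string — which is what the dropdown yields when no LoRA file is selected. A hypothetical illustration (`load_lora` here is a stand-in, not ComfyUI's actual code):

```python
def load_lora(lora_name):
    # Hypothetical stand-in for a loader that normalizes the filename.
    # If nothing is selected in the dropdown, the widget value is None.
    if lora_name is None:
        raise ValueError("No LoRA file selected in the LoRA loader node")
    return lora_name.lower()

try:
    None.lower()  # the unguarded failure mode the traceback shows
except AttributeError as exc:
    err = str(exc)  # "'NoneType' object has no attribute 'lower'"
```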
@RoshanYadav-v2z · 15 days ago
@cgpixel6745 OK, I will try now.
@RoshanYadav-v2z · 15 days ago
@cgpixel6745 I got another error:
Error occurred when executing ADE_AnimateDiffLoaderWithContext: 'NoneType' object has no attribute 'lower'
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen1.py", line 138, in load_mm_and_inject_params
motion_model = load_motion_module_gen1(model_name, model, motion_lora=motion_lora, motion_model_settings=motion_model_settings)
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 1201, in load_motion_module_gen1
mm_state_dict = comfy.utils.load_torch_file(model_path, safe_load=True)
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 14, in load_torch_file
@DBPlusAI · 16 days ago
Where can I find the latest version? The one I found through your channel is not the latest version <3
@cgpixel6745 · 15 days ago
The latest version of what, exactly?
@DBPlusAI · 14 days ago
@cgpixel6745 I'm very sorry — I hadn't watched the video carefully and misunderstood. I'm sincerely sorry :<<<
@RoshanYadav-v2z · 16 days ago
Hi sir, I need help with ComfyUI 😊
@cgpixel6745 · 16 days ago
Of course — how can I help you?
@RoshanYadav-v2z · 16 days ago
@cgpixel6745 Sir, when I run ComfyUI by clicking run_nvidia_gpu, this error shows. What do I do? Please guide:
Traceback (most recent call last):
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\main.py", line 80, in <module>
import execution
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 11, in <module>
import nodes
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\nodes.py", line 21, in <module>
import comfy.diffusers_load
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\diffusers_load.py", line 3, in <module>
import comfy.sd
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 5, in <module>
from comfy import model_management
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 119, in <module>
total_vram = get_total_memory(get_torch_device()) / (1024*1024)
File "C:\Users\akash\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 88, in get_torch_device
return torch.device(torch.cuda.current_device())
File "C:\Users\akash\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 778, in current_device
_lazy_init()
File "C:\Users\akash\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 293, in _lazy_init
torch._C._cuda_init()
RuntimeError: The NVIDIA driver on your system is too old (found version 11060). Please update your GPU driver by downloading and installing a new version from the URL: www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.
C:\Users\akash\Documents\ComfyUI_windows_portable>pause
Press any key to continue
@Name-cc8jj · 3 days ago
@RoshanYadav-v2z You need to install a newer driver version for your graphics card.
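As a side note on that traceback: "found version 11060" is an encoded CUDA version. Assuming the usual CUDA convention of major*1000 + minor*10, it decodes to 11.6 — i.e. the installed driver only supports CUDA 11.6, while the bundled PyTorch was built against a newer CUDA, which is why updating the driver (or installing a PyTorch build for the older CUDA) fixes it:

```python
def decode_cuda_version(v: int) -> str:
    """Decode a CUDA integer version, assuming major*1000 + minor*10."""
    return f"{v // 1000}.{(v % 1000) // 10}"

found = decode_cuda_version(11060)  # the version reported in the traceback
```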
@bgtubber · 19 days ago
I've also had bad results with the old SDXL Canny. I've always wondered why it's worse than the SD 1.5 Canny. Good to know the Canny in the ControlNet Union model doesn't have such problems. Thanks for the demonstration!
@cgpixel6745 · 18 days ago
Thanks to you for the positive energy.
@wellshotproductions6541 · 21 days ago
Thank you for this — another great video. I like how you go over the workflow, highlighting the different steps. And no background music to distract me! You sound like a wise professor.
@cgpixel6745 · 21 days ago
@wellshotproductions6541 Thanks for your comments. I'm trying to improve the quality with every tutorial based on the community's advice. I'm glad you liked it.
@ken-cheenshang6829 · 22 days ago
Thanks!
@lance3301 · 22 days ago
Great content and great workflow. Thanks for sharing.
@jinxing-xv3py · 23 days ago
It is amazing.
@MrEnzohouang · 23 days ago
I have a question about the commercial use of ComfyUI workflows: is it possible to place products on models naturally? So far I've only seen it work for relatively large products like clothes and shoes, not jewelry such as earrings, bracelets, and necklaces. Midjourney can be used to process the photos, but the product's appearance cannot be controlled, and SD's control over very small objects seems weak — at least a thin chain is difficult. I wonder if you have a solution? Thank you very much.
@Gavinnnnnnnnnnnnnnn · 24 days ago
How do I get depth_sdxl.safetensors for Depth Anything?
@sinuva · 25 days ago
Quite a big difference, actually.
@cgpixel6745 · 25 days ago
In terms of speed it's more interesting.
@kallamamran · 28 days ago
I feel like V2 actually has LESS detail 🤔
@cgpixel6745 · 25 days ago
In some images, that's true.
@Nonewedone · a month ago
Thank you. I used this workflow to generate a picture and everything seems good, except the uploaded image didn't affect the color of the area I masked.
@cgpixel6745 · a month ago
Try playing with the weight value of the IPAdapter.
@govindmadan2353 · a month ago
The SDXL depth ControlNet keeps giving an error: Error occurred when executing ACN_AdvancedControlNetApply: 'ControlNet' object has no attribute 'latent_format'. Do you know anything about this? Or could you please link the exact depth and scribble files for ControlNet that you are using?
@govindmadan2353 · a month ago
I'm already using the one given in the link.
@cgpixel6745 · a month ago
Use this link: huggingface.co/lllyasviel/sd-controlnet-scribble/tree/main. Also, don't forget to rename your ControlNet model and click Refresh in ComfyUI so the model name appears; that should fix your error.
@KINGLIFERISM · a month ago
ComfyUI is so annoying. The developer really needs to make it more stable. I could not install this — and I have installed LLMs and even the dependencies for faceswap and dlib, which anyone knows isn't straightforward. But this? No go... sigh. I give up and am not reinstalling again.
@cgpixel6745 · a month ago
Yes, you are right, but this DAV2 setup is quite simple. Did you face any issues?
@pixelcounter506 · a month ago
Thank you very much for the information. For me it's quite surprising to get a more detailed depth map with V2 but more or less the same results. I guess Canny or Scribble helps overcome the lack of precision in the V1 depth map.
@aarizmohamed17138 · a month ago
Amazing work🙌🙌🥳🔥
@lonelytaigahotel · a month ago
How do I increase the number of frames?
@cgpixel6745 · a month ago
You change it with the frame number in the Video Combine node.
@RoshanYadav-v2z · 16 days ago
@cgpixel6745 The ipadapter folder is not found in the models folder. What do I do?
@MattOverDrive · a month ago
Thank you very much for posting the workflow! For anybody curious, I ran CG Pixel's default workflow and prompt on an NVIDIA P40: image generation was 25 seconds and video generation was 9 minutes 11 seconds. I have a 3090 on the way, lol.
@cgpixel6745 · a month ago
I'm glad I could help. I have an RTX 3060 myself; yours should perform better than mine, especially if you have more than 6 GB of VRAM.
@MattOverDrive · a month ago
@cgpixel6745 I put in an RTX 3070 Ti (8 GB) and it generated the image in 5 seconds and the video in 2 minutes 13 seconds. Time to retire the P40, lol. I'll report back when the 3090 is here.
@MattOverDrive · a month ago
It was delivered today. RTX 3090: image generation was 3 seconds and the video was 1 minute 14 seconds. Huge improvement!
@weirdscix · a month ago
Interesting video. Did you base this on the ipiv workflow? Only the upscaling seems to differ.
@cgpixel6745 · a month ago
Yes, it is.
@RoshanYadav-v2z · 16 days ago
@cgpixel6745 The ipadapter folder is not found in the models folder. What do I do?
@senoharyo · a month ago
Thanks a lot, brother! This is the workflow I was looking for — you are my superhero! XD
@cgpixel6745 · a month ago
I am here to help you.
@senoharyo · a month ago
@cgpixel6745 I know :)
@runebinder · a month ago
Interesting comparison, but it's a bit of an apples-to-oranges one, as the fine-tuned models have the benefit of a much greater data set and development. I haven't seen anyone compare it to SDXL Base yet, which would be a more accurate check. SD3's main issue, as far as I can see, is that it appears to have quite a limited training data set, as poses all look very similar. Really looking forward to seeing what the community does with it.
@cgpixel6745 · a month ago
Yeah, I also believe more amazing updates are coming for this SD3 model. Let's cross our fingers.
@Utoko · a month ago
If you disincentivize finetunes with your licensing, it is another story, though.
@yesheng8779 · a month ago
Thank you so much.
@Davidgotbored · a month ago
There is an annoying problem: when I zoom out, the fog on the moon disappears from view. How can I increase the view distance so the fog doesn't disappear? Please help me.
@cgpixel6745 · a month ago
In the View tab, change the End value from 1000 to 10,000; then select the camera, go to the camera icon, and do the same (from 100 to 10,000). That should fix it.
@onezen · a month ago
Can we do all the upscale stuff directly in ComfyUI?
@cgpixel6745 · a month ago
Yes, we can. I will upload a video on that soon — stay tuned.
@user-kx5hd6fx3t · a month ago
So great, thank you so much.
@pixelcounter506 · a month ago
Thank you for presenting this tool. It seems really interesting and could be quite helpful for compositing!
@cgpixel6745 · a month ago
I'm glad I could help.
@pixelcounter506 · a month ago
Your comparison between IC-Light and IPAdapter is a really good idea. I have the feeling you have more control over the final result with IPAdapter, by selecting a base image. With IC-Light you always get a quite heavy color shift. Does the mask still play a role if you are using IPAdapter?
@cgpixel6745 · a month ago
Yes, it still plays a role, and you can check this by changing its position.
@vincema4018 · a month ago
Would it be possible to get your light-type images?
@cgpixel6745 · a month ago
Sure, just send me your email.
@netspacema · a month ago
Can I please have them too?
@zlwuzlwu · a month ago
Great job.
@cgpixel6745 · a month ago
Thanks.
@ismgroov4094 · a month ago
Thanks, sir ❤
@cgpixel6745 · a month ago
You're welcome — hope that was helpful.
@StudioOCOMATimelapse · a month ago
Thanks, that's spot on 👍
@cgpixel6745 · a month ago
With pleasure 👍
@ismgroov4094 · a month ago
This is good.
@ismgroov4094 · a month ago
Thanks a lot. I respect you, sir!
@cgpixel6745 · a month ago
Thanks — it helps me create more amazing videos.
@SoSpecters · 2 months ago
Hey, I really like this workflow and concept, but I can't seem to run it. I keep getting this error:
Error occurred when executing KSampler: 'ModuleList' object has no attribute '1'
And in the console I see:
WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])
IC-Light: Merged with diffusion_model.input_blocks.0.0.weight channel changed from torch.Size([320, 4, 3, 3]) to [320, 8, 3, 3]
!!! Exception during processing!!! 'ModuleList' object has no attribute '1'
I didn't touch anything, and I watched the IC-Light installation video beforehand. I completely reinstalled ComfyUI and installed only the modules used in this workflow, and I still get this error... any ideas?
@cgpixel6745 · 2 months ago
Check your checkpoint model — I personally used the Juggernaut version, not the SDXL one.
@SoSpecters · a month ago
@cgpixel6745 I used 5 different SD 1.5 models, including the very first one that comes with Comfy — Emu 1.5 or whatever it's called. Right now my latest lead indicates that despite installing LayerDiffuse, a requirement for IC-Light, it may not have installed correctly. Further research once I get home.
@cgpixel6745 · a month ago
@SoSpecters In that case, try updating ComfyUI or reducing the image resolution from 1024 to 512 — maybe that will do it.
@SoSpecters · a month ago
@cgpixel6745 Alright, I did, brother — it seems that was not the cause. I opened a ticket on the IC-Light GitHub; I'm seeing a lot of KSampler errors like my own. Hoping to get some feedback there, and I'll share with the community when I figure it out.
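One more data point on the SHAPE MISMATCH warning in the original comment: [320, 4, 3, 3] is a standard SD 1.5 UNet input conv (4 latent channels), and the IC-Light patch widens it to 8 channels when it merges — if a checkpoint's input conv doesn't have 4 channels, the weight is "NOT MERGED" and sampling later fails. A toy check under that assumption, using plain tuples in place of real tensors (the key name is taken from the warning itself):

```python
def unet_input_channels(state_dict):
    # Weight shape convention: (out_channels, in_channels, k, k).
    shape = state_dict["diffusion_model.input_blocks.0.0.weight"]
    return shape[1]

# Hypothetical shapes standing in for real checkpoints:
sd15_checkpoint = {"diffusion_model.input_blocks.0.0.weight": (320, 4, 3, 3)}
iclight_merged  = {"diffusion_model.input_blocks.0.0.weight": (320, 8, 3, 3)}
```

This fits the advice above: a checkpoint whose input conv is not 4-channel (e.g. an SDXL model) can't be widened the way the IC-Light merge expects, so an SD 1.5 checkpoint such as Juggernaut is the safe choice.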
@MrEnzohouang · 2 months ago
Could you help me with this case? Please:
An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
File "D:\AI\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\AI\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\AI\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\AI\ComfyUI-aki-v1.3\custom_nodes\comfyui_controlnet_aux\node_wrappers\depth_anything.py", line 19, in execute
model = DepthAnythingDetector.from_pretrained(filename=ckpt_name).to(model_management.get_torch_device())
File "D:\AI\ComfyUI-aki-v1.3\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\depth_anything\__init__.py", line 40, in from_pretrained
model_path = custom_hf_download(pretrained_model_or_path, filename, subfolder="checkpoints", repo_type="space")
File "D:\AI\ComfyUI-aki-v1.3\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\util.py", line 324, in custom_hf_download
model_path = hf_hub_download(repo_id=pretrained_model_or_path,
File "", line 52, in hf_hub_download_wrapper_inner
File "D:\AI\ComfyUI-aki-v1.3\python\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "D:\AI\ComfyUI-aki-v1.3\python\lib\site-packages\huggingface_hub\file_download.py", line 1371, in hf_hub_download
raise LocalEntryNotFoundError(
I put the checkpoint file in D:\AI\sd-webui-aki-v4.5\models\Depth-Anything and added that path in ComfyUI; then I put the 3 .pth files in D:\AI\sd-webui-aki-v4.5\extensions\sd-webui-controlnet\models and set the same address in the ComfyUI YAML file.
@MrEnzohouang · 2 months ago
I found the file location and fixed the problem myself. Thanks for the edited workflow!
@NgocNguyen-ze5yj · 2 months ago
Wonderful tutorials. Could you please make a video working with people as subjects? (IC-Light and IPAdapter error with face and body.) Thanks.
@cgpixel6745 · 2 months ago
Yeah, I will try. I will upload another IC-Light video soon, so stay tuned.
@user-kx5hd6fx3t · 2 months ago
I can't find the video for the 16:9 version on your channel.
@cgpixel6745 · 2 months ago
I haven't posted it yet; I will do it soon.
@user-kx5hd6fx3t · 2 months ago
@cgpixel6745 Thank you very much.