Comments
@LucasPfaff 10 hours ago
Thanks for showing that off again! I had a look at his GitHub and saw ViTMatte, which does have incredible output on hair. I saw Julian Kreusser's tutorial on Advanced Removals on Foundry's Learning Channel, but instead of using only ModNet I keymixed it with ViTMatte (ViTMatte for fine detail/translucency like hair/motion blur, and ModNet for a fast solid body). Then I trained a CopyCat with that (like you also showed in the ComfyUI normal-map video), and the output was intense. Using his trick of "stabilizing" it with the Denoise, I got a very decent matte for roughly an hour of work and 45 min of training on a 100-frame shot. Amazing what we can get these days.
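[For reference, a minimal Nuke Python sketch of the keymix setup described above. This is an assumption-laden illustration, not the commenter's exact script: the file paths, erode size, and node wiring are placeholders to tune per shot.]

```python
# Minimal sketch of the ViTMatte/ModNet keymix described above.
# Assumptions: both mattes are already rendered to disk; the file
# paths and erode size below are hypothetical placeholders.
import nuke

# ViTMatte matte: fine detail/translucency (hair, motion blur).
vitmatte = nuke.nodes.Read(file="renders/vitmatte_matte.####.exr")
# ModNet matte: fast, solid body.
modnet = nuke.nodes.Read(file="renders/modnet_matte.####.exr")

# Erode the ModNet matte inward so only its confident core remains;
# the soft edge region is left for ViTMatte to define.
core = nuke.nodes.FilterErode(inputs=[modnet])
core["size"].setValue(-10)  # negative size erodes; tune per shot

# Keymix (inputs ordered B, A, mask): take ModNet's solid body inside
# the eroded core, fall back to ViTMatte's fine detail elsewhere.
keymix = nuke.nodes.Keymix(inputs=[vitmatte, modnet, core])

# The keymixed matte can then feed a CopyCat node as training
# ground truth, as described in the comment above.
```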
@brunodelacalva6976 22 hours ago
Really good. Thanks, Alex.
@alexvillabon 17 hours ago
Bruno!
@user-vr9dj4eo8v 4 days ago
I love these videos
@alexvillabon 4 days ago
@user-vr9dj4eo8v Thank you! I really enjoy making them.
@LFPAnimations 8 days ago
I have been using RIFE through flowframes for a while now. So cool to see it integrated into Nuke. It really is the best frame interpolation tool out there.
@alexvillabon 4 days ago
@LFPAnimations I had never heard of Flowframes! Thanks for pointing me in that direction :)
@LFPAnimations 4 days ago
@alexvillabon It is a great free program for batching RIFE operations, but having RIFE in Nuke is probably even more useful.
@user-db2dl7wz8y 8 days ago
Looks good
@Osvaldsson 10 days ago
Versioning, it’s so easy now, thanks Alex!
@81sw0le 11 days ago
Just now came across your channel. Are you trying to figure out how to use Comfy + UE5 + Nuke? Not many are trying to innovate like that, and I'd love to know if this is the case, because I'm doing the exact same thing.
@redbeard4979 15 days ago
Thank you so much Alex! It helped me a lot. I used RayRender for a matte painting and had exactly this problem with overscan. I had to go back to ScanlineRender because I didn't have time to deal with overscan for RayRender. And now there is a solution.
@alexvillabon 15 days ago
@redbeard4979 Happy to hear it helped out :)
@RichardServello 16 days ago
For something like the runner, it would work better if it could work temporally.
@alexvillabon 12 days ago
That's the natural evolution. So far no model does that, but I'm sure it's a matter of time.
@RichardServello 16 days ago
That LaMa result is very usable as a first pass. Much better than Photoshop generative fill.
@behrampatel4872 16 days ago
Hi, do you just drag the demo image into ComfyUI and let the Manager figure out the missing nodes? Or did you download the model files separately? Thanks
@alexvillabon 12 days ago
I'd recommend you watch an intro to ComfyUI to get your feet wet. I didn't cover the basics of how to get ComfyUI working because there are a ton of channels out there doing this very well.
@ss_websurfer 16 days ago
Where can I get the pmask node?
@alexvillabon 12 days ago
Any position-mask node will do. There are a bunch on Nukepedia, such as PMatte.
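[For reference, the core of any position-matte node is just a distance falloff computed from the position pass. Here is a minimal Expression-node sketch, assuming a world-space position pass in a layer named "P" with channels P.red/P.green/P.blue; the center point and radius are hypothetical values, not from the video.]

```python
# Minimal sketch of what a position-matte gizmo computes.
# Assumptions: a world-space position pass lives in a layer named "P"
# with channels P.red/P.green/P.blue; center and radius are
# placeholder values to tune per shot.
import nuke

expr = nuke.nodes.Expression()
# Distance from each pixel's world position to a chosen 3D point.
expr["temp_name0"].setValue("dist")
expr["temp_expr0"].setValue(
    "sqrt(pow(P.red - 1.0, 2) + pow(P.green + 0.5, 2) + pow(P.blue - 3.0, 2))"
)
# Soft 0-1 falloff: 1 at the center, fading to 0 at radius 2.0.
# Written into the fourth expression row, which targets alpha by
# default; route it to another channel via the node's selectors.
expr["expr3"].setValue("clamp(1 - dist / 2.0)")
```

From there the output can be blurred or graded like any other mask before being used as a matte.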
@iamimpress 16 days ago
How much RAM do you have, and which GPU are you running?
@alexvillabon 16 days ago
@iamimpress I have 64 GB of RAM and a 4090.
@iamimpress 16 days ago
@alexvillabon Thank you very much. Love the videos - just subscribed :)
@alexvillabon 16 days ago
@iamimpress Happy to hear it! Thank you
@THEJATOXD 17 days ago
Something I have been looking for for months, thanks a lot for the insight
@kietzi 17 days ago
I see this in beauty retouches <3
@CarpeUniversum 18 days ago
Clever
@DreamsIllusions-k8t 20 days ago
Very nice and informative! Fantastic sharing, my friend!
@behrampatel4872 21 days ago
I hope this gets better. However, if we get a clean matte for single frames, we could then use the output to train CopyCat. Cheers
@alexvillabon 21 days ago
Agreed! I have something coming in the next couple of weeks that should be able to do just that :)
@AiLife115 21 days ago
Could you please make a .cat file for the "base" and larger models? Sorry for asking, I don't know any code or programming :(
@EspadaJusticeira 22 days ago
So how can I use this in Nuke? What is the best way? Should I use the green channel as Z?
@fanimations2363 22 days ago
This is just great stuff, need more views!
@JaeohnEspheras 22 days ago
Seems that the lips don't follow the reference if the source already has them open. I thought it would be a good solution for lip-syncing. Perhaps it still could.
@alexvillabon 21 days ago
There are knobs that should be able to help. This is just a product of my first few hours playing with the tool.
@JaeohnEspheras 20 days ago
@alexvillabon It would be good to showcase it once you've spent more time with it. It basically creates a new branch of services for dubbing for foreign audiences, on top of direct vocal translation from the actors themselves. It will kill a lot of dubbing jobs, though it will also enhance them. A scary tool for fraudulent activities as well.
@behrampatel4872 22 days ago
Alex, your channel is wonderful. I luckily found it on my mobile, but on PC it's almost non-existent! Besides subscribing to the channel, what can I do to help? Upvote every vid, something else? Also, is there any way we can bring models from Hugging Face into Nuke? I use ComfyUI and A1111 extensions from GitHub. How do we leverage the many tools out there for the Cattery? Thanks, b
@AdrianPueyo 22 days ago
Great stuff!!!!
@ramaniyer3600 22 days ago
Please make a video on everything that can be achieved with depth maps in compositing; it's going to be very helpful.
@flowvfx1511 22 days ago
Looks very good indeed - is it legally safe to use for commercial work?
@Cragdognamedbear 22 days ago
This also looks much better than the Photoshop one. I usually use depth maps to distort BG plates and put them in 3D. Can't wait to try this new model.
@AiLife115 23 days ago
Looks promising. I always use the Depth Scanner plugin in AE; I will try this one to see how it compares.
@LucasPfaff 23 days ago
This is terrific. I was always wondering if there might be custom Cattery files for some of those awesome models; are there any others yet?
@alexvillabon 23 days ago
I’ll cover more soon.
@LucasPfaff 22 days ago
@alexvillabon Awesome, can't wait :)
@user-db2dl7wz8y 23 days ago
Thanks for the info
@jtsanborn1324 23 days ago
Hey Alex, look at ChronoDepth: the output is temporally consistent and has, IMO, the least flickering I've seen of all the depth estimators I've tried. The only "downside" is that it requires a lot of VRAM... a 24 GB GPU is not enough for long shots, but the result is quite stable!
@reed4109 23 days ago
Alex, this is amazing... Thanks again for sharing all these awesome tricks.
@rc116987 23 days ago
Good explanation of machine learning tools like the Cattery. Keep the new stuff coming, thanks.
@prony5145 24 days ago
Great, thanks a lot! Do you also have a video on Depth-Anything?
@alexvillabon 24 days ago
@prony5145 Coming soon :)
@VFXforfilm 24 days ago
RTX 4070 Ti here: LaMa completely crashed my computer when I viewed the node with a mask added.
@alexvillabon 24 days ago
Oh man, that's unfortunate. It probably hit the memory limit of your GPU. It's strange that it crashed your whole computer, though. Maybe try again or send a report to Foundry.
@glennteel1461 25 days ago
Awesome video Alex!
@alexvillabon 24 days ago
Glenn! Thanks :)
@iamimpress 1 month ago
Is an Apple Silicon processor needed as well? I see it mentioned on the download page, but it's not mentioned here in the video. Thanks!
@iamimpress 1 month ago
Never mind - answered my own question - CRASH! haha
@alexvillabon 23 days ago
:(
@lilulaList 1 month ago
Hi Alex! Thank you so much for this video... and the others. My question: is there any possibility to export/save the image or mask output of the BRIAAI Matting node to obtain a mask? Thanks in advance.
@alexvillabon 23 days ago
Somehow I missed your comment! Yes, you can - just connect that as your output. I have a video coming in the next few weeks about matte extractions inside of Nuke that is MUCH better than that, though.
@src1903 1 month ago
I was really wondering about this AI model. It doesn't seem very practical to me; maybe I will use it on some easy shots. Thanks for the video.
@AlexUdilov 1 month ago
Thanks for the tutorial
@fabiocolor 1 month ago
I initially thought it was only spatial, but in your last example the actor with the glasses had the same ID. This suggests that temporal weights might be applied over time, making it more consistent with luma changes. It's really interesting!
@crisppxls 1 month ago
Interesting and cool stuff, but I think I'll give the boffins a bit longer to cook on this one before diving in.
@alexvillabon 1 month ago
My feelings exactly.
@juancamilo908 1 month ago
Could you track the crypto ID so that, even as the ID assignment refreshes each frame, it stays stuck with the same alpha? I presume it would break any comp otherwise. I think it's a cool tool to learn from, but we are not there yet.
@alexvillabon 1 month ago
@juancamilo908 Yeah, I guess you could, but it's not terribly efficient. I have a video coming soon that does mattes that are REALLY good for much less effort than this. I'm really bummed by Segment Anything's performance, but what I share about mattes next is an actual game changer. I just need time to record it.
@ProzacgodAI 1 month ago
Great tutorial! I don't think I ever saw the final product, though? I saw the end result of the clouds being upscaled, but I would have liked to see the final scene you were making, even if a bunch of steps were missing. Thanks for showing off the workflow!
@alexvillabon 1 month ago
Hi, not sure what you mean. I shared the process of creating skies, then rendering and upscaling them. How you use them is shot-dependent, of course. Thanks for your comment.
@wix001HD 1 month ago
Compared to solutions like Fooocus, Generative Fill, etc., it looks relatively outdated and very rough, almost useless in terms of quality for image inpainting (as expected, since this tool was built on 2021 research). Does it have some benefits for sequence inpainting, or haven't you tested that yet?
@alexvillabon 1 month ago
You are right, this is by no means the most advanced solution out there... not even close. The advantage here is that you don't have to leave Nuke, and it is just one more tool/option in a comper's arsenal. I worked at one of the large studios for almost a decade, in both film and commercials, and I know for a fact that most of the time you don't get access to Photoshop, let alone tools like ComfyUI / Stable Diffusion. As for temporal consistency, Foundry's website states: "LaMa is not temporally consistent, in the example video smart vectors were used to propagate the in-painted area."
@jtsanborn1324 1 month ago
I'm really, really happy with the stuff you show us here; this is the kind of content that I and many others are looking for! For this new LaMa node, it would be interesting to compare it against NNCleanup instead: it is so powerful and has solved things for me that would otherwise take days or weeks, even on moving footage rather than a single frame. Thanks Alex, this is great!
@alexvillabon 1 month ago
Thanks for the kind words, I'm happy you're finding value in my videos so far! I had never heard of NNCleanup before. I downloaded the demo version and it seems to be very good in most instances, but because I don't have a license it adds a very heavy grain/watermark over the images, which makes it hard to judge properly, let alone in motion. If I get a license or a proper trial I'll do a video where I compare results. Thanks for watching and for pointing me in the direction of this tool!
@Osvaldsson 1 month ago
Great vid Alex! Pretty soon I’m not going to be able to keep all these names straight. LaMa, ABME, MiDaS, RAFT, TecoGAN…
@alexvillabon 1 month ago
Ha! Absolutely. Add to that the endless number of ComfyUI models and LoRAs… it's tough to keep track!
@NickPittas 1 month ago
Have you tried using the Inpaint first and then adding the LaMa? In ComfyUI we use it for inpainting that way. Maybe it makes a cleaner plate.
@alexvillabon 1 month ago
Interesting thought. Unfortunately, I just tested it and it gives the exact same result.
@NickPittas 1 month ago
@alexvillabon Yeah, I also tested it as soon as I wrote the comment, and it seems not to respect the contents of the mask, only the surrounding areas. It could still be useful to export the mask and video to ComfyUI and test the inpainting there with crop-and-stitch nodes; maybe you get better quality for large-scale matte paintings and cleanups. And maybe AnimateDiff could even help with the temporal consistency.
@samueljrgensen417 1 month ago
Another great tutorial, Señor Villabon!
@alexvillabon 1 month ago
Thanks Sam! :)
@reed4109 1 month ago
Amazing stuff