Since I made this video I added a "precise style transfer" node to the IPAdapter. You can use that instead of fiddling with the Mad Scientist. It also works with SD1.5 (to some extent). Also, since I've been asked quite a few times now... sorry, we do not have exact data on what each block does. 3 and 6 are pretty strong so they were easy, but other layers also have some impact on both the composition and the style. Some seem to affect text, others the background, others age. At the moment there doesn't seem to be a "definitive guide". I would have told you otherwise 😅
@flankechen • 27 days ago
thanks a lot! So in SD1.5, which block is for style and which for composition?
@CaraDePatoGameplays • 21 days ago
This intrigued me. I'm going to do a lot of tests to see what the other blocks do besides 3 and 6
@MarcSpctr • a month ago
This guy is literally to ComfyUI what PiXimperfect is to Photoshop. I doubt even the people who worked on SDXL had any idea this much control could be gained over the models. Like seriously, wtf???? Amazing work.
@saschamrose6498 • a month ago
I'd say more like Video Copilot is to After Effects
@latentvision • a month ago
naah, I guess the difference is just that I actually share what I find
@GG-hh1sl • a month ago
@@latentvision lol
@DarioToledo • a month ago
Unm3sh
@rhaedas9085 • a month ago
@@latentvision Share, and explain. You're like that one teacher who didn't just show you the math formula, but showed why it was important and how to use it practically.
@831digital • a month ago
Best ComfyUI channel on YouTube.
@miguelitohacks • a month ago
x4096 agree
@leolis78 • a month ago
Matteo, your work is amazing! You are our Dr. Brown, the mad scientist who will give 1.21 gigawatts to the AI to take us to the future. We love you!!! 😄😄😄
@latentvision • a month ago
just doing my part!
@ooiirraa • a month ago
@@latentvision and we are doing our part, loving you and being grateful 🎉
@caseyj789456 • a month ago
Yeah, you are our mad scientist 😂 ❤ Merci Matteo!
@DarkGrayFantasy • a month ago
As always amazing work Matt3o! For those interested in the cross-attention indexes, this is what they target: 1) general structure, 2) color scheme, 3) composition, 4) lighting and shadow, 5) texture and detail, 6) style, 7) depth and perspective, 8) background and environment, 9) object features, 10) motion and dynamics, 11) emotions and expressions, 12) contextual consistency.
@stefansotra2934 • a month ago
Where did you get this info?
@DarkGrayFantasy • a month ago
@@stefansotra2934 Research really, nothing more...
@ceegeevibes1335 • a month ago
wow cool... thanks!
@walidflux • 16 days ago
is 12 the 0.0 index? If there's a clearer description of all these, please link it
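The 12-point list above is community-reported and unverified (the video author says there is no exact data on what each block does). For anyone who wants to experiment, it could be kept as a simple lookup table, treating every entry as a hypothesis to test rather than an established fact:

```python
# Community-reported (unverified) guesses at what each cross-attention
# index influences, per the comment above. Only 3 (composition) and
# 6 (style) are widely agreed on; treat the rest as hypotheses.
CLAIMED_BLOCK_ROLES = {
    1: "general structure",
    2: "color scheme",
    3: "composition",
    4: "lighting and shadow",
    5: "texture and detail",
    6: "style",
    7: "depth and perspective",
    8: "background and environment",
    9: "object features",
    10: "motion and dynamics",
    11: "emotions and expressions",
    12: "contextual consistency",
}

for index, role in sorted(CLAIMED_BLOCK_ROLES.items()):
    print(f"block {index}: {role}")
```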
@TriNguyenKV • a month ago
when it comes to teaching and concise explanations, you are the GOAT!!!! Thank you so much, please keep doing this. Thank you!
@lonelyeyedlad769 • a month ago
Great work as usual, M! I am happy to see that the group experimentation with the UNET layers has led to a node that gives us more control over our generations. Thank you for your continued efforts in this field!
@Archalternative • 4 days ago
Matteo, you're truly incredible with your work... 🎉
@autonomousreviews2521 • a month ago
Love what you're doing for the community - thank you for your time and for sharing :D
@user-ck5sh2um3b • a month ago
You are a mad scientist haha, thank you so much Matteo
@latentvision • a month ago
mad for sure, scientist not so much 😅
@user-ck5sh2um3b • a month ago
@@latentvision haha 😂 keep up the great work, I love your content.
@moviecartoonworld4459 • a month ago
While keeping up with the influx of new features is important, I'm reminded again of the value of an in-depth understanding of a single function. Thank you as always.
@GG-hh1sl • a month ago
Just found the node today and was wondering about its use - thanks for sharing the knowledge!
@jasonchen1139 • a month ago
Incredible content! Your work is undoubtedly the best!
@urbanthem • a month ago
Thanks a thousand, Matteo. Your last statement is something I say time and time again: we use so little of the potential in what's already out there. Brilliantly proving that point.
@HiProfileAI • a month ago
I love the idea of conditioning targeted layers and being able to direct each layer with this kind of control over the cross attention. Thank you Matteo for your continued work and expertise. You give us a lot to play and work with. The implications of the kind of control we can have over image creation and manipulation will last for years. Continued blessings and appreciation to you, good sir. 🙏🏾👍🏾
@mariopt • a month ago
Thanks a lot for this new node, really appreciate it.
@SerginMattos • a month ago
Your work is amazing!
@rsunghun • a month ago
Absolutely amazing 😮
@Showdonttell-hq1dk • a month ago
This is so incredibly cool! Thank you very much. I can't even imagine how nerve-wracking and exciting the coding was for this. :)
@karlwang4837 • a month ago
It was amazing, thank you for the work you have done for the community, I really appreciate it
@ysy69 • a month ago
Thank you. Exactly, we've become conditioned to chase the new shiny toy rather than fully learning and enjoying the old ones. So much can be done with this, looking forward to...
@fukong • a month ago
God of IPAdapter
@legendaryanime69 • a month ago
Always waiting for your great videos, they help me a lot! Thanks
@abdellahla6159 • a month ago
Great node, thanks a lot 😁
@walidflux • a month ago
Again, blowing minds!!!!
@dck7048 • a month ago
Image gen is a tech that seemed like science fiction a couple of years ago, but to have refined it to the point that people in their homes can casually do generations like 7:19 is nothing short of outstanding. Thanks as always.
@ceegeevibes1335 • a month ago
love love love this, going MAD!!!!
@johnriperti3127 • a month ago
Thanks Matteo, this is so good!
@openroomxyz • a month ago
Thanks, that's cool - amazing findings that will help the community
@dreammaking516 • a month ago
Insanely cool. Also just realized you are Italian as well 😂🔥
@euroronaldauyeung8625 • a month ago
genius hacking of the cross attention and a perfect explanation of the indexing.
@madmushroom8639 • a month ago
Very cool! Would love to see some coding sessions. Maybe you could explain your code a bit - more info about the vector sizes, layers etc. :)
@latentvision • a month ago
I was thinking about that... not sure how much interest there would be in that though
@madmushroom8639 • a month ago
@@latentvision Yeah maybe, but your "ComfyUI: Advanced Understanding (Part 1)" video actually performed really well I think, and there you went into more detail. That plus some code examples of what is going on behind the scenes, with your knowledge, would be awesome! Maybe a small poll could show if it's worth your time :)
@Firespark81 • a month ago
This is awesome! ty!
@YING180 • a month ago
so cool, and you are our mad scientist
@Nairb932 • a month ago
Keep up the good work man
@latentvision • a month ago
I try
@Sedtiny • a month ago
Thank you again, my lord
@latentvision • a month ago
most welcome, my liege
@igorkotov8937 • a month ago
Thank you!
@jibcot8541 • a month ago
Very cool, I need to play with IPAdapter more often, but I am often too busy just improving prompts and upscale workflows!
@ryanontheinside • a month ago
this is awesome, thank you
@nerdbg1782 • a month ago
This builds on your previous experimental node where you asked for some help from the community. Glad to see they helped you decipher the layers
@latentvision • a month ago
not to take anything away from the wonderful community, but you've been distracted 😄 Style and Composition was released months ago, way before the prompt injection.
@nerdbg1782 • a month ago
@@latentvision I was speaking about block weights, this one: kzfaq.info/get/bejne/hdiDh5l_1peyhZs.htmlsi=VyhskRDQS5m8JFMX Anyhow, it's nice to see the two combined, regardless of whether it is a new feature or not. Good stuff in either case 🙂
@huwhitememes • a month ago
Awesome, bro
@GoblinWar • a month ago
Cos-XL is so tight, I'm a huge fan
@nelsonporto • a month ago
GENIUS
@mycelianotyours1980 • a month ago
Thank you so much!
@jccluaviz • a month ago
Thank you, Dr. Matteo. I think I need one of your pills to make my days shine. Again, an extraordinary work.
@yvann.mp4 • a month ago
amazing, thanks a lot
@Alice-Coro • a month ago
Amazing video. You do a great job of explaining complex ideas. I've learned so much from your videos.
@Mika43344 • a month ago
W O W!!! AMAZING!
@Billybuckets • a month ago
Until I use this a *lot*, I will have no idea what the different UNet blocks do. Maybe you could put a Note node in the pack containing an estimate of the relative contribution of each block to style, composition, and anything else that might be useful. A++ work as always. Best SD channel around.
@latentvision • a month ago
unfortunately we don't know exactly what the blocks do
@majic_snap • a month ago
My understanding is that Precise generally weakens the weights of more layers, but style has always been a mystery to neural networks - although you have done so well already. I hope you can bring us more surprises, thank you for your contributions! The name 'Mad Scientist' is simply fantastic
@BubbleVolcano • a month ago
Nice work! ❤ It's awesome to see real progress on the U-Net layers. But having this many parameters can make it tough to get started, even for someone like me who's been at it for over a year. It's just too challenging for ordinary people. If the fill-in parameter were changed to four simple options like ABCD, it might be easier to promote. Ordinary people aren't into the process; they're all about the end result.
@swannschilling474 • a month ago
I'll take the blue pill!! 😁 Thanks so much for this one!! 💊
@jensenkung • 6 days ago
7:20 my jaw literally dropped
@divye.ruhela • a month ago
Impeccable naming, we're all a little mad by now 🤣
@gsMuzak • a month ago
you're the man, thanks for all these tutorials!
@kenwinne • a month ago
Matteo, thank you for bringing us the IPAdapter, which gives us solid ground to combat the uncertainty generated by large models. I personally like your explanations of basic theory. Although your lesson is less than 10 minutes, I have studied it repeatedly for several hours. If you have time, please explain in detail the specific functions and applications of the 12 cross-attention layers. Thank you very much for your efforts, thank you!
@bgmspot7242 • a month ago
Nice ❤❤
@johnsondigitalmedia • a month ago
Awesome work! Do you have info on the other 10 control index points?
@glassmarble996 • a month ago
you have so many secrets, Matteo :D
@miguelitohacks • a month ago
HOLY SHIT, this is powerful!
@latentvision • a month ago
IKR?!
@MrGingerSir • a month ago
This is awesome! Are you planning on making a version that works with embeds?
@latentvision • a month ago
why not :)
@MrGingerSir • a month ago
@@latentvision sweet!
@lucagenovese7207 • a month ago
07:20 that stuff is fucking insane.
@manojkchauhan • a month ago
Hey Matteo, just finished your ComfyUI tutorial - seriously impressive stuff! 👍❤ Your breakdown of advanced features with practical examples is super motivating. I'm excited to put these into action and unlock the full potential of ComfyUI. Thanks for sharing your knowledge!
@alxleiva • a month ago
You named that node after yourself, right? You're truly a mad scientist bringing us the best discoveries! Thank you Matteo
@GG-hh1sl • a month ago
How about a widget setting in the IPAdapter node to set the strength of each layer, with a short label of its function?
@latentvision • a month ago
we don't know exactly what the function of each layer is, unfortunately
@krio_gen • a month ago
Unbelievable.
@latentvision • a month ago
believe it!
@krio_gen • a month ago
@@latentvision ))) I dove into it head first. I feel like a Mad Scientist)
@vf4am • 23 hours ago
This is pretty awesome. Great work! I have a question about the cross-attention indexes: are they tied to output or input blocks in terms of merging? I'm wondering if this could help find the best blocks to merge for more precision.
@StudioOCOMATimelapse • a month ago
Very good as always, Matteo. Can you explain all the indexes please? I've noticed only three: 3: reference image, 5: composition, 6: style.
@DanielVagg • a month ago
Great video. Top notch content, as always
@sephia4583 • a month ago
Is there a similar way to apply a LoRA's style to only specific layers? Maybe we could apply a negative weight to the composition layer (e.g. layer 3) and a positive weight to the style layer (e.g. layer 6)?
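No such per-layer LoRA node is shown in the video, but the idea in the question can be sketched in the abstract: scale each LoRA delta by a weight chosen from the block it belongs to, with a negative weight inverting that block's contribution. This is a purely illustrative sketch; the `blocks.{i}` key naming is invented here, not the actual LoRA state-dict format:

```python
import re

# Hypothetical sketch: per-block weighting of LoRA deltas.
# The "blocks.{i}." key layout is invented for illustration; real LoRA
# state dicts use model-specific key names, and deltas are tensors.
def scale_lora_per_block(lora_deltas: dict[str, float],
                         block_weights: dict[int, float]) -> dict[str, float]:
    scaled = {}
    for key, delta in lora_deltas.items():
        match = re.search(r"blocks\.(\d+)\.", key)
        # Unmatched keys keep their full strength (weight 1.0).
        weight = block_weights.get(int(match.group(1)), 1.0) if match else 1.0
        scaled[key] = delta * weight
    return scaled

# Negative weight on "composition" block 3, positive on "style" block 6,
# as proposed in the comment above.
deltas = {"blocks.3.attn": 0.5, "blocks.6.attn": 0.5}
print(scale_lora_per_block(deltas, {3: -1.0, 6: 1.0}))
# {'blocks.3.attn': -0.5, 'blocks.6.attn': 0.5}
```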
@SouthbayCreations • a month ago
Great video, thank you! Where can we find this node?
@kallamamran • a month ago
Wow... You should turn the layers into weight handles and name the layers for what they are :D
@michail_777 • a month ago
And one more question: where can I find an explanation of the index/cross attention?
@noxin7 • a month ago
Matteo, this is amazing work with the Mad Scientist node. My only question (not criticism) is whether you plan to convert the index:weight string into widgets for ease of use, or is there something that prevents that?
@latentvision • a month ago
yeah I can do that :)
@ElevatedKitten-sr6yi • a month ago
🤯
@4rrxw794 • a month ago
🤩🤩
@ParrotfishSand • a month ago
🙏
@neofuturist • a month ago
UPDATE ALL THE NODES!!!! thanks Matteo
@pixelcounter506 • a month ago
One proposal and one remark: you should name it "clever" instead of mad!^^ Your last words are very well spoken. I have trouble keeping track of all the new nodes and developments, in terms of both time and depth of understanding. There isn't even enough time to carefully read through manuals, test nodes, and combine new workflows based on what we already have. The big players are on the run too - see all these new announcements about what their model brings for the benefit of the world (and for their pockets) [irony off]. Nevertheless: interesting, but challenging times! And thank you for your contributions, Matteo, always appreciated!
@nomand • a month ago
Incredible. Apart from style and composition, has the community found consensus on what specific qualities of the image the other indexes affect?
@latentvision • a month ago
not really, unfortunately
@MikeTon • 9 days ago
Amazing and insightful work! A question regarding sponsorship: do you have a preference between GitHub and Patreon? I'm getting so much value here that I want to meaningfully support you, and will default to GitHub support if there's no preference
@latentvision • 8 days ago
hey thanks! I don't use Patreon because I don't have time to push updates. Either GitHub or PayPal at the moment!
@context_eidolon_music • a month ago
Your 666th like is from me. I don't know what I'd do without your brilliant work. Thank you.
@denisquarte7177 • a month ago
"We fail to understand what we already have" - cries in GLIGEN conditioning
@latentvision • a month ago
so true
@kinai_4414 • a month ago
Damn, that's impressive. Could the same logic be applied to a LoRA node in the future?
@pedrogorilla483 • 29 days ago
Did anyone ever figure out what each block of the UNet does? When I was obsessively trying to understand how Stable Diffusion works, I went deep into it but could never get a straight answer. Also, what processes are involved in each block? If I remember correctly each block has layers within it, with ResNets and other things above my pay grade. If anyone can point to a resource I'd appreciate it 🙏
@ooiirraa • a month ago
Woooow, wow 🎉 you are amazing. This is just soooo cool. Why doesn't the negative prompt go with a minus? It would be 3:-2.5, 6:1, and that way the syntax could be consistent everywhere, and people would be able to pass positives and negatives as much as they want.
@latentvision • a month ago
I need to think about it; technically you can send a negative value to the positive embeds, so it's not that simple
@ooiirraa • a month ago
@@latentvision then it could be a letter, like 3:n2.5, 6:1 or 3:2.5n, 6:1 or 3:neg2.5, 6:1 (to make it 100% transparent)
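For anyone experimenting with this syntax, here is a minimal, hypothetical sketch of how a comma-separated "index:weight" string (as discussed above, including negative weights via a plain minus sign) could be parsed; the node's actual parsing code may differ:

```python
def parse_layer_weights(spec: str) -> dict[int, float]:
    """Parse a comma-separated "index:weight" string such as "3:-2.5, 6:1"
    into a {block_index: weight} mapping. Negative weights pass through
    unchanged, matching the minus-sign syntax proposed above."""
    weights: dict[int, float] = {}
    for pair in spec.split(","):
        pair = pair.strip()
        if not pair:
            continue  # tolerate trailing commas and extra whitespace
        index, _, weight = pair.partition(":")
        weights[int(index)] = float(weight)
    return weights

print(parse_layer_weights("3:-2.5, 6:1"))  # {3: -2.5, 6: 1.0}
```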
@aidiffuser • a month ago
Hello man, thanks for sharing this amazing improvement in control! Did something change in the style transfer and composition between the release from 2 days ago and this one? I can't seem to reproduce the same results :( Or is there a way to reproduce the exact layer weights of that previous release within the Mad Scientist node?
@latentvision • a month ago
no, style and composition should be the same. If you have issues please post an issue on the official repository, possibly with before/after images
@gsMuzak • a month ago
a newbie question (maybe): index 3 is composition and 6 is style, what are the others? I don't remember if you already talked about them in your other IPAdapter videos
@rhaedas9085 • a month ago
Look at his video from a few weeks ago about prompting the individual UNet blocks; that's what's going on here. There's still a lot to figure out, and some blocks may be dependent on others, so it's not as clear cut as these two.
@gsMuzak • a month ago
@@rhaedas9085 thanks
@flankechen • 27 days ago
amazing work. Has anyone tested the Mad Scientist on SD1.5? How does injecting attention into a specific block work there?
@latentvision • 27 days ago
I made a new "precise style transfer" node that should work with SD1.5 and makes the whole process simpler
@quotesspace1713 • a month ago
Thanks, that's really cool 🙏🙏. But is it just me? I found almost everything too advanced and couldn't understand what's going on, and I would really love to understand it in depth so that I can add my own ideas and share them. I do have some knowledge of ComfyUI, but this is...
@latentvision • a month ago
check the "basics" series!
@baseerfarooqui5897 • a month ago
hi, thanks for this great tutorial. I'm getting an error while executing: "IPAdapter object has no attribute 'apply_ipadapter'". I tried SD1.5 checkpoints as well as SDXL, but get the same.
@latentvision • a month ago
maybe it's an older version, or an old workflow, or simply browser cache
@isaactut2520 • a month ago
I will say this again, you are simply amazing Matteo! "Shut up and take my money!" 💰
@elegost2570 • 9 days ago
@latentvision Is it possible to combine the image-to-image workflow with even more control to resemble the input? Aka, ControlNet-type options.
@latentvision • 8 days ago
yes of course!
@elegost2570 • 8 days ago
@@latentvision do you have any pointers in that regard? I've tried a few things but keep getting errors :(
@calvinherbst304 • a month ago
dying to know what the other index blocks are!
@latentvision • a month ago
don't we all?! 😄
@afrosymphony8207 • a month ago
please, is the prompt injection node out yet???
@michail_777 • a month ago
Matteo, hi, and thank you. I'm using the Mad Scientist node - thanks for the clarification, I've become more aware of how to use it. I also have one question about the "IPAdapter Encoder" node: it has an input for a mask? The point is that both an input image and a mask should be connected to this node. When using only the input image in the "IPAdapter Encoder" node, the output image adopts the style/whatever. But when I also connect an input mask (I tried just a colored map, an image, a half-painted image), the IPAdapter Encoder node has no effect on the generated image at all. Could you please explain how to use the mask in the "IPAdapter Encoder" node?
@latentvision • a month ago
I'm sorry, I'm not sure I completely understand; maybe join my discord or post a discussion with some screenshots in the IPAdapter repository
@michail_777 • a month ago
@@latentvision Yeah, I already wrote to L2 (quick help).
@AIFuzz59 • a month ago
Do you have a list of what the other index layers are? We are experimenting with this now
@latentvision • a month ago
no, it's difficult to understand. Some are subject specific, for example (e.g. they work with people, not with landscapes)
@nkofr • a month ago
Hi, thanks, wonderful! I just don't understand the point of this custom node having a "weight_type" field if we modify the layers' weights in the bottom input field. Is "weight_type" overridden by the values in the input field?
@latentvision • a month ago
"style transfer precise" uses a different strategy to apply the embeds. You only need it if you want to do the style transfer thing. If you want to experiment with blocks you can select whatever and it will be overwritten (except, again, "precise")
@nkofr • a month ago
@@latentvision Thank you Matteo, that's awesome! Grazie
@AI.Absurdity • a month ago
@tofu1687 • a month ago
... It feels like SD3 is going to have a very hard time