AI images to meshes / Stable diffusion & Blender Tutorial

180,112 views

DIGITAL GUTS

10 months ago

After my quick proof-of-concept experiment with this technique, I got many requests to explain how I made these meshes and what Stable Diffusion actually does in this case. Here is your guide.
Zoe Depth model
huggingface.co/spaces/shariqf...
ShaderMap
shadermap.com/home/
Background music: (me jamming on Elektron)
• downtempo beats - elek...
Follow me on:
IG: / sashamartinsen
TW: / sashamartinsen
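The core trick shown in the video (AI image → depth map → displaced plane) can be sketched outside of Blender. The following is a minimal illustrative sketch in plain NumPy, not the tutorial's actual Displace-modifier setup; the function name and normalization are my own.

```python
import numpy as np

def depth_to_mesh(depth, scale=1.0):
    """Turn a 2-D depth map (H x W, values 0..1) into a displaced grid mesh.

    Returns (vertices, faces): vertices is (H*W, 3), faces are quad index tuples.
    This mirrors what Blender's Displace modifier does to a subdivided plane.
    """
    h, w = depth.shape
    # Lay out a flat grid in the XY plane, one vertex per pixel.
    ys, xs = np.mgrid[0:h, 0:w]
    verts = np.stack(
        [xs / (w - 1), ys / (h - 1), depth * scale], axis=-1
    ).reshape(-1, 3)
    # Connect neighbouring pixels into quads.
    faces = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x
            faces.append((i, i + 1, i + w + 1, i + w))
    return verts, np.array(faces)

# Tiny 3x3 example: one bright centre pixel becomes a bump.
depth = np.zeros((3, 3))
depth[1, 1] = 1.0
verts, faces = depth_to_mesh(depth, scale=0.5)
print(verts.shape, faces.shape)  # (9, 3) (4, 4)
```

In the video the same displacement happens on a subdivided plane inside Blender, and the mirrored/sculpted cleanup is done by hand afterwards.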

Comments: 262
@pygmalion8952 · 10 months ago
what is the purpose of this, though? it can be used for distant objects *maybe*, but there are easier ways to make those. for general-purpose assets, you really can't pass the quality standard of modern games with this tech. not to mention this is just the base color, and you throw away the aesthetic consistency between models too. ai either makes nearly identical images if you ask, or it just can not understand what you are trying to do at all. plus, if you want symbolism in your game, there are additional steps to fix this, which i think is more cumbersome and boring than actually making the asset. i didn't even mention cinema, since these kinds of assets are pretty low quality even for games. (just to add, it is still ethically questionable to use these in a profit-driven project.) oh, one more thing: games usually require some procedurality in their textures for some of their assets. this can not produce that flexibility either. the only thing that is beneficial is that depth map thing, i guess. that is kinda cool.
@digital-guts · 10 months ago
Yeah, of course, nobody here says that these models can be used in AAA games or cinema as is, and I'm not some brainless "ai-bro" claiming they will be. I've worked in gamedev myself for a while. But there are fields for 3d graphics other than games and cinema, for example abstract psychedelic video art or music videos, heavily stylized indie games, maybe some surreal party poster, etc., know what I'm sayin'. As for cinema and gamedev, I think it can be used in some cases as kitbash parts for concept art, and with proper knowledge of how to build prompts and use custom-made LoRAs and the like, you can get really consistent results from ai generations.
@luminousdragon · 10 months ago
This is a proof of concept; it's brand new. The process can 100% be sped up and streamlined, with ways to get better results as ai art improves. The description of this very video says it's a proof of concept, and people were asking for details. This type of video is for professionals who want to explore different techniques, build off each other's work, and stay informed about new techniques, and it's just interesting. For instance, I make digital art, and one thing I have been experimenting with is making a 3d environment and characters as close as possible in style to some AI art I've already generated, without taking very much time or effort, then rendering it as a video, then overlaying AI on top of it for a more cohesive look. This process could be very useful for that, for multiple reasons. First, if I'm using AI art to make the 3d models, they are going to mesh very well when I overlay the second set of AI art over the 3d render. Second, because the AI art is going to be overlaid on the 3D model, I don't really care if the 3d models don't look perfect; it's kinda irrelevant. Lastly, look at the game BattleBit, which has gone viral recently. Or look at Amogus. Or Minecraft. Not every game is aiming for amazing photorealism.
@AB-wf8ek · 10 months ago
I think it's valid to criticize the quality of the output, but you're missing the point if you think this is trying to be a replacement for the current traditional methods. It's just an experimental process playing around with what's currently available. It's called a creative process for a reason. A true artist enjoys figuring out new and unique ways of combining tools and processes, and this video is just an exercise in that. If you can't see the purpose of it, then you've just removed "creative" from everything you do.
@TaylorColpitts · 10 months ago
Concept art: really great for populating giant scenes with lots of gack and set dressing
@Mrkrillis · 10 months ago
Thank you for asking this question, as I wondered myself what this could be used for
@Arvolve · 8 months ago
Very cool, thanks for sharing the workflow!
@nswayze2218 · 7 months ago
My jaw literally dropped. This is incredible! Thank you!
@MordioMusic · 5 months ago
I don't usually find such good music in these tutorials, cheers mate
@PuppetMasterdaath144 · 9 months ago
I just want to point out, to you people who are dissing this: for a person like me, who had zero clue about any of this, being enticed into trying something I can get actual creative results from is so exciting. I read a few of the technical comments and they're so far over my head; it really shows how specialized this viewpoint is, not generalized to more common people in terms of general knowledge. ok, weird rant over
@EladBarness · 9 months ago
Amazing! Thanks for sharing
@VincentNeemie · 10 months ago
I had this theory at the start of this year, when I noticed you could generate good displacement maps using ControlNets; good to see someone putting that into practice.
@AtrusDesign · 9 months ago
It's an old idea. I think many of us discover it sooner or later.
@orlybarad · 2 months ago
So, I've been diving deep into storytelling and creative videos lately. VideoGPT showed up, and it's like having this magical assistant that instantly enhances the quality of my content.
@ChrixB · 7 months ago
ok, I'm speechless... just wow!
@jaypetz · 10 months ago
This is really good. I like this workflow, thanks for sharing.
@referencetom1276 · 10 months ago
For BG objects like murals on walls and ornaments, this can give a nice 2.5D feel. Maybe it can also speed up design, finding form from a first idea.
@tommythunder6578 · 5 months ago
Thank you for this amazing tutorial!
@danelokikischdesign · 7 months ago
Absolutely amazing! Thank you for the tutorial! :D
@wizards-themagicalconcert5048 · 6 months ago
Fantastic content and video, mate, very useful, subbed! Keep it up!
@dragonmares59110 · 9 months ago
Woah, I think I will try to see if I can remake this tomorrow; it would be a nice way to spend some time, thanks!
@WhatNRdidnext · 9 months ago
I love this! Plus (because of the horror-related prompts that I've been using), I'll probably give myself nightmares 😅 Thank you for sharing ❤
@petarh.6998 · 8 months ago
How would one do this with a front-facing character? Or does this technique demand a profile view of them?
@dmingod999 · 10 months ago
This can be a great process for a rough starter mesh that you can then refine
@pygmalion8952 · 10 months ago
i wrote a long comment on this kind of stuff here, but for this too: you can produce these maps from normal renders by artists, and either way it is ethically questionable if you do not change it and add your own twist.
@dmingod999 · 10 months ago
@pygmalion8952 sure, you can do this from other artists, but it's restrictive because you can only use what already exists -- but if you're generating the images with AI you have much more freedom -- you can sketch your idea or use a whole bunch of other tools that are available to control the AI generation, then make the depth map and do this bit..
@oberdoofus · 10 months ago
very interesting for concept generation - thanks for sharing! I'm assuming you can also upscale the various images in SD to maintain more 'closeup' detail...? Maybe with appropriate LoRAs...
@lightning4201 · 9 months ago
Great video. Do you have a Cinema 4D tutorial on this?
@AlexandreRangel · 7 months ago
Very nice techniques, thank you!!
@joseterran · 9 months ago
nice one! got to try this! thanks for sharing
@AArtfat · 9 months ago
Simple and cool
@s.foudehi1419 · 9 months ago
thanks for the video, very insightful
@ofulgor · 4 months ago
Wow... Just wow. Nice trick.
@johanverm90 · 6 months ago
Amazing, awesome... Thanks for sharing
@filipemecenas · 10 months ago
Thanks!!!! I will try it!!!
@hwj8640 · 10 months ago
thanks for sharing! it is inspiring
@tony92506 · 9 months ago
very cool concept
@jonathanbernardi4306 · 6 months ago
Very interesting nonetheless; thanks for your time, man. This technique sure has its uses.
@williammccormick3787 · 21 days ago
Great tutorial, thank you
@touyaakira1866 · 9 months ago
Please cover this topic more, with more examples. Thank you
@LeKhang98 · 10 months ago
I think using Mirror is a nice idea, but it may not be applicable to all objects. How about using SD & a LoRA to create 2x2 or 3x3 grids of images of the same object from multiple different POVs, then connecting them together instead of using a mirror?
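The multi-view grid idea above is easy to prototype: generate one image containing a tiled sheet of views, then cut it into tiles before depth-mapping each one. A hypothetical sketch (`split_grid` is my own name, not a tool from the video):

```python
import numpy as np

def split_grid(image, rows, cols):
    """Split an (H, W, C) image of tiled views into a list of rows*cols tiles,
    ordered row by row. Each tile can then be run through the depth model
    separately."""
    h, w = image.shape[0] // rows, image.shape[1] // cols
    return [image[r*h:(r+1)*h, c*w:(c+1)*w]
            for r in range(rows) for c in range(cols)]

# A 512x512 sheet of 2x2 views becomes four 256x256 views.
views = split_grid(np.zeros((512, 512, 3)), 2, 2)
print(len(views), views[0].shape)  # 4 (256, 256, 3)
```

The harder part, which the thread doesn't solve, is registering the four resulting depth meshes into one object.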
@spooderderg4077 · 10 months ago
I'm gonna blow your mind: this workflow can easily be improved in Blender 3.5+ by creating IMMs and VDMs. Once you create the object, break it into core components with intersect booleans. Then, if you want an IMM, just save it as an asset. But if you want a VDM, you apply the insertion point of your main model vertically, down towards the bottom of a cube whose top plane has UV coordinates taking up the entire cube. Then delete the other sides of the cube besides the top plane. Select the faces of the top plane (not your merged object), and in sculpt mode create a face set from the edit-mode selection. Then create a shape key (important for later). Immediately mask that face set, then go to the full-mesh faceset manipulation brush, underneath the full-mesh geometry and full-mesh cloth-sim brushes (I forget what they're called whenever I'm not staring at them, but they affect the entire mesh that isn't masked or hidden), and select the second-to-last mode (it should say relax or something). You should get a flat plane again, but with your geometry in the middle. Go to the top of the mesh with Numpad 7 and hit Numpad . to center over the mesh, then hit U and Project From Bounds. Now you have the UVs. Delete that shape key (or set it to 0 if you want variations), go to your VDM baker, type in a name for that part, and click generate at 512. You now have a draggable brush for sculpting. At this point I recommend building a VDM-displacement geometry-nodes network to test it on the baking plane for minor errors, and also to have a more easily editable brush. Finally, rebake the cleaned-up version and you'll have a reusable, completely nondestructive VDM brush of your ai gen.
@kenalpha3 · 9 months ago
video demo?
@spooderderg4077 · 9 months ago
@kenalpha3 I'm guessing you mean the VDM part. In which case, give me an idea of something furry-related and I'll make one, sure.
@kenalpha3 · 9 months ago
@spooderderg4077 Does VDM = [watch?v=lx6p8sJd-QY]? I looked up the term and found that vid. And by furry, do you mean hairy, or do you mean a fantasy character? I'm making a game for UE4.27; it won't handle fur very well (5 might). But I'm also making character sci-fi armor and want to add accent buttons or creases, or accent body parts like spikes or thicker armor skin. If you can do an example with a lizard-type alien or his armor, that would be helpful, thanks. [I already have a lot of alien characters + textures. But I'm thinking I could increase my collection by adding accent meshes to the body or armor - to create a new race, or how they'd look when they "level up."] Also, could you explain how to reuse an existing base texture > apply a small part of it to a new mesh (the smaller accent/overlay mesh), and how to reset the UVs to match this new mesh shape (while the original body mesh UVs and texture do not change)? Thanks. I subbed.
@spooderderg4077 · 9 months ago
@kenalpha3 look at my icon; that's what a furry is, an anthropomorphic animal.
@kenalpha3 · 9 months ago
@spooderderg4077 Yes, I looked at your channel. But you mean low-poly furry, without individual hair strands, correct? Anyway, yours looks like a dragonoid, so that works as an example to show me. Ty.
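For readers unfamiliar with the VDM term used in this thread: unlike the scalar depth/height maps the video uses, a vector displacement map stores a full XYZ offset per texel, so baked shapes can include overhangs. A toy NumPy sketch of applying one (illustrative only, not Blender's actual baker):

```python
import numpy as np

def apply_vdm(grid_xyz, vdm, strength=1.0):
    """Offset each vertex of a flat grid by its texel's XYZ displacement.

    grid_xyz: (H, W, 3) vertex positions of a subdivided plane.
    vdm:      (H, W, 3) per-texel displacement vectors.
    A scalar height map can only push vertices along one axis; the
    3-component map is what lets sculpted overhangs survive the bake.
    """
    return grid_xyz + strength * vdm

# A 2x2 plane where one corner is pushed both up and sideways.
plane = np.zeros((2, 2, 3))
vdm = np.zeros((2, 2, 3))
vdm[0, 0] = [0.2, 0.0, 1.0]
out = apply_vdm(plane, vdm, strength=0.5)
print(out[0, 0])  # [0.1 0.  0.5]
```

Blender's sculpt-brush VDMs work the same way conceptually, just sampled through the brush falloff instead of over a whole plane.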
@gamedev020 · 8 months ago
This is neat. Cool technique.
@issaminkah · 10 months ago
Thanks for sharing!
@kingcrimson_2112 · 9 months ago
Please ignore the salty comments. This is a game changer, especially for mobile platforms. Jaw-dropping results and a pragmatic pipeline.
@tahajafar206 · 7 months ago
What about using the CharTurner LoRA to create the front, back, left and right sides, so merging all 4 sides gives a better and smoother object instead of correcting sides manually? It's just an idea, but I haven't seen anyone try it, so if you can, could you please give it a try and share a tutorial? 3:36
@digital-guts · 7 months ago
i'll give it a try and take a look. i've done some tests with ai characters and they look ok-ish and weird; maybe i'll share the results later.
@games528 · 10 months ago
You can't just plug the color data of a normal map texture into the Normal slot of the Principled BSDF; you need to put a "Normal Map" node in between.
@albertobalsalm7080 · 10 months ago
you can, actually
@games528 · 10 months ago
@albertobalsalm7080 Yes, but that will lead to horrible results. You can also plug it straight into Roughness if you want.
@sashamartinsen · 10 months ago
thanks, i missed that part while recording
@AB-wf8ek · 10 months ago
I don't use Blender, but my guess for why this is, is that the color needs to be interpreted as linear for data processes, versus sRGB or whatever color profile is usually slapped on top of the image when rendering for your screen.
@EGP-Hub · 10 months ago
Also, it needs to be set to Non-Color
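What the thread above is getting at can be shown numerically. A normal map's RGB channels encode a unit vector remapped into [0, 255]; the "Normal Map" node performs this decode, and flagging the texture Non-Color prevents an sRGB transfer curve from bending the vectors first. A minimal sketch of the decode (my own function, not Blender's code):

```python
import numpy as np

def decode_normal(rgb):
    """Decode tangent-space normals from an 8-bit normal map.

    rgb: (H, W, 3) uint8. Each channel maps [0, 255] -> [-1, 1].
    Applying an sRGB curve before this remap would skew the vectors,
    which is why normal maps must be read as Non-Color data.
    """
    n = rgb.astype(np.float32) / 255.0 * 2.0 - 1.0
    # Renormalize to clean up 8-bit quantization error.
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# (128, 128, 255) is the familiar lilac "flat" normal: it decodes to ~(0, 0, 1).
flat = np.full((1, 1, 3), [128, 128, 255], dtype=np.uint8)
print(decode_normal(flat)[0, 0].round(2))
```

Plugging the raw color straight into the Normal socket skips this remap entirely, which is why the result looks washed out and uncontrollable.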
@wrillywonka1320 · 8 months ago
this is awesome! BUT you lost me at mirroring the image and then bisecting to get rid of the extra geometry. i am still a noob at blender and don't know how you did that. was it a shortcut key you used? at 3:35 in the video
@digital-guts · 8 months ago
oh, it's a sped-up part and there are quite a few hotkeys there, but it's very basic usage of sculpt mode in blender. there are many videos on youtube where this stuff is explained; try this one kzfaq.info/get/bejne/edOZY66gq9rHXWg.htmlsi=mKSHWz8SCE8evM6M
@wrillywonka1320 · 8 months ago
@digital-guts thank you! I have been using this, and most images work, but some images invert when I mirror them. Have you ever had this problem?
@shaunbrown3806 · 1 month ago
@DIGITAL GUTS, I really like this workflow. I also wanted to know: can I use this same strategy for humanoid AI characters? You are the only person I have seen use this workflow. Thanks in advance :) also subbed
@digital-guts · 1 month ago
yeah, since this video i've tried a couple of things, and it's kinda ok for characters in certain cases. especially for weird aliens )
@retroeshop1681 · 10 months ago
Honestly, I'm quite impressed; a really cool way to do a lot of kitbashing, really necessary nowadays. I guess now I have to learn how to make AI images hehe. Cheers from Mexico!
@Rodgerbig · 9 months ago
Amazing, bro! But... how did you get the 2nd (BW) image? My SD generates only one image
@digital-guts · 9 months ago
this is the ControlNet depth model; you can get it here github.com/Mikubill/sd-webui-controlnet or use ZoeDepth online from the link in the description
@Rodgerbig · 9 months ago
@digital-guts thanks for the answer! yes, I have it installed, but it gives only one result, and it is different from what is needed.
@Rodgerbig · 9 months ago
@digital-guts ZoeDepth actually works, but I'm trying to do this in SD
@Murderface666 · 9 months ago
very interesting
@miinyoo · 2 months ago
That actually is a pretty decent little quick workflow. Pop that out to something like ZBrush and go to town refining. Is it really good enough on its own? For previz and posing with a quick rig, absolutely. That's pretty fast, tbh, and simple.
@psykology9299 · 9 months ago
This works so much better than ZoeDepth's image-to-3d
@sameh.blender · 10 months ago
Amazing, thank you
@Philmad · 10 months ago
Excellent
@kingsleyadu9289 · 9 months ago
you are crazy 😆😆😆😆🥰🤩😍❤❤❤ i love you bro, keep it up
@timedriverable · 5 months ago
Sorry if this is a newbie question... but is this DreamStudio some component of SDXL?
@SoulStoneSeeker · 10 months ago
this has many possibilities...
@ArturSofin · 10 months ago
Hi, very cool! Please tell me: was the facial animation done in Unreal with Live Link, or is it all Blender?
@digital-guts · 10 months ago
it's MetaHuman Animator inside Unreal, yes, but the recording itself is still done through Live Link; it's just interpreted with much higher quality
@xirlio8532 · 7 months ago
This is honestly good enough for some indie game companies. It might really help some folks out there get assets done faster.
@Karasus3D · 10 months ago
My question is: can I get a diffuse map and turn this into a printable model? I'd love to at least use it to make a base model and modify it from there, for masks and such
@wrillywonka1320 · 7 months ago
Would you say this works better with black and white images?
@digital-guts · 7 months ago
i don't think so. today i'm recording a new video with this technique; it could be useful.
@wrillywonka1320 · 7 months ago
@digital-guts awesome! i've gotten it to work with about 60% of my images, but some get destroyed when i bisect the z axis on mirroring. all the info you've got is useful; this technique is mind-blowing and a major day saver. one last question: you kind of sped over the part where you clean up the mesh after mirroring. me being a noob at 3d software, i could really use some clarification on how you cleaned it up. you made it look so simple.
@user-zx5ts4uk8j · 3 months ago
When I do this with a depth map in 16:9 format, the displacement modifier applies the map as a small 1:1 repeating pattern... why? Note: I made my plane a 16:9 ratio and applied scale before adding the displacement modifier.
@Al-Musalmiin · 5 months ago
I wouldn't mind learning Blender and learning how to do this. Can you do a tutorial on how to run ZoeDepth locally?
@ElHongoVerde · 10 months ago
It's not bad at all (it's impressive, actually) and you gave me very good ideas. Although I suppose this wouldn't be very applicable to non-symmetrical images, right?
@CharpuART · 7 months ago
Now you are literally working for the machine, for free! :)
@shiora4213 · 6 months ago
thanks man
@motionislive5621 · 8 months ago
The Mirror tool has become a life changer LOL
@DanDanceMotion · 10 months ago
wow!
@jvdome · 9 months ago
I did well until the part where I had to sculpt the stuff out; I couldn't come to a solution as easily as you did
@siete-g4971 · 10 months ago
nice method
@n0b0dy_know · 10 months ago
What? Amazing!
@salvadormarley · 10 months ago
How did you get the animated face? That seems completely different from what you showed us in this demo.
@EGP-Hub · 10 months ago
Looks like the MetaHuman facial animator, possibly
@digital-guts · 10 months ago
yes it is, and it's not the point of this video. there are tons of content about MetaHuman on youtube
@salvadormarley · 10 months ago
@digital-guts I've heard of MetaHuman but never tried it. I'll look into it. Thank you.
@ghklfghjfghjcvbnc · 6 months ago
you are a lying clickbait @digital-guts
@JamesClarkToxic · 6 months ago
The more people experiment with new technology, the more cool ideas we come up with and the better uses we figure out for it. This particular workflow may not be usable for anything meaningful, but maybe it inspires someone to try something different, and that person inspires someone else, and so on until really cool uses come out of this.
@digital-guts · 6 months ago
you get the point of this video. i'm just messing around with this tech and trying things. actually, i'm now making a full game using only this and similar approaches to meshes. it won't be anything of industry-standard quality, of course, just a proof-of-concept experiment. having a lot of fun
@JamesClarkToxic · 6 months ago
@digital-guts I've been experimenting with ways to create a character in Stable Diffusion and turn them into a 3D model for months. The first few attempts were awful, but without those I wouldn't have the current workflow (which is getting really close). I also know that the technology is getting better every week, so all my experimenting should help me figure out how to do things once it gets to that point.
@pastuh · 9 months ago
Nice, but I will wait for 360° 3D AI models :X
@timd9430 · 10 months ago
Such a jimmy-rigged way to do things. Do any of these AI generators just offer an option to export or download the 3D mesh file with maps, lighting, etc.? I.e., .3ds, .max, .dxf, .fbx, .obj, .stl, etc. Aren't the AI generators just composing highly elaborate 3d scenes and rendering flat image results anyway? Same for vector-based files: can they just export native vector files such as .svg, .ai, .eps, .cdr, or vector .pdf? AI is a career killer.
@sashamartinsen · 10 months ago
So, is it a jimmy-rigged way or a career killer? You decide. Neither, I think; of course, it depends on your goals. Meshes like this can work only as quick kitbash parts for concepts, not as a final polished product anyway. Did kitbashing kill 3d careers, or did photobashing kill matte painting in concept art? I don't think so.
@zephilde · 10 months ago
No, AIs like Stable Diffusion do not work in a 3D space or with vectors; they work on random pixels (noise) and apply denoising steps learned from a huge image set with descriptions. Your prompt text guides the denoising steps so the model can "hallucinate" something from noise... The fact that a final image looks like a 3D render, vectors, photography, painting (etc.) is just pure coincidence! :)
@timd9430 · 10 months ago
@zephilde Any video links on that exact process?
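The iterative denoising zephilde describes can be caricatured in a few lines. This is a deliberately fake toy, not the actual Stable Diffusion algorithm: the "denoiser" here just nudges pixels toward a fixed target, whereas the real one is a trained U-Net predicting noise under text conditioning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend "clean image" the model would estimate; in reality this estimate
# comes from a neural network and changes at every step.
target = np.linspace(0.0, 1.0, 16).reshape(4, 4)

def denoise_step(x, t):
    # Move part of the way from the current noisy image toward the
    # model's estimate of the clean image.
    return x + (target - x) / t

x = rng.standard_normal((4, 4))   # start from pure noise
for t in range(10, 0, -1):        # iterative refinement, coarse to fine
    x = denoise_step(x, t)
print(np.abs(x - target).max() < 1e-6)  # True: the image "emerged" from noise
```

The point of the toy is only the loop's shape: nothing 3D or vector-based exists anywhere in the process, just pixels being refined step by step.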
@incomebuilder4032 · 9 months ago
Fooking genius you are..
@sebastianosewolf2367 · 3 months ago
yeah, and what software or website did you use in the first minutes?
@digital-guts · 3 months ago
this is the Automatic1111 web UI for Stable Diffusion
@zephilde · 10 months ago
You "accommodate" yourself by sculpting something random from a not-so-accurate mesh; the mirrored thing does not look anything like the original image... Do you have a workflow to get a real mesh of something representative (like a character or landscape)?
@mercartax · 10 months ago
The whole process is sub-any-standard. Kitbashing some weird crap together - that's all this will work for. Maybe in 2 or 3 years we will see something more generally usable. Good luck getting any meaningful model data from AI models these days. It's hard enough to prompt them into what you actually want, let alone transfer that into a working 3d environment.
@armandadvar6462 · 13 days ago
I was waiting to see animation like in your intro video 😢
@Savigo. · 10 months ago
Wait, can you now just plug a normal map into the "Normal" socket without an extra "Normal Map" node? I have to check it.
@Savigo. · 10 months ago
Ok, you can, but it looks quite bad compared to a proper connection with the "Normal Map" node. It seems like the intensity is way lower without it, and you cannot control it without the node.
@zergidrom4572 · 10 months ago
sheeesh
@Ollacigi · 10 months ago
It still needs time, but it's a cool start
@ATLJB86 · 9 months ago
I haven't seen a single person use AI to texture a model using its individual UV maps, and I can't understand why. AI could dramatically speed up the texturing process, but I haven't seen anybody take an AI-generated image and turn it into a 3D model that way.
@googlechel · 2 months ago
Yo, how did you get Stable Diffusion and ControlNet running locally? Is that possible?
@digital-guts · 2 months ago
kzfaq.info/get/bejne/mpecg9l6lbrDl6M.html check this link
@googlechel · 2 months ago
@digital-guts thanks
@nathanl2966 · 9 months ago
wow.
@joedanger4541 · 9 months ago
the R.U.R. is coming
@aleemmohammed7794 · 10 months ago
Can you make a character model with this?
@1airdrummer · 10 months ago
no.
@bigfatcat8me · 4 months ago
where is your hoodie from?
@digital-guts · 4 months ago
i don't remember; i think something like H&M or Bershka, nothing special
@matthewpublikum3114 · 6 months ago
Great for kitbashing!
@stevesloan6775 · 2 months ago
Changing to double-sided vertices is the way to remove and double the texture map data 😂
@younesaitdabachi7968 · 9 months ago
God damn it, you look like that guy who helped Mr. Walter cook in Breaking Bad. By the way, I like your tuto, keep it up
@yklandares · 10 months ago
it's the end-of-the-world VFX
@realkut6954 · 2 months ago
Hello, thanks for the video. Please, please give me a tutorial on tracking 3d armor onto a man in a video with stable diffusion. It's urgent. Sorry for the bad english, I am French
@digital-guts · 2 months ago
kzfaq.info/get/bejne/mLF_ktGHrLHLfHU.htmlsi=j7BOrRMU_8AeXrUe
@realkut6954 · 2 months ago
@digital-guts thanks, my friend. Sorry, I want a video for 2d, not a 3d man. Sorry.
@realkut6954 · 2 months ago
Like the Wonder Studio software
@realkut6954 · 2 months ago
kzfaq.info/get/bejne/nNiGf6R7z9GslmQ.htmlsi=2uVjuK6HK8WwhQWX
@somebodynothing8028 · 8 months ago
I'm using InvokeAI. How do I get ControlNet v1.1.224 to run with it, or where do I find ControlNet v1.1.224?
@abdullahimuhammed6550 · 10 months ago
what about the eye animation and smile though? that's the most important part tbh
@giovannimontagnana6262 · 10 months ago
The face mesh was most definitely a separate, ready-made model. The assets were made with AI
@EmvyBeats · 10 months ago
AI texturing skills constantly amaze me.
@entumonitor · 10 months ago
in the normal map node, the color space should be Non-Color for normal maps!
@Savigo. · 10 months ago
"Linear" is pretty much the same, although he missed the "Normal Map" node in between.
@CBikeLondon · 9 months ago
I think the opposite direction (mesh to AI) is more interesting, as it can then be used for AI training
@zacandroll · 8 months ago
I'm baffled
@stevesloan6775 · 2 months ago
Goodness me… how and why are slow eye movements in the female eyeball-brain so deeply, directly connected to the male brain? 🧠 😂❤
@sburgos9621 · 10 months ago
I've seen this technique before, but at this stage it looks very limited. Without any textures on it, the mesh did not look representative of the object. I feel like adding the textures fools the eye into thinking it is more detailed than the mesh actually is.
@sashamartinsen · 10 months ago
and this is the main point of this approach: to trick the eye
@sburgos9621 · 10 months ago
@sashamartinsen I do 3d printing, so this technique wouldn't work for my application.
@_casg · 6 months ago
Here's a peppery comment
@thesagerinnegan5898 · 2 months ago
what about meshes to ai images?
@digital-guts · 2 months ago
kzfaq.info/get/bejne/fbmHZtBontrXoYk.html
@sandkang827 · 9 months ago
goodbye, my future career in 3d modeling :')
@fredb74 · 6 months ago
Don't give up! AI is just another powerful tool you'll have to learn, like Photoshop back in the day.
@iloveallvideos · 10 months ago
HOLY SHIT
@NoEnd911 · 10 months ago
Jesse Pinkman 🎉😂
@DarkFactory · 9 months ago
This shows that AI images aren't just combinations of random images, but depictions of an actual 3D figure, and that's amazing
@OlegC3D · 9 months ago
AI images are just a combination of images. You can extract depth from any image and turn it into a 3D figure.