The ONLY texture a game NEEDS [UE4, valid for UE5]

68,602 views

Visual Tech Art

1 year ago

In this video we go through a tech art experiment: can I texture an asset using just one texture?
What would that look like, and how would the shader need to be built to turn it into a fully fledged material?
This is essentially an unconventional starting point to discover a few tips and tricks for doing cool stuff, like deriving a Normal map from a Height map, learning how Colour Picking is done and how to manipulate gradients with math, to some extent.
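In shader terms, the core of the experiment looks roughly like the sketch below (illustrative UE custom-node style HLSL, not the exact node network from the video; the packing of RGB base colour + alpha height, the TexelSize/Strength inputs and the roughness constants are assumptions made for the example):

    // One RGBA texture drives everything: RGB = base colour, A = height.
    float4 OneTextureMaterial(Texture2D Tex, SamplerState Samp, float2 UV, float2 TexelSize, float Strength)
    {
        float4 c = Tex.Sample(Samp, UV);

        // Height of the four direct neighbours, used to build a normal
        float hL = Tex.Sample(Samp, UV - float2(TexelSize.x, 0)).a;
        float hR = Tex.Sample(Samp, UV + float2(TexelSize.x, 0)).a;
        float hD = Tex.Sample(Samp, UV - float2(0, TexelSize.y)).a;
        float hU = Tex.Sample(Samp, UV + float2(0, TexelSize.y)).a;
        float3 normal = normalize(float3((hL - hR) * Strength, (hD - hU) * Strength, 1.0f));

        // Placeholder roughness: remap the base colour luminance with some gradient math.
        // The actual remapping in the video is tuned by colour picking; these constants are made up.
        float lum = dot(c.rgb, float3(0.299f, 0.587f, 0.114f));
        float roughness = saturate(1.0f - lum * 0.8f);

        return float4(normal, roughness);   // base colour is simply c.rgb
    }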
======================================================
Discord: / discord
Patreon: / visualtechart

Comments: 278
@D3FA1T1
@D3FA1T1 Жыл бұрын
"the normal map is very important! lets scrap it" proper game development right there
@VisualTechArt
@VisualTechArt Жыл бұрын
I'm glad that someone got it ahahahaa
@reliquary1267
@reliquary1267 Жыл бұрын
It's GENIUS actually, unless you just don't get it
@Johndiasparra
@Johndiasparra Жыл бұрын
Proper technical artist**
@gehtsiegarnixan
@gehtsiegarnixan 10 ай бұрын
Then proceeds to recreate it using bilinear interpolation with 4 extra texture samples, making the improved material way more expensive than the original. Still a very interesting process.
@falkreon
@falkreon Жыл бұрын
Your artifacts aren't coming from texture compression. They're coming from sampling immediate neighbors. You can sample more neighbors or just slide your points out so that the hardware texture filtering samples more points for you. And you're doing a heck of a lot of work to arrive at the built-in normalize node. Just normalize the vector after you scale it.
@VisualTechArt
@VisualTechArt Жыл бұрын
The neighbours are already bilinearly interpolated, and since the original texture was a jpg recompressed to BC5, you can see why I say the artefacts are due to that :) Normalize does exactly the same thing I did, internally. If I were using a Normalize node, that would have meant doing one more dot product for the condition check; this way I'm recycling it in my normalization!
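For reference, "recycling the dot product" can be written out like this in HLSL (a minimal sketch, not the actual graph; the flat-normal fallback is an assumption):

    // Manual normalization that reuses the squared length for the zero check.
    // A Normalize node does the same math internally; the graph's If node becomes a ternary here.
    float3 SafeNormalize(float3 v)
    {
        float lenSq = dot(v, v);                                      // computed once, used twice
        return (lenSq > 0.0f) ? v * rsqrt(lenSq) : float3(0, 0, 1);   // fall back to a flat normal
    }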
@falkreon
@falkreon Жыл бұрын
@@VisualTechArt what are you talking about? normalize usually decomposes to a length (via pythag on hardware), a divide, and a saturate. your divide by zero check happens for free. and the box blur artifact is not from the compression. it's from your unintended box blur. placing your extra samples 1.5px away instead of 1px will help but not eliminate it because the most accurate answer is two gaussian kernels.
@VisualTechArt
@VisualTechArt Жыл бұрын
@@falkreon You can go on shaderplayground.io and check the ISA breakdown yourself: the instructions generated from a normalize and from what I did are identical! :D I'll check the thing you're saying about the kernel and see if I missed something, thanks.
@falkreon
@falkreon Жыл бұрын
@@VisualTechArt No, I get what you're saying: you save three multiplies, which are cheap, while still doing the square root, divide, and saturate, which are the expensive parts, and throwing an IF node on top of that. This is going to become optimized SPIR-V, not GLSL. Use the normalize node so that it's easier to maintain.
@falkreon
@falkreon Жыл бұрын
As an aside here, you're right on target with the memory and streaming bandwidth bottleneck. For Unreal 5 specifically, instead of using a shader to build displacement and normal maps, throw away the normal and displacement maps entirely and use real geometry with nanite. Performance increases dramatically, not with the switch to nanite, but with the removal of these "extra" maps which are trying to approximate what we can finally do for real.
@ShrikeGFX
@ShrikeGFX Жыл бұрын
The thing is that the normal creation works very inconsistently, and you are basically reverting back to CrazyBump times. What looks good on bark might look really bad on a brick. This makes sense if you don't have normal maps available; otherwise all these operations will just cost more performance for less memory.
@VisualTechArt
@VisualTechArt Жыл бұрын
I don't see what the issue is in tuning up the Normal in UE instead of doing it in Maya or Blender or Designer or ZBrush, etc... Unless you're baking from a highpoly, you're still running a similar filter to make it :) And you can still bake a heightmap instead of a normal and run the filter in the shader anyway... And yes, this moves the "price" from "memory reads" to "vector ALU", which, as I point out in the video, is the one with more firepower in the current gen of GPUs :)
@ShrikeGFX
@ShrikeGFX Жыл бұрын
@@VisualTechArt Yes but I see very few people still doing normals from height, I guess from a baked heightmap its good but generally people bake anyways or use already done textures with baked or scanned normals, and megascans have only a low contrast displacement map in mid grays, but I can see the convenience of just using one texture
@mattiabruni5463
@mattiabruni5463 Жыл бұрын
@@VisualTechArt Another problem with your approach is that your generated normal map is losing the highest frequency details and is practically equivalent to a normal map of half the resolution. So you could have your 4K, 1-channel displacement or a 2K, 2-channel normal map and get the same information (if you don't use actual displacement, so it depends). Interesting method nonetheless, but not really one-size-fits-all.
@VisualTechArt
@VisualTechArt Жыл бұрын
I didn't care about the size of the texture, it wasn't the point of the video! But yes, of course using an oversized texture is bad :)
@gamertech4589
@gamertech4589 Жыл бұрын
@@VisualTechArt Does it cost fps or cpu ?
@fernandodiaz5867
@fernandodiaz5867 Жыл бұрын
Wow! I was only doing this with two textures. In the first texture I put the diffuse (RGB) and roughness (alpha). In the second texture I use a normal map compressed into two channels (R, G) and derive the normal Z from those two vectors, and I put ambient occlusion in the blue channel and displacement in the alpha. Incredible with one texture!! Thanks a lot!
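For anyone wanting to try the same layout, a small sketch of how that second texture can be unpacked (names are illustrative; it assumes the channels are stored in 0..1 as described above):

    // R,G = normal XY, B = ambient occlusion, A = displacement.
    void UnpackSecondTexture(float4 texel, out float3 normal, out float ao, out float displacement)
    {
        float2 xy = texel.rg * 2.0f - 1.0f;             // remap 0..1 back to -1..1
        float z = sqrt(saturate(1.0f - dot(xy, xy)));   // same as UE's DeriveNormalZ node
        normal = float3(xy, z);
        ao = texel.b;
        displacement = texel.a;
    }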
@jabadahut50
@jabadahut50 Жыл бұрын
Tbf your method allows more artistic flexibility with traditional tools and includes the AO map which is incredibly useful for reducing performance impact since you don't need ssao. Both are very cool methods of reducing texture bloat. One could combine the two methods... use his method to get roughness and normals from the displacement... and then with a second texture you could have Ambient occlusion, Subsurface Scattering/mesh thickness map, Metalness map, and Opacity map.
Жыл бұрын
A normal map is often imported as a Normal map in traditional game engines. I tried what you proposed above about 2 years ago and got weird-looking AO and Smoothness back then. Am I doing something wrong, or can you explain a bit more about how you import and use these textures?
@kenalpha3
@kenalpha3 Жыл бұрын
Did your performance increase or decrease by switching to 2 texture Materials (more instructions)? More textures = more ram use correct? But less textures, more instructions = higher stress on performance (lower performance if many Materials like this at once)?
@N00bB1scu1tGaming
@N00bB1scu1tGaming Жыл бұрын
@kenalpha3 the impact to sample 2 packed textures is far less compute heavy than whatever you want to call this solution. Most GPUs also have built in architectural optimization to make these additional samples negligible. Just channel pack and atlas, you get the correct results with far less cancer.
@kenalpha3
@kenalpha3 Жыл бұрын
@@N00bB1scu1tGaming I mean what is better performance: 2 textures packed (even with Alpha = doubles mem use in UE4?) OR 4 textures (not tightly packed, Not all RGBA channels used. But someone said UE4 loads all RGBA channels in mem anyways if just 1 channel is connected?) My texture set is 4 to 5, some are loose. My code is like 490 instructions (for a character skin with advanced effects and Material/texture changes).
@SumitDasSD
@SumitDasSD Жыл бұрын
A cool approach to experiment with. Though this technique is inefficient and also can't be used to represent a lot of surfaces. You are basically following the old Photoshop process of creating textures for materials from the Albedo. That approach is nice but will never produce better results. Also, because you are calculating them in the shader instead of using textures, it can be calculation heavy. I feel the trade-off is not worth it. Also, for the Normal, you can use a more detailed Height map and then process it with the mesh normals to get a more accurate detail normal.
@TorQueMoD
@TorQueMoD Жыл бұрын
Wow, you clearly know a lot about shader programming. Well done. I've never seen a tutorial like this :) Liked and Subbed.
@VisualTechArt
@VisualTechArt Жыл бұрын
Much appreciated :)
@schrottiyhd6776
@schrottiyhd6776 Жыл бұрын
"Speaking of which, would you like to explore more the world of photogrammetry" while moving camera around the object with the chunky mouse movement 👍 I like it
@Mehrdad995GTa
@Mehrdad995GTa Жыл бұрын
Should be titled "How to unnecessarily hyper complicate a shading process to achieve a similar result in an unoptimized way" or "How to have 10x shading complexity and bottleneck instead of 1x memory bottleneck" Brilliant 👍
@jackstack2136
@jackstack2136 Жыл бұрын
Agreed, I struggle to find any value from this video other than "I know what I'm doing so I tied my hands behind my back and did it some more"
@Mehrdad995GTa
@Mehrdad995GTa Жыл бұрын
@@jackstack2136 No doubt you are well-educated on shading, I just wanted to share my point of view in a humorous way. Sorry if it sounded offensive, I absolutely didn't mean it to. 🙏
@tech-bore8839
@tech-bore8839 Жыл бұрын
@@Mehrdad995GTa To be fair the bottlenecking is an important caveat to mention, especially if people are going to use this technique. I think people would like to know these things before going through all the hassle of setting it up.
@Mehrdad995GTa
@Mehrdad995GTa Жыл бұрын
@@tech-bore8839 Exactly. Lower-level optimizations can be postponed to the final steps of development, but approaches like this are the kind of thing that, if changed later, usually require re-doing things from scratch. Good point.
@NathanHarris83
@NathanHarris83 Жыл бұрын
There are many scenarios where this approach would be (and HAS been) useful. Reducing memory but putting more strain on ALU might be necessary depending on what platform(s) you are targeting. Industry experience would have likely had you posting a very different type of comment, but alas... here we are. I used a similar technique on more than one occasion for gamejams that had file size constraints. There, that's another good reason to be aware of this technique. I worked on a major project once, and when tasked with some optimization passes, we found several key areas where we could obtain noticeable gains. One of them was reducing VRAM usage by utilizing a similar technique for nearly 60% of the materials being utilized. A batch script and a couple hours of elbow grease later, and our VRAM budget had some more breathing room. Many hobbyists that lack a lot of experience tend to run with the flock of sheep, and when they see something different, can't retrieve the creative engineering mind juice to find a good use for an approach/solution. Don't do that. That holds you back. Store these useful things away, because odds are there will be a point in time when it proves to be useful.
@EXpMiNi
@EXpMiNi Жыл бұрын
Virtual Textures/lightmaps/shadows are there so you don't have the VRAM issue; doing this is transferring one issue (which doesn't exist that much anymore) to another (number of instructions). That being said, I think it's a very interesting approach! I would completely consider that kind of solution if I wanted to have a very, very light project with tons of reused assets using the same texture set everywhere :) !
@VisualTechArt
@VisualTechArt Жыл бұрын
Instruction count is less of an issue at this point in time, I'd argue! But yes, fair point :)
@YourSandbox
@YourSandbox Жыл бұрын
Big fan of your channel, sir. I was thinking of a way to have simplified props and character shaders for MetaHuman and Megascans. Found a thing to keep in mind. Brilliant.
@VisualTechArt
@VisualTechArt Жыл бұрын
Thanks! :D
@DARK_AMBIGUOUS
@DARK_AMBIGUOUS Жыл бұрын
I love watching videos about optimization. I make games for phones and try to have the best graphics so I need this kind of stuff
@toapyandfriends
@toapyandfriends Жыл бұрын
😀'amen!
@albarnie1168
@albarnie1168 Жыл бұрын
Adding an alpha channel doubles the amount of space and memory, because Unreal switches to a compression format with twice the bits per pixel for textures with alpha. Also, in photogrammetry the normal and height are not derived directly from colour, but from the 3D point cloud generated from hundreds of images. Regardless, this is a super fun exercise. The best idea for textures like this is to do two samples: Basecolor and normal. Because the blue channel is not needed in the normal, you can put AO, height or roughness in there. A common technique is to put height into the normal, and then have the AoRM texture. Still 3 texture samples, but not so much more space. Alternatively, if you care less about samples, you could do basecolor, then have height, AO and roughness in a second texture. Same amount of memory as your technique! The stepping issue is due to precision and sample distance, not compression, I think. Compression would show larger blocks with smooth details inside them.
@kenalpha3
@kenalpha3 Жыл бұрын
Can you post the code on BlueprintUE? More textures = more ram use correct? But less textures, more instructions = higher stress on performance (lower performance if many Materials like this at once)? And you do or dont use Alpha channel packed?
@saisnice
@saisnice Жыл бұрын
Very interesting! Could be really useful for stylized/mobile games. Thank you for the video!
@mb.3d671
@mb.3d671 Жыл бұрын
Really good explanation thank you
@WoodysAR
@WoodysAR Жыл бұрын
Since it is a grayscale texture (and normal maps only need two channels), I'd create a proper normal map from the texture, then put the grayscale texture in the third channel and use an alpha for roughness and metallicity. That would give the dimensional normal effect without compromise and still have a single texture. Even more efficient would be to use a single grayscale image for the texture and then a constant variable for roughness and metallicity.
@nicolashernanhoyosrodrigue762
@nicolashernanhoyosrodrigue762 9 ай бұрын
Thanks a lot for you wisdom tutorial!
@Starlingstudio
@Starlingstudio Жыл бұрын
this is amazing, thank you
@penkimat
@penkimat Жыл бұрын
Great quality video. Well done.
@VisualTechArt
@VisualTechArt Жыл бұрын
Thanks!
@cocinando3d
@cocinando3d Жыл бұрын
Your videos are amazing, pure useful content
@lennytheburger
@lennytheburger Жыл бұрын
There is always a tradeoff between compute time and storage (in this case memory); if memory is a concern, calculating many of the maps for a texture is a good solution. Good video.
@plasid2
@plasid2 Жыл бұрын
finally next video from my master
@VisualTechArt
@VisualTechArt Жыл бұрын
@C_Corpze
@C_Corpze Жыл бұрын
I use a texture format called "ARM" which is 3 textures inside one. Ambient occlusion = Red channel Roughness = Green channel Metalness (or use for different mask if material is not metallic) = Blue channel. This reduces the amount of textures from 3 to 1 and simply uses RGB values. I use ARM textures almost everywhere because they save a lot of memory and still look really good. And because they still can provide a lot of material information or since their unused channels (such as metallic) can be used for various types of masks they can be used for many things. Also you can reduce the amount of texture samplers by setting the sampler nodes to "shared wrapped" mode. If you then sample the same texture multiple times it's seen as 1 sample.
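As a sketch, sampling an ARM texture packed that way boils down to one fetch feeding three PBR inputs (assuming the channel layout described above; in the material graph you would just route the R/G/B pins to the corresponding outputs):

    void SampleARM(Texture2D ArmTex, SamplerState Samp, float2 UV,
                   out float AmbientOcclusion, out float Roughness, out float Metallic)
    {
        float3 arm = ArmTex.Sample(Samp, UV).rgb;   // one sample instead of three
        AmbientOcclusion = arm.r;
        Roughness        = arm.g;
        Metallic         = arm.b;                   // or reuse as a generic mask on non-metals
    }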
@karambiatos
@karambiatos Жыл бұрын
wow good job you discovered.... the unreal engine basic documentation....
@jakubklima5193
@jakubklima5193 5 ай бұрын
​@@karambiatos Damn, bruh, do you have self-esteem issues? What he said might actually help someone; there's no need to be toxic and demotivating.
@AllExistence
@AllExistence Жыл бұрын
What you built is a very specific shader for a very specific case. You sacrificed alpha and roughness because you didn't need them in this case. But it makes the shader useless for any other texture that needs them. For example, roughness may not match the albedo.
@TriVoxel
@TriVoxel Жыл бұрын
I could see a simpler and more performance-friendly version of this being really great for adding small, low resolution models for finer detail, but for more prominent models such as characters, buildings, major set pieces, etc. it would be better to have all the real textures. I think it is more practical to combine things like roughness, height, metallic, into a second texture, and use a separate normal and color map. This gives you three textures as opposed to 5, and is typically much better looking than pulling from 1 texture as you get all the benefits of PBR graphics, with the smaller memory footprint of less textures, but less shader math than your method. While I agree that many modern developers tend to waste performance on poorly compressed textures and using many B/W maps to achieve what could be a single texture, it is important to remember that these "extra" maps exist for a reason, and that reason is to simulate the tiny changes in a surface based on real materials, rather than the older method of just approximating everything with fake or baked-in texture trickery. These tiny surface details cannot truly be achieved for most assets with a technique like this.
@sciverzero8197
@sciverzero8197 Жыл бұрын
I notice significant drop in detail and shading clarity on the derived version compared to the normal mapped version. Moreover about not being generalizable... you're absolutely right. MOST game assets won't be able to derive their normal maps because most assets won't be using maps derived from one source. Most assets will be using maps derived directly from a model rather than from texture, and many of these textures will be greatly different in resolution from each other. Normal maps in particular are often 2 to 4 times the resolution of diffuse or height maps, and roughness maps tend to have absolutely catastrophic results when not mapped exactly to the right texel brightness. I've had no end of trouble trying to paint my own roughness maps by eye or using color sampling from other maps, because it just doesn't map in a simple linear way that can be interpreted. The reason these maps are usually baked into stored textures is actually because deriving them at runtime is... a bad idea. You lose a lot of quality (as I noted in the comparison here, though you seem not to have) if done the simple way, and you lose a lot of processing power if you do it the correct way that image processors that actually produce _good_ textures do it. (most texture processors do not produce good textures, which is why some texture bundles are a more expensive than others... more effort, better technique, and not all derived from diffuse... though some are just scams too.) If you really want to cut down on your memory footprint.... you can just not use lighting. Bake your lighting data into the vertex color channel of your objects, derive a general edge modulus value from your normal map to use on the baked lighting, and apply a diffuse texture. If you want height data... sample your diffuse map at about... 1/4 ~1/8 scale and normalize the values, or just bake the correct shape of your models.... into the model. Vertices and polygons are generally less memory intensive than textures, so... unfortunately, having several million polygons in your scene is more efficient on memory than having highly detailed textures OR derived texture information. And usually... the extra polygons aren't needed, because you can get rid of hundreds of thousands, quite readily, for one normal map or parallax map that is significantly lower resolution and can be used across multiple objects, than the footprint of the high resolution mesh. Most mesh details aren't necessary at all, because you won't see their silhouette or deformation ever, and this is the real problem with modern development. No one... is optimizing, because they've been taught that they don't need to anymore. A little work in the front-loaded asset development workflow goes a hell of a long way toward making a game run better in all ways. Taking careful thought to how your assets will perform when making them can avoid the headache of trying to get more performance out of the game you've already put together.
@Catequil
@Catequil Жыл бұрын
The issue with moving the height map into the alpha channel of the albedo map is that the alpha channel is generally uncompressed, meaning it takes up as much memory as the RGB channels combined. You're better off having two RGB textures than one RGBA texture.
@OverJumpRally
@OverJumpRally Жыл бұрын
Exactly my thought. Also, if you consider that you can have the Normal map at half the size and the Roughness/Metalness one at 1/4, you end up with way more space than just having those elements on separate textures. But it's true that it could be useful to create a Normal map from Albedo if you have a scanned object, that would be a great scenario.
@stefanguiton
@stefanguiton Жыл бұрын
Excellent
@82FGDT
@82FGDT Жыл бұрын
embedding textures in the alpha channel is my life for the past year. In the Source Engine (the engine for half life 2, portal, CSGO, etc if anyone didn't know already) you have to embed roughness into the normal map's alpha or else you can't have both at the same time
@TroublingMink59
@TroublingMink59 Жыл бұрын
Technically, you could just bake in Displacement (Which Normal data is usually inherited from) in an external 3D program, and simply use a roughness channel in the alpha channel of the diffuse. To make things extra crazy, you could retain a metal map encoded into the same map as the roughness at the cost of half precision roughness by making metal roughness features use the top 50% of the value range and diffuse roughness for the lower 50%. Then just feed this new encoded roughmetalness into a multiply node set to 2, and then a pair of IF nodes. For the roughness output IF node, just have it pass through for values less than 1, and have a roughness-1 version come through for values greater than 1. For the metal IF node, I would just set a 0 float for values less than 1 and a 1 float for values greater than 1. The biggest caveat of this method is that nonbinary metal textures would have to be crushed into binary ones. Another caveat is that every mesh using a different scale for the texture would have to be tessellated individually Another thing worth mentioning is that you would not always need one vertex per pixel on your mesh to retain normal data on the mesh faces if you use an angle based dissolve on the mesh. But you might need to. And you definitely would need a lot of vertices. This method really leans heavily on Nanite to do heavy lifting, but, so does using WPO inputs. This also lets you skip the unreliability and low quality of WPO and Derived Roughness maps and ignore the vertex shader. It is also an overall simpler shader graph.
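A sketch of the decode step described above, assuming the encoding packs non-metal roughness into 0–0.5 and metal roughness into 0.5–1 of the channel:

    // The multiply-by-2 plus the two If nodes, written as HLSL. Roughness precision is halved,
    // and metalness becomes a binary value.
    void DecodeRoughMetal(float encoded, out float roughness, out float metallic)
    {
        float v = encoded * 2.0f;
        metallic  = (v > 1.0f) ? 1.0f : 0.0f;
        roughness = (v > 1.0f) ? v - 1.0f : v;
    }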
@VisualTechArt
@VisualTechArt Жыл бұрын
I'm not sure if I got everything you're saying but yes, if you want to really go wild there's much more you can do!
@asdfghjklmnop850
@asdfghjklmnop850 Жыл бұрын
our TA's have that Material Template that utilizes just the base color only (for optimization purposes of course) and it automatically converts to Height, roughness, normal. But we Environment Artists mostly use his other material template which uses (Base Color + Height, AORM (AO, Roughness, Metallness), and Normal Map. It just looks better IMO and easier to control. Although our TA recommends Atlases for environment to save memory. btw, AORM can be set to a lower size to also save memory, doesn't affect the final output that much.
@VisualTechArt
@VisualTechArt Жыл бұрын
Interesting! I'd say that if I were able to reliably convert the usual multi-map pipeline to a one-texture one, I'd try to make an automation that converts all the materials to one texture at build time, so artists can work as usual but the game gets converted when it's built :) I should actually measure performance first to see if it would really be a gain, though.
@Mireneye
@Mireneye Жыл бұрын
@@VisualTechArt A tool that would atlas textures already applied in a material would be lit!
@AFE-GmdG
@AFE-GmdG Жыл бұрын
Very interesting technique. I wonder how much calculation per pixel is too heavy. I guess it's a race between memory usage and calculation time, and it depends on the current situation. It may be better to simply reduce a 4k texture to 1k or 2k to reduce the memory footprint and make use of an ORM texture.
@VisualTechArt
@VisualTechArt Жыл бұрын
Whatever texture size you use, using less textures (of same size) is always a gain in memory!
@chillfactory2149
@chillfactory2149 Жыл бұрын
Not too relevant question, what is the source of your pfp?
@daveyhintzen53
@daveyhintzen53 Жыл бұрын
It's worth noting that in unreal using the alpha channel doubles the memory usage of the texture compared to using one without alpha. RGB uses DXT1 compression which is 64bits per block, while adding an alpha uses 128bits per block (64bit for the colour and another 64 for the alpha values). So purely from a memory point of view a 2k RGBA texture is as expensive as 2 2k RGB textures. Spreading it over 2 textures has the benefit of possibly giving you 2 additional channels to play with, though you have less bits per channel to work with. Another useful thing to note is that the bit count per channel is different. Simplified you have 16 bits per RGB value, split as 5:6:5 bits, meaning you get better quality out of the green channel.
@kenalpha3
@kenalpha3 Жыл бұрын
Can you post an example optimized code on BlueprintUE, that only uses RGB? but multi texture? (So I can see what you pack vs what you calculate). Ty
@cmds.learning7426
@cmds.learning7426 Жыл бұрын
amazing! i will pay for your tutorial
@mwjvideos
@mwjvideos Жыл бұрын
I am not very good at the shading process, but after all these years of development I understood one thing: whatever you do, do not ever mess with the normal map texture.
@nonchip
@nonchip Жыл бұрын
"do they look the same?" my eyes: ummno? "good!" OY!
@sc4r3crow28
@sc4r3crow28 Жыл бұрын
Interesting. I wasn't thinking that calculating textures in the shader would be better than sampling a texture... but it makes sense. Unfortunately I would not want to lose roughness textures. But would it be good to use the same textures across multiple materials? If, for example, I have the same roughness texture I put on multiple materials. Currently I use AO, Roughness and Metal in one RGB texture for a material. But would it be a benefit to split this and reuse the same Roughness texture across materials?
@VisualTechArt
@VisualTechArt Жыл бұрын
Having a single-channel texture is generally a waste... And yes, deleting the Roughness is a risky thing, it really depends on the assets. But you could have cases where you don't need the Base Color, maybe, so you get 3 channels to play with :D The texture in this case should be more flexible in what it can contain, I think (speaking of a hypothetical game that takes this approach as its production pipeline).
@N00bB1scu1tGaming
@N00bB1scu1tGaming Жыл бұрын
This process saves memory, yes, but trades memory for compute. You can get better results by packing and atlasing to shrink your texture inputs into 2 textures anyhow and more objects at the same time. GPUs continue to gain routine op improvements to make PBR math faster and faster, so skipping these inputs can actually be slower on top of being more compute heavy. Neat experiment, and I do agree there was a loss somewhere for hefty ops in favor of brute force, but this method is not the way.
@ls.c.5682
@ls.c.5682 Жыл бұрын
This reminds me of a project i did as a hobbyist before i got into the industry where I used height map generated terrain so of course had to calculate the normals in the shader. However, like any engineering problem this has massive tradeoffs. 4 texture samples per pixel? That could be a lot of bandwidth, granted there might be some values in the gpu cache lines depending on the tiling mode of the textures but I'd be curious running this through something like pix to see the overall cost. Also I wonder about the VALU cost of all the calculations per pixel across shaders running on a gpu unit. With block compression of normal maps and other textures to help with memory I'm not sure if this would be a net win. I could be wrong, but i need to see metrics. Creative solution though, and good for thinking originally
@VisualTechArt
@VisualTechArt Жыл бұрын
I'm quite confident that Cache Misses would be fairly low and ALUs wouldn't be causing a bottleneck! But yes, giving it a run on PIX would give the answer
@theluc1f3r93
@theluc1f3r93 Жыл бұрын
I use only 1 texture as both bump and normal map in Unity, in many event games and small apps for phones etc., and I always bake them in the 3D package + in Unity. It was way more efficient, but in Unreal it looks way better (compared to the Standard shader in Unity, not others like Uber etc.).
@lorenzomanini1017
@lorenzomanini1017 Жыл бұрын
Nice approach! I also experimented with a different way of dealing with masks for color overlaying, in order to pack a max of 44 different masks in a single texture, instead of the classic 4 you can get by using only the RGBA channels. Maybe we can have a chitchat on that if you're intrested in making a video about it to share the knowledge
@VisualTechArt
@VisualTechArt Жыл бұрын
Definitely interested in that! Were you manually assigning bits for that? You can join my Discord Channel and we can talk there if you fancy :)
@kenalpha3
@kenalpha3 Жыл бұрын
Im also interested in learning optimized masks. Can you post the code on BlueprintUE in the meantime (4x color mask is ok).
@multikillgames
@multikillgames Жыл бұрын
The one on the left looks more real, maybe it's the lighting, or something. The one on the right looks kind of off? Oh found out why. I wouldn't do this but I like the idea. Plus you can highly optimize textures already anyways. But it could be a good decision.
@Mjp11111
@Mjp11111 Жыл бұрын
Out of interest, how would you go about optimising your textures instead?
@poly_elina
@poly_elina Жыл бұрын
@@Mjp11111 Channel packing is one way to reduce the amount of texture files: "Channel packing means using different grayscale images in each of a texture's image channels... Red, Green, Blue, and optionally Alpha. These channels are usually used to represent traditional RGB color data, plus Alpha transparency. However each channel is really just a grayscale image, so different types of image data can be stored in them. " From Polycount wiki
@maybebix
@maybebix Жыл бұрын
Wow, very interesting material! 👍But in theory, is it possible to pack multiple grayscale maps into one channel and split them in a shader? I saw something like that in Bungie presentation from GDC, they packed 7 params into 4 channels
@VisualTechArt
@VisualTechArt Жыл бұрын
Ah, like packing two maps in one channel by doing bit operations? Yes you can do it, I never personally tried. The downside would be having a map with a way lower value range, so if you need it for maps that are very blocky and mostly uniform colours it would be a smart way to compress them!
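As an illustration of that kind of bit packing (a common scheme, not something from the video), two masks can be quantized to 4 bits each and share one 8-bit channel; each map only keeps 16 levels, and the channel has to stay uncompressed (or be point-sampled) to survive:

    // Packing, done offline in a tool: two 0..1 masks into one 8-bit channel value.
    float PackTwoMasks(float a, float b)
    {
        float ai = floor(saturate(a) * 15.0f + 0.5f);
        float bi = floor(saturate(b) * 15.0f + 0.5f);
        return (ai * 16.0f + bi) / 255.0f;
    }

    // Unpacking in the shader.
    void UnpackTwoMasks(float packedValue, out float a, out float b)
    {
        float v = packedValue * 255.0f;
        a = floor(v / 16.0f) / 15.0f;
        b = fmod(v, 16.0f) / 15.0f;
    }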
@johnsarthole
@johnsarthole Жыл бұрын
I've shipped a game where we did that. It will work, precision is an issue - especially with the lower quality compression methods - but you get some of that back from texture filtering.
@mncr13
@mncr13 Ай бұрын
@maybebix do you have the link to that video by any chance? thankss
@maybebix
@maybebix Ай бұрын
@@mncr13 sure, it was called "Translating Art into Technology: Physically Inspired Shading in 'Destiny 2'" by Alexis Haraux, Nate Hawbaker. You can find it on gdc vault site
@reliquary1267
@reliquary1267 Жыл бұрын
The fact that he's doing this with purely math and advanced shader programming knowledge is genius enough for me, regardless of what anyone's opinion might be of the method
@Sweenus987
@Sweenus987 Жыл бұрын
For the normal calculation, could you do a single small pass at bluring it to help with the steps?
@VisualTechArt
@VisualTechArt Жыл бұрын
Yes, one better way to calculate it would be a 3x3 Sobel filter, which is effectively the derivative after a Gaussian blur :) It would need 8 samples though (it also considers the neighbours on the diagonals).
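For reference, the Sobel version looks roughly like this (a sketch; the height is assumed to live in the alpha channel, and the sign of X/Y may need flipping depending on your normal convention):

    float3 SobelNormal(Texture2D Tex, SamplerState Samp, float2 UV, float2 TexelSize, float Strength)
    {
        float tl = Tex.Sample(Samp, UV + TexelSize * float2(-1,  1)).a;
        float t  = Tex.Sample(Samp, UV + TexelSize * float2( 0,  1)).a;
        float tr = Tex.Sample(Samp, UV + TexelSize * float2( 1,  1)).a;
        float l  = Tex.Sample(Samp, UV + TexelSize * float2(-1,  0)).a;
        float r  = Tex.Sample(Samp, UV + TexelSize * float2( 1,  0)).a;
        float bl = Tex.Sample(Samp, UV + TexelSize * float2(-1, -1)).a;
        float b  = Tex.Sample(Samp, UV + TexelSize * float2( 0, -1)).a;
        float br = Tex.Sample(Samp, UV + TexelSize * float2( 1, -1)).a;

        // 3x3 Sobel kernels: a derivative with a built-in blur
        float dX = (tr + 2.0f * r + br) - (tl + 2.0f * l + bl);
        float dY = (tl + 2.0f * t + tr) - (bl + 2.0f * b + br);

        return normalize(float3(-dX * Strength, -dY * Strength, 1.0f));
    }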
@Potatinized
@Potatinized Жыл бұрын
Inaccurate methods for all the other textures, including the heightmap, which is supposed to have a higher bit depth than the normal textures we use for colour maps. But will this solve the infamous VRAM limitation issues? Because a small inaccuracy that enables more stuff we can use in a scene is subjectively better.
@wpwscience4027
@wpwscience4027 Жыл бұрын
Another tip for normal maps: I had good success multiplying in a blur constant to my image kernel texel sampling so I could play with how much angle I wanted to matter. Anything below 4 doesn't seem to overlap much and allows you to fuzz your angles since sometimes you don't want hyper detail but other times you do. I also added a saturate before the derive normalZ because depending on what that multiplier is you can push in some dead values there also by accident. I didn't notice any appreciable performance change BUT this does speed up my asset creation pipeline because with a lot of things I can go over to a single CH packed texture instead of a CR and NH and that means less to do and less to keep track of.
@VisualTechArt
@VisualTechArt Жыл бұрын
You mean like doing the Sobel Operator for the Normal? That's a solution too! Actually be careful because if you saturate before the DeriveNormalZ you wipe out the -1 to 0 values, which results in a wrong normal :) Spoiler: I recently had a look at performances with RenderDoc and found out that my material has the same performance as one that uses 2 "classic" RGB textures :) So there's a gain if you're able to use this approach to replace 3 or more in theory!
@wpwscience4027
@wpwscience4027 Жыл бұрын
@@VisualTechArt Your spoiler tracks well with my experience as my original channel packed mat fit your description. I don't think I introduced any saturation problems as my change was way upstream in the UVs of the normals.
@wpwscience4027
@wpwscience4027 Жыл бұрын
@@VisualTechArt I noticed today unreal comes packed with a normal from heightmap function. You might also check a comparison in perf and quality of pulling the normal out of the rgb vs out of the heightmap since you have both.
@SatikCZE
@SatikCZE Жыл бұрын
Would love to see some benchmark to see how it performs
@VisualTechArt
@VisualTechArt Жыл бұрын
Me too ahahahah, I think I'll do a stress test in the future :)
@SatikCZE
@SatikCZE Жыл бұрын
@@VisualTechArt would be great :)
@3diec811
@3diec811 Жыл бұрын
Awesome explanation! Is this more expensive in performance than using the normal map?
@VisualTechArt
@VisualTechArt Жыл бұрын
It probably is, but if the usage of the cache is as efficient as I think, it may actually not be that different. I want to profile it in the future :)
@SenEmChannel
@SenEmChannel Жыл бұрын
I'm developing a VR game, and roughness, specular and ambient occlusion don't stand out on solid materials like rock or brick, so I dump them all. I only use diffuse and normal, and it looks good, combined with custom data, optimized texture size, optimized UVs, optimized vertex count, optimized LODs, optimized mip maps, optimized culling, etc.
@benceblazsovics9123
@benceblazsovics9123 Жыл бұрын
First of all, splendid work! Love it!
@VisualTechArt
@VisualTechArt Жыл бұрын
I don't use Blender that much but... Doesn't it have the equivalent of UE's Custom Node for materials, where you can type in you HLSL code?
@creedence999
@creedence999 Жыл бұрын
Very, very useful, thanks :D
@cepryn8222
@cepryn8222 Жыл бұрын
Maybe I don't understand exactly how channel packing works in terms of optimization compared to this method, but wouldn't a channel-packed texture be a similar performance gain while using a much faster workflow? If we, let's say, use a channel-packed texture with Diffuse, Normal and Roughness (or anything that's needed), you can basically just plug in the texture, add a fraction of the instructions you use here to the material, and the work is done. Please correct me if I'm wrong, and thanks for the awesome work :)
@VisualTechArt
@VisualTechArt Жыл бұрын
This video was more of an academic experiment :) To understand if an approach like this may be worth we should test it in a much wider context (what's the production pipeline, system requirements, actual performance measurements, etc)
@arsenal4444
@arsenal4444 Жыл бұрын
A good way to test this for real world cases would be to turn it into an asset package or addon that crawls through and modifies all files in a project. Then any project made in the usual pbr style could have a second copy made for testing and have this modification applied to all materials project-wide. Then it's just a matter of running some tests to see results of running things this way, especially if it's a realistic graphics project, as apposed to stylized. I'd be really interested in the frame times and vram usage in said comparison of a project running on the usual method and this one.
@VisualTechArt
@VisualTechArt Жыл бұрын
That would be a proper tech art project! I might give it a chance in the future :D
@arsenal4444
@arsenal4444 Жыл бұрын
@@VisualTechArt I think it would be a win-win for your channel as well as viewers. It's a bit funny to think about how on the hardware side there's extreme obsession with cpu and gpu spec comparisons, whereas running the same type of test on the software side, which is what this would be a demonstration of, is much more rare. They're both equally part of the end result, but seems only one gets most of the analysis.
@VisualTechArt
@VisualTechArt Жыл бұрын
You're right :D And I'd argue that software at the moment is WAY more important than hardware, I think. There are tons of HW tests because they're easy and everybody is able to put a GPU in and run some apps that someone else made
@N00bB1scu1tGaming
@N00bB1scu1tGaming Жыл бұрын
Save you the effort, this method is compute heavy and skips a lot of micro architecture optimizations. You are trading vram for compute. While there is definitely a need and push for more efficient ways to handle PBR input, you will get better results packing your PBR maps into 2 packed textures and letting the GPU run the proper math ops with its architecture optimizations.
@arsenal4444
@arsenal4444 Жыл бұрын
​@@N00bB1scu1tGaming I think in most cases, you'd be right. But idk if 'most cases' would be closer to 51% or 99.9%, so if it were possible to have a way to approximately test this project-wide without too much hassle, that would be ideal (if that's not possible then I guess this was all just an interesting 'what-if?'). Point being is, if a project is in it's total volume of assets weighted either too unevenly into either compute or vram use, as apposed to optimally balanced, testing it in a project-wide optimizer may be a working solution. If it does come out more performant, from there it would just be a matter of testing to clean up any errors after modifying everything.
@EclyseGame
@EclyseGame Жыл бұрын
you are very smart, take my sub incredible value tutorial thank you
@VisualTechArt
@VisualTechArt Жыл бұрын
Thanks!
@y.h.lee.5288
@y.h.lee.5288 Жыл бұрын
So with this method you can reproduce each of the individual textures from a single texture. Amazing.
@roadtoenviromentartist
@roadtoenviromentartist Жыл бұрын
One question, Master: at minute 5:59, calculating the slope (derivative in X and Y of the neighbouring pixels)... would it be faster at this step to use DDX and DDY? I think you could avoid repacking the normals and normalizing them. Thanks. :)
@VisualTechArt
@VisualTechArt Жыл бұрын
Not sure I understood your message honestly 😅
@TomSwogger
@TomSwogger Жыл бұрын
Is this available for download somewhere? I don't think I'm savvy enough to create this, but I'd like to play around with it!
@VisualTechArt
@VisualTechArt Жыл бұрын
I wasn't planning to upload it as it's not a versatile material (didn't take the time to add parameters etc..), but I may do that in the future!
@MeatFloat
@MeatFloat Жыл бұрын
Hey! You could actually subtract the vectors by 1, then sign the result, then lerp with the signed result between your divided result and the default mult100 to avoid the if statement. ;D
@VisualTechArt
@VisualTechArt Жыл бұрын
True! But I went for the IF because it doesn't actually get compiled as an actual branch, and it was more straightforward to follow, I think :)
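To make the "compiled as a ternary" point concrete, these two forms normally end up as the same select instruction once the values on both sides have been computed (a sketch, not the actual graph):

    float PickBranchless(float x, float threshold, float a, float b)
    {
        float viaTernary = (x >= threshold) ? a : b;          // what the material If node typically becomes
        float viaLerp    = lerp(b, a, step(threshold, x));    // the sign/lerp style trick mentioned above
        return viaTernary;                                    // same result as viaLerp
    }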
@KittenKatja
@KittenKatja Жыл бұрын
This video didn't make me realize anything, but it made me relive an old idea I had with transparent pictures. Is it possible to remove all white from the picture, and translate it into alpha channel? This way, if there's a white background, the transparent picture will appear to be completely normal. If the background weren't white, the texture itself would look rather dark, or deprived of any light. I also would like to do this with any kind of color, not just white.
@VisualTechArt
@VisualTechArt Жыл бұрын
You would need to implement a proper Chroma Keying, like the stuff they use in cinema all the time to remove greenscreens
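A minimal sketch of that kind of keying done directly in a material, assuming you key on a single colour and fade the alpha by distance from it (a real chroma keyer also deals with despill, soft edges, etc.):

    // Alpha goes to 0 near KeyColor (e.g. white = (1,1,1)) and to 1 beyond Tolerance + Softness.
    float KeyAlpha(float3 color, float3 KeyColor, float Tolerance, float Softness)
    {
        float d = distance(color, KeyColor);
        return smoothstep(Tolerance, Tolerance + Softness, d);
    }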
@KittenKatja
@KittenKatja Жыл бұрын
@@VisualTechArt There are some chroma keys on paintNET, two default, one custom. The two default are magic wand, and one of the effects. The custom one is made to remove the white/black background of an object, and leave the shadow it casts intact, and also the see-through area, like on a magnifying glass. But it leaves like 10% white in the pixels. Does Photoshop have something like that?
@Nerthexx
@Nerthexx 7 ай бұрын
The only textures you need are albedo and some kind of spatial information, depth or height, if you know whether it's metal or non-metal by default. Think of how real-life objects work. You can pack this information in one RGBA texture. Other maps can be derived at runtime. This is the basic long-running argument of "processing" vs "precalculation".
@Psyda
@Psyda Жыл бұрын
While, yes, this makes the files smaller, when running the game procedural textures still blow up to a somewhat comparable memory use.
@Kaboom1212Gaming
@Kaboom1212Gaming Жыл бұрын
Is there a particular reason you didn't use the "height to normal" node instead of all of the custom node setup in the shader graph?
@VisualTechArt
@VisualTechArt Жыл бұрын
Yes, both to explain the concept behind this function and because I like the output of that function less; it's an even harsher approximation of the normal than mine :)
@Kaboom1212Gaming
@Kaboom1212Gaming Жыл бұрын
@@VisualTechArt I see, very interesting. I will give your approach a go next time, it seems useful in a few ways!
@lasereye159
@lasereye159 Жыл бұрын
Smart, very 😌✌️
@btarg1
@btarg1 Жыл бұрын
This could save a lot of storage space in larger games which would otherwise have many textures, nice
@VisualTechArt
@VisualTechArt Жыл бұрын
It's nice to see that every once in a while someone gets your point xD
@FishMan1nsk
@FishMan1nsk Жыл бұрын
Hey. This method of generating normal maps: can it be used for generating a normal map between two blending materials? Let's say I have a metal and a paint on it which is added using vertex paint. Is it possible to add a normal map on the edges, like in Painter, using this method? Or maybe you know another method? This is probably a good topic for a video, btw.
@VisualTechArt
@VisualTechArt Жыл бұрын
I think it can be done :) but you have to consider that since the transition comes out of the intersection of two maps, to obtain the adjacent pixels you'll have to repeat that multiple times too, worth having a try though, good idea!
@plasid2
@plasid2 Жыл бұрын
Could you use your magic to make a flow-map-based height map, like lava flowing from a mountain?
@VisualTechArt
@VisualTechArt Жыл бұрын
Nice idea! I'm gonna add that to the list :D
@GDPROD
@GDPROD Жыл бұрын
Very interesting. I have a personal question, are you italian?
@VisualTechArt
@VisualTechArt Жыл бұрын
@JasonMitchellofcompsci
@JasonMitchellofcompsci Жыл бұрын
I'm seeing an application for AI here. Not even a heavy AI. You've basically correlated aspects of a color map to other maps. Those correlations can be developed into models automatically. As you've shown, it doesn't take that large a model to perform the correlation even when done by hand. An AI could likely do it with an even smaller model, and likely consider pixels beyond the immediate neighbors to drive useful results. The only area I'm not very familiar with is generating those high frequency features. My intuition is that it is possible as well, based on what I've seen other people do, but I wouldn't know how specifically to implement it myself. But with that you could compress textures pretty painlessly.
@VisualTechArt
@VisualTechArt Жыл бұрын
It has been a while now that I had this idea of overfitting an AI to a specific texture and use its graph as shader, actually! Never took the time to test it out though (also because training AIs is so time consuming and boring)
@JasonMitchellofcompsci
@JasonMitchellofcompsci Жыл бұрын
@@VisualTechArt I don't know how much you would have to over fit. Thus smaller model. It would not take much processing to fit that. Literal seconds of training time. Not all AI has to be this heavy thing. Language models sure. But AI concepts apply to models 20 or 50 weights large as well as 20 million. Considering you current method is practically doing 1:1 with extra steps a small model is all it takes. Even on CPU that trains in a practical instant.
@VisualTechArt
@VisualTechArt Жыл бұрын
I think we would need to overfit to keep it really small, like creating a very small network with a bunch of neurons/convolutions that can only do that operation on that specific texture set. I know that for such a small task training doesn't take long, the problem is that training doesn't give you the result you want straight away :D
@Kio_Kurashi
@Kio_Kurashi Жыл бұрын
For the Roughness section, aren't you having to store the values that you're calculating from the original texture in a separate memory instance from the texture? Isn't that essentially having two textures in memory? Transient though one might be.
@VisualTechArt
@VisualTechArt Жыл бұрын
In GPUs you can say (a lot of approximation here, just to pass through the idea) that every pixel computes independently, from scratch, every frame. That means that you don't have access to other screen pixels in the same frame and you don't "remember" anything you did in the previous frame. So once you calculate the Roughness you're not saving "a texture", but you're just saying to the shader how you want the surface to react with light for that frame and that's it :)
@Kio_Kurashi
@Kio_Kurashi Жыл бұрын
@@VisualTechArt Ah okay, Thanks for the clarification!
@mindped
@mindped 8 ай бұрын
i made a material with paint chipping on the edges.. i did this by using vert color and a texture mask to randomize up the edge vert color so its uneven. Is there a way to generate a normal map for the edge of the paint?
@VisualTechArt
@VisualTechArt 8 ай бұрын
Well, you can generate a normal from the texture mask like I did here with the height... But you can't look at data coming from vertices in that way, so it's a bit tricky; you have to come up with some heuristics based on your use case :)
@erendiktepe7773
@erendiktepe7773 Жыл бұрын
Bro thanks for normal map and btw i try to make a material function for later on and engine crashed lol👍
@jakubklima5193
@jakubklima5193 5 ай бұрын
Nice idea, but is it even worth it now that we are using virtual textures pretty much everywhere? It also needs manual adjustment for pretty much every material. I wonder if this workflow would actually pass, and if this method is widely used in any projects. Maybe for something like mobile it would be more beneficial?
@VisualTechArt
@VisualTechArt 4 ай бұрын
Of course it's not to be taken for granted that something like this would fit in a project; to be honest it was more of an experiment I wanted to make :) I personally see some areas of application for it, but I wouldn't base an entire project on this.
@jakubklima5193
@jakubklima5193 4 ай бұрын
@@VisualTechArt Ah got ya. Cool stuff. Thanks for sharing 😃
@wpwscience4027
@wpwscience4027 Жыл бұрын
Ideas on how to use this to fake or rederive the ambient occlusion map would be neat.
@VisualTechArt
@VisualTechArt Жыл бұрын
You can definitely do some cavity, the AO is a bit more complex :) I may try one day though
@wpwscience4027
@wpwscience4027 Жыл бұрын
@@VisualTechArt Adding a bump to the UVs looks nice and is cheap since you already have the heightmap. I've spent the evening exploring building something for the AO. As of yet I have settled on using that heightmap kernel to calculate a proxy for the openness to light by getting the volume of the cone it makes with the center height pixel. This yields a map of how sharp the cavities are but it looks pretty different than a regular AO map. I feel like I could get something better using the same information and the hillshade calculation that gets used in GIS and other landscape mapping applications, but that requires picking a direction and height away from the texture. I feel like 12 degrees (goldenhour) and 315 (northeast) for alt and azimuth would look nice for textures laid flat in the xy. For vertical objects that would put the light behind the viewer and to the upper left. I think that's passable but it feels wrong to just pick a shadow, but that's what I'm going to try next anyways.
@VisualTechArt
@VisualTechArt Жыл бұрын
@@wpwscience4027 I think the only issue is that to calculate AO you would be forced to use quite a big kernel, as its extent always goes beyond the first pixel in terms of distance, but a simple cavity map can be computed for sure :)
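A sketch of the kind of cavity term being discussed, reusing the same height samples the normal already needs (an assumption of this example, not something shown in the video):

    // Positive where the centre pixel sits below the average of its neighbours, i.e. in a crevice.
    float CavityFromHeight(float hCenter, float hL, float hR, float hU, float hD, float Strength)
    {
        float neighbourAvg = (hL + hR + hU + hD) * 0.25f;
        return saturate((neighbourAvg - hCenter) * Strength);
    }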
@guybrush3000
@guybrush3000 Жыл бұрын
you might’ve saved some texture samples but you made the shader massively massively more processing intensive. sampling a texture is much faster than this. Is saving the VRAM worth taxing the fill rate like this? I would never recommend that anyone make something like this
@rossbayliss4151
@rossbayliss4151 Жыл бұрын
Can anyone tell me what those light-blue BaseUV nodes are at 3:23? My materials always look like spaghetti and they look perfect for me.
@VisualTechArt
@VisualTechArt Жыл бұрын
I suppose you're referring to Named Reroute Nodes? They're quite handy :D
@StBlueFire
@StBlueFire Жыл бұрын
@@VisualTechArt Thank you so much! This has been one of the things I've hated most about materials but now I have a solution.
@Mittzys
@Mittzys Жыл бұрын
How computationally expensive is this? I would assume it's a bit of a trade off, less VRAM usage but more processing time?
@VisualTechArt
@VisualTechArt Жыл бұрын
It is a trade-off :D I didn't do a performance check, but I wouldn't worry too much about VALU performance here, to be honest.
@cad97
@cad97 Жыл бұрын
The biggest downside to this approach is IIUC going to be that it needs to use its own material. If all of your megascan materials share the same material and just vary by what textures they're using, it's typically going to be easier on the GPU to draw multiple actors using different material instances than it is with fully different materials. As with all things, there's a tradeoff to be made - if you're strapped for VRAM, deriving normals from the heightmap texture will certainly help.
@Itsme-wt2gu
@Itsme-wt2gu Жыл бұрын
Can you add parallax occlusion instead of displacement?
@VisualTechArt
@VisualTechArt Жыл бұрын
Yes
@EdrickIvan
@EdrickIvan Жыл бұрын
Reminds me of ORM textures.
@chasingdaydreams2788
@chasingdaydreams2788 Жыл бұрын
Is there a derivative node in UE? If so, you can derive the exact same normal map from the bump accurately. Right now your normal conversion isn't as accurate as it can be.
@VisualTechArt
@VisualTechArt Жыл бұрын
The derivatives are screen space, it's a bit long and difficult to explain here, but if you try that you'll see that the result is pretty bad, actually! Plus with them you can only use 3 samples instead of 4, which has also more drawbacks (in terms of output quality) :)
@artemg9753
@artemg9753 Жыл бұрын
What will you do if there are some contrasting patterns on the texture, and/or materials with different properties? Rhetorical question.)
@VisualTechArt
@VisualTechArt Жыл бұрын
I wouldn't go for this approach or I would change the texture I'm using to store different kind of data :D
@toapyandfriends
@toapyandfriends Жыл бұрын
Will this cut down on the CPU usage or GPU usage that a game needs to run? If so, can you put a link under this video, or at least under this comment, to other videos you made that have this level of scientific efficiency genius, so I can grow in your light and become a UE5 scientist too! 👊😎'kapaw
@VisualTechArt
@VisualTechArt Жыл бұрын
Ahahahahaha! Well about the CPU I don't have an answer to be honest, maybe if you applied this approach to the entirety of a game yes, you would be requesting less textures, making smaller draw calls? Don't know, I'll look into that. I'd say go to my channel page and watch everything! :D But you may be especially interested about the Voronoi ones and the Grass Animation? Start from those ;)
@Itsme-wt2gu
@Itsme-wt2gu Жыл бұрын
Can you list the draw calls of both?
@VisualTechArt
@VisualTechArt Жыл бұрын
I'll do a separate video where I profile several things I did at once :) But spoiler: I already did a check on renderdoc few days ago on this, turns out that my solution performs around 25% better than the reference material as you see it in the video, while basically the same if I pack the reference material textures into 2. I shared the timings in my Discord few days ago :D
@3DWithLairdWT
@3DWithLairdWT Жыл бұрын
Would it not be more effective to just take the min between vector {1, 1, 1} and the derived normal? If statements are costly
@VisualTechArt
@VisualTechArt Жыл бұрын
These "node IFs" are not usually compiled as actual branches, but as ternary operators, which at the end of the day is like doing a mask with the Min, as you suggested. To completely avoid any doubts I usually don't use them, but for clarity in the video I decided to, this time :D
@whyismynametaken123
@whyismynametaken123 Жыл бұрын
The "expensive" part of if statements comes from them running all the code from each potential result so it depends on what your outputs are. If result A is 200 instructions and result B is 200 instructions then it will always end up running 400 instructions and thus in that case it would be very expensive. On the other hand if result A is the number 0 and result B is number 1 then it's very cheap. You can output the material's HLSL code to look over. It will take a bit to decifer what's happening the first time you do it due to how UE optimizes your material graph, but after that it's fairly straight forward to read. [EDIT: The optimizer will re-use the result of a block of code multiple times if it doesn't change .. I think my above explanation will lead people to think that isn't the case. I should probably just go to sleep instead of inserting myself into tech discussions lol]
@VisualTechArt
@VisualTechArt Жыл бұрын
As far as the GPU is concerned (and as far as I know), running both branches and discarding one result is cheaper than an actual branch where you first check the condition and then run only one, as long as the sum of the extra instructions doesn't outweigh the cost of the threads losing sync and having to stall in a wait state. A real branch is usually worth it if it avoids texture fetches inside the if statement, for example.
@mikewake1024
@mikewake1024 Жыл бұрын
Hi Visual, I am not 100% sure, but tessellation should not work in UE5 shaders. It was removed in favor of Nanite. The only tessellation-like geometry is available for the water object and for the landscape. Other than that, the video itself was very informative and a very interesting approach. Love your videos a lot!
@VisualTechArt
@VisualTechArt Жыл бұрын
Yes, this video was made well before that news ahahah. Anyway, there's nothing stopping you from using a heightmap! :) The main thing here is replacing the normal map, and in UE5 you still have WPO ;)
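A sketch of pushing that same height channel through World Position Offset instead of tessellation-time displacement (assuming the height is stored 0..1 with 0.5 as the rest position, and the mesh is dense enough, e.g. Nanite geometry):

    // What you would wire into the World Position Offset pin, written as plain HLSL.
    float3 HeightToWPO(float height, float3 vertexNormalWS, float MaxDisplacement)
    {
        return vertexNormalWS * (height - 0.5f) * MaxDisplacement;
    }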
@tylergorzney8499
@tylergorzney8499 Жыл бұрын
I think this is a great exercise, but not very useful. This is a super heavy shader just for a basic PBR material. Taking this and adding even more shader effects makes it very hard to work with and very, very expensive compared to just using 2 texture samples, versus your "complex" math and many more texture samples. From my knowledge, texture samples are very slow. I think a better method would be using texture arrays with small (512) textures with high tiling, plus texture maps that define the geometry features such as AO and Cavity (which can be combined in a single channel and separated in the shader), an Edge Map, a Curvature Map, etc. You will have a much cheaper shader, smaller texture sizes, higher detail, and you can make a shader where you can rotate a mesh and the textures will render appropriately, so a single mesh rotated will appear like a different object.
@Jetravard
@Jetravard Жыл бұрын
The one on the left has much more granular detail.
@romank9121
@romank9121 Жыл бұрын
Is calculating things in the shader in real time more performant than loading textures?
@VisualTechArt
@VisualTechArt Жыл бұрын
There's a threshold :) I don't think I'm passing it with this shader, but doing a performance test would be great
@kaos88888888
@kaos88888888 Жыл бұрын
Interesting, but I think I will not do that shit, at least for the normal map - I'll use another texture instead ahahaa. And yes, memory is an issue indeed: you cannot expect to import Quixel Megascans and make a game with them out of the box
@VisualTechArt
@VisualTechArt Жыл бұрын
You can do it once and put everything in a Material Function ;)
@kaos88888888
@kaos88888888 Жыл бұрын
@@VisualTechArt That's true :)
@sntg_p
@sntg_p Жыл бұрын
what are you using for text to speech?
@VisualTechArt
@VisualTechArt Жыл бұрын
A Microphone and my mouth :D
@sc4r3crow28
@sc4r3crow28 Жыл бұрын
You said RGBA = 2x RGB when compressed... is that true? Or did you mean it's just bigger by 1/3?
@VisualTechArt
@VisualTechArt Жыл бұрын
It's true and you can check it yourself :D If you add the Alpha channel to a texture it doubles in size (on the other hand that channel is the one that gets the least amount of compression artefacts and has the best quality among all 4)
@sc4r3crow28
@sc4r3crow28 Жыл бұрын
@@VisualTechArt ok thank you that is good to know
@sc4r3crow28
@sc4r3crow28 Жыл бұрын
@@VisualTechArt Sorry, I have another question... if I have an RGBA texture and sample only the R channel, will it always load the whole RGBA texture?
@cedric7751
@cedric7751 Жыл бұрын
@@sc4r3crow28 Textures are compressed in blocks of 4x4 texels. For the RGB, the 2 most "extreme" color values of each compression block are saved (2 colors x 16 bits per color) and 2 new values are interpolated between those 2 extremes to form a 2-bit indexed color table (4 colors: the 2 extremes and the 2 interpolated values). Each of the 16 texels of the block then indexes one of those 4 colors, for a total of 16 texels x 2-bit index = 32 bits, plus the 32 bits of the 2 reference colors = 64 bits, or 8 bytes per compression block.

For the alpha, the 2 reference values only have 8 bits of depth, but 6 new values are interpolated to form a 3-bit indexed table (8 values: the 2 extremes and the 6 interpolated ones). So we have 16 texels x 3-bit index + 2 reference values x 8 bits = 64 bits, or 8 bytes of data per compression block for the alpha. This is why adding an alpha doubles the size of a compressed texture when using the BC format, which is the default in Unreal. This is no longer true with other formats like PVRTC (PowerVR mobile architecture) or console-specific formats.

As a side note, the 3 channels of an RGB reference color are stored as a single 16-bit value. Since 16 is not divisible by 3, the red channel is stored as 5 bits, the green channel as 6 bits and the blue channel as 5 bits, for a total of 16. The green channel gets the extra bit because the human eye is more sensitive to shades of green.
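To put concrete numbers on that (assuming a 2048x2048 texture, top mip only): 2048/4 x 2048/4 = 262,144 blocks. At 8 bytes per block that is 2 MB for the RGB-only format (BC1); adding the 8-byte alpha block makes it 16 bytes per block, i.e. 4 MB (BC3) - exactly double - while the same texture uncompressed as RGBA8 would be 16 MB.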
@VisualTechArt
@VisualTechArt Жыл бұрын
Yes
@Annyxel
@Annyxel Жыл бұрын
The real proof here would be to make a small landscape of some sort with a town and a forest. After doing it normally, copy it and make one with your method, then put them to the test and see which gets better frame rates and results.
@VisualTechArt
@VisualTechArt Жыл бұрын
Yes, I want to do it at some point
@TheT0N14
@TheT0N14 9 ай бұрын
This is not very useful in the case of photogrammetry, because you can capture the normal map and roughness too. You'd need a darkroom for that - if you need to capture something outside, you use a tent or just scan at night. For the normal map you need a small metal ball and a light source that you can move, and software of course: Substance Designer or Details Capture | Photometric Stereo. For roughness you take two pictures without moving the camera: one picture as usual, and the other with a polarising filter on the lens and a polariser on the light source. This gives you two images, with and without glare; the one without glare will be our albedo. Now turn both images into greyscale, compute the difference between them, and you get something similar to roughness.
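For anyone who wants to try that last step as a shader test, a hedged HLSL sketch of the idea (the texture names are hypothetical; the output is only a glare proxy that you would still remap or invert into a usable roughness map):

Texture2D unpolarizedTex;   // normal shot: diffuse + specular glare
Texture2D polarizedTex;     // cross-polarized shot: diffuse only (the albedo)
SamplerState texSampler;

float GlareProxy(float2 uv)
{
    float3 a = unpolarizedTex.Sample(texSampler, uv).rgb;
    float3 b = polarizedTex.Sample(texSampler, uv).rgb;

    // Greyscale both shots and take the difference: bright where the surface produced glare.
    float lumA = dot(a, float3(0.299f, 0.587f, 0.114f));
    float lumB = dot(b, float3(0.299f, 0.587f, 0.114f));
    return saturate(lumA - lumB);
}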
@andrewsneacker1256
@andrewsneacker1256 3 ай бұрын
It's a good technique, but if you need a specific normal map with specific features, it's not the way to go at all.
@multiupgame
@multiupgame Жыл бұрын
Isn't such a material hard on the GPU? It's all runtime calculation. I'll have to test it later🤔
@VisualTechArt
@VisualTechArt Жыл бұрын
You need a lot of ALU work to make a GPU feel it :D My gut says that's not that much of an issue here, but if you run some tests, definitely let me know please!
@multiupgame
@multiupgame Жыл бұрын
@@VisualTechArt Ok In discord if I don't forget😅
@chosen_oNEO
@chosen_oNEO Жыл бұрын
But will that really help performance 🤔🤔 idk man… I'd say just save the time and go the old-fashioned way
@zxcaaq
@zxcaaq Жыл бұрын
Can someone pls write GLSL or HLSL code for this?
@0805slawek
@0805slawek Жыл бұрын
Why isn't it working for me? I analyzed every second of the video but can't find the mistake in my nodes - my normal map is flat. Could you upload this material somewhere, please?
@VisualTechArt
@VisualTechArt Жыл бұрын
I wasn't planning to upload it as it's not a versatile material (I didn't take the time to add parameters etc.), but you can join my Discord channel and we can try to make it work together :)
@JordiTheViking
@JordiTheViking Жыл бұрын
Putting it into Alpha makes the engine consider it as 2 textures pretty much
@TreyMotes
@TreyMotes Жыл бұрын
This dude sounds like Stephen Hawking's synthesized voice. LOL