Fixing Intel's Arc Drivers: "Optimization" & How GPU Drivers Actually Work | Engineering Discussion

238,959 views

Gamers Nexus

A day ago

We sponsored ourselves! Buy one of our metal emblem badge pint glasses! store.gamersnexus.net/product... or our ultra comfortable hoodies! store.gamersnexus.net/product...
In this discussion, we talk about a number of questions relating to GPU performance in gaming. Other than talking about how GPU drivers actually work, we get into what it means when a game is "compiling shaders" or is "caching shaders," what it means for a game to be "unoptimized" vs. the drivers or hardware, and more. The key goal is to define drivers and how they interact at both a hardware and software level, giving a better understanding as to what it means when Intel, NVIDIA, or AMD talk about "optimizing" performance. Likewise, this helps cover some of what it means for a game developer to have an "unoptimized" game. We're joined by Intel Engineer Tom Petersen for this discussion. If you learned from this talk, consider watching these:
Intel Animation Error discussion: • FPS Benchmarks Are Fla...
NVIDIA latency technical discussion: • Framerate Isn't Good E...
Watch our Intel Arc 2024 revisit: • Intel Arc 2024 Revisit...
To further support our interviews and deep dives, consider grabbing one of our PC building DIY anti-static modmats: store.gamersnexus.net/product...
Or our soldering mats: store.gamersnexus.net/product...
Like our content? Please consider becoming our Patron to support us: / gamersnexus
TIMESTAMPS
00:00 - No One Actually Knows What "Optimization" Means
02:05 - The Driver Stack, APIs, & Basics
04:38 - Shaders & Programs
05:43 - What Does "Compiling Shaders" Mean?
06:30 - Optimizing GameDev Shaders
08:19 - Kernel Mode Driver
10:08 - Graphics Firmware & Hardware
11:18 - "OPTIMIZING" Drivers & Games
16:25 - Types of Optimization
18:30 - Future Plans
21:24 - Games Being "Unoptimized"
23:15 - Common Misconception About Drivers
** Please like, comment, and subscribe for more! **
Links to Amazon and Newegg are typically monetized on our channel (affiliate links) and may return a commission of sales to us from the retailer. This is unrelated to the product manufacturer. Any advertisements or sponsorships are disclosed within the video ("this video is brought to you by") and above the fold in the description. We do not ever produce paid content or "sponsored content" (meaning that the content is our idea and is not funded externally aside from whatever ad placement is in the beginning) and we do not ever charge manufacturers for coverage.
Follow us in these locations for more gaming and hardware updates:
t: / gamersnexus
f: / gamersnexus
w: www.gamersnexus.net/
Steve Burke: Host
Tom Petersen: Guest
Video: Vitalii Makhnovets

Comments: 1,200
@GamersNexus 2 months ago
If you like this, watch our video discussing animation error and the flaws of frametime testing as we know it today! kzfaq.info/get/bejne/ecWCgpuTr9XUaKM.html To further support our interviews and deep dives, consider grabbing our metal emblem pint glass! store.gamersnexus.net/products/gn-3d-emblem-glasses or our ultra comfortable hoodies! store.gamersnexus.net/products/warm-ultra-soft-fleece-zip-hoodie-tear-down NVIDIA latency technical discussion: kzfaq.info/get/bejne/fNBdqr2QsK3Ho5s.html Or our Intel Arc 2024 revisit: kzfaq.info/get/bejne/rZmHhtR9qsnNdas.html
@olnnn 2 months ago
Re: drivers, you should really have on one of the people working on the Mesa Linux graphics drivers; it's one of the things making the Steam Deck possible.
@robertlawrence9000 2 months ago
Very interesting! I wonder if it would be possible for Tom to get a video of some engineers actually using these tools to show us what they do. Maybe a behind-the-scenes, step-by-step video of an actual problem they encounter in a game and the process of fixing it using software and testing. Maybe that's a bit intrusive, but it could be an interesting video. Thanks for the video, Steve and GN team!
@realdragonrude 2 months ago
Would love to pick this guy's brain about the advantages and disadvantages the Vulkan API has over DirectX on the GPU side. Also, thanks Steve!
@dasiro 2 months ago
With Tom having worked at NVIDIA and now at Intel, in this very unique position: how do different choices at the hardware level impact the software, and vice versa? And on a personal note: how hard is it to work around previous knowledge that you aren't allowed to use due to patents and NDAs, and could your previous employer find out if you did so?
@AwesomeBlackDude 2 months ago
Hey Steve, (bro) when are you planning to break this system? We're still waiting for an in-depth investigation into the actual prices for these fast, fancy calculators of GPUs in mass production. Correct me if I'm wrong, but isn't it mind-blowing that the iPhone 14 only costs $10 each in mass production out the door?
@beachslap7359 2 months ago
Thanks Steve.
@GamersNexus 2 months ago
The bot check has been passed.
@graphicarc 2 months ago
😱
@HanmaHeiro 2 months ago
Thanks @beachslap7359 😐
@bigbrain8839 2 months ago
@GamersNexus lmao
@PostNoteIt 2 months ago
Never gets old.
@WereCatStudio 2 months ago
I wish all GPU vendors were more willing to talk about this kind of stuff. This is very interesting and informative, and I applaud Intel for being willing to talk about it. Makes me want to pick up an Intel card just to tinker with it.
@mikebertolini120 2 months ago
You stole my words
@zivzulander 2 months ago
Hopefully they will talk about this stuff more now that they see there is interest. I think they just assumed people aren't interested in the low-level details as much (specifically on the software/drivers side). Nvidia and AMD have had engineers on this channel before (and others) to discuss other technical info, though. That recent video on latency Steve did with Guillermo Siman of Nvidia was also very informative, as were the videos recorded in the AMD labs in Austin, Texas.
@PrefoX 2 months ago
Only AMD is not visiting GN; Nvidia has been there quite often, and Intel too.
@Mr.Genesis 2 months ago
@PrefoX GN went to AMD's own offices and spoke to the engineers there. Guess you missed the video.
@cabir.bin.hayyan.800 2 months ago
intel bends over to nazisrael don't buy into genos aiders
@MrMonday1000percent 2 months ago
24:56 Thanks for watching Tom!
@GamersNexus 2 months ago
I considered cutting that but figured so few people watch that long that it'd be a great easter egg!
@grievesy83 2 months ago
@GamersNexus Really? People switch off from a video with Tom in it? In my country, we'd say they "have kangaroos loose in the top paddock".
@shaunwhiteley3544 2 months ago
​@@virtuserable that's what's stopping me from buying Arc 😪
@woodmon122 2 months ago
HAHAHAHA that cracked me up
@shaneeslick 2 months ago
@@grievesy83 🦘🦘🦘🦘🦘🦘🦘🦘🦘🦘🙃👎
@wansnek3997 2 months ago
This is amazing. I work as a low-level engineer in the reverse engineering field, and these technical deep dives are AMAZING. It's not an exaggeration to say that the engineering content you are putting out is legitimately one of the things people will be watching for years for knowledge of certain software topics. Thanks Steve & team, and you'll always have my support!
@GamersNexus 2 months ago
That's a cool job! And thank you for the kind words!
@svampebob007 2 months ago
The software part is always a game of "speaking the right language", but the low level is always so interesting: like having this API with X registers, or reading this text and inferring this information to perform "that" instruction. I work in retail, but low-level circuits and programming have been my hobby. I remember building my first dual-core circuit 10 years ago, and it took me two years just to figure out how to coordinate sharing information from RAM and match it with the program counter register. And that was when I was using my own "coding language"; it was a very hardware-specific language. Watching these deep dives from giants like Intel is so much fun!
@sgredsch 2 months ago
Former mod developer here, working with Source engine as a 2D/3D artist. I'd like to share some basic views on game optimization. Petersen did a great job explaining the deeper driver/engine-level optimizations, but this is only one part of it. It's not just the engine or driver, it's also the assets, and this is where some studios just drop the ball. You want to load a GPU evenly, but you can absolutely choke a GPU by overloading it with a heck of a lot of one single workload. That might be one specific shader, absolutely insane geometry load, or stuffing the VRAM.

1. 3D models. Models are basically a wireframe that forms a body. The possible level of detail of a model is determined by the number of intersecting lines of the grid. Patches formed by the grid are referred to as polygons (there are different types like n-gons and quads; not gonna touch on that). A high polycount gives you a higher-resolution mesh with potentially more detail; HOWEVER, you can have a model with millions of polygons and no detail at all. Polycount is direct geometry load on the GPU: the higher the polycount in the scene, the higher the load on the GPU. Once you overload the geometry pipeline with absolutely insane levels of geometry, GPU performance drops off a cliff. This is basically what tessellation does: it takes a simple wireframe and bloats the polycount, increasing mesh fidelity but blasting the GPU with geometry load. This was Nvidia's big trick to choke Radeon GPUs in certain games using GameWorks or HairWorks. Nvidia massively increased the geometry capability of their GPUs starting with Fermi, and tried to get studios to use the tessellation libraries found in GameWorks/HairWorks, which would absolutely obliterate Radeon GPUs with stupidly high amounts of geometry load. Notable games are The Witcher 3 with HairWorks and Crysis 2 with environment tessellation that does absolutely nothing except cut Radeon framerates in half. This is why you can find the tessellation factor option in the Radeon and Intel Arc drivers: it limits the mesh complexity scaling of tessellation.

Geometry load is the whole visible scene, with all models and map geometry on screen. So you want to keep the polycount of the rendered scene as low as possible, and you absolutely want to avoid wasting polygons on geometry that doesn't need them. A cube can be done with a single-digit number of polygons, or with 2,000,000,000,000, without any visible difference. You can have thousands of the minimal-polygon cubes on screen without breaking a sweat, while just a few of the bloated cubes will make your GPU scream. Modern GPUs can handle a ton of geometry load, but this is not an excuse to just waste it; a ton of unnecessary polycount puts unnecessary load on the GPU and is a result of lazy mesh optimization. Sure, you want higher-fidelity models for better visuals, but you can assume that studios would rather cut time per model in exchange for worse client-side performance. One technique, usually used in older games, is "backface culling", which basically deletes all parts of the model that the player never sees, cutting geometry load. There are possible artifacts when you can actually see the deleted backface and the model looks hollow. Today this is not done a lot because GPUs are pretty powerful, but there are situations where it should still be done and isn't. But don't worry: polycount isn't the only way to have detailed models. There's a way to simulate polycount with textures, which is why we switch over to...

2. Textures. Textures in games have three main use cases: 1. give color and texture to a 3D model and to the map/environment, 2. decals and sprites, and 3. control shaders.

Like you already heard, textures are relatively easy on the GPU; HOWEVER, this is not entirely true across the board. The taxing factors of textures are resolution, file size, and function, hitting the VRAM and the shader pipeline. Resolution gives better clarity but bloats file size, and you can combat file size with compression. You want to set a texture resolution that makes sense: high resolution for textures the player views up close, low resolution where it doesn't matter so much, and compression where possible. This applies to the albedo map, or diffuse map, which is basically the texture that gives you the color information. There are other textures that control shaders, like bump and normal maps, specular maps, Phong shader maps, self-illumination maps, and parallax maps. These textures tell the engine what to do with which part of the texture. They can be greyscale or include different information on each of the R, G, and B channels, like a normal map (a very fancy bumpmap with accurate "3D" information baked into 2D space; normal maps are either hand-made (legacy) or, more commonly, "baked" in a 3D application from a high-resolution version of a model, reducing geometry load by using a low-poly model + normal map instead of a model with 100-1000 times the polycount). So you can use a 2D texture to reintroduce "3D" details back onto a low-poly model; the normal map acts somewhat like a textured rubber glove you pull over your hand. Normal maps should not be lossy-compressed, because that introduces awful blocky artifacts when light hits the object, and normal maps usually also have an alpha channel (a greyscale channel next to the RGB channels) with a different function, usually controlling a different shader like specularity, which makes the normal map a chonker in file size. You don't want to overdo normal map resolution, as it will eat VRAM for breakfast, and very high-resolution normal maps also put more load on the shader pipeline.

A GPU can withstand a considerable amount of normal maps, but you can obviously overdo it. Let's briefly talk about how you map texture space to models. In the 3D modeling software you "unwrap" or "UV map" the model into 2D space. Imagine taking a roll of toilet paper and cutting it across the long side: now you can put it flat on the table, and when you're done painting, you can put it back into a roll shape. It's basically the same with 3D models, but there's a twist. With the toilet roll you have a 1:1 representation of the model and the texture, but with 3D models you set a texture size (for example 2048x1024) and then place and scale parts of the model onto it. The bigger the parts on the UV map, the more pixel space they get, hence higher resolution. Now comes the kicker: to preserve texture size, you map important parts big, and unimportant parts, which are barely visible but can still be seen, small. Take a gun, for example: you want the regions close to the camera to be as high resolution as possible, but you also have tons of parts that just need color, aren't usually seen, or are plain black, and those can be smaller. You can also use mirroring to cut the UV map area in half for some parts. By not properly scaling the parts of the UV map, you can end up with a very unoptimized map that gives worse resolution to important parts while being twice the size. The impact of one bad texture set for one model isn't big, but consider that a scene sometimes contains hundreds or thousands of assets. It adds up.
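The VRAM cost described above is easy to ballpark. Here is a rough back-of-the-envelope sketch (assuming uncompressed RGBA8 textures and a full mip chain; real engines use block compression, so actual numbers are lower):

```python
def texture_bytes(width, height, bytes_per_pixel=4, mips=True):
    """Uncompressed GPU memory for one texture, optionally with a full mip chain."""
    total = 0
    w, h = width, height
    while True:
        total += w * h * bytes_per_pixel
        if not mips or (w == 1 and h == 1):
            break
        # Each mip level halves both dimensions, down to 1x1.
        w, h = max(w // 2, 1), max(h // 2, 1)
    return total

# A 2048x2048 RGBA8 map: 16 MiB base, ~21.3 MiB with mips.
print(texture_bytes(2048, 2048, mips=False))  # 16777216
print(texture_bytes(2048, 2048))              # 22369620
```

Doubling the resolution to 4096x4096 quadruples that to roughly 85 MiB per map; multiply by albedo + normal + specular per asset and by hundreds of assets in a scene, and it's clear how an unoptimized texture budget eats VRAM for breakfast.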
@sgredsch 2 months ago
3. Bad practice. Now that we've covered the general function and cost of models and textures, and the general need to optimize certain aspects to retain as much performance as possible, we enter the realm of wasted performance due to bad practice. There are shader effects that are very easy on the GPU, or very hard, and usually there are multiple ways to do stuff. Choosing the right way can make or break a game.

Let's talk about reflections for a moment. There are two common, easy ways to add reflectiveness via shaders: 1. cubemaps and 2. screen-space reflections. Screen-space reflections take the actual content rendered in the scene and reapply it, which makes them expensive to run but quite close to the actual scene; however, they only cover what is shown on the screen, and off-screen stuff is blank. A game notorious for bad screen-space reflections is the Resident Evil 2 remake, in the police station, where objects would constantly blank out of the reflection because, to the shader, some parts of the scene plainly don't exist. The second way is cubemaps, and they are so basic in function that you can use them all day and it doesn't matter, because a cubemap is basically a plain old texture. What is a cubemap? A texture or set of textures that represents what something may reflect in a certain area of a map: something like a skybox, but for reflections. These are non-dynamic and prebaked, which makes them insanely easy to run. Now, if you want to give some kind of reflectivity to an object, you can do cubemaps, screen-space reflections, or raytraced reflections, each method being progressively harder to run. Choose wisely: while a rifle scope might look absolutely stunning with raytraced reflections, 600 empty coke cans in an alley don't, and they will absolutely murder your GPU.

Raytracing is its own can of worms, and while raytracing the whole scene would have different implications, I just wanted to demonstrate that you can totally use the wrong tool for the job.

Let's touch on some different technical bad practice that you actually don't see. As a theoretical example: hitboxes. You know why it's called a hitbox: in early games a hitbox was exactly that, a simple box around a character that would register hits instead of the actual "visual" model, because the visual model is too complex. Later hitboxes were a cylinder; now more sophisticated hitboxes are closely modeled after the visual model and look like a stick figure, made out of very basic geometry so they're still easy to run. But what if you have very complicated models that need hitboxes and you run out of time... w..wou... would you just take the mesh of the visual model to use as a hitbox, so you don't have to make a separate hitbox mesh? Nooooo, certainly no one would be insane enough to run an 80,000-polygon visual model mesh as a hitbox. Right? RIGHT? Not saying this is a thing; it's just an example of under-the-hood insanity no one can see, but that can make your 3D sidescroller the next Crysis on steroids.

And there are a lot of other departments that can all do some really shoddy stuff: animation and rig setup, mappers, coders, general shader setup, etc. Making games is very complicated. There's tons of stuff you can do one way or another, and doing it properly takes more time than taking shortcuts that trade developer time for client-side performance. Blaming bad performance on the driver or the engine is only one part of the equation; you can do a lot of bad stuff on the asset and coding side, and even with near-perfect driver and engine optimization, the whole thing can just run like a brick. Looking at games that barely hit 60 FPS at WQHD on a 4090... let's just say I have some serious doubts about the technical quality of those games.

The last game I had an in-depth "game dev" look at was Warcraft 3: Reforged, and my god is this thing botched. Some units don't even have proper textures, like the frostwyrm: it has a placeholder texture in a finished game that no one cared to fix or finish. Having such assets in the game probably also tells you the shader optimization and coding aren't properly done either. That's the state of the industry, at least partly. Don't be fooled by the hardware hunger of the games; chances are you're buying expensive hardware to cross-finance sloppy development.
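To make the hitbox point concrete: a box hitbox costs one cheap "slab" test per hit query, while a raw mesh hitbox needs a ray-triangle test for every polygon. A toy sketch of the slab method (simplified; real engines use acceleration structures like BVHs and more robust math):

```python
def ray_hits_aabb(origin, direction, lo, hi):
    """Slab test: track where the ray enters and exits each axis-aligned slab;
    it hits the box only if some interval lies inside all three slabs at once."""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        if abs(direction[i]) < 1e-12:
            # Ray parallel to this slab pair: must already be between the planes.
            if not (lo[i] <= origin[i] <= hi[i]):
                return False
        else:
            t1 = (lo[i] - origin[i]) / direction[i]
            t2 = (hi[i] - origin[i]) / direction[i]
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

# One constant-cost box test per query...
print(ray_hits_aabb((0, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # True
print(ray_hits_aabb((5, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # False
# ...versus 80,000 ray-triangle tests if the visual mesh doubles as the hitbox.
```

Before any acceleration structure, that is roughly an 80,000:1 difference in per-query work for the example above, which is why a dedicated low-poly hitbox mesh (or a plain box) matters.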
@ezg8448 2 months ago
Excellent comment. Adds on to what was spoken about in the video. One issue I had: while Tom wouldn't mind any question asked, he still danced around the Starfield optimization question (it may not actually be the game specified, but the video made it seem so). Care to give your take on it, if possible?
@umblapag 2 months ago
Thank you for the information. This comment should be higher up.
@KaiSoDaM 2 months ago
Thanks for the info. I remember hearing about a river of tessellation running under the map in Crysis 2 just to slow down Radeons, lol. Crazy world...
@GamersNexus 2 months ago
@ezg8448 We didn't specify a game. There was no specific game. We put the reference in the video as an example, but it's not like he saw the edit live.
@Curios_Cat 2 months ago
Can't wait for ARC Battlemage!
@IntelArcTesting 2 months ago
Will definitely buy one or multiple
@102728 2 months ago
There's a chance that the lessons learned from Alchemist (especially in driver development) came too late in Battlemage's development to be included in the hardware design. No idea how relevant drivers are in designing the hardware, but if they are, Battlemage was probably defined long before all the current Alchemist driver development (and the lessons learned there) happened. Of course there will be a bunch of improvements regardless (otherwise why design new hardware), but Celestial is where I'd expect Intel could be competitive enough to sell GPUs with reasonable margins. I'd suggest comparing Alchemist die sizes and nodes to GeForce/Radeon cards with similar performance. IIRC, Alchemist uses around 50% larger dies on a similar node. That's probably where they're losing a lot of money, making Alchemist likely not very profitable, if at all.
@operator8014 2 months ago
Get a couple of them in SLI and you'll be ready for ARCs Fatalis.
@hololightful 2 months ago
I am really hoping for Battlemage as well
@DigitalJedi 2 months ago
@102728 The hardware team doesn't worry too much about driver-level stuff. They are mostly focused on making changes to the structure of the GPU logic to keep things moving through more easily. Battlemage will make improvements to things like the dispatch logic Tom mentioned, as well as things like caching subsystems, predictive logic, and specific blocks like TMUs or RT accelerators. Battlemage will have several lessons learned from Alchemist under the hood, but you are correct to assume they won't have everything right yet. The Arc team is fighting uphill against decades of industry progression, so I fully expect them to need a few goes at getting everything down.
@grievesy83 2 months ago
I'm a simple man. I see Tom Petersen in a GN video, and I watch the entire thing sitting forward in my chair with both index fingers pressed against my lips, brow furrowed, learning intently. Love this series. Never stop. There's no such thing as too much Tom in GN's video catalogue.
@draigaur9543 2 months ago
I'm an aging grease monkey; most of this is way above me in the clouds, but I still enjoy it.
@102728 2 months ago
Oh yeah these are great!
@sword0948 2 months ago
I honestly just love this miniseries. I'm currently studying to become a software engineer, and watching your videos makes the learning process very interesting and fun. Thanks to you and Tom for bringing us these videos, and I hope we'll see more of them!
@GamersNexus 2 months ago
That's so awesome to hear! That our content can be helpful at all in early stages of education is a big compliment. Keep studying!
@zivzulander 2 months ago
This graphics miniseries with TAP is great. Excellent explanations and presentation, and Steve is following up with all the right questions.
@GamersNexus 2 months ago
Thank you! Great educational opportunity for us as well!
@RetroPaul6502 2 months ago
Impressively good info. Register spills are spills to memory, which is why they're bad and slow. The UMD is technically one driver, but yes, it is made up of DX12, OCL, Vulkan, etc. components, where a component isn't necessarily loaded on every call.
@Kendop16 2 months ago
Great video and explainer! As someone who bought an Arc A770 last year and has seen huge improvements in its drivers, it gives me great confidence seeing Tom's passion and drive for Intel GPUs. The future of Intel graphics can only get better with professionals like him at Intel. 👍
@Jensen761 2 months ago
I'm hoping so, 'cause I would love to try them in the future!
@KingFeraligator 2 months ago
Please tell me you didn't buy it as your main GPU...
@82_930 2 months ago
As somebody who's had two A770s (A770 Titan, A770 LE/Founders), this is true. I'd confidently say this card is faster than the 3060 Ti and comparable to the RTX 4060 Ti.
@Kendop16 2 months ago
@KingFeraligator No, I can't tell you. You'll need to say pretty please. Then I might tell you...
@volvot6rdesignawd702 2 months ago
I'm looking forward to the next-gen Arc. My current Arc A770 16GB is a great little card. My main 7800X3D and 7900 XTX is a great combo, but I've been using my 14600KF and Arc A770 16GB build and it has been chugging along beautifully!!
@lian2780 2 months ago
12:38 All that's happening per frame. PER FRAME, silicon + electrons = magic.
@GamersNexus 2 months ago
Magic is definitely the best explanation for it. It is absolutely unbelievable when you think about how much happens to generate a single frame of a video game.
@Geo64x 2 months ago
Certainly more electrons than silicon!
@KARLOSPCgame 2 months ago
Magic is a lot of electric tickles
@xchronox0 2 months ago
@GamersNexus I mean, it is magic. We're making silicon runes, engraving them with magical sigils, then running energy through them to produce effects that can't be done in nature.
@spudhead169 2 months ago
I noticed something Steve said: "I didn't feel confident in my understanding to...". THIS is what separates GN from most other channels, and I'm not talking only about PC hardware, just technology in general. A lot of others will just do some half-a$$ed job of compiling information and present it without a clue about what they're talking about. Steve doesn't treat us like that. Massive respect.
@vitormoreno1244 2 months ago
This made me realize the genius of Valve's work on DXVK; the amount of work needed for it to be seamless to games is outstanding. Thank you Steve for the amazing interview.
@phoenixrising4995 2 months ago
That's what their new driver's legacy mode is. It would have been nice for them to just ship their DX9-11 DXVK library as a signed Windows module and focus on making DX12, Vulkan, and OpenGL better. Fewer surfaces to maintain and find improvements for.
@MajesticBowler 2 months ago
Most of that work was done by Microsoft. MS created a DirectX HLSL to Vulkan SPIR-V compiler and released it as open source in 2018. HLSL is DirectX's shader language; SPIR-V is the shader language created for Vulkan. MS created this compiler for the Windows Subsystem for Linux (WSL). The compiler can translate Vulkan to DirectX and DirectX to Vulkan. DirectXShaderCompiler currently has more than 600 GitHub forks, created by Valve, Sony, Apple, and more.
@PXAbstraction 2 months ago
21:40 That's some subtle shade thrown right there. Nice.
@Fluke1x 2 months ago
What is that sound, for those of us who don't know?
@KarlDag 2 months ago
​@@Fluke1x Starfield's theme song
@WayStedYou 2 months ago
@Fluke1x They also flickered Starfield on the screen right after.
@TurntableTV 2 months ago
What I love about GN is that they respect their audience's intelligence and have the guts to dive into more complex subjects such as this one. Thank you, Steve! This stuff is truly fascinating.
@BrownStain_Silver 2 months ago
This was a great video! Thanks for having Tom on GN. I learned quite a bit in this one. On a separate note, I got an A770 from someone who gave up on Intel early. I let it sit in the box for 6-9 months and it aged like fine wine. It's really cool to see the performance getting unlocked by the hardworking people at Intel.
@maxmike181 2 months ago
This was fantastic, and seeing how invested the Arc team is has me very excited for Battlemage
@GamingBits-py1or 2 months ago
I can only repeat: incredible respect for Intel for working on this stuff even after the initial failures and issues. I hope the current financial losses don't stop you guys from keeping it up and coming out with new hardware at some point. I will buy it, no matter what. You guys worked hard to convince us you don't just give up, and you will have earned that money!
@_TrueDesire_ 2 months ago
Unless Intel pays their fines and owns their mistakes, I will never buy anything Intel-related. "The European Commission has re-imposed a fine of around €376.36 million on Intel for a previously established abuse of dominant position in the market for computer chips called x86 central processing units ('CPUs')" (from September 2023).
@zodwraith5745 2 months ago
Since the first time I saw TAP in a video, I've thought he should be the face of Intel. No marketing veil, just honest and open descriptions of the inner workings of a GPU. He genuinely wants you to understand what a GPU is and what it does, like you would expect from a teacher. Of course his appearance on certain channels is intended as some level of marketing, but it's not full of the normal bullshit where they ridiculously exaggerate everything about what they're selling. When he appeared on PCWorld, I asked in the Q&A how Arc performed in VR, and he straight up said "Do not buy Arc for VR." You would _NEVER_ see that from an AMD or Nvidia rep, and you've got to respect it when he shows you respect with a blunt, honest answer like that.
@ShrkBiT 2 months ago
I was talking to a friend about drivers and how they work not too long ago, but couldn't really convey the finer details (not that I knew them to this extent anyway). This video explains it so well in "normal people speak". I really enjoy the way these videos and difficult topics are framed and laid out for everyone to understand. Awesome reporting! Edit: I really appreciate that Intel is focused on letting people know what they are doing with Arc and is making a solid attempt at the GPU market. More competition = better, and with AMD and NVIDIA so focused on AI now, barely noticing their "ol' reliable" gamer market, we need someone to keep their attention invested there before they get their market share snatched away long-term.
@Safetytrousers 2 months ago
Nvidia at least have greatly increased their workforce over the past few years. They can easily have people 100% focused on gaming GPUs and software.
@SelecaoOfMidas 2 months ago
Only Nvidia has explicitly signaled that gamers are on the back burner, and their projected priorities sell it. Don't see that with AMD pursuing AI, at least not yet.
@Safetytrousers 2 months ago
"Number one, RTX was invented for gamers and for RTX, the technology, the most important technology is AI. Without AI, we could not do ray tracing in real time. It was not even possible. And the first AI project in our company, the number one AI focus, was Deep Learning Super Sampling (DLSS). Deep learning. That is the pillar of RTX." (Jensen Huang, 2023) @SelecaoOfMidas
@Singurarity88 2 months ago
I love these engineering insights. Keep it up Steve and thanks!
@graphicarc 2 months ago
Great! More intel arc news on GN always excited when I see the notification.
@IntelArcTesting 2 months ago
You and me both fellow arc user.
@1337Superfly 2 months ago
So am I. Tom Petersen is just a brilliant communicator!
@Danger8255 2 months ago
My brain hurts from this, but I have a better understanding and appreciate how much goes into driver optimizations. Hats off to Tom.
@PXAbstraction 2 months ago
This was one of the most fascinating deep dives I've watched in quite a while. Tom is an excellent presenter and does a great job of taking really complex stuff and making it understandable.
@PlantainSupernova 2 months ago
Just finished watching the video from earlier in the week. Perfect time to jump into this one. *THANKS STEVE!*
@Abrasive-Heat 2 months ago
This video helps me a lot as a hardware person. I don’t do much on the software side. I think I know what it’s doing (educated guessing) but it’s nice to have an explainer of what it’s actually doing.
@notjustforhackers4252 2 months ago
Are we going to get a discussion about MESA support?
@DodoGTA 2 months ago
Especially the sparse implementation there (with TR-TT and VM_BIND) 🐸
@GamersNexus 2 months ago
I don't feel qualified enough yet, but can study it more. Maybe that'd be a good one for Wendell to help me with!
@asunavk69 2 months ago
@GamersNexus As a Linux user, much appreciated; I would be content to see a video on that one too :).
@olnnn 2 months ago
@@GamersNexus An interview with one or more of the people working on MESA (not specifically intel ones) would be awesome
@krishnachittur 2 months ago
+1 to all the Mesa comments, would really appreciate some educational content there!
@IntelArcTesting 2 months ago
Always love seeing some Arc content. Thanks Steve and Tom for these types of videos
@darkoz1692 2 months ago
It's refreshing to see someone honestly talk about their products. As long as Intel offers their GPUs, I won't be going back to the other two.
@kurgo_ 2 months ago
I was basically the "they're the same image" meme when that before and after slide came up, I was puzzled until it was zoomed in haha. Honestly, I got lost pretty early (I understand nothing about these things and goodness knows a scientific/analytic mind is not one of my strong points) but even with my limited understanding, it was really interesting. Another example of "yeah just optimise it" when the work behind it is truly daunting. Thanks for the interview Steve. Speaking of optimisation, do you think you could have a game developer on and ask them why a game collection that was 3gb was turned into one that's dozens of gbs heavier and doesn't even work properly? :p it might not be as technical as this but goodness knows I'd like to understand why game devs have decided compression and optimisation aren't necessary, maybe there's a graph for that too. Cheers!
@Ruhigengeist 2 months ago
An increase in install size like that is usually because the art assets are higher resolution. For example you might have had 256x256 pixel (width x height) images for textures (the stuff that gives color & depth to 3D models and surfaces), but they might have shipped higher resolution textures like 2048x2048. You might think that's 8x bigger because 256 -> 2048 but actually it's about 64x bigger because it's two dimensions (same idea as 1080p to 4K isn't 2x more pixels, it's 4x more). So think for every texture in the game, you might have thousands or tens of thousands of them, if all of them are made bigger then the game size balloons really fast. Higher resolution textures doesn't necessarily improve things on its own though, cause the new texture needs to actually have more detail to make use of the extra size. If you just resize it and that's it, you still have the same amount of detail as you had before which won't look any better at all. So you need to have an artist go in and redo the texture to make it more detailed and less blocky. And now there's AI tech that can do some of these tasks for upscaling textures (same idea as DLSS upscaling but applied by game devs to the actual game).
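The scaling arithmetic in the comment above can be sketched in a few lines of Python (a toy illustration; the RGBA8 byte size is an assumption for uncompressed textures, and real games typically ship compressed formats):

```python
# Going from a 256x256 texture to 2048x2048 scales area, not just width.
def texels(width, height):
    return width * height

def scale_factor(old_side, new_side):
    # Each dimension grows by (new/old), so texel count grows by the square.
    return texels(new_side, new_side) / texels(old_side, old_side)

linear = 2048 / 256             # 8x per dimension
area = scale_factor(256, 2048)  # 64x total texels
print(linear, area)  # 8.0 64.0

# Uncompressed RGBA8 (4 bytes per texel) sizes, to get a feel for the jump:
mib = lambda w, h: texels(w, h) * 4 / (1024 * 1024)
print(mib(256, 256), mib(2048, 2048))  # 0.25 16.0
```

Multiply that 64x factor across thousands of textures and the install-size balloon described above follows directly.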
@snemarch 2 months ago
I was *VERY* close to buying an ARC card for my new system build, and Intel letting their engineers participate in videos of this quality and detail increases the likelihood that my next GPU ends up being one of theirs (given reasonable performance, power consumption and driver quality). This is really, really good PR, and it's very interesting to watch. Thanks for covering both technical and business/process aspects, and for not dumbing things down. Kudos to both Steve for facilitating a conversation where he's sometimes slightly out of his depth (but still asks relevant questions!), and to Tom for very good explanations of pretty technical topics.
@mrwidestrides4802 2 months ago
This series is good. Very informative and Mr Intel is a natural teacher. Thanks Steve
@Lintary 2 months ago
I love these sorts of dives. They might not ever make us experts in the field, but they give you a good sense of just how complex all this stuff is, and this in turn allows you to get a better perspective on things.
@eddiebreeg3885 2 months ago
As a game developer myself, I really love to see these conversations; it's not every day I have the opportunity to hear from hardware engineers! I do feel it's important to specify one thing I think wasn't super clear: just as every car uses an engine of some sort, same goes for games. The engine is the library that provides programmers like me with the tools we need to run the game in the first place. What Tom meant by "a lot of games use an engine" is "a lot of games use *publicly available* engines" (that's your Unreal and Unity and such). It's worth noting that a lot of game studios build their own if they have the resources to do so; there are quite a few advantages to this approach.
@veganboi3938 2 months ago
Saw the notification. Clicked it. Liked it.
@Azureskies01 2 months ago
It really goes to show how well AMD and Nvidia have done given just how complex all this is, and Intel is trying to get into it and having to do it from pretty much scratch.
@AvocadoBondage 2 months ago
Or if anything it shows how bad they are, considering Intel's basically redone their entire driver multiple times over the past two or so years while AMD and Nvidia still have issues that have been prevalent for literal years. Hopefully Intel kicks them into shape.
@Azureskies01 2 months ago
@AvocadoBondage Saying Intel doesn't have problems, or that somehow, some way, they will get their drivers to be bug-free, is... funny, to say the least.
@PixelatedWolf2077 2 months ago
Well, they JUST got into consumer GPU hardware. They're VERY new to this, so considering they already have a DLSS and FSR competitor in their first card generation is outstanding already. On top of that, they have to catch up with drivers, but it seems like their optimizations have been very substantial considering how poorly they originally performed when they launched.
@Azureskies01 2 months ago
@@PixelatedWolf2077 They are not in any way "new" to GPUs as they have been putting out APUs for longer than AMD has. It is however the first time intel has been serious about making their GPUs work well.
@PixelatedWolf2077 2 months ago
@Azureskies01 Well, originally the iGPU for Intel was a lot like a GT 1030: its sole purpose was as a display driver. Intel, however, realized they could benefit quite a bit by trying to make a proper iGPU. That first attempt was Iris, which was a step in the right direction; it was better than the normal UHD graphics of the time. Then their silly little productivity-only GPU came out, and it turns out it wasn't bad, so that's what got the ball rolling towards making Arc. Arc didn't come until 2022, however. So in reality, they haven't had much experience in both driver tech and making a proper GPU.
@ryanspencer6778 2 months ago
As someone who knows just enough about this stuff to understand what TAP is saying at a surface level, this is possibly one of GN's best videos. The average gamer probably won't understand much of this and that's OK, but as a software engineering student and GPU nerd, this is so cool.
@macgynan 2 months ago
I love these in-depth tech talks. Tom does such a fantastic job explaining and is even kind enough to bring slides. Can't wait to see more!
@TheBlackRogue96 2 months ago
Love seeing Tom on camera; he always has something new and interesting to say. I do have a follow-up question about when something is the driver team's responsibility to improve versus the application team's responsibility. Would it not be a more efficient use of your time to identify the reasons people are getting poor performance, i.e. Tom's example of register spilling, and write tools and documentation that tell app developers "Hey, you're doing this wrong"? That way, over time, more people write better-performing code, rather than relying on the driver team to fix that issue for them.
@Akkbar21 2 months ago
I LOVE this kind of in-depth and well informed break down of complex concepts like this. Ty to everyone involved.
@aznmarinex2 2 months ago
Thanks for the Selfie in Asia Steve.
@GamersNexus 2 months ago
Absolutely!
@jacoavo7875 2 months ago
Man, this is the kind of content YouTube was made for. Outstanding and interesting work here, from both Steve and Tom!
@seikojin 2 months ago
As a QA engineer, I have gotten very close to all these various layers of the pipeline. I am glad for the deep dive into it. It makes all this stuff common sense.
@italianbasegard 2 months ago
Props to the Intel guys who went out of their way to make these slides, and Tom for going over them in detail. As an engineer, it can be annoying to put technical work aside to create diagrams and presentations, but they really went out of their way to make sure this looks great and serves as efficient documentation as well. Thanks, Intel & Tom, and thanks, Steve!
@goldsquadron 2 months ago
What an amazingly informative video. Tom does a great job of breaking down very complicated topics, and Steve is asking all the right questions.
@SlocusST 2 months ago
I may or may not currently work for the big blue silicon corp, but having someone like Tom on the GPU team is essential. When Raja Koduri left, ARC's future felt way less certain, but I think these types of improvements and attention in the gaming space translate to improvements in the AI compute space, so upper management sees the financial appeal of getting their hands dirty in the gaming space too. I'm actually stoked for the continued improvement to Alchemist and the release of Battlemage.
@Mr-Clark 2 months ago
I love Intel's effort and involvement with the PC market to develop and improve their product. I purchased an A770 despite it not being faster than my current GPUs. What I do love about it is all the tweaking I do to help improve performance. I especially love it when, after doing a combination of tweaks, I get game performance that rivals their competitors or completely blows my mind, not expecting Arc to perform that well. I also encounter some games that run terribly. I run some tweaks, and at times it improves a little. Sometimes there are no improvements at all. All I can do is just wait for better drivers to come out or an update from the game devs. With Arc I feel that I am part of the development of the product. The product grows on me with every day that passes by. Going with Arc and expecting things to work flawlessly... is a you problem; Intel advised the gaming community day one. I cannot wait for Arc to get better. I've been an Nvidia, and once in a while AMD, PC gamer. I can so see myself going Arc 2 next time around, even if they still have a ways to go in catching Nvidia. Fantastic work Intel. Bravo.
@halko1 2 months ago
This is the kind of content I love. Technical, but so well explained and presented that it's easy to follow and I learn new stuff. This is the best kind of content on YouTube. The best.
@kazioo2 2 months ago
Kudos to GN for refraining from judging whether a game is "optimized", because it's a difficult thing to analyze without knowing internal details of the game, and it's a term often misunderstood by gamers; even experts misinterpret it. I remember the Digital Foundry guys, who are usually very knowledgeable, criticizing a completely empty open-world UE5 demo for not utilizing all CPU cores and comparing it to Cyberpunk - with tons of NPCs and various simulations and systems running in the background. That UE5 demo literally had nothing to run on other cores, because there was nothing except graphics and one player entity, so even if their conclusion about the engine was correct, the example used was not and could not be used for this kind of comparison. It was bizarre to see this kind of cluelessness from some of the best "game optimization" nerds in the media.
@Mallchad 2 months ago
There is always something to be run on other cores in the vast majority of cases. The question is not IF you can run things on other cores, it's more like _is it worth the complexity?_ Trying to juggle data flow and data race problems can make a multithreaded program slower than a singlethreaded one if you get it wrong. But it's still possible to make it go faster. This is especially true with game engines like Unreal, because the very nature of a game engine shatters the program into many thousands of individualized components, like transform components, which are relatively straightforward to parallelize. So much so that Unreal does it by default...
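The point about transform components being straightforward to parallelize can be sketched like this (a toy Python example, not Unreal's actual task scheduler; Python threads won't speed up CPU-bound work, so this only illustrates that independent per-entity updates split cleanly across workers with no data races):

```python
from concurrent.futures import ThreadPoolExecutor

def update_transform(entity):
    # Pure per-entity work: advance position by velocity. No shared state,
    # so there is no data flow between entities to juggle.
    x, y, vx, vy = entity
    return (x + vx, y + vy, vx, vy)

entities = [(i, i * 2, 1, -1) for i in range(1000)]

# Sequential reference result.
sequential = [update_transform(e) for e in entities]

# Parallel version: same inputs, same outputs, order preserved by map().
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(update_transform, entities))

assert parallel == sequential
```

The hard part the comment describes is everything this sketch avoids: entities that read or write each other's state, where naive locking can make the parallel version slower than the sequential one.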
@MSquared135 2 months ago
@Mallchad Agreed. What really hinders the ambitious UE4 games like Jedi Survivor, Callisto Protocol, etc. is the engine's poor CPU core utilization. It's indicative of the engine's origins in the early-to-mid 2010s, when CPU clock speed was more important than utilization of the cores available.
@Mallchad 2 months ago
@MSquared135 It's not really the engine's fault. The engine is just a shell you bolt onto what you eventually call your game. It's up to the game programmer to make it faster.
@jcm2606 2 months ago
@@Mallchad In some cases it is the engine's fault. An engine is more along the lines of the _framework or scaffolding_ that you build your game within. If that framework does not scale properly across multiple threads then that will manifest as a CPU bottleneck when you eventually stress the framework in exactly the right way that it can't scale properly, and the only remedy is to either stop stressing the framework (dumb down your AI behaviours, reduce the number of AI agents, reuse meshes more to take advantage of instanced draw calls more, etc) or replace parts of the framework with your own in-house code (write your own AI system that can scale the number of agents well, write your own GPU-driven rendering pipeline that can handle many instances of different meshes well).
@Mallchad 2 months ago
@jcm2606 Yes, "replace parts of the framework with your own in-house code" is why I don't see it as the engine's fault. Unless you're using a completely proprietary engine where you have no ability to access or even read the code, you usually have options, and even highly proprietary engines usually leave you with options for engine tinkering. The game and the engine are one and the same, and must be treated as such.
@TehPredrrr 2 months ago
Please more videos like these, they are genuinely enjoyable and informative!
@envirovore 2 months ago
Thoroughly enjoying this series so far, a great in-depth look at what's going on behind the scenes (frames?). Looking forward to the third part, and hopefully more deep dives such as this and the dive into system latency with the Nvidia rep. Great stuff!
@iogarchi 2 months ago
Excellent video. Thank you for giving us the opportunity to get a general idea of how the relationship between software and hardware is established in terms of video games and GPUs. Thank you so much to both of you and your team.
@beardedgaming3741 2 months ago
I love that Intel is working so hard at this... it's so great to see. For my next budget build, I think I'd be in a place to go Intel now.
@MaxUmbra 2 months ago
Same thoughts. Very optimistic for their future
@NervousNoodles 2 months ago
Videos like this are precisely why I'm a subscriber. Can't wait to see more!
@Green-Mountainboy 2 months ago
I have been a gamer almost from the very beginning of gaming, back on DOS. I have very little understanding of how things actually work these days, but I still love listening to people like this.
@anastassiosroumboutsos8288 2 months ago
Fantastic video. Makes you realize, and respect, the amount of research needed to compete.
@Deltarious 2 months ago
I personally have actually said "this game is optimised!" about the initial release of Overwatch (2016). For how good it looked, it really was pretty damn well optimised, and it still looked pretty great even on lower settings on very old hardware while delivering good or even great FPS. I guess the only area where it lost some points from me was that it used to have some very odd random CTDs for no apparent reason.
@EastyyBlogspot 2 months ago
Thank you for this... the compiling shaders thing has always puzzled me. While it had existed before, nowadays it is everywhere; I just wish it was done behind the scenes rather than a bar when starting up the game lol
@GamersNexus 2 months ago
I do like knowing though that the game isn't quite ready yet. Better than waiting and not being sure if it's working or stuck!
@EastyyBlogspot 2 months ago
I always did wonder whether hard drives make a difference to performance and compilation, as I've noticed over the past few years that games on HDDs have more issues than on SSDs.
@zivzulander 2 months ago
Yeah I wondered about this as well. I don't think I've even heard it as a phrase until the past few years. That was more a behind the scenes thing in the past.
@WereCatStudio 2 months ago
To be fair I'd rather have the shader compile happen before I start playing rather than have a stutter fiesta when playing the game... Jedi Survivor for example. I think many games are way more shader heavy than they used to be so that can be part of the reason why this happens often now. AFAIK it has been a problem with UE4 and now with UE5 engine games mostly.
@EastyyBlogspot 2 months ago
@zivzulander The first time I remember seeing it was the PS3 emulator RPCS3, though I've heard it has been around for a while with some older Battlefield games... but now almost every game has it.
@keira_churchill 2 months ago
I saw someone wishing "good luck" at 23:22 but none was needed. You two nailed it.
@AnjanaDharmasiri85 2 months ago
Great video guys. Very Informative! Keep them coming!!! 🔥🔥🔥
@NuSpirit_ 2 months ago
Great video. My only two remaining questions are when Battlemage is coming and whether the drivers will make it shine 😂 (jk jk) But seriously, I really want to buy an Intel GPU since prices are great, but I'm still discouraged by the prevalent issues, and I don't want to risk the next big field-of-stars game not working at launch.
@highcue 2 months ago
So, if I get this right, the drivers are "driving" the cards. Those DLLs and kernels seem to be the instructions on how to handle the "road". It's almost as if you had to learn to drive a new car in a new city for every new game. Every time you come into a new city, you need to learn to drive differently, not only because of the car but also because of the roads, signs, rules, temperature and terrain... Sometimes it's not too different from one city to another, but other times it's completely wild. What could be the safest maneuver in one city could lead to your death in another one.
@OriginalBrett610 2 months ago
This is incredible! I’ve been using an A750 since launch and look forward to the future hardware. Really appreciate the hard work employees have put into improving the performance. It’s super cool to see behind the scenes of what makes them tick.
@LazyKing92 2 months ago
This was amazing; Tom was able to explain really complex topics for the average user to understand. Makes me appreciate all the work the Arc team is doing. Thanks Tom!
@ArthurM1863 2 months ago
Tom Petersen is amazing. I've followed him since he was at Nvidia, when he did a review of one of my first GPUs, the GTX 650 Ti. His great commentary about products and software, mixed with silly jokes here and there, makes him the perfect man for the job lol
@CXDezign 2 months ago
I bought a laptop with an Intel A370M GPU over a year and a half ago. To this day I still experience crashes in Photoshop that halt the entire system, and I cannot for the life of me reach anyone to diagnose the issue...
@Codec264 2 months ago
If your drivers are all up to date, most likely you have bad silicon and need to look for a warranty replacement from the manufacturer.
@thegeforce6625 2 months ago
@Codec264 Or Intel or Adobe hasn't gotten around to fixing the crashing bug yet.
@Leftycpe 2 months ago
Always enjoy the videos with Tom. The details at times are well beyond what I understand, but that's just more fuel for Google searches. I've had an A750 for a couple of months now and have been pretty pleased with its performance on my old DX9-based titles. Fantastic deep dive into drivers!
@garretthazlett9116 2 months ago
Love the "deeper dives" here! Great stuff!
@mofstar8683 2 months ago
Thanks for your continued coverage of Arc! In your most recent Arc video you talked about MSAA being broken for Arc in GTA and how the Intel team responded and said it was a GTA-exclusive issue. Please tell the team this isn't exclusive to GTA; in some older games like Watch Dogs 1 and Assassin's Creed Black Flag, MSAA is completely broken on Arc!
@neuronmind 2 months ago
The Black Flag AA issue is old and occurred on AMD drivers many years ago. It's bad programming from Ubisoft, not bad drivers.
@slumlord2625 2 months ago
These videos with Tom are great!
@SToad 2 months ago
I love this series; I didn't know there were more coming up after the last one. Definitely enjoying Tom's in-depth knowledge here. Thanks for sharing!
@MartynDerg 2 months ago
this is absolutely fantastic, I seriously love chats like this, it's what has been sorely missing in the relationship between product and consumer. Talking to the people who actually *make* the damn thing makes me feel indescribably more informed and appreciative of everything.
@lassenlautta 2 months ago
Just got an A770 recently... this is interesting.
@IntelArcTesting 2 months ago
Great choice, hope you will have a great time with it
@haikopaiko 2 months ago
Great video! Thanks Steve and the team! Great explanation from Tom 🤙
@SvDKILLSWITCH 2 months ago
Love these videos with Tom. Very much looking forward to the video encode/decode episode; I've been contemplating picking up an LGA1700 CPU predominantly for Intel QSV (specifically the dual encoders on their UHD 770 iGPU) to use in a media server.
@tek_lynx4225 2 months ago
Please grill him on his company's DX3/5/7 and OpenGL 1.x/2.x emulation/wrapping. They never want to talk about it.
@brendanconaway 2 months ago
Tom Petersen seems like such a nice guy :)
@JagHiroshi 2 months ago
I bought ARC .. and I like ARC. Thank you Tom for continually improving this product.
@martyfliesdrones 2 months ago
Super cool video, I love hearing this guy talk about these really technical challenges.
@omniscient9533 2 months ago
Really thinking of Arc for my next card. This community outreach makes it feel less like a corporation.
@redslate 2 months ago
The ARC Team seems to be genuinely committed to their product.
@nielsbishere 2 months ago
5:50, just to add: not all shaders are text when they land in driver land. For DirectX 12 and Vulkan, the text has already been compiled to an intermediary binary format (DXIL or SPIR-V), which is easier for the driver to handle since there are fewer steps required to convert it to its instructions. But it still needs that compilation step. Text is probably easier to explain though (and it's relevant to OpenGL).
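The two-stage pipeline that comment describes can be modeled with a toy sketch (the function names and string formats here are purely illustrative, not real compiler APIs):

```python
def frontend_compile(source_text: str) -> str:
    # Offline step (think dxc or glslang): shader text -> portable IR
    # such as DXIL or SPIR-V.
    return f"IR({source_text})"

def backend_compile(ir: str) -> str:
    # Driver step at load time: portable IR -> vendor-specific ISA.
    return f"ISA[{ir}]"

# OpenGL-style: the driver receives raw text and must run both stages.
gl_path = backend_compile(frontend_compile("void main() {}"))

# DX12/Vulkan-style: the IR was produced ahead of time by the game's
# build process, so only the back end runs at load time.
shipped_ir = frontend_compile("void main() {}")  # done offline
vk_path = backend_compile(shipped_ir)            # the only runtime step

assert gl_path == vk_path == "ISA[IR(void main() {})]"
```

Both routes end at the same machine code; shipping the IR just moves the front-end parsing and validation out of the driver and onto the developer's build machine.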
@SimonBuchanNz a month ago
In contrast, there's WebGL / WebGPU, where the browser parses the shader source (GLSL / WGSL) on demand, then internally translates it into the browser's target API shader language (HLSL, GLSL, whatever Metal uses), then compiles that again 😢 It even needs to translate when it's GLSL to GLSL, for safety and small changes in the flavor of the language. Chrome even uses something called ANGLE, which implements OpenGL in terms of DirectX for... reasons. So you can in theory end up translating GLSL to GLSL to HLSL to DXIL 😢 Hopefully browsers get good at caching all this nonsense!
@nielsbishere a month ago
@SimonBuchanNz Such a giant step back. DXIL and SPIR-V were created specifically to remove the compiler from the process. Now you only have to deal with DXC and glslangValidator directly (you can just file an issue rather than contact IHVs), rather than one compiler per vendor. The web never seems to learn from mistakes people found out in the real world :/
@SimonBuchanNz a month ago
@@nielsbishere well, there *are* reasons (security, a lot of the time) ... but I wouldn't be surprised if *eventually* there's a binary form of WGSL, and in theory the browsers could write their own optimizing emitters for DXIL and SPIRV (dunno about Metal) to skip another compile - they do way harder stuff all the time. A lot of the time, though, this stuff is just politics: I think there's a hint of a suggestion that SPIRV might have been vetoed because it might make Apple look worse due to the extra translation they would need due to not having Vulkan (the obvious response being ... why not?)
@nielsbishere a month ago
@SimonBuchanNz I think in the end it's because Apple wants to force the way the API can go. With Khronos they have way less influence, which they don't like. Apple seems very determined to cut all ties with Khronos, and I don't really know why. Security might be true in some cases, but I've run into enough driver crashes to know that this probably also increases the likelihood of bigger issues because of the additional layers in between (seeing as binary -> ISA already goes wrong). Now you'll have to deal with every different browser having different compilers and every backend and every vendor having different behavior... With SPIR-V that'd have been reduced to one, maybe two compilers (DXC for HLSL and glslangValidator for GLSL).
@ChannelSho 2 months ago
Thanks for getting this out when other major places for whatever reason haven't been able to get this kind of talk out of a company. Also kudos to Steve for being hesitant to say a game is "unoptimized"; his reasoning is the best approach to this. The problem I have with people saying something is "unoptimized" is that there's no empirical way to test it. To even start, you need performance requirements, and even then, those performance requirements can be subject to opinion. Just because you demand the game be able to run at 120 FPS average at 1080p on maximum details on an RTX 4060 or RX 7700 doesn't mean the next person will; they may demand 30 FPS, 60 FPS, or 240 FPS. If I want to be that guy and demand 4K, 240 FPS, maximum quality on an RTX 4060 8GB, then every game except maybe things from before 2010 is unoptimized. This is just scratching the surface, but ultimately, without data and requirements, there's really nothing to go off of. I even asked someone what an "optimized" game means, and they just gave me a generic answer. If you don't even know what to look for specifically, then what even makes you qualified to make the call that something is "optimized" vs. not?
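One way to make that point concrete: "optimized" only becomes testable once you pin down explicit requirements and measure frame times against them. A sketch (the average/1%-low metrics mirror common benchmarking practice, but the thresholds are arbitrary examples, not a standard):

```python
def fps_stats(frame_times_ms):
    # Average FPS plus a crude "1% low" from the worst frame times.
    avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
    worst = sorted(frame_times_ms, reverse=True)
    one_percent = worst[: max(1, len(worst) // 100)]
    low_fps = 1000 / (sum(one_percent) / len(one_percent))
    return avg_fps, low_fps

def meets_requirement(frame_times_ms, target_avg, target_low):
    avg_fps, low_fps = fps_stats(frame_times_ms)
    return avg_fps >= target_avg and low_fps >= target_low

# 99 smooth frames at ~8.3 ms (120 FPS) plus one 50 ms hitch: the average
# still clears 100 FPS, but the 1% low fails a 60 FPS floor.
trace = [8.3] * 99 + [50.0]
print(meets_requirement(trace, target_avg=100, target_low=60))  # False
```

Change the targets and the verdict flips, which is exactly the commenter's point: without agreed requirements and data, "unoptimized" is just opinion.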
@Wild_Cat 2 months ago
Another GN x TAP video so quickly?! Thanks Steve! PS. It would be wonderful if you took us behind the scenes at the Intel graphics dev lab office
@nocturneuh 2 months ago
Doom Eternal was OPTIMIZED. (🙏 Thank you ID).
@ddpwe5269 2 months ago
These are great videos! Even though I may not understand everything that is being said, it at least shows us that not only are the companies doing as much as they can, but they're showing it with someone who understands enough to ask questions and confirm what they're being told, to a degree of course. It's also great when you have someone like Tom who is enthusiastic about the work he and his teams do!
@jayhsyn 2 months ago
Incredible video! I learned a lot. Thank you for the hard work Steve and team, and thank you Tom for the in depth explanations!
@Hobo_X 2 months ago
The fact that Tom is still doing this stuff (and Intel is letting him) makes me think they still have a long-term plan to _eventually_ get there with their graphics division. I truly believe if Intel holds out a bit and doesn't kill Arc too fast, they can carve out a chunk of the market by focusing on price/perf gamers. From that starting position, it can grow further. Nvidia needs a kick in the pants and to lose sales to snap back to reality.
@filcon12 2 months ago
Hi, I'm the tech lead for one of those games which was optimized in the Intel driver lately. I would have a question for the Intel guy: is there any way to find out what you optimized for our game? Maybe we could do more stuff on our side to make the game run faster.
@Moostuffas 2 months ago
Super interesting. Thank you for taking the time for this content.
@MalamIbnMalam 2 months ago
Great review! I learned a lot of this stuff in my undergrad computer science courses. This guy can take difficult concepts and explain them in a simplified fashion.
@EastyyBlogspot 2 months ago
I do wonder how much the game engine matters. For example, is UE4 more optimised than UE5 purely because it has been around longer and is more mature? I hear so much about games being ported from 4 to 5, and I wonder, does the performance take a hit just by doing that?
@Cinnamon1080 2 months ago
UE5 games are heavier because the modern features devs choose to use are heavier.
@GamersNexus 2 months ago
Great question. Without being in a position to directly answer, I'd wager that UE5 is probably more optimized even in spite of UE4's maturity, just because UE5 should be rolling all of UE4's optimizations forward. I do think there are probably situations where your thought is right though: if a dev has the option of a mature engine versus a totally ground-up build, there are probably things better optimized on the older one.
@kazioo2 2 months ago
Another problem many gamers don't understand is that a more flexible modern feature can be more expensive even if visually it looks the same or even worse. A photorealistic room with baked light in UE4 will run at 200 FPS and could even look better than the same room in UE5 with Lumen running at 60 FPS, where every light can be dynamically changed and the wall can be demolished. And it's not because UE5 is unoptimized; it's because it does things in real time that were baked offline before, and this allows for more dynamic worlds and gameplay. But when a dev still makes a very static game while using fully dynamic lighting, this new cost doesn't feel justified.
@jcm2606 2 months ago
It does matter a lot, since the engine dictates the exact calls, shader code, resource usages, and so on that are fed down into the driver. OpenGL and pre-DX12 drivers can do a lot to wring more performance out of what they've been given by the engine, but at the end of the day it _is_ the engine that dictates what exactly is going on in the frame.

DX12 and Vulkan turn this up several notches, too, as they push a lot of work out of the driver and back into the engine. Under OpenGL and DX11-and-below, management of API-level command buffers, memory management, and frame presentation/buffering happened largely in the API runtime or the driver, based on what the engine fed it. DX12 and Vulkan push these back up into the engine, making the engine responsible for recording API commands into command buffers, performing API-level memory management, dictating the overall flow of frame presentation and buffering, and so on. As time goes on this is happening more and more, too, as seen in the Approaching Zero Driver Overhead movement in OpenGL or "bindless"-style descriptor management in Vulkan (descriptor indexing, buffer device addresses, etc.).
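To illustrate the division of labor that comment describes, here's a toy model of the DX12/Vulkan approach, where the engine explicitly records its own command buffer. The class and command names are invented for illustration; this is not real Vulkan or DX12 API code, just a sketch of who does the recording.

```python
# Toy model only: names are made up for illustration, not a real graphics API.
# The point is that under DX12/Vulkan the *engine* records every command,
# work that older runtimes/drivers did implicitly on the app's behalf.

class CommandBuffer:
    """Stands in for an API-level command buffer (think VkCommandBuffer)."""
    def __init__(self):
        self.commands = []

    def record(self, cmd: str):
        self.commands.append(cmd)


def engine_records_frame() -> CommandBuffer:
    """The engine explicitly records one frame's worth of commands."""
    cb = CommandBuffer()
    cb.record("begin_render_pass")
    cb.record("bind_pipeline")     # engine picks the pipeline state itself
    cb.record("bind_descriptors")  # engine manages resource bindings itself
    cb.record("draw")
    cb.record("end_render_pass")
    return cb


frame = engine_records_frame()
print(frame.commands)
```

Under OpenGL/DX11, the equivalent of `engine_records_frame` largely lived inside the driver, which is why those drivers had more room to "optimize" on a game's behalf, and why DX12/Vulkan shift more of that responsibility onto the game engine.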
@EastyyBlogspot 2 months ago
@kazioo2 That's exactly it: I'll see a game whose graphics, on the face of it, don't look that great, and yet the performance is nowhere near what I'd expect from what I'm seeing.
@wile123456 2 months ago
Wish you had asked him this critical question: "What was the point of two alpha releases with DG1 and DG2, plus over a decade of integrated graphics drivers? Why weren't the drivers ready when you had two years to get them ready before Arc launched, counting only DG1 and DG2, and why was all the integrated driver work useless for Arc?"
@GamersNexus 2 months ago
This video has been live for literally 5 minutes. You could not have even known what was asked, first of all, and secondly, the topic of this video is the driver stack.
@janbenes3165 2 months ago
@GamersNexus On one hand, sure, the commenter did not watch the whole video before commenting; neither have I, so far. On the other... aren't they right? Sure, you say it's not the topic of THIS video, but how many videos have there been with Arc engineers so far? It's great to celebrate that Intel is making progress, but shouldn't they be held accountable for the original mess?
@GamersNexus 2 months ago
@janbenes3165 Do you mean when we held them accountable for 2 straight years while it was all happening actively? Because yes, we did. We covered DG1 before anyone else, covered DG2's messy launch, and in fact we broke the story on what a trainwreck the drivers were for Arc originally -- so much so that other reporters reported on our findings and on Intel's direct response to them. We covered that story. Receipts:
Literally called "Worst We've Tested" - kzfaq.info/get/bejne/g9CJhsiFkrqZpZ8.html
Intel responds: kzfaq.info/get/bejne/sNR6rdKrt8yuiY0.html
DG1 launch: kzfaq.info/get/bejne/frmjmMSc06vaaYU.html
Talking about what a nightmare the A380 was: kzfaq.info/get/bejne/gsddl8Z8mc6Yiok.html
Unrelated, but while we're at it, holding them accountable for a terrible card design: kzfaq.info/get/bejne/hJlnZMx-ysXSlnU.html
@janbenes3165 2 months ago
@GamersNexus Yeah, you covered the story. I know you did; I was there too. But here you have a person who was close to the top when it was all happening, or who at least should have relevant information about it, and you didn't simply ask "why?" I'm not saying you should be dragging Arc through the mud or anything like that, but the question of why DG1 never amounted to functional drivers never comes up.
@marvinmallette6795 2 months ago
The answers seem obvious on the surface to anyone who has been gaming long enough. Why do games release in a broken state so often? Sometimes you have to get a product out. You could spend two years with a product working in a lab and not learn as much as you could in three months post-release. Arc is Intel's first attempt at a competitive dedicated GPU, which is a different product from its integrated GPUs. Gaming was likely not a priority, and usually nobody cared about gaming performance on Intel integrated graphics. This is essentially Intel working on "getting in shape".
@ZinoAmare 2 months ago
I always love these videos, with Tom giving us info about Arc. I'm very excited for Arc and hope to see a Paladin archetype.
@globalvillagedrunk 2 months ago
Great stuff thanks Steve. I’ve just ordered parts for a new system which started out being very price focused - I picked up a Ryzen 3600 for £57 and an Acer Predator A770 for £232 to go in an also secondhand Fractal Design Pop Air Mini case - until I ended up buying a 5950X after reading about ReBar… Anyway, really glad to see Intel’s drivers coming on leaps and bounds and really hoping my new build will perform well (perhaps increasingly so) when everything arrives!
@drevokocur66 2 months ago
How about fixing AMD's garbage drivers?
@rulik006 2 months ago
impossible
@GamersNexus 2 months ago
AMD has honestly gotten a ton better in the last 3 years. We haven't had anywhere near as many problems with them since the 6000 series started. 5000 was also much better than Vega.
@ViktardTRTH 2 months ago
@GamersNexus Amazing that people still believe this tired trope.
@habama1077 2 months ago
@ViktardTRTH I remember when Hogwarts Legacy came out. That experience did not refute this "tired trope". They have absolutely gotten better, but there is still a long way to go.
@joaobcrts 2 months ago
Shut UP NVidia fanboy 😂
@1337Superfly 2 months ago
Damn, this is a nice video! Listening to Tom Petersen talk is quite amazing. So knowledgeable and so helpful! Really liked the video editing as well -- some great zoom-ins and overlays 😄 Question for next time: Tom, where is the area of focus for driver/software improvements now (and next)? I'm not only talking about gaming here.
@guimello_silva 2 months ago
That was amazing on so many levels. Thank you all for discussing this.
@jakephills3013 2 months ago
I loved this series. I left it with so much knowledge about GPU image processing, render times, and optimizations. Amazing. Thanks a lot, Steve and Tom 🥰. I'll be waiting for more exciting stuff like this.
@twinn61661 2 months ago
Bought my 8 GB A750 LE card in January for 200 USD new. So far, no issues with my use cases. Awesome-looking card!