Forward and Deferred Rendering - Cambridge Computer Science Talks

21,568 views

Ben Andrew

1 day ago

A talk given to my fellow Cambridge computer science students on the 27th January 2021.
Abstract:
The visuals of video games and films have deep influences on our culture, from Shrek to Garfield Racing. The modern history of real-time rendering is deeply tied to the architecture of GPUs and what they allow us to do. How have our approaches to rendering changed over time, and what may the future hold?
In this talk, I will compare the Forward and Deferred rendering pipelines, from both a technical standpoint as well as explaining the history behind them. I will also briefly explore what future developments may look like in the industry.
Website:
www.benmandrew.com/
LinkedIn:
/ benmandrew
GitHub:
github.com/benmandrew
Thumbnail image by Paul Siedler.
www.artstation.com/artwork/BR3rl

Comments: 46
@Lavimoe 1 year ago
"If you cannot explain something in simple terms, you don't understand it." This video really shows how deep an understanding you have on the shader topic. Thanks so much!
@rtyzxc 4 months ago
Can't wait for the return of MSAA and sharp graphics again!
@TechDiveAVCLUB 1 year ago
Can't believe such a perfect digestion of high-level information into actionable mental models exists. Thank you!
@donovan6320 2 years ago
Should also mention that all lighting in Doom Eternal is dynamic. The "pre-processing" being done is called clustered forward rendering, in which a culling stage reduces the set of lights sampled in each part of the scene.
@benmandrew 2 years ago
Yep, unfortunately had to cut out clustered rendering to keep the talk focused and under half an hour. It's a very cool technique explained in the Doom Eternal graphics study by Adrian Courrèges (one of my sources at the end).
@donovan6320 2 years ago
@benmandrew I figured; thought I should clarify for those curious about the technique. "Preprocessing" is technically correct, but I would have just called it a culling pass, since preprocessing implies a static, pre-runtime/serialised nature that the light culling pass doesn't have. Definitely beyond my paygrade, but it's really cool.
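For readers curious what that culling pass actually does, here is a toy CPU sketch of the light-binning idea behind clustered/tiled forward rendering. All values are made-up for illustration; a real renderer runs this in a compute shader and typically also slices clusters along view-space depth:

```python
def build_clusters(lights, grid=(4, 4), screen=(800, 600)):
    """Bin each light into the screen-space tiles its radius overlaps,
    so shading a fragment only loops over its own tile's light list."""
    tw, th = screen[0] / grid[0], screen[1] / grid[1]
    clusters = {(i, j): [] for i in range(grid[0]) for j in range(grid[1])}
    for idx, (x, y, radius) in enumerate(lights):
        # Conservative tile range covered by the light's bounding circle
        x0 = max(0, int((x - radius) // tw))
        x1 = min(grid[0] - 1, int((x + radius) // tw))
        y0 = max(0, int((y - radius) // th))
        y1 = min(grid[1] - 1, int((y + radius) // th))
        for i in range(x0, x1 + 1):
            for j in range(y0, y1 + 1):
                clusters[(i, j)].append(idx)
    return clusters

# (x, y, radius) in screen space -- made-up values
lights = [(100, 100, 50), (700, 500, 80), (400, 300, 600)]
clusters = build_clusters(lights)
# A fragment in the top-left tile only shades the lights binned there:
print(clusters[(0, 0)])  # -> [0, 2]
```

The win is that per-fragment cost scales with lights-per-tile rather than total lights in the scene.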
@cafe_underground 6 months ago
Amazing explanation, I could finally grasp the pros and cons of each technique
@jgriep 2 years ago
Hands down the best, most straightforward explanation of forward vs. deferred rendering I have seen!
@slothsarecool 8 months ago
Shrek?? no way 😅 that’s great. Awesome talk, thanks
@glass1098 1 year ago
Thanks for the video, i had a lot of questions on the topic and this was an absolute clear explanation of the differences
@anoomage 1 month ago
Thank you so much for sharing your knowledge sir, I learned a lot from your presentation!
@haos4574 1 year ago
This is gold content, watched several videos on the topic, this is the one that actually makes me understand.
@penneywang6552 1 year ago
Best video explaining them, from the history and hardware to how the GPU pipeline works, thank you. Looking forward to more tutorials like this.
@Kalandrill 1 year ago
Thanks a lot for sharing, didn't expect a dive into the current state of things in games. It was a very pleasant surprise :)
@santitabnavascues8673 1 year ago
It's curious how everybody who illustrates a depth buffer uses the reverse-depth approach, where white is closer and black is farther. More curious is that the reverse depth buffer distributes depth precision much better than the original, 'forward' depth buffer, where closer objects have a depth close to 0 and far objects a depth close to 1 😊
@benmandrew 1 year ago
Correct, for those interested this is due to the non-linear perspective transformation (1/z) either combining with or cancelling out the somewhat-logarithmic distribution of points in IEEE floating point numbers. A really good explanation is on the Nvidia developer website -- developer.nvidia.com/content/depth-precision-visualized.
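A quick way to see that precision difference on the CPU is to map a distant slab of z values through both depth conventions, round each result to a 32-bit float, and count how many distinct values survive. This is a toy sketch; the near/far planes and sample range are arbitrary:

```python
import struct

def f32(x):
    """Round a Python double to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

near, far = 0.1, 1000.0

def depth_standard(z):
    """Conventional [0,1] depth: 0 at the near plane, 1 at the far plane."""
    return far * (z - near) / (z * (far - near))

def depth_reversed(z):
    """Reversed-Z: 1 at the near plane, 0 at the far plane."""
    return near * (far - z) / (z * (far - near))

# A slab of distant geometry: 50,000 z values in [900, 1000]
zs = [900.0 + i * 0.002 for i in range(50000)]
std = {f32(depth_standard(z)) for z in zs}
rev = {f32(depth_reversed(z)) for z in zs}

# Standard depth crams the slab into float32 values near 1.0, where
# representable floats are sparse; reversed-Z lands near 0.0, where
# they are dense, so far more distinct depths survive the rounding.
print(len(std), len(rev))
</imports_placeholder>```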
@egoinstart8756 1 year ago
Excellent. Best video about this topic I've found. Thank you.
@StealthMacaque 8 months ago
Unbelievably good explanation. I cannot thank you enough!
@gideonunger7284 1 year ago
Why is forward always portrayed as lights × meshes? I have never written a forward renderer like that: just put the lights in a buffer, then send the indices of the lights affecting each mesh. That gives you one uniform branch for the loop, but that should be fine, and way faster than multiple draw calls lol
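That single-pass scheme with per-mesh light index lists can be sketched as a toy CPU model; the light names, mesh names, and index lists below are all illustrative:

```python
# Toy model of single-pass forward shading with per-mesh light lists,
# versus the classic multi-pass "one draw per (light, mesh) pair".
lights = ["key", "fill", "rim", "lamp1", "lamp2"]   # the light buffer

# Each mesh stores indices into the light buffer for the lights
# that actually affect it (determined by culling on the CPU/GPU).
meshes = {
    "floor":  [0, 3, 4],
    "hero":   [0, 1, 2],
    "barrel": [3],
}

# Classic multi-pass forward: one draw call per light per mesh.
multipass_draws = sum(len(idx) for idx in meshes.values())

# Single-pass forward: one draw per mesh; the shader loops over
# that mesh's light indices inside a single invocation.
singlepass_draws = len(meshes)

print(multipass_draws, singlepass_draws)  # 7 draws vs 3
```

The shading work per fragment is similar either way; the saving is in draw calls, state changes, and repeated vertex processing.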
@leeoiou7295 1 year ago
Excellent talk. I did a little research and found out that you are just a young lad. Wish you all the best and thanks for such a great talk.
@rubenhovhannisyan317 1 year ago
Thanks a lot. Saved a lot of time and effort.
@schmildo 8 months ago
Thanks mate
@pwhv 1 year ago
very well explained, loved it
@lovve996 2 years ago
very useful ! thanks a lot
@thhm 6 months ago
Definitely still a heady topic for me, but thank you for explaining it. Specially for the emerging trends and outlook in the end, definitely interesting.
@jiayuezhu5848 2 years ago
This is such a helpful video!
@carlosd562 1 year ago
Very good video!
@stephenkamenar 2 years ago
thank you shrek
@gordazo0 6 months ago
excellent
@onevoltten7352 2 years ago
Thank you! Been going back and forth between deferred and forward, as forward shading is a lot more effort, requiring much more planning and optimising. I plan to force myself to use forward rendering during development and commit to a much more optimised game rather than go for dynamic lighting.
@donovan6320 2 years ago
I mean you can use forward and have a lot of dynamic lighting... Doom Eternal uses all dynamic forward lighting.
@vitordelima 9 months ago
@donovan6320 A shader can loop over many light sources during the same rendering step, but many screen-space effects are compromised if you don't use deferred.
@donovan6320 9 months ago
@vitordelima You aren't wrong.
@vitordelima 9 months ago
@donovan6320 Deferred is only important if you need some extra data from each separate rendering step that isn't easily generated by forward alone, but lighting can be calculated in a single pass with forward nowadays.
@vitordelima 9 months ago
@donovan6320 I found out more about it: modern hardware still supports a lot of light sources in forward mode simply by iterating over them, and there are methods to improve on this with something similar to culling. If you use a good enough global illumination method, deferred vs. forward doesn't matter that much, because the lighting is calculated in a separate rendering step anyway.
@jeffg4686 1 year ago
Have you checked out the "clustered forward renderer" in Bevy? Looks pretty nice. Don't know if there are any downsides. Says unlimited lights.
@SergioWolf843 9 months ago
Tile-based deferred rendering has the advantage of culling lights per tile rather than per pixel, decreasing the GPU load, so many lights crossing your tiles won't hurt performance as much. If I'm not mistaken, Apple has TBDR patents and has been using it on the iPhone since 2017.
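The deferred half of that trade-off can be modelled as a toy sketch: the geometry pass keeps only the nearest fragment per pixel in a G-buffer, so the lighting pass shades each covered pixel exactly once, however much overdraw occurred. Screen size, depths, and light names below are made-up:

```python
# Toy deferred model: geometry pass writes a G-buffer,
# lighting pass then shades each covered pixel exactly once.
W, H = 4, 4
gbuffer = [[None] * W for _ in range(H)]   # (depth, albedo) per pixel

def geometry_pass(fragments):
    """fragments: (x, y, depth, albedo). Depth test keeps the nearest."""
    for x, y, depth, albedo in fragments:
        cur = gbuffer[y][x]
        if cur is None or depth < cur[0]:
            gbuffer[y][x] = (depth, albedo)

def lighting_pass(lights):
    """Evaluate every light once per covered pixel; return shade count."""
    shades = 0
    for row in gbuffer:
        for texel in row:
            if texel is not None:
                shades += len(lights)
    return shades

# Two overlapping quads covering the same 2x2 pixel region:
frags = [(x, y, d, c) for d, c in [(0.5, "red"), (0.2, "blue")]
         for x in (0, 1) for y in (0, 1)]
geometry_pass(frags)
print(lighting_pass(["key", "fill"]))  # 4 pixels x 2 lights = 8
```

Despite 8 fragments being rasterised, only 4 pixels reach the lighting pass; a tiled variant would further restrict each tile to its own light list.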
@MrTomyCJ 3 days ago
7:25 In WebGPU, the vertex and fragment shader code is provided to the pipeline, which means a pipeline can only execute one fragment shader. So to render the scene we wouldn't just have the nested loops lights > objects, but rather materials > lights > objectsWithThisMaterial, setting a different pipeline for each material. Am I missing something here? Is pipeline-per-material the intended way to draw the objects in this case? 19:30 In WGSL it doesn't seem to be possible to use some samplers inside branching code. Is there a way around that?
@dan_perry 1 year ago
Hmm, I thought the PowerVR chip in the Dreamcast was the first tile-based deferred renderer?
@chenbruce3784 1 year ago
Thank you
@benmandrew 1 year ago
You're welcome
@charactername263 3 months ago
But surely you just put your lights into a GPU buffer and then sample the buffer whilst drawing meshes. That makes it just M draw calls for M meshes, sampling the buffer for N lights, which is really no different from deferred, other than that deferred avoids re-shading hidden fragments; but even a depth prepass on forward solves that issue.
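The depth-prepass point can be illustrated with a toy overdraw count (the screen size and layer depths are made-up numbers):

```python
# Toy model: forward shading cost with and without a depth prepass.
# Three full-screen layers drawn back to front over a 100-pixel screen.
PIXELS = 100
layers = [0.9, 0.5, 0.1]   # depths, drawn in this order (back to front)

# Without a prepass, each layer passes the depth test as it is drawn
# (it is nearer than everything drawn so far), so every fragment of
# every layer is shaded:
shaded_naive = PIXELS * len(layers)

# With a depth prepass, the depth buffer already holds the nearest
# depth per pixel, so only fragments equal to it pass and get shaded:
nearest = min(layers)
shaded_prepass = PIXELS * sum(1 for d in layers if d == nearest)

print(shaded_naive, shaded_prepass)  # 300 vs 100 shaded fragments
```

The prepass costs an extra geometry pass, but caps fragment shading at one invocation per visible pixel, which is the same property the G-buffer gives deferred.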
@zugolf4980 1 year ago
And this is why you're at Cambridge University