
I Finally Understand Load Balancing

  54,976 views

Theo - t3․gg

1 day ago

Load balancing is key to keeping your services up. It's also not as simple as you may think. Sam did an INSANE job visualizing load balancing and you should definitely check the post out to play with it yourself
samwho.dev/loa...
FOLLOW SAM / samwhoo
Check out my Twitch, Twitter, Discord more at t3.gg
S/O Ph4se0n3 for the awesome edit 🙏

Comments: 124
@t3dotgg 4 months ago
Sam's the coolest and deserves a follow. If he gets popular enough he can dedicate more time to making dope blog posts like this twitter.com/samwhoo
@mpty2022 4 months ago
Excellent work of animation, design, and understanding. I just want to add a small point here: this is plain scheduling (decades old), and load balancing is one application of scheduling. The topic of scheduling itself has been well researched in the CS research community. The actual researchers deserve much of the praise.
@samrosewho 4 months ago
@mpty2022 for sure! I want to write more broadly about the topic of scheduling soon, there's a tonne of literature to absorb first though 😅
@mpty2022 4 months ago
@samrosewho oh man, you did great work, I feel like an a** now
@samrosewho 4 months ago
@mpty2022 it's alright, we're all standing on the shoulders of giants. Important to remember that 😄
@samrosewho 4 months ago
Oh hey I wondered why I got a bunch of new followers on Twitter 😅
@lyreshechter1812 4 months ago
Great article! Thanks!
@jasonaables 4 months ago
Really well done. I'm a visual learner so this kind of style is very helpful.
@EvertvanBrussel 4 months ago
Hey, I have a question about your post. You mentioned that in terms of dropped requests, PEWMA starts out better than LC, but that eventually it starts performing worse. I don't really understand why that would be the case. As in, I understand why it's true for how you explained PEWMA works, but the fix seems (to me at least) trivially simple: you start with PEWMA, but the load balancer also knows which servers have their queues maxed out, so once all your fast servers' queues are full, you start using the slower servers again, effectively falling back to the LC algorithm. And of course, once traffic slows down a bit and your servers have some breathing room again, you switch back to the pure PEWMA algorithm. Isn't it that easy? Am I missing something here?

Edit: oh, finally, I absolutely loved all the animations, especially the playground at the end where you could tinker with the parameters. If it's not too much to ask, could you add to the playground an automatically updating graph showing the 95th percentile of latency and/or the number of dropped requests over the last 60 seconds? I noticed that once you push the algorithms close enough to their limits, it's actually quite hard to get an accurate feel for their performance simply by eyeballing it.
@NicholasMaietta 4 months ago
I've been building servers and full stack apps and always hated the issues I ran into with using Load Balancers. I appreciate the very good breakdown of this. This is now part of my reference material to share with others.
@samrosewho 4 months ago
@NicholasMaietta really glad you enjoyed it! ❤️
@welcometovalhalla2884 4 months ago
Just hire the Factorio players smh
@4.0.4 4 months ago
These algorithms were initially invented by the late John McAfee to handle his harem of side girls. This is why they're called "load requests".
@smnomad9276 4 months ago
LMAO
@fdsafdsafdsafdsafd 4 months ago
Nice to see devs appreciating ops instead of just assuming "it works".
@samrosewho 4 months ago
Ops/infra has some of the coolest computer science, but also lots of coverage of it is quite dry and intimidating. I’m having a blast trying to bring the ideas to life and make them less scary!
@charliesta.abc123 4 months ago
All devs appreciate ops, except JavaScript devs.
@AMalevolentCreation 4 months ago
@charliesta.abc123 very accurate
@Aoredon 4 months ago
Every dev appreciates ops. You must not be a dev.
@jst1977 4 months ago
Fun fact: an exponentially weighted moving average is exactly the same thing as an RC low-pass filter in audio. Also, I don't think the algorithm has to keep track of the last N values, since the math allows using only the previous moving average and the current value. That way the algorithm is O(1) and only requires a few CPU cycles (I think fewer than 20 per server for the computation itself). Edit: specify which low pass
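To illustrate the point in the comment above, here is a minimal O(1) EWMA sketch (hypothetical code, not taken from the post): only the previous average and the newest sample are needed, no history buffer.

```python
# Hypothetical O(1) exponentially weighted moving average: keeps only
# the previous average, no buffer of the last N samples.
class Ewma:
    def __init__(self, alpha: float):
        self.alpha = alpha   # smoothing factor in (0, 1]
        self.value = None    # current average; None until the first sample

    def update(self, sample: float) -> float:
        if self.value is None:
            self.value = sample
        else:
            # new_avg = alpha * sample + (1 - alpha) * old_avg
            self.value = self.alpha * sample + (1 - self.alpha) * self.value
        return self.value
```

With alpha = 0.5 each update halves the weight of all older samples, which is the "exponential" part of the name.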
@samrosewho 4 months ago
Oh damn… I didn’t realise this! Makes total sense though, and now I feel silly 😅
@roycrippen9617 4 months ago
Is that because, as you move into higher and higher frequencies, the number of samples representing the signal decreases? If I remember right, the simplest FIR low-pass is just the average of samples n and n-1. So as the number of samples per frequency decreases, the difference between the amplitudes of adjacent samples becomes much more significant. I never thought about it as an exponentially weighted moving average though, probably because I'm a dumb novice programmer and huge audio nerd lol
@jst1977 4 months ago
@roycrippen9617 One more fun fact about EMA: it is also used in LLM training. I don't think you're dumb, there is just a crap ton of knowledge in this field. EMA (exponential moving average) is an IIR filter that can be derived by discretizing a resistor-capacitor (RC) analog low-pass filter. The analogy that clicked for me was that the original signal pulls the moving average towards itself with a rubber band. Low frequencies have enough time to change the position of the average, while high frequencies kind of just wiggle it a little because they change too fast to attract the average. This analogy is also reasonably mathematically accurate.
@oshotz 1 month ago
@jst1977 first thing I thought of was the Adam optimizer! Funny how these things pop up in places you'd least expect
@khepin 4 months ago
If you're interested in load balancing, there's a fantastic talk by the CTO of Fastly on the topic. He shows that "random" is already a great algorithm that's hard to beat and shows some methods that perform better. I think "get 2 at random then pick the fastest" is one of the best algos they use. Talk is a bit old, so things may have evolved since too.
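The "get 2 at random then pick the best" idea mentioned above (often called "power of two choices") can be sketched in a few lines. This is a hypothetical illustration; real balancers compare latency or in-flight requests per backend.

```python
import random

# Hypothetical "power of two choices" sketch: sample two distinct
# servers at random and route to the one with fewer in-flight requests.
def pick_server(loads: list[int]) -> int:
    a, b = random.sample(range(len(loads)), 2)
    return a if loads[a] <= loads[b] else b
```

The appeal is that it avoids herding: only two servers are compared per request, yet the load distribution ends up close to what a full least-connections scan would give.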
@doc8527 4 months ago
Many technical terms like "load balancing" are not simple topics once you start to think about how they actually work. That's why I hate that the modern interview process for non-senior roles involves system design questions, which encourage interviewees to drop these terms as an unspoken rule: random junior/mid devs claim they can horizontally scale a system to handle billions of requests, in designs for nonsense cases, without considering tradeoffs or penalties. Everyone pretends they know this during the interview, but most (including the interviewers) don't without long research. In reality, it's often the person like me (not bragging that I'm a good dev, just an average, more pragmatic one) who can't pass those questions but has to implement the same stuff on real servers. Those who bragged they knew it during the interview? Their theoretical assumptions, without real past experience, failed at the very first step.
@BosonCollider 4 months ago
One benefit of rewriting your server in Rust or optimized Go is that you usually don't need a load balancer until the point where you need to distribute your database. Unless the load on your server is inherent to what it is doing, I really dislike the idea of having multiple servers that all access a shared database, especially if this means that requests from the same client can get reordered. Given how good server hardware has gotten lately, diagonal scaling + region sharding is just a much better path than horizontal scaling imho. With that said, PEWMA's ability to favour single servers is neat if you need to autoscale the number of pods.
@Ray-gs7dd 4 months ago
17:35 more backend stuff would be awesome. Thanks for the video :)
@BastianInukChristensen 4 months ago
Sweet! I needed to know which load balancer to use for my side project with 0 users, thanks for sharing!!
@weatherwaxusefullhints2939 4 months ago
Sometimes I ask myself what I'm looking for on YouTube. This is the answer.
@Pscribbled 4 months ago
There's a reason round robin is still the standard: it optimizes for availability first. Imagine one of your servers fails for some reason and starts immediately dropping requests. In this scenario, while the server has not yet been health checked, both the dynamic weighted round robin and the least connections algorithms will send all of the requests to the downed server. This is called black holing. I'm not familiar with the PEWMA algorithm, but it looks like it would fall victim to the same issue.

With respect to standard weighted round robin: generally you try to have homogeneous fleets. This makes your hosts' behaviour more predictable and makes it easier to extrapolate from load tests and steady-state performance. At a high enough RPS, the cost of your requests will generally become more or less homogeneous, so you can often assume an even distribution (unless you're using L4 load balancing).

Given that the two statements above are true, there's no real reason to use weighted round robin in your load balancing.
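Plain round robin, the standard discussed above, is simple enough to fit in a few lines. A hypothetical sketch with made-up server names:

```python
from itertools import cycle

# Plain round robin: every request goes to the next server in a fixed
# rotation, regardless of load, latency, or health.
servers = ["s1", "s2", "s3"]
rotation = cycle(servers)

def route() -> str:
    return next(rotation)
```

That simplicity is a large part of why it is the default in most load balancers.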
@samrosewho 4 months ago
There are strategies to avoid the kind of behaviours you’re talking about, but none of them do as well as round robin at minimising loss. I wouldn’t go as far as saying that it optimises for availability, but it does a good job at avoiding pathological behaviour. It’s also wonderfully simple and good enough for 99% of use-cases 👌
@miscbits 4 months ago
This article could have played 5d chess making you refresh to fully appreciate it and also reload some ads
@samrosewho 4 months ago
Where are you seeing ads on the post?
@miscbits 4 months ago
@samrosewho I'm not, I just thought the concept of doing that would be funny. I'll edit my comment because I think it wasn't clear that this was a joke
@samrosewho 4 months ago
@miscbits you had me worried 😅 thought they may have slipped in somehow. Thanks for the edit!
@db_2112 4 months ago
Really impressed he actually coded the examples!
@NateThompsontheGreat 4 months ago
Sam killed it on that post, and I appreciate your commentary. The visuals alone bring amazing clarity to an often misunderstood staple technology. Kudos Sam!
@samrosewho 4 months ago
Said thank you on Twitter but will say it here too: thank you 🙏
@BluntsNBeatz 4 months ago
This is an easy to understand hypothetical. Now if only I knew how to actually, practically get started implementing load balancing in real scenarios.
@InterFelix 4 months ago
The first step is getting your application ready for horizontal scaling. Does it tolerate reordered requests? Do you also need to scale your database? If so, how? How do you handle race conditions? There's probably a metric fuckton of additional important aspects I've omitted here (not a dev, I'm an ops guy).

If your application is ready, you can think about implementing load balancing. There's great FOSS load balancing software available, most notably HAProxy and nginx. Both work great and have their own idiosyncrasies, but you're probably already familiar with nginx because it's also a great webserver, so you can stick with that one. This is where you set the load balancing algorithm.

Your load balancer needs to be able to handle all traffic coming to your site, but its task is not very computationally expensive, so it will handle much more throughput at a given hardware spec than your webservers. So spec generously, but evaluate your production metrics so you're not wasting money on a completely overspecced load balancer.

You'll also probably want to make your load balancer redundant, so add a second one with floating-IP failover (through keepalived, for example). This way your load balancer is redundant, but you're also paying for a server sitting idle all of the time. If you need to scale your load balancer horizontally, you can always add more pairs and take advantage of DNS round robin to split traffic between them.

Alternatively, you can of course buy load balancing as a service through your hyperscaler of choice, or Cloudflare, or whatever. You can also buy load balancer appliances from vendors like Kemp, and a lot of enterprise firewall appliances have load balancer functionality built in. There are also ready-made software solutions available in case you don't want to build the nginx / HAProxy setup yourself and just want a nice setup process with a fancy GUI.
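For the nginx route described above, the balancing algorithm is set in the `upstream` block. A minimal hypothetical example (made-up addresses) that switches from the default round robin to least connections:

```nginx
# Hypothetical example: two app servers behind nginx, balanced with
# least connections instead of the default round robin.
upstream app_servers {
    least_conn;
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```

Removing the `least_conn;` line falls back to round robin; adding `weight=N` to a `server` line gives you weighted round robin.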
@urisinger3412 4 months ago
i can handle any load
@ChristopherCricketWallace 4 months ago
If only I could get to that level... Well done, Sam. BRAVO!!!!
@samrosewho 4 months ago
Thank you ❤
@JobStoit 4 months ago
That is a really really really good article. That's art! 👏👏
@samrosewho 4 months ago
Thank you ❤🙏
@jonasosterberg7517 4 months ago
Round robin is always my choice because of the predictability of load distribution. Fast doesn't always mean correct. Often a 404 or a 500 is faster than a 200.
@samrosewho 4 months ago
I got quite a few people pointing out to me that failures are faster than successes, and I should have mentioned that in the post. I did think about it at the time, but in practice these algorithms already account for it, and I didn't think it helped me achieve the goal of the post 😅 Round robin is a solid choice, and works fine up to much larger scale than most companies will ever achieve.
@jonny555333 4 months ago
True, but that's kind of not what the article showed. It showed that there are algorithms that are both faster and drop connections less often than round robin.
@smanqele 4 months ago
Conclusion: Stay away from figuring out LB. Adopt a solution and just pray!
@damonguzman 4 months ago
When did this channel become "Read-Along Blogs with Theo"?
@SharunKumar 4 months ago
I wouldn't know about these posts if not for this channel 🤷🏻‍♂️
@laserspike 3 months ago
Really nice work there - top marks to Sam. I'm guessing there must be a wee bit more to PEWMA than mentioned though, because the behaviour observed (not hitting the worst server with _any_ requests) apparently can't be explained by simply multiplying the live connection count by the weighted latency... If at any time you have zero connections to a server then you always get an answer of zero and you'd therefore choose that server some of the time (cos it's _at worst_ equal to every other server, but likely better than some of them).
@kingnick6260 4 months ago
This article was written with plenty of love
@samrosewho 4 months ago
I’m glad it shows. My in-progress post is probably the most love I’ve poured into any of them so far. Should be out some time in the next month.
@RemotHuman 4 months ago
this is more interesting than sorting algorithms
@asaurcefulofsecrets 3 months ago
This assumes that all request events are independent. In real life they rarely are. For processors operating on cacheable data, sticky sessions help hit the cache more often. I am surprised that's not even mentioned; it's queueing theory 101. Like page 1, paragraph 1: "Let's assume independent events, exponentially spaced in time, also called Poisson-distributed traffic, blah blah blah." OK, cool. But what if they are not?

The balancer does not know the events are related, *but it knows the source*. If they come from the same source and close enough in time, it can assume they are related and send them to the same processor, which will take less time to serve them overall because it only has to retrieve the required data once. This is not too complex a strategy. Coupled with a simple round robin, weighted or not, it usually gives better results than any of the strategies described here (I don't think I need to remind anyone that IO is almost always the latency killer).

Of course, any real balancer also limits the number of concurrent connections per processor and tracks timeouts to determine healthiness, effectively achieving some of the effects of the weighted RR and min-connections techniques described here, at the cost of some drops. On top of that, it may also support periodically probing a health endpoint on each processor, with specific settings regarding minimum response time, deciding on processor healthiness based only on that out-of-band predetermined request.
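The source-affinity idea described above can be sketched roughly like this (hypothetical code; real balancers also bound connections per backend and handle backend failure):

```python
import hashlib

# Sticky routing sketch: hash the client address so requests from the
# same source keep landing on the same server and can hit its warm cache.
def sticky_pick(client_ip: str, servers: list[str]) -> str:
    digest = hashlib.sha256(client_ip.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]
```

The tradeoff is that a pure hash ignores load: one chatty client can pin a disproportionate amount of work to a single backend, which is why real implementations combine affinity with connection limits.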
@sumshitteinnit8484 4 months ago
You're masking a lisp?? You're doing it extremely well in that case
@skylark.kraken 4 months ago
As a backend person I'm definitely up for you doing more backend and ops stuff, you seem to be really good at finding things like this
@Pekz00r 4 months ago
Great article and great video! Great job to both Theo and Sam. More backend topics would be great!
@samrosewho 4 months ago
Thanks so much 🙏
@tobiasfedder1390 4 months ago
That is a great blog post. Also, I'd love to know more about load balancers, especially how it works with multiple active load balancers: same IPs for multiple servers, heartbeats, and so forth. I tried to read about it but I just cannot grasp it.
@balaclava351 4 months ago
Soon we're gonna need load balancers for the load balancers. Xzibit meme anyone?
@samrosewho 4 months ago
Multi-tier load balancing is quite common in practice! The big companies need quite a few levels, at different layers in the stack, to achieve their scale.
@dimitriborgers9800 4 months ago
Noob question: is the request queue something that comes out of the box with a server, or is that something like RabbitMQ?
@poweron3654 4 months ago
Load balancing when the load balancer goes down or you have to do session pinning 😭😭
@dandigangi_em 4 months ago
Just discovered that GCP has a central region. AWS needs to get it!
@ErazerPT 4 months ago
Guess the next step up would be an LB that can actually figure out which requests "naturally" take longer, as not all requests are equal a priori. It would then prioritize sending the most expensive requests to the best available server. Can't prove it off the bat, but I'm pretty sure that on a fully loaded system with a somewhat even mix of requests, this would pretty much distribute everything to the point where any request would be close to the "average response time" line.
@samrosewho 4 months ago
A really tricky problem in practice, to the point where I’ve never seen it done. I very nearly didn’t cover weighted round robin because it’s so impractical to rely on humans to judge the cost of anything. Closest I’ve seen is splitting your API endpoints out into groups of “slow”, “medium” and “fast” and treating those buckets differently.
@ErazerPT 4 months ago
@samrosewho I was thinking about the API too when I wrote it. But, as you said, it's "impractical to rely on humans", and we now have something that is sort of good at making predictions; it just needs a lot of data, and well... this is precisely the kind of data we can synthetically generate ;)
@Malix_Labs 4 months ago
Handle any hard load with another guest
@benschmaltz5789 4 months ago
More backend brother. Full stack programmers, we ride at dawn
@johnnygri99 4 months ago
We must make Sam explain all the things.
@samrosewho 4 months ago
The mitochondria is the powerhouse of the cell.
@johnnygri99 4 months ago
@samrosewho 🤯
@gro967 4 months ago
How is UploadThing even a product? This is the kind of project we did over a weekend in university; anyone with more than 2 days in IT can easily build it in no time...
@CubaneMusic 4 months ago
Can you overload your load balancer or is that not really a concern?
@the-real-random-person 4 months ago
Impressive video, I learnt a lot from it :) thanks man, keep it up ❤
4 months ago
Obnoxiously easy to understand! Have no other words other than PERFECT! ❤❤❤❤🎉🎉🎉
@samrosewho 4 months ago
You flatter me! ❤
@rickdg 4 months ago
If your servers are stateless (PHP says hi), you can just scale horizontally as needed and the load balancer can just default round robin as the servers are identical.
@samrosewho 4 months ago
I talk about this in the post, but be careful assuming your servers are identical! Odds are they aren’t, especially if you’re using VPSs in a cloud provider. Even machines in the same instance class can vary. One of the things I’d love people to take away from this post is that with minimal effort (usually 1-2 lines of nginx config or whatever) you can do quite a bit better than round robin.
@rickdg 4 months ago
@samrosewho Thanks for the reply. Have you noticed considerable differences when spawning identical VPSs? Depending on your stack, each server can be really simple. Usually the bottleneck is then the database server, which is a whole different story.
@samrosewho 4 months ago
@rickdg I've seen non-trivial differences in servers using exactly the same hardware, it can be pretty wild.
@rodjenihm 4 months ago
Shared VPSs with the same specs can vary a lot. Only with dedicated servers can you "bet" that they are equally powerful. But dedicated ones are way more expensive.
@asaurcefulofsecrets 3 months ago
The requests are not, even for the same endpoint/entity. Example: list items on account X is light with 0 items. I add 100 items, then it is not. I re-query immediately on the same instance and it is light again, because it is cached.
@TheTmLev 4 months ago
Consistent hashing sometimes matters much more than any other load balancing algorithm
@samrosewho 4 months ago
I have a post about consistent hashing that’s in draft. I haven’t really seen it used in the load balancing space, it’s usually put to work in data sharding from what I’ve seen.
@TheTmLev 4 months ago
@samrosewho hey Sam! It was used extensively in a paper about Maglev, Google's distributed load balancer: static.googleusercontent.com/media/research.google.com/en//pubs/archive/44824.pdf
@TheTmLev 4 months ago
@samrosewho hey Sam! It was used extensively in Maglev, Google's distributed load balancer. Can't link the paper, unfortunately, since YouTube deletes comments with URLs for some reason, but it should be easy to find. Search query: "A Fast and Reliable Software Network Load Balancer"
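For readers curious what consistent hashing looks like in code, here is a classic hash-ring sketch (a hypothetical illustration; Maglev itself uses a different, table-based consistent hashing scheme). Each server gets many points on a ring, and a key routes to the first server point at or after the key's hash, so removing one server only remaps that server's share of keys.

```python
import bisect
import hashlib

def _h(s: str) -> int:
    # Stable 64-bit hash derived from SHA-256.
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

class Ring:
    def __init__(self, servers: list[str], points: int = 100):
        # Place `points` virtual nodes per server on the ring,
        # sorted by hash value.
        self._ring = sorted(
            (_h(f"{srv}#{i}"), srv) for srv in servers for i in range(points)
        )
        self._keys = [h for h, _ in self._ring]

    def route(self, key: str) -> str:
        # First virtual node at or after the key's hash,
        # wrapping around the end of the ring.
        idx = bisect.bisect(self._keys, _h(key)) % len(self._ring)
        return self._ring[idx][1]
```

The virtual nodes smooth out the distribution; with only one point per server, a single unlucky hash could give one server most of the keyspace.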
@Wielorybkek 4 months ago
that was so cool!
@YuriBez2023 4 months ago
"Called called" bug patched and deployed.
@AveN7ers 4 months ago
How many times did you change the title of this video? 🤣🤣
@John_Versus 4 months ago
Typos like "called called" tell you that it was actually written by a human. 😄
@samrosewho 4 months ago
I was so annoyed when it was pointed out. I re-read this post at least a dozen times 😅
@PureGlide 4 months ago
It would be ironic if Sam's website was overloaded. Opportunity missed haha
@samrosewho 4 months ago
It’s a static site on GitHub pages. If it ever got overloaded it’d be an excellent day. 😁
@yoskokleng3658 27 days ago
Do you have a real practice video?
@SandraWantsCoke 4 months ago
That article is tits!
@samrosewho 4 months ago
Thank you 🙏
@shadyworld1 4 months ago
WOW
@edumorangobolcombr 4 months ago
Hello night owls
@jmatya 4 months ago
Hello night people from the US. You know timezones. People also watch him from Europe 😉
@penewoldahh 4 months ago
hello night person from the US
@the-real-random-person 4 months ago
Am I so early 😅 still gotta watch the vid lol
@samuelgunter 4 months ago
sam who?
@VicioGaming 4 months ago
so you're again reacting to stuff other people made without really providing anything new? what a surprise
@samrosewho 4 months ago
I wrote the post and I’m ecstatic Theo used his reach to introduce more people to my work 🙂
@VicioGaming 4 months ago
Sure, but he still didn't provide anywhere near enough value for this video to be justified. The majority of this video is just him reading word for word what you wrote. He could've gone out and gathered more blog posts/other materials like yours, shown us parts of those, and provided links in the description. But that's too much work, and that's why this video is what it is
@nickmoore5105 4 months ago
@VicioGaming and yet you've watched it and you are commenting on it
@VicioGaming 4 months ago
@nickmoore5105 As if that changes anything. I watched it to the end to see if Theo actually brings value with this video and, what a surprise, he doesn't
@INDABRIT 4 months ago
RIP any red/green colorblind viewers
@samrosewho 4 months ago
I knowwwww, I have been putting effort into fixing this in subsequent posts. Sorry if this is something that made the post difficult for you. It’s not that I don’t care, it’s that I’m not very good at this yet 🙈
@INDABRIT 4 months ago
It was actually an awesome explanation, with really nice visuals. I wouldn't know a better way to show the difference without colors either. And I'm not a color expert to pick new colors; I just know red/green is a pretty common colorblindness (if that's a word?)
@samrosewho 4 months ago
It is! Chrome has tools for emulating different types of colourblindness that I didn’t know about at the time, but I use on all my posts now. I’m starting to branch out into using different shapes and patterns as well, so as not to rely solely on colour 😁
@abdelmananabdelrahman4099 4 months ago