AWS Fooled Devs & Sabotaged The Industry | Prime Reacts

Views: 213,897

ThePrimeTime

1 day ago

Recorded live on twitch, GET IN
/ theprimeagen
Reviewed video: • Matteo Collina on how ...
By: Changelog | / @changelog
MY MAIN YT CHANNEL: Has well-edited engineering videos
/ theprimeagen
Discord
/ discord
Have something for me to read or react to?: / theprimeagenreact
Kinesis Advantage 360: bit.ly/Prime-Kinesis
Hey, I am sponsored by Turso, an edge database. I think they are pretty neat. Give them a try for free, and if you want you can get a decent amount off (the free tier is the best, better than PlanetScale or any other)
turso.tech/deeznuts

Comments: 563
@k98killer 7 months ago
Technically, going from 0 users to 1 user is an infinite growth rate. VCs will cream their pants over infinite growth.
@lylyscuir 7 months ago
Since the relative growth rate formula requires dividing by the previous value and division by zero is undefined, the relative growth rate from 0 to 1 is undefined. However, I doubt VCs can do math, so the "infinite growth rate" strat may work.
@MrTerribleLie 7 months ago
@@lylyscuir unless you evaluate the limit instead of doing the division.
@JeremyAndersonBoise 7 months ago
Based af
@joshix833 7 months ago
Just use JS: node -e "console.log((1 - 0) / 0)"
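That one-liner prints `Infinity`, which is the IEEE-754 floating-point answer; exact arithmetic (and Python's integer division) calls it undefined. A tiny sketch capturing both positions in this thread (the `relative_growth` helper and its "infinite growth" convention are my own invention, matching the joke above):

```python
def relative_growth(prev, curr):
    """Relative growth rate (curr - prev) / prev, as a fraction.

    Mathematically undefined when prev == 0; here we follow the
    VC convention from the joke and call growth from 0 "infinite".
    """
    if prev == 0:
        return float("inf") if curr > 0 else None
    return (curr - prev) / prev

print(relative_growth(100, 150))  # 0.5, i.e. 50% growth
print(relative_growth(0, 1))      # inf
```

Plain `1 / 0` in Python raises `ZeroDivisionError` rather than returning `Infinity`, which is the "undefined" camp's point.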
@cookiecrumbzi 7 months ago
@@MrTerribleLie Except that limit doesn't exist
@kamilkardel2792 7 months ago
AWS's own training materials from a few years ago (I'm not sure how old the course was) give the following use case for Lambda: a function that updates a product database when an employee uploads a spreadsheet, run once in a while. Rather far from running an entire application this way.
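That use case is easy to picture as code. A minimal sketch of such a function, assuming a Python runtime and a made-up event shape (a real Lambda would receive an S3 ObjectCreated event and write to a real database; the `csv` field and the column names here are illustrative):

```python
import csv
import io

def handler(event, context=None):
    # The employee's spreadsheet arrives as CSV text in the event
    # (a stand-in for fetching the uploaded object from S3).
    rows = csv.DictReader(io.StringIO(event["csv"]))
    updates = [{"sku": r["sku"], "price": float(r["price"])} for r in rows]
    # A real function would apply `updates` to the product database here.
    return {"updated": len(updates)}
```

Short, stateless, and event-triggered, run once in a while: exactly the shape Lambda was originally pitched for.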
@disguysn 7 months ago
I do not understand why people are trying to replicate monolith applications in lambda.
@AhmadShehanshah 7 months ago
@@disguysn SERVERLESS (Business loves this word)
@mma93067 7 months ago
Lambdas should never be in that hot path …
@-Jason-L 7 months ago
I have run startups with over a $100 million valuation on AWS Lambdas, and our AWS bill was under $20.
@user-cr3dn9vt6h 7 months ago
​@@-Jason-L need more words for this to add up
@FQAN17 7 months ago
Speaking as someone that grew up on a computer with 4K of RAM I can absolutely agree with this. Because everything - compilers, code development, pipelines, hardware all got faster everyone stopped caring about quality of code. It became focused on “lines of code” measurement instead of the efficiency. Very happy to see some still value it.
@ivonakis 7 months ago
The business expects it to be done by Friday, not to have latency under 200ms.
@tadghhenry 7 months ago
@@ivonakis Those 200ms add up
@YunisRajab 7 months ago
@@tadghhenry technically inept management doesn't get this and devs just roll over and accept the status quo
@erikbaer4472 7 months ago
4K? Good times. I was on 1 MB of RAM at the beginning, working the whole summer to get that beautiful Amiga 600 RAM edition.
@101Mant 7 months ago
@@tadghhenry Depends what the software is doing. Sometimes it matters, but many times it does not. For some apps, having a dev spend a couple of weeks optimising costs more than you will save. For others, performance is critical.
@purdysanchez 7 months ago
Former Amazonian here. Within the company people just throw code into lambdas, even if each transaction takes multiple seconds (like 7+ seconds). "But it scales, bro" This doesn't even take into account the added complexity of having to add extra layers to support statically linked libraries that aren't included in the lambda stack.
@PaulSpades 7 months ago
He completely misses the initial point: THE CLOUD PROVIDERS don't care about performance at all, because YOU get charged for performance, so THEY don't optimize anything. And it's in their interest that YOU ALSO don't care about performance, so they get all the moneys. The worse their tech runs, the more you pay. The worse your tech runs, the more you pay. The more you fail to configure something in their preferred convoluted way, the more you pay. The more of their products you use, the more you pay. The harder it is to use somebody else's service with their service, the more services from them you use, the more you pay.

Also, you don't need async anything to have concurrent events in any language or system. You need procedures, stack frames, and queues - that's it; you can build an event system with those. Ask game programmers.
@Bokto1 7 months ago
Yeah, it was strange that Prime totally missed it. Also: cloud providers are incentivised to run their hardware as underclocked as possible.
@PaulSpades 7 months ago
@@Bokto1 Yeah, the only optimizations they benefit from is on hardware power consumption and cooling versus numbers on the dashboard. Since you're paying for CPU time and cores, they might as well move your code to 256 core arm chips running at 20w. Which is (or might be) good for the environment, but not for your pocket, when you need 50 cores to do the same job a quadcore system used to do.
@rapzid3536 7 months ago
He didn't miss it, it's just a child's understanding of the incentives involved. If you can't also list some incentives AWS has to care about performance you have no real understanding of their business.
@GackFinder 7 months ago
This also applies to a lot of PaaS offerings, such as Azure Logic Apps. You wanna pay 10x the money for every CPU cycle? Look no further than Azure Logic Apps.
@PaulSpades 7 months ago
@@rapzid3536 Why don't you list two such incentives?
@-Jason-L 7 months ago
Newsflash: most companies don't need to worry about web scale. I worked for a startup that did $10 million a year, and our AWS serverless bill was under $20 per month. We scaled to 3 states in the health services industry. We could scale to the entire US and not break a few hundred dollars. That would have been close to $200 million ARR. That is unicorn-territory valuation, on a few hundred in infra costs.
@LusidDreaming 7 months ago
I work with a system that is mostly serverless (aws lambda) right now, and my take is that it actually forces us to hyperoptimize every function because of how much your cost is directly related to performance. 50ms extra latency per request when using containers? Probably not a huge issue. But 50ms per request when you are literally paying for every extra ms of runtime? Its a huge cost implication if you are dealing with millions of requests per hour. Also, memory leaks do matter despite a lot of contrary info out there, since lambdas that stay "hot" will share memory. We actually had a slow memory leak take down one of our lambdas in production.
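The cost intuition in this comment is easy to sanity-check with a back-of-the-envelope model. The per-GB-second price below is a commonly cited Lambda figure, not a quote from current AWS pricing, and the model ignores the per-request fee and free tier:

```python
PRICE_PER_GB_SECOND = 0.0000166667  # illustrative Lambda rate; check AWS pricing

def monthly_compute_cost(requests_per_hour, ms_per_request, memory_gb):
    """Rough monthly Lambda compute cost for a steady request rate."""
    seconds_billed = requests_per_hour * 24 * 30 * (ms_per_request / 1000.0)
    return seconds_billed * memory_gb * PRICE_PER_GB_SECOND

# 1M requests/hour at 512 MB: what do 50 ms of extra latency per request cost?
extra = (monthly_compute_cost(1_000_000, 100, 0.5)
         - monthly_compute_cost(1_000_000, 50, 0.5))
print(round(extra))  # roughly $300/month for those 50 ms
```

So at millions of requests per hour, shaving tens of milliseconds pays for itself quickly, which is the hyperoptimization pressure described above.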
@farrongoth6712 7 months ago
Also, no matter how much you optimize, there is an inherent performance hit that you can't optimize away, and you're paying for their inefficiencies.
@LusidDreaming 7 months ago
@@farrongoth6712 I agree. It definitely limits you on options to optimize too, since you have to treat every request as an individual execution. There is some fancy stuff you can do with layers/extensions, but its still limited. I don't think serverless functions are bad, in fact I think they're a great tool. But, like everything, I think too many people are treating it as a silver bullet. There are definitely use cases for them, but I'm currently living the pain of going all serverless.
@Qrzychu92 7 months ago
What is the reason to run this as Lambda and not containers? It seems like the amount of time you invested into making your code run cheaper could have been spent on setting up a k8s cluster, even with autoscaling if your traffic is variable.
@head0fmob 7 months ago
@@Qrzychu92 With containers, nodes can occasionally go down, but it's very rare for Lambda to fail.
@LusidDreaming 7 months ago
@@Qrzychu92 one example where I work is an overnight batch processing job. We have 16 hours every day where we need 0 instances and latency/startup is not a huge factor as it is not a customer facing service. Lambda makes it easy to simply maintain the python script used to process the data. No need to manage a kubernetes cluster just to run a script. With lambda we get the whole runtime provided for us. And devops is basically nonexistent because it just works. Assuming containers are always a better option is the equivalent of assuming serverless is always a better option. They're just two tools with two different sets of tradeoffs.
@gbb1983 7 months ago
Thing is: 80% of people in the industry work for small/medium companies where performance is not that important due to the low number of concurrent users. They only start to think about performance when it's too late: too many users onboarded and things going south. Some companies will never get to this level, btw.
@Alien-fv9gd 6 months ago
Never optimise too early. And if it's too late, you already made it.
@Rick104547 6 months ago
Yes, but we do need microservices in everything because it's the future! Everything needs to be infinitely scalable, even if you have 0 users.
@pikzel 5 months ago
Indeed. I started my career long before the cloud and no-one cared about performance even back then. It was long forgotten after we got 8MB RAM.
@lashlarue7924 1 month ago
My little company qualifies absolutely. We're so barebones and basic that the value of cloudy services that automate our workloads just completely stomps the cost of a few hundred lambda calls every other day.
@bkucenski 7 months ago
I paid $900 for an i7 Dell recently. Tons of power. It would cost at least $900 per month to have the same power on AWS. And I pay $80 per month for a 1GB fiber connection. You'd think people would be more concerned with costs if they're paying for AWS. Companies love to throw away money on everything but salaries.
@brainites 7 months ago
I couldn't agree more. I have been leading migrations of applications off these expensive cloud providers, and CEOs are surprised at the cost saved. It is the economic downturn that has led some companies to realize how they have been taken advantage of.
@connoisseurofcookies2047 4 months ago
It really depends on what your infrastructure requirements are. If high availability, local and regional accessibility, multiple redundant services, etc. are a must, services like AWS are actually quite cheap and affordable. AWS, for example, claims to offer 11 9's worth of availability, which would be monstrously expensive for a smaller or intermediate company to achieve. If we're analysing your setup, it's a single server with a single point of failure on the network. You might be able to safely run some web servers and a database on that, which is completely acceptable if your tolerance for downtime is around 2-7 days a year and you have your backups in order. But imagine adding redundant internet gateways, redundant ISP providers, electrical redundancy, redundant distribution and access switches, multiple hypervisors configured for HA, etc. The costs would easily run hundreds of thousands, even millions, and because of warranty issues these costs would be recurring every 5-10 years. Cloud providers are clearly a poor choice for smaller business entities and single developers, but claiming that using them is 'throwing away money' is quite asinine.
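The "nines" in this comment translate directly into allowed downtime, which makes the cost trade-off concrete. A quick calculator (the helper name is my own; note that AWS's famous eleven nines is S3's *durability* figure rather than an availability SLA, but the arithmetic is the same):

```python
def downtime_hours_per_year(nines):
    """Allowed downtime per year for N nines of availability."""
    unavailability = 10 ** (-nines)
    return unavailability * 365 * 24

# three nines: ~8.8 hours/year; five nines: ~5.3 minutes/year
for n in (3, 5):
    print(f"{n} nines -> {downtime_hours_per_year(n):.4f} h/year")
```

Each extra nine cuts permitted downtime by 10x, which is why the jump from "2-7 days a year is fine" to a multi-nines SLA is where the big infrastructure money goes.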
@OpinionatedSkink 4 months ago
You're not buying servers. You're buying SLAs.
@outwithrealitytoo 5 days ago
@@connoisseurofcookies2047 All spot on... I love that you put numbers on some of the risks being accepted. I once worked for a company that sold five nines when the telco only promised three nines. Our customers' management didn't know what it meant... our management didn't know what it meant. It had a few hours of unscheduled downtime each month; no one got sued. A lot of management get high on their own hype and sense of importance: "2-7 days of downtime? We have to be five nines!" Do you? Do you really? Do you know how much that would cost if we actually did that? Even with Office365 being Office 363, no one cares. BUT most software people don't understand that MTBF means "within the warranty period" and believe their SSDs will be found fully working by archaeologists, so let's not beat up on management too much. Paying for the whole buffet when you just want a bit of salad is foolish, though!
@br3nto 7 months ago
The push to AWS, Azure, etc, is basically the same as choosing to be a renter vs a home owner. There’s pros and cons to each.
@Sonsequence 6 months ago
If you were talking about renting VMs with ECS I'd agree but we're talking about lambda here. That's more like deciding to live in a restaurant. Heyyy you want a cherry on that sundae, just click your fingers and order it. Missing sprinkles? Just keep ordering. That's a company card you're using right? Don't worry about the bill sir.
@truehighs7845 5 months ago
@@Sonsequence That is why economies of scale increase, then decrease... 🤣🤣🤣
@KiraIRL 4 months ago
@@Sonsequence Spot on. I used to be a solutions architect for AWS, and the majority of our customers start their footprint with one service to solve some critical business problem, then end up having most of their infrastructure on the cloud, using services they don't even need. We encourage our customers to adopt more and more services, exactly like your restaurant analogy. And then when spending becomes a problem but they still want to use more services, we issue EDP credits, free proofs of concept, etc.
@Sonsequence 4 months ago
@KiraIRL wow that's a gem bit of insider knowledge
@siriguillo 7 months ago
Elixir, as a language, is nothing out of the ordinary; it is the Erlang VM that is insane. People learning Elixir should focus on understanding the VM. If you only learn the language's syntax, you won't understand why Elixir gets all the praise it gets.
@JeremyAndersonBoise 7 months ago
Straight, no chaser, this is why Elixir matters.
@perc-ai 7 months ago
Elixir is probably one of the best languages ever made
@majorhumbert676 7 months ago
What is special about the Erlang VM?
@madlep 7 months ago
@@majorhumbert676 “The Soul of Erlang and Elixir” by Sasa Juric is a great primer on what there is to love kzfaq.info/get/bejne/gNxyh5eJp8rThXk.htmlsi=tIpPXVbKgaqEIQjJ
@siriguillo 7 months ago
@majorhumbert676 Well, I'll try to explain, but it's too much for a KZfaq comment. As a summary: it's like Kubernetes in a box, it's like an OS, it has something like Redis inside, it has pubsub, it has service discovery and a consensus mechanism. All of that is part of the standard library and available to the developer through simple, elegant abstractions. I use Go at work, and achieving all the functionality the Erlang VM has out of the box would require a monumental effort: Raft, Consul, Kafka, etc. It's too much to explain in a short KZfaq comment.
@z34d 7 months ago
We are literally 2 devs, one backend and one fullstack. We simply used Docker Swarm; it cost him about two weeks to learn and set up. It runs perfectly and can easily scale up and down.
@nekoill 7 months ago
What boils my piss the most is that, thanks to AWS, everyone assumes you're talking serverless when you mention lambdas, when in fact lambdas have been about anonymous functions for ages, and I have no idea why the fuck Amazon thought it was appropriate to steal that name for their bullshit. Thank god at least buckets don't remind most people of object storage & shit.
@mudnutz 4 months ago
I mean just look at what they’ve done with open source projects. Amazon doesn’t give a fuck because they have the money to do what they want I guess.
@kneelesh48 3 months ago
Think of the name Alexa lmao
@nekoill 3 months ago
@@kneelesh48 facking kew
@lashlarue7924 1 month ago
Buckets of shit? 🪣💩 Maybe Bitbucket == Shitbucket?
@johnathanrhoades7751 7 days ago
I, um, I will now be using the phrase “boils my piss” as opposed to “grinds my gears” 😂
@codeline9387 7 months ago
Ruby does not serve one request per process (actually one request per two processes), and it can be customized with async/await like Rust.
@yellingintothewind 7 months ago
Python has OS-level threads, but most Python implementations (IronPython and Jython excepted) only ever let one thread run at a time. They actually end up _slower_ on multicore machines the moment you introduce threading, because of lock-contention issues. That said, it does have first-class support for green threads at this point, in the JavaScript async/await style. And it has had promise-like async networking since 1999 or thereabouts. You are also absolutely correct that there's a heavy expectation that you just know the libraries to do stuff. "import antigravity"
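The "one thread at a time" claim is easy to demonstrate from the other direction: Python threads overlap just fine when the work releases the GIL, as blocking I/O does. A small sketch using `time.sleep` as a stand-in for a network wait:

```python
import threading
import time

def io_task():
    time.sleep(0.1)  # sleep releases the GIL, like a socket read would

start = time.monotonic()
threads = [threading.Thread(target=io_task) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
# Ten 0.1 s "I/O waits" overlap: total is ~0.1 s, not ~1 s.
print(f"{elapsed:.2f}s")
```

Replace the sleep with a pure-Python compute loop and the threads serialize on the GIL, which is the lock-contention slowdown described above.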
@marioprawirosudiro7301 7 months ago
Never expected to see the Python import meme to show up here.
@dylanevans2732 4 months ago
Yup, I had to stop watching after the Python bit; it's rare to hear so many incorrect things at once. Also, Celery is not $&#&$@ threading.
@thecollector6746 4 months ago
...but that doesn't really matter, as Python (CPython) doesn't really support parallelism. Concurrency: yes. Parallelism that would actually make that support for concurrency useful: no.
@yellingintothewind 4 months ago
@@thecollector6746 That's just about as untrue as it meaningfully can be. First, the restriction on running threads only applies to threads holding the GIL. Numpy, OpenCV, and most libraries written on top of PyBind11 or similar don't hold the GIL while doing heavy computation. You can even run CUDA (or OpenCL) kernels fully async, without blocking the GIL. Second, python does support full parallelism as well as most javascript implementations do (WebWorkers), the restriction is data must be serializable and can only be shared by explicit message passing. It actually is ahead of javascript in that you can share sockets between peers, so if you want to handle orthogonal connections through the same listen port, it's trivial to do so. (Also, the subinterpreter system will let this resource sharing work within the same process, but that's a little ways off still). Third, concurrency is incredibly useful _without_ full parallelism. That's why python grew its concurrency model before multi-core CPUs existed outside enterprise environments. It lets you work on computations while waiting on data to load from spinning rust (or tapes). It lets you multiplex connections without needing to duplicate your stack per request. And it makes it easier for the programmer to reason about certain classes of problems.
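The "full parallelism with explicit message passing" described in that second point is the `multiprocessing` module. A minimal sketch (the `square` worker is illustrative):

```python
import multiprocessing as mp

def square(n):
    # Runs in a separate process with its own interpreter and its own GIL.
    return n * n

if __name__ == "__main__":
    # Arguments and results are pickled and passed by message between
    # processes, which is the serialization restriction mentioned above.
    with mp.Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The `__main__` guard matters: with the spawn start method (Windows, macOS), child processes re-import the module, and the guard keeps them from recursively creating pools.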
@thecollector6746 4 months ago
@@yellingintothewind Except for the fact that I am 100% correct. Cry harder. You wrote this wall of bullshit just to say that I am right, but you didn't like what I said.
@Mister5597 7 months ago
Serverless is great. We needed to extract some data from a PDF file that the PHP library we normally use couldn't get, so we made a tiny gcloud function in Python, just to use a completely different PDF library that we can call via an HTTP request.
@Kane0123 7 months ago
Triggered by a timer or an HTTP call, simple to work with, easy to lock down the scaling. The perfect tool for many small bits of work I do.
@Vim_Tim 7 months ago
0:55 He was referring to _Amazon's_ disincentives to invest in performance, not the user investing in performance.
@jamesclark2663 7 months ago
As a person that has only ever programmed video games and generally has no clue how real software works I do always find it interesting when people talk about handling events or requests that are only in the triple or quadruple digits. Yet at the same time these are for infrastructures that handle millions of active users daily. It really shows just how vast the realm of software development can be and how massively different project requirements are.
@david0aloha 5 months ago
As a game developer, your programs are doing far more computing than most devs using AWS. They're doing stupid amounts of serialization and network transfer for multi-second HTTP requests, while you're computing physics or game logic on 10s to thousands of objects 30-60 times per second. Or maybe the game engine is, but my point is that your application is actually doing something that's compute intensive, not passing trivial requests with layers of extra shit in-between like most web apps. Learn the networking stack and HTTP and you too can start passing trivial HTTP requests through an excessive number of layers. Or you will see the value in other protocols, like web sockets, and then you can write browser games with multiplayer.
@jamesclark2663 5 months ago
@@david0aloha I mean, potato / potahto. Most of the extra crap those truly massive companies have to deal with is just to help with problem of how big they are. On the other hand those tiny companies that want to pretend they are google could probably learn a thing or two from your comment! As for me - I won't lie. I'm a dogshit programmer. If I need speed then I find clever smoke n mirror tricks to simply hide the fact that I stop doing any work at all. If I can't hide it then I eschew all of the modern amenities of new programming languages just for the sake of cache coherency and avoiding heap allocations in a managed language. If I still need more I reach for the hammers that are my preferred threading and SIMD libraries. Or if I can get away with it, I dump it all on the gpu. Either way I've never solved a problem by actually writing a smarter algorithm. *Every* single algorithm I've ever written in twenty years has always been a for-loop or a nested for-loop.
@datguy4104 7 months ago
Honestly worrying about "infra" when you're a startup is an oxymoron because a $5 VPS can very easily handle like 10-100k users depending on what the app does so long as it's in a moderately performant language like Go, C#, Java. I think the "need" for this is caused by using languages that just simply shouldn't ever be used on the backend for apps that plan on growing massively like JS/TS.
@gto433 7 months ago
What JS syntax can cause huge tech debt?
@LtSich 7 months ago
And you don't need to hire a full-time infra admin... You can work with a company and pay only for managing/deploying the infra. It's literally my job, and the company pays far less paying me to deploy bare metal than paying for the same service on AWS. With services running far better, and with better control over their data and systems... But a lot of companies prefer to pay AWS, because you know, "the cloud is the future", "bare metal is so old school"...
@CamembertDave 7 months ago
Ah, but if you make a load of bad design decisions in pursuit of "being scalable", then the majority of dev time needs to be spent on infra, which then justifies itself due to the horrendous performance of the system even as a startup with well under 1k users.
@datguy4104 7 months ago
@@gto433 The language's performance is the bottleneck, not the syntax.
@michaelgabriel1069 7 months ago
@@gto433 It's not any one particular thing; it's the general way the GC functions, how easy it is to accidentally make an object long-lived, the general ideology of copying memory as much as possible (i.e. people using spread syntax when they could just append to an existing array), and the single-threaded nature of Node limiting the max concurrent requests too much. Don't get me wrong, it's cool that we can use the same language for front-end and back-end, but most Node backend apps could have been written in Golang with the same velocity while gaining a bunch of performance for free.
@pinoniq 7 months ago
Lambda is meant to be event-based, and it allows fine-grained control over how many events get processed per call, so you can process multiple events at a time. Most people, however, use it with API Gateway, where it's always 1 event at a time, and that's just not what it is built for.
@brokensythe 6 months ago
Given that it is event-driven, how can more than one event be processed at a time? I believe you can send a large payload to a Lambda to be processed all at once, like a kind of batch processing, but it's still just the one event.
@pinoniq 6 months ago
@@brokensythe When you e.g. feed events in from SQS, you can configure the number of events to be sent together. The only place where you can't do that is through API Gateway, for obvious reasons.
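A sketch of what that batch consumption looks like on the Lambda side, assuming a Python runtime and the standard SQS event shape (a `Records` list with a `body` per message); the per-message processing here is a placeholder:

```python
def handler(event, context=None):
    # One invocation receives up to `batch size` messages from SQS,
    # so per-invocation overhead is amortised across the whole batch.
    processed = []
    for record in event.get("Records", []):
        processed.append(record["body"].upper())  # stand-in for real work
    # An empty failure list tells SQS the whole batch succeeded.
    return {"batchItemFailures": [], "processed": processed}
```

The batch size itself is configured on the event source mapping, not inside the function, which is why API Gateway (always one request, one event) can't use it.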
@saemideluxe 7 months ago
Regarding Python/Django: Running it with the dev-tools will by default be a single process that can handle a single request at a time. For production you commonly use a wsgi middleware instead (uwsgi, gunicorn, etc.) which will allow for enabling multiple processes and threads etc. Celery has nothing to do with that. Celery is a background task queue and needs its own processes for that.
@thekwoka4707 7 months ago
But that's mostly just running multiple instances of your application, which is a bit different from the application itself handling multiple requests.
@saemideluxe 7 months ago
@@thekwoka4707 Depends on what you mean by "multiple instances". What I did not mention yet is that Django is now getting decent async support, so there it would handle multiple requests in a single instance, I guess. Although it might depend on how exactly you define "instance". But apart from that, personally I would even consider uwsgi with multithreading enabled to be "multiple requests per instance", at least from a sysadmin perspective.
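For the record, the process/thread knobs discussed here live in the WSGI server, not in Django itself. A sketch of a `gunicorn.conf.py` (gunicorn config files are plain Python; the values are illustrative starting points, not recommendations):

```python
# gunicorn.conf.py -- sketch of a production setup for a Django/WSGI app
import multiprocessing

bind = "0.0.0.0:8000"
# Worker processes sidestep the GIL; 2*cores + 1 is a common starting point.
workers = multiprocessing.cpu_count() * 2 + 1
# Threads per worker overlap I/O-bound requests within one process.
threads = 4
timeout = 30
```

Started with something like `gunicorn -c gunicorn.conf.py myproject.wsgi`, this serves up to `workers * threads` requests concurrently per box, versus the one-at-a-time dev server.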
@Sonsequence 6 months ago
Yeah the python gripe is old news. It used to be that your only option to utilize you CPU fully was choosing the right size of thread pool for your workload and I/O latency. In comparison, an event loop can automatically make full use of your compute no matter how the workload varies. Comes with some downside. If you're being a "proper engineer" you're forced to write more complex async code and having lots of threads is good insurance against outlier heavy requests blocking your whole server. But nowadays you can use async with FastAPI or if you love Django there's gevent so you'll use some other libs to monkeypatch your I/O and database connection pool and then you can write an async web server in the simple style of a synchronous one. Sounds dodgy, turns out to be trouble free.
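The event-loop upside described here is easy to see with plain `asyncio`, no framework required (`fake_request` stands in for a database or HTTP call):

```python
import asyncio
import time

async def fake_request(i):
    await asyncio.sleep(0.1)  # the loop runs other work during this wait
    return i

async def main():
    # 20 concurrent "requests" complete in ~0.1 s total, not ~2 s,
    # because the single thread overlaps all of the waits.
    return await asyncio.gather(*(fake_request(i) for i in range(20)))

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start
print(f"{len(results)} requests in {elapsed:.2f}s")
```

This is the same effect gevent achieves by monkeypatching blocking I/O under synchronous-looking code, which is why it can feel dodgy yet work fine.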
@joelv4495 7 months ago
Love this take. @15:56 Easiest way to do this is build the entire API in a lambda docker container, then transition to ECS Fargate once the service has a somewhat consistent load. Req -> Res cycle is faster too because of no cold starts.
@WillDelish 7 months ago
Yep, this is the way. Docker / containers also make local dev a lot easier to test/mock
@GackFinder 7 months ago
@@WillDelish Easier compared to what though? Because last time I checked, pressing F5 in an IDE wasn't that hard.
@utubes720 7 months ago
@@GackFinder Can you, or one of your upvoters, elaborate? I understood @WillDelish's point about services running locally in containers making local testing easier, but I'm not following how "pressing F5" solves the use case he's referring to.
@flyingdice 7 months ago
Lambdalith, it's a thing
@WillDelish 7 months ago
@@utubes720 Troubleshooting Lambdas can be a huge pain if you don't know whether it's your code or the infra. Some tools like SAM help, but can be slow. Being able to use Docker to chain your stuff together & test locally = cool beans; now it's only infra problems. Also, if you want to use, say, Rust or the latest Python, Lambda might not have a runtime for you, but you can build a custom image and use whatever you want.
@pdougall1 7 months ago
Ruby servers have been threaded for almost a decade (something like that), but no one really writes threaded code in an API call (obviously there are exceptions, but generally).
@PaulSpades 7 months ago
JS devs just can't get their heads round the concept of one thread per request and that your whole app runs for every single request. They're used to dealing with multiple events in their code and their code being resident in memory indefinitely, spinning the event queue (like in a webpage). Which is fine, except when you do scale to multiple threads with each thread having multiple events - then you're in for a headache.
@pdougall1 7 months ago
Thinking about multiple layers of async (theads AND event loop) seems like it might lead to... productivity loss @@PaulSpades 😂
@PaulSpades 7 months ago
@@pdougall1 Yeah, so cue the discussions about shared thread memory (with the security concerns) and thread synchronization, both definitely needed now that the long running program is handling a session that might reside on 3 different threads/processors/machines. This never used to be a problem (the server software just passed very basic data structures about the request to all program instances when invoking them). It used to be simple, understandable and effective, CGI was and still is a workhorse.
@disguysn 7 months ago
I believe Ruby MRI still has a GIL though. You'd have to go with JRuby to get around that.
@trapexit 7 months ago
His love for Node concurrency is weird. Node wasn't unique in its concurrency in any way. The small BEAM VM would scale pretty well on old hardware with a small footprint. I had a Fortune 500 enterprise-wide app handling tens of thousands of requests a second running on an Erlang setup on a single Pentium 4 workstation 10+ years ago, with almost zero effort put into scaling. Node enabled concurrency in a mid-tier, more common language, but it isn't great at it.
@disguysn 7 months ago
You can't have a discussion about concurrency without someone mentioning Erlang or Elixir - and for good reason, too.
@ferinzz 4 months ago
Reminds me of how the entirety of Battle.net for Diablo (can't remember if 1 or 2) ran on a single spare PC they had.
@celiacasanovas4164 7 months ago
Ruby has fibers (green threads) natively. It also has Ractors (actor-model parallelism for CPU-bound operations) but they're not very performant yet and they're working on it. Ruby has also recently improved massively in terms of speed, but I guess it's past its prime, and there are still faster languages.
@themartdog 7 months ago
You still have to worry about efficiency with serverless / lambda. I've spent a lot of time reducing execution time of lambdas. that's not to say I always use it though, it's still pretty easy to build and deploy a containerized app on ECS in AWS too. Use spot instances and you pay so little.
@madlep 7 months ago
The greatest trick Jeff Bezos ever pulled was tricking developers into thinking serialisation and network latency are trivial, and that memory and compute are expensive
@haljohnson6947 5 months ago
Actually it was delivering AAA batteries to my house 3 hours after ordering them.
@maxymura 7 months ago
GCP made a good move with the second generation of Cloud Functions (the analogue of AWS Lambda). Every gen2 Cloud Function is now capable of serving multiple concurrent requests per instance. This way compute resources (vCPU and RAM) get utilised more efficiently, and the service still abstracts away the complexity of scaling and maintaining server infra. At the end of the day you basically pay more money not to manage the servers, which can be beneficial for some small-to-medium products in some situations.
@jackpowell9276
@jackpowell9276 5 ай бұрын
Yeah and then you've got K8s as an option down the road, once your costs justify an infra team and tooling team, plus there is now a lot of great kubernetes focused tooling available.
@0e0
@0e0 6 ай бұрын
some of the stuff in the livechat is truly hilarious. this was great
@shaunkeys7887
@shaunkeys7887 7 ай бұрын
Python’s threading is more complicated because of the global interpreter lock. Basically, Python’s threading is real, but only kind of. To run a line of Python, you have to acquire the global interpreter lock. If you call a compute-heavy C function, that function can release the GIL while it runs, but Python must then re-acquire it to go back up the call stack and continue running. If most of your compute is happening in Python-land, threading is an illusion, because all threads are fighting for the same lock. If most of your compute is happening in C, it has a chance of allowing threads to work. Most I/O calls take advantage of this and allow multiple threads to make progress, for example, since they spend their time waiting in C rather than executing Python
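A minimal sketch of the effect described above: `time.sleep` releases the GIL while waiting, just like most I/O calls, so threads blocked on it overlap instead of running one after another.

```python
import threading
import time

def io_bound_task():
    # time.sleep releases the GIL during the wait (like most I/O calls),
    # so other threads get to run in the meantime
    time.sleep(0.2)

start = time.perf_counter()
threads = [threading.Thread(target=io_bound_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Four 0.2 s waits overlap, so the total is ~0.2 s, not ~0.8 s
print(f"elapsed: {elapsed:.2f}s")
```

Swap the sleep for a pure-Python compute loop and the four threads take roughly 4x as long, because they all fight for the same lock.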
@IvanRandomDude
@IvanRandomDude 7 ай бұрын
Isn't GIL fixed in the latest version?
@Fiercesoulking
@Fiercesoulking 7 ай бұрын
@@IvanRandomDude What they did in the latest version was remove the GIL constraint across multiple instances of Python inside one process. This is a baby step in that direction. An example of where this mattered before: PyTorch, when you used the fork library under Windows; effectively both multiprocessing and multithreading were an illusion (and still are, because they have yet to update to the new version)
@jordixboy
@jordixboy 7 ай бұрын
most I/O functions (reading files, reading from the network, ...) are done entirely in C land. The GIL is released in those, so that's perfectly multithreaded, so you can definitely process multiple requests.
@supergusano2
@supergusano2 7 ай бұрын
@@jordixboy exactly this. threading can be real if you've got an app that's basically all I/O
@jordixboy
@jordixboy 7 ай бұрын
@@supergusano2 yep, not to mention that you can build mission-critical things easily using the C API, where you can release the GIL. Not that hard..
@dontdoit6986
@dontdoit6986 7 ай бұрын
As a cloud software engineer (7 years exp + degree), I've seen large enterprise systems built in pure serverless using IaC. The bills are laughably, comically small. The trick is designing with event-driven processes in mind. It's a juggling act, but a decent SA can design a system that keeps the bills substantially low. Understand the services you are using and stay away from the shiny stuff. API Gateway, Lambda, and DynamoDB are your legendary mains. EventBridge, SQS, and RDS are epics worth picking up. EC2s: absolutely do not pick up; you may get lucky every once in a while, but you'll more likely get killed trying to look fancy. If infrastructure is built using something like Terraform, you can (somewhat) migrate serverless from AWS to GCP; however, TF truly shines if you deploy elements of both (plus others) in the same codebase.
@Sonsequence
@Sonsequence 5 ай бұрын
Laughably small bills with AWS lambda can only come from laughably small active users.
@vikramkrishnan6414
@vikramkrishnan6414 4 ай бұрын
@@Sonsequence : No, it comes from laughably small concurrent active users, which is very true for Enterprise internal applications.
@lashlarue7924
@lashlarue7924 Ай бұрын
@@vikramkrishnan6414☺️👍
@KDill893
@KDill893 7 ай бұрын
I learned about pickle the other day. Totally agree that python packages are named stupidly
@yeetdeets
@yeetdeets 7 ай бұрын
I pickle my pandas with celery!
@pif5023
@pif5023 7 ай бұрын
I’ve heard good things about Clojure performance-wise as an alternative to popular dynamic languages on the server side. It's not amazing, but people say it's quicker to write than Java or Go.
@vncntjms
@vncntjms 7 ай бұрын
"You design by Twitter choices." This is the reason why I'm not improving. I'm just stuck in this cycle of chasing the next big thing.
@yeshwanthreddykarnatakam5652
@yeshwanthreddykarnatakam5652 7 ай бұрын
I totally agree with you on most serverless platforms selling lies, and I love servers, as modern programming languages like Go, Rust and others are very good with async and can effectively handle 1000s of requests per second on $5/month nodes. What's your take on Cloudflare Workers and WASM? Although they have limits in their use cases, I think they might make a better serverless model, run either as an external platform or as an internal one.
@bastienm347
@bastienm347 7 ай бұрын
The Python runner for a production server is gunicorn, which handles a worker pool for you. So, yes, Django multithreads requests if you deploy it correctly. It's just that if you run the dev server in production, it will not multithread by default. But that would be dumb. Nobody does that. It takes 30 seconds on the internet to learn you need gunicorn, and it is very easy to set up
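As a rough sketch of what "deploying it correctly" can look like: gunicorn reads its config from a Python file. The numbers below are the commonly cited rules of thumb, not hard requirements, and the module name `myproject` is a hypothetical placeholder.

```python
# gunicorn.conf.py -- a minimal sketch; tune the values for your hardware
import multiprocessing

bind = "0.0.0.0:8000"
# Commonly cited rule of thumb for sync workers: 2 * cores + 1
workers = multiprocessing.cpu_count() * 2 + 1
# Optional threads per worker help with I/O-bound views
threads = 4
```

Started with something like `gunicorn myproject.wsgi`, each worker process then runs its own copy of the Django app and serves requests independently.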
@yeetdeets
@yeetdeets 7 ай бұрын
3:44 As someone building an app with Django and Celery, I felt that 😑
@zyriab5797
@zyriab5797 8 күн бұрын
At my last job I had a debate with a senior dev and the CTO about performance. They were adamant about the fact that today we focus on readability over performance and that modern computers are so powerful that our crappy node backend was just fine. I found it crazy from an engineering and craftsmanship point of view.
@outwithrealitytoo
@outwithrealitytoo 5 күн бұрын
Having Lambda serverless components in your startup codebase is like having a chandelier in your one-bed apartment: more of an expensive art installation than appropriate decor.
@jasonfuscellaro9985
@jasonfuscellaro9985 7 ай бұрын
Software will grow to match the hardware it’s provided
@brunoais
@brunoais 7 ай бұрын
9:00: With Sanic (another name you'll "love"), I can handle more requests per second in Python than Node.js can for the same work, as long as there are 3 or more async/awaits in the JavaScript sequence.
@ErazerPT
@ErazerPT 7 ай бұрын
It's pretty much about (once again) people buying into hype without understanding numbers. Serverless/cloud is amazing in two scenarios: you have really low load, or you have 'near infinite' load. The former will cost you next to nothing; the latter will be prohibitively expensive to build infrastructure for. Thus, both make sense. It's all the middle ground in between that might not.
@disguysn
@disguysn 7 ай бұрын
Even the middle can be cost effective if you plan it out properly. You have to consider the cost of maintaining all this yourself.
@sercantor1
@sercantor1 7 ай бұрын
so, basically spiky traffic. But I agree with Prime in that you have to write your code in such a way that it is easily transferable to at least a Docker container
@edoardocostantini2930
@edoardocostantini2930 7 ай бұрын
Blaming the entire serverless paradigm just because people don't understand how to actually use it is plain dumb imho. Should we say that object-oriented programming is dumb just because people started abusing interfaces and inheritance? As with everything in IT, if you don't know how to do stuff you will probably end up making a mess. Lambdas are not the answer to every problem; pros and cons, as with everything else in life, folks. Btw, at the end of the day AWS still wins because they also sell raw compute resources, so to say that they are sabotaging an entire industry feels too much, to be honest.
@AlbertCloete
@AlbertCloete 7 ай бұрын
That's one thing that annoys me as well: when people compare Node to PHP and then say PHP can only do single-threaded. Yes, but Nginx is multithreaded. You don't use PHP on its own.
@PaulSpades
@PaulSpades 7 ай бұрын
Oh boy, don't even try to explain to people how CGI works, it might explode their brain.
@evergreen-
@evergreen- 7 ай бұрын
How do you do SSE (server-sent events) in PHP? Not possible
@Bozebo
@Bozebo 7 ай бұрын
@@evergreen- You just return data content, then two new lines, and there's an event sent to the client (you'd need to ob_flush), all encapsulated away, similar to everything else in HTTP. And streaming the constant response is the same as any other streaming, e.g. files. You could even mount your whole app under its framework as a daemon/container listening independently and load balance it separately for better performance, as you'd need to with most other languages (you could use PHP mostly on its own, in other words).
@dave4148
@dave4148 7 ай бұрын
real threads or green threads/asyncio? not the same
@PaulSpades
@PaulSpades 7 ай бұрын
@@evergreen- I did write a chat app with SSE in PHP this year, it was a piece of cake. Very easy. Hard to find documentation, though. But you can also use a network socket with php, python or whatever. Socket server libraries have been around for decades.
@Fiercesoulking
@Fiercesoulking 7 ай бұрын
It always strikes me how different the IT space in (central) Europe is compared to the US. I mean, we have a lot of small and medium-sized companies, while in the US the big ones are dominant. Which means a lot more languages are used, but also companies build the webpages for other companies. I know one, and they host for their clients on their own servers when they can. By the way, I read today they're looking for Golang devs :D. There's also less .NET fear, because a lot of companies are already dependent on Windows thanks to Microsoft Office, so a lot of system integration and interconnectivity runs on .NET, as long as you aren't a bank.
@disguysn
@disguysn 7 ай бұрын
The latest versions of .NET are cross platform.
@shadamethyst1258
@shadamethyst1258 7 ай бұрын
Everyone swearing by C# is quite a pain, I must say. Please just give me something like go or ocaml or rust
@Fiercesoulking
@Fiercesoulking 7 ай бұрын
@@shadamethyst1258 The only problems with C# are that you need to use Visual Studio for the UI creation and that it's strongly typed; the rest is just bias. I'm currently programming in C++, and C# is way better. Where I meant it is used is in interfacing with ABB robots and control system units, but at the same time in logistics and similar stuff. Actually it's good it isn't so hyped. When Java got hyped it got a Jenga tower of frameworks stacked on top of each other. JS got React *cough*.
@alebarrera1991
@alebarrera1991 7 ай бұрын
In reality, the language that is most suitable for serverless is PHP, which has always used one process per request.
@h.s3187
@h.s3187 7 ай бұрын
That's why PHP still runs 80% of the internet. PHP is stateless: you send a request to the server, and the response comes from a process that dies after finishing its job, just like the HTTP protocol. That's why PHP is the secret king 👑 of APIs and SaaS
@barneylaurance1865
@barneylaurance1865 7 ай бұрын
@@h.s3187 I don't believe the process actually dies after the request in PHP-FPM or most PHP servers, but all your variables are garbage-collected at the end of the request and not shared with whatever's done for the next request. So it's as if the process died, but without all the overhead of starting a new process.
@Waitwhat469
@Waitwhat469 7 ай бұрын
16:45 you can always look at moving to knative at that point, putting in on top of your k8s cluster(s)
@georgehelyar
@georgehelyar 7 ай бұрын
From the Microsoft side, .NET gets big performance increases every year, including AOT compilation, which they show being used on Azure Functions, so that you don't have to pay the JIT cost. Of course they do this because they want more people to use Azure Functions, and serverless is more expensive than other forms of hosting, but they don't intentionally hold back performance just to increase compute time. You can also host Azure Functions inside k8s now with KEDA to scale to 0, which is more cost-effective, but then you have to know more about k8s, although there are Azure Container Apps for that now too. The bigger you get, the more it becomes worth it to spend dev time to reduce operational costs. Docker containers are easy enough to start with to make serverless pretty pointless anyway, though, and there are a lot of different ways to host them easily.
@jy3787
@jy3787 7 ай бұрын
I faced this issue of running distinct on an array with 500k hashed IDs (via Set or any other method to dedupe an array) on AWS Lambda with Node.js; the end product returns a different size every time. Anyone have any insights on this?
@bonsairobo
@bonsairobo 7 ай бұрын
If you write your code in a way that's tightly coupled to a particular serverless architecture, then you need to be very careful to pay down that tech debt as soon as you experience significant traffic. I don't think using serverless is necessarily a bad idea if you keep in mind that at some point you might need to shift those request handlers into a new infrastructure.
@Skorps1811
@Skorps1811 7 ай бұрын
The default Python everybody uses is single-threaded. Django and Flask web apps achieve concurrent request processing via external tools like gunicorn that run basically N instances of the app server. However, Python now has full async/await support baked into the language. The FastAPI app server leverages that, and you can write apps Node-style; just be careful and don't accidentally use sync APIs
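A small sketch of the async model the comment describes: with async/await baked into the language, a single thread can overlap many waits, which is the mechanism frameworks like FastAPI lean on.

```python
import asyncio
import time

async def handle_request(i):
    # Simulates a non-blocking wait (DB query, HTTP call, ...);
    # the event loop runs other handlers while this one awaits
    await asyncio.sleep(0.2)
    return i

async def main():
    start = time.perf_counter()
    # Launch ten "requests" concurrently on one thread
    results = await asyncio.gather(*(handle_request(i) for i in range(10)))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
# Ten 0.2 s waits overlap on a single thread: total is ~0.2 s, not ~2 s
print(f"{len(results)} requests in {elapsed:.2f}s")
```

The caveat in the comment is real: one accidentally synchronous call (e.g. `time.sleep` instead of `asyncio.sleep`) inside a handler blocks the whole loop.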
@aesthesia5023
@aesthesia5023 7 ай бұрын
You can find a couple of very good papers detailing how building your own infrastructure is actually more profitable in the long run
@LambdaCalculator
@LambdaCalculator 7 ай бұрын
any specific references you had in mind? I'm interested in reading about this
@everythinggoes850
@everythinggoes850 7 ай бұрын
In the long run. Not in the beginning
@101Mant
@101Mant 7 ай бұрын
It actually depends on what your workload looks like: constant vs bursts, how high peak is vs average. It's not that straightforward.
@adenrius
@adenrius 7 ай бұрын
Serverless is pretty profitable as long as you don't have absurdly high usage, from my experience.
@egor.okhterov
@egor.okhterov 7 ай бұрын
What if your startup fails and you never exceed free tier?
@zoltannagy394
@zoltannagy394 7 ай бұрын
I absolutely agree with the title of this video (with the content too…) 👍😊 …more people should have realized this a long time ago…
@LC12345
@LC12345 7 ай бұрын
That frigging smirking cloud knows what it did…
@dandogamer
@dandogamer 7 ай бұрын
Been hit with the max Lambda limit before. There's a max number of Lambdas you can create, a different number for how many can be called, and yet another for how many can be called concurrently. These are restricted by AWS and you have to ask them to bump them up for you ($$$). This happened to me when working for a health company: COVID hit and we were swarmed with requests. The queued Lambdas were causing all kinds of problems; events were being dropped, DB connections exceeded the max number, which caused even more issues. It completely broke our system. A system that was very carefully tested and maintained, yet we could not even properly test a scenario like this :(
@briankarcher8338
@briankarcher8338 2 ай бұрын
What were you doing in the Lambdas? If they're behind an SQS queue then it's pretty trivial. If the Lambdas can't keep up then the queues will just accumulate, but you won't lose anything.
@MilanKazarka
@MilanKazarka 3 ай бұрын
ah, the pay-by-second actually reminds me of making a mistake when developing a C program for the mainframe back in 2010 or so. Time on the IBM mainframe was paid by the second and you had some wild memory constraints, so if your program ran for a few minutes instead of 20s then that was a problem. This just reminded me how wild I thought the constraints and the prices were back then, and now we're again paying wild prices for cloud services.
@bassstorm89
@bassstorm89 6 ай бұрын
The AWS lock-in is wild. You can end up in situations where you are so dependent on their services to have your product up and running and scaling somewhat okay that you can never really leave without insanely high cost. Also, a tip: keep track of your own usage with a third-party service and monitor it well.
@ThomasSuckow
@ThomasSuckow 7 ай бұрын
We used AWS Lambda for image transcoding. Was nice, since if we didn't have images to transcode it cost a lot less than a server, and we could handle bursts of work within seconds. Was nice to not keep that much compute around.
@NotThatKraken
@NotThatKraken 7 ай бұрын
Lots of services only handle a few requests per minute. There is a rate above which switching to containers makes sense.
@solowatcher
@solowatcher 4 ай бұрын
@ThePrimeTime, apart from SEO and first load, why would one server-side render React pages like old-school MVC frameworks?
@valhalla_dev
@valhalla_dev 7 ай бұрын
I'm actually converting away from serverless because I simply designed my serverless infra badly and I can design a monoservice REST API easier and more sustainably. It's likely going to cost me more at my app's size, but I'd rather have something sustainable since I'm a solo dev.
@krishnabharadwaj4715
@krishnabharadwaj4715 7 ай бұрын
it's a trade-off between how fast you want to ship the product vs how much it would cost to ship it that fast. If you don't want to spend that money on infra, then you optimize your infra.
@LiveErrors
@LiveErrors 7 ай бұрын
You know what you can dip in the C? The entire flipping Lua!
@travischristensen5385
@travischristensen5385 4 ай бұрын
There’s an AWS blog post where they talk about big savings from switching some aspect of Prime Video away from Lambdas to EC2
@macverishe3480
@macverishe3480 7 ай бұрын
Node's async is idling because it sits in a constant loop in Node's internals, waiting for work. Node is good for local dev stuff and builds, but not on the server, in my opinion.
@tornoutlaw
@tornoutlaw 16 күн бұрын
Python has threads and multiprocessing. FastAPI uses the ASGI server uvicorn. We use FastAPI with multiple workers and threading all the time.
@michaelwilson367
@michaelwilson367 4 ай бұрын
Using AWS lambda like a web server seems crazy to me. I’ve only used it for background processes that get triggered by a queue or like a daily/hourly schedule. And I use step functions map iterator to launch them concurrently instead of having loops and things inside a single lambda
@nicholashendrata
@nicholashendrata 3 ай бұрын
Going back and forth between what comes out of prime's mouth and what comes out of theo's mouth is giving me a real bamboozle of two completely different philosophies
@ThePrimeTimeagen
@ThePrimeTimeagen 3 ай бұрын
i think the good part is that people have different ideas how to approach problems
@lavitagrande8449
@lavitagrande8449 7 ай бұрын
As to your opinion on Python I have always referred to this as "Do you want to know, or do you want to learn?" People who want to know something identify a skill or ability that it would be convenient or fun to have. "I would like to know French. I would like to know how to play guitar." Wanting to learn is the act of saying to yourself "I am going to suck at this, and I am going to put time and effort in, and eventually I will suck less. The more time and effort, the less suckage."
@banned_from_eating_cookies
@banned_from_eating_cookies 7 ай бұрын
Great point.
@KangoV
@KangoV 5 ай бұрын
In the java world, we use Micronaut that can create apps that can run as a CLI, Microservice (K8S) and AWS Lambda without recompiling (same jar).
@LusidDreaming
@LusidDreaming 7 ай бұрын
The default per-account concurrency limit for Lambda is 1000; you can request an increase, though. My company has a limit of 10k, for example.
@thedude7319
@thedude7319 6 ай бұрын
01:33 I love how programmers can talk like artists
@jackevansevo
@jackevansevo 7 ай бұрын
The commenter talking about Celery is kinda misleading. Celery is the Sidekiq equivalent from Ruby land; it's for async/background job processing. Celery workers typically live on a completely separate machine and receive tasks through some sort of broker like RabbitMQ / Redis. When you run a Python web service in prod with Flask/Django, you typically achieve per-request concurrency by using something like gunicorn or uWSGI and pointing these at your app. These are pre-fork worker HTTP servers, so each worker essentially takes your application code and runs its own version of Django/Flask within the process. I'm not a NodeJS person, but it seems similar to the way you'd deploy an Express app with PM2
@user-dc7ky7lk6b
@user-dc7ky7lk6b 7 ай бұрын
Where is the Twitter thread that discusses async slowness?
@quantum_dongle
@quantum_dongle 7 ай бұрын
Python only supports nice syntax and associated sugar if they can turn it into a breaking change and release it as 3.x (your code only works with 3.x-1)
@pif5023
@pif5023 7 ай бұрын
“Design by Twitter choices” is what I have witnessed so far in my career, and it's so frustrating. Management doesn't care to try out solutions and pick the best one. Never seen a flame graph outside a YT video.
@nefrace
@nefrace 7 ай бұрын
Is that a Miku with a bazooka on your wallpaper? At 0:08
@burkskurk82
@burkskurk82 4 ай бұрын
Async-await was adopted a long time ago. Was it adopted from C#?
@jgoemat
@jgoemat 3 күн бұрын
You can dip that Node.js into C also... `npm install -g node-gyp`. I built a Node app that calls Windows APIs to send keystrokes and mouse movements to run a Flash game for me once. The image processing was too slow doing the screen capture and looking at the bitmap in code, so I installed CUDA and figured out how to offload that to the GPU.
@LuealEythernddare
@LuealEythernddare 7 ай бұрын
I’m gonna be going for an aws certification soon. It’s part of the WGU software engineering degree program
@averagegeek3957
@averagegeek3957 7 ай бұрын
0:08 nice
@thekwoka4707
@thekwoka4707 7 ай бұрын
So Python (and Django) are single-request-blocking by default for webservers, whereas NodeJS is not. You can get Python and Django to allow async (and multithreading), but it takes more other stuff. It was actually only much more recently that Django became able to be non-blocking.
@Im-VT
@Im-VT 7 ай бұрын
Wow what a strange thing! Can’t believe an AWS business would be doing a total business thing, and not acts of philanthropy. What an evil Bezoman!
@calahil28
@calahil28 10 күн бұрын
I mean, if your business plan is to milk all the money out of your clients, you won't have any clients after a while... Unchecked capitalism will eat itself alive. Companies do not learn... because they aren't people. They are legal contracts to protect the actual individuals from actual liability for their reckless behavior in pursuit of all that glitters and shines
@DamjanDimitrioski
@DamjanDimitrioski 7 ай бұрын
What I want from a Lambda, or any serverless, if I have Django for instance and a small web app for a few users: auto-sleep the Lambda instance, thus saving costs. The first user that wakes the Lambda waits 10 seconds max, and a TIMEOUT var controls how long until the Lambda sleeps.
@yuriblanc8446
@yuriblanc8446 7 ай бұрын
concurrency may vary depending on the region
@jsdevtom
@jsdevtom 7 ай бұрын
Node.js is not single-threaded... under the hood it decides in C++ what gets multithreaded and what doesn't.
@festusyuma1901
@festusyuma1901 7 ай бұрын
The limit isn't really a request limit, it's a Lambda instance limit; one Lambda could handle a lot of requests
@jamievisker1952
@jamievisker1952 7 ай бұрын
App Runner, which uses Fargate, can scale to 0.5. You have to pay for the memory of at least one instance, but not the CPU. It has its issues and I wish it scaled all the way to zero. But it is interesting.
@putnokiabel
@putnokiabel 7 ай бұрын
Not an issue on GCP, Google cloud functions allow a single function instance to run many concurrent requests.
@darekmistrz4364
@darekmistrz4364 3 ай бұрын
10:24 news flash: you pay for idle CPU even if you run a Node.js process on a typical VM or colocated hardware. It's just at times when no one uses your application
@genechristiansomoza4931
@genechristiansomoza4931 7 ай бұрын
PHP and Python run one request at a time. It's the Apache server that handles the multiple requests and can run multiple PHP or Python processes in parallel.
@hellelo.5840
@hellelo.5840 7 ай бұрын
There is a tradeoff between the low maintenance cost of serverless and the efficiency of self-maintained servers.
@KangoV
@KangoV 3 ай бұрын
Node is asynchronous, but NOT concurrent! Concurrent and parallel are effectively the same principle. Both are related to tasks being executed simultaneously. Asynchronous methods aren't directly related to the previous two concepts, asynchrony is used to present the impression of concurrent or parallel tasking.
@newjdm
@newjdm 3 ай бұрын
What’s the diagramming tool you’re using?
@styyxofficial
@styyxofficial 3 күн бұрын
Excalidraw
@user-co5bp8nq7e
@user-co5bp8nq7e 7 ай бұрын
prime takes it very personal on poor little ruby thing ((
@richardflosi
@richardflosi 7 ай бұрын
Have you used Netlify?
@HrHaakon
@HrHaakon 7 ай бұрын
Why would your startup with less than 100 users need much more than a VPS? That's not 300k per annum, that's like...100 (no k) per annum.
@LtSich
@LtSich 7 ай бұрын
because the cloud is the future; old VMs, or even worse, bare metal, are old school and for losers... they are cheaper and reliable, but you know, you have to use the "latest tech", it's so much better...
@codeman99-dev
@codeman99-dev 7 ай бұрын
3:15 Everyone in the python world reaches for gunicorn for production performance. There's not even an equivalent in the node.js world. Maybe the closest you'll get is something like pm2?
@bamtoday
@bamtoday 7 ай бұрын
Laughs in Apache Spark custom EC2s with PySpark.
@davidraymond8994
@davidraymond8994 6 ай бұрын
Serverless isn't the issue. There are other serverless technologies that you can self-host that are awesome. It is the misuse and lack of understanding of the AWS pricing model which is a bit of a pain to deal with.