Surprisingly, this is one of the only videos I found that actually goes into the specifics of this topic. 👍 All the other videos and docs are kind of hand-wavy.
@dylanoh3425 · 10 months ago
Super high quality content 🎉
@reyariass · 2 months ago
I had this exact question and I KNEW (felt it in my bones) that the answer wouldn't be as simple as just saying "yeah, only like 100". Thanks for the insight!
@bunkerdm6303 · 9 months ago
Thanks to you, I have a broader understanding of websockets. I really want to see a video about horizontal scaling. Thank you.
@AblyRealtime · 9 months ago
Glad to hear you liked the video, thanks for taking the time to comment and we'll keep that in mind :D
@latavish · 4 months ago
I'm currently developing an app that highly uses websockets and you have really given me a few insights to think about. Thank you so much for this valuable info 😊
@AblyRealtime · 4 months ago
Great to hear that, and thanks for commenting. Are you going to build your own horizontally scaled WebSocket setup?
@AmmariMedAziz · 5 months ago
High-quality content! Looking forward to a real-life tutorial on horizontally scaling WebSockets.
@nomadrider7200 · 10 months ago
I guess using Kafka or RabbitMQ to distribute the load coming from the business logic, along with horizontal scaling, can help you achieve even more scalability. Great content, really enjoyed it.
@AblyRealtime · 10 months ago
Thanks for the kind words, glad you liked it! Let us know if there are any other topics you'd like to see next.
@nomadrider7200 · 10 months ago
@@AblyRealtime Load testing with Artillery would be a great topic, where you not only emit events to the server in a loop but also listen for server-sent events on the client side.
@user-ss6yc2kg4y · 8 months ago
Could you please explain how you would do that? I guess we could have a chat service that creates WebSocket connections and subscribes/pushes to Redis. New messages would be pushed to Kafka. We would have another service subscribing to the Kafka topics, dedicated to handling messages: saving them to the DB and then publishing to Redis. The chat service would then receive the message and send it via the WS connection. What do you think?
@SiLintDeath · 5 months ago
Hmm, our system's flow was: WS => Kafka message => consumer writes the message to Redis and the DB for recovery. Another Kafka listener would then send the message back over the WS connection, looking up in Redis which server the client was connected to.
@anuragbisht1200 · a month ago
Redis can do pub/sub and can act as a DB too.
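The recovery flow described above can be sketched with plain Python stand-ins: a list plays the Kafka topic and dicts play Redis and the DB (no real brokers involved, and the names `client_location`, `on_ws_message`, etc. are illustrative, not from the video). The point is just to make the hand-offs between the two consumers explicit:

```python
# Toy stand-ins: a list plays the Kafka topic, dicts play Redis and the DB.
kafka_topic: list[dict] = []
redis_cache: dict[str, dict] = {}    # message id -> message (fast recovery)
database: dict[str, dict] = {}       # message id -> message (durable copy)
client_location = {"alice": "server-2"}  # which WS server holds which client
outbound: list[tuple[str, dict]] = []    # (server, message) "ws.send" calls

def on_ws_message(msg: dict) -> None:
    kafka_topic.append(msg)                      # WS => Kafka

def persistence_consumer() -> None:
    for msg in kafka_topic:                      # consumer => Redis + DB
        redis_cache[msg["id"]] = msg
        database[msg["id"]] = msg

def delivery_consumer() -> None:
    for msg in kafka_topic:                      # second listener => WS
        server = client_location[msg["to"]]      # Redis lookup: where is the client?
        outbound.append((server, msg))

on_ws_message({"id": "m1", "to": "alice", "text": "hi"})
persistence_consumer()
delivery_consumer()
print(outbound)  # [('server-2', {'id': 'm1', 'to': 'alice', 'text': 'hi'})]
```

In a real deployment the two consumers would be separate services in distinct Kafka consumer groups, so persistence and delivery scale independently.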
@JackLerouge76 · 11 months ago
Nice one, thanks!
@AblyRealtime · 11 months ago
We're glad you enjoyed it
@oleksandrsova4803 · 8 months ago
Probably, when people ask how many WS connections a server can hold, they actually mean: "What is the limit of WS/other connections on the LB, and what does it depend on? Is it the number of open file descriptors? The amount of RAM? Anything else?"
@AblyRealtime · 8 months ago
All very good questions, thanks for sharing! Perhaps for a future video👍🏻
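On the file-descriptor point above: each accepted WebSocket occupies one descriptor, and on Unix-like systems the per-process cap can be inspected from code with the standard-library `resource` module (Unix-only; a minimal read-only sketch, not a capacity guarantee):

```python
import resource

# Each accepted WebSocket occupies one file descriptor, so the
# per-process descriptor limit is one hard ceiling on concurrent
# connections (RAM and ephemeral ports are the other usual ones).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft fd limit: {soft}")
print(f"hard fd limit: {hard}")
```

The soft limit is what the kernel actually enforces; raising it (via `resource.setrlimit` or `ulimit -n`) up to the hard limit is usually the first step before any serious WebSocket load test.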
@verified_tinker1818 · 8 months ago
The Elixir web framework, Phoenix, solves pretty much all of these problems. The BEAM VM was basically built for this.
@hehimselfishim · 25 days ago
We need the scaling video! Great content, of course.
@user-mn5vp3cr8n · 7 months ago
I have 3 instances behind an LB, plus Kafka. When I send a request, the server returns 200 after producing to Kafka, but the front end needs to receive the response via WebSocket, and that event may be processed on another server. So how can the front end know which server to open the socket to if we are using an LB?
@AblyRealtime · 7 months ago
In a typical setup with a load balancer, you wouldn’t communicate directly with individual server instances from the frontend. Rather, you would communicate with the LB, which would handle redirecting your requests to the appropriate instances - Alex
@tzuriteshuba2704 · 4 months ago
If all your servers use a shared Redis instance to communicate with each other, don't we just reintroduce the original problem of a single server handling all the load (defeating the purpose of the load balancer)? I see that it still helps, since non-WebSocket work is still distributed, but at scale I don't see how anything is solved, especially for apps like chat apps where the WebSockets carry a lot of the work. Great video though!
@bookercodes · 2 months ago
You're spot on, except that Redis is far better suited to clustering than your own WebSocket server is.
@haritpatel5001 · a month ago
Very insightful video indeed, great work.
@AblyRealtime · a month ago
Thanks so much!
@R0hanThakur · 3 months ago
Amazing video, thanks! But you still didn't answer the question: how many active WebSocket connections can an average EC2 server hold? Even a rough ballpark range would help. That info can be used to decide how many servers we need, right?
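For what it's worth, that ballpark can be sketched as a back-of-envelope calculation: the ceiling is whichever of memory or file descriptors runs out first. The per-connection memory figure below is an illustrative assumption (idle connections; real workloads vary widely), not a measurement:

```python
def max_idle_connections(ram_bytes: int, per_conn_bytes: int,
                         fd_limit: int, ram_headroom: float = 0.5) -> int:
    """Estimate concurrent idle WebSocket capacity for one server.

    The answer is the *smaller* of the memory ceiling and the file
    descriptor ceiling; real per-connection traffic lowers it further.
    """
    mem_ceiling = int(ram_bytes * ram_headroom) // per_conn_bytes
    return min(mem_ceiling, fd_limit)

# Illustrative numbers: a 4 GiB RAM instance, ~20 KiB per idle socket,
# half the RAM reserved for the OS and app, fd limit raised to 1M.
estimate = max_idle_connections(
    ram_bytes=4 * 1024**3,
    per_conn_bytes=20 * 1024,
    fd_limit=1_000_000,
)
print(estimate)  # 104857 -> memory, not descriptors, binds here
```

Swapping in your own measured per-connection footprint (from a load test) turns this from a guess into a usable sizing number.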
@anuragbisht1200 · a month ago
Thanks for the nice video. Could you share your thoughts on choosing Redis over other DBs, and would you persist the state data to disk?
@AblyRealtime · a month ago
Hey there! Redis is used in this situation more as a cache, optimised for brokering the messages with ultra-low latency. The classic design for persisting messages long-term is to have an additional relational DB as a layer after Redis (to the right in the diagram).
@taki9789 · 2 months ago
Thank you for the helpful video! I have a question regarding horizontally scaling a WebSocket implementation. Is it possible to create a lookup table that maps roomId (often used in chat applications) to a server ID, so that users with the same roomId are directed to connect to the same server when load balancing?
@AblyRealtime · a month ago
Yes, this would be a recommended design pattern, and it has added security benefits over navigating rooms and servers using naming patterns. Thanks for your question!
@taki9789 · a month ago
@@AblyRealtime I appreciate your reply!
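A minimal sketch of the lookup table discussed above (the server hostnames, the hash-based first assignment, and the `server_for_room` helper are all illustrative assumptions, not Ably's design):

```python
import hashlib

SERVERS = ["ws-1.example.com", "ws-2.example.com", "ws-3.example.com"]
room_table: dict[str, str] = {}   # roomId -> server: the lookup table

def server_for_room(room_id: str) -> str:
    """Pin every member of a room to the same WebSocket server.

    The first connection for a room picks a server deterministically by
    hashing the roomId; later connections hit the table and land on the
    same server, so the room's fan-out stays local to one machine.
    """
    if room_id not in room_table:
        digest = hashlib.sha256(room_id.encode()).digest()
        index = int.from_bytes(digest[:8], "big") % len(SERVERS)
        room_table[room_id] = SERVERS[index]
    return room_table[room_id]

# Every user joining "room-42" is routed to the same server:
print(server_for_room("room-42") == server_for_room("room-42"))  # True
```

In production the table would live in shared storage (e.g. Redis) rather than process memory, and the LB or a connection-broker endpoint would consult it before handing the client a WebSocket URL.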
@GuildOfCalamity · 6 months ago
I think you should run a test with a single mid-tier server and see where the average limit of WS connections would be.
@AblyRealtime · 6 months ago
Thanks for the suggestion! We'll keep this in mind for future videos.
@aymanimtyaz8529 · 2 months ago
Could you elaborate a bit on how the Redis based approach works when scaling out?
@AblyRealtime · 2 months ago
You have to set up a way to provision and shed Redis instances to match scaling demands. Some use cases will demand a Kubernetes-type service to manage the instances, others a more homegrown solution.
@ovna · a month ago
Ty
@zhonglin5985 · 4 months ago
I think this video would be much more valuable if you went into more detail about how the horizontally scaled system works for a chat app. Everybody knows horizontal scaling is the way to go.
@AblyRealtime · 4 months ago
Thanks for your feedback! If you're interested, this is certainly the kind of content we'll consider delving deeper into in the future.
@namjitharavind · 3 months ago
Why can't we use Redis instead of WebSockets?
@AblyRealtime · 3 months ago
Alex from the video here 👋🏻 That is a good question. WebSockets are a realtime communication protocol that provides a full-duplex channel between client and server over a long-lived connection, whereas Redis is an in-memory data structure store. Confusion can arise because Redis does support pub/sub, but that mechanism is primarily designed for communication between your app/services and Redis. It's not suitable for realtime interaction between your server and clients (end users). For example, you'd be hard-pressed to connect to Redis from a browser in a sensible way, but that's exactly what WebSockets are designed for.
@namjitharavind · 3 months ago
@@AblyRealtime Thank you for the reply. WebSockets for the realtime experience.
@emaayan · a month ago
What if your users are actually devices that need to always be connected?
@AblyRealtime · a month ago
This is often not possible with phones or tablets due to constraints from Apple and Android, which don't allow apps to keep background WebSocket connections. It is not even possible to send REST HTTPS requests to apps running in the background; the only way around this is to send a push notification. If the app is running in the foreground indefinitely, the socket connection can stay open.