Dropbox/Google Drive Design Deep Dive with Google SWE! | Systems Design Interview Question 3

14,236 views

Jordan has no life

A day ago

Don't use the word cloud around me, it triggers my nicotine addiction
Timestamps:
00:00 Introduction
01:34 Functional Requirements
03:10 Capacity Estimations
05:37 API Design
07:35 Database Schema
13:04 Architectural Design

Comments: 89
@ragnawrawk769 2 years ago
Just got a call back from my Uber recruiter to tell me I passed my L4 interviews. Huge credit to your videos for helping me ace my systems design interview even with only 1.5 years of experience, no college degree, and no prior experience with systems design. The way you teach is great, easy to understand and super engaging.
@Ms452123 2 years ago
Drop some studying and leetcode tips fam. How long was the prep?
@ragnawrawk769 2 years ago
@@Ms452123 Probably like 3 months of prep cumulatively; ran through the sys design videos on here, Grokking, and Neetcode. Also got Leetcode premium and did company-tagged questions.
@jordanhasnolife5163 2 years ago
Dude holy shit congrats, let's fucking goooo! May need a referral there one day haha
@Ms452123 2 years ago
@@ragnawrawk769 Damn, I am doing the exact same thing bro, since April though. I have a new-grad Bloomberg interview in August. Still don't feel fully confident. Hopefully in two more months of consistency I will be good, if not for Bloomberg then for any other company, because I am desperado lol. I don't have a CS degree.
@jordanhasnolife5163 2 years ago
@@Ms452123 fyi not to have you stop watching my channel but I think for that you may wanna mostly prep by leetcoding
@pawandeepchor89 1 year ago
I liked that you did not blindly follow others' designs and came up with your own solution. Great work, thanks.
@sagarjvora1 2 years ago
Excellent! Like the clarity of thought. And smartly countered Grokking.
@jordanhasnolife5163 2 years ago
I appreciate it!!
@fischlump 2 years ago
Great video, and happy to see a different take on the Google Drive/Dropbox problem. I agree with the point about the "request queue" being superfluous. As for the synchronization/notification part, it has a lot of overlap with the design of a notification service in my opinion, and could be pulled out into its own dedicated service. That would also allow for different strategies for popular files (i.e. shared by a lot of users), for example. One idea I was considering is a single websocket/long-polling connection per client, to which the notification service sends out an event through some kind of session manager along with the files changed. For offline clients these can be stored in a queue.
@jordanhasnolife5163 2 years ago
I think that's reasonable! In reality this design probably would use a dedicated notification service, the one thing to note though is that I feel like if you say that in an interview they'd probably just ask "how do you implement that" haha.
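For anyone curious what the "how do you implement that" answer might sketch out to, here is a minimal toy version of the session-manager idea above, assuming an asyncio-style websocket server and plain in-memory maps (all names here are hypothetical, not anything from the video):

```python
from collections import defaultdict

# Hypothetical in-memory state: a real session manager would shard these
# across machines and persist subscriptions somewhere durable.
sessions = {}                    # user_id -> live websocket connection
subscribers = defaultdict(set)   # file_id -> user_ids subscribed to the file

async def on_file_changed(file_id: str, change: bytes) -> None:
    """Fan a file-change event out to every online subscriber of the file."""
    for user_id in subscribers[file_id]:
        ws = sessions.get(user_id)
        if ws is None:
            continue  # offline client: a real system would park the event in a queue
        try:
            await ws.send(change)
        except ConnectionError:
            sessions.pop(user_id, None)  # drop dead connections lazily
```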
@scottlim5597 1 year ago
Excellent content. Great work Jordan :)
@pashazzubuntu 2 years ago
yeeeahh! 1st! Happy to see the system design content coming!
@jordanhasnolife5163 2 years ago
🥰🥰
@shubhankar915 1 year ago
One way to solve file-change propagation is to assign each file a topic, handled by a bunch of gateway servers that store the mapping of which user is subscribed to which topics and files. The subscription store contains the mapping of each topic to its list of gateways. Any time a file's content changes, the event is put on its topic, the subscription service forwards the change to the associated gateways, and the gateways then send the event to the individual subscribed clients. This is how Facebook Live is architected.
@jordanhasnolife5163 1 year ago
Nice!
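A toy sketch of that topic-to-gateway routing, with the two mappings as plain dicts and the transport hops abstracted as callables (all names hypothetical):

```python
from collections import defaultdict

topic_to_gateways = defaultdict(set)   # subscription store: topic -> gateway ids
gateway_clients = defaultdict(lambda: defaultdict(set))  # gateway -> topic -> user ids

def on_topic_event(topic: str, event: dict, forward, push) -> None:
    """Subscription service forwards the event to each gateway on the topic;
    each gateway then pushes it down its subscribed clients' connections."""
    for gateway_id in topic_to_gateways[topic]:
        forward(gateway_id, event)                  # service -> gateway hop
        for user_id in gateway_clients[gateway_id][topic]:
            push(user_id, event)                    # gateway -> client hop
```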
@andrey_tech 1 year ago
Thanks! The video is just awesome
@sagarjvora1 2 years ago
Please create more system design content. It's great to go through your concept videos first, followed by the system design ones. Keep rocking!!
@jordanhasnolife5163 2 years ago
As I come across new concepts that I feel deserve a dedicated video, will do! I appreciate the positive feedback and thanks for viewing, hope this all helps!
@mickeyp1291 6 months ago
Great video as always - uploading chunks directly to S3 and then updating the meta server is just like a custom distributed transaction. I would just change the upload to an internal service, not divulging the S3 endpoints for upload (it would forward the chunks as HTTP streams to S3), and send back a task_id from this system for all the chunks; when the task is finished, the file has either uploaded successfully or not. (Thereby moving your conflict-resolution mechanism to the backend, not exposing your endpoints, and keeping a distributed-transaction-like interface for the client to poll.) This can also help in updating the subscribers of the new change, as there is only one location (the metadata server) driving the events. When the task completes successfully, a propagation will ensue; if the task fails, a child task (deleting chunks) will do the maintenance, keeping your S3 clean. This housecleaning can be done in batches on a secondary QoS. All of this will of course be reported to a metrics server via the CDC you show in your diagram.
@jordanhasnolife5163 6 months ago
That's fair enough, though I should note that with signed URLs I'm not overly concerned with letting clients upload to S3 directly.
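For reference, a sketch of minting those per-chunk signed PUT URLs with boto3 (the bucket name and key scheme are made up for illustration):

```python
import boto3

s3 = boto3.client("s3")

def presigned_chunk_urls(file_id: str, chunk_hashes: list[str], ttl_s: int = 900) -> dict:
    """Hand the client a short-lived PUT URL per chunk so it can upload
    straight to S3 without the metadata service proxying the bytes."""
    return {
        h: s3.generate_presigned_url(
            "put_object",
            Params={"Bucket": "file-chunks", "Key": f"{file_id}/{h}"},
            ExpiresIn=ttl_s,
        )
        for h in chunk_hashes
    }
```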
@valty3727 1 year ago
you are really really good at explaining your thought processes and why one approach is more logical than the other. can't believe a channel with this dumbass name and profile pic is helping me this much :p thanks for all this system design content!!
@jordanhasnolife5163 1 year ago
My pleasure, considering changing my channel name to something more serious like Jordan gets no bitches
@mickeyp1291 6 months ago
Re the subscribers to changes: I think the 200-file limitation is for user-owned files. There should technically be a limit of 500M users x 200 files, making 100Bn open websockets at max for, let's say, an admin sharing/subscribing - making your 200-websockets assumption moot. Under Kafka, each file would get a topic, and each user would get a merged stream. A notification service would then poll file changes and propagate per user. This would be very resource-heavy and would probably require some optimizations hybridizing polling and notifications. As always, love your videos.
@jordanhasnolife5163 6 months ago
Thanks!! I tried to improve upon this part of my design in the 2.0 version of this video. I think that for popular files we should do what we do in twitter and effectively have a "popular file" cache.
@igorrybalka2611 1 year ago
Thanks a lot for the video! I've got a rather philosophical question. Whenever we need transactional guarantees but want to stick with a NoSQL database, can we achieve this by using a separate synchronisation mechanism, e.g. a distributed Redis lock? Imagine that before updating the hashes table the client obtains a distributed lock on the file using fileId. Or would this not work because the update can fail midway and there's nobody to roll back? Any thoughts appreciated, thanks.
@jordanhasnolife5163 1 year ago
While I imagine that this is possible, it seems like a poor idea just because there's tons of latency induced with a distributed lock. I think you'd be better off just evaluating everything based on the specs of the database :)
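For completeness, the usual single-instance Redis lock pattern looks roughly like this (a redis-py sketch; the latency caveat above still applies, and a midway failure still leaves a half-finished update behind):

```python
import uuid
import redis

r = redis.Redis()

def update_file_with_lock(file_id: str, do_update) -> bool:
    """Best-effort lock: atomic SET NX with a TTL, release only if still owner."""
    key, token = f"lock:file:{file_id}", str(uuid.uuid4())
    if not r.set(key, token, nx=True, ex=30):  # TTL bounds how long a crash blocks others
        return False                           # someone else holds the lock
    try:
        do_update()                            # hashes-table update would go here
        return True
    finally:
        # compare-and-delete via Lua so we never free another client's lock
        r.eval(
            "if redis.call('get', KEYS[1]) == ARGV[1] then "
            "return redis.call('del', KEYS[1]) else return 0 end",
            1, key, token,
        )
```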
@tonyyang8424 8 months ago
Instead of listening for changes via the message queues and keeping the connection open, how about we make it a pull model, so that: 1) only active users care about updates/synchronization; 2) we can force the sync operations on the client side when the clients are doing something (updating some files, maybe). We can also have the client itself sync periodically to keep files in sync with the service side. There would be much lower traffic and far fewer connections to the service side with a pull model. Happy to hear what you think!
@jordanhasnolife5163 8 months ago
Seems reasonable to me!
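A sketch of that client-side pull loop, assuming a hypothetical /changes endpoint that takes an opaque cursor:

```python
import time
import requests

def apply_change_locally(change: dict) -> None:
    ...  # hypothetical: fetch the changed chunks and patch the local copy

def sync_loop(base_url: str, cursor: str, interval_s: int = 60) -> None:
    """Poll for changes newer than our cursor; the same call can also be
    fired on demand whenever the user touches a file, per the idea above."""
    while True:
        resp = requests.get(f"{base_url}/changes", params={"since": cursor})
        resp.raise_for_status()
        body = resp.json()
        for change in body["changes"]:
            apply_change_locally(change)
        cursor = body["cursor"]  # server returns the new high-water mark
        time.sleep(interval_s)
```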
@huguesbouvier3821 7 months ago
Great video! Looking forward to the v2 of this one :). Looking at the design, there is an arrow of the client uploading directly to S3. I think you just didn't draw the service in between, but I suppose we would not have the client uploading directly into a bucket? For the sake of argument, I am wondering if instead of MySQL we could use a wide-column DB like Cassandra, adding all the chunks and their URLs in one row. If we use one row per version, I guess the issue is that the row for a new version doesn't exist yet, so we would somehow have to create it in advance to be able to lock it. If we use one row for all versions, it could work, but it means locking older versions, which is probably not necessary.
@jordanhasnolife5163 7 months ago
Yeah I actually am proposing to just upload straight from the client, should in theory save a considerable amount of time. I see your point on Cassandra, but for the sake of avoiding write conflicts I do think that something like MySQL makes the most sense.
@yuganderkrishansingh3733 1 year ago
Thanks! I think the idea of having a response queue per user is that it's easier from the client hardware perspective to have one websocket connection open - essentially one actively running process - compared to 200 processes corresponding to 200 websocket connections, whereas server hardware is optimised for handling large numbers of connections, so that's not a big deal on the server side. Why having the response queue partitioned on user is not an issue, IMO, is because queues like Kafka are designed to handle use cases where multiple clients listen for updates, and I don't think the number of clients listening to a document will match the scale of Twitter followers.
@jordanhasnolife5163 1 year ago
You could be right - it's all about how you estimate the capacity, really. That being said, I think there may be some org-wide docs that 100,000+ people listen to. Perhaps for those you could just use a pull-based model.
@johnsonjthomas 1 year ago
Great Video! Just wanted to get your thoughts on using HDFS instead of S3? What are the pros and cons? Dropbox tech article mentioned that they were using both S3 and HDFS as their initial solution.
@jordanhasnolife5163 1 year ago
I touch upon this a bit, but HDFS tends to be more expensive to run as it requires you to have your own cluster and scale it on your own. I think S3 is cheaper, so if you just need to store static files it tends to be better.
@ahnjmo 1 year ago
Hey Jordan! Got a question about the FileVersionChunk table. As I understand, it tells us which file is associated with which chunk hashes. This makes sense - so let's say I upload one file with 2 chunk hashes. Our DB looks like:
fileId: 1, version: 1, chunkHash: abc
fileId: 1, version: 1, chunkHash: def
Then I modify the file, create a new chunk hash, and want to update the FileVersionChunk table. If I upload a new chunkHash, ghi, I imagine I would just append to my DB (and the chunk hash table would be updated), giving:
fileId: 1, version: 1, chunkHash: abc
fileId: 1, version: 1, chunkHash: def
fileId: 1, version: 2, chunkHash: ghi
In this case, when a user now downloads the file, how do I know which chunk hashes to retrieve from my FileVersionChunk table? If, for example, the new chunkHash was a modification of the first chunk hash (abc), how would I know to retrieve:
fileId: 1, version: 1, chunkHash: def
fileId: 1, version: 2, chunkHash: ghi
Or would I have to also append to the FileVersionChunk table, duplicating the previous chunkHashes that were not modified?
@jordanhasnolife5163 1 year ago
I appreciate it! Maybe the best way to structure the DB here is something like:
fileId: 1, chunkOrderNumber: 1, version: 1, hash: abc
fileId: 1, chunkOrderNumber: 2, version: 1, hash: def
fileId: 1, chunkOrderNumber: 1, version: 2, hash: ghi
Then query for all rows with fileId 1, group by chunkOrderNumber, and take the hash with the maximum version in each group - sorry if that wasn't clear during the video!
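To make that concrete, a sqlite3 sketch of the table and the "latest hash per chunk" query (column names are guesses at the video's schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE file_version_chunks (
    file_id INTEGER, chunk_order_number INTEGER, version INTEGER, hash TEXT,
    PRIMARY KEY (file_id, chunk_order_number, version))""")
db.executemany(
    "INSERT INTO file_version_chunks VALUES (?, ?, ?, ?)",
    [(1, 1, 1, "abc"), (1, 2, 1, "def"), (1, 1, 2, "ghi")],
)

# SQLite lets a bare column ride along with MAX(); standard SQL would need
# a self-join or window function to pick the hash from the max-version row.
rows = db.execute("""
    SELECT chunk_order_number, hash, MAX(version)
    FROM file_version_chunks
    WHERE file_id = 1
    GROUP BY chunk_order_number
    ORDER BY chunk_order_number""").fetchall()
print(rows)  # [(1, 'ghi', 2), (2, 'def', 1)] -> download ghi, then def
```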
@akibali7123 1 month ago
This is good.
@sudhadevi9363 6 months ago
Great video. Thanks for a clear explanation. So in your design we are assuming the client will take care of the splitting job and send the chunks to the server. I agree that this helps with network bandwidth, but the downside is that the splitting logic would have to be coded for different devices, which adds some complexity. Would you agree with this? And when the response is pushed to the queue, is there some kind of webhook or consumer that processes that message?
@jordanhasnolife5163 6 months ago
To be honest, I'm not sure why you feel the splitting logic would have to be coded per device - there are more or less existing libraries that can do this type of thing for us! Secondly, at least assuming we're using Kafka, it uses long polling under the hood, but basically, yeah, we'd have to be using some sort of real-time communication protocol. Thanks for watching!
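For what that client-side splitting might look like, a stdlib-only sketch (4 MB chunks and SHA-256 as the chunk hash are both assumptions):

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB per chunk, matching the video's estimate

def chunk_hashes(path: str) -> list[str]:
    """Split a file into fixed-size chunks and return each chunk's SHA-256.
    Unchanged chunks hash identically, so only new hashes need uploading."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes
```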
@amitdubey9201 11 months ago
Thanks a lot
@idobleicher 11 months ago
Keep up!
@tanvirkekan6111 4 months ago
When a file is divided into multiple chunks and these chunks are uploaded to cloud storage (e.g. S3), will this generate a unique S3 URL for each chunk?
@jordanhasnolife5163 4 months ago
yep
@SmokinBear199 1 year ago
@11:19 you mention the FileUsers table as a one:many relationship; I think it would be a many:many relationship. One file can have many users (each with its own defined permission) and one user can be a part of many files?
@jordanhasnolife5163 1 year ago
I think we agree here, hence the use of userId and fileId
@eudaimonian9473 2 years ago
Great video!! Btw which team are you going to? I'm talking to the hiring manager for Google for Education
@jordanhasnolife5163 2 years ago
Some marketing analytics team, I don't really even know the specifics yet lol
@yatharthv 9 months ago
Hi Jordan, seems like you could also share some details on how file upload works from browser to server, and whether we should use websockets for that or not. Maybe you can make a separate video on that.
@jordanhasnolife5163 9 months ago
Definitely don't think you need websockets for it as nothing about this needs to be particularly realtime - look up FTP on wikipedia :)
@mcee311 7 months ago
What if a user needs to resynchronize all their files after being offline for a prolonged period, such as a month? Would the client keep track of this on its own and make separate requests to the server to download the latest version of each file?
@jordanhasnolife5163 7 months ago
I think that's a very reasonable optimization as opposed to having to get all incremental changes.
@SmokinBear199 1 year ago
Thanks so much for these videos, love the depth! Two questions:
1) @11:30 For the FileVersionChunks table you are storing a chunkHash, which is the PK of the Chunks table. I feel dumb for asking, but why not just use a normal PK? The hashing algorithm takes the entire chunk data as input, so it's not really saving time to use this hash as opposed to comparing the whole contents. For example, we have file Jordan.pdf with 5 chunks and chunk #5 changes - how do we know this? We compare old chunk 1 with new chunk 1, etc., and see that only #5 is different. With hashing, we still have to take the new chunk data as input. I guess we don't have to go through the old chunk's data again, but is that really the only reason?
2) Another possibly dumb question: if we're using single-leader replication for this, what happens when 2 users of the same file are on opposite ends of the globe? I'm guessing you'd say that it is what it is and we have to suffer the high latency to ensure strong consistency, but I'm wondering if we can do better.
@jordanhasnolife5163 1 year ago
1) I'm using the fileId as the primary key, not the chunk hash. That allows us to keep all chunks for a file adjacent to each other in memory. Using a hash allows us to perform much of the computation on our local machine and then send far less data to the DB.
2) We could definitely have a leaderless replication setup; however, the penalty for write conflicts here is pretty large. We generally want to make sure people see the newest version of a file when they read it. Hope this makes sense!
@SmokinBear199 1 year ago
​@@jordanhasnolife5163 1) Sorry, I meant chunkHash is the PK of the Chunks table. For that, why not use a normal int PK? With your design, we're asynchronously sending actual file contents data to S3 which doesn't require a hash. The only time we need it is to update our metadata db, and the only reason we need it is to know if a file has changed. Aren't we doing that in the client before anyways? Also thanks, you're the man!
@jordanhasnolife5163 1 year ago
@@SmokinBear199 Yeah I think this is a fair point, assuming you're capable of generating the hash on your client machine :)
@harishchava1443 1 year ago
How is the order of chunks for a single file per version tracked in the DB schema?
@jordanhasnolife5163 1 year ago
You can just have a field called chunkNumber - shouldn't be too hard I don't think
@nishankdas1895 1 year ago
Hi Jordan, your point on partitioning the response queues based on fileId is still confusing to me, because I can't seem to understand how it improves upon the Grokking the System Design solution. If we go by your approach, I feel the problem of pushing changes of a file that has been shared with 10,000 users at a time still exists, because either way the server needs to push the changes to the sockets.
@jordanhasnolife5163 1 year ago
While you are correct that either way every user will have to receive the change, offloading the change to a queue based on fileId, as opposed to many queues based on userId, will (at least in my opinion) reduce the amount of work being done by our central server and shift the load to the message queues, which are then responsible for delivering messages to clients. I think you can make an argument for both, as you also wouldn't want an individual client to have to maintain too many websocket connections.
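Concretely, "partition by fileId" just means keying the produce call on the file id, so every event for one file lands in the same partition and stays ordered (a kafka-python sketch; the topic name is made up):

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

def publish_file_change(file_id: str, change: dict) -> None:
    """Messages with the same key hash to the same partition, so all
    subscribers of a file read its changes in order from one place."""
    producer.send("file-changes", key=file_id.encode(), value=change)
```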
@ziggyzhang4156 1 year ago
@@jordanhasnolife5163 Seems we're talking about two different things. Each user will have to have their own "channel" (or consumer group, in Kafka terms) to receive their copy of the message, AND they will also have to receive ALL file updates they subscribe to. When you say "partition by fileId", I take it to mean one user may need to subscribe to multiple message queues containing the files they care about as topics - in which case it's not that different from Grokking: you still have ALL users subscribing to these same queues and the same fileId topics as separate consumer groups. "Partition by userId" would be weird, but I don't think that's what Grokking means; essentially, each file change results in a change stream published to whichever queue the file is supposed to go to, and then different consumer groups (or Kinesis fan-out streams) pick those changes up. Grokking brushes past it by saying "separate response queues", but that could just be talking about the consumer fan-out part. Other resources build upon that and say "broadcasting", which adds further confusion. Is that a fair understanding? This understanding also assumes we're not setting up one queue/stream per file either - that seems like it would be very wasteful. When you talked about a user maintaining 200 websocket connections, it would appear you think every file gets a separate queue? On the other hand, if we don't do that, then partitioning which files go into which queues - so that consumers listen to as few queues as possible and don't listen to queues full of files they aren't interested in - also seems tricky.
@neek6327 2 years ago
I tried to come up with my own solution before watching your video and wanted to get your opinion. What if any client making modifications to a file makes a websocket connection request to a "somewhat sophisticated" load balancer/reverse proxy that directs their request to a server handling changes for that specific file (a single server can be in charge of many files)? So all websocket connections for a given fileId are maintained/handled by a single machine. The clients can send modifications for a specific chunk (insert char at index 10 of chunk 3) via the websocket connection to the dedicated server. The server keeps chunks being modified in memory (LRU; also, I think chunk size should be way smaller than 4MB - I'm thinking KBs). The server only ever uses one thread at any given time for a specific chunk id, so there are never any concurrent modifications to a single chunk. As changes are made to a given in-memory chunk, the newer versions of that chunk are pushed to the database and the changes are also propagated to all websocket connections corresponding to the fileId. Maybe I am over-engineering, but I just thought the idea of a file being modified being assigned to a designated server was interesting. I'm thinking a coordination service could make this happen in a fault-tolerant way.
@jordanhasnolife5163 2 years ago
First of all, that's a great idea to come up with your own solution before watching! What you're describing is more like a realtime text editing platform, and you've basically just spoken about operational transform, where all edits to a file are routed to a given server and then ordered and basically merged together using some sort of algorithm. It's definitely possible, but it assumes that one server can also handle all the load for a given doc, and I also think this strategy may be over-engineering a bit for this problem - but very relevant for something like Google Docs, which I will eventually cover!
@jordanhasnolife5163 2 years ago
There's also the question of what happens to clients who have edited a doc locally when the websocket pushes a new version of the document to them. Do their local changes just get thrown out? Do they have to perform a merge operation?
@neek6327 2 years ago
Gotcha, makes sense. Thanks for the feedback! I think I got ahead of myself and started designing without knowing all the requirements first. Fatal mistake! 😵 Now that I think of it, I would be shocked if I was ever asked to design a real-time text editing platform in a system design interview but if I am I can hit them with something like this. Thanks for the videos, man. This channel has helped me a lot!
@jordanhasnolife5163 2 years ago
@@neek6327 haha all good glad to help and happy to discuss in more depth! I have heard of people asking for Google docs before so I'll do that one soon enough! Plus it's a cool topic
@iambao1940 8 months ago
What technology do you use for the log-based message queues? Thank you.
@jordanhasnolife5163 8 months ago
Kafka
@iambao1940 8 months ago
@@jordanhasnolife5163 I did not expect you to reply this fast, thanks again. To my understanding, each partition will contain files of different users. How do you handle security here (preventing a client from reading file changes it does not have permission to see)?
@jordanhasnolife5163 8 months ago
@@iambao1940 I think something like encryption might solve this
@iambao1940 8 months ago
@@jordanhasnolife5163 By using encryption, you mean the client can still receive file changes it is not expected to receive (files of other users) but cannot parse the content? I think that wastes unnecessary network bandwidth. I am expecting some kind of filtering whereby the client only receives the files it has permission for.
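One sketch of that filtering, keeping a fan-out server between the queue and the clients so each client only ever receives events for files it can read (the permission check and connection map are hypothetical):

```python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "file-changes",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v),
)

def fan_out(connections: dict, can_read) -> None:
    """Forward each change only to connected users that pass the
    permission check; clients never see other users' file events."""
    for msg in consumer:
        file_id = msg.key.decode()
        for user_id, ws in connections.items():
            if can_read(user_id, file_id):
                ws.send(json.dumps(msg.value))
```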
@markblum4854 2 years ago
@20:47 One queue per file effectively means a ~65k unique file limit. The server handling the websocket for client #1 can't have more than ~65k connections to that particular client. Not a big deal for the average Joe, but consider a document-heavy user like a lawyer's or doctor's office - I think it's absolutely reasonable to accumulate more than 65k documents over time. Can probably be solved via no notifications on old documents except via email or something. Just something to consider!
@jordanhasnolife5163 2 years ago
I think there's a misunderstanding between us here. It's not actually necessarily a queue per file, but rather the fileId is partitioned by hash range into queues. That being said, I still don't see why that would limit unique files - we can have many partitioned queues, and any server can push to any queue. Can you elaborate a bit more please on why there is a 65k limit? If anything, it is a limit of 65k clients that can connect to a given queue server, which should be ok assuming there aren't more than 65k people concurrently trying to access a document, and even if there were the queues can be replicated.
@markblum4854 2 years ago
@@jordanhasnolife5163 Ah, I got you - and the 65k limit is per client IP to server port. So one client (on one IP; clients can have multiple IPs!) can have 65k connections to one server port.
@markblum4854 2 years ago
As above, client #2 with a different IP can have its own 65k connections to the same server at the same time.
@jordanhasnolife5163 2 years ago
@@markblum4854 understood, just confused how that limits the entire service to 65k files, I appreciate the input!
@ziggyzhang4156 1 year ago
@@jordanhasnolife5163 Oh, because you said if a user has 200 files they will have 200 socket connections. I guess what you meant is that "at most" you will have 200 connections, but in reality you can have fewer, because your files should be designed to go into one queue (and pushing that to the extreme, it will be as if that queue IS catered just to that user). I wonder whether in the real world it's feasible to publish one file per queue for all the files on Google Drive or Dropbox, or whether that's actually not an issue.
@majidshaikh5913 6 months ago
why you aaarrrrre taaaaakinnn like ttttttisss aaaaaaaa?? Nice video bro ;)
@jordanhasnolife5163 6 months ago
Yeah, I really used to talk slower in the older videos for whatever reason
@maxvettel7337 5 months ago
Why does he wear a cap at home?
@jordanhasnolife5163 5 months ago
His hair was gross
@John-nhoJ 7 months ago
@jordanhasnolife5163 what if the storage has a limit?
@jordanhasnolife5163 7 months ago
Partitioning partitioning partitioning
@John-nhoJ 7 months ago
@@jordanhasnolife5163 splitting the file doesn't really work, does it? If I'm uploading a 10 GB file and I have 9 GB left and the chunks are coming in 4 MB at a time... do you just backdelete the other chunks?