How GitHub's Database Self-Destructed in 43 Seconds

907,434 views

Kevin Fang

1 day ago

A brief maintenance accident turns for the worse as GitHub's database automatically fails over and breaks the website.
Sources:
github.blog/2018-10-30-oct21-...
github.blog/2016-12-08-orches...
github.blog/2018-06-20-mysql-...
news.ycombinator.com/item?id=...
/ github_major_service_o...
hub.packtpub.com/github-down-...
github.blog/2017-10-12-evolut...
Chapters:
0:00 Part 1: Intro
1:25 Part 2: GitHub's database explained
3:40 Part 3: The 43 seconds
5:04 Part 4: Fail back or not?
6:54 Part 5: Recovery process
10:32 Part 6: Aftermath
Notes:
- Funnily enough, in this blog post from 4 months prior to the incident (github.blog/2018-06-20-mysql-...), they specifically explained how cross-data-center failovers could be carried out successfully
Music:
- Hitman by Kevin MacLeod
- Blue Mood by Robert Munzinger
- Pixelland by Kevin MacLeod
- Dumb as a Box by Dan Lebowitz

Comments: 793
@sollybunn 11 months ago
"We can't delete user data, we aren't gitlab" This video is a goldmine
@robloxboxertblocked 11 months ago
gitlab*
@sollybunn 11 months ago
@@robloxboxertblocked oopsies
@YashendraShuklaTheOG 11 months ago
I literally choked on my breakfast.
@pratikkore7947 11 months ago
sounds like I missed something, can I have some keywords to look up?
@yungifez 11 months ago
Haha, I saw that golden statement
@DanielS-cu2ic 11 months ago
The assumption that 50% of total github users are active is too optimistic
@Backtrack3332 11 months ago
Yea, I'm guessing 2% max
@FiksIIanzO 11 months ago
It's good to grossly overestimate potential issues
@KaidenBird 11 months ago
As someone who hasn't pushed in weeks, that hurts, but is too true.
@lightning_11 11 months ago
@@Backtrack3332 That's still a lot, though!
@RMDragon3 11 months ago
Yeah, those assumptions seem very off to me. I feel like less than 50% of GitHub users are active daily, between abandoned accounts and people who rarely use it. On top of that, a significant percentage of users are students or personal projects with no real monetary impact. Also, most users likely didn't lose anywhere near 2 hours, especially because the website wasn't fully down for anywhere close to those 24 hours. I'm sure it didn't work great during that time, but it was usable. If it happened to me, I would likely test for 5 minutes, check with colleagues, and just work locally, testing every hour or so. Some people may have been affected more, but 2 hours of lost productivity seems way too high to me. With that in mind, the estimate would likely be a few orders of magnitude lower.
@RichieYT 11 months ago
These problems always occur during routine maintenance. That's why I don't do any maintenance whatsoever and my systems have never experienced downtime (although I've never checked)
@nicholasfinch4087 11 months ago
can't have a problem if you don't see a problem
@kurdtpage 11 months ago
This is the way
@zsoltsz2323 11 months ago
Even Chernobyl was routine maintenance.
@PieJee1 11 months ago
That leaves your system full of security exploits, since security issues never get patched either. You'll also face a huge problem if you're ever forced to update from versions that are too old
@elle9834 11 months ago
Out of sight out of mind
@Justin-jm2fd 11 months ago
As a former bitbucket employee I can confirm we have disaster recovery plans for a lunar data center outage
@KangJangkrik 11 months ago
Now what?
@fatrobin72 11 months ago
Last I checked it was a disaster plan, there was no recovery...
@DaveParr 11 months ago
I'd assume you would use IPFS.
@jaythecoderx4623 11 months ago
@@DaveParr Those have a lot of latency tho, don't they?
@siliconcassettes3369 11 months ago
As a time traveller from the future I can confirm the recovery plans are insufficient and the situation becomes irrecoverable
@axelboberg 11 months ago
Interplanetary failovers are a struggle, not gonna lie.
@__dm__ 11 months ago
IPFS is (was?) a project designed with interplanetary, high-latency connections in mind, using Merkle DAG data structures for, well, unstructured object data. It got adopted by the crypto crowd because of memes, and idk where it's going
@philip3963 11 months ago
@@__dm__ I work with IT solutions and I swear I've seen IPFS support in the industry before, just can't remember where
@ExEBoss 11 months ago
@@philip3963 *Cloudflare* says they have support for it.
@muhammadyusoffjamaluddin 11 months ago
PHP Devs: YOU THINK SOO??????
@LinhNguyen-zg9kn 11 months ago
bruh, they had the option to roll back 40 mins of writes on the promoted DB and sync both DBs. They pretty much fucked themselves in the ass tbh
@kalebbruwer 11 months ago
It's bold to assume that a) 50% of Github users are active on any given day b) Their time is worth an average of $50/hr c) Not syncing with remote for one day would affect the average user
@mews75 3 months ago
That's what I was thinking lol
@opfipip3711 3 months ago
yeah, one of the great things about git is that it's trivial to set up a new remote, and no problem at all to code for weeks without an internet connection. I'd say GitHub could be up only ~20% of the time without that having a strong (financial) impact on most of the projects hosted there. Would piss off lots of devs, tho.
@Ignacio_DB 2 months ago
I'm no IT guy, but 40 mins of lost data is a better sacrifice than hours of slowness. Couldn't they just freeze the west DB, see what was different, transfer it, and boom, everything solved?
@mennoltvanalten7260 26 days ago
I push maybe 3 times a week... but I'm basically using GitHub as a backup for some personal projects. So long as my computer survives I can handle not pushing for a few days
@ericlizama8552 11 months ago
Honestly I'm impressed that Bitbucket was able to lower the Earth-Mars latency down to 60 milliseconds.
@Fenhum 11 months ago
they must've found a cheap way to build those Einstein-Rosen bridges, ey?
@wesleyeberly228 10 months ago
@@Fenhum something akin to hyperpulse relays from BattleTech
@shippo72 8 months ago
@@mikicerise6250 Ansible is instantaneous, no matter the distance. It even allows you to communicate both upstream and downstream of your current dimensional position.
@AR-yd2nd 2 months ago
Faster than light bitbucket
@edhahaz 11 months ago
imagine being github and being unable to... MERGE two databases
@littleloner1159 11 months ago
It's GitHub. Didn't they delete their whole code like twice?
@joelpww 11 months ago
​@@littleloner1159 might be thinking of gitlab
@casev799 11 months ago
Yeah, but you'd expect them to learn at some point. They have their whole library of users that could help too....
@ko-Daegu 11 months ago
@@casev799 typical YT reply: everything is easy in their eyes, yet they've accomplished nothing
@Paulo27 11 months ago
git push --force -----FORCE ----------FOOOOOOOOORCEEEEEPLEEEEEAAAAASSSSEEEEEE
@riddixdan5572 11 months ago
What a goldmine of a channel. I'm here with you all, witnessing the birth of a great channel
@CoryKing 11 months ago
I worked at a website that handles millions of write transactions per day across like 7 global data centers. We were starting to think of a way to drop into a “read only” mode in the event something like this happened. Then we wouldn’t need to paw through the mess of uncommitted transactions…
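A read-only degraded mode sidesteps exactly the split-brain reconciliation this incident required. A minimal sketch of such a switch, assuming a MySQL 5.7+ primary with a hypothetical hostname:
```sh
# super_read_only rejects writes even from superuser accounts, so the app
# degrades to serving reads instead of piling up writes that would later
# need manual reconciliation.
mysql -h db-primary -e "SET GLOBAL super_read_only = ON;"
# ... fail over, investigate, reconcile ...
mysql -h db-primary -e "SET GLOBAL super_read_only = OFF;"  # resume writes
```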
@KF-zb6gi 10 months ago
that actually sounds good
@xpusostomos 6 months ago
@@KF-zb6gi sure it's good... if this is the rare website where it even makes sense to be read-only
@GeorgeTsiros 3 months ago
when you say millions of transactions per day, is there something difficult about these? I mean, even if you do 100 million per day, that's on the order of 1k transactions per second, that's reasonable, yes?
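For what it's worth, the arithmetic in that question checks out:
```sh
# 100 million transactions spread evenly over a day (86,400 seconds):
echo $(( 100000000 / 86400 ))   # => 1157, i.e. roughly 1.2k transactions/second
```
(Though real traffic is bursty, so peak load can be several times that average.)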
@xpusostomos 3 months ago
@@GeorgeTsiros the difficult part, if you watched the video, is reconciling conflicting changes
@manzenshaaegis8783 11 months ago
This is one of those things where, in hindsight, it's so easy to see how they set themselves up for failure. But I bet you a lot of brilliant people looked at this and still did not see the issue until it (inevitably) blew up. It do be like that sometimes...
@simonsomething2620 11 months ago
probably more along the lines of politics and "we'll do it later"
@christianbarnay2499 11 months ago
I know at least one org that can't afford that kind of failure. Their standard operating procedure is to force the primary switch on a regular basis: every 2 or 3 months they power off all primary servers and check that all secondaries have promoted and are now fully operating as primaries with no data loss. Then they restart the old primaries, which become the new secondaries. It covers all possible kinds of primary failure.
This is also used for the upgrade procedure. Whenever you need to upgrade a server, you upgrade the secondary first, do some offline tests, then promote it to primary, keep the old primary/new secondary ready with the old version for a few days in case a rollback is needed, and finally update it.
The first time I saw that choice of making the failover procedure an integral part of normal operations, I thought it was genius. When you have an incident, you don't need to panic and look up exceptional procedures you're not familiar with; you just change the schedule of the regular routine. And if needed, you can do forensics on the system you just took offline while users work on, unaffected by the incident.
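A rough sketch of one such planned swap on a MySQL primary/secondary pair, with hypothetical hostnames and GTID replication assumed (a real runbook would wrap every step in health checks):
```sh
# 1. Stop taking writes on the current primary; let the secondary catch up.
mysql -h db1 -e "SET GLOBAL super_read_only = ON;"
mysql -h db2 -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master   # wait for 0
# 2. Promote the secondary.
mysql -h db2 -e "STOP SLAVE; RESET SLAVE ALL; SET GLOBAL super_read_only = OFF;"
# 3. Demote the old primary to a replica of the new one.
mysql -h db1 -e "CHANGE MASTER TO MASTER_HOST='db2', MASTER_AUTO_POSITION=1; START SLAVE;"
# 4. Repoint application traffic (VIP/DNS/proxy) at db2, then verify writes replicate back to db1.
```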
@travcollier 11 months ago
@@christianbarnay2499 Good idea. Of course, it is also expensive AF. Robustness always costs short term efficiency.
@smugfaced 10 months ago
it really do be
@checker297 10 months ago
@@christianbarnay2499 everyone can have this kind of failure, it's just a matter of degree. It isn't in normal situations that you get pressured as an engineer; it's when shit is on fire and all the plans that depended on something you assumed would keep working suddenly force you to pull a rabbit out of your arse.
@0tiii 10 months ago
dude almost sounds like fireship
@dybdab 11 months ago
One of the greatest "history" channels on YouTube, love the content.
@l-l 11 months ago
Absolutely
@namansoood 11 months ago
Internet Historian: 👀
@JohnAlbertRigali 5 months ago
Considering the scope of the GitHub disaster, recovery within 30 hours seems very impressive to me. I've had to engineer recoveries from much smaller disasters, and every one of them took me at least 48 hours if I remember correctly.
@Geolaminar 11 months ago
Well, it could have been worse. The automated lunar relay launch could have been misconfigured such that it did not alert US STRATCOM, and therefore appeared to be a ballistic missile launch against a domestic target, which would immediately lead to global thermonuclear war due to improper database failover configuration.
@MrLastlived 10 months ago
I swear to god if all of humanity gets wiped out over a stupid accident and not because of a grand painstaking political catastrophe I'ma be real disappointed in hell.
@mattheholic2 10 months ago
​@@MrLastlivedThat was close to happening multiple times over the course of history. It's a miracle we haven't already done that.
@thebeber2546 11 months ago
The ending was hilarious. Great video overall.
@ccthomas 11 months ago
When the east coast database recovered and started accepting writes again from applications, they dodged the very common bullet of those apps pushing work at the database as fast as they can and overwhelming it, causing a second wave of outage. In this case, it looks like the controls over the work rate (whether implicit in the nature and scale of the apps, or an explicit mechanism) were sufficient to prevent that.
@rajarshichattopadhyay1728 9 months ago
I love how in the last 30 sec, Kevin was not only able to explain how an interplanetary network would work but also how a random command would blow everything up in exactly 30 sec 😆
@LolWutMikehSM 11 months ago
That interplanetary loop was good
@rigell2764 11 months ago
These graphics make me laugh. 1, 2, 4, 5, red among us guy, purple among us guy, pizza, 8 ball 😂. Also the Ace Attorney part was great.
@kuroodo_ 11 months ago
The explosion at the end threw me into tears lol
@Hopgop1 11 months ago
I love these videos. I work in IT, but for a much smaller national company, so these are really interesting to learn lessons from. Plus the editing and storytelling make it very entertaining.
@IroAppe 11 months ago
This was definitely not a failure. I've seen other videos where they did everything wrong they could. In this case, under the circumstances, they did exactly what they had to do, except for those few discussing prioritizing uptime over data consistency, which is a no-no. It's good that the right engineers prevailed. A laggy service is just so much better than a nightmare collapse or a massive inconsistency nightmare that will plague customers all over for weeks. I get that they're paid for uptime and fluidity of the service, but in a case that is equivalent to a survival situation, you have to prioritize. Worrying about a laggy East Coast service is then equivalent to complaining about the lack of ice cream in an apocalypse scenario.
In fact, I see this as a huge win! How many times have hasty measures, treating the superficial symptoms as fast as possible when they were merely an extension of the underlying real problem, led to a full-scale disaster? For once, there were people thinking critically before doing something, treating the core of the problem.
@Penfolduk001 9 months ago
The worry here was that they had to spend the time coming up with the plan to respond. Whilst I realise you can't plan for every contingency, cross-hub failure like this should have already been considered and planned for. From the video this doesn't appear to have been the case. Guess they were lucky the initial fault didn't last more than the 43 seconds.
@xpusostomos 6 months ago
Nobody was arguing for inconsistency. The argument was getting back up fast vs losing 40 minutes of changes
@leaffinite3828 6 months ago
@xpusostomos losing 40 minutes of changes is, I think, the inconsistency in question
@xpusostomos 6 months ago
@@leaffinite3828 that's not a data inconsistency
@leaffinite3828 6 months ago
@@xpusostomos why don't you define the term then, get us on even ground
@fairlyfactual451 11 months ago
This is why you should always practice regional failovers of your cloud architecture and make doing so a mandatory company event (or even a random one).
@alexischicoine2072 9 months ago
My company practices that once a year I believe. I had a senior colleague take part in it.
@majesticcok 11 months ago
I love these videos, but as a DevOps Engineer I get anxious if I watch too many in a short period of time :)
@acoolnameemm 9 months ago
This video is full of explosions and memes but in a tempered manner and it hits all the nerves in my brain. I need more videos like this.
@kriterer 11 months ago
$50 an hour is a wild overstatement
@XxBuzzkill77xX 11 months ago
This content is incredible! Really has me thinking about some of my architecture and how to think about planning infrastructure going forward, keep up the awesome work!
@hchris96 11 months ago
Thank you! This was perfect. I love this. And the amount of explosions is tasteful and not overdone
@radiosification 11 months ago
I love these incident analysis videos. Please keep making more!
@gleep23 11 months ago
I like how you turned this technical issue into an enjoyable story. Great storytelling skill.
@LemonGingerHoney 10 months ago
I felt their pain. What a fantastic job on the recovery and post mortem.
@jermunitz3020 11 months ago
Nice editing Kevin. Really looking forward to the next one.
@jure. 11 months ago
I love your videos so much. They're so informative, interesting, well-made and even funny. Keep it up!
@TheNivk1994 11 months ago
Please… more of these software disaster videos! The Facebook outage, etc.!! As a developer myself, it's somehow calming that even the big players fall into these "oh shit…" situations too! ❤️
@simonsomething2620 11 months ago
They're all humans and none of them conjure magic tricks. Usually they're using the same jazz us mortals are :D
@eantropix 10 months ago
Bro backing up data to Mars sounds so unbelievably awesome and impractical at the same time, I love it
@CubemasterXD 11 months ago
these videos are so underrated, and the (visual) humor keeps getting better and better
@CoryKing 11 months ago
These videos are hilarious! I look forward to more! It's like the Darknet Diaries podcast but different and super funny. Good stuff! I watched all of these and am disappointed there isn't more to binge watch. I hope you keep this format; it's an excellent concept for a YouTube channel!
@kiro_f 11 months ago
Can't wait for another video. I just kinda wanna go on a binge watch of them, but there aren't that many. Hopefully in the future though :)
@IceTank 11 months ago
The editing is on point. Very nice video.
@kanal7523 11 months ago
I love the animations and goofiness, pls never stop making these videos
@VaraNiN 11 months ago
This channel's gonna be big soon with these high-quality vids and the algorithm starting to push 'em
@vikaskrishnan4018 10 months ago
I loved the whole breakdown of the issue GitHub faced, but it's the last 30 seconds of the video that gained you a sub! Keep up the crisp K.I.S.S. explanations and subtle humour combined with accurate images and editing!
@mr_darkeye 11 months ago
always nice to see a new video from you
@elatedemu 8 months ago
Your visuals are probably the best and most entertaining I've ever seen
@ForcefighterX2 11 months ago
2nd video from your channel. Realized it's awesome. You've got a new subscriber, bro!
@MaNameizJeff 10 months ago
I am loving your videos so much. You describe exactly how these internet exploits are done in the most entertaining way. Even someone who only knows the basics like myself can follow along and understand.
@EdwardChan.999 11 months ago
I hate dealing with databases, but watching your database stories is a pleasure 👍🏻
@jetardeshna3449 10 months ago
Deleting servers? No, on this channel we nuke them. Instant subscription.
@arcaneblackwood3602 11 months ago
The humor in this video is 120%. We need news anchors like you in this world.
@miklov 11 months ago
Fascinating. Love the bit at the end too! Thank you.
@ellieban 9 months ago
“They expected X to follow a linear trajectory rather than the actually observed power function” can be applied to most of what’s wrong with humanity 🤣
@whynotanyting 11 months ago
"For instance, how am I gonna stop some big mean Mother-Hubber from tearin' me a structurally superfluous data center?"
@Crocsx058 11 months ago
Man, your videos are so good, and it's so cool to see other companies' post mortems with the causes so well explained. Thanks
@AdroSlice 11 months ago
That last part is gold. Thank you so much.
@nickdaboss03 11 months ago
Loving these new documentary type videos!
@fir3cl4w 11 months ago
Love the Ace Attorney bit, keep up the good work ❤
@benbrist 11 months ago
"We're not GitLab" had me in stitches
@ironized 11 months ago
Found this video today. Please keep these up. I work in business resilience/crisis management and find this very helpful
@PolskaChild 10 months ago
Everything about the video was great lmao. The humor, the animations, and not stupidly complicated.
@druidshmooid 11 months ago
Loving the videos. Great content. Keep it coming.
@henkfinkers3931 11 months ago
I absolutely love this channel.
@JxH 11 months ago
We do have to admire the self-confidence of the system designers. They plunged right in, built a highly complex system, blissfully unaware of their own naïveté. Failure control is about 30x more complex than they had assumed.
@TheShnitzel 11 months ago
Another great video! Keep up the awesome work!
@kim15742 11 months ago
You are now one of my very favourite YouTubers! Great videos
@Pixelhurricane 11 months ago
your joke at the end about the Martian servers had me in tears, too real
11 months ago
I just found out about your channel, amazingly well put together videos
@beakt 11 months ago
Your background music and sound effects are very clever.
@Epausti 11 months ago
Love your stuff! Your channel will blow up
@JAMBUILDER08 10 months ago
This is a great example of what to do after a major IT issue: make plans to handle the situation better and more easily should it occur again.
@christianbarnay2499 11 months ago
GitHub is designed at its core to allow for loss of connectivity anywhere in the network. In this event they completely failed at handling the exact type of issue their system was designed to overcome seamlessly. As mentioned in the video, this should have resulted in a 43s downtime for the vast majority of clients, with only a handful of clients having to reconcile data by hand between the west and east coast centers.
The major problem is they clearly never tested the primary database loss scenario. They would have identified that they needed to replicate not only the database but the entire infrastructure to the west coast so it could still work during an east coast downtime. Or deactivate cross-country failover.
The second problem is they one-sidedly decided they had to reconcile all user data by themselves. Client data belongs to clients. You should never alter client data without full information and consent. Deciding to manually roll back and back up east coast commits was altering client data, and a big no-no.
The right course of action should be:
1. Inform clients that there is a potential discrepancy between servers and you are building a list of affected projects,
2. Let the system reconcile projects that have no issue at all (no commits during the downtime, or only west coast commits that can be pushed to the east with a fast-forward) and inform those clients that everything is fine for them and the system is back to normal operations,
3. Tell clients that need manual reconciliation that you propose the following plan: keep the branch with the most recent commit as is, and rename the conflicting branch as _ so they will have both accessible in the same repo and can reconcile their data as it suits them. Ask them to reply with their approval of the plan or a proposition for an alternate plan before some reasonable deadline, and give them contact info if they need help and/or advice.
That way, instead of going all in manipulating all clients' data, they would only need a small taskforce ready to help those that actually need it.
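Step 3 of that plan maps onto ordinary git commands. A sketch, with hypothetical remote and branch names:
```sh
# Preserve the orphaned east-coast history under a renamed branch,
# so both lines stay visible in the same repo.
git fetch east                                # remote still holding the lost writes
git branch master_east_20181021 east/master   # dated copy of the conflicting branch
git push origin master_east_20181021
# The project owner can now merge, rebase, or cherry-pick on their own schedule.
```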
@eekee6034 11 months ago
*Git* is designed to allow for loss of connectivity. Git*hub* was designed by the kind of crazies who jump on open source bandwagons.
@samuellourenco1050 10 months ago
One question about your point 3: how do you reconcile two divergent branches?
@christianbarnay2499 10 months ago
@@samuellourenco1050 There are tons of ways to do it. The simplest is git merge with manual resolution of conflicts. The most tedious is creating a new branch at the diverging point and cherry-picking from each side, then destroying both incomplete branches and renaming the new branch to the original name. The right strategy is up to each client depending on the state of their data and their own standards for repo cleanliness. Some will want to remove all traces of the incident. Others will consider it part of the project's life that should stay visible in the history.
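The tedious variant, sketched with hypothetical branch names:
```sh
# Rebuild from the divergence point, replaying each side's commits.
base=$(git merge-base master master_east)   # last commit both sides share
git switch -c reconciled "$base"
git cherry-pick "$base"..master             # replay one side...
git cherry-pick "$base"..master_east        # ...then the other, resolving conflicts
git branch -M reconciled master             # replace the old branch once satisfied
```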
@JohnSmith-fz1ih 9 months ago
Where did you get the notion that they altered client data? My understanding from watching this video is that they rolled back to a consistent state, then restored the two lots of data that ended up split over the two data centres, the result being all data restored. I'm not certain what users with data spread across both the east coast and west coast servers experienced. But your post reads to me as "I watched a 12-minute summary and now I think I know better than the staff that worked with the product every day".
@christianbarnay2499 9 months ago
@@JohnSmith-fz1ih In a history tool like Git, client data is not limited to the content of the latest commit. Client data is the entire tree with all branches, commit dates, comments and commit order. Dealing with conflicting data is an important decision, and the way you want the data to appear and be accessible after the resolution is a decision for the project owner. Each project owner will have a different approach to such a situation, and Git allows for all those approaches. The GitHub team making a single universal decision for all projects barred project owners from making their own decision on the matter.
What I say doesn't come from just watching a 12-minute video. It comes from using Git on a daily basis, including a few occasions on which I migrated entire projects from old-tech repos like CVS or SVN to Git. On some of those occasions I had to retrieve commits that were split over several repos and reconcile them using dates and comments. With the help of some low-level Git commands I could easily automate that process. That's why I am fully confident that Git has all the tools needed to let the GitHub team automatically rename conflicting branches, regroup everything in the master repo, replicate to all mirrors, and then let project owners do the merge the way they want, instead of forcing a single decision on everyone.
The main benefits of Git over all other versioning systems are its high resilience to conflicts and the possibility for project admins to do absolutely everything with their repo on any PC and push the result to the central repo. This incident was the perfect occasion to highlight those features and display complete transparency by rapidly giving control of the two branches of their repos to project owners.
@Froschkoenig751 11 months ago
Love the humor mixed with the animations and actually insightful content - you got a new subscriber with that video!
@shitshow_1 10 months ago
I've always wondered about inconsistency in divergent timelines and how engineers would handle it. Great video 👍
@teamwolfyta6511 11 months ago
That Bitbucket joke was the funniest thing I've heard in coding terms, Keep up the awesome stuff mate! 🤣
@RaphaelDDL 11 months ago
Thank you, YouTube algorithm, for suggesting this piece of art
@Gabriel-kl6bt 11 months ago
The thought of being amidst these people recovering from this kind of chaos gives me a stomachache.
@owenschwartz 11 months ago
Absolutely loving these videos.
@JohnnyMcMenamin 11 months ago
First time viewer here and recent subscriber. I enjoy your style of video editing and presentation.
@nealpan 11 months ago
Great info, Kevin. What software did you use to make/edit this video?
@kubajurka 10 months ago
I understood virtually nothing but still found the video absolutely exhilarating.
@TheOneAndOnlyMart 9 months ago
love your animation style
@juleswinnfield1437 11 months ago
This was such a cool video, always great when you learn things and don't realise it!
@jeffreyz4632 11 months ago
Love ur database videos, keep it up
@MmmmDatAss 5 months ago
Sounds like a major headache. One "oopsie" and all hell breaks loose.
@matthewschuster4600 11 months ago
That last 30 seconds or whatever just earned you a sub. Lmao.
@arthurritt3047 11 months ago
You made it so easy to understand, man. You're good
@sauwurabh 9 months ago
Kevin, this is some good shizzz. Watched the GitLab video first, then this one, and subscribed.
@Joelitop 1 month ago
This is great content, keep it up, you made my day brighter ❤
@Markyroson 5 months ago
I love the "until next time" segment at the end lolol
@Bozebo 11 months ago
I mean, cross-region issues are something you're meant to have tested disaster recovery for, and this is a really obvious point of failure they shouldn't have missed. That's the issue here, not necessarily an architecture problem itself.
@AccurateBurn 11 months ago
Explosions!?!?!? Another banger, dude, so entertaining. This is so funny: we've got HA, but also "failover is not a supported architecture."
@morswinpsiopsiol667 11 months ago
I love your content, man, keep it up, you are awesome! ^^
@theowinters6314 10 months ago
I think the biggest surprise in this was the fact that they ran daily tests of restoring from backup, when most companies only test that after they need it.
@x4exr 15 days ago
This video is packed with humor. It's so nonchalant, and that's funny to catch 😹 I enjoyed watching this video!!
@AlseyMiller 12 days ago
Loved seeing the Swift compiler PRs
@JoshSweetvale 6 months ago
GitHub: Civil War. The king seems to die, so the west coast crown prince declares himself king... and then the king shows back up.
@Caphalem 11 months ago
This channel is way too small for content this good
@lbgstzockt8493 11 months ago
The outro was hilarious, fully expect this to happen with the colonisation of the solar system.
@NickDoddTV 10 months ago
This video was worth it for the ending alone. But damn, what a day to be a software engineer at GitHub
@Ngethe_M 6 months ago
Great content. Found this channel and immediately subscribed
@violetwtf 11 months ago
you are my favorite channel, i love what you do