Dissecting GitHub Outage - Downtime due to ALTER TABLE






Can an ALTER TABLE command take down your production? 🤯

It happened to GitHub on 27th November 2021, when most of their services went down because of a large schema migration.

Why did the outage happen?

GitHub ran a schema migration on a massive MySQL table, and it made their read replicas hit deadlocks and crash. Here are five insights about their architecture:

Insight 1: Schema migration can take weeks to complete

Schema migrations are intense operations: in most cases they require copying the entire table into a new one with the updated schema. On a massive table, a migration can therefore take weeks to complete.
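To get a feel for the timescale, here is a back-of-the-envelope estimate with purely illustrative numbers; the row count and copy rate below are assumptions, not GitHub's figures:

```python
# Illustrative numbers only: why a copy-based schema migration on a
# very large table can run for weeks.

rows = 10_000_000_000            # hypothetical row count of the table being migrated
copy_rate_rows_per_sec = 5_000   # hypothetical copy rate, throttled to limit
                                 # replication lag and I/O pressure

seconds = rows / copy_rate_rows_per_sec
print(f"Estimated copy time: {seconds / 86_400:.1f} days")   # ~23 days for these numbers
```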

Insight 2: The last step of migration is RENAME

To avoid taking excessive locks during the migration, we create an empty ghost table from the main table, apply the schema change to it, copy the data over, and then rename the ghost table to replace the original. This keeps the locking required for the migration to a minimum.
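A minimal sketch of this ghost-table flow, in the spirit of online-migration tools like gh-ost, is below. The table name, columns, and the execute() callback are hypothetical stand-ins for a real database client:

```python
def ghost_table_migration(execute, max_id, chunk=1000):
    # 1. Create an empty "ghost" table and apply the new schema to it.
    execute("CREATE TABLE repositories_gho LIKE repositories")
    execute("ALTER TABLE repositories_gho ADD COLUMN visibility VARCHAR(16)")

    # 2. Backfill in small chunks so no long-lived lock is held on the
    #    original table; ongoing writes must also be applied to the ghost
    #    table (gh-ost does this by tailing the binlog).
    for lo in range(0, max_id, chunk):
        execute(
            "INSERT INTO repositories_gho (id, name, visibility) "
            "SELECT id, name, NULL FROM repositories "
            f"WHERE id > {lo} AND id <= {lo + chunk}"
        )

    # 3. The only blocking step: an atomic rename swaps the tables at the end.
    execute("RENAME TABLE repositories TO repositories_old, "
            "repositories_gho TO repositories")
```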

Insight 3: Read Replicas can have deadlocks

The writes applied by the replication job and the production read load can contend for the same locks and create deadlocks on the read replicas.
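Applications commonly handle this by retrying reads that lose a deadlock. A minimal sketch, assuming a driver that surfaces MySQL's deadlock error (code 1213) as an exception; DeadlockError here is a stand-in for that driver-specific type:

```python
import time

class DeadlockError(Exception):
    """Stand-in for the driver-specific exception raised on MySQL error 1213."""

def run_with_retry(run_query, sql, attempts=3):
    # Retry a read that lost a deadlock on the replica, backing off each time.
    for attempt in range(attempts):
        try:
            return run_query(sql)
        except DeadlockError:
            if attempt == attempts - 1:
                raise
            time.sleep(0.1 * 2 ** attempt)
```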

Insight 4: Have a separate fleet of replicas for internal traffic

Have a separate fleet of read replicas for internal workloads, e.g., analytics. This way, internal load does not affect production traffic.
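A minimal sketch of what such a split can look like at the query-routing layer; the pool names and hostnames are made up for illustration:

```python
import random

# Two separate replica fleets; reads are routed by workload so heavy
# internal queries never slow down user-facing traffic.
REPLICA_POOLS = {
    "production": ["mysql-ro-1", "mysql-ro-2", "mysql-ro-3"],
    "internal":   ["mysql-analytics-1", "mysql-analytics-2"],
}

def pick_replica(workload: str) -> str:
    pool = REPLICA_POOLS["internal" if workload == "internal" else "production"]
    return random.choice(pool)
```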

Insight 5: Database failures cascade

When a replica fails, the load on the healthy ones increases, which may bring them down as well; hence the cascading effect.
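The arithmetic behind the cascade, with purely illustrative numbers for traffic and per-replica capacity:

```python
# Illustrative numbers only: how per-replica load grows as replicas crash.
total_qps = 90_000     # hypothetical read traffic
capacity  = 25_000     # hypothetical safe QPS per replica

for healthy in (5, 4, 3, 2):
    per_replica = total_qps / healthy
    status = "OK" if per_replica <= capacity else "overloaded -> likely to crash too"
    print(f"{healthy} healthy replicas: {per_replica:,.0f} QPS each ({status})")
```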

Mitigation

The outage happened because there were not enough healthy read replicas to handle the load, so the way to mitigate it was to add more read replicas. The GitHub team very smartly promoted the replicas used for internal workloads to serve production traffic.

Although the move was smart, it did not mitigate the outage: the incoming load was so high that the newly added replicas also started crashing.

Data Integrity over Availability

To ensure that data integrity was not compromised by repeated crashes, GitHub took a call and let some read traffic fail. They took each replica out of the production fleet, gave it time to complete the migration, and then added it back.

This way, all the replicas got the time they needed to complete the schema migration. It took some time, but the issue was completely mitigated.
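One way such a rolling recovery could be scripted is sketched below; the helpers for pool membership and migration status are hypothetical stand-ins for whatever load-balancer or orchestration API is in use:

```python
import time

def recover_fleet(replicas, remove_from_pool, migration_finished, add_to_pool):
    # Rolling recovery: each replica is drained, allowed to finish the
    # schema migration in peace, and only then put back behind the load balancer.
    for replica in replicas:
        remove_from_pool(replica)            # stop sending it read traffic
        while not migration_finished(replica):
            time.sleep(60)                   # let the migration complete undisturbed
        add_to_pool(replica)                 # serve reads from it again
```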

Long-term fix

Vertical partitioning is the long-term fix for this problem. The idea is to create smaller databases that hold related tables; e.g., all tables related to repositories can go in one database. Migrations on these smaller databases complete quickly, and during an outage only the functionality backed by the affected database is impacted.
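A minimal sketch of how an application might route tables to domain-specific databases after such a split; the domain names and connection strings are hypothetical:

```python
# Vertical partitioning: related tables live in smaller, domain-specific
# databases, so a long migration in one domain cannot take the others down.
DOMAIN_DATABASES = {
    "repositories": "mysql://repos-cluster/repos",
    "issues":       "mysql://issues-cluster/issues",
    "users":        "mysql://users-cluster/users",
}

TABLE_TO_DOMAIN = {
    "repositories": "repositories",
    "repository_labels": "repositories",
    "issues": "issues",
    "issue_comments": "issues",
    "users": "users",
}

def database_for(table: str) -> str:
    # e.g. all repository-related tables resolve to the repositories cluster.
    return DOMAIN_DATABASES[TABLE_TO_DOMAIN[table]]
```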

