Dissecting GitHub Outage - Downtime they thought was avoided






GitHub thought they had avoided an outage by fixing a possible root cause six months in advance, but fate had different plans.

Check Suites and Workflows

When we push changes to a GitHub repository, a set of checks runs. We can see them on our Pull Request, and they prevent us from merging the PR until all of them pass. We can also add our own custom checks to the workflow.

An entry is made in the database for every execution of a check suite. Given how frequently checks run across GitHub, this leads to heavy ingestion into the table. A side effect is that the auto-incrementing ID column, which is typically a 32-bit integer, would eventually exhaust its range, causing writes to fail.
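
To see why this is a practical risk rather than a theoretical one, here is a minimal back-of-the-envelope sketch in Go. The ingestion rate below is a made-up number for illustration; GitHub has not published its actual figure.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// A signed 32-bit auto-increment column tops out at this ID.
	maxID := int64(math.MaxInt32) // 2,147,483,647

	// Hypothetical rate of check suite executions (illustrative only).
	const executionsPerDay = 2_000_000

	days := maxID / executionsPerDay
	fmt.Printf("32-bit ID space exhausts in about %d days (~%.1f years)\n",
		days, float64(days)/365)
}
```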

GitHub’s Anticipation

GitHub anticipated this situation six months in advance and altered the column from a 32-bit integer to a 64-bit integer, ensuring that even when the IDs exhausted the 32-bit range, writes would keep succeeding and there would be no downtime.
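
The post does not show the actual migration, but conceptually the fix is a column widening. Here is a hedged sketch of what that could look like against a MySQL database; the connection string, table, and column names are assumptions, and at GitHub's scale such a change would typically go through an online schema-change tool (e.g., GitHub's own gh-ost) rather than a direct ALTER.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // MySQL driver (GitHub runs MySQL)
)

func main() {
	// Hypothetical connection string and table name, for illustration only.
	db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/checks")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Widen the auto-increment primary key from a signed 32-bit INT to a
	// 64-bit BIGINT so inserts keep working past 2,147,483,647 rows.
	_, err = db.Exec(
		"ALTER TABLE check_suites MODIFY id BIGINT NOT NULL AUTO_INCREMENT")
	if err != nil {
		log.Fatal(err)
	}
	log.Println("id column widened to 64 bits")
}
```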

But the team still faced an outage. How? What happened?

What exactly happened?

The service was able to create entries in the database for check suite executions because the column now supported 64-bit integers, but there was one external dependency, a library that unmarshalled JSON strings into native objects, which still supported only 32-bit integers.

The service responsible for pulling jobs from the database and putting them on the queue to be picked up by executors depended on this library, and hence it could not dispatch the checks. This led to all the checks remaining in the pending state for the duration of the outage, as sketched below.
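
The post does not say which library or language was involved; the Go sketch below only illustrates the failure class. A decoder with a 32-bit ID field works fine until the first ID past 2,147,483,647, and any job that fails to decode never reaches the queue, so its check sits in the pending state.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CheckJob mirrors the kind of payload the dispatching service might decode.
// The int32 field is the point: it can only hold IDs up to 2,147,483,647,
// and decoding anything larger fails.
type CheckJob struct {
	ID int32 `json:"id"`
}

// dispatch decodes raw rows and hands the good ones to the executor queue.
func dispatch(rows [][]byte, queue chan<- CheckJob) {
	for _, row := range rows {
		var job CheckJob
		if err := json.Unmarshal(row, &job); err != nil {
			// Decoding fails once IDs cross the 32-bit limit, so the job is
			// never enqueued and its check remains "pending".
			fmt.Println("dispatch failed:", err)
			continue
		}
		queue <- job
	}
}

func main() {
	rows := [][]byte{
		[]byte(`{"id": 2147483646}`), // still fits in an int32
		[]byte(`{"id": 2147483648}`), // one past the limit -- never enqueued
	}

	queue := make(chan CheckJob, len(rows))
	dispatch(rows, queue)
	close(queue)

	for job := range queue {
		fmt.Println("executor picked up check suite", job.ID)
	}
}
```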

Impact on the Search service

The search service was also impacted, as indexing used the queue as its source. Since the newer jobs were never put on the queue, they were never indexed in the search cluster (e.g., Elasticsearch), and hence when users searched, they could not find the latest checks and workflows.
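
Here is a minimal sketch of that dependency, with an in-memory map standing in for the search index and a channel for the queue (GitHub's actual pipeline and Elasticsearch setup are not described in the post): if nothing is enqueued, nothing gets indexed, and searches for the newest checks come back empty.

```go
package main

import "fmt"

// indexDocs drains the queue and adds each check suite ID to the search index.
// The map stands in for an Elasticsearch index; only what passes through the
// queue ever becomes searchable.
func indexDocs(queue <-chan int64, index map[int64]bool) {
	for id := range queue {
		index[id] = true
	}
}

func main() {
	index := make(map[int64]bool)

	// During the outage the dispatcher enqueued nothing, so the indexer
	// received nothing.
	queue := make(chan int64)
	close(queue)
	indexDocs(queue, index)

	// A search for a check suite created during the outage finds nothing.
	latestID := int64(2147483650)
	if !index[latestID] {
		fmt.Println("check suite", latestID, "not found in search results")
	}
}
```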

Mitigation

To mitigate the issue, the GitHub team released a code fix. This is speculation, but they may have updated the library to a version that supports 64-bit integers, quickly forked and patched it themselves, or written an ad-hoc job that temporarily pulled the jobs and put them on the queue.
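
If the fix was indeed in the decoding path, one plausible shape of it, purely speculative and mirroring the post's guess, is simply widening the ID field so that values past the 32-bit limit unmarshal cleanly:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CheckJob after the hypothetical fix: the ID is decoded into 64 bits.
type CheckJob struct {
	// Before the fix: ID int32 `json:"id"`
	ID int64 `json:"id"`
}

func main() {
	payload := []byte(`{"id": 2147483648}`) // past the 32-bit limit

	var job CheckJob
	if err := json.Unmarshal(payload, &job); err != nil {
		fmt.Println("still broken:", err)
		return
	}
	fmt.Println("decoded check suite", job.ID) // now succeeds
}
```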

This incident shows that no matter how big a company gets and how prepared it is for an extreme event, there will always be blind spots in the system that come back to bite.

