Dissecting GitHub Outage: Downtime due to an Edge Case






An edge case took down GitHub 🤯

GitHub experienced an outage where their MySQL database went into a degraded state. Upon investigation, it was found that the outage was caused by an edge case. So, how can an edge case take down a database?

What happened?

The outage happened because of an edge case that led to the generation of an inefficient SQL query, which was executed very frequently on the database. The database was thus put under massive load, which eventually made it crash, leading to an outage.

Could retry have helped?

Automatic retries usually help in recovering from a transient issue, but during this outage they made things worse: the retries added even more load on a database that was already under stress.
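As a rough illustration (not GitHub's actual retry logic), here is a minimal Go sketch of how naive, immediate retries multiply the load on an already struggling database, and how exponential backoff at least spaces the attempts out:

package main

import (
	"errors"
	"fmt"
	"time"
)

// queryCommits stands in for the expensive SQL query; it always fails here
// to mimic a database that is already overloaded.
func queryCommits(userID int) error {
	return errors.New("database overloaded")
}

// fetchNaively retries immediately on every failure, so one user request
// turns into four back-to-back queries against the struggling database.
func fetchNaively(userID int) error {
	var err error
	for attempt := 0; attempt < 4; attempt++ {
		if err = queryCommits(userID); err == nil {
			return nil
		}
		// no backoff, no jitter: the retry lands instantly, amplifying load
	}
	return err
}

// fetchWithBackoff waits exponentially longer between attempts, giving the
// database some breathing room to recover.
func fetchWithBackoff(userID int) error {
	var err error
	delay := 100 * time.Millisecond
	for attempt := 0; attempt < 4; attempt++ {
		if err = queryCommits(userID); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	fmt.Println(fetchNaively(123))
	fmt.Println(fetchWithBackoff(123))
}

In practice, adding jitter and a circuit breaker on top of backoff helps the retries stop hammering a database that cannot keep up.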

Fictional Example

Now, let's take a look at a fictional example where an edge case could potentially take down a database.

Say we have an API that returns the number of commits made by a user in the last n days. One way this API could be implemented is to accept start_time as a query parameter, and the API server could then fire a SQL query like

SELECT count(id) FROM commits
WHERE user_id = 123 AND start_time > <start_time>

In order to fire the query, we convert the string start_time from the query parameter into an integer, build the query, and then fire it. In the regular case, we get valid input, compute the number of commits, and respond.

But as an edge case, what if we do not get the query parameter at all, or we get a non-integer value? Then, depending on the language at hand, we may end up using the default integer value, like 0, as our start_time.

There is a very high chance of this happening when we are using Golang, where the zero value of an integer is 0. In such a case, the query that gets executed would be

SELECT count(id) FROM commits
WHERE user_id = 123 AND start_time > 0

The above query, when executed, iterates through all the rows of the table for that user, instead of just the rows for the last n days, making it super inefficient and expensive. Such a query puts a huge load on the database, and frequent invocations can actually take down the entire database.
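Here is a minimal, hypothetical Go sketch of how this bug creeps in: strconv.Atoi returns 0 together with an error for a missing or non-integer parameter, and if that error is ignored, the zero value flows straight into the query. The handler path, user id, and table are made up for illustration.

package main

import (
	"fmt"
	"net/http"
	"strconv"
)

func commitCountHandler(w http.ResponseWriter, r *http.Request) {
	// Edge case: if ?start_time= is missing or not a number, Atoi returns 0
	// and an error. Ignoring the error silently leaves startTime at 0.
	startTime, _ := strconv.Atoi(r.URL.Query().Get("start_time"))

	// With startTime == 0 this query scans every commit row for the user
	// instead of only the recent ones.
	query := fmt.Sprintf(
		"SELECT count(id) FROM commits WHERE user_id = 123 AND start_time > %d",
		startTime)
	fmt.Fprintln(w, query) // executing the query is omitted in this sketch
}

func main() {
	http.HandleFunc("/commits/count", commitCountHandler)
	http.ListenAndServe(":8080", nil)
}

Calling /commits/count without a start_time parameter builds exactly the full-table-scan query shown above.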

Ways to avoid such situations

  1. Always validate and sanitize the input before building and executing the query
  2. Put guard rails that prevent you from iterating over the entire table. For example, putting LIMIT 1000 would make you iterate over at most 1000 rows in the worst case. A small sketch of both ideas follows this list.
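Below is a minimal, hypothetical Go sketch of these two guard rails; the function names, the required-parameter rule, and the LIMIT of 1000 are assumptions for illustration, not code from the actual incident. The count is taken over a LIMITed subquery so that, even with a bad predicate, at most 1000 rows are counted.

package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parseStartTime validates the query parameter instead of silently
// falling back to Go's zero value.
func parseStartTime(raw string) (int, error) {
	if raw == "" {
		return 0, errors.New("start_time is required")
	}
	v, err := strconv.Atoi(raw)
	if err != nil || v <= 0 {
		return 0, fmt.Errorf("start_time must be a positive integer, got %q", raw)
	}
	return v, nil
}

func buildQuery(userID, startTime int) string {
	// The LIMIT acts as a guard rail: even with a bad predicate, the count
	// covers at most 1000 rows instead of the whole table.
	return fmt.Sprintf(
		"SELECT count(id) FROM (SELECT id FROM commits "+
			"WHERE user_id = %d AND start_time > %d LIMIT 1000) AS recent",
		userID, startTime)
}

func main() {
	if _, err := parseStartTime(""); err != nil {
		fmt.Println("rejected:", err) // a missing parameter never reaches the DB
	}
	if ts, err := parseStartTime("1654041600"); err == nil {
		fmt.Println(buildQuery(123, ts))
	}
}

Rejecting a missing or non-positive start_time at the edge means the bad value never reaches the database in the first place, and the LIMIT bounds the damage even if it does.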

Arpit Bhayani


