Dissecting GitHub Outage - Downtime due to Rate Limiter






Rate limiters are supposed to prevent downtime, but what if one turns out to be the root cause of a major outage?

A large chunk of GitHub users saw elevated error rates, and this happened right after GitHub deployed its A/B experimentation service. So, what went wrong? Before digging in, let's understand what A/B experimentation is.

What is A/B Experimentation?

It is hard to decide which UI is better, so before rolling out any critical UI change to all users, a company tests it through an A/B experiment.

A set of users is chosen at random; a fraction of them are shown the new variation while the rest see the old one. Key vitals and metrics of the feature are measured and compared to decide whether the variation is indeed an improvement.

If the metrics are significantly better, the new variation is rolled out to 100% of the users. This way, companies ensure that the features they ship are genuine improvements to the product.
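
To make the split concrete, here is a minimal sketch of deterministic variant assignment; the hashing scheme, bucket count, and experiment name are illustrative assumptions, not GitHub's actual implementation.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, rollout_fraction: float = 0.5) -> str:
    """Deterministically place a user in 'control' or 'treatment'."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 1000           # 1000 buckets, 0..999
    return "treatment" if bucket < rollout_fraction * 1000 else "control"

# The same user always lands in the same variant, so metrics stay consistent.
print(assign_variant("user-42", "new-navbar"))
```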

A/B Testing at GitHub

Every server that needs to participate in an A/B experiment fetches a configuration file that is dynamically generated by a service, call it the Config Generator service.

The configuration allows granular control over the A/B experiment and holds the critical information that shapes it. When a server requests a config file, the request hits the config service, which generates the file on the fly and sends it back to the requesting server.
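
A minimal sketch of that request flow, assuming a hypothetical internal URL and response shape (not GitHub's actual API):

```python
import json
import urllib.request

# Hypothetical internal endpoint of the Config Generator service.
CONFIG_SERVICE_URL = "https://config-generator.internal/experiments/config"

def fetch_experiment_config(service_name: str) -> dict:
    """Each participating server pulls a freshly generated config file."""
    req = urllib.request.Request(f"{CONFIG_SERVICE_URL}?service={service_name}")
    with urllib.request.urlopen(req, timeout=2) as resp:
        # e.g. {"new-navbar": {"rollout": 0.1}}
        return json.load(resp)
```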

What failed?

Because a large number of requests hit the Config Generator service, its rate-limiting module started throttling them, which prevented the configuration files from being generated and sent to the servers.

This affected the users who were part of the experiment: they saw elevated error rates because the frontend did not have the information it needed to power the experiment.
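
To see how throttling produces exactly this failure mode, here is an illustrative token-bucket limiter; the rate and capacity are made-up numbers, not GitHub's real limits.

```python
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate                  # tokens refilled per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                      # throttled: the config file is never generated

limiter = TokenBucket(rate=5, capacity=10)
results = [limiter.allow() for _ in range(50)]      # a burst of 50 config requests
print(results.count(False), "requests throttled")   # most of the burst is rejected
```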

Mitigation and Long-term Fix

As a quick mitigation, the GitHub team disabled the dependency on the dynamically generated file, which restored the service to normal.

To ensure the same issue does not cause another outage, the Config Generator service now generates and caches the configuration files, so that when a request comes in, the file is served directly from the cache instead of being generated on the fly, which was time-consuming.
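
A rough sketch of that cache-first approach, assuming a simple TTL-based in-process cache; the function names and TTL are hypothetical.

```python
import time

CONFIG_CACHE: dict[str, tuple[float, dict]] = {}
CACHE_TTL_SECONDS = 60   # assumed refresh interval

def generate_config(service_name: str) -> dict:
    """Expensive path: build the experiment config from scratch."""
    return {"service": service_name, "experiments": {"new-navbar": {"rollout": 0.1}}}

def get_config(service_name: str) -> dict:
    """Serve from cache; regenerate only when the cached copy has expired."""
    cached = CONFIG_CACHE.get(service_name)
    if cached and time.monotonic() - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]
    config = generate_config(service_name)
    CONFIG_CACHE[service_name] = (time.monotonic(), config)
    return config
```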

Key Takeaways

  • Avoid synchronous dependencies between services and prefer asynchronous ones; when a synchronous call is unavoidable, fail gracefully (see the sketch after this list).
  • Classify services by severity tiers and run periodic audits of tier-1 services to ensure they are architected well and have no blind spots.
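
A minimal sketch of the first takeaway, assuming the config service is treated as an optional dependency with a safe default (names are hypothetical):

```python
DEFAULT_CONFIG = {"experiments": {}}   # assumed safe default: every experiment off

def load_config_with_fallback(service_name: str, fetch) -> dict:
    """fetch is the (possibly throttled) call to the config service."""
    try:
        return fetch(service_name)
    except Exception:
        # Serve the unchanged (control) experience instead of an error page.
        return DEFAULT_CONFIG
```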
