Dissecting GitHub Outage - Repository Creation Failed






Imagine trying to create a repository on GitHub and finding that it simply does not work. That is exactly what happened in April 2021, when GitHub users were unable to create new repositories.

The root cause of this outage was something seemingly unrelated: secret scanning. That is what makes it so interesting to dissect.

What is Secret Scanning?

Our API servers need to talk to peripheral components like databases, caches, and SaaS services. This communication involves some form of authentication and authorization through auth tokens, passwords, or secret keys.

Developers tend to commit these secrets in settings/constants files and push them to GitHub. What if the repository content gets leaked? What if GitHub itself suffers a data breach and an attacker gains access to private repositories?
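For illustration, a hypothetical settings file like the one below is exactly what secret scanning is meant to catch. The values here are AWS's published example credentials and a made-up connection string, not real secrets:

```python
# settings.py -- hardcoded credentials committed to the repository.
# These are AWS's documented example values, shown purely for illustration.
AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"
AWS_SECRET_ACCESS_KEY = "wJalrXUtnFIEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
DATABASE_URL = "postgres://admin:sup3rs3cret@db.internal:5432/prod"
```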

If secrets like AWS access keys, auth tokens, and DB passwords are leaked, an attacker can dump the data and demand a ransom, or even abuse the infrastructure to perform illegal activities or mine cryptocurrency.

Hence, GitHub periodically runs a job that checks repositories for committed secrets and warns the owners about them.
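At its core, such a scan is pattern matching over file contents. Here is a minimal sketch, covering just one well-known pattern (AWS access key IDs); a real scanner, GitHub's included, uses many provider-specific patterns plus validity checks:

```python
import re

# AWS access key IDs start with "AKIA" (long-term) or "ASIA" (temporary),
# followed by 16 uppercase alphanumeric characters.
AWS_ACCESS_KEY_RE = re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b")

def scan_file(path, content):
    """Return one finding per line that looks like it contains a key."""
    findings = []
    for lineno, line in enumerate(content.splitlines(), start=1):
        if AWS_ACCESS_KEY_RE.search(line):
            findings.append(f"{path}:{lineno}: possible AWS access key")
    return findings

if __name__ == "__main__":
    demo = 'AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"'
    print(scan_file("settings.py", demo))
```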

Repository Creation Flow

When a repository is created, an entry is made in the Secret Scanning table. A background job then picks up these entries, scans the repositories for potential secrets, and notifies the owners.
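This is the coupling that matters for this outage: the write to the Secret Scanning table happens inside the repository creation path itself. A simplified sketch of that idea, with invented table names and SQLite standing in for GitHub's actual database:

```python
import sqlite3

def create_repository(conn, owner, name):
    # One transaction: if the secret-scanning insert fails (e.g., the
    # table is unreachable), repository creation fails along with it.
    with conn:
        cur = conn.execute(
            "INSERT INTO repositories (owner, name) VALUES (?, ?)",
            (owner, name),
        )
        conn.execute(
            "INSERT INTO secret_scanning_queue (repo_id) VALUES (?)",
            (cur.lastrowid,),
        )
    return cur.lastrowid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE repositories (id INTEGER PRIMARY KEY, owner TEXT, name TEXT)")
conn.execute("CREATE TABLE secret_scanning_queue (repo_id INTEGER)")
print(create_repository(conn, "octocat", "hello-world"))  # -> 1
```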

What led to the outage?

The GitHub team ran a data migration that moved the Secret Scanning table from a common database to its own cluster, allowing it to scale independently.
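Such a move is often implemented as a routing change: the application consults a table-to-cluster mapping, and the migration flips one entry. A hedged sketch with invented names; any code path that bypasses the mapping and still points at the old cluster breaks the moment the table moves:

```python
# All cluster and table names below are invented for illustration.
CLUSTERS = {
    "main_cluster": "mysql://main.internal:3306",
    "secret_scanning_cluster": "mysql://scanning.internal:3306",
}

TABLE_ROUTING = {
    "repositories": "main_cluster",
    # Before the migration, this entry also said "main_cluster":
    "secret_scanning_queue": "secret_scanning_cluster",
}

def dsn_for(table):
    """Resolve the connection string of the cluster that owns a table."""
    return CLUSTERS[TABLE_ROUTING[table]]

assert dsn_for("secret_scanning_queue") == "mysql://scanning.internal:3306"
```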

The GitHub team was unaware that repository creation depended on this table! After the table was migrated to a different database, creating a new repository started failing, leading to this outage. It is interesting to see that even such mature products have blind spots.

How did GitHub mitigate it?

GitHub's mitigation was to roll back the migration. The incident report does not spell out exactly what the rollback involved, but there are a few plausible approaches:

  1. they could have quickly recopied the table back to the old database
  2. they could have whitelisted the database so that applications could connect to it again
  3. the old table may have still been intact, in which case renaming it and making it active again would have sufficed (sketched below)
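If the third option is what happened, the rollback could have been as cheap as a rename. A purely speculative sketch, again with invented table names and SQLite standing in for the real database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Suppose the migration left the old table in place, renamed out of the way.
conn.execute("CREATE TABLE secret_scanning_queue_retired (repo_id INTEGER)")

def rollback_migration(conn):
    # A rename is a metadata-only change, so it is near-instant
    # even for large tables.
    with conn:
        conn.execute(
            "ALTER TABLE secret_scanning_queue_retired "
            "RENAME TO secret_scanning_queue"
        )

rollback_migration(conn)
# Repository creation's write to secret_scanning_queue now succeeds again.
conn.execute("INSERT INTO secret_scanning_queue (repo_id) VALUES (1)")
```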

Again, this is pure speculation: we have no insider information, and the report does not specify the steps. It would have been fun to walk through their actual mitigation; we could have learned so much. Nonetheless, we did take away a few interesting insights from this outage.

