Dissecting GitHub Outage: ID column reaching the max value 2147483647



Outage Dissections



On the 5th of May, 2020, GitHub experienced an outage for this very reason: one of their shared tables had an auto-incrementing ID column that hit its maximum value. Let’s see what could have been done in such a situation.

What’s the next value after MAX int?

GitHub used a 4-byte signed integer as their ID column, which means the value can range from -2147483648 to 2147483647. So, when the ID column hits 2147483647 and MySQL tries to generate the next value, it gets the same value again, i.e., 2147483647.

For MySQL, 2147483647 + 1 = 2147483647

So, when it tries to insert a row with ID 2147483647, it gets a Duplicate Key Error, given that a row already exists with the same ID.
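The failure mode can be reproduced on a toy table. This is a hypothetical illustration (the table and column names are made up, not GitHub's actual schema):

```sql
-- A minimal MySQL table with a signed INT auto-increment key.
CREATE TABLE events (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    payload VARCHAR(255)
);

-- Push the counter to the signed INT ceiling.
ALTER TABLE events AUTO_INCREMENT = 2147483647;

INSERT INTO events (payload) VALUES ('last row');  -- gets id 2147483647

-- The counter cannot advance past the column's maximum, so the next
-- insert attempts the same id and fails:
INSERT INTO events (payload) VALUES ('one more');
-- ERROR 1062 (23000): Duplicate entry '2147483647' for key 'PRIMARY'
```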

How to mitigate the issue?

A situation like this is extremely critical given that the database does not allow us to insert any row into the table. It typically results in a major downtime of a few hours, depending on the amount of data in the table. There are a couple of ways to mitigate the issue.

Approach 1: Alter the table and increase the width of the column

Quickly fire an ALTER TABLE and change the data type of the ID column to UNSIGNED INT or BIGINT. Depending on the data size, an ALTER query like this can take anywhere from a few hours to a few days to execute. Hence, this approach is suitable only when the table is small.
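A sketch of what that ALTER could look like, again on the hypothetical `events` table. Note that MySQL rebuilds the table for this change, which is why the runtime scales with table size:

```sql
-- Widen the column to 8 bytes: new max is 9223372036854775807.
ALTER TABLE events MODIFY COLUMN id BIGINT NOT NULL AUTO_INCREMENT;

-- Alternatively, stay at 4 bytes but reclaim the unused negative
-- half of the range: new max is 4294967295.
ALTER TABLE events MODIFY COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT;
```

UNSIGNED INT only doubles the headroom, so BIGINT is the safer long-term choice if the table keeps growing.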

Approach 2: Swap the table

The idea here is to create an empty table with the same schema but a larger ID range that starts from 2147483648. Then rename this new table to the old name and start accepting writes. Then slowly migrate the data from the old table into the new one. This approach can be used when you can live without the old data for a few days.
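The steps above can be sketched as follows, again using the hypothetical `events` table (batch boundaries and sizes are illustrative):

```sql
-- 1. Create an empty copy with a wider ID column, with the counter
--    starting just past the old signed INT maximum.
CREATE TABLE events_new LIKE events;
ALTER TABLE events_new
    MODIFY COLUMN id BIGINT NOT NULL AUTO_INCREMENT,
    AUTO_INCREMENT = 2147483648;

-- 2. Swap the tables atomically so writes resume immediately.
RENAME TABLE events TO events_old, events_new TO events;

-- 3. Backfill old rows in the background, in small batches, to avoid
--    long locks and replication lag.
INSERT INTO events SELECT * FROM events_old WHERE id BETWEEN 1 AND 100000;
-- ...repeat for subsequent ID ranges until events_old is drained.
```

The RENAME in step 2 is the key trick: it is near-instant regardless of table size, so the write outage ends as soon as the swap completes, and only reads of the not-yet-migrated rows are degraded.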

Get warned before the storm

Although mitigation is great, it is better to have a monitoring system in place that raises an alert when the ID reaches 70% of its range. So, write a simple DB monitoring service that periodically checks this by firing a query against the database.
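One way to sketch that check in MySQL is to read the current auto-increment counters from `information_schema`. This assumes the keys are signed INT columns; a real service would look up each column's actual type and use the matching maximum:

```sql
-- Flag every table whose auto-increment counter has crossed 70% of
-- the signed INT ceiling (2147483647).
SELECT TABLE_SCHEMA,
       TABLE_NAME,
       AUTO_INCREMENT,
       ROUND(AUTO_INCREMENT / 2147483647 * 100, 2) AS pct_used
FROM information_schema.TABLES
WHERE AUTO_INCREMENT IS NOT NULL
  AND AUTO_INCREMENT > 0.70 * 2147483647;
```

Run this periodically (e.g., from a cron job) and page the on-call when it returns any rows, leaving plenty of time to run the ALTER before the ceiling is hit.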


Arpit Bhayani

© Arpit Bhayani, 2022