Dissecting GitHub Outage - Multiple Leaders in Zookeeper Cluster






The split-brain problem in Distributed Systems is not theoretical. GitHub had an outage because their ZooKeeper cluster ended up with two leaders, which caused writes to Kafka to fail.

ZooKeeper is an essential component for Kafka: clients connect to it to get information about the brokers, and the Kafka cluster uses it internally to manage its state.
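
For context, every live Kafka broker registers itself under well-known znodes in ZooKeeper, and that is where the cluster's metadata is looked up. Here is a minimal sketch using the Python kazoo client (the hostname is made up; /brokers/ids is the standard Kafka znode path):

    from kazoo.client import KazooClient

    # Connect to the ZooKeeper ensemble (address is illustrative).
    zk = KazooClient(hosts="zk1.example.com:2181")
    zk.start()

    # Every live Kafka broker registers an ephemeral znode under /brokers/ids.
    broker_ids = zk.get_children("/brokers/ids")
    for broker_id in broker_ids:
        data, _stat = zk.get("/brokers/ids/" + broker_id)
        print(broker_id, data)  # JSON blob with the broker's host, port, endpoints

    zk.stop()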

What happened?

During scheduled maintenance, the ZooKeeper nodes were being upgraded/patched, and during this window many new nodes were added “too quickly” to the ZooKeeper cluster.

Because so many new nodes were added “too quickly”, they were unable to self-discover or understand the existing topology; and given the way ZooKeeper's bootstrap code is written, they concluded that the cluster was leaderless. This made them trigger a leader election.

Given that a lot of new nodes were added, they formed a majority among themselves and elected a new leader. These nodes formed a logical second cluster operating independently, so the cluster now had a split-leadership problem.
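
To see how a second quorum can form, note that a ZooKeeper server only knows the ensemble listed in its own configuration, and a majority of that list is enough to elect a leader. A purely illustrative zoo.cfg for one of the newly added nodes (hostnames and server ids are made up):

    # zoo.cfg on one of the newly added nodes (everything below is made up)
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/var/lib/zookeeper
    clientPort=2181
    # The ensemble as this node sees it -- only the other new nodes:
    server.6=zk6.example.com:2888:3888
    server.7=zk7.example.com:2888:3888
    server.8=zk8.example.com:2888:3888
    # A majority of *this* list (2 out of 3) is enough for these nodes
    # to elect their own leader, independently of the original cluster.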

A broker connects to the second cluster

One of the Kafka brokers (nodes) connected to the second cluster, found that no other brokers were present (because the second ZooKeeper cluster had no entries), and hence elected itself as the controller.
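
Kafka's controller election works by racing to create the ephemeral /controller znode: the first broker to create it becomes the controller. Because the second ZooKeeper cluster was empty, the lone broker that connected to it saw no /controller entry and made itself controller. A simplified sketch of that check-and-create step using kazoo (the real broker writes a richer JSON payload and handles session expiry):

    from kazoo.client import KazooClient
    from kazoo.exceptions import NodeExistsError

    zk = KazooClient(hosts="zk-new.example.com:2181")  # the empty, second cluster
    zk.start()

    try:
        # Ephemeral: the znode disappears if this broker's ZooKeeper session dies.
        zk.create("/controller", b'{"brokerid": 7}', ephemeral=True)
        print("no existing controller found -> this broker becomes the controller")
    except NodeExistsError:
        print("another broker is already the controller")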

When the Kafka clients (producers) tried to connect to the Kafka cluster, they got conflicting information, which led to 10% of the writes failing.

Had more brokers connected to the new cluster, it could even have led to data-consistency issues. But because only one broker connected, the impact was minimal.

Recovery

The ZooKeeper cluster would have auto-healed, but it would have taken a long time to converge; hence the quicker fix was to manually update the ZooKeeper entries and reconfigure the cluster to have a single leader.

To keep things clean, the nodes that were part of the second ZooKeeper cluster could have been deleted as well.

Ensuring zero data loss

Even though 10% of write requests failed, why did it not lead to any data loss? The secret is the Dead Letter Queue (DLQ).

It is a very standard architectural pattern that ensures zero data loss even when the message broker (queue) crashes. The idea is to have a secondary queuing system that you can push messages to if the write on the primary fails.
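
A minimal sketch of the pattern, assuming the kafka-python client and a hypothetical secondary cluster acting as the DLQ (broker addresses and topic names are illustrative):

    from kafka import KafkaProducer
    from kafka.errors import KafkaError

    primary = KafkaProducer(bootstrap_servers=["kafka-primary:9092"])
    dlq = KafkaProducer(bootstrap_servers=["kafka-dlq:9092"])  # secondary cluster

    def publish(topic, payload):
        try:
            # Block until the primary cluster acknowledges the write (or it fails).
            primary.send(topic, payload).get(timeout=10)
        except KafkaError:
            # The primary write failed: persist the message to the DLQ,
            # to be replayed once the primary is healthy again.
            dlq.send("dead-letter." + topic, payload).get(timeout=10)

    publish("orders", b'{"order_id": 42}')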

All the messages that the clients failed to write to Kafka were persisted in the DLQ and processed later.

Key Learnings

  • Make consumers idempotent
  • Automate cluster provisioning
  • Have a DLQ for all critical queueing systems
  • Have retries with exponential back-offs on clients (see the sketch below)
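
For the last point, a minimal sketch of client-side retries with exponential back-off and jitter, assuming a generic send_fn that raises on failure:

    import random
    import time

    def send_with_retries(send_fn, payload, max_attempts=5, base_delay=0.1):
        for attempt in range(max_attempts):
            try:
                return send_fn(payload)
            except Exception:
                if attempt == max_attempts - 1:
                    raise  # out of retries; let the caller fall back, e.g. to the DLQ
                # Exponential back-off with jitter: ~0.1s, 0.2s, 0.4s, ... plus noise.
                time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))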
