Outage Dissections
The split-brain problem in distributed systems is not theoretical. GitHub had an outage because their Zookeeper cluster ended up with two leaders, causing writes to Kafka to fail.
Zookeeper is an essential component for Kafka: clients connect to it to discover the brokers, and Kafka itself uses it internally to manage cluster state.
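To make this concrete, here is a minimal sketch of the kind of state Kafka keeps in Zookeeper: live brokers register ephemeral znodes under /brokers/ids, and the current controller is recorded in /controller. The sketch uses the Python kazoo client; the Zookeeper address is an assumption.

```python
# A minimal sketch (using the third-party "kazoo" Zookeeper client) of how
# Kafka-related state can be read from Zookeeper.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")  # assumed Zookeeper address
zk.start()

# Each live broker registers an ephemeral znode under /brokers/ids.
broker_ids = zk.get_children("/brokers/ids")
print("live brokers:", broker_ids)

# The broker currently acting as controller is recorded in /controller.
data, _stat = zk.get("/controller")
print("controller:", data.decode("utf-8"))

zk.stop()
```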
During scheduled maintenance, the Zookeeper nodes were being upgraded and patched, and during this window many new nodes were added "too quickly" to the Zookeeper cluster.
Because so many new nodes were added "too quickly", they were unable to self-discover the existing topology; given the way Zookeeper's bootstrap code is written, they concluded the cluster was leaderless, which made them trigger a leader election.
Since so many new nodes had been added, they formed a majority and elected a new leader among themselves. These nodes became a logical second cluster operating independently, leaving the cluster with a split-leadership problem.
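A toy calculation shows why adding many nodes "too quickly" is dangerous: a quorum is a strict majority of the ensemble, so a large enough batch of new nodes that cannot see the existing leader can reach quorum entirely on its own. The node counts below are hypothetical.

```python
# A toy illustration (not Zookeeper's actual election code) of the majority
# arithmetic behind the outage.

def quorum(ensemble_size: int) -> int:
    """Strict majority needed to elect a leader."""
    return ensemble_size // 2 + 1

original_nodes = 3
new_nodes = 5  # hypothetical batch added in one maintenance window

total = original_nodes + new_nodes
print(f"quorum for {total} nodes: {quorum(total)}")  # 5

# The 5 new nodes, unaware of the existing leader, already meet the quorum
# of 5 on their own -- enough to elect a second leader and form a logically
# separate cluster.
print("new nodes can form quorum alone:", new_nodes >= quorum(total))  # True
```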
One of the Kafka brokers connected to this second cluster and found that no other brokers were registered (the second Zookeeper cluster had no entries), and hence elected itself as the controller.
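Kafka's controller election boils down to brokers racing to create the ephemeral /controller znode: whoever succeeds becomes the controller. Against an empty second Zookeeper cluster, the lone broker's attempt trivially succeeds. A simplified sketch (kazoo again; the broker id and payload format are illustrative, not Kafka's exact wire format):

```python
# Simplified sketch of Kafka's controller election: brokers race to create
# the ephemeral /controller znode; the winner becomes the controller.
import json
from kazoo.client import KazooClient
from kazoo.exceptions import NodeExistsError

zk = KazooClient(hosts="127.0.0.1:2181")  # assumed address of the second cluster
zk.start()

broker_id = 42  # hypothetical broker id
try:
    zk.create(
        "/controller",
        json.dumps({"brokerid": broker_id}).encode("utf-8"),
        ephemeral=True,  # the znode disappears if this broker's session dies
    )
    print(f"broker {broker_id} is now the controller")
except NodeExistsError:
    # Another broker won the race; in Kafka, losers watch /controller
    # and retry if it goes away.
    print("another broker is already the controller")

zk.stop()
```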
When Kafka clients (producers) tried to connect to the Kafka cluster, they got conflicting information, which led to about 10% of writes failing.
Had more brokers connected to the second cluster, it could have led to data-consistency issues as well; because only one broker connected, the impact was minimal.
The Zookeeper cluster would eventually have auto-healed, but convergence would have taken a long time; the quick fix was to manually update the Zookeeper entries and reconfigure the ensemble to have a single leader.
To keep things clean, the nodes that were part of the second Zookeeper cluster could have been removed as well.
Even though 10% of write requests failed, why did it not lead to data loss? The secret is the Dead Letter Queue (DLQ).
It is a standard architectural pattern that ensures zero data loss even when the message broker (queue) is down. The idea is to have a secondary queuing system that you push messages to whenever a write to the primary fails.
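A minimal sketch of this write-side fallback, using the kafka-python client; the topic names, addresses, and the choice of a second Kafka cluster as the DLQ backend are assumptions for illustration:

```python
# Try the primary broker; on failure, persist the message to the DLQ so it
# can be replayed once the primary heals.
from kafka import KafkaProducer
from kafka.errors import KafkaError

primary = KafkaProducer(bootstrap_servers="primary-kafka:9092")  # assumed address
dlq = KafkaProducer(bootstrap_servers="dlq-kafka:9092")          # secondary queue

def send_with_dlq(topic: str, payload: bytes) -> None:
    """Write to the primary; fall back to the dead letter queue on failure."""
    try:
        # .get() blocks until the broker acks the write (or the send fails).
        primary.send(topic, payload).get(timeout=10)
    except KafkaError:
        # Primary write failed (e.g., no reachable controller); park the
        # message in the DLQ for later replay.
        dlq.send("dead-letters", payload).get(timeout=10)

send_with_dlq("orders", b'{"order_id": 1}')
```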
All the messages that clients failed to write to Kafka were persisted in the DLQ and processed later.
If you like what you read, you can always subscribe to my newsletter and get posts delivered straight to your inbox. I write essays on various engineering topics and share them through my weekly newsletter.