Arpit's Newsletter read by 38000+ engineers
Weekly essays on real-world system design, distributed systems, or a deep dive into some super-clever algorithm.
Resizing a Hash Table is important for maintaining consistent performance and efficiency, but how do we actually implement it?
Resize is all about allocating a new array of slots and re-inserting the existing keys into it. But a few granular details are specific to the type of hash table.
Resizing a table happens when the load factor hits a threshold. To implement an efficient resize, a Hash Table that uses chaining needs to keep track of two counters: the total number of keys and the total number of slots. This helps us avoid recounting the keys on every check, and we can instantly compute the load factor.
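Here is a minimal sketch in Python of what that bookkeeping could look like for a chained table; the class and field names are illustrative, not from the original:

    class ChainedHashTable:
        def __init__(self, slots=8):
            self.total_slots = slots                   # number of buckets
            self.count_keys = 0                        # keys currently stored
            self.buckets = [[] for _ in range(slots)]  # one chain per bucket

        def load_factor(self):
            # both counters are maintained inline, so no chain walk is needed
            return self.count_keys / self.total_slots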
While inserting a key into the Hash Table, we check the load factor. If it breaches the threshold, we trigger the resize.
insert_key(k, table) {
    // ... place the key k in its chain ...

    LF = count_keys / total_slots
    if (LF >= 0.5) {
        resize(table, total_slots * 2)
    }
}
When we delete a key from the Hash Table, we check the load factor. If it drops below the shrink threshold, we trigger the shrink. The pseudocode is fairly similar to the one above, as shown in the sketch below.
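A sketch of that delete path, building on the ChainedHashTable above; the 0.125 shrink threshold, the halving factor, and the minimum size are assumptions, and resize() is the helper sketched after the next paragraph:

    def delete_key(k, table):
        chain = table.buckets[hash(k) % table.total_slots]
        if k in chain:
            chain.remove(k)
            table.count_keys -= 1

        LF = table.count_keys / table.total_slots
        if LF <= 0.125 and table.total_slots > 8:    # assumed shrink threshold
            resize(table, table.total_slots // 2)    # shrink to half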
Chained Hashing is implemented using Linked Lists, and there are two ways to resize: re-insert every key into a freshly allocated bucket array, or reuse the existing list nodes and relink them into the new buckets.
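A sketch of the first approach: allocate a new bucket array and rehash every key into it. A node-based implementation could instead walk each chain and relink the existing nodes into the new buckets, avoiding fresh allocations:

    def resize(table, new_slots):
        old_buckets = table.buckets
        table.total_slots = new_slots
        table.buckets = [[] for _ in range(new_slots)]
        for chain in old_buckets:
            for k in chain:
                # rehash against the new slot count and append to the new chain
                table.buckets[hash(k) % new_slots].append(k)
        # count_keys stays the same: the keys are only redistributed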
In open addressing, we always soft delete so that we can reach the elements placed further along the collision chain. To handle this gracefully, we need two counters at the hash table level: a key counter that tracks the active keys, and a used counter that tracks every slot occupied by an active or soft-deleted key.
For open addressing, the load factor is computed as the used counter divided by the total slots. The deleted keys also affect performance, as we have to probe past them while looking for keys.
Hence, the key counter increases and decreases on every insert and delete, while the used counter increases on insert and drops only when we do a resize.
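A sketch of an open-addressing table carrying both counters; the TOMBSTONE marker, linear probing, and the field names are illustrative choices:

    EMPTY = None
    TOMBSTONE = object()          # marker left behind by a soft delete

    class OpenAddressingTable:
        def __init__(self, slots=8):
            self.total_slots = slots
            self.key_count = 0    # active keys: +1 on insert, -1 on delete
            self.used_count = 0   # active + soft-deleted slots: +1 on insert,
                                  # reset only during a resize
            self.slots = [EMPTY] * slots

        def load_factor(self):
            # tombstones still lengthen probe sequences, so they count here
            return self.used_count / self.total_slots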
While inserting a key into the Hash Table, we check the load factor. If it breaches the threshold, we trigger the resize. The resize operation skips the deleted keys and re-inserts only the active keys.
The shrinking of the hash table is triggered when the number of active keys falls below the threshold, so the load factor for this operation is active keys / total_slots.
Just like the grow phase, we skip the deleted keys and re-insert only the active keys into the new array. The key counter and the used counter are adjusted accordingly, as shown in the sketch below.
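Putting the open-addressing paths together, building on the OpenAddressingTable sketch above: grow on the used-counter load factor, shrink on the active-key load factor, and have the resize skip tombstones while resetting both counters. The 0.5 and 0.125 thresholds, linear probing, and the assumption that an inserted key is not already present are all illustrative:

    def oa_place(k, table):
        # probe for a free slot and drop the key in; updates both counters
        i = hash(k) % table.total_slots
        while table.slots[i] is not EMPTY and table.slots[i] is not TOMBSTONE:
            i = (i + 1) % table.total_slots              # linear probing
        table.slots[i] = k
        table.key_count += 1
        table.used_count += 1

    def oa_resize(table, new_slots):
        old_slots = table.slots
        table.slots = [EMPTY] * new_slots
        table.total_slots = new_slots
        table.key_count = 0
        table.used_count = 0                             # tombstones are dropped
        for k in old_slots:
            if k is not EMPTY and k is not TOMBSTONE:    # re-insert active keys only
                oa_place(k, table)

    def oa_insert(k, table):
        oa_place(k, table)
        if table.used_count / table.total_slots >= 0.5:  # grow on used slots
            oa_resize(table, table.total_slots * 2)

    def oa_delete(k, table):
        i = hash(k) % table.total_slots
        while table.slots[i] is not EMPTY:
            if table.slots[i] == k:
                table.slots[i] = TOMBSTONE               # soft delete
                table.key_count -= 1
                break
            i = (i + 1) % table.total_slots
        if table.key_count / table.total_slots <= 0.125 and table.total_slots > 8:
            oa_resize(table, table.total_slots // 2)     # shrink on active keys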
Here's the video ⤵