Implementing Resize of a Hash Table






Resizing a Hash Table is important to maintain consistent performance and efficiency, but how do we actually implement it?

Resize is all about

  • allocating a new array of the desired size
  • re-inserting the existing keys into this new array
  • deleting the old array (see the sketch below)
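
In C-like pseudocode, these three steps look roughly like the following sketch (helper names such as allocate_array and insert_into are illustrative, not from the original implementation):

resize(table, new_size) {
    new_array = allocate_array(new_size);   // 1. allocate a new array of the desired size

    for each key k in table {               // 2. re-insert every existing key; its slot is
        insert_into(new_array, k);          //    recomputed as hash(k) % new_size
    }

    delete(table.array);                    // 3. delete the old array
    table.array = new_array;
    table.total_slots = new_size;
}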

But a few granular details are specific to the type of hash table.

Resizing a Chained Hash Table

Resizing a table happens when the load factor hits a threshold. To implement an efficient resize, a Hash Table that uses chaining needs to keep track of

  • number of keys
  • total number of slots

Keeping these two counters up to date lets us compute the load factor in constant time, without walking the chains to re-count the keys.
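
A minimal sketch of this bookkeeping, assuming a C-like structure (the field names are illustrative):

struct hash_table {
    node **slots;       // array of chain heads (linked lists)
    int  total_slots;   // total number of slots in the array
    int  count_keys;    // number of keys currently stored
};

// load factor in O(1): no need to walk the chains and count
LF = table.count_keys / (double) table.total_slots;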

Resize during insert

Every time we insert a key into the Hash Table, we check the load factor. If it breaches the threshold, we trigger a resize.

insert_key(k, table) {
    // ... place the key at the head of its chain ...
    table.count_keys++;

    // check the load factor after every insert
    LF = table.count_keys / (double) table.total_slots;
    if (LF >= 0.5) {
        resize(table, table.total_slots * 2);   // grow by doubling the slots
    }
}

Shrinking during delete

When we delete a key from the Hash Table, we check the load factor. If it falls below the shrink threshold, we trigger a shrink. The pseudocode is fairly similar to the one above.
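
A minimal sketch of that delete path, mirroring the insert above (the shrink threshold of 0.125 and the halving factor are illustrative assumptions, not prescribed values):

delete_key(k, table) {
    // ... unlink the key's node from its chain ...
    table.count_keys--;

    LF = table.count_keys / (double) table.total_slots;
    if (LF <= 0.125) {
        resize(table, table.total_slots / 2);   // shrink by halving the slots
    }
}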

Two ways to implement resize

Chained Hashing is implemented using Linked Lists, and there are two ways to resize the table

  1. we iterate through the keys and re-insert them into the new array, allocating fresh nodes
  2. we iterate through the keys and just relink the existing nodes into the new array by adjusting their pointers, instead of allocating a new set of nodes (see the sketch after this list)
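
The second approach avoids any node allocation during resize. A rough sketch, assuming the table stores an array of chain heads as in the structure above (helper names are illustrative):

resize(table, new_size) {
    new_slots = allocate_array(new_size);        // every slot starts as an empty chain

    for (i = 0; i < table.total_slots; i++) {
        node = table.slots[i];
        while (node != NULL) {
            next = node.next;                    // remember the rest of the old chain
            j = hash(node.key) % new_size;       // recompute the slot in the new array
            node.next = new_slots[j];            // push the same node onto the new chain
            new_slots[j] = node;
            node = next;
        }
    }

    delete(table.slots);                         // free only the old slot array, not the nodes
    table.slots = new_slots;
    table.total_slots = new_size;
}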

Resizing a Hash Table with Open Addressing

In open addressing, deletes are soft deletes (tombstones), so that lookups can still reach elements placed further along the probe sequence. To handle this gracefully, the hash table needs two counters

  1. Key Counter: number of active keys in the table
  2. Used Counter: number of used slots in the table (active keys plus tombstones)

For open addressing, the load factor is computed as the Used Counter divided by the total slots, because deleted (tombstoned) slots also hurt performance: lookups still have to probe past them.

Hence, the Key Counter increases and decreases on every insert and delete, while the Used Counter increases on every insert but only comes back down when we do a resize.
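
A sketch of the insert path with both counters, following the description above (the 0.5 threshold is illustrative; it also assumes every insert consumes a fresh slot, so reusing a tombstoned slot would need an extra check):

insert_key(k, table) {
    // ... probe for a slot and place the key ...
    table.count_keys++;                          // one more active key
    table.used_slots++;                          // one more slot is no longer empty

    // load factor is based on used slots, since tombstones slow down probing too
    LF = table.used_slots / (double) table.total_slots;
    if (LF >= 0.5) {
        resize(table, table.total_slots * 2);
    }
}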

Resize during insert

While inserting a key into the Hash Table, we check the load factor; if it breaches the threshold, we trigger a resize. The resize operation skips the deleted (tombstoned) slots and re-inserts only the active keys.
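
A sketch of that resize, assuming each slot carries an explicit state (EMPTY, OCCUPIED, DELETED) and a hypothetical place_key helper that probes and places a key without triggering another resize:

resize(table, new_size) {
    old_slots = table.slots;
    old_size  = table.total_slots;

    table.slots       = allocate_array(new_size);  // all slots start as EMPTY
    table.total_slots = new_size;
    table.count_keys  = 0;
    table.used_slots  = 0;                          // tombstones are dropped here

    for (i = 0; i < old_size; i++) {
        if (old_slots[i].state == OCCUPIED) {       // skip EMPTY slots and tombstones
            place_key(table, old_slots[i].key);     // bumps count_keys and used_slots again
        }
    }

    delete(old_slots);
}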

Shrinking during delete

The shrinking of the hash table is triggered when the number of active keys falls below the threshold; hence, for this operation, the load factor is computed as active keys / total slots.

Similar to the insert phase, we skip the deleted keys and re-insert only the active keys into the new, smaller array. The Key and Used Counters are adjusted accordingly.
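
A sketch of the delete path for open addressing (the 0.125 threshold and the halving factor are illustrative assumptions):

delete_key(k, table) {
    // ... find the key's slot and mark it as DELETED (soft delete) ...
    table.count_keys--;       // one less active key
    // used_slots stays unchanged: the tombstone still occupies the slot

    // the shrink decision uses active keys, not used slots
    LF = table.count_keys / (double) table.total_slots;
    if (LF <= 0.125) {
        resize(table, table.total_slots / 2);   // the resize also clears the tombstones
    }
}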

