Benchmark Pagination Strategies in MongoDB

Published on 2nd Jun 2017


MongoDB is a document-based data store, and pagination is one of its most common use cases. So when do you paginate a response? The answer is pretty neat: you paginate whenever you want to process results in chunks. Some common scenarios are

  • Batch processing
  • Showing huge set of results on user interface

There are multiple approaches to paginating a result set in MongoDB. This blog post presents and analyzes the results of a benchmark of two of them: skip with limit, and _id with limit. So here we go ...

The benchmark was run over a non-indexed collection. Each document of the collection looks something like this

{
        "_id" : ObjectId("5936d17263623919cd5165bd"),
        "name" : "Lisa Rogers",
        "marks" : 34
}

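The two approaches under comparison can be sketched as follows. This is a minimal sketch simulated in plain Python over an in-memory, _id-sorted list, so it runs without a MongoDB server; the equivalent MongoDB queries appear in the docstrings, and the collection name `students` used there is an assumption.

```python
def paginate_skip_limit(docs, page_size):
    """skip/limit: db.students.find().skip(page_no * page_size).limit(page_size)

    The server must walk past every skipped document, so the cost of a
    page grows with the page number.
    """
    page_no = 0
    while True:
        page = docs[page_no * page_size:(page_no + 1) * page_size]
        if not page:
            break
        yield page
        page_no += 1


def paginate_id_limit(docs, page_size):
    """_id/limit: db.students.find({"_id": {"$gt": last_id}}).sort("_id", 1).limit(page_size)

    The server seeks directly past the last seen _id, so each page costs
    roughly the same regardless of how deep into the result set it is.
    """
    last_id = None
    while True:
        if last_id is None:
            page = docs[:page_size]
        else:
            page = [d for d in docs if d["_id"] > last_id][:page_size]
        if not page:
            break
        yield page
        last_id = page[-1]["_id"]


# Tiny stand-in data set, already sorted by _id.
docs = [{"_id": i, "name": f"student-{i}", "marks": i % 100} for i in range(10)]
pages_a = list(paginate_skip_limit(docs, 3))
pages_b = list(paginate_id_limit(docs, 3))
assert pages_a == pages_b  # both strategies return the same pages
```

Both strategies yield identical pages; they differ only in how much work the server does to find where each page starts.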
All records of the collection are fetched page-wise, with the page size fixed for the whole run. Each page is fetched 3 times, and the average of the 3 fetch times is recorded.
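The measurement scheme above can be sketched as follows. This is a minimal sketch; `fetch_page` stands in for whatever zero-argument callable pulls one page, and the pymongo call in the docstring is illustrative.

```python
import time


def avg_fetch_time(fetch_page, runs=3):
    """Fetch one page `runs` times and return the mean wall-clock time.

    `fetch_page` is any zero-argument callable that pulls a single page,
    e.g. lambda: list(coll.find().skip(s).limit(n)) with pymongo.
    """
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        fetch_page()
        total += time.perf_counter() - start
    return total / runs


# Usage with a stand-in "page fetch"; a real run would hit MongoDB.
avg = avg_fetch_time(lambda: sum(range(100_000)))
```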

The following image shows how the two approaches fare against each other.

MongoDB Pagination Benchmark Results

A key observation: up to a count of around 500-600, the two approaches are comparable, but once the count crosses that threshold, the response time of the skip-and-limit approach rises sharply while the other stays flat. The approach using _id and limit gives almost constant performance, independent of the size of the result set.

I tried running this test on different machines with different disks, but the results were similar. I think diving deeper into MongoDB's database driver would yield better information about this behavior. The spikes you can see in the response times are due to disk contention.

In short:

  • For huge result sets, paginating using _id and limit is far better than using skip and limit.
  • For smaller result sets, it does not matter much, but prefer skip and limit.

An interesting thing I observed is that once the page size crosses 100, the gap between the two approaches narrows somewhat. I am yet to run a detailed benchmark on that, since such a use case (a page size of more than 100) is pretty rare in practical applications.

You can find the Python code used for this benchmark here. If you have any suggestions or improvements, do let me know.

