Azure Cosmos DB vs MongoDB Atlas

A comparison of features, capabilities, and costs, with an analysis of when to choose one over the other.

I have been involved in discussions around Cosmos DB vs MongoDB Atlas quite a few times, and I have realized that some of the typical questions that these discussions revolve around include:

  • Which is a better choice as document storage?
  • What factors should be considered to decide which better fits the project needs?
  • What about performance, storage capacity, and throughput scale?
  • What about backup?
  • How much is my database going to cost?

I have tried to present a concrete comparison with an analysis that can help you make an informed choice. Before we start, however, here are some notes that you should be aware of:

  • I have made a simple comparison based on criteria that mattered for projects I was involved in. The criteria might be different for you.
  • Cosmos DB is more than just document storage; it has other storage engines too, but this blog post is only about its MongoDB-API storage functionality.
  • Both databases are being actively developed and the situation is changing rapidly.
  • MongoDB Atlas works with multiple cloud providers (including Azure). The storage price differs between them. This affects both data storage and backup price for Atlas.
  • RU (request unit) is Cosmos DB's normalized unit of throughput, a rough equivalent of IOPS.



Feature comparison: MongoDB Atlas vs Cosmos DB

Storage scaling
  • MongoDB Atlas: Automatic storage scaling for clusters of tier M10 or larger.
  • Cosmos DB: Automatic scaling, without limit.

Capacity (CPU/memory) scaling
  • MongoDB Atlas: Automatic vertical capacity scaling for clusters of tier M10 or larger; horizontal scaling with sharding is available only for tier M30 or larger.
  • Cosmos DB: Automatic scaling, without limit.

Throughput scaling
  • MongoDB Atlas: Limited by the cluster size tier; clusters support vertical autoscaling starting with tier M10.
  • Cosmos DB: Manually limited by provisioned RUs; potentially unlimited.

Partitioning / sharding
  • MongoDB Atlas: Based on a user-defined key; the key can be changed (starting from v4.0).
  • Cosmos DB: Based on a user-defined key; the key cannot be changed.

Uptime
  • MongoDB Atlas: 99.995%, guaranteed only for clusters of tier M30 or larger.

Fault tolerance
  • MongoDB Atlas: A minimum of three data nodes per replica set, automatically deployed across availability zones (AWS), fault domains (Azure), or zones (GCP) for continuous application uptime during outages and routine maintenance.
  • Cosmos DB: Within each region, every partition is protected by a replica set, with all writes replicated and durably committed by a majority of replicas. Replicas are distributed across as many as 10-20 fault domains.

Backup
  • MongoDB Atlas: User-defined backup policy and on-demand snapshots, only for tier M10 or larger. Backup is charged separately but included in the common bill.
  • Cosmos DB: Automatic backup every 4 hours; the last 2 backups are kept. On-demand restore requires contacting Azure support. Included in the price.

Archiving
  • MongoDB Atlas: Automatic movement of data to a read-only archive; only for tier M10 or larger.
  • Cosmos DB: Not available out of the box; can be built with Azure Functions, Azure Data Factory, and so on.

Max document size
  • MongoDB Atlas: 16 MB
  • Cosmos DB: 2 MB

Tools
  • MongoDB Atlas: All tools compatible with MongoDB; MongoDB in a Docker container for testing and development.
  • Cosmos DB: All tools compatible with MongoDB v3.6; the Azure portal with a built-in storage explorer; a standalone desktop storage explorer (Windows/macOS/Linux); an emulator for development and testing (Windows only).

MongoDB API support
  • MongoDB Atlas: Full API support; user-defined version, up to the latest.
  • Cosmos DB: A subset; currently MongoDB v3.6.

Indexing
  • MongoDB Atlas: Indexes can be created and deleted dynamically.
  • Cosmos DB: Indexes can be created and deleted dynamically, with some limitations: only the _id field is indexed by default; to apply a sort to a query, you must create an index on the fields used in the sort operation; unique indexes can be created only when the collection is empty.

TTL
  • MongoDB Atlas: Yes.
  • Cosmos DB: Yes; TTL deletions consume RUs.

Consistency model
  • Cosmos DB: Five levels; the higher the level, the more RUs a request consumes, so the higher the price.

Customer support
  • MongoDB Atlas: A basic plan is included, but it has no response-time SLA; the Developer plan costs 49 USD/month. Cloud-provider support is probably needed anyway.
  • Cosmos DB: Included in the Azure support plan; the Standard plan costs 100 USD/month.

Global replication
  • MongoDB Atlas: Supported.
  • Cosmos DB: Supported; can be enabled at any time with a few clicks in the Azure portal.

Price model
  • MongoDB Atlas: Predefined price per cluster per hour of usage, with predefined CPU/memory/storage capacity.
  • Cosmos DB: Pay-as-you-go, per provisioned RUs and occupied storage.

Price-critical factors
  • MongoDB Atlas: Cluster memory and storage capacity.
  • Cosmos DB: IOPS, document size, and consistency level. Writes are about 5x more expensive than reads.

Price examples
  • MongoDB Atlas (Azure):
    M20, 4 GB RAM, 32 GB storage: ~160 USD/month, plus support 49 USD/month and backup ~50 USD/month (depends on size); total ~260 USD/month.
    M30, 8 GB RAM, 32 GB storage: ~408 USD/month, plus support 49 USD/month and backup ~50 USD/month (depends on size); total ~508 USD/month.
  • Cosmos DB (calculated for a single region, 32 GB database):
    1 KB documents, 2,500 reads/s + 1,000 writes/s: ~443 USD/month.
    1 KB documents, 2,500 reads/s + 2,500 writes/s: ~877 USD/month.
    1 KB documents, 2,500 reads/s + 5,000 writes/s: ~1,590 USD/month.
    10 KB documents, 2,500 reads/s + 2,500 writes/s: ~1,634 USD/month.
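The Cosmos DB figures above can be roughly reproduced with back-of-the-envelope arithmetic. The sketch below is mine, not from official pricing: it assumes roughly 0.008 USD per 100 RU/s per hour of provisioned throughput, roughly 0.25 USD per GB/month of storage, and the rule of thumb that a ~1 KB point read costs ~1 RU while a ~1 KB write costs ~5 RU (consistent with writes being ~5x more expensive than reads).

```python
# Back-of-the-envelope Cosmos DB cost estimate.
# Unit prices and per-operation RU costs are assumptions, not official figures.
RU_PRICE_PER_100RUS_PER_HOUR = 0.008   # USD, assumed provisioned-throughput price
STORAGE_PRICE_PER_GB_MONTH = 0.25      # USD, assumed storage price
HOURS_PER_MONTH = 730

def monthly_cost(reads_per_s, writes_per_s, storage_gb,
                 read_ru=1.0, write_ru=5.0):
    """Estimate monthly cost from sustained IOPS on ~1 KB documents."""
    provisioned_rus = reads_per_s * read_ru + writes_per_s * write_ru
    throughput = (provisioned_rus / 100) * RU_PRICE_PER_100RUS_PER_HOUR * HOURS_PER_MONTH
    storage = storage_gb * STORAGE_PRICE_PER_GB_MONTH
    return throughput + storage

# 2,500 reads/s + 1,000 writes/s on a 32 GB database:
print(round(monthly_cost(2500, 1000, 32)))  # 446, close to the ~443 USD/month above
```

With these assumptions the write-heavy scenarios scale the same way as in the examples: doubling the write rate roughly doubles the bill, because each write consumes five times the RUs of a read.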

Customer considerations are key to making a choice

  1. Despite the attractiveness of Cosmos DB’s pay-as-you-go price model and the budgeting possibilities it offers, one customer preferred the predefined cluster pricing of MongoDB Atlas: for his requirements, a fixed, predefined price was clearer and safer to budget for.
  2. For another project, Cosmos DB’s 2 MB document size limit was an impediment because binary attachments were inserted into documents. In that case, the 16 MB document limit of MongoDB Atlas looked much more attractive.
  3. A government customer found it difficult to use MongoDB Atlas for the simple reason that it meant one more service supplier. Even though the customer already had an agreement with Azure, using MongoDB Atlas (even on top of Azure) required following bureaucratic purchasing routines and entering into a new service agreement.
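The attachment problem in point 2 is easy to hit: an embedded binary attachment counts in full toward the document size limit. A minimal sketch of the sizing check (the 2 MB and 16 MB limits come from the comparison above; the size estimate is simplified and ignores per-field BSON overhead):

```python
# Decide whether a document with an embedded binary attachment fits a
# service's document-size limit. Limits are from the comparison above;
# the size estimate is simplified (ignores per-field BSON overhead).
COSMOS_LIMIT = 2 * 1024 * 1024    # 2 MB (Cosmos DB with the MongoDB API)
ATLAS_LIMIT = 16 * 1024 * 1024    # 16 MB (MongoDB / Atlas)

def fits(doc_bytes: int, attachment_bytes: int, limit: int) -> bool:
    """True if the document plus its raw binary attachment stays under the limit."""
    return doc_bytes + attachment_bytes <= limit

attachment = 5 * 1024 * 1024  # e.g. a 5 MB PDF embedded as binary
print(fits(10_000, attachment, COSMOS_LIMIT))  # False - over the 2 MB Cosmos limit
print(fits(10_000, attachment, ATLAS_LIMIT))   # True  - fits within 16 MB
```

For attachments that exceed even the 16 MB limit, MongoDB offers GridFS for splitting files across documents; Cosmos DB has no equivalent within its MongoDB API.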

My conclusions so far

Cosmos DB is good:

  1. For small documents, preferably less than 1 KB, as the cost is much lower for smaller documents.
  2. When you read more often than you write, because writes are about 5x more expensive in Cosmos DB.
  3. If you want to start small and pay as you go.
  4. When you want support included in your Azure subscription, even for a small database.
  5. If you need guaranteed latency regardless of load.

MongoDB Atlas is good:

  1. For documents of any size; it is the only choice for documents larger than 2 MB.
  2. When you want a fixed budget for storage.
  3. When you use MongoDB API features that are not covered by Cosmos DB.
  4. When you need the freedom to create unique indexes.
  5. When you write more often than you read; with all other parameters equal, MongoDB storage will be cheaper.
  6. If you want to define your own backup policy.

In some cases, both solutions are good for prototyping and testing.

  • It is easy and costs nothing to start with Cosmos DB (the free tier covers up to 400 RU/s and 5 GB). The pay-as-you-go pricing model requires no up-front investment, and the local storage emulator is free.
  • MongoDB Atlas has a free tier for databases up to 512 MB, and it is easy to run MongoDB in a container on any platform.
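To get a feel for what 400 RU/s of free Cosmos DB throughput buys, the sketch below applies rule-of-thumb per-operation costs (a ~1 KB read ≈ 1 RU, a ~1 KB write ≈ 5 RU; these are approximations I am assuming, not official figures):

```python
# How many sustained operations per second fit in the Cosmos DB free tier (400 RU/s)?
# Assumes ~1 RU per 1 KB read and ~5 RU per 1 KB write (rule-of-thumb values).
FREE_TIER_RUS = 400

def max_ops_per_second(ru_per_op: float, budget: float = FREE_TIER_RUS) -> int:
    """Largest sustained op rate that stays within the RU/s budget."""
    return int(budget // ru_per_op)

print(max_ops_per_second(1))  # 400 reads/s for a read-only workload
print(max_ops_per_second(5))  # 80 writes/s for a write-only workload
```

Plenty for a prototype, and another illustration of the 5x read/write cost asymmetry mentioned above.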

About the Author

Mikael Chudinov
Managing Delivery Architect – Capgemini

Mikael is a dedicated and performance-driven IT professional with over 15 years of expertise in software development and 6 years in solution and cloud architecture.
