Horizontally Scale Your MongoDB Deployment with Bitnami, Kubernetes and Helm

Do you need a solid, scalable NoSQL foundation for your production applications? And do you run a Kubernetes environment? If your answer to both these questions is "yes", then this blog post is for you.

MongoDB is a popular, document-oriented NoSQL database system, used by some of the largest companies in the world and tuned to deliver high performance without sacrificing flexibility. And Bitnami now offers a MongoDB Helm chart that makes it quick and easy to deploy MongoDB in a Kubernetes-based environment. This chart, pre-configured with security best practices, lets you deploy a horizontally scalable MongoDB cluster with separate primary, secondary and arbiter nodes.

How does it work? Keep reading.

Deployment Options

The Bitnami MongoDB Helm chart can be deployed on any Kubernetes cluster. For this blog post, I'll use the Microsoft Azure Container Service (AKS), but the chart works the same way on Google Kubernetes Engine (GKE), Amazon Elastic Container Service for Kubernetes (EKS) or Minikube for quick testing.

With the chart, Bitnami provides two configuration files: values.yaml, which initializes the deployment using a set of default values and is intended for development or test environments, and values-production.yaml, which is intended for production environments. The values-production.yaml configuration offers these additional benefits:

  • It enables a MongoDB replica set.
  • Each participant in the replica set runs in its own StatefulSet with a stable network identity, so you always know where to find the primary, secondary and arbiter nodes.
  • The number of secondary and arbiter nodes can be scaled out independently.
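If you're curious exactly what the production profile changes, you can fetch both files and compare them. A quick sketch (the URLs follow the same pattern as the curl command used later in this post; the download and diff are commented out since they need network access):

```shell
# Base URL for the chart's values files (same repository used later in this post)
BASE="https://raw.githubusercontent.com/kubernetes/charts/master/stable/mongodb"
echo "$BASE/values.yaml"
echo "$BASE/values-production.yaml"
# To compare the two profiles locally (requires network access):
# curl -sO "$BASE/values.yaml" && curl -sO "$BASE/values-production.yaml"
# diff values.yaml values-production.yaml
```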

Deployment and Testing

To deploy the Bitnami MongoDB chart on AKS, provision a new Kubernetes cluster on Microsoft Azure and then install and configure kubectl with the necessary credentials. You will find a detailed walkthrough of these steps in our AKS guide. Once done, download the values-production.yaml file and deploy the chart using the commands below, replacing the ROOT_PASSWORD placeholder with a secure password:

$ curl -O https://raw.githubusercontent.com/kubernetes/charts/master/stable/mongodb/values-production.yaml
$ helm install --name my-release -f values-production.yaml stable/mongodb --set mongodbRootPassword=ROOT_PASSWORD
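If you'd rather not type a password by hand, one option is to generate it locally first. A minimal sketch, assuming the same my-release name as above (the helm command is commented out since it needs a configured cluster):

```shell
# Generate a random root password locally
ROOT_PASSWORD="$(head -c 18 /dev/urandom | base64 | tr -d '/+=\n')"
echo "Generated a ${#ROOT_PASSWORD}-character password"

# Then pass it to helm (requires a configured Kubernetes cluster):
# helm install --name my-release -f values-production.yaml stable/mongodb \
#   --set mongodbRootPassword="$ROOT_PASSWORD"
```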

This command creates a deployment with the name my-release. You can use a different release name if you wish; just remember to update it in the previous and following commands. Monitor the pods until the deployment is complete:

$ kubectl get pods -w

If you see output similar to that shown below, your MongoDB cluster is deployed and ready for action.


To check that everything is working correctly, connect to the primary node using a MongoDB client pod, specify the root password as shown, and then run the rs.status() command:

$ kubectl run my-release-mongodb-client --rm --tty -i --image bitnami/mongodb --command -- mongo admin --host my-release-mongodb -u root -p ROOT_PASSWORD

Check the members section of the output. It should show you the role of the current node (in this case, it's the primary node) as well as the roles and IP addresses of the other members of the cluster. If you see output similar to that shown below, your cluster nodes are connected and talking to each other.

Cluster nodes

Default Network Topology and Security

The default Bitnami MongoDB deployment is configured with three nodes: a primary, a secondary and an arbiter. However, you can scale the cluster up or down by adding or removing nodes even after the initial deployment. If you end up with an even number of data-bearing nodes, it's a good idea to add an arbiter node. Arbiter nodes do not store any data; their function is to provide an additional vote in replica set elections.
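Why does an even-sized set benefit from an arbiter? Elections need a strict majority of voting members, so a quick sanity check on the arithmetic (a generic sketch of the majority rule, not specific to this chart):

```shell
# A replica set elects a primary only with a strict majority of voting members:
# majority(N) = floor(N/2) + 1
for N in 2 3 4 5; do
  MAJORITY=$(( N / 2 + 1 ))
  TOLERATED=$(( N - MAJORITY ))
  echo "$N voters -> majority of $MAJORITY needed, tolerates $TOLERATED lost votes"
done
```

Note that with four voters the set tolerates only one lost vote, the same as with three; adding an arbiter as a fifth voter raises the tolerance to two.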

The Bitnami MongoDB chart is configured with password-based authentication enabled by default, and no RBAC rules are required for its deployment. The MongoDB service is available on the standard MongoDB port 27017. Remote connections are enabled for this port by default.

Data Replication and Persistence

Bitnami's MongoDB Helm chart provides a horizontally scalable and fault-tolerant deployment. One node in the cluster is designated as the primary node and receives all write operations, while other nodes are designated as secondary nodes and asynchronously replicate the operations performed by the primary node on their own copies of the data set. If the primary node fails, an election takes place and a secondary node that receives a majority of votes becomes the new primary node.

Data persistence is enabled by default in the chart configuration. A separate persistent volume stores the data on the primary node and on each secondary node. If a secondary node fails, a new one is scheduled automatically. If the primary node fails, applications connected to it may experience some downtime until a new primary is elected. However, no data loss should occur, as the data is stored on a persistent volume.

By default, the values-production.yaml configuration file initializes an 8 GB persistent volume on each node. However, it's possible to modify the size of the disk by setting different values at deployment time, as in the example below which configures a 16 GB persistent volume instead:

$ helm install --name my-release -f values-production.yaml stable/mongodb --set mongodbRootPassword=ROOT_PASSWORD --set data.persistence.size=16Gi

Application Usage

It's quite easy to connect an application running in the same cluster to the MongoDB service. Here's a simple example that illustrates the process, using a script running in a Bitnami Node.js container in the same cluster:

  • Start by deploying Bitnami's Node.js container in the cluster:

      $ kubectl run mean --image=bitnami/node -- sleep 3600
  • Get the name of the pod, then connect to the running container's console (replace the pod name below with the one from your output):

      $ kubectl get pods --namespace default -l 'run=mean'
      $ kubectl exec -it mean-7896b5c6c-45zng bash
  • At the console, install the MongoDB connector and a text editor:

      $ npm install mongodb
      $ apt-get install nano
  • Create a script with the content below. The script connects to the MongoDB Kubernetes service, which proxies the primary MongoDB node, so the connection string only needs the name of the Kubernetes service.

      // server.js
      const MongoClient = require('mongodb').MongoClient;
      const uri = 'mongodb://my-release-mongodb:27017';
      // connect to the MongoDB database
      MongoClient.connect(uri, { useNewUrlParser: true }, function(err, db) {
        if (!err) {
          console.log('Successfully connected to MongoDB!');
          db.close();
        } else {
          throw err;
        }
      });
  • Run the script:

      $ node server.js

If you see a message like "Successfully connected to MongoDB!", your script has successfully connected to the MongoDB service.
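One caveat: the chart enables password authentication by default, so a real application will usually need credentials in its connection string. A hypothetical sketch of assembling such a URI (the user, password placeholder and service name follow the examples above):

```shell
# Build an authenticated MongoDB connection URI from its parts
MONGO_USER="root"
MONGO_PASSWORD="ROOT_PASSWORD"   # replace with your actual password
MONGO_HOST="my-release-mongodb"  # the Kubernetes service name used above
MONGO_URI="mongodb://${MONGO_USER}:${MONGO_PASSWORD}@${MONGO_HOST}:27017/admin"
echo "$MONGO_URI"
```

The resulting URI can be dropped into the server.js script above in place of the unauthenticated one.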

Horizontal Scaling

You can easily scale the cluster up or down by adding or removing nodes. When using values-production.yaml, the chart creates separate StatefulSets for arbiter, primary and secondary nodes. Depending on your requirements, simply scale the corresponding arbiter or secondary StatefulSet up or down; the number of primary nodes should not be scaled up.

For example, to scale the number of secondary nodes up to 3, use the command below:

$ kubectl scale statefulset my-release-mongodb-secondary --replicas=3
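New secondaries take a little while to complete their initial sync before they can serve reads. A sketch of how to sanity-check the result (the kubectl commands are commented out since they need the live cluster, and the primary pod name is an assumption based on the release name used in this post):

```shell
# Expected replica set size after scaling secondaries to 3
PRIMARY=1; SECONDARIES=3; ARBITERS=1
TOTAL=$(( PRIMARY + SECONDARIES + ARBITERS ))
echo "Expected replica set members after scaling: $TOTAL"

# Confirm against the live cluster (requires kubectl access):
# kubectl get pods -w
# kubectl exec -it my-release-mongodb-primary-0 -- mongo admin -u root -p ROOT_PASSWORD \
#   --eval "rs.status().members.length"
```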

MongoDB does not have any minimum hardware requirements, so the default virtual machine type provisioned by AKS should work without errors. However, depending on the likely workload of your database cluster, you may want to use a different machine type and/or additional disks.


You can update your deployment to the latest version of the chart with these commands:

$ helm repo update
$ helm upgrade my-release stable/mongodb -f values-production.yaml --set mongodbRootPassword=ROOT_PASSWORD

If this sounds interesting to you, why not try it? Deploy the Bitnami MongoDB Helm chart on Microsoft Azure Container Service (AKS) and then tweet @bitnami and tell us what you think!