RabbitMQ is a general-purpose message broker designed for consistent, highly available messaging scenarios (both synchronous and asynchronous). To enable quick and easy deployment of RabbitMQ, Bitnami offers a number of different solutions: a single RabbitMQ cloud instance, a multi-node RabbitMQ cloud cluster (for Google Cloud Platform and Microsoft Azure), a RabbitMQ Docker container, a RabbitMQ Helm chart for Kubernetes, as well as RabbitMQ native installers and virtual machines.
To this feast of choices, you can now add another option: Bitnami's RabbitMQ Terraform template for Oracle Cloud users. This template sets up a RabbitMQ cluster on Oracle Cloud Infrastructure (OCI) that is preconfigured in line with current best practices for security, scalability and reliability. In this blog post, I'll walk you through the details of working with the RabbitMQ Terraform template and the resulting OCI cluster.
Before starting the deployment, make sure that you have the following important pieces of information from the Oracle Cloud Infrastructure Web dashboard. If you can't find them, our starter guide will show you where to look.
Once you've acquired the necessary information, follow the steps below:
Prepare an SSH key pair on your local system. This key pair will be needed to log in to your RabbitMQ instances. For example:
$ ssh-keygen -t rsa -f ~/.ssh/id_rsa_rabbitmq
Obtain the Bitnami RabbitMQ Terraform template and initialize Terraform:
$ git clone https://github.com/bitnami/oci-multi-tier
$ cd oci-multi-tier/rabbitmq
$ terraform init
Edit the env-vars file included with the Bitnami RabbitMQ Terraform template and fill in the required values using the information obtained from the Oracle Cloud Infrastructure Web dashboard. Add the path to the deployment SSH keys on your local system.
### Authentication details
export TF_VAR_tenancy_ocid="<tenancy OCID>"
export TF_VAR_user_ocid="<user OCID>"
export TF_VAR_fingerprint="<PEM key fingerprint>"
export TF_VAR_private_key_path="<path to the private key that matches the fingerprint above>"
### Region
export TF_VAR_region="<region in which to operate, example: us-ashburn-1, us-phoenix-1>"
### Compartment
export TF_VAR_compartment_ocid="<compartment OCID>"
### Public/private keys used on the instances
export TF_VAR_ssh_public_key_path="<path to public key>"
export TF_VAR_ssh_private_key_path="<path to private key>"
Source the environment and deploy the template:
$ . env-vars
$ terraform apply
You will be prompted to confirm the deployment, and Terraform will then proceed.
Once the deployment is complete, execute the following command to see the IP addresses of the nodes and the RabbitMQ username and password:
$ terraform output
Connect to the primary RabbitMQ node using SSH:
$ ssh bitnami@PRIMARY-NODE-PUBLIC-IP-ADDRESS
Once connected, run the command below to start the RabbitMQ control script and check replication status:
$ sudo rabbitmqctl cluster_status
If the output shows a list of running cluster nodes, your cluster is good to go!
The cluster operates on the standard RabbitMQ ports 5672 and 15672. Port 15672 is for the management panel and port 5672 is for applications. For security reasons, these ports are not open for external connections by default. To allow external access to the RabbitMQ cluster, you can use an SSH tunnel or open the port(s) for external access using an IP address whitelist. Refer to our documentation for more information on these options.
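As an illustration of the SSH tunnel option, the following sketch builds the tunnel command for the management panel; the node IP below is a placeholder, so substitute the address shown by terraform output:

```shell
# Sketch: forward the management panel (port 15672) over SSH instead of
# opening it publicly. 203.0.113.10 is a documentation placeholder; use
# the primary node's public IP from `terraform output`.
NODE_IP="203.0.113.10"
TUNNEL_CMD="ssh -N -L 15672:localhost:15672 bitnami@${NODE_IP}"
# Running the command above makes the UI reachable at http://localhost:15672
echo "${TUNNEL_CMD}"
```

With the tunnel active, pointing a browser at http://localhost:15672 reaches the management panel without exposing the port externally.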
The main RabbitMQ configuration file is at /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq-env.conf, and the RabbitMQ logs are stored in the /opt/bitnami/rabbitmq/var directory.
The RabbitMQ Terraform template automatically configures multiple nodes into a single logical broker. The solution is configured with three nodes by default, but the number of nodes can be changed at deployment time or later. Each node runs the RabbitMQ application, and the nodes share users, virtual hosts, queues, exchanges, bindings, and runtime parameters.
For maximum performance and reliability, the Bitnami RabbitMQ Terraform template includes specific configuration tweaks. These are listed below:
The Bitnami RabbitMQ Terraform template doesn't configure mirrored queues; instead, each queue resides on a single node. This is deliberate: mirroring generates a lot of inter-node traffic when copying queues across nodes, which slows the cluster down and makes the system harder to manage.
RabbitMQ supports two types of nodes: disk nodes and RAM nodes. RAM nodes keep their state only in memory, while disk nodes persist it to disk. In almost all cases, disk nodes are preferable for data-persistence reasons. Bitnami configures the broker with disk nodes, with their disk free limit set to {mem_relative, 1.0}.
Network partitions are set to ignore mode (recommended for reliable networks).
The Shovel and Federation plugins are not used. Message queues reside on one node by default but are visible and reachable from all nodes. As a result, a connected client can see the queues on all nodes (even queues not located on the node it is connected to).
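As a sketch of what these choices look like in RabbitMQ's classic configuration format (the exact file contents shipped by Bitnami may differ), the disk free limit and partition-handling settings above map to entries like:

```erlang
%% Illustrative rabbitmq.config fragment (not the literal Bitnami file)
[
  {rabbit, [
    %% Block publishers when free disk space drops below 1.0 x total RAM
    {disk_free_limit, {mem_relative, 1.0}},
    %% Do nothing special on a network partition (suits reliable networks)
    {cluster_partition_handling, ignore}
  ]}
].
```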
You can use the RabbitMQ HTTP API to monitor cluster and node metrics, such as the number of messages, connections, disk and memory usage, and more. Here's an example that returns top-level cluster metrics. Run it at the console on your primary RabbitMQ node, replacing the USERNAME and PASSWORD placeholders with the credentials shown by the terraform output command:
$ curl http://USERNAME:PASSWORD@localhost:15672/api/overview | python -m json.tool
You can also see node-level metrics:
$ curl http://USERNAME:PASSWORD@localhost:15672/api/nodes/ | python -m json.tool
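To give a flavor of working with these responses, here's a short Python sketch that reduces an /api/overview payload to a few headline numbers. The sample payload below is a hand-written, abbreviated fragment for illustration; the field names (queue_totals, object_totals) are part of the management API:

```python
import json

# Illustrative, abbreviated fragment of an /api/overview response;
# real responses contain many more fields.
sample_overview = {
    "rabbitmq_version": "3.7.8",
    "cluster_name": "rabbit@rabbitmq0",
    "queue_totals": {"messages": 12, "messages_ready": 10, "messages_unacknowledged": 2},
    "object_totals": {"connections": 3, "channels": 4, "exchanges": 8, "queues": 2},
}

def summarize_overview(overview):
    """Pull a few headline metrics out of an /api/overview payload."""
    return {
        "version": overview["rabbitmq_version"],
        "cluster": overview["cluster_name"],
        "messages": overview["queue_totals"]["messages"],
        "connections": overview["object_totals"]["connections"],
    }

print(json.dumps(summarize_overview(sample_overview), indent=2))
```

The same extraction works on the output of the curl commands above once it is parsed with json.loads.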
From the above, it should be clear that Bitnami's RabbitMQ Terraform template is the quickest and easiest way to get a horizontally scalable RabbitMQ cluster running on Oracle Cloud. Try it today and then tweet @bitnami to tell us what you liked (or didn't like) about it!