# Kubernetes & MQTT
## Usage
Currently, Kamea uses Kubernetes to deploy a RabbitMQ cluster, which backs the MQTT ingestion feature. In the future, Kamea will support a complete Kubernetes-based setup to remove dependencies on cloud providers and enable on-premise deployments.
## Support
For now, the main deployment target is AKS (Azure Kubernetes Service), but the final implementation will provide tooling to support EKS and on-premise deployments.
## Prerequisites
### TLS
To set up the RabbitMQ infrastructure, you need a Public Key Infrastructure (PKI) able to issue TLS certificates.
You will need certificates for:

- the RabbitMQ server
- each device (when using certificate-based authentication)
These certificates are expected to be signed by a root CA. This guide does not explain how to generate the certificates; it is up to the customer to do so by following their company's processes.
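While certificate issuance is left to your company's processes, the expected chain layout can be sketched with `openssl` for local experimentation. All file names and subjects below are illustrative; a throwaway chain like this is only for testing the setup, never for production:

```shell
# Generate a throwaway self-signed root CA (illustrative subject)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -subj "/CN=Example Root CA" -days 1
# Generate a server key and CSR for RabbitMQ (hypothetical hostname)
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=rabbitmq.example.com"
# Sign the server certificate with the root CA
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -out server.pem -days 1
# Verify the chain: the server certificate must validate against the CA
openssl verify -CAfile ca.pem server.pem
```

The final `openssl verify` line prints `server.pem: OK` when the chain is consistent, which is the property the RabbitMQ setup relies on.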
### (Optional) Secrets vault
Secrets used by the cluster are stored in a secrets vault. On Azure, the Key Vault is used. Secrets are mounted from the Key Vault into the pods using the Secrets Store CSI Driver. The default Terraform configuration creates this resource for you, but you can provide your own vault by passing it as the `key_vault` variable of the `aks` module when running your own Terraform configuration.
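As an illustration only, wiring an existing vault into the `aks` module might look like the following. The module source path and the exact shape of the `key_vault` value are assumptions, not part of this guide; check the module's variable definition for the type it expects:

```hcl
# Sketch: pass a pre-existing Key Vault to the aks module instead of
# letting Terraform create one. Path and attribute shapes are assumed.
module "aks" {
  source    = "./modules/aks"              # hypothetical module path
  key_vault = azurerm_key_vault.existing   # pre-existing vault resource
}
```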
## Setup
Cluster creation is handled entirely by the pipelines. As with some other Kamea components, it requires running the initial Terraform job with the `RUN_TARGETED_TERRAFORM` variable set to `true`. This step is described in the main setup guide.
> **Note**
>
> When creating an AKS resource, a resource group is automatically created in the same Azure subscription. It contains resources used by the cluster. Its name can be configured; see the table below.
Add the following environment variables in GitLab to enable Kubernetes creation by the pipelines.
| Key | Value |
|---|---|
| `AKS_CLUSTER_NAME` | Name of the AKS resource in Azure |
| `AKS_NODE_RESOURCE_GROUP` | Name of the resource group created to host the AKS underlying resources |
| `AKS_RMQ_BASE_DOMAIN` | Prefix used to assign a domain name to RabbitMQ. The complete domain name will be `<prefix>.<region>.cloudapp.azure.com` |
| `AKS_WHITELISTED_ADMIN_IPS` | IP addresses whitelisted to access the Kubernetes control plane. Must use the Terraform array format: `["1.2.3.4/32","5.6.7.8/32"]` |
| `USE_AKS` | `true` |
| `USE_MQTT` | `true` |
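The `AKS_WHITELISTED_ADMIN_IPS` value (and later `AKS_NODE_IPS`) must parse as a Terraform-style array of strings. A small sanity check, assuming Python is available locally, can catch formatting mistakes before the pipeline runs; this helper is not part of Kamea, just a convenience sketch:

```python
import json


def is_terraform_string_list(value: str) -> bool:
    """Check that a variable value parses as a JSON/Terraform array of
    strings, e.g. '["1.2.3.4/32","5.6.7.8/32"]'."""
    try:
        parsed = json.loads(value)
    except ValueError:
        return False
    return isinstance(parsed, list) and all(isinstance(x, str) for x in parsed)


print(is_terraform_string_list('["1.2.3.4/32","5.6.7.8/32"]'))  # True
print(is_terraform_string_list('1.2.3.4/32'))                   # False
```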
Then, you can run the targeted Terraform pipeline job to create AKS. In the output of this job, note the value of the `node_ips` output, and copy it into a GitLab environment variable, keeping the Terraform format.
| Key | Value |
|---|---|
| `AKS_NODE_IPS` | Output IPs of the Kubernetes nodes. Example: `["1.2.3.4","5.6.7.8"]` |
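If you need to retrieve the value again outside the job log, and assuming you have access to the Terraform state, the standard Terraform CLI can print it in the expected format (a sketch):

```shell
# Prints the node_ips output as a compact JSON array, which matches the
# Terraform-style format expected for AKS_NODE_IPS.
terraform output -json node_ips
```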
Now, you need to create the TLS certificate for RabbitMQ and the CA used to validate the devices' certificates when using mTLS. The values of the two certificates and the RabbitMQ private key must be available in PEM format. In the Key Vault (either the new one created by Terraform or the existing one you provided), create these three secrets:

- `rmqtlscertificate`: the RabbitMQ certificate, in PEM format
- `rmqtlsprivatekey`: the RabbitMQ private key, in PEM format
- `rmqtlsca`: the CA certificate used to authenticate the devices, in PEM format
The simplest way to submit these values is to use the `az` CLI:

```shell
az keyvault secret set --name <secret name> --vault-name <name of your key vault> -f <path to the file>
```
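For example, assuming the three PEM files are named `server.pem`, `server.key`, and `ca.pem`, and the vault is named `my-vault` (all illustrative names), the three secrets can be created as follows:

```shell
az keyvault secret set --name rmqtlscertificate --vault-name my-vault -f server.pem
az keyvault secret set --name rmqtlsprivatekey  --vault-name my-vault -f server.key
az keyvault secret set --name rmqtlsca          --vault-name my-vault -f ca.pem
```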
The setup will be completed by the next pipeline execution, as described in the main setup guide.