
Using SSL/TLS certificates from Azure Key Vault in Kubernetes pods

Eugene Romero
16 May 2023

How to make Kubernetes pods trust internal HTTPS services.

NOTE: This post builds upon my previous post Accessing Azure Key Vault secrets from Kubernetes, and assumes understanding of the subject discussed there.

A common task I face as a DevOps engineer has to do with injecting TLS (formerly SSL) certificates into an application or service. Why would this be needed? There can be many reasons, although by far the most common one I've encountered has to do with internal TLS certificates.

Most medium-to-large enterprises use internal TLS certificates to authenticate internal connections. By internal, I mean certificates which have not been obtained from a publicly trusted Certificate Authority, but have instead been generated locally for use by internal applications. This model requires that any client or user attempting to connect have the corresponding CA certificate installed, which makes the system trust certificates issued by that particular (internally created, and unique to the organization) Certificate Authority.

For example, an organization might have an internal log sink/aggregator which accepts connections over HTTPS. If this sink is only available within the internal network, its TLS certificate will probably have been generated in-house. Now, imagine a microservice running in Kubernetes needs to send logs to this sink. How can we make the Kubernetes pod trust the internal Certificate Authority, so that connections to the log sink are properly secured?

Although there are probably a few different ways of achieving this result, the approach below is one I have used that has worked well for me, and it does not require any additional helper tools, sidecar containers, or other add-ons.

Requirements

To start off, the CA certificate to be installed in the microservice should be stored in an Azure Key Vault. For simplicity, I will assume that this certificate has been saved as a secret. This method should also work if it has been saved as a certificate, although the syntax might be different. Refer to the documentation for more information on how to reference the saved cert.
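For example, the CA certificate could be uploaded as a secret with the Azure CLI. The vault name my-key-vault and the secret name internal-ca-cert are placeholders I will reuse throughout this post:

```bash
# Store the PEM-encoded CA certificate as a Key Vault secret (placeholder names)
az keyvault secret set --vault-name my-key-vault --name internal-ca-cert --file internal-ca.crt
```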

Next, our Kubernetes cluster should already have the Kubernetes Secrets Store CSI Driver set up. For instructions on how to do that, check my previous post on the subject.

The certificate to be used should be in a format that our microservice understands. Since I am using Linux-based microservices, I need to make sure my cert is available as a PEM-encoded file (typically with a .crt extension).
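If the certificate you received is DER-encoded instead, openssl can convert it to PEM; the file names here are just placeholders:

```bash
# Convert a DER-encoded CA certificate to PEM (output format defaults to PEM)
openssl x509 -inform der -in internal-ca.cer -out internal-ca.crt
```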

Finally, I am going to assume our microservice is based on some flavor of Debian. If it isn’t, the location to mount the certificate or the command to be run might be slightly different. Refer to your distribution’s docs for specific instructions on how to update the local certificate store.

Querying the certificate

The certificate can be queried in the same way as any other Key Vault object. One thing to notice is that we do not create a Kubernetes secret from the Azure secret, so the SecretProviderClass has no spec.secretObjects section.
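A minimal sketch of such a SecretProviderClass, assuming the CA cert is stored as a secret named internal-ca-cert in a Key Vault called my-key-vault (both placeholders), and that the driver authenticates with a managed identity as set up in my previous post:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: internal-ca-cert
spec:
  provider: azure
  parameters:
    # Authentication settings depend on how you configured the CSI driver;
    # a user-assigned managed identity is shown here as an example.
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "<identity-client-id>"
    keyvaultName: "my-key-vault"            # placeholder Key Vault name
    tenantId: "<tenant-id>"
    objects: |
      array:
        - |
          objectName: internal-ca-cert      # name of the secret holding the CA cert
          objectType: secret
```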

The secret should now be available for use in our cluster.

Mounting the certificate in our microservice

With the cert now available, we can use the volume functionality in Kubernetes to mount it in our pod. First, we need to declare our secrets provider as an eligible volume:

With that out of the way, we can mount the secret as a file in our pod. This uses a little trick in the volumeMounts functionality of Kubernetes, where a single file can be mounted into a directory, instead of mounting on top of the directory and overriding its contents. To achieve this, we set mountPath to the full path of the file we want to create, and use the subPath field to indicate the specific file in the volume we wish to mount. In this case, the subPath should match the name of the secret we are querying with our SecretProviderClass:
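A sketch of the container's volumeMounts section, assuming a Debian-based image where update-ca-certificates reads from /usr/local/share/ca-certificates:

```yaml
volumeMounts:
  - name: internal-ca
    # Mount the secret as a single file; the file name should end in .crt
    # so that update-ca-certificates picks it up.
    mountPath: /usr/local/share/ca-certificates/internal-ca-cert.crt
    subPath: internal-ca-cert      # must match the objectName in the SecretProviderClass
    readOnly: true
```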

With this, the certificate will be available as a file in our pod. However, most Linux-based systems do not just use whatever files are in that folder at any given moment. Instead, the system needs to be told to update the local certificate store, which is built from whatever files are in that directory. We will do that in the next step.

Updating the certificate store

To update the microservice’s certificate store, we use the update-ca-certificates command. To make sure that our new cert is available for our service from the moment it starts up, we can run this command as part of a spec.containers.lifecycle.postStart instruction. PostStart events are sent immediately after a container is started, which means that our command will be run as soon as possible. Additionally, since volume mounts are performed before startup, we can be sure that our cert will be ready to be included in the local certificate store:
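A minimal sketch of that lifecycle hook, assuming a shell is available in the container image:

```yaml
lifecycle:
  postStart:
    exec:
      # Rebuild the system certificate store so the mounted CA cert is trusted.
      command: ["/bin/sh", "-c", "update-ca-certificates"]
```

Note that update-ca-certificates needs to write to the system certificate store, so this assumes the container runs with sufficient privileges to do so.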

This is the last piece of the puzzle. Putting it all together, our pod deployment should look like this:
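Here is a rough sketch of the complete Deployment, combining the placeholder names used above; the image, labels, and resource names would all need to be adapted to your own environment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
        - name: my-microservice
          image: myregistry.example.com/my-microservice:latest   # placeholder image
          volumeMounts:
            - name: internal-ca
              mountPath: /usr/local/share/ca-certificates/internal-ca-cert.crt
              subPath: internal-ca-cert
              readOnly: true
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "update-ca-certificates"]
      volumes:
        - name: internal-ca
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: internal-ca-cert
```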

At this point, our service should be able to perform HTTPS calls to any other internal services using the same CA provider.

Verifying

To verify that our certificate is indeed working, we can exec into our pod:
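For example, with kubectl (substitute the name of one of your running pods):

```bash
kubectl exec -it my-microservice-pod -- /bin/bash
```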

Once inside, we can try curling a known internal service:

```bash
curl https://internalsite.mycompany.local
```

If the CA certificate has been set up correctly, curl should be able to successfully connect to the HTTPS service without complaining about insecure certificates.
