Configuring HashiCorp Vault as a Certificate Manager on HPCC Systems
Nikita Jha is a student studying at Northview High School in Georgia, USA. She joined the 2021 HPCC Systems Intern Program to work on a project focusing on applying Docker image build and Kubernetes security principles to our Cloud Native Platform, which was released earlier in 2021. As well as the instructions included here, her detailed intern project blog journal includes reflections on her Docker image build experience, in particular focusing on when to use the ‘no cache’ option and when not to use it. It’s well worth a read.
Nikita’s mentor, Michael Gardner (Software Engineer III, LexisNexis Risk Solutions Group), has worked with the HPCC Systems Platform team since 2014. His work focuses on our init systems, administration scripts and project builds. In addition, Michael maintains and contributes to several HPCC Systems Java projects. Nikita Jha’s intern project was also supported by other members of the HPCC Systems Platform Team including Xiaoming Wang (Senior Consulting Software Engineer) and Godson Fortil (Software Engineer II).
In this blog, Nikita Jha shares her experience of setting up and configuring automated certificate management using HashiCorp Vault. As well as demonstrating her understanding of the wider implications of setting up a secure system, her tutorial-style approach gives detailed steps to follow, providing you with everything you need to know to complete the process yourself. The examples apply to Kubernetes clusters but note that they are tailored towards Azure.
******
Security is undoubtedly a major issue, and an important aspect of tackling it is the automated management of TLS certificates. In a Kubernetes environment, HashiCorp Vault can be configured as a certificate manager, enabling HPCC Systems to communicate securely over the network with other services or clients both external and internal to the cluster. When provisioning this framework on HPCC Systems, keep the following in mind to ensure you can set up the certificate manager without encountering too many errors along the way:
- Which version of the platform to use
- Clearing the persistent volume
- Downloading the necessary packages
This blog covers the importance of certificate management in a Cloud environment, how this framework can be set up using the HPCC Systems Cloud Native Platform and some useful tips to help minimize errors.
Why Certificate Management is Important
TLS (Transport Layer Security) is an encryption-in-transit security protocol designed to encrypt data sent over the internet, so that hackers and other malevolent users cannot see private data, such as credit card information and passwords.
For a web application to use TLS, it must have a TLS certificate that contains information about who owns the domain, along with the server’s public key. These pieces of information help validate the server’s identity when data is sent over the internet. Without these certificates, data transactions and internet browsing connections cannot be secured. Websites secured by TLS/SSL certificates show a small padlock icon in the browser address bar and display HTTPS instead of HTTP.
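As an illustration (assuming openssl is available locally; example.com is just a stand-in), you can inspect any public site’s certificate and view its owner and validity dates:

echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates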
Certificate creation has traditionally been a manual process that requires contacting the team in charge of the PKI (Public Key Infrastructure) to generate certificates. Because this process can be long and tedious, each certificate is issued with a very long validity period; asking for a new certificate every day is neither efficient nor convenient. This model is no longer sustainable in a Cloud environment, which is extremely volatile, and with a zero-trust network it is even less workable for containerized environments. Automated certificate management is the key to a secure data transfer system.
Setting up Automated Certificate Management using HashiCorp Vault in the HPCC Systems Cloud Native Platform
Follow these steps:
- Get an AKS (Azure Kubernetes Service) cluster up and running
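If you do not already have a cluster, a minimal one can be created with the Azure CLI; the resource group and cluster names below are placeholders, so substitute your own:

az group create --name rg-hpcc-demo --location eastus
az aks create --resource-group rg-hpcc-demo --name aks-hpcc-demo --node-count 2
az aks get-credentials --resource-group rg-hpcc-demo --name aks-hpcc-demo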
- Open another terminal tab
- Add HashiCorp Vault to your Helm Repository using the following command:
helm repo add hashicorp https://helm.releases.hashicorp.com
- To install HashiCorp Vault with Helm use the following command:
helm install vault hashicorp/vault --set "injector.enabled=false"
- Check the vault pods using the following command, they should be running although not ready yet:
$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
vault-0   0/1     Running   0          6s
- Initialize and Unseal the Vaults. Initialize the Vault with one key share and one key threshold. Save the output in JSON format to allow you to use the unseal key and root token later.
kubectl exec vault-0 -- vault operator init -key-shares=1 -key-threshold=1 -format=json > init-keys.json
Note: Make sure the jq (command-line JSON processor) is installed before continuing.
- View the unseal key found in init-keys.json using the following command:
cat init-keys.json | jq -r ".unseal_keys_b64[]"
- Create an environment variable holding the unseal key as follows:
VAULT_UNSEAL_KEY=$(cat init-keys.json | jq -r ".unseal_keys_b64[]")
- Unseal the Vault running on the vault-0 pod with the $VAULT_UNSEAL_KEY:
kubectl exec vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY
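To confirm the unseal succeeded, you can also check the Vault status; Sealed should now report false:

kubectl exec vault-0 -- vault status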
- Check the pods again using the following command:
kubectl get pods
The Vault pods should now be running and ready.
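The output should look similar to the following (the AGE value will vary):

$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
vault-0   1/1     Running   0          40s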
Configuring the Vault PKI secrets engine (certificate authority)
Follow these steps:
- Use the following command to view the vault root token:
cat init-keys.json | jq -r ".root_token"
- Then create a variable named VAULT_ROOT_TOKEN to capture the root token.
VAULT_ROOT_TOKEN=$(cat init-keys.json | jq -r ".root_token")
- Log in to the Vault running on the vault-0 pod with the $VAULT_ROOT_TOKEN using the following command:
kubectl exec vault-0 -- vault login $VAULT_ROOT_TOKEN
- Start an interactive shell session on the vault-0 pod using the following command:
kubectl exec --stdin=true --tty=true vault-0 -- /bin/sh
You are now working from the vault-0 pod and should see a prompt. Enable the PKI secrets engine at its default path using the following command:
vault secrets enable pki
Then configure the max lease time-to-live (TTL) to 8760h using the following command:
vault secrets tune -max-lease-ttl=8760h pki
Vault CA Key Pair
Vault can accept an existing key pair, or it can generate its own self-signed root. To generate a self-signed certificate valid for 8760h, use the following command:
vault write pki/root/generate/internal common_name=example.com ttl=8760h
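If you want to confirm the root CA was generated, you can read the certificate back from the PKI engine (still from inside the vault-0 shell):

vault read -field=certificate pki/cert/ca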
Configure the PKI secrets engine certificate issuing and certificate revocation list (CRL) endpoints to use the Vault service in the default namespace as follows:
vault write pki/config/urls issuing_certificates="http://vault.default:8200/v1/pki/ca" crl_distribution_points="http://vault.default:8200/v1/pki/crl"
For the public TLS certificates used in this demo, myhpcc.com is used as the domain. First, configure a role named hpcclocal that enables the creation of certificates for the cluster-local default domain and any of its subdomains, as shown in the following sample:
vault write pki/roles/hpcclocal key_type=any allowed_domains=default allow_subdomains=true allowed_uri_sans="spiffe://*" max_ttl=72h
Then configure a role named myhpcc-dot-com that enables the creation of certificates for the myhpcc.com domain and any of its subdomains, as shown here:
vault write pki/roles/myhpcc-dot-com allowed_domains=myhpcc.com allow_subdomains=true allowed_uri_sans="spiffe://*" max_ttl=72h
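To sanity-check the role before moving on, you can ask Vault to issue a short-lived test certificate directly (test.myhpcc.com is an arbitrary example name):

vault write pki/issue/myhpcc-dot-com common_name=test.myhpcc.com ttl=1h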
Next, create a policy named pki that enables read access to the PKI secrets engine paths as shown here:
vault policy write pki - <<EOF
path "pki*" { capabilities = ["read", "list"] }
path "pki/roles/myhpcc-dot-com" { capabilities = ["create", "update"] }
path "pki/sign/myhpcc-dot-com" { capabilities = ["create", "update"] }
path "pki/issue/myhpcc-dot-com" { capabilities = ["create"] }
path "pki/roles/hpcclocal" { capabilities = ["create", "update"] }
path "pki/sign/hpcclocal" { capabilities = ["create", "update"] }
path "pki/issue/hpcclocal" { capabilities = ["create"] }
EOF
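You can confirm the policy was stored as expected using the following command:

vault policy read pki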
Then enable the Kubernetes authentication method:
vault auth enable kubernetes
Configure the Kubernetes authentication method to use the service account token, the location of the Kubernetes host and its certificate:
vault write auth/kubernetes/config \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
The final step is to create a Kubernetes authentication role named issuer that binds the pki policy with a Kubernetes service account using the same name:
vault write auth/kubernetes/role/issuer \
    bound_service_account_names=issuer \
    bound_service_account_namespaces=cert-manager,default \
    policies=pki \
    ttl=20m
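Optionally, read the role back to verify the service account bindings and policy:

vault read auth/kubernetes/role/issuer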
Now, exit from the vault pod using the following command:
exit
Deploying the Cert Manager
This involves configuring an issuer and generating a certificate. cert-manager allows you to define Issuers that interface with the Vault certificate-generating endpoints. These Issuers are invoked when a Certificate is created.
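For reference, a Vault-backed Issuer manifest looks roughly like the sketch below. The names, signing path and secret reference shown are illustrative only; the HPCC Systems helm chart creates the equivalent issuers for you in a later step.

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
spec:
  vault:
    server: http://vault.default:8200
    path: pki/sign/myhpcc-dot-com
    auth:
      kubernetes:
        role: issuer
        mountPath: /v1/auth/kubernetes
        secretRef:
          name: issuer-token-abcde   # placeholder; the service account secret captured later as ISSUER_SECRET_REF
          key: token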
The first step is to create a namespace named cert-manager to host the cert-manager as follows:
kubectl create namespace cert-manager
Next, install the cert-manager custom resource definitions. This adds new custom resource types to Kubernetes for certificate issuers and certificates and is done using the following command:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.1.0/cert-manager.crds.yaml
Installing the Cert Manager Helm Chart
First, add the Jetstack helm repo using the following command:
helm repo add jetstack https://charts.jetstack.io
Now install cert-manager using the following command:
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.1.0
Then create a service account named issuer within the default namespace:
kubectl create serviceaccount issuer
The service account generates a secret that is required by the Issuer. To get all the secrets in the default namespace, use the following command:
kubectl get secrets
Then create a variable named ISSUER_SECRET_REF to capture the secret name.
ISSUER_SECRET_REF=$(kubectl get serviceaccount issuer -o json | jq -r ".secrets[].name")
Installing HPCC Systems with Certificate Generation Enabled
First, if you haven’t done so already, add the HPCC Systems helm repo using the following command:
helm repo add hpcc https://hpcc-systems.github.io/helm-chart
Then update your helm repositories using this command:
helm repo update
Note: Because this blog uses files from our helm examples, these commands should be run from the helm directory in the hpcc source or the directory where copies of those files have been made available.
Now, install the HPCC Systems helm chart with the --set certificates.enabled option set to true using the following command:
helm install myhpcc hpcc/hpcc --version=8.2.6 \
    --set certificates.enabled=true \
    --set certificates.issuers.local.spec.vault.auth.kubernetes.secretRef.name=$ISSUER_SECRET_REF \
    --set certificates.issuers.public.spec.vault.auth.kubernetes.secretRef.name=$ISSUER_SECRET_REF \
    --values examples/certmanager/values-vault-pki.yaml \
    -f examples/azure/values-auto-azurefile.yaml
Use kubectl to check the status of the deployed pods as shown in the command below and wait until all pods are running before continuing:
kubectl get pods
Check whether the certificate issuers have been successfully created using this command:
kubectl get issuers -o wide
The results should look similar to this example:
NAME                 READY   STATUS           AGE
hpcc-local-issuer    True    Vault verified   78s
hpcc-public-issuer   True    Vault verified   78s
Also, check whether the certificates have been successfully created:
kubectl get certificates
This should provide you with a list of all the certificates created.
Take a look at the list of Kubernetes secrets, which now also include the generated TLS secrets using this command:
kubectl get secrets
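To verify that one of the generated TLS secrets contains a Vault-issued certificate, you can decode and inspect it with openssl, replacing <tls-secret-name> with a name from the list above:

kubectl get secret <tls-secret-name> -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -issuer -dates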
Additional Useful Notes:
- Make sure jq (the command-line JSON processor) is installed before starting.
- You may see this error:
Pods not running (CrashLoopBackOff)
This can be caused by multiple issues, but it may indicate that your persistent volumes need to be deleted, which can be done using the following commands:
kubectl delete pv --all
kubectl delete pvc --all