Overview
This blog provides a detailed deep dive into HashiCorp Vault, its integration with a Kubernetes cluster, and its use cases.
HashiCorp Vault is a tool from HashiCorp that addresses the secret-management problem and securely injects secrets into a Kubernetes cluster. It is valuable especially when dealing with entities like databases, whose credentials must be supplied to application pods in the Kubernetes cluster.
The traditional way to secure credentials inside Kubernetes is to create a Kubernetes Secret object and embed the database or other credentials in it. However, the values in a Kubernetes Secret are only base64-encoded, not encrypted, so anyone with access to the object can easily decode them.
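To see why base64 offers no real protection, the commands below, using a hypothetical credential value, show that anyone can reverse the encoding with a single command:

```shell
# Values in a Kubernetes Secret are stored base64-encoded, not encrypted.
# "CloudThat" here is a hypothetical password used only for illustration.
encoded=$(printf 'CloudThat' | base64)
echo "value as stored in the Secret object: $encoded"
# Anyone who can read the Secret object can recover the plaintext instantly:
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "decoded: $decoded"   # prints: decoded: CloudThat
```

This is why base64 should be treated as a transport format, not a security mechanism.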
What is a Secret?
Secrets are the credentials we use to authenticate and authorize, and they need to be secured.
What is HashiCorp Vault?
HashiCorp Vault is a secrets-management tool purpose-built to control confidential data in any environment. It can store sensitive values and, at the same time, dynamically generate leased credentials for specific services and applications.
Deployment Diagram
How HashiCorp Vault solves real-life problems
A Vault implementation uses a Kubernetes service account.
You may wonder how the Vault agent injects the secrets into the pod from the Vault server.
There are two ways to inject secrets into a pod.
- Static approach:
Step 1: An admin inserts credentials into the Vault manually in a key-value format.
Step 2: A service account needs to be created and attached to the pod. The same service account needs to be defined in the HashiCorp Vault role.
Step 3: The Vault fetches the credentials and mounts them into the pod automatically.
- Dynamic approach:
Step 1: The Vault generates temporary credentials in the database and fetches them.
Step 2: The pod authenticates to the Vault using the service account.
Step 3: The Vault agent mounts the credentials into the pod automatically.
The pod can use this secret for its entire life cycle.
If the pod gets deleted, the Vault automatically goes back to the database and cleans up the temporary credentials it created.
Therefore, the operations team does not have to maintain the database users.
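As a rough sketch of how the dynamic approach is typically configured with Vault's database secrets engine (the connection URL, role name, admin credentials, and SQL below are illustrative assumptions; only the engine and its parameter names are real Vault features):

```shell
# Enable the database secrets engine.
vault secrets enable database

# Tell Vault how to reach a hypothetical PostgreSQL instance.
vault write database/config/my-postgres \
    plugin_name=postgresql-database-plugin \
    allowed_roles="my-role" \
    connection_url="postgresql://{{username}}:{{password}}@postgres:5432/wizard?sslmode=disable" \
    username="vault-admin" \
    password="vault-admin-password"

# Define a role whose SQL Vault runs to create short-lived database users.
vault write database/roles/my-role \
    db_name=my-postgres \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
    default_ttl=1h \
    max_ttl=24h

# Each read returns a fresh username/password pair on a lease;
# Vault revokes the user when the lease expires.
vault read database/creds/my-role
```

These commands require a running Vault server and database, so treat them as a configuration sketch rather than a runnable script.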
Prerequisites
- An AWS account
- Kubernetes knowledge
- Helm chart
- EKS cluster
Kubernetes Components
- Deployment
- StatefulSet
- Services
- Secrets
- Volumes
Step-by-Step Guide
Here we are going to perform the static approach to inject secrets into pods.
Step 1: Create an EKS cluster in AWS, or alternatively create a kubeadm cluster.
Step 2: Install Helm.
1: Run the wget command to download the Helm tar file.
$ wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz
2: Run the ls command to ensure the tar file is downloaded.
3: Run the tar -xzf and ls -l commands to extract and view the linux-amd64 folder.
$ tar -xzf helm-v3.9.0-linux-amd64.tar.gz
$ ls -l
4: Run the mv command to move the helm executable to the /usr/local/bin/ directory, then verify the installation.
$ mv ./linux-amd64/helm /usr/local/bin/
$ helm version
Step 3: Deploy the Vault Helm chart with the below commands. The chart is installed in its default (non-dev) mode here, so the Vault server starts sealed and must be initialized and unsealed in the next step.
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm repo update
$ helm install vault hashicorp/vault
Now type the below command to check whether the pods are provisioned or not.
$ kubectl get pods
Step 4: Unseal the Vault.
1: Exec into the Vault pod using the below command.
$ kubectl exec -it vault-0 -- /bin/sh
2: Type the below command to initialize the Vault. It prints the unseal keys and a root token for login.
$ vault operator init
3: Use the unseal keys to unseal the Vault. Type the below command three times, providing a different unseal key each time.
$ vault operator unseal <paste key 1>
$ vault operator unseal <paste key 2>
$ vault operator unseal <paste key 3>
4: Provide the root token with the below command to log in to the Vault, then exit the pod.
$ vault login <root token>
$ exit
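If you prefer not to copy the keys by hand, the same init-and-unseal flow can be scripted. This is a lab-only sketch that assumes jq is installed and kubectl can reach the cluster; `vault operator init -format=json` and its `unseal_keys_b64`/`root_token` fields are real, but the script leaves the keys in a local file, which is unsafe outside a lab:

```shell
# Initialize Vault and capture the keys as JSON (lab use only).
kubectl exec vault-0 -- vault operator init -format=json > init.json

# Feed three of the five unseal key shares back to Vault.
for i in 0 1 2; do
  key=$(jq -r ".unseal_keys_b64[$i]" init.json)
  kubectl exec vault-0 -- vault operator unseal "$key"
done

# Log in with the generated root token.
kubectl exec vault-0 -- vault login "$(jq -r '.root_token' init.json)"
```

In production, the key shares should be distributed to separate key holders instead of being stored together.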
Step 5: Set a secret in the Vault.
Exec into the Vault pod to set a secret.
$ kubectl exec -it vault-0 -- /bin/sh
Enable the KV version 2 secrets engine at the path internal.
$ vault secrets enable -path=internal kv-v2
Create a secret at path internal/database/config with a username and password.
$ vault kv put internal/database/config username="postgres" password="CloudThat"
Check whether the data was inserted into the vault or not.
$ vault kv get internal/database/config
Step 6: Configure Kubernetes authentication.
Vault accepts a service account token from any client in the Kubernetes cluster. During authentication, Vault verifies that the service account token is valid by querying the Kubernetes TokenReview endpoint.
$ vault auth enable kubernetes
Configure the Kubernetes authentication method to use the location of the Kubernetes API.
$ vault write auth/kubernetes/config \
    kubernetes_host=https://$KUBERNETES_PORT_443_TCP_ADDR:443
For a client to read the secret data defined at the path internal/database/config, the read capability is required for the path internal/data/database/config. A policy defines a set of capabilities, as in the example below.
Write out an internal-app policy that enables the read capability for secrets at the path internal/data/database/config.
$ vault policy write internal-app - <<EOF
path "internal/data/database/config" {
  capabilities = ["read"]
}
EOF
Create a Kubernetes authentication role named internal-app.
$ vault write auth/kubernetes/role/internal-app \
    bound_service_account_names=internal-app \
    bound_service_account_namespaces=default \
    policies=internal-app \
    ttl=24h
Step 7: Define a Kubernetes service account.
The Vault Kubernetes authentication role references a Kubernetes service account named internal-app, which must exist in the cluster.
Get all the service accounts in the default namespace.
$ kubectl get serviceaccounts
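If the internal-app service account does not already appear in that list, it can be created with a single command (a sketch; the deployment in the next step references this exact name):

```shell
# Create the service account the Vault role is bound to, then verify it.
kubectl create serviceaccount internal-app
kubectl get serviceaccount internal-app -o name   # expect: serviceaccount/internal-app
```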
Step 8: Launch a frontend application
- nano wordpress.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: website
  labels:
    app: website
spec:
  selector:
    matchLabels:
      app: website
  replicas: 1
  template:
    metadata:
      labels:
        app: website
    spec:
      # This service account does not have permission to request the secrets.
      serviceAccountName: internal-app
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim
      containers:
        - name: wordpress
          image: wordpress:5.1.1-php7.3-apache
          ports:
            - containerPort: 80
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: capstone
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-wordpress
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
  labels:
    app: website
    tier: frontend
spec:
  selector:
    app: website
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30050

- Apply the file:
- kubectl apply -f wordpress.yaml
Verify that no secrets are written to the WordPress container in the website pod.
$ kubectl exec \
    $(kubectl get pod -l app=website -o jsonpath="{.items[0].metadata.name}") \
    --container wordpress -- ls /vault/secrets
The Vault Agent Injector only modifies a deployment if it contains a specific set of annotations. An existing deployment may have its definition patched to include the necessary annotations.
- vi patchwebsite.yaml

spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/role: 'internal-app'
        vault.hashicorp.com/agent-inject-secret-database-config.txt: 'internal/data/database/config'
These annotations define a partial structure of the deployment schema and are prefixed with vault.hashicorp.com.
- agent-inject: enables the Vault Agent Injector service.
- role: the role created in Vault for Vault Kubernetes authentication.
- agent-inject-secret-FILEPATH: prefixes the path of the file, database-config.txt, written to /vault/secrets. The value is the path to the secret defined in Vault.
- kubectl patch deployment website --patch "$(cat patchwebsite.yaml)"
Verify that secrets are written to the WordPress container in the website pod using the code below.
$ kubectl exec \
    $(kubectl get pod -l app=website -o jsonpath="{.items[0].metadata.name}") \
    --container wordpress -- cat /vault/secrets/database-config.txt
The unformatted secret data is present on the container:
Step 9: Apply a template to the injected secrets
The injected secrets may need to be structured in a way the application can consume. A template can structure the data before the secrets are written to the file system. To apply this template, a new set of annotations needs to be applied.
Display the annotations file that has a template definition.
- vi patch.yaml

spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/agent-inject-status: 'update'
        vault.hashicorp.com/role: 'internal-app'
        vault.hashicorp.com/agent-inject-secret-database-config.txt: 'internal/data/database/config'
        vault.hashicorp.com/agent-inject-template-database-config.txt: |
          {{- with secret "internal/data/database/config" -}}
          postgresql://{{ .Data.data.username }}:{{ .Data.data.password }}@postgres:5432/wizard
          {{- end -}}
- kubectl patch deployment website --patch "$(cat patch.yaml)"
Verify that the secrets written to the WordPress container in the website pod are now well formatted by using the below code.
- kubectl exec \
    $(kubectl get pod -l app=website -o jsonpath="{.items[0].metadata.name}") \
    --container wordpress -- cat /vault/secrets/database-config.txt
Now we get the secrets in a much more usable format.
We have successfully injected the PostgreSQL secrets from Vault into the website pods.
Conclusion
Nowadays, HashiCorp Vault is widely used in many environments because of its secure login model. Whenever Vault starts, it is in a sealed state and must be unsealed using Shamir's secret sharing. By default, Vault splits the master key into five key shares, and at least three of the five are needed to unseal the Vault.
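The five-share, three-key threshold is only Vault's default; `vault operator init` accepts `-key-shares` and `-key-threshold` flags if a different split is needed, for example:

```shell
# Split the master key into 7 shares and require any 4 of them to unseal.
vault operator init -key-shares=7 -key-threshold=4
```

This requires a running, uninitialized Vault server, so it is shown here only as a reference.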
About CloudThat
CloudThat is an official AWS (Amazon Web Services) Advanced Consulting Partner and Training Partner and a Microsoft Gold Partner, helping people develop knowledge of the cloud and helping their businesses aim for higher goals using best-in-industry cloud computing practices and expertise. We are on a mission to build a robust cloud computing ecosystem by disseminating knowledge on technological intricacies within the cloud space. Our blogs, webinars, case studies, and white papers enable all the stakeholders in the cloud computing sphere.
Drop a query if you have any questions regarding Hashicorp Vault, Kubernetes, or managing secrets and I will get back to you quickly.
To get started, go through our Consultancy page and Managed Services Package, CloudThat's offerings.
FAQs
1. Are Vault secrets bound to namespaces?
ANS: – Yes, Vault secrets are bound to namespaces. A pod should run in a namespace defined in the Vault Kubernetes authentication role; in that case, the pod can access the secrets defined at that path. In the above process, we specified the default namespace in the Vault Kubernetes authentication role.
2. Why is the Secrets component in Kubernetes considered not very safe?
ANS: – Secrets encode data in base64 format, and anyone with the base64-encoded value can easily decode it. As such, Kubernetes Secrets are not considered very safe, and credentials might be leaked as they move across the system.
WRITTEN BY Karthik Kumar P V
Karthik Kumar Patro Voona is a Research Associate (Kubernetes) at CloudThat Technologies. He holds a Bachelor's degree in Information Technology and has good programming knowledge of Python. He has experience in both AWS and Azure and a passion for cloud computing and DevOps. He has good working experience in Kubernetes and DevOps tools like Terraform, Ansible, and Jenkins. He is a good team player, adaptive, and interested in exploring new technologies.