Keeping secrets secure with Vault inside a Kubernetes cluster

In today’s world, where data plays a huge part in our lives, it is important to keep that data safe and secure. Every day, sites get hacked, databases are breached, and personal data is stolen. That can lead to huge financial losses and damage a company’s reputation. According to CSO Online, about 3.5 billion people saw their personal data stolen in the top two biggest breaches of this century alone. As data grows, so does the need for storage, and to manage that storage we need more servers. The more servers we involve, the greater the risk that one of them will be compromised and our database credentials stolen. To minimize that risk, or at least reduce the damage inflicted, engineers are building tools to help us.

One of the best-known of those tools is HashiCorp’s Vault. HashiCorp is a company mostly known for infrastructure tools like Terraform and Vagrant, but it has a few more really powerful and useful tools, such as Vault and Consul.

Vault is a tool for securely accessing secrets. Vault offers features like Secure Secret Storage, Dynamic Secrets, Data Encryption, Leasing, Renewal, and Revocation.

Consul is a tool that is mostly used as a service mesh solution. Consul offers many powerful features such as Service Discovery, Health Checking, Key-Value Store, Secure Service Communication, and more.

In the example below we are going to use Vault and Consul, both provided by HashiCorp. Consul will only be used as backend storage; we won’t use any of its other features.

We will deploy a PostgreSQL database and a simple Go app that connects to that database and let Vault manage the secrets* for us. We will use Minikube to simulate a Kubernetes cluster and Helm to install Consul and Vault. Instead of using static secrets, we will use dynamic ones.

*Secret is anything that you want to tightly control access to, such as API keys, passwords, usernames, certificates, etc…

Dynamic secrets and why we should use them

Dynamic secrets are secrets that are automatically being rotated/changed after a certain amount of time. That means that Vault will generate database credentials and assign them to our app instances. Every instance will have a different account. So if our database gets breached we can pinpoint exactly where it happened and revoke access for that account.

Having a TTL (time to live) on our secrets will minimize the damage if secrets get compromised.

Setting up the environment

You can find all the files used for this example here and if you want a more detailed and in-depth explanation of how to set up Vault, you can check out the official documentation here.

First, we need to set up our Kubernetes cluster. For that, we are going to use Minikube to simulate a one node Kubernetes Cluster.

We will assign 4GiB of RAM memory to Minikube:

$ minikube start --memory='4g'

After the Minikube setup is finished we need to add the official Hashicorp Helm repository:

$ helm repo add hashicorp https://helm.releases.hashicorp.com

First, let’s install Consul as backend storage for Vault. We will use a custom values.yaml file. This is the recommended way to use Vault, having a separate Consul deployment only dedicated to Vault. If you plan on using Consul for other things like service discovery, health checking, etc… that should be a separate Consul deployment:

$ helm install consul hashicorp/consul -f

Wait for pods to come up:

$ kubectl get pods -w

After the pods are ready, install Vault with custom helm values:

$ helm install vault hashicorp/vault -f

Wait again for pods to come up:

$ kubectl get pods -w

NAME                                    READY   STATUS    RESTARTS   AGE

consul-consul-server-0                  1/1     Running   0          8m34s

consul-consul-spbqx                     1/1     Running   0          8m34s

vault-0                                 0/1     Running   0          109s

vault-agent-injector-857cdd9594-dj7kb   1/1     Running   0          110s

Now we see that Vault is running but it’s not ready; that is because Vault starts in the “sealed” state. We need to unseal Vault so that we can configure it. First, we generate the unseal keys (we generate 5 keys, but only 3 are required to unseal Vault).

In a real scenario, you would distribute the keys to different administrators, or more likely to machines that will unseal Vault; you can also configure auto-unseal. The key is split so that if one share gets compromised it is useless without the others:

$ kubectl exec vault-0 -- vault operator init -key-shares=5 -key-threshold=3 -format=json > keys.json

Be aware that these keys are shown only once; you will not be able to retrieve them again. Let’s check Vault’s status:

$ kubectl exec vault-0 -- vault status

Key                Value

---                -----

Seal Type          shamir

Initialized        true

Sealed             true

Total Shares       5

Threshold          3

Unseal Progress    0/3

Unseal Nonce       n/a

Version            1.5.2

HA Enabled         true

We can see that 0/3 keys are used. Now we need to use 3 out of the 5 keys that we generated earlier. First, let’s list those keys:

$ cat keys.json | jq -r ".unseal_keys_b64[]"






Use each key one by one:

$ kubectl exec vault-0 -- vault operator unseal {{ key_here }}

After Vault is unsealed, we also need to get the root_token*, because that token is used to sign in to Vault:

$ cat keys.json | jq -r ".root_token"


*root_token is the main token that’s used for API requests and also for accessing the UI

Configuring Vault

After unsealing Vault we need to configure it. First, start an interactive shell session on the Vault container.

$ kubectl exec -it vault-0 -- sh

Log in to Vault using the root_token we got earlier:

/ $ vault login

Token (will be hidden): root_token

Integrating Vault with Kubernetes

We have to enable the Kubernetes authentication method so that we can use Vault inside our Kubernetes cluster. This is needed for an easier and more secure authentication because Kubernetes service accounts will be used for authenticating with Vault:

/ $ vault auth enable kubernetes

Configure Vault to use the service account token, Kubernetes host, and its ca certificate:

/ $ vault write auth/kubernetes/config \
  token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  kubernetes_host="https://${KUBERNETES_PORT_443_TCP_ADDR}:443" \
  kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

Exit the shell:

/ $ exit

Setting up PostgreSQL database

After the Kubernetes setup is finished we need to set up and deploy the database. In the example, we will deploy our own PostgreSQL database but you can use any database you want as long as it’s supported by Vault. You don’t even have to have your database deployed inside Kubernetes, you can use a remote database or use managed databases provided by AWS, GCP, Azure, or other Cloud providers:

$ kubectl apply -f

We will also need to create a service account so that our app can authenticate with Vault:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp
EOF

Now let’s configure Vault so that it can dynamically generate usernames and passwords and also rotate them after a certain amount of time. We again have to get the shell inside the Vault container:

$ kubectl exec -it vault-0 -- sh

Now we need to enable the Vault database engine:

/ $ vault secrets enable database

Let’s configure Vault so that it can connect to our database:

/ $ vault write database/config/postgresdb \
  plugin_name=postgresql-database-plugin \
  allowed_roles="myapp" \
  connection_url="postgresql://{{username}}:{{password}}@postgres:5432/postgres?sslmode=disable" \
  username="postgresuser" \
  password="12345"
  • plugin_name – we are using the official PostgreSQL plugin here.
  • allowed_roles – which roles can use the connection.
  • connection_url – URL that is used to connect to our database.
  • username – this is the main user that Vault will use to access the database, in most cases, it will have superuser privileges so that it can create/delete users and give them access to specific tables.
  • password – a password that we can use to establish the initial connection. It can be a simple password because we will rotate it after the initial setup.
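When Vault opens the connection, it expands the {{username}} and {{password}} placeholders in connection_url with the credentials configured above. Here is a rough Go illustration of that substitution; the hostname and the expansion logic are simplified stand-ins for what the plugin really does:

```go
package main

import (
	"fmt"
	"strings"
)

// expandURL fills the connection_url template the way the database
// plugin does (simplified: no escaping or validation).
func expandURL(tmpl, user, pass string) string {
	return strings.NewReplacer(
		"{{username}}", user,
		"{{password}}", pass,
	).Replace(tmpl)
}

func main() {
	tmpl := "postgresql://{{username}}:{{password}}@postgres:5432/postgres?sslmode=disable"
	fmt.Println(expandURL(tmpl, "postgresuser", "12345"))
}
```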

Now we need to create a role that Vault will use when creating (and later revoking) database users:

/ $ vault write database/roles/myapp \
  db_name=postgresdb \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
  default_ttl="1m" \
  max_ttl="24h"

  • db_name – our database name.
  • creation_statements – SQL query that will be executed when the user or application requests credentials.
  • default_ttl – default lease time; after the time runs out (1 minute) we have to renew the lease. Since we are running Vault inside a Kubernetes cluster, Vault will do that for us for as long as the pod is running. If the pod gets terminated the lease will be revoked.
  • max_ttl – maximum time the credentials can live (here 24 hours) until being revoked. This is important because even if the credentials get leaked without our knowledge, they won’t be useful for long.

Let’s test the generation of the credentials:

/ $ vault read database/creds/myapp

Key                Value

---                -----

lease_id           database/creds/myapp/qAD7h0At7alaKmtZiEnS3VTq

lease_duration     1m

lease_renewable    true

password           A1a-lgQZsK5UYTRaUohs

username           v-root-myapp-ZjA52a4Jw4bDWQLSt0AP-1602063212

Managing access

Now we are going to use the service account we created earlier. First, we need to create a policy for our role, so that Vault knows what capabilities our role has:

/ $ cat <<EOF > /home/vault/postgres-myapp-policy.hcl
path "database/creds/myapp" {
  capabilities = ["read"]
}
EOF

Here we say that this policy grants read access to credentials generated by the database role “myapp”.

Let’s apply the policy to Vault:

/ $ vault policy write postgres-myapp-policy /home/vault/postgres-myapp-policy.hcl

The Vault secret injector uses the Service Account Token allocated to the pod for authentication to Vault. Vault exchanges this for a Vault Token, which has policies assigned.

We need to configure our role to be able to do that:

/ $ vault write auth/kubernetes/role/myapp \
  bound_service_account_names=myapp \
  bound_service_account_namespaces=default \
  policies=postgres-myapp-policy \
  ttl=1h

  • bound_service_account_names – service accounts that can use this role.
  • bound_service_account_namespaces – allowed namespaces for the service accounts.
  • policies – policies that will be attached to the token.
  • ttl – time to live (here 1 hour) for the Vault token returned from successful authentication.

To make our database more secure we will rotate the superuser password; this means that our super secret password (“12345”) won’t be valid anymore. Keep in mind that after rotating the password you can’t retrieve it anymore, which means that only Vault knows it. This is why it’s recommended to have a user in your database dedicated only to Vault:

/ $ vault write -force database/rotate-root/postgresdb

Exit the shell:

/ $ exit

Deploying the application

Now that we have Vault set up, we are going to deploy a demo app called myapp: a simple Go app made to demonstrate how the database secrets are dynamically rotated. The app reads the values from a configuration file and connects to the database. If it can’t connect, it re-reads the configuration file and retries every 2 seconds until it succeeds.
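The app’s behavior can be sketched in Go. The key=value config format and the helper names here are assumptions for illustration, not the demo app’s actual code:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseConfig reads simple "key=value" lines, standing in for the
// credentials file that Vault renders into the pod. The real demo
// app's file format may differ.
func parseConfig(s string) map[string]string {
	cfg := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		if k, v, ok := strings.Cut(strings.TrimSpace(sc.Text()), "="); ok {
			cfg[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return cfg
}

func main() {
	cfg := parseConfig("username=v-kubernet-myapp-xWySZwKQ\npassword=A1a-example")
	fmt.Println("connecting as", cfg["username"])

	// Retry loop sketch: on failure, re-read the (possibly re-rendered)
	// config file and try again every 2 seconds:
	//
	//   for {
	//       cfg := parseConfig(readFile(path)) // hypothetical helpers
	//       if err := connect(cfg); err == nil { break }
	//       time.Sleep(2 * time.Second)
	//   }
}
```

Re-reading the file on every attempt is what lets the app pick up rotated credentials without restarting.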

$ kubectl apply -f

Vault creates a different user for every instance of the app. If a pod restarts, the old username/password is revoked and a new pair is generated. Each pod therefore has a different configuration file and connects with a different user, so if something happens we can trace exactly where it happened. Let’s now check if the app works.

First, we are going to port-forward so that we can access the app on localhost:

$ kubectl port-forward deployment/myapp 8090:8090

Forwarding from -> 8090

Forwarding from [::1]:8090 -> 8090

Now let’s open our browser or use curl to see which credentials were used:

$ curl localhost:8090

Successfully connected!

Current username: v-kubernet-myapp-xWySZwKQvn894Lv2mZgX-1602063418

Current password: A1a-zjCkew3hPu0rnklL

We can see that the app automatically connects to our PostgreSQL database with the newly generated credentials and prints the credentials it used. Now let’s simulate a scenario, e.g. a database breach, where we want to revoke all the secrets. We only have to run a single command to tell Vault to revoke everything with the prefix database/creds/myapp:

$ kubectl exec vault-0 -- vault lease revoke -prefix database/creds/myapp

All revocation operations queued successfully!

If we curl our app again we can see that new credentials were generated and our app has successfully reconnected to the database.

$ curl localhost:8090

Successfully connected!

Current username: v-kubernet-myapp-pIYJECVXDj6Wvj2HxD88-1602063882

Current password: A1a-RS5wamLWHzFcXsIl


