As developers, we are always concerned about storing sensitive information as plain text in our code. Kubernetes lets us store sensitive information as "Secrets", but by default Secrets are stored as unencrypted base64-encoded strings that can be retrieved by anyone with API access, or by anyone who has access to Kubernetes' underlying data store (etcd).
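To see why base64 offers no protection, note that any value can be decoded without any key material. The first command below is a hypothetical example (a Secret named db-creds with a password key is an assumption, not from this guide); the echo line can be run anywhere:

```shell
# Hypothetical: read a Secret value straight from the API and decode it
#   kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 -d
# base64 is an encoding, not encryption - decoding needs no key:
echo 'UGFzc3dvcmQxMjMh' | base64 -d   # prints: Password123!
```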
The Kubernetes Secrets Store CSI driver, on the other hand, integrates external secrets stores with Kubernetes via a Container Storage Interface (CSI) volume. The driver, secrets-store.csi.k8s.io, allows Kubernetes to mount multiple secrets, keys, and certificates stored in enterprise-grade external secrets stores into pods as a volume. Once the volume is attached, its data is mounted into the container's file system. All the major cloud vendors support this, but in this guide I will cover the configuration for Azure Key Vault.
Implementation steps:
Using the script below, you can integrate an existing Key Vault with your AKS cluster. The script assumes you have already deployed an AKS cluster with managed identity and an Azure Key Vault. A few points about this script:
1.) Set all the variables that will be used throughout this guide. You will have to change the AKS name, version, key vault, secrets, and other values in the first section.
$subscriptionId = (az account show | ConvertFrom-Json).id
$tenantId = (az account show | ConvertFrom-Json).tenantId
$location = "westeurope"
$resourceGroupName = "AKS"
$aksName = "aks-aad-kubenet"
$aksVersion = "1.19.11"
$keyVaultName = "aks-aad-kubenet"
$secret1Name = "secret3" # This is the secret in your key vault.
$secret2Name = "secret4"
$secret1Alias = "secret3_alias" # This will be the alias used to mount the secret in the container
$secret2Alias = "secret4_alias"
$acrName = "aksaadacr"
# No need to change these variables, unless you want to align them with your naming convention
$identityName = "identity-aks-kv-kube"
$identitySelector = "azure-kv-kube"
$secretProviderClassName = "secret-provider-kv-kube"
$isAKSWithManagedIdentity = "true"
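If the two secrets don't exist in your key vault yet, you can create them before continuing. This is a hedged sketch: the secret values below are placeholders of my choosing, not values from this guide, and it assumes the variables from the section above are set.

```shell
# Optional: seed the two sample secrets in the key vault (values are placeholders)
az keyvault secret set --vault-name $keyVaultName --name $secret1Name --value "value-for-secret3"
az keyvault secret set --vault-name $keyVaultName --name $secret2Name --value "value-for-secret4"
```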
2.) Now we will use these parameters to fetch all the resource information we need in the later steps.
# Get the resource group information
$resourceGroup = az group show -n $resourceGroupName | ConvertFrom-Json

# Get the ACR information and log in
$acr = az acr show --name $acrName | ConvertFrom-Json
az acr login -n $acrName --expose-token

# Get AKS information and connect to the cluster
$aks = az aks show -n $aksName -g $resourceGroupName | ConvertFrom-Json
az aks get-credentials -n $aksName -g $resourceGroupName --overwrite-existing

# Get the key vault information for use in the next steps
$keyVault = az keyvault show -n $keyVaultName -g $resourceGroupName | ConvertFrom-Json
3.) Since the Key Vault integration requires the CSI driver to be installed, we install its pods from the Helm chart in the steps below. This can be verified with "kubectl get pods -n csi-driver"; the output should show the CSI pods running.
kubectl create ns csi-driver
helm repo add csi-secrets-store-provider-azure https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts
helm install csi-azure csi-secrets-store-provider-azure/csi-secrets-store-provider-azure --namespace csi-driver
kubectl get pods -n csi-driver
4.) We will now create our own custom SecretProviderClass object with provider-specific parameters for the secrets. Note the kind parameter in this YAML: it is of type SecretProviderClass, and this is the object that actually describes identity-based access to our key vault. If you changed the secret variables in the first step, edit the objects section of this YAML to match the secret names and aliases used above. I am using two secrets for reference; if you have more or fewer, adjust the "objects" section accordingly.
Detailed documentation is available here: Using the Azure Key Vault Provider.
$secretProviderKV = @"
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: $($secretProviderClassName)
spec:
  provider: azure
  parameters:
    usePodIdentity: "true"
    useVMManagedIdentity: "false"
    userAssignedIdentityID: ""
    keyvaultName: $keyVaultName
    cloudName: AzurePublicCloud
    objects: |
      array:
        - |
          objectName: $secret1Name
          objectAlias: $secret1Alias
          objectType: secret
          objectVersion: ""
        - |
          objectName: $secret2Name
          objectAlias: $secret2Alias
          objectType: secret
          objectVersion: ""
    resourceGroup: $resourceGroupName
    subscriptionId: $subscriptionId
    tenantId: $tenantId
"@
$secretProviderKV | kubectl apply -f -
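As a quick sanity check (assuming the CSI driver CRDs from the previous step are installed and $secretProviderClassName is still set), you can confirm the object landed in the cluster:

```shell
# Confirm the SecretProviderClass object exists and inspect its parameters
kubectl get secretproviderclass $secretProviderClassName -o yaml
```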
5.) Our cluster needs the correct role assignments to perform Azure-related operations. The commands below grant the cluster's kubelet identity the roles it needs to communicate with Azure. $aks.nodeResourceGroup resolves to the default MC_* resource group, which is where the managed identity lives.
az role assignment create --role "Managed Identity Operator" --assignee $aks.identityProfile.kubeletidentity.clientId --scope /subscriptions/$subscriptionId/resourcegroups/$($aks.nodeResourceGroup)
az role assignment create --role "Virtual Machine Contributor" --assignee $aks.identityProfile.kubeletidentity.clientId --scope /subscriptions/$subscriptionId/resourcegroups/$($aks.nodeResourceGroup)
6.) AAD Pod Identity is what lets pods in the AKS cluster access the key vault. We configure it with the steps below. Note that I am passing --set nmi.allowNetworkPluginKubenet=true because AAD Pod Identity doesn't support kubenet by default. If you are using the Azure CNI network plugin, remove this parameter. You should see the AAD Pod Identity pods running after these steps.
helm repo add aad-pod-identity https://raw.githubusercontent.com/Azure/aad-pod-identity/master/charts
helm install pod-identity aad-pod-identity/aad-pod-identity --set nmi.allowNetworkPluginKubenet=true
kubectl get pods
7.) We can use the first command below to filter out the managed identity created as part of this deployment, though it can take a while to return results if the AKS cluster was deployed recently.
If this command doesn't return any output, you can pass the identity name and resource group name to the second command manually.
$agentPoolIdentity = az resource list -g $aks.nodeResourceGroup --query "[?contains(type, 'Microsoft.ManagedIdentity/userAssignedIdentities')]" | ConvertFrom-Json
$identity = az identity show -n $agentPoolIdentity.name -g $agentPoolIdentity.resourceGroup | ConvertFrom-Json
8.) In the steps below, we assign the Reader role to this identity so that it can read the key vault. After assigning read permission on the key vault, we also need to make sure the identity can access the secrets themselves, so we grant it the "get" secret permission via an access policy.
az role assignment create --role "Reader" --assignee $identity.principalId --scope $keyVault.id
az keyvault set-policy -n $keyVaultName --secret-permissions get --spn $identity.clientId
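Before moving on, you can sanity-check both grants. This is a sketch assuming the $identity, $keyVault, and $keyVaultName variables from the earlier steps are still set:

```shell
# Confirm the Reader role assignment on the key vault scope
az role assignment list --assignee $identity.principalId --scope $keyVault.id -o table
# Confirm the access policy includes the identity with "get" on secrets
az keyvault show -n $keyVaultName --query "properties.accessPolicies" -o json
```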
9.) Now comes the main part. Since we are using pod identities, we create an AzureIdentity in our cluster that references the managed identity from the previous step, and then an AzureIdentityBinding that references the AzureIdentity created here. Note the spec sections of both objects: the AzureIdentity's resourceID and clientID point at the managed identity from the step above, while the AzureIdentityBinding's azureIdentity field tells the binding which identity to bind, and its selector is the pod label that will receive the identity.
$aadPodIdentityAndBinding = @"
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: $($identityName)
spec:
  type: 0
  resourceID: $($identity.id)
  clientID: $($identity.clientId)
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: $($identityName)-binding
spec:
  azureIdentity: $($identityName)
  selector: $($identitySelector)
"@
$aadPodIdentityAndBinding | kubectl apply -f -
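You can quickly confirm that both pod identity objects were created (assuming the AAD Pod Identity CRDs from step 6 are installed):

```shell
# List the AzureIdentity and AzureIdentityBinding objects just applied
kubectl get azureidentity,azureidentitybinding
```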
10.) Once this setup is finished, you can deploy any pod in the cluster. We need a "volumes" section in the pod spec and a matching volumeMounts section on the container. The volume specifies the secrets-store.csi.k8s.io CSI driver; this mounts the secret store as a volume so we can eventually access the secrets inside the container.
I have created a pod with the ubuntu image; you can use your own application pod to test this configuration, but if you want to follow along, you can copy, paste, and run this on your machine.
$ubuntuPod = @"
kind: Pod
apiVersion: v1
metadata:
  name: ubuntu-debug
  labels:
    aadpodidbinding: $($identitySelector)
spec:
  containers:
    - name: ubuntu
      image: ubuntu:latest
      command: ["/bin/sleep", "3650d"]
      volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: $($secretProviderClassName)
"@
$ubuntuPod | kubectl apply -f -
kubectl get pods
11.) In this section, I verify that my secrets are mounted successfully. You will notice the printed value has no trailing space or newline; that is intentional, as it avoids unwanted characters when the secret is referenced in code.
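The verification can be done by reading the mounted files directly. This sketch assumes the ubuntu-debug pod from the previous step is running and that $secret1Alias still holds the alias from step 1 (each secret appears as one file named after its alias):

```shell
# List the mounted secret files (one file per object alias)
kubectl exec ubuntu-debug -- ls /mnt/secrets-store
# Print a secret's value; note the file content has no trailing newline
kubectl exec ubuntu-debug -- cat /mnt/secrets-store/$secret1Alias
```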