
Manage Kubernetes Secrets with External Secrets Operator

What is External Secrets Operator?

External Secrets Operator is a Kubernetes operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, IBM Cloud Secrets Manager, CyberArk Conjur, Pulumi ESC, and many more. The operator reads information from external APIs and automatically injects the values into a Kubernetes Secret.

You can read more about it here: https://external-secrets.io/v0.16.1

What Problem does it solve?

It solves the problem of keeping your secrets consistent across different environments.

Take, for example, a database password that you store in AWS/GCP Secrets Manager. You also have that same database password stored as a k8s Secret in 5 different namespaces. If you update the password, you'd have to update it in 6 different places, which is a pain. That's where External Secrets Operator makes life so easy.

With External Secrets Operator, you make Secrets Manager the source of truth. You define a manifest that references the Secrets Manager secret, and the k8s Secret gets created from it. Please see the example below.

apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
  name: "hello-world"
spec:
  # This has permission to query Secrets Manager
  secretStoreRef:
    name: secret-store-name
    kind: SecretStore  # or ClusterSecretStore

  # RefreshInterval is the amount of time before the values are read again from the SecretStore provider
  # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h" (from time.ParseDuration)
  # May be set to zero to fetch and create it once
  refreshInterval: "1h"

  # the target describes the secret that shall be created
  # there can only be one target per ExternalSecret
  target:

    # The secret name of the resource
    # Defaults to .metadata.name of the ExternalSecret
    # It is immutable
    name: my-secret # It'll appear as secret name when you run `kubectl get secrets`

  # Data defines the connection between the Kubernetes Secret keys and the Provider data
  data:
    - secretKey: secret-key-to-be-managed # Key that will appear in the Kubernetes Secret's data
      remoteRef:
        key: provider-key # Name of the secret in the provider (e.g. Secrets Manager)
        version: provider-key-version # Version of the provider secret
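
Once you apply a manifest like this, you can check whether the operator was able to sync it. A quick sanity check using the names from the example above (the exact status output varies by version):

$ kubectl get externalsecret hello-world          # READY should be True
$ kubectl describe externalsecret hello-world     # shows sync status and any provider errors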

How to set up External Secrets Operator in GKE

Let's create a script called run-setup.sh.

PROJECT_ID=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_ID)")
PROJECT_NUMBER=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_NUMBER)")
NAMESPACE=external-secrets
KSA_NAME=external-secrets # This will be created by the helm chart

CLUSTER_STORE_MANIFEST=cluster-store.yaml
EXTERNAL_SECRET_MANIFEST=external-secret.yaml
GCP_SECRET_NAME=my-secret
K8S_SECRET_NAME=my-k8s-secret-yay

# Installing the helm chart for external secrets. You don't need to be an expert in helm charts,
# but I heavily suggest you learn the basics.
# Check out Ahmed Elfakharany's course on Udemy:
# https://www.udemy.com/share/105eEs3@HJ8aCtyHLG8Xg2rrdoCuepCPztyv_F_KAyXhJXzsKwD-zRl_ojP7th1zyt-_m9co/
helm repo add external-secrets https://charts.external-secrets.io

helm install external-secrets \
   external-secrets/external-secrets \
    -n $NAMESPACE \
    --create-namespace \
    --set installCRDs=true

# Workload Federation. Role is applied directly to KSA
# See https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#configure-authz-principals
gcloud projects add-iam-policy-binding projects/$PROJECT_ID \
    --role=roles/secretmanager.secretAccessor \
    --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$NAMESPACE/sa/$KSA_NAME \
    --condition=None


echo -n "my super secret data" | gcloud secrets create $GCP_SECRET_NAME --data-file=-
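
# (Optional sanity check, not part of the original setup: confirm the secret
#  actually landed in Secret Manager before wiring up the operator.)
# gcloud secrets versions access latest --secret=$GCP_SECRET_NAME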

# ClusterSecretStore represents a secure external location for storing secrets. In actuality it'll make an API call to Secrets Manager to get the secret value
cat > $CLUSTER_STORE_MANIFEST << EOL
---
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: gcp-store
spec:
  provider:
    gcpsm:
      projectID: $PROJECT_ID
EOL

cat > $EXTERNAL_SECRET_MANIFEST << EOL
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-external-secret
spec:
  refreshInterval: 1h             # how often the operator re-reads the secret from Secret Manager
  secretStoreRef:
    kind: ClusterSecretStore
    name: gcp-store  # name of the ClusterSecretStore (or kind specified)
  target:
    name: $K8S_SECRET_NAME  # name of the k8s Secret to be created
    creationPolicy: Owner
  data:
    - secretKey: SECRET_KEY
      remoteRef:
        version: "1" # Version of the secret. If not specified it'll use the latest
        key: $GCP_SECRET_NAME # Name of the GCP Secrets Manager secret

EOL

# We are going to create the ClusterSecretStore
kubectl apply -f $CLUSTER_STORE_MANIFEST

# We are going to create the external-secret
kubectl apply -f $EXTERNAL_SECRET_MANIFEST

If everything went to plan, a Kubernetes Secret called my-k8s-secret-yay with a data field called SECRET_KEY should have been created.

 $ kubectl get secrets/my-k8s-secret-yay -o json | jq -r .data.SECRET_KEY | base64 -d && echo ""

my super secret data
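
If the secret didn't show up, the ExternalSecret's status usually tells you why, for example a missing IAM binding or a typo in the remote key. A couple of checks using the names from the script above:

$ kubectl get clustersecretstore gcp-store            # READY should be True
$ kubectl describe externalsecret my-external-secret  # look for SecretSynced events or errors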

Author Work Story

I’m using helm charts and argo-cd to manage my k8s clusters. I needed a way to keep secrets consistent between Secret Manager and my k8s Secrets. I started off using helm secrets. It solved the problem of consistency between my k8s secrets and GCP Secret Manager secrets. However, the cracks began to show after I started using argo-cd to control the Continuous Delivery of my apps. It quickly became apparent that supporting helm secrets wasn’t going to work out, as seen in the documentation for integrating helm secrets with argo-cd. Yikes!

Being able to store the references to GCP secret manager secrets in git without risk of exposing the sensitive information was a Godsend. Give external secrets operator a try and star/contribute to the project if you can.

Cheers!


Keda Pub/Sub Scaler

Keda Pub/Sub Scaler was an unnecessary challenge I had to face over the course of 3 days. If you were to cross-reference these 3 sources:

  • https://cloud.google.com/kubernetes-engine/docs/tutorials/scale-to-zero-using-keda#setup-env
  • https://keda.sh/docs/2.10/scalers/gcp-pub-sub/
  • https://keda.sh/docs/2.14/authentication-providers/gcp-workload-identity/

You can come away with a reasonable idea of what you need to do, as long as you read them thoroughly…

Or you can see a working example here 😀

TL;DR

Go to the full example

Assumptions

  • Workload Identity is turned on for your cluster
  • Your node pool has “GKE Metadata Server” enabled
  • Your GCP user has the permissions to create a workload identity for a Kubernetes Service Account
  • You’re using helm to install keda

Getting Started

To get Keda working you first need to get Custom Metrics Stackdriver Adapter working.

Please see my article on GCP Horizontal Pod Autoscaling with Pub/Sub to learn how to set that up.

Configuring Keda

After getting the Custom Metrics Stackdriver Adapter working, it's time to install Keda.

helm install --repo https://kedacore.github.io/charts --version 2.16.0 keda keda -n keda --create-namespace
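
Before moving on, it's worth confirming the Keda pods came up (pod names will vary):

$ kubectl get pods -n keda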

Create a bash script to add a policy binding to the keda-operator KSA and call the script add-policy.sh.

PROJECT_ID=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_ID)")
PROJECT_NUMBER=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_NUMBER)")
KEDA_NAMESPACE=keda

gcloud projects add-iam-policy-binding projects/$PROJECT_ID \
      --role=roles/monitoring.viewer \
      --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$KEDA_NAMESPACE/sa/keda-operator \
      --condition=None
  echo "Added workload identity to keda-operator"

That is all you need to do to enable the Keda Pub/Sub Scaler. Continue on if you want a full example.


Full Example

This script will:

  • Install keda via helm
  • Add policy bindings for the keda-operator and pubsub-sa KSAs
  • Create a topic/subscription
  • Deploy an app that reads from the pub/sub subscription
  • Create Keda TriggerAuthentication and ScaledObject objects

PROJECT_ID=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_ID)")
SERVICE_ACCOUNT_NAME=custom-metrics-stackdriver
PROJECT_NUMBER=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_NUMBER)")
KEDA_NAMESPACE=keda
APP_NAMESPACE=default
PUBSUB_TOPIC=echo
PUBSUB_SUBSCRIPTION=echo-read
YAML_FILE_NAME=test-app.yaml

create(){
  helm install --repo https://kedacore.github.io/charts keda keda -n keda --create-namespace

  sleep 3

  gcloud projects add-iam-policy-binding projects/$PROJECT_ID \
      --role=roles/monitoring.viewer \
      --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$KEDA_NAMESPACE/sa/keda-operator \
      --condition=None
  echo "Added workload identity to keda-operator"

  gcloud pubsub topics create $PUBSUB_TOPIC
  sleep 5
  echo "Created $PUBSUB_TOPIC Topic"

  gcloud pubsub subscriptions create $PUBSUB_SUBSCRIPTION --topic=$PUBSUB_TOPIC
  echo "Created Subscription $PUBSUB_SUBSCRIPTION to Topic $PUBSUB_TOPIC"

  gcloud projects add-iam-policy-binding projects/$PROJECT_ID \
      --role=roles/pubsub.subscriber \
      --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$APP_NAMESPACE/sa/pubsub-sa \
      --condition=None
  echo "Added workload identity to pubsub-sa"

cat > $YAML_FILE_NAME << EOL
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pubsub-sa
---
# [START gke_deployment_pubsub_with_workflow_identity_deployment_pubsub]
# [START container_pubsub_workload_identity_deployment]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pubsub
spec:
  selector:
    matchLabels:
      app: pubsub
  template:
    metadata:
      labels:
        app: pubsub
    spec:
      serviceAccountName: pubsub-sa
      containers:
        - name: subscriber
          image: us-docker.pkg.dev/google-samples/containers/gke/pubsub-sample:v2
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-gcp-credentials
spec:
  podIdentity:
    provider: gcp
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: pubsub-scaledobject
spec:
  scaleTargetRef:
    name: pubsub #Deployment
  minReplicaCount: 1
  maxReplicaCount: 2
  triggers:
    - type: gcp-pubsub
      authenticationRef:
        name: keda-trigger-auth-gcp-credentials
      metadata:
        subscriptionName: "$PUBSUB_SUBSCRIPTION" # Required
        value: "5"
        activationValue: "5"
#        credentialsFromEnv: GOOGLE_APPLICATION_CREDENTIALS_JSON
# [END container_pubsub_workload_identity_deployment]
# [END gke_deployment_pubsub_with_workflow_identity_deployment_pubsub]

EOL

  kubectl apply -f $YAML_FILE_NAME -n $APP_NAMESPACE
  echo "Deployed test application"
}


delete(){
  kubectl delete -f $YAML_FILE_NAME -n $APP_NAMESPACE
  gcloud projects remove-iam-policy-binding projects/$PROJECT_ID \
        --role=roles/pubsub.subscriber \
        --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$APP_NAMESPACE/sa/pubsub-sa \
        --condition=None
  gcloud pubsub subscriptions delete $PUBSUB_SUBSCRIPTION
  gcloud pubsub topics delete $PUBSUB_TOPIC
  gcloud projects remove-iam-policy-binding projects/$PROJECT_ID \
        --role=roles/monitoring.viewer \
        --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$KEDA_NAMESPACE/sa/keda-operator \
        --condition=None
  helm uninstall keda -n keda
}

create

In another terminal window, send messages to the topic:

$ for i in {1..200}; do gcloud pubsub topics publish echo --message="Autoscaling #${i}"; done

It's going to take a few minutes for the scaling to occur.

Watch as the pod count goes up. Eventually you'll see the targets start to climb as well:

$ watch kubectl get hpa -n default

NAME                           REFERENCE           TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
keda-hpa-pubsub-scaledobject   Deployment/pubsub   2/5 (avg)   1         2         2          10m

Troubleshooting

  • You see the dreaded <unknown>/5 error.
    • This can happen for a variety of reasons. It's best to check the output of all the commands and make sure they all succeeded; if any of them failed, the HPA setup will fail. The commands below are usually the fastest way to narrow it down.
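
A couple of places to look, using the resource names from the full example (log output will vary):

$ kubectl logs -n keda deployment/keda-operator --tail=50       # scaler errors show up here
$ kubectl describe scaledobject pubsub-scaledobject -n default  # check conditions and events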


Manage Multiple Kubernetes Clusters with Argocd


Argocd makes it easy to manage multiple kubernetes clusters with a single instance of Argocd. Let's get to it.

Assumptions

  • You already have a remote cluster you want to manage.
  • You are using GKE.
    • If not, this guide can still help you. Just make sure the argocd-server and argocd-application-controller service accounts have admin permissions on the remote cluster.
  • You are using helm to manage argocd.
    • If not then dang, that must be rough.
  • You have the ability to create service accounts with container admin permissions.
    • Or the argocd-server and argocd-application-controller service accounts already have admin permissions on the remote cluster.

IAM Shenanigans

We need to

  • Create a service account with the container.admin role.
  • Bind the iam.workloadIdentityUser role to the Kubernetes service accounts argocd-server & argocd-application-controller so that they can impersonate the service account that will be created.

Here’s a simple script to do just that. Call it create-gsa.sh.

PROJECT_ID=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_ID)")
SERVICE_ACCOUNT_NAME=argo-cd-01
PROJECT_NUMBER=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_NUMBER)")


gcloud iam service-accounts create $SERVICE_ACCOUNT_NAME \
  --description="argo-cd remote cluster access" \
  --display-name="$SERVICE_ACCOUNT_NAME"
echo "Created google service account(GSA) $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com"

sleep 5 # Sleep is because iam policy binding sometimes fails if it's used too soon after service account creation


gcloud projects add-iam-policy-binding $PROJECT_ID \
 --role roles/container.admin \
 --member serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
echo "added role container.admin to GSA $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com"

# Needed so KSA can impersonate GSA account
gcloud iam service-accounts add-iam-policy-binding  \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:$PROJECT_ID.svc.id.goog[argocd/argocd-server]" \
  $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
echo "added iam policy for KSA serviceAccount:$PROJECT_ID.svc.id.goog[argocd/argocd-server]"

# Needed so KSA can impersonate GSA account
gcloud iam service-accounts add-iam-policy-binding  \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:$PROJECT_ID.svc.id.goog[argocd/argocd-application-controller]" \
  $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
echo "added iam policy for KSA serviceAccount:$PROJECT_ID.svc.id.goog[argocd/argocd-application-controller]"

Get IP & Certificate Authority of the Remote K8s Clusters

Get Public IP and Unencoded Cluster Certificate

In the console

  • Go to the cluster details
  • Look under the Control Plane Networking section to find the public endpoint and the “Show cluster certificate” button.
  • Press the “Show cluster certificate” button to get the certificate.

Base64 Encode Cluster Certificate

  • Copy the certificate to a file called cc.txt
  • Run the base64 command to encode the certificate
    • Be sure to copy everything, including the BEGIN CERTIFICATE/END CERTIFICATE lines

base64 -w 0 cc.txt && echo ""
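
Alternatively, you can skip the console entirely. gcloud will print both values, and masterAuth.clusterCaCertificate comes back already base64-encoded (this assumes a zonal cluster named remote-cluster; substitute your own cluster name and location):

gcloud container clusters describe remote-cluster --zone=us-central1-a --format="value(endpoint)"
gcloud container clusters describe remote-cluster --zone=us-central1-a --format="value(masterAuth.clusterCaCertificate)"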

Create Argocd Helm Chart Values File

Add the base64-encoded cluster certificate and the public IP to the CLUSTER_CERT_BASE64_ENCODED and CLUSTER_IP variables respectively.

Create a bash script create-yaml.sh and execute it.

PROJECT_ID=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_ID)")
SERVICE_ACCOUNT_NAME=argo-cd-01
PROJECT_NUMBER=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_NUMBER)")
CLUSTER_CERT_BASE64_ENCODED=""
CLUSTER_IP="" # Example 35.44.34.111. DO NOT INCLUDE "https://"

cat > values.yaml <<EOL
configs:
  clusterCredentials:
    remote-cluster:
      server:  https://${CLUSTER_IP}
      config:
        {
          "execProviderConfig": {
            "command": "argocd-k8s-auth",
            "args": [ "gcp" ],
            "apiVersion": "client.authentication.k8s.io/v1beta1"
          },
          "tlsClientConfig": {
            "insecure": false,
            "caData": "${CLUSTER_CERT_BASE64_ENCODED}"
          }
        }
  rbac:
    ##################################
    # Assign admin roles to users
    ##################################
    policy.default: role:readonly  # ***** Allows you to view everything without logging in.
    policy.csv: |
      g, myAdmin, role:admin
  ##################################
  # Assign permission login and to create api keys for  users
  ##################################
  cm:
    accounts.myAdmin: apiKey, login
    users.anonymous.enabled: true
  params:
    server.insecure: true #communication between services is via http

  ##################################
  #  Assigning the password to the users. Argo-cd uses bcrypt.
  #  To generate a new password, use https://bcrypt.online/ and add it here.
  ##################################
  secret:
    extra:
      accounts.myAdmin.password: \$2y\$10\$p5knGMvbVSSBzvbeM1tLne2rYBW.4L6aJqN.Fp1AalKe3qh3LuBq6 #fancy_password
      accounts.myAdmin.passwordMtime: 1970-10-08T17:45:10Z


controller:
  serviceAccount:
    annotations:
      iam.gke.io/gcp-service-account: ${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com

server:
  serviceAccount:
    annotations:
      iam.gke.io/gcp-service-account: ${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
  service:
    type: LoadBalancer


EOL

Run Helm Install/Upgrade

helm install --repo https://argoproj.github.io/argo-helm --version 7.6.7 argocd argo-cd -f values.yaml -n argocd --create-namespace

If you run helm upgrade, make sure you delete the argocd-server and argocd-application-controller pods so that the service account changes take effect.
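
A quick way to do that, assuming the chart's standard app.kubernetes.io labels:

kubectl delete pods -n argocd -l app.kubernetes.io/name=argocd-server
kubectl delete pods -n argocd -l app.kubernetes.io/name=argocd-application-controller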

Confirm everything is working

You can create your own application on the remote cluster, or you can run this script to create one. Create a bash script called apply-application.sh and execute it.


YAML_FILE_NAME="guestbook-application.yaml"

cat > $YAML_FILE_NAME << EOL
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  destination:
    namespace: guestbook
    name:  remote-cluster #Name of the remote cluster
  project: default
  source:
    path: helm-guestbook
    repoURL: https://github.com/argoproj/argocd-example-apps # Check to make sure this still exists
    targetRevision: HEAD
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

EOL

kubectl apply -f $YAML_FILE_NAME

The Application should have automatically synced and be healthy.
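
You can confirm from the CLI as well, since Application is just a CRD (the column output below is illustrative):

$ kubectl get applications -n argocd

NAME        SYNC STATUS   HEALTH STATUS
guestbook   Synced        Healthy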

Troubleshooting

  • If you did a helm upgrade instead of a helm install, you may want to delete the argocd-server and argocd-application-controller pods to make sure the service account changes took effect.