
How to Scrape cAdvisor Metrics in GKE Using Prometheus

TL;DR

The Prometheus configuration is below. Be sure to give the Prometheus service account cluster permissions to GET the nodes/proxy and nodes API endpoints.

Jump straight to 3. Prometheus Configurations


Google Cloud Monitoring only exposes a small subset of cAdvisor metrics. With the setup below you'll be able to collect all of the cAdvisor metrics from GKE. Here are the steps to query Kubernetes directly for cAdvisor metrics, along with the Prometheus configuration.

1. Create Service Account

To scrape the cAdvisor endpoint you’ll need to create a service account with cluster permissions to GET nodes/proxy and nodes.

Create a manifest called sa-manifests.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: test
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - nodes/proxy
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test
subjects:
  - kind: ServiceAccount
    name: test
    namespace: default

Run kubectl apply -f sa-manifests.yaml

2. Test API Manually

Create a manifest file called pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: network
  namespace: default
spec:
  containers:
    - name: network
      image: praqma/network-multitool:c3d4e04
  serviceAccountName: test

Run the following commands

kubectl apply -f pod.yaml

kubectl exec -it network -n default -- bash

Now that we're inside the pod, let's make an API call to the Kubernetes API server to get the cAdvisor metrics. Run the following commands one at a time.

# export the KSA bearer token to an env variable
export BEARER_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Find the first K8s node
export NODE_NAME=$(curl https://kubernetes.default.svc.cluster.local:443/api/v1/nodes/ -s -H "Authorization: Bearer $BEARER_TOKEN" -k | jq -r .items[0].metadata.name)

# Make an api call to kubernetes using curl
curl https://kubernetes.default.svc.cluster.local:443/api/v1/nodes/$NODE_NAME/proxy/metrics/cadvisor -H "Authorization: Bearer $BEARER_TOKEN" -k

After that you should see the node's metrics:

# HELP machine_nvm_capacity NVM capacity value labeled by NVM mode (memory mode or app direct mode).
# TYPE machine_nvm_capacity gauge
machine_nvm_capacity{boot_id="bf88bcb1-f7dc-425d-87cc-ec4994216eb9",machine_id="b1962a4fef066daf20ce3f9adc1ca5e5",mode="app_direct_mode",system_uuid="b1962a4f-ef06-6daf-20ce-3f9adc1ca5e5"} 0
machine_nvm_capacity{boot_id="bf88bcb1-f7dc-425d-87cc-ec4994216eb9",machine_id="b1962a4fef066daf20ce3f9adc1ca5e5",mode="memory_mode",system_uuid="b1962a4f-ef06-6daf-20ce-3f9adc1ca5e5"} 0

You can find a complete list of cAdvisor metrics in the official GitHub repository.
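The endpoint returns metrics in the Prometheus text exposition format, as in the machine_nvm_capacity samples above. As a rough sketch (not Prometheus's official parser, and ignoring escaping inside label values), a sample line can be split into its name, labels, and value like this:

```python
import re

def parse_metric_line(line):
    """Split a Prometheus text-format sample into (name, labels, value).

    Simplified sketch: assumes no escaped quotes or commas inside
    label values, which holds for the cAdvisor samples shown above.
    """
    match = re.match(r'^(\w+)(?:\{(.*)\})?\s+(\S+)$', line)
    name, raw_labels, value = match.groups()
    labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels or ""))
    return name, labels, float(value)

sample = ('machine_nvm_capacity{boot_id="bf88bcb1-f7dc-425d-87cc-ec4994216eb9",'
          'mode="app_direct_mode"} 0')
name, labels, value = parse_metric_line(sample)
print(name, labels["mode"], value)  # machine_nvm_capacity app_direct_mode 0.0
```

In practice you'd use a client library rather than hand-rolled regexes, but this shows the shape of the data the curl call returns.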

3. Prometheus Configurations

Let's put these pieces together and create a Prometheus configuration that scrapes the cAdvisor metrics.

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: kubernetes-cadvisor
    honor_timestamps: true
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics/cadvisor
    scheme: https
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      insecure_skip_verify: true
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc.cluster.local:443
      - source_labels: [ __meta_kubernetes_node_name ]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    metric_relabel_configs:
      - source_labels: [ namespace ]
        separator: ;
        regex: ^$
        replacement: $1
        action: drop
      - source_labels: [ pod ]
        separator: ;
        regex: ^$
        replacement: $1
        action: drop
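The two metric_relabel_configs entries drop any series whose namespace or pod label is empty (the ^$ regex), i.e. machine-level series that aren't tied to a pod. A small sketch of what that drop rule does, using made-up sample series rather than Prometheus's actual relabeling code:

```python
import re

def keep_series(labels):
    """Mimic the drop rules above: discard a series when its
    'namespace' or 'pod' label is empty (regex ^$ matches)."""
    for label in ("namespace", "pod"):
        if re.fullmatch(r"^$", labels.get(label, "")):
            return False
    return True

series = [
    {"__name__": "container_cpu_usage_seconds_total",
     "namespace": "default", "pod": "network"},
    {"__name__": "machine_cpu_cores", "namespace": "", "pod": ""},  # dropped
]
kept = [s for s in series if keep_series(s)]
print([s["__name__"] for s in kept])  # ['container_cpu_usage_seconds_total']
```

If you want the machine-level series too, simply remove those two drop rules from the config.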

Cheers!


Manage Kubernetes Secrets with External Secrets Operator


What is External Secrets Operator?

External Secrets Operator is a Kubernetes operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, IBM Cloud Secrets Manager, CyberArk Conjur, Pulumi ESC, and many more. The operator reads information from external APIs and automatically injects the values into a Kubernetes Secret.

You can read more about it here: https://external-secrets.io/v0.16.1

What Problem does it solve?

It solves the problem of keeping your secrets consistent across different environments.

Say, for example, you store a database password in AWS/GCP Secrets Manager, and that same password is also stored as a k8s Secret in 5 different namespaces. If you rotate the password, you'd have to update it in 6 different places, which is a pain. That's where External Secrets Operator makes life easy.

With External Secrets Operator, the secrets manager becomes the source of truth. You define a manifest that references the secrets manager secret, and the k8s Secret is created from it. See the example below.

apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
  name: "hello-world"
spec:
  # This has permission to query Secrets Manager
  secretStoreRef:
    name: secret-store-name
    kind: SecretStore  # or ClusterSecretStore

  # RefreshInterval is the amount of time before the values are read again from the SecretStore provider
  # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h" (from time.ParseDuration)
  # May be set to zero to fetch and create it once
  refreshInterval: "1h"

  # the target describes the secret that shall be created
  # there can only be one target per ExternalSecret
  target:

    # The secret name of the resource
    # Defaults to .metadata.name of the ExternalSecret
    # It is immutable
    name: my-secret # It'll appear as secret name when you run `kubectl get secrets`

  # Data defines the connection between the Kubernetes Secret keys and the Provider data
  data:
    - secretKey: secret-key-to-be-managed # Name of the secret
      remoteRef:
        key: provider-key # name of the Secrets manager secret name
        version: provider-key-version # The version of the Secrets manager secret

How to setup External Secrets Operator in GKE

Let's create a script called run-setup.sh

PROJECT_ID=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_ID)")
PROJECT_NUMBER=$(gcloud projects list --filter="$(gcloud config get-value project)" --format="value(PROJECT_NUMBER)")
NAMESPACE=external-secrets
KSA_NAME=external-secrets # This KSA will be created by the helm chart

CLUSTER_STORE_MANIFEST=cluster-store.yaml
EXTERNAL_SECRET_MANIFEST=external-secret.yaml
GCP_SECRET_NAME=my-secret
K8S_SECRET_NAME=my-k8s-secret-yay

# Install the helm chart for external-secrets. You don't need to be an expert in
# helm charts, but I strongly suggest you learn the basics.
# Check out Ahmed Elfakharany's course on udemy:
# https://www.udemy.com/share/105eEs3@HJ8aCtyHLG8Xg2rrdoCuepCPztyv_F_KAyXhJXzsKwD-zRl_ojP7th1zyt-_m9co/
helm repo add external-secrets https://charts.external-secrets.io

helm install external-secrets \
   external-secrets/external-secrets \
    -n $NAMESPACE \
    --create-namespace \
    --set installCRDs=true

# Workload Federation. Role is applied directly to KSA
# See https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#configure-authz-principals
gcloud projects add-iam-policy-binding projects/$PROJECT_ID \
    --role=roles/secretmanager.secretAccessor \
    --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$NAMESPACE/sa/$KSA_NAME \
    --condition=None


echo -n "my super secret data" | gcloud secrets create $GCP_SECRET_NAME --data-file=-

# A ClusterSecretStore represents a secure external location for storing secrets. In actuality it'll make an API call to Secrets Manager to get the secret value
cat > $CLUSTER_STORE_MANIFEST << EOL
---
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: gcp-store
spec:
  provider:
    gcpsm:
      projectID: $PROJECT_ID
EOL

cat > $EXTERNAL_SECRET_MANIFEST << EOL
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-external-secret
spec:
  refreshInterval: 1h             # rate at which the operator re-reads the secret from Secrets Manager
  secretStoreRef:
    kind: ClusterSecretStore
    name: gcp-store  # name of the ClusterSecretStore (or kind specified)
  target:
    name: $K8S_SECRET_NAME  # name of the k8s Secret to be created
    creationPolicy: Owner
  data:
    - secretKey: SECRET_KEY
      remoteRef:
        version: "1" # Version of the secret. If not specified it'll use the latest
        key: $GCP_SECRET_NAME # name of the GCP Secrets Manager secret

EOL

# Create the ClusterSecretStore
kubectl apply -f $CLUSTER_STORE_MANIFEST

# Create the ExternalSecret
kubectl apply -f $EXTERNAL_SECRET_MANIFEST

If everything went to plan then a Kubernetes Secret called my-k8s-secret-yay with a data field called SECRET_KEY should have been created.

$ kubectl get secrets/my-k8s-secret-yay -o json | jq -r .data.SECRET_KEY | base64 -d && echo ""

my super secret data
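Kubernetes stores Secret data base64-encoded, which is why the command above pipes through base64 -d. The same round trip in Python, with a hard-coded sample value standing in for the real Secret data:

```python
import base64

# What the API server stores under .data.SECRET_KEY
encoded = base64.b64encode(b"my super secret data").decode()

# What `base64 -d` recovers on the client side
decoded = base64.b64decode(encoded).decode()
print(decoded)  # my super secret data
```

Keep in mind base64 is an encoding, not encryption; anyone who can read the Secret can decode it.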

Author Work Story

I’m using helm charts and Argo CD to manage my k8s clusters, and I needed a way to keep the secrets in Secret Manager consistent with the k8s secrets. I started off with helm-secrets, which did solve the consistency problem between my k8s secrets and GCP Secret Manager secrets. However, the cracks began to show once I started using Argo CD for the continuous delivery of my apps. It quickly became apparent that supporting helm-secrets wasn’t going to work out, as you can see in the documentation for integrating helm-secrets with Argo CD. Yikes!

Being able to store the references to GCP secret manager secrets in git without risk of exposing the sensitive information was a Godsend. Give external secrets operator a try and star/contribute to the project if you can.

Cheers!
