Keptn Metrics
The Keptn metrics component allows you to define any type of metric from multiple instances of any type of data source in your Kubernetes cluster. You may have deployment tools like Argo, Flux, KEDA, HPA, or Keptn that need observability data to make automated decisions such as whether a rollout is good, or whether to scale up or down.
Your observability data may come from multiple observability solutions -- Prometheus, Thanos, Cortex, Dynatrace, Datadog and others -- or may be data that comes directly from your cloud provider such as AWS, Google, or Azure. The Keptn Metrics Server unifies and standardizes access to all this data. Minimal configuration is required because Keptn hooks directly into Kubernetes primitives.
The Keptn metrics feature integrates metrics from any number of metrics providers into a single set of metrics. This makes it easier to use than the Kubernetes metric server, which requires that you maintain point-to-point integrations from each source -- Argo Rollouts, Flux, KEDA, HPA, etc. Each has plugins but it is difficult to maintain them, especially if you are using multiple tools, multiple observability platforms, and multiple instances of some tools or observability platforms.
Using this exercise
This exercise runs on a kind cluster that you set up locally but could be run on any Kubernetes cluster.
- Set up the cluster by creating a local Kubernetes cluster, installing an instance of Prometheus to use as a data source, and installing Keptn on the cluster.
- Configure Keptn for your metrics by defining custom resources for each data provider and each piece of data you use.
- Run the metrics
If you want to create your own cluster to run this exercise, follow the instructions in Installation.
Set up the cluster
- Create the kind cluster:
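The cluster-creation command is elided above; as a sketch, assuming a default kind installation with no custom cluster configuration:

```shell
# Create a local Kubernetes cluster with kind (uses the default node image)
kind create cluster
```

Once the cluster is up, kind automatically points `kubectl` at the new `kind-kind` context.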
- Install and configure Prometheus as your data source

For this simple exercise, we use one instance of kube-prometheus-stack as our data source. In actual practice, you can use multiple instances of multiple types of data sources to gather your metrics.

Use the following command sequence to install an instance of Prometheus:
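The installation commands are elided above; a sketch using the community kube-prometheus-stack Helm chart, with the release name `kube-prom-stack` chosen to match the job and service names that appear in the sample metric later in this exercise:

```shell
# Add the Prometheus community chart repository and install kube-prometheus-stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install kube-prom-stack prometheus-community/kube-prometheus-stack \
  -n default --wait
```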
- Install Keptn on your cluster
Use the following command sequence to install Keptn on your cluster:
```shell
helm repo add keptn https://charts.lifecycle.keptn.sh/
helm repo update
helm upgrade --install keptn keptn/keptn -n keptn-system --create-namespace --wait
```
For more details about how to install Keptn, see the Installation Guide.
Expose Prometheus and get an existing metric
Now we need to expose Prometheus and choose an existing metric to use for this exercise.
- Use port forwarding to expose Prometheus and find a suitable metric. Listing the services shows multiple entries; look for the Prometheus UI service (the one on port `9090`), which should be called `prometheus-operated` (no typo).
- Port-forward to access the UI:
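The commands are elided above; a sketch, assuming Prometheus was installed into the `default` namespace as shown earlier:

```shell
# List the services and look for prometheus-operated on port 9090
kubectl get services -n default

# Forward local port 9090 to the Prometheus UI
kubectl -n default port-forward svc/prometheus-operated 9090
```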
- Go to http://localhost:9090/ and, in the search box, type `prometheus_`. This produces a list of the metrics that are defined.
- For this exercise, we are going to use a single metric. In real practice, you would define many metrics. Choose a metric that has a non-zero value. For example:
```
prometheus_http_requests_total{code="200", container="prometheus",
  endpoint="http-web", handler="/-/healthy", instance="10.244.0.15:9090",
  job="kube-prom-stack-kube-prome-prometheus", namespace="default",
  pod="prometheus-kube-prom-stack-kube-prome-prometheus-0",
  service="kube-prom-stack-kube-prome-prometheus"}
```
For a cleaner display, remove some attributes:
```
prometheus_http_requests_total{code="200", container="prometheus",
  endpoint="http-web", handler="/-/healthy",
  job="kube-prom-stack-kube-prome-prometheus", namespace="default",
  service="kube-prom-stack-kube-prome-prometheus"}
```
This is your metric and value as it is stored in Prometheus.
Configure Keptn for your metrics
Now you must tell Keptn about the metrics you are using. You do this by defining:
- A `KeptnMetricsProvider` resource to define the external observability platform you are using as a data source. For this exercise, this is the Prometheus server you installed and exposed above.
- A `KeptnMetric` resource to define each metric query you want to pull.
The steps are:
- Create a `KeptnMetricsProvider` resource for the observability platform you are using as a data source. To do this, create a new `.yml` file with content like the following:

```yaml
apiVersion: metrics.keptn.sh/v1
kind: KeptnMetricsProvider
metadata:
  name: local-prometheus
  namespace: default
spec:
  type: prometheus
  targetServer: "http://prometheus-operated.default.svc.cluster.local:9090/"
```
You can specify a virtually unlimited number of providers, including multiple instances of each observability platform. Each one must be assigned a unique `name` and is identified by the `type` of platform it is and the URL of the target server. If the target server is protected by a Secret, provide information about the token and key.

Keptn uses the `name` you assign to reference this data source. The `targetServer` field tells Keptn where to find this data source; in this case, it points to a Prometheus API endpoint on port 9090.

Apply this file with the following command:
```shell
kubectl apply -f YOUR-KEPTN-METRIC-PROVIDER.yml
```
- Define a `KeptnMetric` custom resource for each piece of data you want to pull (in this case, the Prometheus query for the metric you selected). This is where you use the Prometheus query from before. Note: The namespaces of the `KeptnMetricsProvider` and the `KeptnMetric` must match.

```yaml
apiVersion: metrics.keptn.sh/v1
kind: KeptnMetric
metadata:
  name: prometheus-http-requests-total
  namespace: default
spec:
  provider:
    name: local-prometheus
  query: 'prometheus_http_requests_total{code="200", handler="/-/healthy", job="kube-prom-stack-kube-prome-prometheus", namespace="default", service="kube-prom-stack-kube-prome-prometheus"}'
  fetchIntervalSeconds: 10
```
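As with the provider, this file must be applied to the cluster. Assuming you saved it as YOUR-KEPTN-METRIC.yml (a hypothetical filename, following the same convention as the provider step):

```shell
kubectl apply -f YOUR-KEPTN-METRIC.yml
```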
This data is fetched continuously, at the interval you specify for each specific piece of data. The data is available through the resource itself, through the data provider, and through the Kubernetes CLI.
Define KeptnMetric information
The KeptnMetric resource defines the information you want to gather, specified as a query for the particular observability platform you are using. You can define any type of metric from any data source.
For example, you might define two pieces of information to retrieve:

- Number of CPUs, derived from a `dev-prometheus` data platform
- An `availability` SLO, derived from a `dev-dynatrace` data platform

Each of these could be configured to fetch data every 10 seconds, but you could configure a different `fetchIntervalSeconds` value for each metric.
Note the following:

- Populate one YAML file per metric, then apply all of them.
- Each metric is assigned a unique `name`.
- The value of the `spec.provider.name` field in the `KeptnMetric` resource must correspond to the name assigned in the `metadata.name` field of a `KeptnMetricsProvider` resource.
- Information is fetched on a continuous basis, at a rate specified by the value of the `spec.fetchIntervalSeconds` field.
View available metrics

Keptn automatically starts pulling the metrics (every `fetchIntervalSeconds` seconds). Listing the metrics that are configured in your cluster shows output like the following:

```
NAME                             PROVIDER           QUERY                                                                                                                                                                   INTERVAL   VALUE
prometheus-http-requests-total   local-prometheus   prometheus_http_requests_total{code="200", handler="/-/healthy", job="kube-prom-stack-kube-prome-prometheus", namespace="default", service="kube-prom-stack-kube-prome-prometheus"}              756
```

You can also retrieve the value of a specific `KeptnMetric` resource.
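The listing command itself is elided above; a sketch, using the fully qualified resource name for the `metrics.keptn.sh` API group:

```shell
# List all KeptnMetric resources (and their latest values) in the default namespace
kubectl get keptnmetrics.metrics.keptn.sh -n default
```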
Run the metrics

As soon as you define your `KeptnMetricsProvider` and `KeptnMetric` resources, Keptn begins collecting the metrics you defined. You do not need to do anything else.
Observing the metrics
The metrics can be retrieved through CRs and through the Kubernetes Metric API.
The syntax to retrieve metrics from the CR is:
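The command is elided above; as a sketch, assuming the metric name and namespace used in this exercise, the latest value appears in the resource's status:

```shell
# Read the KeptnMetric CR directly; the fetched value is stored in its status
kubectl get keptnmetrics.metrics.keptn.sh prometheus-http-requests-total -n default -o yaml
```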
The syntax to retrieve metrics through the Kubernetes API is:
```shell
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/<namespace>/keptnmetrics.metrics.sh/<metric-name>/<metric-name>"
```
You can also display the metrics graphically using a dashboard such as Grafana.
Learn more
To learn more about the Keptn Metrics Server, see:
- Architecture: Keptn Metrics Operator
- More information about implementing Keptn Metrics: Keptn Metrics
- How to integrate Keptn metrics with the Kubernetes HorizontalPodAutoscaler (HPA) to provide autoscaling for the cluster: Using the HorizontalPodAutoscaler