In the dynamic world of cloud-native and microservices, the installation and continuous upgrading of distributed systems
are a priority.
Embarking on a journey into the world of containerized applications often leads us into a labyrinth of configuration
and deployment intricacies.
Amidst this complexity, Helm charts emerge as a guiding light, streamlining the orchestration of Kubernetes-based applications.
In this article, we're delving into the topic of Helm charts, in particular umbrella charts: an approach that
simplifies the deployment and maintenance of microservice application architectures.
Let's uncover how these consolidated charts pave the way for smoother deployments, enhance scalability, and bring
harmony to the orchestration of complex Kubernetes environments.
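To make the idea concrete, here is a minimal sketch of what an umbrella chart's `Chart.yaml` might look like. All chart names, versions, and the repository URL are illustrative, not taken from a real project:

```yaml
# Chart.yaml of a hypothetical umbrella chart for an online shop
apiVersion: v2
name: online-shop              # the umbrella chart itself contains no templates of its own
version: 0.1.0
dependencies:                  # each microservice ships as its own subchart
  - name: frontend
    version: "1.2.x"
    repository: "https://charts.example.com"
  - name: cart-service
    version: "0.9.x"
    repository: "https://charts.example.com"
  - name: payment-service
    version: "2.0.x"
    repository: "https://charts.example.com"
    condition: payment.enabled # subcharts can be toggled on/off via values
```

With this in place, `helm dependency update` pulls the subcharts and a single `helm install online-shop ./online-shop` deploys (and later upgrades) the whole stack as one versioned unit.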
Hello and Happy New Year from the entire Keptn Team.
We hope you had a wonderful holiday season and that 2024 is off to a great start for all of you!
We'd like to take a moment to reflect on what a great year 2023 was for Keptn and what we're looking forward to in 2024.
The biggest news for the project in 2023 was probably the maturing of the cloud native Keptn,
with the former subproject named Keptn Lifecycle Toolkit officially becoming Keptn in August 2023,
and the end-of-life (EOL) for Keptn v1.
We focused on making Keptn a 100% cloud-native,
opinionated way of delivering Kubernetes apps, and it was important for us to carry over the
"big pillars" of Keptn v1: the metrics component of the Keptn v1 quality gates feature,
observability (new), and deployments, with KeptnTasks and Evaluations that execute any container
the user provides and extend deployments (the job executor service in Keptn v1).
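As a sketch of the "execute any container the user provides" idea, a task can be declared as a `KeptnTaskDefinition` custom resource. This example assumes the `lifecycle.keptn.sh/v1alpha3` API and a container-based runner; the name, image, and command are illustrative:

```yaml
# Hypothetical pre-deployment task: Keptn runs this container as a Job
apiVersion: lifecycle.keptn.sh/v1alpha3
kind: KeptnTaskDefinition
metadata:
  name: pre-deploy-check
spec:
  container:
    name: check
    image: busybox:1.36
    command: ["sh", "-c", "echo 'running pre-deployment check'"]
```

A workload can then reference the task (for instance via a `keptn.sh/pre-deployment-tasks: pre-deploy-check` annotation) so it runs before the deployment proceeds.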
In the dynamic world of DevOps and continuous delivery, keeping applications reliable
and high-performing is a top priority.
Site reliability engineers (SREs) rely on Service Level Objectives (SLOs) to set the standards that the
Service Level Indicators (SLIs) of an application must meet, like response time, error rate,
or any other metric that might be relevant to the application.
The use of SLOs is not a new concept, but integrating them into an application comes with its own set of issues:

- Figuring out which SLIs and SLOs to use. Do you get the SLI values from one monitoring source or multiple? This complexity makes it harder to use them effectively.
- Defining SLO priorities. Imagine a new version of a service that fixes a concurrency problem but slows down response time. This may be a valid trade-off: the new version should not be rejected for the increased response time, given that the error rate will decrease. Situations like these call for a way to define grading logic that assigns different priorities to SLOs.
- Defining and storing SLOs. It's crucial to define and store these goals in one central place, ideally as a declarative resource in a GitOps repository, where each change can be easily traced back.
In this article, we'll explore how Keptn tackles these challenges with its new Analysis feature.
We will deploy a demo application onto a Kubernetes cluster to show how Keptn helps SREs gather
and make sense of SLOs, making the whole process more straightforward and efficient.
The example application will expose some metrics itself via its Prometheus endpoint,
while other data will come from Dynatrace.
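As a preview of how weighted SLOs and multiple data sources can come together, here is a sketch of an `AnalysisDefinition` resource. It assumes the `metrics.keptn.sh/v1alpha3` API; the objective names, thresholds, and weights are illustrative, and each objective references a value template bound to a provider such as Prometheus or Dynatrace:

```yaml
# Hypothetical analysis: error rate is weighted higher than response time,
# so a small response-time regression cannot fail the analysis on its own
apiVersion: metrics.keptn.sh/v1alpha3
kind: AnalysisDefinition
metadata:
  name: demo-app-analysis
spec:
  objectives:
    - analysisValueTemplateRef:
        name: response-time-p95   # e.g. queried from Prometheus
      target:
        failure:
          greaterThan:
            fixedValue: 500       # ms, illustrative threshold
      weight: 1
    - analysisValueTemplateRef:
        name: error-rate          # e.g. queried from Dynatrace
      target:
        failure:
          greaterThan:
            fixedValue: 5         # percent, illustrative threshold
      weight: 2                   # higher weight = higher priority in grading
  totalScore:
    passPercentage: 90
    warningPercentage: 75
```

Stored in a GitOps repository, a resource like this gives the grading logic a single, traceable home — exactly the pain points listed above.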