Performance as a Service

This use case shows how to set up a fully automated, on-demand self-service model for performance testing.

About this use case

The Dynatrace Sockshop sample used in keptn v0.1 consists of eight microservices that are under development. The goal of this use case is to provide developers with an automated model for running performance tests against these services on demand. Early performance feedback on a service while it is still in the development environment, before it is deployed to production, supports an advanced DevOps approach. All in all, this helps to move from manual, sporadic execution and analysis of performance tests to a fully automated on-demand self-service model for developers.

To illustrate the scenario this use case addresses, you will change one service of Sockshop that gets deployed to the dev environment. Eager to understand the performance characteristics of this new deployment, you trigger a performance test. However, this performance test fails because of a quality gate that is in place. To investigate the issues behind the failed test, you will use a monitoring solution that lets you compare the load tests of the two builds.

Step 1 (optional): Verify request attributes in Dynatrace

During the setupInfrastructure.sh installation procedure, request attributes were set up in the Dynatrace tenant. With these, the data stored in the x-dynatrace-test request header is extracted into request attributes that tag and distinguish the traffic generated by the performance tests. For further information on how to capture request attributes, please see this page in the Dynatrace documentation.
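
For illustration, the load-testing tool tags its traffic by sending the x-dynatrace-test header as semicolon-separated key=value pairs, from which Dynatrace extracts the request attributes verified below. The following sketch tags a single request by hand; the attribute values (CartsLoadTest, cart_load.jmx, AddToCart) and the service URL are made-up placeholders, not the values your JMeter scripts actually send.

    # hand-tagged sample request that would surface under the LTN, LSN, and TSN request attributes
    $ curl -H "x-dynatrace-test: LTN=CartsLoadTest;LSN=cart_load.jmx;TSN=AddToCart" \
        http://<carts-service>/carts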

  • Verify request attributes

    1. Go to Settings, Server-side monitoring, and click on Request attributes.
    2. Verify that the following three request attributes have been created:
      • LTN
      • LSN
      • TSN

    The screenshot shows the rule definition for LTN.

    Rule definition

Step 2: Run performance test on carts service

In this step, you trigger a performance test for (1) the current implementation of the carts service and (2) a new version of it. The new version intentionally contains a slowdown of the service, which will be detected by the performance validation.

  1. Run performance test on current implementation

    1. Go to Jenkins and click on the sockshop folder.
    2. Click on carts.performance and then on Scan Multibranch Pipeline Now.
    3. Select the master branch and click on Build Now to trigger the performance pipeline.
  2. Introduce a slowdown in the carts service

    1. In the directory ~/keptn/repositories/carts/, open the file ./src/main/resources/application.properties
    2. Change the value of delayInMillis from 0 to 1000 (an illustrative one-liner for this edit is sketched at the end of this step)
    3. Commit and push the changes to your GitHub repository carts:
    $ git add .
    $ git commit -m "Property changed"
    $ git push
    
  3. Build this new version

    1. Go to your Jenkins and click on the sockshop folder.
    2. Click on carts and select the master branch (or click on Scan Multibranch Pipeline Now).
    3. Click on Build Now to trigger the performance pipeline.
    4. Wait until the pipeline shows: Success.
  4. Run performance test on new version

    1. Go to Jenkins and click on the sockshop folder.
    2. Click on carts.performance and select the master branch.
    3. Click on Build Now to trigger the performance pipeline.
  5. Explore results in Jenkins

    1. After a successful pipeline execution, click on Performance Trend. This opens a trend analysis of the JMeter test results. In more detail, it shows charts for throughput, response time, and percentage of errors, as shown below.

      Performance trend

    2. Click on Performance Signature. There you get an overview of the last builds similar to the screenshot below.

      Last builds in Jenkins

    3. Click on the Build No of one particular build and click on Performance Signature. This opens a detailed view of the performance validation for the selected build.

      Performance signature
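
The property change from item 2 above (Introduce a slowdown in the carts service) can also be scripted. The one-liner below is only a sketch using GNU sed syntax; it assumes the property appears on its own line as delayInMillis=0, and editing the file by hand as described works just as well.

    # illustrative only: set the artificial delay of the carts service to 1000 ms
    $ cd ~/keptn/repositories/carts/
    $ sed -i 's/delayInMillis=0/delayInMillis=1000/' ./src/main/resources/application.properties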

Step 3: Compare builds in Dynatrace

In this step you will leverage Dynatrace to identify the difference between two performance tests. Just a couple of clicks can tell you why one build was slower than another.

  1. Open Dynatrace from Jenkins Pipeline

    1. In the Performance Signature for a selected build, click on open in Dynatrace. (This opens the correct timeframe.)
    2. Go to Diagnostic tools and click on Top web requests.
    3. (optional) Filter on a Management Zone.
  2. Narrow down the requests based on request attributes

    1. Click on Add filter.
    2. Create filter for: Request attribute > LTN.
    3. Click on Apply.
  3. Open comparison view

    1. Select the timeframe of a good build.
    2. Click on the … button and select Comparison as shown below:
      Compare builds
  4. Compare response time hotspots

    1. Select the timeframe of the bad build by selecting compare with: custom time frame

    2. Click on Compare response time hotspots.

      Compare hotspots

    3. There you can see that the Active wait time increased.

      Comparison overview

    4. Click on View method hotspots to identify the root cause.

      Method hotspot

Step 4: Cleanup

In this step you will clean up the application.properties file and rebuild the artifact.

  1. Remove the slowdown in the carts service

    1. In the directory ~/keptn/repositories/carts/, open the file ./src/main/resources/application.properties
    2. Change the value of delayInMillis from 1000 back to 0 (a quick way to verify the revert is sketched at the end of this step)
    3. Commit and push the changes to your GitHub repository carts:
    $ git add .
    $ git commit -m "Set delay to 0"
    $ git push
    
  2. Build this new version

    1. Go to your Jenkins and click on the sockshop folder.
    2. Click on carts and select the master branch (or click on Scan Multibranch Pipeline Now).
    3. Click on Build Now to trigger the performance pipeline.
    4. Wait until the pipeline shows: Success.
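
To double-check the revert from the first item of this step (before triggering the build in the second item), you can run the check below. It is only a sketch; it assumes the property sits on its own line, and the output shown is illustrative.

    # confirm the artificial delay of the carts service is back to 0
    $ grep delayInMillis ~/keptn/repositories/carts/src/main/resources/application.properties
    delayInMillis=0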

Understanding what happened

In this use case, you triggered a performance test for the current version of the carts service. Then you changed its configuration to introduce a slowdown into the service. This change caused a second performance test execution to fail, and the failed test was then investigated further to identify the deviation from the prior service version.

With such a self-service model for performance testing in place, developers can trigger a performance validation on demand and get immediate feedback on their changes.