# Automated Performance Pipeline using Apache JMeter and AKS
As the Azure DevOps cloud-based load testing service from Microsoft has been deprecated, we evaluated the alternatives and settled on Apache JMeter with Azure Kubernetes Service (AKS) in a distributed architecture to carry out intensive load tests simulating hundreds to thousands of simultaneous users.

We have also implemented an automated pipeline for running the performance tests using Apache JMeter and AKS, which is extended to simulate parallel load from multiple regions to reproduce a production scenario.
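To give a feel for the multi-region idea, the sketch below shows how a pipeline can fan out into one job per region, each driving JMeter against an AKS cluster deployed in that region. This is a minimal illustration only; the job names, the second-region variable (AKSClusterNameRegion2), and the echo placeholders are assumptions, not the exact contents of the repository's pipeline.

```yaml
# Minimal illustration: one job per region, running in parallel (subject to
# available parallel agents), each targeting an AKS cluster in that region.
trigger: none

pool:
  vmImage: 'ubuntu-latest'

jobs:
  - job: LoadTest_Region1
    steps:
      - script: echo "Run JMeter against $(AKSClusterNameRegion1)"   # placeholder for the real test steps
        displayName: 'Run load test in region 1'

  - job: LoadTest_Region2
    steps:
      - script: echo "Run JMeter against $(AKSClusterNameRegion2)"   # AKSClusterNameRegion2 is a hypothetical second-region variable
        displayName: 'Run load test in region 2'
```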
To set up the pipeline:

- Create the test suite with the help of [how to set up a JMeter test plan](https://jmeter.apache.org/usermanual/build-web-test-plan.html).
- Check in the JMX file and supporting files to a repository.
- Create an AKS cluster with the help of [how to create an AKS cluster](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal).
- Provide access to a service principal (SPN), which is used to run the JMX file in the cluster (see the sketch after this list).
- Fork the YAML pipeline from the repository: [JMeterAKSLoadTest](https://github.com/microsoft/JMeterAKSLoadTest.git).
- Add the JMX and supporting files inside the JMeterFiles folder.
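As a rough sketch of how the cluster and service principal fit together, a pipeline step could log in with the SPN and fetch the cluster credentials before the JMeter pods are created. The variable names match the ones documented below; everything else (the step layout, the trailing kubectl call) is illustrative and may differ from the scripts in the repository.

```yaml
# Sketch only: authenticate with the service principal and point kubectl at the
# regional AKS cluster before running the distributed JMeter test.
steps:
  - script: |
      # Log in with the service principal that was granted access to the cluster
      az login --service-principal \
        --username "$(AKSSPNClientIdRegion1)" \
        --password "$(AKSSPNClientSecretRegion1)" \
        --tenant "$(TenantId)"

      # Fetch kubeconfig credentials for the regional AKS cluster
      az aks get-credentials \
        --resource-group "$(AKSResourceGroupRegion1)" \
        --name "$(AKSClusterNameRegion1)" \
        --overwrite-existing

      # Sanity check; the actual pipeline would go on to create the JMeter
      # pods and copy in the JMX and CSV files (e.g. kubectl apply / kubectl cp)
      kubectl get nodes
    displayName: 'Connect to the regional AKS cluster'
```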
Overview of the variable setup:

The JMX file uses the variables below, which can be supplied from the variable group or from pipeline variables according to the setup (see the sketch after this list for one way to pass them into the test plan):
- PerfTestResourceId – Resource ID for the API authentication
- PerfTestClientId – Client ID for the API authentication
- PerfTestClientSecret – Client secret for the API authentication
- JmeterFolderPath – JMX file folder path
- JmeterFileName – JMX file name
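One common way to feed these values into the test plan is to pass them as JMeter properties with `-J` on the command line and read them inside the JMX with `${__P(PerfTestClientId)}` and so on. The fragment below is a sketch of that idea as a plain script step; in the actual setup the JMeter command runs inside the pods on AKS, and the repository's scripts may wrap it differently.

```yaml
# Sketch: passing pipeline variables into the JMX as JMeter properties.
# Inside the test plan these are read with ${__P(PerfTestResourceId)}, etc.
steps:
  - script: |
      jmeter -n \
        -t "$(JmeterFolderPath)/$(JmeterFileName)" \
        -JPerfTestResourceId="$(PerfTestResourceId)" \
        -JPerfTestClientId="$(PerfTestClientId)" \
        -JPerfTestClientSecret="$(PerfTestClientSecret)" \
        -l results.jtl -e -o report
    displayName: 'Run JMeter in non-GUI mode (illustrative)'
```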
AKS setup-related variables:
- AKSClusterNameRegion1 – Cluster name for the respective region
- AKSResourceGroupRegion1 – Resource group name of the cluster for the region
- AKSSPNClientIdRegion1 – Service principal client ID for the region
- AKSSPNClientSecretRegion1 – Service principal client secret for the region
- TenantId – Tenant ID
- CSVFileNames – Comma-separated list of supporting file names used in the execution, for example “users.csv,ids.csv”
Set up the variable group linked from Key Vault.
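For reference, a variables section in the forked pipeline could look like the sketch below. The group name jmeter-loadtest-secrets is a placeholder for whatever Key Vault-linked variable group you create, and loadtest.jmx is a placeholder file name.

```yaml
# Illustrative variables section for the forked pipeline.
variables:
  - group: jmeter-loadtest-secrets   # placeholder name; Key Vault-linked group holding the secrets
  - name: JmeterFolderPath
    value: 'JMeterFiles'
  - name: JmeterFileName
    value: 'loadtest.jmx'            # placeholder JMX file name
  - name: CSVFileNames
    value: 'users.csv,ids.csv'
```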
The results of the execution are published as a pipeline artifact and can be downloaded. The index.html file holds the report of the run.
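Publishing the report typically comes down to a standard publish-artifacts step such as the sketch below; the exact task and paths used in the repository's pipeline may differ.

```yaml
# Sketch: publishing the generated JMeter HTML report as a pipeline artifact.
# 'report' is assumed to be the -o output folder from the JMeter run above and
# contains index.html with the run report.
steps:
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: 'report'
      ArtifactName: 'JMeterResults'
    displayName: 'Publish JMeter report'
```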
Key benefits:

- With minimal cost you can simulate parallel load from different regions to replicate the production scenario.
- Because the Loops, Threads, and Ramp-up time variables are all configured through pipeline variables, you can run the test suite with minimal changes.
- Once the setup is complete there is no dependency on any specific machine or user credential, so the tests can be run more frequently to understand the application's performance.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.