🚨 This project was based on Keptn Pitometer that was deprecated with Keptn version 0.6.0. Please reach out to the Keptn team using Slack, a GitHub issue, or any other way you prefer to learn about the transition from Pitometer to the new Lighthouse-Service. You can find new sandbox projects in the keptn-sandbox org 🚨
Command-line Node.js script that provides the processing logic for a passed-in "perf spec" and a start/end time frame. This script can be used as a software quality gate within continuous integration pipelines.

The "perf spec" processing logic uses the Keptn Pitometer Node.js modules. This command-line application uses these specific modules:
- pitometer - core module that processes the monspec for the request
- source-dynatrace - interfaces with the Dynatrace API to collect metrics
- grader-thresholds - evaluates the thresholds and scores the request
- Have an application to test that is configured with the Dynatrace OneAgent for monitoring. See these instructions for creating a Dynatrace free trial.
- Have a Dynatrace API token, which Pitometer requires. See these instructions for how to generate a Dynatrace API token.
- Install Node.js.
- option 1: set OS environment variables

  ```shell
  export DYNATRACE_BASEURL=<dynatrace tenant url, example: https://abc.live.dynatrace.com>
  export DYNATRACE_APITOKEN=<dynatrace API token>
  ```
- Start and Stop Times

  ```shell
  node pitometer.js -p [perfspec file] -s [Start Time] -e [End Time]
  ```
- Relative Time

  ```shell
  node pitometer.js -p [perfspec file] -r [Relative Time]
  ```
- Arguments
  - perfSpec file - a file in JSON format containing the performance signature. Example: `./samples/perfspec-sample.json`
  - start time - start time, in UTC Unix seconds format, used for the query
  - end time - end time, in UTC Unix seconds format, used for the query
  - relative time - possible values: `10mins`, `15mins`, `2hours`, `30mins`, `3days`, `5mins`, `6hours`, `day`, `hour`, `min`, `month`, `week`
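Since the CLI accepts only the fixed set of relative times listed above, a wrapper script can validate the value before invoking the tool. A minimal sketch (`is_valid_relative_time` is an illustrative helper, not part of the project; the accepted values are taken verbatim from the argument list above):

```shell
# Validate a -r value against the relative times the CLI accepts.
is_valid_relative_time() {
  case "$1" in
    10mins|15mins|2hours|30mins|3days|5mins|6hours|day|hour|min|month|week)
      return 0 ;;
    *)
      return 1 ;;
  esac
}
```

Usage: `is_valid_relative_time 30mins && node pitometer.js -p ./samples/perfspec-sample.json -r 30mins`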
Below is an example UNIX shell script that calls the CLI and parses the output using the `jq` JSON query utility:
```shell
#!/bin/bash

# Current time, and a start time 360 minutes earlier, in UTC Unix seconds
CURRENT_TIME=$(printf "%(%s)T")
START_TIME=$(( CURRENT_TIME - 360 * 60 ))

# Run the CLI; tail -n +5 skips the first 4 lines of console output
# so that only the JSON result is captured
node pitometer.js -s $START_TIME -e $CURRENT_TIME -p ./samples/perfspec-springmusic.json | tail -n +5 > pitometer_output.json

# Pretty-print the result, then extract the overall verdict
jq '.' pitometer_output.json
result="$(jq -r '.result' pitometer_output.json)"

if [ "$result" = "fail" ]; then
  echo "This build has failed based on pitometer evaluation"
  exit 1
else
  echo "This build has passed pitometer evaluation"
  exit 0
fi
```
Below is an example utilizing the pitometer-cli image from Docker Hub, which has been packaged as a standalone binary. Best practice dictates that perfspec files be stored alongside application code in the repo, so this command fetches the perfspec file from source control:
```shell
docker pull mvilliger/pitometer-cli
docker run --name pitometer-cli --rm -it mvilliger/pitometer-cli /bin/bash -c \
  'export DYNATRACE_BASEURL="https://<insert-your-dynatrace-url>" && \
   export DYNATRACE_APITOKEN=<insert-your-apitoken> && \
   export CURRENT_TIME=$(printf "%(%s)T") && \
   export START_TIME=$(( CURRENT_TIME - 60 * 60 )) && \
   wget -q -O perfspec.json https://<link to perfspec.json> && \
   pitometer-cli -s $START_TIME -e $CURRENT_TIME -p $PWD/perfspec.json | jq .'
```
See the [Pitometer documentation](https://keptn.github.io/pitometer/) for the latest information, but below is an overview.
- spec_version - string property with the pitometer version. Use `1.0`
- indicators - array of indicator objects
- objectives - object with `pass` and `warning` properties
Body Structure Format

```json
{
  "spec_version": "1.0",
  "indicators": [
    { <Indicator object 1> }
  ],
  "objectives": {
    "pass": 100,
    "warning": 50
  }
}
```
A valid response will return an HTTP 200 with a JSON body containing these properties:
- totalScore - numeric property with the sum of the passing indicator metricScores
- objectives - object with the `pass` and `warning` properties passed in from the request
- indicatorResults - array of each indicator and its specific scores and values
- result - string property with a value of 'pass', 'warning', or 'fail'
Example response message
```json
{
  "totalScore": 60,
  "objectives": {
    "pass": 100,
    "warning": 50
  },
  "indicatorResults": [
    {
      "id": "P90_ResponseTime_Frontend",
      "violations": [
        {
          "value": 5824401.800000001,
          "key": "SERVICE-BAB018A09DA36B75",
          "breach": "upper_critical",
          "threshold": 4000000
        }
      ],
      "score": 20
    },
    {
      "id": "AVG_ResponseTime_Frontend",
      "violations": [
        {
          "value": 2476689.888888889,
          "key": "SERVICE-BAB018A09DA36B75",
          "breach": "upper_warning",
          "threshold": 2000000
        }
      ],
      "score": 40
    }
  ],
  "result": "warning"
}
```
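The example above illustrates how the verdict relates to the objectives: the totalScore of 60 falls below the pass objective (100) but at or above the warning objective (50), so the result is "warning". A minimal sketch of that grading rule follows; the `grade` function and its exact comparisons are assumptions inferred from the example response, not taken from the Pitometer source:

```shell
# Illustrative grading rule (assumed, not from the Pitometer source):
# compare a total score against the pass and warning objectives.
grade() {
  score=$1; pass=$2; warning=$3
  if [ "$score" -ge "$pass" ]; then
    echo "pass"
  elif [ "$score" -ge "$warning" ]; then
    echo "warning"
  else
    echo "fail"
  fi
}
```

Running `grade 60 100 50` prints `warning`, matching the example response above.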
An invalid request will return an HTTP 400 with a JSON body containing these properties:
- result - string property with a value of 'error'
- message - string property with the error message
Example response message
```json
{
  "result": "error",
  "message": "Missing timeStart. Please check your request body and try again."
}
```
A processing error will return an HTTP 500 with a JSON body containing these properties:
- result - string property with a value of 'error'
- message - string property with the error message
Example response message
```json
{
  "result": "error",
  "message": "The given timeseries id is not configured."
}
```
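In a pipeline it helps to handle the 'error' result alongside 'pass', 'warning', and 'fail'. A hedged sketch of such a mapping (`ci_exit_code` is a hypothetical helper, not part of the CLI; extract the result first, e.g. with `jq -r '.result'` as in the earlier shell example, where only "fail" breaks the build):

```shell
# Map a pitometer "result" value to a CI exit code (illustrative helper).
ci_exit_code() {
  case "$1" in
    pass|warning) echo 0 ;;  # build proceeds, possibly with a warning
    fail)         echo 1 ;;  # quality gate breached
    *)            echo 2 ;;  # "error" or anything unexpected: fix the request
  esac
}
```

Usage: `exit "$(ci_exit_code "$result")"` at the end of a gate script.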
- You must have Node.js installed locally.
- Once you clone the repo, run `npm install` to download the required modules.
- Configure these environment variables:
- option 1: set environment variables in the shell

  ```shell
  export DYNATRACE_BASEURL=<dynatrace tenant url, example: https://abc.live.dynatrace.com>
  export DYNATRACE_APITOKEN=<dynatrace API token>
  ```
- option 2: make a `.env` file in the root project folder with these values

  ```
  DYNATRACE_BASEURL=<dynatrace tenant url, example: https://abc.live.dynatrace.com>
  DYNATRACE_APITOKEN=<dynatrace API token>
  ```
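Whichever option you choose, a script can fail fast if either variable is missing before calling the CLI. A minimal sketch (`check_dynatrace_env` is an illustrative helper, not part of the project):

```shell
# Report which required Dynatrace variables are unset, or "ok" if both exist.
check_dynatrace_env() {
  missing=""
  [ -n "$DYNATRACE_BASEURL" ] || missing="$missing DYNATRACE_BASEURL"
  [ -n "$DYNATRACE_APITOKEN" ] || missing="$missing DYNATRACE_APITOKEN"
  if [ -z "$missing" ]; then
    echo "ok"
  else
    echo "missing:$missing"
  fi
}
```

Usage: `[ "$(check_dynatrace_env)" = "ok" ] || exit 1` at the top of a gate script.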