Transforms
Metric query grammar includes a wide variety of utility transforms. Transforms take time-series data as input, either the result of a metric query or of another transform. Transforms can also require constants as parameters. Constant values are specified with a # prefix.
In the syntax examples, the <time_series> token represents the result of either a metric query or another transform. All metric queries and transforms accept multiple inputs and produce multiple outputs; they return one or more time series in the result set, depending on the behavior of the specific transform.
Transforms | Description |
---|---|
ABOVE | Culls all input metrics whose set of data point values, when evaluated, are not above the numerical limit. |
ABSOLUTE | Converts the data point values to their corresponding absolute value. |
ALIAS | Transforms the name of one or more metrics. |
ALIASBYTAG | Transforms the display name of one or more metrics, setting it to the value(s) of the provided tag(s). If no tag is provided, all available tags on the metric are used. |
ALIASBYREGEX | Transforms the display name of one or more metrics, setting it to the value extracted by the provided regex. |
ANOMALY_DENSITY | Calculates an anomaly score for each data point based on the probability density (PDF) of the data point value. |
ANOMALY_KMEANS | Calculates an anomaly score for each data point using a K-Means clustering of the metric data. |
ANOMALY_RPCA | Calculates an anomaly score for each data point using RPCA for detecting anomalies in seasonal data. |
ANOMALY_STL | Performs a seasonal trend decomposition and returns the probability that each point is an anomaly based on the residual component. |
ANOMALY_ZSCORE | Calculates an anomaly score for each data point based on the z-score of the data point value. |
AVERAGE | Calculates the average of all values at each time stamp. |
BELOW | Culls all input metrics whose set of data point values are not below the numerical limit. |
CONSECUTIVE | Returns all data points that are consecutive. |
COUNT | Calculates a metric whose timestamps are the union of all input metric timestamps, with each value equal to the count of input metrics. |
CULL_ABOVE | Removes data points from metrics if the value is above a limit. |
CULL_BELOW | Removes data points from metrics if the value is below a limit. |
DERIVATIVE | Calculates the discrete time derivative. |
DEVIATION | Calculates the standard deviation per time stamp for a collection of input series, or the standard deviation for all data points for a single input series. |
DIFF | Calculates an arithmetic difference. |
DIFF_V | Calculates an arithmetic difference using a vector of subtrahends to be subtracted from each input time series. |
DIVIDE | Calculates a quotient. |
DIVIDE_V | Calculates a quotient using a vector of divisors to be divided into each time series. |
DOWNSAMPLE | Down samples one or more metrics. |
EXCLUDE | Culls metrics based on the matching of regular expression against scope, metric, tag keys and tag values. |
FILL | Creates additional data points to fill gaps. |
FILL_CALCULATE | Creates constant line based on the calculated value. |
GROUP | Calculates the union of all data points of time series that match the regular expression, with overlapping time periods calculated as the intersection of the time series. |
GROUPBY | Groups metrics together and performs operations on the grouped set of metrics. |
GROUPBYTAG | Groups different metrics together using the given tags and then executes the provided Transform on the grouped set of metrics. |
HIGHEST | Evaluates all input metrics based on an evaluation of the metric data point values, returning top metrics having the highest evaluated value. |
INCLUDE | Retains metrics based on the matching of a regular expression against scope, metric, tag keys and tag values. |
INTEGRAL | Calculates the discrete time integral. |
JOIN | Joins multiple lists of metrics into a single list. |
LIMIT | Returns a subset of input metrics in stable order from the head of the list, not to exceed the specified limit. |
LOG | Calculates the logarithm according to the specified base. |
LOWEST | Evaluates all input metrics based on an evaluation of the metric data point values, returning top metrics having the lowest evaluated value. |
MAX | For each timestamp in the input range, calculate the maximum value across all input metrics. |
MIN | For each timestamp in the input range, calculate the minimum value across all input metrics. |
MOVING | Evaluates input metrics using a moving window. |
NORMALIZE | Normalizes the data point values of time series. |
NORMALIZE_V | Normalizes the data point values of time series using a vector of unit normals to be applied to each input time series. |
PERCENTILE | Calculates the Nth percentile. |
PROPAGATE | Forward-fills gaps with the last known value at the start (earliest occurring time) of the gap. |
RANGE | Calculates the difference between the maximum and minimum values at each timestamp. |
RATE | Calculates the rate of change. |
ROUNDING | Rounds the value at each timestamp to a mathematical integer. |
SCALE | Calculates a product. |
SCALE_V | Calculates a product using a vector of multipliers to be multiplied into each input time series. |
SHIFT | Shifts the timestamp for each data point by the specified constant. |
SLICE | Removes data points from metrics if the timestamp is outside of start time and end time. |
SMOOTHEN | Smooths noisy metrics and reveals trends. |
SORT | Sorts a list of metrics. |
SUM | Calculates an arithmetic sum. |
SUM_V | Calculates an arithmetic sum using a vector of addends to be summed with each input time series. |
UNION | Performs the union of all data points for the given time series. |
Culls all input metrics whose set of data point values, when evaluated, are not above the numerical limit. Type indicates the type of evaluation to perform and must be one of 'avg', 'min', 'max', 'recent', or a percentile such as 'p50' or 'p60'. All evaluation types consider all data points for a metric, with the exception of 'recent', which only evaluates the most recent data point in the series. If the type is not specified, 'avg' is used.
ABOVE(<time_series>[,<time_series>]*,<limit>,<type>)
ABOVE(<time_series>[,<time_series>]*,<limit>)
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "0",
"1444392000000": "0",
"1444406400000": "0",
"1444420800000": "0",
"1444435200000": "0",
"1444449600000": "0"
}
}
Output:
[]
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}
Output:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "0",
"1444392000000": "0",
"1444406400000": "0",
"1444420800000": "0",
"1444435200000": "0",
"1444449600000": "1"
}
}
Output:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "0",
"1444392000000": "0",
"1444406400000": "0",
"1444420800000": "0",
"1444435200000": "0",
"1444449600000": "1"
}
}
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {host=host1},
"namespace": null,
"datapoints": {
"1444377000000": "1",
"1444377100000": "2",
"1444377200000": "3",
"1444377300000": "4",
"1444377400000": "5",
"1444377500000": "6",
"1444377600000": "7",
"1444377700000": "8",
"1444377800000": "9",
"1444377900000": "10"
}
},
{
"scope": "scope",
"metric": "metric",
"tags": {host=host2},
"namespace": null,
"datapoints": {
"1444377000000": "1",
"1444377100000": "1",
"1444377200000": "1",
"1444377300000": "1",
"1444377400000": "1",
"1444377500000": "1",
"1444377600000": "1",
"1444377700000": "1",
"1444377800000": "1",
"1444377900000": "1"
}
}]
Output:
{
"scope": "scope",
"metric": "metric",
"tags": {host=host1},
"namespace": null,
"datapoints": {
"1444377000000": "1",
"1444377100000": "2",
"1444377200000": "3",
"1444377300000": "4",
"1444377400000": "5",
"1444377500000": "6",
"1444377600000": "7",
"1444377700000": "8",
"1444377800000": "9",
"1444377900000": "10"
}
}
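The culling behavior above can be sketched in a few lines of Python. This is an illustrative sketch of the documented semantics, not the Argus implementation; the series representation mirrors the JSON examples.

```python
# Sketch of ABOVE's culling semantics (illustrative, not the Argus code).
# Each series is a dict with a "datapoints" map of {timestamp: value}.

def evaluate(datapoints, kind):
    """Reduce a {timestamp: value} map to a single number for comparison."""
    values = [float(v) for v in datapoints.values()]
    if kind == "avg":
        return sum(values) / len(values)
    if kind == "min":
        return min(values)
    if kind == "max":
        return max(values)
    if kind == "recent":  # only the most recent data point is considered
        return float(datapoints[max(datapoints, key=int)])
    raise ValueError("unsupported type: " + kind)

def above(series, limit, kind="avg"):
    """Keep only the series whose evaluated value exceeds the limit."""
    return [s for s in series if evaluate(s["datapoints"], kind) > limit]
```

With a limit of 0 and the default 'avg' type, the all-zero series from the first example is culled while the all-one series from the second example is retained.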
Converts the data point values to their corresponding absolute value.
ABSOLUTE(<time_series>[,<time_series>]*)
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "109.6666666666666",
"1444392000000": "114.3333333333334",
"1444406400000": "-37.5",
"1444420800000": "-34.5",
"1444435200000": "-15.5",
"1444449600000": "1.0"
}
}
Output:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "109.6666666666666",
"1444392000000": "114.3333333333334",
"1444406400000": "37.5",
"1444420800000": "34.5",
"1444435200000": "15.5",
"1444449600000": "1.0"
}
}
Transforms the name of one or more metrics.
ALIAS(<time_series>[,<time_series>]*, #new_metric#, #literal# [, #new_scope#, #literal#])
ALIAS(<time_series>[,<time_series>]*, #/old_pattern_metric/new_metric/#, #regex# [, #/old_pattern_scope/new_scope/#, #regex#])
Input | Output |
---|---|
ALIAS(-1d:scope:metric:avg:4h-avg, #new_metric#, #literal#) | -1d:scope:new_metric:avg:4h-avg (Changes the output metric name to new_metric) |
ALIAS(-1d:scope:metric:avg:4h-avg, #/metric/new_metric/#, #regex#) | -1d:scope:new_metric:avg:4h-avg (Replaces “metric” in input metric name with “new_metric” in output metric name) |
ALIAS(-1d:scope:old_metric:avg:4h-avg, #/old/new/#, #regex#) | -1d:scope:new_metric:avg:4h-avg (Replaces "old" in input metric name with "new" in output metric name) |
ALIAS(-1d:scope:metric:avg:4h-avg, #new_metric#, #literal#, #new_scope#, #literal#) | -1d:new_scope:new_metric:avg:4h-avg (Changes the output metric name to new_metric and output scope to new_scope) |
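The #regex# mode behaves like a pattern substitution on the metric name. A rough Python equivalent (an assumed sketch of the #/old/new/# form, not the Argus implementation):

```python
import re

# Sketch of ALIAS's #regex# mode (illustrative, not the Argus code):
# a spec of the form "/old_pattern/replacement/" is applied to the
# metric name as a regex substitution.

def alias_regex(metric_name, spec):
    """spec is the '/old_pattern/replacement/' string from the #regex# mode."""
    _, pattern, replacement, _ = spec.split("/")
    return re.sub(pattern, replacement, metric_name)
```

For example, applying the spec `/old/new/` to the metric name `old_metric` yields `new_metric`, matching the third row of the table above.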
Transforms the display name of one or more metrics using the provided tags.
ALIASBYTAG(<time_series>[,<time_series>]*)
ALIASBYTAG(<time_series>[,<time_series>]*, #tagk#)
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server1",
"source": "collectd"
},
"namespace": null,
"displayName": null,
"datapoints": {
...
}
},
{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server2",
"source": "collectd"
},
"namespace": null,
"displayName": null,
"datapoints": {
...
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server1",
"source": "collectd"
},
"namespace": null,
"displayName": "argus-server1,collectd",
"datapoints": {
...
}
},
{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server2",
"source": "collectd"
},
"namespace": null,
"displayName": "argus-server2,collectd",
"datapoints": {
...
}
}]
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server1",
"source": "collectd"
},
"namespace": null,
"displayName": null,
"datapoints": {
...
}
},
{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server2",
"source": "collectd"
},
"namespace": null,
"displayName": null,
"datapoints": {
...
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server1",
"source": "collectd"
},
"namespace": null,
"displayName": "argus-server1",
"datapoints": {
...
}
},
{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server2",
"source": "collectd"
},
"namespace": null,
"displayName": "argus-server2",
"datapoints": {
...
}
}]
Transforms the display name of one or more metrics using the provided regex.
ALIASBYREGEX(<time_series>[,<time_series>]*, #regex#)
Example 1 (Extract part of the device name): ALIASBYREGEX(-1d:scope:metric{device=*}:sum,#device=(\S*).domain.net#)
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server1.domain.net"
},
"namespace": null,
"displayName": null,
"datapoints": {
...
}
},
{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server2.domain.net"
},
"namespace": null,
"displayName": null,
"datapoints": {
...
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server1.domain.net"
},
"namespace": null,
"displayName": "argus-server1",
"datapoints": {
...
}
},
{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server2.domain.net"
},
"namespace": null,
"displayName": "argus-server2",
"datapoints": {
...
}
}]
Example 2 (Extract all tags; tags exist between '{' and '}'): ALIASBYREGEX(-1d:scope:metric{device=*,source=*}:sum, #\{(.*)\}#)
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server1",
"source": "collectd"
},
"namespace": null,
"displayName": null,
"datapoints": {
...
}
},
{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server2",
"source": "collectd"
},
"namespace": null,
"displayName": null,
"datapoints": {
...
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server1",
"source": "collectd"
},
"namespace": null,
"displayName": "argus-server1,source=collectd",
"datapoints": {
...
}
},
{
"scope": "scope",
"metric": "metric",
"tags": {
"device": "argus-server2",
"source": "collectd"
},
"namespace": null,
"displayName": "argus-server2,source=collectd",
"datapoints": {
...
}
}]
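The extraction behavior can be sketched in Python. This is an assumed illustration of the semantics shown in the examples, not the Argus implementation: the regex is matched against the metric's identifier string, and the first capture group (or the whole match) becomes the display name.

```python
import re

# Sketch of ALIASBYREGEX's behavior (illustrative, not the Argus code):
# match the regex against the metric identifier and use the first capture
# group (or the whole match) as the display name.

def alias_by_regex(identifier, pattern):
    m = re.search(pattern, identifier)
    if m is None:
        return identifier  # assumed fallback when nothing matches
    return m.group(1) if m.groups() else m.group(0)
```

Applied to `device=argus-server1.domain.net` with the pattern from Example 1, this extracts `argus-server1`.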
Calculates an anomaly score for each data point based on the probability density (PDF) of the data point value. An anomaly score is a value between 0 - 100 that reflects how likely a data point is an anomaly relative to other data points in the metric - a higher score indicates that the point is more likely to be an anomaly.
Probability density is a measure of the relative likelihood of a value occurring in a distribution. This transform finds the probability density of each data point and converts the value to a more meaningful anomaly score.
This transform supports contextual anomaly detection, which is useful for detecting anomalies within periods of data instead of against the entire distribution (e.g. "Is this point an anomaly compared to data from the past week?"). Given a value for #context#, the transform evaluates each data point against the data points in the context period before it. For example, if #context# has a value of #7d#, each data point is evaluated against data from the 7 days before it to determine its anomaly score.
Important: The probability density technique assumes that the underlying data is normally distributed. For best results, this transform should be applied to metrics with an approximately normal distribution.
ANOMALY_DENSITY(<time_series>)
ANOMALY_DENSITY(<time_series>, #context#)
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1000": "84",
"2000": "21",
"3000": "904",
"4000": "485",
"5000": "38",
"6000": "85408",
"7000": "283497",
"8000": "43"
}
}
Output:
{
"scope": "ANOMALY_DENSITY",
"metric": "probability density (neg. log)",
"tags": {},
"namespace": null,
"datapoints": {
"1000": "1.111274199787585",
"2000": "1.1219238424590996",
"3000": "0.9739827941095587",
"4000": "1.0438283521764644",
"5000": "1.1190487004992344",
"6000": "0.0",
"7000": "100.0",
"8000": "1.118203271501305"
}
}
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"0": "0.64",
"151200": "-1.13",
"302400": "0.00",
"453600": "0.90",
"604800": "-0.96",
"756000": "-0.52",
"907200": "0.24",
"1058400": "-0.01",
"1209600": "0.53",
"1360800": "-0.34",
"1512000": "1.11",
"1663200": "-0.21",
"1814400": "0.54"
}
}
Output:
{
"scope": "ANOMALY_DENSITY",
"metric": "probability density (neg. log)",
"tags": {},
"namespace": null,
"datapoints": {
"0": "0.0",
"151200": "0.0",
"302400": "0.0",
"453600": "0.0",
"604800": "0.0",
"756000": "0.0",
"907200": "9.67824967824967",
"1058400": "0.0",
"1209600": "67.34372588362405",
"1360800": "33.82936507936509",
"1512000": "100.0",
"1663200": "17.429426860564593",
"1814400": "0.7294429708222699"
}
}
Calculates an anomaly score for each data point using a K-Means clustering of the metric data with #k# clusters. An anomaly score is a value between 0 and 100 that reflects how likely a data point is to be an anomaly relative to other data points in the metric - a higher score indicates that the point is more likely to be an anomaly.
K-Means is an unsupervised clustering algorithm that groups data into clusters based on similarities between the data points. Given a dataset and an input k, the algorithm splits the dataset into k clusters with k "cluster centroids". After the clusters have been identified, the anomaly scores are assigned by calculating the relative distance of each data point to its cluster centroid.
K-Means is a non-parametric algorithm, so it works well even when the data is not normally distributed. This transform is useful for detecting anomalies in clusters of data, where the number of clusters is known. It minimizes false positives by detecting actual anomalies instead of natural variations in clusters of data.
ANOMALY_KMEANS(<time_series>, #k#)
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"0": "-300",
"1000": "-200",
"2000": "-100",
"3000": "-6",
"4000": "-4",
"5000": "0",
"6000": "4",
"7000": "6",
"8000": "100",
"9000": "200",
"10000": "300",
"11000": "400",
"12000": "500"
}
}
Output:
{
"scope": "ANOMALY_KMEANS",
"metric": "K-means anomaly score",
"tags": {},
"namespace": null,
"datapoints": {
"0": "31.428571428571427",
"1000": "31.428571428571427",
"2000": "100.0",
"3000": "6.0",
"4000": "4.0",
"5000": "0.0",
"6000": "4.0",
"7000": "6.0",
"8000": "100.0",
"9000": "47.14285714285714",
"10000": "15.714285714285714",
"11000": "15.714285714285714",
"12000": "47.14285714285714"
}
}
Performs a seasonal trend decomposition and returns the probability that each point is an anomaly based on the residual component. With no options supplied, the default season is the entire time series and the result is the anomaly score. If specified, the season size is given as a fraction of a single calendar year; for example, a season of a single day is specified as #365# and a season of one week as #52#. Specifying #anomalyScore# results in the normalized anomaly score, whereas #resid# results in the corresponding residuals.
ANOMALY_STL(<time_series>)
ANOMALY_STL(<time_series>, #seasonsize#, #resid#)
ANOMALY_STL(<time_series>, #seasonsize#, #anomalyScore#)
Calculates an anomaly score for each data point using Robust Principal Component Analysis (RPCA) for detecting anomalies in seasonal data. An anomaly score is a value between 0 - 100 that reflects how likely a data point is an anomaly relative to other data points in the metric - a higher score indicates that the point is more likely to be an anomaly.
RPCA is a dimensionality reduction technique that identifies a low rank approximation of the data, random noise, and a set of outliers. It produces a noise vector, which is used to assign anomaly scores to data points. More details here: http://techblog.netflix.com/2015/02/rad-outlier-detection-on-big-data.html.
RPCA is particularly good at detecting anomalies in seasonal data. The constant #length_of_season# describes the length of a season in the metric, so that the algorithm can ignore seasonal variability and identify actual anomalies.
ANOMALY_RPCA(<time_series>, #length_of_season#)
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1462665600000": "300.0",
"1462752000000": "2800.0",
"1462838400000": "1000.0",
"1462924800000": "3000.0",
"1463011200000": "2900.0",
"1463097600000": "2700.0",
"1463184000000": "500.0",
"1463270400000": "300.0",
"1463356800000": "2900.0",
"1463443200000": "2900.0",
"1463529600000": "2800.0",
"1463616000000": "3000.0",
"1463702400000": "2900.0",
"1463788800000": "600.0",
"1463875200000": "300.0",
"1463961600000": "2700.0",
"1464048000000": "2800.0",
"1464134400000": "2900.0",
"1464220800000": "4000.0",
"1464307200000": "2600.0",
"1464393600000": "400.0"
}
}
Output:
{
"scope": "ANOMALY_RPCA",
"metric": "RPCA anomaly score",
"tags": {},
"namespace": null,
"datapoints": {
"1462665600000": "78.15686673970755",
"1462752000000": "20.893463744440293",
"1462838400000": "95.79447049928406",
"1462924800000": "35.210383029988044",
"1463011200000": "7.388288563808816",
"1463097600000": "14.409341147976244",
"1463184000000": "67.68125851759875",
"1463270400000": "75.27486231827",
"1463356800000": "29.13843568055999",
"1463443200000": "51.8022986312567",
"1463529600000": "15.279421414880636",
"1463616000000": "14.767366921524191",
"1463702400000": "32.09938048963673",
"1463788800000": "55.77310727738868",
"1463875200000": "63.29754571980628",
"1463961600000": "5.926546216570088",
"1464048000000": "41.36741326689364",
"1464134400000": "19.39190936616735",
"1464220800000": "100.0",
"1464307200000": "0.0",
"1464393600000": "63.76119695964706"
}
}
Calculates an anomaly score for each data point based on the z-score of the data point value. An anomaly score is a value between 0 - 100 that reflects how likely a data point is an anomaly relative to other data points in the metric - a higher score indicates that the point is more likely to be an anomaly.
Z-score measures the number of standard deviations a value is away from the mean of the distribution. This transform finds the z-score of each data point and converts the value to a more meaningful anomaly score.
This transform supports contextual anomaly detection, which is useful for detecting anomalies within periods of data instead of against the entire distribution (e.g. "Is this point an anomaly compared to data from the past week?"). Given a value for #context#, the transform evaluates each data point against the data points in the context period before it. For example, if #context# has a value of #7d#, each data point is evaluated against data from the 7 days before it to determine its anomaly score.
Important: The z-score technique assumes that the underlying data is normally distributed. For best results, this transform should be applied to metrics with an approximately normal distribution.
ANOMALY_ZSCORE(<time_series>)
ANOMALY_ZSCORE(<time_series>, #context#)
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1000": "84",
"2000": "21",
"3000": "904",
"4000": "485",
"5000": "38",
"6000": "85408",
"7000": "283497",
"8000": "43"
}
}
Output:
{
"scope": "ANOMALY_ZSCORE",
"metric": "z-score (abs value)",
"tags": {},
"namespace": null,
"datapoints": {
"1000": "3.598382545219573",
"2000": "3.630186431351565",
"3000": "3.184427201914291",
"4000": "3.3959482858715013",
"5000": "3.621604430331821",
"6000": "0.0",
"7000": "100.0",
"8000": "3.6190803123848374"
}
}
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"0": "0.64",
"151200": "-1.13",
"302400": "0.00",
"453600": "0.90",
"604800": "-0.96",
"756000": "-0.52",
"907200": "0.24",
"1058400": "-0.01",
"1209600": "0.53",
"1360800": "-0.34",
"1512000": "1.11",
"1663200": "-0.21",
"1814400": "0.54"
}
}
Output:
{
"scope": "ANOMALY_ZSCORE",
"metric": "z-score (abs value)",
"tags": {},
"namespace": null,
"datapoints": {
"0": "0.0",
"151200": "0.0",
"302400": "0.0",
"453600": "0.0",
"604800": "0.0",
"756000": "0.0",
"907200": "26.666666666666664",
"1058400": "0.0",
"1209600": "79.17888563049853",
"1360800": "57.40740740740741",
"1512000": "100.0",
"1663200": "29.940119760479035",
"1814400": "1.7241379310344869"
}
}
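The z-score variant can be sketched the same way (an assumed illustration, not the exact Argus algorithm): compute the absolute z-score of each point against the series mean and standard deviation, then min-max scale to a 0-100 anomaly score.

```python
# Assumed sketch of z-score anomaly scoring (not the exact Argus
# algorithm): |z| measures how many standard deviations a point lies
# from the mean; scores are then min-max scaled to 0-100.

def anomaly_zscore(datapoints):
    values = {t: float(v) for t, v in datapoints.items()}
    n = len(values)
    mean = sum(values.values()) / n
    std = (sum((v - mean) ** 2 for v in values.values()) / n) ** 0.5 or 1.0
    raw = {t: abs(v - mean) / std for t, v in values.items()}
    lo, hi = min(raw.values()), max(raw.values())
    span = (hi - lo) or 1.0
    return {t: 100.0 * (z - lo) / span for t, z in raw.items()}
```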
Calculates the average of all values at each timestamp.
AVERAGE(<time_series>[,<time_series>]*)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "2",
"1444406400000": "2",
"1444420800000": "2",
"1444435200000": "2",
"1444449600000": "2"
}
}]
Output:
{
"scope": "scope",
"metric": "result",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1.5",
"1444392000000": "1.5",
"1444406400000": "1.5",
"1444420800000": "1.5",
"1444435200000": "1.5",
"1444449600000": "1.5"
}
}
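The per-timestamp averaging above can be sketched in Python (an illustrative sketch, not the Argus implementation; it assumes timestamps common to all input series):

```python
# Sketch of AVERAGE's per-timestamp semantics (illustrative, not the
# Argus code): for each timestamp shared by all input series, emit the
# mean of the values at that timestamp.

def average(series):
    common = set.intersection(*(set(s["datapoints"]) for s in series))
    return {
        t: sum(float(s["datapoints"][t]) for s in series) / len(series)
        for t in sorted(common, key=int)
    }
```

Applied to the two series above (all 1s and all 2s), every timestamp maps to 1.5.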
Culls all input metrics whose set of data point values, when evaluated, are not below the numerical limit. Type indicates the type of evaluation to perform and must be one of 'avg', 'min', 'max', or 'recent'. All evaluation types consider all data points for a metric, with the exception of 'recent', which only evaluates the most recent data point in the series. If the type is not specified, 'avg' is used.
BELOW(<time_series>[,<time_series>]*,<limit>,<type>)
BELOW(<time_series>[,<time_series>]*,<limit>)
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "0",
"1444392000000": "0",
"1444406400000": "0",
"1444420800000": "0",
"1444435200000": "0",
"1444449600000": "0"
}
}
Output:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "0",
"1444392000000": "0",
"1444406400000": "0",
"1444420800000": "0",
"1444435200000": "0",
"1444449600000": "0"
}
}
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}
Output:
[]
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "0"
}
}
Output:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "0"
}
}
Returns all data points that are consecutive. The first interval parameter defines the threshold of consecutiveness to collect: any run of continuous data points whose connected time window is larger than this threshold is collected. The second interval parameter is the default data point density of the time series.
CONSECUTIVE(<time_series>,#interval#,#interval#)
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1400000000000": "1",
"1400000003000": "2",
"1400000004000": "3",
"1400000005000": "4",
"1400000008000": "5",
"1400000009000": "6"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1400000003000": "2",
"1400000004000": "3",
"1400000005000": "4"
}
}]
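The run-selection logic can be sketched in Python. This is an assumed illustration of the behavior shown in the example (with a 2-second threshold and a 1-second density), not the Argus implementation:

```python
# Sketch of CONSECUTIVE's semantics (illustrative, not the Argus code):
# 'density' is the expected spacing between points in milliseconds; a run
# of consecutively spaced points is kept only if it spans at least
# 'threshold' milliseconds.

def consecutive(datapoints, threshold, density):
    if not datapoints:
        return {}
    times = sorted(datapoints, key=int)
    runs, run = [], [times[0]]
    for t in times[1:]:
        if int(t) - int(run[-1]) == density:
            run.append(t)  # continues the current consecutive run
        else:
            runs.append(run)
            run = [t]
    runs.append(run)
    kept = {}
    for run in runs:
        if int(run[-1]) - int(run[0]) >= threshold:
            for t in run:
                kept[t] = datapoints[t]
    return kept
```

On the example input, the run at seconds 3-5 spans 2 seconds and is kept, while the isolated first point and the 1-second run at seconds 8-9 are dropped.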
Calculates a metric having a set of timestamps that are the union of all input metric timestamp values. Each timestamp value is the constant value of the count of input metrics.
COUNT(<time_series>[,<time_series>]*)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "7",
"1444392000000": "8",
"1444406400000": "9",
"1444420800000": "10",
"1444435200000": "11",
"1444449600000": "12"
}
}]
Output:
{
"scope": "COUNT",
"metric": "result",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "2",
"1444406400000": "2",
"1444420800000": "2",
"1444435200000": "2",
"1444449600000": "2"
}
}
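COUNT's semantics can be sketched in Python (an illustrative sketch, not the Argus implementation):

```python
# Sketch of COUNT's semantics (illustrative, not the Argus code): the
# result's timestamps are the union of all input timestamps, and every
# value is the constant count of input series.

def count(series):
    timestamps = set().union(*(s["datapoints"] for s in series))
    return {t: float(len(series)) for t in timestamps}
```

With the two input series above, every timestamp in the result maps to 2, as shown.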
Removes data points from metrics whose value is above a limit or percentile. CULL_ABOVE takes a type parameter. If the type is value, CULL_ABOVE removes data points whose value is above the limit. If the type is percentile, CULL_ABOVE removes data points whose value is above the limit-th percentile of the data points.
CULL_ABOVE(<time_series>[,<time_series>]*,<limit>,<type>)
(type can be either "value" or "percentile")
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
}
Output:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3"
}
}
Removes data points from metrics whose value is below a limit or percentile. CULL_BELOW takes a type parameter. If the type is value, CULL_BELOW removes data points whose value is below the limit. If the type is percentile, CULL_BELOW removes data points whose value is below the limit-th percentile of the data points.
CULL_BELOW(<time_series>[,<time_series>]*,<limit>,<type>)
(type can be either "value" or "percentile")
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
}]
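Both culling modes can be sketched in Python. This is an illustrative sketch, not the Argus implementation; the percentile cutoff uses a nearest-rank rule here, and Argus may compute percentiles differently.

```python
import math

# Sketch of CULL_BELOW (illustrative, not the Argus code). For the
# "percentile" type, the limit-th percentile of the series' own values
# is used as the cutoff (nearest-rank rule; an assumption).

def cull_below(datapoints, limit, kind="value"):
    values = sorted(float(v) for v in datapoints.values())
    if kind == "percentile":
        rank = max(1, math.ceil(limit / 100.0 * len(values)))
        cutoff = values[rank - 1]
    else:
        cutoff = limit
    return {t: v for t, v in datapoints.items() if float(v) >= cutoff}
```

With the input 1 through 6 and a limit of 3, the points below 3 are removed, matching the example above.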
Calculates the discrete time derivative. This transform has two variations:
DERIVATIVE(<time_series>[,<time_series>]*)
DERIVATIVE(<time_series>[,<time_series>]*,#expectedFrequency#)
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444420800000": "98.3340924629247",
"1444435200000": "105.48854166666666",
"1444449600000": "113.86979166666667",
"1444464000000": "129.76145833333334",
"1444478400000": "118.52708333333334",
"1444492800000": "126.11441532258064"
}
}
Output:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444420800000": null,
"1444435200000": "7.154449203741962",
"1444449600000": "8.381250000000009",
"1444464000000": "15.891666666666666",
"1444478400000": "-11.234375",
"1444492800000": "8.481530112044823"
}
}
The second variation takes a constant #expectedFrequency#, which denotes how often data is expected to be published. This variation calculates the derivative correctly even when data points are missing at some timestamps in the query results.
Input:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444420800000": "100",
"1444420860000": "200",
"1444420980000": "400",
}
}
Output:
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444420800000": null,
"1444420860000": "100",
"1444420980000": "100",
}
}
Calculates the standard deviation per timestamp for a collection of input series, or the standard deviation of all data points for a single input series. The required tolerance parameter is a decimal fraction between 0.0 and 1.0 that describes the allowed percentage of missing data before the operation is not performed. The optional points parameter is the number of points to evaluate, starting with the most recent. If specified, the points parameter always evaluates the deviation as a row operation on each input time series. If the points parameter is omitted, then for an input of a single time series, all the data points in the series are used for evaluation.
DEVIATION(<time_series>[,<time_series>]*,#tolerance#,#points#)
DEVIATION(<time_series>[,<time_series>]*,#tolerance#)
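For the single-series case, the first example below is reproduced by the sample standard deviation (n-1 denominator) over all points. This is a sketch of that one case only; the tolerance and row-operation handling are omitted.

```python
import statistics

def deviation(datapoints):
    """Sample standard deviation (n-1 denominator) over all points of a
    single series; reproduces the first DEVIATION example."""
    return statistics.stdev(datapoints.values())
```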
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444334400000": "182.36805555555554",
"1444348800000": "296.03333333333336",
"1444363200000": "282.7388888888889",
"1444377600000": "286.65277777777777",
"1444392000000": "242.75",
"1444406400000": "130.17277397219954"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444406400000": "67.04009541743302"
}
}]
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444334400000": "182.36805555555554",
"1444348800000": "296.03333333333336",
"1444363200000": "282.7388888888889",
"1444377600000": "286.65277777777777",
"1444392000000": "242.75",
"1444406400000": "130.17277397219954"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444334400000": "43768.333333333336",
"1444348800000": "71048.0",
"1444363200000": "67857.33333333333",
"1444377600000": "68796.66666666667",
"1444392000000": "58260.0",
"1444406400000": "26601.0"
}
}]
Output:
[{
"scope": "scope",
"metric": "result",
"tags": {},
"namespace": null,
"datapoints": {
"1444334400000": "30819.93161247807",
"1444348800000": "50029.19541228457",
"1444363200000": "47782.45396759747",
"1444377600000": "48443.89540001789",
"1444392000000": "41024.390900795224",
"1444406400000": "18717.701435141746"
}
}]
Calculates an arithmetic difference. If no subtrahend is provided, the data point values at each timestamp for all but the first metric are subtracted from the data point value of the first metric. If a subtrahend is provided as a value, it is subtracted from each data point in the set of input metrics. If the subtrahend is provided as the constant "UNION", the data point values at each timestamp for all but the first metric are subtracted from the data point value of the first metric, and all data points that do not share common timestamps are left as is in the first metric.
DIFF(<time_series>[,<time_series>]*,#subtrahend#)
DIFF(<time_series>[,<time_series>]*)
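A minimal sketch of the no-subtrahend, non-UNION behavior (assumed semantics: only timestamps common to all series produce output):

```python
def diff(series):
    """At each timestamp common to all series, subtract every series after
    the first from the first."""
    first, rest = series[0], series[1:]
    common = set(first).intersection(*rest) if rest else set(first)
    return {ts: first[ts] - sum(s[ts] for s in rest) for ts in sorted(common)}
```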
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "scope",
"metric": "result",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "0",
"1444392000000": "1",
"1444406400000": "2",
"1444420800000": "3",
"1444435200000": "4",
"1444449600000": "5"
}
}]
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "0",
"1444392000000": "1",
"1444406400000": "2",
"1444420800000": "3",
"1444435200000": "4",
"1444449600000": "5"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "0",
"1444392000000": "0",
"1444406400000": "0",
"1444420800000": "0",
"1444435200000": "0",
"1444449600000": "0"
}
}]
Calculates an arithmetic difference using a vector of subtrahends to be subtracted from each input time-series.
DIFF_V(<time_series>[,<time_series>]*,<subtrahend_time_series>)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "3",
"1444406400000": "4",
"1444420800000": "5",
"1444435200000": "6",
"1444449600000": "7"
}
},{
"scope": "scope",
"metric": "subtrahends",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "DIFF",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "0",
"1444392000000": "1",
"1444406400000": "2",
"1444420800000": "3",
"1444435200000": "4",
"1444449600000": "5"
}
},{
"scope": "DIFF",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
}]
Calculates a quotient. If no divisor is provided, the data point values at each timestamp for all but the first metric are divided into the data point value of the first metric. If a divisor is provided as a value, it is divided into each data point in the set of input metrics. If the divisor is provided as the constant 'UNION', the data point values at each timestamp for all but the first metric are divided into the data point value of the first metric, and all data points that do not share a common timestamp are left as is in the first metric.
DIVIDE(<time_series>[,<time_series>]*,#divisor#)
DIVIDE(<time_series>[,<time_series>]*)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "scope",
"metric": "result",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
}]
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "DIVIDE",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Calculates a quotient using a vector of divisors to be divided into each input time series.
DIVIDE_V(<time_series>[,<time_series>]*,<divisor_time_series>)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "3",
"1444406400000": "4",
"1444420800000": "5",
"1444435200000": "6",
"1444449600000": "7"
}
},{
"scope": "scope",
"metric": "divisors",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "DIVIDE_V",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "DIVIDE_V",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "3",
"1444406400000": "4",
"1444420800000": "5",
"1444435200000": "6",
"1444449600000": "7"
}
}]
Downsamples one or more metrics. The downsample start time is based on the first data point found in the returned metric result, unless you provide a start time. The downsampler expression supplied is the standard metric downsampler consisting of the aggregation function and period, for example, '1s-avg'. The following aggregation functions are allowed:
- avg
- min
- max
- sum
- count
- percentile
- deviation
DOWNSAMPLE(<time_series>[,<time_series>]*,#downsampler#)
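The bucketing can be sketched as follows for the "avg" aggregator: timestamps are grouped into interval-aligned windows and each window is averaged, which reproduces the 4h-avg example below (two 4-hour buckets over eight hourly points). Other aggregators would swap the reduction; this is an illustration, not the Argus code.

```python
from collections import defaultdict

def downsample_avg(datapoints, interval_ms):
    """Bucket timestamps into interval-aligned windows and average each
    bucket (the "avg" aggregator)."""
    buckets = defaultdict(list)
    for ts, v in datapoints.items():
        buckets[ts - ts % interval_ms].append(v)
    return {start: sum(vs) / len(vs) for start, vs in sorted(buckets.items())}
```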
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444507200000": "566.0",
"1444510800000": "541.0",
"1444514400000": "574.0",
"1444518000000": "694.0",
"1444521600000": "535.0",
"1444525200000": "522.0",
"1444528800000": "552.0",
"1444532400000": "653.0"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444507200000": "593.75",
"1444521600000": "565.5"
}
}]
This variant of the downsample transform lets you specify a default value to substitute if there is no data present in a particular interval.
DOWNSAMPLE(<time_series>[,<time_series>]*,#downsampler#,#defaultValue#)
In this example, we compute a 1h-avg of the last 8 hours of data, substituting 0 for any intervals in which data is missing.
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444507200000": "566.0",
"1444510800000": "541.0",
"1444514400000": "574.0",
"1444525200000": "522.0",
"1444528800000": "552.0",
"1444532400000": "653.0"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444507200000": "566.0",
"1444510800000": "541.0",
"1444514400000": "574.0",
"1444518000000": "0",
"1444521600000": "0",
"1444525200000": "522.0",
"1444528800000": "552.0",
"1444532400000": "653.0"
}
}]
By default, the downsampling transform tries to normalize the start and end times of each of the data points in the output to the specified downsampling interval. For example, if the downsampling constant is specified as #1h-sum#, the generated intervals will always start and end at hourly boundaries. This can at times lead to unpredictable results for the first and last downsampled intervals, depending on when the query was executed. Say, for example, the query was executed at 1:20PM with a look-back period of the last 4 hours and a downsampling constant of #1h-sum#. In this case, the last interval for which downsampling is performed is 1PM to 1:20PM, which only covers 20 minutes of data. Since we are using a 'sum' aggregator here, this may produce a much smaller value in the last interval compared to the intervals before it.
To prevent this problem, you can specify an additional constant for the downsample transform called #abs#. When this constant is specified, the intervals are not rounded to hourly boundaries as above. Instead, for the above example, the generated intervals are equi-width, namely (9:20AM - 10:20AM), (10:20AM - 11:20AM), (11:20AM - 12:20PM) and (12:20PM - 1:20PM). This gives more predictable results for the downsample transform. Note that you must always specify the defaultValue constant (in case data is missing in an interval) with this option.
The full syntax for this variation is below -
DOWNSAMPLE(<time_series>[,<time_series>]*,#downsampler#,#defaultValue#,#abs#)
Culls metrics based on the matching of a regular expression against scope, metric, tag keys and tag values.
EXCLUDE(<time_series>[,<time_series>]*,#regex#)
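The matching rule can be sketched as follows: a series is dropped if the regex matches any of its scope, metric, tag keys, or tag values. A minimal sketch, not the Argus implementation:

```python
import re

def exclude(series, pattern):
    """Drop series whose scope, metric, tag key, or tag value matches."""
    rx = re.compile(pattern)
    def matches(m):
        fields = [m["scope"], m["metric"], *m["tags"].keys(), *m["tags"].values()]
        return any(rx.search(f) for f in fields)
    return [m for m in series if not matches(m)]
```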
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "7",
"1444392000000": "8",
"1444406400000": "9",
"1444420800000": "10",
"1444435200000": "11",
"1444449600000": "12"
}
}]
Output:
[{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "7",
"1444392000000": "8",
"1444406400000": "9",
"1444420800000": "10",
"1444435200000": "11",
"1444449600000": "12"
}
}]
Creates additional data points to fill gaps. The interval parameter specifies the maximum gap allowed before inserting a new data point. The offset parameter specifies the offset applied to each data point after fill data points are generated. If, after the offset is applied, a fill data point coincides with an existing data point, the fill data point is discarded. The value parameter specifies the numeric value for generated data points.
FILL(<time_series>[,<time_series>]*,#interval#, #offset#, #value#)
The second form of the FILL function is used to generate a constant line. Rather than filling one or more time series, the start and end parameters define the time range to be filled. This form results in a single time series result.
FILL(#start#,#end#,#interval#, #offset#, #value#)
Example 1: FILL(-2d:-1d:scope:metricA:avg:4h-avg,#4h#,#0m#,#0#)
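The gap-filling in Example 1 can be sketched as walking the sorted timestamps and inserting the fill value at each interval step inside any gap wider than the interval. Offset handling is omitted for brevity; this is an illustration, not the Argus code.

```python
def fill(datapoints, interval_ms, value):
    """Insert `value` at each interval step inside any gap wider than the
    interval (offset handling omitted)."""
    out = dict(datapoints)
    times = sorted(datapoints)
    for prev, cur in zip(times, times[1:]):
        t = prev + interval_ms
        while t < cur:
            out[t] = value
            t += interval_ms
    return dict(sorted(out.items()))
```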
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444449600000": "6"
}
}]
Output:
{
"scope": "FILL",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "0",
"1444406400000": "0",
"1444420800000": "0",
"1444435200000": "0",
"1444449600000": "6"
}
}
Input:
[]
Output:
{
"scope": "FILL",
"metric": "result",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "100",
"1444392000000": "100",
"1444406400000": "100",
"1444420800000": "100",
"1444435200000": "100",
"1444449600000": "100"
}
}
Creates a constant line based on the calculated value. The interval parameter specifies the maximum gap allowed before inserting a new data point. The offset parameter specifies the offset applied to each data point after fill data points are generated. If, after the offset is applied, a fill data point coincides with an existing data point, the fill data point is discarded. The calculation type parameter specifies the numeric value for generated data points based on the function selected. Supported calculation types are min, max, dev, p1...1000 (percentile).
FILL_CALCULATE(<time_series>[,<time_series>]*, #type#, #interval#, #offset#)
FILL_CALCULATE(<time_series>[,<time_series>]*,#type#)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444449600000": "2",
"1444381600000": "3",
"1444482600000": "4",
"1444384600000": "5",
"1444486600000": "6",
"1444388600000": "7",
"1444490600000": "8",
"1444392600000": "9",
"1444494600000": "10"
}
}]
Output:
{
"scope": "FILL_CALCULATE",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "9",
"1444449600000": "9",
"1444381600000": "9",
"1444482600000": "9",
"1444384600000": "9",
"1444486600000": "9",
"1444388600000": "9",
"1444490600000": "9",
"1444392600000": "9",
"1444494600000": "9"
}
}
Calculates the union of all data points of time series which match the regular expression, having the time periods for which there is overlap calculated as the intersection of the time series. The resulting data points within the intersecting time period are selected as the first examined data point for the timestamp from the intersecting series. The type parameter must be one of 'inclusive' or 'exclusive'. If the 'inclusive' value is specified, only the series that match the expression are grouped. If the 'exclusive' value is specified, only the series not matching the expression are grouped. If the type parameter is unspecified, the 'inclusive' value is used.
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444392000000": "1",
"1444406400000": "2",
"1444420800000": "3",
}
}]
Output:
{
"scope": "GROUP",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "3",
}
}
Groups different metrics together using the provided regular expression (a Java capturing-group regular expression) and then performs the provided Transform on the grouped set of metrics. This transform takes a regular expression as a parameter, which is used to group metrics. It also takes another Transform as a parameter, which is used to operate on the grouped metrics. The Transform parameter is then followed by any constants needed to successfully perform the Transform.
GROUPBY(<time_series>[,<time_series>]*, #capturing_group_regex#, #TRANSFORM_T#, #constant_for_transformT#[,#constant_for_transformT#])
The above example returns 4 different time series, as shown below in the Input section. The capturing-group regex in the example captures anything that contains myhost followed by a number between 1 and 9. For the given 4 time series, this creates 2 different groups: one for myhost1 (the first and second metric) and another for myhost2 (the third and fourth metric). Once the time series are grouped, the SUM operation is performed on each group and the result for each group is returned. Hence, the Output section contains 2 time series, one for each group.
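The grouping-then-summing step can be sketched as keying each series by the text its host tag captures under the regex, then summing each group per timestamp. The `host` tag name and the pattern are taken from the example; this is an illustrative sketch, not the Argus implementation.

```python
import re
from collections import defaultdict

def group_by_sum(series, pattern):
    """GROUPBY followed by SUM: group series by the captured text of their
    host tag, then sum each group per timestamp."""
    groups = defaultdict(list)
    rx = re.compile(pattern)
    for m in series:
        match = rx.search(m["tags"]["host"])
        if match:
            groups[match.group(0)].append(m["datapoints"])
    out = []
    for key in sorted(groups):
        summed = defaultdict(float)
        for dps in groups[key]:
            for ts, v in dps.items():
                summed[ts] += v
        out.append({"group": key, "datapoints": dict(summed)})
    return out
```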
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {
host: "myhost1-1.mycompany.net"
},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
}
},{
"scope": "scope",
"metric": "metricA",
"tags": {
host: "myhost1-2.mycompany.net"
},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
}
},{
"scope": "scope",
"metric": "metricA",
"tags": {
host: "myhost2-1.mycompany.net"
},
"namespace": null,
"datapoints": {
"1444377600000": "10",
"1444392000000": "20",
"1444406400000": "30",
}
},{
"scope": "scope",
"metric": "metricA",
"tags": {
host: "myhost2-2.mycompany.net"
},
"namespace": null,
"datapoints": {
"1444377600000": "10",
"1444392000000": "20",
"1444406400000": "30",
}
}]
Output:
[{
"scope": "SUM",
"metric": "result",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "4",
"1444406400000": "6",
}
},{
"scope": "SUM",
"metric": "result",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "20",
"1444392000000": "40",
"1444406400000": "60",
}
}]
Groups different metrics together using the given tags and then executes the provided Transform on the grouped set of metrics. This transform takes a list of tags as parameters by which to group metrics. It takes a second Transform as a parameter to operate on the grouped metrics. The Transform parameter is followed by any constants needed to successfully perform the Transform.
GROUPBYTAG(<time_series>[,<time_series>]*, #tag1#[, #tag2#]*, #TRANSFORM_T#, #constant_for_transformT#[,#constant_for_transformT#])
Example: GROUPBYTAG(-2d:-1d:scope:metricA{host=*}:avg,#host#, #SUM#, #union#)
The capturing tag (host) in the above example captures any time-series metrics with matching "host" values and groups them under the capturing tag. Once the data points are grouped, the SUM operation is performed on each group and the results are returned.
This example is illustrated in the following time-series data, which shows input metrics, along with the output time series. Note the matching timestamps.
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {
host: "myhost1"
},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
}
},{
"scope": "scope",
"metric": "metricA",
"tags": {
host: "myhost2"
},
"namespace": null,
"datapoints": {
"1444377600000": "4",
"1444392000000": "5",
"1444406400000": "6",
}
},{
"scope": "scope",
"metric": "metricA",
"tags": {
host: "myhost1"
},
"namespace": null,
"datapoints": {
"1444377600000": "10",
"1444392000000": "20",
"1444406400000": "30",
}
},{
"scope": "scope",
"metric": "metricA",
"tags": {
host: "myhost2"
},
"namespace": null,
"datapoints": {
"1444377600000": "40",
"1444392000000": "50",
"1444406400000": "60",
}
}]
Output:
[{
"scope": "SUM",
"metric": "result",
"tags": {host: "myhost1"},
"namespace": null,
"datapoints": {
"1444377600000": "11",
"1444392000000": "22",
"1444406400000": "33",
}
},{
"scope": "SUM",
"metric": "result",
"tags": {host: "myhost2"},
"namespace": null,
"datapoints": {
"1444377600000": "44",
"1444392000000": "55",
"1444406400000": "66",
}
}]
Evaluates all input metrics based on an evaluation of the metric data point values, returning the top metrics having the highest evaluated value.
The limit parameter indicates the maximum number of time series to return.
The type parameter indicates the type of evaluation to perform and must be one of 'avg', 'min', 'max', or 'recent'. All evaluation types consider all data points for a metric, with the exception of 'recent', which only evaluates the most recent data point in the series. If null, the value defaults to 'average'.
HIGHEST(<time_series>[,<time_series>]*,#limit#,#type#)
HIGHEST(<time_series>[,<time_series>]*,#limit#)
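The 'recent' ranking can be sketched as sorting series by their most recent data point and keeping the top `limit`; the other evaluation types would swap the sort key. A sketch, not the Argus implementation:

```python
def highest_recent(series, limit):
    """Rank series by their most recent data point, keep the top `limit`."""
    def recent(m):
        dps = m["datapoints"]
        return dps[max(dps)]  # value at the latest timestamp
    return sorted(series, key=recent, reverse=True)[:limit]
```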
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "3",
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
}
},{
"scope": "scope",
"metric": "metricC",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "7",
}
}]
Output:
[{
"scope": "HIGHEST",
"metric": "metricC",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "7",
}
},{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "3",
}
}]
Retains metrics based on the matching of a regular expression against scope, metric, tag keys and tag values.
INCLUDE(<time_series>[,<time_series>]*,#regex#)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "7",
"1444392000000": "8",
"1444406400000": "9",
"1444420800000": "10",
"1444435200000": "11",
"1444449600000": "12"
}
}]
Output:
[{
"scope": "INCLUDE",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
}]
Calculates the discrete time integral.
INTEGRAL(<time_series>[,<time_series>]*)
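Per the example below, the discrete integral is a running sum of the data point values in timestamp order. A minimal sketch:

```python
from itertools import accumulate

def integral(datapoints):
    """Running sum of the values in timestamp order."""
    times = sorted(datapoints)
    return dict(zip(times, accumulate(datapoints[t] for t in times)))
```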
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444464000000": "519.0458333333333",
"1444478400000": "474.10833333333335",
"1444492800000": "600.1791666666667",
"1444507200000": "524.3465829846583",
"1444521600000": "499.70833333333337",
"1444536000000": "511.7277777777778"
}
}]
Output:
[{
"scope": "INTEGRAL",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444464000000": "519.0458333333333",
"1444478400000": "993.1541666666667",
"1444492800000": "1593.3333333333335",
"1444507200000": "2117.6799163179917",
"1444521600000": "2617.388249651325",
"1444536000000": "3129.128581143038"
}
}]
Joins multiple lists of metrics into a single list.
JOIN(<time_series>[,<time_series>]*)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
}],
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
}]
Output:
[{
"scope": "JOIN",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
}]
Returns a subset of the input metrics, in stable order from the head of the list, not to exceed the specified limit.
LIMIT(<time_series>[,<time_series>]*,#limit#)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "7",
"1444392000000": "8",
"1444406400000": "9",
"1444420800000": "10",
"1444435200000": "11",
"1444449600000": "12"
}
}]
Output:
{
"scope": "LIMIT",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
}
Calculates the logarithm according to the specified base.
LOG(<time_series>[,<time_series>]*,#base#)
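Each data point is replaced by its logarithm in the given base; the example below uses base 10 (log10(61) ≈ 1.7853). A one-line sketch:

```python
import math

def log_transform(datapoints, base):
    """Replace each value with its logarithm in the given base."""
    return {ts: math.log(v, base) for ts, v in datapoints.items()}
```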
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444608000000": "61.0",
"1444629600000": "61.0",
"1444651200000": "61.0",
"1444672800000": "61.0"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444608000000": "1.785329835010767",
"1444629600000": "1.785329835010767",
"1444651200000": "1.785329835010767",
"1444672800000": "1.785329835010767"
}
}]
Evaluates all input metrics based on an evaluation of the metric data point values, returning the top metrics having the lowest evaluated value.
The limit parameter indicates the maximum number of time series to return.
The type parameter indicates the type of evaluation to perform and must be one of the following: avg, min, max, or recent. All evaluation types consider all data points for a metric, with the exception of 'recent', which only evaluates the most recent data point in the series. If null, the value defaults to 'average'.
LOWEST(<time_series>[,<time_series>]*,#limit#,#type#)
LOWEST(<time_series>[,<time_series>]*,#limit#)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "3",
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
}
},{
"scope": "scope",
"metric": "metricC",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "7",
}
}]
Output:
[{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
}
},{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "3",
}
}]
For each timestamp in the input range, calculates the maximum value across all input metrics.
MAX(<time_series>[,<time_series>]*)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "6",
"1444392000000": "5",
"1444406400000": "4",
"1444420800000": "3",
"1444435200000": "2",
"1444449600000": "1"
}
}]
Output:
{
"scope": "MAX",
"metric": "result",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "6",
"1444392000000": "5",
"1444406400000": "4",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
}
For each timestamp in the input range, calculates the minimum value across all input metrics.
MIN(<time_series>[,<time_series>]*)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "6",
"1444392000000": "5",
"1444406400000": "4",
"1444420800000": "3",
"1444435200000": "2",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "MIN",
"metric": "result",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "3",
"1444435200000": "2",
"1444449600000": "1"
}
}]
Evaluates input metrics using a moving window. It takes two constants: interval (required) and type. The interval specifies the width of the evaluation window and is of the form '1h', '2s', etc. The type parameter specifies the type of moving aggregation to perform. Allowed values are 'avg', 'median', and 'sum'. If the type parameter is omitted, 'avg' is used.
MOVING(<time_series>[,<time_series>]*,#interval#, #type#)
MOVING(<time_series>[,<time_series>]*,#interval#)
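The windowing can be sketched as averaging all points inside the trailing window ending at each point, with the first point left null because it has no complete window. This matches the example below, where each output is the average of a point and its predecessor; it is an assumed reading of the semantics, not the Argus code.

```python
def moving_avg(datapoints, window_ms):
    """Average all points in the trailing window ending at each point; a
    point with no earlier neighbor in its window stays null."""
    times = sorted(datapoints)
    out = {}
    for t in times:
        window = [datapoints[u] for u in times if t - window_ms <= u <= t]
        out[t] = sum(window) / len(window) if len(window) > 1 else None
    return out
```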
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": "452.0",
"1444636800000": "466.0",
"1444651200000": "477.0",
"1444665600000": "680.0",
"1444680000000": "486.0",
"1444694400000": "287.0"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": null,
"1444636800000": "459.0",
"1444651200000": "471.5",
"1444665600000": "578.5",
"1444680000000": "583.0",
"1444694400000": "386.5"
}
}]
Normalizes the data point values of time series. If a normal constant is supplied, it is used as the unit normal; otherwise, the unit normal is the sum of data point values at each timestamp.
NORMALIZE(<time_series>[,<time_series>]*,#unitnormal#)
NORMALIZE(<time_series>[,<time_series>]*)
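Both forms can be sketched as follows: with a constant, every value is divided by it (452.0 with unit normal 10 becomes 45.2); without one, each value is divided by the sum across series at that timestamp, so two identical series each normalize to 0.5. A sketch, not the Argus implementation:

```python
def normalize(series, unit_normal=None):
    """With a constant, divide every value by it; without one, divide each
    value by the sum across series at that timestamp."""
    if unit_normal is None:
        sums = {}
        for m in series:
            for ts, v in m["datapoints"].items():
                sums[ts] = sums.get(ts, 0.0) + v
        scale = lambda ts, v: v / sums[ts]
    else:
        scale = lambda ts, v: v / unit_normal
    return [{**m, "datapoints": {ts: scale(ts, v) for ts, v in m["datapoints"].items()}}
            for m in series]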
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": "452.0",
"1444636800000": "466.0",
"1444651200000": "477.0",
"1444665600000": "680.0",
"1444680000000": "486.0",
"1444694400000": "287.0"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": "45.2",
"1444636800000": "46.6",
"1444651200000": "47.7",
"1444665600000": "68.0",
"1444680000000": "48.6",
"1444694400000": "28.7"
}
}]
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": "452.0",
"1444636800000": "466.0",
"1444651200000": "477.0",
"1444665600000": "680.0",
"1444680000000": "486.0",
"1444694400000": "287.0"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": "452.0",
"1444636800000": "466.0",
"1444651200000": "477.0",
"1444665600000": "680.0",
"1444680000000": "486.0",
"1444694400000": "287.0"
}
}]
Output:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": "0.5",
"1444636800000": "0.5",
"1444651200000": "0.5",
"1444665600000": "0.5",
"1444680000000": "0.5",
"1444694400000": "0.5"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": "0.5",
"1444636800000": "0.5",
"1444651200000": "0.5",
"1444665600000": "0.5",
"1444680000000": "0.5",
"1444694400000": "0.5"
}
}]
Normalizes the data point values of time-series using a vector of unit normals that is applied to each input time-series.
NORMALIZE_V(<time_series>[,<time_series>]*,<unitnormal_time_series>)
Vector Normalize Example: NORMALIZE_V(-1d:scope:metricA:avg,-1d:scope:metricB:avg,-1d:scope:normals:avg)
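The vector form divides each input series pointwise by the unit-normal series; roughly (a sketch, assuming timestamps absent from the normals series are dropped):

```python
def normalize_v(series_list, normals):
    """Divide each series pointwise by the unit-normal series."""
    return [{t: v / normals[t] for t, v in s.items() if t in normals}
            for s in series_list]
```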
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "3",
"1444406400000": "4",
"1444420800000": "5",
"1444435200000": "6",
"1444449600000": "7"
}
},{
"scope": "scope",
"metric": "normals",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "10",
"1444406400000": "100",
"1444420800000": "100",
"1444435200000": "10",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "0.2",
"1444406400000": "0.03",
"1444420800000": "0.04",
"1444435200000": "0.5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "0.3",
"1444406400000": "0.04",
"1444420800000": "0.05",
"1444435200000": "0.6",
"1444449600000": "7"
}
}]
Calculates the Nth percentile. If "individual" is specified as a constant, each metric is evaluated individually, resulting in a single percentile value for the entire time series; if there are multiple time series, this results in a single datapoint for each series. Otherwise, the set of data point values across metrics at each timestamp is evaluated, resulting in a single result metric. The Nth percentile value must be between 0 and 100, inclusive.
By default, the percentile calculation considers all the data, regardless of whether some of the time-series are missing data at certain timestamps. To consider only the timestamps at which every time-series has data, specify #INTERSECT# as a constant; timestamps at which not all time-series in the query result have data are then ignored for the percentile calculation.
PERCENTILE(<time_series>[,<time_series>]*,#npercent#)
PERCENTILE(<time_series>[,<time_series>]*,#npercent#, #INTERSECT#)
PERCENTILE(<time_series>[,<time_series>]*,#npercent#,#individual#)
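The aggregation across series can be sketched as follows. The interpolation scheme here (R-6, as used by Apache Commons Math) is an assumption; the service may use a different estimator:

```python
def percentile(values, p):
    """R-6 percentile estimate of a list of values (0 < p <= 100)."""
    xs = sorted(values)
    n = len(xs)
    pos = p * (n + 1) / 100.0
    if pos < 1:
        return xs[0]
    if pos >= n:
        return xs[-1]
    k = int(pos)
    return xs[k - 1] + (pos - k) * (xs[k] - xs[k - 1])

def percentile_across(series_list, p, intersect=False):
    """Per-timestamp percentile across series; intersect=True keeps only
    timestamps present in every series."""
    keys = set().union(*series_list)
    if intersect:
        keys = set.intersection(*(set(s) for s in series_list))
    return {t: percentile([s[t] for s in series_list if t in s], p)
            for t in sorted(keys)}
```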
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "-1"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2"
}
},{
"scope": "scope",
"metric": "metricC",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "4"
}
},{
"scope": "scope",
"metric": "metricD",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "8"
}
}]
Output:
[{
"scope": "PERCENTILE",
"metric": "result",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "7.4"
}
}]
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "-1",
"1444392000000": "2",
"1444406400000": "4",
"1444420800000": "8"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444420800000": "7.4"
}
}]
Forward fills gaps with the last known value at the start (earliest occurring time) of the gap. The maximum gap size is specified using the interval parameter.
PROPAGATE(<time_series>[,<time_series>]*,#interval#)
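One plausible reading of these semantics, matching the example below (each gap is filled at interval-sized steps with the value at the start of the gap):

```python
def propagate(datapoints, interval_ms):
    """Forward-fill gaps: repeat the last known value every interval."""
    ts = sorted(datapoints)
    out = {}
    for i, t in enumerate(ts):
        out[t] = datapoints[t]
        if i + 1 < len(ts):
            fill = t + interval_ms
            while fill < ts[i + 1]:
                out[fill] = datapoints[t]
                fill += interval_ms
    return out
```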
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444420800000": "4"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "4"
}
}]
Calculates the difference between the maximum and minimum values at each time stamp. If a single time-series is specified, the difference between the maximum and minimum of all values in that time-series is returned.
RANGE(<time_series>[,<time_series>]*)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "-1"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2"
}
},{
"scope": "scope",
"metric": "metricC",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "4"
}
},{
"scope": "scope",
"metric": "metricD",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "8"
}
}]
Output:
[{
"scope": "RANGE",
"metric": "result",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "9.0"
}
}]
Calculates the rate of change. If no parameters are provided, the rate transform calculates the rate of increase per minute and also handles counter resets. This transform has two variations:
RATE(<time_series>[,<time_series>]*)
RATE(<time_series>[,<time_series>]*, #interval#, #handleCounterResets#, #interpolateMissingDatapoints#)
The interval parameter specifies the gap between two data points. If the handleCounterResets parameter is true, counter resets are handled (all values that are negative after calculating the rate are removed). If the interpolateMissingDatapoints parameter is true, missing data points are interpolated.
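These semantics can be sketched as follows (assumptions: the rate is normalized to the given interval, negative rates are dropped when counter resets are handled, and interpolation emits the computed rate at each interval step inside a gap):

```python
def rate(datapoints, interval_ms=60000, handle_counter_resets=True,
         interpolate_missing=True):
    """Per-interval rate of change between consecutive datapoints."""
    ts = sorted(datapoints)
    out = {}
    for prev, cur in zip(ts, ts[1:]):
        r = (datapoints[cur] - datapoints[prev]) / ((cur - prev) / interval_ms)
        if handle_counter_resets and r < 0:
            continue  # drop negative rates caused by counter resets
        if interpolate_missing:
            t = prev + interval_ms
            while t <= cur:
                out[t] = r
                t += interval_ms
        else:
            out[cur] = r
    return out
```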
Input:
[
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"displayName": null,
"units": null,
"datapoints": {
"1500000060000": 1,
"1500000120000": 2,
"1500000180000": 3,
"1500000240000": 4,
"1500000300000": 5,
"1500000360000": 6,
"1500000420000": 7,
"1500000480000": 8,
"1500000540000": 9,
"1500000600000": 10
},
"metatagsKey": null,
"metatags": null
}
]
Output:
[
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"displayName": null,
"units": null,
"datapoints": {
"1500000120000": 1,
"1500000180000": 1,
"1500000240000": 1,
"1500000300000": 1,
"1500000360000": 1,
"1500000420000": 1,
"1500000480000": 1,
"1500000540000": 1,
"1500000600000": 1
},
"metatagsKey": null,
"metatags": null
}
]
Input:
[
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"displayName": null,
"units": null,
"datapoints": {
"1500000060000": 1,
"1500000600000": 10
},
"metatagsKey": null,
"metatags": null
}
]
Output:
[
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"displayName": null,
"units": null,
"datapoints": {
"1500000120000": 1,
"1500000180000": 1,
"1500000240000": 1,
"1500000300000": 1,
"1500000360000": 1,
"1500000420000": 1,
"1500000480000": 1,
"1500000540000": 1,
"1500000600000": 1
},
"metatagsKey": null,
"metatags": null
}
]
Input:
[
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"displayName": null,
"units": null,
"datapoints": {
"1500000060000": 1,
"1500000120000": 2,
"1500000180000": 3,
"1500000240000": 4,
"1500000300000": 5,
"1500000360000": 1,
"1500000420000": 2,
"1500000480000": 3,
"1500000540000": 4,
"1500000600000": 5
},
"metatagsKey": null,
"metatags": null
}
]
Output:
[
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"displayName": null,
"units": null,
"datapoints": {
"1500000120000": 1,
"1500000180000": 1,
"1500000240000": 1,
"1500000300000": 1,
"1500000420000": 1,
"1500000480000": 1,
"1500000540000": 1,
"1500000600000": 1
},
"metatagsKey": null,
"metatags": null
}
]
Example 4 [Calculating the rate at a 2-minute interval]: RATE(-1h:scope:metric:avg,#2m#,#true#,#true#)
Input:
[
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"displayName": null,
"units": null,
"datapoints": {
"1500000060000": 1,
"1500000180000": 2,
"1500000300000": 3,
"1500000420000": 4,
"1500000540000": 5
},
"metatagsKey": null,
"metatags": null
}
]
Output:
[
{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"displayName": null,
"units": null,
"datapoints": {
"1500000180000": 1,
"1500000300000": 1,
"1500000420000": 1,
"1500000540000": 1
},
"metatagsKey": null,
"metatags": null
}
]
ROUND/CEIL/FLOOR the value at each timestamp to a mathematical integer.
An optional parameter #type# accepts one of the following values:
#round# (based on java.lang.Math.rint)
#ceil# (based on java.lang.Math.ceil)
#floor# (based on java.lang.Math.floor)
If no parameter is provided, the default is #round#.
ROUNDING(<time_series>[,<time_series>]*)
ROUNDING(<time_series>[,<time_series>]*,#type#)
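In Python terms (Python's built-in round() matches Math.rint's half-to-even behavior):

```python
import math

def rounding(datapoints, mode="round"):
    """Round each value to a mathematical integer: round, ceil, or floor."""
    fn = {"round": round, "ceil": math.ceil, "floor": math.floor}[mode]
    return {t: float(fn(v)) for t, v in datapoints.items()}
```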
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": "452.3",
"1444636800000": "466.45",
"1444651200000": "477.5",
"1444665600000": "680.6",
"1444680000000": "-486.6",
"1444694400000": "-287.4"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": "452.0",
"1444636800000": "466.0",
"1444651200000": "478.0",
"1444665600000": "681.0",
"1444680000000": "-487.0",
"1444694400000": "-287.0"
}
}]
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": "452.3",
"1444636800000": "466.45",
"1444651200000": "477.5",
"1444665600000": "680.6",
"1444680000000": "-486.6",
"1444694400000": "-287.4"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": "453.0",
"1444636800000": "467.0",
"1444651200000": "478.0",
"1444665600000": "681.0",
"1444680000000": "-486.0",
"1444694400000": "-287.0"
}
}]
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": "452.3",
"1444636800000": "466.45",
"1444651200000": "477.5",
"1444665600000": "680.6",
"1444680000000": "-486.6",
"1444694400000": "-287.4"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444622400000": "452.0",
"1444636800000": "466.0",
"1444651200000": "477.0",
"1444665600000": "680.0",
"1444680000000": "-487.0",
"1444694400000": "-288.0"
}
}]
Calculates a product. If no multiplier is provided, the data point values at each time stamp for all but the first metric are multiplied into the data point value of the first metric. If a multiplier is provided as a value, it is multiplied into each data point in the set of input metrics. If the multiplier is provided as the constant "UNION", the data point values at each time stamp for all but the first metric are multiplied into the data point value of the first metric, and data points that do not share a common timestamp are left as is in the first metric.
SCALE(<time_series>[,<time_series>]*,#multiplier#)
SCALE(<time_series>[,<time_series>]*)
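A sketch of the three behaviors (the default and "UNION" forms collapse the inputs into a single result series, while a numeric multiplier preserves every series; this mirrors the examples below):

```python
def scale(series_list, multiplier=None, union=False):
    """Multiply series together, or scale each by a numeric multiplier."""
    if multiplier is not None:
        return [{t: v * multiplier for t, v in s.items()} for s in series_list]
    first = series_list[0]
    common = set(first).intersection(*(set(s) for s in series_list[1:]))
    out = {}
    for t, v in first.items():
        if t in common:
            for s in series_list[1:]:
                v *= s[t]
            out[t] = v
        elif union:
            out[t] = v  # "UNION": keep unmatched points of the first series
    return out
```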
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "SCALE",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
}]
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "SCALE",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Calculates a product using a vector of multipliers to be multiplied into each input time-series.
SCALE_V(<time_series>[,<time_series>]*,<multiplier_time_series>)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "3",
"1444406400000": "4",
"1444420800000": "5",
"1444435200000": "6",
"1444449600000": "7"
}
},{
"scope": "scope",
"metric": "multipliers",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "SCALE_V",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "SCALE_V",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "3",
"1444406400000": "4",
"1444420800000": "5",
"1444435200000": "6",
"1444449600000": "7"
}
}]
Shifts the time stamp for each data point by the specified constant.
SHIFT(<time_series>[,<time_series>]*,#interval#)
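Conceptually (a positive interval shifts the series forward in time):

```python
def shift(datapoints, offset_ms):
    """Add the offset to every timestamp; values are unchanged."""
    return {t + offset_ms: v for t, v in datapoints.items()}
```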
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444392000000": "1",
"1444406400000": "2",
"1444420800000": "3",
"1444435200000": "4",
"1444449600000": "5"
}
}]
Data points with timestamps outside of start_time and end_time are removed. Both start_time and end_time can be epoch timestamps or times relative to the query start and end times.
SLICE(<time_series>[,<time_series>]*,#start_time#, #end_time#)
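Assuming inclusive bounds, which matches the examples below:

```python
def slice_series(datapoints, start_ms, end_ms):
    """Keep only datapoints whose timestamp lies in [start_ms, end_ms]."""
    return {t: v for t, v in datapoints.items() if start_ms <= t <= end_ms}
```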
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444444100000": "1",
"1444444200000": "2",
"1444444300000": "3",
"1444444400000": "4",
"1444444500000": "5",
"1444444600000": "6",
"1444444700000": "7",
"1444444800000": "8"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1444444300000": "3",
"1444444400000": "4",
"1444444500000": "5",
"1444444600000": "6"
}
}]
Input:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1565305200000": "1",
"1565305260000": "2",
"1565305320000": "3",
"1565305380000": "4",
"1565305440000": "5",
"1565305500000": "6",
"1565305560000": "7",
"1565305620000": "8",
"1565305680000": "9",
"1565305740000": "10"
}
}]
Output:
[{
"scope": "scope",
"metric": "metric",
"tags": {},
"namespace": null,
"datapoints": {
"1565305380000": "4",
"1565305440000": "5",
"1565305500000": "6",
"1565305560000": "7"
}
}]
The SMOOTHEN transform smooths noisy metrics and reveals trends. It applies a moving-window average to the time-series of data points. The transform is based on the ASAP (Automatic Smoothing for Attention Prioritization) algorithm (http://futuredata.stanford.edu/asap/) developed by Stanford's Future Data Systems Research Group, which finds an optimized window size: the larger the window, the smoother the data but the less detail is preserved. ASAP searches for a window size as large as possible while retaining as much deviation information as possible.
An optional parameter, #resolution#, determines how smooth the output is. Its default value is the number of data points in the input stream; valid values range from 1 (most smooth) to the default (least smooth).
Important: If #resolution# is set larger than the default value, the output stream is identical to the one produced with the default. For example, if the input stream has 500 data points, changing #resolution# from the default to 1000 does not make the output stream less smooth.
SMOOTHEN(<time_series>)
SMOOTHEN(<time_series>, #resolution#)
Comment: if #resolution# is larger than the default value, the output stream is the same as with the default.
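ASAP's window-size search is beyond a short sketch, but the moving-window average it ultimately applies looks like this (a simplification; the real transform chooses `window` automatically):

```python
def smooth(datapoints, window):
    """Moving average: each output point is the mean of the last
    `window` input points."""
    ts = sorted(datapoints)
    vals = [datapoints[t] for t in ts]
    return {ts[i]: sum(vals[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(ts))}
```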
Sorts a list of metrics. The required type parameter must be one of maxima, minima, name, or dev. The required order parameter must be either ascending or descending. The optional limit parameter indicates the maximum number of series to return; if the limit parameter is omitted, all results are returned.
The maxima sort uses the maximum value of all the data points in each series to perform the sort. Likewise, the minima sort uses the minimum value from all the data points in each series. The dev sort uses the standard deviation calculated for each series, and the name sort uses the metric identifier.
SORT(<time_series>[,<time_series>]*,#limit#,#type#,#order#)
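The four sort keys can be sketched as follows (series modeled as dicts with a metric name and datapoints; using sample standard deviation for dev is an assumption):

```python
import statistics

def sort_series(series_list, limit=None, type_="maxima", order="descending"):
    """Order series by a per-series statistic, optionally truncating."""
    keys = {
        "maxima": lambda s: max(s["datapoints"].values()),
        "minima": lambda s: min(s["datapoints"].values()),
        "dev": lambda s: statistics.stdev(s["datapoints"].values()),
        "name": lambda s: s["metric"],
    }
    out = sorted(series_list, key=keys[type_], reverse=(order == "descending"))
    return out[:limit] if limit else out
```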
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "7",
"1444392000000": "8",
"1444406400000": "9",
"1444420800000": "10",
"1444435200000": "11",
"1444449600000": "12"
}
}]
Output:
[{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "7",
"1444392000000": "8",
"1444406400000": "9",
"1444420800000": "10",
"1444435200000": "11",
"1444449600000": "12"
}
},{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
}]
Calculates an arithmetic sum. If no addend is provided, the data point values at each time stamp are summed across the input series. If an addend is provided as a value, it is added to each data point in the set of input metrics.
If the addend is provided as the constant 'INTERSECT', the data point values at each time stamp are summed only if that timestamp is shared by all time-series in the query result. If this constant is not provided, then by default a time-series missing a value at a particular timestamp contributes 0 to the sum at that timestamp.
SUM(<time_series>[,<time_series>]*,#addend#)
SUM(<time_series>[,<time_series>]*)
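The three forms, sketched (an illustration of the semantics described above, not the actual implementation):

```python
def sum_series(series_list, addend=None, intersect=False):
    """Sum across series per timestamp, or add a constant to each point."""
    if addend is not None:
        return [{t: v + addend for t, v in s.items()} for s in series_list]
    keys = set().union(*series_list)
    if intersect:
        keys = set.intersection(*(set(s) for s in series_list))
    # Missing values count as 0 unless intersect is requested.
    return {t: sum(s.get(t, 0) for s in series_list) for t in sorted(keys)}
```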
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "SUM",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "3",
"1444406400000": "4",
"1444420800000": "5",
"1444435200000": "6",
"1444449600000": "7"
}
}]
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "3",
"1444406400000": "4",
"1444420800000": "5",
"1444435200000": "6",
"1444449600000": "7"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "2",
"1444406400000": "2",
"1444420800000": "2",
"1444435200000": "2",
"1444449600000": "2"
}
}]
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444377700000": "1",
"1444377800000": "1",
"1444377900000": "1"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444377700000": "2"
}
}]
Output:
[
{
"scope": "scope",
"metric": "result",
"tags": {},
"datapoints": {
"1444377600000": 3,
"1444377700000": 3
}
}
]
Calculates an arithmetic sum using a vector of addends to be summed with each input time-series.
SUM_V(<time_series>[,<time_series>]*,<addend_time_series>)
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "4",
"1444435200000": "5",
"1444449600000": "6"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "3",
"1444406400000": "4",
"1444420800000": "5",
"1444435200000": "6",
"1444449600000": "7"
}
},{
"scope": "scope",
"metric": "addends",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "1",
"1444406400000": "1",
"1444420800000": "1",
"1444435200000": "1",
"1444449600000": "1"
}
}]
Output:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "2",
"1444392000000": "3",
"1444406400000": "4",
"1444420800000": "5",
"1444435200000": "6",
"1444449600000": "7"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "3",
"1444392000000": "4",
"1444406400000": "5",
"1444420800000": "6",
"1444435200000": "7",
"1444449600000": "8"
}
}]
Performs the union of all data points for the given time-series. If more than one data point exists at a given time stamp, the first value encountered is used.
UNION(<time_series>[,<time_series>]*)
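Equivalently (the first value encountered wins on collision):

```python
def union(series_list):
    """Union of datapoints; earlier series take precedence at shared
    timestamps."""
    out = {}
    for s in series_list:
        for t, v in s.items():
            out.setdefault(t, v)
    return dict(sorted(out.items()))
```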
Input:
[{
"scope": "scope",
"metric": "metricA",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3"
}
},{
"scope": "scope",
"metric": "metricB",
"tags": {},
"namespace": null,
"datapoints": {
"1444406400000": "4",
"1444420800000": "5",
"1444435200000": "6",
"1444449600000": "7"
}
}]
Output:
[{
"scope": "UNION",
"metric": "result",
"tags": {},
"namespace": null,
"datapoints": {
"1444377600000": "1",
"1444392000000": "2",
"1444406400000": "3",
"1444420800000": "5",
"1444435200000": "6",
"1444449600000": "7"
}
}]