| Status        |                              |
| ------------- | ---------------------------- |
| Stability     | beta: traces, metrics, logs  |
| Distributions | contrib                      |
| Issues        |                              |
| Code Owners   | @Aneurysm9, @MovieStoreGuy   |
The Kinesis exporter exports dynamic encodings to the configured Kinesis stream. The exporter relies heavily on the `kinesis.PutRecords` API to reduce network I/O, and reduces records to their smallest atomic representation to avoid hitting the hard limit placed on records (no greater than 1 MB). This producer blocks until the operation is done, allowing retryable and queued data to help during high load.
The following settings are required:

- `aws`
  - `stream_name` (no default): The name of the Kinesis stream to export to.
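For instance, a minimal configuration supplying only the required setting might look like the following sketch (the stream name `my-stream` is an illustrative placeholder):

```yaml
exporters:
  awskinesis:
    aws:
      stream_name: my-stream
```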
The following settings can be optionally configured:

- `aws`
  - `kinesis_endpoint` (no default)
  - `region` (default = us-west-2): The region that the Kinesis stream is deployed in.
  - `role` (no default): The role to be used in order to send data to the Kinesis stream.
- `encoding`
  - `name` (default = otlp): Defines the export type to be used when sending to Kinesis (available values are `otlp_proto`, `otlp_json`, `zipkin_proto`, `zipkin_json`, `jaeger_proto`).
    - Note: `otlp_json` is considered experimental and should not be used in production environments.
  - `compression` (default = none): Sets the compression type (defaults to BestSpeed for all) applied before forwarding to Kinesis (available values are `flate`, `gzip`, `zlib`, or `none`).
- `max_records_per_batch` (default = 500, the PutRecords limit): The number of records that can be batched together and then sent to Kinesis.
- `max_record_size` (default = 1 MB, the PutRecord(s) limit on record size): The maximum allowed size of a record exported to Kinesis.
- `timeout` (default = 5s): The timeout for every attempt to send data to the backend.
- `retry_on_failure`
  - `enabled` (default = true)
  - `initial_interval` (default = 5s): Time to wait after the first failure before retrying; ignored if `enabled` is `false`.
  - `max_interval` (default = 30s): The upper bound on backoff; ignored if `enabled` is `false`.
  - `max_elapsed_time` (default = 120s): The maximum amount of time spent trying to send a batch; ignored if `enabled` is `false`.
- `sending_queue`
  - `enabled` (default = true)
  - `num_consumers` (default = 10): Number of consumers that dequeue batches; ignored if `enabled` is `false`.
  - `queue_size` (default = 1000): Maximum number of batches kept in memory before dropping data; ignored if `enabled` is `false`. Users should calculate this as `num_seconds * requests_per_second`, where:
    - `num_seconds` is the number of seconds to buffer in case of a backend outage
    - `requests_per_second` is the average number of requests per second.
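Putting the optional settings together, a fuller configuration might look like the following sketch. All values are illustrative, not recommendations; for example, a `queue_size` of 1000 corresponds to buffering roughly 100 seconds of a 10-requests-per-second load (`num_seconds * requests_per_second = 100 * 10`):

```yaml
exporters:
  awskinesis:
    aws:
      stream_name: raw-trace-stream
      region: us-west-2
    encoding:
      name: otlp_proto
      compression: gzip
    max_records_per_batch: 500
    timeout: 5s
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 120s
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 1000
```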
Example Configuration:

```yaml
exporters:
  awskinesis:
    aws:
      stream_name: raw-trace-stream
      region: us-east-1
      role: arn:test-role
```