
Runtime Views


We provide a set of d$ views that expose the runtime state of the system.

d$druidservers view

Shows the Historical Servers available for querying.

| col_name | data_type | comment |
| --- | --- | --- |
| druidHost | string | |
| druidServer | string | |
| maxSize | bigint | |
| serverType | string | |
| tier | string | |
| priority | int | |
| numSegments | int | |
| currSize | bigint | |
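
As a rough sketch (assuming the d$ views are registered in the Spark SQL session, and that the `$` in the view name needs backtick quoting), the view can be queried like any other table. For example, to list servers ordered by how full they currently are:

```sql
-- List Historical Servers, largest current load first.
-- Column names come from the table above; the backticks around the
-- view name are an assumption about quoting the '$' character.
SELECT druidServer, serverType, tier, numSegments, currSize, maxSize
FROM `d$druidservers`
ORDER BY currSize DESC
```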

d$druidsegments view

Shows the Druid Segments of each DataSource.

| col_name | data_type | comment |
| --- | --- | --- |
| druidHost | string | |
| druidDataSource | string | |
| interval | string | |
| version | string | |
| binaryVersion | string | |
| size | bigint | |
| identifier | string | |
| shardType | string | |
| partitionNum | int | |
| partitions | int | |
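
For example (again assuming the view is visible in the Spark SQL session), segment counts and total size can be rolled up per DataSource:

```sql
-- Roll up segment count and total size per Druid DataSource.
SELECT druidDataSource,
       COUNT(*)  AS numSegments,
       SUM(size) AS totalSize
FROM `d$druidsegments`
GROUP BY druidDataSource
```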

d$druidserverassignments view

Shows which Druid Segments are served by which Historical Server(s).

| col_name | data_type | comment |
| --- | --- | --- |
| druidHost | string | |
| druidServer | string | |
| segIdentifier | string | |
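
For example, to see how many segments each Historical Server is serving per DataSource, this view can be joined to d$druidsegments. The sketch below assumes that segIdentifier corresponds to the identifier column of d$druidsegments:

```sql
-- Segments served per Historical Server, broken down by DataSource.
-- Assumes a.segIdentifier corresponds to s.identifier.
SELECT a.druidServer,
       s.druidDataSource,
       COUNT(*) AS segmentsServed
FROM `d$druidserverassignments` a
JOIN `d$druidsegments` s
  ON a.segIdentifier = s.identifier
GROUP BY a.druidServer, s.druidDataSource
```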

d$druidqueries view

Shows details about the Druid Queries that were issued during Query Execution. Maintains a history of the last sparkline.queryhistory.maxsize queries; by default this value is 500. (Note this is not a Spark property, but a Java system property.)

| col_name | data_type | comment |
| --- | --- | --- |
| stageId | int | |
| partitionId | int | |
| taskAttemptId | bigint | |
| druidQueryServer | string | |
| druidSegIntervals | `array<struct<_1:string,_2:string>>` | |
| startTime | string | |
| druidExecTime | bigint | |
| queryExecTime | bigint | |
| numRows | int | |
| druidQuery | string | |
- stageId, partitionId, taskAttemptId: use these to associate this Druid Query with a Spark Task, Stage, and Job.
- druidQueryServer: the Historical/Broker server that was sent this query.
- druidSegIntervals: the list of (Segment Id, Interval) pairs that this Druid Query ran on.
- startTime: the time when the Druid Query was issued, as an ISO DateTime string.
- druidExecTime: the time for the Druid Server to respond with the first byte.
- queryExecTime: the time for all of the result data to be received by the Spark Task.
- numRows: the number of rows received from Druid.
- druidQuery: the query sent to the Druid Server.
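
As a sketch, the history can be inspected with a query like the one below, which lists the most recent Druid Queries together with their timing (assuming startTime, as an ISO DateTime string, sorts chronologically):

```sql
-- Most recent Druid queries with their timing and row counts.
SELECT stageId, partitionId, druidQueryServer, startTime,
       druidExecTime, queryExecTime, numRows
FROM `d$druidqueries`
ORDER BY startTime DESC
LIMIT 20
```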