This repository has been archived by the owner on Mar 30, 2021. It is now read-only.
# Runtime Views
hbutani edited this page Aug 2, 2016 · 4 revisions
We provide a set of d$ views that expose the runtime state of the system.
Shows the Historical Servers available for querying.
col_name | data_type | comment |
---|---|---|
druidHost | string | |
druidServer | string | |
maxSize | bigint | |
serverType | string | |
tier | string | |
priority | int | |
numSegments | int | |
currSize | bigint | |
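For example, assuming this view is exposed under a name like `d$servers` (a hypothetical name; substitute the actual view name in your deployment), you could list Historical servers by tier along with how full each one is:

```sql
-- `d$servers` is a hypothetical view name; use the actual runtime-view name.
-- Lists Historical servers per tier with their capacity usage.
SELECT druidServer,
       tier,
       serverType,
       currSize,
       maxSize,
       ROUND(100.0 * currSize / maxSize, 1) AS pctFull
FROM `d$servers`
WHERE serverType = 'historical'
ORDER BY tier, pctFull DESC;
```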
Shows the Druid Segments of each DataSource.
col_name | data_type | comment |
---|---|---|
druidHost | string | |
druidDataSource | string | |
interval | string | |
version | string | |
binaryVersion | string | |
size | bigint | |
identifier | string | |
shardType | string | |
partitionNum | int | |
partitions | int | |
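As a sketch, assuming this view is exposed under a name like `d$segments` (hypothetical; substitute the actual view name), you could summarize segment count and total size per DataSource:

```sql
-- `d$segments` is a hypothetical view name; use the actual runtime-view name.
-- Segment count and total on-disk size per Druid DataSource.
SELECT druidDataSource,
       COUNT(*)  AS numSegments,
       SUM(size) AS totalBytes
FROM `d$segments`
GROUP BY druidDataSource
ORDER BY totalBytes DESC;
```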
Shows which Druid Segments are served by which _Historical Server_(s).
col_name | data_type | comment |
---|---|---|
druidHost | string | |
druidServer | string | |
segIdentifier | string | |
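Assuming this view is exposed under a name like `d$serverassignments` (hypothetical; substitute the actual view name), a simple aggregation shows how segments are spread across Historical servers:

```sql
-- `d$serverassignments` is a hypothetical view name; use the actual runtime-view name.
-- Number of segments each Historical server is serving.
SELECT druidServer,
       COUNT(segIdentifier) AS segmentsServed
FROM `d$serverassignments`
GROUP BY druidServer
ORDER BY segmentsServed DESC;
```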
Shows details about the Druid Queries that were issued during Query Execution. Maintains a history of the last `sparkline.queryhistory.maxsize` queries; by default this value is 500. (Note this is not a Spark property, but a Java system property.)
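Since `sparkline.queryhistory.maxsize` is a Java system property rather than a Spark property, it must be passed to the JVM directly, for example via `spark-submit` (a sketch; adapt it to however you launch your driver or ThriftServer):

```shell
# Pass the query-history size as a JVM system property (not via --conf spark.*).
# Here we raise the history from the default 500 to 2000 entries.
spark-submit \
  --driver-java-options "-Dsparkline.queryhistory.maxsize=2000" \
  ...
```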
col_name | data_type | comment |
---|---|---|
stageId | int | |
partitionId | int | |
taskAttemptId | bigint | |
druidQueryServer | string | |
druidSegIntervals | array&lt;struct&lt;_1:string,_2:string&gt;&gt; | |
startTime | string | |
druidExecTime | bigint | |
queryExecTime | bigint | |
numRows | int | |
druidQuery | string | |
- stageId, partitionId, and taskAttemptId
  - Use these to associate this Druid Query with its Spark Task, Stage, and Job.
- druidQueryServer
  - The Historical/Broker server that this query was sent to.
- druidSegIntervals
  - The list of (Segment Id, Interval) pairs that this Druid Query ran on.
- startTime
  - The time the Druid Query was issued, as an ISO DateTime string.
- druidExecTime
  - The time taken by the Druid Server to respond with the first byte.
- queryExecTime
  - The time taken for all the result data to be received by the Spark Task.
- numRows
  - The number of rows received from Druid.
- druidQuery
  - The Query sent to the Druid Server.
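Putting these columns together, and assuming the history view is exposed under a name like `d$queries` (hypothetical; substitute the actual view name), you could pull out the slowest recent queries and compare time spent inside Druid with total query time:

```sql
-- `d$queries` is a hypothetical view name; use the actual runtime-view name.
-- The 10 slowest Druid queries, with Druid-side vs. end-to-end timings.
SELECT stageId,
       partitionId,
       druidQueryServer,
       druidExecTime,
       queryExecTime,
       numRows
FROM `d$queries`
ORDER BY queryExecTime DESC
LIMIT 10;
```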