Simple JMeter Guide for Spark Testing
Though Spark SQL requests share the same sampler format (JDBC Request) as queries against a SQL database, there is one small difference: a Spark SQL request must not end with a semicolon (;). Remove it, or the query will not work. As for the query type in the JDBC Request configuration, "Select Statement" handles most kinds of requests, including create and set statements.
In the JDBC Connection Configuration, set "Auto Commit" to True; Spark SQL does not support turning it off.
Here is a reference for solving this issue:
https://github.com/prasanthj/jmeter-hiveserver2
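For reference, the JDBC Connection Configuration and JDBC Request samplers together behave like a plain Hive JDBC call. Below is a minimal sketch of the equivalent Java code; the host, port, database, and table name (localhost:10000, default, some_table) are placeholders, not values taken from this guide.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SparkSqlJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Hive JDBC driver class, provided by the hive-jdbc standalone jar.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Placeholder Thrift server host/port and database; adjust to your cluster.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "user", "")) {
            // Mirrors "Auto Commit = True" in the JDBC Connection Configuration.
            conn.setAutoCommit(true);

            try (Statement stmt = conn.createStatement();
                 // Note: no trailing semicolon, exactly as in the JDBC Request sampler.
                 ResultSet rs = stmt.executeQuery("SELECT count(*) FROM some_table")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1));
                }
            }
        }
    }
}
```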
Drag the right border of the GUI and you will see the hidden part. It shows the duration of the test, the console window button (click it to show or hide the console), and the number of currently running threads (users) / total threads (users).
Sometimes you want to execute only part of the thread groups in a test plan. You can cut the parts you don't want executed and paste them into the WorkBench. However, remember to move them back before exiting JMeter, or you will lose them.
If you execute the test plan several times, you will notice that the previous results are not cleared automatically. To clear them, use the buttons in the toolbar: one clears only the selected element, the other clears everything.
If the test plan gets stuck and runs for hours (when you haven't set a running duration), check the logs or the JMeter command line: there may be a heap dump, which means you configured more threads than your machine can handle. In that case, even if a result shows up at the end, it often reflects your machine's performance rather than your server cluster's. Reduce the thread count, or give JMeter more heap, and run again.
1. JMeter (tgz or zip): http://jmeter.apache.org/download_jmeter.cgi
2. Related jar (put it into the lib folder): hive-jdbc-<version>-SNAPSHOT-standalone.jar
1. Thread Group (set the number of concurrent users and the loop count)
2. JDBC Connection Configuration (set up the connection)
3. JDBC Request (put only one SQL query in it; the "variable name bound to the pool" must match the one defined in the related JDBC Connection Configuration)
4. Listeners (display the results)
Note: Within one thread group the requests are executed sequentially, so if you have queries that set the pool or database, put them first (see the sketch below).
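To illustrate that ordering, here is the same sequence expressed as plain JDBC calls (the property and table names are hypothetical). In JMeter each statement would be its own JDBC Request sampler, placed top to bottom under the thread group.

```java
// Each statement corresponds to one JDBC Request sampler, listed in this order
// under the thread group (property and table names are hypothetical).
static void runSamplersInOrder(java.sql.Connection conn) throws java.sql.SQLException {
    try (java.sql.Statement stmt = conn.createStatement()) {
        stmt.execute("set spark.sql.shuffle.partitions=8");   // setup/"set" queries first
        stmt.executeQuery("SELECT count(*) FROM some_table"); // benchmark query afterwards
    }
}
```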
Reference for the driver class and database URL: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients
1. Refer to the hidden part of the GUI (described above) to monitor the status of the test plan.
2. Monitor the listeners to see the details of the results:
Graph Results (number of samples/requests executed, throughput, average/median/last sample time)
View Results in Table / View Results Tree (details of each request)
Aggregate Report/Graph (per-sampler details such as throughput and KB/s)