#
# $Id$
#
# Author: Thilee Subramaniam
#
# Copyright 2012,2016 Quantcast Corporation. All rights reserved.
#
# This file is part of Kosmos File System (KFS).
#
# Licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.
#
MSTRESS: A framework for metaserver/namenode benchmarking
==========================================================
Contents:
[1] Framework description
[2] Files in this directory
[3] Benchmarking procedure
[4] Setting up DFS metaserver/namenode
[1] Framework
=============
The mstress master invokes mstress.py in slave mode on the client hosts
through SSH. Each mstress slave then launches the required number of
load-generating clients, which stress the metaserver.
+-----------------------------+
| +-------------------+ |
| | mstress.py +-----+----------------------+
| | (--mode master) +-----+------------------+ |
| +-------------------+ | | |
| (master host) | | |
+-----------------------------+ | |
| |
+--------------------------------------+ | |
| | | |
+-----------+ | +--------------+ +--------------+ | | |
| |<---------+-|mstress_client|<--| mstress.py |<-+--+ |
| | | +--------------+ |(--mode slave)| | |
| DFS meta | | +--------------+ | |
| server | | (client host 1) | |
| | +--------------------------------------+ |
| | |
| | |
| | +--------------------------------------+ |
| | | +--------------+ +--------------+ | |
| |<-----------|mstress_client|<--| mstress.py |<-+------+
+-----------+ | +--------------+ |(--mode slave)| |
| +--------------+ |
| (client host 2) |
+--------------------------------------+
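For each client host, the master's invocation looks conceptually like the
following (illustrative only; the exact arguments come from the plan file
and the command-line options):
ssh <client-host> '~/mstress/mstress.py --mode slave ...'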
The clients perform file or directory tree creation, stat, or directory walk
operations, as specified by the benchmark plan.
[2] Files
=========
- CMakeLists.txt
Builds the QFS stress client (C++) and HDFS stress client (Java) along with
the main release build.
- build.xml, ivy.xml, ivysettings.xml
Ant build file and Ivy dependency and settings files, used by CMakeLists.txt
to build the HDFS stress client (Java). Ensure that $JAVA_HOME is set.
- mstress_client.cc
Produces the mstress_client binary that actually drives the QFS metaserver.
Builds to $GIT_DIR/build/release/bin/benchmarks/mstress_client
See 'Benchmarking Procedure' below for details.
- MStress_Client.java
Produces the Java MStress_Client for the HDFS namenode. Built using Ant to
$GIT_DIR/build/release/bin/benchmarks/mstress.jar
See 'Benchmarking Procedure' below for details.
- mstress_install.sh
Helper script used to deploy mstress to a list of hosts. Will invoke cmake
and make under ./build/.
- mstress_plan.py
Used to generate a plan file for benchmarking.
Args: client host list, number of clients per host, file tree depth,
nodes per level, etc.
The generated plan file is also copied to the /tmp directory of the
participating client hosts.
Do ./mstress_plan.py --help to see all options.
- mstress.py
Used to run the metaserver test with the help of the plan file.
Args: DFS server host & port, plan file, etc.
This script invokes mstress.py on the remote hosts through SSH. For this
reason, the mstress path must be the same on all participating hosts.
Do ./mstress.py --help to see all options.
- mstress_run.py
Essentially a wrapper around mstress_plan.py and mstress.py.
Args: client hosts list and DFS server:port information.
Do mstress_run.py --help to see usage.
- mstress_sample_run.sh
Used to run sample benchmarks against given QFS and HDFS servers by launching
clients on localhost. Essentially a wrapper around mstress_initialize.sh,
make, mstress_prepare_master_clients.sh, and mstress_run.py.
- mstress_cleanup.py
Used to clean up the plan files and log files created on participating
hosts.
Do ./mstress_cleanup.py --help to see usage.
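For a typical run, the scripts above are used in roughly this order
(illustrative; see each script's --help for the exact arguments):
./mstress_install.sh <hosts...>  # deploy mstress to the participating hosts
./mstress_plan.py ...            # generate and distribute a plan file
./mstress.py ...                 # run the benchmark against the DFS server
./mstress_cleanup.py ...         # remove the plan and log files afterwards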
[3] Benchmarking Procedure
==========================
A real benchmark run would use separate physical machines for compiling, for
running the DFS server, for the mstress master, and for the load-generating
clients. The procedure below assumes separate machines, but everything can
also run on a single box, "localhost".
(1) Set up the QFS metaserver and HDFS namenode with the help of
section [4] "Setting up DFS metaserver/namenode" below.
(2) Set up SSH key authentication on the hosts involved so that the scripts
can log in without password or passphrase prompts.
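With OpenSSH, for example, this can be done as follows (user and host names
are placeholders):
ssh-keygen -t rsa                  # generate a key pair; use an empty passphrase
ssh-copy-id <user>@<client-host>   # repeat for every participating host
ssh <user>@<client-host> true      # verify that no password prompt appears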
(3) On the build host, compile and install QFS using the steps described in
https://github.com/quantcast/qfs/wiki/Developer-Documentation.
(4) Determine the master and load generating client hosts that you want to use
to connect to the DFS server. This could just be "localhost" if you want to
run the benchmark locally.
(5) From this directory on the build host, run `./mstress_install.sh hosts..`
to deploy the mstress files to the participating hosts under ~/mstress.
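For example, with placeholder host names:
./mstress_install.sh master-host client-host-1 client-host-2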
(6) On the master host, change directory to ~/mstress and create a plan file
using mstress_plan.py.
Do ./mstress_plan.py --help to see example usage.
E.g.:
./mstress_plan.py -c localhost,127.0.0.1 -n 3 -t file -l 2 -i 10 -s 139
This creates a plan in which 3 client processes on each of 2 hosts build
2 levels of 10 inodes each. Since each client creates 110 inodes (10
directories with 10 files each) and there are 6 clients (3 x 2), the plan
creates 660 inodes on the DFS server.
The plan file picks N files to stat per client such that
(N x client-host-count x clients-per-host) is just enough to meet 139.
The plan file is written to the /tmp directory of the host where you run it,
and is also copied to the participating client hosts given in the '-c' option.
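For instance, with the plan above: N = ceil(139 / (2 hosts x 3 clients)) = 24,
so each client stats 24 files, or 144 in total, the smallest per-client count
whose total covers 139.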
(7) Checklist: check the presence of:
- the plan file on the master host and all client hosts (step 6 does this
for you)
- the mstress_client binaries (QFS and HDFS clients) on the master and all
client hosts (step 5).
(8) Run the benchmark from the master with mstress.py.
Do ./mstress.py --help to see options.
E.g.:
./mstress.py -f qfs -s <metahost> -p <metaport> -a </tmp/something.plan>
./mstress.py -f hdfs -s <namehost> -p <nameport> -a </tmp/something.plan>
(9) The benchmark name, progress, and time taken will be printed out.
[4] DFS Server Setup
====================
[4.1] QFS Metaserver Setup
--------------------------
You can set up the QFS metaserver using the steps described in
https://github.com/quantcast/qfs/wiki
If you want to set up a simple metaserver for local testing, use the
script ~/code/qfs/examples/sampleservers/sample_setup.py.
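For example (assuming the install action documented in the QFS wiki):
~/code/qfs/examples/sampleservers/sample_setup.py -a install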
[4.2] HDFS Namenode Setup
-------------------------
This will set up the HDFS namenode to listen on port 40000.
The web UI will run on the default port, 50070.
The installation described here is based on Cloudera's CDH4 release.
(1) Ensure Java is installed and $JAVA_HOME is set.
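A quick way to verify both:
java -version
echo $JAVA_HOME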
(2) Add the following to /etc/yum.repos.d/thirdparty.repo (sudo needed)
-----------------------------------
[cloudera-cdh4]
name=Cloudera's Distribution for Hadoop, Version 4
baseurl=http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/4/
gpgkey = http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
gpgcheck = 1
-----------------------------------
(3) Install hadoop-hdfs-namenode and update the configs.
sudo yum install hadoop-hdfs-namenode
sudo mv /etc/hadoop/conf /etc/hadoop/conf.orig
sudo cp -r /etc/hadoop/conf.empty /etc/hadoop/conf
(4) Update /etc/hadoop/conf/core-site.xml (use your server's address in place
of 10.20.30.255):
----------------------------------
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://10.20.30.255:40000</value>
</property>
</configuration>
----------------------------------
(5) Edit /etc/hadoop/conf/hdfs-site.xml and ensure that the dfs.name.dir
value has a "file://" prefix, to avoid warnings.
----------------------------------
<configuration>
<property>
<name>dfs.name.dir</name>
<value>file:///var/lib/hadoop-hdfs/cache/hdfs/dfs/name</value>
</property>
</configuration>
----------------------------------
(6) Format the namenode:
sudo service hadoop-hdfs-namenode init
(7) Start the namenode:
sudo service hadoop-hdfs-namenode start
(8) The namenode should now be running. Confirm this by running:
ps aux | grep java
sudo netstat -pan | grep 40000
(9) To administer files and directories, use:
/usr/lib/hadoop/bin/hadoop fs -ls /
(10) The user with write access on this namenode is "hdfs". Therefore, grant
write permission on the "/" directory (for the mstress benchmark to use)
by logging in as the "hdfs" user:
sudo bash
su hdfs
JAVA_HOME=<java-home> /usr/lib/hadoop/bin/hadoop fs -chmod 777 /
exit
(11) Now the namenode is ready for running benchmarks.