\chapter{Client Configuration}
\section{Structure of /etc/cvmfs}
The local configuration of \cvmfs\ is controlled by several files in \texttt{/etc/cvmfs} listed in Table~\ref{tbl:configfiles}.
For every .conf file except site.conf, you can create a corresponding .local file with the same prefix in order to customize the configuration.
The .local file will be sourced after the corresponding .conf file.
In a typical installation, only a handful of parameters need to be set in /etc/cvmfs/default.local.
Most likely, this is the list of repositories (\texttt{CVMFS\_REPOSITORIES}), the HTTP proxies (see Section~\ref{sct:config:network}), and perhaps the cache directory and the cache quota (see Section~\ref{sct:config:cache}).
In a few cases, one might change a parameter for a specific domain or a specific repository, \eg provide an exclusive cache for a specific repository (see Section~\ref{sct:config:cache}).
The .conf and .local configuration files are key-value pairs in the form \texttt{PARAMETER=value}.
They are sourced by /bin/sh.
Hence, a limited set of shell commands can be used inside these files including comments, \texttt{if} clauses, parameter evaluation, and shell math (\texttt{\$((\dots))}).
Special characters have to be quoted.
For instance, instead of \texttt{CVMFS\_HTTP\_PROXY=p1;p2}, write \texttt{CVMFS\_HTTP\_PROXY='p1;p2'} in order to avoid parsing errors.
For a list of all parameters, see Appendix~\ref{apx:parameters}.
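A minimal /etc/cvmfs/default.local, for instance, could look like the following sketch; the proxy host name and the quota value are placeholders that have to be adapted to the site:
\begin{verbatim}
CVMFS_REPOSITORIES=atlas.cern.ch,cms.cern.ch
CVMFS_HTTP_PROXY='http://myproxy.example.org:3128'
CVMFS_QUOTA_LIMIT=4000
\end{verbatim}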
\begin{table}
\begin{center}
\begin{tabularx}{\linewidth}{lX}
\toprule
{\bf\centering File} & {\bf\centering Purpose} \\
\midrule
\texttt{config.sh} & Set of internal helper functions \\
\texttt{default.conf} & Set of parameters reflecting the standard configuration \\
\texttt{site.conf} & Site specific set of parameters that overwrites the standard configuration.
This file is used by the \cernvm\ contextualization. \\
\texttt{domain.d/\$domain.conf} & Domain-specific parameters and implementations of the functions in \texttt{config.sh} \\
\texttt{config.d/\$repository.conf} & Repository-specific parameters and implementations of the functions in \texttt{config.sh} \\
\texttt{keys/*.pub} & Public keys used to verify the digital signature of file catalogs \\
\bottomrule
\end{tabularx}
\end{center}
\caption{List of configuration files for \cvmfs\ in \texttt{/etc/cvmfs}}
\label{tbl:configfiles}
\end{table}
\section{Mounting}
Typically, mounting of \cvmfs\ repositories is handled by \autofs.
Just by accessing a repository directory under /cvmfs (\eg /cvmfs/atlas.cern.ch), \autofs\ will take care of mounting.
\autofs\ will also automatically unmount a repository if it is not used for a while.
Instead of using \autofs, \cvmfs\ repositories can be mounted manually with the system's \texttt{mount} command.
In order to do so, use the \texttt{cvmfs} file system type, like
\begin{verbatim}
mount -t cvmfs atlas /cvmfs/atlas.cern.ch
\end{verbatim}
Likewise, \cvmfs\ repositories can be mounted through entries in /etc/fstab.
A sample entry in /etc/fstab:
\begin{verbatim}
atlas.cern.ch /mnt/test cvmfs defaults 0 0
\end{verbatim}
Every mount point corresponds to a \cvmfs\ process.
With \autofs\ or the system's mount command, every repository can be mounted only once; otherwise, multiple \cvmfs\ processes would collide in the same cache location.
If a repository is needed under several paths, use a \emph{bind mount} or a private file system mount point (see Section~\ref{sct:privatemount}).
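For instance, a bind mount providing an already mounted repository under an additional (hypothetical) path could look like
\begin{verbatim}
mount --bind /cvmfs/atlas.cern.ch /opt/atlas
\end{verbatim}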
\subsection{Private Mount Points}
\label{sct:privatemount}
In contrast to the system's \texttt{mount} command which requires root privileges, \cvmfs\ can also be mounted like other Fuse file systems by normal users.
In this case, \cvmfs\ uses parameters from one or several user-provided config files instead of using the files under /etc/cvmfs.
\cvmfs\ private mount points do not appear as \texttt{cvmfs2} file systems but as \texttt{fuse} file systems.
The \texttt{cvmfs\_config} and \texttt{cvmfs\_talk} commands do not affect privately mounted \cvmfs\ repositories.
On an interactive machine, for instance, private mount points are unaffected by an administrator unmounting all of the system's \cvmfs\ mount points with \texttt{cvmfs\_config unmount}.
In order to mount \cvmfs\ privately, use the \texttt{cvmfs2} command like
\begin{verbatim}
cvmfs2 -o config=myparams.conf atlas.cern.ch /home/user/myatlas
\end{verbatim}
A minimal sample myparams.conf file could look like this:
\begin{verbatim}
CVMFS_CACHE_BASE=/home/user/mycache
CVMFS_RELOAD_SOCKETS=/home/user/mycache
CVMFS_SERVER_URL=http://cvmfs-stratum-one.cern.ch/opt/atlas
CVMFS_HTTP_PROXY=DIRECT
\end{verbatim}
Make sure to use absolute path names for the mount point and for the cache directory.
Use \texttt{fusermount -u} in order to unmount a privately mounted \cvmfs\ repository.
Private mount points can also be used to run the \cvmfs\ Fuse module where it has not been installed under /usr and /etc.
If the public keys are not installed under /etc/cvmfs/keys, the directory of the keys needs to be specified in the config file by \texttt{CVMFS\_KEYS\_DIR=<directory>}.
If the libcvmfs\_fuse.so library is not installed in one of the standard search paths, the \texttt{LD\_LIBRARY\_PATH} variable has to be set accordingly for the \texttt{cvmfs2} command.
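For instance, assuming a hypothetical installation prefix /opt/cvmfs, a private mount could be started like
\begin{verbatim}
export LD_LIBRARY_PATH=/opt/cvmfs/lib:$LD_LIBRARY_PATH
/opt/cvmfs/bin/cvmfs2 -o config=myparams.conf atlas.cern.ch /home/user/myatlas
\end{verbatim}
with \texttt{CVMFS\_KEYS\_DIR} set in myparams.conf if the public keys are not installed under /etc/cvmfs/keys.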
\section{Network Settings}
\label{sct:config:network}
\cvmfs\ uses HTTP~\cite{rfc1945,rfc2616} for data transfer.
Repository data can be replicated to multiple web servers and cached by standard web proxies such as Squid.
In a typical setup, repositories are replicated to a handful of web servers in different locations.
These replicas form the \cvmfs\ Stratum 1 service, whereas the replication source server is the \cvmfs\ Stratum 0 server.
On every cluster of machines, there should be two or more web proxy servers that \cvmfs\ can use (see Section~\ref{sct:squid}).
These site-local web proxies reduce the network latency for the \cvmfs\ clients and they reduce the load for the Stratum 1 service.
\cvmfs\ supports choosing a random proxy for load-balancing and automatic fail-over to other hosts and proxies in case of network errors.
Roaming clients can connect directly to the Stratum 1 service.
\subsection{Stratum 1 List}
To specify the Stratum 1 servers, set \texttt{CVMFS\_SERVER\_URL} to a semicolon-separated list of known replica servers (enclose in quotes).
The URLs defined in this way are organized as a ring buffer.
Whenever the download of a file from a server fails, \cvmfs\ automatically switches to the next mirror server.
For repositories under the cern.ch domain, the Stratum 1 servers are specified in /etc/cvmfs/domain.d/cern.ch.conf.
It is recommended to adjust the order of Stratum 1 servers so that the closest servers are used with priority.
For roaming clients (\ie clients not using a proxy server), the Stratum 1 servers can be automatically sorted according to round trip time by \texttt{cvmfs\_talk host probe} (see Section~\ref{sct:tools}).
Otherwise, the proxy server would invalidate round trip time measurement.
The special sequence \texttt{@org@} in the \texttt{CVMFS\_SERVER\_URL} string is replaced by the repository name (not fully qualified).
That allows the same parameter to be used for many repositories hosted under the same domain.
For instance, \texttt{http://cvmfs-stratum-one.cern.ch/opt/@org@} can resolve to \url{http://cvmfs-stratum-one.cern.ch/opt/atlas}, \url{http://cvmfs-stratum-one.cern.ch/opt/cms}, and so on depending on the repository that is being mounted.
The same works for the sequence \texttt{@fqrn@} with fully qualified repository names (\eg atlas.cern.ch, cms.cern.ch, \dots).
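For instance, a Stratum 1 list for repositories under the cern.ch domain could be sketched as follows, where the second mirror URL is merely a placeholder:
\begin{verbatim}
CVMFS_SERVER_URL='http://cvmfs-stratum-one.cern.ch/opt/@org@;http://stratum1.example.org/opt/@org@'
\end{verbatim}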
\subsection{Proxy Lists}
\cvmfs\ uses a dedicated HTTP proxy configuration, independent from system-wide settings.
Instead of a single proxy, \cvmfs\ uses a \emph{chain of load-balanced proxy groups}.
The \cvmfs\ proxies are set by the \texttt{CVMFS\_HTTP\_PROXY} parameter.
Proxies within the same group form a load-balance group, and a proxy is selected from it at random.
If a proxy fails, \cvmfs\ automatically switches to another proxy from the current group.
If all proxies from a group have failed, \cvmfs\ switches to the next proxy group.
After probing the last proxy group in the chain, the first proxy is probed again.
To avoid endless loops, the number of switches for each file download is restricted by the total number of proxies.
The chain of proxy groups is specified by a string of semicolon separated entries, each group is a list of pipe separated hostnames\footnote{The usual proxy notation rules apply, like \texttt{http://proxy1:8080|http://proxy2:8080;DIRECT}}.
The \texttt{DIRECT} keyword for a hostname avoids using proxies.
Note that the \texttt{CVMFS\_HTTP\_PROXY} parameter is necessary in order to mount.
If you don't use proxies, set the parameter to \texttt{DIRECT}.
Multiple proxy groups are often organized as a primary proxy group at the local site and backup proxy groups at remote sites.
In order to avoid \cvmfs\ being stuck with proxies at a remote site after a fail-over, \cvmfs\ will automatically retry to use proxies from the primary group after some time.
The delay after which proxies from the primary group are retried is set in seconds by \texttt{CVMFS\_PROXY\_RESET\_AFTER}.
The distinction of primary and backup proxy groups can be turned off by setting this parameter to 0.
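As an illustration, the following (hypothetical) setting defines a load-balance group of two proxies at the local site, followed by a single backup proxy at a remote site, and retries the primary group after half an hour:
\begin{verbatim}
CVMFS_HTTP_PROXY='http://proxy1:3128|http://proxy2:3128;http://backupproxy:3128'
CVMFS_PROXY_RESET_AFTER=1800
\end{verbatim}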
\subsubsection{Automatic Proxy Configuration}
The proxy settings can be automatically gathered through WPAD.
The special proxy server ``auto'' in \texttt{CVMFS\_HTTP\_PROXY} is resolved according to the proxy server specification loaded from a PAC file.
PAC files can be on a file system or accessible via HTTP.
\cvmfs\ looks for PAC files in the following order:
\begin{enumerate}
\item Semicolon separated URLs in the \texttt{PAC\_URLS} environment variable
\item \url{http://wpad/wpad.dat}
\item \url{http://wlcg-wpad.cern.ch/wpad.dat}
\end{enumerate}
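For instance, in order to try the automatically gathered proxy settings first and to fall back to a direct connection, one could set
\begin{verbatim}
CVMFS_HTTP_PROXY='auto;DIRECT'
\end{verbatim}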
\subsection{Timeouts}
\cvmfs\ tries to gracefully recover from broken network links and temporarily overloaded paths.
The timeout for connection attempts and for very slow downloads can be set by \texttt{CVMFS\_TIMEOUT} and \texttt{CVMFS\_TIMEOUT\_DIRECT}.
The two timeout parameters apply to a connection with a proxy server and to a direct connection to a Stratum 1 server, respectively.
A download is considered to be ``very slow'' if the transfer rate is below 100 Bytes/second for more than the timeout interval.
A very slow download is treated like a broken connection.
On timeout errors and on connection failures (but not on name resolving failures), \cvmfs\ will retry the path using an exponential backoff.
This introduces a jitter in case there are many concurrent requests by a cluster of nodes, allowing a proxy server or web server to serve all the nodes consecutively.
\texttt{CVMFS\_MAX\_RETRIES} sets the number of retries on a given path before \cvmfs\ tries to switch to another proxy or host.
The overall number of requests with a given proxy/host combination is \texttt{\$CVMFS\_MAX\_RETRIES}+1.
\texttt{CVMFS\_BACKOFF\_INIT} sets the maximum initial backoff in seconds.
The actual initial backoff is picked with milliseconds precision randomly in the interval $[1, \text{\$CVMFS\_BACKOFF\_INIT}\cdot 1000]$.
With every retry, the backoff is then doubled.
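The following illustrative values, for instance, would allow two retries per proxy/host combination, with a short timeout for connections through a proxy and a more generous one for direct connections:
\begin{verbatim}
CVMFS_TIMEOUT=10
CVMFS_TIMEOUT_DIRECT=30
CVMFS_MAX_RETRIES=2
CVMFS_BACKOFF_INIT=2
\end{verbatim}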
\section{Cache Settings}
\label{sct:config:cache}
Downloaded files will be stored in a local cache directory.
The \cvmfs\ cache has a soft quota; as a safety margin, the partition hosting the cache should provide \SI{15}{\percent} more space than the soft quota limit.
Once the quota limit is reached, \cvmfs\ will automatically remove files from the cache according to the least recently used policy~\cite{lru06}.
Removal of files is performed bunch-wise until half of the maximum cache size has been freed.
Currently, \cvmfs\ is not able to access files in the repository that are larger than half of the cache quota.
The quota limit can be set in Megabytes by \texttt{CVMFS\_QUOTA\_LIMIT}.
For typical repositories, a few Gigabytes make a good quota limit.
For repositories hosted at \cern, quota recommendations can be found under \url{http://cernvm.cern.ch/portal/cvmfs/examples}.
The cache directory needs to be on a local file system in order to allow each host the accurate accounting of the cache contents; on a network file system, the cache can potentially be modified by other hosts.
Furthermore, the cache directory is used to create (transient) sockets and pipes, which is usually only supported by a local file system such as ext3 or XFS.
The location of the cache directory can be set by \texttt{CVMFS\_CACHE\_BASE}.
Each repository can either have an exclusive cache or join the \cvmfs\ shared cache.
The shared cache enforces a common quota for all repositories used on the host.
File duplicates across repositories are stored only once in the shared cache.
The quota limit of the shared directory should be at least the maximum of the recommended limits of its participating repositories.
In order to have a repository not join the shared cache but use an exclusive cache, set \texttt{CVMFS\_SHARED\_CACHE=no}.
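For instance, an exclusive cache with its own (illustrative) quota for a single repository could be configured in /etc/cvmfs/config.d/atlas.cern.ch.local like
\begin{verbatim}
CVMFS_SHARED_CACHE=no
CVMFS_QUOTA_LIMIT=5000
\end{verbatim}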
\subsection{Alien Cache}
An ``alien cache'' provides the possibility to use a data cache outside the control of \cvmfs.
This can be necessary in HPC environments where local disk space is scarce or unavailable but powerful cluster file systems are available.
The alien cache directory is a directory in addition to the ordinary cache directory.
The ordinary cache directory is still used to store control files.
The alien cache directory is set by the \texttt{CVMFS\_ALIEN\_CACHE} option.
It can be located anywhere, including on cluster and network file systems.
If configured, all data chunks are stored there.
\cvmfs\ ensures atomic access to the cache directory.
It is safe to have the alien directory shared by multiple \cvmfs\ processes and it is safe to unlink files from the alien cache directory anytime.
The contents of files, however, must not be touched by third-party programs.
In contrast to the normal cache mode, where files are stored in mode 0600, files in the alien cache are stored in mode 0660.
Hence, all users that are members of the alien cache directory's owner group can use it.
The skeleton of the alien cache directory should be created upfront.
Otherwise, the first \cvmfs\ process accessing the alien cache determines the ownership.
The \texttt{cvmfs2} binary can create such a skeleton using
\begin{verbatim}
cvmfs2 __MK_ALIEN_CACHE__ $alien_cachedir $owner_uid $owner_gid
\end{verbatim}
Since the alien cache is unmanaged, there is no automatic quota management provided by \cvmfs; the alien cache directory is ever-growing.
The \texttt{CVMFS\_ALIEN\_CACHE} requires \texttt{CVMFS\_QUOTA\_LIMIT=-1} and \texttt{CVMFS\_SHARED\_CACHE=no}.
The alien cache might be used in combination with a special repository replication mode that preloads a cache directory (\cf Section~\ref{sct:replica}).
This allows an entire repository to be propagated into the cache of a cluster file system for HPC setups that do not allow outgoing connectivity.
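A minimal sketch of an alien cache configuration, assuming a hypothetical cluster file system mounted under /lustre, could look like
\begin{verbatim}
CVMFS_ALIEN_CACHE=/lustre/cvmfs/alien-cache
CVMFS_QUOTA_LIMIT=-1
CVMFS_SHARED_CACHE=no
\end{verbatim}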
\section{NFS Server Mode}
In case there is no local hard disk space available on a cluster of worker nodes, a single \cvmfs\ client can be exported via NFS~\cite{rfc1813,rfc3530} to these worker nodes.
This mode of deployment will inevitably introduce a performance bottleneck and a single point of failure and should be only used if necessary.
NFS export requires Linux kernel >= 2.6.27 on the NFS server.
It works for Scientific Linux 6 but not for Scientific Linux 5.
NFS clients can run both SL5 and SL6.
The NFS server should run a lock server as well.
For proper NFS support, set \texttt{CVMFS\_NFS\_SOURCE=yes}.
Also, \autofs\ for \cvmfs\ needs to be turned off and repositories need to be mounted manually.
In NFS mode, an additional directory nfs\_maps.\$repository\_name appears in the \cvmfs\ cache directory upon mount.
These \emph{NFS maps} use \leveldb\ to store the virtual inode \cvmfs\ issues for any accessed path.
The virtual inode may be requested by NFS clients anytime later.
As the NFS server has no control over the lifetime of client caches, entries in the NFS maps cannot be removed.
Typically, every entry in the NFS maps requires some 150-200 Bytes.
A recursive \texttt{find} on /cvmfs/atlas.cern.ch with 25 million entries, for instance, would add up to \SI{5}{\giga\byte} in the cache directory.
For a \cvmfs\ instance that is exported via NFS, the safety margin for the NFS maps needs to be taken into account.
It also might be necessary to monitor the actual space consumption.
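A minimal sketch of an NFS server configuration (the proxy host name is a placeholder) could look like
\begin{verbatim}
CVMFS_NFS_SOURCE=yes
CVMFS_REPOSITORIES=atlas.cern.ch
CVMFS_HTTP_PROXY='http://proxy.example.org:3128'
\end{verbatim}
with the repository mounted manually (\autofs\ for \cvmfs\ turned off), \eg
\begin{verbatim}
mount -t cvmfs atlas /cvmfs/atlas.cern.ch
\end{verbatim}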
\subsection{Tuning}
For decent performance, the amount of memory given to the meta-data cache should be increased.
By default, this is 16M.
It can be increased, for instance, to 256M by setting \texttt{CVMFS\_MEMCACHE\_SIZE} to 256.
Furthermore, the maximum number of download retries should be increased to at least 2 for the NFS use case.
The number of NFS daemons should be increased as well.
A value of 128 NFS daemons has been shown to perform well.
In Scientific Linux, the number of NFS daemons is set by the \texttt{RPCNFSDCOUNT} parameter in /etc/sysconfig/nfs.
The performance will benefit from large RAM on the NFS server ($\geq\SI{16}{\giga\byte}$) and \cvmfs\ caches hosted on an SSD hard drive.
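Putting the numbers from above together, an illustrative tuning could look like
\begin{verbatim}
# /etc/cvmfs/default.local
CVMFS_MEMCACHE_SIZE=256
CVMFS_MAX_RETRIES=2

# /etc/sysconfig/nfs
RPCNFSDCOUNT=128
\end{verbatim}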
\subsection{Shared NFS Maps (HA-NFS)}
As an alternative to the existing, \leveldb\ managed NFS maps, the NFS maps can optionally be managed out of the \cvmfs\ cache directory by \sqlite.
This allows the NFS maps to be placed on shared storage and accessed by multiple \cvmfs\ NFS export nodes simultaneously for clustering and active high-availability setups.
In order to enable shared NFS maps, set \texttt{CVMFS\_NFS\_SHARED} to the path that should be used to host the \sqlite\ database.
If the path is on shared storage, the shared storage has to support POSIX file locks.
The drawback of the \sqlite\ managed NFS maps is a significant performance penalty which in practice can be covered by the memory caches.
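For instance, with a hypothetical shared storage area mounted under /nfs-shared, the shared NFS maps could be enabled by
\begin{verbatim}
CVMFS_NFS_SOURCE=yes
CVMFS_NFS_SHARED=/nfs-shared/cvmfs-nfsmaps
\end{verbatim}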
\subsection{Example}
An example entry in /etc/exports (note: the fsid needs to be different for every exported \cvmfs\ repository):
\begin{verbatim}
/cvmfs/atlas.cern.ch 172.16.192.0/24(ro,sync,no_root_squash,\
no_subtree_check,fsid=101)
\end{verbatim}
A sample /etc/fstab entry on a client:
\begin{verbatim}
172.16.192.210:/cvmfs/atlas.cern.ch /cvmfs/atlas.cern.ch nfs \
ro,nfsvers=3,noatime,nodiratime,ac,actimeo=60,lookupcache=all 0 0
\end{verbatim}
\section{Hotpatching and Reloading}
\label{sct:hotpatch}
By hotpatching a running \cvmfs\ instance, most of the code can be reloaded without unmounting the file system.
The current active code is unloaded and the code from the currently installed binaries is reloaded.
Hotpatching is logged to syslog.
Since \cvmfs\ is re-initialized during hotpatching and configuration parameters are re-read, hotpatching can be also seen as a ``reload''.
Hotpatching has to be done for all repositories concurrently by
\begin{verbatim}
cvmfs_config [-c] reload
\end{verbatim}
The optional parameter \texttt{-c} specifies if the \cvmfs\ cache should be wiped out during the hotpatch.
Reloading of the parameters of a specific repository can be done like
\begin{verbatim}
cvmfs_config reload atlas.cern.ch
\end{verbatim}
In order to see the history of loaded \cvmfs\ Fuse modules, run
\begin{verbatim}
cvmfs_talk hotpatch history
\end{verbatim}
The currently loaded set of parameters can be shown by
\begin{verbatim}
cvmfs_talk parameters
\end{verbatim}
The \cvmfs\ packages use hotpatching in order to update previous versions.
\section{Auxiliary Tools}
\label{sct:tools}
\subsection{cvmfs\_fsck}
\cvmfs\ assumes that the local cache directory is trustworthy.
However, it might happen that files get corrupted in the cache directory caused by errors outside the scope of \cvmfs.
\cvmfs\ stores files in the local disk cache with their cryptographic content hash key as name, which makes it fairly easy to verify file integrity.
\cvmfs\ contains the \texttt{cvmfs\_fsck} utility to do so for a specific cache directory.
Its return value is comparable to the system's \texttt{fsck}.
For example,
\begin{verbatim}
cvmfs_fsck -j 8 /var/lib/cvmfs/shared
\end{verbatim}
checks all the data files and catalogs in \texttt{/var/lib/cvmfs/shared} using 8 concurrent threads.
Supported options are:
\begin{tabularx}{\linewidth}{lX}
\texttt{-v} & Produce more verbose output.\\
\texttt{-j \#threads} & Sets the number of concurrent threads that check files in the cache directory. Defaults to 4. \\
\texttt{-p} & Tries to automatically fix problems. \\
\texttt{-f} & Unlinks the cache database. The database will be automatically rebuilt by \cvmfs\ on next mount.\\
\end{tabularx}
\subsection{cvmfs\_config}
The \texttt{cvmfs\_config} utility provides commands to set up the system for use with \cvmfs.
\begin{description}
\item[setup] The \texttt{setup} command takes care of basic setup tasks, such as creating the cvmfs user and allowing access to \cvmfs\ mount points by all users.
\item[chksetup] The \texttt{chksetup} command inspects the system and the \cvmfs\ configuration in /etc/cvmfs for common problems.
\item [showconfig] The \texttt{showconfig} command prints the \cvmfs\ parameters for all repositories or for the specific repository given as argument.
\item [stat] The \texttt{stat} command prints file system and network statistics for currently mounted repositories.
\item [status] The \texttt{status} command shows all currently mounted repositories and the process id (PID) of the \cvmfs\ processes managing a mount point.
\item [probe] The \texttt{probe} command tries to access /cvmfs/\$repository for all repositories specified in \texttt{CVMFS\_REPOSITORIES}.
\item [reload] The \texttt{reload} command is used to reload or hotpatch \cvmfs\ instances (see Section~\ref{sct:hotpatch}).
\item [unmount] The \texttt{unmount} command unmounts all currently mounted \cvmfs\ repositories, which will only succeed if there are no open file handles on the repositories.
\item [wipecache] The \texttt{wipecache} command tries to unmount all currently mounted \cvmfs\ repositories and, in case of success, removes all files in the \cvmfs\ cache.
\item [bugreport] The \texttt{bugreport} command creates a tarball with collected system information which helps to debug a problem (see Section~\ref{sct:debugginghints}).
\end{description}
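For instance, the configuration of a single repository can be inspected and all configured repositories can be probed by
\begin{verbatim}
cvmfs_config showconfig atlas.cern.ch
cvmfs_config probe
\end{verbatim}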
\subsection{cvmfs\_talk}
The \texttt{cvmfs\_talk} command provides a way to control a currently running \cvmfs\ process and to extract information about the status of the corresponding mount point.
Most of the commands are for special purposes only or covered by more convenient commands, such as \texttt{cvmfs\_config showconfig} or \texttt{cvmfs\_config stat}.
Two commands might be of particular interest though.
\begin{verbatim}
cvmfs_talk cleanup 0
\end{verbatim}
will, without interruption of service, immediately clean up the cache by removing all files that are not currently pinned in the cache.
\begin{verbatim}
cvmfs_talk internal affairs
\end{verbatim}
prints the internal status information and performance counters.
It can be helpful for performance engineering.
\subsection{Other}
Information about the current cache usage can be gathered using the \texttt{df} utility.
For repositories created with the \cvmfs\ 2.1 toolchain, information about the overall number of file system entries in the repository as well as the number of entries covered by currently loaded meta-data can be gathered by \texttt{df -i}.
For the \nagios\footnote{\url{http://www.nagios.org}}~\cite{nagios08} monitoring system, a checker plugin is available under \url{http://cernvm.cern.ch/portal/filesystem/downloads}.
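For instance, assuming atlas.cern.ch is currently mounted, cache usage and the number of loaded file system entries can be displayed by
\begin{verbatim}
df -h /cvmfs/atlas.cern.ch
df -i /cvmfs/atlas.cern.ch
\end{verbatim}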
\section{Debug Logs}
The \texttt{cvmfs2} binary forks a watchdog process on start.
Using this watchdog, \cvmfs\ is able to create a stack trace in case certain signals (such as a segmentation fault) are received.
The watchdog writes the stack trace into syslog as well as into a file \texttt{stacktrace} in the cache directory.
In addition to the debugging hints in Section~\ref{sct:debugginghints}, \cvmfs\ can be started in debug mode.
In debug mode, \cvmfs\ logs with high verbosity, which makes debug mode unsuitable for production use.
In order to turn on the debug mode, set \texttt{CVMFS\_DEBUGFILE=/tmp/cvmfs.log}.