From 46dd72b9c203d66669bd2453953b94ec827f822e Mon Sep 17 00:00:00 2001
From: Mark Wong
Date: Thu, 22 Feb 2024 09:01:01 -0800
Subject: [PATCH] updated from v0.53.10 release with minor syntax edit

---
 index.html | 310 ++++++++++++++---------------------------------------
 1 file changed, 79 insertions(+), 231 deletions(-)

diff --git a/index.html b/index.html
index ecb1aaa..d63703a 100644
--- a/index.html
+++ b/index.html
@@ -724,67 +724,56 @@

Database Test 2 (DBT-2) Documentation

   • POSTRUNNING ANALYSES

-  • Oracle
+  • PostgreSQL
-  • PostgreSQL
+      • Setup
-  • SAP DB
+  • SAP DB
-  • SQLite
-  • YugabyteDB
+  • SQLite
+  • YugabyteDB
-  • Operating System Notes
+  • Operating System Notes
-  • Developer Guide
+  • Developer Guide
@@ -1127,13 +1116,14 @@ Linux AppImage

 it hasn't already been done. Examples in the documentation will assume it has
 been renamed to dbt2.

 If FUSE is not available, the AppImage is self-extracting and provides a script
-(currently only for bash shells) to set your PATH and LD_LIBRARY_PATH to
-use the extracted files:
+AppRun that can be executed as if running the AppImage itself, when the
+APPDIR environment variable is set to the absolute path of the extracted
+squashfs-root directory:

 dbt2-*.AppImage --extract-appimage
-. squashfs-root/activate
-
-Then run deactivate to restore your environment, or exit the shell.
-
-Other shells can still set PATH and LD_LIBRARY_PATH manually to use the
-extracted environment.
+export APPDIR="$(pwd)/squashfs-root"
+squashfs-root/AppRun
+
+Alternatively, one could also set PATH and LD_LIBRARY_PATH manually to
+include the extracted environment.
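The extract-and-run flow described above can be sketched end to end. The stub AppRun below stands in for a real extracted AppImage (an assumption, since the actual dbt2 AppImage is not available here); a real run would first execute `./dbt2-*.AppImage --extract-appimage`:

```shell
#!/bin/sh
# Demonstration of the APPDIR/AppRun invocation pattern with a stub AppRun.
workdir=$(mktemp -d)
mkdir -p "$workdir/squashfs-root"

# Fake the extracted entry point so the invocation pattern can be shown.
cat > "$workdir/squashfs-root/AppRun" <<'EOF'
#!/bin/sh
echo "running with APPDIR=$APPDIR"
EOF
chmod +x "$workdir/squashfs-root/AppRun"

cd "$workdir"
# APPDIR must be the absolute path of the extracted squashfs-root directory.
export APPDIR="$(pwd)/squashfs-root"
output=$("$APPDIR/AppRun")
echo "$output"
```

The same pattern applies with the real extracted tree: set APPDIR to the absolute squashfs-root path, then run squashfs-root/AppRun as if it were the AppImage itself.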

    Linux Distribution Packages

@@ -1617,156 +1607,10 @@ POSTRUNNING ANALYSES

 results/report.txt - results of the test

-
-

-Oracle
-
-Topics
-
-1. Introduction
-2. System Requirements
-3. Setup
-4. Steps to manually run the DBT-2 kit
-5. Logs
-6. Errors and Debugging

-Introduction
-
-The DBT-2 kit provides an on-line transaction processing (OLTP) workload using
-an Oracle database and a set of defined transactions. DBT-2 simulates a
-workload that represents a wholesale parts supplier operating out of a number
-of warehouses and their associated sales districts. This kit mainly focuses on
-three kinds of test cases: IO, IO_CPU, and CPU_ERP. The IO test case mainly
-imposes load on the I/O subsystem, CPU_ERP on the CPU, and IO_CPU on both the
-CPU and the I/O subsystem. This is achieved by varying the size of the SGA
-parameters, the size of the database, I/O parameters, and the transaction
-mixes. These transactions include entering and delivering orders, recording
-payments, checking the status of orders, and monitoring the level of stock at
-the warehouses.

-System Requirements
-
-Storage:
-
-<work-home>        -- 10G -- location of logs, results, and server/client configuration scripts
-<archive location> --  5G -- location of archive files
-<storage location> --  9G for  2 warehouses -- database storage location
-                   -- 18G for 15 warehouses
-                   -- 36G for 40 warehouses
-                   -- 72G for 80 warehouses
-/tmp               -- 10MB
-
-For granularity (i.e. specifying a location for each type of file and
-tablespace, instead of specifying only one storage location for all):
-
-2-Warehouse:  ctrlloc=7.1M  datafiles=3.5G  logloc1=481G  logloc2=481G  tmptsloc=234M  undotsloc=2.8G
-15-Warehouse: ctrlloc=7.1M  datafiles=7.0G  logloc1=3.1G  logloc2=3.1G  tmptsloc=234M  undotsloc=3.0G
-40-Warehouse: ctrlloc=7.1M  datafiles=14G   logloc1=7.9G  logloc2=7.9G  tmptsloc=234M  undotsloc=4.4G
-80-Warehouse: ctrlloc=7.1M  datafiles=25G   logloc1=16G   logloc2=16G   tmptsloc=234M  undotsloc=5.8G
-
-Packages:
-
-Distribution-specific development packages (e.g. gcc, make, binutils,
-glibc-devel (32-bit and 64-bit))
-sysstat
-bind-utils
-Oracle-related package dependencies
-libaio packages
-For gcc versions higher than 3.3, compat-gcc for 2.96 or 3.2

-Setup
-
-1. Install the Oracle database.
-2. Copy the DBT-2 kit to your work location (if it is a tar file, untar it;
-   this should create a directory named dbt2).
-3. cd dbt2/scripts/oracle/install

-Steps to manually run the DBT-2 kit
-
-1. Kit setup:
-
-   Setup of the DBT-2 kit mainly performs the following tasks: it accepts
-   user parameters, creates directories on all nodes, generates database
-   create and build scripts, generates the init.ora parameter file, generates
-   the env file to start the client and server, generates scripts to start
-   and stop Oracle and the listener, and then copies these scripts to all
-   server and client nodes.
-
-2. How to run setup:
-
-   ./setup or ./setup -responsefile rsp
-
-   where rsp is a response file which contains all user parameters. Templates
-   rsp-rac and rsp-sn are available in <dbt2/scripts/oracle/responsefile> for
-   RAC and single node respectively.
-
-   User parameters are:
-   TESTCASE=io/io_cpu/cpu_erp          :  type of test
-   WORKHOME=<any location>             :  location of work directory for the DBT-2 kit
-   RAC=true/false                      :  true for RAC, false for running on a single node
-   SERVERS="<hostname>"                :  names of servers (for RAC more than one server may be entered; for single node only one)
-   CLIENTS="<hostname>"                :  names of clients
-   SERVERORAHOME=                      :  server Oracle home (location of Oracle binaries)
-   CLIENTORAHOME=                      :  client Oracle home
-   WAREHOUSE=2/15/40/80                :  number of warehouses
-   STORAGE TYPE=filesystem/asm         :  type of storage
-   If STORAGE TYPE=filesystem, then choose granularity:
-   GRAN=true/false                     :  true for granularity of database locations
-   If granularity is true, then pass locations for each of the following:
-   LOGFILESLOC1                        :  first log file location
-   LOGFILESLOC2                        :  second log file location
-   TMPTSLOC                            :  temporary tablespace location
-   UNDOTSLOC                           :  undo tablespace location
-   CTRLLOC                             :  control file location
-   If STORAGE TYPE=asm, then select the type of redundancy:
-   REDUNDANCY=external/normal/high     :  type of redundancy for ASM storage
-   ASMDISKLIST1=""                     :  ASM disk groups for external redundancy
-   ASMDISKLIST2=""                     :  ASM disk groups for normal redundancy
-   ASMDISKLIST3=""                     :  ASM disk groups for high redundancy
-   ARCHIVE=true/false                  :  true for archiving, else false
-   ARCHIVELOC=<any location>           :  location for archive files
-   DBBLOCKSIZE=2k/4k/8k/16k            :  data block size
-   SGA=1                               :  size of SGA in GB
-   AIO=true/false                      :  asynchronous I/O
-   DIO=true/false                      :  direct I/O
-   OSSTATS=true/false                  :  true to collect OS statistics for the standalone kit
-   DRIVER=odbc/oci                     :  type of driver
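As a sketch only, the parameters above might be collected into a response file like the following. Every value here is a placeholder for illustration; the authoritative format is in the rsp-sn and rsp-rac templates under dbt2/scripts/oracle/responsefile:

```shell
# Hypothetical single-node response file (rsp) -- placeholder values only.
# The real templates are rsp-sn and rsp-rac in dbt2/scripts/oracle/responsefile.
TESTCASE=io                        # io, io_cpu, or cpu_erp
WORKHOME=/mnt/dbt2-work            # work directory for the kit
RAC=false                          # single node
SERVERS="dbhost1"
CLIENTS="clienthost1"
SERVERORAHOME=/opt/oracle/dbhome   # placeholder Oracle home paths
CLIENTORAHOME=/opt/oracle/client
WAREHOUSE=15
GRAN=false                         # no per-location granularity
ARCHIVE=false
DBBLOCKSIZE=8k
SGA=1                              # size of SGA in GB
AIO=true
DIO=true
OSSTATS=true
DRIVER=oci
# The storage-type and redundancy parameters are spelled as shown in the
# real templates; they are omitted here rather than guessed.
```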
    4. -
-3. Creating the database for the test run:
-
-   Once setup is done, it creates a work directory named dbt2-work in the
-   WORKHOME passed during the first step. To create the database, move to the
-   following location:
-
-   cd <WORK-HOME>/dbt2-work/server/warehouse[number]
-
-   ./oracle-dbt2.sh -d   --> creates the database
    6. -
-4. Test run execution:
-
-   ./oracle-dbt2.sh [-r]  --> run the kit with default values, i.e. 100 users
-                              and a 300 second duration
-
-   Other optional parameters are:
-       [-u users]    --> number of users
-       [-n testname] --> test name
-       [-t duration] --> duration in seconds
-       [-h]          --> print this message
-       [-nodb]       --> do not start the Oracle db/ASM instance
-       [-nolsnr]     --> do not start the Oracle listener
-       [-debug]      --> turn debugging on
-       [-nocfg]      --> do not change init.ora based on the number of users
-       [-osstat]     --> collect OS statistics during the run
-
-Example:
-
-   ./oracle-dbt2.sh -d -r -n mytest -u 300 -t 3600 -osstat
-
-   will create the database and run the kit with test name "mytest" for a
-   duration of 3600 seconds with 300 users.

-Logs
-
-<WORK-HOME>/dbt2-work/server/result/testname/analyze    : files related to errors during the run
-<WORK-HOME>/dbt2-work/server/result/testname/metrics    : result metrics after the run for each transaction
-<WORK-HOME>/dbt2-work/server/result/log/datagen-loc     : all data generated during transactions
-<WORK-HOME>/dbt2-work/server/result/log/db-logs         : all database log files
-<WORK-HOME>/dbt2-work/server/result/ora-alert/run-db    : all trace files generated during the run
-<WORK-HOME>/dbt2-work/server/result/ora-alert/create-db : all trace files generated during creation of the database

-Errors/Debugging
-
-DB creation errors: look in:
-
-<WORK-HOME>/dbt2-work/server/result/log/db-logs
-<WORK-HOME>/dbt2-work/server/result/ora-alert/create-db
-
-Run time errors: look in:
-
-<WORK-HOME>/dbt2-work/server/result/ora-alert/run-db

-PostgreSQL
-
-Setup
+
+PostgreSQL
+
+Setup

 The DBT-2 test kit has been ported to work with PostgreSQL starting with
 version 7.3. It has been updated to work with later versions, and backwards
 compatibility may vary. Source code for PostgreSQL can be obtained from their

@@ -1796,27 +1640,27 @@ Setup

 contrib/pg_autovacuum directory.

 The following subsections have additional PostgreSQL version specific notes.

    -
    v7.3
    +
    v7.3

PostgreSQL 7.3 needs to be built with a change in pg_config.h.in, where INDEX_MAX_KEYS must be set to 64. Be sure to make this change before running the configure script for PostgreSQL.

    -
    v7.4
    +
    v7.4

PostgreSQL 7.4 needs to be built with a change in src/include/pg_config_manual.h, where INDEX_MAX_KEYS must be set to 64.

Edit the parameter in postgresql.conf that reads tcpip_socket = false: uncomment it, set it to true, and restart the daemon.
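The postgresql.conf change for 7.4 described above amounts to this one-line fragment (illustrative):

```ini
# postgresql.conf (PostgreSQL 7.4) -- uncomment and enable, then restart
tcpip_socket = true
```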

    -
    v8.0
    +
    v8.0

For PostgreSQL 8.0 and later, run configure with --enable-thread-safety to avoid SIGPIPE handling in the multi-threaded DBT-2 client program. This is a significant performance benefit.

    -

    A really quick howto

    +

    A really quick howto

    Edit examples/dbt2_profile and follow the notes for the DBT2PGDATA directory. DBT2PGDATA is where the database directory will be created.
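A minimal dbt2_profile might look like the following sketch, using only the variables named in this document (the paths are placeholders):

```shell
# Sketch of a minimal examples/dbt2_profile; paths are placeholders.
DBT2DBNAME=dbt2;        export DBT2DBNAME
DBT2PGDATA=/tmp/pgdata; export DBT2PGDATA   # where the database directory is created
```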

    Create a 1 warehouse database by running:

@@ -1825,7 +1669,7 @@ A really quick howto

 dbt2 run -d 300 pgsql /tmp/result

    -

    Building the Database

    +

    Building the Database

    The dbt2-pgsql-build-db script is designed to handle the following steps:

    1. create the database

2.

@@ -1853,7 +1697,7 @@ Building the Database

 dbt2 build pgsql

    -

    Environment Configuration

    +

    Environment Configuration

The DBT-2 scripts require environment variables to be set (e.g. in examples/dbt2_profile) in order to work properly. For example:

@@ -1861,26 +1705,30 @@ Environment Configuration

 DBT2DBNAME=dbt2; export DBT2DBNAME
 DBT2PGDATA=/tmp/pgdata; export DBT2PGDATA

 An optional environment variable can be set to specify a different location for
-the transaction logs (i.e. pg_xlog):
+the transaction logs (i.e. pg_xlog or pg_wal) when using the dbt2-pgsql-init-db
+script:

 DBT2XLOGDIR=/tmp/pgxlogdbt2; export DBT2XLOGDIR

-The environment variables must be defined in ~/.ssh/environment file on each
-system for multi-tier environment for ssh. Make sure PATH is set to cover
-the location where the DBT-2 executables and PostgreSQL binaries are installed.
-For example:
+The environment variables may need to be defined in the ~/.ssh/environment
+file on each system in a multi-tier environment for ssh. The ssh daemon may
+need to be configured to enable the use of user environment files. Make sure
+PATH is set to include the location where the DBT-2 executables and PostgreSQL
+binaries are installed, if not in the default PATH. For example:

    DBT2PORT=5432
     DBT2DBNAME=dbt2
     DBT2PGDATA=/tmp/pgdata
     PATH=/usr/local/bin:/usr/bin:/bin:/opt/bin

    -

-Tablespace Notes
-
-The scripts assumes a specific tablespace layout.
-
-The ${DBT2TSDIR} variable in dbt2_profile defines the directory where all
+Tablespace Notes
+
+The scripts assume a specific tablespace layout, to keep the scripts simple.
+
+The ${DBT2TSDIR} environment variable defines the directory where all
 tablespace devices will be mounted. Directories or symlinks can be substituted
 for what is assumed to be a mount point from this point forward.

-dbt2-pgsql-create-tables is where the tablespaces are created.
-
-The mount points that need to be created, and must be owned by the user
-running the scripts, at:
+dbt2-pgsql-create-tables and dbt2-pgsql-create-indexes are where the
+tablespaces are created.
+
+The expected mount points or symlinks, which must also be writeable by the
+database owner, need to be at:

    ${DBT2TSDIR}/warehouse
     ${DBT2TSDIR}/district
     ${DBT2TSDIR}/customer
@@ -1902,9 +1750,9 @@ Tablespace Notes

 ${DBT2TSDIR}/pk_warehouse
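Preparing the expected directory layout can be sketched as below. Only the tablespace names shown in this document are created; the full list lives in dbt2-pgsql-create-tables and dbt2-pgsql-create-indexes. A temporary DBT2TSDIR is used here for illustration, where a real setup would point at mount points or symlinks:

```shell
#!/bin/sh
# Create the tablespace directories the scripts expect under ${DBT2TSDIR}.
# Only names shown in the documentation are used; a temp dir stands in for
# the real mount points.
DBT2TSDIR=$(mktemp -d)
for ts in warehouse district customer pk_warehouse; do
    mkdir -p "${DBT2TSDIR}/${ts}"
done
```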

    -

    AppImage Notes

    +

    AppImage Notes

    -
    Limitations
    +
    Limitations

    Using the AppImage has some limitations with PostgreSQL:

 1. The AppImage cannot alone be used to build a database with C stored

@@ -1917,9 +1765,9 @@ Limitations
    -

    SAP DB

    -
    -

    Setup

    +

    SAP DB

    +
    +

    Setup

After installing SAP DB, create the user id sapdb, assigning the user to the group sapdb.

    Create the database stats collection tool, x_cons, by executing the following:

@@ -1929,7 +1777,7 @@ Setup

 http://www.unixodbc.org/unixODBC-2.2.3.tar.gz

    -

    ODBC

    +

    ODBC

    An .odbc.ini file must reside in the home directory of the user attempting to run the program. The format of the file is:

    [alias]
@@ -1948,7 +1796,7 @@ ODBC

    -

    Building the Database

    +

    Building the Database

 The script dbt2/scripts/sapdb/create_db_sample.sh must be edited to configure
 the SAP DB devspaces. The param_adddevspaces commands need to be changed to
 match your system configuration. Similarly, the backup_media_put commands also

@@ -1983,7 +1831,7 @@ Building the Database

 param_adddevspace 5 DATA /dev/raw/raw6 R 204800

    -

    Results

    +

    Results

 Each output file starting with m_*.out refers to a monitor table in the SAP DB
 database. For example:

@@ -2021,7 +1869,7 @@ Results
    -

    SQLite

    +

    SQLite

    To create a database:

    dbt2-sqlite-build-db -g -w 1 -d /tmp/dbt2-w1 -f /tmp/dbt2data

    To run a test:

@@ -2030,7 +1878,7 @@ SQLite

    -

    YugabyteDB

    +

    YugabyteDB

    A really quick howto.

 The YugabyteDB scripts make use of PostgreSQL's psql client program and the
 libpq C client library interface, thus any environment variables that libpq
 would

@@ -2054,9 +1902,9 @@ YugabyteDB

    -

    Operating System Notes

    +

    Operating System Notes

    -

    Linux

    +

    Linux

 If you want readprofile data, you need to give the user sudo access to run
 readprofile. Root privileges are required to clear the counter data.
 Additionally, the Linux kernel needs to be booted with `profile=2` in the

@@ -2089,7 +1937,7 @@ Linux

 kernel.shmmax = 41943040
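The shared memory setting shown above can be expressed as a sysctl fragment. The file name below is hypothetical; only the kernel.shmmax value comes from this document, and it would be applied with `sysctl -p` or at boot:

```ini
# /etc/sysctl.d/dbt2.conf (hypothetical file name)
kernel.shmmax = 41943040
```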

    -

    Solaris

    +

    Solaris

    These notes are from testing with OpenSolaris 10.

    Install the SFWgplot package for gnuplot from the Solaris 20 companion DVD.

 Add the following to your path in order to use gcc (and other development

@@ -2098,10 +1946,10 @@ Solaris

    -

    Developer Guide

    +

    Developer Guide

This document details anything related to the development of this test kit.

    -

    Building the Kit

    +

    Building the Kit

CMake is the build system used for this kit. A Makefile.cmake is provided to automate some of the tasks.

    Building for debugging:

    @@ -2116,23 +1964,23 @@

    Building the container that can create an AppImage.

    -

    Testing the Kit

    +

    Testing the Kit

    The CMake testing infrastructure is used with shUnit2 to provide some testing.

    -

    datagen

    +

    datagen

Tests are provided to verify that partitioning does not generate different data than if the data was not partitioned. Some data is generated with the time stamp of when it was created, so those columns are ignored when comparing data since the time stamps are not likely to be the same.

    -

    post-process

    +

    post-process

    A test is provided to make sure that the post-process output continues to work with multiple mix files as well as with various statistical analysis packages.

    -

    AppImage

    +

    AppImage

    AppImages are only for Linux based systems:

    https://appimage.org/

@@ -2151,7 +1999,7 @@ AppImage

    See the README.rst in the tools/ directory for an example of creating an AppImage with a Podman container.

    -

    Building the AppImage

    +

    Building the AppImage

The AppImage builds a custom, minimally configured PostgreSQL to reduce library dependency requirements. Part of the reason is to make it easier to include libraries with compatible licences. At least PostgreSQL version 11