#|----------------------------------------------------------------------------|
| Copyright 1990 by the Massachusetts Institute of Technology, Cambridge MA. |
| |
| Permission to use, copy, modify, and distribute this software and its |
| documentation for any purpose and without fee is hereby granted, provided |
| that this copyright and permission notice appear in all copies and |
| supporting documentation, and that the name of M.I.T. not be used in |
| advertising or publicity pertaining to distribution of the software |
| without specific, written prior permission. M.I.T. makes no |
| representations about the suitability of this software for any purpose. |
| It is provided "as is" without express or implied warranty. |
| |
| M.I.T. DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING |
| ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL |
| M.I.T. BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR |
| ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, |
| WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, |
| ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS |
| SOFTWARE. |
|----------------------------------------------------------------------------|#
(This is the December 19, 1990 version of brief documentation for the
RT regression tester. A more complete discussion can be found in
the article in Lisp Pointers.)
The functions, macros, and variables that make up the RT regression tester are
in a package called "RT". The ten exported symbols are documented below. If
you want to refer to these symbols without a package prefix, you have to `use'
the package.
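For example, a minimal setup at the listener (assuming the RT system has already
been loaded):
(use-package "RT") => t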
The basic unit of concern of RT is the test. Each test has an identifying name
and a body that specifies the action of the test. Functions are provided for
defining, redefining, removing, and performing individual tests and the test
suite as a whole. In addition, information is maintained about which tests have
succeeded and which have failed.
<> deftest NAME FORM &rest VALUES
Individual tests are defined using the macro DEFTEST. The identifying NAME is
typically a number or symbol, but can be any Lisp form. If the test suite
already contains a test with the same (EQUAL) NAME, then this test is redefined
and a warning message is printed. (This warning is important to alert the user
when a test suite definition file contains two tests with the same name.) When
the test is a new one, it is added to the end of the suite. In either case,
NAME is returned as the value of DEFTEST and stored in the variable *TEST*.
(deftest t-1 (floor 15/7) 2 1/7) => t-1
(deftest (t 2) (list 1) (1)) => (t 2)
(deftest bad (1+ 1) 1) => bad
(deftest good (1+ 1) 2) => good
The FORM can be any kind of Lisp form. The zero or more VALUES can be any kind
of Lisp objects. The test is performed by evaluating FORM and comparing the
results with the VALUES. The test succeeds if and only if FORM produces the
correct number of results and each one is EQUAL to the corresponding VALUE.
<> *test* NAME-OF-CURRENT-TEST
The variable *TEST* contains the name of the test most recently defined or
performed. It is set by DEFTEST and DO-TEST.
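For example, immediately after the DEFTEST of GOOD shown above (a snapshot only;
the value changes each time a test is defined or performed):
*test* => good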
<> do-test &optional (NAME *TEST*)
The function DO-TEST performs the test identified by NAME, which defaults to
*TEST*. Before running the test, DO-TEST stores NAME in the variable *TEST*.
If the test succeeds, DO-TEST returns NAME as its value. If the test fails,
DO-TEST returns NIL, after printing an error report on *STANDARD-OUTPUT*. The
following examples show the results of performing two of the tests defined
above.
(do-test '(t 2)) => (t 2)
(do-test 'bad) => nil ; after printing:
Test BAD failed
Form: (1+ 1)
Expected value: 1
Actual value: 2.
<> *do-tests-when-defined* default value NIL
If the value of this variable is non-null, each test is performed at the moment
that it is defined. This is helpful when interactively constructing a suite of
tests. However, when loading a test suite for later use, performing tests as
they are defined is unlikely to be helpful.
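For example, one might enable this while building tests interactively (a sketch;
the output printed when a test runs at definition time is not shown here):
(setq *do-tests-when-defined* t) => t
(deftest good (1+ 1) 2) ; GOOD is defined and performed immediately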
<> get-test &optional (NAME *TEST*)
This function returns the NAME, FORM, and VALUES of the specified test.
(get-test '(t 2)) => ((t 2) (list 1) (1))
<> rem-test &optional (NAME *TEST*)
If the indicated test is in the test suite, this function removes it and returns
NAME. Otherwise, NIL is returned.
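For example, using the tests defined above:
(rem-test 'bad) => bad
(rem-test 'bad) => nil ; already removed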
<> rem-all-tests
This function reinitializes RT by removing every test from the test suite and
returns NIL. Generally, it is advisable for the whole test suite to apply to
a single system. When switching from testing one system to testing another, it
is wise to remove all the old tests before beginning to define new ones.
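For example:
(rem-all-tests) => nil ; the test suite is now empty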
<> do-tests &optional (OUT *STANDARD-OUTPUT*)
This function uses DO-TEST to run each of the tests in the test suite and prints
a report of the results on OUT, which can either be an output stream or the name
of a file. If OUT is omitted, it defaults to *STANDARD-OUTPUT*. DO-TESTS
returns T if every test succeeded and NIL if any test failed.
As illustrated below, the first line of the report produced by DO-TESTS shows how
many tests need to be performed. The last line shows how many tests failed and
lists their names. While the tests are being performed, DO-TESTS prints the
names of the successful tests and the error reports from the unsuccessful tests.
(do-tests "report.txt") => nil
; the file "report.txt" contains:
Doing 4 pending tests of 4 tests total.
T-1 (T 2)
Test BAD failed
Form: (1+ 1)
Expected value: 1
Actual value: 2.
GOOD
1 out of 4 total tests failed: BAD.
It is best if the individual tests in the suite are totally independent of each
other. However, should the need arise for some interdependence, you can rely on
the fact that DO-TESTS will run tests in the order they were originally defined.
<> pending-tests
When a test is defined or redefined, it is marked as pending. In addition,
DO-TEST marks the test to be run as pending before running it and DO-TESTS marks
every test as pending before running any of them. The only time a test is
marked as not pending is when it completes successfully. The function
PENDING-TESTS returns a list of the names of the currently pending tests.
(pending-tests) => (bad)
<> continue-testing
This function is identical to DO-TESTS except that it only runs the tests that
are pending and always writes its output on *STANDARD-OUTPUT*.
(continue-testing) => nil ; after printing:
Doing 1 pending test out of 4 total tests.
Test BAD failed
Form: (1+ 1)
Expected value: 1
Actual value: 2.
1 out of 4 total tests failed: BAD.
CONTINUE-TESTING has a special meaning if called at a breakpoint generated while
a test is being performed. The failure of a test to return the correct value
does not trigger an error break. However, there are many kinds of things that
can go wrong while a test is being performed (e.g., dividing by zero) that will
cause breaks.
If CONTINUE-TESTING is evaluated in a break generated during testing, it aborts
the current test (which remains pending) and forces the processing of tests to
continue. Note that in such a breakpoint, *TEST* is bound to the name of the
test being performed and (GET-TEST) can be used to look at the test.
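For example (a sketch; the exact break prompt and interaction depend on the Lisp
implementation, and the test name OOPS is made up for illustration):
(deftest oops (/ 1 0) 0) => oops
(do-tests) ; breaks with a division-by-zero error while performing OOPS
; at the break prompt:
*test* => oops
(get-test) => (oops (/ 1 0) 0)
(continue-testing) ; abandons OOPS, which stays pending, and resumes testing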
When building a system, it is advisable to start constructing a test suite for
it as soon as possible. Since individual tests are rather weak, a comprehensive
test suite requires large numbers of tests. However, these can be accumulated
over time. In particular, whenever a bug is found by some means other than
testing, it is wise to add a test that would have found the bug and therefore
will ensure that the bug will not reappear.
Every time the system is changed, the entire test suite should be run to make
sure that no unintended changes have occurred. Typically, some tests will fail.
Sometimes, this merely means that tests have to be changed to reflect changes in
the system's specification. Other times, it indicates bugs that have to be
tracked down and fixed. During this phase, CONTINUE-TESTING is useful for
focusing on the tests that are failing. However, for safety's sake, it is always
wise to reinitialize RT, redefine the entire test suite, and run DO-TESTS one
more time after you think all of the tests are working.
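Such a final check might look like the following (a sketch; "my-tests.lisp" is a
hypothetical file containing the DEFTEST forms for the suite):
(rem-all-tests) => nil
(load "my-tests.lisp") ; redefine the entire suite
(do-tests) => t ; T only if every test now succeeds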