
RAVEN Testing Standards and Practices


Tests in RAVEN

RAVEN has two types of tests: regression tests and unit tests.

Regression tests in RAVEN are used to verify the consistent performance of full workflows and to ensure that output values do not change as a result of changes in the code. Unit tests, on the other hand, are clusters of tests that exercise very small portions of code, usually individual methods or functions. These assist in breaking down full workflows so that behavior can be understood piece by piece.

Analytic Tests

The best regression and unit tests are analytic. By this, we mean tests that have some outputs that have been calculated by hand and should never change, regardless of code changes. Stronger than simple consistency checks, analytic tests help developers assure that code behavior has not changed in undesired ways.

When a test is analytic, the analyticity is documented in /raven/doc/tests under the master LaTeX file analytic_tests.tex, which includes several other files that help organize the analytic tests. If a new analytic test does not fit within any of the existing files, a new file can be added. The documentation should include a brief description of the analyticity by way of derivation, along with a summary of the analytic results.
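As a sketch of what such an entry might look like (the test, distribution bounds, and variable names below are hypothetical and purely illustrative), a documentation snippet could read:

\subsection{Uniform distribution mean (MonteCarloIndependent)}
% Hypothetical example: the test name, bounds, and variable are illustrative only.
The test samples $x \sim U(a,b)$ with $a=1$ and $b=5$ and checks the sample
mean against the analytic value
\begin{equation}
  \bar{x} = \frac{a+b}{2} = 3.
\end{equation}
The analytic value is compared against the mean of variable \texttt{x} in the
output CSV of the test.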

In a test input, the <analytic> node within the <TestInfo> block describes how the test is analytic. That node should list the section of the analytic tests documentation that the test refers to, along with where to find the analytic results (for example, the output CSV name and variable). Tested outputs that are not analytic are treated as regression values.
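For instance, a filled-in node might read as follows (a sketch only; the documentation section, CSV name, and variable are hypothetical):

<analytic>
  This test is analytic in the mean of the sampled variable "x", as documented
  in the "uniform distribution" section of the analytic tests document. The
  analytic mean is compared against the "mean_x" column of the output CSV
  "MonteCarloIndependent/results.csv".
</analytic>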

Adding Regression Tests

As RAVEN has expanded, the number of tests covering basic functionality has swelled, and the need has arisen for a more organized structure.

TestInfo Block

RAVEN input files that are destined to become test inputs have a node added to them called <TestInfo>. An example follows:

<TestInfo>
  <name>framework/Samplers.MonteCarloIndependent</name>
  <author>username</author>
  <created>2000-04-01</created>
  <classesTested>Sampler.MonteCarlo</classesTested>
  <description>
    A brief description of what workflow this test is testing.
  </description>
  <analytic>
    A brief description of what is analytic about this case.  Optional, don't include on non-analytic tests.
  </analytic>
  <revisions>
    <revision author="username" date="YYYY-MM-DD">description of change</revision>
  </revisions>
  <requirements>R-IS-42</requirements>
</TestInfo>

The following are required nodes:

  • name: The name of the test as it appears in the regression test system. This includes the path from "framework" to the directory of the test itself, then a period, followed by the regression test name (see Test Naming Conventions).
  • author: The name of the author of this test. Either internal INL HPC user names or GitHub user names can be used.
  • created: Date of test creation, with the format YYYY-MM-DD.
  • classesTested: A description of entities within RAVEN that are targeted by this regression test.
  • description: A brief description including notes for future developers to help understand this test.

The following nodes are optional:

  • revisions: Whenever a significant revision is made to the contents of the test, a revision node is added. Not meant to replace git revision history, this node provides an easy way to describe why particular changes were made. This node is not necessary until a significant revision is made from the original test.
  • requirements: Some tests in RAVEN are base-level software requirements tests, documented in our software plan. In general, requirements tests are not added or modified without consulting the control board.
  • analytic: For analytic tests, describes the analyticity of the test. See Analytic Tests.

Test Naming Conventions

There are two distinct names to use for regression tests: one for the input file and one for the regression test classification. For consistency, both should be based on the same name. However, the input file name should be all lower-case letters with underscores connecting words, while the regression name should use CapitalCase with no underscores.

For example, if I develop a new regression test that checks that Monte Carlo samples are independent, I might choose the name "Monte Carlo independent". The regression test file name would then be monte_carlo_independent.xml, and the regression test name MonteCarloIndependent.

The regression test name is used in several places: in the <TestInfo> block within the test (see TestInfo Block), in the tests file where the test is listed for the regression test framework, and as the <WorkingDir> in the <RunInfo> block of the input file.
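As a sketch (only the relevant node is shown, and the directory name continues the hypothetical "Monte Carlo independent" example above), the <RunInfo> block of monte_carlo_independent.xml would then contain:

<RunInfo>
  <!-- the working directory matches the regression test name -->
  <WorkingDir>MonteCarloIndependent</WorkingDir>
  <!-- other RunInfo nodes omitted -->
</RunInfo>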

This naming convention makes it much easier for other developers to find and interpret test results with minimum searching.

Adding a Test to the Regression System

Tests for RAVEN workflows are generally stored in raven/tests/framework/, which is further subdivided into categories like the source code in raven/framework. Try to find a location that best suits the nature of the new test. There, place the test input file and its run directory according to the naming conventions.

In order to have a new test added to the regression system, it must be listed in the tests file in your new test's location. The syntax for these files is currently provided by the MOOSE regression test system, which uses a GetPot keyword entry structure; this is described in part here. The regression system will search for all files named tests under raven/tests and attempt to run any indicated tests. RAVEN has several additional options for checking beyond the base MOOSE system, including UnorderedCsv and UnorderedXml; an example entry is sketched below.

Typically, a test will check against values in a specified CSV file to determine correctness, either as an exact match using csv or as a content match using UnorderedCsv. Be aware that regression tests run on a wide variety of operating systems, and this often introduces subtle differences in behavior. To help combat this, a relative error (rel_err) can be specified to relax the exactness of the match required.
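For illustration only, a minimal entry in a tests file might look like the following; the test name, file paths, and tolerance are hypothetical, and the full set of available parameters should be checked against existing tests files and the MOOSE documentation:

[Tests]
  [./MonteCarloIndependent]
    # hypothetical entry: run the RAVEN input and compare the output CSV
    # without regard to row ordering, within a relative tolerance
    type = 'RavenFramework'
    input = 'monte_carlo_independent.xml'
    UnorderedCsv = 'MonteCarloIndependent/results.csv'
    rel_err = 1e-6
  [../]
[]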

CodeInterface Tests

If a test is being created to test a new code interface, follow the guide here.