A JUnit 5 extension to make test data setup easier.
You can write test data definitions in your test code with annotations, and the test data is cleaned up and inserted by 🦝 !
- cleanup-insert the specified test data before test execution
- cleanup the specified test data after test execution
Adding DbRaccoon to your project.
Maven
<dependency>
<groupId>com.github.yuizho</groupId>
<artifactId>db-raccoon</artifactId>
<version>0.3.0</version>
<scope>test</scope>
</dependency>
Gradle
testImplementation 'com.github.yuizho:db-raccoon:0.3.0'
Now you can define the test data in your test code like this.
class SampleTest {
companion object {
@JvmField
@RegisterExtension
val dbRaccoonExtension = DbRaccoonExtension(
dataSource = JdbcDataSource().also {
it.setUrl("jdbc:h2:file:./target/db-raccoon")
it.user = "sa"
}
)
}
@Test
@CsvDataSet(testData = [
CsvTable(name = "player", rows = [
"id, first_name, last_name, birthdate , created",
"11, Yu , Darvish , 1986-08-16, 2020-03-13 00:45:00",
"22, Clayton , Kershaw , 1988-03-19, 2020-03-13 00:45:00"
], id = ["id"])
])
fun `test`() {
// some test code...
}
}
You can register DbRaccoonExtension programmatically by annotating a field in your JUnit 5 test classes.
For more information about the JUnit 5 Extension Model, please refer to this document: https://junit.org/junit5/docs/current/user-guide/#extensions
Parameter | Required / Optional | Default value | Description |
---|---|---|---|
dataSource | Required | - | The JDBC data source object used to connect to the database into which the test data should be registered. |
cleanupPhase | Optional | CleanupPhase.BEFORE_AND_AFTER_TEST | The execution phase of the cleanup task. For more details about the options, see CleanupPhase options. |
cleanupStrategy | Optional | CleanupStrategy.USED_ROWS | The target of the cleanup task. For more details about the options, see CleanupStrategy options. |
setUpQueries | Optional | - | The queries executed before the clean-insert tasks on beforeTestExecution. For example, when you want to temporarily disable a foreign key constraint, set the query here. |
tearDownQueries | Optional | - | The queries executed after the clean tasks on afterTestExecution. For example, when you want to re-enable the foreign key constraint, set the query here. |
For Java, we recommend using the builder API when you create the object.
@RegisterExtension
DbRaccoonExtension dbRaccoonExtension;
{
JdbcDataSource dataSource = new JdbcDataSource();
dataSource.setUrl("jdbc:h2:file:./target/db-raccoon");
dataSource.setUser("sa");
dbRaccoonExtension = new DbRaccoonExtension.Builder(dataSource)
.cleanupPhase(CleanupPhase.BEFORE_AND_AFTER_TEST) // clean up the test data on before and after each test
.cleanupStrategy(CleanupStrategy.USED_ROWS) // clean up the rows which are defined in @DataSet, @CsvDataSet
.setUpQueries(Arrays.asList("SET REFERENTIAL_INTEGRITY FALSE")) // disable a foreign key constraint in H2 before each test
.tearDownQueries(Arrays.asList("SET REFERENTIAL_INTEGRITY TRUE")) // enable a foreign key constraint in H2 after each test
.build();
}
companion object {
@JvmField
@RegisterExtension
val dbRaccoonExtension = DbRaccoonExtension(
dataSource = MysqlDataSource().also {
it.setUrl("jdbc:mysql://127.0.0.1:13306/testdb")
it.user = "test"
it.password = "password"
},
cleanupPhase = CleanupPhase.BEFORE_TEST, // clean up the test data on before each test
cleanupStrategy = CleanupStrategy.USED_ROWS, // clean up the rows which are defined in @DataSet, @CsvDataSet
setUpQueries = listOf("SET FOREIGN_KEY_CHECKS = 0"), // disable a foreign key constraint in MySQL before each test
tearDownQueries = listOf("SET FOREIGN_KEY_CHECKS = 1") // enable a foreign key constraint in MySQL after each test
)
}
You can configure the execution phase of the test data cleanup task by CleanupPhase enum.
The option name | Description |
---|---|
BEFORE_TEST | The cleanup task executes only before each test case. |
AFTER_TEST | The cleanup task executes only after each test case. |
BEFORE_AND_AFTER_TEST | The cleanup task executes both before and after each test case. |
You can configure the cleanup target of the test data cleanup task by CleanupStrategy enum.
The option name | Description |
---|---|
USED_ROWS | Deletes the rows that are defined in @DataSet, @CsvDataSet. |
USED_TABLES | Deletes all rows in the tables that are defined in @DataSet, @CsvDataSet. |
You can apply the following annotations to a test class or method that needs test data before execution.
Each test data set is inserted in the defined order, and the inserted test data is cleaned up at the timing specified by the CleanupPhase options.
When the annotations are applied to both the test class and the test method, the annotation applied to the method is used.
@DataSet allows you to define the test data with nested annotations (@Table, @Row, @Col).
@Test
@DataSet(testData = {
@Table(name = "parent_table", rows = {
@Row(columns = {
@Col(name = "id", value = "2", isId = true),
@Col(name = "name", value = "parent_record")
})
}),
@Table(name = "child_table", rows = {
@Row(columns = {
@Col(name = "id", value = "2", isId = true),
@Col(name = "name", value = "child_record"),
@Col(name = "parent_table_id", value = "2"),
})
})
})
public void test() {
// ...
}
@Test
@DataSet([
Table("parent_table", [
Row([
Col("id", "2", true),
Col("name", "parent_record")
])
]),
Table("child_table", [
Row([
Col("id", "2", true),
Col("name", "child_record"),
Col("parent_table_id", "2")
])
])
])
fun `test`() {
// ...
}
At least one ID column (a @Col whose isId parameter is true) is required in each @Row.
The ID columns are used when the cleanup task is executed.
@CsvDataSet allows you to define the test data as comma-separated values.
@Test
@CsvDataSet(testData = {
@CsvTable(name = "parent_table", rows = {
"id, name",
"1 , parent_record"
}, id = "id"),
@CsvTable(name = "child_table", rows = {
"id, name , parent_id",
"1 , child_record, 1"
}, id = "id")
})
public void test() {
// ...
}
@Test
@CsvDataSet(testData = [
CsvTable(name = "parent_table", rows = [
"id, name",
"1 , parent_record"
], id = ["id"]),
CsvTable(name = "child_table", rows = [
"id, name , parent_id",
"1 , child_record, 1"
], id = ["id"])
])
fun `test`() {
// ...
}
At least one ID column is required. The ID columns are used when the delete task is executed.
When you need to define a null value, you can use the null value string like this.
@CsvDataSet(testData = [
CsvTable(name = "table", rows = [
"id, name",
"1 , [null]" // name is registered as null
], id = ["id"])
])
If you want a different null value string, you can define your own with the nullValue parameter.
@CsvDataSet(testData = [
CsvTable(name = "table", rows = [
"id, name",
"1 , <NULL>" // name is registered as null
], id = ["id"])
], nullValue = "<NULL>")
The first row of the comma-separated values is a header row that sets the column names.
You can use a single quote (') as the quote character and a backslash (\) as the escape character. Refer to the char_column value in the following examples.
@CsvDataSet(testData = [
CsvTable(name = "sample_table", rows = [
"id, char_column , timestamp_column",
"1 , 'foo, \'bar\'', '2014-01-10 12:33:49.123'"
], id = ["id"])
])
@CsvTable.rows Example | Parsing Result |
---|---|
"col1, col2", "foo, bar" | { "col1": "foo", "col2": "bar" } |
"col1, col2", "'foo, bar', baz" | { "col1": "foo, bar", "col2": "baz" } |
"col1, col2", "\'foo\', bar" | { "col1": "'foo'", "col2": "bar" } |
"col1, col2", "'foo, \'bar\'', baz" | { "col1": "foo, 'bar'", "col2": "baz" } |
"col1, col2", "foo, ''" | { "col1": "foo", "col2": "" } |
"col1, col2", "foo, " | { "col1": "foo", "col2": "" } |
"col1, col2", "foo, [null]" | { "col1": "foo", "col2": null } |
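These quoting rules can be illustrated with a small stand-alone sketch. Note that this is not DbRaccoon's actual parser: the class CsvRowSketch and its methods are hypothetical names written only for this illustration, assuming that fields are trimmed and that the null value string defaults to [null].

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration only: a minimal re-implementation of the quoting
// rules in the table above, NOT DbRaccoon's actual parser.
public class CsvRowSketch {

    // Splits one @CsvTable row into trimmed field values; nullValue becomes null.
    public static List<String> parse(String row, String nullValue) {
        List<String> fields = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean quoted = false;
        for (int i = 0; i < row.length(); i++) {
            char c = row.charAt(i);
            if (c == '\\' && i + 1 < row.length()) {
                current.append(row.charAt(++i)); // backslash escapes the next character
            } else if (c == '\'') {
                quoted = !quoted;                // single quote toggles quoting, itself dropped
            } else if (c == ',' && !quoted) {
                fields.add(finish(current.toString(), nullValue));
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        fields.add(finish(current.toString(), nullValue));
        return fields;
    }

    private static String finish(String raw, String nullValue) {
        String value = raw.trim();
        return value.equals(nullValue) ? null : value;
    }

    public static void main(String[] args) {
        System.out.println(parse("'foo, \\'bar\\'', baz", "[null]")); // [foo, 'bar', baz]
        System.out.println(parse("foo, [null]", "[null]"));           // [foo, null]
    }
}
```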
String instances defined as column values are converted to the types corresponding to ColType.
By default, ColType is obtained by table scanning, but you can also specify it explicitly with @TypeHint.
@DataSet([
Table("sample_table", [
Row([
Col("id", "1", true),
Col("binary_column", "YWJjZGVmZzE=") // this column is inserted as BINARY type
])
], types = [TypeHint("binary_column", ColType.BINARY)]
)
])
@CsvDataSet([
CsvTable("sample_table", [
"id, binary_column",
"1 , YWJjZGVmZzE=" // binary_column is inserted as binary type
], ["id"],
types = [TypeHint("binary_column", ColType.BINARY)])
])
The conversion is executed before building the SQL.
When you specify binary data, convert the value into a Base64 string.
Col("binary_column", "YWJjZGVmZzE=")
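For illustration, a Base64 string like the one above can be produced with the JDK's own java.util.Base64. The helper class name Base64ColValue is hypothetical, not part of DbRaccoon:

```java
import java.util.Base64;

// Hypothetical helper name for this illustration; not part of DbRaccoon
public class Base64ColValue {
    // Encode raw bytes into the Base64 string form used as a BINARY/BLOB column value
    public static String toColValue(byte[] raw) {
        return Base64.getEncoder().encodeToString(raw);
    }

    public static void main(String[] args) {
        // "abcdefg1".getBytes() encodes to the value used in the example above
        System.out.println(toColValue("abcdefg1".getBytes()));  // YWJjZGVmZzE=
    }
}
```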
The column value (the value parameter in the column) is converted to the following types corresponding to ColType.
ColType | Column Value Example | Result |
---|---|---|
CHAR, VARCHAR, LONGVARCHAR | "abcdef" | "abcdef" |
VARBINARY, LONGVARBINARY | "abcdef" | "abcdef".getBytes() |
BINARY | "YWJjZGUxMjM0NQ==" | Base64.getDecoder().decode("YWJjZGUxMjM0NQ==") |
BLOB | "YWJjZGUxMjM0NQ==" | new SerialBlob(Base64.getDecoder().decode("YWJjZGUxMjM0NQ==")) |
CLOB | "abcdef" | new SerialClob(value.toCharArray()) |
BOOLEAN, BIT | "true" | true |
DATE | "2019-01-11" | java.sql.Date.valueOf("2019-01-11") |
TIME | "12:50:59" | java.sql.Time.valueOf("12:50:59") |
TIMESTAMP | "2019-12-31 01:02:03.123456789" | java.sql.Timestamp.valueOf("2019-12-31 01:02:03.123456789") |
TIMESTAMP_WITH_TIMEZONE | "2019-10-09T03:53:01+09:00" | java.time.OffsetDateTime.parse("2019-10-09T03:53:01+09:00") |
TINYINT | "32767" | (short) 32767 |
INTEGER, SMALLINT | "2147483647" | 2147483647 |
BIGINT | "9223372036854775807" | 9223372036854775807L |
REAL | "1.0" | 1.0f |
FLOAT, DOUBLE | "1.0" | 1.0d |
DECIMAL | "1.0" | new BigDecimal("1.0") |
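Most of the conversions in the table above are plain JDK calls, so you can check the expected results yourself. A minimal sketch (the class name ColTypeConversions is hypothetical, for illustration only):

```java
import java.math.BigDecimal;
import java.sql.Date;
import java.sql.Timestamp;
import java.util.Base64;

public class ColTypeConversions {
    public static void main(String[] args) {
        // DATE: the string form "yyyy-mm-dd" becomes a java.sql.Date
        Date date = Date.valueOf("2019-01-11");

        // TIMESTAMP: keeps up to nanosecond precision
        Timestamp ts = Timestamp.valueOf("2019-12-31 01:02:03.123456789");

        // BINARY: the Base64 string is decoded into raw bytes
        byte[] bytes = Base64.getDecoder().decode("YWJjZGUxMjM0NQ==");

        // DECIMAL: the string is passed to the BigDecimal constructor
        BigDecimal dec = new BigDecimal("1.0");

        System.out.println(date);               // 2019-01-11
        System.out.println(ts.getNanos());      // 123456789
        System.out.println(new String(bytes));  // abcde12345
        System.out.println(dec);                // 1.0
    }
}
```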