To run the tests, you can use any of the following commands:

```shell
# runs all tests without printing any log generated by Beagle
yarn test

# runs all tests printing all logs generated by Beagle
yarn test:verbose

# runs all unit tests. Logs won't be printed
yarn test:unit

# runs all integration tests. Logs won't be printed
yarn test:integration
```
You can also replace yarn with npm run when using npm.
The tests for the core of Beagle Web are located under the directory __tests__ and can be split into two categories: unit and integration.
Most of the tests are unit tests. They're located under the directory __tests__/unit and their objective is to isolate and test every scenario of a feature while mocking the rest of the application. For instance, when testing the navigation, we don't need to know if the requested view will be correctly rendered; we just need to know that it has been requested. Another test is responsible for checking that the rendering works.
This is true for every unit test, and the number of mocked services is decided on a test-by-test basis. Usually, the more isolated the test, the better, because it becomes easier to detect exactly where a bug happens. At the same time, if we isolate tests too much, we might end up with test suites that don't test much. We need a balance between these two extremes.
The unit tests follow the organization of the actual source code. As said before, they're feature oriented, i.e. there must be a test for each file/folder/function of the source code. Ideally, the structure of the directory src is replicated in the directory __tests__/unit, and we're refactoring our tests so that this becomes true. It is not yet the case because we went through a refactoring recently and haven't had time to move the test files as well. All test files in the old structure are located under the directory __tests__/unit/old-structure.
In some cases, we'll want to test a big feature both in small parts and in an integrated run; that's not a problem. This is the case of the rendering process: there are tests for each of its parts, but there are also tests that run everything together, while still mocking most of the application (everything that is not part of the rendering process). Although we might call these tests integrated, they're still unit tests, since they test an isolated feature.
In summary, unit tests should:
- be feature oriented;
- be as isolated as possible, by mocking everything else;
so we know exactly where to look when a test fails and we can easily fix the problem.
In this project, we must always have at least 75% of the code covered by unit tests.
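Assuming Jest is the test runner behind the yarn scripts above, the 75% floor can be enforced automatically rather than checked by hand. The sketch below is a generic Jest configuration fragment, not this repository's actual config file, which may differ in name and contents:

```javascript
// jest.config.js (sketch): make the test run fail whenever
// coverage drops below the 75% floor required by the project.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 75,
      branches: 75,
      functions: 75,
      lines: 75,
    },
  },
}
```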
Our integration tests are located under __tests__/integration and they're far less numerous than the unit tests.
Integration tests are very different from unit tests. In these tests we mock nothing; the intention is to run the Beagle Web Core as a whole. Integration tests are also not feature oriented: instead, we create a hypothetical application that uses Beagle and we test this application. We try to use as many features as we can in this application, so we cover most of our source code.
It is interesting to check many scenarios of the application with these tests, but there's no need to test every possible scenario of a Beagle feature; that responsibility belongs to the unit tests, not the integration tests.
The main objective of the integration tests is to guarantee that an application using a previous version of Beagle will keep working the same way after changes are made to the code. For this reason, there are many snapshot tests among the integration tests: we want to know if a view is rendered the same way it was rendered before.
Another common type of test is "click a button and check what happens to the view". This is very important to verify that the behavior of an application stays the same after changes to Beagle's source code.
The Beagle Web Core is basically a tree processor, so everything we need to do in the integration tests is give it an initial tree and check that it processes it correctly.
In summary, integration tests should:
- be application oriented;
- be broad: they should test big parts of the code at once;
- never use mocks, with the only exception of logging;

so we know when something that worked before stops working, even if the tests don't tell us exactly where the problem is.
Beagle Keep is the application we chose to build our integration tests upon. It is inspired by Google Keep.
Beagle Keep is an application to store simple notes. Each note has a title, a text (content) and a set of labels. The application allows the user to view, create, edit and remove both notes and labels.
The application Beagle Keep is composed of three views: home, labels and details.
The main information in the home page is the list of notes in the center. By default, all notes are shown, but they can be filtered by the labels in the menu on the left. To remove any filters, click the menu item "Notes".
To go to the view where labels can be created, edited or removed, click the item "Edit Labels" in the menu.
To create a new note, click the floating button on the bottom right corner.
To edit or view the complete version of a note, click the note itself.
To remove a note, click the note's trash bin button.
This view shows every label in the database. Each label can have its name or color changed, and it can also be removed entirely.
To create a label, click the button on the right; to go back to the home page, click the button on the left.
On this page, the content of the note is fully visible instead of truncated. It is also presented as a form, making it possible to edit both the title and the text (content).
The button on the right submits the form and saves the changes; the button on the left returns to the home page.
The creation of a note also uses this view, but the form starts empty.
The application Beagle Keep is located under the directory __tests__/integration/beagle-keep, which is organized in the following way:
- assertions: the tests themselves.
  - actions: test the behavior of the application upon a user action, e.g. "should remove a note when the remove button is clicked".
  - render: test if each view is correctly rendered. Most of these tests are snapshots.
  - backend: these don't test Beagle; they exist so we know a bug comes from Beagle and not from the backend of the application we created.
- backend: simulates the backend of our application.
  - database: simulates the database, containing both notes and labels.
  - routes: simulates the server; it creates the endpoints for each of the operations and views we need to call.
  - views: the views to populate the screen: home, labels and details.
- frontend: simulates the frontend of our application, the part that actually uses Beagle Web.
  - components: the implementation of each component. Most are empty functions, since we don't need to actually render anything. One exception is the repeater, which must create its own children based on its template and data source.
  - config: the configuration for Beagle.
  - operations: the custom operations for this application.
  - service: the initialization of the Beagle service.
The Beagle Keep application is a great way to start learning how Beagle works under the hood, and we want to keep it that way. So, for every test created, there must be as many comments as necessary to explain exactly what's happening and why it's the expected behavior.
Just by reading the test suites, a developer should be able to understand how Beagle fetches and processes a tree and how it deals with actions.
Known limitations and pending improvements:
- In the view "details", there's no way to add/remove labels;
- We are not testing the validation of the form in the view "details";
- We could test the NavigationControllers by adding a view to login;
- Some tests are marked as todo, they should be implemented;
- We should separate the reports for unit tests and integration tests;
- We don't have enough code coverage yet. We should increase it to at least 75%.