(Please see the NIST disclaimer.)
This template repository is provided for those looking to develop command-line utilities using ontologies within the Cyber Domain Ontology ecosystem, particularly CASE and UCO.
This template repository provides a Make-based test workflow used in some other CASE projects. The workflow exercises this project as a command-line interface (CLI) application (under `tests/cli/`) and as a package (under `tests/package/`).
This is only one possible application development style, and templates are available to support other styles. See for instance:

- casework/CASE-Mapping-Template-Python, which demonstrates an approach based on constructing Python `dict`s and checking generated results afterwards for CASE conformance with the CASE Validation Action.
The testing procedures run in this repository are:

- GitHub Actions: Workflows are defined to run testing as it would be run in a local command-line environment, reviewing pushes and pull requests to certain branches.
- Supply chain review: One workflow checks dependencies on a schedule, confirming that pinned dependencies are at their latest versions and that loosely-pinned dependencies do not break checks such as type review.
- Type review: `mypy --strict` reviews the package source tree and the tests directory.
- Code style: `pre-commit` reviews code patches in Continuous Integration testing and in local development. Running `make` will install `pre-commit` in a special virtual environment dedicated to the cloned repository instance.
- Doctests: Tests inlined in module docstrings are run with `pytest`. (A minimal sketch follows this list.)
- CASE validation: Unit tests that generate CASE graph files run `case_validate` before considering the file "successfully" built. (A second sketch follows this list.)
- Editable package installation: The test suite installs the package in "editable" mode into the virtual environment under `tests/venv/`. Activating the virtual environment (e.g., for Bash users, running `source tests/venv/bin/activate` from the repository's top source directory) enables "live code" testing.
- Parallel Make runs: Tests based on `make` have their dependencies specified in a manner that enables `make --jobs` to run tests in parallel.
- Directory-local Make runs: The Makefiles are written to run regardless of the present working directory within the top source directory or the `tests/` directory, assuming `make check` has been run from the top source directory at least once. If a test is failing, `cd`'ing into that test's directory and running `make check` should reproduce the failure quickly and focus development effort.
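As an illustration of the doctest style, a module docstring with an inlined test might look like the following minimal sketch. The module contents and the `add_one` function are hypothetical, and collecting module-level doctests through `pytest` typically requires the `--doctest-modules` option or equivalent configuration:

```python
"""This hypothetical module demonstrates a test inlined in a module docstring.

>>> add_one(1)
2
"""


def add_one(x: int) -> int:
    """Return x plus one."""
    return x + 1
```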
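Similarly, a unit test gating a generated graph on CASE validation could be sketched as below. The graph file name and test name are hypothetical, and the sketch assumes `case_validate` (provided by `case-utils`) is on the `PATH` and exits with a non-zero status for a non-conformant graph:

```python
import subprocess


def test_generated_graph_validates() -> None:
    # check=True raises CalledProcessError if case_validate exits
    # non-zero, i.e. if the generated graph is not conformant.
    subprocess.run(["case_validate", "example_output.json"], check=True)
```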
To use the template, push the "Use this template" button on GitHub, and adapt files to suit your new project's needs. The README should be revised, at least from its top through the "Versioning" section. Source files should be renamed and revised, and any other file with a `TODO` within it should be adjusted.
After any revisions, running `make check` (or `make -j check`) from the top source directory should show the unit tests continuing to pass.
Below this line is sample text to use and adapt for your project. Most text above this line is meant to document the template, rather than projects using the template.
To install this software, clone this repository, and run `pip install .` from within this directory. (You might want to do this in a virtual environment.)
This provides a standalone command:

```
case_cli_example output.rdf
```
The tests build several examples of output for the command-line mode, under `tests/cli/`.
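The command's output is an RDF graph. As a quick spot-check that a run produced parseable RDF, one might load the file with `rdflib`, sketched below. This assumes `rdflib` is available in the environment (a common dependency in CASE Python projects, though not necessarily a direct dependency of your adapted project):

```python
import rdflib

# Parse the generated file; rdflib infers the serialization
# (RDF/XML) from the .rdf file extension.
graph = rdflib.Graph()
graph.parse("output.rdf")
print(f"Parsed {len(graph)} triples.")
```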
The installation also provides a package to import:

```python
import case_cli_example
help(case_cli_example.foo)
```
This project follows SEMVER 2.0.0 where versions are declared.
Some `make` targets are defined for this repository:

- `all` - Installs `pre-commit` for this cloned repository instance.
- `check` - Runs unit tests. NOTE: The tests entail an installation of this project's source tree, including prerequisites downloaded from PyPI.
- `clean` - Removes test build files.
This repository is licensed under the Apache 2.0 License. See LICENSE.
Portions of this repository contributed by NIST are governed by the NIST Software Licensing Statement.
Participation by NIST in the creation of the documentation of mentioned software is not intended to imply a recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that any specific software is necessarily the best available for the purpose.