Thanks for your interest in contributing to the Opentrons platform! This Contributing Guide is intended to ensure best practices for both internal Opentrons contributors as well as any external contributors. We want to make sure you’re set up to contribute effectively, no matter if you’re helping us out with bug reports, code, documentation, feature suggestions, or anything else. This guide covers:
- Opening Issues
- Opening Pull Requests
- Commit Guidelines
- Project and Repository Structure
- Development Setup
- Prior Art
Filing an issue is a great way to contribute to the project! Bug reports and feature requests are really useful for us as we plan our work. If you’d like to open an issue, please consider the following questions before opening:
- Is this issue for a bug, a feature request, or something else?
- Please make this clear in your description so it’s easier to address
- Has this issue already been opened?
- Duplicate tickets slow things down, so make sure to search before you open!
- If there’s already a ticket, please comment on the existing thread!
- Is this a support request?
- If yes, you're better off checking out our support page rather than opening a GitHub issue
To ensure your issue can be addressed quickly, please fill out the sections in the existing issue template to the best of your ability!
If you’d like to contribute code to the Opentrons platform, pull requests (PRs) are the way to do it. Any code contributions are greatly appreciated! If you’re an external contributor, we’re going to assume you are familiar with the fork and pull request flow. If not, this blog post by Scott Lowe is a good introduction.
Please note that by contributing to the Opentrons platform, you agree to share those contributions under the terms of the Apache 2.0 license.
Before opening any PR, please run through the following questions:
- Does this PR address an already open issue?
- If not, please consider opening an issue first
- This is to ensure you don't end up duplicating work or wasting time on a PR that won't be accepted
- Does this PR incorporate many different changes?
- If yes, would the PR work better as a series of smaller PRs?
- Our team is more than happy to help you figure out an incremental plan
- Does this PR include code changes without test and/or documentation updates?
- If yes, your PR may not be ready to open
- Tests and documentation are a vital part of any code contribution
- Are there a reasonable number of commits and are they properly informative?
- The best kind of PR is a tiny PR with a single commit
- To avoid introducing problems into our Git history, we may have to ask you to squash or otherwise amend your commit(s)
- See Commit Guidelines below for tips on keeping a good Git history
To ensure your code is reviewed quickly and thoroughly, please fill out the sections in the existing pull request template to the best of your ability! If you’d like some recommended reading for writing good pull requests, check out:
- How to write the perfect pull request
- The (written) unwritten guide to pull requests
- The Art of a Pull Request
After your Pull Request is merged (or otherwise closed), you’ll want to make sure to delete the branch in GitHub. You probably want to delete your local branch, too, depending on your own personal organizational strategies / general paranoia.
If you're looking for something to work on, especially for a first contribution, check out our list of easy issues. Be sure to drop a comment in the thread before starting work to make sure nobody else has picked it up.
Before you're ready to make a commit, you should do your best to make sure that:
- All tests are passing
make test
- See Testing section for more details
- All code quality checks are passing
make format lint
- See Code quality section for more details
- Especially consider setting up your code editor to run formatting and quality checks automatically as you code
Good commit messages are essential to keeping an organized and readable Git history. A readable Git history makes our lives easier when doing necessary work like writing changelogs or tracking down regressions. Please read How to Write a Git Commit Message by Chris Beams and then come back here. These selected guidelines (copied and pasted from that article) are a very good starting point to think about when writing your commit message:
- Separate subject from body with a blank line
- Capitalize the subject line
- Do not end the subject line with a period
- Use the imperative mood in the subject line
- Use the body to explain what and why vs. how
When committing, we use commitizen to format our commit messages according to the Conventional Commits specification. This allows us to automatically generate CHANGELOGs based on commit messages. To commit files, first install commitizen, then:
git add path/to/files
git cz
This will launch the commitizen wizard, which will ask you to:
- Select a commit type, which will be one of:
  - feat - A new feature
  - fix - A bug fix
  - docs - Documentation only changes
  - style - Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc.)
  - refactor - A code change that neither fixes a bug nor adds a feature
  - perf - A code change that improves performance
  - test - Adding missing tests or correcting existing tests
  - build - Changes that affect the build system or external dependencies (example scopes: gulp, broccoli, npm)
  - ci - Changes to our CI configuration files and scripts (example scopes: Travis, Circle, BrowserStack, SauceLabs)
  - chore - Other changes that don't modify src or test files
- Select a scope
  - For feat, fix, refactor, and perf, this should be a top-level project, e.g. app or api
  - For other commit types, use your best judgement or omit
- Write a short commit title
  - Written according to the guidelines above
- Write a longer description if necessary
  - Also written according to the guidelines above
- Mention any tickets addressed by the commit
  - e.g. Closes #xyz
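For example, a commit produced through the wizard might end up looking like the message below (the scope, subject, and issue number here are purely illustrative):

feat(app): Add protocol upload progress indicator

Show upload progress so users can tell that a large protocol file is
still transferring rather than stalled.

Closes #1234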
Most of Opentrons’ projects live in the Opentrons/opentrons repository. Having multiple projects in one repository (also known as a monorepo) is convenient for keeping various inter-project dependencies in sync, but does require workflow considerations to keep everything organized and trackable.
Generally, the directory / file structure of our monorepo looks something like this:
- [Project]
- [Another Project]
- etc.
- scripts - Repository level scripts (mostly for CI)
- Makefile - Top level makefile for CI
- Various repository level dotfiles (CI and git config)
- README.md, CONTRIBUTING.md, LICENSE, etc.
Our projects use a mix of languages, but mostly Python (backend + robotics) and JavaScript (frontend). Each project has its own README + Makefile + dependency management.
If you'd like to contribute (or maybe just run the very latest and greatest version), this section details what you need to do to get your computer and local repository set up.
Individual projects may have additional instructions, so be sure to check out the various project READMEs, too.
Your computer will need the following tools installed to be able to develop with the Opentrons platform:
- macOS 10.11+, Linux, or Windows 10
  - On Windows, please configure Git’s core.autocrlf setting (see the Git config docs) to input so that shell scripts required for the robot’s boot process in api/opentrons/resources do not have carriage returns inserted.
- Python 3.7 (pyenv is optional, but recommended for macOS / Linux. If pyenv is not available for your system or you do not want to use it, you can set the environment variable OT_PYTHON to the full path to the Python 3.7 executable)
  - If you wish to use pyenv but are experiencing issues with macOS Mojave, please see the common build problems section of the pyenv documentation.
  pyenv install 3.7.6
  - macOS Big Sur note: due to this known issue we recommend using:
  pyenv install 3.7.10
- Node v12 - nvm is optional, but recommended
  nvm install 12
- yarn - JavaScript package manager
- commitizen - Commit message formatter
  yarn global add commitizen
- GNU Make - we use Makefiles to manage our builds
- cURL - used to push development updates to robots
- On Linux, you will need libsystemd and headers. On Ubuntu, install libsystemd-dev and python3-dev before running make setup.
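For reference, the Windows core.autocrlf setting mentioned above can be set globally with a single Git command (drop --global to apply it to just this repository):

git config --global core.autocrlf input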
Once you're set up, clone the repository and install all project dependencies:
git clone https://github.com/Opentrons/opentrons.git
cd opentrons
make setup
In addition, if (and only if) you want to build a PDF version of the Opentrons API documentation, you must install a LaTeX distribution that includes a callable pdflatex. If that is installed, you can do make -C api docs-pdf.
We use:
- pytest to test Python
- Jest to test JavaScript
- To run tests in watch mode, you should also install watchman
- Cypress.io for end to end UI testing
You can run tests with:
# run all tests (except e2e)
make test
# run tests per language
make test-py
make test-js
# run cypress e2e tests
make test-e2e
You can pass some options to the JavaScript tests:
# run JavaScript tests in watch mode
make test-js watch=true
# disable test coverage
make test-js cover=false
# update snapshot tests
# https://jestjs.io/docs/en/snapshot-testing
make test-js updateSnapshot=true
And you can run code linting / typechecking with:
# lint all code
make lint
# lint + typecheck specific languages
make lint-py
make lint-js
make lint-css
make check-js
To help with code quality and maintainability, we use a collection of tools that can be roughly sorted into the following categories (with some overlaps):
- Linters
- Typecheckers
- Formatters
These tools can be run with the following commands:
# lint all code and run all typechecks
make lint
# lint by language
# note: Python linting also includes typechecking
make lint-py
make lint-js
make lint-json
make lint-css
# typecheck JavaScript code
make check-js
# format JavaScript, JSON, Markdown, and YAML
make format
Most, if not all, of the tools above have plugins available for your code editor that will run quality checks and formatting as you write and/or save. We highly recommend setting up your editor to format and check your code automatically.
- Pylama - Search your editor's package manager
- ESLint - https://eslint.org/docs/user-guide/integrations#editors
- stylelint - https://stylelint.io/user-guide/complementary-tools/#editor-plugins
- mypy - Search your editor's package manager
- Flow - https://flow.org/en/docs/editors/
- Prettier - https://prettier.io/docs/en/editors.html
JavaScript dependencies are installed by yarn. When calling yarn, you should do so from the repository level.
A development dependency is any dependency that is used only to help manage the project. Examples of development dependencies would be:
- Build tools (webpack, babel)
- Testing/linting/checking tools (jest, flow, eslint)
- Libraries used only in support scripts (aws, express)
To add a development dependency:
# with long option names
yarn add --dev --ignore-workspace-root-check <dependency_name>
# or, with less typing
yarn add -DW <dependency_name>
A project dependency is a dependency that an application or library will import at run time. Examples of project dependencies would be:
- UI / state-management libraries (react, redux)
- General utility libraries (lodash)
Project dependencies should be added to the specific project that depends on them. To add one:
yarn workspace <project_name> add <dependency_name>
After you have installed a dependency (development or project), you may find that you need to also install flow type definitions. Without type definitions for our external dependencies, we are unable to typecheck anything we import. We use flow-typed to install community-created type definitions. To add type definitions for an installed package:
yarn run flow-typed install <dependency_name>@<installed_version>
Not every JavaScript package has an available flow-typed definition. In this case, flow-typed will generate a stub file in the directory flow-typed/npm/. You may find it useful to fill out this stub; see the flow-typed wiki for a library definition writing guide.
Be sure to check out the API README for additional instructions.
To run the Opentrons HTTP API in development mode:
# run API with virtual robot
make -C robot-server dev OT_ROBOT_SERVER_simulator_configuration_file_path=simulators/test.json
# run API with robot's motor driver connected via USB to UART cable
make -C robot-server dev
Generally, to test your code on the robot, you will want to push the whole monorepo to the robot in case there are large differences between your robot's server version and the code you are using from GitHub. You can do this via:
# Use this command from the top-level opentrons folder
make push host=${some_other_ip_address}
To put the robot server on a test robot, if it's on buildroot do:
# push the current contents of the api directory to robot for testing
# defaults to currently connected ethernet robot
make push-api
# takes optional host variable for other robots
make push-api host=${some_other_ip_address}
and if it's still on balena do:
# push the current contents of the api directory to robot for testing
# defaults to currently connected ethernet robot
make -C api push-balena
# takes optional host variable for other robots
make -C api push-balena host=${some_other_ip_address}
To SSH into the robot, do:
# SSH into the currently connected ethernet robot
make term
# takes optional host variable for other robots
make term host=${some_other_ip_address}
If make term complains about not having a key, you may need to install a public key on the robot. To do this, create an SSH key and install it:
ssh-keygen # note the path you save the key to
make -C robot-server install-key br_ssh_pubkey=/path/to/pubkey host=${some_other_ip_address}
and subsequently, when you do make term, add the br_ssh_key=/path/to/key option:
make term br_ssh_key=/path/to/privkey
If you create the key as ~/.ssh/robot_key and ~/.ssh/robot_key.pub, then make term and make install-key will work without arguments.
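A minimal sketch of that default-path workflow (assuming install-key is run from robot-server as above; the host variable is only needed for robots other than the currently connected ethernet robot):

# generate a key pair at the default path
ssh-keygen -f ~/.ssh/robot_key
# install the public half on the robot, then connect without extra key arguments
make -C robot-server install-key br_ssh_pubkey=$HOME/.ssh/robot_key.pub host=${some_other_ip_address}
make term host=${some_other_ip_address}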
Note that all of our release process information for different projects can be found in RELEASING.md.
When you have SSHed into a robot using make term, its behavior depends on whether it is on balena or on buildroot. You can tell by doing ps | head -n 2; if pid 1 is systemd, you're on buildroot, and if it is tini, you're on balena.
The files /data/logs/api.log.* and /data/logs/serial.log.* contain logs from the api server and serial system, respectively. Only things logged with Python logging are here. To get logs printed to stdout by the api server, you can check balena. nginx logs are in /var/log/nginx. This system is inside a docker container; you can get a shell to the host from balena if you need deeper logging. However, things printed to stdout and stderr by the api server are only available in the balena log viewer and therefore are frequently lost.
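As a quick sketch for viewing those files from a robot shell (the exact rotation suffixes on the log files may vary):

# follow the api server and serial logs mentioned above
tail -f /data/logs/api.log.* /data/logs/serial.log.*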
Buildroot robots use systemd-journald for log management. This is a single log manager for everything on the system. It is administrated using the journalctl utility. You can view logs by just doing journalctl (it may be better to do journalctl --no-pager | less to get a better log viewer), or stream them by doing journalctl -f. Any command that displays logs can be narrowed down by using a syslog identifier: journalctl -f SYSLOG_IDENTIFIER=opentrons-api will only print logs from the api server's loggers, for instance. Our syslog identifiers are:
- opentrons-api: The API server - anything sent to Python logging from the api server package, except the serial logs
- opentrons-update-server: Anything sent to Python logging from the update server package
- opentrons-api-serial: The serial logs
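For example, combining the stream option with an identifier from the list above:

# stream only api server logs
journalctl -f SYSLOG_IDENTIFIER=opentrons-api
# page through the serial logs without journalctl's built-in pager
journalctl --no-pager SYSLOG_IDENTIFIER=opentrons-api-serial | less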
You can't really restart anything from inside a shell on balena, since it all runs in a docker container. Instead, you can do restart, which restarts the docker container, but this will disconnect you. Stop ongoing processes by doing ps, finding the pid, and then doing kill (pid). This can be useful to temporarily run one of the servers directly, to see what it prints out on the command line. Note that when you do this the environment won't be quite the same as what the server sees when it is run directly by balena, due to the implementation details of docker run, tini, and our start scripts.
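A rough sketch of that flow (the grep pattern is illustrative; check the ps output for the actual process, and replace 1234 with the real pid):

# find the api server process id
ps | grep python
# stop it so you can run the server by hand
kill 1234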
Buildroot robots use systemd as their init system. Every process that we run has an associated systemd unit, which defines and configures its behavior when the robot starts. You can use the systemctl utility to mess around with or inspect the system state. For instance, if you do systemctl status opentrons-api-server you will see whether the api server is running or not, and a dump of its logs. You can restart units with systemctl restart (unitname), start and stop them with systemctl start and systemctl stop, and so on. Note that if you make changes to unit files, you have to run systemctl daemon-reload (no further arguments) for the init daemon to see the changes.
Our systemd units are:
- opentrons-api-server: The API server
- opentrons-update-server: The update server
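For example, using the units above:

# check whether the api server is running and see its recent logs
systemctl status opentrons-api-server
# pick up edits to the unit file, then restart the server
systemctl daemon-reload
systemctl restart opentrons-api-server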
When a robot is running on buildroot, its filesystem is mounted from two separate locations. /data, /var, and /home are from the "data" partition, and everything else is from the root partition (or generated by the system). The root partition is what gets updated, by being overwritten. To make this work, the root partition is mounted readonly, which causes writes to files in that partition to fail with the error "readonly filesystem". To prevent this, you can remount the partition:
mount -o remount,rw /
This Contributing Guide was influenced by a lot of work done on existing Contributing Guides. They're great reads if you have the time!