Transfer CalcJob #4194
Conversation
@ramirezfranciscof could you please already rebase this on the latest `develop` and squash the commits?
Ah, sure, I didn't notice this. I'll rebase, thanks for the heads up. Regarding the squashing: do I need to do this now? This should only matter when we are merging, and since there might still be some corrections to make, does it make any difference? (As a matter of fact, I'm not even sure that we want to keep any of these commits or just squash everything; or at least everything so far, maybe tests and docs could be separate.)
Sure, there may be additional changes that need to be made during the review, but squashing now and writing the corresponding commit messages forces you to think about how to describe the changes and why they are important. Of course they might be explained in detail in the docs and docstrings, but it is important that the commit message also contains a summary of why this feature is being added and describes the main design decisions made. This not only makes it easy for the reviewer to get quick context of what is happening, but already prepares the text for when it is time to merge.
force-pushed from 30700f6 to ab0b476
Ok, now everything works locally and the docstring should be fixed. I also squashed into two distinct commits. Anyway, this push was mainly so that people can access the code if they want to try to run the example workflow; I think the idea for now was that they provide feedback mostly on the interface explained in the linked document.
Codecov Report
@@ Coverage Diff @@
## develop #4194 +/- ##
===========================================
+ Coverage 79.31% 79.43% +0.13%
===========================================
Files 482 483 +1
Lines 35373 35485 +112
===========================================
+ Hits 28052 28185 +133
+ Misses 7321 7300 -21
Hey @ramirezfranciscof, I noticed there's a permission prompt when trying to access the document, since it's on Google Docs. I'm wondering if this is intentional (i.e., the doc is internal), or if you could make the document generally readable. I've sent a request for access, but if it's not meant to be internal, it would be great to remove this friction for other people who might want to check it out.
force-pushed from ab0b476 to 6607863
force-pushed from 6607863 to 1a547f0
force-pushed from 9a7bf16 to 5bca3e1
@ramirezfranciscof is there still work to be done on this PR before converting it from a draft into a proper PR?
I think I just need to add the tests. When @mkotiuga tried it, she seemed to have problems deleting nodes, but only when using this branch; I've been trying to reproduce the error and just can't find anything, so I might check with her one more time.
force-pushed from 0f2a75c to 06f391c
So, I just want to keep a record that I tried to use …
Not sure what you mean, the tests you have added look fine. Were you perhaps trying to do …
force-pushed from 06f391c to 89a86f0
force-pushed from 89a86f0 to f45ea37
I have a question about this. For how …
The coroutine …
Finally, in …
Is this last …
Since I omitted a lot of content (which I thought was not relevant, but I might be wrong), here are the sections of the code referenced:
force-pushed from f45ea37 to 89e744b
@sphuber I just implemented the skip of the submit/update steps and added the integration test we discussed. Probably this is not the best place for it, but I didn't know exactly where else to put it, so I'll wait for your feedback on this.
Thanks @ramirezfranciscof. The implementation of `skip_submit` looks good. Just some minor suggestions: mostly using f-strings now that we support them, and maybe extending the integration test to also include one with a different remote computer.
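For illustration, this is the kind of change being suggested (a sketch with a made-up message, not a quote from the actual diff):

```python
node_label = 'instructions'

# Before: old-style formatting.
message = 'the input {} is not valid'.format(node_label)

# After: an f-string, available since Python 3.6.
message = f'the input {node_label} is not valid'
```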
@sphuber friendly reminder of this good-ol' PR.
@ramirezfranciscof apologies for the delay. I responded to your comments and resolved all outstanding conversations except for one. If you can update the branch and address that final one, this would be good for me to merge. I will just quickly ping @giovannipizzi to see if he still wants to check, but if not, I will merge this. Thanks
Sorry, I won't have time now. In order not to delay this further, maybe just make sure it's marked as beta so we can change the API later (e.g. to extend it to stashing etc.)?
@ramirezfranciscof I cannot approve or merge this until you have rebased and addressed the conflicts. |
force-pushed from 006886b to d27b164
This calcjob allows the user to copy files between a remote machine and the local machine running AiiDA. More specifically, it can do any of the following:

* Take any number of files from any number of RemoteData folders in a remote machine and copy them into the local repository of a single newly created FolderData node.
* Take any number of files from any number of FolderData nodes in the local machine and copy them into a single newly created RemoteData folder in a given remote machine.

These are the main two use cases, but there are also other more complex combinations allowed by the current implementation. Please check the documentation for more details.
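As an illustration of the first use case, here is a minimal usage sketch; the entry point name, the `instructions` keys and the node labels below are assumptions based on this discussion, not a confirmed interface:

```python
from aiida import orm
from aiida.engine import submit
from aiida.plugins import CalculationFactory

# Entry point name is an assumption; check `verdi plugin list aiida.calculations`.
TransferCalculation = CalculationFactory('transfer')

remote = orm.load_node(1234)  # hypothetical pk of an existing RemoteData node

builder = TransferCalculation.get_builder()
builder.source_nodes = {'folder': remote}  # the label 'folder' is arbitrary
builder.instructions = orm.Dict(dict={
    'retrieve_files': True,  # remote -> local direction
    # Assumed entry format: (source node label, path in source, path in target).
    'symlink_files': [('folder', 'outputs/restart.dat', 'restart.dat')],
})
builder.metadata.computer = remote.computer  # no code runs, but a computer is needed

node = submit(builder)  # the retrieved files end up in a new FolderData output node
```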
This is used by the process handler to decide whether to do the normal sequence of steps (upload, submit, update, retrieve) or just skip the middle two and immediately retrieve after uploading.
I am using TransferData as a way to check that the whole process of setting and executing the run steps works correctly when using the CalcInfo setup to skip the submit/update steps.
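To make the mechanism concrete, a plugin opting into this behavior might look roughly as follows in its `prepare_for_submission`; this is a sketch assuming the `skip_submit` attribute introduced here, not the actual plugin code:

```python
from aiida.common.datastructures import CalcInfo
from aiida.engine import CalcJob

class TransferLikeCalculation(CalcJob):  # hypothetical minimal plugin
    def prepare_for_submission(self, folder):
        """Return a CalcInfo that tells the engine to skip the submit/update steps."""
        calc_info = CalcInfo()
        calc_info.skip_submit = True  # engine goes straight from upload to retrieve
        calc_info.codes_info = []     # no code is actually executed on the remote
        calc_info.retrieve_list = []  # would be filled from the transfer instructions
        return calc_info
```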
force-pushed from d27b164 to a71d929
@sphuber There is one test still running, but it should pass. I just rebased and adapted the modifications to the changes in the engine; although you already saw it, maybe you could give these adaptations a second quick pass in case you notice something. Other than that, the only new thing is the doc section in the howto (work with) data.
Thanks @ramirezfranciscof. Since the docs are new, I still have to ask for some changes, I'm afraid. I also noted that the input validators are not tested at all for invalid input. It would be important to quickly add this; it should be straightforward since they are loose functions.
from aiida.common.datastructures import CalcInfo
def validate_instructions(instructions, _):
Seems like these validation methods are untested when they receive invalid input. It would be good to add tests for this. Since they are simple functions, they should be really easy to test: simply call them with incorrect arguments and check the return value.
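A minimal sketch of such a test follows; the import path and the invalid payload are assumptions, the only thing taken from this discussion being that a validator returns an error message for bad input:

```python
from aiida import orm
from aiida.calculations.transfer import validate_instructions  # assumed import path

def test_validate_instructions_invalid():
    """An instructions node without any of the expected keys should be rejected."""
    instructions = orm.Dict(dict={})  # hypothetical invalid input
    result = validate_instructions(instructions, None)
    assert result is not None  # validators return an error message on failure
```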
So, since one of the checks verifies that the computer provided directly is the same as the one of the `RemoteData` nodes, I need to set up a new computer. I was trying to look for how it is done in the `aiida_localhost` fixture, but I can't find it anywhere by grepping. Where is this defined??
def aiida_localhost(temp_dir):
Uh, I see, thanks. Question: why is this in `aiida/manage/tests` instead of `tests/...`?
Because in this way it is also importable by plugins to use directly in their testing frameworks.
https://github.com/aiidateam/aiida-quantumespresso/blob/2c6d02f69cf273cb5f78f6f9f39839d6397793ae/tests/conftest.py#L11
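For illustration, a plugin's `conftest.py` can pull in these fixtures the same way the linked example does; the test body below is a hypothetical sketch:

```python
# conftest.py of a hypothetical plugin package: load the aiida-core fixtures.
pytest_plugins = ['aiida.manage.tests.pytest_fixtures']

def test_with_localhost(aiida_localhost):
    """The fixture provides an already configured localhost Computer."""
    assert aiida_localhost.hostname == 'localhost'  # assumed configuration
```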
Co-authored-by: Sebastiaan Huber <mail@sphuber.net>
@sphuber cancelled tests?? wot?
It got killed because it ran longer than the 30-minute time limit. Looks like it is a random one-off. I restarted them.
Fixes #4600
This is a utility that provides users a way to locally retrieve files that were left in remote folders after calculations, and to put them back on the remote machine once they are needed.
You can see an explanation of its interface and how it works in this document, and you can leave your comments on the interface there. In the meantime I will think of examples that don't involve QE to put in the AiiDA docs, as well as tests for this, but before incorporating those here I wanted to make sure we all agreed on the interface first.
You can also try it yourselves by running the workflow example found in the same folder, which illustrates how to locally save the necessary restart files for QE and put them back on the remote to launch a restart.