https://github.com/aiidateam/aiida_core

dfd1602 Merge pull request #4545 from aiidateam/release/1.4.3 Release `v1.4.3` 06 November 2020, 13:41:35 UTC
2d2ac39 Release `v1.4.3` 06 November 2020, 11:12:01 UTC
78cf6e1 Fix `UnboundLocalError` in `aiida.cmdline.utils.edit_multiline_template` (#4436) If `click.edit` returns a falsy value, the following conditional would be skipped and the `value` variable would be undefined causing an `UnboundLocalError` to be raised. This bug was reported by @blokhin but the exact conditions under which it occurred are not clear. Cherry-pick: 861a39f268954833385e699b3acbd092ccd04e5e 06 November 2020, 10:05:33 UTC
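The bug class can be illustrated with a minimal sketch (function and variable names here are illustrative, not the actual aiida-core code):

```python
def render_template(editor_result):
    """Buggy pattern: if the editor returns a falsy value (e.g. ``None``),
    the conditional is skipped, ``value`` is never bound, and the final
    statement raises ``UnboundLocalError``."""
    if editor_result:
        value = editor_result.strip()
    return value


def render_template_fixed(editor_result):
    """Fixed pattern: bind ``value`` before the conditional."""
    value = ''
    if editor_result:
        value = editor_result.strip()
    return value
```

Here `editor_result` stands in for the return value of `click.edit`, which is `None` when the user closes the editor without saving.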
0a7039e RabbitMQ: remove validation of `broker_parameters` from profile (#4542) This validation was added as an attempt to help users detect invalid parameters in the `broker_parameters` dictionary of a profile, but `aiida-core` itself imposes no real limitations here. It is the libraries underneath that decide what is acceptable, and this can differ from library to library and is not always clearly documented. For example, we currently use `topika` and `pika`, which behave differently from `aiormq`, which will replace them soon once `tornado` is replaced with `asyncio`. It is best not to limit the options on `aiida-core`'s side and simply let invalid parameters fail downstream, so as not to artificially reject parameters that are perfectly acceptable to the underlying libraries. 05 November 2020, 11:02:52 UTC
a6c6cc4 Merge pull request #4421 from aiidateam/release/1.4.2 Release `v1.4.2` 04 October 2020, 18:01:11 UTC
e7b8943 Release `v1.4.2` 04 October 2020, 17:10:42 UTC
91df33e `CalcJob`: make sure `local_copy_list` files do not end up in node repo (#4415) The concept of the `local_copy_list` is to provide a possibility to `CalcJob` plugins to write files to the remote working directory without those files also being copied to the calculation job's repository folder. However, due to commit 9dfad2efbe9603957a54d0123a3cec2ee48b54bd this guarantee is broken. The relevant commit refactored the handling of the `local_copy_list` in the `upload_calculation` method to allow the target filepaths in the list to contain nested paths with subdirectories that might not yet necessarily exist. The approach was to first write all files to the sandbox folder, where it is easier to deal with non-existing directories. To make sure that these files weren't then also copied to the node's repository folder, the copied files were also added to the `provenance_exclude_list`. However, the logic in that part of the code did not normalize filepaths, which caused files to be copied that shouldn't have been. The reason is that the `provenance_exclude_list` could contain `./path/file_a.txt`, which would be compared to the relative path `path/file_a.txt`, which references the same file, but the strings are not equal. The solution is to ensure that all paths are fully normalized before they are compared. This will turn the relative path `./path/file_a.txt` into `path/file_a.txt`. 30 September 2020, 21:43:33 UTC
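The normalization fix can be sketched with `os.path.normpath` (the helper name is illustrative, not the actual aiida-core code):

```python
import os


def is_excluded(relpath, provenance_exclude_list):
    """Sketch of the fix: normalize both sides before comparing, so that
    ``./path/file_a.txt`` and ``path/file_a.txt`` are recognised as the
    same file even though the raw strings differ."""
    normalized = {os.path.normpath(entry) for entry in provenance_exclude_list}
    return os.path.normpath(relpath) in normalized
```

A plain string comparison of `'./path/file_a.txt'` and `'path/file_a.txt'` would fail, which is exactly the bug described above.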
9c4a8b4 Merge pull request #4402 from sphuber/release/1.4.1 Release `v1.4.1` 28 September 2020, 08:46:08 UTC
fa64dba Release `v1.4.1` 28 September 2020, 06:39:33 UTC
f07bf63 `verdi setup`: improve validation and help string of broker virtual host (#4408) The help string of the `--broker-virtual-host` option of `verdi setup` incorrectly said that forward slashes have to be escaped, but this is not true. The code will escape any characters necessary when constructing the URL to connect to RabbitMQ. On top of that, slashes would fail the validation outright, even though these are common in virtual hosts. For example, the virtual host always starts with a leading forward slash, but our validation would reject it. Also, the leading slash is added by the code and so does not have to be provided during setup. The help string and the documentation now reflect this. The exact naming rules for virtual hosts, imposed by RabbitMQ or other implementations of the AMQP protocol, are not fully clear. But instead of putting an explicit validation on AiiDA's side and running the risk that we incorrectly reject valid virtual host names, we simply accept all strings. In any case, any non-default virtual host will have to be created through RabbitMQ's control interface, which will perform the validation itself. 28 September 2020, 06:36:22 UTC
cc5af0e `verdi setup`: forward broker defaults to interactive mode (#4405) The options for the message broker configuration do define defaults; however, the interactive clones for `verdi setup`, which are defined in `aiida.cmdline.params.options.commands.setup`, override the default with the `contextual_default`, which sets an empty default unless it is taken from an existing profile. The result is that for new profiles, the broker options do not specify a default, even though for most use cases the defaults are exactly what is required. After the changes of this commit, the prompt of `verdi setup` will provide a default for all broker parameters, so most users will simply have to press enter each time. 28 September 2020, 06:36:22 UTC
1c85bc8 Dependencies: increase minimum version requirement `plumpy~=0.15.1` (#4398) The patch release of `plumpy` comes with a simple fix that will prevent the printing of many warnings when running processes. So although not critical, it does improve user experience. 25 September 2020, 14:27:04 UTC
4074110 Implement `next` and `iter` for the `Node.open` deprecation wrapper (#4399) The return value of `Node.open` was wrapped in `WarnWhenNotEntered` in `aiida-core==1.4.0` in order to warn users that use the method without a context manager, which will start to raise in v2.0. Unfortunately, the raising came a little early, as the wrapper does not implement the `__iter__` and `__next__` methods, which can be called by clients. An example is `numpy.genfromtxt`, which will notice the return value of `Node.open` is filelike and so will wrap it in `iter`. Without the current fix, this raises a `TypeError`. The proper fix would be to forward all magic methods to the wrapped filelike object, but it is not clear how to do this. 25 September 2020, 14:23:42 UTC
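The forwarding of the two magic methods can be sketched as follows (this is a simplified stand-in for the wrapper; the real class also emits a deprecation warning when used without `with`):

```python
import io


class WarnWhenNotEntered:
    """Sketch of the wrapper with the missing magic methods added."""

    def __init__(self, fileobj):
        self._fileobj = fileobj

    def __iter__(self):
        # forward iteration so clients that wrap the filelike in ``iter()``
        # (as numpy.genfromtxt does) keep working
        return iter(self._fileobj)

    def __next__(self):
        return next(self._fileobj)

    def read(self, *args, **kwargs):
        return self._fileobj.read(*args, **kwargs)
```

Without `__iter__`, `iter(wrapped)` would raise `TypeError: object is not iterable`, which is the failure reported in the commit.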
59ebaf4 Merge pull request #4385 from aiidateam/release/1.4.0 Release `v1.4.0` 24 September 2020, 09:08:58 UTC
ea5b7f5 Release `v1.4.0` 24 September 2020, 08:06:06 UTC
0b155a5 Remove duplicated migration for SqlAlchemy (#4390) The `0edcdd5a30f0_add_extras_to_group.py` migration is a duplicate of `0edcdd5a30f0_dbgroup_extras.py` and was accidentally committed in commit `26f14ae0c352bfe7b7f3bd0282291831b71320ed`. The migration is exactly the same, including the revision numbers, except the human readable part was changed. 23 September 2020, 21:04:18 UTC
f2f6e2f `SshTransport`: refactor interface to simplify subclassing (#4363) The `SshTransport` transport plugin is refactored slightly to make it easier for subclasses to adapt its behavior. Specifically:
* Add simple wrappers around SFTP calls (stat, lstat and symlink) such that they can be overridden in subclasses, for example if SFTP is not available and pure SSH needs to be used.
* New method to initialize the file transport separately. Also adds error checking for SFTP initialization, with an explicit message if it fails to launch, and a possible solution.
* Add `_MAX_EXEC_COMMAND_LOG_SIZE` class attribute that can be used to limit the length of the debug message containing the command that is executed in `_exec_command_internal`, which can grow very large. 23 September 2020, 11:30:27 UTC
26f14ae `Group`: add support for setting extras on groups (#4328) The `DbGroup` database models get a new JSONB column `extras` which will function just like the extras of nodes. They will allow setting mutable extras as long as they are JSON-serializable. The default is set to an empty dictionary, which prevents the ORM from having to deal with null values. In addition, this keeps in line with the current design of other database models. Since the default is one defined on the ORM and not the database schema, we also explicitly mark the column as non-nullable. Otherwise it would still be possible to store rows in the database with null values. To add the functionality of setting, getting and deleting the extras to the backend and frontend `Group` ORM classes, the corresponding mixin classes are added. The functionality for the `BackendGroup` was already accidentally added in a previous commit `65389f4958b9b111756450ea77e2` so only the frontend is touched here. 23 September 2020, 10:59:46 UTC
ac0d559 Prepare the code for the new repository implementation (#4344) In `v2.0.0`, the new repository implementation will be shipped, which, despite our best efforts, requires some slight backwards-incompatible changes to the interface. The envisioned changes are announced as deprecation warnings:
* `FileType`: `aiida.orm.utils.repository` -> `aiida.repository.common`
* `File`: `aiida.orm.utils.repository` -> `aiida.repository.common`
* `File`: changed from namedtuple to class
* `File`: iteration is deprecated
* `File`: `type` attribute -> `file_type`
* `Node.put_object_from_tree`: `path` -> `filepath`
* `Node.put_object_from_file`: `path` -> `filepath`
* `Node.put_object_from_tree`: `key` -> `path`
* `Node.put_object_from_file`: `key` -> `path`
* `Node.put_object_from_filelike`: `key` -> `path`
* `Node.get_object`: `key` -> `path`
* `Node.get_object_content`: `key` -> `path`
* `Node.open`: `key` -> `path`
* `Node.list_objects`: `key` -> `path`
* `Node.list_object_names`: `key` -> `path`
* `SinglefileData.open`: `key` -> `path`
* Deprecated use of `Node.open` without a context manager
* Deprecated any mode other than `r` and `rb` in `Node.open` and `Node.get_object_content`
* Deprecated `contents_only` in `put_object_from_tree`
* Deprecated the `force` argument in `Node.put_object_from_tree`, `Node.put_object_from_file`, `Node.put_object_from_filelike` and `Node.delete_object`
The special case is the `Repository` class of the internal module `aiida.orm.utils.repository`. Even though it is not part of the public API, plugins may have been using it. To allow deprecation warnings to be printed when the module or class is used, we move the content to a mirror module `aiida.orm.utils._repository`, which is then used internally, while the original module carries the deprecation warning. This way clients will see the warning when they use it, but use within `aiida-core` will not trigger it. Since there won't be a replacement for this class in the new implementation, it can also not be replaced or forwarded. 23 September 2020, 09:33:51 UTC
aa3b009 `BaseRestartWorkChain`: do not run `process_handler` when `exit_codes=[]`. (#4380) When a `process_handler` explicitly gets passed an empty `exit_codes` list, it would previously always run. This is now changed to not run the handler instead. The reason for this change is that it is more consistent with the semantics of passing a list of exit codes, where it only triggers if the child process has any of the listed exit codes. 23 September 2020, 06:59:19 UTC
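The gating rule can be sketched as a small predicate (the helper is illustrative; in aiida-core the check lives in the `process_handler` decorator):

```python
def should_run(handler_exit_codes, node_exit_status):
    """Sketch of the gating semantics: ``None`` means the handler always
    runs; an empty list now means it never runs; otherwise the handler
    runs only if the child's exit status is listed."""
    if handler_exit_codes is None:
        return True
    return node_exit_status in handler_exit_codes
```

Note that the empty-list case needs no special branch: membership in an empty list is simply always false, which is the new, consistent behavior.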
93bde42 `CalcJob`: improve logging in `parse_scheduler_output` (#4370) The level of the log that is fired if no detailed job info is available is changed from `WARNING` to `INFO`. Since not all schedulers implement the feature of retrieving this detailed job info, such as the often used `DirectScheduler`, using a warning is not very apt. If the information is missing, nothing is necessarily wrong, so `INFO` is better suited. On the contrary, if `Scheduler.parse_output` excepts, that is grave, and so its level is changed from a warning to an error. Finally, a new condition is added where the scheduler does implement the method to retrieve the detailed job info, but the command fails. In this case, the return value will be non-zero. This value is now checked explicitly and, if it is non-zero, an info log is fired and the detailed job info is set to `None`, which causes the parsing to be skipped. This case can for example arise when using the `SlurmScheduler` plugin, which does implement the detailed job info feature; however, not all SLURM installations have the job accounting feature enabled, which is required by the plugin. 22 September 2020, 15:46:41 UTC
8dec326 ORM: move attributes/extras methods of frontend node to mixins Move all methods related to attributes and extras from the frontend `Node` class to separate mixin classes called `EntityAttributesMixin` and `EntityExtrasMixin`. This makes it easier to add these methods to other frontend entity classes and makes the code more maintainable. 22 September 2020, 09:12:38 UTC
65389f4 ORM: move attributes/extras methods of backend node to mixins Move the attributes and extras methods to two mixin classes called `BackendEntityAttributesMixin` and `BackendEntityExtrasMixin`, stored in the new `aiida.orm.implementation.entities` module. The mixin classes rely on the `is_stored` and `_flush_if_stored` methods, so these are added as abstract methods. They are "mixed in" at the `BackendNode` level, where the abstract methods of the attributes and extras are removed. Move the `_flush_if_stored` method to the `BackendEntity` class, which is added leftmost to the `BackendNode` parent classes. This method can be used by all backend entities. Move the `BackendEntity` and `BackendCollection` classes to the `aiida.orm.implementation.entities` module. Move the `validate_attribute_extra_key` and `clean_value` methods to the new module `aiida.orm.implementation.utils`. Move the calls to the `validate_attribute_extra_key` method from the frontend `Node` class to the backend mixin classes. This way the key/value validation and cleaning is done at the backend level, which is more consistent. Moreover, this means other frontend classes won't have to add this call to their methods when they want to use the attributes/extras methods. Add exception chaining for all the modules that are adjusted. 22 September 2020, 09:12:38 UTC
8005040 ORM: homogenize attributes/extras methods of backend node Make sure the code for the attributes and extras methods is identical, as a first step towards refactoring the code to use a mixin class for these methods. These changes should have no influence on how the methods function. Add exception chaining for exceptions raised directly during the handling of another exception. There is only a minor difference in the output, but it should make it clear that this exception was raised purposefully. 22 September 2020, 09:12:38 UTC
12be9ad Dependencies: remove upper limit and allow `numpy~=1.17` (#4378) The limit was introduced in `f5d6cba2baf0e7ca69b742f7e76d8a8bbcca85ae` because of a broken pre-release. Now that a stable release is out, the requirement is relaxed to allow newer versions as well. Note that we keep the minimum requirement of `numpy==1.17`, following AEP 003. One change had to be applied in the code to make it compatible with newer versions of `numpy`. In the legacy kpoints implementation, the entries in `num_points` are of type `numpy.float64` for recent versions of `numpy`, but need to be integers so they can be used for indexing in `numpy.linspace()` calls. 19 September 2020, 09:16:21 UTC
c6bca06 Update citations in `README.md` and documentation landing page (#4371) The second AiiDA paper was published in Scientific Data on September 8, 2020. The suggested citations are updated, where the original AiiDA paper is kept to be cited when people use AiiDA with version before v1.0 or if they reference the original ADES model. 17 September 2020, 20:54:58 UTC
9dfad2e `CalcJob`: allow nested target paths for `local_copy_list` (#4373) If a `CalcJob` specified a `local_copy_list` containing an entry where the target remote path contains nested subdirectories, the `upload_calculation` would except unless all subdirectories already existed. To solve this, one could have added a transport call that creates the directories if the target path is nested. However, this would risk being very inefficient if there are many local copy list instructions with relative paths, each of which would incur a command over the transport. Instead, we change the design and simply apply the local copy list instructions to the sandbox folder on the local file system. This at the same time allows us to get rid of the inefficient workaround of writing the file to a temporary file, needed because the transport interface doesn't accept filelike objects and the file repository does not expose filepaths on the local file system. The only additional thing to take care of is to make sure the files from the local copy list do not end up in the repository of the node, which was the whole point of the `local_copy_list`'s existence in the first place. But this is solved by adding each file that is written to the sandbox also to the `provenance_exclude_list`. 17 September 2020, 19:24:38 UTC
ff7b9e6 CI: skip the code tests if only docs have been touched (#4377) This requires splitting the `pre-commit` and `tests` steps in separate workflows. 17 September 2020, 14:29:39 UTC
b002a9a CI: add `pytest` benchmark workflows (#4362) The basic steps of the workflow are:
1. Run `pytest` to generate JSON data. By default, these tests are switched off (see `pytest.ini`), but to run them locally, simply use `pytest tests/benchmark --benchmark-only`. This runs each test, marked as a benchmark, n times and records the timing statistics (see pytest-benchmark). When run also with `--benchmark-json benchmark.json`, a JSON file will be created with all the details about each test.
2. Extract information from the above JSON, together with data about the system (number of CPUs, etc.), and create a "simplified" JSON object.
3. Read the JSON object from the specified `gh-pages` folder (data.js), which contains a list of all these JSON objects. These are split by OS and backend.
4. If available, compare the new JSON section against the last one added to `data.js`, and comment in the PR and/or fail the workflow if the timings have sufficiently degraded, depending on GH action configuration.
5. If configured, add the new data to `data.js`, update the other website assets (HTML/CSS/JS) and commit the updates to `gh-pages`.
Since at ~7-8 minutes these tests are slower than standard unit tests, even with the current fairly conservative number of tests and repetitions, they are not run by default on each commit. The current solution for this is to have two workflow jobs:
* One runs on every commit to develop, unless it is just updating documentation, and will actually update the `gh-pages` data.
* The second is triggered by a commit to a branch with an open PR to `develop`, but only if the title of the commit message includes `[run bench]`. This will report back the timing data but not update `gh-pages`. The idea is that this is run on the final commit of a PR that may affect performance.
As for the actual tests, they are split into three categories:
1. Basic node storage/deletion, i.e. interactions with the ORM.
2. Runs of workchains with internal (looped) calls to workchains and calcjobs. These are duplicated using both a local runner and a daemon runner. The daemon runner code is a bit tricky and may break once we finalize the move to `asyncio`.
3. Exporting/importing archives. 16 September 2020, 10:08:59 UTC
34eef0b CI: continue on errors for install jobs using pip beta 2020 resolver (#4369) Update the `test-install` workflow to not fail when any of the install jobs with pip's new dependency resolver fails. This new feature is still in development, so bugs that are unrelated to our changes are expected, and their failures should not break our builds and disrupt our own CI process. The compromise here is that, in order not to block merges, the associated steps and jobs still get a little green check-mark, and a simple warning is displayed if issues with the 2020-resolver are encountered. 16 September 2020, 08:03:24 UTC
d1bc513 `verdi export migrate`: add `--in-place` flag to migrate archive in place (#4220) When an export archive needs to be migrated, one often does not care about the original archive and simply wants to overwrite the existing file with the migrated archive. The new flag `--in-place` saves users from having to specify a temporary filename and copying it over the original file after it has been migrated. This commit also changes the exit code for migrating an export file that already is up to date from an error to success (0). 11 September 2020, 09:26:46 UTC
ec32fbb `SlurmScheduler`: implement `parse_output` to detect OOM and OOW (#3931) This implements the `Scheduler.parse_output` method that allows parsing the detailed job info that is retrieved from the scheduler when a job is finished. For the time being, only the out-of-memory error and out-of-walltime errors are detected. 10 September 2020, 17:18:49 UTC
e974d3f Fix profile creation in the test fixture manager (#4360) The test profile provided by the test fixture manager was broken after the recently added feature that makes the broker configuration configurable. Since the test fixture uses custom code to create the profile that is not exercised in the test suite of `aiida-core`, it went unnoticed that it was not updated to also include the broker information. Adding the broker defaults fixes the problem, but really the fixture code should be rewritten to not have its own profile generation code. 09 September 2020, 21:08:24 UTC
35eac9c Merge pull request #4364 from mbercx/fix/rabbitmq/table Docs: Fix RabbitMQ configuration table 09 September 2020, 15:10:52 UTC
15542ac Docs: Fix RabbitMQ configuration table 09 September 2020, 14:52:32 UTC
759d666 🔧 Add tox configuration (#4355) This commit adds configuration to use the [tox](https://github.com/tox-dev/tox) testing automation tool. This configuration is added to `pyproject.toml`, which required some fixes to the pre-commit checks that create and validate that file. In particular, the file is now parsed with `tomlkit`, which preserves any existing formatting and comments. 09 September 2020, 14:11:35 UTC
0094f70 Docs: add section to installation guide on configuring RabbitMQ With the new feature of the RabbitMQ URI being fully customizable, the setup guide needed a new section on how to configure those settings for a given profile through `verdi setup`. 04 September 2020, 19:20:56 UTC
bf7d523 `verdi status`: fix broker URL and add `--print-traceback` option The `verdi status` command now reports the proper URL that is used to connect to the RabbitMQ message broker. In case the connection fails, the exception message is printed. Passing the `--print-traceback` option will force the command to print the entire stack trace as well. 04 September 2020, 19:20:56 UTC
568ccd4 `Config`: add migration to add default message broker configuration The migration simply adds the new broker related fields to each profile using the default value, unless the profile already defines it. The migration version is upped, but the last backwards compatible version is kept the same. This is because if the configuration is used with version 3 of the config file, the new keys are simply removed as the config file is parsed. When the code is updated again, the new keys are added again using the defaults. 04 September 2020, 19:20:56 UTC
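The shape of such a config migration can be sketched as follows (key names and the version bookkeeping are illustrative assumptions, not the exact aiida-core config schema):

```python
def add_broker_defaults(config, defaults):
    """Sketch of the migration: every profile gains the new broker keys
    with their default values, unless the profile already defines them."""
    for profile in config.get('profiles', {}).values():
        for key, value in defaults.items():
            # setdefault leaves any value the profile already defines intact
            profile.setdefault(key, value)
    return config
```

Because only missing keys are filled in, running the migration is idempotent and never overwrites a profile that was already configured.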
75fe1c0 `verdi`: add message broker configuration options to profile setup Most installations will just need the defaults, which have been set to the default localhost configuration of RabbitMQ, but this now offers the option to administrators to also use a RabbitMQ server that does not run on the same machine as the AiiDA instance itself, or requires actual user authentication. 04 September 2020, 19:20:56 UTC
5623f5e `Profile`: add message broker configuration getter and setter properties Add property getter and setters to the `Profile` class for the configuration parameters of the message broker, that is currently furnished by RabbitMQ. These parameters determine the URI that needs to be used to connect to the message broker. 04 September 2020, 19:20:56 UTC
755fe47 Make the RabbitMQ connection parameters configurable Up till now, the URL used to connect to the RabbitMQ server was hardcoded, meaning it could only connect to localhost over the standard port and with the default credentials. Certain users will need to deploy RabbitMQ on a different machine than the AiiDA instance, so the server details should be configurable. Since this no longer guarantees that the RabbitMQ server is running on localhost, it should also be possible to use SSL, by changing the protocol from `amqp` to `amqps`, and to provide specific user credentials. The `aiida.manage.external.rmq.get_rmq_url` method is responsible for formatting the correct URI. The method takes the values that form the scheme, netloc and path as arguments, whereas optional query parameters can be specified through the keyword arguments. The supported arguments are:
* protocol
* username
* password
* host
* port
* virtual_host
In addition, the following keyword arguments can be specified:
* heartbeat: heartbeat timeout in seconds
* cafile: path to a CA certificate file
* capath: path to a directory of CA certificates
* cadata: base64-encoded CA certificate data
* keyfile: path to a key file
* certfile: path to a certificate file
* no_verify_ssl: disables certificate validation; should be "0" or "1"
Note that the heartbeat, unless explicitly specified, defaults to 600 seconds. 04 September 2020, 19:20:56 UTC
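A minimal sketch of such URI construction, assuming the argument split described above (this is not the actual `get_rmq_url` implementation; defaults and signature are assumptions based on the commit message):

```python
from urllib.parse import quote, urlencode


def get_rmq_url(protocol='amqp', username='guest', password='guest',
                host='127.0.0.1', port=5672, virtual_host='', **kwargs):
    """Build an AMQP connection URI: positional arguments form the scheme,
    netloc and path; keyword arguments become query parameters. The
    heartbeat defaults to 600 seconds unless explicitly specified."""
    kwargs.setdefault('heartbeat', 600)
    # the leading slash of the virtual host is added here, and the name
    # itself is percent-encoded, so users never need to escape it themselves
    return '{}://{}:{}@{}:{}/{}?{}'.format(
        protocol, quote(username, safe=''), quote(password, safe=''),
        host, port, quote(virtual_host, safe=''), urlencode(kwargs))
```

Percent-encoding the virtual host is what makes the `verdi setup` fix above possible: a name containing a forward slash becomes e.g. `my%2Fvhost` in the URI.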
ea033b4 Bump base docker image version. (#4353) The new prerequisites image is based on the latest stable Bionic Beaver (18.04) Ubuntu distribution, which contains fixes for critical vulnerabilities. Additionally, this image contains a fix for `ruamel.yaml` package installation. 04 September 2020, 10:36:22 UTC
7c31bc2 `TemplateReplacerCalculation`: make `files` namespace dynamic (#4348) The `files` namespace is supposed to accept any `SinglefileData` or `RemoteData`, but it was not marked dynamic explicitly. In that case, only explicitly defined ports are accepted, which is not what is intended here. 04 September 2020, 08:16:59 UTC
8f4eb96 `Dict`: allow setting attributes through setitem and `AttributeManager` (#4351) It was not possible to change the value of a key, either directly on the node through setitem, or through setattr via the `AttributeManager` returned by the `dict` property, even though there is no reason not to allow this, as long as the node is not stored. The changes in this commit now allow the following pattern:
node = Dict()
node['x'] = 'Set a value for x'
node.dict.x = 'Overwrite the value for x'
The `__setattr__` on the node itself is intentionally not implemented because it would conflict with all the existing properties inherited from the base classes. 03 September 2020, 19:32:37 UTC
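The "mutable until stored" behavior can be sketched with an illustrative stand-in class (this is not the aiida-core `Dict` node, just the guard pattern it describes):

```python
class Dict:
    """Illustrative stand-in: item assignment is allowed only while the
    node is unstored; once stored, the attributes become immutable."""

    def __init__(self):
        self._attributes = {}
        self._stored = False

    def store(self):
        self._stored = True
        return self

    def __setitem__(self, key, value):
        if self._stored:
            raise RuntimeError('the node is stored and therefore immutable')
        self._attributes[key] = value

    def __getitem__(self, key):
        return self._attributes[key]
```

The real node raises AiiDA's own immutability exception; `RuntimeError` is used here to keep the sketch self-contained.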
0f0dda4 CI: limit upper version of `setuptools<50` for Jenkins build (#4343) Builds on Jenkins started failing after `setuptools==50.0.0` was released on August 30, 2020. This new version gets automatically installed when using a `pyproject.toml` for the build system, but it causes the installation of the `aiida-core` package through `pip` to fail with the exception: ModuleNotFoundError: No module named 'setuptools._distutils' Temporarily limiting the version of `setuptools` in the `pyproject.toml` works around the problem for the time being. The build is also updated to upgrade `pip` before installing the package. 03 September 2020, 10:03:57 UTC
44fe2a7 `SlurmScheduler`: always raise for non-zero exit code (#4332) The `SlurmScheduler` intentionally ignored non-zero exit codes returned by SLURM when asking the status for a number of job ids. This was put in place because SLURM will return a non-zero exit code not only in case of actual errors in attempting to retrieve the status of the requested jobs but also when specifying just a single job that no longer is active. Since the latter is not really an error, yet is difficult to distinguish from a "real" error, the exit code was ignored. However, this could lead to the plugin sometimes incorrectly ignoring a real problem and assuming a job was completed when it was in fact still active. The solution is to use the weird behavior of SLURM that when asking for more than one job, it will never return a non-zero status, even when one or more jobs have finished. That is why, when asking for the status of a single job, we duplicate the job id, such that even when it is no longer active, the exit status will still be zero. 31 August 2020, 08:06:04 UTC
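The duplicate-job-id workaround can be sketched as follows (the exact command and flags are illustrative of the plugin's query, not copied from it):

```python
def joblist_command(jobs):
    """Sketch of the workaround: when querying a single job id, pass it
    twice, so that SLURM's exit status stays zero even when that one job
    is no longer active."""
    if len(jobs) == 1:
        jobs = list(jobs) * 2
    return 'squeue --noheader --jobs={}'.format(','.join(jobs))
```

With two or more ids the list is passed through unchanged, since SLURM already returns a zero exit status in that case.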
0345e61 Merge pull request #4334 from aiidateam/merge/master Merge master 28 August 2020, 06:06:37 UTC
150ce93 Merge remote-tracking branch 'origin/master' into develop 27 August 2020, 21:38:25 UTC
31c4e7f Make `--prepend-text` and `--append-text` options properly interactive (#4318) The `--prepend-text` and `--append-text` options, used for both the `verdi computer setup` and `verdi code setup` commands, were normal options, as opposed to all other options that are interactive options. The `InteractiveOption` is a custom option class type we developed that will present the user with a prompt if it was not explicitly provided on the command line. The `--non-interactive` flag can be used by the user to prevent prompting, as long as a default value is defined. The two options in question were implemented differently, presumably because they take a potentially multiline string, which is not easily defined on a prompt, and therefore these options would instead be defined through a text editor. The prompting of this text editor, if necessary, was however not performed in the option itself, but in the command body. At that point, the prompt cycle of the parameters controlled by `click` is already over. Additionally, since the function checking whether the option had been defined also considered an empty string as undefined, despite it being the default, a specified empty string would still lead to the user being prompted, even when specifying `--non-interactive`. The fix is to make both options proper interactive options just like the rest. To this end, we create the `TemplateInteractiveOption` that works just like the `InteractiveOption`, with the only difference being that it uses a file editor instead of an inline prompt. This change does force both options to pop up their own file editor, whereas before they were joined in a single file: both texts were defined in the same file, separated by a header that we defined. 27 August 2020, 21:18:40 UTC
cb273e7 Rename folder `test.fixtures` to `test.static` (#4219) The name "fixtures" is currently used both for the `pytest` fixtures and for test data in tests/fixtures, such as AiiDA export files. This is confusing and makes searching in the codebase unnecessarily difficult. Here, we rename the test data folder to "static", which indicates the static nature of the files residing there, while avoiding a clash of definition with the `pytest` fixtures residing in `aiida.manage.tests`. 27 August 2020, 20:27:43 UTC
31e981b Merge pull request #4333 from aiidateam/release/1.3.1 Release `v1.3.1` 27 August 2020, 20:24:38 UTC
bed2014 Release `v1.3.1` 27 August 2020, 18:36:13 UTC
f9f6c9c `Runner`: close loop when runner stops if runner created it (#4307) If the loop is not closed, the file handles it managed during its lifetime might not be properly cleaned up, leading to file leaks. Therefore, if the loop is created by the `Runner` upon construction, it should also close it when the runner closes. It cannot be done for loops passed into the constructor, because they might actually still be in use by other parts of the code. 27 August 2020, 18:32:21 UTC
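The ownership rule can be sketched as follows (the `Runner` at the time wrapped a tornado loop; `asyncio` is used here only to keep the example self-contained):

```python
import asyncio


class Runner:
    """Sketch of loop ownership: close the loop on ``close()`` only if
    this runner created it itself."""

    def __init__(self, loop=None):
        self._loop_is_owned = loop is None
        self._loop = loop if loop is not None else asyncio.new_event_loop()

    def close(self):
        # a loop passed in by the caller may still be in use elsewhere,
        # so only close loops this runner created
        if self._loop_is_owned:
            self._loop.close()
```

Recording ownership at construction time avoids having to guess later whether closing the loop is safe.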
856fc06 `ArithmeticAddParser`: attach output before checking for negative value (#4267) For the recent documentation revamp, the `ArithmeticAddCalculation` and `ArithmeticAddParser` were simplified significantly, by getting rid of as much unnecessary code as possible, because they are literally included as examples in the basic how-to on creating a code plugin. A part that was removed was the `settings` input node that allowed changing the behavior of the parser to accept negative sums instead of returning an exit code. This, however, in turn caused the Reverse Polish Notation tests on Jenkins to fail. Since these tests are not required to pass, the changes were merged without a fix. In the scope of the RPN tests, negative sums are fine, as they are anyway just a mechanism to introduce some kind of failure mode for demonstration purposes. To fix this, without making the logic of the parser more complex, we simply change the order of attaching the output node and performing the final check. Since the RPN workchains only check if the output node is there and do not care about the exit status of the calculation, they will happily continue, and the code of the parser keeps the same complexity. 27 August 2020, 16:59:14 UTC
f23078e Remove superfluous `ERROR_NO_RETRIEVED_FOLDER` from `CalcJob` subclasses The `ERROR_NO_RETRIEVED_FOLDER` is now defined on the `CalcJob` base class and the `CalcJob.parse` method already checks for the presence of the retrieved folder and returns the exit code if it is missing. This allows us to remove the similar exit codes that are currently defined on the calculation plugins shipped with `aiida-core`, `ArithmeticAddCalculation` and `TemplateReplacerCalculation`, as well as the check for the presence of the `retrieved` output from the corresponding parsers. The fact that this is now checked in the `CalcJob` base class means that `Parser` implementations can safely assume that the retrieved output node exists. 27 August 2020, 13:30:47 UTC
31e3c4e Add infrastructure to parse scheduler output for `CalcJobs` Add a new method `Scheduler.parse_output` that takes three arguments: `detailed_job_info`, `stdout` and `stderr`, which are the dictionary returned by `Scheduler.get_detailed_job_info` and the content of scheduler stdout and stderr files from the repository, respectively. A scheduler plugin can implement this method to parse the content of these data sources to detect standard scheduler problems such as node failures and out of memory errors. If such an error is detected, the method can return an `ExitCode` that should be defined on the calculation job class. The `CalcJob` base class already defines certain exit codes for common errors, such as an out of memory error. If the detailed job info, stdout and stderr from the scheduler output are available after the job has been retrieved, and the scheduler plugin that is used has implemented `parse_output`, it will be called by the `CalcJob.parse` method. If an exit code is returned, it is set on the corresponding node and a warning is logged. Subsequently, the normal output parser is called, if any was defined in the inputs, which can then of course check the node for the presence of an exit code. It then has the opportunity to parse the retrieved output files, if any, to try and determine a more specific error code, if applicable. Returning an exit code from the output parser will override the exit code set by the scheduler parser. This is why that exit code is also logged as a warning so that the information is not completely lost. This choice does change the old behavior when an output parser would return `None` which would be interpreted as `ExitCode(0)`. However, now if the scheduler parser returned an exit code, it will not be overridden by the `None` of the output parser, which is then essentially ignored. 
This is necessary, because otherwise, basic parsers that don't return anything even if an error might have occurred will always just override the scheduler exit code, which is not desirable. 27 August 2020, 13:30:47 UTC
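The kind of scheduler-output parsing this infrastructure enables can be sketched as a small standalone function; the `ExitCode` tuple, status numbers, and error strings below are simplified stand-ins for illustration, not the actual `aiida-core` API:

```python
from collections import namedtuple

# Simplified stand-in for aiida's ExitCode class.
ExitCode = namedtuple('ExitCode', ['status', 'message'])

ERROR_OUT_OF_MEMORY = ExitCode(120, 'the job ran out of memory')
ERROR_NODE_FAILURE = ExitCode(121, 'a compute node failed')


def parse_output(detailed_job_info, stdout, stderr):
    """Scan scheduler output for common failure modes.

    Returns an ``ExitCode`` if a known error is detected, ``None`` otherwise,
    in which case the regular output parser proceeds as usual.
    """
    text = ((stdout or '') + (stderr or '')).lower()
    if 'oom-kill' in text or 'out of memory' in text:
        return ERROR_OUT_OF_MEMORY
    if 'node failure' in text:
        return ERROR_NODE_FAILURE
    return None
```

A scheduler plugin implementing something like this lets `CalcJob.parse` flag a node with a generic scheduler error before the output parser gets a chance to refine it.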
477fe30 Add check to `verdi` to ensure extra dependencies are not imported (#4324) The command `verdi devel check-undesired-imports` is added that checks that when loading `verdi` there are no libraries imported that are part of one of the extras and not of the base requirements. If such libraries were imported, `verdi` would raise an exception during normal execution for users who have not installed the extras. This command showed that `seekpath` was imported in the `aiida.tools` top level module; it has now been moved to prevent this. 27 August 2020, 08:03:11 UTC
bc52ab1 `verdi computer test`: fix bug in spurious output test (#4316) The test that checks for spurious output when executing a normal command on a computer over the transport had a bug in it that went unnoticed because the code path was not tested. If the `stderr` of the command contained any output, the command would raise because the test that is called, `_computer_test_no_unexpected_output`, would incorrectly return a tuple of length one instead of two in that case. In addition to adding tests to hit this code path, the message that is printed in the case of non-empty stdout or stderr is deduplicated and adapted to be a bit clearer, and refers directly to the documentation instead of through a GitHub issue. 27 August 2020, 07:21:35 UTC
50988f5 Run the test install workflow with new pip dependency resolver (#4320) As of version 20.2, `pip` ships with a new dependency resolver; however, since it is not yet ready for everyday use, given that it is a lot more strict in resolving dependency conflicts, it can only be enabled with the flag `--use-feature=2020-resolver`. They plan to make this the default with v20.3, which will be released around October. To anticipate this change, we already switch to the new resolver in the normal continuous integration workflow. In addition, the test workflow is updated to use both the old and new resolvers, as well as testing installation without extras and with all extras. 26 August 2020, 20:16:34 UTC
ac801c9 Dependencies: update minimum requirement `paramiko~=2.7` (#4222) Version 2.7 of paramiko finally brings support for the OpenSSH private key format, which has been the default on MacOS for some time. This now no longer requires those users to create keys with the PEM format. 26 August 2020, 10:17:27 UTC
5977dac `verdi status`: distinguish database schema version incompatible (#4319) If `verdi status` was called for a profile whose database schema version is incompatible with the current code, a generic error was thrown that no connection could be made to PostgreSQL. The connection is often fine; it is just that AiiDA prohibits it until the database is made compatible. Often one simply has to migrate the database after installing a newer version of the code. This case is now caught separately and the user is pointed to `verdi database migrate`. 23 August 2020, 13:06:46 UTC
377f137 `CalcJob`: improve scheduler resource validation error message (#4312) The `NodeNumberJobResource.validate_resources` contained a bug, such that when one of the default fields was set to `None` it would not be caught. Instead it would bubble up and be caught in the validator of the `CalcJob` input namespace, `validate_calc_job`, which did catch the `TypeError` thrown by `Scheduler.validate_resources`. The `TypeError` was thrown by `None` being cast to `int` in `NodeNumberJobResource.validate_resources`; however, at this stage, a `None` value should be accepted. Only if an actual value is specified should it be checked to be a valid integer. The check that sufficient fields have non-`None` values is done later on. This will provide a more intuitive error message, instead of the vague message 'field must be greater than or equal to one' that was raised due to the bug even when the field was not explicitly set by the user, causing even more confusion. Note that since the bug in the resource validation is now fixed, and it therefore properly raises `ValueError` in case of a problem, the input validator no longer needs to, nor should, catch the `TypeError`. Finally, the `__str__` method is implemented for the `Scheduler` class such that it formats in a nicer way in error messages. 21 August 2020, 15:54:51 UTC
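The fixed validation logic can be illustrated with a small standalone function; the field names mirror `NodeNumberJobResource`, but this is a simplified sketch, not the actual implementation:

```python
def validate_resources(resources):
    """Validate job resource fields, leaving unspecified (None) values alone.

    A ``None`` value means 'not set' and is accepted at this stage; only
    values that are actually specified must be castable to a positive
    integer. Whether enough fields are set is checked in a later, separate
    step, which is what produces the intuitive error message.
    """
    for key in ('num_machines', 'num_mpiprocs_per_machine', 'tot_num_mpiprocs'):
        value = resources.get(key)
        if value is None:
            continue  # unspecified: fine at this stage
        try:
            integer = int(value)
        except (TypeError, ValueError):
            raise ValueError(f'`{key}` must be an integer, got: {value}')
        if integer < 1:
            raise ValueError(f'`{key}` must be greater than or equal to one')
        resources[key] = integer
    return resources
```

With this shape, the `TypeError` from casting `None` never occurs, so the namespace validator no longer needs to catch it.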
e8d5e76 Deprecate methods that refer to a computer's label as name (#4309) All entities use `label` as the human readable string identifier, but `Computer` was using `name`. This was already changed in the front-end ORM in a previous commit where a `label` property was introduced and the old `name` properties were deprecated; however, a few derivative methods in other classes were missed and still contain "name". These methods are now also deprecated: * `verdi computer rename`: use `verdi computer relabel` instead * `Code.get_computer_name`: use `self.computer.label` instead * `Code.get_full_text_info`: will be removed * `RemoteData.get_computer_name`: use `self.computer.label` instead * `Transport.get_valid_transports`: use `get_entry_point_names` instead Finally, deprecations of `Computer` getters and setters as introduced in commit 592dd365658b0b were still being used internally, leading to a lot of deprecation warnings. These have now been properly replaced. 20 August 2020, 15:29:35 UTC
f250a04 Remove duplicated `pk` property from `BackendModelEntity` (#4310) Define `BackendNode` properly as an abstract class by defining the metaclass to be `abc.ABCMeta`. This will guarantee that if abstract methods are added, they are also added in the concrete classes, because classes with unimplemented abstract methods cannot be instantiated. Also remove the duplicated `pk` properties from the `SqlaModelEntity` and `DjangoModelEntity` classes since it is already implemented in their base class `BackendEntity`. 20 August 2020, 15:01:13 UTC
fe8333e Add support for "peer" authentication with PostgreSQL (#4255) PostgreSQL allows "peer" authentication to connect to the database. This is signaled by a `hostname` that is set to `None` through `pgsu`. In this case, the `hostname` part of the connection string of the SQLAlchemy engine should be left empty. If it is `None` it is converted to an empty string otherwise it would be converted to the string literal "None". 19 August 2020, 21:07:32 UTC
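The gist of the fix can be sketched with a hypothetical helper (not the actual `aiida-core` code) that builds the connection URI:

```python
def build_connection_string(user, password, hostname, port, name):
    """Build a PostgreSQL connection URI for the SQLAlchemy engine.

    With 'peer' authentication, `pgsu` reports the hostname as None; it must
    become an empty string in the URI, because interpolating None directly
    would produce the literal string 'None' as the host.
    """
    hostname = hostname if hostname is not None else ''
    return f'postgresql://{user}:{password}@{hostname}:{port}/{name}'
```

With `hostname=None` the URI contains an empty host part, which PostgreSQL interprets as a local (peer-authenticated) connection.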
45d4cfa Bump base `aiida-prerequisites` docker image to v0.2.0 (#4308) The `latest` tag just points to latest commit on `develop` branch, so it is better to be more specific and specify an exact release. 18 August 2020, 14:47:33 UTC
f2d1e94 Docs: fixed incorrect line numbers and formatting in Topics section (#4294) Fixed incorrect line numbers in the code snippets of the `Parser` topic and removed extraneous colons that prevented literal code blocks from properly being displayed. 14 August 2020, 12:21:07 UTC
27171d0 `QueryBuilder`: Accept empty string for `entity_type` in `append` method (#4299) The `append` method allows the entity that should be appended to be defined by a class, through the `cls` argument, or as a string, through the `entity_type` argument. The logic that validates that at least one and at most one of these two arguments was defined by the caller was buggy, since it did not compare with `None` explicitly but checked for generic falsy values. This led to `entity_type=''` raising an exception, even though this is a valid entity type string and corresponds to the base `Node` class. Co-authored-by: Sebastiaan Huber <mail@sphuber.net> 13 August 2020, 10:53:18 UTC
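The essence of the bug is the classic falsiness-versus-`None` check; a reduced sketch (hypothetical helpers, not the actual `QueryBuilder` code):

```python
def count_given(cls=None, entity_type=None):
    """Count how many of the two mutually exclusive arguments were provided.

    The buggy version used truthiness (``if cls or entity_type``), so a
    valid empty-string entity type was treated as 'not given'. Comparing
    explicitly with None fixes this.
    """
    return sum(1 for argument in (cls, entity_type) if argument is not None)


def validate_append_arguments(cls=None, entity_type=None):
    """Raise unless exactly one of the two arguments was defined."""
    if count_given(cls, entity_type) != 1:
        raise ValueError('exactly one of `cls` or `entity_type` must be defined')
```

With the explicit `is not None` comparison, `entity_type=''` (the base `Node` class) passes validation, while providing neither or both arguments still raises.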
5786921 Add test fixtures that allow running tests only for specific db backend (#4279) The new fixtures `skip_if_not_sqlalchemy` and `skip_if_not_django` are added. Any test using these fixtures will be skipped if the loaded profile does not use the SqlAlchemy or Django backend, respectively. This allows some tests that were defined in `tests/backends/aiida_sqlalchemy` to be moved to a backend-agnostic location. Typically, in order to skip tests conditionally, the specific decorator `pytest.mark.skipif` is used. However, its condition is evaluated at startup time, at which point the test environment including the test profile has not yet been loaded, making it impossible to determine what the database backend is and therefore whether the test should be run. A fixture is executed only after the profile is loaded, making it possible to know the backend. A few other tests in the backend-specific folder were removed since they were already covered in the corresponding `tests/orm/implementation` files. 12 August 2020, 16:37:48 UTC
153cc5f CI: Unpin python version (#4290) The virtual environment of runners on Github Actions had an issue, where installing a different pyyaml version than the one present resulted in an error. The issue has been resolved in the latest image, and so the workaround of specifically requesting the (outdated) python version 3.7.7 can be dropped. 12 August 2020, 06:35:40 UTC
1fdcf0f rerun flaky tests (#4291) Over time, we've accumulated a handful of tests that pass just fine most of the time, but fail every now and then when the bits align with the wrong star. While the right thing to do would be a deep dive into astrology, there's always so many other things to do! As a mitigation strategy, this PR marks those tests as 'flaky' and uses the pytest-rerunfailures plugin to automatically try re-running those tests a few times before considering them failed. Note that this also makes it easier to identify flaky tests, a simple `git grep mark.flaky` will do the trick. 10 August 2020, 08:39:53 UTC
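With the `pytest-rerunfailures` plugin installed, marking a test as flaky is a one-liner; the test body below is a hypothetical placeholder:

```python
import pytest


@pytest.mark.flaky(reruns=2)  # rerun up to 2 times before reporting a failure
def test_sometimes_fails():
    """With the plugin active, a failure triggers an automatic rerun."""
    # hypothetical non-deterministic condition standing in for a real flaky test
    assert True
```

And, as the message notes, `git grep mark.flaky` then lists every test that has been flagged this way.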
46af330 Docs: Fix broken link to work chain exit codes (#4284) 29 July 2020, 09:30:41 UTC
592dd36 Deprecate getter and setter methods of `Computer` properties (#4252) From `aiida-core==1.0.0`, we have started using properties for getters and setters of the basic attributes of ORM entities. The rule of thumb here is that if the attribute corresponds to a column on the underlying database model, a property is used. In addition, `label` is the preferred name for the third entity identifier, alongside the ID and UUID. This is already the case for most entities, except for `Computer` which is still using `name`. Here, `name` is deprecated and replaced by `label`. The changes are not yet propagated to the backend, which will be done once the deprecated resources are fully removed. This is fine because the backend is not part of the public API, so it doesn't have to go through a deprecation path. The `name` keyword in `Computer.objects.get` is also deprecated and replaced by `label`. 27 July 2020, 06:29:37 UTC
bced84e Implement `skip_orm` option for SqlAlchemy `Group.remove_nodes` (#4214) The current implementation of `Group.remove_nodes` is very slow. For a group of a few tens of thousands of nodes, removing a thousand can take more than a day. The same problem exists for `add_nodes` which is why a shortcut was added to the backend implementation for SqlAlchemy. Here, we do the same for `remove_nodes`. The `SqlaGroup.remove_nodes` now accepts a keyword argument `skip_orm` that, when True, will delete the nodes by directly constructing a delete query on the join table. 24 July 2020, 13:43:37 UTC
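The idea of `skip_orm` (bypass the ORM and delete rows from the group-node join table in a single query) can be sketched with plain SQL; the table and column names below are illustrative, not AiiDA's actual schema:

```python
import sqlite3

# In-memory database standing in for PostgreSQL, with a mock join table.
connection = sqlite3.connect(':memory:')
connection.execute('CREATE TABLE db_dbgroup_dbnodes (dbgroup_id INT, dbnode_id INT)')
connection.executemany(
    'INSERT INTO db_dbgroup_dbnodes VALUES (?, ?)',
    [(1, node_id) for node_id in range(10_000)],
)

# One bulk DELETE on the join table instead of loading and removing each
# node through the ORM one by one: this is what `skip_orm=True` enables.
node_ids = list(range(500))
placeholders = ','.join('?' * len(node_ids))
connection.execute(
    f'DELETE FROM db_dbgroup_dbnodes WHERE dbgroup_id = ? AND dbnode_id IN ({placeholders})',
    [1] + node_ids,
)
remaining = connection.execute('SELECT COUNT(*) FROM db_dbgroup_dbnodes').fetchone()[0]
connection.close()
```

The speedup comes from issuing a single statement against the join table rather than one ORM round trip per node.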
3a4eff7 `Transport`: add option to not use a login shell for all commands (#4271) Both the `LocalTransport` as well as the `SshTransport` were using a bash login shell, i.e., using the `-l` flag in the bash commands, in order to properly load the user environment which may contain crucial environment variables to be set or modules to be loaded. However, for certain machines, the login shell will produce spurious output that prevents AiiDA from properly parsing the output from the commands that are executed. The recommended approach is to remove the code that is producing the output, but this is not always within the control of the user. That is why a `use_login_shell` option is added to the `Transport` class that switches the use of a login shell. The new option is added to the `verdi computer configure` command and as such is stored in the `AuthInfo`. The new logic affects all bash commands that are executed, including the `gotocomputer` command that follows a slightly different code path. 23 July 2020, 08:25:28 UTC
ef1caa0 Fix bug in `aiida.engine.daemon.execmanager.retrieve_files_from_list` (#4275) The `retrieve_files_from_list` would loop over the instructions of the `retrieve_list` attribute of a calculation job and for each entry define the variables `remote_names` and `local_names` which contain the filenames of the remote files that are to be retrieved and with what name locally. However, for the code path where the element of `retrieve_list` is a list or tuple and the first element contains no wildcard characters, the `remote_names` variable is not defined, meaning that the value of the previous iteration would be used. This was never detected because this code path was not actually tested. This bug would only affect `CalcJob`s that specified a `retrieve_list` that contained an entry of the form: ['some/path', 'some/path', 0] where the entry is a list and the first element does not contain a wildcard. 22 July 2020, 10:10:42 UTC
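The bug boils down to a variable assigned in only some branches of a loop; a reduced sketch of the fixed shape (a simplified stand-in, not the actual `execmanager` code; wildcards are detected but not expanded here):

```python
def resolve_remote_names(retrieve_list):
    """Resolve each retrieve instruction to the remote filenames to fetch.

    Each entry is either a plain string or a (remote, local, depth) tuple.
    The fix: ``remote_names`` is assigned on *every* branch, so a tuple
    entry whose first element has no wildcard no longer silently reuses the
    value left over from the previous iteration.
    """
    resolved = []
    for instruction in retrieve_list:
        if isinstance(instruction, (list, tuple)):
            remote, _local, _depth = instruction
            if '*' in remote:
                remote_names = [f'<glob:{remote}>']  # placeholder for glob expansion
            else:
                remote_names = [remote]  # the branch the original code left out
        else:
            remote_names = [instruction]
        resolved.extend(remote_names)
    return resolved
```

In the buggy version, the `else` branch inside the tuple case was missing, so a `['some/path', 'some/path', 0]` entry inherited `remote_names` from whatever instruction came before it.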
a5829e0 Make the loglevel of the daemonizer configurable (#4276) The loglevel of the daemonizer, currently `circus`, was hardcoded in the code of the daemon client `aiida.engine.daemon.client.DaemonClient`, even though the configuration options already provided a way to change the logging level for the `circus` logger. The hardcoded value is now replaced by fetching the value from the profile configuration. 21 July 2020, 10:47:12 UTC
0d7baa5 Add benchmark workflow (#4270) 16 July 2020, 14:03:54 UTC
67791f6 CI: Run `test-install` workflow only on main repository (#4269) Without these guards, it would also run on forks. 15 July 2020, 10:17:45 UTC
38aece4 Docs: use new page rank feature on ReadTheDocs (#4217) ReadTheDocs just introduced a way to control the search rank of documentation pages, allowing us to push hits in the autogenerated API docs further down in the list of search results. 15 July 2020, 07:15:59 UTC
30c9486 Docs: various fixes for the command line reference (#4268) The reference of the command line interface is generated automatically by the `verdi-autodocs` pre-commit hook, which uses `click` itself to format the help strings of the top-level commands. A few changes are made to improve the appearance of the generated docs: * Specify an explicit maximum width, by setting `terminal_width` when the `Context` object is constructed. It is set to 90 because with the current styling of the docs, this nicely fills the code boxes that are rendered for the documentation * Switch from `::` markers to `.. code:: console` in front of each command help string. This ensures the code is properly formatted and certain keywords are not colored because they are interpreted as Python keywords. * The docstrings of some `verdi` commands were adapted such that they are formatted properly. The `\b` magic marker is used to instruct `click` to respect literal whitespace, such as newline characters. 14 July 2020, 20:28:36 UTC
73a5cca CI: Update `setup-python` action to v2 in order to pin Python to 3.7.7 (#4265) Python 3.7.8, which was installed by default, has some issues with our requirement for `pyyaml==5.1.2`. Since we cannot relax this requirement very easily, we temporarily work around it by pinning 3.7.7 for the tests job, which is the only job requiring Python 3.7. Note that this required `setup-python@v2`, since v1 does not allow specifying an exact version. 14 July 2020, 15:41:51 UTC
aad2d61 `ArithmeticAddParser`: attach output before checking for negative value (#4267) For the recent documentation revamp, the `ArithmeticAddCalculation` and `ArithmeticAddParser` were simplified significantly, by getting rid of as much of the unnecessary code as possible, because they are literally included as the example in the basic how-to on creating a code plugin. A part that was removed was the `settings` input node that allowed changing the behavior of the parser to accept negative sums instead of returning an exit code. This, however, in turn caused the Reverse Polish Notation tests on Jenkins to fail. Since these tests are not required to pass, the changes were merged without a fix. In the scope of the RPN tests, negative sums are fine; the check is anyway just a mechanism to introduce some kind of failure mode for demonstration purposes. To fix this, without making the logic of the parser more complex, we simply change the order of attaching the output node and performing the final check. Since the RPN workchains only check if the output node is there and do not care about the exit status of the calculation, they will happily continue and the parser keeps the same complexity. 14 July 2020, 13:55:03 UTC
b316ab3 Docs: fix typos and add sub-headers to "Using virtual environments" (#4263) 14 July 2020, 10:15:47 UTC
18d2258 `verdi status`: do not except when no profile is configured (#4253) Instead, print that no profile could be found and suggest that one is set up with `verdi quicksetup` or `verdi setup`. To test this, a new pytest fixture is created that creates a completely new and independent configuration folder, along with fixtures to create profiles to add to the config and caching configuration files. Similar code already exists for normal unittests, but this can be removed once those have been refactored to pytests. 14 July 2020, 08:23:46 UTC
dd01f68 Add section on SSH passphrase storage with osx keychain (#4259) Co-authored-by: Carl Simon Adorf <carl.simon.adorf@gmail.com> 14 July 2020, 01:34:35 UTC
b8664d1 Docs: fix pip install command for zsh (#4261) In zsh, `pip install` targets that include extras (i.e. `package[extra]`) must be wrapped in quotation marks. Co-authored-by: Chris Sewell <chrisj_sewell@hotmail.com> 14 July 2020, 01:15:57 UTC
1df6689 Docs: move "Running on supercomputers" section to "Run codes" (#4242) It is important for users to be aware of these mechanisms and the potential problems they can cause if not respected, before users run into problems. That is why it is better to move these instructions as close as possible to where most users will learn how to connect to remote clusters. 13 July 2020, 12:47:40 UTC
99e608c Fix pre-commit configuration to reinstate `pylint` running (#4258) The `pylint` hook was ignored because the `hook` key in the `local` category was defined twice. In this case `pre-commit` won't complain that the file is invalid; the second declaration just overwrites the first. In addition, the hook was missing the `entry` key, which is required, and the `types` key, which restricts it to running only on Python files. 12 July 2020, 08:59:44 UTC
9636ac7 Docs: move in the documentation on caching (#4228) Split off the technical details to a Topics section. It is placed under the "Provenance" topic, as I don't think it needs its own top level section. Even though caching only applies to calculation jobs at the moment, and so one could argue to place it there, really this is a current implementation detail and this is explained in the limitations. Since it has fundamentally to do with the provenance, this seems like the best fit. Finally, an `important` block is added at the beginning of the how-to to explain why caching is not enabled by default and to warn users of the caveats, which links to the topics section. 10 July 2020, 16:09:55 UTC
6b2f4dd CI: do not fail the build when the coverage upload fails (#4239) This happens quite a bit and will fail the entire build. Since GHA does not yet allow rerunning a single failed job, we have to restart the entire build. Since on top of that we are currently just using the coverage as an indicator, it seems excessive to fail the entire build if the coverage upload of a single job fails. 09 July 2020, 16:58:24 UTC
d7a250b Docs: small fix in plugin code how-to launch script section (#4236) The information printed by the launch script was not described correctly. 09 July 2020, 15:41:12 UTC
05a0dbe Docs: small code fixes for plugin codes how-to (#4234) 09 July 2020, 12:05:27 UTC
a968523 Docs: revise the how-to write plugin for external codes (#4208) The how-to section on how to create a calculation job and parser plugin is significantly changed, in order to make it more self-contained. The new version really attempts to give a complete manual from start to finish to constructing new plugins and running them. This therefore includes creating a minimal package with entry points, which is necessary for the `Parser` to be able to be specified as the parser for the calculation job. It also includes a launch script. The `ArithmeticAddParser` was copied to provide a simpler version that does not do any error checking and returning of exit codes. This is done such that the first time the parser interface is explained, there is as little detail as possible. The code is included in the existing parser module such that it can be tested and the snippet is literally included to prevent it from going out of sync with the actual code. 08 July 2020, 20:22:26 UTC
4dfc01f CI: run all jobs on Python 3.8 (#4229) The `pre-commit` and `verdi` steps of the CI workflow started failing. Weirdly enough, it only seems to fail for the `aiidateam` repository but the same branches on some other forks run fine. It might be due to the Python version being used not being compatible with the runner, so we update it to 3.8 for now. 08 July 2020, 15:54:16 UTC
1a4cada Add defaults for configure options of the `SshTransport` plugin (#4223) The options for the `verdi computer configure` command are created dynamically based on the `_valid_connect_options` and the `_valid_auth_options` class attributes of the transport plugin class. These are interactive options whose defaults are context based, meaning they can be defined by a previously existing transport configuration. However, the "default" defaults, if you will, i.e. the defaults when the computer has never been configured before, were not defined, so they were also not printed in the help message string of the command. We now explicitly define these base defaults. 07 July 2020, 09:45:30 UTC
5e5d5e0 Docs: Add computer and code setup how-to (#4216) This PR's aim is to re-instate documentation for setting up computers and codes. - Firstly it merges `docs/source/get_started/codes` and `docs/source/get_started/computers` into "How to run external codes" - The computers text then had a number of references to SSH sections. These have been consolidated into "How to setup SSH connections". - The computer text also referenced the schedulers, which have been moved into a topic on "Batch Job Schedulers" - Finally, relevant CLI help texts have been improved Co-authored-by: Leopold Talirz <leopold.talirz@gmail.com> Co-authored-by: Sebastiaan Huber <mail@sphuber.net> 06 July 2020, 16:17:56 UTC
7af3b48 Add the `--paused` flag to `verdi process list` (#4213) This flag will filter for processes that are currently paused which is useful to find calculation jobs that may have hit the exponential backoff mechanism, among other things. 02 July 2020, 05:50:42 UTC
855ae82 Remove all files from the pre-commit exclude list (#4196) Except for the documentation of course, which should remain excluded. Also move the `pylint` pre-commit hook back under `local`. The idea behind putting it under the remote repos was to profit from the separate virtual environment that is created with the exact version specified, in order to prevent clashes with requirements of other projects being developed in the same virtual environment. However, this approach leads to many spurious false positive import-errors because `pylint` cannot find all other third party dependencies. This can be fixed by specifying `language: system`, but this just forces the normal virtual environment to be used, rendering the whole point of using the remote repos moot. 01 July 2020, 21:20:27 UTC