Vault data disappears when upgrading 1.23.0 -> 1.25.0 #3111

Closed
vitorbaptista opened this issue Jan 5, 2023 · 21 comments · Fixed by #3133
Labels
bug (Something isn't working) · enhancement (New feature or request) · troubleshooting (There might be a bug or it could be user error; more info needed)

Comments

@vitorbaptista

Subject of the issue

I have a Vaultwarden deployment using docker-compose, currently on 1.23.0. I tried upgrading to 1.27.0, but then my vault was empty (I was able to log in, though). I tried all versions in between, and the only one that worked was 1.24.0.

Deployment environment

  • vaultwarden version: 1.23.0
  • Install method: Docker-compose

  • Clients used: Web Vault

  • Other relevant details:

Steps to reproduce

I'm not sure if it's something specific to my installation, but I guess you could reproduce it by:

  1. Install vaultwarden:1.23.0 (Docker container), set up with SQLite
  2. Create an account and add passwords to your vault
  3. Upgrade the container to 1.25.0

Expected behaviour

All passwords in the vault would still be there.

Actual behaviour

The vault is empty.

Troubleshooting data

These are the logs on Vaultwarden 1.27.0, but the error is the same in 1.25.0 and 1.26.0.

/--------------------------------------------------------------------\
|                        Starting Vaultwarden                        |
|                           Version 1.27.0                           |
|--------------------------------------------------------------------|
| This is an *unofficial* Bitwarden implementation, DO NOT use the   |
| official channels to report bugs/features, regardless of client.   |
| Send usage/configuration questions or feature requests to:         |
|   https://vaultwarden.discourse.group/                             |
| Report suspected bugs/issues in the software itself at:            |
|   https://github.com/dani-garcia/vaultwarden/issues/new            |
\--------------------------------------------------------------------/
[INFO] No .env file found.
[DEPRECATED]: `SMTP_SSL` or `SMTP_EXPLICIT_TLS` is set. Please use `SMTP_SECURITY` instead.
[2023-01-05 19:25:16.020][vaultwarden::api::notifications][INFO] Starting WebSockets server on 0.0.0.0:3012
[2023-01-05 19:25:16.024][start][INFO] Rocket has launched from http://0.0.0.0:80
[2023-01-05 19:25:27.090][request][INFO] POST /identity/connect/token
[2023-01-05 19:25:27.098][response][INFO] (login) POST /identity/connect/token => 200 OK
[2023-01-05 19:25:27.240][request][INFO] GET /api/sync?excludeDomains=true
[2023-01-05 19:25:27.653][panic][ERROR] thread 'rocket-worker-thread' panicked at 'Error loading attachments: DatabaseError(Unknown, "too many SQL variables")': src/db/models/attachment.rs:196
   0: vaultwarden::init_logging::{{closure}}
   1: std::panicking::rust_panic_with_hook
   2: std::panicking::begin_panic_handler::{{closure}}
   3: std::sys_common::backtrace::__rust_end_short_backtrace
   4: rust_begin_unwind
   5: core::panicking::panic_fmt
   6: core::result::unwrap_failed
   7: tokio::runtime::context::exit_runtime
   8: tokio::runtime::scheduler::multi_thread::worker::block_in_place
   9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  10: vaultwarden::api::core::ciphers::sync::into_info::monomorphized_function::{{closure}}
  11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  14: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  15: tokio::runtime::task::core::Core<T,S>::poll
  16: tokio::runtime::task::harness::Harness<T,S>::poll
  17: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
  18: tokio::runtime::scheduler::multi_thread::worker::Context::run
  19: tokio::macros::scoped_tls::ScopedKey<T>::set
  20: tokio::runtime::scheduler::multi_thread::worker::run
  21: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  22: tokio::runtime::task::core::Core<T,S>::poll
  23: tokio::runtime::task::harness::Harness<T,S>::poll
  24: tokio::runtime::blocking::pool::Inner::run
  25: std::sys_common::backtrace::__rust_begin_short_backtrace
  26: core::ops::function::FnOnce::call_once{{vtable.shim}}
  27: std::sys::unix::thread::Thread::new::thread_start
  28: start_thread
  29: clone
[2023-01-05 19:25:27.661][_][ERROR] Handler sync panicked.
[2023-01-05 19:25:27.661][_][WARN] A panic is treated as an internal server error.
[2023-01-05 19:25:27.661][_][WARN] No 500 catcher registered. Using Rocket default.
[2023-01-05 19:25:27.669][response][INFO] (sync) GET /api/sync?<data..> => 500 Internal Server Error
[2023-01-05 19:25:28.233][vaultwarden::api::notifications][INFO] Accepting WS connection from 172.22.0.3:38398
@BlackDex
Collaborator

BlackDex commented Jan 5, 2023

May I ask how many vault items you have? (You can see this in the admin environment; please provide both your personal items and the items of all orgs you are a member of.)
It looks like you have so many items that the query gets overloaded.

Also, could you try the Alpine-based images to see if those work?

@vitorbaptista
Author

@BlackDex thanks for the quick reply

I couldn't find the number of vault items, but it's in the thousands (I'd guess between 2,000 and 5,000). Interestingly, I tried logging in with a different user that had far fewer vault items and it worked fine.

I tried the 1.27.0-alpine image and I see the same error in the logs, but it does show the vault names. Maybe the Alpine version supports more vault items?

[2023-01-06 13:08:29.859][panic][ERROR] thread 'rocket-worker-thread' panicked at 'Error loading attachments: DatabaseError(Unknown, "too many SQL variables")': src/db/models/attachment.rs:196
   0: vaultwarden::init_logging::{{closure}}
   1: std::panicking::rust_panic_with_hook
   2: std::panicking::begin_panic_handler::{{closure}}
   3: std::sys_common::backtrace::__rust_end_short_backtrace
   4: rust_begin_unwind
   5: core::panicking::panic_fmt
   6: core::result::unwrap_failed
   7: tokio::runtime::context::exit_runtime
   8: tokio::runtime::scheduler::multi_thread::worker::block_in_place
   9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  10: vaultwarden::api::core::ciphers::sync::into_info::monomorphized_function::{{closure}}
  11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  14: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  15: tokio::runtime::task::core::Core<T,S>::poll
  16: tokio::runtime::task::harness::Harness<T,S>::poll
  17: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
  18: tokio::runtime::scheduler::multi_thread::worker::Context::run
  19: tokio::macros::scoped_tls::ScopedKey<T>::set
  20: tokio::runtime::scheduler::multi_thread::worker::run
  21: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  22: tokio::runtime::task::core::Core<T,S>::poll
  23: tokio::runtime::task::harness::Harness<T,S>::poll
  24: tokio::runtime::blocking::pool::Inner::run
  25: std::sys_common::backtrace::__rust_begin_short_backtrace
  26: core::ops::function::FnOnce::call_once{{vtable.shim}}
  27: std::sys::unix::thread::Thread::new::thread_start
[2023-01-06 13:08:29.867][_][ERROR] Handler sync panicked.
[2023-01-06 13:08:29.867][_][WARN] A panic is treated as an internal server error.
[2023-01-06 13:08:29.867][_][WARN] No 500 catcher registered. Using Rocket default.

@BlackDex
Collaborator

BlackDex commented Jan 6, 2023

You can use the admin interface to see the number of items.
I would really like to know the number so that I can try to replicate this.

@vitorbaptista
Author

vitorbaptista commented Jan 6, 2023

@BlackDex I couldn't get to the admin interface, but I queried the DB directly:

sqlite> SELECT COUNT(*) FROM ciphers;
43931

Much more than I expected. Does that work for you? Or is there another query I can run on the DB that would help?

@BlackDex
Collaborator

BlackDex commented Jan 6, 2023

That is a lot. Probably not all of those are your own ciphers, I think.
The admin interface is at vw.domain.tld/admin.

@BlackDex added the troubleshooting label on Jan 6, 2023
@BlackDex
Collaborator

BlackDex commented Jan 7, 2023

Well, it looks like the maximum number of variables SQLite supports by default is 32766. That is less than the number of ciphers you reported.

I'll have to look into this and see if we can solve this in a decent way without slowing everything down again.

I also wonder how a key rotation will perform, because that will probably take a long time, and will also cause a lot of queries.

I also wonder if this breaks on MySQL or PostgreSQL.

@BlackDex added the bug and enhancement labels on Jan 7, 2023
@sorcix

sorcix commented Jan 9, 2023

Well, it looks like the maximum number of variables SQLite supports by default is 32766. That is less than the number of ciphers you reported.

That's the limit on the number of parameters (binding ? or :var in a query). That shouldn't be an issue, right?

@BlackDex
Collaborator

BlackDex commented Jan 9, 2023

That is an issue, because that mechanism is used in the newer versions to speed up the sync process.
I might need to revisit that approach, or limit the number of variables per query and merge the results in code.

The speed-up made syncing about 3x faster than before, if not more.
But if it causes very, very large vaults to fail to sync, that is an issue, of course.

I haven't had time yet to reproduce this and look into it.

@stefan0xC
Contributor

stefan0xC commented Jan 9, 2023

Well, it looks like the maximum number of variables SQLite supports by default is 32766. That is less than the number of ciphers you reported.

That's the limit on the number of parameters (binding ? or :var in a query). That shouldn't be an issue, right?

If I understood the issue correctly, this line creates an IN statement with one bound variable per cipher, which breaks on SQLite if the number of ciphers gets too large:

.filter(attachments::cipher_uuid.eq_any(cipher_uuids))

According to https://www.sqlite.org/limits.html#max_variable_number the maximum was only 999 until 3.32.0, so in theory some users (who don't use the Docker image and build the binary against an older SQLite) could be affected even more easily (I think 999 would still be a large number of ciphers, but it should be a bit easier to reach than 32766 for most users).
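
For illustration, here is a minimal sketch of the chunking idea mentioned above (limit the number of bound variables per query and merge the results in code). The names are a hypothetical plain-Diesel example, not Vaultwarden's actual code, and it is not the approach the eventual fix took:

use diesel::prelude::*;

// Hypothetical sketch: assumes the generated `attachments` schema module and a
// Queryable `Attachment` struct are in scope, as in Vaultwarden's models.
const SQLITE_VAR_LIMIT: usize = 32_000; // stay below SQLite's default of 32766

fn load_attachments_chunked(
    conn: &mut SqliteConnection,
    cipher_uuids: &[String],
) -> QueryResult<Vec<Attachment>> {
    let mut result = Vec::new();
    // Each query binds at most SQLITE_VAR_LIMIT variables, so SQLite never sees
    // more placeholders than it allows; the partial results are merged here.
    for chunk in cipher_uuids.chunks(SQLITE_VAR_LIMIT) {
        result.extend(
            attachments::table
                .filter(attachments::cipher_uuid.eq_any(chunk))
                .load::<Attachment>(conn)?,
        );
    }
    Ok(result)
}

The fix that was merged for this issue (#3133, see the referenced commit below) went a different way: it drops the cipher_uuid list entirely and fetches attachments by user_uuid/org_uuid, which sidesteps the variable limit and avoids the large allocation altogether.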

@BlackDex
Collaborator

BlackDex commented Jan 9, 2023

@stefan0xC if I remember correctly, off the top of my head, that optimization I built landed after a newer version of the SQLite library. Since we always use a vendored/built-in version, it shouldn't be an issue, unless someone removes that vendored option, of course.

While it's nice and works well for most, it breaks for at least one. And while I do think that is a lot of ciphers, I still think I should take a look at it.

@stefan0xC
Contributor

Since we always use a vendored/built-in version, it shouldn't be an issue, unless someone removes that vendored option, of course.

Ah, okay. I just assumed the SQLite version in use would depend on the build platform, when I could have looked in the Cargo.toml file instead. 🤦

(I was initially also wondering if it might be worth exploring that option to reproduce the issue more easily, but I think it does not matter, as even a limit of 999 is so large that creating the ciphers would have to be automated either way...)

While it's nice and works well for most, it breaks for at least one. And while I do think that is a lot of ciphers, I still think I should take a look at it.

Ah, yeah, I was just wondering how you got so many entries (I store almost all my credentials in Vaultwarden myself and I am nowhere near that).

@vitorbaptista May I ask how you got that many ciphers? Did you test something, or do you maybe have an automated script? (If so, could you maybe share it, or a reworked version, so we can reproduce the issue more easily?)

I was also wondering if switching to a more robust database backend like Postgres or MariaDB (which, according to many answers on StackExchange, apparently don't have such a "low" limit as SQLite) might be a workaround for you in the meantime (until there is a fix), but I haven't tested it myself.

@vitorbaptista
Author

vitorbaptista commented Jan 10, 2023

@stefan0xC We use it to store third parties' credentials. It's an automated process that uses the bw CLI to add/update passwords in Vaultwarden. We also have a staging and a production organization to test this process, which doubles the number of passwords we have.

Regarding migrating to another DB, that might be a better option. However, at this point, I think we'd better bite the bullet and use a Bitwarden.com organization, as we don't have that many users to begin with. I wonder if they would be able to handle this number of passwords, though.

@BlackDex
Collaborator

Good test environment for Vaultwarden haha.

@BlackDex
Collaborator

I think I have found a good solution, which may be nicer as well.
I may do some more changes to see if we can improve the performance further.

Also, @stefan0xC, it is actually very easy to lower the limit:

SQLITE_MAX_VARIABLE_NUMBER=999 cargo build --features sqlite

@BlackDex
Collaborator

BlackDex commented Jan 11, 2023

OK, PR done. It should solve your issue, and it looks like I shaved off a bit more of the time it takes to sync. Not much, but every bit counts, especially with a huge cipher base 😉.

Compared to the version you are currently running, with this patch you won't even have time to get a ☕.

@vitorbaptista
Author

That's awesome, @BlackDex! I'll keep an eye on when this is released. Thank you for the quick turnaround.

@BlackDex
Collaborator

@vitorbaptista I'm curious to know if this solution works for you, and what your feeling is on the loading part.
If you could try out the testing-tagged image, that would be great!

@vitorbaptista
Author

@BlackDex hey, I've been checking to see when this would be released. If you don't mind, I'd rather wait for the next release, given that this is key infrastructure for our company. Hopefully it won't take too long. I'll ping you here with the results.

@vitorbaptista
Author

@BlackDex Sorry to bother, but do you have any ETA on when a new release is going to be done? Looking forward to trying your fix.

@BlackDex
Collaborator

@BlackDex Sorry to bother, but do you have any ETA on when a new release is going to be done? Looking forward to trying your fix.

Probably this weekend somewhere

@vitorbaptista
Author

vitorbaptista commented Mar 28, 2023

@BlackDex As promised, I did some quick performance comparisons. I was using version 1.23.0 before and upgraded to 1.28.0. The load time of the main page (just after logging in) is the same, 28 seconds. However, the load time of the /sync?excludeDomains=true endpoint went from 10.44s to 2.61s, a whopping 75% reduction!!!

The amount of data downloaded from that endpoint increased a bit, from 18.61 MB to 19.22 MB.

None of these checks were in any way scientific; I just used my regular clock for the timings and Firefox's Network tab to check the /sync endpoint's size and timing.

Thanks a lot for your work! The bug is solved, and I can now resume using the Bitwarden apps.

Ping-timeout pushed a commit to Ping-timeout/vaultwarden that referenced this issue Apr 3, 2023
* Fix remaning inline format

* Use more modern meta tag for charset encoding

* fix (2fa.directory): Allow api.2fa.directory, and remove 2fa.directory

* Optimize CipherSyncData for very large vaults

As mentioned in dani-garcia#3111, using a very very large vault causes some issues.
Mainly because of a SQLite limit, but, it could also cause issue on
MariaDB/MySQL or PostgreSQL. It also uses a lot of memory, and memory
allocations.

This PR solves this by removing the need of all the cipher_uuid's just
to gather the correct attachments.

It will use the user_uuid and org_uuids to get all attachments linked
to both, whether the user has access to them or not. This isn't an
issue, since the matching is done per cipher, and the attachment data is
only returned if there is a matching cipher that the user has access to.

I also modified some code to be able to use `::with_capacity(n)` where
possible. This prevents re-allocations if the `Vec` increases size,
which will happen a lot if there are a lot of ciphers.

According to my tests measuring the time it takes to sync, it seems to
have lowered the duration a bit more.

Fixes dani-garcia#3111

* Add MFA icon to org member overview

The Organization member overview supports showing an icon if the user
has MFA enabled or not. This PR adds this feature.

This is very useful if you want to enable force mfa for example.

* Add avatar color support

The new web-vault v2023.1.0 supports a custom color for the avatar.
bitwarden/server#2330

This PR adds this feature.

* Update Rust to v1.66.1 to patch CVE

This PR sets Rust to v1.66.1 to fix a CVE.
https://blog.rust-lang.org/2023/01/10/cve-2022-46176.html
https://blog.rust-lang.org/2023/01/10/Rust-1.66.1.html

Also updated some packages while at it.

* Update web vault to 2023.1.0

* include key into user.set_password

* Validate note sizes on key-rotation.

We also need to validate the note sizes on key-rotation.
If we do not validate them before we store them, that could lead to a
partial or total loss of the password vault. Validating these
restrictions before actually processing them to store/replace the
existing ciphers should prevent this.

There was also a small bug when using web-sockets. The client which is
triggering the password/key-rotation change should not be forced to
logout via a web-socket request. That is something the client will
handle it self. Refactored the logout notification to either send the
device uuid or not on specific actions.

Fixes dani-garcia#3152

* Update KDF Configuration and processing

- Change default Password Hash KDF Storage from 100_000 to 600_000 iterations
- Update Password Hash when the default iteration value is different
- Validate password_iterations
- Validate client-side KDF to prevent it from being set lower than 100_000

* Updated web vault to 2023.1.1 and rust dependencies

* Re-License Vaultwarden to AGPLv3

This commit prepares Vaultwarden for the Re-Licensing to AGPLv3
Solves #2450

* Remove `arm32v6`-specific tag

This section of code seems to be breaking the Docker release workflow as of a
few days ago, though it's unclear why. This tag only existed to work around
an issue with Docker pulling the wrong image for ARMv6 platforms; that issue
was resolved in Docker 20.10.0, which has been out for a few years now, so it
seems like a reasonable time to drop this tag.

* Rename `.buildx` Dockerfiles to `.buildkit`

This is a more accurate name, since these Dockerfiles require BuildKit, not Buildx.

* Disable Hadolint check for consecutive `RUN` instructions (DL3059)

This check doesn't seem to add enough value to justify the difficulties it
tends to create when generating `RUN` instructions from a template.

* added database migration

* working implementation

* fixes for current upstream main

* "Spell-Jacking" mitigation ~ prevent sensitive data leak from spell checker.
@see https://www.otto-js.com/news/article/chrome-and-edge-enhanced-spellcheck-features-expose-pii-even-your-passwords

* Fix Javascript issue on non sqlite databases

When a non sqlite database is used, loading the admin interface fails
because the backup button is not generated.
This PR is solves it by checking if the elements are valid.

Also made some other changes and fixed some eslint errors.
Showing `_post` errors is better now.

Update jquery to latest version.

Fixes dani-garcia#3166

* Allow listening on privileged ports (below 1024) as non-root

This is done by running `setcap cap_net_bind_service=+ep` on the executable
in the build stage (doing it in the runtime stage creates an extra copy of
the executable that bloats the image). This only works when using the
BuildKit-based builder, since the `COPY` instruction doesn't copy
capabilities on the legacy builder.

* don't nullify key when editing emergency access

the client does not send the key on every update of an emergency access
contact so the field would be emptied on a change of the wait days or access level.

* Replaced wrong mysql column type

* improved security, disabling policy usage on
email-disabled clients and some refactoring

* rust lang specific improvements

* completly hide reset password policy
on email disabled instances

* change description of domain configuration

Vaultwarden send won't work if the domain includes a trailing slash.
This should be documented, as it may lead to confusion amoung users.

* improve wording of domain description

* Generate distinct log messages for regex vs. IP blacklisting.

When an icon will not be downloaded due to matching a configured
blacklist, ensure that the log message indicates the type of blacklist
that was matched.

* Ensure that all results from check_domain_blacklist_reason are cached.

* remove documentation of bug since I'm fixing it

* fix trailing slash not being removed from domain

* allow editing/unhiding by group

Fixes dani-garcia#2989

Signed-off-by: Jan Jansen <jan.jansen@gdata.de>

* Revert "fix trailing slash not being removed from domain"

This reverts commit 679bc7a.

* fix trailing slash in configuration builder

* remove warn when sanitizing domain

* add argon2 kdf fields

* Add support for sendmail as a mail transport

* check if SENDMAIL_COMMAND is valid using 'which' crate

* add EXE_SUFFIX to sendmail executable when not specified

* Updated Rust and crates

- Updated Rust to v1.67.0
- Updated all crates except for `cookies` and `webauthn`

* docs: add build status badge in readme

* Fix Organization delete when groups are configured

With existing groups configured within an org, deleting that org would
fail because of Foreign Key issues.

This PR fixes this by making sure the groups get deleted before the org does.

Fixes dani-garcia#3247

* Fix Collection Read Only access for groups

I messed up with identation sorry it's my first PR

Fix Collection Read Only access for groups

Fix Collection Read Only access for groups

With indentation modification

* Validate all needed fields for client API login

During the client API login we need to have a `device_identifier`, `device_name` and `device_type`.
When these were not provided Vaultwarden would panic.

This PR add checks for these fields and makes sure it returns a better error message instead of causing a panic.

* Make the admin cookie lifetime adjustable

* Add function to fetch user by email address

* Apply Admin Session Lifetime to JWT

* Apply rewording

* Add missing collections/details endpoint, based on the existing one

* Update web vault to v2023.2.0 and dependencies

* Fix vault item display in org vault view

In the org vault view, the Bitwarden web vault currently tries to fetch the
groups for an org regardless of whether it claims to have group support.
If this errors out, no vault items are displayed.

* Add confirmation for removing 2FA and deauth sessions in admin panel

* Fix the web-vault v2023.2.0 API calls

- Supports the new Collection/Group/User editing UI's
- Support `/partial` endpoint for cipher updating to allow folder and favorite update for read-only ciphers.
- Prevent `Favorite`, `Folder`, `read-only` and `hide-passwords` from being added to the organizational sync.
- Added and corrected some `Object` key's to the output json.

Fixes dani-garcia#3279

* Some Admin Interface updates

- Updated datatables
- Added NTP Time check
- Added Collections, Groups and Events count for orgs
- Renamed `Items` to `Ciphers`
- Some small style updates

* Fix confirmation for removing 2FA and deauthing sessions in admin panel

* Admin token Argon2 hashing support

Added support for Argon2 hashing support for the `ADMIN_TOKEN` instead
of only supporting a plain text string.

The hash must be a PHC string which can be generated via the `argon2`
CLI **or** via the also built-in hash command in Vaultwarden.

You can simply run `vaultwarden hash` to generate a hash based upon a
password the user provides them self.

Added a warning during startup and within the admin settings panel is
the `ADMIN_TOKEN` is not an Argon2 hash.

Within the admin environment a user can ignore that warning and it will
not be shown for at least 30 days. After that the warning will appear
again unless the `ADMIN_TOKEN` has be converted to an Argon2 hash.

I have also tested this on my RaspberryPi 2b and there the `Bitwarden`
preset takes almost 4.5 seconds to generate/verify the Argon2 hash.

Using the `OWASP` preset it is below 1 second, which I think should be
fine for low-graded hardware. If it is needed people could use lower
memory settings, but in those cases I even doubt Vaultwarden it self
would run. They can always use the `argon2` CLI and generate a faster hash.

* Add HEAD routes to avoid spurious error messages

Rocket automatically implements a HEAD route when there's a matching GET
route, but relying on this behavior also means a spurious error gets
logged due to <rwf2/Rocket#1098>.

Add explicit HEAD routes for `/` and `/alive` to prevent uptime monitoring
services from generating error messages like `No matching routes for HEAD /`.
With these new routes, `HEAD /` only checks that the server can respond over
the network, while `HEAD /alive` also checks that the database connection is
alive, similar to `GET /alive`.

* Fix web-vault Member UI show/edit/save

There was a small bug left in regards to the web-vault v2023.2.0 fixes.
This PR fixes the left items. I think all should be addressed now.
When editing a User, you were not able to see or edit groups, or see
wich collections a user bellonged to.

Fixes dani-garcia#3311

* Upd Crates, Rust, MSRV, GHA and remove Backtrace

- Changed MSRV to v1.65.
  Discussed this with @dani-garcia, and we will support **N-2**.
  This is/will be the same as for the `time` crate we use.
  Also updated the wiki regarding this https://github.com/dani-garcia/vaultwarden/wiki/Building-binary
- Removed backtrace crate in favor of `std::backtrace` stable since v1.65
- Updated Rust to v1.67.1
- Updated all the crates
- Updated the GHA action versions
- Adjusted the GHA MSRV build to extract the MSRV from `Cargo.toml`

* Merge ClientIp with Headers.

Since we now use the `ClientIp` Guard on a lot more places, it also
increases the size of binary, and the macro generated code because of
this extra Guard. By merging the `ClientIp` Guard with the several
`Header` guards we have it reduces the amount of code generated
(including LLVM IR), but also a small speedup in build time.

I also spotted some small `json!()` optimizations which also reduced the
amount of code generated.

* Add support for `/api/devices/knowndevice` with HTTP header params

Upstream PR: bitwarden/server#2682

* Update Rust, MSRV and Crates

- Updated all the crates
- Updated Rust and MSRV

* Update web vault to v2023.3.0 and dependencies

* add endpoint to bulk delete groups

* add endpoint to bulk delete collections

* don't use `assert()` in production code

Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>

* Add support for Quay.io and GHCR.io as registries

- Added support for Quay.io
- Added support for GHCR.io

To enable support for these container image registries the following needs to be added.

As `Actions secrets and variables` - `Secrets`
- `DOCKERHUB_TOKEN` and `DOCKERHUB_USERNAME`
- `QUAY_TOKEN` and `QUAY_USERNAME`

As `Actions secrets and variables` - `Variables` - `Repository Variables`
- `DOCKERHUB_REPO`
- `GHCR_REPO`
- `QUAY_REPO`

The `DOCKERHUB_REPO` currently configured in `Secrets` can be removed if wanted, probably best after this PR has been merged.

If one of the vars/secrets are not configured it will skip that specific registry!

* Some small fixes and updates

- Updated workflows to use new checkout version
  This probably fixes the curl download for hadolint also.
- Updated crates including Rocket to the latest rc3 :party:
- Applied 2 nightly clippy lints to prevent future clippy issues.

* Update web vault to v2023.3.0b

* Decode knowndevice `X-Request-Email` as base64url with no padding

The clients end up removing the padding characters [1][2].

[1] https://github.com/bitwarden/clients/blob/web-v2023.3.0/libs/common/src/misc/utils.ts#L141-L143
[2] https://github.com/bitwarden/mobile/blob/v2023.3.1/src/Core/Utilities/CoreHelpers.cs#L227-L234

* Fix password reset issues

There was used a wrong macro to produce an error message when mailing
the user his password was reset failed. It was using `error!()` which
does not return an `Err` and aborts the rest of the code.

This resulted in the users password still being resetted, but not being
notified. This PR fixes this by using `err!()`. Also, do not set the
user object as mutable until it really is needed.

Second, when a user was using the new Argon2id KDF with custom values
like memory and parallelism, that would have rendered the password
incorrect. The endpoint which should return all the data did not
returned all the new Argon2id values.

Fixes dani-garcia#3388

Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>

* support `/users/<uuid>/invite/resend` admin api

* fmt

* always return KdfMemory and KdfParallelism

the client will ignore the value of theses fields in case of `PBKDF2`
(whether they are unset or left from trying out `Argon2id` as KDF).

with `Argon2id` those fields should never be `null` but always in a
valid state. if they are `null` (how would that even happen?) the
client still assumes default values for `Argon2id` (i.e. m=64 and p=4)
and if they are set to something else login will fail anyway.

* clear kdf memory and parallelism with pbkdf2

when changing back from argon2id to PBKDF2 the unused parameters
should be set to 0.

also fix small bug in _register

* add mail check

* add check user state

* Revert setcap, update rust and crates

- Revert dani-garcia#3170 as discussed in #3387
  In hindsight it's better to not have this feature
- Update Dockerfile.j2 for easy version changes.
  Just change it in one place instead of multiple
- Updated to Rust to latest patched version
- Updated crates to latest available
- Pinned mimalloc to an older version, as it breaks on musl builds

* Fix sending out multiple websocket notifications

For some reason I encountered a strange bug which resulted in sending
out multiple websocket notifications for the exact same user.

Added a `distinct()` for the query to filter out multiple uuid's.

---------

Signed-off-by: Jan Jansen <jan.jansen@gdata.de>
Co-authored-by: BlackDex <black.dex@gmail.com>
Co-authored-by: Rychart Redwerkz <redwerkz@users.noreply.github.com>
Co-authored-by: GeekCorner <45696571+GeekCornerGH@users.noreply.github.com>
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
Co-authored-by: sirux88 <sirux88@gmail.com>
Co-authored-by: Jeremy Lin <jjlin@users.noreply.github.com>
Co-authored-by: Daniel Hammer <daniel.hammer+oss@gmail.com>
Co-authored-by: Stefan Melmuk <stefan.melmuk@gmail.com>
Co-authored-by: BlockListed <44610569+BlockListed@users.noreply.github.com>
Co-authored-by: Kevin P. Fleming <kevin@km6g.us>
Co-authored-by: Jan Jansen <jan.jansen@gdata.de>
Co-authored-by: Helmut K. C. Tessarek <tessarek@evermeet.cx>
Co-authored-by: soruh <mail@soruh.de>
Co-authored-by: r3drun3 <simone.ragonesi@kiratech.it>
Co-authored-by: Misterbabou <58564168+Misterbabou@users.noreply.github.com>
Co-authored-by: Nils Mittler <nmittler@bcf-pc03.desktop>
Co-authored-by: Jeremy Lin <jeremy.lin@gmail.com>
Co-authored-by: Jonathan Elias Caicedo <jonathan@jcaicedo.com>
Co-authored-by: Dylan Pinsonneault <dylanp2222@gmail.com>
Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>
Co-authored-by: Nikolay Nikolaev <nikolaevn.home@gmail.com>