
Ironing out regex-lite #961

Closed
CosmicHorrorDev opened this issue Feb 24, 2023 · 16 comments
Comments

@CosmicHorrorDev
Contributor

Just giving a more convenient place to iron out the details behind a potential regex-lite crate that aims to be light on compile time and binary size

I'll let @BurntSushi start things off since they're more qualified and have likely given this a great deal more thought

@BurntSushi
Member

BurntSushi commented Feb 24, 2023

Thanks for filing this! So I want to be clear that I'm not convinced we should actually do this. And if we do, it's probably wise to wait to land it until after #656 is done. Although one could start work on it now. (Keeping in mind that I'm still not sold on the idea.)

The high level motivation here is that the regex crate generally prioritizes performance and correctness (especially with respect to Unicode and worst case behavior) above almost all else, and this usually comes at the cost of code complexity and compilation times. Compilation times and binary sizes have gotten bad enough that folks resist using regex. This was the main reason why I added a number of Cargo features that permit disabling all Unicode support and pretty much all performance optimizations. But this still isn't enough, in part because some parts of regex are just indivisible. For example, regex-syntax is made up of a parser that converts a &str pattern to an Ast, and then a whole separate translator that converts an Ast to an Hir. Most regex engines combine these into a single step. There's also a ton of infrastructure for supporting Unicode. Then there's the NFA compiler which is pretty big. And then there's all the search and prefilter logic. Some of this gets dropped when the corresponding features are dropped, but not all of it.

I don't really see how to fix that other than just splintering off and creating a totally new and separate "lite" version of regex. It could benefit from being part of this project by declaring that its syntax is roughly equivalent (explained below) and that it passes the same test suite. It could also benefit by declaring that its API should be identical, so that folks could pretty easily switch between them. That is, regex-lite would be a mostly drop-in replacement for regex.

OK, so how do we actually simplify things? Here's what I think:

  • We drop Unicode support completely. \b is ASCII aware. Case insensitivity is ASCII-only. [^\n] and . both match the same thing and that's every single individual byte except for \n. There is no \pL. Things like \w, \s and \d are ASCII-only.
  • We generally retain the same syntax. Obviously it wouldn't have the Unicode things in the current syntax. I also think we should drop all character class set operations except for the obvious union operation. So for example, there is no [a-z&&[b-g]]. Generally, the set operations are most useful with Unicode. And they represent a fair bit of complexity in the parser.
  • Ideally, any syntax that is supported in regex but not supported in regex-lite would return an error/panic (see below). That way, people don't accidentally assume the same regex does the same thing.
  • Decide how to handle syntax that is the same but means different things. For example, if \w is ASCII-only in regex-lite and Unicode-aware in regex, then someone switching over to regex-lite might be surprised to find out that it no longer matches the same stuff. I think we kind of just have to document this. I'm not sure if there's an alternative.
  • Decide how to handle the fact that not supporting Unicode means that things like \W and . can match any individual byte and thus split a codepoint. That in turn means that if it's searching a &str, it could report match offsets that split a codepoint and thus cause &string[s..e] to panic. That's non-ideal... We might need to fix that by supporting at least UTF-8, but this is going to be a little gnarly. (That is, maybe we need to make . match a full codepoint.)
  • I wonder whether regex-lite should just panic if the pattern is invalid and not worry at all about error messages. I feel like this would make the parser substantially simpler. But it's kind of a bummer to do this to people. Perhaps a compromise is to just return a pub struct Error(&'static str) with a short message indicating what went wrong instead. So basically, none of the fancy error reporting with spans and all that that's in regex-syntax currently.
  • We get rid of the division between Ast and Hir and just parse a &str into an Hir. (I suspect we still need an Hir or something like it before building a matching engine. But maybe not.)
  • We don't do any literal optimizations.
  • We always have one internal regex engine: the PikeVM.
  • We support resolving capture groups.
  • We don't support RegexSet.
  • It should always have zero dependencies.
  • In general, when making decisions about what to do, compile times and binary sizes are weighted heavily. Certainly much more than performance. Basically, performance is a total non-goal.
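As one concrete illustration of the error-handling point above, here is a minimal sketch of what a `pub struct Error(&'static str)` could look like. The `check_syntax` function is purely hypothetical, a stand-in for the real parser, but it shows how a static message keeps the type trivial while still being usable with `?` and `std::error::Error`:

```rust
use std::fmt;

// Minimal error type: no spans, no fancy reporting, just a static message.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Error(&'static str);

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(self.0)
    }
}

impl std::error::Error for Error {}

// Hypothetical pre-parse check, for illustration only: reject character
// class set intersection, which the proposal drops from the syntax.
fn check_syntax(pattern: &str) -> Result<(), Error> {
    if pattern.contains("&&") {
        return Err(Error("character class set operations are not supported"));
    }
    Ok(())
}

fn main() {
    assert!(check_syntax("[a-z]").is_ok());
    assert_eq!(
        check_syntax("[a-z&&[b-g]]").unwrap_err().to_string(),
        "character class set operations are not supported",
    );
}
```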

Those are my initial thoughts. regex-lite might be a smaller simpler regex, but this is still a big project I think!

@CosmicHorrorDev
Contributor Author

CosmicHorrorDev commented Feb 26, 2023

Thanks for the great write-up! I'm fine with either waiting for #656 or starting on things now (although I'm very excited for the refactor 🚀)

  • Decide how to handle the fact that not supporting Unicode means that things like \W and . can match any individual byte and thus split a codepoint. That in turn means that if it's searching a &str, it could report match offsets that split a codepoint and thus cause &string[s..e] to panic. That's non-ideal... We might need to fix that by supporting at least UTF-8, but this is going to be a little gnarly. (That is, maybe we need to make . match a full codepoint.)

Having &string[s..e] panic would be less than ideal to say the least. I think it would definitely be worth supporting UTF-8 if that's what we wind up settling on, even if that adds in more complexity

  • I wonder whether regex-lite should just panic if the pattern is invalid and not worry at all about error messages. I feel like this would make the parser substantially simpler. But it's kind of a bummer to do this to people. Perhaps a compromise is to just return a pub struct Error(&'static str) with a short message indicating what went wrong instead. So basically, none of the fancy error reporting with spans and all that that's in regex-syntax currently.

My preference would be pub struct Error(&'static str). I think that just panicking would be too limiting here, since it would rule out uses where a user provides the regex themselves (for instance, criterion takes a regex just to filter benchmark names)

👍 to everything else

@BurntSushi
Member

I think you can get started whenever. I think the thing you shouldn't do is come up with a huge test suite. We'll be able to reuse the one that I have as part of #656. Obviously you can write tests as you build, but don't go out of your way to build something huge and comprehensive. Otherwise, I don't think #656 and this project would really intersect much, by design.

I think the UTF-8 issue is going to make this gnarly unfortunately. Ideally we would just use char everywhere, but arguably, regex-lite should support searching &[u8] just like regex does. And in order to do that, you kind of need to use u8. Perhaps there's a simple way to reconcile this. But I'm not sure.

@BurntSushi
Member

BurntSushi commented Feb 28, 2023

I've been thinking more about this, and I feel like it would be helpful to write down some core data types. It will help structure thought, discussion and work on this I think.

I think the two primary data types are an Hir and State, where State is a single state in an NFA. I'll start by writing down what my initial instinct for these two types are, and then explain the problems. For Hir:

enum Hir {
  Empty,
  Char(char),
  Class(Vec<(char, char)>),
  Repetition {
    min: u32,
    max: Option<u32>,
    greedy: bool,
    child: Box<Hir>,
  },
  Capture {
    index: u32,
    name: Option<Box<str>>,
    child: Box<Hir>,
  },
  Concat(Vec<Hir>),
  Alternation(Vec<Hir>),
}

And now for State:

enum State {
  Range {
    start: char,
    end: char,
  },
  Split {
    alt1: StateID,
    alt2: StateID,
  },
  Goto {
    target: StateID,
  },
  Fail,
  Match,
}

And that's pretty much it. The core matching loop would then proceed by decoding a single rune from the haystack and applying it to the NFA state graph above. This should all work fine for &str APIs because everything is guaranteed to be valid UTF-8.
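To make that matching loop concrete, here is a hedged sketch of a PikeVM step over the `State` enum above, assuming (purely for illustration) that a `Range` state's successor is simply the next index, and using made-up helper names (`add_thread`, `is_match`). It runs all threads in lock-step, one codepoint at a time, and reports an anchored full match:

```rust
type StateId = usize;

#[derive(Clone, Copy)]
enum State {
    Range { start: char, end: char },
    Split { alt1: StateId, alt2: StateId },
    Goto { target: StateId },
    Fail,
    Match,
}

// Follow epsilon transitions (Split/Goto), collecting Range/Match states.
fn add_thread(states: &[State], id: StateId, seen: &mut [bool], list: &mut Vec<StateId>) {
    if seen[id] {
        return;
    }
    seen[id] = true;
    match states[id] {
        State::Split { alt1, alt2 } => {
            add_thread(states, alt1, seen, list);
            add_thread(states, alt2, seen, list);
        }
        State::Goto { target } => add_thread(states, target, seen, list),
        State::Fail => {}
        _ => list.push(id),
    }
}

// Anchored full match: decode one char at a time and advance every live
// thread, then check whether any thread ends in the Match state.
fn is_match(states: &[State], start_id: StateId, haystack: &str) -> bool {
    let mut clist = Vec::new();
    add_thread(states, start_id, &mut vec![false; states.len()], &mut clist);
    for ch in haystack.chars() {
        let mut nlist = Vec::new();
        let mut seen = vec![false; states.len()];
        for &id in &clist {
            if let State::Range { start, end } = states[id] {
                if start <= ch && ch <= end {
                    // This thread consumed `ch`; advance to the next state.
                    add_thread(states, id + 1, &mut seen, &mut nlist);
                }
            }
        }
        clist = nlist;
    }
    clist.iter().any(|&id| matches!(states[id], State::Match))
}

fn main() {
    // NFA for the pattern `a[0-9]+`:
    // 0: Range('a') -> 1: Range('0'-'9') -> 2: Split(loop to 1, or to 3) -> 3: Match
    let nfa = vec![
        State::Range { start: 'a', end: 'a' },
        State::Range { start: '0', end: '9' },
        State::Split { alt1: 1, alt2: 3 },
        State::Match,
    ];
    assert!(is_match(&nfa, 0, "a42"));
    assert!(!is_match(&nfa, 0, "a"));
    assert!(!is_match(&nfa, 0, "b7"));
}
```

A real implementation would also track capture slots per thread and support unanchored search, but the lock-step structure stays the same.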

The problem comes up with the &[u8] APIs. Ideally, we would want it to be possible for things like . to match through invalid UTF-8. So for example, compile(".").is_match(b"\xFF") should return true. Another thing that seems like something we should support is compile(r"\xFF").is_match(b"\xFF") should return true. The question is how to do this. In the status quo, we complicate the above representation by adding things like State::ByteRange { start: u8, end: u8 }. And then in order to compile such things, you usually need to build UTF-8 decoding into the automaton. Which kind of stinks.

I do wonder if we might be able to take Go's approach to this problem. Go's regexp engine doesn't support the compile(r"\xFF").is_match(b"\xFF") use case, but it does permit . to match through invalid UTF-8. Basically, it does this by using lossy UTF-8 decoding in its core matching loop. Any byte that is not valid UTF-8 gets treated as indistinguishable from U+FFFD. Thus, regexes like . and things like [^a] will match invalid UTF-8 because they match U+FFFD, but they also simultaneously will never split a codepoint.

This feels like an optimal place to land for me. It keeps the implementation very simple, makes searching &str sensible and permits quite a bit of flexibility with respect to searching on &[u8] as well.

One difference from Go is that we should probably use "substitution of maximal subparts" as our lossy UTF-8 decoding strategy. So for example, a\xF0\x9F\x87z would decode as [a, U+FFFD, z], whereas in Go, that would decode as [a, U+FFFD, U+FFFD, U+FFFD, z].
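For reference, Rust's standard library already implements "substitution of maximal subparts" in `String::from_utf8_lossy`: each maximal invalid subsequence becomes a single U+FFFD, so the truncated 4-byte sequence from the example above decodes to one replacement character, not three. A quick check:

```rust
fn main() {
    // `\xF0\x9F\x87` is a prefix of a 4-byte sequence; `z` cuts it short.
    let lossy = String::from_utf8_lossy(b"a\xF0\x9F\x87z");
    let chars: Vec<char> = lossy.chars().collect();
    // One U+FFFD for the whole maximal invalid subsequence.
    assert_eq!(chars, vec!['a', '\u{FFFD}', 'z']);
}
```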

Please ask questions if any of this doesn't make sense!

@SimonSapin
Contributor

Decide how to handle the fact that not supporting Unicode means that things like \W and . can match any individual byte and thus split a codepoint. That in turn means that if it's searching a &str, it could report match offsets that split a codepoint and thus cause &string[s..e] to panic. That's non-ideal... We might need to fix that by supporting at least UTF-8, but this is going to be a little gnarly. (That is, maybe we need to make . match a full codepoint.)

IMO . should match a single byte if matching against &[u8] input, or a single code point if matching against &str input. I’d expect the compilation time and code size impact of supporting UTF-8 to be much much less than that of supporting Unicode case-insensitivity, character classes, etc.

@BurntSushi
Member

IMO . should match a single byte if matching against &[u8] input, or a single code point if matching against &str input.

That is indeed the case for regex. Well, . always matches a codepoint. It is only (?-u:.) that matches a single byte. The latter is banned in regex::Regex but allowed in regex::bytes::Regex. Generally speaking, Unicode mode is orthogonal to searching &str and &[u8], although disabling Unicode is in practice the only way to match invalid UTF-8.

I’d expect the compilation time and code size impact of supporting UTF-8 to be much much less than that of supporting Unicode case-insensitivity, character classes, etc.

Yes of course. But if you're going to make . match any arbitrary byte for &[u8], then you basically run into a whole pile of worms, both in terms of API design and simplicity of implementation:

  • It would be undesirable to make . unconditionally match a single byte when searching &[u8], because then it becomes impossible to use . to match an entire codepoint when searching &[u8]. Which is an entirely reasonable thing to do.
  • If one permits . to match a single byte, then you probably also need to make [^a] match a single byte. And if [^a] matches a single byte, then what does [^β] match? Do you just ban non-ASCII completely? That seems like kind of a bummer. So then you start thinking, "well let's just add a Unicode mode..." And now you're circling the complexity drain.

Basically, as soon as you start talking about matching individual bytes, you wind up with an NFA state that instead looks like this:

enum State {
  RangeByte {
    start: u8,
    end: u8,
  },
  RangeChar {
    start: char,
    end: char,
  },
  Split {
    alt1: StateID,
    alt2: StateID,
  },
  Goto {
    target: StateID,
  },
  Fail,
  Match,
}

And now your regex engine needs to know how to deal with both and you need to write an abstraction over what is considered a "character." And mixing RangeByte and RangeChar into the same NFA turns out to make things rather difficult too.

Basically, I'm trying to simplify things here so that:

  • &str and &[u8] searching are as similar as possible, if not identical. Otherwise you wind up needing to choose the type of your haystack based on the regex semantics you want, and this is just quite inconvenient.
  • There is no "Unicode" mode. We adhere to the semantics of &str so that we never split a codepoint, and we let people type in Unicode literals and ranges, but that's it.
  • There is no "allow invalid UTF-8" mode.
  • Overall avoid needing to build UTF-8 automata.

BurntSushi added a commit that referenced this issue Apr 25, 2023
BurntSushi added a commit that referenced this issue Apr 25, 2023
BurntSushi added a commit that referenced this issue Apr 26, 2023
BurntSushi added a commit that referenced this issue Apr 26, 2023
BurntSushi added a commit that referenced this issue Apr 27, 2023
BurntSushi added a commit that referenced this issue Apr 27, 2023
@BurntSushi
Member

I ended up moving forward with this. My plan is to release it at the same time as regex 1.9. I will still consider it experimental before committing to any kind of longer term support for it, but the main reason why I decided to just go ahead with this is because the regex-automata changes are going to make compile times and binary sizes a little worse for the regex crate. And they're kind of already not great. So it feels like providing folks with an escape route to a smaller crate is the nice thing to do. Here's a quick breakdown of compile times and binary sizes for regex-lite and for regex with just the std feature enabled. Using only std is the minimal regex crate configuration, so it represents the choice you want to make if you want to swing as far as possible in the "less functionality and speed, but better compile times and smaller footprint" direction.

  • regex 1.7.3: 1.41s compile time, 373KB relative size increase
  • regex 1.8.1: 1.46s compile time, 410KB relative size increase
  • regex 1.9.0: 1.93s compile time, 565KB relative size increase
  • regex-lite 0.1.0: 0.73s compile time, 94KB relative size increase

I'm not too happy about the size increase in the baseline that we're going to see in the regex 1.9.0 release. The figure above already represents me paring back what I had planned originally. I had originally planned to include the full DFA regex engine too, but it increases compile time and binary size even more. (One can opt into it though.)

My plan is for regex-lite to be an exact copy of the regex API for the things it supports. Quirks and all. So that in most cases, you just change use regex::Regex; to use regex_lite::Regex; and everything continues to work.

You can browse the source of regex-lite here (note that I force push to this branch): https://github.com/rust-lang/regex/tree/ag/regex-automata/regex-lite

And the API docs here (which aren't written yet, and the API is meant to be a clone of regex::Regex so there isn't much to see here I think): https://burntsushi.net/stuff/tmp-do-not-link-me/regex/regex_lite/

I think I have three main concerns right now:

  • The maintenance overhead of dealing with regex-lite.
  • Folks switching to regex-lite, everything "appearing to work," but the semantics subtly changing. For example \w+ matches a lot more using a regex::Regex than a regex_lite::Regex because it is Unicode-aware in the regex crate and not Unicode-aware in regex-lite. I don't really see a way around this.
  • Folks might be hesitant to switch to regex-lite specifically because of its lack of Unicode support. People want Unicode. If it were just a matter of adding some tables or not, I could be convinced I think to add that with the ability to opt out. But adding Unicode support also comes with lots of additional code that can be a little difficult to opt out of. Moreover, it adds a lot of complexity to what I'm hoping is a simple crate. Still though, if most people are like "I'd use regex-lite if it had Unicode support," then the choice is going to come down to this: insist that they use regex proper or add Unicode support to regex-lite.
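The second concern above can be made concrete without any regex crate at all. The sketch below contrasts the ASCII-only definition of \w (i.e. [0-9A-Za-z_], what regex-lite uses) with `char::is_alphanumeric`, which is used here only as a rough stand-in for a Unicode-aware \w:

```rust
// ASCII-only word character, i.e. [0-9A-Za-z_].
fn is_word_ascii(ch: char) -> bool {
    ch.is_ascii_alphanumeric() || ch == '_'
}

fn main() {
    let hay = "δοκιμή"; // Greek: every char here is a Unicode word character
    // A Unicode-aware \w (as in the regex crate) matches all of them...
    assert!(hay.chars().all(|c| c.is_alphanumeric()));
    // ...while the ASCII-only \w matches none, so `\w+` finds nothing at all.
    assert!(!hay.chars().any(is_word_ascii));
}
```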

@BurntSushi
Member

@CosmicHorrorDev Also, I know you said you were planning to work on this, but it's been a few months. And I was able to get an end-to-end regex-lite crate working in a few days. I also want to try to put this out there with regex 1.9 as an escape valve for folks who are unhappy about the compile time and binary size regressions. I suspect that if anyone but me did this work, it would have taken an order of magnitude longer. My process was basically to copy code from regex-automata and then ruthlessly simplify it given the semantics described above. I was able to do this very quickly because regex-automata is fully paged into context.

@CosmicHorrorDev
Contributor Author

I'm just happy that it's being created in the first place :)

I worked on it originally for about a week, and then set it down with the promise that I would get back to it sometime (which evidently never happened 😅)

@NobodyXu

@BurntSushi I think regex-lite can be used in unicode-linebreak's build.rs.

It uses regex to match against UTF-8 strings, so it should be able to switch to regex-lite.

Although a better solution might be to generate the tables.rs and upload it as part of the crate.

@BurntSushi
Member

@BurntSushi I think regex-lite can be used in unicode-linebreak's build.rs.

It uses regex to match against UTF-8 strings, so it should be able to switch to regex-lite.

Although a better solution might be to generate the tables.rs and upload it as part of the crate.

Yes, the UCD data files are generally all ASCII. So you should be able to get away with using non-Unicode regexes to parse them.

And yeah, personally, I'd suggest generating and committing Unicode table source files. That's what ucd-generate is designed to do. It's how regex-syntax and bstr work.

@BurntSushi
Member

@NobodyXu Note for example that your regex uses \w which is Unicode-aware, so switching to regex-lite will change the semantics of your regex. But the semantics only change if your haystack is non-ASCII, which I believe it is not. So you're fine in this specific case.

@NobodyXu

@BurntSushi Thanks for the advice, I've opened an issue there to discuss with the maintainer on which solution they would like to pick.

BurntSushi added a commit that referenced this issue Apr 28, 2023
BurntSushi added a commit that referenced this issue Apr 28, 2023
BurntSushi added a commit that referenced this issue Apr 28, 2023
BurntSushi added a commit that referenced this issue Apr 29, 2023
BurntSushi added a commit that referenced this issue Apr 29, 2023
BurntSushi added a commit that referenced this issue Apr 30, 2023
BurntSushi added a commit that referenced this issue Apr 30, 2023
BurntSushi added a commit that referenced this issue Apr 30, 2023
BurntSushi added a commit that referenced this issue Apr 30, 2023
BurntSushi added a commit that referenced this issue May 1, 2023
BurntSushi added a commit that referenced this issue May 1, 2023
BurntSushi added a commit that referenced this issue May 1, 2023
BurntSushi added a commit that referenced this issue May 2, 2023
BurntSushi added a commit that referenced this issue May 4, 2023
BurntSushi added a commit that referenced this issue May 5, 2023
BurntSushi added a commit that referenced this issue May 14, 2023
BurntSushi added a commit that referenced this issue May 18, 2023
@nicoburns

Those compile time and binary size improvements look pretty nice! But I think it would be good to track compile times / binary sizes for crates that include both regex and regex-lite in their dependency trees, as I imagine this will end up being a lot of high-level crates if regex-lite becomes popular.

@BurntSushi
Member

@nicoburns can you say what I would do with that information? Like how would it be actionable?

BurntSushi added a commit that referenced this issue May 21, 2023
BurntSushi added a commit that referenced this issue May 22, 2023
BurntSushi added a commit that referenced this issue May 22, 2023
BurntSushi added a commit that referenced this issue May 24, 2023
BurntSushi added a commit that referenced this issue May 24, 2023
BurntSushi added a commit that referenced this issue May 25, 2023
BurntSushi added a commit that referenced this issue Jun 13, 2023
BurntSushi added a commit that referenced this issue Jul 5, 2023
I usually close tickets on a commit-by-commit basis, but this refactor
was so big that it wasn't feasible to do that. So ticket closures are
marked here.

Closes #244
Closes #259
Closes #476
Closes #644
Closes #675
Closes #824
Closes #961

Closes #68
Closes #510
Closes #787
Closes #891

Closes #429
Closes #517
Closes #579
Closes #779
Closes #850
Closes #921
Closes #976
Closes #1002

Closes #656
BurntSushi added a commit that referenced this issue Jul 5, 2023
crapStone added a commit to Calciumdibromid/CaBr2 that referenced this issue Jul 18, 2023
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [regex](https://github.com/rust-lang/regex) | dependencies | minor | `1.8.4` -> `1.9.1` |

---

### Release Notes

<details>
<summary>rust-lang/regex (regex)</summary>

### [`v1.9.1`](https://github.com/rust-lang/regex/blob/HEAD/CHANGELOG.md#191-2023-07-07)

[Compare Source](rust-lang/regex@1.9.0...1.9.1)

This is a patch release which fixes a memory usage regression. In the regex
1.9 release, one of the internal engines used a more aggressive allocation
strategy than what was done previously. This patch release reverts to the
prior on-demand strategy.

Bug fixes:

-   [BUG #1027](rust-lang/regex#1027):
    Change the allocation strategy for the backtracker to be less aggressive.

### [`v1.9.0`](https://github.com/rust-lang/regex/blob/HEAD/CHANGELOG.md#190-2023-07-05)

[Compare Source](rust-lang/regex@1.8.4...1.9.0)

This release marks the end of a [years long rewrite of the regex crate
internals](rust-lang/regex#656). Since this is
such a big release, please report any issues or regressions you find. We would
also love to hear about improvements as well.

In addition to many internal improvements that should hopefully result in
"my regex searches are faster," there have also been a few API additions:

-   A new `Captures::extract` method for quickly accessing the substrings
    that match each capture group in a regex.
-   A new inline flag, `R`, which enables CRLF mode. This makes `.` match any
    Unicode scalar value except for `\r` and `\n`, and also makes `(?m:^)` and
    `(?m:$)` match after and before both `\r` and `\n`, respectively, but never
    between a `\r` and `\n`.
-   `RegexBuilder::line_terminator` was added to further customize the line
    terminator used by `(?m:^)` and `(?m:$)` to be any arbitrary byte.
-   The `std` Cargo feature is now actually optional. That is, the `regex` crate
    can be used without the standard library.
-   Because `regex 1.9` may make binary size and compile times even worse, a
    new experimental crate called `regex-lite` has been published. It prioritizes
    binary size and compile times over functionality (like Unicode) and
    performance. It shares no code with the `regex` crate.

New features:

-   [FEATURE #244](rust-lang/regex#244):
    One can opt into CRLF mode via the `R` flag.
    e.g., `(?mR:$)` matches just before `\r\n`.
-   [FEATURE #259](rust-lang/regex#259):
    Multi-pattern searches with offsets can be done with `regex-automata 0.3`.
-   [FEATURE #476](rust-lang/regex#476):
    `std` is now an optional feature. `regex` may be used with only `alloc`.
-   [FEATURE #644](rust-lang/regex#644):
    `RegexBuilder::line_terminator` configures how `(?m:^)` and `(?m:$)` behave.
-   [FEATURE #675](rust-lang/regex#675):
    Anchored search APIs are now available in `regex-automata 0.3`.
-   [FEATURE #824](rust-lang/regex#824):
    Add new `Captures::extract` method for easier capture group access.
-   [FEATURE #961](rust-lang/regex#961):
    Add `regex-lite` crate with smaller binary sizes and faster compile times.
-   [FEATURE #1022](rust-lang/regex#1022):
    Add `TryFrom` implementations for the `Regex` type.

Performance improvements:

-   [PERF #68](rust-lang/regex#68):
    Added a one-pass DFA engine for faster capture group matching.
-   [PERF #510](rust-lang/regex#510):
    Inner literals are now used to accelerate searches, e.g., `\w+@\w+` will scan
    for `@`.
-   [PERF #787](rust-lang/regex#787),
    [PERF #891](rust-lang/regex#891):
    Makes literal optimizations apply to regexes of the form `\b(foo|bar|quux)\b`.

(There are many more performance improvements as well, but not all of them have
specific issues devoted to them.)

Bug fixes:

-   [BUG #429](rust-lang/regex#429):
    Fix matching bugs related to `\B` and inconsistencies across internal engines.
-   [BUG #517](rust-lang/regex#517):
    Fix matching bug with capture groups.
-   [BUG #579](rust-lang/regex#579):
    Fix matching bug with word boundaries.
-   [BUG #779](rust-lang/regex#779):
    Fix bug where some regexes like `(re)+` were not equivalent to `(re)(re)*`.
-   [BUG #850](rust-lang/regex#850):
    Fix matching bug inconsistency between NFA and DFA engines.
-   [BUG #921](rust-lang/regex#921):
    Fix matching bug where literal extraction got confused by `$`.
-   [BUG #976](rust-lang/regex#976):
    Add documentation to replacement routines about dealing with fallibility.
-   [BUG #1002](rust-lang/regex#1002):
    Use corpus rejection in fuzz testing.

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

 - [ ] If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Renovate Bot](https://github.com/renovatebot/renovate).

Co-authored-by: cabr2-bot <cabr2.help@gmail.com>
Co-authored-by: crapStone <crapstone01@gmail.com>
Reviewed-on: https://codeberg.org/Calciumdibromid/CaBr2/pulls/1957
Reviewed-by: crapStone <crapstone01@gmail.com>
Co-authored-by: Calciumdibromid Bot <cabr2_bot@noreply.codeberg.org>
Co-committed-by: Calciumdibromid Bot <cabr2_bot@noreply.codeberg.org>