Multi-author sites #468
There has been quite a lot of further discussion on this point. Looks like we're going to sacrifice this goat to accommodate sites which, frankly, are already broken. Le sigh. We're likely going to go with a CSP-based OPT IN to enable Service Workers. The straw man is to require a CSP header for the on-origin SW script. Thoughts?
Filed crbug.com/423983 for tracking in Blink
The number of developer hurdles keeps growing, but we have required opting into the Danger Zone for less. Are we still requiring that the SW is served with a correct JavaScript MIME type? A CSP header on the SW script itself affects what the SW can do. It would make more sense to me if the opt-in came with the document or worker that wanted to use a SW.
Yes, still requiring valid JS. The CSP header affecting what the SW can do seems good? We want more people setting CSP, no? Do you have a straw-man for another way of doing this that you prefer?
A new token for Content-Security-Policy:
We need to make sure some spec says you can't set this new CSP token via (I'm a CSP n00b, but it looks like this is the first opt-in CSP directive? Are there any other places where the general opt-out design of CSP will require special handling?)
I don't think page-based CSP is a good idea here; the impact of a SW is beyond the page. Also, CSP is so far opt-out, and we're proposing opt-in. I think we should reconsider the SW script location when it comes to scope. The benefit of this is that we don't break hosts that are safe, such as GitHub. If we must go for an opt-in solution, I'd rather we went for a content type like
Implementer feedback: While we haven't looked at this in great depth yet, our plan at this point is to implement the CSP thing (token or header) on the Service Worker script. Our reasoning is:
The solutions in @jakearchibald's previous comment would be easier for us, frankly.
I really like the content-type idea, mainly because then static content servers could come configured out of the box with a .swjs -> application/javascript+serviceworker mapping, and then once that gets rolled out to GitHub Pages we could use service workers there. (Whereas, with the CSP solution, we're unlikely to ever get a generic host like GitHub Pages to work with service workers.)
@domenic what are your thoughts on the path-based method? It has the benefit of working securely without changing servers. Also, appcache uses the same method for restricting how
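For concreteness, here's a sketch of how the path-based restriction would behave (the paths are hypothetical, and this assumes the scope is capped at the script's containing path, as proposed):

```js
// A worker script's location caps the scope it can control.
// A script under /users/alice/ may claim that subtree:
navigator.serviceWorker.register('/users/alice/sw.js', { scope: '/users/alice/' });

// ...but it can't claim another user's subtree (or the whole origin),
// because that scope escapes the script's own path:
navigator.serviceWorker.register('/users/alice/sw.js', { scope: '/users/bob/' })
  .catch(err => console.error('rejected:', err));
```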
@jakearchibald sounds pretty good. A tiny bit ugly since I like to keep my JS nice and tidy inside a
@annevk @slightlyoff can I persuade you to reconsider the service-worker script path approach? Failing that, a special content type. This avoids the messiness of a CSP opt-in, and confusion around which client's CSP it applies to.
I think having to add special headers is liked less.
That is fair and this is less constrained than
So a few thoughts:
This leads me to suggesting a header on the script file and, if we can't agree on CSP for the script file, thinking we should do something like

Reactions?
Pointer to John's examples? And an explanation of sorts why paths are bad and don't address the examples, if that's not self-evident.
CSP is opt-out, and CSP blocks requests from happening, not responses from being used. Since we're talking about something that's opt-in and blocks on response, I can't understand why we think CSP is the answer here. The path solution means GitHub Pages, other GitHub raw viewers, and tilde-based sites just work. If you can put a script there, you control it. jsbin.com allowed you to put a script in the root, which is highly unusual, and @remy already fixed that.
My examples of common sites that miss out on same-origin policy protection (because they put content from non-mutually-trusted users on a single origin) were:
Of these, jsbin.com was the only one which didn't use paths to separate content from different users, and that's now fixed. It's likely there are more such sites (maybe some sites upload everything to the root with a hashed filename, or use paths within the query string, like
http://www.w3.org/2001/tag/issues.html#siteData-36 should be considered. From Tim Berners-Lee:
Path-based restrictions run afoul of this, albeit not as badly as
We're trying to fix cases that already run afoul of that. Cases where part
Just to chime in, I didn't fix it by separating out paths for users. http://jsbin.com/foo.js is still valid. We simply refuse to serve scripts identifying themselves as service workers: jsbin/jsbin@ce53bb2
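For context: browsers send a Service-Worker request header when fetching a script for registration, which is what makes a fix of this shape possible. A minimal sketch, assuming an Express server (this is not jsbin's actual code):

```js
const express = require('express');
const app = express();

// Browsers include a `Service-Worker: script` request header when
// fetching a script for service worker registration, so a host can
// refuse to let user-uploaded scripts be installed as workers:
app.use((req, res, next) => {
  if (req.get('Service-Worker')) {
    return res.status(403).send('Service workers are not allowed here.');
  }
  next();
});

app.listen(3000);
```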
The internet is a big place; "highly unusual" is going to be a much bigger number than you anticipated by the time this thing is shipped. If you take the path-scoping approach, I suspect (aka gut feeling) that in years to come it'll catch out new devs, mostly because web devs tend to throw something together/copy & blindly paste before reading the fine print of the specs. And by catch out, I mean some will use path scoping and accidentally protect themselves, others won't and it could be too late.
Before SW, the amount of damage /A/ could do to /B/'s content depended on how reliant /B/ was on origin storage such as local storage or IDB. The thing that was deemed unacceptable here (as far as I can tell) is that SW allowed /A/ to take over the content of /B/ even if /B/ was totally unreliant on storage, and that was new. We've protected against that. But if /B/ itself becomes reliant on origin storage (through SW or otherwise), we cannot protect it from /A/, but that's old news. Ideally no one would have let untrusted parties share an origin & this issue wouldn't have existed, but bleh.
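To make that pre-existing exposure concrete, a minimal sketch (hypothetical paths and storage names; this is ordinary same-origin storage behaviour, nothing SW-specific):

```js
// Storage is keyed by origin, not by path. If user A's page at
// https://example.com/users/a/ runs this, it clobbers state that
// user B's app at https://example.com/users/b/ keeps on the same origin:
localStorage.setItem('settings', JSON.stringify({ theme: 'evil' }));
indexedDB.deleteDatabase('app-data'); // also origin-scoped, no path boundary
```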
We are creating a false sense of security by providing protections against /X/ taking over /Y/. The same false sense of security cookies give with
We're not trying to create any sense of security here -- we're trying to avoid surprising users who have existing, deployed content that can get screwed by SW. Getting on our collective high horse (illustration to follow) and saying "you already have these theoretical security holes" (never mind that they're not actually using cookies, etc.) is going to look like a poor excuse indeed to these people. It's true that this is mitigated a bit by the fact that SW is https-only, and that these sites are less common than they used to be. That said, I suspect that someone is going to get caught out by this, and some of the early press around SW is going to be "it's that thing that introduced a new security hole". When I talked with @slightlyoff about this in SF, I think we came to a place where we could either address this with some form of scoping, or take the risk and just try to do an education campaign -- i.e., "if you run an HTTPS multi-user site, be aware that you need to take these steps..."
Even with scoping and /A/ being a non-malicious entity, if /B/ is a malicious entity, /A/ is screwed the moment /A/ deploys service workers. Again, restricting things where there is no security boundary will lead to /A/ feeling secure, while an XSS on /B/ or a malicious /B/ can seriously hurt /A/.
With the introduction of the "same-path policy" (I don't know the correct name), is it better to keep those URLs denied from being used, or to change the specs so that, when used as service workers, such URLs take the same path as the script that created them? As the FileSystem API is considered dead and the content from
This was already the case. We can't use a browser storage resource for the SW as it becomes really tricky to update. You end up relying on cached assets updating cached assets, which will likely lead to lock-in.
One of the main reasons for using

It can be either better than the current updating process, as WebCrypto can be used for improved security, or not, in case content from insecure origins gets used without authentication, or when the updating process fails or is not implemented. In case of error, simply unregister the worker. In case it isn't implemented, which will end in a lock-in, why not do the same as when cache expiration headers are badly set: just allow the user to unregister the service worker, like how the cache can be cleared. Storing those locally also allows some possibilities, like checking the integrity of the script used (by the user in a browser interface, not by the script, except for updates). Even if hashes get used for this, which are not optimal as explained by the Tor Project, it is still better than how it is now. Some people suggested PGP, but for me it's too counter-intuitive (although it can work for applications where encryption is the main focus).
From where? If your SW content is:

```js
self.onfetch = e => e.respondWith(new Response("Hi"));
```

…you've just locked yourself in. No way to update. Not only tricky, it's impossible.
As I said, updating isn't hard, but preventing hard locks is: it is not possible. As bugs are the only case where lock-ins happen, I showed in the last
Right, but I've definitely heard of buggy code being released before. Not by me of course. I'm ace.
It wouldn't be exactly like my code; it would be a lot more complex, making it difficult to spot before it's too late. Also, I think it's really easy to launch something without thinking of the update path; that's why we put the new worker in charge of how it wants to update, rather than requiring something of the old worker. E.g. I accidentally set a 1 year max-age on an appcache manifest before. It locked users in & there was nothing I could do about it. We really don't want ServiceWorker getting into that situation. I guess I only see downsides to blob service workers. What are the benefits?
Well, limiting the potential of this because we're humans. Moving the
"Header to control the restriction" tracked via http://crbug.com/436747 for blink |
@mvano, @jakearchibald: I agree that we should change the default path. I've been bitten by this now that Chrome implements the path restriction.
@slightlyoff see #595 - we've done just that
@jungkees would you mind updating the spec to take into account the Service-Worker-Allowed header as described in #468 (comment)? Background: we've got feedback from Service Worker customers that the ability to override the path restriction via this header is important to them. The team has proactively started its implementation assuming that it is in the pipeline of spec updates.
Alright. I'll work on that.
Updated the steps to allow setting a max scope with the Service-Worker-Allowed header. The scope check has been moved from the Register algorithm to the Update algorithm (steps 4.3.9 ~ 4.3.17) accordingly.
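For reference, a sketch of how the Service-Worker-Allowed header interacts with registration (paths are illustrative):

```js
// Suppose /js/sw.js is served with the response header:
//
//   Service-Worker-Allowed: /
//
// That raises its maximum scope beyond the default /js/, so this
// registration succeeds; without the header it would be rejected:
navigator.serviceWorker.register('/js/sw.js', { scope: '/' })
  .then(reg => console.log('scope:', reg.scope))
  .catch(err => console.error('registration failed:', err));
```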
As discussed, https:// sites that have multiple authors may be surprised to discover that user A can now overwrite content for user B.
What's needed:
Will do a pull for 1.