Preventing server-forced updates #822
Comments
Side-note: it was a great idea by you guys to develop this spec on Github. 👍 This allows for a very open discussion. (I have never made any proposals like this to any sort of standards body before this). |
I can't quite get my head around the use cases.
How does the current update model prevent this?
SW scripts are same-origin for security reasons. When you're adding things to the cache you can already verify integrity, although CSP is a better mechanism for this. |
Imagine we have created a PGP app to encrypt messages, and we serve it at https://mypgpapp.com. It generates private keys and stores them in IndexedDB, then lets the user encrypt messages with those keys so they can copy their encrypted messages to send via email. So far, our security is pretty good. Since we used HTTPS, we can be reasonably sure the user won't load a backdoored script from some attacker's server. However, what happens if our server gets compromised? If an attacker had access to it, they could deploy an update to the registered SW script which fetches code that will upload the private key somewhere. Now it's game over for any user who visits the app and has their SW script caching expire.
By this example I meant that when the app is being hosted on servers owned by some other party, the host who actually runs the machines could deploy backdoored code. But if a SW prevented forced updating, it could be built to check that the updated version is cryptographically signed by the author before accepting updates. Then, the author can sign the resources offline, and never has to give the private key to the host to have the users verify authenticity. CSP doesn't solve this since the malicious scripts are coming from a whitelisted domain. (I suppose I didn't need to mention integrity, authenticity was the important part). |
Related if not dup of #761 |
I resonate with this use-case (I worked on the Firefox OS mail app, which similarly wanted what amounts to signed offline app packages), but I think giving the service worker the ability to defeat upgrades of itself is the wrong way to handle it. (And I think this issue is roughly equivalent to 761, although this issue more concretely describes motivating use-cases.)

The security model of https is trust in an origin, authenticated by the certificate authority (CA) infrastructure, with compromises being handled via OCSP (stapling) and/or short-lived certificates. Using a trust-on-first-use service worker that can deny upgrades is a clever way to attempt to embed an offline-signature security/trust model inside the existing online-signature trust model. Some people have even come up with an exceedingly clever approach also combining HPKP and AppCache, see https://www.reddit.com/r/encryption/comments/4027ci/how_2_spacex_alums_are_using_encryption_for_good/cyywmc8

The major risk of a prevent-update feature, as covered by Jake in #761, is allowing an evil/attacker-controlled service worker to extend the duration of its nefarious residency, potentially permanently. And also as Jake suggests, it seems better to make the validation part of a standard already concerned with validating contents, namely CSP, which already has precedent with its SRI-like "valid hash" mechanism https://w3c.github.io/webappsec-csp/2/#source-list-valid-hashes. This also allows the browser to alert the user, helping avoid spoofable UI and related confusion/fatigue. Additionally:
The big wrinkle is that what is really needed is a cross-browser effort with this specific use-case in mind, because it really is its own big-picture idea. I believe there are many people who care about the use-case, but I suspect most browser engines are focusing their efforts just on getting service workers and progressive web apps going. The best hope for this use-case right now is browser extensions/add-ons. All browsers seem to be converging on WebExtensions, and at least some browsers (ex: Firefox) allow easy disabling of extension auto-updates. This is important since the trust model looks like it depends on the extension marketplaces' authentication mechanisms and on them not being compromised themselves, which is strictly weaker than an air-gapped private key. (NB: Pre-WebExtensions Firefox extensions can, however, be cryptographically signed.) This is clearly nowhere near as good as a packaged-apps model that does not require installation, like Firefox OS was shooting for, but it seems to be the most realistic cross-browser solution. |
[Sorry for the 1.5 year late reply :) coming here from #1208, which is similar to this issue but instead of preventing the update, the old Service Worker warns the user about the update.] @asutherland and others have brought up valid concerns about this approach, e.g., what if the user force-refreshes the web app, what if they open it in an incognito window, what about when they open it for the first time, what about on a new device. I agree with all of those, in fact, I proposed a different solution built on Certificate Transparency a while back that would solve those things, and if browsers implemented that, I would be very happy too :) However, a solution built on Service Workers has the big advantages that
And I think it would solve the biggest part of this problem, especially because any attempt to send the user unsigned code runs the risk of detection by the SW, unless the server can somehow detect that the SW is being bypassed. That might be possible in the case of force-refresh (Cache-Control headers), though. Maybe we can use the SW to always send force-refresh-like Cache-Control headers? And while some have indeed used browser extensions to solve this problem (Signal, blockchain.info and mega.nz come to mind, although the first two of those are Chrome apps, which are getting phased out; I myself made an old-style Firefox addon), it would be preferable if all users were protected, and extensions don't even protect against most of the loopholes ("what if they open it in an incognito window, what about when they open it for the first time, what about on a new device"). |
Hi, I want to secure my app by storing each asset's SHA256SUM on a blockchain, then allowing an update only when the on-line file matches the shasum. This way, releases have to be signed by the author/auditors before an update. This is an innovative use case of service workers + blockchain that can bring a new level of security to web applications, because an attacker would have to gain access to both the service and the private keys to perform a mass attack. However, this will be impossible to do as long as the service worker update is forced. Does someone know about an alternate way to achieve this, or eventually a feature implementation that could open this possibility? I understand the issue about an attacker being able to leverage the service worker's self-control to implement malicious code durably. I'm not sure how this could be mitigated. |
I've been thinking a bit about it and I believe we can solve this one properly. The core idea is having the installed service worker able to check the new available version before installation; eventually, it would have the power to prevent it if certain conditions are not met (essentially, if the new version is judged insecure). This should be an exceptional event, especially as securing service worker updates would make the attack useless. I'm basing this affirmation on the use cases proposed for this feature so far.

On the other hand, we don't want an attacker to install a corrupted service worker that wouldn't allow any update, durably implementing malicious code. This attack could happen massively on websites which didn't implement an integrity check on service worker updates. It could also happen on a supposedly secured website on a per-user basis, meaning by altering someone's browser files through physical access or by exploiting a flaw in the operating system. This last attack doesn't scale, but has the viciousness of breaking a system in which the user would have the highest confidence, which could lead to the biggest damage (think banking/cryptocurrency apps).

A natural solution respecting those two requirements would be warning the user when an update has been blocked. This highlights that an exceptional situation is happening, and that something may need to be done about it. The active service worker would have to provide a message explaining why it blocked the update, and the user would be able to inform herself on social media / forums / ... about what's happening. An option would be provided to update anyway. The banner could use the same UI as the one proposing PWA installation. I would advise using a sober banner, but putting the message saying the update has been blocked in red.

I would rather avoid something like what we've seen in the past, where everything becomes orange and red and conveys more fear than knowledge to non-technical people. But some bit of something is needed to call the user's attention, as we want to be sure the exceptional situation is known by her. I would rather make the banner non-intrusive and let navigation continue. If closed, a new pop-up would appear on the next normal cache update if the situation is still abnormal (so every 24h, or more often depending on cache headers). Another strategy could be totally blocking navigation. This would prevent corrupted code from running at all in some cases (a locally corrupted service worker). On the other hand, it could give some incentive to allow the update no matter what so the website can be used. I think some further thinking on this very aspect is needed.

Technically, I would expect this feature to be available from the service worker environment. There could be an event we can listen for, and the function blocking the update could be more widely available as `navigator.serviceWorker.rejectUpdate(message)`. The hard-coded reaction of the browser, namely showing a banner, would prevent any abuse/hackerish use of this functionality. I have no technical knowledge about how service workers are currently implemented, so this may not really fit. I'm basing this mostly on how I would find it convenient and logical to use as a website developer. It would be nice if someone could confirm this actually makes sense implementation-wise.

I'd like to ask anybody interested in this feature to participate and challenge the design I'm proposing. Better to think it through well before going ahead. I'd also like to hear from implementers and whether they would consider this solution acceptable and implementable. I think we have a nice opportunity to push the usefulness of this new web technology further, so I hope we can go ahead and add this one to the specs. |
It's been three months since I proposed a fix to this, and I got no answer. Is there anything I can do so we can move on with this? Should I go ahead and make contact somewhere else? Or is it already settled? |
This issue is on the upcoming ServiceWorkers F2F agenda for discussion in October, although I can't promise any resolution. Note that my previous comments about the security model of the web still stand; in particular, the Clear-Site-Data header, since implemented by Chrome and Firefox, has a "kill switch" which will wipe out all storage for an origin, including ServiceWorkers. That's an intentional feature that a SW would never be able to inhibit, even if normal updates could be blocked or delayed. There is a new experimental option that seems better suited to your use-case: a very experimental Mozilla project https://github.com/mozilla/libdweb that enables WebExtensions to implement custom protocols like IPFS (see the blog post).

In regards to your proposal, I understand the core of your suggestion to be that a banner would be presented if the potentially-malicious SW blocks an update, and that the potentially-malicious SW is able to provide some of the text to be displayed in the banner. The user would then reach a decision informed by searching the web and asking other people on social media what they should do. The issue with prompting the user in cases like this is that the user is frequently unable to make an informed decision about what is going on. This is especially true if the attacker can supply scary text like "There is a virus in the update, don't update or your computer will be permanently damaged!", or text that leverages update fatigue like "The update will take 10 minutes to install, are you sure you want to update?", or unique strings that the attacker can use to game any search the user would make, so that the guidance they find on the internet is from the attacker. This isn't really solving the problem; it's making the problem the user's problem. |
Thank you for the answer and for feeding the discussion. My proposal has indeed nothing to do with bypassing the clear-browser-cache functionality. One of my premises was that in the legit case, the banner wouldn't show for long, as the server take-over is likely to be fixed within hours or at worst within days. On the other hand, a malicious SW would continue to pop a warning forever until updated, incentivizing the user to do something about it. Non-technical users are likely to accept the update after a few days and wipe the malicious SW just to get rid of the banner. Not saying it's so great, just saying it leverages blind behaviors.

Now there may be other ways to leverage this difference in update-denial timespan, like setting a reasonable time limit on rejection at 24 or 48 hours. The problem I see in that case is that domain owners may know users have a malicious SW but remain powerless about it.

Another option (if it's acceptable for browsers) I can see is having the SW rejection option enabled for a website through a DNS TXT field. In this scenario, the malicious-SW squat attack against a website that hadn't enabled SW update rejection could work only by taking over the DNS, and only as long as control over DNS remains, since the legit owner would be able to switch off the rejection option, triggering SW renewal. The TXT field would be either nonexistent (option not enabled) or a number representing for how many minutes an update rejection remains valid, maybe with a reasonable maximum limit. |
Just thought I should chime in as I was the one who opened this issue: I now understand the rationale in preventing updates. I previously thought the 24-hour limit was to prevent the accidental bricking of apps by well-meaning server admins, but really it's about preventing an attacker from intentionally bricking the app forever (so my proposed solution of using response headers doesn't really help). I no longer think it makes sense to be able to fully prevent updates in the Service Worker API. BTW, I have some ideas about how to accomplish what I want with a Subresource Integrity attribute for iframes, hopefully that will be implemented some day. |
The exact use case @mappum was describing seems to be what is being discussed over here. Although I can imagine that not only E2EE apps can benefit from this: almost any downloaded application uses code signing to verify updates nowadays. Perhaps being able to prevent server-forced updates is too generic. Maybe what we need here is something more specific to code-signed updates, so that only Service Workers that make use of code signing may prevent server-forced updates. |
Indeed, I previously proposed a solution for this problem, namely simply adding SRI support to service workers: w3c/webappsec-subresource-integrity#66 Created an issue in the correct repository for discussion: #1680 |
SRI cannot help this use case, as SRI just uses hashes, not asymmetric signatures. SRI is only useful for cross-origin security.

I would like to implement signature verification in a service worker in my current project, where, unlike in a typical web app, the client code is user-controlled; the user would upload a signed app update to the server, and the service worker would check the signature on the files coming from the server – allowing a trust-on-first-use model with regards to the server… if there was a "paranoia mode" for service workers that would disable all the escape hatches, making sure that the service worker could strip

Please, please, please give us the choice. For the vast majority of web apps the current way of working, which would indeed "prevent an attacker from intentionally bricking the app forever", is the appropriate one. For our paranoid E2EE apps with unconventional update models, we would like to opt in to full control of the update process with no escape hatches. |
Not true, as SRI can be used to specify an immutable bootloader which implements the asymmetric signature verification. |
Ah, that's clever. But either way, whether triggered by SRI or by passing some |
Oh, one key thing I've just realized: to not allow attackers to "permanently screw up" a compromised site that was using the normal mode (or no SWs at all), opting in to the secure mode should require at least a permission prompt, and probably be reflected in the browser UI (such as a "lock with refresh arrows" icon in place of the normal lock icon). |
While this doesn't solve the problem of preventing Service Worker updates, here's a rough sketch of an idea that might (with user co-operation) at least allow cobbling together a ToFU level of security to detect unauthorized updates, vet authorized updates, and fail-safe if an unvetted update occurs, on existing browsers:
This should maybe be safe, since while an adversary can overwrite the existing service worker by simply serving a 200 OK response that would blow away CS1 (cf. an Evil Maid failing by blowing away TPM-protected secrets when factory-resetting the PC), I don't think there's any way for them to actually get at the source code of the currently running service worker. The update mechanism is hella janky, though, and I'm less sure of it (especially since the "offline update" of a service worker isn't proven yet). The initial installation sort of has to include a leap of faith (that's ToFU), but I had hoped to allow the application to reduce the surface area the user has to think about to nothing more than matching an on-screen hash to an otherwise-known-good value. Obviously, this is a LOT of engineering work, and an additional UX burden of some song-and-dance. But, of course, at least some UX burden is absolutely necessary due to this crux. Certainly something more "batteries-included" for web applications to keep sensitive data safe from adversarial takeover at an unknown-but-surely-upcoming future date would be nice. |
Hi all 👋 FYI, I proposed an alternative solution to the underlying goal (of facilitating web apps that don't trust the server) at the WICG (working title: Source Code Transparency). I also presented on it at the WebAppSec WG meeting at TPAC (minutes), and it seemed like there was interest from the browsers there. Instead of trying to prevent updates, the proposal here is to make all updates transparent and publicly auditable by security researchers, to make it detectable if any malicious code gets deployed by a web app's server. While this doesn't prevent malicious code from being deployed, it strongly discourages servers from ever doing so (due to the risk of reputational damage). The security model here is similar to Certificate Transparency, which has been very successful at detecting and preventing malicious certificates from being issued. And contrary to the proposals here, it wouldn't be TOFU, but protect users from the first time they open the web app (if the browser implements source code transparency, obviously). Even though I also previously commented in favor of the proposal in this issue, my impression is that browsers are quite resistant to preventing updates entirely, and would actually be more open to a solution dedicated to the underlying problem, rather than something "hacked" on top of Service Workers, even if it's more work to implement. For the full proposal, please see the explainer. If you have any comments or suggestions, please open an issue or discussion on the repo. If you support the proposal, please leave a 👍 on the WICG proposal. Thanks! |
I'm interested in this functionality too, to protect disruption of the ServiceWorker within RecipeRadar, a web-hosted Progressive Web Application. I've reviewed and broadly like @twiss's proposal, and I have a competing proposal that is compatible with delivery over both HTTP and HTTPS (my understanding is that SCT requires TLS). My competing proposal is that site operators should deploy a W3C-SRI-format compatible value to DNS containing the expected hash of the content body served from the root path of the webserver. I admit that this single-path limitation is somewhat constraining. Although I've asked the
(note that when I deploy updated code for RecipeRadar, this entry temporarily contains two hashes -- one for the cached/stale app allowing web browsers using web caches to continue to use the stale app until it expires, and one for the current/fresh app. the W3C SRI spec foresaw this requirement for subresources hosted by CDNs, and so it does support multiple values at a given hash strength level - see example 7 here) No web client currently supports this in practice, as far as I'm aware - however the idea would be that if the integrity check for the root resource fails, then we should be careful about trusting or loading any of the referenced subresources (including the ServiceWorker script), even if they contain SRI hashes (in other words: the root resource hash appears faulty, so all bets are off on the subresource hashes). |
Sometimes I write comments too hastily. To clarify: I've informally reviewed SCT -- I'm not a member of any standards bodies, only a keen technologist -- and my proposal is to some extent competing, but there's no mutual exclusion between them (that is to say: both could be deployed in parallel). |
This might be really useful for concerns about a CDN becoming malicious while the legitimate operator still controls the DNS, but doesn't do anything for the "don't trust the operator" use case.
Actually sounds compelling!… with the caveat that not every app is public and wants to be transparent, I guess. Now that I think about it, what if we could have ServiceWorker/page-controlled updates under a special "installed web app" concept? So instead of imposing the controlled update model onto regular |
Thanks @valpackett - yep, that's exactly the kind of scenario that a DNS webintegrity checksum would be intended to guard against; and correct, the mechanism does not protect against an untrusted operator. (if an application is free-and/or-open-source and reproducibly-buildable, then continuous inspection and confirmation of the published integrity hashes may be possible, but that'd be an independent process. less-transparent sites could continue to offer content integrity) I don't feel knowledgeable enough about either ServiceWorkers or web origins to comment on the |
I'd like to propose a change to the ServiceWorker spec that allows for applications which cannot be forced by the server to install updates.
Background
Native applications have a significant security advantage over web applications: they control their own update lifecycles. New code is not loaded by default, so if there is no auto-update functionality built in, new versions are only adopted at the user's will. By contrast, web applications load new, untrusted code from a server each time the user navigates to them. This increases the attack surface for malicious code to be deployed by whoever has control of the server (hackers, political actors, rogue service administrators, etc.).
The Application Cache API allowed this feature; servers could set the AppCache manifest `max-age` very far in the future, which would prevent the browser from contacting the server. However, servers configured to cache aggressively would cache these manifests and brick apps accidentally, leaving developers unable to deploy fixes to their apps. Since this problem was common, the ServiceWorker spec is defined to limit `max-age` to 24 hours, after which an update is ensured to happen.

Use Cases
Server-forced updates need to be prevented in the following cases:
For an example of code that uses this ability of the Application Cache API, see `hyperboot` (written by @substack, the author of `browserify`).

Solutions
In the ServiceWorker spec, preventing forced updates must only happen when explicitly requested, so as not to cause the accidental bricking tragedies seen with AppCache.
Possible methods for adopting this feature:
- A `Service-Worker-Max-Age` header (Introduce `Service-Worker-Max-Age` header #721). Servers likely won't set this header by default.
- A `Service-Worker-No-Forced-Updates` header, which should only be used to specifically opt in to this feature, and which will remove the 24-hour cap.

These changes still allow for applications to trigger their own updates by unregistering their ServiceWorker and registering the new version. It may also be beneficial to add a method which triggers a reload of the registered ServiceWorker and does the standard byte-for-byte comparison to see if there is an update.
Thank You
Thanks for considering this proposal. I believe this small change would make ServiceWorkers much more powerful, and bring the web a huge step closer to parity with native applications.