Trusted Types #20
cc @ckerschb

also cc @dveditz @jonathanKingston @mozfreddyb
I have read this a few times already. Aside from some possibly missing sinks (I haven't checked), it seems like a worthwhile addition to auditing. I specifically like the CSP header to turn off all sinks, though it might need to be more granular and could perhaps live in a Feature Policy. We should also update https://github.com/mozilla/eslint-plugin-no-unsanitized to include these types. My only questions/concerns are:
I find this an interesting approach, but I'm not sure a type system is the solution the web has been looking for. FWIW, I'd be more inclined to widely discuss the approach we have recently taken in Firefox's chrome-privileged code: automatically sanitize within the APIs that parse strings into HTML (e.g., innerHTML).
My take on a brief skim is that this is a good space to explore, but I'm not convinced the complexity of the policy setup is justified. @martinthomson any thoughts?
After so many years, XSS is still the most prevalent type of attack on web applications, and I agree that some kind of type system for manipulating the DOM could improve the situation. As mentioned in the explainer, security reviews of large applications have always been a pain point, and I can see the benefit of letting security reviewers focus their efforts around the creation of trusted types. Additionally, we could enforce a variety of policies on the various types (maybe even within the DOM bindings), which sounds tempting to me.

What I am mostly worried about is policy creation. In the current form of the proposal, a careless developer would most likely just register a no-op policy, and it would be up to the security team to call that out; so this part would only be a minor improvement over the current string-based APIs. Additionally, I don't think it should even be the developer who registers the policy: wouldn't it be better to separate the trusted type creation from the policy creation? Maybe we should try to think of some other policy creation and delivery mechanism. In turn, this shift of responsibility would remove complexity from the type creation system and relieve developers of the burden of coming up with a sophisticated policy. But maybe that would also require building some string sanitizers into the platform, which I personally would like to see anyway.
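To make the no-op concern concrete, here is a minimal sketch of such a policy in the style of the proposal's `createPolicy` API. The shim and the policy name are illustrative only (they are not part of the spec); the shim exists solely so the sketch runs outside a supporting browser.

```javascript
// A careless no-op policy: every sink write "type-checks", but nothing is sanitized.
const tt = (typeof trustedTypes !== 'undefined') ? trustedTypes : {
  // Minimal stand-in for the real API, for environments without Trusted Types.
  createPolicy: (name, rules) => rules,
};

const noop = tt.createPolicy('noop', {
  createHTML: (s) => s,      // no sanitization at all
  createScriptURL: (s) => s, // ditto
});

const markup = noop.createHTML('<img src=x onerror=alert(1)>');
console.log(String(markup)); // the payload survives unchanged
```

Under such a policy the review burden has simply moved to spotting that the policy does nothing, which is exactly the concern raised above.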
We've been talking in Feature Policy discussions about what sites might do to disable the features they most dislike. I'm starting to think that there might be an HTML6 effort in our future. There are things like I too am concerned about the complexity of this. Part of that is the
So it sounds like people both (a) see value in what's being explored here, and (b) also have some serious concerns about it. Perhaps that implies a position of
I suspect that part of the reason we struggle to get any conclusions here is that discussion about the details is only just beginning and sometimes those details are what our position turns on.
Hey, given that we Intend to Experiment on TT, I'd like to revive this thread and comment on existing, valid concerns:
Point taken. What we notice so far, especially when migrating existing code, is that policies become practically no-ops, but are then allowed to be used only in certain application parts (e.g. the sanitizer, the templating system, our own Safe* type system). Policies do introduce complexity, and were not part of the initial design, but we think they offer interesting properties for securing existing applications (more on that below).
In the current form, the security team still has ways of controlling policy creation (via the headers with the unique policy name whitelist). In a similar fashion, CSP whitelists for script-src are sometimes maintained by security teams to detect developers loading scripts from other sources. The improvement I see is an orders-of-magnitude reduction of the (security) review surface. I agree it's still possible for careless developers to remove all the benefits of the typed approach (e.g. by specifying a no-op policy and using it all over the application), but at least it becomes possible to limit that, and that's the design we'd ideally encourage in userland libraries. For example, even a no-op policy can be controlled at its creation time (via the name whitelist) and at its usage (code review determining how the policy reference is used).
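As a sketch of the allowlist idea: the server pins the set of allowed policy names in a header such as `Content-Security-Policy: trusted-types app-sanitizer default`, and policy creation outside that set is rejected. The tiny enforcement model below is a hypothetical stand-in (in reality the browser, not userland code, performs this check), but it shows the review-surface reduction: only allowlisted names can mint types.

```javascript
// Hypothetical model of header-driven policy-name allowlisting.
const allowedNames = new Set(['app-sanitizer', 'default']);

function createPolicyChecked(name, rules) {
  if (!allowedNames.has(name)) {
    throw new TypeError(`Policy "${name}" is not in the trusted-types allowlist`);
  }
  return { name, ...rules };
}

const ok = createPolicyChecked('app-sanitizer', { createHTML: (s) => s });
console.log(ok.name); // "app-sanitizer"

let rejected = false;
try {
  createPolicyChecked('rogue', { createHTML: (s) => s });
} catch (e) {
  rejected = true; // non-allowlisted policy creation fails loudly
}
```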
In the browser implementation, the enforcement lives in the sinks and makes each sink simply accept a typed value (with optional promotion of a string via a default policy). Policies are now a way of creating typed values. Previously, there were just

In the current iteration, the policy implementations are userland functions, defined per document. We couldn't find a way to make this simpler (e.g. defining them statically, in some header or metadata file), and in general having configurable type factories as JS objects seemed useful in practice. I'm all for trying to find a better way, and for brainstorming on this.

To find common ground: do we all think that having typed sinks (with the same names as the legacy ones) is useful for DOM XSS prevention? If so, then we just need to figure out the best way of creating types, given the constraints (developer laziness, large security review surface, existing code). Policies are one idea we're trying to battle-test now, but we're open to discussing other ways.
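For reference, a sketch of the string-promotion behaviour described above, with a stand-in sink instead of the real DOM. The `default`-policy shape follows the proposal; the escaping is deliberately naive and only for illustration.

```javascript
// Sketch: a sink that accepts typed values, promoting bare strings via
// the policy named "default" (a conforming browser does this internally).
const defaultPolicy = {
  createHTML: (s) => s.replace(/</g, '&lt;'), // naive escaping, illustration only
};

function assignInnerHTML(element, value) {
  // A real sink would accept a TrustedHTML object as-is; a bare string
  // is routed through the default policy (or rejected if none exists).
  element.innerHTML = defaultPolicy.createHTML(String(value));
}

const fakeElement = { innerHTML: '' };
assignInnerHTML(fakeElement, '<script>alert(1)</script>');
console.log(fakeElement.innerHTML); // "&lt;script>alert(1)&lt;/script>"
```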
**Eval / location scope increase**

When the developer is implementing a policy, they will have visibility into, and mutability of, the attributes set on properties and functions. Current APIs like eval and window.location aren't currently polyfillable in a way that prevents the first party from gaining visibility of the content going through them. So the first party will be able to embed a third-party script, load it into the first-party context via a script tag, and have visibility of code that would get evaluated.

**Backwards compatibility**

The specification has addressed developers' ability to progressively enhance their site in a few ways. Browsers that don't implement the policy type should just work as-is. When a browser decides to ship Trusted Types, however, the website will be locked into using these policies for future APIs, and the following cases cause some concern:

**More sinks**

If a DOM sink on an existing API was missed at implementation time, the current policies would then apply to the new DOM sink. This is likely to be untested for these websites and cause violations/breakage across these sites.

**More types**

If it seems that more types are needed to fix XSS on the web, the current shape of the APIs would also apply to websites that weren't implementing that policy type. For example, suppose browser vendors decide there should be a CSS type that prevents injection of CSS custom properties by XSS into the page. In the current implementation of Trusted Types, existing implementing sites would then apply policies here but not handle the type, in which case any site using CSS variables would throw CSP violations and prevent their use. This seems like a big concern, and likely something the specification could solve with some kind of policy prefixing. Separating by policy also gives the developer more flexibility and clarity on what the browser would do for each type.
**Partial updates**

In the current model of Trusted Types, updating parameters on a type doesn't go through Trusted Types. It seems to me that changing a protocol on a URL object should be checked against the same policy as when it was created. An alternative would be to block all updates to types that have gone through Trusted Types, but this doesn't seem tenable. I suggest there needs to be a solution here before this can ship.

**Advantages to Mozilla implementing for internal about pages**

Having the ability to further restrict internal about pages across other vectors would be useful. Currently we restrict the use of unsafe HTML by running it through our internal sanitizer. We could instead change this to a throwing model when unsafe assignments to DOM sinks are used. My understanding is this would have performance benefits wherever we can guarantee that we don't need sanitization, as sanitization can't be zero-cost; however, I don't think there are currently many places where we could assert this safely (perhaps developer tools would be set up for this). We also gain the advantage of running policies on URL and ScriptURL, which is currently seemingly impossible to audit.

**Strategies for developers to roll out**

There appear to be a few clear roll-out strategies that help developers become XSS-free. In codebases produced mostly as compiled output (from TypeScript, Rust, Java, etc.), the compiler may be able to annotate types that are clearly never modified by user data. In that case the compiler can wrap sinks with a blank policy that doesn't check anything. The browser will then throw exceptions for third-party scripts that might not be within the developer's control. They could also choose to fall back to something like DOMPurify. In codebases that use a framework such as React or Angular, the frameworks can implement their own policy within their UI layer.
The site then gains the advantage of the policies for code that doesn't live within those frameworks. Sites that mostly use custom JavaScript can choose to roll out a stringent policy that strips XSS, and gradually change their code to wrap Trusted Types in a way that becomes more performant over time. Developers also gain a simpler interface and are not required to know where all of the 70 DOM sinks for XSS are. Simply enumerating all the places in a codebase that can call these APIs can be very difficult for existing sites, and providing developers with the report-only functionality is another useful tool. There have been concerns about whether developers are able to make sane, informed choices with such a type system. I'm not sure this is a problem in itself, especially as developers who attempt to implement anything are likely to improve the status quo. If developers choose to use blank policies that do no data manipulation, simply to annotate for their own auditing, this also gives them new flexibility they currently don't have in auditing these sinks. Overall I would like us to collaborate on these issues, as I see this as a worthwhile improvement to the web.
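A sketch of the framework roll-out strategy described above: the UI layer funnels every HTML write through a single policy, so application code never touches raw sinks. The `sanitize` function here is a toy stand-in for a real sanitizer such as DOMPurify, and the helper names are illustrative.

```javascript
// Toy "framework" layer: one policy, one DOM-writing helper.
const sanitize = (html) => html.replace(/ on\w+="[^"]*"/g, ''); // toy, NOT a real sanitizer

const frameworkPolicy = {
  createHTML: (s) => sanitize(s),
};

// The framework's only DOM-writing helper; app code calls this, never innerHTML.
function render(element, html) {
  element.innerHTML = frameworkPolicy.createHTML(html);
}

const view = { innerHTML: '' };
render(view, '<button onclick="steal()">Click</button>');
console.log(view.innerHTML); // "<button>Click</button>"
```

With this shape, the security review surface shrinks to the policy and the `render` helper, rather than every call site in the application.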
There is an alternate proposal from @isocroft on TT, which can be found here as a proof of concept. His thoughts around it are meant to help the ongoing discussions on how TT should be rolled out in browsers, and on how simple the web developer experience should be. The idea is to "invert control", in a sense, so as to reduce the cognitive inertia that web developers today might have with the current spec direction of TT. So, DOM sinks like
```js
window.TrustedTypes.HTML.registerPolicySanitizer('alt-policy', function (TrustedType) {
  window.DOMPurify.addHook('afterSanitizeElements', function (currentNode, data, config) {
    // more code here
  });
  return function (dirty) {
    return window.DOMPurify.sanitize(dirty, {
      USE_PROFILES: { svg: true, svgFilters: true, html: true },
      ADD_TAGS: ['trix-editor'], // Basecamp's Trix Editor
      ADD_ATTR: ['nonce', 'sha256', 'target'], // for link-able elements / Content-Security-Policy internal <script> / <style> tags
      KEEP_CONTENT: false,
      IN_PLACE: true,
      ALLOW_DATA_ATTR: true,
      FORBID_ATTR: ['ping', 'inert'], // forbid the `ping` attribute as in <a ping="http://example.com/impressions"></a>
      SAFE_FOR_JQUERY: true,
      WHOLE_DOCUMENT: false,
      ADD_URI_SAFE_ATTR: ['href']
    });
  };
});

window.TrustedTypes.URL.registerPolicySanitizer('alt-policy', function (TrustedType) {
  return function (url) {
    // URISanity is a fictitious URL sanitizer (doesn't exist - yet)
    return window.URISanity.vet(url);
  };
});

/****=== configurations ===****/

/* blockIncludes: blocks including potentially unsafe HTML strings into the DOM, hence modifying the behavior of `innerHTML` */
/* throwErrors: throws errors when the sanitizer detects unsafe HTML string content */
window.TrustedTypes.HTML.config = {
  throwErrors: true,
  blockIncludes: true,
  reportViolation: false
};

/* blockNavigation: blocks navigating to potentially unsafe URLs, hence modifying the behavior of `location.href` or `location.assign()` */
/* throwErrors: throws errors when the sanitizer detects unsafe URL string content */
window.TrustedTypes.URL.config = {
  throwErrors: true,
  blockNavigation: true,
  reportViolation: false
};

/* In the example below, `innerHTML` throws a `TrustedTypesError` because of the "ping" attribute and, since `blockIncludes` is true, does not include the HTML string into the DOM */
document.body.getElementsByName('wrapper')[0].innerHTML += '<a ping="https://www.evilattacker.com" href="#">Hello World!</a>';

/* This will also work for other DOM sinks and chokepoints too */
document.body.lastElementChild.insertAdjacentHTML('beforeend', '<p>More content</p>');

/* Also for this: the behaviour of assign() is modified by the registered URL policy */
document.location.assign("http://my.nopoints.edu.ng/_/profiling/28167/#section1");
```

```html
<meta http-equiv="Content-Security-Policy" content="trusted-types alt-policy">
```

The above actually proposes programmatic configurability over declarative, as it is cheaper and doesn't require the web developer to keep all 70 DOM sinks in mind as he/she writes code based on TT. It also proposes that types should not be proliferated to deal with each kind of data passed around on the front-end. The use of a trusted policy (a types group, or types form, as he (@isocroft) calls it) might be more efficient going forward. For example, URIs for scripts / dynamic resources / stylesheets can come under a single types group or types form: URL. So, for stylesheets, there would also be:

```js
window.TrustedTypes.CSS.registerPolicySanitizer('alt-policy', function (TrustedType) {
  // code goes here
});
```

We would love your take on this alternate proposal on TT. The POC implementation of the above is here. You can try it out yourselves to see how it works.
Thanks for this. I think there's enough here that I'd like to encourage you to create your own repo with its own issue tracker. Issue threads are linear, and when discussing a full-fledged counter-proposal, it's important to keep separable issues separate. I'm probably misunderstanding this, so apologies if parts are off the mark.
Yes, most policies probably only have a create method for one type. Policy names are first-come, first-served, so this would only allow each whitelist entry to correspond to one trusted type. The default policy is likely to cover more than one trusted type. It doesn't seem to me that the ergonomic benefit of:
over
justifies the loss of generality.
How, under the wicg/trusted-types proposal, does a developer have to do that? It seems to me that the developer who is crafting a trusted value only has to keep in mind the type of content that they're crafting: one of (HTML, Script, ScriptURL, URL). A policy author potentially has to think about all of (HTML, Script, ScriptURL, URL) but that's far less than 70 and, as you've pointed out, most policies other than the default policy tend to only deal with one of those. The developer who is using a trusted value with a sink, probably has 1 kind of sink in mind: the sink they're using.
This seems separable from the API for crafting policies. The problem with having just one kind of URL is that some sinks load URLs into the current origin as code, and some load URLs into separate origins or as constrained media.

```js
// Loads into a separate document with an origin determined by the URL (modulo javascript:)
myAElement.href = url;

// Loads constrained media
myImgElement.src = url;

// Loads content into this document. Origin of URL not used to separate.
myScriptElement.src = url;
```

It seems that the first two need far less scrutiny than the third, and within Google, we've simplified migrating legacy applications a lot by just allowing any (
Is the proposal to do policy configuration in JavaScript instead of as document-scoped security metadata that is loaded separately from page content?
@mikesamuel Our intention was not to diverge discussions on the current spec direction, but to bring to the notice of stakeholders such as yourself that a counter-proposal is out there which we feel is worth discussing. However, you are correct: we need to keep separable things separate, and we will create a new repo to track discussions on this counter-proposal on our end, and perhaps reference it here. The major reason for this counter-proposal, and why we brought it to this particular issue on this repo, stems from the comments of @mozfreddyb here on this issue. We agree with his position, quoted below: "Automatically sanitize within the APIs that parse strings into HTML (e.g., innerHTML)." We feel that exposing the sanitizer API, e.g. DOMPurify, to the DOM via the policies created/registered for the HTML trusted type (for example) will make the
> A policy author potentially has to think about all of (HTML, Script, ScriptURL, URL) but that's far less than 70 and, as you've pointed out, most policies other than the default policy tend to only deal with one of those. The developer who is using a trusted value with a sink, probably has 1 kind of sink in mind: the sink they're using.

The developer has to write code against each DOM sink like so:

```js
let TrustedPolicy = window.TrustedTypes.createPolicy('my-policy', {
  createHTML(html) {
    return window.DOMPurify.sanitize(html);
  }
});

document.getElementById('main-page').innerHTML = TrustedPolicy.createHTML('<span x=alert(0xed)>Content!</span>');
```

This means that the developer will have to remember, at each point when dealing with any DOM sink in use in the JS codebase under development/review, not to do this (when a

```js
let TrustedPolicy = window.TrustedTypes.createPolicy('my-policy', {
  createHTML(html) {
    return window.DOMPurify.sanitize(html);
  }
});

document.getElementById('main-page').insertAdjacentHTML('afterbegin', '<span x=alert(0xed)>Hello there...</span>'); // this will throw an error because a string is used directly with this DOM sink
```

There needs to be a presence of mind on the developer's part to be careful when using each DOM sink, as they have to use the value objects (Trusted Types) at every instant.
We believe the ergonomic benefit of the newly proposed API for
We do agree with your assertion, as the URLs for

```js
window.TrustedTypes.URL.registerPolicySanitizer('a-policy', function (TrustedType) {
  return function (url, category) { // category can be one of: 'document', 'script', 'style'
    category = category || 'script';
    // this sanitizer will vet the "url" according to the "category"
    return window.URISanity.vet(url, category);
  };
});
```

From the above code for the
```js
/**!
 * When the DOM sink API below is called, it
 * calls the registered URL sanitizer and passes
 * "javascript:void(prompt('Hello'));" as the `url` and
 * "script" as the `category` parameters respectively
 */
myScriptElement.src = "javascript:void(prompt('Hello'));";
```
Yes, it is a proposal to do policy configuration in JavaScript (as an additional method of policy configuration to the document-scoped metadata loaded in HTTP response headers). However, we do see a fault in allowing policy configuration in JavaScript: client-side code could be compromised by a third-party attacker, which is why we also made a safety hatch for these configurations, such that policy configurations are allowed only once; trying to configure again via JavaScript or via a raw HTTP headers formatting below
The above draws from work and discussions here on Per-Type Enforcement
or using JavaScript, e.g.:

```js
window.TrustedTypes.HTML.config = {
  throwErrors: true,
  blockIncludes: true,
  reportViolation: false
};

window.TrustedTypes.URL.config = {
  throwErrors: true,
  blockNavigation: true,
  reportViolation: true
};
```

The

Finally, someone can close this issue out so fresh discussions on this counter-proposal can begin here.
Thanks for responding, and I will follow up there.
I would like to highlight a few things here: the ongoing TAG review, w3ctag/design-reviews#198, and also the feedback from @annevk in w3c/trusted-types#176. Largely, the issue mentioned is around the brittle nature of the current implementation, in that policies aren't mandated against a feature and instead apply to the callsite. This comes at the cost of potentially missing many APIs as new features are added to the web. Overall I think there is interest in implementing Trusted Types, but depending on WebIDL/callsites may be an oversight that we end up regretting.
It's a fair point. After a few iterations (especially after w3c/trusted-types#204 deprecating
The application at callsites is important: focusing on sources of the trusted values, and erroring out as soon as an untrusted value is used in a DOM sink, allows authors to verify the enforcement statically and avoid runtime surprises. We intend to shorten the feedback loop that current CSP has. In short, I'd rather the API keep throwing at

It's hard to predict how specs will evolve, but it seems we might be able to assert the restrictions we want to be respected at the beginning of those two algorithms:
If possible, I'd like those assertions to be based on the TrustedTypes IDL extended attribute, so something like:
Additional sinks, e.g. for eval, setTimeout, and

Does that sound reasonable?
I'm not entirely sure that always works and it's definitely not as simple as that, but it's probably getting too much in the weeds for the scope of this repository.
@jonathanKingston
Would w3c/trusted-types#176 be a better place to get into the weeds?
We've had a couple more rounds of brief discussions since August, both internally and with Google. The major takeaways:
So I guess we're somewhere between I think
At some point we have to pick. Does @lukewagner have an opinion here?
Trusted Types have been a useful security mechanism in my implementation. The types I have implemented are:
If a value doesn't come from one of these, and I'm using it with an injection site, Chrome will throw an error. On Firefox these also work, but the browser doesn't enforce their use. I'm a 66-year-old systems programmer, and can't call myself any sort of web API expert. It took a few hours to implement, and I used a well-accepted sanitizer, DOMPurify, without auditing it myself. I didn't really experience any grief with the API. Other APIs are notable for the amount of craft knowledge required beyond what is visible on MDN; not this one.
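For reference, the cross-browser arrangement described here can be sketched roughly like this. The policy name and rules are illustrative; in a browser with Trusted Types, `trustedTypes.createPolicy` returns real trusted-type factories (and Chrome enforces their use at the sinks), while the plain-object fallback keeps strings flowing elsewhere.

```javascript
// Feature-detecting setup: enforced where supported, harmless otherwise.
const rules = {
  createHTML: (s) => s,      // a real deployment would call a sanitizer such as DOMPurify here
  createScriptURL: (s) => s,
};

const policy = (typeof trustedTypes !== 'undefined')
  ? trustedTypes.createPolicy('site-policy', rules) // hypothetical policy name
  : rules; // no Trusted Types support: plain strings keep working

const html = policy.createHTML('<p>hello</p>');
console.log(String(html)); // "<p>hello</p>"
```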
At Meta, we see Trusted Types as a useful security mechanism as well. I believe that broader support across browsers and broader deployment across websites would be beneficial to the web platform overall. I wrote down some data points from earlier this year here.
I posted this on Interop 2024 as well, but posting across the position issues too in case it gets lost: Salesforce strongly supports the Trusted Types proposal, considering the imminent regulatory changes in the Netherlands and the broader EU, as outlined in the eIDAS Regulation. The U/PW.03 Standard of the DigiD assessment demands the removal of 'unsafe-eval' from CSP, a challenge that will be mirrored across Europe. This presents critical compliance and potential reputation risks for our customers, especially in the public sector and healthcare. Trusted Types have shown efficacy in XSS risk reduction, demonstrated by Google's successful adoption. This underlines the standard's relevance and potential impact.
I'm not sure if this forum is the best place to discuss this further, but I'm super curious, as someone who's extremely unaware of the web security regulation within eIDAS. Can you help point us in the right direction? In case this gets too much into a back-and-forth discussion, I suggest this conversation be moved to the Mozilla Matrix #security channel: https://matrix.to/#/#security:mozilla.org
We at Mozilla have done a thorough spec review and intend to change our standards position to positive: we are convinced of the track record that Trusted Types has in terms of preventing DOM-based XSS on popular websites (thanks to folks in this thread for providing these insights!). That being said, there are some important concerns that need to be addressed before this can ship in a release build for all of our users. First and foremost, there is some functionality (e.g., We also spent some time on the Chrome implementation and found some features that are not even in the standard, which is a bit problematic (e.g.,
@otherdaniel You authored the patch that adds use counters. Can you make sure this is exhaustive? From looking just at the aforementioned changeset, it seems the event handler is missing. There's likely more.
@otherdaniel: wondering about "largely a transition measure". When considering trusted types policies (not the static methods like
It seems useful to figure out something here early given its impact across a wide range of standards.