
Trusted Types #20

Closed
annevk opened this issue Sep 26, 2017 · 36 comments · Fixed by #291 or #936
Labels
position: positive venue: W3C CG Specifications in W3C Community Groups (e.g., WICG, Privacy CG)

Comments

@annevk
Contributor

annevk commented Sep 26, 2017

It seems useful to figure out something here early given its impact across a wide range of standards.

@annevk
Contributor Author

annevk commented Sep 26, 2017

cc @ckerschb

@dbaron dbaron added the venue: W3C CG Specifications in W3C Community Groups (e.g., WICG, Privacy CG) label Feb 9, 2018
@dbaron
Contributor

dbaron commented Feb 16, 2018

also cc @dveditz @jonathanKingston @mozfreddyb

@jonathanKingston

I have read this a few times already. Aside from some possibly missing sinks (I haven't checked), it seems like a worthwhile addition to auditing.

I specifically like the CSP header to turn off all sinks; it might need to be more granular, and could perhaps live in a Feature Policy.

We should also update https://github.com/mozilla/eslint-plugin-no-unsanitized to include these types.
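For reference, the header-based switch discussed here is roughly what later shipped in Chromium; a sketch of the directives, using the syntax as it was eventually standardized (which differs from the 2017 draft):

```http
Content-Security-Policy: require-trusted-types-for 'script'; trusted-types myPolicy dompurify
```

`require-trusted-types-for 'script'` turns string input off at the script-capable sinks, and `trusted-types` names the policies a page may create.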

My only questions/concerns are:

  • Naming: does TrustedHTML imply to developers that it will never permit XSS?
  • Does this have any impact on the browser's built-in XSS filter? Perhaps the filter could be disabled when this type is used?

@mozfreddyb
Contributor

I find this an interesting approach, but I'm not sure a type system is the solution the web has been looking for.

FWIW, I'd be more inclined to widely discuss the approach we have recently taken in Firefox's chrome privilege code: Automatically sanitize within the APIs that parse strings into HTML (e.g., innerHTML).
One could also debate exposing a sanitizer API to the DOM.

@bzbarsky
Contributor

My take on brief skim is that this is a good space to explore, but I'm unconvinced of the complexity of the policy setup.

@martinthomson any thoughts?

@ekr
Contributor

ekr commented Dec 14, 2018

@ckerschb

@ckerschb

After so many years, XSS is still the most prevalent type of attack on web applications, and I agree that some kind of type system for manipulating the DOM could improve the situation. As mentioned in the explainer, security reviews of large applications have always been a pain point, and I can see the benefit of letting security reviewers focus their efforts on the creation of trusted types. Additionally, we could enforce a variety of policies on the various types (maybe even within the DOM bindings), which sounds tempting to me.

What I am mostly worried about is the policy creation. In the current form of the proposal a careless developer would most likely just register a no-op policy and it would be up to the security team to call that out - so this part would only be a minor improvement to the current string based APIs. Additionally I don’t think it should even be the developer to register the policy, wouldn’t it be better to separate the trusted type creation from the policy creation? Maybe we should try to think of some other policy creation and delivery mechanism. In turn, this shift of responsibility would remove complexity from the type creation system and the burden for developers to come up with a sophisticated policy. But maybe that would also require building some string sanitizers into the platform, which I personally would like to see anyway.

@martinthomson
Member

We've been talking in feature policy discussions about what sites might do to disable those features they most dislike. I'm starting to think that there might be an HTML6 effort in our future. There are things like document.write() and synchronous XHR that don't really need to survive long term. Element.innerHTML is a more difficult proposition though.

I too am concerned at the complexity of this. Part of that is the perl -T thing that haunts me. A bigger part of that derives from the desire to have custom sanitization routines. Maybe that is unavoidable, but there are probably ways in which this could be simplified. How far do we get with an in-browser sanitizer that essentially only prevents script execution? If we were able to neuter the current entry points and provide unsafe variants with sanitizer hooks, would that be a better model?

@dbaron
Contributor

dbaron commented Dec 17, 2018

So it sounds like people both (a) see value in what's being explored here, and (b) also have some serious concerns about it. Perhaps that implies a position of defer? Or is there some other position that you think could represent a Mozilla consensus (or a process that might achieve one)?

@martinthomson
Member

defer seems right in that it's premature to be deciding on this.

I suspect that part of the reason we struggle to get any conclusions here is that discussion about the details is only just beginning and sometimes those details are what our position turns on.

@koto

koto commented Jan 16, 2019

Hey, given that we Intend to Experiment on TT, I'd like to revive this thread and comment on existing, valid concerns:

I'm unconvinced of the complexity of the policy setup.

Point taken. What we've noticed so far, especially when migrating existing code, is that policies become practically no-ops, but are then allowed to be used only in certain parts of the application (e.g. the sanitizer, the templating system, our own Safe* type system). Policies do introduce complexity, and were not part of the initial design, but we think they offer interesting properties for securing existing applications (more on that below).

In the current form of the proposal a careless developer would most likely just register a no-op policy and it would be up to the security team to call that out - so this part would only be a minor improvement to the current string based APIs.

In the current form, the security team still has ways of controlling the policy creation (via the headers with the unique policy name whitelist). In similar fashion, CSP whitelists for script-src are sometimes maintained by security teams to detect developers loading scripts from other sources.

The improvement I see is in orders-of-magnitude reduction of the (security) review surface. I agree it's still possible for careless developers to remove all the benefits of typed approach (by e.g. specifying a no-op policy and using it all over the application), but at least it becomes possible to limit that, and that's the design we'd ideally encourage in userland libraries. For example, even the no-op policy can be controlled at its creation time (via name whitelist), and at its usage (code review determining how the policy reference is used).
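The name-allowlist mechanism described above can be sketched in userland JavaScript. This is a minimal model, not the real API (which is trustedTypes.createPolicy with the allowed names delivered via a CSP header); all names here are illustrative:

```javascript
// Minimal userland model of name-allowlisted policy creation.
function makePolicyFactory(allowedNames) {
  const created = new Set();
  return function createPolicy(name, rules) {
    if (!allowedNames.includes(name)) {
      throw new TypeError('Policy "' + name + '" is not in the CSP allowlist');
    }
    if (created.has(name)) {
      throw new TypeError('Policy "' + name + '" was already created');
    }
    created.add(name);
    // Only the vetted factory functions escape, so a security review can
    // focus on where this policy object flows.
    return { name: name, createHTML: (s) => rules.createHTML(String(s)) };
  };
}

const createPolicy = makePolicyFactory(['sanitizer']);
const sanitizer = createPolicy('sanitizer', {
  createHTML: (s) => s.replace(/</g, '&lt;'), // stand-in for a real sanitizer
});
console.log(sanitizer.createHTML('<img src=x onerror=alert(1)>')); // '&lt;img src=x onerror=alert(1)>'
// createPolicy('rogue', { createHTML: (s) => s }); // throws: not allowlisted
```

The point of the model is that a no-op policy is still gated twice: once at creation time (by name) and once at review time (by where its reference flows).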

I don’t think it should even be the developer to register the policy, wouldn’t it be better to separate the trusted type creation from the policy creation? Maybe we should try to think of some other policy creation and delivery mechanism. In turn, this shift of responsibility would remove complexity from the type creation system and the burden for developers to come up with a sophisticated policy.

In the browser implementation, the enforcement lives in the sinks: a sink simply accepts a typed value (with optional promotion of a string via a default policy). Policies are now the way of creating typed values. Previously, there were just TrustedHTML.unsafelyCreate functions, which have since been removed, because we believed that exposing them only encouraged a design that removes the benefits of the types: there's not enough control possible, and as a consequence there would be no review-surface reduction.

In the current iteration, the policy implementations are userland functions, defined per document. We couldn't find a way to make this simpler (e.g. to define them statically, in some header or metadata file), and in general having configurable type factories as JS objects seemed useful in practice. I'm all for trying to find a better way, and brainstorming on this.

To find common ground: do we all think that having typed sinks (with the same names as the legacy ones) is useful for DOM XSS prevention? If so, then we just need to figure out the best way of creating types, given the constraints (developer laziness, large security review surface, existing code). Policies are one idea we're trying to battle-test now, but we're open to discussing other ways.

@jonathanKingston

jonathanKingston commented Mar 9, 2019

Eval / location scope increase

When implementing a policy, the developer gains visibility into, and the ability to mutate, the values passed to the guarded properties and functions. APIs like eval and window.location aren't currently polyfillable, so today the first party can't observe the content going through them. With this proposal, the first party will be able to embed a third-party script via a script tag and gain visibility into code that would get evaluated in the first-party context.
It seems like this is a slight increase in scope of how the web platform works today but not a big concern.
@annevk probably has knowledge of the problem space here and if it's a concern.

Backwards compatibility

The specification addresses progressive enhancement in a few ways: browsers that don't implement Trusted Types should just work as-is.

Once a browser ships Trusted Types, however, the website is locked into using these policies for future APIs, and the following cases cause some concern:

More sinks

If a DOM sink on an existing API was missed at implementation time, the current policies would then apply to the newly covered sink. This is likely to be untested for these websites and to cause violations/breakage across these sites.
However, this seems limited in scope given that many DOM sinks are known, so breakage would likely be low if such an event occurred across all implementing browsers.

More types

If it turns out that more types are needed to fix XSS on the web, the current shape of the API would also apply to websites that don't implement a policy for the new type.

So, for example, suppose browser vendors decide there should be a CSS type that prevents XSS injection of CSS custom properties into the page. Under the current design, existing implementing sites would have enforcement applied without handling the new type, in which case any site using CSS variables would throw CSP violations and their use would be prevented.

This seems like a big concern, and likely something the specification could solve with some kind of policy prefixing. Separating by policy also gives the developer more flexibility and clarity about what the browser will do for each type.

Partial updates

In the current model, updating parameters on a typed value doesn't go back through Trusted Types. It seems to me that changing the protocol on a URL object should be checked against the same policy that was applied when it was created. The alternative would be to block all updates to values that have gone through Trusted Types, but that doesn't seem tenable.

I suggest there needs to be a solution here before this can ship.
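The partial-update concern above can be illustrated with the WHATWG URL class as a stand-in (TrustedURL itself wrapped an immutable string, but the same question applies to any mutable value derived from a vetted one); the policy runs once at creation, and later mutation is never re-vetted:

```javascript
// A vetting function that runs only at creation time.
function vetNavigationUrl(raw) {
  const url = new URL(raw);
  if (url.protocol !== 'https:') {
    throw new TypeError('only https: URLs pass this policy');
  }
  return url;
}

const vetted = vetNavigationUrl('https://example.com/profile');
// A special-scheme to special-scheme change is permitted by the URL
// standard, and nothing re-runs the policy:
vetted.protocol = 'http:';
console.log(vetted.href); // "http://example.com/profile"
```

Nothing in the object records that it was vetted, which is exactly the gap the comment points at.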

Advantages to Mozilla implementing for internal about pages

Having the ability to further restrict internal about pages across other vectors would be useful.

Currently we restrict the use of unsafe HTML by running it through our internal sanitizer. It seems we could change this to a throwing model when unsafe assignments are made to DOM sinks. My understanding is that this would improve performance wherever we can guarantee sanitization isn't needed, since sanitization can't be zero-cost. However, I don't think we currently have many places where we could assert this safely; perhaps developer tools would be a good candidate.

We also gain the advantage of running policies on URL and ScriptURL, which are currently all but impossible to audit.

Strategies for developers to roll-out

There appear to be a few clear roll-out strategies that help developers become XSS-free.

In codebases produced mostly as compiled output from TypeScript, Rust, Java, etc., the compiler may be able to identify values that are clearly never modified by user data. In that case the compiler can wrap sinks with a blank policy that doesn't check anything. The browser will then throw exceptions for third-party scripts that might not be within the developer's control; they could also choose to fall back to something like DOMPurify.

In codebases that use a framework such as React or Angular, the framework can implement its own policy within its UI layer. The site then gains the advantage of the policies even for code that lives outside the framework.

Sites that mostly use custom JavaScript can choose to roll out a stringent policy that strips XSS and gradually change their code to wrap Trusted Types in a way that becomes more performant over time.

Developers will also gain a simpler interface and won't be required to know all of the ~70 DOM sinks for XSS. Simply enumerating all the places in a codebase that can call these APIs can be very difficult for existing sites, and providing developers with the report-only functionality is another useful tool.
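The report-only roll-out mentioned above can be sketched with headers like the following (directive syntax as it eventually shipped; the reporting endpoint is illustrative):

```http
Content-Security-Policy-Report-Only: require-trusted-types-for 'script'; report-uri https://example.com/tt-reports
```

Violations are reported but not enforced, so a site can enumerate its offending sinks before flipping to the enforcing header.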

There have been concerns about whether developers are able to make sane, informed choices with such a type system. I'm not too sure this is a problem in itself, especially as developers who attempt to implement anything are likely to improve the status quo. If developers choose to use blank policies that do no data manipulation, simply to annotate for their own auditing, that still gives them flexibility they currently don't have in auditing these sinks.

Overall I would like for us to collaborate on these issues as I see it as a worthwhile improvement to the web.

@stitchng

stitchng commented Jul 18, 2019

There is an alternate proposal on TT from @isocroft, which can be found here as a proof of concept. His aim is to help the ongoing discussion of how TT should be rolled out in browsers and of how simple the web-developer experience should be. The idea is to "invert control", in a sense, to reduce the cognitive inertia that web developers might have with the current spec direction of TT.

So DOM sinks like innerHTML, for example, know to use a registered policy sanitizer on any (potentially unsafe) HTML string passed to them, and their behavior can be modified accordingly.

Here is code that illustrates the alternate proposal:

window.TrustedTypes.HTML.registerPolicySanitizer('alt-policy', function (TrustedType) {
    window.DOMPurify.addHook('afterSanitizeElements', function (currentNode, data, config) {
        // more code here
    });

    return function (dirty) {
        return window.DOMPurify.sanitize(dirty, {
            USE_PROFILES: { svg: true, svgFilters: true, html: true },
            ADD_TAGS: ['trix-editor'], // Basecamp's Trix Editor
            ADD_ATTR: ['nonce', 'sha256', 'target'], // for link-able elements / CSP internal <script> / <style> tags
            KEEP_CONTENT: false,
            IN_PLACE: true,
            ALLOW_DATA_ATTR: true,
            FORBID_ATTR: ['ping', 'inert'], // forbid the `ping` attribute as in <a ping="http://example.com/impressions"></a>
            SAFE_FOR_JQUERY: true,
            WHOLE_DOCUMENT: false,
            ADD_URI_SAFE_ATTR: ['href']
        });
    };
});

window.TrustedTypes.URL.registerPolicySanitizer('alt-policy', function (TrustedType) {
    return function (url) {
        // URISanity is a fictitious URL sanitizer (doesn't exist - yet)
        return window.URISanity.vet(url);
    };
});

/****=== configurations ===****/

/* blockIncludes: blocks including potentially unsafe HTML strings into the DOM hence modifying the behavior of `innerHTML` */
/*throwErrors: throws errors when the sanitizer detects unsafe HTML string content */
window.TrustedTypes.HTML.config = {
    throwErrors:true,
    blockIncludes:true,
    reportViolation:false
};

/* blockNavigation: blocks navigating to potentially unsafe URLs hence modifying the behavior of `location.href` or `location.assign()` */
/*throwErrors: throws errors when the sanitizer detects unsafe URL string content */
window.TrustedTypes.URL.config = {
    throwErrors:true,
    blockNavigation:true,
    reportViolation:false
};

/* In the example below, `innerHTML` throws a `TrustedTypesError` because of the "ping" attribute and, since `blockIncludes` is set, does not insert the HTML string into the DOM */
document.body.getElementsByName('wrapper')[0].innerHTML += '<a ping="https://www.evilattacker.com" href="#">Hello World!</a>';

/* This will also work for other DOM sinks and chokepoints too */
document.body.lastElementChild.insertAdjacentHTML();

/* The behaviour of location.assign() is likewise modified */
document.location.assign("http://my.nopoints.edu.ng/_/profiling/28167/#section1")
<meta http-equiv="Content-Security-Policy" content="trusted-types alt-policy">

The above actually proposes programmatic configurability over declarative, as it is cheaper and doesn't require the web developer to keep all 70 DOM sinks in mind while writing code based on TT. It also proposes that types should not proliferate to deal with each kind of data passed around on the front end. The use of a trusted types group (or types form, as he (@isocroft) calls it) might be more efficient going forward.

For example, URIs for scripts / dynamic resources / stylesheets can come under a single types group or types form: URL.

So, for stylesheets, there would also be:

window.TrustedTypes.CSS.registerPolicySanitizer('alt-policy', function(TrustedType){
   // code goes here
});

We would love your take on this alternate proposal on TT. The POC implementation of the above is here. You can try it out yourselves to see how it works.

@mikesamuel

@stitchng

Thanks for this. I think there's enough here that I'd like to encourage you to create your own repo with its own issue tracker. Issue threads are linear, and when discussing a full-fledged counter-proposal, it's important to keep separable issues separate.

I'm probably misunderstanding this, so apologies if parts are off the mark.

window.TrustedTypes.HTML.registerPolicySanitizer

Yes, most policies probably only have a create method for one type.
This does look very nice and clean.

Policy names are first-come, first-served, so this would only allow each whitelist entry to correspond to one trusted type.

The default policy is likely to cover more than one trusted type.

It doesn't seem to me that the ergonomic benefit of:

window.TrustedTypes.HTML.registerPolicySanitizer(name, methodDefinition)

over

window.TrustedTypes.createPolicy(name, { HTML: methodDefinition })

justifies the loss of generality.

also doesn't require the web developer to keep all 70 DOM sinks in mind

How, under the wicg/trusted-types proposal, does a developer have to do that?

It seems to me that the developer who is crafting a trusted value only has to keep in mind the type of content that they're crafting: one of (HTML, Script, ScriptURL, URL).

A policy author potentially has to think about all of (HTML, Script, ScriptURL, URL) but that's far less than 70 and, as you've pointed out, most policies other than the default policy tend to only deal with one of those.

The developer who is using a trusted value with a sink probably has one kind of sink in mind: the sink they're using.

URIs for scripts / dynamic resources / stylesheets can come under a single types group or types form : URL

This seems separable from the API for crafting policies.

The problem with having just one kind of URL is that some sinks load URLs into the current origin as code, and some load URLs into separate origins or as constrained media.

// Loads into a separate document with an origin determined by the URL (modulo javascript:)
myAElement.href = url;
// Loads constrained media
myImgElement.src = url;
// Loads content into this document.  Origin of URL not used to separate.
myScriptElement.src = url;

It seems that the first two need far less scrutiny than the third, and within Google, we've simplified migrating legacy applications a lot by just allowing any (http:, https:) URL for the first two.
Applying that same policy to the third would be disastrous.
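The three sink categories above can be sketched as a single category-aware vetting function. This is a hedged illustration of the point, not any spec'd API: all function names, categories, and the host 'static.example' are made up for the example:

```javascript
// Navigation and media sinks accept any http(s) URL; script sinks, which
// load code into the current origin, only accept a known static host.
function vetUrl(raw, category) {
  const url = new URL(raw, 'https://app.example/'); // base resolves relative URLs
  switch (category) {
    case 'navigation': // e.g. a.href: a separate document/origin
    case 'media':      // e.g. img.src: constrained media
      if (url.protocol === 'https:' || url.protocol === 'http:') return url.href;
      break;
    case 'script':     // e.g. script.src: runs as code in THIS origin
      if (url.protocol === 'https:' && url.host === 'static.example') return url.href;
      break;
  }
  throw new TypeError(raw + ' rejected for ' + category + ' sink');
}

console.log(vetUrl('https://anywhere.example/cat.png', 'media')); // allowed
console.log(vetUrl('https://static.example/app.js', 'script'));  // allowed
// vetUrl('https://anywhere.example/x.js', 'script'); // throws
// vetUrl('javascript:alert(1)', 'navigation');       // throws (non-http(s))
```

The asymmetry between the branches is the whole argument: one lax rule shared across categories would be safe for the first two and disastrous for the third.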

window.TrustedTypes.HTML.config = ...

Is the proposal to do policy configuration in JavaScript instead of as document-scoped security metadata that is loaded separate from page content?

@stitchng

stitchng commented Jul 19, 2019

I think there's enough here that I'd like to encourage you to create your own repo with its own issue tracker. Issue threads are linear, and when discussing a full-fledged counter-proposal, it's important to keep separable issues separate.

@mikesamuel Our intention was not to derail discussion of the current spec direction, but to bring to the notice of stakeholders such as yourself that a counter-proposal exists which we feel is worth discussing. However, you are correct: we need to keep separable things separate, and we will create a new repo to track discussions of this counter-proposal on our end and perhaps reference it here. The major reason for this counter-proposal, and why we brought it to this particular issue, stems from @mozfreddyb's comments above; we agree with his position, quoted below:

"Automatically sanitize within the APIs that parse strings into HTML (e.g., innerHTML).
One could also debate exposing a sanitizer API to the DOM"

We feel that exposing a sanitizer API (e.g. DOMPurify) to the DOM via the policies created/registered for the HTML trusted type, for example, would make TrustedTypes usage much more comprehensive, effective, and robust.

How, under the wicg/trusted-types proposal, does a developer have to do that?

A policy author potentially has to think about all of (HTML, Script, ScriptURL, URL) but that's far less than 70 and, as you've pointed out, most policies other than the default policy tend to only deal with one of those.

The developer who is using a trusted value with a sink, probably has 1 kind of sink in mind: the sink they're using.

The developer has to write code against each DOM sink like so:

    let TrustedPolicy = window.TrustedTypes.createPolicy('my-policy', {
        createHTML(html) {
            return window.DOMPurify.sanitize(html);
        }
    });

    document.getElementById('main-page').innerHTML = TrustedPolicy.createHTML('<span x=alert(0xed)>Content!</span>');

This means that the developer will have to remember at each point when dealing with any DOM sink that is in use in the JS codebase under development/review not to do this (When a trusted-types policy is in effect - i.e. registered via the CSP header):

    let TrustedPolicy = window.TrustedTypes.createPolicy('my-policy', {
        createHTML(html) {
            return window.DOMPurify.sanitize(html);
        }
    });

    document.getElementById('main-page').insertAdjacentHTML('afterbegin', '<span x=alert(0xed)>Hello there...</span>'); // this will throw because a raw string is used directly with this DOM sink

The developer needs constant presence of mind when using each DOM sink, since they have to use the value objects (Trusted Types) everywhere.

It doesn't seem to me that the ergonomic benefit of:
window.TrustedTypes.HTML.registerPolicySanitizer(name, methodDefinition)
over
window.TrustedTypes.createPolicy(name, { HTML: methodDefinition })
justifies the loss of generality.

We believe the ergonomic benefit of the newly proposed API for TrustedTypes (above) does not come at the cost of comprehensiveness or generality. One could argue, on the contrary, that this version of the API promotes visibility into each type's policy details.

The problem with having just one kind of URL is that some sinks load URLs into the current origin as code, and some load URLs into separate origins or as constrained media.

We agree with your assertion, as URLs for scripts serve a different contextual use case than URLs for documents. We could modify the current API signature as follows to accommodate this:

window.TrustedTypes.URL.registerPolicySanitizer('a-policy', function (TrustedType) {
    return function (url, category) { // category can be one of: 'document', 'script', 'style'
        category = category || 'script';
        // this sanitizer will vet the "url" according to the "category"
        return window.URISanity.vet(url, category);
    };
});

Given the a-policy registration above, the DOM sinks are responsible for calling the function with signature (url: String, category: String), passing the correct "url" and "category" for the DOM sink API in use, e.g.:

 
/**!
 * When the DOM sink API below is called, it
 * calls the registered URL sanitizer and passes
 * "javascript:void(prompt('Hello'));" as the `url` and
 * "script" as the `category` parameter respectively.
 */
myScriptElement.src = "javascript:void(prompt('Hello'));";

Is the proposal to do policy configuration in JavaScript instead of as document-scoped security metadata that is loaded separate from page content?

Yes, it is a proposal to do policy configuration in JavaScript (as an additional method alongside document-scoped metadata loaded via HTTP response headers). However, we do see a fault in allowing policy configuration in JavaScript: client-side code could be compromised by a third-party attacker. That is why we added a safety hatch such that policy configuration is allowed only once; trying to configure again, via JavaScript or via a <meta http-equiv="Trusted-Types" content="..."> tag, throws an error. So one can only do policy configuration once, either using HTTP response headers, e.g.:

Raw HTTP headers formatting below

Trusted-Types: type-html 'block-inclusion'; type-document-url 'block-navigation' 'report-violation'; type-script-url 'block-execution' \r\n

The above draws from work and discussions here on Per-Type Enforcement

Content-Security-Policy: trusted-types a-policy; default-src 'self' https: blob: \r\n

Report-To: { "max_age": 10476400, "endpoints": [{ "url": "https://analytics.provider.com/trusted-types-errors" }] } \r\n

or using Javascript e.g:

window.TrustedTypes.HTML.config = {
    throwErrors:true,
    blockIncludes:true,
    reportViolation:false
};

window.TrustedTypes.URL.config = {
    throwErrors:true,
    blockNavigation:true,
    reportViolation:true
};

The Report-To header applies to both cases however as it stipulates the endpoint to report to.

Finally, someone can close this issue out so fresh discussions on this counter-proposal can begin here.

@mikesamuel

Finally, someone can close this issue out so fresh discussions on this counter-proposal can begin here.

Thanks for responding and I will follow up there.
I suspect Mozillians may keep this issue open since it's a vehicle for them to collect their thoughts.

@jonathanKingston

I would like to highlight a few things here: the ongoing TAG review (w3ctag/design-reviews#198) and also the feedback from @annevk (w3c/trusted-types#176).

Largely, the issue mentioned is the brittle nature of the current design: policies aren't mandated against a feature and instead apply at the callsite. This comes at the cost of potentially missing many APIs as new features get added to the web.

Overall I think there is interest in implementing Trusted Types, but depending on WebIDL callsites may be an oversight that we end up regretting.

@koto

koto commented Aug 20, 2019

It's a fair point. After a few iterations (especially after w3c/trusted-types#204, deprecating TrustedURLs, which were the most loosely defined), we focused the API on DOM XSS prevention, explicitly removing containment - i.e., preventing the requests - from the goals. What's left is a quite limited number of vectors/sinks, roughly:

  • callsites of HTML parsers (innerHTML, DOMParser.parseFromString, document.write etc)
  • JS execution (eval, setTimeout, javascript: navigation, script node manipulation, inline event handlers)
  • loading scripts dynamically (e.g. JSONP)

The application at callsites is important; focusing on the sources of trusted values and erroring out as soon as an untrusted value is used in a DOM sink allows authors to verify the enforcement statically and avoid runtime surprises. We intend to shorten the feedback loop that current CSP has. In short, I'd rather the API keep throwing at setAttribute, the innerHTML setter, and such.

It's hard to predict how specs will evolve, but it seems we might be able to assert the restrictions we want to be respected at the beginning of those two algorithms:

If possible, I'd like those assertions to be based on the TrustedTypes IDL extended attribute, so something like:

Assert: If this algorithm is called from an IDL construct, that construct has the TrustedTypes extended attribute.

Additional sinks, e.g. for eval, setTimeout, and javascript: navigations are already explained in CSP terms.

Does that sound reasonable?

@annevk
Contributor Author

annevk commented Aug 28, 2019

I'm not entirely sure that always works and it's definitely not as simple as that, but it's probably getting too much in the weeds for the scope of this repository.

@mikesamuel

@jonathanKingston
Please excuse my ignorance. What does it mean "that policies aren't mandated against a feature?"
I understand the part about fundamental algorithms should be guarded.
Do you want type metadata to contain enough information to distinguish setters that should be guarded from those that shouldn't?
If assertions around callers (presumably statically checked) are insufficient, are you envisioning runtime enforcement, i.e. trusted types discipline for browser internals?

for the scope of this repository

Would w3c/trusted-types#176 be a better place to get into the weeds?

dbaron added a commit to dbaron/standards-positions that referenced this issue Nov 16, 2019
@annevk
Contributor Author

annevk commented Nov 18, 2019

We've had a couple more rounds of brief discussions since August, both internally and with Google. The major takeaways:

  • Some trusted type enforcement will move to the underlying primitives. This reduces the number of APIs that are impacted, but might make debugging slightly more involved. The exact details are still being worked out.
  • The scope of the API will be re-reviewed to ensure it's forward compatible with adding more primitive enforcement points.
  • There's a worry that the API is too complex for adoption by the long tail of sites and the work done on frameworks to date still puts the onus on individual site developers to take advantage. We know that frameworks, e.g., Ember.js, have raised the adoption issue with CSP in the past and it would be really good to know that that isn't a problem this time around. (It does seem more doable as enforcement is configurable through APIs.)

So I guess we're somewhere between worth prototyping and non-harmful.

I think defer might be problematic as it could have somewhat significant impact to many core APIs.

@dbaron
Contributor

dbaron commented Dec 7, 2019

So I guess we're somewhere between worth prototyping and non-harmful.

At some point we have to pick.

Does @lukewagner have an opinion here?

@BrucePerens

BrucePerens commented Nov 20, 2023

Trusted Types have been a useful security mechanism in my implementation. The types I have implemented are:

  1. One that must be initialized before the window load event. This ensures that it doesn't come from user input.
  2. One that runs the data through a sanitizer.

If a value doesn't come from one of these policies and I use it at an injection sink, Chrome will throw an error. These also work on Firefox, but the browser doesn't enforce their use.

I'm a 66-year-old systems programmer, and can't call myself any sort of web API expert. It took a few hours to implement and I used a well-accepted sanitizer, DOMPurify, without auditing that myself.

I didn't really experience any grief with the API. Other APIs are notable for the amount of craft knowledge required beyond what is visible on MDN, not this one.
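
The two policies described above can be sketched roughly as follows. The policy names and the DOMPurify dependency are assumptions for illustration, not the commenter's actual code; a no-op shim lets the sketch run where the Trusted Types API is absent (e.g., Firefox without enforcement, or outside the browser entirely).

```javascript
// Fall back to a minimal shim where trustedTypes is unavailable.
const tt = globalThis.trustedTypes ?? {
  createPolicy: (_name, rules) => rules,
};

// Policy 1: only valid before the window "load" event, which guarantees
// its inputs cannot come from later user interaction.
let loadFired = false;
globalThis.addEventListener?.("load", () => { loadFired = true; });

const earlyPolicy = tt.createPolicy("early-init", {
  createHTML: (input) => {
    if (loadFired) throw new Error("early-init policy used after load");
    return input;
  },
});

// Policy 2: runs every value through a sanitizer (DOMPurify assumed):
//
//   const sanitized = tt.createPolicy("sanitized", {
//     createHTML: (input) => DOMPurify.sanitize(input),
//   });

// With require-trusted-types-for 'script' in effect, Chrome rejects plain
// strings at sinks such as innerHTML; only policy output is accepted:
//   element.innerHTML = earlyPolicy.createHTML("<p>boot template</p>");
```

In a browser that enforces Trusted Types, `createHTML` returns a TrustedHTML object rather than a string; the shim simply passes strings through.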

@bartoszniemczura

At Meta, we see Trusted Types as a useful security mechanism as well. I believe that broader support across browsers and broader deployment across websites would be beneficial to the web platform overall. I wrote down some data points from earlier this year here.

@gregwhitworth

I posted this on Interop 2024 as well but posting across the position issues as well just in case it gets lost:

Salesforce strongly supports the Trusted Types proposal, considering the imminent regulatory changes in the Netherlands and the broader EU, as outlined in the eIDAS Regulation.

The U/PW.03 Standard of DigiD assessment demands the removal of 'unsafe-eval' from CSP, a challenge that will be mirrored across Europe. This presents critical compliance and potential reputation risks for our customers, especially in the public sector and healthcare.

Trusted Types have shown efficacy in XSS risk reduction, demonstrated by Google's successful adoption. This underlines the standard's relevance and potential impact.

@mozfreddyb
Contributor

I'm not sure if this forum is the best place to discuss this further, but I'm super curious, as someone who's extremely unaware of the web security regulation within eIDAS. Can you help point us in the right direction?

In case this gets too much into a back-and-forth discussion, I suggest this conversation be moved to the Mozilla Matrix #security channel: https://matrix.to/#/#security:mozilla.org

@mozfreddyb
Contributor

We at Mozilla have done a thorough spec review and intend to change our standards position to positive: we are convinced by the track record that Trusted Types has in preventing DOM-based XSS on popular websites (thanks to folks in this thread for providing these insights!).

That being said, there are some important concerns that need to be addressed before this can ship in a release build for all of our users. First and foremost, there is some functionality (e.g., getPropertyType, getAttributeType) that seems a bit odd, and its usage in the wild isn't clear to us. Conversations with the Google web security team confirm that there is a lack of clarity in terms of usefulness and usage on the web. Chrome has started to add UseCounters (thanks!).

We also spent some time on the Chrome implementation and found some features that are not even in the standard, which is a bit problematic (e.g., the beforepolicycreation event). We expect those features to go through standardization, or to be deprecated and removed like the methods mentioned above.
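
For readers unfamiliar with the introspection helpers mentioned above: per the spec, they report which trusted type a given sink requires. A hedged sketch, with a two-entry shim (an assumption here, standing in for the spec's full sink tables) so it also runs outside the browser:

```javascript
// Use the real API where available, otherwise a tiny illustrative shim.
const tt = globalThis.trustedTypes ?? {
  getAttributeType: (tag, attr) =>
    tag === "script" && attr === "src" ? "TrustedScriptURL" : null,
  getPropertyType: (tag, prop) =>
    tag === "div" && prop === "innerHTML" ? "TrustedHTML" : null,
};

console.log(tt.getAttributeType("script", "src"));   // "TrustedScriptURL"
console.log(tt.getPropertyType("div", "innerHTML")); // "TrustedHTML"
```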

@mozfreddyb
Contributor

mozfreddyb commented Dec 13, 2023

@otherdaniel You authored the patch that adds use counters. Can you make sure this is exhaustive? From looking just at the aforementioned changeset it seems the event handler is missing. There's likely more.

@mbrodesser-Igalia

largely a transition measure to help get an application from its legacy state to a safe-by-default state, where one can then simply use require-trusted-types-for "script" to lock it down.

@otherdaniel: wondering about "largely a transition measure". When considering trusted types policies (not the static methods like fromLiteral), are they only intended as a transition measure?
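
For context, the lock-down referred to above is a pair of CSP directives; the policy names listed here are illustrative:

```http
Content-Security-Policy: require-trusted-types-for 'script'; trusted-types sanitized early-init
```

The first directive makes script injection sinks reject plain strings; the second restricts which policy names may be created.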

mbrodesser-Igalia added a commit to mbrodesser-Igalia/trusted-types that referenced this issue Jan 17, 2024
Labels
position: positive venue: W3C CG Specifications in W3C Community Groups (e.g., WICG, Privacy CG)