Fenced frames with local unpartitioned data access #975
Yes, I think focusing this issue on "local unpartitioned access" makes sense, since that's a major piece of added functionality on top of existing fenced frames.
To give some more context on the three fenced frames TAG reviews so far: two of them were created for cases where fenced frames support use cases requiring a src that should be hidden from the embedder (examples of these use cases are Protected Audience and selectURL), and this TAG review specifically focuses on local unpartitioned data access rendering. If it is recommended to instead converge this issue with #838 to review all use cases together, we could do that.
The TAG agrees that it would be useful to enable the user-focused use case here. Specifically, the web is in the situation that sites show a list of third-party providers, each of which might or might not be able to help the user sign in, pay, or perform some other function on the main site. The user may not remember which of those providers they've stored the relevant data with, and it's frustrating to click one of these buttons only to find that it can't help you. In a browser with 3p cookies, those buttons can give an indication of what data they have access to, but as browsers phase that out, this sort of button can no longer provide these capabilities. It seems useful to try to prevent that frustration, if doing so is possible without confusing users about what identity they've already presented to the main site.

The explainer is very unclear about whether that use case is actually the fundamental goal of this proposal. If the explainer is literally correct that the goal is to "decorate a third-party widget with cross-site information about the user", we think that's very likely to be a harmful goal and incompatible with our work on privacy principles for the web.

Even if we've correctly understood the use case, we think the proposed solution in this feature makes it too hard for users to correctly infer who already has access to that information. If a user incorrectly infers that the containing site already knows their identity, they're more likely to then "agree" to share their identity with the site, violating the privacy principle on identity. The two concrete examples of when to use this feature appear to be causing this sort of mistake in practice, whether or not their designers intended the deception.

For example, Google Accounts presents a login chip on a number of websites (such as Reddit). Some versions of this chip show your Google account name, profile image, and email address. Several members of the TAG have concluded from this UI that they had already used Google to log into a site, even though they hadn't. We then clicked through the login chip, creating a connection between Google and the site that we hadn't intended or wanted. Even if it wasn't intentional on the part of the UI designers, this had the effect of reducing our autonomy. FedCM seems like a better solution for login than letting the providers embed cross-site data.

Google Pay implemented a button that presents the last four digits of a credit card, taken from the last transaction with that service, even if the transaction was elsewhere on the web. This greatly improved the rates at which people completed a purchase. However, we're concerned that, like in the login case, this increase in purchases might be happening because users incorrectly concluded that they'd already bought something from the active site, and we haven't seen UX research that explored users' beliefs in this case. Further work on Payment Handlers might be a better way to expose this sort of hint.

We don't mean to imply that Google is unusual in these practices. These techniques lead to better business outcomes for websites and their service providers, and it's perhaps unsurprising that neither group has checked what fraction of users are getting the outcomes they want. But we need that evidence before considering this UI in user agents.
And these are just the relatively benign cases: once a browser removes 3p cookies, truly malicious actors have a much stronger incentive to find ways to trick users into joining their identities (see some ideas in WICG/turtledove#990). This proposal doesn't analyze or protect against that risk.

One might argue that the proposal is ok because it just allows websites to give their users false beliefs, and a user still has to separately consent before their private information is released, but as far as we can tell, identities can be joined as soon as the user clicks, which isn't sufficient for the browser to know they've consented. Even if there were a separate consent screen, its task seems very difficult: it needs to both explain what the user's being asked to consent to, and override anything the user's been convinced to believe about what information the site already has.

We suspect that embedding information from a different context inherently enables deception about the surrounding site's knowledge. Certainly embedded sites could do the work of explaining what information their embedders already have, but we've seen that the default behavior is not to do that, and we haven't seen either what malicious actors could do with this when motivated, or a creative analysis of the worst abuse cases. If you intend to pursue something akin to this general approach, a thorough analysis of the ways in which this might be abused, and how those abuses would be mitigated, is essential.

We want to reiterate that the core use case seems valuable to solve, and we encourage you to keep trying to solve it. To do this safely, we suspect that you'll need to show the available information in browser UI, rather than inside the content area.
Could you elaborate on the understanding that a click would be sufficient for identities to be joined with the fenced frames solution?
As I understand it, something happens when someone clicks on the frame, so there is a difference between an outcome where someone clicks and one where someone doesn't. The embedding context learns whether there was a click. If the content shown can affect whether a click occurs, then the embedding context gains information. In the extreme, imagine content in the frame that guarantees a click in one case ("click here to enable this free addon") and guarantees no click in another (leaving the area blank or "click here to agree to something awful"); the information carried by that click (or its absence) is then high. Maybe it's not perfect, because people are often perverse like that, but you have created a means of exfiltration. WICG/turtledove#990 goes into more detail about the sorts of things you might do to gain information from the human involved.
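As a rough illustration of that channel's capacity (not spelled out in the comment above): if an embedder shows N such frames and the cross-site data determines which one, if any, the user clicks, a single interaction can carry up to about log2(N + 1) bits, so repeated visits or additional widgets let the leaked bits accumulate toward a full cross-site identifier.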
We can't say "identities can be joined as soon as the user clicks" if we mean "one bit leaks for each click". I'd missed that the fenced frame can't itself open a popup at an arbitrary URL: https://github.com/WICG/fenced-frame/blob/master/explainer/fenced_frames_with_local_unpartitioned_data_access.md#information-flow-and-design says the surrounding page has to decide what URL to open, and that page has no more cross-site information than it would otherwise. This still puts a lot of responsibility on whatever consent screen appears behind that popup: if users have concluded that the surrounding page already knows their cross-site identity, the consent screen has to counteract that in order to get an accurate notion of the user's intent. Is that plausible?
Responded to the linked issue WICG/turtledove#990 (comment) with the mitigations for these attacks, ranging from visited links (which are observable without user action) to the grid attack (where multiple fenced frames are shown to the user and clicking on one reveals some information).
The click could lead to many possible user experiences depending on the consumer API in the embedding context, e.g. requesting storage access, FedCM, Payment Handler, or simply creating a pop-up, so the specific UX that follows the click seems a bit out of scope for this review.
I'm requesting a TAG review of Fenced Frames with local unpartitioned data access.
Overview of proposal
There are situations in which it is helpful to personalize content on pages with cross-site data, such as knowing whether a user has an account with a third-party service, whether a user is logged in, displaying the last few digits of a user’s credit card to give them confidence that the check-out process will be seamless, or a personalized sign-in button. These sorts of use cases will be broken by third-party cookie deprecation (3PCD). Fenced frames are a natural fit for such use cases, as they allow for frames with cross-site data to be visually composed within a page of another partition but are generally kept isolated from each other.
The idea proposed here is to allow fenced frames to have access to the cross-site data stored for the given origin within shared storage. In other words, a payment site could add the user’s payment data to shared storage when the user visits the payment site, and then read it in third-party fenced frames to decorate their payment button.
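As a rough sketch of the write side (the key name and value are made up for illustration, and this assumes the existing Shared Storage set() API), the payment provider would store the hint while the user is on its own site:

```js
// Runs in a first-party context on the payment provider's own site,
// e.g. right after the user completes a transaction there.
// 'card-hint' is a hypothetical key name used only for this sketch.
await window.sharedStorage.set('card-hint', 'Visa ending in 1234');
```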
Today’s fenced frames prevent direct communication with the embedding page via the web platform, but they have network access, allowing for data joins to occur between colluding servers. Since the fenced frame in this proposal would have unfettered access to the user’s cross-site data, we cannot allow it to talk to untrusted networks at all once it has been granted access to that data. Therefore, we require that the fenced frame calls
window.fence.disableUntrustedNetwork()
before it can read from shared storage. The driving motivation for this variant of fenced frames is customized payment buttons for third-party payment service providers (as discussed in this issue), but the proposal is not restricted to payments and we anticipate many other content personalisation use cases will be found over time.
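A minimal sketch of the fenced-frame side, assuming, per the explainer, that sharedStorage.get() only becomes readable inside the fenced frame after window.fence.disableUntrustedNetwork() resolves; the key name and element id are hypothetical:

```js
// Inside the fenced frame embedded on the merchant page.

// Cut off untrusted network access first; the promise resolves once the
// frame can no longer make requests that could leak cross-site data.
await window.fence.disableUntrustedNetwork();

// Now the frame may read the value written earlier on the provider's own site.
const hint = await window.sharedStorage.get('card-hint');

// Use the value purely for local rendering; it cannot be sent to a server
// or signalled back to the embedding page.
document.getElementById('pay-button').textContent =
  hint ? `Pay with ${hint}` : 'Pay';
```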
Further details:
Security and Privacy questionnaire based on https://www.w3.org/TR/security-privacy-questionnaire/
What information might this feature expose to Web sites or other parties, and for what purposes is that exposure necessary?
Fenced frames can be viewed as a more private and restricted iframe. A fenced frame with unpartitioned data access is allowed to read unpartitioned data from shared storage in order to show personalized information to the user, e.g. the personalized payment button described in the explainer. Existing fenced frames functionality already disables communication from the fenced frame to the embedding context, but to access the unpartitioned data the fenced frame is additionally required to disable network communications, with exceptions such as Private Aggregation reports as described in the explainer.
Do features in your specification expose the minimum amount of information necessary to enable their intended uses?
Yes, see above answer for ways information exposure is minimized.
How do the features in your specification deal with personal information, personally-identifiable information (PII), or information derived from them?
Any unpartitioned data that the fenced frame reads, if it contains PII, is not exfiltrated out of the fenced frame.
How do the features in your specification deal with sensitive information?
Same answer as question 3 above.
Do the features in your specification introduce a new state for an origin that persists across browsing sessions?
No.
Do the features in your specification expose information about the underlying platform to origins?
No.
Does this specification allow an origin to send data to the underlying platform?
No.
Do features in this specification allow an origin access to sensors on a user’s device?
No.
What data do the features in this specification expose to an origin? Please also document what data is identical to data exposed by other features, in the same or different contexts.
Same answer as question 1 above.
Do features in this specification enable new script execution/loading mechanisms?
No.
Do features in this specification allow an origin to access other devices?
No.
Do features in this specification allow an origin some measure of control over a user agent’s native UI?
No.
What temporary identifiers do the features in this specification create or expose to the web?
None.
How does this specification distinguish between behavior in first-party and third-party contexts?
Fenced frames are only ever present as embedded frames; they never appear as a top-level, first-party context.
How do the features in this specification work in the context of a browser’s Private Browsing or Incognito mode?
No difference from a regular (non-private) browsing mode.
Does this specification have both "Security Considerations" and "Privacy Considerations" sections?
Yes, privacy considerations and security considerations.
Do features in your specification enable origins to downgrade default security protections?
No.
How does your feature handle non-"fully active" documents?
Based on https://www.w3.org/TR/design-principles/#support-non-fully-active:
What should this questionnaire have asked?
N/A