Consider differences in trusted interfaces #5

Open
johnpallett opened this issue May 24, 2019 · 5 comments

Comments

@johnpallett

Per immersive-web/webxr#424 (comment) the method for displaying a trusted interface may vary depending upon the user agent and form factor.

For cross-origin navigation this is of concern because it could lead to user discomfort during navigation. Presumably the user would need some indication, from a trusted interface, of which origin they were visiting; otherwise any origin could pretend to be TrustedSite.com (even if the origin is actually BadSite.com) and solicit sensitive information.

The mechanism for providing this interface may vary - for example, on some form factors the user may be required to remove the HMD or exit an immersive session to see the trusted interface.

@AlbertoElias

Absolutely. I think in the general discussion around navigation we agreed that there needed to be some kind of trusted interface, as you put it, and that each UA would experiment with different solutions, always putting users' privacy first.

@johnpallett
Author

Understood that UAs could experiment, but I'm curious whether there is a proposal that would ensure site developers could add cross-origin navigation and get a reasonably predictable user experience across platforms.

From immersive-web/webxr#424 (comment): Different platforms may give the user agent different options for how to present a trusted interface, but those differences can result in significantly different user experiences:

  • On an HMD with a handheld input device, a user agent might reserve certain inputs (e.g. a home button) as an affordance for the user to confirm that a user interface can be trusted;
  • On a standalone HMD without a handheld input device, a user agent might require that the immersive session be suspended in order to present a trusted interface;
  • On a desktop or mobile device without an HMD, similar to HTML5 fullscreen, the user agent may have no alternative but to suspend/resume the session in order to present a trusted interface (one analysis for HTML5 fullscreen is here);
  • On a tethered HMD, a user agent might preserve the session, but may or may not require the user to remove the headset in order to present a trusted interface on the desktop;
  • On other form factors such as CAVE systems there may be additional input constraints or session-persistence requirements that limit how a trusted interface can be presented in other ways.

@AlbertoElias

I definitely agree that there should be a spec around trusted interfaces at some point, but it might be too soon to determine how it should be done in all cases. I feel it would be better if browsers gathered data for a few months first, and used that to determine what works best.

@coderofsalvation

coderofsalvation commented Dec 22, 2020

Although I do understand the concerns, I agree with @AlbertoElias.
I'm afraid that, with current user numbers, it's still a theoretical problem rather than a practical one (we don't have data).
Two practical things I can think of now are:

  1. Consent in the form of a UA flag (chrome://settings) which unlocks hassle-free link traversal for WebXR (+ vrdisplayactivate).

  2. Piggyback on the solution found in browser extensions: manifest.json. WebXR developers can limit the untrusted effect by specifying allowed targets in their https://mywebxrapp.com/manifest.json:

{
  "short_name": "My app",
  ...
  "trusted_interfaces": {
    "version": 1,
    "vrdisplay": {
      "matches": ["https://otherwebxrapp.com"]        // adding 'https://*/*' would trigger a consent screen by the UA
    },
    "fullscreen": {
      "matches": ["https://otherwebxrapp.com/*"]      // adding 'https://*/*' would trigger a consent screen by the UA
    }
  }
}

The manifest.json is already parsed by browsers anyway, and can also be parsed by WebXR developers, so there's common ground for innovation.
This way developers can also choose what to go with (the UA solution or their own).
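To make that concrete, here is a minimal, purely hypothetical sketch of the developer-side check. The trusted_interfaces field is not part of any manifest spec, and fetching your own manifest.json is just one way a page could consult it before following a WebXR link:

```ts
// Hypothetical sketch only: "trusted_interfaces" is not standardized;
// the field names below come from the example manifest above.
interface TrustedInterfacesManifest {
  trusted_interfaces?: {
    vrdisplay?: { matches?: string[] };
  };
}

// Before following a WebXR link, check whether the target origin is listed
// in the current app's own manifest.json.
async function canTraverseTo(targetUrl: string): Promise<boolean> {
  const response = await fetch("/manifest.json");
  const manifest: TrustedInterfacesManifest = await response.json();
  const patterns = manifest.trusted_interfaces?.vrdisplay?.matches ?? [];
  const targetOrigin = new URL(targetUrl).origin;
  // 'https://*/*' means "any origin" and, in the proposal, would trigger a UA consent screen.
  return patterns.some(p => p === "https://*/*" || new URL(p).origin === targetOrigin);
}

// Usage: navigate directly only if the target is listed; otherwise defer to the UA.
async function followXRLink(targetUrl: string): Promise<void> {
  if (await canTraverseTo(targetUrl)) {
    window.location.href = targetUrl;
  } else {
    console.warn("Target not listed in trusted_interfaces; a UA consent flow would be needed.");
  }
}
```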

I'm aware that I might be overlooking a lot of things here.
However, as a WebXR developer, IMHO semantic linking of WebXR apps is so essential that it seems odd to wait for the evolution of cross-UA consent-screen specs.
Cross-UA consent screens are such a rabbit hole; just look at their evolution for 2D websites.
I'd rather have vrdisplayactivate link traversal available behind a flag in the meantime, and act upon its data afterwards.
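For context, this is the (now-deprecated) WebVR 1.1 pattern the flag idea refers to: a destination page listens for vrdisplayactivate and resumes presentation, assuming the UA chooses to fire the event after link traversal. A rough sketch:

```ts
// Context sketch of the WebVR 1.1 pattern (deprecated API, shown for illustration only;
// WebXR replaced it). The destination page listens for vrdisplayactivate and
// re-enters presentation if the UA fires the event after link traversal.
const canvas = document.createElement("canvas");
document.body.appendChild(canvas);

window.addEventListener("vrdisplayactivate", (event: any) => {
  const display = event.display; // VRDisplay carried by the VRDisplayEvent
  if (display && !display.isPresenting) {
    // requestPresent is normally gated on user activation; the point of the
    // proposal is that the UA could treat this event as that activation.
    display.requestPresent([{ source: canvas }]);
  }
});
```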

@cabanier
Member

Consent screens can be spoofed by malicious sites, so they are not a solution. There was a proposal in the past to have a personalized consent screen, but research indicated that people just ignore it and click through.

I think we could make a case for same origin to work.
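As a rough illustration (not from the thread), the check itself is straightforward: a page or UA could compare origins before deciding whether in-session navigation is allowed or a trusted interface is required.

```ts
// Illustrative only: the kind of same-origin check a page or UA could apply before
// deciding whether in-session navigation is allowed without a trusted interface.
function isSameOriginNavigation(targetUrl: string): boolean {
  const target = new URL(targetUrl, window.location.href);
  return target.origin === window.location.origin;
}

// Example: a relative link stays same-origin; an external one would need the
// trusted UA flow discussed above.
isSameOriginNavigation("/rooms/lobby.html");       // true
isSameOriginNavigation("https://badsite.example"); // false
```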
