Scope for PATCG Privacy Principles #36
This may seem basic/fundamental, but it seems important to include explicitly in some framing:

Context and reporting: The context in which an advertisement appeared must be reported to, and available to, the advertiser. (This is related to the "harmful use" principle: most advertising-related harms to users are side effects of obfuscating context in reports to the advertiser.)
I think "harmful use" can be a broad title, as there are harms that aren't even privacy harms, as dmarti has noted with an example. (I don't think reporting where I saw an ad back to the advertiser helps protect my privacy at all; it might intrude on it. But I think Don is getting at a separate category of harms where advertising can be used to financially support harmful activities.) One narrower category might be distress and intrusion, covering harmful uses where ads are abused to cause harm just by their presentation: bothering people, following them around with distressing content, showing particular content to people who are vulnerable in some way, etc.
On the scoping question, I think consolidation is an example of a privacy concern that competition issues might implicate, but competition is relevant beyond privacy, and I wouldn't expect this document to address all potential impacts of privacy protections, including competition impacts or impacts on every potential business.
@npdoty My suggested principle would be reporting ad locations, not (location+user) matches. I agree that reporting location+user is a likely privacy violation. There just needs to be a principle that advertising contexts must be reported to the advertiser. Reporting the location to the advertiser helps the user by helping participants in the advertising market to enforce norms. A more honest advertising market is less likely to reward companies for perpetrating privacy and other harms to users. This applies even if you take the position of being totally neutral on the value or harm of any particular context. A good example is adware/spyware browser extensions that insert ads on Wikipedia.
I'm thinking it might be worthwhile to have a general principle that user-identifying data should be abstracted out of advertising data so that the latter is not linkable to users; something along the lines of: User data applied to, or generated by, advertising should not be linkable to data outside the advertising context and should provide no information about a specific person. Any user data exposed in advertising use cases or generated in an advertising context should be rendered unlinkable to a user, either directly or indirectly, and unusable outside of the advertising context to which it applies, through the use of aggregation, redaction, mutation, or some combination of these. A high-level principle like this would simplify the trust model and reduce the potential for harm caused by repurposing of advertising data.
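As a very rough illustration of the aggregation idea above (not a proposed API — the function name, event shape, and threshold are all assumptions for this sketch), a report could count impressions per context, drop user identifiers entirely, and suppress contexts seen by too few distinct users:

```python
# Hypothetical sketch of the "unlinkable, aggregated reporting" principle.
# Raw impression events carry a user identifier; the advertiser-facing
# report does not. K_MIN is an illustrative suppression threshold.

K_MIN = 3  # suppress contexts seen by fewer than K_MIN distinct users

def aggregate_report(events):
    """Aggregate (user_id, context) events into per-context counts,
    dropping user identifiers and suppressing small buckets."""
    users_per_context = {}
    for user_id, context in events:
        users_per_context.setdefault(context, set()).add(user_id)
    return {
        ctx: len(users)
        for ctx, users in users_per_context.items()
        if len(users) >= K_MIN
    }

events = [
    ("u1", "news.example"), ("u2", "news.example"), ("u3", "news.example"),
    ("u1", "blog.example"), ("u2", "blog.example"),
]
print(aggregate_report(events))  # {'news.example': 3}
```

The advertiser still learns where its ads ran (the context-reporting goal above), but nothing in the output is linkable to an individual user, and rarely-seen contexts are withheld.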
We should add Transparency & Trust here: What can the user verify regarding their privacy? Can they verify when the system fails and privacy leaks? Whom do they have to trust, and can they make meaningful choices regarding whom they trust? (This latter point is similar to the Security / Trust Model in #36 (comment).)
Maybe Transparency, Security, and Trust Model could all be listed separately.
I agree with Don that this sort of question quickly ends up intertwined with privacy issues (and not just business goals). The problem that we've encountered in trying to formulate something like this in the past is that "context" can easily be specific to a single user. Obviously my signed-in social media feed is unique to me, so reporting that an ad appeared in that context could be tantamount to reporting what person it was shown to. I would be very happy if we could find a way to pull those apart from each other, so that we could believe there was a real difference between the person who sees something and the context in which it's seen. From the browser-implementer POV this has been difficult, but maybe from the principle-writer POV it will be easier. Some aspiration to "Separate the user from the context" would make me very happy.
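One way the "separate the user from the context" aspiration is sometimes sketched is by coarsening the reported context before it can enter any report: a full page URL (which, as noted above, may be unique to one signed-in user) is reduced to a user-independent label. The mapping rules below are assumptions for illustration only, not anyone's proposed design:

```python
from urllib.parse import urlparse

# Hypothetical sketch: map a page URL to a coarse context label by keeping
# only the host and the first path segment, dropping query strings and
# deep, potentially per-user paths.

def coarse_context(url: str) -> str:
    parsed = urlparse(url)
    path = parsed.path.strip("/")
    first_segment = path.split("/")[0] if path else ""
    return f"{parsed.netloc}/{first_segment}" if first_segment else parsed.netloc

print(coarse_context("https://social.example/feed/u123?session=abc"))
# social.example/feed
print(coarse_context("https://news.example/"))
# news.example
```

A real design would need far more care (e.g. path segments or hostnames can themselves encode a user identity), but it shows the shape of pulling the person apart from the place.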
@npdoty @grahammudd it looks like we're collecting principles here. Should we spin up, or add to, an actual doc so we can make PRs etc.?
Any opinions on an "Explainability" principle? Obviously I don't think we should try to solve the ML explainability problem in general, or ask the ad industry to do so. But we could opine on a principle that prefers a situation in which e.g. it's possible for a person to understand what information of theirs was used to make a decision about what ad they saw. |
@michaelkleber Explainability should also include telling the user which party holds the data. (If the disclosure states, "you are receiving this ointment ad because you are likely to have a fungal infection," the user is going to want to know who has that information.)
@dmarti I agree, though I want to avoid over-promising — for example, we shouldn't make it seem like the browser can tell you everyone who has some piece of information. But I think this issue isn't supposed to be about hashing out the details of any particular principle, but rather the scope of what our principles ought to talk about :-). As @ShivanKaul said, maybe let's take it to a doc where we can start that part of the work.
A lot of those concepts (consent, controls, profiling...) are also addressed in the TAG document https://www.w3.org/TR/privacy-principles/ |
I assume this document is building on the TAG document, fleshing out how to apply the overlapping concepts in the specific domain that PAT CG is working on?
They shouldn't, anyway.
The TAG doesn't reject or accept things; it has no gating power.
I don't think the group should wait for the TAG document to be "finalized". There's no reason why work on both documents can't proceed in parallel.
The goal of this issue is to iterate on and eventually arrive at an agreed-upon outline of the scope of the privacy principles this group has committed to developing. With scope in place, we can begin drafting principles that align with each area of focus.
Most of these scope dimensions were discussed in our 3/13 meeting. In no particular order, our principles should address:
Feedback appreciated.