We (@GeetikaGopi, @amad-person, @omkhar, @gfanti) are a team of researchers from Carnegie Mellon University and OpenSSF. In our recent study, to appear at SOUPS 2024, we found that a significant fraction of DPGs respond to question 9(a) on privacy with answers that are incomplete or misleading (more details in the paper). For example, the level of detail that many DPGs provide in response to 9(a) is insufficient to understand much about their privacy posture, which can make it hard to tell whether PII is being handled properly. We would therefore like to discuss updating the privacy requirements for being classified as a DPG.
Starting point: A proposed solution
It may not be scalable or feasible for the DPGA to meaningfully evaluate the privacy posture of DPGs, given that many DPGs consist of large and complex codebases. In our paper (Section 6.2.1), we propose an alternative architecture for privacy evaluation of DPGs. Roughly, the proposed process would proceed as follows:
DPGs would complete and submit a standardized privacy assessment developed by an external body; for example, privacy impact assessments (PIAs) are widely used.
DPGs could either complete this assessment on their own (self-attestation) or submit a certified assessment from third parties approved by the DPGA (e.g., many consulting firms routinely conduct PIAs today).
DPGs would submit the documentation of their privacy assessment along with their DPG application.
The DPGA would not evaluate the quality of the privacy assessment beyond ensuring a good-faith response; it would simply post the assessment information on the DPGA website alongside the remaining DPG standard responses. (A minimal sketch of what such a good-faith check might look like follows this list.)
Adopters would evaluate for themselves whether a DPG meets their privacy requirements; the privacy assessment would give them a summary from which to make an initial judgment.
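As a concrete illustration, here is a minimal sketch of what a machine-readable privacy-assessment stanza and the DPGA's good-faith completeness check could look like. Everything here is an assumption for illustration: the field names, the `REQUIRED_FIELDS` set, and the `passes_good_faith_check` helper are hypothetical and are not part of the DPG standard or any existing DPGA tooling. The point is that such a check verifies only that the required fields are present and non-empty; it never judges the quality of the underlying assessment.

```python
# Hypothetical sketch of a privacy-assessment submission and a
# lightweight "good-faith" completeness check. All field names and
# helpers below are illustrative assumptions, not DPG standard fields.

REQUIRED_FIELDS = {
    "assessment_type",   # e.g., "PIA"
    "attestation",       # "self" or "third-party"
    "assessor",          # who performed the assessment
    "date_completed",    # ISO date string
    "pii_collected",     # list of PII categories (may be empty)
    "report_url",        # link to the full assessment document
}

def passes_good_faith_check(submission: dict) -> tuple[bool, list[str]]:
    """Return (ok, problems).

    Checks only that every required field is present and non-empty;
    it deliberately does NOT evaluate the assessment's quality.
    """
    problems = [
        field for field in sorted(REQUIRED_FIELDS)
        if field not in submission or submission[field] in ("", None)
    ]
    return (not problems, problems)

# Example submission (fictional values).
example = {
    "assessment_type": "PIA",
    "attestation": "third-party",
    "assessor": "Example Privacy Consultancy",
    "date_completed": "2024-05-01",
    "pii_collected": ["email address", "phone number"],
    "report_url": "https://example.org/pia-report.pdf",
}

ok, problems = passes_good_faith_check(example)
print("good-faith check passed" if ok else f"missing/empty fields: {problems}")
```

Under this sketch, the DPGA's role stays purely mechanical: a submission that fails the check is returned to the applicant, and one that passes is published as-is for adopters to evaluate.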
We believe this process has a few desirable properties:
It provides adopters with a more nuanced evaluation of DPGs’ privacy postures, compared to the current responses to 9(a).
It does not require the DPGA to decide what privacy features are important.
It does not require the DPGA to evaluate DPGs’ privacy postures.
It makes use of existing, widely-adopted privacy evaluation tools and ecosystems.
This is of course not the only possible process, and as always, there are tradeoffs. We would be happy to discuss this issue (both the underlying problem and potential solutions) further.
+1 @gfanti, I wholeheartedly endorse this proposal. I agree with the proposed steps as an interim measure, but ultimately I would like to see the DPGA harmonize on a set of common privacy controls.