- From: Martin Thomson <notifications@github.com>
- Date: Tue, 30 Jul 2024 15:58:11 -0700
- To: w3ctag/design-reviews <design-reviews@noreply.github.com>
- Cc: Subscribed <subscribed@noreply.github.com>
- Message-ID: <w3ctag/design-reviews/issues/838/2259337750@github.com>
The core problem we're concerned about is that the use cases are in some ways also abuse cases.

The [Google Pay example](https://developers.googleblog.com/en/updated-google-pay-button-increases-click-through-rates/) is a good illustration here. No doubt the Google Pay team believes that this is an unqualified improvement to their product. They show that more people buy things when the Google Pay button displays the last four digits of the card number. If we think of the feature from the perspective of making shopping more pleasant and streamlined, by showing people that payment through this button uses a service that is known to them, it has real upsides. People presented with information from an actor they trust (if they do in fact trust Google Pay services, which seems likely, at least to some extent, if they've already added their card info to it) might then feel reassured about the handling of their information.

However, that example also demonstrates a subtle misrepresentation. People might reasonably believe that they have made a purchase on this website before, because the site appears to already have their payment information, when this is not necessarily true. In the case where the user *has not* bought anything from the site before, pressing the pay button and proceeding through the payment flow reveals a lot of information about the user that the site did not previously have. Yet the appearance of the button leads the user to believe that the site already has this information, and that the net increase in the information the site holds about them is zero. That makes the site seem more trustworthy in their eyes. Using that misrepresentation to nudge behavior is the very definition of a deceptive design pattern ([Privacy Zuckering](https://en.wikipedia.org/wiki/Dark_pattern#Privacy_Zuckering), as it happens).

The same arguments apply to federated login buttons. Google accounts also presented similar UX for logins, showing a user icon and account name on sites that people had not visited before. Blocking third-party cookies disabled that feature; this use of fenced frames would re-enable it. The effect is the same sort of misrepresentation, and may result in users revealing information to sites that they otherwise would not have.

We are also concerned that the abuse scenarios here have not been given due consideration. The potential for abuse by a good actor seems pretty strong, but the potential for this capability to be exploited by a bad actor is far worse.

-- 
Reply to this email directly or view it on GitHub:
https://github.com/w3ctag/design-reviews/issues/838#issuecomment-2259337750
Received on Tuesday, 30 July 2024 22:58:14 UTC