Re: When is "phone home" ok, if ever?

First, a practical observation: in disaster situations, a "phone home" that
accomplishes a verification upon which access depends is probably risky,
because disasters and guaranteed internet connections don't always go
together. I wouldn't want firefighters who are trying to pull me out of a
high-rise in Myanmar after an earthquake to have to be verified by phone
home over the internet before they can get through an access gate to reach
me. On the other hand, a "phone home" that simply notifies the issuer (or a
different party that the issuer designates) on a best-effort basis (e.g.,
so the first responders can be counted) might be okay. The intermediate
position -- best-effort prior verification, denial if verification works
but returns a "not OK" result, and a default to allow access if
verification is impossible -- can be gimmicked by attackers (anyone who
wants the default-allow outcome just has to make sure verification can't
complete, say by blocking connectivity) and may be unwise.
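
(To make those three postures concrete, here is a rough sketch of the
decision logic in TypeScript. Every name in it is made up for
illustration; nothing here is a proposal for an actual API.)

    type VerifyOutcome = "ok" | "not-ok" | "unreachable";

    // Posture 1: access depends on a successful phone-home (fail-closed).
    // Risky in a disaster, because "unreachable" means nobody gets in.
    function failClosed(outcome: VerifyOutcome): boolean {
      return outcome === "ok";
    }

    // Posture 2: best-effort notification only. Access is decided locally;
    // the phone-home is logged for counting/audit but never gates entry.
    function notifyOnly(localCheckPassed: boolean): boolean {
      return localCheckPassed;
    }

    // Posture 3: the intermediate position (fail-open). An attacker who
    // expects a "not-ok" answer only needs to block connectivity to land
    // in the "unreachable" branch and be allowed through.
    function failOpen(outcome: VerifyOutcome): boolean {
      return outcome !== "not-ok";
    }

The point of the sketch is just that posture 3's denial path only fires
when the network cooperates, which is exactly the condition an attacker
controls.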

(Aside: a possible response to this is "well, we don't have to use the
internet for the phone home; we could use LoRa or LEO satellites or
shortwave radio" -- to which I would say, "Yes! That's why I argued that VC
API was setting its sights too low by defining verification interactions
only over HTTP. And it's why DIDComm has always described proving interactions
without reference to HTTP constructs. But it seems that this is a minority
position?")

Regarding the observation that the imagined scenarios all presuppose
careful prior consent, I would say that it's important to keep prior
consent *in a use case* distinct from prior consent *in a credential*. Part
of the value prop of VCs is supposed to be that the holder can use them
with arbitrary verifiers. This means that although
proof-of-firefighter-hood might carry prior consent for phone-home during a
crisis response, it might be wrong to assume it also carries prior consent
for phone-home when applying for a discount on auto insurance.
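
(For concreteness, a ping-back of the sort Manu describes below might
carry something like the following. The field names and types are purely
illustrative -- I'm not proposing them as an actual interface.)

    // Hypothetical shape of a verifier-initiated ping-back. The scanned
    // VC, the scanning party's own VC, and optional geocoordinates are
    // the pieces Manu's message mentions; everything else is invented.
    interface PingBack {
      scannedCredential: object;               // the first-responder VC presented
      verifierCredential?: object;             // VC for the scanning party (auditability)
      location?: { lat: number; lon: number }; // optional, for rescue/audit purposes
      scannedAt: string;                       // ISO 8601 timestamp
    }

    // Best-effort notification: failure to reach the endpoint never
    // blocks access, so a dead network can't lock rescuers out. The
    // endpoint URL would presumably come from the credential itself
    // (the "pingback location" Manu mentions).
    async function sendPingBack(endpoint: string, payload: PingBack): Promise<void> {
      try {
        await fetch(endpoint, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(payload),
        });
      } catch {
        // Swallow errors: this is notification, not verification.
      }
    }

The try/catch is the design point: the ping-back is fire-and-forget,
which keeps it in the "notify" category rather than the "verification
gate" category I worried about above.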

On Fri, May 2, 2025 at 2:42 PM Manu Sporny <msporny@digitalbazaar.com>
wrote:

> Starting the weekend off with a charged question that I expect this
> community to have some strong feelings about. :)
>
> As we presented earlier this year, some of us are working with first
> responders (firefighters, emergency medical technicians, law
> enforcement, and support personnel) to deploy verifiable credentials
> for large scale disaster response scenarios.
>
> The first and simplest use case is a "digital badge" for a first
> responder that identifies who they are to security personnel who are
> trying to secure a particular area during a wildfire, earthquake,
> hurricane, or other large-scale disaster. It can also be useful for
> citizens who need to check the credentials of a first responder who
> might need to enter their property or their home.
>
> For this use case, some of these first responder organizations are
> wondering if we can implement a form of "phone home", with the consent
> of the responder, to "check in" when their badge is verified. There
> are even requests for an "active tracking beacon" for firefighters
> going into dangerous areas that might need to be rescued themselves if
> they get into trouble.
>
> So, the "phone home" here is opt-in/consent-based and viewed by both
> the responders and their agencies as a safety feature that could save
> lives. This feature would exist on the physical badges (VC barcodes)
> and digital badges (VCs). It could probably be implemented as a
> ping-back mechanism, where a verifier scanning the badge would call an
> HTTP endpoint with the VC that was scanned and possibly geocoordinates
> (for rescue/audit purposes) and a VC for the entity performing the
> scan (for auditability purposes). It could be "turned off" by choosing
> NOT to selectively disclose the pingback location (but that would
> probably only work in the digital badge version).
>
> Now, clearly, this sort of functionality is something we've
> collectively warned against for a very long time. Implementing this
> for something like a driver's license is a horror show of potential
> privacy and civil liberty violations. However, implementing this for a
> first responder that's running into a wildfire to save a town feels
> different.
>
> If we think this is a legitimate use case, standardizing it might
> allow digital wallets to warn people before presentation of the
> digital credential. So, rather than organizations implementing this
> anyway in a proprietary way where the "phone home" is hidden,
> standardizing it would be a way of announcing the privacy danger if the
> badge is used w/o consent or selective disclosure.
>
> So, some questions for this community:
>
> 1. Is this a legitimate use case?
> 2. Is this sort of feature worth standardizing?
> 3. Is there a more privacy-preserving way to accomplish this feature?
> 4. Should there be wallet guidance around this feature? If so, what
> should it be?
> 5. Should there be verifier guidance around this feature? If so, what
> should it be?
> 6. What horrible, civil-liberties-destroying outcome are we most afraid of
> here?
>
> Interested to... oh, wait a sec... *puts on a flame retardant suit*...
>
> Interested to hear everyone's thoughts. :)
>
> -- manu
>
> --
> Manu Sporny - https://www.linkedin.com/in/manusporny/
> Founder/CEO - Digital Bazaar, Inc.
> https://www.digitalbazaar.com/
>
>

Received on Friday, 2 May 2025 21:40:39 UTC