RE: Security, Privacy and Accessibility use cases

To the RQTF

Just caught up on the minutes from Wednesday and I've been following your thread. One case study that may contribute to the discussion occurred in Australia recently. The government introduced a system called My Health Record, which essentially allows any medical professional to access anyone's record, so that in scenarios such as an emergency it's quick and easy to find out a person's medical history and act accordingly. Disability groups were particularly supportive initially, as constantly having to explain how a disability may affect medical treatment, or even just how best to communicate medical condition information, is understandably a big deal.

However, while Australians are generally fine with having heaps of personal data, such as social security and tax information, stored in an online government portal, and were generally in favour of medical professionals gaining access to My Health Record, people weren't convinced by the technological solutions being provided, or by the level of access the system might permit.

As a result, last year 2.5 million Australians, around 10% of the population, opted out of the system prior to launch: https://www.abc.net.au/news/2019-02-20/my-health-record-opt-outs-top-2.5-million/10830220

The privacy and security options for the system are described as follows:

"My Health Record is a safe and secure system that stores your health information. You can take further steps to control your privacy by limiting who has access to your record.

This means that:

- You can invite someone like a close friend or family member to help you manage your record
- You can decide which healthcare organisations can access your record
- You can also choose to restrict access to specific information within your record."

While this all sounds good in theory, the biggest issue at the time was that there was very little information on how this was technically implemented. Even now it's not that clear.
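
To make that gap concrete, here is one way the controls described above could plausibly be modelled. To be clear, this is a hypothetical sketch with invented names, not the actual My Health Record implementation, which as noted was never made public:

```python
# Hypothetical sketch of the record-level access controls that the
# My Health Record documentation describes. Every name and rule here
# is invented for illustration; the real implementation is not public.

class HealthRecord:
    def __init__(self, owner):
        self.owner = owner
        self.nominated_reps = set()       # people invited to help manage the record
        self.allowed_orgs = set()         # healthcare organisations granted access
        self.restricted_sections = set()  # information hidden from organisations

    def invite_representative(self, person):
        self.nominated_reps.add(person)

    def allow_organisation(self, org):
        self.allowed_orgs.add(org)

    def restrict_section(self, section):
        self.restricted_sections.add(section)

    def can_view(self, viewer, section):
        # The owner and their nominated representatives see everything.
        if viewer == self.owner or viewer in self.nominated_reps:
            return True
        # Organisations need to have been granted access, and
        # restricted sections stay hidden from them regardless.
        return viewer in self.allowed_orgs and section not in self.restricted_sections
```

Under this sketch, a clinic granted access would still be unable to see a section the patient has restricted. Whether the real system works anything like this is exactly the question that was never answered clearly.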

So, coming back to the discussion: I mention all this because I also agree with Josh. From a regulation standpoint, the rollout of My Health Record wasn't that different from how other government information or citizens' personal data is made available online, and much of what My Health Record offered was already happening, just manually instead of online. What differentiated it was a specific fear over the technical implementation of the privacy and security controls, and a lack of trust that the technical solutions would have adequate safeguards. It's particularly notable that in many cases people with disabilities would rather accept difficulties, inconvenience and possibly even mistreatment than make use of the system.

Hope this helps contribute to the conversation.

Scott.


[Scott Hollier logo]Dr Scott Hollier
Digital Access Specialist
Mobile: +61 (0)430 351 909
Web: www.hollier.info<http://www.hollier.info/>

Technology for everyone

Keep up with digital access news by following @scotthollier on Twitter<https://twitter.com/scotthollier> and subscribing to Scott's newsletter<mailto:newsletter@hollier.info?subject=subscribe>.

From: Joshue O Connor <joconnor@w3.org>
Sent: Thursday, 27 February 2020 10:13 PM
To: White, Jason J <jjwhite@ets.org>
Cc: RQTF <public-rqtf@w3.org>
Subject: Re: Security, Privacy and Accessibility use cases

White, Jason J wrote on 27/02/2020 13:00:

Thank you, Josh, for your observations.

Thanks Jason. Some comments inline.


The following additional ideas on this subject were developed during the meeting, in your absence.


  1.  It is important to acknowledge the limited role of technological measures in ensuring privacy, and that much of the responsibility lies with regulatory arrangements rather than with the design of technologies.

I'm not sure I agree with that.



  2.  However, there are cases in which it makes good sense to use technological measures to limit the disclosure of information which might reveal a person's disability, so as to make the task of ascertaining these facts without the user's consent more difficult for any party who seeks to do so.

+1, and I think we should be looking at what they are. Ultimately there are issues of personal integrity and control over your data, and indeed over how you choose to, or need to, use various types of technology. Maybe we need browsers that inform us in even greater detail when we are being tracked, sniffed, etc.? I think browsers hold the key for much of this line of defense; look at the great work Mozilla is doing, for example, with more generic privacy concerns.

There are also more benign aspects, as Janina touched on, where if feature x is enabled in the browser, an application could recognise a user's need for other accessibility-related customisations or features. But it should all be with the user's consent, and should somehow reduce the leverage available to any black-hat profiling, via privacy firewalls or fingerprint spoofers.
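
One way to sketch that consent requirement (hypothetical names only; this is not any existing browser API):

```python
# Hypothetical sketch of consent-gated disclosure of accessibility
# preferences. The names are invented and this is not an existing
# browser API; the point is simply that an application only sees
# disability-revealing settings after an explicit opt-in, and
# otherwise receives the same defaults as everyone else.

DEFAULTS = {"reduced_motion": False, "high_contrast": False, "captions": False}

class PreferenceStore:
    def __init__(self, preferences):
        self._preferences = dict(preferences)
        self._consented_apps = set()

    def grant_consent(self, app):
        self._consented_apps.add(app)

    def preferences_for(self, app):
        # Without consent, every application gets identical defaults,
        # so the response carries no fingerprinting signal.
        if app not in self._consented_apps:
            return dict(DEFAULTS)
        return dict(self._preferences)
```

The design choice worth noting is that the unconsented response is indistinguishable across users, so refusing consent does not itself become a trackable signal.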

Looking forward to further discussion.

Thanks

Josh



  3.  The indicators that may be used to infer disability status (with a high degree of probability) may be different from the indicators used to draw other inferences about the user. Hence, they should be identified and considered with a view to establishing appropriate technical controls. On this view, the distinctiveness of the disability-related issue consists in the types of information that are likely to be inadvertently revelatory.

There are additional issues that we didn't discuss, including security more broadly, and disclosure of individual needs/preferences to applications for the purpose of enhancing accessibility.


From: Joshue O Connor <joconnor@w3.org><mailto:joconnor@w3.org>
Date: Thursday, February 27, 2020 at 06:56
To: RQTF <public-rqtf@w3.org><mailto:public-rqtf@w3.org>
Subject: Security, Privacy and Accessibility use cases
Hi all,

Our discussion in RQTF yesterday about security, privacy and
accessibility has produced some further thoughts.
Janina makes the point that many of the user needs and requirements for
security and privacy are generic and broadly applicable for everyone. I
agree. There is a point of interest for us here, in that there is
arguably an imperative when it comes to protecting the rights of people
with disabilities online, or of those who may be classed as vulnerable.

I think this is where we could lead the charge, by further exploring the
potential impact of fingerprinting and how it relates to profiling
people with disabilities. If we can come up with ways of ensuring that
the integrity of the user is maintained throughout for people with
disabilities, my point is that this may lead to interesting benefits for
the 'ordinary' end user as well.
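
A back-of-the-envelope illustration of why this profiling risk is real, using invented prevalence figures rather than measured data:

```python
import math

# Back-of-the-envelope illustration of why combined signals profile
# users: each observable trait contributes -log2(p) bits of identifying
# information, and the bits add up. The prevalence figures below are
# invented for illustration, not measured data.

def surprisal_bits(prevalence):
    """Bits of identifying information revealed by one observed trait."""
    return -math.log2(prevalence)

traits = {
    "screen_reader_detected": 0.02,   # hypothetical share of users
    "200_percent_zoom": 0.05,
    "reduced_motion_enabled": 0.10,
}

total_bits = sum(surprisal_bits(p) for p in traits.values())
# Roughly 5.6 + 4.3 + 3.3 = 13.3 bits: enough, if the traits were
# independent, to single a user out of a population of about 10,000.
```

Even a handful of accessibility-related settings, individually innocuous, can combine into a near-unique profile.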

I think this is the use case (protecting the integrity of disability-related
user data) that could catch attention, headlines etc., rather
like the traveling animal one in verifiable claims.

Janina also made the interesting point that these technologies are
neither good nor bad; it is how they are used that is important. But,
to paraphrase some other clever guy, neither are they neutral.

Thanks

Josh

--
Emerging Web Technology Specialist/Accessibility (WAI/W3C)



--
Emerging Web Technology Specialist/Accessibility (WAI/W3C)

Received on Friday, 28 February 2020 04:24:18 UTC