
RE: Another CAPTCHA approach

From: Scott Hollier <scott@hollier.info>
Date: Wed, 26 Jun 2019 04:00:53 +0000
To: Janina Sajka <janina@rednote.net>, "public-rqtf@w3.org" <public-rqtf@w3.org>
CC: Michael Cooper <cooper@w3.org>
Message-ID: <DM6PR01MB43453F01BE82752973CE0DACDCE20@DM6PR01MB4345.prod.exchangelabs.com>
To Janina

I've done some digging and, as best I can tell, we've already covered the accessibility-related issues. The parts of the process that don't interact with users are not so dissimilar to reCAPTCHA v3, and the inaccessible fallbacks are well covered in our other visual CAPTCHA information, so I can't see that there's anything new here from a specifically digital access perspective. That said, if the company wants to provide its take on how the process specifically interacts with accessibility beyond what is already in the Note, we can certainly take a look, but the information provided to date doesn't go beyond what we already have IMHO.

Scott. 

Dr Scott Hollier 
Digital Access Specialist 
Mobile: +61 (0)430 351 909
Web: www.hollier.info
 
Technology for everyone
 
Australian Access Awards 2019 Call For Nominations – celebrate best practice by nominating your favourite accessible Australian website or app. It’s free!  
 
Keep up with digital access news by following @scotthollier on Twitter and subscribing to Scott’s newsletter. 

-----Original Message-----
From: Janina Sajka <janina@rednote.net> 
Sent: Wednesday, 26 June 2019 5:56 AM
To: public-rqtf@w3.org
Cc: Michael Cooper <cooper@w3.org>
Subject: Another CAPTCHA approach

Colleagues:

I rescued the following from my spam box this afternoon.

It's a proprietary CAPTCHA solution provider who believes we need to include them in our document?

Yes, I know. We're on the way to press ...

But, what do you think? I see nothing specifically about accessibility here, except that they seem to have a mostly noninteractive approach.
Otherwise, is this something we need to add? And how do we respond to them if we do NOT include them?

Best,

Janina

Samuel Tyler writes:
> Hi Michael,
> 
> Recently, I stumbled across the working draft of your document 'Inaccessibility of CAPTCHA: Alternatives to Visual Turing Tests on the Web’ and was compelled to reach out with another significant example/alternative to visual Turing tests.
> 
> I work for Arkose Labs, a fraud prevention company that operationalizes its own proprietary challenge–response mechanism for authentication. This mechanism is called Enforcement and is trusted by landmark enterprises like Electronic Arts, GitHub, Singapore Airlines, Roblox, and Twilio to definitively classify the authenticity of requests made to their web and mobile apps (oh, and in licensed console environments which we integrate with seamlessly).
> 
> At a glance
> 
> Arkose Labs is an authentication system with two key components: Telemetry and Enforcement.
> 
> 
> • Telemetry refers to our decision platform that recognizes the context, behavior, and past reputation of a request using machine learning; and
> • Enforcement refers to our proprietary challenge–response mechanism (CAPTCHA) that classifies the authenticity of unrecognized requests, and provides real-time feedback to Telemetry.
> 
> 
> Together, Telemetry and Enforcement operate with a single-minded focus to intercept inauthentic requests before they can commit fraud and scale.
> 
> 2-Minute Deep Dive
> 
> Requests that cannot be recognized by Telemetry are intercepted by Enforcement to classify them as being either authentic or inauthentic with evidenced certainty. Authentic requests are passed to the enterprise, while inauthentic requests are stopped by Enforcement, which serves as an intermediate attack surface. Validating unrecognized requests in this way strengthens Telemetry decisioning in real-time, and incrementally minimizes the number of false positives, all the while generating continuous losses for the would-be attackers. Likewise, extricating the attack surface from the enterprise has empowered Arkose Labs to neutralize attackers and their ability to retool with efficacy that has yet to be achieved in-house, and in-industry.
> 
> Now, attackers automate other challenge–response mechanisms by exploiting image processing tools that have established commercial applications, such as image classification and optical character recognition. These tools have a distinct vested interest in being able to perform the same computer vision tasks needed by attackers to make inauthentic requests at scale. Presenting a task that commercial computer vision can already perform irreversibly fixes the cost per action of abuse below attackers’ return on investment. We know that attackers rely on low operational costs afforded to them by professional image processing tools. These tools inadvertently provide a computer vision capability to correctly categorise third-party visual data, which other challenge–response mechanisms interpret as valid responses. In contrast, responses to Enforcement are generated from proprietary visual data that has no residual benefit to computer vision for training machine learning models. These secure responses divide decision points into compartmentalized functions that augment in real-time to prevent attackers from anticipating how Enforcement will behave. By removing the prospect of accurately classifying future responses, Enforcement prevents automation at scale and greatly increases the operational costs incurred by attackers.
> 
> Lastly, it’s important to flag that when requests cannot be recognised by Telemetry, they are challenged with Enforcement – NEVER blocked. Secondary screening ensures that unrecognized requests of human origin are always afforded the right to prove their authenticity. Emerging threats, such as Single Request Attacks, simply cannot be detected by artificial intelligence or stopped with bot mitigation because they blur inauthentic requests indiscernibly with authentic requests. These attack protocols are operationalized with automation tools and/or digital sweatshops, and decisioning that relies only on observable tell-tales will undoubtedly misclassify humans too. Furthermore, Enforcement has been statistically proven to achieve the same throughput as using no defense in an enterprise environment with 90M monthly active users (i.e. when inauthentic requests are successfully intercepted by Arkose Labs, the same number of authentic human users still convert).
> 
> Supporting Materials
> 
> 
> • Frost & Sullivan publish an uncommissioned write-up of Enforcement (attached);
> • White Paper: Advantages of Continuous Machine Learning in Challenge–Response Mechanisms (attached);
> • Video: Arkose Labs win ‘Best of Show’ at FinovateSpring 2019;
> • Video: Arkose Labs finalist in RSA Innovation Sandbox Contest; and
> • Arkose Labs win the MRC Technology Award.
> 
> 
> * * *
> 
> Michael – I’m not here to sell you anything and feel strongly that there’s an opportunity to share insight from/on Enforcement. Please let me know if there’s anything I can do to support your research.
> 
> Cheers,
> 
> Samuel Tyler
> Director of Product Marketing
> (415) 269-9191
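
For those curious about the mechanics: as far as I can tell, the Telemetry/Enforcement loop described above boils down to something like the following. This is an illustrative sketch only; the class and method names here are my own assumptions for discussion purposes, not Arkose Labs' actual API.

```python
# Illustrative sketch of the Telemetry/Enforcement flow as described in the
# email above. All names are hypothetical, not Arkose Labs' real interfaces.

class Telemetry:
    """Decision platform: recognizes requests and learns from challenge outcomes."""

    def __init__(self):
        self.reputation = {}  # request origin -> last known verdict

    def recognizes(self, request):
        # A request is "recognized" if its origin has an authentic reputation.
        return self.reputation.get(request["origin"]) == "authentic"

    def learn(self, request, verdict):
        # Real-time feedback from Enforcement strengthens future decisioning.
        self.reputation[request["origin"]] = verdict


class Enforcement:
    """Challenge-response step for requests Telemetry cannot recognize."""

    def challenge(self, request):
        # Stand-in for the proprietary challenge; solving it proves authenticity.
        return "authentic" if request.get("solves_challenge") else "inauthentic"


def handle(request, telemetry, enforcement):
    if telemetry.recognizes(request):
        return "pass"  # recognized: no challenge shown
    # Unrecognized requests are challenged, never blocked outright.
    verdict = enforcement.challenge(request)
    telemetry.learn(request, verdict)  # feedback loop back into Telemetry
    return "pass" if verdict == "authentic" else "reject"
```

In this sketch an unrecognized request is always challenged rather than blocked, and the outcome feeds back into Telemetry so the same origin can be recognized without a challenge next time, which matches the "challenged – NEVER blocked" claim.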




-- 

Janina Sajka

Linux Foundation Fellow
Executive Chair, Accessibility Workgroup:	http://a11y.org


The World Wide Web Consortium (W3C), Web Accessibility Initiative (WAI)
Chair, Accessible Platform Architectures	http://www.w3.org/wai/apa



Received on Wednesday, 26 June 2019 04:01:20 UTC
