Re: [webauthn] <new proposal> Extending WebAuthn Protocol for Remote Authentication (#1580)

Sorry, I may not have expressed myself clearly. We have worked with Qualcomm to implement a demonstration in which camera data is obtained within the TEE boundary. Inside the Trusted Execution Environment we can process this data, e.g. sign or encrypt it. The camera data is stored in TEE memory from the beginning, so applications in the REE cannot access or modify it. Based on this behavior, we can guarantee that the image data comes from the camera and has not been injected externally. Since an attacker cannot inject a forged image into the stream sent to the remote server for verification, the only remaining attack is to re-capture the screen or a printout, and that can be detected with AI-based anti-spoofing algorithms.
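To make the idea concrete, here is a minimal sketch (not our actual implementation) of a TEE-held key signing each camera frame so the server can check its origin. HMAC with a shared secret stands in for the TEE's real asymmetric attestation key, and all names are hypothetical:

```python
import hashlib
import hmac
import os

# Hypothetical secret provisioned inside the TEE; the REE never sees it.
# A real TEE would use an asymmetric key whose public half is certified
# by the device vendor, not a shared HMAC secret.
TEE_KEY = os.urandom(32)

def tee_capture_and_sign(frame: bytes, nonce: bytes) -> tuple[bytes, bytes]:
    """Inside the TEE: bind the raw camera frame to a server nonce and sign it."""
    tag = hmac.new(TEE_KEY, nonce + frame, hashlib.sha256).digest()
    return frame, tag

def server_verify(frame: bytes, nonce: bytes, tag: bytes) -> bool:
    """On the server: recompute the tag; a frame swapped in the REE fails."""
    expected = hmac.new(TEE_KEY, nonce + frame, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce = os.urandom(16)  # server-issued challenge, prevents replay
frame, tag = tee_capture_and_sign(b"raw-camera-frame", nonce)
print(server_verify(frame, nonce, tag))              # genuine frame accepted
print(server_verify(b"injected-frame", nonce, tag))  # injected frame rejected
```

The server-issued nonce ties the signature to one capture session, which is what blocks replaying an old genuine frame.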
We make this proposal to prevent injection attacks: we want to be sure the data really originates from the user's device. Today's devices carry an increasing variety of sensors, while the cost of falsifying sensor data is very low. Common attacks include fake locations, fake health data, fake voice, and fake photos. In many scenarios we have to determine whether the data coming from the device is genuine.
From a security perspective, if the browser simply collects the data to be signed internally, I guess that also makes sense, in a rather weak way. That would be similar to SafetyNet attestation on the Android platform: the operating system guarantees that the browser has not been tampered with. We would then need to rely on a system-level service (e.g. Apple attestation) to tell us whether we are running in a secure environment. That is helpful against general attackers. If you want to protect against higher-level attacks then, as you rightly say, we need to make sure the whole camera stack is secure. Different levels of security can be achieved with different approaches.
Finally, we are sorry for the confusing wording. Verification in general requires a registration step. For remote face verification, for example, the original face data must be enrolled during registration so that face matching can be performed during verification. In the documentation we only describe how to ensure the authenticity of the client data; naturally, data authenticity is the basis of remote face verification. Local authentication emphasizes the authenticity of the result, while remote authentication emphasizes the authenticity of the data, which in my opinion is essentially the same thing. Please let me know if anything is still unclear.
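The registration/verification split can be sketched as follows. The `embed` function is a hypothetical stand-in for a real face-recognition model, and the threshold is illustrative only:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def embed(face_image: str) -> list[float]:
    """Hypothetical embedding; a real system would run a neural network
    over the TEE-authenticated image instead of this toy mapping."""
    return [1.0 + ord(c) for c in face_image[:8]]

enrolled: dict[str, list[float]] = {}

def register(user: str, face_image: str) -> None:
    """Registration: store the user's reference embedding."""
    enrolled[user] = embed(face_image)

def verify(user: str, face_image: str, threshold: float = 0.9) -> bool:
    """Verification: match a fresh, authenticity-checked capture
    against the enrolled reference."""
    return cosine_similarity(enrolled[user], embed(face_image)) >= threshold

register("alice", "alice-face-capture")
print(verify("alice", "alice-face-capture"))  # same capture matches
```

The point of the sketch is the flow, not the matcher: enrollment fixes a reference, and every later verification compares a fresh capture (whose authenticity the TEE guarantees) against it.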

GitHub Notification of comment by thedreamwork

Received on Friday, 19 March 2021 13:44:03 UTC