- From: Olli Pettay <Olli.Pettay@helsinki.fi>
- Date: Wed, 03 Nov 2010 10:13:17 +0100
- To: public-xg-htmlspeech@w3.org
Hi,

I think there should be a requirement to support speech UI in offline webapps. In practice that means local speech engines, so offline webapps may not support all kinds of speech interactions, but basic things could still be possible.

Another possible requirement is that webapps should not know which exact speech engine is installed locally, meaning the vendor, version, etc. There are a few reasons for this. First, webapps should just work everywhere, with no browser- or speech-engine-specific hacks. Second, exposing the exact vendor/version would help attackers target that particular system. (I assume many speech engines are written in C/C++ or other memory-unsafe languages and may not be properly fuzz tested. An implementation in a memory-safe language may still have other security bugs, of course; I basically want to make this new attack vector a tiny bit harder for attackers.) Third, it would avoid adding yet another way to fingerprint the user.

Also, if the browser doesn't use local speech engines, the user should know about it. That might be something we can't require, but rather something UA implementors need to take care of, since it also affects the browser's own possible speech UI, not only webapps. (I don't want any speech data sent to some random server without knowing about it.)
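To make the "just works everywhere" point concrete, here is a minimal sketch of capability-based detection, assuming a purely hypothetical Recognizer interface and factory (we obviously haven't defined any actual API yet). The webapp only checks whether the capability exists; it never learns which vendor or version provides it, and it behaves the same whether the engine is local or remote:

  // Hypothetical names for illustration only; no API has been agreed on.
  interface Recognizer {
    start(): void;
    stop(): void;
  }

  // The UA hands the page a factory (or nothing), never a vendor/version string.
  type RecognizerFactory = (() => Recognizer) | undefined;

  function startDictation(createRecognizer: RecognizerFactory): void {
    if (!createRecognizer) {
      // Capability absent: degrade gracefully, no engine-specific hacks needed.
      console.log("Speech input unavailable; falling back to keyboard entry.");
      return;
    }
    // Works the same whether the engine is local (offline) or a remote service.
    const recognizer = createRecognizer();
    recognizer.start();
  }

-Olli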
Received on Wednesday, 3 November 2010 09:14:18 UTC