Re: Feedback on Roadmap

From: Stephane Boyera <boyera@w3.org>
Date: Thu, 24 Sep 2009 18:10:27 +0200
Message-ID: <4ABB99F3.6030002@w3.org>
To: Arun Kumar <kkarun@in.ibm.com>
CC: public-mw4d@w3.org
Hi Arun,

my comments:

> <AK>  Yes, I guess it is just a wording issue. I wanted to make sure that
> the problem of accessibility is recognized as being *more severe* in
> developing countries than in developed countries, since technology and
> devices to aid the needy are available and accessible to people in developed
> countries. The document seems to reflect the opposite as of now.
> </AK>

OK, I integrated that, and this will appear in the draft I will release
tomorrow.

>> B. Costs [Section 6.1.5]
>>> Right; the current situation is ..........this may change in the near
>>> future. Would that answer your comment ?
> <AK> Sure. That is fine. </AK>

OK, I integrated that, and this will appear in the draft I will
release tomorrow.

>> 1)
>> A major challenge.......
>>> Right, I'm happy to mention this as a side note. This is not a
>>> characteristic of the technology, but just an extra layer on top of it,
>>> in the same way as CMS, blog engines and so on. Moreover, this falls in
>>> the category of mobile as an authoring platform, which has been
>>> considered out of the scope of this document.
>>> So I propose to mention that as a note in the section. Would that be ok
>>> with you ?
> <AK> Not sure whether I understand the implications here. Basically, I
> would consider the mentioned advances in voice application creation as an
> improvement in technology and not really a layer in the same sense as CMS
> and Blogs. The reason is that such a platform is parallel to
> existing web application frameworks and allows users to create their own
> apps such as CMS, Blogs, Wikis and many others. So, if one considers the
> web application frameworks (Ruby on Rails, Struts etc.) in vogue today as
> technology, then a voice application creation framework probably qualifies
> as one too. Though I would admit that it is not yet freely available,
> its use is increasing and slowly opening up.
> If it falls outside of the scope of the document, it is fine to move this
> point into a note. But the weakness of voice as a platform, owing to the
> expertise required, needs to be diluted, as development of such apps is now
> not restricted to programmers and computer scientists. This would help
> keep readers from turning away from this promising mode of offering
> services.
> Hope I could clarify.
> </AK>

Hmm, honestly I'm not sure I agree with you.
I'm no specialist in web frameworks, but for me the example you are
citing, while being an important step towards bringing the authoring of
voice apps to all, is exactly like a blog, which transformed millions of
people from web consumers into web authors: they know nothing about the
technology, and use templates and forms to fill in information, arranged
in a specific way.
As far as I know, this is the case in your example: people developing
their apps do not think about design, or information flow, or things
like that; they just fill in a set of forms organized in a template,
and then a site is generated based on their input and answers from the
form. Am I wrong?
So in that sense this is not an improvement of the technology but just a
layer on top of it?

That said, again, I agree that it is important to mention such examples,
and also to mention existing WYSIWYG tools for voice app development.

> <AK> I think I understand what you mean. If I got you correctly, the point
> is that VoiceXML (or other voice application technologies) does not provide
> support for including meta information for discoverability. While this may
> be true, it is independent of whether services are discoverable since the
> mechanism to do that might exist outside of the authoring technology.

> I would again draw the comparison with how search evolved in the Web, where
> the static and dynamic websites themselves did not have much information
> but then the search engines developed crawling and indexing schemes outside
> of the web authoring technology itself. It was only later that XML and then
> Semantic Web approaches came into being to be able to specify structural
> meta-information and semantic information respectively. So, while the core
> voice app authoring technology (vxml) may not have discoverability, it
> probably is not accurate to consider the mechanisms outside of it as
> workarounds.
> </AK>

Hmm, I don't understand here. Most search engines on the web are not
using meta-information at all; they are just indexing words. Semantic
search engines are not really here yet.
However, thanks to hypertext technology, it is possible to index the
web just by following links and indexing words.
This is just impossible with voice:
1- you would not know under which phone numbers there are applications
2- an automatic process cannot understand the different 'pages' of
information and how to reach them directly
So, in essence, it is just impossible to automatically provide the
user with information about which numbers to call and what they would
find behind them.
It is possible to do that with VoiceXML, using web technology and
textual access, but this is not part of voice technology.
So I have the impression that voice apps are, in essence, not
discoverable; associating meta-information with phone numbers may change
that in the future, but I don't know of any operators providing such a
scheme.
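To make the contrast concrete, here is a toy sketch of why hypertext makes
the web indexable: starting from a single seed URL, a crawler reaches every
page just by following links and indexing the words it finds. The pages,
URLs and the `crawl` helper below are all invented for illustration, not
any real API. A voice app behind a phone number exposes no such links or
text, so no analogous crawl is possible.

```python
import re

# A toy "web": pages reachable by URL, each containing text and hyperlinks.
# With hypertext, a crawler can discover every page from one seed URL just
# by following links and indexing words -- no meta-information required.
PAGES = {
    "http://example.org/": "Weather service. See <a href='http://example.org/today'>today</a>",
    "http://example.org/today": "Sunny, 25 C. Back to <a href='http://example.org/'>home</a>",
}

def crawl(seed):
    """Build a word -> URLs index by following hyperlinks, as early search
    engines did. A voice app behind a phone number offers no equivalent
    links to follow, which is why no such crawl is possible there."""
    index, queue, seen = {}, [seed], set()
    while queue:
        url = queue.pop()
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        html = PAGES[url]
        text = re.sub(r"<[^>]+>", " ", html)               # strip markup
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index.setdefault(word, set()).add(url)
        queue.extend(re.findall(r"href='([^']+)'", html))  # follow links
    return index

index = crawl("http://example.org/")
print(sorted(index["sunny"]))  # ['http://example.org/today']
```

The whole scheme rests on the seed URL and the links inside each page;
with voice there is neither, which is the point above.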

>> 3)
>> Another issue mentioned is that ...........themselves are captured and
>> automatically made available on the VoiceSite.
>>> I believe there are two aspects:
>>> 1- the ability to access and reuse an answer without interacting again
>>> with the service
>>> 2- the ability to access previous results.
>>> I'm only talking about point 1 here: while SMS are automatically
>>> stored in the phone of the user, or while web pages are cached in the
>>> browser, the information received through a voice call is transient for
>>> the user. It is possible at the application level to implement a
>>> workaround, e.g. by dropping the result of the query into an SMS or a
>>> voicemail message in the user's voicemail box, but this is not handled
>>> at the technology level. I propose to mention that. Would that be ok
>>> with you ?
> <AK> I agree with your split of the two issues and that they need to be
> dealt with separately. To add further clarification, SMS is a store and
> forward technology that requires first storing the message at intermediate
> locations before forwarding it further on the route to destination (similar
> to emails).

I have the feeling that we might be entering into too much detail. In
this section, I wanted to address point 1; I should probably mention
that in the section.

> In contrast, dynamic web applications as well as dynamic voice applications
> are online content delivery technologies, and storage of delivered content
> is not inherently built into the core technology for either. This is
> exemplified in your example of cached content in a web browser. It needs an
> external component (the cache in a browser, which is at the client side and
> independent of the dynamic application at the server side) to support
> offline content storage in the online web world. This is also limited,
> since dynamic content sites such as those delivering weather status or
> stock quotes cannot benefit much from this. Static content can either be
> cached by the browser at its own will (i.e. subject to cache size on disk
> and cache management policies) or explicitly saved by the end user.

Right, I completely agree with your analysis, but I'm not sure we should
detail that in the document (the difference between static and dynamic
content)?

I wanted to make it understandable, and in my mind this is pretty simple:
1- with SMS, you receive the info in a message, and then you, as a user,
decide whether or not to delete it; this is your choice
2- with the web, you can access offline content you retrieved online, but
this depends highly on the cache policy of your browser, and on the
headers and caching policy set in the document. There is no way today for
the user to say: I want to keep this information in my cache for offline
use. One can save the information, but this is something else and not
integrated in the browser (e.g. you cannot use the URI to access it)
3- for voice, this is just like a phone call: the audio is gone when the
call is finished. Obviously you can have an appliance to record what you
heard, but this is a hack, or at least not a functionality existing by
default on handsets yet.
Literacy Bridge (I met them too) is an example of such a hack.
So I'm in favor of keeping that level of detail in the document, and not
going too technical. I'm happy to mention Literacy Bridge as an example
of recording voice app results.
Would that be ok with you?
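Point 2 above can be sketched as follows: whether a browser may reuse a
stored copy offline is decided by the server's Cache-Control header, not
by the user. This is a deliberately simplified freshness check (real HTTP
caching has many more directives and rules), and `may_reuse_cached` is a
hypothetical helper invented for this sketch, not a real API.

```python
def may_reuse_cached(cache_control: str, stored_at: float, now: float) -> bool:
    """Simplified sketch of an HTTP cache's freshness check: the server's
    Cache-Control header, not the user, decides whether a stored copy may
    be reused offline."""
    directives = [d.strip() for d in cache_control.lower().split(",")]
    if "no-store" in directives or "no-cache" in directives:
        return False
    for d in directives:
        if d.startswith("max-age="):
            # Fresh only while the copy's age is within the server's limit.
            return (now - stored_at) <= int(d.split("=", 1)[1])
    return False  # no explicit freshness info: be conservative

stored = 1000.0
# A static page cached for an hour can be re-read offline a minute later...
print(may_reuse_cached("public, max-age=3600", stored, stored + 60))  # True
# ...but a dynamic stock-quote page marked no-store cannot.
print(may_reuse_cached("no-store", stored, stored + 60))              # False
```

The user never appears in this decision, which is exactly the gap noted
in point 2: there is no built-in "keep this for offline use" choice.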

Stephane Boyera		stephane@w3.org
W3C				+33 (0) 5 61 86 13 08
BP 93				fax: +33 (0) 4 92 38 78 22
F-06902 Sophia Antipolis Cedex,		
Received on Thursday, 24 September 2009 16:10:34 UTC
