- From: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
- Date: Sun, 15 Nov 2009 16:57:52 +1100
- To: kunter ilalan <ilalan.kunter@gmail.com>
- Cc: public-html@w3.org
Hi Kunter,

On Sat, Nov 14, 2009 at 1:33 AM, kunter ilalan <ilalan.kunter@gmail.com> wrote:
> On Fri, Nov 13, 2009 at 1:51 PM, Silvia Pfeiffer
> <silviapfeiffer1@gmail.com> wrote:
>> On Fri, Nov 13, 2009 at 10:38 PM, Henri Sivonen <hsivonen@iki.fi> wrote:
>>> On Nov 13, 2009, at 08:10, Peter Jäderlund wrote:
>>>
>>>> <media target="mobile">
>>> [...]
>>>> <media target="web">
>>>
>>> This design assumes that mobile is equally bad across all user
>>> locations around the globe and across all devices.
>>>
>>> The key issues are bandwidth (which varies greatly) and the device's
>>> ability to decode video of different sizes in real time.
>>>
>>> A more appropriate way of providing UA-selected alternatives would be
>>> attaching a Media Query to each <source> and extending Media Queries
>>> to query bandwidth and some kind of decode performance metric.
>>> (Unfortunately, the latter is probably hard to represent as an
>>> intuitive and easily measurable scalar.)
>>
>> An example of a media query was presented by Dave Singer at the recent
>> Video A11y workshop. I am taking the liberty of citing him, though I'm
>> not sure it's completely accurate.
>>
>> <video ...>
>>   <source media="accessibility(captions:yes)" src="A"/>
>>   <source media="accessibility(captions:no audiodescription:yes)" src="B"/>
>> </video>
>>
>> I guess that could be extended to cover bandwidth and intended size
>> etc., and could help a user agent decide which alternative tracks/videos
>> it can choose from.
>>
>> I still believe there is a need to allow us to describe alternative
>> and additive tracks/videos. For example, a video element that has one
>> main source element with the A video, plus a set of alternative sign
>> language tracks that are additional video to A, is very difficult to
>> describe in a flat structure like this. One additional structuring
>> level might be helpful. OTOH, if we can avoid it, that's good too,
>> because we don't want to pollute HTML with even more element names.
>>
>> Cheers,
>> Silvia.
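[Editor's note: a sketch of how the media-query idea above could be extended to bandwidth and intended size. The `min-width`/`max-width` features are standard CSS Media Queries; `min-bandwidth` is purely hypothetical here -- Media Queries would first need to be extended as Henri suggests, and no such feature exists in any specification.]

```html
<video controls>
  <!-- high-quality source for large, well-connected clients;
       "min-bandwidth" is a hypothetical, not-yet-specified feature -->
  <source media="all and (min-width: 800px) and (min-bandwidth: 2mbps)"
          src="movie-hd.ogv" type="video/ogg"/>
  <!-- smaller encode for constrained mobile devices -->
  <source media="all and (max-width: 480px)"
          src="movie-mobile.ogv" type="video/ogg"/>
  <!-- default source when no media query matches -->
  <source src="movie.ogv" type="video/ogg"/>
</video>
```

The user agent would walk the source list in order and pick the first source whose media query (and type) it can satisfy, which is how <source> selection already works for the type attribute.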
>
> Hello again;
>
> We've met before on the same topic, several months ago.
> http://blog.gingertech.net/2009/10/06/new-proposal-for-captions-and-other-timed-text-for-html5/
>
> There, I have suggested a non-polluting solution upon which we all
> agreed,

I don't see much agreement with your proposal at the blog post. In fact, I tried to comment on how your approach doesn't really fit with the HTML approach and will fail if applied to the header. But I may have misunderstood you - not seeing how a complete Web page with your proposal looks makes it difficult to judge.

> which was about RDF-like supportive declarations outside the
> body of the main mark-up. From where I'm standing, a multimedia file
> should not be any more detailed "in its HTML appearance" than that of
> an ordinary image file (which is another medium type):
>
> <MEDIA SRC="__URI__" />
>
> Then, if we want to be semantically meaningful, or correct, and
> accessible with libraries of information to its receivers, the
> benefiters, then I suggest supporting every medium type with
> descriptive data in XML format in the head section.

It sounds to me like you are trying to re-define something like SMIL. Have you looked at SMIL and worked out whether it fulfills your requirements?

> One should not forget that everything qualifies as multimedia -- we
> have more than one medium, and if we are to suggest a standard, then
> it should be standard for all of these types, not for a single one.
>
> Therefore, the same goes for the subtitles of one movie, the subtitles
> of another movie in yet another format, the scripts - say, the
> lyrics of a song in an audio file - and even the translated version of
> textual data.
>
> For the latter case, I had tried for years for the better use of
> multilingual HTML documents, but upon my constant failure, I
> devised my own formula, switching to stand-alone valid XML and dirty,
> non-standard XML/XSLT tricks for browser usage.
> It was about writing
> the holy Book, where I had to provide verses in their original and
> local languages. I would be happy to show you my progress if someone
> is interested.
>
> As of the year 2010, the video files of this age are not semantically
> meaningful. We are not providing our computers and the search engines
> with the vast data of the movie timeline - although technically we
> should be. The data available on the net may already look huge, but
> when decrypting the buried, hidden, unused data within the video
> files, the sum of the binary universe in circulation, or at least
> available, will be "enormous".

This is what we are trying to achieve with the HTML5 audio and video elements and with creating a standard way to associate time-aligned text. A search engine will then find it easy to extract the text, index it, and include it in its search results with a link to the time offsets (which, incidentally, is being standardised in the W3C Media Fragments Working Group).

> I don't believe "accessibility" should be an option for
> accessibility. I believe this concept should be part of the universal
> discipline and practices we all must obey.

I agree. It is what we are working towards. The itext specification is just one proposal at the moment for how we can solve the issues. It is not complete and not accepted. There are also others.

If you (or, for that matter, anyone else) have any other proposals for how we should include subtitles, captions, lyrics, and other time-aligned text in a declarative manner in HTML5 audio/video, please go ahead and send them in. The more ideas we have, the better the ultimate solution will be.

> best regards;
> k.ilalan

Best Regards,
Silvia.
Received on Sunday, 15 November 2009 05:58:52 UTC