
Re: Starting points for review of media synchronization requirements

From: Janina Sajka <janina@rednote.net>
Date: Tue, 1 Sep 2020 04:53:29 -0400
To: "White, Jason J" <jjwhite@ets.org>
Cc: "public-rqtf@w3.org" <public-rqtf@w3.org>
Message-ID: <20200901085329.GB308175@rednote.net>
Thanks for the references, Jason. Good stuff.

I also spoke with Peter Korn on the topic. Peter says Amazon has data
and he will look into releasing it to us.

Peter says their findings are an approximately 200 ms asynchrony window,
something like up to 50 ms ahead or no more than 150 ms behind.
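As a rough illustration of the window Peter describes, a skew check might look like the following. This is a hypothetical sketch (the constant and function names are my own, not from Amazon's data), assuming positive skew means the audio is ahead of the video:

```python
# Hypothetical tolerance window per the figures above:
# audio at most 50 ms ahead of video, or at most 150 ms behind.
LEAD_LIMIT_MS = 50
LAG_LIMIT_MS = 150

def within_sync_window(skew_ms: float) -> bool:
    """Return True if the audio/video skew (positive = audio ahead)
    falls inside the asserted ~200 ms asynchrony window."""
    return -LAG_LIMIT_MS <= skew_ms <= LEAD_LIMIT_MS

# Examples:
print(within_sync_window(40))    # audio 40 ms ahead -> True
print(within_sync_window(-100))  # audio 100 ms behind -> True
print(within_sync_window(75))    # audio 75 ms ahead -> False
```

The asymmetry (a tighter limit on audio leading than lagging) matches the common observation that viewers tolerate late audio better than early audio.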

Best,

Janina

White, Jason J writes:
> Here are some starting points identified by some rather quick and definitely not thorough searches. I haven’t read any of the references listed.
> 
>   *   Cuzco-Calle, I., Ingavélez-Guerra, P., Robles-Bykbaev, V., & Calle-López, D. (2018, August). An interactive system to automatically generate video summaries and perform subtitles synchronization for persons with hearing loss. In 2018 IEEE XXV International Conference on Electronics, Electrical Engineering and Computing (INTERCON) (pp. 1-4). IEEE.
>   *   Garcia, J. E., Ortega, A., Lleida, E., Lozano, T., Bernues, E., & Sanchez, D. (2009, May). Audio and text synchronization for TV news subtitling based on automatic speech recognition. In 2009 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (pp. 1-6). IEEE.
>   *   Chen, M. (2003, April). A low-latency lip-synchronized videoconferencing system. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 465-471).
>   *   Waters, K., & Levergood, T. (1994, October). An automatic lip-synchronization algorithm for synthetic faces. In Proceedings of the Second ACM International Conference on Multimedia (pp. 149-156).
>   *   Piety, P. J. (2004). The language system of audio description: an investigation as a discursive process. Journal of Visual Impairment & Blindness, 98(8), 453-469.
>   *   Díaz-Cintas, J., Orero, P., & Remael, A. (Eds.). (2007). Media for all: subtitling for the deaf, audio description, and sign language (Vol. 30). Rodopi.
>   *   McCarthy, J. E., & Swierenga, S. J. (2010). What we know about dyslexia and web accessibility: a research review. Universal Access in the Information Society, 9(2), 147-152.
>   *   Petrie, H. L., Weber, G., & Fisher, W. (2005). Personalization, interaction, and navigation in rich multimedia documents for print-disabled users. IBM Systems Journal, 44(3), 629-635.
>   *   Blakowski, G., & Steinmetz, R. (1996). A media synchronization survey: Reference model, specification, and case studies. IEEE Journal on Selected Areas in Communications, 14(1), 5-35.
> 

-- 

Janina Sajka
https://linkedin.com/in/jsajka

Linux Foundation Fellow
Executive Chair, Accessibility Workgroup:	http://a11y.org

The World Wide Web Consortium (W3C), Web Accessibility Initiative (WAI)
Co-Chair, Accessible Platform Architectures	http://www.w3.org/wai/apa
Received on Tuesday, 1 September 2020 08:53:47 UTC
