[Bug 13943] The "bad cue" handling is stricter than it should be

http://www.w3.org/Bugs/Public/show_bug.cgi?id=13943

Ian 'Hixie' Hickson <ian@hixie.ch> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |RESOLVED
         Resolution|                            |WONTFIX

--- Comment #10 from Ian 'Hixie' Hickson <ian@hixie.ch> 2011-09-20 20:04:34 UTC ---
I don't understand your use of the terms "strict", "robust", and "recover".
Allowing syntactically incorrect blocks isn't strict. Ignoring them is robust.
Not ignoring the next block is how we recover.

Parsers for Web languages should be designed to be forward-compatible, which
means ignoring content that doesn't match the syntax in a well-defined manner,
so that future extensions can use these syntax "holes" to add new features in a
predictable way. Parsers should handle common authoring errors in a way that
matches author intent or that does nothing, but there is no need to recover
from errors that aren't going to be common — it would just encourage authors
to write bad code that might change meaning in the future. Parsers should avoid
actively handling (i.e. not ignoring) author mistakes in ways that are likely
to differ from author intent.
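The ignore-and-recover behavior described above can be sketched roughly as follows. This is a hypothetical minimal parser, not the actual WebVTT cue-parsing algorithm; the `-->` well-formedness check stands in for the real timing-line validation:

```python
def parse_cues(lines):
    """Sketch: collect blank-line-separated blocks, ignoring bad ones.

    A block is treated as 'bad' if its first line lacks the '-->'
    timing separator (a stand-in for real validation). The parser
    ignores the whole bad block (robustness) and resumes at the next
    blank line (recovery), rather than aborting or guessing intent.
    """
    cues, block = [], []
    for line in lines + [""]:  # sentinel blank line flushes the final block
        if line.strip():
            block.append(line)
            continue
        if block:  # end of a block: keep it only if it is well-formed
            if "-->" in block[0]:
                cues.append(block)
            block = []  # a bad block is simply dropped; parsing continues
    return cues

sample = ["00:01.000 --> 00:02.000", "Hello", "",
          "not a cue", "??", "",
          "00:03.000 --> 00:04.000", "World"]
```

Here the malformed middle block is ignored, and the parser still picks up the "World" cue that follows it.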

You have studied SRT data, so you have a good idea of what authoring mistakes
are common; your advice here would be most welcome. However, if the case you
are talking about here is not a common error, then I don't see any value (and I
see some negatives) to trying to automatically work around it.

-- 
Configure bugmail: http://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the QA contact for the bug.

Received on Tuesday, 20 September 2011 20:04:41 UTC