RE: ZIP-based packages and URI references into them ODF proposal

The web has an architecture that is independent of any particular implementation or application. In general, software architecture has some principles; orthogonality of specifications is one.

If you are creating implementations of a particular class of software, then the functional specification of that implementation will, of necessity, need to document the interplay between the components, and it may indeed be useful to specify how to robustly interact with existing legacy components and content that are already widely deployed. However, confusing an "implementation functional specification" with the "definition of a protocol, format, or language" seems like a bad idea.

> Anyway, my proposed solution was supposed to show that HTTP could
> solve the problem of addressing inside packages without requiring any
> extension to HTTP's URI scheme.

I believe it is an important requirement that the inter-/intra-/into URI referencing mechanism for packaged components be independent of the protocol used to access those components, whether HTTP or file-system sharing or IMAP or carrier pigeon. And it should be independent of the nature or format of the content, whether it be HTML, SVG, XAML, Flex, Flash, PDF, or LaTeX.
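
To make the requirement concrete (the URIs and the part name below are hypothetical, not a proposed syntax): the same package might be retrieved as

    http://example.org/report.odt
    file:///home/alice/report.odt

or arrive as a mail attachment, and a reference into it (say, to a part named "content.xml") ought to be written, and to resolve, identically in every one of those cases.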

For example, consider RFC 2557, "MIME Encapsulation of Aggregate Documents, such as HTML (MHTML)", http://www.ietf.org/rfc/rfc2557.txt.

This is an existing, well-deployed packaging mechanism with a way of including relative addressing within the package, as well as references from outside (albeit using globally unique Content-ID pointers). It uses MIME's "multipart" aggregation method rather than ZIP, but very little of the specification depends on that choice of aggregation format. It was developed primarily for HTML email, but the mechanisms are appropriate for any content.
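
As a rough sketch (the boundary string, URIs, and Content-ID below are invented for illustration), an RFC 2557 package looks something like this:

    Content-Type: multipart/related; boundary="pkg";
        type="text/html"; start="<root@example.org>"

    --pkg
    Content-Type: text/html
    Content-ID: <root@example.org>
    Content-Location: http://example.org/report/index.html

    <html> ... <img src="logo.png"> ... </html>

    --pkg
    Content-Type: image/png
    Content-Location: http://example.org/report/logo.png
    Content-Transfer-Encoding: base64

    iVBORw0KGgo...
    --pkg--

The relative reference "logo.png" in the HTML part resolves against that part's Content-Location, and an external reference can point at a part via the cid: URI of its Content-ID; neither mechanism depends on whether the package arrived over HTTP, from a file share, or out of a mail store.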

I don't think RFC 2557 meets all of your other requirements, but it does show a direction for ensuring orthogonality.

# The point being that: yes, orthogonality is fundamentally important.
# But the system also needs to be architected to cope with
# inadequacies or inconsistencies of the layers it depends on.

It depends on what you mean by "the system". To be broadly successful in attracting new users in the market for free downloadable components, software needs to be designed to cope with existing implementations and content, and the functional specification for that category of software can usefully include advice on dealing with existing content and services.

# Maybe a
# better principle would be: design orthogonally but assume your
# dependencies are broken. Or, maybe, design for the problem you are
# trying to solve, but don't assume you have solved any other
# problems.

You're not seriously arguing that one should specify HTTP with the assumption that TCP might be broken, and that sometimes content is mangled, and put all of the ways of dealing with that into the HTTP specification? Or that the URI specification should also discuss DNS hijacking? I don't think that's feasible, or a good idea to try, in general.

Perhaps there are unusual circumstances where broken, poorly implemented, or poorly deployed components are widespread (HTTP content labeled with incorrect MIME types, for example), and where addressing backward compatibility is a priority.

But why should this be a strong consideration when specifying *new* functionality that isn't already widely deployed? Why ensure that the future will be even worse than the present by promoting "fallback" behavior to "standard"?

Larry
