- From: Dylan Barrell <dbarrell@bb.opentext.com>
- Date: Fri, 25 Jul 1997 11:43:10 -0400
- To: "w3c-dist-auth@w3.org" <w3c-dist-auth@w3.org>
> I really think one has to keep in mind that transparent content negotiation
> goes way beyond internationalization. The spec is a little scary in the sense
> that a client and a server can collude to use *any* attribute, even one that
> no one else in the world knows or cares about, to choose a variant. I also
> think you have to keep in mind that there are different relationships
> possible between the backend variants gathered under the same URL.
>
> It may be that there is one master variant and all the others are
> automatically derived from it. For example, the prime variant may be SGML and
> from it flow HTML and PDF versions. Checking out, editing, and checking in
> the secondary variants in this model is senseless.

There may be many other scenarios for content negotiation, and I realise that we won't be able to satisfy all of them, but Accept-Language is standardised, so we should support it.

> It may be that there are several co-equal variants that must be updated
> independently and manually. For example, each variant is in a different
> language and one doesn't trust automatic translation software.

Precisely.

> I've come to the conclusion that as long as it is possible to get at the
> individual variants when necessary, and determine exactly what variants there
> are, and to not check out a URL that has variants under it (because one could
> end up in the bizarre position of using one set of client settings to check
> it out and another to check it in, and having the wrong variant get updated),
> it is really up to the application built on top of the protocol to sort this
> all out.

How is the application going to "sort it out" if there is no standardisation on what the variants represent?

Cheers
Dylan
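As a rough sketch of the co-equal-variants case being discussed (this is not from the WebDAV spec; the filenames, helpers, and q-value handling here are illustrative assumptions), server-side selection among language variants under one URL via Accept-Language might look like:

```python
# Hypothetical sketch: one URL maps to several language variants, and the
# client's Accept-Language header (with q-values) picks among them.

def parse_accept_language(header):
    """Parse an Accept-Language header into (language, q) pairs, best first."""
    prefs = []
    for item in header.split(","):
        parts = item.strip().split(";")
        lang = parts[0].strip().lower()
        q = 1.0  # default quality when no q parameter is given
        for param in parts[1:]:
            name, _, value = param.strip().partition("=")
            if name.strip() == "q":
                q = float(value)
        prefs.append((lang, q))
    return sorted(prefs, key=lambda p: p[1], reverse=True)

def choose_variant(variants, header):
    """Return the variant for the client's most-preferred available language,
    falling back to the first variant if nothing matches."""
    for lang, q in parse_accept_language(header):
        if q > 0 and lang in variants:
            return variants[lang]
    return next(iter(variants.values()))

variants = {"en": "report.en.html", "de": "report.de.html", "fr": "report.fr.html"}
choose_variant(variants, "de, en;q=0.7")  # selects the German variant
```

This also shows why checking out the bare URL is hazardous: which file the server hands back depends on client settings at request time, so a checkout and a later checkin could silently address different variants.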
Received on Friday, 25 July 1997 11:46:35 UTC