[W3C] Best practices / specifications

From: <eduardo.casais@areppim.com>
Date: Thu, 30 Oct 2008 13:01:57 +0100
To: "'Francois Daoust'" <fd@w3.org>
Cc: <public-bpwg-ct@w3.org>, <Tom.Hume@futureplatforms.com>
Message-ID: <000001c93a87$501529d0$b39dca3e@AREPPIM002>

> The whole idea is very tempting.

CC/PP, UAProf and HTTP have been developed for exactly this
purpose: capability negotiation.
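For instance, a mobile browser already advertises its capabilities
through these standard mechanisms in every request (a sketch; the
device name and profile URL are invented):

```
GET /news HTTP/1.1
Host: wap.example.com
User-Agent: SampleHandset/1.0
Accept: application/vnd.wap.xhtml+xml, application/xhtml+xml;q=0.9
Accept-Charset: utf-8
X-Wap-Profile: "http://profiles.example.com/SampleHandset-1.0.rdf"
```

The X-Wap-Profile field (from the OMA UAProf specification) points
to an RDF document describing the device in CC/PP vocabulary; the
Accept* fields are plain HTTP content negotiation.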

> Point c) leaves the door open to the transcoder using 
> whatever value it 
> may want to use. It would be good if there was some way for 
> the Content 
> Provider to trust the device description data that is being 
> used within 
> the transcoder. 

It is a matter of defining a hierarchy: a) > b) > c) when
it comes to the relevance of attributes. Nothing difficult. 

> Appendix D.3 in "Scope for Future Work" 
> envisions this 
> possibility:

The scope for future work does not specify how capability 
negotiation takes place, but defines another mechanism for 
parties to exchange information about the confidence they
have in each other's device information. Not quite the same
thing: maybe a party has better information about a device,
but what counts is whether the transformation will be better
-- and that is apparently not the topic for future work.

> In the meantime, I just wonder whether specifying a precise mechanism 
> that cannot be enforced because of c) and because of the fact 

Just define the hierarchy as indicated above.

> whole notion of "user experience" cannot be formally defined carries 
> more value than a simple text "It should only [restructure 
> content] to 
> match the specific capabilities of the user agent". The precise 
> mechanism does look as if we have solved the problem whereas 
> that's not 
> truly the case in practice.

A specific mechanism, such as the one I proposed, is well-defined,
standards-based, measurable, and testable. It does not encompass
the entire user experience -- and does not attempt to -- hence it
is a partial solution. However, it enforces some very basic
consistency and optimization requirements.

In practice, a statement like the one already
present is a generic, untestable declaration
of principles -- i.e. of no operational or
implementation value.

If you do not want to, or are not able to, specify what
is meant by "best possible user experience" or "careful
transformation", then I propose you simply eliminate
all references to such recommendations. After all,
the CTG is, contrary to its title, not about
best (or worst) transformation practices (which, as
I understand, you neither want to define nor
endorse), but about _signalling mechanisms_ between
parties in a transcoding environment. In the same
way that I do not remember seeing recommendations
for "appropriate applications" in the HTTP standard,
maybe the CTG should just refrain from inserting
any text on the quality of transcoding or lack
thereof. In short, reduce the text to the
W3C goal, more or less summarized as follows:

a) This is how transcoders signal their presence
to servers and clients.
b) This is how clients can react to the activity
of transcoders.
c) This is how servers indicate to transcoders
what they may/may not do.
d) We do not define what transformations are,
which ones are good, which ones are bad; we do
not recommend any way to perform specific
transformations. We do not endorse any kind of
transformations. A non-exhaustive, provisional
list of issues that are raised by transcoding 
operations can be found in the appendix.
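Points a) and c) already reduce, in current HTTP, to a small set of
header exchanges (a sketch; the host names and values are invented):

```
# Transcoder announces its presence to the server (point a):
GET /page HTTP/1.1
Host: origin.example.com
Via: 1.1 transcoder.example.org

# Server indicates to the transcoder what it may not do (point c):
HTTP/1.1 200 OK
Cache-Control: no-transform
Content-Type: application/xhtml+xml
```

Point b) follows the same pattern: the client inspects the Via
field in the response to detect an intervening transcoder.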

> Any thoughts on that from others participants of the Task Force?
> Side note: out of curiosity, why do you want to use tables instead of 
> lines with hard line breaks?

I have had the dismal experience of seeing transcoders
transform tables (used not for page layout, but simply
to present figures), which could have been rendered
perfectly as such on the end-user devices, into lines
with hard breaks...

This would correspond to a capability negotiation
/ optimization for the attribute "TablesCapable".
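TablesCapable is indeed a standard attribute of the UAProf
BrowserUA component; a device profile fragment declaring it would
look roughly as follows (RDF/XML abridged, vocabulary per the OMA
UAProf schema):

```
<prf:component>
  <rdf:Description rdf:ID="BrowserUA">
    <rdf:type rdf:resource="http://www.openmobilealliance.org/tech/profiles/UAPROF/ccppschema-20021212#BrowserUA"/>
    <prf:TablesCapable>Yes</prf:TablesCapable>
  </rdf:Description>
</prf:component>
```

A transcoder honouring this attribute would leave tables untouched
for such a device.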

> Coming up with a set of normative worst practices (i.e "do 
> not [foo]") 
> that could survive time is not easy.
> If people think it's necessary, I personally would not mind having a 
> list of potential troubles associated with different types of 
> re-structuring operations.

See above.

> were told that it will break with some WAP gateways. We did not look 
> into this in details because recommending the Cache-Control: 
> no-transform directive still holds no matter the results: unless we 
> missed something, it's the only existing mechanism we may use.

So what about that Vary field?

And no, it is not the only mechanism. That is why there
are heuristics, and why it is important to specify them
in detail -- especially since they are based on standards
(MIME types, XML declarations, the HTTP Content-Type field,
domain names).
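To make the point concrete, a response already carrying the
mobile-specific markers those heuristics rely on might look as
follows (illustrative; the signals here are the WML MIME type, the
XML declaration and the WML DOCTYPE):

```
HTTP/1.1 200 OK
Content-Type: text/vnd.wap.wml; charset=utf-8
Cache-Control: no-transform
Vary: User-Agent

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.3//EN"
  "http://www.wapforum.org/DTD/wml13.dtd">
<wml><card id="main"><p>...</p></card></wml>
```

Any transcoder seeing text/vnd.wap.wml or that DOCTYPE can conclude
that the content is already tailored for mobile delivery, without
any new protocol.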

> In the D.5 "Scope for future work" appendix (I know, it does look as 
> everything is in scope for future work :(), we also note that more 
> fine-grained values would be needed:
>   "The BPWG believes that amendments to HTTP are needed to 
> improve the 
> inter operability of transforming proxies. For example, HTTP does not 
> provide a way to distinguish between prohibition of any kind of 
> transformation and the prohibition only of restructuring (and not 
> recoding or compression)."

And you come back to what I stated: you cannot really achieve
a consistent framework without a special-purpose (though
hopefully tight) protocol. As long as such a protocol does not
exist, it is necessary to make the most of existing mechanisms
-- and that also means taking care of the difficult cases
(WML, lowest-common-denominator sites) with heuristics.


Received on Thursday, 30 October 2008 12:44:16 UTC