
RE: The Web has flexible presentation Re: (Nielsen article)

From: Rotan Hanrahan <rotan.hanrahan@mobileaware.com>
Date: Sun, 22 Mar 2009 23:37:14 +0000
To: <passani@eunet.no>
Cc: <public-bpwg@w3.org>
Message-ID: <BBC1462C-6FCC-4893-AF0A-EE81F4B9E02C@mimectl>
Thanks for the detailed response, Luca. To answer the question you posed:

>> Thinking about Luca's other concerns, and in light of the above, the issue
>> he is exploring is the degree of transcoding that may take place. Major
>> restructuring of the original content (without any input from the author)
>> does not show respect for the content/author, but lesser adjustments to
>> make the content compatible with the end user device are acceptable.
> I don't think I ever said this. Or maybe I am confused. Are you talking
> about web-content or mobile web sites?

No, you did not say this, and I didn't say you did. These are *my* thoughts on the argument that (among many things) you are exploring: the issue of the degree to which transcoding may take place. It ranges from never to always. You have already made it clear that you believe transcoding should not be present at all, but that in some very specific cases it is understandable why it would be present, and that in this limited case it would have negligible impact on content authors. When I talk about transcoding of content, I do not limit the original content to mobile. The original content could be anything, for any context. Obviously, when the original content is created with mobile in mind, it becomes harder to justify the presence of a subsequent transcoding-for-mobile phase. The discussion is, as you say, about where the boundaries are placed. On one side of the boundary, transcoding has an understandable (albeit unpleasant) reason for being present; on the other side it is unacceptable. The guidelines and heuristics seek to define that boundary.

Like you, I believe that transcoders are fine if they are cooperating with the content author/owner. However, this is a specific use case that is not very prevalent. In general, cooperation (where it exists) is limited to meta tags and some HTTP headers, which is probably not fine-grained enough for most authors. That such controls can vary from transcoder to transcoder just confounds the problem for authors. The CTG might help a little with the latter, but real cooperation doesn't seem to be "around the corner".

So that leaves the problem of what to do when cooperation is unlikely (e.g. dealing with legacy content) or when the tools for cooperation are limited (e.g. headers).
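For concreteness, the main header-level control an author has today is the Cache-Control: no-transform directive from HTTP/1.1 (RFC 2616), which transformation guidelines such as the CTG expect proxies to honour. A server opting out of transformation would respond with something like:

```
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: no-transform
```

It is a single all-or-nothing switch for the whole response, which illustrates exactly the coarse granularity complained about above.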

> ... if transcoder vendors are there to steal content, ...

Transcoder vendors are making available a very powerful technology. Used with the kind of mutual respect I outlined in my closing paragraph (which you suggested would have no impact in this discussion), the transcoding technology could be mainly beneficial. Without such respect, the technology can have negative effects. Respect is akin to what you call "good manners", and this is at the heart of the argument.


From: passani@eunet.no
Sent: Sun 22/03/2009 22:02
To: Rotan Hanrahan
Cc: public-bpwg@w3.org
Subject: RE: The Web has flexible presentation Re: (Nielsen article)

Hi Rotan, comments in-line

> So, despite the potential for interfering with content that Luca has been
> warning about, this proposed compromise suggests that in the use-case
> where mobile content is not returned it is legitimate to spoof the UA.
> Presumably such spoofing is for the purpose of retrieving alternative
> content, with the strong likelihood of it being transcoded subsequently.
> So this means: there are times when transcoding is OK.

So, to clarify, I would like to make a distinction between my personal
views on transcoders and the view I represent here as the editor of the
Manifesto and representative of those who have signed it.

Personally, I think that transcoders are a bad thing. Unless transcoding
happens with the agreement of the content owner, it is potentially an
abuse, and nobody should be allowed to profit from it. Many developers and
authors share this view.

Having said this, and here comes the Manifesto position, a few have argued
that transcoders are launched by operators together with cheap/flat data
rates. Because of this, no matter how screwed up those transcoders may be,
they may represent a step in the right direction: cheap data plans are
likely to stimulate adoption of the mobile internet.
According to this position, web sites without a mobile version will
probably not notice the negligible fraction of their traffic that comes
from mobile devices, so the transcoding does not do much damage.

Since those two positions totally converge on the notion that mobile
content must be preserved, that's the area where a compromise with
transcoder vendors was found, which is exactly what the Manifesto is all
about. Unofficially, most operators agree that the Manifesto is quite a
reasonable compromise.

So, I think that summarizing this with "there are times when transcoding
is OK" is not accurate. Transcoding without the consent of the content
provider is never OK, nor is it fair. There are cases where the unfairness
may be minimized to the point where content owners do not care so much.

> Correct me if I'm wrong, Luca, you also have less of a problem with Mini
> mainly because it is a distributed browser (comprising a transcoder and
> in-device renderer) that openly declares itself. But anyone using the same
> transcoding technology without declaring their presence is not OK. Right?

Spoofing the UA is not OK. The way the web works is that the network does
not make decisions on behalf of intelligent clients and servers, or it all
turns into a big mess.

> Thinking about Luca's other concerns, and in light of the above, the issue
> he is exploring is the degree of transcoding that may take place. Major
> restructuring of the original content (without any input from the author)
> does not show respect for the content/author, but lesser adjustments to
> make the content compatible with the end user device are acceptable.

I don't think I ever said this. Or maybe I am confused. Are you talking
about web-content or mobile web sites?

> In
> this case, the motivation is to "bridge the capability gap",

A very dangerous concept. I don't like it. On one side we have a very clear
concept which has been the foundation of the web: content providers make
an investment because they have reasonable expectations that their content
won't be messed with by third parties.
If you forfeit this founding value, you are left in an ocean of unknowns
where everyone goes off in desperate search of small and large profits.
The mobile web as a platform will be totally disrupted by this.

> not
> marketing, censorship, etc. Also, transcoding of the content within the
> device to make it compatible with the user (who may have some modal
> disability) is also acceptable. Very acceptable, actually, as we are all
> compassionate people.
> With this understanding, Mini might be in the grey area of acceptability
> (from Luca's standpoint) because although it openly declares its presence,
> the significant restructuring of the original content could be trouble.
> However, Mini doesn't remove anything without good reason, and doesn't
> inject any significant material, so perhaps this is OK.
> Hopefully I am interpreting Luca correctly here.

Until transcoders came about, I considered Opera Mini an opt-in J2ME
browser which relied on a server for pre-digesting the brutish HTML found
in the wild. Again, I did not have a particular problem with it.

> Having established that some kinds of transcoding are OK, even if they do
> some major adjustments (for good reasons), the question is: would this
> proposed "double request" compromise work in practice?
> I'm not so sure.

well, I am rather sure. One major transcoder, Openweb, works like this. As
a result, they were able to curb the wave of complaints from content
owners in their deployments.
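The double-request flow described above can be sketched in a few lines. This is an illustration only, not Openweb's actual code; the `fetch(url, ua)` interface and the names are assumptions made for the example.

```python
DESKTOP_UA = "Mozilla/5.0 (Windows NT 5.1)"  # hypothetical spoofed desktop UA

def fetch_for_mobile(fetch, url, device_ua):
    """First ask with the handset's real User-Agent; only if the site
    serves no mobile version, re-request as a desktop browser and flag
    the response as a candidate for transcoding.

    `fetch(url, ua)` is a caller-supplied function returning a
    (content, is_mobile) pair -- an assumed interface for illustration.
    """
    content, is_mobile = fetch(url, device_ua)
    if is_mobile:
        return content, False   # mobile content found: pass through untouched
    content, _ = fetch(url, DESKTOP_UA)
    return content, True        # desktop-only content: may be transcoded
```

The point of the ordering is that a mobile-aware site never sees the spoofed desktop User-Agent at all, which is precisely the "mobile content must be preserved" principle.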

> I accept that it might work for a lot of existing sites,
> but only the existing mobile aware sites will benefit. (Pun intended.)
> The mobile aware sites would respond with mobile content, and the
> transcoders would (hopefully) not interfere.
> Non mobile aware sites would see their HTTP traffic increase. Where before
> they were getting just one request, now they get two. In addition to the
> performance hit, their statistics would now be skewed. Maybe the
> transcoder would remember non mobile sites, to avoid future double-request
> behaviour, but what happens to the site when they install a product (e.g.
> ours) in order to "go mobile"?

Rotan, these are easily fixed issues. In fact, virtually a non-issue. A
transcoder can discover that a site does not have a mobile version and
remember this for the following 6-12 hours. After that, a new check is
performed, which covers them for another 6-12 hours. So, the problem
you are mentioning is really not a big deal. One double request every 6 to
12 hours won't skew any statistics worth skewing.
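The TTL-based re-check Luca describes amounts to a small expiring cache. A minimal sketch, with hypothetical names and a 6-hour TTL chosen from his suggested range:

```python
import time

# Hypothetical cache of sites known to lack a mobile version.
# Keys are hostnames; values are the time at which the entry expires.
NO_MOBILE_UNTIL = {}
TTL = 6 * 3600  # seconds; Luca suggests anywhere in the 6-12 hour range

def should_probe(host, now=None):
    """True if the transcoder should send the extra request with the
    real mobile User-Agent before falling back to spoofing."""
    now = time.time() if now is None else now
    expires = NO_MOBILE_UNTIL.get(host)
    if expires is not None and now < expires:
        return False  # within the TTL: assume the site is still not mobile-aware
    return True       # TTL elapsed (or never checked): probe again

def record_no_mobile(host, now=None):
    """Remember that this site returned no mobile version."""
    now = time.time() if now is None else now
    NO_MOBILE_UNTIL[host] = now + TTL
```

A site that "goes mobile" is therefore picked up automatically at the next expiry, without any extra signalling mechanism, which is the argument made in the next reply.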

> In such a scenario, the transcoders would
> still be spoofing because they remember the site as non-mobile. We'd need
> a mechanism to signal to transcoders that the site had gone mobile ("vary"
> header?), or perhaps they could automatically refresh their "mobile
> status" so that newly mobilized sites are detected.

No, we don't. If the transcoder vendor embraces the basic notion that
mobile sites must be preserved, everything else follows very simply,
including the fact that they do not need to re-check at every request:
re-checking every 6-12 hours is enough.

On the other hand, if transcoder vendors are there to steal content, there
is not much a recommendation can do to stop them. The reason I am using my
time here is that I don't want Novarra and ByteMobile to use the W3C to
support abusive business practices.

> What this shows is that while the proposed compromise looks good at first,
> there are still some issues that need to be considered, and other
> techniques/heuristics will still have a role to play.

I mean, you don't want CTG to go to this level of detail with heuristics,
right? CTG needs to teach transcoders good manners. Once they learn good
manners, the actual heuristics are up to them.

> The point about search engines distorting content is also rather
> interesting. The question to ask is: who benefits? Users, for one, as it
> helps them find (mobile) content. Authors, because it increases their
> audience. SEs, because their search results are often ordered or
> accompanied by advertising, but only during the search phase. That sounds
> like a win-win-win. Furthermore, as robots.txt is well-established, most
> (knowledgeable) Web sites are effectively signalling their willingness (or
> lack thereof) for their content to be included (and therefore transcoded)
> in search results. So this is a potentially beneficial activity conducted
> in a manner that respects author rights(*), and stretches back through a
> lot of the Web legacy.

I agree
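For reference, the robots.txt signalling mentioned above is just the long-established Robots Exclusion Protocol. A site that does not want its content crawled (and therefore transcoded into search results) serves, at its root, something like:

```
User-agent: *
Disallow: /
```

while omitting the file, or serving an empty `Disallow:`, signals willingness to be indexed.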

> Regarding the "pixel perfect" point raised in the original email, the Web
> wasn't intended to be pixel perfect though the original small number of
> browsers encouraged many authors to think of the Web in that way. Word
> processing and DTP habits compounded the problem. With the diversity of
> the mobile Web, pixel perfection is near impossible. However, taking our
> own technology as an example, you can still author in a more abstract
> (less pixel perfect) manner yet deliver perfection thanks to clever
> origin-server adaptation. It is very annoying then if something in the
> delivery path steps in and interferes with the perfect response. When an
> author goes to a lot of trouble to provide a contextually great response,
> probably via clever adaptation, there had better be a very good reason to
> interfere. (Subsequent adaptation for the accessibility needs of a
> particular user would be a good reason, to give one example.)

I agree (and please note that I was not the one to come up with
pixel-perfectness in the first place)

> The Web is about negotiated experience of distributed content and
> services. Everyone should have a say. The author. The user. The carrier of
> the data. The builder of the devices. The legal framework of the
> participants. The cultural norms to which we subscribe. All of these (and
> probably more) should have the means to influence the Web experience to
> varying degrees, and it should not be acceptable for one party to ignore
> the effort/rights/needs of others.

Not that I disagree, but I doubt this has a lot of impact on this
discussion, since the discussion is about where those boundaries are (or
rather, whether it is OK to move those boundaries so significantly
compared to where they are on the full web).

Received on Sunday, 22 March 2009 23:47:27 UTC

This archive was generated by hypermail 2.4.0 : Friday, 25 March 2022 10:09:53 UTC