W3C home > Mailing lists > Public > public-bpwg@w3.org > March 2009

Re: FYI - "Mobile Web 2009 = Desktop Web 1998"

From: Eduardo Casais <casays@yahoo.com>
Date: Mon, 9 Mar 2009 08:27:42 -0700 (PDT)
Message-ID: <183117.65485.qm@web45003.mail.sp1.yahoo.com>
To: public-bpwg@w3.org

Sorry for the long message; I hope to address the points important to me in all
the messages posted thus far.

> no, web pages weren't about pixel-perfect layout or complete control over
> fonts... 

I agree with that. The effort and attention spent on developing pixel-perfect
layouts, and on specifying supporting tools, is exaggerated and has led to
brittle or inefficient designs.

> I suppose it is possible that banks make naive statements about customer  
> security on the internet, and just trust everyone out there without doing  
> any research.

The few people specialized in Internet security I ever talked to, and who had
worked within IT departments of banks or in projects with banks were quite 
explicit -- and some recent security cases (e.g. British Chip & PIN) seem to 
prove them right: banks are not primarily interested in reducing risks to their
customers; rather, they are primarily interested in eliminating or decreasing
their own liability. Unfortunately, assessing the responsibility for fraud or problems
when utilizing Opera software for mobile banking from Barclays security 
disclaimers may be frustratingly difficult (in the case of Chip & PIN, the 
official institutions in charge such as APACS, CESG, VISA and the British 
Financial Ombudsman Service kept passing the hot potato to each other).

> What I fail to understand is
> 1. The difference between a person choosing their own modifications, and a  
> person choosing a service that does the modification for them.
> 2. How this difference is somehow importantly different to the capacity  
> for different browsers to have different rendering engines (it doesn't  
> come a lot more different than silent onscreen presentation and  
> presentation in voice, for example).

There are obvious theoretical and practical differences in (1): users choosing
their local modifications retain a control that they do not have when delegating
to a proxy. They know fairly precisely -- from a user's handbook, a developer's
manual or any other published specification -- what the piece of software they
are installing does. They can switch on or disable the modifiers; they can tweak
the configuration settings; they can test and alter the configuration. None of 
this is possible with proxies. In fact, CT-proxies are black boxes to end users;
only customers of proxy vendors have access to documents specifying how proxies 
transform content -- let alone to configuration parameters.

As for (2), the degrees of freedom conceded to rendering engines and browsers are
specified in standards -- in the HTML specification, sentences abound stating 
that user agents may render specific constructs in different ways (e.g. text, 
graphics, voice), or that they may ignore an unsupported construct. Notice that 
several deployed CT-proxies process content in ways that depart notably from the
degrees of freedom offered by the standards. Notice further that these degrees 
of freedom are conceded to user agents -- not to proxies. Opera Mini is somewhat
different, in that it is a composite user agent.

> Is the core of the issue that content presentation should not be tampered with
> (but for pragmatic reasons it isn't worth worrying about individuals) or is
> there some reason that providing a service to do so is intrinsically wrong 
> even if it were reasonable for users to do this themselves?

The exchange of arguments about the alterations of content presentation or the
behaviour of rendering engines misses the forest for the trees.

The Web has long evolved beyond a distribution infrastructure for documents to 
be exchanged between terminals and servers, and rendered on end-user devices. It
is a platform for service delivery -- a platform meaning a run-time engine 
(browser and local OS), an interface (mainly HTML, HTTP, DOM, Javascript), and 
associated libraries (plug-ins, DOM manipulation routines). The fundamental 
problem that mobile developers have with currently deployed transcoders is that
these break the platform:

a) Using a POST in a Web application, which is transformed into a GET.
b) Specifying one HTTP request, receiving two instead.
c) Expecting to establish an end-to-end TLS connection with a TLS-capable user
agent, getting a TLS connection established with a proxy instead.
d) Defining a page with structural elements like <table> or <object> that the
user agent can interpret, seeing them stripped away by a proxy.
e) Specifying a CSS style sheet, but seeing a different style substituted.
f) Sending valid Javascript to the client, but the client receives syntactically
incorrect Javascript instead.
g) Sending valid XHTML to the client, but the client receives invalid XHTML
instead.
h) Expecting to find user agent information in standard HTTP header fields, but
having to look in several other, proprietary fields instead.
i) Original URLs are extended with mysterious arguments -- or worse, original
arguments are overridden by proxy-defined ones.

All these have been, and in many cases still are, happening, with effects that
go beyond page rendering. For instance, (a), (b), (d), (h) have been known to
ruin the delivery of Java applications to mobile phones. I hope that the CT 
guidelines will put an end to that egregious state of affairs.
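Several of these breakages leave traces that a server can check for defensively. A minimal sketch in Python -- the proprietary header names below are illustrative examples only, not an exhaustive or authoritative list, since each transcoder uses its own fields:

```python
def detect_transcoder(headers):
    """Heuristically flag a content-transformation proxy from HTTP
    request headers.  `headers` maps lower-cased header names to values.
    The proprietary field names checked here are illustrative examples;
    any given transcoder may use different ones."""
    signals = []
    via = headers.get("via", "")
    if via:
        # A Via header proves at least one intermediary handled the request.
        signals.append("request passed through an intermediary (Via: %s)" % via)
    # Item (h): the original user agent hidden in proprietary fields.
    for field in ("x-device-user-agent", "x-original-user-agent",
                  "x-operamini-phone-ua"):
        if field in headers:
            signals.append("original user agent found in %s" % field)
    return signals


# Example: a request that arrived through a (hypothetical) transcoder.
print(detect_transcoder({"via": "1.1 ct-gateway",
                         "x-device-user-agent": "SomePhone/1.0"}))
```

Such a check can only mitigate, not cure: a transcoder that strips or renames these fields remains invisible to the origin server.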

To my knowledge, Opera Mini has not been a problem in this respect.

> Or is the issue really that you object to some specific kind(s) of  
> transformation?

This formulation at least corresponds precisely to my personal concern. 

Modifying content in proxies or gateways before delivering it to mobile devices
is nothing new. About a decade ago, the operations performed were identified as
"content adaptation"; nowadays, the new breed of proxies implements "content
transformation" -- words have a meaning, and the semantic shift is unmistakable.

The following framework helps in appraising proxy operations:

1) Does one perform an analysis of the discrepancy between the attributes of
an application and the capabilities of the target environment (device, browser,
network)?

2) Are application properties altered in order to reduce or eliminate that 
discrepancy?

3) On balance, do the benefits accrue to the application itself?
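The three criteria reduce to a simple decision rule -- the sketch below is my own hypothetical formalization, not part of any specification:

```python
def classify_proxy_operation(analyzes_discrepancy,
                             reduces_discrepancy,
                             benefits_application):
    """Apply the three-criterion framework: an operation counts as
    content *adaptation* only when it (1) analyzes the gap between the
    application and the target environment, (2) alters the application
    to reduce that gap, and (3) lets the benefits accrue to the
    application itself.  Anything else is content *transformation*."""
    if analyzes_discrepancy and reduces_discrepancy and benefits_application:
        return "content adaptation"
    return "content transformation"


# Converting PNG to GIF for a GIF-only browser satisfies all three criteria:
print(classify_proxy_operation(True, True, True))    # content adaptation
# Inserting extraneous ads analyzes no gap and benefits only the proxy operator:
print(classify_proxy_operation(False, False, False))  # content transformation
```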

Obviously, cleaning up XHTML to make the markup well-formed, disinfecting an
executable plugin, lossily compressing a document, or converting one image format
to another (e.g. PNG to GIF) do satisfy all three criteria -- they are a kind of
content adaptation to which I have no objection.
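The well-formedness case is easy to check mechanically; a minimal sketch using Python's standard XML parser (a real adaptation pipeline would additionally validate against the XHTML DTD or schema, which this sketch does not attempt):

```python
import xml.etree.ElementTree as ET

def is_well_formed(xhtml):
    """Return True if the document parses as well-formed XML -- the
    precondition for serving it as XHTML to strict mobile browsers.
    This checks well-formedness only, not validity against a DTD."""
    try:
        ET.fromstring(xhtml)
        return True
    except ET.ParseError:
        return False


print(is_well_formed("<p>well-formed</p>"))   # True
print(is_well_formed("<p>unclosed"))          # False
```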

On the other hand, inserting a navigation bar into XHTML pages to access some
specific site or manage bookmarks does not result from a discrepancy between an
application and the end-user environment, but from a perceived shortcoming of
the user agent compared to some ideal browsing environment; the functionality
thus introduced belongs to a browser, not to an application, and the benefits
accrue to the browsing environment, not to the application.

Similarly, inserting extraneous advertisements into a page does not bridge any
gap between the application properties and the target environment capabilities,
and the application clearly does not benefit from it -- on the contrary: it
becomes bulkier, usability suffers from the disrupted layout, and the content
providers eventually lose revenue from their own ads.

When a CT-proxy dumbs down content that renders perfectly on the end-user
terminal (which is not infrequent), it introduces a discrepancy instead of
eliminating one. When it adapts a site offering Blackberry or Windows Mobile
applications, with a layout optimized for Blackberries or Mobile IE, it does
indeed marginally reduce the discrepancy with a NetFront or Series60
environment -- but the application ultimately does not benefit, since
downloaded code cannot run on devices it is not intended for.

Finally, the systematic rewriting of HTTPS URLs also violates the
aforementioned principles: URLs and requests are altered without evidence of a
discrepancy -- the end-user device can establish an end-to-end TLS/SSL link
directly, and nothing is known about the attributes of the application pointed
at by the URL (and hence about a possible discrepancy) before that application
is actually accessed.
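To illustrate the HTTPS rewriting pattern -- the proxy host and query parameter below are entirely hypothetical, as actual transcoders use their own URL formats -- the secure origin URL becomes a mere query argument of a plain proxy URL, so the handset negotiates TLS, if at all, with the proxy rather than with the origin server:

```python
from urllib.parse import quote, urlsplit

def rewrite_https_url(url, proxy_base="http://ct-proxy.example/fetch"):
    """Sketch of how a CT-proxy rewrites an HTTPS link (proxy host and
    parameter name are hypothetical): the original secure URL becomes a
    query argument of a plain-HTTP proxy URL."""
    return "%s?u=%s" % (proxy_base, quote(url, safe=""))


rewritten = rewrite_https_url("https://bank.example/login")
# The host the browser actually connects to is now the proxy, so any TLS
# session terminates there instead of at the bank's server:
print(urlsplit(rewritten).netloc)  # ct-proxy.example
```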

> The following operators have all carefully reviewed the Manifesto.

International companies have the unnerving tendency to present themselves
as large, united, leading world corporations and then defer to local branches 
whenever this suits them. In the case of Vodafone, deployments of CT-proxies vary
from one country to another; I would assume that whatever country representatives
state is only valid for that subsidiary, and interpret whatever group 
representatives state in the light of the tradeoffs regarding possible corporate
fights with country-level units. "Carefully reviewed" might mean "Subsidiaries
X, Y and Z have deployed transcoders and claim they will increase their revenue
with them -- perhaps they will, let us not raise a fight with them about the 
Manifesto yet".


Let us see: debating separation of structure and representation as TBL meant it
to be vs. enforcement of pixel-perfect layouts; focusing on page rendering 
instead of an overall service platform; confronting "one Web" convergence with 
specialized mobile Web approaches; proxies that compile WML content into WBXML,
er, HTML into Opera binary code; disputing the WTLS WAP gap, er, point-to-point
HTTPS links; transformation proxies touting their capability to make the whole
Web accessible to technically challenged mobile devices.
 
Yes, this feels like 1998 all right.


E.Casais


      
Received on Monday, 9 March 2009 15:28:30 UTC
