RE: Scope of CT Guidelines

Hi Bryan

Interesting thoughts. Some comments interspersed among your text.

Jo

> -----Original Message-----
> From: public-bpwg-ct-request@w3.org [mailto:public-bpwg-ct-request@w3.org]
> On Behalf Of Sullivan, Bryan
> Sent: 24 October 2007 09:14
> To: public-bpwg-ct@w3.org
> Subject: RE: Scope of CT Guidelines
> 
> 
> Here is my initial input. Overall I agree with Jo's proposal that the
> initial CT guidelines should focus on a limited scope. To that purpose I
> suggest we limit the use cases to the three I mention below. There is
> still plenty of variation around that e.g. the available
> representations, who selects them, etc.
> 
> First, some assumptions:
> 
> 1) The focus of the CT guidelines is to enable HTTP-based control of CT
> for wired web content usability (which includes compatibility and
> effectiveness) by mobile UA.

Yes, it's good to be clear about this. I agree that it is in part about
control, but perhaps more importantly, in the first instance at least,
it's about what is acceptable behaviour in the absence of HTTP based
control, and indeed perhaps what should be done outside of HTTP to allow
the user to exert some control. 

E.g. if my web browser and the site I am accessing know nothing about
the niceties of the CTTF-defined HTTP stuff, and this is the starting
point use case, then we might choose to say that the user SHOULD be
presented with a choice. 

> 
> 2) Given user consent for service via a CT proxy for usability purposes,
> it's expected that a CT proxy will have no reason to violate a CP or
> user agent directive related to the CT service, even if the directive is
> incorrect and results in a poor user experience.
> 
I think that's yet to be decided. Firstly what do we mean by user
consent? Is it sufficient for their consent to be buried in the terms of
service of their network provider? What can they do on a case by case
basis about altering that consent?

Also I think we have yet to decide what to do in cases like:

Server sends dangerous markup (in the sense that a proxy thinks this
will trigger a bug in the UA that results in massive misoperation). User
says that in such cases they want the markup fixed. CP says that their
content is not to be altered under any circumstances (that's what I take
"no-transform" to mean, because I, the server, think you, the proxy, don't
know as much as I do - and this I think implies we need a less stringent
value to mean that content should not in general be altered but may be
altered at the request of the user).
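
To make that concrete, the only tool RFC 2616 gives a CP today is the
blanket directive, e.g. a response carrying:

    HTTP/1.1 200 OK
    Content-Type: application/xhtml+xml
    Cache-Control: no-transform

A less stringent value - purely hypothetical, nothing of the sort is
defined anywhere today - might look like:

    Cache-Control: no-transform-unless-user-requested

i.e. "leave my content alone unless the user has explicitly asked for it
to be fixed up".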



> 3) There are other roles for CT proxies, but they are not covered by
> this guideline. They include CT functions related to generic policy
> control (e.g. content filtering), content (e.g. ad) insertion,
> optimization, etc. For these roles there are cases in which a CT proxy
> may violate Content Provider (CP) or user agent directives, thus
> negotiation or at least error handling may apply in those cases.

Well, I'm not sure it is as clear-cut as that. I think we need to
resolve where we want to go on some of the things you mention. Though it
seems clear to me that we are not in the business of filtering, we
probably do need to say things about ad removal and insertion. And we
almost certainly, in my view, need to say things about optimization.

> 
> 4) For secure services (HTTPS), the CT proxy service would normally be
> bypassed for UA invoking TLS tunneling to the CP via the HTTP Connect
> method. Alternatively, the CT proxy can rewrite CP URLs as local
> resources (recreating the WAP gap!), but this may not be acceptable for
> some CP. Thus unless the CT proxy rewrites CP URLs, it can have no role
> in CT for secure services.

I think that is a possible conclusion, however I think the group would
need to discuss and resolve this.
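
For reference, the bypass Bryan describes is the standard CONNECT tunnel
(hostname below is purely illustrative); once the proxy answers 200 it is
just relaying TLS bytes end to end and can play no part in transformation:

    CONNECT secure.example.com:443 HTTP/1.1
    Host: secure.example.com:443

    HTTP/1.1 200 Connection established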

> 
> 5) The CT guidelines will focus on CT control and thus assumes at least
> one of the CP or UA is CT-aware. The case in which both CP and UA are
> CT-unaware will be typical for years to come, but this is
> business-as-usual for CT proxies.
> 
The case in which the UA and CP are both unaware is the primary use case
and I think that is the one we are most worried about.

> 6) CT proxies will typically be inserted into the request path by
> UA/application configuration or network routing. Non-web UA/applications
> will typically be configured to bypass CT proxies. If required,
> detection of non-web UA/applications will likely use the same approach
> as for CT-unaware UA detection, e.g. UA header filtering.

I'm not sure I understand this point. Non-Web applications typically
simulate Web applications in order to borrow the Web application's
transport/connectivity. 
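
If I read the point correctly, "UA header filtering" means something like
the proxy inspecting the User-Agent header and leaving alone anything that
doesn't look like a web browser - purely illustrative requests:

    GET /feed.xml HTTP/1.1
    Host: www.example.com
    User-Agent: MyMailClient/2.0

    GET /index.html HTTP/1.1
    Host: www.example.com
    User-Agent: SomePhone/1.0 Browser/3.2 Profile/MIDP-2.0

with the first passed through untouched and only the second considered for
transformation. But if non-Web applications present a browser's headers,
filtering of this kind won't tell them apart.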

> 
> Re in "Magnus's original contribution"
> http://lists.w3.org/Archives/Public/public-bpwg-ct/2007Sep/0014.html
> "The content transformation proxy needs to be able to tell the client
> browser...where to find the original content if it has been
> transformed.": It seems this would be of no use to the UA unless it can
> selectively bypass the CT proxy. As well, the original request URI
> should be the same as the location of the original content, unless the
> CT proxy is rewriting the URLs.

Interesting point. Is it possible or desirable to distinguish the
routing or transformation options on the basis of URIs? 
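
For example (URIs purely hypothetical), if the proxy rewrites

    http://www.example.com/page.html

as a local resource such as

    http://ctproxy.example.net/t?u=http%3A%2F%2Fwww.example.com%2Fpage.html

then telling the browser where the original lives is only useful if the
browser can fetch that original URI without the request being routed back
through the same transformation. Otherwise, as Bryan says, the original
request URI already is the location of the original content.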

> 
> Re Jo's email:
> "in theory at least there are 64 different combinations of
aware/capable
> component types in the delivery chain":
> I think we should assume the CT proxy is CT-aware. Otherwise all bets
> are off; it may ignore the new headers etc that are specified. If you
> accept that and my assumption (5), there are only three combinations,
> i.e.
> - CT-unaware CP, CT-aware UA
> - CT-aware CP, CT-unaware UA
> - CT-aware CP, CT-aware UA

I think we need to examine what happens in cases where the proxy is CT
unaware. That probably means making sure we scrutinise RFC 2616 for the
bits that say that proxies MUST pass unchanged ... etc. and hope that
there aren't other bits of HTTP that we didn't spot that contradict
that. Having done that we can be confident that a conforming proxy will
behave consistently with what we are trying to achieve, and that the UA
and CP will both be aware of the proxy's presence, aware that it is CT
unaware, but be unsure as to whether it is transformation capable. By
borrowing existing bits of HTTP like "no-transform" the UA and CP should
still be able to exert crude control over an "unaware" proxy, assuming
that it is RFC 2616 conformant.
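
As a sketch of that crude control (hostname illustrative): the CP marks its
response

    HTTP/1.1 200 OK
    Content-Type: application/xhtml+xml
    Cache-Control: no-transform
    Via: 1.1 ctproxy.example.net

and an RFC 2616 conformant proxy must then leave the message body and the
relevant entity headers untouched, while the Via header it adds at least
reveals its presence to the UA. And where a proxy does transform a response
that is not so marked, RFC 2616 requires it to add a 214 ("Transformation
applied") warning, so the UA can tell after the fact.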

> 
> Case 3 "Client goes ahead and simulates desktop...this could land the
> user in a delay and cost nightmare...":
> UA that attempt to simulate desktops should support advanced HTTP
> features such as persistent connections and multiple outstanding
> requests. Other than that, I think it should be outside (these) CT
> guidelines scope to address optimization. That begins to get into the
> policy/value-added-service area, in which as I mentioned before, CP or
> UA directives may be violated by the CT proxy.

And as I mentioned above, I'm not clear exactly where the dividing line for
"policy" related issues sits. Also I think that services like Opera Mini and
Onspeed need to be considered in this context.
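
On the HTTP mechanics Bryan mentions: with HTTP/1.1 persistent connections
a desktop-simulating UA can reuse one TCP connection for many requests and,
if it pipelines, issue several without waiting for each response, e.g. back
to back on the same connection (host purely illustrative):

    GET /style.css HTTP/1.1
    Host: www.example.com

    GET /logo.png HTTP/1.1
    Host: www.example.com

Whether mobile UAs that simulate desktop actually do this is part of the
delay and cost question raised under Case 3.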

> 
> Best regards,
> Bryan Sullivan | AT&T | Service Standards
> bryan.sullivan@att.com
> -----Original Message-----
> From: public-bpwg-ct-request@w3.org
> [mailto:public-bpwg-ct-request@w3.org] On Behalf Of Jo Rabin
> Sent: Tuesday, October 23, 2007 10:12 AM
> To: public-bpwg-ct@w3.org
> Subject: Scope of CT Guidelines
> 
> 
> I thought it worth making a note and stimulating further discussion
> following today's productive CT TF call.
> 
> I'm worried that if we step into the space of how to do 3 way content
> negotiation we will be opening the lid on Pandora's box.
> 
> The starting point, I think, is that there are three types of component,
> server, proxy and browser. Each of those can be capable of
> transformation. Add to this that each of them will independently also be
> aware or unaware of whatever our guidelines state about how to
> cooperate. So in theory at least there are 64 different combinations of
> aware/capable component types in the delivery chain. That's a bit too
> complicated for my taste.
> 
> So I wonder if it's worth considering the question of repurposing the
> presentation separately from markup and formatting fixups?
> 
> So far as presentation is concerned I think there's a relatively small
> set of cases, though each of them undoubtedly has its own complexity.
> 
> Case 1.
> a) the Server has a desktop oriented presentation only
> b) the browser has mobile presentation only
> 
> A proxy may have a useful role in re-presenting the content.
> 
> Note though, that in this case, the server's desktop presentation may
> actually be a universal presentation (call my web pages boring but they
> are designed to render across all delivery contexts) so in that case the
> proxy should not interfere either.
> 
> Case 2.
> a) The Server has both a mobile and a desktop experience
> b) Client has mobile experience only
> 
> Server presents mobile experience. Proxy stays out of the way.
> 
> Case 3.
> a) The Server has a desktop oriented presentation only
> b) Client can simulate desktop
> 
> Client goes ahead and simulates desktop
> 
> (There is an argument that says that this could land the user in a delay
> and cost nightmare. So is that an argument against the concept of
> simulating desktop on a mobile, or is it an argument for saying that
> there is a role for transforming proxy to reduce that delay and cost
> nightmare?)
> 
> Also same case as in 1, where the server has a single presentation, but
> that presentation is suitable across the board.
> 
> Case 4.
> a) The server has a choice of presentations
> b) The client has a choice of presentations
> 
> The user should be able to get either i) the mobile presentation ii) the
> simulated desktop presentation. That choice may be triggered by
> selection locally to the client or at the server.
> 
> Case 5.
> 
> Although this looks like a Web request, in fact it isn't. It's some ad
> hoc protocol xmlhttprequest-like thingy.
> 
> (Proxy leaves well alone)
> 
> I'm assuming that this is a more or less complete list of scenarios
> where the presentation is adjusted at most once.
> 
> Now I think that we can consider on top of this whether the proxy has a
> role adjusting not the presentation, but details of the formatting -
> e.g. to tweak the content type header, or to tweak the DOCTYPE to avoid
> problems. Possibly to re-render images from one format to another.
> 
> To my mind, that is almost certainly enough to deal with in volume 1
> of the guidelines. I think we should leave the door open to later
> elaborations that discuss 3 way content negotiation, servers
> deliberately delegating formatting or presentation tasks to proxies and
> so on. However all that falls firmly in a volume 2.
> 
> I'm hoping that by restricting the initial scope we stand a chance of
> meeting the proposed timescales for the deliverable, and of addressing
> in a timely way the key point of the Task Force's existence - which is
> to provide a way for Transforming Proxies to get out of the way of
> mobile ready content.
> 
> Hope this makes sense and looking forward to comments.
> 
> Jo
