W3C home > Mailing lists > Public > public-respimg@w3.org > June 2013

Re: A question about the discussions so far

From: Darrel O'Pry <darrel.opry@imagescale.co>
Date: Thu, 27 Jun 2013 19:39:51 -0400
Message-ID: <CAGfUJnM4FG1rJTDTtzZPWMY04ujJnzOwNOE4_YVz08Hje5AKtQ@mail.gmail.com>
To: Adam van den Hoven <adam@littlefyr.com>
Cc: Marcos Caceres <w3c@marcosc.com>, "public-respimg@w3.org" <public-respimg@w3.org>

You're getting into my wheelhouse; feel free to ping me off-list if you
need implementation help. I've done a lot of server-side image processing
and am currently working on a service that does just that.

The 300 approach sounds interesting, but much of the time the designer
already has control of the assets on the server, so the additional requests
and the resulting latency are a high price to pay. Giving the UA and the
end user a choice is the real benefit of this approach, and it seems
tenable, but getting the browser developer community to adopt it could be a
high barrier to entry. If you strongly believe in this approach, a sequence
diagram outlining the request/response workflow to fetch the image, along
with example headers, would be a great help in working out the details.
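To make the "example headers" part concrete, here is a rough Node-flavoured sketch of what a 300 (Multiple Choices) response could look like, borrowing the Alternates header syntax from RFC 2295 (transparent content negotiation). The helper name, variant list, and quality values are illustrative assumptions, not a settled design:

```javascript
// Hypothetical sketch: build an HTTP 300 (Multiple Choices) response that
// advertises image variants via an Alternates header (RFC 2295-style syntax).
// buildMultipleChoices and its variant shape are made up for illustration.
function buildMultipleChoices(variants) {
  // e.g. variants = [{ uri: 'photo-320.webp', quality: 0.5, type: 'image/webp' }]
  const alternates = variants
    .map(v => `{"${v.uri}" ${v.quality} {type ${v.type}}}`)
    .join(', ');
  return {
    status: 300,
    headers: {
      'Alternates': alternates,
      'Vary': 'negotiate, accept',
      'Content-Type': 'text/plain'
    },
    // Minimal human-readable body listing the choices.
    body: variants.map(v => v.uri).join('\n')
  };
}

const res = buildMultipleChoices([
  { uri: 'photo-320.webp', quality: 0.5, type: 'image/webp' },
  { uri: 'photo-1024.jpg', quality: 1.0, type: 'image/jpeg' }
]);
console.log(res.status, res.headers.Alternates);
```

The open question a sequence diagram would have to answer is what the UA does next: pick a variant itself and issue a second GET, or surface the list to the user.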

As for server-side device detection, it can be done with WURFL or
DeviceAtlas, and the server can decide what to deliver based on the device.
The UA could also send additional headers describing the client's
capabilities. Both seem to be more accessible approaches to server-side
responsive engineering, as they don't require additional requests. However,
they don't give the UA as much control over the interaction, which is the
nice part of your suggestion.
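As a rough illustration of the detection side, here is a toy lookup standing in for a real WURFL or DeviceAtlas query; the UA patterns and width breakpoints below are made-up assumptions, not either library's actual API:

```javascript
// Toy stand-in for a device-detection database query (WURFL/DeviceAtlas
// would normally supply these capabilities). Rules are illustrative only.
const DEVICE_RULES = [
  { pattern: /Mobile|Android|iPhone/i, maxWidth: 480 },
  { pattern: /iPad|Tablet/i,           maxWidth: 1024 }
];

// Pick the largest available image width that doesn't exceed what the
// detected device class can usefully display.
function pickImageVariant(userAgent, availableWidths) {
  const rule = DEVICE_RULES.find(r => r.pattern.test(userAgent));
  const cap = rule ? rule.maxWidth : Infinity;
  const fits = availableWidths.filter(w => w <= cap);
  return fits.length ? Math.max(...fits) : Math.min(...availableWidths);
}

console.log(pickImageVariant('Mozilla/5.0 (iPhone; ...)', [320, 640, 1280])); // 320
console.log(pickImageVariant('Mozilla/5.0 (X11; Linux)', [320, 640, 1280])); // 1280
```

The whole decision happens on the first request, which is why this avoids the extra round trip, at the cost of the server guessing instead of the UA choosing.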

On Thu, Jun 27, 2013 at 7:19 PM, Adam van den Hoven <adam@littlefyr.com> wrote:

> Marcos,
> I'll go take a look at what I can do. It really is WELL outside my area of
> expertise so I may have to recruit some outside help.
> For the last question, I don't think that browsers necessarily need to
> change AT ALL. The HTTP spec is explicit in not specifying resolution
> mechanisms for 300 responses. I *think* that the layer handling HTTP
> simply passes resources to the browser and leaves interpretation up to
> it. I imagine you could engineer the HTTP layer so that it is aware of
> how the request is being made (wifi vs. cable vs. 3G, device metrics,
> etc.), so *it* would be in the best position to decide which file to
> retrieve. The browser would simply ask for resource foo, and the HTTP
> layer would give it some binary that satisfies that request (which just
> happens to be a lossy WebP at quality 10, in this case). The optional
> change would be to provide some mechanism for the HTTP layer to
> communicate the *other* choices to the browser, so some UI can be
> offered to the end user to make choices (for example, something like the
> "save password" alert you see in some browsers, though with more thought,
> since you have N images with M variations). Alternatively, the OS could
> provide a UI in its network settings to set preferences for different
> kinds of connections.
> Adam
> On Thu, Jun 27, 2013 at 3:18 PM, Marcos Caceres <w3c@marcosc.com> wrote:
>> On Thursday, 27 June 2013 at 21:55, Adam van den Hoven wrote:
>> > Marcos,
>> >
>> > Sure, but like I said, I'm not a browser/OS coder and that's really
>> where any such polyfill/prototype would have to live.
>> That's totally ok. Check out http://extensiblewebmanifesto.org for
>> inspiration. Like I said, it doesn't need to be anything fancy. Just keep
>> it simple.
>> > I suppose I could cobble together a script in Ruby or NodeJS that works
>> on the socket level which,
>> That would be awesome (especially if done in NodeJS, as most of us here
>> are JS coders… including me). We have an experiments repo over on GitHub
>> for exactly these kinds of experiments/tests:
>> https://github.com/ResponsiveImagesCG/experiments
>> There are plenty of people here who can review code, comment, etc.
>> Node seems really well suited to this.
>> > if my VERY shaky and fourth-hand knowledge of the network layers is
>> correct, is the layer right below HTTP, would implement HTTP with my
>> additions, but I'm not sure what that would accomplish.
>> >
>> > Unless you had something else in mind?
>> It would be interesting to see how many requests are required to do what
>> you are suggesting; whether it's performant; and how a browser would need
>> to be changed to support your proposal, etc.

Darrel O'Pry
The Spry Group, LLC.
718-355-9767 x101
Received on Friday, 28 June 2013 07:46:36 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 17:06:09 UTC