
Re: A question about the discussions so far

From: Ilya Grigorik <igrigorik@google.com>
Date: Fri, 28 Jun 2013 10:21:43 -0700
Message-ID: <CADXXVKrb+JYAP5fVt8rDsFtkT=1mWBWe63JhiA4XfQ+r7XwfAw@mail.gmail.com>
To: "Darrel O'Pry" <darrel.opry@imagescale.co>
Cc: Adam van den Hoven <adam@littlefyr.com>, Marcos Caceres <w3c@marcosc.com>, "public-respimg@w3.org" <public-respimg@w3.org>
Hey Adam.

Apologies, haven't had a chance to read through the entire thread, but from
what I've scanned, you may want to take a look at the following (previous)
work:

Protocol Independent Content Negotiation Framework - RFC 2703
Indicating Media Features for MIME content - RFC 2912
Identifying Composite Media Features - RFC 2938

Between those 3 proposals, I think they more or less tackle the
implementation you're after.

The "gotcha" is: none of them are supported or implemented. One of the main
hurdles is the cost of the extra roundtrip to fetch the meta-data container
/ description of the alternative representations. Roundtrips are expensive,
especially on mobile. Hence Client-Hints makes the data available to the
server on the initial request.
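(To make that concrete, here is a rough sketch of the server side of that idea. The `DPR` header name follows the Client-Hints draft, and the variant list and selection policy are invented for illustration; none of this is settled spec.)

```javascript
// Sketch: pick an image variant from a device-pixel-ratio hint sent on the
// *first* request, so no extra roundtrip is needed to discover variants.
// "DPR" as a header name is an assumption based on the Client-Hints draft.
const variants = [
  { dpr: 1, file: 'photo-1x.webp' },
  { dpr: 2, file: 'photo-2x.webp' },
  { dpr: 3, file: 'photo-3x.webp' },
];

function pickVariant(headers) {
  // Node lowercases incoming header names; default to 1x when no hint is sent.
  const dpr = parseFloat(headers['dpr']) || 1;
  // Smallest variant whose density covers the client's, else the densest one.
  const fit = variants.filter(v => v.dpr >= dpr);
  return (fit.length ? fit[0] : variants[variants.length - 1]).file;
}

console.log(pickVariant({ dpr: '2' }));   // photo-2x.webp
console.log(pickVariant({}));             // photo-1x.webp
console.log(pickVariant({ dpr: '3.5' })); // photo-3x.webp
```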

ig


On Thu, Jun 27, 2013 at 4:39 PM, Darrel O'Pry <darrel.opry@imagescale.co> wrote:

> Adam,
>
> You're kind of getting into my wheelhouse; you can ping me off list if
> you need implementation help. I've done a lot of server-side image
> processing and am currently working on a service that does just that.
>
> The 300 approach sounds interesting, but a lot of the time the designer has
> control of the assets on the server, so the additional requests and the
> resulting latency are a large price to pay. Giving the UA and end user a
> choice is the real benefit of this approach and seems tenable, but getting
> the browser developer community to adopt it could be a high barrier to
> entry. If you strongly believe in this approach, a sequence diagram
> outlining the request/response workflow to get the image, along with
> example headers, would be a great help in working it out.
>
> As for simply doing server-side device detection, it can be done with
> WURFL or DeviceAtlas, and the server can make decisions on what to deliver
> based on the device. The UA could also send additional headers to describe
> the client capabilities. Both seem to be more accessible approaches to
> server-side responsive engineering, as they don't require additional
> requests. They do not, however, give the UA as much control over the
> interaction, which is the nice part of your suggestion.
>
>
> On Thu, Jun 27, 2013 at 7:19 PM, Adam van den Hoven <adam@littlefyr.com> wrote:
>
>> Marcos,
>>
>> I'll go take a look at what I can do. It really is WELL outside my area
>> of expertise so I may have to recruit some outside help.
>>
>> For the last question, I don't think that browsers necessarily need to
>> change AT ALL. The HTTP spec is explicit in not specifying resolution
>> mechanisms for 300 responses. I *think* that the layer handling HTTP
>> simply passes resources to the browser and leaves interpretation up to
>> it. I imagine that you could engineer the HTTP layer such that it is
>> aware of how the request is being made (wifi vs. cable vs. 3G, device
>> metrics, etc.), so *it* would be in the best position to decide which
>> file to retrieve. So the browser would simply ask for resource foo, and the
>> HTTP layer would give it some binary that satisfies that request (which
>> just happens to be a lossy WebP at quality 10, in this case). The only
>> optional change would be to provide some mechanism for the HTTP
>> layer to communicate the *other* choices to the browser, so some UI can
>> be provided for the end user to make choices (for example, something like
>> the "save password" alert you see in some browsers, but with more thought,
>> I guess, since you have N images with M variations). Or the OS could
>> provide a UI in its network settings to set preferences for different
>> kinds of connections.
>>
>> Adam
>>
>>
>> On Thu, Jun 27, 2013 at 3:18 PM, Marcos Caceres <w3c@marcosc.com> wrote:
>>
>>>
>>>
>>>
>>> On Thursday, 27 June 2013 at 21:55, Adam van den Hoven wrote:
>>>
>>> > Marcos,
>>> >
>>> > Sure, but like I said, I'm not a browser/OS coder and that's really
>>> where any such polyfill/prototype would have to live.
>>> That's totally ok. Check out http://extensiblewebmanifesto.org for
>>> inspiration. Like I said, it doesn't need to be anything fancy. Just keep
>>> it simple.
>>> > I suppose I could cobble together a script in Ruby or NodeJS that
>>> works on the socket level which,
>>>
>>> That would be awesome (especially if done in NodeJS, as most of us here
>>> are JS coders… including me). We have an experiments repo over on GitHub
>>> for exactly these kinds of experiments/tests:
>>>
>>> https://github.com/ResponsiveImagesCG/experiments
>>>
>>> There are plenty of people here who can review code, comment, etc.
>>>
>>> Node seems really well suited to this.
>>> > if my VERY shaky and fourth-hand knowledge of the network layers is
>>> correct, is the layer right below HTTP, and would implement HTTP with my
>>> additions, but I'm not sure what that would accomplish.
>>> >
>>> > Unless you had something else in mind?
>>> It would be interesting to see how many requests are required to do what
>>> you are suggesting; whether it's performant; and how a browser would need
>>> to be changed to support your proposal, etc.
>>>
>>>
>>
>
>
> --
> Darrel O'Pry
> The Spry Group, LLC.
> http://www.spry-group.com
> 718-355-9767 x101
>
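As an editorial aside, the 300-based flow the thread discusses could be sketched roughly as follows. The Alternates header syntax here is loosely borrowed from RFC 2295's transparent content negotiation (simplified, not the full grammar), and the scoring policy is invented purely to illustrate the HTTP layer resolving the choice itself — it also makes Ilya's point visible: the variant list costs an extra response before the image itself can be fetched.

```javascript
// Step 1: the first request for /photo returns 300 Multiple Choices with a
// variant list, e.g.:
//   Alternates: {"photo-1x.webp" 0.5 {dpr 1}}, {"photo-2x.webp" 0.9 {dpr 2}}
// (Syntax simplified from RFC 2295; this parser is an assumption, not spec.)
function parseAlternates(header) {
  const re = /\{"([^"]+)" ([\d.]+) \{dpr (\d+)\}\}/g;
  const out = [];
  let m;
  while ((m = re.exec(header)) !== null) {
    out.push({ file: m[1], quality: parseFloat(m[2]), dpr: parseInt(m[3], 10) });
  }
  return out;
}

// Step 2: the HTTP layer, which knows the device density and connection,
// resolves the choice without involving the browser. Discounting dense
// variants on a metered connection is an invented, illustrative policy.
function resolve(alternates, { dpr, metered }) {
  const score = v =>
    v.quality - Math.abs(v.dpr - dpr) - (metered ? 0.2 * v.dpr : 0);
  return alternates.slice().sort((a, b) => score(b) - score(a))[0].file;
}

const alts = parseAlternates(
  '{"photo-1x.webp" 0.5 {dpr 1}}, {"photo-2x.webp" 0.9 {dpr 2}}'
);
console.log(resolve(alts, { dpr: 2, metered: false })); // photo-2x.webp
console.log(resolve(alts, { dpr: 1, metered: true }));  // photo-1x.webp
```

Step 3, fetching the chosen file, is then a second roundtrip — the cost Client-Hints avoids by sending the hints up front.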
Received on Friday, 28 June 2013 17:22:52 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 17:06:09 UTC