
Re: FYI... Binary Optimized Header Encoding for SPDY

From: James M Snell <jasnell@gmail.com>
Date: Fri, 3 Aug 2012 10:11:53 -0700
Message-ID: <CABP7Rbf2P7qC3Uu_-sC3OqCuUGw5QhF+DNjbA0LhUhj-CDABwA@mail.gmail.com>
To: Zhong Yu <zhong.j.yu@gmail.com>
Cc: "Adrien W. de Croy" <adrien@qbik.com>, Martin J. Dürst <duerst@it.aoyama.ac.jp>, Poul-Henning Kamp <phk@phk.freebsd.dk>, Mike Belshe <mike@belshe.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
+1... at the HTTP level, we should not be trying to split up the request
URI like this. Treat it like an opaque string...

Besides, there are MANY applications that mix application-specific
parameters into the path as well as the query string, so splitting those up
to protect privacy doesn't really make any sense at all.
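For illustration, here is a hypothetical sketch (the URLs and the "session" parameter are invented) of why a generic path/query split can't isolate application parameters: the same parameter can live in either component, and only the application knows the layout.

```python
# Hypothetical example: the same application parameter ("session")
# carried in the query string of one URI and in the path of another.
from urllib.parse import urlsplit, parse_qs

in_query = "https://example.com/report?session=abc123&page=2"
in_path = "https://example.com/report/session/abc123/page/2"

q = urlsplit(in_query)
p = urlsplit(in_path)

# Splitting on path vs. query finds the first URI's parameters...
print(parse_qs(q.query))  # {'session': ['abc123'], 'page': ['2']}

# ...but the second URI keeps them in the path, where a generic
# splitter sees only an opaque string of segments.
print(p.path)   # /report/session/abc123/page/2
print(p.query)  # '' (empty)
```

So any privacy boundary drawn between "path" and "query" at the protocol level is easily defeated by how applications actually structure their URIs.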

- James

On Fri, Aug 3, 2012 at 10:04 AM, Zhong Yu <zhong.j.yu@gmail.com> wrote:

> Wait... why should HTTP care about the internal structure of a URI? As
> far as HTTP is concerned, a URI is an opaque string (though
> canonicalization needs to be defined).
>
> The internal structure of a URI is agreed upon by the client app and
> the server app. For example, the common ?n1=v1&n2=v2 form is actually
> defined by the HTML standard; the way the name/value pairs are escaped
> is also defined by HTML. HTTP has no business in it.
>
> http://www.w3.org/TR/REC-html40/interact/forms.html#h-17.13.4.1
>
> Zhong YU
>
>
> On Fri, Aug 3, 2012 at 12:15 AM, Adrien W. de Croy <adrien@qbik.com>
> wrote:
> >
> > ------ Original Message ------
> > From: "Martin J. Dürst" <duerst@it.aoyama.ac.jp>
> > To: "James M Snell" <jasnell@gmail.com>
> > Cc: "Poul-Henning Kamp" <phk@phk.freebsd.dk>;"Mike Belshe"
> > <mike@belshe.com>;"ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
> > Sent: 3/08/2012 4:35:00 p.m.
> > Subject: Re: FYI... Binary Optimized Header Encoding for SPDY
> >>
> >> On 2012/08/03 2:48, James M Snell wrote:
> >>>
> >>> On Thu, Aug 2, 2012 at 1:27 AM, Poul-Henning
> >>> Kamp<phk@phk.freebsd.dk>wrote:
> >>
> >>
> >>>> For instance, could we get rid of the %-encoding of URIs by allowing
> >>>> UTF-8?
> >>>
> >>>
> >>> It would be possible, for instance, to begin using IRIs directly
> >>> without translating them to URIs first.
> >>
> >>
> >> Great idea. Please note that that will also save a few bytes (but that's
> >> definitely not the main reason for doing it).
> >>>
> >>> Doing so, however, does not eliminate the need for %-encoding,
> >>
> >>
> >> Yes, a '#' or '?' in a path segment and similar stuff still have to be
> >> %-encoded.
> >
> >
> > if we're defining a new binary-safe transport for header values,
> > shouldn't we try to avoid all multiplexing/escaping and parsing of
> > strings?
> >
> > e.g. just put the query string in another "header" instead. Then
> > anything can contain '?'.
> >
> > same with fragments (#) although I thought these weren't allowed on the
> > wire...
> >
> > In fact, the concept of a single URI string could be deprecated for
> > 2.0, with the pieces just sent as individual fields in a request.
> >
> > gatewaying back to 1.1 would require assembling a URI from the pieces,
> > but that should be easy.
> > Seems a bit nuts to go binary and leave some parts as overloaded string
> > fields requiring string parsing and escaping.
> >
> > Adrien
> >
> >
> >>
> >>
> >>> and there is a range of possible issues that could make this
> >>> problematic.
> >>
> >>
> >> Could you list up the issues you're thinking about? (I don't want to say
> >> there are none, but I can't at the moment come up with something that
> >> wouldn't already be around currently.)
> >> Regards, Martin.
> >
> >
> >
>
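As a rough sketch of Adrien's gatewaying point above, reassembling a request-target for a 1.1 hop from separately transported pieces might look like this (the function and field layout are invented for illustration; note Martin's point that '?' and '#' inside a path segment still need %-encoding once everything is flattened back into one string):

```python
# Hypothetical sketch: rebuild an HTTP/1.1 request-target from URI
# pieces that a binary framing might carry as separate fields.
# The field layout here is invented, not any proposed wire format.
from urllib.parse import quote

def assemble_request_target(path_segments, query_params):
    # Inside a flat string, '?' and '#' in a path segment are
    # ambiguous, so they must be %-encoded when reassembling.
    path = "/" + "/".join(quote(seg, safe="") for seg in path_segments)
    if not query_params:
        return path
    query = "&".join(
        f"{quote(n, safe='')}={quote(v, safe='')}" for n, v in query_params
    )
    return path + "?" + query

# A segment containing a literal '?' survives the round trip:
print(assemble_request_target(["a?b", "c"], [("n1", "v1"), ("n2", "v2")]))
# /a%3Fb/c?n1=v1&n2=v2
```

Within the binary framing itself the segments stay unescaped and opaque; the escaping cost only reappears at the 1.1 boundary.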
Received on Friday, 3 August 2012 17:12:42 GMT
