
Re: High-resolution timestamp in HTTP response header

From: Mark Nottingham <mnot@mnot.net>
Date: Thu, 15 Nov 2012 12:34:36 +1100
Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
Message-Id: <3196ADF7-9983-4590-B8B3-9F590236C16E@mnot.net>
To: Akon Dey <akon.dey@gmail.com>

On 15/11/2012, at 12:22 PM, Akon Dey <akon.dey@gmail.com> wrote:

> Hi,
> The current HTTP headers related to time have a time granularity of 1 second. This is fine when considering the update time of files on a UNIX filesystem on the HTTP-server but is insufficient when considering today's use cases where the resource is not limited to just files but data store objects or database columns.
> I would like to propose that the granularity of the data fields support higher time granularity given we are in the process of specifying the next version of the HTTP protocol.
> This will enable the use of HTTP for higher-throughput "put if-unmodified-since X" type operations, among other useful things.
> I would be glad to elaborate the use cases in more detail.

Hi Akon,

HTTP's granularity of time is very intentionally one second: the clock skew prevalent on the Internet, together with the delay introduced by latency, means that a finer granularity is effectively an attractive nuisance; Web developers will believe they are offered a degree of control that is in fact not available.

Additionally, changing the syntax to allow it in a backwards-compatible way isn't practical; this would have to be done with new headers.

When HTTP/1.1 was created, it was recognised that one-second granularity wasn't adequate for some use cases, especially validation. That's why ETags (along with If-None-Match) were created, to allow arbitrary and server-controlled granularity for validation.
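To illustrate the point about server-controlled granularity, here is a minimal sketch (not from this thread) of how a server might validate an If-None-Match precondition against an ETag; the function names and the content-hash ETag scheme are illustrative assumptions, since the spec leaves the validator's construction entirely to the server:

```python
# Hypothetical server-side validation sketch: an ETag lets the server
# choose its own change-detection granularity, so two updates within
# the same second still yield distinct validators.
import hashlib


def make_etag(body: bytes) -> str:
    # Strong ETag derived from content; any server-chosen scheme works.
    return '"%s"' % hashlib.sha256(body).hexdigest()[:16]


def conditional_get(if_none_match, body: bytes):
    """Return (status, payload) for a GET carrying If-None-Match."""
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, b""   # Not Modified: cached copy is still valid
    return 200, body      # Content changed: send full response


# Same body: validator matches, server answers 304.
status, _ = conditional_get(make_etag(b"v1"), b"v1")
# Body changed: 200, even if both writes happened within one second.
status2, _ = conditional_get(make_etag(b"v1"), b"v2")
```

The same pattern with If-Match covers the "put if-unmodified" case: the server rejects the write with 412 (Precondition Failed) when the presented ETag no longer matches.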


Mark Nottingham   http://www.mnot.net/
Received on Thursday, 15 November 2012 01:34:57 UTC
