Timestamps pretty clearly shouldn't have timezones in them... that just
makes them harder to deal with, and frankly it creates a
privacy/fingerprinting problem.

I'd actually like the timestamp format to support sub-millisecond
timescales... (maybe another 8 bits) to help HTTP scale in both directions.
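
To make that concrete, a fixed-point fraction field would do it. This is
just a hypothetical layout (the names and widths are mine, nothing that's
been proposed), where each fraction byte buys 8 more bits of resolution;
one byte only gets you ~3.9 ms steps, so sub-millisecond really means a
second fraction byte:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical wire layout, not a proposal: whole seconds since the
     * Unix epoch plus a 16-bit binary fraction of a second.  One byte of
     * fraction gives ~3.9 ms steps; two bytes give ~15 us steps, i.e.
     * sub-millisecond. */
    typedef struct {
        uint32_t seconds; /* seconds since 1970-01-01T00:00:00Z */
        uint16_t frac;    /* fraction of a second, units of 2^-16 s */
    } bin_time;

    /* Map microseconds-within-the-second onto the 16-bit fraction. */
    static uint16_t frac_from_usec(uint32_t usec)
    {
        return (uint16_t)(((uint64_t)usec << 16) / 1000000u);
    }

    int main(void)
    {
        bin_time t = { 1358381000u, frac_from_usec(250000u) }; /* .25 s */
        printf("%u + %u/65536 s (step ~%.1f us)\n",
               (unsigned)t.seconds, (unsigned)t.frac, 1e6 / 65536.0);
        return 0;
    }
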
On Wed, Jan 16, 2013 at 6:23 PM, Nico Williams <nico@cryptonector.com> wrote:
> On Wed, Jan 16, 2013 at 5:10 PM, Mark Nottingham <mnot@mnot.net> wrote:
> > On 17/01/2013, at 9:45 AM, James M Snell <jasnell@gmail.com> wrote:
> > Dates in HTTP are explicitly in UTC (we just call it "GMT"), so the
> > timezone data isn't helping (and may be hurting).
>
> TZ should be in a separate header then. It helps the server to know
> what TZ a user is in.
>
> > Dates in HTTP have a granularity of one second; although people ask for
> > finer granularity from time to time, giving them this capability is IMO
> > asking for trouble (because clock sync and the speed of light / disk,
> > combined with people's ignorance of distributed systems, leads to lots of
> > bugs).
>
> Well, sub-second resolution can help if you're building, say, a
> timesync protocol. (Since every app protocol now has to run over
> HTTP... NTP is no longer good enough. j/k)
>
> > WRT years up to 9999 -- yes. The method I used consumes an extra byte
> > after 2106... and then another in 4147. However, just one more byte buys
> > up to 36812!
>
> Good!
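
FWIW, the byte thresholds in that last quoted paragraph line up if you
assume each extra byte contributes four more value bits to a
seconds-since-1970 count (just my reading of the numbers, not necessarily
how the encoding actually works). A quick check:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Mean Gregorian year: 365.2425 days of 86400 s. */
        const double year = 365.2425 * 86400.0;
        const int bits[] = { 32, 36, 40 };
        for (int i = 0; i < 3; i++) {
            double secs = (double)((uint64_t)1 << bits[i]);
            printf("2^%d seconds overflow during year %d\n",
                   bits[i], (int)(1970.0 + secs / year));
        }
        return 0;
    }

That prints 2106, 4147, and 36812 -- the same years quoted above.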