
Interaction speed Re: Draft finding - "Transitioning the Web to HTTPS"

From: Tim Berners-Lee <timbl@w3.org>
Date: Wed, 10 Dec 2014 08:18:56 +0100
Cc: Marc Fawzi <marc.fawzi@gmail.com>, Bjoern Hoehrmann <derhoermi@gmx.net>, Mark Nottingham <mnot@mnot.net>, Noah Mendelsohn <nrm@arcanedomain.com>, Public TAG List <www-tag@w3.org>
Message-Id: <14B98DE8-FF5E-496D-A64C-31A4F8F68B04@w3.org>
To: Tim Bray <tbray@textuality.com>

Thanks for the pointer.

There are perhaps two web-level areas which are not addressed in it. Here is one, as a separate thread. I think the TAG finding does need a reasoned discussion of them, or pointers to one, and it may also need practical tips.

A result from the early days of hypertext systems was the rule of thumb that a user would use a system effectively so long as the response time was 100ms or less. Any increase in speed below that threshold does not lead to any improvement in problem-solving ability, while any increase above it is detrimental: users tend not to explore, as it is not worth the wait. So interaction time is critical.

The Linked Data Platform is one example of a system in which a user is kept in sync with shared data on the server. The code I write greys out a user control when you change it, until the 200 response comes back from the server, so the user is gently aware whenever the data has not yet been saved.
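The "grey out until confirmed" pattern can be sketched roughly as follows; this is an illustrative state machine with the network call stubbed out, not the actual LDP client code, and all names here are invented for the example.

```python
# Sketch of the "grey out until the server confirms" pattern: the control is
# greyed the moment it is edited, and un-greyed only on a 200 response.
class SyncedControl:
    def __init__(self):
        self.greyed = False  # True while an edit is unconfirmed by the server

    def on_change(self, send):
        """Called when the user edits the control; `send` performs the
        PUT/PATCH to the server and returns the HTTP status (stubbed here)."""
        self.greyed = True            # grey out immediately on edit
        status = send()               # round trip to the server
        if status == 200:
            self.greyed = False       # server confirmed: restore normal look
        # on any other status the control stays greyed, signalling unsaved data

control = SyncedControl()
control.on_change(lambda: 200)        # simulate a successful save
print(control.greyed)                 # → False
```

The visible duration of the greyed state is exactly the request latency, which is why the HTTP-versus-HTTPS round-trip difference shows up directly in the UI.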

My practical experience is that the greying out is hardly noticeable with a direct HTTP connection but can take a second or two over HTTPS. Obviously this all depends massively on the state of the internet between the two endpoints at the time, and really we need a large number of tests, but the effect is directly noticeable. (I use a node.js data server speaking HTTP locally, behind an Apache proxy doing HTTPS to the outside world.)

It tends to be worst when making a change (say, editing an item in a form, calendar, etc.) after no interaction for a while.

I discussed this in a TAG meeting with Mark N and he suggested I could tune the system in some ways, and maybe there could be instructions in the finding. But a simple working-out of the number of round trips involved in each case, by those who know the protocols well, would also be valuable.
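As a back-of-the-envelope version of that working-out, the round-trip count for a single small request can be tallied like this (assuming TLS 1.2 with a full handshake, no session resumption or TCP Fast Open, and DNS already cached; these are my assumptions, not figures from the finding):

```python
# Rough round-trip count for one small HTTP(S) request.
def round_trips(https: bool, connection_open: bool) -> int:
    rtts = 1                      # the HTTP request/response itself
    if not connection_open:
        rtts += 1                 # TCP three-way handshake
        if https:
            rtts += 2             # full TLS 1.2 handshake costs ~2 extra RTTs
    return rtts

print(round_trips(https=False, connection_open=False))  # → 2
print(round_trips(https=True,  connection_open=False))  # → 4
print(round_trips(https=True,  connection_open=True))   # → 1
```

This also explains why the effect is worst after a period of inactivity: once the keep-alive connection has timed out, an HTTPS request pays the full cold-start cost, roughly double that of plain HTTP.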

(A possibility, of course, is to switch from HTTP to WebSockets, which would improve everything but the first interaction, but would require resources on both sides.)
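The WebSocket trade-off can be put in the same round-trip terms. A sketch, under the same assumptions as above (TLS 1.2 full handshake, one extra RTT for the WebSocket upgrade, and an idle timeout that forces each plain-HTTPS request onto a cold connection); the numbers are illustrative, not measurements:

```python
# Total round trips for n interactions in a session.
def https_cold_each_time(n: int) -> int:
    # Each request pays TCP (1) + TLS (2) + request (1) on a cold connection.
    return n * (1 + 2 + 1)

def websocket_session(n: int) -> int:
    # One-off setup of TCP (1) + TLS (2) + upgrade (1), then 1 RTT per message.
    return (1 + 2 + 1) + n

print(https_cold_each_time(5))    # → 20
print(websocket_session(5))       # → 9
```

The first interaction costs slightly more over the WebSocket (the upgrade), but every subsequent one is a single round trip, at the price of holding an open connection on both client and server.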


On 2014-12-10, at 06:17, Tim Bray <tbray@textuality.com> wrote:

> The arguments about the desirability of ubiquitous encryption have been going on a long time, but unfortunately tend to circularity because few *new* arguments are introduced in any given year.  I have written a draft which assembles the most-commonly-heard arguments against the universal deployment of privacy technology, and provides counter-arguments.  I suspect much of it is material to this discussion, and it’s not very long: https://www.tbray.org/tmp/draft-bray-privacy-choices-00.html : “Privacy Choices for Internet Data Services”

Received on Wednesday, 10 December 2014 07:19:10 UTC
