- From: Marcos Caceres <w3c@marcosc.com>
- Date: Fri, 6 Dec 2013 11:25:18 +1000
- To: Kostiainen, Anssi <anssi.kostiainen@intel.com>
- Cc: "<public-device-apis@w3.org>" <public-device-apis@w3.org>, Puneet Mohan Sangal <pmsangal@yahoo-inc.com>, Mounir Lamouri <mounir@lamouri.fr>
On Friday, December 6, 2013 at 12:23 AM, Kostiainen, Anssi wrote:

> Generally I share the concerns outlined by Peter at [1]. That said, I have a proposal (comes with a use case!) I'd like to test with folks here:
>
> Perhaps we could abstract the essence of this API into "slow network" and "normal network" events that fire when there's a significant change in the network conditions, and not try to guess the exact bandwidth and/or the connection type as in the current or previous iterations of the API.
>
> When the implementation notices that loading resources takes longer than on average [2] it would fire the "slow network" event. And when the network conditions resume back to normal, the "normal network" event would get fired. It would be up to the implementation to come up with reasonable heuristics based on bandwidth, latency, connection type, or any other hint that may be available to the implementation in the future.
>
> Here's a use case that may be familiar to some of you: A user is standing in a subway station, she enters the train, the train enters a tunnel, the connection gets slower, and then the connection drops completely. When the train finally comes out of the tunnel the connection is back again, first slow, then back to normal. The following events would get fired in this scenario:
>
> "slow network" -> "offline" -> "online" -> "normal network"
>
> (We can argue whether "online" and "offline" events are useful as currently spec'd and implemented, but let's leave that discussion for another thread.)
>
> When an app gets the "slow network" event it knows it is time to adapt to the changed environment. For example, pull in the FB/Twitter updates before the network connection gets too slow, or downgrade the streaming quality of a video that is being streamed, and/or inform the user that the experience may be degraded soon. And similarly for "normal network", but back to normal.
>
> Any thoughts?

This sounds like a footgun, tbh. You can still have a great connection but a slow server - and a bunch of stuff in between. Hell, just walking into a different room in my house causes my bandwidth to change - and even my parents have two wifi points in their house to extend their coverage. So do I at home, which means I commonly switch from one wifi spot to another, and in between I momentarily connect to 3G - but I sure as hell don't want apps doing anything weird like wasting my limited d/l cap in those few seconds of the Wifi-3G-Wifi switch.

It would be better to find examples on the web of people actually doing this kind of estimation - pinging servers and measuring latency - and see what problem they are actually trying to solve. I think Akamai has something that does this for responsive images, for example - but such cases might already be better handled with Client-Hints and <picture>. My point being, we should find actual people trying to determine this information in the wild today and evaluate what they are trying to do "for realz".

Parallel discussions are happening in the Web Performance WG - and they are hitting the same "you don't have any actual, in-the-wild use case that we can actually see" problem: http://lists.w3.org/Archives/Public/public-web-perf/2013Dec/0043.html
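For concreteness, here is a minimal sketch of the kind of probe-based estimation alluded to above (fetch a small resource, time it, and decide whether the connection currently "feels" slow). This is purely illustrative and not from the thread or any spec: the probe URL, sample count, and threshold are made-up assumptions, and it is exactly the sort of heuristic whose real-world uses would need to be studied first.

```typescript
// Illustrative sketch only: estimate whether the connection "feels" slow by
// timing a few fetches of a small resource. The probe URL, sample count, and
// 1500 ms threshold are arbitrary assumptions for illustration.
async function connectionSeemsSlow(
  probeUrl: string = "/favicon.ico",
  samples: number = 3,
  thresholdMs: number = 1500
): Promise<boolean> {
  let totalMs = 0;
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    try {
      // "no-store" forces a real network round trip for each probe
      await fetch(probeUrl, { cache: "no-store" });
    } catch {
      return true; // treat a failed probe as "slow or offline"
    }
    totalMs += performance.now() - start;
  }
  return totalMs / samples > thresholdMs;
}

// Example use: decide whether to degrade the experience (e.g. pick a lower
// video bitrate) based on the probe result.
connectionSeemsSlow().then((slow) => {
  if (slow) {
    console.log("Probe suggests a slow connection; consider a lower bitrate.");
  }
});
```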
Received on Friday, 6 December 2013 01:25:58 UTC