- From: Ilya Grigorik <igrigorik@gmail.com>
- Date: Thu, 31 Mar 2016 12:58:20 -0700
- To: Göran Eriksson AP <goran.ap.eriksson@ericsson.com>
- Cc: HTTPWG <ietf-http-wg@w3.org>
- Message-ID: <CAKRe7JHX6DOsYGeWG9YVE6i-2iguJ_QLQv0HhthWkaf9ZyjaHg@mail.gmail.com>
On Wed, Mar 30, 2016 at 2:37 PM, Göran Eriksson AP <goran.ap.eriksson@ericsson.com> wrote:

> > Well, the way we defined it [1] is:
> > - in absence of any other signals, we would use the theoretical max DL rate
> > - if we can refine the upper bound (e.g. WiFi drivers do communicate max rate based on RSSI), then we would communicate that value
> >
> > As such, the intent is to communicate the upper bound (with respect to the first network hop) to the best of our abilities.
>
> Right. The probability that the theoretical downlink rate is anywhere near the actual one is often pretty low. Even if making educated guesses is tricky in wireless (unless one uses QoS mechanisms), an estimate is at least closer to the actual rate than the theoretical (marketing) download rate.
>
> This means the server may want to act differently depending on whether the value is the theoretical downlink rate or an educated guess/estimate/prediction by the client, since these vary in how likely they are to represent the actual DL bitrate. The server could deduce this by keeping a list of the theoretical rates given in [1] and assuming that when the DL rate equals any of the theoretical values, the client had no other info.
>
> Making the difference clear in the DL Hint could be useful.

Yep, we discussed exactly this in: https://github.com/w3c/netinfo/issues/27. The conclusion there was that if the browser implements its own quality-estimator metric, then it should probably be surfaced as a different attribute.

ig
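[Editor's note: a minimal sketch of the server-side heuristic Göran describes above, i.e. treating an exact match against a table of theoretical downlink rates as a sign that the client had no refined estimate. The rate values, the `Downlink`-style Mbps header, and all names here are illustrative assumptions, not taken from [1] or the netinfo spec.]

```ts
// Hypothetical table of theoretical max downlink rates in Mbps, keyed only
// by value. The values below are illustrative placeholders; a real server
// would populate this from the table referenced as [1] in the thread.
const THEORETICAL_DL_MBPS = new Set<number>([0.384, 2, 10, 100, 300]);

// Returns true when the downlink hint exactly matches a theoretical rate,
// which (per the heuristic above) suggests the client had no other info
// and simply reported the theoretical maximum.
function downlinkLooksTheoretical(downlinkHeader: string | null): boolean {
  if (downlinkHeader === null) return false;
  const mbps = Number(downlinkHeader);
  if (!Number.isFinite(mbps)) return false;
  // Exact equality is the comparison described in the email
  // ("DL rate == any_of_theoretical").
  return THEORETICAL_DL_MBPS.has(mbps);
}

// Example: a hint of "10" would be flagged as likely theoretical, while
// "7.2" would be treated as a refined estimate from the client.
```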
Received on Thursday, 31 March 2016 19:59:28 UTC