- From: Eric J. Bowman <eric@bisonsystems.net>
- Date: Mon, 19 Jan 2015 15:22:27 -0700
- To: Henri Sivonen <hsivonen@hsivonen.fi>
- Cc: David Sheets <kosmo.zb@gmail.com>, Noah Mendelsohn <nrm@arcanedomain.com>, Tim Berners-Lee <timbl@w3.org>, Public TAG List <www-tag@w3.org>
Henri Sivonen wrote:
>
> I sure hope that "things", unlike many phones, really end up
> downloading these often. There being a cache sharing opportunity,
> though, assumes that the "things" on a given network would be
> homogenous enough for the same software updates to be applicable to
> multiple "things".
>

This discussion has fallen into the trap of assuming that HTTP
intermediaries are caches. In reality, devices and middleware exist
which do all sorts of things for end-users. I can imagine plenty of
applications where these same sorts of things are done for Things as
well, provided the ecosystem allows for it. Calling it two different
ecosystems is interesting, inasmuch as it's a political distinction
with no technological basis.

>
> Do developers of package management tools like apt-get think the
> tools are part of "the Web" if they use HTTP?
>

What about HTML interfaces for apt-get using a browser? Once upon a
time, my way of doing things *was* "on the Web", in that content
served as application/xhtml+xml would reliably engage XSLT 1.0 in
browsers, while avoiding intermediaries injecting content into
text/html. XSLT-to-HTML remains a viable approach for implementing a
GUI on any package management system that uses XML for manifests,
etc. (see the sketch at the end of this message).

Browser-resident transformation remains a relevant architecture for
intranet applications. Michael Kay's XSLT 2.0 plugin (Saxon-CE) may
be used to cache compiled transformations which convert an XForms UI
into HTML5+JS+CSS. There's no reason browsers can't natively support
this approach even better than they used to, for example by
supporting streaming XSLT 2.0.

My, what a tangled Web we weave, when we attempt to redefine "Web" to
exclude RESTful, transformable, XML-driven architectures running in
browsers because they're behind a firewall and require plugins, when
that reality came about through arbitrary choice.

>
> Anyway, I think it would be a mistake to scope this TAG finding to
> cover "everything addressable by a URL" or "everything that uses
> HTTP". Such a broad scope would import enough special interests to
> limit what can be definitively said. I think it's more useful to say
> something more confident/definitive about the Web (or "the browsable
> Web" for those who believe in more expansive definitions of "the
> Web") than to say something more vague about "everything addressable
> by a URL" or "everything that uses HTTP".
>

The problem with that small-tent Web is that it bulk-excludes
stakeholders from the decisions made within it. Perhaps the scope of
findings should cover the scope of those affected by said findings.

Didn't TAG findings use to be scoped to an architectural definition
of the Web? That seems much less convoluted than continually
re-defining "Web" to exclude not only more and more stakeholders, but
also more and more alternative implementations which are copacetic
with commonly held notions of "Web" and "browser". Roy called it
"suppressing originality".

Tongue firmly in cheek, I believe the W3C should delegate the
definition of "Web" to the WHATWG, where it will become a "living"
standard basically stating, "whatever those with the greatest market
share need it to mean at any given moment, to validate the decisions
of informed editors everywhere".

-Eric
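P.S. For concreteness, here is a minimal sketch of the
browser-resident XSLT-to-HTML approach described above. The manifest
format, file names, and the urn:example:packages namespace are
illustrative assumptions, not any particular package manager's
format; the stylesheet is plain XSLT 1.0, which browsers apply
natively via the xml-stylesheet processing instruction.

  <?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="manifest.xsl"?>
  <!-- manifest.xml: hypothetical package manifest.
       The namespace is illustrative only. -->
  <manifest xmlns="urn:example:packages">
    <package name="libfoo" version="1.2.3"/>
    <package name="libbar" version="0.9.8"/>
  </manifest>

  <?xml version="1.0" encoding="UTF-8"?>
  <!-- manifest.xsl: XSLT 1.0 stylesheet that renders the manifest
       as an HTML list. The browser runs this client-side when it
       fetches manifest.xml; no server-side HTML generation. -->
  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:p="urn:example:packages">
    <xsl:output method="html"/>
    <xsl:template match="/p:manifest">
      <html>
        <head><title>Package manifest</title></head>
        <body>
          <h1>Packages</h1>
          <ul>
            <xsl:for-each select="p:package">
              <li>
                <xsl:value-of select="@name"/>
                <xsl:text> </xsl:text>
                <xsl:value-of select="@version"/>
              </li>
            </xsl:for-each>
          </ul>
        </body>
      </html>
    </xsl:template>
  </xsl:stylesheet>

Serve manifest.xml with an XML media type and the browser applies the
transform before rendering, which is exactly the property that keeps
intermediaries from injecting content into a text/html response.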
Received on Monday, 19 January 2015 22:23:19 UTC