RE: The Web as an Application

From: Rushforth, Peter <Peter.Rushforth@NRCan-RNCan.gc.ca>
Date: Thu, 6 Jun 2013 21:33:57 +0000
To: David Lee <David.Lee@marklogic.com>
CC: "public-xmlhypermedia@w3.org" <public-xmlhypermedia@w3.org>
Message-ID: <1CD55F04538DEA4F85F3ADF7745464AF24A0A1FE@S-BSC-MBX1.nrn.nrcan.gc.ca>
> > Perhaps the Unix 'application' is analogous.  
> I would say "The Web" is more analogous to the entire sum of 
> unix systems worldwide that have internet server capability.
> Entirely different from a unix "application" ... 

Sorry, I didn't use the right words there.  I meant that the Unix
(operating) system is an application.  Yet this is not quite right, either.

Unix systems do not interoperate except via networks, and only if they
are connected.  Even when they are connected, do they act as one, to the
extent that "an application" does?  This, I think, is the essence of
"an application": a set of software combined to achieve a goal or set
of goals.

Wikipedia more or less agrees [1]:

"Application software is all the computer software that causes a computer 
to perform useful tasks (compare with Computer viruses) beyond the running 
of the computer itself. A specific instance of such software is called a 
software application, application or app."

Although I would add the hardware that embodies software functions
(load balancers, caches, etc.) as part of the application.

[1] http://en.wikipedia.org/wiki/Application_software

I'm thinking of the old Sun motto "The Network Is The Computer" 
(which I think was a play on Marshall McLuhan's "The medium is the message").

Anyway, to the extent that The Web exists as an application, we can't 
claim that The Web has any given purpose, such as checking the weather,
your stock portfolio, etc.  The purpose of the Web is more abstract,
yet obviously quite important.  So, what is the purpose, the useful task,
that The (Abstract) Web serves?  Once we know that, we can examine
whether links in the xml namespace are warranted.

> > Ideally, I apply simple primitive tools, possibly in
> > combination, as in Unix pipelines, to achieve necessary complexity.
> Precisely why I invented xmlsh :)
> http://www.xmlsh.org/Philosophy
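
As a concrete (if trivial) illustration of that philosophy, a pipeline
of small single-purpose tools can do a crude "element census" of an XML
document; the file name doc.xml is just a placeholder.  (This is a
regex hack, not a real XML parser, but it shows the composition style.)

```shell
# Count the distinct element names in an XML document by chaining
# small single-purpose tools.  "doc.xml" is a placeholder file name.
grep -o '<[A-Za-z][A-Za-z0-9]*' doc.xml \
  | sed 's/^<//' \
  | sort \
  | uniq -c \
  | sort -rn
```

No single stage understands XML, counting, and sorting all at once; the
pipe is the only contract between them, which is exactly the kind of
composition the analogy is after.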

I thought you might like the analogy :-).

> ----------
> > If simple links were available to connect XML resources (with any
> > other resources), similar to pipes | > etc. in unix, standards
> > like XInclude would probably not be necessary, because
> > transclusion, for example, could be available via @xml:src.
> I argue this analogy breaks down because you are now assuming 
> that XML itself has web semantics.
> That is @xml:src can be *indirected* 

Could you please explain this?  I'm not sure I understand.  The number
one goal of XML was:

1. XML shall be straightforwardly usable over the Internet

According to Tim Bray [2]: 

"This was not taken to mean that you could feed XML to the browsers 
of the day, but that the design would have regard at all times to 
the needs of distributed applications working on large-scale networks."

So I guess you are right, although it's a bit of a difficult pill
to swallow.  If you aren't successful on the Web, especially if you
are a markup language, you can hardly be considered successful on the
internet.

[2] http://www.xml.com/axml/notes/Goal1.html

> This is precisely why XInclude is at a higher level than XML 
> itself (same with XQuery and XSLT).
> They take a subset of XML and apply additional semantics 
> which are not necessarily appropriate for ALL XML.
> I think this is a good thing not a bad thing. 
> But then this is why XML is not HTML ... 

There's that term "level" again.  We need to identify the layering of
applications, especially XML applications, in order to say what the
appropriate services and consumers are.
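
To make the layering concrete, here is a markup sketch.  XInclude adds
transclusion semantics as a layer on top of plain XML; the @xml:src
attribute shown second is hypothetical (it does not exist today) and
would instead put a simple link into the xml namespace itself.  The
URIs are illustrative only:

```xml
<!-- Today: transclusion via the XInclude layer -->
<report xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="http://example.org/chapters/intro.xml"/>
</report>

<!-- Hypothetical: a simple link in the xml namespace -->
<report>
  <chapter xml:src="http://example.org/chapters/intro.xml"/>
</report>
```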

> But look around.  Look at most "modern" (last 3 years or so) 
> Web Applications.
> What are they being built out of ?   They are primarily pure 
> JavaScript GUI's where the application in the browser is
> driving state, managing  100% of the GUI and making 
> occasional AJAX-y type calls to the back end only as needed 
> and caching the results.    Look at the mobile apps.  
> Successful Mobile apps using either native apps or HTML5 techniques
> pass down entire databases and attempt to run unattached as 
> much as possible.
> Many many page transitions or entire application runs can 
> occur without hitting the back end.
> There are good reasons for this, one huge one being 
> unreliability and latency of the web.
> If an application doesn't need to do a client/server call to 
> display a new page, it is much more resilient and provides a better
> user experience.   Particularly true on mobile but also true 
> on regular internet connected browsers.

Yah.  I may be stuck in 1999.

> Look at server-server applications.  More and more these are 
> being driven by asynchronous queued messages, not 
> synchronous request/response.
> I do feel that the time to promote page-by-page server driven 
> REST based applications may have passed us by already.

I don't think so; there is a lot of interest in hypermedia applications.
In some ways, I think it may just be getting started.

Received on Thursday, 6 June 2013 21:34:25 UTC