
RE: Client side : an economic perspective was: Re:...

From: Stephanos Piperoglou <sp249@cam.ac.uk>
Date: Sat, 11 Apr 1998 02:29:14 +0100 (BST)
To: Andy Coniglio <waconigl@mccallie.org>
cc: nir dagan <dagan@upf.es>, roconnor@uwaterloo.ca, www-html@w3.org, www-talk@w3.org
Message-ID: <Pine.LNX.3.96.980411020752.368I-100000@localhost>
On Fri, 10 Apr 1998, Andy Coniglio wrote:

> -----Reply from Andy Coniglio-----
> 1.	Clients do not have to implement every feature of the web.  They do not
> have to be heavy.  I might add that pages should be designed with these
> clients and their limitations in mind.

This is true, but the more we can have servers do, the better. If most
processing can be done on the server side, there is less work in developing
clients. Consider what would be needed to build from scratch (off the top of
my head; not that I'm actually considering this or have written any code
:-)) a browser that can compete with the Big Two:

- An HTTP 1.1 agent
- An FTP, gopher and (at least) simple mailer or OS Mail API interface
- A text-to-DOM translator that will deal with real-world markup
- Around a million little catches in the above to observe backward
compatibility or "feature" propagation
- A stylesheet mechanism with rendering on screen and paper
- A Java VM (small task? Nooope) that reads the DOM
- A scripting mechanism (or two) that reads the DOM
- Incremental rendering and processing by client-side methods (a nightmare!)
- A "plugin" mechanism to facilitate expansion of supported mime types
- Builtin plugins for several mime types (image/* and a few others)

Are we done yet? Try implementing THAT on a wristwatch.

> 2.	Client side applications and scripts came about partly because servers
> were getting bogged down with tasks that clients could perform.  The
> Internet has a finite amount of computing power.  It also has a load that it
> must carry.  It will run faster if the load is spread out.

The main problem with load on the Internet is bandwidth, not computing
power. Computers are getting faster every month with no signs of slowing
down (and, PLEASE, even the most intensive server-side processes hardly
require a CRAY on speed to run!). Keeping most of the processing on the
server side would minimize bandwidth consumption... most of the pages that
make intensive use of JavaScript these days transmit twice as much code as
content...
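The code-versus-content ratio is easy to measure for any given page. Here is a rough sketch (in Python, purely for illustration; the class name and the sample page are mine) that tallies bytes of inline script against bytes of visible text:

```python
from html.parser import HTMLParser

class ScriptVsContent(HTMLParser):
    """Tally bytes of inline <script> code versus visible text in a page."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.script_bytes = 0
        self.content_bytes = 0

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        # Count raw bytes, since bandwidth is what we care about here.
        n = len(data.encode("utf-8"))
        if self.in_script:
            self.script_bytes += n
        else:
            self.content_bytes += n

page = """<html><body>
<script>function f(){/* lots of client-side logic */return 42;}</script>
<p>Hello</p>
</body></html>"""

p = ScriptVsContent()
p.feed(page)
print("script bytes:", p.script_bytes, "content bytes:", p.content_bytes)
```

Even this toy page ships several times more script than content; a real "DHTML-heavy" page only makes the ratio worse.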

Also, don't forget that most authors use server-side methods anyway because
of their convenience. Not to mention the fact that if Web page processing
were done through a specialised language rather than something generic like
Perl, it would be a lot faster. (Don't get me wrong, I like Perl and use it
a lot, but it's not exactly the best performer if you're serving 10,000 hits
per second or something.)

> 3.	Not everyone has the security privileges on a server needed to write
> server-side applications.  In fact, much of the web developer population
> can't or won't run a server due to inadequate connection or computing power.

A specialised language for Web page processing would solve this. If you
could write the content in a language that specialises in this task, it
would not have access to other resources, and hence could be very secure.
The easiest way to secure a system is to remove its functionality, not its
privileges.
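To make the "remove functionality" point concrete, here is a toy sketch of such a specialised page language (my own invention, in Python for illustration): its only operation is substituting named fields into a template. File, network, and shell access are not restricted by policy; they simply do not exist in the language, so page authors cannot reach them.

```python
import re

# The entire "language": insert a named field with ${name}.
FIELD = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def render(template, fields):
    """Render a page template. No I/O primitives exist, by construction."""
    def sub(match):
        name = match.group(1)
        if name not in fields:
            raise KeyError("unknown field: " + name)
        return str(fields[name])
    return FIELD.sub(sub, template)

print(render("Hello, ${user}! You have ${count} messages.",
             {"user": "Andy", "count": 3}))
# -> Hello, Andy! You have 3 messages.
```

Contrast this with sandboxing a general-purpose language, where every dangerous capability has to be found and fenced off individually.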

> 4.	There are things that are not possible with server side scripts/apps.
> Page formatting comes to mind.  Besides the dramatic increase in Internet
> traffic, some things wouldn't be possible.

To each his own, granted. Formatting would definitely be done on the client
side. But things like database access should be done on the server side,
and yet many of these are moving to the client.

> 5.	You mentioned "Client-side methods are difficult to standardize and
> implement across platforms."  Any kind of server-side app would have to send
> data back to the client to be displayed.  Any problems that are associated
> with client side methods would also apply to this new Server->Client
> protocol.  You may argue that it could be standardized.  Sure it could.  So
> could JavaScript. (It has been; It's called ECMAScript.)  >Netscape and IE
> don't use standardized DHTML.  That's because it is new and hasn't had time
> to get a standard.

That still leaves far less to standardize and implement. A user agent MUST
conform to standards for HTML, CSS, ECMAScript etc. A server does not need
to support, say, ANSI C for its CGI binaries.

> A possible solution to the problem of bogged down servers vs. heavy clients:
> In a future generation of HTTP, there should be a header sent to the server
> when a document is requested that specifies what scripts/apps the Client can
> parse.  The server would run anything that the client can't handle.

With the advent of HTTP-NG, it would be easier (I hope) to cache the
static parts of Web documents and re-transmit only the dynamic parts. This
would make everything much easier to handle.
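The negotiation Andy proposes could be sketched like this (the header name and the feature labels are hypothetical, invented here for illustration): the client advertises what it can execute, and the server evaluates everything else before sending the page.

```python
def split_work(required_features, client_features_header):
    """Partition a page's processing between client and server.

    required_features: features this page needs (in order).
    client_features_header: hypothetical request header value, e.g.
        "X-Client-Features: ecmascript, form-validation" (value part only).
    """
    client_can = {f.strip() for f in client_features_header.split(",") if f.strip()}
    # Anything the client advertised runs client-side...
    client_side = [f for f in required_features if f in client_can]
    # ...and the server picks up the rest.
    server_side = [f for f in required_features if f not in client_can]
    return client_side, server_side

needed = ["ecmascript", "form-validation", "database-query"]
client_side, server_side = split_work(needed, "ecmascript, form-validation")
print(client_side)  # ['ecmascript', 'form-validation']
print(server_side)  # ['database-query']
```

A wristwatch browser would simply advertise nothing and receive fully server-rendered pages.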

-- Stephanos Piperoglou -- sp249@cam.ac.uk -------------------
All I want is a little love and a lot of money. In that order.
------------------------- http://www.thor.cam.ac.uk/~sp249/ --
Received on Friday, 10 April 1998 19:35:46 UTC