
RE: Article: Fat protocols slow Web services

From: lists <lists@amadan.net>
Date: Fri, 11 Jan 2002 01:30:41 -0000
To: <xml-dist-app@w3.org>
Message-ID: <000b01c19a3f$a55fa310$b87ba8c0@MITCHUM>


> -----Original Message-----
> From: xml-dist-app-request@w3.org 
> [mailto:xml-dist-app-request@w3.org] On Behalf Of Paul Duffy
> Sent: 10 January 2002 19:08
> To: xml-dist-app@w3.org
> Subject: RE: Article: Fat protocols slow Web services
> 
> 
> This is probably a diversion from the main topic...
> 
> I often hear it stated that "its generally accepted that 
> synchronous RPC 
> does not scale as well as async messaging".  Could somebody 
> pro/con this 
> statement, possibly clarifying with a simple example ?
> 
> Much appreciated....

Hi Paul,

The crudest, simplest version I can think of is this:

Synchronous RPC attempts to emulate a single process of execution across
a network. Think sequential execution.
Asynchronous messaging attempts to allow processes to execute
independently. Think threaded execution.
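To pin that distinction down, here is a toy sketch (not any particular
RPC or messaging toolkit; all the names are made up). The "sync" client
blocks on every call; the "async" client drops requests on a queue and
carries on, collecting replies later.

```python
# Toy contrast of sync RPC vs. async messaging, using threads and queues.
# make_drink() stands in for a remote service; the latency is pretend.
import queue
import threading
import time

def make_drink(order):
    """Stand-in for a remote service call."""
    time.sleep(0.01)  # pretend network + work latency
    return f"{order} ready"

# --- Synchronous RPC style: the caller is tied up for each call ---
def sync_client(orders):
    results = []
    for order in orders:
        results.append(make_drink(order))  # blocks until the call returns
    return results

# --- Asynchronous messaging style: fire requests, collect replies later ---
def async_client(orders):
    requests, replies = queue.Queue(), queue.Queue()

    def server():
        while True:
            order = requests.get()
            if order is None:  # shutdown signal
                break
            replies.put(make_drink(order))

    worker = threading.Thread(target=server)
    worker.start()
    for order in orders:      # the caller never blocks on the work itself
        requests.put(order)
    requests.put(None)
    worker.join()
    return [replies.get() for _ in orders]

orders = ["daiquiri", "old-fashioned"]
print(sync_client(orders))
print(async_client(orders))
```

With a single worker the async version is no faster; the point is only
that the caller is decoupled from the work, which is what lets you add
workers later without changing the caller.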

Tanenbaum and van Steen's "Distributed Systems" book explains the
choices well, as does Coulouris et al.'s book of the same name, as does
the beginning of Monson-Haefel's "Java Message Service". I would say
that the key to scaling is how processes coordinate.

My simple example is cocktail bartending, which also covers the
multi/single threading special case. Distributed systems paradigms can
be analysed over a few drinks. Suppose you order two cocktails, a
daiquiri and an old-fashioned. The bartender makes your drinks
sequentially. If there were another bartender, then she could make one
of the drinks; that would be like two processes working in parallel.
You would get your drinks faster.

Suppose there was another bartender, but she was about to make just
those drinks for someone else. Good bartenders will communicate and
split the drinks, so that one makes martinis and the other makes
old-fashioneds and manhattans and whatnot. The customers are likely to
get their drinks faster as a result, assuming it doesn't take very long
to communicate the task splitting. Plus, if the drinks don't have
shared ingredients, the bartenders shouldn't interfere with each other
that much. If the bartenders are well trained (shared semantics), they
will be able to make drinks by type and possibly by customer
(inference), and without explanation (behavioural encapsulation). Each
task closes when the respective drinks are passed along.

If there was one bartender with two customers both ordering these
drinks, a good bartender will probably not switch between drinks,
favouring instead to decrease the average customer wait time by making
the same two drinks in order (thread interleaving). Generally this
policy makes more sense as the number of customers increases beyond
two, but it will top out; it is only a heuristic, and not all
bartenders work this way on their own. They sometimes decide to chat to
the second customer (fast response) while he waits. Over the counter
(peer to peer), where the customer mixes a drink for themselves, is not
something many bars will allow, unless they know you well.
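The two-bartenders-one-queue arrangement can be sketched directly (a
toy model in the same spirit as the analogy; "bartender", "serve", and
the timings are all invented for illustration). Both workers pull
orders from one shared queue, so neither sits idle while drinks remain.

```python
# Toy sketch: N "bartenders" (worker threads) draining one shared order queue.
import queue
import threading
import time

def bartender(orders, done):
    """Pull orders until the 'closing time' signal (None) arrives."""
    while True:
        drink = orders.get()
        if drink is None:
            break
        time.sleep(0.02)  # time to mix one drink
        done.append(drink)  # list.append is thread-safe in CPython

def serve(drinks, n_bartenders):
    orders = queue.Queue()
    done = []
    staff = [threading.Thread(target=bartender, args=(orders, done))
             for _ in range(n_bartenders)]
    for t in staff:
        t.start()
    for d in drinks:
        orders.put(d)
    for _ in staff:
        orders.put(None)  # one closing-time signal per bartender
    for t in staff:
        t.join()
    return done

drinks = ["daiquiri", "old-fashioned"] * 2
for n in (1, 2):
    start = time.time()
    serve(drinks, n)
    print(f"{n} bartender(s): {time.time() - start:.2f}s")
```

With two bartenders the wall-clock time roughly halves, and the only
"communication" cost is contention on the shared queue, which mirrors
the point about task splitting being cheap when ingredients (state)
aren't shared.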

Overall, good, fast bars will display a mix of paradigms, depending on
how busy the bar is (system load), the nature of the customers
(application types), and how many bartenders are stationed. The
bartenders will generally spend more time attending to customers than
communicating with each other; lesser bartenders tend to talk back and
forth to each other too much when it's busy, because they haven't
established a good working rhythm (coordination language).

[I used to use fast food examples but many great thinkers have had the
high ground on computing and food since the 1960s at least. Whereas I
only know my drinks.]

regards,
Bill de hÓra
 
Received on Thursday, 10 January 2002 20:35:39 GMT
