- From: Gavin Thomas Nicol <gtn@rbii.com>
- Date: Thu, 10 Jan 2002 10:44:17 -0500
- To: xml-dist-app@w3.org
> over the wild internet as it does over a tame intranet. The points
> that have been raised in this thread about latency, reliability,
> "burstiness" etc. can be managed in an intranet by investments in
> hardware and competent system administration

While this is true, it shouldn't be taken for granted that these things
are possible.

> Either the internet infrastructure will evolve fast
> enough so that the RPC paradigm continues to scale up and the
> underlying complexity is hidden from the application programmer, or
> it won't and a more loosely coupled, asynchronous model of web
> services delivery will be something that web services developers
> have to deal with.

There used to be a Sun paper called "Notes on Distributed Programming"
or somesuch, that was very good at pointing out that the holy grail,
transparency (transparent local/remote access), is impossible to
achieve. Over time I have come to appreciate their points. That said, I
should note that gigabit networks are fast enough that data transfer
(for example, file copies) sometimes suffers from bottlenecks in the
local machine hardware (disk throughput) before the network interferes.

Tim made a point on XML-dev which is very true: namely that the
server-centric nature of the www, and SOAP et al. in the classic RPC
mode of use, is a real bottleneck. We need to offload more onto the
client.

Back in the late 80's, I wrote a system not unlike SOAP/XML-RPC that
used straight sockets and S-expressions. We found that for it to work
well over long distances, clients had to bear a lot of the burden. This
system was a shared workspace application (whiteboards, files, etc.),
and what we did was offload the state management onto the clients,
which then synchronized with one another based on a QoS rating derived
from uptime and latency.
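The QoS-based peer synchronization described above can be sketched roughly as follows. This is a hypothetical illustration, not the original system: the `Peer` structure, the field names, and the scoring formula (uptime weighted by a normalized latency factor) are all invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    uptime: float      # fraction of time the peer has been reachable, 0.0-1.0
    latency_ms: float  # recent round-trip latency in milliseconds

def qos_score(peer: Peer, max_latency_ms: float = 1000.0) -> float:
    """Combine uptime and latency into one rating (higher is better).

    Latencies at or above max_latency_ms contribute a factor of zero,
    so slow peers are avoided even when their uptime is excellent.
    This formula is an assumption for illustration only.
    """
    latency_factor = max(0.0, 1.0 - peer.latency_ms / max_latency_ms)
    return peer.uptime * latency_factor

def best_sync_peer(peers: list[Peer]) -> Peer:
    """Pick the peer a client should synchronize its workspace state with."""
    return max(peers, key=qos_score)

peers = [
    Peer("alpha", uptime=0.99, latency_ms=40.0),
    Peer("beta", uptime=0.80, latency_ms=15.0),
    Peer("gamma", uptime=0.99, latency_ms=600.0),
]
print(best_sync_peer(peers).name)  # "alpha": high uptime and low latency
```

The point of such a scheme is that the server never mediates state updates: each client picks a well-behaved peer and synchronizes directly, so the load scales with the clients rather than concentrating at the server.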
Received on Thursday, 10 January 2002 11:05:00 UTC