- From: Richard Cyganiak <richard@cyganiak.de>
- Date: Thu, 10 Apr 2008 12:59:54 +0100
- To: Dan Brickley <danbri@danbri.org>
- Cc: Pat Hayes <phayes@ihmc.us>, Leo Sauermann <leo.sauermann@dfki.de>, www-tag@w3.org, SWIG <semantic-web@w3.org>
Dan,

You are in the fortunate position that your vocabulary is so important that developers will simply pre-load it to work around the inherent slowness of the 303s. It's no accident that Tabulator and Disco come with FOAF pre-loaded. (Same story with DC.)

It's all about latency, and each additional lookup has a negative impact. Sure, there are technical means to work around that (incremental rendering, HTTP pipelining, etc.), but let's remember that current RDF browsers are cobbled together by people in their free time using shoestring and duct tape, so let's not make their job more difficult by adding additional slow-downs for no good reason.

Seriously, none of the advantages of slash URIs over hash URIs apply in the case of publishing vocabularies.

Richard

On 9 Apr 2008, at 09:16, Dan Brickley wrote:
> My apologies for not reviewing the document more carefully. It seems
> to be good stuff, but I missed this claim. And (as responsible party
> for the FOAF ns) I think this overstates the problem. Overstates it to a
> considerable degree, even.
>
> Clients can cache the 303 redirects, and the resulting URL's content
> can also be cached. For a small ontology of 5 or 6 terms, this
> involves 5 or 6 HTTP redirects plus the main fetch. All cacheable.
> For modest-sized ontologies like FOAF, with ~60 terms, it may be a
> slight nuisance, but let's keep it in perspective: loading a
> single Flickr page probably involves more HTTP traffic. And for
> massive ontologies, like the various WordNet representations,
> breaking them up into parts has its own merits: why download a
> description of 50,000 classes just because you've encountered one of them?
>
> If someone has specific software engineering problems with a Web
> client for FOAF data that is suffering "to a considerable degree",
> please post your code and performance stats and let's have a look at
> fixing it.
> Maybe http://en.wikipedia.org/wiki/HTTP_pipelining is
> something we can get wired into a few more SemWeb crawling
> environments; for instance data as much as for schemas.
>
> cheers,
>
> Dan
>
> --
> http://danbri.org/
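[Editor's note: for readers following the round-trip arithmetic in this exchange, here is a rough back-of-the-envelope sketch. The function names and the caching assumptions are illustrative only, not part of the thread: it assumes a cold cache, that a hash-URI client strips the fragment before issuing its single GET, and that every slash URI 303-redirects to one shared namespace document which is then cached.]

```python
from urllib.parse import urldefrag

def round_trips_hash(term_uris):
    """Hash URIs: the client strips the fragment before the request,
    so all terms sharing one namespace document cost a single GET."""
    return len({urldefrag(u)[0] for u in term_uris})

def round_trips_slash(term_uris, shared_document=True):
    """Slash URIs served via 303: each term URI answers with a
    303 See Other (one round-trip each); with caching, a shared
    redirect target is fetched only once more."""
    redirects = len(term_uris)
    fetches = 1 if shared_document else len(term_uris)
    return redirects + fetches

# A 6-term hash vocabulary: one document, one round-trip.
# A 6-term slash vocabulary: 6 redirects + 1 fetch = 7 round-trips,
# matching Dan's "5 or 6 HTTP redirects plus the main fetch".
```

Under these assumptions a FOAF-sized (~60-term) slash vocabulary costs about 61 requests cold versus 1 for a hash namespace, which is the gap Richard's pre-loading point addresses.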
Received on Thursday, 10 April 2008 12:00:40 UTC