
Re: Squaring the HTTP-range-14 circle

From: Tim Berners-Lee <timbl@w3.org>
Date: Thu, 16 Jun 2011 21:22:43 -0400
Cc: Richard Cyganiak <richard@cyganiak.de>, public-lod@w3.org, Christopher Gutteridge <cjg@ecs.soton.ac.uk>
Message-Id: <9019FC8D-35C8-43CE-BAF6-CA04E7772821@w3.org>
To: Ian Davis <lists@iandavis.com>

On 2011-06-16, at 16:41, Ian Davis wrote:

> Tim,
> On Thu, Jun 16, 2011 at 6:04 PM, Tim Berners-Lee <timbl@w3.org> wrote:
>> I don't think 303 is a quick and dirty hack.
>> It does mean a large extension of HTTP to be used with non-documents.
>> It does have efficiency problems.
>> It is an architectural extension to the web architecture.
> We have had many years for this architectural extension to be adopted
> and many of us producing linked data have been diligent in supporting,
> promoting and educating people about it. Even I, with my many many
> attempts to get this decision reconsidered, have promoted the W3C
> consensus. Conversely, many more people have studied this extension
> and rejected it. Companies such as Google, Facebook, Microsoft and
> Yahoo, who are all W3C members and can influence these decisions
> through formal channels if they wish, have looked at the httpRange
> decision and decided it doesn't work for them.

I haven't seen them saying that.   I have only seen the 
resulting RDFa.  

> Instead they have chosen
> different approaches that require more effort to consume but lower the
> conceptual barrier for publishers. However, they are convinced of the
> need for URIs to identify things that are not just web pages which is
> a huge positive.

Each of these players has said they need to use the same URI for BOTH
the document and the dog?  Perhaps it was just that people who weren't
using RDFa in RDF systems took it as non-RDF, and so were not
combining it with information about the web page.

> These companies collectively account for a very large proportion of
> web traffic and activity. I think just saying that they're wrong and
> should change their approach is simply being dogmatic. They are
> telling us that we are wrong. We should listen to them.

I have not said that they are wrong in trying to make it very simple for 
people to say things about the subject of the page.
Facebook used the standard in the simplest way it could.

For example, OGP consistently uses a set of properties
of the style:

		<>  ogp:foo "Whatever".

where others might have written

		<> foaf:primarySubject <#grapes>.
		<#grapes>  ex:foo  "Whatever".

(Here ex: is a namespace parallel to ogp:.)
These triples are all consistent, in fact, so a rule which generates one from the
other is easy.  You can also do it in OWL by declaring ogp:foo to be a chain of primarySubject and ex:foo.
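Spelled out as a sketch (ex: remains the hypothetical parallel namespace from above, and the prefix declarations are assumed), the rule and the OWL version might look like:

```n3
# N3 rule (cwm-style): derive the OGP-style triple
# from the explicit two-triple form.
{ ?page foaf:primarySubject ?s . ?s ex:foo ?v }
    => { ?page ogp:foo ?v } .

# The same relationship in OWL 2: ogp:foo declared as the chain
# of foaf:primarySubject followed by ex:foo.
ogp:foo  owl:propertyChainAxiom  ( foaf:primarySubject  ex:foo ) .
```

Either form lets a consumer that understands the mapping treat the two publishing styles as equivalent.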

There are two ways to make this work, and many more could be devised:

1) Allow the "parallel" properties to exist and be related publicly to the normal ones.

2) Fix the RDFa/microdata/whatever syntax to make it trivial and obvious to make statements
about a single subject.

We can clearly do both. 

>> If you want to give yourself the luxury of being able to refer to the subject of a web page, without having to add anything to disambiguate it from the web page, for the sake of your system, so you can use the billion web pages for your purposes, then you stop others like me from using semantic web systems to refer to those web pages, or in fact to the other hundred million web pages either.
> The problem here is that there are so few things that people want to
> say about web pages compared with the multitude of things they want to
> say about every other type of thing in existence.

Well, that is a wonderful new thing.  For a long while it was difficult to
put data on the web, while there was quite a lot of metadata.
It is a wonderful idea that the semantic web may be beating the document
web hands down, but it is not at all clear that we should therefore trash
the use of URIs to refer to documents, as we do in the document web.

> Yet the httpRange
> decision makes the web page a privileged component.

With 200, yes, because you have to allow the existing web, small though you say it is, 
to still function when 200 is returned.

> I understand why
> that might have seemed a useful decision, after all this is the web we
> are talking about, but it has turned out not to be. The web page is
> only the medium for conveying information about the things we are
> really interested in.

That may be true, but that doesn't mean that anyone should
use the same URI for talking about both.
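The usual disambiguation keeps two URIs, one for the page and one for the thing; a made-up sketch, using a hash URI for the dog (ex: is a hypothetical vocabulary):

```n3
# The document and its subject get distinct URIs.
<http://example.org/fido>      a  foaf:Document ;
    foaf:primaryTopic  <http://example.org/fido#dog> .

<http://example.org/fido#dog>  a  ex:Dog ;
    ex:name  "Fido" .
```

Statements about the page attach to the first URI, statements about the animal to the second, and nothing collides.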

> The analogy is metadata about a book. Very little of it is about the
> physical book, i.e. the medium. Perhaps you would want to record its
> dimensions, mass, colour, binding or construction. There are many many
> more things you would want to record about the book's content, themes,
> people and places mentioned, author etc.
>> Maybe you should find an efficient way of doing what you want without destroying the system (which you as well have done so much to build)
> I think this is unreasonably strong. Nothing is being destroyed.
> Nothing has broken.

If you use HTTP 200 for something different, then 
you break my ability to look at a page, review it, and then
express my review in RDF,  using the page's URI as the identifier.
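Concretely (a made-up sketch; rev: is the review vocabulary, and the URIs and values are hypothetical): if the same URI is used with 200 for both the page and its subject, the two sets of statements pile up on one node:

```n3
# My statements, about the document:
<http://example.org/fido>  rev:rating  4 ;
    dc:creator  "Tim" .      # the author of the page

# The publisher's statements, using the same URI for the dog:
<http://example.org/fido>  a  ex:Dog ;
    ex:age  3 .

# A consumer can no longer tell whether the rating and the
# creator describe the web page or the animal.
```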

> A few days after I wrote this post
> (http://blog.iandavis.com/2010/12/06/back-to-basics/) I changed one of
> the many linked datasets I maintain to stop using 303 redirects over a
> few million resources. No-one has noticed yet. Nothing has broken.

That's funny, the web police are normally pretty sharp about looking 
for people writing things which are inconsistent. Not.

It's true, in fact, that the Tabulator, for example, will happily display the 
details of something which is in two disjoint classes without sending
email to the maker of the website.  Maybe it shouldn't.
It does, though, mess up the UI when, for example, a URI is used for both
a document and a person, which is a simple example of breakage.

If you think that you can populate the web with URIs which are
used ambiguously for web pages and people, then we have one
issue to deal with.

If you don't, but you want to populate the web with HTTP 200 resources
which are not documents, then we have another.

Which way do you go?   Or are you happy to introduce a 209 response for these things?


> Ian
Received on Friday, 17 June 2011 01:22:57 UTC
