Re: Discovery/Affordances

hello arnaud.

On 2013-06-14 8:22 , "Arnaud Le Hors" <lehors@us.ibm.com> wrote:
>Ok, help me understand where a requirement such as that a server must
>support turtle come into play in assessing conformance then?
>When we've talked about validation and test suite we've actually talked
>about more than a stand alone validator that would simply parse an LDP
>file. We've talked about an agent that would interact with an LDP server.
>Was that line of thinking completely off?

no, but there are two sides to this, and the tricky part here is the
distinction between media types (i am just using this term here because i
am used to it) and applications. and again, let's look at the human web:

- HTML just tells you what a web page must look like. you can validate a
page without having any notion of where it came from, or where the links
point to. even if HTML had requirements for HTTP headers being set in a
certain way, as long as you captured the interaction *with that resource*
(what request did you send, what did you get back), you can fully and
completely answer the question of whether that interaction worked
according to the rules, or not. (there's a little sketch of this after the
second point below.)

- you can also *validate a web site*, basically by crawling it and looking
at each individual interaction and judging it in the way described above.
but the catch is: for a client/crawler/validator doing this for, let's
say, 1000 pages, there is no way to tell whether it interacted with 1
server, 1000 servers, or something in between. it simply acted on
client-side rules (or maybe server-side rules, driven by something like
sitemaps) about which pages to crawl and where to stop, but that, strictly
speaking, in no way maps to "validating a server". it's simply "validating
a set of resources that often happen to follow a common URI pattern." you
built a "validator application".

so while you can build an agent that "crawls" a set of LDP resources
(driven by some rules), there really is nothing you can say beyond "this
interaction did not go the way i expected it to when i followed a link
that i was expecting would allow me to do a certain thing". again, looking
at the web: let's assume you follow an <img/> link and you GET a
text/plain response. who's to blame? and in which way? HTML clearly allows
you to expect image/*, but then again when you request some URI you GET
whatever the server decides to serve at that point in time, and maybe the
linked resource changed over time from serving image/* to text/plain. in
such a scenario, who would you complain to? to some extent, all of this
still "works", right? clients should be able to handle this (display a
little icon instead of the image) instead of just crashing. so as long as
the web page made a certain claim ("GET an image by following this link"),
that's something you can operate on and hope that it'll work out, but on
the web, you always have to expect failure. so all you can do when
crawling something like this is report that clients may have problems when
following this link, but *nobody actually violated HTML* or, let's say,
GIF or JPEG. it is just that as an application trying to accomplish the
goal of successfully loading all images on a page, in this case you won't
succeed.

on the other hand, if you built a validator checking for accessibility
conformance (say, for blind users), you might not even try to follow the
<img/> link, because for your application, it's not relevant. in that
case, everything works just fine.
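
in code, that "expect failure, degrade gracefully" attitude looks roughly
like this (again just a sketch; the requests library, the placeholder
value, and the link target are assumptions of mine, not anything a spec
prescribes):

import requests  # assumed HTTP client

PLACEHOLDER_ICON = b""  # whatever bytes the client shows for a broken image

def fetch_image(url):
    """follow an <img/> link. if the server hands back text/plain, a 404,
    or nothing at all, nobody "violated HTML"; the client just falls back
    to its placeholder instead of crashing."""
    try:
        resp = requests.get(url, timeout=5)
        ctype = resp.headers.get("Content-Type", "")
        if resp.ok and ctype.startswith("image/"):
            return resp.content   # we got what the page claimed we would get
    except requests.RequestException:
        pass                      # network failure is always on the table
    return PLACEHOLDER_ICON       # text/plain, 404, timeout: show the icon

image_bytes = fetch_image("http://example.org/logo")  # hypothetical link target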

ok, long story short: trying to drive validation "across links" gets very
complicated very quickly, because it reaches beyond the realm of media
types and into applications built on top of them. and these applications
can have all kinds of goals you might not know about. so validators at
that level always have to be driven by concrete scenarios and goals, and
those always will be more constrained and specialized than the media type
itself.

cheers,

dret.
