RE: Practical application

This is annoying: I strongly agree with your idea of putting out something
that can be done today, but on the other hand I can't help finding fault.

>No need to be sorry. It *is* just the HTML <meta> and some XML, though
>hardly arbitrary XML: it's the Dublin Core Metadata Element Set (DCMES).
>And I have no doubt that if this became the method used to indicate a
>subject classification, it would be very useful for search engines.

I'll apologise again (perhaps this time it might be valid): my response was
a bit of a knee-jerk reaction. I've been following the discussions, and
seeing on one hand attempts to validate the RDF M&S to the satisfaction of
the logicians, and on the other being all too aware of how web technologies
are used as glorified grocer's windows, my impression was that you were
taking indexing back into the library. Libraries are ok, but the web isn't
a library.
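
(For the record, I take it the sort of thing in question is just this in
the document <head> - I'm going from memory of the DCMI recommendation, so
the exact names may be off:

   <link rel="schema.DC" href="http://purl.org/dc/elements/1.1/">
   <meta name="DC.Subject" scheme="DDC" content="595">
   <meta name="DC.Date" content="2001-06-23">

i.e. plain HTML that any browser will silently ignore.)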

>And you're correct, Dublin Core might not express "everything that might
>be needed," but my point all along is that while some people fritter away
>entire lifetimes of man hours developing incredibly complex specifications
>that few mortals can figure out, this is a very simple way of performing
>a very simple task that currently has no widely-accepted solution in markup.

Very, very true.

>I thought I made it very clear in the draft (perhaps you didn't read that
>part) that this was an attempt to invent *very little*. Yes, the idea of
>embedding Dublin Core metadata in <meta> tags was invented by DCMI; yes,
>the <meta> tag already existed; yes, DCMES already existed.

Ok, I admit I didn't read it thoroughly on first viewing, but I'm afraid my
reaction of "ok, what's new?" doesn't really get annulled by the argument
that "nothing's new". Or maybe it does.

>The value here is that for the first time an author can annotate a very
>specific piece of markup (a "document component") with metadata following
>an already-established means, can identify the subject (and about a
>dozen other things, such as responsible party, revision date, format,
>etc.) according to a controlled vocabulary (which is extensible to
>fit their particular vertical industry need), and in a way that works with
>browsers *today*.

Ditto.
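
(On the "document component" part, I'm picturing something roughly like

   <div id="intro"> ... </div>
   <meta name="DC.Subject" scheme="DDC" content="595">

with the <meta> tied to the id of the fragment rather than to the page as
a whole - that's a guess at the mechanism on my part, and "intro" is just a
made-up id; I'll check the draft for how the association is actually made.)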

>I don't expect browsers to "interpret" this information

All I meant there was: what difference would it make to Netscape or IE n -
would it be transparent to them, or would it confuse them?

>at all. I expect search engines or metadata harvesters to be able to
>locate information about specific content (by subject) without resorting
>to a brain-dead keyword search. If I'm searching on "Harvester Ants" for
>a bio paper, I can search on Dewey 595 or Library of Congress QL568. If
>I'm looking for a particular patent application I can search on "Patent
>73638-398-737", if I'm searching for information on a "polymurrayphase
>interaction in pembroke corgification" (which just happens to be part of
>the "MCSF-SPT" scheme as classification index "PIPC-32" in my vertical
>industry"), well here's a way to do that.

The client has a browser in front of them, and maybe Google too ("cor-gi"
being two Latin words meaning feet-of-rats and body-of-donkey). How does
your proposal improve on meta:description, meta:keywords and raw
word-mangling of the text?
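
In other words, what does, say,

   <meta name="DC.Subject" scheme="LCC" content="QL568">

buy the average searcher over

   <meta name="keywords" content="harvester ants">

? (I'm recalling the DC-in-HTML encoding from memory, so the element name,
scheme attribute and capitalisation may well be off.)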

>It doesn't slice your bread or solve world hunger. It does perform one
>of the fundamental things I've read about in descriptions of the "Semantic
>Web": allow an author to clearly identify the subject of a particular
>document fragment, and not just the entire document.

I can't deny that addressing the fragments is a step forward. I'll read it
again.

Danny.

Received on Saturday, 23 June 2001 20:41:39 UTC