- From: Mark Birbeck <mark.birbeck@webbackplane.com>
- Date: Sun, 31 Oct 2010 17:35:36 +0000
- To: nathan@webr3.org
- Cc: Shane McCarron <shane@aptest.com>, Arto Bendiken <arto.bendiken@gmail.com>, RDFA Working Group <public-rdfa-wg@w3.org>
Hi Nathan,

On Sun, Oct 31, 2010 at 4:18 PM, Nathan <nathan@webr3.org> wrote:
> Mark Birbeck wrote:
>>
>> Hi Nathan,
>>
>> I know this is not meant to be a completely precise discussion :) but
>> I'm a little worried that we might lose sight of our vision here.
>
> Likewise, many previous mails discussed two or three levels of API. My
> take-away from those was that our vision was to have a nice level 2 or
> level 3 API, but our remit was to provide a level 1 API (basic RDF
> support) and an RDFa extension to interact with RDFa documents in the DOM.

I'm not really sure where that came from, but the history is that the first
draft of the API was somewhat in the middle. It had many of the RDF
concepts, but it was unclear what was happening 'below' it, and it lacked a
layer 'above' it for JavaScript programmers.

At the lower level it didn't have separate parser and store interfaces,
which meant that anyone who wanted to create a new storage mechanism (e.g.,
HTML5 local storage) or a new parser (e.g., microformats or microdata)
would have had to implement the entire set of interfaces in their object;
by separating the interfaces we made the whole thing pluggable, allowing
people to extend an existing library with new parsers and/or stores. But I
wouldn't say that was part of our 'remit' -- it's just good API design.

Similarly, at the upper level the first draft of the API expected JS
programmers to get to grips with triples and literals; by introducing the
concept of property groups we made a good start in trying to bridge the
worlds of RDF and JavaScript programming. But once again, I don't think
that was part of our 'remit' -- it just made a lot of sense in terms of
trying to create something that was appropriate for a wider audience.
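To make the parser/store separation concrete, here is a minimal sketch of
the idea. The class names (`MemoryStore`, `FakeRdfaParser`) and the triple
shape are my own illustrations, not quotations from the draft API; the
point is only that once the two interfaces are distinct, either side can be
replaced independently.

```javascript
// A store only knows how to hold and retrieve triples; it never parses.
class MemoryStore {
  constructor() { this.triples = []; }
  add(triple) { this.triples.push(triple); }
  // null acts as a wildcard for any of the three positions.
  match(subject, predicate, object) {
    return this.triples.filter(t =>
      (subject == null || t.subject === subject) &&
      (predicate == null || t.predicate === predicate) &&
      (object == null || t.object === object));
  }
}

// A parser only knows how to turn a document into triples, pushing each
// one into whatever store it is handed. (A real RDFa parser would walk
// the DOM; this fake one just copies pre-made triples.)
class FakeRdfaParser {
  parse(doc, store) {
    for (const t of doc) store.add(t);
  }
}

// Because the interfaces are separate, an IndexedDB-backed store or a
// microdata parser could be plugged in here without touching the other.
const store = new MemoryStore();
new FakeRdfaParser().parse(
  [{ subject: '_:me', predicate: 'foaf:name', object: 'Nathan' }],
  store);
console.log(store.match(null, 'foaf:name', null).length); // 1
```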
> Thus, that we should define just that, but also leverage our collective
> understanding and vision of level 2/3 APIs to ensure that what we do is
> compatible and usable by future RDF APIs and libraries which wish to
> implement that kind of functionality now.

Indeed. And I think we're doing that.

> Something like: Here, we've defined the interfaces for the RDF concepts,
> for Graphs, for parsing and processing documents; take that, extend it
> and use it in your libraries -- and whilst you're there, provide these
> methods on the DOM so that regardless of which library (or vendor!)
> people are using, they have standardized access to RDFa documents.

I'm not following you... that's simply asking people to implement the API,
isn't it?

>> Of course we want to support RDF programmers.
>>
>> But we have also said many times that we want the API to be usable by
>> JavaScript programmers.
>>
>> We need to constantly remember our constituencies, and for the latter
>> constituency, if, as you say, they
>>
>>> ...have to deal with triples, subject, predicate,
>>> object, plain literals, typed literals and so forth.
>>
>> then we have failed!
>
> Perhaps, or perhaps we will have had great success enabling innovation in
> libraries whilst standardizing the core to ensure interoperability.

I'm not seeing that.

First, you seem to be implying that innovation won't come from within this
group, and that we should therefore limit ourselves simply to exposing
triples and let others finish the job. I think that's a blinkered view of
the resources available to this group, especially given this group's track
record of innovating.

Second, innovation doesn't happen at the level of the interface; it happens
at the level of what you put into the components that make use of the
interface.

It's like electrical plugs. :) Who cares whether they have three pins or
two, carry 240 volts or 120, have round pins or flat ones?
The fact is that by having a standard across a wide enough geographical
area you can innovate; but you do that in computers and TVs, not at the
interface level. So our task is merely to ensure that we've broken the API
into a sufficient number of components so that each of those components can
become a site for innovation. How those components talk to each other is
less significant.

> If we standardize the concepts and the basic interfaces, then end users
> can use rdfx.js to lift the triples from a document (because it's
> faster), then throw the standardized Graph of RDF Triples into the store
> provided by rdfy.js (because it provides an IndexedDB-powered persistent
> store), and query them along the way with rdfq.js (because it's got a
> great in-memory query engine) -- if we standardize the goggles through
> which libraries see RDF then we enable interoperability at a core level
> and encourage an innovative open marketplace of RDF libraries.

But that's *exactly* what is happening; we created a pluggable architecture
so that this can be done. That's what interfaces are all about! (See
above.)

> Personally I feel that if we provide an API that basically means all JS
> libraries are the same, providing the same functionality, then we will
> have failed.

I have no idea where this is coming from. We're providing an API where all
of the *interfaces* are the same. But the functionality that is enabled by
these interfaces should be unlimited.

> The other side of this is the RDFa extensions, which enable working with
> RDFa documents in the DOM. On that side we standardize the interfaces so
> that RDFa-centric scripts, libs and plugins work across all vendors, so
> that johnny-js-dev can knock up a little script which highlights all his
> friends on his personal homepage, and know that, just as when he uses
> document.getElementById, it'll work. Likewise for the future authors of
> some omg-its-amazing-and-so-easy jRDFaQuery library.
Yes, indeed, and I think we've made a good start on that too. Note the
careful choice of method names, for example, in an attempt to make
developers feel some familiarity.

> I guess my point is, let's not aim to provide some amazing API in the
> belief that we can do it perfectly; rather, let's standardize just enough
> to enable interoperability and allow others (and ourselves) to create
> those amazing APIs & libraries.

But we are defining the *interfaces* between components. What people put
into those components is their own business, so I think we are already
doing exactly what you say. (And I'm not sure why you think we're not.)

>> As it happens, 'graph' is no longer a computer-sciency term; thanks to
>> Facebook, most programmers will be familiar with the idea of the 'social
>> graph' that expresses relationships between people, so all we need to do
>> is indicate that we have a mechanism for managing and querying graphs of
>> all types.
>>
>> But even that point is moot, since the novice programmer will begin with
>> data stores, not graphs -- the store should manage one or more graphs,
>> which gives programmers a choice as to which of these levels they wish
>> to use when interacting with the data.
>
> I can only agree partially with that. In some cases they'll want a store,
> in others a set of triples (graph), and in most they'll want neither and
> just want to do something like:
> $("foaf:Person").click(showDetails);

Yes, but again you're describing what we're already doing.

> So I suggest we ensure that regardless of which lib they use, a triple is
> a triple and a graph is a graph, and regardless of which platform that
> lib is running on, it can depend on the functionality of
> document.getElementsByType.

I can't decide which metaphor is most appropriate; I think you're kicking
at an open door, but I also can't help thinking that you are tilting at
windmills when you imply that we're naively striving for 'perfection'.
I certainly believe there are some areas that we need to improve in the
API, but I also believe that we are on pretty solid ground in the way that
we've broken the interfaces up, and that things will continue to improve.

Regards,

Mark
Received on Sunday, 31 October 2010 17:36:57 UTC