Re: RDF *already* supports literal subjects - a thought experiment

Paola--

Please note that I have not opposed comprehensive testing and measurement.  I'm simply suggesting that it's easy to advocate the sort of comparison testing and measurement you seem to be proposing, but it's difficult and expensive to actually do.  On a meta-level, it's also difficult to determine the relative costs and difficulty of actually doing the testing, versus working around the problem, versus simply developing an alternative and seeing what happens.  It's always been this way.  As you say, lots of money has been spent developing things (software development methodologies come to mind) on the theory that they were improvements on conventional practice, but without a whole lot of (if any) rigorous testing to determine that they were actually better.  Usually they were simply dropped into the marketplace (if only the marketplace of ideas) to see how they worked out, in lieu of actually conducting tests.  I suspect that in most cases it was just as well the effort wasn't spent on conducting the tests; it was usually better spent developing still newer stuff.  In any event, whether my suspicions were right or wrong, it does seem to me fair to ask those who are advocating "rigorous benchmarking/testing and measuring corresponding performance levels, costs, benefits, etc with painstaking detail" *in this particular case* to provide more detail (beyond simple advocacy) about what this program would look like, and how they would carry it out.

--Frank 


On Jul 13, 2010, at 11:22 AM, Paola Di Maio wrote:

> Frank
> 
> 
> I think the research industry has already spent/invested a lot of money (so much that it cannot be counted) developing applications before the relevant concepts/standards were tested in the real world. We ended up with suites, toolkits, platforms, and services, all with limited usefulness/impact (euphemism intended).
> 
> Where did the money come from to support years and years of research projects and institutes for something that had never
> been tested in practice?
> 
> PhDs? Research money?
> 
> Time to start putting some toward performance measurements?
> 
> Now, years later, the consortium is facing, amongst others, a dilemma: change the RDF spec or not?
> (from what I understand of these conversations)
> 
> I am sure there are valid arguments for and against, so it's a matter of making the best possible decision to maximise the opportunities/benefits and minimise the risks/costs. I can see some people are thinking in that direction.
> 
> I don't think such decisions should be made solely on opinions and beliefs, even less on the abstract elegance of the belief in question, no matter how authoritative and respectable the sources.
> 
> Some measurement can be done with simulations. Whoever has invested money so far building applications surely would be happy to chip in to
> make sure the proposed future steps are sound, either way? There are zillions of ways such evaluations can be done, but it's important to
> get people on board who have those skills/abilities, or who are at least capable of thinking that way.
> 
> Find some software engineers or systems engineers to work with? 
> 
> Once there is enough agreement and understanding, some testing could be helpful to support the right decisions/directions; it should not be
> difficult to come up with ways of getting that done in practice.
> 
> 
> P
> 
> On Tue, Jul 13, 2010 at 2:47 PM, Frank Manola <fmanola@acm.org> wrote:
> This discussion reminds me of the old saying "in theory, theory and practice are the same, but in practice, they're different".  Pat's description of a number of potential problems with the data URI idea seemed to prompt Paola's comment that ideas should be tested and the results measured, Henry's response to that, Lin's response to that, and so on.  All the points made are reasonable enough in "theory" (e.g., certainly things ought to be tested, in general).  In practice, how do people propose conducting a realistic test/measurement of data URIs specifically?  I don't mean just writing draft specs of the proposed changes, implementing them, and running a few simple apps.  I mean putting the implementation to use with extensive realistic apps and seeing what happens, in comparison with the current specs and implementations, over a reasonable period of time.  This seems to me to be the only way to get the "rigorous benchmarking/testing and measuring corresponding performance levels, costs, benefits, etc with painstaking detail" that Paola described.  In theory, this seems like a fine idea.  In practice, expecting this sounds unrealistic in the extreme (e.g., whose organization is prepared to pay for this?).  So what kind of testing do people have in mind that they think would be satisfactory to decisively determine the right approach?
> 
> --Frank
> 
> On Jul 13, 2010, at 8:21 AM, Lin Clark wrote:
> 
> > On Tue, Jul 13, 2010 at 9:20 AM, Henry Story <henry.story@gmail.com> wrote:
> > Paola, not everything is amenable to testing, measurement, etc... For example, this
> > would hardly make any sense for most of mathematics, since that is what gives you the tools for doing the measurements in the first place. Logic, to which it was thought most of maths could be reduced, therefore has the same issue. Similarly, by the way, for aesthetic values. Or even for ethical ones. How would you go about testing whether "killing is bad"? (Rhetorical question.)
> >
> > I have to chime in and disagree on this point. Much of the discussion hasn't centered on the logical and mathematical perfection of any solution, but on what impact solutions have on use and users.
> >
> > A priori reasoning is particularly unsuited to this kind of problem, especially where the research community is ethnographically different from the users it is trying to reach, as we are. In this particular case, we are trying to reach developers, often Web developers, who most likely have a very different understanding of the world than the bulk of the Semantic Web research community. If we look to other fields, we can see how user science has been applied in the development of systems, languages, and APIs by treating developers as users.
> >
> > If assertions about human use are part of the argument, then empirical research about how humans use the tools should be part of the research and evaluation. We need to build a scientific literature that actually addresses these issues instead of assuming that the human mind is the best of all possible (logical) worlds.
> >
> > -Lin

Received on Tuesday, 13 July 2010 16:41:23 UTC