- From: Paola Di Maio <paola.dimaio@gmail.com>
- Date: Tue, 13 Jul 2010 17:02:02 +0000
- To: Frank Manola <fmanola@acm.org>
- Cc: Semantic Web <semantic-web@w3.org>
- Message-ID: <AANLkTinVXOqrpkhWQuLs_21OslXnl9RTzqg9wX0OZta_@mail.gmail.com>
Frank,

> it does seem to me fair to ask those who are advocating "rigorous
> benchmarking/testing and measuring corresponding performance levels,
> costs, benefits, etc. with painstaking detail" *in this particular case*
> to provide more detail (beyond simple advocacy) about what this program
> would look like, and how they would carry it out.

I have not carried out a feasibility study for what I propose; I just think it would be an approach/direction worth considering when making decisions. Rigorous benchmarking, etc., is done in industry, and I accept that an open community may not have the organisation and resources in place to do so, but start with considering the approach. Open a page where people can structure their thinking. Henry has already come up with some thoughts, which I read very fast but which sound good.

I would start by encouraging a team of people on this list who agree that such measures would be beneficial, certainly those who have contributed to this particular discussion, to brainstorm around how to go about actually trying out these ideas before implementing either. I presume there is some good social capital on this list. Probably there are people beyond this list who could contribute time and expertise: all the professors, the doctors, the consortia?

Once you know who's interested in contributing in terms of time, skills, maybe some student or departmental time, tools, etc., make a plan accordingly: some element of crowdsourcing within the SW community, first making a plan, then breaking the plan down into small tasks that can each be performed with limited effort on a voluntary basis. Even one small measurement is better than none?

Ultimately it looks like it's the WG which has to plan for this activity, and if some knowledge/skills/resources are not available within the WG, outsource them to the wider SW community and beyond.

Again, I am not trying to sell anything, and I am not sure whether the plan I propose here is feasible (that would not mean the suggestion cannot be implemented otherwise); I am just sharing thoughts. I am sure others can build on them.

PDM

> --Frank
>
> On Jul 13, 2010, at 11:22 AM, Paola Di Maio wrote:
>
> > Frank,
> >
> > Think: the research industry has already spent/invested a lot of money
> > (so much that it cannot be counted) developing applications before the
> > relevant concepts/standards were tested in the real world. We ended up
> > with suites, toolkits, platforms, and services, all with limited
> > usefulness/impact (euphemism intended).
> >
> > Where did the money come from to support years and years of research
> > projects and institutes for something that had never been tested in
> > practice?
> >
> > PhDs? Research money?
> >
> > Time to start putting some toward performance measurements?
> >
> > Now, years later, the consortium is facing, amongst others, a dilemma:
> > change the RDF spec or not change it? (From what I understand of these
> > conversations.)
> >
> > I am sure there are valid arguments for and against, so it's a matter
> > of making the best possible decision to maximise the
> > opportunities/benefits and minimise the risks/costs. I can see some
> > people are thinking in that direction.
> >
> > I don't think such decisions should be made solely on opinions and
> > beliefs, even less based on the abstract elegance of the belief in
> > question, no matter how authoritative and respectable the sources.
> >
> > Some measurement can be done with simulations.
> > Whoever has invested money so far building applications surely would
> > be happy to chip in to make sure the proposed future steps are sound,
> > either way? There are zillions of ways such evaluations can be done,
> > but it's important to get people on board who have those
> > skills/abilities or are at least capable of thinking that way.
> >
> > Find some software engineers or systems engineers to work with?
> >
> > Once there is enough agreement and understanding, some testing could
> > be helpful to support the right decision/directions; it should not be
> > difficult to come up with ways of getting that done in practice.
> >
> > P
> >
> > On Tue, Jul 13, 2010 at 2:47 PM, Frank Manola <fmanola@acm.org> wrote:
> >
> > > This discussion reminds me of the old saying "in theory, theory and
> > > practice are the same, but in practice, they're different". Pat's
> > > description of a number of potential problems with the data URI idea
> > > seemed to prompt Paola's comment that ideas should be tested and the
> > > results measured, Henry's response to that, Lin's response to that,
> > > and so on. All the points made are reasonable enough in "theory"
> > > (e.g., certainly things ought to be tested, in general). In practice,
> > > how do people propose conducting a realistic test/measurement of data
> > > URIs specifically? I don't mean just writing draft specs of the
> > > proposed changes, implementing them, and running a few simple apps. I
> > > mean putting the implementation to use with extensive realistic apps
> > > and seeing what happens, in comparison with the current specs and
> > > implementations, over a reasonable period of time. This seems to me
> > > to be the only way to get the "rigorous benchmarking/testing and
> > > measuring corresponding performance levels, costs, benefits, etc.
> > > with painstaking detail" that Paola described. In theory, this seems
> > > like a fine idea. In practice, expecting this sounds unrealistic in
> > > the extreme (e.g., whose organization is prepared to pay for this?).
> > > So what kind of testing do people have in mind that they think would
> > > be satisfactory to decisively determine the right approach?
> > >
> > > --Frank
> > >
> > > On Jul 13, 2010, at 8:21 AM, Lin Clark wrote:
> > >
> > > > On Tue, Jul 13, 2010 at 9:20 AM, Henry Story
> > > > <henry.story@gmail.com> wrote:
> > > >
> > > > > Paola, not everything is amenable to testing, measurement,
> > > > > etc... For example, this would hardly make any sense for most of
> > > > > mathematics, since that is what gives you the tools for doing the
> > > > > measurements in the first place. Logic, to which it was thought
> > > > > that most of maths could be reduced, therefore has the same
> > > > > issue. Similarly, by the way, for aesthetic values, or even for
> > > > > ethical ones. How would you go about testing whether "killing is
> > > > > bad"? (Rhetorical question.)
> > > >
> > > > I have to chime in and disagree on this point. Much of the
> > > > discussion hasn't centered around the logical and mathematical
> > > > perfection of any solution, but on what impact solutions have on
> > > > use and users.
> > > >
> > > > A priori reasoning is particularly unsuited to this kind of
> > > > problem, particularly where the research community is
> > > > ethnographically different from the users they are trying to
> > > > reach, as we are. In this particular case, we are trying to reach
> > > > developers, oftentimes Web developers, who most likely have a very
> > > > different understanding of the world than the bulk of the Semantic
> > > > Web research community.
> > > > If we look to other fields, we can see how user science has been
> > > > applied in the development of systems, languages, and APIs by
> > > > treating developers as users.
> > > >
> > > > If assertions about human use are part of the argument, then
> > > > empirical research about how humans use the tools should be part
> > > > of the research and evaluation. We need to build a scientific
> > > > literature that actually addresses these issues instead of
> > > > assuming that the human mind is the best of all possible (logical)
> > > > worlds.
> > > >
> > > > -Lin
Received on Tuesday, 13 July 2010 17:02:37 UTC