- From: Giovanni Tummarello <giovanni.tummarello@deri.org>
- Date: Tue, 27 Mar 2012 22:01:38 +0200
- Cc: Jeni Tennison <jeni@jenitennison.com>, public-lod community <public-lod@w3.org>
Tom, if you were to do a serious assessment, then measuring milliseconds and redirect hits means looking at a misleading 10% of the problem. Cognitive load, economics and perception of benefits are over 90% of the question here. An assessment that could begin describing the issue:

* get a normal webmaster, work out how long it takes to explain the thing to him, follow him along, and see how quickly he forgets;
* assess how long it takes to VALIDATE that the whole thing works (e.g. a newly implemented spec);
* assess what tools would check if something breaks;
* assess the same for implementers, e.g. of applications or consuming APIs, to get all of the above;
* then, once you have calculated the huge cost above, compare it with the perceived benefits.

THEN REDO ALL OF THE ABOVE AT MANAGEMENT LEVEL once you're finished with the technical level, because for sites that matter IT'S MANAGERS THAT DECIDE; geek-run websites don't count, sorry.

Same thing when looking at 'real world applications': counting just geeky, hacked-together demonstrators or semweb aficionados' libraries has the same skew. These people and apps were paid for by EU money or research money or the like, so they shouldn't count toward real-world, economics-driven apps; if one was thinking of counting 50 "apps that would break", that would be just as partial and misleading. And we could go on.

Now, do you really need to do the above? (Let alone how difficult it is to do in proper terms.) Me and a whole crowd already know the results; the same exercise has been done over and over and we've been witnessing it.

I sincerely hope this is the time we get this fixed so we can indeed go back and talk about the new linked data (linked data 2.0) to actual web developers, IT managers etc. Removing the 303 thing doesn't solve the whole problem; it is just the beginning.
Looking forward to discussing next steps,

Gio

On Mon, Mar 26, 2012 at 6:13 PM, Tom Heath <tom.heath@talis.com> wrote:
> Hi Jeni,
>
> On 26 March 2012 16:47, Jeni Tennison <jeni@jenitennison.com> wrote:
>> Tom,
>>
>> On 26 Mar 2012, at 16:05, Tom Heath wrote:
>>> On 23 March 2012 15:35, Steve Harris <steve.harris@garlik.com> wrote:
>>>> I'm sure many people are just deeply bored of this discussion.
>>>
>>> No offense intended to Jeni and others who are working hard on this,
>>> but *amen*, with bells on!
>>>
>>> One of the things that bothers me most about the many years' worth of
>>> httpRange-14 discussions (and the implications that HR14 is
>>> partly/heavily/solely to blame for slowing adoption of Linked Data) is
>>> the almost complete lack of hard data being used to inform the
>>> discussions. For a community populated heavily with scientists I find
>>> that pretty tragic.
>>
>> What hard data do you think would resolve (or if not resolve, at least
>> move forward) the argument? Some people are contributing their own
>> experience from building systems, but perhaps that's too anecdotal?
>> Would a structured survey be helpful? Or do you think we might be able
>> to pick up trends from the webdatacommons.org (or similar) data?
>
> A few things come to mind:
>
> 1) a rigorous assessment of how difficult people *really* find it to
> understand distinctions such as "things vs documents about things".
> I've heard many people claim that they've failed to explain this (or
> similar) successfully to developers/adopters; my personal experience
> is that everyone gets it, it's no big deal (and IRs/NIRs would
> probably never enter into the discussion).
>
> 2) hard data about the 303 redirect penalty, from a consumer and
> publisher side. Lots of claims get made about this but I've never seen
> hard evidence of the cost of this; it may be trivial, we don't know in
> any reliable way.
> I've been considering writing a paper on this for
> the ISWC2012 Experiments and Evaluation track, but am short on spare
> time. If anyone wants to join me please shout.
>
> 3) hard data about occurrences of different patterns/anti-patterns; we
> need something more concrete/comprehensive than the list in the change
> proposal document.
>
> 4) examples of cases where the use of anti-patterns has actually
> caused real problems for people, and I don't mean problems in
> principle; have planes fallen out of the sky, has anyone died? Does it
> really matter from a consumption perspective? The answer to this is
> probably not, which may indicate a larger problem of non-adoption.
>
>> The larger question is how do we get to a state where we *don't* have
>> this permathread running, year in year out. Jonathan and the TAG's aim
>> with the call for change proposals is to get us to that state. The idea
>> is that by getting people who think that the specs should say something
>> different to "put their money where their mouth is" and express what
>> that should be, we have something more solid to work from than reams
>> and reams of opinionated emails.
>
> This is a really worthy goal, and thank you to you, Jonathan and the
> TAG for taking it on. I long for the situation you describe where the
> permathread is 'permadead' :)
>
>> But we do all need to work at it if we're going to come to a consensus.
>> I know everyone's tired of this discussion, but I don't think the TAG
>> is going to do this exercise again, so this really is the time to
>> contribute, and preferably in a constructive manner, recognising the
>> larger aim.
>
> I hear you. And you'll be pleased to know I commented on some aspects
> of the document (constructively I hope). If my previous email was
> anything but constructive, apologies - put it down to httpRange-14
> fatigue :)
>
> Cheers,
>
> Tom.
>
> --
> Dr. Tom Heath
> Senior Research Scientist
> Talis Education Ltd.
> W: http://www.talisaspire.com/
> W: http://tomheath.com/
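[Editor's note: Tom's point (2), hard data on the 303 redirect penalty, is straightforward to prototype. A minimal sketch, assuming a throwaway local HTTP server that implements the httpRange-14 pattern (a `/resource/…` URI for the thing issuing a 303 See Other to a `/data/…` document about it); all paths and the Turtle payload are illustrative, not from the thread:]

```python
# Sketch: measure the extra cost of a 303-mediated fetch vs. a direct fetch.
# The server below mimics the httpRange-14 pattern on localhost; real-world
# numbers would need remote publishers, where the extra round trip dominates.
import http.server
import threading
import time
import urllib.request


class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/resource/"):
            # URI identifying the *thing*: 303 See Other to a document about it
            self.send_response(303)
            self.send_header("Location", "/data/" + self.path.split("/")[-1])
            self.end_headers()
        else:
            # URI identifying the *document*: serve a description directly
            body = b"<#thing> a <#Thing> ."
            self.send_response(200)
            self.send_header("Content-Type", "text/turtle")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass


def timed_get(url):
    """Fetch url (following redirects, as urllib does by default) and time it."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start


server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

via_303 = timed_get(f"http://127.0.0.1:{port}/resource/thing")  # two round trips
direct = timed_get(f"http://127.0.0.1:{port}/data/thing")       # one round trip
print(f"303 path: {via_303 * 1000:.2f} ms, direct: {direct * 1000:.2f} ms")
server.shutdown()
```

On localhost the difference is mostly protocol overhead; the consumer-side penalty Tom asks about is dominated by the additional network round trip, so a serious experiment would time real publishers from real client locations.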
Received on Tuesday, 27 March 2012 20:02:30 UTC