- From: Markus Lanthaler <markus.lanthaler@gmx.net>
- Date: Wed, 31 Dec 2014 17:21:52 +0100
- To: "'Hydra'" <public-hydra@w3.org>
>>> The domain of this property would then be the union of mapping and
>>> template.

[We are talking about hydra:variableRepresentation here]

>> Which tells you what? :-P
>
> That it's either one or the other :-)
> You can make useful deductions from that.

Right... you prevent it from being used on something else in the future.
If, on the other hand, we used schema:domainIncludes, we would tell clients
and humans that it is *most likely* a template or a variable mapping, but
we would keep the door open to use it with something else in the future.

>> On the other hand, whether I have
>>
>> hydra:variableRepresentation rdfs:domain hydra:IriTemplate
>>
>> _:x hydra:variableRepresentation hydra:BasicRepresentation
>> ...
>>
>> or
>>
>> _:x rdf:type hydra:IriTemplate
>> _:x hydra:variableRepresentation hydra:BasicRepresentation
>> ...
>>
>> tells me (and a machine) exactly the same thing. The only difference is
>> that the author needs to be more explicit. The clear advantage is that
>> it gives us the flexibility to reuse hydra:variableRepresentation on
>> things other than IRI templates in the future.
>
> To me, this is the kind of flexibility that only humans deal well with.

Well, it depends on how you look at it. Let's say in a couple of years we
release Hydra 2.0. With this approach clients would simply ignore that
entity (_:x) because they wouldn't understand what it is (assuming it is
something other than a hydra:IriTemplate). If we use rdfs:domain, there
would be no way to use hydra:variableRepresentation on anything else in
the future. Clients would simply assume that it is an IriTemplate,
regardless of whether the author defined it as such or not.

> Rigidity is why we have ontologies in the first place;
> if machines understood more nuance, they wouldn't need them.

Hmm... if you are in a closed system that you fully control, you can be
that rigid, as you have the power to change whatever you want in the
future. If you operate on something like the Web, it is very difficult to
anticipate all potential use cases/requirements. Acknowledging that fact
gives you a whole lot of flexibility and enables serendipitous reuse.

>> Right. The question here is how we provide those instructions to
>> machines. Do we want authors to be explicit, or do we want to depend on
>> reasoning by making the vocabulary more explicit and rigid? I lean
>> towards the former because I expect that a lot of the things we define
>> now could be cleanly reused for other things in the future.
>
> I'd say both at the same time;
> be rigid, so machines *could* do it even with little info.
> But encourage humans to be explicit as well.

To me, this sounds like getting the worst of both worlds: clients need to
run a reasoner to be sure to catch everything (some authors may have been
lazy because they assumed all clients use reasoners), yet authors need to
be explicit because some clients may actually not use a reasoner.

> For instance, in our current Triple Pattern Fragments server, we say:
> <c> a hydra:Collection, hydra:PagedCollection.
> even though the former can be inferred from the latter.
> But if we are explicit, it also works with less advanced machines.
>
>>> It doesn't cost us significantly more to get the modeling right,
>>> but it enables much more intelligent clients.
>>
>> IMHO this has nothing to do with right or wrong modeling. The end
>> result is exactly the same.
>
> Only if people are explicit.

Of course. What I propose is to *require* people to be explicit in order
to simplify clients.
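To make this concrete, a rough sketch of what I have in mind (illustrative
only; the concrete terms and the example template value are made up for
this sketch and are of course still up for discussion):

    @prefix hydra:  <http://www.w3.org/ns/hydra/core#> .
    @prefix schema: <http://schema.org/> .

    # Vocabulary: schema:domainIncludes instead of rdfs:domain, so the
    # property stays reusable on other classes in the future.
    hydra:variableRepresentation
        schema:domainIncludes hydra:IriTemplate, hydra:IriTemplateMapping .

    # Instance data: the author explicitly types the resource, so a
    # client knows what _:x is without running a reasoner.
    _:x a hydra:IriTemplate ;
        hydra:template "/users{?name}" ;
        hydra:variableRepresentation hydra:BasicRepresentation .

A client that sees the explicit rdf:type statement knows right away how to
interpret _:x, without any entailment.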
Writing a validator to check such things wouldn't be that difficult (a
possible check is sketched below). People already have a hard enough time
with RDF; we don't need to make it even harder by requiring them to also
understand entailment etc. and to use reasoners (which still aren't
available in many languages and have highly nondeterministic
runtimes/processing needs).
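For example (again just a sketch), such a validator could flag
descriptions that leave out the explicit type with a single SPARQL query:

    PREFIX hydra: <http://www.w3.org/ns/hydra/core#>

    # Find every resource that uses hydra:variableRepresentation but
    # carries no explicit rdf:type statement.
    SELECT ?s WHERE {
      ?s hydra:variableRepresentation ?rep .
      FILTER NOT EXISTS { ?s a ?type }
    }

No reasoner involved.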
--
Markus Lanthaler
@markuslanthaler

Received on Wednesday, 31 December 2014 16:22:20 UTC