Re: Vocabulary 'tutorial'?

Good balancing act. :-)

> On 23 Mar 2015, at 15:07, Harry Halpin <hhalpin@w3.org> wrote:
> 
> Erik,
> 
> It's JSON-LD based, but we are accepting the reality of the
> situation: 99% of programmers are not currently using RDF or any
> semantic tooling (especially inference) and will glaze over any
> RDF-heavy parts of the spec. That's why any sub-class/sub-type
> relationships have to be coded as such in the spec.
> 
> Thus, most AS2.0 implementations will likely forget @context, not do
> any JSON-LD processing, etc. So we'll have an implicit context in the
> media type.
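> 
> For illustration only (the exact context URI and property names are
> whatever the spec ends up defining), a minimal AS2.0 activity might
> look like:
> 
>   {
>     "@context": "http://www.w3.org/ns/activitystreams",
>     "type": "Like",
>     "actor": "http://example.org/alice",
>     "object": "http://example.org/note/1"
>   }
> 
> A plain-JSON consumer just reads "type", "actor" and "object" and
> ignores "@context"; a JSON-LD consumer can expand the same document
> into RDF.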
> 
> However, we want the minority of RDF-enabled developers to be able to
> take advantage of their toolsets. Since it *will* feature URI-based
> extensibility, the vocabularies will use URIs.
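> 
> As a rough sketch of what URI-based extensibility could look like
> (the extension namespace and terms below are made up):
> 
>   {
>     "@context": [
>       "http://www.w3.org/ns/activitystreams",
>       {"ex": "http://vocab.example.org/gaming#"}
>     ],
>     "type": ["Like", "ex:Upvote"],
>     "ex:score": 42
>   }
> 
> Plain-JSON consumers can ignore the extra keys; JSON-LD consumers
> expand "ex:Upvote" and "ex:score" to full URIs.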
> 
> For vocabularies, we could just use lists of URIs. I see no harm in
> allowing these vocabularies to use RDF(S), since the communities that
> care about extensibility may end up using RDF more than others.
> However, again, tooling for RDF vocabulary creation is, 15 years into
> RDF, still seemingly rather undeveloped.
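> 
> For example, a vocabulary term could be published as a bare URI plus
> an optional RDFS description, expressed here as JSON-LD so non-RDF
> tooling can still read it (the term itself is invented for
> illustration):
> 
>   {
>     "@context": {
>       "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
>       "as": "http://www.w3.org/ns/activitystreams#"
>     },
>     "@id": "http://vocab.example.org/gaming#Upvote",
>     "@type": "rdfs:Class",
>     "rdfs:subClassOf": {"@id": "as:Like"},
>     "rdfs:comment": "A positive vote on a piece of content."
>   }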
> 
> If we mandated full RDF processing, probably at least half of the
> Working Group would walk away (i.e. the IndieWeb folks). If we
> mandated no extensibility, we'd repeat the mistake of AS1.0 and we'd
> have to redesign AS3.0 pretty soon.
> 
> So JSON-LD is a compromise. If in the future RDF takes off, more
> people can use it. If it fails, then we'll just treat AS2.0 as
> ordinary JSON. It's a win-win situation and it seems any objections
> (i.e. mandating inference, etc.) are effectively edge-cases that
> ignore the reality of modern web development.
> 
>  cheers,
>       harry
> 
> 
> On 03/23/2015 11:28 AM, ☮ elf Pavlik ☮ wrote:
>> On 03/23/2015 10:23 AM, Erik Wilde wrote:
>>> hello elf.
>>> 
>>> On 2015-03-23 10:07, ☮ elf Pavlik ☮ wrote:
>>>> On 03/23/2015 09:56 AM, Erik Wilde wrote:
>>>>> i am wondering if/why semantic tooling would even be
>>>>> required. if we say that AS2 is JSON-based, then there's no
>>>>> requirement to define new vocabularies with RDF, correct?
>>>>> semantic tooling would be necessary for those who *want* to
>>>>> use it, but that would be outside of AS's scope.
>>>> If we design RDF ontologies, those who want to still use them as
>>>> plain JSON can do that thanks to JSON-LD. It comes with certain
>>>> limitations, but we can consider it a Lite mode which does not
>>>> provide all the robust features. If we don't keep RDF in mind
>>>> while designing, it may not work very well for someone who wants
>>>> to use the more powerful features and treat it as Linked Data.
>>> 
>>> that's the point i've been trying to make, and the decision we've
>>> been dancing around for a couple of months: is AS2 JSON-based, or
>>> is it RDF-based? saying that it's "JSON-LD based" really doesn't
>>> solve the problem, it simply provides rhetoric to justify our
>>> inability to decide.
>>> 
>>>>> the approach follows the idea of
>>>>> https://github.com/dret/sedola, which provides a basic
>>>>> documentation harness (in the case of sedola it's used for
>>>>> media types, HTTP link headers and link relation types),
>>>>> without forcing people to subscribe to a single modeling
>>>>> framework that's required to formally describe these things.
>>>> Could you please give an example of how those who want to treat
>>>> it as Linked Data can simply do so? Once again, we cannot just
>>>> say "we don't mind if you try to use it as Linked Data"; if we
>>>> want to make it possible, we must keep it in mind when we design
>>>> things.
>>> 
>>> it all comes down to how things are defined. if we *require* all
>>> identifiers to be dereferenceable, then we (probably) require
>>> people to publish RDF at those URIs. if on the other hand we
>>> treat identifiers as identifiers, then it is outside the scope
>>> of AS2 whether people decide to publish RDF at those URIs. if they
>>> do, they're welcome to do so, but if they don't, that's fine, too.
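>>> 
>>> for example (just a sketch, with a made-up type URI): a consumer
>>> that sees
>>> 
>>>   { "type": "http://vocab.example.org/travel#Arrive", ... }
>>> 
>>> only needs to string-compare that identifier against the ones it
>>> knows. whether a GET on that URI returns RDF, HTML, or a 404 is a
>>> separate question, and one AS2 doesn't have to answer.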
>>> 
>>> conflating the concepts of identifiers and links can be risky. if
>>> AS2 says that concepts such as activity types and object
>>> properties are identifiers, then everything works just fine. if
>>> otoh AS2 says that those concepts must be treated as links,
>>> that's a very different design.
>> Personally I often start by using HTTP URIs which return 404; at
>> any time in the future I can simply 'fix' that and return some
>> meaningful description for those who dereference them. We could
>> recommend identifiers for vocabulary terms in this order:
>> 1) Use URIs
>> 2) Use HTTP URIs
>> 3) Provide useful information about the term you define for those
>>    who dereference its HTTP URI.
>> You may consider using RDFS and OWL, but even plain text or HTML
>> (from .md) gives a good start!
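>> 
>> For example, dereferencing a term's HTTP URI could return something
>> as small as this (just a sketch with a made-up URI):
>> 
>>   {
>>     "@context": {"rdfs": "http://www.w3.org/2000/01/rdf-schema#"},
>>     "@id": "http://vocab.example.org/verbs#upvote",
>>     "rdfs:label": "upvote",
>>     "rdfs:comment": "The actor expressed approval of the object."
>>   }
>> 
>> Those who treat the identifier as opaque lose nothing; those who
>> dereference it get a label and a comment to work with.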
>> 
>> I would also consider that we recommend
>> 
>> 2.5) Publish shared vocabularies under the https://w3id.org/
>> namespace to ensure the longevity of URIs that others use in their
>> data.
>> 
>> Those who know what they do can simply ignore such recommendations
>> :)
>> 
>> BTW Jason Hagg (xAPI/adlnet.gov) applied to join the IG and will
>> bring very concrete requirements for extending 'verbs' / verb types
>> to the table.
>> 
>> Cheers!
>> 
>> 
>>> 
>>> practically speaking, many linked data implementations treat
>>> core concepts as identifiers anyway, because otherwise the web
>>> would melt down under the constant load of implementations
>>> pulling in all interlinked concepts every time they encounter
>>> them, to check if they may have changed.
>>> 
>>>>> as an experiment, i have created sedola documentation for
>>>>> many W3C and IETF specs, and despite the fact that these are
>>>>> using different (and often no) formalisms, this still results
>>>>> in a useful list of the concepts that matter:
>>>>> * https://github.com/dret/sedola/blob/master/MD/mediatypes.md
>>>>> * https://github.com/dret/sedola/blob/master/MD/headers.md
>>>>> * https://github.com/dret/sedola/blob/master/MD/linkrels.md
>>>> Looks cool! I guess it's meant for human consumption and not for
>>>> machine processing?
>>> 
>>> so far i'm just publishing MD because it's easy and it's good to
>>> look at. it would be trivial to transform it into other
>>> metamodels, such as JSON, XML, or RDF.
>>> 
>>> wrt human consumption vs machine consumption: machines can
>>> understand the concepts that have been defined somewhere, so
>>> that's already pretty useful. and that's practically all there
>>> is, because the vast majority of meaningful concepts on the
>>> internet and the web today have only textual descriptions, so
>>> there's nothing to consume for machines other than a distilled
>>> list of the concepts defined in those specs.
>>> 
>>> cheers,
>>> 
>>> dret.
>>> 
>> 
>> 
> 

Social Web Architect
http://bblfish.net/

Received on Tuesday, 24 March 2015 09:20:28 UTC