- From: Paola Di Maio <paoladimaio10@gmail.com>
- Date: Fri, 28 Jun 2019 13:30:23 +0800
- To: William Waites <wwaites@tardis.ed.ac.uk>
- Cc: Amirouche Boubekki <amirouche.boubekki@gmail.com>, W3C AIKR CG <public-aikr@w3.org>, SW-forum <semantic-web@w3.org>
- Message-ID: <CAMXe=Spo1ywkK67FZ48v+JTbJYrcMW-UATdbkwOmH0LCz5Jjqg@mail.gmail.com>
Thank you all for sharing the interest.

I was going over the Special Issue by ACM edited by Brachman and noted that it has only 14 citations. Isn't that strange?
https://dl-acm-org.nls.idm.oclc.org/citation.cfm?id=1056752

Also, its content is rather thin. I would have thought otherwise. This makes me feel not too bad that our SI has not yet received relevant submissions (hint, hint.....)
https://www.mdpi.com/journal/systems/special_issues/Artificial_Intelligence_Knowledge_Representation

So I am requesting an extension until the end of the year and will announce the first article with pointers and the extended deadline soon.

Bestest

On Thu, Jun 27, 2019 at 8:28 PM William Waites <wwaites@tardis.ed.ac.uk> wrote:

> >> what is ontology building if not a giant classification exercise?
> >
> > As far as I know, predicting structure is still an active and recent
> > area of research.
>
> I don't mean to suggest that this is a solved problem! Just that it is a
> very closely related problem, and that the NN approach looks like it
> scales much better in both building and inferencing; though it is
> error-prone, it is robust to noise.
>
> > > Explicit representation of knowledge is almost entirely absent in
> > > connectionist systems.
> >
> > Are you sure? Isn't, for instance, word embedding relying on sequences
> > of words, and as such taking features from knowledge representation?
> > Similarly, Markov models rely on the probability of appearance of a
> > given "token". The token can encode both sense and grammatical
> > features.
>
> I think so. Sure, you have input and output tokens. But I think the
> "meaning" (or the "semantics", or the "knowledge") is encoded in the
> mapping. That mapping is opaque and not really very good for answering
> "why?"
>
> > > A child doesn't learn by being fed a bunch of facts and rules, a
> > > child learns by example and a trial-and-error feedback loop.
> >
> > Again, this doesn't exclude rules or dynamic programming. Somehow I
> > connect logic to dynamic programming.
>
> Sort of. Almost every rule governing language and behaviour and
> interaction with the world is really very hard to figure out and
> explicitly state. Maybe in some cases it is possible. But that's not
> what children do.
>
> > Logic is the source of truth, whereas the connectionist approach
> > provides a summary.
>
> Interesting. My intuition is precisely the inverse!
>
> Best wishes,
>
> William Waites | wwaites@inf.ed.ac.uk
> Laboratory for Foundations of Computer Science
> School of Informatics, University of Edinburgh
>
> --
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
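To make the contrast in the thread concrete, here is a minimal toy sketch in Python. The facts and the three-dimensional vectors are invented for illustration (nothing comes from an actual trained model, and the helper names are hypothetical): the symbolic side can hand back the chain of facts as an answer to "why?", while the embedding side can only report a similarity score from an opaque mapping.

    import math

    # Symbolic side: explicit facts plus a naive two-step inference.
    # The chain of facts used is itself the answer to "why?".
    facts = {("penguin", "is_a", "bird"), ("bird", "has", "feathers")}

    def why_has_feathers(x):
        # Find y such that x is_a y and y has feathers.
        for (a, rel, b) in facts:
            if a == x and rel == "is_a" and (b, "has", "feathers") in facts:
                return [f"{x} is_a {b}", f"{b} has feathers"]
        return None

    # Connectionist side: "meaning" lives in the learned vectors.
    # Hand-made vectors stand in for a trained embedding matrix.
    emb = {
        "penguin": [0.9, 0.1, 0.8],
        "sparrow": [0.8, 0.2, 0.9],
        "hammer":  [0.1, 0.9, 0.0],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    print(why_has_feathers("penguin"))
    # ['penguin is_a bird', 'bird has feathers'] -- an explanation
    print(cosine(emb["penguin"], emb["sparrow"]))
    # high similarity (~0.99), but no account of why
    print(cosine(emb["penguin"], emb["hammer"]))
    # low similarity (~0.17), same opacity

Both sides "know" that penguins and sparrows go together; only the symbolic side can say why, which is the asymmetry being debated above.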
Received on Friday, 28 June 2019 05:31:25 UTC