
RE: Semantic API

From: Stuart Naylor <indtec@eircom.net>
Date: Thu, 19 Apr 2001 10:35:31 +0100
To: <www-rdf-interest@w3.org>
Message-ID: <DFEDIAMBMMNMHKCCJJBLCELOCFAA.indtec@eircom.net>
With so much mention of DAML+OIL, I must ask: is anyone out there providing ontologies with a method similar to the ebXML standard's?

From my perspective the ebXML standard fails with the usual set-in-concrete 'it shall be so' schema, but their methods of business profiles are very good.

I just wondered whether there are any (or many) RDF ontologies / taxonomies out there, and how far away we are from inferring one business process from another.

  -----Original Message-----
  From: Stuart Naylor [mailto:indtec@eircom.net]
  Sent: 17 April 2001 15:31
  To: jos.deroo.jd@belgium.agfa.com
  Cc: www-rdf-interest@w3.org
  Subject: RE: Semantic API


  Obviously I nicked this from the ebXML standard http://www.ebxml.org/ to
get a full overview.





  Business Process and Business Document Analysis Overview
  ebXML BP/CC Analysis Team, March 2001
  Copyright © ebXML 2001. All Rights Reserved.

  7 Business Process Modeling

  7.1 Overview

  Business process models define how business processes are described. Business processes represent the "verbs" of electronic business and can be represented using modeling tools. The specification for business process definition enables an enterprise to express its business processes so that they are understandable by other enterprises. This enables the integration of business processes within an enterprise or between enterprises.

  Business process models specify interoperable business processes that allow business partners to collaborate. While business practices vary from one organization to another, most activities can be decomposed into business processes that are more generic to a specific type of business. This analysis, utilizing business modeling, will identify business processes and business information metamodels that can likely be standardized. The ebXML approach looks for standard reusable components from which to construct interoperable processes and components.

  7.2 Business Process and Information Metamodel

  The Metamodel is a description of business semantics that allows Trading Partners to capture the details for a specific business scenario using a consistent modeling methodology. A Business Process describes in detail how Trading Partners take on roles, relationships and responsibilities to facilitate interaction with other Trading Partners in a shared Business Process. The interaction between roles takes place as a choreographed set of Business Transactions. Each Business Transaction is expressed as an exchange of electronic Business Documents. The sequence of the exchange is defined by the Business Process, messaging and security considerations. Business Documents are composed from re-useable business information components. At a lower level, Business Processes can be composed of re-useable Common Business Processes, and Business Information Objects can be composed of re-useable Business Information Objects that may be composed of core components and domain components.

  The Metamodel supports requirements, analysis and design viewpoints that provide a set of semantics (vocabulary) for each viewpoint, and forms the basis of specification of the semantics and artifacts that are required to facilitate business process and information integration and interoperability.

  An additional view of the Metamodel, the Specification Schema, is also provided to support the direct specification of the nominal set of elements necessary to configure a runtime system in order to execute a set of ebXML business transactions. By drawing out modeling elements from several of the other views, the Specification Schema forms a semantic subset of the Metamodel.

  The Specification Schema is available in two stand-alone representations, a UML profile, and a DTD. Figure 7.2-1 shows the high-level elements of the Specification Schema.



  I really liked the spec until schema started to rear its ugly head. The ebXML standard is very good, but it is still an explicit profiling mechanism.


  Then the Web god doth speak
http://www.scientificamerican.com/2001/0501issue/0501berners-lee.html

  Ontologies
  Of course, this is not the end of the story, because two databases may use
different identifiers for what is in fact the same concept, such as zip
code. A program that wants to compare or combine information across the two
databases has to know that these two terms are being used to mean the same
thing. Ideally, the program must have a way to discover such common meanings
for whatever databases it encounters.

  A solution to this problem is provided by the third basic component of the
Semantic Web, collections of information called ontologies. In philosophy,
an ontology is a theory about the nature of existence, of what types of
things exist; ontology as a discipline studies such theories.
Artificial-intelligence and Web researchers have co-opted the term for their
own jargon, and for them an ontology is a document or file that formally
defines the relations among terms. The most typical kind of ontology for the
Web has a taxonomy and a set of inference rules.

  The taxonomy defines classes of objects and relations among them. For
example, an address may be defined as a type of location, and city codes may
be defined to apply only to locations, and so on. Classes, subclasses and
relations among entities are a very powerful tool for Web use. We can
express a large number of relations among entities by assigning properties
to classes and allowing subclasses to inherit such properties. If city codes
must be of type city and cities generally have Web sites, we can discuss the
Web site associated with a city code even if no database links a city code
directly to a Web site.

  Inference rules in ontologies supply further power. An ontology may
express the rule "If a city code is associated with a state code, and an
address uses that city code, then that address has the associated state
code." A program could then readily deduce, for instance, that a Cornell
University address, being in Ithaca, must be in New York State, which is in
the U.S., and therefore should be formatted to U.S. standards. The computer
doesn't truly "understand" any of this information, but it can now
manipulate the terms much more effectively in ways that are useful and
meaningful to the human user.

  With ontology pages on the Web, solutions to terminology (and other)
problems begin to emerge. The meaning of terms or XML codes used on a Web
page can be defined by pointers from the page to an ontology. Of course, the
same problems as before now arise if I point to an ontology that defines
addresses as containing a zip code and you point to one that uses postal
code. This kind of confusion can be resolved if ontologies (or other Web
services) provide equivalence relations: one or both of our ontologies may
contain the information that my zip code is equivalent to your postal code.

  Our scheme for sending in the clowns to entertain my customers is
partially solved when the two databases point to different definitions of
address. The program, using distinct URIs for different concepts of address,
will not confuse them and in fact will need to discover that the concepts
are related at all. The program could then use a service that takes a list
of postal addresses (defined in the first ontology) and converts it into a
list of physical addresses (the second ontology) by recognizing and removing
post office boxes and other unsuitable addresses. The structure and
semantics provided by ontologies make it easier for an entrepreneur to
provide such a service and can make its use completely transparent.
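  The excerpt describes two mechanisms: an inference rule over a taxonomy, and an equivalence relation between two ontologies' terms. A minimal sketch of both, using a toy in-memory triple set; all identifiers here are invented for illustration and come from no actual spec or ontology:

```python
# Toy triple store plus the two mechanisms from the excerpt:
# (1) an inference rule (city code -> state code for addresses),
# (2) an equivalence relation between two ontologies' terms.
# All names are invented for this illustration.

triples = {
    ("ithaca", "hasStateCode", "NY"),            # city code carries a state code
    ("cornell_addr", "usesCityCode", "ithaca"),  # an address uses that city code
    ("ont1:zipCode", "equivalentTo", "ont2:postalCode"),
}

def infer_state_codes(store):
    """Rule: if a city code has a state code, and an address uses that
    city code, then the address has the associated state code."""
    inferred = set()
    for (addr, p1, city) in store:
        if p1 != "usesCityCode":
            continue
        for (c, p2, state) in store:
            if p2 == "hasStateCode" and c == city:
                inferred.add((addr, "hasStateCode", state))
    return inferred

def equivalent_terms(store, term):
    """Follow 'equivalentTo' links in either direction, so my zip code
    and your postal code resolve to the same set of terms."""
    out = {term}
    for (s, p, o) in store:
        if p == "equivalentTo":
            if s == term:
                out.add(o)
            if o == term:
                out.add(s)
    return out

print(infer_state_codes(triples))           # deduces cornell_addr is in NY
print(equivalent_terms(triples, "ont1:zipCode"))
```

  The point of the sketch is that neither deduction requires the two databases to share a schema, only a shared (or bridged) ontology.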


  It is very hard to explain relevance because it is totally dependent on the quality of ontology references. We have two applications, each with an internal application ontology; both reference external, more generic ontologies, which hopefully converge at some point on a single ontology reference, ideally at the first hop. Relevance is the reverse of TBL's search agents: we know where the data is, we just have to find the question.

  It is like the command-line utility TRACERT [DNSNAME], which displays each hop from one router to the next across the net, in order of occurrence. With two destinations and a common source, I guess this is what I would call an expression of relevance.
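  That hop-counting idea can be made concrete: if each ontology names its more generic parent, relevance between two applications is the hop count from each to the first ontology both chains converge on. A minimal sketch, with an invented ontology hierarchy:

```python
# "Relevance as hop distance": each ontology references a more generic
# parent; two applications are related via the nearest common ancestor
# of their reference chains. The hierarchy below is invented.

parents = {
    "cms-app-ontology": "legal-process-ontology",
    "accounts-app-ontology": "billing-process-ontology",
    "legal-process-ontology": "business-process-ontology",
    "billing-process-ontology": "business-process-ontology",
}

def chain(ontology):
    """Hops from an ontology up to its most generic ancestor,
    like TRACERT listing each router on the way."""
    hops = [ontology]
    while hops[-1] in parents:
        hops.append(parents[hops[-1]])
    return hops

def relevance(a, b):
    """Total hops from a and b to their first common ancestor,
    or None if the chains never converge."""
    chain_a, chain_b = chain(a), chain(b)
    for i, node in enumerate(chain_a):
        if node in chain_b:
            return i + chain_b.index(node)
    return None

print(chain("cms-app-ontology"))
print(relevance("cms-app-ontology", "accounts-app-ontology"))  # 4 hops
```

  A lower number means the two applications converge early and are more relevant to one another; no convergence means no basis for a conversation.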

  This is about as far as I have got with my 'Chess' engine, as a lateral hop is exponential compared to a vertical one, and I am having much more fun learning the finer points of gambits.

  The AI is quite easy; it's the construction of the business process and data ontologies that is causing me the headaches.



  -----Original Message-----
  From: www-rdf-interest-request@w3.org
  [mailto:www-rdf-interest-request@w3.org]On Behalf Of
  jos.deroo.jd@belgium.agfa.com
  Sent: 17 April 2001 10:00
  To: indtec@eircom.net
  Cc: www-rdf-interest@w3.org
  Subject: Re: Semantic API




  > [...]

  fresh thoughts!

  > What I have seen so far from anyone out there is still explicit declaration
  > as opposed to inference and selection by relevance.

  what do you mean with *selection by relevance* ?

  --
  Jos De Roo, AGFA http://www.agfa.com/w3c/jdroo/



  -----Original Message-----

  From: Stuart Naylor [mailto:indtec@eircom.net]

  Sent: 17 April 2001 08:42

  To: www-rdf-interest@w3.org

  Cc: me@aaronsw.com; johan.hjelm@nrj.ericsson.se; Dan Brickley; Danny

  Ayers

  Subject: Semantic API



  Semantics and RDF: it's exciting stuff, isn't it?

  I am going to post a follow-on to Zen & Chinwag, as if I did get an answer then I missed it, but that wouldn't be unusual.

  I am firstly going to explain the background of why I require an API based on RDF ontologies.

  As an app developer and integrator I work across various industries. Currently a large proportion of my time is devoted to Legal-centric activities. The Legal arena has representation through WWW.LegalXML.org.

  The Legal industry has a large amount that is Legal-specific in its IT requirements, but like us all it has some very generic processes.

  A Legal Office could be defined as:

  Legal Accounts, Document Automation, Time & Billing, Document Management,
Case Management, Knowledge Management.

  The vendors comprise a cottage industry which is a blend of grey 'all in one' and best-of-breed applications. The actual integration of a Legal solution is an absolute nightmare, which often results in the choice of an 'all in one' application, which in turn often dictates a localisation as narrow as 'London' or 'California'.

  My initial thoughts were of COM, CORBA, or DCOM with standards in the vein of MAPI. Then I started looking at the possibilities of XML.

  Initially XML was very exciting, until I started parsing documents and exchanging data and was gobsmacked at the actual overheads of schema handling. I believe various industries have got a little carried away with bloated schemas to represent common data structures. I also have a problem with schemas because they dictate that this is the entity's data structure, whereas if I look at application design, for reasons of functionality, data structures are never the same even though they do exactly the same job.

  I believe the two problems above have very strong parallels across many
industries but back to my cause.

  Back to 'Chinwag', a name I chose because, as a solution, it would mean that is the last time you will hear of it.

  Chinwag is an idea for application-to-application interoperability without the need for a formal schema.

  Its reasoning comes from the way many systems work with closed, sensitive data: the very idea of an XML repository in its own right will not work. This is because many systems know exactly what they require and are returned the bare minimum to satisfy that requirement. Business data will never be open, as it has value; unless there is an incentive, you will not be able to receive it.

  So my logic takes me back to the API, which will return fragments of data on the satisfaction of the calling parameters.

  The problem with an API call is the presumption that you know what it does and what it returns.

  So this is where ontologies come in, business process ontologies to be precise.

  With my dislike for schema I began to wonder if a different representation
of data would be viable.

  My analogy is that a schema is just the company handbook to the data, and like an experienced worker who knows their workplace processes, an application that knows its context doesn't need the handbook (schema).

  So if you had an application ontology that describes its data elements so that they are known and positioned by context, then it is feasible that no schema is required.

  You might see where I am going: through the Semantic Web, an application will define its context through a series of referenced ontologies.

  Application Data Ontology -> Application Process Ontology -> Industry-Specific Process Ontology -> Process Ontology, and so on until there is enough to provide data element comprehension and context.
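  A sketch of how that chain might be used: each data element carries its chain of increasingly generic ontology references, and two elements from different applications match when their chains share a reference. The element names and ontology references below are invented for illustration:

```python
# Two applications' data elements, each positioned by a chain of
# ontology references (most specific first). Elements match when
# their chains converge on a common term, without either side
# publishing a schema. All names below are invented.

context = {
    "cms.client_ref": ["cms-data", "legal-process", "party"],
    "fin.account_id": ["fin-data", "billing-process", "party"],
    "fin.invoice_no": ["fin-data", "billing-process", "document"],
}

def common_context(elem_a, elem_b):
    """First shared reference along two elements' ontology chains,
    or None if there is no basis for mapping one to the other."""
    for ref in context[elem_a]:
        if ref in context[elem_b]:
            return ref
    return None

print(common_context("cms.client_ref", "fin.account_id"))  # "party"
print(common_context("cms.client_ref", "fin.invoice_no"))  # None
```

  The earlier in the chain the match occurs, the stronger the claim that the two elements mean the same thing.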

  I will not go into the topics of how two applications could map themselves to a source of common context, or methods of expressing API functionality...

  What I have seen so far from anyone out there is still explicit declaration as opposed to inference and selection by relevance.

  So all I need is a plethora of well-thought-out ontologies which link by their context.

  If anyone has any info on working groups aimed at providing self-communicating apps, or would like to make something more concrete of this, please email me.

  Also I have many further ideas on this but for the sake of brevity the
above will do for now.

  -----Original Message-----

  From: Stuart Naylor [mailto:indtec@eircom.net]

  Sent: 13 April 2001 12:04

  To: www-rdf-interest@w3.org

  Subject: Zen & Chinwag



  (This was for LegalXML.Org but please comment)

  There seems to be quite a lot of movement in RPC calls for devices. I posted quite a lot of bumf to the discussion forum. Keywords: Jini, UDDI, SOAP, UPnP and their surrounding technologies.

  Even if you don't agree with what I am about to say, they are well worth a look in the context of LegalXML.

  There also seem to be a lot of postings about TBL's Semantic Web, AKA the Net God's tenth commandment. The web was there to some extent before TBL, and I prefer to think of him as the gardener rather than the creator, because the web has this strange, almost organic growth to it. I see the Semantic Web as a great prophecy to provide the next generation of the web as a huge monolithic open knowledge service. It tackles the web as a whole, as opposed to many of the above technologies, which make it a collection of many mini-services. I also think that he is correct, but like so many of the devout, people can take things out of context.

  The Semantic Web will be a revolution in creating a huge open knowledge store, with the emphasis on open. In the context of Legal, though, I would say, without trying to offend anyone, this is not the case. Intellectual property rights and the whole apparatus of legal jargon, precedents and so on always place two legal bodies in a position where any exchange is the minimum legal requirement to satisfy both parties. As you will tell, I have no legal experience whatsoever, but I would say the interchange of legal information is anything but Semantic and Open. Maybe someone would like to quantify how wrong I am there, but anyway: what I believe is that business information is not Semantic, at least not until we have been paid for it.

  I started knocking TBL on purpose, because I am now going to have a go at the very idea of LegalXML in its present form. Please bear in mind that these statements are purely to start a discussion, and without a doubt TBL has green fingers, as does the work that has been undertaken by LegalXML.org.

  There seems to be a presumption that a given Legal scenario, for example a court filing, will be expressible in a defined structure; that LegalXML will lay down the protocol law, and as long as we adhere we will reach communication Nirvana. This creates problems for the freedom of speech of applications, where an application may find a better method of expression but have no method of translation.

  I don't want to try to express the meaning of life, but I am quite prepared to say I had a good day. Like that sentence, we need systems that can decompose an entity into what we are prepared to exchange.

  What would be interesting, and not only in a B2B scenario, is that applications themselves would interact just as we do. It's our first day and we get the instructions: there is the accounts dept, the photocopier, your desk, tea break at 10.30, goodbye.

  In one of my previous emails about John McClure's http://www.dataconsortium.org/namespace/DCD100.xml I stated I couldn't understand its use, but I have seen the light.

  So here goes for my theoretical XML protocol, 'Chinwag'. The purpose of Chinwag is to allow two bodies to have a discourse to ascertain their relevance, with no need for any formal industry-specific XML structure.

  When we give two applications their first day at work, they need to know how relevant they are to each other and how they will communicate, so the only formal constructs of Chinwag are WHORU, IAM, and THISISME.

  The legal case management app and the accounts app are introduced; after a polite pause, the legal case management app goes first: 'WHORU'.

  'IAM' financial [parents: #Legal, #App Vendor Semantic] GL, Billings.

  'IAM' cms [parents: #Legal, #App Vendor Semantic] Client, Case, PIM.

  John's http://www.dataconsortium.org/namespace/DCD100.xml I now see as very important, because of the following:

  possession [parents: #Right , #Legal ] The holding, control, or custody of
property for one's own use, either as the owner or person with another
right.

  possession [parents: #Poltergeist, #Supernatural] The holding, control, or
custody of one, either as the owner or person without right.

  Pure example stuff, but this is where TBL's Semantics comes in: it is the web itself, like the DNS (Domain Name System). The hops or metrics between those two definitions mean the application can deduce "maybe similar, but you're coming from Alaska on that one".

  The next conversation is 'THISISME'. At this point a full API call list is presented, along with the XML fragment that represents the return data, but the most important part is a by-element reference to its own application definition.

  Through the context of Semantics and the approximation of definitions, a protocol can be deduced without the need for a formal schema declaration.
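  To make the exchange above concrete, here is a hypothetical sketch of the WHORU / IAM stage, mirroring the 'IAM' message examples given earlier. The field names and the relevance check are my own invention; Chinwag has no defined wire format:

```python
# Hypothetical sketch of the Chinwag handshake. An IAM answer carries
# the app's name, its parent ontology references, and the data
# elements it exposes; relevance is judged by shared parents before
# THISISME (the full API call list) would be exchanged.

from dataclasses import dataclass

@dataclass
class Iam:
    name: str        # e.g. "financial", "cms"
    parents: list    # ontology references, e.g. ["#Legal"]
    elements: list   # data elements the app exposes

def handshake(a, b):
    """WHORU -> both sides answer IAM; returns the shared parent
    ontology references. An empty list means: not relevant, stop."""
    return [p for p in a.parents if p in b.parents]

financial = Iam("financial", ["#Legal", "#AppVendorSemantic"], ["GL", "Billings"])
cms = Iam("cms", ["#Legal", "#AppVendorSemantic"], ["Client", "Case", "PIM"])

print(handshake(financial, cms))  # ["#Legal", "#AppVendorSemantic"]
```

  Only after a non-empty result would the two apps bother exchanging THISISME and attempting to map each other's API calls.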

  I am working on 'Chinwag' at this moment. The actual protocol is very simple; it is the AI required for a demonstration that is the hard part. So far I have dissected a 'Chess' AI engine, because I am trying to enable functionality for scenarios where a single API call will not satisfy a transaction, but a series of calls (moves, in my case) will.

  So it's pawns away for me at the moment, but I think John McClure's DCD100.xml is a very interesting proposition: instead of describing human Legal keywords, provide context, a taxonomy of the components of legal entities. I believe LegalXML should be defining the elements in context, but not the structure.

  I know the meaning of life: it's 42. The problem is, what is the question?
Received on Thursday, 19 April 2001 03:42:27 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 7 December 2009 10:51:49 GMT