Re: Data modelling

* Erik.Wilde@emc.com <Erik.Wilde@emc.com> [2012-06-18 12:16-0400]
> hello andy.
> 
> thanks a lot for your feedback!
> >On the other hand, "excessive modelling" can produce something unusable.
> >  It is hard balance. Some basic guidance by example would probably be
> >very helpful to developers.
> 
> without going into the details of your comments (which are very relevant),
> maybe the interesting question is whether the platform should talk about
> "the things themselves" at all. coming from a SOA perspective, i don't
> think it should even try. we should focus on the service layer, and it is
> up to the service design to decide how to communicate information about

I believe we are doing exactly the right thing by talking about graph
artifacts. These are the units of information exchange which
constitute our services.

SOA does a nice job of factoring out utility and platform-dependent
tasks from compositional "business" processes by drawing practical
boundaries around them, which has the nice effect of pulling people's
vision up from the bits and bytes. This results in isolation of
interests and handy libraries of services. These services can consume
and produce any combination of XML, RDF, JSON, COM, CORBA, XDR (for
RPC), CSV, DSLs, or dozens of other formats.

Every time I use a service of some sort, I read some specification
that gives me detailed instructions on how to construct the input and parse
the output. While reading that spec, I have to grok the data model of
the service architect (or at least of the person who wrote the docs),
translate my application data into that model, and parse the response
back into my model. Many services don't even use the same model for
input and output.
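
To make that tax concrete, here is a minimal sketch of the round trip
(the service, its field names, and both data models are hypothetical;
the point is just the mapping step on each side of the call):

  # A sketch of the model-translation tax described above. The
  # service, its field names, and both data models are made up.
  SEVERITY_CODES = {"low": 3, "normal": 2, "high": 1}
  STATUS_NAMES = {"OPEN": "open", "WIP": "in progress", "DONE": "closed"}

  def to_service_model(ticket):
      # My model says "summary" and "priority"; the spec says
      # "title" and a numeric "severity".
      return {"title": ticket["summary"],
              "severity": SEVERITY_CODES[ticket["priority"]]}

  def from_service_model(response):
      # The output model differs from the input model, so a second
      # mapping is needed on the way back.
      return {"id": response["issueId"],
              "status": STATUS_NAMES[response["state"]]}

  def file_issue(ticket, call_service):
      # call_service stands in for whatever marshalling the spec
      # dictates.
      return from_service_model(call_service(to_service_model(ticket)))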

Service description languages like WADL or WSDL allow me to establish
some parts of a machine-readable contract. These can be exploited by
tools which create stubs where your program can create an issue record
(in some data model) or propagate an address change back to a
database. If part of the output of service A is part of the input of
service B, you simply link in the marshalling libraries for both
formats (assuming there are only two), create stub code to parse the relevant
bits from A's output, map that to B's model, construct the rest of B's
input message, and call the marshalling function to send that message
off to fulfill its destiny at service B.
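
In practice that glue looks something like the sketch below. The
services, models, and field names are all made up, and the
marshalling here is just JSON over HTTP via the Python standard
library:

  import json
  from urllib.request import Request, urlopen

  # Hypothetical glue between two services: A emits a contact record,
  # B consumes a change-of-address message in a different model.
  def relay_address_change(a_output_json):
      a = json.loads(a_output_json)        # unmarshal A's output
      # Map the relevant bits of A's model onto B's model.
      b_input = {"person": a["contact"]["name"],
                 "newAddress": a["contact"]["addr"],
                 "source": "service-A"}    # B wants a field A never had
      req = Request("http://b.example/address-changes",
                    data=json.dumps(b_input).encode("utf-8"),
                    headers={"Content-Type": "application/json"})
      return urlopen(req)                  # send it off to service B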

Enterprises tend to limit the formats available in order to minimize
costs and liabilities. It was our intention that the Linked Data
Basic Profile charter would attract people who wanted to establish useful
patterns for accessing and manipulating *RDF* linked data using REST
verbs. Success is when one person produces a service which uses LDBP
and another person uses it without having to spend a lot of cycles
grokking formats, assembling libraries, and writing much, or possibly
even any, procedural code.

Fewer choices lead to more interop. One data model means our service
abstractions can be grounded in simple, describable graphs with a
standard set of tools to manipulate them.
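
For contrast, a sketch of consuming a hypothetical LDBP-style service:
one GET, one standard parser, and the graph itself is the unit of
exchange (the container URI and the membership predicate are
illustrative, not normative):

  from rdflib import Graph, URIRef

  # Fetch and parse a made-up container resource as Turtle; no
  # per-service marshalling library, just the standard RDF toolkit.
  g = Graph()
  g.parse("http://example.org/netWorth/nw1/assetContainer",
          format="turtle")

  # Walk whatever membership triples came back with the same
  # standard tools.
  RDFS_MEMBER = URIRef("http://www.w3.org/2000/01/rdf-schema#member")
  for asset in g.objects(predicate=RDFS_MEMBER):
      print(asset)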


> resources. for example, http://tools.ietf.org/html/rfc4287#section-4.2.15
> is fuzzy ("indicating the most recent instant in time when an entry or
> feed was modified in a way the publisher considers significant.") for a
> reason: as a consumer, i don't know how data is handled in the back-end,
> and that's a feature. i am interested to learn about relevant events
> published through the service i am consuming. if a typo in a news story is
> corrected, please don't notify me, i really don't want to know. if,
> however, major things happen, i want to know. so the question of what
> we're communicating on the platform's service level should be completely
> decoupled from what we're managing in the back-end, and how you translate
> the data layer into the service level is a question of a service's design,
> and not something that the platform should or even can define.
> 
> cheers,
> 
> dret.
> 
> 

-- 
-ericP

Received on Monday, 18 June 2012 18:37:27 UTC