Re: Extensibility: Fallback vs. Monolithic

Interesting discussion at the telecon today.  My suspicion is that 
fallbacks won't be useful and are an unnecessary complication.  The use 
cases that were suggested seem to validate my suspicion:

negation
aggregation
retract
conjunctive/disjunctive conclusions

The above have no reasonable fallback other than to fail translation 
with an informative error message.
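
For concreteness, "fail translation with an informative error message" 
might look something like the following sketch (Python; the node names, 
tree shape, and SUPPORTED set are hypothetical, not actual RIF syntax):

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        tag: str                         # element name, e.g. "Rule"
        children: list = field(default_factory=list)
        location: str = "?"              # position in the source document

    class UntranslatableConstruct(Exception):
        pass

    SUPPORTED = {"Rule", "And", "Or", "Atom", "Var"}   # illustrative

    def translate(node):
        """Translate a parsed RIF element tree, failing informatively."""
        if node.tag not in SUPPORTED:
            raise UntranslatableConstruct(
                "cannot translate <%s> at %s: the target rule language "
                "has no equivalent and no fallback is defined"
                % (node.tag, node.location))
        return [translate(child) for child in node.children]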

I think things that can be ignored are metadata by definition, and we 
should get on with defining metadata.  The "dialect name" is metadata: 
it can be ignored, because dialects can't change the meaning of syntax 
elements like Rule, And, Or, etc.

BTW, fallback vs. monolithic is a false dilemma; the two are 
orthogonal.  I prefer NO fallback, with failure only when the 
translator (from RIF) does not understand some (non-metadata) syntax 
element.

I think a translator from RIF to a target rule language MUST understand 
all the syntax elements but MAY ignore the metadata.  A translator to 
RIF SHOULD generate "complete" metadata and MUST generate "correct" 
metadata.
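
To illustrate that MUST/MAY split, here is a sketch reusing the names 
from the sketch above (the metadata tag names are made up):

    METADATA_TAGS = {"dialectName", "author", "comment"}   # made up

    def translate_strict(node):
        if node.tag in METADATA_TAGS:
            return None                      # MAY ignore metadata
        if node.tag not in SUPPORTED:        # MUST understand the rest
            raise UntranslatableConstruct(
                "unknown syntax element <%s>" % node.tag)
        results = []
        for child in node.children:
            translated = translate_strict(child)
            if translated is not None:
                results.append(translated)
        return results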

Extensibility is a lot like luck.  You can feel lucky, but you can only 
prove you *were* lucky by analyzing *past* events...

Sandro Hawke wrote:

>>>My thinking was that a given component could have multiple fallbacks,
>>>each with different impact.  So, typically, I'd expect a minor-impact
>>>fallback to a nearby dialect to be defined, along with a major-impact
>>>fallback to Core.
>
>>The issue is not how far you need to fallback but whether the fallback 
>>is applicable at all in the presence of other components; i.e. the 
>>fallback (or at least its impact :-)) needs to be conditional.
>
>As I'm imagining it, the conditional-nature of the fallback is
>encapsulated by the components used in the output of the fallback
>procedure.    So component C3 has two fallbacks, one (low impact) to C2,
>and one (high impact) to Core.  And the implementation should use the C2
>one if that's going to result in less overall impact - even if it has to
>fall back from C2 to C1, etc.
>
>But you're talking about something a little different, I guess -- you're
>talking about a case where the fallback might be C3 -> C2 if and only if
>some other component C4 is implemented?  I'm not quite following when
>you would need that.
>
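
For concreteness, the "less overall impact" selection described here 
can be read as a least-cost search over fallback edges.  A sketch, 
with made-up components and impact weights, and assuming the fallback 
graph is acyclic:

    FALLBACKS = {                     # component -> [(target, impact)]
        "C3": [("C2", 1), ("Core", 5)],
        "C2": [("C1", 1), ("Core", 3)],
        "C1": [("Core", 1)],
    }

    def cheapest_fallback(component, implemented):
        """Least-impact chain from `component` to something we
        implement, as (total_impact, target), or None."""
        if component in implemented:
            return (0, component)
        best = None
        for target, impact in FALLBACKS.get(component, []):
            rest = cheapest_fallback(target, implemented)
            if rest is not None:
                candidate = (impact + rest[0], rest[1])
                if best is None or candidate < best:
                    best = candidate
        return best

    # cheapest_fallback("C3", {"C1"}) -> (2, "C1"), i.e. C3 -> C2 -> C1
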
>>>>Third, it's not clear to me in what way CRC is a separable component in 
>>>>the first place. Certainly one could have a new conjunction operator 
>>>>intended for use only in rule conclusions but if we were putting that in 
>>>>the Core just now we'd just modify the Rule syntax and reuse the same 
>>>>Conjunction element as used in the condition language. Does this sort of 
>>>>extension (generalizing where certain constructs can be used) fall in 
>>>>the scope of the mechanism?
>
>>>The approach that comes to my mind is to have a different kind of rule.
>>>Core has HornRule, and you're talking about a CRCRule, I think.
>
>>Yes, that seems right.
>>
>>That does raise a related issue though. If my new extended production 
>>rule dialect has several such extensions about what can be said in the 
>>conclusions then I'd presumably have an EPRule clause which encapsulated 
>>all of those. In that case the different extensions wouldn't be separate 
>>components but one big component and the fallback transformations would 
>>be more complex and conditional.
>
>*nod*   That might be right, but maybe there's a way to separate them
>better.   I'd need to play with that some more.
>
>>>>So test cases would, I think, be a helpful way to clarify the scope of 
>>>>what the mechanism should and should not cover.
>>>>
>>>>A couple more minor comments:
>>>>
>>>>o I have been assuming that a RIF ruleset will include metadata which 
>>>>identifies the intended dialect (including version information). The 
>>>>discussion under "Dialect Identification and Overlap" doesn't seem to 
>>>>reflect that. The extension mechanism is only needed when a processor 
>>>>doesn't recognize the dialect/dialect-version in order to determine 
>>>>whether, despite that, it could still proceed.
>
>>>I understand.  My suspicion is that identifying components instead of
>>>dialects will end up being much more comfortable in the long run.  In
>>>the most obvious case, you might implement D1 (C1,C2,C3) and receive a
>>>document which uses only C1 and C2, but which would be labeled as being
>>>written in D2 (C1,C2,C3,C4).  The sender might not even know that D1
>>>exists, and so could not label it D1.   (But maybe the D2 author needs
>>>to know about D1 to ensure compatibility; maybe the content could be
>>>labeled as {D1, D2}.)
>
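
The component-level acceptance test being discussed is essentially a 
subset check.  A sketch (the dialect and component names are 
illustrative):

    IMPLEMENTED = {"C1", "C2", "C3"}        # our D1 processor

    def can_process(components_used):
        return set(components_used) <= IMPLEMENTED

    can_process({"C1", "C2"})          # True: a D2-labeled document
                                       # that happens to use only C1, C2
    can_process({"C1", "C2", "C4"})    # False: C4 is not implemented
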
>>I wasn't objecting to also doing component level analysis but having the 
>>sender and receiver just agree to use the same dialect seems like the 
>>common case which should be supported by dialect-level metadata. That 
>>certainly doesn't preclude translators falling back on component level 
>>analysis.
>>
>>I guess part of my worry is that it's not clear to me how often the 
>>components are going to be neatly semantically composable to make the 
>>componentization useful.
>
>Yeah, this sounds like a lot of work.  I'm suddenly feeling sympathetic
>with a point Christian often makes when people say all we have to do in
>Phase 1 is Horn, and he replies:  Non!  :-) "We also have to do
>extensibility!"
>
>With that in mind, let me put another strawman on the table.  Let's call
>what we've been talking about "fallback-based extensibility" and the new
>one "whole-dialect" or "monolithic" versioning.
>
>In monolithic versioning you don't really have forward compatibility or
>graceful degradation.  It's much more traditional.  If you want to add
>something which affects the semantics of the language, you create a new
>dialect.  The dialect of a document is named at the top of the document
>-- if you receive a document for which you don't implement the dialect,
>you just reject it.
>
>For performance and presentation information, however, you can use
>metadata.   Unrecognized/unimplemented metadata is just ignored.
>
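
As a sketch of this monolithic scheme (the dialect names and metadata 
keys are made up):

    KNOWN_DIALECTS = {"rif-core-1.0", "rif-prod-1.0"}   # made up

    def accept(document):
        if document["dialect"] not in KNOWN_DIALECTS:
            raise ValueError("unsupported dialect %r; rejecting"
                             % document["dialect"])
        # Unrecognized metadata carries no semantics: drop it.
        known_meta = {"author", "created"}               # made up
        meta = {k: v for k, v in document.get("metadata", {}).items()
                if k in known_meta}
        return document["payload"], meta
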
>There would still be conceptual re-use, of course.  Many dialects would
>have a lot of conceptual overlap, and that would be handled by common
>subroutines in the translators.  But a system which understood four
>dialects would probably have four translators in it.
>
>The downside to this monolithic approach is that it means we lose the
>distinctions in handling the different conditions in that table at the
>end of Extensibility2.  Instead of 12 different cases we have two:
>semantic (data) and non-semantic (metadata).   Or maybe the distinction
>is between "important" and "unimportant" -- where unimportant stuff is
>ignored if it's not understood (metadata) and if you don't understand
>the important stuff (the data) you reject the document.
>
>    -- Sandro

-- 
Oracle <http://www.oracle.com>
Gary Hallmark | Architect | +1.503.525.8043
Oracle Server Technologies
1211 SW 5th Avenue, Suite 800
Portland, OR 97204
