Re: Agent Sub-Types

Hi,
I agree with Yolanda that the core of provenance should not include trust,
since in my view trust is a function of provenance (computed over provenance
assertions). In a paper by Sizov et al. [1], provenance is modeled as a
layer between the trust and proof layers of the Semantic Web layer cake.

Some comments on Reza's point:
> for the first version, we need something that the implementers can provide
> that says "the person creating this mod is not trusted" or "the person
> creating this mod is trusted" at that binary simplicity level.
A follow-up query would be (in the context of provenance): "why is the person
trusted or not trusted?" Is it due to the algorithm used to compute trust
(there are several, e.g. [2], [3]), or is it the provenance of the person, or
the provenance of the mod (which provides the context for trust)?
In addition, how would the trust value in the above statement be represented -
as a binary value, a plain text label, or a term from a trust
vocabulary/ontology?
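
To make that last question concrete, here is a minimal sketch in Python with
rdflib of the three options; the ex: namespace, the property names, and the
TrustedAgent term are all hypothetical placeholders, not a proposal for the
model:

    # Three possible ways to attach a "trust value" to an agent.
    # The ex: namespace, the properties and the TrustedAgent term are
    # hypothetical placeholders.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import XSD

    EX = Namespace("http://example.org/")   # hypothetical namespace
    g = Graph()
    g.bind("ex", EX)
    agent = EX.agent42                       # the person who created the mod

    # Option 1: a bare binary value
    g.add((agent, EX.isTrusted, Literal(True, datatype=XSD.boolean)))

    # Option 2: a plain text label
    g.add((agent, EX.trustLabel, Literal("trusted")))

    # Option 3: a term drawn from some trust vocabulary/ontology
    g.add((agent, EX.trustLevel, EX.TrustedAgent))

    print(g.serialize(format="turtle"))

All three record *that* the person is trusted, but none of them records why,
or by what computation - which is exactly the provenance question above.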

Hence, I believe trust is not in scope for the WG.

Best,
Satya

[1] http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4397215&tag=1
[2] H. Luo, J. Tao and Y. Sun. Entropy-Based Trust Management for Data
Collection in Wireless Sensor Networks. In Proceedings of WiCom '09, 5th
International Conference on Wireless Communications, Networking and Mobile
Computing, pp. 1-4, 2009.
[3] Y. Wang and M. P. Singh. Formal Trust Model for Multiagent Systems. In
Proceedings of the 20th International Joint Conference on Artificial
Intelligence (IJCAI-07), pp. 1551-1556, 2007.

On Thu, Jul 14, 2011 at 2:00 PM, Reza B'Far <reza.bfar@oracle.com> wrote:

>  Yolanda -
>
> Thank you for the response.  Please see responses below -
>
>    1. You're completely correct that trust has shades of gray (accuracy,
>    precision, etc.).  This is partly why I also included the PACE reference.
>    However, it should be up to the implementer to determine trust.  All we're
>    doing is providing some very coarse-grained way to even express the
>    existence or lack of trust.  Perhaps we should add an "Unknown" to the two
>    that I put in.  At this point, IMO, for the first version, we need something
>    that the implementers can provide that says "the person creating this mod is
>    not trusted" or "the person creating this mod is trusted" at that binary
>    simplicity level.  Later on, during future versions of the draft, additional
>    attributes can always be added.  I'm even fine with doing that now... or
>    creating a pointer to other standards that deal with trust.  But not dealing
>    with it at all means that the mere mention of an agent is not all that
>    useful when I have to have trust.  And most, if not all, commercial
>    applications have to have trust.  It's not an option.  I can't go republish
>    news from some random source that I don't trust and that no one vouches for
>    as a reputable org (the journalism use-case).  Nor can I provide records
>    management lineage in time for some legal evidence piece.
>    2. I am fine with the proposal of completely removing agent.  I guess
>    it's better than ONLY having a "generic" agent.  But I prefer specific
>    agent(s).
>    3. References from Fugetta et al., as well as Russell/Norvig,
>    Taylor/Dashofy, Medvidovich, etc., where software agents are definitively
>    defined, look at the following categories -
>       - Mobile Agents - mobility context
>       - Intelligent Agents - automated processes that make their own
>       decisions without direct human interaction
>       - User-Agent as defined in HTTP/HTML/etc. within the context of
>       client-server computing
>    4. On (3) above, my "beef" here is that we need to use words that
>    have a definitive meaning in software engineering within their own context.
>    System Agent is typically used (and I previously sent a reference on this)
>    to refer to an automated intelligent agent... some cron job that's running
>    in the background doing automated stuff.  User-Agent is defined by Fielding
>    in REST.
>    5. Orthogonal to this discussion - I generally don't like something called
>    "recipe", for example.  I mean, what is a recipe?  It's in my kitchen, but I
>    don't find it in a Gang of Four software engineering book or in anything
>    that I've seen in a graduate or undergraduate software engineering book.
>    Getting creative with words is dangerous.  And I don't think we're inventing
>    anything here in this (or any other) working group in the way of a new
>    theory, principle, etc., so I strongly recommend we use exact words from
>    either accepted and semi-mature (a few publications, not just one paper) or
>    fully mature computer science and/or software engineering disciplines.
>
> Best.
>
>
> On 7/14/11 10:40 AM, Yolanda Gil wrote:
>
> Hi Reza:
>
>  You raise an interesting topic, albeit a tough one.
>
>  Trust tends not to be binary; it comes in all shades of grey (e.g., a
> degree of confidence).
>
>  It is also subjective: the level of trust may depend on the application,
> the domain, or the use of the provenance.
>
>  So in my opinion, the core of a provenance representation should not
> include a representation of trust.  Maybe later we include an extension to
> represent trust, but note that many trust metrics can be derived from a
> given provenance record.
>
>  I am also not sure about your second category.  I am not sure if the NYT
> as publisher of an article would be considered "user-agent" or "system".  I
> am not sure if my personal email agent should be considered "system" or
> "user-agent".
>
>  In general, I think ontologizing agency is tricky.
>
>  In my opinion, the notion of agent should be eliminated from the model,
> unless we want to attach a special meaning to a participant, namely
> responsibility for a step/process.
>
>  Yolanda
>
>
>
>  On Jul 14, 2011, at 10:18 AM, Reza B'Far wrote:
>
>  Creating new thread to put agent sub-typing up for discussion.
>
> The proposal is to have the following sub-types of agent:
>
>    1. Trust-based sub-types
>       - Trusted Agent
>       - Untrusted Agent
>    2. Limiting the scope of System vs. Human interaction
>       - User-Agent
>
> As an alternative to 2, we could also do Automated System Agent and Human Agent.
>
> Reza
>
>
>
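
PS: For reference, the sub-typing proposal quoted above could also be written
out as a small class hierarchy. Again a minimal sketch with rdflib; every term
in the ex: namespace is a hypothetical placeholder, not a proposed name:

    # The quoted proposal (1) and (2), expressed as RDFS sub-classes of a
    # generic Agent class.  Every ex: term is a hypothetical placeholder.
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()
    g.bind("ex", EX)

    # 1. Trust-based sub-types
    g.add((EX.TrustedAgent, RDFS.subClassOf, EX.Agent))
    g.add((EX.UntrustedAgent, RDFS.subClassOf, EX.Agent))

    # 2. Limiting the scope of system vs. human interaction
    g.add((EX.UserAgent, RDFS.subClassOf, EX.Agent))
    # (or, per the alternative to 2, ex:AutomatedSystemAgent and ex:HumanAgent)

    # A concrete agent is then simply typed into one of the sub-classes
    g.add((EX.agent42, RDF.type, EX.TrustedAgent))

    print(g.serialize(format="turtle"))

Written out like this, the question in my comments above comes straight back:
on what basis is ex:agent42 typed as Trusted rather than Untrusted, and where
is that basis recorded?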

Received on Thursday, 14 July 2011 19:36:12 UTC