RE: neither FCNS nor FOAFSSL can read a new foaf card (hosted in Azure). RDFa validators at W3C and RDFachecker say its fine...

The fail is an exam-style test fail -- I cannot deliver the publication part of the spec (using native Windows or any normal configuration of IIS/Windows). This is my latest test. (I've given up on using native Windows as the validating agent in this project, given Windows-native SSL's problems with handling _unregistered_ self-signed client certs over HTTPS.) Henry is demanding I implement (quite properly). But the very platform is in the way, I'm finding. This time it's not even subtle security stuff with legacy certs that is the problem. The platform just doesn't meet the minimum requirements assumed by the core publication component of the spec. It cannot even provide a stream for the URIs we expect validation agents to emit! Is this a problem with the spec (and its minimum assumptions), or with Windows (implementing what the damn RFC actually says)?

Yes, say I, head held low: I've failed to publish a document using the two URIs Jurgen demanded as a minimum baseline (with and without fragment). To be fair, he is spot on demanding that this be the baseline. http://www.slideshare.net/guestecacad2/goodrelations-tutorial-part-4 is excellent (and could be a foundation course for our work here). It even mentions IIS rewrite rules, for several generations of IIS. But it doesn't provide them (nor does any other blog post or Windows support page). It doesn't show it ON Windows; it just talks "about" it. And the limited context for the rewriting going on is obvious (IIS has done it for a decade, mapping a path element to a script file with an extension). None of it deals with the fragment.

Surely in the semweb community there must be SOMEONE who already delivers an RDFa page from IIS... responding to a server-initiated GET that has the #this... on the wire (ignoring what the RFC says SHOULD not happen)? Or perhaps someone can say: you MUST use the beta of the Windows 8 server edition (which I can go download)... if there are confidentiality issues going on.
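For reference, the kind of IIS rewrite rule the slide deck alludes to (but never shows) would look roughly like the sketch below, assuming Microsoft's URL Rewrite module is installed; the rule name and file paths are hypothetical. Note its inherent limit: it can map the extensionless URI to the RDFa document, but no server-side rule can ever see the "#this" case, because a conforming client strips the fragment before sending the request.

```xml
<!-- web.config sketch (hypothetical paths): serve the RDFa document for
     the extensionless URI /card. The fragment (#this) is resolved on the
     client side and never appears in the request, so no rule can match it. -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="foaf-card" stopProcessing="true">
          <match url="^card$" />
          <action type="Rewrite" url="card.xhtml" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```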
Folks on the list ARE HEARING that it MUST be supported (for the semantics of secure de-referencing); it's a critical SEF (security-enforcing function). Otherwise we are limited to Bergi-style validation (which produces a different security answer to FOAFSSL and ODS). He implements a lower-assurance SEF that does not enforce correct de-referencing.

Date: Thu, 29 Dec 2011 15:08:29 -0500
From: kidehen@openlinksw.com
To: public-xg-webid@w3.org
Subject: Re: neither FCNS nor FOAFSSL can read a new foaf card (hosted in Azure). RDFa validators at W3C and RDFachecker say its fine...

    On 12/29/11 2:23 PM, Peter Williams wrote:
Get it down to specifics.

As it stands, I, as a dumb programmer doing bog-standard stuff, cannot make an RDFa file on (native) Windows work with WebID validators. This is my second fail (self-signed certs being the first). Running third-party servers on Windows is NOT an option (for us).

What's the fail right now about? Fragment identifiers over the wire? Or adding "#this" to cheaply make an unambiguous object name that is implicitly bound to its description resource (document) address?

        With two fails, the project is out (under normal rules).
Yes, but what's the failure? You can make an IIS re-write rule to handle fragment identifiers that come over the wire. You simply work from the resource outwards using "reflection". Basically, your rule is about a resource that has a sense of "self" that's discernible from its content (i.e., the EAV/SPO-based structured data it bears). Thus, you can deduce from the content that a resource is a descriptor for an unambiguously named subject. This rule isn't common because the rest of the world has adopted the notion that fragment identifiers don't cross the wire. This isn't a dead end.
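As a sketch of the convention being described here (the URL and names are illustrative, using only the Python standard library): a client strips the fragment to get the descriptor document's address, dereferences that, and then looks inside the retrieved description for statements about the full hash URI.

```python
from urllib.parse import urldefrag

# Illustrative WebID (hypothetical URL): the hash URI names a subject;
# the fragmentless part addresses the document that describes it.
webid = "http://example.org/card#this"
doc_url, local_name = urldefrag(webid)

print(doc_url)     # http://example.org/card  <- what is actually dereferenced
print(local_name)  # this  <- resolved locally, inside the retrieved description

# Reassembling the two parts recovers the unambiguous subject name that the
# descriptor document is expected to make statements about.
assert doc_url + "#" + local_name == webid
```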
If you look at how Facebook has adopted Linked Data (which has nothing to do with Turtle support), you'll recognize the "reflection" pattern approach re. self-describing resources. Thus, Facebook has unleashed somewhere in the region of 850 million+ self-describing, structured-data-bearing resources that describe unambiguously named subjects, using HTTP URIs.
If you make the "#this" addition to the URL that I suggested earlier, you should see this manifest with clarity.
    I've dropped a note on G+ (my preferred blogging platform these
    days) that outlines a simple example of Linked Data deployment using
    our particular Linked Data Deployment platform which is integral to
    Virtuoso: http://goo.gl/fC8Rv . 
    Kingsley
From: home_pw@msn.com
To: kidehen@openlinksw.com; mo.mcroberts@bbc.co.uk
CC: public-xg-webid@w3.org
Subject: RE: neither FCNS nor FOAFSSL can read a new foaf card (hosted in Azure). RDFa validators at W3C and RDFachecker say its fine...
Date: Thu, 29 Dec 2011 11:12:18 -0800

            The history is obviously convoluted, as usual with tech.

I know 2 facts:

Kingsley, as an experienced person with semweb in enterprise theatres, implies that linked data clients are proper for putting fragments on the wire (and Henry's server is proper for NOT implementing the SHOULD-reject rules in the RFC).

I could not make a trivial RDFa page on a Windows website (after 15 years of doing the same thing...) work with both cases Jurgen suggested (implying I need to get my basic act together, rather than solve an insoluble-for-me Windows problem).

Now, I'm giving folks a break. My normal CISO/CIO management position in this situation is... I go away for 5 more years (till Microsoft makes the platform that fits us, as a Windows shop, and it's 3+ years old, since we only adopt a version out of date). We pay Microsoft to make and support stuff that fits our interests (working as we do in the commodity world, needing low error rates and very HIGH stability). I also expect them to meet internet standards with hardware and OS solutions (in which IS status is a long-term goal track, in which HTTP still has not made "internet standard"). Obviously, HTTP has good reasons for not being an internet standard yet: it ain't stable and hardware-ready. It won't be the same in 10 years' time... (yet).

I give the break as I see a community in the transition from R&D to mass commercialization, and that needs some support. It's not proper that one of the latest of adopters (realty) is a driving use case, but I have not seen any others... come to the fore as NEEDING a distributed, de-centralized identity management solution. Everyone else is perfectly happy with hub-and-spoke.

> Date: Thu, 29 Dec 2011 11:46:56 -0500
> From: kidehen@openlinksw.com
> To: Mo.McRoberts@bbc.co.uk
> CC: home_pw@msn.com; public-xg-webid@w3.org
> Subject: Re: neither FCNS nor FOAFSSL can read a new foaf card (hosted in Azure). RDFa validators at W3C and RDFachecker say its fine...
> 
> On 12/29/11 4:04 AM, Mo McRoberts wrote:
> > hold on a second.
> >
> > is somebody saying fragment identifiers SHOULD be included in a request somewhere?
> >
> > HTTP/1.0 and HTTP/1.1 very explicitly say otherwise (and as far as I can tell, HTTPbis WG outputs haven't changed that), and last I checked nothing about linked data changes that? Part of the point of linked data is that it doesn't require anything "special".
> >
> > AFAICT, a server is perfectly within its rights to return a 4xx response to a request containing a fragment, and that includes a 400 (Bad Request), given that an unescaped '#' isn't permitted in a Request-URI.
> 
> It's a loooong story.
> 
> Fragment identifiers not going over the wire arose from a typo (I hear) a long time ago during spec development. It led to the common (and eventually accepted) practice of not sending fragment identifiers over the wire, but Microsoft didn't initially adopt this, i.e., they stuck to the pre-typo definition.
> 
> It's also one of the reasons why DBpedia adopted slash rather than hash URIs, since the project's goal was about: just working, in a browser-agnostic way.
> 
> > if there's a spec somewhere which says otherwise, I'd love to know about it (not least so I can tweak my own servers), but the current httpbis-p1-messaging draft even goes as far as to say:
> >
> > "Note: Fragments ([RFC3986], Section 3.5) are not part of the request-target and thus will not be transmitted in an HTTP request."
> >
> > To the best of my knowledge this is a point of clarification, rather than a change in specification; it's just that some folk hadn't read the URI ABNF properly.
> 
> It's a mess. But for now, I think Microsoft has tweaked its HTTP products (browsers, servers, and proxies), thereby reducing perpetuation of this problem. They were the last hold-out.
> 
> I might dig up some reference links if I get some time.
> 
> Kingsley
> 
> > M.
> 
> -- 
> Regards,
> 
> Kingsley Idehen
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/112399767740508618350/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
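The httpbis note quoted above can be illustrated with the Python standard library (URLs here are illustrative): a conforming client builds the request-target from path and query only, so the fragment never goes over the wire.

```python
from urllib.parse import urlsplit

def request_target(uri: str) -> str:
    """Build the HTTP request-target a conforming client would send.

    Per RFC 3986 section 3.5 (and the httpbis note quoted above), the
    fragment is client-side only: it is dropped before the request is formed.
    """
    parts = urlsplit(uri)
    target = parts.path or "/"
    if parts.query:
        target += "?" + parts.query
    return target

print(request_target("http://example.org/card#this"))      # /card
print(request_target("http://example.org/card?v=1#this"))  # /card?v=1
```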

    -- 

Regards,

Kingsley Idehen	      
Founder & CEO 
OpenLink Software     
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen

Received on Thursday, 29 December 2011 21:02:48 UTC