- From: Kingsley Idehen <kidehen@openlinksw.com>
- Date: Thu, 29 Dec 2011 21:08:15 -0500
- To: public-xg-webid@w3.org
- Message-ID: <4EFD1D0F.80306@openlinksw.com>
On 12/29/11 7:06 PM, Mo McRoberts wrote:
> On 29 Dec 2011, at 21:02, Peter Williams wrote:
>
>> is this a problem with the spec (and its minimum assumptions) or Windows (implementing what the damn RFC actually says)?
>
> Last I looked, Apache (on any platform) won't accept a fragment in a request-URI any more than IIS will without special tweaking. Same applies to lighttpd and most other servers I've come across.
>
> Historical facets regarding the HTTP spec aside (and Kingsley — if you do manage to dig out some references, I'd be interested to read them; I'm an architectural history junkie at the best of times): from my reading of the long and tedious httpRange-14 discussions and outcomes, the fragment identifier was picked as the differentiator between resources and things described by those resources specifically BECAUSE it's NOT sent on the wire — i.e., because
>
> http://example.com/foo
>
> and
>
> http://example.com/foo#bar
>
> and
>
> http://example.com/foo#baz
>
> …all result in an identical request being sent to the server.

Yes, provided you didn't perform the HTTP GET using IE with proxies in the mix. The problem is IE (and, it seems, bits of Windows) still sending fragment identifiers over the wire.

> This might seem at first glance suboptimal, but it allows the differentiation to happen without needing ANYTHING special to be configured at the server side (except enabling conneg if you want your URIs to outlast the current serialisation/server-software combo du jour — and conneg may not be everybody's hot topic, but it IS supported by most servers, and has been around longer than most of the people who use the Web today have been doing so).

Yes. Modulo the Windows anomaly, a hash-based HTTP URI used as an Object Name carries the benefit of implicit Name/Address disambiguation. Not so if the fragment identifier crosses the wire and the server isn't configured to handle it properly, i.e., to enact what the Windows user agent didn't implement.

> Not sending the fragment over the wire embodies the fact that the smarts of linked data are a layer above HTTP: the resource may or may not contain structured data, and the client may or may not know what to do with it if it's there.
>
> Given that:
>
> - Your SAN URI should include a fragment, because it's the URI for *you*, not the URI for the resource

Again, yes, but the explanation is much simpler. HTTP URIs can be used to Name Objects/Entities/Things. They can also be used to Name Locations (Addresses) and function as Resource Locators. Linked Data requires disambiguation, since you have Object Names that resolve to Object Descriptor Resources at Addresses. The other baggage we carry here is the overloading of "Resource" where it should have been "Object".

> - (Ideally, it shouldn't contain something which forces the resource type, like a file extension, because preferred formats change over time — but for testing without conneg turned on you can get away without it)
>
> - Whatever structured documents you're publishing need to include that fragment in the subject for the statements about you and the key (obviously)

The published document should describe an unambiguously named subject. The subject name should resolve to said document. You end up with two paths to the actual description data (an EAV/SPO-based directed graph).
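To make the "two paths" point concrete, here is a minimal sketch (Python with rdflib; the card URI and the #me name are made up for illustration): the document URI is the Address of the descriptor, and the fragment-qualified URI is the Name of the subject it describes.

    # A minimal sketch of the hash-URI pattern, using rdflib and made-up URIs:
    # the document http://example.com/card (an Address) describes the subject
    # http://example.com/card#me (a Name). Neither URI is real.
    from rdflib import Graph, URIRef

    CARD = "http://example.com/card"       # descriptor document (Address)
    ME = URIRef(CARD + "#me")              # the thing described (Name)

    turtle = """
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    <#me> a foaf:Person ;
          foaf:name "Example Person" ;
          foaf:isPrimaryTopicOf <> .
    """

    g = Graph()
    # publicID makes the relative references (<#me>, <>) resolve against the document URI
    g.parse(data=turtle, format="turtle", publicID=CARD)

    # Path 1: the document Address yields the whole description graph.
    print(len(g), "statements in the descriptor")

    # Path 2: the fragment-qualified Name picks out the statements about the subject.
    for s, p, o in g.triples((ME, None, None)):
        print(s, p, o)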
> - Anything which, when presented with your SAN URI including a fragment, includes it in the request-URI to your server does so in violation of the HTTP specs (can I squeeze in a Postel's Law citation, too?)

Yes, re. what became the official spec.

> - Whether the server accepts the fragment or not, the consumer STILL has to filter the set of triples it gets back down to those with the right subject in any case, making its inclusion in the request-URI entirely worthless.
>
> - If you're into the realms of *needing* rewrite rules on any web server just to do static linked data publication, you've overcomplicated things.

For the simple publication case re. (X)HTML, as stated, just use # URIs. If IIS by default sends the fragment identifier part of a URI to some other server in the chain, that should be fixed by way of a re-write rule. Windows is the only platform where this problem potentially arises.

Kingsley

> See, e.g., the first part of this presentation — which is geared at Apache but is minimally Web-server-specific (i.e., I enable conneg). No RDFa or Microdata because I threw together the example data in a rush; I should go back and add it…
>
> http://pres.spindle.org.uk/2011/kultivate/
>
> M.
>
>> Yes, say I, head held low: I've failed to publish a document using the 2 URIs Jurgen demanded as a minimum baseline (with and without fragment). To be fair, he is spot on demanding that this be the baseline.
>>
>> http://www.slideshare.net/guestecacad2/goodrelations-tutorial-part-4 is excellent (and could be a foundation course for our work here). It even mentions IIS rewrite rules, for several generations of IIS. But it doesn't provide them (nor does any other blog post or Windows support page). It doesn't show it ON Windows. It just talks "about" it. And the limited context for the rewriting going on is obvious (something IIS has done for a decade: mapping a path element to a script file with an extension). None of it deals with the fragment.
>>
>> Surely in the semweb community there must be SOMEONE who already delivers an RDFa page from IIS? ...responding to a server-initiated GET that has the #this ...on the wire? (ignoring what the RFC says SHOULD not happen).
>>
>> Or perhaps someone can say: you MUST use the beta of Windows 8 server edition (which I can go download)... if there are confidentiality issues going on.
>>
>> Folks on the list ARE HEARING THAT IT MUST be supported (for the semantics of secure de-referencing); it's a critical SEF (security enforcing function). Otherwise, we are limited to Bergi-style validation (which produces a different security answer to FOAFSSL and ODS). He implements a lower-assurance SEF, one that does not enforce correct de-referencing.
>>
>> Date: Thu, 29 Dec 2011 15:08:29 -0500
>> From: kidehen@openlinksw.com
>> To: public-xg-webid@w3.org
>> Subject: Re: neither FCNS nor FOAFSSL can read a new foaf card (hosted in Azure). RDFa validators at W3C and RDFachecker say its fine...
>>
>> On 12/29/11 2:23 PM, Peter Williams wrote:
>> Get it down to specifics.
>>
>> As it stands, I, as a dumb programmer doing bog-standard stuff, cannot make an RDFa file on (native) Windows work with WebID validators. This is my second fail (self-signed certs being the first). Running third-party servers on Windows is NOT an option (for us).
>>
>> What's the fail right now about? Fragment identifiers over the wire? Or adding "#this" to cheaply make an unambiguous object name that is implicitly bound to its description resource (document) address?
>>
>> With two fails, the project is out (under normal rules).
>>
>> Yes, but what's the failure? You can make an IIS re-write rule to handle fragment identifiers that come over the wire.
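What such a re-write rule has to do is simple to state, even though the IIS-specific configuration isn't shown anywhere in this thread: discard everything from '#' onward in the request path before the resource is resolved, i.e., enact on the server's behalf what the non-compliant user agent failed to do. A concept-only sketch follows (Python WSGI middleware, not IIS configuration; it assumes a stray fragment would actually surface in the request path of whatever stack you run).

    # Concept sketch only: strip a stray fragment from the request path before
    # the underlying application resolves the resource. The middleware name and
    # the WSGI setting are illustrative, not taken from any product discussed here.

    def strip_fragment_middleware(app):
        def wrapper(environ, start_response):
            path = environ.get("PATH_INFO", "")
            if "#" in path:
                # Keep only the part before the fragment, as a compliant
                # user agent should have done before sending the request.
                environ["PATH_INFO"] = path.split("#", 1)[0]
            return app(environ, start_response)
        return wrapper

    # Usage with any WSGI application, e.g. a static-file publisher:
    # application = strip_fragment_middleware(application)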
>> You simply work from the resource outwards using "reflection". Basically, your rule concerns a resource that has a sense of "self" discernible from its content (i.e., the EAV/SPO-based structured data it bears). Thus, you can deduce from the content that a resource is a descriptor for an unambiguously named subject. This rule isn't common, because the rest of the world has adopted the notion that fragment identifiers don't cross the wire. But this isn't a dead end.
>>
>> If you look at how Facebook has adopted Linked Data (which has nothing to do with Turtle support), you'll recognize the "reflection" pattern re. self-describing resources. Facebook has unleashed somewhere in the region of 850 million+ self-describing, structured-data-bearing resources that describe unambiguously named subjects, using HTTP URIs.
>>
>> If you make the "#this" addition to the URL that I suggested earlier, you should see this manifest with clarity.
>>
>> I've dropped a note on G+ (my preferred blogging platform these days) that outlines a simple example of Linked Data deployment using our particular Linked Data deployment platform, which is integral to Virtuoso: http://goo.gl/fC8Rv .
>>
>> Kingsley
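Pulling the quoted points together: a consuming agent only ever puts the document's Address on the wire, negotiates a serialisation, and then checks whether what came back actually describes the fragment-qualified Name — the "reflection" test described above. A minimal sketch, assuming Python with rdflib and a made-up http://example.com/card#me WebID:

    # Minimal consumer-side sketch (Python stdlib + rdflib; the WebID below is
    # hypothetical). Shows three points from the thread in one place:
    #   1. the fragment never goes on the wire,
    #   2. conneg picks the representation,
    #   3. the consumer filters the returned triples down to the named subject.
    from urllib.parse import urldefrag
    from urllib.request import Request, urlopen
    from rdflib import Graph, URIRef

    webid = "http://example.com/card#me"          # Name (hypothetical)
    doc_url, fragment = urldefrag(webid)          # Address: http://example.com/card

    req = Request(doc_url, headers={"Accept": "text/turtle, application/rdf+xml;q=0.8"})
    with urlopen(req) as resp:                    # request-URI carries no '#'
        ctype = resp.headers.get_content_type()
        body = resp.read()

    g = Graph()
    g.parse(data=body, format="turtle" if "turtle" in ctype else "xml", publicID=doc_url)

    # "Reflection": the document should describe the fragment-qualified subject.
    about_me = list(g.triples((URIRef(webid), None, None)))
    if about_me:
        print(f"{doc_url} is a descriptor for {webid} ({len(about_me)} statements)")
    else:
        print(f"{doc_url} says nothing about {webid}")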
>> From: home_pw@msn.com
>> To: kidehen@openlinksw.com; mo.mcroberts@bbc.co.uk
>> CC: public-xg-webid@w3.org
>> Subject: RE: neither FCNS nor FOAFSSL can read a new foaf card (hosted in Azure). RDFa validators at W3C and RDFachecker say its fine...
>> Date: Thu, 29 Dec 2011 11:12:18 -0800
>>
>> The history is obviously convoluted, as usual with tech.
>>
>> I know two facts:
>>
>> Kingsley, as an experienced person with semweb in enterprise theatres, implies that linked data clients are proper in putting fragments on the wire (and Henry's server is proper in NOT implementing the SHOULD-reject rules in the RFC).
>>
>> I could not make a trivial RDFa page on a Windows website (after 15 years of doing the same thing...) work with both cases Jurgen suggested (implying I need to get my basic act together, rather than solve an insoluble-for-me Windows problem).
>>
>> Now, I'm giving folks a break. My normal CISO/CIO management position in this situation is: I go away for 5 more years (till Microsoft makes the platform that fits us, as a Windows shop, and it's 3+ years old, since we only adopt a version out of date). We pay Microsoft to make and support stuff that fits our interests (working as we do in the commodity world, needing low error rates and very HIGH stability). I also expect them to meet internet standards with hardware and OS solutions (where Internet Standard status is a long-term goal, one which HTTP still has not reached). Obviously, HTTP has good reasons for not being an Internet Standard yet: it isn't stable and hardware-ready, and it won't be the same in 10 years' time... (yet).
>>
>> I give the break as I see a community on the transition from R&D to mass commercialization, and that needs some support. It isn't proper that one of the latest adopters (realty) is a driving use case, but I have not seen any others come to the fore as NEEDING a distributed, de-centralized identity management solution. Everyone else is perfectly happy with hub-and-spoke.
>>
>>> Date: Thu, 29 Dec 2011 11:46:56 -0500
>>> From: kidehen@openlinksw.com
>>> To: Mo.McRoberts@bbc.co.uk
>>> CC: home_pw@msn.com; public-xg-webid@w3.org
>>> Subject: Re: neither FCNS nor FOAFSSL can read a new foaf card (hosted in Azure). RDFa validators at W3C and RDFachecker say its fine...
>>>
>>> On 12/29/11 4:04 AM, Mo McRoberts wrote:
>>>> hold on a second.
>>>>
>>>> is somebody saying fragment identifiers SHOULD be included in a request somewhere?
>>>>
>>>> HTTP/1.0 and HTTP/1.1 very explicitly say otherwise (and as far as I can tell, HTTPbis WG outputs haven't changed that), and last I checked nothing about linked data changes that either; part of the point of linked data is that it doesn't require anything "special".
>>>>
>>>> AFAICT, a server is perfectly within its rights to return a 4xx response to a request containing a fragment, and that includes a 400 (Bad Request), given that an unescaped '#' isn't permitted in a Request-URI.
>>>
>>> It's a loooong story.
>>>
>>> Fragment identifiers not going over the wire arose (I hear) from a typo a long time ago during spec development. It led to the common (and eventually accepted) practice of not sending fragment identifiers over the wire, but Microsoft didn't initially adopt this, i.e., they stuck to the pre-typo definition.
>>>
>>> It's also one of the reasons why DBpedia adopted slash rather than hash URIs, since the project's goal was about just working, in a browser-agnostic way.
>>>
>>>> if there's a spec somewhere which says otherwise, I'd love to know about it (not least so I can tweak my own servers), but the current httpbis-p1-messaging draft even goes as far as to say:
>>>>
>>>> "Note: Fragments ([RFC3986], Section 3.5) are not part of the request-target and thus will not be transmitted in an HTTP request."
>>>>
>>>> To the best of my knowledge this is a point of clarification, rather than a change in specification; it's just that some folk hadn't read the URI ABNF properly.
>>>
>>> It's a mess. But for now, I think Microsoft has tweaked its HTTP products (browsers, servers, and proxies), thereby reducing perpetuation of this problem. They were the last holdout.
>>>
>>> I might dig up some reference links if I get some time.
>>>
>>> Kingsley
>>>
>>>> M.

--

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
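As a small illustration of the two quoted points (the fragment is never part of the request-target, and a literal '#' meant as data must be percent-encoded to appear in a request path legally), here is a short sketch using only Python's standard library; the URIs are made up.

    # Illustrates the two quoted points with Python's stdlib (URIs are made up).
    from urllib.parse import urldefrag, quote

    # 1. These three URIs all share the same request-target: the fragment is
    #    handled client-side and is not transmitted in the HTTP request.
    for uri in ("http://example.com/foo",
                "http://example.com/foo#bar",
                "http://example.com/foo#baz"):
        print(uri, "->", urldefrag(uri).url)   # all print http://example.com/foo

    # 2. A literal '#' that is meant as data (not a fragment delimiter) has to be
    #    percent-encoded before it can legally appear in a request path.
    print(quote("/card#me", safe="/"))         # /card%23me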
Received on Friday, 30 December 2011 02:08:39 UTC