RE: Documenting implicit assumptions?

 
This discussion thread is prompting some of my assumptions to come to the fore. These include assumptions about originality, patents, know-how, and reducing things to practice.
 
URI string types: let's drop the debate over how hard or cool it is to serialize a string when marshalling a constructed type. I did it by hand at 21, using simple rules, and I was at the bottom of the CS class. It's fine to define a better string type and a SAN name-form container for the URI characters in a cert name, since things have changed in the web world since we assigned IA5String in 2000. Just do it, so the IRI issue is closed, and let us know the OID of the new name-form so I can change two or three bytes in my implementation.
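For concreteness, here is a minimal sketch of minting such a cert today, with the WebID squeezed into the existing URI slot of subjectAltName (IA5String and all). This is my own illustration, assuming the Python "cryptography" package; the subject name and URI are placeholders, and a new name-form would replace the UniformResourceIdentifier entry with whatever the new OID defines:

    # Sketch: self-signed cert carrying a URI in subjectAltName (placeholder values)
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Alice")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                       # self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        # This GeneralName entry is where today's IA5String restriction bites;
        # a new name-form with its own OID would replace it.
        .add_extension(
            x509.SubjectAlternativeName(
                [x509.UniformResourceIdentifier(u"https://example.org/alice#me")]
            ),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )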
 
On patents, originality, and "innovation": there is simply nothing novel in the idea that a client's cert, once presented, induces the recipient to retrieve the associated entry and the cert within it by an act of de-referencing (which only Newton and Leibniz could understand in the semweb version). Sorry, the procedure comes with the 1988 edition of the ISO/ITU-T X.500 standards. If one reads those standards: when a user agent does peer-entity authentication (known as bind) to a server to establish a [secure] communication channel over which directory operations may pass, there was an expectation that the server's connection-establishment module pings the directory entry "referenced" by the name in the cert, to see whether the cert value is still present in the cache or master node (using an optionally signed directory request/response). At a higher layer, should the SERVER agent issue a background signed directory request, the signed-request dispatcher in its peer server does, post bind, essentially the same thing as the procedure on the foreground channel: a cert-pingback, which delivers the SEF known as "strong authentication" (semantically different from peer-entity authentication and its use of the cert-pingback during bind). These bind and strong-auth procedures are variants of the simple password procedure (the SEF known as "simple authentication"), in which a [hashed] password "claim" could be checked out by the server by issuing an (optionally signed) compare operation to a preferred server node holding a cached copy of the hashed password attribute, as denoted by name lookup and lookup operation parameters.
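Stripped of the OSI vocabulary, the pingback shape described above is tiny. The following is a sketch of my own wording, not text from the standard: DIRECTORY stands in for the directory service, and a real implementation would issue an (optionally signed) read or compare operation over DAP/LDAP instead of a dict lookup:

    # In-memory stand-in for a directory: DN -> entry attributes.
    DIRECTORY = {
        "cn=Alice,o=Example": {
            "userCertificate": [b"0\x82placeholder-der"],  # DER-encoded cert values
            "userPassword": [b"placeholder-hash"],         # hashed password values
        },
    }

    def cert_pingback(subject_dn: str, presented_der: bytes) -> bool:
        """De-reference the name carried in the presented cert and check that
        the cert value is still present in the referenced entry. Bind uses
        this for peer-entity authentication; signed operations after bind
        reuse it as "strong authentication"."""
        entry = DIRECTORY.get(subject_dn)
        return entry is not None and presented_der in entry.get("userCertificate", [])

    def simple_auth(subject_dn: str, hashed_claim: bytes) -> bool:
        """The "simple authentication" variant: the same move, made against a
        cached hashed-password attribute via a compare operation."""
        entry = DIRECTORY.get(subject_dn)
        return entry is not None and hashed_claim in entry.get("userPassword", [])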
 
There are lots of EC-funded research reports from the early 1990s that disclose how this was all reduced to practice, lest folks are thinking about patenting the obvious or writing a refereed paper. This being OSI culture, it was reduced to practice in several protocol suites, native OSI and TCP/IP being the usual candidates in that era; since it is all, by definition, specified abstractly, it is irrelevant which stack one uses. The web "stack" is just another "embodiment".
 
No innovation. Now, this work was all done in an ISO/ITU-T standard, which by definition is ten years behind the times by the time it is prepared, voted on by national bodies, and then issued (in 1988). There is no innovation, and no invention, in simply doing what the standard describes and specifies. There is: application and reuse of an (obvious) technique. I know Henry has met some of the ISO/ITU-T team who actually designed and specified this work in committee, and I've met most of them.
 
Semweb original thinking: merely using the X.509 strong-authentication technique in https (the WebID protocol) is not creditable, being so obvious. Using it in combination with the "logical" theories of RDF, FOAF, and linked data possibly is... assuming someone can get clear about the original claims. The interpretative logic apparatus is adding something, much as Prolog added something to computer science that Fortran never could. Though it's hardly new, the ability to identify and download coded typelibs on the fly on the web is cute. Arguably, the web world already has it: it's called Authenticode-signed Windows libraries, with typelib metadata stored in the resource section of the PE file format that provides the implementations. Windows Update delivers billions a day over the web. Oracle does the same... with signed Java classes.
 
So, in my mind, the ONLY thing that is interesting and novel here is what the semweb really brings to the table. It has to (1) deliver the "promise of the Prolog story", and thus add something to the highly formal world of security engineering, and (2) induce the wave of adoption for which the web story is famous, since it can magically find "the right balance" that causes viral takeoff.
 
Reducing to viral uptake: the right balance, for me, is this: have any mom-and-pop add a paragraph of emailed RDFa cert stuff to their home page, and point to it in the re-issued cert. That's it. Do that 500 million times, and the browser makers will quickly finesse the APIs and the crappy cert-selector UI of 1996-era https. They will be begging for design "theory" to guide them at that point.
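To make the mom-and-pop scenario concrete, the relying party's side might look like the sketch below. This is my own illustration, assuming an RDFa-capable rdflib install and the W3C cert vocabulary; the function name and profile handling are mine, not anything the group has specified:

    # Hypothetical check: de-reference the WebID named in the cert's SAN and
    # see whether the home page's RDFa lists the same RSA public key the
    # client just proved possession of in the TLS handshake.
    from rdflib import Graph, URIRef, Namespace

    CERT = Namespace("http://www.w3.org/ns/auth/cert#")

    def key_matches_profile(webid: str, modulus: int, exponent: int) -> bool:
        g = Graph()
        g.parse(webid, format="rdfa")          # the mom-and-pop paragraph of RDFa
        for key in g.objects(URIRef(webid), CERT.key):
            mod = g.value(key, CERT.modulus)   # hexBinary literal
            exp = g.value(key, CERT.exponent)  # integer literal
            if mod is not None and exp is not None \
                    and int(str(mod), 16) == modulus and int(str(exp)) == exponent:
                return True
        return False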
 
 
> From: tai@g5n.co.uk
> To: nathan@webr3.org
> CC: henry.story@bblfish.net; benjamin.heitmann@deri.org; public-xg-webid@w3.org; msporny@digitalbazaar.com; michael.hausenblas@deri.org; timbl@w3.org
> Date: Tue, 1 Feb 2011 10:57:41 +0000
> Subject: Re: Documenting implicit assumptions?
> 
> On Mon, 2011-01-31 at 19:59 +0000, Nathan wrote:
> > subjectAltName tightly binds WebID to x509v3 certificates, x509v3 
> > certificates with subjectAltName extensions are very hard to produce 
> > with common libraries (unless you have a custom setup - e.g.
> > openssl). 
> 
> Point and click certificate generation (on Linux and similar - requires
> OpenSSL and Gambas):
> 
> http://buzzword.org.uk/2008/foaf_ssl/MakeWebIDCert/
> 
> > is subjectAltName IRI compatible?
> 
> HTTP isn't IRI compatible. Very few protocols are. But luckily that
> doesn't matter because there is a mapping from IRIs to URIs - for every
> valid IRI you can calculate an equivalent URI. In fact the only official
> syntax definition for IRIs exclusively defines them in terms of a
> mapping to URIs. (To paraphrase, "if a Unicode string processed this way
> results in a valid URI, then what you started with was an IRI".)
> 
> > use of content negotiation on webid URI dereferencing limits usage to 
> > HTTP.
> 
> Who says you need to conneg? A WebID profile with a single
> representation is perfectly fine. This could be an "ftp:" URI, or even
> (potentially) a "data:" URI. (Anyone played around with "data:" WebIDs
> yet? I've been thinking about them for some time, but my OpenSSL
> wizardry is not quite up to it yet.)
> 
> > use of RDF requires RDF support, use of XML requires XML support, use 
> > of HTML+RDFa requires DOM and RDFa support.
> 
> Use of the network stack requires an Internet connection. So what?
> Virtually any technology these days has a long list of dependencies. The
> number of people programming on bare metal these days is not
> statistically significant. libxml + librdf + libraptor comes in pretty
> small compared to the amount of code needed to, say, play back video
> found on the web.
> 
> > no required serialization makes interoperability nigh on impossible.
> 
> Yes, I agree here. My stance on it is that WebID profiles MUST be
> available in at least one W3C Recommended serialisation of RDF. If
> they're available in other formats too, and if consumers of the profiles
> prefer the other formats, then more power to them.
> 
> Mandating that they be available in one blessed serialisation of RDF is
> not a big ask. It's not like the W3C comes up with new ones every day -
> there was the XML one back in the 1990s; and there was RDFa[1] in 2008.
> That's two.
> 
> Yes, there will probably be a few more over the next few years, with the
> RDFa working group expecting to publish an RDFa 1.1 Recommendation real
> soon now, and the RDF working group expected to publish Turtle and some
> sort of JSON serialisation. But we'll still be able to count them on
> one hand. If we think this is too uncontrolled we could stipulate that
> the mandated list of serialisations for WebID is the list of W3C
> Recommended RDF serialisations at the time of WebID's publication. So
> serialisations recommended after WebID is published don't get on the
> list.
> 
> ____
> 1. One could argue that RDFa is not a single serialisation, but a family
> of them. XHTML+RDFa and SVG+RDFa are already Recommendations, HTML+RDFa
> is on the Rec track, and I'm personally trying to get Atom+RDFa
> published as a working group note.
> 
> -- 
> Toby A Inkster
> <mailto:mail@tobyinkster.co.uk>
> <http://tobyinkster.co.uk>
> 
> 
