Re: making the webcredits.org spec more strict about 'source' and 'destination' fields.

On 24 April 2012 20:45, David Nicol <davidnicol@gmail.com> wrote:

>
> On Tue, Apr 24, 2012 at 11:14 AM, Melvin Carvalho <
> melvincarvalho@gmail.com> wrote:
>
>>
>> I think the subtle point here that most don't get is that HTTP URLs are
>> documents, as defined by the protocol, and anything inside a document,
>> denoted with a #, is a data point.  The hard thing is that web developers
>> have to UNLEARN their previous assumptions.  This single point causes no
>> end of chaos!  The other problem is that the web, like HTML, is fault
>> tolerant, so if you get it wrong your system will probably still work!
>> :)
>>
>> The challenge is getting the language right so that it's easily
>> understood in the short spec doc, in particular so that people can get up
>> and running in under a day.  I'm going to put out a draft in the next few
>> days that is hopefully more understandable.
>>
>
> Section 11.5.1 of Draft 12 of the OpenID 2.0 spec recommends that OPs
> assign a unique URL fragment to an OpenID URL, one that changes when the
> OpenID changes ownership.
>
> An appended generation identifier is very different from having the URL
> refer to a big document (say, a roster) and the fragment point to a part of
> it (the page and line of someone's listing in the roster).
>
> The specification for fragments,
> http://tools.ietf.org/html/rfc3986#section-3.5 , pretty much says
> "anything goes" and delegates all fragment interpretation to specific
> schemes, so an identity scheme (even an OpenID 2.0 provider that uses
> fragments for more than generation differentiation) seems conformant.
>
> I suggest that example identity strings in the short spec doc not carry
> fragments, and that the sentence where you state that any URL will do
> could affirm that, when a fragment is provided, it is significant and
> MUST NOT be stripped.
>
> How about globally unique telephone numbers of well-known services,
> http://tools.ietf.org/html/rfc3966#section-5.1.4 , for the examples? Is
> that too cute?
>

Thanks much for the feedback.

I've uploaded some changes, in line with feedback.
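On the fragment point specifically, here is a minimal Python sketch of the
document-vs-data-point distinction and the stripping hazard (the identity
URL is invented for illustration; it's not from the spec):

```python
from urllib.parse import urldefrag

# Hypothetical identity URL for illustration only.
identity = "http://example.org/roster#alice"

# The URL minus its fragment names the document (the roster);
# the fragment names a data point inside it (alice's entry).
document, fragment = urldefrag(identity)
print(document)   # http://example.org/roster
print(fragment)   # alice

# An HTTP client dereferences only the document part -- the fragment
# never goes on the wire -- so an implementation that round-trips
# identities through a fetch-and-rebuild step can silently lose it.
# Keep the original string, or re-attach the fragment yourself.
assert document + "#" + fragment == identity
```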

While I think there are some improvements, there is still some way to go in
communicating this better.

I see two challenges:
- Explaining the spec in less than two pages
- Providing an implementor's guide, examples, best practices, a primer, etc.

Perhaps this can be two documents.

I'm also not sure I've covered the philosophical motivation, i.e. the
principle of least power, which is one of the axioms of the web ...

http://www.w3.org/DesignIssues/Principles.html

In choosing computer languages, there are classes of program which range
from the plainly descriptive (such as Dublin Core metadata, or the content
of most databases, or HTML) through logical languages of limited power (such
as access control lists, or conneg content negotiation) which include
limited propositional logic, through declarative languages which verge on
the Turing Complete (Postscript is, but PDF isn't, I am told) through those
which are in fact Turing Complete though one is led not to use them that
way (XSLT, SQL) to those which are unashamedly procedural (Java, C).

 The choice of language is a common design choice. The low power end of the
scale is typically simpler to design, implement and use, but the high power
end of the scale has all the attraction of being an open-ended hook into
which anything can be placed: a door to uses bounded only by the
imagination of the programmer.

 Computer Science in the 1960s to 80s spent a lot of effort making
languages which were as powerful as possible. Nowadays we have to
appreciate the reasons for picking not the most powerful solution but the
least powerful. The reason for this is that the less powerful the language,
the more you can do with the data stored in that language. If you write it
in a simple declarative form, anyone can write a program to analyze it in
many ways. The Semantic Web is an attempt, largely, to map large quantities
of existing data onto a common language so that the data can be analyzed in
ways never dreamed of by its creators. If, for example, a web page with
weather data has RDF describing that data, a user can retrieve it as a
table, perhaps average it, plot it, deduce things from it in combination
with other information. At the other end of the scale is the weather
information portrayed by the cunning Java applet. While this might allow a
very cool user interface, it cannot be analyzed at all. The search engine
finding the page will have no idea of what the data is or what it is about.
Thus the only way to find out what a Java applet means is to set it running
in front of a person.

 I hope that is a good enough explanation of this principle. There are
millions of examples of the choice. I chose HTML not to be a programming
language because I wanted different programs to do different things with
it: present it differently, extract tables of contents, index it, and so on.
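To make the weather example in that passage concrete, here is a small Python
sketch (the readings and field names are invented): when the data is
published in a plainly declarative form such as CSV, any small program can
re-analyze it in ways the publisher never anticipated, which is exactly what
a rendering-only applet forecloses.

```python
import csv
import io

# Invented sample readings, published declaratively rather than
# locked inside an applet that only renders them.
published = """\
date,temp_c
2012-04-23,11.0
2012-04-24,13.0
2012-04-25,15.0
"""

# A consumer the publisher never dreamed of: average the temperatures.
rows = list(csv.DictReader(io.StringIO(published)))
average = sum(float(r["temp_c"]) for r in rows) / len(rows)
print(average)  # 13.0
```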

Received on Thursday, 26 April 2012 06:41:06 UTC