
RE: WebFinger compromises

From: Paul E. Jones <paulej@packetizer.com>
Date: Thu, 1 Nov 2012 00:16:19 -0400
To: <webfinger@googlegroups.com>
Cc: <public-fedsocweb@w3.org>, <apps-discuss@ietf.org>
Message-ID: <00e201cdb7e7$a5a67390$f0f35ab0$@packetizer.com>
Brad,

 

Comments in green:

 

 

The current language actually isn't a political compromise, but rather reflects my desire not to break backward-compatibility any more than we have to.  The current spec recognizes, for example, that Google's WF server serves up XML.

 

If I change it to CSV tomorrow, will the spec recognize that?

 

PEJ: The spec does not say that explicitly, but it says that /.well-known/host-meta defaults to XML. If a different format is desired (and the only one mentioned is JSON), it must be requested using the Accept header.  That’s the way web interfaces are supposed to work, after all.  So while I don’t have any expectation of a new format being introduced soon, this allows the possibility and I do think we should allow for that possibility.  Within an enterprise environment, perhaps there is a CSV format preferred and it’s selected using Accept. 
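The negotiation described above can be sketched as follows; this is a hypothetical illustration of the behavior, not spec text, and the function name and defaults are made up:

```python
# Illustrative sketch: /.well-known/host-meta defaults to the XRD (XML)
# representation, and a client that prefers JSON requests it via the
# Accept header. Media-type strings follow common practice for XRD/JSON.

def pick_media_type(accept_header):
    """Choose the representation to serve for /.well-known/host-meta."""
    if accept_header and "application/json" in accept_header:
        return "application/json"
    # No JSON preference expressed: serve the XRD default.
    return "application/xrd+xml"
```

A request with no Accept header (or one naming an unknown format) falls back to XRD, which is what keeps existing XML-only clients working.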

 

Seriously, don't regard at all what Google does today.  It can change on a moment's notice if this thing ever shows promise of stabilizing.  The only reason it doesn't support JSON today is that I got bored of this process.

 

PEJ: Google is just one implementation.  There are others and I’ve been asked to try to not break them.  The way the spec is written, what works continues to work.  More importantly, it’s trivial as hell to maintain that backward-compatibility.  One might even view it as forward-compatibility, too.  I do think we should definitely design for use of other data formats in the future, and that is built in.

 

 Clients expecting XML will still work.

 

How?? The spec says only JSON is required?

 

PEJ: I was referring to existing clients and servers.  Any new JSON-only servers will not work with XML-only clients.  That’s to be expected and that’s OK. There is no loss of existing functionality. 

 

 

 So long as any WF server wants to support both, those clients will work.

 

I might advocate for our webfinger implementation to only return XML-as-requested 25% of the time.  That would be more hilarious than the 0% as required by the spec.

 

PEJ: Now that would be ugly.  At least consistently work or fail… ;-) 

 

 

Going forward, XML is optional and JSON is mandatory.  I wanted to mandate both, but lost that argument.  (Still, supporting both is simple.  My server does both and will honor the Accept header.  It's trivial to do.)

At some point, I will publish my server code.  It's just a simple Perl script, but shows how trivial it is to implement a WF server.  It does both XML and JSON and I do wish we would continue with both.

 

Do not even talk about implementations.  Implementations are always easy.  I made that mistake in the past, trying to convince people how easy things are by showing code.

 

What is harder is winning mindshare, and overly large, schizophrenic specs don't instill confidence in would-be supporters.

 

PEJ: I’ve seen it go both ways.  Sometimes, if you just build it, people ignore it because it’s not a standard.  Sometimes, if you write the standard, then people open their eyes.  A useful demo might help, such as having Chrome respond to “acct” URIs and display a “profile” page in the browser, pulling info from the WF server and following the links.  Such highly visual demonstrations work better than code in a server or specs on paper.

 

Further, I accept the preference for JSON and only putting the requirement there.  But the fact is that any web resource can return ANY format.  This is a basic part of HTTP and the reason the Accept header exists.

 

So why don't you document the image/gif response type in the spec?  (Because it would be noise.)

 

Just as I can file my own RFC entitled "Recommendations for serving image/gif response payloads in Content-Type-negotiated WebFinger queries", so can the XRD community.

 

The WebFinger spec is only required to document the requirements, not ponies.

 

PEJ: The spec does only list the requirements (and procedures).  There’s no point documenting how to send any format other than those that actually exist in practice.  My only point is that I think we should try to ensure the protocol aligns with the intent of HTTP and supports other types should the day ever come.  It may never happen, but we should not box ourselves into a corner, either, especially when the HTTP spec explains how it should work.

 

 

So we should design the service such that we can support whatever the next hot format is.

 

I do not object to you clarifying that "WebFinger MAY return alternate response types, if requested by the client with an HTTP Accept header blah blah blah; in the absence of such a header, the default is JSON etc etc"

 

PEJ: That’s what it says for host-meta.json.  Strictly because there are some existing implementations, it says the default for host-meta is XML.  I can change that, but I feel like the guy in the middle with people on one side yelling “to hell with XML” and people on the other side saying “we already have that implemented!”  Leaving the text as it is addresses every concern, except for those who hate seeing the word XML or XRD. ;-)  Because I even uttered the word (while introducing no requirements on new servers), it is said to be complex.  If I change the text, it does not change the implementation requirements in the least if one only implements JSON.
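A small sketch of the per-endpoint defaults being described; the helper is hypothetical and only mirrors the split stated above (host-meta defaults to XRD for existing implementations, host-meta.json defaults to JSON):

```python
# Hypothetical helper mapping each well-known endpoint to its default
# representation, per the compromise described in the text above.

def default_media_type(path):
    """Default representation served for the two well-known endpoints."""
    if path == "/.well-known/host-meta.json":
        return "application/json"
    if path == "/.well-known/host-meta":
        return "application/xrd+xml"
    raise ValueError("not a host-meta endpoint: " + path)
```

A JSON-only implementation never touches the XRD branch: it simply serves host-meta.json and ignores the rest.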

 


So, my position is:
1) Let's not just kill XML because we decided we do not like it this week (killing all hope for backward-compatibility)

 

Let's kill it because it's not required.  I neither like nor dislike it.  I also neither like nor dislike JSON.  I just think it's stupid for a spec to not decide.

 

PEJ: But it is decided.  Requirement is JSON. People writing code tomorrow ONLY have to worry about JSON, both client-side and server side.  *If* people want to use XML, then the spec explains how that is done, but it is not a mandatory feature, thus it will not be universally implemented.

 

I'm entering this mailing list again because it has no deciders.  Everybody just keeps saying "yes, sure, we'll add that, too" (as far as I can tell).

 

PEJ: We’ve not added any new features in a long time.  We have moved XML to the “optional” status, we removed the “acct” URI scheme to a separate document, and next on the list (IMO) should be the “acct” link relation (moving to a separate doc).  It has only gotten simpler, in other words.

 

2) Let's have a web service that could serve XML or JSON or next-hot-thing (i.e., future-proof it)

 

Don't disagree.

 

3) Let's use HTTP the way it is supposed to be used and allow the "Accept" header to work via /.well-known/host-meta.

 

Don't disagree.

 

4) Since host-meta.json is already defined (and I would have argued against it, but it's there), let's fully embrace it.

 

That's fine.  I'm not against supporting that.

 

[snip]


> -- 1 round trip, 2 round trips. Don't really care. 2 round trips keeps
> the spec simpler and the 1st will be highly cacheable (Expires: weeks),
> so it's 1 round trip in practice, but I won't fight (too much)
> *optional* parameters in the 1st request to possibly skip the 2nd
> request.  It worries me, though.  I'd rather see that optimization added
> in a subsequent version of the spec, so all 1.0 implementations have
> then shown that they're capable of performing the base algorithm.  I
> worry that too many servers will implement the optimization and then
> lazy clients will become pervasive which only do one round trip, thus
> making the "optional" optimization now de facto required for servers.
> So I'd really rather drop that from the spec too.  Let's add it only
> later, once it's shown to be needed.  As is, clients could even fire off
> two HTTP requests in parallel to reduce latency, one for host-meta and
> one optimistically for the presumed host-meta location in cases of big
> hosts that rarely change, or expired cached host-meta documents.

We support both.  RFC 6415 defined the base for 2 round trips.  The current WF spec adds that extension to allow for one round trip.

 

Please acknowledge my argument, even if you don't agree with it.  Do you understand my description of how I fear it will become a de-facto requirement?

 

PEJ: The “resource” parameter (allowing one round trip) is mandatory in the spec currently.  I originally introduced it (at Eran Hammer-Lahav’s request) as “optional”.  The group was fairly keen on making that mandatory, pushing for the requirement even before the text was adopted as a WG item.  So, the de facto standard you fear is the standard (per the current text).  Do you not want to mandate the “resource” parameter? It has been mandatory since the draft published in May this year.
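The one-round-trip query built on the mandatory "resource" parameter can be sketched like this; the helper name is made up, and the URL shape follows the curl example later in this message:

```python
from urllib.parse import urlencode

# Illustrative sketch of a resource-specific, one-round-trip query:
# the account is wrapped in an "acct" URI and percent-encoded into the
# mandatory "resource" parameter.

def webfinger_query(host, account):
    """Build a resource-specific host-meta.json query URL."""
    return ("https://" + host + "/.well-known/host-meta.json?"
            + urlencode({"resource": "acct:" + account}))
```

Note the percent-encoding: the ":" and "@" inside the acct URI must be escaped when carried as a query-string value.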

 

 

> I will continue to fight for Google's WebFinger support, but I'm not the
> only one losing patience.

You're right there, which is why I'm serving the role of editor.

 

I thank you for that, because it's a largely thankless job.  I'm coming across as aggressive, but I really want this to work, and I view everybody on these mailing lists as friends.  I just think we all need a reality check.

 

PEJ: Aggressive?  You’re mild compared to some others. :-)  People feel passionate about different things.  I also just want this to work.  I see a lot of potential with WF for enriching social networking, enterprise applications, etc.

 

> Everybody please hurry up, simplify, then hurry up.  I'll help however I
> can.  I'm not sure whether this was helpful.

It doesn't get much simpler than this:

   curl "https://packetizer.com/.well-known/host-meta.json?resource=acct:paulej@packetizer.com"

 

 Again, implementation.  Everybody on this mailing list can write a static webserver.

 

PEJ: That one isn’t static: it’s pulling data from a live database and formatting it per the client’s request.  But to your point, many people on this list could build that server code, too.  The reason I wrote it was to demonstrate the functionality and to help those who cannot build one themselves.  I do want to ensure anyone with a domain can throw up a WF server.

 

Now we need to just move on to agreeing on some useful link relations for WF.

 

I will stay out of your way there.  I just want a simple base to build upon.

 

PEJ: Yeah, so how do we get to that thing we can build on?  Current requirements, bare bones, are:

- Servers must support JSON, may support XRD (or TLV or whatever)

- Servers must make /.well-known/host-meta and /.well-known/host-meta.json resources accessible

- Servers must support the “resource” parameter

This means the vanilla client on the Internet will query only for JSON.  Client developers have mostly said they want the simplest possible solution, which means most will send requests with the “resource” parameter.  More than one has expressed a desire to be able to cache /.well-known/host-meta to speed processing of resource-specific queries.
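A bare-bones sketch of that "vanilla" JSON-only behavior; the account data, link relation, and function names here are purely illustrative, not taken from any actual deployment:

```python
import json

# Hypothetical JSON-only handler for a resource-specific
# host-meta.json query, backed by a toy in-memory account table.

ACCOUNTS = {
    "acct:alice@example.com": {
        "subject": "acct:alice@example.com",
        "links": [{"rel": "http://webfinger.net/rel/profile-page",
                   "href": "https://example.com/people/alice"}],
    },
}

def handle_host_meta_json(resource):
    """Answer a query carrying the 'resource' parameter: (status, body)."""
    entry = ACCOUNTS.get(resource)
    if entry is None:
        return 404, ""
    return 200, json.dumps(entry)
```

A client that only speaks JSON and always sends "resource" never needs anything beyond a handler of roughly this shape.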

 

Personally, I think we have the solution in hand.  If I change one thing, there is somebody who will not be happy.  As compromises go, I think we’ve done pretty well.  I say that because I know one can build both a client and a server implementation quite easily, with only one format to consider.

 

Paul

 
Received on Thursday, 1 November 2012 04:16:43 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Thursday, 1 November 2012 04:16:44 GMT