
Re: Totally non-conformant JSON hack

From: Zearin <zearin@gonk.net>
Date: Mon, 10 Oct 2011 10:40:07 -0400
Cc: XProc Dev <xproc-dev@w3.org>
Message-Id: <C8C9E1A6-A140-403D-865E-7C981A574A77@gonk.net>
To: Norman Walsh <ndw@nwalsh.com>

I’m in a rant-ish mood this morning.  I’m also irritated that JSON is now in fashion, and that XML is increasingly viewed as an inferior choice for many things.  

Please read this post through a certified pair of rant-reduction goggles™.  ☺ 

On Oct 10, 2011, at 9:11 AM, Norman Walsh wrote:
> Fscking JSON. Sigh.
> As more and more APIs move from XML to JSON, I think we need to make
> sure that the XML technology stack evolves so that we can easily
> interact with those services.


JSON is in fashion.  And it’s growing like fertilized kudzu.  

I have nothing against JSON.  For certain things, it’s probably a better choice than XML.  What annoys me is that, for other things, JSON’s popularity actually hurts XML’s progress!  To developers who aren’t familiar with XML’s more powerful tools, JSON appears to be the superior choice.  From that perspective there is no obvious reason to ever look back to XML for anything.  

And all that popularity means a faster-growing, faster-building community for JSON.  Meanwhile, the XML community appears dedicated but sluggish, unable to keep pace with JSON’s agile responses to the real-world needs of today’s Web.

I had hoped EXPath and CXAN would help make XML popular again—but they aren’t really attracting any new people to the community.  Only people already invested in XML are participating in EXPath.  Compared with the progress of JSON and JSON-related projects, EXPath and CXAN are evolving so slowly that they seem to be on life support.  

Note to Florent & other EXPath members: 
I love the EXPath and CXAN work that exists!  I’m not saying I dislike these projects—I am very glad they are around!  I’m saying the community’s growth, development activity, and overall progress aren’t enough to make XML attractive to newcomers.  Right now we are a dedicated, small community…but any community that does not attract new members will eventually die.  And I don’t want that to happen to ours.  That is why I am mentioning all this.  If I hurt anybody’s feelings, that wasn’t my intention, and I apologize.

> The medium/long-term solution to this problem, I think, is to amend
> the XML data model (as the XSLT WG is doing for the XSLT 3.0 (and the
> XQuery WG *is not* doing for XQuery 3.0, sigh.))


Rationale?  (Or should I say “irrationale”?)


How the “fsck” (thanks for that, Norm ☺) is the W3C supposed to “lead the Web to its full potential” with policies like this?!

> so that JSON data can
> be encoded directly. (The XSLT WG is adding the notion of "maps" to
> the data model.)



To quote a comment from your blog post “Transclusion in DocBook”: “What about an XLink that doesn’t suck?”  I’ve always thought XLink was based on solid ideas.  I believe it failed because those ideas were a bit ahead of their time (they are more likely to be used now than when XLink was specified), because of a lack of useful real-world examples, and because of some pretty arcane attribute names in the XLink grammar.

> In the short term, I've added an experimental "transparent-json"
> extension to XML Calabash.

Sounds good to me.

> If transparent-json is true:
> 1. Any application/json data returned from p:http-request is
> automatically converted to XML. (Using the same (conformant)
> conversion that I introduced in p:unescape-markup[*] a while back.)

Sounds good.
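To make sure I follow what that conversion involves, here’s a minimal sketch of a JSON-to-XML mapping in Python.  To be clear: the element and attribute names below (json, pair, item, type, name) are my own illustrative guesses, not necessarily the exact vocabulary XML Calabash’s (conformant) conversion emits — only the c: namespace URI is taken from XProc itself.

```python
import json
import xml.etree.ElementTree as ET

# c: is the XProc step namespace; the element/attribute names used
# below are illustrative guesses, not Calabash's actual vocabulary.
C_NS = "http://www.w3.org/ns/xproc-step"

def json_to_xml(value, local="json"):
    """Map a parsed JSON value onto a typed element tree."""
    elem = ET.Element(f"{{{C_NS}}}{local}")
    if isinstance(value, dict):
        elem.set("type", "object")
        for key, child in value.items():
            pair = json_to_xml(child, "pair")
            pair.set("name", key)
            elem.append(pair)
    elif isinstance(value, list):
        elem.set("type", "array")
        for child in value:
            elem.append(json_to_xml(child, "item"))
    elif isinstance(value, bool):  # test bool before int: bool is an int subtype
        elem.set("type", "boolean")
        elem.text = "true" if value else "false"
    elif value is None:
        elem.set("type", "null")
    elif isinstance(value, (int, float)):
        elem.set("type", "number")
        elem.text = str(value)
    else:
        elem.set("type", "string")
        elem.text = value
    return elem

doc = json_to_xml(json.loads('{"name": "norm", "tags": ["xml", "json"]}'))
print(ET.tostring(doc).decode())
```

The point being: the mapping has to carry the JSON types somewhere (here, a type attribute), or the round trip back to JSON text is lossy.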

> 2. Any data sent by p:http-request that has a content-type of
> application/json is encoded as JSON text before transmission.

Sounds very good! ☺ 
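And going the other way should, ideally, be a clean inverse.  A sketch of the reverse walk, again using my own made-up type convention from above rather than whatever Calabash actually does:

```python
import json
import xml.etree.ElementTree as ET

def xml_to_json_text(elem):
    """Serialize a typed element tree (illustrative convention: a type
    attribute of object/array/number/boolean/null/string) as JSON text."""
    return json.dumps(_value(elem))

def _value(elem):
    t = elem.get("type", "string")
    if t == "object":
        return {pair.get("name"): _value(pair) for pair in elem}
    if t == "array":
        return [_value(item) for item in elem]
    if t == "number":
        text = elem.text or "0"
        return float(text) if "." in text else int(text)
    if t == "boolean":
        return elem.text == "true"
    if t == "null":
        return None
    return elem.text or ""

tree = ET.fromstring(
    '<json type="object">'
    '<pair name="name" type="string">norm</pair>'
    '<pair name="count" type="number">3</pair>'
    '</json>'
)
print(xml_to_json_text(tree))
```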

> 3. If application/json data (or a document with a c:json root element)
> is sent to p:store, it's written out as JSON text.

Sounds good.

> 4. If a p:document element fails to load XML, XML Calabash tries to
> parse the data as JSON and returns an XML representation of that if it
> succeeds. (This is the worst part of the hack, but it's hard to tell
> what the MIME type of a random file is.)
> I have very mixed feelings about this sort of hack. As a standards
> guy, it's clearly a violation of the spec and a source of
> interoperability failures. As a user who's bloody frustrated by the
> state of web APIs, it just quietly makes my life easier and better.

Not crazy about this idea.

Why can’t the processor just tell whether it’s JSON from the file extension?  I’ve never seen a JSON document use an XML file extension.
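For what it’s worth, extension-based sniffing is at least cheap to try — Python’s stdlib does it in one call.  (I realize p:document is handed a URI, which may have no useful extension at all, so this can only ever be a first guess.)

```python
import mimetypes

# Guess a content type from the filename alone; cheap, but only as
# good as the extension it's handed.
for name in ["data.json", "feed.xml", "pipeline.xpl", "README"]:
    mime, _encoding = mimetypes.guess_type(name)
    print(name, "->", mime)
```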

> Comments, suggestions, flames, demands for my immediate resignation,
> all most humbly accepted.
>                                        Be seeing you,
>                                          norm
> [*] I moved the <json> element and its children into the c: namespace.
> Sorry if that trips you up. My bad.

Does that mean it will be like this?


’Cause I’d prefer it mirrored the XML results as closely as possible.  (That helps keep it “transparent” IMO.)


Count to ten.  Breathe…

We can make it through this!  ☺ 

Received on Monday, 10 October 2011 14:40:23 UTC
