[whatwg] [rest-discuss] HTML5 and RESTful HTTP in browsers

mike at mykanjo.co.uk writes:

> "It's also the most common case. Supposing I opened the above URL in a
> browser, and it gave me the HTML version; how would I even know that
> the PDF version exists?"
> 
> Hypertext.

OK.

> "Except that in practice on receiving a URL like the above, nearly all
> users will try it in a web browser; they are unlikely to put it into
> their PDF viewer, in the hope that a PDF version of the report will
> happen to be available."
> 
> I've addressed this subsequently:
> 
> 'here's the URL: example.com/report you can open this with adobe,
> excel, powerpoint, word'

Would the sender of that link necessarily know all the formats the URL
provides?  Anyway, that's an unrealistic amount of typing -- typically
round here people just copy and paste a URL into an instant message and
send it without any surrounding text.

Whereas without any other information, people will generally open URLs
in a web browser.  So it'd be faster just to send the URL of the page
which contains hypertext links to all the formats; at which point we no
longer care whether those links specify the format in the URL or
elsewhere.
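
Something like this, say, on a report index page (mark-up invented here
purely for the sake of example):

    <ul>
      <li><a href="/report.html">Report (HTML)</a></li>
      <li><a href="/report.pdf" type="application/pdf">Report (PDF)</a></li>
    </ul>

Whether those hrefs encode the format in the path, in a query string, or
somewhere else entirely doesn't matter to the person clicking them.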

> "Suppose my browser has a PDF plug-in so can render either the HTML or
> PDF versions, it's harder to bookmark a particular version because the
> URL is no longer sufficient to identify precisely what I was viewing.
> Browsers could update the way bookmarks work to deal with this, but
> any external (such as web-based) bookmarking tools would also need to
> change."
> 
> I've also already addressed this in the original post; I was quite
> clear that if browsers don't store the application state when you make
> a bookmark (headers, URI, HTTP method), then this is an argument for
> continuing to use URI conneg *as well* as HTTP conneg; rather than
> instead.

What is the point of doing it in HTTP if it's being done in HTML anyway?
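
To make the duplication concrete (URL and headers invented for the sake
of example), the two mechanisms look like this on the wire:

    Format in the URL:

        GET /report.pdf HTTP/1.1
        Host: example.com

    Format negotiated in HTTP:

        GET /report HTTP/1.1
        Host: example.com
        Accept: application/pdf

If the page's mark-up already distinguishes the versions, the second
mechanism doesn't gain the user anything the first didn't.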

> Until the browsers fix this. ;)

Not just browsers, as I pointed out.  Also many databases which have
tables with URL fields would need extra fields adding.
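
Roughly, instead of storing just

    http://example.com/report

a bookmark -- or a URL column in a database -- would have to store
something like

    GET http://example.com/report
    Accept: application/pdf

that is, the method and the relevant request headers as well as the URI
(details invented here for illustration).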

> Browsers should really be bookmarking the whole request/state; the
> only reason they don't do this is because that's not the way it's done
> now. The reason for that is lack of incentive due to inadequate
> tooling; it's not a fair justification to say 'no one does it at the
> moment because it's not necessary'; that's disingenuous.

True.  But if the current way of doing it is good enough, there's no
incentive to change.  There's little point in making browsers implement
extra functionality and inventing new mark-up and evangelizing it, only
to end up with the same functionality we started with; there has to be
more.  And the greater the effort involved, the greater the benefit has
to be to make it worthwhile.

> "Or suppose the HTML version links to the PDF version. I wish to
> download the PDF on a remote server, and happen to have an SSH session
> open to it. So I right-click on the link in the HTML version I'm
> looking at, choose 'Copy Link Location' from the menu, and in the
> remote shell type wget then paste in the copied link. If the link
> explicitly has ?type=PDF in the URL, I get what I want; if the format
> is specified out of the URL then I've just downloaded the wrong
> thing."
> 
> Here you go:
> 
> wget example.com/report --header="Accept: application/pdf"

Typing that would require my knowing that the URL for the PDF also
serves other formats.

But, moreover, it requires typing.  Currently the URL can be pasted in:
it's exactly the text that the browser copied to the clipboard.
There's no way that my browser's 'Copy Link URL' function is going to
put on the clipboard the exact syntax of wget command-line options.
Having to type that lot in massively increases the effort in this
task -- even if I can type the relevant media type in off the top of my
head, without needing to look it up.

Or what about if I wanted to mail somebody pointing out a discrepancy
between two versions of the report, and wished to link to both of them?
That's tricky if they have the same URL.  Possibly I could do it like
you have with the wget command-line above, but that requires my knowing
which browsers my audience use and the precise syntax for them.

Smylers

Received on Monday, 17 November 2008 07:33:27 UTC