Re: Final CFP: In-Use Track ISWC 2013

I'm now thoroughly confused by this conversation.

Talking about LaTeX...

On 2013 May 2, at 17:02, phillip.lord@newcastle.ac.uk (Phillip Lord) wrote:

> Sebastian Hellmann <hellmann@informatik.uni-leipzig.de> writes:
> 
>> Plus it is widely used and quite good for PDF typesetting.
> 
> And sucks on the web, which is a shame. If I could get good HTML out of
> it, I would be a happy man.

_What_ sucks on the web?  Certainly not PDF.

There are hassles with PDFs, yes.  In particular, (i) embedding metadata is underdeveloped (XMP is undertooled), and (ii) deep-linking into PDFs could be better, as has been discussed.  HTML is naturally better at both of these, but neither is a real problem.  For (i), between DOIs and the metadata on journal webpages, most of the important information is available without major difficulty, and various organisations (eg ORCID) are labouring away at making a very messy problem better.  Solving (ii) would be nice (and perhaps Utopiadocs is the way to do it), but as far as I can see it offers no major advantage over 'See sect. xxx'.  Most text is, after all, consumed by humans, and articles tend not to be tens of pages long.
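(For concreteness on (ii): such deep-linking as exists today is the PDF 'open parameters' fragment syntax, which some viewers honour, eg

    http://example.org/paper.pdf#page=7
    http://example.org/paper.pdf#nameddest=sec3

-- the URLs here are of course made up for illustration, and support varies from viewer to viewer.)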

Thus HTML can do some unimportant things better than PDF, but what it can't do, and what _is_ important, is make things readable.  The visual appearance -- that is, the typesetting -- of rendered HTML is almost universally bad, from the point of view of reading extended pieces.  I haven't (I admit) yet experimented with reading extended text on a tablet, but I'd be surprised if that made a major difference.

Also, HTML is not the same as linked data; there's no 'dog food' here for us to eat.

Is it possible that folk here are conflating 'LaTeX' with the quite startlingly ugly ACM style?  That's almost as unreadable as HTML.

Best wishes,

Norman


-- 
Norman Gray  :  http://nxg.me.uk
SUPA School of Physics and Astronomy, University of Glasgow, UK
