[whatwg] Feedback on a variety of elements

On Thu, 6 Sep 2012, Pierre Dubois wrote:
> 
> I developed a JavaScript table parser based on my research. The parser 
> is able to understand complex relationships in a data table. The 
> relationship association is based on the current algorithm and takes 
> into consideration how the header cell (th) is structured, positioned 
> and spanned. All of this is combined with how the column grouping 
> (colgroup) and the row grouping (thead, tbody) are structured.
> 
> My research was based on usability and the common use of tables. My 
> goal was to find how the HTML markup can be used to represent a complex 
> table based on how a person would understand the complex table by 
> viewing it in a user agent and on paper.
> 
> My research led me to extend the current definition of the table 
> elements (table, caption, colgroup, col, thead, tbody, tfoot, tr, th, 
> td), and I tried to understand the table without discriminating between 
> rows and columns.
>
> http://wet-boew.github.com/wet-boew/demos/tableparser/ExtendedDefinition.html
>
> [...]

On Fri, 28 Sep 2012, Pierre Dubois wrote:
> 
> Proposal: Remove the headers attribute on the th element and td element
> 
> The information, structure, and relationships conveyed through the 
> presentation of a table can be programmatically determined with the 
> table usability algorithm provided below.
> 
> The HTML Table Validator shows that by adding the id/headers attributes 
> to the analyzed table, the relationships are programmatically determined. 
> (http://wet-boew.github.com/wet-boew/demos/tableparser/validator-htmltable.html)
>
> Proposal: Remove the scope attribute on the th element
> 
> The information, structure, and relationships conveyed through the 
> presentation of a table can be programmatically determined with the 
> table usability algorithm provided below.
> 
> Currently, as per my understanding, the scope set to "rowgroup" or 
> "colgroup" needs to be anchored in a rowgroup or a colgroup. So the 
> scope attributes are just repeating the information about the grouping 
> element and are not providing any extra *useful* information about the 
> table concerned.

I don't think this works for all tables. For example, the first example 
in the <th> element's section of the spec does not get handled correctly by 
your algorithm -- it treats the ID column as important, instead of the 
second column. Without the scope="" attributes, I don't think that table 
would make much sense. Similarly, the "Characteristics with positive and 
negative sides" example used a number of times in the HTML spec works 
better with a few headers="" attributes to define the mappings than 
without, as far as I can tell (though your algorithm does make a valiant 
attempt, I will grant you).
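
For concreteness, here is a minimal sketch (not the spec's actual example; 
the data is invented) of the kind of table I mean, where the meaningful 
row headers sit in the second column and scope="" is what makes that 
explicit:

   <table>
    <thead>
     <tr> <th> ID <th scope=col> Measurement <th scope=col> Average
    <tbody>
     <tr> <td> 93 <th scope=row> Legs <td> 3.5
     <tr> <td> 10 <th scope=row> Tails <td> 1
   </table>

Without scope=row on the second-column cells, an algorithm scanning from 
the left could just as easily treat the ID column as the row headers.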


> Proposal: Table Usability API

This is a very elaborate and large API. What are the use cases against 
which to evaluate it? i.e. what problem does it solve?


> Proposal: Table Usability Parser Algorithm
> https://github.com/duboisp/Table-Usability-Concept/tree/master/Algorithm

Can you elaborate on how this differs from the algorithm in the HTML spec, 
and in what ways it is better? (e.g. examples that your algorithm handles 
but that the HTML spec doesn't)

I'm all in favour of improving the spec, I just don't have a good frame of 
reference here by which to evaluate the proposal.


On Fri, 19 Oct 2012, Pierre Dubois wrote:
> 
> Sometimes the subsequent row grouping under the same data level and the 
> subsequent column grouping under the same data level don't necessarily 
> mean a summary group, but still a data group.

A summary group is just a group with a heading saying it's a summary 
group, no? I don't really understand what is special about a summary 
group. How should software treat it differently?


> To fix that, the solution would be to have a new attribute set on the 
> table element to know if the table contains summary groups.

I would be very surprised if such an attribute was used correctly a useful 
fraction of the time.


On Mon, 1 Oct 2012, Nicholas Shanks wrote:
>
> http://www.w3.org/TR/html-markup/th.html#th.attrs.scope Says nothing 
> about what a UA should do by default, nor when scope can be omitted due 
> to such defaults.

The HTML spec does:

   http://www.whatwg.org/specs/web-apps/current-work/#attr-th-scope

In particular the "auto" value (the default) has a lot of the behaviour 
you describe. Does it do everything you want?


On Mon, 1 Oct 2012, Nicholas Shanks wrote:
> 
> Are there use cases where the value of the scope attribute matters other 
> than as an intermediary for computing the headers applicable to each 
> cell?

Not to my knowledge...


> If not, are there use cases where either the data cell headers have not 
> yet been computed or they are unavailable (perhaps while walking the DOM 
> tree from JavaScript?) where access to the scope attribute would be 
> helpful?

I don't think there's any case where you have access to the scope 
attribute but not the cells, yet you could work out what the scope 
attribute's value should be to reflect the default behaviour.


On Fri, 5 Oct 2012, Mathew Marquis wrote:
> > 
> > Introducing new image formats is so rare (once every 20 years or so, 
> > so far) that I don't think we should optimise for it, certainly not at 
> > such a high cost. There are existing solutions (e.g. <object>) for 
> > handling that kind of thing.
> 
> Could you expand a bit more on the “cost” of an approach that might 
> account for this?

It's the same cost as every new feature:

   http://wiki.whatwg.org/wiki/FAQ#Where.27s_the_harm_in_adding.E2.80.94


> >>> Manipulating <picture> from script would be a huge pain -- you'd 
> >>> have to be manipulating lots of elements and attributes.
> >> 
> >> Well, is manipulating <audio> or <video> from script a huge pain?
> > 
> > Yes.
> 
> Certainly not more so than manipulating strings, having done both 
> frequently myself.

I don't think manipulating strings is the harder of the two, but I 
wouldn't suggest anyone manipulate the srcset attribute either. If there's 
a use case for manipulating it, we should provide a dedicated API.


> >> I actually have one use case that would benefit from having separate 
> >> elements instead of an attribute – replacing <source> elements 
> >> with links to their content for accessibility purposes. I did 
> >> something like this when I hacked elinks to (badly) support HTML5 
> >> media elements 
> >> <http://blog.dieweltistgarnichtso.net/html5-media-elements-in-elinks>.
> >> 
> >> Consider that any attribute microsyntax would introduce a burden on 
> >> programmatic DOM manipulation, as the attribute would have to be 
> >> parsed separately. „Do X for every <source> child 
> >> element“ is cognitively cheap in comparison to maintaining a 
> >> mental model of the attribute in question – different from 
> >> other mental models used in HTML – in your working memory.
> > 
> > There are plenty of attributes with more complicated syntaxes, e.g. 
> > all the event handler attributes (whose syntax is JavaScript), or 
> > style="" (whose syntax is CSS). See also <meta content> for many of 
> > the pragmas, the <area coords> attribute, <ins datetime>, media="" 
> > attributes, etc.
> 
> Saying “there are worse things” doesn’t make much of a case for a 
> worse thing. Better that we focus on finding the best possible approach 
> here and avoid our previous mistakes.

My point was just that this is a more common approach in the HTML language 
than is multiple elements for this kind of thing. I agree that 
manipulating strings isn't particularly nice, but at best that just means 
that the multi-element model is no worse for manipulation; it's still no 
better for that mechanism either, and still has the numerous other 
problems that have been brought up in the past (error handling, mutation 
handling, shadow tree handling, handling unexpected non-element nodes, 
parsing and event-loop coordination, etc).

http://lists.w3.org/Archives/Public/public-whatwg-archive/2012Aug/0070.html


> >> This reminds me that ATOM <enclosures> have a byte length. Surfing 
> >> via mobile, I certainly know that I would like images to show if they 
> >> can be downloaded in a reasonable time – but I want to skip 
> >> 5MB photos.
> > 
> > Given that newer mobile networks are actually faster than the 
> > networking a lot of people in the US have to their house, I don't know 
> > how much of a lifetime such a feature would have.
> 
> This seems incredibly specific to privileged browsing contexts, and 
> hardly a standpoint that accounts for the millions of users in 
> developing countries accessing the Internet from mobile devices alone. 
> Users with limited bandwidth and a per-kilobyte economic cost for access 
> to a resource that—by very design—is meant to be open and accessible 
> to users of any context. Tim Berners-Lee can speak to that far better 
> than I ever could.

Tim's welcome to participate in the discussion if he likes. To your point, 
though, I agree that there are countries where bandwidth is at a premium. 
If browsers want to implement something along those lines and authors are 
going to use it, and if we can come up with a solution to the problem that 
actually works technically, then I'm all for it. I've gone into more depth 
on the problem with this before.

http://lists.w3.org/Archives/Public/public-whatwg-archive/2012May/0247.html


> > It's not at all clear to me that the <picture> proposals are more 
> > readable. It's certainly not an enormous enough difference to be 
> > relevant.
> 
> Perhaps, given the lack of clarity on this point, we might consult the 
> opinion of authors.

One thing I learnt in the usability study we did for microdata is that 
what many authors think is simpler doesn't correlate to what they make 
fewer mistakes with, so unfortunately asking authors isn't necessarily a 
good way to answer this question (I also learnt that what I and other 
language designers think is a good idea sometimes isn't either). If we 
could, a usability study would be the best way to get this answer.


> > I agree. Who manipulates <img>, though? Surely you just create the 
> > image with the image you need, and use it. No manipulation involved. 
> > For srcset="", it's at most a concatenation of a few strings. When 
> > would you _parse_ it?
> 
> Should we cite examples of sites that grab/manipulate the value of an 
> img tag’s src?

If it's more than just toggling between images, yes.

I would definitely like to know when srcset="" is going to be parsed, 
because without knowing what the use cases are, it's impossible to 
evaluate the proposals to solve them.
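
Generating the value, by contrast, really is just a couple of concatenated 
strings. Roughly (a sketch with invented file names, using the draft 
"Nx"/"Nw" descriptors discussed in this thread):

   <script>
     var base = 'photo';
     var img = document.createElement('img');
     img.src = base + '.jpg';   // 1x fallback for legacy UAs
     img.setAttribute('srcset',
         base + '-hd.jpg 2x, ' + base + '-small.jpg 1x 480w');
     document.body.appendChild(img);
   </script>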


> >> It is possible to address this by repeating the same image at a 
> >> larger breakpoint, like:
> >> 
> >> <img srcset="800.jpg 1x 1599w, 1600.jpg 2x 1599w, 1600.jpg 1x">
> >> 
> >> However, this means you're duplicating data, and have a chance of 
> >> failing to update all of the urls when you update one.  It also 
> >> becomes more hostile as future screens arrive with higher 
> >> resolutions. For example, if 3x screens showed up, one would have to 
> >> write the following to serve things in the most ideal manner:
> >> 
> >> <img srcset="800.jpg 1x 1599w, 1600.jpg 2x 1599w, 2400.jpg 3x 1599w, 
> >> 1600.jpg 1x 2399w, 2400.jpg 1.5x 2399w, 2400.jpg 1x">
> >> 
> >> At this point it's just silly, and very error-prone.
> > 
> > I agree, when there's 3x displays, this could get to the point where 
> > we need to solve it. :-)
> > 
> > With the current displays, it's just not that big a deal, IMHO.
> 
> Perhaps we should skate to where the puck will be, rather than where it 
> is now.

If we knew where the puck was going to be, I'd be all for that. The risk, 
however, is making elaborate solutions that have high costs (see the 
earlier link) but that never pay off, because the puck suddenly veered in 
a different direction than what we guessed.


On Thu, 11 Oct 2012, Markus Ernst wrote:
> 
> My point is that any device-specific notation, such as "2x", forces the 
> author to make decisions that the browser should actually make. The 
> author does not know if in a few years the image will be viewed with 
> 1.5x or 3x or 7x or whatever devices.

No, but the author does know what pixel density the image has.


> This is why I'd humbly suggest to put information on the image in 
> @srcset rather than info on the device and media. Such as:
> 
> srcset="low.jpg 200w, hi.jpg 400w, huge.jpg 800w"
> 
> Where "200w" is the actual image width and not the viewport width. Like 
> that every browser can decide which source to load based on the display, 
> and available bandwidth or user setting or whatever.

The problem is that the image's width doesn't help the user agent at all. 
How is the user agent supposed to know which image looks best on a 600 
pixel wide viewport? The image could be intended to be a small icon inline 
in the text, or a sidebar full-bleed image, or a photograph in the main 
flow of the text. I just don't see what the UA can do here.


On Sat, 13 Oct 2012, Fred Andrews wrote:
> 
> This does seem to be an important point.  Would the following be a 
> correct understanding of your point: if there is a range of images, 
> each with a different declared size, and the CSS pixel size of the image 
> is not constrained, then the browser must use the image pixel size to 
> determine the CSS pixel size, and without knowing the density this 
> cannot be done uniquely?

I don't really see how this could work.

> Perhaps the 1x density image could be placed first in the list, and then 
> the densities would all be defined.

That assumes the images are the same width, which doesn't handle the "art" 
use case, where the author wants a cropped image on a cell phone screen 
but a wide shot on a tablet.
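
To make the "art" case concrete, a sketch using the draft descriptors 
quoted earlier in this thread (file names invented) might be:

   <img src="wide.jpg" srcset="crop.jpg 1x 480w, wide.jpg 1x" alt="">

Here crop.jpg and wide.jpg differ in content, not just in scale, so no 
ordering convention can recover a meaningful density relationship between 
them.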


On Fri, 19 Oct 2012, Fred Andrews wrote:
> 
> If it is really necessary to support this case then perhaps both the 
> image width and the native pixel breakpoints could be specified in 
> the srcset.
> 
> Then srcset="low.jpg 10w 20w, hi.jpg 20w 40w, huge.jpg 30w" would mean:
> 
> low.jpg is 10 pixels wide and use it if the native pixel width of the 
> image box is less than or equal to 20,
> 
> hi.jpg is 20 pixels wide and use it if the native pixel width of the 
> image box is less than or equal to 40,
> 
> huge.jpg is 30 pixels wide and use it if the native pixel width of the 
> image box is greater than 40 pixels.
> 
> The default break points could be the image sizes, and would typically 
> not be needed.
> 
> The first image could be the 1x density image, allowing the browser to 
> determine the image box size if not otherwise specified and this could 
> be done before loading the image.
> 
> This approach may be more natural for a fluid design.

Given how people are complaining that srcset="" is complicated as it is, 
making it even worse doesn't seem like a winner to me. :-)


On Fri, 2 Nov 2012, Eric Portis wrote:
> On Tue May 15 00:28:54 PDT 2012, Ian Hickson wrote:
> > 
> > In practice, the only information regarding dimensions that can be 
> > usefully leveraged here is the viewport dimensions. This isn't the end 
> > of the world, though -- there's often going to be a direct correlation 
> > between the dimensions of the viewport and the dimensions of the 
> > images.
> 
> This relationship will be direct; however, it will not often be simple, 
> and it requires authors to bake information about their layout into 
> their srcset declarations, a potentially complex and error-prone process 
> which results in fragile markup.

Agreed, but since it's the only information we have, it doesn't have to be 
ideal -- all we need is that it be possible.


> For stretchy images the only two variables that really matter (excepting 
> bandwidth concerns) are 1) the device pixels the image is slotting into 
> on the layout and 2) the resolution of the image files themselves. Given 
> these, load the smallest image that is bigger than the device-pixel 
> dimensions of its box, or, failing that, the biggest file.

Actually all you need is the pixel density, as far as I can tell. You 
don't need the dimensions at all, if you're not worried about the "art" 
case where the image content itself is changed.


> If the browser can handle figuring out the layout

Browser vendors have indicated that they cannot do this before they have 
to fetch the image.


> I realize that requiring browsers to figure out what size the image will 
> end up rendering at on the layout before deciding which resource to load 
> will break image pre-fetchers as they currently operate. And that 
> pre-fetching is crucial to performance and therefore users. But it seems 
> to me pre-fetchers have a lot more headroom to get smarter than authors 
> do, and that we should strive to keep markup for content images as free 
> from presentational concerns as possible.

It's not a matter of "smarter". The browser parses the HTML file, gets the 
links to the CSS file, kicks off those loads, almost immediately comes 
across the <img> elements, and has to kick off those loads then too -- 
before the CSS file has returned.


> This pattern isn't mutually exclusive with the current srcset spec. What 
> I'm proposing is that if authors want simpler, more robust, 
> non-presentational markup, they should be able to opt into it and accept 
> the performance penalties that result (hopefully only for a while, while 
> browsers and pre-fetchers adapt).

If authors just want to do the pixel density case, they can just specify 
the 2x image in srcset="" and the 1x image in src="" and they're done.
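
That is, roughly (file names invented):

   <img src="photo.jpg" srcset="photo-2x.jpg 2x" alt="A photo">

The 1x resource goes in src="" so that legacy UAs still get an image, the 
2x resource goes in srcset="", and nothing further is needed for the plain 
density case.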


On Tue, 6 Nov 2012, Pierre Dubois wrote:
> 
> Use case: Draw a graphic based on a data table
> * Like a pie chart, based on a sub-set of data contained in a data table.

This is an interesting use case. Do any sites actually try to do this 
today?

I tried writing an example to do this, and it's not clear to me that the 
API is particularly hard to use. Somewhat verbose, granted, but it only 
took a few lines of code, most of which is spent in canvas logic and in 
the CSS styles to make the table presentable:

   http://damowmow.com/playground/demos/tables/002.html

That's an admittedly simple table; what kinds of tables are people 
generating pie charts out of? Are they more complex? Do you have any 
examples we could study?
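
For anyone who doesn't want to load the demo, the gist of it is something 
like the following sketch (not the demo's exact code; the element IDs and 
table shape are invented for illustration): read label/value pairs out of 
a simple table and paint the slices on a canvas. The table-reading part is 
a handful of lines; the rest is canvas work.

   <script>
     // Assumes a table with id="data" whose body rows each have a row
     // heading cell followed by a numeric cell, and a 200x200 canvas
     // with id="chart".
     var rows = document.getElementById('data').tBodies[0].rows;
     var ctx = document.getElementById('chart').getContext('2d');
     var labels = [], values = [], total = 0, i;
     for (i = 0; i < rows.length; i += 1) {
       labels.push(rows[i].cells[0].textContent); // heading: first cell
       values.push(parseFloat(rows[i].cells[1].textContent));
       total += values[i];
     }
     var angle = -Math.PI / 2; // start at twelve o'clock
     for (i = 0; i < values.length; i += 1) {
       var slice = values[i] / total * 2 * Math.PI;
       ctx.beginPath();
       ctx.moveTo(100, 100);
       ctx.arc(100, 100, 90, angle, angle + slice);
       ctx.closePath();
       ctx.fillStyle = 'hsl(' + (i * 360 / values.length) + ', 60%, 55%)';
       ctx.fill();
       angle += slice;
     }
   </script>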


> An issue can arise when the header cell covers and/or represents more 
> than one group (e.g. multiple tbody elements from a column perspective 
> and multiple colgroup elements from a row perspective).

Certainly that does make things more complex and the current API doesn't 
handle spanning cells well if you want to select a cell by grid position.


> Use case: Draw a graphic based on a data table

Can you be more elaborate? Examples of pages trying to do this, examples 
of tables that need to be charted client-side, etc, would go a long way 
towards helping flesh out the use case.


> Use case: Draw a pie chart based on a sub-set of data contained in a 
> data table and retrieve heading cell content associated to the data 
> cells.

In the example above, getting the heading cell is trivial (it's the first 
cell of the row/column). I didn't draw labels on the slices, but I'm 
pretty sure that drawing those labels would involve far more canvas code 
than table DOM code, currently. If this is a use case we want to address, 
I think it suggests that either we're already handling it fine, or we 
should work on making canvas easier before working on the table API.
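
To illustrate: if you did want labels on the slices, the table side is 
still just labels[i] from the sketch above, and the work is in the canvas 
positioning. A fragment that would sit inside the drawing loop, before 
angle is advanced:

   var mid = angle + slice / 2;            // middle angle of this slice
   ctx.fillStyle = 'black';
   ctx.fillText(labels[i],
                100 + Math.cos(mid) * 60,
                100 + Math.sin(mid) * 60);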


On Wed, 29 Aug 2012, Steve Faulkner wrote:
> > 
> > > Can you provide an example of how using a redundant role value can 
> > > lead to conflicts?
> >
> > Sure. Suppose someone writes:
> >
> >    <input type=submit name="submit" value="Submit My Form!" role=button>
> >
> > ...and then someone else copies and pastes it, and changes the type 
> > and name and value, but doesn't know what "role" is:
> >
> >    <input type=password name="password" value="" role=button>
> 
> That has nothing to do with it being redundant; it's to do with people 
> copying and pasting code. The same issue would occur for many other 
> attributes.

It happens when there are redundant markup features that are copied and 
pasted and then only one half is changed, making them inconsistent.


> That's why we have conformance checkers to pick up such issues where 
> they cause harm.

Conformance checkers help authors who use them catch mistakes they made, 
sure. But that doesn't give us license to make it more likely that authors 
will make mistakes.


> If the role and the input type do not match, the role is no longer 
> redundant, so you did not answer the question.

I'm sorry you feel the answer is unsatisfactory. However, it is the reason 
that allowing redundancy is bad.


> >> > for example:
> >> >
> >> > <nav role="navigation">
> >> >
> >> > Under what circumstances can this example lead to 'conflicting 
> >> > information'?
> >
> > User copies-and-pastes this:
> >
> >    <nav class="fx-2" data-rollup="2s streamB"
> >         onclick="activateRollup(this)" role=navigation>
> >
> > ...and tweaks it for their sidebar, getting:
> >
> >    <aside class="fx-3" data-rollup="3s streamC"
> >           onclick="activateRollup(this)" role=navigation>
> 
> Again, you have changed the element, so it is no longer redundant.

Right, that's the whole point. Redundancy in the original markup leads to 
errors in the post-copy-and-paste markup.


> On Sun, 10 Jun 2012, Steve Faulkner wrote:
> >> >
> >> > You don't clearly differentiate between roles, properties and 
> >> > states; there are quite a few states and properties NOT provided in 
> >> > HTML5 that may have use cases for adding to an input element, for 
> >> > example aria-haspopup, aria-labelledby, aria-describedby, 
> >> > aria-controls, etc.
> >
> > Could you give an example of any of these in valid use?
> 
> the following input (gmail search box) uses aria-haspopup=true
> 
> <input type="text" value="" autocomplete="off" name="q" class="gbqfif"
> id="gbqfq" *aria-haspopup="true"* style="border: medium none; padding: 0px;
> margin: 0px; height: auto; width: 100%; background:
> url(&quot;data:image/gif;base64,R0lGODlhAQABAID/AMDAwAAAACH5BAEAAAAALAAAAAABAAEAAAICRAEAOw%3D%3D&quot;)
> repeat scroll 0% 0% transparent; position: absolute; z-index: 6; left:
> 0px;" dir="ltr" spellcheck="false">

Interesting. Can you elaborate on how this actually works? That is, 
aria-haspopup tells the AT that activating the element shows a popup, but 
what does activating the element mean? How does the AT expose this to the 
user? How does the user know what to do with this?


> the following link (from the gmail 'more' menu) uses aria-haspopup and 
> aria-owns
> 
> <a aria-owns="gbd" aria-haspopup="true" onclick="gbar.tg(event,this)" href="
> http://www.google.com/intl/en/options/" id="gbztm" class="gbgt"><span
> class="gbtb2"></span><span class="gbts gbtsa" id="gbztms"><span
> id="gbztms1">More</span><span class="gbma"></span></span></a>

How does this manifest in an AT?


> the gmail search button  uses aria-label
> 
> <button class="gbqfb" aria-label="Google Search" id="gbqfb"><span
> class="gbqfi"></span></button>

That's somewhat bogus, IMHO, though not because of the ARIA aspects. It's 
a button with no label. It's unclear to me why only AT users and users of 
graphical CSS UAs should be able to get a label for the button.


> These are a few examples in use; I don't know if you consider them 'valid 
> use'.

These all seem allowed by the spec.


On Wed, 29 Aug 2012, Benjamin Hawkes-Lewis wrote:
> 
> I think you're missing Ian's point. Authors copy-paste markup from 
> deployed corpus. If linters throwing warnings/errors about "redundant" 
> markup caused authors to remove it, that would reduce the amount of 
> "redundant" markup in the corpus. Consequently, there would be fewer 
> copy-paste errors involving misunderstandings of the "redundant" markup. 
> The harm caused by "using a role that matches the implied default role" 
> is the proliferation of markup likely to result in copy-paste errors.

Precisely.


> Having said that, I don't buy Ian's argument because:
> 
> 1. Informed authors are unlikely to reduce their own content's 
> interoperability (backwards compatibility with today's client software 
> that doesn't expose implicit semantics) in favor of making it easier for 
> other authors to copy-paste their markup without errors.

Short-term band-aids are hopefully a short-term concern, though. The idea 
here is to specify what authors need to do going forward.


> Assuming the linter gives accurate information about markup 
> interoperability, such warnings/errors are unlikely to result in authors 
> removing "redundant" markup. So an important effect of emitting these 
> warnings/errors is to decrease the linter's signal-to-noise ratio.

It's quite legitimate for a linter or conformance checker to group errors 
that its author thinks are likely the result of specific short-term 
compatibility needs and label them as such or even hide them behind a 
twisty or some such. Indeed, Henri's validator already does this for some 
features (e.g. it warns about cite="" usage -- incidentally, the spec 
changes on cite="", so that warning should probably change too).


> 2. Linter behaviour aside, I suspect more content will be enriched
> with "redundant" ARIA markup than broken by copy-paste errors involved
> in said markup.

Redundant ARIA markup doesn't enrich anything though, by definition (it 
wouldn't be redundant if it did).


> So long as we're talking about "redundant" ARIA markup motivated by 
> implementations not exposing implicit native semantics, the Living 
> Standard is trying to describe the behaviour on which implementations on 
> converging, not what implementations do today. For client software 
> implementors, what matters is implementor intention, not current browser 
> behaviour.
> 
> However, useful linter behavior needs to be predicated on current 
> browser behavior, because that's what authors care about. So I agree 
> that Living Standard requirements for conformance checkers should be 
> relaxed to take account of current browser behavior.

Conformance checkers really have quite wide latitude to present these 
issues in a variety of ways. I'm not sure really how much more latitude we 
can give. I don't think it makes much sense to make things we know will be 
non-conforming in the future conforming now, but I'm happy to look at 
specific cases if there is concrete data showing that some suboptimal 
markup is necessary for pragmatic reasons.


> >> Bugs should be fixed. We shouldn't warp the language to work around 
> >> temporary bugs. We certainly shouldn't teach a new generation of 
> >> authors to use bad authoring styles just because of a transitory 
> >> issue with certain browsers.
> 
> What most authors want out of conformance checking is not "authoring 
> styles" but interoperability, and they will need to keep using 
> "redundant" markup to achieve that.

If there are concrete examples of this, that would be helpful.


On Wed, 5 Dec 2012, Cory Sand wrote:
>
> The "Paragraphs" section (3.2.5.3) gives an interesting example where 
> paragraphs can overlap when using an element, like <object>, that 
> defines fallback content. To avoid the confusion of mixing the fallback 
> paragraphs with the sentences of the surrounding paragraph in the case 
> where the object resource is not supported, the spec suggests explicitly 
> marking up the fallback paragraphs with <p> tags (which makes sense to 
> me), but it also suggests marking the sentences before and after the 
> object element as paragraphs. This latter suggestion doesn't make sense 
> to me, because in the original example, those sentences constituted a 
> single paragraph (since <object> is phrasing content). Wouldn't it be 
> more correct to only mark the fallback paragraphs as paragraphs?

If you did that, you'd still have the confusion of an outer paragraph that 
somehow overlapped inner paragraphs. It would render a little better, but 
be just as semantically confusing.
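
Concretely, a paraphrase of the situation (not the spec's exact example; 
the resource name is invented): with only the fallback marked up you would 
have something like

   <section>
    Some text before the widget.
    <object data="widget.sim">
     <p>Fallback paragraph one.</p>
     <p>Fallback paragraph two.</p>
    </object>
    Some text after the widget.
   </section>

and the two bare runs of text plus the <object> still form one implied 
paragraph that the explicit fallback paragraphs sit inside, which is 
exactly the overlap the section is warning about.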



There were also some more e-mails about <main>. I haven't replied to them 
explicitly because none of them introduce new material that hasn't already 
been discussed.

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
