- From: Michael(tm) Smith <mike@w3.org>
- Date: Wed, 10 Nov 2010 07:26:41 +0900
- To: public-html-wg-announce@w3.org
The HTML WG F2F minutes for the sessions that occurred in room Rhone_3B in
Lyon (IRC channel #html-wg2) are available at:
http://www.w3.org/2010/11/04-html-wg2-minutes.html
and are copied below.
* Topics
1. intro from Alexey
2. IANA, rel, MIME, charset
3. URI/IRI [URL]
4. Testing
5. epub
6. link relations
7. Testing 2
8. Pushing policy
9. staging area
* Summary of Action Items
_________________________________________________________
intro from Alexey
Alexey: I want to talk to you about the organization chart
... I want to project the IANA slide that I think was skipped
yesterday
(setting up projector)
(IETF and IANA is projected)
Alexey: IANA manages registries, and there are multiple entities
that affect what IANA does
... If IETF adopts a procedure or defines a policy, IANA is required
to follow it
... IANA does give input on what the policy should be
... IANA follows what IETF says in RFCs
... the other entity that affects IANA is the IAB (Internet
Architecture Board) - talks to IANA about policy decisions like
licensing
... IESG approves RFCs and so defines the formats, IAB controls the
policy experts
... If people are unhappy with IANA policies they should not blame
IANA - except in the case where IANA is slow in updating something
AVK: can blame them about format, URL persistence
Alexey: there is a document, RFC5226 which defines standard
procedures for registries
... IETF can make any format that it wants, but there is a typical
format for registries
... registries can have different policies, templates, levels of
restrictiveness
... most permissive level is first come first serve
... examples include vendor names
... on the other end of the spectrum, the strictest ones require a
standards track RFC
... in the middle is a procedure called "specification required"
... requires a stable specification from an IETF-recognized
standards organization
IANA, rel, MIME, charset
HS: Is there an official definition of what is a recognized
standards organization? there are different opinions
Alexey: no, it's not defined; people don't want to fix the list
... general criteria are: long established, stable document
HS: why is stability a requirement? if the software moves faster
than the registry, then the registry is out of date
Alexey: depends on the registry - many registries are for developers
... for example, as a developer you may want to find all the link
relations
AVK: but as a developer, I find current IANA registries useless
... wikipedia is a better reference for URI schemes than IANA is
... vetting by experts makes registries incomplete and inaccurate
HS: you said not just software implementors or others
... for years, image/svg+xml wasn't in the registry
... when Apple shipped MPEG-4, the type wasn't in the registry
... I can't think of any constituency for whom the registry says all
that they want to know, or even close
AVK: apart from pedants, maybe
Alexey: a couple of comments on this
... different registries have different policies
... at the time when the registry was established, there was IETF
consensus that this was the desired policy
... as time goes on, it may be that reality shows that a particular
policy was too strict (or too permissive)
... maybe part of the answer is to revise the policy
HS: in the days of classic MacOS when Carbon was still used a lot,
and you needed four-char type and creator codes, it seemed that the
value space for those codes was smaller than the space for MIME types
... so you'd think you'd have a greater need than for MIME types to
limit who can get what, but Apple operated a registry on a first-come
first-served basis and nothing bad came of it
<anne> MJS: you mentioned that it is possible to change the policy
<anne> ... assuming that some of the folks here are interested in a
much more permissive policy
<anne> ... what would be the process to get the IETF to change
<anne> Alexey: talk to the AD and talk to other people to initiate
discussion
<anne> Alexey: I'm happy to help with the progress
Alexey: the other half of the answer
... there is a reason there are expert reviews for some of the
registries, like MIME types
... people do make stupid mistakes in MIME types, so there is an
opportunity to fix this
HS: one of the supposed mistakes is using the text/* subtree for a
lot of stuff, and there I would claim the mistake is on the IETF side
AVK: what proportion of MIME types are not in use when they are
registered? it seems like most of them already are deployed by the
time you go to register them, so it might be too late to fix
Alexey: in the ideal world, people should ask experts up front
<Julian> !
Alexey: one example is that you can't use UTF-16 for textual types
HS: that's bogus
AVK: still insisting on that now is misguided
JR: one thing that Anne mentioned - some registries have a
provisional system
... but not MIME types
Alexey: vendor prefix ones are first-come first-served
JR: other question - regarding the media type registration RFC, Larry
has started discussing revising it in the TAG
... for example, people sniff for types - we could make that more
robust
HS: I want to complain more about CR/LF
... the history of CR/LF restriction and the fact that text/*
defaults to US-ASCII in the absence of charsets...
... this is an artifact of a leaky abstraction from SMTP
... US-ASCII default is a theoretical most prudent default from the
time when in email there wasn't an obvious default
... but neither of those considerations apply to HTTP
... HTTP can send text that has line breaks that are not CR/LF
... in fact for HTML, LF-only is preferred
... it makes no sense to say that all these types like HTML,
JavaScript and CSS are "wrong"
... instead it would make more sense to say that CR/LF does not
apply to HTTP
... for some types, for historical reasons we need to default to
Windows-1252 or UTF-8
... pretending these need to be registered under the application/*
subtree doesn't help anyone
... it only serves the RFC canon that HTTP and SMTP match, but that
doesn't help authors or implementors
... line breaks should be based on transport protocol
... types themselves should be able to define their default charset
JR: if you look at the thing that Larry brought to the TAG about
MIME on the Web...
... he mentions all these problems
... line break thing doesn't make sense on the Web
... HTTP appears to use MIME, but doesn't, and doesn't need to
... charset is also an issue for HTTP
... conflict between MIME, HTTP and XML types on text/*
HS: I actually implement RFC 3023
... I have a checkbox for saying ignore it
<anne> (There's a t-shirt saying "I support RFC 3023")
HS: if I shipped the validator without the "ignore it" box, people
couldn't use the validator
JR: what's the default?
HS: defaults to supporting it
Alexey: comment on Web vs email - this needs to be discussed in IETF
... if Web requires modified version of MIME, let's do it
... there is a new WG in applications area
<anne> APPSAWG
<weinig> http://datatracker.ietf.org/wg/appsawg/charter/
HS: it feels frustrating to actually have to discuss this
... that people don't believe what they see on the web
AVK: the feeling is that the IETF is so much behind, and then we
have to get in and tell the old timers what the new world looks like
... we're not sure it is worth our time
... we have moved on
Alexey: it is occasionally helpful to talk to people who designed
the original
... especially when it comes to character set - I think there is
agreement from the original author
AVK: I talked about some of the discussion about moving away from
text/plain drafts, and people there express fear of Unicode....
... W3C is kind of slow too, but at least we think HTML and Unicode
are ok
HS: well, W3C isn't ready to publish HTML5 as HTML5 yet
JR: IETF thinks HTML and Unicode are fine, just not for their
documents
Alexey: there is provisional registration
AVK: for header fields, you need spec even for provisional
... person guarding the header field registry was too conservative
JR: does header name registry have a public mailing list
... registry lists should be public
Alexey: can you draw cases like this to my attention? it might be
implementation or process failures
AVK: but if we look at URI schemes..
Alexey: it's hard for me to defend the people who designed the
procedure
... there was a discussion about relaxing registration of certain
types of URIs
... so we could register things like skype or yahoo IM
AVK: we are trying to register about: - there should be some
registration pointing to the draft
... and for many headers, browsers have to know about them even if
they are unregistered
... difficulty of using the registry causes an incentive to use X-
names and just not register
JR: one thing we should look at is accountability - there needs to
be a public mailing list for header registration
... also Larry will join us to talk about IRI
AVK: I would rather just get rid of IANA and have a W3C registry,
with a community-managed wiki
HS: to consider how the XHTML2 WG was doing things - at some point
it was obvious that just giving feedback wasn't going to change the
way they did things
... so instead of trying to change the way they did things, another
group did something else, and that became the group people paid more
attention to
... there is a feeling that fixing IANA is so difficult that it
would just be easier to set up a wiki
AVK: we could just compete
Alexey: this is not helpful
AVK: I would like a registry that would tell me X-Frame-Options
exists
... I don't think this will ever fly at IANA
HS: I have no experience of registration, but the language tag
registry is a very positive role model
Alexey: when I talk to IANA, they listen
AVK: I think the problem is the process
Alexey: I can help you initiate changing the process
AVK: not sure I am interested in helping to fix the process if there
is an easier path
HS: we should mention willful violations of the charset registry
... it would be useful for the main charset registry to be the place
to go to find out what you need to implement
... the thing is that ISO-Latin1 should actually be interpreted as
Windows-1252
... another example is that for Shift_JIS you need to use the
Microsoft tables, not the ISO tables
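[ illustrative sketch, not part of the discussion: the kind of label
remapping HS describes, in JavaScript; only the two examples above are
shown, and a real override table would be much larger ]
    // Map declared charset labels to the decoder Web clients actually use.
    var webCharsetOverrides = {
      "iso-8859-1": "windows-1252", // Latin-1 labels decoded as Windows-1252
      "shift_jis": "windows-31j"    // Shift_JIS decoded with the Microsoft tables
    };
    function decoderForLabel(label) {
      var key = label.toLowerCase();
      return webCharsetOverrides[key] || key;
    }
    // decoderForLabel("ISO-8859-1") -> "windows-1252"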
LM: I note that my draft covers many of these issues
HS: not in this much detail; I will give feedback
<Julian> http://tools.ietf.org/html/draft-masinter-mime-web-info-01
LM: I hope in the cases where there are willful violations, that the
right thing to do is to fix the registry
AVK: in the case of the charset registry, there might be a need for
separate registries for Web clients vs other clients
HS: for example the Java platform uses the IANA names for charsets
with their real meaning
... it would not be good to change Java, so the registry should
include both sets of info
... Java could add an API for Web content decoders
LM: I think this is a three-phase process
... (1) identify the problem
... (2) identify which things need to change (w/o being explicit
about how)
... (3) then there needs to be action on the change
... I would like to identify the problem and the kinds of changes
first
... only then decide whether to make a wiki, change the process, etc
AVK: if you are already working on this, then that's great
LM: I would be happy to have co-authors
Alexey: at minimum we should talk
LM: I think we should bring it into a working group or take it up as
an action item
... MIME is a part of the Web architecture that we have adopted
without adopting it
JR: we talked earlier about text/html and encoding
LM: again I think we should describe the problem first
... same thing might be said for URI schemes
HS: given the last call schedule (1H2010), how realistic is it that
changes of this magnitude could go through the IETF
... seems unlikely
LM: my view is that a W3C document entering LC can make reference to
documents at similar or behind level of maturity
... they don't need to be final until you go to REC
MS: (explains W3C process)
URI/IRI [URL]
HS: one reason I'm skeptical about the rate of change at IETF is the
URL thing
... we had rules in the HTML5 spec about transforming href values to
IRIs
... it was argued that IRIbis was supposed to solve it
... I remember there was a schedule
LM: it's quite off
HS: at the date when there was supposed to be a deliverable, they
haven't even started
... we shouldn't send things to the IETF to die
... I was really annoyed when I wanted to fix a bug relating to URL
handling in Firefox and the spec did not have what was needed
... I think that for URLs the process has had its chance and didn't
deliver
RI: the original schedule was very aggressive and we never really
expected to meet it
LM: it was wildly optimistic
... the problem with most standards activities is that there's
nobody home except for people who showed up
... if you look at the archives, there was really a fallow period,
but since then it is picking up
... meeting next week in beijing
... people who care about URLs in HTML should show up online
HS: there is also the problem that if people are already showing up
in some venue, then moving the work to a different venue and then
complaining that people didn't show up in the other venue is not
productive
LM: the problem really is that what was in the HTML document before
was wrong
... unfortunately there is complexity due to need to coordinate with
IDNA and bidirectional IRIs
HS: you need something that takes a base IRI, a relative reference
as UTF-16, and a charset, and you get a URI/IRI back
... my point is that the HTML spec doesn't need to deal with
rendering any kind of address
... it just cares about resolution / parsing
... nothing about how to render an IRI
... what is required is someone writing down the real-world
algorithm for this resolution thing
... and it needs to be somewhere that you can reference it
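[ illustrative sketch, not from any spec: the shape of the resolution
primitive HS is asking for; the function name is made up, and only the
trivial cases are handled - full RFC 3986 path merging, IDNA, and
encoding-sensitive query escaping are deliberately omitted ]
    // base IRI + relative reference (a UTF-16 JS string from an href
    // attribute) + document encoding -> absolute URL, or null on failure.
    function resolveUrl(base, ref, docEncoding) {
      ref = ref.replace(/^[\t\n\r ]+|[\t\n\r ]+$/g, ""); // strip surrounding whitespace first
      if (/^[a-zA-Z][a-zA-Z0-9+.\-]*:/.test(ref)) return ref; // already absolute
      var m = /^([a-zA-Z][a-zA-Z0-9+.\-]*:\/\/[^\/?#]*)/.exec(base);
      if (!m) return null;
      if (ref.charAt(0) === "/") return m[1] + ref; // authority-relative
      return null; // path-relative, query-only etc. not sketched here
    }
    // resolveUrl("http://example.org/a/b", " /c ", "windows-1252")
    //   -> "http://example.org/c"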
RI: if it were in the IRI specification would it be ok for you
HS: what I am annoyed about is that we had something that was right
or fixable, was removed or delegated, and now we have to rewrite it
... I am now betting on Adam delivering it
JR: I would like to say one thing
... we need to find the right separation between things that are
just part of the attribute and things that are part of the resolving
algorithm
... I think whitespace discarding is not part of the resolution
... there might be a step before resolving that is part of
extracting from an attribute
AVK: in the running code, whitespace stripping happens at the
resolving end
LM: it would be nice if you could copy from the location bar into
other apps
HS: we are not talking about the location bar
JR: what about space-separated lists of URLs
AVK: this is a different case
LM: motivation for trying to start the work in the IETF was to make
sure that URLs in HTML and in other apps weren't different
... it is true that the work has been delayed, but activity has been
restarted
Alexey: you need to open bugs
LM: Adam was at the last meeting
... there is an IETF document of how to do IETF document
HS: it would be great if the kinds of URLs that the web uses were the
same as what other things use
... but the Web is constrained
JR: this was very useful, which I'm not sure was expected; we have
another point about link relations, which is on the agenda
MS: in the future, we shouldn't delete things until the replacement
is ready
LM: chairs from IRI working group are prepared to add an additional
charter item
AVK: Adam is a bit reluctant to go back to the IETF
<anne> (that was my impression)
RI: it seems like there are discussions coming up in beijing where
we need to be talking between the HTML WG and IETF
LM: editors will be remote, so remote participation might be good
... how about file: URLs
HS: they are not really on the Web
... best thing to do for USB key is relative URLs
<r12a> whether it's beijing or not, i think we need to find a way to
pursue this dialog with HTML5 folks and chairs/editors of the IRI
spec
RI: is something gonna happen
... action items?
LM: don't be skeptical - if you believe it will work
<scribe> ACTION: Henri to give feedback to Larry on MIME etc draft
[recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action01]
<scribe> ACTION: Anne to give Alexey info about registry problems
[recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action02]
<MikeSmith> started lunch break?
MikeSmith, we're about it
<MikeSmith> k
er, about to
session adjourned
<anne> fwiw, testing was half an hour delayed
<anne> not sure if anyone is actually in the other room yet
<anne> but since you just signed in...
<Julian> isn't testing at 5pm (50 mins from now?)
<anne> no
<anne> it's a double block
<Julian> oh
<anne> yes
<anne> we are setting up
<anne> dbaron, ^^
<hsivonen> dbaron, we are in Rhone 3b
<hendry> scribenick hendry
<oedipus> scribenick: hendry
Testing
hendry: to find the connection type, it's not slow or blocking, is
it? it's a fast operation?
Andrei: yes, we fire online when the type changes
... type just caches the last seen connection type
http://dvcs.w3.org/hg/html/
[ scribe apologies for pasting in wrong buffer ]
maciej: how to participate in the testing TF, testing framework
<plh> kk: and goals for LC
kk: the TF meet every two weeks
... there is a wiki with schedule, there is a server with hg
... philippe has mirrored that work at http://dvcs.w3.org
<plh> --> http://dvcs.w3.org/hg/html/ HTML test suite repository
kk: same content on both servers
<plh> --> http://test.w3.org/html/ HTML Testing Area
kk: asking what to test ... localstorage, x-domain messaging, doing
spec analysis
... looking at features which are shipping
... submitted some canvas tests
<plh> --> http://test.w3.org/html/tests/submission/PhilipTaylor/ Canvas test suite
kk: getElementsByClassName tests from Opera
... distinction between approved and un-approved tests
<plh> --> s/Philipp Taylor/Philip Taylor/
kk: bugzilla to process the test
<plh> --> http://test.w3.org/html/tests/harness/harness.htm Test harness
jonas: what is the harness ?
anne: same as XHR
kk: tests run automatically
... video tests is hard to automate
... self-describing test
... some exceptions that you can't poke in the OM and you can't test
it
hsivonen: can you do some REFerence tests ?
jonas: yes, there are some things
kk: there are some things you can't test with REF tests, for e.g.
Audio
hsivonen: multi-testing question
plh: some tests are manual and some tests are automatic
kk: existing tests not using the testharness, it might not be worth
re-writing them
plh: it's a bug, it shows the buttons, though it's automatic
kk: waits for 5 seconds before going to next test
maciej: this UI is broken
kk: can we get all the requirements up front ?
... esp we need a plan with REF tests
maciej: proposed categories; script driven, ref test, manual test
... too awkward with 100k tests ... takes too long to run
plh: the test can indicate itself, if it's manual or automatic
anne: if the test loads the test harness, we know it's an automatic
test ( no need to categorise )
hsivonen: just have 3 directories
dbaron: you can harness the harness
kk: we should do it in one file
hsivonen: the easier way is to use directories
jonas: i don't care
maciej: text file is harder to maintain than a directory, not big
deal either way
<plh> scripts/
<plh> reftests/
anne: we want directories for *types* of tests
<plh> manuals/
dbaron: painful to use dirs as metadata, as you may need to move
them around
kk: maybe we will come up with a new dir in some months' time,
prefers a text file as it won't change location
jonas: bigger problem to have a function call when the test finishes
so we don't have to wait 5 seconds after each one loads
anne: there is logic in the harness to handle this & async tests
hsivonen: [ didn't quite understand your implicit mochi test comment
]
<dbaron> plh: need a way to copy all the additional files that tests
depend on
<hsivonen> I find that I almost always have to use the explicit
finish function for scripted tests, so it's not a win to finish
tests implicitly on onload
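[ for context, a minimal example of the explicit-finish style hsivonen
mentions, assuming testharness.js and testharnessreport.js are loaded;
the test name and the image are made up ]
    // Async test that ends with an explicit done() rather than implicitly
    // on window onload.
    var t = async_test("img fires load event");
    var img = document.createElement("img");
    img.onload = t.step_func(function () {
      assert_true(img.complete, "image should be complete after load");
      t.done(); // explicit finish
    });
    // tiny inline image, which also ties into the data URL discussion below
    img.src = "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7";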
jonas: we need to somehow markup dependencies
sweinig: in the common case there will be no deps
hsivonen: should we decide whether to allow data URLs ?
anne: common resources makes sense
hsivonen: you want to use data URLs for near 0 load times
[ why does jonas use data URLs? didn't get his argument ]
kk: ie9 supports dataURIs
... might be a problem that browsers do not support dataURIs
jonas: we need to list our deps and assumptions
... can we assume browsers have ES5, forEach is nice
maciej: we should not use ES5 until it's widely implemented
jonas: queryselector test cases were held up by WebIDL
kk: e.g. of WebIDL false positive in canvas read only thing
jonas: do we have any existing docs of assumptions?
kk: there is just the source code
... can someone take an action to document them?
anne: read the XHR tests :-)
<krisk> testing wiki http://www.w3.org/html/wg/wiki/Testing
jonas: these tests are already in directories
kk: suggests documenting the tests in the wiki
hsivonen: ... something about re-writing the "mochi tests" ??
anne: i'm fine with re-writing / using another harness
kk: first anchor test is very simple, it's not hard to migrate to
james's harness
jonas: make some requirements for making the tests portable between
harnesses [ IIUC ]
hsivonen: something about integration layer, which allows reporting
into your own system (thanks anne)
<plh> --> http://dvcs.w3.org/hg/html/ mercurial
plh: you can commit a test if you have a W3C account
dbaron: might need to be aware of hg's push caveats [ to plh ]
<plh> ACTION: plh to work with systeam to make sure we keep track of
hg push [recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action03]
maciej: not great security, since hg trusts the client's config WRT
who wrote the patch
dbaron: you might want logs
... Mozilla have a tool called push-log for this problem
jonas: i can see now the tests are separated by directory
<dbaron> The source for pushlog is in this hg repository:
http://hg.mozilla.org/hgcustom/pushlog/
jonas: is there a description file ?
<anne> http://test.w3.org/html/tests/
<anne> http://test.w3.org/html/tests/harness/approvedtests.txt
kk: see http://test.w3.org/html/tests/harness/approvedtests.txt
... we will add extra info
jonas: remove domain so it's not server specific
... we have a test file per dir
... i want to walk this from the cmdline
... i want relative paths
kk: we might need some absolute stuff
jonas: i'm pulling via hg
kk: there is no absolute need for absolute urls
hsivonen: mochi-tests point to localhost
jonas: something clearly identifiable for a search & replace to get
the tests working
... you can get different types of relative paths
... it's important that we can accommodate them in a "search &
replace"
... we need to scale
... it's not workable to ban absolute paths
hsivonen: we need to document the "clearly identifiable" bit, like
test.w3.org and test2.w3.org
jonas: we have to say it's OK to use abs paths
hsivonen: worried about some dir namespace collision
... get rid of prefixes
jonas: OK
<krisk> That is fine
kk: how to delimit the file ?
jonas: i don't care
... though, since it's hand-written, make it easy & little to type
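[ sketch of what is being discussed, not an agreed format: a hand-written
manifest with one test path per line, plus the "clearly identifiable"
prefix rewrite jonas and hsivonen describe; the paths and base here are
examples only ]
    // Example manifest contents: one path per line, relative to the repo root.
    var approvedTests =
      "approved/getElementsByClassName/001.htm\n" +
      "approved/canvas/example.htm\n"; // second entry is made up
    // Absolute URLs are allowed if they use a recognizable host, so a local
    // mirror can rewrite them mechanically:
    function localize(testUrl, localBase) {
      return testUrl.replace(/^http:\/\/test2?\.w3\.org\/html\/tests\//, localBase);
    }
    // localize("http://test.w3.org/html/tests/approved/foo.htm", "tests/")
    //   -> "tests/approved/foo.htm"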
sam: is there a preferred length? with CSS tests there was a wide
range
... bad = long test & lots of permutations
hsivonen: we know a bad test when we see it
maciej: there is a fuzzy boundary
jonas: io bound if we have a million tests ... we need to keep it
somewhat reasonable
sam: there are examples of tests that can be merged
adrian: there is a review process
kk: you could file a bug, raise issues
adrian: of course if it's approved, it doesn't mean it can't change
again
sam: if all the tests pass, then the bugs are in the specs
kk: tests do content negotiation (canPlayType) WRT choosing a codec
the runtime supports
hsivonen: mochi tests that we (mozilla) use require server-side
javascript
plh: was a lot of trouble already to support PHP for security
reasons
sam: we have tests that use python, php, curl for certain load tests
<dom> (we evoked this in WebApps the other day; we can probably
consider more server-side stuff at some point, but we need to have
requirements documented earlier rather than later)
<dom> (and please consider limiting the number of needed
languages/platforms as much as possible)
jonas: we can generalise "slow load tests" so it doesn't
necessarily require PHP
... some security concerns here
plh: we need to review PHP files before they become live
jonas: we need it on the same server for same origin type cases
<dom> if same server == test.w3.org, that's part of the plan
hsivonen: we need a mechanism to load things slowly for example
<dom> (use a DTD for that)
hsivonen: avoid echo, we should return existing (approved) files
jonas: is there sensitive data WRT XSS-ing
plh: should be fine
<anne> safest might be w3test.org or some such
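[ rough sketch of the generic "slow load" helper jonas describes, written
here as a small node.js script rather than PHP; the file name, chunk size
and port are made up, and it serves a fixed approved file instead of
echoing input, per hsivonen's "avoid echo" point ]
    var http = require("http");
    var fs = require("fs");
    http.createServer(function (req, res) {
      var body = fs.readFileSync("approved/slow-resource.txt");
      res.writeHead(200, { "Content-Type": "text/plain",
                           "Content-Length": body.length });
      var offset = 0;
      var timer = setInterval(function () {
        res.write(body.slice(offset, offset + 16)); // 16 bytes every 250 ms
        offset += 16;
        if (offset >= body.length) { clearInterval(timer); res.end(); }
      }, 250);
    }).listen(8000);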
kk: what happens if 10 million tests are in the Q to be approved
dbaron: biggest risk is a test that claims to test something, but
doesn't actually test it
sam: we should only accept tests that use the new harness
... the tests here are about testing regressions
kk: worried about approval rate, esp. if only he does it
plh: if a subset of tests are passed by everyone, they are probably
good
anne: 1) is it good enough hsivonen 2) ... [ didn't get that ]
maciej: let's do a cost-benefit analysis
<adam> Accidentally testing something that is not a requirement at
all
maciej: 1st category: testing undefined behaviour
... 2nd -- testing something contrary to a requirement
... -- at least one browser will fail this
[ can someone write what maciej said pls ? ]
... 3rd category: testing something where it doesn't actually test it
... review should catch them all
... almost certain something will be wrong
... how much time should be spent on review versus benefit
... test approved == matches what the spec says
dbaron: from experience within CSS, review is more work than writing
the test... so it's not worth doing for an existing contributor
dbaron: figure out why the test is failing sooner than later
... imp report: 1) run all tests 2) bug in test suite or in browser
(v. time consuming)
... figure out WHY tests are failing
hsivonen: we should flag tests that fail in all browsers
... we can't assume the spec is necessarily 100% correct
<hsivonen> we should flag tests that fail in 3 engines
maciej: low skilled tests don't need to be approved, better if
everyone is just running them [ IIUC ]
anne: we should distribute the testing
maciej: don't have a ref test when you could have a script test
... distributed test is more likely to succeed
hsivonen: do we have any way to feed the test info to the WHATWG
HTML5 section info box things
kk: could be an admin problem if links change
<krisk> see http://test.w3.org/html/tests/approved/getElementsByClassName/001.htm for an example of a script based test
<freedom> nobody in 3B yet? there will be an EPUB related meeting
right?
<oedipus> according to the agenda, EPUB discussion in 3B starting
8:30 french time
http://www.w3.org/html/wg/wiki/TPAC_2010_Agenda#Room_3B__.28IRC_.23html-wg2.29
<mgylling> Reads 09:00 to me
<mgylling> To anybody who is physically there: does 3B have call-in
facilities?
<oedipus> guess the first half hour will be spent in common again
then breakout to 3B
<freedom> seems not
<freedom> I am in 3B physically now
<mgylling> freedom, thanks.
epub
<MichaelC> scribe: Julian
ms: markus to give overview
mgylling: (remotely)
<mgylling> www.idpf.org
mgylling: epub standard for ebooks, around for several years,
expanding in popularity, large adoption
... idpf.org
... based on xhtml, subsets defined
... current epub 2.0
... uses XHTML1.1 mod
... is a fileset, ZIP container, different document types
... container called OCF
<freedom> http://www.idpf.org/specs.htm
mgylling: some of the formats in epub defined by w3c
... some of the metadata formats owned by epub itself
... is undergoing rev to 3.0
... charter: update & alignment with modern web standards
use HTML5 as grammar
is not allowed by current specs but already happening
need to formalize & stabilize
on HTML5 vs XHTML5: epub decided to use X*
based on requirement for existing reading systems to be upgradeable
MS: asks about design philosophies
... drive spec based on what current UAs already can do?
mg: docs used to be static
... <script> SHOULD/MUST be ignored
... but scripting is going to be added
... problems with legacy readers
... and non-browser-based impls
... it's clear that this will be needed in the future
MS: devices coming to market will have full browser engines
Julian: usability of spec for being referenced?
mg: not a problem yet
... we're not forking
... defining profiles and extensions, follow the HTML5 style
Julian: how does ext work for you?
mg: XHTML5 is supposed to allow namespace-based extensibility
ms: feedback on this is welcome
... epub I18N requirements -> CSS WG -> vertical text support
... does not seem to affect HTML though
... is there something the HTML WG need to do?
mg: books / ebooks slightly different domain
... missing semantics for books
... distinguish note references and notes
... skippability
page breaks
have looked at role attributes for extensibility
mjs: extending role not recommended because owned by aria
... needs coordination with PFWG
... maybe dedicated elements
or attributes
what affects rendering should be in HTML
mg: book semantics, chicago manual of style
MC: asks about roles
MG: uses custom attributes
<MichaelC> Role attribute extensibility:
http://www.w3.org/TR/role-attribute/#extending-the-collection-of-roles
MG: fastest way for now (own NS)
MC: role module *does* allow extensibility
MC: PF and HTML need to coordinate on @role
<Zakim> MichaelC, you wanted to discuss role extensions, future
aria, etc.
MG: ownership of @role
mjs: HTML defines @role by reference to the ARIA spec
MC: aria depends on HTML to define @role
mg: request to clarify the HTML spec wrt role extensibility
mg: on metadata in epub
... NCX doesn't have metadata at all anymore
<MichaelC> ARIA on host language role attribute
http://www.w3.org/TR/wai-aria/host_languages#host_general_role
mg: core metadata will continue to come from outside HTML/head
<mjs> -> role attribute in HTML5:
http://dev.w3.org/html5/spec/Overview.html#annotations-for-assistive-technology-products-aria
mg: reading systems need to get the metadata from the package file
HS: on role attribute
<fantasai> hsivonen: ARIA spec defines aria- attributes, but does
not define role attributes
<fantasai> hsivonen: requires that a host language define a role
attribute with certain characteristics
<fantasai> hsivonen: HTML5 tries to do this
<fantasai> hsivonen says something about tricky wordsmithing
<fantasai> hsivonen: Way forward would be to figure out roles that
current AT vendors need (?) and define tokens for them, and have
ARIA promise not to conflict
<fantasai> hsivonen: The role module spec relies on CURIEs for
extensibility
<fantasai> hsivonen: ... not good for EPUB
<fantasai> hsivonen: I don't expect web engines to support CURIEs,
relies on namespace stuff ... lookup DOM L3
<fantasai> hsivonen: Best way forward is to ask PF to set aside the
names that you expect to use
<fantasai> hsivonen: Doesn't make sense to pretend different groups
dont' know about each other
<fantasai> hsivonen: We're communicating, so let's coordinate.
<MichaelC> ARIA taxonomy
http://www.w3.org/TR/wai-aria/rdf_model.png
<fantasai> ?: I'm ok with approach Henri is suggesting, but
coordination with PF is important sooner rather than later
<fantasai> MichaelC: Everything would have to fit into our taxonomy
<fantasai> hsivonen: Implementations don't care about the taxonomy,
that's only to help out with spec design
<fantasai> hsivonen: If PF promises that this set of names is not
going to be used, and picks different names if it decides to expand
in that area, then we don't have to worry about all this
extensibility stuff
<mjs> ack q+
<fantasai> MichaelC: For author understanding, we want to pick
tokens that match the most appropriate terminology
<Zakim> MichaelC, you wanted to say if you want to follow the
approach Henri suggests, should coordinate with PFWG sooner than
later and to say ARIA roles are part of a taxonomy
<fantasai> hsivonen: They're just tokens, it doesn't really matter
<fantasai> mjs: Instead of debating in the abstract, let's just send
the list of suggested roles to PF asap
<hsivonen> DOM 3 namespace lookup doesn't work for CURIEs in
text/html DOMs, so don't expect browsers to implement CURIEs
<fantasai> mjs: If they don't like the tokens proposed, then they
can respond about that.
<fantasai> mjs: I don't think this meta-conversation is getting us
anywhere
<Zakim> Julian, you wanted to let Mike speak
<fantasai> hsivonen: I'd like to add a note about why CURIEs are bad
idea in this space
<fantasai> hsivonen: So, frex, how Gecko exposes roles to interface
to JAWS, Gecko picks the first role it recognizes and exposes that
as the MSAA role
<hsivonen> IAccessible2
<fantasai> hsivonen: And then exposes the entire value of the role
attribute as the xml-roles property in the iAccessible2 interface
<fantasai> hsivonen: It follows that the namespace mapping context
of the CURIE binding context is not exposed at all
<MichaelC> scribe: fantasai
hsivonen: If you wanted to do something with CURIE, you wouldn't do
CURIE processing.
... You would wind up exposing to JAWS the prefix and local name
<freedom> IAccessible2, http://www.linuxfoundation.org/collaborate/workgroups/accessibility/iaccessible2
hsivonen: Therefore I advise against relying on the mapping context,
because the existing ... doesn't expose the mapping to IAccessible2
and therefore to JAWS
markus: Does Gecko expose the roles regardless of whether it
recognizes it?
hsivonen: Yes. All the data is passed through, in case JAWS wants to
violate ARIA and look at things itself.
... Gecko doesn't police whether JAWS follows ARIA spec
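[ illustrative sketch, not Gecko's actual code: the first-recognized-token
behaviour hsivonen describes; the recognized-role list is a made-up subset ]
    var recognizedRoles = { "note": true, "navigation": true, "button": true };
    function mapRoleAttribute(roleAttrValue) {
      var tokens = roleAttrValue.replace(/^\s+|\s+$/g, "").split(/\s+/);
      var platformRole = null;
      for (var i = 0; i < tokens.length; i++) {
        if (recognizedRoles[tokens[i]]) { platformRole = tokens[i]; break; }
      }
      return {
        platformRole: platformRole, // exposed as e.g. the MSAA role
        xmlRoles: roleAttrValue     // full value passed through (IAccessible2 xml-roles)
      };
    }
    // mapRoleAttribute("doc-endnote note")
    //   -> { platformRole: "note", xmlRoles: "doc-endnote note" }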
MikeSmith: I just wanted to state where things stand.
... It's not inconceivable that the language features you need for
EPUB could be considered as native elements and attributes to be
added to HTML5 itself. It's not too late for that.
... It's not too late to ask, anyway.
... I'm sure we're going to get LC comments asking for new elements
and attributes.
... There will be a lot of people who haven't looked at the spec
yet, or want opportunity to have their request considered.
... Proper way to change the spec is file a bug against the spec.
... Cutoff for pre-LC was Oct1. Everything after that date will be
considered an LC comment.
... I don't think that you should self-censor, and just assume
there's no chance of getting any new language feature requests for
native elements and attributes considered.
... That's not what we want
... I don't want to say you have nothing to lose, because there's
cost in time to everyone
... But something for EPUB to consider, whether you want to make
requests for new elements/attributes.
<hsivonen> Gecko exposes the value of the role attribute to JAWS but
not any kind of CURIE prefix mapping context, which means using
CURIEs wouldn't really work with the URL and you'd end up
hard-coding a known prefix and the resolution to an absolute URI
would be fiction
MikeSmith: Not mutually exclusive: could also pursue extensible
approach, too
<hsivonen> thus bad idea to use CURIEs
MikeSmith: It's a good idea, although some things we need are likely
to be considered out-of-scope for HTML5
Markus says something about e.g. notes
fantasai asks if that wouldn't be <aside>
mjs: Just want to reinforce Mike's comment that we would definitely
like to hear all the requests, even though we are late in the game
and probably aren't going to add major new feature.
... But requests that are modest in scope and important for a
particular use case will be considered
... We're not 100% frozen yet, but in a few months we will be. So
better to get those requests in now rather than later.
... Any other comments?
fantasai: Wouldn't notes be an <aside>?
Markus: Notes would be a subclass of <aside>
Markus says something about an href role
mjs: Talking about footnotes and end notes?
Markus: Yes. Need to distinguish those for formatting
MikeSmith: Don't we have a bug open on having more roles for <a>?
mjs: If particular semantic of linking to footnote or endnote might
be more appropriate as a rel value
hsivonen: Maybe have a CSS pseudo-class detecting the note type from
what the <a> points to instead of requiring author to specify
Markus: Response from EPUB authors says that overall, it's really
good. There are a number of additions from XHTML1 that we love.
... We're already very close to having it work for books, only a few
minor concerns.
... So not looking for any major surgery here.
fantasai: I think they should define a microformat for subclassing
notes.
hsivonen: Håkon and Bert already defined a microformat for books,
although I don't think they addressed notes.
Bert: yes. A lot of that has been added to HTML5, though: <article>,
<section>, etc.
mjs: HTML5 just recommends a plain <a>, with no distinguishing
markup
hsivonen: footnotes are a thorny issue in CSS. Prince supports
something, but it's not optimal
... I was reading Dante's Inferno in HTML5. It doesn't make any
sense to read it without footnotes.
mjs: Yeah, I read a Terry Pratchett book that was supposed to have
footnotes, but they were all endnotes and it didn't work so well
<Bert> Boom! (BOOk Microformat)
http://www.alistapart.com/articles/boom
hsivonen: I think we should figure out the CSS layout model first,
then fit the markup to that.
... If we come up with markup first, and it doesn't fit the CSS
layout model, making it work in layout could become very
complicated, involving many pseudo-classes, etc.
meeting closed?
<Bert> (Contrary to what I remembered, BOOM *does* have footnotes,
not just sidenotes: <span class=footnote>)
discussion of role attributes
mjs: You need centralized extensibility for accessibility, so the
a11y technology understands the roles
hsivonen: If you're on Windows, what FF can do is more than with the
AX api on Mac
<MikeSmith> http://code.google.com/p/epub-revision/w/list
hsivonen: So maybe it's a bad idea to design stuff with the
assumption that you have IAccessible2 on Windows
... Alternatively, could consider it a bug that AX doesn't have this
feature
anne: The only case you'd notice it is if JAWS was updated before
VoiceOver
hsivonen: I'm guessing the upgrade rate of JAWS is a non-issue in
practice
<MikeSmith> http://code.google.com/p/epub-revision/wiki/Annotations
Julian: You might not believe how backwards some people are in
upgrading their browser
hsivonen: Big parts of ARIA have been designed with the assumption
of an enterprise stuck with IE7 for years after ARIA has been
deployed in JAWS
<MikeSmith> http://code.google.com/p/epub-revision/wiki/DesignPrinciples
hsivonen: Design decisions make assumptions about which part of the
system will be upgraded first. Might not have been the best design
decisions.
<MikeSmith> http://code.google.com/p/epub-revision/wiki/HTML5Subsetting
fantasai: So is EPUB subsetting HTML5?
MikeSmith: not sure
mjs: Engines are unlikely to enforce any subsetting
fantasai: True, but such content could be non-conformant for EPUB 3.
... Not all EPUB implementations are based on browser engines
?: Are there many that are not?
fantasai: I know of at least two
... and I haven't actually looked into the issue
<kennyluck> fantasai: When I was in Tokyo, I found an EPUB
implementation that implements CSS but is not based on a browser
<kennyluck> .. I also found one EPUB implementation that's not based
on browser at all
<kennyluck> ... yet it renders vertical text quite nicely
<kennyluck> ... (It does not support CSS)
fantasai: uses effectively a UA stylesheet only
hsivonen: Are the CSS implementations any good?
fantasai: Don't know, haven't done any testing
discussion of converting HTML5 to EPUB
would need to split into multiple files for EPUB impl's tiny brains
:)
<mgylling> Yes, splitting files is done a lot due to memory
constraints in certain handhelds
<mgylling> A popular one has a 300k limit IIRC
<MikeSmith> 12 minutes to caffeine
<freedom> which means EPUB doesn't encourage authors to write long
chapters?
<mgylling> hehe, yes, need to keep it short ;)
<mgylling> I expect these max file size recommendations to be gone
soon, just another generation shift needed in devices
<freedom> mg: do it, my iPhone 4 has 512MB now
<mgylling> freedom, right. Note that these are not spec restrictions;
these are conventions that have arisen in the ecosystem
<freedom> OK, bad implementation, not bad spec
link relations
<scribe> ScribeNick: fantasai
mjs: Subtopics include
... Idea of using microformats
... another is that we have a number of specific issues
<mjs> http://www.w3.org/html/wg/tracker/issues/124
<mjs> http://www.w3.org/html/wg/tracker/issues/127
<mjs> http://www.w3.org/html/wg/tracker/issues/118
<mjs> http://www.w3.org/html/wg/tracker/issues/119
mjs summarizes the open issues
mjs: Does anyone else have other subtopics?
<adam> *u must be dozing off*
<anne> no kidding
<Zakim> MikeSmith, you wanted to show XPointer registry and to
discuss potential need for a role registry similar to need for a rel
registry
MikeSmith: Somehow I ended up the one responsible for registering
all link relations for HTML5
... So, I guess I can put some kind of report on that? What should I
be doing.
Julian: Let's start with a description of where things are right now
... I'll summarize where IETF is right now.
... It all started with realization that HTTP has a Link header
that's supposed to be equivalent to Link element in HTML
... And that there are documents on the web which are not HTML and
for which it would be useful to expose linking
... Lots of people think it would be a good way of expressing link
semantics independently of HTML
... So Mark Nottingham started on the work of writing a new def of
Link in HTTP
... And establishing a registry that could be used in HTML as well,
but would not necessarily be used in HTML
... The IANA registry also includes the link relations registry that
was established for the Atom feed format, which is similar but not
identical to HTML.
... So there are overlaps, but it included syndication-related
things and not everything that HTML has
... So there was lots of discussion on procedural things, and
licensing of the registry.
... Can talk about that later.
... Took a long time for spec to come out, but has finally been
published.
<Julian> http://greenbytes.de/tech/webdav/rfc5988.html
Julian: That's a very old style: you send an email to an IETF list,
and a group of designated experts registers it or asks questions.
<Julian> http://paramsr.us/link-relation-types/
Julian: Mark has started making this more modern by, first of all,
providing a web page explaining how to register; it has a template to
help you write the registration and submits it for you to the
mailing list
<Julian> http://paramsr.us/tracker/
Julian: The designated experts now also have an issue tracker
... So people can watch how their registration requests are
progressing
... Makes the IANA process a bit more pleasant
<Julian> http://www.iana.org/assignments/link-relations/link-relations.xhtml
Julian: Here's the registry right now
... This contains link relations defined in Atom, Atom extensions,
and HTML4
... and some parts for HTML5
<Julian> https://www.ietf.org/mailman/listinfo/link-relations
hsivonen: ? has been recognized as an entity that has reasonable ?
measures in place
... It seems that the domain name is owned by Rohit Khare
... as an individual
... And whatwg.org is also owned by an individual
Julian: I'm not sure how that affects our impression of whether
microformats.org is stable or not
mjs: My biggest disappointment about the RFC is that it doesn't have
provisions for individual registrations
... It would be useful to have a central repository where all of
these can be listed so people know what's in use, even if it doesn't
have a formal spec
... I think Mark should make a provisional registry.
... Mark said the registry would be so lightweight it wouldn't be
necessary
... But that has not proven to be true.
<hsivonen> moreover, even proven to be false
Julian: We have provisional registries in other IANA things, and
nobody's used them.
<MikeSmith> http://www.ietf.org/mail-archive/web/link-relations/current/threads.html
mjs: I think if you find something that's almost never used, then
creating something with a higher barrier to entry isn't going to
increase use
Julian: People don't use provisional registries because they don't
care enough.
mjs: microformats.org list has even lower barrier to entry, and it
is used
Julian: One difference between IANA registry and wiki page is that
wiki is completely HTML focused
... So they don't consider relations among other formats other than
HTML
... They don't think about use on PDF or video
mjs: Most people invent link relations for HTML. I don't think it
makes sense to force them to address these abstract link uses that
may or may not be practical.
... It makes more sense to me to provisionally register the link
relations, and then encourage them to think about generalizing to
other formats.
hsivonen: It might be not about people not caring, but about
provisional registration being dysfunctional
... I also agree with mjs that in some cases people don't care about
nonHTML use cases. In that case we should just do HTML.
Julian: we talked about ... provisional registry [that hsivonen
mentioned] yesterday, and I totally agree this problem needs to be
investigated.
... I think we try.
... I think we should try to encourage people to think of link
relations applied to non-HTML content
mjs: I think encouragement is fine. But if encouragement fails, what
happens? Should the link relation then be undocumented because
encouragement was unsuccessful?
Julian: ... nobody's mailed a link relation and asked designated
experts to help make the link relation more generic
mjs: You've raised the barrier by trying to make it generic, the
person doesn't care about making it generic, so it ends up being
unregistered
anne: You don't need that to get it in the registry, but to get it
endorsed
hsivonen relates hixie's experience with trying to register a link
relation
hsivonen: If what hixie wrote wasn't enough, then I think we have a
problem.
Julian: My point of view was that he didn't seriously try. He wanted
to prove it didn't work.
... I don't think it will be productive to continue on this path.
mjs: When I looked at the original templates hixie submitted and
compared them to what the RFC said, I couldn't see any mechanical
procedure that determined they failed to qualify
... So it seems anyone trying to register would require multiple
email go-around
... Same problems result in failure to register MIME types and URL
schemes
MikeSmith: I have been going through the process of making requests
using the mandated procedures
<MikeSmith> http://www.ietf.org/mail-archive/web/link-relations/current/threads.html
MikeSmith: You can see there the discussions about the registry
... It does take multiple go-arounds in email for these.
... One is for some of the link relation names or types, they are
already being used in other contexts
... One of those was 'search'.
... If you look at that, it was specified somewhere else.
... Regardless of how you do this, there has to be some discussion
about what this description should say
... I don't see any way to get around that, if you have multiple ppl
wanting to define the same thing.
... Other issues were with how it's defined in the spec itself.
... 'up' is one of those. Had to go back to WG and get a resolution
for it
... as Maciej said, having to change the description of the link
relation so that it's more generic, and less about HTML
... I'm not thrilled with that.
... Don't really care about doing that at this point in the
procedure.
<hsivonen> (one of the top Google hits for the metaphor is from one
of our co-chairs:
http://intertwingly.net/blog/2005/05/11/Fetch-Me-A-Rock )
MikeSmith: I think many ppl are not going to be thrilled about
changing what they think is a perfectly reasonable description of
their use case to handle some speculative use cases
... That's always going to be a troublesome thing for someone to do
MikeSmith: In the spirit of going through the procedure and taking
it to the end to see if it ends up being something that works or not
... But I do think we have to keep open the possibility that we
decide that it doesn't work.
... I don't think it's a given that just because it's an RFC and the
registry exists, we've committed to this being how we do it.
<MikeSmith> http://www.w3.org/2005/04/xpointer-schemes/
MikeSmith: I think it's still a possibility that this isn't working
the way we would like it to work, let's try something else.
... There is something else, plh asked me to point out.
... Is the xpointer registry.
<anne> +1 to W3C doing web registries
MikeSmith: This is another way of registering something that is
similar
MikeSmith: I think the biggest ... difference between things that
have been successfully registered
... and those that are still being reviewed
... i.e. provisionally registered
... All you need to do to request a provisional registration, you
just start by typing in a name of some kind
it gives you a form asking for a description, and optionally a spec
URL
MikeSmith: This is a middle ground between a wiki page
and
<hsivonen> This looks good to me
MikeSmith: At least it's got a form-driven interface
... I think this is a good middle ground
... If the IANA registry provided a way of doing this, I think that
would be something we could agree on
Julian: IANA registry has something very similar
... The only thing is that instead of being automatically
registered, it gets sent to the email list
... If we made a provisional registration out of the submission, that
would be the same.
<Julian> http://paramsr.us/tracker/
<anne> The requirements for XPointer are first-come-first-serve
Julian: and then someone on the mailing list to the tracker page
<anne> This is not at all the case for the link registry
<anne> well, the one the IETF/IANA uses
hsivonen: How do you know the tracker issue is filed and where that
is?
Julian: You don't
Sam: Why can't you do a web-based form?
Julian: Can't do that in IANA. IANA doesn't have web-based forms.
Lives in last century.
... The form that posts to email is a compromise.
hsivonen: So why does HTMLWG/W3C want to deal with an organization
that lives in the last century
hsivonen: Instead of using xpointer registry code?
Julian: It depends on whether you think the link relations should be
synced with other formats or not
sicking: Why couldn't you let W3C do the syncing to IANA?
MikeSmith: Before PLH pointed out xpointer, I didn't know we did
registries
mjs: Sounds like building a registry along the lines of xpointer
would be a great idea
mjs: Any volunteers to do that?
... write it up as a Change Proposal?
... It's a little past deadline, but since we have new info on the
W3C registry option, would be a good thing to do
MikeSmith: Guess I should talk to plh about this.
hsivonen volunteers
MikeSmith: plh asked me to point out the open issue about Role
... We talked about it this morning. Similar potential need to have
a role registry
... plh isn't sure xpointer way is the right way to go, but wanted
us to be aware that it exists
anne: I think we should do role more centralized, because it affects
implementations directly.
hsivonen: In last meeting I asked EPUB to ask PF to set aside some
tokens for them once getting commitments from AT vendors that they
will support these roles
mjs: Other things in HTML5 might benefit from this
... e.g. <meta> names
... There was a third thing
Julian: canvas context?
mjs: Seems more like role, in that it has implementation
implications and should therefore be centralized
hsivonen: Yes. for role, e.g. you need coordination among AT vendors
and browsers etc.
... Not good to have a registry. Rare to make a new role.
... PF should be able to set that aside without a formal process.
anne: Other one is meta http-equiv, which has a different namespace
than meta name
... And canvas context, you do sorta need a place that says which
are the contexts and which are compatible with which.
... Currently all are incompatible, so not an issue now, but might
change.
hsivonen: A new canvas context is even rarer
?: Still need a list of them
mjs: No, could just be defined by the specs that define them
hsivonen: I don't see this as being a problem right now.
hsivonen: There are three canvas contexts in the world, and one is
proprietary
anne: we're removing them, 'cuz features have been added to 2d
... Might want a variant of WebGL that is compatible with 2D
... But still it's very limited
mjs: There's probably only a single-digit number of these, and
should all go through HTMLWG anyways
fantasai: For link relations, seems like the idea is to have a
provisional xpointer registry
... What about if someone wants to port a provisionally registered
link rel to IANA, for more general use?
discussion
hsivonen: Don't think we want to hijack Atom registrations
Julian: If we decide not to go with IANA registry, need to decide
whether we want to continue with registration of HTML5 link
relations in IANA
mjs: I think registering HTML5 link rels in IANA is unrelated to
progress of HTML5
... It's not a requirement for us. It just makes the IANA registry
more complete.
mjs expresses that he doesn't care whether MikeSmith finishes the
registration since it's not required for HTML5
MikeSmith: It's not a lot of work, think it makes sense to finish it
off.
mjs: what about the ones where the designated experts require
changes to the definitions
MikeSmith: filed issues on that
mjs: For us, the importance of a registry is as an extension point.
sicking: Seems to me that the best caretakers of the link registry
so far has been the microformats people
... So I want whatever solution we choose here to work for them.
mjs: Idea of using page on microformats wiki was proposed, but
nobody's written up a change proposal for that either.
... Anyone want to volunteer to write that up?
sicking: Ok, I'll do it.
mjs: So post to the mailing list and say how long it will take you?
... I think we should make an exception here, because we have new
information that will help us make a better decision
Julian: Microformats.org is not a new idea
sicking: New information is our experience with IANA
Julian: Half have gone through. A number are held up on bugs being
fixed in HTML
... Then we have to review the updated spec.
mjs: If the spec isn't updated, what happens?
Julian: We'd probably accept the registration anyway.
mjs: So why is the registration being held up?
Julian: If the description is updated in HTML5, then the IANA
registration would have to be updated multiple times.
hsivonen: Why is updating the IANA registry multiple times a problem?
Julian: I don't think it makes a big difference either way
fantasai: Then I suggest you ask the IANA registrars to finish the
registration for any link relations that will be registered with the
current text, and then update the registry when the problems they've
pointed out have been addressed with updated text.
<scribe> ACTION: Julian to Ask the IANA designated experts if this
would be an acceptable model [recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action04]
<Julian> http://www.w3.org/html/wg/tracker/issues/127
ISSUE-127
Julian: ... Means in theory the semantics of the link relation can
change depending on whether it's on <link> or <a>
<MikeSmith> trackbot, associate this channel with #html-wg
<trackbot> Sorry... I don't know anything about this channel
<trackbot> If you want to associate this channel with an existing
Tracker, please say 'trackbot, associate this channel with #channel'
(where #channel is the name of default channel for the group)
<MikeSmith> issue-127
<MikeSmith> issue-127?
<trackbot> Sorry... I don't know anything about this channel
Julian: I think the link relation should be defined the same for
both, and the usage affects details like scope
... I think the section should be revised to not imply that rel
values on <link> and <a> could be substantially different
... The IANA registry has an extension point so that each
registration can have multiple columns
<MikeSmith> issue-127?
<trackbot> Sorry... I don't know anything about this channel
<kennyluck> trackbot, associate this channel with #html-wg
<trackbot> Associating this channel with #html-wg...
Julian: That was requested by Ian
<MikeSmith> issue-127?
<trackbot> ISSUE-127 -- Simplify characterization of link types --
raised
<trackbot> http://www.w3.org/html/wg/tracker/issues/127
Julian: E.g. to have a column that says whether the linked resource
is required to be loaded, or whether it's just an informational relation
<MikeSmith> ACTION: Julian to Ask the IANA designated experts if
this would be an acceptable model [recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action05]
<trackbot> Created ACTION-196 - Ask the IANA designated experts if
this would be an acceptable model [on Julian Reschke - due
2010-11-12].
mjs: It seems that in practice the spec does what's requested, so
it's more an editorial issue
Julian: This distinction applies both to the spec and also to the
registry
... I don't think having the distinction in the registry is a good
idea.
... We don't seem to have any good cases for that.
... The observation is, we currently have a table in the spec that
has columns for effect on <link> and effect on <a> and <area>
... In this table, both are exactly the same
... except for two values, which are listed as not allowed in one
column
... And in these cases there are bugs on whether that distinction is
a good idea.
fantasai: Setting stylesheet on <a> doesn't make sense to me
mjs: 'stylesheet' and 'icon' would have no effect outside <link>,
even if we add them
Julian: ...
... We'll have to make a decision on that no matter where we put the
registry. Defining things such that it's possible for relations to
have a different definition on different elements is a bad idea.
mjs: ok
<Julian> http://www.w3.org/html/wg/tracker/issues/119
Julian: This is about the 'up' relation.
... Someone thought it would be nice to change the definition to
allow repetition of 'up'
... to e.g. have 'up up' mean grandparent
mjs: That wouldn't work very well given the DOM API for rel, which
lists unique tokens
fwiw, I agree this seems like an ill-fitting idea...
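[For illustration, not from the meeting: rel is reflected in the DOM
as relList, a DOMTokenList, i.e. an ordered set of unique tokens, so
a repeated 'up' collapses; a minimal sketch, assuming that
ordered-set behavior:
    var a = document.createElement("a");
    a.rel = "up up";
    a.relList.length;          // 1 with the ordered-set behavior (duplicates dropped)
    a.relList.contains("up");  // true
    a.rel;                     // still the literal attribute value, "up up"
]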
<Julian> http://www.w3.org/html/wg/tracker/issues/118
<anne> HTML5 says something different from HTML4?
<Julian> this is about navigational link relations that changed in
HTML5, potentially changing existing content
hsivonen: fwiw, I think we should get rid of the up up up thing.
... It won't be supported in UI very well anyway
Julian: The use case given was to build a navigation tree in the UA
... But I think there are better ways to address that use case
hsivonen: When a browser user experience team wants to implement
something, and asks for syntax for it, then we should consider it.
... but at this point it just seems a theoretical idea
... So I would propose to just drop it
Julian: I'd like to ask the chairs to bundle the timing for these
issues so they don't get too spread out
mjs: Could put them all together
... have been staggering them so you don't have to write proposals
all at once
http://fantasai.tripod.com/qref/Appendix/LinkTypes/ltdef.html
meeting closed
RRSAgent: make minutes
RRSAgent: make logs public
Testing 2
<anne> scribe: anne
MJS: Let's make a testcase in this session and submit it
... in the later half of this session
JS: I am willing to come up with a format for tests
... and write a harness
<mjs> ACTION: sicking to design a file format for describing tests,
and to write a harness that will run the automated tests [recorded
in http://www.w3.org/2010/11/04-html-wg2-minutes.html#action06]
<trackbot> Sorry, couldn't find user - sicking
<mjs> ACTION: Sicking to design a file format for describing tests,
and to write a harness that will run the automated tests [recorded
in http://www.w3.org/2010/11/04-html-wg2-minutes.html#action07]
<trackbot> Sorry, couldn't find user - Sicking
trackbot, this is HTML WG
<trackbot> Sorry, anne, I don't understand 'trackbot, this is HTML
WG'. Please refer to http://www.w3.org/2005/06/tracker/irc for
help
<dbaron> trackbot, status
<trackbot> This channel is not configured
KK: I can update the wiki
<MikeSmith> trackbot, associate this channel with #html-wg
<trackbot> Associating this channel with #html-wg...
<scribe> ACTION: kris to update the wiki [recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action08]
<trackbot> Created ACTION-199 - Update the wiki [on Kris Krueger -
due 2010-11-12].
<scribe> ACTION: Sicking to design a file format for describing
tests, and to write a harness that will run the automated tests
[recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action09]
<trackbot> Sorry, couldn't find user - Sicking
<scribe> ACTION: jonas to design a file format for describing tests,
and to write a harness that will run the automated tests [recorded
in http://www.w3.org/2010/11/04-html-wg2-minutes.html#action10]
<trackbot> Sorry, couldn't find user - jonas
<sicking> gaah, i don't exist
<sicking> i irc, therefore i exist
KK: What about XSS issues?
PLH: I agree we cannot solve the XSS issues
... My goal is that we do not set up services on these domains
... so there is no problem, effectively
AVK: as long as w3.org does not use document.domain we are fine,
otherwise it might be safer to use w3test.org
MJS: There might be a problem in the future; everything should be
safe if we do not use a subdomain
JS: I have an idea for non-automatable tests, but we can discuss
that later
... The way I would like us to do new things is write tests in the
new format if it is compatible with our features
MJS: We have a requirement for landing new features and we could
require them to be written in the HTML format
AvK: We have used this format successfully already
... e.g. for server-sent events and XMLHttpRequest
MJS: one thing we might need to do is identify features in the
specification which are not new but still need tests
... there is an HTML4 test suite
AvK: I do not think we should start from that
[people agree]
HS: How does updating work?
JS: We will have to figure it out
HS: for html5lib, WebKit lands first in WebKit; I land first in
html5lib
[HS implements for Gecko]
SW: We are not opposed to change
Pushing policy
AvK: I think if the test contributor is known the tests should just
get in
JS: I do not agree, I think we should have a staging area
KK: I think so too
MJS: I think it makes more sense that the testing in browsers
happens later and that tests should get in automatically
[scribe misses out on discussing Mozilla specifics]
staging area
KK: Basically you have a set of tests, and wait for them to be
approved
MJS: What do you want the approver to actually do?
KK: cursory review
AB: I think it might be worth having an almost automatic approval
process
... for tests that pass in multiple user agents
MJS: why does there need to be this approval step? it will happen in
distributed form anyway
AB: to increase the level of quality
MJS: it does not seem to happen now
AvK: agreed
DB: I am not sure that an approval process is good for known
contributors
MJS: It seems like a waste of people's time to require them to
manually run the tests in every browser before a test is approved
... there will also be cases that fail in all browsers
DB: it seems you want a staging area because you want a known good
set of tests
... an alternative approach is to ship a release, rather than delay
on trunk
HS: not having a lot of process helped html5lib to move forward
faster
MJS: with a release you know it does not get worse
KK: the idea of 'approved' is that it is done
AvK: so far that has not worked I think
MJS: I think you will always get more tests, and with releases you
know the delta and can review whether that is OK, as you already know
the previous release was OK
[something about multiple vendors contributing tests being awesome]
MJS: problematic tests can be removed from the release
<hsivonen> fantasai: Microsoft tests a lot of value combinations.
Mozilla tests tricky edge cases.
<fantasai> fantasai: Different vendors take different approaches to
testing, and thereby cover different aspects of the features.
<fantasai> fantasai: By putting them together you get a more
comprehensive test suite
JS: if the release process does not work we can revise it
KK: I like to lock things down
DB: if browsers import the tests they will report the problems more
quickly
KK: in the current model the test can be pulled right away
[mercurial haz magic]
JS: If I find something wrong, should I fix the test and mail the
list?
KK: currently mail the list
... and open a bug
MJS: I think people who report the bug should be allowed to fix the
test
AvK: you want to optimize for the case that is most common, and most
commonly the bug reporter will be correct, I think
DB: you should notify the person who wrote the test
JS: I am fine with attaching patches to bugs
http://www.w3.org/html/wg/wiki/Testing
<plh> -->
http://lists.w3.org/Archives/Public/public-html-testsuite/2010Feb/0014.html
Mercurial server
<dbaron> hg clone http://dvcs.w3.org/hg/html/
http://tc.labs.opera.com/apis/EventSource/eventsource-close.htm
is an example of a test following the non-written guidelines
<dbaron> default-push = https://[USERNAME]@dvcs.w3.org/hg/html/
<dbaron> is a line that you'd want to add to .hg/hgrc after:
<dbaron> [paths]
<dbaron> default = http://dvcs.w3.org/hg/html/
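[Putting dbaron's lines together, the resulting .hg/hgrc of a clone
would look roughly like this, with [USERNAME] left as a placeholder:
    [paths]
    default = http://dvcs.w3.org/hg/html/
    default-push = https://[USERNAME]@dvcs.w3.org/hg/html/
]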
http://annevankesteren.nl/2010/08/w3c-mercurial
<hsivonen> let's make one of these:
http://ted.mielczarek.org/code/mozilla/mochitest-maker/
<hsivonen> that is, we should have a tool like that for the W3C
harness
<krisk> see http://test.w3.org/html/tests/
<hsivonen> I'm already annoyed by having to wrap stuff in test()
<hsivonen> so I can't do ok(false, "FAIL!"); in scripts that aren't
supposed to run
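[For illustration, not from the meeting: in the harness style being
referred to, assertions have to run inside a test() callback rather
than at the top level of a script; a minimal sketch, assuming
testharness.js (and testharnessreport.js) are loaded, with a trivial
assertion chosen only as an example:
    test(function() {
      assert_equals(document.createElement("P").localName, "p",
                    "createElement() lowercases the name in HTML documents");
    }, "createElement() lowercases element names");
]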
<plh> ACTION: Kris to add reftest handling in the test harness
[recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action11]
<trackbot> Created ACTION-200 - Add reftest handling in the test
harness [on Kris Krueger - due 2010-11-12].
<krisk>
http://test.w3.org/html/tests/approved/getElementsByClassName/001.htm
uses a relative path
<hsivonen> https://developer.mozilla.org/en/Mercurial_Queues
<hsivonen> you'll really want to use MQ
Media Queries ftw
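[For readers new to MQ, a typical Mercurial Queues session looks
roughly like this; the patch name is only an example:
    hg qnew add-eventsource-tests.patch   # start a new patch
    (edit or add test files)
    hg qrefresh                           # fold working-copy changes into the patch
    hg qpop / hg qpush                    # unapply / reapply patches while iterating
    hg qfinish --applied                  # turn applied patches into normal changesets
]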
<krisk> http://www.w3.org/html/wg/wiki/Testing
<weinig> sicking: http://www.w3.org/html/wg/wiki/Testing
<plh> a reftest:
http://test.w3.org/html/tests/submission/W3C/bidi-markup-export/html5-html/
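[For context, not from the meeting: a reftest is a pair of pages that
must render identically, one exercising the feature under test and
one reaching the same rendering without it; in the Mozilla-style
manifest format the pairing is expressed as a line like the
following, with hypothetical file names:
    == bidi-markup-001.htm bidi-markup-001-ref.htm
]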
<dbaron> trackbot, associate this channel with #html-wg
<trackbot> Associating this channel with #html-wg...
Summary of Action Items
[NEW] ACTION: Anne to give Alexey info about registry problems
[recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action02]
[NEW] ACTION: Henri to give feedback to Larry on MIME etc draft
[recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action01]
[NEW] ACTION: jonas to design a file format for describing tests,
and to write a harness that will run the automated tests [recorded
in http://www.w3.org/2010/11/04-html-wg2-minutes.html#action10]
[NEW] ACTION: Julian to Ask the IANA designated experts if this
would be an acceptable model [recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action04]
[NEW] ACTION: Julian to Ask the IANA designated experts if this
would be an acceptable model [recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action05]
[NEW] ACTION: Kris to add reftest handling in the test harness
[recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action11]
[NEW] ACTION: kris to update the wiki [recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action08]
[NEW] ACTION: plh to work with systeam to make sure we keep track of
hg push [recorded in
http://www.w3.org/2010/11/04-html-wg2-minutes.html#action03]
[NEW] ACTION: sicking to design a file format for describing tests,
and to write a harness that will run the automated tests [recorded
in http://www.w3.org/2010/11/04-html-wg2-minutes.html#action06]
[NEW] ACTION: Sicking to design a file format for describing tests,
and to write a harness that will run the automated tests [recorded
in http://www.w3.org/2010/11/04-html-wg2-minutes.html#action07]
[NEW] ACTION: Sicking to design a file format for describing tests,
and to write a harness that will run the automated tests [recorded
in http://www.w3.org/2010/11/04-html-wg2-minutes.html#action09]
--
Michael(tm) Smith
http://people.w3.org/mike
Received on Tuesday, 9 November 2010 22:26:52 UTC