W3C home > Mailing lists > Public > www-tag@w3.org > April 2013

Draft Minutes of 18-20 March TAG F2F now available for review

From: Noah Mendelsohn <nrm@arcanedomain.com>
Date: Thu, 11 Apr 2013 16:26:20 -0400
Message-ID: <51671C6C.3000001@arcanedomain.com>
To: "www-tag@w3.org" <www-tag@w3.org>, public-tag-announce@w3.org
The complete draft minutes for the TAG's 18-20 March 2013 F2F are now 
available for review, and are linked from the agenda at [1]. The full text 
is also copied below.

We will vote to approve these as a true record next week during our 
teleconference. Thank you.


[1] http://www.w3.org/2001/tag/2013/03/18-agenda




                                - DRAFT -

            Technical Architecture Group Face-to-Face Meeting

18 Mar 2013

    See also: [2]IRC log

       [2] http://www.w3.org/2013/03/18-tagmem-irc


    Present:
           Yehuda Katz, Anne van Kesteren, Yves Lafon, Peter Linss,
           Ashok Malhotra, Jeni Tennison, Noah Mendelsohn, Marcos
           Caceres (afternoon only), Larry Masinter (phone), Henry
           Thompson (phone), Alex Russell (phone)


    Chair:
           Noah Mendelsohn

    Scribe:
           JeniT, annevk


      * [3]Topics
          1. [4]Introductions
          2. [5]Brief Orientation
          3. [6]Outreach to developer community
          4. [7]Layering
          5. [8]Fragment Identifiers (aka hash, fragids)
          6. [9]ISSUE-24 Authoritative Metadata
          7. [10]TAG Orientation & scheduling F2F
          8. [11]F2F Scheduling
          9. [12]Coordination with ECMA TC39
      * [13]Summary of Action Items

Please note the following IRC handles and nicknames used in these
minutes:

    Handle        TAG Member
    slightlyoff   Alex Russell
    torgo         Dan Appelquist (former TAG member)
    wycats        Yehuda Katz

Introductions


    wycats: jQuery foundation, web developer since 2003

    annevk: elected as individual, now at Mozilla
    ... was at Opera
    ... worked on selectors API, CSS namespaces, DOM, URLs, Fetch

    wycats: works on TC39, module system, gathering use cases
    ... day job working on libraries

    Yves: TAG team contact, and for WebApps
    ... worked on HTTP 1.1 & HTTPbis, CSS (esp validator)
    ... also web services

    plinss: HP, CSS WG, full-time standards, was Netscape's rep
    ... started the CSS namespaces spec for annevk

    <annevk> Yves, if it was up to me we'd remove the whole thing,
    but it was already too late :/

    ashok: Oracle, XML Schema, XQuery, web services, linked data
    ... trying to standardise property graphs

    JeniT: works for ODI, worked on XML Processing, Data & HTML,
    fragment identifiers

    Larry: Adobe, LISP, networked-mediated communication
    ... URIs, URNs, HTTP 1.0 & related protocols
    ... HTML, when it was at IETF
    ... on advisory board for W3C & contributed to TAG charter
    ... in IETF, URI schemes

    <Larry> [14]http://larry.masinter.net

      [14] http://larry.masinter.net/

    noah: For much of my career was officially at IBM, ... but while
    there also spent time at Stanford, MIT, etc. ... split between
    industry & academia, went to Lotus
    ... work with IBM & Sun on JavaBeans
    ... contributed to XML & SOAP specs at W3C
    ... hard to do clean work in large standards arena
    ... appointed by Tim to the TAG in late 2004, then as chair
    ... several years ago.
    ... nice tradition of having diverse opinions on the TAG
    ... (Note that Tim has often appointed people who disagree with
    him.)
    ... now at Tufts teaching computer science

Brief Orientation

    noah: agenda fluid, except that ht & jar only here after lunch
    tomorrow & Jeff at 11am tomorrow
    ... big things we will come back to on Wed
    ... we represent the whole community, not ourselves
    ... or our employers. ... should be aware of charter

    <annevk> [15]http://www.w3.org/2004/10/27-tag-charter.html

      [15] http://www.w3.org/2004/10/27-tag-charter.html

    <annevk> charter ^^

    noah: From charter:
    ... 1. document & build consensus around web architecture & to
    interpret & clarify principles where necessary
    ... help community agree on what matters & how pieces fit
    ... 2. resolve issues involving architectural principles
    ... both brought to the TAG & proactively
    ... 3. coordinate cross-tech architecture developments inside &
    outside W3C


      [16] http://www.w3.org/2004/10/27-tag-charter.html#Mission

    noah: we're not officially in the loop on spec approval
    ... will usually go direct to WGs about things we disagree with
    ... TimBL has ultimate say as Director
    ... one of our jobs is to be available to advise him on
    occasions when he has to make a decision

    wycats: when we ran, slightlyoff & I interpreted "web
    architecture" to include architecture of web platform
    ... it usually means how documents interlink etc
    ... there's more recent developments around how platform works
    ... there isn't a lot of architecture of the platform itself

    noah: we've had several efforts, eg torgo's work on API

    annevk: (to wycats) is your question about whether it means
    HTTP etc?

    wycats: I'm asking about whether it means architecture of
    documents & their interactions

    noah: web has grown from web of documents to web of other
    things, including active documents (ie scripts)
    ... we did try to rewrite web arch around web applications

    <Larry> how the web works: what's a server, client, using HTTP,
    separation of markup and style, the role of javascript. the
    'architecture' of something describes the pieces and names the
    interfaces for how they're put together

    wycats: there's architecture of that web that is out there, and
    architecture of platform

    noah: can we delay until later in the F2F?
    ... the community builds specs, web built on them
    ... the pieces should fit together so that the web has a set of
    characteristics such as scalability, internationalisation etc
    ... I want us to remember that our remit is broad
    ... eg whether we can use URIs as if they mean something in 100
    years, something the library community is concerned about
    ... probably not so relevant to browser community
    ... my job to smooth things along & make sure we deliver

    annevk: how important is the charter?
    ... eg it talks about using XML in a way that's no longer
    current

    noah: not legalistic about it, but need to follow spirit

Outreach to developer community


      [17] http://www.w3.org/2001/tag/2013/03/18-agenda#developersdevelopers

    wycats: we'll have more luck outreaching to developers if what
    we do is stuff that affects their day-to-day lives

    plinss: I have a proposal: there's the webplatform.org project
    ... maybe we could have an architecture section on that site

    <annevk> [18]http://www.webplatform.org/

      [18] http://www.webplatform.org/

    <Larry> +1 to peter

    noah: we have tried to do that previously, we assigned actions
    to do similar on the W3C site
    ... almost never did them
    ... is anyone interested in doing that?

    <Larry> w3conf and webplatform.org represent new, significant
    addition of resources to build and maintain

    <Larry> that's what's different

    wycats: I'm not sure web developers would find value from it

    noah: when I teach, I like to point people at those things

    wycats: specifically, webplatform.org is not a good platform
    for that

    plinss: I think it's evolving and it's trying to be that portal
    ... if nothing else, some high-level overviews

    wycats: like a list of concepts? here's what a URI is?

    <Larry> i think this is an area where tag members are out of
    touch, webplatform.org is just starting, hasn't been filled out

    plinss: yes, and how to use it, just broad strokes

    ashok: could you extract it from the web arch document?

    plinss: yes

    wycats: my sense is that if we want to educate developers about
    web arch
    ... they see it as it actually works, eg POST can be used to
    delete resources
    ... danger of telling people how it works when it is out of
    touch with reality

    plinss: yes, you have to be pragmatic
    ... eg the hashbang thing is a hack
    ... we should document it, say it's a hack, show the way
    forward to the right way

    wycats: yes, say that the architecture of the web *right now*
    involves the hack

    plinss: yes, and show the migration path
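
    A minimal sketch of the migration path plinss describes, with a
    helper name of my own invention: hashbang ("#!") URLs were the
    hack, and the History API's pushState is what makes the clean
    form below usable.

```javascript
// Hypothetical helper: map a hashbang URL ("#!"-style routing, the
// hack under discussion) to the equivalent clean path that History
// API navigation (history.pushState) makes possible.
function hashbangToPath(url) {
  const i = url.indexOf("#!");
  if (i === -1) return url;                        // already a clean URL
  const base = url.slice(0, i).replace(/\/$/, ""); // drop trailing slash
  return base + url.slice(i + 2);                  // keep route, drop "#!"
}

// e.g. hashbangToPath("http://example.com/#!/photos/1")
//      gives "http://example.com/photos/1"
```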

    <Zakim> noah, you wanted to talk about interaction vs. one way

    wycats: calling out the things that work today but that are
    hacks
    ... particularly useful to know what bits are hacks

    noah: right relationship with developers is a discussion rather
    than one way
    ... webplatform.org isn't right because it's one way
    ... been at our best when we've worked with others
    ... good tension between short-term focus & long-term view
    ... architecture is about taking long-term view
    ... it's really hard because you don't get the immediate
    payoff
    ... good architecture usually boils down to use cases; there
    should be examples about what breaks

    <wycats__> it turned out that it has poor adoption

    annevk: important to understand why the confusion happens
    ... eg a elements don't have way of specifying method
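
    The confusion annevk points at can be sketched via the common
    workaround: since links and forms only GET or POST, frameworks
    such as Rails tunnel the intended verb in a "_method" parameter.
    The function name and parameter shape below are illustrative, not
    any particular framework's API.

```javascript
// Sketch of server-side method-override resolution: a POST carrying
// a "_method" parameter is treated as the tunnelled verb.
function effectiveMethod(httpMethod, params) {
  const override = typeof params._method === "string"
    ? params._method.toUpperCase()
    : null;
  const allowed = ["PUT", "PATCH", "DELETE"];
  if (httpMethod === "POST" && allowed.includes(override)) {
    return override;   // honour the tunnelled verb
  }
  return httpMethod;   // anything else passes through untouched
}
```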

    noah: need to be asking the question "are we doing anything
    we'll regret in 10 years"

    <Larry> i think talking about GET vs POST belongs to the HTTP
    working group and that it was a waste of TAG time to talk about
    it

    wycats: bringing it back to topic of developer outreach
    ... think there's high leverage in reaching out to platform
    developers (eg Rails, jQuery)
    ... to make it easy for developers to do the right thing
    ... eg Rails team very interested in getting guidance on REST
    ... we still have to guess sometimes
    ... target the tools that people are using to build the sites
    ... this is about adoption characteristics, don't have to
    address everyone in the universe

    <Larry> best instance of successful developer outreach was the
    [19]W3Conf

      [19] http://www.w3.org/conf/

    noah: TimBL's design notes are a great resource, something TAG
    has done is write them up

    wycats: it's not well known that eg the web arch document
    exists

    <Zakim> Larry, you wanted to give a different description of
    architecture

    Larry: architecture is about what pieces there are and how they
    connect together
    ... eg markup, style, scripting, client-server protocol
    ... principles are guidelines for how to use the architecture,
    or misuse it, things you should & shouldn't do
    ... as a result of telling developers about the fundamentals
    ... someone new to the web needs to understand how the pieces
    fit together
    ... GET vs POST is at too low a level, not a great use of TAG
    time
    ... architecture has changed significantly since Tim designed
    it, because of introduction of AJAX, scripting paradigm
    ... HTML now an API with a little bit of parsing

    slightlyoff: worth understanding economics that constituencies
    find themselves in
    ... touched on that around current constraints vs futures
    ... people are publishing content for parochial reasons
    ... want to create value for a set of users
    ... architecture enables or disables them from doing that
    ... want to maximise benefits & minimise costs
    ... have to be informed by what people are trying to get done
    ... second point is that AJAX isn't a completely different
    architecture
    ... means we have to recognise imperative layer

    <Larry> "origin" and CORS need to be part of webarch

    <annevk> Larry, working on it:

      [20] http://fetch.spec.whatwg.org/

    <Larry> annevk, "part of webarch"

    <Larry> i don't mean "specified in webarch"

    <annevk> Larry, fair enough

    <Zakim> noah, you wanted to say why ajax does change things

    noah: on AJAX, we have found places where it changes things
    ... eg one principle is identifying things with URIs
    ... question about "what do you identify using an AJAX app"?
    ... eg states in a game
    ... if it's something that would have been done using the
    normal web architecture, still want to use URIs
    ... same with web storage
    ... still need to identify these things with URIs
    ... these are architectural questions
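
    The principle noah describes (give interesting states of a
    script-driven app their own URIs) can be sketched as a
    round-trippable encoding of state into a query string; the helper
    names are mine, not anything from the discussion.

```javascript
// Serialise app state (e.g. a position in a game) into a bookmarkable
// URI, and recover the state from such a URI. Keys are sorted so
// equal states always yield the same URI.
function stateToUri(base, state) {
  const query = Object.keys(state).sort()
    .map(k => encodeURIComponent(k) + "=" + encodeURIComponent(state[k]))
    .join("&");
  return query ? base + "?" + query : base;
}

function uriToState(uri) {
  const q = uri.split("?")[1] || "";
  const state = {};
  for (const pair of q.split("&").filter(Boolean)) {
    const [k, v] = pair.split("=");
    state[decodeURIComponent(k)] = decodeURIComponent(v);
  }
  return state;
}
```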

    annevk: when would you not use HTTP requests?

    noah: some of the web sockets stuff for example

    <Larry> i mean that there's no place in the current webarch to
    talk about 'origin'

    <Larry> i think URIs are less important to webarch than they were
    10 years ago

    wycats: one way to address the concern around local storage is
    to ask how can we make it really easy to use URIs for that

    noah: not everyone agrees that's even what we should be aiming
    for

    wycats: naming things with URIs is a fundamental part of the
    web

    noah: let's have a proper session on URIs, AJAX etc later

    slightlyoff: wycats, annevk & I have done work on this we could
    share

    wycats: this is something that developers do understand, but
    they don't understand what it means to have a URI for an AJAX
    app
    ... so this is something where we could have a good impact

    <Larry> i think "is it OK?" formulation isn't useful here

    <slightlyoff> it IS a capability system

    <Larry> noah: people use URIs for capabilities. it's part of
    how things work

    <Larry> secrecy is measured in time-to-expiration

    <Larry> uris can be secret for a little while

    <Larry> time-to-compromise

    <Larry> some really well encrypted channels have a
    time-to-compromise of decades (some people believe centuries,
    but i don't believe that)

    <Larry> webarch also needs security properties everywhere



      [21] http://www.w3.org/2001/tag/2013/03/18-agenda#layering

    <wycats__> Here is the deck: [22]http://cl.ly/3P3K3C3E422C

      [22] http://cl.ly/3P3K3C3E422C

    <slightlyoff> heh

    wycats: trying to unpack what we meant by "Layering" when we
    ran for TAG
    ... watched TimBL's TED talks
    ... wanted to talk about how AJAX stuff links up with linked
    data & open data
    ... first documents were hand-written documents, like we post
    via CVS
    ... the data is the same as the markup, no translation layer
    ... people realised they didn't want to have to write
    everything by hand, so started to separate out data and
    template, combined to create document
    ... this obscures some of the raw data
    ... as JS came into the frame, even the document provided via
    HTTP doesn't include the content
    ... now need to run JS to get the "content" of the document
    ... less and less of the document is content, more and more
    generated by JS
    ... other side of this is that it's the *data* that is
    published by the servers
    ... the semantic content is exposed
    ... JS is about displaying that semantic content to the user
    ... going to show discourse, forum software built as a JS app
    ... URL friendly, but all the communication is done via APIs
    ... downloadable page is nothing (nothing in HTML)
    ... rich semantic JSON sent to the client, devoid of display
    ... other end of the spectrum from markup is data, end up doing
    API first development
    ... should be excited about this development if you like linked
    data
    ... we shouldn't be scared of JS
    ... "Where We Are Today"
    ... people writing specs are writing implementations, think in
    terms of implementation
    ... platform capabilities are exposed via markup & DOM bindings
    (JS code)
    ... markup mapped to DOM, JS code interacts with it, display
    doesn't link with that JS bindings
    ... JS bindings & rendering don't interact
    ... eg simple form for POSTing info about people
    ... the HTML spec defines how the input fields are displayed
    ... one big case statement
    ... want to add something new, have to add a new case
    ... problem is that if you want to build your own controls, eg
    date picker
    ... algorithm doesn't delegate to you
    ... specification mirrors implementation rather than
    well-designed architecture

    <slightlyoff> indeed, serialization is pluggable inside most
    toolkits

    timbl: you're making assumption that you want to burrow all the
    way down
    ... even if it was implemented in JS it can be an architectural
    feature that you can't control everything

    <slightlyoff> nobody's arguing against the value of standards

    <slightlyoff> at least not me or wycats__ ;-)

    wycats: the answer isn't that "everything is JS"
    ... custom controls in jQuery
    ... create some hidden markup & use script to make it behave
    the way you want to
    ... serialize() in jQuery, to create form submission
    ... need to write lots of imperative code to hack around the
    constrained browser capabilities
    ... people end up writing the whole browser in JS

    <slightlyoff> other toolkits do exactly this

    <slightlyoff> (i've written this code multiple times)

    <slightlyoff> I also wrote this code in Dojo

    <slightlyoff> (the serialization system)

    <slightlyoff> Closure has the same split

    wycats: "Appeal to Magic"
    ... form serialisation is the exact implementation in C++
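
    What the slide calls an appeal to magic (form serialisation
    living only in C++) is easy to state in script. A rough sketch of
    the rule jQuery's serialize() and the browser's internal
    algorithm apply, with plain objects standing in for form fields:

```javascript
// Approximation of form serialisation: enabled fields that have a
// name become URL-encoded name=value pairs joined by "&".
function serializeFields(fields) {
  return fields
    .filter(f => f.name && !f.disabled)
    .map(f => encodeURIComponent(f.name) + "=" + encodeURIComponent(f.value))
    .join("&");
}
```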

    <Larry> is SVG as its own content-type part of webarch?

    wycats: people understand the core value propositions of the
    web, work hard to implement them in script
    ... people don't do everything in canvas, but write lots of JS
    to emulate stuff that's part of the core web platform

    <slightlyoff> once again, I can attest to this from the
    perspective of Google properties


      [23] http://meta.discourse.org/

    wycats: URLs update as you scroll down the page

    noah: this is a great example of what we've been advocating

    wycats: the amount of JS necessary to do this is 962kb
    ... would prefer to hook into primitives of the platform to get
    this to work
    ... people discontented with having to write so much JS to do
    what should be built-in
    ... "so close and yet so far"
    ... in Rails we have something that reimplements browser
    navigation using XHR
    ... to get better performance
    ... end up doing crazy hacks to augment what the platform is
    doing

    <timbl> [24]https://github.com/chad/turbulence

      [24] https://github.com/chad/turbulence

    wycats: users are frustrated when emulated layers don't work
    ... examples of twitter & Facebook, falling back from emulation
    ... big picture of twitter going to native web pages is that
    they are giving up on good user experience
    ... twitter/Facebook say "if you want to have a good user
    experience use a native app"

    <slightlyoff> put another way: taking control, today, means
    taking *everything* under your JS-driven roof

    <slightlyoff> tweetdeck is still native on their native

    <Larry> tweetdeck was Adobe AIR

    <slightlyoff> (iOS, Android)

    <slightlyoff> it's HTML5 on web.tweetdeck.com and their Chrome
    app

    <Larry> [25]http://en.wikipedia.org/wiki/TweetDeck

      [25] http://en.wikipedia.org/wiki/TweetDeck



    timbl: one of the things here is page load time, comparing that
    to a local application is cheating
    ... need to install the app locally to get the speed

    wycats: they were using caching etc, so trying, but you still
    have first page load hit for people finding tweets via google
    ... web is a more casual browsing experience than the
    installation of applications
    ... do not expect to have to install when they hit pages

    annevk: where does twitter advocate using native apps?

    wycats: not in this blog post

    annevk: how do we get both the speed and the user experience?

    wycats: I'm getting there

    noah: two problems: having to download loads of JS & having to
    write loads of JS

    wycats: reasonable to have a more installable model
    ... size is a particular issue in mobile & outside the developed
    world
    ... shouldn't underestimate cost of malaise, result is less
    engaging apps

    slightlyoff: speaking from Google experience, eg desktop Gmail
    on the web
    ... something like a megabyte of JS, mostly stuff the web
    should be better at
    ... spent an enormous amount of effort to reduce load time &
    impact of the wait
    ... the constraints are the same even for massive organisations
    like Google
    ... can't do as much as a native app can

    <Larry> does TAG want to take on web performance? takes
    optimizing rendering, download, network latency, javascript
    performance. "Velocity" conference

    wycats: looking at popular apps built using
    ... apps are around 700k

    slightlyoff: limits are much lower on mobile devices

    wycats: "Turing Escape Hatch"
    ... people ask for primitives
    ... example of mouse vs touchpad
    ... eg tapping on iPad vs with mouse --- no :active on div

    noah: this is about the mapping that browsers have chosen on iPad

    wycats: can add .active rather than using :active to hack
    around it
    ... :active applies when "the element is being activated by the
    user"
    ... this is appeal to magic
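
    The hack wycats describes (toggling a real class because :active
    doesn't fire for touch on some platforms) looks roughly like
    this; the element here is a minimal stand-in object, not a real
    DOM node.

```javascript
// Minimal stand-in for a DOM element: enough to register handlers,
// fire events, and hold a class set.
function fakeElement() {
  const handlers = {};
  return {
    classes: new Set(),
    on(type, fn) { handlers[type] = fn; },
    emit(type) { if (handlers[type]) handlers[type](); },
  };
}

// The workaround: emulate :active by adding/removing an "active"
// class on touch events, which CSS can then target as .active.
function makeTappable(el) {
  el.on("touchstart", () => el.classes.add("active"));
  el.on("touchend", () => el.classes.delete("active"));
}
```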

    annevk: we have talked about how to spec this
    ... complexities around scrolling
    ... these are HTML4-era specs

    wycats: even in more modern specs there are huge failures in
    layering
    ... there's a theory of layering that leads us to do the right
    thing

    noah: I assume that these specs were written when the impact of
    iPads etc wasn't clear
    ... could people have gotten this right?

    <slightlyoff> yes

    wycats: there should be some JS property somewhere that says
    that an element is active, so you can call it

    slightlyoff: in the implementation there is an imperative
    version of this declarative form
    ... the question is how exposed is that declarative form to the
    imperative world
    ... there's going to be a translation somewhere
    ... it's straight-forward to say that you need to add the
    imperative form at some point

    timbl: by explanation, you mean you must be able to reroute the

    wycats: you must be able to not appeal to C++

    noah: you need to be able to keep some of it, need to choose
    where you have the layer
    ... end up with huge UI frameworks, where the purpose is to
    theme etc

    <annevk> (I think the main problem is that we don't even
    understand, in documented manner, what the actual user
    interaction model is. User agents reverse engineer this from
    each other at the moment.)

    wycats: platform built around C++ DOM
    ... "Fundamental Theorem of the Platform"
    ... "Markup begets JavaScript objects via a parser"
    ... should be theorem that explains what the platform is doing
    that helps us define declarative/imperative split
    ... stack of markup, JS runtime, DOM standard library,
    ... rendering defined in terms of platform primitives eg canvas
    ... this matches web developers' mental model
    ... gives us reasonable hook
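
    The "theorem" — markup begets JavaScript objects via a parser —
    in toy form; this regex-based sketch handles only a single flat
    element and is purely illustrative, nothing like a real HTML
    parser.

```javascript
// Toy illustration: turn one simple element into the plain JS
// object that script-side code would see.
function parseElement(markup) {
  const m = /^<([a-zA-Z][\w-]*)((?:\s+[\w-]+="[^"]*")*)\s*>([^<]*)<\/\1>$/
    .exec(markup.trim());
  if (!m) throw new Error("unsupported markup: " + markup);
  const [, tagName, attrText, textContent] = m;
  const attributes = {};
  for (const [, name, value] of attrText.matchAll(/([\w-]+)="([^"]*)"/g)) {
    attributes[name] = value;
  }
  return { tagName, attributes, textContent };
}
```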

    annevk: this doesn't allow for asynchronous selector matching

    wycats: this isn't getting 100% perfect, just getting magic
    part smaller

    slightlyoff: creating a competitive platform, we have technical
    question of accomplishing most value with least effort
    ... not about replacing everything, about doing archeology &
    explaining in terms of more primitive APIs
    ... some might not be exposed, but this is a powerful exercise
    in creating generative platform

    noah: in these architectures, one thing you might want to do is
    encourage people to do the XML thing and create lots of new
    tags

    wycats: yes, that's what I'm getting to
    ... "Path for Natural Platform Evolution"
    ... we don't want people to write everything using these
    primitive forms eg using canvas
    ... provide some markup, people write imperative extensions,
    new "slang" markup, broad acceptance
    ... we want to allow people to experiment with new tags
    ... if they become acceptable, they get rolled into the
    platform
    ... provide a mechanism for evolutionary development that
    doesn't rely on Hixie
    ... can look at pages on internet to see what's being used

    <slightlyoff> more to the point, we can't prevent it

    <slightlyoff> and haven't so far

    <slightlyoff> here's a preview of what that research might look
    like: [27]http://meaningless-stats.appspot.com/global

      [27] http://meaningless-stats.appspot.com/global

    <slightlyoff> [28]https://github.com/slightlyoff/meaningless

      [28] https://github.com/slightlyoff/meaningless
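
    The kind of survey the meaningless-stats experiment performs can
    be sketched as counting nonstandard tag names; KNOWN below is a
    tiny illustrative subset of the real HTML element list, and the
    function name is my own.

```javascript
// Count "slang" (nonstandard) tags seen in crawled pages, to measure
// which custom markup is gaining the broad acceptance described above.
const KNOWN = new Set([
  "html", "head", "body", "div", "span", "a", "p", "form", "input",
]);

function slangTagCounts(tagNames) {
  const counts = {};
  for (const raw of tagNames) {
    const tag = raw.toLowerCase();
    if (!KNOWN.has(tag)) counts[tag] = (counts[tag] || 0) + 1;
  }
  return counts;
}
```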

    noah: would you encourage people to experiment with tags that
    might become part of the platform, where's the cut-off about
    what you can use

    timbl: what if you have a serious community eg MathML & Geo
    ... broad acceptance in their communities, but not across the
    whole web
    ... what's the world in which there are pieces of these?

    wycats: I would imagine they would create their own extensions

    timbl: would it be a browser extension?

    wycats: can imagine in different levels of broad acceptance
    leading to different levels in browser support

    timbl: I'm particularly interested in things that are
    completely accepted in a small community

    wycats: the question is at what point it gets pulled into C++

    <noah> I mostly like where Yehuda is going, but I'm nervous
    about the ability to (a) encourage communities doing things
    like MathML to standardize while (b) telling dentists that a
    <jaw> tag is not what this is about. One person's horizontal
    can be another's vertical.

    timbl: there are lots of communities doing exciting things
    ... like the geospatial folks, but they will not ever get into
    every browser

    <slightlyoff> the point is we're not doing this with data today

    <slightlyoff> we're doing conjecture, not science

    <Larry> "chemical ml" and "math ml" are good examples

    wycats: they could just have a JS library
    ... or they could have a browser extension that would make
    available & cache the JS library

    timbl: and maybe reimplemented in C++

    <noah> This might well lead us back to an XML-like
    architecture, albeit with dynamic Javascript support (which is
    cool), but without namespaces or a distributed extensibility
    model
    <noah> I might agree that living without something like
    namespaces is OK for things that may become core to the
    platform, but when you get too close to vertical (dentists), you
    really need some way to keep names straight, and to enable

    slightlyoff: I posted a reporting site for a Chrome extension
    to start looking at the ad-hoc semantics in the web at large
    ... we haven't stopped anyone adding semantics to HTML
    ... we've made it difficult enough that people don't agree on
    how to express them
    ... the goal is not to repeat the perceived mistakes of XML
    ... it's to generate the visual/content meaning from the markup
    ... we don't have today a good way to tie the creation of
    end-user value to new declarative forms
    ... hard to track use of declarative form because people use
    imperative form
    ... give tools to start declarative
    ... can then start to do science on the web to detect patterns
    ... at the moment we can't do this because it's hidden in the
    imperative code

    <slightlyoff> this markup system, btw, is my work from Web

    <Larry> JSON is a "declarative form"

    wycats: an Ember template looks like a minified JS function,
    for example
    ... final example is different from markup
    ... offline downloadable case
    ... step one was App Cache
    ... Apple used this at beginning of iOS
    ... App Cache declarative form with no imperative escape hatch
    ... Hixie built declarative form
    ... large number of workshops to fix what's broken with App
    Cache
    ... Hixie wasn't interested in implementing it
    ... platform already has capability, but because it was drafted
    wrong, we're not able to use it
    ... if there's no escape hatch we can't easily fix/hack around
    it

    <Larry> public-fixing-appcache@w3.org

    <Larry> [29]http://www.w3.org/community/fixing-appcache/

      [29] http://www.w3.org/community/fixing-appcache/



    <slightlyoff> you weld it shut if your view is that the
    platform is a hermetic thing without layering

    wycats: need platform features that are broadly useful rather
    than targeted on specific features
    ... "Navigation Controller"
    ... questions when you write an offline app
    ... what happens when they first load the page, first time,
    subsequent times, how does the cache work?
    ... Navigation Controller provides the answer, but also way to

    noah: is this defined in terms of an HTTP cache?

    slightlyoff: no
    ... turning big case into small primitives, if they're not
    implemented fall back on default behaviour

    wycats: App Cache implemented on top of something where
    behaviour can be overridden
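
    The override-with-fallback split being described can be sketched
    like this; the names are illustrative and not any spec's API (the
    Navigation Controller proposal later evolved into Service
    Workers, but nothing below is that interface).

```javascript
// Sketch of the layering: a controller's handler gets first crack
// at each request; if it returns undefined, the platform's default
// behaviour (here, defaultFetch) takes over.
function createController(handler, defaultFetch) {
  return function (request) {
    const intercepted = handler(request);
    return intercepted !== undefined ? intercepted : defaultFetch(request);
  };
}

// An App-Cache-like policy becomes just one possible handler: serve
// from a local cache when offline, otherwise decline and let the
// default network path run.
function offlineFirst(cache, isOffline) {
  return request => (isOffline() ? cache[request.url] : undefined);
}
```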

    noah: HTTP caches are implementations of the HTTP spec, built
    into browsers & proxies
    ... not focused on long-term caching
    ... be interesting to tell an organised story about how what we
    do relates to those headers and that architecture
    ... should be a lot in common
    ... need a clean story where this is consistent with this

    annevk: need to define the interaction anyway

    noah: look at both spec and code point of view

    wycats: I think it does talk about interaction with HTTP cache

    annevk: if there's a POST, you want to store it in the
    Controller in the offline case, which HTTP caches won't do

    wycats: use existing browser primitives, build on top of them
    ... need core primitive of "opaque response" for example
    ... we might not like how it works, but we need to explain how
    it works

    ashok: is this how you think it *should* work, or how it *does*
    work?

    wycats: this is a proposal that Alex is creating, a proposal
    that Alex will make to a real WG
    ... "Declarative Form"
    ... Jonas Sicking has been working on better declarative API
    ... it's a JSON model
    ... one problem with App Cache was that it wasn't extensible
    ... using JSON means we don't have to amend the parser
    ... it's less ambitious
    ... provides just the obvious things, so we can explore how it
    is actually used
    ... includes mechanism for templated URLs & use of XHR
    ... lets you point to markup & data separately
    ... "Evolution"
    ... people will extend JSON, add JS libraries etc
    ... use this to evolve the platform
    ... better imperative capabilities enables better evolution
    ... I'll not talk about web components much
    ... these are basically the same thing
    ... I propose that we write a Recommendation that outlines this
    design philosophy for the web platform
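
    The templated-URL mechanism mentioned for the JSON manifest can
    be sketched as simple {name} expansion; the template syntax and
    helper below are assumptions for illustration, not the draft's
    actual format.

```javascript
// Expand a URL template such as "/articles/{id}" using named
// parameters, as a declarative manifest entry might before issuing
// an XHR for the data.
function expandTemplate(template, params) {
  return template.replace(/\{(\w+)\}/g, (match, key) => {
    if (!(key in params)) throw new Error("missing parameter: " + key);
    return encodeURIComponent(params[key]);
  });
}
```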

    Yves: going back to the navigation controller & cache
    ... if you look at local cache as being a URI resolver, then
    you see it as a low end version of a controller
    ... there are things that are shared

    wycats: yes, you could imagine that the HTTP cache is defined as
    a navigation controller

    noah: need a few minutes discussion about where to go next
    ... this would be a significant project for the TAG
    ... for these we should have a project page:
    ... goals
    ... success criteria
    ... a Rec is not a success criterion: explain in terms of what
    that achieves
    ... key deliverables, dates, who's assigned
    ... often useful at this point to create a first cut at what
    the product page might look like
    ... use that as point of discussion about why we should do
    this, what the TAG's role is

    <noah> Example of a product page:

      [31] http://www.w3.org/2001/tag/products/fragids.html

    wycats: all this stuff about Navigation Controller etc is just
    specific examples, we need to look at higher level

    timbl: what's the push-back that you've heard around these
    ... will the implementers get on board?

    wycats: main pushback is from people worried about new
    declarative forms

    <noah> If this is to be a significant project, we need a plan
    at about that level. We don't necessarily need it at this F2F,
    but I'm suggesting we draft a strawperson version just to get
    discussion of what the scope and goals of the TAG's work on
    this might be.

    slightlyoff: general lack of familiarity from implementers
    about what developers need
    ... people generally don't go from being web developers to
    browser implementers

    <wycats__> noah: I volunteer to bring that back tomorrow or

    <noah> Great, thank you. Suggest you collaborate with others,
    at least informally.

    noah: great, we'll come back to this later in the F2F

    annevk: the main concern I've heard about web components is
    lack of shared understanding
    ... counter-argument is that we already don't have that shared
    understanding
    ... also heard concern that if we only do low-level primitives,
    it'll be too hard for normal web developers

    timbl: building up from bottom rather than down from top?

    <wycats__> you do both

    <wycats__> but the high level is defined in terms of the
    primitives, like the app cache example

    slightlyoff: this is an interesting question for the TAG: do
    you do one before the other?
    ... look at what opportunities you foreclose and how you reduce
    opportunity for harm

    timbl: like with :active you could imagine it starting with
    touch events etc
    ... need some top-down for device interoperability
    ... if you start low-level people start generating different
    design abstractions

    wycats: define high-level in terms of low-level
    ... eg start with the element, then talk about what new
    capability is in JS objects
    ... end up with high-level thing that's nice + escape valve
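The layering wycats describes can be sketched in miniature: a low-level imperative hook, and a high-level declarative form defined entirely in terms of it. All function names here are invented for illustration:

```javascript
// Low-level imperative primitive (names invented): an interception hook.
const handlers = [];
function registerHandler(fn) { handlers.push(fn); }
function dispatch(url) {
  for (const fn of handlers) {
    const result = fn(url);
    if (result !== undefined) return result;
  }
  return "network:" + url; // default path when no handler claims the URL
}

// High-level declarative form, defined entirely in terms of the primitive,
// so authors get a nice surface plus an escape valve underneath.
function declareFallbacks(table) {
  registerHandler(url => table[url]); // undefined => fall through
}

declareFallbacks({ "/": "/offline.html" });
dispatch("/");     // "/offline.html" -- the declarative layer answered
dispatch("/live"); // "network:/live" -- fell through to the default
```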

    timbl: if you get it right, yes

    annevk: another concern, around rendering, is that rendering
    doesn't happen on main thread
    ... currently constraint-based system rather than imperative
    ... exposing it as an imperative system removes possibilities
    noah: try to keep as much of it as declarative as possible
    ... eg changing CSS through adding class through JS

    wycats: always going to be the case that if you are
    reimplementing something that's implemented in browser it's
    going to be slower

    annevk: by defining the platform in a certain way, you might
    constrain it
    ... might want to implement browser in a fancy different way,
    the single-threaded system is problematic


    timbl: these sorts of problems: making something performant
    needs different implementation strategies
    ... eg parallelisation
    ... asking for the callout can be difficult to implement

    wycats: people are reimplementing the entire layout system in
    JS, surely that's not more performant

    timbl: so you're prepared to take a hit on very fast CSS reflow
    in order to be able to override it

    <slightlyoff> annevk: it's possible to uncover the solver as a

    <slightlyoff> annevk: going where the evidence leads

    <slightlyoff> annevk: without pre-judging the level of
    abstraction you end up at

    <annevk> slightlyoff: I meant e.g. the case bz mentioned where
    if you implemented the DOM completely in JS you could no longer
    do async selector matching

    <slightlyoff> yeah, that's sort of BS

    <annevk> slightlyoff: I don't really fully understand

    <slightlyoff> browsers always have fast-and-slow paths

    <annevk> slightlyoff: euh sure

    <slightlyoff> annevk: it's re-implementing CSS in JS via a JS
    constraint solver that I work on

    <slightlyoff> annevk: but the conceptual level of abstraction
    need not be one level or the other

    <slightlyoff> annevk: e.g., JS engines internally use
    single-static-assignment transforms

    <slightlyoff> annevk: but JS is not a pure-functional language

    <slightlyoff> and there are scenarios where you can't employ
    those xforms

    <slightlyoff> and we get by

    <annevk> I don't follow

    <slightlyoff> going fast when we can based on some other
    formalism than the naive interpretation of "what it is"

    <slightlyoff> +1!

    <ht> I wonder if there's a place in this story for what the
    XML-on-the-Web community has been doing for years, i.e.
    _declarative_ layering using XSLT
    <annevk> scribenick: annevk

Fragment Identifiers (aka hash, fragids)


      [33] http://www.w3.org/2001/tag/2013/03/18-agenda#fragids

    goals for this session:

      [34] http://lists.w3.org/Archives/Public/www-tag/2013Mar/0059.html

    JeniT: Goal of this session is to figure out what to do with
    the spec

    [see link above]


      [35] http://lists.w3.org/Archives/Public/www-tag/2013Feb/0021.html

    JeniT: RDF/XML clashes with RFC 3023bis
    ... If you have a structured syntax like +xml or +json, then
    any media types that adopt that suffix need to comply with the
    suffix registration
    ... generic XML processors should be able to use fragids
    without understanding context
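The suffix rule being described might be sketched like this; the semantics strings are placeholders, not registry contents, and the +json entry in particular is hypothetical:

```javascript
// Sketch of the structured-syntax-suffix fallback: a specific media type
// registration wins, otherwise a generic processor may fall back to the
// semantics registered for the +suffix. The rule tables are illustrative only.
const suffixRules = {
  "+xml": "xpointer-based fragids",
  "+json": "json-pointer-based fragids" // hypothetical; no such registration implied
};
const typeRules = {
  "application/xhtml+xml": "html-style fragids"
};

function fragidSemantics(mediaType) {
  if (typeRules[mediaType]) return typeRules[mediaType]; // specific registration wins
  const m = mediaType.match(/\+[a-z0-9]+$/);             // structured syntax suffix
  if (m && suffixRules[m[0]]) return suffixRules[m[0]];
  return "opaque";                                       // no generic processing
}

fragidSemantics("application/rdf+xml"); // falls back to the "+xml" suffix rules
```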


      [36] http://www.w3.org/2001/tag/doc/mimeTypesAndFragids-2013-03-12.html

    JeniT: the spec (^) is written for four sets of people
    ... people registering +suffix media types (e.g. +xml)
    ... people registering media types (e.g. application/rdf+xml)
    ... people defining fragment structures which should work across
    media types
    ... and the people who write code

    <ht> Relevant bit of 3023bis is here:
    s-00#section-8.1 and here:


    JeniT: Last Call was the last step
    ... now up for CR for which we need exit criteria

    wycats__: my main issue is that in browsers fragids are often
    used for something else entirely

    JeniT: the focus is not for RESTful app developers as that's
    addressed elsewhere

    timbl: don't you think the use of fragids might increase?

    wycats__: seems plausible
    ... My concern is that a document about fragment identifiers
    should cover its main use, in HTML

    <ht> See also

      [39] http://tools.ietf.org/html/rfc6839#section-4.1

    JeniT: it's up to the media type registration

    <slightlyoff> I see this as a question about navigation

    annevk: I think there's a mismatch between the media type ->
    fragment mapping and what actually happens in a browser

    noah: is this an edge case, or is this not the architecture?
    ... is this 80/20, or is the architecture really not correct?

    annevk: I think it's an 80/20

    <JeniT> discussion about embedded images in iframe etc where
    browser captures control of the interpretation of fragids
    within that iframe

    noah: is it okay if we point out that it's not always accurate
    and that we might do future work?

    Ashok: what if somebody does a registration and does not follow
    the rules?

    Yves: best practices still allow for people shooting themselves
    in the foot

    JeniT: media type registration people could check against this

    Ashok: does the IETF agree with us?

    JeniT: the feedback from the people involved suggests so
    ... but they are not the reviewers of media type registrations
    ... the exit criteria are what's important here
    ... [goes through proposal]

    <noah> Two tentative agenda items added for Wed morning. See

      [40] http://www.w3.org/2001/tag/2013/03/18-agenda.html

    plinss: CR is a call for implementation, so people should start
    using it

    <slightlyoff> uh...I think wycats__ had a point to make

    <slightlyoff> we have time.

    <slightlyoff> (all day, in fact)

    wycats__: a thing that happens in the world is that people
    request the same resource from a browser and, via XHR with an
    Accept header, get JSON
    ... if they use the same fragment, is that a problem?
    ... I want it to be specifically allowed to use a fragment
    identifier for some media types and not others
    ... in the context of content negotiation
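The conneg pattern wycats__ is describing can be sketched as follows; the server logic and payloads are invented, and the point is only that one URL yields different representations by Accept header:

```javascript
// Sketch of content negotiation: one URL, representation chosen by Accept.
function negotiate(acceptHeader) {
  const accepts = acceptHeader.split(",").map(s => s.split(";")[0].trim());
  if (accepts.includes("application/json")) {
    return { type: "application/json", body: '{"title":"Hello"}' };
  }
  return { type: "text/html", body: '<h1 id="title">Hello</h1>' };
}

// A browser navigation and an XHR can hit the same URL and receive different
// representations -- which is exactly why the meaning of "#title" gets murky.
negotiate("text/html,application/xhtml+xml"); // HTML representation
negotiate("application/json");                // JSON representation
```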

    <scribe> scribenick: scribe

    <annevk> wycats__, so many reasons not to do content
    negotiation btw: [41]http://wiki.whatwg.org/wiki/Why_not_conneg

      [41] http://wiki.whatwg.org/wiki/Why_not_conneg

    <slightlyoff> annevk: conneg happens

    <slightlyoff> annevk: honestly, it does

    <annevk> doh

    <slightlyoff> consenting adults and all that

    <annevk> All I'm saying is that it's a bad idea

    <annevk> E.g. confused proxies that don't support Vary

    <wycats__> annevk: O_O

    <slightlyoff> and I don't happen to agree = )

    <wycats__> I disagree strongly with that document

    <slightlyoff> in the same way I disagreed with crock's jslint

    <annevk> wycats__, hope you read it first :)

    <wycats__> And Rails/jQuery is an existence proof that this is
    not an issue

    <wycats__> annevk: I did

    <wycats__> I skimmed

    <slightlyoff> annevk conneg drives a huge amount of the web

    <slightlyoff> annevk I think your document is simply a dead
    letter
    <annevk> slightlyoff it's not mine

    <annevk> slightlyoff I simply agree with it

    <slightlyoff> annevk ok, still a dead letter = )

    <slightlyoff> annevk and agreeing with it won't revive it

    <annevk> slightlyoff I don't think it's dead at all

    <annevk> slightlyoff most of the new stuff has taken it into
    account
    <slightlyoff> annevk sorry, it might reflect your reality but
    not most of the web, toolkits, etc.

    <slightlyoff> annevk so specs might write it in, but it doesn't
    make it right

    <annevk> slightlyoff most of the web, I'd like to see that

    <annevk> slightlyoff e.g. most of the CDNs don't work this way

    <slightlyoff> annevk by traffic? sure: gmail and google.com

    <annevk> slightlyoff they use the same URL with different

    <slightlyoff> annevk you bet

    <annevk> hmm

    <annevk> scribenick: annevk

    Larry: your CR exit criteria proposal looks fine

    <Larry> the exercise of thinking about exit criteria is useful
    <Larry> the details matter less than having some credible
    measure that you "got it right"

    <Zakim> ht, you wanted to point to the IETF state of play

    ht: this implements the best practices

    <JeniT> [42]http://tools.ietf.org/html/rfc6839#section-4.1

      [42] http://tools.ietf.org/html/rfc6839#section-4.1



    ht: RFC3023bis has stalled due to lack of energy, but I have now
    found that energy

    <ht> [44]http://tools.ietf.org/html/rfc6839 uses prose taken
    indirectly from earlier drafts of our document, and I think it
    can be counted as evidence of uptake wrt CR exit

      [44] http://tools.ietf.org/html/rfc6839

    <Larry> the "elephant in the room" is that MIME in the web is
    under attack by sniffing and registerContentHandler

    <JeniT> browsers should implement

      [45] http://tools.ietf.org/html/rfc5147

    timbl: I'd like fragment identifiers to be used in lots more
    places
    ... e.g. I want fragment identifiers for plain text
    <Larry> +xml vs +json don't have common fragment identifiers

    wycats__: Getting agreement on how to do this for text/plain
    might be tricky

    timbl: My concern is that fragment identifiers are already in
    use in HTML for wildly different things, and if we ask for them
    to have the same semantics there, you're breaking things.

    <noah> I think the worry is conneg, Henry

    timbl: [46]http://tools.ietf.org/html/rfc5147

      [46] http://tools.ietf.org/html/rfc5147
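RFC 5147, linked above, defines `char=` and `line=` fragment schemes for text/plain. A minimal sketch of parsing just the simple position and range forms (the RFC's integrity-check parameters are omitted here):

```javascript
// Minimal RFC 5147 text/plain fragment parser: only the plain char=/line=
// position and range forms; integrity checks (e.g. ;length=) are omitted.
function parseTextFragment(frag) {
  const m = frag.match(/^(char|line)=(\d+)(?:,(\d+))?$/);
  if (!m) return null;
  return {
    scheme: m[1],
    start: Number(m[2]),
    end: m[3] === undefined ? null : Number(m[3])
  };
}

parseTextFragment("line=10,20"); // { scheme: "line", start: 10, end: 20 }
parseTextFragment("char=100");   // { scheme: "char", start: 100, end: null }
```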

    <ht> I think it's important to keep the conneg issues and the
    suffix/generic issues carefully distinct

    <Larry> polyglot fragment identifiers

    <Larry> fragment identifiers that mean the 'same' when
    interpreted by content with different media types

    <ht> There are similarities, along the lines that Jeni has just
    suggested, but they aren't the same

    JeniT: I think
    ... we agree on the exit criteria
    ... we agree to REC
    ... there are concerns with
    ... content negotiation and transitioning
    ... HTML/XML where script takes over the interpretation
    ... I will
    ... create a new draft for a future TAG call soonish

    plinss: do these changes require another LC?

    noah and JeniT: no

    <slightlyoff> I don't think we need another LC

ISSUE-24 Authoritative Metadata

    <JeniT> ScribeNick: JeniT

    <noah> [47]http://www.w3.org/2001/tag/doc/mime-respect-20060412

      [47] http://www.w3.org/2001/tag/doc/mime-respect-20060412

    annevk: TAG finding 2006 on "Authoritative Metadata"
    ... argues that encapsulating metadata about content is more
    important than the content

    timbl: that you can't understand content without having read it

    annevk: in fact, browsers disregard content type sometimes
    ... they look at content type and sniff content as well
    ... because people use the wrong content type
    ... with img, the content type is basically ignored
    ... they test if it's image/svg+xml, otherwise just pipe it to
    the image decoder
    ... video is similar, uses the bytes in the content to
    determine format
    ... cache manifest does the same thing
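The magic-number approach annevk describes can be sketched as follows; the signature table is abbreviated from the kinds of checks in the sniffing spec, and is not any browser's actual algorithm:

```javascript
// Sketch of magic-number sniffing for images: the first bytes of the
// response, not the Content-Type header, pick the decoder.
function sniffImageType(bytes) {
  const startsWith = sig => sig.every((b, i) => bytes[i] === b);
  if (startsWith([0x89, 0x50, 0x4e, 0x47])) return "image/png";  // \x89PNG
  if (startsWith([0xff, 0xd8, 0xff]))       return "image/jpeg"; // JPEG SOI
  if (startsWith([0x47, 0x49, 0x46, 0x38])) return "image/gif";  // GIF8
  return null; // unknown: a real pipeline has more cases (SVG needs parsing)
}

sniffImageType([0x89, 0x50, 0x4e, 0x47, 0x0d]); // "image/png", whatever the header said
```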

    noah: is there a lot of badly served video?

    annevk: video is hairy because of how it's distributed
    ... lots of different container formats
    ... doesn't map that well to media type system

    wycats: there was a time when cache manifests were served with
    the wrong type

    annevk: with cache manifest we required correct media type, but
    people complained a lot, so we dropped the requirement
    ... with subtitling, the WebVTT format, we also decided to just
    look at the first set of bits
    ... we disregard content type for the response for these
    ... fonts, same thing happens
    ... tried to do fonts/X but IETF didn't help
    ... browsers started shipping
    ... people were using application/octet-stream
    ... IETF had no interest in fonts/X
    ... for CSP it's important


    scribe: content security policy
    ... trying to prevent XSS attacks

    <slightlyoff> wait...what's important for CSP?

    <slightlyoff> hmmm

    <slightlyoff> I'm not sure I agree

    <slightlyoff> I'm contributing to CSP

    <slightlyoff> and I don't understand how this is an issue

    scribe: from a browser perspective, we wanted to enforce
    ... from a web developer perspective it's difficult
    ... because you don't always have sufficient control

    <Larry> talking about sniffing?

    marcosc: github doesn't give you any control for example

    <noah> Hi Larry, we're just getting into authoritative
    metadata. Anne is giving us a summary of the state of play,
    which is basically: a lot of the new markup specifically
    ignores content type in some cases due to pushback when early
    versions required it

    annevk: we don't want to interpret arbitrary files as CSS
    ... there were hacks that took advantage of that
    ... for cross-origin requests we enforce content type
    ... and in strict mode
    ... CSS is easier to enforce because it's been around for a
    long time

    <Larry> x-content-type-options: nosniff is a good idea
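The effect Larry is pointing at can be modelled roughly like this; it is a hedged sketch, not the exact algorithm any browser implements (real browsers sniff in more situations than shown), and the lowercase header keys are an assumption of this toy model:

```javascript
// Rough model of X-Content-Type-Options: nosniff. Header keys are assumed
// already lowercased; this is not any browser's real decision procedure.
function shouldSniff(headers) {
  const optedOut =
    (headers["x-content-type-options"] || "").trim().toLowerCase() === "nosniff";
  if (optedOut) return false; // server opted out: the label is authoritative
  const type = headers["content-type"];
  // Toy rule: sniff only when the label is missing or maximally vague.
  return !type || type === "application/octet-stream";
}

shouldSniff({ "x-content-type-options": "nosniff",
              "content-type": "text/plain" });         // false
shouldSniff({ "content-type": "application/octet-stream" }); // true
```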

    <slightlyoff> again, would like to dive into CSP

    slightlyoff: I'd like to dive into the CSP question, because I
    don't understand the issue
    ... CSP defines what origin what sort of content can be from
    ... that's about execution
    ... that doesn't speak to mime types

    annevk: CSP is authoritative metadata
    ... we want that metadata to be authoritative

    <noah> Curious, is content-type honored on script tags?

    slightlyoff: I see, you're not talking about content type here,
    you're talking about the CSP header

    noah: we're on a thread with darobin saying that authoritative
    metadata is an anti-pattern

    <slightlyoff> I don't see how <script> is bad either

    <slightlyoff> sending an image and having it pulled in is just

    <noah> Quoting from Robin's email of

      [48] http://lists.w3.org/Archives/Public/www-tag/2013Feb/0114.html

    <Larry> there are a ton of problems with the sniffing document

    <noah> "I would support the TAG revisiting the topic of
    Authoritative Metadata,

    <noah> but with a view on pointing out that it is an
    architectural antipattern. "

    annevk: there's an argument that we should be looking at
    something else because it's not working

    <noah> So, the suggestion that it's an antipattern comes from
    Robin, and that's in part what led us to discuss today.

    marcosc: the thing in github is that when you navigate to the
    raw file in github, it's served as text/plain
    ... if I have a type that I'm working on, there's no way of
    registering it in github

    <slightlyoff> FWIW, CSP is only authoritative in a modifier
    sense; the document is still valid HTML without the CSP

    <Larry> people would fix their servers if browsers rejected
    mislabeled content

    <slightlyoff> so CSP is strict subsetting, not type changing

    <Larry> supporting "nosniff" would be a way of making metadata
    authoritative
    wycats: web reality is that people don't have a lot of control
    over their deployment environments

    marcosc: when I read the finding, the assumption seemed to be
    that there was a user and someone in control of a server
    ... and the most common case now is that you don't run your own
    server now

    <noah> So, all this tends to confirm my feeling that the
    virtues of Postel's law are way oversold. If we were stricter
    about what we accepted from the start, we'd have lots more
    correctly typed content, and the Web would be a much more
    reliable source of information.

    wycats: same with CloudFront

    <Larry> i submitted specific bug reports on the sniffing spec

    marcosc: my blog is another example, things are getting proxied
    all over the place, I don't have a lot of control
    ... I'm not going to be able to influence what that service is

    wycats: there are some cases where you should care about
    authoritative metadata and some cases where you shouldn't

    <Larry> authoritative metadata needs fixing

    annevk: the specs capture this but the finding doesn't

    slightlyoff: CSP is different than other metadata
    ... it's strict subsetting



    <annevk> I think CSP might have been a distraction :-(

    <annevk> I just mentioned it as something we want to be
    authoritative
    slightlyoff: removing the CSP metadata from a document might
    functionally invalidate it, but doesn't otherwise change its
    type
    ... it's ignored on browsers that don't support it
    ... it's best effort

    <Larry> i think the general principle of metadata might be
    different for content-type charset


      [50] http://trac.tools.ietf.org/wg/websec/trac/query?component=mime-sniff

    annevk: once it's supported in all browsers, you have to use it

    wycats: CSP doesn't give you 100% protection for all users

    slightlyoff: it doesn't modify the type of the document or
    create an alternative processing path

    wycats: you can't say "onclick=" in certain CSP modes
    ... if you turn off eval for example, it's
    application/javascript minus eval
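The "strict subsetting" point can be sketched with a toy policy model; this is not the real CSP grammar or enforcement algorithm, just an illustration that CSP only switches features off without changing what the document is:

```javascript
// Toy model of CSP as strict subsetting: the policy never changes the
// document's type, it only disallows features. Not the real CSP grammar.
function allows(policy, feature) {
  const scriptSrc = policy["script-src"] || [];
  if (feature === "inline-handler") return scriptSrc.includes("'unsafe-inline'");
  if (feature === "eval")           return scriptSrc.includes("'unsafe-eval'");
  return true; // everything not restricted stays plain HTML/JS
}

const strict = { "script-src": ["'self'"] };
allows(strict, "inline-handler"); // false -- onclick="" is refused
allows(strict, "eval");           // false -- application/javascript minus eval
```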

    <Larry> servers should send no content-type rather than a wrong
    one

    noah: it's not changing the feature list, it's just disallowing
    some of the features

    timbl: you brought this up as somewhere that sniffing is not

    annevk: yes, it's not that authoritative metadata is an
    incorrect finding
    ... we still use it for CSP for example

    slightlyoff: it's not authoritative, it's best-effort

    wycats: from the perspective of a client, it's not best-effort

    annevk: another example would be X-Frame-Options
    ... places where the metadata *is* authoritative
    ... or with CORS, we have to trust the headers
    ... not looking into content to see if headers are wrong

    <Zakim> noah, you wanted to say it's not only the browsers

    noah: I know that this is going to come up again and again
    ... over the next few years, we're going to have to discuss
    server<->browser vs one in which content is served, and browser
    is an extremely important use case
    ... we don't know what's going to be reading this in 20-30
    years

    slightlyoff: we're still going to be using browsers in 20-30
    years
    noah: we'll probably be using browsers for a bunch of things we
    do today
    ... but HTTP is usable for many things beyond page browsing
    ... like google crawlers
    ... in general, HTTP and URIs are important infrastructure
    ... it's like saying that the telephone system is only going to
    be used for telephony

    <slightlyoff> googlebot is TRYING to process the world the way
    users do

    <slightlyoff> that's all it does

    <slightlyoff> that's its job in life

    wycats: there's a common ubiquitous user agent, which is the
    browser

    noah: that's important, but I want to talk about what servers
    should do too

    timbl: the spec is a contract between client and server

    <slightlyoff> noah: googlebot's stated mission in life is to
    process and "see" the web the way users do

    <Larry> windows does x-content-type-options: nosniff, that
    would let metadata be authoritative

    timbl: I wonder whether we should define two webs: a simple one
    in which authoritative metadata is a platonic ideal
    ... it's easy to teach someone how it works
    ... in fact there are variations
    ... a much larger book
    ... for a lot of the web, you can ignore the authoritative
    metadata issues when you're designing websites
    ... maybe it's useful to have both
    ... authoritative metadata is a model
    ... just as it's useful to have an HTML spec which is "these
    are the tags"
    ... maybe we need two documents

    <slightlyoff> what's the use in a toothless finding?

    <wycats__> it's easy to teach people lies

    <wycats__> people do this all the time

    <wycats__> throughout the world

    annevk: that question hasn't been answered
    ... should we recommend people use mime types for fonts, even
    though we know that clients will ignore them
    ... there's a mime type for app cache manifests, for example
    ... we could encourage its use and flag it in the console if
    the wrong mime type is used
    ... we could indicate in the browser the mismatch

    <Larry> the MIME types for fonts are a mess

    <slightlyoff> Larry ; sure, but does it matter?

    <slightlyoff> that's the only question that seems to be at
    issue
    <Larry> yes, slightlyoff, the font people tell me they have
    lots of problems

    <Larry> there are awful workarounds

    marcosc: you get console warnings for some mismatches in Chrome
    ... I have a JSON-based format, I want it to behave in a
    particular way
    ... what do I do?
    ... I need some authoritative metadata there
    ... JSON doesn't provide a way of giving a magic number
    ... no comments or any other way of including a magic number
    ... XML is different because you can use a namespace
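The contrast marcosc draws can be made concrete: XML can carry its "magic number" in-band as a namespace, while JSON reserves no slot for one. The namespace URL and the `profile` member below are purely invented conventions:

```javascript
// XML can mark its format in-band via a namespace (URL invented here):
const xmlDoc = '<widget xmlns="http://example.org/ns/widget"/>';
const hasNamespace = /xmlns="http:\/\/example\.org\/ns\/widget"/.test(xmlDoc);

// JSON has no comments and no reserved member, so nothing in-band says what
// format this is; a consumer must rely on out-of-band metadata (Content-Type)
// or an ad-hoc member the format itself would have to reserve:
const jsonDoc = '{"name": "thing"}';
const parsed = JSON.parse(jsonDoc);
const guessedFormat = parsed.profile || "unknown"; // "profile" is hypothetical
```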

    slightlyoff: the mission of googlebot is to represent to users
    through search results what they would have chosen as humans
    ... we do everything we can to view the pages as a human would
    ... googlebot is necessarily browser-like
    ... systems that have to answer questions for humans need to
    see the web as humans do

    <noah> Yes, Alex, I acknowledged that the Google Bots are
    focused on matching what would be seen through a browser, at
    least for now. If Google Bots ever want to help you find linked
    data, that might change.

    <slightlyoff> noah the bots are trying to help you find linked
    data today; see schema.org markup

    <slightlyoff> noah but that's just a sub-axis of the
    human-visible content we already try to make sense of

    slightlyoff: second point: the question that marcosc raised
    ... is the question that you have a processor that doesn't know
    what to do with it?

    marcosc: the pattern now is to invoke it through an API
    ... concretely I'm thinking of the Firefox hosted application
    ... you install by pointing to a JSON file
    ... and what should happen if someone gets hold of that URL and
    pastes it into the location bar
    ... the bigger question is that we have these annoying formats
    that don't allow magic numbers
    ... and we need to be able to deal with those as well

    noah: yes, and we always will

    <slightlyoff> I can't speak for the crawl team and their
    interests, but our anti-spam techniques generally boil down to
    "you can't lie to us, show us what you'd show a human", so
    that's a hard constraint in the real world

    wycats: the authoritative metadata finding is being ignored by
    the most popular user agents
    ... I think we should recognise that there are good reasons for
    user agents to ignore it
    ... also it's not a good platonic ideal if it doesn't work

    <slightlyoff> agree with wycats__

    <Larry> "Authoritative Metadata" tries to accomplish too much--
    combining all kinds of metadata. The question is really
    "cascading metadata"

    timbl: the point of the platonic ideal is that it's a sketch or
    a pattern that is easy to understand
    ... there's the Content-Type model, the FTP model of
    understanding the file extension, and the magic number world
    ... unix pre-Mac used file extensions officially but could
    sniff things
    ... in the early web, people got stuck on the extension
    ... in the browser, do you look at the suffix?

    annevk: very rarely

    <Larry> sniffing sounds better than labeling, but sniffing
    can't be made to work: the ambiguity is INTRINSIC for polyglots
    and puns, so sniffing is intrinsically unreliable.

    <Larry> if you are going to standardize sniffing, then you
    might as well put the sniffing in the server rather than the
    client
    timbl: when you say "the most common application is the
    browser" --
    ... the browser is lots and lots of different clients in
    different situations

    wycats: I can imagine a universe where a platonic ideal spec is
    useful, but it shouldn't say MUST or MUST NOT

    timbl: what about talking about patterns?

    wycats: if we write something about Content-Type, we should
    look at what has actually worked

    timbl: if you removed everything about Content-Type in
    browsers, it wouldn't work

    annevk: we can't remove it at this point; for HTML and CSS
    there's nothing you could use (to sniff)

    wycats: in the user agent, it's very very common not to do it
    ... even recently, with fonts and manifests it hasn't worked

    <Larry> "platonic ideal" is an incorrect reference

    <Larry> Plato has nothing to do with this ideal

    timbl: we've got these three patterns
    ... it's not an anti-pattern

    annevk: darobin rolled back from saying that

    <wycats__> I agree that it's not an anti-pattern, but I
    disagree that we should try to tell people that it's the one
    true way to do things

    <wycats__> and the current spec irretrievably says that

    <Zakim> Larry, you wanted to give pointers to background

    Larry: wishing that you could do sniffing is unrealistic
    ... you can't do it between text/plain and anything
    ... magic numbers are used in different places
    ... sniffing is intrinsically heuristic
    ... you might as well put the rules in the server
    ... the finding covers too many cases
    ... there are all sorts of metadata, content type, character
    sets etc
    ... might be better to focus on content type issue
    ... sniffing isn't a viable alternative to labelling
    ... it's not a matter of authoritative metadata, more cascading
    ... different sources of information about what the type would
    be
    ... comments on the web sec, bug reports on the mime sniff
    living standard
    ... it's a complicated situation
    ... I don't think there'll be a simple solution
    ... to rescind it or sniff everywhere

    <wycats__> I don't understand how anyone thinks we are going to
    have a spec that tells the HTML spec that it MUST NOT do
    something that it has to do to match reality

    <wycats__> that's just absurd

    noah: we have the reality we do with servers, I don't think it
    proves very much about what might have been improved in the

    annevk: in the beginning we didn't have Content-Type, we just
    had sniffing

    <slightlyoff> how does this even matter?

    <slightlyoff> we can't buy into Postel's law less

    <Larry> slightlyoff: yes, you can

    <Larry> x-content-type-options: nosniff "Solves" the problem

    noah: this is water under the bridge though
    ... it's not really clear how many of the problems are because
    it's conceptually flawed
    ... or because it hasn't been implemented
    ... there are architectural reasons why follow-your-nose is

    <timbl> I much prefer the content-type, body pair model to the
    one in which all possible document contents are sniffed by one
    single interplanetary sniffing algorithm. Robin's proposal,
    that we have things like <plaintext>, just begs the question,
    and moves the authoritative metadata to the next level.

    <wycats__> timbl: I agree with that

    <wycats__> timbl: I agree that in theory that is good, but we
    *must* accept that Postel's Law is the reality of the Internet

    <wycats__> why mess with success?

    <slightlyoff> timbl postel's law isn't a "nice to have"; only
    one half of it is

    <slightlyoff> it's a description of what systems at scale do

    noah: it should be possible that as much as possible of the web
    can be understood by following pointers
    ... maybe we can refocus the finding
    ... surely no one thinks serving a PNG as image/jpeg is the
    right thing to do
    ... we don't know what people will do long term
    ... labelling things well is generally good practice in an

    <timbl> Postel's law is not the reality of HTML5

    <wycats__> timbl: sure it is

    <slightlyoff> timbl of course it is.

    <wycats__> look at the parser?

    <slightlyoff> timbl it's *actively* that

    <wycats__> every branch that says "This is an Error" tells the
    parser what to do anyway
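The point wycats__ makes about the parser can be illustrated with a toy: every "this is a parse error" branch still specifies exact recovery, so every input yields a defined result. This is a toy tag matcher, not the real HTML5 algorithm:

```javascript
// Toy parser with HTML5-style defined error recovery: errors are recorded,
// but each error branch says exactly what to do, so parsing never fails.
function parseTags(input) {
  const errors = [];
  const open = [];
  for (const m of input.matchAll(/<(\/?)([a-z]+)>/g)) {
    if (m[1] === "") { open.push(m[2]); continue; }       // open tag
    if (open[open.length - 1] === m[2]) { open.pop(); continue; } // matching close
    errors.push(`unexpected </${m[2]}>`); // "this is an error" -- but recovery
    open.pop();                           // is defined: close whatever is open
  }
  return { errors, unclosed: open };
}

parseTags("<b><i></b></i>"); // errors recorded, yet a defined final state
```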

    annevk: the community doesn't have the tools to follow this

    noah: the tools will evolve

    timbl: we can push the community, CORS for example

    <Larry> The "nosniff" option is deployable

    <annevk> there's nosniff content out there that you need to
    sniff anyway
    <annevk> because IE only implemented part of it

    wycats: CORS isn't implemented widely because it requires
    sysadmin access

    marcosc: even W3C doesn't do it properly

    wycats: we could decide that Postel's law is an interesting
    curiosity, or an important thing on the internet

    timbl: the HTML5 people say that you have to be liberal in what
    to send and liberal in what to expect

    <Larry> annevk "need to" bears closer examination. You mean IE
    still sniffs "nosniff" content?

    <slightlyoff> but that's not true about how behavior works

    wycats: the spec says exactly what to do when there are errors

    <annevk> Larry: yeah, e.g. image/jpeg vs image/png

    <annevk> Larry: cause it all goes down the image decoder

    noah: we need to push people to be conservative in what they
    send
    ... that's the other side of Postel's law

    timbl: in Postel's law there's a huge space of messages that
    are not sent
    ... in HTML5, there is no difference in what you can send and
    what's accepted

    wycats: in HTML5 there's a set of documents which are valid and
    a set which aren't
    ... the user agent can fail when there's an error

    slightlyoff: if you look at the new parser for webkit, we have
    error functions which are noops
    ... in the real world the software has to continue to work
    ... in Postel's law, there's the motherhood and apple pie of
    ... and the practical reality of massive scale with consumers

    timbl: the current situation is that people are liberal in what
    they produce, and the browsers are liberal in what they accept
    ... that's why I said that Postel's law does not describe the
    current web

    <wycats__> 1+

    annevk: the guidelines are conservative/liberal

    <slightlyoff> we saw this even in RSS

    <wycats__> the whole point of Postel's Law is that the reality
    is liberal/liberal

    <annevk> aaah

    marcosc: maybe what darobin means to say is to rethink what it
    means to be conservative on the server end
    ... it can be sending content type or magic number or something

    <wycats__> it's telling people to be conservative, but if
    people LISTENED you wouldn't need the other half

    <slightlyoff> timbl even in formats where errors should have
    been "fatal", we ended up with Postel's law dominating, and we
    got fixup-based parsers for those formats too

    <annevk> hehe

    noah: it says follow the specs

    marcosc: we've been having discussion about magic numbers over
    last two years or so, for example with app cache and fonts
    ... people just couldn't use content-type, maybe there we can
    use magic numbers
    ... I'm saying there's other ways of being conservative
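    [A concrete illustration of the magic-number alternative marcosc
    describes: the first eight bytes of every PNG file are fixed by the PNG
    specification, so a consumer can recognise the format without trusting
    a Content-Type header. The helper name is invented for illustration:]

```javascript
// PNG files begin with a fixed 8-byte signature (per the PNG specification).
const PNG_SIGNATURE = [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];

// Hypothetical helper: decide whether a byte buffer looks like a PNG by
// its magic number, independent of any declared media type.
function looksLikePng(bytes) {
  return PNG_SIGNATURE.every((b, i) => bytes[i] === b);
}
```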

    <noah> That can't be the point of it. Postel's law says be
    conservative when sending. I accept that being liberal is where
    we've landed, but that is surely NOT what Postel intended.

    <slightlyoff> why are we debating something that can't work in

    <wycats__> the point of Postel's law is that reality is always
    people sending liberally

    <slightlyoff> i thought we cared about the architecture of the

    <wycats__> otherwise you can nuke the liberal half

    <noah> What I think he wanted was to give receivers a little
    wiggle room to keep running while everyone cleaned things up,
    not to license long term nonconformance.

    <slightlyoff> not a system that's web-like but not the web?

    <wycats__> noah: definitely not

    timbl: conservative means sticking to the spec
    ... all the groups are defining the protocol about how the web
    ... I'd like to see the spec so that when I'm writing a server
    from scratch I know what to do, and one that isn't too

    marcosc: it's a real problem for me that I don't know how to
    label my new JSON format

    timbl: label it

    marcosc: I need to know whether to use a magic number or a mime
    type in a new binary format
    ... why should I bother if the browser doesn't care

    annevk: the browser cares if you say it should care

    wycats: what matters to me is documenting reality

    <Zakim> Larry, you wanted to talk about recasting the language

    Larry: it's helpful to stop talking about should I ever serve X
    as Y
    ... you have a body and a header and try to figure out how to
    interpret it
    ... it can be ambiguous about what it is
    ... it's possible to add the no sniffing option, but the
    browsers have to make it work
    ... once someone starts sniffing in a no-sniff context,
    everyone will stop doing it
    ... it has to be linked to an event, for example introduction
    of HTTP 2.0
    ... put the sniffer in the proxy, so you could serve it as
    ... that would reduce a lot of the ambiguity
    ... absolutely true that if no one enforces it then content
    providers won't fix the content
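    [The "nosniff" option Larry refers to is the X-Content-Type-Options
    response header, which tells a browser to honour the declared media
    type rather than sniff the body. An illustrative response:]

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
X-Content-Type-Options: nosniff
```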

    <annevk> That'll hurt adoption of HTTP 2.0

    <annevk> You don't want to closely couple that

    <wycats__> does anyone disagree with removing the MUST?

    <slightlyoff> wycats__ I do not disagree with that

    noah: we need to decide whether to take this beyond this
    ... I don't think there's even a core of an emerging consensus
    ... it's possible that we should send people off to frame
    something that they think might lead to TAG consensus
    ... next thing up is setting TAG priorities
    ... is this something that people want to put time and energy

    <slightlyoff> I'd like to propose that we not try to advise for
    things that are fundamentally incompatible with observable

    <slightlyoff> I feel strongly that this cuts to the core of TAG

    <slightlyoff> and our current deficit in this regard.

    wycats: I think the consensus that might be here is that the TAG
    shouldn't have MUSTs when WGs can't implement those MUSTs

    <Larry> if you're going to do something about sniffing, then
    work on the sniffing spec

    <slightlyoff> I agree with rescinding it

    <Larry> [51]http://mimesniff.spec.whatwg.org/

      [51] http://mimesniff.spec.whatwg.org/

    <Larry> i think rescinding it without replacing it is

    <slightlyoff> Larry that's one potential option; we can also
    explore others

    <slightlyoff> Larry in what sense?

    timbl: we could add a disclaimer in it
    ... sounds like HTML5 spec doesn't describe how browsers do all

    annevk: not quite everything, but it's fairly complete

    <slightlyoff> what new space for error does it open that is
    currently not being explored?

    timbl: we could say "this is the model", but see HTML5 for what
    actually happens in the browser

    wycats: right now, there's a big list of things not to do, and
    I'd like to rescind that

    <Larry> there's a question. we had an answer. if that answer is
    wrong, then what's the right answer

    noah: we can't settle this here and now, I want to get two
    people to gather proposed solutions
    ... lead email & telcon discussion of pros and cons

    <slightlyoff> didn't we just have that discussion?

    noah: or we could discuss again on Wed
    ... get us to a place where we can do something useful

    timbl: we're not all on the same page at the moment
    ... let's see if we can pursue on Wed

    marcosc: could we have a breakout on Wed with a couple of us?

    noah: do that at another time

    <slightlyoff> +1 to that

    wycats: my proposal is rescind current document & replace with
    something aspirational

    <Marcosc> MC: +1

    <slightlyoff> agree

    <slightlyoff> +1

    wycats: the current document doesn't reflect reality at all, so
    it's bad to leave it around

    noah: let's pick it up Wed
    ... I'd like a couple of people who could frame the discussion
    in that way

    <annevk> It could be like CSS1: [52]http://www.w3.org/TR/CSS1/

      [52] http://www.w3.org/TR/CSS1/

    timbl: I went through IETF and learned to be good about MUSTs
    and MAYs
    ... they've often tripped us up
    ... the IETF says you must use the word MUST when conforming
    specs must do it
    ... people read it as an ethical must
    ... which it isn't
    ... it just indicates that it's not a conforming implementation
    of the spec if it doesn't follow those

    wycats: that means that browsers aren't conforming

TAG Orientation & scheduling F2F

    <slightlyoff> point of order: can we get a Google Calendar or
    something for our calls and agendas?

    <wycats__> slightlyoff: yes please

    noah: in this session I want to do three things: 1. set out
    goals for the meeting as a whole, 2. say a few things as chair,
    3. schedule F2F
    ... main goals:
    ... look at what we want to do in next 1-2 years
    ... need to wrap up some of the existing work
    ... and figure out what we're doing, not just picking up random
    ... when in gear, we'll usually have 2-3 major projects
    ... with 2-3 people driving each effort
    ... want to work out which of the things we're talking about
    are likely to be these things
    ... next, want to establish shared understanding & working
    ... next, want to pick the 2-3 things that we want to try
    ... not just random discussion, things that people are going to
    take forward
    ... with writing, telcons, bringing in community
    ... next, starting to review & establish TAG positions that are
    ... next agree to publish fragids
    ... next, decide directions on existing projects
    ... there are a few big ones (publishing & linking,
    ... and a bunch of things that are more minor, many that are
    stale issues
    ... might look at them on Wed

    annevk: can anyone sift through them?

    noah: I'd like to but I have very limited time
    ... ---
    ... welcome new members, and thank you to outgoing members
    ... outgoing members should always feel welcome to come to
    meetings & social events
    ... this morning we looked at the charter & the mission
    ... that's the bounds of what to do
    ... my job is to organise the group and make sure we get
    ... we're a small WG
    ... most others have people who are sent to solve a problem
    ... would like everyone to think about commitments you've made
    ... you signed up to about 25% of your time, and to come to F2F
    ... there's no one to do the work but us
    ... we are very understanding about peoples' day jobs
    ... bursts of activity for 3-4 months
    ... but this stuff is hard, and people disagree about it, and
    it needs effort
    ... we have to impose grunt work on people too
    ... including getting CVS sorted out, and scribing
    ... trying to be open to requests around topics
    ... but administrative changes are difficult
    ... please try things the way we're doing them now
    ... hold off critiques for 3-6 months
    ... because it's hard for me
    ... on scribing: we try to be careful about privacy
    ... historically this is a close-knit group, not unusual to hang out
    ... a lot gets sorted out over dinner etc
    ... project pages
    ... when we kick off a project, we try to create a project page
    that covers goals, success criteria, deliverables, people,
    issues/actions etc
    ... all drafts & pages like this should have a 'current
    version' and dated versions
    ... minutes refer to dated versions

F2F Scheduling

    noah: try to meet when Tim can be there, and me
    ... propose London meeting late May
    ... week of 27th

    wycats: it's not trivial for me to schedule international
    travel with only a couple of months notice
    ... ideally 12 months out

    timbl: sometimes urgent things come up

    noah: we'll typically do 4-6 months
    ... so would pencil in September now too

    RESOLUTION: TAG will meet in London 29th-31st May

    annevk: what happens between May & October?

    noah: there's lots of holiday, but we'll have telcons

    booked Oct 13-15th

    booked Jan 6-10th

Coordination with ECMA TC39


      [53] http://www.w3.org/2001/tag/2013/03/18-agenda#ECMATC39

    wycats: slightlyoff & I are both on TC39
    ... there are many things TC39 are doing to enhance JS
    semantics to provide more things that you could do with it
    ... there are 2 things I'd like W3C to do be doing
    ... 1. taking advantage of these better semantics
    ... eg defining getters & setters, proxies, modules
    ... W3C WGs should describe things in terms of TC39 JS
    ... less around ad-hoc ideas around host objects
    ... Chapter 8 in JS says user agents can do anything they want
    ... I'd like it if WGs would take advantage of new power of
    non-Chapter 8 parts of JS
    ... 2. there are things that TC39 that are in conflict with
    what's happened in W3C
    ... in particular around definition of web workers
    ... which is around adhoc definition of global object
    ... want to advise WGs to start thinking, to the extent that
    there are new semantics
    ... right now new browser APIs there are new objects on global
    ... but there will be modules
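    [A sketch of the "better semantics" wycats is describing: an ES6 Proxy
    lets plain JS intercept property access, behaviour that previously
    required "Chapter 8" host-object exemptions in native code. The
    example values are invented for illustration:]

```javascript
// An ES6 Proxy intercepting property reads — the kind of behaviour that
// formerly needed host-object magic outside the core language.
const data = { title: "TAG F2F" };
const logged = [];
const proxied = new Proxy(data, {
  get(target, prop) {
    logged.push(prop);              // record every property access
    return Reflect.get(target, prop);
  }
});
```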

    timbl: are these like modules in node?

    wycats: the syntax indicates imports etc without executing the

    timbl: it looks procedural but is declarative?

    wycats: it's like an import
    ... import variable from module
    ... module name is a string
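    [The import form being described, as it was later standardised in ES6,
    looks roughly like this; the module names are hypothetical:]

```javascript
// Declarative: the parser sees the dependency without executing anything.
import { ajax } from "jquery";        // import a named binding from a module
import utils from "./utils";          // the module name is a string
```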

    timbl: there's a search path?

    wycats: there's a series of steps
    ... each has built-in descriptions and hooks

    annevk: and it's all async?

    <slightlyoff> yes, it's all async

    wycats: yes
    ... W3C isn't paying a ton of attention to this
    ... it would be great if new APIs from W3C were defined in a
    ... there's a lot of detail here
    ... slightlyoff & I would volunteer to detail these
    ... we'd like to have general direction to use these

    timbl: needs to work in either situation

    wycats: could just stick with global objects, but I think that
    would be a mistake

    marcosc: host objects aren't defined in terms of ES6
    ... you're talking about layering on top of ES6
    ... we want to adjust WebIDL to use ES6 and define in WebIDL

    <slightlyoff> this is a bug

    <slightlyoff> webidl is a bug

    wycats: there isn't anything in host objects which is
    incompatible in ES6
    ... in ES5 there are host objects, they don't exist in ES6
    ... there may well be obscure edge cases where you can't
    describe things in JS, but in general that's not the case

    marcosc: we had modules in WebIDL we could translate into
    modules in ES6
    ... we had something where when you declare an object in WebIDL
    it ends up in global scope
    ... I don't think we're fundamentally breaking things
    ... I think your proposal is good, that we should align more

    annevk: WGs and individuals aren't paying attention

    marcosc: ES6 is in flux
    ... WebIDL had native brand to get at what the object type is
    ... that changed, the name of it changed

    <annevk> (I'm paying some attention, but es-discuss is hard to
    follow and the drafts are PDFs with some force download)

    wycats: that's fair, I'm not saying rewrite everything right
    now in terms of ES6
    ... my point is we should start paying attention to it

    marcosc: what can we do?

    wycats: create guidance to WGs about how to think about this,
    in coordination with TC39

    <Zakim> noah, you wanted to ask about balance between TAG and
    other wgs?

    noah: how much of this is the TAG's job and how much is other
    ... try to defer to WGs when they exist & have responsibility
    unless they're screwing up architecturally
    ... why here & not Web Apps?

    <slightlyoff> I do think they're screwing up big time

    wycats: this is interoperability, working with other standards

    marcosc: we have public script coord mailing list

    noah: just because we facilitate these things doesn't mean we
    own them
    ... we could still go to Web Apps and ask them to handle it

    annevk: I think we could do this through Web Apps
    ... there's a ton of WGs but they listen to Web Apps

    noah: they could also publish a Rec

    annevk: we're chartered for that, and they're not

    noah: APIs are generally in the Web Apps charter, aren't they?

    annevk: no

    <annevk> well APIs are

    noah: should we schedule a tutorial on what's new in JS?


      [54] https://speakerdeck.com/kitcambridge/es-6-the-refined-parts

    <annevk> RECs on how to do APIs... dunno

    timbl: I bet a bunch of W3Cers would be interested in that too

    <annevk> could be I guess

    noah: wycats, can you do it low notice? Wed?

    <Larry> there was a video talk at W3CConf that was good

    <Larry> Kit Cambridge ...

    wycats: I could cover a lot of ground on Wed

    Larry: there's a good video by Kit Cambridge on ES6

    <timbl> It would be good then to sync with Amy who organizes
    project reviews

    <Larry> [55]http://www.w3.org/conf/2013sf/video.html

      [55] http://www.w3.org/conf/2013sf/video.html

    <Larry> has the video

    <Marcosc> this is good too:

      [56] https://dl.dropbox.com/u/3531958/es6-favorite-parts/index.html#/

    noah: let's schedule 90 mins on Wed
    ... advertise that


      [57] http://lists.w3.org/Archives/Public/www-tag/2012Sep/0031.html

    jar: sounds like you're talking about same thing as Doug was
    complaining about
    ... Doug sounded very frustrated

    wycats: this is a targeted concern
    ... this is WebIDL doing something causing problems with TC39

    annevk: XHR is defined as an object, exposed on window, not on
    prototype window

    wycats: as far as TC39 is concerned, there is no prototype

    annevk: are you concerned about people defining things like
    methods on window or objects?

    wycats: the way you avoid exposing things on window is that you
    get objects via module import

    <slightlyoff> but you WILL have modules

    annevk: how do you avoid doing what we do?

    wycats: we fake it

    marcosc: how stable is the modules proposal?

    wycats: I'm not saying start doing modules right now, but that
    they start looking at it
    ... the syntax isn't stable

    <annevk> slightlyoff: I'm not saying that, I'm saying that
    nobody is telling us what to do

    timbl: how do serious websites deal with the transition?

    wycats: there'll be those that rely on browsers that have
    modules, and those that don't

    <annevk> slightlyoff: this is the first time I heard about this
    and maybe a bit from the TC39 echo chamber, but that never
    reaches us in clear terms :/

    timbl: will people write things twice, or will they fake it?

    <slightlyoff> transpilers

    wycats: once module syntax is stabilised, they'll stop writing
    them node style, and start writing them module style
    ... using AMD semantics
    ... write nice declarative form in syntax

    <slightlyoff> FWIW, we already have ES6-to-ES5 transpilers
    written in JS

    noah: server-side adaptation for those browsers?

    wycats: yes, that's already happening

    slightlyoff: ECMAScript transpiler in the browser
    ... already happening

    <noah> I think that was specifically an ECMAScript 6 ->
    ECMAScript 5 transpiler

    slightlyoff: larger point is that WebIDL is today a barrier
    ... core mismatches in the types
    ... we should be educating people about new features
    ... put people in mindset of users when designing APIs in the
    first place
    ... webIDL doesn't help make it easy for users
    ... eg constructible objects with no constructors
    ... you can't get an instance of this type
    ... it will hand you alien objects you have no recourse for
    ... makes you think about types that aren't efficient to
    express in JS
    ... eg numbers that aren't floats
    ... leads you towards designs that aren't appropriate
    ... at Google, we work through example code ignoring formal
    ... make that idiomatic
    ... work that back to its definition
    ... we could help people write idiomatic APIs

    <Larry> need JSON schemas

    <Larry> JSON isn't javascript

    <Zakim> timbl, you wanted to ask whether webIDL is right for
    close Ecmascript integration

    timbl: how JS-specific is WebIDL now?
    ... is it being enhanced with all the new things in JS?
    ... or is it language independent

    marcosc: there's like attribute foo, it forces you to create a
    setter function
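    [marcosc's point made concrete: a WebIDL `attribute` corresponds to an
    accessor property (a getter/setter pair) on the prototype, roughly as
    a JS author could write by hand. The interface and names here are
    invented for illustration:]

```javascript
// Roughly what "interface Widget { attribute DOMString foo; };" implies:
// an accessor on the prototype, not a plain data property.
function Widget() {
  this._foo = "";
}
Object.defineProperty(Widget.prototype, "foo", {
  get() { return this._foo; },
  set(value) { this._foo = String(value); }, // WebIDL coerces to DOMString
  enumerable: true,
  configurable: true
});
```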

    annevk: OMG IDL was language independent
    ... but didn't define a whole bunch of cases, didn't do
    constructors and stuff

    timbl: can you use all the tricks in JS in WebIDL

    <slightlyoff> we were successful in pulling WebIDL away from

    <slightlyoff> which it used to be joined at the hip to

    <slightlyoff> and there are left-over semantics from that era

    <annevk> that's such a non-issue though

    <slightlyoff> the mismatch is pretty large still

    <annevk> that was basically heycam's hobby thing

    <annevk> it didn't affect anything

    <slightlyoff> annevk the legacy is driven all the way through
    the design

    <slightlyoff> annevk it did, it introduced types that were JS

    <annevk> slightlyoff well yes the legacy is, there are dozens
    of specs and UAs written in terms of it

    wycats: in ES there's internal spec facilities
    ... which you can overwrite
    ... webIDL makes extensive use of these facilities
    ... want them to use the proper facilities in ES6
    ... if we want to have a system that describes what JS objects
    are doing, we should describe them as something that a JS
    developer could write

    slightlyoff: WebIDL drags you away from idiomatic JS
    ... doesn't teach you about how to design a JS API

    annevk: until you have an alternative that works for spec
    writers, don't criticise WebIDL

    slightlyoff: the advice is to design the API by example in JS

    wycats: this is an important distinction
    ... currently it maps onto C++ implementation
    ... like int32s

    <slightlyoff> it's not simple

    timbl: I'm worried the flip side
    ... how would it do describing jQuery?
    ... with lots of overloading

    marcosc: there's no problem with overloading

    wycats: WebIDL is mapping onto low level semantics of C++
    ... we want the spec device to map onto semantics of JS not C++

    annevk: there's no disagreement
    ... there's just not enough people to write the specs

    wycats: so we should make WebIDL better to use

    <slightlyoff> Marcosc I'm afraid that's just wrong

    marcosc: there's different things

    wycats: it's meaningless to define something in WebIDL that
    can't be implemented in JS

    annevk: you might want additional constraints

    <slightlyoff> annevk wanting additional constraints is the sort
    of thing that you should be considering without syntax

    noah: history is from a corba-like view of the world,
    language-independent spec of APIs
    ... JS emerges as major implementation

    wycats: no one wants language independence

    noah: are we aiming for no WebIDL and just some JS-based spec?

    annevk: WebIDL is a mapping

    wycats: we're just proposing that if there's something like
    WebIDL, it should be described in terms of JS

    slightlyoff: people who are not JS developers don't have a good
    guide about how to design JS APIs
    ... if you're used to C++ then you reach for WebIDL and use it
    ... and you get people building APIs that aren't suited for JS
    ... we could change WebIDL, or how we teach people to use it
    ... but let's talk about TypeScript or other options

    <slightlyoff> yes, that's right...it's about how to think about
    the problem

    timbl: sounds like it's the way of thinking issue that is the
    real problem
    ... the jQuery pattern of overloading, things like passing
    around functions
    ... these aren't like C++ at all
    ... is there a lot of lossage at the more functional, JS end of
    the scale

    <slightlyoff> yes, we lose a lot of that sort of flexibility
    in DOM APIs

    wycats: if you're a C++ programmer, you won't be writing
    idiomatic JS

    <slightlyoff> yes, there's a culture of failure around this

    annevk: people are copy&pasting the patterns from previous

    <Larry> news from last week: IETF is spinning up a JSON working

    <Larry> IETF is doing other JSON work. Including schema for

    <Zakim> Larry, you wanted to ask about WebIDL & JSON schemas

    Larry: IETF are working on JSON, maybe a schema language
    ... DP70, talking about when to use XML
    ... think it should be extended to talk about XML & JSON
    ... bringing this up because JSON schema talks about defining
    value types
    ... seem to have a relationship with webIDL
    ... if talking to TC39, consider role of JSON
    ... it's language independent

    <noah2> [58]http://tools.ietf.org/html/bcp70

      [58] http://tools.ietf.org/html/bcp70

    annevk: it doesn't have methods

    wycats: seems unrelated

    timbl: if you define a method, you give types of parameters
    ... if you have a complex structured parameter, such as a
    ... or a list whose values are of a particular type
    ... then quite a lot of the work in defining the interface to
    the method is defining the complex parameter
    ... and describing that is the same as a JSON schema problem

    Larry: the datatypes that are available in WebIDL are basically
    those available in JSON schema

    timbl: can you say that in WebIDL

    wycats: yes, WebIDL's problem is not lack of expressive power
    ... they just create crazy JS semantics

    Larry: my point is under general topic of coordination with
    TC39, you've identified one issue, about WebIDL
    ... while you're thinking about that, interaction with JSON is
    also important
    ... tried to get them to update charter to include coordination
    with TC39 & WHATWG & W3C
    ... W3C needs to be engaged in JSON update also

    wycats: if it's only a problem with finding people to write a
    strawman, we can work with that

    annevk: TC39 need to give guidelines to the people doing DOM
    ... there's just a lack of communication and lack of guidelines
    ... feels more like a shouting match
    ... we're just trying to solve a problem, and we're input
    ... all we get is "you're doing it wrong" with no concrete

    <slightlyoff> well, I am doing that

    <slightlyoff> on a weekly basis to API authors here

    <slightlyoff> right, nobody's assuming malice

    wycats: sounds like slightlyoff & I need to go back to TC39 to
    get communication working better

    noah: what would you like the TAG to support you to do?

    wycats: I'd like to go back to TC39 to get resource on working
    on successor to WebIDL

    noah: I don't know if we have a group to do something with it

    <Larry> script-coord mailing list is still active

    JeniT: When I did the work on the task force of the HTML and
    Data stuff, it worked pretty well. So, maybe we say there
    should be a task force.

    slightlyoff: I've spent a lot of time working with people who
    are trying to come up with APIs which are better
    ... people don't have experience, haven't got a guide about how
    to build an idiomatic DOM API
    ... we could create those guidelines, talk to TC39 about it

    wycats: I like Jeni's proposal to put together a small, focused
    task force

    noah: when we did that, we had a better understanding of what
    the issues were, as the TAG
    ... if we could float these ideas on the TAG mailing list and
    other lists
    ... say that we're considering task force or outreach to TC39
    ... see what pushback we get

    annevk: Mozilla has invested effort in transitioning from old
    bindings to WebIDL bindings
    ... the right things happen

    <slightlyoff> we have a bunch of perl scripts ;-)

    wycats: that means there's a constraint to be compatible with
    that tooling

    annevk: API specs are built around ReSpec which has native
    support for WebIDL

    <slightlyoff> I'd once again like to suggest that we can help
    people design better APIs without changing the tooling; we can
    push on both

    <slightlyoff> and they can happen in parallel

    noah: are you saying we should therefore go slow?

    annevk: short-term, having better advice would be awesome,
    especially backed by TC39
    ... then we can see whether people disagree

    noah: how does TC39 work? how would we engage with them?

    wycats: most of the work that gets done is done by champions
    ... committee is 20-25 people, once every 2 months F2F meeting
    ... there's a chair but he's not like you (noah)

    noah: should we get onto their agenda for a meeting?

    wycats: slightlyoff or I would become champions within the

    noah: I'm nervous that we're just starting on this, not sure we
    can make an informed decision

    slightlyoff: that's reasonable: we can look at design issues
    for example

    annevk: we can say we're interested in TC39's opinion

    noah: I need to know what the TAG needs to know
    ... draft of an email to TC39, on our private list, then sent
    on behalf of TAG
    ... vote on that on a telcon, to send email on our behalf

    wycats: annevk's suggestion is good, just go back say that
    there's interest
    ... in getting TC39's opinion about good JS API design

    noah: let's draft a resolution for the minutes that you can
    point at
    ... use private list for drafts that we haven't yet agreed on
    as the TAG
    ... usually better if this ends up in a specific WG

    annevk: I think that's Web Apps

    wycats: I think that TC39 should do it with people in charge of
    ... but perhaps there aren't those people or they don't have

    annevk: Cameron McCormack is driver of WebIDL

    wycats: TC39 have lots of skilled resources who might be
    interested in working on this
    ... a TF that has lots of people who don't have bandwidth isn't
    going to work

    <slightlyoff> the last time heycam was at a TC39 meeting, IIRC,
    was nearly 2 years ago

    <slightlyoff> and we got good prototype linearization done

    <slightlyoff> but that was a long time ago

Summary of Action Items

    [End of minutes]

     Minutes formatted by David Booth's [59]scribe.perl version
     1.137 ([60]CVS log)
     $Date: 2013-04-11 19:38:28 $

      [59] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
      [60] http://dev.w3.org/cvsweb/2002/scribe/


       [1] http://www.w3.org/

                                - DRAFT -

            Technical Architecture Group Face-to-Face Meeting

19 Mar 2013

    See also: [2]IRC log

       [2] http://www.w3.org/2013/03/19-tagmem-irc.txt


           Ashok Malhotra, Jeni Tennison, Marcos Caceres, Yves
           Lafon, Anne van Kesteren, Henry Thompson, Larry Masinter
           (phone), Noah Mendelsohn, Alex Russell (phone), Tim
           Berners-Lee, Yehuda Katz


           Noah Mendelsohn

           Marcos Caceres, Anne van Kesteren & Yehuda Katz


      * [3]Topics
          1. [4]Local Storage
          2. [5]Publishing and Linking
          3. [6]Discussion with Jeff Jaffe
          4. [7]Polyglot
          5. [8]httpRange-14
      * [9]Summary of Action Items

Local Storage

    NM: The TAG has had things to say about local storage - which
    APIs are good, which models are good, etc.. About the
    architecture, what is good, what could be better. And also
    about the use cases.
    ... who is not familiar with the TAG's finding about local storage?

      [10] http://www.w3.org/2001/tag/doc/IdentifyingApplicationState-20111201

    YK: I would like to broaden the discussion to talk about
    resources generated locally and URIs to reference those things.
    ... e.g., bookmarking and sharing

    NM: bookmarking is not as critical because you can use your own
    arbitrary API
    ... some of this was motivated by hash bang (#!) being used in

    YK: the thing to keep in mind about push state, the server
    needs to return the same resource

    <slightlyoff> AR: the question here is about the app
    development model

    <slightlyoff> AR:i.e., what does a local cache "mean"?

    <slightlyoff> AR: the web doesn't have any sync infrastructure

    <slightlyoff> AR:which means that there's a fight

    YK: in URLs, hash slash is used because it doesn't get sent to
    the server

    TBL: is it helpful for caching?

    YK: yes

    TBL: On the semantic web, I've seen people send the fragid as a
    HTTP header
    ... because authors wanted to know what users were looking at
    with regards to the data (as HTTP does not send the fragid in
    the request)

    YK: I can see why someone would want that.

    TBL: but, in a browser, you can't capture the fragids all the
    time because you can open tabs by shift click, etc. so it's
    hard to track how they are navigating an app or set of data.

    <slightlyoff> AR: the server has NO IDEA

    NM: I'm interested in local/remote transparency: we all know
    how the classic Google Maps URI scheme works (that is, in
    Google Maps, JS can reconstruct the right map from the URI).
    But if you use those URIs in a non-JS-capable browser on a
    low-end device, the browser will still serve a rendered image
    of the same map. This is cool because the URI still identifies
    that map representation even if you don't get the UI
    enhancements like being able to pan/zoom easily.
    ... The URL should resolve for both the client and the server.

    <noah> I'm saying just a bit more than the URI scheme, but yes:
    I think it's preferable to not bake into the names of things a
    commitment to implement resolution on client vs. server

    <wycats__> as a general rule, people who are trying to do these
    things aren't listening to the TAG's findings, but do things
    that are economically useful

    <slightlyoff> AR: In general, this is about navigating a
    partially disconnected graph of nodes with ajax. Some of
    those nodes are "cached" or "surfaced" locally, so the role of
    the URL is as a lossy address. It doesn't encode most of the
    UI state; but it's an address.

    YK: I wanted to talk about what people are doing today. Most of
    my day job is to build a system that does what everyone in the
    room is talking about. People are doing 2 things: either they
    are intercepting clicks (then XHR to get data); or, on the
    other side, all the links are on the client side, but they
    refer to resources that are local to the client side
    ... but you should still architect your app as if it were
    fully running off a server

    <slightlyoff> AR: ...and to build those toolkits, we need to
    surface the idea of a graph of data that users are navigating

    YK: if we want apps to be bookmarkable, then we need a
    scheme/model that makes sense for the app/context of usage.

    AM: what are you using for local storage?

    YK: there is no good answer to do offline.
    ... you can use local storage to simulate XHR sometimes

    NM: what does the TAG want to do or say about this?

    YK: when you store your stuff in local storage, you need to
    have enough information that it's as if coming from a server.

    <slightlyoff> AR: this is one of the things the [currently
    private/secret] Navigation Controller design is explicitly
    working to enable.

    NM: Is there something fruitful for us to do here that explains
    the model (how local data relates to remote data ... how that
    data can be made addressable)?
    ... we have a finding on this but people are not reading the
    finding

    YK: there are people who are scared of client side stuff
    because they are scared of losing stuff.

    ??: Who is the target for this?

    <slightlyoff> as a first pass, it's a start

    JT: we should have a central document, and spin documents for
    the various communities

    NM: someone needs to review the finding, and see what needs to
    be changed or if it's already good enough

    YK: I will review the finding

    <noah2> . ACTION: Yehuda with help from Anne(?) to review TAG
    finding on application state and propose TAG followup to
    promote good use of URIs for Web Apps including those with
    persistent state

    YK: there is also the case for private URIs that are not
    intended to be shared

    <noah2> ACTION: Yehuda with help from Anne to review TAG
    finding on application state and propose TAG followup to
    promote good use of URIs for Web Apps including those with
    persistent state with focus on actual examples [recorded in

      [11] http://www.w3.org/2013/03/19-tagmem-minutes.html#action01

    <trackbot> Created ACTION-789 - With help from Anne to review
    TAG finding on application state and propose TAG followup to
    promote good use of URIs for Web Apps including those with
    persistent state with focus on actual examples [on Yehuda Katz
    - due 2013-03-26].

    YK: use case to change the URL that is not in the history stack
    (is transient) ... for example popping up a form that is
    particular for a user. But the URL is not supposed to be

    <noah2> ACTION-789 - Due 2013-04-16

    YK: In Ember, we have a system that forces you to create a new

    <trackbot> ACTION-789 -- Yehuda Katz to with help from Anne to
    review TAG finding on application state and propose TAG
    followup to promote good use of URIs for Web Apps including
    those with persistent state with focus on actual examples --
    due 2013-04-16 -- OPEN


      [12] http://www.w3.org/2001/tag/group/track/actions/789

    YK: you need to deserialize the state from the URL

    <slightlyoff> this is about addressing something in the graph of

    <slightlyoff> that graph is defined by the app

    JT: I would expect that if it's in the address bar, I should be
    able to use it again

    <slightlyoff> (and is app controlled)

    <JeniT> ok, so like a preview of a blog post?

    <slightlyoff> but what I think wycats__ is pointing out is that
    there are states and addressable items that don't always live a
    long time

    <slightlyoff> so the question is "what's the role of addressing
    for ephemeral state?"

    <JeniT> that's an interesting question

    AM: Caching in a very smart way.

    <slightlyoff> I can add anyone here to the github repo

    <slightlyoff> just dm me

    <wycats__> slightlyoff: confirm, I agree that that reflects my

    <wycats__> I agree 100% with what slightlyoff is saying

    <wycats__> the way Ember works is that segments in URLs
    represent serialized models. When you load a page, those models
    are deserialized and become the models that power the local
    templates. When you transition to a new state with a new model,
    that model becomes serialized into the URL.

    <wycats__> pushState is the low-level primitive
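
    [The model wycats describes - URL segments as serialized
    models, deserialized on load, with pushState as the low-level
    primitive - might look roughly like the sketch below. The
    function names and URL shape are illustrative, not Ember's
    actual API.]

```javascript
// Serialize a model into a URL segment, e.g. {type: "post", id: 42} -> "/posts/42"
function serializeSegment(model) {
  return `/${model.type}s/${model.id}`;
}

// Deserialize a path back into the models that power the local templates,
// e.g. "/posts/42/comments/7" -> [{type:"post",id:42},{type:"comment",id:7}]
function deserializePath(path) {
  const parts = path.split("/").filter(Boolean);
  const models = [];
  for (let i = 0; i < parts.length; i += 2) {
    models.push({ type: parts[i].replace(/s$/, ""), id: Number(parts[i + 1]) });
  }
  return models;
}

// On a state transition, the new models are serialized into the URL;
// history.pushState is the low-level primitive (browser-only, hence the guard).
function transitionTo(models) {
  const url = models.map(serializeSegment).join("");
  if (typeof history !== "undefined") {
    history.pushState({}, "", url);
  }
  return url;
}
```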

    AR: templating has state. The URL is really a state. Email
    clients are exactly the same way. URLs address nodes in
    application management. So what you want to do is to build a
    structure that helps you address the content. Android provides
    a good model for this, which we should look at. We don't have a
    good language to talk about this stuff, with hash bangs and
    push state.

    YK: Anne and I will go out and read the finding, and provide
    our feedback.

    <noah2> I'm remembering that I did a blog post on this about 2
    years ago:


    <noah2> Probably nothing that would surprise anyone now, but it
    seemed controversial at the time

    JT: There is something deeper here: moving towards a world
    where we have separation between the shell of the app and data.
    ... I'm quite interested in that from an open data perspective

    AM: The finding is focused on a different idea - how do you
    identify application state. You guys are going beyond that with
    regards to going offline.

    <slightlyoff> but I go INTO a level

    <slightlyoff> and I go to the navigation screen for a game

    NM: there are apps that are not very documenty. For example, in
    a car game, describing states as URIs might not be helpful.
    Then there are other apps that do feel documenty, but what you
    want to link to really depends on what the user's needs are.

    <slightlyoff> AR: I don't think the word "document" is helping
    us here

    YK: there are a bunch of states in a model, and URIs represent
    those models and can be deserialized.

    YK draws picture on board of how a URL can map to a
    hierarchical data model

    <slightlyoff> AR: well, I think DB state is also a bit
    wrong...we're talking about moving between nodes in a graph of
    <slightlyoff> (it might be relational)

    YK: several segments of the URI end up representing
    different models
    ... sometimes people want to do this for concurrent states, but
    it doesn't work well.
    ... you may have several models represented in your URI

    <wycats__> YK: but it at least allows people to represent their
    sub-app state in a URI

    <wycats__> JeniT: if you're interested in this, I'd be happy to
    work with you as well

    <trackbot> ACTION-756 -- Jeni Tennison to draft rough product
    page / briefing pape for "distributed web applications" -- due
    2012-11-06 -- OPEN


      [14] http://www.w3.org/2001/tag/group/track/actions/756

    <trackbot> ACTION-772 -- Jeni Tennison to with help from Larry
    to propose CR exit criteria for fragids finding -- due
    2013-02-12 -- OPEN


      [15] http://www.w3.org/2001/tag/group/track/actions/772

    <noah2> close ACTION-772

    <trackbot> Closed ACTION-772 With help from Larry to propose CR
    exit criteria for fragids finding.

    <JeniT> ACTION: Jeni to do new Editor's Draft of fragids spec
    for approval to publish as CR [recorded in

      [16] http://www.w3.org/2013/03/19-tagmem-minutes.html#action02

    <trackbot> Created ACTION-790 - Do new Editor's Draft of
    fragids spec for approval to publish as CR [on Jeni Tennison -
    due 2013-03-26].

    <noah2> scribenick: noah2

Publishing and Linking

    NM: Ashok, can you orient us? We need to figure out what the
    future will be for this work. Can you please remind our new
    members why we got into this and what we're trying to do?

    AM: Sure. This arose because of concerns about legal cases. The
    TAG felt we needed to clarify to the legal community the
    differences between publishing and linking. We haven't had an
    easy time striking the right balance between focusing on
    technical issues vs. framing things in a way that will help the
    legal community.
    ... We did this work, but reviews haven't been good: responses
    included that it's too complex, the audience isn't clear, and
    for the non-technical/legal audience it's not sufficiently
    clear or at the right level.
    ... What to do isn't clear. We could try to fix this, turn it
    into something else, or abandon it.

    TBL: Thomas Roessler had significant concerns.
    ... There's a concern that perhaps we cross lines where we're
    worrying too much about policy.
    ... We need to do something, not clear what.

    AM: There hasn't been disagreement that this is useful.

    NM: I think there has been such disagreement.

    TBL: At least the form is controversial

    AM: OK, I think we need to think about other formats


      [17] http://lists.w3.org/Archives/Public/www-tag/2012Dec/0139.html

    <annevk> ^^ email from tlr

    b is 401 for me

      [18] https://www.w3.org/2001/tag/doc/publishingAndLinkingOnTheWeb

    NM: Personally would love to have impact here. As chair, very
    concerned about efforts like this that keep getting reconceived
    and never gel.

    AM: Idea on last telcon was to take one section, the publishing
    section, and expand it.



    <annevk> I found
    -2011-10-27.html via Google which has a number of broken links


    JT: Taking bits at a time seems reasonable.

    <Ashok> [21]http://www.w3.org/TR/publishing-linking/

      [21] http://www.w3.org/TR/publishing-linking/

    NM: Jeni, are you interested in actually doing this? Not sure
    it's the highest priority for you.

    JT: (pauses...looks conflicted) right, it's probably not
    highest priority

    NM: I also want to raise the suggestions Larry's made that this
    relates to governance:


    AM: I might be willing to contribute time on this, even as an
    outgoing TAG member

    <slightlyoff> is it not sufficient to say "we believe that
    publishing is not the same as linking"?

    <JeniT> noah: one of the lawyers we talked to said that they
    wanted it published as a Rec because it has more weight than
    <slightlyoff> the audio on FT is actually better than the
    bridge = )

    <slightlyoff> thanks to plinss for that

    <JeniT> [we assess feeling in the TAG]

    <slightlyoff> JeniT, can that REC be a one-liner?

    <slightlyoff> 'cause i think we could get 100% support behind
    "The TAG believes that publishing is distinct and different to
    linking in both intent and impact. This is fundamental to how
    the web works."

    <JeniT> slightlyoff, well, there's boilerplate in Recs that
    would make it more than one line, and probably it would have to
    have a bit of explanation around the diff between embedding &
    linking to make it make sense

    <JeniT> otherwise, yes

    NM: How about we agree, Ashok, that if you do any further work
    it's at your own risk, but I will commit to at least asking the
    TAG to review whatever you come up with.

    <darobin> if we can kill two birds with one stone, we could ask
    W3C to make the REC boilerplate one line as well... that'd give
    us a two line spec

    <slightlyoff> I don't know how to change the audio levels on my
    side from FT vs. Google Voice

    <slightlyoff> will try to figure that out, though

    <slightlyoff> apologies.

    YK: I would be more sympathetic if a group like the EFF reached
    out to us.

    JT: We did have early input from Thinh Nguyen, who had done
    legal work with Creative Commons.

    YL: We had some request from the membership

    TBL: People don't know to reach out to the TAG

    YK: I doubt the people doing the arresting will read this

    TBL: We're not expecting that

    <slightlyoff> figured out the speakerphone = )

    NM: We are hoping to offer something to the defense (or
    prosecution) lawyers and judges who may wish to better
    understand how the system they're discussing actually works


    TBL: Publish as a note first, and then work on making
    excerpts better

    AM: Could do excerpt first

    JT: Isn't it better to get the whole thing out there?

    AM: Ah, OK, I guess so.

    ACTION-776: Due 2013-04-02

    <slightlyoff> I support publishing a trimmed version of this



    close ACTION-779?


    <slightlyoff> I'm not on the queue

    <slightlyoff> (I think)

    <slightlyoff> yeah, i have nothing to add beyond what I just
    typed above

    JT: I think it was just a time management issue

    <Ashok> Alex, by "trimmed version" did you mean one section or
    the whole doc trimmed down?

    <slightlyoff> Ashok: whole doc trimmed to what you think is the
    meat. Honestly, when you think it's good, I'm fine with it.

    <Ashok> OK. Thx!

    <JeniT> see [23] for background on registries

      [23] http://www.w3.org/2001/tag/2011/12/evolution/Registries.html

    NM: I'm curious, do folks feel like the TAG should dig into
    improving the situation with registries?

    AVK: I think the community is using Wiki pages

    YK: Maybe we should see what the community does first

    JT: A possible role for the TAG is to get people thinking about
    the solution.

    AVK: I think somewhere between wiki page and central is the

    NM: Are you sure it won't be much harder to get right after ad
    hoc things are done?

    AVK: HTML group is still trying to figure out what to converge
    on
    ... IETF has heard feedback and is working on improving things
    ... They're getting better at quick registration, even without

    NM: Tim, my perception is that registries have been important
    to you. Am I right, and are you OK with the TAG "backing off"
    in this space?

    TBL: I think I'm OK with backing off a bit [scribe is having
    trouble getting the nuances of what Tim is saying...I think
    he's basically saying that watching and listening for a while,
    and fact finding, is ok]
    ... Well, we've tried to avoid central registries, prefer using
    URIs and avoiding centralization. We seem to depend on DNS, but
    would prefer if that were the only registry.

    AVK: People use registries instead of URIs because URIs are too
    long
    TBL: There's the default base URI approach

    NM: As used in Atom and for some other things.

    <JeniT> timbl: sometimes it's useful to have a centralised
    registry to prevent proliferation

    <JeniT> ... it depends on the context in which the values are
    used
    TBL: We need to remember there is a cost to deploying things.
    So, the design point is not necessarily to encourage users
    around the world to create 10 new ones a day.

    <slightlyoff> isn't this about huffman coding?

    <slightlyoff> how long is the break?

    <slightlyoff> that's me

    <Marcosc> scribe: Marcos

    <slightlyoff> how do I identify +44... as me?

    <Marcosc> scribenick: Marcosc

Discussion with Jeff Jaffe

    <slightlyoff> thanks JeniT! one of these days I'll get the hang
    of this W3C thing

    NM: we don't have a set agenda for this discussion
    ... Noah introduces everyone ...

    JJ: Going to spend a few minutes describing what I would like
    to see the TAG work on....
    ... like any WG, the TAG does its best work when it's working
    on what it wants to work on. Happy to see new blood on the
    TAG. I would like to reflect on what I think are the big issues
    for me personally. Reflecting on what is going on in the real
    world, with regards to web arch: web security is the big issue.

    You read about it every day in the newspaper. The security
    issues of the web are quite striking and often reported. The
    lay person may think that the TAG is likely working to fix

    <slightlyoff> I am contributing directly to CSP 1.1

    JJ: security is at risk on all sorts of levels (from technical
    to social engineering), so it's quite challenging to fix.

    <slightlyoff> also, I'm working on an extension that will let
    users control their experience WRT to XSS:

      [24] https://github.com/slightlyoff/CriSP

    JJ: it's time to implement the fixes. When I think about this,
    I don't know where to turn at the w3c, so I turn to the TAG.
    Second thing, constructing a coherent Web architecture from the
    odd 60-70 groups. The fact that lots of groups are not
    coordinating is an issue - so this is an opportunity for
    the tag to help.

    <slightlyoff> I'd like to vehemently disagree too

    <noah2> MC: I somewhat disagree...the original idea was that
    the vocabularies would be decentralized.

    <slightlyoff> well, *some* people do

    <slightlyoff> but we don't have much data today

    <slightlyoff> thanks

    JJ: Third area of opportunity - centralised vocabularies ... a
    bunch of companies put together schema.org as a place to track
    vocabularies. Why didn't the W3C think of that?! They've done a
    great job. The important thing about schema.org is that
    someone is caring for it.

    <slightlyoff> happy to wait

    JJ: The big miss was not the centralisation, but that we didn't
    provide a way for people to find these centralised sources.
    We've started a headlights exercise to help us look into the
    future in certain areas - it would be great if the TAG could
    contribute in identifying gaps.

    <JeniT> I think this plays to the layering question

    AR: Thanks Jeff. It's not that the w3c missed an opportunity
    with schema.org; the Web has documented its own semantics
    (e.g., microformats). What the TAG can do is start to inject
    that perspective and ask WGs to add evidence with regards
    to vocabularies.

    <slightlyoff> JeniT: yep. It's all about trying to create an
    environment that reaps the best from every generation and puts
    it in the "dictionary".

    JJ: my view is, if we have something that is critical to web
    technology, why was the TAG not involved in that?

    AR: I don't think it's fair to say that schema.org has

    <JeniT> Jeff, your point is that you need the TAG to raise
    these kinds of issues to you, right?

    <jeff> Yes, Jeni, Thanks.

    <slightlyoff> how is this relevant? we've clearly failed, as
    jeff says. HTML hasn't evolved to include these common nouns
    and verbs

    <noah2> Wondering if we're diving too deep on this one item,
    which I think Jeff introduced as an example of a case in which
    he would have welcomed an earlier alert to community

    TBL: I don't agree that we missed an opportunity. We've
    discussed this long and hard, we've had conferences about this,
    etc. There are PhDs on these problems, etc. Meanwhile, yes,
    these groups have been somewhat disconnected from the W3C.
    There has been some pushback from certain places. So, the
    semantic web folks regrouped around "linked data", and various
    groups started producing data, like the UK government. Going
    back to schema.org, the w3c and various communities have been
    trying to find solutions for this for many years. The cynical
    folks say that there weren't enough big players involved...
    it's a big topic. Maybe it's time to bring together what is
    happening at schema.org back to other communities.

    <JeniT> perhaps one answer is to work out what the big
    companies are going to worry about

    <slightlyoff> I think we missed signs much earlier

    <slightlyoff> microformats, JS libraries, WAI-ARIA

    <slightlyoff> the TAG missed the boat on all of those

    TBL: so, it's not like there has not been an attempt to address
    this issue of centralisation.

    <slightlyoff> schema.org is just the latest in a string

    <JeniT> slightlyoff, so what's next?

    <slightlyoff> JeniT: we watch the web as it's evolving, try to
    encourage paths that reduce friction to evolution, and help WGs
    pay attention to that evolution

    <slightlyoff> JeniT: we can be agents of change for data-driven
    spec evolution

    <JeniT> slightlyoff: sure, and that's what the TAG thought that
    it was always doing, and yet as you say it misses things

    <slightlyoff> JeniT: it's in our charter to say "data can bear
    on this problem, why don't we look at it?"

    <slightlyoff> JeniT: sounds like the TAG looked at things that
    weren't the public web

    <JeniT> slightlyoff: what does your examination of the public
    web tell you *now* about the things that Jeff needs to worry
    about?
    <slightlyoff> JeniT: I'm trying to figure that out:

      [25] https://github.com/slightlyoff/meaningless

    <slightlyoff> JeniT: and trying to build tools to help tell us
    what we don't know

    <slightlyoff> here's what my browsing for the last week or so
    turns up: [26]http://meaningless-stats.appspot.com/global

      [26] http://meaningless-stats.appspot.com/global

    <slightlyoff> hopefully this extension/reporting system can be
    released soon

    <noah2> JJ: describes successful workshop on ebooks, points out
    that Pearson (spelling) has just joined W3C. In general, TAG
    may want to consider innovations in this area.

    JJ: last month we had a very successful workshop on
    e-publishing - with great participation. We concluded that that
    industry has a strong reliance on W3C technologies. If we
    were to work closer together, we would have a richer web. We
    have a call next week so the CSS wg can coordinate with the pub
    community. There are some other workshops coming up. When new
    industries get involved, it might be good for the TAG to be
    there to help make links.

    <slightlyoff> I'd like to understand Jeff's focus on security

    <slightlyoff> what JeniT said = )

    JT: Can we talk about security?


    JJ: sure

    <noah2> JT: Ah.. not that I have anything particular to
    say...we recognized it's big. I'm still not sure we have the

    <noah2> +1

    <wycats__> 1+

    JT: We might not have enough people in the TAG with a
    background in security.

    <noah2> JT: What is W3C strategy?

    <noah2> JJ: We have working groups...(names them)

    JJ: we do have a number of groups focused around security
    ... but my perspective is that, while we do some security
    stuff, we don't address security at large
    ... For example, I was reading the discussions about DRM and
    people complaining that it's broken. But the Web itself is
    "broken", yet we have standardised the platform...

    AvK: Can you be more concrete?

    JJ: example 1: in the press, the Chinese government has been
    running an operation to go through the web and attack
    information resources around the web.

    AvK: But there are many layers at where this could be happening

    <Ashok> We need some "out of the box thinking about security"
    ... the current approaches do not seem to work

    <slightlyoff> Yves: heh

    YK: [People think] we have elite hackers that can break into
    anything. That's a poor understanding: there is poor security
    on the Web and people don't understand that.



    JJ: I was not proposing that the TAG can fix social engineering
    problems. Though there might be some technological solutions to
    fixing some problems, like short passwords.

    <JeniT> isn't the SSL CA issue something we should push on?

    <slightlyoff> yeah, compared to DNS ;-)

    <JeniT> because no one else is

    JJ: The core Web architecture itself is not impervious to
    attack
    <slightlyoff> SSL CA issue is governance. Certs are being
    devalued every year.

    <slightlyoff> see SSL EV

    <JeniT> slightlyoff: so, what's the solution to that?

    <jeff> [To the point of lack of security expertise on the TAG;
    I believe that the TAG could study something and bring in
    additional experts.]

    <Zakim> noah, you wanted to talk about how TAG is constituted

    <timbl_> SSL MITM

    <slightlyoff> jeff: that's fair. CSP is currently our best hope
    against XSS, which is our ring-0 attack

    <jeff> +1 to delegation

    <slightlyoff> jeff: and I'm working with the CSP WG directly

    NM: The TAG is an interesting body in its makeup. We choose
    more or less what we work on - but TAG's scope is huge given
    that it covers just about everything. The membership did a
    really good job at selecting a good range of people with a
    range of skills. Although we have managed to do some things
    well as the TAG, the TAG doesn't have a good track record with
    regards to security.

    <annevk> This is the second time I've heard about two webs. For
    an organization that cares about One Web this is surprising.

    JJ: if we had a new Web, we could help communicate how to
    address some of these security issues.

    <slightlyoff> btw, Mozilla deserves props for driving CSP

    <slightlyoff> this isn't unknown

    <slightlyoff> composition under mutual suspicion is hard, but
    not unknown in the literature

    <slightlyoff> and that's why the web security model looks a lot
    like capabilities

    TBL: Historically, SSL used to give users a fake sense of
    security. The TAG suggested the Web Security group, which may
    have helped fix up some of the UI issues around the SSL
    padlock. There are two areas where the TAG could help: where
    JS code comes from (it can come from anywhere) but all runs in
    the same scope. Second area, Mark Nottingham mentioned
    man-in-the-middle attacks using SSL by installing fake
    certificates. This happened also in certain countries. So
    there is real shakeup needed in the whole browser certificate
    model.

    TBL: if there was one thing, SSL man-in-the-middle attacks...
    the TAG should talk to groups and make sure they are aware of
    the problem

    <slightlyoff> it's actually hard to describe what that mode is.

    <slightlyoff> I'm working to build a profiling chrome extension
    that helps you understand what the baseline trust is

    <slightlyoff> so you can lock yourself down to that

    YK: the main issue with CSP is that it does not define a
    "mode". What I think is desirable is to provide advice: if you
    are starting a new site, these are the headers you could use.

    TBL: the validator could also help check CSP.

    <slightlyoff> if you disallow inline script via CSP, inline
    handlers are already disabled

    MC: it's like web lint for CSP

    <wycats__> slightlyoff: but there is no validator that yells at
    you
    <wycats__> no linting tool that complains in your editor

    <slightlyoff> wycats__: just load the page and look at the
    console
    JJ: the TAG needs to think broadly about the large issues.

    <slightlyoff> wycats__: also, i'm looking to get events added
    for inline reporting

    <slightlyoff> wycats__: 'cause right now you can use a
    report-only URL to see violations, but that's a bit spammy in
    the spec

    <wycats__> slightlyoff: that assumes you can install the CSP

    <slightlyoff> wycats__: see the chrome extension

    <wycats__> slightlyoff: I want it in my editor ;)

    <slightlyoff> wycats__: the goal of the extension is both to
    help you lock yourself down and to build good policies for
    sites

    <slightlyoff> wycats__: Mike West and I can use help on the
    configurator UI

    <wycats_> scribe: Yehuda Katz


    <wycats_> Date: 19 March 2013

    <ht> scribenick: wycats_

    <slightlyoff> I may need to turn off the FT to participate by
    phone effectively

    noah: Polyglot is a contentious topic. Can we have Henry remind
    us of what TAG and HTML WG have already done?

    ht: We filed a request ages ago that said we would like to see
    a polyglot spec on the rec track

    noah: I don't think we said rec track

    JeniT: TR space

    <noah> TR space also includes W3C Notes

    ht: the WG produced a working draft
    ... about 6 months ago, without any other event occurring,
    Henri Sivonen requested that the TAG withdraw, and that's where
    we are
    JeniT: we responded to that request


      [28] http://lists.w3.org/Archives/Public/public-html/2012Dec/0082.html

    JeniT: that turned into a bug on the polyglot spec to add a
    ... that bug has now been assigned to somebody

    <noah> Here's the original request from the TAG to HTML WG, as
    conveyed by Sam Ruby:

      [29] http://lists.w3.org/Archives/Public/public-html/2010Mar/0703.html

    ht: I'm assuming that we're in normal W3C ground rule space
    ... TAG made a decision to make a request
    ... until we make a decision to withdraw the request, the
    request stands

    noah: that's formally true, but I'd rather not emphasize that
    in order to avoid raising tensions just now
    ... we won't rescind without something resembling consensus

    ht: the bar is higher for changing your mind than for a
    greenfield decision

    noah: you're right but again, I'd rather not emphasize
    procedural issues just yet
    ... this request was sent by Sam Ruby on our behalf to the HTML
    WG
    ... my recollection is that the history of this was that there
    was some contention about this
    ... we arranged a F2F at TPAC with HTMLWG and TAG

    <masinter> i would change "work identically" to "be equivalent"

    <masinter> in

      [30] http://lists.w3.org/Archives/Public/public-html/2010Mar/0703.html

    noah: I thought we had consensus to have Sam do this

    ht: I don't remember

    noah: then that's my speculation

    <masinter> or "work equivalently" perhaps. it needs to be a
    useful equivalence

    <noah> Sam's original request.

    <noah> Sam conveyed this to the HTML WG saying it was "an
    action item from the TAG" asking for a polyglot spec "in TR
    space." Noah >thinks< but isn't sure this resulted from a
    joint HTML/TAG F2F

      [31] http://lists.w3.org/Archives/Public/public-html/2010Mar/0703.html

    <noah> About 6 months(?) ago Henri Sivonen asked HTML WG not to
    go this way....(and pick up with description above)

    noah: As Henry says, the formal situation is that we made a
    request, and the HTML WG hasn't formally answered us.
    ... let's hear from TAG members who want to convince us to
    change our position on this

    <slightlyoff> need to redial

    noah: opening the floor for 10 or 15 minutes to listen to pro
    and con arguments
    ... Anne was suggesting that you [Alex] may be useful to speak
    to the concerns

    <masinter> I think the request is fine, except that 'work
    identically' is probably too strong, 'work equivalently' would
    be fine and much more useful

    slightlyoff: I'll try to be as even handed as possible but I
    have an opinion
    ... what is in the channel matches my knowledge of the
    ... (1) there seem to be a series of unstated assumptions about
    what we would like to happen and what can happen
    ... polyglot *can* happen
    ... it's relatively clear that we don't know for sure how
    important this is

    <JeniT> I don't think we should be wasting time talking about
    pros and cons of polyglot, more relevant is whether or not
    changing the request will (a) make any difference and (b) make
    a positive difference

    slightlyoff: we have a theory

    <masinter> the origin of the request was earlier in the
    HTML/XML task force discussions

    slightlyoff: I think there's concern from Henri Sivonen that we
    might be giving what appears to be advice, when it may simply
    be the case that the document is outlining one potential
    subsetting. Intent vs. observation is one way to characterize it
    ... there is no signpost to suggest whether a document is
    polyglot or not
    ... and there doesn't appear to be a strong argument for how or
    why to publish in polyglot so other people know
    ... which seems fatal to the intent argument

    noah: can you clarify?

    slightlyoff: there's a signage question
    ... how does anybody know whether there is more or less
    polyglot or if anyone is publishing polyglot
    ... you would need to parse both HTML and XML in order to
    determine it
    ... it doesn't seem to me that the arguments about intent hold
    much water
    ... as a note that describes a state of nature I think polyglot
    is fine
    ... but I don't think it rises to TAG's level of interest
    ... I would like to understand from the folks who designed it
    whether it is meant to be intent or observation
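    Alex's "signage" point lends itself to a concrete sketch (an
    illustration, assuming only the Python standard library): absent any
    marker, the only way to tell whether a document is polyglot is to feed
    it to both parsers.

```python
# Minimal sketch of the "signage" problem: there is no marker on a
# document saying "I am polyglot"; you can only find out by feeding the
# bytes to both an XML parser and an HTML parser. Stdlib only; a real
# check would also compare the resulting DOMs.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

def parses_as_xml(text: str) -> bool:
    """True iff the document is well-formed XML."""
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False

def parses_as_html(text: str) -> bool:
    """True iff Python's tolerant HTML tokenizer accepts the document.
    (Spec HTML parsing almost never fails, which is exactly why there
    is no cheap signpost for polyglot content.)"""
    try:
        HTMLParser().feed(text)
        return True
    except Exception:
        return False

doc = ('<html xmlns="http://www.w3.org/1999/xhtml"><head>'
       '<title>t</title></head><body><p>hi</p></body></html>')
```

    Here both checks pass for the polyglot-style `doc`, while HTML-only
    markup such as `<p><br></p>` fails the XML check.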

    <Zakim> ht, you wanted to make the arch. arg't

    ht: let me try to make the case
    ... I think the grounds for the case are most easily motivated
    by making the historical parallel that Larry made in his recent
    email
    ... with the benefit of hindsight, the consensus position is
    that the W3C made a mistake 6-7 years ago with XHTML
    ... with the idea that the W3C's spec-writing and related
    activities would focus exclusively on XHTML

    <masinter> I worked on XHTML in the HTML working group in
    1999-2000 when I worked for Xerox and then AT&T

    ht: and that "HTML would wither away"
    ... because of the manifest benefits of XHTML
    ... we were wrong

    <noah> Larry sent a number of emails, but I think the one to
    which Henry refers is:

      [32] http://lists.w3.org/Archives/Public/www-tag/2013Mar/0082.html

    ht: it's cost us a lot to scramble back
    ... I think it would be exactly as culpable to believe that
    HTML5 is the only way forward for the web

    <masinter> the only "mistake" was to have a HTML working group
    at a time when Microsoft & Netscape were trying to kill each
    other and neither were participating in the HTML working group

    ht: I don't think it's unreasonable on the part of some people
    to feel that polyglot is a sort of last ditch effort on the
    part of those no-hoper XML folks to keep a toe-hold on the web
    ... I want to say that just as it was a very fortunate thing
    that we added the relevant appendix to the XHTML spec about
    interop with text/html
    ... it seems to be precisely the reason why polyglot ought to
    be documented
    ... not because we're pushing or endorsing it
    ... but because as Alex said, it's factually the case that the
    subset exists
    ... what are those goals?
    ... people with "certain goals" will benefit from the
    description of the subset
    ... it is evidently the case that people want to do this
    ... the W3C owes it to its constituencies to make it convenient
    and gracefully interoperable

    wycats: can someone please describe the use-case crisply?

    <slightlyoff> does rescinding the request change the
    predictable outcome?

    slightlyoff: we want the web to be safe for people who believe
    that their constituencies need XML that is parsable and
    displayable with minimal failure as text/html

    <Zakim> noah, you wanted to make the case that having a spec is
    useful
    <JeniT> slightlyoff, if it doesn't, why rescind it?

    noah: I agree with people who say that it is an emergent
    property of the spec that you can do this

    <slightlyoff> JeniT: because of the confusion about intent vs.
    observation. Why should the TAG be in the middle of this debate
    at all?

    noah: the reason I think having a spec is useful is so that
    people can refer to it formally
    ... which I think is very important
    ... I think we have seen evidence that people are using this
    ... the proportion of content that is polyglot in the wild is
    likely low

    <ht> Graham Klyne, in the email thread "As a developer, I
    really want to know there are patterns I can

    <ht> generate that work for the widest possible range of
    clients (browsers

    <ht> and otherwise)."

    noah: we don't have a firm grip on how many people are doing it
    ... but does anyone really doubt that some people are trying to
    do it
    ... XML is a W3C technology
    ... people will want to use it to publish documents that are
    also text/html

    <masinter> I would argue that if the HTML working group
    declines to pursue it, the W3C should spin up some other
    activity to pursue this instead. That is, the recommendation to
    the Director to ensure that some working group publish a
    polyglot spec.

    noah: these could be gas pipeline markup whatever
    ... I think it's good to give people a spec they can point to

    <slightlyoff> I don't disagree with any of that. But what does
    that matter to the open web or the TAG, particularly if that
    spec is on rails no matter what we do?

    noah: I think having a spec that says how people should do it
    is valuable
    ... I don't think it's very expensive
    ... we can put status notes about encouragement

    annevk: I think the expense...

    <Zakim> masinter, you wanted to talk about threshold of

    masinter: I think the TAG made a request

    <slightlyoff> but will that kill the effort? I don't see how it
    would
    masinter: withdrawing the request would send a signal that we
    don't think it's important
    ... polyglot is a transition technology that allows you to...
    anytime you have one to many communications...
    ... any time you have a network communication

    <slightlyoff> do I read the HTML WG differently than masinter

    masinter: any time you have multiple recipients and a single
    sender
    ... and you want to transition from one technology to another
    ... you need a definition that people can use to transition
    ... this is an important sub-topic of the long-held versioning
    issue
    ... this is an architectural principle
    ... enterprises may be able to handle it, but it's not
    effective for the web
    ... it's an important understanding for how the web differs
    from other distributed systems

    <slightlyoff> why does that have to do with the WG dropping or
    keeping the Polyglot spec effort?

    masinter: it is an architectural principle
    ... we want people who are currently using XML tools

    <noah> FWIW, I think Larry's argument makes sense, but it's not
    the one that motivates me. I don't necessarily see it as a
    transition: I think XML will live a very long time and will be
    used for purposes that overlap with uses of HTML. I expect that
    the communities that invest in XML for other good reasons will
    be the ones who use polyglot, not just as a transition, but as
    long as XML is useful to them.

    <slightlyoff> would rescinding the request kill the effort? I
    get the sense no. What do you think masinter ?

    masinter: is this worthy of a recommendation
    ... the % of this is rather small, but the web is a trillion
    dollars in the world economy, so even 1% is billions

    <slightlyoff> how is the REC question even in our court?

    <slightlyoff> I don't understand

    <noah> Alex, could you clarify "kill the effort". Are you
    saying that the HTML WG will write and publish in TR space a
    polyglot spec anyway?

    there's an opportunity cost

    <slightlyoff> noah: that seems to be the case

    scribe: it's worth paving the cowpath

    <slightlyoff> that's the sense I got from Sam

    <noah> Yes, of course there's an opportunity cost. The TAG was
    making the case that the value justified it.

    <slightlyoff> it's within their charter and they have
    volunteers going ahead

    <Zakim> wycats_, you wanted to discuss why those are not the
    same thing

    that isn't the argument Larry was making, noah

    he was making the case on economic terms without considering
    opportunity costs

    <noah> Hmm, what did I miss?

    slightlyoff: I had a couple of questions

    <masinter> My points were in email & irc log so this is just

    slightlyoff: it didn't seem to me that there was a particular
    sense that the TAG has anything to do with this
    ... if the TAG rescinded the request, it wouldn't kill the
    effort
    ... they're going to do this regardless of what we do
    ... this is a process question

    [33]http://lists.w3.org/Archives/Public/public-html/2012Dec/0082.html
    requested Rec

      [33] http://lists.w3.org/Archives/Public/public-html/2012Dec/0082.html

    <noah> The TAG is concerned because we want to be >sure< that
    the W3C is appropriately supporting the coordinated rollout of
    two related technologies. Cross technology and WG coordination
    is part of our formal mandate.

    slightlyoff: assuming that the TAG made this request, and
    assuming that it rescinds it, how does the "signal" it sends
    affect the effort?

    TBL: The TAG is responsible for things that span groups. This

    <slightlyoff> but this is about 2 specs both being published in
    the HTML WG, no?

    annevk: I just wanted to point out that we tried this before
    ... it was Appendix C of the XHTML spec
    ... and it failed miserably
    ... people fail to produce that content
    ... it seems to me that some people can do it
    ... IE now supports text/xml

    <masinter> if Appendix C "failed miserably", why is it that >
    6% of web sites are parsable as both HTML and XHTML ?

    annevk: it seems to me that actually doing it just makes your
    pipeline more complicated and isn't actually necessary

    <slightlyoff> masinter: vs. the hoped for 100%?

    annevk: you need both an XML and HTML parser to consume the
    content
    timbl: no, you just need one

    <slightlyoff> masinter: doesn't pass any class I was ever in
    = )
    <ht> XSLT specifies, and XSLT tools support, the production of
    XHTML per appendix C, and that path is widely used

    timbl: publishing as polyglot supports the greatest number of
    clients

    annevk: you can just publish as XML if that's what you want
    ... it will work everywhere

    <slightlyoff> I still haven't heard any response on the process
    issue that satisfies me

    <masinter> i never hoped for 100% when I worked on Appendix C
    in XHTML ... that's a mis-characterization

    <slightlyoff> timbl, masinter: can you try?

    timbl: the sorts of examples people cite for this include
    working with XSLT

    <ht> I'm about to, just waiting to get off the queue

    <slightlyoff> address the process issue. What happens, in your
    view, should we rescind the request?

    <noah> Alex, is your process issue: why is the TAG involved?
    Tim and I have offered quite consistent answers on that: the
    TAG has a formal mandate to help with technology issues that
    cross WGs.

    timbl: it's bad practice to need an XML copy and a translated
    HTML copy

    scribe admits he doesn't fully follow the argument

    <slightlyoff> noah: but this isn't cross WG. This is just XHTML
    and HTML, both of which are in the HTML WG

    <ht> I consult for people who sell and maintain XHTML

    timbl: how many people have used or built large XML systems?

    wycats_: I have written a book with XML
    ... it was one of the most miserable experiences of my life

    <slightlyoff> I've written huge systems in DocBook

    timbl: to what extent is the W3C only about browsers?

    <slightlyoff> also miserable

    <slightlyoff> (custom XSLT-FO, etc.)

    timbl: do we believe the XML community will wither and die?

    <masinter> everyone knows someone who has worked with XML tool
    chains on HTML. I've certainly done so myself when building an
    internal web site

    <slightlyoff> noah: what am I missing? isn't this all HTML WG?

    annevk: are we talking about producers or consumers

    <ht> The people I work with attribute _all_ their commercial
    value to the use of XML->XHTML toolchains

    timbl: are people going to stop using XML toolchains
    ... at the moment people use XML

    <masinter> there should be *NO* polyglot consumers

    <noah> I really don't think driving this by our personal
    experience is the right way to do it. I think if you invited
    someone from Mark Logic, which has a tremendously successful
    product that is XML down to the screws, they would have
    interesting things to say.

    <masinter> there are HTML consumers and XML consumers

    then let's invite them!

    <slightlyoff> noah: I agree that personal experience isn't data

    <noah> Yes, it's not mainly a browser product. I don't know
    >what< they'd say, except that XML almost surely remains very
    important to their customers.

    <slightlyoff> I'm still stuck on the question of process: isn't
    this all HTML WG?

    timbl: if they notice that HTML looks like XML and want to use
    their tools for it, we can tell them that, yes it works

    <masinter> personal experience *is* data if you're looking for
    evidence and convincing use cases to assure that the use cases
    are significant and non-trivial

    <slightlyoff> or is nobody going to bite on this?

    <noah> Alex, could you respond to what Tim and I have said
    about the process. You keep saying you don't see why, and we've
    both offered the same reason.

    <slightlyoff> noah: you've said it's cross-WG, I'm saying it's
    all within the HTML WG

    <slightlyoff> noah: that's an honest question

    timbl: seems to me that giving people a way to use their
    existing XML tools with HTML is good
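    Tim's point can be made concrete with a small sketch (illustrative
    only; the element names and content are invented):

```python
# Illustrative sketch: an XML toolchain, here just the stdlib
# ElementTree, emitting markup in the XHTML namespace. Staying within a
# polyglot-style subset is what lets the same bytes also be served as
# text/html.
import xml.etree.ElementTree as ET

XHTML = "http://www.w3.org/1999/xhtml"
ET.register_namespace("", XHTML)  # serialize with a default namespace

root = ET.Element(f"{{{XHTML}}}html")
head = ET.SubElement(root, f"{{{XHTML}}}head")
ET.SubElement(head, f"{{{XHTML}}}title").text = "Cat pictures"
body = ET.SubElement(root, f"{{{XHTML}}}body")
ET.SubElement(body, f"{{{XHTML}}}p").text = "hi"

doc = ET.tostring(root, encoding="unicode")
```

    The serialized `doc` is well-formed XML, and because it sticks to
    lowercase XHTML elements with explicit end tags, an HTML parser
    consumes it too.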

    <masinter> it's a threshold > one percent of the web. maybe
    it's "half of one percent"

    annevk: if the document was more clearly scoped to be for the
    XML community, it would be better received

    noah: TAG members who are sympathetic to polyglot are
    sympathetic to intros and status sections


      [34] https://www.w3.org/Bugs/Public/show_bug.cgi?id=20707

    <masinter> it's already in their tracker

    noah: to the extent that the concern is that this is fine as
    long as it's clear what the intent is, we might tackle it on
    that basis

    <Zakim> ht, you wanted to disagree that "the spec. is on rails"

    ht: I believe that consensus is possible around the table
    ... around the existence of polyglot and specifying it
    ... but the controversy is over process
    ... historically, TAG's creation of the task force and interest
    in the issue was significantly the reason for getting the spec
    in the first place
    ... us saying we don't care does send a signal
    ... we were involved in getting it on the rails in the first
    place
    <noah> From the TAG's charter
    [35]http://www.w3.org/2004/10/27-tag-charter.html#Mission our
    mission includes:

      [35] http://www.w3.org/2004/10/27-tag-charter.html#Mission

    ht: the constituencies who want it have a legit need for it
    ... the cost of satisfying it is low

    <noah> "to help coordinate cross-technology architecture
    developments inside and outside W3C."

    ht: we're just documenting it

    <noah> This seems pretty squarely to be coordinating
    cross-technology (XML and HTML) developments.

    <Zakim> noah, you wanted to talk about TAG's role and what will
    happen anyway

    ht: there's already a bug to add a status section that
    explicitly says that we're not recommending it

    <slightlyoff> noah: but it's about XHTML and HTML, one of which
    has a dep on XML parsing, but isn't proposing any change to XML
    in any way

    noah: the question about the TAG's role has come up several
    times
    ... I'm curious about why this doesn't make the case
    ... one of our three formal missions is "to coordinate
    cross-technology architecture developments inside and outside
    the W3C"
    ... this is about XML and HTML
    ... many of the use cases were using existing tool chains to
    integrate XML documents to generate HTML

    <slightlyoff> but nobody is proposing anything that's not an
    XHTML-compatible subset, right?

    noah: at least arguably for some users this is not two
    serializations of HTML
    ... we're just talking about a serialization that works in both

    <slightlyoff> yes

    <slightlyoff> that's my view

    annevk: I'd rather the polyglot thing doesn't happen

    noah: there are some people who don't want a polyglot spec

    annevk: my advice would be if you want to use XML, publish as
    XML

    <masinter> if there are one to many publishers where some of
    the clients want HTML and others want XML, then you need
    polyglot for those

    <slightlyoff> but nobody is talking about polyglot as XHTML +
    namespaces, e.g.

    <slightlyoff> masinter: there's no debate over that

    noah: the point is you should be able to serve it as either
    mime type
    ... and serve it however you want

    <ht> That [serving XHTML as text/html] is certainly what my
    university does

    <ht> 1000s of pages of it

    <masinter> slightlyoff: so that's the use case, it's a common
    enough use case to have a standard that can be referenced and
    reviewed for suitability

    noah: the case I would make is (1) I covered why it's in
    scope; (2) it is different for the WG to promise that they'll
    do something than for them to randomly do it
    ... in 2/3 years if they stop doing it we can ask them why
    ... it makes a difference in a coordinating role

    <slightlyoff> so I'm not *currently* suggesting that we say to
    WGs that we don't care (although I feel that we shouldn't), but
    this thing appears to be on rails

    <masinter> slightlyoff: so why bring it up at all? it was done,
    the request was made, there is no reason to retract it

    <noah> YK: Someone said something about the trillion dollar
    web. Yes and we have some leverage, but that means the
    opportunity cost is important

    <slightlyoff> masinter: because there is a serious concern
    raised about the Appendix-C-style costs to the community

    <noah> YK: Also, I worry that even if we say you aren't
    encouraged to do this, people will think you are.

    <slightlyoff> masinter: honestly, I'm willing to go with big
    warnings about this in the document

    <noah> YK: People who have no good need will go through hoops
    to be polyglot-valid

    <slightlyoff> masinter: but I don't know why the TAG cares, nor
    do I think it should, but I'm willing to only argue the first
    for now

    <noah> TBL: Not convinced. We could argue against most any spec
    we produce on that basis. I think we have to agree at the top
    of the document what it says.

    <masinter> slightlyoff: does
    [36]https://www.w3.org/Bugs/Public/show_bug.cgi?id=20707 cover
    it for you?

      [36] https://www.w3.org/Bugs/Public/show_bug.cgi?id=20707

    <noah> YK: You should not use this unless you have interop
    needs

    <masinter> The "Scope" of a spec should say "What is this good
    for?"
    <noah> TBL: Good specs don't say you should or shouldn't: they
    say "if you do this you will get the following benefits"

    <slightlyoff> masinter: that + suggesting this should go to
    NOTE and not REC might get me there

    <noah> YK: Not convinced, some people are compulsive about this
    stuff and will read it as "do it"

    <Yves> note that there is also the question of updating the
    polyglot spec as HTML itself changes
    <Marcosc> +q

    <masinter> Rec means it has been reviewed for whether the
    technology described is useful for the scope for which it is
    claimed to apply. A "Note" carries no force except "for your
    information"
    <ht> But a) we don't have that for all browsers and b) browsers
    are not the only page consumers.

    <slightlyoff> masinter: and I'm ok with NOTE under that
    definition vs. REC

    <annevk> RECs carry no weight either historically... E.g. DOM
    Level 1-3, HTML4, CSS1, CSS2 (before .1), ...

    I don't understand why XHTML doesn't practically solve the "XML
    pipeline" issue

    noah: I don't really hear consensus

    <slightlyoff> wycats_: it does.

    <slightlyoff> wycats_: except for historical UAs.

    <ht> Making sure we are understood as supporting the proposed
    resolution to
    [37]https://www.w3.org/Bugs/Public/show_bug.cgi?id=20707 is a
    positive step we should take

      [37] https://www.w3.org/Bugs/Public/show_bug.cgi?id=20707

    slightlyoff: it sounds like there's the potential for some
    common ground here
    ... Larry and I have been chatting on IRC with the open bug
    ... I am ok with modifying our request to suggest that this go
    to a Note, not a REC
    ... basically it says that this is an interop tool
    ... and we don't necessarily recommend for all content
    ... I don't want to give this inappropriate authority
    ... I am concerned about the tenor
    ... everyone agrees that polyglot happens and will be described
    whether we do it or not

    masinter: I think that the difference between a REC and a Note
    is that a Note is used when you can't reach agreement
    ... when you're not recommending it
    ... I think the scope of the document is critical

    <noah> I don't share Larry's concern about note. I would prefer
    Rec, but I think Note is fine as a compromise. Note is NOT just
    for dead technologies, IMO.

    masinter: if you're concerned about scope creep, then you do
    that by fixing the scope of the document

    wycats: I want to hear about the use-cases for which XHTML is
    insufficient
    <JeniT> Note vs Rec is not our decision

    wycats: if the HTML WG doesn't want to do it, we should do it
    elsewhere, like in XML
    ... when you obsolete a technology, it is a standard body's
    responsibility to give people a path off

    <ht> I would prefer a REC, along the lines Larry is arguing,
    but, full disclosure, I note that in XHTML1 we find: "C. HTML
    Compatibility Guidelines: This appendix is informative."

    wycats: the Director should get this done somewhere

    <slightlyoff> we'll disagree on that point, I guess

    wycats: the TAG is the shepherd of technical activities
    ... make sure the right thing is done

    <masinter> and I think a REC is necessary

    noah: who would be in favor of Alex's proposal
    ... he suggested that the HTML WG would produce a Note, not a
    REC
    ... with text on top that bounded the scope

    wycats: -1

    <ht> +0

    <slightlyoff> +1

    <noah> +1

    <JeniT> +0

    <timbl> +0

    <slightlyoff> annevk?

    <Yves> +1

    <annevk> +1

    <JeniT> Status is right thing to do, but Rec vs Note is not our
    decision

    <slightlyoff> JeniT: yes, I agree with that, but we can specify
    what we'd prefer in our request

    <slightlyoff> JeniT: the membership will vote

    <noah> FWIW, I prefer Rec, partly because it represents some
    commitment to keep it up, but as a compromise I can live with
    Note, because it meets the requirement for normative REC.

    Yves: not sure how to deal with new elements like <template>

    <masinter> it's the TAG decision to keep the current request or
    rescind it. that's all we're talking about, isn't it?

    <noah> YL: Therefore I think Note is preferable to REC

    JeniT: the only thing that's normative in that spec is to
    describe what polyglot is

    <annevk> -1, wycats_ convinced me it's better to stick to "if
    you want XHTML, use that"

    <noah> JT: The only thing normative in the polyglot spec is the
    definition that says: "works the same viewed both ways" The
    rest is informative.

    wycats: it was your idea, annevk

    <slightlyoff> Yves: specifically for <template>, the XHTML
    parsing is changing

    <noah> YK: Had a brief conversation w/Jeni about, why not use
    XHTML. I still don't understand, but strongly feel we need to
    be able to answer that.

    <masinter> perhaps "the same" needs to be defined, since
    "equivalent" vs "identical"

    <slightlyoff> wycats_: can you type in IRC what you think the
    key question is?

    <noah> JT: Use cases include "an XML fragment is copied into a
    page served as text/html".

    <annevk> The key question is what does XHTML not have that
    Polyglot does.

    <masinter> perhaps the polyglot CR exit criteria need to be

    <slightlyoff> annevk: well, as a subset of both, whatever the
    subsets don't have = )

    <ht> I also care about anyone who works for my university, or
    institutions like it, who do not control the media type their
    XHTML is served with!

    <ht> Jeni's is not the only use-case

    <ht> Yehuda, does that make sense?

    <slightlyoff> ht: but if those folks are serving as text/html,
    what users are underserved? this is the signage issue writ
    large
    <annevk> ht: if you publish as HTML, you need to consume it as
    HTML
    <ht> I.e. IE6 and IE7 are out, and they still have real market
    share
    <masinter> XHTML is free to use XML syntax, namespaces, etc.
    that isn't valid HTML

    <masinter> polyglot is a restriction of XHTML that producers
    need to know about, even if consumers don't

    <annevk> thanks JeniT for naming them URLs

    <JeniT> just for you

    <annevk> scribenick: annevk


    <JeniT> [38]http://www.w3.org/2001/tag/2013/03/uris-in-data.pdf

      [38] http://www.w3.org/2001/tag/2013/03/uris-in-data.pdf

    JeniT: [Gives presentation on why URLs in data matter.]
    ... started with "What is the range of the HTTP function?"

    noah: httprange-14 refers to how the TAG originally numbered
    issues; it's issue 14, on the topic of httprange

    JeniT: [goes through examples of cat pictures and books and
    their associated copyright]

    [crowd demands more cat pictures]

    JeniT: This raises certain questions, such as what data can I
    reuse, what data has X published, ...
    ... How to associate data with your cat pictures (i.e.
    licensing)
    ... Landing pages complicate this
    ... Amazon has a landing page for the book; the landing page
    describes the book
    ... Metadata can be okay if it points to the actual picture and
    the license is about the picture
    ... Can also be okay if it points to the landing page and the
    license is about the landing page

    <masinter> use/mention ambiguity in natural language is common.
    'Where are you parked?' 'I'm parked out back' (but it's your
    car, not you). Waitress 'Who's the ham sandwich?'.

    JeniT: It gets confusing once party A uses the picture URL and
    party B uses the landing page URL

    <ht> Larry, I used to think this phenom. was a use/mention
    confusion, but I no longer do

    wycats_: I don't get why somebody should do the bottom one

    <ht> Use/mention would be "[39]http://www.w3.org/ has scheme
    'http'"

      [39] http://www.w3.org/

    marcosc: copy & paste

    wycats_: I'd hope people don't do that

    [light laughter]

    wycats_: who is producing this JSON [example on screen]

    <masinter> ht, ok, but 'ham sandwich' isn't use/mention?

    JeniT: In the picture case Flickr provides the bottom one
    [landing page URL combined with picture URL license]

    wycats_: I'm trying to understand why someone creating a
    database of metadata would do the bottom one?

    timbl: happens all over the place

    wycats_: so is that because Flickr wants you to get to the
    landing page?

    <ht> masinter, no, that's just a kind of synecdoche

    jar: yes, but also the landing page might be the only stable
    URL

    wycats_: We want to write another page that points to the book
    and says the copyright is about the book and not about the
    landing page

    <wycats_> imagine you have <link rel="book"
    href="[40]http://amazon.com/awesome-book">

      [40] http://amazon.com/awesome-book

    <wycats_> and you also want to include the license

    <wycats_> you need a way to describe that the license is about
    the BOOK

    <wycats_> and not about the amazon page

    <wycats_> <link rel="book"
    href="[41]http://amazon.com/awesome-book" license="cc-by">

      [41] http://amazon.com/awesome-book

    Marcosc: who is actually facing these problems today?

    JeniT: the Semantic Web people faced this problem early on
    ... more and more we get RESTful APIs and eventually they'll
    run into this problem

    <masinter> ht, isn't 'the copyright license for
    [42]http://amazon.com/awesome-book is [43]http://cc-license/ '
    similar ?

      [42] http://amazon.com/awesome-book
      [43] http://cc-license/

    Ashok: My understanding is different: does the URL point to
    the object, or to the metadata about the object?
    ... If you point to the object, how do I get to the metadata?
    ... A somewhat differently focused issue

    <slightlyoff> this is yet another aspect of 'what does a URL
    represent?', and we want a way to index into the nodes being
    represented in the data graph, not the wrapper(s)

    JeniT: Having worked on this issue for a number of years, there
    are a variety of alleys we can stumble down.

    timbl: I support JeniT in avoiding ratholing

    <masinter> I'd like to get
    [44]http://tools.ietf.org/html/draft-masinter-dated-uri-10 out
    as an Informational or Experimental RFC, would like comments

      [44] http://tools.ietf.org/html/draft-masinter-dated-uri-10

    <Marcosc> function http(){ return stuff;}

    <wycats_> proceeds to rathole

    annevk: What is the HTTP function?

    timbl: It's doing an HTTP request and getting a response
    ... It was a bad name for the issue

    noah: It's HTTP expressed mathematically

    Ashok: If you create a linked data resource... you also create
    a metadata resource pointed to by a Link Header on the resource

    JeniT: Right, so you store the resource and the description of
    the resource, and link from one to the other
    ... So yeah you could use link relations
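    The two-way linking Ashok and Jeni discuss can be sketched with RFC
    5988 Link headers (the URLs are invented; "describedby" and
    "describes" are registered link relation types):

```python
# Sketch of a resource and its metadata pointing at each other via
# RFC 5988 Link headers. URLs invented for illustration.
resource_headers = {
    "Content-Type": "image/jpeg",
    "Link": '<http://example.org/cat.jpg.meta>; rel="describedby"',
}
metadata_headers = {
    "Content-Type": "text/turtle",
    "Link": '<http://example.org/cat.jpg>; rel="describes"',
}
```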

    <wycats_> http = λuri

    Ashok: I think they only go one way

    JeniT: you can do both ways
    ... I want to publish the URIs in Data Primer. The basic
    solution to the problem is to be really specific about what
    the various properties refer to.

    <slightlyoff> so I'd like to re-frame this around nodes in a
    data graph

    timbl: If somebody is going to be using the same technology and
    they want to describe both levels (landing page and thing),
    they need to have syntax for that or have different properties
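    Tim's "different properties" suggestion, as a hypothetical sketch
    (all property names and URLs are invented for illustration):

```python
# Ambiguous: one URL, one license -- is it about the landing page or
# the picture?
ambiguous = {
    "url": "http://flickr.example/photos/123",
    "license": "cc-by",
}

# Explicit: a separate property per level, so each statement has a
# clear subject.
explicit = {
    "landing_page": "http://flickr.example/photos/123",
    "content_url": "http://flickr.example/photos/123/cat.jpg",
    "content_license": "cc-by",  # unambiguously about the picture
}
```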

    <slightlyoff> a way to think about this is that you want a
    terminal URL for a page that is specifically *about* that node
    <slightlyoff> and that might not be the image/book

    <slightlyoff> but it might be that details page for flickr

    wycats_: If you use a system such as a rel system you need to
    describe concretely whether you mean the landing page or the
    thing itself

    <wycats_> the definition of the rel

    <slightlyoff> no objection from me

    annevk: I don't like that it says URI so much

    <JeniT> I'm quite happy to change that to URLs

    <slightlyoff> annevk: it should s/URI/URL/g?

    annevk: slightlyoff, yeah

    <jar> hmm...

    <JeniT> jar, it's the primer

    <wycats_> what's the URI vs. URL third-rail?

    Ashok: I'm fine with publishing. Are we going to declare
    httpRange-14 closed?

    jar: Communicate with WGs for further work

    Marcosc: It would be good to get feedback

    wycats_: JeniT is working on that

    <JeniT> ack

    slightlyoff: Thanks for attention to detail. I support
    publishing this. I had a comment on 5.3
    ... It might be more concrete to say that there should be a URL
    for each piece of content you are trying to describe

    <Zakim> noah, you wanted to ask about 303s

    annevk: JeniT, also s/Link:/Link/ fwiw
    ... JeniT, trailing colon is more a URL scheme convention thing
    and even there...

    <JeniT> ok

    noah: We can publish this document to update our earlier advice

    JeniT: This document provides a route for people that don't
    want to use 303

    noah: At some point we should tell the community the intent is
    to close our issue based on this document

    [alright, community informed!]

    <slightlyoff> still no objection = )

    <scribe> scribenick: someone

    <annevk> still suggesting s/URI/URL/g

    <slightlyoff> annevk: can be cleaned up in next WD

    <JeniT> yeah, I'll roll in the comments received today prior to
    publication

    <annevk> scribenick: annevk

    noah: [45]http://www.w3.org/2001/tag/doc/uris-in-data/

      [45] http://www.w3.org/2001/tag/doc/uris-in-data/

    <JeniT> www.w3.org/2001/tag/doc/uris-in-data-2013-03-07/

    <noah> Proposed resolution: The TAG agrees to publish
    www.w3.org/2001/tag/doc/uris-in-data-2013-03-07/ with
    modifications agreed on 19 March 2013, as FPWD

    RESOLUTION: The TAG agrees to publish
    www.w3.org/2001/tag/doc/uris-in-data-2013-03-07/ with
    modifications agreed on 19 March 2013, as FPWD

    timbl: We looked at all kinds of situations out there,
    including OGP. Some schema.org stuff...
    ... The question is, if these recommendations are taken, who do
    we need to talk to about the damage?

    jar: all of them, everyone is broken

    [disappointed looks]

    jar: they're ambushed by ambiguity

    [scribe was wondering when that would come up]

    wycats_: the document does talk about hash URLs

    timbl: should we have action items on the TAG to chase these
    people down?

    jar: there are some tricky cases, like what do we tell Dublin
    Core
    ... they have a bunch of properties that are not URLs
    ... there's no way to test that, the content is out there

    timbl: you can by crawling

    jar: worth a try
    ... you can't assess whether a property is used

    wycats_: this is done many times

    jar: this is good, but I'm still skeptical

    wycats_: you doubt you can make an experiment?

    JeniT: it's hard because in part it depends on intent

    timbl: you read a couple of million and then you sample

    [reference to foreign government censored]

    jar: it may be possible
    ... the interesting thing will be what Tom Baker has to say
    (from Dublin Core)
    ... if they can change the existing properties or if that's not
    possible
    Ashok: or have Dublin Core ignore this document

    <timbl> Tom Baker as in [46]http://dublincore.org/about/executive/

      [46] http://dublincore.org/about/executive/

    wycats_: this is not different from any other thing we do on
    the web
    ... the path is, if you think you might break people, you try
    to find out ahead, then you experiment, and then you'll find
    out

    jar: let's not pretend this is going to be an easy transition

    wycats_: let's find out

    <JeniT> some of these people might be happy with it being fuzzy

Summary of Action Items

    [NEW] ACTION: Jeni to do new Editor's Draft of fragids spec for
    approval to publish as CR [recorded in [47]]
    [NEW] ACTION: Yehuda with help from Anne to review TAG finding
    on application state and propose TAG followup to promote good
    use of URIs for Web Apps including those with persistent state
    with focus on actual examples [recorded in [48]]

      [47] http://www.w3.org/2013/03/19-tagmem-minutes.html#action02
      [48] http://www.w3.org/2013/03/19-tagmem-minutes.html#action01

    [End of minutes]

     Minutes formatted by David Booth's [49]scribe.perl version
     1.137 ([50]CVS log)
     $Date: 2013-04-11 19:39:58 $

      [49] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
      [50] http://dev.w3.org/cvsweb/2002/scribe/


       [1] http://www.w3.org/

                                - DRAFT -

                 Technical Architecture Group F2F - Day 3

20 Mar 2013


       [2] http://www.w3.org/2001/tag/2013/03/18-agenda#polyglot

    See also: [3]IRC log

       [3] http://www.w3.org/2013/03/20-tagmem-irc


           Yehuda Katz, Anne van Kesteren, Yves Lafon, Peter Linss,
           Ashok Malhotra, Jeni Tennison, Noah Mendelsohn, Marcos
           Caceres, Larry Masinter (phone), Henry Thompson (phone),
           Alex Russell (phone)

           Noah Mendelsohn

           Yves Lafon, Jeni Tennison


      * [4]Topics
          1. [5]polyglot
          2. [6]Layering
          3. [7]ES6 changes
          4. [8]"Wrap-up"
      * [9]Summary of Action Items

   Please note the following IRC handles and nicknames:

      Handle     TAG Member
    slightlyoff Alex Russell
    wycats      Yehuda Katz


    <wycats_> [10]http://wiki.whatwg.org/wiki/HTML_vs._XHTML

      [10] http://wiki.whatwg.org/wiki/HTML_vs._XHTML

    <noah> Link to XML/HTML Task force report:
    [11]http://www.w3.org/2010/html-xml/snapshot/

      [11] http://www.w3.org/2010/html-xml/snapshot/

    <slightlyoff> yeah, I don't understand how namespace'd XML docs
    are relevant

    <plinss> [12]http://www.w3.org/2001/tag/2013/03/18-agenda

      [12] http://www.w3.org/2001/tag/2013/03/18-agenda

    <wycats_> I want to discuss the list of polyglot use cases that
    are not satisfied by XHTML

    <wycats_> you can use IE if you want XML islands ;)

    <wycats_> "Support for XML data islands has been removed in
    Internet Explorer 10 standards and quirks modes for improved
    interoperability and compliance with HTML5"

    <JeniT> the script element is the thing to use

    <slightlyoff> we added this for <template>, BTW

    <JeniT> can you use arbitrary XML in template?

    <slightlyoff> it's in the spec, but not the implementations

    <wycats_> you could theoretically have <template

    <wycats_> but the window on that is rapidly closing

    <slightlyoff> many implementations don't stop on first error

    <slightlyoff> see: RSS pipelines at scale

    <slightlyoff> also XAML processing

    Noah: discussion about error handling around XHTML, XML5

    Anne: browser vendors lost interest (in XML5)

    Jeni: one case is where you don't control the mime type on the
    server
    wycats: not sure this is high value; the fact that toolchains
    don't have that is an indication

    <slightlyoff> well, they have built those tools: lots of
    languages have both XML and HTML parsers; just run it through
    both and look for exceptions ;-)

    <slightlyoff> xmllint + HTML::Lint

    wycats: you don't need to wait for a spec to write a tool to
    help with that use case

    anne: your xml toolchain can have html endpoints (input and/or
    output)
    wycats: there might be confusion on when to use html, xml and
    polyglot documents

    Noah: the relationship between polyglot and xhtml is a
    subset/superset one

    wycats: I don't want to have people saying "to be safe, use
    polyglot"
    <JeniT> the other example that I was going to raise was the E4H
    proposal

    <annevk> JeniT, we already know it's not likely to happen,
    seems ratholing

    <JeniT> but in general: defining a parser for a simple subset
    of XHTML is much much easier than using a full HTML parser

    <annevk> JeniT: you want a third parser?!

    <annevk> JeniT: that is so crazy

    <annevk> JeniT: parsers are not rocket science

    <wycats_> E4H is a crazy reason to want polyglot

    <annevk> that too

    <wycats_> people would want @<img src='foo'><p>hello</p> to
    work

    <wycats_> "just use polyglot" is ridiculous

    <wycats_> for anything like E4H

    Alex: I'll look at the wording on the status section and
    redraft it.

    <noah> ACTION: Alex to redraft proposed "status" section that
    TAG is suggesting for Polyglot [recorded in

      [13] http://www.w3.org/2013/03/20-tagmem-minutes.html#action01

    <trackbot> Created ACTION-791 - Redraft proposed "status"
    section that TAG is suggesting for Polyglot [on Alex Russell -
    due 2013-03-27].

    <JeniT> annevk, I'm merely using the E4H requirement as an
    existence proof that such a parser might sometimes want to do
    this
    <annevk> JeniT: "defining a parser for a simple subset of
    XHTML" is still crazy


    wycats drafting on the white board^Wscreen

    Jeni: how would it help people developing platform features?

    wycats: by providing guidance

    Anne: should we restrict that to markup APIs?

    <slightlyoff> I think it's important for us to say that markup
    and APIs need a concrete relationship

    <slightlyoff> so declarative and imperative forms need a
    relationship, as do high and low-level APIs

    <annevk> What I meant to say is if we should be explicit that
    this includes markup as well as APIs.

    <slightlyoff> annevk: I agree with that

    <slightlyoff> annevk: and it's all about creating connections
    between the layers

    <slightlyoff> I think it's important for us to identify that
    people should be trying to create connections between layers

    <slightlyoff> not pre-suppose a fixed # of layers

    <slightlyoff> (1, N, or somewhere inbetween)

    anne: there are many layers in XHR, as redirection, decoding
    and getting content are different layers

    e.g., redirects are currently not handled as we don't have the
    right API

    <slightlyoff> JeniT: I don't think our problem there is as
    large -- "progressive enhancement" is the received wisdom
    amongst FE-engs.

    <JeniT> agreed

    <JeniT> but I wonder if there are patterns there that work
    better/worse, and are worth exploring

    <slightlyoff> JeniT: yeah, I think there are

    <JeniT> for example, should you use data-* attributes, classes,
    etc.

    Anne: we should also describe when the layering was not done,
    and point to what would have been the right way

    Web Components is generally considered the right direction

    Noah: we need a list of who needs to buy into this

    Anne: CSS, WebApps, Webappsec, WHATWG...

    Noah: should we invite them to meetings?

    <annevk> WebRTC WG

    <annevk> [14]http://www.w3.org/WAI/PF/

      [14] http://www.w3.org/WAI/PF/

    <slightlyoff> it's an alternative declarative form that has
    more power

    wycats: when you have an img tag, or xhr, you need to describe
    the layers involved here

    Anne: in the case of a redirect, it's not clear that layers
    should be exposed or not

    (discussion about high-level socket APIs vs low-level options
    like slow-start, and their relevance today)

    <slightlyoff> you're construing my point as a strawman for only
    low-level control, which it is not

    <slightlyoff> my point was more subtle

    Alex: exposing only high level or only low level is not
    helpful, you need to explain their relationship as well

    <slightlyoff> yes, and we will inevitably have both

    <slightlyoff> and should!

    <Zakim> ht, you wanted to ask about declarative layering

    <wycats_> [15]https://gist.github.com/wycats/5205250

      [15] https://gist.github.com/wycats/5205250

    wycats: if it's all declarative and not what you need, then it
    won't work; you need to have a programmatic "escape" to work

    <slightlyoff> ht: I'd think of it this way: is it possible to
    implement your declarative system declaratively all the way? If
    not, you probably need an imperative binding at SOME point

    <ht> Absolutely

    <ht> Just glad to see that both routes are on the table

    <slightlyoff> ht: oh, no, that wasn't the goal to rule that
    out. We're advocating for both declarative and imperative forms
    in the platform for MOST important capabilities

    <slightlyoff> ht: yeah, we're not calling one or the other "the
    winner", just pointing out that they need each other and that
    they should have a relationship

    <ht> cool

    <slightlyoff> we can't all win unless both imperative and
    declarative move forward together

ES6 changes

    wycats_: syntactic improvements
    ... can do { foo() { ... } } to define methods on an object
    ... (args) => { ... } instead of function (args) { ... }
    ... 'this' doesn't get bound in shorthand
    ... APIs that mutate 'this' are bad APIs: instead you should be
    passing along parameters

    <slightlyoff> to provide some color, I don't necessarily agree
    it's bad, but TC39 is constrained in syntax and semantic
    alternatives here -- I have argued in the past for "soft
    binding" that would allow for explicit "this" over-rides, but
    it's complicated and an edge-case

    <slightlyoff> the big issue here is that in JS, the dot
    operator (e.g. "foo.bar") doesn't bind a function that's
    de-referenced with any particular this binding

    <slightlyoff> this is a partial fix

    <slightlyoff> for some subset of use-cases

    wycats_: (item) => item.toUpperCase() auto-returns
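    A runnable sketch of the arrow-function shorthand just
    described (variable names are illustrative):

```javascript
// Expression-bodied arrows implicitly return their expression.
var upper = ["a", "b"].map((item) => item.toUpperCase());

// Arrows do not rebind `this`; they close over the enclosing one.
var counter = {
  count: 0,
  incrementAll: function () {
    [1, 2, 3].forEach(() => { this.count += 1; });
  }
};
counter.incrementAll();
```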

    var callbacks = [];

    for (var i = 0; i < 10; i++) {
      callbacks.push(function () { console.log(i); });
    }



    <annevk> for(var i = 0; i<10; i++) { function c() { log(i) }
    push(c) }
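    The loop above is the classic `var` capture pitfall; the
    block-scoped `let` binding that comes up next gives each
    iteration its own variable. A runnable comparison (illustrative
    names):

```javascript
var withVar = [];
for (var i = 0; i < 3; i++) {
  // every closure shares the single function-scoped `i`
  withVar.push(function () { return i; });
}

var withLet = [];
for (let j = 0; j < 3; j++) {
  // each iteration gets a fresh block-scoped `j`
  withLet.push(function () { return j; });
}
```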

    <scribe> ... new binding form 'let'

    wycats_: scope of 'let' is a block
    ... 'const' has same binding scope but can't be changed

    <slightlyoff> this is the "temporal dead zone"

    wycats_: but not expected to be heavily used
    ... "destructured assignment"
    ... var { type, bubbles } = options;
    ... var { type, bubbles=false } = options;
    ... var { type: type, bubbles: bubbles } = options;
    ... in this case, the part before the : is the key name in the
    passed object, the part after is the variable to which that
    value is assigned
    ... var { type : { bubbles }} = options;

    <slightlyoff> for folks who want to play around with many of
    these things, you can try interactively in Traceur:

      [16] http://traceur-compiler.googlecode.com/git/demo/repl.html

    wycats_: [ type, bubbles ] = array;
    ... [ type, bubbles, ...rest] = array;
    ... var { type, bubbles? } = options; means no error if bubbles
    isn't defined in options



    wycats_: this makes it more reasonable to have return values
    that are dictionaries or arrays
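    The destructuring forms listed above, written out as runnable
    code (note: in the final language a missing property simply
    binds undefined, so the draft-era `bubbles?` form did not
    survive):

```javascript
var options = { type: "click", detail: { bubbles: true } };

// object destructuring, with a default for a missing key
var { type, cancelable = false } = options;

// nested destructuring
var { detail: { bubbles } } = options;

// array destructuring with a rest element
var [first, ...rest] = [1, 2, 3];
```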

    annevk: functions already return objects, and this will work
    with that?

    wycats_: yes
    ... use destructuring method inside argument list
    ... function Event({type, bubbles})
    ... function foo(a, ...b) { ... }

    <masinter> most of this stuff is just "syntactic sugar",
    though, no changes to the VM needed?

    <slightlyoff> masinter: there are Object Model changes in ES6
    too, largely thanks to Proxies

    <slightlyoff> masinter: but much of what has been shown now is
    pure sugar, yes

    <slightlyoff> masinter: also, in JS, there's no parser/bytecode

    <slightlyoff> masinter: we don't have a standard bytecode (and
    most JS VMs do without one entirely, although they do have IRs)

    wycats_: function (a, ...b, { type, bubbles }) { ... }

    timbl: think object parameters demonstrate lack of power in
    parameter list

    wycats_: want implementations to optimise the destructuring

    [discussion of keyword parameters]

    wycats_: optional arguments
    ... function bar (a, b=1) { ... }
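    The parameter features just mentioned, as a runnable sketch
    (function names are illustrative; note that in the final spec a
    rest parameter must come last):

```javascript
// default parameter value
function bar(a, b = 1) { return a + b; }

// rest parameter collects trailing arguments into an array
function sum(...nums) { return nums.reduce((t, n) => t + n, 0); }

// destructuring directly in the argument list
function makeEvent({ type, bubbles = false }) {
  return type + ":" + bubbles;
}
```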

    wycats_: lots of weirdness around prototypical inheritance

    ... new syntax for semantic inheritance

    <slightlyoff> you can try this syntax in Traceur too



    wycats_: class Event { constructor ({ type, bubbles })
    { ... } foo() { ... } }
    ... class ClickEvent extends Event { constructor(...args) {
    super(...args); } }
    ... "maps and sets"
    ... var map = new Map();
    ... var key = {};
    ... map.set(key, "value");
    ... map.get(key);

    <slightlyoff> Maps and Sets, BTW, might already be in your
    browser
    <slightlyoff> Firefox has an early implementation shipping

    <slightlyoff> and Chrome has it behind a flag, IIRC

    wycats_: var set = new Set(); var obj = {};
    set.add(obj); set.has(obj);
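    The class and collection examples above, combined into one
    runnable sketch:

```javascript
class Event {
  constructor({ type, bubbles = false }) {
    this.type = type;
    this.bubbles = bubbles;
  }
}

class ClickEvent extends Event {
  constructor(...args) { super(...args); }
}

var e = new ClickEvent({ type: "click" });

// Maps and Sets key on object identity, not string conversion
var map = new Map();
var key = {};
map.set(key, "value");

var set = new Set();
var obj = {};
set.add(obj);
```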

    timbl: could have a value that you can pass around / compare,
    but not print, for example
    ... which is similar to what we were talking about yesterday re

    wycats_: var symbol = new Symbol(); var obj = {}; obj[symbol] =
    1; obj[symbol]

    plinss: hash is based on identity of object?

    wycats_: yes
    ... goal to have real private symbol, but it's complicated

    annevk: the symbol can be retrieved from object?

    wycats_: yes, by reflection

    annevk: the platform needs real private stuff

    wycats_: yes, but that isn't what Symbol is
    ... "Modules"
    ... most crucial thing for the platform
    ... import { foo } from "bar";
    ... foo is not on window or any global object
    ... real example is import { Event } from "web/dom";

    timbl: can you use "[19]http://..." there?


    wycats_: yes, but no, you don't want to

    timbl: I'm interested in this question, because as TAG we
    should defend using URLs for naming things
    ... there are lots of systems where the search path is a
    problem
    ... leads to pain, lack of interoperability, and security
    issues

    wycats_: conversation is still open, but URIs here force you to
    get it off the network, which is a problem
    ... this is a good conversation to have
    ... we'd like to have a good strategy to use URIs, but now is
    not the time
    ... in web/dom.js:
    ... export class Event { ... }

    annevk: so export is a module syntax?

    wycats_: if you import, it's assumed you're pointing to a
    module file
    ... there's a literal form which is module "web/dom" { ... }
    ... but that means you can't move the module file

    noah: can you export anything or only classes?

    wycats_: anything: variables, functions

    noah: do you have to export each thing explicitly?

    wycats_: there's a form that's export { x, y, z } but it might
    go away

    JeniT: can you import everything?

    wycats_: no

    timbl: good

    annevk: what if I export a function that returns a Document,
    but I haven't exported Document?

    wycats_: you only need to have access to the name "Document" if
    you need to identify the object as "Document"
    ... you can get hold of prototype and make a new instance using
    that prototype
    ... without knowing it's called "Document" in the imported
    module
    timbl: can I in the import statement change the name of the
    thing that's imported?

    wycats_: yes, eg import { XHR : XHR2 } from "web/network";
    ... modules have static imports & exports
    ... so that we can transitively get dependencies, before
    executing the code
    ... can also do System.require("web/network") but it assumes
    module is already loaded

    timbl: can I get hold of module itself?

    wycats_: yes, syntax subject to change but import "web/network"
    as webNetwork;
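    The module forms discussed can be sketched as a two-file
    example (module names are illustrative; the rename and
    namespace-import syntax eventually standardized in ES6 uses
    `as`, not the draft forms quoted above):

```javascript
// --- web/dom.js ---
export class Event { /* ... */ }
export var version = 1;

// --- app.js ---
import { Event } from "web/dom";              // named import
import { Event as DOMEvent } from "web/dom";  // renamed binding
import * as dom from "web/dom";               // the whole module object
```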

    annevk: so you could then get hold of everything

    <slightlyoff> one way to think about this is that Module
    instances are a new core type; they're not Object instances

    wycats_: yes, but you don't have the local binding for the name
    in that case
    ... it's a frozen object
    ... you can loop over it

    <slightlyoff> ...except it has no Object.prototype as its
    prototype

    <wycats_> Object.create(null)

    slightlyoff: one way to think of it is that modules are a new
    abstraction type, that you'll only ever create through this
    syntax

    wycats_: the module is an immutable, frozen thing which cannot
    change, unlike the global object
    ... "Proxies"
    ... pretty complicated but have simple understanding
    ... [20]http://es5.github.com
    ... see Chapter 8
    ... contains [[Get]] and [[Set]], internal methods that browser
    implementations can override but JS programmers can't
    ... now have var p = new Proxy (obj, { get: function (proxy,
    key, receiver) { ... } })
    ... let regular JS do what host objects could always do

      [20] http://es5.github.com/

    slightlyoff: it means the magic that was done through IDL can
    now be written out in JS
    ... exposing the magic

    wycats_: there are many places in DOM that are doing this kind
    of thing, like the style object

    annevk: like element.style.background is something
    ... style is a long list of names in IDL

    wycats_: ok, maybe this isn't a good example

    annevk: but the platform should not use proxies

    wycats_: yes, but this exposes what existing APIs are doing
    ... length properties for example

    slightlyoff: it's a way of rationalising how the current magic
    can be explained
    ... and how we might do it in the future if there is a
    legitimate reason for doing so
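    A runnable sketch of the [[Get]] trap being discussed (note the
    trap signature that shipped is (target, key, receiver), with
    the target rather than the proxy as the first argument):

```javascript
var log = [];
var obj = { background: "red" };

// the `get` trap runs on every property read through the proxy
var p = new Proxy(obj, {
  get: function (target, key, receiver) {
    log.push(key);
    return target[key];
  }
});

var bg = p.background;
```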

    wycats_: there are C++ implementations of objects that are used
    ... like arrays
    ... can add a property to classes
    ... eg (not real syntax) Element[@@create] = function () { ... }
    ... that's native code
    ... but in JS I can do class MyElement extends Element { ... }
    ... so extend things that are native implementations

    marcosc: are we sure that's going to work?

    annevk, wycats_, slightlyoff: yes

    scribe: though hard


    noah: photo is live on the web
    ... on [21]http://www.w3.org/2001/tag/2013/03/18-agenda
    ... let's remind ourselves of what we're going to do
    ... and make sure that there's a balance of work across the
    group
    ... thanks to everyone: I think we've started to work together
    well
    ... there are 9-10 of us, about a third of us will be busy with
    day jobs at any particular time
    ... we should have 3-4 things that we're working on

      [21] http://www.w3.org/2001/tag/2013/03/18-agenda

    annevk: there's one slot open for appointments

    timbl: you can suggest who that should be
    ... there's no defined date

    annevk: I think that should be Peter

    timbl: I'm always open to advice

    noah: so, the Layering project is wycats_ and slightlyoff

    noah: JeniT working on fragids, urls in data, capability URLs &
    unhosted Apps
    ... ht on persistence of URIs & URLs in data

    [reviewing actions]

    noah: please change actions to 'Pending Review' if you want
    them discussed in next telcon
    ... it's what I use to generate agendas

    <masinter> All of my action items disappeared, I guess they got
    closed
    <noah> close ACTION-763

    <trackbot> Closed ACTION-763 prepare response to last call
    feedback on Publishing and Linking.

    <noah> close ACTION-764

    <trackbot> Closed ACTION-764 arrange for expert review of
    Publishing and Linking last call draft.

    <noah> ACTION-789?

    <trackbot> ACTION-789 -- Yehuda Katz to with help from Anne to
    review TAG finding on application state and propose TAG
    followup to promote good use of URIs for Web Apps including
    those with persistent state with focus on actual examples --
    due 2013-04-16 -- OPEN


      [22] http://www.w3.org/2001/tag/group/track/actions/789

    <noah> ACTION-786?

    <trackbot> ACTION-786 -- Marcos Caceres to frame, with help
    from Alex, discussion of Javascript API Design Issues for F2F
    -- due 2013-03-04 -- OPEN


      [23] http://www.w3.org/2001/tag/group/track/actions/786

    <slightlyoff> thanks, only missed 30 seconds

    <slightlyoff> hit "esc"

    <slightlyoff> no objections

    <noah> MC: Let it go

    <noah> close ACTION-786

    <trackbot> Closed ACTION-786 frame, with help from Alex,
    discussion of Javascript API Design Issues for F2F.

    [decision not to follow up on API design issues]

    <noah> ACTION-788?

    <trackbot> ACTION-788 -- Yehuda Katz to frame F2F discussion of
    liaison with ECMA TC39 -- due 2013-03-07 -- OPEN


      [24] http://www.w3.org/2001/tag/group/track/actions/788

    <slightlyoff> we can re-schedule time for something along these
    lines later -- there's lot we did get to this week and more we
    can do when we get deeper with various WGs.

    <noah> close ACTION-788

    <trackbot> Closed ACTION-788 Frame F2F discussion of liaison
    with ECMA TC39.

    <masinter> public-script-coord@w3.org should follow up on this

    <noah> ACTION-791?

    <trackbot> ACTION-791 -- Alex Russell to redraft proposed
    "status" section that TAG is suggesting for Polyglot -- due
    2013-03-27 -- OPEN


      [25] http://www.w3.org/2001/tag/group/track/actions/791

    wycats_: I should have an action to get TC39 to do something
    about WebIDL

    <noah> ACTION: Yehuda with help from Alex talk to TC39 about
    helping with WebIDL (agreed on Monday 18 March) - Due 2013
    [recorded in

      [26] http://www.w3.org/2013/03/20-tagmem-minutes.html#action02

    <trackbot> Created ACTION-792 - with help from Alex talk to
    TC39 about helping with WebIDL (agreed on Monday 18 March) [on
    Yehuda Katz - due 2013-03-20].

    <noah> ACTION-791?

    <trackbot> ACTION-791 -- Alex Russell to redraft proposed
    "status" section that TAG is suggesting for Polyglot -- due
    2013-03-27 -- OPEN


      [27] http://www.w3.org/2001/tag/group/track/actions/791

    <slightlyoff> yes

    <slightlyoff> I'm here

    <slightlyoff> yep, LGTM

    [noah writes 'polyglot' next to slightlyoff's name]

    annevk: I'm looking at application state

    <annevk> masinter: going through actions and such

    <slightlyoff> members of the tag are also concerned

Summary of Action Items

    [NEW] ACTION: Alex to redraft proposed "status" section that
    TAG is suggesting for Polyglot [recorded in [28]]
    [NEW] ACTION: Yehuda with help from Alex talk to TC39 about
    helping with WebIDL (agreed on Monday 18 March) - Due 2013
    [recorded in [29]]

      [28] http://www.w3.org/2013/03/20-tagmem-minutes.html#action01
      [29] http://www.w3.org/2013/03/20-tagmem-minutes.html#action02

    [End of minutes]

     Minutes formatted by David Booth's [30]scribe.perl version
     1.137 ([31]CVS log)
     $Date: 2013-04-11 19:45:29 $

      [30] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
      [31] http://dev.w3.org/cvsweb/2002/scribe/
Received on Thursday, 11 April 2013 20:27:04 UTC
