
Minutes of Monday Part 1 of Silver Virtual Meeting

From: Jeanne Spellman <jspellman@spellmanconsulting.com>
Date: Mon, 9 Mar 2020 12:03:29 -0400
To: Silver Task Force <public-silver@w3.org>
Message-ID: <0399bfc0-7005-ed08-c5a6-f6b86ce8568f@spellmanconsulting.com>
Instead of our two-day face-to-face meeting at the CSUN Assistive 
Technology Conference, we are having a virtual meeting in four 2-hour 
blocks.  For those who want to join, the times, remote access, and 
agenda are on the meeting page at:
https://www.w3.org/WAI/GL/task-forces/silver/wiki/2020_March_F2F_Meeting_at_CSUN

== Summary of Monday Part 1 ==

The starting Introduction for new attendees was a high level review of 
the Silver Requirements 
<https://w3c.github.io/silver/requirements/index.html> with particular 
review of the Design Principles.  We highlighted:

  * Support the needs of a wide range of people with disabilities and
    recognize that people have individual and multiple needs.
  * Support a measurement and conformance structure that includes
    guidance for a broad range of disabilities. This includes particular
    attention to the needs of low vision and cognitive accessibility,
    whose needs don't tend to fit the true/false statement success
    criteria of WCAG 2.x.
  * Improve the ability to support automated testing where appropriate
    and provide a procedure for repeatable tests when manual testing is
    appropriate.

We walked through the sections of the Editor's Draft (ED) 
<https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guidelines/> and 
Explainer 
<https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guidelines/explainer.html> to 
point out high level changes made in response to AGWG comments.

We briefly looked at the most recent round of comments on the Editor's 
Draft 
<https://www.w3.org/2002/09/wbs/94845/FPWD-AGWG-20200219/results>. The 
key issues that need to be resolved are to:

  * Understand the scoring
  * Determine what is normative and what is informative

We started working through the Scoring Example 
<https://docs.google.com/document/d/1LfzTd_8WgTi0IUOOjUCRfRQ7e7__FRcnZow4w7zLlkY/> 
section by section.  We didn't finish, but had some good discussion on 
the following topics:

  * How formal do we want the declaration of scope to be?  A list of
    URLs, screens, functional activities, task completions? We want to
    be flexible, but not wide open.
  * In the digital publishing industry, referring to pages or screens is
    confusing.  We may want to put more detailed examples of the variety
    of types of products in the Explainer. We may want to add ebooks to
    the list of examples in the document.  We want it to be clear and
    obvious how the terminology and the technology interact.
  * There is a variety of uses for these guidelines, which makes it
    necessary to give the owner of the product significant flexibility
    in describing the product's intent. As we try to support
    the various uses of this work (such as an app, a book, specific
    chapters, kiosk, mobile app, and more) we need to ensure that
    authors and owners have the ability to make those descriptions.
  * Representative sampling presents a challenge in selecting random
    pages or screens. WCAG-EM provides guidance that can assist with
    that. WCAG-EM has a great deal of depth, and we are only giving the
    highest level overview.
  * There is a mismatch between the normative guideline for headings
    and what we are measuring for tests.  That is because the guideline
    is the last part to be written in the new content process and the
    normative guideline in the Heading example is more of a placeholder
    that needs to be updated.
  * Should failure techniques of WCAG 2.x be an automated failure in
    WCAG 3 regardless of the score?  This is an important issue that
    needs more discussion and testing.
  * The scoring is complex now, because we have to accommodate a lot of
    different stakeholder priorities, user needs, and technologies. Once
    we hammer out the details, we will be able to present it more simply.
  * Some guidelines will be scored by number of instances of a
    condition, some will be scored (like Timing or Keyboard) by page or
    screen or site.  WCAG 2.x scores by page, but also uses instances in
    a less obvious way.  Image alternatives are often about individual
    instances.
  * Alternative text on image-based controls is more important than
    descriptions of informative images; how do we account for that?
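The instance-based percentage scoring discussed above can be sketched as follows. This is a hypothetical formula for illustration only; the actual WCAG 3 scoring rubric was still under discussion at this meeting, and the function name and the treatment of the zero-instance case are invented for the sketch.

```python
def guideline_score(passed_instances, total_instances):
    """Percentage score for an instance-based guideline (e.g. headings):
    the share of instances that satisfy the tests.

    Hypothetical formula for illustration; not the task force's
    agreed-upon scoring method."""
    if total_instances == 0:
        # Nothing to count; scoring absent content is an open issue
        # ("measuring what's not there is problematic").
        return None
    return round(100 * passed_instances / total_instances)

# For example, 9 of 12 headings passing the tests would score 75 (%).
score = guideline_score(9, 12)
```

A score like 75% is arguably more informative than a binary pass/fail, which is the point raised in the discussion below about WCAG 2.x's true/false success criteria.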

All participants were asked to try out the scoring on a site or product 
of their choice for the next session.

The next session will be from 4-6pm ET. World clock for your time zone 
<https://www.timeanddate.com/worldclock/fixedtime.html?msg=Monday+Silver+Meeting+Part+2&iso=20200309T16&p1=43&ah=2>. 


== Minutes ==

https://www.w3.org/2020/03/09-silver-minutes.html

=== Minutes as Text ===

    [1]W3C

       [1] http://www.w3.org/

                                - DRAFT -

                            Silver F2F Monday

09 Mar 2020

Attendees

    Present
           jeanne, Chuck, MichaelC, Lauriat, KimD, mattg,
           PeterKorn, Rachael, bruce_bailey, kirkwood, JF, sajkaj,
           Katie_Haritos-Shea, Nicaise_, AngelaAccessForAll,
           JakeAbma, Detlev, david-macdonald

    Regrets

    Chair
           Shawn, jeanne

    Scribe
           Chuck, sajkaj

Contents

      * [2]Topics
          1. [3]Introduction and where we are (focus areas in
             response AGWG comments)
          2. [4]Working through the Scoring Example (whole group)
      * [5]Summary of Action Items
      * [6]Summary of Resolutions
      __________________________________________________________

    <Chuck> scribe: Chuck

    <PeterKorn> ppresent+

    Notquitepresent+

Introduction and where we are (focus areas in response AGWG comments)

    Jeanne: A few who are not familiar... let's do an introduction,
    and what we are trying to do.

    Shawn: We are using #silver, taking minutes and links and
    comments.
    ... Chuck volunteered for scribing, we'll need a 2nd scribe for
    2nd hour and a backup.

    Janina: I'll backup and take 2nd hour.

    Shawn: Let's get started.
    ... A quick re-hash of how we got here.

    <Lauriat>
    [7]https://w3c.github.io/silver/requirements/index.html

       [7] https://w3c.github.io/silver/requirements/index.html

    Shawn: Link to silver requirements document.
    ... This is what we put together to establish what we are
     building and why. Cover a few of the overall design principles
    and requirements.
    ... We'll talk through some recent comments on current work.
    Then overview of planning over the next 2 days.
    ... Thank you everybody for flexibility and support, and
     patience with this. This is our first remote meeting. We scheduled
     for the largest number and most diverse set of people.

    <Lauriat>
    [8]https://www.w3.org/WAI/GL/task-forces/silver/wiki/2020_March
    _F2F_Meeting_at_CSUN#Agenda

       [8] https://www.w3.org/WAI/GL/task-forces/silver/wiki/2020_March_F2F_Meeting_at_CSUN#Agenda

    Shawn: Hopefully a couple of sessions works for everybody.
     ... Design principles. Things we want to call out that set the
     overall guiding star for the Silver work. These aren't things we
     could necessarily measure achieving.

    <Lauriat> Support the needs of a wide range of people with
    disabilities and recognize that people have individual and
    multiple needs.

    Shawn: "support the needs of a wide range of disabilities..."

    Katie: You going to share?

    Shawn: Not planning, put link in irc, people can follow along.
    ... We should always reference that this is something to guide
    the work that we do.

    Bruce: We have people in zoom that have never heard of irc. Can
    we do some screen sharing? I know it's new to some.

    Shawn: I'll give that a shot.

    <team determines how best to screen share>

     Shawn: Talk through a couple of the other design principles. I
    won't read through everything.

    <Lauriat> Support a measurement and conformance structure that
    includes guidance for a broad range of disabilities. This
    includes particular attention to the needs of low vision and
    cognitive accessibility, whose needs don't tend to fit the
    true/false statement success criteria of WCAG 2.x.

    Shawn: One of the things is to support a measurement and
    structure that covers a wide range of disabilities. Especially
    coga (and others) that don't fit... <pasted in>
     ... Some of the other design principles: overall... inclusion
    of most set of people creating...
    ... Some feedback, don't want to spend too much time on intro.
    Does this seem a good assessment of where we are?

    JF: I think so yes. Over the next 2 days I hope we can talk
    about point #6, repeatable tests.

    Shawn: Thanks John. "Improve the ability to support automated
    testing where appropriate..."

    <Lauriat> Improve the ability to support automated testing
    where appropriate and provide a procedure for repeatable tests
    when manual testing is appropriate.

    Shawn: For the agenda, today we have intro, followed by working
    through a scoring example. tomorrow we'll talk through
     normative vs informative and what should be what.
    ... Tuesday the first session should be on new content on what
    we can work on next, 2nd half of tuesday we talk through 2nd
    half of testing.
    ... The reason is we want to work through comments we got
    through recently from wg that we got on our working draft.

    <jeanne>
    [9]https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guide
    lines/#visual-contrast-of-text

       [9] https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guidelines/#visual-contrast-of-text

    <jeanne>
    [10]https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guid
    elines/

      [10] https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guidelines/

    Jeanne: This is the most recent branch in github.
    ... For benefit of those who haven't seen it before, run
    through the sections, we've made some updates based on first
    round of comments from agwg.
    ... We've updated intro based on comments. There were questions
    about the scope. Some people wanted it more broad, some more
    narrow. We decided to adapt the charter scope.
    ... There were a few things to make it fit this document.
    That's the scope.
    ... We are stating we are planning to be inclusive of wcag atag
    and uag. atag would be in non-normative way. Then we started
    into the guidelines.
    ... There were a lot of questions, and people wanted to know
    how it maps to wcag 2. We added a table. In general sc in wcag
    become guidelines in wcag 3
    ... Techniques map to methods. Understanding maps to how to
    documents. We gave a link to templates. A number of people
    asked to see bones of structure without too much detail.

    <JakeAbma> persent+ JakeAbma

    Jeanne: Please review. We put in 3 guidelines we are working
    on. 2 are migrated from wcag 2, headings and visual contrast of
    text. The middle one is clear language, a new guideline
    ... That came from coga. We worked closely with coga on the
    development. You can see under each guideline, there's a link
    to headings.
    ... "how to".. if you open those links, you'll see a wireframe
    of how we want the guidance to work. ... get started which
    covers a summary...
    ... any exceptions, then examples. Then activities... people
    who would be working on a project would plan, design, writing,
    development. Final tab is list of methods.
    ... If you click on the methods, you'll see an edit text for
    clear language, will show you the different tabs for methods.
    Basics (details of what it applies to, how recently updated).
    ... Detailed description, code samples, test, resources,
    changelog. Any questions?
    ... This is a lot to throw at you quickly. Please speak up in
    IRC or on phone.

    Katie: To be clear, the requirements of wcag 1 and 3 are
    guidelines, and 2 is sc.

    Jeanne: Yes, our best guess was to include guidelines. I found
    Friday when we talked about normative and informative. I found
    an email from Katie in 2006, asking for definition of
    normative.
    ... These are our samples of the guidelines. We have intro
    which needs to be shortened. Moving more info into explainer. I
    put it in the editors note.
    ... We do have an explainer doc. I moved goals, background
    info, I started moving some of the issues that we don't have
    consensus on or are difficult to obtain consensus.
    ... w3c recommends to include that. It's not done yet.
    ... We took the conformance issues and narrowed it to declaring
    a scope.
    ... We'll talk more. Declaring a scope is a recognition of how
    people operate today. We are formalizing what people actually
    do. Here's a logical subset of the entire operation...
    ... For example, make a claim for crossword puzzles... this is
    just to formalize that people can do a logical subset of their
    entire website or application or product.
    ... Next part is in response to a number of research requests
    and some of this also came from the work that Janina and Peter
    did on challenges doc. We want to formalize how to do sampling.
    ... To do that we are referencing wcag em. The issue is it's
    very web page oriented. We want to have a broader scope to it,
    but the principals apply.
    ... People thought we were just allowing cherry picking, but
    that's not correct. We put in the highlights from wcag em, so
    people could see that this is a very structured process to do
    the sampling.
    ... That's a change from earlier.
    ... What is new to silver is the next section, different sample
    size requirements for different size products. Hopefully we can
    talk about today. We can do more with this.
     ... Any site with under 10 pages or screens needs to test
     everything. At 11-100 pages they need to test all pages with
     automated tools, and review at least 10 pages in depth.
     ... Needs to be representative. Common elements must have
     testing. Then larger sites, as we tried to make the requirements
     more suited to the size. We have a note that we want to talk about more...
    ... Do we want to have different requirements based on
    complexity? We'd love ideas. We moved to points and levels.
    We'd like to evaluate each guideline and come up with a %
    result.
    ... There will be various ways of doing that based on the
    guidelines. Based on what's in the guideline and in the
    methods. So people know how to test and score.
    ... It gives us more flexibility to cover more of the complex
    needs we are getting from various groups that are having
    difficulty getting their needs included in wcag 2.
    ... This is one of the solutions we came up with. Most of the
    group thinks this is the best solution to date. We'll cover
    scoring later.
    ... Another thing that came out of research, 18 months of
    research, combining corporate and academic.
    ... That's part of our solution of ... what we want to do is
    say that authors are not responsible for the bugs of the
    browsers or at. Not that you shouldn't test with these, but
    people...
    ... Shouldn't do hacks to correct code to adapt to bugs of
    browsers and at. In practice that's how people interpret
    accessibility supported. But that has bad results long term.

    Katie: We want to call that technical debt. That will... we
    want to prevent that from happening.

    <bruce_bailey> love the reference to "technical debt" !

    Shawn: One of the other sides is covering emerging
    technologies, a particular use of technology or standard is not
    covered yet, to be able to express that in a helpful meaningful
    way.
    ... We should come up with a name just to clarify so that we
    aren't as soft.
    ... Yes, we need better words to use. Scoping for instance,
    pages and screens doesn't completely cover how silver works.
    ... for google docs there is one url for the editor, you could
    call things out as screens, there is a ton of functionality
    that isn't different screens. This becomes a very large surface
    area.
    ... Not well covered by "pages and screens". We hope to work
    through that over the next couple of days, terminology and
    conformance definitions to make sense.

    Oconnor: Any thoughts to add from Aria AT community group that
    document the bugs?

    Shawn: We haven't given that a lot of formal thought. I and my
    team are actively interested in contributing.
    ... For those unfamiliar with ARIA AT, coming from ARIA work
    and practices guide, tons of implementation examples. The aria
    at group documents what are the bugs in browsers and screen
    readers.
    ... And how well they adopt ARIA standards.

    Jeanne: Shawn, do you think we've covered enough, should we
    cover comments?

    Shawn: Summary from Alastair? Or too early?

    Jeanne: We've fixed most of that summary. What we had left are
    issues we are addressing today. Scoring system, what's
    normative and not. We also had... testability and...
    ... ...we need to change the name, we know that, not our
    highest priority. We are looking to make the substantive
    changes that the group has been asking for.

     Shawn: The name we have drafted is W3C accessibility guidelines,
    which abbreviates to WCAG 3.

    JF: Jeanne is showing comments from survey. To be clear, these
    aren't the complete comments from the wg. They are comments.

    Jeanne: I was trying to show a different screen. John, is your
     concern that there are more comments?

    JF: I think the survey is still open, and if someone didn't
    comment in the survey, this isn't a complete sense of thoughts
    from the wg.

    Jeanne: Glad you clarified that. This is what we used to set
    the agenda. What are the issues that are outstanding that we
    need to address. Scoring and normative were the biggest ones,
    and that's where we'll focus.

    Shawn: A quick summary of those. For the scoring, it was more
    of the need to see some end to end examples of how scoring
    works. Normative vs informative... we have been thinking...
    ... of the structure of silver... user needs, tests. only the
    guidelines would be normative, the rest would be informative.
    We have some questions and want to work through pros and cons
    ... for each part of the structure.
    ... ... moving on to scoring example.

Working through the Scoring Example (whole group)

    Jeanne: Here's the document...

    <jeanne>
    [11]https://docs.google.com/document/d/1LfzTd_8WgTi0IUOOjUCRfRQ
    7e7__FRcnZow4w7zLlkY/

      [11] https://docs.google.com/document/d/1LfzTd_8WgTi0IUOOjUCRfRQ7e7__FRcnZow4w7zLlkY/

    Jeanne: that's what I'm sharing. First thing I want to point
    out is that we do reference wcag em. I put a link to it.
    ... Underlying this is a spreadsheet I was using for
    calculating the scores. That's also linked to, but please don't
    change. Be cautious.
    ... What we were asked was to take a person interested in
    making a conformance claim through all the steps of a demo
    site. Declaring a scope, taking a representative sample,
    ... Scoring against the new guidelines, total score, minimums,
    and level
    ... This uses W3C before and after.
    ... First part of declaring a scope is the way before and after
     demo is a portion of the w3c wai website. A subsection of the
    w3c site. The before and after demo is...
    ... That we aren't trying to test the whole website, just the
    small part, and that's the scope.
    ... One of the things is that the scope doesn't have to be
    expressed as a URL. Not restricted. A single page app would not
    be ... if a complex app that's state driven, may not be url...
    ... mobile apps also could not be expressed as a URL. Any q on
    declaring a scope?

    JF: In a reporting scenario, how do we envision the scope being
    declared? Will there be a formal structure?
    ... So that we know we are comparing apples to apples?

    Jeanne: How does Deque handle it today? If someone asks Deque
    to evaluate a small part.

    JF: We'll start with an itemized list, and if it's a subsection
     we'll call it out as a component. I'm asking in a structural
    way, do we envision a structure?
    ... Our report includes what is tested. An itemized list of
     URLs as the baseline?

    Jeanne: How do you handle it with mobile apps?

    JF: We use screens.

    Jeanne: We may have to write more about this, but do you see a
    problem with what we have?

    JF: I would be concerned if we mixed screens and pages for
    example.

    Shawn: With different apps there is different kinds of
    granularity. google docs, the vpat declares the scope as the
    full application. When we do testing for new functionality we
    tend to...
    ... describe it in terms of the scope of functionality. The
    controls for opening and closing the outline, interacting with
    that outline, focus movement.

    JF: Shawn you have answered some of my question. Your scope has
    declared a section of a doc and screen, and focused on
    functionality. Will we focus on functionality or screens?
    ... Is it one of pages, screens, components, something else?

    Jeanne: We need to be flexible.

     JF: As flexible as gel.

    Shawn: In terms of tasks (which has its own set of issues)...
    for google docs the task would be navigating doc by heading,
    for a flat traditional website may be to get to this page...
    ... Allows for the flexibility of not being tied to URLs,
    screens, or anything else that's application specific.

    Matt: I want to raise digital publishing aspect. Screens and
    pages get confusing. epub consists of many html docs. Even
    bringing in sub-sections can be confusing. Is considered one
    whole, one unit.
    ... Language may be very confusing. Does site apply to epub
    internal contents where you have different html pages together.
    I don't have issue with web, but the terminology needs to be
    finessed.

    Jeanne: If we added epub to the list, would that be an
    appropriate usage?

    Matt: Maybe not epub specific, but something along those lines
    would be helpful.

    Shawn: Good way would be in supporting documentation or
    explainer of having an example "for this kind of app, here's
    how you can do this..."

    Matt: that would help, as long as it's clear and obvious what
    the interaction is between the terminology and technology.

    JF: Currently on screen we have "take a sample of website." This
    is just a draft, but I have a concern about website in this
    context.

    Jeanne: I have an example of a non-website that follows it.
    Anything else on scope?

    PK: I want to underscore that this conversation shows the
    variety of uses for these guidelines, and the necessity of
    giving the owner of the product significant flexibility of
    describing what the intent of the product is.
    ... Are you talking about an app, a book, specific chapters,
    whatever. as we try to support the various uses of this work we
    need to ensure that authors and owners have the ability to make
    those descriptions.

    Shawn: Indeed.

    Jeanne: Let's take a look at the next section. We are getting
    more into the meat of things. Taking representative sample. The
    before and after is only 4 pages long, so not a great example.
    ... What I did was take a different example and look at the wai
    website. What I did for the first example, I looked at... very
    high level, very sketchy level. We didn't want to spend too
    much time on it.
    ... We followed the wcag rules. This is the wai section of the
    w3c website. 2nd step is to explore target website. Identify
    common web pages.
     ... then identify essential functionality.
    ... identified landing pages with common top nav and footer.
    Detailed pages on a topic... unique element. video pages with
    captions.
    ... We said "3 basic types of pages". We identified
    technologies relied upon. Identified other relevant web pages
    (from footer). contact, etc...
    ... Step 3 is select a representative sample. The structured
    sample is all the pages we identified above.
    ... We took all of the structured templates, etc.
    ... Then we also included randomly selected sample. Let's
    assume there are 120 pages, we would add 12 random pages as
    recommended by wcag em.
    ... Then include any complete processes... login checkout
    creating new account. We didn't have any of these on the wai
    website. Not included in our review, but this is where we would
    included it.
    ... 16 structured pages, 12 random pages, 28 in total.
    ... This is all what folk in agwg are familiar with.
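    The sample-size arithmetic Jeanne walks through (a structured
    sample plus a roughly 10% random sample, per WCAG-EM) can be
    sketched as below. This is an editorial illustration, not part of
    the minutes or of WCAG-EM itself; the function name and parameters
    are invented for the sketch.

```python
import random

def select_sample(all_pages, structured_pages, random_fraction=0.10, seed=None):
    """Assemble a representative sample along the lines described above:
    every page in the structured sample, plus a randomly selected
    sample sized as a fraction of the whole site (10% matches the
    120-page -> 12-random-page example).

    Illustrative only; `select_sample` and its parameters are not
    defined by WCAG-EM."""
    rng = random.Random(seed)
    n_random = round(len(all_pages) * random_fraction)
    # Draw random pages from those not already in the structured sample.
    candidates = [p for p in all_pages if p not in set(structured_pages)]
    random_pages = rng.sample(candidates, min(n_random, len(candidates)))
    return list(structured_pages) + random_pages

# The walkthrough's totals: 120 pages, 16 structured -> 16 + 12 = 28.
site = ["page-%d" % i for i in range(120)]
sample = select_sample(site, site[:16], seed=1)
```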

    <sajkaj> scribe: sajkaj

    js: Moving to a nonweb sample
    ... Picked NY Times app for Android; specifically the "For You"
    section
    ... Paraphrased wcag-em for nonweb -- it's scratch work so far;
    rewording should be done by ACT-TF
     ... It's a thought experiment at this stage
    ... -- Works through the em steps ...

    corb: How did you pick "random screens"

    js: Just made it up -- somewhere to start

    <Lauriat> Scoring Example:
    [12]https://docs.google.com/document/d/1LfzTd_8WgTi0IUOOjUCRfRQ
    7e7__FRcnZow4w7zLlkY/edit

      [12] https://docs.google.com/document/d/1LfzTd_8WgTi0IUOOjUCRfRQ7e7__FRcnZow4w7zLlkY/edit

    bb: Notes that ability to select is best for NY Times itself

    corb: On website we have index, so "pick random" is easy
    ... How do we advise "choose at random" for screens? Not sure
    what the solution might be

    js: There are good guidelines in wcag-em
    ... Lots of depth there, and seems best to adapt not redefine

    bb: Works well when there is a conformance claim
     ... Not so helpful to third parties

    sl: But a third party would still have some sampling to
    illustrate the concern; "looking at these kinds of screens,
    etc"

    bb: Agree and notes it's no loss from today

    js: OK, this was the easy part ... Let's take on actually
    scoring ...
    ... Took before and after demo -- but only used the before part
    ... Found issues with headings
    ... Similar heading issues on multiple pages

    jf: Reads from heading guideline ...
    ... Where is "hierarchy" in normative requirements?

    <Lauriat>
    [13]https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guid
    elines/#headings

      [13] https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guidelines/#headings

    <bruce_bailey_> Interesting. Is hierarchy inferred from "use"?

    js: Reviews the Silver process -- and the last step is to write
    the guidelines
    ... We haven't fully developed the guideline yet, it's the
    least mature
    ... If working through examples exposes additional items for
    guidelines, we can do that

    jf: Lists several descriptors -- 5 things -- that might be
    scorable
    ... Where does EN get added?

    js: Much later

    jf: Concerned about circular logic

    sl: Which is why we're working in parallel and reiterating
    ... Possibly into the tests and not the guideline

    df: Asks whether current WCAG 2.x base will be brought in; The
    two heading failures currently defined as per this example
    ... sc 1.3.1
    ... Those failures, when detected, will constitute a failure

     js: That helps crystallize the issue
    ... Earlier attempts had more factors but that was
    counterproductive, it watered things down
     ... The really bad problem, no semantics, got lost in too many
    factors

    <JF> Severity of the failure

    js: That is the problem--asks SL whether we should now digress
    into severity

    sl: Suggests we do need to use prior work as we build 3.0
    guidance
    ... Even if not direct mapping

    <Zakim> bruce_bailey_, you wanted to say i am hearing that
    tests/methods are normative

    sl: Believe I heard tests will be normative?

    bb: Believe that's necessary

    matt: Want to agree with the complexity of providing total
    scores -- was a problem for us in publishing
    ... Publishing is currently trying to balance a plain lang
    description of presenting issues and what users are affected

    df: Wonders how severe issues will affect conformance?
    ... There needs to be a way to capture criticality

    sl: agree
    ... Part of our concern has been to find ways to support
    different pwd groups; Critical issues for screen readers might
    be covered, but some issue exist for COGA

    <Zakim> JF, you wanted to ask about Measurable, Testable,
    Repeatable

    <Detlev> John you are muted

     jf: Believe publishing needs a score, yet it may or may not leave a
    pwd group out
    ... we do have multiple audiences
    ... Concerned that this draft is even more complicated than 2.x
    ... measurable, testable, repeatable are musts -- esp for tool
    vendors
    ... So that different testers will obtain approx the same score
    ... Not seeing help for producers yet

    bb: Suggests the count needs to be an instruction and not a
    summation total

    <david-macdonald> can someone drop URL of doc, thanks?

    sl: Suggests scoring will look far more complicated until we
    figure out how it needs to work and can then be made clear

    <Lauriat> Scoring Example:
    [14]https://docs.google.com/document/d/1LfzTd_8WgTi0IUOOjUCRfRQ
    7e7__FRcnZow4w7zLlkY/edit

      [14] https://docs.google.com/document/d/1LfzTd_8WgTi0IUOOjUCRfRQ7e7__FRcnZow4w7zLlkY/edit

    jf: Worries counting headings could game conformance

    <bruce_bailey_> at this point in time, i don't think we need to
    focus much on bad faith actors gaming the scoring

    js: Asks whether knowing I score 75% is more informative than
    100% or 0%
    ... But agrees the rubric needs to offer better guidance

    <bruce_bailey_> i added a comment to doc that evaluator needs
    to decide -- on their own -- how many headings SHOULD be on the
    page

    jf: Need to say what headings are used for

    sl: We ask ourselves how well the scoring reflects impact on
    pwds
    ... Further describes the iterative process of development here
    --

    dm: Measurable vs testable first big change; second is
    nonbinary scoring; ...
    ... Concerned that analysis load is exploding
     ... How do you chunk passes of "meaningful sequence"?
    ... Wondering what discussion are happening around these issues

    js: Yes, and we have recently looked at this ...
    ... We note everything in 2.x measures by page; but some count
    instances on page like images; but others are to the page as a
    unit like timing
     ... Each guideline requires its own measure; not previously
    broken down because conformance model was limited
    ... We will need to look at these individually
    ... But dm's point is correct; some are instance based, some
    page based, etc.
    ... Could aggregate for keyboard
    ... Was looking for site-based example; not sure keyboard is

    df: Important point that it's not just about measuring what's
    there
    ... But that's content dependent
    ... Suggests Proust would be a problem; no paras, no headings,
    nada
    ... Measuring what's not there is problematic
     ... Have to ask whether the distribution of headings is
     appropriate to the content
    ... another example, if minor headings are present, but main
    higher level ones aren't, that's a failure

    <JF> +1 alt text on actionable items is *more important* than
    informative images

    js: Notes other work not yet ready for the fpwd draft, e.g. alt
    text
    ... Believe we will have separate guideline for image based
    controls vs other images

    sl: task based framework helps us express that

    jf: returns to concern about "inaccessible to whom"
    ... Suggest more so to some but not others and wonders how
    scoring captures that?
    ... We need to be more granular in our measurement as
    regulators get more granular in what they're asking for

    js: Returns to an old scoring example to respond and show
    minimums
    ... Notes that old example provided score by individual
    categories and was a good analysis of the website evaluated

    jf: Asks about actionable images? Equally disruptive? Or more
    so?

    js: good point--important for Dragon users

    jf: Suggests alt text on actionable image is important to
    capture

    js: Yes. As noted earlier, probably we split alt on images into
    actionable and non-actionable

    <Zakim> bruce_bailey_, you wanted to ask if there will be a
    scoring exercise today or tomorrow?

    sl: Answer as we close call -- yes, we need to resolve these

    js: People, please pick a site and try this 3 guideline
    approach out. We need feedback with real data

    <jeanne>
    [15]https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guid
    elines/

      [15] https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guidelines/

    <jeanne> headings:
    [16]https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guid
    elines/#headings

      [16] https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guidelines/#headings

    <jeanne> Clear Language:
    [17]https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guid
    elines/#clear-language

      [17] https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guidelines/#clear-language

    <jeanne> Visual Contrast:
    [18]https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guid
    elines/#visual-contrast-of-text

      [18] https://raw.githack.com/w3c/silver/ED-changes-25Feb-js/guidelines/#visual-contrast-of-text
Received on Monday, 9 March 2020 16:03:57 UTC
