Re: Evaluation Report Generator minor things

Hi Wilco, Shadi,

Notes inline, and a bigger thought about how page 4 might work below.

On 1 Aug 2014, at 18:12, Shadi Abou-Zahra <shadi@w3.org> wrote:

>> • Make Save more apparent throughout - top right with nice disk icon to reinforce?
>> • Make Load more apparent at the start and less apparent throughout - it is unlikely that I would load from anywhere other than the start.
> 
> Excellent suggestions but we will probably need to look at them later on in the process. Could you add these suggestions to GitHub please?

Added.

>> • 5. Report Findings: Make Evaluation Date be a range.
> 
> As far as I know, this is a text box and people can fill anything they like. I personally think a reporting date is more common than a date range but both are considered in WCAG-EM. In any case, I think we can clarify this in the info box. Maybe we can also add "(or date range)" after the date in the placeholder text but that may add complexity and some people were already arguing for less placeholder text.

It is a text box, which is fine. I guess the format doesn’t matter that much as it is not going into any process which would require date comparisons.

>> • 3. Select Sample: Combine the Structured Sample and Randomly Selected Sample into one list. If Randomly Selected items need to be identified, include a checkbox to do so. This simplifies this and makes it more likely that the Randomly Selected pages would be identified.
> 
> I would personally prefer to stick to WCAG-EM structure as much as possible, also to help explain the terms "structured sample" and "randomly selected sample". If we have it in one section, it will be difficult to explain them individually.
> 
> Do you feel strongly about this? What is the issue you want to fix?

The issue I am thinking of here is that there are effectively two forms that do exactly the same thing, which adds unnecessary complexity. The action is to add a page to the sample to be evaluated. Whether this page is part of the structured set of pages or part of the random set is not relevant to the action; that information is simply another facet of the added page, and a checkbox could indicate it just as easily without the need for an additional form. For example, the attached image shows the form with an additional column containing a ‘Random sample’ checkbox:



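To make the idea concrete, here is a minimal sketch of the underlying data model as I imagine it: one combined list where each page carries a flag, rather than two separate lists. (This is JavaScript only because the tool is browser-based; the property names are my own invention, not the tool's.)

```javascript
// Hypothetical combined sample: each entry carries an isRandom flag
// instead of living in a separate form and list.
const sample = [
  { url: '/',            title: 'Home',    isRandom: false },
  { url: '/contact',     title: 'Contact', isRandom: false },
  { url: '/news/item-7', title: 'News 7',  isRandom: true  }
];

// The WCAG-EM groupings can still be derived for the report:
const structured = sample.filter(p => !p.isRandom);
const random     = sample.filter(p =>  p.isRandom);
```

So the two WCAG-EM terms can still be explained and reported separately; they just would not need two entry forms.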
> 
>> • 3. Select Sample: Include an indicator of the number of randomly selected pages that need to be selected based on the number of pages in the structured sample. This could update as pages are added. The only downside of this is if the website is small enough that it is feasible to evaluate every page.
> 
> Good point! I think we should add this to the feature request list. Could you add that to GitHub, please?

Added.
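On the indicator itself: it could be a trivially derived count that updates as structured pages are added. A sketch, with the ratio left as a parameter since I would want to double-check the exact proportion WCAG-EM recommends (the 10% default here is an assumption):

```javascript
// Hypothetical: how many randomly selected pages are still needed,
// given the current structured sample. The 10% ratio is an assumption,
// not a figure confirmed against WCAG-EM.
function randomPagesNeeded(structuredCount, randomCount, ratio = 0.1) {
  const required = Math.ceil(structuredCount * ratio);
  return Math.max(0, required - randomCount);
}
```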

> 
>> One question I had was how a user could attach evidence to the report. It is briefly mentioned in step 5, but there is no apparent way to do this… and I can imagine that it is not that feasible.
> 
> Yes, I was wondering about that too. I guess for the first release attachments need to be made manually. That is, you download the HTML and add your appendices manually. In later versions we may be able to provide an "attach" feature where the attachments are stored in the browser session along with the data. My worry is that this may slow down the browser, depending on the attachment sizes. Another one for GitHub, please.

Done.
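On the worry about attachments slowing the browser down: if attachments were ever stored client-side, a simple size budget would be easy to enforce before accepting one. A sketch only; the limit below is illustrative, not a real browser or tool constraint:

```javascript
// Hypothetical guard: refuse attachments beyond a size budget so that
// serialising them into browser storage does not bog things down.
const MAX_ATTACHMENT_BYTES = 512 * 1024; // illustrative limit, not a spec value

function canAttach(currentTotalBytes, attachmentBytes) {
  return currentTotalBytes + attachmentBytes <= MAX_ATTACHMENT_BYTES;
}
```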

With regards to page 4, I found this incredibly confusing when I first accessed it.

There are a couple of particular things:

• It is not apparent what happens when a checkbox is clicked. Even if an evaluation details panel is open, the change may not be apparent as it may be off screen.
• For large samples the left-hand checkbox list may become unwieldy. This may become apparent even with smaller samples of, say, 50 pages.
• It feels like there is a mismatch in how the checkboxes beside the pages work: if none are selected, all pages are shown in the evaluation details, but when a page is selected only that page is shown. It seems that in order for a page to be shown it must be selected; therefore, if all pages are to be shown then all should be selected.
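To spell the third point out: the current rule seems to be ‘an empty selection means show everything’, where the consistent rule would be ‘a page is shown if and only if it is checked’ (with everything checked by default to preserve the show-all starting state). A sketch of the two behaviours as I understand them; function names are mine:

```javascript
// Current behaviour (as I understand it): an empty selection means "show all".
function shownPagesCurrent(pages, selected) {
  return selected.length === 0 ? pages : pages.filter(p => selected.includes(p));
}

// Consistent rule: shown if and only if checked. Checking everything by
// default would preserve the "all pages shown" starting state.
function shownPagesConsistent(pages, selected) {
  return pages.filter(p => selected.includes(p));
}
```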

In general my confusion may be down to my mental model of evaluations and how it may differ from others’. When conducting an evaluation, after doing a general explore, I would select the sample, and then review every page in the sample against the criteria. On this basis the interface flow would be ‘select page’ -> ‘check against criteria’ -> repeat while pages remain. I would never select two pages and evaluate them both at the same time. I might return to a page that is part of a section and mark that an issue also applies to that page, or I might promote an issue from being page-specific to being general.

Thinking about this model the approach could be structured as:

4. a. Audit Sample - Select page

    Select a page to evaluate: Dropdown select or checklist

    Button: Evaluate page >
    Button: Next >

4. b. Audit Sample - Evaluate page

    Clear indication of page being evaluated in the header section.

    Success Criteria to Evaluate:

    Boxes as are with the ‘Results for:’ drop down and the notes field.

    Include a checkbox: ‘This is a general issue’, with a notes text entry revealed when the checkbox is checked.
    Include a multi-select widget (I refuse to be drawn too much on how! ;)): ‘Also applies to’, allowing the evaluator to select other pages that this issue applies to.

    Button: Select another page to evaluate >
    Button: Next >

This approach retains the focus on the page that is being evaluated but allows the evaluator to connect issues to other pages. It also allows for a few other useful features: marking a page as done, and showing percentage progress. It should also declutter the evaluation page and leave more space for nice things like tabs to chunk up the Principles, further simplifying the page.
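The per-page flow above implies only a small amount of state, which is part of the appeal. A sketch of what the tool might track (again JavaScript, and all names are mine rather than the tool's):

```javascript
// Hypothetical evaluation state for the proposed 4.a / 4.b flow:
// one current page, plus a done flag per page in the sample.
const evaluation = {
  pages: [
    { url: '/',        done: true  },
    { url: '/contact', done: true  },
    { url: '/search',  done: false },
    { url: '/news',    done: false }
  ],
  current: '/search'
};

// "Mark a page as done" and "percentage progress" fall out of this directly:
function progress(evaluation) {
  const done = evaluation.pages.filter(p => p.done).length;
  return Math.round((done / evaluation.pages.length) * 100);
}
```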

I am happy to try to mock some of this up if you think that would be useful for discussion.

Thanks

Kevin

Received on Thursday, 7 August 2014 09:18:24 UTC