Re: Feedback from Berlin doc sprint

And just because I can, I've added some more verbatims from doc sprint
participants, pasted in below.

Also, I've started working with the feedback Chris cited here and the stuff
I gathered, pasting it all into a spreadsheet, categorizing each
submission, and grouping the submissions. I'll then post everything
organized by category and we can draw out the issues and discuss possible
resolutions for each.

First, the raw data:

12. CSS project, Not clear where property belongs (Length: CSSOM or CSS?)
13. Editing tasks, Identify the different levels of expertise/skill
required for editing pages
14. Editing tasks, Review by someone with greater familiarity/expertise
(could watch pages and check as they are saved)
15. Editing tasks, Editor identifies pages that require further review for
escalation (via comments, a "tech review required" flag, urgency)
16. Getting Started, Identify tasks based on expertise, skill level
17. Editing tasks, Identify tasks appropriate to role: developer vs. noob
vs. designer
18. Getting Started, Identify content in terms of % completeness
19. Syntax representation, Why Java syntax in a CSS property page?
20. Content, Examples do not show best practice
21. Content, Examples contain irrelevant information
22. Task queue, No way to identify which CSS properties are already taken
for tasks
23. CSS Project, CSS properties marked as non-existent (e.g. background)
that actually do exist (not under /css but under /css/cssom)
24. Forms, Not clear how to edit compatibility tables.
25. RTFM, Missing instruction on how to edit compatibility tables
26. Editing tasks, Need a flag: "Review Required"
27. Editing tasks, Identify tasks appropriate to domain expertise
28. Editing tasks, [good] Forms identify content chunks for the purpose of
29. Editing tasks, Identify the value (learning) of the trivial
"cut-n-paste" as separating the "implementing" pieces from the
"developing" pieces
30. Editing tasks, Set the expectation that the "trivial" task will
teach the object model so that contributors can graduate to higher level
tutorials, concepts
31. Editing tasks, Time considerations are irrelevant to the completion of
tasks; either a contributor can or cannot complete the task - based on
domain knowledge, skills
32. Forms, Need a form for "constructor method" in APIs
33. Forms, Required forms should always be visible; hide those that are not
34. Forms, Domain knowledge required for "Topics" vs. "Topic Clusters"
(knee-jerk solution, documentation, not going to solve the problem - nobody
35. Editing tasks, Make it easy; the forms and other mechanisms are not
sufficiently easy to work with
36. Editing tasks, Video task: creating API pages, CSS properties, Getting
Started tasks
37. IRC, Ambiguity unresolved between #webplatform vs. #webplatform-site -
not everyone clear on which to use when
38. Content, Reach out to authors of NetMag, Smashing Mag, etc. to ask
them to contribute published articles
39. Templates, Open templates to editors, run a cron job to check template
usage, report out to the community
40. Templates, Using SMW forms, with all the checkboxes and fields, is
really annoying and a significant hurdle to editing; prefer the Kuma
model - single page, no templates, no forms

Back soon...

On Thu, Feb 14, 2013 at 11:33 AM, Scott Rowe <> wrote:

> Hi Paul!
> I hope you're feeling better!
> Your excellent work can be used in our Getting Started work flows. One of
> the ideas that Rodney Rehm had was that we need to set up our Getting
> Started tasks according to domain expertise and skill required. So, you can
> imagine a page set up for working in the API domain and a section of tasks
> for developers, one of which would be contributing code examples and -
> bing! - your list of articles requiring code examples. The developer just
> clicks on a link to an article, and off they go. Same for the CSS domain.
> I'd love to be able to get this together in time for our next doc sprint -
> February 23rd in San Francisco. Most of it depends on me working out the
> Getting Started flow and pages. As I recall you had a few more things
> you'd like to add to the queries, but as far as I can see, we can use them
> starting now.
> Tell you what though, let's take this discussion into a separate thread so
> as not to confuse the issue here. This thread was started to talk about doc
> sprint participant feedback. I'll paste all this in a new thread. Stay
> tuned.
> And, thanks again for the terrific work here!
> +Scott
> On Thu, Feb 14, 2013 at 11:04 AM, Paul Rosenbusch <
>> wrote:
>> The mailing list does not seem to have published my first message, so
>> I'll submit it again just to be sure. I hope nobody gets duplicate mails
>> because of this :)
>> 2013/2/14 Chris Mills <>
>> >
>> >
>> > On 14 Feb 2013, at 14:52, Tobie Langel <> wrote:
>> >
>> > > On Thursday, February 14, 2013 at 3:26 PM, Chris Mills wrote:
>> > >> 1. Some people want to just look at site compatibility info, or code
>> examples. It would be nice to create the site in a way that people can
>> search to just bring up site compat info or code examples, and not have to
>> trawl through all the full reference pages.
>> > > Sounds like this is something the test resource center[1] might
>> partially be able to address.
>> >
>> > Perhaps, yes.
>>  During the doc sprint I worked on semantic queries that list articles
>>  needing examples. Unfortunately I got the flu right after and could
>>  not work on it this week.
>>  I still need to document the whole thing and maybe optimize the
>>  output. Regardless of that, the template is usable at the moment. You
>>  can find an example implementation here:
>>  Where do you think would be the best place to put these tables?
>>  If needed I could also create a custom output format, but currently I
>>  have no idea which formatting would work best.
>>  --Paul R.
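
For anyone on the list who hasn't worked with Semantic MediaWiki: an
inline query of the kind Paul describes above - listing pages flagged as
needing code examples - might look roughly like the sketch below. The
category and property names here are illustrative placeholders, not the
ones Paul actually used in his implementation.

```wikitext
{{#ask: [[Category:CSS Properties]] [[Needs example::Yes]]
 | ?Modification date
 | mainlabel=Article
 | format=table
 | limit=50
}}
```

A query like this could be dropped onto a Getting Started task page so
contributors see a live, self-updating list rather than a manually
maintained one.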

Received on Thursday, 14 February 2013 20:29:31 UTC