Re: IA of Max's JS files for upload?

Hi, Chris & Scott:

PhistucK has pointed out some issues here, as have I. But we haven't done a line-item review of these paths. I was thinking that one of you, or some group, would take the action item to review and respond, at the line-item level, to Max's current list. Should we proceed another way?

Julee
----------------------------
julee@adobe.com
@adobejulee

From: PhistucK <phistuck@gmail.com>
Date: Saturday, June 15, 2013 10:33 AM
To: Max Polk <maxpolk@gmail.com>
Cc: julee <jburdeki@adobe.com>, Scott Rowe <scottrowe@google.com>, Chris Mills <cmills@w3.org>, Webplatform mailing list <public-webplatform@w3.org>, Doug Schepers <schepers@w3.org>
Subject: Re: IA of Max's JS files for upload?

The one thing that I think is crucial before the final import is the path structure.
Changing paths later creates either redirects (Semantic MediaWiki has issues with those) or links to non-existent pages, which is really bad and should be avoided.

The dom/ namespace is filled with bad paths, and personally, because moving pages creates a real mess, I do not move them, so the (other) mess continues.


☆PhistucK


On Sat, Jun 15, 2013 at 7:31 PM, Max Polk <maxpolk@gmail.com> wrote:
This concerns the improvements still needed to the work completed so
far on the MSDN-JS import project, improvements like better page
names and conversion of content to template syntax.

There was a documentary about nuclear reactors that attributed the
high cost of production in part to changing regulations.  By the
time a reactor was designed and built, so much time had elapsed
that a new set of regulations had begun to take effect, requiring
considerable rework and additional costs.

Such is the nature of physical construction.  Software, however, is
very much unlike physical material; data input and output is more
like soft putty to work and rework.  So rather than wait until
everything is perfect, my plan is to create interim versions of the
work completed so far, uploaded to a test wiki but also committed to
GitHub.

The next round of tools and scripts will use the interim version as
input and create another set of output.  Successive rounds can be
created like this until the work is complete.  The main driver is
that tools and scripts operating on local files work much faster and
more efficiently than against web resources (pages stored in a wiki),
which have to be downloaded and uploaded.

The spirit of a wiki is successive refinement, so having the bulk
import effort use that pattern seems apropos.

So the first round of output, all the wiki content so far plus
upload-mapping.wiki as the mapping of where each page should be
uploaded, is now located here; I'll use it for the next round of
processing:

    https://github.com/maxpolk/msdn-js-conversion/tree/master/round-alice

The next round will address page names.  The internal content quality
has not yet been reviewed; that review will lead to fixes such as
removing empty "See also" sections at the bottom of some pages,
removing Microsoft-specific things like ActiveXObject (which runs
only in Internet Explorer), and so forth.
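For context, ActiveXObject is an Internet-Explorer-only constructor that older MSDN examples used where standards-based pages would use, for example, XMLHttpRequest. A cleanup round could start by simply flagging pages whose examples still reference it; the helper below is an illustrative sketch, not part of the project's scripts.

```javascript
// IE-only pattern found in older MSDN examples:
//   var xhr = new ActiveXObject("Microsoft.XMLHTTP");
// Standards-based equivalent:
//   var xhr = new XMLHttpRequest();

// Hypothetical flagging pass for the cleanup round: report whether a
// page's text still instantiates ActiveXObject.
function usesActiveX(pageText) {
  return /\bnew\s+ActiveXObject\s*\(/.test(pageText);
}
```

Flagging first, rather than rewriting automatically, leaves the judgment of what replaces each Microsoft-specific example to a human review pass.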

Received on Friday, 21 June 2013 17:16:30 UTC