Re: The MusicXML challenge and Chords

>
> So what do we want MusicXML to be?  Should the core structure be
> representing the semantics of playback or rendering?
>

If you ask me, neither.  The goal of encoding music should be to capture
musical data in the clearest, most complete, and most standardized way
possible, so that it can be rendered as notation, played back as audio,
mined for data, and so on by any application that "knows" the standard.

I think we're getting a bit bogged down in appearance/layout/display
issues.  For example, in Peter's original "MusicXML challenge" email, he
mentioned the tension between "flowed" elements and "fixed" elements,
specifically page numbers.  In my view, all elements in encoded music
should be "flowed," just as they are in HTML.

Issues like margins, pagination, page numbers, navigation, etc. should be
handled by the application doing the rendering (for HTML, the browser) and
any external "style" information (for HTML, CSS).  These aspects used to be
mixed together in HTML, but this was awkward and unmanageable, so they were
separated.  We should learn from these mistakes and avoid repeating them.

In short, we should capture *content* data, not *layout* data.
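
To make this concrete, here is a rough sketch of the kind of inline layout
data MusicXML currently carries (the element and attribute names are from
the MusicXML schema; the tenths values are arbitrary placeholders).  Under
the HTML/CSS model argued for above, this information would live in an
external style layer rather than in the encoded music itself:

    <defaults>
      <page-layout>
        <!-- page size in tenths of a staff space -->
        <page-height>1683</page-height>
        <page-width>1190</page-width>
      </page-layout>
    </defaults>

    <!-- later, inside a measure: a hard page break with an explicit page number -->
    <print new-page="yes" page-number="2"/>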

However, I want to acknowledge that this distinction is not always
straightforward in music.  For example, if several simultaneous notes are
rendered with opposing stem directions, this suggests that several
different voices are converging, whereas if they are rendered with a shared
stem, they are a chord.  If an encoder wants to specify groupings of notes,
stem directions, etc. to clarify this relationship, that should be
accommodated.  However, it should not be required.
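
As a point of reference, here is a minimal sketch of how MusicXML already
draws this distinction (real element names; the pitches and durations are
arbitrary).  Consecutive <note> elements joined by an empty <chord/> element
share one stem, notes in different <voice>s get independent stems, and the
<stem> element itself is an optional hint, which matches the "accommodated
but not required" position above:

    <!-- two notes forming a chord: the second carries <chord/> -->
    <note>
      <pitch><step>C</step><octave>4</octave></pitch>
      <duration>4</duration>
      <voice>1</voice>
      <type>quarter</type>
      <stem>up</stem>  <!-- optional rendering hint -->
    </note>
    <note>
      <chord/>
      <pitch><step>E</step><octave>4</octave></pitch>
      <duration>4</duration>
      <voice>1</voice>
      <type>quarter</type>
    </note>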

Sienna

On Mon, Oct 26, 2015 at 2:28 AM, Thomas Weber <tw@notabit.eu> wrote:

> On 26.10.2015 at 07:23, mogens@lundholm.org wrote:
> >
> > I think the music should be the base, the graphic appearance an addition.
> > (Like MIDI: notes are "events", other stuff is "metaevents").  But this
> > is MusicXML, and we must be pragmatic.
> >
>
>
> That is a fundamental question.  To quote L Peter Deutsch's post:
>
>
> On 20.10.2015 at 07:51, L Peter Deutsch wrote:
> > MusicXML is first of all (1) a format for representing
> > printed scores, [...] I have seen no
> > evidence that it cannot have clarity and completeness about the *semantic
> > and general visual relationships* of the elements it names.
>
>
> So what do we want MusicXML to be?  Should the core structure be
> representing the semantics of playback or rendering?  I think this really
> needs clarification.  I have a very clear opinion about that: MusicXML
> should first and foremost facilitate notation for the following reasons:
>
>
> * MusicXML's original killer feature is enabling exchange between music
> notation software.
> * MIDI is the established standard for playback.
> * It's easy to extract playback information from notation data, but not
> vice versa.
> * Rendering is hard, and properly conveying the semantics needed for
> rendering is hard as well.  For this we need sound foundations that we
> mustn't trade away for minor playback conveniences.
>
>
> Concerning chords, this means I fully agree with L Peter Deutsch's
> concerns and suggestions.  Aggregating notes into a chord also appears to
> be what notation programs already do internally (single notes commonly
> being treated as one-note chords):
>
>
> Sibelius:
>
> http://www.sibelius.com/download/documentation/pdfs/sibelius710-manuscript-en.pdf#page=87
>
> Finale (apparently - the link is to a third-party framework):
> http://www.finaletips.nu/frameworkref/class_f_c_note_entry.html
>
> MuseScore:
>
> https://github.com/musescore/MuseScore/blob/master/mtest/libmscore/selectionfilter/selectionfilter17-base-ref.xml#L7
>
> Capella:
> http://www.capella.de/download/mehr/workshops/capxml.pdf#page=2
>
> --
> Thomas Weber
> Notabit
> Burgkstraße 28
> 01159 Dresden
>
> Tel.: +49 (0)351 4794689
> http://notabit.eu/
>

Received on Monday, 26 October 2015 16:11:08 UTC