Re: The MusicXML challenge and Chords

It seems to me that the only role for MusicXML is semantic markup. 

If you want playback fidelity, use MIDI.
If you want rendering fidelity, use Postscript/PDF.

If you want something from which both of these may be *derived*, you need semantic markup. Musical semantics are never unambiguous. Within MEI, our general practice is always to try to find an example where a presumed "rule" is broken, just to test our assumptions against the full range of musical semantics. Don Byrd has a wonderful site where you can see some of the most immediate problems with any rigid attempt at constraining musical semantics.

http://homes.soic.indiana.edu/donbyrd/InterestingMusicNotation.html

History is littered with music encoding standards, representing just about every approach to timing, layout, etc. I would suggest reading Eleanor Selfridge-Field's book, "Beyond MIDI" for a fantastic overview of some of the most significant attempts.

That's not to say, however, that I don't think there is room for improvement. There is. But a solid knowledge of what has been tried before should precede any attempt at rewriting the MusicXML spec, and especially its fundamental designs for timing and musical event alignment. I can guarantee that it's not quite as simple as it seems on the surface!

Along those lines, one thing I think MusicXML can benefit from is a clear separation of concerns along the four general domains of music encoding, as defined by SMDL: logical, gestural, visual, and analytical. See: http://www.lim.di.unimi.it/IEEE/SMDL/C5.HTM

Within MEI, these domains have been vitally important for helping us structure the semantic markup for the different types of data you will encounter. Our RelaxNG schema largely separates these concerns, which makes it easy for developers to understand what type of information is represented by a particular element or attribute.
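
To make this concrete, here is a rough, purely illustrative sketch (trimmed to a single note, not taken from any real encoding) of how MEI lets attributes from different domains sit side by side on one element: @pname, @oct and @dur carry the logical domain; @stem.dir and @color the visual; @dur.ges and @pnum (performed duration and MIDI note number) the gestural.

    <!-- logical: pname, oct, dur | visual: stem.dir, color | gestural: dur.ges, pnum -->
    <note pname="c" oct="4" dur="4"
          stem.dir="up" color="red"
          dur.ges="8" pnum="60"/>

Because each attribute belongs to exactly one domain, an application that only cares about playback can ignore the visual attributes entirely, and a renderer can ignore the gestural ones.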

-Andrew

> On Oct 26, 2015, at 9:10 AM, Sienna Wood <sienna.m.wood@gmail.com> wrote:
> 
> So what do we want MusicXML to be?  Should the core structure represent the semantics of playback or of rendering?
> 
> If you ask me, neither.  The goal of encoding music should be to capture musical data in the most clear, complete, and standardized way possible so that it can be rendered as notation, played back as audio, mined for data, etc. in any application that "knows" the standard.
> 
> I think we're getting a bit bogged down in appearance/layout/display issues.  For example, in Peter's original "MusicXML challenge" email, he mentioned the tension between "flowed" elements and "fixed" elements, specifically page numbers.  In my view, all elements in encoded music should be "flowed," just as they are in HTML.
> 
> Issues like margins, pagination, page numbers, navigation, etc. should be handled by the application doing the rendering (for HTML, the browser) and any external "style" information (for HTML, CSS).  These aspects used to be mixed together in HTML, but this was awkward and unmanageable, so they were separated.  We should learn from these mistakes and avoid repeating them.
> 
> In short, we should capture content data, not layout data.
> 
> However, I want to acknowledge that this distinction is not always straightforward in music.  For example, if several simultaneous notes are rendered with opposing stem directions, this suggests that several different voices are converging, whereas if they are rendered with a shared stem, they are a chord.  If an encoder wants to specify groupings of notes, stem directions, etc. to clarify this relationship, that should be accommodated.  However, it should not be required.
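> 
> As a rough sketch using only elements MusicXML already has (<chord/>, <voice>, <backup>, <stem>), the two readings of the same pair of pitches might look like this:
> 
>     <!-- a chord: the second note carries an empty <chord/> flag
>          and shares the rhythmic position of the first -->
>     <note>
>       <pitch><step>C</step><octave>4</octave></pitch>
>       <duration>4</duration>
>       <type>quarter</type>
>     </note>
>     <note>
>       <chord/>
>       <pitch><step>E</step><octave>4</octave></pitch>
>       <duration>4</duration>
>       <type>quarter</type>
>     </note>
> 
>     <!-- two converging voices: the same pitches as independent events,
>          with optional stem directions making the voicing explicit -->
>     <note>
>       <pitch><step>E</step><octave>4</octave></pitch>
>       <duration>4</duration>
>       <voice>1</voice>
>       <type>quarter</type>
>       <stem>up</stem>
>     </note>
>     <backup><duration>4</duration></backup>
>     <note>
>       <pitch><step>C</step><octave>4</octave></pitch>
>       <duration>4</duration>
>       <voice>2</voice>
>       <type>quarter</type>
>       <stem>down</stem>
>     </note>
> 
> The pitches and durations are identical; the difference is the grouping information, which an encoder may supply to clarify the voicing but should not be forced to.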
> 
> Sienna
> 
> On Mon, Oct 26, 2015 at 2:28 AM, Thomas Weber <tw@notabit.eu> wrote:
> On 26.10.2015 at 07:23, mogens@lundholm.org wrote:
> >
> > I think the music should be the base, the graphic appearance an addition.
> > (Like MIDI: notes are "events", other stuff is "metaevents"). But this is MusicXML,
> > and we must be pragmatic.
> >
> 
> 
> There you have a fundamental question.  To quote L Peter Deutsch's post:
> 
> 
> On 20.10.2015 at 07:51, L Peter Deutsch wrote:
> > MusicXML is first of all (1) a format for representing
> > printed scores, [...] I have seen no
> > evidence that it cannot have clarity and completeness about the *semantic
> > and general visual relationships* of the elements it names.
> 
> 
> So what do we want MusicXML to be?  Should the core structure represent the semantics of playback or of rendering?  I think this really needs clarification.  I have a very clear opinion about that: MusicXML should first and foremost facilitate notation, for the following reasons:
> 
> 
> * MusicXML's original killer feature is enabling exchange between music notation software.
> * MIDI is the established standard for playback.
> * It's easy to extract playback information from notation data, but not vice versa.
> * Rendering is hard, and properly conveying the semantics needed for rendering is hard as well. For this we need sound foundations, which we mustn't trade away for minor playback conveniences.
> 
> 
> Concerning chords, this means I fully agree with L Peter Deutsch's concerns and suggestions.  Aggregating notes into a chord also seems to be what notation programs do internally anyway (single notes commonly being treated as one-note chords); a rough sketch contrasting the flag-based and the aggregated encoding follows the links below:
> 
> 
> Sibelius:
> http://www.sibelius.com/download/documentation/pdfs/sibelius710-manuscript-en.pdf#page=87
> 
> Finale (apparently - link is a third party framework):
> http://www.finaletips.nu/frameworkref/class_f_c_note_entry.html
> 
> MuseScore:
> https://github.com/musescore/MuseScore/blob/master/mtest/libmscore/selectionfilter/selectionfilter17-base-ref.xml#L7
> 
> Capella:
> http://www.capella.de/download/mehr/workshops/capxml.pdf#page=2
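> 
> For comparison, a purely illustrative sketch of the two encodings of a two-note chord (MusicXML as it stands today, marking chord membership with an empty <chord/> flag on each note after the first, versus an MEI-style aggregate that nests the notes inside a single container):
> 
>     <!-- MusicXML today: the second note is flagged as part of the chord -->
>     <note><pitch><step>C</step><octave>4</octave></pitch><duration>4</duration><type>quarter</type></note>
>     <note><chord/><pitch><step>E</step><octave>4</octave></pitch><duration>4</duration><type>quarter</type></note>
> 
>     <!-- MEI-style aggregate: the chord is one element containing its notes -->
>     <chord dur="4">
>       <note pname="c" oct="4"/>
>       <note pname="e" oct="4"/>
>     </chord>
> 
> The aggregated form makes the chord a first-class object that can carry its own duration, articulations and identifiers, rather than a property inferred from a run of flagged notes.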
> 
> --
> Thomas Weber
> Notabit
> Burgkstraße 28
> 01159 Dresden
> 
> Tel.: +49 (0)351 4794689
> http://notabit.eu/
> 
> 
> 

Received on Monday, 26 October 2015 16:40:08 UTC