- From: Oscar Celma <oscar.celma@iua.upf.es>
- Date: Wed, 11 Jan 2006 18:30:37 +0100
- To: love26@gorge.net
- Cc: semantic-web@w3.org
Here's another project that makes your dream come true: http://www.ipem.ugent.be/MAMI/

Thus, the "query-by-humming" problem is already solved (well, more or less!) in the MIR (Music Information Retrieval) research field.

Another interesting problem is "find similar music" from content-based attributes (the ones extracted from the audio itself). There's a nice demo at http://musicsurfer.iua.upf.edu that gives you similar audio, given a track. It's the next step for music recommenders: not only using collaborative filtering or playlist co-occurrences for music recommendation, but merging both worlds (context and content descriptors).

Cheers, Oscar.
http://www.iua.upf.edu/mtg
http://foafing-the-music.iua.upf.edu

On Tue, 10 Jan 2006 08:13:25 -0800 William Loughborough <love26@gorge.net> wrote:
>
> I can approach thousands of people and ask "what's the name of the tune
> that goes 'dah-dah-dah-BOOM?'" and they will instantly respond
> "Beethoven's Fifth".
>
> When will our machine indexing, etc. stuff allow me to do that with a
> google-type search and a microphone?
>
> Love.
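The "merging both worlds" idea above can be sketched as a weighted blend of a content-based similarity (cosine distance over audio feature vectors) and a context-based signal (playlist co-occurrence). This is only an illustrative toy, not any project's actual method; the track names, feature vectors, and playlists below are invented.

```python
import math

# Hypothetical content descriptors: toy audio feature vectors
# (e.g. timbre, tempo) for three made-up tracks.
features = {
    "track_a": [0.9, 0.1, 0.4],
    "track_b": [0.8, 0.2, 0.5],
    "track_c": [0.1, 0.9, 0.2],
}

# Hypothetical context data: playlists used for co-occurrence counts.
playlists = [
    ["track_a", "track_b"],
    ["track_a", "track_b", "track_c"],
    ["track_a", "track_c"],
]

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cooccurrence(x, y):
    # Fraction of playlists containing x that also contain y.
    with_x = [p for p in playlists if x in p]
    return sum(y in p for p in with_x) / len(with_x)

def hybrid_similarity(x, y, alpha=0.5):
    # Blend of content (audio) and context (playlist) similarity.
    return alpha * cosine(features[x], features[y]) + (1 - alpha) * cooccurrence(x, y)

# track_a and track_b sound alike and co-occur often, so their
# hybrid score exceeds the track_a / track_c score.
print(hybrid_similarity("track_a", "track_b"))
print(hybrid_similarity("track_a", "track_c"))
```

Real systems would of course replace the toy cosine/co-occurrence functions with proper audio descriptors and large-scale usage data, but the blending step stays this simple in principle.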
Received on Wednesday, 11 January 2006 17:31:43 UTC