Lelangir
June 28th, 2010, 05:06 pm
http://www.physorg.com/news191237592.html
"A team of telecommunications engineers from the University of Jaen (UJA) has created a new method to automatically detect and identify the musical notes in an audio file and generate sheet music. The system identifies the notes even when the type of instrument, musician, type of music or recording studio conditions vary."
Apparently the technology can mechanically transcribe audio (wav, mp3, flac, what have you) into MIDI. Good? Bad? The article mentions the cost benefits, but not the effect the technology can have on training and practice ethics.
"In music, people talk about 'harmonicity' when the energy produced by a note is distributed across the multiple positions of a fundamental frequency. The distribution of the harmonic energy of a musical note is what defines its spectral pattern. The harmonic diction is a matrix in which the typical spectral patterns of all musical notes are represented.
With the help of this dictionary and a computer algorithm called 'Matching Pursuit', the musical notes with patterns that most resemble the harmonic dictionary are identified. While the method can only be applied to files with one sole instrument at present, the scientists are already investigating how it can be applied to several."
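To make the dictionary-plus-Matching-Pursuit idea a bit more concrete, here's a minimal sketch in Python. The article gives no implementation details, so everything here (the harmonic weights, FFT size, note range, number of iterations) is an assumption for illustration, not the UJA team's actual method: build one spectral "atom" per note, then greedily pick whichever atom best correlates with the spectrum and subtract its contribution.

```python
# Illustrative sketch only: a toy harmonic dictionary + Matching Pursuit.
# All parameters and the atom model are assumptions, not from the article.
import numpy as np

SR = 44100          # sample rate (Hz)
N_FFT = 8192        # FFT size used for the spectral patterns
N_HARMONICS = 8     # how many harmonics each dictionary atom models


def harmonic_atom(f0, sr=SR, n_fft=N_FFT, n_harmonics=N_HARMONICS):
    """Spectral pattern of one note: energy at f0 and its integer multiples,
    decaying with harmonic number (a crude assumption)."""
    spectrum = np.zeros(n_fft // 2 + 1)
    for k in range(1, n_harmonics + 1):
        bin_idx = int(round(k * f0 * n_fft / sr))
        if bin_idx < spectrum.size:
            spectrum[bin_idx] = 1.0 / k
    return spectrum / np.linalg.norm(spectrum)


def build_dictionary(midi_lo=36, midi_hi=96):
    """One atom per equal-tempered note between midi_lo and midi_hi."""
    midi_numbers = np.arange(midi_lo, midi_hi + 1)
    f0s = 440.0 * 2.0 ** ((midi_numbers - 69) / 12)
    atoms = np.stack([harmonic_atom(f) for f in f0s])   # shape: (notes, bins)
    return atoms, midi_numbers


def matching_pursuit(spectrum, atoms, midi_numbers, n_iter=4):
    """Greedy Matching Pursuit: repeatedly pick the atom most correlated
    with the residual spectrum and subtract its contribution."""
    residual = spectrum.astype(float).copy()
    notes = []
    for _ in range(n_iter):
        correlations = atoms @ residual
        best = int(np.argmax(correlations))
        if correlations[best] <= 0:
            break
        notes.append(int(midi_numbers[best]))
        residual -= correlations[best] * atoms[best]
    return notes


if __name__ == "__main__":
    atoms, midi_numbers = build_dictionary()
    # Fake input: a frame whose spectrum is (mostly) the atom for A4 (MIDI 69).
    test_spectrum = harmonic_atom(440.0) + 0.05 * np.random.rand(N_FFT // 2 + 1)
    print(matching_pursuit(test_spectrum, atoms, midi_numbers, n_iter=1))  # -> [69]
```

The greedy subtraction step is presumably what they hope to extend to several simultaneous instruments, since each iteration peels one note's energy out of the mix before looking for the next.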
One foreseeable problem: if they're treating "music" purely in terms of frequency, frequency isn't nearly as quantized as "music" is in the tertian, scale-based sense we know it. So when you try to apply the software to, say, a blues gig, you're going to get a lot of microtones that don't fit easily into conventional notation. I remember hearing about some contemporary notation system that could represent music more accurately as it's actually played. Since this thing converts to MIDI... well... I'm not even sure how well MIDI handles microtones. But maybe the software will just "round" to the nearest equal-tempered (or whatever scale division you like) frequency.
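For what that "rounding" might look like in practice, here's a tiny sketch. This is just an assumption about how such a system could quantize pitch, not anything from the article: snap a detected frequency to the nearest 12-tone equal-tempered MIDI note and keep the leftover deviation in cents (standard MIDI can only express that deviation indirectly, e.g. through pitch-bend messages, and it would be lost entirely in a rendered score).

```python
# Illustrative only: quantize a detected frequency to the nearest
# equal-tempered MIDI note and report how far off it was, in cents.
import math

A4_HZ = 440.0
A4_MIDI = 69


def freq_to_nearest_midi(freq_hz):
    """Return (midi_note, cents_off): nearest equal-tempered note and how
    far the input frequency falls from it, in cents (1 semitone = 100)."""
    exact = A4_MIDI + 12 * math.log2(freq_hz / A4_HZ)
    midi_note = round(exact)
    cents_off = 100 * (exact - midi_note)
    return midi_note, cents_off


# A "blue" third somewhere between Eb4 (311.13 Hz) and E4 (329.63 Hz):
print(freq_to_nearest_midi(320.0))  # roughly (63, +48.7): snaps to Eb4, ~49 cents sharp
```

A bent note sitting almost exactly between two semitones, like the one above, is the worst case: the transcription has to commit to one of them and silently discard nearly a quarter tone of expressive information.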
"A team of telecommunications engineers from the University of Jaen (UJA) has created a new method to automatically detect and identify the musical notes in an audio file and generate sheet music. The system identifies the notes even when the type of instrument, musician, type of music or recording studio conditions vary."
Apparently the technology can mechanically transcribe audio (wav, mp3, flac, what have you) into MIDI. Good? Bad? The article mentions the cost benefits, but not the effect the technology can have on training and practice ethics.
"In music, people talk about 'harmonicity' when the energy produced by a note is distributed across the multiple positions of a fundamental frequency. The distribution of the harmonic energy of a musical note is what defines its spectral pattern. The harmonic diction is a matrix in which the typical spectral patterns of all musical notes are represented.
With the help of this dictionary and a computer algorithm called 'Matching Pursuit', the musical notes with patterns that most resemble the harmonic dictionary are identified. While the method can only be applied to files with one sole instrument at present, the scientists are already investigating how it can be applied to several."
One foreseeable problem is that if they're talking about "music" in terms of frequency, frequency is not nearly as quantum as "music" in the tertiary, scale-based we know it. So when you try and apply the software to, say, a blues gig, you're gonna get a lot of microtones that don't easily fit into convention notation. I remember hearing about some contemporary notation system that was able to more accurately represent music as it is played. Since this thing can convert to MIDI...well...I'm not even sure if MIDI supports microtones. But maybe the software will "round" to the nearest octave-divisible (or w/e, you know...) frequency.