Hi guys, sorry for the delay in responding! I find the idea of using guitar samples as the basis for pitch/onset detection interesting, but wouldn't that limit the plugin to guitars only? Plus, I don't know anything about machine learning either, hahaha. Anyway, I think it's a good idea, but with this plugin I was trying to do something more general. To Hermann: in this implementation I left the algorithm selectable at runtime; the algorithms are briefly described at these links (under 'methods'), with a rough usage sketch below them:

Onset:
http://aubio.org/manpages/latest/aubioonset.1.html

Pitch:
http://aubio.org/manpages/latest/aubiopitch.1.html 
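
Roughly, with aubio's Python bindings, choosing the methods at runtime looks something like this (the C API is analogous: new_aubio_pitch("yinfft", ...), aubio_pitch_do(), etc.; "input.wav" is just a placeholder):

import aubio

samplerate, buf_size, hop_size = 44100, 2048, 512

# Method names come from the manpages linked above.
pitch_method = "yinfft"   # others: yin, mcomb, fcomb, schmitt, specacf
onset_method = "hfc"      # others: energy, complex, phase, specdiff, kl, mkl

pitch_o = aubio.pitch(pitch_method, buf_size, hop_size, samplerate)
pitch_o.set_unit("Hz")
onset_o = aubio.onset(onset_method, buf_size, hop_size, samplerate)

src = aubio.source("input.wav", samplerate, hop_size)  # placeholder file
while True:
    samples, read = src()
    freq = pitch_o(samples)[0]     # pitch estimate for this hop, in Hz
    if onset_o(samples)[0]:        # non-zero when an onset was detected
        print("onset at %.3f s, pitch %.1f Hz" % (onset_o.get_last_s(), freq))
    if read < hop_size:
        break

Note that a larger buf_size (e.g. 4096) gives the pitch methods a longer analysis window, which helps with low notes.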

I tested the plugin with both zynaddsubfx and a guitar, and in each situation a different pitch method worked better (some methods didn't recognize the lowest frequencies). I didn't notice much difference between the onset methods. Have you guys tried the aubio library yet?

Best regards,

Lucas



2014-05-11 22:44 GMT-03:00 Rafael Vega <email.rafa@gmail.com>:

I would be interested in trying a different approach to this problem,
namely machine learning. Would the guitar players on this list be
willing to provide training data? I.e. audio files of you playing
single notes, plus an accompanying text file describing the played
pitches, in a format like

frame-number start pitch-in-hz
frame-number end

E.g. you have an audio clip at a 1000 Hz sampling rate where a 440 Hz A
starts at frame 500 (at 0.5 s) and lasts until frame 2500 (so, 2 s
duration):

500 start 440
2500 end
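
To make the format concrete, here's a rough sketch of a parser that turns such an annotation file into note events (the function name, file name, and the use of Python are just for illustration):

def parse_annotations(path, samplerate):
    """Return a list of (start_sec, end_sec, pitch_hz) note events."""
    notes, pending = [], None
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 2:
                continue
            frame = int(fields[0])
            if fields[1] == "start":
                pending = (frame, float(fields[2]))
            elif fields[1] == "end" and pending is not None:
                start_frame, pitch = pending
                notes.append((start_frame / samplerate,
                              frame / samplerate, pitch))
                pending = None
    return notes

# With the example above (1000 Hz sampling rate):
# parse_annotations("notes.txt", 1000) -> [(0.5, 2.5, 440.0)]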

This would be an awesome project. I'm very interested. I can provide some training files and a bit of help with coding or other supporting tasks. I don't know anything about machine learning, though.


_______________________________________________
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev

--
Lucas Conejero Takejame
Electrical Engineering - emphasis in Computing
Escola Politécnica - USP