OK, the issues are fixed. I was sure something like it exists, as aubio
has it. I want to take this thing polyphonic, ideally split into a
polyphonic guitar-to-MIDI LV2 plugin and maybe a synth plugin that
somehow appeals to guitarists (I know that's vague, but I'm thinking of
a synth that's piped through guitarixcab for that nice tube sound). But
the emphasis will be on the polyphonic audio-to-MIDI conversion
(actually guitar only). The other stuff I wrote just so I don't have to
deal with the LV2 standard for now. And wavetables are easy and fun.
I've done some research, and polyphonic pitch estimation falls under
the wider topic of Blind Source Separation (BSS). Is there any BSS
expert here on the list?
Gerald
On 18.04.2015 14:55, Ralf Mardorf wrote:
> Thank you,
>
> builds without issues on Arch Linux. Running it works too, but I didn't
> test it. Jack audio IOs and Jack MIDI out are shown by QjackCtl. Are you
> aware that Rakarrack provides a relatively well-working monophonic MIDI
> converter?
>
> JFTR I got those messages:
>
> $ ./GuitarSynth2
> jack_client_new: deprecated
> Samplerate 44100 Buffersize 256
> QObject::connect: No such slot GSEngine::setInputGain(int)
> QObject::connect: (sender name: 'InputVol')
> QObject::connect: No such slot GSEngine::setOutputGain(int)
> QObject::connect: (sender name: 'OutputVol')
>
> Regards,
> Ralf
Hi,
I have been debugging an issue in Giada where it would sometimes get
disconnected from Jack1 at startup (not Jack2).
While debugging, I found that the sync callback Giada registers via
jack_set_sync_callback uses Fl::lock, which can block, and Jack doesn't
like a callback that blocks.
I believe that's the problem, especially since after removing those
calls I can no longer reproduce the disconnect...
Can anyone who knows Jack much better than I do (in fact I'm a real
noob in that area) confirm whether my finding makes sense?
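To illustrate the pattern I mean (a generic sketch, not Giada's actual
code, and the client handle is hypothetical): the sync callback only
records the request and returns right away, and the GUI work happens
later in the main loop, where taking the FLTK lock is safe.

#include <stdatomic.h>
#include <jack/jack.h>
#include <jack/transport.h>

static atomic_bool reposition_requested = false;

/* Runs in JACK's process context: must not take GUI locks or block. */
static int sync_cb(jack_transport_state_t state, jack_position_t *pos, void *arg)
{
    (void)state; (void)pos; (void)arg;
    atomic_store(&reposition_requested, true);
    return 1;   /* we are ready, let the transport roll */
}

/* Called from the GUI/main loop, where blocking is allowed. */
static void poll_transport(void)
{
    if (atomic_exchange(&reposition_requested, false)) {
        /* Fl::lock(); update the widgets; Fl::unlock(); */
    }
}

/* Registration (client is a hypothetical jack_client_t *): */
/* jack_set_sync_callback(client, sync_cb, NULL); */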
Thanks in advance,
Aurélien
hi all,
every year the post-lac nostalgia syndrome takes its toll...
all the photos taken by this lousy photo-shooter of yours during the
lac2015@jgu-mainz are now online:
http://www.rncbc.org/lac2015
the even lousier videos taken from some of the linux sound night live
acts are also delivered unedited and online:
http://www.youtube.com/user/rncbchannel
enjoy
--
rncbc aka. Rui Nuno Capela
Anyone out there using ambix on Linux?
I'm seeing various instabilities. For example, trying out the
standalone converter I get a segfault when connecting output ports, and
it looks like the JUCE Jack component is doing some unaligned memory
copies. Any hints on how to fix this?
I also get Ardour crashes if I try to use the converter LV2 plugin.
See below for a backtrace of the standalone binary...
Thanks for any help!
-- Fernando
#0 0x00007ffff507bdd6 in __memcpy_avx_unaligned () from /lib64/libc.so.6
#1 0x000000000069f6d1 in juce::FloatVectorOperations::copy(float*,
float const*, int) ()
#2 0x000000000069ec87 in juce::AudioSampleBuffer::copyFrom(int, int,
juce::AudioSampleBuffer const&, int, int, int) ()
#3 0x0000000000685452 in
Ambix_converterAudioProcessor::processBlock(juce::AudioSampleBuffer&,
juce::MidiBuffer&) ()
#4 0x00000000006e0e7d in
juce::AudioProcessorPlayer::audioDeviceIOCallback(float const**, int,
float**, int, int) ()
#5 0x000000000068e0b7 in
juce::AudioDeviceManager::audioDeviceIOCallbackInt(float const**, int,
float**, int, int) ()
#6 0x000000000069a694 in
juce::JackAudioIODevice::processCallback(unsigned int, void*) ()
#7 0x00007ffff148c2fc in Jack::JackClient::CallProcessCallback() ()
from /lib64/libjack.so.0
#8 0x00007ffff148c204 in Jack::JackClient::ExecuteThread() ()
from /lib64/libjack.so.0
#9 0x00007ffff1489c0b in Jack::JackClient::Execute() ()
from /lib64/libjack.so.0
#10 0x00007ffff14aa2fc in Jack::JackPosixThread::ThreadHandler(void*) ()
from /lib64/libjack.so.0
#11 0x00007ffff5d3052a in start_thread () from /lib64/libpthread.so.0
#12 0x00007ffff503622d in clone () from /lib64/libc.so.6
Hi all,
Ever dream of maintaining GNU/Linux servers and providing support for a
closely knit community of users interested in everything related to
sound, music and DSP? Designing high performance GNU/Linux-based
workstations that are completely silent? Packaging your favorite free
software so that users worldwide can easily download and install it?
Designing, maintaining, managing and deploying complex multichannel
studio and concert diffusion systems? Working with a community of
interdisciplinary students, researchers and faculty from all over the
world? Doing some music and research on the side? (and more, of course).
And all that at CCRMA, the Center for Computer Research in Music and
Acoustics in the middle of Silicon Valley?
And working with Nando?[*] (well, nothing's perfect :-)
Details are here:
https://stanfordcareers.stanford.edu/job-search?jobId=66452
Just in case you don't know, the Stanford Center for Computer Research
in Music and Acoustics (CCRMA) is a multi-disciplinary facility where
composers and researchers work together using computer-based technology
both as an artistic medium and as a research tool.
https://ccrma.stanford.edu/about
Waiting for applications...
-- Fernando
[*] https://ccrma.stanford.edu/~nando/
Hi All,
On behalf of OpenAV, it is my pleasure to call for testing of ArtyFX 1.3.
The code is available online right now! [1]
New plugins include:
- Driva (Guitar Distortion)
- Whaaa (Wah pedal)
Credits:
- Whaaa DSP from WAH Plugins by Fons
- Driva distortion algorithms ported from the Rakarrack project
OpenAV may still change parameters based on feedback - note this is a
call for testing. But it's sounding pretty awesome already here ;)
Stay tuned for an official release announcement after the LAC[2], and
tune in for the OpenAV AVTK talk[3] + other important OpenAV news. Cheers!
-Harry
[1] https://github.com/harryhaaren/openAV-ArtyFX
[2] lac.linuxaudio.org/2015/
[3] http://lac.linuxaudio.org/2015/speakers?uid=22
--
http://www.openavproductions.com
Hi all, I just migrated gsequencer to GitHub.
http://www.gsequencer.org
I also renamed the binary from `ags` to `gsequencer`. About the state
of the sequencer: version 0.4.2-44 is believed to be stable. Since I
don't have the time to test it thoroughly, I need people to test it.
Please let me know your experiences.
MIDI support, along with many other exciting new features, is intended
to be added for release 0.4.3. If someone can help with ALSA rawmidi,
please let me know.
I don't want to do JACK, since it seems to be somehow more of a Qt thing.
What I can say so far about MIDI I read years ago in "The Computer
Music Tutorial" (The MIT Press, ISBN 0-262-68082-3): MIDI consists of
messages which are routable. It would be great if someone could give a
specification of how the MIDI message struct is passed to ALSA and what
it looks like.
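To make the question concrete, my current understanding is that a MIDI
message is just a few bytes on the wire (a status byte plus data bytes)
and that the rawmidi interface passes them through unchanged; a rough
sketch of what I have in mind (the device name "hw:1,0" is only a
placeholder):

#include <alsa/asoundlib.h>
#include <unistd.h>

int main(void)
{
    snd_rawmidi_t *out = NULL;

    /* "hw:1,0" is a placeholder for the real rawmidi port */
    if (snd_rawmidi_open(NULL, &out, "hw:1,0", 0) < 0)
        return 1;

    /* note-on: status 0x90 (channel 1), key 60 (middle C), velocity 100 */
    unsigned char note_on[3]  = { 0x90, 60, 100 };
    /* note-off: status 0x80, same key, velocity 0 */
    unsigned char note_off[3] = { 0x80, 60, 0 };

    snd_rawmidi_write(out, note_on, sizeof note_on);
    snd_rawmidi_drain(out);   /* make sure the bytes are sent */

    sleep(1);                 /* hold the note for a second */

    snd_rawmidi_write(out, note_off, sizeof note_off);
    snd_rawmidi_drain(out);

    snd_rawmidi_close(out);
    return 0;
}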
Thank you in advance
Joël
Srinivasan S wrote:
> I didn't understand what 'two-channel devices' means
The two channels are "left" and "right".
> Regarding bindings, as you explained, "bindings.x y" or "bindings { x y }" maps channel x of this
> device to channel y of the slave device.
>
> I didn't understand what "channel x of this device" means. Is it the real sound card, i.e., which
> device is "this device"?
>
> I also didn't understand what "channel y of the slave device" means, i.e., which device is the
> slave here?
"This device" is the virtual device that is defined.
The slave device is the device whose name is specified with "slave.pcm".
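For example, a hypothetical snippet (device name and ipc_key are made
up) defining a one-channel virtual device whose channel 0 is bound to
channel 1, the right channel, of the slave hw:0:

pcm.right_only {
    type dshare
    ipc_key 2048              # any number unique on the system
    slave {
        pcm "hw:0"            # the slave (real) device
        channels 2
    }
    bindings.0 1              # channel 0 of this device -> channel 1 of the slave
}

An application that opens "right_only" sees a mono device, and whatever
it plays comes out on the right channel of hw:0.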
Regards,
Clemens
Srinivasan S wrote:
> CPU consumption is 18% with the above asound.conf and the app
> alsa_loopback_min_mono.c establishing my two-way GSM call (i.e.,
> VINR to VOUTR and VINL to VOUTL). This is very high and I want to
> reduce the CPU consumption drastically. Is there any other way in
> ALSA to do this two-way GSM call (i.e., VINR to VOUTR and VINL to
> VOUTL) without using the alsa_loopback_min_mono.c application?
dmix needs more CPU than dshare because it needs to mix multiple streams
together; if possible, use dshare instead of dmix.
dshare needs more CPU than direct access to the device because the data
needs to be copied and reformatted. dshare is needed only when the
application(s) cannot handle the format of the actual device; if
possible, change your application to handle the two-channel devices.
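As a rough sketch of that direct-access route (device name, rate and
latency below are placeholders, not taken from your asound.conf), the
application itself opens the two-channel device and interleaves its
mono signal into the channel it needs:

#include <alsa/asoundlib.h>
#include <stdlib.h>

/* Play a mono buffer on the right channel of the real stereo device. */
static int play_mono_on_right(const short *mono, snd_pcm_uframes_t frames)
{
    snd_pcm_t *pcm;

    if (snd_pcm_open(&pcm, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return -1;

    /* S16LE, 2 channels, 8 kHz, no resampling, ~20 ms latency */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 8000, 0, 20000) < 0) {
        snd_pcm_close(pcm);
        return -1;
    }

    /* interleave: left channel stays silent, right channel gets the signal */
    short *buf = calloc(frames * 2, sizeof(short));
    if (buf == NULL) {
        snd_pcm_close(pcm);
        return -1;
    }
    for (snd_pcm_uframes_t i = 0; i < frames; i++)
        buf[2 * i + 1] = mono[i];

    snd_pcm_writei(pcm, buf, frames);
    snd_pcm_drain(pcm);

    free(buf);
    snd_pcm_close(pcm);
    return 0;
}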
> And I am hearing echo when I do GSM calls using the above attached
> asound.conf and the app alsa_loopback_min_mono.c. Could you please
> help me out: are there any options to do echo cancellation in ALSA?
ALSA has no built-in echo cancellation. You have to implement this
yourself, or use some third-party library.
If dmix/dshare alone eats 18 % CPU, it is unlikely that this is feasible
without hardware support.
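Just to make the third-party-library option concrete: one commonly used
choice is the acoustic echo canceller in libspeexdsp. The sketch below
assumes 16-bit frames, and the frame/filter sizes are placeholders;
whether your CPU budget allows it is another question.

#include <speex/speex_echo.h>

#define FRAME_SIZE   160    /* 20 ms at 8 kHz */
#define FILTER_SIZE 1024    /* echo tail length in samples */

/* mic = near-end capture frame, speaker = frame just sent to the
 * speaker, out = echo-cancelled result; all FRAME_SIZE 16-bit samples. */
void cancel_echo(const spx_int16_t *mic, const spx_int16_t *speaker,
                 spx_int16_t *out)
{
    static SpeexEchoState *state;

    if (state == NULL) {
        int rate = 8000;
        state = speex_echo_state_init(FRAME_SIZE, FILTER_SIZE);
        speex_echo_ctl(state, SPEEX_ECHO_SET_SAMPLING_RATE, &rate);
    }
    speex_echo_cancellation(state, mic, speaker, out);
}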
> I am trying to completely understand the above attached asound.conf,
> but am still not very clear about the meaning of bindings
"bindings.x y" or "bindings { x y }" maps channel x of this device to
channel y of the slave device.
Regards,
Clemens