Hi all,
I posted this one before on the LAU list but unfortunately received no
response. Please accept my apologies for cross-posting, but I am hoping
that someone on LAD might be able to help me learn more about this
particular topic. I would greatly appreciate any help in this matter!
At any rate, here's the question:
I have been experimenting with the ambisonic plugins provided in the CMT
LADSPA collection, specifically the simple B-format 1ch->4ch encoder. The
problem is that I cannot figure out how to use it on a plain mono
(non-encoded) sound input in order to control its diffusion over 4
channels. When I run it in Ardour and send the result to 4 separate busses,
the x, y, and z coordinates do affect the output of the 4 channels, but the
channels do not conform to the expected 3,4,2,1 output (I presume these
numbers correspond to 4 speaker outputs, but I have doubts about that as
well, so any help in understanding this better would be most appreciated).
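
For reference, here is my (possibly wrong) understanding of what the
encoder's 4 outputs ought to be, assuming they are the first-order B-format
components W, X, Y and Z (that is an assumption on my part, not something I
have confirmed in the CMT source; the function and names below are mine):

    #include <math.h>

    /* Hypothetical first-order B-format encode of one mono sample s at
     * azimuth az and elevation el (radians). The 0.707 on W is the usual
     * -3 dB scaling; I am guessing the CMT encoder does something
     * equivalent when driven by its x/y/z controls. */
    static void encode_bformat(float s, float az, float el,
                               float *w, float *x, float *y, float *z)
    {
        *w = s * 0.70710678f;          /* W: omnidirectional component */
        *x = s * cosf(az) * cosf(el);  /* X: front-back */
        *y = s * sinf(az) * cosf(el);  /* Y: left-right */
        *z = s * sinf(el);             /* Z: up-down */
    }

Is that roughly what the plugin is doing, or does it output something else
entirely?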
By now you can see that although I know a bit about ambisonics (mainly the
theory), I am quite a newbie in this area, but for what it's worth I am
eager to learn :-).
The Pd set of plugins works pretty much as expected, but those are also
designed differently, likely with discrete 4-8 channel outputs that can
then be patched directly to the main outs. In the case of the LADSPA
plugins I am not even sure whether what I am getting is an encoded
4-channel stream that then needs to be decoded. Yet, when I tried the
following chain:
mono sound -> B-format 1ch-to-4ch encoder (LADSPA) -> 4ch-to-4ch decoder
(LADSPA) -> out
that gave me a bunch of garbage, with two of the 4 channels consistently
clipping, so this is probably not the case either...
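
For what it's worth, this is the kind of basic decode I was expecting the
4ch-to-4ch decoder to perform for a horizontal square of speakers (the
speaker angles and the 0.5 output gain are again my own assumptions, not
anything taken from the CMT documentation):

    #include <math.h>

    /* Hypothetical decode of W/X/Y to four speakers at azimuths 45, 135,
     * 225 and 315 degrees; Z is ignored for a horizontal-only rig. */
    static void decode_square(float w, float x, float y, float spk[4])
    {
        const float az[4] = { 0.25f * (float)M_PI, 0.75f * (float)M_PI,
                              1.25f * (float)M_PI, 1.75f * (float)M_PI };
        for (int i = 0; i < 4; i++)
            spk[i] = 0.5f * (w * 1.41421356f
                             + x * cosf(az[i]) + y * sinf(az[i]));
    }

If the decoder works along those lines, I would not expect clipping from a
normal-level mono input, which makes me think I am feeding it the wrong
thing.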
So, to conclude, I must not be using the CMT plugins properly, and I would
therefore greatly appreciate it if someone could enlighten me. I am already
aware of the theory behind ambisonics; I am just puzzled as to how (if at
all) the CMT plugins can be used with mono and/or stereo non-encoded
streams to spatialize them in real-time. I would most appreciate a simple
practical example of how to do this (if it proves to be possible).
Thank you very much!
Best wishes,
Ico