Are people using csound on 64-bit platforms? I tried building a couple
of versions in the Gentoo pro-audio overlay but they all failed to
build. It may just be that updates are needed to the ebuilds, which I
can explore, but first I thought it best to find out whether it's known
to work at all.
Thanks,
Mark
On 01/07/2011 10:55 AM, sh0099(a)gmx.de wrote:
>
> Jörn Nettingsmeier wrote:
>> why would you want to do that? ambdec comes with a number of example
>> configurations, and iirc, they cover all the speaker layouts that are
>> covered on richard's page, plus a few more.
>
> my problem was that i realised that there are not so many ambisonics
> order 2 (9-channel) presets with ambdec.
> and my files are 9-channel files.
> do i remember right that i cannot decode a 9-channel file in a 1st-order
> decoder, etc.?
i guess you are confusing things here. there are a number of 2nd order
decoders in ambdec. disregarding your source files for the moment, what
is your speaker layout? is it capable of reproducing 2nd order?
for that, it would have to be at least a 5.0, better yet a hexagon.
for both rigs, ambdec ships example configurations.
as for the input channel count: you won't need 9 channels for horizontal
only, just WXYUV. if you want to do with-height, you will need at least
10 speakers (12 is more practical, because then you can use a regular
dodecahedron).
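for reference, here's a tiny python sketch (my own addition, not part of the original mail, just the standard channel-count formulas) showing where the 5 vs. 9 channels come from:

# standard ambisonic channel counts for order m:
#   horizontal-only: 2*m + 1        (W X Y, then U V, ...)
#   full 3D (periphonic): (m + 1)**2
for m in (1, 2, 3):
    print(m, 2 * m + 1, (m + 1) ** 2)
# order 2 gives 5 horizontal channels (WXYUV) but 9 for full 3D,
# which is where the 9-channel files come from.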
>> or do you want to create a custom layout and have amb*i*dec compute a
>> configuration for you?
>
> actually i was thinking about it.
if you do that, check out bruce wiggins' paper on optimizing for
irregular layouts. but it's nowhere near a recipe, you'll have to do
some hefty number crunching to arrive at anything useful.
Wiggins, Bruce: "The Generation of Panning Laws for Irregular Speaker
Layouts using Heuristic Methods", AES 31st International Conference,
London 2007
>> if so, conversion between polar and cartesian is not too hard... if
>> your listening position is at the origin of both coordinate systems,
>> then (off the top of my head), you should get something like:
>>
>> azimuth = arctan (y/x)
>> elevation = arctan (z / sqrt(x^2 + y^2))
>> r = sqrt (x^2 + y^2 + z^2)
> thanks, i will double-check with fons' answer :-)
to my great relief, it looks like they are congruent :)
of course, when you implement it, you have to do something about the
singularities at 90 and -90 degrees elevation.
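if it helps, here is a minimal python sketch of that conversion (my own addition, not from the original mails; it assumes the usual convention of x pointing front, y left, z up, with the listener at the origin) which uses atan2 to sidestep the division-by-zero and quadrant problems of a bare arctan(y/x):

import math

def cart_to_polar(x, y, z):
    # radius: plain euclidean distance from the listening position
    r = math.sqrt(x * x + y * y + z * z)
    # atan2 picks the correct quadrant and copes with x == 0
    # (a speaker straight to the left or right)
    azimuth = math.degrees(math.atan2(y, x))
    # hypot(x, y) is the horizontal distance; elevation is well defined
    # everywhere except the origin itself
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))
    # straight above/below (x == y == 0) the azimuth is undefined;
    # python's atan2(0, 0) simply returns 0, which is fine for a speaker config
    return azimuth, elevation, r

# example: a speaker 2 m in front, 2 m to the left, 1 m up
print(cart_to_polar(2.0, 2.0, 1.0))   # ~ (45.0, 19.5, 3.0)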
>> i'm pretty sure that the custom layout matrix you will get out of
>> amb*i*dec will yield worse results than ambdec, because it does not do
>> proper dual-band decoding.
>>
>> out of curiosity, what layout are you looking for?
>
> i will start tomorrow to set up an ambisonic rig for a presentation, and
> i have two days to do so. because i have to mix different speaker types,
> i have to look around a little bit to know what will fit.
ouch. that is tricky. watch out: in the LF band, you have to have
matching phase responses, otherwise the speakers will work against one
another. if at all possible, get the horizontal ring right, with all
matching speakers. for height, you can get away with using a different
model. try the following: create a stereo pair from your two different
speaker models (i hope it's only two, otherwise it's pretty much a lost
cause). match the levels carefully, and then listen to some stereo
material you know well, with lots of spatial information in the LF band
(an orchestra recording would do). play with the polarity and (if you
can) with delay, until you get the best LF localisation result. use the
same polarity and delay for the rig. that way, you have hopefully
aligned the LF phase responses.
or easier but more expensive: get a dual-fft measurement system such as
smaart and plot the phase responses.
or screw perfectionism and just try it as-is. just don't be surprised if you're disappointed.
the reason i'm so nit-pickish about all this is that every single audio
professional i've met who has had prior exposure to ambisonics has heard
at least one distinctly unconvincing demo, which makes life harder for
all of us.
i'm proud to say that of all the HOA demos i've rigged so far (about 12
in total), only one has been sub-optimal, and only because i pushed
delay compensation too far. i like to believe the reason for this is a)
that i totally don't believe in ambisonics as a silver bullet, and b) i
work very precisely, with matching speaker directivities, phase and
amplitude responses, placed within +/- 2° and +/- 2 cm, if at all possible.
Hello all,
I'm wondering: what is the use of the automatic markers that show up
to mark the places where xruns have occurred? I'm using a rather 'old'
Ardour version, namely 2.4.1. Is there anything that can be done
using these markers?
Hello!
I know there is audio-to-MIDI conversion in some programs like
Rakarrack or zita-at, but it's not possible to feed the MIDI to other
apps, right? I just need a tool that analyses the audio coming in (e.g.
whistling, it doesn't have to be too complicated) and outputs MIDI over
ALSA or JACK MIDI. Is there something like this?
thx,
headles
Hi all,
I've encountered a strange problem: hydrogen doesn't launch anymore! It
worked perfectly before. I'm running Ubuntu 10.04 with jack 1.9.7.
starting hydrogen 0.9.4-1 from a terminal gives
$ hydrogen
hydrogen: error while loading shared libraries: liblash.so.2: cannot open
shared object file: No such file or directory
but liblash.so.2 is installed:
$ sudo locate liblash.so
/usr/lib/liblash.so.1
/usr/lib/liblash.so.1.1.1
/usr/lib/liblash.so.2
running ldconfig didn't change a thing, and then I tried to reinstall both
hydrogen and liblash, with no luck either!
Has anyone encountered the same problem? Does anyone have any idea?
Thanks a lot
jy
hi
first off, i'm sim; you may have seen me in the open source musician IRC room. because dan is quite busy, i've taken over the tunestorm comp and am changing things slightly to make it a bit more streamlined and hopefully more interesting. this is something i'm trying, so if you don't think it's working then constructive criticism is obviously welcome.
this comp is going to be based around a sample that you can build your track around. the sample is recorded from a hard drive with a jack soldered onto the bottom, so that when the platter is spun it outputs a sine wave. the sample has a few different spins and timings, and it will need chopping and tidying up, as this is the raw recording; i thought giving you the raw sample would be a better idea because it gives you more scope to manipulate it.
the sample can be as prominent as you want to make it; it doesn't have to be the main focus, that is left to your discretion. obviously, making a track where the sample is completely unrecognisable does slightly go against the point of the comp.
i am provisionally making the submission date for the competition the 21st of january, which should give you a decent amount of time to work on it. the submission process is changing too: i've set up a soundcloud group here http://soundcloud.com/groups/open-source-musician where you can submit your tracks and also comment and vote on other people's submissions. if you haven't used soundcloud before i highly recommend it. there is a very neat function where you can comment on the timeline of a track, which is really useful for directing crits. the other major plus of soundcloud is that a free account entitles you to 2 hours' worth of uploaded music in ANY format, so you can directly upload ogg and flac. the 2 hours stays the same regardless of which format you use.
i look forward to hearing your submissions :)
sim
Hi,
I am putting together a submission for LAC 2011.
I was halfway through writing up the paper before I looked at the
submission form and realised (I think) that you are only asking for an
abstract at the moment, not the whole paper. Is this right?
If so, it might be a bit clearer for people not used to the academic
thing if you changed the wording on the web page to make this obvious.
Cheers,
andy baxter
Greetings,
I've posted two new songs on my site. Both are rough mixes prepared for
my band, but I figured I'd post them anyway until I find the time to do
a more decent job. Yes, the second song has some tuning discrepancies,
and yes, I'm really that lazy.
Notes from my page at http://linux-sound.org/ardour-music.html :
Here in Ohio USA we have an interesting law that requires a convicted
drunk driver to put specially-colored license plates on his or her
vehicle (assuming s/he's still allowed to drive, of course). Law
enforcement officers can easily spot such plates, and of course everyone
else gets to cast humiliation on the idiot who decided to drink and
drive. The following song is dedicated to my foolish neighbor who
managed to fall afoul of the authorities, winning himself some brightly
colored tags for his truck. Hilarity ensues.
http://linux-sound.org/audio/Orange_Plates.mp3
Okay, so much for the poor unfortunate Floyd. The next song was inspired
by none other than Rusty Campbell, my good friend and fellow musician, who
also happens to play drums in my band. Strange to say, this song has
become a crowd-pleaser and is one of our most-requested tunes. Of course
it isn't /really/ about Rusty (a.k.a. El Viejo). The poor fellow was
just minding his own business, then he gets hit with this assessment.
Inspiration can be so cruel.
http://linux-sound.org/audio/drummerinabluesband.mp3
Lyrics are on the page mentioned above.
Happy holidays to all!
Best,
dp
I'm going to do a session tomorrow where I need to have two computers, one with my bass samples, the other with all my other keyboard stuff. Since I only have one MIDI USB interface, I was considering using aseqnet to send MIDI over Ethernet or Bluetooth from one computer to another.
How reliable is aseqnet? Will this blow up in my face if I use it? I will test it out today, but may not have time to really stress test it before the session. I'll probably stick with Ethernet just to keep it simple for now. Are there any gotchas I need to watch out for?
-ken