Thank you, Shane and Wolfgang, for your answers!
That was valuable input for me!
I visited "Jam", probably the best Music Equipment Shop in
Stockholm one or two days ago know to check the price levels
on the ADAT related hardware.
ADA8000 from Behringer seems to be a nice pice of hardware.
The price (about 279 EURO) is not very high, maybe too high
this month, though.
If I buy it, then I also need a PCI to ADAT adapter, and
eventually even with a ADAT input port to be able to use the
analog inputs on the ADA8000.
I realize that experienced Linux developers and users often prefer
cards from RME (such as the RME Hammerfall), so as not to waste time
or money on cards that are unsupported, that lack Linux drivers (or
whose drivers support only part of the card's functionality), or that
are otherwise problematic.
The problem in my case is that the PCI adapters seem to be more
expensive than the ADA8000 itself, but one may still be a good
investment if I later want to add another ADAT-based box to the
system.
What may be good to have (built, bought, or borrowed) is some kind of
test equipment that, together with an ALSA-based test application,
could verify that the kernel module, the configured sound system, and
the interface card I already have are capable of outputting ADAT
signals on the optical connection. If that test passes, it will be
safer and cheaper to buy only the ADAT-based D/A conversion equipment.
I have not looked at the specifications, and I do not know exactly how
much electronics (shift registers, PIC microcontrollers, or other
parts) would be needed to build that kind of test equipment.
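The software side of such a test could start as something like the
untested sketch below. It only asks the driver whether it will accept
an 8-channel, 48 kHz stream (what an ADAT port carries); the device
name "hw:0" is an assumption, and verifying the actual optical signal
would still need the external test hardware:

/* Untested sketch: ask the ALSA driver whether it accepts 8 channels
   at 48 kHz.  "hw:0" is an assumption; substitute the real card. */
#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 48000;

    if (snd_pcm_open(&pcm, "hw:0", SND_PCM_STREAM_PLAYBACK, 0) < 0) {
        fprintf(stderr, "cannot open hw:0\n");
        return 1;
    }
    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    if (snd_pcm_hw_params_set_access(pcm, hw,
                                     SND_PCM_ACCESS_RW_INTERLEAVED) < 0 ||
        snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S32_LE) < 0 ||
        snd_pcm_hw_params_set_channels(pcm, hw, 8) < 0 ||
        snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, 0) < 0 ||
        snd_pcm_hw_params(pcm, hw) < 0) {
        fprintf(stderr, "8 channels at 48 kHz not accepted\n");
        return 1;
    }
    printf("driver accepted 8 channels at %u Hz\n", rate);
    snd_pcm_close(pcm);
    return 0;
}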
I also find the "El Cheapo" article interesting. Of course the
instructions, and the problems observed, may be specific to one type
of card, but they indicate an apparently working way to use an
existing onboard crystal oscillator to feed the next board with the
clock signal, converting that board's oscillator circuit into an
amplifier for the clock, so that all the cards run at the same speed
(of course the cards must be of the same type, or at least use the
same crystal frequency).
My experience is that after inserting three sound cards into a single
motherboard, I may have serious trouble even getting a video signal
during the computer's power-on self test and/or entering the BIOS
setup; those cards were not modified, so they were probably not
broken.
To bring down the costs, I am still thinking of using a fast master
computer to feed a couple of slave boxes, each containing a pair of
synchronized cards. It should be possible to synchronize all four
cards, but buffer amplifiers, and maybe a differential (balanced)
transmission line, may be needed for the clock signal between the
boxes.
All three boxes should use 100 Mbps Ethernet through a switch, in
full duplex mode if possible.
If I avoid building one big virtual sound card out of the two cards
in a box, and instead use a separate handle for each card in the
application process on that box, then I expect to be able to balance
the two queues on that slave box and, on the other side of the
Ethernet switch, all four flows at the master box, bringing the
latency down to an acceptable level without underrun errors.
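The balancing itself could be as simple as this untested sketch (one
ALSA handle per card; whichever card has less audio queued gets the
next chunk):

/* Untested idea: compare how much audio each card still has queued
   and top up the one that is behind. */
#include <alsa/asoundlib.h>

static void balance_once(snd_pcm_t *card_a, snd_pcm_t *card_b,
                         const short *buf_a, const short *buf_b,
                         snd_pcm_uframes_t chunk)
{
    snd_pcm_sframes_t qa = 0, qb = 0;

    snd_pcm_delay(card_a, &qa);   /* frames still queued before the DAC */
    snd_pcm_delay(card_b, &qb);
    if (qa <= qb)
        snd_pcm_writei(card_a, buf_a, chunk);
    else
        snd_pcm_writei(card_b, buf_b, chunk);
}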
To complete the inter-card synchronization, I also want to reserve
one output channel from each card (for example a pair of left
channels and a pair of right channels) and feed them, through four
resistors, back into a reserved stereo input on the first slave box,
to make it easy to compute the inter-card timing properties after
starting each card as a four-channel output device.
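Given a recording of that loopback input, the offset between two
cards could be estimated with a brute-force cross-correlation, for
example (plain C sketch; it assumes the second signal does not lead
the reference):

/* Estimate how many frames sig lags behind ref over lags 0..max_lag. */
#include <stddef.h>

static long estimate_lag(const float *ref, const float *sig,
                         size_t frames, long max_lag)
{
    long   best_lag = 0;
    double best_sum = 0.0;

    for (long lag = 0; lag <= max_lag; lag++) {
        double sum = 0.0;
        for (size_t i = 0; i + (size_t)lag < frames; i++)
            sum += (double)ref[i] * (double)sig[i + lag];
        if (sum > best_sum) {
            best_sum = sum;
            best_lag = lag;
        }
    }
    return best_lag;   /* sig lags ref by this many frames */
}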
I have one interesting question here; maybe I have been too lazy to
look at the sound card drivers and/or the ALSA source code: if I
configure .asoundrc to make a device name available for the four
possible independent output channels of the physical card, are all
four channels then always started simultaneously (when using that
device name), or is there any practical risk that the card may start
(or be instructed to start) the two pairs of channels (call them
"front" and "rear") with some random misalignment in time?
If I am lucky, and assuming the audio cards support full duplex, I
will have 12 free output channels and 6 free input channels using
three computers. Hopefully I can find a couple of old 300 MHz
Pentium II boxes, or something similar, for use as the slave
computers.
If I for some reason want to play pre-produced digital audio streams
with all 12 channels aligned to the same phase (with much better
resolution than 1 s/44100), then I may have to resample/interpolate
the digital audio streams. But I think this is PROBABLY UNIMPORTANT,
except when feeding the channels into an analog mixer (a bad idea, I
think) or when having a very well-defined speaker setup in the room
while collecting (analyzing or recording) sound with a microphone in
a fixed position, expecting identical interference patterns across
several starts of the sound system.
Best regards!
/Hans Davidson
>From: Jan Depner <eviltwin69(a)cableone.net>
>
>> >No, imho one of the main advantages is Qt's Signal/Slot mechanism
>
>> sigc++
How to implement signal/slot mechanism in simplest terms with C?
In my opinion, it is sometimes unnecessary to link to a massive code
library if only one feature is needed.
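For instance, a minimal untested sketch in plain C, where a signal is
just a list of (function pointer, user data) pairs:

#include <stdio.h>
#include <stdlib.h>

typedef void (*slot_fn)(void *user_data, void *event_data);

typedef struct slot {
    slot_fn      fn;
    void        *user_data;
    struct slot *next;
} slot_t;

typedef struct { slot_t *slots; } signal_t;

static void signal_connect(signal_t *sig, slot_fn fn, void *user_data)
{
    slot_t *s = malloc(sizeof *s);
    s->fn = fn;
    s->user_data = user_data;
    s->next = sig->slots;
    sig->slots = s;
}

static void signal_emit(signal_t *sig, void *event_data)
{
    slot_t *s;
    for (s = sig->slots; s != NULL; s = s->next)
        s->fn(s->user_data, event_data);
}

/* example slot */
static void print_freq(void *user_data, void *event_data)
{
    printf("%s: %g Hz\n", (const char *)user_data,
           *(const double *)event_data);
}

int main(void)
{
    signal_t freq_changed = { NULL };
    double freq = 440.0;

    signal_connect(&freq_changed, print_freq, "osc1");
    signal_emit(&freq_changed, &freq);
    return 0;
}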
AlsaModularSynth uses Qt's signals and slots in audio processing, and
thus requires the whole of Qt, mixing the GUI toolkit into the audio
side.
It could be wise to use sigc++ or minimal signal/slot code in the
audio processing and Qt only in the GUI.
But because Qt comes with every Linux distribution, maybe such bad
dependencies can be allowed.
Juhana
Sorry, I meant GObject.
http://developer.gnome.org/doc/API/2.0/gobject/gobject-Signals.html
Taybin
> How to implement signal/slot in C?
I'd look at glib-2. I think their event mechanism might come close.
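For reference, registering and emitting a custom signal with GObject
looks roughly like the untested sketch below (the type name, signal
name, and handler are all invented for illustration):

#include <glib-object.h>

typedef struct { GObject parent; } DemoEmitter;
typedef struct { GObjectClass parent_class; } DemoEmitterClass;

G_DEFINE_TYPE(DemoEmitter, demo_emitter, G_TYPE_OBJECT)

static guint value_changed_signal;

static void demo_emitter_init(DemoEmitter *self) { (void)self; }

static void demo_emitter_class_init(DemoEmitterClass *klass)
{
    value_changed_signal =
        g_signal_new("value-changed",
                     G_TYPE_FROM_CLASS(klass),
                     G_SIGNAL_RUN_LAST,
                     0,                    /* no class closure */
                     NULL, NULL,           /* no accumulator   */
                     g_cclosure_marshal_VOID__DOUBLE,
                     G_TYPE_NONE, 1, G_TYPE_DOUBLE);
}

static void on_value_changed(DemoEmitter *e, gdouble v, gpointer data)
{
    g_print("value-changed: %g\n", v);
}

int main(void)
{
    DemoEmitter *e;

    g_type_init();  /* required by GLib 2.x of this era */
    e = g_object_new(demo_emitter_get_type(), NULL);
    g_signal_connect(e, "value-changed",
                     G_CALLBACK(on_value_changed), NULL);
    g_signal_emit(e, value_changed_signal, 0, 440.0);
    g_object_unref(e);
    return 0;
}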
Taybin
On Sun, Mar 27, 2005 at 09:39:31AM -0800, Matt Wright wrote:
> When we wrote that part of the OSC Spec, we were thinking
> of the case in which an OSC Method doesn't need to know
> the address through which it was invoked, i.e., "usual"
> cases like setting a parameter. That's why the spec
> doesn't mention sending either the expanded or unexpanded
> OSC address to a handler --- sorry about that.
>
> Why not simply always send both? That seems more general
> and easier to understand than a special case, at least for
> me.
Well, that would require changing the API, which is a Bad Thing, and there
is a user_data parameter that can encode that kind of contextual
information when it's needed. Also, the method callback functions are too
complicated already :)
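A minimal untested sketch of that pattern with liblo (the paths and
the param_t struct are invented for illustration): the same handler
serves two addresses, and the context travels in user_data rather
than in the matched path.

#include <stdio.h>
#include <lo/lo.h>

typedef struct { const char *name; float value; } param_t;

static int param_handler(const char *path, const char *types,
                         lo_arg **argv, int argc,
                         lo_message msg, void *user_data)
{
    param_t *p = (param_t *)user_data;   /* our context */
    p->value = argv[0]->f;
    printf("%s set to %f (matched %s)\n", p->name, p->value, path);
    return 0;
}

int main(void)
{
    static param_t freq = { "freq", 440.0f };
    static param_t gain = { "gain", 1.0f };
    lo_server_thread st = lo_server_thread_new("7770", NULL);

    /* one handler, two methods, two different contexts */
    lo_server_thread_add_method(st, "/synth/freq", "f",
                                param_handler, &freq);
    lo_server_thread_add_method(st, "/synth/gain", "f",
                                param_handler, &gain);

    lo_server_thread_start(st);
    getchar();                 /* serve until Enter is pressed */
    lo_server_thread_free(st);
    return 0;
}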
- Steve
(Forwarding to LAD)
cheers,
Christian
--------------------------------------
I've created a mailing list for the discussion of defining an open
instrument standard. So far the agreement seems to be to create an XML
standard which references external audio files. The use of FLAC has
also been mentioned. All who are interested may join at the following
link:
http://resonance.org/mailman/listinfo/open-instruments
The address to post is:
open-instruments at resonance.org
Archives:
http://resonance.org/pipermail/open-instruments/
If you feel another email list should be notified, please send this
information on. Perhaps CC the new list so that others may check which
lists have already been notified (check the archives). At this point I
have sent this to:
swami-devel
linuxsampler-devel
fluid-dev (FluidSynth devel list)
Best regards,
Josh Green
re all,
i'm writing software that can read a C++ header and generate code to
expose the public functions of the classes found: the generated code
can be compiled into the original app, which should also link to a
library; then, with 4 new lines, OSC and XMLRPC servers will be
active, accepting remote calls to the functions.
in fakiir, i make use of liblo-0.18, also published on this list.
the software is in an early phase of development, but since today it
is able to accept concurrent OSC and XMLRPC calls in the
testclass.cpp application that is bundled with the source.
http://fakiir.dyne.org
i'm happy to hear comments, suggestions or any criticism.
ciao
--
jaromil, dyne.org rasta coder, http://rastasoft.org
Hi,
If playing a sound file that has a different framerate from JACK's,
using libsamplerate, should I:
- convert in real time, in the process callback?
- convert the whole file into memory when loading it?
Actually, I've already coded the second option, but I just discovered
the jack_set_samplerate_callback() function, which seems to require
the first option... Is it important to support framerate changes
while rolling?
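For what it's worth, a rough untested sketch of the first option
(mono for brevity; my_read_frames() is a made-up stand-in for the
real file reader, and in practice it should pull from a ring buffer
filled by another thread, since reading the file inside process() is
not realtime-safe). The ratio passed to src_callback_read() can be
recomputed from the samplerate callback, which is what lets this
approach cope with rate changes while rolling:

#include <jack/jack.h>
#include <samplerate.h>

#define FILE_CHUNK 4096

typedef struct {
    SRC_STATE *src;
    double     ratio;            /* jack rate / file rate */
    float      buf[FILE_CHUNK];
} player_t;

static player_t     player;
static jack_port_t *out_port;    /* registered at startup */

extern long my_read_frames(float *dst, long frames);  /* hypothetical */

/* libsamplerate pulls input through this callback */
static long src_input_cb(void *cb_data, float **data)
{
    player_t *p = (player_t *)cb_data;
    *data = p->buf;
    return my_read_frames(p->buf, FILE_CHUNK);  /* 0 == end of file */
}

static int process(jack_nframes_t nframes, void *arg)
{
    float *out = jack_port_get_buffer(out_port, nframes);
    long   got = src_callback_read(player.src, player.ratio,
                                   (long)nframes, out);
    while (got < (long)nframes)
        out[got++] = 0.0f;       /* pad with silence past end of file */
    return 0;
}

/* setup, once the JACK client exists */
void player_init(jack_client_t *client, double file_rate)
{
    int err;
    player.ratio = (double)jack_get_sample_rate(client) / file_rate;
    player.src   = src_callback_new(src_input_cb, SRC_SINC_FASTEST,
                                    1 /* mono */, &err, &player);
}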
--
og
Thanks for the replies, everyone! :)
I was afraid that the computers would not be able to handle more than 8
channels of recording, which is why I was limiting them that way. How
many channels can one computer handle? What kind of specs (i.e.
processor/RAM) would that computer need? What interfaces can handle more
than 8 analog mics? I would need to record about 32 mic inputs.
Also, OpenMosix would be disabled while recording, and would only be
active in non-recording mode. This network would be an island: the
gigabit Ethernet switch would be dedicated to just the audio stuff.
-jordan