Who is "we"?
Does "marketed to hardware OEMs" imply that this is a commercial project?
What is the (intended) relation/difference to DeMuDi?
Cheers,
Andreas
----- Original Message -----
From: "Daniel James" <daniel(a)64studio.com>
To: "Linux Audio Announce list" <linux-audio-announce(a)music.columbia.edu>
Sent: Saturday, April 02, 2005 12:04 PM
Subject: [linux-audio-announce] 64 Studio - a new distribution for
creative x86_64 users
> Hello all,
>
> 64 Studio is a collection of software designed specifically for
> content creation on x86_64 hardware (that's AMD's 64-bit CPUs and
> Intel's EM64T chips), including audio, video and design applications.
>
> It's based on the pure 64 port of Debian GNU/Linux, but with a
> specialised package selection and lots of other customisations. It
> will be marketed to hardware OEMs in the creative workstation and
> laptop markets as an alternative to the 64-bit version of Windows XP,
> or OS X on Apple hardware.
>
> We are currently working on a prototype. Our next step will be a
> CD-ROM installer image which will be distributed to beta testers. If
> you're interested in this project, please see the FAQ on the website,
> or join our mailing list.
>
> http://64studio.com/
>
> Cheers
>
> Daniel
> _______________________________________________
> linux-audio-announce mailing list
> linux-audio-announce(a)music.columbia.edu
> http://music.columbia.edu/mailman/listinfo/linux-audio-announce
>
Greetings:
If you don't know the drill:
http://linux-sound.org (USA)
http://linuxsound.atnet.at (Europe)
http://linuxsound.jp (Japan)
As usual the Japanese site will update later this evening. Please note
that my old Bright.net addresses and pages are gone now. If you have a
page containing a link to the old Bright.net site for these pages please
update it to one of the new URLs. Note too that my email address is also
no longer at Bright.net, please update your address book as necessary.
Best,
dp
http://www.dis-dot-dat.net/music/jolly_192.mp3
Used: Cheesetracker, jack-rack and timemachine.
Style: Downbeaty-kind-of-thing
--
"I'd crawl over an acre of 'Visual This++' and 'Integrated Development
That' to get to gcc, Emacs, and gdb. Thank you."
(By Vance Petree, Virginia Power)
Thank you Shane and Wolfgang for your answers !
That was valuable input for me !
I visited "Jam", probably the best music equipment shop in
Stockholm, a day or two ago to check the price levels of the
ADAT-related hardware.
The ADA8000 from Behringer seems to be a nice piece of hardware.
The price (about 279 EUR) is not very high, though it may be too
high for me this month.
If I buy it, I will also need a PCI-to-ADAT adapter, possibly
even one with an ADAT input port, to be able to use the
analog inputs on the ADA8000.
I realize that experienced Linux developers and users often prefer
to use cards of the RME (Hammerfall) brand, so as not to waste
time or money on cards that are unsupported, have no Linux
drivers, or have drivers that support only part of the card's
functionality, and/or for one or more other good reasons.
The problem in my case is that the PCI adapters seem to be more
expensive than the ADA8000, but one may be a good investment anyway
if I later want to add another ADAT-based box to the system.
What may be good to have (build, buy or borrow) is some kind
of test equipment that could be used together with an ALSA-based
test application to verify that the kernel module, the configured
sound system and the interface card that I already have are
capable of outputting ADAT signals on the optical connection.
If that test is positive, it will be safer and cheaper to buy
only the ADAT-based D/A conversion equipment.
I have not looked at the specifications, and I do not know
exactly how much electronics (shift registers, PIC
microcontrollers and/or other parts) would be needed to build
that kind of test equipment.
I also find the "El Cheapo" article interesting. The
instructions, and the problems observed, may be specific to one
type of card, but they indicate an apparently working way to use
the existing onboard crystal oscillator of one card to feed the
next board with the clock signal, converting that board's
oscillator circuit into an amplifier for the clock signal, so
that all the cards run at the same speed (of course the cards
must be of the same type, or at least use the same crystal
frequency).
My experience is that after inserting three sound cards into a
single motherboard, I can have serious trouble even getting a
video signal during the computer's power-on self test and/or
entering the BIOS setup. Those cards were not modified, so they
were probably not broken.
To bring down the costs, I am still thinking of using a fast
master computer to feed a couple of slave boxes, each containing
a pair of synchronized cards. It should be possible to
synchronize all four cards, but buffer amplifiers, and maybe a
differential (balanced) transmission line, may be needed for the
clock signal between the boxes.
All three boxes should use 100 Mbps Ethernet through a switch,
in full-duplex mode if possible.
If, instead of building one virtual bigger sound card out of the
two cards in a box, I use a separate handle for each of the pair
of cards in the application process on that box, then I expect to
be able to balance the two queues on that slave box, and the four
flows on the other side of the Ethernet switch at the master box,
and bring the latency down to an acceptable level without
observing underrun errors.
To complete the inter-card synchronization, I also want to
reserve one output channel from each card, for example a pair of
left channels and a pair of right channels, and feed them through
four resistors back into a reserved stereo input on the first
slave box, to make it easy to compute the inter-card timing
properties after starting each card as a four-channel output
device.
I have one interesting question here (maybe I have been too lazy
to look at the sound card drivers and/or the ALSA source code):
if I configure the .asoundrc file to make a device name available
for the four possible independent output channels of the physical
card, are all four channels then always started simultaneously
when using that device name, or is there any practical risk that
the card may start (or be instructed to start) the two pairs of
channels (call them "front" and "rear") with some random
misalignment in time?
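For reference, such a combined device can be declared with ALSA's "multi" PCM plugin; the fragment below is only a sketch (the card and device numbers "hw:0,0" and "hw:0,1" are invented and must be adjusted to the real hardware). Note that the multi plugin glues the slaves together at the API level, so whether the two pairs actually start in hardware sync is exactly the question above.

```
# Hypothetical ~/.asoundrc fragment: expose one 4-channel playback
# device built from two stereo subdevices of the same card.
pcm.quad {
    type multi
    slaves.a.pcm "hw:0,0"
    slaves.a.channels 2
    slaves.b.pcm "hw:0,1"
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}
```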
If I am lucky, assuming that the audio cards support full duplex,
I will have 12 free output channels and 6 free input channels
using three computers. Hopefully I can find a couple of old
300 MHz Pentium II boxes, or something similar, somewhere for use
as the slave computers.
If for some reason I want to play preproduced digital audio
streams with all 12 channels aligned in phase (with much better
resolution than 1 s/44100), then I may have to
resample/interpolate the digital audio streams. But I think this
is PROBABLY UNIMPORTANT, except when feeding the channels into an
analog mixer (a bad idea, I think), or when having a very
well-defined setup of the speakers in the room while at the same
time collecting (analyzing or recording) sound with a microphone
in a fixed position, expecting identical interference patterns
across several starts of the sound system.
Best regards !
/Hans Davidson
>From: Jan Depner <eviltwin69(a)cableone.net>
>
>> >No, imho one of the main advantages is Qt's Signal/Slot mechanism
>
>> sigc++
How would one implement a signal/slot mechanism in the simplest
terms in C? In my opinion, it is sometimes unnecessary to link to
a massive code library if only one feature is needed.
AlsaModularSynth uses Qt's signals and slots in audio processing,
and thus requires the whole of Qt and mixes the GUI toolkit into
the audio side. It could be wise to use sigc++ or minimal S/S
code in the audio processing and Qt only in the GUI.
But because Qt comes with every Linux distribution, maybe such
bad dependencies can be allowed.
Juhana
--
http://music.columbia.edu/mailman/listinfo/linux-graphics-dev
for developers of open source graphics software
Sorry, I meant GObject.
http://developer.gnome.org/doc/API/2.0/gobject/gobject-Signals.html
Taybin
-----Original Message-----
From: Juhana Sadeharju <kouhia(a)nic.funet.fi>
Sent: Mar 29, 2005 11:38 AM
To: linux-audio-dev(a)music.columbia.edu
Subject: [linux-audio-dev] Re: OSC-Question
How to implement signal/slot in C? I'd look at glib-2. I think their event mechanism might come close.
Taybin
On Sun, Mar 27, 2005 at 09:39:31AM -0800, Matt Wright wrote:
> When we wrote that part of the OSC Spec, we were thinking
> of the case in which an OSC Method doesn't need to know
> the address through which it was invoked, i.e., "usual"
> cases like setting a parameter. That's why the spec
> doesn't mention sending either the expanded or unexpanded
> OSC address to a handler --- sorry about that.
>
> Why not simply always send both? That seems more general
> and easier to understand than a special case, at least for
> me.
Well, that would require changing the API, which is a Bad Thing, and there
is a user_data parameter that can encode that kind of contextual
information when it's needed. Also, the method callback functions are too
complicated already :)
- Steve
(Forwarding to LAD)
cheers,
Christian
--------------------------------------
I've created a mailing list for the discussion of defining an open
instrument standard. So far the agreement seems to be to create an XML
standard which references external audio files. The use of FLAC has
also been mentioned. All who are interested may join at the following
link:
http://resonance.org/mailman/listinfo/open-instruments
The address to post is:
open-instruments at resonance.org
Archives:
http://resonance.org/pipermail/open-instruments/
If you feel another email list should be notified, please send this
information on. Perhaps CC the new list so that others may check which
lists have already been notified (check the archives). At this point I
have sent this to:
swami-devel
linuxsampler-devel
fluid-dev (FluidSynth devel list)
Best regards,
Josh Green