>
> 3. 'amateur' users vs. 'professionals'.
>
> Since I took a clear stand on this matter, let me define what
> I mean by a 'professional' in this context: someone who makes
> a living by providing a service to customers who pay for it.
> I still maintain that someone in this position will not mind
> logging in as different users for his personal and pro work.
> He / she will probably use different machines anyway. It's
> just a matter of 'best practice' and professional conduct.
>
>
> > > As was already pointed out, prosumer and professional users
> > > will in all probably have two different audio cards anyway.
>
> > At least for the hobbyist (and I can speak for them):
> > disagreed :) .
Just to join in... from a dance-music-making perspective, I'm not sure I understand this customer concept. Does your favourite rock band use "best practice" and professional conduct? Are they not professional? I hope at least some of the people on here are interested in making music too, and I hope we can focus on that. The key thing I've learned from my limited experience with music is that more time making music and less time getting your computer to work means better music. So I think the goal really needs to be, as Jay said, "it just works". Though I'm coming from the just-doing-it-for-fun camp, so take that into account.
> On Sat, Jun 18, 2005 at 02:34:46AM +0200, Christoph Eckert wrote:
>>> I think we should (and can) keep the desktop and 'pro'
>>> worlds separate.
>>>
>>
>> I do not agree :) . We're in the free software world, so
>> there's no need to tell the non-pro-audio-users "use anything
>> else".
How OS X solves this problem may be instructive.
[most facts below are probably right, I'm sure I got
a few wrong ...]
In the System Preferences->Sound menu, a user
can choose the default audio input and audio output
device.
In the CoreAudio API, this choice becomes the
current value of kAudioHardwarePropertyDefaultInputDevice
and kAudioHardwarePropertyDefaultOutputDevice.
Consumer-oriented apps use those Defaults, as
a rule; content-creation apps usually have their
own Preferences that let you select the audio
devices for the app.
CoreMIDI works this way too.
So, the underlying system is "pro", but there
are provisions in the API to handle the mindset
of both pro and consumer worlds.
Finally, note that Apple solved the legacy problem
by emulating Sound Manager (OS 9 audio API).
Until Tiger, QuickTime actually used Sound Manager.
Many other apps still do.
If Linux followed this model, one would write emulators
over jack for all of the consumer audio APIs in use,
to accommodate the installed base, and have
the emulations work well enough to convince distros
to just ship jack as the bottom layer and use its
emulated APIs. Then you could start evangelizing
direct jack calls to all application developers, consumer
and pro alike.
Yes, this takes a lot of work, and is not in the direct
path of solving pro audio problems. Just like I've spent
5 years on pushing RTP MIDI through the standards
process because I felt someone had to do it, I think the
Linux audio API problem only gets solved if someone
decides to dedicate 5 years of their life to doing it.
That's how long a systems project takes to have
an impact on the world. Good luck.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
Hello, I would like to push audio streams over ethernet and was wondering
what avenues people have tried. I intend to have two Linux boxes (or
more). Box1 has the user interface and all of the audio data. Box2 has
no user interface and is plugged into an amplifier and speakers.
Eventually I would like to have many Box2's. I envision an application
with a server running on Box2 and a client running on Box1. Ideally the
client would create an ALSA input and the server would create an ALSA
output. Then the client connects to the server, any ALSA-enabled app
connects to the input created by the client, and audio is heard from the
speakers connected to the server. I have no expectation that such an
application actually exists; I'm curious what options exist to
push audio over ethernet. -Garett
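For what it's worth, the network plumbing itself is the easy part; the hard parts are clocking and the ALSA ends. A minimal sketch of the Box1-to-Box2 transport in Python (the chunk size and PCM format here are made up for illustration; a real Box2 server would write the received bytes to an ALSA playback device instead of collecting them):

```python
import socket
import threading

CHUNK = 4096  # bytes of raw PCM per network read

def serve_once(srv, sink):
    # Box2 side: accept one client and hand every received chunk of
    # raw PCM to `sink` (a real server would feed ALSA playback here).
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(CHUNK)
            if not data:
                return
            sink(data)

def push_audio(pcm, host, port):
    # Box1 side: stream a buffer of raw PCM to the remote box.
    with socket.socket() as cli:
        cli.connect((host, port))
        cli.sendall(pcm)

# Loopback demonstration: "play" one second of 16-bit stereo silence
# at 16 kHz (16000 frames * 2 channels * 2 bytes = 64000 bytes).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
received = bytearray()
t = threading.Thread(target=serve_once, args=(srv, received.extend))
t.start()
push_audio(b"\x00" * 64000, "127.0.0.1", srv.getsockname()[1])
t.join()
srv.close()
print(len(received))  # 64000
```

TCP keeps the bytes in order but adds buffering latency; for many Box2's playing in sync you would sooner or later want a timestamped UDP/RTP scheme instead.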
>What I mean is that if you have your soundserver, you are still left with
>rewriting every single app you ever want to use to use the soundserver;
>otherwise the problem remains. There are a hell of a lot of tools
>around that use either plain OSS or direct ALSA.
Why not simply use the ALSA API? alsa-lib does not depend on
alsa-driver. You can write (for example) an esd alsa-lib plugin. All
applications which use the ALSA API can then be redirected to esd
through this plugin without the user noticing. If needed, audio can go
directly to the soundcard or through dmix. What makes ALSA unique is
alsa-lib (personally I think this is the best part of ALSA), not
alsa-driver.
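As a concrete illustration of that redirection mechanism (a sketch; exact plugin availability depends on your alsa-lib version), a few lines of ~/.asoundrc already route every ALSA-API application through the dmix software mixer without the applications noticing:

```
# ~/.asoundrc -- route all ALSA-API apps through the dmix mixer
pcm.!default {
    type plug          # automatic format/rate conversion
    slave.pcm "dmix"   # hand the converted stream to dmix
}
```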
Peter Zubaj
>this would IMHO be the best solution, but it will not happen, or
>at least will take very long, until
>
>* all distros run JACK by default
>* all developers of any audio program have rewritten their
>code.
Why not use only the ALSA API? There is an ALSA-to-jack plugin; if
someone wants to use jack, the plugin can route audio to it. There are
also alsa <-> OSS driver plugins, which means you can output sound from
an ALSA app to an OSS driver, or capture from one, without any source
changes.
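For example (a sketch; the jack PCM plugin ships separately from alsa-lib, and the port names shown are the defaults of older jackd versions, so yours may differ):

```
# ~/.asoundrc -- make plain ALSA apps play through a running jackd
pcm.!default {
    type jack
    playback_ports {
        0 alsa_pcm:playback_1
        1 alsa_pcm:playback_2
    }
    capture_ports {
        0 alsa_pcm:capture_1
        1 alsa_pcm:capture_2
    }
}
```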
Peter Zubaj
Hi,
GNUsound 0.7.2 has been released. This release fixes a few nagging issues
and adds support for the gmerlin avdecoder, which provides the
ability to load AAC and Musepack formats (among many others).
Changes from 0.7.1:
Fixed undo issues with Mix Tool, Move Tool, Pencil Tool.
Fixed ILLEGAL_MID_SIDE_FORCE error when saving a FLAC file at
compression level 4.
Fixed a few make/autoconf snags (Jens Koerber)
Added support for gmerlin avdecoder (Burkhard Plaum)
Updated config.sub/config.guess.
GNUsound 0.7.2 is available here:
ftp://ftp.gnu.org/gnu/gnusound/gnusound-0.7.2.tar.bz2
The GNUsound homepage:
http://www.gnu.org/software/gnusound
Thanks,
Pascal.
Hi there.
>Actually a correct explanation isn't that simple. Yours is much
>_too_ simple. Theoretically a 20 kHz bandlimited signal can be
>represented _exactly_ as a 40 kHz PCM stream. In order to not
>have to use a very steep lowpass filter in the DAC it is better
>to use a somewhat higher sampling frequency. 48 kHz should be
>enough most of the time.
No. The Nyquist criterion allows the original _sampled_ signal to be
fully reconstructed using an IDFT. It does not mean that the
reconstructed signal will contain anything more than the original
samples.
If you sample an incoming analog signal at 2 samples per second
(sps), you'll definitely not be able to reconstruct ALL the
phases and ALL the frequencies between 0 and 1 Hz. You will,
however, get a correct reconstruction of a periodic signal if it
lasts long enough. This is due to the inherent impossibility,
using a DFT, of getting accurate frequency and time resolution
at the same time.
Just try this "simple" experiment: take a 1 Hz pulse that is
triggered after a random, non-quantized delay of less than four
seconds. Sample it at a rate of 2 sps, then try to get the
original signal back with all its components (phase and
frequency). Good luck.
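The ambiguity right at the Nyquist frequency is easy to show numerically (a Python sketch, independent of any audio API): two different analog sinusoids, with different amplitudes and phases, produce the very same sample stream at 2 sps, so no reconstruction can tell them apart.

```python
import math

def sample(amp, phase, n=8, fs=2.0, f=1.0):
    # Sample amp * sin(2*pi*f*t + phase) at fs samples per second.
    return [amp * math.sin(2 * math.pi * f * k / fs + phase)
            for k in range(n)]

# sin(pi*k + phi) = sin(phi) * (-1)**k, so at f = fs/2 only the
# product amp * sin(phase) survives. These two distinct signals ...
a = sample(1.0, math.pi / 6)  # amplitude 1.0, phase 30 degrees
b = sample(0.5, math.pi / 2)  # amplitude 0.5, phase 90 degrees

# ... yield identical sample streams, 0.5 * (-1)**k:
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))  # True
```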
This is just as impossible as saying that you can compress the
information stored in any analog signal into a finite number
of samples.
Mickael Vardo
Modulating "Magic" Numbers.
----------------------------
Take two instances of the crude pseudo-sine function S:
#define S(n)(((n)*((n)^0x8000))>>13)^(((short)((n)^0x8000))>>15)
... and offset them some 60 degrees with the "magic number" 0x14D4; then
they will partially cancel each other out, giving you something that
*looks* like a sine wave (but actually still has lots of harmonics).
Now, what then if one was to change the offset, I thought. That should
result in different blends of harmonics, right?
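For what it's worth, the math backs that intuition up: summing a periodic wave with a copy whose fundamental is phase-shifted by delta scales harmonic k by |2*cos(k*delta/2)|, so a 60-degree offset nulls the 3rd (and 9th, 15th, ...) harmonic while merely attenuating the rest. A quick Python check (this is about the general identity, not the S() macro itself):

```python
import math

def harmonic_gain(k, delta):
    # |1 + exp(i*k*delta)| = |2*cos(k*delta/2)|: the gain of harmonic k
    # when a wave is summed with a copy offset by `delta` radians.
    return abs(2 * math.cos(k * delta / 2))

delta = math.pi / 3  # the ~60 degree offset
for k in range(1, 7):
    print(k, round(harmonic_gain(k, delta), 3))
# harmonic 3 comes out at 0.0: fully cancelled by the 60 degree offset
```

Changing the offset moves those nulls around, which is exactly why a variable offset gives different blends of harmonics.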
So I made this new tiny button between the waveform selector and the
detune slider. Activating this button will do the following:
Frequency offset (detune) will detune the two instances against each
other.
Phase parameters will change, not only the phase relative to the other
oscillators, but also the offset between the two instances of S().
I've set up oscillator one in patch H3, ready for experimentation. I
forgot to change the name though, so the title "op 1 on quack" is a
reference to the sound in H2, which is actually pretty amusing ... but
only if you are five years old! :D
Old and deaf R&R farts will probably prefer the "fat boy" in E8.
The implication, however, is that from this week on I can offer you two
oscillators for the price of one.
Get'em from:
http://mx44.linux.dk
mvh // Jens M Andreasen