Hello,
I just reread some parts of the "What Parts of Linux Audio
Simply Work Great?" thread that discuss the problems with soundcards
that do not support multiple streams, and thought it would be good if
we could come up with concrete advice for desktop developers (mainly
GNOME and KDE), distribution developers, and audio application
developers in general.
This document should contain a detailed description of the current
situation; of how we got there (i.e. how the desktop "sound daemons"
actually created a bigger problem than they solved, and why ALSA does
not do mixing in software by default); of how different user
requirements lead to different solutions that are not always compatible
(i.e. "professional audio" vs. "normal" users); and of all the
different solutions currently available (and interfering with each
other).
I believe such an overview is essential. I think most people on this
mailing list have a pretty good idea about this, but do others? For
example, I get the impression that there is a lot of misunderstanding
or ignorance about ALSA and dmix.
Then it should propose an ad-hoc solution, and some guidelines for how
to work towards a future in which everybody (including jwz ;-) ) is
happy with Linux audio that "just works". (I found jwz's rant
unjustified and unpleasant, but we can use it to our advantage if we
give the right response, which, with a bit of luck (?), will get the
same attention from the Slashdot hordes as jwz's blog.)
The ad-hoc solution, I believe, is something that should work "right
now", or at least as well as possible, with as few changes to existing
applications as possible. For one, this would mean making sure that
dmix is used when necessary, that no applications use the hw: devices
explicitly but rather "default", and that OSS applications use libaoss
(if I understand correctly, libaoss can be told to use the dmix plugin,
while the ALSA OSS emulation will always use the hw device, or am I
wrong here?). This is mainly a job for the distros.
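For illustration, here is a minimal sketch (not a tested recipe) of the
kind of /etc/asound.conf or ~/.asoundrc a distro could ship to route
"default" through dmix; the card name, ipc_key and buffer/period sizes
are placeholders that would need tuning per card:

    pcm.!default {
        type plug
        slave.pcm "dmixed"
    }

    # software mixing on top of the first card
    pcm.dmixed {
        type dmix
        ipc_key 1024          # any key unique on the system
        slave {
            pcm "hw:0,0"      # placeholder card/device
            rate 48000
            period_size 1024
            buffer_size 4096
        }
    }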
The remaining problem here is what to do with jackd. When the
"professional" user runs jackd and jackd complains that it is not using
the hw: device directly, the solution should be obvious to him, and the
non-JACK apps should continue to work as before (though they would have
to be restarted, I suppose). Could anyone comment on this? The
"occasional" jackd user can just run JACK through the dmix plugin,
which, if I understand correctly, would cause higher latency, but we
are not talking about the "professional" user here.
Proposing a roadmap for the future is much harder, but I think we can
talk about that later.
For now, I would like your opinion on the issue. Do you agree that such
a document would be feasible and useful, and that the proposed
structure/contents make sense? I am not sure that the mailing list is
the best way to write this document; maybe we could use a wiki. I guess
the first step would be to look for the relevant messages in the "What
Parts of Linux Audio Simply Work Great?" thread and write a short
overview of those.
maarten
Hello,
any advice on what would be the best kernel options
when using Ingo Molnar's patch for an audio setup?
CONFIG_PREEMPT_DESKTOP: Preemptible Kernel (Low-Latency Desktop)
or
CONFIG_PREEMPT_RT: Complete Preemption (Real-Time)
CONFIG_PREEMPT_SOFTIRQS: Thread Softirqs
CONFIG_PREEMPT_HARDIRQS: Thread Hardirqs
yes or no?
Right now I am building with CONFIG_PREEMPT_DESKTOP,
CONFIG_PREEMPT_SOFTIRQS and CONFIG_PREEMPT_HARDIRQS.
maarten
at some point in the past, Thorsten Wilms did write:
> While ladspa control dialogs tend to be ugly, and things like having
> sliders for boolean values sucks, I can't say i miss plugin GUIs like
> I got to know while using Cubase VST much. Inconsistent eye-candy
> nonsense and useless big marketing labels everywhere ...
surely you forgot poor contrast?
keke ;)
Pete.
Hi!
I am trying to correct some factual errors in Wikipedia regarding FM
synthesis (aka phase modulation).
The page in question is here:
http://en.wikipedia.org/wiki/Frequency_modulation_synthesis
My current corrections are in italics. Further corrections are appreciated.
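(For reference, the simple operator that is usually labelled "FM" in
these synths actually modulates the carrier's phase:

    y(t) = A \sin(2\pi f_c t + I \sin(2\pi f_m t))

with I the modulation index; true frequency modulation would feed the
modulator into the instantaneous frequency instead of the phase.)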
I am currently looking at the Casio section near the end (regarding
phase distortion). Wasn't this technique developed way back in the
1960s by French academics?
mvh // Jens M Andreasen
(and yes, somebody ought to rewrite the whole page from scratch :))
--
Hi.
Since I finally ordered my Multiface, I have also started to write
a small program which I am going to need, since hdspconf and hdspmixer
are both GUI-only.
The idea:
hdsposc will offer the RME mixer and the metering functionality
via OSC.
Why am I writing this mail? I'd like to know if someone else could
also make use of this tool. If so, I'd like to hear your input on what
exactly you'd want to do with it. If no one responds, I'll save coding
time by ONLY implementing the Multiface (since that's what I can test)
and probably not polish it much, since what I need is probably very
basic.
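To make that concrete, a purely hypothetical client-side sketch (using
liblo; the /hdsp/... address and argument layout are invented, nothing
of this exists yet):

    /* hypothetical hdsposc client: set the gain of one mixer
     * cross-point (input 0 -> output 2) to 0.5 */
    #include <lo/lo.h>

    int main(void)
    {
        lo_address t = lo_address_new("localhost", "7770"); /* made-up port */
        lo_send(t, "/hdsp/mixer/gain", "iif", 0, 2, 0.5f);
        lo_address_free(t);
        return 0;
    }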
Oh, and BTW, while coding hdsposc a question arose:
Is it recommended practice to immediately close a hwdep device
again after performing a single operation, or would it be OK
to leave the hwdep handle open during runtime and only close it
again on program shutdown?
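For reference, a minimal sketch of the keep-it-open variant, assuming
alsa-lib's hwdep API (the card name and the wrapper function names are
placeholders):

    /* open the hwdep device once at startup and keep the handle
     * around until shutdown instead of reopening it per operation */
    #include <alsa/asoundlib.h>

    static snd_hwdep_t *hwdep;

    int hdsp_hwdep_open(const char *name)   /* e.g. "hw:0" */
    {
        return snd_hwdep_open(&hwdep, name, SND_HWDEP_OPEN_DUPLEX);
    }

    void hdsp_hwdep_close(void)              /* once, at program exit */
    {
        if (hwdep)
            snd_hwdep_close(hwdep);
    }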
--
CYa,
Mario
Hello all!
I am happy to tell you that I've finished my very first plugins. I
would like to thank all those here who helped me get started, and Tim,
who kindly provided free plugin IDs.
There are four plugins, each with a mono and a stereo variant:
- Clipping Booster - boosts/clips the signal after a non-linear amp
- Noisifier - applies AM to the signal with a variable noise source
- Non-linear Amplifier - amp with 6 transmission curve types
(round/parabolic/sine/etc.)
- Variable Noise - generates white/stepped/shot noise
Download plugins/docs: http://artemiolabs.com/software/wasp/
Online docs: http://artemiolabs.com/software/wasp/docs/WASP.html
Comments and suggestions are very welcome. But please don't be cruel,
because this is my first serious C coding and my very first DSP
experience! I've tested the plugins thoroughly in many hosts; I hope
they work for you too and will be useful.
Artemiy.
Hello all!
I have a question regarding linking multiple plugins into a single
library.
if I do:
$ ld -shared -o plugins.so plugins/*.o
- then ld complains that g_psStereoDescriptor and g_psMonoDescriptor,
as well as the _init and _fini functions, are duplicated, and does nothing.
If I try:
$ ld -shared -z muldefs -o plugins.so plugins/*.o
- this does some wizardry to resolve the duplicate function names
(_init and _fini), and in the host only the last plugin in the .o
file list is seen.
As I understand it, I cannot change the names of _init and _fini
because these are standard for libraries (or not?), and the names
g_psStereoDescriptor and g_psMonoDescriptor are standard for a LADSPA
plugin (or not?)... But what should I do? Or is it better to keep the
plugins in separate .so files?
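For illustration, one possible shape of a single shared entry point for
the whole library, assuming the per-plugin _init/_fini constructors are
merged or renamed; the descriptor names below are placeholders:

    /* one ladspa_descriptor() for the whole .so, handing out every
     * plugin's descriptors by index; the host scans indices until
     * NULL is returned */
    #include <stddef.h>
    #include <ladspa.h>

    /* built elsewhere, e.g. by per-plugin setup functions */
    extern const LADSPA_Descriptor *g_psBoosterMono;
    extern const LADSPA_Descriptor *g_psBoosterStereo;
    extern const LADSPA_Descriptor *g_psNoisifierMono;
    extern const LADSPA_Descriptor *g_psNoisifierStereo;

    const LADSPA_Descriptor *ladspa_descriptor(unsigned long Index)
    {
        switch (Index) {
        case 0: return g_psBoosterMono;
        case 1: return g_psBoosterStereo;
        case 2: return g_psNoisifierMono;
        case 3: return g_psNoisifierStereo;
        default: return NULL;
        }
    }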
Thanks very much!
Artemiy.
Hi all,
I am busy writing a grant proposal and I am now pricing audio/computer
equipment.
I plan to have all the computers dual-booting Linux/Windows and want
the best-quality sound card I can get that will run on both Linux and
Windows with the least trouble and the most bang for the buck.
I don't necessarily need multitrack recording.
This issue has popped up on the list before, but I would like to get a
fresh view from the list.
Thanks
Aaron