On Linux, which is faster: a pipe, a FIFO, or a socket? What about
shared memory — is it faster, and if so, is the speedup enough to
warrant the extra programming overhead?
--
Hans Fugal | De gustibus non disputandum est.
http://hans.fugal.net/ | Debian, vim, mutt, ruby, text, gpg
http://gdmxml.fugal.net/ | WindowMaker, gaim, UTF-8, RISC, JS Bach
---------------------------------------------------------------------
GnuPG Fingerprint: 6940 87C5 6610 567F 1E95 CB5E FC98 E8CD E0AA D460
Sort of off-topic, except that I want to do it on Linux and it
does have to do with audio :)
I've been wanting to tinker a bit with analog audio circuit
design. I've built a few boxes from schematics but I've
never actually designed a circuit.
What I'd REALLY like is some GUI software in which I can construct
a circuit diagram and then feed in a test audio signal and
through some DSP magic, hear (more or less) what the result would sound
like before I start using my big clunky hands to ruin physical objects :)
Does such a thing exist? Or some combination of software that would
do the above? I've been looking at some SPICE resources online
but I can't tell if you can feed an arbitrary digitized audio signal into
a SPICE simulation.
--
Paul Winkler
http://www.slinkp.com
Look! Up in the sky! It's PURPLE KABUKI WARRIOR BOOMERANG!
(random hero from isometric.spaceninja.com)
I'm a little green when it comes to the time-measurement facilities
available in Linux. In my host, I would like to be able to measure how
much time each individual LADSPA plugin takes. Assuming I do realtime
work with the smallest buffer sizes my system can handle, what options
are available to me for measuring time on this small a scale? I know
that the typical system calls time() and gettimeofday() have a
resolution of 10 ms, which is too coarse for this purpose. Is there
any way to do this? Is there a high-res time HOWTO?
-jacob robbins.....
Has anyone done any low-latency audio testing with the new native pthread
implementation (comes with rh9.0 at least)...?
I was just reading through [1] and noticed the following (in section 8):
"Realtime support is mostly missing from the library implementation. The
system calls to select scheduling parameters are available but they have
no effects. The reason for this is that large parts of the kernel do not
follow the rules for realtime scheduling. Waking one of the threads
waiting for a futex is not done by looking at the priorities of the
waiters."
If I've understood right, SCHED_FIFO semantics have no meaning between
the threads of one process when NPTL is used! Hopefully I've
misunderstood, as this would cause quite a lot of problems (GUI and
disk-I/O threads could freely block audio processing even when using
SCHED_FIFO). :(
[1] http://people.redhat.com/drepper/nptl-design.pdf
--
http://www.eca.cx
Audio software for Linux!
Hi Steve, hi others!
I'm just adding lrdf support to GLAME, but apart from default values I
don't see any more information than what is in the LADSPA descriptor.
Can we agree on some more metatags? I'd like to have
- a Category (or multiple ones?)
- short description of ports and the plugin itself
- URI to the documentation of the plugin
Am I right that getting such information would be done via the lrdf call
lrdf_get_setting_metadata()? If so, we need to define element labels for
the above. I'd suggest
- category
- description
- help_uri
Thoughts?
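To make the proposal concrete, here is a sketch of how such metadata might look in Turtle. Everything below is hypothetical — the plugin ID, URIs, and property names are only this mail's proposal, not an agreed schema:

```turtle
@prefix ladspa: <http://ladspa.org/ontology#> .
@prefix dc:     <http://purl.org/dc/elements/1.1/> .

# Hypothetical entry for an imaginary plugin; nothing here is
# part of the existing ladspa.rdfs.
<http://ladspa.org/ontology#1049>
    ladspa:category    "Frequency/Filters" ;
    dc:description     "Simple one-pole low-pass filter" ;
    ladspa:help_uri    <http://example.org/docs/lowpass.html> .
```

Reusing dc:description from Dublin Core instead of inventing a new "description" label might save defining one tag ourselves.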
Richard.
ABD/GÖÇMENLİK wrote:
[some Turkish spam bs]
Shit, they sneaked it past the HTML filter. But I'm reluctant to plonk
all multipart messages; many mailers produce them by default.
Guess we'll have to live with it for now.
--
All Members shall refrain in their international relations from
the threat or use of force against the territorial integrity or
political independence of any state, or in any other manner
inconsistent with the Purposes of the United Nations.
-- Charter of the United Nations, Article 2.4
Jörn Nettingsmeier
Kurfürstenstr 49, 45138 Essen, Germany
http://spunk.dnsalias.org (my server)
http://www.linuxdj.com/audio/lad/ (Linux Audio Developers)
Hi all, version 0.8.0 of the Hydrogen GNU/Linux drum machine is available at
http://hydrogen.sf.net
Features:
* Graphical user interface based on Qt 3
* Sample-based audio engine
* OSS audio driver
* JACK audio driver
* Export-to-disk audio driver
* ALSA MIDI input
* Import/export of XML-based song files
* 64 ticks per pattern
* 16 voices with volume, mute, solo, and pan capabilities
* Import of samples in WAV, AU, and AIFF formats
* Humanize and swing functions
* Delay FX (new)
* Assignable JACK ports in the preferences file (new)
* Assignable MIDI-in channel (1..16, ALL) (new)
* Import/export of drumkits (new)
Changes:
* Delay FX
* Bug fix in the ALSA MIDI driver
* Assignable JACK ports in the preferences file
* Assignable MIDI-in channel (1..16, ALL)
* Drumkit support (load, save, import, export)
* Acoustic drumkit included
* Various GUI improvements
Happy drumming! ;)
--
Alessandro <Comix> Cominu
http://hydrogen.sf.net
e-mail: comix(a)despammed.com
Icq: 116354077
Linux User # 203765
[...Codito Ergo Sum...]
Hi all,
I'm currently working on Mess, a Buzz-like software studio written in
C++ on Linux, and have run into a somewhat critical problem (note: this
is my first "real" audio app): I'm using PortAudio as the audio API, and
its output buffer keeps underflowing because my callback function is too slow.
One of my goals is to let the user build a network of synths, samplers,
effects, etc. ("machines" in Buzz terminology). My initial idea for
implementing this was to associate a buffer and a process function with
each machine. If a machine only generates sound, its process function
simply fills the buffer; if it is an effect, the buffer contains the
effect's input, which is then overwritten by the effect's output. The
MachineManager class contains a processChain function responsible for
calling each machine's process function in the right order and for
copying and mixing the contents of the buffers according to the
connections in the network.
PortAudio's callback called processChain and then copied the contents of
the master machine's buffer (the master represents the output of the
network) to PortAudio's output buffer.
As a test I made a simple sine tone generating machine and connected it to the
master. This resulted in some weird noises when using small buffers and in
clear sine tones with pauses inbetween when using very large buffers.
In an attempt to let the callback do less work, I added an extra thread
together with a pair of buffers carrying read/write flags. The idea was
that the thread would continuously fill whichever buffer had its write
flag set, using processChain, then mark that buffer 'read' (if there was
nothing left to write, the thread would go to sleep). The callback given
to PortAudio would check, each time it ran, for a readable buffer, copy
it to its own output buffer, and set the just-read buffer's flag back to
'write', waking the processChain thread. This again resulted in some
weird noises (mostly because thread scheduling did not go quite the way
I expected).
The problem with my first method seems to be that copying all the
buffers makes the callback function too slow, but I don't immediately
see any other way of doing it (is there one?).
In retrospect the second way just seems wrong.
So my question right now is: how can I process a chain of machines in a fast
enough way?
Raf
hello again kids,
if you like python and libsndfile and want to use python with
libsndfile, you might try a preliminary set of bindings i put up at
http://www.arcsin.org/archive/20030520025359.shtml .
if you don't like python with/and libsndfile, then this has
likely been a waste of your time.
thanks,
rob
----
Robert Melby
#&)*&)!$_! !&$@*($(_)!#& !$&*(!@#$
uucp: ...!org!arcsin!rm
Internet: yes