Greetings:
Once again I've updated the Linux soundapps sites. All sites are
current and can be accessed via these URLs:
http://linux-sound.org (USA)
http://www.linuxsound.at (Europe)
http://linuxsound.jp/ (Japan)
Many thanks to Frank Barknecht for his assistance with linuxsound.at.
Many thanks also to all my site providers: the mirrors have been donated
by their respective owners as a service to the community, for which I am
most grateful.
Enjoy!
Best regards,
== Dave Phillips
The Book Of Linux Music & Sound at http://www.nostarch.com/lms.htm
The Linux Soundapps Site at http://linux-sound.org
Thanks a lot, Martin and Paul, for the useful hints.
My questions:
Is sync-start on multiple Delta 1010 cards impossible simply because
the hardware lacks the ability to do it? The M-Audio site/manual talks about
"sample accurate sync between multiple cards", but I guess this means
only that once started the clocks do not drift; there could still be
a small offset between the individual channels.
Assuming there is a small offset when starting the cards, one could
also assume that (if you run the audio init code SCHED_FIFO) it would
be quite small (on the order of a few samples).
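A quick back-of-the-envelope sketch of that assumption (the 100 microsecond start skew below is my guess at a plausible worst case under SCHED_FIFO, not a measured figure):

```c
/* Rough estimate of the start offset in samples when the second card's
   start call lags the first by a given skew. The 100 us skew used in
   the comment below is an assumption, not a measurement. */
double start_offset_samples(double skew_seconds, double rate_hz)
{
    return skew_seconds * rate_hz;
}
```

At 44100 Hz a 100 us skew comes out to about 4.4 samples, which matches the "a few samples" guess above.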
This means that if you use this approach:
    while (1) {
        snd_pcm_write_to_card_1();   /* pseudocode: write one period to card 1 */
        snd_pcm_write_to_card_2();   /* then the same period to card 2 */
    }
the small start offset would mean that one of the two cards' audio buffers
is a bit less filled than the other, and, as Martin said, as long as you
do not need sample-accurate sync start it would work fine.
Paul: sorry, I did not know that ALSA allows you to treat two cards as
a single logical card. As I said, it has been about two years since I
last used the ALSA API, and many things have changed since then.
Are there online resources available that describe how to do this
card linking? Are the cards then started in sync (or nearly in sync
for cards that do not support sync start)?
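In case it helps until someone posts a proper pointer: ALSA's `multi` PCM plugin in ~/.asoundrc is one way to present two cards as a single logical device. A sketch follows; `hw:0,0` and `hw:1,0` are placeholders for the actual cards, and note that the plugin interleaves the streams in software rather than guaranteeing hardware sync start:

```
# ~/.asoundrc sketch: combine two stereo cards into one 4-channel PCM.
pcm.twocards {
    type multi
    slaves.a.pcm "hw:0,0"      # first card (placeholder device name)
    slaves.a.channels 2
    slaves.b.pcm "hw:1,0"      # second card (placeholder device name)
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}
```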
BTW: which kinds of cards support multi-card sync start? RME?
Regarding the S/PDIF cable between the two cards: can I use a common
mono cinch (RCA) cable, or must an S/PDIF cable be shielded and/or
have a precise impedance?
Note: in my case I do not need sample-accurate sync, but I was just curious
whether it would be possible and/or whether a single-threaded audio app
could experience problems in the presence of playback start offsets.
I searched the net for postings or notes about sync start
with the Delta 1010, but I haven't found any except this
(an article from 2000):
".... Midiman UK told me that the current drivers can already keep four cards in
perfect sync, but there are some fixed offsets between them; this will be
overcome in a future driver release. ...."
see here:
http://www.sospubs.co.uk/sos/jan00/articles/midiman1010.htm
So I was wondering what "overcome" means in this case: starting
the two cards as close together as possible to minimize offsets, or
using some adaptive algorithm that (assuming the hardware allows it)
measures the DMA pointer offsets and adjusts for them.
Again, thanks for the useful info.
cheers,
Benno
Hi,
If someone would like to take a quad chorus I made and make it a LADSPA
plugin feel free.
The source code is at www.sourceforge.net/projects/audiostar
I had an implementation of a stereophonic-to-quadraphonic matrix processor,
but I don't think the phase is correct, and it isn't truly 4-channel either.
If you had four channels, you would calculate the left rear channel as
90 degrees phase-shifted and then sum it into the formula.
The quadraphonic processor still does make an interesting surround sound
effect even though it is not technically correct.
The algorithm is from an electronics book; the formulas were given as
part of the circuit design. If someone is interested I can post those
formulas for implementation, but quadraphonicprocessor.cpp already does
essentially that: the left rear channel is phase-shifted 90 degrees and
mixed into the stereophonic stream. Since it operates on stereo channels
only, I keep a copy of the left channel about n samples behind and mix
it in slightly to achieve something like a phase effect. It still
actually sounds like surround sound.
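The delayed-left trick described above can be sketched roughly like this (a minimal sketch; the names, the 64-sample delay, and the mix amount are mine, not taken from quadraphonicprocessor.cpp):

```c
#include <stddef.h>

#define DELAY 64  /* "about n samples behind"; 64 is an arbitrary choice */

static float history[DELAY];  /* circular buffer of past left samples */
static size_t pos = 0;

/* Blend a copy of the left channel from DELAY frames ago into both
   outputs, giving a crude pseudo-surround / phasing feel on a plain
   stereo stream. */
void pseudo_surround(float *left, float *right, size_t frames, float mix)
{
    for (size_t i = 0; i < frames; i++) {
        float delayed = history[pos];  /* left sample DELAY frames ago */
        history[pos] = left[i];
        pos = (pos + 1) % DELAY;
        left[i]  += mix * delayed;
        right[i] += mix * delayed;
    }
}
```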
The Quad Chorus is just a stereo chorus with two additional taps.
This is the ultimate in fattening up analog waveforms, but the
implementation suffers from swishing if you set the LFO rate too high.
It also does a deep flange if you turn the rate all the way down and
the feedback up. I had it go through the quadraphonic processor, but
with the way I do stereo on my new synth it didn't sound right, so I
removed it.
I'd make it a LADSPA plugin but I don't know yet how to make them.
I have another chorus design but it is not real time. It is a six tap
chorus, passed through a phaser, and then put through an all pass with a
variable delay length setting. This can make echo effects. It was
actually really cool sounding.
I also had an FFT chorus that mixed pitch-shifted copies of the signal
onto the stream, as suggested in The Computer Music Tutorial. I am
working on getting it working with FFTW.
Hmm, oh, I also had one funny chorus that also did not work well in
real time. I took the idea of the QuadraFuzz pedal: split the stream into
four (or more) bandpass filters (with resonance) and then put them through
a chorus effect. Then I remixed them through comb filters to give it a
reverb feel. That also sounded pretty cool, but I had too hard a time
getting it to work in real time.
If someone could take those designs and make them real-time, they are
REALLY funky and people would like them.
Just some ideas you can experiment with.
--
Nick <nicktsocanos(a)charter.net>
I read:
> Any assistance will be vastly appreciated.
I'm just trying to contact the people I know of who are still involved in
vbs/atnet; I hope someone will contact you soon.
regards,
x
--
chris(a)lo-res.org Postmodernism is german romanticism with better
http://pilot.fm/ special effects. (Jeff Keuss / via ctheory.com)
Greetings:
I'm writing to the lists in the hope of finding someone who can advise
me on contacting someone in charge of www.linuxsound.at in Austria. The
former contact was Georg Hitsch, I've written to him but as yet have
received no reply. The site is still on-line, but for some reason I can
no longer log in to update it. Thus only the US and Japanese sites have
been updated recently.
Any assistance will be vastly appreciated.
Best regards,
== Dave Phillips
The Book Of Linux Music & Sound at http://www.nostarch.com/lms.htm
The Linux Soundapps Site at http://linux-sound.org
Currently listening to: King Sunny Ade, "Ja Funmi"
While reading over the LADSPA v1.1 header, I noticed that hint number 0x2C0
is defined as follows:
/* This default hint indicates that the Hz frequency of `concert A'
should be used. This will be 440 unless the host uses an unusual
tuning convention, in which case it may be within a few Hz. */
#define LADSPA_HINT_DEFAULT_440 0x2C0
These "LADSPA_HINT_DEFAULT_..." hints tell a host how to set the values
for a port when the user doesn't supply any values.
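For host authors, checking this hint is nearly a one-liner. A sketch follows; the two constants are copied from ladspa.h v1.1, while the function and its fallback behaviour are my own simplification:

```c
/* Constants from ladspa.h v1.1: DEFAULT_MASK selects the default-hint
   bits out of the port's full HintDescriptor. */
#define LADSPA_HINT_DEFAULT_MASK 0x3C0
#define LADSPA_HINT_DEFAULT_440  0x2C0

/* Sketch: pick the default value for a port whose hints carry
   DEFAULT_440; concert_a is the host's tuning convention, normally
   440.0. A real host would handle the other DEFAULT_* hints too. */
float port_default(unsigned long hints, float concert_a)
{
    if ((hints & LADSPA_HINT_DEFAULT_MASK) == LADSPA_HINT_DEFAULT_440)
        return concert_a;
    return 0.0f;
}
```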
But then it hit me: what kind of port could have a default defined as
440 Hz, concert A? Clearly, the only reasonable use for a port of this type
is to control the pitch of an instrument-type object. Any LADSPA plugin that
has a port with the DEFAULT_440 hint set, be it a true synthesis engine with
no audio input ports or some sort of tonal filter, can be used to generate
coherent melodic output (i.e. tunes, songs, melodies, pitched vocalizations...).
This is important because it means that we can implement _right_now_
ladspa instruments that can be used without the user having to hook up
control A to port B or generally know what the hell they're doing. In fact,
the sine oscillator plugin, provided by Richard Furse as an example plugin
for the ladspa sdk, is a reference example that host programmers can code
off of _right_now_. Just remember that each ladspa plugin instance is a
single voice, so polyphony must be implemented in the host. Fortunately,
this is not that hard(*).
The DEFAULT_440 hint took me by surprise because there aren't many ladspa
virtual synths out there. With the use of this hint, coding a synth is about
as simple as we would expect from ladspa: very. Many projects with
integrated softsynths could put some of their core dsp routines into ladspa
plugins and start seeing immediate benefits because the widespread use of
the ladspa format facilitates sharing plugins between apps. Once again, this
is all completely possible _right_now_ with the current version of ladspa.
What's even better is that ladspa instruments will be automatically
ported to the new XAP plugin format when it becomes available. This is
because one of the first XAP plugins will be a plugin to "wrap" LADSPA
plugins inside an XAP plugin. Plugin developers will then be able to choose
whether to fully move to the new plugin architecture, continue using ladspa,
or, (isn't linux great:) both.
For the record, the XAP standard has settled on using linear pitch
instead of pitch in Hz. Linear pitch is defined as a floating-point value
with 0 = 440 Hz and a scale of 1.0/octave (for example: 440 Hz = 0 lin
pitch, 220 Hz = -1, 110 Hz = -2, 55 Hz = -3). As you can see, the mapping
between linear pitch and Hz is one-to-one and invertible, so translation
between the two is not much of an issue.
Remember that each ladspa plugin instance is only ONE voice in a possibly
polyphonous instrument. This means that plugin authors do not need to worry
about the MIDI standard (voice_on, voice_off) at all if they don't want to.
Translating from midi to ladspa is one of the host's burdens. I have placed
a simple scheme for doing this at
http://soundtank.sourceforge.net/input_maps
---jacob..................
(*) to make a host use multiple ladspa plugin instances as a single
polyphonous instrument do the following:
-make a linked list of all the plugin instances as you activate them
-make a linked list called active_instances and one called
inactive_instances
-add all plugin instances to the inactive_instances list
-have your realtime thread move instances from inactive_instances to
active_instances when it activates them to handle a MIDI voice_on event, and
back when a MIDI voice_off arrives saying that the note is finished. I have
placed a simple scheme for handling input events at
http://soundtank.sourceforge.net/input_maps
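The list juggling in the footnote can be sketched in C roughly as follows (all names are mine; a real host would also store a LADSPA_Handle in each voice and call run() on everything in the active list each process cycle):

```c
#include <stddef.h>

/* Two-list voice allocation sketch. Voices are preallocated, so the
   realtime thread only relinks pointers; it never allocates. */
typedef struct voice {
    struct voice *next;
    int note;  /* MIDI note currently assigned, or -1 when free */
} voice;

static voice *active = NULL, *inactive = NULL;

static voice *pop(voice **list)          /* unlink the node at *list */
{
    voice *v = *list;
    if (v)
        *list = v->next;
    return v;
}

static void push(voice **list, voice *v) /* link v at the head */
{
    v->next = *list;
    *list = v;
}

void note_on(int note)
{
    voice *v = pop(&inactive);  /* grab a free voice */
    if (!v)
        return;                 /* voice limit reached: drop the note */
    v->note = note;
    push(&active, v);
}

void note_off(int note)
{
    voice **p = &active;
    while (*p && (*p)->note != note)
        p = &(*p)->next;
    if (*p) {
        voice *v = pop(p);      /* unlink the finished voice */
        v->note = -1;
        push(&inactive, v);
    }
}
```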
Hi all,
I've posted gwc-lib on:
http://gwc.sourceforge.net/
This is an EXTREMELY crude release, at least from the standpoint of
documentation (and it took *forever* to get this far), but I want
to allow for input into the API.
It so far only includes the denoising algorithm. I do plan to include
the declicking and decrackling algorithms.
Three things got me motivated:
1) Conrad Parker's initial query into getting the algorithms into
sweep, and subsequent private emails helping me understand many issues.
2) Erik de Castro Lopo's release of Libsamplerate
3) Paul Davis' great post (spurred by libsamplerate) on linux-audio-dev
about how sharing these basic algorithms in a library format has really
made it a better world for all of us:
http://eca.cx/lad/2003/01/0006.html
In theory, there are only two files you need to look at to start playing
around: gwc_lib.h, which describes the API (sort of), and denoise.c, which
is an example of how to use the algorithm. The basic idea is that you
initialize the algorithm with parameters, feed it noise samples via
buffers, then start using it by writing and reading buffers. There
appears to be a small performance hit from using the buffers, on the
order of 10%.
If you've a mind, please grab it and let me know how it goes. Fire back
ideas about the API. And, no, I'm not gonna LADSPA-ize it.
Cheers,
Jeff Welty
I'm trying John Hall's book Programming Linux Games, which recommends
using libsndfile to load sound files. Unfortunately, the API has changed
since the book was written, and I can't seem to get projects to compile
with it now. There don't appear to be any instructions on the library's
homepage about linking.
Hall just says to compile the project with the -lsndfile flag. This
doesn't seem to work now - the resulting project doesn't link the object
in. The library does seem to be correctly installed.
I found a message from the library's author
http://www.eca.cx/lad/2002/Mar/0311.html
which says to link the program with
gcc `sndfile-config --libs` file1.o file2.o -o program.
I don't seem to have the sndfile-config utility, which was supposed to
be installed along with the library.
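One thing worth checking (an assumption on my part, since I don't know which libsndfile version you have): newer libsndfile releases install a pkg-config file, sndfile.pc, which can stand in for sndfile-config:

```shell
# If sndfile-config is missing, try pkg-config (newer libsndfile
# releases install sndfile.pc):
gcc file1.o file2.o -o program `pkg-config --cflags --libs sndfile`

# If pkg-config cannot find sndfile, the .pc file may be under
# /usr/local (the default ./configure prefix):
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
```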
Running ./configure created makefiles in some of the subfolders in the
installation folder. I'm mainly interested in the examples folder, but I
get errors when I try to compile them with the makefile, assuming this
is what I am supposed to do.
I'd appreciate any advice.
Rohan Parkes
Melbourne
Australia
> > i remember i have read a statement about a lock free ringbuffer
> > implemented in C somewhere.
>
> courtesy of paul davis:
>
> you should use a lock free ringbuffer. we will be adding example code
> to the example-clients directory soon. existing code is in ardour's
> source base (for C++). the example code will be in
> example-clients/capture_client.c.
>
> where ardour is ardour.sf.net. (i doubt there is anything hugely non-C in
> the ringbuffer code proper).
maybe it's just me, but I can't find said file...?
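Until the example client lands, here is a minimal single-producer/single-consumer ring buffer in the same spirit (my own sketch, not the actual ardour code):

```c
#include <stddef.h>

/* SPSC lock-free ring buffer. SIZE must be a power of two so index
   wrapping is a cheap mask; one slot is kept empty to distinguish
   "full" from "empty". Plain volatile is only adequate under
   single-CPU / strongly-ordered assumptions; on SMP machines you
   would need real memory barriers around the index updates. */
#define SIZE 1024
#define MASK (SIZE - 1)

static float buf[SIZE];
static volatile size_t write_idx = 0;  /* advanced only by the producer */
static volatile size_t read_idx  = 0;  /* advanced only by the consumer */

/* Producer side: copies up to n samples, returns how many fit. */
size_t rb_write(const float *src, size_t n)
{
    size_t w = write_idx, r = read_idx;
    size_t free_space = (r - w - 1) & MASK;
    if (n > free_space)
        n = free_space;
    for (size_t i = 0; i < n; i++)
        buf[(w + i) & MASK] = src[i];
    write_idx = (w + n) & MASK;
    return n;
}

/* Consumer side: copies up to n samples, returns how many were read. */
size_t rb_read(float *dst, size_t n)
{
    size_t w = write_idx, r = read_idx;
    size_t avail = (w - r) & MASK;
    if (n > avail)
        n = avail;
    for (size_t i = 0; i < n; i++)
        dst[i] = buf[(r + i) & MASK];
    read_idx = (r + n) & MASK;
    return n;
}
```

The key property is that each index is written by exactly one thread, so neither side ever needs a lock.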