Hello all,
Would there be a simple way to import plugin
automation data into an Ardour session?
'Simple' may include writing some software, but
not any major project :-)
The situation I want to handle is this:
We have a Csound, SC3, or PD program generating
a number of audio signals, and also simple
OSC commands that would control external
processing of these signals, e.g. surround
panning or movement implemented by an
external system.
The signals each become a mono track in Ardour,
while the OSC controlled parameters are stored
as automation data for a LADSPA plugin in each
track. On playback, the plugin just sends this
data to the external application.
The whole point is to combine a number of such
compositions into a single Ardour session that
can be looped without any user intervention.
Ciao,
--
FA
Laboratorio di Acustica ed Elettroacustica
Parma, Italia
Leave the thorn, pick the rose.
On Sat, Oct 18, 2008 at 5:30 PM, Fons Adriaensen <fons(a)kokkinizita.net> wrote:
> On Sat, Oct 18, 2008 at 10:56:31PM +0200, Paul Davis wrote:
>
>> If read_ptr > size then the math should result in the write space being
>> *under* estimated. If that's not happening then my worst nightmares come
>> true, which has happened before. Is there a signed/unsigned issue going
>> on here?
>
> The only way to know is to verify all the possible cases in
> jack_ringbuffer_write_space() and jack_ringbuffer_read_space(),
> taking into account that the masking operation may not have
> been applied at the time these are called, and that 'the other'
> *_ptr could be >= size.
>
[Switched from LAU to LAD]
Forget I said anything about context switches. Those don't matter.
The concern here is SMP systems. The bug in the original code is the
+= operator used to increment the read and write pointers. That
operator generates an addl instruction, which is not atomic; it would
need a lock prefix on SMP. The reason Olivier's patch works is that
the increment is done on a temp and then assigned (using plain old =)
to the shared variable. The assignment *is* atomic (since size_t is 4
bytes and presumably aligned to a 4-byte boundary).
I can't prove this right now, but I think I'm correct. That means it
may be possible to make the original code SMP-safe without the
(subtle) change in semantics Olivier's patch makes: increment into a
temp, store it, mask the temp, store it again.
You can't write lock-free data structures without atomic operations.
While x86 doesn't need any memory barriers here (stores become visible
to the other CPUs in program order), other platforms may (someone
mentioned PowerPC). The code still needs to be patched for that; just
make the barriers no-ops on x86.
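To make that concrete, here is a small sketch of the three update
styles (this is not the actual jack_ringbuffer source; the struct and
field names are just stand-ins):

/* Illustrative only -- not the real <jack/ringbuffer.h> code.
 * Field names are assumed for the sketch. */
typedef struct {
    volatile size_t write_ptr;   /* advanced only by the writer thread */
    volatile size_t read_ptr;    /* advanced only by the reader thread */
    size_t          size;        /* a power of two */
    size_t          size_mask;   /* size - 1 */
} rb_t;

/* Original style: '+=' is a load/add/store sequence (addl on x86),
 * which is not atomic as a whole. */
static void advance_write_orig (rb_t *rb, size_t n)
{
    rb->write_ptr += n;                 /* read-modify-write */
    rb->write_ptr &= rb->size_mask;
}

/* Olivier's style: compute in a temp, publish with one plain store.
 * Only the masked value is ever stored, so write_ptr never exceeds
 * size -- the (subtle) semantic change mentioned above. */
static void advance_write_masked (rb_t *rb, size_t n)
{
    size_t tmp = (rb->write_ptr + n) & rb->size_mask;
    rb->write_ptr = tmp;
}

/* The alternative suggested above, keeping the original semantics:
 * two plain stores of values computed in a temp, so each store is a
 * single aligned write and never a read-modify-write. */
static void advance_write_two_stores (rb_t *rb, size_t n)
{
    size_t tmp = rb->write_ptr + n;
    rb->write_ptr = tmp;                /* may momentarily be >= size */
    tmp &= rb->size_mask;
    rb->write_ptr = tmp;
}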
La Casa della Musica invites all Linux Audio developers,
users, composers, musicians, philosophers and anyone
interested to
The Linux Audio Conference 2009
16-19 April 2009
La Casa della Musica
Palazzo Cusani
Parma, Italy
The LAC will go outside Germany for the first time, but
we will keep close to the familiar four-day format with
paper presentations, workshops, electro-acoustic music
concerts, and the Linux Sound Night.
The website is being created, and 'calls for everything'
will be issued before the end of this week.
The conference starts a few weeks later than the previous
one, which allows the deadlines for everything to be moved
as well. For the papers and music calls this will be
somewhere in mid-January, so you can use the end-of-year
holiday period to get creative.
We hope to see you all in Parma!
Fons Adriaensen, LAD
Francesca Montresor, CdM
Hi List, I had one more question,
if you'd bear with me... this is a different question from the last one.
I play an instrument called the jaw harp, which is played in this
video (not me!):
http://in.youtube.com/watch?v=rDdG97MesZM
Now I would like to feed this live audio into my box, do a bit of
morphing on the sound, and play the result back in real time. It
might even trigger some other sample from a C/C++ program. I don't
want to use one of the audio programming languages like Pd, ChucK,
SC, Csound etc. I want to build my own system using Linux's native
audio capabilities.
In your general opinion, with C/C++ as the language, what
configuration would be suitable? As in, which I/O API
(OSS/JACK/ALSA/PortAudio) and which audio library?
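To make the question concrete, here is roughly the shape of thing I
have in mind if JACK turns out to be the right choice -- a bare
pass-through client I sketched from the headers, where I assume the
morphing would go in the process callback:

/* Hypothetical minimal JACK pass-through client.
 * Compile with:  gcc -o thru thru.c -ljack */
#include <jack/jack.h>
#include <stdio.h>
#include <unistd.h>

static jack_port_t *in_port, *out_port;

/* Called by JACK from its realtime thread once per period. */
static int process (jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *in  = jack_port_get_buffer (in_port,  nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer (out_port, nframes);
    jack_nframes_t i;
    (void) arg;

    /* The 'morphing' would happen here; for now just copy input to output. */
    for (i = 0; i < nframes; i++)
        out [i] = in [i];

    return 0;
}

int main (void)
{
    jack_client_t *client = jack_client_open ("jawharp", JackNullOption, NULL);
    if (!client) {
        fprintf (stderr, "can't connect to JACK\n");
        return 1;
    }
    in_port  = jack_port_register (client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                   JackPortIsInput,  0);
    out_port = jack_port_register (client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                   JackPortIsOutput, 0);
    jack_set_process_callback (client, process, NULL);
    jack_activate (client);

    /* Connect the ports with qjackctl or jack_connect, then play. */
    while (1)
        sleep (1);
}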
-- sincerely,
------- -.-
1/f ))) --.
------- ...
http://www.algomantra.com
Hi,
This might be of interest to multimedia developers who want to transfer
audio and/or video data between processes...
libshmsg implements (optionally) zero-copy message passing on top of
libsharedmem. This is the very first release, so it's still lacking some
functionality and features.
Related tarballs:
http://sourceforge.net/project/platformdownload.php?group_id=171566
Project page:
http://sourceforge.net/projects/libsharedmem
Best regards,
- Jussi Laako
Geeks, tuxians and audioslaves!
I need some help here. I got a bit freaked when I saw that the
libsndfile example files contained a program nearly 1000 lines long
just to play a file. But perhaps it's intended for a higher level of
geekery. My needs are very modest.
My task: I just want to load a few samples (wav) which are represented
on a GUI (using SDL), and when they knock about on the screen, sound
(music) is generated using some rules. I need them to mix, of course.
I was using SDL_mixer for this, but my problem is that I want to write
(record) a whole session of the running program to a single wav file.
SDL_mixer does not seem to have any writing options.
Now, is there a way I can do this in some reasonably efficient manner
using libsndfile?
I could load the samples with libsndfile too, skipping SDL_mixer
altogether. It's already installed.
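For the writing side, I imagine something like the sketch below is
what libsndfile expects -- just my guess, assuming 16-bit stereo at
44.1 kHz and that I can get hold of the mixed blocks somewhere:

/* Sketch only: open a WAV for writing with libsndfile and append the
 * mixed blocks of the session as they are produced. */
#include <sndfile.h>
#include <stdio.h>

int main (void)
{
    SF_INFO  info = { 0 };
    SNDFILE *out;
    short    block [1024 * 2] = { 0 };   /* 1024 interleaved stereo frames */

    info.samplerate = 44100;
    info.channels   = 2;
    info.format     = SF_FORMAT_WAV | SF_FORMAT_PCM_16;

    out = sf_open ("session.wav", SFM_WRITE, &info);
    if (!out) {
        fprintf (stderr, "%s\n", sf_strerror (NULL));
        return 1;
    }

    /* In the real program this would run once per mixed block. */
    /* ... fill 'block' with the mixed output ... */
    sf_writef_short (out, block, 1024);  /* write 1024 frames */

    sf_close (out);
    return 0;
}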
What to do? Any tips would be appreciated.
Thanks,
------- -.-
1/f ))) --.
------- ...
http://www.algomantra.com
Hi,
I've just published a free program that can recognize notes recorded by
an audio input device in real time.
It's here: http://davidferaoun.free.fr/eric/zik/voice2midi.php
This may interest users (it can be used freely) and developers if they
want to put it into a bigger audio program.
There are two releases: a signed applet and a stand-alone version. Both
work on my PC, but sometimes it seems that the signed applet may cause
a browser crash.
The source code (in Java) is available for download.
(Btw: I use a Java FFT library from the online book
http://www.cs.princeton.edu/introcs/97data/FFT.java.html but I don't
know its license, sorry.)
Best regards,
Eric
Some questions and comments regarding the LASH release candidate.
1. lash_init()
Would it be possible to document the contents of
the first argument 'lash_args_t *args'?
I'd be happy to give lash all the info it needs,
but I feel quite strongly that I should be able
to provide this myself and not be forced to
have liblash inspect/mangle my argv. See also 2.
2. lash_extract_args()
The type of main()'s argv is char *argv[], so
why is the second argument a triple pointer?
Re-arranging the elements of argv does not
require access to the variable argv itself,
only to its elements, and for this a double
pointer is all you need (see the small sketch
after point 3).
3. The Restore_Data_Set event.
How does a client know when all configs saved
by a previous session have been received?
The client may have been updated in the meantime
and expect more configs than were saved. It
should have defaults for these of course,
but how long should it wait for data that
may never arrive?
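To illustrate point 2, here is a small sketch of the sort of thing I
mean -- the function and the option name are mine, not part of any
LASH API:

#include <string.h>

/* Remove recognised options from argv. A char ** is enough: we only
 * rearrange the element pointers, never the variable argv itself. */
void extract_args (int *argc, char **argv)
{
    int i, kept = 1;                     /* argv [0] always stays */

    for (i = 1; i < *argc; i++)
    {
        if (! strcmp (argv [i], "--example-opt"))
            continue;                    /* 'consume' the option */
        argv [kept++] = argv [i];        /* shift the rest down */
    }
    argv [kept] = NULL;
    *argc = kept;
}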
Ciao,
--
FA
Follies! Follies! This is vain delirium!
Hello everyone!
I have a couple of questions regarding this.
1. Does every soundcard have a clock?
2. Do sound architectures other than ALSA offer access to this clock?
3. Can the soundcard clock perhaps be accessed via simple device file access?
I have looked into OSS, but didn't find anything there directly referring
to clocks.
If OSS or other Unix sound drivers don't offer access to this resource, how
do they take care of accuracy?
Kindest regards and thanks in advance for any good advice
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
Hello!
I just read through the JACK transport section in the JACK reference again.
If you want to be timebase master you can (or must) supply a callback function
to update the time info.
Does my program need an audio thread (a process callback)? Which timing source
is used to update the time? Does my program provide it, or does JACK provide it
and my program only really kicks in when I want to relocate (jump to another
position)?
Am I correct in assuming that my code doesn't have to be multithreaded (or at
least not look multithreaded) if I want to use the JACK transport interface
(control and timebase)?
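From the headers, this is roughly what I picture a timebase master
looking like; please correct me if the sketch below is wrong (the
fixed 120 BPM, 4/4 numbers are only there for illustration):

/* Sketch of a timebase master, pieced together from <jack/transport.h>. */
#include <jack/jack.h>
#include <jack/transport.h>
#include <unistd.h>

/* Called by libjack from its own realtime thread on every cycle while
 * we are timebase master -- so my code never creates a thread itself. */
static void timebase (jack_transport_state_t state, jack_nframes_t nframes,
                      jack_position_t *pos, int new_pos, void *arg)
{
    const double bpm = 120.0, beats_per_bar = 4.0, ticks_per_beat = 1920.0;
    double minutes;
    long   tick;

    (void) state; (void) nframes; (void) new_pos; (void) arg;

    /* pos->frame and pos->frame_rate are filled in by JACK; the musical
     * position is derived from them. */
    minutes = (double) pos->frame / (pos->frame_rate * 60.0);
    tick    = (long) (minutes * bpm * ticks_per_beat);

    pos->valid            = JackPositionBBT;
    pos->beats_per_minute = bpm;
    pos->beats_per_bar    = beats_per_bar;
    pos->beat_type        = 4.0;
    pos->ticks_per_beat   = ticks_per_beat;
    pos->bar  = (int32_t) (tick / (long) (ticks_per_beat * beats_per_bar)) + 1;
    pos->beat = (int32_t) ((tick / (long) ticks_per_beat)
                           % (long) beats_per_bar) + 1;
    pos->tick = (int32_t) (tick % (long) ticks_per_beat);
    pos->bar_start_tick = (pos->bar - 1) * ticks_per_beat * beats_per_bar;
}

int main (void)
{
    jack_client_t *client = jack_client_open ("timebase-test", JackNullOption, NULL);
    if (!client)
        return 1;

    jack_set_timebase_callback (client, 0, timebase, NULL);
    jack_activate (client);

    /* Relocating is just a plain call from this (non-RT) thread. */
    jack_transport_locate (client, 0);
    jack_transport_start (client);

    while (1)
        sleep (1);
}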
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de