I'm writing on behalf of a friend who's having trouble dealing with midi
latency in a soft synth (possibly yoshimi) ...
Given a jack period of N frames, the midi latency with the original code
effectively ranges from N frames to 2 * N frames, which I guess qualifies
it as jittery. So far my friend has tried a few things, but there's no
workable solution as yet.
What seemed most promising was to break the audio generation into smaller
blocks, applying pending midi events between blocks. Sadly, that drags the
creation and destruction of note objects into the realtime jack process
callback path. Latency improves, but the number of notes you can get
away with before it all falls in a heap is significantly reduced.
Getting the destruction of dead notes out of the realtime path is trivial,
not so the creation of new ones. Even with a pool of pre-allocated note
objects, it seems the amount of initialization code per note is still a
real limiting factor on how busy things can get before it all falls apart.
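The sub-block idea can be sketched as follows. This is a minimal illustration with invented stand-in types and hooks (midi_event, synth_apply_event, synth_render are assumptions for the sketch, not Yoshimi's actual API):

```c
#include <stddef.h>

#define SUBBLOCK 64  /* frames per sub-block; a tuning choice */

/* A pending MIDI event with its offset (in frames) into the period. */
typedef struct { size_t frame; int note; int on; } midi_event;

/* Hypothetical synth state and hooks, standing in for the real engine. */
typedef struct { int active_note; } synth;

static void synth_apply_event(synth *s, const midi_event *e)
{
    s->active_note = e->on ? e->note : -1;
}

static void synth_render(synth *s, float *out, size_t frames)
{
    for (size_t i = 0; i < frames; i++)
        out[i] = (s->active_note >= 0) ? 1.0f : 0.0f; /* placeholder DSP */
}

/* Render one JACK period in SUBBLOCK-sized chunks, applying each pending
 * event at the start of the sub-block containing it.  Worst-case event
 * latency drops from one full period to one sub-block, at the cost of
 * doing per-event work inside the realtime callback. */
void process_period(synth *s, float *out, size_t nframes,
                    const midi_event *ev, size_t nev)
{
    size_t e = 0;
    for (size_t pos = 0; pos < nframes; pos += SUBBLOCK) {
        size_t len = (nframes - pos < SUBBLOCK) ? nframes - pos : SUBBLOCK;
        while (e < nev && ev[e].frame < pos + len)
            synth_apply_event(s, &ev[e++]);
        synth_render(s, out + pos, len);
    }
}
```

Events are quantized to sub-block boundaries here; finer timing would mean even smaller blocks and yet more work in the callback.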
Such is life (for my "friend").
cheers, Cal
Hallo,
I am new here, so I hope this is the right place to talk about what I
want to do.
I want to make a CUDA implementation of the algorithms from the
calf-plugins. On the front end there should be a button (or something
similar) to (de)activate the CUDA support. I have already written a
JACK program (which makes some simple changes to audio data) using
CUDA. It works well, and at first sight the performance looks
promising.
I have read part of the mailing list archive and found that there has
already been a discussion about audio processing with CUDA. I know
there are some reasons against using CUDA, such as having to use the
proprietary Nvidia driver, the limitation that only people who have an
Nvidia card will benefit, and so on. But a CUDA implementation may show
what performance can be reached, and may be useful for Nvidia users
immediately.
I know there is OpenCL, but it is not as mature as CUDA at the moment,
will likely have lower performance than CUDA, and I do not have the
time to learn OpenCL right now (the project has to be finished soon).
I have heard it is not too much work to port existing CUDA code to
OpenCL later (assuming there is already an OpenCL equivalent for all
the CUDA functions that were used).
So I want to do this with CUDA.
At the moment I have some questions:
1. Has anybody already done, or is anybody doing, something like this?
2. Where can I get information about making specific changes to the
calf code? (I have examined it a bit, but it will take time to
understand the structure of the program from the code alone; the GUI
part in particular seems rather complex in design.)
It would be nice if I could get some help here.
Regards
Max Tandetzky
http://www.youtube.com/watch?v=AoAOx97G8ew
http://www.gizmag.com/roger-linn-linnstrument-digital-music-interface/15155/
........
Sadly, that's unlikely to happen anytime soon - because the TouchCo
multitouch pad that Linn used in the production of his prototype has
been withdrawn from production. Apparently Amazon bought up the
technology earlier this year (
http://www.nytimes.com/2010/02/04/technology/04amazon.html?_r=1 ) with
a view to using it in the Kindle eBook reader, but has completely
shelved it and shut down the TouchCo operation, presumably due to the
ongoing Intellectual Property chest-beating, suing and counter-suing
going on in the multitouch arena right now.
So the TouchCo website has nothing but a sad placeholder to offer (
http://touchco.com/ ), and Linn has nothing but his pre-production
prototype to work with, ruling out the possibility of a LinnStrument
hitting the market in the immediate future.
........
Could the meego touch API support the features needed by the
instrument used in the above Youtube Video of Roger Linn? Some of the
swipes and other gestures demonstrated in the video are available in
http://apidocs.meego.com/mtf/gestures.html but an important one isn't:
multitouch pressure sensitivity that is equivalent to "polyphonic
aftertouch" on a MIDI keyboard -- allowing analog pressure readings to
be taken continuously and simultaneously per touch.
Is there interest in making such devices part of the "use case" for meego touch?
And are there any such displays and touch sensors available for use
with MeeGo-capable hardware such as the beagleboard?
I imagine one could extrapolate touch pressure by dynamically looking
at how much area each touch covers (assuming a normal human fingertip,
which deforms and covers more surface area as more pressure is
applied).
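That extrapolation could be as simple as mapping the reported contact-ellipse area onto a normalized range. A sketch with invented names and calibration constants (nothing here is part of any real touch API):

```c
/* pi as a float constant, avoiding the non-standard M_PI */
static const float PI_F = 3.14159265f;

/* Estimate a normalized pressure value (0..1) from the contact ellipse a
 * touch sensor reports.  area_min and area_max are per-user calibration
 * constants: the fingertip area at the lightest and heaviest touch. */
float pressure_from_touch(float major_mm, float minor_mm,
                          float area_min, float area_max)
{
    float area = PI_F * (major_mm * 0.5f) * (minor_mm * 0.5f);
    if (area <= area_min) return 0.0f;
    if (area >= area_max) return 1.0f;
    return (area - area_min) / (area_max - area_min);
}
```

This would only ever be a rough proxy, since fingertip stiffness varies from person to person, hence the calibration constants.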
Niels
http://nielsmayer.com
Hey guys!
I have two questions.
1. How does SoundStretch work? It is incredible the way it can produce
a tone which has no noticeable vibrations, just a wall of sound. How is
that accomplished, in layman's terms if possible :)
2. Can this program be jackified, and is that a lot of work?
--
Louigi Verona
http://www.louigiverona.ru/
First of all, my apologies to all for x-posting of the original
matter--I did not realize there was such a major overlap in user base on
pd-list and pd-dev lists making my x-post truly redundant.
> since rPars can't be used by any other thread, you need to make a copy
> for each thread.
This must be it! You are absolutely right: there is no guarantee rPars
won't be destroyed (at the end of the constructor function) before the
worker thread is properly instantiated. FWIW, instead of creating a
copy of rPars, I've actually gone with Robin's suggestion to use
sched_yield() and a wait condition which is cleared once the worker
thread has spawned, ensuring it gets the necessary data from rPars
before it is destroyed, as follows:
void *pd_cwiid_pthreadForAudioUnfriendlyOperations(void *ptr)
{
    threadedFunctionParams *rPars = (threadedFunctionParams*)ptr;
    t_wiimote *x = rPars->wiimote;
    t_float local_led = 0;
    t_float local_rumble = 0;
    unsigned char local_rpt_mode = x->rpt_mode;

    while(x->unsafe > -1) {
        pthread_mutex_lock(&x->unsafe_mutex);
        if ((local_led == x->led) && (local_rumble == x->rumble) &&
            (local_rpt_mode == x->rpt_mode)) {
            if (x->unsafe) x->unsafe = 0; // signal that the thread init is complete
            pthread_cond_wait(&x->unsafe_cond, &x->unsafe_mutex);
        }
        //snip

static void *pd_cwiid_new(t_symbol* s, int argc, t_atom *argv)
{
    //snip
    // spawn threads for actions known to cause sample drop-outs
    threadedFunctionParams rPars;
    rPars.wiimote = x;
    pthread_mutex_init(&x->unsafe_mutex, NULL);
    pthread_cond_init(&x->unsafe_cond, NULL);
    pthread_create(&x->unsafe_t, NULL, (void *)
        &pd_cwiid_pthreadForAudioUnfriendlyOperations, (void *)&rPars);
    // wait until the other thread has properly initialized, so that
    // rPars does not get destroyed before the thread has gotten its
    // pointer information
    while(x->unsafe) {
        // must use as many yields as necessary, as there is no
        // guarantee that one will be enough; also, on Linux use
        // sched_yield rather than pthread_yield
        sched_yield();
    }
    //snip
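For comparison, the per-thread-copy approach suggested above can be sketched like this. The parameter struct and the global are simplified stand-ins for illustration, not the external's real code; the point is that the worker takes ownership of a heap copy and frees it itself, so no handshake is needed:

```c
#include <pthread.h>
#include <stdlib.h>

/* Simplified stand-in for the external's parameter struct. */
typedef struct { int wiimote_id; } threadedFunctionParams;

static int g_seen_id = -1; /* for illustration only */

static void *worker(void *ptr)
{
    /* Take ownership of the heap-allocated copy and free it here, so
     * the constructor can return immediately without a lifetime race. */
    threadedFunctionParams *p = ptr;
    g_seen_id = p->wiimote_id;
    free(p);
    return NULL;
}

/* Constructor side: copy the parameters to the heap before spawning. */
pthread_t spawn_with_copy(const threadedFunctionParams *src)
{
    pthread_t t;
    threadedFunctionParams *copy = malloc(sizeof *copy);
    *copy = *src; /* struct copy; valid even after src goes out of scope */
    pthread_create(&t, NULL, worker, copy);
    return t;
}
```

The trade-off versus the sched_yield() loop is one small allocation per spawn, in exchange for no spinning in the constructor.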
Many thanks all for your help on this one! Hopefully the existence of
this thread will help others who may be looking for similar solutions.
Best wishes,
Ico
Hi all,
I am wondering if anyone can shed some light on the following
predicament. I am by no means a multi-threading guru so any insight
would be most appreciated.
The following are relevant excerpts from the code of an external. AFAIK
the external initializes a mutex and condition variable and spawns a
secondary worker thread that deals with audio-unfriendly (xrun-causing)
write operations to the wiimote; on destruction it terminates the
thread, waits for it to join, and then destroys the mutex.
Now, if I add a bit of usleep right after the thread has been spawned
in the constructor (as included below), the external seems very stable
(e.g. cutting and pasting it as fast as the keyboard allows, in other
words constructing and destructing instances of it as fast as possible,
does not result in a crash). Yet when one does not use usleep right
after spawning the secondary (worker) thread in the constructor, the
whole thing is very crash-prone, almost as if the spawning of the
thread does not go well unless given adequate time to get things into
sync. This makes no sense to me, since as I understand it the
constructor does not move ahead until pthread_create returns a value
(which in this case I am not bothering to check). Curiously, when not
using usleep, a crash may occur right at creation time, at any point
while the object exists, or even as late as during its destruction. Any
ideas?
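For what it's worth, the usual portable way to close this kind of startup window, instead of sleeping, is a condition-variable handshake in which the spawner blocks until the worker has copied what it needs. A minimal sketch with invented names (not the external's actual code):

```c
#include <pthread.h>

typedef struct {
    int value;              /* the data the worker must copy out */
    int ready;              /* set by the worker once it has done so */
    pthread_mutex_t lock;
    pthread_cond_t cond;
} start_params;

static int g_copied = 0;    /* for illustration only */

static void *worker(void *ptr)
{
    start_params *p = ptr;
    pthread_mutex_lock(&p->lock);
    g_copied = p->value;    /* copy out everything we need ... */
    p->ready = 1;           /* ... then tell the spawner it may return */
    pthread_cond_signal(&p->cond);
    pthread_mutex_unlock(&p->lock);
    return NULL;
}

/* Spawn the worker and block until it has read the parameters, so the
 * caller may safely let them go out of scope afterwards. */
pthread_t spawn_and_wait(int value)
{
    start_params p = { value, 0,
                       PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };
    pthread_t t;
    pthread_create(&t, NULL, worker, &p);
    pthread_mutex_lock(&p.lock);
    while (!p.ready)        /* loop guards against spurious wakeups */
        pthread_cond_wait(&p.cond, &p.lock);
    pthread_mutex_unlock(&p.lock);
    return t;
}
```

Unlike usleep, this waits exactly as long as necessary and cannot be defeated by an unlucky scheduling delay.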
P.S. I am also including the entire file for those interested in trying
it out.
Best wishes,
Ico
Relevant excerpts (in random order and incomplete, for the sake of
legibility):
//struct defining the object
typedef struct _wiimote
{
    t_object x_obj; // standard pd object (must be first in struct)
    ...
    // separate thread for actions known to cause sample drop-outs
    pthread_t unsafe_t;
    pthread_mutex_t unsafe_mutex;
    pthread_cond_t unsafe_cond;
    t_float unsafe;
    ...
    t_float led;
    ...
} t_wiimote;
//constructor
static void *pd_cwiid_new(t_symbol* s, int argc, t_atom *argv)
{
    ...
    x->led = 0;
    // spawn threads for actions known to cause sample drop-outs
    threadedFunctionParams rPars;
    rPars.wiimote = x;
    pthread_mutex_init(&x->unsafe_mutex, NULL);
    pthread_cond_init(&x->unsafe_cond, NULL);
    pthread_create(&x->unsafe_t, NULL, (void *)
        &pd_cwiid_pthreadForAudioUnfriendlyOperations, (void *)&rPars);
    // WHY IS THIS NECESSARY? I thought that the pthread_create call
    // would first finish spawning the thread before proceeding
    usleep(100); // allow thread to sync (is there a better way to do this?)
    ...
}
//destructor
static void pd_cwiid_free(t_wiimote* x)
{
    if (x->connected) {
        // this has nothing to do with the thread; it disconnects the wiimote
        pd_cwiid_doDisconnect(x);
    }
    x->unsafe = -1; // allow the secondary thread to exit its while loop
    pthread_mutex_lock(&x->unsafe_mutex);
    pthread_cond_signal(&x->unsafe_cond);
    pthread_mutex_unlock(&x->unsafe_mutex);
    pthread_join(x->unsafe_t, NULL);
    pthread_mutex_destroy(&x->unsafe_mutex);
    ...
}
//worker thread
void pd_cwiid_pthreadForAudioUnfriendlyOperations(void *ptr)
{
    threadedFunctionParams *rPars = (threadedFunctionParams*)ptr;
    t_wiimote *x = rPars->wiimote;
    t_float local_led = 0;
    t_float local_rumble = 0;
    unsigned char local_rpt_mode = x->rpt_mode;

    while(x->unsafe > -1) {
        pthread_mutex_lock(&x->unsafe_mutex);
        if ((local_led == x->led) && (local_rumble == x->rumble) &&
            (local_rpt_mode == x->rpt_mode)) {
            pthread_cond_wait(&x->unsafe_cond, &x->unsafe_mutex);
        }
        if (local_led != x->led) {
            local_led = x->led;
            //do something
        }
        if (local_rumble != x->rumble) {
            local_rumble = x->rumble;
            //do something else
        }
        ...
        pthread_mutex_unlock(&x->unsafe_mutex);
    }
    pthread_exit(0);
}
//an example of how the thread is affected by the main thread
void pd_cwiid_setLED(t_wiimote *x, t_floatarg f)
{
    if (x->connected) {
        x->led = f;
        pthread_mutex_lock(&x->unsafe_mutex);
        pthread_cond_signal(&x->unsafe_cond);
        pthread_mutex_unlock(&x->unsafe_mutex);
    }
}
Dear Linux Audio developer, user, composer, musician, philosopher
and anyone else interested, you are invited to the...
Linux Audio Conference 2011
The Open Source Music and Audio Software Conference
May 6-8 2011
Music Department, National University of Ireland, Maynooth
Maynooth, Co.Kildare, Ireland
http://music.nuim.ie
As in previous years, we will have a full programme of talks,
workshops and music.
Two calls will be issued, a Call for Papers (see below) and Call for
Music (soon to be announced).
Further information can be found on the LAC2011 website (under
construction).
================ CALL FOR PAPERS =================
Papers on the following categories (but not limited to them) are now
invited for submission:
* Ambisonics
* Education
* Live performance
* Audio Hardware Support
* Signal Processing
* Music Composition
* Audio Languages
* Sound Synthesis
* Audio Plugins
* MIDI
* Music Production
* Linux Kernel
* Physical Computing
* Interface Design
* Linux Distributions
* Networked Audio
* Video
* Games
* Media Art
* Licensing
We very much welcome practical papers and software demos ("how I use
Linux Audio applications to create my music/media art").
Paper length: 4-8 pages, with abstract (50-100 words) and up to 5
keywords.
Language: English.
The copyright of the paper remains with the author, but we reserve the
right to create printed proceedings from all submitted (and accepted)
papers.
IMPORTANT DATES:
Submission deadline: 15 January 2011
Notification of acceptance: 7 March 2011
Camera-ready papers: 1 April 2011
Queries: Victor Lazzarini, NUI Maynooth (victor.lazzarini(a)nuim.ie)
if there was a standard that described the expected behavior of
commonly used (or just useful) knob control methods, perhaps in the
form of a short draft specification on freedesktop.org, would people
(i.e. developers) use it?
expectations and requirements clearly differ, so we would probably need
to gather together and discuss the methods in use, with a mind to
removing or combining similar ones, explicitly naming them and defining
their behavior in the abstract. if we required that applications
implement a small set of methods as configurable options (as a minimum)
in order to comply, then we might eventually see the back of the
unpredictable mess we have now. it wouldn't matter how they are
configured, be it gconf, dotfile, command line, prefs dialog, env
variable.. as long as it could be done.
worth thinking about, or is it just too niche? are the actual real
world applications too varied and specific to make it useful?
for example, it would certainly be nice to crack open a library config
or application prefs window, assign VLMC (vertical linear motion
control*) to the left mouse button and RMC (radial motion control*) to
shift + left mouse button, and have those conform to the expected
standards. or even just to set KNOB_CONTROL_METHOD=RMC in your global
environment and have all the adherent applications just do what you
expect.
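as a tiny illustration of the env-variable route (all names here are invented, per the footnote), an adherent application might pick its default method like this:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical named control methods from the proposed spec. */
typedef enum { KNOB_VLMC, KNOB_RMC } knob_method;

/* Choose the default knob control method from the (hypothetical)
 * KNOB_CONTROL_METHOD environment variable, falling back to VLMC. */
knob_method knob_default_method(void)
{
    const char *m = getenv("KNOB_CONTROL_METHOD");
    if (m && strcmp(m, "RMC") == 0)
        return KNOB_RMC;
    return KNOB_VLMC;
}
```

a real spec would of course enumerate more methods and define the fallback order, but the lookup itself is this trivial.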
it seems to me that standards (of the mutually agreed rather than
officially sanctioned variety, since the latter is impractical) provide
the best means to bring about common behavior in sovereign systems.
naturally it would be completely platform independent.
just having named control methods with explicitly defined behavior may
help matters too.
cheers,
pete.
*crappy names i'm sure, but you get the idea.
WRT the recent discussion about pixmap knob widgets and theme
conformance (which i can't reply to since i wasn't on the list at the
time, sorry):
there are a couple of ways that you might achieve this.
the crux gtk theme engine includes some pixmap recolouring code (or
used to, at any rate). it recolours areas of a pixmap that contain only
green values to a colour specified in the gtkrc. this might conceivably
be stolen and incorporated to provide some measure of theme conformance
for pixmap based widgets (knobs, wheels and potentially sliders).
this method places specific constraints on the source pixmaps used,
constraints that are easily adhered to when creating pixel art (which
the crux pixmaps were), but procedurally generated rasters (vector or
3d renders) of the kind that are likely to be used with a pixmap widget
might pose more of a problem, since lighting and anti-aliasing probably
introduce a variety of colours.
still, i expect you could get something to work with a bit of effort
(imagemagick or script-fu in the gimp may help there).
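the green-only recolouring amounts to something like this per pixel (a sketch assuming 8-bit RGB and that "only green values" means r == b == 0; not the actual crux code):

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } rgb;

/* Recolour any pixel that contains only green (r == b == 0) to the
 * theme colour, using the green value as an intensity scale.  All other
 * pixels are left untouched. */
rgb recolour_pixel(rgb px, rgb theme)
{
    if (px.r == 0 && px.b == 0 && px.g > 0) {
        px.r = (uint8_t)(theme.r * px.g / 255);
        px.b = (uint8_t)(theme.b * px.g / 255);
        px.g = (uint8_t)(theme.g * px.g / 255); /* overwrite g last */
    }
    return px;
}
```

this is also why anti-aliased renders break the scheme: edge pixels pick up nonzero red or blue and fall through untouched.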
another possibility, briefly discussed for use with phat, was a
composite widget with different layers that could be drawn separately,
one on top of the other. e.g. render an SVG or pixmap as a background
on the first pass, then draw something with cairo (a value
indicator..) on top.
there are obvious limits to what you can achieve with this kind of
thing, but you could get some complex effects on a knob while still
maintaining procedural control over the size, colour and shape of the
vector elements (tick marks around the knob, value indicator size and
colour, etc). IIRC we discarded the idea due to its complexity. we
wanted a generally configurable knob, and the vector elements would
need anything from extensive widget options right up to a full blown
markup language to describe them (not a problem for app specific
widgets).
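the layering idea reduces to the usual "over" composite of each layer onto the one beneath. a per-pixel sketch, with plain buffers standing in for cairo surfaces (assumed simplification: the background is opaque):

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } rgba;

/* Composite a (possibly translucent) foreground pixel over an opaque
 * background pixel: the standard "over" operator. */
rgba over(rgba fg, rgba bg)
{
    rgba out;
    out.r = (uint8_t)((fg.r * fg.a + bg.r * (255 - fg.a)) / 255);
    out.g = (uint8_t)((fg.g * fg.a + bg.g * (255 - fg.a)) / 255);
    out.b = (uint8_t)((fg.b * fg.a + bg.b * (255 - fg.a)) / 255);
    out.a = 255;
    return out;
}

/* Draw one layer (e.g. a value indicator) over the accumulated result
 * (e.g. the SVG/pixmap background rendered on an earlier pass). */
void composite_layer(rgba *dst, const rgba *layer, int npixels)
{
    for (int i = 0; i < npixels; i++)
        dst[i] = over(layer[i], dst[i]);
}
```

in the real widget cairo would do this compositing for you; the sketch just shows why the layers can be redrawn independently.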
cheers,
pete.