First of all, my apologies to all for cross-posting the original
matter--I did not realize there was such a major overlap in user base
between the pd-list and pd-dev lists, making my cross-post truly redundant.
> since rPars can't be used by any other thread, you need to make a copy
> for each thread.
This must be it! You are absolutely right: there is no guarantee that rPars
won't be destroyed (at the end of the constructor) before the worker
thread is properly instantiated. FWIW, instead of creating a copy of
rPars, I've gone with Robin's suggestion to use sched_yield() together
with a wait condition that is cleared once the worker thread has spawned,
ensuring it gets the necessary data from rPars before rPars is destroyed,
as follows:
void *pd_cwiid_pthreadForAudioUnfriendlyOperations(void *ptr)
{
    threadedFunctionParams *rPars = (threadedFunctionParams*)ptr;
    t_wiimote *x = rPars->wiimote;
    t_float local_led = 0;
    t_float local_rumble = 0;
    unsigned char local_rpt_mode = x->rpt_mode;

    while(x->unsafe > -1) {
        pthread_mutex_lock(&x->unsafe_mutex);
        if ((local_led == x->led) && (local_rumble == x->rumble) &&
            (local_rpt_mode == x->rpt_mode)) {
            if (x->unsafe) x->unsafe = 0; // signal that the thread init is complete
            pthread_cond_wait(&x->unsafe_cond, &x->unsafe_mutex);
        }
        //snip
static void *pd_cwiid_new(t_symbol* s, int argc, t_atom *argv)
{
    //snip
    // spawn threads for actions known to cause sample drop-outs
    threadedFunctionParams rPars;
    rPars.wiimote = x;
    pthread_mutex_init(&x->unsafe_mutex, NULL);
    pthread_cond_init(&x->unsafe_cond, NULL);
    pthread_create(&x->unsafe_t, NULL,
                   pd_cwiid_pthreadForAudioUnfriendlyOperations,
                   (void *) &rPars);
    // wait until the other thread has properly initialized so that
    // rPars does not get destroyed before the thread has gotten its
    // pointer information
    while(x->unsafe) {
        // yield as many times as necessary, as there is no guarantee
        // that a single yield will be enough; also, on Linux use
        // sched_yield() rather than pthread_yield()
        sched_yield();
    }
    //snip
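(For completeness, here is a minimal sketch of the copy-based alternative
quoted at the top, assuming the worker thread is made responsible for
freeing the block; the malloc/free placement is my assumption, not code
from this thread, and it requires stdlib.h.)

    // in the constructor, instead of a stack-allocated rPars:
    threadedFunctionParams *pPars = malloc(sizeof(threadedFunctionParams));
    pPars->wiimote = x;
    pthread_create(&x->unsafe_t, NULL,
                   pd_cwiid_pthreadForAudioUnfriendlyOperations, pPars);
    // no handshake needed: the heap copy outlives the constructor

    // at the top of the worker thread:
    threadedFunctionParams *rPars = (threadedFunctionParams*)ptr;
    t_wiimote *x = rPars->wiimote;
    free(rPars); // safe once the wiimote pointer has been copied out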
Many thanks to all for your help on this one! Hopefully this thread will
help others looking for similar solutions.
Best wishes,
Ico
Hi all,
I am wondering if anyone can shed some light on the following
predicament. I am by no means a multi-threading guru so any insight
would be most appreciated.
The following are relevant excerpts from the code of an external. The
external initializes a mutex and a condition variable and spawns a
secondary worker thread that deals with audio-unfriendly (xrun-causing)
write operations to the wiimote; when the object is destructed, it signals
the worker to terminate, waits for the thread to join, and then destroys
the mutex.
Now, if I add a bit of usleep right after the thread has been spawned in
the constructor (as included below), the external seems very stable (e.g.
cutting and pasting it as fast as the keyboard allows--in other words,
constructing and destructing instances of it as fast as possible--does
not result in a crash). Yet when one does not use usleep right after
spawning the secondary (worker) thread in the constructor, the whole
thing is very crash-prone, almost as if the spawning of the thread does
not go well unless given adequate time to get things into sync. This
makes no sense to me, since as I understand it the constructor does not
move ahead until pthread_create returns (a value which in this case I am
not bothering to read). Curiously, when not using usleep, a crash may
occur right at creation time, at any point while the object exists, or
even as late as during its destruction. Any ideas?
P.S. I am also including the entire file for those interested in trying
it out.
Best wishes,
Ico
Relevant excerpts (in no particular order, and incomplete for the sake of
legibility):
//struct defining the object
typedef struct _wiimote
{
    t_object x_obj; // standard pd object (must be first in struct)
    ...
    // separate thread for actions known to cause sample drop-outs
    pthread_t unsafe_t;
    pthread_mutex_t unsafe_mutex;
    pthread_cond_t unsafe_cond;
    t_float unsafe;
    ...
    t_float led;
    ...
} t_wiimote;
//constructor
static void *pd_cwiid_new(t_symbol* s, int argc, t_atom *argv)
{
    ...
    x->led = 0;
    // spawn threads for actions known to cause sample drop-outs
    threadedFunctionParams rPars;
    rPars.wiimote = x;
    pthread_mutex_init(&x->unsafe_mutex, NULL);
    pthread_cond_init(&x->unsafe_cond, NULL);
    pthread_create(&x->unsafe_t, NULL,
                   pd_cwiid_pthreadForAudioUnfriendlyOperations,
                   (void *) &rPars);
    // WHY IS THIS NECESSARY? I thought the pthread_create call would
    // finish spawning the thread before proceeding
    usleep(100); // allow thread to sync (is there a better way to do this?)
    ...
}
//destructor
static void pd_cwiid_free(t_wiimote* x)
{
    if (x->connected) {
        // this has nothing to do with the thread; it disconnects the wiimote
        pd_cwiid_doDisconnect(x);
    }
    x->unsafe = -1; // allow the secondary thread to exit its while loop
    pthread_mutex_lock(&x->unsafe_mutex);
    pthread_cond_signal(&x->unsafe_cond);
    pthread_mutex_unlock(&x->unsafe_mutex);
    pthread_join(x->unsafe_t, NULL);
    pthread_mutex_destroy(&x->unsafe_mutex);
    ...
}
//worker thread
void *pd_cwiid_pthreadForAudioUnfriendlyOperations(void *ptr)
{
    threadedFunctionParams *rPars = (threadedFunctionParams*)ptr;
    t_wiimote *x = rPars->wiimote;
    t_float local_led = 0;
    t_float local_rumble = 0;
    unsigned char local_rpt_mode = x->rpt_mode;

    while(x->unsafe > -1) {
        pthread_mutex_lock(&x->unsafe_mutex);
        // sleep until the main thread signals a change or shutdown
        if ((local_led == x->led) && (local_rumble == x->rumble) &&
            (local_rpt_mode == x->rpt_mode)) {
            pthread_cond_wait(&x->unsafe_cond, &x->unsafe_mutex);
        }
        if (local_led != x->led) {
            local_led = x->led;
            //do something
        }
        if (local_rumble != x->rumble) {
            local_rumble = x->rumble;
            //do something else
        }
        ...
        pthread_mutex_unlock(&x->unsafe_mutex);
    }
    pthread_exit(0);
}
//an example of how the thread is affected by the main thread
void pd_cwiid_setLED(t_wiimote *x, t_floatarg f)
{
    if (x->connected) {
        x->led = f;
        pthread_mutex_lock(&x->unsafe_mutex);
        pthread_cond_signal(&x->unsafe_cond);
        pthread_mutex_unlock(&x->unsafe_mutex);
    }
}
Dear Linux Audio developer, user, composer, musician, philosopher
and anyone else interested, you are invited to the...
Linux Audio Conference 2011
The Open Source Music and Audio Software Conference
May 6-8 2011
Music Department, National University of Ireland, Maynooth
Maynooth, Co.Kildare, Ireland
http://music.nuim.ie
As in previous years, we will have a full programme of talks,
workshops and music.
Two calls will be issued, a Call for Papers (see below) and Call for
Music (soon to be announced).
Further information can be found on the LAC2011 website (under
construction).
================ CALL FOR PAPERS =================
Papers in the following categories (but not limited to them) are now
invited for submission:
* Ambisonics
* Education
* Live performance
* Audio Hardware Support
* Signal Processing
* Music Composition
* Audio Languages
* Sound Synthesis
* Audio Plugins
* MIDI
* Music Production
* Linux Kernel
* Physical Computing
* Interface Design
* Linux Distributions
* Networked Audio
* Video
* Games
* Media Art
* Licensing
We very much welcome practical papers and software demos ("how I use
Linux Audio applications to create my music/media art").
Paper length: 4-8 pages, with abstract (50-100 words) and up to 5
keywords.
Language: English.
The copyright of the paper remains with the author, but we reserve the
right to create printed proceedings from all submitted (and accepted)
papers.
IMPORTANT DATES:
Submission deadline: 15 January 2011
Notification of acceptance: 7 March 2011
Camera-ready papers: 1 April 2011
Queries: Victor Lazzarini, NUI Maynooth (victor.lazzarini(a)nuim.ie)
if there was a standard that described the expected behavior of commonly
used (or just useful) knob control methods, perhaps in the form of a
short draft specification on freedesktop.org, would people (i.e.
developers) use it?
expectations and requirements clearly differ, so we would probably need
to gather together and discuss the methods in use with a mind to removing
or combining similar ones, explicitly naming them and defining their
behavior in the abstract. if we required that applications implement a
set of a few methods as configurable options (as a minimum) in order to
comply, then we might eventually see the back of the unpredictable mess
we have now. it wouldn't matter how they are configured, be it gconf,
dotfile, command line, prefs dialog, env variable.. as long as it could
be done.
worth thinking about or is it just too niche? are the actual real world
applications too varied and specific to make it useful?
for example it would certainly be nice to crack open a library config or
application prefs window, assign VLMC (vertical linear motion control*)
to the left mouse button and RMC (radial motion control*) to shift + left
mouse button, and have those conform to the expected standards. or even
just to set KNOB_CONTROL_METHOD=RMC in your global environment and have
all the adherent applications just do what you expect.
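a minimal sketch of how an adherent app might honour that variable; the
variable and method names come from this post, everything else (the enum,
the function name) is hypothetical:

#include <stdlib.h>
#include <string.h>

typedef enum { KNOB_VLMC, KNOB_RMC } knob_method_t;

/* read KNOB_CONTROL_METHOD from the environment, defaulting to VLMC */
static knob_method_t knob_method_from_env(void)
{
    const char *m = getenv("KNOB_CONTROL_METHOD");
    if (m && strcmp(m, "RMC") == 0)
        return KNOB_RMC;
    return KNOB_VLMC;
}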
it seems to me that standards (of the mutually agreed rather than
officially sanctioned variety, since the latter is impractical) provide
the best means to bring about common behavior in sovereign systems.
naturally it would be completely platform independent. just having named
control methods with explicitly defined behavior may help matters too.
cheers,
pete.
*crappy names i'm sure, but you get the idea.
WRT the recent discussion about pixmap knob widgets and theme conformance
(which i can't reply to directly since i wasn't on the list at the time,
sorry), there are a couple of ways you might achieve this.
the crux gtk theme engine includes some pixmap recolouring code (or used
to at any rate). it recolours areas of a pixmap that contain only green
values to a colour specified in the gtkrc. this might conceivably be
stolen and incorporated to provide some measure of theme conformance for
pixmap based widgets (knobs, wheels and potentially sliders).
this method places specific constraints on the source pixmaps used,
constraints that are easily adhered to when creating pixel art (which the
crux pixmaps were), but procedurally generated rasters (vector or 3d
renders) of the kind likely to be used with a pixmap widget might pose
more of a problem, since lighting and anti-aliasing probably introduce a
variety of colours. still, i expect you could get something to work with
a bit of effort (imagemagick or script-fu in the gimp may help there).
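as a rough sketch of the recolouring idea (my reconstruction, not the
actual crux code; the RGBA8 pixel layout and the function name are
assumptions), pure-green pixels can be remapped so their green intensity
scales a theme colour:

#include <stdint.h>
#include <stddef.h>

/* remap pixels that are pure green (r == 0, b == 0) so their green
   intensity scales the target theme colour (tr, tg, tb) */
static void recolour_pixmap(uint8_t *rgba, size_t npixels,
                            uint8_t tr, uint8_t tg, uint8_t tb)
{
    for (size_t i = 0; i < npixels; i++) {
        uint8_t *p = rgba + 4 * i;
        if (p[0] == 0 && p[2] == 0) {   /* green-only pixel */
            unsigned g = p[1];
            p[0] = (uint8_t)(tr * g / 255);
            p[1] = (uint8_t)(tg * g / 255);
            p[2] = (uint8_t)(tb * g / 255);
        }
    }
}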
another possibility, briefly discussed for use with phat, was a composite
widget with different layers that could be drawn separately, one on top
of the other: e.g. render an SVG or pixmap as a background on the first
pass, then draw something with cairo (a value indicator..) on top.
there are obvious limits to what you can achieve with this kind of thing,
but you could get some complex effects on a knob while still maintaining
procedural control over the size, colour and shape of the vector elements
(tick marks around the knob, value indicator size and colour etc). IIRC
we discarded the idea due to its complexity: we wanted a generally
configurable knob, and the vector elements would need anything from
extensive widget options right up to a full blown markup language to
describe them (not a problem for app specific widgets).
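a bare-bones sketch of the two-pass idea (the function name and
parameters are mine; the cairo calls are standard): blit the pre-rendered
background, then stroke a procedural indicator on top:

#include <math.h>
#include <cairo.h>

static void draw_knob(cairo_t *cr, cairo_surface_t *bg,
                      double cx, double cy, double radius, double value)
{
    const double pi = 3.14159265358979;

    /* pass 1: pre-rendered SVG/pixmap background */
    cairo_set_source_surface(cr, bg, 0, 0);
    cairo_paint(cr);

    /* pass 2: procedural value indicator over a 270-degree sweep */
    double angle = (0.75 + 1.5 * value) * pi;
    cairo_set_source_rgb(cr, 1.0, 1.0, 1.0);
    cairo_set_line_width(cr, 2.0);
    cairo_move_to(cr, cx, cy);
    cairo_line_to(cr, cx + radius * cos(angle),
                      cy + radius * sin(angle));
    cairo_stroke(cr);
}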
cheers,
pete.
Hi,
BoxySeq is still very far from being suitable for end users, but I've
decided to post an update here to let people know that I'm still
working on it :-)
"The classification of what BoxySeq is, resides somewhere between
sequencer and arpeggiator. The core concept of BoxySeq is to use a
window-manager-like window-placement-algorithm to generate pitch and
velocity data as it sequences events in real time (via the JACK Audio
Connection Kit’s MIDI API)."
More details of how it (should) work(s):
http://github.com/jwm-art-net/BoxySeq/wiki
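(To make the quoted concept concrete, here is a toy illustration of one
possible first-fit placement scheme; this is not BoxySeq code, and every
name in it is invented: the x position a box lands at becomes the pitch,
its y position the velocity.)

#include <stdbool.h>

#define GRID_W 128
#define GRID_H 128

static bool grid[GRID_H][GRID_W]; /* true = cell occupied */

/* try to place a w*h box at the first free spot; on success,
   derive pitch from x and velocity from y */
static bool place_box(int w, int h, int *pitch, int *velocity)
{
    for (int y = 0; y + h <= GRID_H; y++)
        for (int x = 0; x + w <= GRID_W; x++) {
            bool free_spot = true;
            for (int j = 0; free_spot && j < h; j++)
                for (int i = 0; free_spot && i < w; i++)
                    if (grid[y + j][x + i])
                        free_spot = false;
            if (!free_spot)
                continue;
            for (int j = 0; j < h; j++)      /* claim the region */
                for (int i = 0; i < w; i++)
                    grid[y + j][x + i] = true;
            *pitch = x;
            *velocity = 127 - y;
            return true;
        }
    return false; /* no room: the event is dropped */
}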
My latest demo with BoxySeq connected to Yoshimi and recorded in mhWaveEdit
http://jwm-art.net/art/audio/boxyseq_demo_28_09_2010.ogg
(the demo uses a simple pattern which feeds into 8 boundary boxes.
i move the boundaries around, switching them to blocking mode and back
to play mode for various melodic arpeggiated effects etc).
Since the last time I posted to LAD/LAU I've added some usability features:
Improved boundary movement/resize code.
Added zoom functionality.
Added scrollbars.
Added keyboard shortcuts.
(select a boundary by hovering the mouse over it)
left, right, up, down - move the selected boundary 1 unit in that direction
b, B - make the boundary turn all events into Blocks (i.e. events which
       don't emit midi messages but are still placed)
i, I - make the boundary Ignore all events
p, P - make the boundary Play all events as normal
-    - bring the boundary closer to the front of the boundary list
+    - move the boundary further toward the end of the list
       (the front/end list ordering is slightly confusing, I know)
that's about it - but it feels like I've done a whole lot more.
the next step is to get one of the most fundamental features of
BoxySeq working: letting the user create static block boxes (these
can be placed anywhere in the grid and prevent a boundary from
placing any events in that location). Unfortunately it's not quite
as straightforward as it sounds.
Cheers,
James.
--
_
: http://jwm-art.net/
-audio/image/text/code
Friends, MusE 1.1 is here!
[Introduction]
MusE is a combined midi and audio sequencer which tries
to cover most bases for the linux computer studio.
MusE is one of the oldest sequencers on the Linux audio scene and is
today a very stable open source solution for everyday music making.
This release adds some new features, lots of bugfixes and a bunch
of usability improvements.
MusE : http://muse-sequencer.org
[Highlights]
* Jack midi support.
* Allow native VST guis for plugins
* Audio and midi routing popup menus now stay open, for making rapid
connections.
* MusE now has two mixers, with selectable track type display.
* External midi sync fixes and improvements, should be very stable
* Some pianoroll improvements
* Some crash fixes
* Drum editor fixes
* Various arranger fixes and improvements
* Various improvements for plugin guis
* Routing fixes
* Stability fixes for plugins
* Various DSSI fixes
* Rec enabled track moves with selection when only one track is rec enabled
* Jack midi, routing system, multichannel synth ins/outs, midi strips
and trackinfo pane.
* Dummy audio driver: Added global settings for sample rate and period size.
* Arranger track list: Quick 'right-click' or 'ctrl-click' or
'ctrl-mouse-wheel' toggling of Track On/Off.
* Allow changing timebase master
* Option to split imported midi tracks into multiple parts.
* Several new keyboard shortcuts for various operations, see shortcut editor
* Several colour tweaks and other cosmetic changes
* Various stability fixes
* Countless fixes and tweaks, about 300 lines in the ChangeLog;
  check it for a complete list of blood, sweat and tears
[What is MusE again?]
MusE is a multitrack virtual studio with support for:
* Midi
 * jack midi
 * internal softsynths, including soundfont player FluidSynth
 and sample player Simple Drums
 * DSSI softsynths, including VST instruments
 * with a patch to DSSI, VST-chunks are handled
 * Drum editor
 * Pianoroll
 * Conventional arranger
 * midi automation
 * and lots more
* Audio
 * Jack
 * Jack transport
 * LADSPA plugins
 * VST plugins through dssi-vst
 * audio automation, old sch00l
 * and lots more
[ChangeLog]
For a complete list of changes, check the ChangeLog in
the package or online at the sourceforge site:
http://lmuse.svn.sourceforge.net/viewvc/lmuse/trunk/muse/ChangeLog?revision…
[Download]
http://muse-sequencer.org/index.php/Download
Keep on rocking!
The MusE team
I think the answer belongs on the list. Maybe others will correct me...
On Sunday 26 September 2010 12:04:17 you wrote:
> Hi Arnold, your hint is a real revelation to me! I have spent the night
> thinking about it and now I have a question: if I drive the beat counter
> via the sampling clock (you mean the internal clock of the sound card,
> right?) and the ALSA process is in blocking mode, the audio thread
> becomes itself a sort of metronome where a chunk of data is a single
> tick, doesn't it?
The audio thread becomes the metronome. But don't confuse the chunks of data
with the ticks of your bars:beats:ticks.
You get blocks of samples from the device (to write to or read from), you
know what sampling rate you use, and you know how many samples you have
already processed. From that you calculate your clock.
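A minimal sketch of that calculation (the tempo, ppqn and all the names
are my assumptions):

#include <stdint.h>

/* frames_done is incremented by the period size after each
   processed block of samples */
static uint64_t frames_done = 0;

/* convert processed frames to musical ticks */
static uint64_t frames_to_ticks(uint64_t frames, double sample_rate,
                                double bpm, int ppqn)
{
    double beats = (double)frames / sample_rate * bpm / 60.0;
    return (uint64_t)(beats * ppqn);
}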
BTW: it sounds as if you are just beginning to write audio apps. Start with
JACK; its API for clients is easier than ALSA's. At least that's what I'm
told, as I have never used the ALSA API myself.
Another advantage of JACK is that you get the global jack-transport for
free, which means your sampler/looper will sync with your other softsynths
and with your recording app.
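For example, a client can read the shared bar:beat:tick position with
jack_transport_query (a sketch; the surrounding client setup is assumed
to exist elsewhere):

#include <stdio.h>
#include <jack/jack.h>
#include <jack/transport.h>

/* print the current transport position if BBT info is available */
static void print_bbt(jack_client_t *client)
{
    jack_position_t pos;
    jack_transport_state_t state = jack_transport_query(client, &pos);

    if ((pos.valid & JackPositionBBT) && state == JackTransportRolling)
        printf("bar %d beat %d tick %d\n",
               (int)pos.bar, (int)pos.beat, (int)pos.tick);
}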
Have fun,
Arnold
Hi guys,
first of all, forgive my not-so-perfect English :-)
I'm writing some code for a minimal loop player based on two threads: one
handles a beat counter, the other feeds the soundcard with audio frames
through ALSA. When the beat counter has completed a full cycle (e.g. 4/4),
it simply rewinds the PCM data to byte 0, making a seamless loop. Really
straightforward.
Now I'm wondering how to implement the metronome side: should I rely on
something like usleep/nanosleep, or does the ALSA layer offer an advanced
timer? Another potential issue would come from latency, which is obviously
present within the audio thread (due to ALSA): what happens when the beat
counter restarts the audio sample while an ALSA frame is still being
written to the soundcard?
Thank you in advance for any suggestion!
Tb