Can anyone point me at what they consider the best thing to look at for an
introduction to communication between threads in a jack app using the
ringbuffer?
I found some, but as docs appear a bit scattered, wondered if there was a
known best-first-reference type thing.
thanks
iain
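(For anyone else searching the archives: the core pattern that jack_ringbuffer (declared in jack/ringbuffer.h) implements is a lock-free single-producer/single-consumer FIFO: the RT thread only ever advances one index and the non-RT thread the other, so neither side blocks. Below is a minimal, self-contained C++ sketch of that pattern using std::atomic instead of the real jack_ringbuffer_* calls; all names here are illustrative, not part of the JACK API.)

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Minimal single-producer/single-consumer ring buffer: the same pattern
// jack_ringbuffer_t implements. The writer only advances `head_`, the
// reader only advances `tail_`, so no locks are needed in the RT thread.
template <typename T>
class SpscRing {
public:
    explicit SpscRing(size_t capacity) : buf_(capacity + 1), head_(0), tail_(0) {}

    bool push(const T& v) {           // called from the non-RT (writer) thread
        size_t h = head_.load(std::memory_order_relaxed);
        size_t next = (h + 1) % buf_.size();
        if (next == tail_.load(std::memory_order_acquire))
            return false;             // full: drop or retry, never block
        buf_[h] = v;
        head_.store(next, std::memory_order_release);
        return true;
    }

    bool pop(T& out) {                // called from the RT (reader) thread
        size_t t = tail_.load(std::memory_order_relaxed);
        if (t == head_.load(std::memory_order_acquire))
            return false;             // empty
        out = buf_[t];
        tail_.store((t + 1) % buf_.size(), std::memory_order_release);
        return true;
    }

private:
    std::vector<T> buf_;
    std::atomic<size_t> head_, tail_;
};
```

With the real API you would call jack_ringbuffer_create(), then jack_ringbuffer_write() from the non-RT side and jack_ringbuffer_read() inside the process callback; the single-writer/single-reader discipline is the same.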
this is totally pre-alpha-OMG-IT'S-FULL-OF-FAIL state..
https://github.com/fps/jiss
requirements: SWIG, libjack-dev and liblua5.1-dev on Ubuntu..
compile with make (if it fails, you're on your own. it's a simple
makefile though). then run in the build dir:
lua wicked.lua
if you have Jass running with a piano sample on MIDI channel 0, a bass
drum on channel 1 and a hi-hat on 2, you should get a rather weird
interpretation of "Stella by Starlight", a jazz standard..
something like this (some effects added with jack-rack):
http://shirkhan.dyndns.org/~tapas/stella.ogg
(the wicked.lua code follows, with some chords omitted at the start and
some remarks added in comments):
-- some stuff :D
require "jiss"
require "jissing"
-- create engine in stopped state
e = jiss.engine()
-- setup some state that the sequences later use
-- e:run can only be used when the engine is stopped..
-- as this is executed in non-RT context it's ok to
-- create some variables and tables here..
e:run([[
bar = 0;
min = 20;
max = 80;
stella = {
    range(min, 80, min7b5(E(4))),
    range(min, 80, min7b5(E(4))),
    -- cut away quite a bit here (see wicked.lua in git clone) :D
    range(min, 80, maj7s11(B(4)-1)),
    range(min, 80, maj7s11(B(4)-1))
}
]])
-- this sequence can control the others since it's processed before
-- the others in the engine
-- events string is newline sensitive. in this case the events
-- on consecutive lines are spaced 1 second apart..
-- also: loop back to 0 at time t = 8 sec
tune = seq(e, "tune", loop_events(8, events_string(1, [[
drums1:relocate(0.0); drums1:start_(); notes:relocate(0.0); notes:start_()
drums1:stop_();
]])))
-- manually start this sequence and add to the engine
tune:start()
-- note that a copy is appended to the engine
e:append(tune)
-- a sequence that controls the global variable bar to advance through
-- the song
play(e, seq(e, "control", loop_events(1, events_string(1, [[
bar = bar + 1; bar = (bar % #stella);
]]))))
-- events at fixed times. loop at t = 0.75 sec
play(e, seq(e, "notes",
    loop_events(0.75, {
        { 0.125, [[ for i = 1,4 do note_on(0, 24 +
            stella[bar][math.random(#stella[bar])], 30 + math.random()*64) end ]] },
        { 0.5, [[ for i = 1,2 do note_on(0, 24 +
            stella[bar][math.random(#stella[bar])], 10 + math.random()*34) end ]] }
    })))
-- a drum pattern
drums = [[
note_on(1, 64, 127); note_on(2, 64, 127)
note_on(2, 64, 127)
note_on(2, 64, math.random(127))
note_on(2, 64, math.random(127))
note_on(2, 42, 110)
note_on(2, 64, 127)
note_on(2, 64, math.random(127))
note_on(1, 64, 127); note_on(2, 64, 127)
note_on(2, 64, math.random(127))
]]
play(e, seq(e, "drums1", loop_events(1, events_string(0.125/2, drums))))
-- connect all sequence outputs to jass:in
connect(e,"jass:in")
-- run the whole thing
e:start()
-- wait for the user to press enter
io.stdin:read'*l'
Have fun,
Flo
Thanks everyone for all the help on my architecture questions. It seems
like a lot of the best-practice functionality already has tools/components
for it in JACK. I *was* planning on using RtAudio in order to be
cross-platform, but if it's a lot easier to get things done in JACK, I
could live with being limited to Linux and OS X.
Just wondering if I could poll opinions: for a real-time step sequencer
meant to do super tight timing and be syncable with other apps, is JACK
going to be a lot easier to work with? Should I just dig into the JACK
tutorials?
And is it straightforward to use the Perry Cook STK in a JACK app?
thanks everyone
iain
Hi All,
Recent thoughts of mine include changing the "direction" of operations in
real-time program design:
E.g. don't call set() on a Filter class to tell it its cutoff every time
there's a change, but make it ask a State class what its cutoff should be
every time it runs.
I'm now considering implementing this pattern, but I'm wondering: have
others done this before?
Along the same lines, say I have a singly linked list of AudioElements, and
the 3rd element needs info: should it request it, be told it, or have some
other system to inform it of events?
I'm seeing downsides to each approach:
1: Tell it on every change -> performance hit
2: Request it every time it runs -> keeping control over the many values &
unique IDs of class instances
Experience & Opinions all welcomed, -Harry
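(One way to sketch the "pull" direction described above: the audio object asks a shared State object for its parameter each time it runs, and the State holds lock-free atomics so the audio thread never blocks. All class and member names below are hypothetical, made up for illustration; the DSP itself is a placeholder.)

```cpp
#include <atomic>

// Hypothetical shared parameter store: the UI thread writes, the audio
// thread reads. std::atomic<float> is lock-free on mainstream platforms,
// so the reader never waits on a mutex.
struct State {
    std::atomic<float> cutoff{1000.0f};
};

class Filter {
public:
    explicit Filter(const State& s) : state_(s) {}

    float process(float in) {
        // Pull model: ask the State once per run instead of being told
        // via set() on every change.
        float cutoff = state_.cutoff.load(std::memory_order_relaxed);
        // ... here a real filter would update its coefficients from
        // `cutoff` and filter `in`; this is a stand-in:
        return in * (cutoff > 0.0f ? 1.0f : 0.0f);
    }

private:
    const State& state_;
};
```

The cost of approach 2 is one atomic load per run per parameter (cheap), and the "many values & unique IDs" bookkeeping moves into the State class, which is exactly the trade-off being weighed.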
David's advice is right on.
> why not use audio rate control ports? Or some sort of hybrid, allowing
> you to
> switch as needed - but that quickly becomes a complexity explosion...
I use a hybrid where ports are audio-rate AND each port has two states:
'streaming' or 'static'. So if you don't want audio-rate modulation, you
pass a buffer of identical values and set the port state 'static' (not
changing).
The advantage is: 'dumb' plugins need no special-case code; just write them
as though the port were always audio-rate, one single simple code-base.
'Smart' plugins can query the port state and switch to more efficient code
when the port is 'static', e.g. read only the first sample and treat it like
a block-accurate parameter (i.e. far more efficient).
Best Regards,
Jeff
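(A small sketch of the hybrid described above. The Port struct and function names are made up for illustration, not any real plugin API; the point is that the 'dumb' and 'smart' paths produce identical output when the port is static.)

```cpp
#include <cstddef>

// Hypothetical port: an audio-rate buffer plus a flag saying whether the
// values change within the block ('streaming') or not ('static').
struct Port {
    const float* buf;
    size_t nframes;
    bool is_static;   // true: all samples in buf are identical
};

// 'Dumb' gain plugin: always reads the port at audio rate. One code path,
// no special cases; correct whether the port is streaming or static.
void gain_dumb(const float* in, float* out, const Port& gain, size_t n) {
    for (size_t i = 0; i < n; ++i)
        out[i] = in[i] * gain.buf[i];
}

// 'Smart' variant: when the port is static, read only the first sample
// and treat it as a block-accurate parameter.
void gain_smart(const float* in, float* out, const Port& gain, size_t n) {
    if (gain.is_static) {
        const float g = gain.buf[0];
        for (size_t i = 0; i < n; ++i)
            out[i] = in[i] * g;
    } else {
        gain_dumb(in, out, gain, n);
    }
}
```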
Hi,
I spent a few hours here and there working on Jass. Thus I give you
release 0.9, which is fairly feature-complete. But it might still have a
gazillion bugs. So please test before I go 1.0..
http://shirkhan.dyndns.org/~tapas/Jass-0.9.tar.bz2
Jass - A Jack Simple Sampler
Qt4-, libsamplerate-, libsndfile-, jack_midi-, jack_session-,
ladish-L1-enabled sampler..
Changes (AFAICT):
* graphical editors for most parameters (these can be resized to suit
your needs)
* waveform display to set sample start/end, loop start/end
* keyboard widget to set note, min note and max note
* a clumsy dial_widget that is barely usable :D
* global voice allocation (you can set the global polyphony per setup in
the XML file)
* ADSR envelope that actually works
* show/hide some parameter sections
Screenshot (showing all parameter editors):
http://i.imgur.com/Ssc4F.png
Regards,
Flo
Hey everyone, especially those who have been helping me with my
architecture questions. I'm wondering whether some of you would be
interested in helping in a simple advisory/editorial capacity if I were to
try to write up a wiki or e-book of some kind on real-time audio
architecture. It seems to me like there is a lot of knowledge here on it,
and not very many (if any) good starting points for such things. I've found
decent references on the DSP side of audio coding, but I haven't seen
anything on the 'how to put together a real-time audio app from a to z'
kind of thing. I find writing docs helps me clarify things in my head, so
I'd be interested in doing some writing if I knew that people who know what
they are doing would be interested in advising and correcting. I figured if
I put it online it might be a good source of publicity for your work, and
we could link back to projects (Ardour, etc.).
It would take a while of course, but it might also help people new to these
lists and give us all something to point at and say: there's a good
write-up on that here ->
thoughts?
iain
Guitarix release guitarix2-0.20.0
After much code shuffling, refactoring and testing we are happy to
release "whizzing abacus" guitarix2-0.20.0. Thanks to all testers and
special thanks to rosea grammostola for his patience.
Guitarix is a tube amplifier simulation for jack, with effect modules
and an additional stereo effect chain.
Please refer to our project page for more information:
http://guitarix.sourceforge.net/
new features and bugfixes in short:
* important bugfix: convolver (cabinet, presence) in 0.19.0 only
worked when samplerate was 48000Hz
* avoid connecting already-connected ports
* always save state on exit (in earlier versions state was not saved
when presets were selected)
* separation of engine and UI
if you always wanted to use Guitarix headless or embedded, now
there's a chance :-)
* it is now possible to set the reference pitch of the tuner
* remember currently selected preset in guitarix state file
* display selected preset / factory preset in window title and patch
info window
* updated factory settings from funkmuscle
* reworked guitarix operation under a jack session manager
* error popup window in addition to existing logging facility
* command line option to set instance name (determines jack client
names and state file to use)
* some more work to support localization
* upgraded to zita-convolver version 3
* other smaller changes and clean-ups
download site:
http://sourceforge.net/projects/guitarix/
please report bugs and suggestions in our forum:
http://sourceforge.net/apps/phpbb/guitarix/
here you can find a couple of examples produced by guitarix users:
http://sourceforge.net/apps/phpbb/guitarix/viewtopic.php?f=11&t=83
have fun
_________________________________________________________________________
For extra Impulse Responses, guitarix uses the zita-convolver library,
and, for resampling we use zita-resampler, both written by Fons
Adriaensen.
http://kokkinizita.linuxaudio.org/linuxaudio/index.html
We use the marvellous faust compiler to build the amp and effects and
will say thanks to
: Julius Smith
http://ccrma.stanford.edu/realsimple/faust/
: Albert Graef
http://q-lang.sourceforge.net/examples.html#Faust
: Yann Orlarey
http://faust.grame.fr/
________________________________________________________________________
guitarix development team
Further to the conversation about Python to C++ (with many helpful
responses, thanks everyone!).
For my particular case, no dropouts is critical, and I really really want
to be able to run multiple UIs on lots of cheap machines talking to the
same engine over something (OSC, I expect). So I'm OK with the fact that
user input and requests for user interface updates may lag, as the queue is
likely to be really busy sometimes. I'm imagining:
Engine thread, which owns all the data actually getting played (sequences,
wave tables, mixer/synth/effect controls, the works):
- gets called once per sample by the audio subsystem (RtAudio at the moment)
- does its audio processing, sends out audio
- loops Magical Mystery Number 1 times:
  - gets a message off the input queue describing a change to a data point
    (sequence or sample data)
  - updates the data point
- loops Mystery Number 2 times:
  - gets a message off the 2nd UI queue requesting the state of a data point
  - sends back a message with that data to the requester
done
GUI thread:
- keeps its own copy of whatever data is pertinent to that particular GUI
  at that point
- sends a bunch of requests if the user changes the view
- sends data-change request messages according to user actions
Here's my question: how do I determine the magical mystery numbers? I need
to make sure the engine callback is always done in time, no matter how many
messages are in the queue, which could be very high if someone is dropping
in a sample of audio. By making the data-point messages very simple, I hope
that I'll have a pretty good idea of how long one takes to process. It's
just a lock-get-write to a simple data structure. But how much audio
processing has happened before that point will be variable. Anyone have
suggestions on that? Is the system clock accurate enough to check the time,
see how much of a sample period is left, and make some safe calculation
with headroom left over? It is totally OK for the queue and the inputs to
lag if the audio number crunching goes through a spike.
Suggestions most welcome (including 'that design sucks and here's why').
thanks
iain
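(One common answer to the mystery-number question is to replace the fixed count with a time budget: drain messages only while there is slack left in the current audio period, keeping headroom for the remaining DSP work, and let leftovers wait for the next callback. A sketch under assumptions: the Msg type, queue, and function names are hypothetical, and a real engine would feed the loop from a lock-free ring buffer rather than std::queue.)

```cpp
#include <chrono>
#include <cstddef>
#include <functional>
#include <queue>

using Clock = std::chrono::steady_clock;

// Hypothetical message: applying it is the "lock-get-write to a simple
// data structure" step described above.
struct Msg {
    std::function<void()> apply;
};

// Drain messages only while there is time left in the current period.
// `headroom` caps how much of the period the drain may consume,
// e.g. 0.5 means: stop once half the period has elapsed.
size_t drain_with_budget(std::queue<Msg>& q,
                         Clock::time_point period_start,
                         std::chrono::microseconds period,
                         double headroom) {
    const auto deadline = period_start +
        std::chrono::duration_cast<Clock::duration>(period * headroom);
    size_t handled = 0;
    while (!q.empty() && Clock::now() < deadline) {
        q.front().apply();
        q.pop();
        ++handled;
    }
    return handled;  // leftovers simply wait for the next callback
}
```

std::chrono::steady_clock is typically accurate to well under a microsecond on Linux and OS X, so checking it once per message is a reasonable way to answer the "is the system clock accurate enough" question; the remaining tuning knob is the headroom fraction.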