Hi All,
A new build of Praxis LIVE makes its way into the light. Highlights
include new audio components and audio API improvements,
cross-platform video capture (yes, Linux has always been there!), and
live GLSL coding.
Website - http://code.google.com/p/praxis
Release notes - http://code.google.com/p/praxis/wiki/ReleaseNotes
Blog post - http://praxisintermedia.wordpress.com/2012/06/21/praxis-live-build120620/
Praxis is a Java-based modular framework for live creative play with
video, images, audio, and other media. Its primary focus is on the
easy development of generative and interactive media installations, as
well as live performance. Praxis LIVE is a graphical, patcher-style
interface for developing Praxis projects 'on the fly'. Praxis is
developed by UK Artist and Technologist Neil C Smith. It is partly
inspired by projects such as AudioMulch, Bidule and Isadora, and to a
lesser extent Pure Data and Processing; however, it is not intended to
be a clone of any of them.
Thanks for listening, best wishes,
Neil
--
Neil C Smith
Artist : Technologist : Adviser
http://neilcsmith.net
Praxis - open-source intermedia system for live creative play
http://code.google.com/p/praxis
OpenEye - specialist web solutions for the cultural, education,
charitable and local government sectors.
http://openeye.info
hi everyone!
thanks to the excellent pd documentation out there and lots of hand
holding by friendly pd gurus on this list and elsewhere, here's my
humble take on creating a theatre cue player with pd that does what i
need... all the heavy lifting is done by august black's excellent
readanysf~, thanks for making this tool available!
CueFrog is designed to be multi-instance capable, so you can create as
many decks as your machine can handle, and makes use of lots of
send/receive ports to simulate some kind of object-oriented
encapsulation stuff, based on my (limited) understanding of a
model/view/controller paradigm.
grab it:
http://stackingdwarves.net/public_stuff/software/CueFrog/CueFrog-0.0.2.tar.…
it's documented, so you should get it going in no time. i'm sure there
are many quirks there, and i found out it's very easy to create race
conditions in pd, so no warranties :)
comments and suggestions for improvements are most welcome.
i have a vbap-based panning automation in the works (which has already
been used live at a theatre festival), but the code is in
oh-my-good-tomorrow-is-dress-rehearsal shape, so forgive me for
withholding it another month or so.
and before you ask: frogs are cute. and when the director makes me jump,
i need tools that jump along :-D
best,
jörn
--
Jörn Nettingsmeier
Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487
Meister für Veranstaltungstechnik (Bühne/Studio)
Tonmeister VDT
http://stackingdwarves.net
Hi Everybody,
My name is Bart, this is my first post here, though some have met me on IRC.
Thanks for making Linux audio what it is!
I started using Linux in 2004 with DeMuDi, and have never looked back.
I'm trying to get my pcm_multi to work without xruns with jackd.
Some of you seem to have got this down, with or without "ghost xruns".
Jörn seems to imply in the quoted thread that tschack is the answer, but
it gives me the most xruns of all jack implementations.
Who has got this working?
Who want to help me get to the bottom of this?
Google has been a great help so far, but I'm not sure what to try or what
to google anymore. :(
So far I've tried:
*jackdmp1.9.9
*jackd1 (1:0.121.3+20120418git75e3e20b-2)
*http://nedko.arnaudov.name/soft/jack/dbus/jack-audio-connection-kit-dbus-0.121.3.tar.gz
*https://github.com/adiknoth/tschack.git
All tested with the 3 kernels mentioned below.
The nedko jack with the avlinux kernel sometimes goes without xruns for
quite a long while, but sometimes gives lots of them.
I haven't found the pattern behind it yet.
My system is fully tuned, according to realTimeConfigQuickScan.pl
the only exceptions:
cat: /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor: No such file
or directory
Checking CPU Governors... CPU 0: '' CPU 1: '' CPU 2: '' CPU 3: '' - not good
I assume this is because my CPUs are running at full speed.
Kernel with Real-Time Preemption... not found - not good
Checking if kernel system timer is set to 1000 hz... not found - not good
But I'm running 3.2.0-2-rt-686-pae #1 SMP PREEMPT RT Fri Jun 1 20:28:43
UTC 2012 i686 GNU/Linux.
I've also tried linux-image-3.0.32-avl-8 from avlinux, debian 3.2.0-2
vanilla.
Here is my .asoundrc:
https://github.com/StudioDotfiles/DotRepo/blob/master/asoundrc
I'm running jack like this:
jackd -d alsa -r 44100 -p 4096 -d rme9636_64
The two rme9636 soundcards are on their own irq's, with priorities just
below the timers.
Jack with just one card works like a charm.
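For readers following along, a two-card setup via the ALSA multi plugin generally has this shape (heavily abbreviated and hypothetical; the card names and channel counts here are made up, the real config is in the linked asoundrc):

```
# hypothetical sketch of a two-card multi setup
pcm.multi_rme {
    type multi
    slaves.a.pcm "hw:DSP,0"
    slaves.a.channels 18
    slaves.b.pcm "hw:DSP_1,0"
    slaves.b.channels 18
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    # ... remaining bindings for both cards
}
```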
Is there any other info needed?
Thanks a lot,
Bart.
On 01/14/2011 11:12 PM, Jörn Nettingsmeier wrote:
> On 01/14/2011 10:39 PM, Jörn Nettingsmeier wrote:
>
>> i had it crash once when loading a really demanding session, but with
>> another average ardour project, it has now played fine and without
>> glitches for 10 minutes or so, while the xrun count goes through the roof.
>
> alas. i spoke too soon:
>
> after i added a 6x1 convolver and an ambdec instance
> jack2 bails out reproducibly after a couple minutes more, with a
> "floating point exception". so more testing.
>
> meanwhile, i'd like to know what these xruns are, and i wouldn't be too
> surprised if the eventual crash is actually related to the message
> buffer or some internal error counter wrapping...
jack2 gets nervous when i use a session with four jconvolver instances:
JackPosixMutex::Unlock res = 1
Unknown request 4294967295
jackd: ../common/JackGraphManager.cpp:45: void
Jack::JackGraphManager::AssertPort(jack_port_id_t): Assertion
`port_index < fPortMax' failed.
Aborted
the cpu is not maxed out, afaics.
tschack handles this scenario just fine, and it doesn't spew error
messages on the console. if i monitor it in qjackctl, the xrun count
increases at the usual rate, though.
btw: qjackctl becomes a major cpu burden in this pathological case.
looks like it's the error messages. i've seen it at up to 40% of one core.
Here it goes.
Mostly a LV2 1.0.0 compliance release with some fixes from the stash
and fewer candies from the jar. On the darker/brighter side (your
choice) there's news on the LV2 atom(ic) fall-out now being officially
over. A new dawn has commenced, quite as every day follows every night
may I add.
Dang!
I'd better stop right here and save you all from these boring
trivialities. Let's go with the plain, interesting facts:
Qtractor 0.5.5 (foxtrot uniform) swings out!
Release highlights:
* LV2 Atom/MIDI support (NEW)
* LV2 Worker/Schedule support (NEW)
* LV2 Presets support (NEW)
* LV2 Time/position support (NEW)
* LV2 Programs/instrument support (NEW)
* MIDI plugin event timing on tempo changes (FIX)
* Loop-recording/takes audio sync (FIX)
* Quick start guide and user manual (NEW)
* Russian and Italian translations (NEW)
Website:
http://qtractor.sourceforge.net
Project page:
http://sourceforge.net/projects/qtractor
Downloads:
- source tarball:
http://downloads.sourceforge.net/qtractor/qtractor-0.5.5.tar.gz
- source package (openSUSE 12.1):
http://downloads.sourceforge.net/qtractor/qtractor-0.5.5-4.rncbc.suse121.sr…
- binary packages (openSUSE 12.1):
http://downloads.sourceforge.net/qtractor/qtractor-0.5.5-4.rncbc.suse121.i5…
http://downloads.sourceforge.net/qtractor/qtractor-0.5.5-4.rncbc.suse121.x8…
- brand new (quick start guide &) user manual:
http://downloads.sourceforge.net/qtractor/qtractor-0.5.x-user-manual.pdf
Weblog (upstream support):
http://www.rncbc.org
License:
Qtractor is free, open-source software, distributed under the terms
of the GNU General Public License (GPL) version 2 or later.
Change-log:
- Auto-monitored MIDI tracks were missing their pass-through to their
respective MIDI output bus plugin chains; now fixed, letting any
multi-timbral instrument plugin get a peek from auto-monitoring.
- New user option/preference to whether to open a plugin's editor
(GUI) by default, when available (cf. View/Options.../Plugins/Editor).
- Clicking and/or dragging for rubber-band selection on main
track-view canvas doesn't change the edit-head and -tail positions
anymore.
- Backward and Forward transport commands now have an additional stop
at first clip start point.
- LV2 Atom/MIDI buffering support is finally entering the scene; LV2
Worker/Schedule support is also included in a bold attempt to convey
non-MIDI event transfers between plugin and its UI.
- MIDI Clip editor (aka. piano-roll) and MIDI Tools fix: avoid note-on
events of zero velocity, which conventionally equate to note-off
events and are dropped into oblivion sooner or later. There's no more
need for the Shift/Ctrl keyboard modifier to change all the currently
selected MIDI events in one single step (now consistent with
drag-move).
- LV2 Presets support now entering effective operational status; a new
local option has been added (cf. View/Options.../Plugins/Paths/LV2
Presets directory; default is ~/.lv2).
- Dropped XInitThreads() head call as it was only ever useful in the
early days of JUCE VST plugins.
- Italian (it) translation added (by Massimo Callegari, thanks).
- Clip fade-in/out dragging now follows snap-to-beat setting.
- Late modern eye-candy indulgence: alternate shaded stripes, on every
other bar as in a "zebra" background option for the main tracks and
MIDI clip editor views (cf. View/Snap/Zebra).
- LV2 Time/position information is now being supported through special
designated plugin input ports (after suggestion by Filipe Coelho aka.
falktx). Additionally, the time/position information report has been
corrected and complemented for VST plugins.
- Audio vs. MIDI time drift correction has been slightly improved
against rogue tempo changes across looping cycles.
- Honor tempo/timing on MIDI instrument plugins. Happy regression fix
on getting MIDI note-offs at looping ends back in business; all the
necessary bumming for MIDI plugins to play nice in face of tempo
changes and whenever playback is started from anywhere but the
beginning of the time-line (ie. frame zero); thanks to rvega aka.
Rafael Vega for the heads-up.
- Audio clip wave-forms were being displayed in inverted phase (ie.
upside-down) all this time ever since day one. What a shame!
- LV2 Programs interface is getting initial experimental status, to
let LV2 instrument plugins get on par with the DSSI and VST crowd for
MIDI bank/program instrument inventory and selection support (a
sidetrack complot with Filipe Coelho aka. falktx, thanks:).
- Dropped the old but entirely useless LV2 URI-unmap feature, now
being superseded by official LV2 URID (un)mapper.
- Russian (ru) translation added (by Alexandre Prokoudine, thanks).
- SLV2 deprecation process started, effective now at configure time.
- Added include <unistd.h> to shut up gcc 4.7 build failures (patch by
Alessio Treglia, closing bug #3514794).
- Another approach avoiding recursive observer widget updates. Also
applies to mixer, monitor and track state buttons.
- Update to latest LV2 state extension (by David Robillard, thanks).
- Loop-recording/take number displayed on clip title, respectively.
- Make(ing) -jN parallel builds now available for the masses.
- A one buffer period slack on audio engine's loop turn-around logic
might just have fixed an elusive report on loop-recording/takes going
progressively out-of-sync, most notably when recording under large
audio buffer period sizes (>= 1024 frames/buffer).
- Editing MIDI while playback is rolling, doesn't mute the track any
more, adding a point to the live editing experience.
- Finer granularity for direct access parameter mouse wheel changes.
- Dropped a dumb optimization for short full-cached multiple
linked/ref-counted audio clips which were incidentally out-of-sync
after rewind/backward playback. Once again and uncertain to be the
last take on this, got fixed (probably related to some oddity reported
by Louigi Verona, thanks).
Enjoy!
--
rncbc aka. Rui Nuno Capela
Hi All,
From what I can tell, it looks like the LV2 Atom Sequence specification
allows you to send events with arbitrary units for the timestamp.
I have a few questions about this:
1. How are we to know whether a particular unit uses the *double* field in
the timestamp union, or the *uint64_t* field in the timestamp union?
2. The specification says "The unit field is either a URID that described
an appropriate time stamp type, or may be 0 where a default stamp type is
known." In what circumstances would the default stamp type be known? When
can I expect to see a zero in that field?
3. Are the timestamps absolute times? Or relative to the previous event?
Or relative to the start of the audio chunk? Does it depend on the units
used?
4. Why does the documentation
<http://lv2plug.in/doc/html/structLV2__Atom__Sequence__Body.html> show
|FRAMES |SUBFRMS| as the timestamp field? From what I can tell, there
is no unit which includes frames and subframes subdivided that way,
and the sampler example just uses the full 64 bits as a frames field.
Is this just a relic from the old event port documentation? (the
diagram seems familiar).
5. How are hosts/plugins supposed to deal with the multiplicity of units?
For example, suppose I'm a plugin or host that wants to receive MIDI data.
How am I supposed to know what timestamp unit to expect? Is there a
facility for converting between different units automatically? The only
extra information required to convert between any two time units would be
the bpm and sample rate. It would be rather annoying to have to implement
a bunch of unit conversion code in every host/plugin you write in order to
make sure it can handle any unit which is tossed at it.
Thanks,
Jeremy
Hi,
Many LADSPA plugins use one of their parameters to report
latency they add to their signal chain.
I wonder if there is a convention that these plugins report
the latency value in a common unit, or is it necessary
for me to examine the documentation of each plugin
case-by-case.
Regards,
Joel
--
Joel Roth
Hello laddies,
I am making an LV2 extension for accessing and/or restricting the buffer
size. This is straightforward, but I need to know just what
restrictions are actually needed by various sorts of DSP.
The sort of thing we're looking for here is "buffer size is always at
least 123 frames" or "buffer size is always a power of 2" or "buffer
size is always a multiple of 123".
I know "multiple of a power of two" is needed for convolution. Not sure
what else...
-dr
On 31 May 2012 03:41, Kaspar Bumke <kaspar.bumke(a)gmail.com> wrote:
> Hey,
>
> Just tested out drumreplacer. Seems to work well. I am going to go through
> the code and see if I can use it as a basis for a more advanced
> drumreplacer.
>
> For now I was just making an Arch Linux AUR package and was wondering
> about the license (have to put it in the package). Is it just public domain
> or did I miss something?
>
> Regards,
>
> Kaspar
>
On 31 May 2012 09:38, Marc R.J. Brevoort <mrjb(a)dnd.utwente.nl> wrote:
> Hi Kaspar,
>
>
> Just tested out drumreplacer. Seems to work well. I am going to go through
>> the code and see if I can use it as a basis for a more advanced
>> drumreplacer.
>>
>
> At present it's pretty basic. It does peak detection by looking if a wave
> goes over its threshold level, then (if I remember correctly)
> starts a counter to see how long it takes to get to another threshold
> level to extract MIDI velocity. In other words, at the moment it works
> entirely in the amplitude domain. This works pretty well for multi-track
> recordings, but for existing stereo tracks, doing the work in the frequency
> domain might work better.
>
>
> For now I was just making an Arch Linux AUR package and was wondering
>> about
>> the license (have to put it in the package). Is it just public domain or
>> did
>> I miss something?
>>
>
> I usually think of my packages as GPL'ish, but granted, in this case I
> probably forgot explicitly mentioning a licensing scheme, which means
> at the moment it's officially under copyright law. Obviously far more
> restrictive than I intended.
>
> I have a slant towards GPL as this will help guarantee that the
> source code is going to remain accessible to the public to tinker with.
> So as far as I'm concerned you can release it as GPL and keep this
> email as evidence that I've given you written permission to do that.
> Adding the generic LICENSE.txt file to the package should suffice.
>
> Good luck. If you need any help explaining the code let me know. I'll do
> my best (though it's 3 years back by now!)
>
> Best,
> Marc
>
On 31 May 2012 14:43, Kaspar Bumke <kaspar.bumke(a)gmail.com> wrote:
> Hi Marc,
>
>
> I have a slant towards GPL as this will help guarantee that the
>> source code is going to remain accessible to the public to tinker with.
>> So as far as I'm concerned you can release it as GPL and keep this
>> email as evidence that I've given you written permission to do that.
>> Adding the generic LICENSE.txt file to the package should suffice.
>>
>>
> Cool, I marked it as GPL which means GPLv2 or later. Are you OK with that?
> Common licenses are available by default so they don't need to be in the
> package.
>
> I needed to add a stdlib.h include to src/lib/convertlib.h to make it
> compile with gcc 4.7 by the way.
>
> Good luck. If you need any help explaining the code let me know. I'll do
>> my best (though it's 3 years back by now!)
>
>
> I have started looking through the code. The FLTK stuff is a bit confusing
> to me so I think I will start out by trying to extract the Jack process and
> plugging that into the simple command line jack client. I want to make an
> OSC controlled back-end separate from the GUI so that one day maybe I could
> put it in an embedded system to make an open source drum brain! I can see
> that you started out with a frontend and backend directories but looks like
> you ended up putting everything in the frontend.
>
>
> At present it's pretty basic. It does peak detection by looking if a wave
>> goes over its threshold level, then (if I remember correctly)
>> starts a counter to see how long it takes to get to another threshold
>> level to extract MIDI velocity. In other words, at the moment it works
>> entirely in the amplitude domain. This works pretty well for multi-track
>> recordings, but for existing stereo tracks, doing the work in the frequency
>> domain might work better.
>>
>
> Ah OK, cool. I am really glad I found your project as this is a basic
> enough example for me to start understanding just how to simply get audio
> in and MIDI out, once I have that down I will look at the signal processing
> in more detail, do FFTs etc and maybe a neural network.. haha who knows.
> You wouldn't happen to have any recommended reading on the theory behind
> drum replacement techniques? Any tips on what you changed from 0.1 to 0.2
> that made that crucial difference in performance?
>
> Kind Regards,
>
> Kaspar
>
On 31 May 2012 22:21, Marc R.J. Brevoort <mrjb(a)dnd.utwente.nl> wrote:
> Hi Kaspar,
>
>
> Cool, I marked it as GPL which means GPLv2 or later. Are you OK with that?
>>
> Absolutely.
>
>
> I needed to add a stdlib.h include to src/lib/convertlib.h to make it
>> compile with gcc 4.7 by the way.
>>
>
> I guess it's already starting to show its age a bit ;)
>
>
> I have started looking through the code. The FLTK stuff is a bit confusing
>> to me so I think I will start out by trying to extract the Jack process
>> and
>> plugging that into the simple command line jack client. I want to make an
>> OSC controlled back-end separate from the GUI so that one day maybe I
>> could
>> put it in an embedded system to make an open source drum brain! I can see
>> that you started out with a frontend and backend directories but looks
>> like
>> you ended up putting everything in the frontend.
>>
>
> Correct, I based the empty application on another one I did earlier but
> couldn't be bothered to do proper frontend-backend separation in its early
> stages. That's probably a mistake.
>
>
> Ah OK, cool. I am really glad I found your project as this is a basic
>> enough example for me to start understanding just how to simply get audio
>> in
>> and MIDI out
>>
>
> You'll also notice that it's JACK audio in, but rather than JACK MIDI out,
> it's ALSA MIDI out instead. Reason is that when I wrote drumreplacer, JACK
> MIDI was basically unsupported, even by JACK tools such as qjackctl. Things
> probably have changed at least somewhat, three years down the line.
>
> Some explanation on how things work - do with it as you please.
>
> As you've noticed, most of the magic happens in
> UserInterface::jack_process().
>
> The peak scanning: As an input wave is being scanned faster than realtime,
> one can't simply send out MIDI at the moment a peak is detected. Peaks at
> the end of a wave snippet would be triggered too
> quickly compared to peaks at the start of a wave snippet. Instead,
> the output has to be scheduled so that the latency between wave
> peak and MIDI trigger remains constant. (This is why the MIDI triggering
> is done through Fl::add_timeout() instead of just playing the note).
>
> If I recall correctly, the previous, 1-track version of drumreplacer
> didn't schedule notes at all and therefore to keep beats steady,
> it needed to use very small buffers and always had to trigger its
> notes immediately. Obviously this would result in poor performance.
>
> More about the note triggering: One thing to keep in mind is that
> Fl::add_timeout() is really a user-interface function. The delay is
> specified as milliseconds, but in reality it's not quite that
> accurate. Ideally, instead of a user interface timeout one would use
> a sample-accurate MIDI note scheduler.
>
> User interface controls:
>
> - Sens. is sensitivity, the level at which the note will trigger.
> - Res, the resolution - how often a note is allowed to retrigger.
> Related to variable "retrig" in the code.
> - Mid ch, note, are the MIDI channel and note number being output
> when the audio surpasses the threshold
> - Min veloc and Max veloc are the minimum and maximum velocity settings at
> which the note is played. If a note only reaches threshold value, it will be
> played at minimum velocity; if it reaches maximum value (+1 or -1 as
> float), it will be played at the maximum given velocity.
>
>
> One clever bit is that when a note is scheduled for playback, the actual
> velocity at which it will be played isn't known yet because that is only
> determined *after* the threshold level is reached.
> The note playback is scheduled, and at that time the velocity value is set
> to "minimum velocity".
>
> But meanwhile, before the MIDI is sent out, the wave scanning proceeds-
> and may update the velocity to the highest found peak, until either the
> resolution knob timeout occurs (after which peak detection is reset) or
> until the MIDI note schedule demands the note to be played immediately, in
> which case it will be played at the highest velocity found between
> triggering the note and the actual playback event.
>
> Hope this helps!
>
> Best,
> Marc
I copied the whole conversation to LAD just because I like lurking on there
and reading technical discussions I don't fully understand. Hope that's all
right with you.
You'll also notice that it's JACK audio in, but rather than JACK MIDI out,
> it's ALSA MIDI out instead. Reason is that when I wrote drumreplacer, JACK
> MIDI was basically unsupported, even by JACK tools such as qjackctl. Things
> probably have changed at least somewhat, three years down the line.
>
That's weird, because it appears as a Jack MIDI program/device in Jack
(qjackctl) which I noticed right away because my MIDI-USB devices appear
under ALSA and it is a (minor) annoyance to deal with the two different
MIDIs and get them to connect. Most things still seem to default to ALSA
these days for better or for worse (maybe someone from LAD could chime in
here with their wealth of knowledge--accurate to 1/1000 of a second it says
in your comments, is that still the case? is that bad?).
Some explanation on how things work - do with it as you please.
>
Thanks so much for the explanation. I may hit you up with more questions
as I dive more into the code.
Kind Regards,
Kaspar
> From: David Robillard <d(a)drobilla.net>
>
> I'm a modular head, I remain convinced that control ports are nothing
> but a pain in the ass and CV for everything would be a wonderful
> fantasy land :)
It's called "SynthEdit land": *everything* is CV ;) (not on Linux, sorry).
> As it happens, I am currently porting the blop plugins to LV2, and
> making a new extension in order to drop the many plugin variants (which
> are a nightmare from the user POV). This simple extension lets you
> switch a port from its default type (e.g. Control) to another type
> (e.g.
> CV). The pattern looks something like this:
>
> /* plugin->frequency_is_cv is 1 if a CV buffer, 0 if a single float */
> for (uint32_t i = 0; i < sample_count; ++i) {
> const float freq = frequency[i * plugin->frequency_is_cv];
> if (freq != plugin->last_frequency) {
> recalculate_something(freq);
> plugin->last_frequency = freq;
> }
>
> /* Do stuff */
> }
That's smart. In a simple example this doesn't seem like much of a win,
because a 1-port plugin has only two possible variants (frequency as
single float or buffer). But..
* A 2-port plugin has 4 variants.
* A 3-port plugin has 8 variants.
* A 10-port plugin has 1024 variants!
So you're avoiding that combinatorial nightmare.
I do something similar. The port is flagged as either 'streaming' (use the
entire buffer) or 'static' (use a single float). My point of difference is
that the entire buffer is provided either way, so you have the option of
writing the plugin like..
const float freq = frequency[i];
..OR...
const float freq = frequency[i * plugin->frequency_is_cv];
.. and it works transparently either way. So the extension is backward
compatible with 'dumb' plugins, or 'dumb' plugin standards like VST (I can
interface VST plugins with modular components).
> Doing those comparisons to see if the value actually changed since the
> last sample in order to recalculate is not so great (branching).
I don't know if you can implement what I do. Once I know which ports are
single floats I 'switch' processing functions. i.e. use a function pointer
to select 1 of several optimised functions. So you write a general purpose
loop like the one above, this is your fallback. Then you write an optimised
one that assumes 'frequency' is a single float - This one has no branching
and no extra multiplication, it's super efficient. You get the best of both
worlds. Note I don't write loops optimised for every possible combination,
just pick a few key ones. The function pointer is one extra level of
indirection, but it's much faster than branching, esp when there's several
ports involved in the decision.
> personally my interest in a solution here is very real. More people
> care about normal high level parameters and being able to interpolate
> than low-level modular synth CV stuff, but to me it's telling that (it
> seems...) one solution can solve both problems nicely.
<high five> ;)
Best Regards,
Jeff
Hi all,
The LV2 spec says that on a call to activate(), "the plugin instance MUST
reset all state information dependent on the history of the plugin instance
except for any data locations provided by connect_port()"
I am not certain whether MIDI CC parameters are included in this category
of "data locations provided by connect_port()". The CC parameters are sent
through port buffers provided by connect_port(), but because they are
*event* buffers, all information passed through them is necessarily part
of the *history* of the plugin instance.
I could imagine cases where you would want to reset all internal state of
the plugin, but since CC values are very much like port values, they would
be kept. On the other hand, I could also imagine cases where you would
want to reset all internal data including the CC parameters.
I'm assuming MIDI note on/off status certainly should be reset.
Thanks,
Jeremy Salwen