Hello laddies,
I am making an LV2 extension for accessing and/or restricting the buffer
size. This is straightforward, but I need to know just what
restrictions are actually needed by various sorts of DSP.
The sort of thing we're looking for here is "buffer size is always at
least 123 frames" or "buffer size is always a power of 2" or "buffer
size is always a multiple of 123".
I know "multiple of a power of two" is needed for convolution. Not sure
what else...
-dr
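[Editor's note: the restriction classes listed above ("at least N frames", "power of 2", "multiple of N") can be expressed as simple predicates a host might check before choosing a buffer size. This is an illustrative sketch only; the function names are invented and are not part of any LV2 API.]

```c
#include <stdint.h>

/* Does n satisfy "buffer size is always a power of 2"? */
static int is_power_of_two(uint32_t n)
{
    /* A power of two has exactly one bit set, so n & (n - 1) clears it. */
    return n > 0 && (n & (n - 1)) == 0;
}

/* Does n satisfy "buffer size is always a multiple of m"? */
static int is_multiple_of(uint32_t n, uint32_t m)
{
    return m > 0 && n % m == 0;
}

/* Does n satisfy "buffer size is always at least min_frames"? */
static int is_at_least(uint32_t n, uint32_t min_frames)
{
    return n >= min_frames;
}
```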
On 31 May 2012 03:41, Kaspar Bumke <kaspar.bumke(a)gmail.com> wrote:
> Hey,
>
> Just tested out drumreplacer. Seems to work well. I am going to go through
> the code and see if I can use it as a basis for a more advanced
> drumreplacer.
>
> For now I was just making an Arch Linux AUR package and was wondering
> about the license (have to put it in the package). Is it just public domain
> or did I miss something?
>
> Regards,
>
> Kaspar
>
On 31 May 2012 09:38, Marc R.J. Brevoort <mrjb(a)dnd.utwente.nl> wrote:
> Hi Kaspar,
>
>
> Just tested out drumreplacer. Seems to work well. I am going to go through
>> the code and see if I can use it as a basis for a more advanced
>> drumreplacer.
>>
>
> At present it's pretty basic. It does peak detection by checking if a wave
> goes over its threshold level, then (if I remember correctly)
> starts a counter to see how long it takes to get to another threshold
> level to extract MIDI velocity. In other words, at the moment it works
> entirely in the amplitude domain. This works pretty well for multi-track
> recordings, but for existing stereo tracks, doing the work in the frequency
> domain might work better.
>
>
> For now I was just making an Arch Linux AUR package and was wondering
>> about
>> the license (have to put it in the package). Is it just public domain or
>> did
>> I miss something?
>>
>
> I usually think of my packages as GPL'ish, but granted, in this case I
> probably forgot to explicitly mention a licensing scheme, which means
> at the moment it's officially under copyright law. Obviously far more
> restrictive than I intended.
>
> I have a slant towards GPL as this will help guarantee that the
> source code is going to remain accessible to the public to tinker with.
> So as far as I'm concerned you can release it as GPL and keep this
> email as evidence that I've given you written permission to do that.
> Adding the generic LICENSE.txt file to the package should suffice.
>
> Good luck. If you need any help explaining the code let me know. I'll do
> my best (though it's 3 years back by now!)
>
> Best,
> Marc
>
On 31 May 2012 14:43, Kaspar Bumke <kaspar.bumke(a)gmail.com> wrote:
> Hi Marc,
>
>
> I have a slant towards GPL as this will help guarantee that the
>> source code is going to remain accessible to the public to tinker with.
>> So as far as I'm concerned you can release it as GPL and keep this
>> email as evidence that I've given you written permission to do that.
>> Adding the generic LICENSE.txt file to the package should suffice.
>>
>>
> Cool, I marked it as GPL which means GPLv2 or later. Are you OK with that?
> Common licenses are available by default so they don't need to be in the
> package.
>
> I needed to add a stdlib.h include to src/lib/convertlib.h to make it
> compile with gcc 4.7 by the way.
>
> Good luck. If you need any help explaining the code let me know. I'll do
>> my best (though it's 3 years back by now!)
>
>
> I have started looking through the code. The FLTK stuff is a bit confusing
> to me, so I think I will start out by trying to extract the Jack process and
> plug that into the simple command line jack client. I want to make an
> OSC controlled back-end separate from the GUI so that one day maybe I could
> put it in an embedded system to make an open source drum brain! I can see
> that you started out with frontend and backend directories, but it looks like
> you ended up putting everything in the frontend.
>
>
> At present it's pretty basic. It does peak detection by checking if a wave
>> goes over its threshold level, then (if I remember correctly)
>> starts a counter to see how long it takes to get to another threshold
>> level to extract MIDI velocity. In other words, at the moment it works
>> entirely in the amplitude domain. This works pretty well for multi-track
>> recordings, but for existing stereo tracks, doing the work in the frequency
>> domain might work better.
>>
>
> Ah OK, cool. I am really glad I found your project as this is a basic
> enough example for me to start understanding just how to simply get audio
> in and MIDI out, once I have that down I will look at the signal processing
> in more detail, do FFTs etc. and maybe a neural network... haha, who knows.
> You wouldn't happen to have any recommended reading on the theory behind
> drum replacement techniques? Any tips on what you changed from 0.1 to 0.2
> that made that crucial difference in performance?
>
> Kind Regards,
>
> Kaspar
>
On 31 May 2012 22:21, Marc R.J. Brevoort <mrjb(a)dnd.utwente.nl> wrote:
> Hi Kaspar,
>
>
> Cool, I marked it as GPL which means GPLv2 or later. Are you OK with that?
>>
> Absolutely.
>
>
> I needed to add a stdlib.h include to src/lib/convertlib.h to make it
>> compile with gcc 4.7 by the way.
>>
>
> I guess it's already starting to show its age a bit ;)
>
>
> I have started looking through the code. The FLTK stuff is a bit confusing
>> to me, so I think I will start out by trying to extract the Jack process
>> and
>> plug that into the simple command line jack client. I want to make an
>> OSC controlled back-end separate from the GUI so that one day maybe I
>> could
>> put it in an embedded system to make an open source drum brain! I can see
>> that you started out with frontend and backend directories, but it looks
>> like
>> you ended up putting everything in the frontend.
>>
>
> Correct, I based the empty application on another one I did earlier but
> couldn't be bothered to do proper frontend-backend separation in its early
> stages. That's probably a mistake.
>
>
> Ah OK, cool. I am really glad I found your project as this is a basic
>> enough example for me to start understanding just how to simply get audio
>> in
>> and MIDI out
>>
>
> You'll also notice that it's JACK audio in, but rather than JACK MIDI out,
> it's ALSA MIDI out instead. Reason is that when I wrote drumreplacer, JACK
> MIDI was basically unsupported, even by JACK tools such as qjackctl. Things
> probably have changed at least somewhat, three years down the line.
>
> Some explanation on how things work - do with it as you please.
>
> As you've noticed, most of the magic happens in
> UserInterface::jack_process().
>
> The peak scanning: As an input wave is being scanned faster than realtime,
> one can't simply send out MIDI at the moment a peak is detected. Peaks at
> the end of a wave snippet would be triggered too
> quickly compared to peaks at the start of a wave snippet. Instead,
> the output has to be scheduled so that the latency between wave
> peak and MIDI trigger remains constant. (This is why the MIDI triggering
> is done through Fl::add_timeout() instead of just playing the note).
>
> If I recall correctly, the previous, 1-track version of drumreplacer
> didn't schedule notes at all and therefore to keep beats steady,
> it needed to use very small buffers and always had to trigger its
> notes immediately. Obviously this would result in poor performance.
>
> More about the note triggering: One thing to keep in mind is that
> Fl::add_timeout() is really a user-interface function. The delay is
> specified as milliseconds, but in reality it's not quite that
> accurate. Ideally, instead of a user interface timeout one would use
> a sample-accurate MIDI note scheduler.
>
> User interface controls:
>
> - Sens. is sensitivity, the level at which the note will trigger.
> - Res, the resolution - how often a note is allowed to retrigger.
> Related to variable "retrig" in the code.
> - Mid ch, note, are the MIDI channel and note number being output
> when the audio surpasses the threshold.
> - Min veloc and Max veloc are the minimum and maximum velocity settings at
> which the note is played. If a note only reaches the threshold value, it will
> be played at minimum velocity; if it reaches the maximum value (+1 or -1 as
> float), it will be played at the maximum given velocity.
>
>
> One clever bit is that when a note is scheduled for playback, the actual
> velocity at which it will be played isn't known yet, because that is only
> determined *after* the threshold level is reached.
> The note playback is scheduled, and at that time the velocity value is set
> to "minimum velocity".
>
> But meanwhile, before the MIDI is sent out, the wave scanning proceeds,
> and may update the velocity to the highest found peak, until either the
> resolution knob timeout occurs (after which peak detection is reset) or
> until the MIDI note schedule demands the note to be played immediately, in
> which case it will be played at the highest velocity found between
> triggering the note and the actual playback event.
>
> Hope this helps!
>
> Best,
> Marc
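[Editor's note: Marc's deferred-velocity scheme above, where a note is scheduled at minimum velocity and then raised to the highest peak found before the playback event fires, can be sketched roughly like this. Illustrative only, not drumreplacer's actual code; all names are invented.]

```c
#include <stdint.h>

/* State for one scheduled drum hit whose velocity is still being refined. */
typedef struct {
    int     pending;   /* 1 while scheduled but not yet sent */
    uint8_t velocity;  /* highest MIDI velocity found so far */
} PendingNote;

/* Threshold crossed: schedule the note at minimum velocity for now. */
static void schedule_note(PendingNote *n, uint8_t min_velocity)
{
    n->pending  = 1;
    n->velocity = min_velocity;
}

/* Wave scanning continues and may raise the velocity to a higher peak. */
static void update_peak(PendingNote *n, uint8_t peak_velocity)
{
    if (n->pending && peak_velocity > n->velocity)
        n->velocity = peak_velocity;
}

/* The scheduled playback time arrives: emit at the best velocity found. */
static uint8_t fire_note(PendingNote *n)
{
    n->pending = 0;
    return n->velocity;
}
```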
I copied the whole conversation to LAD just because I like lurking on there
and reading technical discussions I don't fully understand. Hope that's all
right with you.
You'll also notice that it's JACK audio in, but rather than JACK MIDI out,
> it's ALSA MIDI out instead. Reason is that when I wrote drumreplacer, JACK
> MIDI was basically unsupported, even by JACK tools such as qjackctl. Things
> probably have changed at least somewhat, three years down the line.
>
That's weird, because it appears as a Jack MIDI program/device in Jack
(qjackctl), which I noticed right away: my MIDI-USB devices appear
under ALSA, and it is a (minor) annoyance to deal with the two different
MIDI systems and get them to connect. Most things still seem to default to
ALSA these days, for better or for worse (maybe someone from LAD could chime
in here with their wealth of knowledge: your comments say the timing is
accurate to 1/1000 of a second; is that still the case? Is that bad?).
Some explanation on how things work - do with it as you please.
>
Thanks so much for the explanation. I may hit you up with more questions as I
dive more into the code.
Kind Regards,
Kaspar
> From: David Robillard <d(a)drobilla.net>
>
> I'm a modular head, I remain convinced that control ports are nothing
> but a pain in the ass and CV for everything would be a wonderful
> fantasy land :)
It's called "SynthEdit land": *everything* is CV ;) (not on Linux, sorry).
> As it happens, I am currently porting the blop plugins to LV2, and
> making a new extension in order to drop the many plugin variants (which
> are a nightmare from the user POV). This simple extension lets you
> switch a port from its default type (e.g. Control) to another type
> (e.g.
> CV). The pattern looks something like this:
>
> /* plugin->frequency_is_cv is 1 if a CV buffer, 0 if a single float */
> for (uint32_t i = 0; i < sample_count; ++i) {
>     const float freq = frequency[i * plugin->frequency_is_cv];
>     if (freq != plugin->last_frequency) {
>         recalculate_something(freq);
>         plugin->last_frequency = freq;
>     }
>
>     /* Do stuff */
> }
That's smart. In a simple example this doesn't seem like much of a win,
because a 1-port plugin has only two possible variants (frequency as
single float / buffer). But...
* A 2-port plugin has 4 variants.
* A 3-port plugin has 8 variants.
* A 10-port plugin has 1024 variants!
So you're avoiding that combinatorial nightmare.
I do something similar. The port is flagged as either 'streaming' (use the
entire buffer) or 'static' (use a single float). My point of difference is
that the entire buffer is provided either way. So you have the option of
writing the plugin like...
const float freq = frequency[i];
...OR...
const float freq = frequency[i * plugin->frequency_is_cv];
...and it works transparently either way. So the extension is backward
compatible with 'dumb' plugins, or 'dumb' plugin standards like VST (I can
interface VST plugins with modular components).
> Doing those comparisons to see if the value actually changed since the
> last sample in order to recalculate is not so great (branching).
I don't know if you can implement what I do. Once I know which ports are
single floats, I 'switch' processing functions, i.e. use a function pointer
to select one of several optimised functions. So you write a general-purpose
loop like the one above; this is your fallback. Then you write an optimised
one that assumes 'frequency' is a single float. This one has no branching
and no extra multiplication, so it's super efficient. You get the best of
both worlds. Note I don't write loops optimised for every possible
combination; I just pick a few key ones. The function pointer is one extra
level of indirection, but it's much faster than branching, especially when
several ports are involved in the decision.
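[Editor's note: the function-pointer dispatch Jeff describes might look like the following sketch, with all names invented for illustration: one generic loop as the fallback, one specialised loop for the "single float" case, and a one-time selection between them when the port configuration becomes known.]

```c
#include <stddef.h>

typedef struct Plugin Plugin;
struct Plugin {
    const float *frequency;        /* buffer, or a single float at [0]    */
    int          frequency_is_cv;  /* 1 = per-sample CV, 0 = single float */
    void       (*process)(Plugin *, float *, size_t);
};

/* General-purpose fallback: handles both CV and single-float frequency. */
static void process_generic(Plugin *p, float *out, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = p->frequency[i * p->frequency_is_cv];
}

/* Optimised variant: assumes a single float, no per-sample indexing. */
static void process_static_freq(Plugin *p, float *out, size_t n)
{
    const float freq = p->frequency[0];
    for (size_t i = 0; i < n; ++i)
        out[i] = freq;
}

/* Chosen once, when the port configuration is known, not per sample. */
static void select_process(Plugin *p)
{
    p->process = p->frequency_is_cv ? process_generic : process_static_freq;
}
```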
> personally my interest in a solution here is very real. More people
> care about normal high level parameters and being able to interpolate
> than low-level modular synth CV stuff, but to me it's telling that (it
> seems...) one solution can solve both problems nicely.
<high five> ;)
Best Regards,
Jeff
Hi all,
The LV2 spec says that on a call to activate(), "the plugin instance MUST
reset all state information dependent on the history of the plugin instance
except for any data locations provided by connect_port()"
I am not certain whether MIDI CC parameters are included in this category
of "data locations provided by connect_port()". The CC parameters are sent
through port buffers provided by connect_port(), but because they are
*event* buffers, all information passed through them is necessarily part of
the *history* of the plugin instance.
I could imagine cases where you would want to reset all internal state of
the plugin, but since CC values are very much like port values, they would
be kept. On the other hand, I could also imagine cases where you would
want to reset all internal data including the CC parameters.
I'm assuming MIDI note on/off status certainly should be reset.
Thanks,
Jeremy Salwen
> I think providing synchronous control events, with 'future' values (at
> least some distance L in the future) is the way to get that. Let's
> pretend that the Ultimate Plugin Interface (UPI) 1.0 exists, works this
> way, is stable and unmalleable, and all you have to work with to
> deliver
> your product (a plugin).
>
> From the plugin author's perspective: is there anything that is
> *impossible* to do correctly?
>
> -dr
I believe it is simply impossible to reliably deliver 'future' parameter
values.
Even when the automation is pre-recorded. E.g. smoothly ramping up a
parameter over 1 second. You can't say to the plugin 'ramp this parameter
over 1 second' - because partway through the 'ramp' the user can reposition
the playback to another part of the song, or loop a section, or change the
tempo, or hit 'Stop'. Any attempt to predict the future like that leads to
kludgy hacks.
Now you can say to the plugin 'process 100 samples' while specifying the
parameter value at sample #0 and also at sample #99. That is how to specify
a precise ramp (or a section of a longer ramp) without providing future
parameter values. Apologies if that's what you meant. (I classify 'future'
values in this example as being later than sample #99.)
Best Regards,
Jeff
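[Editor's note: Jeff's scheme of giving the parameter value at the first and last sample of a block is enough for the plugin to reconstruct an exact linear ramp segment with no knowledge of the future beyond the block. A minimal sketch:]

```c
/* Fill out[0..n-1] with a linear ramp from v_first (sample 0)
   to v_last (sample n-1), as in the 'process 100 samples' example. */
static void ramp_block(float *out, unsigned n, float v_first, float v_last)
{
    if (n == 0)
        return;
    if (n == 1) {
        out[0] = v_first;
        return;
    }
    const float step = (v_last - v_first) / (float)(n - 1);
    for (unsigned i = 0; i < n; ++i)
        out[i] = v_first + step * (float)i;
}
```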
> From: Fons Adriaensen <fons(a)linuxaudio.org>
> Subject: Re: [LAD] Plugin buffer size restrictions
> On Mon, May 28, 2012 at 06:05:20PM +0300, Stefano D'Angelo wrote:
>
> > IMO it's easily said: if control rate < audio rate it's plugin's
> > responsibility, otherwise the host feeds upsampled/filtered control
> > signals at audio rate to the plugin and all problems evaporate...
>
> They don't evaporate, they explode.
>
> Take a filter plugin. Calculating the actual filter coefficients from
> the 'user' parameters (frequency, gain, etc..) can easily be
> 10..1000 times more complex than actually using those coefficients to
> process one sample. So you really don't want to do that at the audio
> sample rate.
True, but with synths at least, recalculating the filter at audio rate is
routine these days. Admittedly we are pre-calculating the full range of
coefficients in advance.
You do get *very* nice filter sweeps and blips at full audio-rate
modulation.
Best Regards,
Jeff
Hi!
I'm CC'ing this to LAD and jack-devel with reply-to set to jack-devel,
hope the listservers keep the header fields intact.
Though it reads jackd1 there, it also affects jackd2.
No action has been taken yet, but I tend to agree with the bug report
and will (temporarily) disable celt support in both packages.
The older celt versions are about to vanish, so neither Debian nor
Ubuntu will provide it within one year from now or so.
I haven't followed the Opus discussion recently, but it seems the CELT
support in both jackds needs to be replaced by Opus.
The Wheezy freeze is scheduled for "mid June", so removing CELT-0.7 is
likely to happen within the next weeks.
So as a warning to all users, it looks like Debian and Debian-based
distros are losing CELT support in netjack until somebody steps up and
ports the code to Opus.
Cheers
-------- Original Message --------
Subject: Bug#674651: Please disable celt support in jack
Resent-Date: Sat, 26 May 2012 12:36:50 +0000
Resent-From: Ron <ron(a)debian.org>
Resent-To: debian-bugs-dist(a)lists.debian.org
Resent-CC: Debian Multimedia Maintainers
<pkg-multimedia-maintainers(a)lists.alioth.debian.org>
Date: Sat, 26 May 2012 21:28:19 +0930
From: Ron <ron(a)debian.org>
Reply-To: Ron <ron(a)debian.org>, 674651(a)bugs.debian.org
To: Debian Bug Tracking System <submit(a)bugs.debian.org>
Package: jackd1
Version: 1:0.121.3+20120418git75e3e20b-1
Severity: normal
Hi,
We're planning on removing the celt package from Wheezy, since we now have
a stable release of Opus that people can really use. Please disable celt
in jack so that we can move ahead with doing that before the freeze.
If you can and wish to enable Opus support, that would be great, but for
now we're mostly concerned with not shipping an obsolete celt version for
another whole release cycle.
Thanks!
Ron
_______________________________________________
pkg-multimedia-maintainers mailing list
pkg-multimedia-maintainers(a)lists.alioth.debian.org
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-multimedia-main…
> I assume that computing parameter trajectories basically means
> interpolating, and that inevitably introduces some latency.
That's a key point. Interpolating any sampled signal introduces latency.
> let the host pass a limited number of future parameter samples at each
> run() (could be negotiated at instantiation time), so that the plugin
> doesn't have to add latency to the audio streams in any case. Would be
> only supported by "offline hosts". If the block sizes are variable,
> future block sizes should be passed as well (argh?). But I don't know
> if this really makes sense or has downsides... ideas, folks?
I *really* hate this idea.
I play my MIDI keyboard into my DAW, perhaps while using my mod wheel, or
artistically using the filter-cutoff parameter... I hit record... stop...
Then I push 'offline render'.
You would say: shift all my parameter events earlier in time and render
the result to disk? It's going to sound different. The timing will be
wrong. A DAW is like a tape recorder; playback or offline rendering should
surely result in an identical performance.
Why are you selectively shifting some musical events in time but not
others, why not note-ons too?
You can't provide live MIDI playing 'in advance', you can't provide parameter
updates in advance, just like you can't provide live audio in advance. If
the plugin wants 'future' data to interpolate stuff, it needs to introduce
latency. A good host will compensate for latency if the plugin API supports
that.
Parameters aren't special; they don't require any different handling than
MIDI. What's the difference between a MIDI controller tweaking the filter
cutoff and directly tweaking the parameter? Nothing. They both need
smoothing, they both need interpolating, they both will have latency. Don't
overcomplicate it.
Jeff
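[Editor's note: the smoothing Jeff says both MIDI CCs and direct parameter tweaks need is often just a one-pole lowpass applied to the parameter stream. A minimal sketch; the coefficient choice is illustrative.]

```c
/* One-pole parameter smoother: state chases the target each sample. */
typedef struct {
    float state;  /* current smoothed value */
    float coeff;  /* 0..1; smaller = slower, smoother response */
} Smoother;

static float smooth(Smoother *s, float target)
{
    s->state += s->coeff * (target - s->state);
    return s->state;
}
```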
Greetings:
The problem: When I build Kdenlive for my Arch 64 system it compiles
without problems and starts up okay. After that the sluggishness of its
response is almost unbearable. For example, I've timed up to 10 seconds
between a mouse click and the resulting action, e.g. right-click to Add
Clip, and everything stalls badly when the program is calculating audio
or video thumbnails. Response time gets better after running the program
for a while, but getting it going is rather frustrating.
I've already searched Google and asked about the problem on the Kdenlive
forum and got no useful replies, hence my request here. Some of you use
Arch systems, and some of you know Qt well enough that maybe you can
advise me on this problem. (I'm hoping for a relatively simple solution,
of course).
Incidentally, the problem occurs with Arch's packaged Kdenlive and the
binary I compile with the build script. I just upgraded both; the
problem is still there. Other large Qt apps don't have the problem, e.g.
QTractor. The problem is also absent with Kdenlive on my 32-bit machines.
Btw, video is an nVidia GeForce 7600 GS, with nVidia's driver.
Best,
dp
Maybe this is of interest to some of you :D
Flo
-------- Original Message --------
Subject: [music-dsp] ANN: Book: The Art of VA Filter Design
Date: Fri, 25 May 2012 10:54:25 +0200
From: Vadim Zavalishin <vadim.zavalishin(a)native-instruments.de>
Reply-To: A discussion list for music-related DSP
<music-dsp(a)music.columbia.edu>
To: A discussion list for music-related DSP <music-dsp(a)music.columbia.edu>
Hi all
This is kind of a cross-announcement from KVRAudio, but since there are
probably a number of different people on this list, I thought I'd
announce it here as well. Get it here:
http://ay-kedi.narod2.ru/VAFilterDesign.pdf
http://images-l3.native-instruments.com/fileadmin/ni_media/downloads/pdf/VA…
http://www.discodsp.net/VAFilterDesign.pdf (thanks to "george" for
mirroring)
There is a discussion thread at
http://www.kvraudio.com/forum/viewtopic.php?t=350246
Regards,
Vadim
--
Vadim Zavalishin
Software Integration Architect | R&D
Tel +49-30-611035-0
Fax +49-30-611035-2600
NATIVE INSTRUMENTS GmbH
Schlesische Str. 29-30
10997 Berlin, Germany
http://www.native-instruments.com
Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747
Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp