Hi everyone,
as some of you might know already, the "Linux Audio Conference 2013"
will be back in Europe, this time hosted by the
Institute of Electronic Music and Acoustics (iem)
in Graz, Austria.
We are planning to hold the conference from the 9th to the 12th of May 2013.
We have checked a number of computer-music-related conferences, and it
seems that none of them collides with these dates, so *you* should be able
to attend!
We are still in an early stage of organization, but among the things to
expect are:
- inspiring paper sessions on Linux-centered audio
- interesting hands-on workshops
- wild electroacoustic concerts (possibly using our
higher-order-ambisonics systems for periphonic sound rendering)
- cool club nights
- fuzzy media art installations
- scenic trips to the country-side
- nice people
- numerous things I forgot
I will keep you informed of any news regarding deadlines, registration,
the website, and more.
Stay tuned!
nmfgadsr
IOhannes
---
Institute of Electronic Music and Acoustics (iem)
University of Music and Dramatic Arts, Graz, Austria
http://iem.at/
> I think you are in error considering these things mutually exclusive.
> Yes, hosts dealing with MIDI binding is how things should be done, but
> crippling a plugin API to not be able to handle MIDI is just that:
> crippling. Maybe I want to patch up a bunch of plugins to process MIDI
> events, or have some MIDI effect plugins: these are certainly
> reasonable things to do.
Hi dr,
I think we miscommunicated. MIDI is *fully* supported, including SYSEX,
delivered to plugins as raw unmolested bytes. Plugins can and do function as
MIDI processors.
The 'Guitar de-channelizer' is supplied as an example MIDI processor with
the SDK, as is the 'MIDI to Gate' plugin.
The idea of binding MIDI to the plugin's parameters is a purely optional
alternative.
> LV2 UIs are also like this, though there is an extension to provide a
> pointer to the plugin instance to the UI.
>
> In theory this should only be used for displaying waveforms and such,
> and always be optional.
The way I display waveforms: the API has a function sendMessageToGui() that
sends an arbitrary bunch of bytes to the GUI in a thread-safe manner. You
can build on that to send waveforms etc. Neither the DSP nor the GUI needs a
pointer to the other (but they can have one if they *really* want to).
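For example, the DSP side can push a block of samples to the GUI roughly
like this (a simplified sketch; the real signature and message IDs differ):

    /* Sketch only: the sendMessageToGui() signature is assumed here,
       and WAVEFORM_MSG is an illustrative message ID. */
    extern void sendMessageToGui(void* handle, int id,
                                 int size, const void* data);

    enum { WAVEFORM_MSG = 1 };

    void publish_waveform(void* handle, const float* samples, int count)
    {
        /* The host copies the bytes and hands them to the GUI thread,
           so the DSP side never blocks on a GUI lock. */
        sendMessageToGui(handle, WAVEFORM_MSG,
                         count * (int)sizeof(float), samples);
    }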
> Your argument sounds very obviously right because it's about numeric
> parameters, but note and voice control is trickier. That involves
> inventing a new, better event format.
I will disagree and say MIDI note and voice control is pretty good,
*provided* you support MIDI real-time tuning changes (an existing
MIDI SYSEX command that can tune any note to any fractional pitch in
real time, a.k.a. micro-tuning) ...and... support "Key-Based Instrument
Control" (another little-known MIDI command that provides 128 per-note
controllers).
By supporting these two MIDI commands you get the familiarity of MIDI with
the addition of:
* Fractional Pitch.
* Per-note controllers.
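For reference, building a real-time single-note tuning change looks
roughly like this (byte layout from memory of the MIDI Tuning Standard,
so double-check it against the spec before relying on it):

    #include <stdint.h>

    /* 'fraction14' is the fraction of a semitone above 'semitone',
       in 1/16384-semitone steps (14 bits, split over two data bytes). */
    static int make_tuning_change(uint8_t* msg, uint8_t key,
                                  uint8_t semitone, uint16_t fraction14)
    {
        int i = 0;
        msg[i++] = 0xF0; msg[i++] = 0x7F;    /* universal real-time SysEx */
        msg[i++] = 0x7F;                     /* device ID: broadcast      */
        msg[i++] = 0x08; msg[i++] = 0x02;    /* MIDI tuning, note change  */
        msg[i++] = 0x00;                     /* tuning program number     */
        msg[i++] = 0x01;                     /* one note changed          */
        msg[i++] = key & 0x7F;               /* key to retune             */
        msg[i++] = semitone & 0x7F;          /* target pitch, whole part  */
        msg[i++] = (fraction14 >> 7) & 0x7F; /* fraction, upper 7 bits    */
        msg[i++] = fraction14 & 0x7F;        /* fraction, lower 7 bits    */
        msg[i++] = 0xF7;                     /* end of SysEx              */
        return i;                            /* 12 bytes total            */
    }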
By binding MIDI to the plugin's parameters as 32-bit 'float' via metadata,
you remove the need to support MIDI explicitly, you kill the dependency on
MIDI's 7-bit resolution, and you remain open to extending the API in future
to support OSC.
GMPI parameter events include an optional 'voice-number', which extends the
MIDI-binding system to note events and polyphonic aftertouch. I can build an
entirely MIDI-free synthesiser, yet the metadata bindings make it fully
MIDI-compatible.
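The shape of such a parameter event is roughly as follows (the field
names are mine for illustration, not the actual GMPI definitions):

    #include <stdint.h>

    typedef struct {
        uint32_t sample_offset; /* timestamp within the audio block    */
        uint32_t param_id;      /* which parameter                     */
        int32_t  voice;         /* e.g. -1 = whole patch, else a voice */
        float    value;         /* normalized, full-resolution value   */
    } ParamEvent;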
> This is precisely the kind of reason why monolithic non-
> extensible specifications suck.
GMPI is extensible too. For example, MS-Windows GUIs are provided as an
extension (so the core spec can be adapted to other platforms), as is
support for some SynthEdit-specific features that don't really belong in the
core spec.
Best Regards,
Jeff
Hi,
If you would like to promote your Linux audio business or educational
facility to a wider audience who are actively looking for information
about Linux audio, there is some banner real estate on
http://linux-audio.com which is available for a limited time, free of
charge, to early birds.
Please contact me directly and I will give you more details on the process.
--
Patrick Shirkey
Boost Hardware Ltd
Hi,
Thanks Rui for the 'Vee One prototypes', released as LV2 plugins and
standalone versions. http://www.rncbc.org/drupal/node/549
It's nice that the user has the choice of using the sampler/synth as a
plugin or as a standalone application. Another nice thing in this release
is that you included Session Management support in the standalone
versions, JackSession in this case.
This makes me wonder:
Why does every developer make his own little single-instance host for
his standalone version of the LV2 plugins? Why isn't there a standard
single-instance LV2 host which can be used by all developers for
the standalone versions of the LV2 plugins they make? A small host that
devs can link to and which works like some kind of run-time dependency
of the standalone version. Didn't DSSI have some kind of system like that?
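To sketch the core of what I mean (this uses lilv's public API to load
and run the plugin; error handling, port setup and the JACK/SM glue are
all omitted):

    #include <lilv/lilv.h>

    int main(int argc, char** argv)
    {
        if (argc < 2)
            return 1;

        /* Load all installed LV2 bundles and find the requested plugin. */
        LilvWorld* world = lilv_world_new();
        lilv_world_load_all(world);
        const LilvPlugins* plugins = lilv_world_get_all_plugins(world);
        LilvNode* uri = lilv_new_uri(world, argv[1]);
        const LilvPlugin* plugin = lilv_plugins_get_by_uri(plugins, uri);

        /* Instantiate; a real host would pass its LV2 features here. */
        LilvInstance* instance =
            lilv_plugin_instantiate(plugin, 48000.0, NULL);

        /* ...connect ports, register a JACK process callback that calls
           lilv_instance_run(), and hook up the session client... */
        lilv_instance_activate(instance);
        /* run until the session manager tells us to quit */
        lilv_instance_deactivate(instance);

        lilv_instance_free(instance);
        lilv_node_free(uri);
        lilv_world_free(world);
        return 0;
    }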
One of the big advantages of this is that you could eliminate a large
part of the session problem in the Linux audio ecosystem. Every new
release of an LV2 standalone application is another application which
needs to be patched for some kind of Session Management. This is
cumbersome for devs and users.
If that standard single-instance LV2 host supported Session Management by
default (NSM/Ladish/JackSession/whatever), you would solve a significant
part of the problems you encounter when working with standalone JACK apps
on Linux.
1) Users have the choice to use the plugin as a standalone version, with
the SM functionality;
2) Developers don't have to patch their standalone versions with SM support;
3) Users have more freedom to choose the SM system they want, because most
new LV2 standalone versions will support the most popular SM systems.
Best regards,
Dirk (alias Rosea Grammostola)
Hello fellas!
You have probably bumped into one or two videos of AudioGL, a sort of
modular DAW (http://youtu.be/bCC9uHHAEuA).
Of course, none of us knows whether the video is genuine, whether the
program is as smooth as it appears to be, or whether it really works as
well as it seems to in the demo.
However, assuming that it does, it displays a rather robust music
production environment.
I have two questions.
1. How is it possible that such complex functionality seems to run that
smoothly? Or does it look more complex than it really is? The GUI seems
very responsive and animated.
2. The project seems to be developed just by one person, yet it looks like
a very functional music sequencer. Am I missing something here or is it a
matter of personal talent and commitment?
Cheers!
--
Louigi Verona
http://www.louigiverona.ru/
Hi all,
I've been working on an LV2 instrument plugin, and it consumes about 1-2%
CPU when idle. When I leave it for about 20 seconds, the CPU usage jumps to
38-40% of a core, and JACK xruns. The code contains IIRs for a reverb
effect, so I'm going to blame this CPU burning on denormal values.
I'm using waf as the build system, and appending "-O3" and "-ffast-math" to
the CFLAGS and CXXFLAGS. Building with ./waf -v shows that the compile
commands do include "-O3" and "-ffast-math".
Yet when I run it, it still hogs CPU after about 10-20 seconds.
Reading gcc's pages (http://www.acsu.buffalo.edu/~charngda/cc.html) tells
me that for DenormalsAreZero and FlushToZero to be set, the binary should
be linked with crtfastmath.o. I don't know how to check whether this is
happening.
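One thing I may try is setting the flags explicitly on the audio thread;
as I understand it, the MXCSR flags are per-thread, so whatever
crtfastmath.o sets at load time wouldn't apply to JACK's process thread
anyway. Something like this (FTZ needs SSE, DAZ needs SSE3):

    #include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE     */
    #include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */

    /* MXCSR is per-thread: call this from the thread that runs the
       audio callback, e.g. at the top of the plugin's run(). */
    static void enable_ftz_daz(void)
    {
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
    }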
Beyond that, I'm not sure where to look to fix the problem. Help appreciated!
-Harry
I have adapted the GMPI requirements final draft document to a
comparison with the current state of LV2: http://lv2plug.in/gmpi.html
A couple of nonsense baroque ideas aside, most of the requirements are
met, though there are still important gaps. I mention it here in case
anyone has an interest; please feel free to address any of the points
made in this document.
It may also be useful to augment this document with additional
requirements, particularly since there are several knowledgeable folks who
may not grok the *how* of LV2 but know *what* they need in terms of
general requirements. I will add a section for this if anybody has any
input.
Perhaps this will serve as a good road map that is not too bogged down
with details.
Cheers,
-dr
Hi All,
I know I've posted an announcement about Praxis / Praxis LIVE here
before. I thought a few people might be interested in this blog entry
about Praxis' underlying architecture and the influence of the Actor
Model. I'd be really interested in any comments with thoughts, insights,
corrections, stupid mistakes, similar models in use elsewhere, etc.
http://praxisintermedia.wordpress.com/2012/07/26/the-influence-of-the-actor…
Thanks in advance.
Best wishes,
Neil
--
Neil C Smith
Artist : Technologist : Adviser
http://neilcsmith.net
Praxis - open-source intermedia system for live creative play -
http://code.google.com/p/praxis
OpenEye - specialist web solutions for the cultural, education,
charitable and local government sectors - http://openeye.info
> > For historical interest. I did complete the GMPI prototype.
> I don't suppose the code for those modular synthesis plugins is
> available? :)
I release as many as possible as open source. Unfortunately, before I used
plugins I coded everything as part of my application, so most of the good
filters, oscillators, etc. are not available as plugins; when I have time
I'll do more. The stuff I have released is included with the SDK...
http://www.synthedit.com/software-development-kit/
It's pretty specific to SynthEdit and the SEM SDK though, so it might not
be much help to anyone.
> > The SEM DSP Plugin API has a total of four functions { open(),
> > setBuffer(), process(), receiveMessageFromGui() },
> > the remainder of the spec is covered by metadata.
> Did you consciously decide to use a setBuffer method for some reason?
> I consider that (connect_port in LADSPA/LV2) a mistake and efficiency
> problem. Passing an array to process is better.
[Plugin]--buffer-->[plugin]
I use the same buffers over and over, so I inform the plugin of the
buffer addresses once, then I need not do so again. Not passing an array to
process() means less overhead on the function call... But I think any
efficiency gain would be minimal either way; passing an array to process()
is probably just as good.
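In pseudo-C, the two styles look like this (hypothetical names, purely to
illustrate the contrast):

    /* Hypothetical API, just to contrast the two calling styles. */
    typedef struct Plugin Plugin;
    enum { PORT_IN, PORT_OUT };

    extern void setBuffer(Plugin* p, int port, float* buf);
    extern void process(Plugin* p, int nframes);           /* style A */
    extern void process_io(Plugin* p, const float* in,
                           float* out, int nframes);       /* style B */

    void run_blocks(Plugin* p, float* in, float* out,
                    int nframes, int blocks)
    {
        int i;

        /* Style A (SEM, LADSPA/LV2): connect the buffers once... */
        setBuffer(p, PORT_IN, in);
        setBuffer(p, PORT_OUT, out);
        for (i = 0; i < blocks; ++i)
            process(p, nframes);       /* ...no pointers per call */

        /* Style B: hand the buffers over on every call. */
        for (i = 0; i < blocks; ++i)
            process_io(p, in, out, nframes);
    }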
> A requirement specifically about a strong and sensible plugin/UI
> separation would have been a good one. By far the worst thing about
> porting VSTs with GUIs. A free for all for UIs to screw with DSP code
> is insane.
Absolutely. The GMPI model is complete separation of GUI and DSP. The GUI
has no direct access to the DSP. When the GUI tweaks a parameter, it does
so only via the host. The GUI can run on your iPad, the DSP on your Linux
box. No special plugin code is needed for that scenario; it's transparent.
Very clean.
> > This point is simply that you
> > should not actively *disallow* copy protection, and that you can check
> > for example that the plugin is GPL *without* instantiating it (because
> > a host might want to check the license before unintentionally breaking
> > that license by linking to the plugin).
>
> Fair enough, I guess I will just list this as met. I felt compelled to
> at least hint that LV2 is not the sort of project that would ever
> consider including evil software crippling garbage as a requirement ;)
Understood. GMPI was for both commercial and free software.
> "Including" MIDI is not a burden, it is a trivial consequence of having
> a sane event mechanism....Not allowing for MIDI is completely unrealistic,
> and would mean plugins can't work with by far the most common format of
> musical control data, and porting existing code to work as a plugin
> become dramatically more difficult. Clearly a loss. All you get
> trying
> to do things that way is endless mailing list fights about whether to
> mandate MIDI, or OSC, or the new Ultimate Control Structure, or
> whatever
> - a waste of everyone's time.
Yeah, I do support MIDI via 'events'; MIDI was trivial... but...
My concept with GMPI (not everyone agreed) was that MIDI was not required
*in* the plugin.
For example, take your MIDI keyboard's "Modulation Wheel". Imagine the
function that parses the MIDI bytes, decides what type of MIDI message it
is, and typically converts that 7-bit controller to a nice 'float' scaled
between -1.0 and +1.0. Typically every single VST synth plugin has that
code. The inefficiency is that 1000 plugin developers had to create and
debug equivalent code.
I decided the *host* should provide that routine. Written once, available
to every plugin developer. The plugin simply exposes a port, and the
metadata says "map this port to the MIDI modulation wheel".
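Host-side, the mapping boils down to something like this (a sketch;
host_set_parameter() and the port binding are hypothetical):

    #include <stdint.h>

    /* Hypothetical host call that generates a parameter event. */
    extern void host_set_parameter(int port, float value);

    void handle_midi(uint8_t status, uint8_t data1, uint8_t data2)
    {
        /* CC#1 on any channel = modulation wheel. */
        if ((status & 0xF0) == 0xB0 && data1 == 1) {
            /* Scale the 7-bit value to the bound port's range
               (0.0 .. 1.0 here for simplicity). */
            host_set_parameter(/* port bound via metadata */ 0,
                               data2 / 127.0f);
        }
    }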
The advantage is that very rich MIDI support is available to ALL plugins.
Not just the standard 7-bit controllers; I support CC, RPN, NRPN, SYSEX,
notes, aftertouch, tempo, almost everything. The MIDI parsing code exists
in only one place, so it's robust, and it saves plugin developers a lot of
time wasted duplicating existing code. Plugins are small and lightweight,
but MORE functional than the average VST plugin.
With VST it's a mess. Developers' MIDI support is highly variable; almost
none support SYSEX, for example. Half of them can't handle NRPN properly,
if at all.
The other advantage is that with MIDI abstracted and presented as
normalized 'float' values, the host can substitute OSC or HD-MIDI control
without changing the API. Your existing plugins become 'hi-definition' and
'OSC capable' without any effort. You don't have to argue "MIDI vs OSC"
because both are possible.
Back then I said 'no MIDI in GMPI'; it was interpreted as highly radical
and everyone hated the idea. I should have said 'MIDI parsing provided as a
service to the plugin'. You have to write the MIDI parsing anyhow, so why
not make it available to everyone?
> That said, sure, MIDI sucks. Something better is needed, but certainly
> not some One Ultimate Control Structure to the exclusion of everything
> else.
All I've done with GMPI is recognise that MIDI controllers can be
'normalised' and presented to the plugin as nice hi-definition floats using
the event system (atoms?). The plugin 'has no MIDI', no crufty 7-bit crap,
yet it is fully compatible with MIDI.
That's my rant over ;)
Best Regards,
Jeff
> I have adapted the GMPI requirements final draft document to a
> comparison with the current state of LV2: http://lv2plug.in/gmpi.html
For historical interest: I did complete the GMPI prototype.
It now runs on Windows (GUI + DSP support) and Mac/Linux (DSP support). We
have over 1000 plugins available on Windows, mostly related to modular
synthesis (because that's my interest).
This year I ported the SDK and several plugins to Mac, and I will also be
ensuring they run on Linux (Waves Plugins Ltd have a high-end Linux-powered
mixing desk that runs plugins; it handles mixing and effects for large
consoles and runs at a rock-solid 1 ms latency).
SEM comparison with GMPI:
http://www.synthedit.com/software-development-kit/sdk-version-3-documentation/specifications/
SEM's main differences from LV2:
* Sample-accurate parameter updates via time-stamped events.
* Provides a performance optimization mechanism for handling silent audio
streams.
* Supports fractional pitch numbers.
* Provides the ability for an instrument to define an arbitrary set of
parameters that applies to each voice.
Comments on LV2.....
>53 Disagree - Achievable entirely with metadata.
If a feature is achievable with metadata, then you DID MEET that requirement
(in a cool manner). The spec doesn't say HOW to meet the requirement.
Update your table to say 'Met' on these requirements.
The SEM DSP Plugin API has a total of four functions { open(), setBuffer(),
process(), receiveMessageFromGui() }, the remainder of the spec is covered
by metadata.
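As a C struct, the whole DSP-side interface is roughly the following (the
exact signatures are simplified from memory; see the SDK for the real
ones):

    #include <stdint.h>

    typedef struct SemDspApi {
        int32_t (*open)(void* handle);
        int32_t (*setBuffer)(void* handle, int32_t pin, float* buffer);
        void    (*process)(void* handle, int32_t sampleFrames);
        void    (*receiveMessageFromGui)(void* handle, int32_t id,
                                         int32_t size, const void* data);
    } SemDspApi;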
>61 ...It is unclear what "patches" means here.
It means presets. Or, more generally, save/restore of the plugin's state.
It doesn't mean the plugin *cares* about presets (like VST2); presets can
be a concept the host entirely takes care of.
>73 "GMPI must define a simple in-process custom UI mechanism. " Disagree -
It is unclear what this requirement means.
It means the SDK should support GUIs; for example, VST provides for opening
an OS 'Window', and what and how you draw inside that window is not
specified. More importantly, the API has a mechanism for tweaking the
plugin's parameters from the GUI. The crux is not that the API supports
graphics or drawing, but that it supports communicating with the plugin
from an abstract 'control surface' potentially running in a separate thread
or even a separate process space. I.e. in SEM, the API supports a 'GUI'
class getting/setting any of the plugin's parameters in a thread-safe
manner.
>101 "GMPI should allow for copy-protected plugins" Disagree - LV2 is not
and should not be encumbered with specific "copy protection" mechanisms.
I would say LV2 MEETS that requirement. This point is simply that you
should not actively *disallow* copy protection, and that you can check, for
example, that the plugin is GPL *without* instantiating it (because a host
might want to check the license before unintentionally breaking that
license by linking to the plugin).
All in all, it is very good to see the GMPI requirements used as a
benchmark. A lot of smart people put many months into GMPI. I think it was
considered 'too radical' at the time (the utter crap VST2 was considered
ideal by many, and still is). My suggestion that it didn't need MIDI at all
(MIDI being too limited and crufty) resurfaced in VST3's Note Expression.
What we see now with LV2, SEM and VST3 is a vindication of GMPI actually
being ahead of its time.
Jeff McClintock
www.synthedit.com
Best Regards,
Jeff