> From: Paul Davis <paul(a)linuxaudiosystems.com>
> Subject: Re: [LAD] Portable user interfaces for LV2 plugins.
> VST3 allows the GUI to run in a different process?
" The design of VST 3 suggests a complete separation of processor and edit
controller by implementing two components. Splitting up an effect into these
two parts requires some extra efforts for an implementation of course.
But this separation enables the host to run each component in a different
context. It can even run them on different computers. Another benefit is
that parameter changes can be separated when it comes to automation. While
for processing these changes need to be transmitted in a sample accurate
way, the GUI part can be updated with a much lower frequency and it can be
shifted by the amount that results from any delay compensation or other
processing offset."
> > The host needs to see every parameter tweak. It needs to be between the
> > GUI and the DSP to arbitrate clashes between conflicting control surfaces.
> > It's the only way to do automation and state recall right.
>
> well, almost. as i mentioned, AU doesn't really route parameter
> changes via the host, it just makes sure that the host can find out
> about them. the nicest part of the AU system is the highly
> configurable listener system, which can be used to set up things like
> "i need to hear about parameter changes but i don't want to be told
> more than once every 100msec" and more. It's pretty cool.
Yeah. It's important to realise that at any instant three entities hold a
parameter's value:
- The audio processor part of the plugin.
- The GUI part.
- The host.
A parameter change can come from several sources:
- The GUI.
- The host's automation playback.
- A MIDI controller.
- Sometimes the audio processor (e.g. a VU meter).
If several of these are happening at once, some central entity needs to give
one of them priority. For example, if a parameter/knob is moving due to
automation and you click that control, the automation needs to relinquish
control until you release the mouse. The host is the best place for this logic.
Think of the host as holding the parameter, with the GUI and audio processor as
'listeners'. Or think of the host's copy of the parameter as the 'model' and
the GUI and audio processor as 'views' (the Model-View-Controller pattern).
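To make the model/listener idea concrete, here is a tiny sketch in C (all
names are hypothetical, not taken from any particular plugin API): the host
owns the value and decides who wins, the GUI and DSP just get notified.

#include <stdbool.h>

typedef enum { SRC_GUI, SRC_AUTOMATION, SRC_MIDI, SRC_DSP } param_source;

/* a listener is just a callback the host invokes on changes */
typedef void (*param_listener)(int param_id, float value);

typedef struct {
    float          value;            /* the host's copy: the 'model' */
    bool           gui_has_grabbed;  /* true while the user holds the knob */
    param_listener gui_view;         /* repaints the control */
    param_listener dsp_view;         /* applies the change in the processor */
} host_param;

/* every change, whatever its source, funnels through the host */
void host_set_param(host_param *p, int id, float value, param_source src)
{
    if (src == SRC_AUTOMATION && p->gui_has_grabbed)
        return;                      /* automation relinquishes control */

    p->value = value;
    p->dsp_view(id, value);          /* the processor always hears about it */
    if (src != SRC_GUI)
        p->gui_view(id, value);      /* only echo to the GUI if it didn't originate there */
}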
Best Regards,
Jeff
> AU, which is the only other plugin API to explicitly support
> plugin<->GUI separation.
AND VST 3.0, AND GMPI/SynthEdit...
The host needs to see every parameter tweak. It needs to be between the GUI
and the DSP to arbitrate clashes between conflicting control surfaces. It's
the only way to do automation and state recall right.
Best Regards,
Jeff McClintock.
Hi all,
The upcoming LV2r4 has a new extension header #include scheme, and a
tool to enable it, named lv2config.
Rather than type everything all over again, see here:
http://lv2plug.in/docs/index.php?title=Using_Extensions_in_Code
lv2config can be used (like ldconfig) to set up system-installed
extension includes, or can be used by an individual project to bundle
extensions.
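For illustration, once that hierarchy is in place a plugin pulls in a
system-installed extension header by its URI-derived path, e.g. (using the
event extension as the example, per the wiki page above):

#include "lv2/lv2plug.in/ns/ext/event/event.h"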
Packagers would use it to generate the include hierarchy when building a
package for an LV2 extension. This is the part I need feedback on; e.g.
someone pointed out the need for DESTDIR, so that is there now. Before
release, anything else that needs doing to enable packaging of LV2
extensions (a new ability) should be aired out.
lv2config is part of core.lv2 which can be had from SVN here:
http://lv2plug.in/repo/trunk/core.lv2/
I wrote the C version late last night, but it seems to work for all the
cases I can think of.
Thanks,
-dr
P.S. There has already been a lot of back-and-forth on the #include
format and the use of a tool; this is how it needs to be. Feedback is
needed on the /implementation/.
Just stumbled across this and thought some might find it interesting:
http://en.wikipedia.org/wiki/C1X
"The October 2010 draft includes several changes to the C99 language
and library specifications, such as:
Alignment specification (_Alignas specifier, alignof operator,
aligned_alloc function, <stdalign.h> header file)
...
Multithreading support (_Thread_local storage-class specifier,
<threads.h> header including thread creation/management functions,
mutex, condition variable and thread-specific storage functionality,
as well as the _Atomic type qualifier and <stdatomic.h> for
uninterruptible object access)
...
More macros for querying the characteristic of floating point types,
concerning subnormal floating point numbers, and the number of decimal
digits the type is able to store.
..."
james.
--
_
: http://jwm-art.net/
-audio/image/text/code/
sorry... i cced the old ML addresses :S
fixing the CC now.
On Mon, Feb 28, 2011 at 07:29:54PM +0100, Mike Galbraith wrote:
> On Mon, 2011-02-28 at 18:53 +0100, torbenh wrote:
> >
> > On Tue, Feb 22, 2011 at 03:47:53PM +0100, Mike Galbraith wrote:
> > > On Tue, 2011-02-22 at 13:24 +0100, torbenh wrote:
> > > > On Fri, Feb 18, 2011 at 01:50:12PM +0100, Mike Galbraith wrote:
> >
> > > > > Sounds like you just want to turn CONFIG_RT_GROUP_SCHED off.
> > > >
> > > > but distros turn it on.
> > > > we could prevent debian from turning it on.
> > > > now opensuse 11.4 has turned it on.
> > >
> > > If you or anyone else turns on RT_GROUP_SCHED, you will count your
> > > beans, and pay up front, or you will not play. That's a very sensible
> > > policy for realtime.
> >
> > this probably means that generic computer distros should not turn this
> > option on ?
>
> Yeah, agreed, not for a great default config, but only because
> newfangled automation thingies can't (possibly?) deal with it sanely.
but this is exactly the reason why i would advocate putting rt_runtime into
a separate cgroup subsystem.
any admin who wants to limit RT runtime could still do it.
people who don't care, and just want their cfs slices configured, can
still do it.
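for reference, the knob itself already exists when RT_GROUP_SCHED is on; a
rough sketch (assuming the cpu controller is mounted at /sys/fs/cgroup/cpu and
a group named "audio" already exists) of how an admin tool could cap that
group's RT budget:

#include <stdio.h>

int main(void)
{
    /* allow at most 950ms of RT execution per (default) 1s period */
    FILE *f = fopen("/sys/fs/cgroup/cpu/audio/cpu.rt_runtime_us", "w");
    if (!f) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "%d\n", 950000);
    fclose(f);
    return 0;
}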
>
> > > If systemd deals with it at all, seems to me it can only make a mess of
> > > it. But who knows, maybe they made a clever allocator. If they didn't,
> > > they'll need an escape hatch methinks.
> >
> > the problem is that audio applications can not really pre allocate their
> > cpu needs. user can add processing plugins until he pushes his machine
> > to the limit. (or the cgroup where his process is running in)
> >
> > we dont really have a mechanism for plugins to publish their needed
> > cycles.
>
> I can't see how it could matter what any individual group of arbitrary
> groups N (who can appear/disappear in the blink of an eye) advertises as
> it's wish of the instant. "Hard" + "Arbitrary" doesn't compute for me.
i don't really understand this statement.
>
> -Mike
>
--
torben Hohn
Hi all,
Standing on the shoulders of giants[*], I am pleased to announce the
public release of IR, a convolution reverb in the LV2 plugin format.
Released as free software under the GNU GPL, this easy-to-use plugin
was created to open the fascinating world of convolution reverb
to Linux-based audio engineers. If you use Ardour to create, mix &
produce music, you will most probably want to check out this plugin.
Assorted features:
* Zero-latency operation
* Support for mono, stereo and 'True Stereo' (4-channel) impulses
* Realtime operation
* Very reasonable CPU consumption
* Maximum impulse length: 1M samples (~22 seconds @ 48kHz)
* Loads a large number of audio file formats
* High quality sample rate conversion of impulse responses
* Stretch control (via high-quality SRC, integrated with impulse loading
in a single step)
* Pre-delay control (0-2000 ms)
* Stereo width control of input signal & impulse response (0-150%)
* Envelope alteration with immediate visual feedback: Attack
time/percent, Envelope, Length
* Reverse impulse response
* Autogain: change impulses without having to adjust 'Wet gain'
* Impulse response visualization (linear/logarithmic scale, peak & RMS)
* Easy interface for fast browsing and loading impulse responses
IR should work on Linux with Ardour 2.8.x (x >= 11) and 3.
For further info and source code download, please visit the plugin's
homepage: http://factorial.hu/plugins/lv2/ir
Thanks,
Tom
[*] Fons Adriaensen (zita-convolver), Erik de Castro Lopo (libsndfile,
libsamplerate)
2011/2/23 Alexandre Prokoudine <alexandre.prokoudine(a)gmail.com>:
> On 2/22/11, David Robillard wrote:
>
>> I have a working plugin (called "dirg") that provides a UI by hosting a
>> web server which you access in the browser. It provides a grid UI either
>> via a Novation Launchpad, or in the browser if you don't have a
>> Launchpad. Web UIs definitely have a ton of wins (think tablets, remote
>> control (i.e. network transparency), etc.)
>>
>> I also have a complete LV2 "message" system based on Atoms which is
>> compatible with / based on the event extension. Atoms, and thus
>> messages, can be serialised to/from JSON (among other things,
>> particularly Turtle).
>
> Any of them available to have a look at?
>
>> Currently dirg provides the web server on its own with no host
>> involvement, but every plugin doing this obviously doesn't scale, so
>> some day we should figure this out... first we need an appropriately
>> high-level/powerful communication protocol within LV2 land (hence the
>> messages stuff).
>
> Where do you stand with priorities now? That sounds like something
> very much worth investing time in.
>
> You see, one thing I'm puzzled about is that you have the beginnings of
> what could be a significant part of a potentially successful cloud
> computing audio app, and then you talk about how donations don't even
> pay your rent :)
Before I totally forget about it... I think it might be a very clever
thing to have some web-based thing (a wiki or whatever, ideally a
social network kind of thing) where LAD people can announce what they
are working on and what their plans are, so that it's easier to: a.
know about it and b. start collaborations, etc.
For example, Dave is doing lots of stuff that I plan to reuse, but I
only know it because I happen to lurk on #lv2 on freenode from time to
time, and the same goes for lots of stuff I'm seeing coming out
lately.
If it is a problem for me to keep up to date with this stuff, I can
only imagine what it would be like for a newcomer.
I can't comment on this more now, but please somebody consider the idea.
Stefano
Hi everyone,
I am currently spending a lot of time working on Android, and on the andraudio
mailing list [1] we are discussing possible improvements to the internal
Android audio system. Currently, latencies are very high, over 100ms, and we're
looking for ways to improve the situation.
In my opinion this can't be achieved on Linux without realtime scheduling. On
Android, there's something called audioflinger which acts as a sound server, and
apps act as clients of this server. The server and clients run in distinct
processes. What I'm thinking about is having a realtime thread within the
server, as well as another realtime thread in the (each) client.
The one thing about Android is that it has a strict security model. Every app
is considered potentially harmful and is thus "sandboxed". Here, this means for
example that apps can lower their threads' priority, but not raise it. And of
course they can only use non-realtime scheduling.
On desktops, for instance using JACK, apart from a few multimedia
distributions, realtime permissions are not granted by default to normal users.
And when one enables them, security is usually not a primary concern AFAIK. If
a piece of software happens to crash the system when running in realtime, the
user may just uninstall the buggy software, etc.
But on phones this is critical, for example if the system crashes while you're
waiting for a call. So, on Android, the security policies are strict. But this
could certainly be relevant to plenty of other Linux use cases as well.
Now my question is: how to allow user-space apps to use realtime scheduling for
one of their threads without compromising the overall security?
For example, in the sched_setscheduler() man page I see SCHED_RR, and
"Everything described above for SCHED_FIFO also applies to SCHED_RR, except
that each process is only allowed to run for a maximum time quantum".
Would this help and be sufficient? Would there need to be some kind of
watchdog/monitor running with SCHED_FIFO scheduling to prevent realtime client
threads from consuming too many resources?
Or are there other ways to achieve this? Some kernel patch, maybe?
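For reference, here is a minimal (untested, not Android-specific) sketch of
what a client thread would do to request SCHED_RR; on a sandboxed system this
is exactly the call that fails with EPERM unless something grants the
privilege:

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static int make_thread_rr(pthread_t thread, int priority)
{
    struct sched_param param;
    memset(&param, 0, sizeof(param));
    param.sched_priority = priority;   /* 1..99 */
    int err = pthread_setschedparam(thread, SCHED_RR, &param);
    if (err != 0)
        fprintf(stderr, "SCHED_RR refused: %s\n", strerror(err));
    return err;
}

int main(void)
{
    /* expected to fail for a normal unprivileged (sandboxed) process */
    return make_thread_rr(pthread_self(), 10);
}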
Thanks in advance
[1] http://music.columbia.edu/mailman/listinfo/andraudio
Olivier
Hi, I've written a small program (Leevi is its name) to drive the Lexicon
MX300, similar to the console Lexicon ships for the Windows operating system
along with their devices. Leevi supports Linux and BSD, and needs
either libusb-0 or libusb-1. It does not require any user interface
libraries, as everything is built on libX11, so if you can boot to
runlevel 5, you can run Leevi.
Please note: as Lexicon wasn't very keen to tell me how to talk to
their devices, the USB protocol was reverse engineered by me, and there are
a myriad of things that I haven't yet uncovered. Although Leevi has proven
to work without breaking anything, there is always a small chance that
something goes wrong. Therefore I don't take any responsibility; you
use Leevi at your own risk. See the BSD license on Leevi's homepage.
Leevi is still in the development stage: changing the effect in stereo mode
and changing routing to/from stereo do not work yet. All other features
should be fully functional.
http://leevi.sourceforge.net
- Jani Salonen
Hi list,
I want to write a jack2 network backend, like netjack2, for my bachelor
thesis.
There is a new network standard coming up, specialized for audio and
video transmission, called AVB (Audio Video Bridging -> IEEE 802.1AS,
802.1Qat, 802.1Qav, ...).
I want to integrate this standard into jack2.
I want to be able to choose AVB (rather than ALSA or NET) from the driver
dropdown box. Does anyone know where I can find the implementation of this
driver selection in the source code?