Hi all,
The upcoming LV2r4 has a new extension header #include scheme, and a
tool to enable it, named lv2config.
Rather than type everything all over again, see here:
http://lv2plug.in/docs/index.php?title=Using_Extensions_in_Code
lv2config can be used (like ldconfig) to set up system-installed
extension includes, or can be used by an individual project to bundle
extensions.
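To give a rough idea of what the scheme looks like (the wiki page above
is the authoritative description, so treat these paths as illustrative),
plugin code includes extension headers by a URI-derived path:

    /* illustrative only -- see the wiki page for the real scheme */
    #include <lv2/lv2plug.in/ns/lv2core/lv2.h>
    #include <lv2/lv2plug.in/ns/ext/event/event.h>

lv2config's job is to build that lv2/ include tree, with entries
pointing at the headers in the installed extension bundles.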
Packagers would use it to generate the include hierarchy when building a
package for an LV2 extension. This is the part I need feedback on; e.g.
someone pointed out the need for DESTDIR support, so that is there now.
Before release, anything else that needs doing to enable packaging of LV2
extensions (a new ability) should be aired out.
lv2config is part of core.lv2 which can be had from SVN here:
http://lv2plug.in/repo/trunk/core.lv2/
I wrote the C version late last night, but it seems to work for all the
cases I can think of.
Thanks,
-dr
P.S. There has already been a lot of back-and-forth on the #include
format and the use of a tool. This is how it needs to be. Feedback is
needed on the /implementation/.
Just stumbled across this and thought some might find it interesting:
http://en.wikipedia.org/wiki/C1X
"The October 2010 draft includes several changes to the C99 language
and library specifications, such as:
Alignment specification (_Alignas specifier, alignof operator,
aligned_alloc function, <stdalign.h> header file)
...
Multithreading support (_Thread_local storage-class specifier,
<threads.h> header including thread creation/management functions,
mutex, condition variable and thread-specific storage functionality,
as well as the _Atomic type qualifier and <stdatomic.h> for
uninterruptible object access)
...
More macros for querying the characteristics of floating point types,
concerning subnormal floating point numbers, and the number of decimal
digits the type is able to store.
..."
james.
--
_
: http://jwm-art.net/
-audio/image/text/code/
sorry... i cced the old ML addresses :S
fixing the CC now.
On Mon, Feb 28, 2011 at 07:29:54PM +0100, Mike Galbraith wrote:
> On Mon, 2011-02-28 at 18:53 +0100, torbenh wrote:
> >
> > On Tue, Feb 22, 2011 at 03:47:53PM +0100, Mike Galbraith wrote:
> > > On Tue, 2011-02-22 at 13:24 +0100, torbenh wrote:
> > > > On Fri, Feb 18, 2011 at 01:50:12PM +0100, Mike Galbraith wrote:
> >
> > > > > Sounds like you just want to turn CONFIG_RT_GROUP_SCHED off.
> > > >
> > > > but distros turn it on.
> > > > we could prevent debian from turning it on.
> > > > now opensuse 11.4 has turned it on.
> > >
> > > If you or anyone else turns on RT_GROUP_SCHED, you will count your
> > > beans, and pay up front, or you will not play. That's a very sensible
> > > policy for realtime.
> >
> > this probably means that generic computer distros should not turn this
> > option on ?
>
> Yeah, agreed, not for a great default config, but only because
> newfangled automation thingies can't (possibly?) deal with it sanely.
but this is exactly the reason why i would advocate putting rt_runtime
in a separate cgroup subsystem.
any admin who wants to limit RT runtime could still do it.
people who don't care, and just want their cfs slices configured, can
still do it.
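for illustration, here is roughly what the admin-side knob looks like
today (a sketch -- paths assume a cgroup v1 cpu hierarchy mounted at
/sys/fs/cgroup/cpu, an existing "audio" group, and a kernel built with
RT_GROUP_SCHED; one could just as well echo into the file):

    /* sketch: give the "audio" cgroup 800ms of RT runtime per period */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/fs/cgroup/cpu/audio/cpu.rt_runtime_us", "w");
        if (!f) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "800000\n");   /* microseconds of cpu.rt_period_us */
        fclose(f);
        return 0;
    }

the point being: that knob could live in its own subsystem instead of
being tied to the cpu controller's cfs configuration.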
>
> > > If systemd deals with it at all, seems to me it can only make a mess of
> > > it. But who knows, maybe they made a clever allocator. If they didn't,
> > > they'll need an escape hatch methinks.
> >
> > the problem is that audio applications cannot really preallocate their
> > cpu needs. the user can add processing plugins until he pushes his machine
> > to the limit. (or the limit of the cgroup his process is running in)
> >
> > we don't really have a mechanism for plugins to publish their needed
> > cycles.
>
> I can't see how it could matter what any individual group of arbitrary
> groups N (who can appear/disappear in the blink of an eye) advertises as
> its wish of the instant. "Hard" + "Arbitrary" doesn't compute for me.
i don't really understand this statement.
>
> -Mike
>
--
torben Hohn
Hi all,
Standing on the shoulders of giants[*], I am pleased to announce the
public release of IR, a convolution reverb in the LV2 plugin format.
Released as free software under the GNU GPL, this easy-to-use plugin
has been created to open the fascinating world of convolution reverb
to Linux-based audio engineers. If you use Ardour to create, mix &
produce music, you will most probably want to check out this plugin.
Assorted features:
* Zero-latency operation
* Support for mono, stereo and 'True Stereo' (4-channel) impulses
* Realtime operation
* Very reasonable CPU consumption
* Maximum impulse length: 1M samples (~22 seconds @ 48kHz)
* Loads a large number of audio file formats
* High quality sample rate conversion of impulse responses
* Stretch control (via high-quality SRC integrated with impulse
loading in one step; see the sketch after this list)
* Pre-delay control (0-2000 ms)
* Stereo width control of input signal & impulse response (0-150%)
* Envelope alteration with immediate visual feedback: Attack
time/percent, Envelope, Length
* Reverse impulse response
* Autogain: change impulses without having to adjust 'Wet gain'
* Impulse response visualization (linear/logarithmic scale, peak & RMS)
* Easy interface for fast browsing and loading impulse responses
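As mentioned in the stretch-control item above, here is a rough sketch
of what SRC-integrated impulse loading can look like with libsamplerate
(illustrative only -- not IR's actual code):

    /* sketch: offline resampling of a mono impulse response */
    #include <samplerate.h>
    #include <stdlib.h>
    #include <string.h>

    float *resample_ir(const float *in, long in_frames,
                       double ratio, long *out_frames)
    {
        SRC_DATA d;
        memset(&d, 0, sizeof d);
        d.data_in       = (float *)in;
        d.input_frames  = in_frames;
        d.src_ratio     = ratio;              /* e.g. 48000.0 / 44100.0 */
        d.output_frames = (long)(in_frames * ratio) + 1;
        d.data_out      = malloc((size_t)d.output_frames * sizeof(float));

        if (!d.data_out)
            return NULL;
        if (src_simple(&d, SRC_SINC_BEST_QUALITY, 1) != 0) {
            free(d.data_out);
            return NULL;
        }
        *out_frames = d.output_frames_gen;
        return d.data_out;
    }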
IR should work on Linux with Ardour 2.8.x (x >= 11) and 3.
For further info and source code download, please visit the plugin's
homepage: http://factorial.hu/plugins/lv2/ir
Thanks,
Tom
[*] Fons Adriaensen (zita-convolver), Erik de Castro Lopo (libsndfile,
libsamplerate)
2011/2/23 Alexandre Prokoudine <alexandre.prokoudine(a)gmail.com>:
> On 2/22/11, David Robillard wrote:
>
>> I have a working plugin (called "dirg") that provides a UI by hosting a
>> web server which you access in the browser. It provides a grid UI either
>> via a Novation Launchpad, or in the browser if you don't have a
>> Launchpad. Web UIs definitely have a ton of wins (think tablets, remote
>> control (i.e. network transparency), etc.)
>>
>> I also have a complete LV2 "message" system based on Atoms which is
>> compatible with / based on the event extension. Atoms, and thus
>> messages, can be serialised to/from JSON (among other things,
>> particularly Turtle).
>
> Any of them available to have a look at?
>
>> Currently dirg provides the web server on its own with no host
>> involvement, but every plugin doing this obviously doesn't scale, so
>> some day we should figure this out... first we need an appropriately
>> high-level/powerful communication protocol within LV2 land (hence the
>> messages stuff).
>
> Where do you stand with priorities now? That sounds like something
> very much worth investing time in.
>
> You see, one thing I'm puzzled about is that you have beginnings of
> what could be significant part of a potentially successful cloud
> computing audio app, and then you talk about how donations don't even
> pay your rent :)
Before I totally forget about it... I think it might be a very clever
thing to have some web-based thing (a wiki or whatever, ideally a
social network kind of thing) where LAD people can announce what they
are working on and what their plans are, so that it's easier to: a.
know about it and b. start cooperations, etc.
For example, Dave is doing lots of stuff that I plan to reuse, but I
only know it because I happen to lurk on #lv2 on freenode from time to
time, and the same goes for lots of stuff I'm seeing coming out
lately.
If it is a problem for me to keep up to date with this stuff, I can
only imagine what it would be like for a newcomer.
I can't comment on this more now, but please somebody consider the idea.
Stefano
Hi everyone,
I am currently spending a lot of time working on Android, and on the andraudio
mailing list [1] we are discussing possible improvements to the internal
Android audio system. Currently, latencies are very high, over 100ms, and we're
looking for ways to improve the situation.
In my opinion this can't be achieved on Linux without realtime scheduling. On
Android, there's something called audioflinger which acts as a sound server, and
apps act as clients of this server. The server and clients run in distinct
processes. What I'm thinking about is having a realtime thread within the
server, as well as another realtime thread in the (each) client.
The one thing about Android is that it has a strict security model. Every app is
considered potentially harmful and is thus "sandboxed". Here, this for example
means that apps can lower their threads' priority, but not increase it. And of
course they can only use non-realtime scheduling.
On desktops, for instance using JACK, apart from a few multimedia distributions,
realtime permissions are not granted by default to normal users. And when one
enables it, security is usually not a primary concern AFAIK. If a piece of
software happens to crash the system when running in realtime, the user may just
uninstall the buggy software, etc..
But on phones, this is critical, for example if the system crashes while you're
waiting for a call. So, on Android, the security policies are strict. But this
could certainly be relevant to plenty of other Linux use cases.
Now my question is: how to allow user-space apps to use realtime scheduling for
one of their threads without compromising the overall security?
For example, in man sched_setscheduler I see SCHED_RR, and "Everything
described above for SCHED_FIFO also applies to SCHED_RR, except that each
process is only allowed to run for a maximum time quantum".
Would this help and be sufficient? Would there need to be some kind of
watchdog/monitor running with SCHED_FIFO scheduling to prevent realtime client
threads from consuming too many resources?
Or is there some other way to achieve this? Some kernel patch, maybe?
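For reference, the client-side request I have in mind is just this (a
sketch; on stock Linux it fails with EPERM unless the process has
CAP_SYS_NICE or a suitable RLIMIT_RTPRIO grant):

    /* sketch: ask for round-robin realtime scheduling */
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 1 };

        if (sched_setscheduler(0, SCHED_RR, &sp) == -1) {
            perror("sched_setscheduler");  /* EPERM for a sandboxed app */
            return 1;
        }
        /* ... realtime audio work here ... */
        return 0;
    }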
Thanks in advance
[1] http://music.columbia.edu/mailman/listinfo/andraudio
Olivier
Hi, I've written a small program (Leevi is its name) to drive the Lexicon
MX300, similar to the console Lexicon ships for the Windows operating system
along with their devices. Leevi supports Linux and BSD, and needs
either libusb-0 or libusb-1. It does not require any user interface
libraries, as everything is built on libX11, so if you can boot to
runlevel 5, you can run Leevi.
Please note: As Lexicon wasn't very keen to tell me how to talk with
their devices, the USB protocol was reverse engineered by me and there is
a myriad of things that I haven't yet figured out. Although Leevi has proven
to work without breaking anything, there is always a small chance that
something goes wrong. Therefore I don't take any responsibility; you
use Leevi at your own risk. See the BSD license on Leevi's homepage.
Leevi is still in the development stage; changing the effect in stereo mode
and changing routing to/from stereo do not work yet. All other features
should be fully functional.
http://leevi.sourceforge.net
- Jani Salonen
Hi list,
i want to write a jack2 network backend, like netjack2, for my bachelor
thesis.
There is a new network standard coming up, specialized for audio and
video transmission, called AVB (Audio Video Bridging -> IEEE 802.1AS,
802.1Qat, 802.1Qav, ...).
I want to integrate this standard into jack2.
I want to be able to choose AVB (not ALSA or NET) from the driver dropdown box.
does anyone know where i can find the existing driver implementations in the
source code?
On Thursday 24 February 2011 12:09:19 David Robillard wrote:
> This is, of course, a big problem in terms of our greater mission
> to provide software that caters to the needs of precisely nobody while
> irritating everybody else.
>
> To resolve this situation, we now have an exciting new Clippy inspired
> assistant that hops around your screen begging you to add MIDI tracks
> constantly.
>
> If, after 20 minutes, you have still not created a MIDI track, Ardour
> will overwrite every .waf file found in your home directory with a 4/4
> electronic kick drum loop, then shut down.
>
> -dr
Dude!
This is just what I have been wanting but am too much of a newb to have even
thought of undertaking such a complex project on my own!
Hence my starting my soundwall project which I newbily thought was going to be
a simple little app.
I have some code inspired by some "anon" silence that could perhaps be
included in this project. wwnnsnmsnm.
cage-ily,
drew
On Tue, Feb 22, 2011 at 3:49 PM, Philipp Überbacher
<hollunder(a)lavabit.com> wrote:
> The rest sounds nice, and it might well be that X has become old, but I
> don't see the big improvement coming up. Windows are called surfaces
> now, can have different shapes and are more flexible, compositing,
> transformations, I got that bit, but I don't see the UI improvement.
> I've seen the demos with shapes flying around the desktop, I've seen
> the conventional compositing window managers and wayland will probably
> do all that and more, but I don't see the improvement in User
> Interfaces.
what it's going to do, i think, is two-fold:
1) promote more and more toolkit design that makes everything just a
compositing stack. GTK has already moved significantly in this
direction, but could go a lot further. Qt is in a similar position.
the more this happens, the easier it is to reason about and create new GUI
widgets that do cool things, easily and simply, because it's all part
of a very simple model: you draw to your surface, and it will be
composited onto the screen in ways that you don't have to worry about.
sounds a bit like X ... except that X is explicitly *not* a
compositing model. for a simpler explanation of the kind of thing i
mean, consider the difference in ardour between the main "tracks" area
of the editing window and all the widgets around it. it's fundamentally
impossible to implement the tracks with widgets - it uses a "canvas"
object instead, which embodies ideas like z-axis stacking, transparency
and so forth. but likewise, at present it would be a lot of work to
implement all the widgets as canvas "items". now fast forward a few
years, and find a spot where the drawing model for the canvas, the
button widgets, the tree/listviews, for everything *inside* the
program is the same as the model for everything *outside* the program.
drawing a particular "thing" on any other thing becomes identical,
whether the other thing is a "window", a "button", a cell of a
listview, etc, etc.
2) more and more apps able to take advantage of v-blank sync to reduce
computational load due to unnecessary redraws. instead, the whole
system will be a lot like a video-framebuffer version of JACK: the
vblank interrupt arrives, everything with a surface gets a chance to
redraw if it needs to, the surfaces are composited together, and boom,
it's on the display. no more guessing how often to redraw stuff, no
more weird-ass hacks to get smooth animation, etc. if you think this
sounds like special effects, i suggest a few minutes playing with a
relevant iPod/iPhone/iPad app where these smooth transformations of
what is on the screen are a central metaphor in how the UIs work.
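to make the JACK analogy concrete, the client-side redraw loop looks
roughly like this (a sketch in the spirit of the wayland client API --
names approximate since the API is still moving; connection setup and
buffer handling omitted):

    /* sketch: repaint only when the compositor says a frame is wanted */
    #include <stdint.h>
    #include <wayland-client.h>

    static void frame_done(void *data, struct wl_callback *cb, uint32_t time);

    static const struct wl_callback_listener frame_listener = {
        .done = frame_done
    };

    static void redraw(struct wl_surface *surface)
    {
        /* draw into the surface's buffer here, then ask to be called
           back at the next vblank-paced frame: */
        struct wl_callback *cb = wl_surface_frame(surface);
        wl_callback_add_listener(cb, &frame_listener, surface);
        wl_surface_commit(surface);
    }

    static void frame_done(void *data, struct wl_callback *cb, uint32_t time)
    {
        (void)time;
        wl_callback_destroy(cb);
        redraw(data);    /* no timers, no guessing: compositor-driven */
    }

no frame callback, no redraw - just like a JACK client only computes
audio when the process callback fires.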