On 23 February 2011 22:11, David Robillard <d(a)drobilla.net> wrote:
> SLV2 is now based on two new libraries: Serd (RDF syntax) and Sord (RDF
> store). Both are roughly 2 thousand lines of C, solid and thoroughly
> tested (about 95% code coverage, like SLV2 itself). Serd has zero
> dependencies, Sord depends only on Glib (for the time being, possibly
> not in the future).
Can you point me at the API or code? I couldn't see it in a quick
browse on your SVN server.
I have a library (Dataquay,
http://code.breakfastquay.com/projects/dataquay -- preparing a 1.0
release of it at the moment, so if anyone wants to try it, go for the
repository rather than the old releases) which provides a Qt4 wrapper
for librdf and an object-RDF mapper.
It's intended for applications whose developers like the idea of RDF
as an abstract data model and Turtle as a syntax, but are not
particularly interested in being scalable datastores or engaging in
the linked data world.
For my purposes, Dataquay using librdf is fine -- I can configure it
so that bloat is not an issue (and hey! I'm using Qt already) and some
optional extras are welcome. But I can see the appeal of a more
limited, lightweight, or at least less configuration-dependent alternative.
I've considered doing LV2 as a simple example case for Dataquay, but
the thought of engaging in more flamewars about LV2 and GUIs is really
what has put me off so far. In other words, I like the cut of your jib.
I've forked Specimen, primarily to provide frequency modulation of the
LFOs and to make all the LFOs and ADSRs independent, so that there is
no longer a single dedicated ADSR and a single dedicated LFO for e.g.
pitch modulation, but instead two 'inputs' for pitch modulation, each
of which can be driven by any of the ADSRs and LFOs.
Please read the README for more information.
The current state of Petri-Foo is that the LFOs and ADSRs have been
made independent and are, AFAICT, working as they should. The GUI is
not yet up to date, but enough changes have been made to give a basic
idea of what's going on.
Please do read the README before commenting. I've tried to do things
properly, but I'm only human and only a hobbyist coder.
I'm working on a multithreaded version of my pet project, and I've now
managed to deadlock one thread, which in turn makes the GUI thread
wait for a mutex lock, and then finally segfaults the whole program :-)
So I'm looking for pointers on how best to find a deadlock's cause.
Other advice, good articles on the topic, etc. welcome!
Thanks for reading, -Harry
Sorry... I CC'ed the old ML addresses :S
Fixing the CC now.
On Mon, Feb 28, 2011 at 07:29:54PM +0100, Mike Galbraith wrote:
> On Mon, 2011-02-28 at 18:53 +0100, torbenh wrote:
> > On Tue, Feb 22, 2011 at 03:47:53PM +0100, Mike Galbraith wrote:
> > > On Tue, 2011-02-22 at 13:24 +0100, torbenh wrote:
> > > > On Fri, Feb 18, 2011 at 01:50:12PM +0100, Mike Galbraith wrote:
> > > > > Sounds like you just want to turn CONFIG_RT_GROUP_SCHED off.
> > > >
> > > > but distros turn it on.
> > > > we could prevent debian from turning it on.
> > > > now opensuse 11.4 has turned it on.
> > >
> > > If you or anyone else turns on RT_GROUP_SCHED, you will count your
> > > beans, and pay up front, or you will not play. That's a very sensible
> > > policy for realtime.
> > this probably means that generic computer distros should not turn this
> > option on ?
> Yeah, agreed, not for a great default config, but only because
> newfangled automation thingies can't (possibly?) deal with it sanely.
But this is exactly the reason why I would advocate putting rt_runtime
in a separate cgroup hierarchy.
Any admin who wants to limit RT runtime could still do it; people who
don't care, and just want their CFS slices configured, can still do
that.
> > > If systemd deals with it at all, seems to me it can only make a mess of
> > > it. But who knows, maybe they made a clever allocator. If they didn't,
> > > they'll need an escape hatch methinks.
> > the problem is that audio applications can not really pre allocate their
> > cpu needs. user can add processing plugins until he pushes his machine
> > to the limit. (or the cgroup where his process is running in)
> > we dont really have a mechanism for plugins to publish their needed
> > cycles.
> I can't see how it could matter what any individual group of arbitrary
> groups N (who can appear/disappear in the blink of an eye) advertises as
> it's wish of the instant. "Hard" + "Arbitrary" doesn't compute for me.
I don't really understand this statement.
Standing on the shoulders of giants[*], I am pleased to announce the
public release of IR, a convolution reverb in the LV2 plugin format.
Released as free software under the GNU GPL, this easy-to-use plugin
has been created to open the fascinating world of convolution reverb
to Linux-based audio engineers. If you use Ardour to create, mix &
produce music, you will most probably want to check out this plugin.
* Zero-latency operation
* Support for mono, stereo and 'True Stereo' (4-channel) impulses
* Realtime operation
* Very reasonable CPU consumption
* Maximum impulse length: 1M samples (~22 seconds @ 48kHz)
* Loads a large number of audio file formats
* High quality sample rate conversion of impulse responses
* Stretch control (via high-quality SRC integrated with impulse loading)
* Pre-delay control (0-2000 ms)
* Stereo width control of input signal & impulse response (0-150%)
* Envelope alteration with immediate visual feedback: Attack
time/percent, Envelope, Length
* Reverse impulse response
* Autogain: change impulses without having to adjust 'Wet gain'
* Impulse response visualization (linear/logarithmic scale, peak & RMS)
* Easy interface for fast browsing and loading impulse responses
IR should work on Linux with Ardour 2.8.x (x >= 11) and 3.
For further info and source code download, please visit the plugin's
home page.
[*] Fons Adriaensen (zita-convolver), Erik de Castro Lopo (libsndfile,
libsamplerate)
2011/2/23 Alexandre Prokoudine <alexandre.prokoudine(a)gmail.com>:
> On 2/22/11, David Robillard wrote:
>> I have a working plugin (called "dirg") that provides a UI by hosting a
>> web server which you access in the browser. It provides a grid UI either
>> via a Novation Launchpad, or in the browser if you don't have a
>> Launchpad. Web UIs definitely have a ton of wins (think tablets, remote
>> control (i.e. network transparency), etc.)
>> I also have a complete LV2 "message" system based on Atoms which is
>> compatible with / based on the event extension. Atoms, and thus
>> messages, can be serialised to/from JSON (among other things,
>> particularly Turtle).
> Any of them available to have a look at?
>> Currently dirg provides the web server on its own with no host
>> involvement, but every plugin doing this obviously doesn't scale, so
>> some day we should figure this out... first we need an appropriately
>> high-level/powerful communication protocol within LV2 land (hence the
>> messages stuff).
> Where do you stand with priorities now? That sounds like something
> very much worth investing time in.
> You see, one thing I'm puzzled about is that you have beginnings of
> what could be significant part of a potentially successful cloud
> computing audio app, and then you talk about how donations don't even
> pay your rent :)
Before I totally forget about it... I think it might be a very clever
thing to have some web-based thing (a wiki or whatever, ideally a
social-network kind of thing) where LAD people can announce what they
are working on and what their plans are, so that it's easier to: a.
know about it and b. start collaborations, etc.
For example, Dave is doing lots of stuff that I plan to reuse, but I
only know it because I happen to lurk in #lv2 on freenode from time to
time, and the same goes for lots of the stuff I see coming out.
If it is a problem for me to keep up to date with this stuff, I can
only imagine what it would be like for a newcomer.
I can't comment on this more now, but please somebody consider the idea.
I am currently spending a lot of time working on Android, and on the
andraudio mailing list we are discussing possible improvements to the
internal Android audio system. Currently, latencies are very high
(over 100 ms), and we're looking for ways to improve the situation.
In my opinion this can't be achieved on Linux without realtime scheduling. On
Android, there's something called audioflinger which acts as a sound server, and
apps act as clients of this server. The server and clients run in distinct
processes. What I'm thinking about is having a realtime thread within the
server, as well as another realtime thread in the (each) client.
The one thing about Android is that it has a strict security model.
Every app is considered potentially harmful and is thus "sandboxed".
Here, this means for example that apps can lower their threads'
priority, but not raise it. And of course they can only use
non-realtime scheduling.
On desktops, for instance using JACK, apart from a few multimedia
distributions, realtime permissions are not granted by default to
normal users. And when one enables them, security is usually not a
primary concern AFAIK. If a piece of software happens to crash the
system when running in realtime, the user may just uninstall the buggy
software, etc.
But on phones this is critical, for example if the system crashes
while you're waiting for a call. So, on Android, the security policies
are strict. But this could certainly be relevant to plenty of other
Linux use cases too.
Now my question is: how to allow user-space apps to use realtime scheduling for
one of their threads without compromising the overall security?
For example, in the sched_setscheduler() man page I see SCHED_RR:
"Everything described above for SCHED_FIFO also applies to SCHED_RR,
except that each process is only allowed to run for a maximum time
quantum".
Would this help, and would it be sufficient? Would there need to be
some kind of watchdog/monitor running with SCHED_FIFO scheduling to
prevent realtime client threads from consuming too many resources?
Or are there other ways to achieve this? Some kernel patch, maybe?
Thanks in advance
Hi, I've written a small program (Leevi is its name) to drive the
Lexicon MX300, similar to the console Lexicon ships for the Windows
operating system along with their devices. Leevi supports Linux and
BSD, and needs either libusb-0 or libusb-1. It does not require any
user interface libraries, as everything is built on libX11, so if you
can boot to runlevel 5, you can run Leevi.
Please note: as Lexicon wasn't very keen to tell me how to talk to
their devices, the USB protocol was reverse engineered by me, and
there is a myriad of things that I haven't yet uncovered. Although
Leevi has proven to work without breaking anything, there is always a
small chance that something goes wrong. Therefore I don't take any
responsibility; you use Leevi at your own risk. Check the BSD license
on Leevi's homepage.
Leevi is still in the development stage: changing the effect in stereo
mode and changing routing to/from stereo do not yet work. All other
features should be fully functional.
- Jani Salonen
I want to write a jack2 network backend, like netjack2, for my
bachelor thesis.
There is a new network standard coming up, specialized for audio and
video transmission, called AVB (Audio Video Bridging; IEEE 802.1AS and
related standards).
I want to integrate this standard into jack2.
I want to be able to choose AVB (not just ALSA or NET) from the driver
dropdown box.
Does anyone know where I can find this implementation in the source code?