Hi,
I'm new to audio programming, and new to LV2 and LVTK. I managed to
build the LVTK examples from sources. Now I'm trying to figure out how I
can send a simple text message from my UI to the plugin (it will be used
for speech synthesis).
First question: In the TTL, can I just add an Atom port with a buffer type
other than Sequence, like this:
[
    a atom:AtomPort ,
        lv2:InputPort ;
    lv2:index 3 ;
    lv2:symbol "text" ;
    lv2:name "Text" ;
    atom:bufferType atom:String ;
]
Second question: How do I send a string over this port? I suppose I need
to wrap it in an LV2_Atom_String somehow; however, I always end up
receiving MIDI data in my plugin.
Third question: In a synth plugin, how do I handle events other than
MIDI note-on and note-off (which both have their own specialized methods)?
Which method should I override to react to the strings coming in over
my atom port?
Any pointer to a solution would be nice!
Best wishes,
Ulrich
Hi all,
This is my first post here. I'm not new to audio programming or Linux, but I haven't done much in terms of combining the two. Most of my audio programming has been on OS X.
Currently working on some realtime convolution with lots of channels and low latency requirements, but I am running into some unexpected cpu-spikes and hope some of you might have an idea of possible causes.
I'm processing 32-sample blocks at 48 kHz, but roughly every 0.6 seconds I get a large spike in CPU usage. This cannot possibly be explained by my algorithm, because the load should be pretty stable.
I am measuring CPU load by getting the time with clock_gettime(CLOCK_MONOTONIC_RAW, timespec*) at the beginning and end of each callback. Converted to a percentage, my CPU load hovers somewhere between 40 and 50% most of the time, but more or less every 900 callbacks (0.6 seconds) there is a spike of more than 100%.
I am not doing any I/O, malloc'ing, or anything else that could block. My threads are SCHED_FIFO with max priority (I have 4 threads on 4 cores).
The only explanation I can come up with is that my threads are somehow preempted even though they are realtime threads. Is that even possible? And is there a way to check this? Besides preemption, maybe my caches are severely thrashed, but I find that unlikely, as it seems to happen on all 4 cores simultaneously.
I'm running a more or less default install (no additional services running) of Linux Mint 17.3 with a 3.19.0-42-lowlatency kernel on a Core i7-6700 with hyperthreading/turbo disabled.
I remember reading somewhere that realtime threads cannot run for more than 0.95 s out of every second. That would be very bad if it actually meant my threads are blocked from running for a period of 50 ms straight…
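That limit is the kernel's realtime throttling; the current settings can be checked (and, with care, disabled) through sysctl:

```shell
# RT throttling: by default, SCHED_FIFO/SCHED_RR tasks may use at most
# sched_rt_runtime_us out of every sched_rt_period_us microseconds.
cat /proc/sys/kernel/sched_rt_runtime_us   # typically 950000 (0.95 s)
cat /proc/sys/kernel/sched_rt_period_us    # typically 1000000 (1 s)
# Setting the runtime to -1 disables the throttling entirely (use with care):
# echo -1 | sudo tee /proc/sys/kernel/sched_rt_runtime_us
```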
Anyone have any thoughts on possible causes?
best,
Fokke
hi all linux audio devs
The Advanced Gtk+ Sequencer libraries have reached version 0.7.3.
http://gsequencer.org/download.html
Although the API is still unstable, you might already get some insights from it:
http://gsequencer.org/api/ags/index.html
* rudimentary JACK support
* refactored AgsTurtle to interface with RDF triples
* some API enhancements
Bests,
Joël
Hi all,
I have an RME MADI FX card, but I found out that the minimal buffer size for those cards is 8192 samples, which is way too big for my use.
So I am thinking of returning the card and getting the 'normal' RME MADI (without the FX), as that seems to have a more sensible minimal buffer size of 128 (or so I am informed).
I wonder if anyone here who owns an RME MADI (or has access to one) has ever done a real-world round-trip latency test on it?
Of course it depends on the converters as well, but it would be great to have some numbers anyway.
cheers,
Fokke
Hello Linux Audio Community!
This is the announcement you have been waiting for.
We've come to the conclusion that holding a full LAC on the announced date is
impossible, as our sponsorship setup didn't work out, so we have decided
to go ahead with a miniLAC instead.
Since we have already planned a lot of stuff, we thought it would be a shame to
let all the work go to waste, so we asked a few people whether they'd be
interested in a more compact and reduced conference program.
Here is what we can currently offer with the resources available:
* a lecture track
* workshop tracks (one of which will use the c-base soundlab)
* live audio sessions
* hacking sessions
* tours around interesting Berlin places
* linux audio nights
This miniature version of a Linux Audio Conference is still planned to take
place during the (kind of) announced date: 8.-10. April 2016. The location is
now set:
c-base, the spacestation below Berlin Mitte (http://c-base.org)
Our plan is to start off on Friday with a meet-and-greet evening at c-base,
where we will have an open stage for anyone who wants to connect their devices.
Since we don't have the originally intended resources, we will have to limit
attendance to around 150 participants.
Additionally, if there is interest in this (especially for people arriving
earlier), we'll try to organize optional visits to other Berlin locations on
Friday 8. April.
Please create an account on our wiki to be able to set things up with us:
http://frab.linuxaudio.org (will later move on to
http://minilac.linuxaudio.org).
The wiki is still a work in progress, but should feature all necessary
information by the end of the week.
For those who would still like to present academic papers, we intend to support another crew that
could organize a second conference part at the FrosCon 2016
(http://www.froscon.de/), which is happening in Bonn 20th and 21st of August.
Here is a link to our issue concerning this topic on Github:
https://github.com/linux-audio-berlin/LAC16/issues/30
We hope to see you at the miniLAC16!
Cheers,
miniLAC16 Orga team
>
>> So now I am a bit confused about how to get the same kind of latency in my
>> own code (using ALSA directly rather than through JACK).
>>
>> Basically, what happens when I have a buffer size of 8192 and a
>> period of 32 is that snd_pcm_avail_update(capture_handle) keeps
>> returning 0 until I have played back the full 8192 samples. After
>> that I start receiving input samples, but obviously the latency
>> is now more than 8192 samples…
>>
>> How can I make ALSA not wait until the entire buffer is full?
>
> maybe that's $GOD's way of telling you to use jack?
> jokes aside, why not profit from the flexibility of jack when it doesn't have any real disadvantages?
Well, I might have to go that way if I can't work this out, but I would prefer to use ALSA directly if possible. The reason for me not to use JACK is not that there is anything wrong with it; on the contrary. The reason is just that it doesn't make sense in my case: I don't need any inter-app routing for this project, and it would just be another dependency. Adding JACK to the mix doesn't seem like the right solution to the actual problem...
> with that kind of i/o, i doubt you're doing some tightly constrained embedded project :-]
>
it makes for a very nice delay actually :-)
fokke
> --
> Jörn Nettingsmeier
> Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487
>
> Meister für Veranstaltungstechnik (Bühne/Studio)
> Tonmeister VDT
>
> http://stackingdwarves.net
>
On Wed, Feb 3, 2016 at 10:10 PM, Harry van Haaren <harryhaaren(a)gmail.com>
wrote:
> OpenAV will be doing a workshop on Fabla2
Harry, you beat me to it. :)
My proposal is a workshop with a practical introduction to Faust plugin
programming. Please check
http://minilac.linuxaudio.org/index.php/Workshop#Workshop_:_Plugin_Programm…
for my blurb.
David, thanks so much again for pulling this off. I'm sure that this will
be a great event and I'm really looking forward to it! :)
Albert
--
Dr. Albert Gräf
Computer Music Research Group, JGU Mainz, Germany
Email: aggraef(a)gmail.com
WWW: https://plus.google.com/+AlbertGraef