Ross Bencina is the author of AudioMulch and has been extremely
involved in PortAudio, ReacTable, and other projects. His new article
on realtime audio programming is a MUST read for anyone new to the
area, and worth reading as a reminder even for experienced developers.
http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-…
I'm off to count how many violations Ardour contains ...
--p
Hi All,
Just a quick post to let you know about a new JACK binding for Java,
called JNAJack, at http://code.google.com/p/java-audio-utils/
JNAJack is a minimal object-oriented wrapper to the JACK Audio
Connection Kit API. It uses Java Native Access (JNA) rather than
custom JNI to interface with the underlying Jack API, simplifying
development and deployment - no compilation required, and it (*should*)
work cross-platform. Use of JNA means that performance is not quite on
a par with the custom JNI code in JJack, but it is still fine for
low-latency usage, and some further performance optimisations are in the
wings. Unlike JJack, the aim of this project is to support full and
typesafe OOP access to the Jack API from Java, and nothing else. Most
important aspects of the audio API are included. MIDI and transport
support will be implemented in the future.
Well, I say this is new, but it was mostly written quite a while back as
part of my Praxis InterMedia System project
http://code.google.com/p/praxis/ (a Java based cross-media patcher
environment). I'm finally getting around to releasing some of this
stuff separately as well.
Best wishes,
Neil
--
Neil C Smith
Artist : Technologist : Adviser
http://neilcsmith.net
Hi!
This question could have also been asked on jack-devel, but since LAD
probably has a broader audience:
I recently started hacking on a jack-driven matrix mixer (goals so far:
GUI, maybe network controls (OSC?), maybe LV2 host), and I wonder if
there are "frameworks" for test-driven development, so I can come up
with unit and acceptance tests while implementing new functionality.
Has anyone ever done test-first with jack? One could start jackd in
dummy mode with a random name, start some clients, wire inputs to
outputs and compare the generated signal to the expected result, maybe
with some fuzzy matching to allow for arbitrary delays.
OTOH, if there are existing mocking libraries for jackd, things might be
a bit more straightforward (provide an input buffer to be returned by
jack_port_get_buffer, call the process function and check the result
that's written to the output buffer).
Any pointers will be highly appreciated.
Cheers
I'm just getting started with some Python GStreamer coding, and having
trouble finding what I'd consider some basic examples:
These I've already got working:
1. build a pipeline showing the attached video device like:
gst-launch v4l2src device=/dev/video0 !
'video/x-raw-yuv,format=(fourcc)YUY2,width=640,height=480,framerate=20/1'
! xvimagesink
2. Record and show video at the same time:
gst-launch v4l2src device=/dev/video0 !
'video/x-raw-yuv,width=640,height=480,framerate=20/1' ! tee name=t_vid
! queue ! xvimagesink sync=false t_vid. ! queue ! videorate !
'video/x-raw-yuv,framerate=20/1' ! theoraenc ! queue ! oggmux !
filesink location=test.ogg
I want to get working:
3. toggle recording, so I can show a video monitor and start or stop
video recording at will, without destroying and recreating the
pipeline.
4. Sync multiple pipelines. My end goal is to be able to record x
video streams in sync while playing back y videos, with all play and
record sources in sync (think video mixing).
I'm looking for examples of 3 and 4, and improvement ideas to 1 and 2.
Thanks,
Nathanael
I was looking at the (unfinished) example sampler plugin here:
https://gitorious.org/gabrbedd/lv2-sampler-example
I ran through the mental exercise of trying to figure out how to
finish it, and I have a question about the UI extension. How does a
UI tell its plugin to load a sample file? The example has a TODO in
its run function that indicates it will react to an LV2_Event that
contains a pathname for a file. I don't understand how a UI will
create this event.
On lv2plug.in, I see there's an experimental string-port extension
that defines a transfer mechanism for strings. Is this the
recommended method? Do any hosts support this extension?
There's also an atom extension, but I don't think I grok it yet. Can
I create a port of type atom:MessagePort? How does a UI make use of
that?
On Mon, Jul 25, 2011 at 4:02 PM, David Robillard <d(a)drobilla.net> wrote:
> On Mon, 2011-07-25 at 13:05 +0200, Lieven Moors wrote:
> > OK, what happened was that I landed on the http://lv2plug.in/ns/ext
> > page, was expecting a download extensions link, didn't find it, and
> > downloaded
> > the files manually from the links on those pages.
>
> The event extension page
>
> http://lv2plug.in/ns/ext/event
>
> does have a link to the latest release (1.2).
>
> -dr
>
>
That is really odd. Did it change in the last
couple of days? I'm sure I got the header from
there, and I'm sure it had http in the address on line 28.
Otherwise this is not a bug, it's a ghost...
lievenmoors
Hi!
I've recently added support for the RME RPM to hdspmixer. Unfortunately,
I don't have one, so it's been done blindly, with user feedback.
This very user now reports that he needs to upload the device firmware
from Windows. I've checked hdsploader, and of course, it needs patching.
I'll take care of that in a second.
More surprisingly, though, the kernel wasn't able to upload the firmware
itself, because it fails to detect the RPM and hence tries to upload a
multiface firmware.
After reading the kernel source, I think the code in hdsp.c is wrong:
    if (hdsp_fifo_wait(hdsp, 0, HDSP_SHORT_WAIT)) {
        hdsp_write(hdsp, HDSP_control2Reg, HDSP_VERSION_BIT);
        hdsp_write(hdsp, HDSP_control2Reg, HDSP_S_LOAD);
        if (hdsp_fifo_wait(hdsp, 0, HDSP_SHORT_WAIT))
            hdsp->io_type = RPM;
        else
            hdsp->io_type = Multiface;
    } else {
        hdsp->io_type = Digiface;
    }
Who here owns a Digiface and can confirm or deny that the kernel
correctly detects it as Digiface? Same for Multiface, though I guess
since it's more or less the default, users wouldn't notice it.
What's wrong with the code above? I think all occurrences of
HDSP_control2Reg in hdsp_check_for_iobox need to be changed to
HDSP_controlRegister and the second hdsp_fifo_wait needs to be inverted.
But this is pure guesswork. If I come up with a patch, who here has an
RPM, Digiface or Multiface to test it?
TIA
Hello.
Soon I will work on a Linux kernel driver for a custom audio decoder device
that is being developed by a company I work for. Without going into
details, the device reads an A52-encoded stream from system memory and
writes a raw PCM stream to system memory.
The simplest thing to do is to implement a character device, to which
user space will write the encoded stream, and from which user space will
read the decoded stream.
(A driver for similar hardware, located at
http://sourceforge.net/projects/vs10xx/, does that.)
However, perhaps a better architecture (e.g. in-kernel integration with an
audio sink) is possible?
I'm looking for any related information - e.g. ideas on what interface to
implement, examples of drivers for similar devices, etc.
Thanks for any hints.
Nikita Youshchenko,
embedded linux developer.
I am playing around with GCC and Clang vector extensions, on Linux and
Mac OS X, and I am getting some strange behaviour.
I am working on jMax Phoenix, and its DSP engine, in its current state,
is very memory-bound; it is based on the aggregation of very
small-granularity operations, like vector sum or multiply, each of them
executed independently from and to memory.
I tried to implement all these 'primitive' operations using the vector
types.
On clang/MacOSX I get an impressive improvement in performance,
around 4x on the operations, even just using the vector types for
copying data; my impression is that the compiler uses some kind of vector
load/store instruction that makes proper use of the available memory
bandwidth, but unfortunately I do not know more about the x86 architecture.
On gcc/Linux (gcc 4.5.2), the same code produces a *slowdown* of around
2.5x.
Does anybody have an idea why?
I am actually running Linux (Ubuntu 11.04) under a VMware virtual
machine; I do not know if this may have any implications.
Thanks,
Maurizio De Cecco
Hello,
A new/upcoming LV2 extension (from Lars Luthman) includes facilities for
sending host-calculated metric data for audio ports to a UI, for
metering and such. This is intended as a sane replacement for the
currently used kludge of having plugin control output ports provide this
information.
So, more DSP-minded folks, my question is: what data is required, and
what is a reasonable compromise between overhead and expressiveness? The
current revision has the following:
/**
   A data type that is used to pass peak and RMS values for
   a period of audio data at an input or output port to a
   UI, using port_event. See the documentation for
   pui:floatPeakRMS for details about how and when this
   should be done.
*/
typedef struct _LV2_PUI_Peak_RMS_Data {

    /**
       The start of the measurement period. This is just a
       running counter that must not be interpreted as any
       sort of global frame position. It should only be
       interpreted relative to the starts of other
       measurement periods in port_event() calls to the same
       plugin instance.

       This counter is allowed to overflow, in which case it
       should just wrap around.
    */
    uint32_t period_start;

    /**
       The size of the measurement period, in the same units
       as period_start.
    */
    uint32_t period_size;

    /**
       The peak value for the measurement period. This
       should be the maximal value for abs(sample) over all
       the samples in the period.
    */
    float peak;

    /**
       The RMS value for the measurement period. This should
       be the root mean square value of the samples in the
       period, equivalent to sqrt((pow(sample1, 2) +
       pow(sample2, 2) + ... + pow(sampleN, 2)) / N) where N
       is period_size.
    */
    float rms;

} LV2_PUI_Peak_RMS_Data;
Thanks,
-dr