On 23 February 2011 22:11, David Robillard <d(a)drobilla.net> wrote:
> SLV2 is now based on two new libraries: Serd (RDF syntax) and Sord (RDF
> store). Both are roughly 2 thousand lines of C, solid and thoroughly
> tested (about 95% code coverage, like SLV2 itself). Serd has zero
> dependencies, Sord depends only on Glib (for the time being, possibly
> not in the future).
Can you point me at the API or code? I couldn't see it in a quick
browse on your SVN server.
I have a library (Dataquay,
http://code.breakfastquay.com/projects/dataquay -- preparing a 1.0
release of it at the moment, so if anyone wants to try it, go for the
repository rather than the old releases) which provides a Qt4 wrapper
for librdf and an object-RDF mapper.
It's intended for applications whose developers like the idea of RDF
as an abstract data model and Turtle as a syntax, but are not
particularly interested in being scalable datastores or engaging in
the linked data world.
For my purposes, Dataquay using librdf is fine -- I can configure it
so that bloat is not an issue (and hey! I'm using Qt already) and some
optional extras are welcome. But I can see the appeal of a more
limited, lightweight, or at least less configuration-dependent
back-end.
I've considered doing LV2 as a simple example case for Dataquay, but
the thought of engaging in more flamewars about LV2 and GUIs is really
what has put me off so far. In other words, I like the cut of your
jib here.
Chris
Hello everyone,
Everyone knows Yoshimi, the fork of ZynAddSubFX.
One thing was missing for Yoshimi to be perfect: the ability to be (nearly) fully controlled by MIDI controllers (no OSC, sorry).
ZynAddSubFX could already control a few parameters through complicated NRPN messages, and Yoshimi recently gained some similar features (in the test versions).
Now I'm proud to announce the work of licnep (not me, I'm just a bug reporter), who has implemented a "midiLearn" function for Yoshimi. It is not yet stable or complete, since it is very recent, but here are the current features:
* Control of system effects and part insert effects
* Master/part volume, pan, and system effect sends
* Most ADsynth parameters
* Add/remove controllers
* Detect the MIDI channel and controller number automatically
* Reset a knob (its position)
I think this is a very useful feature that could help many
Yoshimi/Zyn users.
Using it is simple: connect your controller to Yoshimi,
right-click on a blue knob (yellow knobs are not supported yet),
click "midi Learn", and move your controller; it is detected
automatically.
To view and modify controllers, go to the Yoshimi > MIDI controllers menu.
To remove the MIDI control of a knob, simply right-click on it and click
"remove midi control".
Here is the GitHub repository: https://github.com/licnep/yoshimi
To download and install it, follow the instructions on the GitHub wiki:
https://github.com/licnep/yoshimi/wiki/How-to
A short page explaining how to add control for parameters that are not
implemented yet:
https://github.com/licnep/yoshimi/wiki/Source-code
Pages to follow the news of the project:
Facebook: https://www.facebook.com/pages/Yoshimi-midi-learn/224823617534934
Twitter: http://twitter.com/#!/YoshimiMIDI
So if you're interested, bug reports are very welcome.
Cheers,
Louis CHEREL.
hi *!
sorry for the slightly off-topic post, but since spatial audio has been
a frequent topic lately, i think some people here might be interested.
linux or FLOSS won't be exactly in the limelight, but yours truly will
make sure there are at least 2-3 boxes with your favourite OS and audio
tools humming along in various places. oh, and you might come early and
watch a few high-end mixing consoles boot - the startup screen will
bring tears to your eyes (as will the price tag, unfortunately :)
unfortunately, there will have to be an admission fee, which we haven't
decided on yet. but we're trying to keep it reasonable. don't shout at
me when it turns out to be a bit more costly than LAC, though...
jörn
*.*
ICSA 2011 - International Conference on Spatial Audio
November 10 - 13, Hochschule für Musik, Detmold
Organizers:
Verband Deutscher Tonmeister (VDT), in cooperation with
Deutsche Gesellschaft für Akustik e.V. (DEGA), and
European Acoustics Association (EAA).
Contact/Chair:
Prof. Dr.-Ing. Malte Kob
Erich-Thienhaus-Institut
Neustadt 22, 32756 Detmold
Mail: icsa2011 (at) tonmeister.de
Phone: +49-(0)5231-975-644
Fax: +49-(0)5231-975-689
Summary:
The International Conference on Spatial Audio 2011 takes place from
November 10 to 13 at Detmold University of Music.
This experts' summit will examine current systems for multichannel audio
reproduction and complementary recording techniques, and discuss their
respective strengths and weaknesses.
Wavefield synthesis systems, a higher-order Ambisonics array, as well as
5.1/7.1 installations in diverse acoustic environments will be available
for comparative listening tests during the conference.
Structured plenary talks, paper and poster sessions will revisit
fundamentals and present the latest research.
A series of workshops will be dedicated to practical implementations of
spatial sound capture and playback methods, and their esthetic and
psychoacoustical implications for music perception.
Concerts that include music specially arranged for the conference will
let you experience various spatial sound systems in "live" conditions.
Call for papers and music:
Your contributions are welcome, either as presentations, posters, or
workshops. Submissions will undergo a review process, and accepted
contributions will be published in the conference proceedings.
The conference language is English.
We are planning structured sessions on the following topics:
* Multichannel stereo
* Wave field synthesis
* Higher-order Ambisonics / spherical acoustics
* 3D systems
* Binaural techniques
An additional session will be dedicated to related miscellaneous
contributions, such as hybrid systems and perception/evaluation of
spatial music reproduction.
Ross Bencina is the author of AudioMulch and has been deeply
involved in PortAudio, ReacTable, and other projects. His new article
on realtime audio programming is a MUST read for anyone new to the
area, and worth reading as a reminder even for experienced developers.
http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-…
I'm off to count how many violations Ardour contains ...
--p
Hi!
This question could have also been asked on jack-devel, but since LAD
probably has a broader audience:
I recently started hacking on a jack-driven matrix mixer (goals so far:
GUI, maybe network controls (OSC?), maybe LV2 host), and I wonder if
there are "frameworks" for test-driven development, so I can come up
with unit and acceptance tests while implementing new functionality.
Has anyone ever done test-first development with JACK? One could start jackd in
dummy mode with a random name, start some clients, wire inputs to
outputs, and compare the generated signal to the expected result, maybe
with some fuzzy matching to allow for arbitrary delays.
OTOH, if there are existing mocking libraries for jackd, things might be
a bit more straightforward: provide an input buffer to be returned by
jack_port_get_buffer, call the process function, and check the result
that's written to the output buffer (rough sketch below).
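Here is roughly what I have in mind, a minimal sketch only: it assumes the
mixer's process() callback is visible to the test and fetches its buffers
for two fixed port handles (the fake handle values, the test signal, and
the unity-gain expectation are all made up for illustration). The file is
linked into the test binary instead of -ljack.

/* jack_mock.c -- stand-in for libjack, linked into the test binary
 * instead of -ljack, so the real process() callback can be driven
 * directly without a running jackd. */
#include <assert.h>
#include <math.h>
#include <jack/jack.h>

#define TEST_NFRAMES 256

/* Fake port handles; a fuller mock would keep a port->buffer table
 * filled in by a stubbed jack_port_register(). */
#define MOCK_IN_PORT  ((jack_port_t*)0x1)
#define MOCK_OUT_PORT ((jack_port_t*)0x2)

static jack_default_audio_sample_t in_buf [TEST_NFRAMES];
static jack_default_audio_sample_t out_buf[TEST_NFRAMES];

/* Replacement for the real symbol: hand out the test-owned buffers. */
void* jack_port_get_buffer(jack_port_t* port, jack_nframes_t nframes)
{
    (void)nframes;
    return (port == MOCK_IN_PORT) ? (void*)in_buf : (void*)out_buf;
}

/* The mixer's real JACK callback, assumed to read MOCK_IN_PORT and
 * write MOCK_OUT_PORT via jack_port_get_buffer(). */
extern int process(jack_nframes_t nframes, void* arg);

int main(void)
{
    for (jack_nframes_t i = 0; i < TEST_NFRAMES; ++i)
        in_buf[i] = sinf((float)i * 0.1f);      /* known test signal */

    process(TEST_NFRAMES, NULL);

    /* Expect a unity-gain pass-through in this made-up test case. */
    for (jack_nframes_t i = 0; i < TEST_NFRAMES; ++i)
        assert(fabsf(out_buf[i] - in_buf[i]) < 1e-6f);
    return 0;
}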
Any pointers will be highly appreciated.
Cheers
I'm just getting started with some Python GStreamer coding, and having
trouble finding what I'd consider some basic examples.
These I've already got working (a rough code sketch of the first one
follows after these two):
1. build a pipeline showing the attached video device like:
gst-launch v4l2src device=/dev/video0 !
'video/x-raw-yuv,format=(fourcc)YUY2,width=640,height=480,framerate=20/1'
! xvimagesink
2. Record and show video at the same time:
gst-launch v4l2src device=/dev/video0 !
'video/x-raw-yuv,width=640,height=480,framerate=20/1' ! tee name=t_vid
! queue ! xvimagesink sync=false t_vid. ! queue ! videorate !
'video/x-raw-yuv,framerate=20/1' ! theoraenc ! queue ! oggmux !
filesink location=test.ogg
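For reference, here's how far I've got turning pipeline 1 into code -- a
minimal sketch only, assuming the C API's gst_parse_launch() (the Python
bindings expose the same call as gst.parse_launch), with no bus or error
handling beyond the parse:

/* monitor.c -- pipeline 1 built programmatically via gst_parse_launch().
 * Compile with: gcc monitor.c $(pkg-config --cflags --libs gstreamer-0.10) */
#include <gst/gst.h>

int main(int argc, char* argv[])
{
    GError*     error = NULL;
    GstElement* pipeline;
    GMainLoop*  loop;

    gst_init(&argc, &argv);

    /* Same description string as the gst-launch command above. */
    pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 ! "
        "video/x-raw-yuv,format=(fourcc)YUY2,width=640,height=480,framerate=20/1 ! "
        "xvimagesink", &error);
    if (pipeline == NULL) {
        g_printerr("Could not build pipeline: %s\n", error->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);    /* Ctrl-C to quit; no bus watching here */

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}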
I want to get working:
3. Toggle recording, so I can show a video monitor and start or stop
video recording at will, without destroying and recreating the
pipeline.
4. Sync multiple pipelines; my end goal is to be able to record x
video streams in sync while playing back y videos, with all play and
record sources in sync (think video mixing).
I'm looking for examples of 3 and 4, and improvement ideas for 1 and 2.
Thanks,
Nathanael
I was looking at the (unfinished) example sampler plugin here:
https://gitorious.org/gabrbedd/lv2-sampler-example
I ran through the mental exercise of trying to figure out how to
finish it, and I have a question about the UI extension. How does a
UI tell its plugin to load a sample file? The example has a TODO in
its run function that indicates it will react to an LV2_Event that
contains a pathname for a file. I don't understand how a UI will
create this event.
On lv2plug.in, I see there's an experimental string-port extension
that defines a transfer mechanism for strings. Is this the
recommended method? Do any hosts support this extension?
There's also an atom extension, but I don't think I grok it yet. Can
I create a port of type atom:MessagePort? How does a UI make use of
that?
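For concreteness, here is how I imagine the UI side might look -- just a
sketch: the write function and its types come from the UI extension
header, but the transfer-format URI below is a placeholder of my own,
standing in for whatever the string-port or atom extension actually
defines, and the port index is assumed.

#include <stdint.h>
#include <string.h>
#include "lv2/lv2plug.in/ns/extensions/ui/ui.h"

/* Placeholder transfer type -- NOT a real spec URI; whatever the
 * string-port/atom extension defines would be mapped to a URID here. */
#define FAKE_STRING_XFER_URI "http://example.org/ns/ext/string-transfer"

typedef struct {
    LV2UI_Write_Function write;       /* given to the UI at instantiation */
    LV2UI_Controller     controller;  /* likewise                         */
    uint32_t             sample_port; /* index of the "sample path" port  */
    uint32_t             xfer_format; /* URID mapped from the URI above   */
} SamplerUI;

/* Called by the UI's file chooser when the user picks a sample. */
static void
ui_send_sample_path(SamplerUI* ui, const char* path)
{
    /* Push the pathname through the host; the host is then responsible
     * for getting it to the plugin instance, e.g. as an event that the
     * run() callback sees on the corresponding input port. */
    ui->write(ui->controller,
              ui->sample_port,
              (uint32_t)strlen(path) + 1,   /* include terminating NUL */
              ui->xfer_format,
              path);
}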
On Mon, Jul 25, 2011 at 4:02 PM, David Robillard <d(a)drobilla.net> wrote:
> On Mon, 2011-07-25 at 13:05 +0200, Lieven Moors wrote:
> > OK, what happened was that I landed on the http://lv2plug.in/ns/ext
> > page, was expecting a download extensions link, didn't find it, and
> > downloaded
> > the files manually from the links on those pages.
>
> The event extension page
>
> http://lv2plug.in/ns/ext/event
>
> does have a link to the latest release (1.2).
>
> -dr
>
>
That is really odd. Did it change in the last
couple of days? I'm sure I got the header from
there, and I'm sure it had http in the address on line 28.
Otherwise this is not a bug, it's a ghost...
lievenmoors