On 23 February 2011 22:11, David Robillard <d(a)drobilla.net> wrote:
> SLV2 is now based on two new libraries: Serd (RDF syntax) and Sord (RDF
> store). Both are roughly 2 thousand lines of C, solid and thoroughly
> tested (about 95% code coverage, like SLV2 itself). Serd has zero
> dependencies, Sord depends only on Glib (for the time being, possibly
> not in the future).
Can you point me at the API or code? I couldn't see it in a quick
browse on your SVN server.
I have a library (Dataquay,
http://code.breakfastquay.com/projects/dataquay -- preparing a 1.0
release of it at the moment, so if anyone wants to try it, go for the
repository rather than the old releases) which provides a Qt4 wrapper
for librdf and an object-RDF mapper.
It's intended for applications whose developers like the idea of RDF
as an abstract data model and Turtle as a syntax, but are not
particularly interested in being scalable datastores or engaging in
the linked data world.
For my purposes, Dataquay using librdf is fine -- I can configure it
so that bloat is not an issue (and hey! I'm using Qt already) and some
optional extras are welcome. But I can see the appeal of a more
limited, lightweight, or at least less configuration-dependent
back-end.
I've considered doing LV2 as a simple example case for Dataquay, but
the thought of engaging in more flamewars about LV2 and GUIs is really
what has put me off so far. In other words, I like the cut of your
jib here.
Chris
Hello everyone,
Everyone knows Yoshimi, the fork of ZynAddSubFX.
One thing was missing for Yoshimi to be perfect: being almost fully
controllable by MIDI ( no OSC, sorry ).
ZynAddSubFX made it possible to control a few parameters with
complicated NRPNs, and Yoshimi recently gained some such features too
( in the test versions ).
But now I'm proud to announce the work of licnep ( not me, I'm just
a bug reporter ), who made the "midiLearn" function for Yoshimi. It's not
stable yet because it's recent, and not complete, but here are the
current features:
* Control System effects, Part Insert Effects
* Master/Part Volume, Pan, System Effect Sends
* Most of ADsynth parameters
* Add/Remove controller
* detect the channel and the number
* reset the knob ( its position )
I think it's a very useful feature that could help many
yoshimi/zyn users.
Using it is simple: connect your controller to Yoshimi,
right-click on a blue knob ( the yellow ones are not supported for
now ) and click "midi Learn", then move your controller; it is
detected automatically.
To see and modify controllers, go to the Yoshimi > MIDI controllers menu.
To erase the MIDI control of a knob, simply right-click on it and click
"remove midi control".
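Under the hood, a MIDI-learn feature like this usually boils down to binding the first incoming control-change message to whichever parameter is armed for learning. The sketch below is hypothetical (it is not licnep's actual code, and all names are invented for illustration), but shows the basic idea of the learn/dispatch split:

```c
#include <stdio.h>

/* Hypothetical MIDI-learn core: while a parameter is armed, the first
   CC message seen is captured as its binding; afterwards, matching CC
   messages are dispatched to the bound parameter. */
typedef struct { int channel, cc, param; } Binding;

static Binding bindings[128];
static int n_bindings = 0;
static int learn_param = -1;   /* parameter armed for learning, -1 = off */

static void on_cc(int channel, int cc, int value)
{
    if (learn_param >= 0) {              /* learn mode: capture source */
        bindings[n_bindings++] = (Binding){ channel, cc, learn_param };
        learn_param = -1;                /* leave learn mode */
        return;
    }
    for (int i = 0; i < n_bindings; i++) /* normal mode: dispatch */
        if (bindings[i].channel == channel && bindings[i].cc == cc)
            printf("param %d <- %d\n", bindings[i].param, value);
}
```

Right-clicking a knob and choosing "midi Learn" would correspond to setting `learn_param`; "remove midi control" would delete the matching entry from `bindings`.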
Here is the gitHub repository: https://github.com/licnep/yoshimi
To download and install it, follow the instructions here ( GitHub ):
https://github.com/licnep/yoshimi/wiki/How-to
A short page explaining how to add control of other, not yet
implemented controllers:
https://github.com/licnep/yoshimi/wiki/Source-code
Pages to follow the news of the project:
Facebook: https://www.facebook.com/pages/Yoshimi-midi-learn/224823617534934
Twitter: http://twitter.com/#!/YoshimiMIDI
So if you're interested, bug reports are very welcome.
Cheers,
Louis CHEREL.
hi *!
sorry for the slightly off-topic post, but since spatial audio has been
a frequent topic lately, i think some people here might be interested.
linux or FLOSS won't be exactly in the limelight, but yours truly will
make sure there are at least 2-3 boxes with your favourite OS and audio
tools humming along in various places. oh, and you might come early and
watch a few high-end mixing consoles boot - the startup screen will
bring tears to your eyes (as will the price tag, unfortunately :)
there will have to be an admission fee, which we haven't
decided on yet, but we're trying to keep it reasonable. don't shout at
me when it turns out to be a bit more costly than LAC, though...
jörn
*.*
ICSA 2011 - International Conference on Spatial Audio
November 10 - 13, Hochschule für Musik, Detmold
Organizers:
Verband Deutscher Tonmeister (VDT), in cooperation with
Deutsche Gesellschaft für Akustik e.V. (DEGA), and
European Acoustics Association (EAA).
Contact/Chair:
Prof. Dr.-Ing. Malte Kob
Erich-Thienhaus-Institut
Neustadt 22, 32756 Detmold
Mail: icsa2011 (at) tonmeister.de
Phone: +49-(0)5231-975-644
Fax: +49-(0)5231-975-689
Summary:
The International Conference on Spatial Audio 2011 takes place from
November 10 to 13 at Detmold University of Music.
This experts' summit will examine current systems for multichannel audio
reproduction and the recording techniques that complement them, and
discuss their respective strengths and weaknesses.
Wavefield synthesis systems, a higher-order Ambisonics array, as well as
5.1/7.1 installations in diverse acoustic environments will be available
for comparative listening tests during the conference.
Structured plenary talks, paper sessions and poster sessions will
revisit fundamentals and present the latest research.
A series of workshops will be dedicated to practical implementations of
spatial sound capture and playback methods, and their esthetic and
psychoacoustical implications for music perception.
Concerts that include music specially arranged for the conference will
let you experience various spatial sound systems in "live" conditions.
Call for papers and music:
Your contributions are welcome, either as presentations, posters, or
workshops. Submissions will undergo a review process, and accepted
contributions will be published in the conference proceedings.
The conference language is English.
We are planning structured sessions on the following topics:
* Multichannel stereo
* Wave field synthesis
* Higher-order Ambisonics / spherical acoustics
* 3D systems
* Binaural techniques
An additional session will be dedicated to related miscellaneous
contributions, such as hybrid systems and perception/evaluation of
spatial music reproduction.
Linux Audio Developer,
May I make a feature request here for your Linuxaudio application(s)?
Could you please add JackSession support? It makes working with
standalone JACK applications a lot more user-friendly. Some apps
already support it and they work fine, such as Yoshimi, Qtractor,
Pianoteq, Ghostess, Guitarix, Jack-Rack, Ardour3, Bristol, Seq24, Jalv,
Ingen, Connie, Specimen and probably more.
It is possible to use applications without JackSession support in a
session (via so-called infra clients): the session manager starts the
applications and makes the connections, but doesn't save their state.
So obviously it would be far more useful if those applications gained
JackSession support as well.
Qjackctl is able to work as Session Manager, so is Pyjacksm (and likely
Patchage in the future).
According to comments on IRC by Paul Davis, it's very easy to add
JackSession support to your application.
"Its really easy, just handle 1 more callback from the server. Torben's
walkthrough shows what is necessary."
Torben's walkthrough:
http://trac.jackaudio.org/wiki/WalkThrough/Dev/JackSession
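For the curious, "1 more callback" really is the shape of it. Here is a rough sketch based on the public jack/session.h API (the `myapp` command line and the save step are hypothetical, and error handling is omitted; see Torben's walkthrough for the real details):

```c
/* Minimal sketch of a JackSession handler using the public
   jack/session.h API.  Requires linking against libjack. */
#include <jack/jack.h>
#include <jack/session.h>
#include <stdio.h>
#include <string.h>

static jack_client_t *client;

static void session_cb(jack_session_event_t *ev, void *arg)
{
    char cmd[512];

    /* 1. Save the application's state into ev->session_dir here. */

    /* 2. Tell the server how to restart us into this session.
       command_line is freed by jack_session_event_free(). */
    snprintf(cmd, sizeof(cmd),
             "myapp --jack-uuid %s --state-dir \"%s\"",
             ev->client_uuid, ev->session_dir);
    ev->command_line = strdup(cmd);
    jack_session_reply(client, ev);

    if (ev->type == JackSessionSaveAndQuit)
        /* schedule a clean shutdown from the main thread */ ;

    jack_session_event_free(ev);
}

/* in main(), after jack_client_open():
       jack_set_session_callback(client, session_cb, NULL);          */
```

The callback runs outside the process graph, so saving state there is safe; the only real work an application has to add is serializing its own state into the directory it is handed.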
Thanks in advance,
\r
Hi,
It is very promising that devs like Torben, Paul Davis, Rui and David
Robillard (to name a few) are backing Jack Session, and that the
Jack Session API is part of the Jack API. This gives the community a
very good chance that many apps will get JackSession support sooner or
later. However, it's still reasonable to expect that not all LAD
applications are going to be patched with JackSession support.
In other words, there are and will be apps which might be useful (for
one or more of us) in a session but which won't have JackSession (JS)
support. From a user's perspective, it would nevertheless be very
useful to be able to use such an application (without JS support) in a
session in some way.
At the moment one Session Manager (SM), Pyjacksm (Qjackctl will follow
soon, and also Patchage I expect) makes this possible by manually adding
'infra clients' to a configuration file, .pyjacksmrc. See example below.
Infra clients are designed for applications without a state, like a2j.
But it is also possible to use apps without JS support as infra client.
Amsynth is an application without JS support, and in this way I am able
to load it with project A. The SM makes sure that Amsynth is
started and that the Jack connections are restored (that's all the SM
can do for apps without JS support). But I don't always want to use
Amsynth with project A (Session 1). I might be working on a totally
different project and want to make a session for that too
(Session 2). This time I want to load amsynth as: amsynth -b
/home/user/projectB.amSynth.presets (I don't use Sessions 1 and 2
together in this example).
To be able to load Session 2, I have to edit my .pyjacksmrc file or make
symlinks.
*Feature request*: It would be nice if the SM could provide me a way to
load a different configuration file.
For example: JackSessionManagerX --load configurationfileSession2
Thanks in advance,
\r
.pyjacksmrc:
[DEFAULT]
sessiondir = ~/linuxaudio/JackSession
[infra]
a2j = a2jmidid -e
amsynth = amsynth -b /home/user/projectA.amSynth.presets
configurationfileSession2:
[DEFAULT]
sessiondir = ~/linuxaudio/JackSession
[infra]
a2j = a2jmidid -e
amsynth = amsynth -b /home/user/projectB.amSynth.presets
Dan Muresan wrote:
> Hi Erik -- please CC me so I can reply (I don't receive messages from
> LAU directly). I'm quoting manually here:
>
> > > I'm trying to cancel an ongoing sf_* I/O operation (from another
> > Maybe its a bad idea. :-)
>
> I don't think so... Issuing large requests, then cancelling as needed
> gives a process the lowest possible latency for unpredictable seeks
> (caused e.g. by user commands), while keeping CPU usage low (by
> avoiding syscall and context switching overhead)
Let me put it this way:
a) When I designed libsndfile over 10 years ago, I never dreamed
anybody would try this.
b) In the 10 years libsndfile has been around, and across the probably
   hundreds of applications it has been used in, no one has suggested
   that it would be a good idea if libsndfile could do this.
To me that suggests that either you have a completely unique
problem to solve, or that there are other solutions that people
use to get around the same problem.
My guess is that your problem is not completely unique.
> > Reading one frame at a time sounds like a bad idea too.
>
> 1 frame at a time was an extreme example. The point was that
> libsndfile doesn't employ a user-space cache, but direct system calls.
> Reading 10, 100 or 480 frames at a time will still incur syscall
> overhead (== CPU usage), and progressively larger cancel latencies.
>
> > > libsndfile. It would be nice if libsndfile could allow short reads and
> > > writes via some sf_command parameter.
> > It does. You can read any number of frames from 1 through to as many
> > frames as the file contains.
>
> I meant "short reads" in kernel-speak sense: read(2) can return a
> number of bytes less than the number requested when interrupted by a
> signal (if SA_RESTART is disabled). My proposal was to add a
> sf_command() parameter that disables the looping behavior of sf_read()
> on EINTR, and makes it return exactly as many frames as the first
> read() call manages to get.
I accept good quality, clean patches with tests for the functionality
you are adding. Wherever possible, they should be cross platform.
> On second thought, though, this proposal could not possibly work
> without a userspace (libsndfile) cache, because read() might return
> incomplete frames, which would need to be processed in a later call.
Modifying libsndfile to do fread/fwrite style buffering would be
relatively easy. Again, patches accepted.
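To make the distinction being discussed concrete, here is a small self-contained sketch (not libsndfile code; the function name is invented) of the retry-on-EINTR loop that sf_read_* effectively performs today:

```c
#include <errno.h>
#include <unistd.h>

/* Loop until 'count' bytes arrive or EOF, restarting on EINTR --
   effectively what sf_read_* does now.  The proposed sf_command()
   flag would instead return after the first successful read(). */
static ssize_t read_full(int fd, void *buf, size_t count)
{
    size_t done = 0;
    while (done < count) {
        ssize_t n = read(fd, (char *)buf + done, count - done);
        if (n == 0)
            break;                      /* EOF */
        if (n < 0) {
            if (errno == EINTR)
                continue;               /* interrupted by a signal: retry */
            return -1;
        }
        done += (size_t)n;
    }
    return (ssize_t)done;
}
```

In the proposed "short read" mode the loop would return after the first successful read(), leaving the caller to deal with a partial result, possibly mid-frame, which is exactly why a userspace cache becomes necessary.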
> > I just checked, and the address you used to post this email to the LAU
> > list are not subscribed to the libsndfile-users list. Thats why the list
> > is rejecting your email.
>
> That's exactly the problem: I subscribed about two weeks ago, received
> a confirmation,
Was that a confirmation or a request for confirmation? Joining a
mailing list usually involves:
a) Send a request to join.
b) Receiving a "confirm that you want to join" message.
c) Sending "confirm that you want to join".
d) Receiving a "yes, you are now a member" message.
> and sent a message at that time (which received no
> bounces, but no replies either). Now, the mailserver somehow forgot
> about me and is rejecting my messages. Or something...
All other complaints of this sort that I have received have been
from people who couldn't figure out the subscribe procedure or
joined from one email address and sent mail to the list from
another.
Cheers,
Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/
Hello,
I have created a new audio format intended for Linux. I am looking for
help to finish the C code, to test it, and to create a new generation
of audio card. I am able to create audio files containing human voices,
several KB in size, from only 10 bytes.
The progress page for my audio project is here:
Http://www.letime.net/legere/index.html
> From: Nick Copeland
> Don't you think it is more likely that people who are interested will run
> Linux as a replacement for Android on the ARM tablets rather than have
> the apps ported over?
"Smartphone growth is on a meteoric rise, with Android leading the way...
Android follows with 14.5%, and iOS with 13.8%. This is the first time
Android has eclipsed iOS share in the InMobi network"
Android penetration (14.5%) is an order of magnitude bigger than Linux
desktop (at about 2%?). My prediction - Android rapidly surpasses Linux
application base and mindshare.
Jump on board *early* LV2 ;)
Best Regards,
Jeff