On 23 February 2011 22:11, David Robillard <d(a)drobilla.net> wrote:
> SLV2 is now based on two new libraries: Serd (RDF syntax) and Sord (RDF
> store). Both are roughly 2 thousand lines of C, solid and thoroughly
> tested (about 95% code coverage, like SLV2 itself). Serd has zero
> dependencies, Sord depends only on Glib (for the time being, possibly
> not in the future).
Can you point me at the API or code? I couldn't see it in a quick
browse on your SVN server.
I have a library (Dataquay,
http://code.breakfastquay.com/projects/dataquay -- preparing a 1.0
release of it at the moment, so if anyone wants to try it, go for the
repository rather than the old releases) which provides a Qt4 wrapper
for librdf and an object-RDF mapper.
It's intended for applications whose developers like the idea of RDF
as an abstract data model and Turtle as a syntax, but are not
particularly interested in being scalable datastores or engaging in
the linked data world.
For my purposes, Dataquay using librdf is fine -- I can configure it
so that bloat is not an issue (and hey! I'm using Qt already) and some
optional extras are welcome. But I can see the appeal of a more
limited, lightweight, or at least less configuration-dependent
alternative.
I've considered doing LV2 as a simple example case for Dataquay, but
the thought of engaging in more flamewars about LV2 and GUIs is really
what has put me off so far. In other words, I like the cut of your
jib.
Sorry for the slightly off-topic post, but since spatial audio has been
a frequent topic lately, I think some people here might be interested.
Linux and FLOSS won't be exactly in the limelight, but yours truly will
make sure there are at least 2-3 boxes with your favourite OS and audio
tools humming along in various places. Oh, and you might come early and
watch a few high-end mixing consoles boot - the startup screen will
bring tears to your eyes (as will the price tag, unfortunately :)
Unfortunately, there will have to be an admission fee, which we haven't
decided on yet, but we're trying to keep it reasonable. Don't shout at
me when it turns out to be a bit more costly than LAC, though...
ICSA 2011 - International Conference on Spatial Audio
November 10 - 13, Hochschule für Musik, Detmold
Verband Deutscher Tonmeister (VDT), in cooperation with
Deutsche Gesellschaft für Akustik e.V. (DEGA), and
European Acoustics Association (EAA).
Prof. Dr.-Ing. Malte Kob
Neustadt 22, 52756 Detmold
The International Conference on Spatial Audio 2011 takes place from
November 10 to 13 at Detmold University of Music.
This experts' summit will examine current systems for multichannel audio
reproduction and complementary recording techniques, and discuss their
respective strengths and weaknesses.
Wavefield synthesis systems, a higher-order Ambisonics array, as well as
5.1/7.1 installations in diverse acoustic environments will be available
for comparative listening tests during the conference.
Structured plenary talks, paper sessions, and poster sessions will
revisit fundamentals and present the latest research.
A series of workshops will be dedicated to practical implementations of
spatial sound capture and playback methods, and their esthetic and
psychoacoustical implications for music perception.
Concerts that include music specially arranged for the conference will
let you experience various spatial sound systems in "live" conditions.
Call for papers and music:
Your contributions are welcome, either as presentations, posters, or
workshops. Submissions will undergo a review process, and accepted
contributions will be published in the conference proceedings.
The conference language is English.
We are planning structured sessions on the following topics:
* Multichannel stereo
* Wave field synthesis
* Higher-order Ambisonics / spherical acoustics
* 3D systems
* Binaural techniques
An additional session will be dedicated to related miscellaneous
contributions, such as hybrid systems and perception/evaluation of
spatial music reproduction.
I've forked Specimen, primarily to provide frequency modulation of the
LFOs and to make all the LFOs and ADSRs independent. There is no longer
a single dedicated ADSR and a single dedicated LFO for, e.g., pitch
modulation; instead, pitch modulation has two 'inputs', each of which
can choose from all the ADSRs and all the LFOs.
Please read the README for more information:
The current state of Petri-Foo is that the LFOs and ADSRs have been
made independent and are, AFAICT, working as they should. The GUI is
not yet up to date, but enough changes have been made to give a basic
idea of what's going on.
Please do read the README before commenting. I've tried to do things
properly! I'm only human and only a hobbyist coder.
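The two-input routing described above can be sketched in C. This is an
illustrative model only - the names, struct layout, and generator
counts here are mine, not Petri-Foo's actual data structures:

```c
#include <stddef.h>

#define NUM_EG  4   /* assumed count of ADSR envelope generators */
#define NUM_LFO 4   /* assumed count of LFOs */

typedef enum {
    MOD_SRC_NONE,
    MOD_SRC_EG,     /* index selects one of the ADSRs */
    MOD_SRC_LFO     /* index selects one of the LFOs  */
} mod_src_type;

typedef struct {
    mod_src_type type;
    int          index;   /* which EG or LFO               */
    float        amount;  /* modulation depth, -1.0 .. 1.0 */
} mod_input;

typedef struct {
    /* two independent inputs per destination, e.g. pitch */
    mod_input pitch_mod[2];
} patch_params;

/* Current output of every generator in this audio cycle. */
typedef struct {
    float eg[NUM_EG];
    float lfo[NUM_LFO];
} mod_values;

static float mod_input_value(const mod_input* in, const mod_values* v)
{
    switch (in->type) {
    case MOD_SRC_EG:  return in->amount * v->eg[in->index];
    case MOD_SRC_LFO: return in->amount * v->lfo[in->index];
    default:          return 0.0f;
    }
}

/* Sum both pitch-modulation inputs into one control value. */
float pitch_modulation(const patch_params* p, const mod_values* v)
{
    return mod_input_value(&p->pitch_mod[0], v)
         + mod_input_value(&p->pitch_mod[1], v);
}
```

The point of the indirection is that any destination can pick any
generator, so nothing is dedicated to one role any more.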
This is to bring a discussion from the Jack Dev list to this more
appropriate forum as suggested by Arnold Krille.
First, I hear lots of people seemingly thinking that AVB (IEEE 1722) and
the IEEE 1588 version of the Precision Time Protocol can be done in
software. They cannot and must not be. Both need hardware assist. Period. A
timestamp is specified to be inserted based on the leading edge of the
header, immediately after the preamble. If a node that fakes these in
software ever sits next to equipment that does them with the required
precision, it will kill that clock network, and there will be yet
another good reason not to bring Linux to the workplace.
The ONLY exception is that if the listener is a stream-to-disk system,
then the timestamp system can simply be ignored. Such a listener will
never turn on PTP, but that won't hurt, because it will just ask for the
1722 stream and the talker will spit it out without knowing that that
node doesn't play PTP.
The version of PTP that is used in AVB is from the 802.1AS
specification. The acronym PTP is now an ambiguous one that has at
least these two uses, and I have heard some other hardware-assisted
networked timing schemes called PTP.
IEEE 1588 specifies an epoch-based struct with 48 bits of seconds (this
gives 8.9 million years before a "y2k" hits IEEE 1588) and a 32-bit
number that specifies nanoseconds. The 802.1AS sub-spec also uses this
format.
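Sketched as a C struct - the field widths match the description above,
but the names are mine, not the spec's:

```c
#include <stdint.h>

/* 48 bits of seconds plus a 32-bit nanoseconds field that rolls
 * over at 999999999. 2^48 seconds is roughly 8.9 million years. */
typedef struct {
    uint64_t seconds;      /* only the low 48 bits are meaningful */
    uint32_t nanoseconds;  /* 0 .. 999999999                      */
} ptp_timestamp;

/* Flatten to a single nanosecond count. This fits in 64 bits for
 * any realistic date (2^64 ns is about 584 years past the epoch). */
uint64_t ptp_to_ns(const ptp_timestamp* t)
{
    return t->seconds * 1000000000ull + t->nanoseconds;
}
```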
PTP maintains one suite of transactions to keep itself timed. This is
blind to AVB.
AVB creates Word Clock timeframes using the PTP wall-clock that MUST be
made available to the 1722 layer. IF YOU HAVE PTP, then you can
synthesize predictive wallclocks using a buffer-full scheme in a
PTP-capable NIC. That NIC has to be configured to play out the frames
per the 802.1Qav forwarding and scheduling spec. This is how streamers
will deliver well-timed, low-jitter streams. There are fruit companies
doing this as we speak, with new NICs from Broadcom and Marvell (and a
host of others).
It is possible to fake a GrandMaster clock using kernel-timed
calculations. The Best Master Clock Algorithm (BMCA) of a two-node
system will be forced to accept such a sloppy clock and the slave will
achieve lock, but with jitter that will fail a normally specified PTP
system. Noisy environment listeners will not hear this, but clean
listening will reveal the various artifacts of such jitter.
You can just make a leaky-bucket PLL at a receiver and use the DPLL
frequency to inform SRC. This hack will be un-noticed by the average
media-player person, but not by the critical listener.
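A minimal sketch of that leaky-bucket DPLL idea, assuming a first-order
loop whose smoothed output is handed to the sample-rate converter as
its ratio. The names and the loop gain are illustrative, not from any
particular implementation:

```c
/* First-order "leaky bucket" DPLL: low-pass the phase error between
 * stream timestamps and the local clock, and use the smoothed ratio
 * to drive an SRC. The gain below is a made-up starting point. */
typedef struct {
    double ratio;  /* estimated resampling ratio, near 1.0 */
    double alpha;  /* loop gain (leak factor), e.g. 0.001  */
} dpll;

void dpll_init(dpll* d, double alpha)
{
    d->ratio = 1.0;
    d->alpha = alpha;
}

/* err_ns: how far ahead (+) or behind (-) the stream clock drifted
 * relative to the local clock over the last period_ns nanoseconds. */
double dpll_update(dpll* d, double err_ns, double period_ns)
{
    d->ratio += d->alpha * (err_ns / period_ns);  /* leaky integration */
    return d->ratio;  /* hand this to the SRC as its ratio */
}
```

As the text says, this hides drift well enough for casual listening,
but the residual jitter is audible to a critical listener.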
When the 1722 timestamp is constructed, it is a complex assembly from
the 802.1AS timestamp. The 802.1AS timestamp is a two-part thing, as
specified above with its first part being simply seconds. This will not
roll over in the lifetime of Linux, our species or even our continents,
let alone a recording session. The second part is specified to roll
over at decimal one billion minus 1 = 999999999 = 0x3B9AC9FF. The
timestamp in IEEE 1722 rolls over at the unsigned 32-bit maximum,
4294967295 = 0xFFFFFFFF nanoseconds, which is 4.294967296 seconds. I
apologize for quoting "weeks between rollover" in the previous thread.
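That 4.294967296-second wrap can be handled with plain modular
arithmetic in C. A sketch, assuming the timestamps being compared are
within half a wrap of each other (transit_ns is a hypothetical
transit-time offset, not a value from the spec):

```c
#include <stdint.h>

/* The 1722 timestamp is the low 32 bits of the 802.1AS nanosecond
 * timeline, so it wraps every 2^32 ns = 4.294967296 s. */

/* Wrapped timestamp: (full time + offset) mod 2^32 via truncation. */
uint32_t avtp_timestamp(uint64_t gptp_ns, uint32_t transit_ns)
{
    return (uint32_t)(gptp_ns + transit_ns);
}

/* Positive if a is later than b; valid while the two are within
 * about 2.1 s (half a wrap) of each other. */
int32_t avtp_ts_diff(uint32_t a, uint32_t b)
{
    return (int32_t)(a - b);
}
```

The signed-difference trick is the standard way to compare sequence
numbers and timestamps across a wrap boundary.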
IEEE 1588 and IEEE 1722 are Ethernet-only protocols; do not shoe-horn
them into IP.
I have heard lots of people say that AVB is just something for consumers.
Go to http://grouper.ieee.org/groups/1722/ and hover over some of the
names to find out where the authors work. It was, in fact, designed
FIRST to very easily accommodate Pro Audio:
Multiple-Node Synchronization without the need for Sample Rate Conversion.
Unlimited channel counts (at least not limited by the protocol, only by
the bandwidth).
And then it would be a trivial subset to get two channels - or 5.1, 7.1,
or any surround count - to go from my CD player to any media player
over some LAN. (However, as of two autumns ago, they were still
kvetching over Wi-Fi.)
Finally, yes, CLOCK_REALTIME can very simply be pasted from a good PTP
clock.
Forwarding this to the list where users and developers might be able to assist.
Hope this helps!
From: Michael Van [mailto:email@example.com]
Sent: Saturday, April 09, 2011 12:44 PM
Subject: Frequency Space Editors in Linux
Hello Linux Audio,
I just wanted to find out if anyone knows of any Linux programs that do
sophisticated noise removal from recordings, like the frequency-space
editing in Windows programs such as Adobe Audition or Cool Edit. I
wondered if there is a plugin for Audacity that might do it.
I need something other than the standard noise-sample removal plugins,
because the crackle is only present while the music is playing, not
during the quiet stretches.
Is there a library with C #include headers for the following? How can I
get info about WAV, mp3, ogg, m4a, wma [wmv] ... files - sample rate,
channels, play length (in samples), bytes per sample, bitrate, VBR ... -
and, once I have this info, decode them to a raw stream?
At a minimum I need everything about WAV, mp3, and ogg.
Info about a sound file should not be printf()'ed to stdout, but must
be filled into an info structure.
The decoder library should have functions like loadFile, Play,
SeekToSample, and Stop - at least the possibility to play a file
starting from a given sample or second.
Or should I use two libs, one for getting the info and one for decoding?
Any C example or pointer is welcome.
Thanks in advance to all.
I might be staying in Dublin during the LAC. The last train on Friday
and Saturday leaves at 23:10, and the concerts might not have ended by
then. Is anyone driving from Maynooth to Dublin after the sound night
who could take a passenger?
I was just looking at rates for accommodation for the LAC, and I was
about to book a single room in the GlenRoyal hotel. But if somebody who
has already booked doesn't mind sharing with me, I can chip in half the
cost.
I'm told I don't snore :)