Does anyone have any experience with speed of traversal through a
boost multi index container? I'm pondering their use to manage notes
currently in play, e.g. indexed by MIDI channel and ordered by MIDI event
time/frame stamp.
cheers, Cal
Hi,
I've been trying to come up with a nice program architecture for a live
performance tool (Audio looping etc),
and I've kind of hit a wall:
Input will be taken via OSC, the "engine" will be written in C++, and the
GUI is up in the air.
I've written most of the engine (working to a degree; it needs some bugfixes),
and now I've started implementing the GUI in the same binary, i.e. it's all
compiled together: double-click it and it shows on screen and loads a JACK
client.
The GUI code has a nasty habit of segfaulting, which also takes down the
engine. That's a no-go for live performance.
The engine itself is rock solid, so it's the GUI thread running alongside it
that's causing the segfaults.
So I'm wondering whether it's feasible to keep the audio and other data in
shared memory (SHM), and then write the GUI in Python reading from the same
memory. Is this considered "ugly" design? I have no experience with SHM, so I
thought I'd ask.
The other option I was considering is writing the front-end GUI using only
information obtained over OSC, but that would exclude the waveforms of the
audio, and lots of other nice features...
Help, advice, laughter etc welcomed :-) -Harry
Hello everyone,
I am trying to understand how a simple sound server could be implemented. I will
not necessarily develop this, but I'm trying to clarify my ideas.
As in JACK, it would allow clients to register, and their process callback to be
called with input and output buffers of a fixed size. The server would then mix
all output data provided by clients and pass the result to the audio hardware.
It would also read audio input from the hardware and dispatch it to the clients.
There wouldn't be any ports, routing, etc. as provided by JACK. The main
purpose of such a server would be to allow several applications to record and
play audio without any of them acquiring exclusive access to the audio
hardware. In this regard it's similar to PulseAudio and many others.
The server itself could have a realtime thread for accessing audio; therefore,
for a proof of concept, it could be developed on top of JACK. However, none of
the clients could run in realtime: this is a given of my problem. The clients
would be standard applications with very limited privileges. They wouldn't be
able to raise their own thread priorities at all. Each client would run as a
separate process.
The only solution that came to my mind so far is to have the clients communicate
with the server through shared memory. For each client, a shared memory region
would be allocated, consisting of one lock-free ringbuffer for input, another
for output, as well as a shared semaphore for server-to-client signaling.
At each cycle, the server would read and write audio data from/to the
ringbuffers of each registered client, and then call sem_post() on all the
shared semaphores.
A client-side library would handle all client registration details, as well as
thread creation. It would then sem_wait(), and when woken, read from the input
ringbuffer, call the client's process callback with I/O buffers, and write to
the output ringbuffer.
Does this design sound good to you? Do you think it could achieve reliable I/O,
and reasonable latency? Keeping latency as low as possible, what do you advise
for the size of the ringbuffers?
--
Olivier
Hello
I bought the Natural Drum samples (http://www.naturaldrum.com/). The library
contains WAVs and presets for Kontakt and Halion. Now I'd like to create some
GigaSampler files in order to use it with LinuxSampler.
The documentation of the natural drum sample library is quite good. The only
thing missing is the "loudness" of each sample in order to map each sample to a
velocity level from 0-127.
What would you recommend in order to calculate the "peak" of each drum sample
automatically? Is there a library which could do this? I would also be happy
with a command line tool like this:
$ peak bla.wav
Peak value: 12345
I could then write a C++ app using libgig.
Any ideas? Libraries? Algorithms?
Thanks!
Oliver
Hi all,
I've been battling a kind of a dsp-writer's-block as of late. Namely, I am
dealing with a project where (at least as of right now) I would like to explore
human whisper and its percussive/rhythmic power. This would take place in an
ensemble of "voices." I am also looking to combine whisper with some sort of
DSP. A vocoder is an obvious choice, but IMHO it sounds cliche, and as a result
I would like to avoid it as much as possible (unless I can somehow come up with
a cool spin on it, which I haven't yet). I also tried
amp mod, additive, filtering, etc., but none of these struck me as something
interesting. I do think delays will be fine in terms of "punctuating" the
overall pattern but I think this should take place at the end of the DSP chain.
Granular synthesis is also a consideration but I've done so much of it over the
past years I am hoping to do something different.
So, as of right now I have:
1) whisper
2) ???
3) delays
4) profit! :-)
Given the mental constipation I have been battling particularly over the past
couple of days, I wanted to turn to you my fellow LA* enthusiasts for some
thoughts/ideas/inspiration. Your help would be most appreciated and I will
gladly credit your ideas in the final piece.
Many thanks!
Ivica Ico Bukvic, D.M.A.
Composition, Music Technology
Director, DISIS Interactive Sound and Intermedia Studio
Assistant Co-Director, CCTAD
CHCI, CS, and Art (by courtesy)
Virginia Tech
Department of Music
Blacksburg, VA 24061-0240
(540) 231-6139
(540) 231-5034 (fax)
ico.bukvic.net
If you need something to push you over the edge and port your existing
Qt/KDE music-making or multimedia app to mobile platforms such as Symbian,
Maemo, or MeeGo (e.g. the N900): http://qt-apps.org/news/?id=340 (see below).
Some ideas (please?):
http://sv1.sourceforge.net/ == http://www.sonicvisualiser.org/
http://kmid2.sourceforge.net
http://kmetronome.sourceforge.net
http://kmidimon.sourceforge.net
http://vmpk.sourceforge.net/
http://qtractor.sourceforge.net
http://qmidictl.sourceforge.net
http://qmidinet.sourceforge.net
http://qjackctl.sourceforge.net/
....................
Win 10.000,- EUR at the "Qtest Mobile App Port"
Published: Dec 20 2010
Qtest Mobile App Port
Contest for Qt and KDE applications
Welcome to the Qtest Mobile App Port! As developers of applications
using Qt, you already know how great it is to work with - but how
about on mobile platforms, such as Symbian and MeeGo? How would you
like to take that step you have been wanting to take, but not been
able to justify: Take your application from the desktop and bring it
into the hand-held world via the Ovi store.
Let this contest be the justification, with the possibility of a new
phone or even 10,000 euros waiting at the end.
Dates:
The contest starts on the 20th of December, 2010, and runs until the 28th of
February. The 31st of December is important for you if you wish to take part
in the Early Bird competition. If you do not win, you will still take part in
the main competition, and will be allowed to continue your work and submit new
versions to the Ovi Store. The 28th of February is the deadline for taking
part in the main competition.
Developer Sprint: There will be a sponsored developer sprint organized
together with the KDE e.V. during the competition. The travel and stay
can be paid for if you do not have the budget yourself. Further
details will be made public at a later time, and all participants will
be notified of this information via email.
Judging and prizes:
The Qtest Mobile App Port is evaluated by a panel of judges which will be
announced in the next few days. The jury will pick 5 winners on the 31st of
December as the early bird winners. Every winner gets a free N900 phone. The
main competition's first prize is EUR 10,000, which will be awarded to the
application which the judges find to be the best ported application. The
second to sixth prizes will be another 5 N900 phones. And, finally: everybody
who takes part in the competition will be awarded a gift bag with a T-shirt
and other merchandise.
Eligibility:
To be able to take part in the contest, the ported application must be
submitted for Ovi Store signing by one of the two deadlines:
- Early bird entries must be submitted by December 31st
- Standard entries must be submitted by February 28th
You also have to submit your application to the "Mobile Contest"
category on Qt-Apps.org or MeeGo-Central.org
You can submit your application to the Ovi Store as many times as you wish
during the competition. This allows you to get feedback from the public on
your software. It's possible to submit new or existing KDE/Qt applications.
So have fun and good luck everybody!
.......................
Niels
http://nielsmayer.com
Dear Friends,
On behalf of Linuxaudio international consortium (http://linuxaudio.org), DISIS (http://disismusic.vt.edu), and L2Ork (http://l2ork.music.vt.edu), please allow me to use this opportunity to wish you very best for the Holidays. May you have many more seasons of merry music making using an ever-growing array of formidable FOSS tools and solutions!
Best wishes,
Ivica Ico Bukvic, D.M.A.
Composition, Music Technology
Director, DISIS Interactive Sound & Intermedia Studio
Director, L2Ork Linux Laptop Orchestra
Assistant Co-Director, CCTAD
Virginia Tech
Department of Music
Blacksburg, VA 24061-0240
(540) 231-6139
(540) 231-5034 (fax)
ico.bukvic.net
I'm forwarding a job posting sent to me by the recruiter. I'd like to
do this myself, but the time and location make it infeasible.
Apparently, there are some options for remote work but I'm not clear
what they are. Talk to the recruiter to find out more.
--p
----------------------------------------------------------------------------------------------
Tom Gugger
Independent Recruiter
tgugger(a)bex.net
Linux Audio/ Contract/ Immediate/ IN
Skills:
linux ubuntu embedded alsa gstreamer cm synergy c++
This is an eight month contract in Kokomo, Indiana. If you
fit the job description below, email your resume to tgugger(a)bex.net.
Make sure your resume shows the required skills. No ALSA type
experience, no job.
ALSA = Advanced Linux Sound Architecture (i.e. Linux audio package experience)
Duties in this role include:
- Design SW Architecture for Linux and audio subsystem
- Detailed task planning and estimating
- Linux package management, integration and direction
THIS IS A 6-8 MONTH CONTRACT OPPORTUNITY
Requirements
- Linux Audio Package experience (ALSA source/sink, GStreamer architecture)
- Linux Kernel understanding
- Ability to reconfigure and rebuild kernel and kernel modules
- C++ Object Oriented design, analysis
- Linux Driver architecture understanding
- It is preferable that the person has contributed to Open Source
development (FOSS project)
- CM experience required. Synergy experience preferred.
Marco -- many thanks for your reply -- I've taken the liberty of
CC'ing this informative reply to the Linux-Audio-Developers list
(your CC was not posted as you're not a subscriber:
http://lists.linuxaudio.org/pipermail/linux-audio-dev/2010-December/thread.…
).
The list comprises a good number of people with expertise in both PulseAudio
and JACK; hopefully the JACK sound server authors, including Paul Davis, will
be willing to publicly share their perspectives on the issues raised regarding
the role of pulseaudio on a handset, and on Linux audio
performance/latency/efficiency issues.
Here's a link to Marco's original post to meego-handset mailing list,
forwarded below:
http://lists.meego.com/pipermail/meego-handset/2010-December/000090.html
---------- Forwarded message ----------
From: Marco Ballesio <gibrovacco>
Date: Sat, Dec 18, 2010 at 4:42 AM
Subject: Re: Meego pulseaudio "compliance" and "enforcement" (was Re:
[Meego-handset] Enabling Speakerphone)
To: Niels Mayer <nielsmayer>
Cc: meego-handset(a)lists.meego.com, Linux Audio Developers
<linux-audio-dev(a)lists.linuxaudio.org> [...]
Hi,
sorry for the late reply but, whew, this was really long, definitely
more than what my poor brain can handle in a single bunch ;). Maybe we
could split this into a "Resource Policy Framework" and "Pulseaudio on
Embedded" couple of threads.
As a general comment on PA, it may appear odd but when I started
working with it I shared 100% of your considerations. Working with it
(not ON it), I started realising that, ok, it may not be perfect (and
which software actually is?) but it brings definitely more benefits
than troubles to the system. See my notes below for more details.
..snip..
>
> Hopefully there's more than just source-code to describe the policies
> and enforcement.
Maybe you've read in some of my other posts here and there that I'm trying to
get a Resource Policy Framework wiki up somewhere. In the meanwhile, documents
are scarce but present:
- The MeeGo conference presentation:
http://conference2010.meego.com/session/policy-framework-flexible-way-orche…
Does not say too much, but it comes with some slides and many "mmmmh"
(keepalive signals).
- The resource policy application developer's manual:
http://wiki.meego.com/images/Meego-policy-framework-developer-guide.pdf
Mysteriously, it's not linked or referred to anywhere; at least now it's on
this thread.
- The source code: it may make more than one nose turn up but, in the end, I
always like to check the sources before/after Googling. Documents may be
obsolete (who knows?) or present a partial view. Code can't ;).
Not that I mean by this that we don't need more documents about the Resource
Policy Framework...
> Is there a design document stating the
> "organizational" or "legal" or "human" reasons for said policies and
> enforcements?
My pointers above may begin to satiate your hunger (appetisers?).
>
> And, as a developer, where is the documentation for the appropriate
> layer to use, the appropriate API, etc -- e.g. for the original
> question that started this thread -- how to switch on/off speakers,
> FM-radio transmitter, etc on handset. ( cf my original reply
> http://lists.meego.com/pipermail/meego-handset/2010-December/000066.html
> )
Unfortunately, this is still in the "dark side" of the documentation, which is
currently application-centric. Hopefully one day we'll have an "Adaptation
developer's guide"; in the meanwhile, a good starting point may be checking
the sources and the N900/MeeGo ruleset, and posting questions here. If that's
not enough, we may set up an IRC channel (if there's enough interest)...
>
> Something high-level like the following Maemo document would be very
> helpful -- but I have been unsuccessful finding such documentation for
> Meego: http://wiki.maemo.org/Documentation/Maemo_5_Developer_Guide/Architecture/Mu…
See my notes above about the project wiki. Your pointers are
definitely good hints, thanks.
>
> Also, why not use ALSA use-case-manager:
> http://opensource.wolfsonmicro.com/node/14
> UCM appears to be scheduled for other distros "handset" work:
> https://wiki.ubuntu.com/Specs/N/ARM/AlsaScenarioManager
This project covers the audio-specific cases: from what I understood it might
very well be used as an ALSA enforcement point in the Framework, so rather
than competing with Resource Policy it would be complementary, in case you
don't really want/need to use a sound server. BTW, I could not find much info;
maybe you can help me retrieve:
- more than just source-code to describe how it works.
- the developer's documentation for the appropriate layer to use, etc.
- something like:
http://wiki.maemo.org/Documentation/Maemo_5_Developer_Guide/Architecture/Mu…
..snip..
Now, the pulseaudio cases. First of all, please note I'm not its best
advocate; anyway, here are my 0.05 €.
Posting some extracts of your comments to the PA mailing list will definitely
give you better insight into the internals of the server.
> Regarding my old nemesis, pulseaudio ( http://tinyurl.com/2976vu6
> == http://old.nabble.com/uninstall-pulseaudio-to-increase-audio-app-stability-…
> ), I think the only "elegant" thing about pulseaudio is that it's
> bluetooth handling works for those that care about bluetooth; given
> the number of bug-reports and incompatibilities pulseaudio generates
> in different distros, I'm not sure "elegant" is the right word....
And actually "elegant" did not refer to pulseaudio itself, but to the way its
ports can handle audio routing instead of doing it via ALSA (they bring better
synchronisation).
Yes, the sound server has shown instabilities in some cases, and I personally
used to uninstall it on my Ubuntu box immediately after upgrades. Recently,
though, many improvements have been made, to the point that, on a PC, I
nowadays no longer see it jumping to a steady 30% of the CPU or completely
jamming any audio output in the middle of a movie.
Another _personal_ feeling I had is that pulseaudio was pretty hard to
configure for generic HW (as distros tend to do), but works pretty well after
being tuned for a given device (as 95% of embedded products are). Very often
the root issue may have been a wrong configuration for your system.
>
> From my position, as a multimedia/ALSA/linux-audio developer, having
> to go through pulseaudio sounds like an all-around bad idea, and to
> have "enforcement" or "compliance" attached makes it sound even
> worse. Tell me it ain't so!
I can agree from a kernel/ALSA pov (everything in userland appears to be less
efficient than any kernel driver ;) ), but I strongly disagree from the system
architecture's point of view. Some points you should consider:
- Audio output frequency tuning: in the real world a crappy speaker costs less
than a good one (strange?), so unless you have a very high target price,
you'll usually have to get the maximum out of garbage by "massaging" the
output spectrum in order to get an adequate overall transfer function. And
this must work with all of the applications (even ones from third parties over
which you have no control at all) and different analog outputs (front, rear,
BT, ...). Not that pulseaudio comes with these features out-of-the-box, but
it's quite easy to plug them in.
- Audio inputs: the dual of audio outputs.
- Mixing: it's possible that users will want more than one application
accessing the audio output at the same time (e.g. navigator and car radio), or
that they'll want to switch between two different applications all of a sudden
(e.g. radio and ringtone). In all these cases they'll want good mixing and no
sudden power steps when the ringtone app does not know about the volume level
used for the radio.
- Acoustic delay estimation: sitting in the middle of pulseaudio (running as a
real-time thread), it's easier to write a proper AEC. Back in 2004 I tried to
guarantee, with a TI aic31, constant latencies for both LEC and AEC by using
just ALSA and the application... It took a really long time to get working.
- It may not appear so, but it's energy-efficient (see my comments
below if you just jumped out of your chair).
Now, I don't mean that those things can be handled only by pulseaudio.
Developers could actually write their own sound server, or use an already
available one, to do all of this, but:
- How long would it take to become better than pulseaudio (e.g. wrt my
points above)?
- Could it guarantee the same solid user base and portability across embedded HW?
- Could it guarantee the same level of contribution from companies with
solid experience on the subject?
- Are there architectural flaws for which pulseaudio couldn't
effectively handle these cases? If yes, has anybody tried to discuss
about them on the PA mailing list?
>
> There is a growing class of applications that do not want or need
> pulseaudio around -- those using http://jackaudio.org/ . When the
> jack audio server launches, the first thing it does is use dbus to
> disable pulseaudio. Is that also non-compliant?
I've no religious issues against jack, and actually I don't have that much
experience with it, so I'd like to know more about jack on embedded. I'd like
to know, for instance, how many embedded low-power devices are already using
it, and with what degree of success. It would also be great to know if anybody
has interfaced it with a cellular modem to handle circuit-switched cellular
calls, and if such a device has actually been certified for that service in
any country.
>
> It seems inappropriate to preclude an entire class of application --
> real-time "pro" audio and video apps that utilize the Jack audio
> server to provide low-latency, tightly synchronized audio -- as needed
> for modern multimedia creation and playback. Perhaps such applications
> are a stretch for an OMAP3-class device, but given the many
> audio/media apps listed in http://omappedia.org/wiki/PEAP_Projects ,
> clearly OMAP4 and beyond might not be, even on a puny "handset." Of
> course, those making such audio apps might sidestep pulseaudio
> compliance/latency/inefficiency issues by using
> http://opensoundcontrol.org/ and an external DAC (
> http://gregsurges.com/tag/dac/ ).
>
Please correct me if I'm wrong, but if I understood well, most of the apps
using _exclusively_ jack are aimed at audio/video composition/editing. Now, as
we are on the IVI ML, I think it's quite a strange use case (even though I
must admit I don't know which requirements you have).
> Finally, it seems odd that in a "handset" environment, pulseaudio is
> an absolute requirement. To me, it is just a waste of batteries,
see below (and above).
> and a
> terrible source of unnecessary context switching and interrupts during
> audio playback.
Well, it runs at real-time priority, so it's executed when it's scheduled.
Context switches would be no different for any process (e.g. jack) running
with the same scheduling.
> It's sort of like being forced to drive around in a
> gas-guzzling, oversized sport-utility vehicle with 10 foot tires and 5
> feet of ground clearance -- just to drive to the market or work on a
> well-paved freeway on a summer's day -- even when one might prefer a
> bicycle, motorcycle, sports-car, subway, or whatever tool is best for
> the job.
funnily, this is EXACTLY the same thing I thought the first time I
profiled pulseaudio on the N900, then I found some reasonable
explanation (scroll a little below).
..snip..
> Take for example HD audio/video playback -- something where you need a
> "sportscar" to get the latency and sample rate associated with the
> audio stream while also performing HD decoding (e.g. 16bit/96K audio
> is supported on omap3430 per
> http://and-developers.com/device_information ). Pulseaudio typically
> operates at a fixed rate and forces resampling to that rate, causing
> an oft perceptible loss in fidelity.
Yes, this is a bad aspect, but it's actually possible to change the
output bitrate of the server (it depends on the port used). Maybe you
could query on the PA mailing list for more knowledge on the subject.
>
> IMHO what is needed is not a "digital mixer" and resampler and extra
> user-space processing before going into the kernel and out to the
> sound hardware via ALSA drivers. We certainly need "use case
> management" and the notion of a digital patch bay, and some way of
> smoothly mixing between sounds, glitch free switching of sample rates
> at the ALSA level, and then choosing the direct path to hardware
> that'll best handle the "main audio task" we're doing -- e.g. playing
> music, watching a movie, making a phone call, or just using the screen
> UI. What isn't needed is "desktop networked audio" capabilities of
> pulseaudio, or any extra user-space streaming/mixing/resampling of
> signals.
so we'll end up writing our own sound server or using a different one,
won't we? As an alternative, we may have to heavily modify ALSA to
suit our needs, and we'll not have yet covered all of the needed
features.
>
> The inefficiencies introduced by pulseaudio is evidenced by the
> n900/meego-1.1 handset, which cannot maintain audio synchronization
> during audio or video playback if even the slightest trace of
> something else is going on, including a wireless network scan, or even
> just cursoring around in a networked terminal (which goes through the
> wireless stack in my setup).
Personally, I would not say that multimedia with MeeGo on the N900 is already
at the same quality as Maemo 5. Lots of tuning is imho still missing.
>
> On Maemo/n900 note what happens when playing back an Mp3 -- pulseaudio
> consumes twice the resources at "high priority" of the decoding
> process:
You must consider that there are many algorithms involved here that
users/applications don't even know about, like the spectrum optimisations I
mentioned before. If you could run an oprofile session over your tests you'd
see that the actual CPU usage attributable to PA is much lower than your
figures.
..snip..
> I'm sure a simple experiment could determine exactly the "effect" of pulseaudio:
>
> Play the same playlist until the battery wears out using the same
> software player outputting first to pulseaudio (pref not through
> ALSA's pulseaudio driver because that wouldn't be "fair" to go through
> the kernel twice) then play the same through ALSA "dmix" interface,
> just to emulate pulseaudio's mixing/resampling functionality.
As written above, PA is not only resampling and mixing here... imho, if the
user/developer didn't have to adjust the system to work with PA in order to
achieve this, it's a success. The CPU usage comes from the fact that we're
actually running something we need.
> I would
> imagine the pure-ALSA solution would pass the "energizer bunny' test
> for more hours and far fewer kernel/userspace context switches.
Sure it would, but with a smaller set of features (and a higher cost for the
audio system). It's up to the device's target price and the customer's pockets
and needs to decide what's better.
> Although a realtime userspace process like pulseaudio can help deliver
> stable audio in a multicore environment -- it may end starving out
> other processes on a slower uniprocessor. Which is why I believe a
> pulse-audio-free solution should be available and still be "compliant"
> on low-end systems.
I would agree in some cases, that is, when the system conditions don't require
running multi-application use cases and the price tag of the audio system is
irrelevant wrt the audio quality.
Note for the reader: if you've reached this point, it means you're really
interested in pulseaudio on embedded devices :D.
Regards
>
> The one place where pulseaudio is currently helpful is in hooking up
> bluetooth devices. But ALSA has its own bluetooth layer as well, and
> other architectures for audio shouldn't be precluded:
> http://bluetooth-alsa.sourceforge.net/future.html or Phonon
> (http://pulseaudio.org/wiki/KDE ).
>
> -- Niels
> http://nielsmayer.com
>
Hello!
If I want to have a simple JACK-enabled program react to transport control
in JACK, what do I have to do?
The basics:
* the program can start (it will always play from the same point, so no need
for relocating, etc.)
* the program can stop
* I also have an interactive command to make it start. I guess in this case I
should just let it start and leave the transport master untouched.
Can I easily add the following: when the program is already running due to the
interactive start command and an external jack_transport play/start comes
along, it just starts playing over from the beginning?
Kind regards and thanks
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de