Dear Friends,
On behalf of the Linuxaudio international consortium (http://linuxaudio.org), DISIS (http://disismusic.vt.edu), and L2Ork (http://l2ork.music.vt.edu), please allow me to use this opportunity to wish you the very best for the Holidays. May you have many more seasons of merry music making using an ever-growing array of formidable FOSS tools and solutions!
Best wishes,
Ivica Ico Bukvic, D.M.A.
Composition, Music Technology
Director, DISIS Interactive Sound & Intermedia Studio
Director, L2Ork Linux Laptop Orchestra
Assistant Co-Director, CCTAD
Virginia Tech
Department of Music
Blacksburg, VA 24061-0240
(540) 231-6139
(540) 231-5034 (fax)
ico.bukvic.net
I'm forwarding a job posting sent to me by the recruiter. I'd like to
do this myself, but the time and location make it infeasible.
Apparently, there are some options for remote work but I'm not clear
what they are. Talk to the recruiter to find out more.
--p
----------------------------------------------------------------------------------------------
Tom Gugger
Independent Recruiter
tgugger(a)bex.net
Linux Audio/ Contract/ Immediate/ IN
Skills:
linux ubuntu embedded alsa gstreamer cm synergy c++
This is an eight month contract in Kokomo, Indiana. If you
fit the job description below, email your resume to tgugger(a)bex.net.
Make sure your resume shows the required skills. No ALSA type
experience, no job.
ALSA = Advanced Linux Sound Architecture (the "Linux Audio Package" experience referred to below)
Duties in this role include:
- Design SW Architecture for Linux and audio subsystem
- Detailed task planning and estimating
- Linux package management, integration and direction
THIS IS A 6-8 MONTH CONTRACT OPPORTUNITY
Requirements
- Linux Audio Package experience (ALSA source/sink, GStreamer architecture)
- Linux Kernel understanding
- Ability to reconfigure and rebuild kernel and kernel modules
- C++ Object Oriented design, analysis
- Linux Driver architecture understanding
- It is preferable that the person has contributed to Open Source
development (FOSS project)
- CM experience required. Synergy experience preferred.
Marco -- many thanks for your reply -- I've taken the liberty of
CC'ing this informative reply to the Linux-Audio-Developers list
(your CC was not posted as you're not a subscriber:
http://lists.linuxaudio.org/pipermail/linux-audio-dev/2010-December/thread.…
).
The list comprises a good number of people with expertise in both
pulseaudio and JACK; hopefully the Jack sound server authors, including Paul
Davis, will be willing to publicly share their perspectives on the
issues raised regarding the role of pulseaudio on a handset and Linux
audio performance/latency/efficiency issues.
Here's a link to Marco's original post to meego-handset mailing list,
forwarded below:
http://lists.meego.com/pipermail/meego-handset/2010-December/000090.html
---------- Forwarded message ----------
From: Marco Ballesio <gibrovacco>
Date: Sat, Dec 18, 2010 at 4:42 AM
Subject: Re: Meego pulseaudio "compliance" and "enforcement" (was Re:
[Meego-handset] Enabling Speakerphone)
To: Niels Mayer <nielsmayer>
Cc: meego-handset(a)lists.meego.com, Linux Audio Developers
<linux-audio-dev(a)lists.linuxaudio.org> [...]
Hi,
Sorry for the late reply but, whew, this was really long, definitely
more than my poor brain can handle in a single batch ;). Maybe we
could split this into a pair of threads, "Resource Policy Framework" and
"Pulseaudio on Embedded".
As a general comment on PA, it may sound odd, but when I started
working with it I shared 100% of your considerations. Working with it
(not ON it), I started realising that, OK, it may not be perfect (and
which software actually is?), but it definitely brings more benefits
than troubles to the system. See my notes below for more details.
..snip..
>
> Hopefully there's more than just source-code to describe the policies
> and enforcement.
As you may have read in some of my other posts here and there, I'm trying
to get a Resource Policy Framework wiki set up somewhere. In the meantime,
documentation is scarce but it does exist:
- The MeeGo conference presentation:
http://conference2010.meego.com/session/policy-framework-flexible-way-orche…
It does not say too much, but it comes with some slides and many "mmmmh"s
(keepalive signals).
- The resource policy application developer's manual:
http://wiki.meego.com/images/Meego-policy-framework-developer-guide.pdf
Mysteriously, it's not linked or referred to anywhere.. at least, now
it's on this thread.
- The source code: it may twist more than one nose but, in the end, I
always like to check the sources before/after Googling. Documents
may be obsolete (who knows?) or give only a partial view. Code can't ;).
Not that I mean by this that we don't need more documentation
about the Resource Policy Framework...
> Is there a design document stating the
> "organizational" or "legal" or "human" reasons for said policies and
> enforcements?
The pointers above may begin to satiate your hunger (appetisers?).
>
> And, as a developer, where is the documentation for the appropriate
> layer to use, the appropriate API, etc -- e.g. for the original
> question that started this thread -- how to switch on/off speakers,
> FM-radio transmitter, etc. on handset (cf. my original reply
> http://lists.meego.com/pipermail/meego-handset/2010-December/000066.html
> )
Unfortunately, this is still on the "dark side" of the documentation,
which is currently application-centric. Hopefully one day we'll have
an "Adaptation developer's guide"; in the meantime, a good starting
point may be checking the sources and the N900/MeeGo ruleset and posting
questions here. If that's not enough, we may set up an IRC channel (if
there's enough interest)...
>
> Something high-level like the following Maemo document would be very
> helpful -- but I have been unsuccessful finding such documentation for
> Meego: http://wiki.maemo.org/Documentation/Maemo_5_Developer_Guide/Architecture/Mu…
See my notes above about the project wiki. Your pointers are
definitely good hints, thanks.
>
> Also, why not use ALSA use-case-manager:
> http://opensource.wolfsonmicro.com/node/14
> UCM appears to be scheduled for other distros "handset" work:
> https://wiki.ubuntu.com/Specs/N/ARM/AlsaScenarioManager
This project covers the audio-specific cases: from what I understood, it
might very well be used as an ALSA enforcement point in the Framework.
So, rather than competing with Resource Policy, it would be complementary
in cases where you don't really want/need to use a sound server. BTW, I could
not find much info; maybe you can help me retrieve:
- more than just source-code to describe how it works.
- the developer's documentation for the appropriate layer to use, etc.
- something like:
http://wiki.maemo.org/Documentation/Maemo_5_Developer_Guide/Architecture/Mu…
..snip..
Now, the pulseaudio cases. First of all, please note I'm not its best
advocate, but here are my €0.05.
Posting some extracts of your comments to the PA mailing list will
definitely give you better insight into the internals of the server.
> Regarding my old nemesis, pulseaudio ( http://tinyurl.com/2976vu6
> == http://old.nabble.com/uninstall-pulseaudio-to-increase-audio-app-stability-…
> ), I think the only "elegant" thing about pulseaudio is that its
> bluetooth handling works for those that care about bluetooth; given
> the number of bug-reports and incompatibilities pulseaudio generates
> in different distros, I'm not sure "elegant" is the right word....
And actually "elegant" was not referred to pulseaudio, but to the way
its ports can handle audio routing instead of using ALSA (they bring
better synchronisation).
Yes, the sound server has shown instabilities in some cases, and I
personally used to uninstall it on my Ubuntu box immediately after
upgrades. Recently, though, many improvements have been made, to the point
that on a PC I no longer see it jump to a steady 30% of the CPU or
completely jam any audio output in the middle of a movie.
Another _personal_ feeling I had is that pulseaudio was pretty hard to
configure for generic HW (as distros have to do), but works pretty
well after being tuned for a given piece of hardware (as 95% of embedded
products are). Very often the root issue may have been a wrong
configuration for your system.
>
> From my position, as a multimedia/ALSA/linux-audio developer, having
> to go through pulseaudio sounds like an all-around bad idea, and to
> have "enforcement" or "compliance" Â attached makes it sound even
> worse. Tell me it ain't so!
I can agree from a kernel/ALSA pov (everything in userland appears to
be less efficient than any kernel driver ;) ), but I strongly disagree
from the system architecture's point of view. Some points you should consider:
- Audio output frequency tuning: in the real world a crappy speaker
costs less than a good one (strange?), so unless you have a very high
target price you'll usually have to get the most out of garbage
by "massaging" the output spectrum in order to get an adequate overall
transfer function. And this must work with all of the applications
(even the ones coming from third parties, over which you have no
control) and with different analog outputs (front, rear, bt, ...).
Not that pulseaudio comes with these features out-of-the-box, but
it's quite easy to plug them in.
- Audio inputs: the dual of audio outputs.
- Mixing: it's likely that users will want more than one application
accessing the audio output at the same time (e.g. navigator and car
radio), or that they'll want to switch between two different applications
all of a sudden (e.g. radio and ringtone). In all these cases they'll
want good mixing and no sudden level jumps when the ringtone app
does not know about the volume level used for the radio. (A minimal
client-side sketch follows after this list.)
- Acoustic delay estimation: sitting in the middle of pulseaudio
(running as a real-time thread), it's easier to write a proper AEC.
Back in 2004 I tried to guarantee constant latencies for both LEC and
AEC with a TI aic31 using just ALSA and the application... It took
a really long time to get working.
- It may not appear so, but it's energy-efficient (see my comments
below if you just jumped out of your chair).
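To make the mixing point concrete, here is a minimal, untested sketch of a
client using PulseAudio's "simple" C API (not code from this thread; the
stream name and tone are made up). Each client just writes its own stream at
its own sample spec, and the server takes care of mixing it with whatever
else is playing, per-stream volumes and format conversion:

/* A minimal sketch, not from this thread: a client plays a short
 * notification tone through PulseAudio's "simple" API.  The server
 * mixes the stream with whatever else is playing and applies
 * per-stream volume; the client never touches the hardware. */
#include <math.h>
#include <pulse/simple.h>
#include <pulse/error.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    static const pa_sample_spec ss = {
        .format = PA_SAMPLE_S16LE, .rate = 44100, .channels = 1
    };
    short buf[44100];                /* one second of a 440 Hz tone */
    unsigned i;
    int err;
    pa_simple *s;

    s = pa_simple_new(NULL, "ringtone-demo", PA_STREAM_PLAYBACK, NULL,
                      "notification", &ss, NULL, NULL, &err);
    if (s == NULL)
        return 1;

    for (i = 0; i < 44100; i++)
        buf[i] = (short)(8000.0 * sin(2.0 * M_PI * 440.0 * i / 44100.0));

    if (pa_simple_write(s, buf, sizeof(buf), &err) < 0) {
        pa_simple_free(s);
        return 1;
    }
    pa_simple_drain(s, &err);
    pa_simple_free(s);
    return 0;
}

Build with: gcc pa_tone.c $(pkg-config --cflags --libs libpulse-simple) -lm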
Now, I don't mean that those things can be handled only by pulseaudio.
Actually developers could write their own sound server, or use an
already available one, to do all of this, but..
- How long would it take to become better than pulseaudio (e.g. wrt my
points above)?
- Can it guarantee the same solid user base and portability across embedded HW?
- Could it guarantee the same level of contribution from companies with
solid experience on the subject?
- Are there architectural flaws for which pulseaudio couldn't
effectively handle these cases? If yes, has anybody tried to discuss
about them on the PA mailing list?
>
> There is a growing class of applications that do not want or need
> pulseaudio around -- those using http://jackaudio.org/ . When the
> jack audio server launches, the first thing it does is use dbus to
> disable pulseaudio. Is that also non-compliant?
I've no religious issues against jack and actually I don't have that
much experience with it, so I'd like to know more about jack on
embedded. I'd like to know, for instance, how many embedded, low-power
devices are already using it and with what degree of success. Also it
would be great to know if anybody has interfaced it with a cellular
modem to handle circuit-switched cellular calls, and if the device has
actually been certified for such a service in any country.
>
> It seems inappropriate to preclude an entire class of application --
> real-time "pro" audio and video apps that utilize the Jack audio
> server to provide low-latency, tightly synchronized audio -- as needed
> for modern multimedia creation and playback. Perhaps such applications
> are a stretch for an OMAP3-class device, but given the many
> audio/media apps listed in http://omappedia.org/wiki/PEAP_Projects ,
> clearly OMAP4 and beyond might not be, even on a puny "handset." Of
> course, those making such audio apps might sidestep pulseaudio
> compliance/latency/inefficiency issues by using
> http://opensoundcontrol.org/ and an external DAC (
> http://gregsurges.com/tag/dac/ ).
>
Please correct me if I'm wrong, but if I understand correctly, most of the
apps using _exclusively_ jack are aimed at audio/video
composition/editing. Now, as we are on the IVI ML, I think it's quite a
strange use case (even though I must admit I don't know which
requirements you have).
> Finally, it seems odd that in a "handset" environment, pulseaudio is
> an absolute requirement. To me, it is just a waste of batteries,
see below (and above).
> and a
> terrible source of unnecessary context switching and interrupts during
> audio playback.
Well, it runs at real-time priority, so it's executed whenever it's
scheduled. Context switches would be no different for any other process
(e.g. jack) running with the same scheduling.
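For concreteness, this is roughly what "running with the same scheduling"
means for any audio process or thread, be it pulseaudio, jack or your own
code (a minimal sketch, assuming plain POSIX threads; it needs CAP_SYS_NICE,
root, or an rtkit/limits.conf grant to succeed):

/* A minimal sketch, not pulseaudio or jack source: any audio thread can
 * request SCHED_FIFO real-time scheduling like this, and the kernel
 * then schedules it the same way whether it belongs to pulseaudio,
 * jack or an application. */
#include <stdio.h>
#include <string.h>
#include <pthread.h>
#include <sched.h>

int main(void)
{
    struct sched_param sp;
    int rc;

    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = 10;                  /* a modest RT priority */

    rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
    if (rc != 0) {
        fprintf(stderr, "pthread_setschedparam: %s\n", strerror(rc));
        return 1;
    }
    printf("now running with SCHED_FIFO priority %d\n", sp.sched_priority);
    /* ... the real-time audio work would happen here ... */
    return 0;
}

Once two processes both run SCHED_FIFO at comparable priorities, the kernel
switches between them the same way regardless of which one is the sound
server. Build with: gcc rt.c -lpthread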
> It's sort of like being forced to drive around in a
> gas-guzzling, oversized sport-utility vehicle with 10 foot tires and 5
> feet of ground clearance -- just to drive to the market or work on a
> well-paved freeway on a summer's day -- even when one might prefer a
> bicycle, motorcycle, sports-car, subway, or whatever tool is best for
> the job.
Funnily enough, this is EXACTLY the same thing I thought the first time I
profiled pulseaudio on the N900; then I found some reasonable
explanations (scroll a little below).
..snip..
> Take for example HD audio/video playback -- something where you need a
> "sportscar" to get the latency and sample rate associated with the
> audio stream while also performing HD decoding (e.g. 16bit/96K audio
> is supported on omap3430 per
> http://and-developers.com/device_information ). Pulseaudio typically
> operates at a fixed rate and forces resampling to that rate, causing
> an oft perceptible loss in fidelity.
Yes, this is a bad aspect, but it's actually possible to change the
output sample rate of the server (it depends on the port used). Maybe you
could ask on the PA mailing list for more knowledge on the subject.
>
> IMHO what is needed is not a "digital mixer" and resampler and extra
> user-space processing before going into the kernel and out to the
> sound hardware via ALSA drivers. We certainly need "use case
> management" and the notion of a digital patch bay, and some way of
> smoothly mixing between sounds, glitch free switching of sample rates
> at the ALSA level, and then choosing the direct path to hardware
> that'll best handle the "main audio task" we're doing -- e.g. playing
> music, watching a movie, making a phone call, or just using the screen
> UI. What isn't needed is "desktop networked audio" capabilities of
> pulseaudio, or any extra user-space streaming/mixing/resampling of
> signals.
So we'll end up writing our own sound server or using a different one,
won't we? As an alternative, we may have to heavily modify ALSA to
suit our needs, and we still won't have covered all of the needed
features.
>
> The inefficiencies introduced by pulseaudio are evidenced by the
> n900/meego-1.1 handset, which cannot maintain audio synchronization
> during audio or video playback if even the slightest trace of
> something else is going on, including a wireless network scan, or even
> just cursoring around in a networked terminal (which goes through the
> wireless stack in my setup).
Personally, I would not say that multimedia with MeeGo on the N900 is
already at the same quality as Maemo 5. Lots of tuning is IMHO still
missing.
>
> On Maemo/n900 note what happens when playing back an MP3 -- pulseaudio,
> running at "high priority", consumes twice the resources of the decoding
> process:
You must consider that there are many algorithms involved here that
users/applications don't even know about, like the spectrum
optimisations I mentioned before. If you ran an oprofile
session over your tests you'd see that the actual CPU usage
attributable to PA is quite a bit lower than your figures.
..snip..
> I'm sure a simple experiment could determine exactly the "effect" of pulseaudio:
>
> Play the same playlist until the battery wears out using the same
> software player outputting first to pulseaudio (pref not through
> ALSA's pulseaudio driver because that wouldn't be "fair" to go through
> the kernel twice) then play the same through ALSA "dmix" interface,
> just to emulate pulseaudio's mixing/resampling functionality.
As written above, PA is not only resampling and mixing here... IMHO, if
the user/developer didn't have to adjust the system to work with
PA in order to achieve this, it's a success. The CPU usage comes from
the fact that we're actually running something we need.
> I would
> imagine the pure-ALSA solution would pass the "energizer bunny' test
> for more hours and far fewer kernel/userspace context switches.
Sure it would, but with a smaller set of features (and a higher cost
for the audio system). It's up to the device's price target and the
customer's pockets and needs to decide what's better.
> Although a realtime userspace process like pulseaudio can help deliver
> stable audio in a multicore environment -- it may end up starving out
> other processes on a slower uniprocessor. Which is why I believe a
> pulse-audio-free solution should be available and still be "compliant"
> on low-end systems.
I would agree in some cases, that is, when the system conditions don't
require it to run multi-application use cases and the price tag of
the audio system is irrelevant compared to the audio quality.
Note for the reader: if you've reached this point it means you're
really interested in pulseaudio on embedded devices :D.
Regards
>
> The one place where pulseaudio is currently helpful is in hooking up
> bluetooth devices. But ALSA has its own bluetooth layer as well, and
> other architectures for audio shouldn't be precluded:
> http://bluetooth-alsa.sourceforge.net/future.html or Phonon
> (http://pulseaudio.org/wiki/KDE ).
>
> -- Niels
> http://nielsmayer.com
>
Hello!
If I want to have a simple jack-enabled program react to transport control
in JACK, what do I have to do?
The basics:
* the program can start (it will always play from the same point, so no need
for relocating, etc.)
* the program can stop
* I also have an interactive command to make it start. I guess in this case I
should just let it start and leave the transport master untouched.
Can I easily add the following: when the program is already running due to the
interactive start command and an external jack_transport play/start comes
along, just start playing over from the beginning?
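A minimal sketch of one way to do this (not from the thread): assuming the
standard JACK C API, poll the transport state from the process callback and
restart playback whenever the state changes from stopped to rolling.
restart_playback() is a hypothetical stand-in for the program's own code that
rewinds to its fixed start point:

/* A minimal sketch: follow the JACK transport and restart playback
 * whenever it goes from stopped to rolling. */
#include <unistd.h>
#include <jack/jack.h>
#include <jack/transport.h>

static jack_client_t *client;
static jack_transport_state_t last_state = JackTransportStopped;

static void restart_playback(void)
{
    /* hypothetical: rewind the program's playback position to the start */
}

static int process(jack_nframes_t nframes, void *arg)
{
    jack_position_t pos;
    jack_transport_state_t state = jack_transport_query(client, &pos);

    (void)nframes; (void)arg;

    if (state == JackTransportRolling && last_state != JackTransportRolling)
        restart_playback();          /* transport just started: start over */
    last_state = state;

    /* ... fill this cycle's output buffers here ... */
    return 0;
}

int main(void)
{
    client = jack_client_open("transport_follower", JackNullOption, NULL);
    if (client == NULL)
        return 1;
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    for (;;)
        sleep(1);                    /* all audio work happens in process() */
    return 0;
}

The interactive start command can then simply call the same restart path
itself, without touching the transport master. Build with: gcc follower.c -ljack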
Kind regards and thanks
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
On Tue, Dec 14, 2010 at 1:31 PM, Marco Ballesio
> Please check here:
>
> http://meego.gitorious.org/maemo-multimedia/pulseaudio-policy-enforcement/t…
> to get a few more hints on the subject.
Hopefully there's more than just source-code to describe the policies
and enforcement. Is there a design document stating the
"organizational" or "legal" or "human" reasons for said policies and
enforcements?
And, as a developer, where is the documentation for the appropriate
layer to use, the appropriate API, etc -- e.g. for the original
question that started this thread -- how to switch on/off speakers,
FM-radio transmitter, etc on handset. ( cf my original reply
http://lists.meego.com/pipermail/meego-handset/2010-December/000066.html
)
Something high-level like the following Maemo document would be very
helpful -- but I have been unsuccessful finding such documentation for
Meego: http://wiki.maemo.org/Documentation/Maemo_5_Developer_Guide/Architecture/Mu…
Also, why not use ALSA use-case-manager:
http://opensource.wolfsonmicro.com/node/14
UCM appears to be scheduled for other distros "handset" work:
https://wiki.ubuntu.com/Specs/N/ARM/AlsaScenarioManager
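For readers who have not seen UCM, this is roughly what driving it from C
looks like (a hypothetical sketch based on the use-case API as it was later
merged into alsa-lib, not something from this thread; the card, verb and
device names are placeholders that would come from the card's UCM
configuration files):

/* A hypothetical sketch: switch audio "use cases" through UCM instead
 * of poking mixer controls directly. */
#include <stdio.h>
#include <alsa/asoundlib.h>
#include <alsa/use-case.h>

int main(void)
{
    snd_use_case_mgr_t *uc_mgr;

    if (snd_use_case_mgr_open(&uc_mgr, "hw:0") < 0) {
        fprintf(stderr, "no UCM configuration for this card\n");
        return 1;
    }
    snd_use_case_set(uc_mgr, "_verb", "HiFi");      /* high-level use case */
    snd_use_case_set(uc_mgr, "_enadev", "Speaker"); /* enable an output device */
    snd_use_case_mgr_close(uc_mgr);
    return 0;
}

The point is that applications select named use cases ("HiFi", "Speaker", ...)
and the card's UCM configuration decides which mixer controls actually get
touched.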
> the n900 Resource Policy enforcement points were directly interacting
> with ALSA. As Krisztian pointed out, it's no more necessary (and at
> the limit it may be dangerous for your system's health) to do so
> within MeeGo, as the pulseaudio ports can elegantly handle the whole
> thing.
Potential to blow out my handset speakers duly noted (the n900 goes
http://en.wikipedia.org/wiki/Up_to_eleven !); however, I wasn't
advocating playing around with the ALSA layer indiscriminately, in
fact, I specifically stated:
> the "amixer" results of meego indicate an equally complicated
> soundchip; where random hacking could render your system somewhat useless... is there documentation on all these values?:
> http://nielsmayer.com/meego/n900-card0-amixer.txt .
> The only chip mentioned by name is
> http://focus.ti.com/lit/ds/symlink/tpa6130a2.pdf "TPA6130A2 Headphone"
................................
Regarding my old nemesis, pulseaudio ( http://tinyurl.com/2976vu6
== http://old.nabble.com/uninstall-pulseaudio-to-increase-audio-app-stability-…
), I think the only "elegant" thing about pulseaudio is that its
bluetooth handling works for those that care about bluetooth; given
the number of bug-reports and incompatibilities pulseaudio generates
in different distros, I'm not sure "elegant" is the right word....
From my position, as a multimedia/ALSA/linux-audio developer, having
to go through pulseaudio sounds like an all-around bad idea, and to
have "enforcement" or "compliance" attached makes it sound even
worse. Tell me it ain't so!
There is a growing class of applications that do not want or need
pulseaudio around -- those using http://jackaudio.org/ . When the
jack audio server launches, the first thing it does is use dbus to
disable pulseaudio. Is that also non-compliant?
It seems inappropriate to preclude an entire class of application --
real-time "pro" audio and video apps that utilize the Jack audio
server to provide low-latency, tightly synchronized audio -- as needed
for modern multimedia creation and playback. Perhaps such applications
are a stretch for an OMAP3-class device, but given the many
audio/media apps listed in http://omappedia.org/wiki/PEAP_Projects ,
clearly OMAP4 and beyond might not be, even on a puny "handset." Of
course, those making such audio apps might sidestep pulseaudio
compliance/latency/inefficiency issues by using
http://opensoundcontrol.org/ and an external DAC (
http://gregsurges.com/tag/dac/ ).
Finally, it seems odd that in a "handset" environment, pulseaudio is
an absolute requirement. To me, it is just a waste of batteries, and a
terrible source of unnecessary context switching and interrupts during
audio playback. It's sort of like being forced to drive around in a
gas-guzzling, oversized sport-utility vehicle with 10 foot tires and 5
feet of ground clearance -- just to drive to the market or work on a
well-paved freeway on a summer's day -- even when one might prefer a
bicycle, motorcycle, sports-car, subway, or whatever tool is best for
the job. Given that's the argument against Java/Android on the handset
(it's the SUV of languages/environments), it's unfortunate that
lighter-weight, less monolithic, and more "unixy" solutions aren't
being pursued for audio on the Meego handset.
Take for example HD audio/video playback -- something where you need a
"sportscar" to get the latency and sample rate associated with the
audio stream while also performing HD decoding (e.g. 16bit/96K audio
is supported on omap3430 per
http://and-developers.com/device_information ). Pulseaudio typically
operates at a fixed rate and forces resampling to that rate, causing
an oft perceptible loss in fidelity. So in order to allow "digital
mixing" for notification signals or phone calls while watching an HD
movie, either pulseaudio will be upmixing those signals to 16/96,
which is inefficient for audio that doesn't need the higher sample rate;
the alternative, which is what we get with pulseaudio, is that the DAC
is running at 44 or 48k, and even though we might be listening to
material at a higher sample-rate, it'll be resampled and sonic
artefacts may be introduced in order to work with the 'lowest common
denominator'.
IMHO what is needed is not a "digital mixer" and resampler and extra
user-space processing before going into the kernel and out to the
sound hardware via ALSA drivers. We certainly need "use case
management" and the notion of a digital patch bay, and some way of
smoothly mixing between sounds, glitch free switching of sample rates
at the ALSA level, and then choosing the direct path to hardware
that'll best handle the "main audio task" we're doing -- e.g. playing
music, watching a movie, making a phone call, or just using the screen
UI. What isn't needed is "desktop networked audio" capabilities of
pulseaudio, or any extra user-space streaming/mixing/resampling of
signals.
The inefficiencies introduced by pulseaudio are evidenced by the
n900/meego-1.1 handset, which cannot maintain audio synchronization
during audio or video playback if even the slightest trace of
something else is going on, including a wireless network scan, or even
just cursoring around in a networked terminal (which goes through the
wireless stack in my setup).
On Maemo/n900 note what happens when playing back an MP3 -- pulseaudio,
running at "high priority", consumes twice the resources of the decoding
process:
Mem: 239968K used, 5572K free, 0K shrd, 1856K buff, 68204K cached
CPU: 31.5% usr 10.3% sys 0.0% nice 53.9% idle 4.1% io 0.0% irq 0.0% softirq
Load average: 0.88 0.56 0.27
PID PPID USER STAT RSS %MEM %CPU COMMAND
781 1 pulse S < 3832 1.5 22.0 /usr/bin/pulseaudio --system
--high-priority
1317 705 user S < 6320 2.5 12.2 /usr/bin/mafw-dbus-wrapper
mafw-gst-renderer
971 1 user S < 1868 0.7 1.3 /usr/bin/dbus-daemon --fork
--print-pid 5 --print-address 7 --session
On Meego/n900, here's what playing track one in "Music player" looks
like -- where even cursoring around on the command-line in bash in the
terminal is enough to "desync" the audio stream for a few seconds --
and that's with pulseaudio running at nice=-11 and high priority! Note
pulseaudio consumes 31.6% CPU, while the decompression, presumably
happening in bognor-regis (the "Media daemon and play queue manager"),
takes 34.5%, and the media player app itself 6.7%... all while
consuming 21.3% of memory just to play back an MP3...
top - 17:56:15 up 20 min, 1 user, load average: 3.31, 3.12, 2.07
Tasks: 106 total, 2 running, 103 sleeping, 0 stopped, 1 zombie
Cpu(s): 18.1%us, 26.6%sy, 17.2%ni, 36.5%id, 1.2%wa, 0.0%hi, 0.3%si, 0.0%st
Mem: 226260k total, 215948k used, 10312k free, 84k buffers
Swap: 786428k total, 5944k used, 780484k free, 81596k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1063 meego 20 0 127m 9m 7720 S 34.5 4.5 1:35.09 bognor-regis-da
892 meego 9 -11 191m 4588 3256 S 31.6 2.0 2:45.14 pulseaudio
1104 meego 20 0 2364 960 744 R 10.5 0.4 0:00.22 top
1059 meego 25 5 91628 32m 29m R 6.7 14.8 0:29.26 meegomusic
803 root 20 0 38452 28m 17m S 3.8 12.8 0:38.55 Xorg
876 root 20 0 0 0 0 S 1.9 0.0 0:13.20 ipolldevd
Here's *just* watching a video (Big Buck Bunny, 240p) on the meego
handset -- no audio is even playing back (constant triggering of the
desync bug seen when playing back an audio stream?) but pulseaudio is
working hard anyway, consuming almost half as much CPU as the
decompression of the video and audio stream itself:
top - 17:46:48 up 11 min, 1 user, load average: 4.47, 2.20, 1.03
Tasks: 106 total, 2 running, 103 sleeping, 0 stopped, 1 zombie
Cpu(s): 12.8%us, 31.9%sy, 55.2%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 226260k total, 222564k used, 3696k free, 100k buffers
Swap: 786428k total, 1224k used, 785204k free, 85424k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1007 meego 25 5 269m 41m 32m R 59.3 18.9 1:27.18 meegovideo
892 meego 9 -11 191m 5272 3948 S 26.1 2.3 0:40.00 pulseaudio
803 root 20 0 33092 23m 13m S 4.1 10.6 0:10.83 Xorg
I'm sure a simple experiment could determine exactly the "effect" of pulseaudio:
Play the same playlist until the battery wears out using the same
software player, outputting first to pulseaudio (preferably not through
ALSA's pulseaudio plugin, because it wouldn't be "fair" to go through
the kernel twice), then play the same through ALSA's "dmix" interface,
just to emulate pulseaudio's mixing/resampling functionality. I would
imagine the pure-ALSA solution would pass the "energizer bunny" test
for more hours and with far fewer kernel/userspace context switches.
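For reference, the pure-ALSA side of that experiment needs very little code;
here is a minimal, untested sketch (not from this thread) that plays a
one-second tone through the "dmix" software-mixing PCM, so mixing happens
inside alsa-lib rather than in a sound server:

/* A minimal sketch: play a one-second tone through ALSA's "dmix"
 * software-mixing PCM, i.e. the pure-ALSA path the experiment above
 * would compare against pulseaudio. */
#include <math.h>
#include <alsa/asoundlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    snd_pcm_t *pcm;
    short buf[48000 * 2];            /* one second, stereo, S16_LE */
    unsigned i;

    if (snd_pcm_open(&pcm, "dmix", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;
    /* interleaved S16, 2 channels, 48 kHz, allow resampling, 100 ms latency */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 48000, 1, 100000) < 0)
        return 1;

    for (i = 0; i < 48000; i++)
        buf[2 * i] = buf[2 * i + 1] =
            (short)(3000.0 * sin(2.0 * M_PI * 440.0 * i / 48000.0));

    snd_pcm_writei(pcm, buf, 48000); /* count is in frames, not bytes */
    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}

Build with: gcc tone.c -lasound -lm. Pointing the media player at "dmix"
instead of the pulse device is all the comparison needs.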
Although a realtime userspace process like pulseaudio can help deliver
stable audio in a multicore environment -- it may end up starving out
other processes on a slower uniprocessor. Which is why I believe a
pulse-audio-free solution should be available and still be "compliant"
on low-end systems.
The one place where pulseaudio is currently helpful is in hooking up
bluetooth devices. But ALSA has its own bluetooth layer as well, and
other architectures for audio shouldn't be precluded:
http://bluetooth-alsa.sourceforge.net/future.html or Phonon
(http://pulseaudio.org/wiki/KDE ).
-- Niels
http://nielsmayer.com
Pardon, I didn't follow the progress of envy24control. Did you finish
the recent development and, if so, where can we/I get the latest source
code?
Cheers!
Ralf
(Repost, I replied to our MusE list!)
On December 14, 2010 06:47:31 am David Santamauro wrote:
> Hi Niels,
>
>
> On Mon, 13 Sep 2010 17:36:11 -0700
>
> Niels Mayer <nielsmayer(a)gmail.com> wrote:
> > On Mon, Sep 13, 2010 at 2:25 PM, Ralf Mardorf
> >
> > <ralf.mardorf(a)alice-dsl.net> wrote:
> > >Pardon, I didn't follow the progress of envy24control. Did you finish
> > > the recently development and if so, where can we/I get the latest
> > > source code?
> >
> > http://mudita24.googlecode.com
> >
> > Status: still at 1.03. Waiting for Tim E. Real to commit and then will
> > release 1.04. Or use Tim's current patches (see link above).
I apologize, I haven't had any time to complete the work.
I did quietly commit once more later on, I think (I hope?), unannounced.
Its current state is my current state; there's nothing new to add right now.
I was quite worried about whether changing from hard-coded slider min/max
values and scale markings to values obtained by asking ALSA was the
right thing to do. I felt it was, to support different, unknown, perhaps
yet-to-be-sold cards.
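As an illustration of "asking ALSA", here is a hypothetical sketch (not
mudita24's actual code, which may well use the raw control API rather than
the simple mixer API shown here): query each playback element's dB range at
startup and derive the slider limits and scale markings from that. "hw:0" is
a placeholder for whichever ice1712/Envy24 card is being controlled:

/* A hypothetical sketch: ask ALSA for each playback element's dB range
 * instead of hard-coding slider limits. */
#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_mixer_t *mixer;
    snd_mixer_elem_t *elem;
    long min_db, max_db;

    if (snd_mixer_open(&mixer, 0) < 0) return 1;
    if (snd_mixer_attach(mixer, "hw:0") < 0) return 1;
    if (snd_mixer_selem_register(mixer, NULL, NULL) < 0) return 1;
    if (snd_mixer_load(mixer) < 0) return 1;

    for (elem = snd_mixer_first_elem(mixer); elem != NULL;
         elem = snd_mixer_elem_next(elem)) {
        if (!snd_mixer_selem_has_playback_volume(elem))
            continue;
        if (snd_mixer_selem_get_playback_dB_range(elem, &min_db, &max_db) == 0)
            printf("%s: %.2f dB .. %.2f dB\n",      /* values are 1/100 dB */
                   snd_mixer_selem_get_name(elem),
                   min_db / 100.0, max_db / 100.0);
    }
    snd_mixer_close(mixer);
    return 0;
}

Build with: gcc dbrange.c -lasound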
We had good discussions on the list about the issues and where
it could go from here. Everyone contributed good points and
a good understanding of its shortcomings and issues.
Niels pointed out there are still some things to take care of before we move
on, like some spacing issues, and of course we must deal with the
meters, dB markings, and slider heights lining up correctly.
Fons pointed out that his card gives him an extra button below the first
slider, for an adjustable hardware input level, which kind of ruined the
uniform look of the sliders.
It works fine as is, I think, but yes, work on it must continue.
> >
> > Niels
> > http://nielsmayer.com
>
> ...quick question on the "Monitor PCMs" tab: Why are there 2 sliders
> per PCM out? Shouldn't L/R Gang join channels 1 & 2, 3 & 4 etc...
>
Oh, is the gang not working? It was, last I checked.
Oh wait, maybe I see what you mean: replace the two sliders with only one
when gang is on?
We talked about replacing this with a pan knob or slider, like the
Windows version.
So many good ideas came out, a good vision of what it could be,
but it takes time to implement them.
Tim.
-------------------------------------------------------
Hi All,
ReverbTuner is a program that uses AI to tune LV2 reverb parameters to
match a convolution reverb. It's a school project made for an AI course,
and a rather quick hack. It's probably the lousiest LV2 host ever,
and will crash with some plugins. However, the basic architecture should
be rather good :)
This has been at a standstill since March, so I thought I'd throw the
code out there for anyone who is interested. I've designed it to be as
modular and scalable as possible, so using other plugin standards or
distributing the computational load over a network should not be too
hard. Also, the basic concept should work for any LTI effect, not only
reverbs.
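In case the underlying idea is unclear, here is a tiny, hypothetical sketch
of the concept only (not ReverbTuner code; the real program drives an LV2
plugin and uses an evolutionary search, whereas this toy uses a one-pole
filter and random search): render a test signal through the effect with
candidate parameters, measure the error against the reference rendering, and
keep the best candidate.

/* A hypothetical sketch of the concept only: tune an effect's parameter
 * so that its output matches a reference rendering.  The "effect" here
 * is a toy one-pole lowpass standing in for an LV2 reverb, and the
 * "reference" is the same effect with a hidden target parameter,
 * standing in for the convolution reverb. */
#include <stdio.h>
#include <stdlib.h>

#define N 4096

static void apply_effect(const float *in, float *out, float a)
{
    /* toy LTI effect: one-pole lowpass with coefficient a in [0,1) */
    float y = 0.0f;
    int i;
    for (i = 0; i < N; i++) {
        y = a * y + (1.0f - a) * in[i];
        out[i] = y;
    }
}

static double fitness(const float *cand, const float *ref)
{
    double err = 0.0;
    int i;
    for (i = 0; i < N; i++) {
        double d = cand[i] - ref[i];
        err += d * d;
    }
    return err / N;                  /* mean squared error */
}

int main(void)
{
    float input[N], reference[N], candidate[N];
    float best_a = 0.0f;
    double best_err = 1e30;
    int i, trial;

    for (i = 0; i < N; i++)          /* broadband (noise) test signal */
        input[i] = (float)rand() / RAND_MAX - 0.5f;
    apply_effect(input, reference, 0.87f);   /* hidden target parameter */

    for (trial = 0; trial < 10000; trial++) {
        float a = (float)rand() / RAND_MAX;  /* random candidate parameter */
        double err;
        apply_effect(input, candidate, a);
        err = fitness(candidate, reference);
        if (err < best_err) { best_err = err; best_a = a; }
    }
    printf("best a = %.4f (mse %.3g)\n", best_a, best_err);
    return 0;
}

Since an LTI effect is fully characterised by its impulse response, matching
its output for a broadband test signal is, in principle, enough.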
More details (including build instructions) in the paper I wrote for
school: http://beatwaves.net/files/software/reverbtuner/paper.pdf
To build, you'll need: Boost, SLV2, (a git version of) Aubio,
libsndfile, and gtkmm for the UI.
The code can be checked out with svn from
http://svn.beatwaves.net/svn/reverb_tuner/trunk
Br,
Sakari