I just got the MeeGo 1.1 SDK up and running on my Fedora 12 desktop, courtesy of
"yum install kqemu qemu-kvm libvirt-client libvirt"
&&
http://wiki.meego.com/SDK/Docs/1.1/Getting_started_with_the_MeeGo_SDK_for_L…
&&
http://www.exerciseforthereader.org/PCBSD/PCBSD8_under_qemu-kvm.html
(the essential piece above is that the RPM Fusion "metapackage" kqemu
pulls in the appropriate kernel-dependent module, e.g.
kmod-kqemu-2.6.32.21-168.fc12.x86_64 from rpmfusion-free-updates).
So from my 2.6.32.21-168.fc12.x86_64 desktop I can now cut/paste
"root@meego-netbook-sdk:~# uname -a
Linux localhost.localdomain 2.6.35~rc6-134.1-qemu #1 SMP PREEMPT Thu
Jul 29 10:40:24 UTC 2010 i686 i686 i386 GNU/Linux"
from a gnome-terminal running in the KVM guest, even though there's a
tiny little netbook/handheld emulator running somewhere on my desktop
too. Not sure why I'd want to use that emulator for devel/admin when I
can run any X app via:
ssh -f -Y root@localhost -p 6666 "exec dbus-launch gnome-terminal
>&/dev/null </dev/null"
I still have a little of
http://people.redhat.com/berrange/olpc/sdk/network-bridge.html
to work through. So far, however, it's outrageously easy on a modern
Linux to set up and tear down other virtual OSes, and the performance
is surprisingly good, at least when emulating a 32-bit Atom on a
64-bit Phenom II :-)
I was surprised (and happy) to see "SMP PREEMPT" in my virtual MeeGo boot logs:
Oct 23 21:30:24 localhost klogd: [ 0.000000] Linux version
2.6.35~rc6-134.1-qemu (abuild@build16) (gcc version 4.5.0 20100414
(MeeGo 4.5.0-1) (GCC) ) #1 SMP PREEMPT Thu Jul 29 10:40:24 UTC 2010
It seems like all that's missing between this solution and my "meegolem"
hack of adding Fedora RPM Fusion and Planet CCRMA app/lib/devel
repositories to MeeGo is the "RT" from the CCRMA realtime kernel? Or
is MeeGo 1.1 already fully realtime capable, with the boot message just
omitting "RT"?
What aspects of http://lwn.net/Articles/319544/ are in MeeGo 1.1?
Is this just a side effect of modern Linux kernels adopting the
preempt/rt patches
(http://events.linuxfoundation.org/2010/linuxcon-brasil/pt/gleixner)?
Does this mean that, in the future, the separate realtime kernels
provided for the Fedora/Ubuntu distros will no longer be needed, and
that we'll be able to configure our systems for RT usage as needed?
Question: in a KVM/qemu environment, would I still get all the
goodness the PREEMPT features of my virtual MeeGo 1.1 provide for
audio/media usage?
Next step -- try out the "Meegolem" hack
(http://lalists.stanford.edu/lau/2010/09/0480.html and
http://lalists.stanford.edu/lau/2010/09/0502.html) on KVM'd MeeGo 1.1
and see how well "virtualized" media applications work. And figure out
http://libvirt.org/formatdomain.html#elementsSound to see if there's a
way of allocating a specific soundcard for use by
jack-audio-connection-kit (although I did get netjack running
previously on MeeGo 1.0, so I could use jackd on the "host" OS and
netjack on the virtualized one with some SSH tunneling).
However, the netjack solution doesn't really scratch the realtime itch,
and SSH tunnelling across localhost isn't exactly the definition of
performance. The question is: would it work to have the OS with
preempt features running in a KVM guest, using jackd in that OS to gain
realtime and exclusive access to specific media hardware? Would that
provide realtime performance for just the apps needing it, running in
the virtualized realtime kernel, or would it be a recipe for
disaster?
-- Niels
http://nielsmayer.com
PS: Assuming there was a way for me to grant exclusive access to a
particular soundcard from a given KVM OS, would I be able to
experiment with writing/changing an ALSA device driver for a
particular device within the "experimental OS" running in KVM, while
using all my other devices/desktop as normal, without fear of
crashes/hangs/etc? Is this a sensible development strategy?
Hi
I'm working on a custom-built embedded platform with the Marvell PXA310
processor, trying to make one of the SSP outputs work in I2S mode. Today I
managed to get everything to compile and boot, but nothing appears in
/dev/audio and I'm not sure what to do next. Here's the relevant portion of
the startup log:
Advanced Linux Sound Architecture Driver Version 1.0.18rc3.
ASoC version 0.13.2
littleton_init
Dummy(Codec) SoC Audio Codec
Littleton init done: 0
ALSA device list:
No soundcards found.
In this log, "littleton" is the machine level driver which I believe is
analogous to Corgi.c or spitz.c My hardware platform has no actual CODEC
chip so I created a dummy one and then made all the functions return zero
(i.e. return 0) to fool the system into thinking there actually was a DAC
present. In other words, my hardware consists of the Marvell CPU with its
I2S output feeding into an external chip that will take this audio and
(hopefully) play it. That external chip is not a DAC. I know it's primitive,
but I'm just learning all this stuff and can tackle a genuine DAC later on.
Since most of the SoC code I've looked at uses an external chip for the DAC
I thought this architecture would work, although it may not be very
practical.
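Looking at wm8731.c (from the same ASoC 0.13.x era as my kernel), I
wonder if this is what I'm missing: its codec probe creates the PCM
devices itself, which my all-return-zero dummy never does, and that
would explain "No soundcards found". Here is a stripped-down sketch of
what I think the probe has to do (names taken from that era's codec
drivers, not verified against my exact tree):

static int dummy_codec_probe(struct platform_device *pdev)
{
        struct snd_soc_device *socdev = platform_get_drvdata(pdev);
        struct snd_soc_codec *codec = socdev->codec; /* socdev->card->codec on some trees */
        int ret;

        codec->name = "Dummy";
        codec->owner = THIS_MODULE;
        codec->dai = &dummy_dai;  /* the DAI must advertise nonzero
                                     channels/rates/formats */
        codec->num_dai = 1;

        /* This is the call that actually creates the ALSA card and its
         * PCM devices; a probe that only does "return 0" never
         * registers a card at all. */
        ret = snd_soc_new_pcms(socdev, SNDRV_DEFAULT_IDX1, SNDRV_DEFAULT_STR1);
        if (ret < 0)
                return ret;

        /* snd_soc_init_card() on ~2.6.27+; older trees used
         * snd_soc_register_card() */
        return snd_soc_init_card(socdev);
}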
NOTE: my kernel has the drivers built-in and not created as modules, if that
matters.
I think I must be close to being able to play a tune through this devkit,
but I don't know what I'm missing. I'd appreciate any suggestions,
Cheers All
Rory
Hi:
I am developing a C project in the Eclipse environment (on Fedora 13) and
currently my debugger is not working. The message on the console is as follows:
.gdbinit: No such file or directory.
Reading symbols from
/opt/SpeechEnhance/Thesis/Projects/IPPS_Speech_Enhancement/Debug/IPPS_Speech_Enhancement...done.
Setting environment variable "LOADEDMODULES" to null value.
Stopped due to shared library event
kill
gdb Debugger Thread[0] (Running)
gdb seems to work from a terminal but not in Eclipse. Previously I had
the debugger working well for the last 3 months. Can anyone tell me what
settings change could have happened and how I can get gdb working again
within Eclipse? I would appreciate it.
Thanks,
Arvind V
Hi everyone,
I am looking for a self-balancing binary tree implementation
in C or C++ that I can use in the JACK process callback.
I was thinking of something like std::multiset in C++ (equal keys
allowed), but one that doesn't use dynamic memory allocation.
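To illustrate what I mean by avoiding dynamic allocation: I'd want the
nodes to come out of a pool reserved up front, so nothing in the
callback ever calls malloc(). A rough C sketch of just the pool part
(any balancing scheme, AVL or red-black, would sit on top of it; the
names are mine, not from an existing library):

#include <stddef.h>

#define POOL_SIZE 4096            /* fixed capacity, chosen up front */

struct node {
    int key;
    struct node *left, *right;
    struct node *next_free;       /* freelist link while the node is unused */
};

static struct node pool[POOL_SIZE];
static struct node *free_list;

void pool_init(void)              /* call once, before the callback runs */
{
    size_t i;
    for (i = 0; i < POOL_SIZE - 1; i++)
        pool[i].next_free = &pool[i + 1];
    pool[POOL_SIZE - 1].next_free = NULL;
    free_list = pool;
}

struct node *node_alloc(void)     /* O(1), no malloc: safe in the callback */
{
    struct node *n = free_list;
    if (n)
        free_list = n->next_free;
    return n;                     /* NULL when the pool is exhausted */
}

void node_free(struct node *n)    /* O(1): safe in the callback */
{
    n->next_free = free_list;
    free_list = n;
}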
Thanks for your help
Greetings,
Lieven
[Apologies for cross-postings]
[Please distribute]
8th Sound and Music Computing Conference, 06-09 July 2011
Department of Information Engineering, University of Padova
Conservatorio Cesare Pollini, Padova
http://smc2011.smcnetwork.org/
The SMC Conference is the forum for international exchanges around the
core interdisciplinary topics of Sound and Music Computing. SMC 2011
will feature lectures, posters/demos, musical/sonic works, and other
satellite events. The SMC Summer School will take place just before the
Conference; it aims to give young researchers interested in the field
an opportunity to learn about some of the core interdisciplinary topics
and to share their experiences with other young researchers.
================Important dates=================
Deadline for submissions of papers and music: Friday 25 March, 2011
Deadline for applications to Summer School: Friday 25 March, 2011
Notification of acceptance to Summer School: Monday 18 April, 2011
Notification of paper and music acceptances: Friday 6 May, 2011
Deadline for submission of camera-ready papers: Friday 20 May, 2011
SMC 2011 Summer School: Saturday 2 - Tuesday 5 July, 2011
SMC 2011 Satellite Events: Wednesday 6 July, 2011
SMC 2011 Conference: Thursday 07 - Saturday 09 July, 2011
===========================================
The topics to be covered at the Conference are all the core ones in
Sound and Music Computing research, and can be grouped into:
. Processing of sound and music signals
. Understanding and modeling sound and music
. Interfaces for sound and music
. Assisted sound and music creation
================Call for papers==================
SMC 2011 will include paper presentations as both lectures and
posters/demos. We invite submissions examining all the core areas of the
Sound and Music Computing field. All submissions will be peer-reviewed
according to their novelty, technical content, presentation, and
contribution to the overall balance of topics represented at the
conference. Paper submissions may have a maximum of 8 pages
including figures and references; a length of 6 pages is strongly
encouraged. Accepted papers will be designated for presentation either
as posters/demos or as lectures. More details are available at
http://smc2011.smcnetwork.org/call_for_participation.htm
===========================================
Want to help us promote SMC2011?
Insert an SMC2011 banner on your blog or web page
(available at http://smc2011.smcnetwork.org/img/banner/),
and link it to http://smc2011.smcnetwork.org/
Want to follow and share SMC2011 related news?
Join and invite your friends to the SMC2011 facebook fanpage
(linked from http://smc2011.smcnetwork.org/news.htm)
Hi,
I'm in a bit of a time crunch. Can anyone tell me how to do this properly?
I would like to have a threaded timer run "cmd" after 5 seconds.
However, cmd is normally triggered like this:
os.system(cmd)
But there seems to be an issue with calling os.system(cmd) this way, so
I'm using subprocess.Popen instead.
==========================
import shlex
import subprocess
import threading

def do_popen(self, *args):
    subprocess.Popen(args[0], shell=True)

def do_speech(self, audioText, delay):
    cmd = 'spd-say -t female2 "' + audioText + '"'
    args = shlex.split(cmd)   # not actually needed with shell=True
    # Add a 5 second delay for the first view, to allow existing
    # speech processes to finish
    print "do_speech: ", delay
    if delay:
        print "do_speech: delayed start"
        # Pass the callable and its argument separately; writing
        # self.do_popen(cmd) here would run it immediately instead
        # of after the 5 second delay
        t = threading.Timer(5.0, self.do_popen, args=(cmd,))
        t.start()
    else:
        print "do_speech: immediate start"
        self.do_popen(cmd)
======================
FYI, it is for an accessibility wizard that Daniel and I have been working
on.
Cheers.
--
Patrick Shirkey
Boost Hardware Ltd.
"ZPE is not about creating something from nothing: It is about using the
zero point of a wave as a means to transform other forms of potential
energy like magnetic flux, heat, or particle spin into usable energy in
such a way that entropy appears to be reversed."
Hi,
I have some interesting ongoing part-time contract work for a competent
Perl/web dev who can assist me for the next couple of months.
If you are interested and available to start immediately, please contact me
off list with your rate.
FYI, you'll be working online with me, so there shouldn't be any surprises ;-)
Cheers.
--
Patrick Shirkey
Boost Hardware Ltd.
Hi there!
My laptop's USB ports are busted, probably a combination of them not
being top quality in the first place and my Edirol UA-25 USB audio
interface occasionally drawing a hefty amount of current to power up
a couple of large-diaphragm mikes.
First I tried adding a two-port USB PCMCIA card, but PCMCIA (or pccard
or cardbus or however the heck it is called nowadays) won't supply the
necessary juice, so if you need more than 100 mA you must use an extra
cable to draw the remaining current from one of the motherboard's USB
ports, which in my case no longer exist. Alas, therefore not a complete
solution for the fried-port blues.
So now I am throwing a powered USB hub into the mix. As long as I stick
to ALSA usage there is no problem: I can record and play back with, for
instance, Audacity using the ALSA backend. But any time I try to launch
jackd, the daemon fails and I get this in /var/log/messages:
kernel: ALSA sound/usb/usbaudio.c:882: cannot submit datapipe for urb
0, error -28: not enough bandwidth
I've tried different jackd buffer configurations to no avail. Does anyone
(I guess that means Clemens) have any idea whether I can work
around this?
Thanks in advance for any insight. Cheers,
L
PS: Yup, I have forsaken any hope of anything resembling low latency
with this setup, at least whenever I need phantom power. The laptop is
5 years old, but still does the job and, above all, has a matte LCD
screen. Nuff said. I am cringing in advance at the unavoidable moment
when entropy will force me to watch my ugly mug's reflection superimposed
over my code. The combined effect could be too much to bear.
>
> Message: 9
> Date: Mon, 04 Oct 2010 13:51:07 +0200
> From: Max Tandetzky <max.tandetzky(a)uni-jena.de>
> Subject: [LAD] CUDA implementation for calf
> To: linux-audio-dev(a)lists.linuxaudio.org
> Message-ID: <4CA9BFAB.2090707(a)uni-jena.de>
> Content-Type: text/plain; charset=ISO-8859-15; format=flowed
>
> Hello,
>
> I am new here, so I hope this is the right place to talk about what I
> want to do.
> I want to make a CUDA implementation of the algorithms from the
> Calf plugins. On the front end there should be a button (or
> something else) to (de-)activate the CUDA support. I have already
> written a JACK program (which makes some simple changes to audio data)
> using CUDA. It works well, and at first sight the performance looks
> promising.
>
> I have read part of the mailing list archive and found out that there
> has already been a discussion about audio processing with CUDA. I know
> there are some reasons for not using CUDA, such as having to use the
> proprietary Nvidia driver, the limitation that only people who have an
> Nvidia card will benefit, and so on. But a CUDA implementation may show
> what performance can be reached, and may be useful for Nvidia users
> immediately.
> I know there is OpenCL, but it is not as sophisticated as CUDA at the
> moment, will have lower performance than CUDA, and I do not have the
> time to learn OpenCL right now (and the project has to be finished soon).
> I have heard it is not too much work to port existing CUDA code to
> OpenCL later (assuming there is already an OpenCL equivalent for all
> the CUDA functions that were used).
> So I want to do this with CUDA.
>
> At the moment I have some questions:
> 1. Has anybody already done, or is anybody doing, something like this?
> 2. Where can I get information for making specific changes to the Calf
> code? (I examined it a bit, but it will take time to understand the
> structure of the program from the code alone; the GUI part in
> particular seems to have a somewhat complex design.)
>
> It would be nice if I could get some help here.
>
> Regards
>
> Max Tandetzky
Hi Max,
I've done some tests using OpenCL in the context of the Faust project
(http://faust.grame.fr/). Up to now the results are not really good, and I
guess CUDA/OpenCL will be usable only in specific cases. I'll probably now
test whether using CUDA directly gives some benefit. Maybe we can share
some ideas?
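For what it's worth, the basic shape of such a test is the per-cycle
round trip below, and that copy-in / kernel launch / copy-out cost is
my main suspect for the poor results at audio block sizes. A rough
sketch in plain C (gpu_process() is a hypothetical placeholder for the
CUDA side, not a real API):

#include <string.h>
#include <jack/jack.h>

/* Hypothetical placeholder for the CUDA side: cudaMemcpy to the
 * device, kernel launch, cudaMemcpy back; blocks until the GPU is
 * done. */
extern void gpu_process(float *buf, jack_nframes_t nframes);

static jack_port_t *in_port, *out_port;
static volatile int use_gpu = 1;  /* the (de-)activation switch Max mentions */

static int process(jack_nframes_t nframes, void *arg)
{
    float *in  = jack_port_get_buffer(in_port, nframes);
    float *out = jack_port_get_buffer(out_port, nframes);
    jack_nframes_t i;

    if (use_gpu) {
        memcpy(out, in, nframes * sizeof(float));
        gpu_process(out, nframes);   /* full round trip every period */
    } else {
        for (i = 0; i < nframes; i++)
            out[i] = in[i];          /* same (trivial) processing on the CPU */
    }
    return 0;
}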
Stéphane