Awesome, this is wonderful. This is information that will keep me busy for a while. Thank you, Gabriel!
-Kris
--- On Tue, 26/10/10, Gabriel M. Beddingfield <gabrbedd(a)gmail.com> wrote:
> From: Gabriel M. Beddingfield <gabrbedd(a)gmail.com>
> Subject: Re: [LAD] Suggestion for diving into audio development?
> To: linux-audio-dev(a)lists.linuxaudio.org
> Cc: "Kris Calabio" <cpczk(a)yahoo.com>
> Date: Tuesday, 26 October, 2010, 8:46 PM
>
> Hi Kris,
>
> On Tuesday, October 26, 2010 05:24:59 pm Kris Calabio
> wrote:
> > I'm new to the Linux Audio community. Let me introduce
>
> Welcome!!
>
> > Does anyone have suggestions for diving into the world of
> > open source development? I've looked at some source
>
> 1. Watch this movie:
>    http://wiki.xiph.org/A_Digital_Media_Primer_For_Geeks_(episode_1)
>
> 2. You said you know C and C++... so, you're all
>    set there. :-)
>
> 3. Read through jack docs and examples in the source
>    code for jack.
>
> 4. Another good tutorial/resource is Paul Davis's tutorial
>    on using the ALSA API:
>    http://www.equalarea.com/paul/alsa-audio.html
>
> 5. Pick an app that you like, and start squashing bugs.
>    It'll be slow and tedious and confusing at first.
>    But that stuff pays off big-time later. Not only
>    will you have massive debugging chops, but you'll
>    have some good trial-and-error opportunities to
>    learn what you do/don't like doing. Not everyone
>    likes nasty DSP algorithms, but some guys can't
>    get enough. Not everyone likes picking the perfect
>    pixel size for a custom widget... but other guys
>    really enjoy that.
>
> > code of applications I use but get pretty lost. Are
> > there any simple Jack applications that have easy to
> > read code? I'm all for taking baby steps. I'm also
>
> Gordon suggested playing with plugins... and I think
> that's an excellent suggestion.
>
> Fons Adriaensen writes very clean, well-designed code,
> with many small apps, plugins and libraries.
> http://www.kokkinizita.net/linuxaudio/downloads/index.html
>
> Except for his DSP algorithms (which use terse
> mathematical notation), I find his code easy to follow.
>
> -gabriel
>
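For a concrete starting point on item 3 of the advice above, here is a minimal pass-through JACK client in plain C, written in the spirit of the example clients that ship with the jack sources (e.g. simple_client.c). It is only a sketch: the client name "passthru", the port names, and the bare-bones error handling are illustrative choices rather than anything the JACK API requires.
==========================
/* passthru.c -- copy audio from one input port to one output port. */
#include <stdio.h>
#include <string.h>
#include <jack/jack.h>

static jack_port_t *in_port;
static jack_port_t *out_port;

/* Called by the JACK server from its realtime thread once per period.
 * Keep it lean: no malloc, no locks, no printf in here. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    (void) arg;
    memcpy(out, in, nframes * sizeof(jack_default_audio_sample_t));
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("passthru", JackNullOption, NULL);
    if (!client) {
        fprintf(stderr, "could not connect to the JACK server\n");
        return 1;
    }

    jack_set_process_callback(client, process, NULL);
    in_port  = jack_port_register(client, "in", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsInput, 0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);

    if (jack_activate(client)) {
        fprintf(stderr, "could not activate client\n");
        return 1;
    }

    /* Audio now flows in the JACK thread; wait here until Enter is pressed. */
    getchar();

    jack_client_close(client);
    return 0;
}
==========================
Build it with something like "gcc -o passthru passthru.c $(pkg-config --cflags --libs jack)" and wire the ports up with qjackctl or jack_connect.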
Hi all,
I'm new to the Linux Audio community. Let me introduce myself:
(You can skip to "Ok getting to the point" if you like :P )
I'm primarily a rock musician and have a home recording setup with a Presonus Audiobox USB, Guitar Rig 3, and Reaper on a Windows system, and it works really well for me. I've been using Linux ever since I started studying computer science in college in 2006, and I immediately recognized it as marginally better than Windows. I've considered switching my home system completely to Linux and free software (all knowledge must be free!), but I love Reaper too much.
So I decided to dual boot on my new laptop about a month ago. I still have Windows 7 to get stuff done in Reaper quickly and comfortably, and Ubuntu Studio to experiment with. I must say, this last month I've learned so, so much about Linux, DSP, and computers in general. The flexibility of Jack is awesome. I love that my plugins don't all have to run inside one DAW application. Jack with Ardour and Guitarix rivals my Windows setup, though I still prefer Reaper.
Ok getting to the point:
Does anyone have suggestions for diving into the world of open source development? I've looked at some source code of applications I use but get pretty lost. Are there any simple Jack applications that have easy to read code? I'm all for taking baby steps. I'm also open to reading suggestions (online resources, books, anything really).
The lowest level of DSP programming I've ever done was with Pure Data. (I made a wavetable/FM synthesizer in pd that I could post if anyone's interested.) Are there other programming languages I should learn? I know C, C++, and Java. I understand that FAUST is a good DSP language. Are there others?
The Linux community is great and the free audio software is really powerful! It's definitely THE ideal alternative for musicians on a budget like myself. Unfortunately, you sort of have to be tech savvy to be a Linux musician. The average musician is not. I want to be part of the development of free audio software as my way of giving back to this wonderful community and helping the average musician.
I just got the Meego 1.1 SDK up and running on my Fedora 12 desktop, courtesy of
"yum install kqemu qemu-kvm libvirt-client libvirt"
&&
http://wiki.meego.com/SDK/Docs/1.1/Getting_started_with_the_MeeGo_SDK_for_L…
&&
http://www.exerciseforthereader.org/PCBSD/PCBSD8_under_qemu-kvm.html
(essential in the above is that the RPMFusion "metapackage" kqemu will
install the appropriate kernel-dependent module, e.g.
kmod-kqemu-2.6.32.21-168.fc12.x86_64 from rpmfusion-free-updates)
So from my 2.6.32.21-168.fc12.x86_64 desktop I can now cut/paste
"root@meego-netbook-sdk:~# uname -a
Linux localhost.localdomain 2.6.35~rc6-134.1-qemu #1 SMP PREEMPT Thu
Jul 29 10:40:24 UTC 2010 i686 i686 i386 GNU/Linux"
from a gnome-terminal running in the KVM guest, even though there's a tiny
little netbook/handheld emulator running somewhere on my desktop too.
Not sure why I'd want to use it for devel/admin when I can run any X
app via:
ssh -f -Y root@localhost -p 6666 "exec dbus-launch gnome-terminal >&/dev/null </dev/null"
I still have a little
http://people.redhat.com/berrange/olpc/sdk/network-bridge.html
to work through. However, so far it's been outrageously easy on a modern
Linux to set up and tear down other virtual OSes, and the performance
is surprisingly good, at least when emulating a 32-bit Atom on a 64-bit
Phenom II :-)
I was surprised and happy to see "SMP PREEMPT" output in my virtual Meego logs:
Oct 23 21:30:24 localhost klogd: [ 0.000000] Linux version
2.6.35~rc6-134.1-qemu (abuild@build16) (gcc version 4.5.0 20100414
(MeeGo 4.5.0-1) (GCC) ) #1 SMP PREEMPT Thu Jul 29 10:40:24 UTC 2010
It seems like all that's missing between this solution and my "meegolem"
hack of adding the Fedora RPMFusion and PlanetCCRMA app/lib/devel
repositories to Meego is the "RT" from the CCRMA realtime kernel? Or
is Meego 1.1 already fully realtime-capable and the message just omits
"RT"?
What aspects of http://lwn.net/Articles/319544/ are in Meego 1.1?
Is this just a side-effect of modern linux kernels adopting the
preempt/rt patches:
http://events.linuxfoundation.org/2010/linuxcon-brasil/pt/gleixner ?
Does this mean, in the future, that separate realtime kernels as
provided for Fedora/Ubuntu distros will no longer be needed? And that
we'll be able to configure our systems for RT usage as needed?
Question: In a KVM/qemu environment, would I still be able to use all
the goodness the PREEMPT features of my virtual Meego 1.1 provide for
audio/media usage?
Next step -- try out "Meegolem" hack (
http://lalists.stanford.edu/lau/2010/09/0480.html and
http://lalists.stanford.edu/lau/2010/09/0502.html ) on KVM'd Meego 1.1
and see how well "virtualized" media applications work. And figure out
http://libvirt.org/formatdomain.html#elementsSound to see if there's a
way of allocating a specific soundcard for use by
jack-audio-connection-kit (although I did get netjack running
previously in 1.0, so I could use jackd on the "host" OS and netjack on
the virtualized one with some SSH tunneling).
However, the netjack solution doesn't really scratch the realtime itch,
and SSH tunnelling across localhost isn't exactly the definition of
performance. The question is: would it work to have the OS with
preempt features running in a KVM guest, using jackd in that OS to gain
realtime and exclusive access to specific media hardware? Would that
provide realtime performance for just the apps that need it, running in
the virtualized realtime kernel, or would that be a recipe for
disaster?
-- Niels
http://nielsmayer.com
PS: Assuming there was a way for me to grant exclusive access to a
particular soundcard from a given KVM OS, would I be able to
experiment with writing/changing an ALSA device driver for a
particular device within the "experimental OS" running in KVM, while
using all my other devices/desktop as normal, without fear of
crashes/hangs/etc? Is this a sensible development strategy?
Hi
I'm working on a custom-built embedded platform with the Marvell PXA310
processor, trying to make one of the SSP outputs work in I2S mode. Today I
managed to get everything to compile and boot, but nothing appears in
/dev/audio and I'm not sure what to do next. Here's the relevant portion of
the startup log:
Advanced Linux Sound Architecture Driver Version 1.0.18rc3.
ASoC version 0.13.2
littleton_init
Dummy(Codec) SoC Audio Codec
Littleton init done: 0
ALSA device list:
No soundcards found.
In this log, "littleton" is the machine level driver which I believe is
analogous to Corgi.c or spitz.c My hardware platform has no actual CODEC
chip so I created a dummy one and then made all the functions return zero
(i.e. return 0) to fool the system into thinking there actually was a DAC
present. In other words, my hardware consists of the Marvell CPU with its
I2S output feeding into an external chip that will take this audio and
(hopefully) play it. That external chip is not a DAC. I know it's primitive,
but I'm just learning all this stuff and can tackle a genuine DAC later on.
Since most of the SoC code I've looked at uses an external chip for the DAC
I thought this architecture would work, although it may not be very
practical.
NOTE: my kernel has the drivers built-in and not created as modules, if that
matters.
I think I must be close to being able to play a tune through this devkit,
but I don't know what I'm missing. I'd appreciate any suggestions.
Cheers all,
Rory
Hi:
I am developing a C project in Eclipse (on Fedora 13), and currently my
debugger is not working. The message on the console is as follows:
.gdbinit: No such file or directory.
Reading symbols from
/opt/SpeechEnhance/Thesis/Projects/IPPS_Speech_Enhancement/Debug/IPPS_Speech_Enhancement...done.
Setting environment variable "LOADEDMODULES" to null value.
Stopped due to shared library event
kill
gdb Debugger Thread[0] (Running)
GDB seems to work from a terminal but not in Eclipse. The debugger had
been working well for the last 3 months. Can anyone tell me what
settings change could have happened and how I can use GDB again
within Eclipse? I would appreciate it.
Thanks,
Arvind V
Hi everyone,
I am looking for a self-balancing binary tree implementation
in C or C++ that I can use in the JACK process callback.
I was thinking of something like std::multiset in C++ (equal keys allowed),
but one that doesn't use dynamic memory allocation, since that isn't safe
in the process callback.
Thanks for your help
Greetings,
Lieven
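One common approach, assuming a fixed upper bound on the number of live keys is acceptable: preallocate a pool of nodes and a free list before jack_activate(), so that inserting and removing inside the process callback never calls malloc() or free(). Any self-balancing scheme (AVL or red-black) can be layered on top of the same pool. The sketch below shows only the pool plus a plain binary-tree insert; the names and pool size are illustrative, and the rebalancing rotations are left out for brevity.
==========================
#include <stddef.h>
#include <stdio.h>

#define POOL_SIZE 256                /* worst-case number of live keys */

typedef struct node {
    int key;                         /* duplicates allowed, as in std::multiset */
    struct node *left, *right;
    struct node *next_free;          /* links unused nodes into the free list */
} node_t;

static node_t  pool[POOL_SIZE];
static node_t *free_list = NULL;
static node_t *root = NULL;

/* Run once from the non-realtime thread, before jack_activate(). */
static void pool_init(void)
{
    size_t i;
    for (i = 0; i < POOL_SIZE; ++i) {
        pool[i].next_free = free_list;
        free_list = &pool[i];
    }
}

/* O(1), no syscalls: safe to call inside the process callback. */
static node_t *node_alloc(void)
{
    node_t *n = free_list;
    if (n)
        free_list = n->next_free;
    return n;                        /* NULL means the pool is exhausted */
}

static void node_free(node_t *n)
{
    n->next_free = free_list;
    free_list = n;
}

/* Plain (unbalanced) BST insert drawing from the pool; a real
 * implementation would rebalance here.  Returns 0 on success,
 * -1 if the pool is empty. */
static int tree_insert(int key)
{
    node_t **link = &root;
    node_t *n = node_alloc();
    if (!n)
        return -1;
    n->key = key;
    n->left = n->right = NULL;
    while (*link)
        link = (key < (*link)->key) ? &(*link)->left : &(*link)->right;
    *link = n;
    return 0;
}

int main(void)
{
    pool_init();
    tree_insert(42);
    tree_insert(17);
    tree_insert(42);                 /* equal keys are fine */
    printf("root key: %d\n", root->key);
    node_free(node_alloc());         /* alloc/free are symmetric O(1) ops */
    return 0;
}
==========================
Removal and rebalancing work the same way: manipulate pointers only, and hand nodes back to the free list instead of calling free().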
[Apologies for cross-postings]
[Please distribute]
8th Sound and Music Computing Conference, 06-09 July 2011
Department of Information Engineering, University of Padova
Conservatorio Cesare Pollini, Padova
http://smc2011.smcnetwork.org/
The SMC Conference is the forum for international exchanges around the
core interdisciplinary topics of Sound and Music Computing. SMC 2011
will feature lectures, posters/demos, musical/sonic works, and other
satellite events. The SMC Summer School will take place just before the
Conference, and it aims to give young researchers interested in the field
an opportunity to learn about some of the core interdisciplinary topics
and to share their own experiences with other young researchers.
================Important dates=================
Deadline for submissions of papers and music: Friday 25 March, 2011
Deadline for applications to Summer School: Friday 25 March, 2011
Notification of acceptance to Summer School: Monday 18 April, 2011
Notification of paper and music acceptances: Friday 6 May, 2011
Deadline for submission of camera-ready papers: Friday 20 May, 2011
SMC 2011 Summer School: Saturday 2 - Tuesday 5 July, 2011
SMC 2011 Satellite Events: Wednesday 6 July, 2011
SMC 2011 Conference: Thursday 07 - Saturday 09 July, 2011
===========================================
The topics to be covered at the Conference are all the core ones in
Sound and Music Computing research, and can be grouped into:
. Processing of sound and music signals
. Understanding and modeling sound and music
. Interfaces for sound and music
. Assisted sound and music creation
================Call for papers==================
SMC 2011 will include paper presentations as both lectures and poster/
demos. We invite submissions examining all the core areas of the Sound
and Music Computing field. All submissions will be peer-reviewed
according to their novelty, technical content, presentation, and
contribution to the overall balance of topics represented at the
conference. Paper submissions should have a maximum of 8 pages
including figures and references, and a length of 6 pages is strongly
encouraged. Accepted papers will be designated to be presented either
as posters/demos or as lectures. More details are available at
http://smc2011.smcnetwork.org/call_for_participation.htm
===========================================
Want to help us promote SMC2011?
Insert a SMC2011 banner in your blog or web page
(available at http://smc2011.smcnetwork.org/img/banner/),
and link it to http://smc2011.smcnetwork.org/
Want to follow and share SMC2011 related news?
Join and invite your friends to the SMC2011 facebook fanpage
(linked from http://smc2011.smcnetwork.org/news.htm)
Hi,
In a bit of a time crunch. Can anyone tell me how to do this properly?
I would like to have a threaded timer run "cmd" after 5 seconds.
However, cmd is normally triggered like this:
os.system(cmd)
But there seems to be an issue with calling it via subprocess.Popen
instead.
==========================
# (module level)
import shlex
import subprocess
import threading

# (methods of the wizard class)
def do_popen(self, *args):
    subprocess.Popen(args[0], shell=True)

def do_speech(self, audioText, delay):
    cmd = 'spd-say -t female2 "' + audioText + '"'
    args = shlex.split(cmd)          # currently unused since shell=True is passed
    # Add 5 second delay for first view to allow existing speech
    # processes to finish
    print "do_speech: ", delay
    if delay:
        print "do_speech: delayed start"
        # Pass the callable and its argument separately: writing
        # threading.Timer(5.0, self.do_popen(cmd)) calls do_popen
        # immediately and hands Timer its return value (None).
        t = threading.Timer(5.0, self.do_popen, args=(cmd,))
        t.start()
    else:
        print "do_speech: immediate start"
        self.do_popen(cmd)
==========================
FYI, it is for an accessibility wizard that Daniel and I have been working
on.
Cheers.
--
Patrick Shirkey
Boost Hardware Ltd.
"ZPE is not about creating something from nothing: It is about using the
zero point of a wave as a means to transform other forms of potential
energy like magnetic flux, heat, or particle spin into usable energy in
such a way that entropy appears to be reversed."
Hi,
I have some interesting ongoing P/T contractual work for a competent
perl/web dev who can assist me for the next couple of months.
If you are interested and available to start immediately please contact me
off list with your rate.
FYI, you'll be working online with me so there shouldn't be any surprises ;-)
Cheers.
--
Patrick Shirkey
Boost Hardware Ltd.