I sent the following to the LAA:
This is a bug fix release.
A big thanks to all who sent fixes to me; without you, AlsaPlayer would
not be alive.
See the ChangeLog for the details and the names of the contributors:
http://alsaplayer.svn.sourceforge.net/viewvc/alsaplayer/trunk/alsaplayer/Ch…
I also want to urge you to contribute to AlsaPlayer. At least two things
need to be fixed. The first is the jack output plugin, which uses
deprecated functions. The second is the resampling; for that,
libsamplerate will give better quality.
Enjoy the AlsaPlayer,
# # # # #
To quote Fons (his English is better than mine):
While it's a nice player, it has some serious audio quality
issues.
- Resampling 44.1 -> 48 kHz (for jack) sounds horrible...
- The sndfile input plugin reduces everything to 16 bits.
This is really absurd, even if your files and your
sound card are 24 bit you only get 16.
Floating point wav files apparently aren't read at all
(they load but produce silence when played).
All of this could be solved by using a good resampler
lib, and making the internal format floating point
rather than short.
# # # # #
This change can be made using libsamplerate. This is a must-have feature
for me, and it should be implemented before any other change because it
will ease further development (there is a lot of free, reusable audio
code that works in float).
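To give an idea of the direction, here is a rough and untested sketch of
a decode-as-float-then-resample path using the libsndfile and
libsamplerate APIs (the function name is just for illustration):

#include <samplerate.h>
#include <sndfile.h>
#include <stdlib.h>

/* Sketch: read any file as float (keeps 24-bit and float data
 * intact) and resample it, e.g. 44100 -> 48000 for jack. */
float *load_and_resample(const char *path, int out_rate,
                         long *out_frames, SF_INFO *info)
{
    SNDFILE *snd = sf_open(path, SFM_READ, info);
    if (!snd)
        return NULL;

    float *in = malloc(sizeof(float) * info->frames * info->channels);
    sf_readf_float(snd, in, info->frames);  /* no 16-bit truncation */
    sf_close(snd);

    double ratio = (double) out_rate / info->samplerate;
    long max_out = (long) (info->frames * ratio) + 1;
    float *out = malloc(sizeof(float) * max_out * info->channels);

    SRC_DATA d = {
        .data_in       = in,
        .data_out      = out,
        .input_frames  = info->frames,
        .output_frames = max_out,
        .src_ratio     = ratio,
    };
    /* the high-quality sinc converter from libsamplerate */
    if (src_simple(&d, SRC_SINC_BEST_QUALITY, info->channels) != 0) {
        free(in);
        free(out);
        return NULL;
    }
    free(in);
    *out_frames = d.output_frames_gen;
    return out;
}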
I am just an admin and don't have the knowledge to write the needed
code. Anybody who can contribute such a great feature to AP will be most
welcome. If you are interested, please get in touch with me, on this
list or privately.
Ciao,
Dominique Michel
--
"We have the heroes we deserve."
Hello,
For most of the last week I've been stuck trying to configure my
silicon peripheral to generate Normal I2S audio data, with mostly _bad_
results, and I have a couple of questions (below).
Briefly, my setup is as follows: SqueezeServer on a Windows host PC,
Squeezeslave on my embedded Linux device sending audio samples through
the kernel (2.6.28) down to my driver which lives in
sound/soc/pxa. I've been trying to configure my device for I2S format.
Outside the CPU, the I2S-formatted PCM data is sent to an external FM
transmitter chip so I should be able to hear the audio on my FM
receiver. Since the only "new" stuff here is my driver and the output
hardware, I've been focussing my attention on them until now, but
haven't had much success hearing anything.
I've actually heard _some_ recognizable audio - I could tell it was
the song I had selected, but it sounded very distorted (sounded
horrible) and I'm certain it wasn't because of a weak FM transmitter
signal. I'm guessing it was a mismatch between the I2S data format I
was sending and the format the FM Tx chip was expecting, and that has
formed the basis of my troubleshooting effort. I've run out of ideas
for things to try at the bottom end, and now I want to make sure my
top-end components are working properly. This is weird, because lately
I've stopped being able to hear anything at all, even though a scope
confirms my I2S lines are all lit up.
Here are a couple of questions...
1) The CPU supports a packed-mode write to the Synchronous Serial Port
(SSP) FIFO, meaning that two 16-bit samples (one left channel and one
right) can be written at the same time, both packed into a single
32-bit FIFO write. My driver enables this mode, but my question is:
where in the kernel would the samples be combined into a 32-bit chunk
for writing? I'm using Squeezeslave on my embedded device as the
player, and I've checked the sources; it doesn't look like it's
happening in there. It makes more sense for this to happen somewhere
further down in the stack, so players don't have to care about the
details of the hardware they are running on. I was wondering if anyone
knew where this packing might take place.
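My current guess, to illustrate the question: with interleaved 16-bit
stereo the packing may already exist in memory, since a stereo frame is
two adjacent 16-bit samples, so reading the buffer one 32-bit word at a
time would give one left/right pair per write. Something like this
(SSDR stands in for the PXA SSP data register; the rest is made up for
the example):

#include <stdint.h>

/* One interleaved stereo frame as it sits in a 16-bit PCM buffer. */
struct frame_s16 {
    int16_t left;
    int16_t right;
};

/* Illustration only: if frames are interleaved, each 32-bit word of
 * the buffer is already one packed left/right pair, so filling the
 * SSP FIFO needs no extra combining step. 'ssdr' stands in for the
 * memory-mapped SSP data register (SSDR on the PXA); a real driver
 * would use DMA through the ASoC/PCM framework instead. */
static void fifo_fill(volatile uint32_t *ssdr,
                      const struct frame_s16 *buf, unsigned frames)
{
    const uint32_t *words = (const uint32_t *) buf;
    unsigned i;

    for (i = 0; i < frames; i++)
        *ssdr = words[i];   /* one write = one L+R sample pair */
}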
2) Are there any useful tools out there for debugging/narrowing down
where problems in the audio path might lie? My player is an embedded
platform and I've only ported Squeezeslave to it, but for all I know
there could be a problem anywhere from SqueezeServer, through
Squeezeslave, down into the stack, my PCM driver, or even the FM
transmitter. To rule out the transmitter I'm looking for another
device that spits out known-good I2S audio that I can feed into the
FM Tx. But there's a lot of code from the SSP back, and it would be
great if I had some simple tone generator application (for instance),
easily portable to an ARM9 platform (kernel 2.6.28), that I knew was
sending correct data down the stack.
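If nothing like that exists, even something as small as this rough,
untested sketch against the ALSA library might do (all parameters are
arbitrary):

/* Minimal 440 Hz test-tone generator for the "default" ALSA device.
 * Build: gcc -o tone tone.c -lasound -lm */
#include <alsa/asoundlib.h>
#include <math.h>

int main(void)
{
    const unsigned int rate = 44100;
    const int period = 441;             /* 10 ms of frames */
    int16_t buf[2 * 441];               /* interleaved stereo S16 */
    snd_pcm_t *pcm;
    double phase = 0.0;
    double step = 2.0 * M_PI * 440.0 / rate;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, rate, 1, 500000) < 0)  /* 0.5 s latency */
        return 1;

    for (;;) {
        for (int i = 0; i < period; i++) {
            int16_t s = (int16_t)(0.5 * 32767.0 * sin(phase));
            buf[2 * i] = buf[2 * i + 1] = s;  /* same tone on L and R */
            phase += step;
        }
        snd_pcm_sframes_t n = snd_pcm_writei(pcm, buf, period);
        if (n < 0)
            snd_pcm_recover(pcm, n, 0);       /* restart on underrun */
    }
}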
3) My experience with Linux audio is just beginning, and so far it's
been right down at the driver level, so a question about audio-playing
software: when a player produces a PCM stream from, say, an MP3 file,
does it automatically interleave the left channel and right channel,
or does it produce two separate streams, one for left and one for
right? I can't tell from reading the Squeezeslave code, but it looks
like the audio data is sent in one continuous stream, so ...
interleaved?
4) For those of you experienced with I2S and other PCM formats, what
would a Normal I2S stream sound like on a DAC that thought it was
receiving Justified I2S? Would the audio still be intelligible or
would you hear nothing at all?
Thanks to all who read this post.
Cheers,
Rory
> When such an audio-gui standard and configuration is developed, I would love to
> participate. And use it in my apps. I think it's a good idea, especially the
> idea of allowing the user to switch between circular and linear behaviour for
> round controls. And have those changes affect all apps supporting this
> "standard".
>
So far there is no such standard, and it seems one won't come any time
soon. So I have added a menu option to gx_head (the guitarix successor)
where the user can select between radial and linear knob interaction.
It is all in a lib which can be used static or shared. The switch works
with a bool, and it would be easy to set it via getenv(), for example.
It will be more difficult to come to a conclusion in the free-software
developer world about how, why, and whether we would really all use the
same environment variable. If we did, it wouldn't matter which lib (if
any) one uses to create the knob; only the variable would need to be
read.
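For example, something like this minimal sketch; the variable name
KNOB_INTERACTION is only an example, nothing agreed upon:

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical environment variable; the name is not a standard. */
static bool knob_is_radial(void)
{
    const char *mode = getenv("KNOB_INTERACTION");

    if (mode == NULL)
        return true;                     /* default: radial */
    return strcmp(mode, "linear") != 0;  /* "linear" selects linear */
}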
Greetings, hermann
[Apologies for cross-postings] [Please distribute]
Paper-submission, call-for-music and registration are now open
for the Linux Audio Conference 2011 - May 6-8 2011, Maynooth, Ireland
More information: http://lac.linuxaudio.org/2011/
As in previous years, we will have a full program of talks, workshops
and music.
The Linux Audio Conference 2011 will include several concerts. We are
looking for music that has been produced or composed entirely or mostly
using GNU/Linux or other Open Source music software for:
* The Electroacoustic Music Concerts
* The Linux Sound Night
* Sound installations
On Tue, Oct 26, 2010 at 10:05 AM, Robin Gareus <robin(a)gareus.org> wrote:
> http://gjacktransport.sourceforge.net/ is a tool that provides graphical
Thanks!! Works great and provides functionality I was looking for just recently.
One small nitpick: when the transport is rolling, the area displaying
the rolling HH:MM:SS.mmm timecode jiggles around because the font is
proportionally spaced (at least with my display, fonts, and setup). A
monospaced font for such rolling values would prevent this minor
visual distraction.
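If the timecode is a GTK label, I'd guess a few lines like these would
do it (untested, and assuming GTK2):

#include <gtk/gtk.h>

/* Give the rolling timecode label a fixed-width font so the digits
 * keep the same width as they change (GTK2 API). */
static void timecode_use_monospace(GtkWidget *label)
{
    PangoFontDescription *desc =
        pango_font_description_from_string("Monospace 12");

    gtk_widget_modify_font(label, desc);
    pango_font_description_free(desc);
}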
-- Niels
http://nielsmayer.com
Thanks for the suggestions everyone! I'll definitely start
familiarizing myself with the LADSPA API. LADISH looks really nifty
too.
> My best advice is to pick up a project that you are objectively
> interested in completing. Pick a goal that you would be happy if
> *someone else* did. That way when the novelty wears off, you'll
> still have some motivation to keep working and keep learning when
> you otherwise might tire of it. So ask yourself: What's something
> that you think linux audio is lacking? Find something small, and
> something you care about.
>
> Jeremy
Yes,
that's exactly what I'm planning to do. That's a good way to put it,
Jeremy. I just gotta keep doing audio projects in Linux and find out
what tools would be useful to me as an end user.
-Kris
Awesome, this is wonderful. This is information that will keep me busy for a while. Thank you, Gabriel!
-Kris
--- On Tue, 26/10/10, Gabriel M. Beddingfield <gabrbedd(a)gmail.com> wrote:
> From: Gabriel M. Beddingfield <gabrbedd(a)gmail.com>
> Subject: Re: [LAD] Suggestion for diving into audio development?
> To: linux-audio-dev(a)lists.linuxaudio.org
> Cc: "Kris Calabio" <cpczk(a)yahoo.com>
> Date: Tuesday, 26 October, 2010, 8:46 PM
>
> Hi Kris,
>
> On Tuesday, October 26, 2010 05:24:59 pm Kris Calabio wrote:
> > I'm new to the Linux Audio community. Let me introduce
>
> Welcome!!
>
> > Does anyone have suggestions for diving into the world of
> > open source development? I've looked at some source
>
> 1. Watch this movie:
>    http://wiki.xiph.org/A_Digital_Media_Primer_For_Geeks_(episode_1)
>
> 2. You said you know C and C++... so, you're all set there. :-)
>
> 3. Read through jack docs and examples in the source code for jack.
>
> 4. Another good tutorial/resource is Paul Davis's tutorial on using
>    the ALSA API: http://www.equalarea.com/paul/alsa-audio.html
>
> 5. Pick an app that you like, and start squashing bugs. It'll be slow
>    and tedious and confusing at first. But that stuff pays off big-time
>    later. Not only will you have massive debugging chops, but you'll
>    have some good trial-and-error opportunities to learn what you
>    do/don't like doing. Not everyone likes nasty DSP algorithms, but
>    some guys can't get enough. Not everyone likes picking the perfect
>    pixel size for a custom widget... but other guys really enjoy that.
>
> > code of applications I use but get pretty lost. Are
> > there any simple Jack applications that have easy to
> > read code? I'm all for taking baby steps. I'm also
>
> Gordon suggested playing with plugins... and I think that's an
> excellent suggestion.
>
> Fons Adriaensen writes very clean, well-designed code, with many
> small apps, plugins and libraries.
> http://www.kokkinizita.net/linuxaudio/downloads/index.html
>
> Except for his DSP algorithms (which use terse mathematical
> notation), I find his code easy to follow.
>
> -gabriel
>
Hi all,
I'm new to the Linux Audio community. Let me introduce myself:
(You can skip to "Ok getting to the point" if you like :P )
I'm primarily a rock musician and have a home recording setup with a Presonus Audiobox USB, Guitar Rig 3, and Reaper on a Windows system, and it works really well for me. I've been using Linux since I started studying computer science in college in 2006, and I immediately recognized it as marginally better than Windows. I've considered switching my home system completely to Linux and free software (all knowledge must be free!), but I love Reaper too much.
So I decided to dual boot on my new laptop about a month ago. I still have Windows 7 to get stuff done in Reaper quickly and comfortably, and Ubuntu Studio to experiment with. I must say, this last month I've learned so, so much about Linux, DSP, and computers in general. The flexibility of Jack is awesome. I love how my plugins don't all have to run in one DAW application. Jack with Ardour and Guitarix rivals my Windows setup, though I still prefer Reaper.
Ok getting to the point:
Does anyone have suggestions for diving into the world of open source development? I've looked at some source code of applications I use but get pretty lost. Are there any simple Jack applications that have easy to read code? I'm all for taking baby steps. I'm also open to reading suggestions (online resources, books, anything really).
The lowest level of DSP programming I've ever done was with Pure Data. (I made a wavetable/FM synthesizer in pd that I could post if anyone's interested.) Are there other programming languages I should learn? I know C, C++, and Java. I understand that FAUST is a good DSP language. Are there others?
The Linux community is great and the free audio software is really powerful! It's definitely THE ideal alternative for musicians on a budget like myself. Unfortunately, you sort of have to be tech savvy to be a Linux musician. The average musician is not. I want to be part of the development of free audio software as my way of giving back to this wonderful community and helping the average musician.
I just got Meego 1.1 SDK up and running on my Fedora12 desktop, courtesy of
"yum install kqemu qemu-kvm libvirt-client libvirt"
&&
http://wiki.meego.com/SDK/Docs/1.1/Getting_started_with_the_MeeGo_SDK_for_L…
&&
http://www.exerciseforthereader.org/PCBSD/PCBSD8_under_qemu-kvm.html
(essential in the above is that RPMFusion "metapackage" kqemu will
install the appropriate kernel-dependent module, e.g.
kmod-kqemu-2.6.32.21-168.fc12.x86_64 from rpmfusion-free-updates )
So from my 2.6.32.21-168.fc12.x86_64 desktop I can now cut/paste
"root@meego-netbook-sdk:~# uname -a
Linux localhost.localdomain 2.6.35~rc6-134.1-qemu #1 SMP PREEMPT Thu
Jul 29 10:40:24 UTC 2010 i686 i686 i386 GNU/Linux"
from a gnome-terminal running in the kvm, even though there's a tiny
little netbook/handheld emulator running somewhere on my desktop too.
Not sure why I'd want to use it for devel/admin when I can run any X
app via:
ssh -f -Y root@localhost -p 6666 "exec dbus-launch gnome-terminal
>&/dev/null </dev/null"
I still have a little
http://people.redhat.com/berrange/olpc/sdk/network-bridge.html
to work through. However, so far it's outrageously easy in a modern
Linux to set up and tear down other virtual OSes, and the performance
is surprisingly good, at least when emulating a 32-bit Atom on a
64-bit Phenom II :-)
I was surprised & happy to see "SMP PREEMPT" output from my virtual Meego logs:
Oct 23 21:30:24 localhost klogd: [ 0.000000] Linux version
2.6.35~rc6-134.1-qemu (abuild@build16) (gcc version 4.5.0 20100414
(MeeGo 4.5.0-1) (GCC) ) #1 SMP PREEMPT Thu Jul 29 10:40:24 UTC 2010
Seems like all that's missing between this solution and my "meegolem"
hack of adding Fedora RPMFusion and PlanetCCRMA app/lib/devel
repositories to Meego is the "RT" from the CCRMA realtime kernel? Or
is Meego 1.1 already fully realtime capable and the message just omits
"RT" ?
What aspects of http://lwn.net/Articles/319544/ are in Meego 1.1?
Is this just a side-effect of modern linux kernels adopting the
preempt/rt patches:
http://events.linuxfoundation.org/2010/linuxcon-brasil/pt/gleixner ?
Does this mean, in the future, that separate realtime kernels as
provided for Fedora/Ubuntu distros will no longer be needed? And that
we'll be able to configure our systems for RT usage as needed?
Question: In a KVM/qemu environment, would I still be able to use all
the goodness the PREEMPT features of my virtual Meego 1.1 provide for
audio/media usage?
Next step -- try out "Meegolem" hack (
http://lalists.stanford.edu/lau/2010/09/0480.html
http://lalists.stanford.edu/lau/2010/09/0502.html ) on KVM'd Meego 1.1
and see how well "virtualized" media applications work. And figure out
http://libvirt.org/formatdomain.html#elementsSound to see if there's a
way of allocating a specific soundcard for use by
jack-audio-connection-kit (although I did get netjack running
previously in 1.0, so could use jackd on the "host" OS and netjack on
the virtualized one with some SSH tunneling).
However the netjack solution doesn't really scratch the realtime itch,
and SSH tunnelling across localhost isn't exactly the definition of
performance. The question is, would it work to have the OS with
preempt features running in a KVM, using jackd in that OS to gain
realtime and exclusive access to specific media hardware? Would that
provide realtime performance for just the apps needing it, running in
the virtualized realtime kernel, or would that be a recipe for
disaster?
-- Niels
http://nielsmayer.com
PS: Assuming there was a way for me to grant exclusive access to a
particular soundcard from a given KVM OS, would I be able to
experiment with writing/changing an ALSA device driver for a
particular device within the "experimental OS" running in KVM, while
using all my other devices/desktop as normal, without fear of
crashes/hangs/etc? Is this a sensible development strategy?
Hi
I'm working on a custom-built embedded platform with the Marvell PXA310
processor, trying to make one of the SSP outputs work in I2S mode. Today I
managed to get everything to compile and boot, but nothing appears in
/dev/audio and I'm not sure what to do next. Here's the relevant portion of
the startup log:
Advanced Linux Sound Architecture Driver Version 1.0.18rc3.
ASoC version 0.13.2
littleton_init
Dummy(Codec) SoC Audio Codec
Littleton init done: 0
ALSA device list:
No soundcards found.
In this log, "littleton" is the machine-level driver, which I believe
is analogous to corgi.c or spitz.c. My hardware platform has no actual
CODEC chip, so I created a dummy one and then made all its functions
return zero (i.e. return 0) to fool the system into thinking there
actually was a DAC present. In other words, my hardware consists of the
Marvell CPU with its I2S output feeding into an external chip that will
take this audio and (hopefully) play it. That external chip is not a
DAC. I know it's primitive, but I'm just learning all this stuff and
can tackle a genuine DAC later on. Since most of the SoC code I've
looked at uses an external chip for the DAC, I thought this
architecture would work, although it may not be very practical.
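For reference, here is roughly the shape of my dummy codec's DAI
(reconstructed from memory; the struct and field names follow this
kernel era's ASoC API and may differ in other versions, so treat it as
illustrative):

#include <sound/soc.h>
#include <sound/pcm.h>

/* Dummy codec DAI: advertises playback capabilities but touches no
 * hardware. Names follow the 2.6.28-era ASoC API. */
struct snd_soc_dai dummy_codec_dai = {
    .name = "dummy-codec",
    .playback = {
        .stream_name  = "Playback",
        .channels_min = 2,
        .channels_max = 2,
        .rates        = SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000,
        .formats      = SNDRV_PCM_FMTBIT_S16_LE,
    },
    /* no .ops: nothing to configure on a nonexistent chip */
};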
NOTE: my kernel has the drivers built-in and not created as modules, if that
matters.
I think I must be close to being able to play a tune through this devkit,
but I don't know what I'm missing. I'd appreciate any suggestions,
Cheers All
Rory