Hi all,
I am pleased to announce the first beta release
(and the first public release, as well) of
Aqualung, a music player for GNU/Linux
--------------------------------------
Homepage: http://aqualung.sf.net
Aqualung is a new music player for the GNU/Linux operating system.
It plays audio files from your filesystem, and its defining feature is
that it inserts _no_gaps_ between adjacent tracks.
Aqualung is released under the GNU General Public License.
Features at a glance
====================
Supported file formats:
* Almost all sample-based, uncompressed formats (e.g. WAV, AIFF, AU),
files encoded with FLAC (the Free Lossless Audio Codec), Ogg Vorbis
files and MPEG Audio files (including, but not limited to, MP3) are
supported. Naturally, any of these files can be mono or stereo.
Supported output devices:
* OSS and ALSA driver interfaces, as well as support for connecting to
the JACK Audio Connection Kit.
Key features:
* Continuous, gap-free playback of consecutive tracks! Your ears get
exactly what is in the files -- no silence inserted in between.
* Ability to convert sample rates between the input file and the
output device, in high quality. (Thanks to libsamplerate!)
* LADSPA plugin support -- you can use any suitable LADSPA plugin to
enhance the music you are listening to.
Some other niceties:
* Internal volume and balance controls (without touching the
soundcard mixer).
* Support for multiple skins; changing them is possible at any time.
* Support for random seeking during playback.
* Track repeat, List repeat and Shuffle modes (besides normal playback).
* All windows are resizable. You can stretch the main window
horizontally for more accurate seeking.
* State persistence via XML config files. Aqualung will come up in the
same state it was in when you closed it, including playback modes,
volume & balance settings, currently active LADSPA plugins,
window sizes, positions & visibility, and other miscellaneous
options.
In addition to all this, Aqualung comes with a Music Store: an
XML-based music database capable of storing various metadata about the
music on your computer (including, but not limited to, the names of
artists and the titles of records and tracks). This is much more
efficient than the all-in-one Winamp/XMMS playlist.
I hope you will like this program. Please report any problems.
Tom
Ingo Molnar <mingo(a)elte.hu> wrote:
>
> I took a
> look at latencies and indeed 2.6.7 is pretty bad - latencies up to 50
> msec (!) can be easily triggered using common workloads, on a fast 2GHz+
> x86 system - even when using the fully preemptible kernel!
What were those workloads?
Certainly 2.6+preempt is not as good as 2.4+LL at this time, but 2.6 isn't
too bad either. Even under heavy filesystem load it's hard to exceed a 0.5
millisecond holdoff. There are still a few problems in the ext3 checkpoint
buffer handling, but those seem pretty hard to hit. I doubt if the `Jack'
testers were running `dbench 1000' during their testing.
All of which makes me suspect that the problems which the `Jack' testers
saw were not directly related to long periods of non-preemption in-kernel.
At least, not in core kernel/fs/mm code. There have been problems in the
past in places like i2c drivers, fbdev scrolling, etc.
What we need to do is to encourage audio testers to use ALSA drivers,
to enable CONFIG_SND_DEBUG in the kernel build, to set
/proc/asound/*/*/xrun_debug, and to send us the traces which result
from underruns.
As for the patch, well, sprinkling rescheduling points everywhere is still
not the preferred approach. But adding more might_sleep() checks is a
sneaky way of making it more attractive ;)
Minor point: this:

	cond_resched();
	function_which_might_sleep();

is less efficient than

	function_which_might_sleep();
	cond_resched();
because if function_which_might_sleep() _does_ sleep, need_resched() will
likely be false when we hit cond_resched(), thus saving a context switch.
Unfortunately, might_sleep() calls tend to go at the entry to functions,
whereas cond_resched() calls should be near the exit point, or inside loop
bodies.
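So for a loop body the preferred shape is something like this (an
illustrative sketch only, with made-up function names):

	/*
	 * The resched point goes after the work which might sleep: if we
	 * did sleep, need_resched() will likely be false by the time we
	 * reach cond_resched(), so it costs almost nothing.
	 */
	for (i = 0; i < nr; i++) {
		do_something_which_might_sleep(i);
		cond_resched();
	}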
by Kjetil Svalastog Matheussen <k.s.matheussen@notam02.no>
Dave Robillard:
> On Wed, 2004-09-01 at 07:48, martin rumori wrote:
>> On Wed, Sep 01, 2004 at 11:31:01AM +0100, Steve Harris wrote:
>> > On Wed, Sep 01, 2004 at 10:03:18 +0100, Dave Griffiths wrote:
>> > > so if I'm writing an OSC sequencer, is the best plan to leave the
>> > > mapping open for the user to modify?
>> >
>> > I would say so, yes; it's possible that an OSC schema spec will be
>> > standardised at some point, which would make it easier.
>>
>> not to mention the microtone-capabilities of your osc sequencer and
>> the sophisticated envelope control functions, which are hard to cover
>> with pure midi... :-))
>
> Imagine a sequencer where, instead of little straight bars representing
> notes, the 'piano roll' just allowed you to draw a line to represent
> frequency.. with any angle, straight or curved (bezier), etc. Wow..
>
> Control could be like that too, with overlay and everything, but having
> that for pitch would be amazing.. has something like this ever been done
> before?
>
You can draw lines to represent pitch (and everything else) in Radium,
though it's not a piano roll. http://www.notam02.no/radium/ (latest
linux-(not-finished)-version: http://www.notam02.no/arkiv/src/)
Hi guys,
I'm coding a software buffer implementation for the PIO-DA16 board
(ICP DAS). I'd like to know what a stream of stereo PCM data looks
like, and how a digital-to-analog converter knows which part of the
data belongs to which channel in a stereo PCM stream.
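My current guess, for 16-bit samples, is that the two channels are
interleaved frame by frame, something like the sketch below. Please
correct me if I'm wrong.

    /* my guess: a stereo PCM stream is a sequence of frames, each
       holding one sample per channel, so the data reads
       L0 R0 L1 R1 L2 R2 ... */
    #include <stdint.h>

    struct stereo_frame {
        int16_t left;   /* channel 0 sample */
        int16_t right;  /* channel 1 sample */
    };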
Rgds,
Anton
Hello,
I'm using the PortAudio library to try to program a loop-sampler
under Linux (and perhaps Windows). Actually, I would like to make it a
console-based program, and I would like to know how I could manage
keyboard input. I think the ncurses lib is a good choice, but my
problem is with the algorithm itself.
I think that writing code like this in main():
while (true) {
    if (key_is_pressed()) {
        switch (key_value()) {
        case 'A':
            /* proceed to FFT */
            break;
        case 'B':
            /* proceed to time stretch */
            break;
        case 'C':
            /* reduce volume */
            break;
        /* ... */
        }
    }
    wait_for_a_certain_time();
}
is not a good idea. I know that with MFC there is event management via
WM_ messages, but I don't know what the equivalent is under Linux,
especially in console-based apps. With GUI libs like GTK there is no
problem, I think. But what about console-based apps?
Does anybody know of some websites, other libraries, or other methods
(perhaps multithreading? That seems heavyweight for mere event
management...)? I have looked at the ncurses docs, but they show an
example with a loop much like the one above. Any advice is welcome.
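For what it's worth, here is the kind of ncurses loop I mean (an
untested sketch; I may well be misusing the API), though it still
polls rather than being truly event-driven:

    #include <ncurses.h>

    int main(void)
    {
        initscr();
        cbreak();       /* deliver keys immediately, no line buffering */
        noecho();       /* don't echo typed characters */
        timeout(10);    /* getch() waits at most 10 ms, then returns ERR */

        for (;;) {
            int ch = getch();
            if (ch == ERR)
                continue;                 /* no key within the timeout */
            switch (ch) {
            case 'A': /* proceed to FFT */          break;
            case 'B': /* proceed to time stretch */ break;
            case 'C': /* reduce volume */           break;
            case 'q': endwin(); return 0;           /* quit */
            }
        }
    }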
Bye.
hi erik,
i've got a question regarding type combinations in libsndfile:
SF_FORMAT_FLOAT | SF_ENDIAN_CPU works with SF_FORMAT_AU on any
platform, which is great. but it works with neither aiff nor wav. i
expected it to work with one of the two formats, depending on the
host endianness.
is there a special reason why it does not?
some other questions regarding string data:
the snd format does not explicitly allow string data in the file.
however, i remember that back in the old NeXT days it was done by
increasing the data offset and using the space in between for a
comment. do you think this could be implemented in libsndfile for the
SF_STR_COMMENT thing, or is that just too non-standard?
with SF_FORMAT_AIFF, i can't write string data to the file in SFM_RDWR
mode. the call to sf_set_string() succeeds, but the actual data in
the file is not changed. (the call to sf_set_string() even succeeds
when the file is in SFM_READ mode.)
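for reference, this is roughly what i'm doing (an abridged test case
with no error checking; the filename is made up):

    #include <string.h>
    #include <sndfile.h>

    int main(void)
    {
        SF_INFO info;
        memset(&info, 0, sizeof(info));

        SNDFILE *sf = sf_open("test.aiff", SFM_RDWR, &info);
        int err = sf_set_string(sf, SF_STR_COMMENT, "a new comment");
        /* err is 0 (success) here, but the comment stored in the
           file never changes */
        sf_close(sf);
        return err;
    }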
with SF_FORMAT_WAV, a file with string data can't be opened in
SFM_RDWR mode at all anymore. does this mean there is no way to change
the audio data or the comment of such a file without copying it? i
guess there is a special reason for that...
thanks very much for your patience...
bests,
martin
Greetings:
While doing some research on VST/VSTi technology I checked the
Wikipedia page for "VST". It's a good, informative page, and there's
even an entry regarding the fst/libfst project. However, there was no
mention of Kjetil's vstserver and its clients. There is now.
It occurred to me that while we're laboring over where & whether to
use wikis, there's already a place for us to get more information out
to the public: Wikipedia. I don't know how many other Linux audio
projects are detailed there. Ardour, ALSA, Pd, and others are already
present, but I don't know how up-to-date that material is. So,
would-be documentation writers, avail thyselves of this opportunity
and start hitting Wikipedia! Add new entries, correct the old ones,
bring 'em up to date, but just do it! ;-)
Thank you for your attention. We now return you to your regularly
scheduled programming.
Best,
dp
Hi.
I recently wrote ALSA PCM and seq support for BRLTTY[1].
However, the fact that libasound.so.2 lives in /usr/lib does
present a little problem for us. BRLTTY needs to run as early
as possible during startup, most likely before /usr is even
mounted. Linking against -lasound in the traditional way is
therefore quite a problem, since the executable no longer
starts if /usr is missing.
We've solved this for now by dlopen()'ing the library, dlsym()'ing
all the symbols we actually need, and calling our private function
pointers instead of the real library symbols. However, this solution
is kind of icky.
of icky. At the same time, we wrote support for QNX, which
seems to have a early fork of ALSA as the QNX Sound Architecture (also
called libasound). QNX apparently already puts its
libasound.so in /lib, most probably because of this
problem.
I'd like to ask the community what you think about moving libasound
from /usr/lib/ to /lib/. With OSS, this wasn't a problem at all,
since the ioctl calls were simple enough that there was no need for a
wrapper lib. But since ALSA pretty much relies on its wrapper
library, wouldn't it make more sense to have the lib in /lib so that
early boot programs can output sound too?
--
CYa,
Mario
Hi,
I've posted the following message to the Rosegarden-devel mailing
list, and Michael pointed me to an old thread from you about the same
subject:
http://www.mail-archive.com/linux-audio-dev@music.columbia.edu/msg03322
Attached is a little program summarizing what I have found out about
the file format so far. I would like to know if you have more
information or ideas, or want to collaborate on this task.
Regards,
Pedro
On Sunday 29 August 2004 01:50, Pedro Lopez-Cabanillas wrote:
> Hi,
>
> I have many old songs, stored as Cakewalk "WRK" files, and I don't use
> Cakewalk or Sonar anymore. I'm not planning to do so in the future, 'cause
> I'm sure I will use Rosegarden. Of course.
>
> So, I need to convert my old WRK files to Rosegarden. But WRK is a
> closed format, and not publicly documented. It will be necessary to
> reverse-engineer it, to rescue my old beloved melodies along with the pile of shit.
>
> My plan is to first write a 'cake2rg' standalone utility, to start learning
> the file format and meanwhile to produce some practical results. After
> that, I hope to be able to import the files directly into Rosegarden. This
> won't happen before 1.0-final.
>
> If somebody has had the same idea and has the work almost finished,
> please let me know ;-). Let's avoid duplicating work.
>
> Comments, please.
>
> Regards,
> Pedro
It may be that this topic is already covered somewhere, but after much
searching over the past few days I have been unable to come up with a
comprehensive guide to what I am attempting to do.
(Note that I've never written something like this before but know enough
about Linux and how things work that I would probably understand a
higher-level discussion on the topic...maybe. [grins])
I am attempting to write a program that will capture and record digital
audio (in the form of PCM/AC-3 over S/PDIF). What I *do* with that data is
a secondary issue and not something I want to get into now.
The input to the "system" is a USB digital audio capture device. (I
currently have an Edirol UA-1D
(http://www.edirol.com/products/info/ua1d.html) and Creative Labs MP3+
(http://www.creative.com/products/product.asp?prodid=154) at my disposal.)
What I'm not sure of is what I am looking for (or what I should expect to
find). The S/PDIF stream that will be coming into the device may have PCM
or AC-3. My basic assumptions on this topic are that I will have to find
some /dev entry to open and read from to do the capturing itself, but from
everything I've read so far it doesn't look like the standard /dev/audio or
/dev/dsp devices would suffice.
So, there you have it. I'm looking for some pointers to get the ball
rolling and I'm positive I'll be able to pick it up from there...it's always
the first few steps that are daunting. The kinds of questions I am asking
are:
1) With the USB audio device plugged in, what device would I need to open
to read the raw digital data?
2) Is there an API already available for reading this information? (I've
looked at OSS, but haven't gotten much into ALSA yet; my rough guess at
an ALSA approach is sketched after this list.)
3) Assuming I can get at *a* bitstream, what will it contain? Will it be
the S/PDIF stream from which I will need to extract PCM/AC-3, or will it be
the raw PCM/AC-3 itself?
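Just to show where my head is at, here is my completely untested
guess at how opening such a device for capture might look with ALSA
(the device name "hw:1,0" is a placeholder for wherever the USB
device actually lands):

    #include <stdio.h>
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_pcm_t *pcm;
        /* "hw:1,0" is a guess; the card/device numbers will vary */
        int err = snd_pcm_open(&pcm, "hw:1,0", SND_PCM_STREAM_CAPTURE, 0);
        if (err < 0) {
            fprintf(stderr, "capture open failed: %s\n", snd_strerror(err));
            return 1;
        }
        /* presumably: set hw params (format, rate, channels) here,
           then read frames with snd_pcm_readi() */
        snd_pcm_close(pcm);
        return 0;
    }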
Thanks.
Paul Braman
Paul dot Braman at NielsenMedia dot com
813-366-5053