For a small application I am developing, I need to display waveforms
(read-only) from audio data loaded into my application using libsndfile.
My question is about displaying a waveform in a GUI window -- I have
searched around the net looking for general algorithms but haven't seen
anything that describes this. I haven't tried coding this yet, so I am
asking if someone can describe the basic algorithm or point me in the
right direction to a book, web site, the source file in Ardour or
whatever. Even just a high-level description of the basic algorithm
would be good -- I am thinking along the lines of taking the values of
the samples and drawing line segments from point to point.
Thanks in advance!
-- Brett
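For reference, the usual approach for long files is to map each pixel column to a block of samples and draw a vertical line from that block's minimum to its maximum; drawing every sample point-to-point only makes sense when zoomed in far enough that there are fewer samples than pixels. A rough sketch (function name and numbers are purely illustrative):

```python
import numpy as np

def waveform_peaks(samples, width):
    """Reduce audio samples to one (min, max) pair per pixel column.

    Drawing a vertical line from min to max in each column gives the
    familiar read-only waveform display.
    """
    chunks = np.array_split(np.asarray(samples), width)
    return [(float(c.min()), float(c.max())) for c in chunks]

# 1 second of a 440 Hz sine sampled at 8 kHz, rendered into 4 columns:
t = np.arange(8000) / 8000.0
peaks = waveform_peaks(np.sin(2 * np.pi * 440 * t), 4)
print(len(peaks))  # -> 4
```

At high zoom levels you would switch back to straight line segments between successive sample values, exactly as Brett describes.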
Hi, All!
I'm rather frustrated with my situation, and I'm not sure which
mailing list is the most appropriate for my question.
I have Terratec Aureon 7.1 Space card which is used with ICE1724 alsa
driver (1.0.8, 1.0.9a). An appropriate .asoundrc fragment is shown below.
Everything works fine, except that in some cases the sound level is about
30 dB lower than in others. The alsamixer settings are the same in all
cases, of course.
--- "Normal" cases are: ---
1. Playing back with aqualung this way:
aqualung -o alsa -d default
2. Using any JACK-enabled app (ReZound, aqualung)
--- "-30db" cases are: ---
1. aplay -d default <file>
2. xine engine with any frontend (amaroK, Kaffeine)
Would anybody be so kind as to suggest steps to track down the reason
for this difference?
Andrew
////////////////// .asoundrc fragment ////////////////////////////
pcm.!default {
    type plug
    slave {
        pcm "2x4"
        format S32_LE
    }
}
pcm.2x4 {
    type route
    slave.pcm surround71
    slave.channels 8
    ttable.0.0 0.05
    ttable.1.1 0.05
    ttable.0.2 1
    ttable.1.3 1
    ttable.0.4 1
    ttable.1.5 1
    ttable.0.6 1
    ttable.1.7 1
}
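One detail worth noting about the fragment above: ttable entries are linear gain factors, not dB. The 0.05 on channels 0 and 1 is therefore roughly -26 dB, in the same ballpark as the reported level difference, which may or may not be a coincidence. The conversion arithmetic (plain Python, just to show the numbers):

```python
import math

def to_db(gain):
    """Convert a linear gain factor (as used in ttable entries) to dB."""
    return 20.0 * math.log10(gain)

print(round(to_db(0.05), 1))  # ttable.0.0 / ttable.1.1 -> -26.0 dB
print(to_db(1.0))             # the remaining channels  -> 0.0 dB
```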
Tim Goetze wrote:
> I'm pretty much sold on Python as my high-level language of choice and
> very reluctant to diversify in computer language literacy any further.
I feel your pain. Python is by far my favorite language ever. However,
I've recently been looking for an alternative *compiled* object-oriented
language, because let's face it, Python is on average 10 times slower
than C. Sometimes you just can't afford it.
Enter Objective-C:
- STRICT SUPERSET OF C: every valid C program is a valid ObjC program.
This makes it trivial to include or link to C code and libraries and
to mix procedural, object-oriented and ASM code in the same *file*.
- SIMPLE: ObjC is plain C with one syntax addition and a few new
keywords. It only extends the C language to support Smalltalk-like
object-oriented features, because that's all you're going to need.
No more operator overloading, templates, references, 'const', etc.
- DYNAMICALLY TYPED: messages (method calls) are delivered according to
the dynamic type of the target object, not to some static type. This
is how Python works. You can even send an object a message that is
not specified in its interface. This might seem like a bad idea, but
instead it allows for powerful delegation-based design patterns.
- FAST: Objective-C performs dynamically bound message calls very
  quickly: a message send takes only about 1.5-2.0 times as long as a
  plain C function call!
Objective-C is the language of choice for MacOS X development.
GCC compiles it very well too.
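The delegation point above has a close Python analogue, which may help Python readers see what Objective-C's message forwarding buys you. A toy sketch (class names invented for illustration):

```python
class Logger:
    def log(self, msg):
        return f"[log] {msg}"

class Service:
    """Forwards any message it doesn't implement to a delegate object --
    roughly what Objective-C supports natively via message forwarding."""
    def __init__(self, delegate):
        self._delegate = delegate

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails.
        return getattr(self._delegate, name)

svc = Service(Logger())
print(svc.log("hello"))  # -> [log] hello
```

`Service` never declares `log`, yet the call succeeds: the message is resolved at run time against the delegate, which is the delegation-based pattern described above.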
Using GNUstep (optional) as the system and GUI framework, you can make a
GUI program that compiles almost without changes on both GNU/Linux and
MacOS X! (A Windows port is in the works.)
> > Well I really like to separate C and C++. C is unashamedly a low
> > level language. C++ OTOH tries to be both low level and high level.
> > In comparison to C, C++ is a poor low level language. Compared to
> > Python or Ocaml, C++ is a poor high level language.
Objective-C can be as low-level as C (including #define, ASM...) and as
high-level as Python (albeit a bit more verbose) IN THE SAME FUNCTION!
Toby
--
One theory states that if anyone ever learns how to use all of Emacs, it
will instantly disappear and be replaced by something even more bizarre
and inexplicable. Another theory states that that's how VI was invented.
Hello,
Digital Room Correction 2.6.0 is available at:
http://freshmeat.net/projects/drc
Changes:
- A new prefiltering curve based on the bilinear transformation.
- Improved windowing of the minimum phase filters used to apply the
  target frequency response and the microphone compensation.
- A missing normalization of the minimum phase correction filter has
  been added.
- A new logarithmic interpolation in the target transfer function
  computation; the new method simplifies the definition of the target
  transfer functions.
- Small improvements to the documentation and to the Octave scripts
  used to generate the graphs.
- A new improved version of the measurejack script.
- Some new sample configuration files, including one approximating the
  ERB psychoacoustic scale.
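As an aside, "logarithmic interpolation" of a target transfer function typically means interpolating the response over a log-frequency axis, so that a few (frequency, magnitude) points define smooth octave-spaced behaviour. A minimal illustration (the data points are made up; this is not DRC's actual code):

```python
import numpy as np

# Hypothetical target response: (frequency in Hz, magnitude in dB).
freqs = np.array([20.0, 200.0, 2000.0, 20000.0])
mags_db = np.array([0.0, 0.0, -3.0, -6.0])

def log_interp(f, xp, yp):
    """Interpolate magnitude linearly over log10(frequency)."""
    return np.interp(np.log10(f), np.log10(xp), yp)

# The geometric mean of 200 and 2000 Hz lands halfway between their
# magnitudes on a log-frequency axis:
print(log_interp(632.455, freqs, mags_db))  # ~ -1.5
```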
Bye,
--
Denis Sbragion
InfoTecna
Tel: +39 0362 805396, Fax: +39 0362 805404
URL: http://www.infotecna.it
The ever popular CAPS Audio Plugin Suite reincarnates as v0.2.3, a
maintenance release that rectifies the last remaining denormal
problems and restores the intermittently nonfunctional AmpIV gain
control to its usual fine form.
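For readers wondering what a "denormal problem" is: recursive DSP code can let signal values decay into the CPU's denormal float range, where arithmetic becomes drastically slower. A common generic fix is to flush tiny values to zero; a language-neutral sketch of the idea (not CAPS's actual code):

```python
DENORMAL_THRESHOLD = 1e-20  # well above the float denormal range

def flush_denormal(x):
    """Clamp tiny values to exactly zero so feedback paths (filters,
    reverb tails) never operate on denormal numbers."""
    return 0.0 if abs(x) < DENORMAL_THRESHOLD else x

print(flush_denormal(1e-25))  # -> 0.0
print(flush_denormal(0.5))    # -> 0.5
```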
CAPS is a LADSPA library that enjoys worldwide favour for its
high-quality instrument amplifier emulation plugins; in addition it
provides a small but no less sophisticated assortment of DSP units for
daily use as well as some more exotic sound generators.
Upgrading is recommended; grab your copy before they are all gone:
http://quitte.de/dsp/caps.html
http://quitte.de/dsp/caps_0.2.3.tar.gz
Please forward as you see fit.
Enjoy,
Tim
Hi,
I thought this could be of interest to some people here. I have just
released an acoustic echo canceller as part of Speex 1.1.9
(http://www.speex.org/). So far, I have tested it at 8 kHz, but it
should work for other sampling rates. It uses the MDF algorithm and
(optionally) residual echo cancellation in the spectral domain. If
anyone is interested in making an ALSA plugin out of it, I can provide
assistance. Oh, and it's released under the (revised) BSD license.
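For the curious: an acoustic echo canceller adapts a filter that models the loudspeaker-to-microphone echo path and subtracts the estimated echo from the microphone signal. A toy time-domain NLMS sketch of that principle (Speex's MDF works in the frequency domain and is considerably more sophisticated; all numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, mu = 4000, 32, 0.5      # samples, filter length, NLMS step size

far = rng.standard_normal(N)                 # far-end (loudspeaker) signal
echo_path = rng.standard_normal(L) * np.exp(-np.arange(L) / 8.0)
mic = np.convolve(far, echo_path)[:N]        # mic picks up the echo

w = np.zeros(L)                              # adaptive echo-path estimate
err = np.zeros(N)
for n in range(L - 1, N):
    x = far[n - L + 1:n + 1][::-1]           # latest L far-end samples
    e = mic[n] - w @ x                       # residual after cancellation
    w += mu * e * x / (x @ x + 1e-8)         # NLMS update
    err[n] = e

# Once converged, the residual is far below the raw echo level:
print(np.mean(err[-500:]**2) < 1e-3 * np.mean(mic[-500:]**2))
```

Real-world use adds double-talk detection and residual echo suppression, which is exactly what the Speex implementation provides.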
Jean-Marc
P.S. In case some noticed, there was an echo canceller in Speex before,
but this is the first version to actually work in real conditions
(double talk and all).
--
Jean-Marc Valin <Jean-Marc.Valin(a)USherbrooke.ca>
Université de Sherbrooke
Greetings:
I've prepared a brief report on LAC 2005 for the Linux Journal, it's
ready for submission but I need an outside photo of ZKM + the Kubus. Did
anyone take a nice shot of the buildings that they'd like to see in LJ ?
If so, let me know asap. A TIFF is preferred, but high-resolution JPG
will probably do. TIA!
Best,
dp
Hello!
Sorry if I'm starting in the wrong place, but after several months
of thinking and two weeks of working, I have a couple of questions.
1.) Can FreeBSD 5.4-RELEASE operate without a sound card, such that the
Network Audio Server allows applications running on it to have their
sound heard on speakers elsewhere on the network?
a.) I have "device sound" compiled into the custom kernel
b.) NASD gives an error about connecting to a block device
when I try to start it.
2.) Is arts the way to go with ALSA? When KDE starts on my 2.6 kernel
Gentoo system, the sound suddenly gets louder, as if a new mixer takes
over and bumps the master volume as KDE 3.3 loads.
3.) What is the preferred method to have multiple x86 computers
playing the same stream of sound simultaneously? (within a few dozen
milliseconds)
a.) all the servers have NTP capability and mplayer/xine
4.) How do I have several x86 FreeBSD/Linux machines all synchronize
video as well? Is that simply XF86 forwarding?
Yours Truly,
Christopher
Hi,
I'm looking for a mentor for Google's summer coding project for
students. In a nut-shell, a student pairs with a mentor from an established
open source project in order to complete a modestly sized project. The
benefits to open source developers are that you get to have someone work on
some code or functionality that otherwise might have been shelved for a
while. Furthermore, you will gain a developer with a long term interest in
Linux audio to contribute to development further down the road. Of course,
there are benefits to myself, including money (which students always need),
but I am really looking to get established with some open source software and
gain experience with the development process before I consider moving into
the workforce (unless I stay on to do my PhD ;) and this seems like the
perfect motivation.
If anyone here is interested in taking this on, please check
http://code.google.com/summerofcode.html and specifically the mentoring
organization FAQ at http://code.google.com/mentfaq.html. There is some
misinformation about whether more mentoring organisations are needed,
but I've been in contact with them and they are still accepting groups.
Thank you for your time,
Kevin Sookocheff
Hello,
Yesterday, we gave a Tutorial called "Linux for Audio" at the AES
Convention, which took place here in Barcelona. To quote the abstract:
"It is obvious that Linux is becoming a real alternative to other
well-known operating systems. But, is Linux ready to support all the
requirements of the audio industry? In this tutorial, the Linux audio
infrastructure will be introduced showing how it compares to and
competes with other operating systems, with emphasis on low latency,
application interconnectivity, and modularity. In addition, various
aspects of this audio infrastructure will be demonstrated, using several
promising applications in the Linux audio arena."
The tutorial was 2 hours long and consisted of two parts: a talk and a
demo. During the talk we gave several smaller demonstrations. This
worked out really well, because it made the final demo much more agile,
since we had already explained many basic features and operations
beforehand.
The talk consisted of the following parts: The Linux Operating System
(history, distributions, latency), ALSA, Interapplication connectivity
(jack, alsaseq), Plugins (LADSPA, DSSI, VST), and the final demo.
This demo mimicked a real-life studio situation, with Ardour as the main
application. We had some prerecorded material, but also recorded a new
bass guitar track on site. We demonstrated basic editing with Ardour,
the use of plugins (LADSPA, VST and Jack application inserts), mixing
with multiple output channels, automation with a motorized console
(Behringer BCF2000), synchronization with Hydrogen and with Rosegarden
(connected to fluidsynth), and bouncing the output back to Ardour. We
planned to show Jamin as well, but we ran out of time.
About 80 people attended the tutorial, which can be considered quite a
lot, taking into consideration that the AES audience seems rather
Windows and Mac OS X centered. A quick "raise your hand" survey showed
that about 40 percent of our audience were developers.
We have the feeling that the tutorial went very well, and that the
audience got a good impression of the possibilities of Linux for audio.
No real technical problems occurred during the demo.
We would like to thank all of you who made this possible. That means of
course all of the developers, but also the entire community. We had a
good time giving this Tutorial, and we hope to have generated some
interest in the AES crowd, and given them a good idea of the current
state of Linux audio applications.
We put the slides at http://iua-share.upf.es/wikis/aes/ and tomorrow we
will add some photos. Of course suggestions are welcome!
Pau Arumi
Maarten de Boer