Sorry for cross-posting.
Multimedia Signal Processing
TU Berlin, Germany
The Communication Systems Group led by Prof. Dr.-Ing. Thomas Sikora
offers several PhD scholarships in the following research fields:
* Single- and Multi-view Video Coding
* 2D/3D/stereoscopic Image Processing
* Audio Analysis
* Description of Humans in Video Sequences
Please submit your full application by 2007-07-31.
Core Sound <http://www.core-sound.com/default.php> will
start shipping their A-format TetraMic in a few days.
This is one of the Ambisonic microphones that can be
used with TetraProc, see
I have signed an NDA with Core Sound, giving me access to
the original measurement files for each microphone. So if
you purchase a TetraMic you now have two options: either
perform the IR measurements required for calibration
yourself, or just send me the serial number, which will
allow me to generate a config file for TetraProc based
on the IR measurements done by Core Sound. Free service!
Thanks for the mail.
>Your terminology is not very clear. What exactly do you mean by
>'average' RMS and 'max' RMS ? The 'M' in RMS stands for 'mean',
>so it is already an average over all samples considered, and
I'm using a 25 ms window for calculating Max RMS. That is,
every 25 ms I calculate the RMS and compare it with the previous
value; if the current 25 ms RMS value is bigger than the previous
one, I retain it. This way I find the maximum 25 ms RMS value in
the file and call it the Max RMS value, as Adobe Audition does.
>That is, by definition of RMS = square Root of the Mean of the
>Squares, the RMS value of the N samples, expressed in the same
>unit as the samples themselves.
> >AvgRMS = 20.0 * log10 ( rms /2^N-1)
>You may be confusing two values of N here, the first being the
>number of samples, as in equation (1), and the second being
>the number of bits.
Sorry for the confusion in the dB conversion algorithm; N there
is indeed the sample depth in number of bits.
> normalised_rms = rms / (2^(B-1))
>and then convert to dB:
> normalised_rms_in_dB = 20 * log10 (normalised_rms).
You are right, that is exactly what I'm doing. My requirement is
that, like Adobe Audition, my application has to calculate the Average RMS
power of a wave audio file using a time-window system.
I'm trying to do the same to calculate the Average RMS, but the confusion
is that my calculated Average RMS value matches the Total RMS value
reported by Adobe Audition. How do I calculate the Average RMS, if the
algorithm above yields the Total RMS?
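For what it's worth, one plausible reading of the Audition numbers is that "Total RMS" is a single RMS taken over all samples at once, while "Average RMS" is the mean of the per-window values in dB; if you instead accumulate the squared sums over the whole file before taking the root, you get Total RMS. Here is a sketch of all three quantities in Python (the window size, bit depth, and dB averaging are my assumptions, not Audition's documented algorithm):

```python
import math

def window_rms(samples):
    """RMS of one block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def rms_stats(samples, rate=44100, window_ms=25, bits=16):
    """Return (total, average, max) RMS in dB relative to full scale 2^(B-1)."""
    full_scale = 2 ** (bits - 1)
    n = max(1, int(rate * window_ms / 1000))
    windows = [samples[i:i + n] for i in range(0, len(samples), n)]
    rms_values = [window_rms(w) / full_scale for w in windows if w]

    def to_db(x):
        return 20.0 * math.log10(x) if x > 0 else float("-inf")

    total_rms = to_db(window_rms(samples) / full_scale)        # one RMS over the whole file
    avg_rms = sum(map(to_db, rms_values)) / len(rms_values)    # mean of per-window dB values
    max_rms = to_db(max(rms_values))                           # loudest 25 ms window
    return total_rms, avg_rms, max_rms
```

For a constant-level signal all three values coincide; with varying material the average of per-window dB values falls below the total, which is why averaging squared sums instead reproduces Total RMS.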
Linux-audio-dev mailing list
I've added libsamplerate for resampling/oversampling which - as expected
- dramatically improves the quality of the ngspice-processed sound.
Here's an example 3-second guitar sound from current testing:
(left channel: resampled input-sound, right channel: fuzz-effect out)
The fuzz effect still sounds a little odd, but I believe that a
DI-recording would sound just like that ;) - I have not progressed to
simulating tube-amps or synths yet.. lack of time, netlists and
tube-models; it's low priority ATM.
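For anyone wondering why resampling helps here: a static nonlinearity like a fuzz generates harmonics above Nyquist, which fold back into the audio band as aliasing unless the waveshaping runs at a higher rate. A toy sketch of the oversample, shape, decimate structure, with a naive linear interpolator standing in for libsamplerate's far better sinc converters (all names and the tanh clipper are illustrative):

```python
import math

def fuzz(x, gain=20.0):
    """Simple symmetric soft clipper, the kind of static nonlinearity a fuzz uses."""
    return math.tanh(gain * x)

def upsample_linear(x, factor):
    """Naive linear-interpolation upsampler; only here to show the structure."""
    out = []
    for i in range(len(x) - 1):
        for k in range(factor):
            t = k / factor
            out.append(x[i] * (1.0 - t) + x[i + 1] * t)
    out.append(x[-1])
    return out

def decimate(x, factor):
    """Crude decimation by block averaging, which acts as a weak low-pass."""
    return [sum(x[i:i + factor]) / factor
            for i in range(0, len(x) - factor + 1, factor)]

def oversampled_fuzz(x, factor=4):
    """Run the nonlinearity at 'factor' times the sample rate to reduce aliasing."""
    up = upsample_linear(x, factor)
    shaped = [fuzz(s) for s in up]
    return decimate(shaped, factor)
```

In a real signal chain both the interpolation and the anti-alias filtering would be done properly (e.g. by libsamplerate's sinc converters), but the shape of the pipeline is the same.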
Due to a few more people joining the phat project (Pete and Uli), and
us wanting somewhere to discuss stuff, I've created a phat dev mailing
list. Anyone who is interested in custom widgets, or who wants to use or
contribute to phat, should join! We don't have any VU/metering widgets
in phat yet, which would be great to have.
CVS HEAD checkouts will get you AMS 1.8.9 beta 0, now ported to Qt4.
If you like DIY building from CVS, please give it a try!
Keep in mind that you first have to generate a Makefile by calling
qmake, or whatever Qt4's qmake binary is called on your PC.
I hope you won't mind an off topic post, but the LAD list has helped
us in the past in this respect, and I hope it'll do so again.
The Centre for Music Technology at The University of Glasgow has a
postgraduate place funded for three years for a PhD student to
undertake research into data representation of musical structures. We
are looking for somebody who is fluent in music analysis to degree
level, and is also a competent programmer (preferably with Linux
experience) with an appreciation of databases, XML (in the context of
MusicXML) and desktop programming (e.g. with KDE). We are seeking to
achieve the automated discovery of musical structures in performed
and written music. Past projects have involved the performances of
Schoenberg's Pierrot Lunaire with soprano Jane Manning, analysis of
Chopin piano works, and microtonal performance analysis with members
of the BBC Singers and the Royal College of Music.
It is a condition of the funding that the successful applicant must
be a UK national. International applicants are of course encouraged
to apply if they have their own funding.
We are aware that such a combination of skills will be
hard to come by, but there are 60 million UK nationals, and we only
need one! That said, if you have some of the skill set described
above and are interested, please contact me for details on how to
apply. Since we are part of an Engineering Faculty and have
postgraduate students already in place, music analysis skills would
be particularly valued.
Thanks for your time and bandwidth,
you are cordially invited to participate in the 2nd Conference on
Interaction with Sound - Audio Mostly 2007.
Due to many requests, we have decided to extend the deadline for your
abstract submissions until June 22, 2007.
Looking forward to hearing you at Audio Mostly 2007,
on behalf of the Audio Mostly Conference Committee
Please forward this call for papers to anyone who may be interested in
participating with our apologies for multiple postings.
Audio Mostly 2007 – 2nd Conference on Interaction with Sound
September 27 - 28, 2007
hosted by the Fraunhofer Institute for Digital Media Technology IDMT
CALL FOR PAPERS
Audio in all its forms – music, sound effects, or dialogue – holds
tremendous potential to engage, convey narrative, inform, dramatize and
enthrall. However, in computer-based environments, for example games,
interaction through and with sound is still
underrepresented. The Audio Mostly Conference provides a venue to
explore and promote this untapped potential of audio by bringing
together audio experts, content creators, interaction designers, and
behavioral researchers. Our area of interest covers new sound
applications that demand or allow for some kind of interactive response
from their listener, particularly in scenarios where screens and
keyboards are unavailable, unsuitable or disturbing. This area implies
cognitive research and psychology, as well as technological innovations
in audio analysis, processing and rendering. The aim is to both describe
and push the boundaries of sound-based interaction in various domains,
such as gaming, serious gaming, education, entertainment, safety and
We ask researchers, composers, game developers, audio engineers, etc.
who are interested in sharing their results, perspectives and insight to
a multidisciplinary audience to submit abstracts of 300-500 words for
paper or poster submissions before June 22, 2007. Please specify whether
your abstract is for a paper or a poster. Position papers from
industrial strategists are also welcome.
Authors of accepted abstracts will be notified by July 8, 2007.
Final submissions are due on August 24, 2007.
Areas of Interest (including but not limited to):
- Games designed around audio and sound
- Interactivity through sound and speech
- Semantic speech, music, sound analysis
- Music recommendations and user feedback
- Semantic audio processing
- Cognition of sound and music
- New auditory user interfaces
- Sound design for games
- Spatial audio rendering
- Interactive composing & authoring of music
- Audio in teaching
- Sound in mobile applications
- New developments for audio broadcasting, podcasting and audiobooks
- Future uses of sound
Deadline for abstract submission - June 22
Notification of acceptances - July 8
Final paper submission - August 24
Deadline for registration - September 7
Conference - September 27-28
For more information, please visit the conference website
http://www.audiomostly.com/ or contact us at info(a)audiomostly.com
Furthermore, we plan to devote a special paper session to the area of
children's media, an area that is of special interest to our region here
in Thuringia (see
Thus, any submissions dedicated to questions of audio interaction in
media applications for kids or adolescents, such as audio & learning,
music education, or games for children, are particularly welcome.
Today is a break in the Debian conference action here in Edinburgh,
Scotland. Talks will resume tomorrow and continue through Saturday.
The streams are mirrored by a network managed with geodns. Use the URL
below and you will be redirected to an appropriate mirror:
In case you've missed any of the talks from the past 4 days, the archive
is coming online as checking of recorded files and transcoding proceeds:
Note that these encodings have not all been checked. Since we have the
raw DV stored to disk we can go back and rework problem files to some
degree. If we have to we also have tape from the main camera in each
room (not the mixed video) as backup. As you can perhaps imagine, we
have a tremendous amount of footage to deal with, what with having as
many as 4 tracks running at once. If you have any comments or
suggestions on particular files, particularly if there seem to be
technical issues with the encoding, please(!) let us know about it on
the wiki. There's a good chance we will be able to improve the
I have written a jackified application for a customer. The main application
plays a couple of audio files (WAV) with some effects and filters. Everything
works fine so far.
From time to time, while loading some more files, jack disconnects my
application. The loading process runs in a separate thread; I use the thread
implementation from wxWidgets, which uses pthreads.
Neither my app nor jack crashed; both keep running, but my app tells me
"zombified - calling shutdown handler"
The output of jack is:
bash-3.00# /usr/local/bin/jackd --realtime -d alsa
Copyright 2001-2005 Paul Davis and others.
jackd comes with ABSOLUTELY NO WARRANTY
This is free software, and you are welcome to redistribute it
under certain conditions; see the file COPYING for details
JACK compiled with System V SHM support.
loading driver ..
creating alsa driver ... hw:0|hw:0|1024|2|48000|0|0|nomon|swmeter|-|32bit
control device hw:0
configuring for 48000Hz, period = 1024 frames, buffer = 2 periods
nperiods = 2 for capture
nperiods = 2 for playback
**** alsa_pcm: xrun of at least 0.662 msecs
**** alsa_pcm: xrun of at least 4.624 msecs
**** alsa_pcm: xrun of at least 4.950 msecs
**** alsa_pcm: xrun of at least 68.566 msecs
**** alsa_pcm: xrun of at least 48.265 msecs
**** alsa_pcm: xrun of at least 4.413 msecs
**** alsa_pcm: xrun of at least 6.883 msecs
**** alsa_pcm: xrun of at least 126.050 msecs
**** alsa_pcm: xrun of at least 62.059 msecs
**** alsa_pcm: xrun of at least 51.514 msecs
**** alsa_pcm: xrun of at least 12.643 msecs
subgraph starting at soundroom timed out (subgraph_wait_fd=9, status = 0,
state = Running)
The xruns only appear while loading sounds.
Linux soundroom 188.8.131.52 #7 SMP PREEMPT Fri Jun 16 22:18:26 GMT 2006 i686
unknown unknown GNU/Linux
0 [Gina3G ]: Echo_Echo3G - Gina3G
Gina3G rev.0 (DSP56361) at 0xea000000 irq 50
1 [CK804 ]: NFORCE - NVidia CK804
NVidia CK804 with ALC850 at 0xea105000, irq 217
I use the Gina3G.
And now the questions:
- is there any way to get more info out of jack?
- is there any way to keep the app connected, or a hint to avoid the disconnection?
- some days ago I tested jack version 0.103, but jack used 100% CPU
without any app; just starting qjackctl showed that.
Thanks very much for any hints. c~
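A JACK client gets zombified when its process callback fails to return in time; given that the xruns coincide with loading, the usual suspect is the loader thread and the audio thread sharing a lock, or the process path doing allocation or disk I/O. The standard fix is to let the loader block as much as it likes while the process callback only does a non-blocking read from a pre-filled buffer. A minimal sketch of that pattern in Python (all names are illustrative; in a real JACK client you would use a jack_ringbuffer_t in C):

```python
import queue

# Bounded queue: the loader fills it, the audio callback drains it.
audio_blocks = queue.Queue(maxsize=64)

def loader(files):
    """Disk I/O thread: may block as long as it likes."""
    for name in files:
        block = [0.0] * 1024       # stand-in for decoding 'name' from disk
        audio_blocks.put(block)    # blocks the loader, never the audio thread

def process_callback(nframes):
    """Audio thread: must never block, allocate, or touch the disk."""
    try:
        block = audio_blocks.get_nowait()   # non-blocking pop
    except queue.Empty:
        block = [0.0] * nframes             # underrun: output silence, don't wait
    return block[:nframes]
```

The same discipline applies inside the wxWidgets app: anything the process callback touches must be lock-free from its point of view, and the loader should hand finished buffers over through a ringbuffer rather than a mutex-protected structure.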