Hello again,
I've added libsamplerate for resampling/oversampling which - as expected
- dramatically improves the quality of the ngspice processed sound.
Here's a 3-second example guitar sound from current testing:
http://mir.dnsalias.com/_media/oss/spicesound/git-fuzz64.mp3
(left channel: resampled input-sound, right channel: fuzz-effect out)
http://mir.dnsalias.com/oss/spicesound/examples#audio_example
The fuzz effect still sounds a little odd, but I believe a
DI recording would sound just like that ;) - I have not progressed to
simulating tube amps or synths yet... lack of time, netlists and
tube models; it's low priority ATM.
robin
G'day LADs
Due to a few more people joining the phat project (Pete and Uli) and
us wanting somewhere to discuss stuff, I've created a phat dev mailing
list.
https://lists.berlios.de/mailman/listinfo/phat-dev
Anyone who is interested in custom widgets, or who wants to use or
contribute to phat, should join! We don't have any VU/metering widgets
in phat yet; those would be great to have.
Loki
Hi,
CVS HEAD checkouts will get you AMS 1.8.9 beta 0, now ported to Qt4.
If you like DIY-building from CVS, please give it a try!
Keep in mind that you have to generate a Makefile first by calling
$ qmake-qt4
or whatever Qt4's qmake is called on your machine.
Kind regards,
Karsten
I hope you won't mind an off topic post, but the LAD list has helped
us in the past in this respect, and I hope it'll do so again.
The Centre for Music Technology at The University of Glasgow has a
postgraduate place funded for three years for a PhD student to
undertake research into data representation of musical structures. We
are looking for somebody who is fluent in music analysis to degree
level, and is also a competent programmer (preferably with Linux
experience) with an appreciation of databases, XML (in the context of
MusicXML) and desktop programming (e.g. with KDE). We are seeking to
achieve the automated discovery of musical structures in performed
and written music. Past projects have involved the performances of
Schoenberg's Pierrot Lunaire with soprano Jane Manning, analysis of
Chopin piano works, and microtonal performance analysis with members
of the BBC Singers and the Royal College of Music.
It is a condition of the funding that the successful applicant must
be a UK national. International applicants are of course encouraged
to apply if they have their own funding.
We are cognisant of the fact that such a disparity of skills will be
hard to come by, but there are 60 million UK nationals, and we only
need one! That said, if you have some of the skill set described
above and are interested, please contact me for details on how to
apply. Since we are part of an Engineering Faculty and have
postgraduate students already in place, music analysis skills would
be particularly valued.
Thanks for your time and bandwidth,
Nick Bailey
http://cmt.gla.ac.uk
http://www.n-ism.org
Dear colleagues,
you are cordially invited to participate at the 2nd Conference on
Interaction with Sound - Audio Mostly 2007.
Due to many requests, we have decided to extend the deadline for your
abstract submissions until June 22, 2007.
Looking forward to hearing from you at Audio Mostly 2007,
Holger Grossmann
on behalf of the Audio Mostly Conference Committee
-------------
Please forward this call for papers to anyone who may be interested in
participating with our apologies for multiple postings.
-------------
Audio Mostly 2007 – 2nd Conference on Interaction with Sound
http://www.audiomostly.com
September 27 - 28, 2007
Ilmenau, Germany
hosted by the Fraunhofer Institute for Digital Media Technology IDMT
CALL FOR PAPERS
-------------
Audio in all its forms – music, sound effects, or dialogue - holds
tremendous potential to engage, convey narrative, inform, dramatize and
enthrall. However, in computer-based environments such as games, the
possibilities for interaction through and with sound remain
underexploited. The Audio Mostly Conference provides a venue to
explore and promote this untapped potential of audio by bringing
together audio experts, content creators, interaction designers, and
behavioral researchers. Our area of interest covers new sound
applications that demand or allow for some kind of interactive response
from their listener, particularly in scenarios where screens and
keyboards are unavailable, unsuitable or disturbing. This area implies
cognitive research and psychology, as well as technological innovations
in audio analysis, processing and rendering. The aim is to both describe
and push the boundaries of sound-based interaction in various domains,
such as gaming, serious gaming, education, entertainment, safety and
healthcare.
Paper Submissions:
We invite researchers, composers, game developers, audio engineers, and
others who are interested in sharing their results, perspectives and
insights with a multidisciplinary audience to submit abstracts of
300-500 words for papers or posters before June 22, 2007. Please specify
whether your abstract is for a paper or a poster. Position papers from
industrial strategists are also welcome.
Authors of accepted abstracts will be notified by July 8, 2007.
Final submissions are due on August 24, 2007.
Areas of Interest (including but not limited to):
- Games designed around audio and sound
- Interactivity through sound and speech
- Semantic speech, music, sound analysis
- Music recommendations and user feedback
- Semantic audio processing
- Cognition of sound and music
- New auditory user interfaces
- Sound design for games
- Spatial audio rendering
- Interactive composing & authoring of music
- Audio in teaching
- Sound in mobile applications
- New developments for audio broadcasting, podcasting and audiobooks
- Future uses of sound
Important dates:
Deadline for abstract submission - June 22
Notification of acceptances - July 8
Final paper submission - August 24
Deadline for registration - September 7
Conference - September 27-28
For more information, please visit the conference website
http://www.audiomostly.com/ or contact us at info(a)audiomostly.com
Furthermore, we plan to devote a special paper session to the area of
children's media, an area of special interest to our region here in
Thuringia (see
http://invest-in-thueringen.org/fileadmin/www/pdfs/EN/publications/flyer-me…).
Submissions dedicated to questions of audio interaction in media
applications for children or adolescents, such as audio & learning,
music education or games for children, are therefore particularly
welcome.
Hello again,
Today is a break in the Debian conference action here in Edinburgh,
Scotland. Talks will resume tomorrow and continue through Saturday.
The streams are mirrored by a network managed with geodns. Use the URL
below and you will be redirected to an appropriate mirror:
http://streams.video.debconf.org:8000/
In case you've missed any of the talks from the past 4 days, the archive
is coming online as checking of recorded files and transcoding proceeds:
http://meetings-archive.debian.net/pub/debian-meetings/2007/debconf7/
Note that these encodings have not all been checked. Since we have the
raw DV stored to disk we can go back and rework problem files to some
degree. If we have to we also have tape from the main camera in each
room (not the mixed video) as backup. As you can perhaps imagine, we
have a tremendous amount of footage to deal with, what with having as
many as 4 tracks running at once. If you have any comments or
suggestions on particular files, particularly if there seem to be
technical issues with the encoding, please(!) let us know about it on
the wiki. There's a good chance we will be able to improve the
situation.
http://wiki.debconf.org/wiki/DebConf7/videoteam/RecordingsRemarks
-Eric Rz.
DebConf7 video-team
Hello all,
I have written a jackified application for a customer. The main
application plays a couple of audio files (WAV) with some effects and
filters. Everything works fine so far.
From time to time, while loading some more files, JACK disconnects my
application. The loading process runs in a separate thread; I use the
thread implementation from wxWidgets, which uses pthreads.
Neither my app nor JACK crashed; both keep running, but my app tells me
"zombified - calling shutdown handler"
The output of jack is:
--------------- snip
bash-3.00# /usr/local/bin/jackd --realtime -d alsa
jackd 0.100.0
Copyright 2001-2005 Paul Davis and others.
jackd comes with ABSOLUTELY NO WARRANTY
This is free software, and you are welcome to redistribute it
under certain conditions; see the file COPYING for details
JACK compiled with System V SHM support.
loading driver ..
creating alsa driver ... hw:0|hw:0|1024|2|48000|0|0|nomon|swmeter|-|32bit
control device hw:0
configuring for 48000Hz, period = 1024 frames, buffer = 2 periods
nperiods = 2 for capture
nperiods = 2 for playback
**** alsa_pcm: xrun of at least 0.662 msecs
**** alsa_pcm: xrun of at least 4.624 msecs
**** alsa_pcm: xrun of at least 4.950 msecs
**** alsa_pcm: xrun of at least 68.566 msecs
**** alsa_pcm: xrun of at least 48.265 msecs
**** alsa_pcm: xrun of at least 4.413 msecs
**** alsa_pcm: xrun of at least 6.883 msecs
**** alsa_pcm: xrun of at least 126.050 msecs
**** alsa_pcm: xrun of at least 62.059 msecs
**** alsa_pcm: xrun of at least 51.514 msecs
**** alsa_pcm: xrun of at least 12.643 msecs
subgraph starting at soundroom timed out (subgraph_wait_fd=9, status = 0,
state = Running)
----------- snip
The xruns only appear while loading sounds.
The kernel version is
Linux soundroom 2.6.15.6 #7 SMP PREEMPT Fri Jun 16 22:18:26 GMT 2006 i686
unknown unknown GNU/Linux
more /proc/asound/cards
0 [Gina3G ]: Echo_Echo3G - Gina3G
Gina3G rev.0 (DSP56361) at 0xea000000 irq 50
1 [CK804 ]: NFORCE - NVidia CK804
NVidia CK804 with ALC850 at 0xea105000, irq 217
and I use the Gina3G.
And now the questions:
- Is there any way to get more information out of jack?
- Is there any way to keep the app connected, or a hint on how to avoid
the disconnection?
- A few days ago I tested jack version 0.103, but jackd used 100% CPU
without any app connected; just starting qjackctl showed that.
Thanks very much for any hints. c~
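(A note on the loading-thread xruns described above: a common pattern is to keep all disk I/O out of the realtime path by having the loader thread push samples into a lock-free ring buffer that the process callback only reads; JACK itself ships `jack_ringbuffer` for exactly this. Below is a minimal standalone sketch of such a single-producer/single-consumer buffer, independent of the JACK API; all names are illustrative, not from any particular library.)

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Minimal single-producer/single-consumer ring buffer for float samples.
 * The loader thread (producer) writes, the realtime callback (consumer)
 * reads; neither side blocks or takes a lock. Positions are monotonically
 * increasing counters, masked into the buffer on access. */
typedef struct {
    float *buf;
    size_t size;              /* capacity in samples, must be a power of two */
    atomic_size_t write_pos;  /* advanced only by the producer */
    atomic_size_t read_pos;   /* advanced only by the consumer */
} ringbuf;

static int rb_init(ringbuf *rb, size_t size) {
    if (size == 0 || (size & (size - 1)) != 0) return -1;  /* need power of two */
    rb->buf = malloc(size * sizeof(float));
    if (!rb->buf) return -1;
    rb->size = size;
    atomic_init(&rb->write_pos, 0);
    atomic_init(&rb->read_pos, 0);
    return 0;
}

/* Producer side: copy up to n samples in; returns how many actually fit. */
static size_t rb_write(ringbuf *rb, const float *src, size_t n) {
    size_t w = atomic_load_explicit(&rb->write_pos, memory_order_relaxed);
    size_t r = atomic_load_explicit(&rb->read_pos, memory_order_acquire);
    size_t free_space = rb->size - (w - r);
    if (n > free_space) n = free_space;
    for (size_t i = 0; i < n; i++)
        rb->buf[(w + i) & (rb->size - 1)] = src[i];
    atomic_store_explicit(&rb->write_pos, w + n, memory_order_release);
    return n;
}

/* Consumer side (realtime-safe): copy up to n samples out; returns count. */
static size_t rb_read(ringbuf *rb, float *dst, size_t n) {
    size_t r = atomic_load_explicit(&rb->read_pos, memory_order_relaxed);
    size_t w = atomic_load_explicit(&rb->write_pos, memory_order_acquire);
    size_t avail = w - r;
    if (n > avail) n = avail;
    for (size_t i = 0; i < n; i++)
        dst[i] = rb->buf[(r + i) & (rb->size - 1)];
    atomic_store_explicit(&rb->read_pos, r + n, memory_order_release);
    return n;
}
```

With this arrangement the process callback never waits on the loader, so slow disk reads can at worst cause a dropout in the newly loaded file rather than an xrun and zombification.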
Hello,
> Second, I noticed there's a click
> between segments when I split them using < X >.
Strange, are you sure you're using 0.40.0? :D
It would be very helpful if you could tell me how to reproduce it and/or
join us in #traverso / the traverso-devel ML to look into this.
> Also, when I used < E > for
> an external command (SoX - can this be changed?), the clip then becomes
> silent. I typed "reverse" into the box. It did reverse the clip, but like I
> said - silence.
The external processing was a feature request from a user, and given the
way Traverso works, I said that creating a Command class to implement
the functionality would be easy.
That turned out to be right, but unfortunately I didn't get much feedback.
External processing as such is still a somewhat experimental feature,
and the command is currently hardcoded to sox.
If more people would like to see this functionality expanded, please
drop a note describing exactly what you want to accomplish, how, with
which parameters, and so on.
> :) Anyways, then I split the reversed clip and could then hear it. Also, no
We did have these problems long ago; it's weird that you still see them.
It could be related to the external processing not being fully tested...
> JACK transport support?
Please add a wish for this in the bug tracker [1] !
Regards,
Remon
[1] https://savannah.nongnu.org/projects/traverso/
The C* Audio Plugin Suite reincarnates as version 0.4.0.
CAPS is a collection of LADSPA plugins enjoying worldwide favour for
its instrument amplifier emulation. In addition, it provides a
sizeable assortment of acclaimed audio DSP units, sound generators and
effects. CAPS is distributed as open source under the terms of the
GNU General Public License.
http://quitte.de/dsp/caps.html
http://quitte.de/dsp/caps_0.4.0.tar.gz
This release sees the addition of the fine work of David Yeh at CCRMA
on the emulation of classic tube amplifier tone stack circuits (more
here: http://ccrma.stanford.edu/~dtyeh/tonestack/ ).
Three new plugins build on the tone stack: ToneStack and
ToneStackLT offer isolated implementations, while the new AmpVTS unit
combines a refined AmpV and a ToneStack circuit. I'm very grateful to
David for his brilliant contribution, and I'm quite positive that
those who actively use the CAPS Amps will share this sentiment.
Also primarily aimed at the discerning guitarist is the new AutoWah
plugin, offering a versatile rendition of this classic audio effect.
The last new plugin is Eq2x2, a two-channel 10-band graphic equalizer
modeled after an analogue design.
-*-
Beyond the new plugins, this release also brings tons of major
improvements "under the hood". All plugins have been hardened to work
glitch-free in the face of invalid control input. Much effort has
also been spent on further elimination of denormal numbers everywhere.
Parameter smoothing (which is performed in order to prevent zipper
noise) has been refined so that it never occurs at the start of
processing.
The build process can now be configured to take advantage of the SSE
and SSE3 extensions on the i686 platform, providing slight performance
gains and automatic denormal protection. The HTML documentation has
been thoroughly updated to reflect all changes. Finally, thanks to
Paul Winkler CAPS now comes with an improved RDF file containing
plugin categorisation.
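As background on the denormal elimination mentioned above (the SSE flush-to-zero mode is the hardware route): one widely used portable trick in audio DSP, sketched here and not necessarily the one CAPS employs, is to add a tiny constant offset inside recursive filters so that decaying signals settle far above the subnormal range, where many x86 FPUs slow down drastically:

```c
#include <math.h>

/* Tiny offset: far below audibility, far above the float subnormal
 * threshold (~1.18e-38), so filter state always stays a normal number. */
#define ANTI_DENORMAL 1e-20f

/* One-pole lowpass with feedback; a typical place denormals appear as
 * the state decays toward zero after the input goes silent. */
static float lowpass_tick(float *state, float in, float coeff) {
    *state = in + coeff * (*state - in) + ANTI_DENORMAL;
    return *state;
}
```

With silent input the state converges to ANTI_DENORMAL / (1 - coeff) instead of decaying through the subnormal range to zero; a common variant alternates the sign of the offset to avoid any DC build-up.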
For the near-complete list of changes please see
http://quitte.de/dsp/caps.html#Changelog
-*-
Don't hesitate to let me know what you think.
Enjoy, and thank you for using CAPS,
Tim
Fellow LADSPA developers,
I've been investigating curious bug reports concerning the use of some
of the CAPS plugins in Ardour.
Most notably, the Eq plugin, a 10-band graphic eq, very often consumes
an inordinate amount of CPU cycles, and more often than not produces a
silent output signal (or, even worse, one containing only samples of
inf value).
[I'm talking about Ardour/GTK 0.99.3 (built using 1.4.1 with libardour
0.908.2 and GCC version 4.1.2 20061007 (prerelease) (Debian 4.1.1-16)
and CAPS up to and including 0.3.0]
Even after eliminating each and every possible cause of denormal
numbers in the Eq plugin, the problem persisted. It turns out that it
is quite simply caused by invalid control parameter values.
In order to prevent zipper noise, the Eq unit (like most of the CAPS
plugins) smoothens control parameter input. This is done by sweeping
internal parameters over the duration of an audio block in run() or
run_adding().
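In code, such block-wise smoothing might look like the following minimal sketch (names hypothetical, not CAPS source): the plugin keeps an internal value and sweeps it linearly toward the new port value across one block.

```c
/* Sketch of zipper-noise avoidance: instead of jumping to the new
 * control value, sweep the internal gain linearly across the block. */
static void run_gain(const float *in, float *out, unsigned long n,
                     float *current_gain, float target_gain) {
    float step = (target_gain - *current_gain) / (float)n;
    float g = *current_gain;
    for (unsigned long i = 0; i < n; i++) {
        out[i] = in[i] * g;       /* gain changes a little every sample */
        g += step;
    }
    *current_gain = target_gain;  /* exact at block end, no drift */
}
```

This is exactly why the value of *current_gain at the first run() after activate() matters: if it starts from garbage rather than the connected port value, the first block performs an audible (or numerically catastrophic) sweep.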
In order to prevent an involuntary parameter sweep during the first
call to the plugin's run() after it has been activate()d, the plugin
has to evaluate the current set of control input parameters in
activate(). To prevent problems with unconnected ports, in CAPS an
unconnected port points to the lower bound of that port's range.
Thus, if a host calls activate() before doing connect_port() on all
control inputs, an involuntary parameter sweep is very likely, but at
least it's not starting from garbage parameters.
Now, Ardour will first call connect_port() and then activate(), just
as you'd expect from a well-mannered host (and as is recommended by
ladspa.h).
However, when Ardour calls activate(), the control inputs point at
uninitialized memory (i.e. garbage data). A plugin with 10 control
inputs like Eq thus stands a fair chance of running into a value of
nan or inf, causing all sorts of computational mayhem including the
problems described above.
So what to do about this? ladspa.h says nothing about the value of
control inputs at plugin activation time. What it does say is that
plugins should be able to operate on invalid input data without a
glitch, and I'm working on making CAPS resolve the issue gracefully.
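A plugin-side defence along those lines (a sketch of the general technique, not CAPS code) is to clamp every control value into its port range before use, mapping nan and inf to a safe default:

```c
#include <math.h>

/* Sanitize one control input: out-of-range values are clamped to the
 * port bounds; nan and inf fall back to a chosen default value. */
static float sanitize_control(float value, float lower, float upper,
                              float fallback) {
    if (!isfinite(value)) return fallback;  /* catches nan and +-inf */
    if (value < lower) return lower;
    if (value > upper) return upper;
    return value;
}
```

This keeps the DSP numerically safe, but note the limitation argued above: a clamped garbage value is still not the value the user intended, so the first parameter sweep remains wrong; only the host can fix that.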
But still, if control inputs are unconnected or point to a value out
of bounds, nan or inf at activation time, a plugin cannot perform
correct parameter smoothing. Which would be a loss, no?
So, dear LADSPA host authors, estimated Ardour developers, I'd like to
ask you to make sure your implementation provides plugins with valid
and useful control input data at activation time, preferably the
values intended to be used at run() time.
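On the host side the request amounts to this: initialize the control storage, connect every port, and only then activate. A sketch with toy stand-in types (not actual ladspa.h declarations, though the real API is structured similarly, with connect_port and activate as per-instance calls):

```c
#define NPORTS 2

/* Toy stand-in for the relevant parts of a LADSPA-style plugin instance. */
typedef struct {
    float *port[NPORTS];
    int saw_valid_inputs_at_activate;
} Plugin;

static void connect_port(Plugin *p, int idx, float *location) {
    p->port[idx] = location;
}

static void activate(Plugin *p) {
    /* The plugin reads its control inputs here to seed smoothing;
     * record whether every port was connected and non-nan. */
    p->saw_valid_inputs_at_activate = 1;
    for (int i = 0; i < NPORTS; i++)
        if (!p->port[i] || *p->port[i] != *p->port[i])  /* null or nan */
            p->saw_valid_inputs_at_activate = 0;
}
```

A well-mannered host would then do: fill a controls array with the values intended for run(), call connect_port() for each index, and call activate() last, so the plugin's smoothing starts from real settings rather than garbage.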
Cheers, Tim
PS: It'd be nice to see a recommendation to this effect in ladspa.h,
too.