Mammut performs the FFT of your sound in one single gigantic analysis (no
windowing). These spectral data, in which the development over time is
incorporated in mysterious ways, may then be transformed by various
algorithms prior to resynthesis. An interesting aspect of Mammut is its
completely non-intuitive approach to sound transformation.
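Roughly speaking, the processing chain looks like the sketch below. This is
an illustration only, not Mammut's actual code: FFTW3 stands in for whatever
FFT routine is really used, and "zero every second bin" is just a stand-in
for a transformation algorithm.

#include <fftw3.h>

void mangle_whole_sound(double *samples, int n)
{
    fftw_complex *spec = fftw_malloc(sizeof(fftw_complex) * (n / 2 + 1));
    fftw_plan fwd = fftw_plan_dft_r2c_1d(n, samples, spec, FFTW_ESTIMATE);
    fftw_plan inv = fftw_plan_dft_c2r_1d(n, spec, samples, FFTW_ESTIMATE);

    fftw_execute(fwd);                       /* the one gigantic analysis */

    for (int i = 0; i < n / 2 + 1; i += 2)   /* stand-in transformation: */
        spec[i][0] = spec[i][1] = 0.0;       /* zero every second bin */

    fftw_execute(inv);                       /* resynthesis; c2r is unnormalised */
    for (int i = 0; i < n; i++)
        samples[i] /= (double)n;             /* so scale back by 1/n */

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(inv);
    fftw_free(spec);
}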
0.15 -> 0.16
-Tiny source cleanup.
-Upgraded the included sndlib i386 binary to the latest version. This
 one includes JACK support.
http://www.notam02.no/arkiv/src/
--
Hi,
I guess this will be of interest to some of you...
An even newer version has shown up on linux-kernel.
/RogerL
From: Jens Axboe
--------------------------------
Hi,
I've implemented IO nice levels in the CFQ io scheduler. It works as
follows.
A process has an assigned io nice level, anywhere from 0 to 20. Both of
these end values are "special" - 0 means the process is only allowed to
do io if the disk is idle, and 20 means the process io is considered
realtime. Realtime IO always gets first access to the disk. Values from
1 to 19 assign 5-95% of disk bandwidth to that process. Any io class is
allowed to use all of the disk bandwidth in the absence of higher priority io.
Idle and realtime IO settings work as expected, but not much tuning has
gone into making sure that the individual levels in-between work 100% as
expected. It should be good enough for some testing at least, even if it
has some holes.
About the patch: stuff like this really needs some resource management
abstraction like CKRM. Right now we just look at the tgid of the
process. I've added two syscalls for setting and getting io priority.
Don't consider this final or anything, it's just easy for testing. Patch
has been tested on x86 and ppc, syscalls are also added for x86_64.
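If you want to poke at the syscalls directly instead of using the tool, a
wrapper looks roughly like this -- sketch only, the name and number below
are placeholders, fill in whatever the patch actually assigns on your arch:

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/types.h>

#ifndef __NR_ioprio_set
#define __NR_ioprio_set -1      /* placeholder, take the real number from the patch */
#endif

/* level: 0 = idle-only io, 1-19 = 5-95% of bandwidth, 20 = realtime io */
static int set_io_nice(pid_t pid, int level)
{
        return syscall(__NR_ioprio_set, pid, level);
}

/* e.g. set_io_nice(getpid(), 20) would ask for realtime io for this process */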
I'm attaching the simple ionice tool. It's used as follows:
# ionice -n20 bash
starts a bash shell with realtime io. Beware that io level is inherited
on fork, so any program you start from this shell will also run with
realtime io.
# ionice -n0 dbench 32
runs a dbench thrasher, but only when the disk is idle.
Pretty straightforward :-)
For really good results, you probably also want to set the cpu nice level.
Needless to say, a realtime io process can only submit io when it gets
scheduled.
Default IO priority for a new process is 10.
Patch is against bk-current.
--
Roger Larsson
Skellefteå
Sweden
Hi,
this is a plea for help. I want to find out which factor makes jackd
freeze on a client open, client close and client open again. So I would
ask all of you using jackd with ardour or jack-rack to test for that
behaviour and report the results, along with system information, back to
this list. Maybe there is some common factor.
I can observe this behaviour with both jack-rack and ardour. The
procedure to make jack freeze is pretty much the same in both cases:
jack-rack:
start jackd
start jack-rack
close jack-rack
start jack-rack again
ardour:
start jackd
start ardour
open a session in ardour
close the session
open it again
My System info:
dist: debian unstable [updated pretty recently]
kernel: vanilla 2.4.22 kernel patched with LL, capabilities and Preempt
alsa: 0.9.8
jackd: 0.89.12
I use: jackstart -R -d alsa -p 256 -n 2
ardour: Ardour/GTK 0.413.0 running with libardour 0.698.2
jack-rack: 1.4.3
Soundcard: terratec dmx xfire 1024 [snd-cs46xx]
Regards, Florian Schmidt
--
music: http://www.soundclick.com/bands/9/florianschmidt.htm
I'm not entirely sure whether this is the right place to be posting this or
not. I've been having some strange issues with the OSS emulation in the 2.6
kernel. With each new kernel release, programs which use OSS have taken
progressively longer to start up. Audacity now takes about 20 seconds. There
is also a 5-10 second delay from the time I hit "play" in Audacity to the
time when the playback actually starts. Hydrogen takes a similarly long time
to start up if I use OSS output. Finally, if I try to use the OSS MIDI
driver for ZynAddSubFX, the entire system locks up - no Ctrl-Alt-Backspace,
no Ctrl-Alt-Del, no Magic SysRq.
I'm running the 2.6.0-test9 kernel on top of a Slackware 9.0 installation with
an M-Audio Audiophile 2496 card. I have everything built into the kernel.
The machine is an Athlon XP 1800+ with 256 MB of memory.
The output of dmesg after running an OSS application consists of repeated
entries which all begin as follows:
Debug: sleeping function called from invalid context at include/asm/semaphore.h:119
in_atomic():1, irqs_disabled():0
Call Trace:
[<c011ad4b>] __might_sleep+0xab/0xe0
[<c03727eb>] ap_cs8427_sendbytes+0x3b/0xd0
[<c0367952>] snd_i2c_sendbytes+0x22/0x30
[<c03667b6>] snd_cs8427_reg_write+0x36/0x80
[<c0366e26>] snd_cs8427_reset+0x56/0x240
[<c036763a>] snd_cs8427_iec958_pcm+0xea/0x170
[<c0370b23>] snd_ice1712_playback_pro_hw_params+0x73/0x80
[<c0343507>] snd_pcm_hw_params+0x267/0x2a0
[<c03435d8>] snd_pcm_hw_params_user+0x98/0x100
(followed by various functions, most of which begin with "snd_")
I've posted the complete output of dmesg at:
http://www.comevisit.com/NorthernSunrise/oss26/dmesg
My kernel .config is at:
http://www.comevisit.com/NorthernSunrise/oss26/config
If there's anything I can do to help track this down, or if there is a better
list for me to post this on, please let me know.
|)
|)enji
Hi,
This is the last reminder for anyone interested in presenting at the
audio mini-conference at Linux.Conf.Au, Monday January 12 2004.
Submissions are due at the end of this week (Fri 31 Oct), details at:
http://www.metadecks.org/events/lca2004/
It's turning out to be an excellent day of LAD hacking and jamming,
with a full day of presentations lined up and an evening's playing at
a local bar in the planning. The full schedule will be released next
week.
A mailing list has been set up for general discussions and info about
the mini-conference, info for subscribing is at:
http://lists.linux.org.au/listinfo/lca-audioconf
To register for this mini-conference you MUST register for the main
Linux.Conf.Au conference. Unfortunately audio-miniconf-only
registrations are not available, sorry. Nevertheless you will not
be disappointed: the main conference has four parallel tracks of
tutorials and paper presentations running from Wednesday to
Saturday, covering everything from programming VR applications
to writing user-level device drivers; check it out and drool:
http://lca2004.linux.org.au/programme.cgi
Lastly ... if Tim Mayberry or Nick Mainsbridge could please drop
me an email, or if anyone could give me current contact details for
either of them, that would be much appreciated :)
cheers,
Conrad.
Sorry, forgot to ask a couple of things.
Will Jack or other similar APIs allow passing of a DC signal? If so, do we
have to worry about speakers blowing up, or do soundcards protect against DC?
If not, can anyone tell me what the most CPU-efficient way of
modulating/demodulating a control signal might be? It occurred to me that one
could just use a very high frequency signal and have its amplitude represent
the DC control signal, requiring the receiver to use some sort of RMS
amplitude tracking. This could also allow more than one control signal to be
passed in one audio stream: several sine components could be summed and then
filtered apart again on the receiving end, if the CPU cost of the filtering
mattered less than saving audio channels.
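To make that a bit more concrete, here is the sort of encode/decode pair I
have in mind -- a rough, untested C sketch, with an arbitrary carrier
frequency and window size:

#include <math.h>

#define SR      48000.0   /* sample rate, assumed */
#define CARRIER 12000.0   /* carrier frequency in Hz, arbitrary */
#define WIN     256       /* RMS window in samples, arbitrary */

/* sender: one control value per audio sample -> modulated audio */
static void encode(const float *control, float *audio, int n)
{
    static double phase = 0.0;
    for (int i = 0; i < n; i++) {
        audio[i] = control[i] * (float)sin(phase);
        phase += 2.0 * M_PI * CARRIER / SR;
    }
}

/* receiver: audio -> one recovered control value per WIN samples
 * (n is assumed to be a multiple of WIN) */
static void decode(const float *audio, float *control, int n)
{
    for (int b = 0; b < n / WIN; b++) {
        double sum = 0.0;
        for (int i = 0; i < WIN; i++) {
            double s = audio[b * WIN + i];
            sum += s * s;
        }
        /* the RMS of a sine of amplitude A is A / sqrt(2), so undo that */
        control[b] = (float)(sqrt(sum / WIN) * sqrt(2.0));
    }
}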
Any thoughts, feedback, or redirection appreciated.
Iain
Sorry if this is the wrong place, please tell me where to ask if it is. I'm
wondering whether it will be possible (or if anyone is doing something
similar) to use Jack to pass control signals between audio apps. I think it
would be extremely useful to be able to send the equivalent of PD wires or
Csound krate variables back and forth between different apps as if they were
Control Voltages, allowing the functionality of outboard CV modulars with
tools from more than one family. I am also interested in the possibility of
extending the same to outboard hardware, i.e. having control signals generated
and transformed between apps and then sent out as actual CV signals to
outboard modular components. I've been working on a real-time step sequencer
in Csound that allows sophisticated real-time sequencing of control signals
and would like to know how I can perhaps integrate my work with others. I am
really just a Csound programmer, and not a C developer (yet), so please
excuse me if this is noise on the list. ;)
Also, does or will Jack allow signals to have a lower bit depth or sample
rate when full audio resolution is not necessary, or will it always be full
audio? I'm just thinking that maybe it would be good to be able to pass
control signals with less bandwidth, but of course that would introduce other
complications. Is there any limit to the number of Jackable ins and outs? I
guess to make modular synthesis between various apps one would start using a
*LOT* of them.
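For example, I imagine a minimal "control send" client could look something
like the sketch below. It is untested and just uses an ordinary JACK audio
output port -- there is no special control-port API here, the "control
voltage" simply rides in a normal audio stream:

#include <stdio.h>
#include <unistd.h>
#include <jack/jack.h>

static jack_port_t *cv_out;
static volatile float cv_value = 0.5f;    /* the control voltage to publish */

static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *buf = jack_port_get_buffer(cv_out, nframes);
    for (jack_nframes_t i = 0; i < nframes; i++)
        buf[i] = cv_value;                /* constant over the period, i.e. a krate signal */
    (void)arg;
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_new("cv_send");
    if (client == NULL) {
        fprintf(stderr, "cannot connect to jackd\n");
        return 1;
    }
    cv_out = jack_port_register(client, "cv_out", JACK_DEFAULT_AUDIO_TYPE,
                                JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);

    sleep(60);                            /* a real client would keep updating cv_value here */

    jack_client_close(client);
    return 0;
}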
Thanks,
Iain Duncan
Crossposting from jackit-devel to here...
---------- Forwarded message ----------
From: Kai Vehmanen <kai.vehmanen(a)wakkanet.fi>
Subject: [Jackit-devel] linux-kernel comments on POSIX capabilities support
Not too encouraging.. :(
http://kt.zork.net/kernel-traffic/kt20031101_239.html#3
Quoting Albert Cahalan:
""The authors of our code seem to have given up and moved on. Nobody
cleaned up the mess. Is it any wonder the POSIX draft didn't ever make it
beyond the draft state?""
--
http://www.eca.cx
Audio software for Linux!
>You might want to take a look at some of the graphical sound systems,
>then. For example Pd (www.pure-data.org) is very easy to set up and
>yet very powerful. You could use some of the Pd extensions to create
>images of sound or the other way around. GEM is very easy to get
>started with, if you have an OpenGL enabled card. For reading from a
>TV card (or webcam) PDP is nice.
I will look at your suggestions. Thanks very much!
>This is directly programming the soundcard hardware. Do you really
>want to do that? In my opinion this could be too difficult for your
>pupils, if they don't have a certain background with that. But I may
>be wrong. Anyway, using something like Pd will still be a good
>experience, because you get to see the whole picture of sound
>generation better. The Pd author Miller S. Puckette also uses Pd to
>teach sound synthesis and such. See his upcoming book at
>http://www.crca.ucsd.edu/~msp/techniques.htm
Hmmm... programming the soundcard hardware directly??!!! :(
I will test your suggestions, but... I just want one program that can show an image when a pupil plays a C note, or G, etc., and maybe control the intensity of a color with the gain of the sound. Something that can select colors, pictures and texts using pure notes played on a guitar, harmonica or other instrument. The point is not to produce images but to *select* images or colors (images too) or text.
Does that mean programming the soundcard hardware directly? Doesn't some library exist that can do this job (send my program one value representing the note being played)? I don't want MIDI sound, it is horrible to hear :)
I know, I am asking for a lot. But for our project it is very interesting to give the pupils the possibility to choose the relation note/color/picture or note/word.
Is it very hard? For me, everything in this audio world is hard because I don't know much about hardware programming (I'm just a database specialist), but if it's possible just by using ready-made values sent from some library, everything is OK.
>I'd be glad to hear about your experiences.
:) You will hear my screams!!
>I think, you will need some mathematics, but nothing too complicated.
>A bit of trigonometry is very useful in computer sound but you can get
>far with knowing just muliplication and summing. At least in Pd, that
>is.
>ciao
So... very, very many thanks for your attention.
[]'s
Alexander