> I'm hoping that you're thinking of a realtime display, in which the
> peaks roll off to create a true waterfall effect.
Baudline (http://www.baudline.com) is a fantastic viewer that does an FFT
cascade. I've used it for a couple of years, and it is great for figuring out
how different sounds "work", and it has an oscilloscope-type display as well.
Cheers,
Jason Downer
Hello.
I finally started making my pet music project and realized I need a
drum synth to make some cool sounds. psindustrializer is good, but I also
need some TR-909-style sounds. I remember from my old windoze days I
used a nice piece of software called Stomper. Does anybody know of any
software for linux with comparable capabilities? Or do we need to write
one?
Stomper does not work under wine :(
Thanks.
Hello.
I had a couple of articles on drum synths. Check
ftp://ftp.funet.fi/pub/sci/audio/devel/lad/drumsynth/
I built the circuit in a00*.jpg at the time when this article
was fresh. The article b00*.jpg mentions an earlier article.
I will check that out at the library.
Hmm.. I coded a drum synth for the Commodore VIC-20 at the time.
The VIC provided an audio chip with three oscillators, noise,
and a common volume, if I remember correctly. What I did was to
modulate the oscillator pitch and volume parameters with fast and accurate
(compared to BASIC) assembly code. The drum sounds were assigned to
the keys. This was about 1984, inspired by Yamaha's digital RX drum
machines, not by analog drums.
Juhana
Hi Steve, thanks for the reply.
I will definitely look into using DSSI; it looks like it
could be good once it's as well supported as LADSPA is (I'd
never even heard of it before your post, although
that's probably just me). Is it intended as an
eventual LADSPA replacement? I never really saw the
need to divide plugins into 'instruments' and
'effects', and it seems like DSSI can do both.
Stefan Turner
> It would be more practical to do it as a DSSI plugin; LADSPA has no way
> to indicate that you want to load files during runtime, and no UI.
>
> In DSSI you can load the impulse in the "UI" process, perform the FFT on
> it and send it to the DSP code with configure(). Once there, the DSP code
> can then overlap-add/save on it.
>
> - Steve
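For the curious, here's roughly what that could look like on the plugin side. This is only a sketch of mine, not Steve's code: the key name "impulsefile" and the load_impulse() helper are made up, the header location may differ, and the actual FFT/overlap-add work is elided.

// Sketch: a configure() handler as it would be wired into a DSSI_Descriptor's
// configure member. The UI process sends a key/value pair; the DSP side loads
// and FFTs the impulse outside the realtime run() callback.
#include <ladspa.h>   // LADSPA_Handle; dssi.h declares the matching configure() member
#include <cstring>
#include <cstdlib>

struct Convolver {
    int load_impulse(const char *path) {
        // hypothetical: open `path` in a non-RT context, FFT it, and hand the
        // spectra over to run() (e.g. via a lock-free pointer swap)
        (void)path;
        return 0;
    }
};

static char *conv_configure(LADSPA_Handle instance, const char *key, const char *value)
{
    Convolver *c = static_cast<Convolver *>(instance);
    if (std::strcmp(key, "impulsefile") == 0) {       // key chosen by the UI, not standardised
        if (c->load_impulse(value) == 0)
            return NULL;                              // NULL means "accepted"
        return strdup("could not load impulse");      // error string for the host to free
    }
    return strdup("unrecognised configure key");
}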
Hi!
I noticed that the Aureal drivers are now in ALSA. I used to be an early
adopter (with bad experiences...)
Anybody around here care to share some modern success stories?
--
(
)
c[] // Jens M Andreasen
Sorry if multiple copies of this appear. The spam filter doesn't like my
choice of titles. I've tried a few variations so far.
I'm looking to develop a music editor/sequencer somewhat in the vein of
Cakewalk/Rosegarden, but looking more towards the future of MIDI and audio
capabilities. I've been thinking about this for a long time, and I think I
have enough of a plan now to make a go at it.
This is a rather long post, so I've divided it into four parts: Why Not
Rosegarden?, Project Overview, Design Goals, and End Notes.
Why Not Rosegarden?
Why not just join the Rosegarden development team? While Rosegarden has a lot
of promise, its design goal is different from what I'm thinking of. To put
it crudely, Rosegarden's goal is to be a better Cakewalk. This isn't meant to
be disrespectful to the Rosegarden developers. I like Cakewalk. (If you
don't, substitute Cubase, or whatever professional tool of that genre.)
Music is a language. While music itself is much more complex and vague (like
natural language) than a programming language such as C, I will use the
analogy of a programming editor. Rosegarden (and all similar tools) is like a
C editor. When it's mature, it will be a very good C editor. In addition, the
modularity and generalized approach that Rosegarden uses will make it
possible for it to also be a good C++ editor. I'm thinking of something more
like a multi-editor that will work for Perl or Lisp. It will have its own
limitations, so I'm not trying to create the be-all and end-all of music editors.
But there are some fundamental things that Rosegarden will never be able to
accomplish. (And most users won't have any reason to want an editor that will
accomplish these things.)
The first involves the way meter and tempo are handled. There are several
pieces of 20th century music that incorporate 2 or more voices playing
simultaneously in 2 or more meters. An example that chokes every editor I
know of (though I'm told LilyPond can do it with difficulty, but it has no
sequencing capability and isn't WYSIWYG) is Bartok's "Music for Strings,
Percussion, and Celesta". I'm not saying that you can't create a Rosegarden
file that will play this piece. What I am saying is that the notation will be
extremely ugly. The issue is that not all of the voices are playing
simultaneously in the same meter. To notate this all in one meter means
that some of the voices will look extremely complicated, with myriads of ties
and accent marks.
Another interesting thing is to have 2 voices playing at different tempos.
No examples come to mind, but I can think of some cool effects that could be
achieved by having one voice race ahead of another. Again, Rosegarden could
play such a piece, but the notation would be ugly.
A third thing revolves around scales and tunings. Rosegarden has some plans
for different tunings and perhaps support for quarter tones. And I think,
again, that a Rosegarden tuning would be applied to an entire composition,
not a single instrument. But I'm thinking of something more general. Suppose
each track had an element called "tuning". Suppose also that each note is
stored as an integer in a note event (as it is in Rosegarden). The tuning
element contains a mapping between the note number given in the event and
what is actually played. Microtones and temperaments could be stored in the
tunings file as a MIDI note number plus a pitch bend. (Or we could store
frequencies, and some function converts frequencies to MIDI note numbers plus
pitch bends.) More novel things (such as octaves that aren't quite perfect
frequency multiples) could also be accommodated. The tunings file could also
be used to define scales that include added scale degrees. (e.g. for a
Pythagorean tuning, we have to include extra values, so we can notate, say,
an F## instead of a G, as these tones may not be equivalent in a non-equal
temperament.)
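To make the "frequency to MIDI note plus pitch bend" conversion concrete, here is a small sketch of my own (not part of any existing design), assuming A4 = 440 Hz and a synth pitch-bend range of +/-2 semitones:

#include <cmath>
#include <cstdio>

// Convert a frequency in Hz to the nearest MIDI note plus a 14-bit pitch-bend
// value (centre 8192), assuming a +/-2 semitone bend range on the synth.
void freq_to_midi(double hz, int &note, int &bend)
{
    double exact = 69.0 + 12.0 * std::log2(hz / 440.0);  // fractional note number
    note = (int)std::lround(exact);
    double frac = exact - note;                           // -0.5 .. +0.5 semitones off
    bend = 8192 + (int)std::lround(frac * 4096.0);        // 4096 bend units per semitone at +/-2 range
}

int main()
{
    int note, bend;
    freq_to_midi(442.0, note, bend);   // a slightly sharp A4
    std::printf("note %d bend %d\n", note, bend);
}

A Pythagorean or quarter-tone tuning file could then simply be a table of such (note, bend) pairs, one per scale degree.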
Rosegarden is trying to be fairly general about instrument definitions, which
is good. I'm thinking of a somewhat different approach that looks to the future
of MIDI, in which we want to achieve more realism in the performance. Each
Track has this abstract thing called an Instrument. Instruments may be atomic
(a string) or composite (a guitar, which is composed of several atomic string
instruments). Let's suppose we use soundfonts. We could create a soundfont
with banks 1-6 for the strings of a guitar. Each single-string Instrument
gets mapped to one of the tone banks, and the guitar Instrument is composed
of these single-string Instruments.
In this way, it would be possible to achieve a playback in which an open D
chord sounds different than a 5th position D chord played a string lower,
just like on a real guitar. Instruments should have "modes" which can be
(and changed within the song). A guitar might have pickstyle (which
perhaps eventually gets mapped to tone banks 1-6) and fingerstyle (which
eventually gets mapped to tone banks 7-12). An organ could have modes for
various configurations of the stops. We could get very elaborate here and
have organs with multiple keyboards, etc., but I'm falling into a dreamy
state here. The idea is that the design should allow for this type of
functionality to be added.
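As a rough sketch of what the atomic/composite Instrument idea could look like as data (all names hypothetical, nothing here is settled):

#include <string>
#include <vector>
#include <map>

// Sketch: an atomic instrument maps straight to a soundfont tone bank; a
// composite instrument (e.g. a guitar) is built from atomic ones and can
// switch "modes" (pickstyle, fingerstyle, organ stop settings, ...).
struct AtomicInstrument {
    std::string name;   // e.g. "guitar string 1"
    int bank;           // soundfont tone bank it plays through
};

struct Instrument {
    std::string name;                               // e.g. "guitar"
    std::vector<AtomicInstrument> parts;            // empty for an atomic instrument
    std::map<std::string, std::vector<int>> modes;  // mode name -> bank per part
    std::string current_mode;
};

A note event on a guitar Track could then carry which string (part) it sounds on, so an open D chord and a 5th position D chord end up in different tone banks.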
So, how can we accomplish such a beast? Let's take a tour of what's in my
brain. No, no! Not over there! Don't open that door. That's the basement
where all the beastly details hide. Whatcha mean it's empty? Of course it's
empty. No, not that door either. That's the closet of confusion and
inconsistency. This way, this way. Here's the beautiful entry...
Project Overview
No pretty pictures, so you'll just have to imagine.
We begin with a main window. The document-view model is appropriate. We'll
have the typical views that you'd expect -- TrackView, ScoreView (StaffView),
MixerView, and some sort of EventView.
TrackView should look similar to that of Rosegarden, but we need to implement
it differently. The window needs to be some sort of split window, in which
each track gets its own pane. At the top should be a time ruler which can
display in seconds, SMPTE, or beats/measures of a track. It should default to
displaying beats/measures of track 1. Each track pane displays its clips
similar to Rosegarden/Cakewalk. But it should hold a TrackGroup instead of a
single Track (by default, each Track is in its own group). Additionally, it
should have a hideable ruler that displays time (default to beats/measures)
and tempo markings for that TrackGroup. This means that the TrackGroup data
structure needs to hold info about its meter and tempo. TrackGroups can be
expanded to show each track. They can also be hidden to reduce clutter.
TrackGroups can be nested (e.g. strings, violins, violin1). During playback, a
big vertical ruler bar should follow the sequencer, similar to the appearance
in Rosegarden.
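To pin down what a TrackGroup would have to carry for the per-group meter and tempo rulers described above, here is a minimal sketch (my illustration only, all names hypothetical):

#include <string>
#include <vector>

// Sketch: a nestable TrackGroup owning its own meter and tempo maps, so
// different groups can play in different meters or tempi.
struct MeterChange { long tick; int beats; int beat_unit; };  // e.g. 7/8 starting at a tick
struct TempoChange { long tick; double bpm; };

struct Track { std::string name; /* events, instrument, tuning, ... */ };

struct TrackGroup {
    std::string name;
    std::vector<MeterChange> meter;     // meter ruler for this group
    std::vector<TempoChange> tempo;     // tempo ruler for this group
    std::vector<Track> tracks;          // leaf tracks
    std::vector<TrackGroup> subgroups;  // nesting: strings -> violins -> violin 1
    bool expanded = true;               // UI state: shown expanded or collapsed
};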
ScoreView should display like a score with staff notation. Tracks can be
grouped, and it should be possible to hide groupings. Expanded track
groupings should show each track. When compressed, a single staff with
multiple voices (1 for each track) should be shown. In this way, 4-part
harmony, drums, etc. can be displayed as is most convenient to the user.
Tracks should be displayed with appropriate staves (single 5-line staff for
voice, optional tablature for stringed instruments, etc.) Non-MIDI tracks
such as wav audio should be displayed as a simple line or bar. We should
include typical musical symbols, dynamics, guitar tab symbols, lyrics,
section markers, and effects markers (things like echo, flange, etc.) Each
track should have an option to hide various things (e.g. hide the tablature,
lyrics, or effects markers).
Eventually, we should be able to print out sheet music.
The MixerView should be pretty typical, but again, with the ability to create
a nesting of sub-mixers. Each track should get its own effects rack.
Additionally, each sub-mixer (or track group) should get an effects rack.
This way, we can set effects for individual instruments. Then we can layer
another level of effects (say panning or delay) to a group. Finally, we can
add a layer for everybody (say reverb). As much as possible, we shouldn't
care whether we're dealing with wavs or MIDI. If we have a MIDI track, the
popup menu should only display MIDI effects. For aggregate groupings, only
effects that can be used on either MIDI or wave audio should be available. (Perhaps
we can take existing MIDI or audio only effects and put wrappers around them
that select which effect is actually used.) Supporting plug-in effects would
be very good. In the end, though, I think effects should be handled in the
same way as instruments. That is, we build complex effects out of simpler
components.
This leads me to instruments. Instruments and effects should be stored in
definition files. Then we can set up our instruments once and use them in
multiple compositions. Ditto for tunings.
Files should be stored as XML (probably compressed, as Rosegarden does).
Design Goals
1) Flexibility is paramount. A rigid design isn't going to work here. There
are too many unknowns and open-ended issues.
2) Modularity. Keep the pieces as independent as possible. Try to make
generic base classes that defer as much of the details as possible to derived
classes.
3) Begin with a generic framework. Determine how the system should operate.
Do we run the whole thing like an IDE for a programming language? How
separated should the editor and sequencer be? Do we edit, save to file, and
compile (sequence)? Or should the sequencer be more closely coupled, causing
the data structures in memory to be shared between editor and sequencer? The
first approach would make it difficult (or impossible) to do realtime mixing
of MIDI tracks, but it would provide for very efficient playback. All of the
MIDI tracks would be compiled to a MIDI file, so the on-the-fly processing
would be minimal. Efficient, but limiting. Perhaps it should run more like an
interpreter, processing the file on demand. A "mixdown" option could be
added, that would compile everything down for a final release. Then anyone
with the sequencer could play the file. (Analogous to Windows Media Player,
in which you can play .avi files, but you can't create or edit them.)
4) Simple things should be simple. The default should be that every track
plays at the same tempo and in the same meter, instruments map to the
General MIDI bank, and equal-temperament tuning with standard key signatures
is used. Recording, playback, and basic editing should be easy and
intuitive. The most common features and feature groupings should appear in
the menus. An advanced tab should be present to allow for lesser-used
features.
5) Don't lose sight of the big picture. The end goal is to have a product
that allows for audio and MIDI tracks that will play in sync. Other types of
tracks (such as video, or images) could be included as well.
6) Get something that's useful and provides basic functionality up and
running quickly. Add features step by step. A brilliantly conceived program
isn't worth much if it doesn't run.
7) I'm leaning toward C++ as the primary language, though it may be better to
use a scripting language such as Perl (which I know) or Python (which I don't
know) for certain portions. My most common methodology involves writing a
core engine in C++, then developing a simple scripting language (usually in
Perl) to access the features of the core. I'm not sure such an approach is
appropriate here, though.
End Notes for the Curious
I used to be a software designer and programmer. I've been stagnant for
several years now due to an illness that's taken away my presentability in job
interviews. (It's hard to make a good impression in today's world of
corporate fluff where appearance is more important than ability when you look
haggard, shake like an addict in detox, and have periods of brain fog that
never fail to appear in an interview.)
I don't know all that much about the down and dirty of audio programming. I
wrote a little media player tool with DirectX in Windows, a linux tool that
converts files between MIDI and XML (I should probably stick this out there
on SourceForge, as someone else might think it useful. How do I do that?),
and a few other little audio utilities.
I know a good deal about software design and maintenance, and I'm proficient
in C++ (and a few other languages as well). I've been project leader on a few
different pieces of software of substantial (100k-200k line) size, with
teams of 10 people or less.
My degree is in applied mathematics.
For any university professors out there, I think this would be a great
master's (or PhD) thesis. And I just might know a pretty good candidate. ;)
Mx41 minor update at
http://hem.passagen.se/ja_linux
This wasn't on my todo list, but I just stumbled over the missing link
in the voice assign/stealing algorithm and couldn't help implementing
it. Just to check out if it really worked ... and I think it did :)
I now have voices in five assignment queues:
silent // absolutely idle voices
released // voices about to become idle
excess holdpedal // older voices than those mentioned below ..
holdpedal // the two most recent voices for a given key
fingered // voices where the key is still pressed
... and a short two-voice queue for each key to figure out the 'excess'
part.
Voice assign will at best find a silent voice and at worst a fingered
voice.
Rolls (tremolo?) now work properly without stealing the highest or lowest
note, nor anything in between for that matter. Finally!
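For illustration only (this is not Mx41's actual code), the assignment priority described above boils down to something like this, searching the queues from most expendable to least expendable:

#include <deque>

struct Voice { int id; /* envelope state, key, ... */ };

struct VoiceAllocator {
    std::deque<Voice*> silent, released, excess_hold, hold, fingered;

    // Pick the least valuable voice: at best a silent one, at worst a fingered one.
    Voice* steal() {
        std::deque<Voice*>* order[] = { &silent, &released, &excess_hold, &hold, &fingered };
        for (auto *q : order) {
            if (!q->empty()) {
                Voice *v = q->front();   // oldest voice in that class
                q->pop_front();
                return v;
            }
        }
        return nullptr;                  // no voices at all
    }
};

The per-key two-voice queue would then feed excess_hold whenever a third voice arrives for the same key.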
Is there room for improvement? Yes I think so ... It is now possible to
get 'clicks' with certain combinations of envelope and playing style. On
the other hand, it is also quite easy to avoid, so I will work slowly on
this one.
--
(
)
c[] mvh // Jens M Andreasen
Hi peeps.
Just looking for some quick advice about a new soundcard.
I'm looking at getting something like the M-Audio Audiophile 2496.
This is to replace a SB Live! Value.
Does this seem like a sensible (though cheap) move? I'm expecting a
better card.
The reason I'm thinking about changing to something not hugely
dissimilar is because under ALSA, I'm really not impressed with my SB.
The volumes seem wrong - the loudest setting seems to clip before the
signal even leaves the card. I have no use for all those fancy effects
that it does, since I
can't make use of them in my setup anyway.
Up until now, I've been using headphones mostly, with a couple of sets
of cheapo speakers as monitors, with a super cheap amp.
I'm about to splash out on some real nearfield monitors and a real
monitoring amp, although I'll still be spending less than £350 (That's
UK pounds, in case the symbol doesn't come out as it should - there's
a reason I'm not sure it will, but let's not get into that).
With a nicer monitoring setup, obviously I want a decent soundcard.
Does the Audiophile work well with ALSA? Are there any better
alternatives? Am I being silly, and can I just somehow "fix" my
SBLive/ALSA setup?
All advice welcomed,
James
--
"I'd crawl over an acre of 'Visual This++' and 'Integrated Development
That' to get to gcc, Emacs, and gdb. Thank you."
(By Vance Petree, Virginia Power)
Hi,
I implemented a small convolution engine for jack.. grab it at
http://affenbande.org/~tapas/jack_convolve-0.0.1.tgz
untar
make
./jack_convolve responsefile.wav
It creates as many in/out ports as the response file has channels. Uses
fftw3f and libsndfile. Will add libsamplerate support in the near future.
So for now, make sure the samplerate of jack and the response file
match.
Here's a ca. 1 sec 48kHz (resampled from 96kHz) stereo response file of a
room:
http://affenbande.org/~tapas/FrontFacing%20TLM170.wav
It's from this package:
http://www.noisevault.com/index.php?page=3&action=file&file_id=130
which has 96kHz responses..
Consumes ca. 25-30% CPU load on my 1.2GHz Athlon at a jack buffer size of 2048
[;)]
So there's plenty of room for optimization (and some return value checking
will be added too ;)).. If you know some tricks, let me know.. The
sourcecode is pasted below for easier reference.
Flo
P.S.: thanks to Mario Lang for collaborating and giving some hints
towards using fftw3f instead of fftw and some other optimizations..
P.P.S.: oh yeah, example sound, here you go [hydrogen dry then with
output of jack_convolve mixed to it]:
http://affenbande.org/~tapas/jack_conv_ex1.ogg
And here's the convolved signal alone:
http://affenbande.org/~tapas/jack_conv_ex2.ogg
P.P.P.S.: known issues:
- won't handle samplerate or buffersize changes gracefully
- will bring your machine to a crawl ;)
jack_convolve.cc:
---------------
/*
Copyright (C) 2004 Florian Schmidt
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation; either version 2.1 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
$Id: types_8h-source.html,v 1.1 2004/04/27 18:21:48 joq Exp $
*/
#include <jack/jack.h>
#include <iostream>
#include <sstream>
#include <unistd.h>
#include <signal.h>
#include <stdio.h>
#include <sndfile.h>
#include <vector>
#include <cmath>
#include <fftw3.h>
jack_client_t *client;
std::vector<jack_port_t *> iports;
std::vector<jack_port_t *> oports;
jack_nframes_t jack_buffer_size;
int chunks_per_channel;
// channel chunk data
std::vector<std::vector <fftwf_complex*> > chunks;
// the buffers for the fft
float *fft_float;
fftwf_complex *fft_complex;
// the plan
fftwf_plan fft_plan_forward;
fftwf_plan fft_plan_backward;
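// note: for the real-to-complex transform of length 2*jack_buffer_size below,
// fftwf only writes the first jack_buffer_size+1 bins of fft_complex, and the
// complex-to-real transform only reads that many back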
float normalize_factor;
// per channel we need a ringbuffer holding the fft results of the
// audio periods passed to us by jackd.
// each needs to be sized jack_buffer_size * chunks_per_channel (see main())
std::vector<fftwf_complex *> ringbuffers;
// this gets advanced by jack_buffer_size after each process() callback
unsigned int ringbuffer_index = 0;
// a vector to hold the jack buffer pointers.. these get resized
// during init in main
std::vector<jack_default_audio_sample_t *> ibuffers;
std::vector<jack_default_audio_sample_t *> obuffers;
// channel data
std::vector<jack_default_audio_sample_t *>overlaps;
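// process(): partitioned convolution with overlap-add. Each jack period is
// zero-padded to twice its length and FFT'ed, and its spectrum is kept in a
// per-channel ringbuffer. The output spectrum is the sum of each stored period
// multiplied with the matching FFT'ed chunk of the response file; the second
// half of the inverse FFT becomes the overlap added to the next period.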
int process(jack_nframes_t frames, void *arg) {
// std::cout << " " << ringbuffer_index;
// get pointer[s] to the buffer[s]
int channels = chunks.size();
for (int channel = 0; channel < channels; ++channel) {
ibuffers[channel] = ((jack_default_audio_sample_t*)jack_port_get_buffer(iports[channel], frames));
obuffers[channel] = ((jack_default_audio_sample_t*)jack_port_get_buffer(oports[channel], frames));
}
for (int channel = 0; channel < channels; ++channel) {
// copy input buffer to fft buffer
for (int frame = 0; frame < jack_buffer_size; ++frame) {
fft_float[frame] = (float)(ibuffers[channel][frame]);
fft_float[frame+jack_buffer_size] = 0.0;
}
// fft the input[s]
fftwf_execute(fft_plan_forward);
// store the new result into the ringbuffer for this channel
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) {
ringbuffers[channel][ringbuffer_index+frame][0] = fft_complex[frame][0] / normalize_factor;
ringbuffers[channel][ringbuffer_index+frame][1] = fft_complex[frame][1] / normalize_factor;
}
// zero our buffer for the inverse FFT, so we can simply += the
// values in the next step.
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) {
fft_complex[frame][0] = 0;
fft_complex[frame][1] = 0;
}
// multiply corresponding chunks of the fft'ed response[s]
// we start with the chunk for the current part of the response and work our
// way to the oldest data in the ringbuffer (we need to go backwards for that)
for (int chunk = 0; chunk < chunks_per_channel; ++chunk) {
// we go backwards and constrain to the whole buffer size ("%")
long int chunk_rb_index = ((ringbuffer_index - (2 * chunk * jack_buffer_size))
+ 2 * chunks_per_channel * jack_buffer_size)
% (chunks_per_channel * jack_buffer_size * 2);
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) {
// complex multiplication (a+bi)(c+di) = (ac - bd)+(ad + bc)i
long int running_ringbuffer_index = chunk_rb_index + frame;
float a,b,c,d;
a = ringbuffers[channel][running_ringbuffer_index][0];
b = ringbuffers[channel][running_ringbuffer_index][1];
c = chunks[channel][chunk][frame][0];
d = chunks[channel][chunk][frame][1];
fft_complex[frame][0] += (a * c) - (b * d);
fft_complex[frame][1] += (a * d) + (b * c);
}
}
// inverse fft the input[s]
fftwf_execute(fft_plan_backward);
// copy fft result to output buffer
for (int frame = 0; frame < jack_buffer_size; ++frame) {
obuffers[channel][frame] = (jack_default_audio_sample_t)(fft_float[frame] / normalize_factor);
}
// add previous overlap to this output buffer
for (int frame = 0; frame < jack_buffer_size; ++frame) {
obuffers[channel][frame] += overlaps[channel][frame];
}
// save overlap
for (int frame = 0; frame < jack_buffer_size; ++frame) {
overlaps[channel][frame] = fft_float[frame+jack_buffer_size] / normalize_factor;
}
}
// advance ringbuffer index
ringbuffer_index += jack_buffer_size * 2;
ringbuffer_index %= jack_buffer_size * 2 * chunks_per_channel;
return 0;
}
bool quit = false;
void signalled(int sig) {
std::cout << "exiting.." << std::endl;
quit = true;
}
int main(int argc, char *argv[]) {
// we need to become jack client first so we can ask for the buffer
// size.
std::cout << "jack_convolve (C) 2004 Florian Schmidt - protected by GPL2" << std::endl;
if (argc < 2) {
std::cout << "usage: jack_convolve responsefile.wav" << std::endl;
exit(0);
}
// hook up signal handler for ctrl-c
signal(SIGINT, signalled);
client = jack_client_new("convolve");
jack_buffer_size = jack_get_buffer_size(client);
normalize_factor = sqrt(2.0 * (float)jack_buffer_size);
std::cout << "buffer size: " << jack_buffer_size << std::endl;
// first we load the response file. we simply assume it has
// the right samplerate ;) the channel count of the
// response file governs how many Ins and Outs
// we provide to jack..
// filename of the soundfile is the first commandline
// parameter, argv[1]
struct SF_INFO sf_info;
SNDFILE *response_file = sf_open (argv[1], SFM_READ, &sf_info) ;
// register ports for each channel in the response file
std::cout << "channels in response file: " << sf_info.channels << std::endl;
std::cout << "registering ports:";
for (int i = 0; i < sf_info.channels; ++i) {
std::stringstream stream_in;
std::stringstream stream_out;
stream_in << "in" << i;
stream_out << "out" << i;
std::cout << " " << stream_in.str();
std::cout << " " << stream_out.str();
jack_port_t *tmp_in = jack_port_register(client, stream_in.str().c_str(),
JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput, 0);
jack_port_t *tmp_out = jack_port_register(client, stream_out.str().c_str(),
JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
iports.push_back(tmp_in);
oports.push_back(tmp_out);
}
std::cout << std::endl;
std::cout << "length of response file in frames: " << sf_info.frames << std::endl;
if (sf_info.samplerate != jack_get_sample_rate(client)) {
std::cout << "warning: samplerate in responseFile: " << sf_info.samplerate
<< "; jack-samplerate: " << jack_get_sample_rate(client) << std::endl;
std::cout << "resampling not implemented yet - make sure the rates match" << std::endl;
}
// find out how many chunks we need per channel:
chunks_per_channel = (int)ceil((float)sf_info.frames/(float)jack_buffer_size);
std::cout << "chunks per channel: " << chunks_per_channel << std::endl;
// allocate chunk memory
for (int i = 0; i < sf_info.channels; ++i) {
std::vector<fftwf_complex*> channel;
for (int j = 0; j < chunks_per_channel; ++j) {
// zero padded to twice the length
fftwf_complex *tmp = (fftwf_complex*)fftwf_malloc(sizeof(fftwf_complex) * jack_buffer_size * 2);
// zero
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) tmp[frame][0] = tmp[frame][1] = 0;
channel.push_back(tmp);
}
chunks.push_back(channel);
}
std::cout << "chopping response file...";
// fill the chunks with the appropriate data
float *tmp = new float[sf_info.channels];
for (int chunk = 0; chunk < chunks_per_channel; ++chunk) {
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) {
// pad with 0's
if (chunk*jack_buffer_size + frame < sf_info.frames && frame < jack_buffer_size) {
int result = sf_readf_float(response_file, tmp, 1);
if (result != 1) std::cout << "problem reading the soundfile" << std::endl;
}
else {
for (int channel = 0; channel < sf_info.channels; ++channel) {
tmp[channel] = 0;
}
}
for (int channel = 0; channel < sf_info.channels; ++channel) {
// set real value to sound data
chunks[channel][chunk][frame][0] = tmp[channel];
// std::cout << tmp[channel] << " ";
// imaginary value to 0
chunks[channel][chunk][frame][1] = 0;
}
}
}
std::cout << "done." << std::endl;
std::cout << "creating fftw3 plan...";
// ok, now we need to FFT each chunk.. For this we need an FFT plan.
// buffers
fft_float = new float[jack_buffer_size * 2];
fft_complex = (fftwf_complex*)fftwf_malloc(sizeof(fftwf_complex) * jack_buffer_size * 2);
// create fftw plan
fft_plan_forward = fftwf_plan_dft_r2c_1d(jack_buffer_size * 2, fft_float, fft_complex, FFTW_MEASURE);
fft_plan_backward = fftwf_plan_dft_c2r_1d(jack_buffer_size * 2, fft_complex, fft_float, FFTW_MEASURE);
std::cout << "done" << std::endl;
// fft the chunks
std::cout << "FFT'ing response file chunks..." << std::endl;
for (int channel = 0; channel < sf_info.channels; ++channel) {
std::cout << "channel: " << channel << ": ";
for (int chunk = 0; chunk < chunks_per_channel; ++chunk) {
std::cout << ".";
// copy chunk to input buffer
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) {
fft_float[frame] = chunks[channel][chunk][frame][0];
// fft_in[frame][1] = 0;
}
// fft
fftwf_execute(fft_plan_forward);
// copy output buffer to chunk
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) {
chunks[channel][chunk][frame][0] = fft_complex[frame][0] / normalize_factor;
chunks[channel][chunk][frame][1] = fft_complex[frame][1] / normalize_factor;
}
}
std::cout << std::endl;
}
std::cout << "done." << std::endl;
// make room so we can store the buffer pointers for each channel
ibuffers.resize(sf_info.channels);
obuffers.resize(sf_info.channels);
// allocate ram for ringbuffers and zero out for 0 noise :)
for (int channel = 0; channel < sf_info.channels; ++channel) {
fftwf_complex *tmp = (fftwf_complex*)fftwf_malloc(sizeof(fftwf_complex) * jack_buffer_size * 2 * chunks_per_channel);
// zero out buffers
for (int frame = 0; frame < jack_buffer_size * chunks_per_channel * 2; ++frame) {
tmp[frame][0] = 0;
tmp[frame][1] = 0;
}
ringbuffers.push_back(tmp);
}
// allocate buffers for overlap
for (int channel = 0; channel < sf_info.channels; ++channel) {
jack_default_audio_sample_t *tmp = new jack_default_audio_sample_t[jack_buffer_size];
overlaps.push_back(tmp);
}
// now we should be ready to go
// std::cout << chunks.size() << std::endl;
jack_set_process_callback(client, process, 0);
jack_activate(client);
std::cout << "running (press ctrl-c to quit)..." << std::endl;
while(!quit) {sleep(1);};
// sleep(seconds_to_run);
jack_deactivate(client);
jack_client_close(client);
}
---------------
--
Palimm Palimm!
http://affenbande.org/~tapas/