I need to communicate with MIDI devices from an embedded Linux board
(Gumstix). Its UARTs are capable of:
14.7456M/(16*29) = 31779
14.7456M/(16*30) = 30720
The MIDI spec says ±1%; MIDI runs at 31250 baud, so my error is 1.7%. Is
this close enough?
I figure this problem can't be new, but I'm finding very little info
about it. Some sites hint that an error of up to 3% is OK with certain
MIDI devices.
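For reference, here is the arithmetic in compilable form (a sketch
assuming the usual 16x oversampling divisor, with 31250 baud as the
MIDI target):

#include <cstdio>

int main() {
    const double uart_clock = 14745600.0; // 14.7456 MHz
    const double midi_baud = 31250.0;     // MIDI wire speed
    for (int divisor = 29; divisor <= 30; ++divisor) {
        double baud = uart_clock / (16.0 * divisor);
        double error = (baud - midi_baud) / midi_baud * 100.0;
        printf("divisor %d: %.0f baud, %+.2f%% error\n", divisor, baud, error);
    }
    return 0;
}

This prints +1.69% for divisor 29 and -1.70% for divisor 30, i.e. ~1.7%
either way.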
Any advice appreciated,
darren
Mx41 minor update at
http://hem.passagen.se/ja_linux
This wasn't on my todo list, but I just stumbled over the missing link
in the voice assign/stealing algorithm and couldn't help implementing
it. Just to check out if it really worked ... and I think it did :)
I now have voices in five assignment queues:
silent // absolutely idle voices
released // voices about to become idle
excess holdpedal // voices older than those mentioned below ..
holdpedal // the two most recent voices for a given key
fingered // voices where the key is still pressed
... and a short two-voice queue for each key to figure out the 'excess'
part.
Voice assign will at best find a silent voice and at worst a fingered
voice.
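A minimal sketch of that search order (the queue names are from the list
above; everything else is mine, not Mx41's actual code):

#include <deque>

// Steal-priority order: take the oldest voice from the first non-empty
// queue, so a fingered voice is only ever stolen as a last resort.
enum QueueId { SILENT, RELEASED, EXCESS_HOLDPEDAL, HOLDPEDAL, FINGERED, NUM_QUEUES };

struct Voice { int id; };

std::deque<Voice*> queues[NUM_QUEUES];

Voice* assign_voice() {
    for (int q = SILENT; q < NUM_QUEUES; ++q) {
        if (!queues[q].empty()) {
            Voice* v = queues[q].front(); // oldest voice in this queue
            queues[q].pop_front();
            return v;
        }
    }
    return 0; // unreachable if every voice sits in exactly one queue
}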
Rolls (tremolo?) now work properly without stealing the highest or lowest
note, or anything in between for that matter. Finally!
Is there room for improvement? Yes, I think so ... It is now possible to
get 'clicks' with certain combinations of envelope and playing style. On
the other hand it is also quite easy to avoid, so I will work slowly on
this one.
--
(
)
c[] regards // Jens M Andreasen
Hello.
It is no problem to design or write reverbs. The problem
is in coding the interface (API, GUI, etc.) to the reverb.
Here are some numbers. The as-yet-unreleased jackgverb source code
has 20 parameters, and I have coded only the early reflection
section. All 20 parameters are meaningful to end users, i.e.,
they are not developer parameters such as the lengths of individual
delay lines. Therefore the number of parameters cannot be reduced
without making the reverb dull and canned.
For each parameter I have the following functions:
- a GUI callback (e.g., slider)
- jackgverb_set() and jackgverb_get(), called in the GUI callbacks
- gverb_set() and gverb_get(), called in jackgverb_set/get()
Perhaps jackgverb_set/get() are unnecessary. I have them because
gverb is a self-contained system with its own API, and if I need to add
something to the jack wrapping, its place is in jackgverb_set/get().
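Concretely, the plumbing for a single parameter looks roughly like this
(a sketch; 'roomsize' and all signatures are my assumptions, only the
layer names come from the list above):

// One of the 20 parameters; multiply this by 20 to get the 100
// functions tallied below.
static float roomsize; // state inside the gverb engine

void  gverb_set_roomsize(float v)     { roomsize = v; }
float gverb_get_roomsize(void)        { return roomsize; }

// jack wrapper layer: anything jack-specific would hook in here
void  jackgverb_set_roomsize(float v) { gverb_set_roomsize(v); }
float jackgverb_get_roomsize(void)    { return gverb_get_roomsize(); }

// GUI callback, e.g. a slider handler
void on_roomsize_slider(float value)  { jackgverb_set_roomsize(value); }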
So, in total I have 5*20 = 100 functions for the parameters. The
whole situation is a nightmare. I have neither the time nor the interest
to code with that kind of system. It all gets worse when the design
changes, as often happens in the design phase.
What would help? If nobody figures out anything, I could draw the
reverb design on a small piece of paper and then somebody else
may torture himself in coding.
Are there coding patterns which would make my C code simpler?
Would a modular system help? PD has strange firing rules: do they
apply to standard signal flows or to advanced ones? Feedback
paths should be sample-accurate, not blocked into N-sample chunks as
in Csound.
Juhana
Hi peeps.
Just looking for some quick advice about a new soundcard.
I'm looking at getting something like the M-Audio Audiophile 2496.
This is to replace a SB Live! Value.
Does this seem like a sensible (though cheap) move? I'm expecting a
better card.
The reason I'm thinking about changing to something not hugely
dissimilar is that, under ALSA, I'm really not impressed with my SB.
The volumes seem wrong - the loudest setting seems to clip before the
signal even leaves the card. I have no use for all those fancy effects
it does, since I can't make use of them in my setup anyway.
Up until now, I've been using headphones mostly, with a couple of sets
of cheapo speakers as monitors, with a super cheap amp.
I'm about to splash out on some real nearfield monitors and a real
monitoring amp, although I'll still be spending less than £350 (That's
UK pounds, in case the symbol doesn't come out as it should - there's
a reason I'm not sure it will, but let's not get into that).
With a nicer monitoring setup, obviously I want a decent soundcard.
Does the Audiophile work well with ALSA? Are there any better
alternatives? Or am I being silly, and can I just somehow "fix" my
SBLive/ALSA setup?
All advice welcomed,
James
--
"I'd crawl over an acre of 'Visual This++' and 'Integrated Development
That' to get to gcc, Emacs, and gdb. Thank you."
(By Vance Petree, Virginia Power)
Hi,
I implemented a small convolution engine for jack.. grab it at
http://affenbande.org/~tapas/jack_convolve-0.0.1.tgz
untar
make
./jack_convolve responsefile.wav
It creates as many in/out ports as the response file has channels. Uses
fftw3f and libsndfile. Will add libsamplerate support in the near future.
So for now, make sure the samplerate of jack and the response file
match.
Here's a ca. 1 sec 48 kHz (resampled from 96 kHz) stereo response file of
a room:
http://affenbande.org/~tapas/FrontFacing%20TLM170.wav
It's from this package:
http://www.noisevault.com/index.php?page=3&action=file&file_id=130
which has 96khz responses..
Consumes ca. 25-30% CPU on my 1.2 GHz Athlon at a jack buffer size of 2048
[;)]
So there's plenty of room for optimization (and some return value checking
will be added too ;)).. If you know some tricks, let me know.. The
source code is pasted below for easier reference.
Flo
P.S.: thanks to mario lang for collaborating and giving some hints
towards using fftw3f instead of fftw and some other optimizations..
P.P.S.: oh yeah, example sound, here you go [hydrogen dry, then with the
output of jack_convolve mixed in]:
http://affenbande.org/~tapas/jack_conv_ex1.ogg
And here is the convolved signal alone:
http://affenbande.org/~tapas/jack_conv_ex2.ogg
P.P.P.S.: known issues:
- won't handle samplerate or buffersize changes gracefully
- will bring your machine to a crawl ;)
jack_convolve.cc:
---------------
/*
Copyright (C) 2004 Florian Schmidt
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation; either version 2.1 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
#include <jack/jack.h>
#include <iostream>
#include <sstream>
#include <unistd.h>
#include <signal.h>
#include <stdio.h>
#include <sndfile.h>
#include <vector>
#include <cmath>
#include <fftw3.h>
jack_client_t *client;
std::vector<jack_port_t *> iports;
std::vector<jack_port_t *> oports;
jack_nframes_t jack_buffer_size;
int chunks_per_channel;
// channel chunk data
std::vector<std::vector <fftwf_complex*> > chunks;
// the buffers for the fft
float *fft_float;
fftwf_complex *fft_complex;
// the plan
fftwf_plan fft_plan_forward;
fftwf_plan fft_plan_backward;
float normalize_factor;
// per channel we need a ringbuffer holding the fft results of the
// audio periods passed to us by jackd.
// each needs to be sized jack_buffer_size * chunks_per_channel (see main())
std::vector<fftwf_complex *> ringbuffers;
// this gets advanced by jack_buffer_size after each process() callback
unsigned int ringbuffer_index = 0;
// a vector to hold the jack buffer pointers.. these get resized
// during init in main
std::vector<jack_default_audio_sample_t *> ibuffers;
std::vector<jack_default_audio_sample_t *> obuffers;
// channel data
std::vector<jack_default_audio_sample_t *>overlaps;
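// process() implements uniformly partitioned convolution: each jack
// period is zero-padded to twice its length and FFT'd; the spectrum goes
// into a per-channel frequency-domain delay line (the ringbuffers), is
// multiplied bin-by-bin with the FFT'd response chunks, accumulated,
// inverse FFT'd, and overlap-added with the tail of the previous period.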
int process(jack_nframes_t frames, void *arg) {
// std::cout << " " << ringbuffer_index;
// get pointer[s] to the buffer[s]
int channels = chunks.size();
for (int channel = 0; channel < channels; ++channel) {
ibuffers[channel] = ((jack_default_audio_sample_t*)jack_port_get_buffer(iports[channel], frames));
obuffers[channel] = ((jack_default_audio_sample_t*)jack_port_get_buffer(oports[channel], frames));
}
for (int channel = 0; channel < channels; ++channel) {
// copy input buffer to fft buffer
for (int frame = 0; frame < jack_buffer_size; ++frame) {
fft_float[frame] = (float)(ibuffers[channel][frame]);
fft_float[frame+jack_buffer_size] = 0.0;
}
// fft the input[s]
fftwf_execute(fft_plan_forward);
// store the new result into the ringbuffer for this channel
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) {
ringbuffers[channel][ringbuffer_index+frame][0] = fft_complex[frame][0] / normalize_factor;
ringbuffers[channel][ringbuffer_index+frame][1] = fft_complex[frame][1] / normalize_factor;
}
// zero our buffer for the inverse FFT, so we can simply += the
// values in the next step.
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) {
fft_complex[frame][0] = 0;
fft_complex[frame][1] = 0;
}
// multiply corresponding chunks of the fft'ed response[s]
// we start with the chunk for the current part of the response and work our
// way to the oldest data in the ringbuffer (we need to go backwards for that)
for (int chunk = 0; chunk < chunks_per_channel; ++chunk) {
// we go backwards and constrain to the whole buffer size ("%")
long int chunk_rb_index = ((ringbuffer_index - (2 * chunk * jack_buffer_size))
+ 2 * chunks_per_channel * jack_buffer_size)
% (chunks_per_channel * jack_buffer_size * 2);
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) {
// complex multiplication (a+bi)(c+di) = (ac - bd)+(ad + bc)i
long int running_ringbuffer_index = chunk_rb_index + frame;
float a,b,c,d;
a = ringbuffers[channel][running_ringbuffer_index][0];
b = ringbuffers[channel][running_ringbuffer_index][1];
c = chunks[channel][chunk][frame][0];
d = chunks[channel][chunk][frame][1];
fft_complex[frame][0] += (a * c) - (b * d);
fft_complex[frame][1] += (a * d) + (b * c);
}
}
// inverse fft the input[s]
fftwf_execute(fft_plan_backward);
// copy fft result to output buffer
for (int frame = 0; frame < jack_buffer_size; ++frame) {
obuffers[channel][frame] = (jack_default_audio_sample_t)(fft_float[frame] / normalize_factor);
}
// add previous overlap to this output buffer
for (int frame = 0; frame < jack_buffer_size; ++frame) {
obuffers[channel][frame] += overlaps[channel][frame];
}
// save overlap
for (int frame = 0; frame < jack_buffer_size; ++frame) {
overlaps[channel][frame] = fft_float[frame+jack_buffer_size] / normalize_factor;
}
}
// advance ringbuffer index
ringbuffer_index += jack_buffer_size * 2;
ringbuffer_index %= jack_buffer_size * 2 * chunks_per_channel;
return 0;
}
bool quit = false;
void signalled(int sig) {
std::cout << "exiting.." << std::endl;
quit = true;
}
int main(int argc, char *argv[]) {
// we need to become jack client first so we can ask for the buffer
// size.
std::cout << "jack_convolve (C) 2004 Florian Schmidt - protected by GPL2" << std::endl;
if (argc < 2) {
std::cout << "usage: jack_convolve responsefile.wav" << std::endl;
exit(0);
}
// hook up signal handler for ctrl-c
signal(SIGINT, signalled);
client = jack_client_new("convolve");
if (!client) {
std::cout << "could not become jack client (is jackd running?)" << std::endl;
exit(1);
}
jack_buffer_size = jack_get_buffer_size(client);
normalize_factor = sqrt(2.0 * (float)jack_buffer_size);
std::cout << "buffer size: " << jack_buffer_size << std::endl;
// first we load the response file. we simply assume it has
// the right samplerate ;) the channel count of the
// response file governs how many Ins and Outs
// we provide to jack..
// filename of the soundfile is the first commandline
// parameter, argv[1]
struct SF_INFO sf_info;
sf_info.format = 0; // libsndfile requires format == 0 when opening for read
SNDFILE *response_file = sf_open (argv[1], SFM_READ, &sf_info) ;
if (!response_file) {
std::cout << "could not open response file: " << argv[1] << std::endl;
exit(1);
}
// register ports for each channel in the response file
std::cout << "channels in response file: " << sf_info.channels << std::endl;
std::cout << "registering ports:";
for (int i = 0; i < sf_info.channels; ++i) {
std::stringstream stream_in;
std::stringstream stream_out;
stream_in << "in" << i;
stream_out << "out" << i;
std::cout << " " << stream_in.str();
std::cout << " " << stream_out.str();
jack_port_t *tmp_in = jack_port_register(client, stream_in.str().c_str(),
JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput, 0);
jack_port_t *tmp_out = jack_port_register(client, stream_out.str().c_str(),
JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
iports.push_back(tmp_in);
oports.push_back(tmp_out);
}
std::cout << std::endl;
std::cout << "length of response file in frames: " << sf_info.frames << std::endl;
if (sf_info.samplerate != jack_get_sample_rate(client)) {
std::cout << "warning: samplerate in responseFile: " << sf_info.samplerate
<< "; jack-samplerate: " << jack_get_sample_rate(client) << std::endl;
std::cout << "will resample response file" << std::endl;
}
// find out how many chunks we need per channel:
chunks_per_channel = (int)ceil((float)sf_info.frames/(float)jack_buffer_size);
std::cout << "chunks per channel: " << chunks_per_channel << std::endl;
// allocate chunk memory
for (int i = 0; i < sf_info.channels; ++i) {
std::vector<fftwf_complex*> channel;
for (int j = 0; j < chunks_per_channel; ++j) {
// zero padded to twice the length
fftwf_complex *tmp = (fftwf_complex*)fftwf_malloc(sizeof(fftwf_complex) * jack_buffer_size * 2);
// zero
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) tmp[frame][0] = tmp[frame][1] = 0;
channel.push_back(tmp);
}
chunks.push_back(channel);
}
std::cout << "chopping response file...";
// fill the chunks with the appropriate data
float *tmp = new float[sf_info.channels];
for (int chunk = 0; chunk < chunks_per_channel; ++chunk) {
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) {
// pad with 0's
if (chunk*jack_buffer_size + frame < sf_info.frames && frame < jack_buffer_size) {
int result = sf_readf_float(response_file, tmp, 1);
if (result != 1) std::cout << "problem reading the soundfile" << std::endl;
}
else {
for (int channel = 0; channel < sf_info.channels; ++channel) {
tmp[channel] = 0;
}
}
for (int channel = 0; channel < sf_info.channels; ++channel) {
// set real value to sound data
chunks[channel][chunk][frame][0] = tmp[channel];
// std::cout << tmp[channel] << " ";
// imaginary value to 0
chunks[channel][chunk][frame][1] = 0;
}
}
}
std::cout << "done." << std::endl;
std::cout << "creating fftw3 plan...";
// ok, now we need to FFT each chunk.. For this we need an FFT plan.
// buffers
fft_float = new float[jack_buffer_size * 2];
fft_complex = (fftwf_complex*)fftwf_malloc(sizeof(fftwf_complex) * jack_buffer_size * 2);
// create fftw plan
fft_plan_forward = fftwf_plan_dft_r2c_1d(jack_buffer_size * 2, fft_float, fft_complex, FFTW_MEASURE);
fft_plan_backward = fftwf_plan_dft_c2r_1d(jack_buffer_size * 2, fft_complex, fft_float, FFTW_MEASURE);
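// note: an r2c transform of size n = 2 * jack_buffer_size yields only
// n/2 + 1 meaningful complex bins; the loops in this file run over
// 2 * jack_buffer_size bins, so the upper half of the complex arrays
// never carries real data. The c2r transform reads only the first
// n/2 + 1 bins, so the output is unaffected, but the extra work is one
// obvious optimization opportunity.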
std::cout << "done" << std::endl;
// fft the chunks
std::cout << "FFT'ing response file chunks..." << std::endl;
for (int channel = 0; channel < sf_info.channels; ++channel) {
std::cout << "channel: " << channel << ": ";
for (int chunk = 0; chunk < chunks_per_channel; ++chunk) {
std::cout << ".";
// copy chunk to input buffer
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) {
fft_float[frame] = chunks[channel][chunk][frame][0];
// fft_in[frame][1] = 0;
}
// fft
fftwf_execute(fft_plan_forward);
// copy output buffer to chunk
for (int frame = 0; frame < jack_buffer_size * 2; ++frame) {
chunks[channel][chunk][frame][0] = fft_complex[frame][0] / normalize_factor;
chunks[channel][chunk][frame][1] = fft_complex[frame][1] / normalize_factor;
}
}
std::cout << std::endl;
}
std::cout << "done." << std::endl;
// make room so we can store the buffer pointers for each channel
ibuffers.resize(sf_info.channels);
obuffers.resize(sf_info.channels);
// allocate ram for ringbuffers and zero out for 0 noise :)
for (int channel = 0; channel < sf_info.channels; ++channel) {
fftwf_complex *tmp = (fftwf_complex*)fftwf_malloc(sizeof(fftwf_complex) * jack_buffer_size * 2 * chunks_per_channel);
// zero out buffers
for (int frame = 0; frame < jack_buffer_size * chunks_per_channel * 2; ++frame) {
tmp[frame][0] = 0;
tmp[frame][1] = 0;
}
ringbuffers.push_back(tmp);
}
// allocate buffers for overlap
for (int channel = 0; channel < sf_info.channels; ++channel) {
jack_default_audio_sample_t *tmp = new jack_default_audio_sample_t[jack_buffer_size];
overlaps.push_back(tmp);
}
// now we should be ready to go
// std::cout << chunks.size() << std::endl;
jack_set_process_callback(client, process, 0);
jack_activate(client);
std::cout << "running (press ctrl-c to quit)..." << std::endl;
while (!quit) { sleep(1); }
// sleep(seconds_to_run);
jack_deactivate(client);
jack_client_close(client);
}
---------------
--
Palimm Palimm!
http://affenbande.org/~tapas/
Back in the days before I converted my Windows partition to ext3, I
used Cubase with a number of VST instruments and effects.
I miss a number of these, but there is usually something equivalent or
something that will one day be equivalent in the Linux world.
One thing I haven't been able to replace so far is the Oberheim
OB-Tune plug-in. This was an amazingly useful plug-in that would take
an audio input and make sure it stayed in tune. It worked on guitars,
vocals, synths, whatever.
There were a number of ways to use it. You could define a set of
notes that were allowed, and audio would be made to be in tune with
the closest note (useful for phrases) or you could specify a single
note. This was all parameterised, so you could write changes to these
throughout a track.
Is there anything like this out there at the moment for Linux?
If there isn't, does anyone have any idea how it works? I might
consider implementing something similar, but I'm not sure where to
begin.
Here's what I'm thinking:
Operate in smallish chunks. Find the most intense frequency (FFT or
such) and decide how far that is from the desired frequency. Scale
accordingly, preferably with as little distortion as possible - so
perhaps splice and crossfade sections.
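If it helps, here is a hypothetical sketch of just the detection step
(dominant bin via FFT, snapped to the nearest equal-tempered note; all
names are mine, and a real version would want windowing and bin
interpolation):

#include <cmath>
#include <fftw3.h>

// Returns the factor by which to scale a chunk's pitch so its strongest
// spectral peak lands on the nearest equal-tempered note (A4 = 440 Hz).
double correction_ratio(const float *chunk, int n, double sample_rate) {
    float *in = (float*)fftwf_malloc(sizeof(float) * n);
    fftwf_complex *out =
        (fftwf_complex*)fftwf_malloc(sizeof(fftwf_complex) * (n / 2 + 1));
    fftwf_plan plan = fftwf_plan_dft_r2c_1d(n, in, out, FFTW_ESTIMATE);
    for (int i = 0; i < n; ++i) in[i] = chunk[i];
    fftwf_execute(plan);
    // find the most intense frequency bin (skipping DC)
    int peak = 1;
    double peak_mag = 0.0;
    for (int i = 1; i <= n / 2; ++i) {
        double mag = out[i][0] * out[i][0] + out[i][1] * out[i][1];
        if (mag > peak_mag) { peak_mag = mag; peak = i; }
    }
    double freq = peak * sample_rate / n;
    // snap to the nearest equal-tempered note
    double semitones = 12.0 * log(freq / 440.0) / log(2.0);
    double target = 440.0 * pow(2.0, floor(semitones + 0.5) / 12.0);
    fftwf_destroy_plan(plan);
    fftwf_free(in);
    fftwf_free(out);
    return target / freq; // resample by this to land on the note
}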
If anyone has any thoughts, please let me know.
I'd expect to make this a plug-in rather than a standalone app, but
I've never touched LADSPA before - is it possible to send events to
change parameters?
This might be more than I can chew, but I intend to bite anyway.
James
--
"I'd crawl over an acre of 'Visual This++' and 'Integrated Development
That' to get to gcc, Emacs, and gdb. Thank you."
(By Vance Petree, Virginia Power)
Does the idea of documenting various LAD design patterns make sense to anyone?
It seems a lot of this stuff is sort of voodooish.
Example: application timer design.
Or am I just a moron who should be able to come up with this sort of stuff
easily and on my own - in other words, is it just common sense?
thanks.
--
Levi D. Burton
http://www.puresimplicity.net/~ldb/
On Fri, 28 Jan 2005 21:26:46 -0800 (PST)
R Parker <rtp405(a)yahoo.com> wrote:
> Florian,
>
> Just gave your dry and wet sample a quick listen. It
> sounds promising. I'm looking forward to giving it a
> more serious run.
>
Hi,
make sure you get the newest version.. homepage here:
http://www.affenbande.org/~tapas/wiki/admin.php?jack_convolve
tarball[s] here [minor optimization attempts since last tarball]:
http://affenbande.org/~tapas/jack_convolve/
I also created another example file where I use a different response
file [the one used for the other two examples was noisy]. Here you hear
the convolution result mixed in and out:
http://affenbande.org/~tapas/jack_conv_ex3.ogg
Flo
--
Palimm Palimm!
http://affenbande.org/~tapas/