Hello everyone!
I was wondering: is there a direct way to do a transport locate in sync with
klick? Or can one at least tell klick to relocate to 0, so that the metronome
is back in line with everything else? I've tried this:
tty1> klick -i -t # interactive and JACK transport aware
tty2> ecasound -i null -o jack_alsa -G:jack,eca,sendrecv # Transport master
Klick reacts to start and stop. But it doesn't react at all to a setpos 0
(relocate to 0) in Ecasound.
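I suppose I could force it myself from a tiny external JACK client -
something like this untested sketch (plain JACK transport API, nothing
klick-specific):
/* rewind.c - ask the JACK transport to relocate to frame 0.
   Untested sketch; build with: gcc rewind.c -o rewind -ljack */
#include <stdio.h>
#include <jack/jack.h>
#include <jack/transport.h>
int main (void)
{
    jack_status_t status;
    jack_client_t *client = jack_client_open ("rewind", JackNoStartServer, &status);
    if (!client) {
        fprintf (stderr, "could not connect to JACK\n");
        return 1;
    }
    jack_transport_locate (client, 0);  /* transport-aware clients should follow */
    jack_client_close (client);
    return 0;
}
But a way to do it from within Ecasound would of course be nicer.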
Warm regards
Julien
----------------------------------------
http://juliencoder.de/nama/music.html
Hello all,
I'm trying to add ALSA support to an application, with mixed results.
When using the default audio output it sounds fine (my DAC displays
48kHz as the sampling rate, which doesn't match the 44.1kHz source
signal), but when using hw:0 it is heavily distorted, and with plughw:0
the sound stutters at a steady interval of about 2Hz (in these last two
cases my DAC displays 44.1kHz as the sampling rate). When using libao,
plughw:0 sounds fine (and my DAC always displays 44.1kHz).
Can anyone point me in the right direction (or give me an answer) as to
where I might be going wrong here?
Unfortunately I'm a bit out of my league here and I do not yet
understand the data that fills buf[]. However, I do know that the same
data works fine with libao, so I suspect the problem is in the code
below.
Thanks in advance, Maarten
PS: it is supposed to add ALSA support to the following:
https://github.com/abrasive/shairport/tree/1.0-dev
static void start(int sample_rate) {
    if (sample_rate != 44100)
        die("Unexpected sample rate!");
    int ret, dir = 0;
    snd_pcm_uframes_t frames = 32;
    ret = snd_pcm_open(&alsa_handle, alsa_out_dev,
                       SND_PCM_STREAM_PLAYBACK, 0);
    if (ret < 0)
        die("Alsa initialization failed: unable to open pcm device: %s\n",
            snd_strerror(ret));
    snd_pcm_hw_params_alloca(&alsa_params);
    snd_pcm_hw_params_any(alsa_handle, alsa_params);
    /* interleaved 16-bit native-endian stereo */
    snd_pcm_hw_params_set_access(alsa_handle, alsa_params,
                                 SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(alsa_handle, alsa_params,
                                 SND_PCM_FORMAT_S16);
    snd_pcm_hw_params_set_channels(alsa_handle, alsa_params, 2);
    snd_pcm_hw_params_set_rate_near(alsa_handle, alsa_params,
                                    (unsigned int *)&sample_rate, &dir);
    /* 32 frames is a very small period; if the device rounds this
       badly or underruns, that could explain the periodic stutter */
    snd_pcm_hw_params_set_period_size_near(alsa_handle, alsa_params,
                                           &frames, &dir);
    ret = snd_pcm_hw_params(alsa_handle, alsa_params);
    if (ret < 0)
        die("unable to set hw parameters: %s\n", snd_strerror(ret));
}
static void play(short buf[], int samples) {
    /* NB: snd_pcm_writei() takes a count in frames, not samples; with
       2 channels one frame is two shorts. If 'samples' counts shorts,
       this writes twice the intended data - worth checking against
       the caller. */
    int err = snd_pcm_writei(alsa_handle, (char *)buf, samples);
    if (err < 0)
        err = snd_pcm_recover(alsa_handle, err, 0);
    if (err < 0)
        die("Failed to write to PCM device: %s\n", snd_strerror(err));
}
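For debugging I plan to print what ALSA actually configured, since the
*_near() calls are allowed to move the values - something like this
(untested) at the end of start(), with <stdio.h> included:
unsigned int rate = 0;
snd_pcm_uframes_t period = 0, bufsize = 0;
/* query the values ALSA really chose after snd_pcm_hw_params() */
snd_pcm_hw_params_get_rate(alsa_params, &rate, &dir);
snd_pcm_hw_params_get_period_size(alsa_params, &period, &dir);
snd_pcm_hw_params_get_buffer_size(alsa_params, &bufsize);
fprintf(stderr, "rate %u Hz, period %lu frames, buffer %lu frames\n",
        rate, (unsigned long)period, (unsigned long)bufsize);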
Hi all,
I'm working on a wavetable oscillator class, and I'm wondering how best
to go about bandlimiting. I see two ways to achieve it, which I'll
detail as A and B below (with a sketch of A at the end of this mail).
A) Create different wavetables for each octave. The base octave includes
all harmonics. Octave 1 has the top half of the harmonics removed by FFT.
Octave 2 has the harmonic content halved again.
B) Oversample the single waveform x8. Play the oversampled audio back, and
lowpass with a steep rolloff just below the Nyquist of the output sample
rate. Removal of the otherwise-aliasing harmonics is done at the higher
sample rate, so it's not aliased yet at that stage.
I'm asking in terms of quality and CPU usage: this is for a synth which has
3 oscillators per voice, and 16/32 voices.
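To make A concrete, here is roughly what I have in mind - an untested
sketch, all names mine:
/* One sawtooth table per octave, built additively so each table only
   contains harmonics below Nyquist - same effect as removing the top
   FFT bins. */
#include <math.h>
#define TABLE_LEN 2048
#define PI 3.14159265358979323846
/* fill 'table' for a range whose highest fundamental is fmax */
void make_saw_table (float *table, double fmax, double fs)
{
    int nharm = (int)(fs / (2.0 * fmax));   /* keep below Nyquist */
    int h, i;
    for (i = 0; i < TABLE_LEN; i++) table[i] = 0.0f;
    for (h = 1; h <= nharm; h++)            /* additive saw: sin(hx)/h */
        for (i = 0; i < TABLE_LEN; i++)
            table[i] += (float)(sin (2.0 * PI * h * i / TABLE_LEN) / h);
}
/* octave 0 covers fundamentals up to f0, octave k up to f0 * 2^k */
void make_tables (float tables[][TABLE_LEN], int noct, double f0, double fs)
{
    int k;
    for (k = 0; k < noct; k++)
        make_saw_table (tables[k], f0 * pow (2.0, k), fs);
}
At playback the oscillator would pick the table matching the current
pitch (or crossfade the two nearest). My gut feeling is that A wins on
CPU here, since B needs a steep filter running at 8x rate in every
oscillator, which multiplies badly across 3 oscillators x 32 voices -
but I'd be happy to be corrected.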
Cheers, -Harry
Hey everybody,
It's my pleasure to announce that the next OpenAV Productions LV2 plugin is
finished!
It's called Fabla, and it's a performance sampler.
Page: http://openavproductions.com/fabla
Demo reel: https://vimeo.com/70122957
One year from today Fabla will be released, and each donation brings the
release one month earlier. -Harry
PS: The release system is the same as the previous OpenAV release for
Sorcer, details available here: http://openavproductions.com/support
Hello all,
Some maintenance updates are available on
<http://kokkinizita.linuxaudio.org/linuxaudio/downloads>
* libclxclient 3.9.0: bugfixes
* aeolus, aliki, jaaa, japa: all now use zita-alsa-pcmi instead
of clalsadrv.
* The aliki package now includes the manual.
That means that clalsadrv is now deprecated. It will remain available
for a few months and then disappear forever.
Note to AMS devs: zita-alsa-pcmi is a near drop-in replacement
for clalsadrv-2.0.0 (a short sketch follows the list):
* Change the library name in the build files
* s/#include<clalsadrv>/#include<zita-alsa-pcmi>/
* s/Alsa_driver/Alsa_pcmi/
* s/->stat()/->state()/
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
Hello all,
The incredible happens.
The electronics of the 'Lampadario' at the Casa del Suono in
Parma consists of a rack with an RME ADI-648 converting MADI
to 8 ADAT outputs, 8 Behringer ADA8000 converters, and 8 QSC
amplifiers of 8 channels each. The rack was wired (very neatly)
by a firm specialising in this sort of work.
When I installed the software four years ago, I found out
that 25 of the 64 channels had their phase inverted. For one
of those it was an error in the speaker wiring, which was easy
to correct. The other 24 corresponded exactly to 3 groups of 8,
and the speaker wiring was OK. I assumed that the cables between
the ADA8000 and the amps were to blame - this is a non-standard
cable which had to be hand-made by whoever did the wiring.
If two technicians had worked on that, they could have had
different ideas about the correct connections.
Since I didn't want to take the rack apart, resolder 24 wires
and put it all back, and since there was only one SW app driving
the installation at that time, those 24 inversions were corrected
for by that software. So far so good.
Recently I re-measured the IRs of the whole thing. There
were again 24 channels out of phase. But not the same ones.
One of the groups of 8 had turned in-phase, and another
one was now inverted.
The only thing that has happened to the installation over
the last years is that some of the Behringers failed (power
supply blown up, one per year on average) and were replaced.
So I checked those separately. And yes, some of them had their
output phase inverted w.r.t. the others. Apparently the thing
exists in two versions, but apart from measuring there's no
way to tell which is which. So I'll have to recheck things
each time any of them are replaced again. Thank $GOD we didn't
use those for the WFS system.
The incredible happens.
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
I'm listening to music from an AROS virtual machine in VirtualBox. The
audio setup is the following:
AROS -> ALSA (loopback) -> zita-a2j -> JACK
VirtualBox has no direct support for JACK. A bug was opened 3 years
ago asking for jack support https://www.virtualbox.org/ticket/6049
This seems pretty hopeless, unless someone competent enough (which I
am not) contributes a patch.
However, when listening to music, it works fabulously with my setup.
The main advantage of VirtualBox is that the guest OS is really
installed in a virtual machine, and any software compatible with this
OS can be installed and will run. VirtualBox is also a little bit
faster than wine.
So my question is: has anyone tested VirtualBox with Windows
and some VSTs?
Dominique
--
"We have the heroes we deserve."
Quoting conrad berhörster <beat.siegel.vier(a)gmx.de>:
> maybe this helps
> http://welltemperedstudio.wordpress.com/code/lemma/
lemma looks cool. I will definitely give it a try as soon as I get my
daw back from the repair shop...
> and take a look at
> impro-visor
> http://www.cs.hmc.edu/~keller/jazz/improvisor/
I also worked with impro-visor a lot. Basically it has too many
features for my requirements. I just want an easy-to-use tool to
provide a playback track for practice sessions, for those of us who
lack the skill or the extra hands to accompany ourselves on the piano
during practice. I remember the playback quality being very good,
though. I never got to use all the melody-centered functionalities,
it's a different use-case really. That's also one of my design goals -
focus on the central requirement, keep everything out that's not
absolutely necessary, make it as easy to use as possible, and in the
end really focus on the quality of the output (i.e. the music that can
be heard).
> And maybe you can explain a little bit about your ideas about the
> pattern-less approach
Happy to, though it might get a bit longish from here on. Bear with
me. My reasoning goes like this: Take the bass part of a very
simplistic straight 4-beat groove. In a fictional
pseudo-pattern-defining-format you may specify something like this:
-) Play the bass note of the chord on beat 1.
-) Play the bass note of the chord on the off-beat after 2.
-) Play the bass note of the chord on beat 3.
-) Play the bass note of the chord on the off-beat after 4.
This will sound fine on straight-forward one-chord-per-bar tunes. Even
if you change chord every two beats it will sound reasonably good. But
what of bars with a chord change on every beat, such as happens often
in Jazz, especially in turnarounds and the like? The bass player will
miss every other chord! Sure, you could add a rule like
-) In addition, play the bass note of the chord at every chord change.
assuming your pseudo-pattern-defining-format allows this. But then the
notes on the off-beats defined earlier will not sound too good. A
real-life bass player would, in such turnaround bars, probably play a
single note on every chord change and leave out the off-beat notes. Or
consider off-beat chord changes, also a common thing. Just think about
what a typical pattern definition will make of them, and what a real
player would play.
Still, it's possible to express all this in a pattern definition. But
always provided the engine reading the pattern files supports it! And
I think that sooner rather than later you reach the point where
understanding the syntax of a sufficiently powerful pattern format and
actually writing good patterns in it is not that much easier than
programming in any well-structured contemporary programming language.
Bottom line: I think everyone able to define patterns so complex that
they actually sound really good is just as able to program them given
a well-structured framework to work in.
What you gain by actually programming a groove is that you are no
longer limited by the abilities of the pattern interpreter, but have
the full expressive power of the programming language at your
disposal. Random elements, variations in timing and volume to make it
sound more human, even the occasional wrong note, all that is possible
without any changes to the engine itself.
BTW, the same holds for the voicing of chords. Consider the ugly but
simple progression D7 C9 Bb6. Defining for the bass to
-) Play the root note of the chord in octave #3
will give you a not only ugly but also quite unrealistic upward jump of
a seventh at the last change (C3 up to Bb3). It gets worse if the voicing of the
pianist's right hand just builds the chords up from the root. Playing
the above progression with all the chords in root position will not
only violate a lot of voicing rules, it will also sound pretty crappy.
A real-life pianist would probably play all three chords with the D at
the bottom, or maybe even move up through D E F or something similar.
Actually programming such rules is doable, defining them in a
declarative fashion is much much harder.
So how does this actually work? At the highest (and most convenient)
level you just define your Groove as a number of Players, each
representing typically one instrument. In your Player you have two
hook methods, one that gets called once for every bar and one that
gets called once for every chord change. Within these methods you
define (actually you program) what notes should be added to the final
music. For that you have not only methods to actually add notes to the
track, but also a (hopefully still growing) number of convenience
methods, for example to get the proper notes for the current chord
from a voicing engine or to get not only the current chord but also
the previous and next chord(s) (for example to program cool walking
bass lines you need the full context) as well as their suggested
voicings, etc. In addition you can do basically everything that can be
programmed.
In the simplest case, programming in that layer is no harder than
defining patterns. The simple pattern from above would look something
like this:
void createEvents(Bar bar) {
    // The four rules of the simple pattern above, as fractions of a
    // 4/4 bar: beat 1 = 0.0, off-beat after 2 = 0.375, beat 3 = 0.5,
    // off-beat after 4 = 0.875. get(0) picks the bass note of the
    // voicing supplied by the voicing engine.
    addNote(getVoicing(getChordAt(0.0)).get(0), 0.0, 1.0, 0.35);
    addNote(getVoicing(getChordAt(0.375)).get(0), 0.375, 1.0, 0.12);
    addNote(getVoicing(getChordAt(0.5)).get(0), 0.5, 1.0, 0.35);
    addNote(getVoicing(getChordAt(0.875)).get(0), 0.875, 1.0, 0.12);
}
The difference is that you are not limited anymore. You can even skip
the convenience layer and directly implement your own Player object,
in which you receive a list of music information (Bars, Chord changes,
Volume changes, Tempo changes), an assigned MIDI channel number and
the set MIDI resolution, and must return simply an arbitrary list of
MIDI events.
To everyone who made it this far: Thanks for listening ;-)
Mike
--
Michael Niemeck
Krausegasse 4-6/3/6
1110 Wien
michael(a)niemeck.org
+43 1 9417017
+43 660 9417017
----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.
No, not the umpteenth person looking for one. I want to actually set
out to try and build one. I'm still in the early stages of scoping,
architecting and prototyping (read: not ready to share anything
yet), but still there's a few questions I would like to get feedback on:
1) Would there be any demand for such an open source initiative at
all, or is everyone seriously needing this type of application using
The Real Thing anyway?
2) Would anyone be interested in collaborating? The core and GUI code
are not really suitable for collaborative development (yet), and
anyway not the real issue (although I'm open to any suggestion and/or
wishes regarding the user experience). What the project _does_ need
however in order to get serious at some point, is a lot of brains put
into the actual music creation routines, which I'm designing from the
start to be easily contributed to. It's in Java and I'm trying to put
in as many convenience methods as possible, so a lot of programming
know-how is not really necessary to contribute. What I'd rather need
is people with a lot of musical know-how, like voicing theory or
in-depth style knowledge, and just enough formal thinking to be able
to put their ideas into some form of algorithms (if Java is a problem
I'm willing to accept any form of pseudo-code as well...).
3) Is this the right place to ask? (Probably should have put this as
1)...). Seriously, if you have any suggestions where else I might put
this up, please let me know as well.
If there are some 'yes's out there, I'll be happy to post more information.
Cheers
Mike
----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.