Hi all,
I'm working on a wavetable oscillator class, and I'm wondering how
best to go about bandlimiting. I see two ways to achieve it, which
I'll detail as A and B.
A) Create a different wavetable for each octave. The base octave
includes all harmonics. Octave 1 has the top half of the harmonics
removed by FFT; octave 2 has the harmonic content halved again.
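A minimal sketch of option A, in Java for illustration (a real oscillator class would likely be C++), building each table additively rather than by FFT - the resulting bandlimited tables are the same. Table size, octave count and the sawtooth spectrum are arbitrary choices here:

```java
public class WavetableMipmap {
    // One table per octave; octave k keeps only the lower harmonics so that
    // playback an octave up still stays below Nyquist.
    public static double[][] buildTables(int size, int octaves, int baseHarmonics) {
        double[][] tables = new double[octaves][size];
        int harmonics = baseHarmonics;
        for (int oct = 0; oct < octaves; oct++) {
            for (int i = 0; i < size; i++) {
                double phase = 2.0 * Math.PI * i / size;
                double s = 0.0;
                for (int h = 1; h <= harmonics; h++) {
                    s += Math.sin(h * phase) / h; // sawtooth partials, 1/h amplitude
                }
                tables[oct][i] = s;
            }
            harmonics = Math.max(1, harmonics / 2); // halve harmonic content per octave
        }
        return tables;
    }
}
```

Playback then just selects the table whose harmonic count keeps everything below Nyquist for the current pitch, optionally crossfading between adjacent tables.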
B) Oversample the single waveform x8. Play the oversampled audio back,
and lowpass with a steep rolloff just below the Nyquist frequency of
the output samplerate. Removal of the otherwise-aliasing harmonics is
done at the higher samplerate, so nothing has aliased yet at that stage.
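And a sketch of option B's filtering stage, again in Java and with illustrative parameters: a windowed-sinc FIR at the oversampled rate followed by plain 8:1 decimation. A real implementation would probably use a polyphase structure so the discarded samples are never computed:

```java
public class OversampleFilter {
    // Windowed-sinc lowpass; cutoff is a fraction of the oversampled rate.
    public static double[] lowpassTaps(int n, double cutoff) {
        double[] h = new double[n];
        int m = n - 1;
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            double x = i - m / 2.0;
            double sinc = (x == 0.0)
                ? 2.0 * cutoff
                : Math.sin(2.0 * Math.PI * cutoff * x) / (Math.PI * x);
            double w = 0.54 - 0.46 * Math.cos(2.0 * Math.PI * i / m); // Hamming window
            h[i] = sinc * w;
            sum += h[i];
        }
        for (int i = 0; i < n; i++) h[i] /= sum; // unity gain at DC
        return h;
    }

    // Convolve, then keep only every 'factor'-th output sample.
    public static double[] filterAndDecimate(double[] in, double[] taps, int factor) {
        double[] out = new double[in.length / factor];
        for (int o = 0; o < out.length; o++) {
            int center = o * factor;
            double acc = 0.0;
            for (int k = 0; k < taps.length; k++) {
                int idx = center - k;
                if (idx >= 0) acc += taps[k] * in[idx];
            }
            out[o] = acc;
        }
        return out;
    }
}
```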
I'm asking in terms of quality and CPU usage: this is for a synth with
3 oscillators per voice, and 16 or 32 voices.
Cheers, -Harry
Hey everybody,
It's my pleasure to announce that the next OpenAV Productions LV2
plugin is finished!
It's called Fabla, and it's a performance sampler.
Page: http://openavproductions.com/fabla
Demo reel: https://vimeo.com/70122957
One year from today Fabla will be released, and each donation moves
the release one month earlier. -Harry
PS: The release system is the same as the previous OpenAV release for
Sorcer, details available here: http://openavproductions.com/support
Hello all,
Some maintenance updates are available on
<http://kokkinizita.linuxaudio.org/linuxaudio/downloads>
* libclxclient 3.9.0: bugfixes
* aeolus, aliki, jaaa, japa: all now use zita-alsa-pcmi instead
of clalsadrv.
* The aliki package now includes the manual.
That means that clalsadrv is now deprecated. It will remain available
for a few months and then disappear forever.
Note to AMS devs: zita-alsa-pcmi is a near drop-in replacement
for clalsadrv-2.0.0:
* Change the library name in the build files
* s/#include<clalsadrv>/#include<zita-alsa-pcmi>/
* s/Alsa_driver/Alsa_pcmi/
* s/->stat()/->state()/
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
Hello all,
The incredible happens.
The electronics of the 'Lampadario' at the Casa del Suono in
Parma consists of a rack with an RME ADI-648 converting MADI
to 8 ADAT outputs, 8 Behringer ADA8000 converters, and 8 QSC
amplifiers of 8 channels each. The rack was wired (very neatly)
by a firm specialising in this sort of work.
When I installed the software four years ago, I found out
that 25 of the 64 channels had their phase inverted. For one
of those it was an error in the speaker wiring, which was easy
to correct. The other 24 corresponded exactly to 3 groups of 8,
and the speaker wiring was OK. I assumed that the cables between
the ADA8000 and the amps were to blame - this is a non-standard
cable which had to be hand-made by whoever did the wiring.
If two technicians had worked on that, they could have had
different ideas of what the correct connections were.
Since I didn't want to take the rack apart, resolder 24 wires
and put it all back, and since there was only one SW app driving
the installation at that time, those 24 inversions were corrected
for by that software. So far so good.
Recently I re-measured the IRs of the whole thing. There
were again 24 channels out of phase. But not the same ones.
One of the groups of 8 had turned in-phase, and another
one was now inverted.
The only thing that has happened to the installation over
the last years is that some of the Behringers failed (power
supply blown up, one per year on average) and were replaced.
So I checked those separately. And yes, some of them had their
output phase inverted w.r.t. the others. Apparently the thing
exists in two versions, but apart from measuring there's no
way to tell which is which. So I'll have to recheck things
each time any of them are replaced again. Thank $GOD we didn't
use those for the WFS system.
The incredible happens.
Ciao,
--
FA
I'm listening to music from an AROS virtual machine in VirtualBox. The
audio setup is the following:
AROS -> ALSA (loopback) -> zita-a2j -> JACK
VirtualBox has no direct support for JACK. A bug was opened 3 years
ago asking for jack support https://www.virtualbox.org/ticket/6049
This seems pretty hopeless, unless someone competent enough - which I
am not - contributes a patch.
However, when listening to music, my setup works fabulously.
The main advantage of VirtualBox is that the guest OS is really
installed in a virtual machine, and any software compatible with this
OS can be installed and will run. VirtualBox is also a little bit
faster than wine.
So my question is: has anyone tested VirtualBox with Windows and
some VSTs?
Dominique
--
"We have the heroes we deserve."
Quoting conrad berhörster <beat.siegel.vier(a)gmx.de>:
> maybe this helps
> http://welltemperedstudio.wordpress.com/code/lemma/
lemma looks cool. I will definitely give it a try as soon as I get my
DAW back from the repair shop...
> and take a look at
> impro-visor
> http://www.cs.hmc.edu/~keller/jazz/improvisor/
I also worked with impro-visor a lot. Basically it has too many
features for my requirements. I just want an easy-to-use tool to
provide a playback track for practice sessions, for those of us who
lack the skill or the extra hands to accompany ourselves on the piano
during practice. I remember the playback quality being very good,
though. I never got to use all the melody-centered functionalities,
it's a different use-case really. That's also one of my design goals -
focus on the central requirement, keep everything out that's not
absolutely necessary, make it as easy to use as possible, and in the
end really focus on the quality of the output (i.e. the music that can
be heard).
> And maybe you can explain a little bit about your ideas about the
> pattern-less approach
Happy to, though it might get a bit longish from here on. Bear with
me. My reasoning goes like this: Take the bass part of a very
simplistic straight 4-beat groove. In a fictional
pseudo-pattern-defining-format you may specify something like this:
-) Play the bass note of the chord on beat 1.
-) Play the bass note of the chord on the off-beat after 2.
-) Play the bass note of the chord on beat 3.
-) Play the bass note of the chord on the off-beat after 4.
This will sound fine on straight-forward one-chord-per-bar tunes. Even
if you change chord every two beats it will sound reasonably good. But
what of bars with a chord change on every beat, as often happens
in jazz, especially in turnarounds and the like? The bass player will
miss every other chord! Sure, you could add a rule like
-) In addition, play the bass note of the chord at every chord change.
assuming your pseudo-pattern-defining-format allows this. But then the
notes on the off-beats defined earlier will not sound too good. In
such turnaround bars a real-life bass player would probably play a
single note on every chord change and leave out the off-beat notes. Or
consider off-beat chord changes, also a common thing. Just think about
what a typical pattern definition will make of them, and what a real
player would play.
Still, it's possible to express all this in a pattern definition. But
always provided the engine reading the pattern files supports it! And
I think that sooner rather than later you reach the point where
understanding the syntax of a sufficiently powerful pattern format and
actually writing good patterns in it is not that much easier than
programming in any well-structured contemporary programming language.
Bottom line: I think everyone able to define patterns so complex that
they actually sound really good is just as able to program them given
a well-structured framework to work in.
What you gain by actually programming a groove is that you are no
longer limited by the abilities of the pattern interpreter, but have
the full expressive power of the programming language at your
disposal. Random elements, variations in timing and volume to make it
sound more human, even the occasional wrong note, all that is possible
without any changes to the engine itself.
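For instance, the timing and volume variation could be as small as this hypothetical helper (the ±10 ms and ±10% bounds are arbitrary illustrative choices, not anything from the actual engine):

```java
import java.util.Random;

public class Humanize {
    // Jitter a note's start time and velocity inside fixed bounds to make
    // a programmed groove sound less mechanical.
    public static double[] humanize(double startSec, double velocity, Random rng) {
        double t = startSec + (rng.nextDouble() * 2.0 - 1.0) * 0.010;        // +/- 10 ms
        double v = velocity * (1.0 + (rng.nextDouble() * 2.0 - 1.0) * 0.10); // +/- 10%
        return new double[] { t, Math.max(0.0, Math.min(1.0, v)) };
    }
}
```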
BTW, the same holds for the voicing of chords. Consider the ugly but
simple progression D7 C9 Bb6. Defining the bass part as
-) Play the root note of the chord in octave #3
will give you a not only ugly but also quite unrealistic jump of a
seventh up at the last change. It gets worse if the voicing of the
pianist's right hand just builds the chords up from the root. Playing
the above progression with all the chords in root position will not
only violate a lot of voicing rules, it will also sound pretty crappy.
A real-life pianist would probably play all three chords with the D at
the bottom, or maybe even move up through D E F or something similar.
Actually programming such rules is doable, defining them in a
declarative fashion is much much harder.
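One such rule - keep each new note in the octave nearest the previous one - takes only a few lines. The MIDI-number framing and the range limits here are my own illustrative choices:

```java
public class NearestVoicing {
    // Place a pitch class (0-11, C = 0) in the octave closest to the
    // previous MIDI note, avoiding jumps like the seventh in D7 C9 Bb6.
    public static int nearest(int prevMidi, int pitchClass) {
        int best = -1;
        for (int oct = -1; oct <= 9; oct++) {
            int cand = 12 * (oct + 1) + pitchClass;
            if (cand < 0 || cand > 127) continue;
            if (best < 0 || Math.abs(cand - prevMidi) < Math.abs(best - prevMidi)) {
                best = cand;
            }
        }
        return best;
    }
}
```

With this rule the D7 C9 Bb6 roots move down by a whole step each time instead of leaping up a seventh at the last change.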
So how does this actually work? At the lowest (and most convenient)
level you just define your Groove as a number of Players, each
representing typically one instrument. In your Player you have two
hook methods, one that gets called once for every bar and one that
gets called once for every chord change. Within these methods you
define (actually you program) what notes should be added to the final
music. For that you have not only methods to actually add notes to the
track, but also a (hopefully still growing) number of convenience
methods, for example to get the proper notes for the current chord
from a voicing engine or to get not only the current chord but also
the previous and next chord(s) (to program cool walking bass lines,
say, you need the full context) as well as their suggested
voicings, etc. In addition you can do basically everything that can be
programmed.
In the simplest case, programming in that layer is no harder than
defining patterns. The simple pattern from above would look something
like this:
void createEvents(Bar bar) {
    // Positions are fractions of a 4/4 bar: 0.0 = beat 1,
    // 0.375 = off-beat after 2, 0.5 = beat 3, 0.875 = off-beat after 4.
    addNote(getVoicing(getChordAt(0.0)).get(0), 0.0, 1.0, 0.35);
    addNote(getVoicing(getChordAt(0.375)).get(0), 0.375, 1.0, 0.12);
    addNote(getVoicing(getChordAt(0.5)).get(0), 0.5, 1.0, 0.35);
    addNote(getVoicing(getChordAt(0.875)).get(0), 0.875, 1.0, 0.12);
}
The difference is that you are not limited anymore. You can even skip
the convenience layer and directly implement your own Player object,
in which you receive a list of music information (Bars, Chord changes,
Volume changes, Tempo changes), an assigned MIDI channel number and
the set MIDI resolution, and must return simply an arbitrary list of
MIDI events.
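The shape of that low-level contract might look roughly like this; every name below is hypothetical, since only the general mechanism is described above:

```java
import java.util.List;

public class PlayerContract {
    // A raw MIDI event: a tick position plus the message bytes.
    public static final class MidiEvent {
        public final long tick;
        public final byte[] bytes;
        public MidiEvent(long tick, byte[] bytes) {
            this.tick = tick;
            this.bytes = bytes;
        }
    }

    // Bars, chord changes, volume changes, tempo changes would all
    // implement this marker interface.
    public interface MusicEvent {}

    // The engine hands over the musical context, a MIDI channel and the
    // MIDI resolution, and expects an arbitrary list of MIDI events back.
    public interface Player {
        List<MidiEvent> createEvents(List<MusicEvent> music,
                                     int midiChannel,
                                     int ticksPerQuarter);
    }
}
```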
To everyone who made it this far: Thanks for listening ;-)
Mike
--
Michael Niemeck
Krausegasse 4-6/3/6
1110 Wien
michael(a)niemeck.org
+43 1 9417017
+43 660 9417017
----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.
No, not the umpteenth person looking for one. I want to actually set
out to try and build one. I'm still in the early stages of scoping,
architecting and prototyping (read: not ready to share anything
yet), but there are still a few questions I'd like feedback on:
1) Would there be any demand for such an open source initiative at
all, or is everyone seriously needing this type of application using
The Real Thing anyway?
2) Would anyone be interested in collaborating? The core and GUI code
are not really suitable for collaborative development (yet), and
anyway not the real issue (although I'm open to any suggestion and/or
wishes regarding the user experience). What the project _does_ need
however in order to get serious at some point, is a lot of brains put
into the actual music creation routines, which I'm designing from the
start to be easily contributed to. It's in Java and I'm trying to put
in as many convenience methods as possible, so a lot of programming
know-how is not really necessary to contribute. What I'd rather need
is people with a lot of musical know-how, like voicing theory or
in-depth style knowledge, and just enough formal thinking to be able
to put their ideas into some form of algorithms (if Java is a problem
I'm willing to accept any form of pseudo-code as well...).
3) Is this the right place to ask? (Probably should have put this as
1)...). Seriously, if you have any suggestions where else I might put
this up, please let me know as well.
If there are some 'yes's out there, I'll be happy to post more information.
Cheers
Mike
Hi Geoff
On Sun, 16 Jun 2013 20:49:26 +1000 you wrote:
> Going to finally build a new machine. It's going to be Intel this time -
> AMD for 15 years or so - can anyone here give some advice as to how
> many cores are optimal given current kernel >3.8 performance? Any
> install/operational issues? Any pitfalls?
I can't provide much in the way of scientific evidence; there are others who
know far more about the technical realities of this - and push their
machines much harder - than I do.
Personally, I have always run with the idea that so long as one has N
independent processes whose links to other processes are limited to the
consumption or production of streams, then one could theoretically max out N
CPU cores. However in most practical cases involving audio and video one is
running the streams at real world speed, and this obviously limits the
extent that each process requires a CPU. The only time a single process
would have the chance to max out a single CPU core would be when
freewheeling with jack for example, or doing a final render of a video.
If one spends most of the time interacting with their AV software, this
means that there's no simple answer to the "how many cores are optimal"
question. It depends on the precise mix of software you're running, what
each process requires of a CPU core, how much I/O each process instigates,
and so on. Another caveat is jackd: my current understanding is that jack2
can better utilise multiple cores, but I'm happy to be corrected on this
point by anyone with more knowledge than I (I really haven't looked into
this recently because for my current situation it's academic).
My system has been based on a first-generation i7 for the last couple of
years and I've noticed no major issues. However, in terms of audio work I'm
not really pushing the system all that hard during real work (I don't have
soft-synths running generally, and the plugins I use tend to be fairly
frugal with CPU requirements). This gives me 4 cores with hyperthreading,
and when I've done tests to see what it could handle, an audio-like
workload was able to push well above 400% loading (so the hyperthreading
seems to be doing something useful).
Having said all that and knowing the sort of work you do, I would probably
err on the side of getting as many cores as you can reasonably afford. As
time goes on they won't go astray; you'll have the flexibility to experiment
with new ways of doing things without being too constrained by the number of
cores at your disposal.
A final comment is that with the release of Intel's Haswell-based CPUs we
are at an interesting point in time. These new cores are certainly a big
win for mobile computing due to their lower power consumption for a given
performance level. However, whether the increase in outright performance -
the primary metric for a desktop - justifies the "new product" premium these
will attract for the next 6-12 months remains to be seen, especially since
one would also expect some runout discounting on the previous-generation
CPUs in the coming months.
Regards
jonathan
Anyone else noticed this:
Ardour 3.1-3-g1606996: changing the meter source (in/pre/post/custom)
in a mixer strip generates clicks in the audio output.
??
Ciao,
--
FA