Someone suggested that it would be impossible to map the 12 tones
onto the white keys of a keyboard and still get correct pitch bend
handling. Well, here's a solution:
The MIDI converter/driver:
This will have to make pitch bend available as a
separate control output from pitch. This does *not*
mean we have to apply pitch bend later, though!
Instead, we apply pitch bend to our PITCH output,
*and* then send pitch bend to the PITCHBEND output.
So, if you don't care about anything but continuous
pitch, just ignore this feature.
As to pitch bend range, that is handled entirely
by the converter/driver; the bend output is in the
same units as the pitch output.
With NOTEPITCH:
NOTEPITCH is in 1.0/note
PITCHBEND is in 1.0/note
PITCHBEND = (midi_bend - 8192) * midi_range * (1.0/8192.0);
NOTEPITCH = midi_pitch + PITCHBEND;
Without NOTEPITCH:
PITCH is in (1/12)/note
PITCHBEND is in (1/12)/note
PITCHBEND = (midi_bend - 8192) * midi_range * (1.0/8192.0);
PITCH = midi_pitch + PITCHBEND;
PITCHBEND *= 1.0/12.0;
PITCH *= 1.0/12.0;
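For what it's worth, the "without NOTEPITCH" case above could look like this as a plain C function - a minimal sketch; the function and struct names are mine, not part of any actual driver API:

```c
/* Convert MIDI note + 14 bit pitch bend into 1.0/octave PITCH and
 * PITCHBEND outputs, as described above. midi_range is the bend
 * range in notes (e.g. 2.0 for +/- 2 semitones). Names are
 * illustrative only. */
typedef struct {
	float pitch;		/* 1.0/octave, bend already applied */
	float pitchbend;	/* 1.0/octave, bend alone */
} pitch_out;

static pitch_out midi_to_pitch(int midi_pitch, int midi_bend,
		float midi_range)
{
	pitch_out o;
	o.pitchbend = (midi_bend - 8192) * midi_range * (1.0f / 8192.0f);
	o.pitch = midi_pitch + o.pitchbend;
	o.pitchbend *= 1.0f / 12.0f;
	o.pitch *= 1.0f / 12.0f;
	return o;
}
```

With the bend wheel centered (8192), PITCHBEND is exactly zero and PITCH is just the MIDI note scaled to octaves.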
The keyboard remapper:
This one will have inputs for both pitch and pitch
bend controls, since it needs to be able to remove
the bend and then reapply it later. remap_lut[] is
simply a table that "returns" the desired pitch for
each key in the scale. (Obviously, you'll have to
add a conditional to ignore the black keys, if you
don't want to map them to anything.)
With NOTEPITCH:
int note = (int)(NOTEPITCH - PITCHBEND);
NOTEPITCH = remap_lut[note] + PITCHBEND;
Without NOTEPITCH:
int note = (int)((PITCH - PITCHBEND) * 12.0);
PITCH = remap_lut[note] * (1.0/12.0) + PITCHBEND; /* remap_lut[] in 1.0/note units */
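Here's the NOTEPITCH variant of the remapper sketched in C, including the conditional for ignoring black keys; the white key tables and the whole-tone remap_lut[] are just illustrative choices, not anything the API prescribes:

```c
/* Map the 7 white keys per octave onto a scale, passing pitch bend
 * through untouched. Both values are in 1.0/note units and assumed
 * nonnegative. Black keys are returned unchanged. */
static const int white[12]       = { 1,0,1,0,1,1,0,1,0,1,0,1 };
static const int white_index[12] = { 0,0,1,0,2,3,0,4,0,5,0,6 };
/* example only: white keys -> whole-tone scale */
static const int remap_lut[7]    = { 0, 2, 4, 6, 8, 10, 12 };

static float remap_notepitch(float notepitch, float pitchbend)
{
	int note = (int)(notepitch - pitchbend);
	int key = note % 12;
	int octave = note / 12;
	if (!white[key])
		return notepitch;	/* ignore black keys */
	return octave * 12 + remap_lut[white_index[key]] + pitchbend;
}
```

So F (key 5 in the octave) becomes white key number 3, which the example table maps to note 6 above the octave's C.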
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
(Same thing again...)
---------- Forwarded Message ----------
Subject: Re: [linux-audio-dev] XAP: Pitch control
Date: Wed, 11 Dec 2002 18:15:59 +0100
From: David Olofson <david(a)olofson.net>
To: Nathaniel Virgo <nathaniel.virgo(a)ntlworld.com>
On Wednesday 11 December 2002 18.09, Nathaniel Virgo wrote:
> On Wednesday 11 December 2002 4:29 pm, David Olofson wrote:
> > On Wednesday 11 December 2002 13.59, David Gerard Matthews wrote:
> > > Steve Harris wrote:
> > > >On Wed, Dec 11, 2002 at 12:40:18 +0000, Nathaniel Virgo wrote:
> > > >>I can't really say I can think of a better way though.
> > > >> Personally I'd leave scales out of the API and let the host
> > > >> deal with it, sticking to 1.0/octave throughout, but I can
> > > >> see the advantages of this as well.
> > > >
> > > >We could put it to a vote ;)
> > > >
> > > >- Steve
> > >
> > > I vote 1.0/octave.
> >
> > So do I, definitely.
> >
> > There has never been an argument about <something>/octave, and
> > there no longer is an argument about 1.0/octave.
> >
> > The "argument" is about whether or not we should have a scale
> > related pitch control type *as well*. It's really more of a hint
> > than an actual data type, as you could just assume "1tET" and use
> > both as 1.0/octave.
>
> I don't think that should be permitted. I think that this case
> should be handled by a trivial scale converter that does nothing.
> No synth should be allowed to take a note_pitch input, and nothing
> except a scale converter should be allowed to assume any particular
> meaning for a note_pitch input.
I like the idea of enforced "explicit casting", but I think it's
rather restrictive not to allow synths to take note_pitch. That would
make it impossible to have synths with integrated event processors
(including scale converters; although *that* might actually be a good
idea).
Either way, there will *not* be a distinction between synths and
other plugins in the API. Steinberg made that mistake, and has been
forced to correct it. Let's not repeat it.
> If you have an algorithm that needs
> to know something about the actual pitch rather than position on a
> scale then it should operate on linear_pitch instead.
Yes indeed - that's what note_pitch vs linear_pitch is all about.
> I think that
> in this scheme note_pitch and linear_pitch are two completely
> different things and shouldn't be interchangeable.
You're right. Allowing implicit casting in the 1tET case is a pure
performance hack.
> That way you
> can enforce the correct order of operations:
>
> Sequencer
>
> | note_pitch signal
>
> V
> scaled pitch bend (eg +/- 2 tones) /
> arpeggiator / shift along scale /
> other scale-related effects
>
> | note_pitch signal
>
> V
> scale converter (could be trivial)
>
> | linear_pitch signal
>
> V
> portamento / vibrato /
> relative-pitch arpeggiator /
> interval-preserving transpose /
> other frequency-related effects
>
> | linear_pitch signal
>
> V
> synth
>
> That way anyone who doesn't want to worry about notes and scales
> can just always work in linear_pitch and know they'll never see
> anything else.
Yes. But anyone who doesn't truly understand all this should not go
into the advanced options menu and check the "Allow implicit casting
of note_pitch into linear_pitch" box.
So, I basically agree with you. I was only suggesting a host side
performance hack for 1.0/octave diehards. It has nothing to do with
the API.
Hi.
Today I released a new version of ZynAddSubFX.
ZynAddSubFX is a very powerful software synthesizer,
licensed under GPL v.2.
News:
- Added instrument banks (I am sure that you'll like
this)
- The BandPass Filter's output amplitude was increased
- A few fixes to FFTwrapper. See the documentation in
"FFTwrapper.h" if you get error messages.
Paul.
Didn't we come up with some good ammo in case anyone decided to sue?
--
Patrick Shirkey - Boost Hardware Ltd.
For the discerning hardware connoisseur
http://www.boosthardware.com
http://www.djcj.org - The Linux Audio Users guide
========================================
Being on stage with the band in front of crowds shouting, "Get off! No!
We want normal music!", I think that was more like acting than anything
I've ever done.
Goldie, 8 Nov, 2002
The Scotsman
Hi,
I'm currently looking at JACK (http://jackit.sourceforge.net/) for a small
project I'd like to work on some time soon. It sounds like a promising concept.
It's interesting for me because I don't have to write my own audio loop. My
questions are:
-Is it in a state where I can actually use it? Or are there still so many
things to be done that you wouldn't advise me to build something on top of it?
-Is there any competing "product" at the moment? What are the chances that JACK
will be the standard in the future? (Try to remain as objective as you can,
please.)
Thanks for your help.
-Oliver
Andrew Morton wrote:
>At http://www.zip.com.au/~akpm/linux/2.4.20-low-latency.patch.gz
>
>Very much in sustaining mode. It includes a fix for a livelock
>problem in fsync() from Stephen Tweedie.
Hi,
I won't be able to test this patch for the next 2-3 weeks, but I'd be
interested in whether this patch is able to cure the latency problems
of Red Hat 8.0.
I think Red Hat 8.0 is a nice desktop distro, and thus it would be good
if we achieve low latencies on it too.
While discussing RH8 on the #lad channel on IRC, Jussi L. told me
that ext3 causes latency spikes during writes because of journal
commits etc., but according to him it seems that there are other latency
sources too (he said probably libc).
E.g. he tried a LL kernel on RH7.3 with ReiserFS and it worked fine,
while RH8.0 with ReiserFS did cause latency peaks.
So my question is: does this patch fix latency problems on Red Hat 8.0 ?
cheers,
Benno
--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
What's going on with headers, docs, names and stuff?
I've ripped the event system and the FX API (the one with the state()
callback) from Audiality, and I'm shaping it up into my own XAP
proposal. There are headers for plugins and hosts, as well as the
beginnings of a host SDK lib. It's mostly the event system I'm
dealing with so far.
The modified event struct:
typedef struct XAP_event
{
	struct XAP_event *next;
	XAP_timestamp	when;	/* When to process */
	XAP_ui32	action;	/* What to do */
	XAP_ui32	target;	/* Target Cookie */
	XAP_f32		value;	/* (Begin) Value */
	XAP_f32		value2;	/* End Value */
	XAP_ui32	count;	/* Duration */
	XAP_ui32	id;	/* VVID */
} XAP_event;
The "global" event pool has now moved into the host struct, and each
event queue knows which host it belongs to. (So you don't have to
pass *both* queue and host pointers to the macros. For host side
code, that means you can't accidentally send events belonging to one
host to ports belonging to another.)
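A rough sketch of what that could look like; everything except XAP_event is hypothetical, since the actual host and queue structs aren't shown here:

```c
#include <stddef.h>

/* Minimal stand-ins for the types used by XAP_event. */
typedef unsigned XAP_ui32;
typedef float XAP_f32;
typedef XAP_ui32 XAP_timestamp;

typedef struct XAP_event
{
	struct XAP_event *next;
	XAP_timestamp when;
	XAP_ui32 action;
	XAP_ui32 target;
	XAP_f32 value;
	XAP_f32 value2;
	XAP_ui32 count;
	XAP_ui32 id;
} XAP_event;

/* Hypothetical host with its own free event pool, and a queue that
 * knows its host - so macros only need the queue pointer. */
typedef struct XAP_host
{
	XAP_event *pool;	/* free list */
} XAP_host;

typedef struct XAP_queue
{
	XAP_host *host;		/* owning host */
	XAP_event *first;
} XAP_queue;

/* Grab an event from the pool owned by the queue's host. */
static XAP_event *xap_event_alloc(XAP_queue *q)
{
	XAP_event *e = q->host->pool;
	if (e)
		q->host->pool = e->next;
	return e;
}

/* Tiny self-check: build a two-event pool and count allocations. */
static int xap_pool_demo(void)
{
	static XAP_event ev[2];
	XAP_host h = { &ev[0] };
	XAP_queue q = { &h, NULL };
	int n = 0;
	ev[0].next = &ev[1];
	ev[1].next = NULL;
	while (xap_event_alloc(&q))
		++n;
	return n;
}
```

The point is just that since the queue carries its host pointer, an event can only ever come from (and go back to) the pool of the host the port belongs to.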
Oh, well. Time for some sleep...
Just an observation about an alternative path on softsynths: a LADSPA plugin
or network can be used easily enough as a softsynth using control-voltage
(CV) approaches (a few already exist). It's just a matter of agreeing on the
conventions - implementation is trivial.
I've been meaning to finish writing PNet for a while (I've mentioned it a
few times) - essentially an environment where LADSPA plugins are strung
together to form a "patch" and are wired up to "standard" CV controls for
pitch, velocity, MIDI CC etc. These CV components and outputs can be
provided by the host as "fake" plugins providing the CV signals based on
MIDI input (or by using a non-LADSPA convention). This is trivial to
implement and provides an extremely flexible way to build plugin-based
softsynths from LADSPA components - or to wire existing self-contained
LADSPA soft synths (e.g. the "analogue" synth by David Bartold in the CMT
library, see http://www.ladspa.org/cmt/plugins.html) up to MIDI streams.
All a question of time - if anyone wants to do the rest of the
implementation then please let me know. The code required to do the above
also provides a nice way to store patches of plugins for standard processing
chains. Patches would probably be stored as XML representations of
pure-LADSPA networks. BTW, is anyone doing this already? If so, 50% of the
code is already done. ;-) I'm thinking in terms of defining a synth using
two patches - one to define the per-note network required (e.g.
CV->osc->filter->OUT) and another for any per-instrument post processing
(e.g. IN->chorus->reverb->OUT).
--Richard
I was in this long thread about pitch control on the VST list, and I
think I learned a few things. (For a change! ;-D)
There are times when continuous, linear pitch (what I have in
Audiality) is perfectly fine - and in those cases, it's by far the
simplest possible way you can control the pitch of a synth. You get
note pitch, pitch bend, continuous pitch control over the whole range,
whatever scales you like and all that, using *only a single
pitch->frequency conversion* somewhere in your synth code.
I will bet almost anything that there simply cannot be an easier way
of dealing with this.
*However*, in some cases, you may not be all that interested in the
actual pitch, but rather just want to deal with the notes in whatever
scale the user wants to deal with. One example would be a simple,
basic arpeggiator. Sure, you *could* do that with linear pitch, but
then the plugin would have to either assume that you want 12tET (or
whatever), or you need a way to tell it what scale you want to use
for the output. (Note that the kind of arpeggiator I'm thinking about
here may be expected to generate a full, modulated chord from a
single note, so it can't just look at a full input chord and pick the
exact pitches from that.)
In that case, you'd much rather have input more similar to integer
MIDI pitch, and *possibly* pitch bend to go with that. This could
indeed be expressed as "linear pitch" as well (float; 1.0 per
octave), but with one very important difference: it would actually be
1.0 per *note* - where what a "note" is is not strictly defined or
known to the plugin. The plugin just assumes that the user knows what
0-4-7 means, if he/she enters that for "arpeggio offsets". The plugin
also assumes that the user will put a suitable note_pitch to
linear_pitch pitch converter in between the output and the synth, or
that the synth understands note_pitch events.
Note that linear_pitch = note_pitch * (1.0/12.0) for 12tET, so these
"converters" (or note_pitch support) can be very trivial to implement.
If you want "weird" scales, it gets slightly more complicated, but
the *major* point here is that no synth plugin is required to do this
- and still, every synth plugin can use any scale!
(How many VSTi plugins actually support non-12tET scales? ;-)
Hmm... As to having a synth support *both* note_pitch and
linear_pitch controls, I suppose that would effectively just be a
dual interface to a single internal control value. Send something to
linear_pitch, and it goes directly into the internal pitch variable.
Send it as note_pitch, and it gets multiplied by (1.0/12.0) or is
passed through an interpolated "weird scale" table first. Makes sense?
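That dual interface could be sketched like this - all names are made up, and linear interpolation is just one way to do the "weird scale" table:

```c
/* Single internal pitch variable (1.0/octave) fed by two control
 * inputs. Names and the interpolation scheme are illustrative. */
typedef struct {
	float pitch;		/* internal: 1.0/octave */
	const float *scale;	/* optional scale table, in octaves */
	int scale_len;
} voice;

static void set_linear_pitch(voice *v, float p)
{
	v->pitch = p;		/* straight through */
}

static void set_note_pitch(voice *v, float n)
{
	int i;
	float frac, lo, hi;

	if (!v->scale) {
		v->pitch = n * (1.0f / 12.0f);	/* assume 12tET */
		return;
	}
	/* linear interpolation in the "weird scale" table */
	i = (int)n;
	frac = n - (float)i;
	if (i < 0) {
		i = 0;
		frac = 0.0f;
	} else if (i >= v->scale_len - 1) {
		i = v->scale_len - 1;
		frac = 0.0f;
	}
	lo = v->scale[i];
	hi = frac > 0.0f ? v->scale[i + 1] : lo;
	v->pitch = lo + frac * (hi - lo);
}
```

Either way, both entry points end up writing the same internal 1.0/octave value.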