> > a softstudio; it's pretty far already and
> > the first public release is scheduled Q1/2003.
>
> for Linux, obviously? ;-)
Yes. Linux, GPL. MONKEY is about 30,000 lines of C++ at the moment. I
still have to make a final architecture revision based on some issues
reading this list has evoked, and prepare the whole thing for release.
> > First, I don't understand why you want to design a "synth API". If you
> > want to play a note, why not instantiate a DSP network that does the job,
> > connect it to the main network (where system audio outs reside), run it
> > for a while and then destroy it? That is what events are in my system -
> > timed modifications to the DSP network.
>
> because a standard API is needed for dynamically loaded plugins!
> LADSPA doesn't really cater for event-driven processes (synths)
Yes, I understand it now. In principle, audio and control ports could
almost suffice, but sample-accurate events sent to plugins are more
efficient, and allow one to pass around structured data.
I shall have to add something like this to MONKEY. Right now it supports
LADSPA via a wrapper (the native API is pretty complex), although
creating a nice GUI based on just the information in a LADSPA .so is not
possible, mainly due to the lack of titles for enums.
> For a complete contrast, please look over
> http://amsynthe.sourceforge.net/amp_plugin.h which i am still toying
> with as a(nother) plugin api suitable for synths. I was hoping to wait
I like this better than the more complex proposal being worked on, except
that I don't much care for MIDI myself. But I also realize the need for
the event/channel/bay/voice monster because it is more efficient and
potentially doesn't require plugins to be instantiated while a song is
playing. I don't think one API can fit all sizes.
--
Sami Perttu "Flower chase the sunshine"
Sami.Perttu(a)hiit.fi http://www.cs.helsinki.fi/u/perttu
Someone suggested that it would be impossible to map the 12 tones
onto the white keys of a keyboard, and still get correct pitch bend
handling. Well, here's a solution:
The MIDI converter/driver:
This will have to make pitch bend available as a
separate control output from pitch. This does *not*
mean we have to apply pitch bend later, though!
Instead, we apply pitch bend to our PITCH output,
*and* then send pitch bend to the PITCHBEND output.
So, if you don't care for anything but continuous
pitch, just ignore this feature.
As to pitch bend range, that is handled entirely
by the converter/driver; the bend output is in the
same units as the pitch output.
With NOTEPITCH:
    NOTEPITCH is in 1.0/note
    PITCHBEND is in 1.0/note
    PITCHBEND = (midi_bend - 8192) * midi_range * (1.0/8192.0);
    NOTEPITCH = midi_pitch + PITCHBEND;

Without NOTEPITCH:
    PITCH is in (1/12)/note
    PITCHBEND is in (1/12)/note
    PITCHBEND = (midi_bend - 8192) * midi_range * (1.0/8192.0);
    PITCH = midi_pitch + PITCHBEND;
    PITCHBEND *= 1.0/12.0;
    PITCH *= 1.0/12.0;
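The "without NOTEPITCH" case above can be sketched as a small C function. The function name and the packaging of the two outputs are mine; the formulas are taken directly from the post. midi_bend is the raw 14-bit bend value (0..16383, centre 8192) and midi_range is the bend range in notes.

```c
/* Sketch of the converter/driver above (the "without NOTEPITCH" case).
 * Both outputs come out in 1.0/octave units. */
static void midi_to_pitch(int midi_pitch, int midi_bend, double midi_range,
                          double *pitch, double *pitchbend)
{
    /* Compute the bend in note units first... */
    double bend = (midi_bend - 8192) * midi_range * (1.0 / 8192.0);

    /* ...apply it to the pitch, then scale both outputs to 1.0/octave. */
    *pitch = (midi_pitch + bend) * (1.0 / 12.0);
    *pitchbend = bend * (1.0 / 12.0);
}
```

With the wheel centred (8192), PITCHBEND comes out as 0 and PITCH as midi_pitch/12, so downstream plugins that only care about continuous pitch can indeed just ignore the PITCHBEND output.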
The keyboard remapper:
This one will have inputs for both pitch and pitch
bend controls, since it needs to be able to remove
the bend and then reapply it later. remap_lut[] is
simply a table that "returns" the desired pitch for
each key in the scale. (Obviously, you'll have to
add a conditional to ignore the black keys, if you
don't want to map them to anything.)
With NOTEPITCH:
    int note = (int)(NOTEPITCH - PITCHBEND);
    NOTEPITCH = remap_lut[note] + PITCHBEND;

Without NOTEPITCH:
    int note = (int)((PITCH - PITCHBEND)*12.0);
    PITCH = remap_lut[note]*(1.0/12.0) + PITCHBEND;
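The NOTEPITCH case of the remapper above, as a runnable C sketch. The C-major lookup table is an example of my choosing (it covers one octave of white keys only, with no conditional for black keys); only the strip-bend/remap/reapply-bend logic is from the post.

```c
/* Example remap table: white keys 0..6 of one octave mapped to the
 * semitones of C major, in 1.0/note units. */
static const double remap_lut[7] = { 0, 2, 4, 5, 7, 9, 11 };

/* Sketch of the keyboard remapper above (NOTEPITCH case). */
static double remap(double notepitch, double pitchbend)
{
    int note = (int)(notepitch - pitchbend);  /* remove the bend first */
    return remap_lut[note] + pitchbend;       /* remap, then reapply it */
}
```

So key 3 (the fourth white key) maps to semitone 5, and any bend applied on top rides along unchanged.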
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
(Same thing again...)
---------- Forwarded Message ----------
Subject: Re: [linux-audio-dev] XAP: Pitch control
Date: Wed, 11 Dec 2002 18:15:59 +0100
From: David Olofson <david(a)olofson.net>
To: Nathaniel Virgo <nathaniel.virgo(a)ntlworld.com>
On Wednesday 11 December 2002 18.09, Nathaniel Virgo wrote:
> On Wednesday 11 December 2002 4:29 pm, David Olofson wrote:
> > On Wednesday 11 December 2002 13.59, David Gerard Matthews wrote:
> > > Steve Harris wrote:
> > > >On Wed, Dec 11, 2002 at 12:40:18 +0000, Nathaniel Virgo wrote:
> > > >>I can't really say I can think of a better way though.
> > > >> Personally I'd leave scales out of the API and let the host
> > > >> deal with it, sticking to 1.0/octave throughout, but I can
> > > >> see the advantages of this as well.
> > > >
> > > >We could put it to a vote ;)
> > > >
> > > >- Steve
> > >
> > > I vote 1.0/octave.
> >
> > So do I, definitely.
> >
> > There has never been an argument about <something>/octave, and
> > there no longer is an argument about 1.0/octave.
> >
> > The "argument" is about whether or not we should have a scale
> > related pitch control type *as well*. It's really more of a hint
> > than an actual data type, as you could just assume "1tET" and use
> > both as 1.0/octave.
>
> I don't think that should be permitted. I think that this case
> should be handled by a trivial scale converter that does nothing.
> No synth should be allowed to take a note_pitch input, and nothing
> except a scale converter should be allowed to assume any particular
> meaning for a note_pitch input.
I like the idea of enforced "explicit casting", but I think it's
rather restrictive not to allow synths to take note_pitch. That would
make it impossible to have synths with integrated event processors
(including scale converters; although *that* might actually be a good
idea).
Either way, there will *not* be a distinction between synths and
other plugins in the API. Steinberg made that mistake, and has been
forced to correct it. Let's not repeat it.
> If you have an algorithm that needs
> to know something about the actual pitch rather than position on a
> scale then it should operate on linear_pitch instead.
Yes indeed - that's what note_pitch vs linear_pitch is all about.
> I think that
> in this scheme note_pitch and linear_pitch are two completely
> different things and shouldn't be interchangeable.
You're right. Allowing implicit casting in the 1tET case is a pure
performance hack.
> That way you
> can enforce the correct order of operations:
>
> Sequencer
>
> | note_pitch signal
>
> V
> scaled pitch bend (eg +/- 2 tones) /
> arpeggiator / shift along scale /
> other scale-related effects
>
> | note_pitch signal
>
> V
> scale converter (could be trivial)
>
> | linear_pitch signal
>
> V
> portamento / vibrato /
> relative-pitch arpeggiator /
> interval-preserving transpose /
> other frequency-related effects
>
> | linear_pitch signal
>
> V
> synth
>
> That way anyone who doesn't want to worry about notes and scales
> can just always work in linear_pitch and know they'll never see
> anything else.
Yes. But anyone who doesn't truly understand all this should not go
into the advanced options menu and check the "Allow implicit casting
of note_pitch into linear_pitch" box.
So, I basically agree with you. I was only suggesting a host side
performance hack for 1.0/octave diehards. It has nothing to do with
the API.
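For 12tET, the scale converter in the middle of Nathaniel's chain really is trivial. Assuming note_pitch in 1.0/note and linear_pitch in 1.0/octave (the units used earlier in the thread), it is nothing but a unit change:

```c
/* Trivial 12tET scale converter: note_pitch (1.0/note) in,
 * linear_pitch (1.0/octave) out. A non-trivial converter would look
 * the note up in a tuning table instead of multiplying. */
static double scale_convert_12tet(double note_pitch)
{
    return note_pitch * (1.0 / 12.0);
}
```

This is also why the 1tET "implicit cast" is tempting as a performance hack: there the conversion is the identity, and the converter can be skipped entirely.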
//David Olofson - Programmer, Composer, Open Source Advocate
Hi.
Today I released a new version of ZynAddSubFX.
ZynAddSubFX is a very powerful software synthesizer,
licensed under GPL v.2.
News:
- Added instrument banks (I am sure that you'll like
this)
- the BandPass Filter's output amplitude was increased
- a few fixes to FFTwrapper. See the documentation in
"FFTwrapper.h" if you get error messages.
Paul.
Didn't we come up with some good ammo in case anyone decided to sue?
--
Patrick Shirkey - Boost Hardware Ltd.
For the discerning hardware connoisseur
http://www.boosthardware.com
http://www.djcj.org - The Linux Audio Users guide
========================================
Being on stage with the band in front of crowds shouting, "Get off! No!
We want normal music!", I think that was more like acting than anything
I've ever done.
Goldie, 8 Nov, 2002
The Scotsman
Hi,
I'm currently looking at JACK (http://jackit.sourceforge.net/) for a small
project I'd like to work on some time soon. It sounds like a promising concept.
It's interesting for me because I don't have to write my own audio loop. My
questions are:
-Is it in a state where I can actually use it? Or are there still so many
things to be done that you wouldn't advise me to build something on top of it?
-Is there any competing "product" at the moment? What are the chances that JACK
will be the standard in the future? (Try to remain as objective as you can,
please.)
Thanks for your help.
-Oliver
Andrew Morton wrote:
>At http://www.zip.com.au/~akpm/linux/2.4.20-low-latency.patch.gz
>
>Very much in sustaining mode. It includes a fix for a livelock
>problem in fsync() from Stephen Tweedie.
Hi,
I won't be able to test this patch for the next 2-3
weeks, but I'd be interested to know whether it is able to cure the
latency problems of Red Hat 8.0.
I think Red Hat 8.0 is a nice desktop distro, and thus it would be good
if we could achieve low latencies on it too.
While discussing RH8 on the #lad channel on IRC, Jussi L. told me
that ext3 causes latency spikes during writes because of journal
commits etc., but according to him there seem to be other latency
sources too (he said probably libc).
E.g. he tried a LL kernel on RH7.3 with ReiserFS and it worked fine,
while RH8.0 with ReiserFS did cause latency peaks.
So my question is: does this patch fix the latency problems on Red Hat 8.0?
cheers,
Benno
--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
What's going on with headers, docs, names and stuff?
I've ripped the event system and the FX API (the one with the state()
callback) from Audiality, and I'm shaping it up into my own XAP
proposal. There are headers for plugins and hosts, as well as the
beginnings of a host SDK lib. It's mostly the event system I'm
dealing with so far.
The modified event struct:
typedef struct XAP_event
{
    struct XAP_event *next;
    XAP_timestamp when;     /* When to process */
    XAP_ui32 action;        /* What to do */
    XAP_ui32 target;        /* Target Cookie */
    XAP_f32 value;          /* (Begin) Value */
    XAP_f32 value2;         /* End Value */
    XAP_ui32 count;         /* Duration */
    XAP_ui32 id;            /* VVID */
} XAP_event;
The "global" event pool has now moved into the host struct, and each
event queue knows which host it belongs to. (So you don't have to
pass *both* queue and host pointers to the macros. For host side
code, that means you can't accidentally send events belonging to one
host to ports belonging to another.)
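As a sketch of how such events might be queued for sample-accurate delivery: the XAP_* typedefs and the sorted-insert helper below are my assumptions (the real headers aren't public yet); only the struct layout is from the post.

```c
#include <stddef.h>

/* Assumed typedefs; only the struct layout is from the post. */
typedef unsigned int XAP_timestamp;
typedef unsigned int XAP_ui32;
typedef float XAP_f32;

typedef struct XAP_event
{
    struct XAP_event *next;
    XAP_timestamp when;     /* When to process */
    XAP_ui32 action;        /* What to do */
    XAP_ui32 target;        /* Target Cookie */
    XAP_f32 value;          /* (Begin) Value */
    XAP_f32 value2;         /* End Value */
    XAP_ui32 count;         /* Duration */
    XAP_ui32 id;            /* VVID */
} XAP_event;

/* Hypothetical helper: insert an event into a singly linked queue kept
 * sorted by timestamp, so a plugin can pop events in time order while
 * rendering a block. */
static void queue_insert(XAP_event **head, XAP_event *e)
{
    while (*head && (*head)->when <= e->when)
        head = &(*head)->next;
    e->next = *head;
    *head = e;
}
```

In a real host the events would of course come from the per-host event pool rather than the stack, per the paragraph above.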
Oh, well. Time for some sleep...
//David Olofson - Programmer, Composer, Open Source Advocate
Just an observation about an alternative path on softsynths: a LADSPA plugin
or network can be used easily enough as a softsynth using control-voltage
(CV) approaches (a few already exist). It's just a matter of agreeing on the
conventions - implementation is trivial.
I've been meaning to finish writing PNet for a while (I've mentioned it a
few times) - essentially an environment where LADSPA plugins are strung
together to form a "patch" and are wired up to "standard" CV controls for
pitch, velocity, MIDI CC etc. These CV components and outputs can be
provided by the host as "fake" plugins providing the CV signals based on
MIDI input (or by using a non-LADSPA convention). This is trivial to
implement and provides an extremely flexible way to build plugin-based
softsynths from LADSPA components - or to wire existing self-contained
LADSPA soft synths (e.g. the "analogue" synth by David Bartold in the CMT
library, see http://www.ladspa.org/cmt/plugins.html) up to MIDI streams.
All a question of time - if anyone wants to do the rest of the
implementation then please let me know. The code required to do the above
also provides a nice way to store patches of plugins for standard processing
chains. Patches would probably be stored as XML representations of
pure-LADSPA networks. BTW, is anyone doing this already? If so, 50% of the
code is already done. ;-) I'm thinking in terms of defining a synth using
two patches - one to define the per-note network required (e.g.
CV->osc->filter->OUT) and another for any per-instrument post processing
(e.g. IN->chorus->reverb->OUT).
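One possible shape for those "standard" CV conventions, purely for illustration (fixing the actual convention is exactly the open question above): pitch as 1.0/octave with MIDI note 0 at 0.0, velocity normalised to [0, 1].

```c
/* Hypothetical CV conventions for the pitch and velocity control ports
 * feeding the per-note network (CV->osc->filter->OUT). The 1.0/octave
 * pitch mapping and [0,1] velocity range are illustrative choices,
 * not an agreed standard. */
static float midi_note_to_cv(int note)
{
    return note / 12.0f;        /* 1.0/octave, MIDI note 0 -> 0.0 */
}

static float midi_velocity_to_cv(int velocity)
{
    return velocity / 127.0f;   /* 0..127 -> 0.0..1.0 */
}
```

The host's "fake" plugins would simply write these values to the patch's pitch and velocity input ports on note-on.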
--Richard