Well, this might be early, but I needed to do something slightly less
demanding for a while. So I hacked a small presentation:
http://olofson.net/xap/
Please, check facts and language (not my native tongue), and suggest
changes or additions.
(Oops! Clicked on dat doggy-like animal in da process... ;-)
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
Actually, it may not be all that hairy after all. Consider these
events:
XAP_A_BUFFER
Function: Someone just gave you an audio
buffer of the standard size used
by the Host. It's all yours now;
do what you like with it. Don't
forget to free it eventually!
Arguments: Pointer to the buffer.
Cookie. (So you know what it's for.)
Size. (# of BYTES actually used.)
XAP_A_REQUEST_DATA
Function: Ask for a number of buffers.
The API doesn't guarantee anything
about when they'll arrive; that's
something you and the guy at the
other end will have to discuss.
Arguments: Size. (# of BYTES of data you want.)
Cookie. (So the other guy knows what it's for.)
When. (Your deadline. Be sensible...!)
If you want data streamed to your plugin, you'll send
XAP_A_REQUEST_DATA events, and (eventually) receive XAP_A_BUFFER
events. Connections are made by the host (as usual), although each
streaming connection actually needs *two* connections, one in each
direction. (The API should probably cover this, so hosts and/or users
don't have to mess with it manually.)
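As a sketch of what such events might look like in C (the names, fields, and struct layout here are illustrative guesses, not an actual XAP header):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical event codes - illustrative only. */
enum {
    XAP_A_BUFFER,       /* "Here's a buffer; it's yours now." */
    XAP_A_REQUEST_DATA  /* "Please send me this much data." */
};

typedef struct XAP_event {
    int type;               /* XAP_A_BUFFER or XAP_A_REQUEST_DATA */
    uint32_t cookie;        /* so the receiver knows what it's for */
    union {
        struct {            /* XAP_A_BUFFER */
            void *buffer;   /* ownership passes to the receiver */
            size_t size;    /* # of BYTES actually used */
        } buffer;
        struct {            /* XAP_A_REQUEST_DATA */
            size_t size;    /* # of BYTES of data wanted */
            uint32_t when;  /* deadline, in host timestamp units */
        } request;
    } a;
} XAP_event;
```

The union keeps the event small and fixed-size, which matters if events travel through lock-free queues between the host and plugins.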
Ok?
//David Olofson - Programmer, Composer, Open Source Advocate
> Steve Harris <S.W.Harris(a)ecs.soton.ac.uk> writes:
>
> SAOL is still block based AFAIK.
See:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/wemp01.pdf
Sfront does no block-based optimizations. And for many
purposes, sfront is fast enough to do the job.
It may very well be that sfront could go even faster
with blocking, although the analysis is quite subtle --
in a machine with a large cache, and a moderate-sized
SAOL program, you're running your code and your data
in the cache most of the time.
Remember, blocking doesn't save you any operations, it
only improves memory access and overhead costs. If those
costs are minimal for a given decoder implementation,
there is not as much to gain.
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
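John's point - that blocking changes overhead and memory behavior but not the operation count - can be illustrated with a toy gain stage (illustrative code, not from sfront):

```c
/* Per-sample ("unblocked") processing: one call per sample. */
static inline float process_one(float x, float gain)
{
    return x * gain;
}

/* Block-based processing: exactly the same multiplies, but the
 * call and loop overhead is amortized over BLOCK samples, and the
 * data is walked linearly, which is friendlier to the cache. */
#define BLOCK 64

static void process_block(const float *in, float *out, float gain)
{
    for (int i = 0; i < BLOCK; ++i)
        out[i] = in[i] * gain;  /* identical operation count */
}
```

If the per-sample version already fits in cache and the call overhead is small, as in the sfront case above, blocking has little left to win.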
> > a softstudio; it's pretty far already and
> > the first public release is scheduled Q1/2003.
>
> for Linux, obviously? ;-)
Yes. Linux, GPL. MONKEY is about 30,000 lines of C++ at the moment. I
still have to make a final architecture revision, based on some issues
that reading this list has raised, and prepare the whole thing for
release.
> > First, I don't understand why you want to design a "synth API". If you
> > want to play a note, why not instantiate a DSP network that does the job,
> > connect it to the main network (where system audio outs reside), run it
> > for a while and then destroy it? That is what events are in my system -
> > timed modifications to the DSP network.
>
> because a standard API is needed for dynamically loaded plugins!
> LADSPA doesn't really cater for event-driven processes (synths)
Yes, I understand it now. In principle, audio and control ports could
almost suffice, but sample-accurate events sent to plugins are more
efficient and allow one to pass around structured data.
I shall have to add something like this to MONKEY. Right now it
supports LADSPA via a wrapper - the native API is pretty complex -
although creating a nice GUI based on just the information in a LADSPA
.so is not possible, mainly due to the lack of titles for enums.
> For a complete contrast, please look over
> http://amsynthe.sourceforge.net/amp_plugin.h which i am still toying
> with as a(nother) plugin api suitable for synths. I was hoping to wait
I like this better than the more complex proposal being worked on,
except that I don't much care for MIDI myself. But I also realize the
need for the event/channel/bay/voice monster, because it is more
efficient and potentially doesn't require plugins to be instantiated
while a song is playing. I don't think one API can fit all needs.
--
Sami Perttu "Flower chase the sunshine"
Sami.Perttu(a)hiit.fi http://www.cs.helsinki.fi/u/perttu
Someone suggested that it would be impossible to map the 12 tones
onto the white keys of a keyboard, and still get correct pitch bend
handling. Well, here's a solution:
The MIDI converter/driver:
This will have to make pitch bend available as a
separate control output from pitch. This does *not*
mean we have to apply pitch bend later, though!
Instead, we apply pitch bend to our PITCH output,
*and* then send pitch bend to the PITCHBEND output.
So, if you don't care about anything but continuous
pitch, just ignore this feature.
As to pitch bend range, that is handled entirely
by the converter/driver; the bend output is in the
same units as the pitch output.
With NOTEPITCH:
NOTEPITCH is in 1.0/note
PITCHBEND is in 1.0/note
PITCHBEND = (midi_bend - 8192) * midi_range * (1.0/8192.0);
NOTEPITCH = midi_pitch + PITCHBEND;
Without NOTEPITCH:
PITCH is in (1/12)/note
PITCHBEND is in (1/12)/note
PITCHBEND = (midi_bend - 8192) * midi_range * (1.0/8192.0);
PITCH = midi_pitch + PITCHBEND;
PITCHBEND *= 1.0/12.0;
PITCH *= 1.0/12.0;
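The converter math above, written out as plain C (a sketch; midi_range is the configured bend range in semitones):

```c
/* Convert a MIDI note number plus 14-bit pitch bend (0..16383,
 * center 8192) into the two control outputs described above.
 * A sketch of the math only, not a real converter/driver. */

/* With NOTEPITCH: both outputs in 1.0/note units. */
void convert_notepitch(int midi_pitch, int midi_bend, double midi_range,
                       double *notepitch, double *pitchbend)
{
    *pitchbend = (midi_bend - 8192) * midi_range * (1.0 / 8192.0);
    *notepitch = midi_pitch + *pitchbend;
}

/* Without NOTEPITCH: both outputs in octaves, i.e. (1/12)/note. */
void convert_pitch(int midi_pitch, int midi_bend, double midi_range,
                   double *pitch, double *pitchbend)
{
    double bend = (midi_bend - 8192) * midi_range * (1.0 / 8192.0);
    *pitch = (midi_pitch + bend) * (1.0 / 12.0);
    *pitchbend = bend * (1.0 / 12.0);
}
```

Note that the bend range never leaves the converter; downstream plugins only ever see pitch and bend in the same units.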
The keyboard remapper:
This one will have inputs for both pitch and pitch
bend controls, since it needs to be able to remove
the bend and then reapply it later. remap_lut[] is
simply a table that "returns" the desired pitch for
each key in the scale. (Obviously, you'll have to
add a conditional to ignore the black keys, if you
don't want to map them to anything.)
With NOTEPITCH:
int note = (int)(NOTEPITCH - PITCHBEND);
NOTEPITCH = remap_lut[note] + PITCHBEND;
Without NOTEPITCH:
int note = (int)((PITCH - PITCHBEND) * 12.0);
PITCH = remap_lut[note] * (1.0/12.0) + PITCHBEND;
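Fleshing out the NOTEPITCH variant above into compilable C, including the black-key conditional (the remap_lut values and the helper tables are purely illustrative, and notes are assumed non-negative):

```c
/* Which semitones within an octave are white keys (C D E F G A B),
 * and which white-key index (0..6) each one maps to. */
static const int is_white[12]    = {1,0,1,0,1,1,0,1,0,1,0,1};
static const int white_index[12] = {0,0,1,0,2,3,0,4,0,5,0,6};

/* Example remap_lut: play seven whole-tone degrees on the white
 * keys. Purely illustrative numbers, in 1.0/note units. */
static const double remap_lut[7] = {0, 2, 4, 6, 8, 10, 12};

/* Keyboard remapper, NOTEPITCH variant from the text. Removes the
 * bend, remaps the key, then reapplies the bend. Returns 0 and
 * leaves *notepitch untouched for black keys. Assumes note >= 0. */
int remap(double *notepitch, double pitchbend)
{
    int note = (int)(*notepitch - pitchbend);
    int octave = note / 12;
    int key = note % 12;
    if (!is_white[key])
        return 0;                   /* ignore black keys */
    *notepitch = octave * 12 + remap_lut[white_index[key]] + pitchbend;
    return 1;
}
```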
//David Olofson - Programmer, Composer, Open Source Advocate
(Same thing again...)
---------- Forwarded Message ----------
Subject: Re: [linux-audio-dev] XAP: Pitch control
Date: Wed, 11 Dec 2002 18:15:59 +0100
From: David Olofson <david(a)olofson.net>
To: Nathaniel Virgo <nathaniel.virgo(a)ntlworld.com>
On Wednesday 11 December 2002 18.09, Nathaniel Virgo wrote:
> On Wednesday 11 December 2002 4:29 pm, David Olofson wrote:
> > On Wednesday 11 December 2002 13.59, David Gerard Matthews wrote:
> > > Steve Harris wrote:
> > > >On Wed, Dec 11, 2002 at 12:40:18 +0000, Nathaniel Virgo wrote:
> > > >>I can't really say I can think of a better way though.
> > > >> Personally I'd leave scales out of the API and let the host
> > > >> deal with it, sticking to 1.0/octave throughout, but I can
> > > >> see the advantages of this as well.
> > > >
> > > >We could put it to a vote ;)
> > > >
> > > >- Steve
> > >
> > > I vote 1.0/octave.
> >
> > So do I, definitely.
> >
> > There has never been an argument about <something>/octave, and
> > there no longer is an argument about 1.0/octave.
> >
> > The "argument" is about whether or not we should have a scale
> > related pitch control type *as well*. It's really more of a hint
> > than an actual data type, as you could just assume "1tET" and use
> > both as 1.0/octave.
>
> I don't think that should be permitted. I think that this case
> should be handled by a trivial scale converter that does nothing.
> No synth should be allowed to take a note_pitch input, and nothing
> except a scale converter should be allowed to assume any particular
> meaning for a note_pitch input.
I like the idea of enforced "explicit casting", but I think it's
rather restrictive not to allow synths to take note_pitch. That would
make it impossible to have synths with integrated event processors
(including scale converters; although *that* might actually be a good
idea).
Either way, there will *not* be a distinction between synths and
other plugins in the API. Steinberg made that mistake, and has been
forced to correct it. Let's not repeat it.
> If you have an algorithm that needs
> to know something about the actual pitch rather than position on a
> scale then it should operate on linear_pitch instead.
Yes indeed - that's what note_pitch vs linear_pitch is all about.
> I think that
> in this scheme note_pitch and linear_pitch are two completely
> different things and shouldn't be interchangeable.
You're right. Allowing implicit casting in the 1tET case is a pure
performance hack.
> That way you
> can enforce the correct order of operations:
>
> Sequencer
>
> | note_pitch signal
>
> V
> scaled pitch bend (eg +/- 2 tones) /
> arpeggiator / shift along scale /
> other scale-related effects
>
> | note_pitch signal
>
> V
> scale converter (could be trivial)
>
> | linear_pitch signal
>
> V
> portamento / vibrato /
> relative-pitch arpeggiator /
> interval-preserving transpose /
> other frequency-related effects
>
> | linear_pitch signal
>
> V
> synth
>
> That way anyone who doesn't want to worry about notes and scales
> can just always work in linear_pitch and know they'll never see
> anything else.
Yes. But anyone who doesn't truly understand all this should not go
into the advanced options menu and check the "Allow implicit casting
of note_pitch into linear_pitch" box.
So, I basically agree with you. I was only suggesting a host side
performance hack for 1.0/octave diehards. It has nothing to do with
the API.
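For concreteness, the "trivial" scale converter really is trivial, and a 12tET one is barely more (a sketch, assuming note_pitch in 1.0/note and linear_pitch in 1.0/octave as discussed; not from any actual XAP header):

```c
/* Explicit scale conversion: note_pitch in, linear_pitch out. */

/* 1tET: note_pitch is already in octaves, so the "cast" is the
 * identity. This is the step the implicit-casting hack would skip. */
double scale_convert_1tet(double note_pitch)
{
    return note_pitch;
}

/* 12tET: note_pitch is in 1.0/note, so divide by 12 to get
 * linear_pitch in 1.0/octave. */
double scale_convert_12tet(double note_pitch)
{
    return note_pitch * (1.0 / 12.0);
}
```

Any other scale (just intonation, arbitrary microtonal tables) would replace the multiply with a lookup, which is exactly why it belongs in a dedicated converter plugin.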
//David Olofson - Programmer, Composer, Open Source Advocate
Hi.
Today I released a new version of ZynAddSubFX.
ZynAddSubFX is a very powerful software synthesizer,
licensed under GPL v.2.
News:
- Added instrument banks (I am sure that you'll like
this)
- the BandPass Filter's output amplitude was increased
- a few fixes to FFTwrapper. See the documentation in
"FFTwrapper.h" if you get error messages.
Paul.
Didn't we come up with some good ammo in case anyone decided to sue?
--
Patrick Shirkey - Boost Hardware Ltd.
For the discerning hardware connoisseur
http://www.boosthardware.com
http://www.djcj.org - The Linux Audio Users guide
========================================
Being on stage with the band in front of crowds shouting, "Get off! No!
We want normal music!", I think that was more like acting than anything
I've ever done.
Goldie, 8 Nov, 2002
The Scotsman
Hi,
I'm currently looking at JACK (http://jackit.sourceforge.net/) for a small
project I'd like to work on some time soon. It sounds like a promising concept.
It's interesting for me because I don't have to write my own audio loop. My
questions are:
-Is it in a state where I can actually use it? Or are there still so many things
to be done that you wouldn't advise me to build something on top of it?
-Is there any competing "product" at the moment? What are the chances that JACK
will be the standard in the future? (Try to remain as objective as you can,
please.)
Thanks for your help.
-Oliver
Andrew Morton wrote:
>At http://www.zip.com.au/~akpm/linux/2.4.20-low-latency.patch.gz
>
>Very much in sustaining mode. It includes a fix for a livelock
>problem in fsync() from Stephen Tweedie.
Hi,
I won't be able to test this patch for the next 2-3 weeks, but I'd be
interested in whether it is able to cure the latency problems of
Red Hat 8.0.
I think Red Hat 8.0 is a nice desktop distro, and thus it would be
good if we could achieve low latencies on it too.
While discussing RH8 on the #lad channel on IRC, Jussi L. told me
that ext3 causes latency spikes during writes because of journal
commits etc., but according to him there seem to be other latency
sources too (he said probably libc).
E.g. he tried a LL kernel on RH7.3 with ReiserFS and it worked fine,
while RH8.0 with ReiserFS did cause latency peaks.
So my question is: does this patch fix the latency problems on Red Hat 8.0?
cheers,
Benno
--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.