1. getTimeInfo() does *not* take a time argument of any kind;
only a 32 bit filter to say which fields you actually need.
That is, you can only get the full info for the start of
the buffer. There is a tempoAt() call that gives you the
tempo at the specified sample frame in the buffer. (See
the sketch after this list.)
2. The VstTimeInfo struct is interesting. I like the unit
for musical time: pulses with 1 PPQN. :-) Tempo is in
BPM. (Why not QN/s? ;-) There is a field that says where
the last bar starts. There's also time signature, SMPTE,
MIDI clock (samples to next clock) and a bunch of flags.
Flags include transport changed, playing and "cycle
active". (The latter come with loop start and end positions,
obviously.) And automation reading + writing... It is also
interesting to note that the "sample position" field in
here is timeline bound - it may jump! There's a nanoseconds
field as well, holding the "system time" corresponding to
the first sample in the buffer.
Conclusion:
a. The information is there, but you have to *pull*
it from the host. Doesn't seem like you're ever
notified about tempo changes or transport events.
b. Since you're asking the host, and there are no
other arguments, there is no way that one plugin
can keep track of more than one timeline. It
seems to be assumed that there is only one
timeline in a network.
3. There are calls that allow plugins to get the audio input
and output latency.
Conclusion:
c. I'm assuming that this is mostly useful for
VU-meters and other stuff that needs to be
delayed appropriately for correct display.
Obviously not an issue when the audio latency
is significantly shorter than the duration of
one video frame on the monitor! ;-) Seriously
though, this is needed for "high latency"
applications to display meters and stuff
correctly. They're not very helpful if they're
half a second early!
4. There is a feature that allows plugins to tell the host
which "category" they would fit in. (There is some
enumeration for that somewhere.) Might be rather handy
when you have a large number of plugins installed...
5. A getCurrentProcessLevel() host call lets the plugin ask
what kind of context it's being called from. Can return
values corresponding to "not supported", "user/GUI",
"audio/irq", "sqeuncer/irq" and "user/offline". Other
values are possible and "probably" (as they say) mean
something that pre-empts the user thread.
6. There is support for using the whole process() call
asynchronously. This is for external DSPs and stuff. The
plugin is expected to return "instantly" from process(),
and then the host wanders off to do something else. The
host will use another plugin call to poll, to see if the
data from the external DSP (or whatever) is available.
Conclusion:
d. I think we can ignore this feature. We don't
do external DSP, and we don't feel like having
hosts poll/busy-wait for data, do we...?
7. There's a bunch of calls specifically meant for off-line
processing. This is basically about accessing the host's
"open" files, setting markers and stuff, being notified
when files are opened, closed, changed, when markers are
changed etc. A plugin API in itself.
8. You can ask the host about the speaker arrangement,
for surround stuff...
9. There is a bypass feature, so that hosts can have
plugins implement sensible bypass for mono -> surround
and other non-obvious in/out relations. (As if in/out
relations ever were to be assumed "obvious"!)
10. Finally, the states. There are two calls related to this:
suspend() and resume(). Nothing much is said about these,
and they're not used for much in the examples. Just for
clearing delay buffers and the like. Memory is generally
allocated in the constructors and freed in the destructors,
but I'm not sure when and where you're supposed to
reallocate buffers if you need to... (This is when you
start realizing why compatibility problems with VST are
so common. Where are the reference docs!? *heh*)
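
To make the pull model concrete, here's a minimal sketch of how a
VST 2.x plugin might grab tempo info each block, using the
getTimeInfo()/tempoAt() calls named above. The class and everything
in it (GainSynth, currentBpm, resync()) is made up for illustration;
the kVst* flag names follow the 2.x headers:

    #include "audioeffectx.h"   // VST 2.x SDK header (not freely available!)

    class GainSynth : public AudioEffectX   // hypothetical plugin
    {
    public:
        GainSynth(audioMasterCallback am)
            : AudioEffectX(am, 1, 0), currentBpm(120.0) {}

        void processReplacing(float** in, float** out, long frames)
        {
            // Pull: ask the host, every block, for just the fields we need.
            VstTimeInfo* ti = getTimeInfo(kVstTempoValid | kVstTransportChanged);
            if (ti)
            {
                if (ti->flags & kVstTempoValid)
                    currentBpm = ti->tempo;     // tempo at buffer start only
                if (ti->flags & kVstTransportChanged)
                    resync(ti->samplePos);      // timeline bound - may jump!
            }
            // tempoAt(frame) would give the tempo (BPM * 10000) at a
            // specific sample frame within the buffer, if we needed it.
            for (long i = 0; i < frames; ++i)
                out[0][i] = in[0][i];           // trivial pass-through "DSP"
        }

    private:
        void resync(double samplePos) { /* re-derive musical position */ }
        double currentBpm;
    };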
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
Aaargh! Can't seem to find anything more interesting than a PDF with
a very basic overview... Is there a freely available SDK anywhere?
Would just like to say that I find some parts of that PDF a bit
scary... We're *not* talking about a lean and mean low overhead API
here, that's for sure!
//David Olofson - Programmer, Composer, Open Source Advocate
http://plugin.org.uk/meterbridge/
Changes
* Greatly improved the readability of the VU meter
* Made the VU meter conform to the AES analogue equivalent levels. This
should make it more generally useful without adjustment; if you
have properly calibrated DA converters and analogue equipment,
the levels should agree.
* Made the DPM meter look nicer and easier to read.
* Cured a handful of segfaults (thanks to Melanie and Mark K.).
* Reduced the maximum CPU usage of the UI. It should never have caused RT
problems before, but it could have stolen cycles from other UI
threads that needed them more.
* Cleaned up and optimised the port insertion (input monitoring) code; it's
still hacky, but cleaner and more reliable now.
* Added a "set jack name" option, -n.
* Will now make a meter for every non-flag argument, even if there is no port
matching that name, so, e.g., you can create an unconnected 4
channel meter with "meterbridge - - - -".
* More reliable cleanup on exit.
Before it goes to 1.0 I'd like some sort of documentation, and maybe to
improve the input port monitoring situation. So don't hold your breath ;)
If anyone wants to write anything for the docs I would be extremely
grateful.
I will look at any tricky things, like meter labelling, and antialiased
needles after 1.0.
- Steve
Well, this might be early, but I needed to do something slightly less
demanding for a while. So I hacked a small presentation:
http://olofson.net/xap/
Please, check facts and language (not my native tongue), and suggest
changes or additions.
(Oops! Clicked on dat doggy-like animal in da process... ;-)
//David Olofson - Programmer, Composer, Open Source Advocate
Actually, it may not be all that hairy after all. Consider these
events:
XAP_A_BUFFER
Function: Someone just gave you an audio
buffer of the standard size used
by the Host. It's all yours now;
do what you like with it. Don't
forget to free it eventually!
Arguments: Pointer to the buffer.
Cookie. (So you know what it's for.)
Size. (# of BYTES actually used.)
XAP_A_REQUEST_DATA
Function: Ask for a number of buffers.
The API doesn't guarantee anything
about when they'll arrive; that's
something you and the guy at the
other end will have to discuss.
Arguments: Size. (# of BYTES of data you want.)
Cookie. (So the other guy knows.)
When. (Your deadline. Be sensible...!)
If you want data streamed to your plugin, you'll send
XAP_A_REQUEST_DATA events, and (eventually) receive XAP_A_BUFFER
events. Connections are made by the host (as usual), although each
streaming connection actually needs *two* connections; one in each
direction. (The API should probably cover this, so hosts and/or users
don't have to mess with it manually.)
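
For concreteness, here's a guess at what these might look like as
structs; XAP has no published header, so every name and field below is
speculative, based only on the descriptions above:

    typedef struct XAP_event
    {
        unsigned timestamp;      /* sample frame, relative to block start */
        unsigned action;         /* XAP_A_BUFFER, XAP_A_REQUEST_DATA, ... */
        union
        {
            struct               /* XAP_A_BUFFER */
            {
                void*    data;   /* the buffer - it's all yours now */
                unsigned cookie; /* so you know what it's for */
                unsigned size;   /* # of BYTES actually used */
            } buffer;
            struct               /* XAP_A_REQUEST_DATA */
            {
                unsigned size;   /* # of BYTES of data you want */
                unsigned cookie; /* so the other guy knows */
                unsigned when;   /* your deadline (sample frame) */
            } request;
        } arg;
        struct XAP_event* next;  /* events arrive in timestamped queues */
    } XAP_event;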
Ok?
//David Olofson - Programmer, Composer, Open Source Advocate
> Steve Harris <S.W.Harris(a)ecs.soton.ac.uk> writes:
>
> SAOL is still block based AFAIK.
See:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/wemp01.pdf
Sfront does no block-based optimizations. And for many
purposes, sfront is fast enough to do the job.
It may very well be that sfront could go even faster
with blocking, although the analysis is quite subtle --
in a machine with a large cache, and a moderate-sized
SAOL program, you're running your code and your data
in the cache most of the time.
Remember, blocking doesn't save you any operations, it
only improves memory access and overhead costs. If those
costs are minimal for a given decoder implementation,
there is not as much to gain.
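
A tiny illustration of the trade-off Lazzaro describes, using a
hypothetical one-pole filter: the blocked version performs exactly the
same arithmetic, it just pays the call overhead once per block and
lets the state sit in a register:

    static float state = 0.0f;   /* one-pole filter memory */

    /* Per-sample: one call, one load and store of 'state', per sample. */
    float filter_sample(float in, float coef)
    {
        state += coef * (in - state);
        return state;
    }

    /* Blocked: identical arithmetic, but the call overhead is paid once
       per block, access is linear, and 's' can live in a register. */
    void filter_block(const float* in, float* out, int frames, float coef)
    {
        float s = state;
        for (int i = 0; i < frames; i++)
        {
            s += coef * (in[i] - s);
            out[i] = s;
        }
        state = s;
    }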
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
> > a softstudio; it's pretty far already and
> > the first public release is scheduled Q1/2003.
>
> for Linux, obviously? ;-)
Yes. Linux, GPL. MONKEY is about 30,000 lines of C++ at the moment. I
still have to make a final architecture revision based on some issues
that reading this list has raised, and prepare the whole thing for release.
> > First, I don't understand why you want to design a "synth API". If you
> > want to play a note, why not instantiate a DSP network that does the job,
> > connect it to the main network (where system audio outs reside), run it
> > for a while and then destroy it? That is what events are in my system -
> > timed modifications to the DSP network.
>
> because a standard API is needed for dynamically loaded plugins!
> LADSPA doesn't really cater for event-driven processes (synths)
Yes, I understand it now. In principle, audio and control ports could
almost suffice, but sample-accurate events sent to plugins are more
efficient, and allow one to pass around structured data.
I shall have to add something like this to MONKEY. Right now it supports
LADSPA via a wrapper - the native API is pretty complex - although
creating a nice GUI based on just the information in a LADSPA .so is not
possible, mainly due to the lack of titles for enums.
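
The usual trick, sketched below with made-up names (Event, render(),
apply_event()): split the block at each event's timestamp, so control
changes land sample-accurately while the inner DSP loop stays a plain
block loop:

    typedef struct Event
    {
        unsigned frame;          /* timestamp, relative to block start */
        /* ...type, arguments... */
        struct Event* next;
    } Event;

    /* Provided elsewhere; hypothetical. */
    void render(float* out, unsigned frames);
    void apply_event(const Event* ev);

    void run(float* out, unsigned nframes, const Event* ev)
    {
        unsigned pos = 0;
        while (pos < nframes)
        {
            /* Render up to the next event (or to the end of the block). */
            unsigned end = nframes;
            if (ev && ev->frame < end)
                end = ev->frame;
            render(out + pos, end - pos);

            /* Apply all events that land exactly here. */
            while (ev && ev->frame == end)
            {
                apply_event(ev);
                ev = ev->next;
            }
            pos = end;
        }
    }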
> For a complete contrast, please look over
> http://amsynthe.sourceforge.net/amp_plugin.h which i am still toying
> with as a(nother) plugin api suitable for synths. I was hoping to wait
I like this better than the more complex proposal being worked on, except
that I don't much care for MIDI myself. But I also realize the need for
the event/channel/bay/voice monster because it is more efficient and
potentially doesn't require plugins to be instantiated while a song is
playing. I don't think one API can fit all sizes.
--
Sami Perttu "Flower chase the sunshine"
Sami.Perttu(a)hiit.fi http://www.cs.helsinki.fi/u/perttu
Someone suggested that it would be impossible to map the 12 tones
onto the white keys of a keyboard, and still get correct pitch bend
handling. Well, here's a solution:
The MIDI converter/driver:
This will have to make pitch bend available as a
separate control output from pitch. This does *not*
mean we have to apply pitch bend later, though!
Instead, we apply pitch bend to our PITCH output,
*and* then send pitch bend to the PITCHBEND output.
So, if you don't care about anything but continuous
pitch, just ignore this feature.
As to pitch bend range, that is handled entirely
by the converter/driver; the bend output is in the
same units as the pitch output.
With NOTEPITCH:
NOTEPITCH is in 1.0/note
PITCHBEND is in 1.0/note
PITCHBEND = (midi_bend - 8192) * midi_range * (1.0/8192.0);
NOTEPITCH = midi_pitch + PITCHBEND;
Without NOTEPITCH:
PITCH is in (1/12)/note
PITCHBEND is in (1/12)/note
PITCHBEND = (midi_bend - 8192) * midi_range * (1.0/8192.0);
PITCH = midi_pitch + PITCHBEND;
PITCHBEND *= 1.0/12.0;
PITCH *= 1.0/12.0;
The keyboard remapper:
This one will have inputs for both pitch and pitch
bend controls, since it needs to be able to remove
the bend and then reapply it later. remap_lut[] is
simply a table that "returns" the desired pitch for
each key in the scale. (Obviously, you'll have to
add a conditional to ignore the black keys, if you
don't want to map them to anything.)
With NOTEPITCH:
int note = (int)(NOTEPITCH - PITCHBEND);
NOTEPITCH = remap_lut[note] + PITCHBEND;
Without NOTEPITCH:
int note = (int)((PITCH - PITCHBEND) * 12.0);
PITCH = remap_lut[note] * (1.0/12.0) + PITCHBEND;
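
For the curious, here's the "with NOTEPITCH" path folded into one
self-contained function. The LUT contents (a harmonic minor mapping
for the white keys) and all names are just for illustration:

    /* remap_lut[] maps each key within the octave to a scale degree;
       black keys are marked -1 and ignored. This particular table puts
       a harmonic minor scale on the white keys. */
    static const int remap_lut[12] = {
         0, -1,  2, -1,  3,  5, -1,  7, -1,  8, -1, 11
    };

    /* midi_pitch: MIDI note number; midi_bend: raw 14 bit value
       (0..16383); midi_range: bend range in notes. Returns NOTEPITCH
       in 1.0/note, or -1.0 for an ignored (black) key.
       Assumes midi_pitch >= 0. */
    double remapped_notepitch(int midi_pitch, int midi_bend,
                              double midi_range)
    {
        double bend = (midi_bend - 8192) * midi_range * (1.0 / 8192.0);
        int key = midi_pitch % 12;
        if (remap_lut[key] < 0)
            return -1.0;
        return (midi_pitch / 12) * 12 + remap_lut[key] + bend;
    }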
//David Olofson - Programmer, Composer, Open Source Advocate
(Same thing again...)
---------- Forwarded Message ----------
Subject: Re: [linux-audio-dev] XAP: Pitch control
Date: Wed, 11 Dec 2002 18:15:59 +0100
From: David Olofson <david(a)olofson.net>
To: Nathaniel Virgo <nathaniel.virgo(a)ntlworld.com>
On Wednesday 11 December 2002 18.09, Nathaniel Virgo wrote:
> On Wednesday 11 December 2002 4:29 pm, David Olofson wrote:
> > On Wednesday 11 December 2002 13.59, David Gerard Matthews wrote:
> > > Steve Harris wrote:
> > > >On Wed, Dec 11, 2002 at 12:40:18 +0000, Nathaniel Virgo wrote:
> > > >>I can't really say I can think of a better way though.
> > > >> Personally I'd leave scales out of the API and let the host
> > > >> deal with it, sticking to 1.0/octave throughout, but I can
> > > >> see the advantages of this as well.
> > > >
> > > >We could put it to a vote ;)
> > > >
> > > >- Steve
> > >
> > > I vote 1.0/octave.
> >
> > So do I, definitely.
> >
> > There has never been an argument about <something>/octave, and
> > there no longer is an argument about 1.0/octave.
> >
> > The "argument" is about whether or not we should have a scale
> > related pitch control type *as well*. It's really more of a hint
> > than an actual data type, as you could just assume "1tET" and use
> > both as 1.0/octave.
>
> I don't think that should be permitted. I think that this case
> should be handled by a trivial scale converter that does nothing.
> No synth should be allowed to take a note_pitch input, and nothing
> except a scale converter should be allowed to assume any particular
> meaning for a note_pitch input.
I like the idea of enforced "explicit casting", but I think it's
rather restrictive not to allow synths to take note_pitch. That would
make it impossible to have synths with integrated event processors
(including scale converters; although *that* might actually be a good
idea).
Either way, there will *not* be a distinction between synths and
other plugins in the API. Steinberg made that mistake, and has been
forced to correct it. Let's not repeat it.
> If you have an algorithm that needs
> to know something about the actual pitch rather than position on a
> scale then it should operate on linear_pitch instead.
Yes indeed - that's what note_pitch vs linear_pitch is all about.
> I think that
> in this scheme note_pitch and linear_pitch are two completely
> different things and shouldn't be interchangeable.
You're right. Allowing implicit casting in the 1tET case is a pure
performance hack.
> That way you
> can enforce the correct order of operations:
>
> Sequencer
>
> | note_pitch signal
>
> V
> scaled pitch bend (eg +/- 2 tones) /
> arpeggiator / shift along scale /
> other scale-related effects
>
> | note_pitch signal
>
> V
> scale converter (could be trivial)
>
> | linear_pitch signal
>
> V
> portamento / vibrato /
> relative-pitch arpeggiator /
> interval-preserving transpose /
> other frequency-related effects
>
> | linear_pitch signal
>
> V
> synth
>
> That way anyone who doesn't want to worry about notes and scales
> can just always work in linear_pitch and know they'll never see
> anything else.
Yes. But anyone who doesn't truly understand all this should not go
into the advanced options menu and check the "Allow implicit casting
of note_pitch into linear_pitch" box.
So, I basically agree with you. I was only suggesting a host side
performance hack for 1.0/octave diehards. It has nothing to do with
the API.
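
A sketch of what such a (possibly trivial) scale converter could look
like; the Scale struct and all names are invented, and the table-based
branch handles bend rather crudely:

    typedef struct Scale
    {
        int     notes_per_octave;
        double* cents;        /* per-degree cent offsets, or NULL for ET */
    } Scale;

    /* note_pitch is in 1.0/note; the result is in 1.0/octave.
       Assumes note_pitch >= 0; the fractional part (bend, vibrato...)
       is scaled as an ET step, which is crude but cheap. */
    double scale_convert(const Scale* s, double note_pitch)
    {
        if (!s->cents)        /* the "trivial" (equal tempered) converter */
            return note_pitch / s->notes_per_octave;

        int    note   = (int)note_pitch;
        double frac   = note_pitch - note;
        int    degree = note % s->notes_per_octave;
        int    octave = note / s->notes_per_octave;
        return octave + s->cents[degree] * (1.0 / 1200.0)
                      + frac / s->notes_per_octave;
    }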
//David Olofson - Programmer, Composer, Open Source Advocate
Hi.
Today I released a new version of ZynAddSubFX.
ZynAddSubFX is a very powerful software synthesizer,
licensed under GPL v.2.
News:
- Added instrument banks (I am sure that you'll like
this)
- the BandPass Filter's output amplitude was increased
- a few fixes to FFTwrapper. See the documentation in
"FFTwrapper.h" if you get error messages.
Paul.