Ecawave will no longer be actively developed. The Linux audio application
scene has changed drastically since January of 2000 when the first version
of ecawave was released. Nowadays there are many FOSS (free and open
source software) audio file editors to choose from. As
replacements for ecawave I recommend:
- Audacity
- GLAME
- GNUsound
- Sweep
- ... check the list at:
http://freshmeat.net/search/?q=audio%20file%20editor&section=projects
My thanks to all who have participated in ecawave
development!
--
http://www.eca.cx
Audio software for Linux!
While hacking a more uniform control event type into Audiality, I
realized something: The event action *might* be better off as a part
of the cookie.
The voice mixers in Audiality have two types of controls: ramped
controls and controls without ramping. As it is, they're implemented
in different ways, use different arrays, different events and
different event handling code. You use SET for the non-ramped
controls, and ISET, IRAMP and ISTOP for the ramped controls. Same
index means one control for SET, and another for ISET.
Fine; no problem with that so far, since the Patch Plugins (that
drive the voices, implementing mono/poly, envelopes and whatnot) have
intimate knowledge of the voice controls. Everything's hardcoded,
basically.
However, when applied to "real" plugins, this scheme has a problem:
There are two kinds of controls, and they're not compatible in any
way. If you have ramped output, you need a ramped input, and vice
versa.
Now, the idea I had was to drop the event action/type field, and have
receivers encode that part as well, into the cookie. That way, if you
don't have ramping for some controls, you can just encode a different
action field into your cookies, so that when you get those events,
you end up in a different case in the decoding switch(), where you
"fake" the response in a suitable way. Taking some Audiality code as
an example:
    switch(ev->type)
    {
      case VE_START:
        voice_start(v, ev->arg1);
        ...
        break;
      case VE_STOP:
        voice_kill(v);
        aev_free(ev);
        return; /* Back in the voice pool! --> */
      case VE_SET:
        v->c[ev->index] = ev->arg1;
        if(VC_PITCH == ev->index)
            v->step = calc_step(v);
        break;
      case VE_ISET:
        v->ic[ev->index].v = ev->arg1 << RAMP_BITS;
        v->ic[ev->index].dv = 0;
        break;
      case VE_IRAMP:
        v->ic[ev->index].dv = ev->arg1 << RAMP_BITS;
        v->ic[ev->index].dv -= v->ic[ev->index].v;
        v->ic[ev->index].dv /= ev->arg2;
        break;
      case VE_ISTOP:
        v->ic[ev->index].dv = 0;
        break;
    }
(which requires senders to special-case normal and ramped controls)
becomes:
    switch(ev->cookie & 0xf)
    {
      case VE_START:
        voice_start(v, ev->arg1);
        ...
        break;
      case VE_STOP:
        voice_kill(v);
        aev_free(ev);
        return; /* Back in the voice pool! --> */
      case VE_SET:
        v->c[ev->index] = ev->arg1;
        break;
      case VE_SET_PITCH:
        v->c[ev->index] = ev->arg1;
        v->step = calc_step(v);
        break;
      case VE_RAMP:
        /* Ramp aimed at a non-ramped control:
         * just jump straight to the target value. */
        v->c[ev->index] = ev->arg1;
        break;
      case VE_RSTOP:
        /* Ramp stop for a non-ramped control: a NOP. */
        break;
      case VE_ISET:
        v->ic[ev->index].v = ev->arg1 << RAMP_BITS;
        v->ic[ev->index].dv = 0;
        break;
      case VE_IRAMP:
        v->ic[ev->index].dv = ev->arg1 << RAMP_BITS;
        v->ic[ev->index].dv -= v->ic[ev->index].v;
        v->ic[ev->index].dv /= ev->arg2;
        break;
      case VE_ISTOP:
        v->ic[ev->index].dv = 0;
        break;
    }
Notice the new VE_RAMP and VE_RSTOP actions? (The stop action needs
its own code here, since VE_STOP is already taken by the voice-kill
action.) Those are where ramping events go if you send them to a
non-ramped control. Also note that VE_SET_PITCH no longer needs an
extra conditional to run the (rather expensive) calc_step() for the
pitch control.
Just encode whatever action you want into the cookie for each control
connected. A full control target now looks like this:
    typedef struct AEV_target
    {
        AEV_queue *queue;
        Uint32 set_cookie;
        Uint32 ramp_cookie;
        Uint32 stop_cookie;
    } AEV_target;
One cookie for each type of action there is for a control. This way,
you can encode *all* aspects of control handling into the cookie,
without adding any significant complexity or overhead to senders. You
could have only one set of control event handlers, or you could have
one set of cases for *each control*! You decide.
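To illustrate the sender side, here's a minimal sketch. Only the
AEV_target fields and the low-bit action encoding come from the text
above; the event struct, helper names and action values are
assumptions for illustration:

```c
#include <stdint.h>

typedef uint32_t Uint32;

/* Example action codes; the real set in Audiality may differ. */
enum { VE_SET = 1, VE_RAMP = 2 };

typedef struct AEV_event
{
    Uint32 cookie;    /* action encoded in the low 4 bits */
    int index;
    int arg1, arg2;
} AEV_event;

/* As in the text; the queue pointer is omitted in this sketch. */
typedef struct AEV_target
{
    Uint32 set_cookie;
    Uint32 ramp_cookie;
    Uint32 stop_cookie;
} AEV_target;

/* The sender just copies the precomputed cookie; it never needs
 * to know whether the receiving control is ramped or not. */
static AEV_event make_ramp(const AEV_target *t, int index,
                           int value, int duration)
{
    AEV_event ev;
    ev.cookie = t->ramp_cookie;  /* receiver decodes action from this */
    ev.index = index;
    ev.arg1 = value;             /* aim point */
    ev.arg2 = duration;
    return ev;
}

/* At connect time, the host encodes whatever action fits the
 * control: VE_RAMP for ramped handling, or plain VE_SET for a
 * control that can't ramp. The sender code stays identical. */
static int ramp_action_of(Uint32 ramp_cookie)
{
    AEV_target t = { 0, ramp_cookie, 0 };
    AEV_event ev = make_ramp(&t, 0, 100, 64);
    return (int)(ev.cookie & 0xf);
}
```

The point is that the branch on "ramped or not" happens once, at
connect time, instead of on every event sent.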
I'm definitely switching to this system internally in Audiality.
(And if it's a bad idea, you'll find out...! :-)
(Note that STOP is just to deal with the limited accuracy of the
integer math in Audiality - and that might actually have been a
"false fix" for another bug I found later. Will try without STOP
again, and/or tweak the ramping code. Or just ditch the bl**dy
integer code! :-)
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
Very strange:
I'm seeing this while trying to compile the alsa drivers:
gcc -D__KERNEL__ -DMODULE=1
-I/usr/src/redhat/BUILD/alsa-driver-0.9.0/include
-I/lib/modules/2.4.19-1.llsmp/build/include -O2
-mpreferred-stack-boundary=2 -march=i686 -D__SMP__ -DCONFIG_SMP -DLINUX
-Wall -Wstrict-prototypes -fomit-frame-pointer -pipe -DALSA_BUILD
-DKBUILD_BASENAME=via82xx -c -o via82xx.o via82xx.c
In file included from via82xx.c:1:
../alsa-kernel/pci/via82xx.c: In function `snd_via82xx_create':
../alsa-kernel/pci/via82xx.c:1588: structure has no member named
`rate_lock'
make[1]: *** [via82xx.o] Error 1
The error happens when compiling on an SMP host with gcc 2.96, but the
same tarfile compiles fine on a UP host with gcc 2.96 (same kernel) and
on another UP host with gcc 3.2...
-- Fernando
[Lost touch with the list, so I'm trying to catch up here... I did
notice that gardena.net is gone - but I forgot that I was using
david(a)gardena.net for this list! *heh*]
> Subject: Re: [linux-audio-dev] more on XAP Virtual Voice ID system
> From: Tim Hockin (thockin_AT_hockin.org)
> Date: Fri Jan 10 2003 - 00:49:07 EET
> > > The plugin CAN use the VVID table to store flags about the
> > > voice,
> > > as you suggested. I just want to point out that this is
> > > essentially the same as the plugin communicating to the host
> > > about
> > > voices, just more passively.
> >
> > Only the host can't really make any sense of the data.
>
> If flags are standardized, it can. Int32: 0 = unused, +ve = plugin
> owned, -ve = special meaning.
Sure. I just don't see why it would be useful, or why the VVID
subsystem should be turned into some kind of synth status API.
> > > It seems useful.
> >
> > Not really, because of the latency, the polling requirement and
> > the coarse timing.
>
> When does the host allocate from the VVID list? Between blocks. As
> long as a synth flags or releases a VVID during its block, the
> host benefits from it. The host has to keep a list of which VVIDs
> it still is working with, right?
No. Only the one who *allocated* a VVID can free it - and that means
the sender. If *you* allocate a VVID, you don't want the host to
steal it back whenever the *synth* decides it doesn't need the VVID
any more. You'd just have to double check "your" VVIDs whenever you
send events - and this just to support something that's really a
synth implementation detail that just happens to take advantage of a
host service.
> > > If the plugin can flag VVID table entries as released, the host
> > > can have a better idea of which VVIDs it can reuse.
> >
> > Why would this matter? Again, the host does *not* do physical
> > voice management.
> >
> > You can reuse a VVID at any time, because *you* know whether or
> > not you'll need it again. The synth just doesn't care, as all it
> > will
>
> right, but if you hit the end of the list and loop back to the
> start, you need to find the next VVID that is not in use by the
> HOST.
No, you just need to find the next VVID that *you're* not using, and
reassign that to a new context. (ALLOC_VVID or whatever.) You don't
really care whether or not the synth's version of a context keeps
running for some time after you stop sending events for a context;
only the synth does - and if you're not going to send any more events
for a context, there's no need to keep the VVID.
> That can include VVIDs that have ended spontaneously (again,
> hihat sample or whatever).
VVIDs can't end spontaneously. Only synth voices can, and VVIDs are
only temporary references to voices. A voice may detach itself from
"its" VVID, but the VVID is still owned by the sender, and it's
still effectively bound to the same context.
BTW, this means that synths should actually keep the voice control
state for a VVID until they know the context has ended.
Normally, this just means that voices don't really detach themselves
from VVIDs, but rather just go to sleep, until stolen or woken up
again.
That is, synths with "virtual voices" might actually have use for a
"DETACH_VVID" event. Without it, they basically have to keep both
real and virtual voices indefinitely. Not sure it actually matters
much, though. Performance wise, it just means you have to deal with
voice controls and their ramping (if supported). And since a ramp is
actually two events (ramp event with "aim point" + terminator event
or new ramp event), and given that ramping across blocks (*) is not
allowed, it still means "no events, no processing."
(*) I think I've said this before, but anyway: I don't think making
    ramping across block boundaries illegal is a good idea. Starting
    a ramp is actually setting up a *state*. (The receiver transforms
    the event into a delta value that's applied to the value every
    sample frame.) It doesn't make sense to me to force senders to
    explicitly set a new state at the start of each block.
Indeed, the fact that ramping events have target and duration
arguments looks confusing, but really, it *is* an aim point; not
a description of a ramp with a fixed duration. If this was a
perfect world (without rounding errors...), you would have sent
the delta value directly, but that just won't work in real life.
If someone can come up with a better aim point format than
<target, duration>, I'm all ears, because it really *is*
confusing. It suggests that RAMP events don't behave like SET
events, but that's just not the case. The only difference is
that RAMP events set the internal "dvalue", while SET events
set the "value", and zero "dvalue".
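That "value"/"dvalue" state can be sketched in a few lines. The names
here are invented, and floating point stands in for Audiality's fixed
point math; the point is that SET and RAMP both just set state, which
the per-frame loop then applies, so a ramp continues across block
boundaries without any further events:

```c
typedef struct Control
{
    double value;
    double dvalue;   /* delta applied once per sample frame */
} Control;

static void ctl_set(Control *c, double v)
{
    c->value = v;
    c->dvalue = 0.0;
}

/* <target, duration> is an aim point: the receiver derives the
 * delta here, instead of the sender sending a rounding-sensitive
 * delta value directly. */
static void ctl_ramp(Control *c, double target, int duration)
{
    c->dvalue = (target - c->value) / duration;
}

static void ctl_run(Control *c, int frames)
{
    int i;
    for(i = 0; i < frames; ++i)
        c->value += c->dvalue;
}

/* Ramp 0 -> 100 over 100 frames, processed as two 50-frame
 * blocks; no new event is needed at the block boundary. */
static double demo_ramp(void)
{
    Control c;
    ctl_set(&c, 0.0);
    ctl_ramp(&c, 100.0, 100);
    ctl_run(&c, 50);
    ctl_run(&c, 50);
    return c.value;
}
```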
> The host just needs to discard any
> currently queued events for that (expired) VVID. The plugin is
> already ignoring them.
The plugin is *not* ignoring them. It may route them to a "null
voice", but that's basically an emergency action taken only when
running out of voices. Normally, a synth would keep tracking events
per VVID until the VVID is explicitly detached (see above), or the
synth steals whatever object is used for the tracking.
Again, DETACH_VVID might still be a good idea. Synths won't be able
to steal the right voices if they can't tell passive contexts from
dead contexts...
[...]
> > A bowed string instrument is "triggered" by the bow pressure and
> > speed exceeding certain levels; not directly by the player
> > thinking
>
> Disagree. SOUND is triggered by pressure/velocity. The instrument
> is ready as soon as bow contacts the string.
Well, you don't need a *real* voice until you need to play sound, do
you?
Either way, the distinction is a matter of synth implementation,
which is why I think it should be "ignored" by the API. The API
should not enforce the distinction, nor prevent synths from making
use of the distinction.
> > No, I see a host sending continuous control data to an init-latched
> > synth. This is nothing that an API can fix automatically.
>
> Ok, let me make it more clear. Again, same example. The host wants
> to send 7 parameters to the Note-on. It sends 3 then VELOCITY. But
> as soon as VELOCITY is received 'init-time' is over. This is bad.
Yes, the event ordering is messed up. This will never happen unless the
events are *created* out of order, or mixed up by some event
processor. (Though I can't see how an event processor could reorder
incoming events while doing something useful. Remember, we're talking
about real time events here; not events in a sequencer database.)
> The host has to know which control ends init time.
Why? So it can "automatically" reorder events at some point?
> Thus the
> NOTE/VOICE control we seem to be agreeing on.
Indeed, users, senders and some event processors might need to know
which events are "triggers", and which events are latched rather than
continuous.
For example, my Wacom tablet would need to know which of X, Y,
X-tilt, Y-tilt, Pressure and Distance to send last, and as what voice
control. It may not always be obvious - unless it's always the
NOTE/VOICE/GATE control.
The easiest way is to just make one event the "trigger", but I'm not
sure it's the right thing to do. What if you have more than one
control of this sort, and the "trigger" is actually a product of
both? Maybe just assume that synths will use the standardized
NOTE/VOICE/GATE control for one of these, and act as if that was the
single trigger? (It would have to latch initializers based on that
control only, even if it doesn't do anything else that way.)
[...]
> > If it has no voice controls, there will be no VVIDs. You can still
> > allocate and use one if you don't want to special case this,
> > though. Sending voice control events to channel control inputs is
> > safe, since the receiver will just ignore the 'vvid' field of
> > events.
>
> I think that if it wants to be a synth, it understands VVIDS. It
> doesn't have to DO anything with them, but it needs to be aware.
Right, but I'm not even sure there is a reason why they should be
aware of VVIDs. What would a mono synth do with VVIDs that anyone
would care about?
> And the NOTE/VOICE starter is a voice-control, so any Instrument
> MUST have that.
This is very "anti modular synth". NOTE/VOICE/GATE is a control type
hint. I see no reason to imply that it can only be used for a certain
kind of controls, since it's really just a "name" used by users
and/or hosts to match ins and outs.
Why make Channel and Voice controls more different than they have to
be?
* Channel->Channel:
* Voice->Voice:
    Just make the connection. These are obviously
    100% compatible.
* Voice->Channel:
    Make the connection, and assume the user knows
    what he/she is doing, and won't send polyphonic
    data this way. (The Channel controls obviously
    ignore the extra indexing info in the VVIDs.)
* Channel->Voice:
    This works IFF the synth ignores VVIDs.
You could have channel/voice control "mappers" and stuff, but I don't
see why they should be made more complicated than necessary, when in
most cases that make sense, they can actually just be NOPs.
About VVID management:
    Since mono synths won't need VVIDs, hosts shouldn't have to
    allocate any for them. (That would be a waste of resources.)
    The last case also indicates a handy shortcut you can take
    if you *know* that VVIDs won't be considered. Thus, I'd
    suggest that plugins can indicate that they won't use VVIDs.
[...]
> > Why? What does "end a voice" actually mean?
>
> It means that the host wants this voice to stop. If there is a
> release phase, go to it. If not, end this voice (in a
> plugin-specific way).
> Without it, how do you enter the release phase?
Right, then we agree on that as well. What I mean is just that "end a
voice" doesn't *explicitly* kill the voice instantly.
What might be confusing things is that I don't consider "voice" and
"context" equivalent - and VVIDs refer to *contexts* rather than
voices. There will generally be either zero or one voice connected to
a context, but the same context may be used to play several notes.
> > From the sender POV:
> > I'm done with this context, and won't send any more events
> > referring to its VVID.
>
> No. It means I want the sound on this voice to stop. It implies the
> above, too. After a VOICE_OFF, no more events will be sent for this
> VVID.
That just won't work. You don't want continuous pitch and stuff to
work except when the note is on?
Stopping a note is *not* equivalent to releasing the context in which
it was played.
Another example that demonstrates why this distinction matters would
be a polyphonic synth with automatic glissando. (Something you can
hardly get right with MIDI, BTW. You have to use multiple monophonic
channels, or trust the synth to be smart enough to do the right
thing.)
Starting a new note on a VVID when a previous note is still in the
release phase would cause a glissando, while if the VVID has no
playing voice, one would be activated and started as needed to play a
new note. The sender can't reliably know which action will be taken
for each new note, so it really *has* to be left to the synth to
decide. And for this, the lifetime of VVIDs/contexts needs to span
zero or more notes, with no upper limit.
> > From the synth POV:
> > The voice assigned to this VVID is now silent and passive,
>
> More, the VVID is done. No more events for this VVID.
Nope, not unless *both* the synth and the sender have released the
VVID.
> The reason
> that VVID_ALLOC is needed at voice_start is because the host might
> never have sent a VOICE_OFF. Or maybe we can make it simpler:
If the host/sender doesn't send VOICE_OFF when needed, it's broken,
just like a MIDI sequencer that forgets to stop playing notes when
you hit the stop button.
And yes, this is another reason to somehow mark the VOICE/NOTE/GATE
control as special.
> Host turns the NOTE/VOICE on.
> It can either turn the NOTE/VOICE off or DETACH it. Here your
> detach name makes more sense.
VOICE_OFF and DETACH *have* to be separate concepts. (See above.)
> A step sequencer would turn a note
> on, then immediately
> detach.
It would have to send note off as well I think, or we'd have another
special case to make all senders compatible with all synths. (And if
"note on" is actually a change of the VOICE/GATE control from 0 to 1,
you *have* to send an "off" event as well, or the synth won't detect
any further "note on" events in that context.)
> > assumed to be more special than it really is.
> > NOTE/VOICE_ON/VOICE_OFF
> > is a gate control. What more do you need to say about it?
>
> Only if you assume a voice lives forever, which is wasteful.
It may be wasteful to use real, active voices just to track control
changes, but voice control tracking cannot be avoided.
> Besides that, a gate that gets turned off and on and off and on
> does not restart a voice, just mutes it temporarily. Not pause, not
> restart - mute.
Who says? I think that sounds *very* much like a synth implementation
thing - but point taken; "GATE" is probably not a good name.
[...]
> Well, you CAN change Program any time you like - it is not a
> per-voice control.
In fact, on some MIDI synths, you have to assume it is, sort of.
Sending a PC to a Roland JV-1080 makes it instantly kill any notes
playing on that channel, go to sleep for a few hundredths of a second,
and then process any events that might have arrived for the channel
during the "nap". This really sucks, but that's the way it works, and
it's probably not the only synth that does this.
(The technical reason is most probably that spare "patch slots" would
have been required to do it in any other way - and as I've discovered
with Audiality, that's not as trivial to get right as it might seem
at first. You have to let the old patch see *some* of the new events
for the channel, until the old patch decides to die.)
AWE, Live! and Audigy cards don't do it this way - but PC is *still*
not an ordinary control. Playing notes remain controlled by the old
patch until they receive their NoteOffs. PC always has to occur
*before* new notes.
Either way, MIDI doesn't have many voice controls at all, and our
channel controls are more similar to MIDI (Channel) CCs in some ways.
(Not addressed by note pitch, most importantly.)
That is, they can't be compared directly - but the concept that some
controls must be sent before they're latched to have the desired
effect is still relevant.
[...]
> Idea 2: similar to idea 1, but less explicit.
> -- INIT:
> send SET(new_vvid, ctrl) /* implicitly creates a voice */
> send VOICE_ON(new_vvid) /* start the vvid */
> -- RELEASE:
> send SET(new_vvid, ctrl) /* send with time X */
> send VOICE_OFF(vvid) /* also time X - plug 'knows' it was for
> release */
I see why you don't like this. You're forgetting that it's the
*value* that is the "initializer" for the VOICE_OFF action; not the
SET event that brings it. Of course the plugin "knows" - the last set
put a new value into the control that the VOICE_OFF action code looks
at! :-)
A synth is a state machine, and the events are just what provides it
with data and - directly or indirectly - triggers state changes.
We have two issues to deal with, basically:
1. Tracking of voice controls.
2. Allocation and control of physical voices.
The easy way is to assume that you use a physical voice whenever you
need to track voice controls, but that's just an assumption that a
synth author would make to make the implementation simpler. It
doesn't *have* to be done that way.
If 1 and 2 are handled as separate things by a synth, 2 becomes an
implementation issue *entirely*. Senders and hosts don't really have
a right to know anything about this - mostly because there are so
many ways of doing it that it just doesn't make sense to pretend that
anyone cares.
As to 1, that's what we're really talking about here. When do you
start and stop tracking voice controls?
Simple: When you get the first control for a "new" VVID, start
tracking. When you know there will be no more data for that VVID, or
that you just don't care anymore (voice and/or context stolen), stop
tracking.
So, this is what I'm suggesting ( {X} means loop X, 0+ times ):
* Context allocation:
    // Prepare the synth to receive events for 'my_vvid'
    send(ALLOC_VVID, my_vvid)
    // (Control tracking starts here.)
{
  * Starting a note:
    // Set up any latched controls here
    send(CONTROL, <whatever>, my_vvid, <value>)
    ...
    // (Synth updates control values.)
    // Start the note!
    send(CONTROL, VOICE, my_vvid, 1)
    // (Synth latches "on" controls and (re)starts
    // voice. If control tracking is not done by
    // real voices, this is when a real voice would
    // be allocated.)
  * Stopping a note:
    send(CONTROL, <whatever>, my_vvid, <value>)
    ...
    // (Synth updates control values.)
    // Stop the note!
    send(CONTROL, VOICE, my_vvid, 0)
    // (Synth latches "off" controls and enters the
    // release phase.)
  * Controlling a note (even in release phase!):
    send(CONTROL, <whatever>, my_vvid, <value>)
    // (Synth updates control value.)
}
* Context deallocation:
    // Tell the synth we won't talk any more about 'my_vvid'
    send(DETACH_VVID, my_vvid)
    // (Control tracking stops here.)
This still contains a logic flaw, though. Continuous control synths
won't necessarily trigger on the VOICE control changes. Does it make
sense to assume that they'll latch latched controls at VOICE control
changes anyway? It seems illogical to me, but I can see why it might
seem to make sense in some cases...
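The lifecycle above can also be sketched from the synth side.
Everything here (the event names, the flat VVID table, the counters)
is an assumed illustration of the sequence just listed, not actual
XAP or Audiality code:

```c
#include <string.h>

#define MAX_VVIDS 16
#define NCTLS 4
#define CTL_VOICE 0           /* the VOICE/GATE control index */

enum { EV_ALLOC_VVID, EV_CONTROL, EV_DETACH_VVID };

typedef struct VState
{
    int active;           /* currently tracking this VVID? */
    float c[NCTLS];       /* tracked control values */
    int gate;             /* last VOICE control state */
    int notes_started;    /* (re)start count, for illustration */
} VState;

static void handle(VState *vv, int type, int vvid, int ctl, float val)
{
    VState *v = &vv[vvid];
    switch(type)
    {
      case EV_ALLOC_VVID:         /* control tracking starts here */
        memset(v, 0, sizeof *v);
        v->active = 1;
        break;
      case EV_CONTROL:
        if(!v->active)
            break;                /* unknown context; ignore */
        v->c[ctl] = val;
        if(ctl == CTL_VOICE)
        {
            /* Gate edge: latch controls and (re)start a voice */
            if(!v->gate && val > 0.0f)
                v->notes_started++;
            v->gate = (val > 0.0f);
        }
        break;
      case EV_DETACH_VVID:        /* no more events for this VVID */
        v->active = 0;
        break;
    }
}

/* Play two notes in the same context, then detach. */
static int demo(void)
{
    VState vv[MAX_VVIDS];
    handle(vv, EV_ALLOC_VVID, 3, 0, 0.0f);
    handle(vv, EV_CONTROL, 3, 1, 0.5f);          /* latched control */
    handle(vv, EV_CONTROL, 3, CTL_VOICE, 1.0f);  /* note on */
    handle(vv, EV_CONTROL, 3, CTL_VOICE, 0.0f);  /* note off */
    handle(vv, EV_CONTROL, 3, CTL_VOICE, 1.0f);  /* 2nd note, same ctx */
    handle(vv, EV_DETACH_VVID, 3, 0, 0.0f);
    return vv[3].notes_started;
}
```

Note that the same context plays two notes here, which is exactly why
VOICE_OFF and DETACH have to stay separate concepts.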
//David Olofson - Programmer, Composer, Open Source Advocate
Following the discussion on VVIDs, I've been thinking about how the
MIDI protocol could be modified to encompass explicit contexts. To my
surprise, this would be quite simple. I'll call the new protocol ECMP
(Explicit Context MIDI Protocol -- just a working name). Key features
are:
1. Easy conversion MIDI <--> ECMP.
To convert a MIDI stream to ECMP, just insert a zero byte as the second
byte of each voice message (or as the first byte if running status is
used). To convert ECMP to MIDI, just skip the inserted bytes which are
easy to find, and what remains is standard MIDI. So porting existing
instruments to use ECMP but ignore the extensions is almost trivial.
2. Allows a 'set of mono synths' (up to 127 at a time) on a
single channel.
3. Caters for the needs of continuous control synths (I think).
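Point 1 can be sketched in a few lines of C. This is only an
illustration of the byte insertion (function names are invented);
running status, where the zero byte would go first, is not handled
here:

```c
#include <stddef.h>

/* MIDI -> ECMP: insert a zero context byte ('z') right after the
 * status byte of every voice message (status 0x80..0xEF). */
static size_t midi_to_ecmp(const unsigned char *in, size_t len,
                           unsigned char *out)
{
    size_t i, o = 0;
    for(i = 0; i < len; ++i)
    {
        out[o++] = in[i];
        if(in[i] >= 0x80 && in[i] <= 0xEF)
            out[o++] = 0x00;  /* z = 0: standard MIDI behaviour */
    }
    return o;
}

/* Convert a single note on: 90 3C 40 becomes 90 00 3C 40. */
static int demo_convert(void)
{
    const unsigned char midi[3] = { 0x90, 0x3C, 0x40 };
    unsigned char ecmp[8];
    size_t n = midi_to_ecmp(midi, 3, ecmp);
    return n == 4 && ecmp[0] == 0x90 && ecmp[1] == 0x00
                && ecmp[2] == 0x3C && ecmp[3] == 0x40;
}
```

Going the other way just means skipping the byte after each status
byte, which is equally mechanical.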
These are the standard MIDI voice messages, with the new byte
inserted:
0x80+c z k v note off
0x90+c z k v note on
0xA0+c z k p aftertouch
0xB0+c z n x controller
0xC0+c z a program change
0xD0+c z p channel pressure
0xE0+c z b c pitch wheel
c = channel number
z = inserted context number
k = key (pitch)
p = pressure
v = velocity
x = controller value
a = program number
b, c = pitch wheel value
If the context number is 0, this means standard MIDI behaviour, all
contexts numbered zero are independent of each other.
A 'note on' with a non-zero context number creates the context if it
does not already exist. If it does already exist, the new note is
started in the existing context. What that means is defined by the
patch.
Each such context is either in the 'on' or 'off' state, depending on
which one of 'note on' or 'note off' was most recently received.
When a context is 'off' and it reaches an internal state where
its further existence no longer matters (e.g. all envelopes are at
the end of their release phase), it is destroyed. If a context never
reaches such a point, it can be explicitly destroyed by a 'note off'
received in the 'off' state.
Control messages (including aftertouch, pitch wheel, etc.) are ignored
if they refer to a non-existing context.
For the 'channel pressure' message, z = 0 means 'channel pressure',
which means all contexts, and z > 0 refers to a specific context.
'Channel program' with z = 0 has its normal meaning. If z > 0, it
could be used to associate a context number with a specific patch
variation or static parameter set. For example, a patch that contains
a simulation of four different violin strings could use this message
to associate a context number with a particular string.
If z > 0 in the 'aftertouch' message, then the key (pitch) parameter
is effectively redundant. I propose to keep it anyway, as this allows
conversion to standard MIDI (and you may get 'almost' what you want).
For continuous control instruments, a context or voice is created
by a 'note-on' message. The interpretation of the key and velocity
params is up to the patch, but normally no sound should be produced
at this point. The context will continue to exist until the end of the
performance, when it is removed by a double 'note off'.
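A minimal sketch of these context rules (names are invented; the
create-on-demand and double-'note off' destroy behaviour follows the
description above):

```c
#define ECMP_MAX_CTX 128

/* One entry per context number 1..127; z = 0 means plain MIDI. */
typedef struct EcmpCtx
{
    int exists;
    int on;     /* 'on' vs 'off' state of the context */
} EcmpCtx;

static void ecmp_note_on(EcmpCtx *ctx, int z)
{
    if(z == 0)
        return;             /* standard MIDI behaviour */
    ctx[z].exists = 1;      /* create on demand */
    ctx[z].on = 1;          /* start a new note in the context */
}

static void ecmp_note_off(EcmpCtx *ctx, int z)
{
    if(z == 0 || !ctx[z].exists)
        return;
    if(!ctx[z].on)
        ctx[z].exists = 0;  /* 'note off' while 'off': destroy */
    else
        ctx[z].on = 0;      /* enter the 'off' state */
}

/* on, off, off: the second 'note off' destroys the context. */
static int ecmp_demo(void)
{
    EcmpCtx ctx[ECMP_MAX_CTX] = { { 0, 0 } };
    int existed;
    ecmp_note_on(ctx, 5);
    existed = ctx[5].exists;
    ecmp_note_off(ctx, 5);
    ecmp_note_off(ctx, 5);
    return existed && !ctx[5].exists;
}
```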
Comments invited !!
--
Fons Adriaensen
http://plugin.org.uk/timemachine/ tarball, 100k.
Depends on SDL, SDL_image, jack and libsndfile.
I used to always keep a minidisc recorder in my studio running in a mode
where, when you pressed record, it wrote the last 10 seconds of audio to
the disk and then caught up to realtime and kept recording. The recorder
died and I haven't been able to replace it, so this is a simple jack app
to do the same job. It has the advantage that it never clips and can be
wired to any part of the jack graph.
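The "last 10 seconds" behaviour can be sketched with a ring buffer
that is written continuously; when recording starts, the buffered
history is dumped first. This is just an illustration of the idea
(all names invented), not timemachine's actual code:

```c
#define RING_FRAMES 8   /* ~10 s * rate * channels in the real thing */

typedef struct Ring
{
    float buf[RING_FRAMES];
    unsigned long pos;  /* total frames written so far */
} Ring;

/* Called continuously from the audio callback, recording or not. */
static void ring_write(Ring *r, float sample)
{
    r->buf[r->pos % RING_FRAMES] = sample;
    r->pos++;
}

/* When recording starts: copy the buffered history, oldest first,
 * so the file begins RING_FRAMES before the click. */
static int ring_dump(const Ring *r, float *out)
{
    unsigned long i;
    unsigned long n = r->pos < RING_FRAMES ? r->pos : RING_FRAMES;
    unsigned long start = r->pos - n;
    for(i = 0; i < n; ++i)
        out[i] = r->buf[(start + i) % RING_FRAMES];
    return (int)n;
}

/* Write 10 samples 0..9; the dump holds the last 8 (2..9). */
static int ring_demo(void)
{
    Ring r = { { 0 }, 0 };
    float out[RING_FRAMES];
    int i, n;
    for(i = 0; i < 10; ++i)
        ring_write(&r, (float)i);
    n = ring_dump(&r, out);
    return n == 8 && out[0] == 2.0f && out[7] == 9.0f;
}
```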
I've been using it to record occasional bursts of interesting noise from
jack apps feeding back into each other.
Usage: ./configure, make, make install, run jack_timemachine. Connect it
up with a patchbay app. To start recording click in the window. To stop
recording, click in the window.
It writes out 32bit float WAV files called tm-<time>.wav, where <time> is
the time the recording starts from.
The prebuffer time and number of channels are set in a macro; defaults are
10s and 2. It works on my machine, and I'll fix major bugs, but I don't
really have time to support another piece of software, so good luck :)
If anyone wants to maintain it, feel free.
May it preserve many interesting sounds for you,
Steve
The whole discussion about VVIDs has become a rather complicated
web of opinions and examples that sometimes are understood, and
sometimes not. This is how I see it.
Why we need explicit VVIDs.
With MIDI, you can have
1. A mono synth. If there is any relation between a new note
and another one, it's always clear which one is meant (the
previous one). This allows things like, for example, not restarting
an ADSR if you play a second note before releasing the previous one.
2. A poly synth. Here normally 'a new note is a new note', and
things like the effect described above are not possible because
the synth does not know the relations between the existing set
of notes and any new ones. Another example: you play a 3-note chord,
and then a second one, and you want notes to slide individually from
the first chord to the second. Once your masterpiece is in MIDI format
it's impossible to find out which notes are related.
Of course, if you try to play this on a keyboard, you cannot even
express what you want, but that's only a limitation of the
interface, and should not imply that it can't be done. If you look
beyond the traditional 'pop' music scene, lots of composers are using
other means to enter their scores, such as scripts or even algorithms.
What should be clear from this is that, as a result of the
limitations of MIDI, a poly synth is *not* the same thing as a set of
mono synths.
If you want that (polyphony by a set of mono synths), the only way to
get it is by abusing the channel mechanism. This forces you to work
in a way that is completely different from normal poly mode, which
is extremely impractical. Anyway, channels are not meant for this;
they are meant to multiplex data intended for different devices over
a single cable.
The explicit use of VVIDs would allow us to unify the interface to the
'normal' (in the MIDI sense) polyphonic synth and the 'set of
monophonic synths'.
And it would indeed allow the player to take the normally automatic
voice assignment into his own hands, but it does *not* force him to
do so.
A lot more could be said, but I have to go.
--
Fons Adriaensen