On Wednesday 24 March 2010, at 11.06.43, Ralf Mardorf
<ralf.mardorf@alice-dsl.net> wrote:
> Nick Copeland wrote:
> [snip, because you sent your reply off-list, but I guess this should
> be sent to the list too]
> If my broken English doesn't fool me, then the more I learn about cv,
> the more I guess it's a bad idea to import it to Linux.
Not quite sure what "everyone" means by cv in this context ("cv" makes me
think of pitch control in synths, specifically), but here's my take on it;
a simple synth for a small prototyping testbed I hacked the other day:
----------------------------------------------------------------------
function Create(cfg, preset)
{
	local t = table [
		.out	nil,	// Output buffer (set via Connect())
		.gate	0.,	// Current gate state
		.vel	0.,	// Last received velocity
		.ff	cfg.fc0 * 2. * PI / cfg.fs,	// rad/sample at pitch 0.0
		.ph	0.,	// Oscillator phase
		.phinc	1.,	// Current phase increment (set via PITCH)
		.amp	0.,	// Current amplitude

		procedure Connect(self, aport, buffer)
		{
			switch aport
			  case AOUTPUT
				self.out = buffer;
		}

		procedure Control(self, cport, value)
		{
			switch cport
			  case GATE
			  {
				self.gate = value;
				if value
				{
					// Latch velocity and reset phase!
					self.amp = self.vel;
					self.ph = 0.;
				}
			  }
			  case VEL
				self.vel = value;
			  case PITCH
				// 1.0/octave linear pitch; 0.0 is "middle C"
				self.phinc = self.ff * 2. ** value;
		}

		function Process(self, frames)
		{
			if not self.out or not self.amp
				return false;
			local out, local amp, local ph, local phinc =
					self.(out, amp, ph, phinc);
			local damp = 0.;
			local running = true;
			if not self.gate
			{
				// Linear fade-out over one period.
				damp = -self.vel * phinc / (2. * PI);
				if -damp * frames >= amp
				{
					// We're done after this fragment!
					damp = -amp / frames;
					self.amp = 0.;
					running = false;
				}
			}
			for local s = 0, frames - 1
			{
				out[s] += sin(ph) * amp;
				ph += phinc;
				amp += damp;
			}
			self.(ph, amp) = ph, amp;
			return running;
		}
	];
	return t;
}
----------------------------------------------------------------------
So, it's just 1.0/octave "linear pitch", and here I'm using a configurable
"middle C" (at 261.625565 Hz by default) to define what a pitch value of
0.0 means. MIDI pitch 60 would translate to 0.0, pitch 72 would translate
to 1.0, etc.
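Just to spell that mapping out, here's a minimal C sketch of the
conversion (the names and functions are mine, not part of the synth above;
only the 261.625565 Hz default comes from it):
----------------------------------------------------------------------
#include <math.h>
#include <stdio.h>

/* Illustrative only: 1.0/octave linear pitch, with 0.0 anchored at a
 * configurable "middle C". All names here are hypothetical. */
#define MIDDLE_C_HZ	261.625565	/* cfg.fc0 in the EEL code above */
#define MIDDLE_C_NOTE	60		/* MIDI note number of middle C */

static double midi_to_pitch(int note)
{
	/* 12 semitones per octave; one pitch unit per octave */
	return (note - MIDDLE_C_NOTE) / 12.0;
}

static double pitch_to_hz(double pitch)
{
	return MIDDLE_C_HZ * pow(2.0, pitch);
}

int main(void)
{
	printf("%f\n", midi_to_pitch(60));	/* 0.0 */
	printf("%f\n", midi_to_pitch(72));	/* 1.0 */
	printf("%f Hz\n", pitch_to_hz(1.0));	/* 523.251130 Hz */
	return 0;
}
----------------------------------------------------------------------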
You could pass this around like events of some sort (buffer-splitting +
function calls as I do here for simplicity, or timestamped events), much like
MIDI, or you could use an audio rate stream of values, if you can afford it.
Just different transport protocols and (fixed or variable) sample rates...
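For the timestamped flavor, a single control change could be as simple as
a struct like this (just a sketch; none of these names come from any
actual API):
----------------------------------------------------------------------
/* Sketch of one timestamped control event; purely illustrative. */
typedef struct CVEvent
{
	unsigned when;	/* frame offset into the current audio buffer */
	int cport;	/* target control port, e.g. GATE, VEL or PITCH */
	double value;	/* the new control value */
} CVEvent;
----------------------------------------------------------------------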
> Fons: "Another limitation of MIDI is its handling of context, the only
> way to do this is by using the channel number. There is no way to refer
> to anything higher level, to say e.g. this is a control message for
> note #12345 that started some time ago."
> I don't know how this will be possible for cv without much effort, but
> assuming this would be easy to do, then there would be the need to
> record all MIDI events as cv events too, right?
These issues seem orthogonal to me. Addressing individual notes is just a
matter of providing some more information. You could think of it as MIDI
using note pitch as an "implicit" note/voice ID. NoteOff uses pitch to
"address" notes - and so does Poly Pressure, BTW!
Anyway, what I do in that aforementioned prototyping thing is pretty much what
was once discussed for the XAP plugin API; I'm using explicit "virtual voice
IDs", rather than (ab)using pitch or some other control values to keep track
of notes.
You can't really see it in the code above, though, as synth plugins are
monophonic. (They can have channel wide state and code and stuff, but
those are implementation details.) That actually just makes it easier to
understand, as one synth instance corresponds directly to one "virtual
voice".
Here's a piece of the "channel" code that manages polyphony and voices
within a channel:
----------------------------------------------------------------------
// Like the instrument Control() method, but this adds
// "virtual voice" addressing for polyphony. Each virtual
// voice addresses one instance of the instrument. An instance
// is created automatically whenever a voice is addressed the
// first time. Virtual voice ID -1 means "all voices".
procedure Control(self, vvoice, cport, value)
{
	// Apply to all voices?
	if vvoice == -1
	{
		local vs = self.voices;

		// This is channel wide; cache for new voices!
		self.controls[cport] = value;

		// Control transforms
		if cport == S.PITCH
		{
			self.pitch.#* 0.;
			self.pitch.#+ value;
			value += self.ccontrols[PITCH];
		}

		// Apply!
		for local i = 0, sizeof vs - 1
			if vs[i]
				vs[i]:Control(cport, value);
		return;
	}

	// Instantiate new voices as needed!
	local v = nil;
	try
		v = self.voices[vvoice];
	if not v
	{
		// New voice!
		v, self.voices[vvoice] = self.descriptor.Create(
				self.(config, preset));
		v:Connect(S.AOUTPUT, self.mixbuf);
		if self.chstate
			v:SetSharedState(self.chstate);

		// Apply channel wide voice controls
		local cc = self.controls;
		for local i = 0, sizeof cc - 1
			self:Control(vvoice, i, cc[i]);
	}

	// Control transforms
	if cport == S.PITCH
	{
		self.pitch[vvoice] = value;
		value += self.ccontrols[PITCH];
	}

	// Apply!
	v:Control(cport, value);
}


// Detach the physical voice from the virtual voice. The voice
// will keep playing until finished (release envelopes etc), and
// will then be deleted. The virtual voice index will
// immediately be available to control a new physical voice.
procedure Detach(self, vvoice)
{
	local v = self.voices;
	if not v[vvoice]
		return;
	self.dvoices.+ v[vvoice];
	v[vvoice] = nil;
}
----------------------------------------------------------------------
The Detach() feature sort of illustrates the relation between virtual voices
and actual voices. Virtual voices are used by the "sender" to define and
address contexts, whereas the actual management of physical voices is done on
the receiving end.
As to MIDI (which is what my keyboard transmits), I just use the MIDI pitch
values for virtual voice addressing. Individual voice addressing with
polyphonic voice management as a free bonus, sort of. ;-) (No voice stealing
here, but one could do that too without much trouble.)
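Roughly, the MIDI side of that could look like the following C sketch.
(Hypothetical names throughout; the channel_control()/channel_detach()
stubs just stand in for the Control()/Detach() methods above.)
----------------------------------------------------------------------
#include <stdio.h>

enum { GATE, VEL, PITCH };	/* control ports, as in the synth above */

/* Stand-ins for the channel Control()/Detach() methods */
static void channel_control(int vvoice, int cport, double value)
{
	printf("Control(vvoice=%d, cport=%d, value=%f)\n",
			vvoice, cport, value);
}

static void channel_detach(int vvoice)
{
	printf("Detach(vvoice=%d)\n", vvoice);
}

static void handle_midi_note(unsigned char status, unsigned char pitch,
		unsigned char velocity)
{
	int vvoice = pitch;	/* MIDI pitch as "implicit" voice ID */
	if((status & 0xf0) == 0x90 && velocity)
	{
		/* NoteOn: set up pitch and velocity, then gate on */
		channel_control(vvoice, PITCH, (pitch - 60) / 12.0);
		channel_control(vvoice, VEL, velocity / 127.0);
		channel_control(vvoice, GATE, 1.0);
	}
	else
	{
		/* NoteOff (or NoteOn w/ velocity 0): gate off, then
		 * detach; the voice keeps playing until it's done */
		channel_control(vvoice, GATE, 0.0);
		channel_detach(vvoice);
	}
}

int main(void)
{
	handle_midi_note(0x90, 60, 100);	/* middle C on */
	handle_midi_note(0x80, 60, 0);		/* middle C off */
	return 0;
}
----------------------------------------------------------------------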
BTW, the language is EEL - the Extensible Embeddable Language. Basically
like Lua with more C-like syntax, and intended for realtime applications.
(Uses refcounting instead of garbage collection, among other things.) The
#*, #+ etc are vector operators, and <something>.<operator> is an in-place
operation - so 'self.pitch.#* 0.' means "multiply all elements of the
self.pitch vector by 0.0". Typing is dynamic. A "table" is an associative
array, and these are used for all sorts of things, including data
structures and OOP style objects. No "hardwired" OOP support except for
some syntactic sugar like the object:Method(arg) thing, which is
equivalent to object.Method(object, arg).
> Or rather, Linux would then only record all events as cv events, and
> apps would translate them to MIDI.
Well, you can translate back and forth between MIDI and cv + "virtual voice"
addressing, but since the latter can potentially express things that MIDI
cannot, there may be issues when translating data that didn't originate from
MIDI... I believe the user will have to decide how to deal with this; map
virtual voices to MIDI channels, use some SysEx extension, just drop or "mix"
the information that doesn't fit, or whatever.
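As one concrete take on the first option, a translator could allocate a
dedicated MIDI channel per active virtual voice, something like this
(hypothetical names; just a sketch of the idea):
----------------------------------------------------------------------
#include <stdio.h>

/* Sketch of "map virtual voices to MIDI channels": each active virtual
 * voice grabs a MIDI channel of its own, so per-voice controls survive
 * the translation. All names here are hypothetical. */
#define NCHANNELS	16

static int channel_owner[NCHANNELS];	/* vvoice + 1; 0 means free */

static int channel_for_vvoice(int vvoice)
{
	int i, free_ch = -1;
	for(i = 0; i < NCHANNELS; ++i)
	{
		if(channel_owner[i] == vvoice + 1)
			return i;		/* already mapped */
		if(!channel_owner[i] && (free_ch < 0))
			free_ch = i;		/* remember first free */
	}
	if(free_ch >= 0)
		channel_owner[free_ch] = vvoice + 1;
	return free_ch;	/* -1: out of channels; drop or "mix"... */
}

static void release_vvoice(int vvoice)
{
	int i;
	for(i = 0; i < NCHANNELS; ++i)
		if(channel_owner[i] == vvoice + 1)
			channel_owner[i] = 0;
}

int main(void)
{
	printf("%d\n", channel_for_vvoice(60));	/* 0 */
	printf("%d\n", channel_for_vvoice(64));	/* 1 */
	release_vvoice(60);
	printf("%d\n", channel_for_vvoice(67));	/* 0 again */
	return 0;
}
----------------------------------------------------------------------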
> I'm asking myself, if cv has advantages compared to MIDI, what is the
> advantage for the industry to use MIDI? Ok, when MIDI was established
> we had different technology, e.g. RAM was slower and more expensive;
> today we have serial buses that are faster than parallel buses, so
> thinking about reforming MIDI, or having something new, seems suitable.
It's not easy to replace an existing standard that has massive support
everywhere and gets the job done "well enough" for the vast majority of
users...
> [...]
> Sounds nice in theory, but in practice I don't believe that this is
> true. There is fierce competition between proprietary software
> developers; why don't they use cv for their products? Because they are
> less gifted than all the Linux coders?
Because no popular hosts can handle cv-controlled synths properly...? And,
how many musicians ACTUALLY need this for their everyday work?
> Even if this is a PITA for me, I'll stay with Linux. Do musicians now
> need to know which way Linux will go? Are Linux coders interested in
> taking care of such issues, or do they want all musicians to buy
> special Linux-compatible computers, instead of solving issues like the
> jitter issue for nearly every computer?
Well, you do need a properly configured Linux kernel. Don't know much
about the latest Windows developments, but not long ago, I did some vocals
recording and editing on a Windows laptop with a USB sound card, and it
was pretty much rock solid down to a few ms of buffering. (After all those
problems I've had with Windoze, which actually drove me over to Linux, I
was actually slightly impressed! :-D) I've been lower than that with
Linux, and that's WITH massive system stress (which the Windows laptop
couldn't take at all) - but sure, you won't get that out of the box with
your average Linux distro.
Either way, if you're having latency issues with Windows (like I had when I
first tried to do that job on another laptop...), you'll most likely have the
same issues with Linux, and vice versa. A hardware issue is a hardware issue.
A common problem is "super NMIs" (usually wired to BIOS code) freezing the
whole system for a few ms every now and then. Absolute showstopper if you're
running RT-Linux or RTAI. There are fixes for most of those for Linux... Maybe
Windows has corresponding fixes built-in these days...? Other than that, I
don't know where the difference could be, really.
> Are they interested in being compatible with industry standards, or
> will they do their own thing? An answer might be that Linux coders will
> do their own thing and in addition be compatible with industry
> standards. I don't think that this will be possible, because it isn't
> solved now, and the valid arguments right now are time and money, so
> how would implementing a new standard defuse the situation?
Are we talking about OS distros, external hardware support (i.e. MIDI
devices), file formats (i.e. standard MIDI files for automation), APIs, or
what is this about, really...?
Supporting all sorts of PC hardware out of the box with any OS is a massive
task! Some Linux distros are trying, but without loads of testing, there will
invariably be problems with a relatively large percentage of machines. Then
again, I talked to a studio owner some time ago, who had been struggling for
weeks and months getting ProTools (software + hardware) to work on a Windoze
box until he discovered that the video card was causing the problems... In
short, regardless of OS, you need to buy a turn-key audio workstation if you
want any sort of guarantee that things will Just Work(TM). Nothing much we -
or Microsoft, for that matter - can do about this. Mainstream PC hardware is
just not built for low latency realtime applications, so there WILL be issues
with some of it.
I mean, standard cars aren't meant for racing either. You may find some that
accidentally work "ok", but most likely, you'll be spending some time in the
garage fixing various issues. Or, you go to Caterham, Westfield, Radical or
what have you, and buy a car that's explicitly built for the race track. Right
tools for the job.
> [...]
> Having cv additionally is good, no doubt about it. My final question,
> the only question I wish to get an answer to, is: even today, MIDI is
> treated as an orphan by Linux; if we got cv, would there be any effort
> to solve MIDI issues with the usual products from the industry?
Those issues will have to be solved either way. Having proper APIs, file
formats etc in the Linux domain will probably only make it MORE likely that
these issues will be solved, actually. Why spend time making various devices
work with Linux if you have no software that can make much use of them anyway?
A bit of a Catch-22 situation, maybe...
> Or do we need to buy special mobos,
Yes, or at least the "right" ones - but that goes for Windows too...
> do we need to use special MIDI interfaces etc.
If you can do cv<->MIDI mapping in the interface, you may as well do it
somewhere between the driver and the application instead.
If you want to network machines with other protocols, I don't think
there's a need for any custom hardware for that. Just use Ethernet, USB,
1394 or something; plenty of bandwidth and supported hardware available
for any OS, pretty much.
Of course, supporting some "industry standards" would be nice, but we need
open specifications for that. NDAs and restrictive per-user licenses don't mix
very well with Free/Open Source software.
> to still have fewer possibilities using Linux than are possible with
> the usual products of the industry?
> We won't deal with the devil just by using the possibilities of MIDI.
> Today Linux doesn't use the possibilities of MIDI; I wonder if having a
> Linux standard, e.g. cv, would solve any issues while the common MIDI
> standard still isn't used in a sufficient way.
Well, being able to wire Linux applications, plugins, machines etc together
would help, but I'm not sure how that relates to what you're thinking of
here...
> I do agree that everybody I know, me included, sometimes has problems
> when using MIDI hardware because of some limitations of MIDI, but OTOH
> this industry standard is a blessing.
Indeed. Like I said, it gets the job done "well enough" for the vast majority
of users. So, replacing MIDI is of little interest unless you want to do some
pretty advanced stuff, or just want to design a clean, simple plugin API or
something - and the latter has very little to do with connectivity to external
hardware devices.
> Networking of sequencers, sound modules, effects, master keyboards,
> sync to tape recorders, hard disk recorders etc. is possible, for less
> money, without caring which vendor a keyboard, an effect, or a mobo
> comes from. Linux is an exception; we do have issues when using MIDI.
> But is it really MIDI that is bad? I guess MIDI on Linux needs more
> attention.
> Internally to Linux, most things are ok, but networking with the usual
> MIDI equipment that musicians and audio and video studios have is still
> a PITA. Would cv solve that?
Still not quite sure I'm following, but looking at some other posts in this
thread, I get the impression that this cv thing is more about application
implementation, APIs and protocols, and not so much about interfacing with
external hardware.
From that POV, you can think of cv (or some Linux Automation Data
protocol, or whatever) as a way of making automation data easier to deal
with inside applications, and a way of making applications communicate
better. Wiring that to MIDI and other protocols is (mostly) orthogonal;
you just need something that's at least as expressive as MIDI. Nice bonus
if it's much more expressive, while being nicer and simpler to deal with
in code.
--
//David Olofson - Developer, Artist, Open Source Advocate

.--- Games, examples, libraries, scripting, sound, music, graphics ---.
|   http://olofson.net      http://kobodeluxe.com   http://audiality.org   |
|   http://eel.olofson.net  http://zeespace.net     http://reologica.se    |
'---------------------------------------------------------------------'