[linux-audio-dev] [semi-OT] EEL 0.1.0

David Olofson david at olofson.net
Mon Jan 10 01:11:22 UTC 2005


On Sunday 09 January 2005 21.47, Stefan Westerfeld wrote:
[...]
> in a property at the GUI, to allow the user to add their own
> custom DSP code. But it's really just started, so I am wondering
> whether EEL is intended for this domain (RT audio processing), or
> whether it will be too slow.

The new EEL, which is pretty much a complete rewrite of what's in the 
current version of Audiality, is specifically intended to run in the 
audio thread, acting as the glue between events (such as MIDI I/O) 
and the RT synth objects.

The idea with Audiality is to create a tool that's powerful enough to 
be used as a "real" synth, while being scalable enough to be useful 
in games and multimedia for small downloads and/or interactive sound 
and music. Sort of like a game sound system (SDL_mixer, HMQ Audio 
System, FMOD etc.) on steroids. Or crack, maybe - I dunno... ;-D

I'm also using EEL in a work-related control engineering project that 
needs to handle hard RT control tasks at 1000+ Hz cycle rates. That's 
actually the main reason why I decided to go ahead with the VM-based 
rewrite at this point. (You can see in the Audiality TODO that it was 
planned for much later.)


That is, whereas most scripting languages and HLLs are unsuitable for 
RT tasks, and the few soft RT scripting languages I've found have 
trouble with cycle/frame rates above 100 Hz (they're aimed at 
game scripting), EEL is explicitly designed for that kind of (ab)use. 
It has about the same hard RT potential as a programming language 
with manual memory management (like C, Pascal or C++), while 
providing automatic memory management and some other "real" HLL 
features.

That said, one must realize that instruction decoding, dynamic typing 
etc. cause serious overhead in any VM, and there's really no way 
around that, short of implementing a JIT or a straight native 
compiler.

Well, almost. There are some tricks. For EEL, I'm planning on 
automatic type tracking (the compiler detects code paths where 
variables stick with one type, selects faster statically typed VM 
instructions, takes memory management shortcuts, etc.) and, of 
course, *the* performance hack for VMs: taking CISC as far as it can 
realistically be taken - more and bigger instructions, so you can use 
fewer of them and optimize their implementations in the VM better.
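
To make that concrete, here's a minimal sketch in C of how a compound 
instruction cuts dispatch overhead: one decode step does what would 
otherwise take two. (Opcodes and encoding are made up for 
illustration - this is not EEL's actual bytecode.)

/* Toy "CISC style" dispatch loop with a compound MADD instruction. */
#include <stdio.h>

enum { OP_LOADI, OP_ADD, OP_MUL, OP_MADD, OP_END };

typedef struct { int op, a, b, c, d; } Insn;

static void run(const Insn *code, double *r)
{
    for(;; ++code)
        switch(code->op)    /* One decode per instruction... */
        {
          case OP_LOADI: r[code->a] = code->b; break;
          case OP_ADD: r[code->a] = r[code->b] + r[code->c]; break;
          case OP_MUL: r[code->a] = r[code->b] * r[code->c]; break;
          /* ...so a compound MADD pays for one decode where
           * separate MUL + ADD would pay for two. */
          case OP_MADD:
            r[code->a] = r[code->b] + r[code->c] * r[code->d];
            break;
          case OP_END: return;
        }
}

int main(void)
{
    double r[8] = { 0 };
    Insn prog[] = {
        { OP_LOADI, 1, 3 },         /* r1 = 3 */
        { OP_LOADI, 2, 4 },         /* r2 = 4 */
        { OP_LOADI, 3, 5 },         /* r3 = 5 */
        { OP_MADD, 0, 1, 2, 3 },    /* r0 = r1 + r2 * r3 */
        { OP_END }
    };
    run(prog, r);
    printf("r0 = %g\n", r[0]);      /* 23 */
    return 0;
}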

The biggest performance gain, however, is what is already demonstrated 
in current versions of Audiality, where a slow direct-from-source 
scripting engine drives "macro operators" that process entire 
waveforms at a time. That kind of domain-specific "shortcut" can 
render the scripting engine overhead practically irrelevant. I intend 
to take that concept pretty far in Audiality - but of course, that's 
really more of a use case than anything to do with EEL itself. You'll 
be able to add your own EEL classes with C implementations, so you 
could implement a generic vector math or DSP package or something if 
you like.
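
For example, a "macro operator" could be as simple as this (plain C, 
hypothetical function - not actual Audiality code): the scripting 
layer makes one call per waveform, and all the per-sample work runs 
as compiled code.

#include <stddef.h>

/* Apply a linear fade-out over the last 'fade' frames of a waveform.
 * Called once from the (slow) script; the inner loop is pure C. */
void wave_fadeout(float *wave, size_t frames, size_t fade)
{
    size_t i, start;
    if(fade > frames)
        fade = frames;
    start = frames - fade;
    for(i = 0; i < fade; ++i)
        wave[start + i] *= (float)(fade - i) / (float)fade;
}

The interpreter overhead is paid once per waveform instead of once per 
sample, which is why it hardly matters how slow the interpreter is.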


> I know that for RT audio processing, in the end you should compile
> the resulting code;

Yes, at least if you want to do traditional explicit sample-by-sample 
processing directly in the scripting language. A JIT or native 
compiler (compiling via C, maybe?) might work, but I'll probably 
focus on higher level operations, such as vector math, running DSP 
units, processing timestamped events and that kind of stuff.

So, as it is, EEL is not an optimizer-friendly, low-level DSP 
language - but then again, type tracking (effectively turning 
dynamically/implicitly typed code into statically typed code where 
possible), compound operators (MACC and the like) and vector 
operators might help quite a bit... I'm definitely interested in 
exploring that kind of stuff, but it has to be implemented and tested 
as well! :-)
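
To give an idea of what type tracking buys, here's a rough sketch in 
C with a made-up value representation (not EEL's actual internals): 
the generic operation has to check type tags every time, while the 
statically typed variant the compiler can select on a type-stable 
code path just does the arithmetic.

#include <stdio.h>

typedef enum { T_INTEGER, T_REAL /*, objects... */ } Type;

typedef struct {
    Type    type;
    union {
        long    i;
        double  r;
    } v;
} Value;

/* Generic, dynamically typed ADD: tag checks on every operation. */
static Value add_dynamic(Value a, Value b)
{
    Value res;
    double ar = (a.type == T_REAL) ? a.v.r : (double)a.v.i;
    double br = (b.type == T_REAL) ? b.v.r : (double)b.v.i;
    res.type = T_REAL;
    res.v.r = ar + br;
    return res;
}

/* Statically typed ADD_REAL: what a type-tracked code path could use
 * when both operands are known to stay real. */
static double add_real(double a, double b)
{
    return a + b;
}

int main(void)
{
    Value a = { T_REAL, { .r = 1.5 } };
    Value b = { T_INTEGER, { .i = 2 } };
    printf("%g %g\n", add_dynamic(a, b).v.r, add_real(1.5, 2.0));
    return 0;
}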


> the question is: can EEL be used for rapid prototyping of DSP
> algorithms, or is it too slow (or too unreliable when it comes to
> the RT aspects: this kind of thing is ultimately run under
> conditions as strict as those that apply to the JACK thread) for
> doing this.

Well, determinism shouldn't be much of a problem. There is no RT 
memory manager in this release, but if you're in a hurry, you can 
probably use the one from RTAI/LXRT, or hack something simple. (Some 
LIFO pools of blocks of suitable sizes.) Then again, you may not even 
need it, depending on how your code works! (*)
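
If you go the "hack something simple" route, a LIFO pool is about as 
small as it gets. A rough sketch for one block size (this is not 
Audiality code; fill the pool before entering the RT context):

#include <stdlib.h>

typedef struct Block Block;
struct Block { Block *next; };

typedef struct {
    Block   *free;      /* LIFO free list */
    size_t  blocksize;
} Pool;

/* Preallocate 'count' blocks of 'size' bytes - do this before RT! */
int pool_init(Pool *p, size_t size, unsigned count)
{
    unsigned i;
    p->free = NULL;
    p->blocksize = size < sizeof(Block) ? sizeof(Block) : size;
    for(i = 0; i < count; ++i)
    {
        Block *b = malloc(p->blocksize);
        if(!b)
            return -1;
        b->next = p->free;
        p->free = b;
    }
    return 0;
}

/* O(1), no locks, no system calls - safe in the RT path. */
void *pool_alloc(Pool *p)
{
    Block *b = p->free;
    if(b)
        p->free = b->next;
    return b;   /* NULL if the pool is exhausted */
}

void pool_free(Pool *p, void *mem)
{
    Block *b = (Block *)mem;
    b->next = p->free;
    p->free = b;
}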

Memory management is done with refcounting + some shortcuts, and as of 
now, objects are destroyed instantly when they have no references. It 
should be easy enough to destroy objects "a bunch per cycle" if that 
fits your application better; most of the code for that (not much, 
really) is already in place. The scheduler can easily be wired to 
destroy some garbage every n instructions instead.
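
That could look something like this (hypothetical names, not the 
actual EEL code): instead of freeing an object the instant its 
refcount hits zero, it goes on a list that the scheduler drains a few 
objects at a time.

#include <stdlib.h>

typedef struct Object Object;
struct Object {
    int     refcount;
    Object  *next_garbage;
    /* ...payload... */
};

static Object *garbage = NULL;  /* LIFO list of dead objects */

void obj_unref(Object *o)
{
    if(--o->refcount <= 0)
    {
        o->next_garbage = garbage;  /* queue instead of free */
        garbage = o;
    }
}

/* Called by the scheduler, e.g. every n VM instructions. */
void gc_drain(int limit)
{
    while(garbage && limit--)
    {
        Object *o = garbage;
        garbage = o->next_garbage;
        free(o);    /* or return the block to an RT pool */
    }
}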

(*) Simple values and function "closures" are *not* managed
    objects! The VM is register based, and uses an array of
    registers (the heap) as a call stack by moving around a
    register frame. Basically, "malloc()ed" objects are used
    only for stuff that doesn't fit in a register, and they're
    normally passed around by reference rather than being
    cloned.
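
(If that sounds cryptic: something along these lines, with all 
details invented for illustration.)

/* Register based VM sketch: one big register array (the "heap") is
 * the call stack, and a call just moves the frame base - so function
 * calls need no per-call heap allocation. Not EEL's actual layout. */
typedef struct {
    double      heap[1024];     /* registers for all call levels */
    unsigned    base;           /* current frame's first register */
} VMState;

/* The caller has already written the callee's arguments just past its
 * own frame; entering the callee just means moving the base. */
unsigned vm_call(VMState *vm, unsigned caller_size)
{
    unsigned old_base = vm->base;
    vm->base += caller_size;
    return old_base;    /* saved so vm_return() can restore it */
}

void vm_return(VMState *vm, unsigned old_base)
{
    vm->base = old_base;    /* the frame "pops" by moving back */
}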


Bed time... Too much late night hacking lately.


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
|  Free/Open Source audio engine for games and multimedia.  |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---


