On Friday 10 October 2003 19.19, hexe_2003@directbox.com wrote:
> [...]
> quite interesting ;)
> Could you give me a short idea how to mix two
> signals/streams/oscillators/whatever?

The actual mixing is just a matter of adding the signals together.
You'll have to make sure the result doesn't overflow (wrap), but
there's no single correct way of doing that. You can play it safe and
scale each voice by 1/voices, use a compressor/limiter (dynamics
processor), or just set a sensible level and clip the signal (to avoid
wrapping), just in case.
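
Something like this, as a minimal sketch (the function name, the float
buffers and the [-1, 1] range are just assumptions for illustration,
not from any particular API):

/* Mix 'voices' input buffers into 'out', then hard clip to [-1, 1].
 * A minimal sketch; names, buffer layout and range are assumptions.
 */
static void mix_and_clip(float **in, int voices, float *out, int frames)
{
	int i, v;
	for(i = 0; i < frames; ++i)
	{
		float sum = 0.0f;
		for(v = 0; v < voices; ++v)
			sum += in[v][i];	/* Mixing is just addition */

		/* Clip rather than letting the sum wrap */
		if(sum > 1.0f)
			sum = 1.0f;
		else if(sum < -1.0f)
			sum = -1.0f;
		out[i] = sum;
	}
}

Scaling each voice by 1.0f / voices before the clip stage would be the
"play safe" variant mentioned above.
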
I guess the most confusing part for some people (depends on your
background) is to get multiple "units" working together, in parallel,
generating/processing N samples at a time. Functions that generate as
many samples as they feel like and then expect to send the output
somewhere through a write() style API won't work (well) in
single-threaded engines. You want everything to process/generate N samples
upon request, so you can just run all units in the net once per block
of audio.
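
As a rough sketch (the struct and the names are made up, purely to
illustrate the idea), each unit could expose a process() callback that
handles exactly the number of frames it is asked for, so one loop can
drive the whole net:

/* Hypothetical "unit" interface: every generator/effect processes
 * exactly 'frames' samples per call.
 */
typedef struct unit
{
	void (*process)(struct unit *u, float *buf, int frames);
	void *data;	/* Unit specific state (phase, filter memory, ...) */
} unit_t;

/* Run all units once for one block of audio */
static void run_block(unit_t **units, int nunits, float *buf, int frames)
{
	int i;
	for(i = 0; i < nunits; ++i)
		units[i]->process(units[i], buf, frames);
}
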
> If it is not pthread, what is it?

Platform and API dependent. Some audio APIs will run your audio engine
as a callback, so you don't have to (and generally shouldn't) set up
any audio threads of your own.
Usually, there will still be a thread (managed by the API or driver
subsystem), but in some cases, the callbacks actually come directly
from the driver and run in interrupt context. This is common on
systems without real multitasking, like Win16 and Mac OS "Classic".
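
Building on the unit sketch above, a callback-driven engine might boil
down to something like this (the callback signature here is made up;
real APIs such as JACK, PortAudio or SDL each have their own):

/* Hypothetical callback, invoked by the audio API; the API owns the
 * thread (or interrupt context) and decides when to call us.
 */
typedef struct engine
{
	unit_t **units;		/* unit_t as in the sketch above */
	int nunits;
} engine_t;

static void audio_callback(void *userdata, float *out, int frames)
{
	engine_t *e = (engine_t *)userdata;

	/* Just run the net for one block; no audio threads of our own */
	run_block(e->units, e->nunits, out, frames);
}
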
//David Olofson - Programmer, Composer, Open Source Advocate
.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---