Hi all,
We're in the process of porting an open source Matlab toolbox, the
Psychtoolbox, to Linux. It's a piece of software that's widely used
within neuroscience and psychophysics to display graphics and play
sounds in experiments. It uses OpenGL for its graphics backend, and
right now the sound support is just Matlab's built-in audio, which is
really poor.
A lot of us researchers are interested in audiovisual experiments, where
we study how the brain combines auditory and visual information (for
example in speech perception).
Concretely speaking, that means we need timing and synchrony as
precise as we can possibly get. A typical experiment goes something
like this:
- display a flash and, at the same time, play a beep
- wait for response
The tricky bit is of course getting a flash that's totally synchronous
with the beep. Absolute synchrony is not achievable without dedicated
hardware, but we need an approximation that's within a few ms.
I do things a bit like this for audiovisual performances. As you say,
it is impossible to get it precisely right, but what I tend to do is
run everything slightly ahead of realtime, i.e. timestamp events to
happen in the future. Of course you can't do this if human input is
involved, but if the timing is machine generated you can tune the
audio and visual delays independently so they are close enough for
most purposes (see the sketch below).
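The idea in rough C terms: pick one deadline a little in the future,
then subtract each channel's hand-tuned latency so flash and beep land
together. schedule_flash() and schedule_beep() here are hypothetical
stand-ins for whatever your graphics and audio backends provide, and
the latency numbers are invented; you'd measure those per machine.

    /* Run-ahead scheduling sketch. */
    #include <time.h>

    #define LEAD_MS      50.0  /* how far ahead of realtime we run */
    #define VIDEO_LAT_MS 18.0  /* hand-tuned per machine */
    #define AUDIO_LAT_MS  7.0  /* hand-tuned per machine */

    extern void schedule_flash(double when_ms); /* hypothetical backend */
    extern void schedule_beep(double when_ms);  /* hypothetical backend */

    static double now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
    }

    void present_pair(void)
    {
        /* One shared deadline, two channel-specific lead times. */
        double deadline = now_ms() + LEAD_MS;
        schedule_flash(deadline - VIDEO_LAT_MS);
        schedule_beep(deadline - AUDIO_LAT_MS);
    }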
This is also the technique I use for syncing live performances with
other people. It all tends to have to be tuned by hand, as there are
too many variables to calculate: graphics card speed, sound card
hardware latency and so on, all of which differ across machines.
Technologically speaking, I'd recommend using the OSC protocol to send
messages to anything that accepts timestamped events; a sketch follows.
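If you go the OSC route, liblo is the usual C library for it. Roughly
what sending a bundle stamped 50 ms into the future looks like (the
/beep path, port 7770, the 440 Hz argument and the 50 ms lead are all
invented for the example):

    /* Send an OSC bundle whose timetag is 50 ms in the future; a
     * timetag-aware receiver holds the message until that moment.
     * Build with: gcc beep.c -llo */
    #include <stdint.h>
    #include <stdio.h>
    #include <lo/lo.h>

    int main(void)
    {
        lo_address target = lo_address_new(NULL, "7770"); /* localhost */

        lo_timetag tt;
        lo_timetag_now(&tt);

        /* Push the timetag 50 ms ahead; .frac counts 2^32ths of a
         * second, so carry into .sec if the addition wraps. */
        uint32_t delta = (uint32_t)(0.050 * 4294967296.0);
        tt.frac += delta;
        if (tt.frac < delta)
            tt.sec++;

        lo_bundle b = lo_bundle_new(tt);
        lo_message m = lo_message_new();
        lo_message_add_float(m, 440.0f);      /* e.g. beep frequency */
        lo_bundle_add_message(b, "/beep", m); /* "/beep" is made up  */

        if (lo_send_bundle(target, b) == -1)
            fprintf(stderr, "send failed: %s\n",
                    lo_address_errstr(target));

        lo_bundle_free_messages(b);
        lo_address_free(target);
        return 0;
    }

The nice property is that the deadline travels with the message, so
scheduling jitter on the sending side stops mattering as long as you
stay inside the lead time.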
cheers,
dave