Hello, (I'm new to this list, so hi everyone!)
I'm rather stuck on the following: I'm writing an app that uses JACK for its
audio output. I now want to control this app using midi but I have trouble
figuring out how to synchronize the rendered sound to the incoming events.
The events, midi notes for example, come in with timestamps in one thread.
Another thread (the one entered by process()) renders the audio. In order to
render properly, it would need to calculate the exact sample at which the
incoming note should begin to take effect in the rendered output stream.
If you're reading this in a fixed-width font, here's a graphical
representation of the problem:
|...e.....e|e....e....|...ee...e.|.....e.e.e|....e...e.| midi events
|..........|...rrr....|.rr.......|......rrr.|....rrrr..| rendering
|..........|..........|ssssssssss|ssssssssss|ssssssssss| sound
Here, the e's represent midi events (but they could just as well be gui
events). The r's in the second bar represent calls to my app's process()
function; during this time, the audio that will be played back during the
next cycle is rendered. The s's in the third bar represent the actual sound
as it was rendered during the previous block. The vertical bars delimit
blocks of time equal to the buffer size.
The best I can think of now is to record midi events during the first
block and process them into audio during the second block (because I want
to take into account all events that occurred during the first block), so
that it can be played back during the third. All is fine so far, but time
in the event bar is measured in seconds and fractions thereof, while time
in the third bar is measured in samples. How can I translate the time
recorded in the events (seconds) to time in samples? And how can I know at
which exact time, relative to the current playback time, my process()
method was called?
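For what it's worth, here is the kind of thing I'm imagining, using JACK's
frame-time calls. This is only a sketch: I'm assuming jack_frame_time()
and jack_last_frame_time() do what their names suggest, and queue_pop() is
a made-up stand-in for a real lock-free queue.

/* Timestamp incoming midi events in JACK frame time (samples), then
 * schedule them inside process(). */
#include <jack/jack.h>

typedef struct {
    jack_nframes_t frame;    /* frame at which the note should sound */
    unsigned char  note;
} midi_event_t;

static jack_client_t *client;

/* midi thread: stamp the event with JACK's current frame estimate,
 * so the seconds-to-samples conversion is done by JACK itself. */
void on_midi_note(unsigned char note, midi_event_t *ev)
{
    ev->frame = jack_frame_time(client);
    ev->note  = note;
    /* ... push ev onto a lock-free queue read by process() ... */
}

/* audio thread: turn the stamp into an offset inside this buffer. */
int process(jack_nframes_t nframes, void *arg)
{
    jack_nframes_t cycle_start = jack_last_frame_time(client);
    midi_event_t ev;

    while (queue_pop(&ev)) {                      /* hypothetical */
        long offset = (long)ev.frame - (long)cycle_start;
        if (offset < 0)
            offset = 0;               /* late event: play it ASAP */
        /* render the note starting at sample 'offset', or keep it
         * for a later cycle if offset >= nframes */
    }
    return 0;
}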
If I just measure time once at the start of my application, I'm afraid
things will drift. Is that correct? How have other people solved this
problem? I hope somebody can help!
Regards,
Denis
The Generalized Music Plug-In Interface (GMPI) working group of the MIDI
Manufacturers Association (MMA) is seeking the input of music and audio
software developers to help define the technical requirements of GMPI.
The objective of the GMPI working group is to create a unified
cross-platform music plug-in interface. It is hoped that this new
interface will provide an alternative to the multitude of plug-in
interfaces that exist today. Among the many benefits of standardization
are increased choice for customers, lower costs for music plug-in vendors,
and a secure future for valuable market-enabling technology.
Like MIDI, GMPI will be license-free and royalty-free.
Phase 1 of the GMPI working group's effort is to determine what is required
of GMPI: What sorts of capabilities are needed to support existing products
and customers? What are the emerging new directions that must be addressed?
Phase 1 is open to any music software developer and is not limited to MMA
members. It will last a minimum of three months, to be extended if deemed
necessary by the MMA. Discussions will be held on an email reflector, with
possible meetings at major industry gatherings such as AES, NAMM and Musik
Messe.
Following the collection of requirements in Phase 1, the members of the MMA
will meet to discuss and evaluate proposals, in accordance with existing MMA
procedures for developing standards. There will be one or more periods for
public comment prior to adoption by MMA members.
If you are a developer with a serious interest in the design of this
specification and are not currently a member of the MMA, we urge you to
consider joining. Fees are not prohibitively high, even for a small
commercial developer, and they pay for administration, legal costs and
marketing. Please visit http://www.midi.org for more information about
membership.
To participate, please email gmpi-request(a)freelists.org with the word
"subscribe" in the subject line. Please also provide your name, company
name (if any) and a brief description of your personal or corporate domain
of interest. We look forward to hearing from you.
Sincerely,
Ron Kuper
GMPI Working Group Chair
Hi,
I'm currently embarking on a project to make an interface between Q, a
functional programming language
(http://www.musikwissenschaft.uni-mainz.de/~ag/q/), and SuperCollider. I
think the OSC interface will be fairly straightforward to do, but I
haven't been able to find any documentation (besides the sc sources,
which I haven't grokked yet ;-) on the format of the synth definition
file. Does anyone here know more about this?
Many thanks in advance,
Albert
--
Dr. Albert Gräf
Email: Dr.Graef(a)t-online.de, ag(a)muwiinfa.geschichte.uni-mainz.de
WWW: http://www.musikwissenschaft.uni-mainz.de/~ag
Several people have asked me over the last few weeks what denormal numbers
are; well, here's a much better description than my rambling head
scratching: http://www.ecs.soton.ac.uk/~swh/denormal.ps
It's an extract from David Goldberg's article, "What Every Computer
Scientist Should Know About Floating-Point Arithmetic".
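For a quick hands-on illustration, here's a sketch showing a float
decaying into the denormal range, plus the small-offset trick often used
in audio code to keep values out of it (exact behaviour depends on your
FPU):

#include <stdio.h>
#include <float.h>

int main(void)
{
    float y = 1.0f;
    for (int i = 0; i < 8; i++)
        y *= 1e-5f;             /* decays far below FLT_MIN...      */
    printf("y = %g, FLT_MIN = %g\n", y, FLT_MIN);  /* y is denormal */

    float z = 1.0f;
    for (int i = 0; i < 8; i++)
        z = z * 1e-5f + 1e-20f; /* tiny offset keeps z normal       */
    printf("z = %g\n", z);

    return 0;
}

On many x86 FPUs the denormal path is handled by a much slower microcode
assist, which is why this matters for DSP inner loops.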
- Steve
Hi list,
I just found this nice link at the Debian Weekly News:
"Debian as musical Instrument. James Patten and Ben Recht developed a
composition and performance instrument for electronic music which tracks
the positions of objects on a tabletop surface and converts their motion
into music. One can pull sounds from a giant set of samples, cut between
drum loops to create new beats, and apply digital processing all at the
same time on the same table. Knoppix is used as operating system."
The homepage is at http://web.media.mit.edu/~jpatten/audiopad/ which also
has a very nice 20 MB QuickTime .mov file explaining how it is controlled
and how it works (mplayer can play this .mov without problems once you
have the codecs installed).
"Sweet!" :-)
Greetings,
Frank
Hi,
I'm pleased to announce the first public release of hdspmixer.
Hdspmixer is a linux clone of Totalmix, a tool to control the
advanced routing possibilities of the RME Hammerfall DSP cards.
You can download hdspmixer here:
http://www.undata.org/~thomas/
Thomas
Hello Debian users and interested bystanders,
I have completed the Debian csound package. Once my sponsor has looked
it over, approved of it, and uploaded it, it will be in unstable. In the
meantime you can find it at
deb http://hans.fugal.net/debian sid/main
deb-src http://hans.fugal.net/debian sid/main
I use csound, but I wouldn't consider myself a power user; in particular
I haven't used any of the utilities (e.g. pvanal, hetro, etc.). If you
do use them, please give the manpages a look-over and let me know if you
see anything wrong.
--
Hans Fugal | De gustibus non disputandum est.
http://hans.fugal.net/ | Debian, vim, mutt, ruby, text, gpg
http://gdmxml.fugal.net/ | WindowMaker, gaim, UTF-8, RISC, JS Bach
---------------------------------------------------------------------
GnuPG Fingerprint: 6940 87C5 6610 567F 1E95 CB5E FC98 E8CD E0AA D460
I've been a professional audio engineer for 15 years, and DSD did convince
me. I could compare the sound quality with TI ADCs/DACs.
Conventional PCM techniques are unable to reproduce high frequencies
correctly, and the explanation is very simple. If you record a sound at
44.1 kHz, you get a theoretical frequency response of 0 - 22050 Hz. BUT to
describe frequencies from 11025 to 22050 Hz, you have at most a
4-sample-long period to play with.
A 22050 Hz sine could be really accurate (one sample up, one sample down
every 1/22050th of a second), and so could an 11025 Hz one. But
intermediate frequencies introduce temporal aliasing, a metallic feeling
due to temporal quantization. This is inherent to the very low sampling
rate (96 kHz is just a bit better, but no miracle), which is unable to
describe waveforms at high frequencies.
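You can also see it in the numbers. A quick sketch (15 kHz chosen
arbitrarily between 11025 and 22050 Hz): print the raw samples and notice
that with only ~2.94 samples per waveform period, successive cycles of the
sine are described by different sample values.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double rate = 44100.0, freq = 15000.0;
    int n;

    /* one waveform period is 44100/15000 = 2.94 samples */
    for (n = 0; n < 44; n++)     /* ~1 ms of audio */
        printf("%2d  % .4f\n", n, sin(2.0 * M_PI * freq * n / rate));

    return 0;
}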
Bad temporal definition at high frequencies means bad transients. Anyone
can notice it when he _actually_ hears and compares PCM and DSD.
Stop the speculative talking and try to get hold of a real demo...
--
mickael
Dear Sirs, dear Madams
I've programmed a utility called kisdnmonitor; it tracks the calls you
make or receive. It's for KDE, as the name suggests. There's a server as
well, called isdnserver. You can get more info on both at:
http://www.elogix.ch/linux_en.html
Now I would like to add telephony support to the programs, so that you can
use the computer as a "deluxe" ISDN phone.
So far I've figured out that you can dial through the modem emulation
/dev/ttyIx: you configure the device properly and then start a call with
the following Hayes commands:
ATS18=1
ATS14=4
ATD<the phone number>
Everything is OK so far; the problem is that, obviously, no sound will
automagically come out of the speakers, and nothing spoken into the mic
will go to the other side.
Basically, what I want to do is stream the sound from the ISDN card to the
speakers, and from the mic to the ISDN card. Vbox (a voice call program)
is able to save samples from the ISDN card's input. I should mention as
well that years ago I had a program that came with the ISDN card (for
windoze) called RVScom which had the same functionality, and it worked (in
walkie-talkie mode; I guess my ISDN card isn't full-duplex capable).
Since 8 bits suffice, the processor load shouldn't be extreme.
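For the audio path, this is roughly the copy loop I have in mind. It's
only a sketch: /dev/ttyI0 delivering raw 8-bit 8 kHz audio in voice mode
is an assumption on my part (it may well be aLaw that needs decoding
first), and it only does one direction since the card seems to be half
duplex:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>

int main(void)
{
    int isdn = open("/dev/ttyI0", O_RDWR);   /* after dialing       */
    int dsp  = open("/dev/dsp", O_WRONLY);   /* OSS output device   */
    if (isdn < 0 || dsp < 0)
        return 1;

    int fmt = AFMT_U8, chans = 1, rate = 8000;
    ioctl(dsp, SNDCTL_DSP_SETFMT, &fmt);     /* 8-bit samples       */
    ioctl(dsp, SNDCTL_DSP_CHANNELS, &chans); /* mono                */
    ioctl(dsp, SNDCTL_DSP_SPEED, &rate);     /* 8 kHz, ISDN's rate  */

    unsigned char buf[256];
    ssize_t n;
    while ((n = read(isdn, buf, sizeof buf)) > 0)
        write(dsp, buf, n);                  /* ISDN -> speakers    */

    return 0;
}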
Does anyone have an idea?
Any help would be greatly appreciated. Thank you in advance.
Respectfully submitted
George
Hello list,
I hope there are some other users of the Terratec EWS88MT card out
there, because I seem to be in serious trouble - the card 'works'
but not in a way that makes it useful.
I've got a test program that outputs the same signal (1 kHz sine at
-6 dB below peak level) to all eight channels. Measuring the output
levels, I get wildly different values. Using envy24control to set
the DAC output levels only seems to make things worse - there is
no logical relation at all between the slider position and the
actual level. For each of the four pairs of channels, the two
level settings interact in a more or less random way, mostly by
just switching the other channel off or on. After a few minutes
twiddling, all sound stops until I reboot.
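In case it matters, this is roughly what the test program does (a
simplified sketch, not the actual code):

#include <jack/jack.h>
#include <math.h>
#include <stdio.h>
#include <unistd.h>

#define NCHAN 8

static jack_client_t *client;
static jack_port_t   *ports[NCHAN];
static double         phase = 0.0;

/* write the same 1 kHz sine at -6 dB (amplitude 0.5) to all ports */
static int process(jack_nframes_t nframes, void *arg)
{
    double inc = 2.0 * M_PI * 1000.0 / jack_get_sample_rate(client);
    jack_default_audio_sample_t *out[NCHAN];
    int c;
    jack_nframes_t i;

    for (c = 0; c < NCHAN; c++)
        out[c] = jack_port_get_buffer(ports[c], nframes);

    for (i = 0; i < nframes; i++) {
        float s = 0.5f * (float) sin(phase);
        phase += inc;
        if (phase > 2.0 * M_PI)
            phase -= 2.0 * M_PI;
        for (c = 0; c < NCHAN; c++)
            out[c][i] = s;
    }
    return 0;
}

int main(void)
{
    char name[16];
    int c;

    if (!(client = jack_client_new("sine8")))
        return 1;
    jack_set_process_callback(client, process, NULL);
    for (c = 0; c < NCHAN; c++) {
        snprintf(name, sizeof name, "out_%d", c + 1);
        ports[c] = jack_port_register(client, name,
                       JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
    }
    jack_activate(client);
    for (;;)
        sleep(1);
}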
Similar things seem to happen with the inputs, but I haven't had
the time to investigate those.
So I'm wondering what's wrong - is my new card not up to standards,
or is there a problem with envy24control trying to control it?
Any help / hints / tips appreciated!
--
FA