Hello, (I'm new to this list, so hi everyone!)
I'm rather stuck on the following: I'm writing an app that uses JACK for its
audio output. I now want to control this app using MIDI, but I'm having trouble
figuring out how to synchronize the rendered sound with the incoming events.
The events, MIDI notes for example, come in with timestamps in one thread.
Another thread (the one entered by process()) renders the audio. In order to
render properly, it would need to calculate the exact sample at which the
incoming note should begin to take effect in the rendered output stream.
If you're reading this in a fixed-width font, here's a graphical representation of the
problem:
|...e.....e|e....e....|...ee...e.|.....e.e.e|....e...e.| midi events
|..........|...rrr....|.rr.......|......rrr.|....rrrr..| rendering
|..........|..........|ssssssssss|ssssssssss|ssssssssss| sound
Here, the e's represent MIDI events (but they could just as well be GUI events).
The r's in the second bar represent the calls to the process function of my
app. During this time, the audio that will be played back during the next
cycle will be rendered. The s'es in the third bar represent the actual sound
as it was rendered during the previous block. The vertical bars represent
blocks of time equivalent to the buffer size.
The best I can think of now is that I have to record MIDI events during the
first block and process them into audio during the second block (because I
want to take into account all events that occurred during the first block), so
that it can be played back during the third. So far so good, but time in the
event bar is measured in seconds and fractions thereof, while time in the
third bar is measured in samples. How can I translate the time recorded in
the events (seconds) into time in samples? How can I know at which exact time,
relative to the current playback time, my process() method was called?
If I just measure time at the start of my application I'm afraid things will
drift. Is that correct? How have other people solved this problem? Hope
somebody can help!
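To make this more concrete, here's a rough sketch of the kind of thing I have
in mind, using JACK's frame clock rather than wall-clock time so that (I hope)
there is nothing to drift against. This is untested and assumes that
jack_frame_time() / jack_last_frame_time() are the right calls for this; the
tiny ring buffer is just a stand-in for a real lock-free queue:

#include <jack/jack.h>

typedef struct {
    jack_nframes_t frame;      /* absolute frame time of the event */
    unsigned char  note;
} midi_event_t;

#define QSIZE 256              /* power of two */
static midi_event_t queue[QSIZE];
static volatile unsigned q_write = 0, q_read = 0;  /* one writer, one reader */

/* MIDI (or GUI) thread: stamp the event with JACK's frame clock,
 * not gettimeofday(). */
static void queue_event(jack_client_t *client, unsigned char note)
{
    midi_event_t *ev = &queue[q_write & (QSIZE - 1)];
    ev->frame = jack_frame_time(client);   /* JACK's estimate of "now" */
    ev->note  = note;
    q_write++;
}

/* Audio thread: jack_last_frame_time() should be the frame time at which
 * this cycle's audio begins, so an event's sample offset inside the
 * buffer is a simple subtraction. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_client_t *client = (jack_client_t *) arg;
    jack_nframes_t cycle_start = jack_last_frame_time(client);

    while (q_read != q_write) {
        midi_event_t ev = queue[q_read & (QSIZE - 1)];
        if (ev.frame >= cycle_start)
            break;                      /* belongs to a later cycle */
        /* One period of added latency: an event from the previous block
         * keeps its relative position inside this buffer. */
        jack_nframes_t offset = ev.frame + nframes - cycle_start;
        if (offset >= nframes)
            offset = 0;                 /* older than one period: start now */
        /* ... start rendering ev.note at sample 'offset' ... */
        q_read++;
    }
    /* ... render nframes samples into the output port buffer ... */
    return 0;
}

Does that look sane, or have I misunderstood how the frame clock works?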
Regards,
Denis
Hi,
yesterday was the opening of the "Southtyrol game",
one of the world's largest hand-carved pinball-style game machines
(length: 11 metres, weight: 2.6 tons, 16 audio speakers).
It is an interesting combination of art, electronics and audio, powered
by Linux. The game is located in South Tyrol, Italy, in a museum
dedicated to tourism.
The audio part was implemented by me.
I created a webpage with a description and some movie clips
that demonstrate how the game works.
http://www.linuxdesktop.it/benno/southtyrolgame/
Thanks to all Linux developers, in particular the LADers, who made
audio under Linux a viable solution!
Comments?
PS: Would such a story qualify for Slashdot?
Just try to submit it if you want ;-)
cheers,
Benno
http://linuxsampler.sourceforge.net
The Generalized Music Plug-In Interface (GMPI) working group of the MIDI
Manufacturers Association (MMA) is seeking the input of music and audio
software developers, to help define the technical requirements of GMPI.
The objective of the GMPI working group is to create a unified
cross-platform music plug-in interface. The hope is that this new interface
will provide an alternative to the multitude of plug-in interfaces that
exist today. Among the many benefits of standardization are increased
choice for customers, lower cost for music plug-in vendors and a secure
future for valuable market-enabling technology.
Like MIDI, GMPI will be license free and royalty free.
Phase 1 of the GMPI working group's effort is to determine what is required
of GMPI: What sorts of capabilities are needed to support existing products
and customers? What are the emerging new directions that must be addressed?
Phase 1 is open to any music software developer and is not limited to MMA
members. It will last a minimum of three months, to be extended if deemed
necessary by the MMA. Discussions will be held on an email reflector, with
possible meetings at major industry gatherings such as AES, NAMM and Musik
Messe.
Following the collection of requirements in Phase 1, the members of the MMA
will meet to discuss and evaluate proposals, in accordance with existing MMA
procedures for developing standards. There will be one or more periods for
public comment prior to adoption by MMA members.
If you are a developer with a serious interest in the design of this
specification, and are not currently a member of the MMA, we urge you to
consider joining. Fees are not prohibitively high even for a small
commercial developer. Your fees will pay for administration, legal fees and
marketing. Please visit http://www.midi.org for more information about
membership.
To participate, please email gmpi-request(a)freelists.org with the word
"subscribe" in the subject line. Please also provide your name, company
name (if any) and a brief description of your personal or corporate domain
of interest. We look forward to hearing from you.
Sincerely,
Ron Kuper
GMPI Working Group Chair
I'm hoping there's a simple solution to this that I've just missed
somewhere along the line. Occasionally something will crash while using
OSS emulation, and I can't use the sound card until I reboot. I've tried
lsof /dev/dsp and as many other variations as I can think of and I never
get anything (even when sound is playing), which I think is related to
using devfs. I've also tried fuser, and good old visual grep on the
output of ps, and nothing is running that would use the soundcard, yet I
can't unload the snd-pcm-oss module.
This time it was timidity (which I have promptly uninstalled since this
version seems capable of nothing other than locking up my soundcard),
but it has been mplayer in the past.
Is there some way to restore access to the sound card short of
rebooting?
--
Hans Fugal | De gustibus non disputandum est.
http://hans.fugal.net/ | Debian, vim, mutt, ruby, text, gpg
http://gdmxml.fugal.net/ | WindowMaker, gaim, UTF-8, RISC, JS Bach
---------------------------------------------------------------------
GnuPG Fingerprint: 6940 87C5 6610 567F 1E95 CB5E FC98 E8CD E0AA D460
> Steve Harris <S.W.Harris(a)ecs.soton.ac.uk> writes:
>
> Standards processes that I've been exposed to generally mandate regular
> weekly meetings (teleconference and/or IRC), with less frequent
> face-to-face meetings, and most business is sorted out during the meetings.
> Email is only used for tying up loose ends and exchanging text, minutes,
> etc.
The IETF is a successful counter-example -- email is the only way
any decision can be made, meetings are optional, and no decision
made at any meeting is binding until consensus occurs on the mailing
list to confirm it.
I'm hesitant to comment further about GMPI and its chances for
success or failure, because I've been too busy trying to finish
RTP MIDI to keep a close eye on it. My only worry stems from a
common IETF belief -- that the standards process is a great way
to polish and reach consensus on a substantially complete design,
but using the standards process as the vehicle to do the design
is a much harder row to hoe. A good example of this is 802.11,
which was an incredibly long and painful experience because many
parties brought bits and pieces of wireless Ethernet to the IEEE
table. Only the inherent goodness of the core idea (packet radio)
kept everyone at the table to eventually produce a standard that
could be interoperably deployed (802.11b, aka Wi-Fi, and its
lettered follow-ons).
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
On Mon, Sep 29, 2003 at 09:08:31 -0700, Tim Hockin wrote:
> On Mon, Sep 29, 2003 at 09:06:06AM +0100, Steve Harris wrote:
> > I have my own theory about why it didn't work out, but I'm keeping my mouth
> > shut for once :)
>
> Don't do that! :) I want to learn from the failure of this process, even if
> it was entirely my fault. Please share, even privately.
The reason I'm keeping my mouth shut is not reluctance to offend anyone :)
it's because it won't help, and I don't want to be too negative about it.
Standards processes that I've been exposed to generally mandate regular
weekly meetings (teleconference and/or IRC), with less frequent
face-to-face meetings, and most business is sorted out during the meetings.
Email is only used for tying up loose ends and exchanging text, minutes,
etc.
The W3C (for example) are quite strict about it - if you miss too many
meetings or face-to-faces then you're not allowed to vote for the rest of
the process.
Sadly that won't really work for GMPI, as there are too many key
contributors who couldn't, or wouldn't, commit to weekly meetings.
I don't think the GMPI process is doomed, it's just likely to be slow. The
recent suggestions should help. Also, the fact that there's no real
pressing need for it doesn't help either - most platforms already have
OK instrument APIs and we have ALSA Seq + JACK for synths, which does
work.
- Steve
>discussion about it moved to the GMPI list. GMPI is an industry-wide
>attempt to define a platform+vendor neutral music plugin API.
>
> majordomo: gmpi-request(a)freelists.org
> archives: http://www.freelists.org/archives/gmpi
Is there a digest mode? (I'm not familiar with majordomo.)
Could anyone post a summary of what has been discussed, what is being
discussed now, and what the open issues are?
Juhana
>Has anybody used the above ALSA functionality? The functions are
>defined in <alsa/pcm.h> but there doesn't seem to be any documentation
>nor any example programs.
There is a package called ameter.
> Do these functions work?
Seem to.
> Do they work on record?
Yes. ameter works for both record and playback.
> Does anyone have any example code?
ameter.
Maybe you can stop it from segv-ing every few hours? :-)
I fixed one bug and mailed the fix to the author, but didn't check
to see if he updated it (I have 0.3). It concerned ^C behavior,
e.g. hitting ^C out of aplay or arecord with ameter as the device.
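For what it's worth, the rough shape of the API that ameter exercises
(assuming the functions in question are the snd_pcm_meter / scope ones
declared in <alsa/pcm.h>; this is a from-memory sketch, not tested code)
is: allocate a scope, point it at a table of callbacks, and attach it to
a PCM opened through the meter plugin:

#include <alsa/asoundlib.h>

/* Callbacks invoked by the meter plugin while the PCM runs; a real scope
 * would read samples out of the meter's running buffer in update(). */
static int  scope_enable (snd_pcm_scope_t *scope) { return 0; }
static void scope_disable(snd_pcm_scope_t *scope) { }
static void scope_start  (snd_pcm_scope_t *scope) { }
static void scope_stop   (snd_pcm_scope_t *scope) { }
static void scope_update (snd_pcm_scope_t *scope) { /* new samples ready */ }
static void scope_reset  (snd_pcm_scope_t *scope) { }
static void scope_close  (snd_pcm_scope_t *scope) { }

static snd_pcm_scope_ops_t scope_ops = {
    .enable  = scope_enable,
    .disable = scope_disable,
    .start   = scope_start,
    .stop    = scope_stop,
    .update  = scope_update,
    .reset   = scope_reset,
    .close   = scope_close,
};

/* Attach the scope to a PCM that was opened through a "type meter"
 * definition in ~/.asoundrc, which is how ameter is normally used. */
static int attach_scope(snd_pcm_t *pcm)
{
    snd_pcm_scope_t *scope;
    int err = snd_pcm_scope_malloc(&scope);
    if (err < 0)
        return err;
    snd_pcm_scope_set_ops(scope, &scope_ops);
    snd_pcm_scope_set_name(scope, "myscope");
    return snd_pcm_meter_add_scope(pcm, scope);
}

ameter does essentially this (plus the drawing code), so it's probably
still the best example to crib from.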
Hi all,
Has anybody used the above ALSA functionality? The functions are
defined in <alsa/pcm.h> but there doesn't seem to be any documentation
nor any example programs.
Do these functions work?
Do they work on record?
Does anyone have any example code?
Thanks,
Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo nospam(a)mega-nerd.com (Yes it's valid)
+-----------------------------------------------------------+
Linux : Think of it as 'free' as in 'free speech' not 'free beer'.
Hi,
I installed CheeseTracker / LegaSynth on my computer and have a short question:
In CheeseTracker's instrument editor I clicked around with the buttons and then added
the instrument to the pattern, but CheeseTracker just prints error messages in the xterm.
Can I import some sort of instrument set into CheeseTracker so that it knows
what to do? If so, I assume these instrument definitions come from LegaSynth?
But how can I export LegaSynth's instrument "files" (if there are any)?
greetz Sascha Retzki