I made a small mistake in forwarding the message about the UAPA
meeting. it was not intended for public dissemination, even though the
meeting is open to the public. my bad, as they say.
--p
> >I browsed the Kernel Source and there is only one mark_inode_dirty in
> >pipe_write (in fs/pipe.c). So we know where it is hanging...
> >
> >And in __mark_inode_dirty (in fs/inode.c) there is one
> > spin_lock(&inode_lock)
> >call, and I guess that is where the whole thing is hanging. So something
> >is holding that lock... how do I find out who is doing that? Apparently
> >the handling of inode_lock is confined to inode.c. I'll keep reading.
[Andrew Morton had suggested that the stack traces did not show problems
with stuck locks in the kernel...]
> >Maybe the pipe in question is one of the pipes that jack uses for ipc?
>
> seems *damn* likely ... sorry to just be chiming in with a useless comment!
One more (small) datapoint. Roger Larsson sent me, off the list, a couple
of small utilities (VERY nice tools!) that monitor the CPU usage of
SCHED_FIFO processes and, after a timeout, actually downgrade the
persistent hogs to SCHED_OTHER.
So I ran that in a terminal and, after playing around with a bunch of
jack apps, got the machine to lock up... and then, after a little bit,
it suddenly came back to life! (You could see that the monitor had
changed the priority of the hogs to SCHED_OTHER.)
So I guess that jack somehow has a hard-to-trigger race condition that
locks up the machine when running SCHED_FIFO.
Now I have to figure out how to trace the thing so as to determine where
the whole thing is locking. Help from the jack gurus appreciated.
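The demotion step such a watchdog performs can be sketched in C. This is a minimal illustration of the decision logic only; the function names, the 90% threshold, and the "persistent hog" definition are my own guesses, not taken from Roger Larsson's actual tools:

```c
/* Sketch of the decision logic a SCHED_FIFO watchdog might use.
 * CPU share comes from two samples of a process's CPU time (e.g.
 * utime+stime out of /proc/<pid>/stat) over a wall-clock window.
 * All names and the threshold are illustrative guesses, not taken
 * from Roger Larsson's actual tools. */
#include <sched.h>
#include <sys/types.h>

/* Fraction of the sampling window the process spent on the CPU. */
static double cpu_share(unsigned long cpu_ticks, unsigned long wall_ticks)
{
    return wall_ticks ? (double)cpu_ticks / (double)wall_ticks : 0.0;
}

/* A "persistent hog" stays above the threshold for every one of the
 * last n sampling windows; a brief realtime burst does not qualify. */
static int is_persistent_hog(const double *shares, int n, double threshold)
{
    for (int i = 0; i < n; i++)
        if (shares[i] < threshold)
            return 0;
    return n > 0;
}

/* Downgrade a runaway SCHED_FIFO process to SCHED_OTHER so the rest
 * of the system can run again (needs appropriate privileges). */
static int demote(pid_t pid)
{
    struct sched_param p = { .sched_priority = 0 };
    return sched_setscheduler(pid, SCHED_OTHER, &p);
}
```

The point of the timeout is exactly what Fernando observed: once the hog drops to SCHED_OTHER, the scheduler lets everything else (including a shell) run again, so the "locked" machine comes back to life.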
-- Fernando
>>It's obvious when you consider that "VVID has no voice" can happen
>>*before* the synth decides to start the voice; not just after a voice has
>>detached from the VVID as a result of voice stealing. At that point, only
>>the value of the control that triggered "voice on" will be present; all
>>other controls have been lost. Unless the host/sender is somehow forced to
>>resend the values, the synth will have to use default values or something.
>OK... I was thinking that the initial mention of the VVID would cause its
>creation (be that implicit or explicit, though I prefer explicit, I think);
>thereafter, control changes would be applied to the instantiated voice (or
>the NULL voice if you've run out / declined it).
The initial mention of the VVID is the issue here; certain types of voice
events are assumed not to allocate a voice (parameter-set events). This is
because there is no difference between a tweak on a VVID that has had its
voice stolen and a tweak intended to initialize a voice that arrives before
voice-on. We must conclude that the plugin will discard both of them. There
must be a signal to the plugin that a VVID is targeted for activation. We
have a few options:
---a voice-activation event is sent, then any initializing events, then a
voice-on event
---a voice-on event is sent, with any following events on the same timestamp
assumed to be initializers
---a voice-activation event is sent and there is no notion of voice-on; one
or more of the parameters must be changed to produce sound, but it is a
mystery to the sequencer which those are. (I don't like this because it
makes sequences non-portable between different instruments)
---events sent to voiceless VVIDs are attached to a temporary voice by the
plugin, which may later use that to initialize an actual voice. This
negates the assumption that voiceless VVID events are discarded.
#2 is just an abbreviated form of #1, as I argue below (unless you allow
the activate-to-voice_on cycle to span multiple timestamps, which seems
undesirable).
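Option #4 can be sketched concretely. This is only an illustration of the idea, assuming a small fixed control count; the struct and function names are mine, not part of any actual XAP API:

```c
/* Sketch of option #4: events arriving on a voiceless VVID are
 * cached in a small per-VVID "shadow" control store instead of
 * being discarded, and a later voice-on uses the cache to
 * initialize the real voice. Names and the fixed control count
 * are my own illustration, not part of any actual XAP API. */
#define NCTRLS 8

typedef struct {
    float         value[NCTRLS];
    unsigned char set[NCTRLS];    /* which controls were tweaked */
} shadow_voice;

/* A parameter-set on a VVID that currently has no voice attached. */
static void shadow_set(shadow_voice *sv, int ctrl, float v)
{
    sv->value[ctrl] = v;
    sv->set[ctrl] = 1;
}

/* At voice-on: copy cached controls into the real voice and fall
 * back to defaults for anything that was never tweaked. */
static void voice_init(float *voice_ctrls, const float *defaults,
                       const shadow_voice *sv)
{
    for (int i = 0; i < NCTRLS; i++)
        voice_ctrls[i] = sv->set[i] ? sv->value[i] : defaults[i];
}
```

The cost of this option is that every live VVID needs its shadow store whether or not a voice ever materializes, which is exactly the per-VVID allocation the other options avoid.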
> > > > When are you supposed to do that sort of stuff? VOICE_ON is
> > > > what triggers it in a normal synth, but with this scheme, you
> > > > have to wait for some vaguely defined "all parameters
> > > > available" point.
We can precisely define initialization parameters to be all the events
sharing the same VVID and timestamp as the VOICE_ON event. This means that
the "all parameters available" point is at the same timestamp as the
VOICE_ON event, but after the last event with that timestamp.
If we want to include a VOICE_ALLOCATE event then the sequence goes:
timestamp-X: voice-allocate; timestamp-X: voice-parameter-set (considered an
initializer if appropriate); timestamp-X: voice-on; timestamp-X+1: more
voice-parameter-sets (same as any other parameter-set).
But this sequence can be shortened by assuming that the voice-on event at
the last position for timestamp-X is implicit:
timestamp-X: voice-on (signifying the same thing as voice-allocate above);
timestamp-X: voice-parameter-set (considered an initializer if appropriate);
(synth actually activates the voice here); timestamp-X+1: other events.
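The ordering rule in that shortened sequence can be expressed as a small scan over a timestamp-sorted event list. The event struct layout and names below are illustrative, not an agreed XAP format:

```c
/* Sketch of the shortened sequence above: in a timestamp-ordered
 * event list, every event after a voice-on that still carries the
 * voice-on's timestamp is an initializer; the synth activates the
 * voice only once the timestamp advances. Struct layout and names
 * are illustrative, not an agreed XAP format. */
enum ev_type { EV_VOICE_ON, EV_PARAM_SET, EV_VOICE_OFF };

typedef struct {
    unsigned     timestamp;
    enum ev_type type;
    int          ctrl;
    float        value;
} xap_event;

/* Given that ev[on] is a voice-on, return the index just past its
 * run of same-timestamp initializers, i.e. the point at which the
 * synth actually activates the voice. */
static int end_of_initializers(const xap_event *ev, int n, int on)
{
    int i = on + 1;
    while (i < n && ev[i].timestamp == ev[on].timestamp)
        i++;
    return i;
}
```

Note that this only works if events at one timestamp are delivered in sender order, which is exactly what the implicit-voice-on scheme relies on.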
---Jacob Robbins.................
>I made a post a while back defining all the XAP terminology to date. Read
>it if you haven't - it is useful :)
I was hoping something of this sort existed. It would be very helpful if you
could put the list of XAP terminology on the webpage. It would help keep
everybody on the same page when discussing. ;) And it would help people
join the discussion without spending the 10-15 hours it takes to read
December's posts.
>VVID allocation and voice allocation are still two different issues. VVID
>is about allocating *references*, while voice allocation is about actual
>voices and/or temporary voice control storage.
I agree entirely. If each VVID = a voice then we should just call them Voice
IDs, and let the event-sender make decisions about voice reappropriation.
---jacob robbins.......................
Hi all,
Nice couple of new features with this release, a lot of code cleanups, a
bit of UI work, and general betterness all round :) And, if I'm not
mistaken, this makes jack rack the only application so far with LRDF
support. Nee ner nee ner ;-)
* proper ladcca support (pays attention to events)
* added saving/loading of rack configurations. this is a bit of a hack
  though, as when you open a file, it only adds the plugins to whatever's
  in the current rack. in fact, the whole of the file loading/saving is
  hairy at the moment.
* added lrdf support (this is majorly phat.. categorised plugin menus :)
* proper toolbar with stock buttons
* control rows now have no central port label
* added a menu bar
* added a splash screen
* added an about box (using gnome 2)
* nice new icon and logo images, used for the splash screen, the window
icons and also a gnome 2 .desktop
* lots of code separation and cleanups and under-the-hood changes
http://pkl.net/~node/jack-rack.html
Bob
My understanding of VVIDs is that the sequencer puts one complete,
continuous note on a particular VVID. The sequencer only reuses a VVID once
it has ended any previous notes on that VVID. The sequencer can allocate a
large number of VVIDs so that it never has to make a voice stealing decision
on its end (and so we don't have to make roundtrips). This large allocation
means that the plugin should never try to allocate a significant sized
structure for each VVID. Instead, the plugin should match VVIDs to actual
voices as incoming voice-on messages are received until all actual voices
are used. After all voices are in use, the plugin has to decide whether to
steal voices from ongoing notes or deny voice-on events. Voice stealing
decisions are properly made by the plugin. This is the case even if the
sequencer knows how many actual voices there are because the plugin has much
more intimate knowledge of the nature of the voices: their amplitude, timbre
etc.
My underlying assumptions are:
-a single object resides in each channel; be it a piano, a gong, or
whatever, there is one object in the channel
-HOWEVER, that single object may be polyphonic; the piano may be able to
sound multiple notes concurrently, and the gong may be able to sound two
quick strokes in succession which overlap in their duration.
-DEFINITION: We call the facility for making ONE of those sounds a voice.
-DEFINITION: the individual voices produce finite periods of sound which we
call notes. A note is the sound that a voice makes between a Voice-On event
and a Voice-Off event (provided that the voice is not reappropriated in the
middle to make a different note)
-HOWEVER- there is no rule that a note has any pitch or velocity or any
other particular parameter, it is just that the Voice-On tells the voice to
start making sound and the Voice-Off tells the voice to stop making sound.
-ALSO HOWEVER- the entity which sends voice-on and off messages may not
directly refer to the object's voices. Instead, the event sender puts
separate notes on separate Virtual Voice IDs to indicate what it desires the
voices to do. This indirection is in place because sequencers typically
send more concurrent notes than the plugin has actual voices for AND the
plugin is better suited to decide how to allocate those scarce resources. In
other words, it is the role of the plugin to decide whether or not to steal
a voice for a new note and which voice to steal. So the sequencer sends out
notes in fantasy-land VVID notation where they never ever have to overlap,
and the plugin decides how best to play those notes using the limited number
of voices it has.
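The VVID-to-voice binding described above can be sketched in C. The struct names and the "steal the quietest voice" policy here are purely illustrative stand-ins for whatever plugin-side knowledge actually drives the decision:

```c
/* Sketch of the VVID -> voice binding described above: the sender
 * spends VVIDs freely; the plugin owns the scarce real voices and
 * decides, using knowledge the sender lacks (amplitude here), to
 * steal or to deny when everything is busy. Names and the
 * "steal the quietest voice" policy are purely illustrative. */
#define NVOICES 4

typedef struct {
    int   vvid;        /* -1 = free */
    float amplitude;   /* plugin-side knowledge of the running note */
} voice;

/* Bind a VVID to a real voice on voice-on. Returns the voice index,
 * or -1 if every voice is busy and the plugin declines to steal. */
static int bind_vvid(voice *v, int vvid, int steal)
{
    int i, quietest = 0;
    for (i = 0; i < NVOICES; i++)
        if (v[i].vvid == -1) {           /* a free voice: take it */
            v[i].vvid = vvid;
            return i;
        }
    if (!steal)
        return -1;                        /* deny the voice-on */
    for (i = 1; i < NVOICES; i++)         /* steal the quietest note */
        if (v[i].amplitude < v[quietest].amplitude)
            quietest = i;
    v[quietest].vvid = vvid;
    return quietest;
}
```

Nothing here requires the sender to know NVOICES, which is the whole point: the sequencer writes notes in "fantasy-land" VVID space and the plugin resolves the collisions.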
As I see it, the procedure for using a voice via a particular VVID is as
follows (note that all events mentioned are assumed to have a particular
VVID):
(1)send voice-on event at timestamp X. This indicates a note is to start.
(2)send parameter-set events also at timestamp X, these are guaranteed to
follow the voice-on event even though they have the same timestamp because
the event ordering specifies it. These parameter-set events are to be
considered voice initializers should the plugin support such a concept,
otherwise they are the first regular events to affect this note.
(3)send parameter-set events at later times to modify the note as it
progresses.
(4)send voice-off event at later time to end the note and free the voice.
When the plugin reads the voice-on event at timestamp X it decides whether
to allocate a voice or not. If it has an initialization routine for voice-on
events, then the plugin must read through the remaining events with
timestamp X to get initialization arguments. The plugin must delay actually
initializing the voice until it has read the other events at the same
timestamp as the voice-on event. If the plugin doesn't do any special
initialization procedures then it doesn't have to worry about this because
the events concurrent with the voice-on event can just be applied in the
same manner as later param-set events.
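The deferred initialization described in that last paragraph amounts to a small state machine over the event stream. A minimal sketch, with all types and names my own invention:

```c
/* Sketch of the deferred initialization described above: the plugin
 * does not bring the voice up when it reads voice-on; it keeps the
 * voice "pending", applies same-timestamp parameter-sets as
 * initializers, and activates only when the timestamp moves on (or
 * the block ends). All types and names are illustrative. */
enum ev_kind { VOICE_ON, PARAM_SET, VOICE_OFF };

typedef struct { unsigned t; enum ev_kind kind; int ctrl; float val; } event;

typedef struct {
    int   active;
    float ctrls[4];
} synth_voice;

/* Process one timestamp-ordered block of events for a single VVID. */
static void run_block(synth_voice *v, const event *e, int n)
{
    int pending = 0;
    unsigned on_t = 0;
    for (int i = 0; i < n; i++) {
        if (pending && e[i].t != on_t) {
            v->active = 1;       /* "all parameters available" point */
            pending = 0;
        }
        switch (e[i].kind) {
        case VOICE_ON:
            pending = 1;
            on_t = e[i].t;
            break;
        case PARAM_SET:
            v->ctrls[e[i].ctrl] = e[i].val;  /* initializer or tweak */
            break;
        case VOICE_OFF:
            v->active = 0;
            break;
        }
    }
    if (pending)
        v->active = 1;           /* voice-on ended the block */
}
```

A plugin with no special initialization can ignore the pending state entirely, as the text says: the same-timestamp parameter-sets then behave like any later ones.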
--jacob robbins:..... soundtank..........
ZynAddSubFX is an open-source software synthesizer for
Linux.
It is available at :
http://zynaddsubfx.sourceforge.net
or
http://sourceforge.net/projects/zynaddsubfx
news:
1.0.4:
- It is possible to load Scala (.scl and .kbm) files
- Added mapping from note number to scale degree (done by loading Scala
  .kbm files)
- Corrected small bugs related to Microtonal
- If you want to use ZynAddSubFX with OSS (or you don't have ALSA), you
  can modify the Makefile.inc file to compile with OSS only
- The real detune (in cents) is now shown
- Made a new widget that replaces the Dial widget
- Fixed a bug that crashed ZynAddSubFX when changing some effect
  parameters
> -----Original Message-----
> From: Pascal Haakmat [mailto:a.haakmat@chello.nl]
> 05/01/03 18:14, CK wrote:
>
> > if this is OSS, forget about it; it will be outdated as soon
> > as a 2.6 kernel is out
>
> Well, ALSA has OSS compatibility, and 4front-tech.com will still be
> selling their for-money OSS drivers (for a number of platforms, not
> just Linux).
>
> So saying that OSS will disappear after the release of kernel 2.6 is a
> bit premature. Especially since some of the for-money OSS drivers are
> still much better than the ALSA drivers (just my experience, YMMV).
also, ALSA is Linux only, isn't it?
erik