Hi all,
Nice couple of new features with this release: a lot of code cleanups, a
bit of UI work, and general betterness all round :) And, if I'm not
mistaken, this makes jack-rack the only application so far with LRDF
support. Nee ner nee ner ;-)
* proper ladcca support (pays attention to events)
* added saving/loading of rack configurations. this is a bit of a hack though:
when you open a file, it only adds the plugins to whatever's in the
current rack. in fact, the whole of the file loading/saving is hairy at the moment.
* added lrdf support (this is majorly phat.. categorised plugin menus :)
* proper toolbar with stock buttons
* control rows now have no central port label
* added a menu bar
* added a splash screen
* added an about box (using gnome 2)
* nice new icon and logo images, used for the splash screen, the window
icons and also a gnome 2 .desktop
* lots of code separation and cleanups and under-the-hood changes
http://pkl.net/~node/jack-rack.html
Bob
My understanding of VVIDs is that the sequencer puts one complete,
continuous note on a particular VVID. The sequencer only reuses a VVID once
it has ended any previous note on that VVID. The sequencer can allocate a
large number of VVIDs so that it never has to make a voice-stealing decision
on its end (and so we don't have to make round trips). Because the allocation
can be so large, the plugin should never try to allocate a significantly sized
structure for each VVID. Instead, the plugin should match VVIDs to actual
voices as incoming voice-on messages are received until all actual voices
are used. After all voices are in use, the plugin has to decide whether to
steal voices from ongoing notes or deny voice-on events. Voice stealing
decisions are properly made by the plugin. This is the case even if the
sequencer knows how many actual voices there are because the plugin has much
more intimate knowledge of the nature of the voices: their amplitude, timbre
etc.
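To make that concrete, here is a minimal Python sketch of the plugin side. Every name in it is invented for illustration (a real plugin would do this in C inside its run callback), and the "steal the quietest voice" heuristic is just one example of a decision the plugin might make:

```python
# Hypothetical sketch of VVID-to-voice mapping inside a synth plugin.
# All names here are invented; nothing below is a real API.

class Voice:
    def __init__(self):
        self.vvid = None       # which virtual voice ID currently drives us
        self.amplitude = 0.0   # used by the (naive) stealing heuristic

class Synth:
    def __init__(self, n_voices):
        # Fixed pool of actual voices -- NOT one structure per VVID.
        self.voices = [Voice() for _ in range(n_voices)]
        self.vvid_map = {}     # VVID -> Voice, only for *active* notes

    def voice_on(self, vvid):
        # Prefer a free voice; otherwise the plugin (not the sequencer)
        # decides which ongoing note to steal -- here, the quietest one.
        free = next((v for v in self.voices if v.vvid is None), None)
        victim = free or min(self.voices, key=lambda v: v.amplitude)
        if victim.vvid is not None:
            del self.vvid_map[victim.vvid]   # the stolen note just ended
        victim.vvid = vvid
        victim.amplitude = 1.0
        self.vvid_map[vvid] = victim

    def voice_off(self, vvid):
        v = self.vvid_map.pop(vvid, None)    # ignore already-stolen notes
        if v is not None:
            v.vvid = None
```

The point of the `vvid_map` is that memory use scales with the number of actual voices, not with however many VVIDs the sequencer chose to allocate.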
My underlying assumptions are:
-a single object resides in each channel, be it a piano, a gong, or
whatever; there is exactly one object in the channel
-HOWEVER, that single object may be polyphonic; the piano may be able to
sound multiple notes concurrently, the gong may be able to sound two quick
strokes in succession which overlap in their duration.
-DEFINITION: We call the facility for making ONE of those sounds a voice.
-DEFINITION: the individual voices produce finite periods of sound which we
call notes. A note is the sound that a voice makes between a Voice-On event
and a Voice-Off event (provided that the voice is not reappropriated in the
middle to make a different note)
-HOWEVER- there is no rule that a note has any pitch or velocity or any
other particular parameter, it is just that the Voice-On tells the voice to
start making sound and the Voice-Off tells the voice to stop making sound.
-ALSO HOWEVER- the entity which sends voice-on and off messages may not
directly refer to the object's voices. Instead, the event sender puts
separate notes on separate Virtual Voice IDs to indicate what it desires the
voices to do. This indirection is in place because sequencers typically
send more concurrent notes than the plugin has actual voices for AND the
plugin is better suited to decide how to allocate those scarce resources. In
other words, it is the role of the plugin to decide whether or not to steal
a voice for a new note and which voice to steal. So the sequencer sends out
notes in fantasy-land VVID notation where they never ever have to overlap,
and the plugin decides how best to play those notes using the limited number
of voices it has.
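The sequencer's side of that fantasy-land notation is easy to sketch too. This is a hypothetical illustration only (names invented): each new note simply takes a fresh VVID, so from the sequencer's point of view notes never have to share:

```python
# Hypothetical sketch of the sequencer side: every new note gets a fresh
# VVID, so concurrent notes never collide and no stealing decision is
# ever made here.
import itertools

class SequencerChannel:
    def __init__(self):
        self._vvids = itertools.count()   # effectively unbounded pool
        self.open_notes = {}              # VVID -> note start time

    def note_on(self, time):
        vvid = next(self._vvids)
        self.open_notes[vvid] = time
        return [("voice-on", time, vvid)]    # events to send, in order

    def note_off(self, vvid, time):
        del self.open_notes[vvid]
        return [("voice-off", time, vvid)]
```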
As I see it, the procedure for using a voice via a particular VVID is as
follows (note that all events mentioned are assumed to have a particular
VVID):
(1)send voice-on event at timestamp X. This indicates a note is to start.
(2)send parameter-set events also at timestamp X, these are guaranteed to
follow the voice-on event even though they have the same timestamp because
the event ordering specifies it. These parameter-set events are to be
considered voice initializers should the plugin support such a concept;
otherwise they are simply the first regular events to affect this note.
(3)send parameter-set events at later times to modify the note as it
progresses.
(4)send voice-off event at later time to end the note and free the voice.
When the plugin reads the voice-on event at timestamp X it decides whether
to allocate a voice or not. If it has an initialization routine for voice-on
events, then the plugin must read through the remaining events with
timestamp X to get initialization arguments. The plugin must delay actually
initializing the voice until it has read the other events at the same
timestamp as the voice-on event. If the plugin doesn't do any special
initialization procedures then it doesn't have to worry about this because
the events concurrent with the voice-on event can just be applied in the
same manner as later param-set events.
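The procedure above can be sketched as an event loop that groups events by timestamp, so same-timestamp param-sets are visible as initializers before the voice actually starts. Everything here is hypothetical and invented for illustration; events are assumed to arrive already sorted, with a voice-on preceding same-timestamp param-sets for its VVID:

```python
# Hypothetical plugin-side event loop. Events are
# (timestamp, kind, vvid, payload) tuples, already in the order the
# event system guarantees: sorted by timestamp, voice-on first within
# a timestamp for a given VVID.
import itertools

def run_events(events):
    notes = {}   # vvid -> dict of parameters, stands in for a real voice
    log = []
    for stamp, group in itertools.groupby(events, key=lambda e: e[0]):
        group = list(group)   # we scan the group twice, so materialize it
        for _, kind, vvid, payload in group:
            if kind == "voice-on":
                # Collect same-timestamp param-sets as initializers
                # *before* the voice starts.
                init = {p[3][0]: p[3][1] for p in group
                        if p[1] == "param-set" and p[2] == vvid}
                notes[vvid] = dict(init)
                log.append(("start", stamp, vvid, init))
            elif kind == "param-set" and vvid in notes:
                key, value = payload   # later events just modify the note
                notes[vvid][key] = value
            elif kind == "voice-off":
                log.append(("end", stamp, vvid))
                notes.pop(vvid, None)
    return log
```

A plugin with no special initialization would simply drop the `init` collection and let the param-set branch handle everything, which is the "same manner as later param-set events" case above.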
--jacob robbins:..... soundtank..........
ZynAddSubFX is an open-source software synthesizer for
Linux.
It is available at :
http://zynaddsubfx.sourceforge.net
or
http://sourceforge.net/projects/zynaddsubfx
news:
1.0.4 - It is possible to load Scala (.scl and .kbm) files
- Added mapping from note number to scale degree; it is
possible to load Scala .kbm files
- Corrected small bugs related to Microtonal
- If you want to use ZynAddSubFX with OSS (or you
don't have ALSA), you can modify the Makefile.inc
file to compile with OSS only
- The real detune (in cents) is shown
- Made a new widget that replaces the Dial widget
- Fixed a bug that crashed ZynAddSubFX if you
changed some effect parameters
> -----Original Message-----
> From: Pascal Haakmat [mailto:a.haakmat@chello.nl]
> 05/01/03 18:14, CK wrote:
>
> > if this is OSS, forget about it; it will be outdated as soon as
> > a 2.6 kernel is out
>
> Well, ALSA has OSS compatibility, and 4front-tech.com will still be
> selling their for-money OSS drivers (for a number of platforms, not
> just Linux).
>
> So saying that OSS will disappear after the release of kernel 2.6 is a
> bit premature. Especially since some of the for-money OSS drivers are
> still much better than the ALSA drivers (just my experience, YMMV).
also, alsa is linux only, isn't it?
erik
On Mon, Jan 06, 2003 at 12:04:23 -0800, robbins jacob wrote:
>>Alternately, we could require that event ordering has two criteria: -first-
>>order on timestamps -second- put voice-on ahead of all other event types.
>This is what I was assuming was meant originally.
>However you don't have to think of them as initialisation parameters; voices
>can have instrument-wide defaults (e.g. a pitch of 0.0 and an amplitude of
>0.0), and the parameter changes that arrive at the same timestamp can be
>thought of as immediate parameter changes, which they are.
True, my post was based on the assumption that there are some plugins where
a certain parameter being initialized at the beginning of the voice would
affect the voice over its entire duration. The only concrete example I can
give is velocity maps determining use of different samples in a sampler,
which doesn't apply here. Maybe a bell model where the voice-on event is
considered an impulse describes what I'm talking about; ramping up the
velocity after the voice has started has no effect. Regardless, if a
plugin wants to use some parameters in voice initialization, it can do so
with the events timestamped at the same point as the voice-on supplying the
values for initialization. For the majority of plugins all parameters are
equal and some events just happen to coincide with voice-on events, as you
say.
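That distinction can be sketched in a few lines. This is a hypothetical illustration (all names invented): same-timestamp params double as initializers over instrument-wide defaults, and the sampler-style "velocity map" choice is the one decision frozen at note start:

```python
# Hypothetical sketch: instrument-wide defaults, overridden by whatever
# param-set events share the voice-on timestamp. All names are invented.

DEFAULTS = {"pitch": 0.0, "velocity": 0.0}

def start_note(same_stamp_params):
    params = dict(DEFAULTS)
    params.update(same_stamp_params)
    # A sampler would pick its sample layer from the velocity seen *here*;
    # ramping velocity later would not change the chosen layer.
    layer = "loud-sample" if params["velocity"] > 0.5 else "soft-sample"
    return params, layer
```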
--jacob robbins ....projects, soundtank.......................
this is not meant to intimidate, rather to be a wake-up call.
it seems almost unreal (and certainly unprofessional) to me
that an instrument plugin api is being discussed here by a
bunch of people who have little to no experience in the field
of software sequencers. going into implementation details at
the current level of understanding of the problem space is,
excuse me, ridiculous.
after all, what is going to drive your instrument networks?
punched cardboard? certainly not. you'll either use realtime
input, or a sequencer, or, most wanted, a combination of both.
the closer the integration of the event/plugin system with the
sequencer, the more uses the api can be put to, with less pain.
stopping short of the mark where the api becomes useful for
more applications than basically sample-rate dependent
event->audio converters is narrow-minded. viewing the 'host'
as a blackbox supposed to 'do the rest' without caring about
its internals is blatant ignorance.
i do think it's reasonable to ignore my personal input since
i don't offer published code to back up my views, and when i
do, you'll find it centered around my personal musical needs.
however there are, afaik, people from the rosegarden team on
this list. it would also be helpful having werner schweer of
muse fame participate in some way or other. you might also want
to look at other free/open sequencer engines. for one thing,
you'll find that most, if not all, are tick-based.
vst[i] is a bad candidate i think because few people here will
have vst host-side coding experience, and the api itself is
bound to be centered around the particular coding needs of a
specific company, for a specific application that drags code
with it that originated in the eighties and never was subject
to public source-level review.
in short, the more people with hands-on sequencer experience
participating, the better. none are just too few.
tim
ps: if this post hasn't substantially changed your ways of
perceiving this matter, please don't bother answering.
Hi,
Cow Outputs Waves is a waveform editor. You draw graphs for amplitude
and frequency, and cow synthesizes a wav file. The GUI uses Qt. The package
also includes a program to play the cow sounds (and also wav files) with
a midi keyboard (or any other thing that can output note on/off
commands), and several tools.
You might be interested in this program if you are using a
tracker and need cool sounds, or if you intend to play chords and find
it ugly that the wav files get shorter or longer depending on
the tone pitch. Hrm. Maybe you're also interested if you don't use a
tracker :)
http://kuh.sourceforge.net
for pre-compiled binaries for Mandrake, Red Hat and SuSE have a look at:
http://apps.kde.com/na/2/info/id/1457
HAND,
Kay.
i just want to stop for a moment and reflect on the power of the open
source software model. for a long time, a rather glaring defect in
ardour was the inability to record at a higher frame rate (say, 48kHz
or 96kHz) and then easily produce an audio file of the piece at
44.1kHz, the standard for redbook cd audio.
i avoided trying to add the code because i knew that resampling was
complex and had some significant depth to it - i was busy enough with
other things.
then erik comes along with his libsamplerate library, and it took me
less than 15 minutes to add the capability to the backend of ardour
(plus another 45 minutes of work on the GUI to control it).
now, The Unix Way calls for the use of tools like sox to do this work:
small, independent applications that do one thing and do it well. Ok,
that's not exactly a good description of sox, but you get the
point. There is nothing wrong with this model for some purposes,
especially for people who want to play around with many different
possibilities.
but when kernighan and pike wrote "the unix programming environment", they
were clear (at least to me) that the main strength of The Unix Way was in
providing a really productive environment for *prototyping*. it turns
out that it's a really nice environment for getting many kinds of real
work done too, especially for many experimental music folks, and i
wouldn't want to see the end of the toolset that sox is just one
aspect of.
but ... but ... i am just glowing with the way libraries like
libsamplerate and libsndfile provide the same simple "just plug it
together" functionality that the unix shell and all our pipe-connected
utilities do. this time, it's not being offered (directly) to
command line users, but to people writing GUI-based software and thus
ultimately to their users.
it really makes me feel good to be able to turn around and explain to
my disbelieving kids "yes, somebody wrote this really useful software
and they just made it available so that other people could use it.
that made it easy for me to make my software (with which i do the
same) be so much more useful for many people".
thanks to erik, thanks to RMS for the GPL and thanks to everybody on
this list and elsewhere who is making this revolution possible.
--p