Some thoughts on that SILENT event for reverb tails and stuff...
(Currently implemented as a "fake" spontaneous state change in
Audiality FX plugins, BTW.)
I would assume that since there is no implicit relation between
Channels on different Bays (remember the input->output mapping
discussion?), this event is best sent from some kind of Master Event
Output Channel. (That is, now we have one Master Event Input Channel,
and one Master Event Output Channel. Each will be in its own Bay,
and there can be exactly one Channel on each of those.)
So, the SILENT event would need Bay and Channel (but not Slot)
fields, in order to tell the host (or who ever gets the event) which
audio output just went silent.
And it would probably be a rather good idea to have a "NOT_SILENT"
event as well, BTW!
Anyway, what I was thinking was: How about allowing plugins to
*receive* SILENT and NOT_SILENT events, if they like?
That way, you could use the plugin API for things like
audio-to-disk-thread "gateways" for recording and that kind of stuff,
without forcing the host to be involved in the details.
Not that recording half a buffer extra of silence would be a
disaster, but I bet someone can or eventually will think of a reason
why their plugin should know the whole truth about the audio inputs.
Now, there's just one problem: Put a plugin with a tail, but without
sample-accurate "tail management" support, between a plugin that
sends (NOT_)SILENT events and one that can receive them - and the
information is useless! All you can do is have the host fake the
(NOT_)SILENT events sent to the latter plugin, since the plugin in
the middle thinks only in whole buffers WRT inputs and/or outputs...
And there's another problem: If you got a (NOT_)SILENT event
*directly* from another plugin, how on earth would you know which one
of *your* audio inputs that other plugin is talking about, when the
event arguments are about *that* plugin's audio outputs?
Only the host knows where audio ports are connected, so the host
would have to translate the events before passing them on.
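To make that concrete, here is a rough sketch (in C) of such a
host-side translation. All names -- silence_event, host_connection,
host_translate() -- are made up for illustration, not taken from any
actual event API; the point is just that the host, which knows the
port graph, rewrites "my output X went silent" into "your input Y
went silent" before delivery:

/* Hypothetical names, illustration only. */
#include <stdio.h>

typedef enum { EVT_SILENT, EVT_NOT_SILENT } evt_type;

typedef struct {
    evt_type     type;
    unsigned int bay;       /* Bay of the audio output that changed state */
    unsigned int channel;   /* Channel within that Bay                    */
} silence_event;

/* One audio connection as the host sees it. */
typedef struct {
    unsigned int src_bay, src_channel;    /* sender's output        */
    unsigned int dst_input;               /* receiver's input index */
} host_connection;

/* Translate a (NOT_)SILENT event about the sender's output into one
 * about the receiver's input, using the host's connection table.
 * Returns 1 on success, 0 if that output isn't connected here. */
int host_translate(const host_connection *conns, int nconns,
                   const silence_event *in, unsigned int *dst_input)
{
    for (int i = 0; i < nconns; i++) {
        if (conns[i].src_bay == in->bay &&
            conns[i].src_channel == in->channel) {
            *dst_input = conns[i].dst_input;
            return 1;
        }
    }
    return 0;
}

int main(void)
{
    host_connection conns[] = { { 2, 0, 5 } };  /* output 2:0 feeds input 5 */
    silence_event ev = { EVT_SILENT, 2, 0 };
    unsigned int input;

    if (host_translate(conns, 1, &ev, &input))
        printf("receiver: input %u just went silent\n", input);
    return 0;
}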
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
> personally, i think ardour is an excellent proof-by-implementation
> that yes, busses are really just a special class of strip,
Well, no. Busses are not strips. Busses are not signal paths. Busses
are unity gain summing nodes that facilitate many-to-one connections.
Ardour depends on JACK for all of its busses.
> with no
> basic difference in the kinds of controls you'd want for each. these
> days, an AudioTrack in ardour is derived from the object that defines
> a Bus. the only differences are that a Bus takes input from
> "anywhere", whereas an AudioTrack takes input from its playlist (via a
> DiskStream) and can be rec-enabled. other than that, they are basically
> identical.
Main outs, aux sends, and sub outs are a special class of strip that
receive their input exclusively from busses. Other than that, there is
no difference between these and any other kind of strip.
Tom
Hi all,
I've been beavering away on a session/config management system, and it's just
reached the point where projects can be properly saved and restored. It's
an implementation of the API proposal, http://reduz.dyndns.org/api/ , that
originated from this discussion:
http://marc.theaimsgroup.com/?l=linux-audio-dev&m=102736971320850&w=2 .
This is more of an RFC/alpha release than a proper "you can make your
apps work with this" release; a lot of the API will undoubtedly change.
What's right with this release: it saves/restores sessions, it saves data,
it exists. What's wrong with this release: the code is barely commented,
there's no documentation, it's quite inconsistent, the code is scrappy in
many places, and it's not very stable.
So, download it, have a bash, tell me what works/what doesn't, what's good/
what's not, what should stay the same/what should change.
http://pkl.net/~node/software/ladcca-0.1.tar.gz
Bob
does anyone here know if splitting code across different files, or, for
that matter, reordering the layout of one source file so that
functions called together are now "far apart", can actually affect
execution speed?
--p
Please follow-up this discussion to LAD.
On Sun, 8 Dec 2002, Paul Davis wrote:
> >> the situation, as i said before, is miserable. we just don't have a
> >> situation in linux where a single point of control can say "*this* is
> >> the GUI toolkit you will use". X is clearly the standard, but its not
> >> a toolkit (see below) that anyone can feasibly use alone.
> >>
> >I have asked you this two times already, but I'm trying for a third time
> >now. Why do you need this functionality? Unix is built from the ground up
> >to support multiple processes running at the same time, and therefore does
> >it very well, at least Linux does. And Unix has things such as sockets, pipes,
> >semaphores and shared memory. WHY do you need to run everything from the
> >same process? I only see disadvantages in doing that, except that it
> >uses a tiny bit more memory, but that's it. Please explain to me...
>
> because when running a real-time low-latency audio system, the cost of
> context switches is comparatively large. if you've got 1500usecs to
> process a chunk of audio data, and you spend 150usecs of it doing
> context switches (and the cost may be a lot greater if different tasks
> stomp over a lot of the cache), you've just reduced your effective
> processor power by 10%.
>
I don't believe you. I just did a simple context-switching/sockets
test after I sent the last mail. For doing 2*1024*1024 synchronized
context switches between two programs, my old 750MHz Duron uses 2.78
seconds. That should be about 1.3usecs per switch or something. With
a blocksize of, let's say, 128 bytes, that means that for
25 minutes of 44100Hz sound processing, 2.78 seconds is used
for context switching. Not much.
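This is not the test program referred to above, just a minimal sketch
of the same idea: two processes ping-ponging one byte over a
socketpair(), so each round trip costs at least two synchronized
context switches. The iteration count and the way the timing is
interpreted are my assumptions:

#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define ITERATIONS (1024 * 1024)

int main(void)
{
    int sv[2];
    char byte = 0;
    struct timeval start, end;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return 1;

    pid_t pid = fork();
    if (pid < 0)
        return 1;

    if (pid == 0) {
        /* Child: echo every byte straight back. */
        for (int i = 0; i < ITERATIONS; i++)
            if (read(sv[1], &byte, 1) != 1 ||
                write(sv[1], &byte, 1) != 1)
                _exit(1);
        _exit(0);
    }

    /* Parent: send a byte, wait for the echo, repeat. */
    gettimeofday(&start, NULL);
    for (int i = 0; i < ITERATIONS; i++)
        if (write(sv[0], &byte, 1) != 1 ||
            read(sv[0], &byte, 1) != 1)
            return 1;
    gettimeofday(&end, NULL);
    wait(NULL);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_usec - start.tv_usec) / 1e6;
    printf("%d round trips in %.2f s => %.2f usecs per switch\n",
           ITERATIONS, elapsed, elapsed * 1e6 / (2.0 * ITERATIONS));
    return 0;
}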
I'm not talking about JACK tasks, I'm talking about doing a simple plug-in
task inside a standalone program, the way the VST server works.
I believe the advantages of making a proper plug-in server, with an
easy-to-use library binding "plug-ins" and hosts together, are large:
1. Stability. The host cannot crash and the server cannot crash; only
   the plug-in can crash.
2. Runs better on multiprocessor machines. (At least I think so.)
3. Ease of use. By extending the interface with a library, common tasks
   such as finding lists of plug-ins, loading plug-ins and handling GUIs
   are available as ready-to-use functions.
4. A "plugin" is a program, which means that it can choose whatever GUI
   system it wants. LADSPA plug-ins can use GUIs.
5. All sorts of plug-ins can be supported by one such system: VST, LADSPA,
   DX, maya, etc.
6. By making a simple wrapper, all LADSPA plug-ins can automatically be
   made available as a "plugin" server "plugin", complete with a GUI.
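As a rough illustration of point 1 (this is not code from any real
plug-in server): put the "plugin" in its own process, and the host can
watch it with waitpid() and simply carry on when it dies:

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t plugin = fork();
    if (plugin < 0)
        return 1;

    if (plugin == 0) {
        /* "Plug-in" process: simulate a bad crash. */
        raise(SIGSEGV);
        _exit(0);   /* never reached */
    }

    /* "Host" process: notice the crash and carry on. */
    int status;
    waitpid(plugin, &status, 0);
    if (WIFSIGNALED(status))
        printf("plug-in died with signal %d; host is still alive\n",
               WTERMSIG(status));
    return 0;
}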
--
>From: Anthony <avan(a)uwm.edu>
>...
>RTsynth it is impossible for me to tell whether a given synth is on or
>off. That pixmap LED seems like a good idea, but maybe blue would be
>better ;) ...
Thank you for the hint! I will fix it in upcoming versions :)
- Stefan
http://www.vischeck.com/vischeck
The above link allows you to upload an image or parse a webpage through
some software written at Stanford that imitates what a colour-blind
person would see. It may be worth the time for you GUI designers to
run a screenshot through it and see what 1 in 10 people see. Case in point: in
RTsynth it is impossible for me to tell whether a given synth is on or
off. That pixmap LED seems like a good idea, but maybe blue would be
better ;) Not that I'm picking on RTsynth. Web page designers may also
want to look at this, although the Linux webpage community usually
does a good job.
--ant
> David Olofson <david(a)olofson.net> writes:
>
> Yes - but we're not talking about MIDI here. We *may* require that
> events are never lost, and even that it's not legal to send two
> identical events in line to an event port, unless you really mean
> something other than setting a controller to the same value twice.
Think ahead about how these sorts of requirements will be enforced:
will they be a "law of nature" (code checks to see if an app broke
the law, and takes action, like nature takes action when you try
to change the current flowing through an inductor :-), or will
they be left unchecked by code? If it's the latter, you can get into
this mode where everyone has extra checking and work-around code,
to handle impolite API users who aren't obeying the requirements.
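For example, a "law of nature" version of a no-duplicate-events rule
could live directly in the event port code, so impolite senders are
corrected by code rather than by convention. The structures and names
below are hypothetical; they only show where such a check would sit:

#include <stdio.h>

typedef struct {
    unsigned int controller;   /* which control the event targets */
    float        value;        /* new value */
} ctrl_event;

typedef struct {
    ctrl_event last;           /* most recently accepted event */
    int        has_last;       /* 0 until the first event arrives */
} event_port;

/* Returns 1 if the event was accepted, 0 if dropped as a duplicate. */
int event_port_send(event_port *port, const ctrl_event *ev)
{
    if (port->has_last &&
        port->last.controller == ev->controller &&
        port->last.value == ev->value)
        return 0;              /* the "law of nature" kicks in here */

    port->last = *ev;
    port->has_last = 1;
    /* ...queue the event for the receiving plugin here... */
    return 1;
}

int main(void)
{
    event_port port = { .has_last = 0 };
    ctrl_event ev = { 7, 0.5f };

    printf("first send:  %d\n", event_port_send(&port, &ev));  /* 1 */
    printf("second send: %d\n", event_port_send(&port, &ev));  /* 0 */
    return 0;
}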
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
> David Olofson writes:
>
> The point I think you're missing is that a "control change" event is
> *exactly* the same thing as a "voice start" event on the bits and
> bytes level.
Lossy MIDI filters will prune away two MIDI Control Change commands
in a row for the same controller number with the same data value,
apart from controller numbers (like All Notes Off) whose semantics
have meaning in this case. And the assumptions underlying the behavior
of these filters are present in subtle ways in other MIDI gear and
usages too. For example, a programming language that presents an
array with 128 members, holding the last-received (or default) value
of each MIDI controller, presents an API that implicitly does this
filtering, no matter how frequently the program samples the array.
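A minimal sketch of that kind of API (the names are illustrative, not
from any particular implementation): a 128-entry array holding the
last-received value of each controller. A program that only samples
the array can never tell that two identical Control Change messages
arrived in a row -- the filtering is implicit:

#include <stdio.h>

static unsigned char controller[128];   /* last-received (or default) values */

/* Called for each incoming Control Change message. */
void control_change(unsigned char number, unsigned char value)
{
    controller[number & 0x7F] = value & 0x7F;
}

int main(void)
{
    /* Two identical CC#7 (volume) messages in a row... */
    control_change(7, 100);
    control_change(7, 100);

    /* ...are indistinguishable from one message to a reader of the array. */
    printf("controller 7 = %u\n", controller[7]);
    return 0;
}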
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
Just wondering how the h*ll you're supposed to name functions and
types when all sensible naming conventions seem to be reserved by
POSIX and other authorities...
How about this for Audiality:
Functions: a_whatever()
Types: at_whatever
I bet *some* lib is using a_* and/or at_*, but I have yet to find it.
Besides, it doesn't look like a great idea to use both a_* and at_*
for a single project - but *_t is reserved, and I don't like
Capitalization in public APIs. Maybe I can make an exception,
though... A_* and AT_* or something?
Then again, most compilers can tell types from functions without
throwing too many bogus messages at the user, so one might get away
with a single prefix and simply remove the _t everywhere.
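For what it's worth, here is what that scheme could look like in
practice -- hypothetical identifiers, not taken from the actual
Audiality headers: a_* for functions, at_* for types, and no
POSIX-reserved *_t suffix anywhere:

#include <stdio.h>

/* at_* prefix for types. */
typedef struct at_event {
    unsigned int type;        /* event type code             */
    unsigned int frame;       /* sample-accurate timestamp   */
} at_event;

/* a_* prefix for functions. */
static int a_event_send(const at_event *ev)
{
    printf("event type %u at frame %u\n", ev->type, ev->frame);
    return 0;
}

int main(void)
{
    at_event ev = { 1, 4711 };
    return a_event_send(&ev);
}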
//David Olofson - Programmer, Composer, Open Source Advocate