There was some talk about window grouping on the fvwm mailing list
(fvwm.org), but IIRC it was mostly about tabbed windows, not tiled
windows (which is what I think you are talking about).
Then again, maybe some panel could be (ab)used for this; e.g. the fvwm
panel can swallow various X apps, and you can specify a fairly fancy
layout of windows within the panel. (Usually the panel is very small and
is used to run a clock, xload and other apps of that type, but you can
make it any size you want and run any apps in it.) Not sure how well it
would work; I never tried to use the fvwm panel in this way...
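For what it's worth, an FvwmButtons setup along these lines (a rough
sketch from memory, untested; the geometry and apps are just
placeholders) is roughly what it looks like:

*FvwmButtonsGeometry 800x200
*FvwmButtonsRows 1
*FvwmButtons(Swallow "xclock" 'Exec exec xclock')
*FvwmButtons(Swallow "xload" 'Exec exec xload')
Module FvwmButtons

Swallow matches on the window name, so each app would need to come up
with a predictable name for this to work.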
erik
-----Original Message-----
From: Roger Larsson
To: linux-audio-dev(a)music.columbia.edu
Sent: 12/18/02 10:11 AM
Subject: Re: [linux-audio-dev] Audio s/w ui swallowing
On Monday 09 December 2002 20:10, Steve Harris wrote:
> The world and his dog seems to be releasing macos/windows audio s/w
> that looks like 19" rack units.
>
> Anyone know enough about X to know if it's possible to make X apps
> open their main window inside a standard sized cabinet (a la Reason).
>
> I'm assuming it would be ok to require the app to be a certain size
> and have explicit support, but I guess it couldn't put any
> restrictions on toolkit.
>
> Other than looking cool, it would actually be a useful way to keep
> window clutter down. Not that we have any 19" lookalike apps yet, but
> I guess we will do at some point.
>
> - Steve
Why not add another option to the window manager instead?
(much like "Always on top")
"Add to window group" -> "Audio rack"
Let the window manager keep them at a common width
and split the height between them. Resize and move all the windows
together.
- Possible?
/RogerL
--
Roger Larsson
Skellefteå
Sweden
> > First, I don't understand why you want to design a "synth API". If
> > you want to play a note, why not instantiate a DSP network that
> > does the job, connect it to the main network (where system audio
> > outs reside), run it for a while and then destroy it? That is what
> > events are in my system - timed modifications to the DSP network.
>
> 99% of the synths people use these days are hardcoded, highly
> optimized monoliths that are easy to use and relatively easy to host.
> We'd like to support that kind of stuff on Linux as well, preferably
> with an API that works equally well for effects, mixers and even
> basic modular synthesis.
>
> Besides, real time instantiation is something that most of us want to
> avoid at nearly any cost. It is a *very* complex thing to get right
> (ie RT safe) in any but the simplest designs.
Okay, I realize that now; maybe your approach is better. RT and really
good latency was not, and is not, the first priority in MONKEY; it's more
intended for composition, so I can afford to instantiate units
dynamically. But it's good that someone is concerned about RT.
> > However, if you want, you can define functions like C x =
> > exp((x - 9/12) * log(2)) * middleA, where middleA is another
> > function that takes no parameters. Then you can give pitch as "C 4"
> > (i.e. C in octave 4), for instance. The expression is evaluated and
> > when the event (= modification to DSP network) is instantiated it
> > becomes an input to it, constant if it is constant, linearly
> > interpolated at a specified rate otherwise. I should explain more
> > about MONKEY for this to make much sense but maybe later.
>
> This sounds interesting and very flexible - but what's the cost? How
> many voices of "real" sounds can you play at once on your average PC?
> (Say, a 2 GHz P4 or something.) Is it possible to start a sound with
> sample accurate timing? How many voices would this average PC cope
> with starting at the exact same time?
Well, in MONKEY I have done away with separate audio and control signals -
there is only one type of signal. However, each block of a signal may
consist of an arbitrary number of consecutive subblocks. There are three
types of subblocks: constant, linear and data. A (say) LADSPA control
signal block is equivalent to a MONKEY signal block that has one subblock
which is constant and covers the whole block. Then there's the linear
subblock type, which specifies a value at the beginning and a per-sample
delta value. The data subblock type is just audio rate data.
The native API then provides for conversion between different types of
blocks for units that want, say, flat audio data. This is actually less
expensive and complex than it sounds.
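To give an idea of the layout, a minimal sketch in C (hypothetical
names - not the actual MONKEY declarations):

typedef enum { SB_CONSTANT, SB_LINEAR, SB_DATA } SubblockType;

typedef struct {
    SubblockType type;
    unsigned length;          /* samples covered by this subblock */
    union {
        float constant;       /* SB_CONSTANT: one value for the span */
        struct {              /* SB_LINEAR: start + per-sample delta */
            float start, delta;
        } linear;
        const float *samples; /* SB_DATA: plain audio rate data */
    } u;
} Subblock;

typedef struct {
    unsigned n_subblocks;     /* subblocks are consecutive; their */
    Subblock *sub;            /* lengths sum to the block length  */
} SignalBlock;

/* Conversion to flat audio data, for units that want that: */
static void flatten(const SignalBlock *b, float *out)
{
    unsigned i, j;
    for (i = 0; i < b->n_subblocks; ++i) {
        const Subblock *s = &b->sub[i];
        switch (s->type) {
        case SB_CONSTANT:
            for (j = 0; j < s->length; ++j)
                *out++ = s->u.constant;
            break;
        case SB_LINEAR: {
            float v = s->u.linear.start;
            for (j = 0; j < s->length; ++j, v += s->u.linear.delta)
                *out++ = v;
            break;
        }
        case SB_DATA:
            for (j = 0; j < s->length; ++j)
                *out++ = s->u.samples[j];
            break;
        }
    }
}

A LADSPA control block then maps to a single SB_CONSTANT subblock
covering the whole block, as described above.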
About the cost: an expression for pitch would be evaluated, say, 100 times
a second, and values in between would be linearly interpolated, so that
overhead is negligible. It probably does not matter that e.g. pitch glides
are not exactly logarithmic; a piece-wise linear approximation should
suffice in most cases.
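For example, assuming a 44.1 kHz sample rate: 100 evaluations per second
means each linear segment spans 44100 / 100 = 441 samples, so the
per-sample delta is simply (next_value - current_value) / 441 - and the
expression itself still runs only 100 times a second, no matter how many
samples are rendered.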
I'm not sure about the overhead of the whole system but I believe the
instantiation overhead to be small, even if you play 100 notes a second.
However, I haven't measured instantiation times, and there certainly is
some overhead. We are still talking about standard block-based processing,
though. Yes, sample accurate timing is implemented: when a plugin is run
it is given start and end sample offsets.
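For illustration, the run entry point is something along these lines
(a simplified sketch with hypothetical names):

/* Process samples [start, end) of the current block; an event in
 * the middle of a block splits the run at the event's offset. */
void (*run)(MONKEY_unit *u, unsigned start, unsigned end);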
Hmm, that might have sounded confusing, but I intend to write a full
account of MONKEY's architecture in the near future.
> You could think of our API as...
It seems to be a solid design so far. I will definitely comment on it when
you have a first draft for a proposal.
--
Sami Perttu "Flower chase the sunshine"
Sami.Perttu(a)hiit.fi http://www.cs.helsinki.fi/u/perttu
Nope, I'm not going to suggest a complete threading API here. That -
if it's ever going to be part of XAP - will have to wait until we know
what we're doing. (Use pthreads for now. Just don't get any ideas
about toolkits and stuff...)
What I *am* going to suggest is this:
XAP host call:
/*
 * Calls a function as a "background job".
 * 'context' is the worker call thread ID.
 * 'data' is passed to the worker callback.
 */
int (*worker_call)(XAP_host *host,
                   int (*callback)(void *data),
                   int context, void *data);
XAP event:
/*
 * Notify a plugin that one of its worker
 * calls has returned.
 */
XAP_A_WORKER_DONE(int result, void *data)
'context' is used to handle serialization when you want to prevent
multiple workers from running at the same time. For example, using "1"
for all workers you start guarantees that only one of them will run
at a time. If you use different context IDs, the workers may run
concurrently on different CPUs, for example.
'data' is user defined data that is passed to the worker thread.
Obviously, you should keep your hands off this data until you get it
back (through XAP_A_WORKER_DONE), since the worker is supposed to be
running in a different thread. You may break this rule if you
*really* know what you're doing. Lock-free FIFOs and similar
constructs that are thread safe by design may be shared. Do note,
however, that hosts are not *required* to actually run worker calls
in another thread!
<maybe>
If this doesn't work for you, there must be a hint
"REQUIRES_WORKER_THREAD" that tells hosts that don't provide
out-of-thread worker calls to stay away from your plugin.
FIXME: Plugins that will work in different ways depending on
whether workers are in a separate thread or not, should
probably be able to tell, preferably during instantiation.
</maybe>
The XAP_A_WORKER_DONE event is sent to the calling plugin as the
worker call returns. 'result' is the return value from the callback,
and 'data' is the 'data' argument passed to the worker call through
host->worker_call().
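To make the intended use a bit more concrete, here is a sketch of the
plugin side. (Only worker_call() and XAP_A_WORKER_DONE are from the
proposal above; my_job, read_wave() and friends are made up for the
example.)

struct my_job {
    const char *path;
    struct wave *wave;        /* hypothetical sample container */
};

struct my_plugin {
    XAP_host *host;
    struct my_job job;
};

/* Runs in the worker context. Blocking disk I/O is OK here, since
 * we are (hopefully) not in the audio thread: */
static int load_sample(void *data)
{
    struct my_job *job = data;
    return read_wave(job->path, &job->wave);  /* hypothetical loader */
}

/* Called from the audio context, e.g. when a "sample file" string
 * control changes: */
static void start_load(struct my_plugin *p, const char *path)
{
    p->job.path = path;
    /* context 1 ==> all our jobs are serialized */
    p->host->worker_call(p->host, load_sample, 1, &p->job);
    /* Hands off p->job now, until XAP_A_WORKER_DONE arrives with
     * data == &p->job! */
}

The XAP_A_WORKER_DONE handler would then grab job->wave and swap it in
from the audio thread, RT safely.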
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
We concluded a while ago that string events are really rather handy
for referencing external files. (For that to be truly useful, hosts
need to be able to understand what these strings are; ie directory
names, file names, temporary storage, output, input. This, so they
can keep track of what belongs in a project.)
Anyway, what I'm seeing here is that strings and raw data blocks
overlap quite significantly on the protocol level. Both are just
control ports that take or send "strings" - either NULL terminated,
or "pascal style". The latter *could* replace the former... Just hint
them differently. Same transfer protocol.
The problem with NULL terminated strings is that they cannot carry
binary data (without encoding), and that you have to scan them for
the terminator to find out their length.
OTOH, sending a C string as raw data means you have to do that scan
anyway (to fill in the length field), even if the receiver will just
ignore the "length" field and treat it as the C string it actually
is...
I'm leaning towards the "strings in raw data blocks" approach,
despite the little strlen() inconvenience for senders. Mostly because
it's one control data type less to care about, without loss of real
functionality.
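For illustration, the block itself could be as simple as this
(hypothetical names):

#include <stddef.h>
#include <string.h>

typedef struct {
    size_t length;       /* no NULL terminator implied */
    const char *data;    /* may be binary - or a string */
} XAP_datablock;

/* The sender side strlen() inconvenience, in full: */
void send_string(XAP_datablock *blk, const char *s)
{
    blk->length = strlen(s);
    blk->data = s;
}

A receiver hinted "string" could still append its own terminator when
copying the data, if it wants plain C string semantics internally.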
Ideas?
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
> Paul Davis <paul(a)linuxaudiosystems.com> writes:
>
> i am also mildly surprised by the value of the deal. steinberg is
> effectively valued at US$24 million.
They sold at a discount compared to Emagic ... Apple paid 30M for Emagic,
and Apple is not known for over-paying for smaller acquisitions
(NeXT is a different story, although one could argue that Steve
as iCEO was worth every penny ...).
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
Hi,
I have just bought myself the Cardbus+Multiface RME system. I think I
have finally gotten the ALSA modules (snd-hammerfall-mem and snd-hdsp)
to load correctly along with the OSS emulation modules. BUT I can't
seem to play anything out of the Lineout/Headphone output. I have
tried various 'amixer cset' commands but all I get is silence (trying
to play a simple wav file using aplay).
Any suggestions?
--
---------------------------------------
D. Sen, PhD
21 Woodmont Drive
Randolph
NJ 07869
Home Email: dsen(a)homemail.com Tel: 973 216 2326
Work Email: dsen(a)ieee.org Web: http://www.auditorymodels.org/~dsen
I just had to come out of lurk mode for this:
tim wrote:
> all of them.
>
> rhythm is always based on one integral periodic 'pulse'. if
> time is not divisible by this atom, there is no musical time.
Nancarrow, Ives, Stockhausen, Xenakis, Boulez, Schaeffer, Henry etc. etc.
in the classical field;
Taylor, Sun Ra, Ornette Coleman, Coltrane, Mengelberg, Brötzmann, Zorn,
Ayler etc. etc. in jazz/impro;
lots of ambient stuff that I don't know the names of;
lots of a cappella vocal music from various cultures.
There can easily be multiple time-frames happening in a single piece
of music that have non-linear relationships.
A computer can also be used to make sounds that a player cannot make.
A sequencer/DAW will also be used for non-musical ordering of sounds in
time. It might be handy to use an extended beat/measure structure for
setting event frames when editing dialogue for a radio play.
BTW, measures are much more complicated than just A/B. Even a 6/8 is
really 2/2.6666... in a way: two beats of dotted quarters, where a
dotted quarter is 3/8 = 1/2.6666... of a whole note. Unless it is
divided differently. See Brahms for nice examples of playing with the
groupings of eighth notes in 4/4.
The notation x/y is just a shorthand in classical music _notation_ that
only becomes meaningful in the context of other notation parameters,
such as note-beam groupings etc.
So notating 17/16 instead of 4.25/4 is fine, because the score gives
the grouping information (to the player and conductor).
Although I have written (4+1/2) / 4 because I wanted to make sure that
the piece is counted that way and not in 9/8 (=3+3+3).
Anyway, my point is that the A/B concept of measure is only really
relevant if you're dealing with Western _notation_, and then only
together with the entire score.
going back to lurk mode now
Gerard
You move the play position marker.
Plugins get the position changes from the timeline,
and those that need to, do their best to prebuffer
audio data from disk, or whatever. While doing that,
they put a "1" on their "READY" Control Outputs, which
are connected to the transport control.
You press "Start".
The transport control simply waits until it has
received a "0" (ie "done prebuffering") from each one
of the "READY" Controls it's watching. Then it actually
starts the sequencer.
If there are no READY Controls in the net, the
sequencer will just start instantly.
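In code, the transport side could be as simple as this (a sketch;
Control and control_value() stand in for whatever the real Control
API ends up looking like):

/* Returns 1 when every watched READY Control has gone back to "0",
 * ie everyone is done prebuffering. No READY Controls in the net
 * ==> always 1 ==> the sequencer starts instantly. */
static int all_ready(Control **ready, unsigned n)
{
    unsigned i;
    for (i = 0; i < n; ++i)
        if (control_value(ready[i]) != 0.0f)
            return 0;
    return 1;
}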
Sounds reasonable?
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
The Rosegarden team have the pleasure of announcing the
latest release of their MIDI and audio sequencer and score
editor for Linux.
The source code is available now from the project homepage:
http://www.all-day-breakfast.com/rosegarden
The availability of binary packages depends on their various
maintainers; please check the project homepage for more
information.
New features since the 0.8 release include:
o Improved MIDI file I/O: better support for banks and
  merging in import, better support for delay, transpose
  etc. in export
o MIDI device Bank and Program editor, including import
and export of Studio data
o MIDI Panic Button for clearing down stuck notes
o Added some keyboard controls to matrix
o Added real-time segment delays
o Progress display completely overhauled
o ALSA clients can be added dynamically (you can change
your soft synth configuration while Rosegarden is running)
o MIDI events filter dialog for MIDI THRU and MIDI record
o MIDI/ALSA recording bug fixed (recording stopped after a
  certain number of recorded events)
o Many bug fixes, tweaks and performance improvements
Unfortunately we've had to drop KDE2 support from this release
onwards as it was getting too difficult to maintain both KDE2
and KDE3 in the same development tree. We're hoping this won't
affect too many users in the long term.
Finally, we would like to thank the cachegrind/kcachegrind team
for a truly useful development tool.
Chris