Hello, LAD readers,
I just want to give a short update on the "Linux Audio Developer
Meeting" that Matthias Nagorni and I are organizing on March 14th-16th,
2003.
We have been collecting suggestions for talks/presentations for the last
couple of weeks, and our current program looks quite "high-quality":
---------------------
Paul Davis, Bala Cynwyd (Pennsylvania), USA:
10 Things You Might Not Have Thought About When Writing An Audio
Application
Frank van de Pol, Bergen op Zoom, Netherlands:
Timing and Synchronisation for Sequencer Applications
Steve Harris, Southampton (Hampshire), UK:
Digital Signal Processing and the LADSPA Audio Plugin Interface
Takashi Iwai, Nuremberg, Germany:
Ruminations on ALSA Drivers
Jaroslav Kysela, Ceske Budejovice, Czech Republic:
ALSA - Always on the run
Matthias Nagorni, Nuremberg, Germany:
Modular Synthesis with AlsaModularSynth
---------------------
This list can be found at http://www.suse.de/~mana/ladzkm.html .
Also, there will be some kind of opening talk/presentation at the
beginning and probably some demos at the end of the respective days. The
exact plans of when which talk will be held have not been fixed yet, but
will be soon. There is a slight chance that we might be able to give out
a live audio stream of the presentations.
We will be given 3 rooms by the ZKM organization: a large lecture room
(100 seats) for the presentations, and 2 smaller rooms for... well, let's
call them "Hack Centers" for now :-). Both of these rooms are provided with
enough electricity (so as not to cause any power failures :-), and we will
also be allowed to use the ZKM's internet connection for our purposes, so
no one has to feel "isolated from the Matrix" while attending the meeting
:-).
I am very excited about this - I believe it's the first time we manage
to get together so many Linux audio programmers in one spot, and this
could create a great synergy effect. It's definitely worth visiting, and
if you plan to join it (as participant/developer, not as a "normal"
guest), please don't forget to register with Matthias (mana(a)suse.de) or
me (Frank.Neumann(a)st.com) so that we can plan the rooms, accommodation
and equipment. We do not plan any particular participation fees, but
people might have some costs for travelling, hotel/youth hostel and
food. I will do my best to help attendees with finding a room.
I believe that finding our sponsor at the ZKM was a great stroke of luck
- at the Institute for Music & Acoustics they currently use mostly
Macs running software like Max/MSP, Csound, PD and jMax, but they are very
interested in doing more with Linux - so this get-together really seems
to be a perfect fit. Even at this point I can say that we owe them a lot
for making this meeting possible.
Ok, that's all for now. More news will be posted here as I get them.
Have a great Christmas time and a happy, healthy and successful 2003,
Frank
--
Frank Neumann (Frank.Neumann(a)st.com), VIONA Development Center
STMicroelectronics, Karlstraße 27, 76133 Karlsruhe
>What if we made it more of a broadcast. Have a way to have host or timeline
>global variables (which is what these are) which can be read by plugins.
>....
>Plugins are free to drop these global events. There is no function call
>overhead. The only overhead is the indirection of using a pointer to a
>global.
>
It seems like the transport framework would want to have an entire "transport state" calculated for each frame. Why not include a copy of this record in the process() function parameters? This makes it available in the process loop without a function call and makes it thread-safe, right?
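A minimal C sketch of what passing such a per-block transport snapshot by value could look like; every struct field and name below is hypothetical, not from any actual XAP draft:

```c
#include <stdint.h>

/* Hypothetical per-block transport snapshot, copied into the process
 * arguments so a plugin can read it with no host call and no locking. */
typedef struct {
    uint64_t frame;      /* absolute transport position in frames */
    double   tempo_bpm;  /* current tempo */
    double   beat;       /* musical position in beats */
    int      rolling;    /* nonzero while the transport is playing */
} transport_state;

typedef struct {
    transport_state transport; /* copied in; read-only for the plugin */
    float **inputs;
    float **outputs;
    uint32_t nframes;
} process_args;

/* A plugin reads the snapshot directly inside its process loop,
 * e.g. to compute how many beats this block spans. */
static double beats_in_block(const process_args *a, double sample_rate)
{
    if (!a->transport.rolling)
        return 0.0;
    return a->transport.tempo_bpm / 60.0 * a->nframes / sample_rate;
}
```

Since the snapshot is a private copy per call, nothing can mutate it under the plugin's feet, which is what makes the scheme thread-safe.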
Tell me more about the plan for SEEKing (i.e. restarting playback in mid-sequence). As already discussed, this is a big pain if you've got a droning pad sound that only has a note-on every 8 measures.
I think it's too much to ask for plugin writers to manage seeking. Instead the transport should do it by prerolling some distance at faster than realtime (playback buffers are thrown away as they are generated).
I see 2 solutions, which can be combined:
1) user settable pre-roll. Set it to 0 for live applications; a few seconds for normal use; and "from the top" when you must be sample-accurate.
2) cue points. When the user marks a cue point, the plugin must stream its internal state (LFO values, sample offsets, etc) into a buffer. Then playback can start immediately from this point later. Making changes to a plugin before the cuepoint would invalidate the cuepoint. When starting playback from an invalid cue point, the transport would have to pre-roll from the last valid cue point.
Of course this is no real solution for outboard synths.
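A toy sketch of how a transport might pick the pre-roll start under option 2 (all the names here are hypothetical, just to make the idea concrete):

```c
#include <stdint.h>

/* A cue point records where a state snapshot was taken; edits made
 * before that position clear the 'valid' flag. */
typedef struct {
    uint64_t frame;
    int      valid;
} cue_point;

/* Return the frame to start (pre)rolling from for a seek to 'target':
 * the latest valid cue point at or before the target, else frame 0
 * ("from the top"). */
static uint64_t preroll_start(const cue_point *cues, int ncues,
                              uint64_t target)
{
    uint64_t best = 0;
    int i;
    for (i = 0; i < ncues; i++)
        if (cues[i].valid && cues[i].frame <= target &&
            cues[i].frame >= best)
            best = cues[i].frame;
    return best;
}
```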
(sorry if I'm restating the obvious.. I'm new to the list. Is there a "general overview" of XAP? I couldn't find one at the web site.)
-Ben Loftis
What frequency does PITCH 0.0 correspond to?
I assume this should be a fixed value, as "system finetune" can be
implemented in other ways. No need to have all synths keep track of
yet another parameter to convert PITCH into whatever they need.
440 Hz? :-)
(Who the h*ll came up with the idea that *C* starts the octave!? What
a horrible frequency to remember...)
Nah - I guess we should stick with the conventional "middle C", like
everyone else. I guess most synths and software use 440 Hz for A4 by
default. That's 261.6255653 Hz for C4. Nice value... *heh*
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
There was some talk about window grouping on the fvwm mailing list (fvwm.org),
but IIRC it was mostly about tabbed windows, not tiled (which is what I
think you're talking about).
Then again, maybe some panel could be (ab)used for this; e.g. the fvwm panel
can swallow various X apps, and you can specify a fairly fancy layout of
windows within the panel (usually a panel is very small, used to run a clock
or xload and other apps of that type, but you can make it any size you want
and run any apps). Not sure how it would work; I never tried to use the fvwm
panel in this way...
erik
-----Original Message-----
From: Roger Larsson
To: linux-audio-dev(a)music.columbia.edu
Sent: 12/18/02 10:11 AM
Subject: Re: [linux-audio-dev] Audio s/w ui swallowing
On Monday 09 December 2002 20:10, Steve Harris wrote:
> The world and his dog seems to be releasing macos/windows audio s/w that
> looks like 19" rack units.
>
> Anyone know enough about X to know if it's possible to make X apps open
> their main window inside a standard sized cabinet (a la Reason).
>
> I'm assuming it would be ok to require the app to be a certain size and
> have explicit support, but I guess it couldn't put any restrictions on
> the toolkit.
>
> Other than looking cool, it would actually be a useful way to keep window
> clutter down. Not that we have any 19" lookalike apps yet, but I guess we
> will do at some point.
>
> - Steve
>
Why not add another option to the window manager instead?
(much like "Always on top")
"Add to window group" -> "Audio rack"
Let the window manager keep them with a common width
and split height. Resize and Move all windows together.
- Possible?
/RogerL
--
Roger Larsson
Skellefteå
Sweden
> > First, I don't understand why you want to design a "synth API". If
> > you want to play a note, why not instantiate a DSP network that
> > does the job, connect it to the main network (where system audio
> > outs reside), run it for a while and then destroy it? That is what
> > events are in my system - timed modifications to the DSP network.
>
> 99% of the synths people use these days are hardcoded, highly
> optimized monoliths that are easy to use and relatively easy to host.
> We'd like to support that kind of stuff on Linux as well, preferably
> with an API that works equally well for effects, mixers and even
> basic modular synthesis.
>
> Besides, real time instantiation is something that most of us want to
> avoid at nearly any cost. It is a *very* complex thing to get right
> (ie RT safe) in any but the simplest designs.
Okay, I realize that now; maybe your approach is better. RT and really
good latency were and are not the first priority in MONKEY, it's more
intended for composition, therefore I can afford to instantiate units
dynamically. But it's good that someone is concerned about RT.
> > However, if you want, you can define functions like C x =
> > exp((x - 9/12) * log(2)) * middleA, where middleA is another
> > function that takes no parameters. Then you can give pitch as "C 4"
> > (i.e. C in octave 4), for instance. The expression is evaluated and
> > when the event (= modification to DSP network) is instantiated it
> > becomes an input to it, constant if it is constant, linearly
> > interpolated at a specified rate otherwise. I should explain more
> > about MONKEY for this to make much sense but maybe later.
>
> This sounds interesting and very flexible - but what's the cost? How
> many voices of "real" sounds can you play at once on your average PC?
> (Say, a 2 GHz P4 or something.) Is it possible to start a sound with
> sample accurate timing? How many voices would this average PC cope
> with starting at the exact same time?
Well, in MONKEY I have done away with separate audio and control signals -
there is only one type of signal. However, each block of a signal may
consist of an arbitrary number of consecutive subblocks. There are three
types of subblocks: constant, linear and data. A (say) LADSPA control
signal block is equivalent to a MONKEY signal block that has one subblock
which is constant and covers the whole block. Then there's the linear
subblock type, which specifies a value at the beginning and a per-sample
delta value. The data subblock type is just audio rate data.
The native API then provides for conversion between different types of
blocks for units that want, say, flat audio data. This is actually less
expensive and complex than it sounds.
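As a rough illustration of the three subblock types described above (every type and name here is my guess at a rendering in C, not MONKEY's actual API), flattening a subblock to audio-rate data might look like:

```c
#include <stddef.h>

/* Hypothetical rendering of MONKEY-style subblocks. */
typedef enum { SB_CONSTANT, SB_LINEAR, SB_DATA } subblock_type;

typedef struct {
    subblock_type type;
    size_t length;     /* samples covered by this subblock */
    float  value;      /* SB_CONSTANT: the value; SB_LINEAR: start value */
    float  delta;      /* SB_LINEAR: per-sample increment */
    const float *data; /* SB_DATA: audio-rate samples */
} subblock;

/* Flatten one subblock into plain audio-rate data, as a unit that
 * "wants flat audio" would request from the native API. */
static void subblock_render(const subblock *sb, float *out)
{
    size_t i;
    switch (sb->type) {
    case SB_CONSTANT:
        for (i = 0; i < sb->length; i++)
            out[i] = sb->value;
        break;
    case SB_LINEAR:
        for (i = 0; i < sb->length; i++)
            out[i] = sb->value + sb->delta * (float)i;
        break;
    case SB_DATA:
        for (i = 0; i < sb->length; i++)
            out[i] = sb->data[i];
        break;
    }
}
```

In this picture a LADSPA-style control block is simply one SB_CONSTANT subblock spanning the whole block, which is why the conversion is cheap.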
About the cost: an expression for pitch would be evaluated, say, 100 times
a second, and values in between would be linearly interpolated, so that
overhead is negligible. It probably does not matter that e.g. pitch glides
are not exactly logarithmic, a piece-wise approximation should suffice in
most cases.
I'm not sure about the overhead of the whole system but I believe the
instantiation overhead to be small, even if you play 100 notes a second.
However, I haven't measured instantiation times, and there certainly is
some overhead. We are still talking about standard block-based processing,
though. Yes, sample accurate timing is implemented: when a plugin is run
it is given start and end sample offsets.
Hmm, that might have sounded confusing, but I intend to write a full
account of MONKEY's architecture in the near future.
> You could think of our API as...
It seems to be a solid design so far. I will definitely comment on it when
you have a first draft for a proposal.
--
Sami Perttu "Flower chase the sunshine"
Sami.Perttu(a)hiit.fi http://www.cs.helsinki.fi/u/perttu
Nope, I'm not going to suggest a complete threading API here. That -
if it's ever going to be part of XAP - will have to wait until we know
what we're doing. (Use pthreads for now. Just don't get any ideas
about toolkits and stuff...)
What I *am* going to suggest is this:
XAP host call:
/*
* Calls a function as a "background job".
* 'context' is the worker call thread ID.
* 'data' is passed to the worker callback
*/
int (*worker_call)(XAP_host *host,
int (*callback)(),
int context, void *data);
XAP event:
/*
* Notify a plugin that one of its worker
* calls has returned.
*/
XAP_A_WORKER_DONE(int result, void *data)
'context' is used to handle serializing when you want to prevent
multiple workers running at the same time. For example, saying "1"
for all workers you start guarantees that only one of them will run
at a time. If you use different context IDs, the workers may run
concurrently on different CPUs, for example.
'data' is user defined data that is passed to the worker thread.
Obviously, you should keep your hands off this data until you get it
back (through XAP_A_WORKER_DONE), since the worker is supposed to be
running in a different thread. You may break this rule if you
*really* know what you're doing. Lock-free FIFOs and similar
constructs that are thread safe by design may be shared. Do note,
however, that hosts are not *required* to actually run worker calls
in another thread!
<maybe>
If this doesn't work for you, you must set a hint
"REQUIRES_WORKER_THREAD" that tells hosts that don't provide
out-of-thread worker calls to stay away from your plugin.
FIXME: Plugins that will work in different ways depending on
whether workers are in a separate thread or not, should
probably be able to tell, preferably during instantiation.
</maybe>
The XAP_A_WORKER_DONE event is sent to the calling plugin as the
worker call returns. 'result' is the return value from the callback,
and 'data' is the 'data' argument passed to the worker call through
host->worker_call().
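To make the intended call flow concrete, here is a toy single-threaded mock of the proposed calls. The real XAP types don't exist yet, and the proposal explicitly allows hosts to run workers inline, which is what this mock does; giving the callback a void* argument is my assumption:

```c
#include <stddef.h>

/* Toy stand-ins for the proposed API; all names are illustrative. */
typedef struct XAP_host XAP_host;

typedef struct {
    int   result;    /* stands in for the XAP_A_WORKER_DONE event */
    void *data;
    int   delivered;
} worker_done;

struct XAP_host {
    worker_done done; /* where this toy host "sends" the event */
};

/* Inline "background job": hosts are not required to use a thread,
 * so this host just runs the callback and delivers the event as
 * the call returns. */
static int worker_call(XAP_host *host, int (*callback)(void *),
                       int context, void *data)
{
    (void)context; /* single-threaded: serialization is trivial here */
    host->done.result = callback(data);
    host->done.data = data;
    host->done.delivered = 1;
    return 0;
}

/* Example worker: pretend to load a sample and report its length. */
static int load_sample(void *data)
{
    int *frames = data;
    *frames = 44100;
    return 0;
}
```

The plugin keeps its hands off 'data' between worker_call() and the WORKER_DONE delivery, exactly as the rule above requires, and the code works unchanged whether the host runs the worker inline or in another thread.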
//David Olofson - Programmer, Composer, Open Source Advocate
We concluded a while ago that string events are really rather handy
for referencing external files. (For that to be truly useful, hosts
need to be able to understand what these strings are; ie directory
names, file names, temporary storage, output, input. This, so they
can keep track of what belongs in a project.)
Anyway, what I'm seeing here is that strings and raw data blocks
overlap quite significantly on the protocol level. Both are just
control ports that take or send "strings" - either NULL terminated,
or "pascal style". The latter *could* replace the former... Just hint
them differently. Same transfer protocol.
Problem with NULL terminated strings is that they cannot take binary
data (without encoding), and that you have to scan them for the
terminator to find out their length.
OTOH, sending a C string as raw data means you have to do this, even
if the receiver will just ignore the "length" field, and treat it as
the C string it actually is...
I'm leaning towards the "strings in raw data blocks" approach,
despite the little strlen() inconvenience for senders. Mostly because
it's one control data type less to care about, without loss of real
functionality.
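A sketch of what a "pascal style" raw block could look like, with the sender-side strlen() and the receiver-side copy mentioned above; the names are illustrative only, not from any XAP draft:

```c
#include <string.h>

/* One length-prefixed block type carries both binary data and strings. */
typedef struct {
    size_t length;
    const char *bytes; /* not necessarily NUL terminated */
} raw_block;

/* The sender pays the strlen() once to wrap a C string... */
static raw_block block_from_cstring(const char *s)
{
    raw_block b;
    b.length = strlen(s);
    b.bytes = s;
    return b;
}

/* ...and a receiver that wants a C string back must copy and terminate,
 * since the block itself may hold embedded NULs. Returns the number of
 * bytes copied (truncated to fit 'outsize'). */
static size_t block_to_cstring(raw_block b, char *out, size_t outsize)
{
    size_t n = b.length < outsize - 1 ? b.length : outsize - 1;
    memcpy(out, b.bytes, n);
    out[n] = '\0';
    return n;
}
```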
Ideas?
//David Olofson - Programmer, Composer, Open Source Advocate
> Paul Davis <paul(a)linuxaudiosystems.com> writes:
>
> i am also mildly suprised by the value of the deal. steinberg is
> effectively valued at US$24 million.
They sold at a discount to Emagic ... Apple paid 30M for Emagic,
and Apple is not known for over-paying for smaller acquisitions
(NeXT is a different story, although one could argue that Steve
as iCEO was worth every penny ...).
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
Hi,
I have just bought myself the CardBus+Multiface RME system. I think I
have finally gotten the ALSA modules (snd-hammerfall-mem and snd-hdsp)
to load correctly along with the OSS emulation modules. BUT I can't seem
to play anything out of the line-out/headphone output. I have tried
various 'amixer cset' commands, but all I get is silence (trying to play
a simple wav file using aplay).
Any suggestions?
--
---------------------------------------
D. Sen, PhD
21 Woodmont Drive
Randolph
NJ 07869
Home Email: dsen(a)homemail.com Tel: 973 216 2326
Work Email: dsen(a)ieee.org Web: http://www.auditorymodels.org/~dsen