Hi everyone,
The months fly off the calendar, and the time for the
next IETF meeting arrives (in San Francisco later this month).
Pick up the latest version of the MWPP I-D's at:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-mwpp.txt
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-guide.txt
Back at the end of the last meeting, I had predicted these
documents would be "candidate Last Call" (the end of the writing
process and the beginning of the vetting process ...), but alas,
it was not meant to be. But I really think we're only a month
or so away ... the things left to do are doable and not overwhelming.
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------
Think this needs to go to LAD too...
--- Steve Harris <S.W.Harris(a)ecs.soton.ac.uk> wrote:
> I think there was one more too, but I can't think what it was...
I remember one about sidechains or something. I don't know what a sidechain is,
though; I just recall seeing it somewhere :)
> I agree AUDIO_RATE_CONTROL should be renamed.
>
> There was a suggestion on ardour-dev that a hint to say whether control outs
> were supposed to be informative or a source of control data might help,
> but I'm not sure about it.
Not sure what 'informative' means here... what information do we get if we
ignore the control data on the output?
> Does someone want to reword these in a more meaningful way? If not I'll do
> it, then you'll be sorry ;).
I'll have a go :)
(The LADSPA_IS_* things will need to be added too)
/* Hint MOMENTARY indicates that a control should behave like a
momentary switch, such as a reset or sync control. LADSPA_HINT_MOMENTARY
may only be used in combination with LADSPA_HINT_TOGGLED. */
#define LADSPA_HINT_MOMENTARY 0x40
/* Hint RANDOMISABLE indicates that it's meaningful to randomise the port
if the user hits a button. This is useful for the steps of control
sequencers, reverbs, and just about anything that's complex. A control
with this hint should not result in anything too surprising happening to
the user (eg. sudden +100dB gain would be unpleasant). */
#define LADSPA_HINT_RANDOMISABLE 0x80
/* Plugin Ports:
Plugins have `ports' that are inputs or outputs for audio or
data. Ports can communicate arrays of LADSPA_Data (for audio
or continuous control inputs/outputs) or single LADSPA_Data values
(for control inputs/outputs). This information is encapsulated in the
LADSPA_PortDescriptor type, which is assembled by ORing individual
properties together.
Note that a port must be an input or an output port but not both,
and that a port must be exactly one of control, audio or continuous
control. */
[...]
/* Property LADSPA_PORT_CONTINUOUS_CONTROL indicates that the port is
a control port with data supplied at audio rate. */
#define LADSPA_PORT_CONTINUOUS_CONTROL 0x10
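To make the intent concrete, here is a rough, untested sketch of how a host
might check these bits when it scans a plugin. The LADSPA_IS_HINT_MOMENTARY,
LADSPA_IS_HINT_RANDOMISABLE and LADSPA_IS_PORT_CONTINUOUS_CONTROL macros are
the proposed additions mentioned above; everything else (LADSPA_Descriptor,
the PortDescriptors/PortRangeHints arrays, and the existing LADSPA_IS_PORT_*
and LADSPA_IS_HINT_TOGGLED macros) is already in ladspa.h:

#include <stdio.h>
#include <ladspa.h>

/* proposed convenience macros, matching the proposed #defines above */
#define LADSPA_IS_HINT_MOMENTARY(x)    ((x) & LADSPA_HINT_MOMENTARY)
#define LADSPA_IS_HINT_RANDOMISABLE(x) ((x) & LADSPA_HINT_RANDOMISABLE)
#define LADSPA_IS_PORT_CONTINUOUS_CONTROL(x) \
        ((x) & LADSPA_PORT_CONTINUOUS_CONTROL)

static void list_ports(const LADSPA_Descriptor *d)
{
    unsigned long i;
    for (i = 0; i < d->PortCount; i++) {
        LADSPA_PortDescriptor pd = d->PortDescriptors[i];
        LADSPA_PortRangeHintDescriptor hd = d->PortRangeHints[i].HintDescriptor;

        printf("%lu: %s (%s)\n", i, d->PortNames[i],
               LADSPA_IS_PORT_INPUT(pd) ? "input" : "output");

        if (LADSPA_IS_PORT_CONTINUOUS_CONTROL(pd))
            printf("    control data at audio rate\n");
        if (LADSPA_IS_PORT_CONTROL(pd) && LADSPA_IS_HINT_TOGGLED(hd)
            && LADSPA_IS_HINT_MOMENTARY(hd))
            printf("    momentary switch - show as a push button\n");
        if (LADSPA_IS_PORT_CONTROL(pd) && LADSPA_IS_HINT_RANDOMISABLE(hd))
            printf("    safe to include in a \"randomise\" button\n");
    }
}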
-
Mike
>
> - Steve
>
> > On Wed, 15 Jan 2003 18:13:35 +0000
> > Steve Harris <S.W.Harris(a)ecs.soton.ac.uk> wrote:
> >
> > > There have been a few suggestions recently, I'll try to summarise them
> > > for comment.
> > >
> > > MOMENTARY. A hint to suggest that a control should behave like a
> > > momentary switch, eg. on for as long as the user holds down the
> > > key/mouse button/whatever. Useful for reset or sync controls for
> > > example. Would be useful in the DJ flanger. Only applies to TOGGLED
> > > controls.
> > >
> > > AUDIO_RATE_CONTROL. Hints that an audio control should/could be
> > > controlled by a high time res. slider or control data, but shouldn't
> > > be connected to the next audio signal by default. I can't think of any
> > > simple examples off hand, but combined with MOMENTARY it could be used
> > > for sample accurate tempo tapping.
> > >
> > > RANDOMISABLE. Hints that it's useful/meaningful to randomise the port
> > > if the user hits a button. This is useful for the steps of control
> > > sequencers, reverbs, and just about anything that's complex. Allows
> > > you to specify which controls can be randomised without anything too
> > > surprising happening to the user (eg. sudden +100dB gain would be
> > > unpleasant).
Bob mentioned issues with the serial port. I'd like to amplify
that: if you are using a serial modem (COM1, COM2, etc.) and you don't
specify "EXPERIMENTAL" source support, you won't even have a chance to
build the serial driver. Once you do build it, it won't work for PPP
(in my experience).
I have been working on an audio processing/synthesis application that
should have the power of Reason and Buzz plus the ability to be used
effectively in live performance. Some of you may have read my earlier
emails on this subject. I call it Voltage.
The main features that I want but that are not available in other software
are:
- Linux and Open-Source
- the ability to route and process sequence data (MIDI-like data). This
would allow very powerful chordization and arpeggiation (see the sketch below).
- the ability to create sequence loops in a flexible way (like Reason
Matrix except without the limitations). I'd also like to be able to
record/modify them live without stopping the playback.
- the ability to control things in complex ways using scripting of some
sort (maybe as a machine that is scripted to create certain sequence
events)
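To give a rough idea of what I mean by processing sequence data, here is a
tiny sketch. The event structure and the arpeggiator function are completely
made up for illustration; nothing here is part of an actual design:

#include <stddef.h>

typedef struct {
    double time;      /* beats from the start of the loop */
    int    note;      /* MIDI-style note number           */
    int    velocity;
} SeqEvent;

/* A toy "arpeggiator" node: takes the notes of a chord that all start at
   the same time and spreads them out, one every 'step' beats, writing at
   most max_out events. A "chordizer" would do roughly the reverse. */
size_t arpeggiate(const SeqEvent *chord, size_t n_notes,
                  double start, double step,
                  SeqEvent *out, size_t max_out)
{
    size_t i, n = 0;
    for (i = 0; i < n_notes && n < max_out; i++) {
        out[n] = chord[i];
        out[n].time = start + step * (double)i;
        n++;
    }
    return n;
}

The idea is that nodes like this could be patched between a sequence source
and a synth, the same way audio plugins are patched between audio sources.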
What I am looking for is someone to help me (an inexperienced software
designer) design this program. That person doesn't necessarily need to
help code the program. I've decided I need help after multiple failed
attempts. I think my current ideas are closer to a working design than
past ones, but I don't want to code it and then find out it doesn't work
(like I did last time round).
If you are willing to help, or have any tips or books I should read, etc.,
please email me.
Thanks much.
-Arthur
--
Arthur Peters <amp(a)singingwizard.org>
Hi!
On Tue, Feb 25, 2003 at 07:48:11PM +0200, Kai Vehmanen wrote:
> Date: Tue, 25 Feb 2003 12:20:22 -0500
> From: Paul Davis <paul(a)linuxaudiosystems.com>
> Reply-To: linux-audio-dev(a)music.columbia.edu
> To: linux-audio-dev(a)music.columbia.edu
> Subject: Re: [linux-audio-dev] Fwd: CSL Motivation
>
> >There are discussions on kde-multimedia about
> >the future of Linux/Unix multimedia (especially sound).
> >This is one of the most interesting messages.
>
> CSL is proposed primarily as a wrapper layer around existing APIs. as
> such, it seems to me to have no particular merits over PortAudio,
> which has the distinct advantages of (1) existing
CSL also "exists".
> (2) working on many platforms already and
You're right about that one: CSL is not as complete as PortAudio with respect
to portability.
> (3) using well-developed abstractions.
I do not believe that something is ever a "well-developed abstraction" by
itself. Something is always a "well-developed abstraction" from something,
designed to achieve a certain purpose. From the PortAudio homepage:
| PortAudio is intended to promote the exchange of audio synthesis software
| between developers on different platforms, and was recently selected as the
| audio component of a larger PortMusic project that includes MIDI and sound
| file support.
This clearly states the purpose: if you want to write audio synthesis software,
then you should use PortAudio. Then, I assume, the abstraction is well-
developed. However, it does not state:
"... is intended to play sound samples with sound servers easily. Or: ... is
intended to port existing applications easily. Or: ... is intended to let
the application choose its programming model freely."
No. PortAudio makes a lot of choices for the software developer, and thus
provides an easy abstraction. This will mean, however, that actually porting
software to PortAudio will probably be hard (compared to CSL), whereas
writing new software for PortAudio might be convenient, _if_ the software
falls in the scope of what the abstraction was made for.
> CSL was
> written as if PortAudio doesn't exist. I don't know if this is a NIH
> attitude, or something else, but I see little reason not to consider
> PortAudio as *the* CSL, and by corollary, little reason to develop Yet
> Another Wrapper API.
Well, I gave you some. The paper gives some more. Basically, CSL is intended
for porting _most_ free software, whereas PortAudio is intended for portable
synthesis software.
I think PortAudio would benefit from _supporting_ CSL, rather than aRts for
instance, because CSL is more generic; once new sound servers (like MAS)
come up, you need not patch PortAudio every time, but just one place: a
CSL driver. The same goes for other meta-frameworks like SDL.
> the only reason i was happy writing JACK was
> precisely because its not another wrapper API - it specifically
> removes 90% of the API present in ALSA, OSS and other similar HAL-type
> APIs.
I am glad you wrote JACK, even though back then I thought it was just another
attempt to redo aRts (and we had some heated discussions about it), because some
people seem to like it. If some people like CSL, why not?
If you added CSL support to JACK right now, you would never need to bother
with any of the "sound server guys" like me again, because you could always
say: "support CSL in your sound server thing, and then JACK will support your
sound server".
On the other hand, if you added JACK support to CSL, you could also mix the
output of all of these "sound servers" into JACK, without endangering your
latency properties.
Cu... Stefan
--
-* Stefan Westerfeld, stefan(a)space.twc.de (PGP!), Hamburg/Germany
KDE Developer, project infos at http://space.twc.de/~stefan/kde *-
Hello,
with the LAD meeting getting closer, I'm getting a bit curious about
what the plans are for the open "Linux Sound Night" on March 15. Will we
hear some of you guys perform, with Paul recording it?
ciao
--
Frank Barknecht _ ______footils.org__
Greetings:
I need to investigate the 2.5.x kernel series. Is there a recommended
version for audio work? Low-latency patches? Any special comments
about the 2.5 series?
TIA!
== Dave Phillips
On Thursday 27 February 2003 20:25, Tim Jansen wrote:
> Did the other platforms use a callback-driven approach because it is a
> superior API, or because it is the only way to have sound on
> cooperative-multitasking OSes like pre-X MacOS and early Windows versions?
>
> Callback-driven APIs are much harder to use, especially with many existing
> codecs and frameworks that have been written with a push API in mind.
>
I'll try to list some pros and cons of callback and push APIs (a rough code
sketch of the two styles follows after the list):
push API:
Pro:
* simple to play a single file, basically a copy from source file to
destination.
Con:
* hard to write plugins that way. Take a look at aRts plugins. They all have
a 'calculateBlock' = callback!
Why?
In a pure push model each processing step reads data from
one file/pipe/device, processes it, and pushes it to a file/pipe/device.
You get:
* lots of threads/processes that are not optimally synchronized.
Any thread is runnable when input is available, until its
output is full.
But that is not the important case; consider the case when the last
processing step's output is almost empty (if it gets empty you will hear
a click). How do you prioritize it higher than all other threads? Should it
always be higher? Suppose it is a mixer that has several inputs...
Could it be done by a super server that sets priorities depending
on position in the chain? This is not easy...
* If plugins with a callback model are used to build the application, does
it not make sense to build the whole application (the audio part) the same way?
There are some neat tricks that can be used, since the wrapper library
can know where the destination is.
* if the destination is in the same process, your output will end up somewhere
in your process memory.
* On the other hand, suppose the destination is another application:
it can allocate shared memory and let the output of your plugin end up
there.
* If the output is destined for an audio board, the library could give you a
memory-mapped hardware buffer instead of ordinary memory to avoid the final
copy (you will get different buffers in each process...).
* if your output type does not match the input type of the destination,
the library could automatically insert a converter/resampler either on
your side or on the destination side (pick the one that gives less
communication).
* Can the destination change during the run?
1. Your application starts alone, output format matches one supported
by hardware. => hardware buffers
2. Another application starts (suppose the device can have several
destinations open at once - like the SB Live!) => no change for your plugin
(but if the format of this plugin is not supported by hardware
=> in-process buffer + automatically inserted conversion plugin
+ hardware buffer)
3. Even more applications start... It is no longer possible to output directly
to hardware for all of them... suppose the library checks for matching data
types - and the first application matches perfectly!
=> the new application will get shared memory,
your application will be switched to ordinary memory, and these buffers will be
mixed by an automatically inserted plugin that outputs
to the hardware buffer...
4. The new application ends. => hardware buffers again
=> your application/plugin does not need to care. [No framework that I know
of implements this today - especially not the automatic parts]
With the push model your application needs to know about possible
destinations, or treat everything as a file or shared memory.
But how to handle the dynamic changes in the push model?
Pipes, shared memory?
It could also use a library that directs the produced buffer in the right
direction (CSL?) - but it will be hard to eliminate extra copying.
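To make the contrast above concrete, here is the rough sketch of the two
styles I promised. write_audio(), register_callback(), run_audio() and
AudioCallback are invented names for illustration only; they do not belong
to any real API:

#include <string.h>

#define FRAMES 256

/* invented API - declarations only, for illustration */
void write_audio(const float *buf, unsigned long frames);
typedef void (*AudioCallback)(float *out, unsigned long frames, void *data);
void register_callback(AudioCallback cb, void *data);
void run_audio(void);

/* Push style: the application owns the loop and pushes blocks out.
   write_audio() blocks when the destination's buffer is full, and the
   data has to be copied somewhere. */
void play_push(void)
{
    float block[FRAMES];
    for (;;) {
        memset(block, 0, sizeof block);   /* produce samples (silence here) */
        write_audio(block, FRAMES);
    }
}

/* Callback style: the library/server owns the loop and calls us when it
   needs the next block - the same shape as a plugin's calculateBlock().
   'out' can already be shared or hardware memory, so no extra copy. */
void process(float *out, unsigned long frames, void *data)
{
    (void)data;                           /* unused in this sketch */
    memset(out, 0, frames * sizeof *out); /* produce samples (silence here) */
}

void play_callback(void)
{
    register_callback(process, 0);
    run_audio();                          /* library drives the timing */
}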
Note: aRts and the KDE multimedia framework do a lot of things right today.
They even go one step further, since they move the plugins, including input and
output, into one process - artsd. But currently this does not work perfectly
together with applications using other frameworks.
/RogerL
--
Roger Larsson
Skellefteå
Sweden
Hello,
As announced earlier on this list, Frank Neumann and I are organizing a
Conference of Linux Audio Developers at ZKM Karlsruhe. More information is
available from http://www.linuxdj.com/audio/lad/eventszkm2003.php3
The list of speakers and talks is now complete and the webpage of the event
has been moved to ZKM: http://on1.zkm.de/zkm/stories/storyReader$3027
Information on accommodation has been added as well.
In addition to the speakers, the following LADers have registered so far:
Rene Bastian
Joern Nettingsmeier
Jean-Daniel Pauget
Kai Vehmanen
Several other LADers have shown interest but not yet registered. If you want to
register for the conference, please provide the following information:
1) Hardware you will be bringing (if any)
2) How long will you stay?
3) Email address to which we can send last-minute information
Remarks: 1) It is not necessary to bring any hardware, but if you do so,
it would be important for us to know because we need to
plan the rooms, network cabling, power supply etc.
2) In addition to the talks, there is room for LAD internal discussion
especially on Saturday morning and Sunday. We assume that on
Sunday this will last until about 18.00. Some LADers will be
around already on Friday morning (some even on Thursday
afternoon), however we might still be busy with preparations for
the talks.
A live audio stream of the talks will be available for those who can not
attend the event.
Matthias
--
Dr. Matthias Nagorni
SuSE GmbH
Deutschherrnstr. 15-19 phone: +49 911 74053375
D - 90429 Nuernberg fax : +49 911 74053483
ReZound aims to be a stable, open source, and graphical audio file
editor primarily for but not limited to the Linux operating system.
http://sourceforge.net/project/showfiles.php?group_id=5056
--
Notes:
This release adds several new features and fixes a few bugs.
Changes:
- Added preliminary JACK support (the speed of and load on your computer
will determine how well this works)
- Added a "Repeat Count" or "Repeat Time" when pasting from any clipboard
- Added a "Balance" action which can change the balance or pan audio
between two channels
- Added some symmetry-changing buttons and a smoothing feature to the
graph parameter widget
- Added a "Recent Actions" under the "Edit" menu
- When an audio I/O system is selected at configure time, ReZound will
now fall back to OSS if the other is unavailable
- Buffer size and buffer count settings can now be made without rebuilding
- Made other minor cosmetic changes, backend changes, and bug fixes