>Perhaps just adding a couple check buttons to the database:
>
>[X] Available for professional Linux Audio consulting
>[X] Available as a Linux audio community node
Easy enough to add.
>There could then be 2 entry points to the
>database, one catering to the company or individual looking for a Linux
>audio consultant and another for those wanting to connect with other
>audio developers. The latter would be more loosely based, since one would
>not necessarily need to be a professional just to meet others.
I'm still looking for the appropriate name for this db. I want it to
provide all the functionality that we need to allow users and developers
to do real networking, whether that is for paid work or just to arrange
meetings with people in an area you may be travelling to or through.
I think it makes sense that, as part of a community based on openness,
we provide a way for people to get a little background info on us. I
also hope that this database will add one more level of professionalism
to the Linux audio developers' public image.
>I like the idea of having an email proxy so that one could hide their
>email address (as an option) and a form could be provided that would
>email the user from the internally stored address. I think the idea of
>Spam might scare away some people from putting their info in the
>database (like me).
I can make it more spammer-unfriendly; I'll think about how. Currently
there is a form mailer that allows people to make contact directly from
the site. Since setting this up for my business site a couple of years
ago, it has proven to be a good way for people to contact me, so I have
provided this feature for the db as well.
That is possibly the most useful feature on the site. Anyone (not tested
with lynx) can contact you directly.
>Perhaps this might already be the case (haven't tried submitting
>anything yet to the database) but having the street address info be
>optional is good (just city/country is probably sufficient in many
>cases) as well as phone number and any other items related to privacy.
>Indicating optional and required fields would be nice on the entry form.
Currently the only required entry is an email address. Soon I will have
a proper login facility, so a password will also be required.
>Maybe a resume URL would also be good, so if one is browsing through
>lots of entries they can quickly jump to individual's resumes.
I figure this belongs in the specifics column. I will make it parse
these entries for URLs and automatically create links. On that note,
would anyone find it useful to have a photo column?
--
Patrick Shirkey - Boost Hardware Ltd.
For the discerning hardware connoisseur
http://www.boosthardware.com
http://www.djcj.org - The Linux Audio Users guide
========================================
Being on stage with the band in front of crowds shouting, "Get off! No!
We want normal music!", I think that was more like acting than anything
I've ever done.
Goldie, 8 Nov, 2002
The Scotsman
http://plugin.org.uk/releases/0.3.4/
Hods of changes, including:
* Bugfixes to GSM sim (Pascal Haakmat)
* Bugfixes to FM osc (Pascal Haakmat)
* Bugfixes to audio divider (Nathaniel Virgo)
* Added another compressor, SC4, stereo, no sidechain
* Added lookahead brickwall limiter
* Added L/C/R delay (requested by Marek Peteraj)
* Added Giant flanger (kind of requested by Patrick Shirkey)
* Added DJ Flanger (actually requested by Patrick Shirkey)
* Should now compile on FreeBSD
* Fixed syntax error in RDF metadata, works with Bob H's jack-rack now
I've updated the AUTHORS list, but I've undoubtedly forgotten people, so if
you're not in there and should be, give me a shout.
SC4 is more-or-less like SC3, but has no sidechain and a subtly different
algorithm. The sidechain was confusing hosts.
The limiter has up to two seconds of lookahead, so can be very gentle.
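For readers wondering how lookahead makes a brickwall limiter "gentle": gain reduction can be computed from samples that have not yet been output, so the limiter never has to react abruptly. Below is a minimal sketch of that principle only; it is not Steve's actual algorithm (a real limiter would also smooth the gain curve over the lookahead time instead of switching it instantaneously):

```python
def lookahead_limit(samples, threshold, lookahead):
    """Brickwall-limit `samples`: each sample is scaled by the gain
    needed for the loudest peak in its lookahead window, so no output
    sample can exceed `threshold`."""
    out = []
    for i in range(len(samples)):
        # Peak over this sample and the next `lookahead` samples.
        window = samples[i:i + lookahead + 1]
        peak = max(abs(s) for s in window)
        gain = 1.0 if peak <= threshold else threshold / peak
        out.append(samples[i] * gain)
    return out
```

Because the gain is computed before the peak arrives, loud transients are attenuated in advance rather than clipped after the fact; the price is `lookahead` samples of latency (up to two seconds, in the plugin described above).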
The L/C/R delay might be a bit familiar to Korg Trinity users ;) I haven't
used it much, but it seems pretty cool.
The giant flanger was a mistake, but I left it in anyway.
The DJ flanger has controls for LFO period (instead of frequency, as you
would have in a synth) and you can resync the LFO by clicking a toggled
control*.
- Steve
* This reminds me, there would be a use for a MOMENTARY hint (implying or
requiring TOGGLED) in LADSPA, meaning that a control port should only
be held high while the UI control is held down; otherwise you have to
double-click reset controls, which is confusing.
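To illustrate the double-click problem: LADSPA_HINT_TOGGLED is a real hint bit from ladspa.h, but the MOMENTARY bit and the helper below are hypothetical, just to show what host-side handling might look like:

```python
# LADSPA_HINT_TOGGLED is real (ladspa.h); MOMENTARY is hypothetical.
LADSPA_HINT_TOGGLED = 0x04
LADSPA_HINT_MOMENTARY = 0x1000  # invented bit: high only while held

def port_value(hints, pressed, previous):
    """Host-side sketch: compute a control port value from UI state.
    For TOGGLED, `pressed` means "a click happened this cycle", so a
    reset control needs a second click to return to zero.  For the
    hypothetical MOMENTARY, `pressed` means "button currently held",
    and the port simply tracks it."""
    if hints & LADSPA_HINT_MOMENTARY:
        return 1.0 if pressed else 0.0
    if hints & LADSPA_HINT_TOGGLED:
        return (1.0 - previous) if pressed else previous
    return previous
```

With MOMENTARY, releasing the mouse button drops the port back to 0.0 automatically, which is exactly the behaviour a "resync" or "reset" control wants.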
1. A short summary of changes
Support for JACK and LADSPA 1.1 added, more intelligent runtime
parameter selection, ECI licence changed from GPL to LGPL,
new NetECI client API, ecasound emacs mode added, largefile
support, new resample, reverse and typeselect audio objects,
new peak amplitude chain operator and new utilities ecalength,
ecamonitor and ecasignalview.
---
2. What is ecasound?
Ecasound is a software package designed for multitrack audio
processing. It can be used for simple tasks like audio playback,
recording and format conversions, as well as for multitrack effect
processing, mixing, recording and signal recycling. Ecasound supports
a wide range of audio inputs, outputs and effect algorithms.
Effects and audio objects can be combined in various ways, and their
parameters can be controlled by operator objects like oscillators
and MIDI-CCs. A versatile console mode user-interface is included
in the package.
Ecasound is licensed under the GPL. The Ecasound Control Interface
(ECI) is licenced under the LGPL.
---
3. Changes since last release
Although over a year has passed since the last major stable
release, ecasound development work has not stopped. To put things
into perspective, a diff between 2.0.0 and 2.2.0 takes about
1.7MB of space. Considering the whole 2.2 codebase is just
over 2MB, this is quite a lot! In the future there will hopefully
be much more frequent releases. Here's a list of most notable
changes:
* Intelligent parameter configuration. Instead of one set of
default parameters, ecasound lets the user specify different parameters
for three predefined profiles: real-time, real-time-low-latency and
non-real-time. When starting processing, ecasound will automatically
select and use the most suitable profile for the given configuration.
Ecasound will not only consider the types of objects, but also the
runtime environment: whether it is possible to lock memory, to use
RT-scheduling and so on.
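The decision described above might look roughly like the following sketch. The function and profile names are illustrative only, not ecasound's actual internals:

```python
def select_profile(has_realtime_objects, low_latency_requested,
                   can_lock_memory, can_use_rt_sched):
    """Illustrative profile selection: pick the most capable profile
    that both the object types and the runtime environment allow."""
    if not has_realtime_objects:
        # No real-time inputs/outputs: throughput matters, latency doesn't.
        return "non-real-time"
    if low_latency_requested and can_lock_memory and can_use_rt_sched:
        # Low latency is only safe if we can mlock() and use RT scheduling.
        return "real-time-low-latency"
    return "real-time"
```

The point of the environment checks is that requesting a low-latency profile without memory locking or RT scheduling would just produce dropouts, so the fallback is the plain real-time profile.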
* The Ecasound Control Interface is now licensed under LGPL.
In addition, the ECI implementations are now standalone, and do
not require linking against libecasound and libkvutils. The only
thing needed to run ECI apps is a working ecasound executable
installed.
* JACK support added. This is a major new addition as it involved
making relatively large changes to the ecasound engine.
* Up-to-date support for ALSA-0.9 and LADSPA-1.1.
* Effect preset improvements. Support for parametrized presets
has been improved. For instance it's now possible to write
a wrapper effect preset for a complex ecasound effect
or LADSPA plugin, and only publish a subset of original
effect's parameters.
* The disk i/o buffering subsystem that was introduced
in ecasound 2.0 has been integrated more closely into
the ecasound engine, leading to better performance and
reliability.
* NetECI API. Ecasound now has a daemon mode that allows
multiple clients, using the NetECI protocol, to connect to
a running ecasound session. A proof-of-concept client,
ecamonitor, is included in the package. It can be used
to monitor ecasound session status from a separate console.
This is especially useful in combination with ecasound's
console mode user interface. The console interface can
be used for control and the NetECI monitor client for
getting real-time status information. In addition,
NetECI can be used with all ECI apps.
* Ecasound.el, an emacs ecasound mode and a Lisp ECI
implementation.
* Largefile support for reading and writing audio files larger
than 2GB.
* New audio object types: JACK, resample, reverse, typeselect.
* New chain operators: peak amplitude monitor
* Utilities: ecalength and ecamonitor added, ecasignalview
totally rewritten.
* New ECI implementations: Lisp, Perl and PHP (the last two
are not included in the main ecasound package)
A full list of changes is available at
<http://www.wakkanet.fi/~kaiv/ecasound/history.html>.
---
4. Interface and configuration file changes
* Command line options: 2.2 is backward compatible with
2.0 releases, so old scripts and .ecs files should
continue to work. See ecasound(1) for more info.
* Ecasound Interactive Mode (EIAM): No changes to the commands
available in 2.0 releases. See ecasound-iam(1) for more
info.
* Library interfaces: Major changes in all library interfaces.
Direct use of these libraries is no longer encouraged.
The ECI and NetECI APIs are preferred for developing new
applications on top of ecasound.
* Ecasound Control Interface (ECI): No interface changes.
* The ~/.ecasoundrc config file is no longer used. The
new location is ~/.ecasound/ecasoundrc. As there's now
a separate global configuration file, it is no longer
necessary to duplicate all config variables in the
user config files. See ecasoundrc(5) for further info.
---
5. Links and files
Web sites:
http://www.eca.cx
http://www.eca.cx/ecasound
-
http://www.alsa-project.org
http://jackit.sourceforge.net
http://www.ladspa.org
Source and binary packages:
http://ecasound.seul.org/download
http://ecasound.seul.org/download/ecasound-2.2.0.tar.gz
Distributions with maintained ecasound support:
Agnula - http://www.agnula.org
Debian - http://packages.debian.org/stable/sound/ecasound.html
http://packages.debian.org/unstable/sound/ecasound2.2.html
DeMuDi - http://www.demudi.org
FreeBSD - http://www.freebsd.org/ports/audio.html
Gentoo Linux - http://www.gentoo.org
PLD Linux - http://www.pld.org.pl
PlanetCCRMA - http://www-ccrma.stanford.edu/planetccrma
SuSE Linux - http://www.suse.de/en
Note! Distributors do not necessarily provide packages for
the latest ecasound version.
--
http://www.eca.cx
Audio software for Linux!
>Anyone remember the project mentioned on LAD before about an online
>Linux Audio tech database? Cheers.
That was me.
http://www.djcj.org
I will be working on improving the functionality soon. I have been
overloaded recently actually playing music. It makes a nice change from
wanting to but not having the right resources. All open source, of course.
Anyway, I would like to have more feedback on the way the database is
set up, displayed, and so on.
Feel free to add your name. I originally intended it to be for paid
support work but if it evolves into a more general database then I'm
happy to do my part to get it there.
I will focus on making it user-configurable over the next few weeks. I
have been contemplating it for the past few days. Sometimes it's nice not
to get sucked into the net. Almost like taking a vacation :)
--
Patrick Shirkey - Boost Hardware Ltd.
For the discerning hardware connoisseur
http://www.boosthardware.com
http://www.djcj.org - The Linux Audio Users guide
> Why not? Shouldn't these sorts of discussions be wiiiiide open, Paul?
> Or is it just a matter of it being too soon, and thus there being
> nothing to talk about?
The discussions will of course be wide open, most likely held on an email
listserv, once one gets set up. To encourage more participation, it is also
likely that the discussion will be overseen by the IASIG rather than the
MMA.
Right now the whole proposal is in the "idea" stage. There is no definite
spec. I think the ultimate aim would be to cherry pick the best of what's
out there, and try to build a standard that is both "best of breed" and
"lean and mean" -- if such a thing is possible. :-)
-----------
Ron Kuper
VP of Engineering
Cakewalk
http://www.cakewalk.com
If anyone can get to this, it would be a great idea. I might even
consider using a frequent flyer ticket for this. not sure yet.
--p
----------------------------------------------------------------------
To: various folks
cc: mma(a)midi.org
From: RonKuper(a)Cakewalk.com
Subject: FW: [mma-members] Announcement: Item #183 Unified Audio Plug-In Architecture
Hi folks,
The following announcement may be of interest to you. The initial
discussion on this proposal will be held at NAMM in about 2 weeks. Note
that this meeting is not restricted to MMA members, so I would urge anyone
who is interested and would like to attend, to please do so. Also, as far
as I know you do not need a NAMM convention badge to attend this meeting.
The meeting will be held on Sunday, January 19 from 4:30 PM - 5:45 PM at the
Anaheim Marriott. I'm not sure what the actual conference room will be; so
far it's only been designated as "MMA Meeting Room A".
I hope to see you there!
----------
Ron Kuper
Cakewalk
********************************************
MIDI MANUFACTURERS ASSOCIATION MEMBERS FORUM
********************************************
A new MMA working group has been formed for discussion of a Unified
Audio Plug-In Architecture. The proposal from Ron Kuper of Cakewalk
is attached below.
The Working Group Chair is Ron Kuper (Cakewalk). The TSB
Representative is David Miller (Microsoft).
For the moment, discussion will take place on the mma-members mailing
list. If the email traffic gets heavy, a new separate working group
mailing list will be created. A discussion session will take place
following the AGM on Sunday January 19th.
Please respond to this message if you would like to join the working group.
=====================================================
Item # 183 - Unified Audio Plug-In Architecture
Submitted by: Ron Kuper
Company: Cakewalk
The professional audio market offers a variety of audio plug-in
formats, some hardware based, some software based, all entirely
incompatible. These plug-in formats include Audio Units (Apple),
DirectX (Microsoft), DXi (Cakewalk), JACK (Linux), LADSPA (Linux),
MAS (MOTU), MFX (Cakewalk), OPT (Yamaha), ReWire (Propellerheads),
RTAS (Digidesign), TDM (Digidesign), VST (Steinberg), VSTi
(Steinberg).
While these are touted as standards, they are in fact proprietary,
and the companies responsible for their development assume a heavy
documentation and support burden. Furthermore, unlike true standards
such as MIDI, they do not actually enable interoperability between
vendors. Instead, they fragment the music software industry into
"tribes" of vendor allegiance.
The large number of competing formats means that audio plug-in
developers must either incur the high cost of developing for
multiple formats, or else take the business risk of focusing
on a single format. Host application vendors face the same
dilemma when choosing which formats to support in their applications.
We propose to develop a single audio plug-in framework, a
cross-platform standard for audio plug-ins and software synthesizers.
The key design objectives for this standard are:
- Transport neutral: can stream in memory, PCI, Ethernet, WiFi, etc.
- Low-overhead
- Adaptive to hardware
- Platform and programming-language neutral
- Compatible with existing standards, e.g., MIDI (for
parameterization and data); XMF (for serialization);
AAF (for project interchange)
- Easy to "wrap" in existing formats such as DirectX or VST
What is encouraging is that all of these standards differ only in terms of
programming interface. They all provide equivalent levels of functionality.
The goal then is to define a standard core level of functionality that will
support all of the common features among existing plugin standards, yet be
easily encapsulated by the vendor-specific interfaces. In other words, the
goal would not be to replace TDM, DirectX, VST, etc. with a common interface,
but rather to define a low-level interface to a reusable DSP core that could
then be packaged as TDM, DirectX, VST, etc.
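As a toy illustration of the "reusable core, thin vendor wrappers" idea: one processing core, adapted to different calling conventions by adapters. All class and method names here are invented (the wrapper method is only loosely modelled on VST's processReplacing), so this is a sketch of the architecture, not any proposed API:

```python
class DSPCore:
    """The shared low-level core: one format-neutral process() entry
    point. Here it is just a gain stage, for brevity."""
    def __init__(self, gain):
        self.gain = gain

    def process(self, frames):
        return [s * self.gain for s in frames]


class VSTStyleWrapper:
    """Hypothetical adapter exposing the core through a VST-like
    in/out-buffer call. A TDM- or DirectX-style wrapper would adapt
    the very same core to its own calling convention; the DSP code
    is written once."""
    def __init__(self, core):
        self.core = core

    def processReplacing(self, inputs, outputs):
        outputs[:] = self.core.process(inputs)
```

The design point is that each wrapper only translates buffer layout and lifecycle calls; the signal processing itself never has to be ported between formats.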
-----------------------------------------------
The contents of this message are Copyright 2002
MIDI Manufacturers Association Incorporated and
not to be reproduced or distributed in any form
without express written permission.
I made a small mistake in forwarding the message about the UAPA
meeting. It was not intended for public dissemination, even though the
meeting is open to the public. My bad, as they say.
--p
> >I browsed the Kernel Source and there is only one mark_inode_dirty in
> >pipe_write (in fs/pipe.c). So we know where it is hanging...
> >
> >And in __mark_inode_dirty (in fs/inode.c) there is one
> > spin_lock(&inode_lock)
> >call, and I guess that is where the whole thing is hanging. So something
> >is holding that lock... how do I find out who is doing that? Apparently
> >the handling of inode_lock is confined to inode.c. I'll keep reading.
[Andrew Morton had suggested that the stack traces did not show problems
with stuck locks in the kernel...]
> >Maybe the pipe in question is one of the pipes that jack uses for ipc?
>
> seems *damn* likely ... sorry to just be chiming in with a useless comment!
One more (small) datapoint. Roger Larsson sent me, off-list, a couple
of small utilities (VERY nice tools!) that monitor the cpu usage of
SCHED_FIFO processes and after a timeout actually downgrade the
persistent hogs to SCHED_OTHER.
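The demotion logic of such a watchdog can be sketched as follows. This is a simplification with invented names; the actual tools presumably sample CPU usage from /proc and demote processes via sched_setscheduler(2):

```python
SCHED_FIFO, SCHED_OTHER = "SCHED_FIFO", "SCHED_OTHER"

def watchdog_tick(policies, cpu_share, limit=0.9):
    """One watchdog pass: any SCHED_FIFO process that has used more
    than `limit` of a CPU since the last tick is demoted to
    SCHED_OTHER, letting a machine locked up by a runaway RT process
    recover.  `policies` maps pid -> scheduling policy, `cpu_share`
    maps pid -> fraction of CPU used since the last tick."""
    demoted = []
    for pid, policy in policies.items():
        if policy == SCHED_FIFO and cpu_share.get(pid, 0.0) > limit:
            policies[pid] = SCHED_OTHER  # real tool: sched_setscheduler()
            demoted.append(pid)
    return demoted
```

This also explains the observed behaviour above: the machine appears dead while the SCHED_FIFO hog spins, then "comes back to life" the moment the watchdog's timeout expires and the hog is demoted.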
So I ran that in a terminal and, after playing around with a bunch of
jack apps, got the machine to lock up... and then, after a little bit,
it suddenly came back to life! (You could see that the monitor had
changed the priority of the hogs to SCHED_OTHER.)
So I guess that somehow jack has a hard-to-trigger race condition that
locks up the machine when running SCHED_FIFO.
Now I have to figure out how to trace the thing so as to determine where
the whole thing is locking. Help from the jack gurus appreciated.
-- Fernando
>>It's obvious when you consider that "VVID has no voice" can happen
>>*before* the synth decides to start the voice; not just after a voice has
>>detached from the VVID as a result of voice stealing. At that point, only
>>the value of the control that triggered "voice on" will be present; all
>>other controls have been lost. Unless the host/sender is somehow forced to
>>resend the values, the synth will have to use default values or something.
>OK... I was thinking that the initial mention of the VVID would cause its
>creation (be that implicit or explicit, though I think I prefer explicit);
>thereafter, control changes would be applied to the instantiated voice (or
>the NULL voice if you've run out / declined it).
The initial mention of the VVID is the issue here; certain types of voice
events are assumed not to allocate a voice (parameter-set events). This is
because there is no difference between a tweak on a VVID that has had its
voice stolen and a tweak intended to initialize a voice that arrives before
voice-on. We must conclude that the plugin will discard both of them. There
must be a signal to the plugin that a VVID is targeted for activation. We
have a few options:
---a voice-activation event is sent, then any initializing events, then a
voice-on event
---a voice-on event is sent, with any following events on the same timestamp
assumed to be initializers
---a voice-activation event is sent and there is no notion of voice-on, one
or more of the parameters must be changed to produce sound but it is a
mystery to the sequencer which those are. (I don't like this because it makes
sequences not portable between different instruments.)
---events sent to voiceless VVID's are attached to a temporary voice by the
plugin and which may later use that to initialize an actual voice. This
negates the assumption that voiceless VVID events are discarded.
#2 is just an abbreviated form of #1, as I argue below (unless you allow
the activate-to-voice-on cycle to span multiple timestamps, which seems
undesirable).
> > > When are you supposed to do that sort of stuff? VOICE_ON is
> > > what triggers it in a normal synth, but with this scheme, you
> > > have to wait for some vaguely defined "all parameters
> > > available" point.
We can precisely define initialization parameters to be all the events
sharing the same VVID and timestamp as the VOICE_ON event. This means that
the "all parameters available" point is at the same timestamp as the
VOICE_ON event, but after the last event with that timestamp.
If we want to include a VOICE_ALLOCATE event, then the sequence goes:
timestamp-X: voice-allocate,
timestamp-X: voice-parameter-set (considered an initializer if appropriate),
timestamp-X: voice-on,
timestamp-X+1: more voice-parameter-sets (same as any other parameter-set).
But this sequence can be shortened by assuming that the voice-on event at
the last position for timestamp-X is implicit:
timestamp-X: voice-on (signifying the same thing as voice-allocate above),
timestamp-X: voice-parameter-set (considered an initializer if appropriate),
(synth actually activates voice here), timestamp-X+1: other-events.
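The shortened sequence could be handled by the plugin along these lines. This is a sketch with invented event and function names, not XAP's actual API; it only demonstrates the rule "events sharing the VOICE_ON's timestamp and VVID are initializers, and the voice starts when the timestamp advances":

```python
VOICE_ON, PARAM_SET = "VOICE_ON", "PARAM_SET"

def activate_voices(events):
    """Process (timestamp, kind, vvid, param, value) tuples.  A
    VOICE_ON marks its VVID as pending; PARAM_SETs at the same
    timestamp initialize it; the voice actually starts once a later
    timestamp is seen.  PARAM_SETs on voiceless VVIDs are discarded,
    as argued above."""
    pending = {}  # vvid -> (voice-on timestamp, init params)
    voices = {}   # vvid -> params of started voices
    for ts, kind, vvid, param, value in events:
        # A new timestamp finalizes any pending activations.
        for v, (t, params) in list(pending.items()):
            if t < ts:
                voices[v] = params
                del pending[v]
        if kind == VOICE_ON:
            pending[vvid] = (ts, {})
        elif kind == PARAM_SET:
            if vvid in pending and pending[vvid][0] == ts:
                pending[vvid][1][param] = value   # initializer
            elif vvid in voices:
                voices[vvid][param] = value       # normal tweak
            # else: tweak on a voiceless VVID -> discarded
    for v, (t, params) in pending.items():        # end of block
        voices[v] = params
    return voices
```

Note that the "(synth actually activates voice here)" step falls out naturally: it is the moment the loop first sees an event with a later timestamp.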
---Jacob Robbins.................
>I made a post a while back defining all the XAP terminology to date. Read
>it if you haven't - it is useful :)
I was hoping something of this sort existed. It would be very helpful if you
could put the list of XAP terminology on the webpage. It would help keep
everybody on the same page when discussing. ;) And it would help people
join the discussion without spending the 10-15 hours it takes to read
December's posts.
>VVID allocation and voice allocation are still two different issues. VVID
>is about allocating *references*, while voice allocation is about actual
>voices and/or temporary voice control storage.
I agree entirely. If each VVID = a voice, then we should just call them
Voice IDs and let the event-sender make decisions about voice reappropriation.
---jacob robbins.......................