"David Cournapeau":
>>>
>>> On Wed, Jun 14, 2006 at 07:47:36AM +0200, Alex Polite wrote:
>>>> Hi there.
>>>>
>>>> Is it possible to write LADSPA plugins in anything but C/C++? I prefer
>>>> perl, ruby or python.
>>>>
>>>> alex
>>>
>>> Anything but C/C++, yes. See FAUST [1], a compiled language designed
>>> specifically for processing audio streams. Perl, Ruby, or Python, not
>>> really.
>>>
>>> [1] <http://faudiostream.sourceforge.net/>
>>
>>
>> The realtime extension for snd (scheme-like language) is another:
>> http://www.notam02.no/arkiv/doc/snd-rt/
>>
>> Here is a cool alsa softsynth written in that system:
>> http://ccrma.stanford.edu/~kjetil/220c/
> There is also ChucK, which nobody has mentioned, I think:
>
> http://soundlab.cs.princeton.edu/research/chuck/
>
Not really. ChucK code runs in a VM and does not compile to native machine
code. It also processes blocks of samples, while FAUST and snd process one
sample at a time. In this respect, ChucK is in the same class of programs
as SuperCollider, Nyquist, Csound, Pd, and many many others.
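To make that distinction concrete, here is a minimal C sketch of the two
callback styles (the names are made up for illustration, not taken from
any of these systems): a per-sample tick() of the kind FAUST compiles
into a tight native loop, and a block-oriented process() of the kind the
VM-based systems use internally:

#include <stddef.h>

/* Per-sample style: one call produces one output sample; a compiler
   like FAUST can inline this into straight-line native code. */
static inline float tick(float in, float *state)
{
    *state = 0.99f * (*state) + 0.01f * in;  /* trivial one-pole filter */
    return *state;
}

/* Block style: the host hands over a whole buffer, amortizing the
   dispatch cost (VM, IPC, function call) across nframes samples. */
static void process(const float *in, float *out, size_t nframes,
                    float *state)
{
    for (size_t i = 0; i < nframes; i++)
        out[i] = tick(in[i], state);
}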
Mx44 got itself an update and will now understand some of the most
important standard GS MIDI controllers. Also included is a fix for
compiling with gcc 4. The homepage has moved to:
http://members.chello.se/luna/
Mx44 is a multichannel polyphonic synthesizer, loosely based on
FM-synthesis, with a Klingon approach to oscillators ...
Implemented GS controllers
--------------------------
RT  ##  Controller
73 Attack (modifies the time value of all env stage 1 and 2)
75 Decay (modifies the time value of all env stage 3 and 4)
79 Loop (modifies the time value of all env stage 5 and 6)
72 Release (you get the drill ...)
05 Portamento (routed to intonation, being the closest match)
94 Celeste (modifies amount of frequency offset)
* 07 Channel Volume (yep!)
10 Pan (rotates the sound-image)
* 01 Modulation (modulation send amount from op with "Wheel" btn ON)
* 70 Timbre (modulation send amount from op with "Wheel" btn OFF)
* 71 Variation (balance between modulation from op 1+3/op 2+4)
* 74 Cutoff Freq (resonance ctrl for oscillators connected to envelope)
Controllers marked with an asterisk operate in true RT mode (i.e., on a
sustained note). The rest are set up at note-on.
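(For readers wiring up something similar themselves, the mapping might be
dispatched roughly like the C sketch below; the channel_state struct and
its field names are hypothetical, not Mx44's actual code:)

/* Hypothetical GS controller dispatch, mirroring the table above. */
typedef struct {
    int env_time[9];              /* envelope stage times, stages 1..8 */
    int intonation, celeste, volume, pan;
} channel_state;

static void handle_cc(channel_state *ch, int cc, int value)
{
    switch (cc) {
    case 73: ch->env_time[1] = ch->env_time[2] = value; break; /* Attack  */
    case 75: ch->env_time[3] = ch->env_time[4] = value; break; /* Decay   */
    case 79: ch->env_time[5] = ch->env_time[6] = value; break; /* Loop    */
    case 72: ch->env_time[7] = ch->env_time[8] = value; break; /* Release */
    case  5: ch->intonation = value; break; /* Portamento -> intonation  */
    case 94: ch->celeste    = value; break; /* Celeste: freq offset      */
    case  7: ch->volume     = value; break; /* Channel Volume (true RT)  */
    case 10: ch->pan        = value; break; /* Pan                       */
    default: break;   /* modulation/timbre/variation/cutoff omitted here */
    }
}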
--
mvh // Jens M Andreasen
[feel free to redistribute this posting to other mailing lists]
On Jun 15, 2006, at 6:56 AM, Paul Winkler wrote:
> And, is all the sfront / saol action happening somewhere
> that I'm not aware of? I was always disappointed that there
> didn't seem to be a lively community around saol.
No, the MIT mailing list went inactive a few years ago, and
to my knowledge a new one hasn't sprung up to replace it.
The only communication channel at the moment is the freshmeat.net
mailing list for notifications of new sfront releases:
http://freshmeat.net/projects/sfront/
That list does have 10 subscribers, and I'm not one of them,
so there must be 11 sfront users left in the world (at least) :-).
I've actually spent the last few months going through my
queue of bug reports and feature requests, and updating
the code base. But, these are the sorts of changes that
benefit from doing them in "batch mode" and then testing
thoroughly, so the plan is to hold off on a new release until
it's time for me to switch back into teaching mode for the fall ...
Historically, a few things happened with sfront:
[1] The standardization of RTP MIDI via the IETF took priority,
and it took longer than I had thought (first running code happened
Christmas 2000, and the IESG approved the I-Ds a few months ago:
http://www1.ietf.org/mail-archive/web/ietf-announce/current/msg02110.html ).
We're now in the copy-editing queue (it takes a long time to
proofread 250 pages of dense text ...); once this is done
we'll get our RFC numbers.
[2] It took a long time to really get a sense of where SAOL could
fit in the community ... I think pitching it as a Max/MSP or Pd
or SuperCollider or CSound competitor won't succeed ... instead,
I think it needs to be relaunched as "audio Postscript" -- an ISO/IEC
standard for normatively interchanging audio algorithms in a
domain-specific representation. And, as an extra bonus, SAOL is
easier for human programmers to read/write than Postscript :-).
This is an old idea -- some of you may remember an editorial David
Wessel wrote about "audio Postscript" many years ago in Electronic
Musician magazine ...
Admittedly, SAOL is not perfect for this role -- being standardized
almost a decade ago, it's inevitable that it's dated in some
respects, and even during its birth some of its design decisions were
controversial. But I don't think a core group of academic and
industrial folks would be willing to mount (another) 5-year effort to
standardize a better audio algorithm interchange language unless they
see real evidence from the community that there's a need for such an
effort. And the best way to show that need is to try to insert SAOL
into markets for such an interchange format where SAOL's limitations
are not of great concern.
>> Sfront compiles a high-level music language (Structured Audio) to C,
>> and there's no reason in theory that audio drivers couldn't be written
>> for LADSPA.
>
> I remember asking you about this a couple years ago and you said
> it could be done, but you could only run one plugin instance
> at a time ... Is that still the case? Or am I misremembering?
You are remembering correctly ...
Basically, one of the "feature requests" in my queue is improving
the sfront audio driver API to remove this restriction, and to
reality-test it by writing an AudioUnits driver. Until that happens,
yes, you'd be limited to single-instantiation for your LADSPA
driver ... realistically, I don't think "multiple instances" would
make it into the "late summer 2006" release, unless I get lucky and
finish earlier items in the queue ahead of schedule. The priority
items are fixing real semantic bugs in the language implementation,
since sfront is serving as a de facto secondary reference implementation
at this point, as saolc has trouble running a lot of correct SAOL
code ...
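(A side note on why generated code often ends up single-instance: the
usual culprit is global state. The fragment below only illustrates that
general pattern and is not sfront's actual output:)

/* Single-instance style: engine state in file-scope globals, so two
   instantiations of the plugin would overwrite each other. */
static float g_phase;
float engine_run_global(void) { g_phase += 0.01f; return g_phase; }

/* Multi-instance style: the same state moved into a context struct
   that each plugin instance allocates for itself. */
typedef struct { float phase; } engine_ctx;
float engine_run(engine_ctx *c) { c->phase += 0.01f; return c->phase; }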
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
We are considering re-architecting our VST host in Receptor to run each
plugin in a separate process and to connect the processes with Jack. How
much extra overhead can we expect?
Receptor is basically a PC running our VST host. That's the only audio
app that is running. Currently all the VSTs run in our host app's
process, so we have no context switch or system call to get them to
process.
This helps keep processing overhead low. But the downsides are that (1)
all plugins share the same 2GB VM space and (2) we can't make use of a
machine with > 2GB RAM. (And (3) one crashing VST can take down our
whole mixer, but that's another story...)
As it turns out, customers are interested in using Receptor to run sample
players which are hungry for both VM and RAM. So giving each VST its own
process would give the VST its own VM space with lots of elbow room, and
allow Linux to give all of those plugin apps access to the additional RAM
even though they are still 32-bit apps.
As we looked over the Jack docs, it seems like a natural fit for supporting
this kind of architecture. We would break out our VST support into
separate apps and connect them to our host app via Jack. This seems to be
how FST is implemented and how Jack is intended to be used.
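(For concreteness, each per-plugin process would boil down to a small
Jack client along these lines; vst_plugin_process() is a hypothetical
stand-in for the wrapped plugin's audio callback:)

#include <jack/jack.h>
#include <unistd.h>

extern void vst_plugin_process(float *in, float *out, jack_nframes_t n);

static jack_port_t *in_port, *out_port;

/* Called by the Jack server once per period, in the server's graph
   order; this callback is where the per-process IPC cost is paid. */
static int process(jack_nframes_t nframes, void *arg)
{
    float *in  = jack_port_get_buffer(in_port,  nframes);
    float *out = jack_port_get_buffer(out_port, nframes);
    vst_plugin_process(in, out, nframes);
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("vst-wrapper",
                                             JackNullOption, NULL);
    if (!client) return 1;
    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsInput,  0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    for (;;) sleep(1);   /* real code would park on a condition variable */
    return 0;
}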
So does anyone have a sense of how much overhead is introduced by the
per-process() IPC that Jack uses? Our worst case would be 57 VST plugins
with a 32-frame stereo buffer (0.725 ms at 44.1 kHz). How much extra
overhead would those 57 Jack calls to process() add to the overall
processing time? Any other gotchas?
Thanks for any help... mo
On Jun 14, 2006, at 10:04 AM, linux-audio-dev-request@music.columbia.edu
wrote:
> There are, of course, languages like SuperCollider and CSound, which
> ARE made for expressing audio algorithms. However, again they are
> generally interpreted.
Sfront compiles a high-level music language (Structured Audio) to C,
and there's no reason in theory that audio drivers couldn't be written
for LADSPA. At the moment, though, all of the audio drivers are for
interface APIs, not plug-in APIs ... the first plug-in API for sfront is
most likely to be for AudioUnits, since I've moved from Linux to OS X
as my computing platform these days. But anyone can write an
audio driver for sfront, see:
http://www.cs.berkeley.edu/~lazzaro/sa/sfman/devel/adriver/index.html
for info on writing new audio drivers for sfront, and:
http://www.cs.berkeley.edu/~lazzaro/sa/
for more general info on Structured Audio and sfront.
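(To give a feel for what such a driver would target, here is a bare
sketch of the LADSPA entry points involved; sa_engine_tick() is a
hypothetical stand-in for the engine sfront generates, and the
descriptor table and cleanup boilerplate are omitted:)

#include <ladspa.h>
#include <stdlib.h>

extern float sa_engine_tick(void);  /* hypothetical generated engine */

static LADSPA_Data *out_buf;

/* One engine per process: this is where the single-instantiation
   restriction mentioned elsewhere in this digest bites. */
static LADSPA_Handle instantiate(const LADSPA_Descriptor *d,
                                 unsigned long sample_rate)
{
    return malloc(1);   /* dummy handle; real state is global */
}

static void connect_port(LADSPA_Handle h, unsigned long port,
                         LADSPA_Data *buf)
{
    if (port == 0)
        out_buf = buf;
}

static void run(LADSPA_Handle h, unsigned long nframes)
{
    for (unsigned long i = 0; i < nframes; i++)
        out_buf[i] = sa_engine_tick();
}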
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
Hi there.
Is it possible to write LADSPA plugins in anything but C/C++? I prefer
perl, ruby or python.
alex
--
Alex Polite
http://flosspick.org - finding the right open source
Greetings,
Version 300 of Audioscience HPI driver has been released and can be
downloaded from here:
http://www.audioscience.com/internet/download/linux_drivers.htm
There are many changes, please read the release notes for details:
http://www.audioscience.com/internet/download/drvnotes.txt
In addition, ALSA has been updated to use the same source code and DSP
files. This is currently only available in the ALSA Mercurial repository:
http://alsa-project.org/download.php
If you have any problems or queries about this new driver, please email
support@ (our domain name) and include info about your distro, kernel
version, card type, etc.
regards
--
Eliot Blennerhassett
AudioScience Inc
Steve:
>
> I think this is a worthwhile topic actually...
> There is currently a shortage of interest in developing good
> alternative NATIVE machine-language-compiled languages.
> Although I have been programming C/C++ for a long time, I have lately
> been getting into Python and I really like it... Really, there's no
> REAL reason we can't use other languages for writing audio stuff,
I completely agree with you. And that's why I use SND for almost everything:
http://ccrma.stanford.edu/software/snd/
Hi all!
Sorry for crossposting, but I wasn't sure who could answer it most
appropriately.
Have any of you ever had experience with Doggiebox?
http://www.doggiebox.com
Is the format of their drumkits known? It seems to me this software is kind
of free, and they seem to have a few nice free kits. Thus I thought perhaps
one could convert them to Hydrogen or SoundFont format. It looked
straightforward enough.
Any ideas, hints, whatever?
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net - the Linux TextBased Studio guide
GLASHCtl is a control applet for LASH. This is the first release. Other
than my code it contains eggtrayicon.h and eggtrayicon.c (by Anders
Carlsson and Jean-Yves Lefort), taken from libegg, and the LASH icon (by
Thorsten Wilms) from the LASH project. A patch from Florian Schmidt,
adding session renaming and directory switching, has also been applied.
Get it at http://dino.nongnu.org/glashctl
Attaching README:
GLASHCTL
=======================================================================
This is a simple applet for controlling the LASH Audio Session Handler.
When you run it, it will appear as a small LASH icon in your
"notification area" or "system tray" (if your desktop manager is
compatible with freedesktop.org's "System tray" standard,
http://www.freedesktop.org/Standards/systemtray-spec). This is typically
somewhere in the panel in KDE or GNOME.
BUILDING IT
============================================================
To build this program you will need the following libraries:
* libgtkmm (2.6.4 or newer)
* libvte (0.11.15 or newer)
* liblash (0.5.1 or newer)
You will also need to have the LASH server, lashd, somewhere in your
$PATH.
To build the program with the default configuration (install in
/usr/local, compile with -g -O2, etc), simply type 'make' in this
directory. If you want to change the configuration, use the configure
script (run configure --help for details). When you type 'make' a
program called glashctl should be generated, and when you type 'make
install' it should be installed on your system.

You need to install it before you run it, otherwise it won't find the
LASH icon file and will not start.
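For example, the whole default sequence is just:

  make
  make install

and if you want different paths or flags, run './configure --help' first
to see what can be changed.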
USING IT
============================================================
To use the applet, simply run the program. If you have a
standards-compliant system tray on your desktop, a small LASH icon (a
cardboard box with a soundwave on it) should appear there. It is
probably insensitive (greyed out), unless you were already running
lashd or have the LASH_START_SERVER environment variable set to 1. If
you right-click the icon a menu will pop up where you can choose to
start lashd. When lashd has started the icon should become sensitive
(show colours), and you will be able to restore audio sessions, and
when there is an active session, save it, close it, rename it or
change its directory. You can also quit the applet from the popup
menu.

You can also left-click the icon to open a message window that shows
information about the events received from lashd.
NOTES
============================================================
The LASH icon was created by Thorsten Wilms for the LASH project
(http://lash.nongnu.org).
I know that the GNOME HIG discourages using the notification area for
permanent icons and icons that have actions other than just opening a
window associated with them, but until there is a standard for writing
normal panel applets that work in both KDE and GNOME and in other
window managers I'll do it anyway.
Send bug reports and suggestions to Lars Luthman, lars.luthman@gmail.com
--
Lars Luthman - please encrypt any email sent to me if possible
PGP key: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x04C77E2E
Fingerprint: FCA7 C790 19B9 322D EB7A E1B3 4371 4650 04C7 7E2E