(Cross-posted to the Faust and Q mailing lists.)
Hi all,
I thought that some of you might be interested in a Faust [1] interface
I created for my functional programming language Q [2]. The interface
allows you to load and run Faust DSPs in Q. Those of you who attended
Yann Orlarey's Faust workshop and my talk about Q at LAC05 should have an
idea of what I'm talking about. ;-) I think that Faust and Q really make
a great combo, which allows you to do all your multimedia/DSP stuff
using nothing but modern FP tools.
The Q-Faust interface is currently only available in CVS; see the
q-faust module at [3]. A few examples are included, such as a realtime
synthesizer playable via MIDI. To build and run this stuff, you'll need
the Q core system, Q-SWIG, and the q-midi and q-audio modules, all
readily available from [2]. And, of course, you'll also need Faust [1].
Relevant links:
[1] Faust homepage: http://faudiostream.sourceforge.net
[2] Q homepage: http://q-lang.sf.net
[3] Q cvs repository: http://cvs.sourceforge.net/viewcvs.py/q-lang
Enjoy! :)
Cheers,
Albert
--
Dr. Albert Gräf
Dept. of Music-Informatics, University of Mainz, Germany
Email: Dr.Graef(a)t-online.de, ag(a)muwiinfa.geschichte.uni-mainz.de
WWW: http://www.musikwissenschaft.uni-mainz.de/~ag
Hi all,
I'm the author of Freecycle, one of the younger FOSS audio projects out there.
I have a problem that may astound by its simplicity, so I barely dare to ask
for help...
Freecycle provides some LADSPA functionality, and as there are a lot of great
but mono LADSPA effects, I need a very simple way of sending stereo audio to
a mono input.
As I want to make my software as simple as possible for the end user, I
provide some basic routing for LADSPA, giving the possibility to feed the
LADSPA audio input port from the left channel, from the right channel, or
from a mix of the two channels. The signal is then passed through LADSPA,
and every 1024 frames the LADSPA control port input values are changed
according to the desired automation.
My problem is finding a correct way of mixing the two channels into one. I
have found a few ways of doing that:
1) sum L and R and divide by 2 : well..
2) if L>0 and R>0, take the max; if L<0 and R<0, take the min; else add :
current implementation
3) add, then normalize to the peak after summation.
The way that "feels" most correct to me is 3), but I don't like the two-pass
approach, as I mix the channels every 1024 frames and then send those 1024
frames to LADSPA.
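For concreteness, here is how 1) and 2) could look in C. This is just a sketch of the ideas above, not Freecycle's actual code; the buffer layout and function names are my assumption:

```c
#include <stddef.h>

/* Method 1: sum the channels and divide by 2. */
static void mix_sum_halved(const float *l, const float *r,
                           float *mono, size_t n)
{
    for (size_t i = 0; i < n; i++)
        mono[i] = 0.5f * (l[i] + r[i]);
}

/* Method 2: when both samples have the same sign, take the extreme
   (max for positive, min for negative); otherwise just add them. */
static void mix_extreme(const float *l, const float *r,
                        float *mono, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (l[i] > 0.0f && r[i] > 0.0f)
            mono[i] = l[i] > r[i] ? l[i] : r[i];   /* max */
        else if (l[i] < 0.0f && r[i] < 0.0f)
            mono[i] = l[i] < r[i] ? l[i] : r[i];   /* min */
        else
            mono[i] = l[i] + r[i];                 /* opposite signs */
    }
}
```

One nice property of 2) is that the output never exceeds the louder channel in magnitude (opposite-sign samples partially cancel), so it cannot clip if the inputs don't, though it is nonlinear and so adds some distortion.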
Of course, I would like to avoid letting the user set the gains manually...
Could someone please help with this apparently simple problem?
Many thanks,
Predrag Viceic
http://www.redsteamrecords.com/freecycle
Hello Lists,
As some of you may know, I'm the guy who wrote Specimen. And as a tiny
fraction of that some may know (or care), Specimen hasn't seen an update
in an exceedingly long time.
What, exactly, is going on?
The answer, dear friends, is simple: music. My introduction to the art
of electronic music composition began precisely when Specimen
development halted. For a few months, I fumbled around with countless
little songs in an attempt to figure out how one goes about the process
of coaxing the electrons in such a fashion as to result in sonic
euphoria. And once I reached that point where the magnitude of my
suckiness became bearable, I began writing songs.
This process began in February with the assistance of an old friend, who
lent his discerning ear and exceptional bass talent to the effort. And
yesterday, we finished step one --- general composition. We now have 11
rough drafts that will be polished into a full length album and released
next fall --- 100% Linux and OSS produced.
It occurred to us that we might share some of this with all you folks
out there in Internet land, both to stoke the flames of anticipation and
to remind my adoring public that I have not yet shed this mortal coil.
So, I am pleased to present a collection of snippets from the
aforementioned tracks, pieced together in an easy-to-swallow
medley:
http://www.gazuga.net/preliminary_beats.ogg
http://www.gazuga.net/preliminary_beats.mp3
And for those interested in staying abreast of the latest and greatest,
be sure to tune into The State of the Beat:
http://www.gazuga.net/blog
That's all for now, but don't worry --- I'll be back before you know it.
Kinda like herpes, only better.
Peace out,
-Pete
Hi,
I have a little problem:
I'm using OpenGL in my program, in combination with OSS via portaudio.
The updates of the OpenGL display are triggered by the audio callback,
protected using the available methods of the GUI toolkit that I use
(FLTK).
And I'm experiencing very strange effects: without audio, the OpenGL
works fine; without OpenGL, the audio works fine. However, when used in
combination, the audio thread (callback) stops very soon, and I have no
idea why.
I used efence to discover whether I'm screwing up memory somewhere,
but I couldn't find anything. I ran valgrind, but it couldn't find
anything helpful either.
I ran it from within gdb, but the program behaves correctly if run
from gdb (heisenbug?). It even works better whenever I insert some
cout's for debugging purposes.
In gdb it also reports a signal like this when initializing the
OpenGL, but I believe that is probably unrelated:
http://music.columbia.edu/pipermail/linux-audio-user/2004-October/016821.ht…
BTW. I'm running an unmodified ubuntu warty.
Any tips or pointers??
Thanks
Richard
PS.: (writing this message led me to some more experiments ;) )
I'm fairly sure that something is screwed up within thread
synchronisation, because I can "fix" it by inserting a usleep() call at
a certain position, but I have a fairly hard time imagining what
could be wrong. I'm considering switching to blocking I/O for audio
because latency and stuff is not really an issue.
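For reference, the pattern I'm trying to get right is: the audio callback never touches GL or FLTK, it only raises a flag, and the GUI thread polls that flag and does the actual redraw. A minimal sketch using C11 atomics (all names here are made up, this is not my actual code):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Shared flag between the audio and GUI threads. */
static atomic_bool redraw_pending;

/* Called from the audio callback: just request a redraw, do no GUI
   or GL work here. */
static void request_redraw(void)
{
    atomic_store_explicit(&redraw_pending, true, memory_order_release);
}

/* Polled from the GUI thread (e.g. an idle or timeout handler);
   returns true at most once per request, so redraws are not queued up. */
static bool take_redraw_request(void)
{
    return atomic_exchange_explicit(&redraw_pending, false,
                                    memory_order_acq_rel);
}
```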
The RT rlimits patch (nice-and-rt-prio-rlimits.patch) has been proposed
as a solution to allow audio users to run their applications with
realtime priorities. While more complicated to configure, it's a much
cleaner patch than realtime-lsm and it's likely to get merged soon _if_
enough audio users test it and confirm that it works.
To encourage this, I have created a wiki page containing installation
instructions, links to prebuilt PAM packages, etc:
http://www.steamballoon.com/wiki/Rlimits
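The details are on the wiki page above, but with the rlimits approach the configuration typically boils down to a couple of pam_limits entries along these lines (the group name and values here are just an example, not a recommendation):

```
# /etc/security/limits.conf -- members of group "audio" may request
# realtime priority up to 90 and nice levels down to -10
@audio  -  rtprio  90
@audio  -  nice   -10
```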
If this works for you, I am collecting success reports. Please email
rlimits-success(a)steamballoon.com .
If you have any problems with it, LAU is probably the best place to ask
for help. Unfortunately I don't have large amounts of time to spend
helping people with this, so any help requests emailed to me directly
may be deleted without a reply. Sorry.
Thanks for testing!
Jody
Through local contacts, I ended up meeting with a person close to
Philadelphia who is starting a company to build a sample playback
engine that will be used to enhance and extend the sound of an
existing large scale instrument. They need a relatively simple sample
playback engine:
- MIDI in
- up to 400 samples being played at once
- all samples assumed to live in RAM
- up to 50-70 channels of output
- no functional GUI (simple one required at
install/configuration time)
The work needs doing quickly (they have existing customers who are
waiting), and well (the system will have no monitor; stuff has to Just
Work All The Time).
If you think you might be interested in doing this, get in touch with
me, estimating time to completion and cost. Available compensation is
reasonable, and may be adjustable to include stock in the company.
--p
ps. you know who you are :)
looks nice Kjetil!
OS X ??
also (OT) are you still developing/mainting Mammut?
cheers~
PAtrick
On Fri May 20 08:32:42 EDT 2005, Kjetil Svalastog Matheussen
<k.s.matheussen(a)notam02.no> wrote:
> The Realtime Extension for the sound editor SND consists of two
> parts:
>
> 1. The RT Engine - An engine for doing realtime signal
> processing.
> 2. The RT Compiler - A compiler for a scheme-like programming language
>    to generate realtime-safe code understood by the RT Engine.
>
> Homepage:
> http://www.notam02.no/arkiv/doc/snd-rt/
>
> Screenshot:
> http://www.notam02.no/arkiv/doc/snd-rt/screenshot.png
>
>
> ********************************************
>
>
> Snd-ls v0.9.3.0
> ---------------
> Released 19.5.2005
>
>
> About
> -----
> Snd-ls is a distribution of the sound editor Snd. Its target is
> people that don't know scheme very well, and don't want
> to spend too much time configuring Snd. It can also serve
> as a quick introduction to Snd and how it can be set up.
>
>
> Changes 0.9.2.0 -> 0.9.3.0
> ---------------------------
> -Updated SND to v7.13 from 18.5.2005. Many important changes.
> -Fixed a small error in the installation script.
>
>
> http://www.notam02.no/arkiv/src/snd/
>
>
>
> _______________________________________________
> Cmdist mailing list
> Cmdist(a)ccrma.stanford.edu
> http://ccrma-mail.stanford.edu/mailman/listinfo/cmdist
Patrick Pagano,M.F.A
Digital Media Specialist
Digital Worlds Institute
University Of Florida
(352) 294-2082
The Realtime Extension for the sound editor SND consists of two parts:
1. The RT Engine - An engine for doing realtime signal processing.
2. The RT Compiler - A compiler for a scheme-like programming language
   to generate realtime-safe code understood by the RT Engine.
Homepage:
http://www.notam02.no/arkiv/doc/snd-rt/
Screenshot:
http://www.notam02.no/arkiv/doc/snd-rt/screenshot.png
********************************************
Snd-ls v0.9.3.0
---------------
Released 19.5.2005
About
-----
Snd-ls is a distribution of the sound editor Snd. It is aimed at
people who don't know Scheme very well and don't want
to spend too much time configuring Snd. It can also serve
as a quick introduction to Snd and how it can be set up.
Changes 0.9.2.0 -> 0.9.3.0
---------------------------
-Updated SND to v7.13 from 18.5.2005. Many important changes.
-Fixed a small error in the installation script.
http://www.notam02.no/arkiv/src/snd/
--- Dave Robillard <drobilla(a)connect.carleton.ca> wrote:
> Hi all,
>
> A while ago I started a thread about the proper way to refer to LADSPA
> plugins (in save files or whatever) and the consensus was library
> filename + label.
>
> People have been having problems with library name - different packages
> seem to make different names for the libraries (prefixing blop_, for
> example) so it doesn't always work. Basically I think using shared
> library file name is an awful way to reference plugins for numerous
> reasons.
As the guilty party (author of blop) I admit that when this option was added
(--program-prefix for library files) it was done in blissful ignorance of the
use of library basename as an identifier, as I had assumed that the Unique ID
was as claimed in ladspa.h.
The purpose was to avoid name clashes with generic names such as 'sawtooth' - I
ended up copying swh, and append the UID (so it does have a use after all :) to
the filenames. I'd meant to remove the --program-prefix option from configure,
but forgot.
> So why wasn't the unique ID the thing to use? There is a unique plugin
> ID in LADSPA, if not for this then for what reason?
Going by what is said on ladspa.org, I think that it was originally intended to
be the way to refer to plugins, and changed as development progressed.
IIRC, the UID is still required to look up metadata with liblrdf, but this may
have changed since I last looked.
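For anyone following along, both identifiers are easy to enumerate from a plugin library via its "ladspa_descriptor" entry point. A sketch (the first descriptor fields are inlined here so the example stands alone; real code should include <ladspa.h> instead):

```c
#include <dlfcn.h>
#include <stdio.h>

/* First fields of LADSPA_Descriptor, inlined for this sketch;
   in real code, include <ladspa.h> and use the full type. */
typedef struct {
    unsigned long UniqueID;   /* the "unique" plugin ID from ladspa.h */
    const char   *Label;      /* the label used in basename+label refs */
} LADSPA_DescriptorHead;

typedef const LADSPA_DescriptorHead *(*descriptor_fn)(unsigned long);

/* Print UniqueID and Label for every plugin in one LADSPA library.
   Returns -1 if the library can't be opened or has no entry point. */
static int list_plugins(const char *path)
{
    void *lib = dlopen(path, RTLD_NOW);
    if (!lib)
        return -1;

    descriptor_fn fn = (descriptor_fn)dlsym(lib, "ladspa_descriptor");
    if (fn) {
        const LADSPA_DescriptorHead *d;
        /* Descriptors are indexed from 0 until NULL is returned. */
        for (unsigned long i = 0; (d = fn(i)) != NULL; i++)
            printf("%lu\t%s\n", d->UniqueID, d->Label);
    }
    dlclose(lib);
    return fn ? 0 : -1;
}
```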
> In a similar vein, I really think the current system for LADSPA
> distribution sucks - big tarballs from various devs containing heaps of
> completely unrelated plugins. A centralized site where plugins can be
> submitted on their own (or in related groups) would be a great thing,
> IMO, and would make it easy to verify that unique IDs are in fact unique
> to solve the above problem.
I don't think you'll get very far arguing the case for UIDs - the arguments
against were pretty clear in the previous discussion and summarised by Chris
Cannam in his response. I think we're stuck with basename+label until a better
scheme can be implemented, possibly for LADSPA 2.
Regarding unrelated plugins in one library, I personally don't have a problem
with this as liblrdf does a fine job of categorising plugins where it's really
needed (in the host).
> Right now if a developer wants to make just one random plugin, they
> don't really have a sane way of getting it out there.
I agree here. The best option short of creating a new library distribution is
to get your plugin 'adopted' into an existing library.
I'm certainly willing to merge any homeless plugins into blop.
> I'm willing to full-time maintain the site, but I don't really have the
> hosting/abilities to create it. What do the other plugin authors think
> about this?
I'm all for it. Maybe liaise with Richard Furse to update the ladspa.org site
itself? There's already a list of links there so all that is really needed is
to add details for maintainers willing to adopt plugins, with appropriate
provisions (kind of plugin, language, build system and so on)?
-
Mike
>From: Alfons Adriaensen <fons.adriaensen(a)alcatel.be>
>
>IFF JACK could be modified to allow this then probably you wouldn't
>need M, its function would be taken care of by jackd.
That was the idea.
>The problem with such a scheme could be that the load is not spread
>evenly, so it would be necessary to give those clients that use
>longer buffers a lower priority. There may be other trouble hidden
>somewhere...
Yes. The priority should depend on the buffer size.
Longer buffers would run with non-soft-RT schedulers.
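Something like the following mapping is what I mean. Only a sketch; the threshold, base priority, and step values are arbitrary:

```c
/* Map a client's buffer size (frames per period) to a scheduling
   priority: halving the buffer raises the priority by one step, and
   very large buffers get 0, i.e. no realtime scheduling at all.
   The 4096-frame threshold and 64-frame base are example values. */
static int priority_for_buffer(int frames, int base_prio)
{
    if (frames > 4096)
        return 0;                  /* run under the normal scheduler */

    int prio = base_prio;
    for (int f = frames; f > 64; f /= 2)
        prio--;                    /* longer buffer -> lower priority */

    return prio > 1 ? prio : 1;
}
```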
Juhana
--
http://music.columbia.edu/mailman/listinfo/linux-graphics-dev
for developers of open source graphics software