Hi all,
here's a work-in-progress collection documenting the state of affairs in
RRADical Pd as I propose it.
http://footils.org/pkg/rradical-wip-040109.tgz
This includes a new and improved version of the pattern sequencer
rrad.pattseq and memento, plus lots more abstractions for
reusability enhancements in Pd.
An impressive screenshot of a use case is here:
http://footils.org/images/rradical-wip.png
Extensive documentation is missing. :(
I'll regularly announce updates to my wip-collection here:
http://footils.org/cms/pydiddy/wiki/RradicalPd
Linux-audio-user is CC'd because maybe this is interesting to, well,
Linux audio users, too, I guess.
== What is RRADical Pd? ==
One goal of RRADical Pd is to create a collection of patches that
make Pd easier and faster to use for people who are more comfortable
with software like Reason or Reaktor. RRADical ships with patches
that solve real-world problems on a higher level of abstraction than
the standard Pd objects do. All these high-level abstractions
(should) come with (detachable and changeable) GUIs built in and use a
common way of saving states. They also include network remote control
using OSC.
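To give a rough idea of what the OSC remote control looks like from the
outside, here is a minimal sketch of driving a patch from Python. The
port number and the /rradical/... address paths are purely illustrative,
so treat this as pseudo-usage rather than a documented interface:

    # Hypothetical OSC remote control of a running Pd/RRADical patch.
    # Port and address paths below are made up for illustration.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)          # Pd listening for OSC
    client.send_message("/rradical/pattseq/step", 3)     # e.g. jump to step 3
    client.send_message("/rradical/synth/cutoff", 0.75)  # e.g. set a GUI param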
RRAD as an acronym stands for "Reusable and Rapid Application/Audio
Development" with Pd.
The official home of RRADical Pd is:
http://pure-data.org/community/projects/rradical/
Have fun,
--
Frank Barknecht _ ______footils.org__
Greetings:
Linux Journal On-line has published the latest edition of my monthly column:
http://www.linuxjournal.com/article.php?sid=7342&mode=thread&order=0
It's about living on Planet CCRMA and visiting my Aunt AGNULA...
On-line since yesterday morning, currently at 3330 reads...
Best,
== dp
On the 3-D audio (Interest in Software?) thread, I had promised to mail
out something on a plug-in. It got a little too long, so rather than
clog up everyone's mailboxes, I went ahead and put it up at:
http://home.earthlink.net/~davidrclark/linux_audio_users/Plug-in.html
If this isn't proper netiquette for this users' mailing list, someone
please do let me know. It would be easier just to email my ramblings.
---------------------
Bottom line is: I'm not sure if it makes sense as a plug-in or not.
I think I may provide the impulse response function generator and my
own convolution engine, then let one of the plug-in gurus see what they
can do, if they want to. If nothing happens, it'll be a command-line
thingy until I get motivated to GUI it all up.
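Just to show what the convolution step itself boils down to, here is a
generic sketch (not my engine, and the file names are placeholders) of
applying a pre-computed impulse response to a dry signal with FFT
convolution in Python:

    # Generic FFT convolution of a dry signal with an impulse response.
    # File names are placeholders; this is not the engine described above.
    import numpy as np
    import soundfile as sf   # assumes the libsndfile-based reader is installed

    dry, sr = sf.read("dry_mono.wav")
    ir, _ = sf.read("room_ir.wav")

    n = len(dry) + len(ir) - 1                       # full convolution length
    wet = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)
    wet /= np.max(np.abs(wet))                       # crude peak normalisation

    sf.write("wet.wav", wet, sr)

The real work, of course, is in generating the impulse response functions
in the first place, not in this final step.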
No schedule, but I'd be happy to provide a crude tarball to anyone who
wants to try and deal with it at any time. I PROMISE it won't be easy
without some instruction. Please email me privately also so that I'm
sure to see it.
Thanks,
Dave.
Does anyone know how to get a Midiman Quattro to work with SuSE 9.0? I
did have outputs working at one time, but never figured out the input
channels. There is a reference to a .asound file, but not in SuSE 9.0.
Dave
Right now there is a bug in ALSA (after version 0.9.8) for the intel8x0
driver that causes a lot of xruns (2-3 a second!) when trying to set the
sample rate to 44100. I would stick to 48000 for now until that matter
gets fixed. You shouldn't have any problems with that setting.
-Reuben
--From: Atte André Jensen--
intel8x0: clocking to 48000
I also presume this is not optimal and should be changed to 44100,
right? Where is this done?
I run Debian unstable on both 2.4.23-low-latency-patched and 2.6.0/1
kernels, with the latest ALSA on 2.4 and the built-in ALSA in 2.6. Please
bear with me, but I don't really know which config files to post...
Thanks in advance.
--
peace, love & harmony
Atte
http://www.atte.dk
Brian Redfern wrote:
> Soundhack does some of this...
I just took a look at the Soundhack page and the documentation for the
binaural module. From the available documentation, it appears that
Soundhack merely uses impulse response functions that are already
generated; specifically, the binaural filter uses the HRTFs from
Durand R. Begault. Has something changed recently? Does Soundhack now
generate HRTFs or solutions to the wave equation?
The documentation also says that you can add reverb to the results. Ugh.
No, that isn't what one really wants to do for 3-D. It should all be
a consistent solution to the wave equation for a particular boundary
and source/sink locations, or a close approximation of it.
Hi all.
Has anyone been able to use this sound card so far?
I posted this yesterday: http://eca.cx/lau/2004/01/0087.html
but it might have been lost in an old thread.
Cheers, piem
The 1010LT works for me under Debian using a patched 2.4.22 and ALSA (0.9.4,
I think). I haven't tried the multiple outs yet, but the stereo analog
input/output works fine.
m.
> -----Original Message-----
> From: Jim Hines [mailto:jhines@iolinc.net]
> Sent: Wednesday, January 07, 2004 10:21 AM
> To: linux-audio-user(a)music.columbia.edu
> Subject: [linux-audio-user] M-Audio Delta products
>
> Hi,
> What M-Audio products work under Linux? I am interested in purchasing the
> D1010LT. Anyone know if this one works?
>
> Thanks,
> --
> Jim Hines
> Redhat Linux 9
> Athlon XP2100+
> cd /pub; more beer
>
>
+-----------------------------------------------------------------+
| ______ ______ _ _ _ |
| /\ / _____) ___ \| | | | | /\ |
| / \ | / ___| | | | | | | | / \ |
| / /\ \| | (___) | | | | | | | / /\ \ |
| | |__| | \____/| | | | |___| | |_____| |__| | |
| |______|\_____/|_| |_|\______|_______)______| |
| |
+-----------------------------------------------------------------+
[Sorry for cross-posting. Feel free to forward around]
Florence, 7 January 2004
+++ AGNULA/DeMuDi 1.1.0 approaching - early packages testing
As we approach the release of AGNULA/DeMuDi 1.1.0 [0] we'd like to
spread awareness on the availability of the debian packages we've been
working on in the past weeks.
+++
These packages are built against a frozen snapshot of Debian Unstable
[1], but they should work on Sarge systems too, as there haven't been
any major upgrades between the two. They won't work without a major
overhaul on Debian Woody systems, unfortunately. [2]
If you are running a Sarge or Sid Debian system, we would appreciate
early testing of our packages. Instructions on downloading them can
be found at:
http://www.agnula.org/download/demudi/demudi_1_1_0_apt
We value all your bug reports, suggestions, criticisms and anything
else you feel would be useful for us to improve our work.
You can find instructions on how to report bugs and requests here:
http://www.agnula.org/development/agnula_bugs_requests/
while instructions on how to contact us are available here:
http://www.agnula.org/contacts/
+++
About AGNULA: Agnula (acronym for A GNU/Linux Audio distribution,
pronounced with a strong g) is the name of a project funded by the
European Commission (number of contract: IST-2001-34879; key action
IV.3.3, Free Software: towards the critical mass). The project aims
to spread Free Software in the professional audio/video arena.
Best regards,
--
The AGNULA Team info(a)agnula.org
Our mailing lists: http://lists.agnula.org/
Our web site: http://www.agnula.org/
"There's no free expression without control on the tools you use"
[0] Which should hopefully go out on Jan 15, 2004 (cross your
fingers).
[1] And specifically, the snapshot frozen at 15/11/2003.
[2] But please check
http://www.agnula.org/download/demudi/demudi_1_0_iso and
http://www.agnula.org/download/demudi/demudi_1_0_apt
for information on how to use (a subset of) our debian packages on
woody.
Thanks very much to those on the user list who listened to the demo
and/or responded regarding 3-D Audio. I really appreciate all of the
feedback. I'll try to answer some of the queries and comment on
responses in one combined email rather than have a string of
individual responses, so this is a little long. However, many of the
themes are related, so I'd prefer to answer in this all-in-one manner.
On "harshness" (Mark Constable) and "extreme" separation (Jörn) plus
Mark's observations of the demo clips:
Both the harshness and the extreme separation are adjustable. These
effects people noticed aren't a necessary result of the 3-D processing
by any means. The separation is exaggerated for this demo. The
monophonic clip was included simply to emphasize that I did not merely
take the reverberated, stereophonic output of the synth (clips #2 and
#4) and "improve" it a little; instead I completely started over with
very dull, dry, monophonic recordings (#1 and #5).
Cohesiveness (Jan), preprocessing and bus-oriented reverbs (Mark
Knecht):
The 3-D processing provides a better-integrated, more cohesive sound
due to the physical basis of the processing. In using typical DSP-
oriented techniques, you are essentially processing the audio in a
non-physical manner, despite terminology like "early reflections" and
so on. The 3-D processing involves solving the wave equation in
three dimensions, which provides a solid physical basis. I have found
that far less tweaking is necessary with this approach than with the
usual DSP-oriented processing.
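For reference, the underlying relation is just the standard acoustic wave
equation for the pressure field (textbook material, nothing specific to my
programs):

    \nabla^{2} p(\mathbf{r},t) = \frac{1}{c^{2}} \, \frac{\partial^{2} p(\mathbf{r},t)}{\partial t^{2}}

where c is the speed of sound; the geometry, the boundary conditions, and
the source and listener positions are what select a particular solution.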
As Mark Knecht wrote, this approach lends itself more to
preprocessing, or determining in advance what processing to do, then
doing it. The good news is that the result will be closer to
something you can use than it would be using the normal
mixing/processing approach. The bad news is that the best way to use
this new approach is to rethink the whole process of mixing and
mastering.
To Plug-In or not to Plug-In:
Some people would like to see this implemented as a plugin, but that's
putting something new in an old container --- which can be done, but
one has to ask if that's really what one wants to do. If so, then I'd
be happy to do it, but in the long run, it may be better to rethink
the whole process.
The mathematical basis for all of this is the solution of the wave
equation. Once you've developed methods for doing that in 1-, 2-, and
3-D, you can build a reverber/echoer/stereo-separator OR you can build
an SF2 generator OR LOTS of other things. If I were to make these
programs available for someone else, how shall I package them? I
could build any one of a number of different programs that utilize the
routines I've developed, each of which can do completely different
things. So rather than speak to developers about how to improve my
programs (as was suggested), I really need to
speak with potential users about what they might need or want, whether
that be a plugin or something completely different.
For example, these same programs can also be used to create
instruments. (A room can be regarded as part of a three-dimensional
instrument.) One could solve the wave equation in two dimensions
(drums, cymbals, etc.), in one dimension (guitars, pianos, etc.) or in
other geometries (for example pipes --- organs, and so on).
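As a toy illustration of the one-dimensional case, here is a rough,
heavily simplified sketch (not my code; all names and parameters are made
up) of a plucked string computed with an explicit finite-difference
scheme:

    # Toy 1-D wave equation solver: a "plucked string" via finite differences.
    # Everything here is illustrative; it is not the code described above.
    import numpy as np

    def pluck_string(n_points=200, n_steps=4000, c=1.0, length=1.0):
        """Solve u_tt = c^2 u_xx on a string fixed at both ends."""
        dx = length / (n_points - 1)
        dt = dx / c                       # run at the stability (CFL) limit
        r2 = (c * dt / dx) ** 2           # Courant number squared (= 1 here)
        x = np.linspace(0.0, length, n_points)
        # Triangular "pluck" at one quarter of the string, zero initial velocity.
        u_prev = np.where(x < 0.25 * length,
                          x / (0.25 * length),
                          (length - x) / (0.75 * length))
        u = u_prev.copy()
        pickup = np.empty(n_steps)
        for step in range(n_steps):
            u_next = np.empty_like(u)
            u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                            + r2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
            u_next[0] = u_next[-1] = 0.0     # fixed ends
            u_prev, u = u, u_next
            pickup[step] = u[n_points // 2]  # sample the string's midpoint
        return pickup

    samples = pluck_string()                 # raw, unscaled "instrument" output

A real instrument model would of course need proper damping, excitation
and scaling, but the core idea is the same.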
On the approach used --- IR?:
Mark Knecht asked whether or not this work was IR-based. I assume
that this means "impulse response" function based. Well, yes, this is
how the user would see the application of 3-D processing at the very
end of the line, but there is a lot else going on. First one needs to
generate the impulse response functions, then generate the impulses,
then generate the "recorded" signals. The recorded signals can be
decomposed into subcomponents (for example split into frequency
bands), then the various impulse response functions can be applied.
The programs I've written do all of this, so it's much more than
writing a plugin. If I were to merely do that simple part of it, then
I'd have to supply some "canned" impulse response functions and
transfer some information on how to utilize them properly (or
improperly like I do!). I could do this, but I suspect soon enough
people would want more information or additional impulse response
functions. The "IR" application step is the simplest part of this
whole process.
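To make that last, simple step concrete, here is a rough sketch of the
band-splitting and per-band convolution in Python. The band edges, the
filter choice and the impulse responses themselves are placeholders;
generating the impulse responses is exactly the part not shown:

    # Split a dry signal into frequency bands, convolve each band with its
    # own impulse response, and sum. Filters and band edges are placeholders.
    import numpy as np
    from scipy.signal import butter, sosfilt, fftconvolve

    def apply_banded_irs(dry, irs, band_edges, sr=44100):
        """dry: mono signal; irs/band_edges: one IR and (lo, hi) pair per band."""
        n_out = len(dry) + max(len(ir) for ir in irs) - 1
        out = np.zeros(n_out)
        for (lo, hi), ir in zip(band_edges, irs):
            sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
            band = sosfilt(sos, dry)          # isolate one frequency band
            wet = fftconvolve(band, ir)       # apply that band's IR
            out[:len(wet)] += wet
        return out

A real implementation would also have to worry about how the bands
overlap and recombine, which this sketch glosses over.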
On documentation:
Jörn asked about whether or not the code was documented so that one
could see what was going on. No, it's really not. Some sort of
instruction would be necessary, and I'd have to generate that. I'm
not aware of anywhere else I could point one to, either. This is
rather original work I've done, and the information is scattered about
in mathematical physics textbooks, books on acoustics, and in books on
signal processing. It is done in C++, so some of the code is in a
library --- but some of that may also be of interest. One thing I did
was to start completely from scratch. You don't need anything
other than a C++ library to link to. The scripts are Korn shell with
a little Python.
What next?
To summarize a little bit: I can do a lot of different things here,
depending upon what people are interested in. I can try to write a
plugin that applies impulse response functions that I have generated;
I could perhaps make available the programs for producing them; I
could write a program that assists in applying them; I could write an
instrument generator; I could release a library of the utilities. Or
I could just do what Jörn suggested and wrap up what I've already
done. I suspect this would be the least useful approach for most
people, but the best approach for me and for potential collaborators.
Thanks once again for your comments and for listening to the demo.
I'd appreciate further discussion, either here or privately by email.
I've got a little more on the idea of a plugin and on real-time
concerns which I'll send a little later.