Dear FireWire enabled Linux audio users,
libfreebob 1.0.3 is available as of today. It can be downloaded from our
SourceForge page:
http://downloads.sourceforge.net/freebob/libfreebob-1.0.3.tar.gz
This is a maintenance release for the freebob 1.0 branch, and contains
no new features.
It fixes two bugs:
- a buffer reset bug that prevented jackd freewheeling from working.
- a bug that caused MIDI output to fail on all but the last channel of a
device.
Greets,
Pieter
Festival/conference about live coding in Sheffield, UK this Summer...
======================================================================
_ ___ ____ ____ _ _ ____ _
| | / _ \/ ___/ ___| | | (_)_ _____ / ___|___ __| | ___
| | | | | \___ \___ \ | | | \ \ / / _ \ | / _ \ / _` |/ _ \
| |__| |_| |___) |__) | | |___| |\ V / __/ |__| (_) | (_| | __/
|_____\___/|____/____/ |_____|_| \_/ \___|\____\___/ \__,_|\___|
---------------------> LOSS Livecode Festival <-----------------------
Sheffield, UK -- 20-22 July 2007
http://livecode.access-space.org/
In association with Access Space, TOPLAP and lurk
When we improvise music, we are creating music while it is being
performed. "Live Coding" is the creation of software while it is being
executed; the software in turn generating music or video.
Thanks to dynamic programming languages, the live coder is able to
modify and extend their program without restarts, their music and/or
visuals growing with the code that describes them. This way of working
gives instant results for every source code edit. Programming becomes
a fast, creative process - expressive enough that a whole audio/visual
performance may be created as software.
Live Coding began during the 1980s, primarily with FORTH and Lisp. In
recent years new live coding environments and languages such as ChucK,
Fluxus, Impromptu and SuperCollider 3 have appeared, with enthusiastic
communities growing around them. Live Coding performances have also
used Smalltalk, PureData, Scheme, Perl, Haskell, Ruby, Python...
In early 2004 the "Temporary Organisation for the Promotion of Live
Algorithm Programming" (TOPLAP) was formed to support open dialog
between all live coders. Since its early beginnings in a smoky bar in
Hamburg, TOPLAP has reached 178 members worldwide, gaining coverage in
mass media and collaboratively organising several international
meetings.
In 2005 Access Space initiated the L.O.S.S. project
(http://loss.access-space.org) to support free music creativity and
distribution. It featured a series of commissions leading to a
Creative Commons licensed audio CD and repository website produced
entirely with open source tools.
Continuing their series of LOSS commissions and events, Access Space
have teamed up with TOPLAP and lurk to create a three day
international festival, bringing live coding musicians and video
artists together to explore and showcase new approaches in live
performing and participatory arts.
---> CALL FOR PARTICIPATION <-----------------------------------------
We invite proposals for performances and presentations.
For the latest version of this call, please refer to
http://livecode.access-space.org/
Commissions are available to help realise ambitious projects and
performances. Presenters and performers will gain free entry throughout
the festival, and those without institutional support may apply for a
small bursary.
---> IMPORTANT DATES <---
* 14th March - Call for participation
* 14th April - Deadline for proposals
* 1st May - Notification of acceptance
* 16th June - Copy deadline for proceedings (to be confirmed)
* 20th-22nd July - Conference - schedule TBA
---> PRESENTATIONS <---
Short (up to 20 minute) presentations during a day-long symposium. The
remit is broad, but possible subjects include:
* A demo of a novel live coding language/environment
* Historical context of live coding
* Live coding without computers
* Critique of live coding practice
* Live patching
* Reflections on live coding experiences
* Adapting general purpose languages to live coding
* Analysis of live coding performances
* Live algorithms that live code
* Life coding
* Portable live coding devices
* Reflective/self-modifying code
* Live visualisation of sourcecode
* Collaborative networked live coding
* ...
Proposals do not have to be long - however much or little you need to
explain your ideas is fine.
If you are unsure if you can make it, submit your idea anyway - we may
be able to accommodate a small number of remotely streamed
presentations for those unable to attend in person.
There will also be time for a brief (around three hours) introductory
workshop. Please indicate if you would like to be involved.
---> PERFORMANCES <---
There will be at least two evenings of performances, ranging from 10
to 40 minutes. Please outline what you would like to perform,
including technical requirements. We plan to have at least three data
projectors, many pairs of small speakers for participatory
improvisations, enough headphone amps for 100 pairs of headphones, and
a big stereo sound-system for 'traditional' performances. Please state
your preference, and feel free to be creative (see commissions below).
We are also considering a pre-event in London, UK some days before
the festival; let us know if you would like to take part.
---> PROCEEDINGS <---
If your proposal is accepted you will be encouraged to submit short
texts and images for publication in the proceedings. All speakers and
performers will receive a free copy at the beginning of the
conference.
---> COMMISSIONS <---
If you would like time or resources to develop a new way of
performing, some new language or software feature, or something else
interesting then please include a short estimated budget in your
brief, which may include an artist fee. Note that due to funding
constraints the project should have a strong audio component. The
maximum commission will be £1000 (about 1470 euro).
---> BURSARY <---
A small bursary is available to contribute towards travel and
accommodation. Please include an estimated budget for your attendance
and we will apportion this money based on need. Money is, however, very
short; if you are a member of an academic institution we are keen to
help you apply for local funding.
---> PROPOSAL SUBMISSION <---
Preferably in plain text, but all common formats are accepted.
Supporting material including web links to previous work, audio
and video files are welcome but not mandatory.
Proposals should arrive before midnight, 14th April 2007.
Proposals are accepted by email (preferred):
livecode(a)access-space.org
Or by post:
LOSS Livecode
c/o Alex McLean
Access Space
1 Sidney Street
Sheffield S1 4RG
United Kingdom
If sending via email please do not include large attachments - either
include URLs or contact us in advance.
If sending via post include an email address so that we may confirm
receipt.
---> MAILING LIST <---
As members of the "keep avant garde internet tidy" campaign, we keep
our cross posts to a minimum. To continue receiving news of the
conference, please sign up to our mailing list:
http://lists.lurk.org/mailman/listinfo/lc/
---> ABOUT ACCESS SPACE <---
Based in Sheffield, Access Space is the UK's first "Free Media Lab"
- a community space equipped with locally recycled computers running
free, open source software. It provides a framework, resources and
support for self-directed learning, arts and creativity. Taking part
is totally free, and anyone can walk in and contribute:
http://access-space.org
---> FOR MORE INFORMATION <---
Don't hesitate to email with questions to the submission address
above. The conference website is not yet ready, but more information
about live coding may be found at the official TOPLAP wiki:
http://toplap.org/
Hope to see you in July!
LOSS Livecode is funded by Arts Council England, Yorkshire and The PRS
Foundation.
======================================================================
--
Alex McLean
http://yaxu.org/
http://slub.org/
http://lurk.org/
http://doc.gold.ac.uk/~ma503am/
Continually breaking the drivers forces someone to look them over when
updating, and maybe fix other problems.
Taybin
-----Original Message-----
>From: Christian Schoenebeck <cuse(a)users.sourceforge.net>
>Sent: Mar 14, 2007 10:17 AM
>To: The Linux Audio Developers' Mailing List <linux-audio-dev(a)music.columbia.edu>
>Subject: Re: [linux-audio-dev] Getting out of the software game
>
>Am Mittwoch, 14. März 2007 14:16 schrieb Paul Davis:
>> in theory, you certainly can. but the kernel development team, and linus
>> in particular, are not interested in an engineering effort/long term
>> approach that makes this feasible. if you define a stable driver binary
>> interface, you can change the kernel out around it and drivers keep
>> working. linus has made it clear that he sees no reason to do this, and
>> is perhaps even opposed to it for some possibly sound engineering
>> arguments (though that is open to debate).
>
>And what are these arguments?
>
>CU
>Christian
Hey everyone,
I'm working my way through a simple mixer application using ALSA's
mixer API. However, the mixer section of the documentation is blank,
so I've taken to reading through amixer's source code to try to
figure out how it does its thing.
It's a bit hard to work out how some of the pieces fit together.
I've got the basics of getting/setting volume values and getting
volume ranges from mixer elements.
I'm a bit confused by how to get element types. It seems like amixer
opens the ctl device to get a mixer element's type? Am I right about
this? It seems like a very confusing API to require the ctl device to
get info on an element from the mixer device.
If it does require a handle to the ctl device, could someone give me
the quick overview of how this works together in the big picture? If
not, I'd appreciate a quick explanation of the right way to determine
the type of a mixer element.
--
Ross Vandegrift
ross(a)kallisti.us
"The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell."
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37
> To start with have a look at Wikipedia. They have a pretty good section
> about audio engineering which covers a lot of topics including
> stereophony. (At least the German edition does)
>
>
>
> Yours sincerely,
> Dennis Schulmeister
Thank you for your suggestions.
Hi,
is it possible to record multi-channel speech audio and downmix it to
2 stereo channels, BUT with horizontal audio positioning of each audio
channel, i.e. like a Dolby Surround effect?
I'm not looking to make a multi-channel podcast, just a plain stereo
podcast. Ok, not just plain - maybe a better term is an enhanced podcast
with a "Dolby Surround" or "binaural" effect, so that different voices
appear to be coming from different positions.
The idea is to make a podcast with 5 (or more/fewer) people, record
each of them individually, and make a stereo file for distribution,
but such that each person has a distinct "location" in your ear. So
you can hear that person A is in front of you, person B is on the far
left, person C is to the right but closer to the centre... and so on.
I would maybe have 3 channels sometimes and sometimes maybe 7,
depending on the number of guests in each podcast show.
Is there some Linux audio tool which I could feed 3-7 wav tracks,
define where in horizontal space I want each channel to be located,
and have it produce a final stereo track with this "binaural" or
"Dolby Surround" effect?
Thank you in advance.
--
http://kernelreloaded.blog385.com/
linux, blog, anime, spirituality, windsurf, wireless
registered as user #367004 with the Linux Counter, http://counter.li.org.
ICQ: 2125241
Skype: valent.turkovic
On 3/13/07, Georg Holzmann <georg.holzmann(a)student.kug.ac.at> wrote:
> Hallo!
>
> > I'm not looking to make multi channel podcast, just a plain stereo
> > podcast. Ok, not just plain, maybe better term is enhanced podcast
> > with "dolby surround" or "binaural" effect so that different voices
> > appear to be coming from different positions.
> [...]
> > Is there some linux audio tool which I could feed 3-7 wav tracks and
> > define where in horizontal space I want each channel to be located -
> > and it makes final stereo track with this "binaural" or "Dolby
> > surround" enhanced podcast?
>
> Yes, there are various implementations of binaural spatialization in
> Pure Data (and I guess also in Supercollider and CSound ...).
>
> I am just making some ready-made tutorial patches regarding this topic
> for the linux audio conference - so if you wait some days you can have
> them !
>
> LG
> Georg
>
Great! I would love to have them!
Thank you in advance.
Valent.
Greetings all,
For the impatient, download at:
http://ico.bukvic.net/Max/munger1~_1.0.0.tar.gz
(270KB, includes source, Linux-Pd-i386, Mac-Max-i386, and Win32-Max-i386
binaries, and 3 cases of beer)
OVERVIEW
========
munger1~ (March 12, 2007 1.0.0 release)
a realtime multichannel granulator
a.k.a. the swiss-army-knife of realtime granular synthesis
a flext (cross-platform PD & Max/MSP) port of
the munger~ object from the PeRColate library (0.9 beta5)
http://www.music.columbia.edu/PeRColate/
Original PeRColate library by:
Dan Trueman http://www.music.princeton.edu/~dan/
R. Luke DuBois http://www.lukedubois.com/
Flext port and additions by:
Ivica Ico Bukvic http://ico.bukvic.net
Ji-Sun Kim hideaway(a)vt.edu
http://www.music.vt.edu
http://www.cctad.vt.edu
Released under GPL license
(whichever is the latest version--as of this release, version 2)
For more info on the GPL license please visit:
http://www.gnu.org/copyleft/gpl.html
ACKNOWLEDGEMENTS
================
Many thanks to Dan Trueman for open-sourcing this great object!
SOURCE INSTALL
==============
If you simply intend to use prebuilt binaries, please skip to the INSTALL
section. Otherwise take a big breath and read on...
1) You need the STK library, which can be downloaded from:
http://ccrma.stanford.edu/software/stk/
2) You also need to install the latest flext library (this is a library
that allows creation of externals for both Max/MSP and PD from the same
source). Version 0.4.x can be obtained from the following link:
http://grrrr.org/ext/flext/
The latest CVS version (0.5.1) is found in the Pure-Data CVS (this one is
recommended):
http://sourceforge.net/cvs/?group_id=55736
3) If you are using the latest CVS version (0.5.1), then before compiling
the source you will need to add the following to the top of the
flext/source/flstk.h file, right below the #define __FLSTK_H line:
#ifdef PI
#undef PI
#endif
This step will probably become quickly obsolete once Thomas updates CVS.
Until then, this is needed to be able to compile flext against stk.
4) To compile flext, read flext instructions (it boils down to running
build.sh with appropriate parameters and then editing two simple config
files, i.e. "build pd gcc build" or "build max gcc" or "build max msvc"
etc.)
You will need to edit buildsys/config-<platform-compiler-pdormax>.txt to
adjust paths to various folders.
Then you will need to edit the config.txt file. You do not need to include
SndObj for this external, but you do need the stk option to be properly set.
On Windows+MSVC, the STK flag at the time of this release does not work, so
you will have to use the included testmunger1 MSVC project file and adjust path
settings to compile munger1~.
5) Once stk and flext are compiled, go into munger1~ folder and type:
<path to flext folder>/build.sh <platform> <compiler> <build/clean/install>
NB: on Mac, <build/clean/install> is not needed. On Windows, please use MSVC
and open the testmunger1 project file in the root of the folder.
6) Once compiled, your binary will be created in a <maxorpd-platform>
subfolder (i.e. pd-linux, or max-darwin), followed by another subfolder
which reflects whether a threaded or singlethread flext was used. Inside you
will find your external.
INSTALL
=======
You can either use the prebuilt externals (found in the bin/ folder) or ones
built using the "SOURCE INSTALL" instructions above. Binaries are provided
for Intel-based Macs, Win32, and Intel-based Linux OS. The included prebuilt
binaries DO NOT REQUIRE you to install flext or stk as these are statically
linked.
1) Copy the external into your externals folder (i.e. /usr/lib/pd/extra or
C:\Program Files\Cycling '74\MaxMSP 4.6\Cycling '74\externals\, or
"Applications/MaxMSP 4.6/Cycling '74/externals)
2) Copy appropriate help file (found in the help/ folder) into the help
folder (i.e. /usr/lib/pd/doc/5.reference or C:\Program Files\Cycling
'74\MaxMSP 4.6\max-help, or "Applications/MaxMSP 4.6/max-help)
NB: Pd help file has a ".pd" extension, while Max/MSP help file has a
".help" extension.
3) Start your app (PD or Max) and create an object called munger1~. Right-click
(ctrl-click on Macs) and select "help" and this should open the help file
with additional documentation.
Questions? See OVERVIEW for contact and Q&A info.
Enjoy!
FAQ
===
The following is Ico's FAQ, so it may or may not reflect other project
participants' opinions, including original author(s) of munger~, flext, etc.
Q: Why port to flext?
A: The flext library (by Thomas Grill) is a layer which allows creation of
externals for both Max/MSP and PD without any alterations to the code
(obviously once it is adapted to use flext). While there have been a number
of Max/MSP <-> PD external ports in the past, many of them have become
outdated because such attempts required either maintaining one codebase
full of ugly #ifdefs, or worse--maintaining two sources. Either way, what usually
turned out to be the case is that original authors did not have the time,
interest, or simply the software/hardware to deal with the newly generated
overhead and/or test the code, while volunteers who made the original
porting efforts eventually moved on to other projects. The result was/is
outdated and/or broken externals. Flext circumvents this problem by allowing
one clean codebase to compile on both platforms, while also supplying in many
cases a cleaner (more legible) API and (as whipped cream on top) an
object-oriented environment (C++).
Q: Why bother with PD <-> Max/MSP cross-platform compatibility...
...when I use only <insert-your-favorite-application-here>?
...<insert-your-favorite-application-here> is better?
A: Choice is what makes us human (this is also what makes Arts so vibrant
and exciting). And while everyone's welcome to express their own
preferences, we also have to realize that in this case these same
preferences are also the main cause of a virtual divide which manifests
itself at everyone's detriment. Wouldn't it be nicer if we could share
externals transparently, or even better, open PD patches in Max and
vice-versa? This would help in both the cross-pollination of ideas as well
as creative efforts. This project has also taught me that creating
flext-ready externals is as easy as, if not easier than (due to the
aforesaid API's legibility), creating native objects (whether for PD or
Max/MSP). Finally, if all else fails, such externals are bound to reach a
wider audience, and are
much easier to maintain if cross-platform compatibility is to be pursued.
Q: If flext is so cool, why don't we see more porting efforts?
A: Good question. The fact is that flext is much more widely known among PD
users than it is among the Max/MSP community, so this seemingly one-way road
may have contributed to the current situation. One could only hope that
projects like this may help reverse this unfortunate trend.
Q: So, is all really that peachy in the flext-land?
A: Well, our lives teach us that nothing is truly free in this world. Flext
is no exception. Its "fees," however, are not tied to our checkbooks. Rather,
they manifest themselves in a slightly greater CPU overhead in signal flow
due to message translation. Thus, one could consider flext a "middle-person"
between the <app-of-your-choice> and the external. This, however, in today's
world is so negligible that during the testing phase I was unable to measure
any noticeable CPU-overhead difference.
Another consideration is that flext might not be complete (see KNOWN ISSUES
for an example). That being said, in its current state it did the trick for
a relatively complex external such as munger~ or even FFTEASE collection
which had been ported several years ago. All this leads me to believe that
it is more than ready for day-to-day use.
Q: I already have Dan and Luke's awesome PeRColate lib. Why should I
download this one?
A: This is a cross-platform port of the latest version with several new
features. Thus, it allows those platforms which have not had beta6
available (Linux, Windows) to finally dig into all the goodies it brings.
Plus you also get the cool stuff such as verbose modes, discrete panning,
more thorough documentation, up to 500 grains per sample (instead of 50), up
to 24-channel output (instead of 2 or 16, depending on which one you used),
etc.
KNOWN ISSUES
============
munger1~ has been tested extensively on Linux+PD, OSX+Max/MSP and
Win32+Max/MSP setups, suggesting that it should work on other setups as
well. Your mileage may vary, though.
Currently there is only one known issue in the wild which requires changes
to flext in order to be fixed. Namely, if you use munger1~ object in
conjunction with an external buffer in PD (known as an array) and if you
dubiously decide to delete that particular buffer in the middle of your
performance while munger1~ is still associated with it, this will
[unsurprisingly] crash PD. Max/MSP currently has a check implemented against
that via the flext layer, so Max/MSP will simply stop outputting anything
until the buffer is reset. The flext author is aware of this and a PD fix
should appear in the flext CVS soon. That being said, the lingering question is
why would you want to do this in the first place...
FYI, even though munger1~ allows up to 500 simultaneous grains per sample
and has been compiled with all available optimizations (SSE, Altivec is
supposedly available via flext but has not been tested), on MBP (Core Duo
1.83GHz) I was unable to get more than 160 simultaneous grains per sample
(or ~32,000 grains/second) without dropouts, even though CPUs were not
getting maxed out, so something else might be the cause of this limitation
(flext?). A Win32 machine (a 3-year-old AMD64 3000+) fared marginally better
at around 165 simultaneous grains per sample (or ~33,000 grains/second) before
its CPU was maxed out. Linux on the same AMD64 3000+ hardware fared the
best. It topped out at 47,999 grains per second at a 48 kHz sampling rate;
for some reason the sampling rate appears to be the upper limit (i.e. if you
run PD or Max/MSP at lower sampling rates, your upper limit will be
restricted to the sampling rate), even though the code allows for multiple
initiations of grains per cycle. This, however, is also the way original
munger~ works.
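The figures above follow from a simple relationship: sustained grain rate
= simultaneous grains / average grain duration, with an observed ceiling of
one grain initiation per sample cycle. A rough Python sketch (the grain
duration here is an assumed free parameter for illustration, not a value
taken from munger1~):

```python
SAMPLE_RATE = 48000  # Hz; observed ceiling: one grain initiation per sample

def grain_rate(simultaneous_grains, grain_duration_s):
    """Sustained grain initiations per second, capped at the sample rate."""
    return min(simultaneous_grains / grain_duration_s, SAMPLE_RATE)

# If 64 simultaneous grains already saturate the 48 kHz ceiling (the
# Linux/PD case above), the implied average grain duration is:
implied_duration = 64 / SAMPLE_RATE  # ~1.33 ms per grain
```

On this model, 160 simultaneous grains yielding only ~32K grains/second
(the Win32/Mac case) would imply either longer effective grains or a
per-block scheduling bottleneck rather than a CPU limit.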
An interesting bit is that while on the Linux/PD combo 48K grains are
already reached at 64 simultaneous grains, on Win32/Mac even 160
simultaneous grains yield only ~32-33K grains. Could this be a flext bug?
Best wishes,
Ivica Ico Bukvic, D.M.A.
Composition, Music Technology, CCTAD, CHCI
Virginia Tech
Dept. of Music - 0240
Blacksburg, VA 24061
(540) 231-1137
(540) 231-5034 (fax)
ico(a)vt.edu
http://www.music.vt.edu/people/faculty/bukvic/