After a few days of careful consideration, I've decided that I no longer
want to be involved in developing software for Linux. It's been a
difficult decision to make, having used Linux as my main desktop OS for
around 10 years now, but I feel that the community as a whole is going
in a direction that is not compatible with my moral compass.
To that end, I'm pulling everything I've written under the GPL or a
GPL-compatible licence. If there are copies out there, great, feel
free. Anything I'm interested in will be rewritten from the ground up
under a BSD-style licence, which to be honest I've always preferred.
Part of the reason for this is the increasing difficulty of using binary
drivers with Linux. I know a lot of people don't like them, but I like
to have things like accelerated video *and* custom kernels without all
the buggering about involved in getting it working. In particular the
Debian-based distributions seem to be intentionally hamstrung when it comes
to supporting binary-only drivers, which makes running the custom kernel
required for low-latency work *and* the binary nVidia driver almost
impossible.
I don't want to be associated with this nonsense any more. It's not
what Free Software is about.
Gordonjcp
STUDENT: Is audio and music software your thing? Need a $4500
part-time job for this summer? Want to have fun developing free
software?
We are happy to announce that the CLAM project is participating,
for the first time, in the 2007 edition of Google's Summer of
Code.
CLAM is not the only Linux audio project in GSoC; Ardour and
Mixxx are also participating. See [1] for the full list.
APPLICATION DEADLINE:
24 March 2007
Find all the information in the CLAM web:
http://clam.iua.upf.edu
and its SoC wiki page [2]
We are very excited to offer a number of ideas that would benefit
CLAM now that it is about to reach its 1.0 release. We also
encourage you to propose new ideas if you feel none of the ones
offered by the CLAM team suits your profile or interests.
Looking forward to working with you…
The CLAM team
1. http://code.google.com/soc
2. http://iua-share.upf.edu/wikis/clam/index.php/GSoC_2007
--
Google Summer of Code is a program that offers student developers
stipends to write code for various open source projects. Google
will be working with several open source, free software and
technology-related groups to identify and fund projects
over a three month period. Historically, the program has brought
together over 1,000 students with over 100 open source projects,
to create hundreds of thousands of lines of code. The program,
which kicked off in 2005, is now in its third year, following on
from a very successful 2006.
Greetings:
Okay, I'm stumped by this one. I'm trying to compile a small app on my
old machine, which is running Dynebolic with the required devel packages.
I've built other apps in this environment with no problem, but when I
compile this one app I keep receiving this kind of error:
/tmp/cc30Bs2G.o: No space left on device
So how do I get more space in the /tmp directory? I tried setting TMP
and TMPDIR, but got no joy. I'll gladly supply more details if needed, but
that's basically what's happening. Any suggestions from the gurus?
Best,
dp
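A quick thing to check, sketched below in Python (the ~/bigtmp path is just an example): gcc writes its intermediate files to the directory named by TMPDIR, and a common pitfall is setting the variable in the shell without exporting it, so the compiler never sees it. On live-CD systems like Dynebolic, /tmp is often a small RAM-backed tmpfs, so `df -h /tmp` is also worth a look.

```python
import os
import tempfile

# Point temp files at a directory on a partition with free space.
# (From a shell this would be: export TMPDIR=$HOME/bigtmp)
os.environ["TMPDIR"] = os.path.expanduser("~/bigtmp")
os.makedirs(os.environ["TMPDIR"], exist_ok=True)

# tempfile caches its earlier choice; reset so it re-reads TMPDIR.
tempfile.tempdir = None
print(tempfile.gettempdir())  # should now be ~/bigtmp
```

If the printed path is still /tmp, the environment variable is not reaching the tools that need it.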
FOR IMMEDIATE RELEASE
Open Sound System v4.0 Released.
CULVER CITY, CA, March 15, 2007: 4Front Technologies is announcing the availability of Open Sound System (OSS)
version 4.0 for Linux, Solaris, FreeBSD, OpenServer 6 and UnixWare 7.
Open Sound System is a cross platform audio architecture that provides drivers for most consumer and professional
audio devices and comes with an API that allows applications to be simply recompiled on any of the supported
operating systems.
New Features:
o New and improved transparent Virtual Mixer engine
- Allows up to 16 applications to share the same "real" audio device.
- Supports recording and full duplex in addition to playback.
- Ability to mix stereo and multichannel audio streams up to 7.1/192 kHz/32-bit.
- Supports full 24 bit range without loss of precision during internal computations.
- mmap() support for games like DoomIII and Quake4.
- Each application has its own independent volume controls.
- Supports loop back recording.
o Full Solaris Audio Device Architecture (SADA) emulation on Solaris so that legacy
Solaris audio apps can run on Open Sound System drivers.
o Advanced Linux Sound Architecture (ALSA) Library emulation support so that popular ALSA apps
(the ones that use the ALSA library interface) can run on Open Sound System.
o 100% backwards compatibility for Open Sound System (OSS) v3 API.
o 64-bit internal processing guarantees audio fidelity and precision if the audio data needs to be converted.
o New device enumeration and mixer API makes it very easy to manage devices programmatically.
o Up-to-date native kernel interfaces and installation methods will enable OSS to keep up with changes
to the operating systems for the foreseeable future.
o Updated drivers for all devices supported in the older OSS v3.9x versions. Support for obsolete ISA bus
devices has finally been withdrawn from Open Sound System v4.0.
The virtual mixer in Open Sound v4.0 gives the user multiple virtual full-duplex multichannel audio streams.
It is possible to run a full-duplex VoIP session, watch a DVD in full 5.1 surround sound and play a popular video
game all at the same time, without any complicated device setup and configuration. Open Sound v4.0 brings unmatched
device and system management capabilities that make it ideal for, and easy to set up in, virtualized environments.
For more information and to download a free-for-personal-use copy of the software, visit 4Front's WWW site
at http://www.opensound.com.
--- xxx ---
All trademarks and copyrights belong to their respective owners.
Open Sound System is a trademark of 4Front Technologies.
Copyright (C) 1996-2007, 4Front Technologies, All Rights Reserved.
Contact: Dev Mazumdar
4Front Technologies
4035 Lafayette Place, Unit F
Culver City, CA 90232
USA.
Tel: (310) 202 8530 E-mail: info(a)opensound.com
Fax: (310) 202 0486 Web: http://www.opensound.com
On 13 Mar 2007, at 09:15, Valent Turkovic wrote:
> Hi,
> is it possible to record multi-channel speech audio and downsample it
> to two stereo channels BUT with horizontal audio positioning of each
> channel, i.e. like a Dolby Surround effect?
Sure, but you might be disappointed by the results: Pro Logic is not
very sophisticated (e.g. there are no separate rear left and right
channels, and the bandwidth is limited). Also, the only free software
implementation that I know of is not state of the art, partly due to
patent problems, and partly due to lack of interest.
There's a LADSPA plugin which does Pro Logic matrix encoding:
http://plugin.org.uk/ladspa-swh/docs/ladspa-swh.html#id1401
You will need some other software to mix your independent streams into
LCR channels (you can do it by ear with a mixer that has three or more
variable send buses), but once you have that you can feed it to the
plugin and get a Pro Logic compatible stream out.
PS: ignore the documentation; it has been tested, and it does work,
just not brilliantly. You will get better results if you EQ the mics
a bit to cut the level of the highs and lows, especially in the
rear channel.
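For intuition, the passive matrix at the heart of Pro Logic is easy to sketch. The toy encoder below (Python with NumPy) omits the band-limiting and the 90-degree phase shifts that real Dolby encoders apply to the surround channel, so it is an illustration of the matrix arithmetic only:

```python
import numpy as np

def prologic_encode(left, center, right, surround):
    """Toy passive-matrix encode of L/C/R/S into a stereo Lt/Rt pair.

    Real Pro Logic encoders also band-limit the surround channel and
    phase-shift it by +/-90 degrees; this sketch omits both steps.
    """
    g = 1.0 / np.sqrt(2.0)  # -3 dB gain for channels shared by Lt and Rt
    lt = left + g * center + g * surround
    rt = right + g * center - g * surround
    return lt, rt

# A matrix decoder then recovers center from Lt+Rt and surround from Lt-Rt.
n = 8
left = np.zeros(n)
right = np.zeros(n)
center = np.ones(n)         # a signal panned dead center
surround = np.full(n, 0.5)  # a signal sent to the rear
lt, rt = prologic_encode(left, center, right, surround)
```

With silence in L and R, summing the encoded channels recovers the center signal and differencing them recovers the surround signal, which is exactly what a matrix decoder exploits.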
You would get much better results with an ambisonic stream, but I
don't /think/ you can make stereo-compatible ambisonic streams, and
not many people have ambisonic decoders. Other people on this list
know a lot more about ambisonics though.
- Steve
Dear FireWire enabled Linux audio users,
libfreebob 1.0.3 is available as from today. It is downloadable at our
SourceForge page:
http://downloads.sourceforge.net/freebob/libfreebob-1.0.3.tar.gz
This is a maintenance release for the freebob 1.0 branch, and contains
no new features.
It fixes two bugs:
- a buffer reset bug that prevented jackd freewheeling from working.
- a bug that caused MIDI output to fail on all but the last channel of a
device.
Greets,
Pieter
Festival/conference about live coding in Sheffield, UK this Summer...
======================================================================
_ ___ ____ ____ _ _ ____ _
| | / _ \/ ___/ ___| | | (_)_ _____ / ___|___ __| | ___
| | | | | \___ \___ \ | | | \ \ / / _ \ | / _ \ / _` |/ _ \
| |__| |_| |___) |__) | | |___| |\ V / __/ |__| (_) | (_| | __/
|_____\___/|____/____/ |_____|_| \_/ \___|\____\___/ \__,_|\___|
---------------------> LOSS Livecode Festival <-----------------------
Sheffield, UK -- 20-22 July 2007
http://livecode.access-space.org/
In association with Access Space, TOPLAP and lurk
When we improvise music, we are creating music while it is being
performed. "Live Coding" is the creation of software while it is being
executed; the software in turn generating music or video.
Thanks to dynamic programming languages, the live coder is able to
modify and extend their program without restarts, their music and/or
visuals growing with the code that describes them. This way of working
allows instant results for every source code edit. Programming becomes
a fast, creative process - expressive enough that a whole audio/visual
performance may be created as software.
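The "no restarts" idea can be illustrated in any dynamic language. In the toy Python sketch below (the pattern function and its redefinition are invented for illustration), a running loop looks up its pattern on every tick, so re-evaluating edited source changes the output mid-performance:

```python
# A running "performance" loop looks up its pattern function on every
# tick, so redefining the function changes the music without a restart.
state = {}

code_v1 = "def pattern(t):\n    return [60, 64, 67][t % 3]"  # C major arpeggio
code_v2 = "def pattern(t):\n    return [60, 63, 67][t % 3]"  # now C minor

exec(code_v1, state)
notes = [state["pattern"](t) for t in range(3)]   # [60, 64, 67]

exec(code_v2, state)  # the "live edit": rebind pattern mid-performance
notes += [state["pattern"](t) for t in range(3)]  # [60, 63, 67]
print(notes)
```

Real live coding environments such as SuperCollider or Impromptu do essentially this, with scheduling and sound synthesis attached.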
Live Coding began during the 1980s, primarily with FORTH and Lisp. In
recent years new live coding environments and languages such as Chuck,
Fluxus, Impromptu and SuperCollider 3 have appeared, with enthusiastic
communities growing around them. Live Coding performances have also
used Smalltalk, PureData, Scheme, Perl, Haskell, Ruby, Python...
In early 2004 the "Temporary Organisation for the Promotion of Live
Algorithm Programming" (TOPLAP) was formed to support open dialog
between all live coders. Since its early beginnings in a smoky bar in
Hamburg, TOPLAP has reached 178 members worldwide, gaining coverage in
mass media and collaboratively organising several international
meetings.
In 2005 Access Space initiated the L.O.S.S. project
(http://loss.access-space.org) to support free music creativity and
distribution. It featured a series of commissions leading to a
Creative Commons licensed audio CD and repository website produced
entirely with open source tools.
Continuing their series of LOSS commissions and events, Access Space
have teamed up with TOPLAP and lurk to create a three day
international festival, bringing live coding musicians and video
artists together to explore and showcase new approaches in live
performing and participatory arts.
---> CALL FOR PARTICIPATION <-----------------------------------------
Your performance and/or presentation proposal is called for.
For the latest version of this call, please refer to
http://livecode.access-space.org/
Commissions are available to help realise ambitious projects and
performances. Presenters and performers will gain free entry throughout
the festival, and those without institutional support may apply for a
small bursary.
---> IMPORTANT DATES <---
* 14th March - Call for participation
* 14th April - Deadline for proposals
* 1st May - Notification of acceptance
* 16th June - Copy deadline for proceedings (to be confirmed)
* 20th-22nd July - Conference - schedule TBA
---> PRESENTATIONS <---
Short (up to 20 minute) presentations during a day long symposium. The
remit is broad, but possible subjects may include
* A demo of a novel live coding language/environment
* Historical context of live coding
* Live coding without computers
* Critique of live coding practice
* Live patching
* Reflections on live coding experiences
* Adapting general purpose languages to live coding
* Analysis of live coding performances
* Live algorithms that live code
* Life coding
* Portable live coding devices
* Reflective/self-modifying code
* Live visualisation of sourcecode
* Collaborative networked live coding
* ...
Proposals do not have to be long - however much or little you need to
explain your ideas is fine.
If you are unsure if you can make it, submit your idea anyway - we may
be able to accommodate a small number of remotely streamed
presentations for those unable to attend in person.
There will also be time for a brief (around three hours) introductory
workshop. Please indicate if you would like to be involved.
---> PERFORMANCES <---
There will be at least two evenings of performances, ranging from 10
to 40 minutes. Please outline what you would like to perform,
including technical requirements. We plan to have at least three data
projectors, many pairs of small speakers for participatory
improvisations, enough headphone amps for 100 pairs of headphones, and
a big stereo sound-system for 'traditional' performances. Please state
your preference, and feel free to be creative (see commissions below).
We are also thinking about a pre-event in London, UK some days before
the festival; let us know if you would like to take part.
---> PROCEEDINGS <---
If your proposal is accepted you will be encouraged to submit short
texts and images for publication in the proceedings. All speakers and
performers will receive a free copy at the beginning of the
conference.
---> COMMISSIONS <---
If you would like time or resources to develop a new way of
performing, some new language or software feature, or something else
interesting then please include a short estimated budget in your
brief, which may include an artist fee. Note that due to funding
constraints the project should have a strong audio component. The
maximum commission will be £1000 (about 1470 euros).
---> BURSARY <---
A small bursary is available to contribute towards travel and
accommodation. Please include an estimated budget for your attendance
and we will apportion this money based on need. Money is, however, very
short; if you are a member of an academic institution we are keen to
help you apply for local funding.
---> PROPOSAL SUBMISSION <---
Preferably in plain text, but all common formats are accepted.
Supporting material, including web links to previous work, audio
and video files, is welcome but not mandatory.
Proposals should arrive before midnight, 14th April 2007.
Proposals are accepted by email (preferred):
livecode(a)access-space.org
Or by post:
LOSS Livecode
c/o Alex McLean
Access Space
1 Sidney Street
Sheffield S1 4RG
United Kingdom
If sending via email please do not include large attachments - either
include URLs or contact us in advance.
If sending via post include an email address so that we may confirm
receipt.
---> MAILING LIST <---
As members of the "keep avant garde internet tidy" campaign, we keep
our cross posts to a minimum. To continue receiving news of the
conference, please sign up to our mailing list:
http://lists.lurk.org/mailman/listinfo/lc/
---> ABOUT ACCESS SPACE <---
Based in Sheffield, Access Space is the UK's first "Free Media Lab"
- a community space equipped with locally recycled computers running
free, open source software. It provides a framework, resources and
support for self-directed learning, arts and creativity. Taking part
is totally free, and anyone can walk in and contribute:
http://access-space.org
---> FOR MORE INFORMATION <---
Don't hesitate to email questions to the submission address
above. The conference website is not yet ready, but more information
about live coding may be found at the official TOPLAP wiki:
http://toplap.org/
Hope to see you in July!
LOSS Livecode is funded by Arts Council England, Yorkshire and The PRS
Foundation.
======================================================================
--
Alex McLean
http://yaxu.org/
http://slub.org/
http://lurk.org/
http://doc.gold.ac.uk/~ma503am/
Continually breaking the drivers forces someone to look over them when updating, and maybe fix other problems.
Taybin
-----Original Message-----
>From: Christian Schoenebeck <cuse(a)users.sourceforge.net>
>Sent: Mar 14, 2007 10:17 AM
>To: The Linux Audio Developers' Mailing List <linux-audio-dev(a)music.columbia.edu>
>Subject: Re: [linux-audio-dev] Getting out of the software game
>
>On Wednesday, 14 March 2007 14:16, Paul Davis wrote:
>> in theory, you certainly can. but the kernel development team, and linus
>> in particular, are not interested in an engineering effort/long term
>> approach that makes this feasible. if you define a stable driver binary
>> interface, you can change the kernel out around it and drivers keep
>> working. linus has made it clear that he sees no reason to do this, and
>> is perhaps even opposed to it for some possibly sound engineering
>> arguments (though that is open to debate).
>
>And what are these arguments?
>
>CU
>Christian
Hey everyone,
I'm working my way through a simple mixer application using ALSA's
mixer API. However, the mixer section of the documentation is blank,
so I've taken to reading through amixer's source code to try and
figure out how it does its thing.
It's a bit hard to piece together how everything fits.
I've got the basics of getting/setting volume values and getting
volume ranges from mixer elements.
I'm a bit confused by how to get element types. It seems like amixer
opens the ctl device to get a mixer element's type - am I right about
this? Requiring the ctl device to get info on an element from the
mixer device seems like a very confusing API.
If it does require a handle to the ctl device, could someone give me
the quick overview of how this works together in the big picture? If
not, I'd appreciate a quick explanation of the right way to determine
the type of a mixer element.
--
Ross Vandegrift
ross(a)kallisti.us
"The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell."
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37
> To start with have a look at Wikipedia. They have a pretty good section
> about audio engineering which covers a lot of topics including
> stereophony. (At least the German edition does.)
>
>
>
> Yours sincerely,
> Dennis Schulmeister
Thank you for your suggestions.