Hi Everyone,
Version 0.5pre3 of FreeWheeling, a live looper for Linux, is now available.
This release is focused on using FreeWheeling together with other Linux audio
apps, in a live setting. I've done quite a bit of improvising lately, using
FreeWheeling with Linux softsynths such as Hexter, Beatrix, Aeolus, and
LinuxSampler.
Along with new features to help you use FreeWheeling to control a live audio
setup, there are two new video tutorials, and new music that demonstrates live
FW improv with synth plugins.
Thanks to Mark Knecht for inspiring tutorial 3, and to Sean Bolton for his
wicked work on Hexter, which is featured prominently in the new music and videos.
http://freewheeling.sourceforge.net
Details:
2005-03-11 v0.5pre3
New Features
------------
* A new way to tap downbeat and tempo (tap-pulse event)
* Switching of metronome sound for pulses (switch-metronome event)
* FreeWheeling events can now trigger MIDI events, so you can
control other audio apps from within FreeWheeling--
For example, you can fire off changes to your modular synths
from your keyboard or footpedals.
This eliminates the need for an extra MIDI router app, since
it is now built into FreeWheeling. Your custom FreeWheeling
setup defines how MIDI events are generated.
* More flexible, clearer configuration syntax, with better error
checking and colored warning messages.
* Full input & output implementation of MIDI program change
and pitch bend messages.
* An example of the new MIDI output features:
A MIDI patch changer has been configured on the left/right
arrow keys. An on-screen display shows the patch number.
See .fweelin.rc and the video tutorial "Hookups".
* Faster, tighter memory management for events.
* Scripts are now included that load FreeWheeling and the Hexter DX7
softsynth and connect them to a LADSPA reverb and tube amp sim -- see
scripts/README.
Fixes
-----
* Fixed a config bug that could cause "PreallocatedType: no instance
available" errors on startup
* Compile fix for GCC 3.4.x on Mandrake -- 'parenthesized type-id' error
Kind Regards,
-JP Mercury
g'day
i've just finished re-recording a song that a few people on the list
have expressed an interest in mixing ... i'm just wondering the best way
to make the tracks available, and i was thinking that the solution would
be to tarball the separate tracks and make a .torrent file out of them.
the thing is, i haven't got that much space on my server, and it seems a
bit of a waste to take all the time to upload it there (i only have an
upload speed of, like, 7 KB/s or something) when i could just share it
directly in the superb way that bittorrent does.
also, what would be the best way to share the separate tracks? as *.wav
files or *.flac or something else?
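For a rough sense of scale (all figures assumed, not from the post: a
4-minute song, eight mono 44.1 kHz / 16-bit stems, and FLAC compressing to
about 55% of WAV size), FLAC roughly halves the upload while staying lossless:

```python
# Back-of-envelope sizes for sharing multitrack stems.
# Assumed figures (not from the post): 4-minute song, 8 mono tracks,
# 44.1 kHz / 16-bit PCM, FLAC at ~55% of the WAV size.

seconds = 4 * 60
tracks = 8
wav_bytes_per_track = seconds * 44100 * 2         # 16-bit mono PCM
wav_total_mb = tracks * wav_bytes_per_track / 1e6
flac_total_mb = wav_total_mb * 0.55               # typical lossless ratio
upload_kb_s = 7                                   # the upload speed above

wav_hours = wav_total_mb * 1000 / upload_kb_s / 3600
flac_hours = flac_total_mb * 1000 / upload_kb_s / 3600

print(round(wav_total_mb))   # 169 (MB as raw WAV)
print(round(wav_hours, 1))   # 6.7 (hours to seed the WAVs once)
print(round(flac_hours, 1))  # 3.7 (hours as FLAC)
```

Either way BitTorrent makes sense: the initial seed is slow, but after that
the swarm carries the load.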
shayne
For those interested in the use of Linux in more traditional environments:
"Indianapolis High School Band Makes History by Performing First Paperless
Concert
"Instead of reading their music from printed sheets during their performance,
the band members used a Linux-based notepad device from Freehand Systems called
the MusicPad Pro Plus. This was the first time in history that a high school
band gave a concert using digital music notation in place of sheet music."
http://www.marketwire.com/mw/release_html_b1?release_id=83186
---
FreeHand Systems does not seem to mention on their website that their product
is Linux-based:
http://www.freehandsystems.com/
Cheers,
Andreas
Openlab#1 Friday 1st April
The Foundry (old street tube) London
starts at 8 pm
A foolish night of opensource audio/visual action!
It's free! but, erm - you'll have to pay for the beer :)
http://www.pawfal.org/openlab/
slick lister (Pd) & nebogeo (fluxus)
sonicvariable (Pd) & oli (Gem)
Jeremah (Pd) & acpi (pdp)
yaxu paxo (SC)
claudiusmaximus (Pd) & sonicvariable (Blinkenlights)
mattin - (Pd)
karl (Pd) & carlos (pdp)
I wrote:
> It's also really worth playing with the LADSPA swh impulse convolver
> plugin. Guitar amps sound really good on hammond! Also the TAP preamps
> are nice. A bit of fuzz and grit + limited frequency range and odd
> resonances really brings it to life.
> The one thing no midi hammond can ever do is the way different tones
> come in at different times as you press down the key. This means you
> can kinda flick the keys and just get the top drawbar to plip a little
> and the percussion to ping.
John Check wrote:
"Can you elaborate on that? Is it an artifact of the differing
wavelengths or
the physical construction?"
It's the way all the key contacts don't touch the bus bars at the same
time, so as you press down the key, the high drawbars and percussion
sound first, and then as you press down a tiny bit more the other
drawbars sound. I think my hammond's keyboard is probably more knackered
than most, so I notice it and use it more.
Hello all,
I recently downloaded the mpegplus encoder and decoder from
musepack.net. The decoder works fine, but every time I attempt to use
the encoder, I get an "Illegal Operation" output from the console.
Anyone have any idea what may be going on here?
Thanks,
Brad
Shayne, Wolfgang-
It's good to hear that you two are interested in Net jamming. I read your
ideas and I think we have a good starting point. Let's have a look-
> >What I want to do with FreeWheeling is to have users able to connect to a
> >common jam room. As different users capture loops from their improvisations,
> >the loops become available to other users in real-time. Since the loops are
> >synchronized to a common downbeat and tempo, Wolfgang in Germany can take
> >Latifah in Brooklyn's loops and add them to his own improvisation.
> that sounds idyllic ... i remember a similar concept with arturia storm
> (version 2.0, i think) where you could connect to a sort of chat-like
> room and share loops and samples with other users ... i think a good
> idea would be to have different "song" rooms, created by a particular
> user who would define the tempo, key etc of the song - perhaps you could
> preview a room to see if it took your fancy - and joined by others
> who would add layers or segments to it ... this would be a pretty complex
> implementation, though ...
I like the idea of having different rooms for sharing loops. And I also like
the idea of previewing.
Wolfgang seems to be coming at this more from the audio sync point of view,
while Shayne addressed the ways in which loops could be shared.
My take on this is that we can allow several users to connect together to form
a session, or room. A room is populated by users, and also loops. The users
have both live audio (inputs and outputs) and possibly their own library of
loops. When we enter a room, we are able to preview the live audio of
different users to hear what they are doing.
I don't see having a single audio stream from a room that everyone jams in.
Wolfgang was mentioning synchronization, and I agree that it would be difficult
to synchronize all those clients. So I would suggest turning the problem on its
head-- why not let each user develop his own improvisation using the loops of
the other users. So the session can go in several directions at once. As
Shayne grabs a new loop from his guitar, it appears on Wolfgang's screen.
Playing the loop, Wolfgang is inspired and grabs something else to add another
layer. Meanwhile, Mercury is listening to Wolfgang's mix and decides to
improvise a break. He cuts out several of Wolfgang's loops and adds some
thinner break loops of his own. He does this in his own mix, so it doesn't
affect the
others. Shayne and Wolfgang finish their loopy dialog and see that Mercury has
gone off on a tangent-- so they grab what Mercury is doing and it becomes a
break in their own improvisations.
In this way, I see a session as being a kind of quantum field of musical
possibilities. Different users contribute new loops as they are inspired. We
can peer into another user's sound, but we can also work on our own.
Besides loops, we can share live audio, but we will always be hearing it with
some latency. Perhaps we can choose whether we want the lowest possible
latency, or to quantize to the next beat.
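The quantize-to-the-next-beat option can be put in concrete terms: given the
measured network latency and the room's shared tempo, the extra delay needed
to land on a beat boundary is easy to compute. A minimal sketch (the function
name and example figures are mine, not from the thread):

```python
import math

# Sketch of "quantize to the next beat": given a measured network
# latency and the room's shared tempo, compute how much extra delay
# aligns an incoming remote stream to a beat boundary.

def quantize_delay(latency_s, bpm):
    """Extra delay (seconds) so latency + delay lands on a beat boundary."""
    beat = 60.0 / bpm
    return math.ceil(latency_s / beat) * beat - latency_s

# e.g. 180 ms of latency at 120 BPM (one beat = 0.5 s):
extra = quantize_delay(0.180, 120)
print(round(extra, 3))  # 0.32 -- delay the stream a further 320 ms
```

At moderate tempos the worst case is under one beat of added delay, which is
the trade-off against the lowest-latency mode.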
Wolfgang mentioned bandwidth. I agree that's an issue. I think good results
could be achieved with compressed codecs -- Ogg Vorbis springs to mind. Good
quality OGGs of loops could be shared quite quickly.
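To put rough numbers on the bandwidth point (all figures assumed, not from
the thread): a 4-bar loop at 120 BPM runs 8 seconds, and at a typical Vorbis
bitrate of around 128 kbit/s that is only on the order of 128 KB:

```python
# Rough transfer estimate for sharing one loop as an Ogg Vorbis file.
# Assumptions (not from the thread): 4 bars of 4/4 at 120 BPM,
# ~128 kbit/s Vorbis, a ~7 KB/s upload link.

bpm = 120
beats = 4 * 4                              # 4 bars of 4/4
loop_seconds = beats * 60 / bpm            # 8.0 s of audio
bitrate_kbps = 128                         # typical Vorbis quality
size_kb = loop_seconds * bitrate_kbps / 8  # 128 KB
upload_kb_per_s = 7
transfer_seconds = size_kb / upload_kb_per_s

print(loop_seconds)              # 8.0
print(size_kb)                   # 128.0
print(round(transfer_seconds))   # 18
```

So even on a slow home uplink, a new loop could appear on the other users'
screens well before the jam moves on.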
And if this works, we could allow rooms to persist, so that a server stores
the loops and 'scenes' (collections of playing loops and settings), allowing
others to connect later and add to the palette.
It sounds like both of you would be into testing such a system. If you have
more design ideas, please let me know. And while this won't save the world, I
do think that there is a lot of potential for a unique type of collaborative
music making here. And I do think there are social implications whenever we
change the way music is made.
Peace,
Mercury
ps Nothing will ever replace two hands touching, two faces looking at each
other, two birds singing in the wilderness
> no worries - i'm just glad a few people find this idea worth
> pursuing, cos i really think the potential is huge - the internet
> and digital audio has changed the way we think about music, but it
> hasn't much changed the way we *make* music yet .... imagine ....
>
> shayne
that's excellent!
ron
--- Andreas Kuckartz <A.Kuckartz(a)ping.de> wrote:
> For those interested in the use of Linux in more
> traditional environments:
>
> "Indianapolis High School Band Makes History by
> Performing First Paperless Concert"
> [...]
> http://www.marketwire.com/mw/release_html_b1?release_id=83186
Hi Jan,
Thanks, but I am using Gnome 2.8 (standard with Ubuntu) & I don't have
any KDE stuff on my system (I am not a fan of KDE).
Gavin.
So back to square one, I'm afraid...
> Message: 2
> Date: Mon, 21 Mar 2005 18:25:09 -0600
> From: Jan Depner <eviltwin69(a)cableone.net>
> Subject: Re: [linux-audio-user] Audacity error
> To: A list for linux audio users <linux-audio-user(a)music.columbia.edu>
> Message-ID: <1111451109.5277.7.camel@eviltwin>
> Content-Type: text/plain
>
> My first guess would be that artsd or some other sound thing is
> running. Try killall -9 artsd and see if that helps. If you're using
> KDE even the stinking system beep starts up artsd.
>
> Jan
>
> On Mon, 2005-03-21 at 17:52, Gavin Stevens wrote:
> > (de-lurk mode)
> >
> > Hi all,
> >
> > I keep getting an error when starting Audacity. It reads "There was
> > an error initialising the audio i/o layer. You will not be able to
> > play audio".
> >
> > It then starts & works, apart from not being able to play what it is
> > doing.
> >
> > The strange thing is that it worked fine the first time I used it (I
> > recently "rested" Debian in order to try Ubuntu, so it's a new
> > installation). Every time since then, it has shown this error.
> >
> > Everything else seems to be working fine on the audio front: XMMS is
> > happy, MIDI is working, even Audacity is functioning, but I have to
> > play files saved in Audacity via XMMS.
> >
> > Is there something silly that I've missed? I can't work out why it
> > would play audio once & then not subsequently.
> >
> > Any help appreciated.
> >
> > TIA
> >
> > Gavin.
Please excuse the double post,
I hit the moderator and realized I was not subscribed :-)
=========
Hi all,
I am writing a multi-streamed audio player for an embedded linux system
and am a little confused about what technology will accomplish what task for
me (I've been reading up but thought that maybe some of you might easily
point me in the right directions).
- Is JACK a suitable place to implement entire audio pipelines ?
(i.e. if I have one "jack client" for each link in the pipeline; one
reading an mp3 file, another decoding the mp3 file and outputting
pcm data and another one creating FFT data for other purposes etc.)
- Is ALSA capable of really "mixing", or does it only expose the mixing
commands supported by the hardware ?
Some (or most) sound cards come with a DSP with a bunch of
funky fresh features on it (like mixing two or more input channels
into one or more output channels + control volumes on inputs
and outputs + equalizer etc.)
My initial assumption is that a mixer with fine-grained control
should be implemented as close as possible to the hardware
(assuming that the driver will make use of any hardware acceleration
and then fall back to software routines where needed).
Does ALSA offer me an api that will allow me to "mix" streams ?
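On the ALSA question: yes, ALSA can mix purely in software, independent of
the hardware, via its dmix plugin. A minimal ~/.asoundrc fragment along these
lines (the card name, rate, and ipc_key are illustrative) lets several
applications share one playback device:

```
# Hypothetical ~/.asoundrc: route the default device through dmix,
# ALSA's software mixer, so multiple streams can play at once.
pcm.mixed {
    type dmix
    ipc_key 1024        # any integer unique on the system
    slave {
        pcm "hw:0,0"    # first card, first device
        rate 44100
    }
}
pcm.!default {
    type plug           # adds format/rate conversion in front of dmix
    slave.pcm "mixed"
}
```

Cards whose hardware can mix (multiple playback subdevices) don't need this;
dmix covers the common single-stream case. Fine-grained per-stream volume
control still has to happen in the application, or in something like JACK
layered above it.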
The design I'm probably going to go with is:
- GStreamer to handle audio decoding & any mixing / filtering
- Jack to obtain RCA & Microphone Input channels and write pcm
data to output through ALSA (possibly making ALSA transparent to
the player application ?).
As you can clearly see,
/me is a little lost
Any help & pointers are greatly appreciated.
Cheers,
-Tristan