Hi,
ambient fun with 100Hz dynamic wavetable oscillator and physics.
Track is called "scans", a real étude again, as it includes the Pd
patch used to synthesize and record it. Both are at:
http://footils.org/cms/show/41
Ciao
--
Frank Barknecht _ ______footils.org__
_ __latest track: "scans" _ http://footils.org/cms/show/41
Hi all,
Just thought it was about time I posted some music - the first two tracks in a
project of mine to grow acid techno:
http://www.archive.org/audio/audio-details-db.php?collection=opensource_aud…
The tracks were made and recorded in realtime, using a form of genetic
programming which develops formal production rules (loosely based on
L-systems) for a text-based musical score language. This is good for taking
melodic or percussion patterns and slowly developing them, but in some cases
the changes can be quite radical and can take you by surprise. You can also
hand-edit the rules on the fly - which is handy for things like forcing a 4/4
beat, which is sometimes desirable :)
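To give a flavour of what the rewriting step looks like, here's a tiny Python
sketch of the general idea (not the actual code - the rules and score symbols
are made up for illustration):

import random

# Made-up production rules: each score symbol can be rewritten into a longer
# phrase.  "k" = kick, "h" = hat, "s" = snare, "." = rest.
rules = {
    "k": ["k", "k.", "kh"],
    "h": ["h", "hh", "h."],
    "s": ["s", "s.", ".s"],
    ".": [".", ".h"],
}

def develop(score, generations=4, seed=None):
    # Rewrite every symbol once per generation, so patterns grow slowly
    # but can occasionally change quite radically.
    rng = random.Random(seed)
    for _ in range(generations):
        score = "".join(rng.choice(rules.get(sym, [sym])) for sym in score)
    return score

print(develop("k.h.s.h."))

In the real system it's the rule set itself that the genetic programming
develops over time, but the rewriting step has this same shape.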
The drum sounds are sampled from the venerable 808, and there's also some
synthesised percussion in there; all the synthesis is done by a fixed-function
performance synth based on the code from SSM.
More info about the software here:
http://www.pawfal.org/Software/livenoisetools/
cheers,
dave
Would anyone care to comment as to whether this means it's
okay to redistribute this document, or not?
"INTERNAL USE ONLY" could be a showstopper.
A LICENSE IS HEREBY GRANTED TO COPY, REPRODUCE, AND DISTRIBUTE
THIS SPECIFICATION FOR INTERNAL USE ONLY. NO OTHER LICENSE
EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY OTHER
INTELLECTUAL PROPERTY RIGHTS IS GRANTED OR INTENDED HEREBY.
And, regardless of the status of (re)distributing the
document itself, has anyone got a feel for the openness, or
not, of the specification outlined within this document?
Is it ultimately a waste of time to use this sf2 standard
in conjunction with perpetually open-source projects?
If it is not open enough to take advantage of, then is there
any truly open soundfont-like standard anywhere on the planet?
The rest of it is here...
http://www.soundfont.com/documents/sfspec21.pdf
--markc
Hi!
At http://freepats.opensrc.org there is a Mellotron sample in FLAC
format. I'm very interested in this sound and I'd like to see it in a
soundfont, so it can be used with FluidSynth. Unfortunately I can only
convert the sample to .wav or .raw format and split the different samples
from one another; I can't do the actual soundfont creation (with Swami)
myself. Would anyone be interested in this kind of project?
As I said, I'd convert and split the FLAC file and do whatever else I can,
but for the final Swami touch I'd need some help, because I'm blind.
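For the conversion and splitting, my plan is roughly this (a Python sketch
using the soundfile and numpy modules; the file name and threshold are
placeholders, and it naively assumes silence between the notes):

import numpy as np
import soundfile as sf   # libsndfile can read FLAC directly

data, rate = sf.read("mellotron.flac")          # placeholder file name
mono = data.mean(axis=1) if data.ndim > 1 else data

# Very naive splitting: treat anything below the threshold as silence and
# cut at the gaps.  Assumes the file starts and ends in silence.
loud = np.abs(mono) > 0.01
edges = np.flatnonzero(np.diff(loud.astype(int))).reshape(-1, 2)

for i, (start, end) in enumerate(edges):
    sf.write("note_%02d.wav" % i, data[start:end], rate)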
I'm looking forward to hearing from someone!
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net - the Linux TextBased Studio guide
Greetings:
I've added another recording to my "music made with Ardour" page, a
guitar duet this time. It's a performance of an old Jimmy Dorsey tune
called "Maria Elena"; you can check it out here:
http://linux-sound.org/ardour-songs.html
Best,
dp
I was wondering what sound cards people recommend for streaming a 128 kbps
Ogg stream. I was thinking there was not much point in paying over
£200/$380. I realise "get as good a sound card as you can" is generally
accepted advice, but is there any point in spending £600 for this type of
thing?
I guess we are talking about a USB card (we want to use it with a
laptop).
Ben
--
Ben Edwards - Bristol, UK, England
Web Services, Database Development and general IT services
If you have a problem sending me email use this link
http://www.gurtlush.org.uk/profiles.php?uid=4
(email address this email is sent from may be defunct)
Wolfgang, Shayne, Jorge-
Jorge wrote--
>it's a great idea... I'm a "no musician" sound maker; I don't know if I
>could join, but there are 3 things I think should be agreed on before we
>start:
No, you are already participating, amigo. This is an idea that cannot manifest
unless a group of people find it worth pursuing. It isn't going to come out of
the music industry and so it has to begin with us.
>the motif: an abstract word so everybody can sync their minds (like Brian
>Eno has done...). It would be beautiful to discover what the word
>"love" means for those 4 people playing music - what it means for the 4
>together, not for each one of them... and how it grows as the jamming
>goes on...
It is nice that you can be so poetic in the midst of all this technical stuff.
>roles: depending on the moment, I think people should be able to have a
>role in the session so that the effort is shared (one does the beat,
>another the synth stuff, and the next moment they could swap roles, but
>at every moment they know what their role in the music is).
You are looking at the social side of net jamming, and I think it's great. I
agree that everyone has to have their own place in the music. And it's
intriguing to think of different games that could be played in such a jam room.
You, Shayne and Wolfgang have each come at this from a different angle and I
appreciate hearing from you all.
>chatting: it may be difficult to make music and type, but I have to be
>able to say to the other guy, grrreeeeEEEAt beat woooaaaooo eheheheh
Definitely a chat.
Shayne, are you in North America?
Wolfgang is in Deutschland.
Jorge, Portugal, wasn't it?
Cheers,
-Mercury
Hi,
I am Systems Manager for a student radio station in the UK called SURGE
- although it doesn't really answer your questions, I hope my little
history is of use/interest to you. We are quite a low-budget station,
so most things were done on the cheap.
Before I came to the station, there were no Linux boxen at all, but I
have been trying to go all Linux :-)
Originally there was:
- A studio PC (win 2k) running WaveCart
http://www.bsiusa.com/software/wavecart/wavecart.htm
- An office PC (win 2k) for MS Office and production - CoolEdit
I created an automation system (which we call TotalRequest), which is
based on a mixture of Perl scripts, MySQL and mpg123. It holds the master
copy of our music library, which is rsynced onto the Studio PC that
WaveCart uses. It can accept requests automatically via the website,
phone and SMS text messages.
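The core of it is really just a loop like this (a Python sketch of the idea
rather than the actual Perl; the table and column names are invented):

import subprocess
import MySQLdb   # assumed driver; the schema below is made up

db = MySQLdb.connect(host="localhost", user="radio",
                     passwd="secret", db="totalrequest")

while True:
    cur = db.cursor()
    # Take the oldest unplayed request first...
    cur.execute("SELECT id, path FROM requests WHERE played = 0 "
                "ORDER BY requested_at LIMIT 1")
    row = cur.fetchone()
    if row:
        track_id, path = row
        subprocess.call(["mpg123", "-q", path])   # blocks until the track ends
        cur.execute("UPDATE requests SET played = 1 WHERE id = %s", (track_id,))
        db.commit()
    else:
        # ...otherwise fall back to a random track from the library.
        cur.execute("SELECT path FROM library ORDER BY RAND() LIMIT 1")
        (path,) = cur.fetchone()
        subprocess.call(["mpg123", "-q", path])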
Then we managed to get decent Internet access to the station over a
fibre-optic link, and added a machine to run the website (Apache 2), mail
server (qmail) and NFS filestore.
Next came a server to record station output. We are required by the UK
government to keep a copy of the past 1000 hours/42 days. I run
darkice/icecast on the server, along with a simple Perl HTTP client to
record an hour's audio at a time to disc. The server is completely
independent from our streaming server, in case of failure.
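The recorder itself is nothing clever - roughly this, sketched in Python
rather than the Perl we actually use (the mount point URL is a placeholder):

import time
import urllib.request

STREAM = "http://localhost:8000/logger.mp3"   # placeholder icecast mount

while True:
    name = time.strftime("log-%Y%m%d-%H00.mp3")
    started = time.time()
    with urllib.request.urlopen(STREAM) as stream, open(name, "wb") as out:
        # Copy the stream to disc until the hour is up, then start a new file.
        while time.time() - started < 3600:
            out.write(stream.read(4096))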
Finally is our streaming server, which also runs darkice/icecast.
Pretty standard setup, with encoding to high and low quality MP3 and
Ogg.
For historical reasons, all the music is stored in the format that
WaveCart likes - which is MPEG Audio inside a Broadcast Wave file. The
metadata is stored in the wave file too (including segue and intro
times), which is extracted and loaded into MySQL.
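Pulling the metadata out is just a matter of walking the RIFF chunks in each
file, something like this sketch (in Python; the WaveCart-specific chunk
layout isn't reproduced here):

import struct

def riff_chunks(path):
    # Yield (chunk id, data) pairs from a RIFF/Broadcast Wave file.
    with open(path, "rb") as f:
        riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and wave == b"WAVE"
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            cid, length = struct.unpack("<4sI", header)
            data = f.read(length + (length & 1))   # chunks are word aligned
            yield cid, data[:length]

for cid, data in riff_chunks("example.wav"):   # placeholder file name
    print(cid, len(data))   # 'bext' holds broadcast metadata, 'data' the MPEG audio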
Most of the scripts the station runs on are custom written, and much of
that code is very SURGE-specific and hard for other stations to use. It
would be less effort to write them again than to try to make them usable
by other people.
However, I have an upcoming project to develop a JACK/OSC based studio
playout system to replace WaveCart. We plan to make it very much
client/server based, so that you can have multiple front ends
controlling a back end that plays the audio out.
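Nothing is written yet, but the rough idea is that any front end just fires
OSC messages at the back end, along these lines (a sketch using the
python-osc module; the address space is invented):

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # back end host/port, made up

# Invented address space: load a file into player slot 1, then start it.
client.send_message("/playout/load", [1, "/srv/music/track0001.wav"])
client.send_message("/playout/play", 1)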
nick.
> ----- Forwarded message from ben racher <bracher(a)iupui.edu> -----
>
>> Date: Thu, 10 Mar 2005 22:03:07 -0500
>> From: ben racher <bracher(a)iupui.edu>
>> Subject: [linux-audio-user] IUPUI Student Radio Station should be based on Linux
>> To: linux-audio-user(a)music.columbia.edu, linux-audio-dev(a)music.columbia.edu,
>>     Michael Schultheiss <schultmc(a)cinlug.org>, Matt Beal <mbeal(a)biosound.com>
>> Cc:
>>
>> Hello,
>>
>> I'm starting a student radio station at IUPUI in Indianapolis, Indiana,
>> and I want our entire audio infrastructure to be based on Linux. I've
>> got a rough sense of all the apps we need and which apps to set up on
>> which computers, but I thought I'd run the blueprints by you guys to
>> see if you could give me any feedback.
>>
>> Streaming/Web Server: Runs apache and icecast or the icecast mod for
>> Apache.
>>
>> Automation Computer: Runs some sort of playback program; I've been
>> keeping my eye on LiveSupport http://www.campware.org/ to schedule and
>> automate the station when DJs aren't present.
>>
>> Audio Archive: File server for our digital library, probably all FLAC
>> files, maybe Ogg, but I think we want FLAC in case we want to burn CDs.
>>
>> And this is the part that I need help on...
>>
>> Production Computer... so I've been tooling around with JACK and Ardour
>> and MusE (not to be confused with MuSE) and other JACK apps, and it's
>> all really cool and exciting. I never got sound input to even really
>> work in Linux until a couple of weeks ago. Yay for the 2.6.8+ kernels.
>> So here are my thoughts on setting up a workstation, and I don't even
>> know if this is possible, but that's why I'm mailing you guys. One
>> department has kindly donated a brand new Dell Poweredge dual Xeon
>> 2.4 GHz something or other. The rest of our computers are from the
>> university junkyard of midgrade PowerPC G4s and Pentium 3s. So the
>> Poweredge is our gem computer out of all the other crappy computers.
>> Is there any way for me to set up the speedy new Poweredge as some kind
>> of audio production render farm, and get the PPCs and the Pentium 3s to
>> connect to it as production terminals? Because, although multi-tracking
>> on the G4s and Pentium 3s is possible, doing extensive work with FX
>> plugins is probably out of the question.
>>
>> See what I'm getting at? Also, the Poweredge has about a 500 GB RAID
>> array with it, which would be nice to use for storing our audio on, and
>> maybe even to use as our digital archive as well, but that might be
>> pushing it if we are doing audio production work on it too? I'd imagine
>> this might be the case, but I don't see why FTPing FLAC files over a
>> local network would be too much of a burden on the RAID drives or dual
>> processors. Another reason why it would be nice to be able to connect
>> to the Poweredge remotely to do audio work is that it makes about as
>> much noise as a 747. So... it's not exactly an audio production
>> friendly unit.
>>
>> So these are my thoughts. Am I crazy... or is there some magical way
>> to make this happen?
>>
>> - Ben Racher
>> bracher(a)iupui.edu
>
> ----- End forwarded message -----
> > Maybe next time, if there is one, you can just try to restart ALSA
> > rather than reboot. If ALSA restart doesn't work, then try some
> > other things, I wouldn't know what, to see if you can get it going
> > again. This is just a generic debugging method that I'll use in lieu
> > of reboots. It sure helps to isolate where the problem might be,
> > whereas a reboot obscures where the problem might be.
>
> yes I should try that - thanks :) I'm going to try recreating the problem
> tonight. I'm not sure if I'll ever see it again or not...
Well, I finally recreated the anomaly, and yes, I just tried restarting ALSA
instead of rebooting and it seems to have gone away... hmmm. The way I
recreated it was by taxing the system a little - running something like 8
instances of mplayer at once.
hmmmm
Okay, looking at both libinstpatch and sf2text, I'm getting a feel for
what we might want.
I have something that does /some/ extraction of info into a formatted
file. Still no wavs, but that's just because I wanted to get
something going quite quickly.
Rather than ramble on, here's an (incomplete) example:
head{
name=User Bank
sfversion=2.0
#0 presets, 0 instruments, 2 samples
}
samples{
sample{
#Sample 0
name=0sib
loopstart=102260
loopend=104523
rate=44100
opitch=60
pitchc=0
samplelink=0
type=1
}
sample{
#Sample 1
name=6mi
loopstart=158834
loopend=159451
rate=44100
opitch=60
pitchc=0
samplelink=0
type=1
}
}
It's fairly easy to follow, I think. Comments start with a # - these
are put in automatically for the decoding to add some extra info for
the human reader.
Now for some detail. At the moment, you'll see I have missed out
samplestart and sampleend. I don't think we need this in the text
file because it just relates to the wavs once they are packed - our
compiler should work all that out at compile-time.
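Something like this is all the compiler would have to do when packing the
wavs (a rough Python sketch; the padding value is my reading of the spec and
needs checking):

PAD = 46   # the spec seems to want at least 46 zero points between samples

def pack(samples):
    # samples: list of (name, frame_count) taken from the wav files.
    # Returns name -> (samplestart, sampleend) offsets into the packed data.
    offsets, pos = {}, 0
    for name, frames in samples:
        offsets[name] = (pos, pos + frames)
        pos += frames + PAD
    return offsets

print(pack([("0sib", 120000), ("6mi", 165000)]))   # made-up lengths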
We can also chop our rate, because it's something the wavs will
specify.
Original pitch and pitch correction are more awkward - numbers for
pitch aren't great. Can we agree on a format for this? Like C-1 for
first octave C, etc.
Pitch correction I think is in semitones - if not, I think we should
convert it to that and back again so the user can work with a musical
abstraction rather than anything more low level.
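As a concrete example of the note-name side, a compiler helper could be as
simple as this (a rough sketch; the octave offset is just one possible
convention, nothing is decided):

NOTES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def note_to_number(name):
    # Turn e.g. "C-4" or "F#-3" into a note number.  The +1 octave offset
    # is just one possible convention - the compiler and decompiler only
    # need to agree with each other.
    pitch, octave = name.rsplit("-", 1)
    number = NOTES[pitch[0]]
    if pitch.endswith("#"):
        number += 1
    elif pitch.endswith("b"):
        number -= 1
    return number + 12 * (int(octave) + 1)

assert note_to_number("C-4") == 60   # so the opitch=60 above would read as C-4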
Sample link I think we should remove completely and replace it with a
sample file name - the order means nothing outside of the soundfont.
And type can be made more readable.
Instruments will be similar, as will presets. All references should
be made by name, I think, rather than by position, for obvious reasons.
Okay - does anyone have any thoughts on this? Any
suggestions/corrections/better ideas?
Oh, and I've only just noticed that we're using the LAU list and
probably ought to move either to the LAD list or a separate one
altogether if we're going to keep up this volume - an awkward choice,
because more eyes means more input, but also more annoyance for
uninterested readers. Thoughts, anyone?
James
On Fri, 11 Mar, 2005 at 01:27AM +1000, Mark Constable spake thus:
> james(a)dis-dot-dat.net wrote:
> >I see you're already on top of checking out the licensing.
>
> It's as much to see if there are others out there with
> a similar interest... and maybe someone's been way down
> this path already and has some time-saving comments.
>
> >Shall we agree, then, on the first step?
>
> We can always try :-)
>
> >I think we should take a look at the format in detail, see exactly
> >what goes into it and then agree on how we want to structure our
> >soundfont-source file. Once we have that, we can start writing the
> >compiler/decompiler to a specification, or working on a sf processing
> >library that we can then use to make the apps.
>
> I skimmed through the PDF spec and lasted about 15 minutes
> before going cross-eyed.
>
> >If we both start examining the file and writing a preliminary source
> >format, we can take the best from both and ensure neither of us has
> >missed anything.
> >
> >Sound like a good start?
>
> Well, we could keep "talking about it" forever, so I agree
> it would be better if we independently dived in and got
> some hands-on time with the spec and some code... then compared
> serious "paper" notes.
>
> >Once we have something to work on, we can sort out where we're going
> >to keep things (cvs/sourceforge/savannah/etc) and how we're going to
> >divide up the work.
>
> My goodness, sounds like a plan! For now we can throw
> comments and URLs at each other and if others show any
> interest we can externalize and formalize the process in
> a number of conventional ways.
>
> Heh... here's a note of some related effort and some code
> that could perhaps provide some hints... maybe.
>
> http://sourceforge.net/mailarchive/forum.php?thread_id=6703044&forum_id=128…
>
>
--
"I'd crawl over an acre of 'Visual This++' and 'Integrated Development
That' to get to gcc, Emacs, and gdb. Thank you."
(By Vance Petree, Virginia Power)