On Thu, 25 Mar 2004 23:09:26 -0500
Pete Bessman <ninjadroid@ml1.net> wrote:

> [Chris]
> > Am I missing some obvious things here? How do people use
> > samplers, for the most part?
>
> I think samplers can be used in two ways:
>
> 1) To emulate "real" instruments
> 2) To noodle around
>
> Instrument emulation is traditionally the domain of SoundFonts, which
> are just collections of samples plus some data on how they should be
> used. Using SoundFonts is pretty swell. Grab a soundfont player like
> fluidsynth and keep trying soundfonts until you find one you like. If
> all you ever want to do is have the best spitting image of a violin
> modern synthesis can provide, this is the way to go.

I'd been leaning away from SoundFonts because I'd only been using the
wavetable synth on my soundcard; and right now I'm poor, so I have a card
with a hard limit on maximum SoundFont size (an SBLive -- 32MB). Using
something like FluidSynth just never occurred to me before this thread,
which makes me glad, once again, that I asked about all this.
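
(For anyone else following along, here's roughly the sort of thing I'm
picturing -- a minimal sketch assuming the pyfluidsynth bindings to
FluidSynth, with the SoundFont filename and the ALSA driver choice as
placeholders for whatever's on your system:)

# Minimal sketch: load a SoundFont and audition one note through FluidSynth.
# "example.sf2" is a placeholder; pick whatever audio driver your box uses.
import time
import fluidsynth

fs = fluidsynth.Synth()
fs.start(driver="alsa")            # audio driver for your setup
sfid = fs.sfload("example.sf2")    # the SoundFont under audition
fs.program_select(0, sfid, 0, 0)   # MIDI channel 0, bank 0, preset 0
fs.noteon(0, 60, 100)              # middle C at velocity 100
time.sleep(1.0)                    # let the note ring for a second
fs.noteoff(0, 60)
fs.delete()                        # shut the synth down cleanly

Swapping different .sf2 files and presets into a script like that is
basically the "keep trying soundfonts until you find one you like" step
above.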

> For point number 2, I think SoundFonts suck arse. This is where
> "traditional" samplers are cool. The creative process goes something
> like this, for me:
>
> 1) Load a sample you're familiar with, and put together a melody.
> 2) Walk through your massive library of samples and try different
>    sounds.
> 3) Experiment with effects, modulation, and regurgitation until
>    you've got what can be objectively deemed a Cool Sound.

Yeah, I haven't done much experimentation yet, for reasons detailed
below . . .

> [Chris]
> > Of course, there are tons and tons of samples available; but then,
> > in order to express the music you're hearing in your head, you're
> > gonna be spending hours and hours trying to find samples that
> > work.
>
> This depends on whether you're trying to achieve goal number 1 or 2.
> For number 1, just get a good SoundFont of whatever instrument you're
> trying to model (this is really easy if you're willing to pay). For
> number 2, there's generally no escaping the time drain. Since you
> have so much sound sculpting power at your disposal, you'll inevitably
> end up spending as much time creating sounds as you will orchestrating
> your song.
>
> The rule of thumb is that Using is time-cheap, whereas Creating is
> time-expensive.

Yeah. I need to do a lot more just playing around with it all. At the
moment, in order to learn what the roles of all of this stuff are, and to
learn the basics of how to use the software, I've been working on a
specific project.

I play guitar in a band that does traditional Irish music. It's not my
favorite music in the world -- I like it, but I like electric funk or
acoustic blues a lot better. But being, uh, underemployed, I have so many
demands on my time right now that I know I'd find reasons to play a lot
less often if I weren't playing with them. Anyway, to learn this stuff,
I'm trying to take one of the few vocal tunes we do (recorded in mono at
a performance), cut it up, dub in drums and other instruments, and turn
it into a hip-hop track as a bit of a joke. I could use more extended
loops; but the singer provides a melody line, and I haven't come across
an extended sample that struck me as likely to work well with it. Hence
writing my own stuff, but on instruments I don't play; and hence the
questions about one-note samples.

> One word of caution: don't try to "stretch" a sample beyond one octave
> if you're aiming for realism. That is, don't up-pitch it or
> down-pitch it by more than one octave (and purists will tell you that
> you shouldn't up-pitch it at all).

Just to make sure I understand the last parenthetical comment -- the
idea is that if I want to shift a sample up or down in pitch, I have to
change its playback rate, interpolating new values from the existing
sample data. I had been thinking that this was a bad thing to do in
general, so if you're going to do it at all, you only want to do it over
a few semitones at most. But here you're saying that shifting a sample
up in pitch is worse than shifting it down. I'm guessing the reason is
this: at a fixed output sample rate, shifting the data up in pitch means
each cycle of the waveform ends up represented by fewer samples than
before, which means information is lost. (e.g. if I shift the sample up
by an octave -- a factor of 2 in frequency -- it should play in half the
time, so at a fixed sample rate that's half as many samples covering the
same data.) Do I have this right?
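
To make sure the arithmetic in my head matches the arithmetic on paper,
here's a throwaway back-of-the-envelope sketch (plain Python; nothing to
do with Specimen itself, and the function name is mine):

# Resampling bookkeeping for pitch-shifting a one-shot sample.
# Shifting up by n semitones means reading the data 2**(n/12) times
# faster, so the note plays in 1/2**(n/12) of the time and the same
# source data is covered by correspondingly fewer output samples.

def pitch_shift_stats(num_frames, sample_rate, semitones):
    ratio = 2 ** (semitones / 12.0)          # playback-speed factor
    out_frames = num_frames / ratio          # output samples actually heard
    out_seconds = out_frames / sample_rate   # duration after shifting
    return ratio, out_frames, out_seconds

# One second of source audio at 44.1 kHz, shifted up a full octave:
ratio, frames, seconds = pitch_shift_stats(44100, 44100, 12)
print(ratio)    # 2.0     -> read the data twice as fast
print(frames)   # 22050.0 -> half as many output samples cover the data
print(seconds)  # 0.5     -> plays in half the time

Shifting down by the same amount goes the other way -- the data gets
spread over more output samples via interpolation, so nothing is thrown
away, which (I gather) is why down-pitching is considered the safer
direction.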

> (...specimen's author...)

Actually, Specimen is the sampler I've been messing with up to this
point. The slap bass in the largest GM-complete soundfont I've been
able to use with the wavetable synth on my soundcard sounds, well,
crappy. So I had Rosegarden send the bass track's MIDI to Specimen
instead, using some bass notes I *did* find online. This introduced a
delay -- the sound from Specimen lagged the sound from all the other
tracks (coming from the soundcard synth) by a small but noticeable
amount. But I haven't really worried about that, since my plan was that
once I liked the instrument sounds, I'd record each instrument into its
own track in Ardour and then adjust timing and relative gain there.

Thanks muchly for your comments!
-c
--
Chris Metzler cmetzler@speakeasy.snip-me.net
(remove "snip-me." to email)
"As a child I understood how to give; I have forgotten this grace since I
have become civilized." - Chief Luther Standing Bear