Hello,
I'm always hesitating between netjack1/2.
Considering netjack1 with transport sync: it works fine, but I would like
the master to set the tempo for the slave(s), and it looks like netjack1
does not do that (in the jackdmp version, at least). Am I wrong? Is there a
way to get the tempo from master to slave?
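The only stopgap I can think of is setting the same tempo on each slave by
hand with the jack_transport example client (assuming it is installed
there), something like:

$ jack_transport
(then type "master" and "tempo 120" at its prompt)

but that rather defeats the purpose, and I am not even sure it plays nicely
with netjack's transport sync.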
Thanks a lot.
--
Aurélien
Some thoughts on your questions about perfect pitch
(which could IMHO better be called non-relative pitch; that way it
would not have the connotations that the words "absolute" and "perfect"
can have).
Am 16.04.2010 um 14:33 schrieb Atte André Jensen
<atte.jensen(a)gmail.com>:
> Hi
>
> I have quite good relative pitch,
From what I have heard of your invention and playing, that is good for
you and may be quite an understatement. :-)
It's all that really matters. Imagine having perfect pitch but lousy
relative capabilities! You would probably not bother us with music as
great as the tunes you keep posting on LAU (please keep posting them, I
really love them).
> but not perfect pitch.
Don't care, it's nothing magical. It is just remembering a certain
quality of sound.
> By accident I
> stumbled upon some information that gave me the idea "why not give
> it a
> shot, it might be possible to pick it up". Please let's not go (too
> deep) into either "it can't be learned" or "it makes you unmusical".
>
> However, I don't really know what the steps in the learning process
> would be.
>
> One course seems to start with CDEF and then add more notes when those
> are stuck in your head. However, with these notes played at random I'd
> be able to tell any of the others if I'm told what the first note is
> :-( So I guess that wouldn't work...
>
> Another seems to play all 12 notes at random and then you should only
> focus on one at a time, for instance be able to identify whenever C
> comes up.
>
> Is there anyone here who *learned* perfect pitch (don't care 'bout the
> lucky bastards that were born with it)?
Lucky? What about having to listen to music that is out of tune most
of the time?
OTOH I don't think there is anyone born with it. It is a product of
practice.
> How did you learn it?
>
String instruments. Violin, guitar, bass guitar. Especially with violin
and guitar I started to notice that I could imagine the notes of the open
strings before really touching the instrument.
It developed from there. Never really perceived it as an advantage.
No, wait: maybe with singing in a choir there can be some. Being able
to perform relative analysis and imagination is far more important.
I noticed this quality of my perception one day in the park, playing
guitar, hanging around in the sun, moving from one camp fire to the
next. A friend asked me why I had tuned his guitar before playing it;
he was sure that it had been tuned perfectly. (Ha! That ambiguous word
again!!) Yeah, I thought, but not to my pitch.
I discovered that when I was about to pick up the violin in the
morning I could remember how it had sounded the day before without
touching it, just by imagining doing so.
Fading memory that gets more persistent along with your daily
practice. That's how I'd explain it.
And it varies a bit, depending on how much I play.
I don't play the piano very well. Don't know a good technique for what
I think is your main instrument.
Do you have a favourite tune or beginning of a tune you could start
every day with? The hook for my memory was the open strings. Maybe it
could be the first two or eleven notes of "Sophisticated Lady" for
you? (blocked chords, anything that feels natural and beautiful to you
and your hands. Ritualise! It's all about remembering. Let the body
help your imagination. )
> Now to the linux part: It would be dead simple to write a script that
> throws notes at you, even with different constraints (which
> instrument,
> which group of notes). Besides one would need *really* well tuned
> notes
> of instruments like piano, guitar + more.
>
> Would anyone here be interested in exchanging scripts, samples and
> practice results for such a journey; "collecting a set of files for
> learning perfect pitch with your linux box, and using them to learn
> yourself perfect pitch along the way"?
Count me out. That sounds like setting up a gym for your musical mind
(booooring!).
But maybe consider learning an instrument that must be tuned before
you play. Even better: learn an instrument that requires practicing
intonation. Even better++: don't overrate perfect pitch. It won't make
your music and your playing any better, imnsho.
All the best to all of your efforts, no matter how I'd judge them,
- Burkhard
>
> --
> Atte
>
> http://atte.dk http://modlys.dk
Kind of a general mastering question, but obligatory Linux screenshots of JAPA are included, I promise.
I've noticed that with some professional CDs/Oggs/MP3s I have, the high end is rolled off at around 20 kHz.
Some roll off hard core:
http://storage.restivo.org.s3.amazonaws.com/rolloffs/hardrolloff.png
Some have a softer, gentler rolloff, but they still roll everything off:
http://storage.restivo.org.s3.amazonaws.com/rolloffs/softerrolloff.png
My own mixes, however, don't do that.
http://storage.restivo.org.s3.amazonaws.com/rolloffs/norolloff2.png
Now, the examples in question are from the 1980s and 1990s, because that's the last time I actually bought a CD.
The question is: why do they roll off like that, and is there some reason I should do it in this day and age?
The LADSPA GLAME Lowpass filter in 4-pole mode seems to do the trick, though its maximum frequency is only about 19.5 kHz. But should I?
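(If I do end up rolling off, my fallback plan is a sox one-liner instead of the LADSPA plugin; this is untested, the 19 kHz corner is a guess on my part, and sox's lowpass is a gentle one-/two-pole filter rather than the brick wall in those screenshots:

$ sox mymix.wav mymix-rolled.wav lowpass 19000   # file names are just placeholders

So the "should I?" still stands.)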
-ken
On 04/24/2010 06:40 AM, Niels Mayer wrote:
> Monty, quoting me out of context:
<snip>
> A few people seem to think I'm in denial of Nyquist.
you are mistaken. what at least i think is that you are taking the
sampling theorem as a model of human hearing. it is not. which makes
most of your arguments moot.
> Please note what I
> said at the top of the thread. What is hopeless is your need to argue about
> something you clearly know nothing about -- psychology, neurology, the
> biology of human perception, and cognition.
first of all, there seems little point in getting into a pissing contest
with monty about auditory perception. i'm all for healthy scepticism,
but i also happen to know that this guy has probably forgotten more
about dsp and psychoacoustic coding than i will ever know. does "ogg
vorbis" ring a bell?
> Your assumption that only power or real-plane information matters to
> biological entities is nonsense.
nobody ever assumed this.
> The notion that "sampling" applies to biological entities is nonsense.
indeed. and you are the first to even suggest this. everybody else is
talking about how digital devices represent, store and reproduce information.
> Even just saying "our ear drums" are
> vibrating is a gross oversimplification, as pinna shape actually acts as
> a directional filter that, in conjunction with cognitive processes, allows us
> to locate the position of sound. This positioning is extremely phase and
> timing dependent.
thanks for this lecture on the basics of binaural perception ;)
> Humans can potentially locate sound sources to within 10 degrees of arc,
> even with complex reflections and reverberations taking place.
and they can also do that in artificially (re-)created sound fields made
of digital signals sampled at 16 khz or less. also in the presence of
significant phase distortion.
you are welcome to drop by whenever you're in europe and hear for
yourself. which should demonstrate that you are mixing lines of argument
which should be kept separate.
> Looking at the world from a POV that power and spectrums is the only thing
> that matters is total nonsense.
nobody is doing that.
> How much positional resolution is lost by
> quantizing the onset/reflection to within 1/44,000th of a second? Asking
> wolfram alpha
> "(1/44000 seconds) * (speed of sound) = 7.73386364 millimeters"
> .... which seems like a short distance, until you take a formerly-aligned
> woofer and tweeter on a high end studio monitor and then move them forward
> or back an additional centimeter.
this example looks very tempting on the surface. it is also very wrong.
the auditory information is not quantized in time. it is just _sampled_
at fixed intervals. you can demonstrably gain timing information from
digital systems with sub-sample accuracy. the analogue reconstruction
filter will interpolate and yield sub-sample information. google for
"inter-sample peaks" to find out more.
all you lose by using finite sample rates is some high frequency content
(i'm assuming proper anti-aliasing, of course).
moreover, even if it were quantized, it would equally apply to all
frequencies, and hence would not create time alignment issues.
all this boils down to _bandwidth_. amplitude and phase information is
retained, correctly, for all frequencies below nyquist (or more
correctly, for all frequencies unaffected by the anti-alias filter).
> Then run some test tones at the crossover
> frequency, and some impulse responses too... You can watch that difference on
> a scope && you can graph a different diffraction and lobing pattern around
> the room. Some people will be able to tell by listening that something's
> wrong.
i'm pretty sure monty has looked at his share of impulse responses in
his lifetime. so have i :)
the question is: have you?
> Understand something fundamental about humans: We're not linear.
has it occurred to you that people who have developed psychoacoustic
codecs might already know this? it's basic introductory textbook knowledge.
it does not support your assumptions. all these issues you mention are
orthogonal to the question at hand.
> So if you *really* want to get information theoretic,
how a sound is represented, stored and reproduced outside of the human
hearing apparatus is orthogonal to hearing physiology. knowing about
hearing just helps to make the best compromises.
all the information that our hearing apparatus needs is still there
after proper sampling, within reasonable limits. (and no, i won't
discuss sample rates with you :)
> how do
> you explain the fact that the mastering process easily loses a lot of the
> low-level positional cues that people can easily hear -- even in the face of
> louder masking information.
i'd like to know what these cues are supposed to be, and whether there
is any scientific data to support this argument.
> Is the "mixing process" losing information --
> from the nyquist perspective -- no -- but from the human perspective -- yes.
> Those Low-order bits might matter as much as the MSB, so you can't just keep
> adding MSBs and truncating LSB's and expect it to not sound like a big wall
> of mud eventually.
you are again mixing things that are orthogonal. this is childish.
otoh, claiming a difference between MSBs and LSBs is funny. please, try
to get some basic understanding of how digital audio works.
the human hearing has a very well known and understood dynamic range.
whether you represent it by 24 more significant bits or 24 less
significant bits does not matter.
truncation in a signal chain is a problem, and believe me, everybody
here knows that. it's another orthogonal random thought in your line of
argument that distracts from the original topic.
> Why is that?? We're logarithmic, and nyquist, sampling and binary coding is
> linear. Our wide dynamic range is provided by a simple Nyquist-violating
> equation that is both true for amplitude and frequency perception: log(a *
> b) = log(a)+log(b)
get some coffee. this is basic arithmetic. it's no more
nyquist-violating than n(a+b) = na + nb.
this is getting ridiculous.
> (where multiplication is akin to what happens when
> "mixing"). [[NB: http://www.aes.org/e-lib/browse.cfm?elib=11981 Dynamic
> Range Requirement for Subjective Noise Free Reproduction of Music -- 118db]]
did you just quote an AES paper in support of logarithmic computation
rules? or to reveal the spectacular fact that human hearing has a
dynamic range of > 118dB? (which is uncontested, and, like most of your
other arguments, has nothing to do with the original topic.)
> How
> does nyquist, as human perceptual theorem, "model" our logarithmic
> perception of power and frequency??
until you understand that the sampling theorem is not a model of human
hearing, there is really no point in continuing this.
the rest of the mail doesn't get any better, so i'll just drop out here.
no need to cc: me on followups, i read the list.
Hi all!
I'm trying to control plugins in ingen using a Korg Nanokontrol, and I
think I'm missing something.
For starters, I tried a simple amplifier. So, I added a controller and
connected its output to the amplifier's gain. Moving a slider, I see the
"Controller" display change from 0 to 127. The other controls are set
to: Logarithmic: off, Minimum: 0.0, Maximum: 1.0. At first I thought
nothing was happening, but then I noticed that when the slider is moved
all the way down (actually, to 2), the amplifier's gain is set to 0.0157
and then stays there. Can anyone explain what's going on?
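(For what it's worth, 2/127 is about 0.0157, so that one value at least looks like a plain linear mapping of the CC onto the 0.0-1.0 range; this is just my own arithmetic, not anything from the ingen docs:

$ awk 'BEGIN { cc=2; min=0.0; max=1.0; printf "%.4f\n", min + (cc/127)*(max-min) }'
0.0157

It's the rest of the slider range doing nothing that I don't get.)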
Thanks!
Alexandros
So I'm trying to burn this masterpiece to a CD.
$ cdrdao write tofile.toc
... ok, here we go. It works, but then when I try to read the disc back, I get tons of CRC errors.
"5 Q sub-channels with CRC errors" .. etc etc.
Someone On The Internet (tm) said thusly:
http://linux.derkeiler.com/Mailing-Lists/Debian/2005-04/1370.html
OK, so I try to write at a lower speed.
$ cdrdao write --speed 4 tofile.toc
Starting write at speed 8...
Pausing 10 seconds - hit CTRL-C to abort.
No no no no, I said speed 4!
$ cdrdao write --speed 2 tofile.toc
Starting write at speed 8...
Pausing 10 seconds - hit CTRL-C to abort.
Arrgggh...
$ cdrdao write --speed 1 tofile.toc
Starting write at speed 8...
Pausing 10 seconds - hit CTRL-C to abort.
Hey, cdrdao, are you even listening to me!??
So, two questions:
1) Is there any reason to burn an audio CD at a speed < maximum, or is this the dreaded digital voodoo?
2) Why is cdrdao refusing to do what I tell it to do?
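(Next on my list, in case the driver autodetection is part of the problem, is being explicit about the device; /dev/sr0 below is just a guess at what it is on my box:

$ cdrdao scanbus
$ cdrdao write --device /dev/sr0 --speed 4 tofile.toc   # device path = whatever scanbus reports

No idea yet whether that will make it honour the speed.)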
Thanks, all.
-ken
I'm converting a bunch of wav files from 32-bit float to 16-bit (CD) quality.
What's the best quality tool to use on the command line? I have sndfile-convert 1.0.17-4, and sox 14.0.1-2+b1, and also ecasound and a bunch of other stuff.
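(The one I'm fairly sure about is libsndfile's converter, which takes the target encoding as a flag:

$ sndfile-convert -pcm16 input-float.wav output-16bit.wav   # placeholder file names

I assume sox can do the same thing, ideally with dither on the way down to 16 bits, but I'm not sure of the right options in the 14.0.1 I have, hence the question.)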
Thanks.
-ken
Hi
I have quite good relative pitch, but not perfect pitch. By accident I
stumbled upon some information that gave me the idea "why not give it a
shot, it might be possible to pick it up". Please let's not go (too
deep) into either "it can't be learned" or "it makes you unmusical".
However, I don't really know what the steps in the learning process
would be.
One course seems to start with CDEF and then add more notes when those
are stuck in your head. However, with these notes played at random I'd be
able to tell any of the others if I'm told what the first note is :-( So
I guess that wouldn't work...
Another seems to play all 12 notes at random and then you should only
focus on one at a time, for instance be able to identify whenever C
comes up.
Is there anyone here who *learned* perfect pitch (don't care 'bout the
lucky bastards that were born with it)? How did you learn it?
Now to the linux part: It would be dead simple to write a script that
throws notes at you, even with different constraints (which instrument,
which group of notes). Besides one would need *really* well tuned notes
of instruments like piano, guitar + more.
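Just to make it concrete, here is the kind of throwaway shell sketch I have
in mind; it is completely untested, assumes sox's "play" is available, and
uses synthesized plucks where real, well-tuned instrument samples would
obviously be better:

#!/bin/sh
# Untested sketch: play a random note, wait, then reveal it.
NOTES="C3 D3 E3 F3 G3 A3 B3 C4 D4 E4 F4 G4 A4 B4"
while true; do
    note=$(echo $NOTES | tr ' ' '\n' | shuf -n 1)
    play -q -n synth 2 pluck $note   # swap for e.g. aplay samples/$note.wav (hypothetical path)
    printf "Guess, then press enter to see the answer... "
    read answer
    echo "It was: $note"
done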
Would anyone here be interested in exchanging scripts, samples and
practice results for such a journey; "collecting a set of files for
learning perfect pitch with your linux box, and using them to learn
yourself perfect pitch along the way"?
--
Atte
http://atte.dk http://modlys.dk