On Friday 11 January 2008 02:55:25 Steve Fosdick wrote:
On Thu, 10 Jan 2008 21:42:28 -0200
"robert lazarski" <robertlazarski(a)gmail.com> wrote:
I converted the bass track of a MIDI file to
ogg, muting all the
other tracks via timidity, because I'm about to buy some
monitor speakers and I'm deciding what low frequency range is
acceptable. I tried audacity and analyze-->plot spectrum, but I can't
get that to show me what the low end is doing (I'm sure I'm missing
something simple).
Interesting. I tried this too and, as you say, there is very little
resolution in the graph at the low end of the spectrum.
Anyways, here's a link. What frequency is
that bass around? How can I
measure that?
What I tried was importing the track into ardour, so I could play it with
output going to jack, starting jamin, then playing the track and watching
the HDEQ spectrum plot. From that the lowest peak seems to be around 50Hz.
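If you'd rather check this offline than watch a meter, a quick numpy sketch can find the strongest spectral peak in the low end. This is just an illustration (the function name and the synthetic test tone are mine, not from any of the tools above); with a real file you'd load the decoded samples first, e.g. via the standard-library wave module:

```python
import numpy as np

def dominant_low_frequency(samples, rate, max_hz=200.0):
    """Return the frequency (Hz) of the strongest spectral peak below max_hz."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    mask = freqs <= max_hz
    return freqs[mask][np.argmax(spectrum[mask])]

# Demo on a synthetic 49 Hz sine (G1); two seconds at 44.1 kHz gives
# 0.5 Hz resolution, plenty to separate neighbouring bass notes.
rate = 44100
t = np.arange(rate * 2) / rate
tone = np.sin(2 * np.pi * 49.0 * t)
print(round(dominant_low_frequency(tone, rate), 1))
```

Note that frequency resolution is sample_rate / num_samples, so you need a second or two of audio to resolve bass notes a few Hz apart.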
As you have the original MIDI file you could do a quick sanity check of
this, as that frequency would mean that the lowest note in that bass part
should be G1 or Ab1. There is a table of note-to-frequency values at:
http://www.phy.mtu.edu/~suits/notefreqs.html
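You can also compute those values directly from the standard equal-temperament formula (A4 = MIDI note 69 = 440 Hz); the helper name here is just mine:

```python
def midi_to_hz(note):
    """Equal-tempered frequency in Hz for a MIDI note number (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

print(round(midi_to_hz(31), 2))  # G1, about 49 Hz
print(round(midi_to_hz(32), 2))  # Ab1, about 52 Hz
```

So a ~50 Hz peak sits right between G1 and Ab1, consistent with the bass part's lowest note being one of those.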
HTH,
Steve.
_______________________________________________
Hi Steve
This app seems to be designed to find the eyelashes on gnats.
http://www.sonicvisualiser.org/
Sonic Visualiser contains features for the following:
- Load audio files in WAV, Ogg and MP3 formats, and view their waveforms.
- Look at audio visualisations such as spectrogram views, with interactive
  adjustment of display parameters.
- Annotate audio data by adding labelled time points and defining segments,
  point values and curves.
- Overlay annotations on top of one another with aligned scales, and overlay
  annotations on top of waveform or spectrogram views.
- View the same data at multiple time resolutions simultaneously (for close-up
  and overview).
- Run feature-extraction plugins to calculate annotations automatically, using
  algorithms such as beat trackers, pitch detectors and so on.
- Import annotation layers from various text file formats.
- Import note data from MIDI files, view it alongside other frequency scales,
  and play it with the original audio.
- Play back the audio plus synthesised annotations, taking care to synchronise
  playback with display.
- Select areas of interest, optionally snapping to nearby feature locations, and
  audition individual and comparative selections in seamless loops.
- Time-stretch playback, slowing right down or speeding up to a tiny fraction or
  huge multiple of the original speed while retaining a synchronised display.
- Export audio regions and annotation layers to external files.
The design goals for Sonic Visualiser are:
- To provide the best available core waveform and spectrogram audio
  visualisations for use with substantial files of music audio data.
- To facilitate ready comparisons between different kinds of data, for example
  by making it easy to overlay one set of data on another, or display the same
  data in more than one way at the same time.
- To be straightforward. The user interface should be simpler to learn and to
  explain than the internal data structures. In this respect, Sonic Visualiser
  aims to resemble a consumer audio application.
- To be responsive, slick, and enjoyable. Even if you have to wait for your
  results to be calculated, you should be able to do something else with the
  audio data while you wait. Sonic Visualiser is pervasively multithreaded,
  loves multiprocessor and multicore systems, and can make good use of fast
  processors with plenty of memory.
- To handle large data sets. The work Sonic Visualiser does is intrinsically
  processor-hungry and (often) memory-hungry, but the aim is to allow you to
  work with long audio files on machines with modest CPU and memory where
  reasonable. (Disk space is another matter. Sonic Visualiser eats that.)
Hope this helps
Tom