Rolling by on random playback came a Me and My Cronies jam/joke from years ago:
http://www.restivo.org/blog/podpress_trac/web/558/0/Not_OK_Computer.ogg
And I was struck by how much PHASEX sounds like a real analog synth, like an ARP 2600 or similar, and so much more real than any other software synth I've used.
It sounds so... raw, uncontrolled, well, ANALOG. Most software simulations sound more or less authentic, but all so much more "tame", for want of a better term. But PHASEX has always sounded to me (and felt, as I was playing it) as though at any moment it could do something crazy: throw a DC offset, go into an uncontrollable oscillation, blow up my speakers, etc.
I don't like being at a loss for precise engineering terms, or not understanding WHY something is the way it is, so I'm asking any of the DSP'ers here who might also have looked at (and understood) PHASEX's source.
Any ideas what is so different about PHASEX, and what this quality of its sound might be that I'm trying to describe?
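One hedged illustration (not PHASEX's actual code, just a toy numpy
sketch with made-up drift amounts): a classic ingredient of that
"unstable analog" character is that oscillator pitch and level are
never mathematically exact.

import numpy as np

SR = 48000                 # sample rate
f0 = 110.0                 # nominal oscillator frequency, Hz
n = SR * 2                 # two seconds of samples

# Slow random-walk pitch drift, a fraction of a percent deep.
drift = np.cumsum(np.random.randn(n))
drift = drift / np.max(np.abs(drift)) * 0.003

# Accumulate phase with the wobbling frequency, make a naive sawtooth.
phase = np.cumsum(2 * np.pi * f0 * (1.0 + drift) / SR)
saw = 2.0 * ((phase / (2 * np.pi)) % 1.0) - 1.0

# A touch of DC offset, as a sloppy analog output stage might add.
out = 0.9 * saw + 0.01

A textbook digital oscillator does none of this unless the author
deliberately puts the sloppiness back in, which may be part of the
difference you're hearing.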
-ken
I'm having some difficulty getting Ardour 3 and Muse to consistently
tempo-lock to each other. By consistently, I don't mean that they
lose sync once I've got it working. Rather, it seems to be a tricky
business getting it to happen at all.
It seems to depend somewhat on the order they're started in, but I've
also noticed that if I change Muse's sync settings, the tempo selection
can become grayed out -- appropriate when it's a slave, but it never
becomes active again when it's set back to master.
Does anyone have any recommendations for this? I'd prefer Ardour as the
master, since it's the one handling actual sample data (with measure
lines in its timeline for real audio waves) rather than just sequencer
playback, but at this point I'd almost take anything that would work
consistently. It seems that even if I set up Ardour as Jack
timebase master and edit its tempo ruler to 140bpm, it will show measure
lines on its timeline that are appropriate for 140bpm, but Jack programs
like Muse and Hydrogen will still play back at their default 120bpm, as
though they're receiving the transport control messages but not the
tempo sync. If I make one of them the master instead, they disagree
about what 140bpm is, which means they're still not really synced. It's
very frustrating...
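As a diagnostic, here is a minimal sketch (assuming the
jackclient-python package, pip install JACK-Client) that asks Jack
whether any client is actually publishing tempo. Jack transport always
carries the frame position, but bar/beat/tempo (BBT) information only
exists while some client is registered as timebase master and filling
it in; without it, sequencers fall back to their own default 120bpm:

import jack

# Connect as a throwaway client and query the transport.
client = jack.Client('tempo-probe')
state, pos = client.transport_query()

print('transport state:', state)
if 'beats_per_minute' in pos:
    print('timebase master reports', pos['beats_per_minute'], 'bpm')
else:
    print('no BBT info: nothing is acting as timebase master')
client.close()

If this prints no BBT info while Ardour is supposedly master, the tempo
information never makes it onto the transport at all, which would match
the symptoms described above.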
--
+ Brent A. Busby + "We've all heard that a million monkeys
+ Sr. UNIX Systems Admin + banging on a million typewriters will
+ University of Chicago + eventually reproduce the entire works of
+ James Franck Institute + Shakespeare. Now, thanks to the Internet,
+ Materials Research Ctr + we know this is not true." -Robert Wilensky
Silvet is a Vamp plugin for note transcription in polyphonic music.
http://code.soundsoftware.ac.uk/projects/silvet
** What does it do?
Silvet listens to audio recordings of music and tries to work out what
notes are being played.
To use it, you need a Vamp plugin host (such as Sonic Visualiser).
How to use the plugin will depend on the host you use, but in the case
of Sonic Visualiser, you should load an audio file and then run Silvet
Note Transcription from the Transform menu. This will add a note
layer to your session with the transcription in it, which you can
listen to or export as a MIDI file.
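If you'd rather script it than click through Sonic Visualiser, here is
a hedged sketch using the Python Vamp host module (pip install vamp,
plus librosa for loading audio; the plugin key 'silvet:silvet' is my
assumption -- check vamp.list_plugins() on your system):

import vamp
import librosa

# Load the recording at its native sample rate, mono.
audio, sr = librosa.load('recording.wav', sr=None, mono=True)

# Run Silvet and collect its note events.
result = vamp.collect(audio, sr, 'silvet:silvet')

# Each event carries a start time, duration and pitch value(s).
for note in result['list'][:10]:
    print(note['timestamp'], note.get('duration'), note.get('values'))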
** How good is it?
Silvet performs well for some recordings, but the range of music that
works well is quite limited at this stage. Generally it works best
with piano or acoustic instruments in solo or small-ensemble music.
Silvet does not transcribe percussion and has a limited range of
instrument support. It does not technically support vocals, although
it will sometimes transcribe them anyway.
You can usually expect the output to be reasonably informative and to
bear some audible relationship to the actual notes, but you shouldn't
expect to get something that can be directly converted to a readable
score. For much rock/pop music in particular the results will be, at
best, recognisable.
To summarise: try it and see.
** Can it be used live?
In theory it can, because the plugin is causal: it emits notes as it
hears the audio. But it has to operate on long blocks of audio with a
latency of many seconds, so although it will work with non-seekable
streams, it isn't in practice responsive enough to use live.
** How does it work?
Silvet uses the method described in "A Shift-Invariant Latent Variable
Model for Automatic Music Transcription" by Emmanouil Benetos and
Simon Dixon (Computer Music Journal, 2012).
It uses probabilistic latent-variable estimation to decompose a
Constant-Q time-frequency matrix into note activations using a set of
spectral templates learned from recordings of solo instruments.
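Not Silvet's actual code, but a much-simplified sketch of the same
idea, with plain NMF standing in for the shift-invariant PLCA model: a
time-frequency matrix V (bins x frames) is approximated as W @ H, where
the columns of W are fixed per-note spectral templates and the rows of
H become note activations over time.

import numpy as np

rng = np.random.default_rng(0)
n_bins, n_frames, n_notes = 240, 500, 88

V = np.abs(rng.standard_normal((n_bins, n_frames)))  # stand-in for a CQT
W = np.abs(rng.standard_normal((n_bins, n_notes)))   # pre-learned templates
H = np.full((n_notes, n_frames), 0.1)                # activations to estimate

eps = 1e-9
for _ in range(50):
    # Multiplicative KL-divergence update for H only; W stays fixed
    # because the templates were learned offline from solo recordings.
    H *= (W.T @ (V / (W @ H + eps))) / (W.T.sum(axis=1, keepdims=True) + eps)

# Thresholding the rows of H over time yields candidate note events.

The real model additionally allows small pitch shifts of each template
(the "shift-invariant" part), which is what lets it track tuning
differences and vibrato.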
For a formal evaluation, please refer to the 2012 edition of MIREX,
the Music Information Retrieval Evaluation Exchange, where the basic
method implemented in Silvet formed the BD1, BD2 and BD3 submissions
in the Multiple F0 Tracking task:
http://www.music-ir.org/mirex/wiki/2012:Multiple_Fundamental_Frequency_Esti…
Annotations for that track (http://www.restivo.org/blog/podpress_trac/web/558/0/Not_OK_Computer.ogg):
PHASEX is present/prevalent from the intro through 05:59.
WhySynth is from 06:00 through 06:58.
Nekostring is from 07:48 through 07:55.
AMS is from 07:59 through 10:00.
PHASEX sounds very much to me like a real analog synth. The rest sound very much like software simulations. I have no idea why the difference seems so striking.
-ken
Hi!
I'm in the midst of trying to treat my room acoustically, and I really
want to measure the room's acoustics first. Of course, I know way too
little about this, and therefore I need your help, as always! ;) (If
you reply to this e-mail, imagine I'm 5 years old when you explain
stuff.)
So, what I have is this:
- A set of studio monitors (Adam A3X)
- A microphone (Sennheiser MK4)
- A room...
- My studio computer with Linux
What I want to do, roughly, is this:
- Using a test tone (test sweep?), measure the frequency response of
the room. I want to get a nice curve, like an equalizer display, which
tells me roughly how my room responds to different frequencies.
- I want to do this before and after I apply treatment.
So, my question is: How on earth do I do this?! Are there FLOSS tools
available for this? Is it easy to do something as basic as this?
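In case it helps, the standard technique is the logarithmic sine sweep:
play a sweep through the monitors, record it with the mic, deconvolve
to get the room's impulse response, and FFT that for the frequency
curve. A minimal Python sketch (assuming the sounddevice, numpy and
scipy packages; sweep length and level are placeholders to adjust):

import numpy as np
import sounddevice as sd
from scipy.signal import chirp, fftconvolve

SR = 48000
T = 10.0                                  # sweep length, seconds
t = np.linspace(0, T, int(SR * T), endpoint=False)
sweep = chirp(t, f0=20, f1=20000, t1=T, method='logarithmic') * 0.5

# Play the sweep and record the microphone at the same time.
rec = sd.playrec(sweep, samplerate=SR, channels=1)
sd.wait()
rec = rec[:, 0]

# Deconvolve: the time-reversed, amplitude-compensated sweep turns the
# recording into an impulse response (Farina's method).
k = np.exp(t / T * np.log(20000 / 20))
inv = sweep[::-1] / k
ir = fftconvolve(rec, inv, mode='full')

# Magnitude spectrum of the impulse response = speaker + room response.
freqs = np.fft.rfftfreq(len(ir), 1 / SR)
spectrum = 20 * np.log10(np.abs(np.fft.rfft(ir)) + 1e-12)

Measure at several mic positions around the listening spot and average
the curves; a single position will show comb-filter wiggles that aren't
really the room's fault. Doing it before and after treatment, as you
plan, is exactly the right approach.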
Best wishes,
zth/Gabriel
On Wed, 2014-08-06 at 03:57 +0200, Ralf Mardorf wrote:
> PS: I see colours, when I listen to music, but I wouldn't call it real
> synesthesia.
PPS: I don't always see the same colour for an oboe playing one tone;
rather, I see coloured films about the content. It's a pity that in
e.g. Qtractor it's impossible to select a specific colour for tracks;
it's only possible to select a vague colour. For me, it would be very
informative if I could really get a colour of my choice.
Well, I have the s/pdif output of my
USBDualTubePre microphone preamp plugged
into the s/pdif input of my M-Audio 2496,
though I have yet to be able to record
any sound.
I am using Audacity.
I have used Audacity successfully in the past
to record through the analog inputs of the
M-Audio 2496 (to digitize sound from cassettes
and vinyl), but Audacity's VU meters just
don't register anything from the microphone
going through the USBDualTubePre preamp to the
M-Audio 2496's s/pdif input.
Does anyone have any suggestions?
I imagine I need to configure something
with one of the mixers I have available
(see the sketch below the list):
alsamixer
alsamixergui
envy24control (Envy24 Control Utility)
xfce4-mixer
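One way to see what there actually is to configure -- not a fix in
itself -- is to dump every mixer control the card exposes and look for
S/PDIF input/clock switches (exact control names vary by driver;
'M2496' is assumed to be the card's ALSA short name):

import subprocess

# Ask ALSA's standard amixer tool for the card's simple controls.
out = subprocess.run(['amixer', '-c', 'M2496', 'scontrols'],
                     capture_output=True, text=True, check=True)
print(out.stdout)

For Envy24-based cards like this one, envy24control is usually the tool
that can actually switch the digital input routing.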
Thank you for your help.
P.S. I am trying to only do a mono recording and so
I have my microphone plugged into the right channel
of the preamp.
> From: Ivan K <ivan_521521(a)yahoo.com>
>> HOWEVER ... I have lost some of my M-Audio card's
>> functionality.
> I think I figured out why this is so. Jack is currently configured
> to look for "card 0" as displayed in /proc/asound/cards
The number changes if you have hot-plug devices such as USB sound devices
(which include webcams). It could also change if the kernel or ALSA
changed the order of device enumeration between version upgrades.
You should be using the name and not the number. See this FAQ from
jackaudio.org:
http://jackaudio.org/faq/device_naming.html
For the Audiophile 2496 you would use hw:M2496 as the device instead of
hw:0 or hw:1.
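A tiny sketch of the same point: the card numbers can move around, but
the short name in /proc/asound/cards is stable, so either pass hw:NAME
to Jack directly or resolve the name at runtime:

import re

# Map stable ALSA card names to their current (unstable) indices.
with open('/proc/asound/cards') as f:
    for line in f:
        m = re.match(r'\s*(\d+)\s+\[(\S+)\s*\]', line)
        if m:
            index, name = m.groups()
            print(f'card {index}: hw:{name}')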
--
Chris Caudle
Zita-njbridge-0.1.0 is now available.
Zita-j2n and zita-n2j are command line Jack clients to
transmit full quality multichannel audio over a local IP
network, with adaptive resampling by the receiver(s).
Main features:
* One-to-one (UDP) or one-to-many (multicast).
* Sender and receiver(s) can each have their own
sample rate and period size. No word clock sync
is assumed (see the sketch after this list).
* Up to 64 channels, 16 or 24 bit or float samples.
* Receiver(s) can select any combination of channels.
* Low latency, optional additional buffering.
* High quality jitter-free resampling.
* Graceful handling of xruns, skipped cycles, lost
packets and freewheeling.
* IP6 fully supported.
* Requires zita-resampler, no other non-standard
dependencies.
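Not the actual zita-njbridge algorithm, just a toy illustration of why
the receiver resamples adaptively: sender and receiver clocks never
agree exactly, so the receiver watches its buffer fill level and nudges
the resampling ratio to keep it centred, instead of assuming a shared
word clock (all numbers hypothetical):

target_fill = 1024   # desired buffered frames
ratio = 1.0          # output/input sample-rate ratio estimate

def update_ratio(current_fill, ratio, gain=1e-6):
    # Buffer draining -> consume the stream more slowly by raising the
    # ratio; buffer filling -> lower it. A tiny gain keeps it smooth.
    return ratio + gain * (target_fill - current_fill)

for fill in (1000, 980, 1010, 1040, 1024):
    ratio = update_ratio(fill, ratio)
    print(f'fill={fill:5d}  ratio={ratio:.8f}')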
Note that zita-njbridge is meant for use on a *local*
network providing more or less reliable delivery with
low to moderate delay. It may or may not work on the wider
internet, if receiver(s) are configured for additional
buffering and if you are lucky. Performance on wireless
networks is just a matter of chance.
You will need a fairly recent Jack version, as the
code uses jack_get_cycle_times() and no fallback for
that is provided.
Download from <http://kokkinizita.linuxaudio.org/linuxaudio/downloads/index.html>
See man zita-njbridge for more info.
--
FA
A world of exhaustive, reliable metadata would be a utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
Hi all,
Are there any music applications for Linux that would be suitable for a
10-year-old to sit down and have fun with, without reading any
documentation? (e.g. something as simple as Mario Paint Composer, which
I played with and loved as a child)
I've scanned the list at http://wiki.linuxaudio.org/apps/start and googled
but nothing is jumping out at me.
Thanks!
Brian Sorahan