Just got my March issue.. ARDOUR on the cover.. YEAH BABY YEAH....
but there is some confusion..
the author writes that the M-Audio Delta 66 has "12" input channels..?
(page 65, under the qjackctl image)
I don't get it.. are they digital inputs?
I thought the M-Audio had 4x4...?
Dear list,
I've got a question about using various soundfile players with Mozilla
Thunderbird. I've got a webpage on exhibit in a festival right now, with
a selection of MP3s to listen to. I thought it would be a simple thing,
but lusers always find ways of making simple things break ;-)
The problem is that each time a user clicks on a link, it opens a new
instance of the sound player. Of course, most people don't bother to
close the previous player, so pretty soon there are too many instances and
the sound just stops. I've tried XMMS and VLC in particular, and have
resorted to using mpg321 because it will wait for the first soundfile to
finish before playing the next one. But the users can't see the app
playing the sound, and worse, they can't interrupt one sound
with another because there's no interface to stop playback.
So what I'd like to know is which app will support enqueuing files for
playback in a single instance? I've tried all the XMMS and Thunderbird
options available to me and can't come up with the answer. And I can't
stand around all day instructing people on how to keep the computer from
crashing ;-)
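One idea I've been toying with is registering a small wrapper script as
the browser's helper application, so every click lands in a single
player instance. A minimal sketch in Python, assuming XMMS's --enqueue
option and that pgrep is available (untested in the exhibit setup):

#!/usr/bin/env python
# Hypothetical helper-application wrapper: route each clicked file into
# one running XMMS instance instead of spawning a new player per click.
import subprocess, sys

f = sys.argv[1]
# 'pgrep -x xmms' exits 0 when an xmms process is already running
if subprocess.call(["pgrep", "-x", "xmms"]) == 0:
    subprocess.call(["xmms", "--enqueue", f])  # append to its playlist
else:
    subprocess.Popen(["xmms", f])              # start a fresh instance

That way users would at least see a playlist window where they can stop
or skip sounds.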
Suggestions welcome!
best wishes,
Derek
current project: http://berlin.soundscape-fm.net
--
derek holzer ::: http://www.umatic.nl
---Oblique Strategy # 36:
"Consult other sources
-promising
-unpromising"
Hi,
QjackCtl 0.2.15 has been released.
As a major new feature, you can now rename (alias) JACK/ALSA client and
port names in the connections window to something more intelligible.
Another nice one: actual ALSA hardware device names are now presented
for selection in a pull-down menu on the setup dialog.
Grab it from:
http://qjackctl.sourceforge.net
As taken from the change log:
- JACK/ALSA client and port name aliasing (renaming) is now an optional
feature for the connections window; all client/port aliases are saved on a
per preset basis (as proposed for Lionstracs' Mediastation).
- Server state is now shown (back again) on the system tray icon tooltip;
speaking of which, tooltips are now also featured on the connections,
status and patchbay windows.
- New actual hardware device selection menu featured on setup dialog;
these new button menus are only available for the ALSA driver settings.
- Server path factory default is now jackd instead of jackstart; preset
setup button icons are back.
- Fixed rare connection port item removal/disconnection dangling pointer bug.
Have fun.
--
rncbc aka Rui Nuno Capela
rncbc(a)rncbc.org
I just joined this list and figured I would introduce myself. I have
been using Linux for ~10 years or more, but never really for audio work.
I recently became interested in playing music again and have
since found that Linux audio has taken a few leaps since I started using
it. I will probably be asking for help getting my MIDI keyboard working
and such (I just bought the connectors and have never done this before),
but for now I have already used Linux to record several "songs" with
Ardour, Hydrogen, and JACK.
My soundclick site is http://www.soundclick.com/noahroberts/
I use Linux exclusively for everything but playing Tomb Raider, and have
for several years. Lately I have had to use Windows for printing out
Power Tab files though; maybe that will change soon.
--
Our enemies are innovative and resourceful - and so are we
They never stop thinking about new ways to harm our country and our people - and neither do we.
-- George W. Bush
...a simple but powerful paradigm that helps to make
simple algorithms that produce very beautiful sounds.
Hi.
I want to open a topic about what I call the "bandwidth of each
harmonic", and share some ideas that have helped me make beautiful
sounds. I am Paul, the author of the ZynAddSubFX software synthesizer
( http://zynaddsubfx.sourceforge.net ) and I wish to share some ideas ;)
In musical sounds, the harmonics are usually considered to be simple
sine functions. Of course, reality shows a different thing, and so
harmonics started to be treated as sine functions modulated by lower
frequencies (some time ago I saw a page describing how the choir sound
is beautiful because of its "micromodulations", etc.).
A very good thing is to have a look in the frequency domain. Let's
take, for example, a choir singing the same note (say A=440 Hz).
Because they are all human, even very well trained singers will not
sing exactly the same note; one will sing at 435 Hz, another at 443 Hz,
and so on. Now the first harmonic (the fundamental) is no longer a sine
of 440 Hz, but a narrow-band signal with a certain bandwidth. Let's
take a very simple case: everybody sings at the same loudness, at
frequencies between 435 Hz and 445 Hz. In this case the bandwidth is
10 Hz. Of course, in a real choir the frequency distribution of the
harmonic is not flat, but usually a curve that looks like a normal
(Gaussian) curve (I found this after doing some research: very fine
frequency analysis with very long FFTs).
Now let's go to the second harmonic: if you multiply 435 and 445 by 2,
the difference becomes 20 Hz, so the bandwidth of the second harmonic
is 20 Hz. Here is an important rule of real instruments (especially
ensembles): the bandwidth of each harmonic is proportional to its
frequency. In this example the first harmonic has a bandwidth of 10 Hz,
the second 20 Hz, the third 30 Hz, and so on.
Here I made a very fine frequency analysis of a synthesized sound (a
real orchestra sound would give similar results):
http://zynaddsubfx.sourceforge.net/doc/paul1.png
You can see that the harmonics' bandwidths increase with their
frequency. If you don't increase the bandwidth of the higher harmonics,
the resulting sound will be unpleasant, especially when the bandwidth
of the first harmonic is large. So not all quasi-periodic sounds are
good; usually they are good only if you increase the bandwidth of the
harmonics.
Now, what happens if there are a lot of harmonics, or the pitch of the
sound is low enough? Let's see:
http://zynaddsubfx.sourceforge.net/doc/paul2.png
The upper harmonics merge into a single frequency band that sounds like
a hiss, which is pleasant to the ear (e.g. a choir).
I found that the bandwidth of each harmonic can go even beyond 50 cents
(a quarter tone) and still sound musical - I can give you some example
wavs. Please note that this is a different situation from two notes
detuned by 50 cents (which sound very dissonant).
Also, it is best if the phases are random. In real life this is ensured
by the reverberation of the hall, or, for example, when an instrument
(a flute) plays with vibrato.
Perhaps you have noticed that whenever several instruments play the
same note (as an ensemble), the sound is very pleasant. I consider this
the cause: the bandwidth of each harmonic.
Unfortunately, there is very little on the internet about this stuff
because, I think, more complicated paradigms are used instead (like
statistics on how the sine harmonics are modulated, and so on).
Now I want to give examples of how you can synthesize sounds using this
idea. You can use ZynAddSubFX or another synthesizer in most cases. Of
course, I don't claim that I invented the bandwidth of each harmonic
(or the frequency distribution of each harmonic), because ensembles,
choirs and reverberation have existed for thousands of years (or more
:) ). I just consider this a very important fact about why sounds are
beautiful, and it has helped me synthesize good sounds ever since I was
in high school :D.
You can give each harmonic bandwidth by:
1) making several oscillators and detuning them a bit (a slight vibrato
helps a lot). Most synths allow this, and perhaps you already know it.
This is one of the simplest methods. It is implemented in ZynAddSubFX
as the "ADDsynth" module.
2) generating white noise, filtering each harmonic with a bandpass
filter, and mixing the results. Be careful to give the higher harmonics
a higher bandwidth. This is implemented in ZynAddSubFX as the
"SUBsynth" module.
3) taking the graphs above and representing them as numbers in the
frequency domain (the amplitudes of the frequencies), adding random
phases, and doing a SINGLE IFFT - and voila, a very beautiful sound is
born. This new idea is implemented in ZynAddSubFX as "PADsynth" (see
the sketch after this list).
4) doing other things, like applying vibrato to a periodic oscillator,
taking an FFT of the whole sound, putting in random phases, and doing
an IFFT.
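To make idea 3 concrete, here is a rough sketch in Python with numpy.
It is only in the spirit of PADsynth, not the actual ZynAddSubFX code;
the harmonic count, the 1/n amplitude rolloff and the Gaussian band
shape are just plausible defaults I picked for the example:

import numpy as np

N = 2 ** 18                 # IFFT size; the result loops seamlessly
sr = 44100.0                # sample rate in Hz
f1 = 440.0                  # fundamental frequency
bw1 = 10.0                  # bandwidth of the first harmonic, in Hz

k = np.arange(N // 2 + 1) * sr / N   # frequency of each FFT bin
amp = np.zeros(N // 2 + 1)
for n in range(1, 33):               # 32 harmonics
    fc = n * f1                      # centre frequency of harmonic n
    bw = n * bw1                     # bandwidth proportional to n
    # Gaussian-shaped band around the n-th harmonic, with 1/n rolloff
    amp += (1.0 / n) * np.exp(-(((k - fc) / bw) ** 2))

phases = np.random.uniform(0.0, 2.0 * np.pi, N // 2 + 1)
spectrum = amp * np.exp(1j * phases)  # amplitudes plus random phases
wave = np.fft.irfft(spectrum)         # the SINGLE IFFT
wave /= np.abs(wave).max()            # normalise to +/- 1.0

Because the spectrum is built directly, "wave" loops with no seam, so
it can be used as a wavetable.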
For more information, a few years ago I made a page that describes the
things above:
http://zynaddsubfx.sourceforge.net/doc_0.html
Good luck.
Paul
Bob,
You recently posted:
> I use the ' less is more ' when it comes to reverb!
> Also using just one reverb as an Aux send and sending all tracks to it
> with varying amounts really helps.
This is what many hardware synths do for a "performance" or "program": all
of the instruments go through the same reverb (and chorus), and the amount
of reverb can be varied for each channel.
With care, this can be made to sound ~OK. Electronic instruments are
recorded directly, so there are no room acoustics, while other instruments
are recorded in rooms with noticeable acoustic properties. When instruments
sampled under such different conditions are combined, the room acoustics
are imbalanced.
To correct this, one can add reverb to those instruments which have little
to none, while adding none to those which were recorded in some sort of
room. What is really being done here is that the user is attempting to
recreate the room that some of those instruments were sampled in, which
may sound OK. If one then adds additional nonphysical reverb to the mix,
the sound begins to deviate from that of a real room; hence "less is
more", that is, less sounds more like a real room. The rooms that samples
are recorded in are usually small, so normally there isn't much overall
reverb with this approach.
I can't say and am not saying that this is what you are doing, but it may
at least partially explain why you say "less is more."
Personally, I don't like the sound my hardware synths produce when I
attempt what you described. Part of it has to do with the reverb, but it
also has to do with the artificial stereo separation of most instruments.
With this approach it never sounds good on headphones, no matter what I do.
Even genres which use little reverb don't sound good. The room acoustic
models, such as they are, and the listening models are physics-deficient.
Hi Tim,
> Over-the-top reverbs have their uses, however, they do have to be pleasing
> to ear.
Absolutely. I've created some electronic music tunes and appreciate this
use of reverb. But it's better in my experience to listen to these over-the-top
reverbs (which essentially create a new instrument from an existing one)
in good rooms or alternatively with a good room acoustics model. When
I refer to "reverb," I'm actually talking about room acoustics, including
reverb. There is also a phenomenon referred to as binaural listening, which
you may have heard of.
In other posts, I've referred to this as "stereo separation." Many people
familiar with binaural listening, including myself, will tell you that
this is the best "stereo" or "surround sound" that they've ever heard,
bar none. Although binaural recordings have not been commercially
successful in the past, I suspect this is because, at the time they were
tried, headphones were very heavy and rather expensive on top of (not
instead of) the cost of speakers. They may become commercially viable in
the future with all the inexpensive, lightweight headphones around.
One of the advantages of calculated impulse response functions is that
binaural images are easily obtained from any monophonic recording --- and
as accurately as you want.
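To make that concrete, here is a minimal sketch in Python with numpy.
The impulse responses themselves are assumed to come from measurement or
a room acoustics model, and "binauralize" is just an illustrative name:

import numpy as np

def binauralize(mono, ir_left, ir_right):
    # Convolve the mono signal with a left/right pair of impulse
    # responses to produce a two-channel binaural image.
    left = np.convolve(mono, ir_left)
    right = np.convolve(mono, ir_right)
    return np.column_stack([left, right])   # shape: (samples, 2)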
Regards,
Dave.
Hi all,
This is my first post to the list and I'm still pretty much a newb, so
go gently... :-)
I have an Audigy2 Platinum Pro ZS card that I would like to use for 6
channel input and output work. I am using JACK (0.99.49) via the
QJackCtl (0.2.14) frontend. I've installed ALSA 1.0.8 with Lee's
emu10k1 multichannel v0008. I am running a pretty much stock, up to
date FC3 distro using kernel 2.6.10 with the realtime-lsm kernel module
installed so I have realtime privileges for jackd.
/proc/asound/devices:
4: [0- 0]: hardware dependent
9: [0- 1]: raw midi
8: [0- 0]: raw midi
19: [0- 3]: digital audio playback
18: [0- 2]: digital audio playback
26: [0- 2]: digital audio capture
25: [0- 1]: digital audio capture
16: [0- 0]: digital audio playback
24: [0- 0]: digital audio capture
0: [0- 0]: ctl
1: : sequencer
6: [0- 2]: hardware dependent
10: [0- 2]: raw midi
11: [0- 3]: raw midi
33: : timer
All the software seems to be working properly. My problem is that I
cannot figure out how to configure my 6 desired capture channels so
that I can read signals on the L & R channels of my 3 Line In inputs. I
think I just have a configuration problem. Any and all help is greatly
appreciated.
Here is the PCM device definition from my .asoundrc file (which is a
modification of the example file found on the included reference webpage):
################################################################################
#
# 5.1 Channel Surround Sound
#
# Reference information:
# http://alsa.opensrc.org/index.php?page=SurroundSound
ctl.jack51 {
    type hw
    card 0
}
pcm.jack51 {
    # "asym" allows for different handling of in/out devices
    type asym
    playback.pcm {
        # route for mmap workaround
        type route
        slave.pcm surround51
        # Had to switch all L and R channels to conform playback
        # channels within JACK to standard 5.1 channel mapping.
        # The trailing 1's indicate unity gain (valid values are 0.0-1.0)
        ttable.0.1 1  # routes 0 to 1 (playback_1 [0] to output channel 1 [1])
        ttable.1.0 1  # routes 1 to 0 (playback_2 [1] to output channel 0 [0])
        ttable.2.3 1  # routes 2 to 3 (playback_3 [2] to output channel 3 [3])
        ttable.3.2 1  # routes 3 to 2 (playback_4 [3] to output channel 2 [2])
        ttable.4.5 1  # routes 4 to 5 (playback_5 [4] to output channel 5 [5])
        ttable.5.4 1  # routes 5 to 4 (playback_6 [5] to output channel 4 [4])
    }
    capture.pcm {
        # 2 channels only
        type hw
        card 0
    }
}
################################################################################
Using this "pcm.jack51" from within QJackCtl (equivalent to. jackd -R
-dalsa -d jack51 -S) I get 2 Capture channels (which both appear to
contain the R channel of Line In #1) and 6 playback channels (which are
correct). BTW, I can use my jack51 PCM from within QJackCtl because I
edited the "Interfaces" menu via the qjackctlrc file so that I have a
"jack51" menu option. Once JACK is started, channel interconnectivity
works correctly. How do I write the "capture.pcm" part of my .asoundrc
file so that all 6 inputs become capture channels in JACK?
I've done loads of reading on customizing a .asoundrc file and have
tried many permutations in attempting to configure my Audigy2's 6 input
channels, but no luck so far... There seems to be a good deal of
information about configuring ALSA for *playing* audio, but not so much
for *capturing* audio!
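For illustration, this is the direction I have been experimenting in:
ALSA's "multi" plugin, which can bind several capture PCMs into one
multichannel device. The device numbers below are guesses on my part
(cross-checked against /proc/asound/devices), so treat it as an
untested sketch rather than a working config:

# Untested sketch: bind the card's capture devices into one 6-channel
# PCM using the "multi" plugin; device numbers 0/1/2 are my guesses.
pcm.capture6 {
    type multi
    slaves.a.pcm "hw:0,0"
    slaves.a.channels 2
    slaves.b.pcm "hw:0,1"
    slaves.b.channels 2
    slaves.c.pcm "hw:0,2"
    slaves.c.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
    bindings.4.slave c
    bindings.4.channel 0
    bindings.5.slave c
    bindings.5.channel 1
}

Pointing capture.pcm at something like this (instead of plain "type hw")
is what I imagine is needed, but I haven't gotten it to work.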
Does anyone have any experience with this that I could draw on and would
be willing to help? Other questions: Is this supported for my hardware
yet? Any recommendations for multichannel (at least 6 in & out) cards
that are currently well supported?
Thanks in advance for any assistance. I can post more detailed info as
needed.
-Rick
A couple of days ago, Andrew asked the following:
> Is there some kind of (software) head phones processor with the aim
> to eliminate an impression a sound is inside a head?
I wasn't sure whether he was asking about fixing existing material or
avoiding the problem in the first place, so I asked him. He replied that
he was interested in fixing existing material. He finds, as I do, that
the "sound inside the head" problem tires him out as he listens.
There are headphone amplifiers that help fix this problem by signal
processing means. There may be software, but I'm unaware of any
specific software. I have experimented with some techniques, but none
were satisfactory compared to preventing the problem in the first place
(for example, the engineered IRs I've produced, or simply making binaural
recordings at the start).
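As far as I know, those hardware boxes mostly apply some form of
crossfeed: each ear also receives a delayed, low-passed, attenuated copy
of the opposite channel. A minimal sketch of that idea in Python with
numpy; the delay, gain and cutoff values are starting points to
experiment with, not measured ones:

import numpy as np

def crossfeed(left, right, sr=44100, delay_ms=0.3, gain=0.4, cutoff=700.0):
    d = int(sr * delay_ms / 1000.0)         # interaural-style delay
    a = np.exp(-2.0 * np.pi * cutoff / sr)  # one-pole low-pass coefficient

    def lowpass(x):                         # simple one-pole low-pass
        y = np.zeros_like(x)
        acc = 0.0
        for i in range(len(x)):
            acc = (1.0 - a) * x[i] + a * acc
            y[i] = acc
        return y

    # delayed, filtered bleed of each channel into the opposite ear
    bleed_l = np.concatenate([np.zeros(d), lowpass(right)[:len(right) - d]])
    bleed_r = np.concatenate([np.zeros(d), lowpass(left)[:len(left) - d]])
    return left + gain * bleed_l, right + gain * bleed_r

It is the sort of thing I would start from if I go back to this problem.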
Is anyone else interested in a software solution to this problem of fixing
existing material? Does any software exist that emulates these binaural
headphone amplifiers? If there is no Linux software but there is interest,
I may spend some time going back to work on this problem, time permitting.
I would think that there should be a lot of interest, but apparently very
few people in general are familiar with the improved sound of binaural
audio images. Again, a fix is going to fall far short of doing it right in
the first place. It also colors the sound.
Thanks for replies and regards to all,
Dave.