Here's a copy of a very good article about normalization; it's from
http://www.hometracked.com/2008/04/20/10-myths-about-normalization/
10 Myths About Normalization
Sunday, April 20th, 2008 in Articles for Beginners by des
The process of normalization often confuses newcomers to
digital audio production. The word itself, “normalize,” has various
meanings, and this certainly contributes to the confusion. However,
beginners and experts alike are also tripped up by the myths and
misinformation that abound on the topic.
I address the 10 most common myths, and the truth behind each, below.
Peak Normalization
First, some background: While “normalize” can mean several things (see
“Other Definitions,” below), these myths primarily involve peak normalization.
Peak normalization is an automated process that changes the level of
each sample in a digital audio signal by the same amount, such that the
loudest sample reaches a specified level. Traditionally, the process is
used to ensure that the signal peaks at 0dBfs, the loudest level allowed
in a digital system.
Normalizing is indistinguishable from moving a volume knob or fader. The
entire signal changes by the same fixed amount, up or down, as required.
But the process is automated: The digital audio system scans the entire
signal to find the loudest peak, then adjusts each sample accordingly.
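That scan-then-scale process can be sketched in a few lines of Python. This is purely illustrative; the function name and the target_dbfs parameter are my own, not taken from any particular DAW:

```python
import math

def peak_normalize(samples, target_dbfs=0.0):
    """Scale every sample by the same factor so the loudest
    absolute sample lands exactly at target_dbfs (0 dBFS = 1.0)."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return list(samples)           # silence: nothing to scale
    target = 10 ** (target_dbfs / 20)  # dBFS -> linear amplitude
    gain = target / peak
    return [s * gain for s in samples]

# A quiet signal peaking at 0.25 (-12 dBFS) ...
signal = [0.1, -0.25, 0.2, -0.05]
normalized = peak_normalize(signal)    # ... now peaks at 1.0 (0 dBFS)
```

Note that one gain value is applied to every sample, which is why the process is indistinguishable from a fader move.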
Some of the myths below reflect nothing more than a misunderstanding of
this process. As usual with common misconceptions, though, some of the
myths also stem from a more fundamental misunderstanding – in this case,
about sound, mixing, and digital audio.
Myths and misinformation
Myth #1: Normalizing makes each track the same volume
Normalizing a set of tracks to a common level ensures only that the
loudest peak in each track is the same. However, our perception of
loudness depends on many factors, including sound intensity, duration,
and frequency. While the peak signal level is important, it has no
consistent relationship to the overall loudness of a track – think of
the cannon blasts in the 1812 Overture.
Myth #2: Normalizing makes a track as loud as it can be
Consider these two MP3 files, each normalized to -3dB:
[download MP3]
[download MP3]
The second is, by any subjective standard, “louder” than the first. And
while the normalized level of the first file obviously depends on a
single peak, the snare drum hit at 0:04, this serves to better
illustrate the point: Our perception of loudness is largely unrelated to
the peaks in a track, and much more dependent on the average level
throughout the track.
Myth #3: Normalizing makes mixing easier
I suspect this myth stems from a desire to remove some mystery from the
mixing process. Especially for beginners, the challenge of learning to
mix can seem insurmountable, and the promise of a “trick” to simplify
the process is compelling.
In this case, unfortunately, there are no shortcuts. A track’s level
pre-fader has no bearing on how that track will sit in a mix. With the
audio files above, for example, the guitar must come down in level at
least 12dB to mix properly with the drums.
Simply put, there is no “correct” track volume – let alone a correct
track peak level.
Myth #4: Normalizing increases (or decreases) the dynamic range
A normalized track can sound as though it has more punch. However, this
is an illusion dependent on our tendency to mistake “louder” for
“better.”
By definition, the dynamic range of a recording is the difference
between the loudest and softest parts. Peak normalization affects these
equally, and as such leaves the difference between them unchanged. You
can affect a recording’s dynamics with fader moves & volume automation,
or with processors like compressors and limiters. But a simple volume
change that moves everything up or down in level by the same amount
doesn’t alter the dynamic range.
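The arithmetic is easy to check: the same gain applied to the loudest and softest passages cancels out when you take the dB difference. A quick sanity check in Python (the 0.5 and 0.05 peak levels below are made up purely for illustration):

```python
import math

def dbfs(amplitude):
    """Linear amplitude (full scale = 1.0) to dBFS."""
    return 20 * math.log10(amplitude)

loud, soft = 0.5, 0.05   # hypothetical loudest and softest passage peaks
gain = 1.0 / loud        # normalize the loudest peak up to 0 dBFS

range_before = dbfs(loud) - dbfs(soft)
range_after = dbfs(loud * gain) - dbfs(soft * gain)
# Both differences are 20 dB: the gain cancels in the subtraction.
```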
Myth #5: Normalized tracks “use all the bits”
Because of the relationship between bit depth and dynamic range, each bit in a
digital audio sample represents roughly 6dB of dynamic range. An 8-bit sample
can capture a maximum range of 48dB between silence and the loudest
sound, while a 16-bit sample can capture a 96dB range.
In a 16-bit system, a signal peaking at -36dBfs has a maximum dynamic
range of 60dB. So in effect, this signal doesn’t use the top 6 bits of
each sample*. The thinking goes, then, that by normalizing the signal
peak to 0dBfs, we “reclaim” those bits and make use of the full 96dB
dynamic range.
But as shown above, normalization doesn’t affect the dynamic range of a
recording. Normalizing may increase the range of sample values used, but
the actual dynamic range of the encoded audio doesn’t change. To the
extent it even makes sense to think of a signal in these terms*,
normalization only changes which bits are used to represent the signal.
*NOTE: This myth also rests on a fundamental misunderstanding of digital
audio, and perhaps binary numbering. Every sample in a digital (PCM)
audio stream uses all the bits, all the time. Some bits may be set to 0,
or “turned off,” but they still carry information.
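For reference, the dB figures quoted above fall out of the math directly: each extra bit doubles the number of representable levels, and each doubling of amplitude is worth 20·log10(2), about 6.02dB. A quick check in Python:

```python
import math

def bit_depth_range_db(bits):
    # The ratio between full scale and the smallest step is 2**bits,
    # and dB for an amplitude ratio is 20 * log10(ratio).
    return 20 * math.log10(2 ** bits)

print(round(bit_depth_range_db(8)))   # 48
print(round(bit_depth_range_db(16)))  # 96
print(round(bit_depth_range_db(24)))  # 144
```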
Myth #6: Normalizing can’t hurt the audio, so why not just do it?
Best mixing practices dictate that you never apply processing “just
because.” But even setting that aside, there are at least 3 reasons NOT
to normalize:
1. Normalizing raises the signal level, but also raises the noise
level. Louder tracks inevitably mean louder noise. You can turn the
level of a normalized track down to lower the noise, of course, but then
why normalize in the first place?
2. Louder tracks leave less headroom before clipping occurs. Tracks
that peak near 0dBfs are more likely to clip when processed with EQ and
effects.
3. Normalizing to near 0dBfs can introduce inter-sample peaks.
Myth #7: One should always normalize
For mixing and recording engineers, “always” and “never” are the closest
we have to dirty words. Every mixing decision depends on the mix itself,
and since every mix is different, no single technique will be correct
100% of the time.
And so it goes with normalization. Normalizing has valid applications,
but you should decide on a track-by-track basis whether or not the
process is required.
Myth #8: Normalizing is a complete waste of time
There are at least 2 instances when your DAW’s ‘normalize’ feature is a
great tool:
1. When a track’s level is so low that you can’t use gain and volume
faders to make the track loud enough for your mix. This points to an
issue with the recording, and ideally you’d re-record the track at a
more appropriate level. But at times when that’s not possible,
normalizing can salvage an otherwise unusable take.
2. When you explicitly need to set a track’s peak level without
regard to its perceived loudness. For example, when working with test
tones, white noise, and other non-musical content. You can set the peak
level manually – play through the track once, note the peak, and raise
the track’s level accordingly – but the normalize feature does the work
for you.
Myth #9: Normalizing ensures a track won’t clip
A single track normalized to 0dBfs won’t clip. However, that track may
later be processed or filtered (e.g. with an EQ boost), causing it to clip. And if
the track is part of a mix that includes other tracks, all normalized to
0dB, it’s virtually guaranteed that the sum of all the tracks will
exceed the loudest peak in any single track. In other words, normalizing
only protects you against clipping in the simplest possible case.
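A small Python sketch makes the summing problem concrete. Here two sine "tracks" (440Hz and 554Hz, frequencies chosen arbitrarily) each stay safely within full scale on their own, but their sum does not:

```python
import math

RATE = 44100
N = 1000  # about 23ms of audio, enough to show the effect

# Two sine "tracks", each within [-1.0, 1.0], i.e. at or below 0 dBFS.
track_a = [math.sin(2 * math.pi * 440 * n / RATE) for n in range(N)]
track_b = [math.sin(2 * math.pi * 554 * n / RATE) for n in range(N)]

# Summing them, as a mix bus does, can exceed full scale.
mix = [a + b for a, b in zip(track_a, track_b)]
mix_peak = max(abs(s) for s in mix)  # well above 1.0: the mix would clip
```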
Myth #10: Normalizing requires an extra dithering step
(Note: Please read Adam’s comment below for a great description of how I
oversimplified this myth.) This last myth is a little esoteric, but it
pops up sporadically in online recording discussions, usually in the
form of a claim: “it’s OK to normalize in 24 bits but not in 16 bits,
because …” followed by an explanation that betrays a misunderstanding of
digital audio.
Simply put: A digital system dithers when changing bit depth (i.e.
converting from 24 bits to 16 bits). Normalizing operates independently of
bit depth, changing only the level of each sample. Since no bit-depth
conversion takes place, no dithering is required.
Other Definitions
Normalizing can mean a few other things. In the context of mastering an
album, engineers often normalize the album’s tracks to the same level.
This refers to the perceived level, though, as judged by the mastering
engineer, and bears no relationship to the peak level of each track.
Some systems (e.g. Sound Forge) also offer “RMS Normalization,” designed
to adjust a track based on its average, rather than peak, level. This
approach more closely matches how we perceive loudness. However, as with
peak normalization, it ultimately still requires human judgment to confirm
that the change works as intended.
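As a rough sketch of the idea (illustrative Python only, not Sound Forge's actual algorithm): RMS normalization measures average power and applies one fixed gain so the average level, rather than the peak, hits the target:

```python
import math

def rms(samples):
    """Root-mean-square level of a signal (full scale = 1.0)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def rms_normalize(samples, target_dbfs=-20.0):
    """Apply one fixed gain so the signal's RMS level hits target_dbfs."""
    target = 10 ** (target_dbfs / 20)  # dBFS -> linear amplitude
    gain = target / rms(samples)
    return [s * gain for s in samples]

# Two signals with the same peak (1.0) but very different average levels
# get very different gains, matching perceived loudness more closely.
peaky = [0.0] * 99 + [1.0]   # one isolated transient
dense = [0.5, -0.5] * 50     # sustained content
```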
--
Speeding Cars - Imogen Heap (cover)
http://www.youtube.com/watch?v=SQ0_yUXidDI&feature=related
I just encountered a product (Mymix) which utilizes a draft IEEE standard
(IEEE 1722 and IEEE 1733) for networked, distributed, audio/video
distribution.
Is anyone aware of ongoing work to develop an AVB backend for JACK so
that these devices could be utilized as the A/D frontend for an
Ardour-based portable recording system?
--
Rick Green
"Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety."
-Benjamin Franklin
"As for our common defense, we reject as false the choice between our
safety and our ideals."
-President Barack Obama 20 Jan 2009
Hi all,
Just wanted to report a working configuration. I've built an audio logger
on a P4 machine with an Edirol FA-101, and it seems to be doing its thing
well enough. AVLinux 5.0, Rotter, and Jack are working like this: I
installed AVLinux on the aforementioned box, then installed Rotter from the
repos. I edited one of the autolaunch scripts so that six instances of
Rotter are launched when I log in. I edited /etc/slim.conf so that I am
automatically logged in upon reboot. I defined a patchbay in Qjackctl so
that audio is routed from individual physical inputs to each instance of
Rotter (one channel per instance), and is looped through to the output with
the same number so that I can monitor with a Q-box (portable speaker). I
added a cron job, as root, so that the machine reboots every few hours (this
is just a remedy for Rotter crashing after 12 hours or so; still can't fix
it, eaux whell). This system writes mono flac files, and puts the files
from each input into its own directory. So far it has been going for about
a week and a half, no problems. Many thanks to all on this list for advice,
info, and moral support.
-Steiny
(415)819-2009
Greetings all,
I'm running qjackctl 0.3.7 on Ubuntu 11.04. The audio connections in
my patchbay file get saved and reconnected to each other fine, but all
the jack and alsa midi connections never do, so I have to keep on
reconnecting the midi connections. Is this unique to my system?
Thanks,
Andrew.
As Leigh Dyer has noted, the options with the Saffire Pro24 (or any
DICE-based firewire device, as far as I know) are either to use the old
firewire stack with the older lib1394, or the new stack with the new
libraries. I will point out (though I'm afraid of sounding like a broken
record) that this device will work out of the box with AVLinux 5.0.
The Saffire Pro 24, or at least the Linux driver for it, also has the
small issue that the first time you start ffado, the device will connect
(the connection LED lights) and then disconnect after about a minute.
After you start it a second time, it will work just fine.
--
Hello, list,
I'm in a bit of a hurry and a fix, and so I am forced to send this from my phone; apologies for any typos and misspellings.
I am trying to set up a system with Ubuntu interfacing with a Focusrite Saffire Pro24, none of which are my own or my own choosing. After some fooling with the code, ffado recognises the card and jackd seems to interface fine with it. However, no sound will come out of the sound card no matter what I do. Jack sees all the channels and the card lights up like a nice Christmas tree, but no joy. I was wondering whether any Saffire user on this list might give me a hint. I can always use my own trusty little UA25EX, but the Saffire would be more convenient. Anyway, any pointers would be much appreciated.
Cheers,
S.M.
I actually figured it out.
Falktx has created something called Catia that is similar to the Jack
"connect" interface. After looking in there, it was clear that pd had no
output connection.
I simply hooked things up and all was well.
Now I've gotta figure out what the hell I was thinking when I created that
pd patch.
This is where I usually jump on the developer for not commenting his code.
It was me though. Damn.
On Aug 21, 2011 6:45 PM, "Marco Donnarumma" <devel(a)thesaddj.com> wrote:
> From the jack log it seems that jack sink for PA is launched on startup
> (along with other clients).
>
> New client 'cadence' with PID 22834
>
> Sun Aug 21 11:09:48 2011: New client 'a2j' with PID 23755
>
> Sun Aug 21 11:09:48 2011: New client 'PulseAudio JACK Sink' with PID 23814
>
> Sun Aug 21 11:09:48 2011: Connecting 'PulseAudio JACK Sink:front-left' to
> 'system:playback_1'
>
> Sun Aug 21 11:09:48 2011: Connecting 'PulseAudio JACK Sink:front-right' to
> 'system:playback_2'
>
> Sun Aug 21 11:09:48 2011: New client 'PulseAudio JACK Source' with PID 23814
>
> Sun Aug 21 11:09:48 2011: Connecting 'system:capture_1' to 'PulseAudio JACK
> Source:front-left'
>
> Sun Aug 21 11:09:48 2011: Connecting 'system:capture_2' to 'PulseAudio JACK
> Source:front-right'
>
> While that is useful for always getting audio from the internet browser and
> the like, perhaps it could interfere with Pd trying to connect to the same
> ports. Sometimes when ports are already in use, Pd might fail to connect.
>
> I would try to launch jack with no other clients connected (in your case
> avoid launching cadence, a2j, and jack sink) and then start pd from terminal
> with the jack flag.
>
> M
>
>
>
>
> On Sun, Aug 21, 2011 at 7:14 PM, Aaron L. <elmastero74(a)gmail.com> wrote:
>
>> To be honest, it's been so long since I've had my head in PD that I can't
>> even remember what all of the pieces of my fiddle script is doing (that's
>> another email for another list, I think....)
>>
>> But let's say I just want to start pd like this:
>> pdextended -jack channels 2
>>
>> Within PD, when I click on 'test audio and midi', I get the following DIO
>> errors in pd along with no test tone (after I select "60", of course) :
>> error: channels: can't open
>> error: 2: can't open
>> audio I/O error history:
>> seconds ago error type
>> 70.39 unknown
>> 70.39 unknown
>> warning: tone-osc: multiply defined
>> warning: tone-mon: multiply defined
>> warning: tone-osc: multiply defined
>> warning: tone-mon: multiply defined
>> warning: tone-osc: multiply defined
>>
>> No messages within Jack regarding any of this.
>>
>> Here's my Jack log:
>>
>> Sun Aug 21 11:09:33 2011: Loading settings from
>> "/home/aaron/.config/jack/conf.xml" using expat_2.0.1 ...
>>
>> Sun Aug 21 11:09:33 2011: setting engine option "driver" to value "alsa"
>>
>> Sun Aug 21 11:09:33 2011: driver "alsa" selected
>>
>> Sun Aug 21 11:09:33 2011: setting engine option "realtime" to value "false"
>>
>> Sun Aug 21 11:09:33 2011: setting engine option "verbose" to value "false"
>>
>> Sun Aug 21 11:09:33 2011: setting engine option "client-timeout" to value
>> "500"
>>
>> Sun Aug 21 11:09:33 2011: setting for driver "net" found
>>
>> Sun Aug 21 11:09:33 2011: setting for driver "dummy" found
>>
>> Sun Aug 21 11:09:33 2011: setting for driver "alsa" found
>>
>> Sun Aug 21 11:09:33 2011: setting driver option "device" to value "hw:0"
>>
>> Sun Aug 21 11:09:33 2011: setting driver option "rate" to value "44100"
>>
>> Sun Aug 21 11:09:33 2011: setting driver option "period" to value "1024"
>>
>> Sun Aug 21 11:09:33 2011: setting driver option "nperiods" to value "3"
>>
>> Sun Aug 21 11:09:33 2011: setting driver option "hwmon" to value "false"
>>
>> Sun Aug 21 11:09:33 2011: setting driver option "hwmeter" to value "false"
>>
>> Sun Aug 21 11:09:33 2011: setting driver option "duplex" to value "true"
>>
>> Sun Aug 21 11:09:33 2011: setting driver option "softmode" to value "false"
>>
>> Sun Aug 21 11:09:33 2011: setting driver option "monitor" to value "false"
>>
>> Sun Aug 21 11:09:33 2011: setting driver option "dither" to value "n"
>>
>> Sun Aug 21 11:09:33 2011: setting driver option "shorts" to value "false"
>>
>> Sun Aug 21 11:09:33 2011: setting for driver "loopback" found
>>
>> Sun Aug 21 11:09:33 2011: setting for driver "netone" found
>>
>> Sun Aug 21 11:09:33 2011: setting for driver "firewire" found
>>
>> Sun Aug 21 11:09:33 2011: setting for internal "netmanager" found
>>
>> Sun Aug 21 11:09:33 2011: setting for internal "profiler" found
>>
>> Sun Aug 21 11:09:33 2011: setting for internal "audioadapter" found
>>
>> Sun Aug 21 11:09:33 2011: setting for internal "netadapter" found
>>
>> Sun Aug 21 11:09:33 2011: Listening for D-Bus messages
>>
>> Sun Aug 21 11:09:42 2011: ------------------
>>
>> Sun Aug 21 11:09:42 2011: Controller activated. Version 1.9.7 (4236)
>> built on Thu Jun 16 13:33:27 2011
>>
>> Sun Aug 21 11:09:42 2011: Loading settings from
>> "/home/aaron/.config/jack/conf.xml" using expat_2.0.1 ...
>>
>> Sun Aug 21 11:09:42 2011: setting engine option "driver" to value "alsa"
>>
>> Sun Aug 21 11:09:42 2011: driver "alsa" selected
>>
>> Sun Aug 21 11:09:42 2011: setting engine option "realtime" to value "false"
>>
>> Sun Aug 21 11:09:42 2011: setting engine option "verbose" to value "false"
>>
>> Sun Aug 21 11:09:42 2011: setting engine option "client-timeout" to value
>> "500"
>>
>> Sun Aug 21 11:09:42 2011: setting for driver "net" found
>>
>> Sun Aug 21 11:09:42 2011: setting for driver "dummy" found
>>
>> Sun Aug 21 11:09:42 2011: setting for driver "alsa" found
>>
>> Sun Aug 21 11:09:42 2011: setting driver option "device" to value "hw:0"
>>
>> Sun Aug 21 11:09:42 2011: setting driver option "rate" to value "44100"
>>
>> Sun Aug 21 11:09:42 2011: setting driver option "period" to value "1024"
>>
>> Sun Aug 21 11:09:42 2011: setting driver option "nperiods" to value "3"
>>
>> Sun Aug 21 11:09:42 2011: setting driver option "hwmon" to value "false"
>>
>> Sun Aug 21 11:09:42 2011: setting driver option "hwmeter" to value "false"
>>
>> Sun Aug 21 11:09:42 2011: setting driver option "duplex" to value "true"
>>
>> Sun Aug 21 11:09:42 2011: setting driver option "softmode" to value "false"
>>
>> Sun Aug 21 11:09:42 2011: setting driver option "monitor" to value "false"
>>
>> Sun Aug 21 11:09:42 2011: setting driver option "dither" to value "n"
>>
>> Sun Aug 21 11:09:42 2011: setting driver option "shorts" to value "false"
>>
>> Sun Aug 21 11:09:42 2011: setting for driver "loopback" found
>>
>> Sun Aug 21 11:09:42 2011: setting for driver "netone" found
>>
>> Sun Aug 21 11:09:42 2011: setting for driver "firewire" found
>>
>> Sun Aug 21 11:09:42 2011: setting for internal "netmanager" found
>>
>> Sun Aug 21 11:09:42 2011: setting for internal "profiler" found
>>
>> Sun Aug 21 11:09:42 2011: setting for internal "audioadapter" found
>>
>> Sun Aug 21 11:09:42 2011: setting for internal "netadapter" found
>>
>> Sun Aug 21 11:09:42 2011: Listening for D-Bus messages
>>
>> Sun Aug 21 11:09:47 2011: Starting jack server...
>>
>> Sun Aug 21 11:09:47 2011: JACK server starting in non-realtime mode
>>
>> 11:09:47.801 D-BUS: JACK server was started (org.jackaudio.service aka
>> jackdbus).
>>
>> 11:09:48.054 ALSA connection graph change.
>>
>> Sun Aug 21 11:09:47 2011: control device hw:0
>>
>> Sun Aug 21 11:09:47 2011: control device hw:0
>>
>> Sun Aug 21 11:09:47 2011: Acquired audio card Audio0
>>
>> Sun Aug 21 11:09:47 2011: creating alsa driver ...
>> hw:0|hw:0|1024|3|44100|0|0|nomon|swmeter|-|32bit
>>
>> Sun Aug 21 11:09:47 2011: control device hw:0
>>
>> Sun Aug 21 11:09:47 2011: configuring for 44100Hz, period = 1024 frames
>> (23.2 ms), buffer = 3 periods
>>
>> Sun Aug 21 11:09:47 2011: ALSA: final selected sample format for capture:
>> 16bit little-endian
>>
>> Sun Aug 21 11:09:47 2011: ALSA: use 3 periods for capture
>>
>> Sun Aug 21 11:09:47 2011: ALSA: final selected sample format for playback:
>> 32bit integer little-endian
>>
>> Sun Aug 21 11:09:47 2011: ALSA: use 3 periods for playback
>>
>> Sun Aug 21 11:09:47 2011: graph reorder: new port 'system:capture_1'
>>
>> Sun Aug 21 11:09:47 2011: New client 'system' with PID 0
>>
>> Sun Aug 21 11:09:47 2011: graph reorder: new port 'system:capture_2'
>>
>> Sun Aug 21 11:09:47 2011: graph reorder: new port 'system:playback_1'
>>
>> Sun Aug 21 11:09:47 2011: graph reorder: new port 'system:playback_2'
>>
>> Sun Aug 21 11:09:47 2011: New client 'cadence' with PID 22834
>>
>> Sun Aug 21 11:09:48 2011: New client 'a2j' with PID 23755
>>
>> Sun Aug 21 11:09:48 2011: New client 'PulseAudio JACK Sink' with PID
23814
>>
>> Sun Aug 21 11:09:48 2011: Connecting 'PulseAudio JACK Sink:front-left' to
>> 'system:playback_1'
>>
>> Sun Aug 21 11:09:48 2011: Connecting 'PulseAudio JACK Sink:front-right' to
>> 'system:playback_2'
>>
>> Sun Aug 21 11:09:48 2011: New client 'PulseAudio JACK Source' with PID
>> 23814
>>
>> Sun Aug 21 11:09:48 2011: Connecting 'system:capture_1' to 'PulseAudio JACK
>> Source:front-left'
>>
>> Sun Aug 21 11:09:48 2011: Connecting 'system:capture_2' to 'PulseAudio JACK
>> Source:front-right'
>>
>> 11:09:49.960 Statistics reset.
>>
>> 11:09:49.979 Client activated.
>>
>> 11:09:50.016 JACK connection graph change.
>>
>> Sun Aug 21 11:09:49 2011: New client 'qjackctl' with PID 5754
>>
>>
>>
>> Thanks, all.
>>
>>
>> -Aaron
>>
>>
>>
>>
>>
>>
>>
>>
>> On Sun, Aug 21, 2011 at 10:47 AM, <harryhaaren(a)gmail.com> wrote:
>>
>>> Pasuspender is a way to make PulseAudio let go of the soundcard when JACK
>>> wants to access it. This *is* necessary on systems where Pulse is the main
>>> sound daemon, but as KX uses JACK for all its audio needs, it will not have
>>> any effect.
>>>
>>> "Error: 2: can't open" did catch my eye though; are you passing any
>>> special arguments to your "dac~" in the patch? Like "dac~ 1 2" or something
>>> to open the first & second channel? I've had problems there before, either
>>> because it did (or did *not*) start from 0. Can't remember the details, but
>>> perhaps that's it?
>>>
>>> -Harry
>>
>>
>>
>
>
> --
> Marco Donnarumma
> Independent New Media and Sonic Arts Practitioner, Performer, Teacher
> ACE, Sound Design MSc by Research (ongoing)
> The University of Edinburgh, UK
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Portfolio: http://marcodonnarumma.com
> Research: http://res.marcodonnarumma.com | http://www.thesaddj.com |
> http://www.flxer.net
> Director: http://www.liveperformersmeeting.net
I'm announcing a new release of Nama[1], with
several fixes and improvements:
- for visualizing/editing waveforms, Nama will invoke
Mhwaveedit or Audacity on the current track/version
(view_waveform, edit_waveform commands)
- envelope fades now work with transport seeking (fix in Ecasound 2.8.0)
- intuitive behavior for bus REC-enable flag
- reorganized source code
- documentation of variables and track subclasses
Nama is available as a Debian package,[2] as a native perl
distribution from CPAN[3] or from github.[4]
A blurb follows.
Regards,
Joel
------------------------------------------------------------
Nama performs multitrack recording, non-destructive
editing, mixing and mastering using the Ecasound audio
engine developed by Kai Vehmanen. Its command-line interface
enables users to perform many functions expected of a
digital audio workstation. A simple Tk-based GUI is also
provided.
Audio features
* stable and mature audio engine
* unlimited tracks supporting multiple WAV versions (AKA takes)
* track caching (AKA track freezing)
* effects (LADSPA, Ecasound)
* effect chains (multi-effect presets)
* effect profiles (multi-track presets)
* controllers
* sends
* inserts
* marks
* soloing
* regions
* buses
* edits
* instrument monitor outputs with per-musician mixes
* mastering mode
* project templates
* autoselect JACK/ALSA modes
* autodetect LADSPA plugins, Ecasound presets
* autosave project state
* launch Mhwaveedit on current track/version/waveform
* launch Audacity on current track/version/waveform
* Ladish Level 1 session handling.
Command prompt features
* grammar-based command language
* Ecasound interactive-mode commands
* shell commands
* perl code
* command history
* scripting
* user-defined commands
* autocompletion for commands, filenames, effect names
* help menus with keyword search
GUI
* simple and convenient
* two panels, no dialog boxes
* coexists with command prompt
* colors can be customized
Debugging resources
* track and bus status displays
* signal routing shown as Ecasound chain setup
* viewing of any and all data structures
* text-format config, project state and chain setup files
* separate debugging outputs at Nama and Ecasound levels
* mailing list support[5,6]
1. http://freeshell.de/~bolangi/cgi1/nama.cgi/00home.html
2. http://packages.debian.org/search?keywords=nama
3. http://search.cpan.org/dist/Audio-Nama/
4. http://github.com/~bolangi/nama
5. http://www.freelists.org/list/nama
6. http://eca.cx/ecasound/mlists.php
--
Joel Roth
Pasuspender is a way to make PulseAudio let go of the soundcard when JACK
wants to access it. This *is* necessary on systems where Pulse is the main
sound daemon, but as KX uses JACK for all its audio needs, it will not have
any effect.
"Error: 2: can't open" did catch my eye though, are you passing any special
arguments to your "dac~" in the patch? like "dac~ 1 2" or something to open
the first & second channel? I've had problems there before, either because
it did (or did *not*) start from 0. Can't remember the details, but perhaps
that's it?
-Harry