Hi all,
I once saw a helpful chart showing how latency
propagates through the JACK graph, through
the capture and playback ports of each
node.
Can anyone provide a link?
I want to make sure I understand correctly that my app's
latency callback should be checking capture latency at the
input ports and setting playback latency on the output
ports.
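To make the question concrete, here's a minimal sketch of what I have
so far, for a client with one input and one output port (the port
variables and own_delay are placeholders of mine). Am I handling the
two modes the right way around?

    #include <jack/jack.h>

    static jack_port_t *in_port, *out_port;  /* placeholder ports */
    static jack_nframes_t own_delay = 0;     /* this app's own processing delay */

    static void latency_cb(jack_latency_callback_mode_t mode, void *arg)
    {
        jack_latency_range_t range;
        if (mode == JackCaptureLatency) {
            /* Read the capture latency arriving at the input port... */
            jack_port_get_latency_range(in_port, JackCaptureLatency, &range);
            range.min += own_delay;
            range.max += own_delay;
            /* ...and pass it downstream via the output port. */
            jack_port_set_latency_range(out_port, JackCaptureLatency, &range);
        } else {
            /* Playback latency flows the other way: read at the output,
               set at the input. */
            jack_port_get_latency_range(out_port, JackPlaybackLatency, &range);
            range.min += own_delay;
            range.max += own_delay;
            jack_port_set_latency_range(in_port, JackPlaybackLatency, &range);
        }
    }

    /* Registered once after client creation:
       jack_set_latency_callback(client, latency_cb, NULL); */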
Thanks,
Joel
--
Joel Roth
Hi All,
Just a note that I've extracted JNAJack (Java bindings for JACK) from
the Praxis LIVE repository, along with other elements of the
JAudioLibs code, and it's now on GitHub at
https://github.com/jaudiolibs. This should make it easier for people
to work with (and contribute to!) the code.
Downloads and other facilities are still on the Google Code site at
http://code.google.com/p/java-audio-utils/ for now.
Best wishes,
Neil
--
Neil C Smith
Artist : Technologist : Adviser
http://neilcsmith.net
Praxis LIVE - open-source, graphical environment for rapid development
of intermedia performance tools, projections and interactive spaces -
http://code.google.com/p/praxis
OpenEye - specialist web solutions for the cultural, education,
charitable and local government sectors - http://openeye.info
Hi, I need some advice to clear up some confusion:
I noticed our app uses this pan formula:
vol_L = volume * (1.0 - pan);
vol_R = volume * (1.0 + pan);
where volume is the fader value, pan is the pan knob value
which ranges between -1.0 and 1.0, and vol_L and vol_R are the
factors to be applied to the data when sending a mono signal
to a stereo bus.
When pan is center, 100% of the signal is sent to both L and R.
At the pan extremities, the signal is boosted by 6 dB (a factor of 2.0).
But according to [1], we should be using a Pan Law [2], where pan
center is around 3 dB to 6 dB down and the pan extremities carry the
full signal.
So I want to change how we mix mono -> stereo and use a true Pan Law.
I could add a Pan Law selector, which seems like it might be useful
for various studio acoustics.
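For concreteness, the sort of thing I have in mind is a -3 dB
constant-power law like this sketch (function name is mine):

    #include <math.h>

    /* -3 dB constant-power pan law for a mono -> stereo send; pan
       ranges between -1.0 and 1.0 as above. At center both factors
       are sqrt(0.5), about -3 dB; at the extremities the hot side
       is at unity. */
    void pan_constant_power(float volume, float pan, float *vol_L, float *vol_R)
    {
        const float theta = (pan + 1.0f) * (float)M_PI / 4.0f;  /* 0 .. pi/2 */
        *vol_L = volume * cosf(theta);
        *vol_R = volume * sinf(theta);
    }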
Then I noticed we use the same formula above to apply 'balance'
(using the same pan knob) when sending a stereo signal to
a stereo bus.
But according to [3] we should be using a true balance control, not those
same pan factors above. And according to [1]:
"Note that mixers which have stereo input channels controlled by a single
pan pot are in fact using the balance control architecture in those channels,
not pan control."
So I want to change how we mix stereo -> stereo and use a true balance.
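By 'true balance' I mean something like this sketch (name is mine):
at center both channels pass at unity, and moving the knob only
attenuates the opposite channel.

    void balance_stereo(float volume, float pan, float *vol_L, float *vol_R)
    {
        /* True balance for a stereo -> stereo send: only the channel
           opposite the knob position is attenuated. */
        *vol_L = volume * (pan > 0.0f ? 1.0f - pan : 1.0f);
        *vol_R = volume * (pan < 0.0f ? 1.0f + pan : 1.0f);
    }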
But then I checked some other apps to see what they do.
In an unofficial test I noticed that QTractor seems to do the same thing;
that is, when pan is adjusted on a stereo track, one meter goes up while
the other goes down. RG seems not to have stereo meters, and in Ardour
I couldn't seem to make pan affect the meters; I will try some more.
My questions:
Is the pan formula above popular?
What is the consensus on stereo balance: use a Pan Law (the formula
above or otherwise), or use a true balance?
What should I do in the remaining case, sending a stereo signal to a mono bus?
If I am using a Pan Law as balance, the two signals will have already been
attenuated at pan center so I could simply sum the two channels together.
But if instead I use true balance, at center the two signals are 100%.
So should I attenuate the signals before summing them to a mono bus?
Currently as our pan formula above shows, there would be no attenuation.
Thanks.
Tim.
[1] http://en.wikipedia.org/wiki/Panning_%28audio%29
[2] http://en.wikipedia.org/wiki/Pan_law
[3] http://www.rane.com/par-b.html#balance
The deadline for submission of papers for DAFX13 has been extended to
Sunday, April 14th.
More information at http://dafx13.nuim.ie/authors.html
You can also follow us on Twitter: https://twitter.com/DAFxInfo
We're looking forward to your submission!
Dr Victor Lazzarini
Senior Lecturer
Dept. of Music
NUI Maynooth Ireland
tel.: +353 1 708 3545
Victor dot Lazzarini AT nuim dot ie
Trying to get live strings and vocals working in linuxsampler, I was
surprised by the fact that this controller (Expression, CC 11, judging
by the constants mentioned below) has no effect with any of the
engines and banks, including the ones suggested for fluidsynth, while
with fluidsynth everything works fine.
It is not clear to me how this controller should be handled at all. I
expected it to be handled the same way as velocity, but set for the
entire channel and continuously variable (unlike velocity, which is
specified per note and constant).
I searched both the fluidsynth and qsynth sources for the handling
code, but found in fluidsynth only two constants (EXPRESSION_LSB and
EXPRESSION_MSB) and a command setting these parameters to their
default value (127).
IMHO this is very much needed in linuxsampler, and who knows what else
is left undone in fluidsynth. It looks like the LS devs left it to the
user (using qmidiroute, mididings or the sequencer's built-in
converting capabilities, if available).
This link only confirms that:
http://bb.linuxsampler.org/viewtopic.php?f=6&t=207
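For what it's worth, my understanding (an assumption of mine, not
taken from either codebase) is that Expression is usually applied as a
per-channel scaling of the main Volume (CC 7), in the simplest linear
form something like:

    /* Hypothetical sketch: Expression (CC 11) continuously scales the
       channel's main Volume (CC 7); both range 0..127, default 127.
       Real synths may use a dB-shaped curve instead of a linear product. */
    float channel_gain(int cc7_volume, int cc11_expression)
    {
        return ((float)cc7_volume / 127.0f) * ((float)cc11_expression / 127.0f);
    }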
Hi again. Looking for any advice, tips, tricks, anecdotes etc.
I want to eliminate or reduce 'zipper' noise on volume changes.
So I'm looking at two techniques:
Zero-crossing / zero-value signal detection, and slew-rate limiting.
Code is almost done, almost ready to start testing each technique.
Each technique has some advantages and disadvantages.
If I use a slew-rate limiter, I figure that for a sudden volume factor
change from 0.0 to 1.0, if I limit the slew rate to, say, 0.01 per
sample, then after 100 samples the ramp will be done.
But even with a fine ramp, this still might introduce artifacts in the audio.
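Here is roughly what the slew-rate limiter looks like (a simplified
sketch, names are mine):

    /* Move the applied gain toward `target` by at most `slew` per
       sample; with slew = 0.01 a 0.0 -> 1.0 change ramps over 100
       samples, as described above. */
    void apply_gain_slewed(float *buf, int nframes, float *gain, float target)
    {
        const float slew = 0.01f;
        for (int i = 0; i < nframes; i++) {
            float d = target - *gain;
            if (d >  slew) d =  slew;
            if (d < -slew) d = -slew;
            *gain += d;
            buf[i] *= *gain;
        }
    }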
If I use a zero-crossing/zero-value detector and apply volume changes
only at these safe points, that's a much more desirable 'perfect' system.
But I put a time limit on waiting for a zero-cross, because it's
possible the signal might have a high DC offset or contain VLF content
below 20 Hz.
(One cannot simply wait for the current data value to be 'zero' because
for example with a perfect square wave signal the 'current' value will never
approach zero, hence the zero-crossing detection requirement.)
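The detector itself is simple enough, something like this sketch (name
is mine):

    /* Return the index of the first zero value or sign change in the
       buffer, or -1 if none is found, so the caller can fall back to
       a ramp (the DC-offset / VLF case mentioned above). */
    int find_zero_cross(const float *buf, int nframes)
    {
        for (int i = 1; i < nframes; i++) {
            if (buf[i] == 0.0f ||
                (buf[i - 1] < 0.0f && buf[i] >= 0.0f) ||
                (buf[i - 1] > 0.0f && buf[i] <= 0.0f))
                return i;
        }
        return -1;
    }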
At some point while waiting for a zero-cross, a ramp would have already
finished, and it may have been better to use that ramp instead.
Conversely, a zero-cross might happen sooner than a ramp could finish,
and we definitely want to use that zero-cross here instead of the ramp.
So it means I either have to give the user a choice of the two
techniques, or try to automatically switch between them: whichever one
occurs first, the ramp finishing or the zero-cross, use it.
But it means I have to keep two audio buffers, one for applying the
ramp as the samples are processed and one for waiting until the
zero-cross happens; whichever one "finishes the race" first, that
buffer "gets the nod".
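In simplified single-buffer form (ignoring carry-over between
periods), the race might look like this, reusing the two sketches
above; ramp_len assumes the 0.01-per-sample slew and a full-scale
change:

    void apply_gain_change(float *buf, int nframes, float *gain, float target)
    {
        const int ramp_len = 100;  /* samples a full 0.0 -> 1.0 ramp needs */
        int zc = find_zero_cross(buf, nframes);
        if (zc >= 0 && zc < ramp_len) {
            /* The zero-cross wins the race: hard-switch the gain there. */
            for (int i = 0; i < nframes; i++) {
                if (i == zc)
                    *gain = target;
                buf[i] *= *gain;
            }
        } else {
            /* The ramp wins (or no zero-cross was found): slew instead. */
            apply_gain_slewed(buf, nframes, gain, target);
        }
    }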
The zero-crossing technique has some interesting implications.
For a stereo signal, each channel's zero-cross will happen at a
different time. So I'm trying to imagine what that's going to sound
like when the volume changes happen at slightly different times, and
whether it will be noticeable, even though that is far better than
'zipper' noise.
Also I'm trying to imagine how track cross-fading support would deal
with zero-crossing, and whether it is better to use ramps in that case.
What do you think?
Thanks.
Tim.
Hi all,
Hopefully this will be useful to others. I just wrote a short note
describing my experiences when moving from a SysV init script based
OpenMixer[*] system to one that uses systemd. In short: yes, it is
possible; no, it was not easy (mostly because of my own ignorance of
systemd, of course :-)
https://ccrma.stanford.edu/~nando/plog/20130315/
Enjoy!
-- Fernando
[*] OpenMixer is a mixing and routing application written in
SuperCollider that manages audio for our 22 channel 3D Listening Room.
It has to start on boot and runs as the "openmixer" user and group, and
I wanted it to not use a regular Desktop "autologin" feature but rather
be a SysV init service...
https://ccrma.stanford.edu/room-guides/listening-room/
https://ccrma.stanford.edu/software/openmixer/manual/
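For the curious, the end result is a unit file along these lines (a
from-memory sketch with hypothetical paths and names; the real details
are in the note above):

    # /etc/systemd/system/openmixer.service (hypothetical)
    [Unit]
    Description=OpenMixer audio mixing and routing
    After=sound.target

    [Service]
    User=openmixer
    Group=openmixer
    ExecStart=/usr/local/bin/openmixer
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target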
Spring is nigh.
Qtractor 0.5.8 (india romeo) is out, singing a serenade...
Nothing but the change-log (see below:))
Enjoy && lots of fun.
Website:
http://qtractor.sourceforge.net
Project page:
http://sourceforge.net/projects/qtractor
Downloads:
- source tarball:
http://downloads.sourceforge.net/qtractor/qtractor-0.5.8.tar.gz
- source package (openSUSE 12.3):
http://downloads.sourceforge.net/qtractor/qtractor-0.5.8-5.rncbc.suse123.sr…
- binary packages (openSUSE 12.3):
http://downloads.sourceforge.net/qtractor/qtractor-0.5.8-5.rncbc.suse123.i5…
http://downloads.sourceforge.net/qtractor/qtractor-0.5.8-5.rncbc.suse123.x8…
- quick start guide & user manual:
http://downloads.sourceforge.net/qtractor/qtractor-0.5.x-user-manual.pdf
Weblog (upstream support):
http://www.rncbc.org
License:
Qtractor is free, open-source software, distributed under the terms
of the GNU General Public License (GPL) version 2 or later.
Change-log:
- Dropped the old audio ramping spin-locks, as their glitch reduction
wasn't that effective anymore.
- Audio bus and track gain may now be set for amplification again, from
+0dB up to +6dB, while using the Mixer strip sliders/faders (an old
function found missing ever since pre-TYOQA).
- Basic LV2 X11 UI support has been added through libSUIL, but it is
only really effective if the plugin doesn't also support the LV2
External UI extension, which takes precedence in any case.
- Improved precision tolerance on the Tempo Map / Markers dialog.
- Reinstated and fixed the (old) warning and impending re-conversion
when loading session files whose original sample rate differs from the
current audio device engine's (aka. JACK).
- LV2 Plugin State/Preset name discrimination fix (after a ticket by
Jiri Prochaszka aka. Anchakor, thanks:)
- Linked/ref-counted audio clips must not overlap and now must have a
buffer-size worth of a gap between each other.
- Something fishy has been detected in the SSE (not so) optimized code
of the SoundTouch-inspired WSOLA time-stretching.
- Splitting clips apart is now easier than ever: a brand new entry
enters the main menu scene: Edit/Split (Ctrl+Y) splits up clips
according to current range/rectangular selection.
- Audio clip offsets are now properly corrected when time-stretching is
applied via Shift/Ctrl+dragging any of the clip edges.
- One semi-colon typo was hiding the proper discrimination of peak
files used to draw distinct waveforms of time-stretched audio clips.
- Track automation curves are now also affected by Edit/Insert/Range
commands.
- Finally, some visual feedback is shown while audio track export is
running, in the form of a main status progress bar.
- New user option: save backup versions of existing session files.
- Default session directory now set to regular file's path on load.
- A convenient minimum slack duration has been fixed for MIDI SysEx
messages.
- LV2 Time/position information is now asynchronously fed back into
their parameter (control input) ports when designated.
- LV2 State is now properly restored for plugins inserted on buses,
probably solving the Calf Fluidsynth SoundFont information missing on
buses ticket, reported by Albert Graef, thanks.
- Fixed an immediate null pointer crash on creating a parentless new
group while on the files organizer widget.
- Preparations for future Qt5 migration.
Cheers!
--
rncbc aka Rui Nuno Capela
rncbc(a)rncbc.org