I've been working with large .lscp files, over a hundred .idf files for MusE 2,
etc., for some time now, and I'm pretty well knackered from manually creating
large monolithic instrument config files of one sort or another, for the
same sample library set, across apps.
I propose the following, and I'm willing to put some money in someone's
pocket for this:
A simple app (CLI is OK, but GUI-based preferred) that:
1) automatically converts .lscp files containing single or multiple MIDI
instrument maps into individual midnam files;
2) automatically converts MusE 2 .idf files into midnam files;
3) converts midnam files into MusE 2 .idf files.
It obviously needs to work properly, and create correct (automatically
checked by the app) midnam and .idf files.
The app needs to be open source (obviously) and public, for those other poor
souls who have to struggle with large instrument files, particularly those
using large sample libraries with copious articulations per instrument, and for
whom automation of these excruciating data-processing tasks would save
massive amounts of time and effort.
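To make step 1 concrete, here is a rough sketch of the kind of conversion meant. The LSCP field layout (map, bank, program, engine, file, index, volume, name) and the midnam element names used here are illustrative assumptions only; a real tool must follow the actual LSCP and MIDNAM specifications.

```python
import re
import xml.etree.ElementTree as ET

# Sketch: pull MIDI instrument map entries out of an .lscp dump and emit a
# midnam-style PatchNameList. Field order below is an assumption for
# illustration, not a faithful rendering of the LSCP grammar.
MAP_RE = re.compile(
    r"MAP MIDI_INSTRUMENT (\d+) (\d+) (\d+) (\w+) '([^']*)' (\d+) ([\d.]+) '([^']*)'"
)

def lscp_to_midnam(lscp_text: str) -> str:
    root = ET.Element('MIDINameDocument')
    patches = ET.SubElement(root, 'PatchNameList', Name='Converted')
    for m in MAP_RE.finditer(lscp_text):
        _map, _bank, prog, _engine, _file, _idx, _vol, name = m.groups()
        ET.SubElement(patches, 'Patch', Number=prog, Name=name)
    return ET.tostring(root, encoding='unicode')

example = "MAP MIDI_INSTRUMENT 0 0 3 sfz '/samples/violin.sfz' 0 1.0 'Violin Sustain'"
print(lscp_to_midnam(example))
```

A real converter would of course need a proper LSCP parser and validation against the MIDNAM DTD, not a single regex.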
Thanks, and if you're interested, contact me privately,
Alex Stone.
Hi all,
Time for another update for general consumption: minor changes, but
quite a few of them!
A nearly arbitrary list of changes from the changelog:
- Allow reading old drummaps for new style drumtracks
- Added metronome icon in main window
- Fixed moving events with keyboard in Drum editor
- Added theme support, Light, Dark and Ardour so far
- Added missing line draw shortcut (F) to drum editor.
- Added new French translation from Yann Collette
- Added: Pan and Zoom tools to editors. P + Z shortcuts. Added a Settings
item for alternate behaviour.
- Fixed: Pianoroll and Drum Editor 'Snap' boxes not remembering 1st or 3rd
columns.
- Fixed: Arranger 'Snap' was not stored or remembered.
- Fixed: Accelerator buttons Shift/Ctrl/Alt for moving/copying/cloning/
restricting movement.
- Fixed: Shift key restricting movement now ignores the snap setting.
- Fixed: Resizing with the Shift key now ignores the snap setting.
- Fixed: Drawing a new item with the Shift key now ignores the snap setting.
- Fixed: Shift key was not snapping to vertical.
- Fixed: ALL 'Speaker' related playing of notes. Works with new notes,
moving notes, piano press etc.
- Fixed: ALL 'Speaker' related notes now send true note-offs instead of
zero-velocity note-ons.
- Fixed: Drum 'Cursor' mode was playing double notes.
- Fixed: New Drums 'Cursor' mode and instrument up/down movement was
broken, jumping all over the place.
- Added prebuilt PDF of manual (work in progress)
- Improved: Shortcut listings: Added Wave/Score categories. Re-categorized
several keys. Updated README.shortcuts
- Improved: Right-click menus expanded. Now also shows 'Tools' menu when
clicked on parts.
- Added choice of new metronome with different sounds and adjustable volume.
- Fixed gain adjustment with 'Other' choice in wave editor; it was reversed.
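For anyone curious about the note-off item above, the difference at the byte level is small but real: a true note-off uses the 0x8n status byte, while many apps historically sent a note-on (0x9n) with velocity 0, which most but not all synths treat as equivalent. A minimal sketch:

```python
# Illustration of true note-off vs. zero-velocity note-on.
# Status nibble 0x9 = note-on, 0x8 = note-off; low nibble is the channel.
def note_on(channel: int, note: int, velocity: int) -> bytes:
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def true_note_off(channel: int, note: int, release: int = 64) -> bytes:
    # A real note-off carries a release velocity (64 is a common default).
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, release & 0x7F])

print(true_note_off(0, 60).hex())  # 803c40: status 0x80, a real note-off
print(note_on(0, 60, 0).hex())     # 903c00: note-on with velocity 0, the old way
```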
For more information and additional changes see the full changelog:
http://lmuse.svn.sourceforge.net/viewvc/lmuse/trunk/muse2/ChangeLog?revisio…
Find the download at:
https://sourceforge.net/projects/lmuse/files/
MusE on!
The MusE Team
Hi all,
I once saw a helpful chart showing how latency
propagates through the JACK graph, through
the capture and playback ports of each
node.
Can anyone provide a link?
I want to make sure I understand correctly, that my app's
latency callback should be checking capture latency at the
input ports and setting playback latency on the output
ports.
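To check my understanding with a toy model (this is not real JACK API code, just the arithmetic as I understand it): each node adds its own internal delay range to the capture latency it inherits from upstream, and playback latency accumulates the same way in the opposite direction.

```python
# Toy model of latency propagation through a chain of JACK-like nodes.
# Each node reports, on its output, the latency range seen at its input
# plus its own internal (min, max) delay in frames, source first.
def propagate(ranges):
    """ranges: list of (min_delay, max_delay) per node, source first.
    Returns the accumulated latency range at the end of the chain."""
    lo = hi = 0
    for dmin, dmax in ranges:
        lo += dmin   # best case accumulates the minimum delays
        hi += dmax   # worst case accumulates the maximum delays
    return lo, hi

# e.g. soundcard capture (64 frames) -> effect with 128..256 frames of delay
print(propagate([(64, 64), (128, 256)]))  # (192, 320)
```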
Thanks,
Joel
--
Joel Roth
Hi All,
Just a note that I've extracted JNAJack (Java bindings for JACK) from
the Praxis LIVE repository, along with other elements of the
JAudioLibs code, and it's now on GitHub at
https://github.com/jaudiolibs. This should make it easier for people to
work with (and contribute to!) the code.
Downloads and other facilities are still on the Google Code site at
http://code.google.com/p/java-audio-utils/ for now.
Best wishes,
Neil
--
Neil C Smith
Artist : Technologist : Adviser
http://neilcsmith.net
Praxis LIVE - open-source, graphical environment for rapid development
of intermedia performance tools, projections and interactive spaces -
http://code.google.com/p/praxis
OpenEye - specialist web solutions for the cultural, education,
charitable and local government sectors - http://openeye.info
Hi, I need some advice to clear up some confusion:
I noticed our app uses this pan formula:
vol_L = volume * (1.0 - pan);
vol_R = volume * (1.0 + pan);
where volume is the fader value, pan is the pan knob value
which ranges between -1.0 and 1.0, and vol_L and vol_R are the
factors to be applied to the data when sending a mono signal
to a stereo bus.
When pan is center, 100% of the signal is sent to L and R.
At the pan extremities, the signal is boosted by 6 dB (a factor of 2 in amplitude).
But according to [1], we should be using a Pan Law [2],
where pan center is around 3dB to 6dB down and pan
extremities is full signal.
So I want to change how we mix mono -> stereo and use a
true Pan Law. I could add a Pan Law selector; it seems like it
might be useful for various studio acoustics.
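For reference, one common -3 dB pan law is the constant-power (sine/cosine) curve; a sketch of what the selector might offer (the curve choice here is just one option among several, not a recommendation):

```python
import math

# Constant-power (-3 dB centre) pan law, one common choice among several.
# pan runs from -1.0 (hard left) to +1.0 (hard right), as in our formula.
def constant_power_pan(volume: float, pan: float) -> tuple:
    theta = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return volume * math.cos(theta), volume * math.sin(theta)

l, r = constant_power_pan(1.0, 0.0)
print(l, r)   # both ~0.707, i.e. about -3.01 dB at centre
l, r = constant_power_pan(1.0, 1.0)
print(l, r)   # hard right: left ~0.0, right 1.0 (full signal)
```

Unlike the current formula, the extremities sit at unity gain and the centre is attenuated, which matches the Pan Law articles cited below.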
Then I noticed we use the same formula above to apply 'balance'
(using the same pan knob) when sending a stereo signal to
a stereo bus.
But according to [3] we should be using a true balance control, not those
same pan factors above. And according to [1]:
"Note that mixers which have stereo input channels controlled by a single
pan pot are in fact using the balance control architecture in those channels,
not pan control."
So I want to change how we mix stereo -> stereo and use true balance.
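As I read the Rane note, a true balance control leaves both channels at unity in the centre and only attenuates the opposite channel as the knob turns; a sketch (the linear taper is my simplification):

```python
# True balance control for a stereo signal: centre leaves both channels
# untouched; turning the knob only attenuates the opposite channel.
def balance(left: float, right: float, pos: float) -> tuple:
    """pos: -1.0 (full left) .. 0.0 (centre) .. +1.0 (full right)."""
    if pos > 0.0:
        left *= 1.0 - pos    # moving right: dip the left channel only
    elif pos < 0.0:
        right *= 1.0 + pos   # moving left: dip the right channel only
    return left, right

print(balance(1.0, 1.0, 0.0))   # (1.0, 1.0): untouched at centre
print(balance(1.0, 1.0, 0.5))   # (0.5, 1.0): left dipped, right unchanged
```

Contrast with the current formula, which boosts one side while cutting the other, so one meter rises as the other falls.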
But then I checked some other apps to see what they do.
In an unofficial test I noticed that QTractor seems to do the same thing;
that is, when pan is adjusted on a stereo track, one meter goes up while
the other goes down. RG seems not to have stereo meters, and in Ardour
I couldn't seem to make pan affect the meters; I will try some more.
My questions:
Is the pan formula above popular?
What is the consensus on stereo balance: use a Pan Law (the formula
above or otherwise), or use a true balance?
What should I do in the remaining case, sending a stereo signal to a mono bus?
If I am using a Pan Law as balance, the two signals will have already been
attenuated at pan center so I could simply sum the two channels together.
But if instead I use true balance, at center the two signals are 100%.
So should I attenuate the signals before summing them to a mono bus?
Currently as our pan formula above shows, there would be no attenuation.
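To make that trade-off concrete, here is how the two downmix choices differ at centre (my own sketch; the -3 dB figure is one common convention, not a standard):

```python
import math

# Stereo -> mono downmix. With a true balance control, both channels sit at
# unity at centre, so a plain sum doubles the level; one common choice is
# to attenuate each channel by 3 dB (1/sqrt(2)) before summing.
def downmix_sum(l: float, r: float) -> float:
    return l + r                      # +6 dB for a correlated full-scale signal

def downmix_attenuated(l: float, r: float) -> float:
    k = 1.0 / math.sqrt(2.0)          # -3 dB per channel before summing
    return k * l + k * r

print(downmix_sum(1.0, 1.0))          # 2.0: clips a full-scale signal
print(downmix_attenuated(1.0, 1.0))   # ~1.414: still hot; some apps use 0.5
```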
Thanks.
Tim.
[1]
http://en.wikipedia.org/wiki/Panning_%28audio%29
[2]
http://en.wikipedia.org/wiki/Pan_law
[3]
http://www.rane.com/par-b.html#balance
The deadline for submission of papers for DAFX13 has been extended to Sunday, April 14th.
More information on http://dafx13.nuim.ie/authors.html
You can also follow us on Twitter: https://twitter.com/DAFxInfo
We're looking forward to your submission!
Dr Victor Lazzarini
Senior Lecturer
Dept. of Music
NUI Maynooth Ireland
tel.: +353 1 708 3545
Victor dot Lazzarini AT nuim dot ie
Trying to get live strings and vocals working in LinuxSampler, I was
surprised to find that the expression controller has no effect (with all
engines and banks, including those suggested for FluidSynth), while with
FluidSynth everything works.
It is not clear to me how this controller should be handled at all. I
expected it to behave like velocity, but set for the entire channel and
variable over time, unlike velocity, which is specified per note and
constant.
I searched both the FluidSynth and QSynth sources for the handling
code, but found in FluidSynth only two constants (EXPRESSION_LSB and
EXPRESSION_MSB) and a command setting these parameters to their default
value (127).
IMHO this is very necessary for LinuxSampler, and who knows what else is
missing compared to FluidSynth. It looks like the LS devs left it to the
user (using qmidiroute, mididings, or the sequencer's built-in
converting capabilities, if available).
This link only confirms that:
http://bb.linuxsampler.org/viewtopic.php?f=6&t=207
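For reference, the usual General MIDI interpretation (my summary of the common convention, worth double-checking against the spec) is that expression (CC 11) acts as a per-channel scaler on top of channel volume (CC 7), both variable over time, while velocity is fixed per note:

```python
# Common GM interpretation of expression: CC 11 scales the channel gain
# set by CC 7. Both are per-channel and may change over time, while
# velocity is fixed at note-on. This is the convention as I understand it.
def channel_gain(cc7: int, cc11: int) -> float:
    return (cc7 / 127.0) * (cc11 / 127.0)

print(round(channel_gain(127, 127), 3))  # 1.0: full volume, full expression
print(round(channel_gain(127, 64), 3))   # 0.504: expression roughly halves the gain
```

A router like mididings could apply something equivalent externally when the sampler ignores CC 11.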
Hi again. Looking for any advice, tips, tricks, anecdotes etc.
I want to eliminate or reduce 'zipper' noise on volume changes.
So I'm looking at two techniques:
Zero-crossing / zero-value signal detection, and slew-rate limiting.
Code is almost done, almost ready to start testing each technique.
Each technique has some advantages and disadvantages.
If I use a slew-rate limiter, I figure that for a sudden volume-factor change
from 0.0 to 1.0, if I limit the slew rate to, say, 0.01 per sample, then after
100 samples the ramp will be done.
But even with a fine ramp, this might still introduce artifacts in the audio.
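The slew-rate limiter I have in mind looks roughly like this (my own sketch, per-sample rather than block-based):

```python
# Slew-rate-limited gain: each sample, the applied gain may move at most
# `rate` toward the target, so a 0.0 -> 1.0 jump at rate 0.01 completes
# in about 100 samples, as described above.
def apply_gain_ramp(samples, gain, target, rate=0.01):
    out = []
    for s in samples:
        if gain < target:
            gain = min(gain + rate, target)   # ramp up, clamped at target
        elif gain > target:
            gain = max(gain - rate, target)   # ramp down, clamped at target
        out.append(s * gain)
    return out, gain

out, g = apply_gain_ramp([1.0] * 150, 0.0, 1.0)
print(g)         # 1.0: ramp has finished
print(out[99])   # within rounding error of 1.0 by the 100th sample
```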
If I use a zero-crossing/zero-value detector and apply volume changes
only at these safe points, that's a much more desirable 'perfect' system.
But I stuck a time limit on waiting for a zero-cross, because it's possible
the signal might have a high DC offset or contain VLF content below 20 Hz.
(One cannot simply wait for the current data value to be 'zero' because
for example with a perfect square wave signal the 'current' value will never
approach zero, hence the zero-crossing detection requirement.)
At some point waiting for a zero-cross, a ramp would have already finished
and it may have been better to use that ramp instead.
Conversely, a zero-cross might happen sooner than a ramp could finish,
and we definitely want to use that zero-cross here instead of the ramp.
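The zero-cross wait with its timeout could look like this (a sketch of my plan, single channel):

```python
# Find the first zero-crossing (sign change) within `limit` samples; return
# its index, or None if the wait times out (e.g. heavy DC offset or VLF
# content), in which case the caller falls back to the ramp.
def find_zero_cross(samples, limit):
    for i in range(1, min(limit, len(samples))):
        if samples[i - 1] * samples[i] <= 0.0:  # sign change or exact zero
            return i
    return None

print(find_zero_cross([0.5, 0.2, -0.1, -0.3], limit=16))  # 2: crossing found
print(find_zero_cross([1.0, 1.0, 1.0, 1.0], limit=4))     # None: e.g. square-wave top
```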
So it means I either have to give the user a choice of the two techniques,
or try to automatically switch between them: whichever one occurs first,
the ramp finishing or the zero-cross, use it.
But it means I have to keep two audio buffers, one for applying the ramp
as the samples are processed, and one for waiting until the zero-cross happens,
and whichever one "finishes the race" first, that buffer "gets the nod".
The zero-crossing technique has some interesting implications.
For a stereo signal, each channel's zero-cross will happen at a different time.
So I'm trying to imagine what that's going to sound like where the volume
changes happen at slightly different times, if it will be noticeable, even
though that is far better than 'zipper' noise.
Also I'm trying to imagine how track cross-fading support would deal
with zero-crossing - if it is better to use ramps in that case.
What do you think?
Thanks.
Tim.