QMidiArp 0.5.2 has just seen the light of day. It brings mainly
two improvements. One is a comeback, that of tempo changes on the fly,
which now also follows tempo changes of a potential Jack Transport
master. The Jack Transport starting position is also finally taken into
account, so that QMidiArp should be in sync even when the transport
master is started at a position other than zero.
The second is Non Session Manager support, mainly thanks to the work done by Roy Vegard Ovesen!
Note that compiling with NSM support now requires liblo as a dependency.
Enjoy, and enjoy LAC in Graz this year!
QMidiArp is an advanced MIDI arpeggiator, programmable step sequencer and LFO.
Everything is on
o Tempo changes are again possible while running, either manually or by
a Jack Transport Master
o The Jack Transport position is now taken into account when starting;
QMidiArp previously always started at zero
o Muting and sequencer parameter changes can be deferred to pattern
end using a new toolbutton
o Modules in the Global Storage window have mute/defer buttons
o Global Storage location switches can be set to affect only the pattern
o Non Session Manager support with "switch" capability (thanks to
Roy Vegard Ovesen)
o NSM support requires liblo development headers (liblo-dev package)
Is it correct that the following two scenarios give the exact same result?
(digital audio signal) -> (record) -> (playback) -> (apply fx) -> (result)
(digital audio signal) -> (apply fx) -> (record) -> (playback) -> (result)
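For the purely digital case the answer is yes, under two assumptions (mine, not stated in the question): the record/playback stage is bit-transparent (no resampling, dithering, or bit-depth reduction), and the effect is deterministic. A minimal Python sketch of the reasoning:

```python
def record_playback(samples):
    """Models a lossless digital record -> playback round trip:
    the samples come back bit-for-bit identical."""
    return list(samples)

def apply_fx(samples):
    """Any deterministic effect; here a simple 6 dB gain reduction."""
    return [0.5 * s for s in samples]

signal = [1.0, -2.0, 0.5, 0.25]

# Scenario 1: record -> playback -> apply fx
result_1 = apply_fx(record_playback(signal))
# Scenario 2: apply fx -> record -> playback
result_2 = record_playback(apply_fx(signal))

print(result_1 == result_2)  # True
```

As soon as either assumption breaks (say the recorder quantizes to a lower bit depth, or the effect has an analog stage or randomized dither), the two chains can differ.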
> Partly also for historical reasons, I think. In many ways digital
> recording started as the "poor man's tape". Direct to disk recording with
> no effects was at first all that could be handled and most people using it
> were replacing 8 track tape with it. They already had a mixer. As the DAW
> developed, mix down on the computer has been next. But for many people the
> recording part of the strip has been outside of the DAW, on an analog
> mixer. This is changing as a new batch of people are going mic->
> interface. Their input strip is whatever the interface provides... often
> only trim (either as a pot on the pre or in ALSA).
Is this for convenience, or from not being able to afford something else?
I think a lot of times money is spent on the wrong thing such as buying a
fancy multicore computer when something from 8 years ago is totally adequate
for digital audio.
> So digital recording is
> also going through a two mixer to inline transition. From hybrid to
> digital only. The trim controls are there, where they should be, as close
> to incoming signal as possible. I don't suppose it would be too hard to
> add alsa trim for a card like the d1010 to ardour, but many USB IFs (even
> PCIe) have no controls in alsa. It is a physical pot somewhere. So rather
> than being in front of the engineer, it is hidden and easily missed by the
> newbie... or even not so new. So much is done digitally that the remaining
> analog items are forgotten. This is a real problem with a two-input IF:
> the trim needs to be set every time, and the variety of signals through one
> channel is huge. Everything from a ribbon to line level. Having a set of
> good pre amps could be worthwhile; this is probably the biggest hole in
> the hobby studio. I have two, tube and solid state. (plus line)
It is very simple to keep a few notes on what is a good preamp setting for
a given mic and preamp combination. One inconvenience with some budget
preamps is that you don't know how much gain they are providing, so while you
may write down the setting by using a notation like 2:00 for dial position,
you haven't learned anything about gain, so if you swap out a preamp you need
to guess at where to start.
You can get into trouble with a mismatch between preamp and converters, such that
you are trying to "maximize bits" by getting a hot signal level into your converters.
The preamp ends up distorting and you have a hi-res recording of a distorted sound!
I actually had this problem with a remake of a vintage preamp. So it seems every
preamp has a voltage sweet spot that it should be operated in.
The best situation is if you have converters with analog trims, which is I think
what you were saying, and set them accordingly for each preamp. I leave my preamps
plugged into specific A/D channels that have been calibrated for that preamp.
One other note, some budget preamps are not qualified for certain levels of input.
I have a Presonus Audiobox which can sound fine for an acoustic guitar, but throw a
drum at it and it is automatically over full scale and unusable.
You didn't really read my post, did you? You are slightly off-topic; it reads like the catalogue of a keyboard shop. Look at the name of this forum. Linux: that is about software. Developers: those
are people interested in creating something new, not in purchasing all kinds of gear.
Still: thanks for the information.
On 08/28/2014 11:53 AM, Ralf Mardorf wrote:
> Programming a sound using what kind of synthesis ever needs knowledge
> and many parameters. But there's another way to easily make new sounds
> based on existing sounds. E.g the Yamaha TG33's joystick, the vector
> control records a mixing sequence, where the volume and/or the tuning of
> 4 sounds can be mixed. Since you mentioned touch screens, Alchemy for the
> iPad allows to morph sounds by touching the screen similar to the
> joystick used by the TG33, but it also can be used to control filters,
> effects and arpeggiator. There already are several old school synth and
> AFAIK new workstations, especially new proprietary virtual synth that
> provide what you describe. Btw. 2 of the 4 TG33 sounds are FM sounds,
> not that advanced as provided by the DX7, the other two are AWM (sound
> samples). Regarding the complexity of DX7 sound programming, the biggest
> issue is that it has got no knobs. There are books about DX7
> programming, such as Yasuhiko Fukuda's, but IMO it's easier to learn by
> trial and error. JFTR e.g. the Roland Juno-106 provides just a few
> controllers, but you easily can get a lot of sounds, without much
> knowledge http://www.vintagesynth.com/roland/juno106.php , in theory
> this could be emulated by virtual synth, in practice the hardware allows
> to use specialized microchips that produce analog sound, that can't be
> emulated that easily, not to mention that at the end of the computers
> sound chain there always is a sound card, so if you emulate several
> synth with the same computer, it's not the same as having several real
> instruments, a B3, Minimoog etc..
Hi fellow audio developers,
This forum is apparently mainly about audio production. But there's another side to audio, and that is: how to create interesting and/or beautiful sounds in software? Many sound generating
programs try to emulate the sounds of vintage instruments as closely as possible, sometimes with impressive results, but software has many more possibilities than electro-mechanic or early electronic instruments.
I try to imagine how the Hammond organ was developed. There must have been a person with some ideas about how to generate organ-like sounds using spinning tone wheels, each capable of generating one
sine waveform, combining them using drawbars. Then he implemented this idea, listening carefully to the results, adding and removing different components. The key clicks, caused by bouncing
contacts, formed a serious problem; however, musicians seemed to like them, and they became part of the unique Hammond sound.
Compared to the technical possibilities available in the past, software designers nowadays have a much easier life. A computer and a MIDI keyboard are all you need, and you can try all kinds of sound
creation, so why should you stick to reproducing the sounds of yore?
Maybe there are one or two eccentrics like me reading this post? In my opinion a software musical instrument must be controllable in a simple and intuitive way. So not a synthesizer with many knobs,
or an FM instrument with 4 operators and several envelope generators. You must be able to control the sounds while playing. A tablet (Android or iOS) would be an ideal control gadget. And: not only
sliders and knobs, but real-time, informative graphics.
As an example let me describe an algorithm that I implemented in a (open-source) program CT-Farfisa. I use virtual drawbars controlling the different harmonics (additive synthesis). The basic waveform
is not a sine, but also modelled with virtual drawbars. The basic waveform can have a duty cycle of 1, 0.7, 0.5 etcetera. The final waveform is shortened with the same amount. The beauty of this is
that you can control the duty cycle with the modulation wheel of the MIDI keyboard, so it's easy to modify the sound while playing. The program has built-in patches that have names of existing
instruments, but that's only meant as an indication: they do not sound very similar to those instruments. This description might sound a bit complicated, but coding it is not that difficult. Also
several attack sounds are provided, which is very important for the final result. The program has a touch-friendly interface and runs under Linux (for easy development and experimentation) and Android.
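To make the drawbar-plus-duty-cycle idea concrete, here is a minimal Python sketch of additive synthesis with a shortened waveform. The function names and details are my own illustration and do not come from the actual CT-Farfisa source:

```python
import math

def base_waveform(drawbars, length):
    """One cycle of the base waveform: a sum of sine harmonics whose
    amplitudes are set by virtual 'drawbars' (additive synthesis)."""
    wave = []
    for i in range(length):
        phase = 2.0 * math.pi * i / length
        s = sum(a * math.sin((h + 1) * phase) for h, a in enumerate(drawbars))
        wave.append(s)
    return wave

def apply_duty_cycle(cycle, duty):
    """Squeeze the full cycle into the first `duty` fraction of the
    period and fill the remainder with silence, e.g. under control of
    the mod wheel. duty = 1.0 leaves the cycle unchanged."""
    n = len(cycle)
    active = max(1, int(n * duty))
    # Resample the whole cycle into the shortened active region.
    shortened = [cycle[int(i * n / active) % n] for i in range(active)]
    return shortened + [0.0] * (n - active)

# Example: strong fundamental, softer 2nd and 3rd harmonics, 70% duty cycle
cycle = base_waveform([1.0, 0.5, 0.25], length=64)
squeezed = apply_duty_cycle(cycle, duty=0.7)
```

Shortening the cycle this way adds higher harmonics (the silent gap creates a sharper transition), which is why sweeping the duty cycle from the mod wheel gives an audible timbral change with almost no extra code.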
It is not my aim to provide another software tool that you can download and use or not, but to exchange ideas about sound generation. I know there are many techniques, e.g. waveguides, physical
modelling, granular synthesis, but I think that it's often difficult to control and modify the sound while playing in an intuitive way. By the way, did you know that Yamaha, creator of the famous DX7
FM synth, had only 1 or 2 employees who could really program the instrument?
Maybe I'm missing it, but I don't think a feature such as I'm about to describe exists.
I'm making lots of use of the MIDI track functionality. I find myself
wanting to take an audio segment, convert it to sample, and then use it on
a midi track. I don't believe there is a way to do this without the aid
of an external sampler program.
My dream feature? Click on segment, select "convert to sample" and a new
midi track appears linked to a plugin sampler, ready to play.
On August 26 we welcome again all creative music coders at STEIM for an
evening of exchanging current work, problems and solutions - and music.
Entrance is free.
And let us know if you plan to join (just to get an idea of how many
seats, and how much coffee and tea we should prepare)!
JFTR, some kinds of special audio effects that people often think are
inventions of the digital age were already being done in the year I was born.
Audio engineering in the early days: pitch shifting while keeping the
length, without digital algorithms :D.
Since I'm a child of the '80s, born in 1966, there's a remake from
Jello Biafra's The Last Temptation of Reid in 1990:
> On Sat, 2014-08-23 at 07:56 -0400, Grekim Jennings wrote:
> > I have a Presonus Audiobox which can sound fine for an acoustic
> > guitar, but throw a drum at it and it is automatically over full
> > scale and unusable.
> Actually you can't blame a preamp, if the microphone is missing a PAD
A pad would solve the problem, but it's hardly a requirement of a good
microphone and a purist would probably say it's a bad idea to add that
to a mic. It's just not a professional preamp, so I didn't have high expectations.
On Sun, 17 Aug 2014, Will Godfrey wrote:
> On Sun, 17 Aug 2014 16:15:58 +0000
> Fons Adriaensen <fons(a)linuxaudio.org> wrote:
>> On Sun, Aug 17, 2014 at 08:24:38AM -0700, Len Ovens wrote:
>>> So Allen & Heath uses 127 levels on their top end digital control
>>> surfaces. How do they do it? Well, they have two different scales:
>>> - fader: ((Gain+54)/64)*7f - also used for sends
>>> - Gain: ((Gain-10)/55)*7f - this is preamp gain
>> Suppose you have *real* faders which have a range of 127 mm.
>> That's not far from a typical size on a pro mixer.
>> Would you ever adjust them by half a millimeter ?
>> 127 steps, provided they are mapped well, and zipper noise
>> is avoided by interpolation or filtering, should be enough.
>> The real problem is that many SW mixers
>> * don't use a good mapping,
>> * and don't have any other gain controls.
>> The latter may force you to use the fader in a range
>> where it has bigger steps.
> Well that got me thinking!
> Presumably this should be set up as a proper log law, so even if the steps
> represent (say) 0.5dB that still gives a control range of over 60dB
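The two A&H scales quoted above are linear in dB, which in amplitude terms is exactly a log law. A small Python sketch of what the conversion might look like on the sending side; the dB ranges are inferred from the formulas themselves, not from any A&H documentation, and the function names are my own:

```python
def db_to_fader(gain_db):
    """A&H fader scale: maps roughly -54 dB .. +10 dB onto 0..127
    (0x7f). Per the quoted post, also used for sends."""
    v = int(((gain_db + 54.0) / 64.0) * 0x7f)
    return max(0, min(0x7f, v))  # clamp to the 7-bit MIDI range

def db_to_preamp(gain_db):
    """A&H preamp gain scale: maps roughly +10 dB .. +65 dB onto 0..127."""
    v = int(((gain_db - 10.0) / 55.0) * 0x7f)
    return max(0, min(0x7f, v))

def fader_to_db(value):
    """Inverse of db_to_fader, as the receiving end would need it."""
    return (value / 0x7f) * 64.0 - 54.0
```

Note that these are just a multiply, an add, and a clamp per value, which supports the point below about this being much cheaper than evaluating a true logarithm per step.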
I forgot to add:
I would think ((Gain+54)/64)*7f uses a lot less CPU time than a real
(proper) log. Think 8 fingers (plus thumbs?) fading around 80 steps in a
small time. Remember that this calculation has to be done at both ends too
and the receiving end also has to deal with doing more calculation on as
many as 64 tracks of low latency audio at the same time (amongst other
things).
Also remember, this is only of use if you are building a control surface
(I am) and not buying one where "you get what you get". Add to that, even
if you are building your own control surface, do you want to use Yet
Another standard that you then have to make middle-ware for so that the SW
you are talking to will understand? A&H does supply middle-ware (for OSX)
that takes the above values and converts them (both ways) so that their
control surface looks to the sw like a Mackie (I just about put Wackie)
control surface. Talk about a lot of computations in your music box!