Hi Ralf,
You didn't really read my post, did you? You are slightly off-topic; it reads like the catalogue of a keyboard shop. Look at the name of this forum. Linux: that is about software. Developers: those
are people interested in creating something new, not in purchasing all kinds of gear.
Still: thanks for the information.
W.
On 08/28/2014 11:53 AM, Ralf Mardorf wrote:
> Programming a sound, using whatever kind of synthesis, needs knowledge
> and many parameters. But there's another way to easily make new sounds
> based on existing sounds. E.g. the Yamaha TG33's joystick, the vector
> control, records a mixing sequence where the volume and/or the tuning of
> 4 sounds can be mixed. Since you mentioned touch screens, Alchemy for the
> iPad allows you to morph sounds by touching the screen, similar to the
> joystick used by the TG33, but it can also be used to control filters,
> effects and the arpeggiator. There already are several old-school synths
> and AFAIK new workstations, especially new proprietary virtual synths,
> that provide what you describe. Btw. 2 of the 4 TG33 sounds are FM
> sounds, not as advanced as those provided by the DX7; the other two are
> AWM (sound samples). Regarding the complexity of DX7 sound programming,
> the biggest issue is that it has no knobs. There are books about DX7
> programming, such as Yasuhiko Fukuda's, but IMO it's easier to learn by
> trial and error. JFTR, e.g. the Roland Juno-106 provides just a few
> controllers, but you can easily get a lot of sounds without much
> knowledge http://www.vintagesynth.com/roland/juno106.php . In theory
> this could be emulated by a virtual synth; in practice the hardware uses
> specialized microchips that produce analog sound, which can't be
> emulated that easily. Not to mention that at the end of the computer's
> sound chain there always is a sound card, so if you emulate several
> synths with the same computer, it's not the same as having several real
> instruments, a B3, Minimoog etc..
>
>
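For the curious, the vector mixing itself is only a few lines of code. A minimal C++ sketch (the corner layout and normalisation are assumptions, not the TG33's actual behaviour):

#include <algorithm>
#include <array>
#include <cstdio>

// Map a joystick position in [-1, 1] x [-1, 1] to four channel gains
// by bilinear interpolation; at the centre all four sounds are equal.
std::array<float, 4> vectorGains(float x, float y)
{
    x = std::clamp(x, -1.0f, 1.0f);
    y = std::clamp(y, -1.0f, 1.0f);
    const float r = 0.5f * (x + 1.0f);  // 0..1, left to right
    const float u = 0.5f * (y + 1.0f);  // 0..1, bottom to top
    return { (1 - r) * u,        // gain of the top-left sound
             r * u,              // top-right
             (1 - r) * (1 - u),  // bottom-left
             r * (1 - u) };      // bottom-right
}

int main()
{
    const auto g = vectorGains(0.0f, 0.0f);  // centre position
    std::printf("%.2f %.2f %.2f %.2f\n", g[0], g[1], g[2], g[3]);
}

Recording a "mixing sequence" then amounts to storing (x, y) pairs over time and replaying them through this function.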
Hi Gordon,
You are totally right, with one exception: the 2/3 harmonic (corresponding to a 12' pipe) is not weird-sounding; it sounds very nice and mellow, even while playing chords. I had the idée fixe that it
was common on Hammond and church organs too, but as you said: it does not exist there.
Wouter
On 09/01/2014 05:56 PM, gordonjcp(a)gjcp.net wrote:
> On Mon, Sep 01, 2014 at 05:44:08PM +0100, W.Boeke wrote:
>> Yes, I'm talking about adding lower frequencies. They don't sound shit(?), Hammond and organ players do it all the time.
> 2/3 would, because it doesn't "fit" nicely.
>
> If you look at the stops available on a Hammond your base generator is 8', with the option to add in a 16' "pipe" giving a partial one octave lower. There are straight octave partials at 4', 2' and 1' and then there are stops at a perfect fifth (5 1/3), an octave and a fifth (2 2/3), two octaves and a major third (1 3/5) and two octaves and a fifth (1 1/3).
>
> There isn't one at 2/3 (which would correspond to a 12' pipe). That would come out a fifth below your fundamental, which would be a bit weird-sounding.
>
> Try it if you don't believe me.
>
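The footage arithmetic is easy to check in code. A minimal C++ sketch (the stop list is taken from the post above; the rule that frequency is proportional to 8 divided by the length in feet is standard organ practice):

#include <cstdio>

int main()
{
    // Hammond drawbar footages, plus the hypothetical 12' stop.
    const double footages[] = { 16, 8, 5 + 1 / 3.0, 4, 2 + 2 / 3.0,
                                2, 1 + 3 / 5.0, 1 + 1 / 3.0, 1, 12 };
    for (double f : footages)
        // Ratio relative to the 8' fundamental: 12' gives 2/3,
        // i.e. a perfect fifth below the fundamental.
        std::printf("%7.3f' -> %.4f x fundamental\n", f, 8.0 / f);
}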
Hi fellow audio developers,
This forum is apparently mainly about audio production. But there's another side to audio, and that is: how to create interesting and/or beautiful sounds in software? Many sound-generating
programs try to emulate the sounds of vintage instruments as closely as possible, sometimes with impressive results, but software has many more possibilities than electro-mechanical or early
electronic instruments.
I try to imagine how the Hammond organ was developed. There must have been a person with some ideas about how to generate organ-like sounds using spinning tone wheels, each capable of generating one
sine waveform, and how to combine them using drawbars. Then he implemented this idea, listening carefully to the results, adding and removing different components. The key clicks, caused by bouncing
contacts, were a serious problem; however, musicians seemed to like them, and they became part of the unique Hammond sound.
Compared to the technical possibilities of the past, software designers nowadays have a much easier life. A computer and a MIDI keyboard are all you need, and you can try all kinds of sound
creation, so why stick to reproducing the sounds of yore?
Maybe there are one or two eccentrics like me reading this post? In my opinion a software musical instrument must be controllable in a simple and intuitive way. So not a synthesizer with many knobs,
or an FM instrument with 4 operators and several envelope generators. You must be able to control the sounds while playing. A tablet (Android or iOS) would be an ideal control gadget. And: not only
sliders and knobs, but real-time, informative graphics.
As an example, let me describe an algorithm that I implemented in an open-source program, CT-Farfisa. I use virtual drawbars controlling the different harmonics (additive synthesis). The basic
waveform is not a sine, but is also modelled with virtual drawbars. The basic waveform can have a duty cycle of 1, 0.7, 0.5, etcetera; the final waveform is shortened by the same amount. The beauty
of this is that you can control the duty cycle with the modulation wheel of the MIDI keyboard, so it's easy to modify the sound while playing. The program has built-in patches named after existing
instruments, but that's only meant as an indication: they do not sound very similar to those instruments. This description might sound a bit complicated, but coding it is not that difficult. Several
attack sounds are also provided, which is very important for the final result. The program has a touch-friendly interface and runs under Linux (for easy development and experimentation) and Android
(for playing).
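A minimal sketch of the idea in C++ (simplified, not the actual CT-Farfisa code):

#include <cmath>
#include <cstddef>
#include <vector>

constexpr float kTwoPi = 6.28318530718f;

// One sample of the additive waveform; drawbars[h] is the level of
// harmonic h+1, phase01 is the position in the cycle (0..1).
float additiveSample(const std::vector<float>& drawbars, float phase01)
{
    float s = 0.0f;
    for (std::size_t h = 0; h < drawbars.size(); ++h)
        s += drawbars[h] * std::sin(kTwoPi * float(h + 1) * phase01);
    return s;
}

// Duty-cycle version: the whole waveform is compressed into the first
// `duty` fraction of the cycle and the rest is silent. `duty` (0..1]
// can be driven directly by the modulation wheel.
float dutyCycleSample(const std::vector<float>& drawbars,
                      float phase01, float duty)
{
    if (phase01 >= duty)
        return 0.0f;                                  // shortened part
    return additiveSample(drawbars, phase01 / duty);  // compressed part
}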
It is not my aim to provide yet another software tool that you can download and use or not, but to exchange ideas about sound generation. I know there are many techniques, e.g. waveguides, physical
modelling, granular synthesis, but I think it's often difficult to control and modify the sound while playing in an intuitive way. By the way, did you know that Yamaha, creator of the famous DX7
FM synth, had only 1 or 2 employees who could really program the instrument?
Wouter Boeke
Hi everyone,
Some of you might have noticed we're having issues delivering mailing list
posts to gmail users (it can be fairly random). I'm also having similar
issues at work and on my own server. It seems gmail has tightened their
filtering rules a bit, which generates quite an amount of backscatter email
and generally results in mailman accounts being disabled.
TL;DR
mailman issues, gmail sucks, I'm working on a fix :)
Cheers !
--
Marc-Olivier Barre
XMPP ID : marco(a)marcochapeau.org
www.MarcOChapeau.org
Maybe I'm missing it, but I don't think such a feature as I'm about to
describe exists.
I'm making lots of use of the MIDI track functionality. I find myself
wanting to take an audio segment, convert it to a sample, and then use it
on a MIDI track. I don't believe there is a way to do this without the aid
of an external sampler program.
My dream feature? Click on a segment, select "convert to sample" and a new
MIDI track appears linked to a plugin sampler, ready to play.
-
Devin
Hi all,
On August 26 we again welcome all creative music coders at STEIM for an
evening of exchanging current work, problems and solutions - and making
music together.
More information:
http://steim.org/event/creative-music-coding-lab-13/
Entrance is free.
And let us know if you plan to join (just to get an idea of how many
seats, and how much coffee and tea we should prepare)!
sincerely,
Marije
JFTR, some kinds of special audio effects that people often think are
inventions of the digital age were already being done in the year I was born.
Audio engineering in the early days
https://www.youtube.com/watch?v=hnzHtm1jhL4
pitch shifting while keeping the length, without digital algorithms :D.
Since I'm a child of the '80s, born in 1966, there's a remake from
Jello Biafra's The Last Temptation of Reid in 1990:
https://www.youtube.com/watch?v=PQ0VMDmGdx0
> On Sat, 2014-08-23 at 07:56 -0400, Grekim Jennings wrote:
> > I have a Presonus Audiobox which can sound fine for an acoustic
> > guitar, but throw a drum at it and it is automatically over full
> > scale and unusable.
>
> Actually you can't blame a preamp if the microphone is missing a PAD
> switch.
A pad would solve the problem, but it's hardly a requirement of a good
microphone and a purist would probably say it's a bad idea to add that
to a mic. It's just not a professional preamp so I didn't have high
expectations.
On Sun, 17 Aug 2014, Will Godfrey wrote:
> On Sun, 17 Aug 2014 16:15:58 +0000
> Fons Adriaensen <fons(a)linuxaudio.org> wrote:
>> On Sun, Aug 17, 2014 at 08:24:38AM -0700, Len Ovens wrote:
>>> So Allen & Heath uses 127 levels on their top-end digital control
>>> surfaces. How do they do it? Well, they have two different scales:
>>> - fader: ((Gain+54)/64)*7f - also used for sends
>>> - Gain: ((Gain-10)/55)*7f - this is preamp gain
>> Suppose you have *real* faders which have a range of 127 mm.
>> That's not far from a typical size on a pro mixer.
>> Would you ever adjust them by half a millimeter ?
>> 127 steps, provided they are mapped well, and zipper noise
>> is avoided by interpolation or filtering, should be enough.
>> The real problem is that many SW mixers
>> * don't use a good mapping,
>> * and don't have any other gain controls.
>> The latter may force you to use the fader in a range
>> where it has bigger steps.
> Well that got me thinking!
> Presumably this should be set up as a proper log law, so even if the
> steps represent (say) 0.5dB that still gives a control range of over 60dB
I forgot to add:
I would think ((Gain+54)/64)*7f uses a lot less CPU time than a real
(proper) log. Think 8 fingers (plus thumbs?) fading around 80 steps in a
short time. Remember that this calculation has to be done at both ends,
and the receiving end also has to deal with doing more calculation on as
many as 64 tracks of low-latency audio at the same time (amongst other
things).
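For reference, a minimal C++ sketch of that fader scale and its inverse (the -54..+10 dB range is inferred from the formula itself; function names are illustrative):

#include <algorithm>
#include <cmath>

// ((Gain+54)/64)*7f from the thread: a linear map from dB to 7 bits.
int faderDbTo7bit(double dB)
{
    const double v = (dB + 54.0) / 64.0 * 0x7f;
    return std::clamp(static_cast<int>(std::lround(v)), 0, 0x7f);
}

// The inverse, for the receiving end.
double fader7bitToDb(int v)
{
    return static_cast<double>(v) / 0x7f * 64.0 - 54.0;
}

Each of the 127 steps then covers about 0.5 dB, which matches Will's estimate of a control range of over 60 dB.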
Also remember, this is only of use if you are building a control surface
(I am) and not buying one where "you get what you get". Add to that, even
if you are building your own control surface, do you want to use Yet
Another Standard that you then have to write middleware for, so that the SW
you are talking to will understand? A&H does supply middleware (for OSX)
that takes the above values and converts them (both ways) so that their
control surface looks to the SW like a Mackie (just about put Wackie)
control surface. Talk about a lot of computations in your music box!
--
Len Ovens
www.ovenwerks.net
On August 15, 2014 05:24:51 PM Len Ovens wrote:
> On Fri, 15 Aug 2014, Paul Davis wrote:
> > On Wed, Aug 13, 2014 at 10:16 PM, Len Ovens <len(a)ovenwerks.net> wrote:
> > Is it just me? Has anyone else looked at pitch bend events on the
> > Ardour MIDI Tracer? Quick test:
> >
> > ================================================
> > - edit->Preferences->Control surfaces.
> > - select both enabled and feedback for generic MIDI.
> > - double click on it and select bcf2000 with mackie protocol
> > - open an external midi monitor (using qmidiroute here) and connect
> > it to Ardour's MIDI control out.
> > - also open Ardour's MIDI Tracer window and connect it to the same
> > output.
> > (Now qmidiroute will be in decimal and MIDI Tracer is hex..)
> > - use the mouse to move the gain up and down on channel one with an
> > audio track.
> > =================================================
> >
> > the trace shows MIDI messages. there are no 10 or 14 bit MIDI messages.
> > only "controllers" with 14 bit state. 14 bit state is sent as two
> > messages (when necessary). the tracer shows each individual message.
>
> Pitch bend, which the mackie faders use, is specified in the MIDI standard
> and in the MCP spec as: Ex yy zz in one event.
Correct.
Pitch bend is the only truly 14-bit encapsulated channel message.
There is also one more: Song Position Pointer (F2, LSB, MSB), a System
Common message carrying a 14-bit value.
One can use the pitch wheel for something other than pitch.
It is per-channel. You can have up to 16 14-bit pitch wheels
(or knobs/sliders/etc), per interface, each routed to whatever you want.
You would just have to reserve some channels, or even a whole interface.
Yay Mackie! And others doing this.
But I do wish MIDI had included at least one /more/ encapsulated
14-bit message per channel.
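For reference, a minimal C++ sketch of that framing (function names are illustrative):

#include <array>
#include <cstdint>

// Ex yy zz: x = channel, yy = LSB (7 bits), zz = MSB (7 bits).
// value14 is 0..16383 with 8192 as the centre (no bend).
std::array<std::uint8_t, 3> encodePitchBend(int channel, int value14)
{
    return { static_cast<std::uint8_t>(0xE0 | (channel & 0x0F)),
             static_cast<std::uint8_t>(value14 & 0x7F),           // LSB
             static_cast<std::uint8_t>((value14 >> 7) & 0x7F) };  // MSB
}

int decodePitchBend(std::uint8_t lsb, std::uint8_t msb)
{
    return (static_cast<int>(msb & 0x7F) << 7) | (lsb & 0x7F);
}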
> Where x = channel, yy = LSB
> (7 bits) and zz = MSB (7 bits). This is one event. This is the way I send
> fader info to Ardour and it is also the way Ardour sends info back to me
> when I use a mouse to move a fader. Let's compare the output of the tracer
> with qmidiroute: (using fader 5 as it happens)
>
> tracer                    db     qmidiroute                 midi
> ===========================================================================
> Pitch Bend chn 5 1a      -0.5    Ch 5, Pitch 4378 (111a)    e4 1a 42
>   (ctrl/alt/mousewheel down)
> Pitch Bend chn 5 09      -0.5    Ch 5, Pitch 4361 (1109)    e4 09 42
> Pitch Bend chn 5 79      -0.5    Ch 5, Pitch 4345 (10f9)    e4 79 41
> There is no second midi event with the MSB; this info is missing.
>
> I am well aware that controllers are 7 bits in an event. The MIDI standard
> does double up some of them for 14 bit,
I think you are referring to what I will call
"14-bit aggregate 7-bit controllers", as opposed to
"14-bit (N)RPN controllers".
> but I am not aware of anyone who
> uses them.
In my searches for products/software that use them I have seen a few.
Good to plan for it anyway. I think ALSA can use it too - I think it is one
of the event types.
>
> Speaking of which... I was looking at the MIDI codes for some more upscale
> Control surfaces:
> Allen & Heath iLive control surfaces
> Yamaha CL series mixers
>
> Both of these use NRPN (Non-Registered Parameter Number) for some of their
> messages. The A&H uses this because they have run out of controllers (I
> would guess) and still only get 7 bits for their faders. They use three
> events, the first one has the mixer channel (as well as the midi channel),
> the second has a code for what in the channel it controls (0x17 for fader)
> and the third is the data.
Wow, so they just ate up 16,384 available 7-bit NRPN controllers.
That's an entire interface full.
Any 14-bit controllers at all on these AH models?
If so, do they have detailed info on exactly how it is sent, and options?
If not, do you have a way of reading what comes out of them (and when)?
NOT with ALSA sequencer or any of the 'snooper' apps I've seen.
It must be either a true UART HW project or a low-level API for reading
the MIDI data.
Does anyone out there know of a *true* sniffer application that handles
all 7-bit and 14-bit controller types and displays them conveniently
and correctly?
> Yamaha uses four events, the first two carrying
> the controller number (they have not bothered to group them the way A&H
> has) and the last two are MSB and then LSB. The wiki for NRPN says the
> controller number for the first two bytes should be 0x63 and 0x62,
> however, Yamaha has these listed backwards. As there are other typos in
> the same page, this may just be a typo too.
These are 14-bit NRPN controllers.
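A minimal C++ sketch of the conventional four-event form (as in the Yamaha description above; running status is left out for clarity, and the function name is illustrative):

#include <cstdint>
#include <vector>

// Sends the NRPN parameter number via CC 0x63 (MSB) and 0x62 (LSB),
// then the 14-bit value via Data Entry CC 0x06 (MSB) and 0x26 (LSB).
std::vector<std::uint8_t> encodeNrpn14(int channel, int param14, int value14)
{
    const std::uint8_t status =
        static_cast<std::uint8_t>(0xB0 | (channel & 0x0F));
    return { status, 0x63, static_cast<std::uint8_t>((param14 >> 7) & 0x7F),
             status, 0x62, static_cast<std::uint8_t>(param14 & 0x7F),
             status, 0x06, static_cast<std::uint8_t>((value14 >> 7) & 0x7F),
             status, 0x26, static_cast<std::uint8_t>(value14 & 0x7F) };
}

The A&H three-event scheme uses the same machinery, just with their own meanings assigned to the three CC events.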
> I do not know if the Ardour midimap binding msg="byte byte byte" would
> handle this or not. I know Jack would consider this 4 events and probably
> not accept just the first status byte with 8 running status data bytes
> following as a single event.
I think Jack does not care. It simply passes on whatever bytes it receives.
This is a blessing for me actually. I have to make my own encoder/decoder
(this converts raw 7-bit midi to/from MusE controllers) but at least
*I* control what goes on inside there.
(I'd like to take this opportunity for a bit of an update):
It's been a year since I reported how I was attempting to finish adding true,
full 14-bit midi controller support to MusE (that's 7-bit controllers,
14-bit aggregate 7-bit controllers, 7 bit RPN and NRPN controllers, and
14-bit RPN and NRPN controllers).
I said I found that ALSA sequencer API doesn't handle 7-bit RPN.
And wouldn'tcha know it, my keyboard sends only 7-bit.
As I say, it means I have to make my own encoder/decoder.
I think possibly the ALSA RAW API is what I want.
But MusE uses the ALSA sequencer API, and it's a big job to switch to RAW.
Fortunately MusE also uses Jack Midi.
I said that I /could/ have taken Jack's raw midi data and fed it
to/from an ALSA encoder/decoder - but again, no 7-bit RPN.
So it's /all/ home-brew for me! Workin' on C++ classes all these months.
My efforts are moving along, but slowly.
I'm trying to make it a library 'cause it seems useful for other apps,
but right now some of it is very specific to MusE.
Trust me, robust controller support needs some fairly 'acrobatic' code,
and user options concerning formatting. Example:
I read about one company that manufactured a whole line of
small embedded midi board products,
only to have a user point out that they didn't do 7-bit RPN
conventionally (they used the LOWER 7 bits of data). The
company owner said there was nothing he could do until
the next designs, and he actually argued vigorously that his way
was correct (well, he is right, but the almost universal convention
is the other way - the UPPER 7 bits).
See what I mean?
I scoured the 'net for product docs and found wide variations in
implementations, even though they are all 'correct'.
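A sketch of what such an option can look like in code (the enum and function are hypothetical, not from MusE):

#include <cstdint>

// Where a device puts a 7-bit RPN/NRPN data value: conventionally in
// the Data Entry MSB (CC 0x06), i.e. the UPPER 7 bits of the 14-bit
// word, but some gear uses the LSB (CC 0x26) instead.
enum class SevenBitDataIn { UpperMsb, LowerLsb };

int place7bitData(std::uint8_t value7, SevenBitDataIn where)
{
    return (where == SevenBitDataIn::UpperMsb)
               ? (static_cast<int>(value7 & 0x7F) << 7)  // conventional
               : static_cast<int>(value7 & 0x7F);        // the variant
}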
Cheers. Greetings Len.
Tim.