On Mon, Jun 18, 2018 at 9:40 AM, Paul Davis <paul@linuxaudiosystems.com> wrote:


On Mon, Jun 18, 2018 at 11:27 AM, robertlazarski <robertlazarski@gmail.com> wrote:

I guess I didn't clearly indicate that I am not talking about recording at all.

Put a Nord Stage in the same room as a Steinway - as I have seen first hand - and the difference is huge. I saw Keith Emerson's Moog Modular live and felt it in the balcony.

Record the same Nord and Steinway and the difference is largely lost. If you record that modular and compare it to the Moog plugin, you won't really know what the big deal about the Modular is. A live performance of a good horn section like Chicago is largely lost in the recordings.

Analog synths cannot be sampled for all possibilities, which is why CV-controlled analog synth and effect modulation sets them apart from their digital counterparts in sound sculpting.

Well, now the discussion is somewhat different.

An acoustic instrument interacts with the space you hear it in, in ways that no stereo playback with common point sources is ever going to capture. It isn't easy to capture it even using ambisonics. Even if you did an analog recording and played it back via a very good speaker system in the same space, there will be things missing unless you do some really unusual things with the recording and playback systems.

An electrical instrument, whether analog like the Moog Model D or digital like Pianoteq, can't do this: it never generates ANY sound at all except via some amplified speaker system. So it is entirely reasonable to think that you will always hear the same thing when you play a recording (analog or digital) of the instrument over the same playback system that you first heard it on. There is absolutely no difference ... electrical signal encounters speakers, is transformed into a pressure wave, reaches your ears. Recording or original sound ... no difference.

The inability to "sample for all possibilities" certainly has an impact, but it isn't relevant to physically modelled synthesizers, and it also has more impact on performance possibilities than actual acoustic tone.



Had to delay my response due to internet issues ... 

I look at digital audio capture and digital audio production of sounds differently: I don't really have a problem with the former, besides it not being a good way to compare analog signals.

Here are some API lunchboxes, an ARP 2600 TTSH clone, a Moog Model D and an Oberheim TVS together:

https://drive.google.com/open?id=1x3XFCfv91_IacdqdiyAndnmxjBXro1BH

I see a lot of pop shows with my family, and I can definitely tell when a synth is sampled, especially analog synths with some of the hipper artists. It sounds like a "virtual analog" synth. I believe these analog synths and effects are best heard live.

For synth audio production, modulation and LFOs with CV just sound different and are more flexible as I see it. Not everyone agrees, which is why most people use MIDI. CV sources can modulate each other, and CV sources can be combined. CV can be easily adjusted, inverted, and stretched any which way. Very fast. Not stepped. I use MIDI for some things, as it can be easily stored, but only with a stand-alone step sequencer.
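
To put a rough number on "not stepped": MIDI CC values are 7-bit, so a CC-driven modulation only has 128 levels. A throwaway shell calculation (the 10 V span here is just an assumption for illustration; real CV ranges vary by system):

  # MIDI CC is 7-bit: 128 discrete values.
  # 10 V modulation span assumed purely for illustration.
  awk 'BEGIN { printf "%.0f mV per CC step\n", 10 / 128 * 1000 }'

That prints about 78 mV per step, where an analog LFO or envelope moves continuously.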

Here's a quick example of what I am talking about. 

Some analog filters can self-oscillate, behaving like a VCO. The Vermona Retroverb is a spring reverb / multi-mode filter / LFO that supports CV, so I use a Moog drum machine (the DFAM) to trigger the Retroverb's sample and hold thru its self-oscillating band-pass filter. The DFAM goes thru a Moog 500-series delay and a Moog ladder HPF with the resonance on max, creating a low-end rumble and adding some high end to the beat ... however a lot of that was lost in the recording - you had to be there in the same room, at least in this example.

The synth is a Studio Electronics CODE 8 analog poly synth, which in this case is using CS80-inspired filters as a strings instrument. The effect is an analog stereo chorus ensemble, the Elkorus.

All the gear goes thru its own API compressor and EQ channel - the API 2500, 525, 550A and 5500, if you are into that sort of thing. The sample and hold goes thru an LA-3A compressor with no EQ. The synth also goes thru an Elysia Karacter 500-series distortion / saturation module.

Trying to be relevant to LAU ... while the sounds and video were recorded on a couple of Zoom Q8 cameras and an F8, the final video was auto-edited via a Bash script using FFmpeg and ltcdump on my openSUSE Leap 42 ZaReason notebook ... some of the FFmpeg commands were inspired by the debug output of Ardour.
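
The core of the sync step is simple; here's a hand-wavy sketch rather than the actual script (the file names, LTC channel layout and 25 fps frame rate are all assumptions for illustration):

  #!/bin/bash
  # Sketch: align multi-cam clips by decoding the LTC each camera
  # recorded on an audio track. Assumes LTC is on the first audio
  # stream and timecode is 25 fps; adjust both to match your setup.
  tc_to_sec() {
      # HH:MM:SS:FF -> seconds, assuming 25 fps.
      echo "$1" | awk -F'[:.]' '{ print $1*3600 + $2*60 + $3 + $4/25 }'
  }
  for cam in cam1.mov cam2.mov; do      # hypothetical file names
      # Pull the LTC audio out as a mono wav for ltcdump.
      ffmpeg -y -i "$cam" -map 0:a:0 -ac 1 "${cam%.mov}_ltc.wav"
      # ltcdump (from ltc-tools) prints the decoded timecode; grab
      # the first HH:MM:SS:FF-looking field it emits.
      tc=$(ltcdump "${cam%.mov}_ltc.wav" 2>/dev/null |
           grep -Eo '[0-9]{2}:[0-9]{2}:[0-9]{2}[:.][0-9]{2}' | head -n 1)
      echo "$cam starts at LTC $tc ($(tc_to_sec "$tc") s)"
  done
  # Once each clip's start time is known, trim the earlier-starting
  # clip by the difference so both begin on the same frame, e.g.:
  #   ffmpeg -ss "$offset" -i cam1.mov -c copy cam1_aligned.mov

The actual parseLTC.sh linked below is more involved; this just shows the idea.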

I like using LTC audio to sync multi-cam video, and this is an example, created by the script linked below. The video was put together in a few spare minutes I had yesterday, so there's no post production; the camera angles need some work and the mix isn't perfect, but anyway ... a 1-minute video and a smaller wav.

script:

https://www.dropbox.com/s/6lkxftwjk1vhkqy/parseLTC.sh?dl=0

Video (200 MB):

https://www.dropbox.com/s/wzsfu5q2adws878/output.mov?dl=0

wav:

https://www.dropbox.com/s/sgh8v50c6scd9vv/f8t.wav?dl=0

Best regards,
Robert