I need to correct a mistake in what I wrote yesterday.

VST3 does support double-precision samples, and for some inexplicable reason even makes it the default.

Thanks to Robin Gareus for pointing this out.

Also, to follow on from something Fons wrote: in an actual 32-bit sample value, at least the low 4-6 bits would just be representing Brownian (atomic) motion. That's pretty crazy.
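For a rough sense of scale (my own back-of-the-envelope sketch, not from Fons's post): each bit of integer resolution is worth about 6.02 dB of dynamic range, so a 32-bit sample spans roughly 193 dB, and its low 4-6 bits cover the bottom 24-36 dB of that range, i.e. signal content more than ~156 dB below full scale.

```python
import math

def dynamic_range_db(bits):
    # Each bit of resolution adds 20*log10(2) ~= 6.02 dB of dynamic range.
    return 20 * math.log10(2 ** bits)

for bits in (16, 24, 32):
    print(f"{bits}-bit integer: {dynamic_range_db(bits):6.1f} dB")

# The low 4-6 bits of a 32-bit sample only cover the very bottom
# of that ~193 dB range -- far below any physically meaningful signal.
for low_bits in (4, 6):
    print(f"low {low_bits} bits span the bottom "
          f"{dynamic_range_db(low_bits):.1f} dB")
```

No real-world analog chain or converter gets anywhere near that floor, which is the point: those bits can only ever hold noise.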

On Mon, Mar 6, 2017 at 9:30 PM, Paul Davis <paul@linuxaudiosystems.com> wrote:


On Mon, Mar 6, 2017 at 8:59 PM, Taylor <tay10r@protonmail.com> wrote:
Hey,

I'm a little bit new to LADSPA and LV2, so this may be a naive question.

I would like to know why single precision floating point types are used in the plugin interface, instead of double precision.

I would also like to know if there are plans to standardize a plugin interface that may process double-precision instead of single-precision data (or both).

Nobody needs double precision when moving data between host and plugins (or from one plugin to another).

You might be able to make a case for double precision math inside a plugin (and indeed several people have). But once that particular math is done, single precision is more than adequate.

As to why... because everybody else who knew anything about this stuff was already using 32-bit floating point.

No existing plugin API supports double-precision floating point as a standard sample format (you could do it in AU, but it would involve a conversion to/from single precision on either side of any plugin that asks for it).