On Wed, Oct 16, 2024 at 03:04:39PM +0000, Juan P C wrote:
CPUs have branch prediction and speculative execution algorithms.
That means the CPU guesses at results before they are actually computed.
That does not affect simple things, like counting from 0 to 1000 in C++,
but it does affect audio plugins.
Prediction can never be 100% accurate, so CPUs will never sound like real
analog, unless you make a CPU without prediction algorithms.
I think you are mistaken in your understanding of how CPUs/DSPs function. If
branch prediction etc. produced different answers to mathematical operations,
then nothing on computers would work with our current models. ¯\_(ツ)_/¯
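To make that concrete, here is a toy C++ sketch (mine, not anything from your
setup): the same branch-heavy sum, run once with input that defeats the
predictor and once with input that helps it. The timing differs wildly between
the two runs; the answer cannot.

    // Toy demo: speculation changes *when* work happens, never *what*
    // the architecturally visible result is.
    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <random>
    #include <vector>

    static uint64_t branchy_sum(const std::vector<int>& v) {
        uint64_t sum = 0;
        for (int x : v)
            if (x >= 128) sum += x;   // the branch the CPU speculates on
        return sum;
    }

    int main() {
        std::vector<int> data(1 << 20);
        std::mt19937 rng(42);
        for (int& x : data) x = rng() % 256;

        uint64_t shuffled = branchy_sum(data);   // predictor guesses badly
        std::sort(data.begin(), data.end());
        uint64_t sorted   = branchy_sum(data);   // predictor guesses well

        std::printf("bit-identical: %s\n", shuffled == sorted ? "yes" : "no");
    }

Mis-speculated work is discarded before it ever becomes visible to the
program, which is why this always prints "yes".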
It is absolutely true that the amount of time a series of operations takes CAN
be variable due to (as you say) prediction, as well as effects of the memory
subsystem (caches, bus interference), or (even bigger, since we're usually talking
about desktop compute) CPU sharing. However, much as the telecom, aerospace,
etc. industries (which rely on deterministic compute) have found, you CAN have
prediction and cache enabled and produce certified products (and even CPU
sharing!), because you can typically show that they meet their compute
deadlines on a well-designed system.
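A quick harness makes the distinction visible (hypothetical numbers, stand-in
DSP work): the result of the kernel is identical on every run, the wall time
jitters with cache and predictor state, and a real-time system only needs its
worst observed time to stay under the buffer deadline.

    #include <algorithm>
    #include <chrono>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<float> buf(4096, 0.5f);
        static volatile float sink = 0.0f;  // keep the work from being optimized out
        double best = 1e9, worst = 0.0;

        for (int run = 0; run < 1000; ++run) {
            auto t0 = std::chrono::steady_clock::now();
            float acc = 0.0f;
            for (float x : buf) acc += std::sin(x) * 0.25f;  // stand-in DSP kernel
            auto t1 = std::chrono::steady_clock::now();
            sink = acc;

            double us = std::chrono::duration<double, std::micro>(t1 - t0).count();
            best  = std::min(best, us);
            worst = std::max(worst, us);
        }
        // Certification arguments bound 'worst', not equality of every run.
        std::printf("best %.1f us, worst %.1f us\n", best, worst);
    }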
But, if you're not happy with those technologies, you can often disable
prediction, cache, etc on a modern processor and live with a more predictable
(and predictably slower) experience. I don't recommend that, though it's
something I've often used to isolate races and misconfigured memory at the
kernel level.
I think you are forgetting that a world without cache and speculative execution
in CPUs is where we came from, and that there is a reason that we have moved on
from that.
DSPs like ProTools HD or HDX don't have that problem.
Plugins, if done properly, sound almost the same as analog.
You think DSPs aren't using the same predictive tricks as general-purpose CPUs
to accelerate their performance? :)
YES! I do agree with you that plugins - if implemented properly - sound almost
the same as analog!
And even if you make a CPU without prediction algorithms in an FPGA,
it still won't sound the same...
because CPUs also have 10 million interrupts per second.
Digital audio is serial; algorithms do not like to be paused 10 million times
per second.
That creates another sample-and-hold on top of the digitized audio's
"sample & hold".
Algorithms don't care what timeline they are executed in or how many times
they are context-switched. This is getting weird.
N samples arrive, and N samples are either processed within their allotted time or not.
If they are processed in time, then they are output on time. If they are not, then
it will be heard as a glitch/distortion in the output, and the system is considered
overloaded.
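That deadline model is easy to sketch (made-up processing times, not any real
audio API): each buffer of N samples gets exactly one buffer period, and the
only failure mode is an audible glitch, not a different answer.

    #include <cstdio>

    int main() {
        const double sample_rate = 48000.0;   // assumed rate
        const int    n_samples   = 64;        // assumed buffer size
        const double period_ms   = 1000.0 * n_samples / sample_rate;

        // Pretend per-buffer processing times, in milliseconds.
        const double work_ms[] = { 0.9, 1.1, 0.8, 1.6, 1.0 };

        for (double w : work_ms) {
            if (w <= period_ms)
                std::printf("%.2f ms of work in a %.2f ms slot: on time\n",
                            w, period_ms);
            else
                std::printf("%.2f ms of work in a %.2f ms slot: GLITCH (overload)\n",
                            w, period_ms);
        }
    }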
You can build a custom audio system that runs without interrupts, but you'll
be able to handle far lower audio bandwidth. Still, pure time-sliced systems ARE
sometimes used for very critical systems, because you know they will always behave
incredibly predictably. As with the other modern conveniences, though, such designs
have been largely phased out.
FWIW, you should probably expect to see tens of thousands of interrupts/sec on
a general-purpose CPU, not 10 million, to give you a sense of scale.
Using a buffer size of 32 or less minimizes that problem
and does sound a bit better,
at least on ProTools, but the CPU goes crazy...
If by 'problem' you mean 'input to output latency', then yes, that will
sound better.
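The arithmetic is simple (assuming 48 kHz; substitute your own rate):

    #include <cstdio>

    int main() {
        const double sample_rate = 48000.0;   // assumed
        const int    sizes[]     = { 32, 64, 256, 1024 };
        for (int frames : sizes)
            std::printf("%4d samples -> %.2f ms per buffer\n",
                        frames, 1000.0 * frames / sample_rate);
    }

A 32-sample buffer is roughly 0.67 ms of buffering per stage, but the deadline
above comes due 1500 times a second, which is exactly why the CPU 'goes crazy'.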
I can't imagine small buffer sizes making the output sound better in any other way.
-greg