[LAD] Fw: Re: Some questions about the Jack callback
len at ovenwerks.net
Sun Sep 21 16:15:22 UTC 2014
On Sun, 21 Sep 2014, Will Godfrey wrote:
> On Sun, 21 Sep 2014 07:50:45 -0700 (PDT)
> Len Ovens <len at ovenwerks.net> wrote:
>> On Sun, 21 Sep 2014, Fons Adriaensen wrote:
>>> Almost all lossy audio compression schemes are based on this.
>>> So one way would be to explore in how far you could parametrise
>>> e.g. an ogg or mp3 decoder and turn it into a synth engine.
>> The expected latency with Ogg or MP3 is 200+ ms; the CELT end of Opus might
>> be a better choice (~5 ms). Even SILK is probably higher latency than a
> I can follow this in outline, although not the finer details, but my greatest
> concern is how would you go about proving there was no noticeable difference
> to the listener? With all the interaction possibilities I suspect there are
> rather a lot of corner cases.
My first comment is that there would be slight differences. The question
is really whether those differences are more or less pleasing to listen to.
What would be happening is not a replacement for the same sound, but rather
a new sound that happened to be similar and would have to have its own
> I'm also reminded of the situation when the web was fairly new and people would
> make copies of copies of jpegs. The differences only got noticeable about 3
> steps down, but by then it was too late. The damage had been done.
In this case it is always first generation. No matter what your first
generation is, lossy encoding will introduce differences in the final sound.
That said, a sound that started out using compression techniques might
sound less different than other sounds.
> When I show off my music to others, the sound is the first comment (I could
> wish otherwise) so personally I'm rather twitchy about anything that might
> alter that, and as a musician I'd rather spend out for a more powerful computer
> than have a more efficient but possibly compromised mojo.
Sound is king. Lossless encoding is best. The start of the thread was
based on the present hardware and lowering CPU load... Faster hardware,
more cores, etc. may not be possible for everyone... particularly someone
who is using a small R-Pi-like board as a headless stage box. I have seen
effects boxes done this way, but not sound generators yet (though the MOD
could possibly be used this way).
> ... of course, as an engineer I would like the greatest efficiency possible -
> fortunately I don't talk to myself :)
As a musician, I am quite willing to use 500 watts for an amp delivering
50 watts of sound if it just happens to be "that sound".
> Reading this back, it seems rather like a rant. I'm sorry, but 'our' sound has
> become critical to my compositions.
I did not feel ranted at. It is hard to know how much time or sound matters
to the person using the SW. For example, with the note-on instance noted
earlier, the sound module could, at note start, choose to do only half of
its setup in the first period and finish in the second, only starting to
make sound at that point (outputting silence in the first period, as was
suggested). However, that note-start delay may not be acceptable to the
artist. They may be quite willing to buy faster HW or even use two HW
boxes for more layers rather than have that small delay. The latency you
originally used as an example was, to my mind, higher than I would like to
use for a guitar effect, though I have used it with this netbook because
the internal sound can't go lower (jack won't even start at 64/2).
<dream helmet on>
I think the MOD is in many ways the wave of the future. I see off-loading
more of the sound processing to the audio interface as general-purpose
computer interfaces become more throughput oriented and less low-latency
capable. Having an audio interface that is a kind of specialty computer,
but with OS access for the user, just makes sense. Many audio interfaces
already have quite a lot of processing inside, but are not open. The cost
of this added processing is not that high (end cost of $50?) and I would
think having the ability to add processing power with cards the size of
mini/micro PCIe wireless cards should not be difficult. If Jack is run
with very low latency, then a netjack-like interface between cores could
easily allow the use of 16 or more cores/threads and still have an
acceptable latency. What if a second (open) video card was used for audio
processing?