On Sun, February 10, 2013 3:03 pm, Paul Davis wrote:
> On Sun, Feb 10, 2013 at 5:58 PM, Len Ovens <len@ovenwerks.net> wrote:
>> Two Linux drivers to consider. The PCIe stuff in the kernel is
>> probably optimized for throughput to make the best use of video
>> cards, fast Ethernet, etc. That means larger chunks of data for less
>> overhead, and maybe higher latency too. It shouldn't be, though, as
>> PCIe is faster than PCI, which handled things just fine. The extra
>> throughput should actually help for higher channel counts. (From
>> reading through some of the docs on netjack.)
> not really. the drivers for RME devices are PCI-species independent.
> the PCI bus doesn't really exist for them, which is why the same
> driver can interact with the (old) cardbus version as if it is
> directly on the bus. drivers see address spaces and registers, not
> the bus.
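
Just to check I follow the "address spaces and registers" point, here is
a minimal sketch of what a PCI probe routine deals with (the vendor and
device IDs, register offset, and driver name below are made up, not any
real card): the driver maps a BAR into its address space and reads a
register through that mapping, and nothing in it knows or cares whether
the slot underneath is PCI, PCIe or CardBus.

/* Minimal sketch of a PCI/PCIe driver's view of a card: a BAR mapped
 * into the address space and registers read through it.  The IDs and
 * the register offset are placeholders, not a real device. */
#include <linux/module.h>
#include <linux/pci.h>

#define EXAMPLE_VENDOR_ID   0x1234  /* placeholder */
#define EXAMPLE_DEVICE_ID   0x5678  /* placeholder */
#define EXAMPLE_STATUS_REG  0x10    /* placeholder register offset */

static int example_probe(struct pci_dev *pci, const struct pci_device_id *id)
{
        void __iomem *regs;
        int err;

        err = pci_enable_device(pci);
        if (err)
                return err;

        /* Map BAR 0.  From here on the driver only sees registers;
         * whether the slot is PCI, PCIe or CardBus is invisible. */
        regs = pci_iomap(pci, 0, 0);
        if (!regs) {
                pci_disable_device(pci);
                return -ENOMEM;
        }

        dev_info(&pci->dev, "status register: 0x%08x\n",
                 ioread32(regs + EXAMPLE_STATUS_REG));

        pci_set_drvdata(pci, regs);
        return 0;
}

static void example_remove(struct pci_dev *pci)
{
        void __iomem *regs = pci_get_drvdata(pci);

        pci_iounmap(pci, regs);
        pci_disable_device(pci);
}

static const struct pci_device_id example_ids[] = {
        { PCI_DEVICE(EXAMPLE_VENDOR_ID, EXAMPLE_DEVICE_ID) },
        { }
};
MODULE_DEVICE_TABLE(pci, example_ids);

static struct pci_driver example_driver = {
        .name     = "example-audio",
        .id_table = example_ids,
        .probe    = example_probe,
        .remove   = example_remove,
};
module_pci_driver(example_driver);

MODULE_LICENSE("GPL");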
OK, so what I get from this is that the CPU sees a PCIe device the same
way it sees a PCI device. Yet I know from reading the specs on both the
PCI-to-PCIe bridge and the motherboard chipsets that there is some
firmware involved at both ends. That may not be something run by the CPU
itself (though it could be), but by another processor in the glue logic,
or it may be part of the BIOS; I don't know. Most PCI(e) sound cards
have a DSP with some sort of firmware as well. How much all of this
interacts I don't know. What I do see is a lot of brand new motherboards
that handle sound badly.
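
For the DSP side, the pattern I see in the kernel sound drivers is
request_firmware() at probe time, with the actual download into the card
being device specific. Roughly like this (the firmware file name and the
dsp_download() step are placeholders, not any particular driver):

/* Rough sketch of DSP firmware loading as done by many PCI sound
 * drivers.  "example-dsp.bin" and dsp_download() are placeholders. */
#include <linux/device.h>
#include <linux/firmware.h>

/* Placeholder for the card-specific part that actually pushes the
 * image to the DSP (register writes, DMA, etc.). */
static int dsp_download(struct device *dev, const u8 *image, size_t size)
{
        dev_info(dev, "would download %zu bytes of DSP code here\n", size);
        return 0;
}

static int example_load_dsp(struct device *dev)
{
        const struct firmware *fw;
        int err;

        /* Ask the kernel firmware loader for the blob; it comes from
         * /lib/firmware (or the initramfs), not from the card. */
        err = request_firmware(&fw, "example-dsp.bin", dev);
        if (err) {
                dev_err(dev, "DSP firmware not found: %d\n", err);
                return err;
        }

        err = dsp_download(dev, fw->data, fw->size);
        release_firmware(fw);
        return err;
}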
--
Len Ovens
www.OvenWerks.net