On Sat, 2011-02-26 at 19:32 +0100, Olivier Guilyardi wrote:
> On 02/26/2011 07:18 PM, David Robillard wrote:
> > I'm more into shrinking the actual runtime memory overhead to the
> > absolute bare minimum than shrinking the code. I have roughly
> > infinity more useful things to do than pretending <100k libraries
> > are bloated :)
> Oddly, on most Android devices you actually have plenty of RAM to
> work with, especially in native code, where the Java maximum heap
> size does not apply. Memory's not a bottleneck.
Fair enough. The sum of all installed LV2 data loaded into a data
structure can be large, though. Even so, my new implementation is
still not quite optimal:
The per-statement overhead in Sord can be reduced from
4 * sizeof(void*) * n_triples * n_indices
to
4 * sizeof(void*) * n_triples
relatively easily, but it already seems to be much smaller than the old
librdf version (judging by my last unscientific experiment and feedback
from others, at least). n_indices is typically in [1..4], and IIRC 2 for
slv2.
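Back-of-the-envelope, that overhead formula works out like this
(a standalone sketch; `index_overhead` is an illustrative name, not
anything in Sord's actual API):

```c
#include <stddef.h> /* size_t */

/* Illustrative helper (not part of Sord): each triple stored in an
 * index costs roughly 4 pointers, so the total pointer overhead is
 * 4 * sizeof(void*) * n_triples * n_indices. */
static size_t index_overhead(size_t n_triples, unsigned n_indices)
{
    return 4 * sizeof(void*) * n_triples * n_indices;
}
```

For 100000 triples and the 2 indices slv2 uses, that is
4 * 8 * 100000 * 2 = 6.4 MB of pointer overhead on a typical 64-bit
machine; dropping the n_indices factor halves it to 3.2 MB.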
There's also, from SLV2's perspective, 2 copies made of every parsed
node (from disk to serd, then from serd to sord). This can be reduced to
1, the optimum[1], at a slight cost in processing overhead (reading a
character gets one function call more expensive).
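The single-copy approach could look something like this sketch: the
parser pulls bytes through a read-character callback and writes them
straight into the buffer that becomes the node, so the only copy is
source => node. All names here are hypothetical, not serd's actual
API; the callback indirection is the "one function call more
expensive" cost mentioned above.

```c
#include <stdlib.h>

/* Hypothetical per-character read callback: returns the next byte of
 * the source, or -1 at end of input. */
typedef int (*ReadCharFunc)(void* stream);

/* Read a space-terminated token directly into a freshly allocated
 * buffer: one copy, source -> final node, with no intermediate
 * parser-owned buffer in between. */
static char* read_node(ReadCharFunc read_char, void* stream)
{
    size_t len = 0;
    size_t cap = 16;
    char*  buf = (char*)malloc(cap);
    int    c;
    while ((c = read_char(stream)) >= 0 && c != ' ') {
        if (len + 1 >= cap) {
            cap *= 2;
            buf = (char*)realloc(buf, cap);
        }
        buf[len++] = (char)c;
    }
    buf[len] = '\0';
    return buf;
}

/* Toy in-memory "stream" for demonstration. */
typedef struct { const char* str; size_t pos; } MemStream;

static int mem_read(void* stream)
{
    MemStream* ms = (MemStream*)stream;
    return ms->str[ms->pos] ? (int)(unsigned char)ms->str[ms->pos++] : -1;
}
```

The extra function call per character is the whole trade-off: the
buffered two-copy version reads from a block, while this one pays an
indirect call per byte to avoid the second copy.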
Everything else is pretty tight; with these two improvements serd =>
sord will be roughly as space-efficient as an implementation of
"reading Turtle into a searchable model" can possibly be.
On Android it's probably not an issue though, as you say. For other
embedded situations it would be nice to have absolutely minimal space
overhead on top of the actual data itself... plus it's fun :)
> I wasn't saying that libraries <100k are bloated. So far, I had the
> impression that we were on the order of hundreds of kilobytes, which
> is too much for plugin support in an app when you think about it as
> a whole, which means codecs and other deps.
Fair enough. It seems the libraries involved can be built to be well
under 100k with -Os and such, but I have no idea how 64-bit shared
library code compares to static Android code in terms of size. I'm all
for contributions that shave a bit of space, but everything seems pretty
good to me now (glib aside).
-dr
[1] In a basic model where reading from "disk" is considered a copy. To
make the implementation literally as lightweight as possible in reality,
it could mmap everything, so in the case where the data is already
cached it would not be copied at all thanks to virtual memory. I will
probably try this eventually for fun, but it's probably more of a
high-performance computing CS nerd wank than something that needs to
happen...
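For the footnote's mmap idea, a minimal POSIX sketch (`map_file` is a
hypothetical helper, not anything in serd): pages already in the
kernel's cache are shared into the process via virtual memory, so
node strings could point into the mapping with no copy at all.

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map a whole file read-only into memory. Returns a pointer to the
 * mapping (and its size via *size), or NULL on error. If the file's
 * pages are already in the page cache, no data is copied at all; the
 * parser can hand out pointers straight into the mapping. */
static const char* map_file(const char* path, size_t* size)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        return NULL;
    }
    struct stat st;
    if (fstat(fd, &st) || st.st_size == 0) {
        close(fd);
        return NULL;
    }
    void* buf = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd); /* the mapping stays valid after close */
    if (buf == MAP_FAILED) {
        return NULL;
    }
    *size = (size_t)st.st_size;
    return (const char*)buf;
}
```

The caller unmaps with munmap() when done; nodes pointing into the
mapping must not outlive it, which is the main complication this
approach would impose on the model's lifetime rules.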