[Interest] Qt5, XCB and X11
Rutledge Shawn
Shawn.Rutledge at digia.com
Tue Jul 15 17:40:52 CEST 2014
On 15 Jul 2014, at 5:04 PM, Thiago Macieira wrote:
> On Tuesday 15 July 2014 10:07:18 Rutledge Shawn wrote:
>> I was thinking the solution to both problems might be some sort of
>> deduplication by hashing, both in storage and in memory. It could be done
>> at a block level or at the level of individual functions. It's probably a
>> research project somewhere...
>
> It's actually implemented in Linux. It's done at page level.
>
> http://en.wikipedia.org/wiki/Kernel_SamePage_Merging_(KSM)
Sounds like a good start, but it has some limitations: merging is done by periodic scanning, and the kernel's Documentation/vm/ksm.txt says:
    KSM only operates on those areas of address space which an application
    has advised to be likely candidates for merging, by using the madvise(2)
    system call: int madvise(addr, length, MADV_MERGEABLE).

    …

    Applications should be considerate in their use of MADV_MERGEABLE,
    restricting its use to areas likely to benefit. KSM's scans may use a lot
    of processing power: some installations will disable KSM for that reason.
cat /sys/kernel/mm/ksm/run prints 0 here, so KSM is disabled on my system by default.
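For reference, that check can be scripted; the files under /sys/kernel/mm/ksm are the kernel's KSM sysfs interface (the run mode plus a few counters that show whether merging is actually paying off), and they exist only on kernels built with CONFIG_KSM:

```shell
# Inspect KSM state via sysfs; these files exist only on kernels
# built with CONFIG_KSM.
run=$(cat /sys/kernel/mm/ksm/run 2>/dev/null || echo unavailable)
echo "KSM run mode: $run"   # 0 = off, 1 = scanning, 2 = unmerge everything

# Enabling it needs root:
#   echo 1 | sudo tee /sys/kernel/mm/ksm/run

# Counters showing how much is actually being merged:
for f in pages_shared pages_sharing full_scans; do
    v=$(cat /sys/kernel/mm/ksm/$f 2>/dev/null || echo unavailable)
    echo "$f: $v"
done
```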
So we need to add madvise() calls to Qt to enable this, or is it already done at another layer? The only place I found any madvise() calls is in src/3rdparty/pcre/sljit/sljitUtils.c. But it sounds like the scanning cost outweighs the benefit often enough that KSM has remained a cautious opt-in feature.
Then, after getting it working, I wonder if we could measure how much Qt can diverge between applications before merging stops being effective. I don't suppose it's easy to track which de-duplicated pages came from which (ranges inside) files.
http://jak-linux.org/projects/hardlink/ claims to replace complete file duplicates with hard links; I will have to try it. But that is apparently also a periodic-scanning solution.