[Interest] memory fragmentation?
Constantin Makshin
cmakshin at gmail.com
Thu Aug 23 02:37:42 CEST 2012
I doubt that implementing moveable private objects/pointers would be
[really] useful.
Firstly, Qt can't control the location of the memory block allocated
by new/malloc, so the assumption that some [random] reallocations will
improve the contiguity of allocated memory looks quite naive.
Secondly, d-ptr allocation is (in most cases) the first operation
performed by an object's constructor, so there's a fair chance the
private object will be placed directly after its "public" buddy. In that
case, destroying both objects won't create any excess "holes" in
allocated/free memory.
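For illustration, here is the usual d-ptr pattern in heavily simplified,
invented form - a sketch, not actual Qt source:

    // Allocating the private object is the very first thing the
    // constructor does, so "new Widget" and the inner "new WidgetPrivate"
    // are back-to-back heap allocations - many allocators will therefore
    // place the two blocks right next to each other.
    class WidgetPrivate { /* the bulk of the object's data lives here */ };

    class Widget {
    public:
        Widget() : d_ptr(new WidgetPrivate) {}
        ~Widget() { delete d_ptr; }
    private:
        WidgetPrivate* d_ptr;
    };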
And, last but not least, moving private objects around would
unnecessarily complicate Qt's internals (the reallocation itself, the
code for deciding when and where a d-ptr should be moved, etc.) and
would probably make it less predictable in terms of performance
(reallocating a d-ptr without also reallocating its "sub-pointers" makes
little sense IMHO, and moving all of that around can be expensive).
Yes, the idea is interesting, but crazy. :)
On Wed, Aug 22, 2012 at 9:27 PM, Jason H <scorp1us at yahoo.com> wrote:
> C++ on .NET functions as it does now, except that the compiler introduces
> the ^ operator as a type modifier, like * (pointer).
> ^ declares a handle to a managed object, just as * declares the address of
> a native object. The runtime then handles dereferencing the handles for
> you, just like your compiler emits the appropriate instructions for a pointer.
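A minimal C++/CLI sketch of the two syntaxes side by side (the trivial ref
class is invented for illustration; compile with /clr):

    // '*' declares a native pointer: the block stays at a fixed address
    // and must be freed manually. '^' declares a handle: the GC owns the
    // object, may relocate it, and the runtime follows the handle for us.
    ref class Managed { public: int value; };

    int main()
    {
        int* p = new int(42);        // native: fixed address
        Managed^ h = gcnew Managed;  // managed: may move during a GC pass
        h->value = *p;               // the runtime dereferences the handle
        delete p;                    // native memory is still freed by hand
        return h->value;
    }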
>
> Now, here's my crazy idea. If we have small objects - say Qt's interface
> classes, and large objects - say Qt's private classes, then could we do
> some d-ptr trickery where Qt reallocates and copies/moves the memory
> around and reassigns the d-ptr? We can't get all the coolness of .NET's
> GC, but we can come close, at least for "large" objects (objects using
> d-ptrs). We've already talked about the GUI, but what is more interesting
> to me is a QObject hierarchy (not necessarily QWidgets), where you could
> say: for this large, old tree of objects, do something that results in
> "better" (more contiguous) memory allocation.
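A purely hypothetical sketch of that trickery (nothing like this exists in
Qt, and it is only safe if nobody else has stored the old address of the
private object):

    // Relocate the private object into a fresh block and reassign the
    // d-ptr. A compaction pass over an old QObject tree could call this
    // on every node to tighten up the heap.
    void Widget::compactPrivate()
    {
        WidgetPrivate* moved = new WidgetPrivate(*d_ptr); // copy out
        delete d_ptr;                                     // free old block
        d_ptr = moved;                                    // repoint
    }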
>
> ________________________________
> From: Konrad Rosenbaum <konrad at silmor.de>
> To: interest at qt-project.org
> Sent: Wednesday, August 22, 2012 4:50 AM
>
> Subject: Re: [Interest] memory fragmentation?
>
> Hi,
>
> On Tuesday 21 August 2012 12:01:49 Bo Thorsen wrote:
>> Memory fragmentation is the problem where an allocation of a bigger
>> chunk of memory fails because no contiguous block of that size is left,
>> even though the total amount of free memory would be enough.
>>
>> Do you know if the ^ implementation in .NET actually does the realloc
>> stuff, or do they only say that it's a possibility? I ask because this
>> sounds hard to do well. You either have a really slow operation running
>> often (just moving stuff back), or an almost impossible task (moving the
>> things *you will keep for a long time* in front of the temporary objects).
>
> I'm not entirely certain how "C++" is implemented on .NET - it is an alien
> in that world, since it normally expects allocations that do not move
> around. My guess would be that it marks objects assigned to Type* style
> pointers as "unmovable".
>
> See http://msdn.microsoft.com/en-us/library/ee787088.aspx for a detailed
> (and, for MS, uncharacteristically readable) description of the .NET
> Garbage Collector.
>
> The short version:
>
> .NET uses a generational and segmented garbage collector: small objects
> are created in an "ephemeral" memory area (large objects are presumed to
> be long-lived from the start) and are marked "generation 0". When the GC
> discovers that it has promoted most of the objects to "generation 1" (not
> temporary anymore) or "generation 2" (long-lived), it marks the whole
> segment as "generation 2", which just makes the GC sweeps happen less
> often. A new segment is then chosen as "ephemeral".
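The generation numbers are even observable from code; a small C++/CLI
sketch (compile with /clr):

    using namespace System;

    int main()
    {
        Object^ o = gcnew Object;
        Console::WriteLine(GC::GetGeneration(o)); // 0: freshly allocated
        GC::Collect(0);                           // sweep generation 0 only
        Console::WriteLine(GC::GetGeneration(o)); // 1: survived, promoted
        return 0;
    }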
>
> When the GC actually runs on a segment, it does so in several phases. The
> first phase, "mark", checks which objects are there to stay and which
> ones can be removed (i.e. they are no longer reachable from any running
> thread). Phase 2, "sweep", actually removes those objects. After that
> comes phase 3, "compact": it reallocates objects to the start of their
> segment to eliminate fragmented space. If necessary it can even
> reallocate objects to other segments.
>
> In other words: .NET manages memory in large chunks and automatically
> compacts those chunks when it feels this is necessary. So object
> references are pointers into a lookup table that contains the real memory
> location, which can change during GC runs.
>
>> The one case where you might have a problem is if allocs/deallocs happen
>> often (for example, list views that change content often) and you
>> sometimes alloc a big chunk of memory in one go.
>
> As far as I've seen in this discussion, this falls into two categories:
>
> 1) Software on small and/or real-time systems that has critical parts and
> Qt as the display. [I.e. physical memory is the limit.]
>
> 2) Software that crunches lots of data and is combined with a GUI -
> scientific applications, data analysis, etc. [I.e. pointer size is the
> main limit; memory can be extended for money.]
>
>> Things you can do to combat this are mostly about making your objects
>> smaller. For example, use linked lists instead of arrays. A list adds 4
>> bytes to each object, but the individual allocations are much smaller.
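The trade-off in concrete terms (a sketch; the Record payload is made up):

    // A vector needs one large contiguous block - exactly what
    // fragmentation breaks - while a list makes many node-sized
    // allocations that fit into the holes, at the price of one extra
    // pointer per element.
    #include <forward_list>
    #include <vector>

    struct Record { char payload[120]; };

    std::vector<Record>       contiguous; // one big block, may fail to grow
    std::forward_list<Record> scattered;  // per-node blocks, hole-friendly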
>>
>> If you want to go even further, you can add code to do the memory-
>> intensive stuff on disc - much, much slower, but at least it will run.
>
> Both solve the problem for category 2 software (data/number crunching
> with a GUI). I would in all earnestness add: go 64-bit! Your target
> audience can easily use the added flexibility of 64-bit pointers (some of
> them could use even more if they could get it).
>
> For category 1 (real-time, small memory footprint) I can only suggest:
> separate the processes. Have one that does the critical stuff in a
> deterministic manner with pre-allocated memory, and another process for
> the display - if there is a memory problem in the display process, it
> does not hurt much to just kill it, restart it and resync with the main
> process. Yes, that is quite a bit of extra effort, but if you have
> serious worries about this it may be a lot easier than making something
> as complex as Qt predictable. (The one time I did program such a tool I
> even had the critical part in its own microcontroller.)
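A rough sketch of that supervision loop with Qt's own QProcess
("displayproc" is a made-up name for the GUI binary):

    #include <QProcess>

    // The critical process keeps its pre-allocated memory and merely
    // restarts the display process whenever it exits or is killed.
    void superviseDisplay()
    {
        QProcess gui;
        for (;;) {
            gui.start("displayproc"); // hypothetical GUI executable
            gui.waitForFinished(-1);  // block until it dies (no timeout)
            // resync application state with the fresh instance here
        }
    }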
>
>
> Konrad
>