[Interest] memory fragmentation?
Tony Rietwyk
tony at rightsoft.com.au
Thu Aug 23 03:54:43 CEST 2012
Hi Jason,
If there are 5 public objects whose d-pointers reference the same private
object address, how are you going to update them when you relocate the
private object?
I don't think .Net stores addresses, just handles, so the relocate only
needs to update the handle info. That's why you have to go through all of
the marshalling crap when you do want to pass the address somewhere!
Tony.
From: interest-bounces+tony=rightsoft.com.au at qt-project.org
[mailto:interest-bounces+tony=rightsoft.com.au at qt-project.org] On Behalf Of
Jason H
Sent: Thursday, 23 August 2012 11:18 AM
To: Constantin Makshin; Qt Interest
Subject: Re: [Interest] memory fragmentation?
I'll take that as a compliment. I fully expected someone to raise the issues
you did. And I will counter them with the following comments.
Yes, Qt would become less predictable than it is now, but still more
predictable than .NET's allocator. If you want GC-like features, it'll come
at a GC-like price.
Next, on when to do it: if we know the compacted size of all the objects
stored, we can calculate a density (total size / (largest address - smallest
address)) and compact once it falls below a certain threshold. Since all the
reallocations would be consecutive, they should be contiguous. This would
then free up memory pages and should improve pagefile performance.
You do mention a complication: d-ptrs are allocated immediately, which means
we can never move the original non-private class unless we have some other
way to reference a QObject (name()?), with the expectation that that is the
only way to access it, or unless we set some no-move mutex when we start to
use it.
And I am sure others are wondering, just as I still kind of do, whether all
this is worth it. My only answer to that is: MS did it for .NET, so there
must be some good reason why they did it that way.
_____
From: Constantin Makshin <cmakshin at gmail.com>
To: Qt Interest <interest at qt-project.org>
Sent: Wednesday, August 22, 2012 8:37 PM
Subject: Re: [Interest] memory fragmentation?
I doubt that implementing moveable private objects/pointers would be
[really] useful.
Firstly, Qt can't control the location of the memory block allocated
by new/malloc, so the assumption that some [random] reallocations will
improve the contiguity of allocated memory looks quite naive.
Secondly, d-ptr allocation is (in most cases) the first operation performed
by an object's constructor, so there's a fair chance, I think, that the
private object will be placed directly after its "public" buddy. In that
case, destroying both objects won't create any excess "holes" in
allocated/free memory.
And last but not least, moving private objects around would unnecessarily
complicate Qt's internals (the reallocation itself, code for deciding when
and where the d-ptr should be moved, etc.) and would probably make it less
predictable in terms of performance (reallocating the d-ptr without
reallocating its "sub-pointers" makes little sense IMHO, and moving all that
stuff around can be expensive).
Yes, the idea is interesting, but crazy. :)
On Wed, Aug 22, 2012 at 9:27 PM, Jason H <scorp1us at yahoo.com> wrote:
> C++ on .NET functions as it does now, except that the compiler introduces
> the operator ^ as a type modifier, like * (pointer). ^ declares handles to
> managed objects, as * declares addresses of objects. The runtime then
> handles dereferencing the handles for you, just like your compiler uses
> the appropriate instructions with a pointer.
>
> Now, here's my crazy idea. If we have small objects - say Qt's interface
> classes, and large objects, say Qt's private classes, then could we do
> some d-ptr trickery where Qt can reallocate and copy/move the memory
> around and reassign a d-ptr? We can't get all the coolness of .NET's GC,
> but we can come close, at least for "large" objects (objects using
> d-ptrs). We've already talked about the GUI, but what is more interesting
> to me is a QObject hierarchy (not necessarily QWidgets), in that you could
> say, for this large, old tree of objects, do something that would result
> in "better" (more contiguous) memory allocations.
>
>
>
>
> ________________________________
> From: Konrad Rosenbaum <konrad at silmor.de>
> To: interest at qt-project.org
> Sent: Wednesday, August 22, 2012 4:50 AM
>
> Subject: Re: [Interest] memory fragmentation?
>
> Hi,
>
> On Tuesday 21 August 2012 12:01:49 Bo Thorsen wrote:
>> Memory fragmentation is the problem where you try to allocate a bigger
>> chunk of memory than any contiguous block that is available, even though
>> the total amount of free memory would suffice.
>>
>> Do you know if the ^ implementation in .NET actually does the realloc
>> stuff, or do they only say that it's a possibility? I ask because this
>> sounds hard to do well. You either have a really slow operation running
>> often (just moving stuff back) or an almost impossible task (moving
>> things *you will keep for a long time* in front of temporary objects).
>
> I'm not entirely certain how "C++" is implemented on .NET - it is an
> alien in that world, since it normally expects allocations that do not
> move around. My guess would be that it marks objects assigned to Type*
> style pointers as "unmovable".
>
> See http://msdn.microsoft.com/en-us/library/ee787088.aspx for a detailed
> (and, for MS, atypically readable) description of the .NET garbage
> collector.
>
> The short version:
>
> .NET uses a generational and segmented garbage collector: small objects
> are created in an "ephemeral" memory area (large objects are presumed to
> be long-lived from the start) and are marked "generation 0". When the GC
> discovers that it has promoted most of the objects to "generation 1" (not
> temporary anymore) or "generation 2" (long-lived objects), it marks the
> whole segment as "generation 2", which just makes the GC sweeps happen
> less often. A new segment is then chosen as "ephemeral".
>
> When the GC actually runs on a segment, it does so in several phases.
> Phase 1 is "mark": the GC checks which objects are there to stay and
> which ones can be removed (i.e. they no longer have any connection to a
> running thread). Phase 2 is "sweep": it actually removes those objects.
> After that comes phase 3, "compact": it reallocates objects to the start
> of their segment to eliminate fragmented space. If necessary it can even
> reallocate objects to other segments.
>
> In other words: .NET manages memory in large chunks and automatically
> compacts those chunks when it feels this is necessary. So object
> references are pointers into a lookup table that contains the real memory
> location, which can change during GC runs.
>
>> The one case where you might have a problem is if allocs/deallocs are
>> done often (for example list views that change content often) and you
>> sometimes alloc a big chunk of memory in one go.
>
> As far as I've seen in this discussion, this falls into two categories:
>
> 1) Software on small and/or real-time systems that has critical parts and
> Qt as display. [I.e. physical memory is the limit.]
>
> 2) Software that crunches lots of data and is combined with a GUI -
> scientific applications, data analysis, etc. [I.e. pointer size is the
> main limit; memory can be extended for money.]
>
>> Things you can do to combat this mostly amount to making your objects
>> smaller. For example, use linked lists instead of arrays. The lists will
>> add 4 bytes to each object, but the object sizes are much smaller.
>>
>> If you want to go even further, you can add code to do the memory
>> intensive stuff on disc - much, much slower, but at least it will run.
>
> Both solve the problem for category 2 software (data/number crunching
> with a GUI). I would in all earnestness add: go 64-bit! Your target
> audience can easily use the added flexibility of 64-bit pointers (some of
> them could use more if they could get it).
>
> For category 1 (real-time, small memory footprint) I can only suggest:
> separate the processes. Have one that does the critical stuff in a
> deterministic manner with pre-allocated memory, and another process for
> the display - if there is a memory problem in the display process it does
> not hurt much to just kill it, restart, and resync with the main process.
> Yes, that is quite a bit of extra effort, but if you have serious worries
> about this it may be a lot easier than making something as complex as Qt
> predictable. (The one time I did program such a tool I even had the
> critical part in its own microcontroller.)
>
>
> Konrad
>
_______________________________________________
Interest mailing list
Interest at qt-project.org
http://lists.qt-project.org/mailman/listinfo/interest