[Interest] memory fragmentation?

Till Oliver Knoll till.oliver.knoll at gmail.com
Tue Aug 21 22:26:04 CEST 2012


Am 21.08.12 18:04, schrieb Thiago Macieira:
> On terça-feira, 21 de agosto de 2012 07.09.44, Jason H wrote:
>> By returning out of memory, unrolling the stack, then at the top level
>> displaying the error.
>
> Well, don't use for Qt.
>
> Qt code *will* crash in unpredictable ways under OOM circumstances.

Folks, I gave up checking for NULL pointers (C, malloc) or bad_alloc 
exceptions (new, C++) a long time ago. I remember a discussion several 
years ago (here on Qt Interest?) about desktop memory allocators 
effectively never returning a NULL pointer (or throwing an exception) 
when they cannot allocate memory.

The reasoning was that (at least on Windows, but I think the same holds 
for Linux and probably others such as OS X as well) even when you try to 
allocate a ridiculously large amount of memory, more than is currently 
available, the OS will happily grant the request: you get a valid 
pointer back, and no exception is thrown.

The trouble only starts when you actually try to *use* that memory 
(probably also depending on which memory page you are writing to), that 
is, when you touch more than is really (virtually) available. Then the 
OS simply terminates your process - and there is no way you can react 
to that.

The reasoning is that the OS hands out that memory with an "optimistic" 
view: it hopes that by the time you *really* use it, another process 
will have freed the corresponding memory in the meantime, or that you 
are not actually going to use the entire requested amount... If that 
assumption turns out to be wrong, well, bang! There goes your process...
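
If you want to see that behaviour for yourself, something along these 
lines should do. Just a sketch, assuming a Linux-/macOS-style system 
with overcommit enabled; the exact numbers and the outcome depend on the 
OS and its overcommit settings (e.g. /proc/sys/vm/overcommit_memory on 
Linux):

// Overcommit illustration: the allocations themselves tend to succeed;
// the process typically only dies once the pages are actually touched.
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main()
{
    const size_t chunk = 1024u * 1024u * 1024u;   // 1 GiB per allocation

    for (int i = 0; ; ++i) {
        char *p = static_cast<char *>(std::malloc(chunk));
        if (!p) {
            // On an overcommitting system we rarely get here...
            std::printf("malloc returned NULL after %d GiB\n", i);
            return 0;
        }
        std::printf("allocation %d succeeded, touching the pages...\n", i + 1);

        // ...but touching the pages forces the OS to back them with real
        // memory. If it cannot, the process is typically killed (e.g. by
        // the Linux OOM killer) - there is no error code to check for.
        std::memset(p, 0xff, chunk);
    }
}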


Here are a few mentions of this OS behaviour:

http://stackoverflow.com/questions/2497151/can-the-c-new-operator-ever-throw-an-exception-in-real-life

"I use Mac OS X, and I've never seen malloc return NULL (which would 
imply an exception from new in C++). The machine bogs down, does its 
best to allocate dwindling memory to processes, and finally sends 
SIGSTOP and invites the user to kill processes rather than have them 
deal with allocation failure."

   and

"Note that in Windows, very large new/mallocs will just allocate from 
virtual memory. In practice, your machine will crash before you see that 
exception."


There's also a short mention here:

http://stackoverflow.com/questions/550451/will-new-return-null-in-any-case

"There are also situations where your OS will let you allocate the 
memory without really mapping new pages in (lazy evaluation). But when 
you go to try and use that memory, there's nothing available and process 
gets killed."


However, some answers also say that, depending on ulimit (Unix/Linux), a 
process actually does get a bad_alloc exception.
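
As a small illustration of that ulimit case - again just a sketch, 
assuming a POSIX system where RLIMIT_AS is honoured, as on Linux; 
setrlimit() is the programmatic cousin of "ulimit -v":

// With the address space capped, an over-sized allocation fails right
// away, so operator new really does throw std::bad_alloc instead of the
// process being killed later.
#include <cstdio>
#include <new>
#include <sys/resource.h>

int main()
{
    rlimit limit;
    limit.rlim_cur = 256u * 1024u * 1024u;   // cap address space at 256 MiB
    limit.rlim_max = 256u * 1024u * 1024u;
    if (setrlimit(RLIMIT_AS, &limit) != 0) {
        std::perror("setrlimit");
        return 1;
    }

    try {
        char *p = new char[512u * 1024u * 1024u];   // 512 MiB > limit
        std::printf("unexpectedly got %p\n", static_cast<void *>(p));
        delete[] p;
    } catch (const std::bad_alloc &) {
        std::printf("got std::bad_alloc, as the ulimit answers suggest\n");
    }
    return 0;
}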


But on a typical desktop the "realistic programmer's view" is probably 
that when the system runs low on memory it becomes soooo terribly slow 
that it is the user who kills processes first - so there is no real gain 
(for desktop applications!) in handling "out of memory" (unless you 
really try to allocate very large chunks of memory in one go, where 
there is at least a chance that you get a bad_alloc exception).
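
So if one wants to handle it at all, the only place where it really 
makes sense is around a single, known-to-be-huge allocation - roughly 
like this (just a sketch; the helper name allocateHugeBuffer() is made 
up, and with overcommit even this may still "succeed"):

// Guard only the one huge allocation, not every small new/malloc.
#include <cstdio>
#include <memory>
#include <new>

static std::unique_ptr<char[]> allocateHugeBuffer(std::size_t bytes)
{
    try {
        return std::unique_ptr<char[]>(new char[bytes]);
    } catch (const std::bad_alloc &) {
        return nullptr;   // caller can tell the user "cannot allocate X MB"
    }
}

int main()
{
    // Deliberately absurd size (assumes a 64-bit build).
    const std::size_t wanted = 64ull * 1024 * 1024 * 1024;

    if (auto buffer = allocateHugeBuffer(wanted))
        std::printf("Allocated %zu MB\n", wanted / (1024 * 1024));
    else
        std::fprintf(stderr, "Sorry, cannot allocate %zu MB\n",
                     wanted / (1024 * 1024));
    return 0;
}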

At least I cannot remember any program - ever! - crashing because of an 
"out of memory" condition (on a desktop, of course - not talking about 
mobile devices), let alone getting a user message saying "sorry, I 
cannot allocate X MBytes". (However, I do remember sessions where my 
harddisk and my patience were dying, and me sending furious kill -9 
commands ;))

Cheers, Oliver



