I'm a game programmer who has worked for both corporate and third-party companies, and contributed to hit titles including Transformers, WALL-E, and Spider-Man. Currently I'm working on a personal project (keeping the scope small, and the passion alive) while I search for my next gig.

After finishing Part 6 of my Ready, Set, Allocate! series, I received a kind suggestion that I examine the Windows Low-Fragmentation Heap (LFH) and compare its allocation times against those of the heap system developed in my series.

Well, I spent a few hours getting a simple LFH policy set up and working; it really shouldn't have been that hard. The time went mostly to a couple of weird quirks (read: requirements) Microsoft imposes on using an LFH policy, and to my skimming the documentation and forums to learn about it. On Windows 7 and Windows Vista, a process's heap is created with an LFH policy by default. On Windows XP, however, the default heap uses look-aside lists, and you have to switch it to an LFH policy using the HeapSetInformation function. But here comes the first quirk.
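The switch itself is a single call. Here is a minimal sketch of opting the default process heap into the LFH policy on Windows XP (Vista and 7 already default to it); the value 2 for HeapCompatibilityInformation is what requests the low-fragmentation heap, and error handling is kept to the bare minimum:

```cpp
// Sketch: enabling the LFH policy on the default process heap (XP).
#define _WIN32_WINNT 0x0501   // HeapSetInformation needs XP or later
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE heap = GetProcessHeap();

    // HeapCompatibilityInformation value 2 = low-fragmentation heap.
    ULONG heapInfo = 2;
    if (!HeapSetInformation(heap, HeapCompatibilityInformation,
                            &heapInfo, sizeof(heapInfo)))
    {
        std::printf("LFH not enabled (error %lu)\n", GetLastError());
        return 1;
    }

    std::printf("LFH enabled on the process heap\n");
    return 0;
}
```

Note that on XP this call fails under a debug heap, which leads straight into the first quirk below.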

Most developers I know write and test code in some kind of IDE (typically Visual Studio), with a key combination set up to compile, link, and then run their shiny new executable. And they typically first run their programs in a Debug configuration so they can debug any issues they hit. But on Windows XP, Microsoft's LFH policy won't run in any Debug configuration unless you set an environment variable (_NO_DEBUG_HEAP = 1) that turns off the debug memory heaps. That seems counter-productive for testing and debugging memory-based issues! And if that environment variable isn't cleaned out when you're done, it's easily forgotten and will prevent debug heaps from being used in any other project you run.
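One way to limit the blast radius of that variable is to set it only for a single console session (or in the project's Debugging environment settings) rather than system-wide. A sketch, assuming a cmd prompt and a solution file name of your own:

```shell
:: Disable the debug heap only for processes launched from this console,
:: then start Visual Studio from here so the setting applies to its debuggees.
set _NO_DEBUG_HEAP=1
devenv MySolution.sln

:: When finished, clear the variable so other projects keep their debug heaps.
set _NO_DEBUG_HEAP=
```

Closing the console achieves the same cleanup, since the variable never touches the persistent system environment.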

A handle to the default heap can be retrieved with GetProcessHeap, or a new heap can be created with HeapCreate. Creating a heap lets you specify up to three flags (one of which, HEAP_NO_SERIALIZE, will not work with LFH policies), an initial size, and a maximum size, but there is no option to create it with an LFH policy, so the policy has to be applied after the heap is created. In the blog series, I created a fixed-size block of memory for allocations, which helped me simulate the memory constraints of a limited-memory device like a PSP. But the second quirk is that you can only switch a heap to an LFH policy if you don't set a maximum size, because the LFH policy grows the number of pages allocated to it whenever it runs out of memory. So simulating a fixed memory size with an LFH policy is not an option.
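Putting both quirks together, a private LFH heap has to be created growable and then converted. A minimal sketch (again with only token error handling):

```cpp
// Sketch: a growable private heap converted to the LFH policy.
// Passing a non-zero maximum size, or the HEAP_NO_SERIALIZE flag,
// makes the conversion fail.
#define _WIN32_WINNT 0x0501
#include <windows.h>
#include <cstdio>

int main()
{
    // flags = 0, initial size = 1 MB, maximum size = 0 (growable).
    HANDLE heap = HeapCreate(0, 1024 * 1024, 0);
    if (!heap)
        return 1;

    ULONG heapInfo = 2;   // 2 = low-fragmentation heap
    BOOL ok = HeapSetInformation(heap, HeapCompatibilityInformation,
                                 &heapInfo, sizeof(heapInfo));
    std::printf("LFH conversion %s\n", ok ? "succeeded" : "failed");

    void* block = HeapAlloc(heap, 0, 4096);
    HeapFree(heap, 0, block);
    HeapDestroy(heap);
    return 0;
}
```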

As a policy, LFH gets applied to an allocation request under certain conditions, and it utilizes the process's existing heap. One of those conditions is that each allocation must be less than 512KB, so any test values of 512KB or greater aren't using the LFH policy; they are using the same methods previously compared in my blog series.

When test one was run, the pattern I chose as a plausible real-world example (64KB allocated 1000 times) went from 66.3 milliseconds without the LFH policy to 8.9 milliseconds with it. The heap system I presented in the blog series still operates faster, at 0.7 milliseconds.
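For reference, this is the general shape of a timing loop for such a test; it is a hypothetical reconstruction, not the actual harness from the series, timed with QueryPerformanceCounter:

```cpp
// Sketch: time `count` allocations of `size` bytes against a heap.
#define _WIN32_WINNT 0x0501
#include <windows.h>
#include <cstdio>

double TimeAllocationsMs(HANDLE heap, SIZE_T size, int count)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);

    void** blocks = static_cast<void**>(
        HeapAlloc(heap, 0, count * sizeof(void*)));

    QueryPerformanceCounter(&start);
    for (int i = 0; i < count; ++i)
        blocks[i] = HeapAlloc(heap, 0, size);   // the timed operation
    QueryPerformanceCounter(&end);

    // Cleanup is outside the timed region.
    for (int i = 0; i < count; ++i)
        HeapFree(heap, 0, blocks[i]);
    HeapFree(heap, 0, blocks);

    return 1000.0 * (end.QuadPart - start.QuadPart) / freq.QuadPart;
}

int main()
{
    double ms = TimeAllocationsMs(GetProcessHeap(), 64 * 1024, 1000);
    std::printf("1000 x 64KB allocations: %.1f ms\n", ms);
    return 0;
}
```

Running this before and after the HeapSetInformation call is one way to see the LFH difference on the same machine.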

When test two was run, exhausting 32MB of memory using only 4KB allocations, it also showed an improvement. A single exhaustion went from 101.5 milliseconds without the LFH policy to 64.0 milliseconds with it; 10000 exhaustions went from 15 minutes 52 seconds 593.3 milliseconds to 10 minutes 34 seconds 249.2 milliseconds. This still falls significantly short of the heap system I presented, which performed the same tasks in 10.1 milliseconds for one exhaustion and 1 minute 39 seconds 995 milliseconds for 10000. The biggest improvement from the LFH policy was in the time it took to free the memory in test two, which used to range from 613.6 milliseconds up to 1 hour 43 minutes 35 seconds 161.5 milliseconds, and now ranges from 50.8 milliseconds up to 7 minutes 36 seconds 251.1 milliseconds. But again, the heap system I presented still wins on execution time, at 6.1 milliseconds for one exhaustion and 1 minute 1 second 891.4 milliseconds for 10000 exhaustions.

While the heap system I presented is faster, its two main drawbacks at the moment are that it is not thread safe and that it does nothing to prevent memory fragmentation, two advantages the LFH policy does provide. The main advantages of the heap system I presented are that it is cross-platform capable, it runs very fast, and the code is available for modification. I'm grateful to have learned something new about memory heaps on Windows, and I'm pleased to see the LFH policy improving the standard allocator on Windows XP, but that isn't enough to convince me to use it, since I aim to target other platforms as well.

References:

  1. Low-Fragmentation Heap
  2. HeapCreate
  3. HeapSetInformation