The Longstanding Memory Fragmentation Problem Has Been Covered Many Times in These Pages

Published by: Krishna Lal Baishnab on Jul 29, 2010
Copyright: Attribution Non-commercial







The longstanding memory fragmentation problem has been covered many times in these pages. In short: as the system runs, pages tend to be scattered between users, making it hard to find groups of physically-contiguous pages when they are needed. Much work has gone into avoiding the need for higher-order (multi-page) memory allocations whenever possible, with the result that most kernel functionality is not hurt by page fragmentation. But there are still situations where higher-order allocations are needed; code which needs such allocations can fail on a fragmented system.

It's worth noting that, in one way, this problem is actually getting worse. Contemporary processors are not limited to 4K pages; they can work with much larger pages ("huge pages") in portions of a process's address space. There can be real performance advantages to using huge pages, mostly as a result of reduced pressure on the processor's translation lookaside buffer. But the use of huge pages requires that the system be able to find physically-contiguous areas of memory which are not only big enough, but which are properly aligned as well. Finding that kind of space can be quite challenging on systems which have been running for any period of time.

Over the years, the kernel developers have made various attempts to mitigate this problem; techniques like ZONE_MOVABLE and lumpy reclaim have been the result. There is still more that can be done, though, especially in the area of fixing fragmentation to recover larger chunks of memory. After taking a break from this area, Mel Gorman has recently returned with a new patch set implementing memory compaction. Here we'll take a quick look at how this patch works.

Imagine a very small memory zone which looks like this:

[diagram: a small zone of interleaved free (white) and allocated (red) pages]

Here, the white pages are free, while those in red are allocated to some use.
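The effect of this kind of fragmentation can be illustrated with a toy model. This is a Python sketch, not kernel code; the zone layout and the find_free_block() helper are invented for illustration, but the alignment rule mirrors how the buddy allocator hands out blocks:

```python
# Toy model of a fragmented zone: True = allocated, False = free. A buddy
# allocator hands out blocks of 2**order pages aligned on 2**order-page
# boundaries, so a free pair straddling an alignment boundary cannot satisfy
# an order-1 allocation. The layout below is invented for illustration.
zone = [True, False, True, False, False, True, False, True,
        True, False, True, False, False, True, False, True]

def find_free_block(zone, order):
    """Find a naturally-aligned run of 2**order free pages; return its
    starting index, or None if the zone is too fragmented."""
    size = 1 << order
    for start in range(0, len(zone), size):
        if not any(zone[start:start + size]):
            return start
    return None

print(find_free_block(zone, 0))  # a single free page exists (index 1)
print(find_free_block(zone, 1))  # no aligned free pair -> None
```

Even though this zone is half free, nothing larger than a single page can be allocated from it; that is the situation compaction sets out to fix.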
As can be seen, the zone is quite fragmented, with no contiguous blocks of larger than two pages available; any attempt to allocate, for example, a four-page block from this zone will fail. Indeed, even two-page allocations will fail, since none of the free pairs of pages are properly aligned.

It's time to call in the compaction code. This code runs as two separate algorithms; the first of them starts at the bottom of the zone and builds a list of allocated pages which could be moved:

[diagram: the migration scanner working up from the bottom of the zone]
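This first scanner can be sketched as follows. Again, this is a toy Python model rather than the kernel implementation; the three page states and the isolate_migratable() helper are invented, where real pages carry migratetype and mapping information instead:

```python
# Toy sketch of the migration scanner: walk up from the bottom of the zone
# and collect the allocated pages that are movable. The page states below
# are invented for illustration; they stand in for the migratetype and
# mapping checks the real code performs.
FREE, MOVABLE, PINNED = "free", "movable", "pinned"

def isolate_migratable(zone, stop):
    """Return indices of movable allocated pages in zone[:stop], scanning
    upward from the bottom of the zone."""
    return [i for i, page in enumerate(zone[:stop]) if page == MOVABLE]

zone = [MOVABLE, FREE, PINNED, FREE, MOVABLE, MOVABLE, FREE, MOVABLE]
print(isolate_migratable(zone, len(zone)))  # -> [0, 4, 5, 7]
```

Note that the pinned page at index 2 is simply skipped; it stays where it is, which is why one non-movable page can spoil an otherwise reclaimable region.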
Meanwhile, at the top of the zone, the other half of the algorithm is creating a list of free pages which could be used as the target of page migration:

[diagram: the free-page scanner working down from the top of the zone]

Eventually the two algorithms will meet somewhere toward the middle of the zone. At that point, it's mostly just a matter of invoking the page migration code (which is not just for NUMA systems anymore) to shift the used pages to the free space at the top of the zone, yielding a pretty picture like this:

[diagram: the compacted zone, with the allocated pages gathered at the top]

We now have a nice, eight-page, contiguous span of free space which can be used to satisfy higher-order allocations if need be.

Of course, the picture given here has been simplified considerably from what happens on a real system. To begin with, the memory zones will be much larger; that means there's more work to do, but the resulting free areas may be much larger as well.

But all this only works if the pages in question can actually be moved. Not all pages can be moved at will; only those which are addressed through a layer of indirection and which are not otherwise pinned down are movable. So most user-space pages - which are accessed through user virtual addresses - can be moved; all that is needed is to tweak the relevant page table entries accordingly. Most memory used by the kernel directly cannot be moved - though some of it is reclaimable, meaning that it can be freed entirely on demand. It only takes one non-movable page to ruin a contiguous segment of memory. The good news here is that the kernel already takes care to separate movable and non-movable pages, so, in reality, non-movable pages should be a smaller problem than one might think.

The running of the compaction algorithm can be triggered in either of two ways. One is to write a node number to […], causing compaction to happen on the indicated NUMA node. The other is for the system to fail in an attempt to allocate a higher-order page; in this case, compaction will run as a preferable alternative to freeing pages through direct reclaim. In the absence of an explicit trigger, the compaction […]
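The meeting of the two scanners can be simulated end to end with another toy model. Everything here is invented for illustration - all pages are assumed movable, and the string-based zone stands in for real page structures - but the two-pointer structure is the essence of the algorithm described above:

```python
# Toy simulation of the two compaction scanners: a migration scanner walks
# up from the bottom of the zone looking for used pages, while a free-page
# scanner walks down from the top looking for free pages; used pages are
# "migrated" upward until the scanners meet. All pages are assumed movable;
# this is an illustrative sketch, not kernel code.
FREE, USED = ".", "U"

def compact(zone):
    """Return the zone with used pages packed at the top, leaving one
    contiguous span of free pages at the bottom."""
    zone = list(zone)
    migrate, free = 0, len(zone) - 1    # bottom-up / top-down scanners
    while migrate < free:
        if zone[migrate] == FREE:       # nothing to move here
            migrate += 1
        elif zone[free] == USED:        # no room at the top here
            free -= 1
        else:                           # migrate one used page upward
            zone[migrate], zone[free] = zone[free], zone[migrate]
    return "".join(zone)

print(compact("U.U.U..U.UU..U.U"))  # -> "........UUUUUUUU"
```

Starting from a zone with eight scattered free pages, the simulation ends with a single eight-page contiguous free span, just as in the pictures above.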

