[Geek] Linux Memory Management

Oct 10, 2012 13:09

This has been bugging me for a while. Now that I'm coming to the end of my employment at my current place of work, I've finally had some time to investigate this area a little further.

As we all know, Linux makes use of all available memory, for caching if nothing else.

How do you work out when it's 'full'?

This turns out to have quite a complicated answer. All the following variables refer to fields found in /proc/meminfo. The simple answer is that, kernel bugs aside, the system is 'full' when both MemFree and SwapFree are at or close to zero.
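For example, a quick way to keep an eye on those two fields (nothing fancy, just grep):

    # show just the MemFree and SwapFree lines from /proc/meminfo
    grep -E '^(MemFree|SwapFree):' /proc/meminfo

    # or refresh them every second
    watch -n1 "grep -E '^(MemFree|SwapFree):' /proc/meminfo"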

Some swapping will occur naturally, but 'thrashing' is what you want to avoid: when more data wants to be accessed than can fit into RAM at one time, the system has to page something out, page in the thing to be run, run it, then page something else out to make room for the next thing on the list. This manifests as consistently non-zero numbers in both the si and so columns of vmstat.
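One way to spot this from the command line - a rough sketch, assuming the default vmstat layout where si and so are the 7th and 8th columns:

    # sample once a second and flag any interval with swap traffic;
    # NR > 3 skips the two header lines plus the first data line,
    # which is an average since boot rather than a live sample
    vmstat 1 | awk 'NR > 3 && ($7 > 0 || $8 > 0) { print "swapping:", $0 }'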

However, when there is little free memory, Cached still appears to be large. Why doesn't this shrink accordingly? Well, this is controlled by a number of factors:
  • /proc/sys/vm/swappiness - a number from 0 to 100 that determines how likely the kernel is to swap out infrequently accessed pages - although this will have little effect if there isn't enough memory (RAM plus swap space) to cope.
  • It turns out that you can flush the cache with sync; echo 3 > /proc/sys/vm/drop_caches (there are other values, but 3 clears everything, and the sync flushes dirty data waiting to be written). It seems the lower limit on the size of Cached is Mapped + Dirty + the contents of /dev/shm - see the sketch after this list.
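A rough sketch pulling both of these together: check the current swappiness, drop the caches (needs root), then compare Cached against that apparent floor. I'm using the Shmem field of /proc/meminfo as a stand-in for the contents of /dev/shm, which is an approximation on my part - Shmem also counts other tmpfs mounts and shared memory.

    # current swappiness setting
    cat /proc/sys/vm/swappiness

    # flush dirty data to disk, then drop pagecache, dentries and inodes (as root)
    sync; echo 3 > /proc/sys/vm/drop_caches

    # compare Cached against Mapped + Dirty + Shmem (all values in kB)
    awk '/^Cached:/                { cached = $2 }
         /^(Mapped|Dirty|Shmem):/ { floor += $2 }
         END { printf "Cached: %d kB, apparent floor: %d kB\n", cached, floor }' /proc/meminfo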
With this information it should be possible to predict the hard limit of a system - although the OOM killer may well cut in before this threshold and start killing off processes.
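If the OOM killer has already struck, it leaves a trail in the kernel log, so a quick check (assuming the messages haven't rotated out of the ring buffer) is:

    # look for evidence of the OOM killer having fired
    dmesg | grep -iE 'oom|killed process'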