Revision d9a047aeffcef5755952d18f2901d8777d84019d authored by Doug Ledford on 09 July 2015, 14:21:08 UTC, committed by Doug Ledford on 14 July 2015, 17:20:15 UTC
There is little chance our memory allocation will fail, so we can combine initializing the work structs with allocating them instead of looping through all of them once to allocate and again to initialize. Then, when we need to find out whether our device is up or in the process of going down, batch up all of our work structs, take the spin_lock once and only once, and process the whole batch under that single spin_lock invocation, instead of incurring the locked memory cycles of taking and releasing the spin_lock over and over again.

Signed-off-by: Doug Ledford <dledford@redhat.com>
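A minimal sketch of the pattern the commit message describes, i.e. allocating and initializing in one pass and then queuing the whole batch under a single spin_lock. All names here (my_dev, my_work, my_work_handler, queue_batch, NUM_WORKS) are hypothetical illustrations, not the driver's actual identifiers:

/*
 * Sketch only: allocate-and-initialize in one loop, then take the
 * spin_lock once for the whole batch.
 */
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

#define NUM_WORKS 16

struct my_dev {
	spinlock_t lock;
	bool going_down;		/* set once teardown has begun */
	struct workqueue_struct *wq;
};

struct my_work {
	struct work_struct work;
	struct my_dev *dev;
};

static void my_work_handler(struct work_struct *work)
{
	struct my_work *mw = container_of(work, struct my_work, work);

	/* ... per-item processing on mw->dev ... */
	kfree(mw);
}

static int queue_batch(struct my_dev *dev)
{
	struct my_work *works[NUM_WORKS];
	unsigned long flags;
	int i, ret = -ENOMEM;

	/* One pass: allocate and initialize together instead of two loops. */
	for (i = 0; i < NUM_WORKS; i++) {
		works[i] = kzalloc(sizeof(*works[i]), GFP_KERNEL);
		if (!works[i])
			goto err;
		INIT_WORK(&works[i]->work, my_work_handler);
		works[i]->dev = dev;
	}

	/*
	 * Take the spin_lock once for the whole batch: a single check of
	 * the device state covers every work struct, avoiding the locked
	 * memory cycles of a take/release pair per item.
	 */
	spin_lock_irqsave(&dev->lock, flags);
	if (dev->going_down) {
		spin_unlock_irqrestore(&dev->lock, flags);
		ret = -ENODEV;
		goto err;
	}
	for (i = 0; i < NUM_WORKS; i++)
		queue_work(dev->wq, &works[i]->work);
	spin_unlock_irqrestore(&dev->lock, flags);
	return 0;

err:
	while (--i >= 0)
		kfree(works[i]);
	return ret;
}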
numastat.txt
NUMA policy hit/miss statistics
/sys/devices/system/node/node*/numastat
All units are pages. Hugepages have separate counters.
numa_hit        A process wanted to allocate memory from this node,
                and succeeded.
numa_miss       A process wanted to allocate memory from another node,
                but ended up with memory from this node.
numa_foreign    A process wanted to allocate on this node,
                but ended up with memory from another one.
local_node      A process ran on this node and got memory from it.
other_node      A process ran on this node and got memory from another node.
interleave_hit  Interleaving wanted to allocate from this node
                and succeeded.
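Each node's counters can be read directly from the sysfs file named above. As a minimal sketch (assuming the usual one-counter-per-line "name value" layout of that file), the following C program dumps node 0's counters; it is an illustration, not part of numactl:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	FILE *f = fopen("/sys/devices/system/node/node0/numastat", "r");
	char name[64];
	unsigned long long pages;

	if (!f) {
		perror("numastat");
		return EXIT_FAILURE;
	}
	/* Each line reads e.g. "numa_hit 1234567". */
	while (fscanf(f, "%63s %llu", name, &pages) == 2)
		printf("%-15s %llu pages\n", name, pages);
	fclose(f);
	return 0;
}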
For easier reading, you can use the numastat utility from the numactl package
(http://oss.sgi.com/projects/libnuma/). Note that it currently only works
well on machines with a small number of CPUs.