numactl is a utility for controlling the NUMA policy of processes or shared memory. NUMA (Non-Uniform Memory Access) is a memory architecture in which a given CPU core has different access speeds to different regions of memory. Typically, each processor has a region of memory attached directly to it which it can access quickly (local memory), while access to the rest of the memory is slower (non-local memory).

This is in contrast to symmetric multiprocessing (SMP), in which each processor is connected to the whole of main memory at uniform speed. For some programs, understanding the NUMA memory layout and accessing it correctly can be crucial for good performance, which usually depends on keeping the data a process needs in local memory as often as possible.

numactl gives you the ability to control:

  1. NUMA scheduling policy
    1. for example, which cores these tasks should run on
  2. Memory placement policy
    1. where to allocate data
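Both kinds of policy can be combined on one command line. As a minimal sketch (assuming numactl is installed and the system has a node 0), the following runs a command with both its CPU scheduling and its memory allocation confined to NUMA node 0:

```shell
# Skip gracefully on systems where numactl is not installed.
command -v numactl >/dev/null 2>&1 || { echo "numactl not installed"; exit 0; }

# Pin both scheduling (--cpunodebind) and memory placement (--membind)
# of the child process to NUMA node 0.
numactl --cpunodebind=0 --membind=0 ls /tmp
```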

Here is how to install numactl, followed by some numactl examples:

# yum install numactl

 Package               Arch               Version                 Repository             Size
numactl               x86_64             2.0.9-5.el7_1           sl                     65 k
Transaction Summary
Install  1 Package

Total download size: 65 k
Installed size: 141 k
Is this ok [y/d/N]: y

  numactl.x86_64 0:2.0.9-5.el7_1  

numactl Examples:

Show inventory of available nodes on the system

# numactl --hardware
# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5
node 0 size: 16383 MB
node 0 free: 9762 MB
node 1 cpus: 6 7 8 9 10 11
node 1 size: 16384 MB
node 1 free: 4575 MB
node distances:
node   0   1
  0:  10  13
  1:  13  10

In the above example, the system has two NUMA nodes, with 6 CPUs attached to each node. The node distances table shows the relative cost of memory access: accessing memory on the local node has cost 10, while accessing memory on the other node has cost 13.
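The same inventory that numactl --hardware prints is exposed by the kernel under /sys, which is handy for scripting and works on any Linux system, even single-node ones:

```shell
# One nodeN directory per NUMA node
ls -d /sys/devices/system/node/node*

# CPUs attached to node 0, and node 0's row of the distance table
cat /sys/devices/system/node/node0/cpulist
cat /sys/devices/system/node/node0/distance
```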

Show NUMA policy settings of the current process

# numactl --show
policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11
cpubind: 0 1
nodebind: 0 1
membind: 0 1
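Since numactl applies policy to the command it launches, you can verify a binding by running numactl --show itself under that binding (a sketch, assuming numactl is installed and node 0 exists):

```shell
command -v numactl >/dev/null 2>&1 || { echo "numactl not installed"; exit 0; }

# The inner --show reports the policy of the bound child process,
# so the membind/nodebind lines should now list only node 0.
numactl --membind=0 numactl --show
```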

Show how much memory is free on each node

# numactl -H | grep free
node 0 free: 9769 MB
node 1 free: 4566 MB
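These numbers come from the kernel's per-node meminfo files, so per-node free memory can also be read without numactl:

```shell
# Each node exports its own meminfo; the MemFree lines give free memory per node.
grep MemFree /sys/devices/system/node/node*/meminfo
```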

Run an application on a particular NUMA node

You can use numactl to bind a program to a NUMA node, so that only the CPUs of that node are used to execute it.

numactl --cpunodebind=<node> ls
numactl -N <node> ls
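You can check the effect of --cpunodebind by inspecting the CPU affinity mask of the bound process. A sketch, assuming numactl is installed and node 0 exists:

```shell
command -v numactl >/dev/null 2>&1 || { echo "numactl not installed"; exit 0; }

# Cpus_allowed_list should now contain only the CPUs of node 0
# (e.g. 0-5 on the two-node machine shown above).
numactl --cpunodebind=0 sh -c 'grep Cpus_allowed_list /proc/self/status'
```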

Run an application on particular CPUs

You can also use numactl to control which individual CPUs are used to execute a program.

numactl --physcpubind=<cpu> ls
numactl -C <cpu> ls

This accepts CPU numbers as shown in the processor fields of /proc/cpuinfo. Note that if hyper-threading is enabled, each hardware thread shows up as its own CPU number, so these are logical CPUs, not physical cores.

You can also specify a range of CPUs for an application. The following command runs myapp on CPUs 0-4 and 8-12 of the current cpuset (the leading + makes the CPU numbers relative to the cpuset):

numactl --physcpubind=+0-4,8-12 myapp arguments
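As with node binding, --physcpubind can be verified by reading the bound process's affinity list, which should match exactly the CPUs you asked for (assuming numactl is installed; CPU 0 always exists, so it is used here):

```shell
command -v numactl >/dev/null 2>&1 || { echo "numactl not installed"; exit 0; }

# Bind to CPU 0 and read back the affinity mask of the child process.
numactl --physcpubind=0 sh -c 'grep Cpus_allowed_list /proc/self/status'
```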

Restrict an application to local memory

You can use numactl to control memory placement quite precisely. Simply requiring that the program use only local memory can result in much improved performance; this is done with the --localalloc flag (short form: -l).

numactl --physcpubind=0 --localalloc ls

With this policy the program allocates on the node it is currently running on. Note that --localalloc is a preference: if the local node runs out of memory, allocations fall back to other nodes. To fail instead of falling back, use --membind=<node>, which strictly limits allocation to the given node(s).
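The memory policy of each mapping in a process is visible in /proc/<pid>/numa_maps. As a rough check (assuming numactl is installed and the kernel exposes numa_maps; the kernel typically reports local allocation as "default"):

```shell
command -v numactl >/dev/null 2>&1 || { echo "numactl not installed"; exit 0; }
[ -r /proc/self/numa_maps ] || { echo "numa_maps not available"; exit 0; }

# Each numa_maps line starts with the mapping address followed by its
# memory policy; --localalloc is typically reported as "default".
numactl --localalloc sh -c 'head -n 3 /proc/self/numa_maps'
```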

Set a preferred node for an application's memory

Preferably allocate memory on the given node, but fall back to other nodes if memory cannot be allocated there. This option takes only a single node number.

numactl --preferred=<node> ls
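Again, running numactl --show under the binding confirms the setting (a sketch, assuming numactl is installed and node 0 exists):

```shell
command -v numactl >/dev/null 2>&1 || { echo "numactl not installed"; exit 0; }

# The "preferred node" line of the inner --show should change
# from "current" to the node passed to --preferred.
numactl --preferred=0 numactl --show
```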

Interleave an application's memory

Run a big database with its memory interleaved across all NUMA nodes.

numactl --interleave=all bigdatabase arguments
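An interleave policy is visible in /proc/self/numa_maps, where affected mappings are tagged interleave:<nodelist>. A rough check, assuming numactl is installed and the kernel exposes numa_maps:

```shell
command -v numactl >/dev/null 2>&1 || { echo "numactl not installed"; exit 0; }
[ -r /proc/self/numa_maps ] || { echo "numa_maps not available"; exit 0; }

# Mappings under an interleave policy are labelled "interleave:<nodelist>".
numactl --interleave=all sh -c 'head -n 3 /proc/self/numa_maps'
```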

Run a network server on the node of its network device, with its memory on the same node

The command below runs network-server on the node of network device eth0, with its memory also allocated on the same node.

numactl --cpunodebind=netdev:eth0 --membind=netdev:eth0 network-server
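The netdev: binding relies on the kernel knowing which NUMA node a NIC is attached to, which you can look up directly in sysfs (a value of -1 means the device is not tied to any particular node):

```shell
# Print the NUMA node of every network device that reports one.
for f in /sys/class/net/*/device/numa_node; do
  [ -e "$f" ] || continue
  echo "$f: $(cat "$f")"
done
```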


