This document defines Ganeti's support for CPU pinning (aka CPU
affinity).

CPU pinning enables mapping and unmapping entire virtual machines, or
specific virtual CPUs (vCPUs), to a physical CPU or a range of CPUs.

At this stage, pinning will be implemented for Xen and KVM.

Suggested command line parameters for controlling CPU pinning are as
follows::

  gnt-instance modify -H cpu_mask=<cpu-pinning-info> <instance>

cpu-pinning-info can be any of the following:

* One vCPU mapping, which can be the word "all" or a combination of
  CPU numbers and ranges separated by commas. In this case, all vCPUs
  will be mapped to the indicated list.
* A list of vCPU mappings, separated by a colon ':'. In this case
  each vCPU is mapped to an entry in the list, and the size of the
  list must match the number of vCPUs defined for the instance. This
  is enforced when setting CPU pinning or when setting the number of
  vCPUs using ``-B vcpus=#``.

The mapping list is matched to consecutive virtual CPUs, so the first
entry is the CPU pinning information for vCPU 0, the second entry for
vCPU 1, and so on.

The default setting for new instances is "all", which maps the entire
instance to all CPUs, thus effectively turning off CPU pinning.

Here are some usage examples::

  # Map vCPU 0 to physical CPU 1 and vCPU 1 to CPU 3 (assuming 2 vCPUs)
  gnt-instance modify -H cpu_mask=1:3 my-inst

  # Pin vCPU 0 to CPUs 1 or 2, and vCPU 1 to any CPU
  gnt-instance modify -H cpu_mask=1-2:all my-inst

  # Pin vCPU 0 to any CPU, vCPU 1 to CPUs 1, 3, 4 or 5, and vCPU 2 to
  # CPU 0
  gnt-instance modify -H cpu_mask=all:1\\,3-5:0 my-inst

  # Pin entire VM to CPU 0
  gnt-instance modify -H cpu_mask=0 my-inst

  # Turn off CPU pinning (default setting)
  gnt-instance modify -H cpu_mask=all my-inst

Assuming an instance has 3 vCPUs, the following commands will fail::

  # not enough mappings (2 entries for 3 vCPUs)
  gnt-instance modify -H cpu_mask=0:1 my-inst

  # too many mappings (4 entries for 3 vCPUs)
  gnt-instance modify -H cpu_mask=2:1:1:all my-inst

CPU pinning information is validated by making sure it matches the
number of vCPUs. This validation happens when changing either the
cpu_mask or vcpus parameters. Changing either parameter in a way that
conflicts with the other will fail with a proper error message. To
make such a change, both parameters should be modified at the same
time, for example:
``gnt-instance modify -B vcpus=4 -H cpu_mask=1:1:2-3:4\\,6 my-inst``

Besides validating the CPU configuration, i.e. that the number of
vCPUs matches the requested CPU pinning, Ganeti will also verify that
the number of physical CPUs is enough to support the required
configuration. For example, trying to run a configuration of
vcpus=2,cpu_mask=0:4 on a node with 4 cores will fail (note: CPU
numbers are 0-based).

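The feasibility check described above can be sketched as follows;
``max_cpu_in_mask`` and ``mask_fits_node`` are hypothetical helper
names used for illustration, not actual Ganeti functions:

```python
def max_cpu_in_mask(cpu_mask):
    """Return the highest physical CPU number referenced by a cpu_mask
    string, or -1 if the mask only uses "all" (no explicit pinning)."""
    highest = -1
    for vcpu_entry in cpu_mask.split(":"):
        if vcpu_entry == "all":
            continue
        for part in vcpu_entry.split(","):
            # A part is either a single CPU ("4") or a range ("3-5");
            # either way the last dash-separated field is the upper end
            highest = max(highest, int(part.split("-")[-1]))
    return highest

def mask_fits_node(cpu_mask, node_cpu_count):
    """CPU numbers are 0-based, so referencing CPU N requires a node
    with at least N+1 physical CPUs."""
    return max_cpu_in_mask(cpu_mask) < node_cpu_count
```

With these helpers, ``mask_fits_node("0:4", 4)`` is false, matching
the example above, while the same mask fits a node with 5 or more
cores.
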
This validation should repeat every time an instance is started or
live-migrated. See more details under Migration below.

Cluster verification should also test the compatibility of the other
nodes in the cluster with the required configuration, and alert if a
minimum requirement is not met.

CPU pinning configuration can be transferred from node to node, unless
the number of physical CPUs is smaller than what the configuration
calls for. Unless this is the case, all transfers and migrations are
expected to succeed.

In case the number of physical CPUs is smaller than the numbers
indicated by the CPU pinning information, instance failover will fail.

In case of emergency, to force failover to ignore mismatching CPU
information, the following switch can be used:
``gnt-instance failover --fix-cpu-mismatch my-inst``.
This command will try to fail the instance over with the current CPU
mask, but if that fails, it will change the mask to be "all".

Migration
---------

In the case of live migration, and in addition to the failover
considerations above, CPU pinning must be remapped after migration.
This can be done in real time for both Xen and KVM instances, and only
depends on the number of physical CPUs being sufficient to support the
migrated instance.

Pinning information will be kept as a list of integers per vCPU. To
mark a mapping of any CPU, we will use (-1). A single entry, no matter
what the number of vCPUs is, will always mean that all vCPUs have the
same mapping.

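A minimal sketch of turning the user-facing string into this internal
representation; ``parse_cpu_mask`` is a hypothetical name, not
Ganeti's actual parser:

```python
def parse_cpu_mask(cpu_mask):
    """Parse a cpu_mask string into one list of physical CPU numbers
    per entry, using [-1] to mean "any CPU" (the "all" keyword)."""
    mapping = []
    for vcpu_entry in cpu_mask.split(":"):
        if vcpu_entry == "all":
            mapping.append([-1])
            continue
        cpus = []
        for part in vcpu_entry.split(","):
            if "-" in part:
                # Expand a range like "3-5" into 3, 4, 5
                lo, hi = part.split("-")
                cpus.extend(range(int(lo), int(hi) + 1))
            else:
                cpus.append(int(part))
        mapping.append(cpus)
    return mapping
```

For example, ``parse_cpu_mask("all:1,3-5:0")`` yields
``[[-1], [1, 3, 4, 5], [0]]``.
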
The pinning information is kept in each instance's hypervisor
parameters section of the configuration file, as the original string.

There are two ways to control pinning in Xen: via the command line or
through the configuration file.

The commands to make direct pinning changes are the following::

  # To pin a vCPU to a specific CPU
  xm vcpu-pin <domain> <vcpu> <cpu>

  # To unpin a vCPU (allow it to run on all CPUs)
  xm vcpu-pin <domain> <vcpu> all

  # To get the current pinning status
  xm vcpu-list <domain>

Since controlling Xen in Ganeti is currently done via the
configuration file, it is straightforward to use the same method for
CPU pinning. There are two different parameters that control Xen's
CPU pinning, and they should be used together:

vcpus
  controls the number of vCPUs

cpus
  maps vCPUs to physical CPUs

When no pinning is required (the pinning information is "all"), the
"cpus" entry is removed from the configuration file.

For all other cases, the configuration is "translated" for Xen, which
expects either ``cpus = "a"`` or ``cpus = [ "a", "b", "c", ... ]``,
where each of a, b and c is a physical CPU number, a CPU range, or a
combination of the two. If a list is used, the number of entries must
match the number of vCPUs, and the entries are mapped to vCPUs in
order.

For example, CPU pinning information of ``1:2,4-7:0-1`` is translated
to this entry in Xen's configuration:
``cpus = [ "1", "2,4-7", "0-1" ]``.

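This translation can be sketched as follows (the function name is
hypothetical, chosen for illustration):

```python
def to_xen_cpus_entry(cpu_mask):
    """Translate a Ganeti cpu_mask string into Xen's "cpus"
    configuration entry, or None when pinning is off ("all")."""
    if cpu_mask == "all":
        return None  # the "cpus" entry is omitted entirely
    entries = cpu_mask.split(":")
    if len(entries) == 1:
        # One mapping applies to every vCPU: use the string form
        return 'cpus = "%s"' % entries[0]
    # One entry per vCPU: use the list form, in vCPU order
    return "cpus = [ %s ]" % ", ".join('"%s"' % e for e in entries)
```

Note that this sketch passes the per-vCPU entries through verbatim,
since Xen accepts the same number/range/comma syntax.
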
Controlling pinning in KVM is a little more complicated, as there is
no configuration to control pinning before instances are started.

The way to change or assign CPU pinning under KVM is to use
``taskset`` or its underlying system call ``sched_setaffinity``.
Setting the affinity for the VM process will change CPU pinning for
the entire VM, and setting it for specific vCPU threads will control
specific vCPUs.

The sequence of commands to control pinning is this: start the
instance with the ``-S`` switch, so it halts before starting
execution; get the process ID, or identify the thread IDs of each
vCPU, by sending ``info cpus`` to the monitor; map vCPUs as required
by the cpu-pinning information; and issue a ``cont`` command on the
KVM monitor to allow the instance to run.

For example, a sequence of commands to control CPU affinity under KVM
may be:

* Start KVM: ``/usr/bin/kvm … <kvm-command-line-options> … -S``
* Use socat to connect to the monitor
* Send ``info cpus`` to the monitor to get thread/vCPU information
* Call ``sched_setaffinity`` for each thread with the proper CPU mask
* Send ``cont`` to KVM's monitor

A CPU mask is a hexadecimal bit mask where each bit represents one
physical CPU. See the man page for :manpage:`sched_setaffinity(2)` for
more details.

For example, to run a specific thread-id on CPUs 1 or 3 the mask is
``0xA`` (binary 1010, i.e. bits 1 and 3 set).

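Such a mask can be computed by setting bit N for every allowed CPU N,
e.g.:

```python
def cpus_to_hex_mask(cpus):
    """Build the hexadecimal affinity bit mask for a list of physical
    CPU numbers: bit N set means the task may run on CPU N."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return hex(mask)
```

For CPUs 1 and 3 this yields ``0xa`` (decimal 10), matching the
example above.
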
We will control process and thread affinity using the python affinity
package (http://pypi.python.org/pypi/affinity). This package is a
Python wrapper around the two affinity system calls, and has no other
requirements.

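As an illustration of the underlying call, note that Python 3.3 and
later also exposes it directly as ``os.sched_setaffinity`` (Linux
only). A minimal sketch, independent of the affinity package:

```python
import os

def pin_task(tid, cpus):
    """Pin a process or thread (by Linux PID/TID; 0 means the calling
    thread) to the given set of physical CPUs via sched_setaffinity."""
    os.sched_setaffinity(tid, set(cpus))
```

The same call works for vCPU threads, since on Linux a thread ID is
accepted wherever a process ID is.
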
Alternative Design Options
--------------------------

1. There's an option to ignore the limitations of the underlying
   hypervisor and, instead of requiring explicit pinning information
   for *all* vCPUs, assume a mapping of "all" for any vCPU not
   mentioned. This can lead to inadvertently missing information, but
   either way, since using cpu-pinning options is probably not going
   to be frequent, there's no real advantage.

.. vim: set textwidth=72 :