=======
Hotplug
=======

.. contents:: :depth: 4

This is a design document detailing the implementation of device
hotplugging in Ganeti. The logic used is hypervisor agnostic, but the
initial implementation targets the KVM hypervisor. The implementation
adds ``python-fdsend`` as a new dependency. If it is not installed,
hotplug will not be possible and the user will be notified with a
warning.

Current state and shortcomings
==============================

Currently, Ganeti supports addition/removal/modification of devices
(NICs, disks), but the actual modification takes place only after
rebooting the instance. As a result, an instance cannot change its
network, get a new disk, etc. without a hard reboot.

Until now, in the case of the KVM hypervisor, the code neither names
devices nor places them in specific PCI slots. Devices are appended to
the KVM command line and Ganeti lets KVM decide where to place them.
This means that a device residing in PCI slot 5 may, after a reboot
(e.g. due to the removal of another device), be moved to another PCI
slot and probably get renamed too (due to udev rules, etc.).

In order for migration to succeed, the process on the target node must
be started with exactly the same machine version, CPU architecture and
PCI configuration as the running process. During instance
creation/startup Ganeti creates a KVM runtime file with all the
information necessary to generate the KVM command. This runtime file
is used during instance migration to start a new, identical KVM
process. The current format includes the fixed part of the final KVM
command, a list of NICs, and the hvparams dict. It does not allow easy
manipulation of disks, because they are encapsulated in the fixed KVM
command.

Proposed changes
================

In the case of the KVM hypervisor, QEMU exposes 32 PCI slots to the
instance. Disks and NICs occupy some of these slots. Recent versions
of QEMU have introduced monitor commands that allow addition/removal
of PCI devices. Devices are referenced based on their name or position
on the virtual PCI bus. To be able to use these commands, we need to
be able to assign each device a unique name.

To keep track of where each device is plugged in, we add a ``pci``
slot to the Disk and NIC objects, but we save it only in the runtime
files, since it is hypervisor-specific info. This is added for easy
object manipulation and is guaranteed never to be written back to the
config.

We propose to make use of the QEMU 1.0 monitor commands so that
modifications to devices take effect instantly, without the need for a
hard reboot. The only change exposed to the end-user will be the
addition of a ``--hotplug`` option to the ``gnt-instance modify``
command.

Upon hotplugging, the PCI configuration of an instance changes, and
the runtime files should be updated correspondingly. Currently this is
impossible in the case of disk hotplug, because disks are included in
the command-line entry of the runtime file, contrary to NICs, which
are correctly treated separately. We therefore change the format of
the runtime files: we remove the disks from the fixed KVM command and
create a new entry containing only them. The KVM options concerning
disks are then generated during ``_ExecuteKVMCommand()``, just as is
done for NICs.

Design decisions
================

What should each device's ID be? Currently KVM does not support
arbitrary IDs for devices; only names that start with a letter, are at
most 32 characters long, and contain no special characters other than
'.', '_' and '-' are supported. For debugging purposes, and in order
to be more informative, devices will be named
``<device type>-<part of uuid>-pci-<slot>``.

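As a minimal sketch (the helper name and object attributes are
illustrative, not a final API), such an ID could be generated as
follows::

  def _GenerateDeviceKVMId(dev_type, dev):
    """Return a device id of the form <type>-<part of uuid>-pci-<slot>.

    For example, for a NIC: nic-d6023d48-pci-6.

    """
    # dev.uuid and dev.pci are assumed to be filled in by the time the
    # device is hotplugged; only the first uuid part is used, to stay
    # well below KVM's 32-character limit.
    return "%s-%s-pci-%d" % (dev_type.lower(), dev.uuid.split("-")[0],
                             dev.pci)
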
Who decides where to hotplug each device? Since this is a
hypervisor-specific matter, there is no point in the master node
deciding such a thing. The master node just has to request that noded
hotplug a device. To this end, hypervisor-specific code should parse
the current PCI configuration (i.e. the ``info pci`` QEMU monitor
command), find the first available slot and hotplug the device there.
By having noded decide where to hotplug a device, we ensure that no
error can occur due to duplicate slot assignment (if masterd kept
track of PCI reservations and noded failed to return the PCI slot that
a device was plugged into, then the next hotplug would fail).

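A rough sketch of this logic, assuming a helper that talks to the QEMU
monitor (both helper names are illustrative)::

  import re

  def _GetFreePCISlot(instance_name):
    """Parse 'info pci' output and return the first unoccupied slot."""
    output = _CallMonitorCommand(instance_name, "info pci")
    occupied = set()
    for line in output.splitlines():
      # QEMU prints lines like: "  Bus  0, device   4, function 0:"
      match = re.search(r"Bus\s+\d+, device\s+(\d+), function", line)
      if match:
        occupied.add(int(match.group(1)))
    for slot in range(32):  # QEMU exposes 32 slots on the PCI bus
      if slot not in occupied:
        return slot
    raise RuntimeError("All PCI slots are occupied")
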
Where should we keep track of the devices' PCI slots? As already
mentioned, we must keep track of the devices' PCI slots to
successfully migrate instances. The first option is to save this info
in the config data, which would allow us to place each device in the
same PCI slot after a reboot. This would require making the hypervisor
return the PCI slot chosen for each device and storing this
information in the config data. Additionally, the whole instance
configuration would have to be returned with the PCI slots filled in
after instance start, and each instance would have to keep track of
its current PCI reservations. We decide not to go in this direction,
in order to keep things simple and to avoid adding hypervisor-specific
info to the configuration data (``pci_reservations`` at the instance
level and ``pci`` at the device level). For the aforementioned
reasons, we decide to store this info only in the KVM runtime files.

Where should the devices be placed upon instance startup? QEMU has 4
PCI slots pre-occupied by default, so the hypervisor can use the
remaining ones for disks and NICs. Currently, the PCI configuration is
not preserved after a reboot. Each time an instance starts, KVM
assigns PCI slots to devices based on their ordering in the Ganeti
configuration, i.e. the second disk will be placed after the first,
the third NIC after the second, etc. Since we decided that there is no
need to keep track of the devices' PCI slots, there is no need to
change the current functionality.

How do we deal with existing instances? Hotplug depends on runtime
file manipulation: the runtime file stores the PCI info and every
device the KVM process is currently using. Existing runtime files have
no PCI info in their devices and have the block devices encapsulated
inside the kvm_cmd entry, so hotplugging of existing devices will not
be possible; still, migration and hotplugging of new devices will
succeed. The workaround happens upon loading the KVM runtime: if we
detect the old-style format, we add an empty list for block devices,
and upon saving the KVM runtime we include this empty list as well.
Switching entirely to the new format happens upon instance reboot.

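The sketch below illustrates this upgrade-on-load step, assuming the
runtime file is JSON-serialized (the function name is illustrative)::

  import json

  def _UpgradeSerializedRuntime(serialized):
    """Add an empty block-device list to old-style runtime files."""
    loaded = json.loads(serialized)
    kvm_cmd, serialized_nics, hvparams = loaded[:3]
    if len(loaded) < 4:
      # old-style format: the disks are still encapsulated in kvm_cmd
      serialized_disks = []
    else:
      serialized_disks = loaded[3]
    return kvm_cmd, serialized_nics, hvparams, serialized_disks
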
Configuration changes
---------------------

The ``NIC`` and ``Disk`` objects get one extra slot: ``pci``. It
refers to the PCI slot that the device gets plugged into.

In order to be able to live migrate successfully, the runtime files
should be updated every time a live modification (hotplug) takes
place. To this end we change the format of the runtime files: the KVM
options referring to the instance's disks are no longer recorded as
part of the KVM command line. Disks are treated separately, just as we
treat NICs right now. We insert and remove entries to reflect the
current PCI configuration.

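Schematically, and with the entries abbreviated, the runtime file
changes as follows::

  # old format: disks hidden inside the KVM command line
  [kvm_cmd_with_disks, [nic1, nic2, ...], hvparams]

  # new format: disks get their own entry, mirroring the NICs
  [kvm_cmd_without_disks, [nic1, nic2, ...], hvparams,
   [disk1, disk2, ...]]
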
Backend changes
---------------

Introduce one new RPC call:

- hotplug_device(DEVICE_TYPE, ACTION, device, ...)

where DEVICE_TYPE can be either NIC or Disk, and ACTION either REMOVE
or ADD.

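On the node, the corresponding backend function could dispatch to the
hypervisor-specific code roughly as follows (a sketch; the names and
exact signature are illustrative)::

  def HotplugDevice(instance, action, dev_type, device):
    """Dispatch a hotplug request to the instance's hypervisor."""
    hyper = hypervisor.GetHypervisor(instance.hypervisor)
    if action == "ADD":
      hyper.HotAddDevice(instance, dev_type, device)
    elif action == "REMOVE":
      hyper.HotDelDevice(instance, dev_type, device)
    else:
      raise errors.ProgrammerError("Invalid hotplug action: %s" %
                                   action)
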
Hypervisor changes
------------------

We implement hotplug on top of the KVM hypervisor. We take advantage
of the QEMU 1.0 monitor commands (``device_add``, ``device_del``,
``drive_add``, ``drive_del``, ``netdev_add``, ``netdev_del``). QEMU
refers to devices by their id; we use the device ``uuid`` to name them
properly, as described above. If a device is about to be hotplugged,
we parse the output of ``info pci`` and find the occupied PCI slots.
We choose the first available one, and the whole device object is
appended to the corresponding entry in the runtime file.

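Hot-adding a disk, for instance, could then boil down to two monitor
commands (a condensed sketch, reusing the illustrative helpers from
the previous sections)::

  def _HotAddDisk(self, instance, disk, dev_path):
    """Hot-add a disk via the QEMU monitor."""
    disk.pci = _GetFreePCISlot(instance.name)
    kvm_devid = _GenerateDeviceKVMId("disk", disk)
    # First make the backing file/device known to QEMU ...
    self._CallMonitorCommand(
      instance.name,
      "drive_add dummy file=%s,if=none,id=%s" % (dev_path, kvm_devid))
    # ... then attach it to the chosen slot on the PCI bus.
    self._CallMonitorCommand(
      instance.name,
      "device_add virtio-blk-pci,bus=pci.0,addr=%s,drive=%s,id=%s" %
      (hex(disk.pci), kvm_devid, kvm_devid))
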
Concerning NIC handling, we build on top of the existing logic (first
create a tap with ``_OpenTap()`` and then pass its file descriptor to
the KVM process). To this end we need to pass access rights to the
corresponding file descriptor over the monitor socket (a UNIX domain
socket). The open file is passed as a socket-level control message
(SCM), using the ``fdsend`` python library.

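A minimal sketch of this file-descriptor passing, assuming a helper
that returns a connected monitor socket (the helper name is
illustrative)::

  import fdsend

  def _PassTapFd(self, instance, fd, nic):
    """Pass a tap's file descriptor to KVM over the monitor socket."""
    kvm_devid = _GenerateDeviceKVMId("nic", nic)
    # 'getfd' makes the received fd available inside QEMU under the
    # given name; the fd itself travels as an SCM_RIGHTS message.
    command = "getfd %s\n" % kvm_devid
    monsock = self._GetMonitorSocket(instance.name)
    fdsend.sendfds(monsock, command, fds=[fd])
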
User interface
--------------

A new ``--hotplug`` option to ``gnt-instance modify`` is introduced,
which forces the modifications to happen live.

Enabling hotplug
++++++++++++++++

Hotplug will be optional during gnt-instance modify. For existing
instances, after installing a version that supports hotplugging, we
have the restriction that hotplug will not be supported for existing
devices. The reason is that old runtime files lack:

1. Device PCI configuration info.

2. A separate block device entry.

Hotplug will be supported only for KVM in the first implementation.
For all other hypervisors, the backend will raise an exception in case
hotplug is requested.

NIC Hotplug
+++++++++++

The user can add/modify/remove NICs either with hotplugging or not. If
a NIC is to be added, a tap is created first and configured properly
with the kvm-vif-bridge script. Then the instance gets a new network
interface. Since there is no QEMU monitor command to modify a NIC, we
modify a NIC by temporarily removing the existing one and adding a new
one with the new configuration. When removing a NIC, the corresponding
tap gets removed as well.

::

  gnt-instance modify --net add --hotplug test
  gnt-instance modify --net 1:mac=aa:00:00:55:44:33 --hotplug test
  gnt-instance modify --net 1:remove --hotplug test

Disk Hotplug
++++++++++++

The user can add and remove disks with hotplugging or not. The QEMU
monitor supports resizing of disks, however the initial implementation
will support only disk addition/deletion.

::

  gnt-instance modify --disk add:size=1G --hotplug test
  gnt-instance modify --disk 1:remove --hotplug test

Dealing with chroot and uid pool
--------------------------------

The design so far covers all issues that arise without addressing the
case where the KVM process does not run with root privileges.
Specifically:

- in case of chroot, the kvm process cannot see the newly created
  device

- in case of the uid pool security model, the kvm process is not
  allowed to access the device

For NIC hotplug we address this problem by using the ``getfd`` monitor
command and passing the file descriptor to the KVM process over the
monitor socket using SCM_RIGHTS. For disk hotplug, and in the case of
the uid pool, we can let the hypervisor code temporarily ``chown()``
the device before the actual hotplug. Still, this is insufficient in
the case of chroot; there, we would additionally need to ``mknod()``
the device inside the chroot. Both workarounds can be avoided if we
make use of the ``add-fd`` QEMU monitor command introduced in version
1.3. This command is the equivalent of the NICs' ``getfd`` for disks
and will allow disk hotplug in every case. So, if the QEMU monitor
does not support the ``add-fd`` command, we will not allow disk
hotplug for the chroot and uid security models and will notify the
user with the corresponding warning.

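A sketch of the intended capability check (the helper names and the
exact hvparams consulted are illustrative)::

  def _VerifyDiskHotplug(self, instance, hvparams):
    """Refuse disk hotplug when it cannot work without privileges."""
    restricted = (hvparams["security_model"] != "none" or
                  hvparams["use_chroot"])
    if restricted and not self._MonitorSupports(instance.name,
                                                "add-fd"):
      raise errors.HotplugError("Disk hotplug requires the 'add-fd'"
                                " monitor command under chroot/uid"
                                " pool")
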
.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: