gnt-instance(8) Ganeti | Version @GANETI_VERSION@
=================================================

Name
----

gnt-instance - Ganeti instance administration

Synopsis
--------

**gnt-instance** {command} [arguments...]

DESCRIPTION
-----------

The **gnt-instance** command is used for instance administration in
the Ganeti system.

COMMANDS
--------

Creation/removal/querying
~~~~~~~~~~~~~~~~~~~~~~~~~

ADD
^^^

| **add**
| {-t|--disk-template {diskless \| file \| plain \| drbd}}
| {--disk=*N*: {size=*VAL* \| adopt=*LV*}[,vg=*VG*][,metavg=*VG*][,mode=*ro\|rw*]
|  \| {-s|--os-size} *SIZE*}
| [--no-ip-check] [--no-name-check] [--no-start] [--no-install]
| [--net=*N* [:options...] \| --no-nics]
| [{-B|--backend-parameters} *BEPARAMS*]
| [{-H|--hypervisor-parameters} *HYPERVISOR* [: option=*value*... ]]
| [{-O|--os-parameters} *param*=*value*... ]
| [--file-storage-dir *dir\_path*] [--file-driver {loop \| blktap}]
| {{-n|--node} *node[:secondary-node]* \| {-I|--iallocator} *name*}
| {{-o|--os-type} *os-type*}
| [--submit]
| {*instance*}

Creates a new instance on the specified host. The *instance* argument
must be in DNS, but depending on the bridge/routing setup, need not be
in the same network as the nodes in the cluster.

The ``disk`` option specifies the parameters for the disks of the
instance. The numbering of disks starts at zero, and at least one disk
needs to be passed. For each disk, either the size or the adoption
source needs to be given, and optionally the access mode (read-only or
the default of read-write) and the LVM volume group can also be
specified (via the ``vg`` key). For DRBD devices, a different VG can
be specified for the metadata device using the ``metavg`` key. The
size is interpreted (when no unit is given) in mebibytes. You can also
use one of the suffixes *m*, *g* or *t* to specify the units used;
these suffixes map to mebibytes, gibibytes and tebibytes.

When using the ``adopt`` key in the disk definition, Ganeti will
reuse those volumes (instead of creating new ones) as the
instance's disks. Ganeti will rename these volumes to the standard
format, and (without installing the OS) will use them as-is for the
instance. This allows migrating instances from non-managed mode
(e.g. plain KVM with LVM) to being managed via Ganeti. Note that
this works only for the \`plain' disk template (see below for
template details).
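
As an illustration of the adoption workflow described above, adopting
two pre-existing logical volumes might look like this (the volume,
node and instance names are purely illustrative)::

  # gnt-instance add -t plain --disk 0:adopt=old-root --disk 1:adopt=old-swap \
    -B memory=512 -o debian-etch -n node1.example.com instance1.example.com

Since the adopted volumes already contain data, ``--no-install`` can
be added to skip the OS installation and preserve their contents.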

Alternatively, a single-disk instance can be created via the ``-s``
option which takes a single argument, the size of the disk. This is
similar to the Ganeti 1.2 version (but will only create one disk).

The minimum disk specification is therefore ``--disk 0:size=20G`` (or
``-s 20G`` when using the ``-s`` option), and a three-disk instance
can be specified as ``--disk 0:size=20G --disk 1:size=4G --disk
2:size=100G``.

The ``--no-ip-check`` option skips the checks that are done to see if
the instance's IP is not already alive (i.e. reachable from the
master node).

The ``--no-name-check`` option skips the check for the instance name
via the resolver (e.g. in DNS or /etc/hosts, depending on your setup).
Since the name check is used to compute the IP address, if you pass
this option you must also pass the ``--no-ip-check`` option.

If you don't want the instance to automatically start after
creation, this is possible via the ``--no-start`` option. This will
leave the instance down until a subsequent **gnt-instance start**
command.

The NICs of the instances can be specified via the ``--net``
option. By default, one NIC is created for the instance, with a
random MAC, and set up according to the cluster-level NIC
parameters. Each NIC can take these parameters (all optional):

mac
    either a value or 'generate' to generate a new unique MAC

ip
    specifies the IP address assigned to the instance from the Ganeti
    side (this is not necessarily what the instance will use, but what
    the node expects the instance to use)

mode
    specifies the connection mode for this NIC: routed or bridged.

link
    in bridged mode specifies the bridge to attach this NIC to, in
    routed mode it's intended to differentiate between different
    routing tables/instance groups (but the meaning is dependent on
    the network script, see gnt-cluster(8) for more details)

Of these, "mode" and "link" are NIC parameters and inherit their
defaults from the cluster level. Alternatively, if no network is
desired for the instance, you can prevent the default of one NIC with
the ``--no-nics`` option.
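
For example, using the NIC parameters above (the bridge name and IP
address are examples only, and the elided arguments follow the usual
**add** syntax)::

  # gnt-instance add --net 0:mode=bridged,link=br0,mac=generate ... instance1.example.com
  # gnt-instance add --net 0:ip=192.0.2.10,mode=routed ... instance1.example.com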

The ``-o (--os-type)`` option specifies the operating system to be
installed. The available operating systems can be listed with
**gnt-os list**. Passing ``--no-install`` will however skip the OS
installation, allowing a manual import if so desired. Note that the
no-installation mode will automatically disable the start-up of the
instance (without an OS, it most likely won't be able to start-up
successfully).

The ``-B (--backend-parameters)`` option specifies the backend
parameters for the instance. If no such parameters are specified, the
values are inherited from the cluster. Possible parameters are:

memory
    the memory size of the instance; as usual, suffixes can be used to
    denote the unit, otherwise the value is taken in mebibytes

vcpus
    the number of VCPUs to assign to the instance (if this value makes
    sense for the hypervisor)

auto\_balance
    whether the instance is considered in the N+1 cluster checks
    (enough redundancy in the cluster to survive a node failure)

The ``-H (--hypervisor-parameters)`` option specifies the hypervisor
to use for the instance (must be one of the enabled hypervisors on the
cluster) and optionally custom parameters for this instance. If no
other options are used (i.e. the invocation is just ``-H`` *NAME*) the
instance will inherit the cluster options. The defaults below show the
cluster defaults at cluster creation time.

The possible hypervisor options are as follows:

boot\_order
    Valid for the Xen HVM and KVM hypervisors.

    A string value denoting the boot order. This has a different
    meaning for the Xen HVM hypervisor and for the KVM one.

    For Xen HVM, the boot order is a string of letters listing the
    boot devices, with valid device letters being:

    a
        floppy drive

    c
        hard disk

    d
        CDROM drive

    n
        network boot (PXE)

    The default is not to set an HVM boot order, which is interpreted
    as 'dc'.

    For KVM the boot order is either "floppy", "cdrom", "disk" or
    "network". Please note that older versions of KVM couldn't netboot
    from virtio interfaces. This has been fixed in more recent versions
    and is confirmed to work at least with qemu-kvm 0.11.1. Also note
    that if you have set the ``kernel_path`` option, that will be used
    for booting, and this setting will be silently ignored.

blockdev\_prefix
    Valid for the Xen HVM and PVM hypervisors.

    Relevant to non-pvops guest kernels, in which the disk device names
    are given by the host. Allows one to specify 'xvd', which helps run
    Red Hat based installers, driven by anaconda.

floppy\_image\_path
    Valid for the KVM hypervisor.

    The path to a floppy disk image to attach to the instance. This
    is useful to install Windows operating systems on Virt/IO disks
    because you can specify here the floppy for the drivers at
    installation time.

cdrom\_image\_path
    Valid for the Xen HVM and KVM hypervisors.

    The path to a CDROM image to attach to the instance.

cdrom2\_image\_path
    Valid for the KVM hypervisor.

    The path to a second CDROM image to attach to the instance.
    **NOTE**: This image can't be used to boot the system. To do that
    you have to use the 'cdrom\_image\_path' option.

nic\_type
    Valid for the Xen HVM and KVM hypervisors.

    This parameter determines the way the network cards are presented
    to the instance. The possible options are:

    - rtl8139 (default for Xen HVM) (HVM & KVM)
    - ne2k\_isa (HVM & KVM)
    - ne2k\_pci (HVM & KVM)
    - i82551 (KVM)
    - i82557b (KVM)
    - i82559er (KVM)
    - pcnet (KVM)
    - e1000 (KVM)
    - paravirtual (default for KVM) (HVM & KVM)

disk\_type
    Valid for the Xen HVM and KVM hypervisors.

    This parameter determines the way the disks are presented to the
    instance. The possible options are:

    - ioemu [default] (HVM & KVM)
    - ide (HVM & KVM)
    - scsi (KVM)
    - sd (KVM)
    - mtd (KVM)
    - pflash (KVM)

cdrom\_disk\_type
    Valid for the KVM hypervisor.

    This parameter determines the way the cdrom disks are presented
    to the instance. The default behavior is to use the same value as
    the earlier ``disk_type`` parameter. The possible options are:

    - paravirtual
    - ide
    - scsi
    - sd
    - mtd
    - pflash

vnc\_bind\_address
    Valid for the Xen HVM and KVM hypervisors.

    Specifies the address that the VNC listener for this instance
    should bind to. Valid values are IPv4 addresses. Use the address
    0.0.0.0 to bind to all available interfaces (this is the default)
    or specify the address of one of the interfaces on the node to
    restrict listening to that interface.

vnc\_tls
    Valid for the KVM hypervisor.

    A boolean option that controls whether the VNC connection is
    secured with TLS.

vnc\_x509\_path
    Valid for the KVM hypervisor.

    If ``vnc_tls`` is enabled, this option specifies the path to the
    x509 certificate to use.

vnc\_x509\_verify
    Valid for the KVM hypervisor.

spice\_bind
    Valid for the KVM hypervisor.

    Specifies the address or interface on which the SPICE server will
    listen. Valid values are:

    - IPv4 addresses, including 0.0.0.0 and 127.0.0.1
    - IPv6 addresses, including :: and ::1
    - names of network interfaces

    If a network interface is specified, the SPICE server will be
    bound to one of the addresses of that interface.

spice\_ip\_version
    Valid for the KVM hypervisor.

    Specifies which version of the IP protocol should be used by the
    SPICE server.

    It is mainly intended to be used for specifying what kind of IP
    addresses should be used if a network interface with both IPv4 and
    IPv6 addresses is specified via the ``spice_bind`` parameter. In
    this case, if the ``spice_ip_version`` parameter is not used, the
    default IP version of the cluster will be used.

acpi
    Valid for the Xen HVM and KVM hypervisors.

    A boolean option that specifies if the hypervisor should enable
    ACPI support for this instance. By default, ACPI is disabled.

pae
    Valid for the Xen HVM and KVM hypervisors.

    A boolean option that specifies if the hypervisor should enable
    PAE support for this instance. The default is false, disabling PAE
    support.

use\_localtime
    Valid for the Xen HVM and KVM hypervisors.

    A boolean option that specifies if the instance should be started
    with its clock set to the localtime of the machine (when true) or
    to UTC (when false). The default is false, which is useful for
    Linux/Unix machines; for Windows OSes, it is recommended to enable
    this parameter.

kernel\_path
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies the path (on the node) to the kernel to boot
    the instance with. Xen PVM instances always require this, while
    for KVM, if this option is empty, it will cause the machine to
    load the kernel from its disks (and the boot will be done
    according to ``boot_order``).

kernel\_args
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies extra arguments to the kernel that will be
    loaded. This is always used for Xen PVM, while for KVM it is only
    used if the ``kernel_path`` option is also specified.

    The default setting for this value is simply ``"ro"``, which
    mounts the root disk (initially) in read-only mode. For example,
    setting this to ``single`` will cause the instance to start in
    single-user mode.

initrd\_path
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies the path (on the node) to the initrd to boot
    the instance with. Xen PVM instances can always use this, while
    for KVM this option is only used if the ``kernel_path`` option
    is also specified. You can pass here either an absolute filename
    (the path to the initrd) if you want to use an initrd, or use the
    format no\_initrd\_path for no initrd.

root\_path
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies the name of the root device. This is always
    needed for Xen PVM, while for KVM it is only used if the
    ``kernel_path`` option is also specified.

    Please note that if this setting is an empty string and the
    hypervisor is Xen, it will not be written to the Xen configuration
    file.

serial\_console
    Valid for the KVM hypervisor.

    This boolean option specifies whether to emulate a serial console
    for the instance.

disk\_cache
    Valid for the KVM hypervisor.

    The disk cache mode. It can be either ``default``, to not pass any
    cache option to KVM, or one of the KVM cache modes: none (for
    direct I/O), writethrough (to use the host cache but report
    completion to the guest only when the host has committed the
    changes to disk) or writeback (to use the host cache and report
    completion as soon as the data is in the host cache). Note that
    there are special considerations for the cache mode depending on
    the version of KVM used and the disk type (always a raw file under
    Ganeti); please refer to the KVM documentation for more details.

security\_model
    Valid for the KVM hypervisor.

    The security model for kvm. Currently one of *none*, *user* or
    *pool*. Under *none*, the default, nothing is done and instances
    are run as the Ganeti daemon user (normally root).

    Under *user* kvm will drop privileges and become the user
    specified by the security\_domain parameter.

    Under *pool* a global cluster pool of users will be used, making
    sure no two instances share the same user on the same node. (This
    mode is not implemented yet.)

security\_domain
    Valid for the KVM hypervisor.

    Under security model *user* the username to run the instance
    under. It must be a valid username existing on the host.

    Cannot be set under security model *none* or *pool*.

kvm\_flag
    Valid for the KVM hypervisor.

    If *enabled* the -enable-kvm flag is passed to kvm. If *disabled*
    -disable-kvm is passed. If unset no flag is passed, and the
    default running mode for your kvm binary will be used.

mem\_path
    Valid for the KVM hypervisor.

    This option passes the -mem-path argument to kvm with the path (on
    the node) to the mount point of the hugetlbfs file system, along
    with the -mem-prealloc argument too.

use\_chroot
    Valid for the KVM hypervisor.

    This boolean option determines whether to run the KVM instance in
    a chroot directory.

    If it is set to ``true``, an empty directory is created before
    starting the instance and its path is passed via the -chroot flag
    to kvm. The directory is removed when the instance is stopped.

    It is set to ``false`` by default.

migration\_downtime
    Valid for the KVM hypervisor.

    The maximum amount of time (in ms) a KVM instance is allowed to be
    frozen during a live migration, in order to copy dirty memory
    pages. The default value is 30ms, but you may need to increase
    this value for busy instances.

    This option is only effective with kvm versions >= 87 and qemu-kvm
    versions >= 0.11.0.

cpu\_mask
    Valid for the LXC hypervisor.

    The processes belonging to the given instance are only scheduled
    on the specified CPUs.

    The parameter format is a comma-separated list of CPU IDs or CPU
    ID ranges. The ranges are defined by a lower and higher boundary,
    separated by a dash. The boundaries are inclusive.

usb\_mouse
    Valid for the KVM hypervisor.

    This option specifies the usb mouse type to be used. It can be
    "mouse" or "tablet". When using VNC it's recommended to set it to
    "tablet".

keymap
    Valid for the KVM hypervisor.

    This option specifies the keyboard mapping to be used. It is only
    needed when using the VNC console. For example: "fr" or "en-gb".

reboot\_behavior
    Valid for Xen PVM, Xen HVM and KVM hypervisors.

    Normally if an instance reboots, the hypervisor will restart it.
    If this option is set to ``exit``, the hypervisor will treat a
    reboot as a shutdown instead.

    It is set to ``reboot`` by default.
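
Putting the ``-H`` syntax together with some of the options above, an
instance booting from a CDROM image could be created along these lines
(this assumes a hypervisor named ``kvm`` is enabled on the cluster;
the image path and host names are illustrative)::

  # gnt-instance add -t plain --disk 0:size=10g -o debian-etch \
    -H kvm:boot_order=cdrom,cdrom_image_path=/srv/iso/install.iso \
    -n node1.example.com instance4.example.com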

The ``-O (--os-parameters)`` option allows customisation of the OS
parameters. The actual parameter names and values depend on the OS
being used, but the syntax is the same ``key=value``. For example,
setting a hypothetical ``dhcp`` parameter to yes can be achieved by::

  gnt-instance add -O dhcp=yes ...

The ``-I (--iallocator)`` option specifies the instance allocator
plugin to use. If you pass in this option the allocator will select
nodes for this instance automatically, so you don't need to pass them
with the ``-n`` option. For more information please refer to the
instance allocator documentation.

The ``-t (--disk-template)`` option specifies the disk layout type
for the instance. The available choices are:

diskless
    This creates an instance with no disks. It's useful for testing
    only (or other special cases).

file
    Disk devices will be regular files.

plain
    Disk devices will be logical volumes.

drbd
    Disk devices will be drbd (version 8.x) on top of lvm volumes.

The optional second value of the ``-n (--node)`` option is used for
the drbd template type and specifies the remote node.

If you do not want gnt-instance to wait for the disk mirror to be
synced, use the ``--no-wait-for-sync`` option.

The ``--file-storage-dir`` option specifies the relative path under
the cluster-wide file storage directory to store file-based disks. It
is useful for having different subdirectories for different
instances. The full path of the directory where the disk files are
stored will consist of cluster-wide file storage directory + optional
subdirectory + instance name. Example:
``@RPL_FILE_STORAGE_DIR@``*/mysubdir/instance1.example.com*. This
option is only relevant for instances using the file storage backend.

The ``--file-driver`` option specifies the driver to use for
file-based disks. Note that currently these drivers work with the xen
hypervisor only. This option is only relevant for instances using the
file storage backend. The available choices are:

loop
    Kernel loopback driver. This driver uses loopback devices to
    access the filesystem within the file. However, running I/O
    intensive applications in your instance using the loop driver
    might result in slowdowns. Furthermore, if you use the loopback
    driver, consider increasing the maximum number of loopback devices
    (on most systems it's 8) using the max\_loop param.

blktap
    The blktap driver (for Xen hypervisors). In order to be able to
    use the blktap driver you should check if the 'blktapctrl' user
    space disk agent is running (usually automatically started via
    xend). This user-level disk I/O interface has the advantage of
    better performance. Especially if you use a network file system
    (e.g. NFS) to store your instances this is the recommended choice.
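
The ``max_loop`` limit mentioned for the loop driver is a parameter of
the kernel's loop module; one distribution-dependent way to raise it
is via a modprobe configuration fragment (the file path varies by
distribution)::

  # e.g. in /etc/modprobe.d/loop.conf
  options loop max_loop=64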

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

Example::

  # gnt-instance add -t file --disk 0:size=30g -B memory=512 -o debian-etch \
    -n node1.example.com --file-storage-dir=mysubdir instance1.example.com
  # gnt-instance add -t plain --disk 0:size=30g -B memory=512 -o debian-etch \
    -n node1.example.com instance1.example.com
  # gnt-instance add -t plain --disk 0:size=30g --disk 1:size=100g,vg=san \
    -B memory=512 -o debian-etch -n node1.example.com instance1.example.com
  # gnt-instance add -t drbd --disk 0:size=30g -B memory=512 -o debian-etch \
    -n node1.example.com:node2.example.com instance2.example.com

BATCH-CREATE
^^^^^^^^^^^^

**batch-create** {instances\_file.json}

This command (similar to the Ganeti 1.2 **batcher** tool) submits
multiple instance creation jobs based on a definition file. The
instance configurations do not encompass all the possible options for
the **add** command, but only a subset.

The instance file should be a well-formed JSON file, containing a
dictionary with instance names as keys and instance parameters as
values. The accepted parameters are:

disk\_size
    The size of the disks of the instance.

disk\_template
    The disk template to use for the instance, the same as in the
    **add** command.

backend
    A dictionary of backend parameters.

hypervisor
    A dictionary with a single key (the hypervisor name), and as value
    the hypervisor options. If not passed, the default hypervisor and
    hypervisor options will be inherited.

mac, ip, mode, link
    Specifications for the one NIC that will be created for the
    instance. 'bridge' is also accepted as a backwards-compatible
    key.

nics
    List of NICs that will be created for the instance. Each entry
    should be a dict, with mac, ip, mode and link as possible keys.
    Please don't provide the "mac, ip, mode, link" parent keys if you
    use this method for specifying NICs.

primary\_node, secondary\_node
    The primary and optionally the secondary node to use for the
    instance (in case an iallocator script is not used).

iallocator
    Instead of specifying the nodes, an iallocator script can be used
    to automatically compute them.

start
    whether to start the instance

ip\_check
    Skip the check for already-in-use instance; see the description in
    the **add** command for details.

name\_check
    Skip the name check for instances; see the description in the
    **add** command for details.

file\_storage\_dir, file\_driver
    Configuration for the file disk type, see the **add** command for
    details.

A simple definition for one instance can be (with most of the
parameters taken from the cluster defaults)::

  {
    "instance3": {
      "template": "drbd",
      "os": "debootstrap",
      "disk_size": ["25G"],
      "iallocator": "dumb"
    },
    "instance5": {
      "template": "drbd",
      "os": "debootstrap",
      "disk_size": ["25G"],
      "iallocator": "dumb",
      "hypervisor": "xen-hvm",
      "hvparams": {"acpi": true},
      "backend": {"memory": 512}
    }
  }

The command will display the job id for each submitted instance, as
follows::

  # gnt-instance batch-create instances.json
  instance3: 11224
  instance5: 11225

REMOVE
^^^^^^

**remove** [--ignore-failures] [--shutdown-timeout=*N*] [--submit]
[--force] {*instance*}

Remove an instance. This will remove all data from the instance and
there is *no way back*. If you are not sure if you will use an
instance again, use **shutdown** first and leave it in the shutdown
state for a while.

The ``--ignore-failures`` option will cause the removal to proceed
even in the presence of errors during the removal of the instance
(e.g. during the shutdown or the disk removal). If this option is not
given, the command will stop at the first error.

The ``--shutdown-timeout`` option is used to specify how much time to
wait before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing
the kvm process for KVM, etc.). By default two minutes are given to
each instance to stop.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

The ``--force`` option is used to skip the interactive confirmation.

Example::

  # gnt-instance remove instance1.example.com

LIST
^^^^

| **list**
| [--no-headers] [--separator=*SEPARATOR*] [--units=*UNITS*] [-v]
| [{-o|--output} *[+]FIELD,...*] [--filter] [instance...]

Shows the currently configured instances with memory usage, disk
usage, the node they are running on, and their run status.

The ``--no-headers`` option will skip the initial header line. The
``--separator`` option takes an argument which denotes what will be
used between the output fields. Both these options are to help
scripting.

The units used to display the numeric values in the output vary,
depending on the options given. By default, the values will be
formatted in the most appropriate unit. If the ``--separator`` option
is given, then the values are shown in mebibytes to allow parsing by
scripts. In both cases, the ``--units`` option can be used to enforce
a given output unit.

The ``-v`` option activates verbose mode, which changes the display of
special field states (see **ganeti(7)**).

The ``-o (--output)`` option takes a comma-separated list of output
fields. The available fields and their meaning are:

@QUERY_FIELDS_INSTANCE@

If the value of the option starts with the character ``+``, the new
field(s) will be added to the default list. This allows one to quickly
see the default list plus a few other fields, instead of retyping the
entire list of fields.

There is a subtle grouping among the available output fields: all
fields except for ``oper_state``, ``oper_ram``, ``oper_vcpus`` and
``status`` are configuration values and not run-time values. So if
you don't select any of these fields, the query will be satisfied
instantly from the cluster configuration, without having to ask the
remote nodes for the data. This can be helpful for big clusters when
you only want some data and it makes sense to specify a reduced set of
output fields.

If exactly one argument is given and it appears to be a query filter
(see **ganeti(7)**), the query result is filtered accordingly. For
ambiguous cases (e.g. a single field name as a filter) the ``--filter``
(``-F``) option forces the argument to be treated as a filter (e.g.
``gnt-instance list -F admin_state``).

The default output field list is: ``name``, ``os``, ``pnode``,
``admin_state``, ``oper_state``, ``oper_ram``.
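
Combining the options above for scripting, one might request the
default fields plus a run-time one, with machine-parseable output::

  # gnt-instance list --no-headers --separator=: -o +oper_vcpus

Because ``oper_vcpus`` is a run-time field, this query will contact
the remote nodes rather than being answered from the configuration
alone.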

LIST-FIELDS
^^^^^^^^^^^

**list-fields** [field...]

Lists available fields for instances.

INFO
^^^^

**info** [-s \| --static] [--roman] {--all \| *instance*}

Show detailed information about the given instance(s). This is
different from **list** as it shows detailed data about the instance's
disks (especially useful for the drbd disk template).

If the option ``-s`` is used, only information available in the
configuration file is returned, without querying nodes, making the
operation faster.

Use the ``--all`` option to get info about all instances, rather than
explicitly passing the ones you're interested in.

The ``--roman`` option can be used to cause envy among people who like
ancient cultures, but are stuck with non-latin-friendly cluster
virtualization technologies.

MODIFY
^^^^^^

| **modify**
| [{-H|--hypervisor-parameters} *HYPERVISOR\_PARAMETERS*]
| [{-B|--backend-parameters} *BACKEND\_PARAMETERS*]
| [--net add*[:options]* \| --net remove \| --net *N:options*]
| [--disk add:size=*SIZE*[,vg=*VG*][,metavg=*VG*] \| --disk remove \|
|  --disk *N*:mode=*MODE*]
| [{-t|--disk-template} plain \| {-t|--disk-template} drbd -n *new_secondary*] [--no-wait-for-sync]
| [--os-type=*OS* [--force-variant]]
| [{-O|--os-parameters} *param*=*value*... ]
| [--submit]
| {*instance*}

Modifies the memory size, number of vcpus, ip address, MAC address
and/or NIC parameters for an instance. It can also add and remove
disks and NICs to/from the instance. Note that you need to give at
least one of the arguments, otherwise the command complains.

The ``-H (--hypervisor-parameters)``, ``-B (--backend-parameters)``
and ``-O (--os-parameters)`` options specify hypervisor, backend and
OS parameter options in the form of name=value[,...]. For details of
which options can be specified, see the **add** command.

The ``-t (--disk-template)`` option will change the disk template of
the instance. Currently only conversions between the plain and drbd
disk templates are supported, and the instance must be stopped before
attempting the conversion. When changing from the plain to the drbd
disk template, a new secondary node must be specified via the ``-n``
option. The option ``--no-wait-for-sync`` can be used when converting
to the ``drbd`` template in order to make the instance available for
startup before DRBD has finished resyncing.
802 |
|
803 |
The ``--disk add:size=``*SIZE* option adds a disk to the instance. The |
804 |
optional ``vg=``*VG* option specifies LVM volume group other than |
805 |
default vg to create the disk on. For DRBD disks, the ``metavg=``*VG* |
806 |
option specifies the volume group for the metadata device. The |
807 |
``--disk remove`` option will remove the last disk of the |
808 |
instance. The ``--disk`` *N*``:mode=``*MODE* option will change the |
809 |
mode of the Nth disk of the instance between read-only (``ro``) and |
810 |
read-write (``rw``). |
811 |
|
812 |
The ``--net add:``*options* option will add a new NIC to the |
813 |
instance. The available options are the same as in the **add** command |
814 |
(mac, ip, link, mode). The ``--net remove`` will remove the last NIC |
815 |
of the instance, while the ``--net`` *N*:*options* option will change |
816 |
the parameters of the Nth instance NIC. |
817 |
|
818 |
The option ``-o (--os-type)`` will change the OS name for the instance |
819 |
(without reinstallation). In case an OS variant is specified that is |
820 |
not found, then by default the modification is refused, unless |
821 |
``--force-variant`` is passed. An invalid OS will also be refused, |
822 |
unless the ``--force`` option is given. |
823 |
|
824 |
The ``--submit`` option is used to send the job to the master daemon |
825 |
but not wait for its completion. The job ID will be shown so that it |
826 |
can be examined via **gnt-job info**. |
827 |
|
828 |
All the changes take effect at the next restart. If the instance is |
829 |
running, there is no effect on the instance. |
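
For example, to add a 10 GiB disk to an instance, or to change its
memory to 1 GiB as of the next restart (the hostname is
illustrative)::

    # gnt-instance modify --disk add:size=10g instance1.example.com
    # gnt-instance modify -B memory=1024 instance1.example.com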

REINSTALL
^^^^^^^^^

| **reinstall** [{-o|--os-type} *os-type*] [--select-os] [-f *force*]
| [--force-multiple]
| [--instance \| --node \| --primary \| --secondary \| --all]
| [{-O|--os-parameters} *OS\_PARAMETERS*] [--submit] {*instance*...}

Reinstalls the operating system on the given instance(s). The
instance(s) must be stopped when running this command. If the ``-o
(--os-type)`` option is specified, the operating system is changed.

The ``--select-os`` option switches to an interactive OS reinstall.
The user is prompted to select the OS template from the list of
available OS templates. OS parameters can be overridden using ``-O
(--os-parameters)`` (more documentation for this option under the
**add** command).

Since this is a potentially dangerous command, the user will be
required to confirm this action, unless the ``-f`` flag is passed.
When multiple instances are selected (either by passing multiple
arguments or by using the ``--node``, ``--primary``, ``--secondary``
or ``--all`` options), the user must pass the ``--force-multiple``
option to skip the interactive confirmation.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.
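
For example, to reinstall a stopped instance with its current OS,
skipping the confirmation (the hostname is illustrative)::

    # gnt-instance reinstall -f instance1.example.com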

RENAME
^^^^^^

| **rename** [--no-ip-check] [--no-name-check] [--submit]
| {*instance*} {*new\_name*}

Renames the given instance. The instance must be stopped when running
this command. The requirements for the new name are the same as for
adding an instance: the new name must be resolvable and the IP it
resolves to must not be reachable (in order to prevent duplicate IPs
the next time the instance is started). The IP test can be skipped if
the ``--no-ip-check`` option is passed.

The ``--no-name-check`` option skips the check for the new instance
name via the resolver (e.g. in DNS or /etc/hosts, depending on your
setup) and the check that the resolved name matches the provided name.
Since the name check is used to compute the IP address, if you pass
this option you must also pass the ``--no-ip-check`` option.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.
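
For example (both names are illustrative; the instance must be stopped
first)::

    # gnt-instance rename instance1.example.com instance2.example.com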

Starting/stopping/connecting to console
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

STARTUP
^^^^^^^

| **startup**
| [--force] [--ignore-offline]
| [--force-multiple] [--no-remember]
| [--instance \| --node \| --primary \| --secondary \| --all \|
| --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
| [{-H|--hypervisor-parameters} ``key=value...``]
| [{-B|--backend-parameters} ``key=value...``]
| [--submit] [--paused]
| {*name*...}

Starts one or more instances, depending on the following options. The
available selection modes are:

--instance
  will start the instances given as arguments (at least one argument
  required); this is the default selection

--node
  will start the instances that have the given node as either primary
  or secondary

--primary
  will start all instances whose primary node is in the list of nodes
  passed as arguments (at least one node required)

--secondary
  will start all instances whose secondary node is in the list of
  nodes passed as arguments (at least one node required)

--all
  will start all instances in the cluster (no arguments accepted)

--tags
  will start all instances in the cluster with the tags given as
  arguments

--node-tags
  will start all instances in the cluster on nodes with the tags
  given as arguments

--pri-node-tags
  will start all instances in the cluster on primary nodes with the
  tags given as arguments

--sec-node-tags
  will start all instances in the cluster on secondary nodes with the
  tags given as arguments

Note that although you can pass more than one selection option, the
last one wins, so in order to guarantee the desired result, don't pass
more than one such option.

Use ``--force`` to start even if secondary disks are failing.
``--ignore-offline`` can be used to ignore offline primary nodes and
mark the instance as started even if the primary is not available.

The ``--force-multiple`` option will skip the interactive confirmation
in case more than one instance will be affected.

The ``--no-remember`` option will perform the startup but not change
the state of the instance in the configuration file (if it was stopped
before, Ganeti will still think it needs to be stopped). This can be
used for testing, or for a one-shot start where you don't want the
watcher to restart the instance if it crashes.

The ``-H (--hypervisor-parameters)`` and ``-B (--backend-parameters)``
options specify temporary hypervisor and backend parameters that can
be used to start an instance with modified parameters. They can be
useful for quick testing without having to modify an instance back and
forth, e.g.::

    # gnt-instance start -H kernel_args="single" instance1
    # gnt-instance start -B memory=2048 instance2


The first form will start the instance instance1 in single-user mode,
and the instance instance2 with 2GB of RAM (this time only, unless
that is the actual instance memory size already). Note that the values
override the instance parameters (and do not extend them): an instance
with "kernel\_args=ro" when started with -H kernel\_args=single will
result in "single", not "ro single".

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

The ``--paused`` option is only valid for the Xen and KVM hypervisors.
This pauses the instance at the start of bootup, awaiting
``gnt-instance console`` to unpause it, allowing the entire boot
process to be monitored for debugging.

Example::

    # gnt-instance start instance1.example.com
    # gnt-instance start --node node1.example.com node2.example.com
    # gnt-instance start --all


SHUTDOWN
^^^^^^^^

| **shutdown**
| [--timeout=*N*]
| [--force-multiple] [--ignore-offline] [--no-remember]
| [--instance \| --node \| --primary \| --secondary \| --all \|
| --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
| [--submit]
| {*name*...}

Stops one or more instances. If the instance cannot be cleanly stopped
during a hardcoded interval (currently 2 minutes), it will forcibly
stop the instance (equivalent to switching off the power on a physical
machine).

The ``--timeout`` option is used to specify how much time to wait
before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing the
KVM process for KVM, etc.). By default two minutes are given to each
instance to stop.

The ``--instance``, ``--node``, ``--primary``, ``--secondary``,
``--all``, ``--tags``, ``--node-tags``, ``--pri-node-tags`` and
``--sec-node-tags`` options are similar to those of the **startup**
command and they determine the actual instances being shut down.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

``--ignore-offline`` can be used to ignore offline primary nodes and
force the instance to be marked as stopped. This option should be used
with care as it can lead to an inconsistent cluster state.

The ``--no-remember`` option will perform the shutdown but not change
the state of the instance in the configuration file (if it was running
before, Ganeti will still think it needs to be running). This can be
useful for a cluster-wide shutdown, where some instances are marked as
up and some as down, and you don't want to change the running state:
you just need to disable the watcher, shutdown all instances with
``--no-remember``, and when the watcher is activated again it will
restore the correct runtime state for all instances.

Example::

    # gnt-instance shutdown instance1.example.com
    # gnt-instance shutdown --all


REBOOT
^^^^^^

| **reboot**
| [{-t|--type} *REBOOT-TYPE*]
| [--ignore-secondaries]
| [--shutdown-timeout=*N*]
| [--force-multiple]
| [--instance \| --node \| --primary \| --secondary \| --all \|
| --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
| [--submit]
| [*name*...]

Reboots one or more instances. The type of reboot depends on the value
of ``-t (--type)``. A soft reboot does a hypervisor reboot, a hard
reboot does an instance stop, recreates the hypervisor config for the
instance and starts the instance. A full reboot does the equivalent of
**gnt-instance shutdown && gnt-instance startup**. The default is a
hard reboot.

For the hard reboot the option ``--ignore-secondaries`` ignores errors
for the secondary node while re-assembling the instance disks.

The ``--instance``, ``--node``, ``--primary``, ``--secondary``,
``--all``, ``--tags``, ``--node-tags``, ``--pri-node-tags`` and
``--sec-node-tags`` options are similar to those of the **startup**
command and they determine the actual instances being rebooted.

The ``--shutdown-timeout`` option is used to specify how much time to
wait before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing
the KVM process for KVM). By default two minutes are given to each
instance to stop.

The ``--force-multiple`` option will skip the interactive confirmation
in case more than one instance will be affected.

Example::

    # gnt-instance reboot instance1.example.com
    # gnt-instance reboot --type=full instance1.example.com


CONSOLE
^^^^^^^

**console** [--show-cmd] {*instance*}

Connects to the console of the given instance. If the instance is not
up, an error is returned. Use the ``--show-cmd`` option to display the
command instead of executing it.

For HVM instances, this will attempt to connect to the serial console
of the instance. To connect to the virtualized "physical" console of
an HVM instance, use a VNC client with the connection info from the
**info** command.

For Xen/KVM instances, if the instance is paused, this attempts to
unpause the instance after waiting a few seconds for the connection to
the console to be made.

Example::

    # gnt-instance console instance1.example.com


Disk management
~~~~~~~~~~~~~~~

REPLACE-DISKS
^^^^^^^^^^^^^

**replace-disks** [--submit] [--early-release] {-p} [--disks *idx*]
{*instance*}

**replace-disks** [--submit] [--early-release] {-s} [--disks *idx*]
{*instance*}

**replace-disks** [--submit] [--early-release] {--iallocator *name*
\| --new-secondary *NODE*} {*instance*}

**replace-disks** [--submit] [--early-release] {--auto}
{*instance*}

This command is a generalized form for replacing disks. It is
currently only valid for the mirrored (DRBD) disk template.

The first form (when passing the ``-p`` option) will replace the disks
on the primary node, while the second form (when passing the ``-s``
option) will replace the disks on the secondary node. For these two
cases (as the node doesn't change), it is possible to only run the
replace for a subset of the disks, using the option ``--disks`` which
takes a list of comma-delimited disk indices (zero-based), e.g. 0,2 to
replace only the first and third disks.

The third form (when passing either the ``--iallocator`` or the
``--new-secondary`` option) is designed to change the secondary node
of the instance. Specifying ``--iallocator`` makes the new secondary
be selected automatically by the specified allocator plugin, otherwise
the new secondary node will be the one chosen manually via the
``--new-secondary`` option.

The fourth form (when using ``--auto``) will automatically determine
which disks of an instance are faulty and replace them within the same
node. The ``--auto`` option works only when an instance has only
faulty disks on either the primary or secondary node; it doesn't work
when both sides have faulty disks.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

The ``--early-release`` option changes the code so that the old
storage on secondary node(s) is removed early (before the resync is
completed) and the internal Ganeti locks for the current (and new, if
any) secondary node are also released, thus allowing more parallelism
in the cluster operation. This should be used only when recovering
from a disk failure on the current secondary (thus the old storage is
already broken) or when the storage on the primary node is known to be
fine (thus we won't need the old storage for potential recovery).

Note that it is not possible to select an offline or drained node as a
new secondary.
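
For example, the second and third forms can be invoked as follows
(hostnames are illustrative)::

    # gnt-instance replace-disks -s instance1.example.com
    # gnt-instance replace-disks --new-secondary node3.example.com instance1.example.com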

ACTIVATE-DISKS
^^^^^^^^^^^^^^

**activate-disks** [--submit] [--ignore-size] {*instance*}

Activates the block devices of the given instance. If successful, the
command will show the location and name of the block devices::

    node1.example.com:disk/0:/dev/drbd0
    node1.example.com:disk/1:/dev/drbd1


In this example, *node1.example.com* is the name of the node on which
the devices have been activated. The *disk/0* and *disk/1* are the
Ganeti-names of the instance disks; how they are visible inside the
instance is hypervisor-specific. */dev/drbd0* and */dev/drbd1* are the
actual block devices as visible on the node.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

The ``--ignore-size`` option can be used to activate disks ignoring
the currently configured size in Ganeti. This can be used in cases
where the configuration has gotten out of sync with the real world
(e.g. after a partially-failed grow-disk operation or due to rounding
in LVM devices). This should not be used in normal cases, but only
when activate-disks fails without it.

Note that it is safe to run this command while the instance is already
running.

DEACTIVATE-DISKS
^^^^^^^^^^^^^^^^

**deactivate-disks** [-f] [--submit] {*instance*}

De-activates the block devices of the given instance. Note that if you
run this command for an instance with a drbd disk template while it is
running, it will not be able to shut down the block devices on the
primary node, but it will shut down the block devices on the secondary
nodes, thus breaking the replication.

The ``-f``/``--force`` option will skip the checks that the instance
is down; in case the hypervisor is confused and we can't talk to it,
normally Ganeti will refuse to deactivate the disks, but with this
option passed it will skip this check and directly try to deactivate
the disks. This can still fail due to the instance actually running or
other issues.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

GROW-DISK
^^^^^^^^^

**grow-disk** [--no-wait-for-sync] [--submit] {*instance*} {*disk*}
{*amount*}

Grows an instance's disk. This is only possible for instances having a
plain or drbd disk template.

Note that this command only changes the block device size; it will not
grow the actual filesystems, partitions, etc. that live on that
disk. Usually, you will need to:

#. use **gnt-instance grow-disk**

#. reboot the instance (later, at a convenient time)

#. use a filesystem resizer, such as ext2online(8) or
   xfs\_growfs(8) to resize the filesystem, or use fdisk(8) to change
   the partition table on the disk

The *disk* argument is the index of the instance disk to grow. The
*amount* argument is given either as a number (in which case it
represents the amount to increase the disk by, in mebibytes) or
similar to the arguments in the create instance operation, with a
suffix denoting the unit.

Note that the disk grow operation might complete on one node but fail
on the other; this will leave the instance with different-sized LVs on
the two nodes, but this will not create problems (except for unused
space).

If you do not want gnt-instance to wait for the new disk region to be
synced, use the ``--no-wait-for-sync`` option.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

Example (increase the first disk for instance1 by 16GiB)::

    # gnt-instance grow-disk instance1.example.com 0 16g


Also note that disk shrinking is not supported; use **gnt-backup
export** and then **gnt-backup import** to reduce the disk size of an
instance.

RECREATE-DISKS
^^^^^^^^^^^^^^

**recreate-disks** [--submit] [--disks=``indices``] [-n node1:[node2]]
{*instance*}

Recreates the disks of the given instance, or only a subset of the
disks (if the option ``--disks`` is passed, which must be a
comma-separated list of disk indices, starting from zero).

Note that this functionality should only be used for missing disks; if
any of the given disks already exists, the operation will fail. While
this is suboptimal, recreate-disks should hopefully not be needed in
normal operation and as such the impact of this is low.

Optionally the instance's disks can be recreated on different
nodes. This can be useful if, for example, the original nodes of the
instance have gone down (and are marked offline), so we can't recreate
on the same nodes. To do this, pass the new node(s) via the ``-n``
option, with a syntax similar to the **add** command. The number of
nodes passed must equal the number of nodes that the instance
currently has. Note that changing nodes is only allowed for 'all disk'
replacement (when ``--disks`` is not passed).

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.
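
For example, to recreate all disks of a DRBD instance on a new pair of
nodes after the original nodes were lost (hostnames are
illustrative)::

    # gnt-instance recreate-disks -n node3.example.com:node4.example.com instance1.example.com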

Recovery
~~~~~~~~

FAILOVER
^^^^^^^^

**failover** [-f] [--ignore-consistency] [--shutdown-timeout=*N*]
[--submit] {*instance*}

Failover will stop the instance (if running), change its primary node,
and if it was originally running it will start it again (on the new
primary). This only works for instances with a drbd disk template (in
which case you can only fail over to the secondary node) and for
externally mirrored templates (shared storage) (which can change to
any other node).

Normally the failover will check the consistency of the disks before
failing over the instance. If you are trying to migrate instances off
a dead node, this will fail. Use the ``--ignore-consistency`` option
for this purpose. Note that this option can be dangerous as errors in
shutting down the instance will be ignored, resulting in possibly
having the instance running on two machines in parallel (on
disconnected DRBD drives).

The ``--shutdown-timeout`` option is used to specify how much time to
wait before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing
the KVM process for KVM). By default two minutes are given to each
instance to stop.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

Example::

    # gnt-instance failover instance1.example.com


MIGRATE
^^^^^^^

**migrate** [-f] {--cleanup} {*instance*}

**migrate** [-f] [--allow-failover] [--non-live]
[--migration-mode=live\|non-live] {*instance*}

Migrate will move the instance to its secondary node without
shutdown. It only works for instances having the drbd8 disk template
type.

The migration command needs a perfectly healthy instance, as we rely
on the dual-master capability of drbd8 and the disks of the instance
are not allowed to be degraded.

The ``--non-live`` and ``--migration-mode=non-live`` options will
switch (for the hypervisors that support it) between a "fully live"
(i.e. the interruption is as minimal as possible) migration and one in
which the instance is frozen, its state saved and transported to the
remote node, and then resumed there. This all depends on the
hypervisor support for the two different methods. In any case, it is
not an error to pass this parameter (it will just be ignored if the
hypervisor doesn't support it). The ``--migration-mode=live`` option
will request a fully-live migration. The default, when neither option
is passed, depends on the hypervisor parameters (and can be viewed
with the **gnt-cluster info** command).

If the ``--cleanup`` option is passed, the operation changes from
migration to attempting recovery from a failed previous migration. In
this mode, Ganeti checks if the instance runs on the correct node (and
updates its configuration if not) and ensures the instance's disks
are configured correctly. In this mode, the ``--non-live`` option is
ignored.

The option ``-f`` will skip the prompting for confirmation.

If ``--allow-failover`` is specified, Ganeti tries to fall back to a
failover if it can already determine that a migration won't work
(i.e. if the instance is shut down). Please note that the fallback
will not happen during execution: if a migration fails during
execution, it still fails.

Example (and expected output)::

    # gnt-instance migrate instance1
    Migrate will happen to the instance instance1. Note that migration is
    **experimental** in this version. This might impact the instance if
    anything goes wrong. Continue?
    y/[n]/?: y
    * checking disk consistency between source and target
    * ensuring the target is in secondary mode
    * changing disks into dual-master mode
     - INFO: Waiting for instance instance1 to sync disks.
     - INFO: Instance instance1's disks are in sync.
    * migrating instance to node2.example.com
    * changing the instance's disks on source node to secondary
     - INFO: Waiting for instance instance1 to sync disks.
     - INFO: Instance instance1's disks are in sync.
    * changing the instance's disks to single-master
    #


MOVE
^^^^

**move** [-f] [--ignore-consistency]
[-n *node*] [--shutdown-timeout=*N*] [--submit]
{*instance*}

Move will move the instance to an arbitrary node in the cluster. This
works only for instances having a plain or file disk template.

Note that since this operation is done via data copy, it will take a
long time for big disks (similar to replace-disks for a drbd
instance).

The ``--shutdown-timeout`` option is used to specify how much time to
wait before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing
the KVM process for KVM, etc.). By default two minutes are given to
each instance to stop.

The ``--ignore-consistency`` option will make Ganeti ignore any errors
in trying to shut down the instance on its node; useful if the
hypervisor is broken and you want to recuperate the data.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

Example::

    # gnt-instance move -n node3.example.com instance1.example.com


CHANGE-GROUP
~~~~~~~~~~~~

**change-group** [--iallocator *NAME*] [--to *GROUP*...] {*instance*}

This command moves an instance to another node group. The move is
calculated by an iallocator, either given on the command line or as a
cluster default.

If no specific destination groups are specified using ``--to``, all
groups except the one containing the instance are considered.

Example::

    # gnt-instance change-group -I hail --to rack2 inst1.example.com


TAGS
~~~~

ADD-TAGS
^^^^^^^^

**add-tags** [--from *file*] {*instancename*} {*tag*...}

Add tags to the given instance. If any of the tags contains invalid
characters, the entire operation will abort.

If the ``--from`` option is given, the list of tags will be extended
with the contents of that file (each line becomes a tag). In this
case, there is no need to pass tags on the command line (if you do,
both sources will be used). A file name of ``-`` will be interpreted
as stdin.
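
For example (the tag names are illustrative)::

    # gnt-instance add-tags instance1.example.com webserver production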

LIST-TAGS
^^^^^^^^^

**list-tags** {*instancename*}

List the tags of the given instance.

REMOVE-TAGS
^^^^^^^^^^^

**remove-tags** [--from *file*] {*instancename*} {*tag*...}

Remove tags from the given instance. If any of the tags are not
existing on the instance, the entire operation will abort.

If the ``--from`` option is given, the list of tags to be removed will
be extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line (if
you do, tags from both sources will be removed). A file name of ``-``
will be interpreted as stdin.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: