gnt-instance(8) Ganeti | Version @GANETI_VERSION@
=================================================

Name
----

gnt-instance - Ganeti instance administration

Synopsis
--------

**gnt-instance** {command} [arguments...]

DESCRIPTION
-----------

The **gnt-instance** command is used for instance administration in
the Ganeti system.

COMMANDS
--------

Creation/removal/querying
~~~~~~~~~~~~~~~~~~~~~~~~~

ADD
^^^

| **add**
| {-t|--disk-template {diskless | file \| plain \| drbd}}
| {--disk=*N*: {size=*VAL* \| adopt=*LV*}[,vg=*VG*][,metavg=*VG*][,mode=*ro\|rw*]
| \| {-s|--os-size} *SIZE*}
| [--no-ip-check] [--no-name-check] [--no-start] [--no-install]
| [--net=*N* [:options...] \| --no-nics]
| [{-B|--backend-parameters} *BEPARAMS*]
| [{-H|--hypervisor-parameters} *HYPERVISOR* [: option=*value*... ]]
| [{-O|--os-parameters} *param*=*value*... ]
| [--file-storage-dir *dir\_path*] [--file-driver {loop \| blktap}]
| {{-n|--node} *node[:secondary-node]* \| {-I|--iallocator} *name*}
| {{-o|--os-type} *os-type*}
| [--submit]
| {*instance*}

Creates a new instance on the specified host. The *instance* argument
must be in DNS, but depending on the bridge/routing setup, need not be
in the same network as the nodes in the cluster.

The ``disk`` option specifies the parameters for the disks of the
instance. The numbering of disks starts at zero, and at least one disk
needs to be passed. For each disk, either the size or the adoption
source needs to be given, and optionally the access mode (read-only or
the default of read-write) and the LVM volume group can also be
specified (via the ``vg`` key). For DRBD devices, a different VG can
be specified for the metadata device using the ``metavg`` key. The
size is interpreted (when no unit is given) in mebibytes. You can also
use one of the suffixes *m*, *g* or *t* to specify the exact units
used; these suffixes map to mebibytes, gibibytes and tebibytes.

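The suffix rules above can be sketched as follows (an illustrative
example only; ``size_to_mib`` is not a Ganeti function):

```python
# Map the documented suffixes to their value in mebibytes:
# m -> mebibytes, g -> gibibytes, t -> tebibytes.
SUFFIXES = {"m": 1, "g": 1024, "t": 1024 * 1024}

def size_to_mib(value):
    """Parse a disk size spec; a bare number is already in mebibytes."""
    value = value.strip().lower()
    if value and value[-1] in SUFFIXES:
        return int(value[:-1]) * SUFFIXES[value[-1]]
    return int(value)
```

For example, ``size_to_mib("20g")`` yields 20480, the same size Ganeti
would use for ``--disk 0:size=20G``.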
When using the ``adopt`` key in the disk definition, Ganeti will
reuse those volumes (instead of creating new ones) as the
instance's disks. Ganeti will rename these volumes to the standard
format, and (without installing the OS) will use them as-is for the
instance. This allows migrating instances from non-managed mode
(e.g. plain KVM with LVM) to being managed via Ganeti. Note that
this works only for the \`plain' disk template (see below for
template details).

Alternatively, a single-disk instance can be created via the ``-s``
option which takes a single argument, the size of the disk. This is
similar to the Ganeti 1.2 version (but will only create one disk).

The minimum disk specification is therefore ``--disk 0:size=20G`` (or
``-s 20G`` when using the ``-s`` option), and a three-disk instance
can be specified as ``--disk 0:size=20G --disk 1:size=4G --disk
2:size=100G``.

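The ``--disk N:key=value[,key=value...]`` shape described above can be
parsed as sketched below (``parse_disk_spec`` is a hypothetical helper,
not part of Ganeti):

```python
def parse_disk_spec(spec):
    """Split "N:key=value,key=value" into (index, options dict)."""
    index, _, params = spec.partition(":")
    options = {}
    for item in params.split(","):
        key, _, value = item.partition("=")
        options[key] = value
    return int(index), options
```

So ``parse_disk_spec("1:size=100g,vg=san")`` returns
``(1, {"size": "100g", "vg": "san"})``.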
The ``--no-ip-check`` option skips the check that the instance's IP
address is not already alive (i.e. reachable from the master node).

The ``--no-name-check`` option skips the check for the instance name
via the resolver (e.g. in DNS or /etc/hosts, depending on your
setup). Since the name check is used to compute the IP address, if
you pass this option you must also pass the ``--no-ip-check`` option.

If you don't want the instance to automatically start after
creation, this is possible via the ``--no-start`` option. This will
leave the instance down until a subsequent **gnt-instance start**
command.

The NICs of the instances can be specified via the ``--net``
option. By default, one NIC is created for the instance, with a
random MAC, and set up according to the cluster-level nic
parameters. Each NIC can take these parameters (all optional):

mac
    either a value or 'generate' to generate a new unique MAC

ip
    specifies the IP address assigned to the instance from the Ganeti
    side (this is not necessarily what the instance will use, but what
    the node expects the instance to use)

mode
    specifies the connection mode for this nic: routed or bridged.

link
    in bridged mode specifies the bridge to attach this NIC to, in
    routed mode it's intended to differentiate between different
    routing tables/instance groups (but the meaning is dependent on
    the network script, see gnt-cluster(8) for more details)

Of these, "mode" and "link" are nic parameters, and inherit their
defaults at cluster level. Alternatively, if no network is desired for
the instance, you can prevent the default of one NIC with the
``--no-nics`` option.

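The inheritance rule above can be sketched like this (the cluster
values shown are only example defaults, not Ganeti's actual ones):

```python
# Assumed cluster-level nic parameters, for illustration only.
CLUSTER_NICPARAMS = {"mode": "bridged", "link": "xen-br0"}

def effective_nic(nic_overrides):
    """Per-NIC "mode"/"link" fall back to the cluster-level values."""
    nic = dict(CLUSTER_NICPARAMS)
    nic.update(nic_overrides)
    return nic
```

A NIC created with only ``mode=routed`` would thus still inherit the
cluster-level ``link`` value.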
The ``-o (--os-type)`` option specifies the operating system to be
installed. The available operating systems can be listed with
**gnt-os list**. Passing ``--no-install`` will however skip the OS
installation, allowing a manual import if so desired. Note that the
no-installation mode will automatically disable the start-up of the
instance (without an OS, it most likely won't be able to start-up
successfully).

The ``-B (--backend-parameters)`` option specifies the backend
parameters for the instance. If no such parameters are specified, the
values are inherited from the cluster. Possible parameters are:

memory
    the memory size of the instance; as usual, suffixes can be used to
    denote the unit, otherwise the value is taken in mebibytes

vcpus
    the number of VCPUs to assign to the instance (if this value makes
    sense for the hypervisor)

auto\_balance
    whether the instance is considered in the N+1 cluster checks
    (enough redundancy in the cluster to survive a node failure)

The ``-H (--hypervisor-parameters)`` option specifies the hypervisor
to use for the instance (must be one of the enabled hypervisors on the
cluster) and optionally custom parameters for this instance. If no
other options are used (i.e. the invocation is just -H *NAME*) the
instance will inherit the cluster options. The defaults below show the
cluster defaults at cluster creation time.

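The ``-H HYPERVISOR[:option=value,...]`` argument shape can be parsed
as in this sketch (``parse_hypervisor_arg`` is illustrative, not a
Ganeti function); a bare name with no options means "inherit the
cluster settings":

```python
def parse_hypervisor_arg(arg):
    """Split "-H" values into (hypervisor name, options dict)."""
    name, _, rest = arg.partition(":")
    options = {}
    if rest:
        for item in rest.split(","):
            key, _, value = item.partition("=")
            options[key] = value
    return name, options
```
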
The possible hypervisor options are as follows:

boot\_order
    Valid for the Xen HVM and KVM hypervisors.

    A string value denoting the boot order. This has different meaning
    for the Xen HVM hypervisor and for the KVM one.

    For Xen HVM, the boot order is a string of letters listing the boot
    devices, with valid device letters being:

    a
        floppy drive

    c
        hard disk

    d
        CDROM drive

    n
        network boot (PXE)

    The default is not to set an HVM boot order, which is interpreted
    as 'dc'.

    For KVM the boot order is either "floppy", "cdrom", "disk" or
    "network". Please note that older versions of KVM couldn't
    netboot from virtio interfaces. This has been fixed in more recent
    versions and is confirmed to work at least with qemu-kvm 0.11.1.

blockdev\_prefix
    Valid for the Xen HVM and PVM hypervisors.

    Relevant to non-pvops guest kernels, in which the disk device names
    are given by the host. This allows specifying 'xvd', which helps
    run Red Hat based installers, driven by anaconda.

floppy\_image\_path
    Valid for the KVM hypervisor.

    The path to a floppy disk image to attach to the instance. This
    is useful to install Windows operating systems on Virt/IO disks
    because you can specify here the floppy for the drivers at
    installation time.

cdrom\_image\_path
    Valid for the Xen HVM and KVM hypervisors.

    The path to a CDROM image to attach to the instance.

cdrom2\_image\_path
    Valid for the KVM hypervisor.

    The path to a second CDROM image to attach to the instance.
    **NOTE**: This image can't be used to boot the system. To do that
    you have to use the 'cdrom\_image\_path' option.

nic\_type
    Valid for the Xen HVM and KVM hypervisors.

    This parameter determines the way the network cards are presented
    to the instance. The possible options are:

    - rtl8139 (default for Xen HVM) (HVM & KVM)
    - ne2k\_isa (HVM & KVM)
    - ne2k\_pci (HVM & KVM)
    - i82551 (KVM)
    - i82557b (KVM)
    - i82559er (KVM)
    - pcnet (KVM)
    - e1000 (KVM)
    - paravirtual (default for KVM) (HVM & KVM)

disk\_type
    Valid for the Xen HVM and KVM hypervisors.

    This parameter determines the way the disks are presented to the
    instance. The possible options are:

    - ioemu [default] (HVM & KVM)
    - ide (HVM & KVM)
    - scsi (KVM)
    - sd (KVM)
    - mtd (KVM)
    - pflash (KVM)

cdrom\_disk\_type
    Valid for the KVM hypervisor.

    This parameter determines the way the CDROM disks are presented
    to the instance. The default behavior is to use the same value as
    the earlier parameter (disk_type). The possible options are:

    - paravirtual
    - ide
    - scsi
    - sd
    - mtd
    - pflash

vnc\_bind\_address
    Valid for the Xen HVM and KVM hypervisors.

    Specifies the address that the VNC listener for this instance
    should bind to. Valid values are IPv4 addresses. Use the address
    0.0.0.0 to bind to all available interfaces (this is the default)
    or specify the address of one of the interfaces on the node to
    restrict listening to that interface.

vnc\_tls
    Valid for the KVM hypervisor.

    A boolean option that controls whether the VNC connection is
    secured with TLS.

vnc\_x509\_path
    Valid for the KVM hypervisor.

    If ``vnc_tls`` is enabled, this option specifies the path to the
    x509 certificate to use.

vnc\_x509\_verify
    Valid for the KVM hypervisor.

acpi
    Valid for the Xen HVM and KVM hypervisors.

    A boolean option that specifies if the hypervisor should enable
    ACPI support for this instance. By default, ACPI is disabled.

pae
    Valid for the Xen HVM and KVM hypervisors.

    A boolean option that specifies if the hypervisor should enable
    PAE support for this instance. The default is false, disabling PAE
    support.

use\_localtime
    Valid for the Xen HVM and KVM hypervisors.

    A boolean option that specifies if the instance should be started
    with its clock set to the localtime of the machine (when true) or
    to UTC (when false). The default is false, which is useful for
    Linux/Unix machines; for Windows OSes, it is recommended to enable
    this parameter.

kernel\_path
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies the path (on the node) to the kernel to boot
    the instance with. Xen PVM instances always require this, while
    for KVM if this option is empty, it will cause the machine to load
    the kernel from its disks.

kernel\_args
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies extra arguments to the kernel that will be
    loaded. This is always used for Xen PVM, while for KVM it is only
    used if the ``kernel_path`` option is also specified.

    The default setting for this value is simply ``"ro"``, which
    mounts the root disk (initially) in read-only mode. For example,
    setting this to single will cause the instance to start in
    single-user mode.

initrd\_path
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies the path (on the node) to the initrd to boot
    the instance with. Xen PVM instances can always use this, while
    for KVM this option is only used if the ``kernel_path`` option is
    also specified. You can pass here either an absolute filename
    (the path to the initrd) if you want to use an initrd, or use the
    format no\_initrd\_path for no initrd.

root\_path
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies the name of the root device. This is always
    needed for Xen PVM, while for KVM it is only used if the
    ``kernel_path`` option is also specified.

serial\_console
    Valid for the KVM hypervisor.

    This boolean option specifies whether to emulate a serial console
    for the instance.

disk\_cache
    Valid for the KVM hypervisor.

    The disk cache mode. It can be either default to not pass any
    cache option to KVM, or one of the KVM cache modes: none (for
    direct I/O), writethrough (to use the host cache but report
    completion to the guest only when the host has committed the
    changes to disk) or writeback (to use the host cache and report
    completion as soon as the data is in the host cache). Note that
    there are special considerations for the cache mode depending on
    the version of KVM used and disk type (always raw file under
    Ganeti); please refer to the KVM documentation for more details.

security\_model
    Valid for the KVM hypervisor.

    The security model for kvm. Currently one of *none*, *user* or
    *pool*. Under *none*, the default, nothing is done and instances
    are run as the Ganeti daemon user (normally root).

    Under *user* kvm will drop privileges and become the user
    specified by the security\_domain parameter.

    Under *pool* a global cluster pool of users will be used, making
    sure no two instances share the same user on the same node. (this
    mode is not implemented yet)

security\_domain
    Valid for the KVM hypervisor.

    Under security model *user* the username to run the instance
    under. It must be a valid username existing on the host.

    Cannot be set under security model *none* or *pool*.

kvm\_flag
    Valid for the KVM hypervisor.

    If *enabled* the -enable-kvm flag is passed to kvm. If *disabled*
    -disable-kvm is passed. If unset no flag is passed, and the
    default running mode for your kvm binary will be used.

mem\_path
    Valid for the KVM hypervisor.

    This option passes the -mem-path argument to kvm with the path (on
    the node) to the mount point of the hugetlbfs file system, along
    with the -mem-prealloc argument too.

use\_chroot
    Valid for the KVM hypervisor.

    This boolean option determines whether to run the KVM instance in
    a chroot directory.

    If it is set to ``true``, an empty directory is created before
    starting the instance and its path is passed via the -chroot flag
    to kvm. The directory is removed when the instance is stopped.

    It is set to ``false`` by default.

migration\_downtime
    Valid for the KVM hypervisor.

    The maximum amount of time (in ms) a KVM instance is allowed to be
    frozen during a live migration, in order to copy dirty memory
    pages. Default value is 30ms, but you may need to increase this
    value for busy instances.

    This option is only effective with kvm versions >= 87 and qemu-kvm
    versions >= 0.11.0.

cpu\_mask
    Valid for the LXC hypervisor.

    The processes belonging to the given instance are only scheduled
    on the specified CPUs.

    The parameter format is a comma-separated list of CPU IDs or CPU
    ID ranges. The ranges are defined by a lower and higher boundary,
    separated by a dash. The boundaries are inclusive.

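The cpu\_mask format above (comma-separated IDs or inclusive dash
ranges) can be parsed as in this sketch (``parse_cpu_mask`` is
illustrative, not a Ganeti function):

```python
def parse_cpu_mask(mask):
    """Expand a mask like "0,2,4-6" into the list of CPU IDs."""
    cpus = []
    for part in mask.split(","):
        lo, _, hi = part.partition("-")
        if hi:
            # Inclusive range, so the upper boundary is included.
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(lo))
    return cpus
```

For example, a mask of ``"0,2,4-6"`` schedules the instance's
processes on CPUs 0, 2, 4, 5 and 6.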
usb\_mouse
    Valid for the KVM hypervisor.

    This option specifies the USB mouse type to be used. It can be
    "mouse" or "tablet". When using VNC it's recommended to set it to
    "tablet".

The ``-O (--os-parameters)`` option allows customisation of the OS
parameters. The actual parameter names and values depend on the OS
being used, but the syntax is the same key=value. For example, setting
a hypothetical ``dhcp`` parameter to yes can be achieved by::

    gnt-instance add -O dhcp=yes ...

The ``-I (--iallocator)`` option specifies the instance allocator
plugin to use. If you pass in this option the allocator will select
nodes for this instance automatically, so you don't need to pass them
with the ``-n`` option. For more information please refer to the
instance allocator documentation.

The ``-t (--disk-template)`` option specifies the disk layout type
for the instance. The available choices are:

diskless
    This creates an instance with no disks. It's useful for testing
    only (or other special cases).

file
    Disk devices will be regular files.

plain
    Disk devices will be logical volumes.

drbd
    Disk devices will be drbd (version 8.x) on top of lvm volumes.

The optional second value of the ``-n (--node)`` option is used for
the drbd template type and specifies the remote node.

If you do not want gnt-instance to wait for the disk mirror to be
synced, use the ``--no-wait-for-sync`` option.

The ``--file-storage-dir`` option specifies the relative path under
the cluster-wide file storage directory to store file-based disks. It
is useful for having different subdirectories for different
instances. The full path of the directory where the disk files are
stored will consist of cluster-wide file storage directory + optional
subdirectory + instance name. Example:
``@RPL_FILE_STORAGE_DIR@``*/mysubdir/instance1.example.com*. This
option is only relevant for instances using the file storage backend.

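The path rule above can be sketched like so; the cluster directory
shown stands in for the real ``@RPL_FILE_STORAGE_DIR@`` value, which
is cluster-specific:

```python
import posixpath

def instance_disk_dir(cluster_dir, subdir, instance):
    """cluster-wide storage dir + optional subdirectory + instance name."""
    if subdir:
        return posixpath.join(cluster_dir, subdir, instance)
    return posixpath.join(cluster_dir, instance)
```
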
The ``--file-driver`` option specifies the driver to use for
file-based disks. Note that currently these drivers work with the xen
hypervisor only. This option is only relevant for instances using the
file storage backend. The available choices are:

loop
    Kernel loopback driver. This driver uses loopback devices to
    access the filesystem within the file. However, running I/O
    intensive applications in your instance using the loop driver
    might result in slowdowns. Furthermore, if you use the loopback
    driver, consider increasing the maximum number of loopback devices
    (on most systems it's 8) using the max\_loop param.

blktap
    The blktap driver (for Xen hypervisors). In order to be able to
    use the blktap driver you should check if the 'blktapctrl' user
    space disk agent is running (usually automatically started via
    xend). This user-level disk I/O interface has the advantage of
    better performance. Especially if you use a network file system
    (e.g. NFS) to store your instances this is the recommended choice.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

Example::

    # gnt-instance add -t file --disk 0:size=30g -B memory=512 -o debian-etch \
      -n node1.example.com --file-storage-dir=mysubdir instance1.example.com
    # gnt-instance add -t plain --disk 0:size=30g -B memory=512 -o debian-etch \
      -n node1.example.com instance1.example.com
    # gnt-instance add -t plain --disk 0:size=30g --disk 1:size=100g,vg=san \
      -B memory=512 -o debian-etch -n node1.example.com instance1.example.com
    # gnt-instance add -t drbd --disk 0:size=30g -B memory=512 -o debian-etch \
      -n node1.example.com:node2.example.com instance2.example.com


BATCH-CREATE
^^^^^^^^^^^^

**batch-create** {instances\_file.json}

This command (similar to the Ganeti 1.2 **batcher** tool) submits
multiple instance creation jobs based on a definition file. The
instance configurations do not encompass all the possible options for
the **add** command, but only a subset.

The instance file should be a well-formed JSON file, containing a
dictionary with instance names as keys and instance parameters as
values. The accepted parameters are:

disk\_size
    The size of the disks of the instance.

disk\_template
    The disk template to use for the instance, the same as in the
    **add** command.

backend
    A dictionary of backend parameters.

hypervisor
    A dictionary with a single key (the hypervisor name), and as value
    the hypervisor options. If not passed, the default hypervisor and
    hypervisor options will be inherited.

mac, ip, mode, link
    Specifications for the one NIC that will be created for the
    instance. 'bridge' is also accepted as a backwards-compatible
    key.

nics
    List of nics that will be created for the instance. Each entry
    should be a dict, with mac, ip, mode and link as possible keys.
    Please don't provide the "mac, ip, mode, link" parent keys if you
    use this method for specifying nics.

primary\_node, secondary\_node
    The primary and optionally the secondary node to use for the
    instance (in case an iallocator script is not used).

iallocator
    Instead of specifying the nodes, an iallocator script can be used
    to automatically compute them.

start
    whether to start the instance

ip\_check
    Skip the check that the instance's IP is not already in use; see
    the description in the **add** command for details.

name\_check
    Skip the name check for instances; see the description in the
    **add** command for details.

file\_storage\_dir, file\_driver
    Configuration for the file disk type, see the **add** command for
    details.

A simple definition for one instance can be (with most of the
parameters taken from the cluster defaults)::

    {
      "instance3": {
        "template": "drbd",
        "os": "debootstrap",
        "disk_size": ["25G"],
        "iallocator": "dumb"
      },
      "instance5": {
        "template": "drbd",
        "os": "debootstrap",
        "disk_size": ["25G"],
        "iallocator": "dumb",
        "hypervisor": "xen-hvm",
        "hvparams": {"acpi": true},
        "backend": {"memory": 512}
      }
    }

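Since the definition file is plain JSON, it can also be generated
programmatically; a sketch with Python's standard ``json`` module (the
instance name and values are just examples):

```python
import json

# Build a minimal batch-create definition, keyed by instance name.
definition = {
    "instance3": {
        "template": "drbd",
        "os": "debootstrap",
        "disk_size": ["25G"],
        "iallocator": "dumb",
    },
}

# This text would be written to e.g. instances.json.
text = json.dumps(definition, indent=2)
```
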
The command will display the job id for each submitted instance, as
follows::

    # gnt-instance batch-create instances.json
    instance3: 11224
    instance5: 11225

REMOVE
^^^^^^

**remove** [--ignore-failures] [--shutdown-timeout=*N*] [--submit]
{*instance*}

Remove an instance. This will remove all data from the instance and
there is *no way back*. If you are not sure whether you will use an
instance again, use **shutdown** first and leave it in the shutdown
state for a while.

The ``--ignore-failures`` option will cause the removal to proceed
even in the presence of errors during the removal of the instance
(e.g. during the shutdown or the disk removal). If this option is not
given, the command will stop at the first error.

The ``--shutdown-timeout`` option is used to specify how much time to
wait before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing
the kvm process for KVM, etc.). By default two minutes are given to
each instance to stop.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

Example::

    # gnt-instance remove instance1.example.com

LIST
^^^^

| **list**
| [--no-headers] [--separator=*SEPARATOR*] [--units=*UNITS*] [-v]
| [{-o|--output} *[+]FIELD,...*] [instance...]

Shows the currently configured instances with memory usage, disk
usage, the node they are running on, and their run status.

The ``--no-headers`` option will skip the initial header line. The
``--separator`` option takes an argument which denotes what will be
used between the output fields. Both these options are to help
scripting.

The units used to display the numeric values in the output vary,
depending on the options given. By default, the values will be
formatted in the most appropriate unit. If the ``--separator`` option
is given, then the values are shown in mebibytes to allow parsing by
scripts. In both cases, the ``--units`` option can be used to enforce
a given output unit.

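One possible reading of "most appropriate unit" is to promote a
mebibyte value to the largest unit that divides it evenly; Ganeti's
actual formatting heuristic may differ, so treat this purely as a
sketch of the idea:

```python
def format_size(mib):
    """Render a mebibyte count with the largest evenly-dividing unit."""
    if mib % (1024 * 1024) == 0:
        return "%dT" % (mib // (1024 * 1024))
    if mib % 1024 == 0:
        return "%dG" % (mib // 1024)
    return "%dM" % mib
```

With ``--separator`` (or ``--units=m``) the raw mebibyte value would
be printed instead, which is easier for scripts to parse.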
The ``-v`` option activates verbose mode, which changes the display of
special field states (see **ganeti(7)**).

The ``-o (--output)`` option takes a comma-separated list of output
fields. The available fields and their meaning are:

name
    the instance name

os
    the OS of the instance

pnode
    the primary node of the instance

snodes
    comma-separated list of secondary nodes for the instance; usually
    this will be just one node

admin\_state
    the desired state of the instance (either "yes" or "no" denoting
    the instance should run or not)

disk\_template
    the disk template of the instance

oper\_state
    the actual state of the instance; can be one of the values
    "running", "stopped", "(node down)"

status
    combined form of ``admin_state`` and ``oper_state``; this can be
    one of: ``ERROR_nodedown`` if the node of the instance is down,
    ``ERROR_down`` if the instance should run but is down,
    ``ERROR_up`` if the instance should be stopped but is actually
    running, ``ERROR_wrongnode`` if the instance is running but not on
    the primary, ``ADMIN_down`` if the instance has been stopped (and
    is stopped) and ``running`` if the instance is set to be running
    (and is running)

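The combination rule for ``status`` can be sketched as follows (only
some of the documented cases are modeled; ``ERROR_wrongnode`` is
omitted for brevity, and this is not Ganeti's actual code):

```python
def combined_status(admin_up, oper_running, node_down=False):
    """Derive the combined status from admin and operational state."""
    if node_down:
        return "ERROR_nodedown"
    if admin_up and oper_running:
        return "running"
    if admin_up:
        return "ERROR_down"      # should run, but is down
    if oper_running:
        return "ERROR_up"        # should be stopped, but is running
    return "ADMIN_down"          # stopped, and meant to be stopped
```
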
699 |
oper\_ram |
700 |
the actual memory usage of the instance as seen by the hypervisor |
701 |
|
702 |
oper\_vcpus |
703 |
the actual number of VCPUs the instance is using as seen by the |
704 |
hypervisor |
705 |
|
706 |
ip |
707 |
the ip address Ganeti recognizes as associated with the first |
708 |
instance interface |
709 |
|
710 |
mac |
711 |
the first instance interface MAC address |
712 |
|
713 |
nic\_mode |
714 |
the mode of the first instance NIC (routed or bridged) |
715 |
|
716 |
nic\_link |
717 |
the link of the first instance NIC |
718 |
|
719 |
sda\_size |
720 |
the size of the instance's first disk |
721 |
|
722 |
sdb\_size |
723 |
the size of the instance's second disk, if any |
724 |
|
725 |
vcpus |
726 |
the number of VCPUs allocated to the instance |
727 |
|
728 |
tags |
729 |
comma-separated list of the instances's tags |
730 |
|
731 |
serial\_no |
732 |
the so called 'serial number' of the instance; this is a numeric |
733 |
field that is incremented each time the instance is modified, and |
734 |
it can be used to track modifications |
735 |
|
736 |
ctime |
737 |
the creation time of the instance; note that this field contains |
738 |
spaces and as such it's harder to parse |
739 |
|
740 |
if this attribute is not present (e.g. when upgrading from older |
741 |
versions), then "N/A" will be shown instead |
742 |
|
743 |
mtime |
744 |
the last modification time of the instance; note that this field |
745 |
contains spaces and as such it's harder to parse |
746 |
|
747 |
if this attribute is not present (e.g. when upgrading from older |
748 |
versions), then "N/A" will be shown instead |
749 |
|
750 |
uuid |
751 |
Show the UUID of the instance (generated automatically by Ganeti) |
752 |
|
753 |
network\_port |
754 |
If the instance has a network port assigned to it (e.g. for VNC |
755 |
connections), this will be shown, otherwise - will be displayed. |
756 |
|
757 |
beparams |
758 |
A text format of the entire beparams for the instance. It's more |
759 |
useful to select individual fields from this dictionary, see |
760 |
below. |
761 |
|
762 |
disk.count |
763 |
The number of instance disks. |
764 |
|
765 |
disk.size/N |
766 |
The size of the instance's Nth disk. This is a more generic form of |
767 |
the sda\_size and sdb\_size fields. |
768 |
|
769 |
disk.sizes |
770 |
A comma-separated list of the disk sizes for this instance. |
771 |
|
772 |
disk\_usage |
773 |
The total disk space used by this instance on each of its nodes. |
774 |
This is not the instance-visible disk size, but the actual disk |
775 |
"cost" of the instance. |
776 |
|
777 |
nic.mac/N |
778 |
The MAC of the Nth instance NIC. |
779 |
|
780 |
nic.ip/N |
781 |
The IP address of the Nth instance NIC. |
782 |
|
783 |
nic.mode/N |
784 |
The mode of the Nth instance NIC |
785 |
|
786 |
nic.link/N |
787 |
The link of the Nth instance NIC |
788 |
|
789 |
nic.macs |
790 |
A comma-separated list of all the MACs of the instance's NICs. |
791 |
|
792 |
nic.ips |
793 |
A comma-separated list of all the IP addresses of the instance's |
794 |
NICs. |
795 |
|
796 |
nic.modes |
797 |
A comma-separated list of all the modes of the instance's NICs. |
798 |
|
799 |
nic.links |
800 |
A comma-separated list of all the link parameters of the instance's |
801 |
NICs. |
802 |
|
803 |
nic.count |
804 |
The number of instance nics. |

hv/*NAME*
  The value of the hypervisor parameter called *NAME*. For details of
  what hypervisor parameters exist and their meaning, see the **add**
  command.

be/memory
  The configured memory for the instance.

be/vcpus
  The configured number of VCPUs for the instance.

be/auto\_balance
  Whether the instance is considered in N+1 checks.


If the value of the option starts with the character ``+``, the new
field(s) will be added to the default list. This allows you to quickly
see the default list plus a few other fields, instead of retyping the
entire list of fields.

There is a subtle grouping among the available output fields: all
fields except for ``oper_state``, ``oper_ram``, ``oper_vcpus`` and
``status`` are configuration values, not run-time values. So if you
don't select any of these fields, the query will be satisfied
instantly from the cluster configuration, without having to ask the
remote nodes for the data. This can be helpful for big clusters when
you only want some data and it makes sense to specify a reduced set of
output fields.

The default output field list is: name, os, pnode, admin\_state,
oper\_state, oper\_ram.
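
For example, to show the default fields plus the configured memory and
the disk sizes (the instance output depends on your cluster)::

    # gnt-instance list -o +be/memory,disk.sizes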


LIST-FIELDS
^^^^^^^^^^^

**list-fields** [field...]

Lists available fields for instances.


INFO
^^^^

**info** [-s \| --static] [--roman] {--all \| *instance*}

Show detailed information about the given instance(s). This is
different from **list** as it shows detailed data about the instance's
disks (especially useful for the drbd disk template).

If the option ``-s`` is used, only information available in the
configuration file is returned, without querying nodes, making the
operation faster.

Use the ``--all`` option to get info about all instances, rather than
explicitly passing the ones you're interested in.

The ``--roman`` option can be used to cause envy among people who like
ancient cultures, but are stuck with non-latin-friendly cluster
virtualization technologies.
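
For example, to show only the configuration data of an instance,
without querying its nodes::

    # gnt-instance info -s instance1.example.com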


MODIFY
^^^^^^

| **modify**
| [{-H|--hypervisor-parameters} *HYPERVISOR\_PARAMETERS*]
| [{-B|--backend-parameters} *BACKEND\_PARAMETERS*]
| [--net add*[:options]* \| --net remove \| --net *N:options*]
| [--disk add:size=*SIZE*[,vg=*VG*][,metavg=*VG*] \| --disk remove \|
| --disk *N*:mode=*MODE*]
| [{-t|--disk-template} plain \| {-t|--disk-template} drbd -n *new_secondary*] [--no-wait-for-sync]
| [--os-type=*OS* [--force-variant]]
| [{-O|--os-parameters} *param*=*value*... ]
| [--submit]
| {*instance*}

Modifies the memory size, number of vcpus, ip address, MAC address
and/or nic parameters for an instance. It can also add and remove
disks and NICs to/from the instance. Note that you need to give at
least one of the arguments, otherwise the command complains.

The ``-H (--hypervisor-parameters)``, ``-B (--backend-parameters)``
and ``-O (--os-parameters)`` options specify hypervisor, backend and
OS parameter options in the form of name=value[,...]. For details on
which options can be specified, see the **add** command.

The ``-t (--disk-template)`` option will change the disk template of
the instance. Currently only conversions between the plain and drbd
disk templates are supported, and the instance must be stopped before
attempting the conversion. When changing from the plain to the drbd
disk template, a new secondary node must be specified via the ``-n``
option. The option ``--no-wait-for-sync`` can be used when converting
to the ``drbd`` template in order to make the instance available for
startup before DRBD has finished resyncing.

The ``--disk add:size=``*SIZE* option adds a disk to the instance. The
optional ``vg=``*VG* option specifies an LVM volume group other than
the default one on which to create the disk. For DRBD disks, the
``metavg=``*VG* option specifies the volume group for the metadata
device. The ``--disk remove`` option will remove the last disk of the
instance. The ``--disk`` *N*``:mode=``*MODE* option will change the
mode of the Nth disk of the instance between read-only (``ro``) and
read-write (``rw``).

The ``--net add:``*options* option will add a new NIC to the
instance. The available options are the same as in the **add** command
(mac, ip, link, mode). The ``--net remove`` option will remove the
last NIC of the instance, while the ``--net`` *N*:*options* option
will change the parameters of the Nth instance NIC.

The option ``-o (--os-type)`` will change the OS name for the instance
(without reinstallation). In case an OS variant is specified that is
not found, then by default the modification is refused, unless
``--force-variant`` is passed. An invalid OS will also be refused,
unless the ``--force`` option is given.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

All the changes take effect at the next restart. If the instance is
running, there is no immediate effect on it.
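
For example, to change the number of VCPUs, add a disk, or switch a
NIC to routed mode (the instance name and disk size are illustrative)::

    # gnt-instance modify -B vcpus=2 instance1.example.com
    # gnt-instance modify --disk add:size=10g instance1.example.com
    # gnt-instance modify --net 0:mode=routed instance1.example.com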


REINSTALL
^^^^^^^^^

| **reinstall** [{-o|--os-type} *os-type*] [--select-os] [-f *force*]
| [--force-multiple]
| [--instance \| --node \| --primary \| --secondary \| --all]
| [{-O|--os-parameters} *OS\_PARAMETERS*] [--submit] {*instance*...}

Reinstalls the operating system on the given instance(s). The
instance(s) must be stopped when running this command. If the ``-o
(--os-type)`` option is specified, the operating system is changed.

The ``--select-os`` option switches to an interactive OS reinstall.
The user is prompted to select the OS template from the list of
available OS templates. OS parameters can be overridden using ``-O
(--os-parameters)`` (more documentation for this option under the
**add** command).

Since this is a potentially dangerous command, the user will be
required to confirm this action, unless the ``-f`` flag is passed.
When multiple instances are selected (either by passing multiple
arguments or by using the ``--node``, ``--primary``, ``--secondary``
or ``--all`` options), the user must pass the ``--force-multiple``
option to skip the interactive confirmation.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.
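
For example, to reinstall an instance with a different operating
system (the OS name is illustrative and depends on the OS definitions
installed on your cluster)::

    # gnt-instance reinstall -o debootstrap instance1.example.com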


RENAME
^^^^^^

| **rename** [--no-ip-check] [--no-name-check] [--submit]
| {*instance*} {*new\_name*}

Renames the given instance. The instance must be stopped when running
this command. The requirements for the new name are the same as for
adding an instance: the new name must be resolvable and the IP it
resolves to must not be reachable (in order to prevent duplicate IPs
the next time the instance is started). The IP test can be skipped if
the ``--no-ip-check`` option is passed.

The ``--no-name-check`` option skips the check for the new instance
name via the resolver (e.g. in DNS or /etc/hosts, depending on your
setup). Since the name check is used to compute the IP address, if you
pass this option you must also pass the ``--no-ip-check`` option.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.
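
Example::

    # gnt-instance rename instance1.example.com instance2.example.com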


Starting/stopping/connecting to console
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

STARTUP
^^^^^^^

| **startup**
| [--force] [--ignore-offline]
| [--force-multiple]
| [--instance \| --node \| --primary \| --secondary \| --all \|
| --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
| [{-H|--hypervisor-parameters} ``key=value...``]
| [{-B|--backend-parameters} ``key=value...``]
| [--submit]
| {*name*...}

Starts one or more instances, depending on the following options. The
available selection modes are:

--instance
  will start the instances given as arguments (at least one argument
  required); this is the default selection

--node
  will start the instances that have the given node as either primary
  or secondary

--primary
  will start all instances whose primary node is in the list of nodes
  passed as arguments (at least one node required)

--secondary
  will start all instances whose secondary node is in the list of
  nodes passed as arguments (at least one node required)

--all
  will start all instances in the cluster (no arguments accepted)

--tags
  will start all instances in the cluster with the tags given as
  arguments

--node-tags
  will start all instances in the cluster on nodes with the tags
  given as arguments

--pri-node-tags
  will start all instances in the cluster on primary nodes with the
  tags given as arguments

--sec-node-tags
  will start all instances in the cluster on secondary nodes with the
  tags given as arguments

Note that although you can pass more than one selection option, the
last one wins, so in order to guarantee the desired result, don't pass
more than one such option.

Use ``--force`` to start even if secondary disks are failing.
``--ignore-offline`` can be used to ignore offline primary nodes and
mark the instance as started even if the primary is not available.

The ``--force-multiple`` option will skip the interactive confirmation
in case more than one instance will be affected.

The ``-H (--hypervisor-parameters)`` and ``-B (--backend-parameters)``
options specify temporary hypervisor and backend parameters that can
be used to start an instance with modified parameters. They can be
useful for quick testing without having to modify an instance back and
forth, e.g.::

    # gnt-instance start -H root_args="single" instance1
    # gnt-instance start -B memory=2048 instance2

The first form will start the instance instance1 in single-user mode,
and the instance instance2 with 2GB of RAM (this time only, unless
that is the actual instance memory size already). Note that the values
override the instance parameters (and not extend them): an instance
with "root\_args=ro" when started with -H root\_args=single will
result in "single", not "ro single".

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

Example::

    # gnt-instance start instance1.example.com
    # gnt-instance start --node node1.example.com node2.example.com
    # gnt-instance start --all


SHUTDOWN
^^^^^^^^

| **shutdown**
| [--timeout=*N*]
| [--force-multiple] [--ignore-offline]
| [--instance \| --node \| --primary \| --secondary \| --all \|
| --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
| [--submit]
| {*name*...}

Stops one or more instances. If the instance cannot be cleanly stopped
within the given interval, it will forcibly stop the instance
(equivalent to switching off the power on a physical machine).

The ``--timeout`` option is used to specify how much time to wait
before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing the
kvm process for KVM, etc.). By default two minutes are given to each
instance to stop.

The ``--instance``, ``--node``, ``--primary``, ``--secondary``,
``--all``, ``--tags``, ``--node-tags``, ``--pri-node-tags`` and
``--sec-node-tags`` options are similar to those of the **startup**
command and they influence the actual instances being shut down.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

``--ignore-offline`` can be used to ignore offline primary nodes and
force the instance to be marked as stopped. This option should be used
with care as it can lead to an inconsistent cluster state.

Example::

    # gnt-instance shutdown instance1.example.com
    # gnt-instance shutdown --all


REBOOT
^^^^^^

| **reboot**
| [{-t|--type} *REBOOT-TYPE*]
| [--ignore-secondaries]
| [--shutdown-timeout=*N*]
| [--force-multiple]
| [--instance \| --node \| --primary \| --secondary \| --all \|
| --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
| [--submit]
| [*name*...]

Reboots one or more instances. The type of reboot depends on the value
of ``-t (--type)``. A soft reboot does a hypervisor reboot; a hard
reboot does an instance stop, recreates the hypervisor config for the
instance and starts the instance. A full reboot does the equivalent of
**gnt-instance shutdown && gnt-instance startup**. The default is a
hard reboot.

For the hard reboot the option ``--ignore-secondaries`` ignores errors
for the secondary node while re-assembling the instance disks.

The ``--instance``, ``--node``, ``--primary``, ``--secondary``,
``--all``, ``--tags``, ``--node-tags``, ``--pri-node-tags`` and
``--sec-node-tags`` options are similar to those of the **startup**
command and they influence the actual instances being rebooted.

The ``--shutdown-timeout`` option is used to specify how much time to
wait before forcing the shutdown (``xm destroy`` in Xen, killing the
kvm process, for KVM). By default two minutes are given to each
instance to stop.

The ``--force-multiple`` option will skip the interactive confirmation
in case more than one instance will be affected.

Example::

    # gnt-instance reboot instance1.example.com
    # gnt-instance reboot --type=full instance1.example.com


CONSOLE
^^^^^^^

**console** [--show-cmd] {*instance*}

Connects to the console of the given instance. If the instance is not
up, an error is returned. Use the ``--show-cmd`` option to display the
command instead of executing it.

For HVM instances, this will attempt to connect to the serial console
of the instance. To connect to the virtualized "physical" console of
an HVM instance, use a VNC client with the connection info from the
**info** command.

Example::

    # gnt-instance console instance1.example.com


Disk management
~~~~~~~~~~~~~~~

REPLACE-DISKS
^^^^^^^^^^^^^

**replace-disks** [--submit] [--early-release] {-p} [--disks *idx*]
{*instance*}

**replace-disks** [--submit] [--early-release] {-s} [--disks *idx*]
{*instance*}

**replace-disks** [--submit] [--early-release] {--iallocator *name*
\| --new-secondary *NODE*} {*instance*}

**replace-disks** [--submit] [--early-release] {--auto}
{*instance*}

This command is a generalized form for replacing disks. It is
currently only valid for the mirrored (DRBD) disk template.

The first form (when passing the ``-p`` option) will replace the disks
on the primary, while the second form (when passing the ``-s`` option)
will replace the disks on the secondary node. For these two cases (as
the node doesn't change), it is possible to only run the replace for a
subset of the disks, using the option ``--disks`` which takes a list
of comma-delimited disk indices (zero-based), e.g. 0,2 to replace only
the first and third disks.

The third form (when passing either the ``--iallocator`` or the
``--new-secondary`` option) is designed to change the secondary node
of the instance. Specifying ``--iallocator`` makes the new secondary
be selected automatically by the specified allocator plugin, otherwise
the new secondary node will be the one chosen manually via the
``--new-secondary`` option.

The fourth form (when using ``--auto``) will automatically determine
which disks of an instance are faulty and replace them within the same
node. The ``--auto`` option works only when an instance has only
faulty disks on either the primary or secondary node; it doesn't work
when both sides have faulty disks.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

The ``--early-release`` option changes the code so that the old
storage on the secondary node(s) is removed early (before the resync
is completed) and the internal Ganeti locks for the current (and new,
if any) secondary node are also released, thus allowing more
parallelism in the cluster operation. This should be used only when
recovering from a disk failure on the current secondary (thus the old
storage is already broken) or when the storage on the primary node is
known to be fine (thus we won't need the old storage for potential
recovery).

Note that it is not possible to select an offline or drained node as a
new secondary.
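
For example, to replace only the first and third disks on the
secondary node, or to pick a new secondary via an allocator plugin
(the allocator name is illustrative)::

    # gnt-instance replace-disks -s --disks 0,2 instance1.example.com
    # gnt-instance replace-disks --iallocator hail instance1.example.com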

ACTIVATE-DISKS
^^^^^^^^^^^^^^

**activate-disks** [--submit] [--ignore-size] {*instance*}

Activates the block devices of the given instance. If successful, the
command will show the location and name of the block devices::

    node1.example.com:disk/0:/dev/drbd0
    node1.example.com:disk/1:/dev/drbd1

In this example, *node1.example.com* is the name of the node on which
the devices have been activated. The *disk/0* and *disk/1* are the
Ganeti-names of the instance disks; how they are visible inside the
instance is hypervisor-specific. */dev/drbd0* and */dev/drbd1* are the
actual block devices as visible on the node.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

The ``--ignore-size`` option can be used to activate disks ignoring
the currently configured size in Ganeti. This can be used in cases
where the configuration has gotten out of sync with the real-world
data (e.g. after a partially-failed grow-disk operation or due to
rounding in LVM devices). This should not be used in normal cases, but
only when activate-disks fails without it.

Note that it is safe to run this command while the instance is already
running.

DEACTIVATE-DISKS
^^^^^^^^^^^^^^^^

**deactivate-disks** [-f] [--submit] {*instance*}

De-activates the block devices of the given instance. Note that if you
run this command for an instance with a drbd disk template while it is
running, it will not be able to shut down the block devices on the
primary node, but it will shut down the block devices on the secondary
nodes, thus breaking the replication.

The ``-f``/``--force`` option will skip checks that the instance is
down; in case the hypervisor is confused and we can't talk to it,
normally Ganeti will refuse to deactivate the disks, but with this
option passed it will skip this check and directly try to deactivate
the disks. This can still fail due to the instance actually running or
other issues.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.
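
Example::

    # gnt-instance deactivate-disks instance1.example.com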

GROW-DISK
^^^^^^^^^

**grow-disk** [--no-wait-for-sync] [--submit] {*instance*} {*disk*}
{*amount*}

Grows an instance's disk. This is only possible for instances having a
plain or drbd disk template.

Note that this command only changes the block device size; it will not
grow the actual filesystems, partitions, etc. that live on that
disk. Usually, you will need to:

#. use **gnt-instance grow-disk**

#. reboot the instance (later, at a convenient time)

#. use a filesystem resizer, such as ext2online(8) or
   xfs\_growfs(8), to resize the filesystem, or use fdisk(8) to change
   the partition table on the disk

The *disk* argument is the index of the instance disk to grow. The
*amount* argument is given either as a number (representing the amount
in mebibytes by which to increase the disk) or with a suffix denoting
the unit, similar to the arguments of the instance creation operation.

Note that the disk grow operation might complete on one node but fail
on the other; this will leave the instance with different-sized LVs on
the two nodes, but this will not create problems (except for unused
space).

If you do not want gnt-instance to wait for the new disk region to be
synced, use the ``--no-wait-for-sync`` option.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

Example (increase the first disk for instance1 by 16GiB)::

    # gnt-instance grow-disk instance1.example.com 0 16g

Also note that disk shrinking is not supported; use **gnt-backup
export** and then **gnt-backup import** to reduce the disk size of an
instance.

RECREATE-DISKS
^^^^^^^^^^^^^^

**recreate-disks** [--submit] [--disks=``indices``] [-n node1:[node2]]
{*instance*}

Recreates the disks of the given instance, or only a subset of the
disks (if the ``--disks`` option is passed, which must be a
comma-separated list of disk indices, starting from zero).

Note that this functionality should only be used for missing disks; if
any of the given disks already exists, the operation will fail. While
this is suboptimal, recreate-disks should hopefully not be needed in
normal operation and as such the impact of this is low.

Optionally the instance's disks can be recreated on different
nodes. This can be useful if, for example, the original nodes of the
instance have gone down (and are marked offline), so we can't recreate
on the same nodes. To do this, pass the new node(s) via the ``-n``
option, with a syntax similar to the **add** command. The number of
nodes passed must equal the number of nodes that the instance
currently has. Note that changing nodes is only allowed for 'all disk'
replacement (when ``--disks`` is not passed).

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.
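
For example, to recreate all disks of a DRBD instance on a new pair of
nodes (the node names are illustrative)::

    # gnt-instance recreate-disks -n node3.example.com:node4.example.com instance1.example.com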

Recovery
~~~~~~~~

FAILOVER
^^^^^^^^

**failover** [-f] [--ignore-consistency] [--shutdown-timeout=*N*]
[--submit] {*instance*}

Failover will fail the instance over to its secondary node. This works
only for instances having a drbd disk template.

Normally the failover will check the consistency of the disks before
failing over the instance. If you are trying to migrate instances off
a dead node, this will fail. Use the ``--ignore-consistency`` option
for this purpose. Note that this option can be dangerous as errors in
shutting down the instance will be ignored, resulting in possibly
having the instance running on two machines in parallel (on
disconnected DRBD drives).

The ``--shutdown-timeout`` option is used to specify how much time to
wait before forcing the shutdown (``xm destroy`` in Xen, killing the
kvm process, for KVM). By default two minutes are given to each
instance to stop.

The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.

Example::

    # gnt-instance failover instance1.example.com

MIGRATE
^^^^^^^

**migrate** [-f] {--cleanup} {*instance*}

**migrate** [-f] [--non-live] [--migration-mode=live\|non-live]
{*instance*}

Migrate will move the instance to its secondary node without
shutdown. It only works for instances having the drbd8 disk template
type.

The migration command needs a perfectly healthy instance, as we rely
on the dual-master capability of drbd8, and the disks of the instance
are not allowed to be degraded.

The ``--non-live`` and ``--migration-mode=non-live`` options will
switch (for the hypervisors that support it) between a "fully live"
(i.e. the interruption is as minimal as possible) migration and one in
which the instance is frozen, its state saved and transported to the
remote node, and then resumed there. This all depends on the
hypervisor support for the two different methods. In any case, it is
not an error to pass this parameter (it will just be ignored if the
hypervisor doesn't support it). The ``--migration-mode=live`` option
will request a fully-live migration. The default, when neither option
is passed, depends on the hypervisor parameters (and can be viewed
with the **gnt-cluster info** command).

If the ``--cleanup`` option is passed, the operation changes from
migration to attempting recovery from a failed previous migration. In
this mode, Ganeti checks if the instance runs on the correct node (and
updates its configuration if not) and ensures the instance's disks
are configured correctly. In this mode, the ``--non-live`` option is
ignored.

The option ``-f`` will skip the prompting for confirmation.

Example (and expected output)::

    # gnt-instance migrate instance1
    Migrate will happen to the instance instance1. Note that migration is
    **experimental** in this version. This might impact the instance if
    anything goes wrong. Continue?
    y/[n]/?: y
    * checking disk consistency between source and target
    * ensuring the target is in secondary mode
    * changing disks into dual-master mode
     - INFO: Waiting for instance instance1 to sync disks.
     - INFO: Instance instance1's disks are in sync.
    * migrating instance to node2.example.com
    * changing the instance's disks on source node to secondary
     - INFO: Waiting for instance instance1 to sync disks.
     - INFO: Instance instance1's disks are in sync.
    * changing the instance's disks to single-master
    #
1450 |
|
1451 |
|
1452 |
MOVE |
1453 |
^^^^ |
1454 |
|
1455 |
**move** [-f] [-n *node*] [--shutdown-timeout=*N*] [--submit] |
1456 |
{*instance*} |
1457 |
|
1458 |
Move will move the instance to an arbitrary node in the cluster. This |
1459 |
works only for instances having a plain or file disk template. |
1460 |
|
1461 |
Note that since this operation is done via data copy, it will take a |
1462 |
long time for big disks (similar to replace-disks for a drbd |
1463 |
instance). |
1464 |
|
1465 |
The ``--shutdown-timeout`` is used to specify how much time to wait |
1466 |
before forcing the shutdown (e.g. ``xm destroy`` in XEN, killing the |
1467 |
kvm process for KVM, etc.). By default two minutes are given to each |
1468 |
instance to stop. |
1469 |
|
1470 |
The ``--submit`` option is used to send the job to the master daemon |
1471 |
but not wait for its completion. The job ID will be shown so that it |
1472 |
can be examined via **gnt-job info**. |
1473 |
|
1474 |
Example:: |
1475 |
|
1476 |
# gnt-instance move -n node3.example.com instance1.example.com |
1477 |
|
1478 |
|
1479 |
TAGS |
1480 |
~~~~ |
1481 |
|
1482 |
ADD-TAGS |
1483 |
^^^^^^^^ |
1484 |
|
1485 |
**add-tags** [--from *file*] {*instancename*} {*tag*...} |
1486 |
|
1487 |
Add tags to the given instance. If any of the tags contains invalid |
1488 |
characters, the entire operation will abort. |
1489 |
|
1490 |
If the ``--from`` option is given, the list of tags will be extended |
1491 |
with the contents of that file (each line becomes a tag). In this |
1492 |
case, there is not need to pass tags on the command line (if you do, |
1493 |
both sources will be used). A file name of ``-`` will be interpreted |
1494 |
as stdin. |
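
For example, to tag an instance directly or from a file (the tag names
and file path are illustrative)::

    # gnt-instance add-tags instance1.example.com webserver production
    # gnt-instance add-tags --from /tmp/tags instance1.example.com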

LIST-TAGS
^^^^^^^^^

**list-tags** {*instancename*}

List the tags of the given instance.

REMOVE-TAGS
^^^^^^^^^^^

**remove-tags** [--from *file*] {*instancename*} {*tag*...}

Remove tags from the given instance. If any of the given tags does
not exist on the instance, the entire operation will abort.

If the ``--from`` option is given, the list of tags to be removed will
be extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line (if
you do, tags from both sources will be removed). A file name of ``-``
will be interpreted as stdin.