gnt-instance(8) Ganeti | Version @GANETI_VERSION@
=================================================

Name
----

gnt-instance - Ganeti instance administration

Synopsis
--------

**gnt-instance** {command} [arguments...]

DESCRIPTION
-----------

The **gnt-instance** command is used for instance administration in
the Ganeti system.

COMMANDS
--------

Creation/removal/querying
~~~~~~~~~~~~~~~~~~~~~~~~~

ADD
^^^

| **add**
| {-t {diskless \| file \| plain \| drbd}}
| {--disk=*N*: {size=*VAL* \| adopt=*LV*}[,vg=*VG*][,mode=*ro\|rw*]
|  \| -s *SIZE*}
| [--no-ip-check] [--no-name-check] [--no-start] [--no-install]
| [--net=*N* [:options...] \| --no-nics]
| [-B *BEPARAMS*]
| [-H *HYPERVISOR* [: option=*value*... ]]
| [--file-storage-dir *dir\_path*] [--file-driver {loop \| blktap}]
| {-n *node[:secondary-node]* \| --iallocator *name*}
| {-o *os-type*}
| [--submit]
| {*instance*}

Creates a new instance on the specified host. The *instance* argument
must be in DNS, but depending on the bridge/routing setup, need not be
in the same network as the nodes in the cluster.

The ``disk`` option specifies the parameters for the disks of the
instance. The numbering of disks starts at zero, and at least one disk
needs to be passed. For each disk, either the size or the adoption
source needs to be given, and optionally the access mode (read-only or
the default of read-write) and LVM volume group can also be specified.
The size is interpreted (when no unit is given) in mebibytes. You can
also use one of the suffixes *m*, *g* or *t* to specify the units
used; these suffixes map to mebibytes, gibibytes and tebibytes.

When using the ``adopt`` key in the disk definition, Ganeti will
reuse those volumes (instead of creating new ones) as the
instance's disks. Ganeti will rename these volumes to the standard
format, and (without installing the OS) will use them as-is for the
instance. This allows migrating instances from non-managed mode
(e.g. plain KVM with LVM) to being managed via Ganeti. Note that
this works only for the ``plain`` disk template (see below for
template details).

Alternatively, a single-disk instance can be created via the ``-s``
option which takes a single argument, the size of the disk. This is
similar to the Ganeti 1.2 version (but will only create one disk).

The minimum disk specification is therefore ``--disk 0:size=20G`` (or
``-s 20G`` when using the ``-s`` option), and a three-disk instance
can be specified as ``--disk 0:size=20G --disk 1:size=4G --disk
2:size=100G``.

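As a sketch, adopting an existing logical volume into a new
single-disk instance might look like this (the volume group and LV
names are purely illustrative)::

    # gnt-instance add -t plain --disk 0:adopt=oldvolume,vg=xenvg \
      -n node1.example.com -o debian-etch instance1.example.com
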
The ``--no-ip-check`` option skips the check that is done to verify
that the instance's IP is not already alive (i.e. reachable from the
master node).

The ``--no-name-check`` option skips the check for the instance name
via the resolver (e.g. in DNS or /etc/hosts, depending on your setup).
Since the name check is used to compute the IP address, if you pass
this option you must also pass the ``--no-ip-check`` option.

If you don't want the instance to automatically start after
creation, this is possible via the ``--no-start`` option. This will
leave the instance down until a subsequent **gnt-instance start**
command.

The NICs of the instances can be specified via the ``--net``
option. By default, one NIC is created for the instance, with a
random MAC, and set up according to the cluster-level NIC
parameters. Each NIC can take these parameters (all optional):

mac
    either a value or 'generate' to generate a new unique MAC

ip
    specifies the IP address assigned to the instance from the Ganeti
    side (this is not necessarily what the instance will use, but what
    the node expects the instance to use)

mode
    specifies the connection mode for this NIC: routed or bridged.

link
    in bridged mode specifies the bridge to attach this NIC to, in
    routed mode it's intended to differentiate between different
    routing tables/instance groups (but the meaning is dependent on the
    network script, see gnt-cluster(8) for more details)

Of these, "mode" and "link" are NIC parameters, and inherit their
defaults from the cluster level.

Alternatively, if no network is desired for the instance, you can
prevent the default of one NIC with the ``--no-nics`` option.

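For example, requesting a single bridged NIC on a specific bridge
with a fixed IP (the bridge name and address below are illustrative)
could look like::

    # gnt-instance add --net 0:ip=192.0.2.10,mode=bridged,link=br0 \
      -t plain --disk 0:size=10G -n node1.example.com -o debian-etch \
      instance1.example.com
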
The ``-o`` option specifies the operating system to be installed.
The available operating systems can be listed with **gnt-os list**.
Passing ``--no-install`` will however skip the OS installation,
allowing a manual import if so desired. Note that the
no-installation mode will automatically disable the start-up of the
instance (without an OS, it most likely won't be able to start up
successfully).

The ``-B`` option specifies the backend parameters for the
instance. If no such parameters are specified, the values are
inherited from the cluster. Possible parameters are:

memory
    the memory size of the instance; as usual, suffixes can be used to
    denote the unit, otherwise the value is taken in mebibytes

vcpus
    the number of VCPUs to assign to the instance (if this value makes
    sense for the hypervisor)

auto\_balance
    whether the instance is considered in the N+1 cluster checks
    (enough redundancy in the cluster to survive a node failure)

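Assuming the usual comma-separated name=value syntax for ``-B``, a
custom memory and VCPU assignment might be requested like this
(values illustrative)::

    # gnt-instance add -B memory=1G,vcpus=2 -t plain --disk 0:size=10G \
      -n node1.example.com -o debian-etch instance1.example.com
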
The ``-H`` option specifies the hypervisor to use for the instance
(must be one of the enabled hypervisors on the cluster) and
optionally custom parameters for this instance. If no other
options are used (i.e. the invocation is just ``-H`` *NAME*) the
instance will inherit the cluster options. The defaults below show
the cluster defaults at cluster creation time.

The possible hypervisor options are as follows:

boot\_order
    Valid for the Xen HVM and KVM hypervisors.

    A string value denoting the boot order. This has different meaning
    for the Xen HVM hypervisor and for the KVM one.

    For Xen HVM, the boot order is a string of letters listing the boot
    devices, with valid device letters being:

    a
        floppy drive

    c
        hard disk

    d
        CDROM drive

    n
        network boot (PXE)

    The default is to not set an HVM boot order, which is interpreted
    as 'dc'.

    For KVM the boot order is either "cdrom", "disk" or "network".
    Please note that older versions of KVM couldn't netboot from virtio
    interfaces. This has been fixed in more recent versions and is
    confirmed to work at least with qemu-kvm 0.11.1.

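As an illustration, PXE-booting a KVM instance (with all other
hypervisor parameters inherited from the cluster) might be requested
like this::

    # gnt-instance add -H kvm:boot_order=network -t plain \
      --disk 0:size=10G -n node1.example.com -o debian-etch \
      instance1.example.com
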
cdrom\_image\_path
    Valid for the Xen HVM and KVM hypervisors.

    The path to a CDROM image to attach to the instance.

nic\_type
    Valid for the Xen HVM and KVM hypervisors.

    This parameter determines the way the network cards are presented
    to the instance. The possible options are:

    - rtl8139 (default for Xen HVM) (HVM & KVM)
    - ne2k\_isa (HVM & KVM)
    - ne2k\_pci (HVM & KVM)
    - i82551 (KVM)
    - i82557b (KVM)
    - i82559er (KVM)
    - pcnet (KVM)
    - e1000 (KVM)
    - paravirtual (default for KVM) (HVM & KVM)

disk\_type
    Valid for the Xen HVM and KVM hypervisors.

    This parameter determines the way the disks are presented to the
    instance. The possible options are:

    - ioemu (default for HVM & KVM) (HVM & KVM)
    - ide (HVM & KVM)
    - scsi (KVM)
    - sd (KVM)
    - mtd (KVM)
    - pflash (KVM)

vnc\_bind\_address
    Valid for the Xen HVM and KVM hypervisors.

    Specifies the address that the VNC listener for this instance
    should bind to. Valid values are IPv4 addresses. Use the address
    0.0.0.0 to bind to all available interfaces (this is the default)
    or specify the address of one of the interfaces on the node to
    restrict listening to that interface.

vnc\_tls
    Valid for the KVM hypervisor.

    A boolean option that controls whether the VNC connection is
    secured with TLS.

vnc\_x509\_path
    Valid for the KVM hypervisor.

    If ``vnc_tls`` is enabled, this option specifies the path to the
    x509 certificate to use.

vnc\_x509\_verify
    Valid for the KVM hypervisor.

acpi
    Valid for the Xen HVM and KVM hypervisors.

    A boolean option that specifies if the hypervisor should enable
    ACPI support for this instance. By default, ACPI is disabled.

pae
    Valid for the Xen HVM and KVM hypervisors.

    A boolean option that specifies if the hypervisor should enable
    PAE support for this instance. The default is false, disabling PAE
    support.

use\_localtime
    Valid for the Xen HVM and KVM hypervisors.

    A boolean option that specifies if the instance should be started
    with its clock set to the localtime of the machine (when true) or
    to UTC (when false). The default is false, which is useful for
    Linux/Unix machines; for Windows OSes, it is recommended to enable
    this parameter.

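Following the recommendation above, a Windows instance under KVM
might therefore be created with (OS name illustrative)::

    # gnt-instance add -H kvm:use_localtime=true -t plain \
      --disk 0:size=20G -n node1.example.com -o windows-image \
      instance1.example.com
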
kernel\_path
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies the path (on the node) to the kernel to boot
    the instance with. Xen PVM instances always require this, while for
    KVM if this option is empty, it will cause the machine to load the
    kernel from its disks.

kernel\_args
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies extra arguments for the kernel that will be
    loaded. This is always used for Xen PVM, while for KVM it is only
    used if the ``kernel_path`` option is also specified.

    The default setting for this value is simply ``"ro"``, which mounts
    the root disk (initially) in read-only mode. For example, setting
    this to ``single`` will cause the instance to start in single-user
    mode.

initrd\_path
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies the path (on the node) to the initrd to boot
    the instance with. Xen PVM instances can always use this, while for
    KVM it is only used if the ``kernel_path`` option is also
    specified. You can pass here either an absolute filename (the
    path to the initrd) if you want to use an initrd, or use the format
    no\_initrd\_path for no initrd.

root\_path
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies the name of the root device. This is always
    needed for Xen PVM, while for KVM it is only used if the
    ``kernel_path`` option is also specified.

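Putting these together, booting a KVM instance from a node-side
kernel rather than from its disks might look like this (the kernel
and root device paths are illustrative and depend on your setup)::

    # gnt-instance add \
      -H kvm:kernel_path=/boot/vmlinuz,root_path=/dev/vda1 \
      -t plain --disk 0:size=10G -n node1.example.com -o debian-etch \
      instance1.example.com
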
serial\_console
    Valid for the KVM hypervisor.

    This boolean option specifies whether to emulate a serial console
    for the instance.

disk\_cache
    Valid for the KVM hypervisor.

    The disk cache mode. It can be either ``default``, to not pass any
    cache option to KVM, or one of the KVM cache modes: none (for
    direct I/O), writethrough (to use the host cache but report
    completion to the guest only when the host has committed the
    changes to disk) or writeback (to use the host cache and report
    completion as soon as the data is in the host cache). Note that
    there are special considerations for the cache mode depending on
    the version of KVM used and disk type (always raw file under
    Ganeti); please refer to the KVM documentation for more details.

security\_model
    Valid for the KVM hypervisor.

    The security model for kvm. Currently one of "none", "user" or
    "pool". Under "none", the default, nothing is done and instances
    are run as the Ganeti daemon user (normally root).

    Under "user" kvm will drop privileges and become the user specified
    by the security\_domain parameter.

    Under "pool" a global cluster pool of users will be used, making
    sure no two instances share the same user on the same node. (This
    mode is not implemented yet.)

security\_domain
    Valid for the KVM hypervisor.

    Under security model "user", the username to run the instance
    under. It must be a valid username existing on the host.

    Cannot be set under security model "none" or "pool".

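As a sketch, running a KVM instance as a dedicated unprivileged user
could be requested like this (the username is hypothetical and must
already exist on the host)::

    # gnt-instance add \
      -H kvm:security_model=user,security_domain=ganeti-vm1 \
      -t plain --disk 0:size=10G -n node1.example.com -o debian-etch \
      instance1.example.com
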
kvm\_flag
    Valid for the KVM hypervisor.

    If "enabled" the -enable-kvm flag is passed to kvm. If "disabled"
    -disable-kvm is passed. If unset no flag is passed, and the default
    running mode for your kvm binary will be used.

mem\_path
    Valid for the KVM hypervisor.

    This option passes the -mem-path argument to kvm with the path (on
    the node) to the mount point of the hugetlbfs file system, along
    with the -mem-prealloc argument too.

use\_chroot
    Valid for the KVM hypervisor.

    This boolean option determines whether to run the KVM instance in a
    chroot directory.

    If it is set to ``true``, an empty directory is created before
    starting the instance and its path is passed via the -chroot flag
    to kvm. The directory is removed when the instance is stopped.

    It is set to ``false`` by default.

migration\_downtime
    Valid for the KVM hypervisor.

    The maximum amount of time (in ms) a KVM instance is allowed to be
    frozen during a live migration, in order to copy dirty memory
    pages. The default value is 30ms, but you may need to increase this
    value for busy instances.

    This option is only effective with kvm versions >= 87 and qemu-kvm
    versions >= 0.11.0.

cpu\_mask
    Valid for the LXC hypervisor.

    The processes belonging to the given instance are only scheduled on
    the specified CPUs.

    The parameter format is a comma-separated list of CPU IDs or CPU ID
    ranges. The ranges are defined by a lower and higher boundary,
    separated by a dash. The boundaries are inclusive. For example, a
    mask covering CPUs 0, 1 and 4 through 6 would be written as
    ``0-1,4-6``.

usb\_mouse
    Valid for the KVM hypervisor.

    This option specifies the USB mouse type to be used. It can be
    "mouse" or "tablet". When using VNC it's recommended to set it to
    "tablet".

The ``--iallocator`` option specifies the instance allocator plugin
to use. If you pass in this option the allocator will select nodes
for this instance automatically, so you don't need to pass them
with the ``-n`` option. For more information please refer to the
instance allocator documentation.

The ``-t`` option specifies the disk layout type for the instance.
The available choices are:

diskless
    This creates an instance with no disks. It's useful for testing
    only (or other special cases).

file
    Disk devices will be regular files.

plain
    Disk devices will be logical volumes.

drbd
    Disk devices will be DRBD (version 8.x) on top of LVM volumes.

The optional second value of the ``--node`` option is used for the
drbd template type and specifies the remote node.

If you do not want gnt-instance to wait for the disk mirror to be
synced, use the ``--no-wait-for-sync`` option.

The ``--file-storage-dir`` option specifies the relative path under
the cluster-wide file storage directory to store file-based disks. It
is useful for having different subdirectories for different
instances. The full path of the directory where the disk files are
stored will consist of cluster-wide file storage directory + optional
subdirectory + instance name. Example:
``@RPL_FILE_STORAGE_DIR@``*/mysubdir/instance1.example.com*. This
option is only relevant for instances using the file storage backend.

The ``--file-driver`` option specifies the driver to use for
file-based disks. Note that currently these drivers work with the Xen
hypervisor only. This option is only relevant for instances using
the file storage backend. The available choices are:

loop
    Kernel loopback driver. This driver uses loopback devices to access
    the filesystem within the file. However, running I/O intensive
    applications in your instance using the loop driver might result in
    slowdowns. Furthermore, if you use the loopback driver, consider
    increasing the maximum number of loopback devices (on most systems
    it's 8) using the max\_loop param.

blktap
    The blktap driver (for Xen hypervisors). In order to be able to use
    the blktap driver you should check if the 'blktapctrl' user space
    disk agent is running (usually automatically started via xend).
    This user-level disk I/O interface has the advantage of better
    performance. Especially if you use a network file system (e.g. NFS)
    to store your instances, this is the recommended choice.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

Example::

    # gnt-instance add -t file --disk 0:size=30g -B memory=512 -o debian-etch \
      -n node1.example.com --file-storage-dir=mysubdir instance1.example.com
    # gnt-instance add -t plain --disk 0:size=30g -B memory=512 -o debian-etch \
      -n node1.example.com instance1.example.com
    # gnt-instance add -t plain --disk 0:size=30g --disk 1:size=100g,vg=san \
      -B memory=512 -o debian-etch -n node1.example.com instance1.example.com
    # gnt-instance add -t drbd --disk 0:size=30g -B memory=512 -o debian-etch \
      -n node1.example.com:node2.example.com instance2.example.com

BATCH-CREATE
^^^^^^^^^^^^

**batch-create** {instances\_file.json}

This command (similar to the Ganeti 1.2 **batcher** tool) submits
multiple instance creation jobs based on a definition file. The
instance configurations do not encompass all the possible options
for the **add** command, but only a subset.

The instance file should be a well-formed JSON file, containing a
dictionary with instance name and instance parameters. The accepted
parameters are:

disk\_size
    The size of the disks of the instance.

disk\_template
    The disk template to use for the instance, the same as in the
    **add** command.

backend
    A dictionary of backend parameters.

hypervisor
    A dictionary with a single key (the hypervisor name), and as value
    the hypervisor options. If not passed, the default hypervisor and
    hypervisor options will be inherited.

mac, ip, mode, link
    Specifications for the one NIC that will be created for the
    instance. 'bridge' is also accepted as a backwards-compatible
    key.

nics
    List of NICs that will be created for the instance. Each entry
    should be a dict, with mac, ip, mode and link as possible keys.
    Please don't provide the "mac, ip, mode, link" parent keys if you
    use this method for specifying NICs.

primary\_node, secondary\_node
    The primary and optionally the secondary node to use for the
    instance (in case an iallocator script is not used).

iallocator
    Instead of specifying the nodes, an iallocator script can be used
    to automatically compute them.

start
    Whether to start the instance.

ip\_check
    Skip the check for the instance's IP address being already in use;
    see the description in the **add** command for details.

name\_check
    Skip the name check for instances; see the description in the
    **add** command for details.

file\_storage\_dir, file\_driver
    Configuration for the file disk type; see the **add** command for
    details.

A simple definition for one instance can be (with most of the
parameters taken from the cluster defaults)::

    {
      "instance3": {
        "template": "drbd",
        "os": "debootstrap",
        "disk_size": ["25G"],
        "iallocator": "dumb"
      },
      "instance5": {
        "template": "drbd",
        "os": "debootstrap",
        "disk_size": ["25G"],
        "iallocator": "dumb",
        "hypervisor": "xen-hvm",
        "hvparams": {"acpi": true},
        "backend": {"memory": 512}
      }
    }

The command will display the job id for each submitted instance, as
follows::

    # gnt-instance batch-create instances.json
    instance3: 11224
    instance5: 11225

REMOVE
^^^^^^

**remove** [--ignore-failures] [--shutdown-timeout=*N*] [--submit]
{*instance*}

Remove an instance. This will remove all data from the instance and
there is *no way back*. If you are not sure whether you will use the
instance again, use **shutdown** first and leave it in the shutdown
state for a while.

The ``--ignore-failures`` option will cause the removal to proceed
even in the presence of errors during the removal of the instance
(e.g. during the shutdown or the disk removal). If this option is
not given, the command will stop at the first error.

The ``--shutdown-timeout`` option is used to specify how much time to
wait before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing
the kvm process for KVM, etc.). By default two minutes are given to
each instance to stop.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

Example::

    # gnt-instance remove instance1.example.com

LIST
^^^^

| **list**
| [--no-headers] [--separator=*SEPARATOR*] [--units=*UNITS*]
| [-o *[+]FIELD,...*] [--roman] [instance...]

Shows the currently configured instances with memory usage, disk
usage, the node they are running on, and their run status.

The ``--no-headers`` option will skip the initial header line. The
``--separator`` option takes an argument which denotes what will be
used between the output fields. Both these options are to help
scripting.

The units used to display the numeric values in the output vary,
depending on the options given. By default, the values will be
formatted in the most appropriate unit. If the ``--separator``
option is given, then the values are shown in mebibytes to allow
parsing by scripts. In both cases, the ``--units`` option can be
used to enforce a given output unit.

The ``--roman`` option allows latin people to better understand the
cluster instances' status.

The ``-o`` option takes a comma-separated list of output fields.
The available fields and their meaning are:

name
    the instance name

os
    the OS of the instance

pnode
    the primary node of the instance

snodes
    comma-separated list of secondary nodes for the instance; usually
    this will be just one node

admin\_state
    the desired state of the instance (either "yes" or "no" denoting
    the instance should run or not)

disk\_template
    the disk template of the instance

oper\_state
    the actual state of the instance; can be one of the values
    "running", "stopped", "(node down)"

status
    combined form of admin\_state and oper\_state; this can be one of:
    ERROR\_nodedown if the node of the instance is down, ERROR\_down if
    the instance should run but is down, ERROR\_up if the instance
    should be stopped but is actually running, ADMIN\_down if the
    instance has been stopped (and is stopped) and running if the
    instance is set to be running (and is running)

oper\_ram
    the actual memory usage of the instance as seen by the hypervisor

oper\_vcpus
    the actual number of VCPUs the instance is using as seen by the
    hypervisor

ip
    the IP address Ganeti recognizes as associated with the first
    instance interface

mac
    the first instance interface MAC address

nic\_mode
    the mode of the first instance NIC (routed or bridged)

nic\_link
    the link of the first instance NIC

sda\_size
    the size of the instance's first disk

sdb\_size
    the size of the instance's second disk, if any

vcpus
    the number of VCPUs allocated to the instance

tags
    comma-separated list of the instance's tags

serial\_no
    the so-called 'serial number' of the instance; this is a numeric
    field that is incremented each time the instance is modified, and
    it can be used to track modifications

ctime
    the creation time of the instance; note that this field contains
    spaces and as such it's harder to parse

    if this attribute is not present (e.g. when upgrading from older
    versions), then "N/A" will be shown instead

mtime
    the last modification time of the instance; note that this field
    contains spaces and as such it's harder to parse

    if this attribute is not present (e.g. when upgrading from older
    versions), then "N/A" will be shown instead

uuid
    Show the UUID of the instance (generated automatically by Ganeti)

network\_port
    If the instance has a network port assigned to it (e.g. for VNC
    connections), this will be shown, otherwise ``-`` will be
    displayed.

beparams
    A text format of the entire beparams for the instance. It's more
    useful to select individual fields from this dictionary, see
    below.

disk.count
    The number of instance disks.

disk.size/N
    The size of the instance's Nth disk. This is a more generic form of
    the sda\_size and sdb\_size fields.

disk.sizes
    A comma-separated list of the disk sizes for this instance.

disk\_usage
    The total disk space used by this instance on each of its nodes.
    This is not the instance-visible disk size, but the actual disk
    "cost" of the instance.

nic.mac/N
    The MAC of the Nth instance NIC.

nic.ip/N
    The IP address of the Nth instance NIC.

nic.mode/N
    The mode of the Nth instance NIC.

nic.link/N
    The link of the Nth instance NIC.

nic.macs
    A comma-separated list of all the MACs of the instance's NICs.

nic.ips
    A comma-separated list of all the IP addresses of the instance's
    NICs.

nic.modes
    A comma-separated list of all the modes of the instance's NICs.

nic.links
    A comma-separated list of all the link parameters of the instance's
    NICs.

nic.count
    The number of instance NICs.

hv/*NAME*
    The value of the hypervisor parameter called *NAME*. For details of
    what hypervisor parameters exist and their meaning, see the **add**
    command.

be/memory
    The configured memory for the instance.

be/vcpus
    The configured number of VCPUs for the instance.

be/auto\_balance
    Whether the instance is considered in N+1 checks.

If the value of the option starts with the character ``+``, the new
field(s) will be added to the default list. This allows you to
quickly see the default list plus a few other fields, instead of
retyping the entire list of fields.

There is a subtle grouping of the available output fields: all
fields except for ``oper_state``, ``oper_ram``, ``oper_vcpus`` and
``status`` are configuration values and not run-time values. So if
you don't select any of these fields, the query will be satisfied
instantly from the cluster configuration, without having to ask the
remote nodes for the data. This can be helpful for big clusters when
you only want some data and it makes sense to specify a reduced set
of output fields.

The default output field list is: name, os, pnode, admin\_state,
oper\_state, oper\_ram.

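Combining the options above, a quick extension of the default field
list, and a script-friendly configuration-only query, might look
like this (field choices illustrative)::

    # gnt-instance list -o +tags,disk.count
    # gnt-instance list --no-headers --separator=: -o name,be/memory
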
INFO
^^^^

**info** [-s \| --static] [--roman] {--all \| *instance*}

Show detailed information about the given instance(s). This is
different from **list** as it shows detailed data about the
instance's disks (especially useful for the drbd disk template).

If the option ``-s`` is used, only information available in the
configuration file is returned, without querying nodes, making the
operation faster.

Use the ``--all`` option to get info about all instances, rather than
explicitly passing the ones you're interested in.

The ``--roman`` option can be used to cause envy among people who
like ancient cultures, but are stuck with non-latin-friendly
cluster virtualization technologies.

832 |
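For example, to query only the statically-available data for one
instance without contacting its nodes (instance name hypothetical)::

    # gnt-instance info -s instance1.example.com
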
MODIFY
^^^^^^

| **modify**
| [-H *HYPERVISOR\_PARAMETERS*]
| [-B *BACKEND\_PARAMETERS*]
| [--net add*[:options]* \| --net remove \| --net *N:options*]
| [--disk add:size=*SIZE*[,vg=*VG*] \| --disk remove \|
|  --disk *N*:mode=*MODE*]
| [-t plain \| -t drbd -n *new\_secondary*]
| [--os-name=*OS* [--force-variant]]
| [--submit]
| {*instance*}

Modifies the memory size, number of VCPUs, IP address, MAC address
and/or NIC parameters for an instance. It can also add and remove
disks and NICs to/from the instance. Note that you need to give at
least one of the arguments, otherwise the command complains.

The ``-H`` option specifies hypervisor options in the form of
name=value[,...]. For details on which options can be specified,
see the **add** command.

The ``-t`` option will change the disk template of the instance.
Currently only conversions between the plain and drbd disk templates
are supported, and the instance must be stopped before attempting the
conversion. When changing from the plain to the drbd disk template, a
new secondary node must be specified via the ``-n`` option.

The ``--disk add:size=``*SIZE* option adds a disk to the instance.
The optional ``vg=``*VG* option specifies an LVM volume group other
than the default one on which to create the disk. The
``--disk remove`` option will remove the last disk of the instance.
The ``--disk`` *N*``:mode=``*MODE* option will change the mode of
the Nth disk of the instance between read-only (``ro``) and
read-write (``rw``).

The ``--net add:``*options* option will add a new NIC to the
instance. The available options are the same as in the **add**
command (mac, ip, link, mode). The ``--net remove`` option will
remove the last NIC of the instance, while the ``--net``
*N*:*options* option will change the parameters of the Nth instance
NIC.

The option ``--os-name`` will change the OS name for the instance
(without reinstallation). In case an OS variant is specified that
is not found, then by default the modification is refused, unless
``--force-variant`` is passed. An invalid OS will also be refused,
unless the ``--force`` option is given.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

All the changes take effect at the next restart. If the instance is
running, there is no effect on the instance.

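Example invocations (instance and node names hypothetical; as noted
above, the changes take effect at the next restart)::

    # gnt-instance modify -B memory=1024 instance1.example.com
    # gnt-instance modify --disk add:size=10g instance1.example.com
    # gnt-instance modify -t drbd -n node2.example.com instance1.example.com
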
REINSTALL
^^^^^^^^^

| **reinstall** [-o *os-type*] [--select-os] [-f *force*]
| [--force-multiple]
| [--instance \| --node \| --primary \| --secondary \| --all]
| [-O *OS\_PARAMETERS*] [--submit] {*instance*...}

Reinstalls the operating system on the given instance(s). The
instance(s) must be stopped when running this command. If the
``-o`` option is specified, the operating system is changed.

The ``--select-os`` option switches to an interactive OS reinstall.
The user is prompted to select the OS template from the list of
available OS templates. OS parameters can be overridden using
``-O``.

Since this is a potentially dangerous command, the user will be
required to confirm this action, unless the ``-f`` flag is passed.
When multiple instances are selected (either by passing multiple
arguments or by using the ``--node``, ``--primary``,
``--secondary`` or ``--all`` options), the user must pass the
``--force-multiple`` option to skip the interactive confirmation.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

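Example (instance name hypothetical, and assuming an OS definition
named *debootstrap* is installed on the cluster)::

    # gnt-instance reinstall -o debootstrap instance1.example.com
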
RENAME
^^^^^^

| **rename** [--no-ip-check] [--no-name-check] [--submit]
| {*instance*} {*new\_name*}

Renames the given instance. The instance must be stopped when
running this command. The requirements for the new name are the
same as for adding an instance: the new name must be resolvable and
the IP it resolves to must not be reachable (in order to prevent
duplicate IPs the next time the instance is started). The IP test
can be skipped if the ``--no-ip-check`` option is passed.

The ``--no-name-check`` option skips the check for the new instance
name via the resolver (e.g. in DNS or /etc/hosts, depending on your
setup). Since the name check is used to compute the IP address, if
you pass this option you must also pass the ``--no-ip-check``
option.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

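Example (both names hypothetical; the new name is assumed to be
resolvable)::

    # gnt-instance rename instance1.example.com instance2.example.com
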
Starting/stopping/connecting to console
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

STARTUP
^^^^^^^

| **startup**
| [--force] [--ignore-offline]
| [--force-multiple]
| [--instance \| --node \| --primary \| --secondary \| --all \|
| --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
| [-H ``key=value...``] [-B ``key=value...``]
| [--submit]
| {*name*...}

Starts one or more instances, depending on the following options.
The available selection modes are:

--instance
  will start the instances given as arguments (at least one argument
  required); this is the default selection

--node
  will start the instances that have the given node as either
  primary or secondary

--primary
  will start all instances whose primary node is in the list of
  nodes passed as arguments (at least one node required)

--secondary
  will start all instances whose secondary node is in the list of
  nodes passed as arguments (at least one node required)

--all
  will start all instances in the cluster (no arguments accepted)

--tags
  will start all instances in the cluster with the tags given as
  arguments

--node-tags
  will start all instances in the cluster on nodes with the tags
  given as arguments

--pri-node-tags
  will start all instances in the cluster on primary nodes with the
  tags given as arguments

--sec-node-tags
  will start all instances in the cluster on secondary nodes with
  the tags given as arguments

Note that although you can pass more than one selection option, the
last one wins, so in order to guarantee the desired result, don't
pass more than one such option.

Use ``--force`` to start even if secondary disks are failing.
``--ignore-offline`` can be used to ignore offline primary nodes
and mark the instance as started even if the primary is not
available.

The ``--force-multiple`` option will skip the interactive
confirmation in case more than one instance will be affected.

The ``-H`` and ``-B`` options specify temporary hypervisor and
backend parameters that can be used to start an instance with
modified parameters. They can be useful for quick testing without
having to modify an instance back and forth, e.g.::

    # gnt-instance start -H root_args="single" instance1
    # gnt-instance start -B memory=2048 instance2

The first form will start the instance instance1 in single-user
mode, and the instance instance2 with 2GB of RAM (this time only,
unless that is the actual instance memory size already). Note that
the values override the instance parameters (and not extend them):
an instance with "root\_args=ro" when started with -H
root\_args=single will result in "single", not "ro single".

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

Example::

    # gnt-instance start instance1.example.com
    # gnt-instance start --node node1.example.com node2.example.com
    # gnt-instance start --all

SHUTDOWN
^^^^^^^^

| **shutdown**
| [--timeout=*N*]
| [--force-multiple] [--ignore-offline]
| [--instance \| --node \| --primary \| --secondary \| --all \|
| --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
| [--submit]
| {*name*...}

Stops one or more instances. If the instance cannot be cleanly
stopped during a hardcoded interval (currently 2 minutes), it will
forcibly stop the instance (equivalent to switching off the power
on a physical machine).

The ``--timeout`` option is used to specify how much time to wait
before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing
the kvm process for KVM, etc.). By default two minutes are given to
each instance to stop.

The ``--instance``, ``--node``, ``--primary``, ``--secondary``,
``--all``, ``--tags``, ``--node-tags``, ``--pri-node-tags`` and
``--sec-node-tags`` options are similar to those of the **startup**
command and they influence the actual instances being shut down.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

``--ignore-offline`` can be used to ignore offline primary nodes
and force the instance to be marked as stopped. This option should
be used with care as it can lead to an inconsistent cluster state.

Example::

    # gnt-instance shutdown instance1.example.com
    # gnt-instance shutdown --all

REBOOT
^^^^^^

| **reboot**
| [--type=*REBOOT-TYPE*]
| [--ignore-secondaries]
| [--shutdown-timeout=*N*]
| [--force-multiple]
| [--instance \| --node \| --primary \| --secondary \| --all \|
| --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
| [--submit]
| [*name*...]

Reboots one or more instances. The type of reboot depends on the
value of ``--type``. A soft reboot does a hypervisor reboot, a hard
reboot does an instance stop, recreates the hypervisor config for
the instance and starts the instance. A full reboot does the
equivalent of **gnt-instance shutdown && gnt-instance startup**.
The default is hard reboot.

For the hard reboot the option ``--ignore-secondaries`` ignores
errors for the secondary node while re-assembling the instance
disks.

The ``--instance``, ``--node``, ``--primary``, ``--secondary``,
``--all``, ``--tags``, ``--node-tags``, ``--pri-node-tags`` and
``--sec-node-tags`` options are similar to those of the **startup**
command and they influence the actual instances being rebooted.

The ``--shutdown-timeout`` option is used to specify how much time
to wait before forcing the shutdown (e.g. ``xm destroy`` in Xen,
killing the kvm process for KVM, etc.). By default two minutes are
given to each instance to stop.

The ``--force-multiple`` option will skip the interactive
confirmation in case more than one instance will be affected.

Example::

    # gnt-instance reboot instance1.example.com
    # gnt-instance reboot --type=full instance1.example.com

CONSOLE
^^^^^^^

**console** [--show-cmd] {*instance*}

Connects to the console of the given instance. If the instance is
not up, an error is returned. Use the ``--show-cmd`` option to
display the command instead of executing it.

For HVM instances, this will attempt to connect to the serial
console of the instance. To connect to the virtualized "physical"
console of an HVM instance, use a VNC client with the connection
info from the **info** command.

Example::

    # gnt-instance console instance1.example.com

Disk management
~~~~~~~~~~~~~~~

REPLACE-DISKS
^^^^^^^^^^^^^

**replace-disks** [--submit] [--early-release] {-p} [--disks *idx*]
{*instance*}

**replace-disks** [--submit] [--early-release] {-s} [--disks *idx*]
{*instance*}

**replace-disks** [--submit] [--early-release] {--iallocator *name*
\| --new-secondary *NODE*} {*instance*}

**replace-disks** [--submit] [--early-release] {--auto}
{*instance*}

This command is a generalized form for replacing disks. It is
currently only valid for the mirrored (DRBD) disk template.

The first form (when passing the ``-p`` option) will replace the
disks on the primary, while the second form (when passing the
``-s`` option) will replace the disks on the secondary node. For
these two cases (as the node doesn't change), it is possible to
only run the replace for a subset of the disks, using the option
``--disks`` which takes a list of comma-delimited disk indices
(zero-based), e.g. 0,2 to replace only the first and third disks.

The third form (when passing either the ``--iallocator`` or the
``--new-secondary`` option) is designed to change the secondary
node of the instance. Specifying ``--iallocator`` makes the new
secondary be selected automatically by the specified allocator
plugin, otherwise the new secondary node will be the one chosen
manually via the ``--new-secondary`` option.

The fourth form (when using ``--auto``) will automatically
determine which disks of an instance are faulty and replace them
within the same node. The ``--auto`` option works only when an
instance has only faulty disks on either the primary or secondary
node; it doesn't work when both sides have faulty disks.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

The ``--early-release`` option changes the behaviour so that the
old storage on the secondary node(s) is removed early (before the
resync is completed) and the internal Ganeti locks for the current
(and new, if any) secondary node are also released, thus allowing
more parallelism in the cluster operation. This should be used only
when recovering from a disk failure on the current secondary (thus
the old storage is already broken) or when the storage on the
primary node is known to be fine (thus we won't need the old
storage for potential recovery).

Note that it is not possible to select an offline or drained node
as a new secondary.

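Example invocations of the four forms (instance and node names
hypothetical)::

    # gnt-instance replace-disks -p instance1.example.com
    # gnt-instance replace-disks -s --disks 0,2 instance1.example.com
    # gnt-instance replace-disks --new-secondary node3.example.com instance1.example.com
    # gnt-instance replace-disks --auto instance1.example.com
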
ACTIVATE-DISKS
^^^^^^^^^^^^^^

**activate-disks** [--submit] [--ignore-size] {*instance*}

Activates the block devices of the given instance. If successful,
the command will show the location and name of the block devices::

    node1.example.com:disk/0:/dev/drbd0
    node1.example.com:disk/1:/dev/drbd1

In this example, *node1.example.com* is the name of the node on
which the devices have been activated. The *disk/0* and *disk/1*
are the Ganeti-names of the instance disks; how they are visible
inside the instance is hypervisor-specific. */dev/drbd0* and
*/dev/drbd1* are the actual block devices as visible on the node.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

The ``--ignore-size`` option can be used to activate disks ignoring
the currently configured size in Ganeti. This can be used in cases
where the configuration has gotten out of sync with the real-world
(e.g. after a partially-failed grow-disk operation or due to
rounding in LVM devices). This should not be used in normal cases,
but only when activate-disks fails without it.

Note that it is safe to run this command while the instance is
already running.

DEACTIVATE-DISKS
^^^^^^^^^^^^^^^^

**deactivate-disks** [--submit] {*instance*}

De-activates the block devices of the given instance. Note that if
you run this command for an instance with a drbd disk template
while it is running, it will not be able to shut down the block
devices on the primary node, but it will shut down the block
devices on the secondary nodes, thus breaking the replication.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

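Example (instance name hypothetical; as noted above, a running drbd
instance should normally be stopped first)::

    # gnt-instance deactivate-disks instance1.example.com
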
GROW-DISK
^^^^^^^^^

**grow-disk** [--no-wait-for-sync] [--submit] {*instance*} {*disk*}
{*amount*}

Grows an instance's disk. This is only possible for instances
having a plain or drbd disk template.

Note that this command only changes the block device size; it will
not grow the actual filesystems, partitions, etc. that live on that
disk. Usually, you will need to:

#. use **gnt-instance grow-disk**

#. reboot the instance (later, at a convenient time)

#. use a filesystem resizer, such as ext2online(8) or
   xfs\_growfs(8) to resize the filesystem, or use fdisk(8) to
   change the partition table on the disk

The *disk* argument is the index of the instance disk to grow. The
*amount* argument is given either as a number (in which case it is
interpreted as the amount in mebibytes by which to increase the
disk) or similar to the arguments in the create instance operation,
with a suffix denoting the unit.

Note that the disk grow operation might complete on one node but
fail on the other; this will leave the instance with
different-sized LVs on the two nodes, but this will not create
problems (except for unused space).

If you do not want gnt-instance to wait for the new disk region to
be synced, use the ``--no-wait-for-sync`` option.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

Example (increase the first disk for instance1 by 16GiB)::

    # gnt-instance grow-disk instance1.example.com 0 16g

Also note that disk shrinking is not supported; use
**gnt-backup export** and then **gnt-backup import** to reduce the
disk size of an instance.

RECREATE-DISKS
^^^^^^^^^^^^^^

**recreate-disks** [--submit] [--disks=``indices``] {*instance*}

Recreates the disks of the given instance, or only a subset of the
disks (if the ``--disks`` option is passed, which must be a
comma-separated list of disk indices, starting from zero).

Note that this functionality should only be used for missing disks;
if any of the given disks already exists, the operation will fail.
While this is suboptimal, recreate-disks should hopefully not be
needed in normal operation and as such the impact of this is low.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

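Example (recreate only the first and second disks of a hypothetical
instance whose disks have gone missing)::

    # gnt-instance recreate-disks --disks=0,1 instance1.example.com
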
Recovery
~~~~~~~~

FAILOVER
^^^^^^^^

**failover** [-f] [--ignore-consistency] [--shutdown-timeout=*N*]
[--submit] {*instance*}

Failover will fail the instance over to its secondary node. This
works only for instances having a drbd disk template.

Normally the failover will check the consistency of the disks
before failing over the instance. If you are trying to migrate
instances off a dead node, this will fail. Use the
``--ignore-consistency`` option for this purpose. Note that this
option can be dangerous as errors in shutting down the instance
will be ignored, resulting in possibly having the instance running
on two machines in parallel (on disconnected DRBD drives).

The ``--shutdown-timeout`` option is used to specify how much time
to wait before forcing the shutdown (e.g. ``xm destroy`` in Xen,
killing the kvm process for KVM, etc.). By default two minutes are
given to each instance to stop.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

Example::

    # gnt-instance failover instance1.example.com

MIGRATE
^^^^^^^

**migrate** [-f] {--cleanup} {*instance*}

**migrate** [-f] [--non-live] [--migration-mode=live\|non-live]
{*instance*}

Migrate will move the instance to its secondary node without
shutdown. It only works for instances having the drbd8 disk
template type.

The migration command needs a perfectly healthy instance, as we
rely on the dual-master capability of drbd8 and the disks of the
instance are not allowed to be degraded.

The ``--non-live`` and ``--migration-mode=non-live`` options will
switch (for the hypervisors that support it) between a "fully live"
(i.e. the interruption is as minimal as possible) migration and one
in which the instance is frozen, its state saved and transported to
the remote node, and then resumed there. This all depends on the
hypervisor support for two different methods. In any case, it is
not an error to pass this parameter (it will just be ignored if the
hypervisor doesn't support it). The ``--migration-mode=live``
option will request a fully-live migration. The default, when
neither option is passed, depends on the hypervisor parameters (and
can be viewed with the **gnt-cluster info** command).

If the ``--cleanup`` option is passed, the operation changes from
migration to attempting recovery from a failed previous migration.
In this mode, Ganeti checks if the instance runs on the correct
node (and updates its configuration if not) and ensures the
instance's disks are configured correctly. In this mode, the
``--non-live`` option is ignored.

The option ``-f`` will skip the prompting for confirmation.

Example (and expected output)::

    # gnt-instance migrate instance1
    Migrate will happen to the instance instance1. Note that migration is
    **experimental** in this version. This might impact the instance if
    anything goes wrong. Continue?
    y/[n]/?: y
    * checking disk consistency between source and target
    * ensuring the target is in secondary mode
    * changing disks into dual-master mode
     - INFO: Waiting for instance instance1 to sync disks.
     - INFO: Instance instance1's disks are in sync.
    * migrating instance to node2.example.com
    * changing the instance's disks on source node to secondary
     - INFO: Waiting for instance instance1 to sync disks.
     - INFO: Instance instance1's disks are in sync.
    * changing the instance's disks to single-master
    #

MOVE
^^^^

**move** [-f] [-n *node*] [--shutdown-timeout=*N*] [--submit]
{*instance*}

Move will move the instance to an arbitrary node in the cluster.
This works only for instances having a plain or file disk
template.

Note that since this operation is done via data copy, it will take
a long time for big disks (similar to replace-disks for a drbd
instance).

The ``--shutdown-timeout`` option is used to specify how much time
to wait before forcing the shutdown (e.g. ``xm destroy`` in Xen,
killing the kvm process for KVM, etc.). By default two minutes are
given to each instance to stop.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

Example::

    # gnt-instance move -n node3.example.com instance1.example.com

TAGS
~~~~

ADD-TAGS
^^^^^^^^

**add-tags** [--from *file*] {*instancename*} {*tag*...}

Add tags to the given instance. If any of the tags contains invalid
characters, the entire operation will abort.

If the ``--from`` option is given, the list of tags will be
extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line
(if you do, both sources will be used). A file name of - will be
interpreted as stdin.

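Example (instance and tag names hypothetical)::

    # gnt-instance add-tags instance1.example.com webserver production
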
LIST-TAGS
^^^^^^^^^

**list-tags** {*instancename*}

List the tags of the given instance.

REMOVE-TAGS
^^^^^^^^^^^

**remove-tags** [--from *file*] {*instancename*} {*tag*...}

Remove tags from the given instance. If any of the tags does not
exist on the instance, the entire operation will abort.

If the ``--from`` option is given, the list of tags to be removed
will be extended with the contents of that file (each line becomes
a tag). In this case, there is no need to pass tags on the command
line (if you do, tags from both sources will be removed). A file
name of - will be interpreted as stdin.
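
Example (removing a tag given on the command line plus any tags
listed in a hypothetical file)::

    # gnt-instance remove-tags --from /tmp/old-tags instance1.example.com webserver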