1 gnt-instance(8) Ganeti | Version @GANETI_VERSION@
2 =================================================
7 gnt-instance - Ganeti instance administration
12 **gnt-instance** {command} [arguments...]
17 The **gnt-instance** command is used for instance administration in
23 Creation/removal/querying
~~~~~~~~~~~~~~~~~~~~~~~~~
| {-t {diskless \| file \| plain \| drbd}}
31 | {--disk=*N*: {size=*VAL* \| adopt=*LV*}[,vg=*VG*][,mode=*ro\|rw*]
33 | [--no-ip-check] [--no-name-check] [--no-start] [--no-install]
34 | [--net=*N* [:options...] \| --no-nics]
36 | [-H *HYPERVISOR* [: option=*value*... ]]
37 | [-O, --os-parameters *param*=*value*... ]
38 | [--file-storage-dir *dir\_path*] [--file-driver {loop \| blktap}]
39 | {-n *node[:secondary-node]* \| --iallocator *name*}
44 Creates a new instance on the specified host. The *instance* argument
45 must be in DNS, but depending on the bridge/routing setup, need not be
46 in the same network as the nodes in the cluster.
48 The ``disk`` option specifies the parameters for the disks of the
49 instance. The numbering of disks starts at zero, and at least one disk
50 needs to be passed. For each disk, either the size or the adoption
51 source needs to be given, and optionally the access mode (read-only or
52 the default of read-write) and LVM volume group can also be specified.
53 The size is interpreted (when no unit is given) in mebibytes. You can
also use one of the suffixes *m*, *g* or *t* to specify the exact
55 units used; these suffixes map to mebibytes, gibibytes and tebibytes.
57 When using the ``adopt`` key in the disk definition, Ganeti will
58 reuse those volumes (instead of creating new ones) as the
59 instance's disks. Ganeti will rename these volumes to the standard
60 format, and (without installing the OS) will use them as-is for the
61 instance. This allows migrating instances from non-managed mode
(e.g. plain KVM with LVM) to being managed via Ganeti. Note that
this works only for the ``plain`` disk template (see below for
66 Alternatively, a single-disk instance can be created via the ``-s``
67 option which takes a single argument, the size of the disk. This is
68 similar to the Ganeti 1.2 version (but will only create one disk).
70 The minimum disk specification is therefore ``--disk 0:size=20G`` (or
71 ``-s 20G`` when using the ``-s`` option), and a three-disk instance
72 can be specified as ``--disk 0:size=20G --disk 1:size=4G --disk
The ``--no-ip-check`` option skips the check that the instance's IP
address is not already alive (i.e. reachable from the master
79 The ``--no-name-check`` skips the check for the instance name via
80 the resolver (e.g. in DNS or /etc/hosts, depending on your setup).
81 Since the name check is used to compute the IP address, if you pass
82 this option you must also pass the ``--no-ip-check`` option.
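
For example, an instance whose name is not (yet) resolvable could be
created with both checks disabled (illustrative values)::

    # gnt-instance add -t plain --disk 0:size=10G -o debian-etch \
      --no-name-check --no-ip-check -n node1.example.com instance3.example.com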
If you don't want the instance to start automatically after
85 creation, this is possible via the ``--no-start`` option. This will
86 leave the instance down until a subsequent **gnt-instance start**
89 The NICs of the instances can be specified via the ``--net``
90 option. By default, one NIC is created for the instance, with a
random MAC, and set up according to the cluster-level nic
92 parameters. Each NIC can take these parameters (all optional):
97 either a value or 'generate' to generate a new unique MAC
100 specifies the IP address assigned to the instance from the Ganeti
101 side (this is not necessarily what the instance will use, but what
102 the node expects the instance to use)
105 specifies the connection mode for this nic: routed or bridged.
108 in bridged mode specifies the bridge to attach this NIC to, in
109 routed mode it's intended to differentiate between different
110 routing tables/instance groups (but the meaning is dependent on the
111 network script, see gnt-cluster(8) for more details)
Of these, "mode" and "link" are nic parameters and inherit their
defaults from the cluster level.

Alternatively, if no network is desired for the instance, you can
117 prevent the default of one NIC with the ``--no-nics`` option.
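
For example, a NIC can be customised at creation time using the
parameters above (illustrative values; the bridge name depends on
your cluster configuration)::

    # gnt-instance add --net 0:mode=bridged,link=xen-br0 -t plain \
      --disk 0:size=10G -o debian-etch -n node1.example.com \
      instance1.example.com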
The ``-o`` option specifies the operating system to be installed.
120 The available operating systems can be listed with **gnt-os list**.
121 Passing ``--no-install`` will however skip the OS installation,
122 allowing a manual import if so desired. Note that the
123 no-installation mode will automatically disable the start-up of the
instance (without an OS, it most likely won't be able to start up
127 The ``-B`` option specifies the backend parameters for the
128 instance. If no such parameters are specified, the values are
129 inherited from the cluster. Possible parameters are:
132 the memory size of the instance; as usual, suffixes can be used to
denote the unit, otherwise the value is taken in mebibytes
136 the number of VCPUs to assign to the instance (if this value makes
137 sense for the hypervisor)
140 whether the instance is considered in the N+1 cluster checks
141 (enough redundancy in the cluster to survive a node failure)
The ``-H`` option specifies the hypervisor to use for the instance
(must be one of the enabled hypervisors on the cluster) and
optionally custom parameters for this instance. If no other
options are used (i.e. the invocation is just -H *NAME*) the
148 instance will inherit the cluster options. The defaults below show
149 the cluster defaults at cluster creation time.
151 The possible hypervisor options are as follows:
154 Valid for the Xen HVM and KVM hypervisors.
156 A string value denoting the boot order. This has different meaning
157 for the Xen HVM hypervisor and for the KVM one.
For Xen HVM, the boot order is a string of letters listing the boot
160 devices, with valid device letters being:
177 The default is not to set an HVM boot order which is interpreted as
180 For KVM the boot order is either "cdrom", "disk" or "network".
181 Please note that older versions of KVM couldn't netboot from virtio
182 interfaces. This has been fixed in more recent versions and is
183 confirmed to work at least with qemu-kvm 0.11.1.
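
As an illustrative example, a KVM instance could be set to boot from
the network at creation time::

    # gnt-instance add -H kvm:boot_order=network -t plain \
      --disk 0:size=10G -o debian-etch -n node1.example.com \
      instance1.example.com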
186 Valid for the Xen HVM and PVM hypervisors.
Relevant to non-pvops guest kernels, in which the disk device names
are given by the host. Allows specifying 'xvd', which helps run
Red Hat-based installers, driven by anaconda.
193 Valid for the Xen HVM and KVM hypervisors.
195 The path to a CDROM image to attach to the instance.
198 Valid for the Xen HVM and KVM hypervisors.
200 This parameter determines the way the network cards are presented
201 to the instance. The possible options are:
205 rtl8139 (default for Xen HVM) (HVM & KVM)
206 ne2k\_isa (HVM & KVM)
207 ne2k\_pci (HVM & KVM)
213 paravirtual (default for KVM) (HVM & KVM)
217 Valid for the Xen HVM and KVM hypervisors.
219 This parameter determines the way the disks are presented to the
220 instance. The possible options are:
224 ioemu (default for HVM & KVM) (HVM & KVM)
233 Valid for the Xen HVM and KVM hypervisors.
235 Specifies the address that the VNC listener for this instance
236 should bind to. Valid values are IPv4 addresses. Use the address
237 0.0.0.0 to bind to all available interfaces (this is the default)
238 or specify the address of one of the interfaces on the node to
239 restrict listening to that interface.
242 Valid for the KVM hypervisor.
244 A boolean option that controls whether the VNC connection is
248 Valid for the KVM hypervisor.
If ``vnc_tls`` is enabled, this option specifies the path to the
251 x509 certificate to use.
254 Valid for the KVM hypervisor.
257 Valid for the Xen HVM and KVM hypervisors.
259 A boolean option that specifies if the hypervisor should enable
260 ACPI support for this instance. By default, ACPI is disabled.
263 Valid for the Xen HVM and KVM hypervisors.
A boolean option that specifies if the hypervisor should enable
PAE support for this instance. The default is false, disabling PAE
270 Valid for the Xen HVM and KVM hypervisors.
272 A boolean option that specifies if the instance should be started
273 with its clock set to the localtime of the machine (when true) or
to UTC (when false). The default is false, which is useful for
275 Linux/Unix machines; for Windows OSes, it is recommended to enable
279 Valid for the Xen PVM and KVM hypervisors.
281 This option specifies the path (on the node) to the kernel to boot
282 the instance with. Xen PVM instances always require this, while for
283 KVM if this option is empty, it will cause the machine to load the
284 kernel from its disks.
287 Valid for the Xen PVM and KVM hypervisors.
This option specifies extra arguments to the kernel that will be
loaded. This is always used for Xen PVM, while for KVM it
is only used if the ``kernel_path`` option is also specified.
The default setting for this value is simply ``"ro"``, which mounts
the root disk (initially) in read-only mode. For example, setting
this to ``single`` will cause the instance to start in single-user
299 Valid for the Xen PVM and KVM hypervisors.
This option specifies the path (on the node) to the initrd to boot
the instance with. Xen PVM instances can always use this, while
for KVM this option is only used if the ``kernel_path`` option is
also specified. You can pass here either an absolute filename (the
305 path to the initrd) if you want to use an initrd, or use the format
306 no\_initrd\_path for no initrd.
309 Valid for the Xen PVM and KVM hypervisors.
This option specifies the name of the root device. This is always
312 needed for Xen PVM, while for KVM it is only used if the
313 ``kernel_path`` option is also specified.
316 Valid for the KVM hypervisor.
318 This boolean option specifies whether to emulate a serial console
322 Valid for the KVM hypervisor.
The disk cache mode. It can be either ``default``, to not pass any
cache option to KVM, or one of the KVM cache modes: none (for direct
326 I/O), writethrough (to use the host cache but report completion to
327 the guest only when the host has committed the changes to disk) or
328 writeback (to use the host cache and report completion as soon as
329 the data is in the host cache). Note that there are special
considerations for the cache mode depending on the version of KVM
used and the disk type (always a raw file under Ganeti); please
refer to the KVM documentation for more details.
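
For example, the cache mode could be changed on an existing KVM
instance (an illustrative value; the change takes effect at the next
restart)::

    # gnt-instance modify -H disk_cache=writeback instance1.example.com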
335 Valid for the KVM hypervisor.
337 The security model for kvm. Currently one of "none", "user" or
338 "pool". Under "none", the default, nothing is done and instances
339 are run as the Ganeti daemon user (normally root).
341 Under "user" kvm will drop privileges and become the user specified
342 by the security\_domain parameter.
344 Under "pool" a global cluster pool of users will be used, making
345 sure no two instances share the same user on the same node. (this
346 mode is not implemented yet)
349 Valid for the KVM hypervisor.
351 Under security model "user" the username to run the instance under.
352 It must be a valid username existing on the host.
354 Cannot be set under security model "none" or "pool".
357 Valid for the KVM hypervisor.
359 If "enabled" the -enable-kvm flag is passed to kvm. If "disabled"
360 -disable-kvm is passed. If unset no flag is passed, and the default
361 running mode for your kvm binary will be used.
364 Valid for the KVM hypervisor.
366 This option passes the -mem-path argument to kvm with the path (on
367 the node) to the mount point of the hugetlbfs file system, along
368 with the -mem-prealloc argument too.
371 Valid for the KVM hypervisor.
This boolean option determines whether to run the KVM instance in a
376 If it is set to ``true``, an empty directory is created before
377 starting the instance and its path is passed via the -chroot flag
378 to kvm. The directory is removed when the instance is stopped.
380 It is set to ``false`` by default.
383 Valid for the KVM hypervisor.
385 The maximum amount of time (in ms) a KVM instance is allowed to be
386 frozen during a live migration, in order to copy dirty memory
387 pages. Default value is 30ms, but you may need to increase this
388 value for busy instances.
390 This option is only effective with kvm versions >= 87 and qemu-kvm
394 Valid for the LXC hypervisor.
396 The processes belonging to the given instance are only scheduled on
399 The parameter format is a comma-separated list of CPU IDs or CPU ID
400 ranges. The ranges are defined by a lower and higher boundary,
401 separated by a dash. The boundaries are inclusive.
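
For example, the value ``0-2,5`` restricts the instance's processes
to CPUs 0, 1, 2 and 5 (illustrative values)::

    # gnt-instance add -H lxc:cpu_mask=0-2,5 -t plain \
      --disk 0:size=10G -o debian-etch -n node1.example.com \
      instance1.example.com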
404 Valid for the KVM hypervisor.
406 This option specifies the usb mouse type to be used. It can be
407 "mouse" or "tablet". When using VNC it's recommended to set it to
The ``-O`` (``--os-parameters``) option allows customisation of the OS
parameters. The actual parameter names and values depend on the OS
being used, but the syntax is always ``key=value``. For example, setting
a hypothetical ``dhcp`` parameter to yes can be achieved by::
416 gnt-instance add -O dhcp=yes ...
419 The ``--iallocator`` option specifies the instance allocator plugin
420 to use. If you pass in this option the allocator will select nodes
421 for this instance automatically, so you don't need to pass them
422 with the ``-n`` option. For more information please refer to the
423 instance allocator documentation.
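
For instance, assuming an allocator plugin named ``hail`` is
installed on the cluster, node selection can be delegated to it::

    # gnt-instance add -t drbd --disk 0:size=30g -B memory=512 \
      -o debian-etch --iallocator hail instance2.example.com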
The ``-t`` option specifies the disk layout type for the instance.
426 The available choices are:
This creates an instance with no disks. It is useful for testing only
432 (or other special cases).
435 Disk devices will be regular files.
438 Disk devices will be logical volumes.
441 Disk devices will be drbd (version 8.x) on top of lvm volumes.
The optional second value of the ``--node`` option is used for the drbd
445 template type and specifies the remote node.
447 If you do not want gnt-instance to wait for the disk mirror to be
448 synced, use the ``--no-wait-for-sync`` option.
450 The ``--file-storage-dir`` specifies the relative path under the
451 cluster-wide file storage directory to store file-based disks. It is
452 useful for having different subdirectories for different
453 instances. The full path of the directory where the disk files are
454 stored will consist of cluster-wide file storage directory + optional
455 subdirectory + instance name. Example:
456 ``@RPL_FILE_STORAGE_DIR@``*/mysubdir/instance1.example.com*. This
457 option is only relevant for instances using the file storage backend.
459 The ``--file-driver`` specifies the driver to use for file-based
460 disks. Note that currently these drivers work with the xen
461 hypervisor only. This option is only relevant for instances using
462 the file storage backend. The available choices are:
467 Kernel loopback driver. This driver uses loopback devices to access
468 the filesystem within the file. However, running I/O intensive
469 applications in your instance using the loop driver might result in
470 slowdowns. Furthermore, if you use the loopback driver consider
471 increasing the maximum amount of loopback devices (on most systems
472 it's 8) using the max\_loop param.
475 The blktap driver (for Xen hypervisors). In order to be able to use
476 the blktap driver you should check if the 'blktapctrl' user space
477 disk agent is running (usually automatically started via xend).
478 This user-level disk I/O interface has the advantage of better
479 performance. Especially if you use a network file system (e.g. NFS)
480 to store your instances this is the recommended choice.
483 The ``--submit`` option is used to send the job to the master
484 daemon but not wait for its completion. The job ID will be shown so
485 that it can be examined via **gnt-job info**.
489 # gnt-instance add -t file --disk 0:size=30g -B memory=512 -o debian-etch \
490 -n node1.example.com --file-storage-dir=mysubdir instance1.example.com
491 # gnt-instance add -t plain --disk 0:size=30g -B memory=512 -o debian-etch \
492 -n node1.example.com instance1.example.com
493 # gnt-instance add -t plain --disk 0:size=30g --disk 1:size=100g,vg=san \
494 -B memory=512 -o debian-etch -n node1.example.com instance1.example.com
495 # gnt-instance add -t drbd --disk 0:size=30g -B memory=512 -o debian-etch \
496 -n node1.example.com:node2.example.com instance2.example.com
502 **batch-create** {instances\_file.json}
504 This command (similar to the Ganeti 1.2 **batcher** tool) submits
505 multiple instance creation jobs based on a definition file. The
506 instance configurations do not encompass all the possible options
507 for the **add** command, but only a subset.
The instance file should be a well-formed JSON file, containing a
510 dictionary with instance name and instance parameters. The accepted
516 The size of the disks of the instance.
519 The disk template to use for the instance, the same as in the
523 A dictionary of backend parameters.
526 A dictionary with a single key (the hypervisor name), and as value
527 the hypervisor options. If not passed, the default hypervisor and
528 hypervisor options will be inherited.
531 Specifications for the one NIC that will be created for the
instance. 'bridge' is also accepted as a backwards-compatible
536 List of nics that will be created for the instance. Each entry
537 should be a dict, with mac, ip, mode and link as possible keys.
538 Please don't provide the "mac, ip, mode, link" parent keys if you
539 use this method for specifying nics.
541 primary\_node, secondary\_node
542 The primary and optionally the secondary node to use for the
543 instance (in case an iallocator script is not used).
546 Instead of specifying the nodes, an iallocator script can be used
547 to automatically compute them.
550 whether to start the instance
553 Skip the check for already-in-use instance; see the description in
554 the **add** command for details.
557 Skip the name check for instances; see the description in the
558 **add** command for details.
560 file\_storage\_dir, file\_driver
561 Configuration for the file disk type, see the **add** command for
565 A simple definition for one instance can be (with most of the
566 parameters taken from the cluster defaults)::
572 "disk_size": ["25G"],
578 "disk_size": ["25G"],
579 "iallocator": "dumb",
580 "hypervisor": "xen-hvm",
581 "hvparams": {"acpi": true},
582 "backend": {"memory": 512}
586 The command will display the job id for each submitted instance, as
589 # gnt-instance batch-create instances.json
596 **remove** [--ignore-failures] [--shutdown-timeout=*N*] [--submit]
599 Remove an instance. This will remove all data from the instance and
there is *no way back*. If you are not sure whether you will use the
instance again, use **shutdown** first and leave it in the shutdown state
604 The ``--ignore-failures`` option will cause the removal to proceed
605 even in the presence of errors during the removal of the instance
606 (e.g. during the shutdown or the disk removal). If this option is
607 not given, the command will stop at the first error.
609 The ``--shutdown-timeout`` is used to specify how much time to wait
610 before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing the
611 kvm process for KVM, etc.). By default two minutes are given to each
614 The ``--submit`` option is used to send the job to the master
615 daemon but not wait for its completion. The job ID will be shown so
616 that it can be examined via **gnt-job info**.
620 # gnt-instance remove instance1.example.com
627 | [--no-headers] [--separator=*SEPARATOR*] [--units=*UNITS*] [-v]
628 | [-o *[+]FIELD,...*] [--filter] [instance...]
630 Shows the currently configured instances with memory usage, disk
631 usage, the node they are running on, and their run status.
633 The ``--no-headers`` option will skip the initial header line. The
634 ``--separator`` option takes an argument which denotes what will be
635 used between the output fields. Both these options are to help
The units used to display the numeric values in the output vary,
639 depending on the options given. By default, the values will be
640 formatted in the most appropriate unit. If the ``--separator``
641 option is given, then the values are shown in mebibytes to allow
642 parsing by scripts. In both cases, the ``--units`` option can be
643 used to enforce a given output unit.
645 The ``-v`` option activates verbose mode, which changes the display of
646 special field states (see **ganeti(7)**).
648 The ``-o`` option takes a comma-separated list of output fields.
649 The available fields and their meaning are:
651 @QUERY_FIELDS_INSTANCE@
653 If the value of the option starts with the character ``+``, the new
field(s) will be added to the default list. This allows you to quickly
655 see the default list plus a few other fields, instead of retyping
656 the entire list of fields.
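
For example (using fields documented above; the exact output format
may differ between versions)::

    # gnt-instance list -o name,status,oper_ram
    # gnt-instance list -o +oper_vcpus instance1.example.com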
There is a subtle grouping among the available output fields: all
fields except for ``oper_state``, ``oper_ram``, ``oper_vcpus`` and
``status`` are configuration values and not run-time values. So if
you don't select any of these fields, the query will be
662 satisfied instantly from the cluster configuration, without having
663 to ask the remote nodes for the data. This can be helpful for big
664 clusters when you only want some data and it makes sense to specify
665 a reduced set of output fields.
667 If exactly one argument is given and it appears to be a query filter
668 (see **ganeti(7)**), the query result is filtered accordingly. For
669 ambiguous cases (e.g. a single field name as a filter) the ``--filter``
670 (``-F``) option forces the argument to be treated as a filter (e.g.
671 ``gnt-instance list -F admin_state``).
673 The default output field list is: ``name``, ``os``, ``pnode``,
674 ``admin_state``, ``oper_state``, ``oper_ram``.
680 **list-fields** [field...]
682 Lists available fields for instances.
688 **info** [-s \| --static] [--roman] {--all \| *instance*}
690 Show detailed information about the given instance(s). This is
691 different from **list** as it shows detailed data about the
692 instance's disks (especially useful for the drbd disk template).
694 If the option ``-s`` is used, only information available in the
695 configuration file is returned, without querying nodes, making the
Use the ``--all`` option to get info about all instances, rather than
699 explicitly passing the ones you're interested in.
701 The ``--roman`` option can be used to cause envy among people who
702 like ancient cultures, but are stuck with non-latin-friendly
703 cluster virtualization technologies.
709 | [-H *HYPERVISOR\_PARAMETERS*]
710 | [-B *BACKEND\_PARAMETERS*]
711 | [--net add*[:options]* \| --net remove \| --net *N:options*]
712 | [--disk add:size=*SIZE*[,vg=*VG*] \| --disk remove \|
713 | --disk *N*:mode=*MODE*]
| [-t plain \| -t drbd -n *new_secondary*]
715 | [--os-type=*OS* [--force-variant]]
716 | [-O, --os-parameters *param*=*value*... ]
720 Modifies the memory size, number of vcpus, ip address, MAC address
721 and/or nic parameters for an instance. It can also add and remove
722 disks and NICs to/from the instance. Note that you need to give at
723 least one of the arguments, otherwise the command complains.
The ``-H``, ``-B`` and ``-O`` options specify hypervisor, backend
and OS parameter options in the form of name=value[,...]. For details
on which options can be specified, see the **add** command.
729 The ``-t`` option will change the disk template of the instance.
730 Currently only conversions between the plain and drbd disk templates
731 are supported, and the instance must be stopped before attempting the
732 conversion. When changing from the plain to the drbd disk template, a
733 new secondary node must be specified via the ``-n`` option.
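
For example, a stopped plain instance could be converted to drbd like
this (node name illustrative)::

    # gnt-instance shutdown instance1.example.com
    # gnt-instance modify -t drbd -n node2.example.com instance1.example.com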
The ``--disk add:size=``*SIZE* option adds a disk to the instance. The
optional ``vg=``*VG* option specifies an LVM volume group other than
the default one on which to create the disk. The ``--disk remove``
option will remove the last disk of the instance. The ``--disk``
*N*``:mode=``*MODE* option will change the mode of the Nth disk of
the instance between read-only (``ro``) and
742 The ``--net add:``*options* option will add a new NIC to the
743 instance. The available options are the same as in the **add** command
744 (mac, ip, link, mode). The ``--net remove`` will remove the last NIC
745 of the instance, while the ``--net`` *N*:*options* option will
746 change the parameters of the Nth instance NIC.
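
For example (illustrative values; the link name depends on your
cluster configuration)::

    # gnt-instance modify --disk add:size=10G instance1.example.com
    # gnt-instance modify --net add:mode=bridged,link=xen-br0 \
      instance1.example.com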
748 The option ``--os-type`` will change the OS name for the instance
(without reinstallation). If an OS variant is specified that
is not found, the modification is refused by default, unless
``--force-variant`` is passed. An invalid OS will also be refused,
752 unless the ``--force`` option is given.
754 The ``--submit`` option is used to send the job to the master
755 daemon but not wait for its completion. The job ID will be shown so
756 that it can be examined via **gnt-job info**.
All the changes take effect at the next restart. If the instance is
currently running, it is not affected until then.
764 | **reinstall** [-o *os-type*] [--select-os] [-f *force*]
766 | [--instance \| --node \| --primary \| --secondary \| --all]
767 | [-O *OS\_PARAMETERS*] [--submit] {*instance*...}
769 Reinstalls the operating system on the given instance(s). The
770 instance(s) must be stopped when running this command. If the
771 ``--os-type`` is specified, the operating system is changed.
773 The ``--select-os`` option switches to an interactive OS reinstall.
774 The user is prompted to select the OS template from the list of
775 available OS templates. OS parameters can be overridden using ``-O``
776 (more documentation for this option under the **add** command).
778 Since this is a potentially dangerous command, the user will be
779 required to confirm this action, unless the ``-f`` flag is passed.
780 When multiple instances are selected (either by passing multiple
781 arguments or by using the ``--node``, ``--primary``,
782 ``--secondary`` or ``--all`` options), the user must pass the
``--force-multiple`` option to skip the interactive confirmation.
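
For example, to reinstall a stopped instance with a different OS
(names as used elsewhere in this manual)::

    # gnt-instance reinstall -o debian-etch instance1.example.com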
785 The ``--submit`` option is used to send the job to the master
786 daemon but not wait for its completion. The job ID will be shown so
787 that it can be examined via **gnt-job info**.
792 | **rename** [--no-ip-check] [--no-name-check] [--submit]
793 | {*instance*} {*new\_name*}
795 Renames the given instance. The instance must be stopped when
796 running this command. The requirements for the new name are the
797 same as for adding an instance: the new name must be resolvable and
798 the IP it resolves to must not be reachable (in order to prevent
799 duplicate IPs the next time the instance is started). The IP test
800 can be skipped if the ``--no-ip-check`` option is passed.
802 The ``--no-name-check`` skips the check for the new instance name via
803 the resolver (e.g. in DNS or /etc/hosts, depending on your setup) and
804 that the resolved name matches the provided name. Since the name check
805 is used to compute the IP address, if you pass this option you must also
806 pass the ``--no-ip-check`` option.
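
For example, skipping both checks when the new name is not yet
present in DNS (illustrative names)::

    # gnt-instance rename --no-name-check --no-ip-check \
      instance1.example.com instance2.example.com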
808 The ``--submit`` option is used to send the job to the master
809 daemon but not wait for its completion. The job ID will be shown so
810 that it can be examined via **gnt-job info**.
812 Starting/stopping/connecting to console
813 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
819 | [--force] [--ignore-offline]
821 | [--instance \| --node \| --primary \| --secondary \| --all \|
822 | --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
823 | [-H ``key=value...``] [-B ``key=value...``]
827 Starts one or more instances, depending on the following options.
828 The four available modes are:
832 will start the instances given as arguments (at least one argument
833 required); this is the default selection
will start the instances that have the given node as either primary
840 will start all instances whose primary node is in the list of nodes
841 passed as arguments (at least one node required)
844 will start all instances whose secondary node is in the list of
845 nodes passed as arguments (at least one node required)
848 will start all instances in the cluster (no arguments accepted)
851 will start all instances in the cluster with the tags given as
855 will start all instances in the cluster on nodes with the tags
859 will start all instances in the cluster on primary nodes with the
860 tags given as arguments
863 will start all instances in the cluster on secondary nodes with the
864 tags given as arguments
867 Note that although you can pass more than one selection option, the
868 last one wins, so in order to guarantee the desired result, don't
869 pass more than one such option.
871 Use ``--force`` to start even if secondary disks are failing.
872 ``--ignore-offline`` can be used to ignore offline primary nodes
873 and mark the instance as started even if the primary is not
The ``--force-multiple`` option will skip the interactive confirmation
in case more than one instance will be affected.
879 The ``-H`` and ``-B`` options specify temporary hypervisor and
880 backend parameters that can be used to start an instance with
881 modified parameters. They can be useful for quick testing without
882 having to modify an instance back and forth, e.g.::
884 # gnt-instance start -H root_args="single" instance1
885 # gnt-instance start -B memory=2048 instance2
888 The first form will start the instance instance1 in single-user
889 mode, and the instance instance2 with 2GB of RAM (this time only,
890 unless that is the actual instance memory size already). Note that
891 the values override the instance parameters (and not extend them):
892 an instance with "root\_args=ro" when started with -H
893 root\_args=single will result in "single", not "ro single".

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.
900 # gnt-instance start instance1.example.com
901 # gnt-instance start --node node1.example.com node2.example.com
902 # gnt-instance start --all
910 | [--force-multiple] [--ignore-offline]
911 | [--instance \| --node \| --primary \| --secondary \| --all \|
912 | --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
Stops one or more instances. If an instance cannot be cleanly
stopped within a hardcoded interval (currently 2 minutes), it will
be forcibly stopped (equivalent to switching off the power
on a physical machine).
921 The ``--timeout`` is used to specify how much time to wait before
922 forcing the shutdown (e.g. ``xm destroy`` in Xen, killing the kvm
923 process for KVM, etc.). By default two minutes are given to each
926 The ``--instance``, ``--node``, ``--primary``, ``--secondary``,
927 ``--all``, ``--tags``, ``--node-tags``, ``--pri-node-tags`` and
``--sec-node-tags`` options are similar to those of the **startup**
command and they influence the actual instances being shut down.
931 The ``--submit`` option is used to send the job to the master
932 daemon but not wait for its completion. The job ID will be shown so
933 that it can be examined via **gnt-job info**.
935 ``--ignore-offline`` can be used to ignore offline primary nodes
936 and force the instance to be marked as stopped. This option should
937 be used with care as it can lead to an inconsistent cluster state.
941 # gnt-instance shutdown instance1.example.com
942 # gnt-instance shutdown --all
949 | [--type=*REBOOT-TYPE*]
950 | [--ignore-secondaries]
951 | [--shutdown-timeout=*N*]
953 | [--instance \| --node \| --primary \| --secondary \| --all \|
954 | --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
Reboots one or more instances. The type of reboot depends on the
value of ``--type``. A soft reboot does a hypervisor reboot; a hard
reboot does an instance stop, recreates the hypervisor configuration
for the instance and starts the instance. A full reboot does the
equivalent of **gnt-instance shutdown && gnt-instance startup**.
The default is a hard reboot.
For the hard reboot, the option ``--ignore-secondaries`` ignores
errors for the secondary node while re-assembling the disks of the
instance.
The ``--instance``, ``--node``, ``--primary``, ``--secondary``,
``--all``, ``--tags``, ``--node-tags``, ``--pri-node-tags`` and
``--sec-node-tags`` options are similar to those of the **startup**
command and they influence the actual instances being rebooted.
The ``--shutdown-timeout`` option is used to specify how much time to
wait before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing
the kvm process for KVM, etc.). By default two minutes are given to
each instance to stop.
The ``--force-multiple`` option will skip the interactive confirmation
in case more than one instance will be affected.
Example::

    # gnt-instance reboot instance1.example.com
    # gnt-instance reboot --type=full instance1.example.com
**console** [--show-cmd] {*instance*}

Connects to the console of the given instance. If the instance is
not up, an error is returned. Use the ``--show-cmd`` option to
display the command instead of executing it.
For HVM instances, this will attempt to connect to the serial
console of the instance. To connect to the virtualized "physical"
console of an HVM instance, use a VNC client with the connection
information from the **info** command.
Example::

    # gnt-instance console instance1.example.com
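
To only print the connection command instead of running it, using the
``--show-cmd`` option described above::

    # gnt-instance console --show-cmd instance1.example.com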
**replace-disks** [--submit] [--early-release] {-p} [--disks *idx*]
{*instance*}

**replace-disks** [--submit] [--early-release] {-s} [--disks *idx*]
{*instance*}

**replace-disks** [--submit] [--early-release] {--iallocator *name*
\| --new-secondary *NODE*} {*instance*}

**replace-disks** [--submit] [--early-release] {--auto}
{*instance*}
This command is a generalized form for replacing disks. It is
currently only valid for the mirrored (DRBD) disk template.
The first form (when passing the ``-p`` option) will replace the
disks on the primary, while the second form (when passing the
``-s`` option) will replace the disks on the secondary node. For
these two cases (as the node doesn't change), it is possible to
run the replace for only a subset of the disks, using the
``--disks`` option, which takes a list of comma-delimited disk indices
(zero-based), e.g. 0,2 to replace only the first and third disks.
The third form (when passing either the ``--iallocator`` or the
``--new-secondary`` option) is designed to change the secondary node
of the instance. Specifying ``--iallocator`` makes the new secondary
be selected automatically by the specified allocator plugin,
otherwise the new secondary node will be the one chosen manually
via the ``--new-secondary`` option.
The fourth form (when using ``--auto``) will automatically
determine which disks of an instance are faulty and replace them
within the same node. The ``--auto`` option works only when an
instance has only faulty disks on either the primary or secondary
node; it doesn't work when both sides have faulty disks.
The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.
The ``--early-release`` option changes the code so that the old
storage on the secondary node(s) is removed early (before the resync
is completed) and the internal Ganeti locks for the current (and new,
if any) secondary node are also released, thus allowing more
parallelism in the cluster operation. This should be used only when
recovering from a disk failure on the current secondary (thus the old
storage is already broken) or when the storage on the primary node is
known to be fine (thus we won't need the old storage for potential
peer breakage).

Note that it is not possible to select an offline or drained node
as a new secondary.
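
For example, to replace the disks on the secondary node in place, or
to move the secondary to a node picked by an allocator plugin (the
instance name and the ``hail`` allocator are illustrative)::

    # gnt-instance replace-disks -s instance1.example.com
    # gnt-instance replace-disks --iallocator hail instance1.example.com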
**activate-disks** [--submit] [--ignore-size] {*instance*}
Activates the block devices of the given instance. If successful,
the command will show the location and name of the block devices::

    node1.example.com:disk/0:/dev/drbd0
    node1.example.com:disk/1:/dev/drbd1

In this example, *node1.example.com* is the name of the node on
which the devices have been activated. The *disk/0* and *disk/1*
are the Ganeti names of the instance disks; how they are visible
inside the instance is hypervisor-specific. */dev/drbd0* and
*/dev/drbd1* are the actual block devices as visible on the node.
The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.
The ``--ignore-size`` option can be used to activate disks ignoring
the currently configured size in Ganeti. This can be used in cases
where the configuration has gotten out of sync with the real world
(e.g. after a partially-failed grow-disk operation or due to
rounding in LVM devices). This should not be used in normal cases,
but only when activate-disks fails without it.

Note that it is safe to run this command while the instance is
already active.
**deactivate-disks** [-f] [--submit] {*instance*}

De-activates the block devices of the given instance. Note that if
you run this command for an instance with a drbd disk template
while it is running, it will not be able to shut down the block
devices on the primary node, but it will shut down the block devices
on the secondary nodes, thus breaking the replication.
The ``-f``/``--force`` option will skip checks that the instance is
down; in case the hypervisor is confused and we can't talk to it,
normally Ganeti will refuse to deactivate the disks, but with this
option passed it will skip this check and directly try to deactivate
the disks. This can still fail due to the instance actually running
or other issues.
The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.
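
A typical invocation, forcing deactivation when the hypervisor cannot
be queried (instance name illustrative)::

    # gnt-instance deactivate-disks -f instance1.example.com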
**grow-disk** [--no-wait-for-sync] [--submit] {*instance*} {*disk*}
{*amount*}

Grows an instance's disk. This is only possible for instances
having a plain or drbd disk template.
Note that this command only changes the block device size; it will
not grow the actual filesystems, partitions, etc. that live on that
disk. Usually, you will need to:

#. use **gnt-instance grow-disk**

#. reboot the instance (later, at a convenient time)

#. use a filesystem resizer, such as ext2online(8) or
   xfs\_growfs(8), to resize the filesystem, or use fdisk(8) to change
   the partition table on the disk
The *disk* argument is the index of the instance disk to grow. The
*amount* argument is given either as a number (in which case it
represents the amount in mebibytes by which to increase the disk) or
can be given similarly to the arguments in the create instance
operation, with a suffix denoting the unit.
Note that the disk grow operation might complete on one node but
fail on the other; this will leave the instance with
different-sized LVs on the two nodes, but this will not create
problems (except for unused space).
If you do not want gnt-instance to wait for the new disk region to
be synced, use the ``--no-wait-for-sync`` option.
The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.
Example (increase the first disk for instance1 by 16GiB)::

    # gnt-instance grow-disk instance1.example.com 0 16g
Also note that disk shrinking is not supported; use
**gnt-backup export** and then **gnt-backup import** to reduce the
disk size of an instance.
**recreate-disks** [--submit] [--disks=``indices``] {*instance*}

Recreates the disks of the given instance, or only a subset of the
disks (if the option ``--disks`` is passed, which must be a
comma-separated list of disk indices, starting from zero).
Note that this functionality should only be used for missing disks;
if any of the given disks already exist, the operation will fail.
While this is suboptimal, recreate-disks should hopefully not be
needed in normal operation, so the impact of this is low.
The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.
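
For example, to recreate only the first and third disks of an
instance (instance name illustrative)::

    # gnt-instance recreate-disks --disks=0,2 instance1.example.com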
**failover** [-f] [--ignore-consistency] [--shutdown-timeout=*N*]
[--submit] {*instance*}
Failover will fail the instance over to its secondary node. This
works only for instances having a drbd disk template.
Normally the failover will check the consistency of the disks
before failing over the instance. If you are trying to migrate
instances off a dead node, this will fail. Use the
``--ignore-consistency`` option for this purpose. Note that this
option can be dangerous as errors in shutting down the instance
will be ignored, resulting in possibly having the instance running
on two machines in parallel (on disconnected DRBD drives).
The ``--shutdown-timeout`` option is used to specify how much time to
wait before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing
the kvm process for KVM, etc.). By default two minutes are given to
each instance to stop.
The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.
Example::

    # gnt-instance failover instance1.example.com
**migrate** [-f] {--cleanup} {*instance*}

**migrate** [-f] [--allow-failover] [--non-live]
[--migration-mode=live\|non-live] {*instance*}
Migrate will move the instance to its secondary node without
shutdown. It only works for instances having the drbd8 disk
template.
The migration command needs a perfectly healthy instance, as we
rely on the dual-master capability of drbd8 and the disks of the
instance are not allowed to be degraded.
The ``--non-live`` and ``--migration-mode=non-live`` options will
switch (for the hypervisors that support it) between a "fully live"
migration (i.e. the interruption is as minimal as possible) and one
in which the instance is frozen, its state saved and transported to
the remote node, and then resumed there. This all depends on the
hypervisor support for the two different methods. In any case, it is
not an error to pass this parameter (it will just be ignored if the
hypervisor doesn't support it). The
``--migration-mode=live`` option will request a fully-live
migration. The default, when neither option is passed, depends on
the hypervisor parameters (and can be viewed with the
**gnt-cluster info** command).
If the ``--cleanup`` option is passed, the operation changes from
migration to attempting recovery from a failed previous migration.
In this mode, Ganeti checks if the instance runs on the correct
node (and updates its configuration if not) and ensures the
instance's disks are configured correctly. In this mode, the
``--non-live`` option is ignored.
The ``-f`` option will skip the prompt for confirmation.
If ``--allow-failover`` is specified, Ganeti tries to fall back to a
failover if it can already determine that a migration won't work
(i.e. if the instance is shut down). Please note that the fallback
will not happen during execution: if a migration fails while running,
it still fails.
Example (and expected output)::

    # gnt-instance migrate instance1
    Migrate will happen to the instance instance1. Note that migration is
    **experimental** in this version. This might impact the instance if
    anything goes wrong. Continue?
    y/[n]/?: y
    * checking disk consistency between source and target
    * ensuring the target is in secondary mode
    * changing disks into dual-master mode
     - INFO: Waiting for instance instance1 to sync disks.
     - INFO: Instance instance1's disks are in sync.
    * migrating instance to node2.example.com
    * changing the instance's disks on source node to secondary
     - INFO: Waiting for instance instance1 to sync disks.
     - INFO: Instance instance1's disks are in sync.
    * changing the instance's disks to single-master
**move** [-f] [-n *node*] [--shutdown-timeout=*N*] [--submit]
{*instance*}

Move will move the instance to an arbitrary node in the cluster.
This works only for instances having a plain or file disk
template.
Note that since this operation is done via data copy, it will take
a long time for big disks (similar to replace-disks for a drbd
instance).
The ``--shutdown-timeout`` option is used to specify how much time to
wait before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing
the kvm process for KVM, etc.). By default two minutes are given to
each instance to stop.
The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.
Example::

    # gnt-instance move -n node3.example.com instance1.example.com
**add-tags** [--from *file*] {*instancename*} {*tag*...}

Add tags to the given instance. If any of the tags contains invalid
characters, the entire operation will abort.
If the ``--from`` option is given, the list of tags will be
extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line
(if you do, both sources will be used). A file name of - will be
interpreted as stdin.
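
For example, adding one tag on the command line and further tags from
a file (instance name, tag and file path illustrative)::

    # gnt-instance add-tags instance1.example.com webserver
    # gnt-instance add-tags --from /tmp/tags instance1.example.com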
**list-tags** {*instancename*}

List the tags of the given instance.
**remove-tags** [--from *file*] {*instancename*} {*tag*...}

Remove tags from the given instance. If any of the given tags does
not exist on the instance, the entire operation will abort.
If the ``--from`` option is given, the list of tags to be removed will
be extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line (if
you do, tags from both sources will be removed). A file name of - will
be interpreted as stdin.
.. vim: set textwidth=72 :