1 gnt-cluster(8) Ganeti | Version @GANETI_VERSION@
2 ================================================
7 gnt-cluster - Ganeti administration, cluster-wide
12 **gnt-cluster** {command} [arguments...]
The **gnt-cluster** command is used for cluster-wide administration in the
26 **activate-master-ip**
28 Activates the master IP on the master node.
33 **command** [-n *node*] [-g *group*] [-M] {*command*}
35 Executes a command on all nodes. This command is designed for simple
36 usage. For more complex use cases the commands **dsh**\(1) or **cssh**\(1)
37 should be used instead.
If the option ``-n`` is not given, the command will be executed on all
nodes; otherwise it will be executed only on the node(s) specified. Use
the option multiple times to run it on several nodes, like::
43 # gnt-cluster command -n node1.example.com -n node2.example.com date
45 The ``-g`` option can be used to run a command only on a specific node
48 # gnt-cluster command -g default date
50 The ``-M`` option can be used to prepend the node name to all output
51 lines. The ``--failure-only`` option hides successful commands, making
52 it easier to see failures.
54 The command is executed serially on the selected nodes. If the
55 master node is present in the list, the command will be executed
last on the master. The other nodes are executed in a natural sort
order, so that node2.example.com will be earlier than
node10.example.com but after node1.example.com.
60 So given the node names node1, node2, node3, node10, node11, with
61 node3 being the master, the order will be: node1, node2, node10,
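The ordering described above can be sketched in Python; the helper
below is illustrative only (the node names and the ``execution_order``
function are hypothetical, not part of Ganeti):

```python
import re

def _natural_key(name):
    # Split "node10" into ["node", 10, ""] so numeric parts compare as
    # integers and node10 sorts after node2.
    return [int(p) if p.isdigit() else p for p in re.split(r"(\d+)", name)]

def execution_order(nodes, master):
    # All non-master nodes first, in natural-sort order; the master,
    # if selected, always runs last.
    others = sorted((n for n in nodes if n != master), key=_natural_key)
    return others + ([master] if master in nodes else [])

print(execution_order(["node3", "node10", "node1", "node11", "node2"], "node3"))
# -> ['node1', 'node2', 'node10', 'node11', 'node3']
```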
64 The command is constructed by concatenating all other command line
65 arguments. For example, to list the contents of the /etc directory
68 # gnt-cluster command ls -l /etc
70 and the command which will be executed will be ``ls -l /etc``.
75 | **copyfile** [\--use-replication-network] [-n *node*] [-g *group*]
78 Copies a file to all or to some nodes. The argument specifies the
79 source file (on the current system), the ``-n`` argument specifies
80 the target node, or nodes if the option is given multiple times. If
81 ``-n`` is not given at all, the file will be copied to all nodes. The
82 ``-g`` option can be used to only select nodes in a specific node group.
83 Passing the ``--use-replication-network`` option will cause the
84 copy to be done over the replication network (only matters if the
85 primary/secondary IPs are different). Example::
# gnt-cluster copyfile -n node1.example.com -n node2.example.com /tmp/test
89 This will copy the file /tmp/test from the current node to the two
95 **deactivate-master-ip** [\--yes]
97 Deactivates the master IP on the master node.
This should be run only locally or over a connection to the node IP
directly, as a connection to the master IP will be broken by this
operation. Because of this risk it will require user confirmation
unless the ``--yes`` option is passed.
107 **destroy** {\--yes-do-it}
109 Remove all configuration files related to the cluster, so that a
110 **gnt-cluster init** can be done again afterwards.
Since this is a dangerous command, you are required to pass the
argument *\--yes-do-it*.
118 **epo** [\--on] [\--groups|\--all] [\--power-delay] *arguments*
120 Performs an emergency power-off on nodes given as arguments. If
121 ``--groups`` is given, arguments are node groups. If ``--all`` is
122 provided, the whole cluster will be shut down.
124 The ``--on`` flag recovers the cluster after an emergency power-off.
125 When powering on the cluster you can use ``--power-delay`` to define the
126 time in seconds (fractions allowed) waited between powering on
Please note that the master node will not be turned down or up
automatically. It will just be left in a state where you can manually
perform the shutdown of that one node. If the master is in the list of
affected nodes and this is not a complete cluster emergency power-off
(i.e. one using ``--all``), you are required to do a master failover
to another, unaffected node.
141 Displays the current master node.
148 Shows runtime cluster information: cluster name, architecture (32
149 or 64 bit), master node, node list and instance list.
With the ``--roman`` option, **gnt-cluster info** will try to print
its integer fields in a Latin-friendly way. This allows further
diffusion of Ganeti among ancient cultures.
160 Shows the command line that can be used to recreate the cluster with the
161 same options relative to specs in the instance policies.
167 | [{-s|\--secondary-ip} *secondary\_ip*]
168 | [\--vg-name *vg-name*]
169 | [\--master-netdev *interface-name*]
170 | [\--master-netmask *netmask*]
171 | [\--use-external-mip-script {yes \| no}]
172 | [{-m|\--mac-prefix} *mac-prefix*]
175 | [\--file-storage-dir *dir*]
176 | [\--shared-file-storage-dir *dir*]
177 | [\--enabled-hypervisors *hypervisors*]
178 | [{-H|\--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
179 | [{-B|\--backend-parameters} *be-param*=*value*[,*be-param*=*value*...]]
180 | [{-N|\--nic-parameters} *nic-param*=*value*[,*nic-param*=*value*...]]
181 | [{-D|\--disk-parameters} *disk-template*:*disk-param*=*value*[,*disk-param*=*value*...]]
182 | [\--maintain-node-health {yes \| no}]
183 | [\--uid-pool *user-id pool definition*]
184 | [{-I|\--default-iallocator} *default instance allocator*]
185 | [\--primary-ip-version *version*]
186 | [\--prealloc-wipe-disks {yes \| no}]
187 | [\--node-parameters *ndparams*]
188 | [{-C|\--candidate-pool-size} *candidate\_pool\_size*]
189 | [\--specs-cpu-count *spec-param*=*value* [,*spec-param*=*value*...]]
190 | [\--specs-disk-count *spec-param*=*value* [,*spec-param*=*value*...]]
191 | [\--specs-disk-size *spec-param*=*value* [,*spec-param*=*value*...]]
192 | [\--specs-mem-size *spec-param*=*value* [,*spec-param*=*value*...]]
193 | [\--specs-nic-count *spec-param*=*value* [,*spec-param*=*value*...]]
194 | [\--ipolicy-std-specs *spec*=*value* [,*spec*=*value*...]]
195 | [\--ipolicy-bounds-specs *bounds_ispecs*]
196 | [\--ipolicy-disk-templates *template* [,*template*...]]
197 | [\--ipolicy-spindle-ratio *ratio*]
198 | [\--ipolicy-vcpu-ratio *ratio*]
199 | [\--disk-state *diskstate*]
200 | [\--hypervisor-state *hvstate*]
201 | [\--drbd-usermode-helper *helper*]
202 | [\--enabled-disk-templates *template* [,*template*...]]
This command is only run once initially on the first node of the
cluster. It will initialize the cluster configuration, set up the
SSH keys, start the daemons on the master node, etc., in order to
have a working one-node cluster.
210 Note that the *clustername* is not any random name. It has to be
211 resolvable to an IP address using DNS, and it is best if you give the
212 fully-qualified domain name. This hostname must resolve to an IP
213 address reserved exclusively for this purpose, i.e. not already in
The cluster can run in two modes: single-homed or dual-homed. In the
first case, all traffic (public traffic, inter-node traffic and data
replication traffic) goes over the same interface. In the dual-homed
case, the data replication traffic goes over the second network. The
``-s (--secondary-ip)`` option here marks the cluster as dual-homed
and its parameter represents this node's address on the second
network. If you initialise the cluster with ``-s``, all nodes added
must have a secondary IP as well.
225 Note that for Ganeti it doesn't matter if the secondary network is
226 actually a separate physical network, or is done using tunneling,
227 etc. For performance reasons, it's recommended to use a separate
The ``--vg-name`` option will let you specify a volume group
different from "xenvg" for Ganeti to use when creating instance
disks. This volume group must have the same name on all nodes. Once
the cluster is initialized this can be altered by using the
**modify** command. Note that if the volume group name is modified
after cluster creation and DRBD support is enabled, you might have to
manually modify the ``metavg`` as well.
If you don't want to use LVM storage at all, use the
``--enabled-disk-templates`` option to restrict the set of enabled
disk templates. Once the cluster is initialized you can change this
setup with the **modify** command.
243 The ``--master-netdev`` option is useful for specifying a different
244 interface on which the master will activate its IP address. It's
245 important that all nodes have this interface because you'll need it
246 for a master failover.
The ``--master-netmask`` option allows you to specify a netmask for
the master IP. The netmask must be specified as an integer and will
be interpreted as a CIDR prefix length. The default value is 32 for
an IPv4 address and 128 for an IPv6 address.
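For illustration, the prefix-length interpretation can be checked with
Python's standard ``ipaddress`` module (this snippet is a sketch, not
Ganeti code):

```python
import ipaddress

# --master-netmask takes an integer CIDR prefix length; the default is
# the full host prefix: 32 for IPv4, 128 for IPv6.
DEFAULT_NETMASK = {4: 32, 6: 128}

# A prefix length such as 25 corresponds to this traditional netmask:
print(ipaddress.ip_network("0.0.0.0/25").netmask)  # -> 255.255.255.128
```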
The ``--use-external-mip-script`` option allows you to specify
whether to use a user-supplied master IP address setup script, whose
location is ``@SYSCONFDIR@/ganeti/scripts/master-ip-setup``. If the
option value is set to False, the default script (located at
``@PKGLIBDIR@/tools/master-ip-setup``) will be executed.
The ``-m (--mac-prefix)`` option will let you specify a three-byte
prefix under which the virtual MAC addresses of your instances will
be generated. The prefix must be specified in the format ``XX:XX:XX``
and the default is ``aa:00:00``.
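How a MAC address is derived from the prefix can be sketched as
follows (``random_mac`` is a hypothetical helper, not Ganeti's actual
generator):

```python
import random

def random_mac(prefix="aa:00:00"):
    # The three-byte prefix is fixed; the remaining three bytes are
    # chosen at random for each new NIC.
    tail = [random.randint(0x00, 0xFF) for _ in range(3)]
    return prefix + "".join(":%02x" % b for b in tail)

print(random_mac())  # e.g. aa:00:00:3f:9c:12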
264 The ``--no-etc-hosts`` option allows you to initialize the cluster
265 without modifying the /etc/hosts file.
267 The ``--no-ssh-init`` option allows you to initialize the cluster
268 without creating or distributing SSH key pairs.
The ``--file-storage-dir`` and ``--shared-file-storage-dir`` options
allow you to set the directory to use for storing instance disk files
when using the file storage backend or the shared file storage
backend, respectively. Note that the file and shared file storage
directories must be allowed directories for file storage, as
specified in the ``@SYSCONFDIR@/ganeti/file-storage-paths`` file. The
file storage directory can also be a subdirectory of an allowed one,
and it should be present on all nodes.
The ``--prealloc-wipe-disks`` option sets a cluster-wide
configuration value for wiping disks prior to allocation and size
changes (``gnt-instance grow-disk``). This increases security at the
instance level, as the instance can't access leftover data from its
underlying storage.
284 The ``--enabled-hypervisors`` option allows you to set the list of
285 hypervisors that will be enabled for this cluster. Instance
286 hypervisors can only be chosen from the list of enabled
287 hypervisors, and the first entry of this list will be used by
288 default. Currently, the following hypervisors are available:
300 a simple chroot manager that starts chroot based on a script at the
301 root of the filesystem holding the chroot
304 fake hypervisor for development/testing
306 Either a single hypervisor name or a comma-separated list of
307 hypervisor names can be specified. If this option is not specified,
308 only the xen-pvm hypervisor is enabled by default.
310 The ``-H (--hypervisor-parameters)`` option allows you to set default
311 hypervisor specific parameters for the cluster. The format of this
312 option is the name of the hypervisor, followed by a colon and a
313 comma-separated list of key=value pairs. The keys available for each
314 hypervisors are detailed in the **gnt-instance**\(8) man page, in the
315 **add** command plus the following parameters which are only
316 configurable globally (at cluster level):
319 Valid for the Xen PVM and KVM hypervisors.
This option specifies the TCP port to use for live migration. For
Xen, the same port should be configured on all nodes in the
``@XEN_CONFIG_DIR@/xend-config.sxp`` file, under the key
"xend-relocation-port".
327 Valid for the KVM hypervisor.
329 This option specifies the maximum bandwidth that KVM will use for
330 instance live migrations. The value is in MiB/s.
332 This option is only effective with kvm versions >= 78 and qemu-kvm
335 The ``-B (--backend-parameters)`` option allows you to set the default
336 backend parameters for the cluster. The parameter format is a
337 comma-separated list of key=value pairs with the following supported
Number of VCPUs to set for an instance by default; must be an
integer, and will be set to 1 if not specified.
345 Maximum amount of memory to allocate for an instance by default, can
346 be either an integer or an integer followed by a unit (M for
347 mebibytes and G for gibibytes are supported), will be set to 128M if
351 Minimum amount of memory to allocate for an instance by default, can
352 be either an integer or an integer followed by a unit (M for
353 mebibytes and G for gibibytes are supported), will be set to 128M if
357 Value of the auto\_balance flag for instances to use by default,
358 will be set to true if not specified.
361 Default value for the ``always_failover`` flag for instances; if
362 not set, ``False`` is used.
365 The ``-N (--nic-parameters)`` option allows you to set the default
366 network interface parameters for the cluster. The parameter format is a
367 comma-separated list of key=value pairs with the following supported
371 The default NIC mode, one of ``routed``, ``bridged`` or
In ``bridged`` or ``openvswitch`` mode, the default interface to
which NICs are attached. In ``routed`` mode it represents a
hypervisor-vif-script-dependent value used to distinguish different
instance groups. For example, under the default KVM network script it
is interpreted as a routing table number or name. Openvswitch support
is also hypervisor-dependent and currently works for the default KVM
network script. Under Xen a custom network script must be provided.
The ``-D (--disk-parameters)`` option allows you to set the default
disk template parameters at cluster level. The format used for this
option is similar to the one used by the ``-H`` option: the disk
template name must be specified first, followed by a colon and a
comma-separated list of key=value pairs. These parameters can only be
specified at cluster and node group level; the cluster-level
parameters are inherited by a node group at the moment of its
creation, and can be further modified at node group level using the
**gnt-group**\(8) command.
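The ``name:key=value,...`` shape shared by ``-H`` and ``-D`` can be
parsed as sketched below (``parse_templated_params`` is an
illustrative helper, not part of Ganeti):

```python
def parse_templated_params(spec):
    # Split "drbd:resync-rate=1024,metavg=xenvg" into the template (or
    # hypervisor) name and a dict of its key=value parameters.
    name, _, params = spec.partition(":")
    kv = dict(p.split("=", 1) for p in params.split(",")) if params else {}
    return name, kv

print(parse_templated_params("drbd:resync-rate=1024,metavg=xenvg"))
# -> ('drbd', {'resync-rate': '1024', 'metavg': 'xenvg'})
```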
392 The following is the list of disk parameters available for the **drbd**
393 template, with measurement units specified in square brackets at the end
394 of the description (when applicable):
397 Static re-synchronization rate. [KiB/s]
400 Number of stripes to use for data LVs.
403 Number of stripes to use for meta LVs.
What kind of barriers to **disable** for disks. It can either take
the value "n", meaning no barriers are disabled, or a non-empty
string containing a subset of the characters "bfd": "b" disables disk
barriers, "f" disables disk flushes, and "d" disables disk drains.
412 Boolean value indicating whether the meta barriers should be
413 disabled (True) or not (False).
416 String containing the name of the default LVM volume group for DRBD
417 metadata. By default, it is set to ``xenvg``. It can be overridden
418 during the instance creation process by using the ``metavg`` key of
419 the ``--disk`` parameter.
422 String containing additional parameters to be appended to the
423 arguments list of ``drbdsetup disk``.
426 String containing additional parameters to be appended to the
427 arguments list of ``drbdsetup net``.
430 Replication protocol for the DRBD device. Has to be either "A", "B"
431 or "C". Refer to the DRBD documentation for further information
432 about the differences between the protocols.
435 Boolean indicating whether to use the dynamic resync speed
436 controller or not. If enabled, c-plan-ahead must be non-zero and all
437 the c-* parameters will be used by DRBD. Otherwise, the value of
438 resync-rate will be used as a static resync speed.
Agility factor of the dynamic resync speed controller (the higher
the value, the slower the algorithm adapts the resync speed). A value
of 0 (the default) disables the controller. [ds]
446 Maximum amount of in-flight resync data for the dynamic resync speed
447 controller. [sectors]
450 Maximum estimated peer response latency for the dynamic resync speed
454 Minimum resync speed for the dynamic resync speed controller. [KiB/s]
457 Upper bound on resync speed for the dynamic resync speed controller.
460 List of parameters available for the **plain** template:
463 Number of stripes to use for new LVs.
465 List of parameters available for the **rbd** template:
468 The RADOS cluster pool, inside which all rbd volumes will reside.
469 When a new RADOS cluster is deployed, the default pool to put rbd
470 volumes (Images in RADOS terminology) is 'rbd'.
473 If 'userspace', instances will access their disks directly without
474 going through a block device, avoiding expensive context switches
475 with kernel space and the potential for deadlocks_ in low memory
478 The default value is 'kernelspace' and it disables this behaviour.
479 This setting may only be changed to 'userspace' if all instance
480 disks in the affected group or cluster can be accessed in userspace.
Attempting to use this feature without rbd support compiled into KVM
results in a "no such file or directory" error message.
485 .. _deadlocks: http://tracker.ceph.com/issues/3076
487 The option ``--maintain-node-health`` allows one to enable/disable
488 automatic maintenance actions on nodes. Currently these include
489 automatic shutdown of instances and deactivation of DRBD devices on
490 offline nodes; in the future it might be extended to automatic
removal of unknown LVM volumes, etc. Note that this option is only
useful if ``ganeti-confd`` support was enabled at compilation time.
494 The ``--uid-pool`` option initializes the user-id pool. The
495 *user-id pool definition* can contain a list of user-ids and/or a
496 list of user-id ranges. The parameter format is a comma-separated
497 list of numeric user-ids or user-id ranges. The ranges are defined
498 by a lower and higher boundary, separated by a dash. The boundaries
499 are inclusive. If the ``--uid-pool`` option is not supplied, the
500 user-id pool is initialized to an empty list. An empty list means
501 that the user-id pool feature is disabled.
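The pool definition format can be illustrated with a small parser (a
sketch of the format described above; ``parse_uid_pool`` is not
Ganeti's implementation):

```python
def parse_uid_pool(spec):
    # "1000-1999,2050" -> [(1000, 1999), (2050, 2050)]; boundaries of
    # a dash-separated range are inclusive, a single id becomes a
    # one-element range, and an empty definition disables the feature.
    if not spec:
        return []
    ranges = []
    for part in spec.split(","):
        lo, _, hi = part.partition("-")
        ranges.append((int(lo), int(hi or lo)))
    return ranges

print(parse_uid_pool("1000-1999,2050"))  # -> [(1000, 1999), (2050, 2050)]
```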
503 The ``-I (--default-iallocator)`` option specifies the default
504 instance allocator. The instance allocator will be used for operations
505 like instance creation, instance and node migration, etc. when no
506 manual override is specified. If this option is not specified and
507 htools was not enabled at build time, the default instance allocator
508 will be blank, which means that relevant operations will require the
509 administrator to manually specify either an instance allocator, or a
510 set of nodes. If the option is not specified but htools was enabled,
511 the default iallocator will be **hail**\(1) (assuming it can be found
512 on disk). The default iallocator can be changed later using the
515 The ``--primary-ip-version`` option specifies the IP version used
516 for the primary address. Possible values are 4 and 6 for IPv4 and
517 IPv6, respectively. This option is used when resolving node names
518 and the cluster name.
520 The ``--node-parameters`` option allows you to set default node
521 parameters for the cluster. Please see **ganeti**\(7) for more
522 information about supported key=value pairs.
524 The ``-C (--candidate-pool-size)`` option specifies the
525 ``candidate_pool_size`` cluster parameter. This is the number of nodes
that the master will try to keep as master\_candidates. For more
details about this role and other node roles, see **ganeti**\(7).
529 The ``--specs-...`` and ``--ipolicy-...`` options specify the instance
530 policy on the cluster. The ``--ipolicy-bounds-specs`` option sets the
531 minimum and maximum specifications for instances. The format is:
532 min:*param*=*value*,.../max:*param*=*value*,... and further
533 specifications pairs can be added by using ``//`` as a separator. The
534 ``--ipolicy-std-specs`` option takes a list of parameter/value pairs.
535 For both options, *param* can be:
- ``cpu-count``: number of VCPUs for an instance
- ``disk-count``: number of disks for an instance
- ``disk-size``: size of each disk
- ``memory-size``: instance memory
- ``nic-count``: number of network interfaces
- ``spindle-use``: spindle usage for an instance
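The ``min:.../max:...`` bounds format can be parsed as in this sketch
(``parse_ipolicy_bounds`` is hypothetical, shown only to illustrate
the documented format):

```python
def parse_ipolicy_bounds(spec):
    # "min:cpu-count=1/max:cpu-count=8//min:..." -> one dict per
    # "//"-separated pair, each mapping "min"/"max" to its parameters.
    groups = []
    for pair in spec.split("//"):
        bounds = {}
        for half in pair.split("/"):
            kind, _, params = half.partition(":")
            bounds[kind] = dict(p.split("=", 1) for p in params.split(","))
        groups.append(bounds)
    return groups

print(parse_ipolicy_bounds("min:cpu-count=1,disk-size=512/max:cpu-count=8"))
```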
544 For the ``--specs-...`` options, each option can have three values:
545 ``min``, ``max`` and ``std``, which can also be modified on group level
546 (except for ``std``, which is defined once for the entire cluster).
Please note that ``std`` values are not the same as the defaults set
by ``--beparams``; they are only used for the capacity calculations.
550 - ``--specs-cpu-count`` limits the number of VCPUs that can be used by an
552 - ``--specs-disk-count`` limits the number of disks
553 - ``--specs-disk-size`` limits the disk size for every disk used
554 - ``--specs-mem-size`` limits the amount of memory available
555 - ``--specs-nic-count`` sets limits on the number of NICs used
557 The ``--ipolicy-spindle-ratio`` option takes a decimal number. The
558 ``--ipolicy-disk-templates`` option takes a comma-separated list of disk
559 templates. This list of disk templates must be a subset of the list
560 of cluster-wide enabled disk templates (which can be set with
561 ``--enabled-disk-templates``).
563 - ``--ipolicy-spindle-ratio`` limits the instances-spindles ratio
564 - ``--ipolicy-vcpu-ratio`` limits the vcpu-cpu ratio
566 All the instance policy elements can be overridden at group level. Group
567 level overrides can be removed by specifying ``default`` as the value of
The ``--drbd-usermode-helper`` option can be used to specify a
usermode helper. Check that this string matches the one used by the
DRBD kernel module.
573 For details about how to use ``--hypervisor-state`` and ``--disk-state``
574 have a look at **ganeti**\(7).
576 The ``--enabled-disk-templates`` option specifies a list of disk templates
577 that can be used by instances of the cluster. For the possible values in
578 this list, see **gnt-instance**\(8). Note that in contrast to the list of
579 disk templates in the ipolicy, this list is a hard restriction. It is not
580 possible to create instances with disk templates that are not enabled in
581 the cluster. It is also not possible to disable a disk template when there
are still instances using it. The first disk template in the list of
enabled disk templates is the default disk template. It will be used
for instance creation if no disk template is requested explicitly.
589 **master-failover** [\--no-voting] [\--yes-do-it]
591 Failover the master role to the current node.
The ``--no-voting`` option skips the remote node agreement checks.
This is dangerous, but necessary in some cases (for example, failing
over the master role in a 2-node cluster with the original master
down). If the original master then comes up, it won't be able to
start its master daemon because it won't have enough votes; but
neither will the new master, if its master daemon ever needs a
restart. You can pass ``--no-voting`` to **ganeti-masterd** on the
new master to solve this problem, and run **gnt-cluster redist-conf**
to make sure the cluster is consistent again.
603 The option ``--yes-do-it`` is used together with ``--no-voting``, for
604 skipping the interactive checks. This is even more dangerous, and should
only be used in conjunction with other means (e.g. an HA suite) to
606 confirm that the operation is indeed safe.
613 Checks if the master daemon is alive.
615 If the master daemon is alive and can respond to a basic query (the
616 equivalent of **gnt-cluster info**), then the exit code of the
617 command will be 0. If the master daemon is not alive (either due to
618 a crash or because this is not the master node), the exit code will
624 | **modify** [\--submit] [\--print-job-id]
626 | [\--vg-name *vg-name*]
627 | [\--enabled-hypervisors *hypervisors*]
628 | [{-H|\--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
629 | [{-B|\--backend-parameters} *be-param*=*value*[,*be-param*=*value*...]]
630 | [{-N|\--nic-parameters} *nic-param*=*value*[,*nic-param*=*value*...]]
631 | [{-D|\--disk-parameters} *disk-template*:*disk-param*=*value*[,*disk-param*=*value*...]]
632 | [\--uid-pool *user-id pool definition*]
633 | [\--add-uids *user-id pool definition*]
634 | [\--remove-uids *user-id pool definition*]
635 | [{-C|\--candidate-pool-size} *candidate\_pool\_size*]
636 | [\--maintain-node-health {yes \| no}]
637 | [\--prealloc-wipe-disks {yes \| no}]
638 | [{-I|\--default-iallocator} *default instance allocator*]
639 | [\--reserved-lvs=*NAMES*]
640 | [\--node-parameters *ndparams*]
641 | [\--master-netdev *interface-name*]
642 | [\--master-netmask *netmask*]
643 | [\--use-external-mip-script {yes \| no}]
644 | [\--hypervisor-state *hvstate*]
645 | [\--disk-state *diskstate*]
646 | [\--ipolicy-std-specs *spec*=*value* [,*spec*=*value*...]]
647 | [\--ipolicy-bounds-specs *bounds_ispecs*]
648 | [\--ipolicy-disk-templates *template* [,*template*...]]
649 | [\--ipolicy-spindle-ratio *ratio*]
650 | [\--ipolicy-vcpu-ratio *ratio*]
651 | [\--enabled-disk-templates *template* [,*template*...]]
652 | [\--drbd-usermode-helper *helper*]
653 | [\--file-storage-dir *dir*]
654 | [\--shared-file-storage-dir *dir*]
657 Modify the options for the cluster.
659 The ``--vg-name``, ``--enabled-hypervisors``, ``-H (--hypervisor-parameters)``,
660 ``-B (--backend-parameters)``, ``-D (--disk-parameters)``, ``--nic-parameters``,
661 ``-C (--candidate-pool-size)``, ``--maintain-node-health``,
662 ``--prealloc-wipe-disks``, ``--uid-pool``, ``--node-parameters``,
663 ``--master-netdev``, ``--master-netmask``, ``--use-external-mip-script``,
664 ``--drbd-usermode-helper``, ``--file-storage-dir``,
665 ``--shared-file-storage-dir``, and ``--enabled-disk-templates`` options are
666 described in the **init** command.
668 The ``--hypervisor-state`` and ``--disk-state`` options are described in
669 detail in **ganeti**\(7).
671 The ``--add-uids`` and ``--remove-uids`` options can be used to
672 modify the user-id pool by adding/removing a list of user-ids or
The option ``--reserved-lvs`` specifies a comma-separated list of
logical volume names (regular expressions) that will be ignored by
the cluster verify operation. This is useful if the volume group used
for Ganeti is shared with the system for other uses. Note that it's
not recommended to create, and mark as ignored, logical volumes whose
names match Ganeti's own name format (a UUID followed by ``.diskN``),
as this option only skips the verification, not the actual use of
those names.
684 To remove all reserved logical volumes, pass in an empty argument
685 to the option, as in ``--reserved-lvs=`` or ``--reserved-lvs ''``.
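The pattern matching can be pictured like this (``lv_is_reserved`` is
an illustrative helper and the fully-anchored matching is an
assumption; Ganeti's exact semantics may differ):

```python
import re

def lv_is_reserved(lv_name, reserved_patterns):
    # Each reserved-lvs entry is treated as a regular expression and
    # matched against the logical volume name (assumed fully anchored
    # here for illustration).
    return any(re.fullmatch(p, lv_name) for p in reserved_patterns)

print(lv_is_reserved("swap0", ["swap.*", "data-.*"]))  # -> True
```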
687 The ``-I (--default-iallocator)`` is described in the **init**
688 command. To clear the default iallocator, just pass an empty string
691 The ``--ipolicy-...`` options are described in the **init** command.
693 See **ganeti**\(7) for a description of ``--submit`` and other common
699 **queue** {drain | undrain | info}
701 Change job queue properties.
703 The ``drain`` option sets the drain flag on the job queue. No new
704 jobs will be accepted, but jobs already in the queue will be
The ``undrain`` option will unset the drain flag on the job queue.
New jobs will be accepted again.
710 The ``info`` option shows the properties of the job queue.
715 **watcher** {pause *duration* | continue | info}
717 Make the watcher pause or let it continue.
719 The ``pause`` option causes the watcher to pause for *duration*
722 The ``continue`` option will let the watcher continue.
724 The ``info`` option shows whether the watcher is currently paused.
729 **redist-conf** [\--submit] [\--print-job-id]
731 This command forces a full push of configuration files from the
732 master node to the other nodes in the cluster. This is normally not
733 needed, but can be run if the **verify** complains about
734 configuration mismatches.
736 See **ganeti**\(7) for a description of ``--submit`` and other common
742 **rename** [-f] {*name*}
744 Renames the cluster and in the process updates the master IP
745 address to the one the new name resolves to. At least one of either
746 the name or the IP address must be different, otherwise the
747 operation will be aborted.
749 Note that since this command can be dangerous (especially when run
750 over SSH), the command will require confirmation unless run with
756 | **renew-crypto** [-f]
757 | [\--new-cluster-certificate] [\--new-confd-hmac-key]
758 | [\--new-rapi-certificate] [\--rapi-certificate *rapi-cert*]
759 | [\--new-spice-certificate | \--spice-certificate *spice-cert*
760 | \--spice-ca-certificate *spice-ca-cert*]
761 | [\--new-cluster-domain-secret] [\--cluster-domain-secret *filename*]
This command will stop all Ganeti daemons in the cluster and start
them again once the new certificates and keys are replicated. The
options ``--new-cluster-certificate`` and ``--new-confd-hmac-key``
can be used to regenerate the cluster-internal SSL certificate and
the HMAC key used by **ganeti-confd**\(8), respectively.
769 To generate a new self-signed RAPI certificate (used by
770 **ganeti-rapi**\(8)) specify ``--new-rapi-certificate``. If you want to
771 use your own certificate, e.g. one signed by a certificate
772 authority (CA), pass its filename to ``--rapi-certificate``.
774 To generate a new self-signed SPICE certificate, used for SPICE
775 connections to the KVM hypervisor, specify the
776 ``--new-spice-certificate`` option. If you want to provide a
777 certificate, pass its filename to ``--spice-certificate`` and pass the
778 signing CA certificate to ``--spice-ca-certificate``.
780 Finally ``--new-cluster-domain-secret`` generates a new, random
781 cluster domain secret, and ``--cluster-domain-secret`` reads the
782 secret from a file. The cluster domain secret is used to sign
783 information exchanged between separate clusters via a third party.
788 **repair-disk-sizes** [instance...]
790 This command checks that the recorded size of the given instance's
791 disks matches the actual size and updates any mismatches found.
792 This is needed if the Ganeti configuration is no longer consistent
793 with reality, as it will impact some disk operations. If no
794 arguments are given, all instances will be checked. When exclusive
storage is active, spindles are updated as well.
797 Note that only active disks can be checked by this command; in case
798 a disk cannot be activated it's advised to use
799 **gnt-instance activate-disks \--ignore-size ...** to force
800 activation without regard to the current size.
802 When all the disk sizes are consistent, the command will return no
803 output. Otherwise it will log details about the inconsistencies in
809 **upgrade** {--to *version* | --resume}
811 This command safely switches all nodes of the cluster to a new Ganeti
812 version. It is a prerequisite that the new version is already installed,
813 albeit not activated, on all nodes; this requisite is checked before any
If called with the ``--resume`` option, any pending upgrade that was
interrupted on the master (by a power failure or similar) is
continued. (This option is not yet implemented.)
824 | **verify** [\--no-nplus1-mem] [\--node-group *nodegroup*]
825 | [\--error-codes] [{-I|\--ignore-errors} *errorcode*]
826 | [{-I|\--ignore-errors} *errorcode*...]
828 Verify correctness of cluster configuration. This is safe with
829 respect to running instances, and incurs no downtime of the
If the ``--no-nplus1-mem`` option is given, Ganeti won't check
whether, if it loses a node, it can restart all the instances on
their secondaries (and report an error otherwise).
With ``--node-group``, restrict the verification to those nodes and
instances that live in the named group. This will not verify global
settings, but allows verifying a group while other operations are
ongoing in other groups.
841 The ``--error-codes`` option outputs each error in the following
842 parseable format: *ftype*:*ecode*:*edomain*:*name*:*msg*.
843 These fields have the following meaning:
846 Failure type. Can be *WARNING* or *ERROR*.
849 Error code of the failure. See below for a list of error codes.
852 Can be *cluster*, *node* or *instance*.
Contains the name of the item that is affected by the failure.
Contains a descriptive message about the error.
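A line in this format can be split into its fields as sketched here
(``parse_verify_error`` is hypothetical, shown only to illustrate the
field layout):

```python
def parse_verify_error(line):
    # ftype:ecode:edomain:name:msg -- split at most four times so that
    # colons inside the free-form message are preserved.
    ftype, ecode, edomain, name, msg = line.split(":", 4)
    return {"ftype": ftype, "ecode": ecode, "edomain": edomain,
            "name": name, "msg": msg}

print(parse_verify_error("ERROR:ENODEORPHANLV:node:node1:orphan volume: xenvg/foo"))
```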
``gnt-cluster verify`` will have a non-zero exit code if at least one
of the failures found is of type *ERROR*.
The ``--ignore-errors`` option can be used to change this behaviour:
it demotes the error represented by the error code given as its
parameter to a warning. The option must be repeated for each error
that should be ignored (e.g.: ``-I ENODEVERSION -I ENODEORPHANLV``). The
867 ``--error-codes`` option can be used to determine the error code of a
879 The command checks which instances have degraded DRBD disks and
880 activates the disks of those instances.
882 This command is run from the **ganeti-watcher** tool, which also
883 has a different, complementary algorithm for doing this check.
884 Together, these two should ensure that DRBD disks are kept
892 Show the cluster version.
900 **add-tags** [\--from *file*] {*tag*...}
902 Add tags to the cluster. If any of the tags contains invalid
903 characters, the entire operation will abort.
If the ``--from`` option is given, the list of tags will be
extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line
(if you do, both sources will be used). A file name of ``-`` will be
interpreted as stdin.
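The ``--from`` handling can be sketched as follows
(``tags_from_lines`` is an illustrative helper, not Ganeti code):

```python
def tags_from_lines(lines, cli_tags=()):
    # Each non-empty line of the --from file becomes one tag; tags
    # passed on the command line are used in addition.
    return [line.strip() for line in lines if line.strip()] + list(cli_tags)

print(tags_from_lines(["web\n", "prod\n"], ["critical"]))
# -> ['web', 'prod', 'critical']
```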
916 List the tags of the cluster.
921 **remove-tags** [\--from *file*] {*tag*...}
923 Remove tags from the cluster. If any of the tags are not existing
924 on the cluster, the entire operation will abort.
If the ``--from`` option is given, the list of tags to be removed
will be extended with the contents of that file (each line becomes a
tag). In this case, there is no need to pass tags on the command line
(if you do, tags from both sources will be removed). A file name of
``-`` will be interpreted as stdin.
935 **search-tags** {*pattern*}
Searches the tags on all objects in the cluster (the cluster itself,
the nodes and the instances) for a given pattern. The pattern is
interpreted as a regular expression and a search will be done on it
(i.e. the given pattern is not anchored to the beginning of the
string; if you want that, prefix the pattern with ``^``).
If no tags match the pattern, the exit code of the command will be
one. If there is at least one match, the exit code will be zero. Each
match is listed on one line, with the object and the tag separated by
a space. The cluster will be listed as /cluster, a node will be
listed as /nodes/*name*, and an instance as /instances/*name*.
Example:
952 # gnt-cluster search-tags time
953 /cluster ctime:2007-09-01
954 /nodes/node1.example.com mtime:2007-10-04
956 .. vim: set textwidth=72 :