1 Ganeti administrator's guide
2 ============================
4 Documents Ganeti version |version|
Ganeti is virtualization cluster management software. You are expected
14 to be a system administrator familiar with your Linux distribution and
15 the Xen or KVM virtualization environments before using it.
17 The various components of Ganeti all have man pages and interactive
18 help. This manual though will help you getting familiar with the system
19 by explaining the most common operations, grouped by related use.
21 After a terminology glossary and a section on the prerequisites needed
to use this manual, the rest of this document is divided into sections
for the different targets that a command affects: instances, nodes, etc.
25 .. _terminology-label:
30 This section provides a small introduction to Ganeti terminology, which
31 might be useful when reading the rest of the document.
36 A set of machines (nodes) that cooperate to offer a coherent, highly
37 available virtualization service under a single administration domain.
42 A physical machine which is member of a cluster. Nodes are the basic
43 cluster infrastructure, and they don't need to be fault tolerant in
44 order to achieve high availability for instances.
Nodes can be added and removed (if they host no instances) at will from
the cluster. In an HA cluster and only with HA instances, the loss of any
single node will not cause disk data loss for any instance; of course,
a node crash will cause the crash of its primary instances.
A node belonging to a cluster can be in one of the following roles at a
given time:

54 - *master* node, which is the node from which the cluster is controlled
- *master candidate* node, only nodes in this role have the full cluster
  configuration and knowledge, and only master candidates can become the
  master node
58 - *regular* node, which is the state in which most nodes will be on
59 bigger clusters (>20 nodes)
- *drained* node, nodes in this state are functioning normally but they
  cannot receive new instances; the intention is that nodes in this role
  have some issue and they are being evacuated for hardware repairs
63 - *offline* node, in which there is a record in the cluster
64 configuration about the node, but the daemons on the master node will
65 not talk to this node; any instances declared as having an offline
66 node as either primary or secondary will be flagged as an error in the
67 cluster verify operation
69 Depending on the role, each node will run a set of daemons:
- the :command:`ganeti-noded` daemon, which controls the manipulation of
  this node's hardware resources; it runs on all nodes which are in a
  cluster
74 - the :command:`ganeti-confd` daemon (Ganeti 2.1+) which runs on all
75 nodes, but is only functional on master candidate nodes
76 - the :command:`ganeti-rapi` daemon which runs on the master node and
77 offers an HTTP-based API for the cluster
78 - the :command:`ganeti-masterd` daemon which runs on the master node and
79 allows control of the cluster
84 A virtual machine which runs on a cluster. It can be a fault tolerant,
85 highly available entity.
87 An instance has various parameters, which are classified in three
categories: hypervisor-related parameters (called ``hvparams``), general
89 parameters (called ``beparams``) and per network-card parameters (called
90 ``nicparams``). All these parameters can be modified either at instance
91 level or via defaults at cluster level.
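
For example, this is roughly how the two levels interact in practice (a
sketch; the memory value is only illustrative)::

  # override the backend parameters for a single instance
  gnt-instance modify -B memory=1024 instance1
  # or change the cluster-wide default instead
  gnt-cluster modify -B memory=1024
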
There are multiple options for the storage provided to an instance; while
97 the instance sees the same virtual drive in all cases, the node-level
98 configuration varies between them.
There are four disk templates you can choose from:

diskless
  The instance has no disks. Only used for special purpose operating
  systems or for testing.

file
  The instance will use plain files as backend for its disks. No
  redundancy is provided, and this is somewhat more difficult to
  configure for high performance.

plain
  The instance will use LVM devices as backend for its disks. No
  redundancy is provided.

drbd
  .. note:: This is only valid for multi-node clusters using DRBD 8.0+

  A mirror is set between the local node and a remote one, which must be
  specified with the second value of the ``--node`` option. Use this
  option to obtain a highly available instance that can be failed over
  to a remote node should the primary one fail.

126 A framework for using external (user-provided) scripts to compute the
127 placement of instances on the cluster nodes. This eliminates the need to
manually specify nodes in instance add, instance moves, node evacuate,
and similar operations.

In order for Ganeti to be able to use these scripts, they must be placed
132 in the iallocator directory (usually ``lib/ganeti/iallocators`` under
133 the installation prefix, e.g. ``/usr/local``).
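
As a sketch, assuming an iallocator script named ``dumb`` has been
installed there, node selection can then be delegated to it at instance
creation time::

  gnt-instance add -I dumb -o debootstrap -t drbd -s 10G instance1
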
135 “Primary” and “secondary” concepts
136 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
138 An instance has a primary and depending on the disk configuration, might
139 also have a secondary node. The instance always runs on the primary node
140 and only uses its secondary node for disk replication.
142 Similarly, the term of primary and secondary instances when talking
143 about a node refers to the set of instances having the given node as
144 primary, respectively secondary.
Tags are short strings that can be attached either to the cluster itself,
or to nodes or instances. They are useful as a very simplistic
151 information store for helping with cluster administration, for example
152 by attaching owner information to each instance after it's created::
154 gnt-instance add … instance1
155 gnt-instance add-tags instance1 owner:user2
157 And then by listing each instance and its tags, this information could
158 be used for contacting the users of each instance.
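
For example, assuming ``tags`` is among the output fields supported by
the list command, such a report could be generated with::

  gnt-instance list -o name,tags
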
While not directly visible to an end-user, it's useful to know that a
basic cluster operation (e.g. starting an instance) is represented
internally by Ganeti as an *OpCode* (abbreviation from operation
166 code). These OpCodes are executed as part of a *Job*. The OpCodes in a
167 single Job are processed serially by Ganeti, but different Jobs will be
168 processed (depending on resource availability) in parallel.
170 For example, shutting down the entire cluster can be done by running the
171 command ``gnt-instance shutdown --all``, which will submit for each
172 instance a separate job containing the “shutdown instance” OpCode.
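
For example (``instance1`` being a placeholder name), a job can be
submitted without waiting for it via the ``--submit`` option, and then
followed with the ``gnt-job`` commands described later in this document::

  node1# gnt-instance shutdown --submit instance1
  node1# gnt-job list
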
178 You need to have your Ganeti cluster installed and configured before you
179 try any of the commands in this document. Please follow the
180 :doc:`install` for instructions on how to do that.
188 The add operation might seem complex due to the many parameters it
189 accepts, but once you have understood the (few) required parameters and
190 the customisation capabilities you will see it is an easy operation.
192 The add operation requires at minimum five parameters:
- the OS for the instance
- the disk template
- the disk count and size
197 - the node specification or alternatively the iallocator to use
198 - and finally the instance name
200 The OS for the instance must be visible in the output of the command
201 ``gnt-os list`` and specifies which guest OS to install on the instance.
203 The disk template specifies what kind of storage to use as backend for
204 the (virtual) disks presented to the instance; note that for instances
205 with multiple virtual disks, they all must be of the same type.
207 The node(s) on which the instance will run can be given either manually,
208 via the ``-n`` option, or computed automatically by Ganeti, if you have
209 installed any iallocator script.
With the above parameters in mind, the command is::

  gnt-instance add \
    -n TARGET_NODE:SECONDARY_NODE \
    -o OS_TYPE \
    -t DISK_TEMPLATE -s DISK_SIZE \
    INSTANCE_NAME

219 The instance name must be resolvable (e.g. exist in DNS) and usually
220 points to an address in the same subnet as the cluster itself.
222 The above command has the minimum required options; other options you
223 can give include, among others:
225 - The memory size (``-B memory``)
227 - The number of virtual CPUs (``-B vcpus``)
229 - Arguments for the NICs of the instance; by default, a single-NIC
230 instance is created. The IP and/or bridge of the NIC can be changed
231 via ``--nic 0:ip=IP,bridge=BRIDGE``
233 See the manpage for gnt-instance for the detailed option list.
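
As an illustration (node name, IP address and instance name are made-up
values), a plain instance with customised memory, CPU and NIC settings
could be created with::

  gnt-instance add -n node1 -o debootstrap -t plain -s 10G \
    -B memory=512,vcpus=2 --nic 0:ip=192.0.2.10 \
    instance2
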
For example, if you want to create a highly available instance, with a
236 single disk of 50GB and the default memory size, having primary node
237 ``node1`` and secondary node ``node3``, use the following command::
  gnt-instance add -n node1:node3 -o debootstrap -t drbd \
    -s 50G instance1

There is also a command for batch instance creation from a
243 specification file, see the ``batch-create`` operation in the
244 gnt-instance manual page.
246 Regular instance operations
247 +++++++++++++++++++++++++++
252 Removing an instance is even easier than creating one. This operation is
irreversible and destroys all the contents of your instance. Use with
care::

  gnt-instance remove INSTANCE_NAME

261 Instances are automatically started at instance creation time. To
262 manually start one which is currently stopped you can run::
264 gnt-instance startup INSTANCE_NAME
266 While the command to stop one is::
268 gnt-instance shutdown INSTANCE_NAME
270 .. warning:: Do not use the Xen or KVM commands directly to stop
271 instances. If you run for example ``xm shutdown`` or ``xm destroy``
272 on an instance Ganeti will automatically restart it (via the
273 :command:`ganeti-watcher` command which is launched via cron).
278 There are two ways to get information about instances: listing
instances, which produces tabular output containing a given set of fields
about each instance, and querying detailed information about a set of
instances.

The command to see all the instances configured and their status is::

  gnt-instance list

287 The command can return a custom set of information when using the ``-o``
288 option (as always, check the manpage for a detailed specification). Each
289 instance will be represented on a line, thus making it easy to parse
290 this output via the usual shell utilities (grep, sed, etc.).
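
For example, a custom listing restricted to a few common fields (field
names as per the manpage) could look like::

  node1# gnt-instance list -o name,pnode,status
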
292 To get more detailed information about an instance, you can run::
294 gnt-instance info INSTANCE
296 which will give a multi-line block of information about the instance,
its hardware resources (especially its disks and their redundancy
298 status), etc. This is harder to parse and is more expensive than the
299 list operation, but returns much more detailed information.
You can create a snapshot of an instance disk and its Ganeti
configuration, which you can then back up or import into another
307 cluster. The way to export an instance is::
309 gnt-backup export -n TARGET_NODE INSTANCE_NAME
312 The target node can be any node in the cluster with enough space under
313 ``/srv/ganeti`` to hold the instance image. Use the ``--noshutdown``
314 option to snapshot an instance without rebooting it. Note that Ganeti
315 only keeps one snapshot for an instance - any previous snapshot of the
316 same instance existing cluster-wide under ``/srv/ganeti`` will be
317 removed by this operation: if you want to keep them, you need to move
318 them out of the Ganeti exports directory.
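
Existing exports can be listed with the ``gnt-backup list`` command,
optionally restricted to one node (sketch)::

  node1# gnt-backup list --node node1
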
320 Importing an instance is similar to creating a new one, but additionally
321 one must specify the location of the snapshot. The command is::
323 gnt-backup import -n TARGET_NODE \
324 --src-node=NODE --src-dir=DIR INSTANCE_NAME
326 By default, parameters will be read from the export information, but you
327 can of course pass them in via the command line - most of the options
available for the command :command:`gnt-instance add` are supported here
as well.
331 Import of foreign instances
332 +++++++++++++++++++++++++++
It is possible to import a foreign instance whose disk data is already
stored as LVM volumes, without going through a copy: this is the disk
adoption mode.
338 For this, ensure that the original, non-managed instance is stopped,
339 then create a Ganeti instance in the usual way, except that instead of
340 passing the disk information you specify the current volumes::
342 gnt-instance add -t plain -n HOME_NODE ... \
343 --disk 0:adopt=lv_name INSTANCE_NAME
This will take over the given logical volumes, rename them to the Ganeti
standard (UUID-based), and start the instance directly, without
installing the OS on them. If you configure the hypervisor similarly to
the non-managed configuration that the instance had, the transition
should be seamless for the instance. For more than one disk, just pass
another disk parameter (e.g. ``--disk 1:adopt=...``).
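
For instance, a two-disk adoption could look like the following sketch
(the volume names are placeholders, and the other required options are
elided just as in the example above)::

  gnt-instance add -t plain -n HOME_NODE ... \
    --disk 0:adopt=lv_root --disk 1:adopt=lv_swap INSTANCE_NAME
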
355 .. note:: This section only applies to multi-node clusters
357 .. _instance-change-primary-label:
359 Changing the primary node
360 +++++++++++++++++++++++++
362 There are three ways to exchange an instance's primary and secondary
363 nodes; the right one to choose depends on how the instance has been
364 created and the status of its current primary node. See
365 :ref:`rest-redundancy-label` for information on changing the secondary
366 node. Note that it's only possible to change the primary node to the
367 secondary and vice-versa; a direct change of the primary node with a
368 third node, while keeping the current secondary is not possible in a
369 single step, only via multiple operations as detailed in
370 :ref:`instance-relocation-label`.
372 Failing over an instance
373 ~~~~~~~~~~~~~~~~~~~~~~~~
375 If an instance is built in highly available mode you can at any time
376 fail it over to its secondary node, even if the primary has somehow
failed and it's not up anymore. Doing so is easy; on the master
378 node you can just run::
380 gnt-instance failover INSTANCE_NAME
382 That's it. After the command completes the secondary node is now the
383 primary, and vice-versa.
385 Live migrating an instance
386 ~~~~~~~~~~~~~~~~~~~~~~~~~~
388 If an instance is built in highly available mode, it currently runs and
both its nodes are running fine, you can migrate it over to its
390 secondary node, without downtime. On the master node you need to run::
392 gnt-instance migrate INSTANCE_NAME
394 The current load on the instance and its memory size will influence how
395 long the migration will take. In any case, for both KVM and Xen
396 hypervisors, the migration will be transparent to the instance.
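
If a fully live migration is not possible or not desired, the
``migrate`` command also accepts (to the best of our knowledge) a
``--non-live`` option which falls back to a shutdown-plus-startup style
of migration::

  gnt-instance migrate --non-live INSTANCE_NAME
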
398 Moving an instance (offline)
399 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If an instance has not been created as mirrored, then the only way to
402 change its primary node is to execute the move command::
404 gnt-instance move -n NEW_NODE INSTANCE
406 This has a few prerequisites:
408 - the instance must be stopped
409 - its current primary node must be on-line and healthy
410 - the disks of the instance must not have any errors
Since this operation actually copies the data from the old node to the
new node, expect it to take a time proportional to the size of the
instance's disks and dependent on the speed of both nodes' I/O systems
and their networking.
419 Disk failures are a common cause of errors in any server
420 deployment. Ganeti offers protection from single-node failure if your
421 instances were created in HA mode, and it also offers ways to restore
422 redundancy after a failure.
424 Preparing for disk operations
425 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
427 It is important to note that for Ganeti to be able to do any disk
operation, the Linux machines on top of which Ganeti runs must be consistent;
429 for LVM, this means that the LVM commands must not return failures; it
430 is common that after a complete disk failure, any LVM command aborts
431 with an error similar to::
434 /dev/sdb1: read failed after 0 of 4096 at 0: Input/output error
/dev/sdb1: read failed after 0 of 4096 at 750153695232: Input/output error
437 /dev/sdb1: read failed after 0 of 4096 at 0: Input/output error
438 Couldn't find device with uuid
439 't30jmN-4Rcf-Fr5e-CURS-pawt-z0jU-m1TgeJ'.
440 Couldn't find all physical volumes for volume group xenvg.
Before restoring an instance's disks to healthy status, it is necessary
to fix the volume group used by Ganeti so that we can actually create and
manage the logical volumes. This is usually done in a multi-step
process:

447 #. first, if the disk is completely gone and LVM commands exit with
448 “Couldn't find device with uuid…” then you need to run the command::
450 vgreduce --removemissing VOLUME_GROUP
452 #. after the above command, the LVM commands should be executing
   normally (warnings are normal, but the commands will not fail
   completely)
456 #. if the failed disk is still visible in the output of the ``pvs``
   command, you need to deactivate it from allocations by running::

     pvchange -x n /dev/sdb1

461 At this point, the volume group should be consistent and any bad
physical volumes should no longer be available for allocation.
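
To double-check the result, the state can be inspected with the standard
LVM tools (``xenvg`` being the example volume group used above)::

  node1# vgs xenvg
  node1# pvs -o pv_name,vg_name,pv_attr
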
464 Note that since version 2.1 Ganeti provides some commands to automate
465 these two operations, see :ref:`storage-units-label`.
467 .. _rest-redundancy-label:
469 Restoring redundancy for DRBD-based instances
470 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
472 A DRBD instance has two nodes, and the storage on one of them has
473 failed. Depending on which node (primary or secondary) has failed, you
474 have three options at hand:
- if the storage on the primary node has failed, you need to re-create
  the disks on it
478 - if the storage on the secondary node has failed, you can either
479 re-create the disks on it or change the secondary and recreate
480 redundancy on the new secondary node
482 Of course, at any point it's possible to force re-creation of disks even
483 though everything is already fine.
485 For all three cases, the ``replace-disks`` operation can be used::
487 # re-create disks on the primary node
488 gnt-instance replace-disks -p INSTANCE_NAME
489 # re-create disks on the current secondary
490 gnt-instance replace-disks -s INSTANCE_NAME
491 # change the secondary node, via manual specification
492 gnt-instance replace-disks -n NODE INSTANCE_NAME
493 # change the secondary node, via an iallocator script
494 gnt-instance replace-disks -I SCRIPT INSTANCE_NAME
495 # since Ganeti 2.1: automatically fix the primary or secondary node
496 gnt-instance replace-disks -a INSTANCE_NAME
498 Since the process involves copying all data from the working node to the
499 target node, it will take a while, depending on the instance's disk
size, node I/O system and network speed. But it is (barring any network
interruption) completely transparent to the instance.
503 Re-creating disks for non-redundant instances
504 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
506 .. versionadded:: 2.1
508 For non-redundant instances, there isn't a copy (except backups) to
re-create the disks. But it's possible to at least re-create empty
disks, after which a reinstall can be run, via the ``recreate-disks``
command::

  gnt-instance recreate-disks INSTANCE

Note that this will fail if the disks already exist.
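
A typical sequence would therefore be something like the following
sketch (both commands are documented in the gnt-instance manpage)::

  gnt-instance recreate-disks INSTANCE
  gnt-instance reinstall INSTANCE
  gnt-instance startup INSTANCE
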
517 Conversion of an instance's disk type
518 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
520 It is possible to convert between a non-redundant instance of type
``plain`` (LVM storage) and redundant ``drbd`` via the ``gnt-instance
modify`` command::

524 # start with a non-redundant instance
525 gnt-instance add -t plain ... INSTANCE
527 # later convert it to redundant
528 gnt-instance stop INSTANCE
529 gnt-instance modify -t drbd INSTANCE
530 gnt-instance start INSTANCE
532 # and convert it back
533 gnt-instance stop INSTANCE
534 gnt-instance modify -t plain INSTANCE
535 gnt-instance start INSTANCE
537 The conversion must be done while the instance is stopped, and
538 converting from plain to drbd template presents a small risk, especially
539 if the instance has multiple disks and/or if one node fails during the
conversion procedure. As such, it's recommended (as always) to make
541 sure that downtime for manual recovery is acceptable and that the
542 instance has up-to-date backups.
547 Accessing an instance's disks
548 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
550 From an instance's primary node you can have access to its disks. Never
551 ever mount the underlying logical volume manually on a fault tolerant
instance, or you will break replication and your data will be
553 inconsistent. The correct way to access an instance's disks is to run
554 (on the master node, as usual) the command::
556 gnt-instance activate-disks INSTANCE
558 And then, *on the primary node of the instance*, access the device that
559 gets created. For example, you could mount the given disks, then edit
560 files on the filesystem, etc.
562 Note that with partitioned disks (as opposed to whole-disk filesystems),
563 you will need to use a tool like :manpage:`kpartx(8)`::
565 node1# gnt-instance activate-disks instance1
568 node3# kpartx -l /dev/…
569 node3# kpartx -a /dev/…
570 node3# mount /dev/mapper/… /mnt/
571 # edit files under mnt as desired
573 node3# kpartx -d /dev/…
577 After you've finished you can deactivate them with the deactivate-disks
578 command, which works in the same way::
580 gnt-instance deactivate-disks INSTANCE
582 Note that if any process started by you is still using the disks, the
above command will error out, and you **must** clean up and ensure that
584 the above command runs successfully before you start the instance,
585 otherwise the instance will suffer corruption.
587 Accessing an instance's console
588 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
590 The command to access a running instance's console is::
592 gnt-instance console INSTANCE_NAME
594 Use the console normally and then type ``^]`` when done, to exit.
596 Other instance operations
597 +++++++++++++++++++++++++
602 There is a wrapper command for rebooting instances::
604 gnt-instance reboot instance2
606 By default, this does the equivalent of shutting down and then starting
607 the instance, but it accepts parameters to perform a soft-reboot (via
608 the hypervisor), a hard reboot (hypervisor shutdown and then startup) or
609 a full one (the default, which also de-configures and then configures
610 again the disks of the instance).
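
The reboot type is selected via the ``--type`` option (``soft``,
``hard`` or ``full``, if memory serves; check the manpage), for
example::

  gnt-instance reboot --type=soft instance2
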
612 Instance OS definitions debugging
613 +++++++++++++++++++++++++++++++++
Should you have any problems with instance operating systems, the
command to see a complete status for all your nodes is::

  gnt-os diagnose

620 .. _instance-relocation-label:
625 While it is not possible to move an instance from nodes ``(A, B)`` to
nodes ``(C, D)`` in a single move, it is possible to do so in a few
steps::

629 # instance is located on A, B
node1# gnt-instance replace-disks -n nodeC instance1
631 # instance has moved from (A, B) to (A, C)
632 # we now flip the primary/secondary nodes
633 node1# gnt-instance migrate instance1
634 # instance lives on (C, A)
635 # we can then change A to D via:
node1# gnt-instance replace-disks -n nodeD instance1
638 Which brings it into the final configuration of ``(C, D)``. Note that we
needed to do two replace-disks operations (two copies of the instance
disks), because we needed to get rid of both the original nodes (A and
B).
There are far fewer node operations available than for instances, but
they are equally important for maintaining a healthy cluster.
652 It is at any time possible to extend the cluster with one more node, by
653 using the node add operation::
655 gnt-node add NEW_NODE
657 If the cluster has a replication network defined, then you need to pass
the ``-s REPLICATION_IP`` parameter to this command.
660 A variation of this command can be used to re-configure a node if its
Ganeti configuration is broken, for example if it has been reinstalled
by mistake::

  gnt-node add --readd EXISTING_NODE

666 This will reinitialise the node as if it's been newly added, but while
667 keeping its existing configuration in the cluster (primary/secondary IP,
668 etc.), in other words you won't need to use ``-s`` here.
670 Changing the node role
671 ++++++++++++++++++++++
673 A node can be in different roles, as explained in the
674 :ref:`terminology-label` section. Promoting a node to the master role is
675 special, while the other roles are handled all via a single command.
677 Failing over the master node
678 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 If you want to promote a different node to the master role (for whatever
681 reason), run on any other master-candidate node the command::
683 gnt-cluster masterfailover
685 and the node you ran it on is now the new master. In case you try to run
686 this on a non master-candidate node, you will get an error telling you
687 which nodes are valid.
689 Changing between the other roles
690 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
692 The ``gnt-node modify`` command can be used to select a new role::
694 # change to master candidate
695 gnt-node modify -C yes NODE
696 # change to drained status
697 gnt-node modify -D yes NODE
698 # change to offline status
699 gnt-node modify -O yes NODE
700 # change to regular mode (reset all flags)
701 gnt-node modify -O no -D no -C no NODE
703 Note that the cluster requires that at any point in time, a certain
704 number of nodes are master candidates, so changing from master candidate
705 to other roles might fail. It is recommended to either force the
706 operation (via the ``--force`` option) or first change the number of
707 master candidates in the cluster - see :ref:`cluster-config-label`.
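
For example (values are illustrative), one could lower the required
number of candidates before demoting a node::

  # reduce the required number of master candidates first...
  node1# gnt-cluster modify --candidate-pool-size=9
  # ...then the demotion should succeed
  node1# gnt-node modify -C no node3
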
There are two steps to moving instances off a node:

- moving the primary instances (actually converting them into secondary
  instances)
- moving the secondary instances (including any instances converted in
  the step above)

719 Primary instance conversion
720 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
722 For this step, you can use either individual instance move
723 commands (as seen in :ref:`instance-change-primary-label`) or the bulk
724 per-node versions; these are::
726 gnt-node migrate NODE
727 gnt-node evacuate NODE
Note that the instance “move” command doesn't currently have a node
equivalent.
732 Both these commands, or the equivalent per-instance command, will make
733 this node the secondary node for the respective instances, whereas their
734 current secondary node will become primary. Note that it is not possible
to change, in one step, the primary node to a third node while
736 keeping the same secondary node.
738 Secondary instance evacuation
739 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
741 For the evacuation of secondary instances, a command called
742 :command:`gnt-node evacuate` is provided and its syntax is::
744 gnt-node evacuate -I IALLOCATOR_SCRIPT NODE
745 gnt-node evacuate -n DESTINATION_NODE NODE
747 The first version will compute the new secondary for each instance in
748 turn using the given iallocator script, whereas the second one will
749 simply move all instances to DESTINATION_NODE.
754 Once a node no longer has any instances (neither primary nor secondary),
755 it's easy to remove it from the cluster::
757 gnt-node remove NODE_NAME
This will deconfigure the node, stop the ganeti daemons on it and leave
it, hopefully, in the same state as before it joined the cluster.
765 When using LVM (either standalone or with DRBD), it can become tedious
766 to debug and fix it in case of errors. Furthermore, even file-based
767 storage can become complicated to handle manually on many hosts. Ganeti
768 provides a couple of commands to help with automation.
773 This is a command specific to LVM handling. It allows listing the
774 logical volumes on a given node or on all nodes and their association to
775 instances via the ``volumes`` command::
777 node1# gnt-node volumes
778 Node PhysDev VG Name Size Instance
779 node1 /dev/sdb1 xenvg e61fbc97-….disk0 512M instance17
780 node1 /dev/sdb1 xenvg ebd1a7d1-….disk0 512M instance19
781 node2 /dev/sdb1 xenvg 0af08a3d-….disk0 512M instance20
782 node2 /dev/sdb1 xenvg cc012285-….disk0 512M instance16
783 node2 /dev/sdb1 xenvg f0fac192-….disk0 512M instance18
785 The above command maps each logical volume to a volume group and
786 underlying physical volume and (possibly) to an instance.
788 .. _storage-units-label:
790 Generalized storage handling
791 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
793 .. versionadded:: 2.1
795 Starting with Ganeti 2.1, a new storage framework has been implemented
that tries to abstract the handling of the storage type the cluster
uses.
799 First is listing the backend storage and their space situation::
801 node1# gnt-node list-storage
802 Node Name Size Used Free
803 node1 /dev/sda7 673.8G 0M 673.8G
804 node1 /dev/sdb1 698.6G 1.5G 697.1G
805 node2 /dev/sda7 673.8G 0M 673.8G
806 node2 /dev/sdb1 698.6G 1.0G 697.6G
808 The default is to list LVM physical volumes. It's also possible to list
809 the LVM volume groups::
811 node1# gnt-node list-storage -t lvm-vg
816 Next is repairing storage units, which is currently only implemented for
817 volume groups and does the equivalent of ``vgreduce --removemissing``::
819 node1# gnt-node repair-storage node2 lvm-vg xenvg
820 Sun Oct 25 22:21:45 2009 Repairing storage unit 'xenvg' on node2 ...
822 Last is the modification of volume properties, which is (again) only
823 implemented for LVM physical volumes and allows toggling the
824 ``allocatable`` value::
826 node1# gnt-node modify-storage --allocatable=no node2 lvm-pv /dev/sdb1
828 Use of the storage commands
829 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
All these commands are needed when recovering a node from a disk
failure:

834 - first, we need to recover from complete LVM failure (due to missing
835 disk), by running the ``repair-storage`` command
836 - second, we need to change allocation on any partially-broken disk
  (i.e. LVM still sees it, but it has bad blocks) by running
  ``modify-storage``
839 - then we can evacuate the instances as needed
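
Putting these steps together, a recovery session could look like the
following sketch (node, volume group and device names are the examples
used earlier; the iallocator name is a placeholder)::

  # make LVM on node2 consistent again
  node1# gnt-node repair-storage node2 lvm-vg xenvg
  # stop allocations from the partially-broken physical volume
  node1# gnt-node modify-storage --allocatable=no node2 lvm-pv /dev/sdb1
  # finally move the instances away
  node1# gnt-node evacuate -I dumb node2
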
Besides the cluster initialisation command (which is detailed in the
846 :doc:`install` document) and the master failover command which is
847 explained under node handling, there are a couple of other cluster
848 operations available.
850 .. _cluster-config-label:
855 One of the few commands that can be run on any node (not only the
856 master) is the ``getmaster`` command::
  node2# gnt-cluster getmaster
  node1.example.com

862 It is possible to query and change global cluster parameters via the
863 ``info`` and ``modify`` commands::
865 node1# gnt-cluster info
866 Cluster name: cluster.example.com
867 Cluster UUID: 07805e6f-f0af-4310-95f1-572862ee939c
868 Creation time: 2009-09-25 05:04:15
869 Modification time: 2009-10-18 22:11:47
870 Master node: node1.example.com
871 Architecture (this node): 64bit (x86_64)
874 Default hypervisor: xen-pvm
875 Enabled hypervisors: xen-pvm
876 Hypervisor parameters:
881 - candidate pool size: 10
883 Default instance parameters:
887 Default nic parameters:
The various parameters above can be changed via the ``modify``
command, as follows:

895 - the hypervisor parameters can be changed via ``modify -H
896 xen-pvm:root_path=…``, and so on for other hypervisors/key/values
897 - the "default instance parameters" are changeable via ``modify -B
898 parameter=value…`` syntax
899 - the cluster parameters are changeable via separate options to the
900 modify command (e.g. ``--candidate-pool-size``, etc.)
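
A few concrete (purely illustrative) invocations::

  # change a backend default
  node1# gnt-cluster modify -B vcpus=2
  # change a hypervisor parameter for xen-pvm
  node1# gnt-cluster modify -H xen-pvm:root_path=/dev/xvda1
  # change a cluster-level parameter
  node1# gnt-cluster modify --candidate-pool-size=12
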
902 For detailed option list see the :manpage:`gnt-cluster(8)` man page.
The cluster version can be obtained via the ``version`` command::

node1# gnt-cluster version
906 Software version: 2.1.0
907 Internode protocol: 20
908 Configuration format: 2010000
912 This is not very useful except when debugging Ganeti.
917 There are two commands provided for replicating files to all nodes of a
918 cluster and for running commands on all the nodes::
920 node1# gnt-cluster copyfile /path/to/file
921 node1# gnt-cluster command ls -l /path/to/file
923 These are simple wrappers over scp/ssh and more advanced usage can be
924 obtained using :manpage:`dsh(1)` and similar commands. But they are
925 useful to update an OS script from the master node, for example.
930 There are three commands that relate to global cluster checks. The first
931 one is ``verify`` which gives an overview on the cluster state,
932 highlighting any issues. In normal operation, this command should return
933 no ``ERROR`` messages::
935 node1# gnt-cluster verify
936 Sun Oct 25 23:08:58 2009 * Verifying global settings
937 Sun Oct 25 23:08:58 2009 * Gathering data (2 nodes)
938 Sun Oct 25 23:09:00 2009 * Verifying node status
939 Sun Oct 25 23:09:00 2009 * Verifying instance status
940 Sun Oct 25 23:09:00 2009 * Verifying orphan volumes
941 Sun Oct 25 23:09:00 2009 * Verifying remaining instances
942 Sun Oct 25 23:09:00 2009 * Verifying N+1 Memory redundancy
943 Sun Oct 25 23:09:00 2009 * Other Notes
944 Sun Oct 25 23:09:00 2009 - NOTICE: 5 non-redundant instance(s) found.
945 Sun Oct 25 23:09:00 2009 * Hooks Results
The second command is ``verify-disks``, which checks that the instances'
disks have the correct status based on the desired instance state
(up/down)::

951 node1# gnt-cluster verify-disks
953 Note that this command will show no output when disks are healthy.
The last command is used to repair any discrepancies between Ganeti's
recorded disk size and the actual disk size (disk size information is
957 needed for proper activation and growth of DRBD-based disks)::
959 node1# gnt-cluster repair-disk-sizes
960 Sun Oct 25 23:13:16 2009 - INFO: Disk 0 of instance instance1 has mismatched size, correcting: recorded 512, actual 2048
961 Sun Oct 25 23:13:17 2009 - WARNING: Invalid result from node node4, ignoring node results
The above shows one instance having a wrong disk size, and a node which
returned invalid data, and thus we ignored all primary instances of that
node.
967 Configuration redistribution
968 ++++++++++++++++++++++++++++
970 If the verify command complains about file mismatches between the master
971 and other nodes, due to some node problems or if you manually modified
configuration files, you can force a push of the master configuration
973 to all other nodes via the ``redist-conf`` command::
975 node1# gnt-cluster redist-conf
This command will be silent unless there are problems sending updates to
the other nodes.
985 It is possible to rename a cluster, or to change its IP address, via the
986 ``rename`` command. If only the IP has changed, you need to pass the
987 current name and Ganeti will realise its IP has changed::
989 node1# gnt-cluster rename cluster.example.com
990 This will rename the cluster to 'cluster.example.com'. If
991 you are connected over the network to the cluster name, the operation
992 is very dangerous as the IP address will be removed from the node and
993 the change may not go through. Continue?
995 Failure: prerequisites not met for this operation:
996 Neither the name nor the IP address of the cluster has changed
998 In the above output, neither value has changed since the cluster
999 initialisation so the operation is not completed.
1004 The job queue execution in Ganeti 2.0 and higher can be inspected,
1005 suspended and resumed via the ``queue`` command::
1007 node1~# gnt-cluster queue info
1008 The drain flag is unset
1009 node1~# gnt-cluster queue drain
1010 node1~# gnt-instance stop instance1
1011 Failed to submit job for instance1: Job queue is drained, refusing job
1012 node1~# gnt-cluster queue info
1013 The drain flag is set
1014 node1~# gnt-cluster queue undrain
1016 This is most useful if you have an active cluster and you need to
1017 upgrade the Ganeti software, or simply restart the software on any node:
1019 #. suspend the queue via ``queue drain``
1020 #. wait until there are no more running jobs via ``gnt-job list``
1021 #. restart the master or another node, or upgrade the software
1022 #. resume the queue via ``queue undrain``
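
In terms of commands, the sequence boils down to something like::

  node1~# gnt-cluster queue drain
  node1~# gnt-job list
  # ... restart the node or upgrade the software ...
  node1~# gnt-cluster queue undrain
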
1024 .. note:: this command only stores a local flag file, and if you
failover the master, it will have no effect on the new master.
1031 The :manpage:`ganeti-watcher` is a program, usually scheduled via
1032 ``cron``, that takes care of cluster maintenance operations (restarting
1033 downed instances, activating down DRBD disks, etc.). However, during
1034 maintenance and troubleshooting, this can get in your way; disabling it
by commenting out the cron job is not a good solution, as this can be
1036 forgotten. Thus there are some commands for automated control of the
1037 watcher: ``pause``, ``info`` and ``continue``::
1039 node1~# gnt-cluster watcher info
1040 The watcher is not paused.
1041 node1~# gnt-cluster watcher pause 1h
1042 The watcher is paused until Mon Oct 26 00:30:37 2009.
1043 node1~# gnt-cluster watcher info
1044 The watcher is paused until Mon Oct 26 00:30:37 2009.
1045 node1~# ganeti-watcher -d
1046 2009-10-25 23:30:47,984: pid=28867 ganeti-watcher:486 DEBUG Pause has been set, exiting
1047 node1~# gnt-cluster watcher continue
1048 The watcher is no longer paused.
1049 node1~# ganeti-watcher -d
1050 2009-10-25 23:31:04,789: pid=28976 ganeti-watcher:345 DEBUG Archived 0 jobs, left 0
1051 2009-10-25 23:31:05,884: pid=28976 ganeti-watcher:280 DEBUG Got data from cluster, writing instance status file
1052 2009-10-25 23:31:06,061: pid=28976 ganeti-watcher:150 DEBUG Data didn't change, just touching status file
1053 node1~# gnt-cluster watcher info
1054 The watcher is not paused.
The exact details of the argument to the ``pause`` command are available
in the manpage.
1060 .. note:: this command only stores a local flag file, and if you
failover the master, it will have no effect on the new master.
1063 Node auto-maintenance
1064 +++++++++++++++++++++
1066 If the cluster parameter ``maintain_node_health`` is enabled (see the
1067 manpage for :command:`gnt-cluster`, the init and modify subcommands),
1068 then the following will happen automatically:
1070 - the watcher will shutdown any instances running on offline nodes
1071 - the watcher will deactivate any DRBD devices on offline nodes
1073 In the future, more actions are planned, so only enable this parameter
1074 if the nodes are completely dedicated to Ganeti; otherwise it might be
1075 possible to lose data due to auto-maintenance actions.
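
Assuming the parameter is exposed as a same-named option of
``gnt-cluster init``/``modify`` (check the manpage for the exact
spelling), enabling it would look roughly like::

  node1# gnt-cluster modify --maintain-node-health=yes
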
1077 Removing a cluster entirely
1078 +++++++++++++++++++++++++++
The usual method to clean up a cluster is to run ``gnt-cluster destroy``;
however, if the Ganeti installation is broken in any way then this will
not run.
In such a case it is possible to manually clean up most, if not all,
traces of a cluster installation by following these steps on all of the
nodes:
1087 1. Shutdown all instances. This depends on the virtualisation method
1088 used (Xen, KVM, etc.):
- Xen: run ``xm list`` and ``xm destroy`` on all the non-Domain-0
  instances
1092 - KVM: kill all the KVM processes
1093 - chroot: kill all processes under the chroot mountpoints
2. If using DRBD, shutdown all DRBD minors (which should by this time no
   longer be in use by instances); on each node, run ``drbdsetup
1097 /dev/drbdN down`` for each active DRBD minor.
1099 3. If using LVM, cleanup the Ganeti volume group; if only Ganeti created
1100 logical volumes (and you are not sharing the volume group with the
1101 OS, for example), then simply running ``lvremove -f xenvg`` (replace
1102 'xenvg' with your volume group name) should do the required cleanup.
1104 4. If using file-based storage, remove recursively all files and
1105 directories under your file-storage directory: ``rm -rf
/srv/ganeti/file-storage/*``, replacing the path with the correct path
for your cluster.
1109 5. Stop the ganeti daemons (``/etc/init.d/ganeti stop``) and kill any
1110 that remain alive (``pgrep ganeti`` and ``pkill ganeti``).
1112 6. Remove the ganeti state directory (``rm -rf /var/lib/ganeti/*``),
1113 replacing the path with the correct path for your installation.
1115 On the master node, remove the cluster from the master-netdev (usually
1116 ``xen-br0`` for bridged mode, otherwise ``eth0`` or similar), by running
1117 ``ip a del $clusterip/32 dev xen-br0`` (use the correct cluster ip and
1118 network device name).
1120 At this point, the machines are ready for a cluster creation; in case
1121 you want to remove Ganeti completely, you need to also undo some of the
1122 SSH changes and log directories:
- ``rm -rf /var/log/ganeti /srv/ganeti`` (replace with the correct
  paths)
1126 - remove from ``/root/.ssh`` the keys that Ganeti added (check the
1127 ``authorized_keys`` and ``id_dsa`` files)
1128 - regenerate the host's SSH keys (check the OpenSSH startup scripts)
1131 Otherwise, if you plan to re-create the cluster, you can just go ahead
1132 and rerun ``gnt-cluster init``.
1137 The tags handling (addition, removal, listing) is similar for all the
1138 objects that support it (instances, nodes, and the cluster).
1143 Note that the set of characters present in a tag and the maximum tag
1144 length are restricted. Currently the maximum length is 128 characters,
there can be at most 4096 tags per object, and the allowed characters
are alphanumeric characters plus ``.+*/:-``.
1151 Tags can be added via ``add-tags``::
1153 gnt-instance add-tags INSTANCE a b c
gnt-node add-tags NODE a b c
1155 gnt-cluster add-tags a b c
1158 The above commands add three tags to an instance, to a node and to the
1159 cluster. Note that the cluster command only takes tags as arguments,
whereas the node and instance commands first require the node and
instance name, respectively.
1163 Tags can also be added from a file, via the ``--from=FILENAME``
1164 argument. The file is expected to contain one tag per line.
Tags can also be removed via a syntax very similar to the add one::
1168 gnt-instance remove-tags INSTANCE a b c

And listed via::

  gnt-instance list-tags INSTANCE
  gnt-node list-tags NODE
  gnt-cluster list-tags

It is also possible to execute a global search on all the tags defined
1180 in the cluster configuration, via a cluster command::
1182 gnt-cluster search-tags REGEXP
1184 The parameter expected is a regular expression (see
1185 :manpage:`regex(7)`). This will return all tags that match the search,
together with the object they are defined in (the names being shown in a
1187 hierarchical kind of way)::
1189 node1# gnt-cluster search-tags o
1191 /instances/instance1 owner:bar
1197 The various jobs submitted by the instance/node/cluster commands can be
1198 examined, canceled and archived by various invocations of the
1199 ``gnt-job`` command.
First is the job list command::

node1# gnt-job list
17771 success INSTANCE_QUERY_DATA
1205 17773 success CLUSTER_VERIFY_DISKS
1206 17775 success CLUSTER_REPAIR_DISK_SIZES
1207 17776 error CLUSTER_RENAME(cluster.example.com)
1208 17780 success CLUSTER_REDIST_CONF
1209 17792 success INSTANCE_REBOOT(instance1.example.com)
More detailed information about a job can be found via the ``info``
command::

1214 node1# gnt-job info 17776
1217 Received: 2009-10-25 23:18:02.180569
1218 Processing start: 2009-10-25 23:18:02.200335 (delta 0.019766s)
1219 Processing end: 2009-10-25 23:18:02.279743 (delta 0.079408s)
1220 Total processing time: 0.099174 seconds
1224 Processing start: 2009-10-25 23:18:02.200335
1225 Processing end: 2009-10-25 23:18:02.252282
1227 name: cluster.example.com
1230 [Neither the name nor the IP address of the cluster has changed]
1233 During the execution of a job, it's possible to follow the output of a
job, similar to the log that one gets from the ``gnt-`` commands, via the
``watch`` command::

1237 node1# gnt-instance add --submit … instance1
1239 node1# gnt-job watch 17818
1240 Output from job 17818 follows
1241 -----------------------------
1242 Mon Oct 26 00:22:48 2009 - INFO: Selected nodes for instance instance1 via iallocator dumb: node1, node2
1243 Mon Oct 26 00:22:49 2009 * creating instance disks...
1244 Mon Oct 26 00:22:52 2009 adding instance instance1 to cluster config
1245 Mon Oct 26 00:22:52 2009 - INFO: Waiting for instance instance1 to sync disks.
1247 Mon Oct 26 00:23:03 2009 creating os for instance instance1 on node node1
1248 Mon Oct 26 00:23:03 2009 * running the instance OS create scripts...
1249 Mon Oct 26 00:23:13 2009 * starting instance...
This is useful if you need to follow a job's progress from multiple
terminals.
1255 A job that has not yet started to run can be canceled::
1257 node1# gnt-job cancel 17810
1259 But not one that has already started execution::
1261 node1# gnt-job cancel 17805
1262 Job 17805 is no longer waiting in the queue
1264 There are two queues for jobs: the *current* and the *archive*
1265 queue. Jobs are initially submitted to the current queue, and they stay
1266 in that queue until they have finished execution (either successfully or
1267 not). At that point, they can be moved into the archive queue, and the
1268 ganeti-watcher script will do this automatically after 6 hours. The
ganeti-cleaner script will eventually remove them from the archive
directory.
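
Jobs can also be archived manually via ``gnt-job archive``, for example
(using a job ID from the listing above)::

  node1# gnt-job archive 17771
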
1272 Note that only jobs in the current queue can be viewed via the list and
1273 info commands; Ganeti itself doesn't examine the archive directory. If
1274 you need to see an older job, either move the file manually in the
1275 top-level queue directory, or look at its contents (it's a
1276 JSON-formatted file).
Besides the usual ``gnt-`` and ``ganeti-`` commands which are provided
1282 and installed in ``$prefix/sbin`` at install time, there are a couple of
other tools installed which are seldom used but can be helpful in some
cases.
1289 The ``lvmstrap`` tool, introduced in :ref:`configure-lvm-label` section,
1290 has two modes of operation:
1292 - ``diskinfo`` shows the discovered disks on the system and their status
- ``create`` takes all not-in-use disks and creates a volume group out
  of them
1296 .. warning:: The ``create`` argument to this command causes data-loss!
The ``cfgupgrade`` tool is used to upgrade between major (and minor)
1302 Ganeti versions. Point-releases are usually transparent for the admin.
1304 More information about the upgrade procedure is listed on the wiki at
1305 http://code.google.com/p/ganeti/wiki/UpgradeNotes.
1307 There is also a script designed to upgrade from Ganeti 1.2 to 2.0,
1308 called ``cfgupgrade12``.
1313 .. note:: This command is not actively maintained; make sure you backup
1314 your configuration before using it
1316 This can be used as an alternative to direct editing of the
main configuration file if a Ganeti bug prevents you, for example, from
removing an instance or a node from the configuration.
.. warning:: This command will erase existing instances if given as
   parameters!
1329 This tool is used to exercise either the hardware of machines or
1330 alternatively the Ganeti software. It is safe to run on an existing
1331 cluster **as long as you don't pass it existing instance names**.
1333 The command will, by default, execute a comprehensive set of operations
against a list of instances, these being:

- creation
- disk replacement (for redundant instances)
1338 - failover and migration (for redundant instances)
- move (for non-redundant instances)
- disk growth
1341 - add disks, remove disk
1342 - add NICs, remove NICs
- export and then import
- rename
1347 - and finally removal of the test instances
1349 Executing all these operations will test that the hardware performs
1350 well: the creation, disk replace, disk add and disk growth will exercise
1351 the storage and network; the migrate command will test the memory of the
systems. Depending on the passed options, it can also test that the
instance OS definitions properly execute the rename, import and export
operations.
1359 This tool takes the Ganeti configuration and outputs a "sanitized"
1360 version, by randomizing or clearing:
1362 - DRBD secrets and cluster public key (always)
1363 - host names (optional)
1365 - OS names (optional)
1366 - LV names (optional, only useful for very old clusters which still have
1367 instances whose LVs are based on the instance name)
1369 By default, all optional items are activated except the LV name
1370 randomization. When passing ``--no-randomization``, which disables the
1371 optional items (i.e. just the DRBD secrets and cluster public keys are
1372 randomized), the resulting file can be used as a safety copy of the
1373 cluster config - while not trivial, the layout of the cluster can be
1374 recreated from it and if the instance disks have not been lost it
1375 permits recovery from the loss of all master candidates.
1380 See :doc:`separate documentation for move-instance <move-instance>`.
1382 .. TODO: document cluster-merge tool
1385 Other Ganeti projects
1386 ---------------------
1388 There are two other Ganeti-related projects that can be useful in a
1389 Ganeti deployment. These can be downloaded from the project site
1390 (http://code.google.com/p/ganeti/) and the repositories are also on the
1391 project git site (http://git.ganeti.org).
1396 The ``ganeti-nbma`` software is designed to allow instances to live on a
1397 separate, virtual network from the nodes, and in an environment where
1398 nodes are not guaranteed to be able to reach each other via multicasting
or broadcasting. For more information see the README in the source
archive.
1405 The ``ganeti-htools`` software consists of a set of tools:
- ``hail``: an advanced iallocator script, compared to Ganeti's builtin
  one
1409 - ``hbal``: a tool for rebalancing the cluster, i.e. moving instances
1410 around in order to better use the resources on the nodes
1411 - ``hspace``: a tool for estimating the available capacity of a cluster,
1412 so that capacity planning can be done efficiently
1414 For more information and installation instructions, see the README file
1415 in the source archive.
1417 .. vim: set textwidth=72 :