1 Ganeti administrator's guide
2 ============================
4 Documents Ganeti version |version|
13 Ganeti is a virtualization cluster management software. You are expected
14 to be a system administrator familiar with your Linux distribution and
15 the Xen or KVM virtualization environments before using it.
17 The various components of Ganeti all have man pages and interactive
18 help. This manual though will help you getting familiar with the system
19 by explaining the most common operations, grouped by related use.
21 After a terminology glossary and a section on the prerequisites needed
22 to use this manual, the rest of this document is divided in sections
23 for the different targets that a command affects: instance, nodes, etc.
25 .. _terminology-label:
30 This section provides a small introduction to Ganeti terminology, which
31 might be useful when reading the rest of the document.
36 A set of machines (nodes) that cooperate to offer a coherent, highly
37 available virtualization service under a single administration domain.
A physical machine which is a member of a cluster. Nodes are the basic
43 cluster infrastructure, and they don't need to be fault tolerant in
44 order to achieve high availability for instances.
Nodes can be added and removed (if they host no instances) at will
from the cluster. In an HA cluster and only with HA instances, the loss
of any single node will not cause disk data loss for any instance; of
course, a node crash will cause the crash of its primary instances.
51 A node belonging to a cluster can be in one of the following roles at a
54 - *master* node, which is the node from which the cluster is controlled
55 - *master candidate* node, only nodes in this role have the full cluster
56 configuration and knowledge, and only master candidates can become the
58 - *regular* node, which is the state in which most nodes will be on
59 bigger clusters (>20 nodes)
- *drained* node, nodes in this state are functioning normally but they
cannot receive new instances; the intention is that nodes in this role
have some issue and they are being evacuated for hardware repairs
63 - *offline* node, in which there is a record in the cluster
64 configuration about the node, but the daemons on the master node will
65 not talk to this node; any instances declared as having an offline
66 node as either primary or secondary will be flagged as an error in the
67 cluster verify operation
69 Depending on the role, each node will run a set of daemons:
- the :command:`ganeti-noded` daemon, which controls the manipulation of
this node's hardware resources; it runs on all nodes which are in a
cluster
74 - the :command:`ganeti-confd` daemon (Ganeti 2.1+) which runs on all
75 nodes, but is only functional on master candidate nodes; this daemon
can be disabled at configuration time if you don't need its
functionality
78 - the :command:`ganeti-rapi` daemon which runs on the master node and
79 offers an HTTP-based API for the cluster
80 - the :command:`ganeti-masterd` daemon which runs on the master node and
81 allows control of the cluster
Besides the node role, there are other node flags that influence its
behaviour:
86 - the *master_capable* flag denotes whether the node can ever become a
87 master candidate; setting this to 'no' means that auto-promotion will
88 never make this node a master candidate; this flag can be useful for a
89 remote node that only runs local instances, and having it become a
90 master is impractical due to networking or other constraints
91 - the *vm_capable* flag denotes whether the node can host instances or
92 not; for example, one might use a non-vm_capable node just as a master
93 candidate, for configuration backups; setting this flag to no
disallows placement of instances on this node, deactivates hypervisor
95 and related checks on it (e.g. bridge checks, LVM check, etc.), and
96 removes it from cluster capacity computations
102 A virtual machine which runs on a cluster. It can be a fault tolerant,
103 highly available entity.
105 An instance has various parameters, which are classified in three
categories: hypervisor-related parameters (called ``hvparams``), general
107 parameters (called ``beparams``) and per network-card parameters (called
108 ``nicparams``). All these parameters can be modified either at instance
109 level or via defaults at cluster level.
There are multiple options for the storage provided to an instance; while
115 the instance sees the same virtual drive in all cases, the node-level
116 configuration varies between them.
118 There are four disk templates you can choose from:
121 The instance has no disks. Only used for special purpose operating
122 systems or for testing.
125 The instance will use plain files as backend for its disks. No
126 redundancy is provided, and this is somewhat more difficult to
127 configure for high performance.
130 The instance will use LVM devices as backend for its disks. No
131 redundancy is provided.
134 .. note:: This is only valid for multi-node clusters using DRBD 8.0+
136 A mirror is set between the local node and a remote one, which must be
137 specified with the second value of the --node option. Use this option
138 to obtain a highly available instance that can be failed over to a
139 remote node should the primary one fail.
144 A framework for using external (user-provided) scripts to compute the
145 placement of instances on the cluster nodes. This eliminates the need to
manually specify nodes in instance add, instance moves, node evacuate,
etc.
In order for Ganeti to be able to use these scripts, they must be placed
150 in the iallocator directory (usually ``lib/ganeti/iallocators`` under
151 the installation prefix, e.g. ``/usr/local``).
153 “Primary” and “secondary” concepts
154 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An instance has a primary node and, depending on the disk configuration,
might
157 also have a secondary node. The instance always runs on the primary node
158 and only uses its secondary node for disk replication.
Similarly, the terms primary and secondary instances, when talking
about a node, refer to the set of instances having the given node as
primary, respectively secondary.
Tags are short strings that can be attached either to the cluster itself,
168 or to nodes or instances. They are useful as a very simplistic
169 information store for helping with cluster administration, for example
170 by attaching owner information to each instance after it's created::
172 gnt-instance add … instance1
173 gnt-instance add-tags instance1 owner:user2
175 And then by listing each instance and its tags, this information could
176 be used for contacting the users of each instance.
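For example, a hedged sketch of using that information later (the
``tags`` output field and the exact tag value are assumptions)::

  # list every instance together with its tags, then filter by owner
  gnt-instance list -o name,tags
  gnt-instance list -o name,tags | grep owner:user2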
While not directly visible to an end-user, it's useful to know that a
basic cluster operation (e.g. starting an instance) is represented
internally by Ganeti as an *OpCode* (abbreviation of operation
184 code). These OpCodes are executed as part of a *Job*. The OpCodes in a
185 single Job are processed serially by Ganeti, but different Jobs will be
186 processed (depending on resource availability) in parallel. They will
187 not be executed in the submission order, but depending on resource
188 availability, locks and (starting with Ganeti 2.3) priority. An earlier
189 job may have to wait for a lock while a newer job doesn't need any locks
190 and can be executed right away. Operations requiring a certain order
191 need to be submitted as a single job, or the client must submit one job
192 at a time and wait for it to finish before continuing.
194 For example, shutting down the entire cluster can be done by running the
195 command ``gnt-instance shutdown --all``, which will submit for each
196 instance a separate job containing the “shutdown instance” OpCode.
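As a hedged illustration of working with jobs explicitly, many commands
accept ``--submit`` to return immediately with a job ID, which can then
be waited on with ``gnt-job watch`` (described later in this document)::

  # submit the shutdown as a job and wait for it before continuing
  node1# gnt-instance shutdown --submit instance1
  node1# gnt-job watch JOB_ID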
202 You need to have your Ganeti cluster installed and configured before you
203 try any of the commands in this document. Please follow the
204 :doc:`install` for instructions on how to do that.
212 The add operation might seem complex due to the many parameters it
213 accepts, but once you have understood the (few) required parameters and
214 the customisation capabilities you will see it is an easy operation.
216 The add operation requires at minimum five parameters:
- the OS for the instance
- the disk template
220 - the disk count and size
221 - the node specification or alternatively the iallocator to use
222 - and finally the instance name
224 The OS for the instance must be visible in the output of the command
225 ``gnt-os list`` and specifies which guest OS to install on the instance.
227 The disk template specifies what kind of storage to use as backend for
228 the (virtual) disks presented to the instance; note that for instances
229 with multiple virtual disks, they all must be of the same type.
231 The node(s) on which the instance will run can be given either manually,
232 via the ``-n`` option, or computed automatically by Ganeti, if you have
233 installed any iallocator script.
235 With the above parameters in mind, the command is::
  gnt-instance add \
    -n TARGET_NODE:SECONDARY_NODE \
    -o OS_TYPE \
    -t DISK_TEMPLATE -s DISK_SIZE \
    INSTANCE_NAME
243 The instance name must be resolvable (e.g. exist in DNS) and usually
244 points to an address in the same subnet as the cluster itself.
246 The above command has the minimum required options; other options you
247 can give include, among others:
249 - The maximum/minimum memory size (``-B maxmem``, ``-B minmem``)
250 (``-B memory`` can be used to specify only one size)
252 - The number of virtual CPUs (``-B vcpus``)
254 - Arguments for the NICs of the instance; by default, a single-NIC
255 instance is created. The IP and/or bridge of the NIC can be changed
256 via ``--nic 0:ip=IP,bridge=BRIDGE``
258 See the manpage for gnt-instance for the detailed option list.
For example if you want to create a highly available instance, with a
261 single disk of 50GB and the default memory size, having primary node
262 ``node1`` and secondary node ``node3``, use the following command::
  gnt-instance add -n node1:node3 -o debootstrap -t drbd \
    -s 50G instance1
There is also a command for batch instance creation from a
268 specification file, see the ``batch-create`` operation in the
269 gnt-instance manual page.
271 Regular instance operations
272 +++++++++++++++++++++++++++
Removing an instance is even easier than creating one. This operation
is irreversible and destroys all the contents of your instance. Use
with care::
281 gnt-instance remove INSTANCE_NAME
283 .. _instance-startup-label:
288 Instances are automatically started at instance creation time. To
289 manually start one which is currently stopped you can run::
291 gnt-instance startup INSTANCE_NAME
293 Ganeti will start an instance with up to its maximum instance memory. If
294 not enough memory is available Ganeti will use all the available memory
down to the instance minimum memory. If not even that amount of memory
296 is free Ganeti will refuse to start the instance.
Note that this will not work when an instance is in a permanently
299 stopped state ``offline``. In this case, you will first have to
300 put it back to online mode by running::
302 gnt-instance modify --online INSTANCE_NAME
304 The command to stop the running instance is::
306 gnt-instance shutdown INSTANCE_NAME
308 If you want to shut the instance down more permanently, so that it
309 does not require dynamically allocated resources (memory and vcpus),
310 after shutting down an instance, execute the following::
312 gnt-instance modify --offline INSTANCE_NAME
314 .. warning:: Do not use the Xen or KVM commands directly to stop
315 instances. If you run for example ``xm shutdown`` or ``xm destroy``
316 on an instance Ganeti will automatically restart it (via
317 the :command:`ganeti-watcher` command which is launched via cron).
322 There are two ways to get information about instances: listing
instances, which produces a tabular output containing a given set of
fields about each instance, and querying detailed information about a
set of instances.

The command to see all the instances configured and their status is::

  gnt-instance list
331 The command can return a custom set of information when using the ``-o``
332 option (as always, check the manpage for a detailed specification). Each
333 instance will be represented on a line, thus making it easy to parse
334 this output via the usual shell utilities (grep, sed, etc.).
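For instance, a hedged example (the exact field names depend on your
Ganeti version)::

  # one line per instance, no header row, easy to feed to grep/awk
  gnt-instance list -o name,status,pnode --no-headers | grep node1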
336 To get more detailed information about an instance, you can run::
338 gnt-instance info INSTANCE
340 which will give a multi-line block of information about the instance,
its hardware resources (especially its disks and their redundancy
342 status), etc. This is harder to parse and is more expensive than the
343 list operation, but returns much more detailed information.
345 Changing an instance's runtime memory
346 +++++++++++++++++++++++++++++++++++++
348 Ganeti will always make sure an instance has a value between its maximum
349 and its minimum memory available as runtime memory. As of version 2.6
350 Ganeti will only choose a size different than the maximum size when
351 starting up, failing over, or migrating an instance on a node with less
352 than the maximum memory available. It won't resize other instances in
353 order to free up space for an instance.
If you find that you need more memory on a node, any instance can be
manually resized without downtime with the command::
358 gnt-instance modify -m SIZE INSTANCE_NAME
360 The same command can also be used to increase the memory available on an
361 instance, provided that enough free memory is available on its node, and
362 the specified size is not larger than the maximum memory size the
363 instance had when it was first booted (an instance will be unable to see
364 new memory above the maximum that was specified to the hypervisor at its
boot time; if it needs to grow further, a reboot becomes necessary).
You can create a snapshot of an instance disk and its Ganeti
configuration, which you can then back up or import into another
372 cluster. The way to export an instance is::
374 gnt-backup export -n TARGET_NODE INSTANCE_NAME
377 The target node can be any node in the cluster with enough space under
378 ``/srv/ganeti`` to hold the instance image. Use the ``--noshutdown``
379 option to snapshot an instance without rebooting it. Note that Ganeti
380 only keeps one snapshot for an instance - any previous snapshot of the
381 same instance existing cluster-wide under ``/srv/ganeti`` will be
382 removed by this operation: if you want to keep them, you need to move
383 them out of the Ganeti exports directory.
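A hedged example (node and instance names are made up; the exports
usually end up under ``/srv/ganeti/export``)::

  # snapshot instance1 onto node2 without shutting it down
  node1# gnt-backup export -n node2 --noshutdown instance1
  # the resulting export can then be copied off-cluster for safekeeping
  node2# ls /srv/ganeti/export/instance1/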
385 Importing an instance is similar to creating a new one, but additionally
386 one must specify the location of the snapshot. The command is::
388 gnt-backup import -n TARGET_NODE \
389 --src-node=NODE --src-dir=DIR INSTANCE_NAME
391 By default, parameters will be read from the export information, but you
392 can of course pass them in via the command line - most of the options
available for the command :command:`gnt-instance add` are supported here
too.
396 Import of foreign instances
397 +++++++++++++++++++++++++++
It is possible to import a foreign instance whose disk data is already
stored as LVM volumes without copying it over: the disk adoption mode.
403 For this, ensure that the original, non-managed instance is stopped,
404 then create a Ganeti instance in the usual way, except that instead of
405 passing the disk information you specify the current volumes::
407 gnt-instance add -t plain -n HOME_NODE ... \
408 --disk 0:adopt=lv_name[,vg=vg_name] INSTANCE_NAME
This will take over the given logical volumes, rename them to the Ganeti
standard (UUID-based), and start the instance directly without installing
the OS on them. If you configure the hypervisor similarly to the
413 non-managed configuration that the instance had, the transition should
414 be seamless for the instance. For more than one disk, just pass another
415 disk parameter (e.g. ``--disk 1:adopt=...``).
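For instance, a hedged two-disk adoption (the volume names are made
up)::

  gnt-instance add -t plain -n HOME_NODE ... \
    --disk 0:adopt=old-root --disk 1:adopt=old-swap INSTANCE_NAME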
417 Instance kernel selection
418 +++++++++++++++++++++++++
The kernel that instances use to boot can come either from the node, or
from the instances themselves, depending on the setup.
426 With Xen PVM, there are three options.
First, you can use a kernel from the node, by setting the hypervisor
parameters as follows:
- ``kernel_path`` to a valid file on the node (and appropriately
``initrd_path``)
433 - ``kernel_args`` optionally set to a valid Linux setting (e.g. ``ro``)
434 - ``root_path`` to a valid setting (e.g. ``/dev/xvda1``)
435 - ``bootloader_path`` and ``bootloader_args`` to empty
437 Alternatively, you can delegate the kernel management to instances, and
438 use either ``pvgrub`` or the deprecated ``pygrub``. For this, you must
439 install the kernels and initrds in the instance and create a valid GRUB
440 v1 configuration file.
442 For ``pvgrub`` (new in version 2.4.2), you need to set:
444 - ``kernel_path`` to point to the ``pvgrub`` loader present on the node
445 (e.g. ``/usr/lib/xen/boot/pv-grub-x86_32.gz``)
446 - ``kernel_args`` to the path to the GRUB config file, relative to the
447 instance (e.g. ``(hd0,0)/grub/menu.lst``)
448 - ``root_path`` **must** be empty
449 - ``bootloader_path`` and ``bootloader_args`` to empty
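As a hedged illustration, these hypervisor parameters can be set per
instance via ``gnt-instance modify -H`` (or cluster-wide via
``gnt-cluster modify -H xen-pvm:...``); the loader path below is an
example only::

  # point the instance at the node's pvgrub loader
  gnt-instance modify -H kernel_path=/usr/lib/xen/boot/pv-grub-x86_32.gz \
    INSTANCE_NAME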
451 While ``pygrub`` is deprecated, here is how you can configure it:
453 - ``bootloader_path`` to the pygrub binary (e.g. ``/usr/bin/pygrub``)
454 - the other settings are not important
456 More information can be found in the Xen wiki pages for `pvgrub
457 <http://wiki.xensource.com/xenwiki/PvGrub>`_ and `pygrub
458 <http://wiki.xensource.com/xenwiki/PyGrub>`_.
For KVM, the kernel can also be loaded either way.
465 For loading the kernels from the node, you need to set:
467 - ``kernel_path`` to a valid value
468 - ``initrd_path`` optionally set if you use an initrd
469 - ``kernel_args`` optionally set to a valid value (e.g. ``ro``)
471 If you want instead to have the instance boot from its disk (and execute
472 its bootloader), simply set the ``kernel_path`` parameter to an empty
473 string, and all the others will be ignored.
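A minimal hedged example of switching an instance to boot from its own
disk::

  # an empty kernel_path makes KVM use the instance's own bootloader
  gnt-instance modify -H kernel_path= INSTANCE_NAME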
478 .. note:: This section only applies to multi-node clusters
480 .. _instance-change-primary-label:
482 Changing the primary node
483 +++++++++++++++++++++++++
485 There are three ways to exchange an instance's primary and secondary
486 nodes; the right one to choose depends on how the instance has been
487 created and the status of its current primary node. See
488 :ref:`rest-redundancy-label` for information on changing the secondary
489 node. Note that it's only possible to change the primary node to the
secondary and vice-versa; a direct change of the primary node to a
third node, while keeping the current secondary, is not possible in a
492 single step, only via multiple operations as detailed in
493 :ref:`instance-relocation-label`.
495 Failing over an instance
496 ~~~~~~~~~~~~~~~~~~~~~~~~
498 If an instance is built in highly available mode you can at any time
499 fail it over to its secondary node, even if the primary has somehow
failed and it's not up anymore. Doing so is easy; on the master
501 node you can just run::
503 gnt-instance failover INSTANCE_NAME
505 That's it. After the command completes the secondary node is now the
506 primary, and vice-versa.
508 The instance will be started with an amount of memory between its
509 ``maxmem`` and its ``minmem`` value, depending on the free memory on its
510 target node, or the operation will fail if that's not possible. See
511 :ref:`instance-startup-label` for details.
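If the primary node is actually dead, a hedged variant using the
``--ignore-consistency`` option forces the failover anyway; use it only
if you accept possibly losing the latest unreplicated writes::

  gnt-instance failover --ignore-consistency INSTANCE_NAME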
513 Live migrating an instance
514 ~~~~~~~~~~~~~~~~~~~~~~~~~~
516 If an instance is built in highly available mode, it currently runs and
both its nodes are running fine, you can migrate it over to its
518 secondary node, without downtime. On the master node you need to run::
520 gnt-instance migrate INSTANCE_NAME
522 The current load on the instance and its memory size will influence how
523 long the migration will take. In any case, for both KVM and Xen
524 hypervisors, the migration will be transparent to the instance.
526 If the destination node has less memory than the instance's current
527 runtime memory, but at least the instance's minimum memory available
528 Ganeti will automatically reduce the instance runtime memory before
529 migrating it, unless the ``--no-runtime-changes`` option is passed, in
which case the target node should have at least the instance's current
runtime memory free.
533 Moving an instance (offline)
534 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If an instance has not been created as mirrored, then the only way to
537 change its primary node is to execute the move command::
539 gnt-instance move -n NEW_NODE INSTANCE
541 This has a few prerequisites:
543 - the instance must be stopped
544 - its current primary node must be on-line and healthy
545 - the disks of the instance must not have any errors
547 Since this operation actually copies the data from the old node to the
new node, expect it to take a time proportional to the size of the instance's
549 disks and the speed of both the nodes' I/O system and their networking.
554 Disk failures are a common cause of errors in any server
555 deployment. Ganeti offers protection from single-node failure if your
556 instances were created in HA mode, and it also offers ways to restore
557 redundancy after a failure.
559 Preparing for disk operations
560 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
562 It is important to note that for Ganeti to be able to do any disk
operation, the Linux machines on top of which Ganeti runs must be consistent;
564 for LVM, this means that the LVM commands must not return failures; it
565 is common that after a complete disk failure, any LVM command aborts
566 with an error similar to::
569 /dev/sdb1: read failed after 0 of 4096 at 0: Input/output error
570 /dev/sdb1: read failed after 0 of 4096 at 750153695232: Input/output
572 /dev/sdb1: read failed after 0 of 4096 at 0: Input/output error
573 Couldn't find device with uuid
574 't30jmN-4Rcf-Fr5e-CURS-pawt-z0jU-m1TgeJ'.
575 Couldn't find all physical volumes for volume group xenvg.
Before restoring an instance's disks to a healthy status, you need to
fix the volume group used by Ganeti so that we can actually create and
manage the logical volumes. This is usually done in a multi-step
process:
582 #. first, if the disk is completely gone and LVM commands exit with
583 “Couldn't find device with uuid…” then you need to run the command::
585 vgreduce --removemissing VOLUME_GROUP
#. after the above command, the LVM commands should be executing
normally (warnings are normal, but the commands will not fail
completely)
591 #. if the failed disk is still visible in the output of the ``pvs``
592 command, you need to deactivate it from allocations by running::
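  # hedged example: mark the broken physical volume (assumed here to be
  # /dev/sdb1) as non-allocatable so LVM no longer places extents on it
  pvchange --allocatable n /dev/sdb1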
596 At this point, the volume group should be consistent and any bad
physical volumes should no longer be available for allocation.
599 Note that since version 2.1 Ganeti provides some commands to automate
600 these two operations, see :ref:`storage-units-label`.
602 .. _rest-redundancy-label:
604 Restoring redundancy for DRBD-based instances
605 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
607 A DRBD instance has two nodes, and the storage on one of them has
608 failed. Depending on which node (primary or secondary) has failed, you
609 have three options at hand:
- if the storage on the primary node has failed, you need to re-create
the disks on it
613 - if the storage on the secondary node has failed, you can either
614 re-create the disks on it or change the secondary and recreate
615 redundancy on the new secondary node
617 Of course, at any point it's possible to force re-creation of disks even
618 though everything is already fine.
620 For all three cases, the ``replace-disks`` operation can be used::
622 # re-create disks on the primary node
623 gnt-instance replace-disks -p INSTANCE_NAME
624 # re-create disks on the current secondary
625 gnt-instance replace-disks -s INSTANCE_NAME
626 # change the secondary node, via manual specification
627 gnt-instance replace-disks -n NODE INSTANCE_NAME
628 # change the secondary node, via an iallocator script
629 gnt-instance replace-disks -I SCRIPT INSTANCE_NAME
630 # since Ganeti 2.1: automatically fix the primary or secondary node
631 gnt-instance replace-disks -a INSTANCE_NAME
633 Since the process involves copying all data from the working node to the
634 target node, it will take a while, depending on the instance's disk
635 size, node I/O system and network speed. But it is (barring any network
636 interruption) completely transparent for the instance.
638 Re-creating disks for non-redundant instances
639 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
641 .. versionadded:: 2.1
643 For non-redundant instances, there isn't a copy (except backups) to
re-create the disks. But it's possible to at least re-create empty
disks, after which a reinstall can be run, via the ``recreate-disks``
command::
648 gnt-instance recreate-disks INSTANCE
Note that this will fail if the disks already exist.
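A hedged sketch of the full recovery flow for such an instance::

  # re-create empty disks, reinstall the OS, then start the instance
  gnt-instance recreate-disks INSTANCE
  gnt-instance reinstall INSTANCE
  gnt-instance startup INSTANCE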
652 Conversion of an instance's disk type
653 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
655 It is possible to convert between a non-redundant instance of type
``plain`` (LVM storage) and redundant ``drbd`` via the ``gnt-instance
modify`` command::
659 # start with a non-redundant instance
660 gnt-instance add -t plain ... INSTANCE
662 # later convert it to redundant
663 gnt-instance stop INSTANCE
664 gnt-instance modify -t drbd -n NEW_SECONDARY INSTANCE
665 gnt-instance start INSTANCE
667 # and convert it back
668 gnt-instance stop INSTANCE
669 gnt-instance modify -t plain INSTANCE
670 gnt-instance start INSTANCE
672 The conversion must be done while the instance is stopped, and
673 converting from plain to drbd template presents a small risk, especially
674 if the instance has multiple disks and/or if one node fails during the
conversion procedure. As such, it's recommended (as always) to make
676 sure that downtime for manual recovery is acceptable and that the
677 instance has up-to-date backups.
682 Accessing an instance's disks
683 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
685 From an instance's primary node you can have access to its disks. Never
686 ever mount the underlying logical volume manually on a fault tolerant
instance, or you will break replication and your data will be
688 inconsistent. The correct way to access an instance's disks is to run
689 (on the master node, as usual) the command::
691 gnt-instance activate-disks INSTANCE
693 And then, *on the primary node of the instance*, access the device that
694 gets created. For example, you could mount the given disks, then edit
695 files on the filesystem, etc.
697 Note that with partitioned disks (as opposed to whole-disk filesystems),
698 you will need to use a tool like :manpage:`kpartx(8)`::
700 node1# gnt-instance activate-disks instance1
703 node3# kpartx -l /dev/…
704 node3# kpartx -a /dev/…
705 node3# mount /dev/mapper/… /mnt/
706 # edit files under mnt as desired
708 node3# kpartx -d /dev/…
After you've finished, you can deactivate them with the deactivate-disks
713 command, which works in the same way::
715 gnt-instance deactivate-disks INSTANCE
717 Note that if any process started by you is still using the disks, the
above command will error out, and you **must** clean up and ensure that
719 the above command runs successfully before you start the instance,
720 otherwise the instance will suffer corruption.
722 Accessing an instance's console
723 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
725 The command to access a running instance's console is::
727 gnt-instance console INSTANCE_NAME
729 Use the console normally and then type ``^]`` when done, to exit.
731 Other instance operations
732 +++++++++++++++++++++++++
737 There is a wrapper command for rebooting instances::
739 gnt-instance reboot instance2
741 By default, this does the equivalent of shutting down and then starting
742 the instance, but it accepts parameters to perform a soft-reboot (via
743 the hypervisor), a hard reboot (hypervisor shutdown and then startup) or
a full one (the default, which also de-configures and then re-configures
the disks of the instance).
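For example (hedged; check the manpage for the reboot types supported
by your version)::

  # soft reboot via the hypervisor only
  gnt-instance reboot --type=soft instance2
  # full reboot, also de-configuring and re-configuring the disks
  gnt-instance reboot --type=full instance2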
747 Instance OS definitions debugging
748 +++++++++++++++++++++++++++++++++
Should you have any problems with instance operating systems, the
command to see a complete status for all your nodes is::

  gnt-os diagnose
755 .. _instance-relocation-label:
760 While it is not possible to move an instance from nodes ``(A, B)`` to
nodes ``(C, D)`` in a single move, it is possible to do so in a few
steps::
764 # instance is located on A, B
node1# gnt-instance replace-disks -n nodeC instance1
766 # instance has moved from (A, B) to (A, C)
767 # we now flip the primary/secondary nodes
768 node1# gnt-instance migrate instance1
769 # instance lives on (C, A)
770 # we can then change A to D via:
node1# gnt-instance replace-disks -n nodeD instance1
773 Which brings it into the final configuration of ``(C, D)``. Note that we
needed to do two replace-disks operations (two copies of the instance
disks), because we needed to get rid of both the original nodes (A and
B).
There are far fewer node operations available than instance operations,
but they are equally important for maintaining a healthy cluster.
787 It is at any time possible to extend the cluster with one more node, by
788 using the node add operation::
790 gnt-node add NEW_NODE
792 If the cluster has a replication network defined, then you need to pass
the ``-s REPLICATION_IP`` parameter to this command.
795 A variation of this command can be used to re-configure a node if its
Ganeti configuration is broken, for example if it has been reinstalled
by mistake::
799 gnt-node add --readd EXISTING_NODE
This will reinitialise the node as if it had been newly added, but while
802 keeping its existing configuration in the cluster (primary/secondary IP,
803 etc.), in other words you won't need to use ``-s`` here.
805 Changing the node role
806 ++++++++++++++++++++++
808 A node can be in different roles, as explained in the
809 :ref:`terminology-label` section. Promoting a node to the master role is
810 special, while the other roles are handled all via a single command.
812 Failing over the master node
813 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
815 If you want to promote a different node to the master role (for whatever
816 reason), run on any other master-candidate node the command::
818 gnt-cluster master-failover
820 and the node you ran it on is now the new master. In case you try to run
821 this on a non master-candidate node, you will get an error telling you
822 which nodes are valid.
824 Changing between the other roles
825 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
827 The ``gnt-node modify`` command can be used to select a new role::
829 # change to master candidate
830 gnt-node modify -C yes NODE
831 # change to drained status
832 gnt-node modify -D yes NODE
833 # change to offline status
834 gnt-node modify -O yes NODE
835 # change to regular mode (reset all flags)
836 gnt-node modify -O no -D no -C no NODE
838 Note that the cluster requires that at any point in time, a certain
839 number of nodes are master candidates, so changing from master candidate
840 to other roles might fail. It is recommended to either force the
841 operation (via the ``--force`` option) or first change the number of
842 master candidates in the cluster - see :ref:`cluster-config-label`.
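For example, a hedged sketch (the pool size value is arbitrary)::

  # reduce the required number of master candidates, then demote the node
  gnt-cluster modify --candidate-pool-size=3
  gnt-node modify -C no NODE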
There are two steps to moving instances off a node:
- moving the primary instances (actually converting them into secondary
instances)
- moving the secondary instances (including any instances converted in
the step above)
854 Primary instance conversion
855 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
857 For this step, you can use either individual instance move
858 commands (as seen in :ref:`instance-change-primary-label`) or the bulk
859 per-node versions; these are::
861 gnt-node migrate NODE
862 gnt-node evacuate NODE
Note that the instance “move” command doesn't currently have a node
equivalent.
867 Both these commands, or the equivalent per-instance command, will make
868 this node the secondary node for the respective instances, whereas their
869 current secondary node will become primary. Note that it is not possible
870 to change in one step the primary node to another node as primary, while
871 keeping the same secondary node.
873 Secondary instance evacuation
874 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
876 For the evacuation of secondary instances, a command called
877 :command:`gnt-node evacuate` is provided and its syntax is::
879 gnt-node evacuate -I IALLOCATOR_SCRIPT NODE
880 gnt-node evacuate -n DESTINATION_NODE NODE
882 The first version will compute the new secondary for each instance in
883 turn using the given iallocator script, whereas the second one will
884 simply move all instances to DESTINATION_NODE.
889 Once a node no longer has any instances (neither primary nor secondary),
890 it's easy to remove it from the cluster::
892 gnt-node remove NODE_NAME
894 This will deconfigure the node, stop the ganeti daemons on it and leave
it, hopefully, as it was before it joined the cluster.
900 When using LVM (either standalone or with DRBD), it can become tedious
to debug and fix in case of errors. Furthermore, even file-based
902 storage can become complicated to handle manually on many hosts. Ganeti
903 provides a couple of commands to help with automation.
908 This is a command specific to LVM handling. It allows listing the
909 logical volumes on a given node or on all nodes and their association to
910 instances via the ``volumes`` command::
912 node1# gnt-node volumes
913 Node PhysDev VG Name Size Instance
914 node1 /dev/sdb1 xenvg e61fbc97-….disk0 512M instance17
915 node1 /dev/sdb1 xenvg ebd1a7d1-….disk0 512M instance19
916 node2 /dev/sdb1 xenvg 0af08a3d-….disk0 512M instance20
917 node2 /dev/sdb1 xenvg cc012285-….disk0 512M instance16
918 node2 /dev/sdb1 xenvg f0fac192-….disk0 512M instance18
920 The above command maps each logical volume to a volume group and
921 underlying physical volume and (possibly) to an instance.
923 .. _storage-units-label:
925 Generalized storage handling
926 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
928 .. versionadded:: 2.1
930 Starting with Ganeti 2.1, a new storage framework has been implemented
that tries to abstract the handling of the storage type the cluster
uses.

First is listing the backend storage and their space situation::
936 node1# gnt-node list-storage
937 Node Name Size Used Free
938 node1 /dev/sda7 673.8G 0M 673.8G
939 node1 /dev/sdb1 698.6G 1.5G 697.1G
940 node2 /dev/sda7 673.8G 0M 673.8G
941 node2 /dev/sdb1 698.6G 1.0G 697.6G
943 The default is to list LVM physical volumes. It's also possible to list
944 the LVM volume groups::
946 node1# gnt-node list-storage -t lvm-vg
951 Next is repairing storage units, which is currently only implemented for
952 volume groups and does the equivalent of ``vgreduce --removemissing``::
954 node1# gnt-node repair-storage node2 lvm-vg xenvg
955 Sun Oct 25 22:21:45 2009 Repairing storage unit 'xenvg' on node2 ...
957 Last is the modification of volume properties, which is (again) only
958 implemented for LVM physical volumes and allows toggling the
959 ``allocatable`` value::
961 node1# gnt-node modify-storage --allocatable=no node2 lvm-pv /dev/sdb1
963 Use of the storage commands
964 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
All these commands are needed when recovering a node from a disk
failure:
969 - first, we need to recover from complete LVM failure (due to missing
970 disk), by running the ``repair-storage`` command
971 - second, we need to change allocation on any partially-broken disk
(i.e. LVM still sees it, but it has bad blocks) by running
``modify-storage``
- then we can evacuate the instances as needed, as in the sketch below
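Putting these together, a hedged end-to-end sketch (node, volume group
and device names are assumptions)::

  # fix the volume group after the disk loss
  node1# gnt-node repair-storage node2 lvm-vg xenvg
  # stop LVM from allocating onto the partially-broken physical volume
  node1# gnt-node modify-storage --allocatable=no node2 lvm-pv /dev/sdb1
  # finally, move the affected secondary instances to a healthy node
  node1# gnt-node evacuate -n node3 node2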
Besides the cluster initialisation command (which is detailed in the
981 :doc:`install` document) and the master failover command which is
982 explained under node handling, there are a couple of other cluster
983 operations available.
985 .. _cluster-config-label:
990 One of the few commands that can be run on any node (not only the
991 master) is the ``getmaster`` command::
993 node2# gnt-cluster getmaster
997 It is possible to query and change global cluster parameters via the
998 ``info`` and ``modify`` commands::
1000 node1# gnt-cluster info
1001 Cluster name: cluster.example.com
1002 Cluster UUID: 07805e6f-f0af-4310-95f1-572862ee939c
1003 Creation time: 2009-09-25 05:04:15
1004 Modification time: 2009-10-18 22:11:47
1005 Master node: node1.example.com
1006 Architecture (this node): 64bit (x86_64)
1009 Default hypervisor: xen-pvm
1010 Enabled hypervisors: xen-pvm
1011 Hypervisor parameters:
1013 root_path: /dev/sda1
1016 - candidate pool size: 10
1018 Default instance parameters:
1022 Default nic parameters:
The various parameters above can be changed via the ``modify``
1028 commands as follows:
1030 - the hypervisor parameters can be changed via ``modify -H
1031 xen-pvm:root_path=…``, and so on for other hypervisors/key/values
1032 - the "default instance parameters" are changeable via ``modify -B
1033 parameter=value…`` syntax
1034 - the cluster parameters are changeable via separate options to the
1035 modify command (e.g. ``--candidate-pool-size``, etc.)
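A few hedged examples of such modifications (the values are
illustrative only)::

  # change the default root_path for the xen-pvm hypervisor
  gnt-cluster modify -H xen-pvm:root_path=/dev/xvda1
  # change the default number of virtual CPUs for new instances
  gnt-cluster modify -B vcpus=2
  # change the candidate pool size
  gnt-cluster modify --candidate-pool-size=10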
1037 For detailed option list see the :manpage:`gnt-cluster(8)` man page.
The cluster version can be obtained via the ``version`` command::

1040 node1# gnt-cluster version
1041 Software version: 2.1.0
1042 Internode protocol: 20
1043 Configuration format: 2010000
1047 This is not very useful except when debugging Ganeti.
1049 Global node commands
1050 ++++++++++++++++++++
1052 There are two commands provided for replicating files to all nodes of a
1053 cluster and for running commands on all the nodes::
1055 node1# gnt-cluster copyfile /path/to/file
1056 node1# gnt-cluster command ls -l /path/to/file
1058 These are simple wrappers over scp/ssh and more advanced usage can be
1059 obtained using :manpage:`dsh(1)` and similar commands. But they are
1060 useful to update an OS script from the master node, for example.
1062 Cluster verification
1063 ++++++++++++++++++++
1065 There are three commands that relate to global cluster checks. The first
1066 one is ``verify`` which gives an overview on the cluster state,
1067 highlighting any issues. In normal operation, this command should return
1068 no ``ERROR`` messages::
1070 node1# gnt-cluster verify
1071 Sun Oct 25 23:08:58 2009 * Verifying global settings
1072 Sun Oct 25 23:08:58 2009 * Gathering data (2 nodes)
1073 Sun Oct 25 23:09:00 2009 * Verifying node status
1074 Sun Oct 25 23:09:00 2009 * Verifying instance status
1075 Sun Oct 25 23:09:00 2009 * Verifying orphan volumes
1076 Sun Oct 25 23:09:00 2009 * Verifying remaining instances
1077 Sun Oct 25 23:09:00 2009 * Verifying N+1 Memory redundancy
1078 Sun Oct 25 23:09:00 2009 * Other Notes
1079 Sun Oct 25 23:09:00 2009 - NOTICE: 5 non-redundant instance(s) found.
1080 Sun Oct 25 23:09:00 2009 * Hooks Results
1082 The second command is ``verify-disks``, which checks that the instance's
disks have the correct status based on the desired instance state
(up/down)::
1086 node1# gnt-cluster verify-disks
1088 Note that this command will show no output when disks are healthy.
1090 The last command is used to repair any discrepancies in Ganeti's
1091 recorded disk size and the actual disk size (disk size information is
1092 needed for proper activation and growth of DRBD-based disks)::
1094 node1# gnt-cluster repair-disk-sizes
1095 Sun Oct 25 23:13:16 2009 - INFO: Disk 0 of instance instance1 has mismatched size, correcting: recorded 512, actual 2048
1096 Sun Oct 25 23:13:17 2009 - WARNING: Invalid result from node node4, ignoring node results
1098 The above shows one instance having wrong disk size, and a node which
returned invalid data, and thus we ignored all primary instances of that
node.
1102 Configuration redistribution
1103 ++++++++++++++++++++++++++++
1105 If the verify command complains about file mismatches between the master
1106 and other nodes, due to some node problems or if you manually modified
configuration files, you can force a push of the master configuration
1108 to all other nodes via the ``redist-conf`` command::
1110 node1# gnt-cluster redist-conf
This command will be silent unless there are problems sending updates to
the other nodes.
1120 It is possible to rename a cluster, or to change its IP address, via the
1121 ``rename`` command. If only the IP has changed, you need to pass the
1122 current name and Ganeti will realise its IP has changed::
1124 node1# gnt-cluster rename cluster.example.com
1125 This will rename the cluster to 'cluster.example.com'. If
1126 you are connected over the network to the cluster name, the operation
1127 is very dangerous as the IP address will be removed from the node and
1128 the change may not go through. Continue?
1130 Failure: prerequisites not met for this operation:
1131 Neither the name nor the IP address of the cluster has changed
1133 In the above output, neither value has changed since the cluster
1134 initialisation so the operation is not completed.
1139 The job queue execution in Ganeti 2.0 and higher can be inspected,
1140 suspended and resumed via the ``queue`` command::
1142 node1~# gnt-cluster queue info
1143 The drain flag is unset
1144 node1~# gnt-cluster queue drain
1145 node1~# gnt-instance stop instance1
1146 Failed to submit job for instance1: Job queue is drained, refusing job
1147 node1~# gnt-cluster queue info
1148 The drain flag is set
1149 node1~# gnt-cluster queue undrain
1151 This is most useful if you have an active cluster and you need to
1152 upgrade the Ganeti software, or simply restart the software on any node:
1154 #. suspend the queue via ``queue drain``
1155 #. wait until there are no more running jobs via ``gnt-job list``
1156 #. restart the master or another node, or upgrade the software
1157 #. resume the queue via ``queue undrain``
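A hedged transcript of this procedure (the init script path is an
assumption and depends on your distribution)::

  node1# gnt-cluster queue drain
  node1# gnt-job list            # repeat until no jobs are running
  node1# /etc/init.d/ganeti restart
  node1# gnt-cluster queue undrain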
.. note:: this command only stores a local flag file, and if you fail
over the master, it will have no effect on the new master.
1166 The :manpage:`ganeti-watcher` is a program, usually scheduled via
1167 ``cron``, that takes care of cluster maintenance operations (restarting
1168 downed instances, activating down DRBD disks, etc.). However, during
1169 maintenance and troubleshooting, this can get in your way; disabling it
by commenting out the cron job is not recommended, as this can be
1171 forgotten. Thus there are some commands for automated control of the
1172 watcher: ``pause``, ``info`` and ``continue``::
1174 node1~# gnt-cluster watcher info
1175 The watcher is not paused.
1176 node1~# gnt-cluster watcher pause 1h
1177 The watcher is paused until Mon Oct 26 00:30:37 2009.
1178 node1~# gnt-cluster watcher info
1179 The watcher is paused until Mon Oct 26 00:30:37 2009.
1180 node1~# ganeti-watcher -d
1181 2009-10-25 23:30:47,984: pid=28867 ganeti-watcher:486 DEBUG Pause has been set, exiting
1182 node1~# gnt-cluster watcher continue
1183 The watcher is no longer paused.
1184 node1~# ganeti-watcher -d
1185 2009-10-25 23:31:04,789: pid=28976 ganeti-watcher:345 DEBUG Archived 0 jobs, left 0
1186 2009-10-25 23:31:05,884: pid=28976 ganeti-watcher:280 DEBUG Got data from cluster, writing instance status file
1187 2009-10-25 23:31:06,061: pid=28976 ganeti-watcher:150 DEBUG Data didn't change, just touching status file
1188 node1~# gnt-cluster watcher info
1189 The watcher is not paused.
The exact details of the argument to the ``pause`` command are available
in the manpage.
.. note:: this command only stores a local flag file, and if you fail
over the master, it will have no effect on the new master.
1198 Node auto-maintenance
1199 +++++++++++++++++++++
1201 If the cluster parameter ``maintain_node_health`` is enabled (see the
1202 manpage for :command:`gnt-cluster`, the init and modify subcommands),
1203 then the following will happen automatically:
1205 - the watcher will shutdown any instances running on offline nodes
1206 - the watcher will deactivate any DRBD devices on offline nodes
1208 In the future, more actions are planned, so only enable this parameter
1209 if the nodes are completely dedicated to Ganeti; otherwise it might be
1210 possible to lose data due to auto-maintenance actions.
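If you decide to enable it, a hedged example (assuming the
``--maintain-node-health`` option of ``gnt-cluster modify``)::

  gnt-cluster modify --maintain-node-health=yes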
1212 Removing a cluster entirely
1213 +++++++++++++++++++++++++++
The usual method to clean up a cluster is to run ``gnt-cluster
destroy``; however, if the Ganeti installation is broken in any way then
this will not run.
1219 It is possible in such a case to cleanup manually most if not all traces
1220 of a cluster installation by following these steps on all of the nodes:
1222 1. Shutdown all instances. This depends on the virtualisation method
1223 used (Xen, KVM, etc.):
- Xen: run ``xm list`` and ``xm destroy`` on all the non-Domain-0
instances
1227 - KVM: kill all the KVM processes
1228 - chroot: kill all processes under the chroot mountpoints
2. If using DRBD, shutdown all DRBD minors (which should by this time
no longer be in use by instances); on each node, run ``drbdsetup
1232 /dev/drbdN down`` for each active DRBD minor.
1234 3. If using LVM, cleanup the Ganeti volume group; if only Ganeti created
1235 logical volumes (and you are not sharing the volume group with the
1236 OS, for example), then simply running ``lvremove -f xenvg`` (replace
1237 'xenvg' with your volume group name) should do the required cleanup.
1239 4. If using file-based storage, remove recursively all files and
1240 directories under your file-storage directory: ``rm -rf
/srv/ganeti/file-storage/*``, replacing the path with the correct path
for your installation.
1244 5. Stop the ganeti daemons (``/etc/init.d/ganeti stop``) and kill any
1245 that remain alive (``pgrep ganeti`` and ``pkill ganeti``).
1247 6. Remove the ganeti state directory (``rm -rf /var/lib/ganeti/*``),
1248 replacing the path with the correct path for your installation.
1250 On the master node, remove the cluster from the master-netdev (usually
1251 ``xen-br0`` for bridged mode, otherwise ``eth0`` or similar), by running
1252 ``ip a del $clusterip/32 dev xen-br0`` (use the correct cluster ip and
1253 network device name).
1255 At this point, the machines are ready for a cluster creation; in case
1256 you want to remove Ganeti completely, you need to also undo some of the
1257 SSH changes and log directories:
- ``rm -rf /var/log/ganeti /srv/ganeti`` (replace with the correct
paths)
1261 - remove from ``/root/.ssh`` the keys that Ganeti added (check the
1262 ``authorized_keys`` and ``id_dsa`` files)
1263 - regenerate the host's SSH keys (check the OpenSSH startup scripts)
1266 Otherwise, if you plan to re-create the cluster, you can just go ahead
1267 and rerun ``gnt-cluster init``.
1272 The tags handling (addition, removal, listing) is similar for all the
1273 objects that support it (instances, nodes, and the cluster).
1278 Note that the set of characters present in a tag and the maximum tag
1279 length are restricted. Currently the maximum length is 128 characters,
there can be at most 4096 tags per object, and the set of characters
consists of alphanumeric characters plus ``.+*/:@-``.
1286 Tags can be added via ``add-tags``::
1288 gnt-instance add-tags INSTANCE a b c
gnt-node add-tags NODE a b c
1290 gnt-cluster add-tags a b c
1293 The above commands add three tags to an instance, to a node and to the
1294 cluster. Note that the cluster command only takes tags as arguments,
whereas the node and instance commands first require the node or
instance name before the tags.
1298 Tags can also be added from a file, via the ``--from=FILENAME``
1299 argument. The file is expected to contain one tag per line.
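For example (hedged; the file path is arbitrary)::

  # /tmp/initial-tags contains one tag per line, e.g. "owner:user2"
  gnt-cluster add-tags --from=/tmp/initial-tags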
Tags can also be removed via a syntax very similar to the add one::
1303 gnt-instance remove-tags INSTANCE a b c
And listed via::

gnt-instance list-tags
gnt-node list-tags
1309 gnt-cluster list-tags
It is also possible to execute a global search on all the tags defined
1315 in the cluster configuration, via a cluster command::
1317 gnt-cluster search-tags REGEXP
1319 The parameter expected is a regular expression (see
1320 :manpage:`regex(7)`). This will return all tags that match the search,
together with the object they are defined in (the names being shown in a
1322 hierarchical kind of way)::
1324 node1# gnt-cluster search-tags o
1326 /instances/instance1 owner:bar
1332 The various jobs submitted by the instance/node/cluster commands can be
1333 examined, canceled and archived by various invocations of the
1334 ``gnt-job`` command.
First is the job list command::

node1# gnt-job list
1339 17771 success INSTANCE_QUERY_DATA
1340 17773 success CLUSTER_VERIFY_DISKS
1341 17775 success CLUSTER_REPAIR_DISK_SIZES
1342 17776 error CLUSTER_RENAME(cluster.example.com)
1343 17780 success CLUSTER_REDIST_CONF
1344 17792 success INSTANCE_REBOOT(instance1.example.com)
More detailed information about a job can be found via the ``info``
command::
1349 node1# gnt-job info 17776
1352 Received: 2009-10-25 23:18:02.180569
1353 Processing start: 2009-10-25 23:18:02.200335 (delta 0.019766s)
1354 Processing end: 2009-10-25 23:18:02.279743 (delta 0.079408s)
1355 Total processing time: 0.099174 seconds
1359 Processing start: 2009-10-25 23:18:02.200335
1360 Processing end: 2009-10-25 23:18:02.252282
1362 name: cluster.example.com
1365 [Neither the name nor the IP address of the cluster has changed]
1368 During the execution of a job, it's possible to follow the output of a
job, similar to the log that one gets from the ``gnt-`` commands, via
the ``watch`` command::
1372 node1# gnt-instance add --submit … instance1
1374 node1# gnt-job watch 17818
1375 Output from job 17818 follows
1376 -----------------------------
1377 Mon Oct 26 00:22:48 2009 - INFO: Selected nodes for instance instance1 via iallocator dumb: node1, node2
1378 Mon Oct 26 00:22:49 2009 * creating instance disks...
1379 Mon Oct 26 00:22:52 2009 adding instance instance1 to cluster config
1380 Mon Oct 26 00:22:52 2009 - INFO: Waiting for instance instance1 to sync disks.
1382 Mon Oct 26 00:23:03 2009 creating os for instance instance1 on node node1
1383 Mon Oct 26 00:23:03 2009 * running the instance OS create scripts...
1384 Mon Oct 26 00:23:13 2009 * starting instance...
This is useful if you need to follow a job's progress from multiple
terminals.
1390 A job that has not yet started to run can be canceled::
1392 node1# gnt-job cancel 17810
1394 But not one that has already started execution::
1396 node1# gnt-job cancel 17805
1397 Job 17805 is no longer waiting in the queue
1399 There are two queues for jobs: the *current* and the *archive*
1400 queue. Jobs are initially submitted to the current queue, and they stay
1401 in that queue until they have finished execution (either successfully or
1402 not). At that point, they can be moved into the archive queue using e.g.
1403 ``gnt-job autoarchive all``. The ``ganeti-watcher`` script will do this
1404 automatically 6 hours after a job is finished. The ``ganeti-cleaner``
script will then remove the archived jobs from the archive directory.
1408 Note that ``gnt-job list`` only shows jobs in the current queue.
1409 Archived jobs can be viewed using ``gnt-job info <id>``.
1411 Special Ganeti deployments
1412 --------------------------
1414 Since Ganeti 2.4, it is possible to extend the Ganeti deployment with
1415 two custom scenarios: Ganeti inside Ganeti and multi-site model.
1417 Running Ganeti under Ganeti
1418 +++++++++++++++++++++++++++
1420 It is sometimes useful to be able to use a Ganeti instance as a Ganeti
1421 node (part of another cluster, usually). One example scenario is two
1422 small clusters, where we want to have an additional master candidate
1423 that holds the cluster configuration and can be used for helping with
1424 the master voting process.
However, these Ganeti instances should not host instances themselves, and
1427 should not be considered in the normal capacity planning, evacuation
1428 strategies, etc. In order to accomplish this, mark these nodes as
1429 non-``vm_capable``::
1431 node1# gnt-node modify --vm-capable=no node3
1433 The vm_capable status can be listed as usual via ``gnt-node list``::
1435 node1# gnt-node list -oname,vm_capable
1441 When this flag is set, the cluster will not do any operations that
1442 relate to instances on such nodes, e.g. hypervisor operations,
1443 disk-related operations, etc. Basically they will just keep the ssconf
files, and, if they are master candidates, the full configuration.
1449 If Ganeti is deployed in multi-site model, with each site being a node
1450 group (so that instances are not relocated across the WAN by mistake),
1451 it is conceivable that either the WAN latency is high or that some sites
1452 have a lower reliability than others. In this case, it doesn't make
1453 sense to replicate the job information across all sites (or even outside
1454 of a “central” node group), so it should be possible to restrict which
1455 nodes can become master candidates via the auto-promotion algorithm.
1457 Ganeti 2.4 introduces for this purpose a new ``master_capable`` flag,
1458 which (when unset) prevents nodes from being marked as master
1459 candidates, either manually or automatically.
1461 As usual, the node modify operation can change this flag::
1463 node1# gnt-node modify --auto-promote --master-capable=no node3
1464 Fri Jan 7 06:23:07 2011 - INFO: Demoting from master candidate
1465 Fri Jan 7 06:23:08 2011 - INFO: Promoted nodes to master candidate role: node4
1467 - master_capable -> False
1468 - master_candidate -> False
1470 And the node list operation will list this flag::
1472 node1# gnt-node list -oname,master_capable node1 node2 node3
1478 Note that marking a node both not ``vm_capable`` and not
1479 ``master_capable`` makes the node practically unusable from Ganeti's
point of view. Hence these two flags should probably be used in
1481 contrast: some nodes will be only master candidates (master_capable but
1482 not vm_capable), and other nodes will only hold instances (vm_capable
1483 but not master_capable).
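A hedged way to review how these roles are spread across the cluster::

  # show which nodes can host instances and which can become master
  gnt-node list -o name,vm_capable,master_capable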
1489 Beside the usual ``gnt-`` and ``ganeti-`` commands which are provided
1490 and installed in ``$prefix/sbin`` at install time, there are a couple of
other tools installed which are seldom used but can be helpful in some
cases.
1497 The ``lvmstrap`` tool, introduced in :ref:`configure-lvm-label` section,
1498 has two modes of operation:
1500 - ``diskinfo`` shows the discovered disks on the system and their status
- ``create`` takes all not-in-use disks and creates a volume group out
of them
1504 .. warning:: The ``create`` argument to this command causes data-loss!
The ``cfgupgrade`` tool is used to upgrade between major (and minor)
1510 Ganeti versions. Point-releases are usually transparent for the admin.
1512 More information about the upgrade procedure is listed on the wiki at
1513 http://code.google.com/p/ganeti/wiki/UpgradeNotes.
1515 There is also a script designed to upgrade from Ganeti 1.2 to 2.0,
1516 called ``cfgupgrade12``.
1521 .. note:: This command is not actively maintained; make sure you backup
1522 your configuration before using it
1524 This can be used as an alternative to direct editing of the
1525 main configuration file if Ganeti has a bug and prevents you, for
example, from removing an instance or a node from the configuration
file.
.. warning:: This command will erase existing instances if given as
arguments!
1537 This tool is used to exercise either the hardware of machines or
1538 alternatively the Ganeti software. It is safe to run on an existing
1539 cluster **as long as you don't pass it existing instance names**.
1541 The command will, by default, execute a comprehensive set of operations
against a list of instances, these being:

- instance creation
1545 - disk replacement (for redundant instances)
1546 - failover and migration (for redundant instances)
- move (for non-redundant instances)
- disk growth
1549 - add disks, remove disk
1550 - add NICs, remove NICs
1551 - export and then import
1555 - and finally removal of the test instances
1557 Executing all these operations will test that the hardware performs
1558 well: the creation, disk replace, disk add and disk growth will exercise
1559 the storage and network; the migrate command will test the memory of the
1560 systems. Depending on the passed options, it can also test that the
instance OS definitions are properly executing the rename, import and
export operations.
1567 This tool takes the Ganeti configuration and outputs a "sanitized"
1568 version, by randomizing or clearing:
1570 - DRBD secrets and cluster public key (always)
1571 - host names (optional)
1573 - OS names (optional)
1574 - LV names (optional, only useful for very old clusters which still have
1575 instances whose LVs are based on the instance name)
1577 By default, all optional items are activated except the LV name
1578 randomization. When passing ``--no-randomization``, which disables the
1579 optional items (i.e. just the DRBD secrets and cluster public keys are
1580 randomized), the resulting file can be used as a safety copy of the
1581 cluster config - while not trivial, the layout of the cluster can be
recreated from it, and if the instance disks have not been lost, it
1583 permits recovery from the loss of all master candidates.
1588 See :doc:`separate documentation for move-instance <move-instance>`.
1590 .. TODO: document cluster-merge tool
1593 Other Ganeti projects
1594 ---------------------
1596 Below is a list (which might not be up-to-date) of additional projects
1597 that can be useful in a Ganeti deployment. They can be downloaded from
1598 the project site (http://code.google.com/p/ganeti/) and the repositories
1599 are also on the project git site (http://git.ganeti.org).
1604 The ``ganeti-nbma`` software is designed to allow instances to live on a
1605 separate, virtual network from the nodes, and in an environment where
1606 nodes are not guaranteed to be able to reach each other via multicasting
or broadcasting. For more information, see the README in the source
archive.
1613 Before Ganeti version 2.5, this was a standalone project; since that
1614 version it is integrated into the Ganeti codebase (see
1615 :doc:`install-quick` for instructions on how to enable it). If you run
an older Ganeti version, you will have to download and build it
separately.
1619 For more information and installation instructions, see the README file
1620 in the source archive.
1622 .. vim: set textwidth=72 :