1 Ganeti installation tutorial
2 ============================
4 Documents Ganeti version |version|
8 .. highlight:: shell-example
13 Ganeti is a cluster virtualization management system based on Xen or
14 KVM. This document explains how to bootstrap a Ganeti node (Xen *dom0*,
15 the host Linux system for KVM), create a running cluster and install
16 virtual instances (Xen *domUs*, KVM guests). You need to repeat most of
17 the steps in this document for every node you want to install, but of
18 course we recommend creating some semi-automatic procedure if you plan
19 to deploy Ganeti on a medium/large scale.
21 A basic Ganeti terminology glossary is provided in the introductory
22 section of the :doc:`admin`. Please refer to that document if you are
23 uncertain about the terms we are using.
25 Ganeti has been developed for Linux and should be distribution-agnostic.
26 This documentation will use Debian Squeeze as an example system but the
27 examples can be translated to any other distribution. You are expected
28 to be familiar with your distribution, its package management system,
29 and Xen or KVM before trying to use Ganeti.
31 This document is divided into two main sections:
33 - Installation of the base system and base components
35 - Configuration of the environment for Ganeti
37 Each of these is divided into sub-sections. While a full Ganeti system
38 will need all of the steps specified, some are not strictly required for
39 every environment. Which ones they are, and why, is specified in the
40 corresponding sections.
42 Installing the base system and base components
43 ----------------------------------------------
Hardware requirements
+++++++++++++++++++++

Any system supported by your Linux distribution is fine. 64-bit systems
49 are better as they can support more memory.
51 Any disk drive recognized by Linux (``IDE``/``SCSI``/``SATA``/etc.) is
52 supported in Ganeti. Note that no shared storage (e.g. ``SAN``) is
53 needed to get high-availability features (but of course, one can be used
to store the images). While it is highly recommended to use more than
one disk drive in order to improve speed, Ganeti also works with one
disk per machine.
58 Installing the base system
59 ++++++++++++++++++++++++++
61 **Mandatory** on all nodes.
63 It is advised to start with a clean, minimal install of the operating
64 system. The only requirement you need to be aware of at this stage is to
65 partition leaving enough space for a big (**minimum** 20GiB) LVM volume
66 group which will then host your instance filesystems, if you want to use
all Ganeti features. The volume group name Ganeti uses (by default) is
``xenvg``.
70 You can also use file-based storage only, without LVM, but this setup is
71 not detailed in this document.
73 If you choose to use RBD-based instances, there's no need for LVM
74 provisioning. However, this feature is experimental, and is not yet
75 recommended for production clusters.
77 While you can use an existing system, please note that the Ganeti
78 installation is intrusive in terms of changes to the system
configuration, and it's best to use a newly-installed system without
important data on it.
Also, for best results, it's advised that the nodes have hardware and
software configurations as similar as possible. This will make
administration much easier.
Note that Ganeti requires the hostnames of the systems (i.e. what the
``hostname`` command outputs) to be fully-qualified names, not short
names. In other words, you should use *node1.example.com* as a hostname
and not just *node1*.
94 .. admonition:: Debian
96 Debian usually configures the hostname differently than you need it
for Ganeti. For example, this is what it puts in ``/etc/hosts`` in
certain situations::
101 127.0.1.1 node1.example.com node1
103 but for Ganeti you need to have::
106 192.0.2.1 node1.example.com node1
108 replacing ``192.0.2.1`` with your node's address. Also, the file
109 ``/etc/hostname`` which configures the hostname of the system
should contain ``node1.example.com`` and not just ``node1`` (you
need to run the command ``/etc/init.d/hostname.sh start`` after
changing the file).
114 .. admonition:: Why a fully qualified host name
Although most distributions use only the short name in the
``/etc/hostname`` file, we still think Ganeti nodes should use the
full name. The reason for this is that calling ``hostname --fqdn``
requires the resolver library to work and is a 'guess' via heuristics
at what your domain name is. Since Ganeti can be used among other
things to host DNS servers, we want to depend on them as little as
possible, and we'd rather have the uname() syscall return the full
node name.
We haven't ever found any breakage in using a full hostname on a
Linux system, and in any case we recommend having only a minimal
installation on Ganeti nodes, and using instances (or other
dedicated machines) to run the rest of your network services. By
doing this you can change the ``/etc/hostname`` file to contain an
FQDN without the fear of breaking anything unrelated.
133 Installing The Hypervisor
134 +++++++++++++++++++++++++
136 **Mandatory** on all nodes.
While Ganeti is developed with the ability to modularly run on different
virtualization environments in mind, the only two currently usable on a
live system are Xen and KVM. Supported Xen versions are: 3.0.3 and later
3.x versions, and 4.x (tested up to 4.1). Supported KVM versions are 72
and above.
144 Please follow your distribution's recommended way to install and set up
145 Xen, or install Xen from the upstream source, if you wish, following
their manual. For KVM, make sure you have a KVM-enabled kernel and the
KVM tools.
149 After installing Xen, you need to reboot into your new system. On some
150 distributions this might involve configuring GRUB appropriately, whereas
151 others will configure it automatically when you install the respective
152 kernels. For KVM no reboot should be necessary.
154 .. admonition:: Xen on Debian
156 Under Debian you can install the relevant ``xen-linux-system``
157 package, which will pull in both the hypervisor and the relevant
158 kernel. Also, if you are installing a 32-bit system, you should
install the ``libc6-xen`` package (run ``apt-get install
libc6-xen``).
165 It's recommended that dom0 is restricted to a low amount of memory
166 (512MiB or 1GiB is reasonable) and that memory ballooning is disabled in
167 the file ``/etc/xen/xend-config.sxp`` by setting the value
``dom0-min-mem`` to 0, like this::

  (dom0-min-mem 0)
172 For optimum performance when running both CPU and I/O intensive
173 instances, it's also recommended that the dom0 is restricted to one CPU
174 only, for example by booting with the kernel parameter ``maxcpus=1``.
176 It is recommended that you disable xen's automatic save of virtual
177 machines at system shutdown and subsequent restore of them at reboot.
To achieve this, make sure the variable ``XENDOMAINS_SAVE`` in the file
179 ``/etc/default/xendomains`` is set to an empty value.
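For example, the relevant line in ``/etc/default/xendomains`` would
look like this (a sketch; the surrounding comments in the file may
differ between distributions)::

  XENDOMAINS_SAVE=""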
181 If you want to use live migration make sure you have, in the xen config
file, something that allows the nodes to migrate instances between each
other. For example::
187 (xend-relocation-server yes)
188 (xend-relocation-port 8002)
189 (xend-relocation-address '')
190 (xend-relocation-hosts-allow '^192\\.0\\.2\\.[0-9]+$')
The second line assumes that the hypervisor parameter
``migration_port`` is set to 8002; otherwise modify it to match. The
last line assumes that all your nodes have secondary IPs in the
192.0.2.0/24 network; adjust it according to your setup.
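After restarting the Xen daemon you can verify that the relocation
server is listening, for example like this (a quick sanity check, not
part of the original steps; assumes the default port above)::

  $ netstat -tln | grep 8002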
198 .. admonition:: Debian
200 Besides the ballooning change which you need to set in
201 ``/etc/xen/xend-config.sxp``, you need to set the memory and nosmp
202 parameters in the file ``/boot/grub/menu.lst``. You need to modify
203 the variable ``xenhopt`` to add ``dom0_mem=1024M`` like this:
207 ## Xen hypervisor options to use with the default Xen boot option
208 # xenhopt=dom0_mem=1024M
and the ``xenkopt`` needs to include the ``maxcpus`` option like
this::

  ## Xen Linux kernel options to use with the default Xen boot option
  # xenkopt=maxcpus=1
218 Any existing parameters can be left in place: it's ok to have
219 ``xenkopt=console=tty0 maxcpus=1``, for example. After modifying the
files, you need to run::

  $ update-grub
224 If you want to run HVM instances too with Ganeti and want VNC access to
225 the console of your instances, set the following two entries in
226 ``/etc/xen/xend-config.sxp``:
(vnc-listen '0.0.0.0')
(vncpasswd '')
232 You need to restart the Xen daemon for these settings to take effect::
234 $ /etc/init.d/xend restart
236 Selecting the instance kernel
237 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
239 After you have installed Xen, you need to tell Ganeti exactly what
240 kernel to use for the instances it will create. This is done by creating
241 a symlink from your actual kernel to ``/boot/vmlinuz-3-xenU``, and one
from your initrd to ``/boot/initrd-3-xenU`` [#defkernel]_. Note that
if you don't use an initrd for the domU kernel, you don't need to create
the initrd symlink.
246 .. admonition:: Debian
248 After installation of the ``xen-linux-system`` package, you need to
249 run (replace the exact version number with the one you have)::
$ cd /boot
$ ln -s vmlinuz-%2.6.26-1%-xen-amd64 vmlinuz-3-xenU
253 $ ln -s initrd.img-%2.6.26-1%-xen-amd64 initrd-3-xenU
255 By default, the initrd doesn't contain the Xen block drivers needed
256 to mount the root device, so it is recommended to update the initrd
257 by following these two steps:
259 - edit ``/etc/initramfs-tools/modules`` and add ``xen_blkfront``
260 - run ``update-initramfs -u``
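For example, the two steps above could be run as follows (a sketch;
adjust if your ``/etc/initramfs-tools/modules`` already lists other
modules)::

  $ echo xen_blkfront >> /etc/initramfs-tools/modules
  $ update-initramfs -u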
Installing DRBD
+++++++++++++++

Recommended on all nodes: DRBD_ is required if you want to use the high
266 availability (HA) features of Ganeti, but optional if you don't require
267 them or only run Ganeti on single-node clusters. You can upgrade a
268 non-HA cluster to an HA one later, but you might need to convert all
269 your instances to DRBD to take advantage of the new features.
271 .. _DRBD: http://www.drbd.org/
273 Supported DRBD versions: 8.0-8.3. It's recommended to have at least
version 8.0.12. Note that for version 8.2 and newer you need to pass
275 the ``usermode_helper=/bin/true`` parameter to the module, either by
276 configuring ``/etc/modules`` or when inserting it manually.
Now the bad news: unless your distribution already provides it,
installing DRBD might involve recompiling your kernel or otherwise
fiddling with it. Hopefully at least the Xen-ified kernel source to
start from will be provided (if you intend to use Xen).
283 The good news is that you don't need to configure DRBD at all. Ganeti
284 will do it for you for every instance you set up. If you have the DRBD
285 utils installed and the module in your kernel you're fine. Please check
286 that your system is configured to load the module at every boot, and
287 that it passes the following option to the module:
288 ``minor_count=NUMBER``. We recommend that you use 128 as the value of
289 the minor_count - this will allow you to use up to 64 instances in total
290 per node (both primary and secondary, when using only one disk per
instance). You can increase the number up to 255 if you need more
instances.
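To check that the module is loaded with the intended parameters (a
quick sanity check, not part of the original steps), you can inspect
the module parameters through sysfs::

  $ cat /sys/module/drbd/parameters/minor_count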
295 .. admonition:: Debian
297 On Debian, you can just install (build) the DRBD module with the
following commands, making sure you are running the target (Xen or
KVM) kernel::

  $ apt-get install drbd8-source drbd8-utils
  $ m-a update
  $ m-a a-i drbd8

Or on newer versions, if the kernel already has modules::

  $ apt-get install drbd8-utils
309 Then to configure it for Ganeti::
$ echo drbd minor_count=128 usermode_helper=/bin/true >> /etc/modules
$ depmod -a
$ modprobe drbd minor_count=128 usermode_helper=/bin/true
315 It is also recommended that you comment out the default resources in
316 the ``/etc/drbd.conf`` file, so that the init script doesn't try to
317 configure any drbd devices. You can do this by prefixing all
*resource* lines in the file with the keyword *skip*, like this::

  skip {
    resource r0 {
      ...
    }
  }

  skip {
    resource "r1" {
      ...
    }
  }

Installing RBD
++++++++++++++

Recommended on all nodes: RBD_ is required if you want to create
338 instances with RBD disks residing inside a RADOS cluster (make use of
the rbd disk template). RBD-based instances can failover or migrate to
any other node in the Ganeti cluster, enabling you to exploit all of
Ganeti's high availability (HA) features.
344 Be careful though: rbd is still experimental! For now it is
recommended only for testing purposes. No sensitive data should be
stored there.
350 You will need the ``rbd`` and ``libceph`` kernel modules, the RBD/Ceph
351 userspace utils (ceph-common Debian package) and an appropriate
352 Ceph/RADOS configuration file on every VM-capable node.
You will also need a working RADOS Cluster accessible by the above
nodes.

RADOS Cluster
~~~~~~~~~~~~~

You will need a working RADOS Cluster accessible by all VM-capable nodes
361 to use the RBD template. For more information on setting up a RADOS
362 Cluster, refer to the `official docs <http://ceph.newdream.net/>`_.
364 If you want to use a pool for storing RBD disk images other than the
365 default (``rbd``), you should first create the pool in the RADOS
Cluster, and then set the corresponding rbd disk parameter named
``pool``.

Kernel Modules
~~~~~~~~~~~~~~
372 Unless your distribution already provides it, you might need to compile
373 the ``rbd`` and ``libceph`` modules from source. You will need Linux
374 Kernel 3.2 or above for the kernel modules. Alternatively you will have
375 to build them as external modules (from Linux Kernel source 3.2 or
above), if you want to run a less recent kernel, or your kernel doesn't
include them.
Userspace Utils
~~~~~~~~~~~~~~~

The RBD template has been tested with ``ceph-common`` v0.38 and
383 above. We recommend using the latest version of ``ceph-common``.
385 .. admonition:: Debian
387 On Debian, you can just install the RBD/Ceph userspace utils with
388 the following command::
390 $ apt-get install ceph-common
Configuration file
~~~~~~~~~~~~~~~~~~

You should also provide an appropriate configuration file
396 (``ceph.conf``) in ``/etc/ceph``. For the rbd userspace utils, you'll
397 only need to specify the IP addresses of the RADOS Cluster monitors.
399 .. admonition:: ceph.conf
Sample configuration file::

  [mon.a]
      host = example_monitor_host1
      mon addr = 1.2.3.4:6789
  [mon.b]
      host = example_monitor_host2
      mon addr = 1.2.3.5:6789
  [mon.c]
      host = example_monitor_host3
      mon addr = 1.2.3.6:6789
For more information, please see the `Ceph Docs
<http://ceph.newdream.net/docs/latest/>`_.
418 Other required software
419 +++++++++++++++++++++++
421 See :doc:`install-quick`.
423 Setting up the environment for Ganeti
424 -------------------------------------
426 Configuring the network
427 +++++++++++++++++++++++
429 **Mandatory** on all nodes.
431 You can run Ganeti either in "bridged mode", "routed mode" or
432 "openvswitch mode". In bridged mode, the default, the instances network
433 interfaces will be attached to a software bridge running in dom0. Xen by
434 default creates such a bridge at startup, but your distribution might
435 have a different way to do things, and you'll definitely need to
436 manually set it up under KVM.
438 Beware that the default name Ganeti uses is ``xen-br0`` (which was used
439 in Xen 2.0) while Xen 3.0 uses ``xenbr0`` by default. See the
440 `Initializing the cluster`_ section to learn how to choose a different
441 bridge, or not to use one at all and use "routed mode".
443 In order to use "routed mode" under Xen, you'll need to change the
444 relevant parameters in the Xen config file. Under KVM instead, no config
445 change is necessary, but you still need to set up your network
446 interfaces correctly.
448 By default, under KVM, the "link" parameter you specify per-nic will
449 represent, if non-empty, a different routing table name or number to use
450 for your instances. This allows isolation between different instance
groups, and different routing policies between node traffic and instance
traffic.
You will need to configure the basic routes and rules of your routing
table outside of Ganeti. The vif scripts will only add /32 routes to your
456 instances, through their interface, in the table you specified (under
457 KVM, and in the main table under Xen).
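As a minimal sketch (the table name and number, the gateway and the
network below are hypothetical examples; the number 100 matches the
``link=100`` parameter used later in this document)::

  $ echo "100 instances" >> /etc/iproute2/rt_tables
  $ ip route add default via %192.0.2.254% table instances
  $ ip rule add from %192.0.2.0/24% table instances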
Also, for "openvswitch mode" under Xen, a custom network script is
needed. Under KVM everything should work, but you'll need to configure
your switches outside of Ganeti (as for bridges).
463 .. admonition:: Bridging issues with certain kernels
465 Some kernel versions (e.g. 2.6.32) have an issue where the bridge
466 will automatically change its ``MAC`` address to the lower-numbered
467 slave on port addition and removal. This means that, depending on
468 the ``MAC`` address of the actual NIC on the node and the addresses
469 of the instances, it could be that starting, stopping or migrating
470 instances will lead to timeouts due to the address of the bridge
471 (and thus node itself) changing.
473 To prevent this, it's enough to set the bridge manually to a
474 specific ``MAC`` address, which will disable this automatic address
475 change. In Debian, this can be done as follows in the bridge
476 configuration snippet::
478 up ip link set addr $(cat /sys/class/net/$IFACE/address) dev $IFACE
which will "set" the bridge address to the initial one, disallowing
the kernel from changing it back.
483 .. admonition:: Bridging under Debian
485 The recommended way to configure the Xen bridge is to edit your
486 ``/etc/network/interfaces`` file and substitute your normal
487 Ethernet stanza with the following snippet::
auto xen-br0
iface xen-br0 inet static
491 address %YOUR_IP_ADDRESS%
492 netmask %YOUR_NETMASK%
493 network %YOUR_NETWORK%
494 broadcast %YOUR_BROADCAST_ADDRESS%
gateway %YOUR_GATEWAY%
bridge_ports eth0
bridge_stp off
bridge_fd 0
499 # example for setting manually the bridge address to the eth0 NIC
500 up ip link set addr $(cat /sys/class/net/eth0/address) dev $IFACE
The following commands need to be executed on the local console::

  $ ifdown eth0
  $ ifup xen-br0
To check if the bridge is set up, use the ``ip`` and ``brctl show``
commands::

  $ ip a show xen-br0
511 9: xen-br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue
512 link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
513 inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
514 inet6 fe80::220:fcff:fe1e:d55d/64 scope link
515 valid_lft forever preferred_lft forever
$ brctl show xen-br0
bridge name     bridge id               STP enabled     interfaces
xen-br0         8000.0020fc1ed55d       no              eth0
521 .. _configure-lvm-label:
Configuring LVM
+++++++++++++++

**Mandatory** on all nodes.
528 The volume group is required to be at least 20GiB.
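You can check the size of an existing volume group with ``vgs`` (a
quick check, assuming the default ``xenvg`` name used below)::

  $ vgs xenvg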
530 If you haven't configured your LVM volume group at install time you need
531 to do it before trying to initialize the Ganeti cluster. This is done by
532 formatting the devices/partitions you want to use for it and then adding
533 them to the relevant volume group::
535 $ pvcreate /dev/%sda3%
536 $ vgcreate xenvg /dev/%sda3%
or::

$ pvcreate /dev/%sdb1%
541 $ pvcreate /dev/%sdc1%
542 $ vgcreate xenvg /dev/%sdb1% /dev/%sdc1%
If you want to add a device later you can do so with the *vgextend*
command::
547 $ pvcreate /dev/%sdd1%
548 $ vgextend xenvg /dev/%sdd1%
550 Optional: it is recommended to configure LVM not to scan the DRBD
551 devices for physical volumes. This can be accomplished by editing
552 ``/etc/lvm/lvm.conf`` and adding the ``/dev/drbd[0-9]+`` regular
expression to the ``filter`` variable, like this::
557 filter = ["r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]
Note that with Ganeti a helper script is provided - ``lvmstrap`` - which
will erase and configure as LVM any disk on your system that is not in
use. This is dangerous, so it's recommended to read its ``--help``
output if you plan to use it.
Installing Ganeti
+++++++++++++++++

**Mandatory** on all nodes.
569 It's now time to install the Ganeti software itself. Download the
570 source from the project page at `<http://code.google.com/p/ganeti/>`_,
571 and install it (replace 2.6.0 with the latest version)::
$ tar xvzf ganeti-%2.6.0%.tar.gz
$ cd ganeti-%2.6.0%
$ ./configure --localstatedir=/var --sysconfdir=/etc
$ make
$ make install
$ mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export
580 You also need to copy the file ``doc/examples/ganeti.initd`` from the
581 source archive to ``/etc/init.d/ganeti`` and register it with your
582 distribution's startup scripts, for example in Debian::
584 $ chmod +x /etc/init.d/ganeti
585 $ update-rc.d ganeti defaults 20 80
In order to automatically restart failed instances, you need to set up a
cron job to run the *ganeti-watcher* command. A sample cron file is
provided in the source at ``doc/examples/ganeti.cron`` and you can copy
that (adjusting the paths if necessary) to ``/etc/cron.d/ganeti``.
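For example (a sketch, assuming you are still in the unpacked source
directory)::

  $ cp doc/examples/ganeti.cron /etc/cron.d/ganeti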
What gets installed
~~~~~~~~~~~~~~~~~~~

The above ``make install`` invocation, or installing via your
596 distribution mechanisms, will install on the system:
598 - a set of python libraries under the *ganeti* namespace (depending on
599 the python version this can be located in either
600 ``lib/python-$ver/site-packages`` or various other locations)
601 - a set of programs under ``/usr/local/sbin`` or ``/usr/sbin``
- if the htools component was enabled, a set of programs under
603 ``/usr/local/bin`` or ``/usr/bin/``
604 - man pages for the above programs
605 - a set of tools under the ``lib/ganeti/tools`` directory
606 - an example iallocator script (see the admin guide for details) under
607 ``lib/ganeti/iallocators``
608 - a cron job that is needed for cluster maintenance
609 - an init script for automatic startup of Ganeti daemons
610 - provided but not installed automatically by ``make install`` is a bash
completion script that hopefully will ease working with the many
cluster commands
614 Installing the Operating System support packages
615 ++++++++++++++++++++++++++++++++++++++++++++++++
617 **Mandatory** on all nodes.
619 To be able to install instances you need to have an Operating System
620 installation script. An example OS that works under Debian and can
install Debian and Ubuntu instance OSes is provided on the project web
622 site. Download it from the project page and follow the instructions in
623 the ``README`` file. Here is the installation procedure (replace 0.9
624 with the latest version that is compatible with your ganeti version)::
627 $ wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-%0.9%.tar.gz
628 $ tar xzf ganeti-instance-debootstrap-%0.9%.tar.gz
$ cd ganeti-instance-debootstrap-%0.9%
$ ./configure
$ make
$ make install
634 In order to use this OS definition, you need to have internet access
635 from your nodes and have the *debootstrap*, *dump* and *restore*
636 commands installed on all nodes. Also, if the OS is configured to
637 partition the instance's disk in
``/etc/default/ganeti-instance-debootstrap``, you will need *kpartx*
installed.
641 .. admonition:: Debian
643 Use this command on all nodes to install the required packages::
645 $ apt-get install debootstrap dump kpartx
647 Or alternatively install the OS definition from the Debian package::
649 $ apt-get install ganeti-instance-debootstrap
In order for debootstrap instances to be able to shut down cleanly,
they must have basic ACPI support installed inside the instance. Which
packages are needed depends on the exact flavor of Debian or Ubuntu
you're installing, but the example defaults file has a
commented-out configuration line that works for Debian Lenny and
Squeeze::
660 EXTRA_PKGS="acpi-support-base,console-tools,udev"
662 ``kbd`` can be used instead of ``console-tools``, and more packages
663 can be added, of course, if needed.
665 Alternatively, you can create your own OS definitions. See the manpage
666 :manpage:`ganeti-os-interface(7)`.
668 Initializing the cluster
669 ++++++++++++++++++++++++
671 **Mandatory** once per cluster, on the first node.
673 The last step is to initialize the cluster. After you have repeated the
above process on all of your nodes, choose one as the master, and
execute::
677 $ gnt-cluster init %CLUSTERNAME%
679 The *CLUSTERNAME* is a hostname, which must be resolvable (e.g. it must
680 exist in DNS or in ``/etc/hosts``) by all the nodes in the cluster. You
681 must choose a name different from any of the nodes names for a
682 multi-node cluster. In general the best choice is to have a unique name
683 for a cluster, even if it consists of only one machine, as you will be
684 able to expand it later without any problems. Please note that the
685 hostname used for this must resolve to an IP address reserved
**exclusively** for this purpose, and cannot be the name of the first
node.
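For example, a hypothetical ``/etc/hosts`` entry reserving an address
for the cluster name (both the address and the name below are
placeholders; the address must not belong to any node)::

  192.0.2.50 cluster.example.com cluster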
689 If you want to use a bridge which is not ``xen-br0``, or no bridge at
690 all, change it with the ``--nic-parameters`` option. For example to
691 bridge on br0 you can add::
693 --nic-parameters link=br0
695 Or to not bridge at all, and use a separate routing table::
697 --nic-parameters mode=routed,link=100
699 If you don't have a ``xen-br0`` interface you also have to specify a
700 different network interface which will get the cluster IP, on the master
701 node, by using the ``--master-netdev <device>`` option.
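For example (a sketch, assuming the cluster IP should be brought up on
``eth0`` of the master node)::

  $ gnt-cluster init --master-netdev eth0 %CLUSTERNAME%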
703 You can use a different name than ``xenvg`` for the volume group (but
704 note that the name must be identical on all nodes). In this case you
need to specify it by passing the ``--vg-name <VGNAME>`` option to
706 ``gnt-cluster init``.
To set up the cluster as a Xen HVM cluster, use the
709 ``--enabled-hypervisors=xen-hvm`` option to enable the HVM hypervisor
710 (you can also add ``,xen-pvm`` to enable the PVM one too). You will also
711 need to create the VNC cluster password file
712 ``/etc/ganeti/vnc-cluster-password`` which contains one line with the
713 default VNC password for the cluster.
To set up the cluster for KVM-only usage (KVM and Xen cannot be mixed),
716 pass ``--enabled-hypervisors=kvm`` to the init command.
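For example (using the cluster name placeholder from above)::

  $ gnt-cluster init --enabled-hypervisors=kvm %CLUSTERNAME%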
718 You can also invoke the command with the ``--help`` option in order to
719 see all the possibilities.
721 Hypervisor/Network/Cluster parameters
722 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Please note that the default hypervisor/network/cluster parameters may
not be the correct ones for your environment. Carefully check them, and
change them either at cluster init time, or later with ``gnt-cluster
modify``.
729 Your instance types, networking environment, hypervisor type and version
730 may all affect what kind of parameters should be used on your cluster.
KVM instances are by default configured to use a host kernel, and to be
reached via serial console, which works nicely for Linux paravirtualized
736 instances. If you want fully virtualized instances you may want to
737 handle their kernel inside the instance, and to use VNC.
739 Some versions of KVM have a bug that will make an instance hang when
740 configured to use the serial console (which is the default) unless a
741 connection is made to it within about 2 seconds of the instance's
startup. In such cases it's recommended to disable the
743 ``serial_console`` option.
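For example, to disable the serial console for KVM instances at the
cluster level (a sketch; see :manpage:`gnt-cluster(8)` for the exact
syntax of hypervisor parameters)::

  $ gnt-cluster modify -H kvm:serial_console=false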
746 Joining the nodes to the cluster
747 ++++++++++++++++++++++++++++++++
749 **Mandatory** for all the other nodes.
751 After you have initialized your cluster you need to join the other nodes
to it. You can do so by executing the following command on the master
node::
755 $ gnt-node add %NODENAME%
757 Separate replication network
758 ++++++++++++++++++++++++++++
**Optional**

Ganeti uses DRBD to mirror the disk of the virtual instances between
763 nodes. To use a dedicated network interface for this (in order to
764 improve performance or to enhance security) you need to configure an
765 additional interface for each node. Use the *-s* option with
766 ``gnt-cluster init`` and ``gnt-node add`` to specify the IP address of
767 this secondary interface to use for each node. Note that if you
768 specified this option at cluster setup time, you must afterwards use it
769 for every node add operation.
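For example (a sketch; the secondary addresses below are hypothetical
and live on the dedicated replication network)::

  $ gnt-cluster init -s %192.168.1.1% %CLUSTERNAME%
  $ gnt-node add -s %192.168.1.2% %NODENAME%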
Testing the setup
+++++++++++++++++

Execute the ``gnt-node list`` command to see all nodes in the cluster::
$ gnt-node list
Node              DTotal DFree  MTotal MNode MFree Pinst Sinst
node1.example.com 197404 197404   2047  1896   125     0     0
780 The above shows a couple of things:
782 - The various Ganeti daemons can talk to each other
783 - Ganeti can examine the storage of the node (DTotal/DFree)
784 - Ganeti can talk to the selected hypervisor (MTotal/MNode/MFree)
Cluster burnin
~~~~~~~~~~~~~~

With Ganeti a tool called :command:`burnin` is provided that can test
790 most of the Ganeti functionality. The tool is installed under the
791 ``lib/ganeti/tools`` directory (either under ``/usr`` or ``/usr/local``
based on the installation method). See more details under
:ref:`burnin-label`.
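A hedged usage sketch (check the tool's ``--help`` output for the
options your version supports; the OS and instance names below are
placeholders)::

  $ /usr/lib/ganeti/tools/burnin -o %debootstrap+default% %instance1.example.com%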
Further steps
-------------

You can now proceed either to the :doc:`admin`, or read the manpages of
799 the various commands (:manpage:`ganeti(7)`, :manpage:`gnt-cluster(8)`,
800 :manpage:`gnt-node(8)`, :manpage:`gnt-instance(8)`,
801 :manpage:`gnt-job(8)`).
803 .. rubric:: Footnotes
805 .. [#defkernel] The kernel and initrd paths can be changed at either
cluster level (which changes the default for all instances) or at
instance level.
809 .. vim: set textwidth=72 :