<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
<article class="specification">
<title>Ganeti installation tutorial</title>
<para>Documents Ganeti version 1.2</para>
<title>Introduction</title>
Ganeti is a cluster virtualization management system based on
Xen. This document explains how to bootstrap a Ganeti node (Xen
<literal>dom0</literal>), create a running cluster and install
virtual instances (Xen <literal>domU</literal>). You need to
repeat most of the steps in this document for every node you
want to install, but of course we recommend creating some
semi-automatic procedure if you plan to deploy Ganeti on a
large scale.
A basic Ganeti terminology glossary is provided in the
introductory section of the <emphasis>Ganeti administrator's
guide</emphasis>. Please refer to that document if you are
uncertain about the terms we are using.
Ganeti has been developed for Linux and is
distribution-agnostic. This documentation will use Debian Etch
as an example system, but the examples can easily be translated
to any other distribution. You are expected to be familiar with
your distribution, its package management system, and Xen
before using Ganeti.
<para>This document is divided into two main sections:
<simpara>Installation of the base system and base
components</simpara>
<simpara>Configuration of the environment for
Ganeti</simpara>
Each of these is divided into sub-sections. While a full Ganeti
system will need all of the steps specified, some are not strictly
required for every environment. Which ones they are, and why, is
specified in the corresponding sections.
<title>Installing the base system and base components</title>
<title>Hardware requirements</title>
Any system supported by your Linux distribution is fine.
64-bit systems are better, as they can support more memory.
Any disk drive recognized by Linux
(<literal>IDE</literal>/<literal>SCSI</literal>/<literal>SATA</literal>/etc.)
is supported in Ganeti. Note that no shared storage
(e.g. <literal>SAN</literal>) is needed to get high-availability features. It is
highly recommended to use more than one disk drive to improve
speed, but Ganeti also works with one disk per machine.
<title>Installing the base system</title>
<emphasis role="strong">Mandatory</emphasis> on all nodes.
It is advised to start with a clean, minimal install of the
operating system. The only requirement you need to be aware of
at this stage is to partition leaving enough space for a big
(<emphasis role="strong">minimum
<constant>20GiB</constant></emphasis>) LVM volume group which
will then host your instance filesystems. The volume group
name Ganeti 1.2 uses (by default) is
<emphasis>xenvg</emphasis>.
While you can use an existing system, please note that the
Ganeti installation is intrusive in terms of changes to the
system configuration, and it's best to use a newly-installed
system without important data on it.
Also, for best results, it's advised that the nodes have
hardware and software configurations that are as similar as
possible. This will make administration much easier.
<title>Hostname issues</title>
Note that Ganeti requires the hostnames of the systems
(i.e. what the <computeroutput>hostname</computeroutput>
command outputs) to be fully-qualified names, not short
names. In other words, you should use
<literal>node1.example.com</literal> as a hostname and not
just <literal>node1</literal>.
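<para>
You can quickly check how a node's hostname is configured by
running the command; assuming, for illustration, a node named
<literal>node1.example.com</literal>, the output should be the
fully-qualified name:
</para>
<screen>
# hostname
node1.example.com
</screen>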
<title>Debian</title>
Note that Debian Etch configures the hostname differently
than you need it for Ganeti. For example, this is what
Etch puts in <filename>/etc/hosts</filename> in certain
cases:
127.0.1.1 node1.example.com node1
but for Ganeti you need to have:
192.168.1.1 node1.example.com node1
replacing <literal>192.168.1.1</literal> with your node's
address. Also, the file <filename>/etc/hostname</filename>
which configures the hostname of the system should contain
<literal>node1.example.com</literal> and not just
<literal>node1</literal> (you need to run the command
<computeroutput>/etc/init.d/hostname.sh
start</computeroutput> after changing the file).
<title>Installing Xen</title>
<emphasis role="strong">Mandatory</emphasis> on all nodes.
While Ganeti is developed with the ability to modularly run on
different virtualization environments in mind, the only one
currently usable on a live system is <ulink
url="http://xen.xensource.com/">Xen</ulink>. Supported
versions are: <simplelist type="inline">
<member><literal>3.0.3</literal></member>
<member><literal>3.0.4</literal></member>
<member><literal>3.1</literal></member> </simplelist>.
Please follow your distribution's recommended way to install
and set up Xen, or install Xen from the upstream source, if
you wish, following their manual.
After installing Xen you need to reboot into your Xen-ified
dom0 system. On some distributions this might involve
configuring GRUB appropriately, whereas others will configure
it automatically when you install Xen from a package.
<formalpara><title>Debian</title>
Under Debian Etch or Sarge+backports you can install the
relevant <literal>xen-linux-system</literal> package, which
will pull in both the hypervisor and the relevant
kernel. Also, if you are installing a 32-bit Etch, you should
install the <computeroutput>libc6-xen</computeroutput> package
(run <computeroutput>apt-get install
libc6-xen</computeroutput>).
<title>Xen settings</title>
It's recommended that dom0 be restricted to a low amount of
memory (<constant>512MiB</constant> is reasonable) and that
memory ballooning be disabled in the file
<filename>/etc/xen/xend-config.sxp</filename> by setting the
value <literal>dom0-min-mem</literal> to
<constant>0</constant>, like this:
<computeroutput>(dom0-min-mem 0)</computeroutput>
For optimum performance when running both CPU- and
I/O-intensive instances, it's also recommended that dom0 be
restricted to one CPU only, for example by booting with the
kernel parameter <literal>nosmp</literal>.
<title>Debian</title>
Besides the ballooning change which you need to set in
<filename>/etc/xen/xend-config.sxp</filename>, you need to
set the memory and nosmp parameters in the file
<filename>/boot/grub/menu.lst</filename>. You need to
modify the variable <literal>xenhopt</literal> to add
<userinput>dom0_mem=512M</userinput> like this:
## Xen hypervisor options to use with the default Xen boot option
# xenhopt=dom0_mem=512M
and the <literal>xenkopt</literal> needs to include the
<userinput>nosmp</userinput> option like this:
## Xen Linux kernel options to use with the default Xen boot option
# xenkopt=nosmp
Any existing parameters can be left in place: it's ok to
have <computeroutput>xenkopt=console=tty0
nosmp</computeroutput>, for example. After modifying the
files, you need to run:
<screen>/sbin/update-grub</screen>
<title>Selecting the instance kernel</title>
After you have installed Xen, you need to tell Ganeti
exactly what kernel to use for the instances it will
create. This is done by creating a
<emphasis>symlink</emphasis> from your actual kernel to
<filename>/boot/vmlinuz-2.6-xenU</filename>, and one from
your initrd to
<filename>/boot/initrd-2.6-xenU</filename>. Note that if you
don't use an initrd for the <literal>domU</literal> kernel,
you don't need to create the initrd symlink.
<title>Debian</title>
After installation of the
<literal>xen-linux-system</literal> package, you need to
run (replace the exact version number with the one you
have):
cd /boot
ln -s vmlinuz-2.6.18-5-xen-686 vmlinuz-2.6-xenU
ln -s initrd.img-2.6.18-5-xen-686 initrd-2.6-xenU
<title>Installing DRBD</title>
Recommended on all nodes: <ulink
url="http://www.drbd.org/">DRBD</ulink> is required if you
want to use the high availability (HA) features of Ganeti, but
optional if you don't require HA or only run Ganeti on
single-node clusters. You can upgrade a non-HA cluster to an
HA one later, but you might need to export and re-import all
your instances to take advantage of the new features.
Supported DRBD version: the <literal>0.7</literal>
series. It's recommended to have at least version
<literal>0.7.24</literal> if you use <command>udev</command>
since older versions have a bug related to device discovery
which can be triggered in cases of hard drive failure.
Now the bad news: unless your distribution already provides
it, installing DRBD might involve recompiling your kernel, or
at least fiddling with it. Hopefully at least the Xen-ified
kernel source to start from will be provided.
The good news is that you don't need to configure DRBD at all.
Ganeti will do it for you for every instance you set up. If
you have the DRBD utils installed and the module in your
kernel, you're fine. Please check that your system is
configured to load the module at every boot, and that it
passes the following option to the module:
<computeroutput>minor_count=64</computeroutput> (this will
allow you to use up to 32 instances per node).
<formalpara><title>Debian</title>
You can just install (build) the DRBD 0.7 module with the
following commands (make sure you are running the Xen
kernel, so that the module is built against it):
apt-get install drbd0.7-module-source drbd0.7-utils
m-a update
m-a a-i drbd0.7
echo drbd minor_count=64 >> /etc/modules
modprobe drbd minor_count=64
It is also recommended that you comment out the default
resources in the <filename>/etc/drbd.conf</filename> file, so
that the init script doesn't try to configure any drbd
devices. You can do this by prefixing all
<literal>resource</literal> lines in the file with the keyword
<literal>skip</literal>, like this:
skip resource r0 {
  ...
}
<title>Other required software</title>
<para>Besides Xen and DRBD, you will need to install the
following (on all nodes):</para>
<simpara><ulink url="http://sourceware.org/lvm2/">LVM
version 2</ulink></simpara>
<simpara><ulink
url="http://www.openssl.org/">OpenSSL</ulink></simpara>
<simpara><ulink
url="http://www.openssh.com/portable.html">OpenSSH</ulink></simpara>
<simpara><ulink url="http://bridge.sourceforge.net/">Bridge
utilities</ulink></simpara>
<simpara><ulink
url="http://fping.sourceforge.net/">fping</ulink></simpara>
<simpara><ulink
url="http://developer.osdl.org/dev/iproute2">iproute2</ulink></simpara>
<simpara><ulink
url="ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz">arping</ulink>
(part of the iputils package)</simpara>
<simpara><ulink
url="http://www.kernel.org/pub/linux/utils/raid/mdadm/">mdadm</ulink>
(Linux software RAID tools)</simpara>
<simpara><ulink url="http://www.python.org">Python 2.4</ulink></simpara>
<simpara><ulink url="http://twistedmatrix.com/">Python
Twisted library</ulink> - the core library is
enough</simpara>
<simpara><ulink
url="http://pyopenssl.sourceforge.net/">Python OpenSSL
bindings</ulink></simpara>
These programs are supplied as part of most Linux
distributions, so usually they can be installed via apt or
similar methods. Also many of them will already be installed
on a standard machine.
<formalpara><title>Debian</title>
<para>You can use this command line to install all of them:</para>
# apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
  fping python2.4 python-twisted-core python-pyopenssl openssl \
  mdadm
<title>Setting up the environment for Ganeti</title>
<title>Configuring the network</title>
<para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>
Ganeti relies on Xen running in "bridge mode", which means the
instances' network interfaces will be attached to a software bridge
running in dom0. Xen by default creates such a bridge at startup, but
your distribution might have a different way to do things.
Beware that the default name Ganeti uses is
<hardware>xen-br0</hardware> (which was used in Xen 2.0)
while Xen 3.0 uses <hardware>xenbr0</hardware> by
default. The default bridge your Ganeti cluster will use for new
instances can be specified at cluster initialization time.
<formalpara><title>Debian</title>
The recommended Debian way to configure the Xen bridge is to
edit your <filename>/etc/network/interfaces</filename> file
and substitute your normal Ethernet stanza with the
following snippet:
auto xen-br0
iface xen-br0 inet static
        address <replaceable>YOUR_IP_ADDRESS</replaceable>
        netmask <replaceable>YOUR_NETMASK</replaceable>
        network <replaceable>YOUR_NETWORK</replaceable>
        broadcast <replaceable>YOUR_BROADCAST_ADDRESS</replaceable>
        gateway <replaceable>YOUR_GATEWAY</replaceable>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
The following commands need to be executed on the local console:
ifdown eth0
ifup xen-br0
To check if the bridge is set up, use <command>ip</command>
and <command>brctl show</command>:
# ip a show xen-br0
9: xen-br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue
    link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
    inet6 fe80::220:fcff:fe1e:d55d/64 scope link
       valid_lft forever preferred_lft forever
# brctl show
bridge name     bridge id               STP enabled     interfaces
xen-br0         8000.0020fc1ed55d       no              eth0
<title>Configuring LVM</title>
<para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>
<simpara>The volume group is required to be at least
<constant>20GiB</constant>.</simpara>
If you haven't configured your LVM volume group at install
time you need to do it before trying to initialize the Ganeti
cluster. This is done by formatting the devices/partitions you
want to use for it and then adding them to the relevant volume
group:
pvcreate /dev/sda3
vgcreate xenvg /dev/sda3
or:
pvcreate /dev/sdb1
pvcreate /dev/sdc1
vgcreate xenvg /dev/sdb1 /dev/sdc1
If you want to add a device later you can do so with the
<citerefentry><refentrytitle>vgextend</refentrytitle>
<manvolnum>8</manvolnum></citerefentry> command:
pvcreate /dev/sdd1
vgextend xenvg /dev/sdd1
<title>Optional</title>
It is recommended to configure LVM not to scan the DRBD
devices for physical volumes. This can be accomplished by
editing <filename>/etc/lvm/lvm.conf</filename> and adding
the <literal>/dev/drbd[0-9]+</literal> regular expression to
the <literal>filter</literal> variable, like this:
filter = [ "r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]
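<para>
As a quick sanity check of the LVM setup, you can list the
volume group and confirm it meets the minimum size requirement.
This is an illustrative sketch; the sizes shown are examples,
not expected values:
</para>
<screen>
# vgs xenvg
  VG    #PV #LV #SN Attr   VSize  VFree
  xenvg   1   0   0 wz--n- 40.00G 40.00G
</screen>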
<title>Installing Ganeti</title>
<para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>
It's now time to install the Ganeti software itself. Download
the source from <ulink
url="http://code.google.com/p/ganeti/"></ulink>.
tar xvzf ganeti-1.2b1.tar.gz
cd ganeti-1.2b1
./configure --localstatedir=/var --sysconfdir=/etc
make
make install
mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export
You also need to copy the file
<filename>doc/examples/ganeti.initd</filename>
from the source archive to
<filename>/etc/init.d/ganeti</filename> and register it with
your distribution's startup scripts, for example in Debian:
<screen>update-rc.d ganeti defaults 20 80</screen>
In order to automatically restart failed instances, you need
to set up a cron job to run the
<computeroutput>ganeti-watcher</computeroutput> program. A
sample cron file is provided in the source at
<filename>doc/examples/ganeti.cron</filename>; you can
copy that (altering the path if needed) to
<filename>/etc/cron.d/ganeti</filename>.
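<para>
For illustration, the resulting cron entry will look something
like the following; the installed path of the watcher depends
on the prefix you passed to
<computeroutput>configure</computeroutput>, so this is a
sketch, not the shipped file:
</para>
<screen>
*/5 * * * * root /usr/local/sbin/ganeti-watcher
</screen>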
<title>Installing the Operating System support packages</title>
<para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>
To be able to install instances you need to have an Operating
System installation script. An example for Debian Etch is
provided on the project web site. Download it from <ulink
url="http://code.google.com/p/ganeti/"></ulink> and follow the
instructions in the <filename>README</filename> file. Here is
the installation procedure:
cd /srv/ganeti/os
tar xvf instance-debian-etch-0.1.tar
mv instance-debian-etch-0.1 debian-etch
In order to use this OS definition, you need to have internet
access from your nodes and have <citerefentry>
<refentrytitle>debootstrap</refentrytitle>
<manvolnum>8</manvolnum> </citerefentry> installed on all the
nodes.
<title>Debian</title>
Use this command on all nodes to install
<computeroutput>debootstrap</computeroutput>:
<screen>apt-get install debootstrap</screen>
Alternatively, you can create your own OS definitions. See the
<citerefentry>
<refentrytitle>ganeti-os-interface</refentrytitle>
<manvolnum>8</manvolnum>
</citerefentry> man page.
<title>Initializing the cluster</title>
<para><emphasis role="strong">Mandatory:</emphasis> only on one
node per cluster.</para>
<para>The last step is to initialize the cluster. After you've repeated
the above process on all of your nodes, choose one as the master, and execute:
gnt-cluster init <replaceable>CLUSTERNAME</replaceable>
The <replaceable>CLUSTERNAME</replaceable> is a hostname,
which must be resolvable (e.g. it must exist in DNS or in
<filename>/etc/hosts</filename>) by all the nodes in the
cluster. For a multi-node cluster, you must choose a name
different from any of the nodes' names. In general the best
choice is to have a unique name for a cluster, even if it
consists of only one machine, as you will be able to expand it
later without any problems.
If the bridge name you are using is not
<literal>xen-br0</literal>, use the <option>-b
<replaceable>BRIDGENAME</replaceable></option> option to
specify the bridge name. In this case, you should also use the
<option>--master-netdev
<replaceable>BRIDGENAME</replaceable></option> option with the
same <replaceable>BRIDGENAME</replaceable> argument.
You can use a different name than <literal>xenvg</literal> for
the volume group (but note that the name must be identical on
all nodes). In this case you need to specify it by passing the
<option>-g <replaceable>VGNAME</replaceable></option> option
to <computeroutput>gnt-cluster init</computeroutput>.
You can also invoke the command with the
<option>--help</option> option in order to see all the
possible options.
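<para>
Putting the options above together, initializing a cluster on a
system with a non-default bridge and volume group might look
like the following; all the names below are examples, not
defaults:
</para>
<screen>
gnt-cluster init -b br0 --master-netdev br0 -g my_vg cluster1.example.com
</screen>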
<title>Joining the nodes to the cluster</title>
<emphasis role="strong">Mandatory:</emphasis> for all the
other nodes.
After you have initialized your cluster you need to join the
other nodes to it. You can do so by executing the following
command on the master node:
gnt-node add <replaceable>NODENAME</replaceable>
<title>Separate replication network</title>
<para><emphasis role="strong">Optional</emphasis></para>
Ganeti uses DRBD to mirror the disks of the virtual instances
between nodes. To use a dedicated network interface for this
(in order to improve performance or to enhance security) you
need to configure an additional interface for each node. Use
the <option>-s</option> option with
<computeroutput>gnt-cluster init</computeroutput> and
<computeroutput>gnt-node add</computeroutput> to specify the
IP address of this secondary interface to use for each
node. Note that if you specified this option at cluster setup
time, you must afterwards use it for every node add operation.
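<para>
For example, assuming a secondary interface on each node sits
on a dedicated <literal>192.168.2.0/24</literal> network (the
addresses here are placeholders for your own), the commands
would be:
</para>
<screen>
gnt-cluster init -s 192.168.2.1 <replaceable>CLUSTERNAME</replaceable>
gnt-node add -s 192.168.2.2 <replaceable>NODENAME</replaceable>
</screen>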
<title>Testing the setup</title>
Execute the <computeroutput>gnt-node list</computeroutput>
command to see all nodes in the cluster:
Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
node1.example.com 197404 197404   2047  1896   125     0     0
<title>Setting up and managing virtual instances</title>
<title>Setting up virtual instances</title>
This step shows how to set up a virtual instance with either
non-mirrored disks (<computeroutput>plain</computeroutput>) or
with network-mirrored disks
(<computeroutput>remote_raid1</computeroutput>). All commands
need to be executed on the Ganeti master node (the one on
which <computeroutput>gnt-cluster init</computeroutput> was
run). Verify that the OS scripts are present on all cluster
nodes with <computeroutput>gnt-os list</computeroutput>.
To create a virtual instance, you need a hostname which is
resolvable (DNS or <filename>/etc/hosts</filename> on all
nodes). The following command will create a non-mirrored
instance for you:
gnt-instance add --node=node1 -o debian-etch -t plain inst1.example.com
* creating instance disks...
adding instance inst1.example.com to cluster config
Waiting for instance inst1.example.com to sync disks.
Instance inst1.example.com's disks are in sync.
creating os for instance inst1.example.com on node node1.example.com
* running the instance OS create scripts...
The above instance will have no network interface enabled.
You can access it over the virtual console with
<computeroutput>gnt-instance console
<literal>inst1</literal></computeroutput>. There is no
password for root. As this is a Debian instance, you can
modify the <filename>/etc/network/interfaces</filename> file
to set up the network interface (<literal>eth0</literal> is the
name of the interface provided to the instance).
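<para>
For instance, a minimal static configuration inside the
instance's <filename>/etc/network/interfaces</filename> could
look like the following; the addresses are placeholders for
your own network:
</para>
<screen>
auto eth0
iface eth0 inet static
        address 192.168.1.11
        netmask 255.255.255.0
        gateway 192.168.1.254
</screen>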
To create a network-mirrored instance, change the argument to
the <option>-t</option> option from <literal>plain</literal>
to <literal>remote_raid1</literal> and specify the node on
which the mirror should reside with the
<option>--secondary-node</option> option, like this:
# gnt-instance add -t remote_raid1 --secondary-node node1 \
  -n node2 -o debian-etch instance2
* creating instance disks...
adding instance instance2 to cluster config
Waiting for instance instance2 to sync disks.
- device sdb:  3.50% done, 304 estimated seconds remaining
- device sdb: 21.70% done, 270 estimated seconds remaining
- device sdb: 39.80% done, 247 estimated seconds remaining
- device sdb: 58.10% done, 121 estimated seconds remaining
- device sdb: 76.30% done, 72 estimated seconds remaining
- device sdb: 94.80% done, 18 estimated seconds remaining
Instance instance2's disks are in sync.
creating os for instance instance2 on node node2.example.com
* running the instance OS create scripts...
* starting instance...
<title>Managing virtual instances</title>
All commands need to be executed on the Ganeti master node.
To access the console of an instance, use
<computeroutput>gnt-instance console
<replaceable>INSTANCENAME</replaceable></computeroutput>.
To shut down an instance, use <computeroutput>gnt-instance
shutdown
<replaceable>INSTANCENAME</replaceable></computeroutput>. To
start up an instance, use <computeroutput>gnt-instance startup
<replaceable>INSTANCENAME</replaceable></computeroutput>.
To fail over an instance to its secondary node (only possible
in a <literal>remote_raid1</literal> setup), use
<computeroutput>gnt-instance failover
<replaceable>INSTANCENAME</replaceable></computeroutput>.
For more instance and cluster administration details, see the
<emphasis>Ganeti administrator's guide</emphasis>.