1 <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
3 <article class="specification">
5 <title>Ganeti installation tutorial</title>
7 <para>Documents Ganeti version 1.2</para>
10 <title>Introduction</title>
13 Ganeti is a cluster virtualization management system based on
14 Xen. This document explains how to bootstrap a Ganeti node (Xen
15 <literal>dom0</literal>), create a running cluster and install
virtual instances (Xen <literal>domU</literal>). You need to
17 repeat most of the steps in this document for every node you
18 want to install, but of course we recommend creating some
semi-automatic procedure if you plan to deploy Ganeti on a large scale.
24 A basic Ganeti terminology glossary is provided in the
25 introductory section of the <emphasis>Ganeti administrator's
26 guide</emphasis>. Please refer to that document if you are
27 uncertain about the terms we are using.
31 Ganeti has been developed for Linux and is
32 distribution-agnostic. This documentation will use Debian Etch
33 as an example system but the examples can easily be translated
34 to any other distribution. You are expected to be familiar with
35 your distribution, its package management system, and Xen before
39 <para>This document is divided into two main sections:
<simpara>Installation of the base system and base components;
<simpara>Configuration of the environment for Ganeti.
52 Each of these is divided into sub-sections. While a full Ganeti system
53 will need all of the steps specified, some are not strictly required for
54 every environment. Which ones they are, and why, is specified in the
55 corresponding sections.
61 <title>Installing the base system and base components</title>
64 <title>Hardware requirements</title>
67 Any system supported by your Linux distribution is fine. 64-bit
68 systems are better as they can support more memory.
72 Any disk drive recognized by Linux
73 (<literal>IDE</literal>/<literal>SCSI</literal>/<literal>SATA</literal>/etc.)
74 is supported in Ganeti. Note that no shared storage (e.g.
75 <literal>SAN</literal>) is needed to get high-availability features. It
76 is highly recommended to use more than one disk drive to improve speed.
77 But Ganeti also works with one disk per machine.
81 <title>Installing the base system</title>
84 <emphasis role="strong">Mandatory</emphasis> on all nodes.
88 It is advised to start with a clean, minimal install of the
operating system. The only requirement you need to be aware of
at this stage is to partition so as to leave enough space for a big
91 (<emphasis role="strong">minimum
92 <constant>20GiB</constant></emphasis>) LVM volume group which
93 will then host your instance filesystems. The volume group
94 name Ganeti 1.2 uses (by default) is
95 <emphasis>xenvg</emphasis>.
99 While you can use an existing system, please note that the
100 Ganeti installation is intrusive in terms of changes to the
101 system configuration, and it's best to use a newly-installed
102 system without important data on it.
Also, for best results, it's advised that the nodes have
hardware and software configurations as similar as
possible. This will make administration much easier.
112 <title>Hostname issues</title>
Note that Ganeti requires the hostnames of the systems
(i.e. what the <computeroutput>hostname</computeroutput>
command outputs) to be fully-qualified names, not short
names. In other words, you should use
<literal>node1.example.com</literal> as a hostname and not
just <literal>node1</literal>.
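<para>
As a quick sanity check, compare the output of the
<computeroutput>hostname</computeroutput> and
<computeroutput>hostname --fqdn</computeroutput> commands; on a
correctly configured node both print the fully-qualified name
(the name shown here is, of course, only an example):
</para>
<screen>
$ hostname
node1.example.com
$ hostname --fqdn
node1.example.com
</screen>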
123 <title>Debian</title>
125 Note that Debian Etch configures the hostname differently
126 than you need it for Ganeti. For example, this is what
Etch puts in <filename>/etc/hosts</filename> in certain situations:
131 127.0.1.1 node1.example.com node1
134 but for Ganeti you need to have:
137 192.168.1.1 node1.example.com node1
139 replacing <literal>192.168.1.1</literal> with your node's
140 address. Also, the file <filename>/etc/hostname</filename>
141 which configures the hostname of the system should contain
142 <literal>node1.example.com</literal> and not just
143 <literal>node1</literal> (you need to run the command
144 <computeroutput>/etc/init.d/hostname.sh
145 start</computeroutput> after changing the file).
153 <title>Installing Xen</title>
156 <emphasis role="strong">Mandatory</emphasis> on all nodes.
While Ganeti is developed with the ability to run modularly on
different virtualization environments in mind, the only one
currently usable on a live system is <ulink
url="http://xen.xensource.com/">Xen</ulink>. Supported
164 versions are: <simplelist type="inline">
165 <member><literal>3.0.3</literal></member>
166 <member><literal>3.0.4</literal></member>
167 <member><literal>3.1</literal></member> </simplelist>.
171 Please follow your distribution's recommended way to install
172 and set up Xen, or install Xen from the upstream source, if
173 you wish, following their manual.
177 After installing Xen you need to reboot into your Xen-ified
178 dom0 system. On some distributions this might involve
179 configuring GRUB appropriately, whereas others will configure
180 it automatically when you install Xen from a package.
183 <formalpara><title>Debian</title>
185 Under Debian Etch or Sarge+backports you can install the
186 relevant <literal>xen-linux-system</literal> package, which
187 will pull in both the hypervisor and the relevant
188 kernel. Also, if you are installing a 32-bit Etch, you should
189 install the <computeroutput>libc6-xen</computeroutput> package
190 (run <computeroutput>apt-get install
191 libc6-xen</computeroutput>).
196 <title>Xen settings</title>
199 It's recommended that dom0 is restricted to a low amount of
200 memory (<constant>512MiB</constant> is reasonable) and that
201 memory ballooning is disabled in the file
202 <filename>/etc/xen/xend-config.sxp</filename> by setting the
203 value <literal>dom0-min-mem</literal> to
204 <constant>0</constant>, like this:
205 <computeroutput>(dom0-min-mem 0)</computeroutput>
209 For optimum performance when running both CPU and I/O
210 intensive instances, it's also recommended that the dom0 is
211 restricted to one CPU only, for example by booting with the
212 kernel parameter <literal>nosmp</literal>.
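<para>
After rebooting you can verify both restrictions with
<computeroutput>xm list</computeroutput>; output along the
following lines (the values shown are illustrative) indicates that
dom0 holds 512MiB and a single VCPU:
</para>
<screen>
# xm list
Name          ID Mem(MiB) VCPUs State  Time(s)
Domain-0       0      512     1 r-----    123.4
</screen>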
It is recommended that you disable Xen's automatic save of virtual
machines at system shutdown and their subsequent restore at reboot.
To do this, make sure the variable
219 <literal>XENDOMAINS_SAVE</literal> in the file
220 <literal>/etc/default/xendomains</literal> is set to an empty value.
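<para>
A sketch of the relevant part of
<filename>/etc/default/xendomains</filename> after this change (the
<literal>XENDOMAINS_RESTORE</literal> line is based on the usual
contents of that file and may differ on your system):
</para>
<screen>
XENDOMAINS_SAVE=""
XENDOMAINS_RESTORE=false
</screen>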
224 <title>Debian</title>
226 Besides the ballooning change which you need to set in
227 <filename>/etc/xen/xend-config.sxp</filename>, you need to
228 set the memory and nosmp parameters in the file
229 <filename>/boot/grub/menu.lst</filename>. You need to
230 modify the variable <literal>xenhopt</literal> to add
231 <userinput>dom0_mem=512M</userinput> like this:
233 ## Xen hypervisor options to use with the default Xen boot option
234 # xenhopt=dom0_mem=512M
236 and the <literal>xenkopt</literal> needs to include the
237 <userinput>nosmp</userinput> option like this:
## Xen Linux kernel options to use with the default Xen boot option
# xenkopt= nosmp
Any existing parameters can be left in place: it's OK to
have <computeroutput>xenkopt=console=tty0
nosmp</computeroutput>, for example. After modifying the
files, you need to run <computeroutput>/sbin/update-grub</computeroutput>
to regenerate the boot loader configuration.
256 <title>Selecting the instance kernel</title>
259 After you have installed Xen, you need to tell Ganeti
260 exactly what kernel to use for the instances it will
261 create. This is done by creating a
262 <emphasis>symlink</emphasis> from your actual kernel to
<filename>/boot/vmlinuz-2.6-xenU</filename>, and one from your initrd to
<filename>/boot/initrd-2.6-xenU</filename>. Note that if you
266 don't use an initrd for the <literal>domU</literal> kernel,
267 you don't need to create the initrd symlink.
271 <title>Debian</title>
273 After installation of the
274 <literal>xen-linux-system</literal> package, you need to
run (replace the exact version number with the one you have
installed):

cd /boot
ln -s vmlinuz-2.6.18-5-xen-686 vmlinuz-2.6-xenU
ln -s initrd.img-2.6.18-5-xen-686 initrd-2.6-xenU
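<para>
You can verify the result with <computeroutput>ls -l</computeroutput>;
the output should resemble the following (kernel versions and dates
will differ on your system):
</para>
<screen>
# ls -l /boot/*-2.6-xenU
lrwxrwxrwx 1 root root 27 ... /boot/initrd-2.6-xenU -> initrd.img-2.6.18-5-xen-686
lrwxrwxrwx 1 root root 24 ... /boot/vmlinuz-2.6-xenU -> vmlinuz-2.6.18-5-xen-686
</screen>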
289 <title>Installing DRBD</title>
292 Recommended on all nodes: <ulink
293 url="http://www.drbd.org/">DRBD</ulink> is required if you
294 want to use the high availability (HA) features of Ganeti, but
295 optional if you don't require HA or only run Ganeti on
296 single-node clusters. You can upgrade a non-HA cluster to an
297 HA one later, but you might need to export and re-import all
298 your instances to take advantage of the new features.
302 Supported DRBD versions: the <literal>0.7</literal> series
303 <emphasis role="strong">or</emphasis>
304 <literal>8.0.x</literal>. It's recommended to have at least
305 version <literal>0.7.24</literal> if you use
<command>udev</command> since older versions have a bug
related to device discovery which can be triggered in cases of
hard drive failure.
Now the bad news: unless your distribution already provides it,
installing DRBD might involve recompiling your kernel or at
least fiddling with it. Hopefully at least the Xen-ified
kernel source to start from will be provided.
319 The good news is that you don't need to configure DRBD at all.
320 Ganeti will do it for you for every instance you set up. If
321 you have the DRBD utils installed and the module in your
kernel you're fine. Please check that your system is
configured to load the module at every boot, and that it
passes the following option to the module: for
<literal>0.7.x</literal>,
<computeroutput>minor_count=64</computeroutput> (this will
allow you to use up to 32 instances per node); for
<literal>8.0.x</literal>, you can use up to
<constant>255</constant>
(i.e. <computeroutput>minor_count=255</computeroutput>, but
for most clusters <constant>128</constant> should be enough).
334 <formalpara><title>Debian</title>
You can just install (build) the DRBD 0.7 module with the
following commands (make sure you are running the Xen kernel
when you do this):

apt-get install drbd0.7-module-source drbd0.7-utils
m-a update
m-a a-i drbd0.7
echo drbd minor_count=64 >> /etc/modules
modprobe drbd minor_count=64
<para>or for using DRBD <literal>8.x</literal> from the etch
backports repository:

apt-get install -t etch-backports drbd8-module-source drbd8-utils
m-a update
m-a a-i drbd8
echo drbd minor_count=128 >> /etc/modules
modprobe drbd minor_count=128
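<para>
In either case, once the module is loaded you can check that it works
and see its version by reading <filename>/proc/drbd</filename>; the
first line of output will look something like this (version numbers
will vary):
</para>
<screen>
# cat /proc/drbd
version: 0.7.21 (api:79/proto:74)
</screen>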
It is also recommended that you comment out the default
resources in the <filename>/etc/drbd.conf</filename> file, so
that the init script doesn't try to configure any DRBD
devices. You can do this by prefixing all
<literal>resource</literal> lines in the file with the keyword
<literal>skip</literal>, like this:
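<para>
A sketch of such a commented-out stanza (the resource name and its
contents depend on the example file shipped with your DRBD version):
</para>
<screen>
skip resource r0 {
  protocol C;
  ...
}
</screen>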
381 <title>Other required software</title>
383 <para>Besides Xen and DRBD, you will need to install the
384 following (on all nodes):</para>
388 <simpara><ulink url="http://sourceware.org/lvm2/">LVM
389 version 2</ulink></simpara>
393 url="http://www.openssl.org/">OpenSSL</ulink></simpara>
397 url="http://www.openssh.com/portable.html">OpenSSH</ulink></simpara>
400 <simpara><ulink url="http://bridge.sourceforge.net/">Bridge
401 utilities</ulink></simpara>
405 url="http://developer.osdl.org/dev/iproute2">iproute2</ulink></simpara>
409 url="ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz">arping</ulink>
410 (part of iputils package)</simpara>
414 url="http://www.kernel.org/pub/linux/utils/raid/mdadm/">mdadm</ulink>
415 (Linux Software Raid tools)</simpara>
418 <simpara><ulink url="http://www.python.org">Python 2.4</ulink></simpara>
421 <simpara><ulink url="http://twistedmatrix.com/">Python
Twisted library</ulink> - the core library is enough
427 url="http://pyopenssl.sourceforge.net/">Python OpenSSL
428 bindings</ulink></simpara>
432 url="http://www.undefined.org/python/#simplejson">simplejson Python
433 module</ulink></simpara>
437 url="http://pyparsing.wikispaces.com/">pyparsing Python
438 module</ulink></simpara>
443 These programs are supplied as part of most Linux
444 distributions, so usually they can be installed via apt or
445 similar methods. Also many of them will already be installed
446 on a standard machine.
450 <formalpara><title>Debian</title>
452 <para>You can use this command line to install all of them:</para>
456 # apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
457 python2.4 python-twisted-core python-pyopenssl openssl \
458 mdadm python-pyparsing python-simplejson
467 <title>Setting up the environment for Ganeti</title>
470 <title>Configuring the network</title>
472 <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>
Ganeti relies on Xen running in "bridge mode", which means the
instances' network interfaces will be attached to a software bridge
477 running in dom0. Xen by default creates such a bridge at startup, but
478 your distribution might have a different way to do things.
482 Beware that the default name Ganeti uses is
483 <hardware>xen-br0</hardware> (which was used in Xen 2.0)
484 while Xen 3.0 uses <hardware>xenbr0</hardware> by
485 default. The default bridge your Ganeti cluster will use for new
486 instances can be specified at cluster initialization time.
489 <formalpara><title>Debian</title>
491 The recommended Debian way to configure the Xen bridge is to
492 edit your <filename>/etc/network/interfaces</filename> file
and substitute your normal Ethernet stanza with the following
snippet:

auto xen-br0
iface xen-br0 inet static
499 address <replaceable>YOUR_IP_ADDRESS</replaceable>
500 netmask <replaceable>YOUR_NETMASK</replaceable>
501 network <replaceable>YOUR_NETWORK</replaceable>
502 broadcast <replaceable>YOUR_BROADCAST_ADDRESS</replaceable>
503 gateway <replaceable>YOUR_GATEWAY</replaceable>
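<para>
Depending on your setup you will also want the stanza to contain the
options that attach your physical interface to the bridge; with the
Debian bridge-utils integration these are typically the following
(replace <literal>eth0</literal> with the name of your physical
interface):
</para>
<screen>
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
</screen>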
The following commands need to be executed on the local console
to activate the bridge (take care not to do this over a remote
connection, as connectivity will be interrupted):

ifdown eth0
ifup xen-br0
To check if the bridge is set up, use <command>ip</command>
and <command>brctl show</command>:
526 9: xen-br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue
527 link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
528 inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
529 inet6 fe80::220:fcff:fe1e:d55d/64 scope link
530 valid_lft forever preferred_lft forever
533 bridge name bridge id STP enabled interfaces
534 xen-br0 8000.0020fc1ed55d no eth0
541 <title>Configuring LVM</title>
544 <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>
547 <simpara>The volume group is required to be at least
548 <constant>20GiB</constant>.</simpara>
If you haven't configured your LVM volume group at install
time, you need to do it before trying to initialize the Ganeti
cluster. This is done by formatting the devices/partitions you
want to use for it and then adding them to the relevant volume
group:

vgcreate xenvg /dev/sda3
or, if you want to use multiple disks:

vgcreate xenvg /dev/sdb1 /dev/sdc1
570 If you want to add a device later you can do so with the
571 <citerefentry><refentrytitle>vgextend</refentrytitle>
572 <manvolnum>8</manvolnum></citerefentry> command:
577 vgextend xenvg /dev/sdd1
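<para>
Note that depending on your LVM version you may need to initialize a
device as a physical volume before adding it to the volume group, for
example:
</para>
<screen>
pvcreate /dev/sdd1
vgextend xenvg /dev/sdd1
</screen>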
581 <title>Optional</title>
583 It is recommended to configure LVM not to scan the DRBD
584 devices for physical volumes. This can be accomplished by
585 editing <filename>/etc/lvm/lvm.conf</filename> and adding
586 the <literal>/dev/drbd[0-9]+</literal> regular expression to
587 the <literal>filter</literal> variable, like this:
589 filter = [ "r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]
597 <title>Installing Ganeti</title>
599 <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>
602 It's now time to install the Ganeti software itself. Download
603 the source from <ulink
604 url="http://code.google.com/p/ganeti/"></ulink>.
tar xvzf ganeti-1.2b3.tar.gz
cd ganeti-1.2b3
./configure --localstatedir=/var --sysconfdir=/etc
make
make install
mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export
617 You also need to copy the file
618 <filename>doc/examples/ganeti.initd</filename>
619 from the source archive to
620 <filename>/etc/init.d/ganeti</filename> and register it with
621 your distribution's startup scripts, for example in Debian:
623 <screen>update-rc.d ganeti defaults 20 80</screen>
In order to automatically restart failed instances, you need
to set up a cron job to run the
<computeroutput>ganeti-watcher</computeroutput> program. A
sample cron file is provided in the source at
<filename>doc/examples/ganeti.cron</filename>; you can
copy it (possibly adjusting the path) to
<filename>/etc/cron.d/ganeti</filename>.
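<para>
For reference, the cron entry only needs to invoke the watcher
periodically; a hypothetical entry (the five-minute interval and the
installation path are assumptions, check the shipped example file for
the exact contents) would look like:
</para>
<screen>
*/5 * * * * root /usr/sbin/ganeti-watcher
</screen>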
638 <title>Installing the Operating System support packages</title>
640 <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>
643 To be able to install instances you need to have an Operating
644 System installation script. An example for Debian Etch is
645 provided on the project web site. Download it from <ulink
646 url="http://code.google.com/p/ganeti/"></ulink> and follow the
647 instructions in the <filename>README</filename> file. Here is
the installation procedure (replace <constant>0.3</constant>
with the latest version that is compatible with your Ganeti
version):

cd /srv/ganeti/os
tar xvf ganeti-instance-debian-etch-0.3.tar
mv ganeti-instance-debian-etch-0.3 debian-etch
660 In order to use this OS definition, you need to have internet
661 access from your nodes and have the <citerefentry>
662 <refentrytitle>debootstrap</refentrytitle>
663 <manvolnum>8</manvolnum></citerefentry>, <citerefentry>
664 <refentrytitle>dump</refentrytitle><manvolnum>8</manvolnum>
665 </citerefentry> and <citerefentry>
<refentrytitle>restore</refentrytitle>
<manvolnum>8</manvolnum> </citerefentry> commands installed on
your nodes.
671 <title>Debian</title>
Use this command on all nodes to install the required
packages:
676 <screen>apt-get install debootstrap dump</screen>
Alternatively, you can create your own OS definitions. See the
<citerefentry>
<refentrytitle>ganeti-os-interface</refentrytitle>
<manvolnum>8</manvolnum>
</citerefentry> manpage.
692 <title>Initializing the cluster</title>
694 <para><emphasis role="strong">Mandatory:</emphasis> only on one
695 node per cluster.</para>
698 <para>The last step is to initialize the cluster. After you've repeated
699 the above process on all of your nodes, choose one as the master, and execute:
703 gnt-cluster init <replaceable>CLUSTERNAME</replaceable>
707 The <replaceable>CLUSTERNAME</replaceable> is a hostname,
708 which must be resolvable (e.g. it must exist in DNS or in
709 <filename>/etc/hosts</filename>) by all the nodes in the
cluster. You must choose a name different from any of the
node names for a multi-node cluster. In general the best
712 choice is to have a unique name for a cluster, even if it
713 consists of only one machine, as you will be able to expand it
714 later without any problems.
718 If the bridge name you are using is not
719 <literal>xen-br0</literal>, use the <option>-b
720 <replaceable>BRIDGENAME</replaceable></option> option to
721 specify the bridge name. In this case, you should also use the
722 <option>--master-netdev
723 <replaceable>BRIDGENAME</replaceable></option> option with the
724 same <replaceable>BRIDGENAME</replaceable> argument.
728 You can use a different name than <literal>xenvg</literal> for
729 the volume group (but note that the name must be identical on
730 all nodes). In this case you need to specify it by passing the
731 <option>-g <replaceable>VGNAME</replaceable></option> option
732 to <computeroutput>gnt-cluster init</computeroutput>.
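<para>
Putting the options above together, a hypothetical initialization of
a cluster using a non-default bridge and volume group could look like
this (all names are examples only):
</para>
<screen>
gnt-cluster init -b br0 --master-netdev br0 -g ganetivg cluster1.example.com
</screen>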
You can also invoke the command with the
<option>--help</option> option in order to see all the
possible options.
744 <title>Joining the nodes to the cluster</title>
<emphasis role="strong">Mandatory:</emphasis> for all the
other nodes.
752 After you have initialized your cluster you need to join the
753 other nodes to it. You can do so by executing the following
754 command on the master node:
757 gnt-node add <replaceable>NODENAME</replaceable>
762 <title>Separate replication network</title>
764 <para><emphasis role="strong">Optional</emphasis></para>
766 Ganeti uses DRBD to mirror the disk of the virtual instances
767 between nodes. To use a dedicated network interface for this
768 (in order to improve performance or to enhance security) you
769 need to configure an additional interface for each node. Use
770 the <option>-s</option> option with
771 <computeroutput>gnt-cluster init</computeroutput> and
772 <computeroutput>gnt-node add</computeroutput> to specify the
773 IP address of this secondary interface to use for each
774 node. Note that if you specified this option at cluster setup
775 time, you must afterwards use it for every node add operation.
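<para>
For example, a hypothetical cluster with a dedicated replication
network on <literal>192.168.2.0/24</literal> could be created and
extended like this (the addresses are examples only):
</para>
<screen>
gnt-cluster init -s 192.168.2.1 cluster1.example.com
gnt-node add -s 192.168.2.2 node2.example.com
</screen>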
780 <title>Testing the setup</title>
783 Execute the <computeroutput>gnt-node list</computeroutput>
784 command to see all nodes in the cluster:
787 Node DTotal DFree MTotal MNode MFree Pinst Sinst
788 node1.example.com 197404 197404 2047 1896 125 0 0
794 <title>Setting up and managing virtual instances</title>
796 <title>Setting up virtual instances</title>
This step shows how to set up a virtual instance with either
799 non-mirrored disks (<computeroutput>plain</computeroutput>) or
800 with network mirrored disks
801 (<computeroutput>remote_raid1</computeroutput> for drbd 0.7
802 and <computeroutput>drbd</computeroutput> for drbd 8.x). All
803 commands need to be executed on the Ganeti master node (the
804 one on which <computeroutput>gnt-cluster init</computeroutput>
805 was run). Verify that the OS scripts are present on all
806 cluster nodes with <computeroutput>gnt-os
807 list</computeroutput>.
To create a virtual instance, you need a hostname which is
resolvable (DNS or <filename>/etc/hosts</filename> on all
nodes). The following command will create a non-mirrored
instance for you:
816 gnt-instance add --node=node1 -o debian-etch -t plain inst1.example.com
817 * creating instance disks...
818 adding instance inst1.example.com to cluster config
819 Waiting for instance inst1.example.com to sync disks.
820 Instance inst1.example.com's disks are in sync.
821 creating os for instance inst1.example.com on node node1.example.com
822 * running the instance OS create scripts...
826 The above instance will have no network interface enabled.
827 You can access it over the virtual console with
828 <computeroutput>gnt-instance console
829 <literal>inst1</literal></computeroutput>. There is no
830 password for root. As this is a Debian instance, you can
modify the <filename>/etc/network/interfaces</filename> file
to set up the network interface (<literal>eth0</literal> is the
name of the interface provided to the instance).
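<para>
A minimal static configuration for the instance's
<filename>/etc/network/interfaces</filename> could look like the
following sketch (the addresses are examples only):
</para>
<screen>
auto eth0
iface eth0 inet static
    address 192.168.1.11
    netmask 255.255.255.0
    gateway 192.168.1.254
</screen>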
837 To create a network mirrored instance, change the argument to
838 the <option>-t</option> option from <literal>plain</literal>
839 to <literal>remote_raid1</literal> (drbd 0.7) or
840 <literal>drbd</literal> (drbd 8.0) and specify the node on
841 which the mirror should reside with the second value of the
842 <option>--node</option> option, like this:
846 # gnt-instance add -t remote_raid1 -n node1:node2 -o debian-etch instance2
847 * creating instance disks...
848 adding instance instance2 to cluster config
Waiting for instance instance2 to sync disks.
850 - device sdb: 3.50% done, 304 estimated seconds remaining
851 - device sdb: 21.70% done, 270 estimated seconds remaining
852 - device sdb: 39.80% done, 247 estimated seconds remaining
853 - device sdb: 58.10% done, 121 estimated seconds remaining
854 - device sdb: 76.30% done, 72 estimated seconds remaining
855 - device sdb: 94.80% done, 18 estimated seconds remaining
856 Instance instance2's disks are in sync.
857 creating os for instance instance2 on node node1.example.com
858 * running the instance OS create scripts...
859 * starting instance...
865 <title>Managing virtual instances</title>
All commands need to be executed on the Ganeti master node.
871 To access the console of an instance, use
872 <computeroutput>gnt-instance console
873 <replaceable>INSTANCENAME</replaceable></computeroutput>.
To shut down an instance, use <computeroutput>gnt-instance
shutdown
<replaceable>INSTANCENAME</replaceable></computeroutput>. To
start up an instance, use <computeroutput>gnt-instance startup
<replaceable>INSTANCENAME</replaceable></computeroutput>.
To fail over an instance to its secondary node (only possible
with the <literal>remote_raid1</literal> or <literal>drbd</literal>
disk templates), use <computeroutput>gnt-instance failover
888 <replaceable>INSTANCENAME</replaceable></computeroutput>.
892 For more instance and cluster administration details, see the
893 <emphasis>Ganeti administrator's guide</emphasis>.