Ganeti installation tutorial
============================

Documents Ganeti version |version|

.. contents::

Introduction
------------

Ganeti is a cluster virtualization management system based on Xen or
KVM. This document explains how to bootstrap a Ganeti node (Xen
*dom0*), create a running cluster and install virtual instances (Xen
*domU*). You need to repeat most of the steps in this document for
every node you want to install, but of course we recommend creating
some semi-automatic procedure if you plan to deploy Ganeti on a
medium/large scale.

A basic Ganeti terminology glossary is provided in the introductory
section of the *Ganeti administrator's guide*. Please refer to that
document if you are uncertain about the terms we are using.

Ganeti has been developed for Linux and is distribution-agnostic.
This documentation will use Debian Lenny as an example system but the
examples can easily be translated to any other distribution. You are
expected to be familiar with your distribution, its package management
system, and Xen or KVM before trying to use Ganeti.

This document is divided into two main sections:

- Installation of the base system and base components

- Configuration of the environment for Ganeti

Each of these is divided into sub-sections. While a full Ganeti system
will need all of the steps specified, some are not strictly required
for every environment. Which ones they are, and why, is specified in
the corresponding sections.

Installing the base system and base components
----------------------------------------------

Hardware requirements
+++++++++++++++++++++

Any system supported by your Linux distribution is fine. 64-bit
systems are better as they can support more memory.

Any disk drive recognized by Linux (``IDE``/``SCSI``/``SATA``/etc.)
is supported in Ganeti. Note that no shared storage (e.g. ``SAN``) is
needed to get high-availability features (but of course, one can be
used to store the images). It is highly recommended to use more than
one disk drive to improve speed, but Ganeti also works with one disk
per machine.

Installing the base system
++++++++++++++++++++++++++

**Mandatory** on all nodes.

It is advised to start with a clean, minimal install of the operating
system. The only requirement you need to be aware of at this stage is
to partition so that enough space is left for a big (**minimum**
20GiB) LVM volume group, which will then host your instance
filesystems, if you want to use all Ganeti features. The volume group
name Ganeti 2.0 uses (by default) is ``xenvg``.

You can also use file-based storage only, without LVM, but this setup
is not detailed in this document.

While you can use an existing system, please note that the Ganeti
installation is intrusive in terms of changes to the system
configuration, and it's best to use a newly-installed system without
important data on it.

Also, for best results, it's advised that the nodes have hardware and
software configurations that are as similar as possible. This will
make administration much easier.

Hostname issues
~~~~~~~~~~~~~~~

Note that Ganeti requires the hostnames of the systems (i.e. what the
``hostname`` command outputs) to be fully-qualified names, not short
names. In other words, you should use *node1.example.com* as a
hostname and not just *node1*.

.. admonition:: Debian

   Debian Lenny and Etch configure the hostname differently than you
   need it for Ganeti. For example, this is what Etch puts in
   ``/etc/hosts`` in certain situations::

     127.0.0.1       localhost
     127.0.1.1       node1.example.com node1

   but for Ganeti you need to have::

     127.0.0.1       localhost
     192.168.1.1     node1.example.com node1

   replacing ``192.168.1.1`` with your node's address. Also, the file
   ``/etc/hostname`` which configures the hostname of the system
   should contain ``node1.example.com`` and not just ``node1`` (you
   need to run the command ``/etc/init.d/hostname.sh start`` after
   changing the file).
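
You can quickly verify the result (a minimal sanity check; the output
shown assumes the example node name used above)::

  # hostname
  node1.example.com
  # hostname --fqdn
  node1.example.com

Both commands should print the fully-qualified name.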

.. admonition:: Why a fully qualified host name

   Although most distributions use only the short name in the
   ``/etc/hostname`` file, we still think Ganeti nodes should use the
   full name. The reason for this is that calling ``hostname --fqdn``
   requires the resolver library to work, and is a 'guess', via
   heuristics, at what your domain name is. Since Ganeti can be used
   among other things to host DNS servers, we want to depend on them
   as little as possible, and we'd rather have the ``uname()`` syscall
   return the full node name.

   We haven't ever found any breakage in using a full hostname on a
   Linux system, and anyway we recommend having only a minimal
   installation on Ganeti nodes, and using instances (or other
   dedicated machines) to run the rest of your network services. By
   doing this you can change the ``/etc/hostname`` file to contain an
   FQDN without the fear of breaking anything unrelated.

Installing Xen
++++++++++++++

**Mandatory** on all nodes.

While Ganeti is developed with the ability to modularly run on
different virtualization environments in mind, the only two currently
usable on a live system are Xen and KVM. Supported Xen versions are:
3.0.3, 3.0.4 and 3.1.

Please follow your distribution's recommended way to install and set
up Xen, or install Xen from the upstream source, if you wish,
following their manual. For KVM, make sure you have a KVM-enabled
kernel and the KVM tools.

After installing either hypervisor, you need to reboot into your new
system. On some distributions this might involve configuring GRUB
appropriately, whereas others will configure it automatically when you
install the respective kernels.

.. admonition:: Debian

   Under Lenny or Etch you can install the relevant
   ``xen-linux-system`` package, which will pull in both the
   hypervisor and the relevant kernel. Also, if you are installing a
   32-bit Lenny/Etch, you should install the ``libc6-xen`` package
   (run ``apt-get install libc6-xen``).

Xen settings
~~~~~~~~~~~~

It's recommended that dom0 is restricted to a low amount of memory
(512MiB or 1GiB is reasonable) and that memory ballooning is disabled
in the file ``/etc/xen/xend-config.sxp`` by setting the value
``dom0-min-mem`` to 0, like this::

  (dom0-min-mem 0)

For optimum performance when running both CPU and I/O intensive
instances, it's also recommended that the dom0 is restricted to one
CPU only, for example by booting with the kernel parameter ``nosmp``.

It is recommended that you disable Xen's automatic save of virtual
machines at system shutdown and subsequent restore of them at reboot.
To do this, make sure the variable ``XENDOMAINS_SAVE`` in the file
``/etc/default/xendomains`` is set to an empty value.
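
For example, the relevant line in ``/etc/default/xendomains`` would
then look like this (a sketch of the expected setting)::

  XENDOMAINS_SAVE=""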

.. admonition:: Debian

   Besides the ballooning change which you need to set in
   ``/etc/xen/xend-config.sxp``, you need to set the memory and nosmp
   parameters in the file ``/boot/grub/menu.lst``. You need to modify
   the variable ``xenhopt`` to add ``dom0_mem=1024M`` like this::

     ## Xen hypervisor options to use with the default Xen boot option
     # xenhopt=dom0_mem=1024M

   and the ``xenkopt`` needs to include the ``nosmp`` option like
   this::

     ## Xen Linux kernel options to use with the default Xen boot option
     # xenkopt=nosmp

   Any existing parameters can be left in place: it's ok to have
   ``xenkopt=console=tty0 nosmp``, for example. After modifying the
   files, you need to run::

     /sbin/update-grub

If you want to run HVM instances too with Ganeti and want VNC access
to the console of your instances, set the following two entries in
``/etc/xen/xend-config.sxp``::

  (vnc-listen '0.0.0.0') (vncpasswd '')

You need to restart the Xen daemon for these settings to take effect::

  /etc/init.d/xend restart

Selecting the instance kernel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After you have installed Xen, you need to tell Ganeti exactly what
kernel to use for the instances it will create. This is done by
creating a symlink from your actual kernel to
``/boot/vmlinuz-2.6-xenU``, and one from your initrd to
``/boot/initrd-2.6-xenU``. Note that if you don't use an initrd for
the domU kernel, you don't need to create the initrd symlink.

.. admonition:: Debian

   After installation of the ``xen-linux-system`` package, you need to
   run (replace the exact version number with the one you have)::

     cd /boot
     ln -s vmlinuz-2.6.26-1-xen-amd64 vmlinuz-2.6-xenU
     ln -s initrd.img-2.6.26-1-xen-amd64 initrd-2.6-xenU
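
Afterwards you can check that the links resolve to real files (a quick
sanity check; the version numbers are the Debian examples from above)::

  ls -l /boot/vmlinuz-2.6-xenU /boot/initrd-2.6-xenU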

Installing DRBD
+++++++++++++++

Recommended on all nodes: DRBD_ is required if you want to use the
high availability (HA) features of Ganeti, but optional if you don't
require HA or only run Ganeti on single-node clusters. You can upgrade
a non-HA cluster to an HA one later, but you might need to export and
re-import all your instances to take advantage of the new features.

.. _DRBD: http://www.drbd.org/

Supported DRBD versions: 8.0.x. It's recommended to have at least
version 8.0.12.

Now the bad news: unless your distribution already provides it,
installing DRBD might involve recompiling your kernel or otherwise
fiddling with it. Hopefully at least the Xen-ified kernel source to
start from will be provided.

The good news is that you don't need to configure DRBD at all. Ganeti
will do it for you for every instance you set up. If you have the
DRBD utils installed and the module in your kernel you're fine. Please
check that your system is configured to load the module at every boot,
and that it passes the option ``minor_count=255`` to the module. This
will allow you to use up to 128 instances per node (for most clusters
128 should be enough, though).
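
Once the module is loaded, you can check both points at a glance (a
minimal sanity check; the version line is illustrative output)::

  # cat /proc/drbd
  version: 8.0.12 (api:86/proto:86)
  # cat /sys/module/drbd/parameters/minor_count
  255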

.. admonition:: Debian

   On Debian, you can just install (build) the DRBD 8.0.x module with
   the following commands (make sure you are running the Xen kernel)::

     apt-get install drbd8-source drbd8-utils
     m-a update
     m-a a-i drbd8
     echo drbd minor_count=128 >> /etc/modules
     depmod -a
     modprobe drbd minor_count=128

   It is also recommended that you comment out the default resources
   in the ``/etc/drbd.conf`` file, so that the init script doesn't try
   to configure any drbd devices. You can do this by prefixing all
   *resource* lines in the file with the keyword *skip*, like this::

     skip resource r0 {
       ...
     }

     skip resource "r1" {
       ...
     }

Other required software
+++++++++++++++++++++++

Besides Xen and DRBD, you will need to install the following (on all
nodes):

- LVM version 2, `<http://sourceware.org/lvm2/>`_

- OpenSSL, `<http://www.openssl.org/>`_

- OpenSSH, `<http://www.openssh.com/portable.html>`_

- bridge utilities, `<http://bridge.sourceforge.net/>`_

- iproute2, `<http://developer.osdl.org/dev/iproute2>`_

- arping (part of the iputils package),
  `<ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz>`_

- Python version 2.4 or 2.5, `<http://www.python.org>`_

- Python OpenSSL bindings, `<http://pyopenssl.sourceforge.net/>`_

- simplejson Python module, `<http://www.undefined.org/python/#simplejson>`_

- pyparsing Python module, `<http://pyparsing.wikispaces.com/>`_

- pyinotify Python module, `<http://trac.dbzteam.org/pyinotify>`_

These programs are supplied as part of most Linux distributions, so
usually they can be installed via apt or similar methods. Also, many
of them will already be installed on a standard machine.

.. admonition:: Debian

   You can use this command line to install all needed packages::

     # apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
       python python-pyopenssl openssl python-pyparsing \
       python-simplejson python-pyinotify

Setting up the environment for Ganeti
-------------------------------------

Configuring the network
+++++++++++++++++++++++

**Mandatory** on all nodes.

Ganeti relies on Xen running in "bridge mode", which means the
instances' network interfaces will be attached to a software bridge
running in dom0. Xen by default creates such a bridge at startup, but
your distribution might have a different way to do things.

Beware that the default name Ganeti uses is ``xen-br0`` (which was
used in Xen 2.0) while Xen 3.0 uses ``xenbr0`` by default. The default
bridge your Ganeti cluster will use for new instances can be specified
at cluster initialization time.

.. admonition:: Debian

   The recommended way to configure the Xen bridge is to edit your
   ``/etc/network/interfaces`` file and substitute your normal
   Ethernet stanza with the following snippet::

     auto xen-br0
     iface xen-br0 inet static
        address YOUR_IP_ADDRESS
        netmask YOUR_NETMASK
        network YOUR_NETWORK
        broadcast YOUR_BROADCAST_ADDRESS
        gateway YOUR_GATEWAY
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

The following commands need to be executed on the local console to
bring the bridge up::

  ifdown eth0
  ifup xen-br0

To check if the bridge is set up, use the ``ip`` and ``brctl show``
commands::

  # ip a show xen-br0
  9: xen-br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue
      link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
      inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
      inet6 fe80::220:fcff:fe1e:d55d/64 scope link
         valid_lft forever preferred_lft forever

  # brctl show xen-br0
  bridge name     bridge id               STP enabled     interfaces
  xen-br0         8000.0020fc1ed55d       no              eth0

Configuring LVM
+++++++++++++++

**Mandatory** on all nodes.

The volume group is required to be at least 20GiB.

If you haven't configured your LVM volume group at install time, you
need to do it before trying to initialize the Ganeti cluster. This is
done by formatting the devices/partitions you want to use for it and
then adding them to the relevant volume group::

  pvcreate /dev/sda3
  vgcreate xenvg /dev/sda3

or::

  pvcreate /dev/sdb1
  pvcreate /dev/sdc1
  vgcreate xenvg /dev/sdb1 /dev/sdc1

If you want to add a device later you can do so with the *vgextend*
command::

  pvcreate /dev/sdd1
  vgextend xenvg /dev/sdd1

Optional: it is recommended to configure LVM not to scan the DRBD
devices for physical volumes. This can be accomplished by editing
``/etc/lvm/lvm.conf`` and adding the ``/dev/drbd[0-9]+`` regular
expression to the ``filter`` variable, like this::

  filter = ["r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]
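
You can verify the volume group with the ``vgs`` command (illustrative
output; the sizes will reflect your actual disks)::

  # vgs xenvg
  VG    #PV #LV #SN Attr   VSize  VFree
  xenvg   1   0   0 wz--n- 40.00G 40.00G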

Installing Ganeti
+++++++++++++++++

**Mandatory** on all nodes.

It's now time to install the Ganeti software itself. Download the
source from the project page at `<http://code.google.com/p/ganeti/>`_,
and install it (replace 2.0.0 with the latest version)::

  tar xvzf ganeti-2.0.0.tar.gz
  cd ganeti-2.0.0
  ./configure --localstatedir=/var --sysconfdir=/etc
  make
  make install
  mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export

You also need to copy the file ``doc/examples/ganeti.initd`` from the
source archive to ``/etc/init.d/ganeti`` and register it with your
distribution's startup scripts, for example in Debian::

  update-rc.d ganeti defaults 20 80

In order to automatically restart failed instances, you need to set up
a cron job to run the *ganeti-watcher* command. A sample cron file is
provided in the source at ``doc/examples/ganeti.cron``; you can copy
that (adjusting the path if needed) to ``/etc/cron.d/ganeti``.
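
The entry boils down to running the watcher periodically (a sketch
only; the shipped ``doc/examples/ganeti.cron`` file is the
authoritative version, and the path depends on your ``./configure``
prefix)::

  # /etc/cron.d/ganeti: restart failed instances (sketch)
  */5 * * * * root /usr/local/sbin/ganeti-watcher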

Installing the Operating System support packages
++++++++++++++++++++++++++++++++++++++++++++++++

**Mandatory** on all nodes.

To be able to install instances you need to have an Operating System
installation script. An example OS that works under Debian and can
install Debian and Ubuntu instance OSes is provided on the project web
site. Download it from the project page and follow the instructions in
the ``README`` file. Here is the installation procedure (replace 0.7
with the latest version that is compatible with your Ganeti
version)::

  cd /usr/local/src/
  wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-0.7.tar.gz
  tar xzf ganeti-instance-debootstrap-0.7.tar.gz
  cd ganeti-instance-debootstrap-0.7
  ./configure
  make
  make install

In order to use this OS definition, you need to have internet access
from your nodes and have the *debootstrap*, *dump* and *restore*
commands installed on all nodes. Also, if the OS is configured to
partition the instance's disk in
``/etc/default/ganeti-instance-debootstrap``, you will need *kpartx*
installed.

.. admonition:: Debian

   Use this command on all nodes to install the required packages::

     apt-get install debootstrap dump kpartx

Alternatively, you can create your own OS definitions. See the manpage
:manpage:`ganeti-os-interface`.

Initializing the cluster
++++++++++++++++++++++++

**Mandatory** on one node per cluster.

The last step is to initialize the cluster. After you've repeated the
above process on all of your nodes, choose one as the master, and
execute::

  gnt-cluster init <CLUSTERNAME>

The *CLUSTERNAME* is a hostname, which must be resolvable (e.g. it
must exist in DNS or in ``/etc/hosts``) by all the nodes in the
cluster. You must choose a name different from any of the nodes' names
for a multi-node cluster. In general the best choice is to have a
unique name for a cluster, even if it consists of only one machine, as
you will be able to expand it later without any problems. Please note
that the hostname used for this must resolve to an IP address reserved
**exclusively** for this purpose, and cannot be the name of the first
(master) node.

If the bridge name you are using is not ``xen-br0``, use the *-b
<BRIDGENAME>* option to specify the bridge name. In this case, you
should also use the *--master-netdev <BRIDGENAME>* option with the
same BRIDGENAME argument.

You can use a different name than ``xenvg`` for the volume group (but
note that the name must be identical on all nodes). In this case you
need to specify it by passing the *-g <VGNAME>* option to
``gnt-cluster init``.

To set up the cluster as an HVM cluster, use the
``--enabled-hypervisors=xen-hvm`` option to enable the HVM hypervisor
(you can also add ``,xen-pvm`` to enable the PVM one too). You will
also need to create the VNC cluster password file
``/etc/ganeti/vnc-cluster-password`` which contains one line with the
default VNC password for the cluster.

To set up the cluster for KVM-only usage (KVM and Xen cannot be
mixed), pass ``--enabled-hypervisors=kvm`` to the init command.

You can also invoke the command with the ``--help`` option in order to
see all the possibilities.
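
Putting the common options together, an initialization might look like
this (a sketch; the cluster name is an example, and the options are
only needed if your setup differs from the defaults discussed above)::

  gnt-cluster init -b xen-br0 --master-netdev xen-br0 -g xenvg \
    cluster1.example.com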

Joining the nodes to the cluster
++++++++++++++++++++++++++++++++

**Mandatory** for all the other nodes.

After you have initialized your cluster you need to join the other
nodes to it. You can do so by executing the following command on the
master node::

  gnt-node add <NODENAME>

Separate replication network
++++++++++++++++++++++++++++

**Optional**

Ganeti uses DRBD to mirror the disk of the virtual instances between
nodes. To use a dedicated network interface for this (in order to
improve performance or to enhance security) you need to configure an
additional interface for each node. Use the *-s* option with
``gnt-cluster init`` and ``gnt-node add`` to specify the IP address of
this secondary interface to use for each node. Note that if you
specified this option at cluster setup time, you must afterwards use
it for every node add operation.
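
For example (a sketch with made-up addresses on a dedicated
192.168.2.0/24 replication network)::

  gnt-cluster init -s 192.168.2.1 cluster1.example.com
  gnt-node add -s 192.168.2.2 node2.example.com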

Testing the setup
+++++++++++++++++

Execute the ``gnt-node list`` command to see all nodes in the
cluster::

  # gnt-node list
  Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
  node1.example.com 197404 197404   2047  1896   125     0     0

Setting up and managing virtual instances
-----------------------------------------

Setting up virtual instances
++++++++++++++++++++++++++++

This step shows how to set up a virtual instance with either
non-mirrored disks (``plain``) or with network mirrored disks
(``drbd``). All commands need to be executed on the Ganeti master
node (the one on which ``gnt-cluster init`` was run). Verify that the
OS scripts are present on all cluster nodes with ``gnt-os list``.

To create a virtual instance, you need a hostname which is resolvable
(DNS or ``/etc/hosts`` on all nodes). The following command will
create a non-mirrored instance for you::

  gnt-instance add -t plain -s 1G -n node1 -o debootstrap instance1.example.com
  * creating instance disks...
  adding instance instance1.example.com to cluster config
   - INFO: Waiting for instance instance1.example.com to sync disks.
   - INFO: Instance instance1.example.com's disks are in sync.
  creating os for instance instance1.example.com on node node1.example.com
  * running the instance OS create scripts...
  * starting instance...

The above instance will have no network interface enabled. You can
access it over the virtual console with ``gnt-instance console
instance1``. There is no password for root. As this is a Debian
instance, you can modify the ``/etc/network/interfaces`` file to set
up the network interface (``eth0`` is the name of the interface
provided to the instance).

To create a network mirrored instance, change the argument to the *-t*
option from ``plain`` to ``drbd`` and specify the node on which the
mirror should reside with the second value of the *-n* (*--node*)
option, like this (note that the command output includes timestamps,
which have been removed here for clarity)::

  # gnt-instance add -t drbd -s 1G -n node1:node2 -o debootstrap instance2
  * creating instance disks...
  adding instance instance2.example.com to cluster config
   - INFO: Waiting for instance instance2.example.com to sync disks.
   - INFO: - device disk/0: 35.50% done, 11 estimated seconds remaining
   - INFO: - device disk/0: 100.00% done, 0 estimated seconds remaining
   - INFO: Instance instance2.example.com's disks are in sync.
  creating os for instance instance2.example.com on node node1.example.com
  * running the instance OS create scripts...
  * starting instance...
603

    
604
Managing virtual instances
605
++++++++++++++++++++++++++
606

    
607
All commands need to be executed on the Ganeti master node.
608

    
609
To access the console of an instance, run::
610

    
611
  gnt-instance console INSTANCENAME
612

    
613
To shutdown an instance, run::
614

    
615
  gnt-instance shutdown INSTANCENAME
616

    
617
To startup an instance, run::
618

    
619
  gnt-instance startup INSTANCENAME
620

    
621
To failover an instance to its secondary node (only possible with
622
``drbd`` disk templates), run::
623

    
624
  gnt-instance failover INSTANCENAME
625

    
626
For more instance and cluster administration details, see the
627
*Ganeti administrator's guide*.