Ganeti installation tutorial
============================

Documents Ganeti version |version|

.. contents::

Introduction
------------

Ganeti is a cluster virtualization management system based on Xen or
KVM. This document explains how to bootstrap a Ganeti node (Xen
*dom0*), create a running cluster and install virtual instances (Xen
*domU*). You need to repeat most of the steps in this document for
every node you want to install, but of course we recommend creating
some semi-automatic procedure if you plan to deploy Ganeti on a
medium/large scale.

A basic Ganeti terminology glossary is provided in the introductory
section of the *Ganeti administrator's guide*. Please refer to that
document if you are uncertain about the terms we are using.

Ganeti has been developed for Linux and is distribution-agnostic.
This documentation will use Debian Lenny as an example system, but the
examples can easily be translated to any other distribution. You are
expected to be familiar with your distribution, its package management
system, and Xen or KVM before trying to use Ganeti.

This document is divided into two main sections:

- Installation of the base system and base components

- Configuration of the environment for Ganeti

Each of these is divided into sub-sections. While a full Ganeti system
will need all of the steps specified, some are not strictly required
for every environment. Which ones they are, and why, is specified in
the corresponding sections.

Installing the base system and base components
----------------------------------------------

Hardware requirements
+++++++++++++++++++++

Any system supported by your Linux distribution is fine. 64-bit
systems are better as they can support more memory.

Any disk drive recognized by Linux (``IDE``/``SCSI``/``SATA``/etc.)
is supported in Ganeti. Note that no shared storage (e.g. ``SAN``) is
needed to get high-availability features (but of course, one can be
used to store the images). It is highly recommended to use more than
one disk drive to improve speed. But Ganeti also works with one disk
per machine.

Installing the base system
++++++++++++++++++++++++++

**Mandatory** on all nodes.

It is advised to start with a clean, minimal install of the operating
system. The only requirement you need to be aware of at this stage is
to partition leaving enough space for a big (**minimum** 20GiB) LVM
volume group which will then host your instance filesystems, if you
want to use all Ganeti features. The volume group name Ganeti 2.0 uses
(by default) is ``xenvg``.

You can also use file-based storage only, without LVM, but this setup
is not detailed in this document.

While you can use an existing system, please note that the Ganeti
installation is intrusive in terms of changes to the system
configuration, and it's best to use a newly-installed system without
important data on it.

Also, for best results, it's advised that the nodes have as much as
possible the same hardware and software configuration. This will make
administration much easier.

Hostname issues
~~~~~~~~~~~~~~~

Note that Ganeti requires the hostnames of the systems (i.e. what the
``hostname`` command outputs) to be fully-qualified names, not short
names. In other words, you should use *node1.example.com* as a
hostname and not just *node1*.

.. admonition:: Debian

   Debian Lenny and Etch configure the hostname differently than you
   need it for Ganeti. For example, this is what Etch puts in
   ``/etc/hosts`` in certain situations::

     127.0.0.1       localhost
     127.0.1.1       node1.example.com node1

   but for Ganeti you need to have::

     127.0.0.1       localhost
     192.168.1.1     node1.example.com node1

   replacing ``192.168.1.1`` with your node's address. Also, the file
   ``/etc/hostname``, which configures the hostname of the system,
   should contain ``node1.example.com`` and not just ``node1`` (you
   need to run the command ``/etc/init.d/hostname.sh start`` after
   changing the file).

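Before proceeding, you can quickly verify the hostname on each node.
This is only a sketch: the check below merely tests that the name
contains a dot, which is what "fully qualified" amounts to here::

  is_fqdn() {
    # a name is fully qualified if it contains at least one dot
    case "$1" in
      *.*) return 0 ;;
      *)   return 1 ;;
    esac
  }
  is_fqdn "$(hostname)" || echo "WARNING: hostname is not fully qualified" >&2
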
Installing Xen
++++++++++++++

**Mandatory** on all nodes.

While Ganeti is developed with the ability to modularly run on
different virtualization environments in mind, the only two currently
usable on a live system are Xen and KVM. Supported Xen versions are
3.0.3, 3.0.4 and 3.1.

Please follow your distribution's recommended way to install and set
up Xen, or install Xen from the upstream source, if you wish,
following their manual. For KVM, make sure you have a KVM-enabled
kernel and the KVM tools.

After installing either hypervisor, you need to reboot into your new
system. On some distributions this might involve configuring GRUB
appropriately, whereas others will configure it automatically when you
install the respective kernels.

.. admonition:: Debian

   Under Lenny or Etch you can install the relevant
   ``xen-linux-system`` package, which will pull in both the
   hypervisor and the relevant kernel. Also, if you are installing a
   32-bit Lenny/Etch, you should install the ``libc6-xen`` package
   (run ``apt-get install libc6-xen``).

Xen settings
~~~~~~~~~~~~

It's recommended that dom0 is restricted to a low amount of memory
(512MiB or 1GiB is reasonable) and that memory ballooning is disabled
in the file ``/etc/xen/xend-config.sxp`` by setting the value
``dom0-min-mem`` to 0, like this::

  (dom0-min-mem 0)

For optimum performance when running both CPU- and I/O-intensive
instances, it's also recommended that the dom0 is restricted to one
CPU only, for example by booting with the kernel parameter ``nosmp``.

It is recommended that you disable Xen's automatic save of virtual
machines at system shutdown and their subsequent restore at reboot.
To achieve this, make sure the variable ``XENDOMAINS_SAVE`` in the
file ``/etc/default/xendomains`` is set to an empty value.

.. admonition:: Debian

   Besides the ballooning change which you need to set in
   ``/etc/xen/xend-config.sxp``, you need to set the memory and nosmp
   parameters in the file ``/boot/grub/menu.lst``. You need to modify
   the variable ``xenhopt`` to add ``dom0_mem=1024M`` like this::

     ## Xen hypervisor options to use with the default Xen boot option
     # xenhopt=dom0_mem=1024M

   and the ``xenkopt`` needs to include the ``nosmp`` option like
   this::

     ## Xen Linux kernel options to use with the default Xen boot option
     # xenkopt=nosmp

   Any existing parameters can be left in place: it's OK to have
   ``xenkopt=console=tty0 nosmp``, for example. After modifying the
   files, you need to run::

     /sbin/update-grub

If you want to run HVM instances too with Ganeti and want VNC access
to the console of your instances, set the following two entries in
``/etc/xen/xend-config.sxp``::

  (vnc-listen '0.0.0.0') (vncpasswd '')

You need to restart the Xen daemon for these settings to take effect::

  /etc/init.d/xend restart

Selecting the instance kernel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After you have installed Xen, you need to tell Ganeti exactly what
kernel to use for the instances it will create. This is done by
creating a symlink from your actual kernel to
``/boot/vmlinuz-2.6-xenU``, and one from your initrd to
``/boot/initrd-2.6-xenU``. Note that if you don't use an initrd for
the domU kernel, you don't need to create the initrd symlink.

.. admonition:: Debian

   After installation of the ``xen-linux-system`` package, you need to
   run (replace the exact version number with the one you have)::

     cd /boot
     ln -s vmlinuz-2.6.26-1-xen-amd64 vmlinuz-2.6-xenU
     ln -s initrd.img-2.6.26-1-xen-amd64 initrd-2.6-xenU

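To verify that the symlinks resolve to existing files, you can
dereference them with ``ls -lL`` (a dangling link will produce an
error)::

  ls -lL /boot/vmlinuz-2.6-xenU /boot/initrd-2.6-xenU
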
Installing DRBD
+++++++++++++++

Recommended on all nodes: DRBD_ is required if you want to use the
high availability (HA) features of Ganeti, but optional if you don't
require HA or only run Ganeti on single-node clusters. You can upgrade
a non-HA cluster to an HA one later, but you might need to export and
re-import all your instances to take advantage of the new features.

.. _DRBD: http://www.drbd.org/

Supported DRBD versions: 8.0.x. It's recommended to have at least
version 8.0.12.

Now the bad news: unless your distribution already provides it,
installing DRBD might involve recompiling your kernel or otherwise
fiddling with it. Hopefully at least the Xen-ified kernel source to
start from will be provided.

The good news is that you don't need to configure DRBD at all. Ganeti
will do it for you for every instance you set up. If you have the
DRBD utils installed and the module in your kernel you're fine. Please
check that your system is configured to load the module at every boot,
and that it passes the following option to the module:
``minor_count=255``. This will allow you to use up to 128 instances
per node (for most clusters 128 should be enough, though).

.. admonition:: Debian

   On Debian, you can just install (build) the DRBD 8.0.x module with
   the following commands (make sure you are running the Xen kernel)::

     apt-get install drbd8-source drbd8-utils
     m-a update
     m-a a-i drbd8
     echo drbd minor_count=128 >> /etc/modules
     depmod -a
     modprobe drbd minor_count=128

   It is also recommended that you comment out the default resources
   in the ``/etc/drbd.conf`` file, so that the init script doesn't try
   to configure any drbd devices. You can do this by prefixing all
   *resource* lines in the file with the keyword *skip*, like this::

     skip resource r0 {
       ...
     }

     skip resource "r1" {
       ...
     }

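After a reboot you can verify that the module was loaded with the
intended parameter. This sketch assumes the module exposes its
parameters under sysfs, which recent kernels do::

  lsmod | grep '^drbd'
  cat /sys/module/drbd/parameters/minor_count
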
Other required software
+++++++++++++++++++++++

Besides Xen and DRBD, you will need to install the following (on all
nodes):

- LVM version 2, `<http://sourceware.org/lvm2/>`_

- OpenSSL, `<http://www.openssl.org/>`_

- OpenSSH, `<http://www.openssh.com/portable.html>`_

- bridge utilities, `<http://bridge.sourceforge.net/>`_

- iproute2, `<http://developer.osdl.org/dev/iproute2>`_

- arping (part of the iputils package),
  `<ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz>`_

- Python version 2.4 or 2.5, `<http://www.python.org>`_

- Python OpenSSL bindings, `<http://pyopenssl.sourceforge.net/>`_

- simplejson Python module, `<http://www.undefined.org/python/#simplejson>`_

- pyparsing Python module, `<http://pyparsing.wikispaces.com/>`_

These programs are supplied as part of most Linux distributions, so
usually they can be installed via apt or similar methods. Also, many
of them will already be installed on a standard machine.

.. admonition:: Debian

   You can use this command line to install all needed packages::

     # apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
     python python-pyopenssl openssl python-pyparsing python-simplejson

Setting up the environment for Ganeti
-------------------------------------

Configuring the network
+++++++++++++++++++++++

**Mandatory** on all nodes.

Ganeti relies on Xen running in "bridge mode", which means the
instances' network interfaces will be attached to a software bridge
running in dom0. Xen by default creates such a bridge at startup, but
your distribution might have a different way to do things.

Beware that the default name Ganeti uses is ``xen-br0`` (which was
used in Xen 2.0) while Xen 3.0 uses ``xenbr0`` by default. The default
bridge your Ganeti cluster will use for new instances can be specified
at cluster initialization time.

.. admonition:: Debian

   The recommended way to configure the Xen bridge is to edit your
   ``/etc/network/interfaces`` file and substitute your normal
   Ethernet stanza with the following snippet::

     auto xen-br0
     iface xen-br0 inet static
        address YOUR_IP_ADDRESS
        netmask YOUR_NETMASK
        network YOUR_NETWORK
        broadcast YOUR_BROADCAST_ADDRESS
        gateway YOUR_GATEWAY
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

The following commands need to be executed on the local console::

  ifdown eth0
  ifup xen-br0

To check if the bridge is set up, use the ``ip`` and ``brctl show``
commands::

  # ip a show xen-br0
  9: xen-br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue
      link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
      inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
      inet6 fe80::220:fcff:fe1e:d55d/64 scope link
         valid_lft forever preferred_lft forever

  # brctl show xen-br0
  bridge name     bridge id               STP enabled     interfaces
  xen-br0         8000.0020fc1ed55d       no              eth0

Configuring LVM
+++++++++++++++

**Mandatory** on all nodes.

The volume group is required to be at least 20GiB.

If you haven't configured your LVM volume group at install time, you
need to do it before trying to initialize the Ganeti cluster. This is
done by formatting the devices/partitions you want to use for it and
then adding them to the relevant volume group::

  pvcreate /dev/sda3
  vgcreate xenvg /dev/sda3

or::

  pvcreate /dev/sdb1
  pvcreate /dev/sdc1
  vgcreate xenvg /dev/sdb1 /dev/sdc1

If you want to add a device later you can do so with the *vgextend*
command::

  pvcreate /dev/sdd1
  vgextend xenvg /dev/sdd1

Optional: it is recommended to configure LVM not to scan the DRBD
devices for physical volumes. This can be accomplished by editing
``/etc/lvm/lvm.conf`` and adding the ``/dev/drbd[0-9]+`` regular
expression to the ``filter`` variable, like this::

  filter = ["r|/dev/cdrom|", "r|/dev/drbd[0-9]+|"]

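You can confirm that the volume group exists and check its size
against the 20GiB minimum with the *vgs* command (``xenvg`` is the
default name used above)::

  vgs -o vg_name,vg_size,vg_free xenvg
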
Installing Ganeti
+++++++++++++++++

**Mandatory** on all nodes.

It's now time to install the Ganeti software itself. Download the
source from the project page at `<http://code.google.com/p/ganeti/>`_,
and install it (replace 2.0.0 with the latest version)::

  tar xvzf ganeti-2.0.0.tar.gz
  cd ganeti-2.0.0
  ./configure --localstatedir=/var --sysconfdir=/etc
  make
  make install
  mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export

You also need to copy the file ``doc/examples/ganeti.initd`` from the
source archive to ``/etc/init.d/ganeti`` and register it with your
distribution's startup scripts, for example in Debian::

  update-rc.d ganeti defaults 20 80

In order to automatically restart failed instances, you need to set up
a cron job to run the *ganeti-watcher* command. A sample cron file is
provided in the source at ``doc/examples/ganeti.cron``; you can copy
that (adjusting the path if necessary) to ``/etc/cron.d/ganeti``.

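For illustration only, the resulting cron entry might look like the
following; the path to *ganeti-watcher* depends on the prefix you
passed to ``./configure`` (``/usr/local/sbin`` is an assumption here),
so prefer the shipped sample file::

  # /etc/cron.d/ganeti: restart failed instances
  */5 * * * * root /usr/local/sbin/ganeti-watcher
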
Installing the Operating System support packages
++++++++++++++++++++++++++++++++++++++++++++++++

**Mandatory** on all nodes.

To be able to install instances you need to have an Operating System
installation script. An example OS that works under Debian and can
install Debian and Ubuntu instance OSes is provided on the project web
site. Download it from the project page and follow the instructions
in the ``README`` file. Here is the installation procedure (replace
0.7 with the latest version that is compatible with your Ganeti
version)::

  cd /usr/local/src/
  wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-0.7.tar.gz
  tar xzf ganeti-instance-debootstrap-0.7.tar.gz
  cd ganeti-instance-debootstrap-0.7
  ./configure
  make
  make install

In order to use this OS definition, you need to have internet access
from your nodes and have the *debootstrap*, *dump* and *restore*
commands installed on all nodes. Also, if the OS is configured to
partition the instance's disk in
``/etc/default/ganeti-instance-debootstrap``, you will need *kpartx*
installed.

.. admonition:: Debian

   Use this command on all nodes to install the required packages::

     apt-get install debootstrap dump kpartx

Alternatively, you can create your own OS definitions. See the manpage
:manpage:`ganeti-os-interface`.

Initializing the cluster
++++++++++++++++++++++++

**Mandatory** on one node per cluster.

The last step is to initialize the cluster. After you've repeated the
above process on all of your nodes, choose one as the master, and
execute::

  gnt-cluster init <CLUSTERNAME>

The *CLUSTERNAME* is a hostname, which must be resolvable (e.g. it
must exist in DNS or in ``/etc/hosts``) by all the nodes in the
cluster. You must choose a name different from any of the nodes'
names for a multi-node cluster. In general the best choice is to have
a unique name for a cluster, even if it consists of only one machine,
as you will be able to expand it later without any problems. Please
note that the hostname used for this must resolve to an IP address
reserved **exclusively** for this purpose, and cannot be the name of
the first (master) node.

If the bridge name you are using is not ``xen-br0``, use the *-b
<BRIDGENAME>* option to specify the bridge name. In this case, you
should also use the *--master-netdev <BRIDGENAME>* option with the
same BRIDGENAME argument.

You can use a different name than ``xenvg`` for the volume group (but
note that the name must be identical on all nodes). In this case you
need to specify it by passing the *-g <VGNAME>* option to
``gnt-cluster init``.

To set up the cluster as an HVM cluster, use the
``--enabled-hypervisors=xen-hvm`` option to enable the HVM hypervisor
(you can also add ``,xen-pvm`` to enable the PVM one too). You will
also need to create the VNC cluster password file
``/etc/ganeti/vnc-cluster-password``, which contains one line with the
default VNC password for the cluster.

To set up the cluster for KVM-only usage (KVM and Xen cannot be
mixed), pass ``--enabled-hypervisors=kvm`` to the init command.

You can also invoke the command with the ``--help`` option in order to
see all the possibilities.

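Putting these options together, an initialization using a custom
bridge and volume group might look like this (illustrative only; the
cluster, bridge and volume group names are hypothetical)::

  gnt-cluster init -b br0 --master-netdev br0 -g ganetivg \
    cluster1.example.com
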
Joining the nodes to the cluster
++++++++++++++++++++++++++++++++

**Mandatory** for all the other nodes.

After you have initialized your cluster you need to join the other
nodes to it. You can do so by executing the following command on the
master node::

  gnt-node add <NODENAME>

Separate replication network
++++++++++++++++++++++++++++

**Optional**

Ganeti uses DRBD to mirror the disks of the virtual instances between
nodes. To use a dedicated network interface for this (in order to
improve performance or to enhance security) you need to configure an
additional interface for each node. Use the *-s* option with
``gnt-cluster init`` and ``gnt-node add`` to specify the IP address of
this secondary interface to use for each node. Note that if you
specified this option at cluster setup time, you must afterwards use
it for every node add operation.

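For example, assuming a hypothetical dedicated network
``192.168.100.0/24`` for DRBD traffic, the cluster and node commands
would look like::

  gnt-cluster init -s 192.168.100.1 cluster1.example.com
  gnt-node add -s 192.168.100.2 node2.example.com
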
Testing the setup
+++++++++++++++++

Execute the ``gnt-node list`` command to see all nodes in the
cluster::

  # gnt-node list
  Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
  node1.example.com 197404 197404   2047  1896   125     0     0

Setting up and managing virtual instances
-----------------------------------------

Setting up virtual instances
++++++++++++++++++++++++++++

This step shows how to set up a virtual instance with either
non-mirrored disks (``plain``) or with network-mirrored disks
(``drbd``). All commands need to be executed on the Ganeti master
node (the one on which ``gnt-cluster init`` was run). Verify that the
OS scripts are present on all cluster nodes with ``gnt-os list``.

To create a virtual instance, you need a hostname which is resolvable
(DNS or ``/etc/hosts`` on all nodes). The following command will
create a non-mirrored instance for you::

  gnt-instance add -t plain -s 1G -n node1 -o debootstrap instance1.example.com
  * creating instance disks...
  adding instance instance1.example.com to cluster config
   - INFO: Waiting for instance instance1.example.com to sync disks.
   - INFO: Instance instance1.example.com's disks are in sync.
  creating os for instance instance1.example.com on node node1.example.com
  * running the instance OS create scripts...
  * starting instance...

The above instance will have no network interface enabled. You can
access it over the virtual console with ``gnt-instance console
instance1``. There is no password for root. As this is a Debian
instance, you can modify the ``/etc/network/interfaces`` file to set
up the network interface (``eth0`` is the name of the interface
provided to the instance).

To create a network-mirrored instance, change the argument to the *-t*
option from ``plain`` to ``drbd`` and specify the node on which the
mirror should reside with the second value of the *--node* option,
like this (note that the command output includes timestamps which have
been removed for clarity)::

  # gnt-instance add -t drbd -s 1G -n node1:node2 -o debootstrap instance2
  * creating instance disks...
  adding instance instance2.example.com to cluster config
   - INFO: Waiting for instance instance2.example.com to sync disks.
   - INFO: - device disk/0: 35.50% done, 11 estimated seconds remaining
   - INFO: - device disk/0: 100.00% done, 0 estimated seconds remaining
   - INFO: Instance instance2.example.com's disks are in sync.
  creating os for instance instance2.example.com on node node1.example.com
  * running the instance OS create scripts...
  * starting instance...

Managing virtual instances
++++++++++++++++++++++++++

All commands need to be executed on the Ganeti master node.

To access the console of an instance, run::

  gnt-instance console INSTANCENAME

To shut down an instance, run::

  gnt-instance shutdown INSTANCENAME

To start up an instance, run::

  gnt-instance startup INSTANCENAME

To fail over an instance to its secondary node (only possible with
``drbd`` disk templates), run::

  gnt-instance failover INSTANCENAME

For more instance and cluster administration details, see the
*Ganeti administrator's guide*.