Revision 28e15341

b/Makefile.am

 docsgml = \
-	doc/install.sgml \
 	doc/rapi.sgml

 docrst = \
 	...
 	doc/design-2.0.rst \
 	doc/hooks.rst \
 	doc/iallocator.rst \
+	doc/install.rst \
 	doc/security.rst

 docdot = \
b/doc/install.rst

Ganeti installation tutorial
============================

Documents Ganeti version 2.0.

.. contents::

Introduction
------------

Ganeti is a cluster virtualization management system based on Xen or
KVM. This document explains how to bootstrap a Ganeti node (Xen
*dom0*), create a running cluster and install virtual instances (Xen
*domU*). You need to repeat most of the steps in this document for
every node you want to install, but of course we recommend creating
some semi-automatic procedure if you plan to deploy Ganeti on a
medium/large scale.

A basic Ganeti terminology glossary is provided in the introductory
section of the *Ganeti administrator's guide*. Please refer to that
document if you are uncertain about the terms we are using.

Ganeti has been developed for Linux and is distribution-agnostic.
This documentation will use Debian Lenny as an example system but the
examples can easily be translated to any other distribution. You are
expected to be familiar with your distribution, its package management
system, and Xen or KVM before trying to use Ganeti.

This document is divided into two main sections:

- Installation of the base system and base components

- Configuration of the environment for Ganeti

Each of these is divided into sub-sections. While a full Ganeti system
will need all of the steps specified, some are not strictly required
for every environment. Which ones are optional, and why, is specified
in the corresponding sections.

Installing the base system and base components
----------------------------------------------

Hardware requirements
+++++++++++++++++++++

Any system supported by your Linux distribution is fine. 64-bit
systems are better as they can support more memory.

Any disk drive recognized by Linux (``IDE``/``SCSI``/``SATA``/etc.)
is supported in Ganeti. Note that no shared storage (e.g. ``SAN``) is
needed to get high-availability features (but of course, one can be
used to store the images). It is highly recommended to use more than
one disk drive to improve speed. But Ganeti also works with one disk
per machine.

Installing the base system
++++++++++++++++++++++++++

**Mandatory** on all nodes.

It is advised to start with a clean, minimal install of the operating
system. The only requirement you need to be aware of at this stage is
to partition leaving enough space for a big (**minimum** 20GiB) LVM
volume group which will then host your instance filesystems, if you
want to use all Ganeti features. The volume group name Ganeti 2.0 uses
(by default) is ``xenvg``.

You can also use file-based storage only, without LVM, but this setup
is not detailed in this document.

While you can use an existing system, please note that the Ganeti
installation is intrusive in terms of changes to the system
configuration, and it's best to use a newly-installed system without
important data on it.

Also, for best results, it's advised that the nodes have hardware and
software configurations as similar as possible. This will make
administration much easier.

Hostname issues
~~~~~~~~~~~~~~~

Note that Ganeti requires the hostnames of the systems (i.e. what the
``hostname`` command outputs) to be fully-qualified names, not short
names. In other words, you should use *node1.example.com* as a
hostname and not just *node1*.
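
A quick way to verify this is to look at the ``hostname`` output,
which should be the fully-qualified name (the output below assumes the
example node above)::

  # hostname
  node1.example.com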

.. admonition:: Debian

   Debian Lenny and Etch configure the hostname differently than
   Ganeti needs it. For example, this is what Etch puts in
   ``/etc/hosts`` in certain situations::

     127.0.0.1       localhost
     127.0.1.1       node1.example.com node1

   but for Ganeti you need to have::

     127.0.0.1       localhost
     192.168.1.1     node1.example.com node1

   replacing ``192.168.1.1`` with your node's address. Also, the file
   ``/etc/hostname`` which configures the hostname of the system
   should contain ``node1.example.com`` and not just ``node1`` (you
   need to run the command ``/etc/init.d/hostname.sh start`` after
   changing the file).

Installing Xen
++++++++++++++

**Mandatory** on all nodes.

While Ganeti is developed with the ability to modularly run on
different virtualization environments in mind, the only two currently
usable on a live system are Xen and KVM. Supported Xen versions are
3.0.3, 3.0.4 and 3.1.

Please follow your distribution's recommended way to install and set
up Xen, or install Xen from the upstream source, if you wish,
following their manual. For KVM, make sure you have a KVM-enabled
kernel and the KVM tools.

After installing either hypervisor, you need to reboot into your new
system. On some distributions this might involve configuring GRUB
appropriately, whereas others will configure it automatically when you
install the respective kernels.

.. admonition:: Debian

   Under Lenny or Etch you can install the relevant
   ``xen-linux-system`` package, which will pull in both the
   hypervisor and the relevant kernel. Also, if you are installing a
   32-bit Lenny/Etch, you should install the ``libc6-xen`` package
   (run ``apt-get install libc6-xen``).

Xen settings
~~~~~~~~~~~~

It's recommended that dom0 is restricted to a low amount of memory
(512MiB or 1GiB is reasonable) and that memory ballooning is disabled
in the file ``/etc/xen/xend-config.sxp`` by setting the value
``dom0-min-mem`` to 0, like this::

  (dom0-min-mem 0)

For optimum performance when running both CPU and I/O intensive
instances, it's also recommended that the dom0 is restricted to one
CPU only, for example by booting with the kernel parameter ``nosmp``.

It is recommended that you disable Xen's automatic save of virtual
machines at system shutdown and subsequent restore of them at reboot.
To do this, make sure the variable ``XENDOMAINS_SAVE`` in the file
``/etc/default/xendomains`` is set to an empty value.
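
For illustration, the relevant line in ``/etc/default/xendomains``
then looks like this (the rest of that file is distribution-specific)::

  XENDOMAINS_SAVE=""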

.. admonition:: Debian

   Besides the ballooning change which you need to set in
   ``/etc/xen/xend-config.sxp``, you need to set the memory and nosmp
   parameters in the file ``/boot/grub/menu.lst``. You need to modify
   the variable ``xenhopt`` to add ``dom0_mem=1024M`` like this::

     ## Xen hypervisor options to use with the default Xen boot option
     # xenhopt=dom0_mem=1024M

   and the ``xenkopt`` needs to include the ``nosmp`` option like
   this::

     ## Xen Linux kernel options to use with the default Xen boot option
     # xenkopt=nosmp

   Any existing parameters can be left in place: it's ok to have
   ``xenkopt=console=tty0 nosmp``, for example. After modifying the
   files, you need to run::

     /sbin/update-grub

If you want to run HVM instances too with Ganeti and want VNC access
to the console of your instances, set the following two entries in
``/etc/xen/xend-config.sxp``::

  (vnc-listen '0.0.0.0') (vncpasswd '')

You need to restart the Xen daemon for these settings to take effect::

  /etc/init.d/xend restart

Selecting the instance kernel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After you have installed Xen, you need to tell Ganeti exactly what
kernel to use for the instances it will create. This is done by
creating a symlink from your actual kernel to
``/boot/vmlinuz-2.6-xenU``, and one from your initrd to
``/boot/initrd-2.6-xenU``. Note that if you don't use an initrd for
the domU kernel, you don't need to create the initrd symlink.

.. admonition:: Debian

   After installation of the ``xen-linux-system`` package, you need to
   run (replace the exact version number with the one you have)::

     cd /boot
     ln -s vmlinuz-2.6.26-1-xen-amd64 vmlinuz-2.6-xenU
     ln -s initrd.img-2.6.26-1-xen-amd64 initrd-2.6-xenU

Installing DRBD
+++++++++++++++

Recommended on all nodes: DRBD_ is required if you want to use the
high availability (HA) features of Ganeti, but optional if you don't
require HA or only run Ganeti on single-node clusters. You can upgrade
a non-HA cluster to an HA one later, but you might need to export and
re-import all your instances to take advantage of the new features.

.. _DRBD: http://www.drbd.org/

Supported DRBD versions: 8.0.x. It's recommended to have at least
version 8.0.12.

Now the bad news: unless your distribution already provides it,
installing DRBD might involve recompiling your kernel or at least
fiddling with it. Hopefully at least the Xen-ified kernel source to
start from will be provided.

The good news is that you don't need to configure DRBD at all. Ganeti
will do it for you for every instance you set up. If you have the
DRBD utils installed and the module in your kernel you're fine. Please
check that your system is configured to load the module at every boot,
and that it passes the following option to the module:
``minor_count=255``. This will allow you to use up to 128 instances
per node (for most clusters 128 should be enough, though).

.. admonition:: Debian

   On Debian, you can just install (build) the DRBD 8.0.x module with
   the following commands (make sure you are running the Xen kernel)::

     apt-get install drbd8-source drbd8-utils
     m-a update
     m-a a-i drbd8
     echo drbd minor_count=128 >> /etc/modules
     depmod -a
     modprobe drbd minor_count=128

   It is also recommended that you comment out the default resources
   in the ``/etc/drbd.conf`` file, so that the init script doesn't try
   to configure any DRBD devices. You can do this by prefixing all
   *resource* lines in the file with the keyword *skip*, like this::

     skip resource r0 {
       ...
     }

     skip resource "r1" {
       ...
     }

Other required software
+++++++++++++++++++++++

Besides Xen and DRBD, you will need to install the following (on all
nodes):

- LVM version 2, `<http://sourceware.org/lvm2/>`_

- OpenSSL, `<http://www.openssl.org/>`_

- OpenSSH, `<http://www.openssh.com/portable.html>`_

- bridge utilities, `<http://bridge.sourceforge.net/>`_

- iproute2, `<http://developer.osdl.org/dev/iproute2>`_

- arping (part of the iputils package),
  `<ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz>`_

- Python version 2.4 or 2.5, `<http://www.python.org>`_

- Python OpenSSL bindings, `<http://pyopenssl.sourceforge.net/>`_

- simplejson Python module, `<http://www.undefined.org/python/#simplejson>`_

- pyparsing Python module, `<http://pyparsing.wikispaces.com/>`_

These programs are supplied as part of most Linux distributions, so
usually they can be installed via apt or similar methods. Also, many
of them will already be installed on a standard machine.

.. admonition:: Debian

   You can use this command line to install all needed packages::

     # apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
       python python-pyopenssl openssl python-pyparsing python-simplejson

Setting up the environment for Ganeti
-------------------------------------

Configuring the network
+++++++++++++++++++++++

**Mandatory** on all nodes.

Ganeti relies on Xen running in "bridge mode", which means the
instances' network interfaces will be attached to a software bridge
running in dom0. Xen by default creates such a bridge at startup, but
your distribution might have a different way to do things.

Beware that the default name Ganeti uses is ``xen-br0`` (which was
used in Xen 2.0) while Xen 3.0 uses ``xenbr0`` by default. The default
bridge your Ganeti cluster will use for new instances can be specified
at cluster initialization time.

.. admonition:: Debian

   The recommended way to configure the Xen bridge is to edit your
   ``/etc/network/interfaces`` file and substitute your normal
   Ethernet stanza with the following snippet::

     auto xen-br0
     iface xen-br0 inet static
        address YOUR_IP_ADDRESS
        netmask YOUR_NETMASK
        network YOUR_NETWORK
        broadcast YOUR_BROADCAST_ADDRESS
        gateway YOUR_GATEWAY
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

The following commands need to be executed on the local console::

  ifdown eth0
  ifup xen-br0

To check if the bridge is set up, use the ``ip`` and ``brctl show``
commands::

  # ip a show xen-br0
  9: xen-br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue
      link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
      inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
      inet6 fe80::220:fcff:fe1e:d55d/64 scope link
         valid_lft forever preferred_lft forever

  # brctl show xen-br0
  bridge name     bridge id               STP enabled     interfaces
  xen-br0         8000.0020fc1ed55d       no              eth0

Configuring LVM
+++++++++++++++

**Mandatory** on all nodes.

The volume group is required to be at least 20GiB.

If you haven't configured your LVM volume group at install time you
need to do it before trying to initialize the Ganeti cluster. This is
done by formatting the devices/partitions you want to use for it and
then adding them to the relevant volume group::

  pvcreate /dev/sda3
  vgcreate xenvg /dev/sda3

or::

  pvcreate /dev/sdb1
  pvcreate /dev/sdc1
  vgcreate xenvg /dev/sdb1 /dev/sdc1

If you want to add a device later you can do so with the *vgextend*
command::

  pvcreate /dev/sdd1
  vgextend xenvg /dev/sdd1

Optional: it is recommended to configure LVM not to scan the DRBD
devices for physical volumes. This can be accomplished by editing
``/etc/lvm/lvm.conf`` and adding the ``/dev/drbd[0-9]+`` regular
expression to the ``filter`` variable, like this::

  filter = ["r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]

Installing Ganeti
+++++++++++++++++

**Mandatory** on all nodes.

It's now time to install the Ganeti software itself. Download the
source from the project page at `<http://code.google.com/p/ganeti/>`_,
and install it (replace 2.0.0 with the latest version)::

  tar xvzf ganeti-2.0.0.tar.gz
  cd ganeti-2.0.0
  ./configure --localstatedir=/var --sysconfdir=/etc
  make
  make install
  mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export

You also need to copy the file ``doc/examples/ganeti.initd`` from the
source archive to ``/etc/init.d/ganeti`` and register it with your
distribution's startup scripts, for example in Debian::

  update-rc.d ganeti defaults 20 80

In order to automatically restart failed instances, you need to set up
a cron job to run the *ganeti-watcher* command. A sample cron file is
provided in the source at ``doc/examples/ganeti.cron`` and you can
copy that (altering the path if necessary) to ``/etc/cron.d/ganeti``.
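
For illustration, such a cron entry has roughly the following shape
(the shipped ``doc/examples/ganeti.cron`` file is authoritative; the
path below assumes the default ``/usr/local`` install prefix)::

  */5 * * * * root [ -x /usr/local/sbin/ganeti-watcher ] && /usr/local/sbin/ganeti-watcher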

Installing the Operating System support packages
++++++++++++++++++++++++++++++++++++++++++++++++

**Mandatory** on all nodes.

To be able to install instances you need to have an Operating System
installation script. An example OS that works under Debian and can
install Debian and Ubuntu instance OSes is provided on the project web
site. Download it from the project page and follow the instructions in
the ``README`` file. Here is the installation procedure (replace 0.7
with the latest version that is compatible with your Ganeti
version)::

  cd /usr/local/src/
  wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-0.7.tar.gz
  tar xzf ganeti-instance-debootstrap-0.7.tar.gz
  cd ganeti-instance-debootstrap-0.7
  ./configure
  make
  make install

In order to use this OS definition, you need to have internet access
from your nodes and have the *debootstrap*, *dump* and *restore*
commands installed on all nodes. Also, if the OS is configured to
partition the instance's disk in
``/etc/default/ganeti-instance-debootstrap``, you will need *kpartx*
installed.

.. admonition:: Debian

   Use this command on all nodes to install the required packages::

     apt-get install debootstrap dump kpartx

Alternatively, you can create your own OS definitions. See the manpage
*ganeti-os-interface*.

Initializing the cluster
++++++++++++++++++++++++

**Mandatory** on one node per cluster.

The last step is to initialize the cluster. After you've repeated the
above process on all of your nodes, choose one as the master, and
execute::

  gnt-cluster init <CLUSTERNAME>

The *CLUSTERNAME* is a hostname, which must be resolvable (e.g. it
must exist in DNS or in ``/etc/hosts``) by all the nodes in the
cluster. You must choose a name different from any of the node names
for a multi-node cluster. In general the best choice is to have a
unique name for a cluster, even if it consists of only one machine, as
you will be able to expand it later without any problems. Please note
that the hostname used for this must resolve to an IP address reserved
**exclusively** for this purpose, and cannot be the name of the first
(master) node.

If the bridge name you are using is not ``xen-br0``, use the *-b
<BRIDGENAME>* option to specify the bridge name. In this case, you
should also use the *--master-netdev <BRIDGENAME>* option with the
same BRIDGENAME argument.

You can use a different name than ``xenvg`` for the volume group (but
note that the name must be identical on all nodes). In this case you
need to specify it by passing the *-g <VGNAME>* option to
``gnt-cluster init``.

To set up the cluster as an HVM cluster, use the
``--enabled-hypervisors=xen-hvm`` option to enable the HVM hypervisor
(you can also add ``,xen-pvm`` to enable the PVM one too). You will
also need to create the VNC cluster password file
``/etc/ganeti/vnc-cluster-password`` which contains one line with the
default VNC password for the cluster.

To set up the cluster for KVM-only usage (KVM and Xen cannot be
mixed), pass ``--enabled-hypervisors=kvm`` to the init command.

You can also invoke the command with the ``--help`` option in order to
see all the possibilities.
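
Putting these options together, a hypothetical initialization of a
cluster using a custom bridge and volume group name could look like
this (all names below are placeholders)::

  gnt-cluster init -b br0 --master-netdev br0 -g ganetivg cluster1.example.com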

Joining the nodes to the cluster
++++++++++++++++++++++++++++++++

**Mandatory** for all the other nodes.

After you have initialized your cluster you need to join the other
nodes to it. You can do so by executing the following command on the
master node::

  gnt-node add <NODENAME>

Separate replication network
++++++++++++++++++++++++++++

**Optional**

Ganeti uses DRBD to mirror the disk of the virtual instances between
nodes. To use a dedicated network interface for this (in order to
improve performance or to enhance security) you need to configure an
additional interface for each node. Use the *-s* option with
``gnt-cluster init`` and ``gnt-node add`` to specify the IP address of
this secondary interface to use for each node. Note that if you
specified this option at cluster setup time, you must afterwards use
it for every node add operation.
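
For example, assuming a dedicated 192.168.2.0/24 replication network
(all addresses below are placeholders), the cluster initialization and
a subsequent node addition could look like this::

  gnt-cluster init -s 192.168.2.1 cluster1.example.com
  gnt-node add -s 192.168.2.2 node2.example.com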

Testing the setup
+++++++++++++++++

Execute the ``gnt-node list`` command to see all nodes in the
cluster::

  # gnt-node list
  Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
  node1.example.com 197404 197404   2047  1896   125     0     0

Setting up and managing virtual instances
-----------------------------------------

Setting up virtual instances
++++++++++++++++++++++++++++

This step shows how to set up a virtual instance with either
non-mirrored disks (``plain``) or with network mirrored disks
(``drbd``). All commands need to be executed on the Ganeti master
node (the one on which ``gnt-cluster init`` was run). Verify that the
OS scripts are present on all cluster nodes with ``gnt-os list``.

To create a virtual instance, you need a hostname which is resolvable
(DNS or ``/etc/hosts`` on all nodes). The following command will
create a non-mirrored instance for you::

  gnt-instance add -t plain -s 1G -n node1 -o debootstrap instance1.example.com
  * creating instance disks...
  adding instance instance1.example.com to cluster config
   - INFO: Waiting for instance instance1.example.com to sync disks.
   - INFO: Instance instance1.example.com's disks are in sync.
  creating os for instance instance1.example.com on node node1.example.com
  * running the instance OS create scripts...
  * starting instance...

The above instance will have no network interface enabled. You can
access it over the virtual console with ``gnt-instance console
inst1``. There is no password for root. As this is a Debian instance,
you can modify the ``/etc/network/interfaces`` file to set up the
network interface (eth0 is the name of the interface provided to the
instance).

To create a network mirrored instance, change the argument to the *-t*
option from ``plain`` to ``drbd`` and specify the node on which the
mirror should reside with the second value of the *--node* option,
like this (note that the command output includes timestamps which have
been removed for clarity)::

  # gnt-instance add -t drbd -s 1G -n node1:node2 -o debootstrap instance2
  * creating instance disks...
  adding instance instance2.example.com to cluster config
   - INFO: Waiting for instance instance2.example.com to sync disks.
   - INFO: - device disk/0: 35.50% done, 11 estimated seconds remaining
   - INFO: - device disk/0: 100.00% done, 0 estimated seconds remaining
   - INFO: Instance instance2.example.com's disks are in sync.
  creating os for instance instance2.example.com on node node1.example.com
  * running the instance OS create scripts...
  * starting instance...

Managing virtual instances
++++++++++++++++++++++++++

All commands need to be executed on the Ganeti master node.

To access the console of an instance, run::

  gnt-instance console INSTANCENAME

To shut down an instance, run::

  gnt-instance shutdown INSTANCENAME

To start an instance, run::

  gnt-instance startup INSTANCENAME

To fail over an instance to its secondary node (only possible with
``drbd`` disk templates), run::

  gnt-instance failover INSTANCENAME

For more instance and cluster administration details, see the
*Ganeti administrator's guide*.
/dev/null
1
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
2
]>
3
  <article class="specification">
4
  <articleinfo>
5
    <title>Ganeti installation tutorial</title>
6
  </articleinfo>
7
  <para>Documents Ganeti version 2.0</para>
8

  
9
  <sect1>
10
    <title>Introduction</title>
11

  
12
    <para>
13
      Ganeti is a cluster virtualization management system based on
14
      Xen or KVM. This document explains how to bootstrap a Ganeti
15
      node (Xen <literal>dom0</literal>), create a running cluster and
16
      install virtual instance (Xen <literal>domU</literal>).  You
17
      need to repeat most of the steps in this document for every node
18
      you want to install, but of course we recommend creating some
19
      semi-automatic procedure if you plan to deploy Ganeti on a
20
      medium/large scale.
21
    </para>
22

  
23
    <para>
24
      A basic Ganeti terminology glossary is provided in the
25
      introductory section of the <emphasis>Ganeti administrator's
26
      guide</emphasis>. Please refer to that document if you are
27
      uncertain about the terms we are using.
28
    </para>
29

  
30
    <para>
31
      Ganeti has been developed for Linux and is
32
      distribution-agnostic.  This documentation will use Debian Lenny
33
      as an example system but the examples can easily be translated
34
      to any other distribution.  You are expected to be familiar with
35
      your distribution, its package management system, and Xen or KVM
36
      before trying to use Ganeti.
37
    </para>
38

  
39
    <para>This document is divided into two main sections:
40

  
41
      <itemizedlist>
42
        <listitem>
43
          <simpara>Installation of the base system and base
44
            components</simpara>
45
        </listitem>
46
        <listitem>
47
          <simpara>Configuration of the environment for
48
            Ganeti</simpara>
49
        </listitem>
50
      </itemizedlist>
51

  
52
      Each of these is divided into sub-sections. While a full Ganeti system
53
      will need all of the steps specified, some are not strictly required for
54
      every environment. Which ones they are, and why, is specified in the
55
      corresponding sections.
56
    </para>
57

  
58
  </sect1>
59

  
60
  <sect1>
61
    <title>Installing the base system and base components</title>
62

  
63
    <sect2>
64
      <title>Hardware requirements</title>
65

  
66
      <para>
67
        Any system supported by your Linux distribution is fine.  64-bit
68
        systems are better as they can support more memory.
69
      </para>
70

  
71
      <para>
72
        Any disk drive recognized by Linux
73
        (<literal>IDE</literal>/<literal>SCSI</literal>/<literal>SATA</literal>/etc.)
74
        is supported in Ganeti. Note that no shared storage (e.g.
75
        <literal>SAN</literal>) is needed to get high-availability features. It
76
        is highly recommended to use more than one disk drive to improve speed.
77
        But Ganeti also works with one disk per machine.
78
      </para>
79

  
80
    <sect2>
81
      <title>Installing the base system</title>
82

  
83
      <para>
84
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
85
      </para>
86

  
87
      <para>
88
        It is advised to start with a clean, minimal install of the
89
        operating system. The only requirement you need to be aware of
90
        at this stage is to partition leaving enough space for a big
91
        (<emphasis role="strong">minimum
92
        <constant>20GiB</constant></emphasis>) LVM volume group which
93
        will then host your instance filesystems, if you want to use
94
        all Ganeti features. The volume group name Ganeti 2.0 uses (by
95
        default) is <emphasis>xenvg</emphasis>.
96
      </para>
97

  
98
      <para>
99
        You can also use file-based storage only, without LVM, but
100
        this is not detailed in this document.
101
      </para>
102

  
103
      <para>
104
        While you can use an existing system, please note that the
105
        Ganeti installation is intrusive in terms of changes to the
106
        system configuration, and it's best to use a newly-installed
107
        system without important data on it.
108
      </para>
109

  
110
      <para>
111
        Also, for best results, it's advised that the nodes have as
112
        much as possible the same hardware and software
113
        configuration. This will make administration much easier.
114
      </para>
115

  
116
      <sect3>
117
        <title>Hostname issues</title>
118
        <para>
119
          Note that Ganeti requires the hostnames of the systems
120
          (i.e. what the <computeroutput>hostname</computeroutput>
121
          command outputs to be a fully-qualified name, not a short
122
          name. In other words, you should use
123
          <literal>node1.example.com</literal> as a hostname and not
124
          just <literal>node1</literal>.
125
        </para>
126

  
127
        <formalpara>
128
          <title>Debian</title>
129
          <para>
130
            Note that Debian Lenny configures the hostname differently
131
            than you need it for Ganeti. For example, this is what
132
            Etch puts in <filename>/etc/hosts</filename> in certain
133
            situations:
134
<screen>
135
127.0.0.1       localhost
136
127.0.1.1       node1.example.com node1
137
</screen>
138

  
139
          but for Ganeti you need to have:
140
<screen>
141
127.0.0.1       localhost
142
192.168.1.1     node1.example.com node1
143
</screen>
144
            replacing <literal>192.168.1.1</literal> with your node's
145
            address. Also, the file <filename>/etc/hostname</filename>
146
            which configures the hostname of the system should contain
147
            <literal>node1.example.com</literal> and not just
148
            <literal>node1</literal> (you need to run the command
149
            <computeroutput>/etc/init.d/hostname.sh
150
            start</computeroutput> after changing the file).
151
          </para>
152
        </formalpara>
153
      </sect3>

    </sect2>

    <sect2>
      <title>Installing Xen</title>

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

      <para>
        While Ganeti is developed with the ability to modularly run on
        different virtualization environments in mind, the only two
        currently usable on a live system are <ulink
        url="http://xen.xensource.com/">Xen</ulink> and KVM. Supported
        versions are: <simplelist type="inline">
        <member><literal>3.0.3</literal></member>
        <member><literal>3.0.4</literal></member>
        <member><literal>3.1</literal></member> </simplelist>.
      </para>

      <para>
        Please follow your distribution's recommended way to install
        and set up Xen, or install Xen from the upstream source, if
        you wish, following their manual. For KVM, make sure you have
        a KVM-enabled kernel and the KVM tools.
      </para>

      <para>
        After installing either hypervisor, you need to reboot into
        your new system. On some distributions this might involve
        configuring GRUB appropriately, whereas others will configure
        it automatically when you install the respective kernels.
      </para>

      <formalpara><title>Debian</title>
      <para>
        Under Debian Lenny or Etch you can install the relevant
        <literal>xen-linux-system</literal> package, which will pull
        in both the hypervisor and the relevant kernel. Also, if you
        are installing a 32-bit Lenny/Etch, you should install the
        <computeroutput>libc6-xen</computeroutput> package (run
        <computeroutput>apt-get install libc6-xen</computeroutput>).
      </para>
      </formalpara>

      <sect3>
        <title>Xen settings</title>

        <para>
          It's recommended that the dom0 be restricted to a low amount
          of memory (<constant>512MiB</constant> or
          <constant>1GiB</constant> is reasonable) and that memory
          ballooning be disabled in the file
          <filename>/etc/xen/xend-config.sxp</filename> by setting the
          value <literal>dom0-min-mem</literal> to
          <constant>0</constant>, like this:
          <computeroutput>(dom0-min-mem 0)</computeroutput>
        </para>

        <para>
          For optimum performance when running both CPU- and
          I/O-intensive instances, it's also recommended that the dom0
          be restricted to one CPU only, for example by booting with
          the kernel parameter <literal>nosmp</literal>.
        </para>

        <para>
          It is recommended that you disable Xen's automatic save of
          virtual machines at system shutdown and their subsequent
          restore at reboot. To achieve this, make sure the variable
          <literal>XENDOMAINS_SAVE</literal> in the file
          <literal>/etc/default/xendomains</literal> is set to an
          empty value.
        </para>
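        <para>
          For illustration, assuming the stock layout of
          <filename>/etc/default/xendomains</filename>, the relevant
          line after this change would read:
        </para>
<screen>
XENDOMAINS_SAVE=""
</screen>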

        <formalpara>
          <title>Debian</title>
          <para>
            Besides the ballooning change, which you need to set in
            <filename>/etc/xen/xend-config.sxp</filename>, you need to
            set the memory and nosmp parameters in the file
            <filename>/boot/grub/menu.lst</filename>. You need to
            modify the variable <literal>xenhopt</literal> to add
            <userinput>dom0_mem=1024M</userinput> like this:
<screen>
## Xen hypervisor options to use with the default Xen boot option
# xenhopt=dom0_mem=1024M
</screen>
            and the <literal>xenkopt</literal> needs to include the
            <userinput>nosmp</userinput> option like this:
<screen>
## Xen Linux kernel options to use with the default Xen boot option
# xenkopt=nosmp
</screen>
          Any existing parameters can be left in place: it's OK to
          have <computeroutput>xenkopt=console=tty0
          nosmp</computeroutput>, for example. After modifying the
          files, you need to run:
<screen>
/sbin/update-grub
</screen>
          </para>
        </formalpara>
        <para>
          If you want to run HVM instances too with Ganeti and want
          VNC access to the console of your instances, set the
          following two entries in
          <filename>/etc/xen/xend-config.sxp</filename>:
<screen>
(vnc-listen '0.0.0.0')
(vncpasswd '')
</screen>
          You need to restart the Xen daemon for these settings to
          take effect:
<screen>
/etc/init.d/xend restart
</screen>
        </para>

      </sect3>

      <sect3>
        <title>Selecting the instance kernel</title>

        <para>
          After you have installed Xen, you need to tell Ganeti
          exactly what kernel to use for the instances it will
          create. This is done by creating a
          <emphasis>symlink</emphasis> from your actual kernel to
          <filename>/boot/vmlinuz-2.6-xenU</filename>, and one from
          your initrd to
          <filename>/boot/initrd-2.6-xenU</filename>. Note that if you
          don't use an initrd for the <literal>domU</literal> kernel,
          you don't need to create the initrd symlink.
        </para>

        <formalpara>
          <title>Debian</title>
          <para>
            After installation of the
            <literal>xen-linux-system</literal> package, you need to
            run (replace the exact version number with the one you
            have):
            <screen>
cd /boot
ln -s vmlinuz-2.6.18-5-xen-686 vmlinuz-2.6-xenU
ln -s initrd.img-2.6.18-5-xen-686 initrd-2.6-xenU
            </screen>
          </para>
        </formalpara>
      </sect3>

    </sect2>

    <sect2>
      <title>Installing DRBD</title>

      <para>
        Recommended on all nodes: <ulink
        url="http://www.drbd.org/">DRBD</ulink> is required if you
        want to use the high availability (HA) features of Ganeti, but
        optional if you don't require HA or only run Ganeti on
        single-node clusters. You can upgrade a non-HA cluster to an
        HA one later, but you might need to export and re-import all
        your instances to take advantage of the new features.
      </para>

      <para>
        Supported DRBD versions: <literal>8.0.x</literal>.
        It's recommended to have at least version <literal>8.0.12</literal>.
      </para>

      <para>
        Now the bad news: unless your distribution already provides
        it, installing DRBD might involve recompiling your kernel or
        otherwise fiddling with it. With luck, your distribution will
        at least provide the Xen-ified kernel source to start from.
      </para>

      <para>
        The good news is that you don't need to configure DRBD at all.
        Ganeti will do it for you for every instance you set up.  If
        you have the DRBD utils installed and the module in your
        kernel you're fine. Please check that your system is
        configured to load the module at every boot, and that it
        passes the option
        <computeroutput>minor_count=128</computeroutput> to the module
        (the same value used in the Debian commands below). This
        allows up to <constant>128</constant> DRBD devices per node,
        which should be enough for most clusters.
      </para>

      <formalpara><title>Debian</title>
        <para>
         You can just install (build) the DRBD 8.0.x module with the
         following commands (make sure you are running the Xen
         kernel):
        </para>
      </formalpara>

      <screen>
apt-get install drbd8-source drbd8-utils
m-a update
m-a a-i drbd8
echo drbd minor_count=128 >> /etc/modules
depmod -a
modprobe drbd minor_count=128
      </screen>

      <para>
        It is also recommended that you comment out the default
        resources in the <filename>/etc/drbd.conf</filename> file, so
        that the init script doesn't try to configure any DRBD
        devices. You can do this by prefixing all
        <literal>resource</literal> lines in the file with the keyword
        <literal>skip</literal>, like this:
      </para>

      <screen>
skip resource r0 {
...
}

skip resource "r1" {
...
}
      </screen>

    </sect2>

    <sect2>
      <title>Other required software</title>

      <para>Besides Xen and DRBD, you will need to install the
      following (on all nodes):</para>

      <itemizedlist>
        <listitem>
          <simpara><ulink url="http://sourceware.org/lvm2/">LVM
          version 2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssl.org/">OpenSSL</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssh.com/portable.html">OpenSSH</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://bridge.sourceforge.net/">Bridge
          utilities</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://developer.osdl.org/dev/iproute2">iproute2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz">arping</ulink>
          (part of the iputils package)</simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://www.python.org">Python 2.4</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://pyopenssl.sourceforge.net/">Python OpenSSL
          bindings</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.undefined.org/python/#simplejson">simplejson Python
          module</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://pyparsing.wikispaces.com/">pyparsing Python
          module</ulink></simpara>
        </listitem>
      </itemizedlist>

      <para>
        These programs are supplied as part of most Linux
        distributions, so usually they can be installed via apt or
        similar methods. Many of them will already be installed
        on a standard machine.
      </para>

      <formalpara><title>Debian</title>

      <para>You can use this command line to install all of them:</para>

      </formalpara>
      <screen>
# apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
  python python-pyopenssl openssl python-pyparsing python-simplejson
      </screen>

    </sect2>

  </sect1>

  <sect1>
    <title>Setting up the environment for Ganeti</title>

    <sect2>
      <title>Configuring the network</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        Ganeti relies on Xen running in "bridge mode", which means the
        instances' network interfaces will be attached to a software bridge
        running in dom0. Xen by default creates such a bridge at startup, but
        your distribution might have a different way to do things.
      </para>

      <para>
        Beware that the default name Ganeti uses is
        <hardware>xen-br0</hardware> (which was used in Xen 2.0)
        while Xen 3.0 uses <hardware>xenbr0</hardware> by
        default. The default bridge your Ganeti cluster will use for new
        instances can be specified at cluster initialization time.
      </para>
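      <para>
        If your distribution does not set up such a bridge for you, one
        can be created by hand for a quick test (the names
        <literal>xen-br0</literal> and <literal>eth0</literal> are
        examples; a bridge created this way is not persistent across
        reboots, so prefer your distribution's network configuration
        for permanent setups):
      </para>
<screen>
brctl addbr xen-br0
brctl addif xen-br0 eth0
ip link set xen-br0 up
</screen>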

      <formalpara><title>Debian</title>
        <para>
          The recommended Debian way to configure the Xen bridge is to
          edit your <filename>/etc/network/interfaces</filename> file
          and substitute your normal Ethernet stanza with the
          following snippet:

        <screen>
auto xen-br0
iface xen-br0 inet static
        address <replaceable>YOUR_IP_ADDRESS</replaceable>
        netmask <replaceable>YOUR_NETMASK</replaceable>
        network <replaceable>YOUR_NETWORK</replaceable>
        broadcast <replaceable>YOUR_BROADCAST_ADDRESS</replaceable>
        gateway <replaceable>YOUR_GATEWAY</replaceable>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        </screen>
        </para>
      </formalpara>

     <para>
       The following commands need to be executed on the local console:
     </para>
      <screen>
ifdown eth0
ifup xen-br0
      </screen>

      <para>
        To check whether the bridge is set up, use <command>ip</command>
        and <command>brctl show</command>:
      </para>

      <screen>
# ip a show xen-br0
9: xen-br0: &lt;BROADCAST,MULTICAST,UP,10000&gt; mtu 1500 qdisc noqueue
    link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
    inet6 fe80::220:fcff:fe1e:d55d/64 scope link
       valid_lft forever preferred_lft forever

# brctl show xen-br0
bridge name     bridge id               STP enabled     interfaces
xen-br0         8000.0020fc1ed55d       no              eth0
      </screen>

    </sect2>

    <sect2>
      <title>Configuring LVM</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <note>
        <simpara>The volume group is required to be at least
        <constant>20GiB</constant>.</simpara>
      </note>
      <para>
        If you haven't configured your LVM volume group at install
        time you need to do it before trying to initialize the Ganeti
        cluster. This is done by initializing the devices/partitions
        you want to use for it and then adding them to the relevant
        volume group:

       <screen>
pvcreate /dev/sda3
vgcreate xenvg /dev/sda3
       </screen>
or
       <screen>
pvcreate /dev/sdb1
pvcreate /dev/sdc1
vgcreate xenvg /dev/sdb1 /dev/sdc1
       </screen>
      </para>

      <para>
	If you want to add a device later you can do so with the
	<citerefentry><refentrytitle>vgextend</refentrytitle>
	<manvolnum>8</manvolnum></citerefentry> command:
      </para>

      <screen>
pvcreate /dev/sdd1
vgextend xenvg /dev/sdd1
      </screen>

      <formalpara>
        <title>Optional</title>
        <para>
          It is recommended to configure LVM not to scan the DRBD
          devices for physical volumes. This can be accomplished by
          editing <filename>/etc/lvm/lvm.conf</filename> and adding
          the <literal>/dev/drbd[0-9]+</literal> regular expression to
          the <literal>filter</literal> variable, like this:
<screen>
    filter = [ "r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]
</screen>
        </para>
      </formalpara>

    </sect2>

    <sect2>
      <title>Installing Ganeti</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        It's now time to install the Ganeti software itself.  Download
        the source from <ulink
        url="http://code.google.com/p/ganeti/"></ulink>.
      </para>

        <screen>
tar xvzf ganeti-@GANETI_VERSION@.tar.gz
cd ganeti-@GANETI_VERSION@
./configure --localstatedir=/var --sysconfdir=/etc
make
make install
mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export
        </screen>

      <para>
        You also need to copy the file
        <filename>doc/examples/ganeti.initd</filename>
        from the source archive to
        <filename>/etc/init.d/ganeti</filename> and register it with
        your distribution's startup scripts, for example in Debian:
      </para>
      <screen>update-rc.d ganeti defaults 20 80</screen>

      <para>
        In order to automatically restart failed instances, you need
        to set up a cron job to run the
        <computeroutput>ganeti-watcher</computeroutput> program. A
        sample cron file is provided in the source at
        <filename>doc/examples/ganeti.cron</filename>; you can
        copy that (adjusting the path if necessary) to
        <filename>/etc/cron.d/ganeti</filename>.
      </para>
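      <para>
        For illustration only (the five-minute interval and the
        installation path are assumptions; treat the shipped
        <filename>doc/examples/ganeti.cron</filename> as the
        authoritative version), such a cron entry could look like:
      </para>
<screen>
*/5 * * * * root /usr/local/sbin/ganeti-watcher
</screen>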

    </sect2>

    <sect2>
      <title>Installing the Operating System support packages</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        To be able to install instances you need to have an Operating
        System installation script. An example OS that works under
        Debian and can install Debian and Ubuntu instance OSes is
        provided on the project web site.  Download it from <ulink
        url="http://code.google.com/p/ganeti/"></ulink> and follow the
        instructions in the <filename>README</filename> file.  Here is
        the installation procedure (replace <constant>0.7</constant>
        with the latest version that is compatible with your Ganeti
        version):
      </para>

      <screen>
cd /usr/local/src/
wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-0.7.tar.gz
tar xzf ganeti-instance-debootstrap-0.7.tar.gz
cd ganeti-instance-debootstrap-0.7
./configure
make
make install
      </screen>

      <para>
        In order to use this OS definition, you need to have internet
        access from your nodes and have the <citerefentry>
        <refentrytitle>debootstrap</refentrytitle>
        <manvolnum>8</manvolnum></citerefentry>, <citerefentry>
        <refentrytitle>dump</refentrytitle><manvolnum>8</manvolnum>
        </citerefentry> and <citerefentry>
        <refentrytitle>restore</refentrytitle>
        <manvolnum>8</manvolnum> </citerefentry> commands installed on
        all nodes. Also, if the OS is configured to partition the
        instance's disk in
        <filename>/etc/default/ganeti-instance-debootstrap</filename>,
        you will need <command>kpartx</command> installed.
      </para>
      <formalpara>
        <title>Debian</title>
        <para>
          Use this command on all nodes to install the required
          packages:

          <screen>apt-get install debootstrap dump kpartx</screen>
        </para>
      </formalpara>

      <para>
        Alternatively, you can create your own OS definitions. See the
        manpage
        <citerefentry>
        <refentrytitle>ganeti-os-interface</refentrytitle>
        <manvolnum>8</manvolnum>
        </citerefentry>.
      </para>

    </sect2>

    <sect2>
      <title>Initializing the cluster</title>

      <para><emphasis role="strong">Mandatory:</emphasis> only on one
      node per cluster.</para>

      <para>
        The last step is to initialize the cluster. After you've
        repeated the above process on all of your nodes, choose one as
        the master, and execute:
      </para>

      <screen>
gnt-cluster init <replaceable>CLUSTERNAME</replaceable>
      </screen>

      <para>
        The <replaceable>CLUSTERNAME</replaceable> is a hostname,
        which must be resolvable (e.g. it must exist in DNS or in
        <filename>/etc/hosts</filename>) by all the nodes in the
        cluster. You must choose a name different from any of the
        node names for a multi-node cluster. In general the best
        choice is to have a unique name for a cluster, even if it
        consists of only one machine, as you will be able to expand it
        later without any problems. Please note that the hostname used
        for this must resolve to an IP address reserved <emphasis
        role="strong">exclusively</emphasis> for this purpose.
      </para>
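      <para>
        For example, using the illustrative address
        <literal>192.168.1.10</literal> for the cluster, the
        <filename>/etc/hosts</filename> entry on every node could look
        like:
      </para>
<screen>
192.168.1.10    cluster1.example.com cluster1
</screen>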

      <para>
        If the bridge name you are using is not
        <literal>xen-br0</literal>, use the <option>-b
        <replaceable>BRIDGENAME</replaceable></option> option to
        specify the bridge name. In this case, you should also use the
        <option>--master-netdev
        <replaceable>BRIDGENAME</replaceable></option> option with the
        same <replaceable>BRIDGENAME</replaceable> argument.
      </para>

      <para>
        You can use a different name than <literal>xenvg</literal> for
        the volume group (but note that the name must be identical on
        all nodes). In this case you need to specify it by passing the
        <option>-g <replaceable>VGNAME</replaceable></option> option
        to <computeroutput>gnt-cluster init</computeroutput>.
      </para>

      <para>
        To set up the cluster as an HVM cluster, use the
        <option>--enabled-hypervisors=xen-hvm</option> option to
        enable the HVM hypervisor (you can also add
        <userinput>,xen-pvm</userinput> to enable the PVM one
        too). You will also need to create the VNC cluster password
        file <filename>/etc/ganeti/vnc-cluster-password</filename>,
        which contains one line with the default VNC password for the
        cluster.
      </para>
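      <para>
        For example (the password shown is a placeholder, and
        tightening the file permissions is a general precaution rather
        than a Ganeti requirement):
      </para>
<screen>
echo "example-vnc-password" > /etc/ganeti/vnc-cluster-password
chmod 600 /etc/ganeti/vnc-cluster-password
</screen>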

      <para>
        To set up the cluster for KVM-only usage (KVM and Xen cannot
        be mixed), pass <option>--enabled-hypervisors=kvm</option> to
        the init command.
      </para>

      <para>
        You can also invoke the command with the
        <option>--help</option> option in order to see all the
        possibilities.
      </para>

    </sect2>

    <sect2>
      <title>Joining the nodes to the cluster</title>

      <para>
        <emphasis role="strong">Mandatory:</emphasis> for all the
        other nodes.
      </para>

      <para>
        After you have initialized your cluster you need to join the
        other nodes to it. You can do so by executing the following
        command on the master node:
      </para>
        <screen>
gnt-node add <replaceable>NODENAME</replaceable>
        </screen>
    </sect2>

    <sect2>
      <title>Separate replication network</title>

      <para><emphasis role="strong">Optional</emphasis></para>
      <para>
        Ganeti uses DRBD to mirror the disks of the virtual instances
        between nodes. To use a dedicated network interface for this
        (in order to improve performance or to enhance security) you
        need to configure an additional interface for each node.  Use
        the <option>-s</option> option with
        <computeroutput>gnt-cluster init</computeroutput> and
        <computeroutput>gnt-node add</computeroutput> to specify the
        IP address of this secondary interface to use for each
        node. Note that if you specified this option at cluster setup
        time, you must afterwards use it for every node add operation.
      </para>
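      <para>
        For example, assuming an illustrative secondary network of
        <literal>192.168.2.0/24</literal>:
      </para>
<screen>
gnt-cluster init -s 192.168.2.1 <replaceable>CLUSTERNAME</replaceable>
gnt-node add -s 192.168.2.2 node2.example.com
</screen>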

    </sect2>

    <sect2>
      <title>Testing the setup</title>

      <para>
        Execute the <computeroutput>gnt-node list</computeroutput>
        command to see all nodes in the cluster:
      <screen>
# gnt-node list
Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
node1.example.com 197404 197404   2047  1896   125     0     0
      </screen>
    </para>
  </sect2>

  <sect1>
    <title>Setting up and managing virtual instances</title>
    <sect2>
      <title>Setting up virtual instances</title>
      <para>
        This step shows how to set up a virtual instance with either
        non-mirrored disks (<computeroutput>plain</computeroutput>) or
        with network-mirrored disks
        (<computeroutput>drbd</computeroutput>).  All
        commands need to be executed on the Ganeti master node (the
        one on which <computeroutput>gnt-cluster init</computeroutput>
        was run).  Verify that the OS scripts are present on all
        cluster nodes with <computeroutput>gnt-os
        list</computeroutput>.
      </para>
      <para>
        To create a virtual instance, you need a hostname which is
        resolvable (DNS or <filename>/etc/hosts</filename> on all
        nodes). The following command will create a non-mirrored
        instance for you:
      </para>
      <screen>
gnt-instance add --node=node1 -o debootstrap -t plain inst1.example.com
* creating instance disks...
adding instance inst1.example.com to cluster config
Waiting for instance inst1.example.com to sync disks.
Instance inst1.example.com's disks are in sync.
creating os for instance inst1.example.com on node node1.example.com
* running the instance OS create scripts...
      </screen>

      <para>
        The above instance will have no network interface enabled.
        You can access it over the virtual console with
        <computeroutput>gnt-instance console
        <literal>inst1</literal></computeroutput>. There is no