Ganeti installation tutorial
============================

Documents Ganeti version |version|

.. contents::

.. highlight:: text

Introduction
------------

Ganeti is a cluster virtualization management system based on Xen or
KVM. This document explains how to bootstrap a Ganeti node (Xen *dom0*,
the host Linux system for KVM), create a running cluster and install
virtual instances (Xen *domUs*, KVM guests).  You need to repeat most of
the steps in this document for every node you want to install, but of
course we recommend creating some semi-automatic procedure if you plan
to deploy Ganeti on a medium/large scale.

A basic Ganeti terminology glossary is provided in the introductory
section of the :doc:`admin`. Please refer to that document if you are
uncertain about the terms we are using.

Ganeti has been developed for Linux and should be distribution-agnostic.
This documentation will use Debian Lenny as an example system but the
examples can be translated to any other distribution. You are expected
to be familiar with your distribution, its package management system,
and Xen or KVM before trying to use Ganeti.

This document is divided into two main sections:

- Installation of the base system and base components

- Configuration of the environment for Ganeti

Each of these is divided into sub-sections. While a full Ganeti system
will need all of the steps specified, some are not strictly required for
every environment. Which ones they are, and why, is specified in the
corresponding sections.

Installing the base system and base components
----------------------------------------------

Hardware requirements
+++++++++++++++++++++

Any system supported by your Linux distribution is fine. 64-bit systems
are better as they can support more memory.

Any disk drive recognized by Linux (``IDE``/``SCSI``/``SATA``/etc.) is
supported in Ganeti. Note that no shared storage (e.g. ``SAN``) is
needed to get high-availability features (but of course, one can be used
to store the images). It is highly recommended to use more than one disk
drive to improve speed. But Ganeti also works with one disk per machine.

Installing the base system
++++++++++++++++++++++++++

**Mandatory** on all nodes.

It is advised to start with a clean, minimal install of the operating
system. The only requirement you need to be aware of at this stage is to
partition so that there is enough space for a big (**minimum** 20GiB)
LVM volume group which will then host your instance filesystems, if you
want to use all Ganeti features. The volume group name Ganeti uses (by
default) is ``xenvg``.

You can also use file-based storage only, without LVM, but this setup is
not detailed in this document.

If you choose to use RBD-based instances, there's no need for LVM
provisioning. However, this feature is experimental, and is not
recommended for production clusters.

While you can use an existing system, please note that the Ganeti
installation is intrusive in terms of changes to the system
configuration, and it's best to use a newly-installed system without
important data on it.

Also, for best results, it's advised that the nodes have hardware and
software configurations that are as similar as possible. This will make
administration much easier.

Hostname issues
~~~~~~~~~~~~~~~

Note that Ganeti requires the hostnames of the systems (i.e. what the
``hostname`` command outputs) to be fully-qualified names, not short
names. In other words, you should use *node1.example.com* as a hostname
and not just *node1*.

.. admonition:: Debian

   Debian Lenny and Etch configure the hostname differently than you
   need it for Ganeti. For example, this is what Etch puts in
   ``/etc/hosts`` in certain situations::

     127.0.0.1       localhost
     127.0.1.1       node1.example.com node1

   but for Ganeti you need to have::

     127.0.0.1       localhost
     192.0.2.1     node1.example.com node1

   replacing ``192.0.2.1`` with your node's address. Also, the file
   ``/etc/hostname`` which configures the hostname of the system
   should contain ``node1.example.com`` and not just ``node1`` (you
   need to run the command ``/etc/init.d/hostname.sh start`` after
   changing the file).

.. admonition:: Why a fully qualified host name

   Although most distributions use only the short name in the
   /etc/hostname file, we still think Ganeti nodes should use the full
   name. The reason for this is that calling 'hostname --fqdn' requires
   the resolver library to work, and is only a heuristic guess at your
   domain name. Since Ganeti can be used among other things to host DNS
   servers, we want to depend on them as little as possible, and we'd
   rather have the uname() syscall return the full node name.

   We haven't ever found any breakage in using a full hostname on a
   Linux system, and anyway we recommend having only a minimal
   installation on Ganeti nodes, and using instances (or other
   dedicated machines) to run the rest of your network services. By
   doing this you can change the /etc/hostname file to contain an FQDN
   without the fear of breaking anything unrelated.

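As a quick sanity check (a suggestion, not a Ganeti command; substitute
your own node name for *node1.example.com*), both of the following
should print the fully qualified name::

  # hostname
  node1.example.com
  # hostname --fqdn
  node1.example.com
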
Installing The Hypervisor
+++++++++++++++++++++++++

**Mandatory** on all nodes.

While Ganeti is developed with the ability to modularly run on different
virtualization environments in mind, the only two currently usable on a
live system are Xen and KVM. Supported Xen versions are: 3.0.3, 3.0.4
and 3.1.  Supported KVM versions are 72 and above.

Please follow your distribution's recommended way to install and set up
Xen, or install Xen from the upstream source, if you wish, following the
upstream manual. For KVM, make sure you have a KVM-enabled kernel and
the KVM tools.

After installing Xen, you need to reboot into your new system. On some
distributions this might involve configuring GRUB appropriately, whereas
others will configure it automatically when you install the respective
kernels. For KVM no reboot should be necessary.

.. admonition:: Xen on Debian

   Under Lenny or Etch you can install the relevant ``xen-linux-system``
   package, which will pull in both the hypervisor and the relevant
   kernel. Also, if you are installing a 32-bit Lenny/Etch, you should
   install the ``libc6-xen`` package (run ``apt-get install
   libc6-xen``).

Xen settings
~~~~~~~~~~~~

It's recommended that dom0 is restricted to a low amount of memory
(512MiB or 1GiB is reasonable) and that memory ballooning is disabled in
the file ``/etc/xen/xend-config.sxp`` by setting the value
``dom0-min-mem`` to 0, like this::

  (dom0-min-mem 0)

For optimum performance when running both CPU and I/O intensive
instances, it's also recommended that the dom0 is restricted to one CPU
only, for example by booting with the kernel parameter ``nosmp``.

It is recommended that you disable Xen's automatic save of virtual
machines at system shutdown and subsequent restore of them at reboot.
To do this, make sure the variable ``XENDOMAINS_SAVE`` in the file
``/etc/default/xendomains`` is set to an empty value.

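For example, the relevant line in ``/etc/default/xendomains`` would then
look like this (a sketch; the default value shipped by your distribution
may differ)::

  XENDOMAINS_SAVE=""
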
If you want to use live migration make sure you have, in the Xen config
file, something that allows the nodes to migrate instances between each
other. For example::

  (xend-relocation-server yes)
  (xend-relocation-port 8002)
  (xend-relocation-address '')
  (xend-relocation-hosts-allow '^192\\.0\\.2\\.[0-9]+$')

The second line assumes that the hypervisor parameter
``migration_port`` is set to 8002, otherwise modify it to match. The
last line assumes that all your nodes have secondary IPs in the
192.0.2.0/24 network, adjust it according to your setup.

.. admonition:: Debian

   Besides the ballooning change which you need to set in
   ``/etc/xen/xend-config.sxp``, you need to set the memory and nosmp
   parameters in the file ``/boot/grub/menu.lst``. You need to modify
   the variable ``xenhopt`` to add ``dom0_mem=1024M`` like this::

     ## Xen hypervisor options to use with the default Xen boot option
     # xenhopt=dom0_mem=1024M

   and the ``xenkopt`` needs to include the ``nosmp`` option like this::

     ## Xen Linux kernel options to use with the default Xen boot option
     # xenkopt=nosmp

   Any existing parameters can be left in place: it's ok to have
   ``xenkopt=console=tty0 nosmp``, for example. After modifying the
   files, you need to run::

     /sbin/update-grub

If you also want to run HVM instances with Ganeti and want VNC access to
the console of your instances, set the following two entries in
``/etc/xen/xend-config.sxp``::

  (vnc-listen '0.0.0.0') (vncpasswd '')

You need to restart the Xen daemon for these settings to take effect::

  /etc/init.d/xend restart

Selecting the instance kernel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After you have installed Xen, you need to tell Ganeti exactly what
kernel to use for the instances it will create. This is done by creating
a symlink from your actual kernel to ``/boot/vmlinuz-2.6-xenU``, and one
from your initrd to ``/boot/initrd-2.6-xenU`` [#defkernel]_. Note that
if you don't use an initrd for the domU kernel, you don't need to create
the initrd symlink.

.. admonition:: Debian

   After installation of the ``xen-linux-system`` package, you need to
   run (replace the exact version number with the one you have)::

     cd /boot
     ln -s vmlinuz-2.6.26-1-xen-amd64 vmlinuz-2.6-xenU
     ln -s initrd.img-2.6.26-1-xen-amd64 initrd-2.6-xenU

Installing DRBD
+++++++++++++++

Recommended on all nodes: DRBD_ is required if you want to use the high
availability (HA) features of Ganeti, but optional if you don't require
them or only run Ganeti on single-node clusters. You can upgrade a
non-HA cluster to an HA one later, but you might need to export and
re-import all your instances to take advantage of the new features.

.. _DRBD: http://www.drbd.org/

Supported DRBD versions: 8.0+. It's recommended to have at least version
8.0.12. Note that for versions 8.2 and newer you need to pass the
``usermode_helper=/bin/true`` parameter to the module, either by
configuring ``/etc/modules`` or when inserting it manually.

Now the bad news: unless your distribution already provides it,
installing DRBD might involve recompiling your kernel or at least
fiddling with it. Hopefully at least the Xen-ified kernel source to
start from will be provided (if you intend to use Xen).

The good news is that you don't need to configure DRBD at all. Ganeti
will do it for you for every instance you set up.  If you have the DRBD
utils installed and the module in your kernel you're fine. Please check
that your system is configured to load the module at every boot, and
that it passes the following option to the module:
``minor_count=NUMBER``. We recommend that you use 128 as the value of
the minor_count - this will allow you to use up to 64 instances in total
per node (both primary and secondary, when using only one disk per
instance). You can increase the number up to 255 if you need more
instances on a node.

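If you prefer not to list the options in ``/etc/modules``, a minimal
sketch of the equivalent ``modprobe`` configuration (the file name
``/etc/modprobe.d/drbd.conf`` is just an example) would be::

  # options applied whenever the drbd module is loaded
  options drbd minor_count=128 usermode_helper=/bin/true

Once the module is loaded, ``cat /proc/drbd`` should display the driver
version, confirming that it is available.
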
.. admonition:: Debian

   On Debian, you can just install (build) the DRBD module with the
   following commands, making sure you are running the target (Xen or
   KVM) kernel::

     apt-get install drbd8-source drbd8-utils
     m-a update
     m-a a-i drbd8
     echo drbd minor_count=128 usermode_helper=/bin/true >> /etc/modules
     depmod -a
     modprobe drbd minor_count=128 usermode_helper=/bin/true

   It is also recommended that you comment out the default resources in
   the ``/etc/drbd.conf`` file, so that the init script doesn't try to
   configure any drbd devices. You can do this by prefixing all
   *resource* lines in the file with the keyword *skip*, like this::

     skip {
       resource r0 {
         ...
       }
     }

     skip {
       resource "r1" {
         ...
       }
     }

Installing RBD
++++++++++++++

Recommended on all nodes: RBD_ is required if you want to create
instances with RBD disks residing inside a RADOS cluster (i.e. make use
of the ``rbd`` disk template). RBD-based instances can fail over or
migrate to any other node in the Ganeti cluster, enabling you to exploit
all of Ganeti's high availability (HA) features.

.. attention::
   Be careful though: rbd is still experimental! For now it is
   recommended only for testing purposes.  No sensitive data should be
   stored there.

.. _RBD: http://ceph.newdream.net/

You will need the ``rbd`` and ``libceph`` kernel modules, the RBD/Ceph
userspace utils (the ``ceph-common`` Debian package) and an appropriate
Ceph/RADOS configuration file on every VM-capable node.

You will also need a working RADOS Cluster accessible by the above
nodes.

RADOS Cluster
~~~~~~~~~~~~~

You will need a working RADOS Cluster accessible by all VM-capable nodes
to use the RBD template. For more information on setting up a RADOS
Cluster, refer to the `official docs <http://ceph.newdream.net/>`_.

If you want to use a pool for storing RBD disk images other than the
default (``rbd``), you should first create the pool in the RADOS
Cluster, and then set the corresponding rbd disk parameter named
``pool``.

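For example, a sketch of creating such a pool on the RADOS side (the
pool name ``ganeti`` is only a placeholder, and the exact command
depends on your Ceph version; older releases ship ``rados mkpool``,
newer ones use ``ceph osd pool create``)::

  rados mkpool ganeti

The ``pool`` disk parameter then needs to be pointed at that pool, for
example at cluster level via ``gnt-cluster modify``, assuming your
Ganeti version supports setting per-template disk parameters there.
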
Kernel Modules
~~~~~~~~~~~~~~

Unless your distribution already provides them, you might need to
compile the ``rbd`` and ``libceph`` modules from source. You will need
Linux Kernel 3.2 or above for the kernel modules. Alternatively, if you
want to run a less recent kernel or your kernel doesn't include them,
you will have to build them as external modules (from Linux Kernel
source 3.2 or above).

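A quick way to verify that the modules are available on a node (loading
``rbd`` should pull in ``libceph`` as a dependency)::

  modprobe rbd
  lsmod | grep rbd
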
Userspace Utils
~~~~~~~~~~~~~~~

The RBD template has been tested with ``ceph-common`` v0.38 and
above. We recommend using the latest version of ``ceph-common``.

.. admonition:: Debian

   On Debian, you can just install the RBD/Ceph userspace utils with
   the following command::

      apt-get install ceph-common

Configuration file
~~~~~~~~~~~~~~~~~~

You should also provide an appropriate configuration file
(``ceph.conf``) in ``/etc/ceph``. For the rbd userspace utils, you'll
only need to specify the IP addresses of the RADOS Cluster monitors.

.. admonition:: ceph.conf

   Sample configuration file::

    [mon.a]
           host = example_monitor_host1
           mon addr = 1.2.3.4:6789
    [mon.b]
           host = example_monitor_host2
           mon addr = 1.2.3.5:6789
    [mon.c]
           host = example_monitor_host3
           mon addr = 1.2.3.6:6789

For more information, please see the `Ceph Docs
<http://ceph.newdream.net/docs/latest/>`_.

Other required software
+++++++++++++++++++++++

See :doc:`install-quick`.

Setting up the environment for Ganeti
-------------------------------------

Configuring the network
+++++++++++++++++++++++

**Mandatory** on all nodes.

You can run Ganeti either in "bridge mode" or in "routed mode". In
bridge mode, the default, the instances' network interfaces will be
attached to a software bridge running in dom0. Xen by default creates
such a bridge at startup, but your distribution might have a different
way to do things, and you'll definitely need to manually set it up under
KVM.

Beware that the default name Ganeti uses is ``xen-br0`` (which was used
in Xen 2.0) while Xen 3.0 uses ``xenbr0`` by default. See the
`Initializing the cluster`_ section to learn how to choose a different
bridge, or not to use one at all and use "routed mode".

In order to use "routed mode" under Xen, you'll need to change the
relevant parameters in the Xen config file. Under KVM instead, no config
change is necessary, but you still need to set up your network
interfaces correctly.

By default, under KVM, the "link" parameter you specify per NIC will
represent, if non-empty, a different routing table name or number to use
for your instances. This allows isolation between different instance
groups, and different routing policies between node traffic and instance
traffic.

You will need to configure the basic routes and rules of your routing
tables outside of Ganeti. The vif scripts will only add /32 routes to
your instances, through their interface, in the table you specified
(under KVM, and in the main table under Xen).

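For example, a minimal sketch of that out-of-band setup for routed mode
under KVM (the table number 100, the gateway 192.0.2.254 and the
instance network 192.0.2.0/24 are placeholders for your own values)::

  echo "100 instances" >> /etc/iproute2/rt_tables
  ip route add default via 192.0.2.254 table instances
  ip rule add from 192.0.2.0/24 table instances

The table can then be referenced from the "link" NIC parameter, as
described in `Initializing the cluster`_.
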
.. admonition:: Bridging under Debian

   The recommended way to configure the Xen bridge is to edit your
   ``/etc/network/interfaces`` file and substitute your normal
   Ethernet stanza with the following snippet::

     auto xen-br0
     iface xen-br0 inet static
        address YOUR_IP_ADDRESS
        netmask YOUR_NETMASK
        network YOUR_NETWORK
        broadcast YOUR_BROADCAST_ADDRESS
        gateway YOUR_GATEWAY
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

The following commands need to be executed on the local console::

  ifdown eth0
  ifup xen-br0

To check if the bridge is set up, use the ``ip`` and ``brctl show``
commands::

  # ip a show xen-br0
  9: xen-br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue
      link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
      inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
      inet6 fe80::220:fcff:fe1e:d55d/64 scope link
         valid_lft forever preferred_lft forever

  # brctl show xen-br0
  bridge name     bridge id               STP enabled     interfaces
  xen-br0         8000.0020fc1ed55d       no              eth0

.. _configure-lvm-label:

Configuring LVM
+++++++++++++++

**Mandatory** on all nodes.

The volume group is required to be at least 20GiB.

If you haven't configured your LVM volume group at install time you need
to do it before trying to initialize the Ganeti cluster. This is done by
formatting the devices/partitions you want to use for it and then adding
them to the relevant volume group::

  pvcreate /dev/sda3
  vgcreate xenvg /dev/sda3

or::

  pvcreate /dev/sdb1
  pvcreate /dev/sdc1
  vgcreate xenvg /dev/sdb1 /dev/sdc1

If you want to add a device later you can do so with the *vgextend*
command::

  pvcreate /dev/sdd1
  vgextend xenvg /dev/sdd1

Optional: it is recommended to configure LVM not to scan the DRBD
devices for physical volumes. This can be accomplished by editing
``/etc/lvm/lvm.conf`` and adding the ``/dev/drbd[0-9]+`` regular
expression to the ``filter`` variable, like this::

  filter = ["r|/dev/cdrom|", "r|/dev/drbd[0-9]+|"]

Note that Ganeti provides a helper script, ``lvmstrap``, which will
erase and configure as LVM any disk that is not in use on your system.
This is dangerous, so it's recommended to read its ``--help`` output
before using it.

Installing Ganeti
+++++++++++++++++

**Mandatory** on all nodes.

It's now time to install the Ganeti software itself.  Download the
source from the project page at `<http://code.google.com/p/ganeti/>`_,
and install it (replace 2.0.0 with the latest version)::

  tar xvzf ganeti-2.0.0.tar.gz
  cd ganeti-2.0.0
  ./configure --localstatedir=/var --sysconfdir=/etc
  make
  make install
  mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export

You also need to copy the file ``doc/examples/ganeti.initd`` from the
source archive to ``/etc/init.d/ganeti`` and register it with your
distribution's startup scripts, for example in Debian::

  update-rc.d ganeti defaults 20 80

In order to automatically restart failed instances, you need to set up a
cron job to run the *ganeti-watcher* command. A sample cron file is
provided in the source at ``doc/examples/ganeti.cron`` and you can copy
that (adjusting the paths if necessary) to ``/etc/cron.d/ganeti``.

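For example (a sketch, assuming you are still inside the unpacked source
directory)::

  cp doc/examples/ganeti.initd /etc/init.d/ganeti
  chmod +x /etc/init.d/ganeti
  cp doc/examples/ganeti.cron /etc/cron.d/ganeti
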
What gets installed
~~~~~~~~~~~~~~~~~~~

The above ``make install`` invocation, or installing via your
distribution's mechanisms, will install on the system:

- a set of Python libraries under the *ganeti* namespace (depending on
  the Python version this can be located in either
  ``lib/python-$ver/site-packages`` or various other locations)
- a set of programs under ``/usr/local/sbin`` or ``/usr/sbin``
- man pages for the above programs
- a set of tools under the ``lib/ganeti/tools`` directory
- an example iallocator script (see the admin guide for details) under
  ``lib/ganeti/iallocators``
- a cron job that is needed for cluster maintenance
- an init script for automatic startup of Ganeti daemons
- a bash completion script (provided but not installed automatically by
  ``make install``) that hopefully will ease working with the many
  cluster commands

Installing the Operating System support packages
++++++++++++++++++++++++++++++++++++++++++++++++

**Mandatory** on all nodes.

To be able to install instances you need to have an Operating System
installation script. An example OS that works under Debian and can
install Debian and Ubuntu instance OSes is provided on the project web
site.  Download it from the project page and follow the instructions in
the ``README`` file.  Here is the installation procedure (replace 0.9
with the latest version that is compatible with your Ganeti version)::

  cd /usr/local/src/
  wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-0.9.tar.gz
  tar xzf ganeti-instance-debootstrap-0.9.tar.gz
  cd ganeti-instance-debootstrap-0.9
  ./configure
  make
  make install

In order to use this OS definition, you need to have internet access
from your nodes and have the *debootstrap*, *dump* and *restore*
commands installed on all nodes. Also, if the OS is configured to
partition the instance's disk in
``/etc/default/ganeti-instance-debootstrap``, you will need *kpartx*
installed.

.. admonition:: Debian

   Use this command on all nodes to install the required packages::

     apt-get install debootstrap dump kpartx

.. admonition:: KVM

   In order for debootstrap instances to be able to shut down cleanly
   they must have basic ACPI support installed inside the instance.
   Which packages are needed depends on the exact flavor of Debian or
   Ubuntu you're installing, but the example defaults file has a
   commented out configuration line that works for Debian Lenny and
   Squeeze::

     EXTRA_PKGS="acpi-support-base,console-tools,udev"

   ``kbd`` can be used instead of ``console-tools``, and more packages
   can be added, of course, if needed.

Alternatively, you can create your own OS definitions. See the manpage
:manpage:`ganeti-os-interface`.

Initializing the cluster
++++++++++++++++++++++++

**Mandatory** once per cluster, on the first node.

The last step is to initialize the cluster. After you have repeated the
above process on all of your nodes, choose one as the master, and
execute::

  gnt-cluster init <CLUSTERNAME>

The *CLUSTERNAME* is a hostname, which must be resolvable (e.g. it must
exist in DNS or in ``/etc/hosts``) by all the nodes in the cluster. For
a multi-node cluster you must choose a name different from any of the
nodes' names. In general the best choice is to have a unique name for a
cluster, even if it consists of only one machine, as you will be able to
expand it later without any problems. Please note that the hostname used
for this must resolve to an IP address reserved **exclusively** for this
purpose, and cannot be the name of the first (master) node.

If you want to use a bridge which is not ``xen-br0``, or no bridge at
all, change it with the ``--nic-parameters`` option. For example, to
bridge on ``br0`` you can say::

  --nic-parameters link=br0

Or to not bridge at all, and use a separate routing table::

  --nic-parameters mode=routed,link=100

If you don't have a xen-br0 interface you also have to specify a
different network interface which will get the cluster IP, on the master
node, by using the ``--master-netdev <device>`` option.

You can use a different name than ``xenvg`` for the volume group (but
note that the name must be identical on all nodes). In this case you
need to specify it by passing the *--vg-name <VGNAME>* option to
``gnt-cluster init``.

To set up the cluster as a Xen HVM cluster, use the
``--enabled-hypervisors=xen-hvm`` option to enable the HVM hypervisor
(you can also add ``,xen-pvm`` to enable the PVM one too). You will also
need to create the VNC cluster password file
``/etc/ganeti/vnc-cluster-password`` which contains one line with the
default VNC password for the cluster.

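For example (a sketch; the password shown is only a placeholder, pick
your own and keep the file readable by root only)::

  echo "replace-with-your-vnc-password" > /etc/ganeti/vnc-cluster-password
  chmod 600 /etc/ganeti/vnc-cluster-password
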
To set up the cluster for KVM-only usage (KVM and Xen cannot be mixed),
pass ``--enabled-hypervisors=kvm`` to the init command.

You can also invoke the command with the ``--help`` option in order to
see all the possibilities.

Hypervisor/Network/Cluster parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Please note that the default hypervisor/network/cluster parameters may
not be correct for your environment. Carefully check them, and change
them at cluster init time, or later with ``gnt-cluster modify``.

Your instance types, networking environment, hypervisor type and version
may all affect what kind of parameters should be used on your cluster.

For example, KVM instances are by default configured to use a host
kernel, and to be reached via serial console, which works nicely for
Linux paravirtualized instances. If you want fully virtualized instances
you may want to handle their kernel inside the instance, and to use VNC.

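A sketch of what such a change could look like (the parameter names
``kernel_path`` and ``vnc_bind_address`` are assumptions here; check the
:manpage:`gnt-instance(8)` manpage for the hypervisor parameters your
version actually supports)::

  gnt-cluster modify -H kvm:kernel_path=,vnc_bind_address=0.0.0.0

An empty ``kernel_path`` would make instances boot from their own disks,
and a VNC bind address exposes the graphical console over VNC.
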
Joining the nodes to the cluster
++++++++++++++++++++++++++++++++

**Mandatory** for all the other nodes.

After you have initialized your cluster you need to join the other nodes
to it. You can do so by executing the following command on the master
node::

  gnt-node add <NODENAME>

Separate replication network
++++++++++++++++++++++++++++

**Optional**

Ganeti uses DRBD to mirror the disks of the virtual instances between
nodes. To use a dedicated network interface for this (in order to
improve performance or to enhance security) you need to configure an
additional interface for each node.  Use the *-s* option with
``gnt-cluster init`` and ``gnt-node add`` to specify the IP address of
this secondary interface to use for each node. Note that if you
specified this option at cluster setup time, you must afterwards use it
for every node add operation.

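For example (a sketch; 192.0.2.10 and 192.0.2.11 stand in for the
addresses of the dedicated replication interfaces of the respective
nodes)::

  gnt-cluster init -s 192.0.2.10 cluster.example.com
  gnt-node add -s 192.0.2.11 node2.example.com
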
Testing the setup
+++++++++++++++++

Execute the ``gnt-node list`` command to see all nodes in the cluster::

  # gnt-node list
  Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
  node1.example.com 197404 197404   2047  1896   125     0     0

The above shows a couple of things:

- The various Ganeti daemons can talk to each other
- Ganeti can examine the storage of the node (DTotal/DFree)
- Ganeti can talk to the selected hypervisor (MTotal/MNode/MFree)

Cluster burnin
~~~~~~~~~~~~~~

Ganeti provides a tool called :command:`burnin` that can test most of
the Ganeti functionality. The tool is installed under the
``lib/ganeti/tools`` directory (either under ``/usr`` or ``/usr/local``
based on the installation method). See more details under
:ref:`burnin-label`.

Further steps
-------------

You can now proceed either to the :doc:`admin`, or read the manpages of
the various commands (:manpage:`ganeti(7)`, :manpage:`gnt-cluster(8)`,
:manpage:`gnt-node(8)`, :manpage:`gnt-instance(8)`,
:manpage:`gnt-job(8)`).

.. rubric:: Footnotes

.. [#defkernel] The kernel and initrd paths can be changed at either
   cluster level (which changes the default for all instances) or at
   instance level.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: