Ganeti installation tutorial
============================

Documents Ganeti version |version|

.. contents::

.. highlight:: text

Introduction
------------

Ganeti is a cluster virtualization management system based on Xen or
KVM. This document explains how to bootstrap a Ganeti node (Xen *dom0*,
the host Linux system for KVM), create a running cluster and install
virtual instances (Xen *domUs*, KVM guests).  You need to repeat most of
the steps in this document for every node you want to install, but of
course we recommend creating some semi-automatic procedure if you plan
to deploy Ganeti on a medium/large scale.

A basic Ganeti terminology glossary is provided in the introductory
section of the :doc:`admin`. Please refer to that document if you are
uncertain about the terms we are using.

Ganeti has been developed for Linux and should be distribution-agnostic.
This documentation will use Debian Lenny as an example system but the
examples can be translated to any other distribution. You are expected
to be familiar with your distribution, its package management system,
and Xen or KVM before trying to use Ganeti.

This document is divided into two main sections:

- Installation of the base system and base components

- Configuration of the environment for Ganeti

Each of these is divided into sub-sections. While a full Ganeti system
will need all of the steps specified, some are not strictly required for
every environment. Which ones they are, and why, is specified in the
corresponding sections.

Installing the base system and base components
----------------------------------------------

Hardware requirements
+++++++++++++++++++++

Any system supported by your Linux distribution is fine. 64-bit systems
are better as they can support more memory.

Any disk drive recognized by Linux (``IDE``/``SCSI``/``SATA``/etc.) is
supported in Ganeti. Note that no shared storage (e.g. ``SAN``) is
needed to get high-availability features (but of course, one can be used
to store the images). It is highly recommended to use more than one disk
drive to improve speed. But Ganeti also works with one disk per machine.

Installing the base system
++++++++++++++++++++++++++

**Mandatory** on all nodes.

It is advised to start with a clean, minimal install of the operating
system. The only requirement you need to be aware of at this stage is to
partition leaving enough space for a big (**minimum** 20GiB) LVM volume
group which will then host your instance filesystems, if you want to use
all Ganeti features. The volume group name Ganeti uses (by default) is
``xenvg``.

You can also use file-based storage only, without LVM, but this setup is
not detailed in this document.

While you can use an existing system, please note that the Ganeti
installation is intrusive in terms of changes to the system
configuration, and it's best to use a newly-installed system without
important data on it.

Also, for best results, it's advised that the nodes have hardware and
software configurations as similar as possible. This will make
administration much easier.

Hostname issues
~~~~~~~~~~~~~~~

Note that Ganeti requires the hostnames of the systems (i.e. what the
``hostname`` command outputs) to be fully-qualified names, not short
names. In other words, you should use *node1.example.com* as a hostname
and not just *node1*.

.. admonition:: Debian

   Debian Lenny and Etch configure the hostname differently than you
   need it for Ganeti. For example, this is what Etch puts in
   ``/etc/hosts`` in certain situations::

     127.0.0.1       localhost
     127.0.1.1       node1.example.com node1

   but for Ganeti you need to have::

     127.0.0.1       localhost
     192.0.2.1       node1.example.com node1

   replacing ``192.0.2.1`` with your node's address. Also, the file
   ``/etc/hostname``, which configures the hostname of the system,
   should contain ``node1.example.com`` and not just ``node1`` (you
   need to run the command ``/etc/init.d/hostname.sh start`` after
   changing the file).

.. admonition:: Why a fully qualified host name

   Although most distributions use only the short name in the
   ``/etc/hostname`` file, we still think Ganeti nodes should use the
   full name. The reason for this is that calling ``hostname --fqdn``
   requires the resolver library to work, and is thus a guess via
   heuristics at what your domain name is. Since Ganeti can be used
   among other things to host DNS servers, we don't want to depend on
   them as much as possible, and we'd rather have the uname() syscall
   return the full node name.

   We have never found any breakage from using a full hostname on a
   Linux system. In any case we recommend having only a minimal
   installation on Ganeti nodes, and using instances (or other
   dedicated machines) to run the rest of your network services. By
   doing this you can change the ``/etc/hostname`` file to contain an
   FQDN without the fear of breaking anything unrelated.
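
As a quick sanity check before installing Ganeti, you can verify on each
node that the hostname looks fully qualified. This is only an
illustrative sketch (the ``is_fqdn`` helper is ours, not a Ganeti
tool)::

  # succeed only if the name contains a dot, i.e. looks fully qualified
  is_fqdn() {
    case "$1" in
      *.*) return 0 ;;
      *)   return 1 ;;
    esac
  }

  is_fqdn "$(hostname)" || echo "warning: hostname is not an FQDN" >&2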

Installing The Hypervisor
+++++++++++++++++++++++++

**Mandatory** on all nodes.

While Ganeti is developed with the ability to modularly run on different
virtualization environments in mind, the only two currently usable on a
live system are Xen and KVM. Supported Xen versions are 3.0.3, 3.0.4
and 3.1. Supported KVM versions are 72 and above.

Please follow your distribution's recommended way to install and set up
Xen, or install Xen from the upstream source, if you wish, following
their manual. For KVM, make sure you have a KVM-enabled kernel and the
KVM tools.

After installing Xen, you need to reboot into your new system. On some
distributions this might involve configuring GRUB appropriately, whereas
others will configure it automatically when you install the respective
kernels. For KVM no reboot should be necessary.

.. admonition:: Xen on Debian

   Under Lenny or Etch you can install the relevant ``xen-linux-system``
   package, which will pull in both the hypervisor and the relevant
   kernel. Also, if you are installing a 32-bit Lenny/Etch, you should
   install the ``libc6-xen`` package (run ``apt-get install
   libc6-xen``).

Xen settings
~~~~~~~~~~~~

It's recommended that dom0 is restricted to a low amount of memory
(512MiB or 1GiB is reasonable) and that memory ballooning is disabled in
the file ``/etc/xen/xend-config.sxp`` by setting the value
``dom0-min-mem`` to 0, like this::

  (dom0-min-mem 0)

For optimum performance when running both CPU and I/O intensive
instances, it's also recommended that the dom0 is restricted to one CPU
only, for example by booting with the kernel parameter ``nosmp``.

It is recommended that you disable Xen's automatic save of virtual
machines at system shutdown and subsequent restore of them at reboot.
To achieve this, make sure the variable ``XENDOMAINS_SAVE`` in the file
``/etc/default/xendomains`` is set to an empty value.
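
For example, the relevant line in ``/etc/default/xendomains`` would then
read (``XENDOMAINS_SAVE`` is the stock variable from the Debian
xendomains script; an empty value disables the save/restore cycle)::

  XENDOMAINS_SAVE=""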

If you want to use live migration, make sure you have, in the Xen config
file, something that allows the nodes to migrate instances between each
other. For example::

  (xend-relocation-server yes)
  (xend-relocation-port 8002)
  (xend-relocation-address '')
  (xend-relocation-hosts-allow '^192\\.0\\.2\\.[0-9]+$')

The second line assumes that the hypervisor parameter
``migration_port`` is set to 8002; otherwise modify it to match. The
last line assumes that all your nodes have secondary IPs in the
192.0.2.0/24 network; adjust it to your setup.

.. admonition:: Debian

   Besides the ballooning change which you need to set in
   ``/etc/xen/xend-config.sxp``, you need to set the memory and nosmp
   parameters in the file ``/boot/grub/menu.lst``. You need to modify
   the variable ``xenhopt`` to add ``dom0_mem=1024M`` like this::

     ## Xen hypervisor options to use with the default Xen boot option
     # xenhopt=dom0_mem=1024M

   and the ``xenkopt`` needs to include the ``nosmp`` option like this::

     ## Xen Linux kernel options to use with the default Xen boot option
     # xenkopt=nosmp

   Any existing parameters can be left in place: it's ok to have
   ``xenkopt=console=tty0 nosmp``, for example. After modifying the
   files, you need to run::

     /sbin/update-grub

If you want to run HVM instances too with Ganeti and want VNC access to
the console of your instances, set the following two entries in
``/etc/xen/xend-config.sxp``::

  (vnc-listen '0.0.0.0')
  (vncpasswd '')

You need to restart the Xen daemon for these settings to take effect::

  /etc/init.d/xend restart

Selecting the instance kernel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After you have installed Xen, you need to tell Ganeti exactly what
kernel to use for the instances it will create. This is done by creating
a symlink from your actual kernel to ``/boot/vmlinuz-2.6-xenU``, and one
from your initrd to ``/boot/initrd-2.6-xenU`` [#defkernel]_. Note that
if you don't use an initrd for the domU kernel, you don't need to create
the initrd symlink.

.. admonition:: Debian

   After installation of the ``xen-linux-system`` package, you need to
   run (replace the exact version number with the one you have)::

     cd /boot
     ln -s vmlinuz-2.6.26-1-xen-amd64 vmlinuz-2.6-xenU
     ln -s initrd.img-2.6.26-1-xen-amd64 initrd-2.6-xenU

Installing DRBD
+++++++++++++++

Recommended on all nodes: DRBD_ is required if you want to use the high
availability (HA) features of Ganeti, but optional if you don't require
them or only run Ganeti on single-node clusters. You can upgrade a
non-HA cluster to an HA one later, but you might need to export and
re-import all your instances to take advantage of the new features.

.. _DRBD: http://www.drbd.org/

Supported DRBD versions: 8.0+. It's recommended to have at least version
8.0.12. Note that for version 8.2 and newer it is needed to pass the
``usermode_helper=/bin/true`` parameter to the module, either by
configuring ``/etc/modules`` or when inserting it manually.

Now the bad news: unless your distribution already provides it,
installing DRBD might involve recompiling your kernel or at least
fiddling with it. Hopefully at least the Xen-ified kernel source to
start from will be provided (if you intend to use Xen).

The good news is that you don't need to configure DRBD at all. Ganeti
will do it for you for every instance you set up. If you have the DRBD
utils installed and the module in your kernel, you're fine. Please check
that your system is configured to load the module at every boot, and
that it passes the following option to the module:
``minor_count=NUMBER``. We recommend that you use 128 as the value of
the minor_count - this will allow you to use up to 64 instances in total
per node (both primary and secondary, when using only one disk per
instance). You can increase the number up to 255 if you need more
instances on a node.
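
The sizing rule above can be written out as a bit of shell arithmetic.
This is only a sketch of the document's rule of thumb, not a Ganeti
tool; with more disks per instance, scale accordingly::

  minor_count=128
  disks_per_instance=1
  # each instance held on a node, whether as primary or secondary,
  # consumes DRBD minors there; per the rule above, 128 minors give
  # room for 64 instances in total per node with one disk each
  max_instances=$(( minor_count / (2 * disks_per_instance) ))
  echo "$max_instances"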

.. admonition:: Debian

   On Debian, you can just install (build) the DRBD module with the
   following commands, making sure you are running the target (Xen or
   KVM) kernel::

     apt-get install drbd8-source drbd8-utils
     m-a update
     m-a a-i drbd8
     echo drbd minor_count=128 usermode_helper=/bin/true >> /etc/modules
     depmod -a
     modprobe drbd minor_count=128 usermode_helper=/bin/true

   It is also recommended that you comment out the default resources in
   the ``/etc/drbd.conf`` file, so that the init script doesn't try to
   configure any drbd devices. You can do this by prefixing all
   *resource* lines in the file with the keyword *skip*, like this::

     skip {
       resource r0 {
         ...
       }
     }

     skip {
       resource "r1" {
         ...
       }
     }

Other required software
+++++++++++++++++++++++

See :doc:`install-quick`.

Setting up the environment for Ganeti
-------------------------------------

Configuring the network
+++++++++++++++++++++++

**Mandatory** on all nodes.

You can run Ganeti either in "bridge mode" or in "routed mode". In
bridge mode, the default, the instances' network interfaces will be
attached to a software bridge running in dom0. Xen by default creates
such a bridge at startup, but your distribution might have a different
way to do things, and you'll definitely need to set it up manually under
KVM.

Beware that the default name Ganeti uses is ``xen-br0`` (which was used
in Xen 2.0) while Xen 3.0 uses ``xenbr0`` by default. See the
`Initializing the cluster`_ section to learn how to choose a different
bridge, or not to use one at all and use "routed mode".

In order to use "routed mode" under Xen, you'll need to change the
relevant parameters in the Xen config file. Under KVM instead, no config
change is necessary, but you still need to set up your network
interfaces correctly.

By default, under KVM, the "link" parameter you specify per NIC will
represent, if non-empty, a different routing table name or number to use
for your instances. This allows isolation between different instance
groups, and different routing policies between node traffic and instance
traffic.

You will need to configure your routing table's basic routes and rules
outside of Ganeti. The vif scripts will only add /32 routes to your
instances, through their interface, in the table you specified (under
KVM, and in the main table under Xen).
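
To make this concrete, here is a sketch of the kind of route such a vif
script ends up installing for one instance. The helper function and all
values (instance IP, interface, table number) are illustrative, not part
of Ganeti::

  # compose the /32 route added for a routed instance's interface
  build_route_cmd() {
    ip="$1"; dev="$2"; table="$3"
    echo "ip route add ${ip}/32 dev ${dev} table ${table}"
  }

  # for an instance at 192.0.2.10 on tap0, using routing table 100:
  build_route_cmd 192.0.2.10 tap0 100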

.. admonition:: Bridging issues with certain kernels

    Some kernel versions (e.g. 2.6.32) have an issue where the bridge
    will automatically change its ``MAC`` address to the lower-numbered
    slave on port addition and removal. This means that, depending on
    the ``MAC`` address of the actual NIC on the node and the addresses
    of the instances, it could be that starting, stopping or migrating
    instances will lead to timeouts due to the address of the bridge
    (and thus node itself) changing.

    To prevent this, it's enough to set the bridge manually to a
    specific ``MAC`` address, which will disable this automatic address
    change. In Debian, this can be done as follows in the bridge
    configuration snippet::

      up ip link set addr $(cat /sys/class/net/$IFACE/address) dev $IFACE

    which will "set" the bridge address to the initial one, disallowing
    changes.

.. admonition:: Bridging under Debian

   The recommended way to configure the Xen bridge is to edit your
   ``/etc/network/interfaces`` file and substitute your normal
   Ethernet stanza with the following snippet::

     auto xen-br0
     iface xen-br0 inet static
        address YOUR_IP_ADDRESS
        netmask YOUR_NETMASK
        network YOUR_NETWORK
        broadcast YOUR_BROADCAST_ADDRESS
        gateway YOUR_GATEWAY
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        # example for setting manually the bridge address to the eth0 NIC
        up ip link set addr $(cat /sys/class/net/eth0/address) dev $IFACE

The following commands need to be executed on the local console to bring
the bridge up::

  ifdown eth0
  ifup xen-br0

To check if the bridge is set up, use the ``ip`` and ``brctl show``
commands::

  # ip a show xen-br0
  9: xen-br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue
      link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
      inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
      inet6 fe80::220:fcff:fe1e:d55d/64 scope link
         valid_lft forever preferred_lft forever

  # brctl show xen-br0
  bridge name     bridge id               STP enabled     interfaces
  xen-br0         8000.0020fc1ed55d       no              eth0

.. _configure-lvm-label:

Configuring LVM
+++++++++++++++

**Mandatory** on all nodes.

The volume group is required to be at least 20GiB.

If you haven't configured your LVM volume group at install time, you
need to do it before trying to initialize the Ganeti cluster. This is
done by formatting the devices/partitions you want to use for it and
then adding them to the relevant volume group::

  pvcreate /dev/sda3
  vgcreate xenvg /dev/sda3

or::

  pvcreate /dev/sdb1
  pvcreate /dev/sdc1
  vgcreate xenvg /dev/sdb1 /dev/sdc1

If you want to add a device later you can do so with the *vgextend*
command::

  pvcreate /dev/sdd1
  vgextend xenvg /dev/sdd1

Optional: it is recommended to configure LVM not to scan the DRBD
devices for physical volumes. This can be accomplished by editing
``/etc/lvm/lvm.conf`` and adding the ``/dev/drbd[0-9]+`` regular
expression to the ``filter`` variable, like this::

  filter = ["r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]

Note that Ganeti provides a helper script, ``lvmstrap``, which will
erase and configure as LVM any disk not in use on your system. This is
dangerous, and it's recommended to read its ``--help`` output if you
want to use it.
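
To double-check the volume group before initializing the cluster, you
can verify that it has enough free space. The small helper below is only
an illustration of parsing ``vgs`` output (the ``enough_space`` function
is ours, and it assumes ``vgs --noheadings --units g -o vg_free xenvg``
prints something like ``  500.00g``)::

  min_gib=20
  enough_space() {
    set -- $1                 # word splitting trims the whitespace
    free="${1%g}"             # drop the trailing unit
    free="${free%%.*}"        # keep the integer part
    [ "${free:-0}" -ge "$min_gib" ]
  }

  enough_space "$(vgs --noheadings --units g -o vg_free xenvg)" \
    || echo "xenvg has less than ${min_gib}GiB free" >&2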

Installing Ganeti
+++++++++++++++++

**Mandatory** on all nodes.

It's now time to install the Ganeti software itself.  Download the
source from the project page at `<http://code.google.com/p/ganeti/>`_,
and install it (replace 2.0.0 with the latest version)::

  tar xvzf ganeti-2.0.0.tar.gz
  cd ganeti-2.0.0
  ./configure --localstatedir=/var --sysconfdir=/etc
  make
  make install
  mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export

You also need to copy the file ``doc/examples/ganeti.initd`` from the
source archive to ``/etc/init.d/ganeti`` and register it with your
distribution's startup scripts, for example in Debian::

  update-rc.d ganeti defaults 20 80

In order to automatically restart failed instances, you need to set up a
cron job to run the *ganeti-watcher* command. A sample cron file is
provided in the source at ``doc/examples/ganeti.cron``; you can copy
that (adjusting the path if needed) to ``/etc/cron.d/ganeti``.

What gets installed
~~~~~~~~~~~~~~~~~~~

The above ``make install`` invocation, or installing via your
distribution mechanisms, will install on the system:

- a set of python libraries under the *ganeti* namespace (depending on
  the python version this can be located in either
  ``lib/python-$ver/site-packages`` or various other locations)
- a set of programs under ``/usr/local/sbin`` or ``/usr/sbin``
- man pages for the above programs
- a set of tools under the ``lib/ganeti/tools`` directory
- an example iallocator script (see the admin guide for details) under
  ``lib/ganeti/iallocators``
- a cron job that is needed for cluster maintenance
- an init script for automatic startup of Ganeti daemons
- provided but not installed automatically by ``make install`` is a bash
  completion script that hopefully will ease working with the many
  cluster commands

Installing the Operating System support packages
++++++++++++++++++++++++++++++++++++++++++++++++

**Mandatory** on all nodes.

To be able to install instances you need to have an Operating System
installation script. An example OS that works under Debian and can
install Debian and Ubuntu instance OSes is provided on the project web
site.  Download it from the project page and follow the instructions in
the ``README`` file.  Here is the installation procedure (replace 0.9
with the latest version that is compatible with your ganeti version)::

  cd /usr/local/src/
  wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-0.9.tar.gz
  tar xzf ganeti-instance-debootstrap-0.9.tar.gz
  cd ganeti-instance-debootstrap-0.9
  ./configure
  make
  make install

In order to use this OS definition, you need to have internet access
from your nodes and have the *debootstrap*, *dump* and *restore*
commands installed on all nodes. Also, if the OS is configured to
partition the instance's disk in
``/etc/default/ganeti-instance-debootstrap``, you will need *kpartx*
installed.

.. admonition:: Debian

   Use this command on all nodes to install the required packages::

     apt-get install debootstrap dump kpartx

.. admonition:: KVM

   In order for debootstrap instances to be able to shut down cleanly,
   they must have basic ACPI support installed inside the instance.
   Which packages are needed depends on the exact flavour of Debian or
   Ubuntu which you're installing, but the example defaults file has a
   commented-out configuration line that works for Debian Lenny and
   Squeeze::

     EXTRA_PKGS="acpi-support-base,console-tools,udev"

   ``kbd`` can be used instead of ``console-tools``, and more packages
   can be added, of course, if needed.

Alternatively, you can create your own OS definitions. See the manpage
:manpage:`ganeti-os-interface`.

Initializing the cluster
++++++++++++++++++++++++

**Mandatory** once per cluster, on the first node.

The last step is to initialize the cluster. After you have repeated the
above process on all of your nodes, choose one as the master, and
execute::

  gnt-cluster init <CLUSTERNAME>

The *CLUSTERNAME* is a hostname, which must be resolvable (e.g. it must
exist in DNS or in ``/etc/hosts``) by all the nodes in the cluster. You
must choose a name different from any of the nodes' names for a
multi-node cluster. In general the best choice is to have a unique name
for a cluster, even if it consists of only one machine, as you will be
able to expand it later without any problems. Please note that the
hostname used for this must resolve to an IP address reserved
**exclusively** for this purpose, and cannot be the name of the first
(master) node.
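
Before running the init command, it can be worth checking that the name
you picked actually resolves and is distinct from the node's own name.
A sketch, where ``cluster1.example.com`` is a placeholder for your
chosen name::

  CLUSTERNAME=cluster1.example.com
  # the name must resolve on every node...
  getent hosts "$CLUSTERNAME" >/dev/null \
    || echo "error: $CLUSTERNAME does not resolve" >&2
  # ...and must not be the name of the node itself
  [ "$CLUSTERNAME" != "$(hostname)" ] \
    || echo "error: cluster name equals this node's name" >&2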

If you want to use a bridge which is not ``xen-br0``, or no bridge at
all, change it with the ``--nic-parameters`` option. For example to
bridge on br0 you can say::

  --nic-parameters link=br0

Or to not bridge at all, and use a separate routing table::

  --nic-parameters mode=routed,link=100

If you don't have a ``xen-br0`` interface you also have to specify a
different network interface which will get the cluster IP, on the
master node, by using the ``--master-netdev <device>`` option.

You can use a different name than ``xenvg`` for the volume group (but
note that the name must be identical on all nodes). In this case you
need to specify it by passing the *--vg-name <VGNAME>* option to
``gnt-cluster init``.

To set up the cluster as a Xen HVM cluster, use the
``--enabled-hypervisors=xen-hvm`` option to enable the HVM hypervisor
(you can also add ``,xen-pvm`` to enable the PVM one too). You will also
need to create the VNC cluster password file
``/etc/ganeti/vnc-cluster-password`` which contains one line with the
default VNC password for the cluster.

To set up the cluster for KVM-only usage (KVM and Xen cannot be mixed),
pass ``--enabled-hypervisors=kvm`` to the init command.

You can also invoke the command with the ``--help`` option in order to
see all the possibilities.

Hypervisor/Network/Cluster parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Please note that the default hypervisor/network/cluster parameters may
not be the correct ones for your environment. Carefully check them, and
change them at cluster init time, or later with ``gnt-cluster modify``.

Your instance types, networking environment, hypervisor type and version
may all affect what kind of parameters should be used on your cluster.

For example, KVM instances are by default configured to use a host
kernel, and to be reached via serial console, which works nicely for
Linux paravirtualized instances. If you want fully virtualized instances
you may want to handle their kernel inside the instance, and to use VNC.

Joining the nodes to the cluster
++++++++++++++++++++++++++++++++

**Mandatory** for all the other nodes.

After you have initialized your cluster you need to join the other nodes
to it. You can do so by executing the following command on the master
node::

  gnt-node add <NODENAME>

Separate replication network
++++++++++++++++++++++++++++

**Optional**

Ganeti uses DRBD to mirror the disk of the virtual instances between
nodes. To use a dedicated network interface for this (in order to
improve performance or to enhance security) you need to configure an
additional interface for each node.  Use the *-s* option with
``gnt-cluster init`` and ``gnt-node add`` to specify the IP address of
this secondary interface to use for each node. Note that if you
specified this option at cluster setup time, you must afterwards use it
for every node add operation.

Testing the setup
+++++++++++++++++

Execute the ``gnt-node list`` command to see all nodes in the cluster::

  # gnt-node list
  Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
  node1.example.com 197404 197404   2047  1896   125     0     0

The above shows a couple of things:

- The various Ganeti daemons can talk to each other
- Ganeti can examine the storage of the node (DTotal/DFree)
- Ganeti can talk to the selected hypervisor (MTotal/MNode/MFree)

Cluster burnin
~~~~~~~~~~~~~~

With Ganeti a tool called :command:`burnin` is provided that can test
most of the Ganeti functionality. The tool is installed under the
``lib/ganeti/tools`` directory (either under ``/usr`` or ``/usr/local``
based on the installation method). See more details under
:ref:`burnin-label`.

Further steps
-------------

You can now proceed either to the :doc:`admin`, or read the manpages of
the various commands (:manpage:`ganeti(7)`, :manpage:`gnt-cluster(8)`,
:manpage:`gnt-node(8)`, :manpage:`gnt-instance(8)`,
:manpage:`gnt-job(8)`).

.. rubric:: Footnotes

.. [#defkernel] The kernel and initrd paths can be changed at either
   cluster level (which changes the default for all instances) or at
   instance level.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: