<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
]>
  <article class="specification">
  <articleinfo>
    <title>Ganeti installation tutorial</title>
  </articleinfo>
  <para>Documents Ganeti version 1.2</para>

  <sect1>
    <title>Introduction</title>

    <para>
      Ganeti is a cluster virtualization management system based on
      Xen. This document explains how to bootstrap a Ganeti node (Xen
      <literal>dom0</literal>), create a running cluster and install a
      virtual instance (Xen <literal>domU</literal>). You need to
      repeat most of the steps in this document for every node you
      want to install, so we recommend creating a semi-automatic
      procedure if you plan to deploy Ganeti on a medium or large
      scale.
    </para>

    <para>
      A basic Ganeti terminology glossary is provided in the
      introductory section of the <emphasis>Ganeti administrator's
      guide</emphasis>. Please refer to that document if you are
      uncertain about the terms we are using.
    </para>

    <para>
      Ganeti has been developed for Linux and is
      distribution-agnostic.  This documentation will use Debian Etch
      as an example system but the examples can easily be translated
      to any other distribution.  You are expected to be familiar with
      your distribution, its package management system, and Xen before
      trying to use Ganeti.
    </para>

    <para>This document is divided into two main sections:

      <itemizedlist>
        <listitem>
          <simpara>Installation of the base system and base
            components</simpara>
        </listitem>
        <listitem>
          <simpara>Configuration of the environment for
            Ganeti</simpara>
        </listitem>
      </itemizedlist>

      Each of these is divided into sub-sections. While a full Ganeti
      system will need all of the steps specified, some are not
      strictly required for every environment. The corresponding
      sections explain which steps are optional, and why.
    </para>

  </sect1>

  <sect1>
    <title>Installing the base system and base components</title>

    <sect2>
      <title>Hardware requirements</title>

      <para>
        Any system supported by your Linux distribution is fine.  64-bit
        systems are better as they can support more memory.
      </para>

      <para>
        Any disk drive recognized by Linux
        (<literal>IDE</literal>/<literal>SCSI</literal>/<literal>SATA</literal>/etc.)
        is supported in Ganeti. Note that no shared storage (e.g. a
        <literal>SAN</literal>) is needed to get high-availability features. It
        is highly recommended to use more than one disk drive to improve speed,
        but Ganeti also works with one disk per machine.
      </para>
    </sect2>

    <sect2>
      <title>Installing the base system</title>

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

      <para>
        It is advised to start with a clean, minimal install of the
        operating system. The only requirement you need to be aware of
        at this stage is to partition leaving enough space for a big
        (<emphasis role="strong">minimum
        <constant>20GiB</constant></emphasis>) LVM volume group which
        will then host your instance filesystems. The volume group
        name Ganeti 1.2 uses (by default) is
        <emphasis>xenvg</emphasis>.
      </para>

      <para>
        While you can use an existing system, please note that the
        Ganeti installation is intrusive in terms of changes to the
        system configuration, and it's best to use a newly-installed
        system without important data on it.
      </para>

      <para>
        Also, for best results, it's advised that the nodes have as
        similar a hardware and software configuration as possible.
        This will make administration much easier.
      </para>

      <sect3>
        <title>Hostname issues</title>
        <para>
          Note that Ganeti requires the hostnames of the systems
          (i.e. what the <computeroutput>hostname</computeroutput>
          command outputs) to be fully-qualified names, not short
          names. In other words, you should use
          <literal>node1.example.com</literal> as a hostname and not
          just <literal>node1</literal>.
        </para>

        <formalpara>
          <title>Debian</title>
          <para>
            Note that Debian Etch configures the hostname differently
            than you need it for Ganeti. For example, this is what
            Etch puts in <filename>/etc/hosts</filename> in certain
            situations:
<screen>
127.0.0.1       localhost
127.0.1.1       node1.example.com node1
</screen>

            but for Ganeti you need to have:
<screen>
127.0.0.1       localhost
192.168.1.1     node1.example.com node1
</screen>
            replacing <literal>192.168.1.1</literal> with your node's
            address. Also, the file <filename>/etc/hostname</filename>
            which configures the hostname of the system should contain
            <literal>node1.example.com</literal> and not just
            <literal>node1</literal> (you need to run the command
            <computeroutput>/etc/init.d/hostname.sh
            start</computeroutput> after changing the file).
          </para>
        </formalpara>
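
        <para>
          As a quick sanity check (this snippet is our suggestion, not
          part of the original procedure), you can compare the
          system's short and fully-qualified names; with the
          configuration described above, both commands should print
          the fully-qualified name:
        </para>
<screen>
hostname
hostname --fqdn
</screen>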
      </sect3>

    </sect2>

    <sect2>
      <title>Installing Xen</title>

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

      <para>
        While Ganeti is developed with the ability to modularly run on
        different virtualization environments in mind, the only one
        currently usable on a live system is <ulink
        url="http://xen.xensource.com/">Xen</ulink>. Supported
        versions are: <simplelist type="inline">
        <member><literal>3.0.3</literal></member>
        <member><literal>3.0.4</literal></member>
        <member><literal>3.1</literal></member> </simplelist>.
      </para>

      <para>
        Please follow your distribution's recommended way to install
        and set up Xen, or install Xen from the upstream source, if
        you wish, following their manual.
      </para>

      <para>
        After installing Xen you need to reboot into your Xen-ified
        dom0 system. On some distributions this might involve
        configuring GRUB appropriately, whereas others will configure
        it automatically when you install Xen from a package.
      </para>

      <formalpara><title>Debian</title>
      <para>
        Under Debian Etch or Sarge+backports you can install the
        relevant <literal>xen-linux-system</literal> package, which
        will pull in both the hypervisor and the relevant
        kernel. Also, if you are installing a 32-bit Etch, you should
        install the <computeroutput>libc6-xen</computeroutput> package
        (run <computeroutput>apt-get install
        libc6-xen</computeroutput>).
      </para>
      </formalpara>

      <sect3>
        <title>Xen settings</title>

        <para>
          It's recommended that dom0 is restricted to a low amount of
          memory (<constant>512MiB</constant> is reasonable) and that
          memory ballooning is disabled in the file
          <filename>/etc/xen/xend-config.sxp</filename> by setting the
          value <literal>dom0-min-mem</literal> to
          <constant>0</constant>, like this:
          <computeroutput>(dom0-min-mem 0)</computeroutput>
        </para>

        <para>
          For optimum performance when running both CPU and I/O
          intensive instances, it's also recommended that the dom0 is
          restricted to one CPU only, for example by booting with the
          kernel parameter <literal>nosmp</literal>.
        </para>

        <para>
          It is recommended that you disable Xen's automatic save of
          virtual machines at system shutdown and their subsequent
          restore at reboot. To do this, make sure the variable
          <literal>XENDOMAINS_SAVE</literal> in the file
          <filename>/etc/default/xendomains</filename> is set to an
          empty value.
        </para>
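
        <para>
          For example (our illustration; the surrounding contents of
          the file are distribution-specific), the relevant line in
          <filename>/etc/default/xendomains</filename> would then look
          like this:
        </para>
<screen>
XENDOMAINS_SAVE=""
</screen>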

        <formalpara>
          <title>Debian</title>
          <para>
            Besides the ballooning change which you need to set in
            <filename>/etc/xen/xend-config.sxp</filename>, you need to
            set the memory and nosmp parameters in the file
            <filename>/boot/grub/menu.lst</filename>. You need to
            modify the variable <literal>xenhopt</literal> to add
            <userinput>dom0_mem=512M</userinput> like this:
<screen>
## Xen hypervisor options to use with the default Xen boot option
# xenhopt=dom0_mem=512M
</screen>
            and the <literal>xenkopt</literal> needs to include the
            <userinput>nosmp</userinput> option like this:
<screen>
## Xen Linux kernel options to use with the default Xen boot option
# xenkopt=nosmp
</screen>

          Any existing parameters can be left in place: it's ok to
          have <computeroutput>xenkopt=console=tty0
          nosmp</computeroutput>, for example. After modifying the
          files, you need to run:
<screen>
/sbin/update-grub
</screen>
          </para>
        </formalpara>

      </sect3>

      <sect3>
        <title>Selecting the instance kernel</title>

        <para>
          After you have installed Xen, you need to tell Ganeti
          exactly what kernel to use for the instances it will
          create. This is done by creating a
          <emphasis>symlink</emphasis> from your actual kernel to
          <filename>/boot/vmlinuz-2.6-xenU</filename>, and one from
          your initrd to
          <filename>/boot/initrd-2.6-xenU</filename>. Note that if you
          don't use an initrd for the <literal>domU</literal> kernel,
          you don't need to create the initrd symlink.
        </para>

        <formalpara>
          <title>Debian</title>
          <para>
            After installation of the
            <literal>xen-linux-system</literal> package, you need to
            run (replace the exact version number with the one you
            have):
            <screen>
cd /boot
ln -s vmlinuz-2.6.18-5-xen-686 vmlinuz-2.6-xenU
ln -s initrd.img-2.6.18-5-xen-686 initrd-2.6-xenU
            </screen>
          </para>
        </formalpara>
      </sect3>

    </sect2>

    <sect2>
      <title>Installing DRBD</title>

      <para>
        Recommended on all nodes: <ulink
        url="http://www.drbd.org/">DRBD</ulink> is required if you
        want to use the high availability (HA) features of Ganeti, but
        optional if you don't require HA or only run Ganeti on
        single-node clusters. You can upgrade a non-HA cluster to an
        HA one later, but you might need to export and re-import all
        your instances to take advantage of the new features.
      </para>

      <para>
        Supported DRBD versions: the <literal>0.7</literal> series
        <emphasis role="strong">or</emphasis>
        <literal>8.0.x</literal>. It's recommended to have at least
        version <literal>0.7.24</literal> if you use
        <command>udev</command> since older versions have a bug
        related to device discovery which can be triggered in cases of
        hard drive failure.
      </para>

      <para>
        Now the bad news: unless your distribution already provides
        it, installing DRBD might involve recompiling your kernel, or
        at least fiddling with it. Hopefully at least the Xen-ified
        kernel source to start from will be provided.
      </para>

      <para>
        The good news is that you don't need to configure DRBD at all.
        Ganeti will do it for you for every instance you set up.  If
        you have the DRBD utils installed and the module in your
        kernel you're fine. Please check that your system is
        configured to load the module at every boot, and that it
        passes the following option to the module: for
        <literal>0.7.x</literal>,
        <computeroutput>minor_count=64</computeroutput> (this will
        allow you to use up to 32 instances per node); for
        <literal>8.0.x</literal> you can use up to
        <constant>255</constant>
        (i.e. <computeroutput>minor_count=255</computeroutput>), but
        for most clusters <constant>128</constant> should be enough.
      </para>
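
      <para>
        For example (an illustration of ours; the exact file name and
        layout are distribution-specific), with
        <command>modprobe</command>-based module loading you could set
        the option in a file such as
        <filename>/etc/modprobe.d/drbd</filename>:
      </para>
<screen>
options drbd minor_count=128
</screen>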

      <formalpara><title>Debian</title>
        <para>
         You can just install (build) the DRBD 0.7 module with the
         following commands (make sure you are running the Xen
         kernel):
        </para>
      </formalpara>

      <screen>
apt-get install drbd0.7-module-source drbd0.7-utils
m-a update
m-a a-i drbd0.7
echo drbd minor_count=64 >> /etc/modules
modprobe drbd minor_count=64
      </screen>
      <para>or for using DRBD <literal>8.x</literal> from the etch
      backports:</para>
      <screen>
apt-get install -t etch-backports drbd8-module-source drbd8-utils
m-a update
m-a a-i drbd8
echo drbd minor_count=128 >> /etc/modules
modprobe drbd minor_count=128
      </screen>

      <para>
        It is also recommended that you comment out the default
        resources in the <filename>/etc/drbd.conf</filename> file, so
        that the init script doesn't try to configure any drbd
        devices. You can do this by prefixing all
        <literal>resource</literal> lines in the file with the keyword
        <literal>skip</literal>, like this:
      </para>

      <screen>
skip resource r0 {
...
}

skip resource "r1" {
...
}
      </screen>

    </sect2>

    <sect2>
      <title>Other required software</title>

      <para>Besides Xen and DRBD, you will need to install the
      following (on all nodes):</para>

      <itemizedlist>
        <listitem>
          <simpara><ulink url="http://sourceware.org/lvm2/">LVM
          version 2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssl.org/">OpenSSL</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssh.com/portable.html">OpenSSH</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://bridge.sourceforge.net/">Bridge
          utilities</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://developer.osdl.org/dev/iproute2">iproute2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz">arping</ulink>
          (part of the iputils package)</simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.kernel.org/pub/linux/utils/raid/mdadm/">mdadm</ulink>
          (Linux software RAID tools)</simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://www.python.org">Python 2.4</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://twistedmatrix.com/">Python
          Twisted library</ulink> - the core library is
          enough</simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://pyopenssl.sourceforge.net/">Python OpenSSL
          bindings</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.undefined.org/python/#simplejson">simplejson Python
          module</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://pyparsing.wikispaces.com/">pyparsing Python
          module</ulink></simpara>
        </listitem>
      </itemizedlist>

      <para>
        These programs are supplied as part of most Linux
        distributions, so usually they can be installed via apt or
        similar methods. Also many of them will already be installed
        on a standard machine.
      </para>

      <formalpara><title>Debian</title>

      <para>You can use this command line to install all of them:</para>

      </formalpara>
      <screen>
# apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
  python2.4 python-twisted-core python-pyopenssl openssl \
  mdadm python-pyparsing python-simplejson
      </screen>

    </sect2>

  </sect1>

  <sect1>
    <title>Setting up the environment for Ganeti</title>

    <sect2>
      <title>Configuring the network</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        Ganeti relies on Xen running in "bridge mode", which means the
        instances' network interfaces will be attached to a software bridge
        running in dom0. Xen by default creates such a bridge at startup, but
        your distribution might have a different way to do things.
      </para>

      <para>
        Beware that the default name Ganeti uses is
        <hardware>xen-br0</hardware> (which was used in Xen 2.0)
        while Xen 3.0 uses <hardware>xenbr0</hardware> by
        default. The default bridge your Ganeti cluster will use for new
        instances can be specified at cluster initialization time.
      </para>

      <formalpara><title>Debian</title>
        <para>
          The recommended Debian way to configure the Xen bridge is to
          edit your <filename>/etc/network/interfaces</filename> file
          and substitute your normal Ethernet stanza with the
          following snippet:

        <screen>
auto xen-br0
iface xen-br0 inet static
        address <replaceable>YOUR_IP_ADDRESS</replaceable>
        netmask <replaceable>YOUR_NETMASK</replaceable>
        network <replaceable>YOUR_NETWORK</replaceable>
        broadcast <replaceable>YOUR_BROADCAST_ADDRESS</replaceable>
        gateway <replaceable>YOUR_GATEWAY</replaceable>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        </screen>
        </para>
      </formalpara>

      <para>
        The following commands need to be executed on the local
        console, since bringing down <literal>eth0</literal> will
        interrupt your network connection:
      </para>
      <screen>
ifdown eth0
ifup xen-br0
      </screen>

      <para>
        To check if the bridge is set up, use <command>ip</command>
        and <command>brctl show</command>:
      </para>

      <screen>
# ip a show xen-br0
9: xen-br0: &lt;BROADCAST,MULTICAST,UP,10000&gt; mtu 1500 qdisc noqueue
    link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
    inet6 fe80::220:fcff:fe1e:d55d/64 scope link
       valid_lft forever preferred_lft forever

# brctl show xen-br0
bridge name     bridge id               STP enabled     interfaces
xen-br0         8000.0020fc1ed55d       no              eth0
      </screen>

    </sect2>

    <sect2>
      <title>Configuring LVM</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <note>
        <simpara>The volume group is required to be at least
        <constant>20GiB</constant>.</simpara>
      </note>
      <para>
        If you haven't configured your LVM volume group at install
        time you need to do it before trying to initialize the Ganeti
        cluster. This is done by initializing the devices/partitions
        you want to use for it and then adding them to the relevant
        volume group:

       <screen>
pvcreate /dev/sda3
vgcreate xenvg /dev/sda3
       </screen>
or
       <screen>
pvcreate /dev/sdb1
pvcreate /dev/sdc1
vgcreate xenvg /dev/sdb1 /dev/sdc1
       </screen>
      </para>

      <para>
        If you want to add a device later you can do so with the
        <citerefentry><refentrytitle>vgextend</refentrytitle>
        <manvolnum>8</manvolnum></citerefentry> command:
      </para>

      <screen>
pvcreate /dev/sdd1
vgextend xenvg /dev/sdd1
      </screen>
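
      <para>
        As a quick check (our suggestion, not part of the original
        procedure), you can verify that the volume group exists and
        meets the <constant>20GiB</constant> size requirement with:
      </para>
      <screen>
vgs xenvg
pvs
      </screen>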

      <formalpara>
        <title>Optional</title>
        <para>
          It is recommended to configure LVM not to scan the DRBD
          devices for physical volumes. This can be accomplished by
          editing <filename>/etc/lvm/lvm.conf</filename> and adding
          the <literal>/dev/drbd[0-9]+</literal> regular expression to
          the <literal>filter</literal> variable, like this:
<screen>
    filter = [ "r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]
</screen>
        </para>
      </formalpara>

    </sect2>

    <sect2>
      <title>Installing Ganeti</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        It's now time to install the Ganeti software itself.  Download
        the source from <ulink
        url="http://code.google.com/p/ganeti/"></ulink>.
      </para>

        <screen>
tar xvzf ganeti-1.2b2.tar.gz
cd ganeti-1.2b2
./configure --localstatedir=/var --sysconfdir=/etc
make
make install
mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export
        </screen>

      <para>
        You also need to copy the file
        <filename>doc/examples/ganeti.initd</filename>
        from the source archive to
        <filename>/etc/init.d/ganeti</filename> and register it with
        your distribution's startup scripts, for example in Debian:
      </para>
      <screen>update-rc.d ganeti defaults 20 80</screen>

      <para>
        In order to automatically restart failed instances, you need
        to set up a cron job that runs the
        <computeroutput>ganeti-watcher</computeroutput> program. A
        sample cron file is provided in the source at
        <filename>doc/examples/ganeti.cron</filename>; you can copy
        that (adjusting the path if necessary) to
        <filename>/etc/cron.d/ganeti</filename>.
      </para>

    </sect2>

    <sect2>
      <title>Installing the Operating System support packages</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        To be able to install instances you need to have an Operating
        System installation script. An example for Debian Etch is
        provided on the project web site.  Download it from <ulink
        url="http://code.google.com/p/ganeti/"></ulink> and follow the
        instructions in the <filename>README</filename> file.  Here is
        the installation procedure:
      </para>

      <screen>
cd /srv/ganeti/os
tar xvf instance-debian-etch-0.2.tar
mv instance-debian-etch-0.2 debian-etch
      </screen>

      <para>
        In order to use this OS definition, you need to have internet
        access from your nodes and have the <citerefentry>
        <refentrytitle>debootstrap</refentrytitle>
        <manvolnum>8</manvolnum></citerefentry>, <citerefentry>
        <refentrytitle>dump</refentrytitle><manvolnum>8</manvolnum>
        </citerefentry> and <citerefentry>
        <refentrytitle>restore</refentrytitle>
        <manvolnum>8</manvolnum> </citerefentry> commands installed on
        all nodes.
      </para>
      <formalpara>
        <title>Debian</title>
        <para>
          Use this command on all nodes to install the required
          packages:

          <screen>apt-get install debootstrap dump</screen>
        </para>
      </formalpara>

      <para>
        Alternatively, you can create your own OS definitions. See the
        manpage
        <citerefentry>
        <refentrytitle>ganeti-os-interface</refentrytitle>
        <manvolnum>8</manvolnum>
        </citerefentry>.
      </para>

    </sect2>

    <sect2>
      <title>Initializing the cluster</title>

      <para><emphasis role="strong">Mandatory:</emphasis> only on one
      node per cluster.</para>

      <para>The last step is to initialize the cluster. After you've repeated
        the above process on all of your nodes, choose one as the master, and execute:
      </para>

      <screen>
gnt-cluster init <replaceable>CLUSTERNAME</replaceable>
      </screen>

      <para>
        The <replaceable>CLUSTERNAME</replaceable> is a hostname,
        which must be resolvable (e.g. it must exist in DNS or in
        <filename>/etc/hosts</filename>) by all the nodes in the
        cluster. You must choose a name different from any of the
        nodes' names for a multi-node cluster. In general the best
        choice is to have a unique name for a cluster, even if it
        consists of only one machine, as you will be able to expand it
        later without any problems.
      </para>

      <para>
        If the bridge name you are using is not
        <literal>xen-br0</literal>, use the <option>-b
        <replaceable>BRIDGENAME</replaceable></option> option to
        specify the bridge name. In this case, you should also use the
        <option>--master-netdev
        <replaceable>BRIDGENAME</replaceable></option> option with the
        same <replaceable>BRIDGENAME</replaceable> argument.
      </para>
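
      <para>
        For example (an illustration of ours; adapt the names to your
        setup), to initialize a cluster that uses an existing bridge
        named <literal>br0</literal> you could run:
      </para>
      <screen>
gnt-cluster init -b br0 --master-netdev br0 cluster1.example.com
      </screen>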

      <para>
        You can use a different name than <literal>xenvg</literal> for
        the volume group (but note that the name must be identical on
        all nodes). In this case you need to specify it by passing the
        <option>-g <replaceable>VGNAME</replaceable></option> option
        to <computeroutput>gnt-cluster init</computeroutput>.
      </para>

      <para>
        You can also invoke the command with the
        <option>--help</option> option in order to see all the
        possibilities.
      </para>

    </sect2>

    <sect2>
      <title>Joining the nodes to the cluster</title>

      <para>
        <emphasis role="strong">Mandatory:</emphasis> for all the
        other nodes.
      </para>

      <para>
        After you have initialized your cluster you need to join the
        other nodes to it. You can do so by executing the following
        command on the master node:
      </para>
        <screen>
gnt-node add <replaceable>NODENAME</replaceable>
        </screen>
    </sect2>

    <sect2>
      <title>Separate replication network</title>

      <para><emphasis role="strong">Optional</emphasis></para>
      <para>
        Ganeti uses DRBD to mirror the disks of the virtual instances
        between nodes. To use a dedicated network interface for this
        (in order to improve performance or to enhance security) you
        need to configure an additional interface for each node.  Use
        the <option>-s</option> option with
        <computeroutput>gnt-cluster init</computeroutput> and
        <computeroutput>gnt-node add</computeroutput> to specify the
        IP address of this secondary interface to use for each
        node. Note that if you specified this option at cluster setup
        time, you must afterwards use it for every node add operation.
      </para>
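
      <para>
        For example (our illustration; substitute addresses from your
        own replication network), with a secondary network on
        <literal>192.168.2.0/24</literal> you would initialize the
        cluster and add a node like this:
      </para>
      <screen>
gnt-cluster init -s 192.168.2.1 cluster1.example.com
gnt-node add -s 192.168.2.2 node2.example.com
      </screen>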
    </sect2>

    <sect2>
      <title>Testing the setup</title>

      <para>
        Execute the <computeroutput>gnt-node list</computeroutput>
        command to see all nodes in the cluster:
      <screen>
# gnt-node list
Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
node1.example.com 197404 197404   2047  1896   125     0     0
      </screen>
    </para>
  </sect2>

  <sect1>
    <title>Setting up and managing virtual instances</title>
    <sect2>
      <title>Setting up virtual instances</title>
      <para>
        This step shows how to set up a virtual instance with either
        non-mirrored disks (<computeroutput>plain</computeroutput>) or
        with network mirrored disks
        (<computeroutput>remote_raid1</computeroutput> for drbd 0.7
        and <computeroutput>drbd</computeroutput> for drbd 8.x).  All
        commands need to be executed on the Ganeti master node (the
        one on which <computeroutput>gnt-cluster init</computeroutput>
        was run).  Verify that the OS scripts are present on all
        cluster nodes with <computeroutput>gnt-os
        list</computeroutput>.
      </para>
      <para>
        To create a virtual instance, you need a hostname which is
        resolvable (DNS or <filename>/etc/hosts</filename> on all
        nodes). The following command will create a non-mirrored
        instance for you:
      </para>
      <screen>
gnt-instance add --node=node1 -o debian-etch -t plain inst1.example.com
* creating instance disks...
adding instance inst1.example.com to cluster config
Waiting for instance inst1.example.com to sync disks.
Instance inst1.example.com's disks are in sync.
creating os for instance inst1.example.com on node node1.example.com
* running the instance OS create scripts...
      </screen>

      <para>
        The above instance will have no network interface enabled.
        You can access it over the virtual console with
        <computeroutput>gnt-instance console
        <literal>inst1</literal></computeroutput>. There is no
        password for root.  As this is a Debian instance, you can
        modify the <filename>/etc/network/interfaces</filename> file
        to set up the network interface (<literal>eth0</literal> is
        the name of the interface provided to the instance).
      </para>
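
      <para>
        For example (our illustration; use addresses appropriate to
        your network), a minimal static configuration inside the
        instance's <filename>/etc/network/interfaces</filename> could
        look like this:
      </para>
      <screen>
auto eth0
iface eth0 inet static
        address <replaceable>INSTANCE_IP_ADDRESS</replaceable>
        netmask <replaceable>YOUR_NETMASK</replaceable>
        gateway <replaceable>YOUR_GATEWAY</replaceable>
      </screen>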

      <para>
        To create a network mirrored instance, change the argument to
        the <option>-t</option> option from <literal>plain</literal>
        to <literal>remote_raid1</literal> (drbd 0.7) or
        <literal>drbd</literal> (drbd 8.0) and specify the node on
        which the mirror should reside with the second value of the
        <option>--node</option> option, like this:
      </para>

      <screen>
# gnt-instance add -t remote_raid1 -n node1:node2 -o debian-etch instance2
* creating instance disks...
adding instance instance2 to cluster config
Waiting for instance instance2 to sync disks.
- device sdb:  3.50% done, 304 estimated seconds remaining
- device sdb: 21.70% done, 270 estimated seconds remaining
- device sdb: 39.80% done, 247 estimated seconds remaining
- device sdb: 58.10% done, 121 estimated seconds remaining
- device sdb: 76.30% done, 72 estimated seconds remaining
- device sdb: 94.80% done, 18 estimated seconds remaining
Instance instance2's disks are in sync.
creating os for instance instance2 on node node1.example.com
* running the instance OS create scripts...
* starting instance...
      </screen>

    </sect2>

    <sect2>
      <title>Managing virtual instances</title>
      <para>
        All commands need to be executed on the Ganeti master node.
      </para>

      <para>
        To access the console of an instance, use
        <computeroutput>gnt-instance console
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        To shut down an instance, use <computeroutput>gnt-instance
        shutdown
        <replaceable>INSTANCENAME</replaceable></computeroutput>. To
        start up an instance, use <computeroutput>gnt-instance startup
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        To failover an instance to its secondary node (only possible
        with the <literal>remote_raid1</literal> or
        <literal>drbd</literal> disk templates), use
        <computeroutput>gnt-instance failover
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        For more instance and cluster administration details, see the
        <emphasis>Ganeti administrator's guide</emphasis>.
      </para>

    </sect2>

  </sect1>

  </article>