<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
]>
  <article class="specification">
  <articleinfo>
    <title>Ganeti installation tutorial</title>
  </articleinfo>
  <para>Documents Ganeti version 1.2</para>

  <sect1>
    <title>Introduction</title>

    <para>
      Ganeti is a cluster virtualization management system based on
      Xen. This document explains how to bootstrap a Ganeti node (Xen
      <literal>dom0</literal>), create a running cluster and install
      a virtual instance (Xen <literal>domU</literal>). You need to
      repeat most of the steps in this document for every node you
      want to install, but we recommend creating a semi-automatic
      procedure if you plan to deploy Ganeti on a medium or large
      scale.
    </para>

    <para>
      A basic Ganeti terminology glossary is provided in the
      introductory section of the <emphasis>Ganeti administrator's
      guide</emphasis>. Please refer to that document if you are
      uncertain about the terms we are using.
    </para>

    <para>
      Ganeti has been developed for Linux and is
      distribution-agnostic.  This documentation will use Debian Etch
      as an example system but the examples can easily be translated
      to any other distribution.  You are expected to be familiar with
      your distribution, its package management system, and Xen before
      trying to use Ganeti.
    </para>

    <para>This document is divided into two main sections:

      <itemizedlist>
        <listitem>
          <simpara>Installation of the base system and base
            components</simpara>
        </listitem>
        <listitem>
          <simpara>Configuration of the environment for
            Ganeti</simpara>
        </listitem>
      </itemizedlist>

      Each of these is divided into sub-sections. While a full Ganeti system
      will need all of the steps specified, some are not strictly required for
      every environment. Which ones they are, and why, is specified in the
      corresponding sections.
    </para>

  </sect1>

  <sect1>
    <title>Installing the base system and base components</title>

    <sect2>
      <title>Hardware requirements</title>

      <para>
        Any system supported by your Linux distribution is fine. 64-bit
        systems are better as they can support more memory.
      </para>

      <para>
        Any disk drive recognized by Linux
        (<literal>IDE</literal>/<literal>SCSI</literal>/<literal>SATA</literal>/etc.)
        is supported in Ganeti. Note that no shared storage (e.g. a
        <literal>SAN</literal>) is needed to get high-availability features.
        Using more than one disk drive is highly recommended to improve speed,
        but Ganeti also works with one disk per machine.
      </para>

    </sect2>

    <sect2>
      <title>Installing the base system</title>

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

      <para>
        It is advised to start with a clean, minimal install of the
        operating system. The only requirement you need to be aware of
        at this stage is to partition leaving enough space for a big
        (<emphasis role="strong">minimum
        <constant>20GiB</constant></emphasis>) LVM volume group which
        will then host your instance filesystems. The volume group
        name Ganeti 1.2 uses (by default) is
        <emphasis>xenvg</emphasis>.
      </para>

      <para>
        While you can use an existing system, please note that the
        Ganeti installation is intrusive in terms of changes to the
        system configuration, and it's best to use a newly-installed
        system without important data on it.
      </para>

      <para>
        Also, for best results, it's advised that the nodes have
        hardware and software configurations that are as similar as
        possible. This will make administration much easier.
      </para>

      <sect3>
        <title>Hostname issues</title>
        <para>
          Note that Ganeti requires the hostnames of the systems
          (i.e. what the <computeroutput>hostname</computeroutput>
          command outputs) to be fully-qualified names, not short
          names. In other words, you should use
          <literal>node1.example.com</literal> as a hostname and not
          just <literal>node1</literal>.
        </para>
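If you want to sanity-check this before proceeding, a rough heuristic is whether the name contains a dot. The helper below is only an illustrative sketch (the node names are placeholders), not Ganeti's own validation:

```shell
# Print whether a hostname looks fully qualified (contains a dot).
# Rough heuristic sketch only; Ganeti performs its own checks.
fqdn_check() {
  case "$1" in
    *.*) echo "$1: looks fully qualified" ;;
    *)   echo "$1: short name, Ganeti needs an FQDN" ;;
  esac
}
fqdn_check node1.example.com
fqdn_check node1
```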

    
        <formalpara>
          <title>Debian</title>
          <para>
            Note that Debian Etch configures the hostname differently
            than you need it for Ganeti. For example, this is what
            Etch puts in <filename>/etc/hosts</filename> in certain
            situations:
<screen>
127.0.0.1       localhost
127.0.1.1       node1.example.com node1
</screen>
            but for Ganeti you need to have:
<screen>
127.0.0.1       localhost
192.168.1.1     node1.example.com node1
</screen>
            replacing <literal>192.168.1.1</literal> with your node's
            address. Also, the file <filename>/etc/hostname</filename>
            which configures the hostname of the system should contain
            <literal>node1.example.com</literal> and not just
            <literal>node1</literal> (you need to run the command
            <computeroutput>/etc/init.d/hostname.sh
            start</computeroutput> after changing the file).
          </para>
        </formalpara>
      </sect3>

    
    </sect2>

    <sect2>
      <title>Installing Xen</title>

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

    
      <para>
        While Ganeti is developed with the ability to modularly run on
        different virtualization environments in mind, the only one
        currently usable on a live system is <ulink
        url="http://xen.xensource.com/">Xen</ulink>. Supported
        versions are: <simplelist type="inline">
        <member><literal>3.0.3</literal></member>
        <member><literal>3.0.4</literal></member>
        <member><literal>3.1</literal></member> </simplelist>.
      </para>

      <para>
        Please follow your distribution's recommended way to install
        and set up Xen, or install Xen from the upstream source, if
        you wish, following their manual.
      </para>

      <para>
        After installing Xen you need to reboot into your Xen-ified
        dom0 system. On some distributions this might involve
        configuring GRUB appropriately, whereas others will configure
        it automatically when you install Xen from a package.
      </para>

      <formalpara><title>Debian</title>
      <para>
        Under Debian Etch or Sarge+backports you can install the
        relevant <literal>xen-linux-system</literal> package, which
        will pull in both the hypervisor and the relevant
        kernel. Also, if you are installing a 32-bit Etch, you should
        install the <computeroutput>libc6-xen</computeroutput> package
        (run <computeroutput>apt-get install
        libc6-xen</computeroutput>).
      </para>
      </formalpara>

      <sect3>
        <title>Xen settings</title>

        <para>
          It's recommended that dom0 is restricted to a low amount of
          memory (<constant>512MiB</constant> is reasonable) and that
          memory ballooning is disabled in the file
          <filename>/etc/xen/xend-config.sxp</filename> by setting the
          value <literal>dom0-min-mem</literal> to
          <constant>0</constant>, like this:
          <computeroutput>(dom0-min-mem 0)</computeroutput>
        </para>

        <para>
          For optimum performance when running both CPU and I/O
          intensive instances, it's also recommended that the dom0 is
          restricted to one CPU only, for example by booting with the
          kernel parameter <literal>nosmp</literal>.
        </para>

        <formalpara>
          <title>Debian</title>
          <para>
            Besides the ballooning change which you need to set in
            <filename>/etc/xen/xend-config.sxp</filename>, you need to
            set the memory and nosmp parameters in the file
            <filename>/boot/grub/menu.lst</filename>. You need to
            modify the variable <literal>xenhopt</literal> to add
            <userinput>dom0_mem=512M</userinput> like this:
<screen>
## Xen hypervisor options to use with the default Xen boot option
# xenhopt=dom0_mem=512M
</screen>
            and the <literal>xenkopt</literal> needs to include the
            <userinput>nosmp</userinput> option like this:
<screen>
## Xen Linux kernel options to use with the default Xen boot option
# xenkopt=nosmp
</screen>

          Any existing parameters can be left in place: it's ok to
          have <computeroutput>xenkopt=console=tty0
          nosmp</computeroutput>, for example. After modifying the
          files, you need to run:
<screen>
/sbin/update-grub
</screen>
          </para>
        </formalpara>

      </sect3>

      <sect3>
        <title>Selecting the instance kernel</title>

        <para>
          After you have installed Xen, you need to tell Ganeti
          exactly what kernel to use for the instances it will
          create. This is done by creating a
          <emphasis>symlink</emphasis> from your actual kernel to
          <filename>/boot/vmlinuz-2.6-xenU</filename>, and one from
          your initrd to
          <filename>/boot/initrd-2.6-xenU</filename>. Note that if you
          don't use an initrd for the <literal>domU</literal> kernel,
          you don't need to create the initrd symlink.
        </para>

        <formalpara>
          <title>Debian</title>
          <para>
            After installation of the
            <literal>xen-linux-system</literal> package, you need to
            run (replace the exact version number with the one you
            have):
            <screen>
cd /boot
ln -s vmlinuz-2.6.18-5-xen-686 vmlinuz-2.6-xenU
ln -s initrd.img-2.6.18-5-xen-686 initrd-2.6-xenU
            </screen>
          </para>
        </formalpara>
      </sect3>

    </sect2>

    <sect2>
      <title>Installing DRBD</title>

      <para>
        Recommended on all nodes: <ulink
        url="http://www.drbd.org/">DRBD</ulink> is required if you
        want to use the high availability (HA) features of Ganeti, but
        optional if you don't require HA or only run Ganeti on
        single-node clusters. You can upgrade a non-HA cluster to an
        HA one later, but you might need to export and re-import all
        your instances to take advantage of the new features.
      </para>

      <para>
        Supported DRBD versions: the <literal>0.7</literal> series
        <emphasis role="strong">or</emphasis>
        <literal>8.0.x</literal>. It's recommended to have at least
        version <literal>0.7.24</literal> if you use
        <command>udev</command> since older versions have a bug
        related to device discovery which can be triggered in cases of
        hard drive failure.
      </para>

      <para>
        Now the bad news: unless your distribution already provides
        it, installing DRBD might involve recompiling your kernel, or
        at least fiddling with it. Hopefully at least the Xen-ified
        kernel source to start from will be provided.
      </para>

      <para>
        The good news is that you don't need to configure DRBD at all.
        Ganeti will do it for you for every instance you set up. If
        you have the DRBD utils installed and the module in your
        kernel you're fine. Please check that your system is
        configured to load the module at every boot, and that it
        passes the following option to the module: for
        <literal>0.7.x</literal>,
        <computeroutput>minor_count=64</computeroutput> (this will
        allow you to use up to 32 instances per node); for
        <literal>8.0.x</literal> you can use up to
        <constant>255</constant> minors
        (i.e. <computeroutput>minor_count=255</computeroutput>, but
        for most clusters <constant>128</constant> should be enough).
      </para>
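To make the sizing arithmetic concrete: the numbers quoted above imply that each mirrored instance consumes two DRBD minors, so dividing the module's minor count by two gives the per-node instance capacity. A throwaway shell sketch of that calculation:

```shell
# minor_count=64 divided by the two DRBD minors per instance
# (as implied by the figures above) gives 32 instances per node.
minor_count=64
minors_per_instance=2
echo $((minor_count / minors_per_instance))
```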

      <formalpara><title>Debian</title>
        <para>
         You can just install (build) the DRBD 0.7 module with the
         following commands (make sure you are running the Xen
         kernel):
        </para>
      </formalpara>

      <screen>
apt-get install drbd0.7-module-source drbd0.7-utils
m-a update
m-a a-i drbd0.7
echo drbd minor_count=64 >> /etc/modules
modprobe drbd minor_count=64
      </screen>
      <para>or for using DRBD <literal>8.x</literal> from the etch
      backports:</para>
      <screen>
apt-get install -t etch-backports drbd8-module-source drbd8-utils
m-a update
m-a a-i drbd8
echo drbd minor_count=128 >> /etc/modules
modprobe drbd minor_count=128
      </screen>

      <para>
        It is also recommended that you comment out the default
        resources in the <filename>/etc/drbd.conf</filename> file, so
        that the init script doesn't try to configure any DRBD
        devices. You can do this by prefixing all
        <literal>resource</literal> lines in the file with the keyword
        <literal>skip</literal>, like this:
      </para>

      <screen>
skip resource r0 {
...
}

skip resource "r1" {
...
}
      </screen>

    </sect2>

    <sect2>
      <title>Other required software</title>

      <para>Besides Xen and DRBD, you will need to install the
      following (on all nodes):</para>

      <itemizedlist>
        <listitem>
          <simpara><ulink url="http://sourceware.org/lvm2/">LVM
          version 2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssl.org/">OpenSSL</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssh.com/portable.html">OpenSSH</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://bridge.sourceforge.net/">Bridge
          utilities</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://developer.osdl.org/dev/iproute2">iproute2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz">arping</ulink>
          (part of the iputils package)</simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.kernel.org/pub/linux/utils/raid/mdadm/">mdadm</ulink>
          (Linux software RAID tools)</simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://www.python.org">Python 2.4</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://twistedmatrix.com/">Python
          Twisted library</ulink> - the core library is
          enough</simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://pyopenssl.sourceforge.net/">Python OpenSSL
          bindings</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.undefined.org/python/#simplejson">simplejson Python
          module</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://pyparsing.wikispaces.com/">pyparsing Python
          module</ulink></simpara>
        </listitem>
      </itemizedlist>

      <para>
        These programs are supplied as part of most Linux
        distributions, so usually they can be installed via apt or
        similar methods. Also many of them will already be installed
        on a standard machine.
      </para>

      <formalpara><title>Debian</title>

      <para>You can use this command line to install all of them:</para>

      </formalpara>
      <screen>
# apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
  python2.4 python-twisted-core python-pyopenssl openssl \
  mdadm python-pyparsing python-simplejson
      </screen>

    </sect2>

  </sect1>

  <sect1>
    <title>Setting up the environment for Ganeti</title>

    <sect2>
      <title>Configuring the network</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        Ganeti relies on Xen running in "bridge mode", which means the
        instances' network interfaces will be attached to a software bridge
        running in dom0. Xen by default creates such a bridge at startup, but
        your distribution might have a different way to do things.
      </para>

      <para>
        Beware that the default name Ganeti uses is
        <hardware>xen-br0</hardware> (which was used in Xen 2.0)
        while Xen 3.0 uses <hardware>xenbr0</hardware> by
        default. The default bridge your Ganeti cluster will use for new
        instances can be specified at cluster initialization time.
      </para>

      <formalpara><title>Debian</title>
        <para>
          The recommended Debian way to configure the Xen bridge is to
          edit your <filename>/etc/network/interfaces</filename> file
          and substitute your normal Ethernet stanza with the
          following snippet:

        <screen>
auto xen-br0
iface xen-br0 inet static
        address <replaceable>YOUR_IP_ADDRESS</replaceable>
        netmask <replaceable>YOUR_NETMASK</replaceable>
        network <replaceable>YOUR_NETWORK</replaceable>
        broadcast <replaceable>YOUR_BROADCAST_ADDRESS</replaceable>
        gateway <replaceable>YOUR_GATEWAY</replaceable>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        </screen>
        </para>
      </formalpara>

      <para>
        The following commands need to be executed on the local
        console:
      </para>
      <screen>
ifdown eth0
ifup xen-br0
      </screen>

      <para>
        To check if the bridge is set up, use <command>ip</command>
        and <command>brctl show</command>:
      </para>

      <screen>
# ip a show xen-br0
9: xen-br0: &lt;BROADCAST,MULTICAST,UP,10000&gt; mtu 1500 qdisc noqueue
    link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
    inet6 fe80::220:fcff:fe1e:d55d/64 scope link
       valid_lft forever preferred_lft forever

# brctl show xen-br0
bridge name     bridge id               STP enabled     interfaces
xen-br0         8000.0020fc1ed55d       no              eth0
      </screen>

    </sect2>

    <sect2>
      <title>Configuring LVM</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <note>
        <simpara>The volume group is required to be at least
        <constant>20GiB</constant>.</simpara>
      </note>
      <para>
        If you haven't configured your LVM volume group at install
        time you need to do it before trying to initialize the Ganeti
        cluster. This is done by formatting the devices/partitions you
        want to use for it and then adding them to the relevant volume
        group:

       <screen>
pvcreate /dev/sda3
vgcreate xenvg /dev/sda3
       </screen>
or
       <screen>
pvcreate /dev/sdb1
pvcreate /dev/sdc1
vgcreate xenvg /dev/sdb1 /dev/sdc1
       </screen>
      </para>

      <para>
        If you want to add a device later you can do so with the
        <citerefentry><refentrytitle>vgextend</refentrytitle>
        <manvolnum>8</manvolnum></citerefentry> command:
      </para>

      <screen>
pvcreate /dev/sdd1
vgextend xenvg /dev/sdd1
      </screen>

      <formalpara>
        <title>Optional</title>
        <para>
          It is recommended to configure LVM not to scan the DRBD
          devices for physical volumes. This can be accomplished by
          editing <filename>/etc/lvm/lvm.conf</filename> and adding
          the <literal>/dev/drbd[0-9]+</literal> regular expression to
          the <literal>filter</literal> variable, like this:
<screen>
    filter = [ "r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]
</screen>
        </para>
      </formalpara>

    </sect2>

    <sect2>
      <title>Installing Ganeti</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        It's now time to install the Ganeti software itself. Download
        the source from <ulink
        url="http://code.google.com/p/ganeti/"></ulink>.
      </para>

        <screen>
tar xvzf ganeti-1.2b2.tar.gz
cd ganeti-1.2b2
./configure --localstatedir=/var --sysconfdir=/etc
make
make install
mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export
        </screen>

      <para>
        You also need to copy the file
        <filename>doc/examples/ganeti.initd</filename>
        from the source archive to
        <filename>/etc/init.d/ganeti</filename> and register it with
        your distribution's startup scripts, for example in Debian:
      </para>
      <screen>update-rc.d ganeti defaults 20 80</screen>

      <para>
        In order to automatically restart failed instances, you need
        to set up a cron job to run the
        <computeroutput>ganeti-watcher</computeroutput> program. A
        sample cron file is provided in the source at
        <filename>doc/examples/ganeti.cron</filename>; you can copy
        that (adjusting the path if necessary) to
        <filename>/etc/cron.d/ganeti</filename>.
      </para>

    </sect2>

    <sect2>
      <title>Installing the Operating System support packages</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        To be able to install instances you need to have an Operating
        System installation script. An example for Debian Etch is
        provided on the project web site. Download it from <ulink
        url="http://code.google.com/p/ganeti/"></ulink> and follow the
        instructions in the <filename>README</filename> file. Here is
        the installation procedure:
      </para>

      <screen>
cd /srv/ganeti/os
tar xvf instance-debian-etch-0.2.tar
mv instance-debian-etch-0.2 debian-etch
      </screen>

      <para>
        In order to use this OS definition, you need to have internet
        access from your nodes and have the <citerefentry>
        <refentrytitle>debootstrap</refentrytitle>
        <manvolnum>8</manvolnum></citerefentry>, <citerefentry>
        <refentrytitle>dump</refentrytitle><manvolnum>8</manvolnum>
        </citerefentry> and <citerefentry>
        <refentrytitle>restore</refentrytitle>
        <manvolnum>8</manvolnum> </citerefentry> commands installed on
        all nodes.
      </para>
      <formalpara>
        <title>Debian</title>
        <para>
          Use this command on all nodes to install the required
          packages:

          <screen>apt-get install debootstrap dump</screen>
        </para>
      </formalpara>

      <para>
        Alternatively, you can create your own OS definitions. See the
        manpage
        <citerefentry>
        <refentrytitle>ganeti-os-interface</refentrytitle>
        <manvolnum>8</manvolnum>
        </citerefentry>.
      </para>

    </sect2>

    <sect2>
      <title>Initializing the cluster</title>

      <para><emphasis role="strong">Mandatory:</emphasis> only on one
      node per cluster.</para>

      <para>The last step is to initialize the cluster. After you've repeated
        the above process on all of your nodes, choose one as the master, and
        execute:
      </para>

      <screen>
gnt-cluster init <replaceable>CLUSTERNAME</replaceable>
      </screen>

      <para>
        The <replaceable>CLUSTERNAME</replaceable> is a hostname,
        which must be resolvable (e.g. it must exist in DNS or in
        <filename>/etc/hosts</filename>) by all the nodes in the
        cluster. You must choose a name different from any of the
        node names for a multi-node cluster. In general the best
        choice is to have a unique name for a cluster, even if it
        consists of only one machine, as you will be able to expand it
        later without any problems.
      </para>
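For instance, on a small cluster without DNS, the <filename>/etc/hosts</filename> file on every node might carry entries like the following (all names and addresses here are placeholders, not values mandated by Ganeti):

```
192.168.1.1    node1.example.com node1
192.168.1.2    node2.example.com node2
192.168.1.100  cluster1.example.com cluster1
```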

      <para>
        If the bridge name you are using is not
        <literal>xen-br0</literal>, use the <option>-b
        <replaceable>BRIDGENAME</replaceable></option> option to
        specify the bridge name. In this case, you should also use the
        <option>--master-netdev
        <replaceable>BRIDGENAME</replaceable></option> option with the
        same <replaceable>BRIDGENAME</replaceable> argument.
      </para>

      <para>
        You can use a different name than <literal>xenvg</literal> for
        the volume group (but note that the name must be identical on
        all nodes). In this case you need to specify it by passing the
        <option>-g <replaceable>VGNAME</replaceable></option> option
        to <computeroutput>gnt-cluster init</computeroutput>.
      </para>
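Putting these options together, a hypothetical initialization on a system using the Xen 3.0 bridge name and a volume group named <literal>ganetivg</literal> might look like this (bridge, volume group, and cluster names below are illustrative placeholders):

```
gnt-cluster init -b xenbr0 --master-netdev xenbr0 -g ganetivg cluster1.example.com
```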

      <para>
        You can also invoke the command with the
        <option>--help</option> option in order to see all the
        possibilities.
      </para>

    </sect2>

    <sect2>
      <title>Joining the nodes to the cluster</title>

      <para>
        <emphasis role="strong">Mandatory:</emphasis> for all the
        other nodes.
      </para>

      <para>
        After you have initialized your cluster you need to join the
        other nodes to it. You can do so by executing the following
        command on the master node:
      </para>
        <screen>
gnt-node add <replaceable>NODENAME</replaceable>
        </screen>
    </sect2>

    <sect2>
      <title>Separate replication network</title>

      <para><emphasis role="strong">Optional</emphasis></para>
      <para>
        Ganeti uses DRBD to mirror the disks of the virtual instances
        between nodes. To use a dedicated network interface for this
        (in order to improve performance or to enhance security) you
        need to configure an additional interface for each node. Use
        the <option>-s</option> option with
        <computeroutput>gnt-cluster init</computeroutput> and
        <computeroutput>gnt-node add</computeroutput> to specify the
        IP address of this secondary interface to use for each
        node. Note that if you specified this option at cluster setup
        time, you must afterwards use it for every node add operation.
      </para>
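As an illustration, with a dedicated replication network the commands might look as follows (the <literal>192.168.2.x</literal> addresses and host names are placeholders for your own secondary-interface addresses):

```
gnt-cluster init -s 192.168.2.1 cluster1.example.com
gnt-node add -s 192.168.2.2 node2.example.com
```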
    </sect2>

    <sect2>
      <title>Testing the setup</title>

      <para>
        Execute the <computeroutput>gnt-node list</computeroutput>
        command to see all nodes in the cluster:
      <screen>
# gnt-node list
Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
node1.example.com 197404 197404   2047  1896   125     0     0
      </screen>
      </para>
    </sect2>

  <sect1>
    <title>Setting up and managing virtual instances</title>
    <sect2>
      <title>Setting up virtual instances</title>
      <para>
        This step shows how to set up a virtual instance with either
        non-mirrored disks (<computeroutput>plain</computeroutput>) or
        with network mirrored disks
        (<computeroutput>remote_raid1</computeroutput> for DRBD 0.7
        and <computeroutput>drbd</computeroutput> for DRBD 8.x). All
        commands need to be executed on the Ganeti master node (the
        one on which <computeroutput>gnt-cluster init</computeroutput>
        was run). Verify that the OS scripts are present on all
        cluster nodes with <computeroutput>gnt-os
        list</computeroutput>.
      </para>
      <para>
        To create a virtual instance, you need a hostname which is
        resolvable (DNS or <filename>/etc/hosts</filename> on all
        nodes). The following command will create a non-mirrored
        instance for you:
      </para>
      <screen>
gnt-instance add --node=node1 -o debian-etch -t plain inst1.example.com
* creating instance disks...
adding instance inst1.example.com to cluster config
Waiting for instance inst1.example.com to sync disks.
Instance inst1.example.com's disks are in sync.
creating os for instance inst1.example.com on node node1.example.com
* running the instance OS create scripts...
      </screen>

      <para>
        The above instance will have no network interface enabled.
        You can access it over the virtual console with
        <computeroutput>gnt-instance console
        <literal>inst1</literal></computeroutput>. There is no
        password for root. As this is a Debian instance, you can
        modify the <filename>/etc/network/interfaces</filename> file
        to set up the network interface (<literal>eth0</literal> is
        the name of the interface provided to the instance).
      </para>

      <para>
        To create a network mirrored instance, change the argument to
        the <option>-t</option> option from <literal>plain</literal>
        to <literal>remote_raid1</literal> (DRBD 0.7) or
        <literal>drbd</literal> (DRBD 8.x) and specify the node on
        which the mirror should reside with the second value of the
        <option>--node</option> option, like this:
      </para>

      <screen>
# gnt-instance add -t remote_raid1 -n node1:node2 -o debian-etch instance2
* creating instance disks...
adding instance instance2 to cluster config
Waiting for instance instance2 to sync disks.
- device sdb:  3.50% done, 304 estimated seconds remaining
- device sdb: 21.70% done, 270 estimated seconds remaining
- device sdb: 39.80% done, 247 estimated seconds remaining
- device sdb: 58.10% done, 121 estimated seconds remaining
- device sdb: 76.30% done, 72 estimated seconds remaining
- device sdb: 94.80% done, 18 estimated seconds remaining
Instance instance2's disks are in sync.
creating os for instance instance2 on node node1.example.com
* running the instance OS create scripts...
* starting instance...
      </screen>

    </sect2>

    <sect2>
      <title>Managing virtual instances</title>
      <para>
        All commands need to be executed on the Ganeti master node.
      </para>

      <para>
        To access the console of an instance, use
        <computeroutput>gnt-instance console
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        To shut down an instance, use <computeroutput>gnt-instance
        shutdown
        <replaceable>INSTANCENAME</replaceable></computeroutput>. To
        start up an instance, use <computeroutput>gnt-instance startup
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        To fail over an instance to its secondary node (only possible
        with the <literal>remote_raid1</literal> or
        <literal>drbd</literal> disk templates), use
        <computeroutput>gnt-instance failover
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        For more instance and cluster administration details, see the
        <emphasis>Ganeti administrator's guide</emphasis>.
      </para>

    </sect2>

  </sect1>

  </article>