<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
]>
  <article class="specification">
  <articleinfo>
    <title>Ganeti installation tutorial</title>
  </articleinfo>
  <para>Documents Ganeti version 1.2</para>

  <sect1>
    <title>Introduction</title>

    <para>
      Ganeti is a cluster virtualization management system based on
      Xen. This document explains how to bootstrap a Ganeti node (Xen
      <literal>dom0</literal>), create a running cluster and install
      virtual instances (Xen <literal>domU</literal>). You need to
      repeat most of the steps in this document for every node you
      want to install, so we recommend creating some semi-automatic
      procedure if you plan to deploy Ganeti on a medium or large
      scale.
    </para>

    <para>
      A basic Ganeti terminology glossary is provided in the
      introductory section of the <emphasis>Ganeti administrator's
      guide</emphasis>. Please refer to that document if you are
      uncertain about the terms we are using.
    </para>

    <para>
      Ganeti has been developed for Linux and is
      distribution-agnostic. This documentation uses Debian Etch as an
      example system, but the examples can easily be translated to any
      other distribution. You are expected to be familiar with your
      distribution, its package management system, and Xen before
      trying to use Ganeti.
    </para>

    <para>This document is divided into two main sections:

      <itemizedlist>
        <listitem>
          <simpara>Installation of the base system and base
            components</simpara>
        </listitem>
        <listitem>
          <simpara>Configuration of the environment for
            Ganeti</simpara>
        </listitem>
      </itemizedlist>

      Each of these is divided into sub-sections. While a full Ganeti
      system needs all of the steps specified, some are not strictly
      required for every environment. Which ones they are, and why, is
      specified in the corresponding sections.
    </para>

  </sect1>

  <sect1>
    <title>Installing the base system and base components</title>

    <sect2>
      <title>Hardware requirements</title>

      <para>
        Any system supported by your Linux distribution is fine.
        64-bit systems are better as they can support more memory.
      </para>

      <para>
        Any disk drive recognized by Linux
        (<literal>IDE</literal>/<literal>SCSI</literal>/<literal>SATA</literal>/etc.)
        is supported in Ganeti. Note that no shared storage (e.g. a
        <literal>SAN</literal>) is needed to get high-availability
        features. Using more than one disk drive is highly recommended
        to improve speed, but Ganeti also works with a single disk per
        machine.
      </para>

    </sect2>

    <sect2>
      <title>Installing the base system</title>

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

      <para>
        It is advised to start with a clean, minimal install of the
        operating system. The only requirement you need to be aware of
        at this stage is to partition leaving enough space for a big
        (<emphasis role="strong">minimum
        <constant>20GiB</constant></emphasis>) LVM volume group which
        will then host your instance filesystems. The volume group
        name Ganeti 1.2 uses (by default) is
        <emphasis>xenvg</emphasis>.
      </para>

      <para>
        While you can use an existing system, please note that the
        Ganeti installation is intrusive in terms of changes to the
        system configuration, and it's best to use a newly-installed
        system without important data on it.
      </para>

      <para>
        Also, for best results, it's advised that the nodes have
        hardware and software configurations as similar as
        possible. This will make administration much easier.
      </para>

      <sect3>
        <title>Hostname issues</title>
        <para>
          Note that Ganeti requires the hostnames of the systems
          (i.e. what the <computeroutput>hostname</computeroutput>
          command outputs) to be fully-qualified names, not short
          names. In other words, you should use
          <literal>node1.example.com</literal> as a hostname and not
          just <literal>node1</literal>.
        </para>

        <formalpara>
          <title>Debian</title>
          <para>
            Note that Debian Etch configures the hostname differently
            than you need it for Ganeti. For example, this is what
            Etch puts in <filename>/etc/hosts</filename> in certain
            situations:
<screen>
127.0.0.1       localhost
127.0.1.1       node1.example.com node1
</screen>

          but for Ganeti you need to have:
<screen>
127.0.0.1       localhost
192.168.1.1     node1.example.com node1
</screen>
            replacing <literal>192.168.1.1</literal> with your node's
            address. Also, the file <filename>/etc/hostname</filename>,
            which configures the hostname of the system, should contain
            <literal>node1.example.com</literal> and not just
            <literal>node1</literal> (you need to run the command
            <computeroutput>/etc/init.d/hostname.sh
            start</computeroutput> after changing the file).
          </para>
        </formalpara>
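        <para>
          As a quick check (a sketch, assuming the example name and
          address used above), the following commands should show the
          fully-qualified name and its resolution:
<screen>
# hostname
node1.example.com
# getent hosts node1.example.com
192.168.1.1     node1.example.com node1
</screen>
        </para>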
      </sect3>

    </sect2>

    <sect2>
      <title>Installing Xen</title>

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

      <para>
        While Ganeti is developed with the ability to modularly run on
        different virtualization environments in mind, the only one
        currently usable on a live system is <ulink
        url="http://xen.xensource.com/">Xen</ulink>. Supported
        versions are: <simplelist type="inline">
        <member><literal>3.0.3</literal></member>
        <member><literal>3.0.4</literal></member>
        <member><literal>3.1</literal></member> </simplelist>.
      </para>

      <para>
        Please follow your distribution's recommended way to install
        and set up Xen, or install Xen from the upstream source, if
        you wish, following their manual.
      </para>

      <para>
        After installing Xen you need to reboot into your Xen-ified
        dom0 system. On some distributions this might involve
        configuring GRUB appropriately, whereas others will configure
        it automatically when you install Xen from a package.
      </para>

      <formalpara><title>Debian</title>
      <para>
        Under Debian Etch or Sarge+backports you can install the
        relevant <literal>xen-linux-system</literal> package, which
        will pull in both the hypervisor and the relevant
        kernel. Also, if you are installing a 32-bit Etch, you should
        install the <computeroutput>libc6-xen</computeroutput> package
        (run <computeroutput>apt-get install
        libc6-xen</computeroutput>).
      </para>
      </formalpara>

      <sect3>
        <title>Xen settings</title>

        <para>
          It's recommended that dom0 is restricted to a low amount of
          memory (<constant>512MiB</constant> is reasonable) and that
          memory ballooning is disabled in the file
          <filename>/etc/xen/xend-config.sxp</filename> by setting the
          value <literal>dom0-min-mem</literal> to
          <constant>0</constant>, like this:
          <computeroutput>(dom0-min-mem 0)</computeroutput>
        </para>

        <para>
          For optimum performance when running both CPU- and
          I/O-intensive instances, it's also recommended that the dom0
          is restricted to one CPU only, for example by booting with
          the kernel parameter <literal>nosmp</literal>.
        </para>

        <para>
          It is recommended that you disable Xen's automatic save of
          virtual machines at system shutdown and their subsequent
          restore at reboot. To achieve this, make sure the variable
          <literal>XENDOMAINS_SAVE</literal> in the file
          <filename>/etc/default/xendomains</filename> is set to an
          empty value.
        </para>
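        <para>
          In <filename>/etc/default/xendomains</filename>, this
          setting looks like:
<screen>
XENDOMAINS_SAVE=""
</screen>
        </para>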

        <formalpara>
          <title>Debian</title>
          <para>
            Besides the ballooning change, which you need to make in
            <filename>/etc/xen/xend-config.sxp</filename>, you need to
            set the memory and nosmp parameters in the file
            <filename>/boot/grub/menu.lst</filename>. You need to
            modify the variable <literal>xenhopt</literal> to add
            <userinput>dom0_mem=512M</userinput> like this:
<screen>
## Xen hypervisor options to use with the default Xen boot option
# xenhopt=dom0_mem=512M
</screen>
            and the <literal>xenkopt</literal> needs to include the
            <userinput>nosmp</userinput> option like this:
<screen>
## Xen Linux kernel options to use with the default Xen boot option
# xenkopt=nosmp
</screen>

          Any existing parameters can be left in place: it's ok to
          have <computeroutput>xenkopt=console=tty0
          nosmp</computeroutput>, for example. After modifying the
          files, you need to run:
<screen>
/sbin/update-grub
</screen>
          </para>
        </formalpara>
        <para>
          If you want to test the experimental HVM support
          with Ganeti and want VNC access to the console of your
          instances, set the following two entries in
          <filename>/etc/xen/xend-config.sxp</filename>:
<screen>
(vnc-listen '0.0.0.0')
(vncpasswd '')
</screen>
          You need to restart the Xen daemon for these settings to
          take effect:
<screen>
/etc/init.d/xend restart
</screen>
        </para>

      </sect3>

      <sect3>
        <title>Selecting the instance kernel</title>

        <para>
          After you have installed Xen, you need to tell Ganeti
          exactly what kernel to use for the instances it will
          create. This is done by creating a
          <emphasis>symlink</emphasis> from your actual kernel to
          <filename>/boot/vmlinuz-2.6-xenU</filename>, and one from
          your initrd to
          <filename>/boot/initrd-2.6-xenU</filename>. Note that if you
          don't use an initrd for the <literal>domU</literal> kernel,
          you don't need to create the initrd symlink.
        </para>

        <formalpara>
          <title>Debian</title>
          <para>
            After installation of the
            <literal>xen-linux-system</literal> package, you need to
            run (replace the exact version number with the one you
            have):
            <screen>
cd /boot
ln -s vmlinuz-2.6.18-5-xen-686 vmlinuz-2.6-xenU
ln -s initrd.img-2.6.18-5-xen-686 initrd-2.6-xenU
            </screen>
          </para>
        </formalpara>
      </sect3>

    </sect2>

    <sect2>
      <title>Installing DRBD</title>

      <para>
        Recommended on all nodes: <ulink
        url="http://www.drbd.org/">DRBD</ulink> is required if you
        want to use the high availability (HA) features of Ganeti, but
        optional if you don't require HA or only run Ganeti on
        single-node clusters. You can upgrade a non-HA cluster to an
        HA one later, but you might need to export and re-import all
        your instances to take advantage of the new features.
      </para>

      <para>
        Supported DRBD versions: the <literal>0.7</literal> series
        <emphasis role="strong">or</emphasis>
        <literal>8.0.7</literal>. It's recommended to have at least
        version <literal>0.7.24</literal> if you use
        <command>udev</command>, since older versions have a bug
        related to device discovery which can be triggered in cases of
        hard drive failure.
      </para>

      <para>
        Now the bad news: unless your distribution already provides
        it, installing DRBD might involve recompiling your kernel or
        at least fiddling with it. Hopefully at least the Xen-ified
        kernel source to start from will be provided.
      </para>

      <para>
        The good news is that you don't need to configure DRBD at all.
        Ganeti will do it for you for every instance you set up. If
        you have the DRBD utils installed and the module in your
        kernel, you're fine. Please check that your system is
        configured to load the module at every boot, and that it
        passes the following option to the module: for
        <literal>0.7.x</literal>,
        <computeroutput>minor_count=64</computeroutput> (this will
        allow you to use up to 32 instances per node); for
        <literal>8.0.x</literal> you can use up to
        <constant>255</constant> minors
        (i.e. <computeroutput>minor_count=255</computeroutput>), but
        for most clusters <constant>128</constant> should be enough.
      </para>
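      <para>
        To verify that the module is loaded, you can run the
        following commands (a quick check;
        <filename>/proc/drbd</filename> only exists while the module
        is loaded, and it also reports the DRBD version):
<screen>
lsmod | grep drbd
cat /proc/drbd
</screen>
      </para>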

      <formalpara><title>Debian</title>
        <para>
         You can just install (build) the DRBD 0.7 module with the
         following commands (make sure you are running the Xen
         kernel):
        </para>
      </formalpara>

      <screen>
apt-get install drbd0.7-module-source drbd0.7-utils
m-a update
m-a a-i drbd0.7
echo drbd minor_count=64 >> /etc/modules
modprobe drbd minor_count=64
      </screen>
      <para>
        or, to use DRBD <literal>8.x</literal> from the etch
        backports (note: you need at least 8.0.7, as older versions
        have a bug that breaks Ganeti's usage of DRBD):
      </para>
      <screen>
apt-get install -t etch-backports drbd8-module-source drbd8-utils
m-a update
m-a a-i drbd8
echo drbd minor_count=128 >> /etc/modules
modprobe drbd minor_count=128
      </screen>

      <para>
        It is also recommended that you comment out the default
        resources in the <filename>/etc/drbd.conf</filename> file, so
        that the init script doesn't try to configure any DRBD
        devices. You can do this by prefixing all
        <literal>resource</literal> lines in the file with the keyword
        <literal>skip</literal>, like this:
      </para>

      <screen>
skip resource r0 {
...
}

skip resource "r1" {
...
}
      </screen>

    </sect2>

    <sect2>
      <title>Other required software</title>

      <para>Besides Xen and DRBD, you will need to install the
      following (on all nodes):</para>

      <itemizedlist>
        <listitem>
          <simpara><ulink url="http://sourceware.org/lvm2/">LVM
          version 2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssl.org/">OpenSSL</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssh.com/portable.html">OpenSSH</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://bridge.sourceforge.net/">Bridge
          utilities</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://developer.osdl.org/dev/iproute2">iproute2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz">arping</ulink>
          (part of the iputils package)</simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.kernel.org/pub/linux/utils/raid/mdadm/">mdadm</ulink>
          (Linux software RAID tools)</simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://www.python.org">Python 2.4</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://twistedmatrix.com/">Python
          Twisted library</ulink> - the core library is
          enough</simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://pyopenssl.sourceforge.net/">Python OpenSSL
          bindings</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.undefined.org/python/#simplejson">simplejson Python
          module</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://pyparsing.wikispaces.com/">pyparsing Python
          module</ulink></simpara>
        </listitem>
      </itemizedlist>

      <para>
        These programs are supplied as part of most Linux
        distributions, so usually they can be installed via apt or
        similar methods. Also, many of them will already be installed
        on a standard machine.
      </para>

      <formalpara><title>Debian</title>

      <para>You can use this command line to install all of them:</para>

      </formalpara>
      <screen>
# apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
  python2.4 python-twisted-core python-pyopenssl openssl \
  mdadm python-pyparsing python-simplejson
      </screen>

    </sect2>

  </sect1>

  <sect1>
    <title>Setting up the environment for Ganeti</title>

    <sect2>
      <title>Configuring the network</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        Ganeti relies on Xen running in "bridge mode", which means the
        instances' network interfaces will be attached to a software
        bridge running in dom0. Xen by default creates such a bridge
        at startup, but your distribution might have a different way
        to do things.
      </para>

      <para>
        Beware that the default name Ganeti uses is
        <hardware>xen-br0</hardware> (which was used in Xen 2.0)
        while Xen 3.0 uses <hardware>xenbr0</hardware> by
        default. The default bridge your Ganeti cluster will use for new
        instances can be specified at cluster initialization time.
      </para>

      <formalpara><title>Debian</title>
        <para>
          The recommended Debian way to configure the Xen bridge is to
          edit your <filename>/etc/network/interfaces</filename> file
          and substitute your normal Ethernet stanza with the
          following snippet:

        <screen>
auto xen-br0
iface xen-br0 inet static
        address <replaceable>YOUR_IP_ADDRESS</replaceable>
        netmask <replaceable>YOUR_NETMASK</replaceable>
        network <replaceable>YOUR_NETWORK</replaceable>
        broadcast <replaceable>YOUR_BROADCAST_ADDRESS</replaceable>
        gateway <replaceable>YOUR_GATEWAY</replaceable>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        </screen>
        </para>
      </formalpara>

      <para>
        The following commands need to be executed on the local
        console:
      </para>
      <screen>
ifdown eth0
ifup xen-br0
      </screen>

      <para>
        To check if the bridge is set up, use <command>ip</command>
        and <command>brctl show</command>:
      </para>

      <screen>
# ip a show xen-br0
9: xen-br0: &lt;BROADCAST,MULTICAST,UP,10000&gt; mtu 1500 qdisc noqueue
    link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
    inet6 fe80::220:fcff:fe1e:d55d/64 scope link
       valid_lft forever preferred_lft forever

# brctl show xen-br0
bridge name     bridge id               STP enabled     interfaces
xen-br0         8000.0020fc1ed55d       no              eth0
      </screen>

    </sect2>

    <sect2>
      <title>Configuring LVM</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <note>
        <simpara>The volume group is required to be at least
        <constant>20GiB</constant>.</simpara>
      </note>
      <para>
        If you haven't configured your LVM volume group at install
        time, you need to do it before trying to initialize the Ganeti
        cluster. This is done by initializing the devices/partitions
        you want to use for it and then adding them to the relevant
        volume group:

       <screen>
pvcreate /dev/sda3
vgcreate xenvg /dev/sda3
       </screen>
or
       <screen>
pvcreate /dev/sdb1
pvcreate /dev/sdc1
vgcreate xenvg /dev/sdb1 /dev/sdc1
       </screen>
      </para>

      <para>
        If you want to add a device later you can do so with the
        <citerefentry><refentrytitle>vgextend</refentrytitle>
        <manvolnum>8</manvolnum></citerefentry> command:
      </para>

      <screen>
pvcreate /dev/sdd1
vgextend xenvg /dev/sdd1
      </screen>
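      <para>
        To verify the result, you can inspect the volume group with
        <command>vgdisplay</command> (a quick check; the exact output
        depends on your disks):
      </para>
      <screen>
vgdisplay xenvg
      </screen>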

      <formalpara>
        <title>Optional</title>
        <para>
          It is recommended to configure LVM not to scan the DRBD
          devices for physical volumes. This can be accomplished by
          editing <filename>/etc/lvm/lvm.conf</filename> and adding
          the <literal>/dev/drbd[0-9]+</literal> regular expression to
          the <literal>filter</literal> variable, like this:
<screen>
    filter = [ "r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]
</screen>
        </para>
      </formalpara>

    </sect2>

    <sect2>
      <title>Installing Ganeti</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        It's now time to install the Ganeti software itself. Download
        the source from <ulink
        url="http://code.google.com/p/ganeti/"></ulink>.
      </para>

        <screen>
tar xvzf ganeti-@GANETI_VERSION@.tar.gz
cd ganeti-@GANETI_VERSION@
./configure --localstatedir=/var --sysconfdir=/etc
make
make install
mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export
        </screen>

      <para>
        You also need to copy the file
        <filename>doc/examples/ganeti.initd</filename>
        from the source archive to
        <filename>/etc/init.d/ganeti</filename> and register it with
        your distribution's startup scripts, for example in Debian:
      </para>
      <screen>update-rc.d ganeti defaults 20 80</screen>

      <para>
        In order to automatically restart failed instances, you need
        to set up a cron job to run the
        <computeroutput>ganeti-watcher</computeroutput> program. A
        sample cron file is provided in the source at
        <filename>doc/examples/ganeti.cron</filename>; you can copy
        that (adjusting the path if needed) to
        <filename>/etc/cron.d/ganeti</filename>.
      </para>
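      <para>
        Putting the two file copies together, the steps look like this
        (a sketch, using the paths mentioned above; run it from the
        unpacked source directory):
      </para>
      <screen>
cp doc/examples/ganeti.initd /etc/init.d/ganeti
chmod +x /etc/init.d/ganeti
update-rc.d ganeti defaults 20 80
cp doc/examples/ganeti.cron /etc/cron.d/ganeti
      </screen>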

    </sect2>

    <sect2>
      <title>Installing the Operating System support packages</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        To be able to install instances you need to have an Operating
        System installation script. An example for Debian Etch is
        provided on the project web site. Download it from <ulink
        url="http://code.google.com/p/ganeti/"></ulink> and follow the
        instructions in the <filename>README</filename> file. Here is
        the installation procedure (replace <constant>0.4</constant>
        with the latest version that is compatible with your Ganeti
        version):
      </para>

      <screen>
cd /srv/ganeti/os
tar xvf ganeti-instance-debian-etch-0.4.tar
mv ganeti-instance-debian-etch-0.4 debian-etch
      </screen>

      <para>
        In order to use this OS definition, you need to have internet
        access from your nodes and have the <citerefentry>
        <refentrytitle>debootstrap</refentrytitle>
        <manvolnum>8</manvolnum></citerefentry>, <citerefentry>
        <refentrytitle>dump</refentrytitle><manvolnum>8</manvolnum>
        </citerefentry> and <citerefentry>
        <refentrytitle>restore</refentrytitle>
        <manvolnum>8</manvolnum> </citerefentry> commands installed on
        all nodes.
      </para>
      <formalpara>
        <title>Debian</title>
        <para>
          Use this command on all nodes to install the required
          packages:

          <screen>apt-get install debootstrap dump</screen>
        </para>
      </formalpara>

      <para>
        Alternatively, you can create your own OS definitions. See the
        manpage
        <citerefentry>
        <refentrytitle>ganeti-os-interface</refentrytitle>
        <manvolnum>8</manvolnum>
        </citerefentry>.
      </para>

    </sect2>

    <sect2>
      <title>Initializing the cluster</title>

      <para><emphasis role="strong">Mandatory:</emphasis> only on one
      node per cluster.</para>

      <para>The last step is to initialize the cluster. After you've
        repeated the above process on all of your nodes, choose one as
        the master, and execute:
      </para>

      <screen>
gnt-cluster init <replaceable>CLUSTERNAME</replaceable>
      </screen>

      <para>
        The <replaceable>CLUSTERNAME</replaceable> is a hostname,
        which must be resolvable (e.g. it must exist in DNS or in
        <filename>/etc/hosts</filename>) by all the nodes in the
        cluster. For a multi-node cluster, you must choose a name
        different from any of the node names. In general the best
        choice is to have a unique name for a cluster, even if it
        consists of only one machine, as you will be able to expand it
        later without any problems.
      </para>
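      <para>
        For example, if you manage names via
        <filename>/etc/hosts</filename>, each node would carry an
        entry like the following (the name and address here are
        illustrative):
      </para>
      <screen>
192.168.1.100   cluster1.example.com cluster1
      </screen>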

      <para>
        If the bridge name you are using is not
        <literal>xen-br0</literal>, use the <option>-b
        <replaceable>BRIDGENAME</replaceable></option> option to
        specify the bridge name. In this case, you should also use the
        <option>--master-netdev
        <replaceable>BRIDGENAME</replaceable></option> option with the
        same <replaceable>BRIDGENAME</replaceable> argument.
      </para>

      <para>
        You can use a different name than <literal>xenvg</literal> for
        the volume group (but note that the name must be identical on
        all nodes). In this case you need to specify it by passing the
        <option>-g <replaceable>VGNAME</replaceable></option> option
        to <computeroutput>gnt-cluster init</computeroutput>.
      </para>
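      <para>
        Combining these options, an initialization command might look
        like this (the bridge, volume group and cluster names here
        are illustrative):
      </para>
      <screen>
gnt-cluster init -b br0 --master-netdev br0 -g myvg cluster1.example.com
      </screen>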

      <para>
        To set up the cluster as an HVM cluster, use the
        <option>--hypervisor=xen-hvm3.1</option> option to use
        the Xen 3.1 HVM hypervisor. Note that with the
        experimental HVM support, you will only be able to create
        HVM instances in a cluster set to this hypervisor type. Mixed
        PVM/HVM clusters are not supported by the Ganeti 1.2
        experimental HVM support. You will also need to create the VNC
        cluster password file
        <filename>/etc/ganeti/vnc-cluster-password</filename>,
        which contains one line with the default VNC password for the
        cluster. Finally, you need to provide an installation ISO
        image for HVM instances, which will not only be mapped to the
        first CDROM of the instance, but which the instance will also
        boot from. This ISO image is expected at
        <filename>/srv/ganeti/iso/hvm-install.iso</filename>.
      </para>

      <para>
        You can also invoke the command with the
        <option>--help</option> option in order to see all the
        possibilities.
      </para>

    </sect2>

    <sect2>
      <title>Joining the nodes to the cluster</title>

      <para>
        <emphasis role="strong">Mandatory:</emphasis> for all the
        other nodes.
      </para>

      <para>
        After you have initialized your cluster you need to join the
        other nodes to it. You can do so by executing the following
        command on the master node:
      </para>
        <screen>
gnt-node add <replaceable>NODENAME</replaceable>
        </screen>
    </sect2>

    <sect2>
      <title>Separate replication network</title>

      <para><emphasis role="strong">Optional</emphasis></para>
      <para>
        Ganeti uses DRBD to mirror the disks of the virtual instances
        between nodes. To use a dedicated network interface for this
        (in order to improve performance or to enhance security) you
        need to configure an additional interface for each node. Use
        the <option>-s</option> option with
        <computeroutput>gnt-cluster init</computeroutput> and
        <computeroutput>gnt-node add</computeroutput> to specify the
        IP address of this secondary interface to use for each
        node. Note that if you specified this option at cluster setup
        time, you must afterwards use it for every node add operation.
      </para>
813
    </sect2>
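As an illustration, the commands for a cluster with a dedicated replication network might look as follows. The 192.168.1.x addresses and host names are hypothetical, and the leading echo keeps this a dry run; drop it to run the commands for real:

```shell
# Hypothetical dedicated replication network 192.168.1.0/24.
# On the first node, initialize the cluster with its secondary IP:
echo gnt-cluster init -s 192.168.1.1 cluster1.example.com
# On the master, add every further node with its own secondary IP:
echo gnt-node add -s 192.168.1.2 node2.example.com
```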

    <sect2>
      <title>Testing the setup</title>

      <para>
        Execute the <computeroutput>gnt-node list</computeroutput>
        command to see all nodes in the cluster:
      </para>
      <screen>
# gnt-node list
Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
node1.example.com 197404 197404   2047  1896   125     0     0
      </screen>
    </sect2>

  <sect1>
    <title>Setting up and managing virtual instances</title>
    <sect2>
      <title>Setting up virtual instances</title>
      <para>
        This step shows how to set up a virtual instance with either
        non-mirrored disks (<computeroutput>plain</computeroutput>) or
        with network mirrored disks
        (<computeroutput>remote_raid1</computeroutput> for drbd 0.7
        and <computeroutput>drbd</computeroutput> for drbd 8.x).  All
        commands need to be executed on the Ganeti master node (the
        one on which <computeroutput>gnt-cluster init</computeroutput>
        was run).  Verify that the OS scripts are present on all
        cluster nodes with <computeroutput>gnt-os
        list</computeroutput>.
      </para>
      <para>
        To create a virtual instance, you need a hostname which is
        resolvable (via DNS or <filename>/etc/hosts</filename> on all
        nodes). The following command will create a non-mirrored
        instance for you:
      </para>
      <screen>
# gnt-instance add --node=node1 -o debian-etch -t plain inst1.example.com
* creating instance disks...
adding instance inst1.example.com to cluster config
Waiting for instance inst1.example.com to sync disks.
Instance inst1.example.com's disks are in sync.
creating os for instance inst1.example.com on node node1.example.com
* running the instance OS create scripts...
      </screen>

      <para>
        The above instance will have no network interface enabled.
        You can access it over the virtual console with
        <computeroutput>gnt-instance console
        <literal>inst1</literal></computeroutput>. There is no
        password for root.  As this is a Debian instance, you can
        modify the <filename>/etc/network/interfaces</filename> file
        to set up the network interface (<literal>eth0</literal> is
        the name of the interface provided to the instance).
      </para>
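For example, a minimal static stanza for the instance's <literal>eth0</literal> might look like the following; the 192.0.2.x addresses are placeholders for your own network:

```
# /etc/network/interfaces (inside the instance); placeholder addresses
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
```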

      <para>
        To create a network mirrored instance, change the argument to
        the <option>-t</option> option from <literal>plain</literal>
        to <literal>remote_raid1</literal> (drbd 0.7) or
        <literal>drbd</literal> (drbd 8.x) and specify the node on
        which the mirror should reside with the second value of the
        <option>--node</option> option, like this:
      </para>

      <screen>
# gnt-instance add -t remote_raid1 -n node1:node2 -o debian-etch instance2
* creating instance disks...
adding instance instance2 to cluster config
Waiting for instance instance2 to sync disks.
- device sdb:  3.50% done, 304 estimated seconds remaining
- device sdb: 21.70% done, 270 estimated seconds remaining
- device sdb: 39.80% done, 247 estimated seconds remaining
- device sdb: 58.10% done, 121 estimated seconds remaining
- device sdb: 76.30% done, 72 estimated seconds remaining
- device sdb: 94.80% done, 18 estimated seconds remaining
Instance instance2's disks are in sync.
creating os for instance instance2 on node node1.example.com
* running the instance OS create scripts...
* starting instance...
      </screen>

    </sect2>

    <sect2>
      <title>Managing virtual instances</title>
      <para>
        All commands need to be executed on the Ganeti master node.
      </para>

      <para>
        To access the console of an instance, use
        <computeroutput>gnt-instance console
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        To shut down an instance, use <computeroutput>gnt-instance
        shutdown
        <replaceable>INSTANCENAME</replaceable></computeroutput>. To
        start up an instance, use <computeroutput>gnt-instance startup
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>
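The shutdown and startup commands pair naturally into a restart. A dry-run sketch with a hypothetical instance name; remove the echo to actually run it on the master node:

```shell
# Dry-run restart of an instance; 'inst1.example.com' is hypothetical.
for cmd in shutdown startup; do
  echo gnt-instance "$cmd" inst1.example.com
done
```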

      <para>
        To failover an instance to its secondary node (only possible
        with the <literal>remote_raid1</literal> or
        <literal>drbd</literal> disk templates), use
        <computeroutput>gnt-instance failover
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        For more instance and cluster administration details, see the
        <emphasis>Ganeti administrator's guide</emphasis>.
      </para>

    </sect2>

  </sect1>

  </article>