<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
]>
  <article class="specification">
  <articleinfo>
    <title>Ganeti installation tutorial</title>
  </articleinfo>
  <para>Documents Ganeti version 1.2</para>

  <sect1>
    <title>Introduction</title>

    <para>
      Ganeti is a cluster virtualization management system based on
      Xen. This document explains how to bootstrap a Ganeti node (Xen
      <literal>dom0</literal>), create a running cluster and install
      a virtual instance (Xen <literal>domU</literal>). You need to
      repeat most of the steps in this document for every node you
      want to install, but we recommend creating a semi-automated
      procedure if you plan to deploy Ganeti on a medium or large
      scale.
    </para>

    <para>
      A basic Ganeti terminology glossary is provided in the
      introductory section of the <emphasis>Ganeti administrator's
      guide</emphasis>. Please refer to that document if you are
      uncertain about the terms we are using.
    </para>

    <para>
      Ganeti has been developed for Linux and is
      distribution-agnostic. This documentation uses Debian Etch as
      an example system, but the examples can easily be translated
      to any other distribution. You are expected to be familiar
      with your distribution, its package management system, and Xen
      before trying to use Ganeti.
    </para>

    <para>This document is divided into two main sections:

      <itemizedlist>
        <listitem>
          <simpara>Installation of the base system and base
          components</simpara>
        </listitem>
        <listitem>
          <simpara>Configuration of the environment for
          Ganeti</simpara>
        </listitem>
      </itemizedlist>

    Each of these is divided into sub-sections. While a full Ganeti
    system will need all of the steps specified, some are not strictly
    required for every environment. Which ones they are, and why, is
    specified in the corresponding sections.
    </para>

  </sect1>

  <sect1>
    <title>Installing the base system and base components</title>

    <sect2>
      <title>Hardware requirements</title>

      <para>
         Any system supported by your Linux distribution is fine;
         64-bit systems are better as they can support more memory.
      </para>

      <para>
         Any disk drive recognized by Linux
         (<literal>IDE</literal>/<literal>SCSI</literal>/<literal>SATA</literal>/etc.)
         is supported by Ganeti. Note that no shared storage
         (e.g. a <literal>SAN</literal>) is needed to get
         high-availability features. Using more than one disk drive
         is highly recommended, as it improves speed, but Ganeti also
         works with a single disk per machine.
      </para>

    </sect2>

    <sect2>
      <title>Installing the base system</title>

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

      <para>
        It is advised to start with a clean, minimal install of the
        operating system. The only requirement you need to be aware
        of at this stage is to partition the disk leaving enough
        space for a big (<emphasis role="strong">minimum
        <constant>20GiB</constant></emphasis>) LVM volume group,
        which will then host your instance filesystems. The volume
        group name Ganeti 1.2 uses by default is
        <emphasis>xenvg</emphasis>.
      </para>

      <note>
        <simpara>
          You need to use a fully-qualified name for the hostname of
          the nodes.
        </simpara>
      </note>
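
      <para>
        To verify the requirement above before proceeding, you can
        check that the name you configured contains a domain
        part. The following is a minimal sketch (the helper name is
        ours, not part of Ganeti); on a real node you would feed it
        the output of <command>hostname -f</command>:
      </para>

```shell
# Hypothetical helper: succeed only if the given hostname contains a
# domain part (at least one dot), as Ganeti nodes require.
check_fqdn() {
    case "$1" in
        *.*) echo "ok: $1 is fully qualified" ;;
        *)   echo "error: $1 is not fully qualified" >&2; return 1 ;;
    esac
}
check_fqdn node1.example.com    # prints "ok: node1.example.com is fully qualified"
```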

      <para>
        While you can use an existing system, please note that the
        Ganeti installation is intrusive in terms of changes to the
        system configuration, and it's best to use a newly-installed
        system without important data on it.
      </para>

      <para>
        Also, for best results, it's advised that the nodes have
        hardware and software configurations as similar as
        possible. This will make administration much easier.
      </para>

    </sect2>

    <sect2>
      <title>Installing Xen</title>

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

      <para>
        While Ganeti is designed to run modularly on different
        virtualization environments, the only one currently usable
        on a live system is <ulink
        url="http://xen.xensource.com/">Xen</ulink>. Supported
        versions are: <simplelist type="inline">
        <member><literal>3.0.3</literal></member>
        <member><literal>3.0.4</literal></member>
        <member><literal>3.1</literal></member> </simplelist>.
      </para>

      <para>
        Please follow your distribution's recommended way to install
        and set up Xen, or, if you wish, install Xen from the
        upstream source following its manual.
      </para>

      <para>
        After installing Xen you need to reboot into your xenified
        dom0 system. On some distributions this might involve
        configuring GRUB appropriately, whereas others will configure
        it automatically when you install Xen from a package.
      </para>

      <formalpara><title>Debian</title>
      <para>
        Under Debian Etch or Sarge+backports you can install the
        relevant <literal>xen-linux-system</literal> package, which
        will pull in both the hypervisor and the relevant kernel.
      </para>
      </formalpara>

      <sect3>
        <title>Selecting the instance kernel</title>

        <para>
          After you have installed Xen, you need to tell Ganeti
          exactly what kernel to use for the instances it will
          create. This is done by creating a
          <emphasis>symlink</emphasis> from your actual kernel to
          <filename>/boot/vmlinuz-2.6-xenU</filename>, and one from
          your initrd to
          <filename>/boot/initrd-2.6-xenU</filename>. Note that if
          you don't use an initrd for the <literal>domU</literal>
          kernel, you don't need to create the initrd symlink.
        </para>

        <formalpara>
          <title>Debian</title>
          <para>
            After installation of the
            <literal>xen-linux-system</literal> package, you need to
            run (replace the exact version number with the one you
            have):
            <screen>
cd /boot
ln -s vmlinuz-2.6.18-5-xen-686 vmlinuz-2.6-xenU
ln -s initrd.img-2.6.18-5-xen-686 initrd-2.6-xenU
            </screen>
          </para>
        </formalpara>
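
        <para>
          As a quick sanity check (our suggestion, not a Ganeti
          requirement), you can verify that both symlinks resolve to
          existing files before creating any instances:
        </para>

```shell
# Small sketch: report whether each Ganeti instance-kernel path
# resolves to an existing file (paths are the Ganeti 1.2 defaults).
check_link() {
    if [ -e "$1" ]; then
        echo "ok: $1"
    else
        echo "missing: $1"
    fi
}
check_link /boot/vmlinuz-2.6-xenU
check_link /boot/initrd-2.6-xenU
```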
      </sect3>

    </sect2>

    <sect2>
      <title>Installing DRBD</title>

      <para>
        Recommended on all nodes: <ulink
        url="http://www.drbd.org/">DRBD</ulink> is required if you
        want to use the high availability (HA) features of Ganeti,
        but optional if you don't require HA or only run Ganeti on
        single-node clusters. You can upgrade a non-HA cluster to an
        HA one later, but you might need to export and re-import all
        your instances to take advantage of the new features.
      </para>

      <para>
        Supported DRBD version: the <literal>0.7</literal>
        series. It's recommended to have at least version
        <literal>0.7.24</literal> if you use <command>udev</command>,
        since older versions have a bug related to device discovery
        which can be triggered in case of hard drive failure.
      </para>

      <para>
        Now the bad news: unless your distribution already provides
        it, installing DRBD might involve recompiling your kernel or
        otherwise fiddling with it. Hopefully at least the xenified
        kernel source to start from will be provided.
      </para>

      <para>
        The good news is that you don't need to configure DRBD at
        all. Ganeti will do it for you for every instance you set
        up. If you have the DRBD utils installed and the module in
        your kernel, you're fine. Please check that your system is
        configured to load the module at every boot.
      </para>
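
      <para>
        On Debian-style systems, one way to make sure the module is
        loaded at boot is to list it in
        <filename>/etc/modules</filename>. The sketch below operates
        on a scratch copy so it can be shown safely; on a real node
        the target would be <filename>/etc/modules</filename>
        itself:
      </para>

```shell
# Sketch: idempotently add "drbd" to a modules file. A temporary file
# stands in for /etc/modules here; substitute the real path on a node.
modules_file=$(mktemp)
grep -qx 'drbd' "$modules_file" || echo 'drbd' >> "$modules_file"
# Running it a second time must not duplicate the entry:
grep -qx 'drbd' "$modules_file" || echo 'drbd' >> "$modules_file"
cat "$modules_file"    # prints a single "drbd" line
```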

      <formalpara><title>Debian</title>
        <para>
         You can just install (build) the DRBD 0.7 module with the
         following commands (<command>m-a</command> is provided by
         the <literal>module-assistant</literal> package):
        </para>
      </formalpara>

      <screen>
apt-get install module-assistant drbd0.7-module-source drbd0.7-utils
m-a update
m-a a-i drbd0.7
      </screen>

    </sect2>

    <sect2>
      <title>Other required software</title>

      <para>Besides Xen and DRBD, you will need to install the
      following (on all nodes):</para>

      <itemizedlist>
        <listitem>
          <simpara><ulink url="http://sourceware.org/lvm2/">LVM
          version 2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssl.org/">OpenSSL</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssh.com/portable.html">OpenSSH</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://bridge.sourceforge.net/">Bridge
          utilities</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://fping.sourceforge.net/">fping</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://developer.osdl.org/dev/iproute2">iproute2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz">arping</ulink>
          (part of the iputils package)</simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.kernel.org/pub/linux/utils/raid/mdadm/">mdadm</ulink>
          (Linux software RAID tools)</simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://www.python.org">Python 2.4</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://twistedmatrix.com/">Python
          Twisted library</ulink> - the core library is
          enough</simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://pyopenssl.sourceforge.net/">Python OpenSSL
          bindings</ulink></simpara>
        </listitem>
      </itemizedlist>

      <para>
        These programs are supplied as part of most Linux
        distributions, so usually they can be installed via apt or
        similar methods. Many of them will also already be installed
        on a standard machine.
      </para>

      <formalpara><title>Debian</title>

      <para>You can use this command line to install all of them:</para>

      </formalpara>
      <screen>
# apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
  fping python2.4 python-twisted-core python-pyopenssl openssl
      </screen>

    </sect2>

  </sect1>

  <sect1>
    <title>Setting up the environment for Ganeti</title>

    <sect2>
      <title>Configuring the network</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        Ganeti relies on Xen running in "bridge mode", which means
        the instances' network interfaces will be attached to a
        software bridge running in dom0. Xen by default creates such
        a bridge at startup, but your distribution might have a
        different way to do things.
      </para>

      <para>
        Beware that the default name Ganeti uses is
        <hardware>xen-br0</hardware> (which was used in Xen 2.0),
        while Xen 3.0 uses <hardware>xenbr0</hardware> by
        default. The default bridge your Ganeti cluster will use for
        new instances can be specified at cluster initialization
        time.
      </para>

      <formalpara><title>Debian</title>
        <para>
          The recommended Debian way to configure the Xen bridge is
          to edit your <filename>/etc/network/interfaces</filename>
          file and substitute your normal Ethernet stanza with the
          following snippet:

        <screen>
auto xen-br0
iface xen-br0 inet static
        address <replaceable>YOUR_IP_ADDRESS</replaceable>
        netmask <replaceable>YOUR_NETMASK</replaceable>
        network <replaceable>YOUR_NETWORK</replaceable>
        broadcast <replaceable>YOUR_BROADCAST_ADDRESS</replaceable>
        gateway <replaceable>YOUR_GATEWAY</replaceable>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        </screen>
        </para>
      </formalpara>

     <para>
       The following commands need to be executed on the local
       console, since taking down <literal>eth0</literal> will
       interrupt any remote connection:
     </para>
      <screen>
ifdown eth0
ifup xen-br0
      </screen>

      <para>
        To check that the bridge is set up, use <command>ip</command>
        and <command>brctl show</command>:
      </para>

      <screen>
# ip a show xen-br0
9: xen-br0: &lt;BROADCAST,MULTICAST,UP,10000&gt; mtu 1500 qdisc noqueue
    link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
    inet6 fe80::220:fcff:fe1e:d55d/64 scope link
       valid_lft forever preferred_lft forever

# brctl show xen-br0
bridge name     bridge id               STP enabled     interfaces
xen-br0         8000.0020fc1ed55d       no              eth0
      </screen>

    </sect2>
395

    
396
    <sect2>
397
      <title>Configuring LVM</title>
398

    
399

    
400
      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>
401

    
402
      <note>
403
        <simpara>The volume group is required to be at least
404
        <constant>20GiB</constant>.</simpara>
405
      </note>
406
      <para>
407
        If you haven't configured your LVM volume group at install
408
        time you need to do it before trying to initialize the Ganeti
409
        cluster. This is done by formatting the devices/partitions you
410
        want to use for it and then adding them to the relevant volume
411
        group:
412

    
413
       <screen>
414
pvcreate /dev/sda3
415
vgcreate xenvg /dev/sda3
416
       </screen>
417
or
418
       <screen>
419
pvcreate /dev/sdb1
420
pvcreate /dev/sdc1
421
vgcreate xenvg /dev/sdb1 /dev/sdc1
422
       </screen>
423
      </para>
424

    
425
      <para>
426
	If you want to add a device later you can do so with the
427
	<citerefentry><refentrytitle>vgextend</refentrytitle>
428
	<manvolnum>8</manvolnum></citerefentry> command:
429
      </para>
430

    
431
      <screen>
432
pvcreate /dev/sdd1
433
vgextend xenvg /dev/sdd1
434
      </screen>
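
      <para>
        Before initializing the cluster, it is worth checking that
        the volume group meets the 20GiB minimum noted above. This
        helper (ours, not part of Ganeti) parses size output such as
        that produced by <command>vgs --noheadings --units g -o
        vg_size xenvg</command>:
      </para>

```shell
# Sketch: exit 0 if the first field (a size such as "180.24g") is at
# least 20 GiB, the Ganeti 1.2 minimum; exit 1 otherwise.
check_vg_size() {
    awk '{ gsub(/[gG]/, "", $1); exit ($1 + 0 >= 20) ? 0 : 1 }'
}
if echo "180.24g" | check_vg_size; then
    echo "volume group is big enough"
fi
```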
    </sect2>

    <sect2>
      <title>Installing Ganeti</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        It's now time to install the Ganeti software itself.
        Download the source from <ulink
        url="http://code.google.com/p/ganeti/"></ulink>.
      </para>

        <screen>
tar xvzf ganeti-1.2b1.tar.gz
cd ganeti-1.2b1
./configure --localstatedir=/var
make
make install
mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export
        </screen>

      <para>
        You also need to copy the file
        <filename>docs/examples/ganeti.initd</filename> from the
        source archive to <filename>/etc/init.d/ganeti</filename>
        and register it with your distribution's startup scripts,
        for example in Debian:
      </para>
      <screen>update-rc.d ganeti defaults 20 80</screen>

    </sect2>

    <sect2>
      <title>Installing the Operating System support packages</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        To be able to install instances you need to have an
        Operating System installation script. An example for Debian
        Etch is provided on the project web site. Download it from
        <ulink url="http://code.google.com/p/ganeti/"></ulink> and
        follow the instructions in the
        <filename>README</filename> file. Here is the installation
        procedure:
      </para>

      <screen>
cd /srv/ganeti/os
tar xvf instance-debian-etch-0.1.tar
mv instance-debian-etch-0.1 debian-etch
      </screen>

      <para>
        In order to use this OS definition, you need to have internet
        access from your nodes and have <citerefentry>
        <refentrytitle>debootstrap</refentrytitle>
        <manvolnum>8</manvolnum> </citerefentry> installed on all the
        nodes.
      </para>
      <formalpara>
        <title>Debian</title>
        <para>
          Use this command on all nodes to install
          <computeroutput>debootstrap</computeroutput>:

          <screen>apt-get install debootstrap</screen>
        </para>
      </formalpara>

      <para>
        Alternatively, you can create your own OS definitions. See
        the manpage
        <citerefentry>
        <refentrytitle>ganeti-os-interface</refentrytitle>
        <manvolnum>8</manvolnum>
        </citerefentry>.
      </para>

    </sect2>

    <sect2>
      <title>Initializing the cluster</title>

      <para><emphasis role="strong">Mandatory:</emphasis> only on one
      node per cluster.</para>

      <para>The last step is to initialize the cluster. After you've
        repeated the above process on all of your nodes, choose one
        as the master and execute:
      </para>

      <screen>
gnt-cluster init <replaceable>CLUSTERNAME</replaceable>
      </screen>

      <para>
        The <replaceable>CLUSTERNAME</replaceable> is a hostname,
        which must be resolvable (e.g. it must exist in DNS or in
        <filename>/etc/hosts</filename>) by all the nodes in the
        cluster. For a multi-node cluster you must choose a name
        different from any of the node names. In general the best
        choice is to have a unique name for a cluster, even if it
        consists of only one machine, as you will then be able to
        expand it later without any problems.
      </para>

      <para>
        If the bridge name you are using is not
        <literal>xen-br0</literal>, use the <option>-b
        <replaceable>BRIDGENAME</replaceable></option> option to
        specify the bridge name. In this case, you should also use
        the <option>--master-netdev
        <replaceable>BRIDGENAME</replaceable></option> option with
        the same <replaceable>BRIDGENAME</replaceable> argument.
      </para>

      <para>
        You can use a different name than <literal>xenvg</literal>
        for the volume group (but note that the name must be
        identical on all nodes). In this case you need to specify it
        by passing the <option>-g
        <replaceable>VGNAME</replaceable></option> option to
        <computeroutput>gnt-cluster init</computeroutput>.
      </para>
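
      <para>
        Putting the options together, an initialization with a
        non-default bridge and volume group would look like the
        sketch below (the bridge, group, and cluster names are
        illustrative only):
      </para>

```shell
# Sketch: build the gnt-cluster init command line for a cluster that
# uses bridge "br0" and volume group "ganetivg" (example values).
BRIDGE=br0
VGNAME=ganetivg
CLUSTERNAME=cluster1.example.com
cmd="gnt-cluster init -b $BRIDGE --master-netdev $BRIDGE -g $VGNAME $CLUSTERNAME"
echo "$cmd"
```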

      <para>
        You can also invoke the command with the
        <option>--help</option> option in order to see all the
        possibilities.
      </para>

    </sect2>

    <sect2>
      <title>Joining the nodes to the cluster</title>

      <para>
        <emphasis role="strong">Mandatory:</emphasis> for all the
        other nodes.
      </para>

      <para>
        After you have initialized your cluster you need to join the
        other nodes to it. You can do so by executing the following
        command on the master node:
      </para>
        <screen>
gnt-node add <replaceable>NODENAME</replaceable>
        </screen>
    </sect2>

    <sect2>
      <title>Separate replication network</title>

      <para><emphasis role="strong">Optional</emphasis></para>
      <para>
        Ganeti uses DRBD to mirror the disks of the virtual
        instances between nodes. To use a dedicated network
        interface for this (in order to improve performance or to
        enhance security) you need to configure an additional
        interface for each node. Use the <option>-s</option> option
        with <computeroutput>gnt-cluster init</computeroutput> and
        <computeroutput>gnt-node add</computeroutput> to specify the
        IP address of this secondary interface to use for each
        node. Note that if you specified this option at cluster
        setup time, you must afterwards use it for every node add
        operation.
      </para>
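
      <para>
        For example (the addresses below are illustrative), a
        cluster with a dedicated replication network would be set up
        along these lines, remembering that once
        <option>-s</option> is used at init time, every later node
        add needs it too:
      </para>

```shell
# Sketch: command lines for a cluster whose nodes use 192.168.1.0/24
# as the replication network (example addresses, not a real run).
init_cmd="gnt-cluster init -s 192.168.1.1 cluster1.example.com"
add_cmd="gnt-node add -s 192.168.1.2 node2.example.com"
printf '%s\n%s\n' "$init_cmd" "$add_cmd"
```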
    </sect2>

    <sect2>
      <title>Testing the setup</title>

      <para>
        Execute the <computeroutput>gnt-node list</computeroutput>
        command to see all nodes in the cluster:
      <screen>
# gnt-node list
Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
node1.example.com 197404 197404   2047  1896   125     0     0
      </screen>
    </para>
  </sect2>

  <sect1>
    <title>Setting up and managing virtual instances</title>
    <sect2>
      <title>Setting up virtual instances</title>
      <para>
        This step shows how to set up a virtual instance with either
        non-mirrored disks (<computeroutput>plain</computeroutput>)
        or with network-mirrored disks
        (<computeroutput>remote_raid1</computeroutput>). All
        commands need to be executed on the Ganeti master node (the
        one on which <computeroutput>gnt-cluster
        init</computeroutput> was run). Verify that the OS scripts
        are present on all cluster nodes with
        <computeroutput>gnt-os list</computeroutput>.
      </para>
      <para>
        To create a virtual instance, you need a hostname which is
        resolvable (via DNS or <filename>/etc/hosts</filename> on
        all nodes). The following command will create a non-mirrored
        instance for you:
      </para>
      <screen>
gnt-instance add --node=node1 -o debian-etch -t plain inst1.example.com
* creating instance disks...
adding instance inst1.example.com to cluster config
Waiting for instance inst1.example.com to sync disks.
Instance inst1.example.com's disks are in sync.
creating os for instance inst1.example.com on node node1.example.com
* running the instance OS create scripts...
      </screen>

      <para>
        The above instance will have no network interface enabled.
        You can access it over the virtual console with
        <computeroutput>gnt-instance console
        <literal>inst1</literal></computeroutput>. There is no
        password for root. As this is a Debian instance, you can
        modify the <filename>/etc/network/interfaces</filename> file
        to set up the network interface (<literal>eth0</literal> is
        the name of the interface provided to the instance).
      </para>
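
      <para>
        For instance, a minimal static configuration inside the
        instance could look like the following fragment (the
        placeholder values are ours; adapt them to your network):
      </para>

```
auto eth0
iface eth0 inet static
        address YOUR_INSTANCE_IP
        netmask YOUR_NETMASK
        gateway YOUR_GATEWAY
```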

      <para>
        To create a network-mirrored instance, change the argument
        to the <option>-t</option> option from
        <literal>plain</literal> to <literal>remote_raid1</literal>
        and specify the node on which the mirror should reside with
        the <option>--secondary-node</option> option, like this:
      </para>

      <screen>
# gnt-instance add -t remote_raid1 --secondary-node node1 \
  -n node2 -o debian-etch instance2
* creating instance disks...
adding instance instance2 to cluster config
Waiting for instance instance2 to sync disks.
- device sdb:  3.50% done, 304 estimated seconds remaining
- device sdb: 21.70% done, 270 estimated seconds remaining
- device sdb: 39.80% done, 247 estimated seconds remaining
- device sdb: 58.10% done, 121 estimated seconds remaining
- device sdb: 76.30% done, 72 estimated seconds remaining
- device sdb: 94.80% done, 18 estimated seconds remaining
Instance instance2's disks are in sync.
creating os for instance instance2 on node node2.example.com
* running the instance OS create scripts...
* starting instance...
      </screen>

    </sect2>

    <sect2>
      <title>Managing virtual instances</title>
      <para>
        All commands need to be executed on the Ganeti master node.
      </para>

      <para>
        To access the console of an instance, use
        <computeroutput>gnt-instance console
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        To shut down an instance, use <computeroutput>gnt-instance
        shutdown
        <replaceable>INSTANCENAME</replaceable></computeroutput>. To
        start it up again, use <computeroutput>gnt-instance startup
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        To fail over an instance to its secondary node (only
        possible in a <literal>remote_raid1</literal> setup), use
        <computeroutput>gnt-instance failover
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        For more instance and cluster administration details, see
        the <emphasis>Ganeti administrator's guide</emphasis>.
      </para>

    </sect2>

  </sect1>

  </article>