Revision 32ad0582

b/docs/install.sgml
    <title>Introduction</title>

    <para>
      Ganeti is a cluster virtualization management system based on
      Xen. This document explains how to bootstrap a Ganeti node (Xen
      <literal>dom0</literal>), create a running cluster and install
      a virtual instance (Xen <literal>domU</literal>). You need to
      repeat most of the steps in this document for every node you
      want to install, but of course we recommend creating some
      semi-automatic procedure if you plan to deploy Ganeti on a
      medium/large scale.
    </para>

    <para>
      A basic Ganeti terminology glossary is provided in the
      introductory section of the <emphasis>Ganeti administrator's
      guide</emphasis>. Please refer to that document if you are
      uncertain about the terms we are using.
    </para>

    <para>
      Ganeti has been developed for Linux and is
      distribution-agnostic. This documentation will use Debian Etch
      as an example system but the examples can easily be translated
      to any other distribution. You are expected to be familiar with
      your distribution, its package management system, and Xen before
      trying to use Ganeti.
    </para>

    <para>This document is divided into two main sections:

      <itemizedlist>
        <listitem>
          <simpara>Installation of the base system and base
          components</simpara>
        </listitem>
        <listitem>
......
    specified in the corresponding sections.
    </para>

  </sect1>

  <sect1>
    <title>Installing the base system and base components</title>

    <sect2>
      <title>Hardware requirements</title>

      <para>
         Any system supported by your Linux distribution is fine.
         64-bit systems are better as they can support more memory.
      </para>

      <para>
         Any disk drive recognized by Linux
         (<literal>IDE</literal>/<literal>SCSI</literal>/<literal>SATA</literal>/etc.)
         is supported in Ganeti. Note that no shared storage
         (e.g. a <literal>SAN</literal>) is needed to get
         high-availability features. It is highly recommended to use
         more than one disk drive to improve speed, but Ganeti also
         works with one disk per machine.
      </para>
    </sect2>

    <sect2>
      <title>Installing the base system</title>
......
        operating system. The only requirement you need to be aware of
        at this stage is to partition leaving enough space for a big
        LVM volume group which will then host your instance
        filesystems. The volume group name Ganeti 1.2 uses is
        <emphasis>xenvg</emphasis>.
      </para>

      <note>
        <simpara>
          You need to use a fully-qualified name for the hostname of
          the nodes.
        </simpara>
      </note>
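
      <para>
        A quick sanity check is to verify that each node reports a
        fully-qualified name (the hostname below is illustrative;
        yours will differ):
      </para>
      <screen>
# hostname --fqdn
node1.example.com
      </screen>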

      <para>
        While you can use an existing system, please note that the
        Ganeti installation is intrusive in terms of changes to the
......

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

      <para>
        While Ganeti is developed with the ability to modularly run on
        different virtualization environments in mind, the only one
        currently usable on a live system is <ulink
......
      </para>

      <para>
        After installing Xen you need to reboot into your xenified
        dom0 system. On some distributions this might involve
        configuring GRUB appropriately, whereas others will configure
        it automatically when you install Xen from a package.
      </para>
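
      <para>
        Once rebooted, a simple way to confirm that you are indeed
        running in dom0 is to list the Xen domains (the output below
        is only indicative):
      </para>
      <screen>
# xm list
Name                              ID Mem(MiB) VCPUs State   Time(s)
Domain-0                           0      512     1 r-----     12.3
      </screen>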

      <formalpara><title>Debian</title>
      <para>
        Under Debian Etch or Sarge+backports you can install the
        relevant xen-linux-system package, which will pull in both the
        hypervisor and the relevant kernel.
      </para>
      </formalpara>
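
      <para>
        The exact package name depends on your architecture and kernel
        flavour; the following is just an illustration for a 686
        system (use <command>apt-cache search xen-linux-system</command>
        to find the name that applies to you):
      </para>
      <screen>
# apt-get install xen-linux-system-2.6.18-4-xen-686
      </screen>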

    </sect2>

    <sect2>
......
        want to use the high availability (HA) features of Ganeti, but
        optional if you don't require HA or only run Ganeti on
        single-node clusters. You can upgrade a non-HA cluster to an
        HA one later, but you might need to export and re-import all
        your instances to take advantage of the new features.
      </para>

......
        series. It's recommended to have at least version
        <literal>0.7.24</literal> if you use <command>udev</command>
        since older versions have a bug related to device discovery
        which can be triggered in cases of hard drive failure.
      </para>
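
      <para>
        Once the DRBD module is installed and loaded, you can check
        which version you are actually running (the version line below
        is just an example):
      </para>
      <screen>
# cat /proc/drbd
version: 0.7.24 (api:79/proto:74)
      </screen>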

      <para>
......
      </para>

      <para>
        The good news is that you don't need to configure DRBD at all.
        Ganeti will do it for you for every instance you set up. If
        you have the DRBD utils installed and the module in your
......
        configured to load the module at every boot.
      </para>

      <formalpara><title>Debian</title>
        <para>
         You can just install (build) the DRBD 0.7 module with the
         following commands:
        </para>
      </formalpara>

      <screen>
apt-get install drbd0.7-module-source drbd0.7-utils
m-a update
m-a a-i drbd0.7
      </screen>
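
      <para>
        To make sure the module is loaded now and at every boot, a
        common Debian approach (assuming the module is named
        <literal>drbd</literal>) is:
      </para>
      <screen>
modprobe drbd
echo drbd &gt;&gt; /etc/modules
      </screen>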

    </sect2>

    <sect2>
......
        </listitem>
      </itemizedlist>

      <para>
        These programs are supplied as part of most Linux
        distributions, so usually they can be installed via apt or
        similar methods. Also many of them will already be installed
        on a standard machine.
      </para>

      <formalpara><title>Debian</title>
      <para>You can use this command line to install all of them:</para>
      </formalpara>
      <screen>
# apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
  fping python2.4 python-twisted-core python-pyopenssl openssl
      </screen>

    </sect2>

  </sect1>
......

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        Ganeti relies on Xen running in "bridge mode", which means the
        instances' network interfaces will be attached to a software bridge
        running in dom0. Xen by default creates such a bridge at startup, but
        your distribution might have a different way to do things.
      </para>

      <para>
        Beware that the default name Ganeti uses is
        <hardware>xen-br0</hardware> (which was used in Xen 2.0)
        while Xen 3.0 uses <hardware>xenbr0</hardware> by
        default. The default bridge your Ganeti cluster will use for new
        instances can be specified at cluster initialization time.
      </para>

      <formalpara><title>Debian</title>
        <para>
          The recommended Debian way to configure the xen bridge is to
          edit your <filename>/etc/network/interfaces</filename> file
          and substitute your normal Ethernet stanza with the
          following snippet:

        <screen>
auto xen-br0
iface xen-br0 inet static
        address <replaceable>YOUR_IP_ADDRESS</replaceable>
        netmask <replaceable>YOUR_NETMASK</replaceable>
        network <replaceable>YOUR_NETWORK</replaceable>
        broadcast <replaceable>YOUR_BROADCAST_ADDRESS</replaceable>
        gateway <replaceable>YOUR_GATEWAY</replaceable>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        </screen>
        </para>
      </formalpara>

      <para>
        The following commands need to be executed on the local console,
        since network connectivity to the node will be interrupted while
        the bridge is brought up:
      </para>
      <screen>
ifdown eth0
ifup xen-br0
      </screen>

      <para>
        To check if the bridge is set up, use <command>ip</command>
        and <command>brctl show</command>:
      </para>

      <screen>
# ip a show xen-br0
9: xen-br0: &lt;BROADCAST,MULTICAST,UP,10000&gt; mtu 1500 qdisc noqueue
    link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
    inet6 fe80::220:fcff:fe1e:d55d/64 scope link
       valid_lft forever preferred_lft forever

# brctl show xen-br0
bridge name     bridge id               STP enabled     interfaces
xen-br0         8000.0020fc1ed55d       no              eth0
      </screen>

    </sect2>

    <sect2>
      <title>Configuring LVM</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
......
        cluster. This is done by formatting the devices/partitions you
        want to use for it and then adding them to the relevant volume
        group:

       <screen>
pvcreate /dev/sda3
vgcreate xenvg /dev/sda3
       </screen>
       or
       <screen>
pvcreate /dev/sdb1
pvcreate /dev/sdc1
vgcreate xenvg /dev/sdb1 /dev/sdc1
       </screen>
      </para>

      <para>
        If you want to add a device later you can do so with the
......
      </para>

      <screen>
pvcreate /dev/sdd1
vgextend xenvg /dev/sdd1
      </screen>
    </sect2>

    <sect2>
......
      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        It's now time to install the Ganeti software itself. Download
        the source from <ulink
        url="http://code.google.com/p/ganeti/"></ulink>.
      </para>

        <screen>
tar xvzf ganeti-1.2b1.tar.gz
cd ganeti-1.2b1
./configure --localstatedir=/var
make
make install
......
        </screen>

      <para>
        You also need to copy the file
        <filename>docs/examples/ganeti.initd</filename>
        from the source archive to
        <filename>/etc/init.d/ganeti</filename> and register it with
        your distribution's startup scripts, for example in Debian:
      </para>
      <screen>update-rc.d ganeti defaults 20 80</screen>
......
      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        To be able to install instances you need to have an Operating
        System installation script. An example for Debian Etch is
        provided on the project web site. Download it from <ulink
        url="http://code.google.com/p/ganeti/"></ulink> and follow the
        instructions in the <filename>README</filename> file. Here is
        the installation procedure:
      </para>

      <screen>
cd /srv/ganeti/os
tar xvf instance-debian-etch-0.1.tar
mv instance-debian-etch-0.1 debian-etch
      </screen>

      <para>
        Alternatively, you can create your own OS definitions. See the
        manpage
        <citerefentry>
        <refentrytitle>ganeti-os-interface</refentrytitle>
......

      <para>The last step is to initialize the cluster. After you've repeated
        the above process on all of your nodes, choose one as the master, and
        execute:
      </para>

      <screen>
......
      </para>

      <para>
        If the bridge name you are using is not
        <literal>xen-br0</literal>, use the <option>-b
        <replaceable>BRIDGENAME</replaceable></option> option to
        specify the bridge name. In this case, you should also use the
        <option>--master-netdev
        <replaceable>BRIDGENAME</replaceable></option> option with the
        same <replaceable>BRIDGENAME</replaceable> argument.
      </para>

      <para>
        You can use a different name than <literal>xenvg</literal> for
        the volume group (but note that the name must be identical on
        all nodes). In this case you need to specify it by passing the
        <option>-g <replaceable>VGNAME</replaceable></option> option
        to <computeroutput>gnt-cluster init</computeroutput>.
      </para>
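
      <para>
        Putting these options together, an initialization that
        overrides both the bridge and the volume group might look like
        the following sketch (the cluster name and values are
        illustrative):
      </para>
      <screen>
gnt-cluster init -b xenbr0 --master-netdev xenbr0 \
  -g xenvg cluster1.example.com
      </screen>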

      <para>
        You can also invoke the command with the
        <option>--help</option> option in order to see all the
        possibilities.
      </para>

    </sect2>

    <sect2>
      <title>Joining the nodes to the cluster</title>

      <para>
        <emphasis role="strong">Mandatory:</emphasis> for all the
......
        After you have initialized your cluster you need to join the
        other nodes to it. You can do so by executing the following
        command on the master node:
      </para>
        <screen>
gnt-node add <replaceable>NODENAME</replaceable>
        </screen>
    </sect2>

    <sect2>
      <title>Separate replication network</title>

      <para><emphasis role="strong">Optional</emphasis></para>
      <para>
        Ganeti uses DRBD to mirror the disk of the virtual instances
        between nodes. To use a dedicated network interface for this
        (in order to improve performance or to enhance security) you
        need to configure an additional interface for each node. Use
        the <option>-s</option> option with
        <computeroutput>gnt-cluster init</computeroutput> and
        <computeroutput>gnt-node add</computeroutput> to specify the
        IP address of this secondary interface to use for each
        node. Note that if you specified this option at cluster setup
        time, you must afterwards use it for every node add operation.
      </para>
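
      <para>
        For example, assuming a dedicated 192.168.1.0/24 replication
        network (all addresses are illustrative):
      </para>
      <screen>
gnt-cluster init -s 192.168.1.1 cluster1.example.com
gnt-node add -s 192.168.1.2 node2.example.com
      </screen>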
    </sect2>

    <sect2>
      <title>Testing the setup</title>

      <para>
        Execute the <computeroutput>gnt-node list</computeroutput>
        command to see all nodes in the cluster:
      <screen>
# gnt-node list
Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
node1.example.com 197404 197404   2047  1896   125     0     0
      </screen>
      </para>
    </sect2>

  </sect1>

  <sect1>
    <title>Setting up and managing virtual instances</title>
    <sect2>
      <title>Setting up virtual instances</title>
      <para>
        This step shows how to set up a virtual instance with either
        non-mirrored disks (<computeroutput>plain</computeroutput>) or
        with network mirrored disks
        (<computeroutput>remote_raid1</computeroutput>). All commands
        need to be executed on the Ganeti master node (the one on
        which <computeroutput>gnt-cluster init</computeroutput> was
        run). Verify that the OS scripts are present on all cluster
        nodes with <computeroutput>gnt-os list</computeroutput>.
      </para>
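
      <para>
        For example, assuming only the <literal>debian-etch</literal>
        definition installed earlier, the output should list it
        (output format is indicative):
      </para>
      <screen>
# gnt-os list
debian-etch
      </screen>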
      <para>
        To create a virtual instance, you need a hostname which is
        resolvable (DNS or <filename>/etc/hosts</filename> on all
        nodes). The following command will create a non-mirrored
        instance for you:
      </para>
      <screen>
gnt-instance add --node=node1 -o debian-etch -t plain inst1.example.com
* creating instance disks...
adding instance inst1.example.com to cluster config
Waiting for instance inst1.example.com to sync disks.
Instance inst1.example.com's disks are in sync.
creating os for instance inst1.example.com on node node1.example.com
* running the instance OS create scripts...
      </screen>

      <para>
        The above instance will have no network interface enabled.
        You can access it over the virtual console with
        <computeroutput>gnt-instance console
        <literal>inst1</literal></computeroutput>. There is no
        password for root. As this is a Debian instance, you can
        modify the <filename>/etc/network/interfaces</filename> file
        to set up the network interface (<literal>eth0</literal> is the
        name of the interface provided to the instance).
      </para>
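
      <para>
        As a sketch, a static configuration inside the instance could
        look like the following (all addresses are illustrative):
      </para>
      <screen>
auto eth0
iface eth0 inet static
        address 10.1.1.10
        netmask 255.255.255.0
        gateway 10.1.1.1
      </screen>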

      <para>
        To create a network mirrored instance, change the argument to
        the <option>-t</option> option from <literal>plain</literal>
        to <literal>remote_raid1</literal> and specify the node on
        which the mirror should reside with the
        <option>--secondary-node</option> option, like this:
      </para>

      <screen>
# gnt-instance add -t remote_raid1 --secondary-node node1 \
  -n node2 -o debian-etch instance2
* creating instance disks...
adding instance instance2 to cluster config
Waiting for instance instance2 to sync disks.
- device sdb:  3.50% done, 304 estimated seconds remaining
- device sdb: 21.70% done, 270 estimated seconds remaining
- device sdb: 39.80% done, 247 estimated seconds remaining
- device sdb: 58.10% done, 121 estimated seconds remaining
- device sdb: 76.30% done, 72 estimated seconds remaining
- device sdb: 94.80% done, 18 estimated seconds remaining
Instance instance2's disks are in sync.
creating os for instance instance2 on node node2.example.com
* running the instance OS create scripts...
* starting instance...
      </screen>

    </sect2>

    <sect2>
      <title>Managing virtual instances</title>
      <para>
        All commands need to be executed on the Ganeti master node.
      </para>

      <para>
        To access the console of an instance, use
        <computeroutput>gnt-instance console
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        To shut down an instance, use <computeroutput>gnt-instance
        shutdown
        <replaceable>INSTANCENAME</replaceable></computeroutput>. To
        start it up again, use <computeroutput>gnt-instance startup
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>
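
      <para>
        For example, using the instance created earlier:
      </para>
      <screen>
gnt-instance shutdown inst1.example.com
gnt-instance startup inst1.example.com
      </screen>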

      <para>
        To fail over an instance to its secondary node (only possible
        in a <literal>remote_raid1</literal> setup), use
        <computeroutput>gnt-instance failover
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>
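
      <para>
        For example, to fail over the mirrored instance created above
        to its secondary node:
      </para>
      <screen>
gnt-instance failover instance2
      </screen>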

      <para>
        For more instance and cluster administration details, see the
        <emphasis>Ganeti administrator's guide</emphasis>.
      </para>

    </sect2>

  </sect1>

  </article>