<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
]>
  <article class="specification">
  <articleinfo>
    <title>Ganeti installation tutorial</title>
  </articleinfo>
  <para>Documents Ganeti version 1.2</para>

  <sect1>
    <title>Introduction</title>

    <para>
      Ganeti is a cluster virtualization management system based on
      Xen. This document explains how to bootstrap a Ganeti node (Xen
      <literal>dom0</literal>), create a running cluster and install
      virtual instances (Xen <literal>domU</literal>).  You need to
      repeat most of the steps in this document for every node you
      want to install, so we recommend creating a semi-automated
      procedure if you plan to deploy Ganeti on a medium or large
      scale.
    </para>

    <para>
      A basic Ganeti terminology glossary is provided in the
      introductory section of the <emphasis>Ganeti administrator's
      guide</emphasis>. Please refer to that document if you are
      uncertain about the terms we are using.
    </para>

    <para>
      Ganeti has been developed for Linux and is
      distribution-agnostic.  This documentation uses Debian Etch as
      an example system, but the examples can easily be translated to
      any other distribution.  You are expected to be familiar with
      your distribution, its package management system, and Xen
      before trying to use Ganeti.
    </para>

    <para>This document is divided into two main sections:

      <itemizedlist>
        <listitem>
          <simpara>Installation of the base system and base
          components</simpara>
        </listitem>
        <listitem>
          <simpara>Configuration of the environment for
          Ganeti</simpara>
        </listitem>
      </itemizedlist>

    Each of these is divided into sub-sections. While a full Ganeti
    system will need all of the steps specified, some are not strictly
    required for every environment. Which ones they are, and why, is
    specified in the corresponding sections.
    </para>

  </sect1>

  <sect1>
    <title>Installing the base system and base components</title>

    <sect2>
      <title>Hardware requirements</title>

      <para>
         Any system supported by your Linux distribution is fine.
         64-bit systems are preferable, as they can support more
         memory.
      </para>

      <para>
         Any disk drive recognized by Linux
         (<literal>IDE</literal>/<literal>SCSI</literal>/<literal>SATA</literal>/etc.)
         is supported by Ganeti. Note that no shared storage
         (e.g. a <literal>SAN</literal>) is needed to get
         high-availability features. Using more than one disk drive is
         highly recommended to improve speed, but Ganeti also works
         with one disk per machine.
      </para>

    </sect2>
    <sect2>
      <title>Installing the base system</title>

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

      <para>
        It is advised to start with a clean, minimal install of the
        operating system. The only requirement you need to be aware of
        at this stage is to partition leaving enough space for a big
        (<emphasis role="strong">minimum
        <constant>20GiB</constant></emphasis>) LVM volume group which
        will then host your instance filesystems. The volume group
        name Ganeti 1.2 uses (by default) is
        <emphasis>xenvg</emphasis>.
      </para>

      <para>
        While you can use an existing system, please note that the
        Ganeti installation is intrusive in terms of changes to the
        system configuration, and it's best to use a newly-installed
        system without important data on it.
      </para>

      <para>
        Also, for best results, it's advised that the nodes have
        hardware and software configurations as similar as possible.
        This will make administration much easier.
      </para>

      <sect3>
        <title>Hostname issues</title>
        <para>
          Note that Ganeti requires the hostnames of the systems
          (i.e. what the <computeroutput>hostname</computeroutput>
          command outputs) to be fully-qualified names, not short
          names. In other words, you should use
          <literal>node1.example.com</literal> as a hostname and not
          just <literal>node1</literal>.
        </para>
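        <para>
          A quick way to verify this is to check whether the output of
          the <computeroutput>hostname</computeroutput> command
          contains a dot. The following is a minimal sketch (the
          helper function and sample names are illustrative, not part
          of Ganeti):
        </para>
<screen>
# Report whether a name looks fully qualified (contains a dot)
check_fqdn() {
    case "$1" in
        *.*) echo "fully qualified" ;;
        *)   echo "short name" ;;
    esac
}
check_fqdn "$(hostname)"   # should print "fully qualified" on a Ganeti node
</screen>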

        <formalpara>
          <title>Debian</title>
          <para>
            Note that Debian Etch configures the hostname differently
            than you need it for Ganeti. For example, this is what
            Etch puts in <filename>/etc/hosts</filename> in certain
            situations:
<screen>
127.0.0.1       localhost
127.0.1.1       node1.example.com node1
</screen>

          but for Ganeti you need to have:
<screen>
127.0.0.1       localhost
192.168.1.1     node1.example.com node1
</screen>
            replacing <literal>192.168.1.1</literal> with your node's
            address. Also, the file <filename>/etc/hostname</filename>,
            which configures the hostname of the system, should
            contain <literal>node1.example.com</literal> and not just
            <literal>node1</literal> (you need to run the command
            <computeroutput>/etc/init.d/hostname.sh
            start</computeroutput> after changing the file).
          </para>
        </formalpara>
      </sect3>

    </sect2>

    <sect2>
      <title>Installing Xen</title>

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

      <para>
        While Ganeti is developed with the ability to modularly run on
        different virtualization environments in mind, the only one
        currently usable on a live system is <ulink
        url="http://xen.xensource.com/">Xen</ulink>. Supported
        versions are: <simplelist type="inline">
        <member><literal>3.0.3</literal></member>
        <member><literal>3.0.4</literal></member>
        <member><literal>3.1</literal></member> </simplelist>.
      </para>

      <para>
        Please follow your distribution's recommended way to install
        and set up Xen, or, if you wish, install Xen from the upstream
        source following its manual.
      </para>

      <para>
        After installing Xen you need to reboot into your xenified
        dom0 system. On some distributions this might involve
        configuring GRUB appropriately, whereas others will configure
        it automatically when you install Xen from a package.
      </para>

      <formalpara><title>Debian</title>
      <para>
        Under Debian Etch or Sarge+backports you can install the
        relevant <literal>xen-linux-system</literal> package, which
        will pull in both the hypervisor and the relevant
        kernel. Also, if you are installing a 32-bit Etch, you should
        install the <computeroutput>libc6-xen</computeroutput> package
        (run <computeroutput>apt-get install
        libc6-xen</computeroutput>).
      </para>
      </formalpara>

      <sect3>
        <title>Selecting the instance kernel</title>

        <para>
          After you have installed Xen, you need to tell Ganeti
          exactly what kernel to use for the instances it will
          create. This is done by creating a
          <emphasis>symlink</emphasis> from your actual kernel to
          <filename>/boot/vmlinuz-2.6-xenU</filename>, and one from
          your initrd to
          <filename>/boot/initrd-2.6-xenU</filename>. Note that if you
          don't use an initrd for the <literal>domU</literal> kernel,
          you don't need to create the initrd symlink.
        </para>

        <formalpara>
          <title>Debian</title>
          <para>
            After installation of the
            <literal>xen-linux-system</literal> package, you need to
            run (replace the exact version number with the one you
            have):
            <screen>
cd /boot
ln -s vmlinuz-2.6.18-5-xen-686 vmlinuz-2.6-xenU
ln -s initrd.img-2.6.18-5-xen-686 initrd-2.6-xenU
            </screen>
          </para>
        </formalpara>
      </sect3>

    </sect2>

    <sect2>
      <title>Installing DRBD</title>

      <para>
        Recommended on all nodes: <ulink
        url="http://www.drbd.org/">DRBD</ulink> is required if you
        want to use the high availability (HA) features of Ganeti, but
        optional if you don't require HA or only run Ganeti on
        single-node clusters. You can upgrade a non-HA cluster to an
        HA one later, but you might need to export and re-import all
        your instances to take advantage of the new features.
      </para>

      <para>
        Supported DRBD version: the <literal>0.7</literal>
        series. It's recommended to have at least version
        <literal>0.7.24</literal> if you use <command>udev</command>,
        since older versions have a bug related to device discovery
        which can be triggered in cases of hard drive failure.
      </para>

      <para>
        Now the bad news: unless your distribution already provides
        it, installing DRBD might involve recompiling your kernel, or
        at least fiddling with it. Hopefully the xenified kernel
        source to start from will be provided.
      </para>

      <para>
        The good news is that you don't need to configure DRBD at all.
        Ganeti will do it for you for every instance you set up.  If
        you have the DRBD utils installed and the module in your
        kernel, you're fine. Please check that your system is
        configured to load the module at every boot, and that it
        passes the following option to the module:
        <computeroutput>minor_count=64</computeroutput> (this will
        allow you to use up to 32 instances per node).
      </para>

      <formalpara><title>Debian</title>
        <para>
         You can just install (build) the DRBD 0.7 module with the
         following commands (make sure you are running the Xen
         kernel):
        </para>
      </formalpara>

      <screen>
apt-get install drbd0.7-module-source drbd0.7-utils
m-a update
m-a a-i drbd0.7
echo drbd minor_count=64 >> /etc/modules
modprobe drbd minor_count=64
      </screen>

    </sect2>

    <sect2>
      <title>Other required software</title>

      <para>Besides Xen and DRBD, you will need to install the
      following (on all nodes):</para>

      <itemizedlist>
        <listitem>
          <simpara><ulink url="http://sourceware.org/lvm2/">LVM
          version 2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssl.org/">OpenSSL</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssh.com/portable.html">OpenSSH</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://bridge.sourceforge.net/">Bridge
          utilities</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://fping.sourceforge.net/">fping</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://developer.osdl.org/dev/iproute2">iproute2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz">arping</ulink>
          (part of the iputils package)</simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.kernel.org/pub/linux/utils/raid/mdadm/">mdadm</ulink>
          (Linux software RAID tools)</simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://www.python.org">Python 2.4</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://twistedmatrix.com/">Python
          Twisted library</ulink> - the core library is
          enough</simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://pyopenssl.sourceforge.net/">Python OpenSSL
          bindings</ulink></simpara>
        </listitem>
      </itemizedlist>

      <para>
        These programs are supplied as part of most Linux
        distributions, so they can usually be installed via apt or
        similar methods. Many of them will also already be installed
        on a standard machine.
      </para>

      <formalpara><title>Debian</title>

      <para>You can use this command line to install all of them:</para>

      </formalpara>
      <screen>
# apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
  fping python2.4 python-twisted-core python-pyopenssl openssl \
  mdadm
      </screen>

    </sect2>

  </sect1>

  <sect1>
    <title>Setting up the environment for Ganeti</title>

    <sect2>
      <title>Configuring the network</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        Ganeti relies on Xen running in "bridge mode", which means the
        instances' network interfaces will be attached to a software
        bridge running in dom0. Xen by default creates such a bridge
        at startup, but your distribution might have a different way
        to do things.
      </para>

      <para>
        Beware that the default name Ganeti uses is
        <hardware>xen-br0</hardware> (which was used in Xen 2.0),
        while Xen 3.0 uses <hardware>xenbr0</hardware> by
        default. The default bridge your Ganeti cluster will use for
        new instances can be specified at cluster initialization time.
      </para>

      <formalpara><title>Debian</title>
        <para>
          The recommended Debian way to configure the Xen bridge is to
          edit your <filename>/etc/network/interfaces</filename> file
          and substitute your normal Ethernet stanza with the
          following snippet:

        <screen>
auto xen-br0
iface xen-br0 inet static
        address <replaceable>YOUR_IP_ADDRESS</replaceable>
        netmask <replaceable>YOUR_NETMASK</replaceable>
        network <replaceable>YOUR_NETWORK</replaceable>
        broadcast <replaceable>YOUR_BROADCAST_ADDRESS</replaceable>
        gateway <replaceable>YOUR_GATEWAY</replaceable>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        </screen>
        </para>
      </formalpara>

     <para>
       The following commands need to be executed on the local
       console:
     </para>
      <screen>
ifdown eth0
ifup xen-br0
      </screen>

      <para>
        To check if the bridge is set up, use <command>ip</command>
        and <command>brctl show</command>:
      </para>

      <screen>
# ip a show xen-br0
9: xen-br0: &lt;BROADCAST,MULTICAST,UP,10000&gt; mtu 1500 qdisc noqueue
    link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
    inet6 fe80::220:fcff:fe1e:d55d/64 scope link
       valid_lft forever preferred_lft forever

# brctl show xen-br0
bridge name     bridge id               STP enabled     interfaces
xen-br0         8000.0020fc1ed55d       no              eth0
      </screen>

    </sect2>

    <sect2>
      <title>Configuring LVM</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <note>
        <simpara>The volume group is required to be at least
        <constant>20GiB</constant>.</simpara>
      </note>
      <para>
        If you haven't configured your LVM volume group at install
        time, you need to do it before trying to initialize the Ganeti
        cluster. This is done by initializing the devices/partitions
        you want to use for it and then adding them to the relevant
        volume group:

       <screen>
pvcreate /dev/sda3
vgcreate xenvg /dev/sda3
       </screen>
or
       <screen>
pvcreate /dev/sdb1
pvcreate /dev/sdc1
vgcreate xenvg /dev/sdb1 /dev/sdc1
       </screen>
      </para>

      <para>
	If you want to add a device later you can do so with the
	<citerefentry><refentrytitle>vgextend</refentrytitle>
	<manvolnum>8</manvolnum></citerefentry> command:
      </para>

      <screen>
pvcreate /dev/sdd1
vgextend xenvg /dev/sdd1
      </screen>
    </sect2>

    <sect2>
      <title>Installing Ganeti</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        It's now time to install the Ganeti software itself.  Download
        the source from <ulink
        url="http://code.google.com/p/ganeti/"></ulink>.
      </para>

        <screen>
tar xvzf ganeti-1.2b1.tar.gz
cd ganeti-1.2b1
./configure --localstatedir=/var
make
make install
mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export
        </screen>

      <para>
        You also need to copy the file
        <filename>docs/examples/ganeti.initd</filename>
        from the source archive to
        <filename>/etc/init.d/ganeti</filename> and register it with
        your distribution's startup scripts, for example in Debian:
      </para>
      <screen>update-rc.d ganeti defaults 20 80</screen>

      <para>
        In order to automatically restart failed instances, you need
        to set up a cron job to run the
        <computeroutput>ganeti-watcher</computeroutput> program. A
        sample cron file is provided in the source at
        <filename>docs/examples/ganeti.cron</filename>; you can copy
        it (adjusting the path if necessary) to
        <filename>/etc/cron.d/ganeti</filename>.
      </para>
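      <para>
        As an illustration, such a cron entry might look like the
        following (the five-minute interval and the program path are
        assumptions; check the sample file shipped with your version
        for the exact values):
      </para>
      <screen>
# Hypothetical /etc/cron.d/ganeti entry: check instances every five minutes
*/5 * * * * root /usr/sbin/ganeti-watcher
      </screen>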

    </sect2>

    <sect2>
      <title>Installing the Operating System support packages</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

      <para>
        To be able to install instances you need to have an Operating
        System installation script. An example for Debian Etch is
        provided on the project web site.  Download it from <ulink
        url="http://code.google.com/p/ganeti/"></ulink> and follow the
        instructions in the <filename>README</filename> file.  Here is
        the installation procedure:
      </para>

      <screen>
cd /srv/ganeti/os
tar xvf instance-debian-etch-0.1.tar
mv instance-debian-etch-0.1 debian-etch
      </screen>

      <para>
        In order to use this OS definition, you need to have internet
        access from your nodes and have <citerefentry>
        <refentrytitle>debootstrap</refentrytitle>
        <manvolnum>8</manvolnum> </citerefentry> installed on all the
        nodes.
      </para>
      <formalpara>
        <title>Debian</title>
        <para>
          Use this command on all nodes to install
          <computeroutput>debootstrap</computeroutput>:

          <screen>apt-get install debootstrap</screen>
        </para>
      </formalpara>

      <para>
        Alternatively, you can create your own OS definitions. See the
        manpage
        <citerefentry>
        <refentrytitle>ganeti-os-interface</refentrytitle>
        <manvolnum>8</manvolnum>
        </citerefentry>.
      </para>

    </sect2>

    <sect2>
      <title>Initializing the cluster</title>

      <para><emphasis role="strong">Mandatory:</emphasis> only on one
      node per cluster.</para>

      <para>The last step is to initialize the cluster. After you've
        repeated the above process on all of your nodes, choose one as
        the master and execute:
      </para>

      <screen>
gnt-cluster init <replaceable>CLUSTERNAME</replaceable>
      </screen>

      <para>
        The <replaceable>CLUSTERNAME</replaceable> is a hostname,
        which must be resolvable (e.g. it must exist in DNS or in
        <filename>/etc/hosts</filename>) by all the nodes in the
        cluster. For a multi-node cluster, you must choose a name
        different from any of the node names. In general the best
        choice is to have a unique name for a cluster, even if it
        consists of only one machine, as you will then be able to
        expand it later without any problems.
      </para>

      <para>
        If the bridge name you are using is not
        <literal>xen-br0</literal>, use the <option>-b
        <replaceable>BRIDGENAME</replaceable></option> option to
        specify the bridge name. In this case, you should also use the
        <option>--master-netdev
        <replaceable>BRIDGENAME</replaceable></option> option with the
        same <replaceable>BRIDGENAME</replaceable> argument.
      </para>

      <para>
        You can use a different name than <literal>xenvg</literal> for
        the volume group (but note that the name must be identical on
        all nodes). In this case you need to specify it by passing the
        <option>-g <replaceable>VGNAME</replaceable></option> option
        to <computeroutput>gnt-cluster init</computeroutput>.
      </para>
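      <para>
        Putting these options together, a cluster that uses a bridge
        named <literal>br0</literal> and a volume group named
        <literal>myvg</literal> (both names are illustrative) would be
        initialized with:
      </para>
      <screen>
gnt-cluster init -b br0 --master-netdev br0 -g myvg <replaceable>CLUSTERNAME</replaceable>
      </screen>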

      <para>
        You can also invoke the command with the
        <option>--help</option> option in order to see all the
        possibilities.
      </para>

    </sect2>

    <sect2>
      <title>Joining the nodes to the cluster</title>

      <para>
        <emphasis role="strong">Mandatory:</emphasis> for all the
        other nodes.
      </para>

      <para>
        After you have initialized your cluster you need to join the
        other nodes to it. You can do so by executing the following
        command on the master node:
      </para>
        <screen>
gnt-node add <replaceable>NODENAME</replaceable>
        </screen>
    </sect2>

    <sect2>
      <title>Separate replication network</title>

      <para><emphasis role="strong">Optional</emphasis></para>
      <para>
        Ganeti uses DRBD to mirror the disks of the virtual instances
        between nodes. To use a dedicated network interface for this
        (in order to improve performance or to enhance security) you
        need to configure an additional interface for each node.  Use
        the <option>-s</option> option with
        <computeroutput>gnt-cluster init</computeroutput> and
        <computeroutput>gnt-node add</computeroutput> to specify the
        IP address of this secondary interface to use for each
        node. Note that if you specified this option at cluster setup
        time, you must afterwards use it for every node add operation.
      </para>
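      <para>
        For example, with a dedicated replication network on
        <literal>192.168.2.0/24</literal> (the addresses are
        illustrative), the cluster and an additional node would be set
        up like this:
      </para>
      <screen>
gnt-cluster init -s 192.168.2.1 <replaceable>CLUSTERNAME</replaceable>
gnt-node add -s 192.168.2.2 <replaceable>NODENAME</replaceable>
      </screen>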
    </sect2>

    <sect2>
      <title>Testing the setup</title>

      <para>
        Execute the <computeroutput>gnt-node list</computeroutput>
        command to see all nodes in the cluster:
      <screen>
# gnt-node list
Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
node1.example.com 197404 197404   2047  1896   125     0     0
      </screen>
      </para>
    </sect2>

  <sect1>
    <title>Setting up and managing virtual instances</title>
    <sect2>
      <title>Setting up virtual instances</title>
      <para>
        This step shows how to set up a virtual instance with either
        non-mirrored disks (<computeroutput>plain</computeroutput>) or
        with network-mirrored disks
        (<computeroutput>remote_raid1</computeroutput>).  All commands
        need to be executed on the Ganeti master node (the one on
        which <computeroutput>gnt-cluster init</computeroutput> was
        run).  Verify that the OS scripts are present on all cluster
        nodes with <computeroutput>gnt-os list</computeroutput>.
      </para>
      <para>
        To create a virtual instance, you need a hostname which is
        resolvable (via DNS or <filename>/etc/hosts</filename> on all
        nodes). The following command will create a non-mirrored
        instance for you:
      </para>
      <screen>
gnt-instance add --node=node1 -o debian-etch -t plain inst1.example.com
* creating instance disks...
adding instance inst1.example.com to cluster config
Waiting for instance inst1.example.com to sync disks.
Instance inst1.example.com's disks are in sync.
creating os for instance inst1.example.com on node node1.example.com
* running the instance OS create scripts...
      </screen>

      <para>
        The above instance will have no network interface enabled.
        You can access it over the virtual console with
        <computeroutput>gnt-instance console
        <literal>inst1</literal></computeroutput>. There is no
        password for root.  As this is a Debian instance, you can
        modify the <filename>/etc/network/interfaces</filename> file
        to set up the network interface (<literal>eth0</literal> is
        the name of the interface provided to the instance).
      </para>
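      <para>
        For example, a static configuration inside the instance could
        look like the following stanza (the addresses shown are
        illustrative; use values from your own network):
      </para>
      <screen>
# Hypothetical /etc/network/interfaces stanza for the instance
auto eth0
iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
      </screen>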

      <para>
        To create a network-mirrored instance, change the argument to
        the <option>-t</option> option from <literal>plain</literal>
        to <literal>remote_raid1</literal> and specify the node on
        which the mirror should reside with the
        <option>--secondary-node</option> option, like this:
      </para>

      <screen>
# gnt-instance add -t remote_raid1 --secondary-node node1 \
  -n node2 -o debian-etch instance2
* creating instance disks...
adding instance instance2 to cluster config
Waiting for instance instance2 to sync disks.
- device sdb:  3.50% done, 304 estimated seconds remaining
- device sdb: 21.70% done, 270 estimated seconds remaining
- device sdb: 39.80% done, 247 estimated seconds remaining
- device sdb: 58.10% done, 121 estimated seconds remaining
- device sdb: 76.30% done, 72 estimated seconds remaining
- device sdb: 94.80% done, 18 estimated seconds remaining
Instance instance2's disks are in sync.
creating os for instance instance2 on node node2.example.com
* running the instance OS create scripts...
* starting instance...
      </screen>

    </sect2>

    <sect2>
      <title>Managing virtual instances</title>
      <para>
        All commands need to be executed on the Ganeti master node.
      </para>

      <para>
        To access the console of an instance, use
        <computeroutput>gnt-instance console
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        To shut down an instance, use <computeroutput>gnt-instance
        shutdown
        <replaceable>INSTANCENAME</replaceable></computeroutput>. To
        start it up again, use <computeroutput>gnt-instance startup
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        To fail over an instance to its secondary node (only possible
        in a <literal>remote_raid1</literal> setup), use
        <computeroutput>gnt-instance failover
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        For more instance and cluster administration details, see the
        <emphasis>Ganeti administrator's guide</emphasis>.
      </para>

    </sect2>

  </sect1>

  </article>