Revision f6d62af4

b/doc/install.rst

.. contents::

.. highlight:: shell-example

Introduction
------------
......
uncertain about the terms we are using.

Ganeti has been developed for Linux and should be distribution-agnostic.
This documentation will use Debian Squeeze as an example system but the
examples can be translated to any other distribution. You are expected
to be familiar with your distribution, its package management system,
and Xen or KVM before trying to use Ganeti.
......
Any disk drive recognized by Linux (``IDE``/``SCSI``/``SATA``/etc.) is
supported in Ganeti. Note that no shared storage (e.g. ``SAN``) is
needed to get high-availability features (but of course, one can be used
to store the images). While it is highly recommended to use more than
one disk drive to improve speed, Ganeti also works with one disk per
machine.

Installing the base system
++++++++++++++++++++++++++
......
not detailed in this document.

If you choose to use RBD-based instances, there's no need for LVM
provisioning. However, this feature is experimental, and is not yet
recommended for production clusters.

While you can use an existing system, please note that the Ganeti
......

.. admonition:: Debian

   Debian usually configures the hostname differently than you need it
   for Ganeti. For example, this is what it puts in ``/etc/hosts`` in
   certain situations::

     127.0.0.1       localhost
     127.0.1.1       node1.example.com node1
......
   but for Ganeti you need to have::

     127.0.0.1       localhost
     192.0.2.1       node1.example.com node1

   replacing ``192.0.2.1`` with your node's address. Also, the file
   ``/etc/hostname`` which configures the hostname of the system
......

While Ganeti is developed with the ability to modularly run on different
virtualization environments in mind, the only two currently usable on a
live system are Xen and KVM. Supported Xen versions are: 3.0.3 and later
3.x versions, and 4.x (tested up to 4.1).  Supported KVM versions are 72
and above.

Please follow your distribution's recommended way to install and set up
Xen, or install Xen from the upstream source, if you wish, following
......

.. admonition:: Xen on Debian

   Under Debian you can install the relevant ``xen-linux-system``
   package, which will pull in both the hypervisor and the relevant
   kernel. Also, if you are installing a 32-bit system, you should
   install the ``libc6-xen`` package (run ``apt-get install
   libc6-xen``).

......

For optimum performance when running both CPU and I/O intensive
instances, it's also recommended that the dom0 is restricted to one CPU
only, for example by booting with the kernel parameter ``maxcpus=1``.

It is recommended that you disable Xen's automatic save of virtual
machines at system shutdown and subsequent restore of them at reboot.
......

If you want to use live migration, make sure you have, in the Xen config
file, something that allows the nodes to migrate instances between each
other. For example:

.. code-block:: text

  (xend-relocation-server yes)
  (xend-relocation-port 8002)
......
   Besides the ballooning change which you need to set in
   ``/etc/xen/xend-config.sxp``, you need to set the memory and
   ``maxcpus`` parameters in the file ``/boot/grub/menu.lst``. You need
   to modify the variable ``xenhopt`` to add ``dom0_mem=1024M`` like
   this:

   .. code-block:: text

     ## Xen hypervisor options to use with the default Xen boot option
     # xenhopt=dom0_mem=1024M

   and the ``xenkopt`` needs to include the ``maxcpus`` option like
   this:

   .. code-block:: text

     ## Xen Linux kernel options to use with the default Xen boot option
     # xenkopt=maxcpus=1

   Any existing parameters can be left in place: it's ok to have
   ``xenkopt=console=tty0 maxcpus=1``, for example. After modifying the
   files, you need to run::

     $ /sbin/update-grub

If you want to run HVM instances too with Ganeti and want VNC access to
the console of your instances, set the following two entries in
``/etc/xen/xend-config.sxp``:

.. code-block:: text

  (vnc-listen '0.0.0.0') (vncpasswd '')

You need to restart the Xen daemon for these settings to take effect::

  $ /etc/init.d/xend restart

Selecting the instance kernel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After you have installed Xen, you need to tell Ganeti exactly what
kernel to use for the instances it will create. This is done by creating
a symlink from your actual kernel to ``/boot/vmlinuz-3-xenU``, and one
from your initrd to ``/boot/initrd-3-xenU`` [#defkernel]_. Note that
if you don't use an initrd for the domU kernel, you don't need to create
the initrd symlink.

......
   After installation of the ``xen-linux-system`` package, you need to
   run (replace the exact version number with the one you have)::

     $ cd /boot
     $ ln -s vmlinuz-%2.6.26-1%-xen-amd64 vmlinuz-3-xenU
     $ ln -s initrd.img-%2.6.26-1%-xen-amd64 initrd-3-xenU

   By default, the initrd doesn't contain the Xen block drivers needed
   to mount the root device, so it is recommended to update the initrd
   by following these two steps (a verification example follows):

   - edit ``/etc/initramfs-tools/modules`` and add ``xen_blkfront``
   - run ``update-initramfs -u``
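
   To double-check that the module made it into the image, you can list
   the new initrd's contents (a quick sanity check, assuming your
   initramfs-tools version provides ``lsinitramfs``)::

     $ lsinitramfs /boot/initrd-3-xenU | grep xen_blkfront
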
Installing DRBD
+++++++++++++++
......
Recommended on all nodes: DRBD_ is required if you want to use the high
availability (HA) features of Ganeti, but optional if you don't require
them or only run Ganeti on single-node clusters. You can upgrade a
non-HA cluster to an HA one later, but you might need to convert all
your instances to DRBD to take advantage of the new features.
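
Such a conversion can be done per instance while the instance is
stopped; a minimal sketch using ``gnt-instance modify`` (the secondary
node and instance names below are placeholders, see
:manpage:`gnt-instance` for details)::

  $ gnt-instance modify -t drbd -n %NODE2% %INSTANCENAME%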

.. _DRBD: http://www.drbd.org/

Supported DRBD versions: 8.0-8.3. It's recommended to have at least
version 8.0.12. Note that for version 8.2 and newer you need to pass
the ``usermode_helper=/bin/true`` parameter to the module, either by
configuring ``/etc/modules`` or when inserting it manually.
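
For example, the corresponding ``/etc/modules`` entry could look like
this (using the same parameters as the installation commands below)::

  drbd minor_count=128 usermode_helper=/bin/true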

Now the bad news: unless your distribution already provides it
......
   following commands, making sure you are running the target (Xen or
   KVM) kernel::

     $ apt-get install drbd8-source drbd8-utils
     $ m-a update
     $ m-a a-i drbd8
     $ echo drbd minor_count=128 usermode_helper=/bin/true >> /etc/modules
     $ depmod -a
     $ modprobe drbd minor_count=128 usermode_helper=/bin/true

   It is also recommended that you comment out the default resources in
   the ``/etc/drbd.conf`` file, so that the init script doesn't try to
   configure any drbd devices. You can do this by prefixing all
   *resource* lines in the file with the keyword *skip*, like this:

   .. code-block:: text

     skip {
       resource r0 {
......
     }

Installing RBD
++++++++++++++

Recommended on all nodes: RBD_ is required if you want to create
instances with RBD disks residing inside a RADOS cluster (make use of
......
   On Debian, you can just install the RBD/Ceph userspace utils with
   the following command::

      $ apt-get install ceph-common

Configuration file
~~~~~~~~~~~~~~~~~~
......

.. admonition:: ceph.conf

   Sample configuration file:

   .. code-block:: text

    [mon.a]
           host = example_monitor_host1
......

**Mandatory** on all nodes.

You can run Ganeti either in "bridged mode" or in "routed mode". In
bridged mode, the default, the instances' network interfaces will be
attached to a software bridge running in dom0. Xen by default creates
such a bridge at startup, but your distribution might have a different
way to do things, and you'll definitely need to manually set it up under
......

By default, under KVM, the "link" parameter you specify per-nic will
represent, if non-empty, a different routing table name or number to use
for your instances. This allows isolation between different instance
groups, and different routing policies between node traffic and instance
traffic.
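
As a purely illustrative sketch (not something this guide requires), a
routing table could be declared and given a default route like this,
and then referenced as the "link" value of a NIC::

  $ echo "100 instances" >> /etc/iproute2/rt_tables
  $ ip route add default via %YOUR_GATEWAY% dev eth0 table instances
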
......

     auto xen-br0
     iface xen-br0 inet static
        address %YOUR_IP_ADDRESS%
        netmask %YOUR_NETMASK%
        network %YOUR_NETWORK%
        broadcast %YOUR_BROADCAST_ADDRESS%
        gateway %YOUR_GATEWAY%
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

The following commands need to be executed on the local console::

  $ ifdown eth0
  $ ifup xen-br0

To check if the bridge is set up, use the ``ip`` and ``brctl show``
commands::

  $ ip a show xen-br0
  9: xen-br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue
      link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
      inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
      inet6 fe80::220:fcff:fe1e:d55d/64 scope link
         valid_lft forever preferred_lft forever

  $ brctl show xen-br0
  bridge name     bridge id               STP enabled     interfaces
  xen-br0         8000.0020fc1ed55d       no              eth0

......
formatting the devices/partitions you want to use for it and then adding
them to the relevant volume group::

  $ pvcreate /dev/%sda3%
  $ vgcreate xenvg /dev/%sda3%

or::

  $ pvcreate /dev/%sdb1%
  $ pvcreate /dev/%sdc1%
  $ vgcreate xenvg /dev/%sdb1% /dev/%sdc1%

If you want to add a device later you can do so with the *vgextend*
command::

  $ pvcreate /dev/%sdd1%
  $ vgextend xenvg /dev/%sdd1%

Optional: it is recommended to configure LVM not to scan the DRBD
devices for physical volumes. This can be accomplished by editing
``/etc/lvm/lvm.conf`` and adding the ``/dev/drbd[0-9]+`` regular
expression to the ``filter`` variable, like this:

.. code-block:: text

  filter = ["r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]
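
A quick way to verify the filter is to rescan the physical volumes and
check that no ``/dev/drbd`` devices show up (``pvscan`` is part of the
standard LVM tools)::

  $ pvscan
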
......

It's now time to install the Ganeti software itself.  Download the
source from the project page at `<http://code.google.com/p/ganeti/>`_,
and install it (replace 2.6.0 with the latest version)::

  $ tar xvzf ganeti-%2.6.0%.tar.gz
  $ cd ganeti-%2.6.0%
  $ ./configure --localstatedir=/var --sysconfdir=/etc
  $ make
  $ make install
  $ mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export

You also need to copy the file ``doc/examples/ganeti.initd`` from the
source archive to ``/etc/init.d/ganeti`` and register it with your
distribution's startup scripts, for example in Debian::

  $ update-rc.d ganeti defaults 20 80

In order to automatically restart failed instances, you need to set up a
cron job to run the *ganeti-watcher* command. A sample cron file is
......
  the python version this can be located in either
  ``lib/python-$ver/site-packages`` or various other locations)
- a set of programs under ``/usr/local/sbin`` or ``/usr/sbin``
- if the htools component was enabled, a set of programs under
  ``/usr/local/bin`` or ``/usr/bin/``
- man pages for the above programs
- a set of tools under the ``lib/ganeti/tools`` directory
- an example iallocator script (see the admin guide for details) under
......
the ``README`` file.  Here is the installation procedure (replace 0.9
with the latest version that is compatible with your Ganeti version)::

  $ cd /usr/local/src/
  $ wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-%0.9%.tar.gz
  $ tar xzf ganeti-instance-debootstrap-%0.9%.tar.gz
  $ cd ganeti-instance-debootstrap-%0.9%
  $ ./configure
  $ make
  $ make install

In order to use this OS definition, you need to have internet access
from your nodes and have the *debootstrap*, *dump* and *restore*
......

   Use this command on all nodes to install the required packages::

     $ apt-get install debootstrap dump kpartx

   Or alternatively install the OS definition from the Debian package::

     $ apt-get install ganeti-instance-debootstrap

.. admonition:: KVM

   In order for debootstrap instances to be able to shut down cleanly,
   they must have basic ACPI support installed inside the instance.
   Which packages are needed depends on the exact flavor of Debian or
   Ubuntu which you're installing, but the example defaults file has a
   commented-out configuration line that works for Debian Lenny and
   Squeeze::

     EXTRA_PKGS="acpi-support-base,console-tools,udev"

   ``kbd`` can be used instead of ``console-tools``, and more packages
   can be added, of course, if needed.

Alternatively, you can create your own OS definitions. See the manpage
:manpage:`ganeti-os-interface`.
......
above process on all of your nodes, choose one as the master, and
execute::

  $ gnt-cluster init %CLUSTERNAME%

The *CLUSTERNAME* is a hostname, which must be resolvable (e.g. it must
exist in DNS or in ``/etc/hosts``) by all the nodes in the cluster. You
......

If you want to use a bridge which is not ``xen-br0``, or no bridge at
all, change it with the ``--nic-parameters`` option. For example, to
bridge on ``br0`` you can add::

  --nic-parameters link=br0

......

  --nic-parameters mode=routed,link=100

If you don't have a ``xen-br0`` interface you also have to specify a
different network interface which will get the cluster IP, on the master
node, by using the ``--master-netdev <device>`` option.
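
For example (an illustrative sketch; replace the device name with the
interface that holds your node's IP)::

  $ gnt-cluster init --master-netdev %eth1% %CLUSTERNAME%
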
You can use a different name than ``xenvg`` for the volume group (but
......

Please note that the default hypervisor/network/cluster parameters may
not be the correct ones for your environment. Carefully check them, and
change them either at cluster init time, or later with ``gnt-cluster
modify``.

Your instance types, networking environment, hypervisor type and version
may all affect what kind of parameters should be used on your cluster.

For example, KVM instances are by default configured to use a host
kernel, and to be reached via serial console, which works nicely for
Linux paravirtualized instances. If you want fully virtualized instances
you may want to handle their kernel inside the instance, and to use VNC.
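
As an example only (the ``-H`` syntax is that of ``gnt-cluster
modify``; the specific parameter and value shown are an assumption
about your environment)::

  $ gnt-cluster modify -H kvm:vnc_bind_address=0.0.0.0
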
......
to it. You can do so by executing the following command on the master
node::

  $ gnt-node add %NODENAME%

Separate replication network
++++++++++++++++++++++++++++
......

Execute the ``gnt-node list`` command to see all nodes in the cluster::

  $ gnt-node list
  Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
  node1.example.com 197404 197404   2047  1896   125     0     0