X-Git-Url: https://code.grnet.gr/git/ganeti-local/blobdiff_plain/16abfbc23d39d267c2233dce36e93f80351651b9..f21b6023893e85b6edc6642af11278d7a959c59c:/doc/install.sgml

diff --git a/doc/install.sgml b/doc/install.sgml
index 79c5d0e..84cfab9 100644
--- a/doc/install.sgml
+++ b/doc/install.sgml
@@ -41,18 +41,18 @@ Installation of the base system and base
- components
+ components
 Configuration of the environment for
- Ganeti
+ Ganeti
- Each of these is divided into sub-sections. While a full Ganeti
- system will need all of the steps specified, some are not strictly
- required for every environment. Which ones they are, and why, is
- specified in the corresponding sections.
+ Each of these is divided into sub-sections. While a full Ganeti system
+ will need all of the steps specified, some are not strictly required for
+ every environment. Which ones they are, and why, is specified in the
+ corresponding sections.
@@ -64,17 +64,17 @@ Hardware requirements
- Any system supported by your Linux distribution is fine.
- 64-bit systems are better as they can support more memory.
+ Any system supported by your Linux distribution is fine. 64-bit
+ systems are better as they can support more memory.
- Any disk drive recognized by Linux
- (IDE/SCSI/SATA/etc.)
- is supported in Ganeti. Note that no shared storage
- (e.g. SAN) is needed to get high-availability features. It is
- highly recommended to use more than one disk drive to improve
- speed. But Ganeti also works with one disk per machine.
+ Any disk drive recognized by Linux
+ (IDE/SCSI/SATA/etc.)
+ is supported in Ganeti. Note that no shared storage (e.g.
+ SAN) is needed to get high-availability features. It
+ is highly recommended to use more than one disk drive to improve speed.
+ But Ganeti also works with one disk per machine.
@@ -212,6 +212,14 @@ kernel parameter nosmp.
+
+ It is recommended that you disable xen's automatic save of virtual
+ machines at system shutdown and subsequent restore of them at reboot.
+ To obtain this make sure the variable
+ XENDOMAINS_SAVE in the file
+ /etc/default/xendomains is set to an empty value.
+
+
 Debian
@@ -241,6 +249,21 @@
+
+ If you want to test the HVM support
+ with Ganeti and want VNC access to the console of your
+ instances, set the following two entries in
+ /etc/xen/xend-config.sxp:
+
+(vnc-listen '0.0.0.0')
+(vncpasswd '')
+
+ You need to restart the Xen daemon for these settings to
+ take effect:
+
+/etc/init.d/xend restart
+
+
@@ -291,11 +314,13 @@ ln -s initrd.img-2.6.18-5-xen-686 initrd-2.6-xenU
- Supported DRBD version: the 0.7
- series. It's recommended to have at least version
- 0.7.24 if you use udev
- since older versions have a bug related to device discovery
- which can be triggered in cases of hard drive failure.
+ Supported DRBD versions: the 0.7 series
+ or
+ 8.0.7. It's recommended to have at least
+ version 0.7.24 if you use
+ udev since older versions have a bug
+ related to device discovery which can be triggered in cases of
+ hard drive failure.
@@ -311,9 +336,14 @@ ln -s initrd.img-2.6.18-5-xen-686 initrd-2.6-xenU
 you have the DRBD utils installed and the module in your
 kernel you're fine. Please check that your system is
 configured to load the module at every boot, and that it
- passes the following option to the module:
+ passes the following option to the module (for
+ 0.7.x: minor_count=64 (this will
- allow you to use up to 32 instances per node).
+ allow you to use up to 32 instances per node) or for
+ 8.0.x you can use up to
+ 255
+ (i.e. minor_count=255, but
+ for most clusters 128 should be enough).
 Debian
@@ -331,6 +361,18 @@ m-a a-i drbd0.7
 echo drbd minor_count=64 >> /etc/modules
 modprobe drbd minor_count=64
+
+ or for using DRBD 8.x from the etch
+ backports (note: you need at least 8.0.7, older versions have
+ a bug that breaks ganeti's usage of drbd):
+
+
+apt-get install -t etch-backports drbd8-module-source drbd8-utils
+m-a update
+m-a a-i drbd8
+echo drbd minor_count=128 >> /etc/modules
+modprobe drbd minor_count=128
+
 It is also recommended that you comment out the default
@@ -408,6 +450,11 @@ skip resource "r1" {
 url="http://www.undefined.org/python/#simplejson">simplejson Python
 module
+
+ pyparsing Python
+ module
+
@@ -426,7 +473,7 @@ skip resource "r1" {
 # apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
  python2.4 python-twisted-core python-pyopenssl openssl \
-  mdadm
+  mdadm python-pyparsing python-simplejson
@@ -576,8 +623,8 @@ vgextend xenvg /dev/sdd1
-tar xvzf ganeti-1.2b1.tar.gz
-cd ganeti-1.2b1
+tar xvzf ganeti-@GANETI_VERSION@.tar.gz
+cd ganeti-@GANETI_VERSION@
 ./configure --localstatedir=/var --sysconfdir=/etc
 make
 make install
@@ -616,13 +663,15 @@ mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export
 provided on the project web site. Download it from
 and follow the instructions in the README file. Here is
- the installation procedure:
+ the installation procedure (replace 0.2
+ with the latest version that is compatible with your ganeti
+ version):
 cd /srv/ganeti/os
-tar xvf instance-debian-etch-0.1.tar
-mv instance-debian-etch-0.1 debian-etch
+tar xvf ganeti-instance-debian-etch-0.4.tar
+mv ganeti-instance-debian-etch-0.4 debian-etch
@@ -702,6 +751,20 @@ gnt-cluster init CLUSTERNAME
+
+ To set up the cluster as an HVM cluster, use the
+ option to use
+ the Xen 3.1 HVM hypervisor. Note that with the
+ HVM support, you will only be able to create
+ HVM instances in a cluster set to this hypervisor type. Mixed
+ PVM/HVM clusters are not supported by the Ganeti 1.2
+ HVM support. You will also need to create the VNC
+ cluster password file
+ /etc/ganeti/vnc-cluster-password
+ which contains one line with the default VNC password for the
+ cluster.
+
+
 You can also invoke the command with the option
 in order to see all the possibilities.
@@ -749,7 +812,6 @@ gnt-node add NODENAME
 Testing the setup
-
 Execute the gnt-node list command to see all
 nodes in the cluster:
@@ -768,11 +830,13 @@ node1.example.com 197404 197404 2047 1896 125 0 0
 This step shows how to setup a virtual instance with either
 non-mirrored disks (plain) or with network mirrored disks
- (remote_raid1). All commands
- need to be executed on the Ganeti master node (the one on
- which gnt-cluster init was
- run). Verify that the OS scripts are present on all cluster
- nodes with gnt-os list.
+ (remote_raid1 for drbd 0.7
+ and drbd for drbd 8.x). All
+ commands need to be executed on the Ganeti master node (the
+ one on which gnt-cluster init
+ was run). Verify that the OS scripts are present on all
+ cluster nodes with gnt-os
+ list.
 To create a virtual instance, you need a hostname which is
@@ -804,14 +868,14 @@ creating os for instance inst1.example.com on node node1.example.com
 To create a network mirrored instance, change the argument
 to the option from plain
- to remote_raid1 and specify the node on
- which the mirror should reside with the
- option, like this:
+ to remote_raid1 (drbd 0.7) or
+ drbd (drbd 8.0) and specify the node on
+ which the mirror should reside with the second value of the
+ option, like this:
-# gnt-instance add -t remote_raid1 --secondary-node node1 \
-  -n node2 -o debian-etch instance2
+# gnt-instance add -t remote_raid1 -n node1:node2 -o debian-etch instance2
 * creating instance disks...
 adding instance instance2 to cluster config
 Waiting for instance instance1 to sync disks.
 - device sdb: 76.30% done, 72 estimated seconds remaining
 - device sdb: 94.80% done, 18 estimated seconds remaining
 Instance instance2's disks are in sync.
-creating os for instance instance2 on node node2.example.com
+creating os for instance instance2 on node node1.example.com
 * running the instance OS create scripts...
 * starting instance...
 To failover an instance to its secondary node (only possible
- in remote_raid1 setup), use
- gnt-instance failover
+ in remote_raid1 or drbd
+ disk templates), use gnt-instance failover
 INSTANCENAME.
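
As a rough illustration of the Xen host changes described in the patch above (disabling XENDOMAINS_SAVE and enabling VNC access for HVM consoles), something along these lines should work on a Debian-style system. The sed invocation is an assumption for convenience and is not part of the patch; editing /etc/default/xendomains by hand achieves the same result:

# ensure domains are not saved at shutdown and restored at reboot
sed -i 's/^XENDOMAINS_SAVE=.*/XENDOMAINS_SAVE=""/' /etc/default/xendomains
# after adding the two VNC entries from the patch to /etc/xen/xend-config.sxp,
# restart the Xen daemon so the settings take effect
/etc/init.d/xend restart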
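
The HVM paragraphs above require a VNC cluster password file, but the patch does not show how to create it. A minimal sketch, where the password value is a placeholder and the restrictive-permissions step is an assumption rather than something taken from the patch:

# one line containing the default VNC password for the cluster
echo "replace-with-your-vnc-password" > /etc/ganeti/vnc-cluster-password
chmod 600 /etc/ganeti/vnc-cluster-password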
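
The instance-creation hunk above states that with DRBD 8.x the disk template is called drbd and that the secondary node is given as the second value of the -n option, yet the example command it shows still uses remote_raid1. A hypothetical equivalent for the drbd template, with instance3 as a placeholder name, would look like this:

# gnt-instance add -t drbd -n node1:node2 -o debian-etch instance3
# gnt-instance failover instance3

As the last hunk notes, the failover command applies to both the remote_raid1 and drbd disk templates.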