.. Revision c71a1a3d, doc/install.rst

.. contents::

.. highlight:: text

Introduction
------------

Ganeti is a cluster virtualization management system based on Xen or
KVM. This document explains how to bootstrap a Ganeti node (Xen *dom0*,
the host Linux system for KVM), create a running cluster and install
virtual instances (Xen *domUs*, KVM guests). You need to repeat most of
the steps in this document for every node you want to install, but of
course we recommend creating some semi-automatic procedure if you plan
to deploy Ganeti on a medium/large scale.

A basic Ganeti terminology glossary is provided in the introductory
section of the :doc:`admin`. Please refer to that document if you are
uncertain about the terms we are using.

Ganeti has been developed for Linux and should be distribution-agnostic.
This documentation will use Debian Lenny as an example system but the
examples can be translated to any other distribution. You are expected
to be familiar with your distribution, its package management system,
and Xen or KVM before trying to use Ganeti.

This document is divided into two main sections:

...

- Configuration of the environment for Ganeti

Each of these is divided into sub-sections. While a full Ganeti system
will need all of the steps specified, some are not strictly required for
every environment. Which ones they are, and why, is specified in the
corresponding sections.

Installing the base system and base components
----------------------------------------------

...

Hardware requirements
+++++++++++++++++++++

Any system supported by your Linux distribution is fine. 64-bit systems
are better as they can support more memory.

Any disk drive recognized by Linux (``IDE``/``SCSI``/``SATA``/etc.) is
supported in Ganeti. Note that no shared storage (e.g. ``SAN``) is
needed to get high-availability features (but of course, one can be used
to store the images). It is highly recommended to use more than one disk
drive to improve speed. But Ganeti also works with one disk per machine.

Installing the base system
++++++++++++++++++++++++++

...

**Mandatory** on all nodes.

It is advised to start with a clean, minimal install of the operating
system. The only requirement you need to be aware of at this stage is to
partition leaving enough space for a big (**minimum** 20GiB) LVM volume
group which will then host your instance filesystems, if you want to use
all Ganeti features. The volume group name Ganeti uses (by default) is
``xenvg``.

You can also use file-based storage only, without LVM, but this setup is
not detailed in this document.

While you can use an existing system, please note that the Ganeti
installation is intrusive in terms of changes to the system

...

live system are Xen and KVM. Supported Xen versions are: 3.0.3, 3.0.4
and 3.1. Supported KVM versions are 72 and above.

Please follow your distribution's recommended way to install and set up
Xen, or install Xen from the upstream source, if you wish, following
their manual. For KVM, make sure you have a KVM-enabled kernel and the
KVM tools.

After installing Xen, you need to reboot into your new system. On some
distributions this might involve configuring GRUB appropriately, whereas

...

.. admonition:: Xen on Debian

   Under Lenny or Etch you can install the relevant ``xen-linux-system``
   package, which will pull in both the hypervisor and the relevant
   kernel. Also, if you are installing a 32-bit Lenny/Etch, you should
   install the ``libc6-xen`` package (run ``apt-get install
   libc6-xen``).
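
For KVM on a Debian-style system, the setup can be as simple as
installing the KVM package and verifying that the processor supports
hardware virtualization (the package name below is illustrative and
varies by release)::

  # illustrative package name; check your distribution's repositories
  apt-get install kvm
  # the output should be non-empty on Intel VT (vmx) or AMD-V (svm) CPUs
  grep -E '(vmx|svm)' /proc/cpuinfo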

Xen settings
~~~~~~~~~~~~

It's recommended that dom0 is restricted to a low amount of memory
(512MiB or 1GiB is reasonable) and that memory ballooning is disabled in
the file ``/etc/xen/xend-config.sxp`` by setting the value
``dom0-min-mem`` to 0, like this::

  (dom0-min-mem 0)

For optimum performance when running both CPU and I/O intensive
instances, it's also recommended that the dom0 is restricted to one CPU
only, for example by booting with the kernel parameter ``nosmp``.

It is recommended that you disable xen's automatic save of virtual
machines at system shutdown and subsequent restore of them at reboot.

...

  ## Xen hypervisor options to use with the default Xen boot option
  # xenhopt=dom0_mem=1024M

and the ``xenkopt`` needs to include the ``nosmp`` option like this::

  ## Xen Linux kernel options to use with the default Xen boot option
  # xenkopt=nosmp

...

  /sbin/update-grub

If you want to run HVM instances too with Ganeti and want VNC access to
the console of your instances, set the following two entries in
``/etc/xen/xend-config.sxp``::

  (vnc-listen '0.0.0.0') (vncpasswd '')

...

221 | 219 |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
222 | 220 |
|
223 | 221 |
After you have installed Xen, you need to tell Ganeti exactly what |
224 |
kernel to use for the instances it will create. This is done by |
|
225 |
creating a symlink from your actual kernel to |
|
226 |
``/boot/vmlinuz-2.6-xenU``, and one from your initrd |
|
227 |
to ``/boot/initrd-2.6-xenU``. Note that if you don't |
|
228 |
use an initrd for the domU kernel, you don't need |
|
229 |
to create the initrd symlink. |
|
222 |
kernel to use for the instances it will create. This is done by creating |
|
223 |
a symlink from your actual kernel to ``/boot/vmlinuz-2.6-xenU``, and one |
|
224 |
from your initrd to ``/boot/initrd-2.6-xenU`` [#defkernel]_. Note that |
|
225 |
if you don't use an initrd for the domU kernel, you don't need to create |
|
226 |
the initrd symlink. |
|
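
For example, assuming the Xen-capable kernel and initrd already
installed under ``/boot`` are the ones shown below (the exact version
strings are illustrative, adjust them to what is actually present on
your system), the symlinks can be created with::

  # illustrative kernel/initrd file names -- check /boot first
  ln -s /boot/vmlinuz-2.6.26-2-xen-amd64 /boot/vmlinuz-2.6-xenU
  ln -s /boot/initrd.img-2.6.26-2-xen-amd64 /boot/initrd-2.6-xenU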

.. admonition:: Debian

...

Installing DRBD
+++++++++++++++

Recommended on all nodes: DRBD_ is required if you want to use the high
availability (HA) features of Ganeti, but optional if you don't require
them or only run Ganeti on single-node clusters. You can upgrade a
non-HA cluster to an HA one later, but you might need to export and
re-import all your instances to take advantage of the new features.

.. _DRBD: http://www.drbd.org/

Supported DRBD versions: 8.0+. It's recommended to have at least version
8.0.12. Note that for version 8.2 and newer you need to pass the
``usermode_helper=/bin/true`` parameter to the module, either by
configuring ``/etc/modules`` or when inserting it manually.

Now the bad news: unless your distribution already provides it,
installing DRBD might involve recompiling your kernel or anyway fiddling
with it. Hopefully at least the Xen-ified kernel source to start from
will be provided (if you intend to use Xen).

The good news is that you don't need to configure DRBD at all. Ganeti
will do it for you for every instance you set up. If you have the DRBD
utils installed and the module in your kernel you're fine. Please check
that your system is configured to load the module at every boot, and
that it passes the following option to the module:
``minor_count=NUMBER``. We recommend that you use 128 as the value of
``minor_count``; this will allow you to use up to 64 instances in total
per node (both primary and secondary, when using only one disk per
instance). You can increase the number up to 255 if you need more
instances on a node.

.. admonition:: Debian

   On Debian, you can just install (build) the DRBD module with the
   following commands, making sure you are running the target (Xen or
   KVM) kernel::

     apt-get install drbd8-source drbd8-utils
     m-a update
     m-a a-i drbd8
     echo drbd minor_count=128 usermode_helper=/bin/true >> /etc/modules
     depmod -a
     modprobe drbd minor_count=128 usermode_helper=/bin/true

   It is also recommended that you comment out the default resources in
   the ``/etc/drbd.conf`` file, so that the init script doesn't try to
   configure any drbd devices. You can do this by prefixing all
   *resource* lines in the file with the keyword *skip*, like this::

     skip resource r0 {

...

way to do things, and you'll definitely need to manually set it up under
KVM.

Beware that the default name Ganeti uses is ``xen-br0`` (which was used
in Xen 2.0) while Xen 3.0 uses ``xenbr0`` by default. The default bridge
your Ganeti cluster will use for new instances can be specified at
cluster initialization time.
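
On a Debian system, for example, such a bridge can be defined in
``/etc/network/interfaces``. The sketch below is illustrative: it
assumes the ``bridge-utils`` package is installed, that ``eth0`` is the
physical interface to enslave, and the addresses are placeholders::

  auto xen-br0
  iface xen-br0 inet static
      address 192.0.2.10
      netmask 255.255.255.0
      gateway 192.0.2.1
      bridge_ports eth0
      bridge_stp off
      bridge_fd 0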

If you want to run in "routing mode" you need to specify that at cluster
init time (using the ``--nicparam`` option), and then no bridge will be

...

  bridge name     bridge id               STP enabled     interfaces
  xen-br0         8000.0020fc1ed55d       no              eth0

.. _configure-lvm-label:

Configuring LVM
+++++++++++++++

...

The volume group is required to be at least 20GiB.

If you haven't configured your LVM volume group at install time you need
to do it before trying to initialize the Ganeti cluster. This is done by
formatting the devices/partitions you want to use for it and then adding
them to the relevant volume group::

  pvcreate /dev/sda3
  vgcreate xenvg /dev/sda3

...

Optional: it is recommended to configure LVM not to scan the DRBD
devices for physical volumes. This can be accomplished by editing
``/etc/lvm/lvm.conf`` and adding the ``/dev/drbd[0-9]+`` regular
expression to the ``filter`` variable, like this::

  filter = ["r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]

Note that Ganeti provides a helper script, ``lvmstrap``, which will
erase and configure as LVM any disk on your system that is not in use.
This is dangerous, so it's recommended to read its ``--help`` output
before using it.

Installing Ganeti
+++++++++++++++++

...

  make install
  mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export

You also need to copy the file ``doc/examples/ganeti.initd`` from the
source archive to ``/etc/init.d/ganeti`` and register it with your
distribution's startup scripts, for example in Debian::

  update-rc.d ganeti defaults 20 80

In order to automatically restart failed instances, you need to set up a
cron job to run the *ganeti-watcher* command. A sample cron file is
provided in the source at ``doc/examples/ganeti.cron`` and you can copy
that (adjusting the path if needed) to ``/etc/cron.d/ganeti``.
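
For reference, a watcher entry in ``/etc/cron.d/ganeti`` could look like
the sketch below; the five-minute interval and the install path are
illustrative, so check ``doc/examples/ganeti.cron`` in the source for
the actual file::

  # run the watcher regularly, as root (interval and path illustrative)
  */5 * * * * root /usr/local/sbin/ganeti-watcher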

What gets installed
~~~~~~~~~~~~~~~~~~~

The above ``make install`` invocation, or installing via your
distribution mechanisms, will install on the system:

- a set of python libraries under the *ganeti* namespace (depending on
  the python version this can be located in either
  ``lib/python-$ver/site-packages`` or various other locations)
- a set of programs under ``/usr/local/sbin`` or ``/usr/sbin``
- man pages for the above programs
- a set of tools under the ``lib/ganeti/tools`` directory
- an example iallocator script (see the admin guide for details) under
  ``lib/ganeti/iallocators``
- a cron job that is needed for cluster maintenance
- an init script for automatic startup of Ganeti daemons
- provided but not installed automatically by ``make install`` is a bash
  completion script that hopefully will ease working with the many
  cluster commands

Installing the Operating System support packages
++++++++++++++++++++++++++++++++++++++++++++++++

...

To be able to install instances you need to have an Operating System
installation script. An example OS that works under Debian and can
install Debian and Ubuntu instance OSes is provided on the project web
site. Download it from the project page and follow the instructions in
the ``README`` file. Here is the installation procedure (replace 0.7
with the latest version that is compatible with your ganeti version)::

  cd /usr/local/src/
  wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-0.7.tar.gz

...

Initializing the cluster
++++++++++++++++++++++++

**Mandatory** once per cluster, on the first node.

The last step is to initialize the cluster. After you have repeated the
above process on all of your nodes, choose one as the master, and
execute::

  gnt-cluster init <CLUSTERNAME>

The *CLUSTERNAME* is a hostname, which must be resolvable (e.g. it must
exist in DNS or in ``/etc/hosts``) by all the nodes in the cluster. You
must choose a name different from any of the nodes' names for a
multi-node cluster. In general the best choice is to have a unique name
for a cluster, even if it consists of only one machine, as you will be
able to expand it later without any problems. Please note that the
hostname used for this must resolve to an IP address reserved
**exclusively** for this purpose, and cannot be the name of the first
(master) node.

...

If the bridge name you are using is not ``xen-br0``, use the *-b
<BRIDGENAME>* option to specify the bridge name. In this case, you
should also use the *--master-netdev <BRIDGENAME>* option with the same
BRIDGENAME argument.

You can use a different name than ``xenvg`` for the volume group (but
note that the name must be identical on all nodes). In this case you
need to specify it by passing the *-g <VGNAME>* option to ``gnt-cluster
init``.
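
For example, combining the options above, a cluster using a bridge named
``br0`` and a volume group named ``ganetivg`` (all names here are
illustrative) could be initialized with::

  gnt-cluster init -b br0 --master-netdev br0 -g ganetivg \
    cluster1.example.com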

To set up the cluster as a Xen HVM cluster, use the
``--enabled-hypervisors=xen-hvm`` option to enable the HVM hypervisor
(you can also add ``,xen-pvm`` to enable the PVM one too). You will also
need to create the VNC cluster password file
``/etc/ganeti/vnc-cluster-password`` which contains one line with the
default VNC password for the cluster.

...

**Mandatory** for all the other nodes.

After you have initialized your cluster you need to join the other nodes
to it. You can do so by executing the following command on the master
node::

  gnt-node add <NODENAME>

...

additional interface for each node. Use the *-s* option with
``gnt-cluster init`` and ``gnt-node add`` to specify the IP address of
this secondary interface to use for each node. Note that if you
specified this option at cluster setup time, you must afterwards use it
for every node add operation.
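
For example, adding a node together with its secondary IP address (both
the address and the node name below are illustrative) would look like::

  gnt-node add -s 192.0.2.2 node2.example.com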

Testing the setup
+++++++++++++++++

Execute the ``gnt-node list`` command to see all nodes in the cluster::

  # gnt-node list
  Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
  node1.example.com 197404 197404   2047  1896   125     0     0

The above shows a couple of things:

- The various Ganeti daemons can talk to each other
- Ganeti can examine the storage of the node (DTotal/DFree)
- Ganeti can talk to the selected hypervisor (MTotal/MNode/MFree)

Cluster burnin
~~~~~~~~~~~~~~

With Ganeti a tool called :command:`burnin` is provided that can test
most of the Ganeti functionality. The tool is installed under the
``lib/ganeti/tools`` directory (either under ``/usr`` or ``/usr/local``
based on the installation method). See more details under
:ref:`burnin-label`.

Further steps
-------------

You can now proceed either to the :doc:`admin`, or read the manpages of
the various commands (:manpage:`ganeti(7)`, :manpage:`gnt-cluster(8)`,
:manpage:`gnt-node(8)`, :manpage:`gnt-instance(8)`,
:manpage:`gnt-job(8)`).

.. rubric:: Footnotes

.. [#defkernel] The kernel and initrd paths can be changed at either
   cluster level (which changes the default for all instances) or at
   instance level.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: