Ganeti installation tutorial
============================

Documents Ganeti version |version|

.. contents::

.. highlight:: shell-example
9 |
|
10 |
Introduction |
11 |
------------ |
12 |
|
13 |
Ganeti is a cluster virtualization management system based on Xen or |
14 |
KVM. This document explains how to bootstrap a Ganeti node (Xen *dom0*, |
15 |
the host Linux system for KVM), create a running cluster and install |
16 |
virtual instances (Xen *domUs*, KVM guests). You need to repeat most of |
17 |
the steps in this document for every node you want to install, but of |
18 |
course we recommend creating some semi-automatic procedure if you plan |
19 |
to deploy Ganeti on a medium/large scale. |
20 |
|
21 |
A basic Ganeti terminology glossary is provided in the introductory |
22 |
section of the :doc:`admin`. Please refer to that document if you are |
23 |
uncertain about the terms we are using. |
24 |
|
25 |
Ganeti has been developed for Linux and should be distribution-agnostic. |
26 |
This documentation will use Debian Squeeze as an example system but the |
27 |
examples can be translated to any other distribution. You are expected |
28 |
to be familiar with your distribution, its package management system, |
29 |
and Xen or KVM before trying to use Ganeti. |
30 |
|
31 |
This document is divided into two main sections: |
32 |
|
33 |
- Installation of the base system and base components |
34 |
|
35 |
- Configuration of the environment for Ganeti |
36 |
|
37 |
Each of these is divided into sub-sections. While a full Ganeti system |
38 |
will need all of the steps specified, some are not strictly required for |
39 |
every environment. Which ones they are, and why, is specified in the |
40 |
corresponding sections. |
41 |
|
42 |
Installing the base system and base components |
43 |
---------------------------------------------- |
44 |
|
45 |
Hardware requirements |
46 |
+++++++++++++++++++++ |
47 |
|
48 |
Any system supported by your Linux distribution is fine. 64-bit systems |
49 |
are better as they can support more memory. |
50 |
|
51 |
Any disk drive recognized by Linux (``IDE``/``SCSI``/``SATA``/etc.) is |
52 |
supported in Ganeti. Note that no shared storage (e.g. ``SAN``) is |
53 |
needed to get high-availability features (but of course, one can be used |
54 |
to store the images). Whilte it is highly recommended to use more than |
55 |
one disk drive in order to improve speed, Ganeti also works with one |
56 |
disk per machine. |

Installing the base system
++++++++++++++++++++++++++

**Mandatory** on all nodes.

It is advised to start with a clean, minimal install of the operating
system. The only requirement you need to be aware of at this stage is to
partition leaving enough space for a big (**minimum** 20GiB) LVM volume
group which will then host your instance filesystems, if you want to use
all Ganeti features. The volume group name Ganeti uses (by default) is
``xenvg``.

You can also use file-based storage only, without LVM, but this setup is
not detailed in this document.

If you choose to use RBD-based instances, there's no need for LVM
provisioning. However, this feature is experimental, and is not yet
recommended for production clusters.

While you can use an existing system, please note that the Ganeti
installation is intrusive in terms of changes to the system
configuration, and it's best to use a newly-installed system without
important data on it.

Also, for best results, it's advised that the nodes have hardware and
software configurations as similar as possible. This will make
administration much easier.

Hostname issues
~~~~~~~~~~~~~~~

Note that Ganeti requires the hostnames of the systems (i.e. what the
``hostname`` command outputs) to be a fully-qualified name, not a short
name. In other words, you should use *node1.example.com* as a hostname
and not just *node1*.

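You can verify what the system currently reports with the ``hostname``
command (a quick sanity check; the name shown is of course a
placeholder)::

  $ hostname
  node1.example.com
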
.. admonition:: Debian

   Debian usually configures the hostname differently than you need it
   for Ganeti. For example, this is what it puts in ``/etc/hosts`` in
   certain situations::

     127.0.0.1       localhost
     127.0.1.1       node1.example.com node1

   but for Ganeti you need to have::

     127.0.0.1       localhost
     192.0.2.1       node1.example.com node1

   replacing ``192.0.2.1`` with your node's address. Also, the file
   ``/etc/hostname`` which configures the hostname of the system
   should contain ``node1.example.com`` and not just ``node1`` (you
   need to run the command ``/etc/init.d/hostname.sh start`` after
   changing the file).

.. admonition:: Why a fully qualified host name

   Although most distributions use only the short name in the
   /etc/hostname file, we still think Ganeti nodes should use the full
   name. The reason for this is that calling 'hostname --fqdn' requires
   the resolver library to work and is a heuristic 'guess' at your
   domain name. Since Ganeti can be used among other things to host DNS
   servers, we don't want to depend on them as much as possible, and
   we'd rather have the uname() syscall return the full node name.

   We haven't ever found any breakage in using a full hostname on a
   Linux system, and anyway we recommend having only a minimal
   installation on Ganeti nodes, and using instances (or other
   dedicated machines) to run the rest of your network services. By
   doing this you can change the /etc/hostname file to contain an FQDN
   without the fear of breaking anything unrelated.


Installing The Hypervisor
+++++++++++++++++++++++++

**Mandatory** on all nodes.

While Ganeti is developed with the ability to modularly run on different
virtualization environments in mind, the only two currently usable on a
live system are Xen and KVM. Supported Xen versions are: 3.0.3 and later
3.x versions, and 4.x (tested up to 4.1). Supported KVM versions are 72
and above.

Please follow your distribution's recommended way to install and set up
Xen, or install Xen from the upstream source, if you wish, following
their manual. For KVM, make sure you have a KVM-enabled kernel and the
KVM tools.

After installing Xen, you need to reboot into your new system. On some
distributions this might involve configuring GRUB appropriately, whereas
others will configure it automatically when you install the respective
kernels. For KVM no reboot should be necessary.

.. admonition:: Xen on Debian

   Under Debian you can install the relevant ``xen-linux-system``
   package, which will pull in both the hypervisor and the relevant
   kernel. Also, if you are installing a 32-bit system, you should
   install the ``libc6-xen`` package (run ``apt-get install
   libc6-xen``).

Xen settings
~~~~~~~~~~~~

It's recommended that dom0 is restricted to a low amount of memory
(512MiB or 1GiB is reasonable) and that memory ballooning is disabled in
the file ``/etc/xen/xend-config.sxp`` by setting the value
``dom0-min-mem`` to 0, like this::

  (dom0-min-mem 0)

For optimum performance when running both CPU and I/O intensive
instances, it's also recommended that the dom0 is restricted to one CPU
only. For example you can add ``dom0_max_vcpus=1,dom0_vcpus_pin`` to your
kernel's boot command line and set ``dom0-cpus`` in
``/etc/xen/xend-config.sxp`` like this::

  (dom0-cpus 1)

It is recommended that you disable Xen's automatic save of virtual
machines at system shutdown and subsequent restore of them at reboot.
To achieve this, make sure the variable ``XENDOMAINS_SAVE`` in the file
``/etc/default/xendomains`` is set to an empty value.

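For example, after the change the relevant line in
``/etc/default/xendomains`` would simply read:

.. code-block:: text

   XENDOMAINS_SAVE=""
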
If you want to use live migration make sure you have, in the Xen config
file, something that allows the nodes to migrate instances between each
other. For example:

.. code-block:: text

   (xend-relocation-server yes)
   (xend-relocation-port 8002)
   (xend-relocation-address '')
   (xend-relocation-hosts-allow '^192\\.0\\.2\\.[0-9]+$')


The second line assumes that the hypervisor parameter
``migration_port`` is set to 8002, otherwise modify it to match. The last
line assumes that all your nodes have secondary IPs in the
192.0.2.0/24 network, adjust it accordingly to your setup.

.. admonition:: Debian

   Besides the ballooning change which you need to set in
   ``/etc/xen/xend-config.sxp``, you need to set the memory and
   ``maxcpus`` parameters in the file ``/boot/grub/menu.lst``. You need
   to modify the variable ``xenhopt`` to add ``dom0_mem=1024M`` like
   this:

   .. code-block:: text

      ## Xen hypervisor options to use with the default Xen boot option
      # xenhopt=dom0_mem=1024M

   and the ``xenkopt`` needs to include the ``maxcpus`` option like
   this:

   .. code-block:: text

      ## Xen Linux kernel options to use with the default Xen boot option
      # xenkopt=maxcpus=1

   Any existing parameters can be left in place: it's ok to have
   ``xenkopt=console=tty0 maxcpus=1``, for example. After modifying the
   files, you need to run::

     $ /sbin/update-grub

If you want to run HVM instances too with Ganeti and want VNC access to
the console of your instances, set the following two entries in
``/etc/xen/xend-config.sxp``:

.. code-block:: text

   (vnc-listen '0.0.0.0')
   (vncpasswd '')

You need to restart the Xen daemon for these settings to take effect::

  $ /etc/init.d/xend restart

Selecting the instance kernel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After you have installed Xen, you need to tell Ganeti exactly what
kernel to use for the instances it will create. This is done by creating
a symlink from your actual kernel to ``/boot/vmlinuz-3-xenU``, and one
from your initrd to ``/boot/initrd-3-xenU`` [#defkernel]_. Note that
if you don't use an initrd for the domU kernel, you don't need to create
the initrd symlink.

.. admonition:: Debian

   After installation of the ``xen-linux-system`` package, you need to
   run (replace the exact version number with the one you have)::

     $ cd /boot
     $ ln -s vmlinuz-%2.6.26-1%-xen-amd64 vmlinuz-3-xenU
     $ ln -s initrd.img-%2.6.26-1%-xen-amd64 initrd-3-xenU

   By default, the initrd doesn't contain the Xen block drivers needed
   to mount the root device, so it is recommended to update the initrd
   by following these two steps (a command sketch follows the list):

   - edit ``/etc/initramfs-tools/modules`` and add ``xen_blkfront``
   - run ``update-initramfs -u``

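   For example, the two steps could be performed like this (a minimal
   sketch that appends the module name and then rebuilds the initrd)::

     $ echo xen_blkfront >> /etc/initramfs-tools/modules
     $ update-initramfs -u
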
Installing DRBD
+++++++++++++++

Recommended on all nodes: DRBD_ is required if you want to use the high
availability (HA) features of Ganeti, but optional if you don't require
them or only run Ganeti on single-node clusters. You can upgrade a
non-HA cluster to an HA one later, but you might need to convert all
your instances to DRBD to take advantage of the new features.

.. _DRBD: http://www.drbd.org/

Supported DRBD versions: 8.0-8.3. It's recommended to have at least
version 8.0.12. Note that for version 8.2 and newer you need to pass
the ``usermode_helper=/bin/true`` parameter to the module, either by
configuring ``/etc/modules`` or when inserting it manually.

Now the bad news: unless your distribution already provides it,
installing DRBD might involve recompiling your kernel or at least
fiddling with it. Hopefully at least the Xen-ified kernel source to
start from will be provided (if you intend to use Xen).

The good news is that you don't need to configure DRBD at all. Ganeti
will do it for you for every instance you set up. If you have the DRBD
utils installed and the module in your kernel you're fine. Please check
that your system is configured to load the module at every boot, and
that it passes the following option to the module:
``minor_count=NUMBER``. We recommend that you use 128 as the value of
the minor_count - this will allow you to use up to 64 instances in total
per node (both primary and secondary, when using only one disk per
instance). You can increase the number up to 255 if you need more
instances on a node.

.. admonition:: Debian

   On Debian, you can just install (build) the DRBD module with the
   following commands, making sure you are running the target (Xen or
   KVM) kernel::

     $ apt-get install drbd8-source drbd8-utils
     $ m-a update
     $ m-a a-i drbd8

   Or, on newer versions where the kernel already includes the module,
   just install the utils::

     $ apt-get install drbd8-utils

   Then to configure it for Ganeti::

     $ echo drbd minor_count=128 usermode_helper=/bin/true >> /etc/modules
     $ depmod -a
     $ modprobe drbd minor_count=128 usermode_helper=/bin/true

   It is also recommended that you comment out the default resources
   (if any) in the ``/etc/drbd.conf`` file, so that the init script
   doesn't try to configure any drbd devices. You can do this by
   prefixing all *resource* lines in the file with the keyword *skip*,
   like this:

   .. code-block:: text

      skip {
        resource r0 {
          ...
        }
      }

      skip {
        resource "r1" {
          ...
        }
      }

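Once the module is loaded, you can verify that it picked up the
intended option through sysfs (a quick check, assuming your DRBD
version exposes its module parameters there)::

  $ cat /sys/module/drbd/parameters/minor_count
  128
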
Installing RBD
++++++++++++++

Recommended on all nodes: RBD_ is required if you want to create
instances with RBD disks residing inside a RADOS cluster (make use of
the rbd disk template). RBD-based instances can fail over or migrate to
any other node in the Ganeti cluster, enabling you to exploit all of
Ganeti's high availability (HA) features.

.. attention::
   Be careful though: rbd is still experimental! For now it is
   recommended only for testing purposes. No sensitive data should be
   stored there.

.. _RBD: http://ceph.newdream.net/

You will need the ``rbd`` and ``libceph`` kernel modules, the RBD/Ceph
userspace utils (ceph-common Debian package) and an appropriate
Ceph/RADOS configuration file on every VM-capable node.

You will also need a working RADOS Cluster accessible by the above
nodes.

RADOS Cluster
~~~~~~~~~~~~~

You will need a working RADOS Cluster accessible by all VM-capable nodes
to use the RBD template. For more information on setting up a RADOS
Cluster, refer to the `official docs <http://ceph.newdream.net/>`_.

If you want to use a pool for storing RBD disk images other than the
default (``rbd``), you should first create the pool in the RADOS
Cluster, and then set the corresponding rbd disk parameter named
``pool``.

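For example, with a reasonably recent Ceph release, a dedicated pool
could be created like this (a sketch only; the pool name and
placement-group count are placeholders, and the exact command depends
on your Ceph version)::

  $ ceph osd pool create %ganeti% 128
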
Kernel Modules
~~~~~~~~~~~~~~

Unless your distribution already provides it, you might need to compile
the ``rbd`` and ``libceph`` modules from source. You will need Linux
Kernel 3.2 or above for the kernel modules. Alternatively, if you want
to run a less recent kernel or your kernel doesn't include them, you
will have to build them as external modules (from Linux Kernel source
3.2 or above).

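You can check that the modules load correctly as follows (a quick
sanity check; ``modprobe rbd`` should also pull in ``libceph``
automatically as a dependency)::

  $ modprobe rbd
  $ lsmod | grep -e rbd -e libceph
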
Userspace Utils
~~~~~~~~~~~~~~~

The RBD template has been tested with ``ceph-common`` v0.38 and
above. We recommend using the latest version of ``ceph-common``.

.. admonition:: Debian

   On Debian, you can just install the RBD/Ceph userspace utils with
   the following command::

     $ apt-get install ceph-common

Configuration file
~~~~~~~~~~~~~~~~~~

You should also provide an appropriate configuration file
(``ceph.conf``) in ``/etc/ceph``. For the rbd userspace utils, you'll
only need to specify the IP addresses of the RADOS Cluster monitors.

.. admonition:: ceph.conf

   Sample configuration file:

   .. code-block:: text

      [mon.a]
             host = example_monitor_host1
             mon addr = 1.2.3.4:6789
      [mon.b]
             host = example_monitor_host2
             mon addr = 1.2.3.5:6789
      [mon.c]
             host = example_monitor_host3
             mon addr = 1.2.3.6:6789

For more information, please see the `Ceph Docs
<http://ceph.newdream.net/docs/latest/>`_.

Other required software
+++++++++++++++++++++++

Please install all software requirements mentioned in :doc:`install-quick`.
If you want to build Ganeti from source, don't forget to follow the steps
required for that as well.

Setting up the environment for Ganeti
-------------------------------------

Configuring the network
+++++++++++++++++++++++

**Mandatory** on all nodes.

You can run Ganeti either in "bridged mode", "routed mode" or
"openvswitch mode". In bridged mode, the default, the instances'
network interfaces will be attached to a software bridge running in
dom0. Xen by default creates such a bridge at startup, but your
distribution might have a different way to do things, and you'll
definitely need to manually set it up under KVM.

Beware that the default name Ganeti uses is ``xen-br0`` (which was used
in Xen 2.0) while Xen 3.0 uses ``xenbr0`` by default. See the
`Initializing the cluster`_ section to learn how to choose a different
bridge, or not to use one at all and use "routed mode".

In order to use "routed mode" under Xen, you'll need to change the
relevant parameters in the Xen config file. Under KVM instead, no config
change is necessary, but you still need to set up your network
interfaces correctly.

By default, under KVM, the "link" parameter you specify per-nic will
represent, if non-empty, a different routing table name or number to use
for your instances. This allows isolation between different instance
groups, and different routing policies between node traffic and instance
traffic.

You will need to configure the basic routes and rules for your routing
table outside of Ganeti. The vif scripts will only add /32 routes to
your instances, through their interface, in the table you specified
(under KVM, and in the main table under Xen).

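As an illustration only (a sketch; table number ``100`` matches the
``link`` example used later in `Initializing the cluster`_, while the
network, gateway and device are placeholders for your own setup), the
basic rules and routes could be configured like this::

  $ ip rule add from %192.0.2.0/24% table %100%
  $ ip route add default via %192.0.2.254% dev %eth0% table %100%
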
Also for "openvswitch mode" under Xen a custom network script is needed.
Under KVM everything should work, but you'll need to configure your
switches outside of Ganeti (as for bridges).

.. admonition:: Bridging issues with certain kernels

   Some kernel versions (e.g. 2.6.32) have an issue where the bridge
   will automatically change its ``MAC`` address to the lower-numbered
   slave on port addition and removal. This means that, depending on
   the ``MAC`` address of the actual NIC on the node and the addresses
   of the instances, it could be that starting, stopping or migrating
   instances will lead to timeouts due to the address of the bridge
   (and thus node itself) changing.

   To prevent this, it's enough to set the bridge manually to a
   specific ``MAC`` address, which will disable this automatic address
   change. In Debian, this can be done as follows in the bridge
   configuration snippet::

     up ip link set addr $(cat /sys/class/net/$IFACE/address) dev $IFACE

   which will "set" the bridge address to the initial one, disallowing
   changes.

.. admonition:: Bridging under Debian

   The recommended way to configure the Xen bridge is to edit your
   ``/etc/network/interfaces`` file and substitute your normal
   Ethernet stanza with the following snippet::

     auto xen-br0
     iface xen-br0 inet static
        address %YOUR_IP_ADDRESS%
        netmask %YOUR_NETMASK%
        network %YOUR_NETWORK%
        broadcast %YOUR_BROADCAST_ADDRESS%
        gateway %YOUR_GATEWAY%
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        # example for setting manually the bridge address to the eth0 NIC
        up ip link set addr $(cat /sys/class/net/eth0/address) dev $IFACE

   The following commands need to be executed on the local console::

     $ ifdown eth0
     $ ifup xen-br0

   To check if the bridge is set up, use the ``ip`` and ``brctl show``
   commands::

     $ ip a show xen-br0
     9: xen-br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue
         link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
         inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
         inet6 fe80::220:fcff:fe1e:d55d/64 scope link
            valid_lft forever preferred_lft forever

     $ brctl show xen-br0
     bridge name     bridge id               STP enabled     interfaces
     xen-br0         8000.0020fc1ed55d       no              eth0

.. _configure-lvm-label:

Configuring LVM
+++++++++++++++

**Mandatory** on all nodes.

The volume group is required to be at least 20GiB.

If you haven't configured your LVM volume group at install time you need
to do it before trying to initialize the Ganeti cluster. This is done by
formatting the devices/partitions you want to use for it and then adding
them to the relevant volume group::

  $ pvcreate /dev/%sda3%
  $ vgcreate xenvg /dev/%sda3%

or::

  $ pvcreate /dev/%sdb1%
  $ pvcreate /dev/%sdc1%
  $ vgcreate xenvg /dev/%sdb1% /dev/%sdc1%

If you want to add a device later you can do so with the *vgextend*
command::

  $ pvcreate /dev/%sdd1%
  $ vgextend xenvg /dev/%sdd1%

Optional: it is recommended to configure LVM not to scan the DRBD
devices for physical volumes. This can be accomplished by editing
``/etc/lvm/lvm.conf`` and adding the ``/dev/drbd[0-9]+`` regular
expression to the ``filter`` variable, like this:

.. code-block:: text

   filter = ["r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]

Note that Ganeti provides a helper script, ``lvmstrap``, which will
erase and configure as LVM any disk on your system that is not in use.
This is dangerous, so it's recommended to read its ``--help`` output
before using it.

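You can afterwards check the volume group and its size with *vgs* (a
quick sanity check; the sizes shown below are just an illustration)::

  $ vgs xenvg
    VG    #PV #LV #SN Attr   VSize   VFree
    xenvg   1   0   0 wz--n- 185.00g 185.00g
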
Installing Ganeti
+++++++++++++++++

**Mandatory** on all nodes.

It's now time to install the Ganeti software itself. Download the
source from the project page at `<http://code.google.com/p/ganeti/>`_,
and install it (replace 2.6.0 with the latest version)::

  $ tar xvzf ganeti-%2.6.0%.tar.gz
  $ cd ganeti-%2.6.0%
  $ ./configure --localstatedir=/var --sysconfdir=/etc
  $ make
  $ make install
  $ mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export

You also need to copy the file ``doc/examples/ganeti.initd`` from the
source archive to ``/etc/init.d/ganeti`` and register it with your
distribution's startup scripts, for example in Debian::

  $ update-rc.d ganeti defaults 20 80

In order to automatically restart failed instances, you need to set up
a cron job to run the *ganeti-watcher* command. A sample cron file is
provided in the source at ``doc/examples/ganeti.cron`` and you can copy
that (altering the path if necessary) to ``/etc/cron.d/ganeti``.

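For example, from the unpacked source tree (adjusting the path if your
tree lives elsewhere)::

  $ cp doc/examples/ganeti.cron /etc/cron.d/ganeti
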
What gets installed
~~~~~~~~~~~~~~~~~~~

The above ``make install`` invocation, or installing via your
distribution mechanisms, will install on the system:

- a set of python libraries under the *ganeti* namespace (depending on
  the python version this can be located in either
  ``lib/python-$ver/site-packages`` or various other locations)
- a set of programs under ``/usr/local/sbin`` or ``/usr/sbin``
- if the htools component was enabled, a set of programs under
  ``/usr/local/bin`` or ``/usr/bin/``
- man pages for the above programs
- a set of tools under the ``lib/ganeti/tools`` directory
- an example iallocator script (see the admin guide for details) under
  ``lib/ganeti/iallocators``
- a cron job that is needed for cluster maintenance
- an init script for automatic startup of Ganeti daemons
- provided but not installed automatically by ``make install`` is a bash
  completion script that hopefully will ease working with the many
  cluster commands

Installing the Operating System support packages
++++++++++++++++++++++++++++++++++++++++++++++++

**Mandatory** on all nodes.

To be able to install instances you need to have an Operating System
installation script. An example OS that works under Debian and can
install Debian and Ubuntu instance OSes is provided on the project web
site. Download it from the project page and follow the instructions in
the ``README`` file. Here is the installation procedure (replace 0.12
with the latest version that is compatible with your ganeti version)::

  $ cd /usr/local/src/
  $ wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-%0.12%.tar.gz
  $ tar xzf ganeti-instance-debootstrap-%0.12%.tar.gz
  $ cd ganeti-instance-debootstrap-%0.12%
  $ ./configure --with-os-dir=/srv/ganeti/os
  $ make
  $ make install

In order to use this OS definition, you need to have internet access
from your nodes and have the *debootstrap*, *dump* and *restore*
commands installed on all nodes. Also, if the OS is configured to
partition the instance's disk in
``/etc/default/ganeti-instance-debootstrap``, you will need *kpartx*
installed.

.. admonition:: Debian

   Use this command on all nodes to install the required packages::

     $ apt-get install debootstrap dump kpartx

   Or alternatively install the OS definition from the Debian package::

     $ apt-get install ganeti-instance-debootstrap

.. admonition:: KVM

   In order for debootstrap instances to be able to shut down cleanly
   they must have basic ACPI support installed inside the instance.
   Which packages are needed depends on the exact flavor of Debian or
   Ubuntu which you're installing, but the example defaults file has a
   commented out configuration line that works for Debian Lenny and
   Squeeze::

     EXTRA_PKGS="acpi-support-base,console-tools,udev"

   ``kbd`` can be used instead of ``console-tools``, and more packages
   can be added, of course, if needed.

Please refer to the ``README`` file of ``ganeti-instance-debootstrap``
for further documentation.

Alternatively, you can create your own OS definitions. See the manpage
:manpage:`ganeti-os-interface(7)`.

Initializing the cluster
++++++++++++++++++++++++

**Mandatory** once per cluster, on the first node.

The last step is to initialize the cluster. After you have repeated the
above process on all of your nodes, choose one as the master. Make sure
there is an SSH key pair on the master node (optionally generating one
using ``ssh-keygen``). Finally execute::

  $ gnt-cluster init %CLUSTERNAME%

The *CLUSTERNAME* is a hostname, which must be resolvable (e.g. it must
exist in DNS or in ``/etc/hosts``) by all the nodes in the cluster. You
must choose a name different from any of the node names for a
multi-node cluster. In general the best choice is to have a unique name
for a cluster, even if it consists of only one machine, as you will be
able to expand it later without any problems. Please note that the
hostname used for this must resolve to an IP address reserved
**exclusively** for this purpose, and cannot be the name of the first
(master) node.

If you want to use a bridge which is not ``xen-br0``, or no bridge at
all, change it with the ``--nic-parameters`` option. For example to
bridge on br0 you can add::

  --nic-parameters link=br0

Or to not bridge at all, and use a separate routing table::

  --nic-parameters mode=routed,link=100

If you don't have a ``xen-br0`` interface you also have to specify a
different network interface which will get the cluster IP, on the master
node, by using the ``--master-netdev <device>`` option.

You can use a different name than ``xenvg`` for the volume group (but
note that the name must be identical on all nodes). In this case you
need to specify it by passing the *--vg-name <VGNAME>* option to
``gnt-cluster init``.

To set up the cluster as a Xen HVM cluster, use the
``--enabled-hypervisors=xen-hvm`` option to enable the HVM hypervisor
(you can also add ``,xen-pvm`` to enable the PVM one too). You will also
need to create the VNC cluster password file
``/etc/ganeti/vnc-cluster-password`` which contains one line with the
default VNC password for the cluster.

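For example (a sketch; pick your own password, of course)::

  $ echo %vnc-password% > /etc/ganeti/vnc-cluster-password
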
To set up the cluster for KVM-only usage (KVM and Xen cannot be mixed),
pass ``--enabled-hypervisors=kvm`` to the init command.

You can also invoke the command with the ``--help`` option in order to
see all the possibilities.

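Putting several of the above options together, a KVM-only cluster
initialization could look like this (an illustrative sketch; every name
is a placeholder for your own environment)::

  $ gnt-cluster init --enabled-hypervisors=kvm --master-netdev %eth0% \
    --vg-name %xenvg% --nic-parameters link=%br0% %cluster.example.com%
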
Hypervisor/Network/Cluster parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Please note that the default hypervisor/network/cluster parameters may
not be the correct ones for your environment. Carefully check them, and
change them either at cluster init time, or later with ``gnt-cluster
modify``.

Your instance types, networking environment, hypervisor type and version
may all affect what kind of parameters should be used on your cluster.

.. admonition:: KVM

   Instances are by default configured to use a host kernel, and to be
   reached via serial console, which works nicely for Linux
   paravirtualized instances. If you want fully virtualized instances
   you may want to handle their kernel inside the instance, and to use
   VNC.

   Some versions of KVM have a bug that will make an instance hang when
   configured to use the serial console (which is the default) unless a
   connection is made to it within about 2 seconds of the instance's
   startup. In such cases it's recommended to disable the
   ``serial_console`` option.

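If, for example, you hit the serial console problem described above,
the parameter can be turned off cluster-wide like this (a sketch using
the ``-H`` hypervisor-parameter syntax)::

  $ gnt-cluster modify -H kvm:serial_console=false
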
Joining the nodes to the cluster
++++++++++++++++++++++++++++++++

**Mandatory** for all the other nodes.

After you have initialized your cluster you need to join the other nodes
to it. You can do so by executing the following command on the master
node::

  $ gnt-node add %NODENAME%

Separate replication network
++++++++++++++++++++++++++++

**Optional**

Ganeti uses DRBD to mirror the disk of the virtual instances between
nodes. To use a dedicated network interface for this (in order to
improve performance or to enhance security) you need to configure an
additional interface for each node. Use the *-s* option with
``gnt-cluster init`` and ``gnt-node add`` to specify the IP address of
this secondary interface to use for each node. Note that if you
specified this option at cluster setup time, you must afterwards use it
for every node add operation.

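For example (the secondary addresses are placeholders for your
replication network)::

  $ gnt-cluster init -s %192.0.2.1% %CLUSTERNAME%
  $ gnt-node add -s %192.0.2.2% %NODENAME%
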
Testing the setup
+++++++++++++++++

Execute the ``gnt-node list`` command to see all nodes in the cluster::

  $ gnt-node list
  Node              DTotal DFree MTotal MNode MFree Pinst Sinst
  node1.example.com 197404 197404   2047  1896   125     0     0

The above shows a couple of things:

- The various Ganeti daemons can talk to each other
- Ganeti can examine the storage of the node (DTotal/DFree)
- Ganeti can talk to the selected hypervisor (MTotal/MNode/MFree)

Cluster burnin
~~~~~~~~~~~~~~

Ganeti provides a tool called :command:`burnin` that can test most of
the Ganeti functionality. The tool is installed under the
``lib/ganeti/tools`` directory (either under ``/usr`` or ``/usr/local``
based on the installation method). See more details under
:ref:`burnin-label`.

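A typical invocation could look like the following (an illustrative
sketch only; the OS and instance names are placeholders, and the tool's
``--help`` output lists the supported options)::

  $ /usr/lib/ganeti/tools/burnin -o %debootstrap% %instance1.example.com%
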
Further steps
-------------

You can now proceed either to the :doc:`admin`, or read the manpages of
the various commands (:manpage:`ganeti(7)`, :manpage:`gnt-cluster(8)`,
:manpage:`gnt-node(8)`, :manpage:`gnt-instance(8)`,
:manpage:`gnt-job(8)`).

.. rubric:: Footnotes

.. [#defkernel] The kernel and initrd paths can be changed at either
   cluster level (which changes the default for all instances) or at
   instance level.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: