Ganeti installation tutorial
============================

Documents Ganeti version |version|

.. contents::

Introduction
------------

Ganeti is a cluster virtualization management system based on Xen or
KVM. This document explains how to bootstrap a Ganeti node (Xen
*dom0*), create a running cluster and install virtual instances (Xen
*domU*). You need to repeat most of the steps in this document for
every node you want to install, but of course we recommend creating
some semi-automatic procedure if you plan to deploy Ganeti on a
medium/large scale.

A basic Ganeti terminology glossary is provided in the introductory
section of the *Ganeti administrator's guide*. Please refer to that
document if you are uncertain about the terms we are using.

Ganeti has been developed for Linux and is distribution-agnostic.
This documentation will use Debian Lenny as an example system, but the
examples can easily be translated to any other distribution. You are
expected to be familiar with your distribution, its package management
system, and Xen or KVM before trying to use Ganeti.

This document is divided into two main sections:

- Installation of the base system and base components

- Configuration of the environment for Ganeti

Each of these is divided into sub-sections. While a full Ganeti system
will need all of the steps specified, some are not strictly required
for every environment. Which ones they are, and why, is specified in
the corresponding sections.

Installing the base system and base components
----------------------------------------------

Hardware requirements
+++++++++++++++++++++

Any system supported by your Linux distribution is fine. 64-bit
systems are better as they can support more memory.

Any disk drive recognized by Linux (``IDE``/``SCSI``/``SATA``/etc.)
is supported in Ganeti. Note that no shared storage (e.g. ``SAN``) is
needed to get high-availability features (but of course, one can be
used to store the images). It is highly recommended to use more than
one disk drive to improve speed. But Ganeti also works with one disk
per machine.

Installing the base system
++++++++++++++++++++++++++

**Mandatory** on all nodes.

It is advised to start with a clean, minimal install of the operating
system. The only requirement you need to be aware of at this stage is
to leave enough space when partitioning for a big (**minimum** 20GiB)
LVM volume group, which will then host your instance filesystems, if
you want to use all Ganeti features. The volume group name Ganeti 2.0
uses (by default) is ``xenvg``.

You can also use file-based storage only, without LVM, but this setup
is not detailed in this document.

While you can use an existing system, please note that the Ganeti
installation is intrusive in terms of changes to the system
configuration, and it's best to use a newly-installed system without
important data on it.

Also, for best results, it's advised that the nodes have as much as
possible the same hardware and software configuration. This will make
administration much easier.

Hostname issues
~~~~~~~~~~~~~~~

Note that Ganeti requires the hostnames of the systems (i.e. what the
``hostname`` command outputs) to be a fully-qualified name, not a
short name. In other words, you should use *node1.example.com* as a
hostname and not just *node1*.

.. admonition:: Debian

   Debian Lenny and Etch configure the hostname differently than you
   need it for Ganeti. For example, this is what Etch puts in
   ``/etc/hosts`` in certain situations::

     127.0.0.1       localhost
     127.0.1.1       node1.example.com node1

   but for Ganeti you need to have::

     127.0.0.1       localhost
     192.168.1.1     node1.example.com node1

   replacing ``192.168.1.1`` with your node's address. Also, the file
   ``/etc/hostname`` which configures the hostname of the system
   should contain ``node1.example.com`` and not just ``node1`` (you
   need to run the command ``/etc/init.d/hostname.sh start`` after
   changing the file).

.. admonition:: Why a fully qualified host name

   Although most distributions use only the short name in the
   ``/etc/hostname`` file, we still think Ganeti nodes should use the
   full name. The reason for this is that calling ``hostname --fqdn``
   requires the resolver library to work, and is a "guess" via
   heuristics at what your domain name is. Since Ganeti can be used
   among other things to host DNS servers, we want to depend on them
   as little as possible, and we'd rather have the uname() syscall
   return the full node name.

   We haven't ever found any breakage in using a full hostname on a
   Linux system, and anyway we recommend having only a minimal
   installation on Ganeti nodes, and using instances (or other
   dedicated machines) to run the rest of your network services. By
   doing this you can change the ``/etc/hostname`` file to contain an
   FQDN without the fear of breaking anything unrelated.
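
A quick way to check whether a node meets this requirement (the host
name shown below is of course just an example) is to compare the
plain and fully-qualified outputs; both should print the full name::

  # hostname
  node1.example.com
  # hostname --fqdn
  node1.example.com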

Installing The Hypervisor
+++++++++++++++++++++++++

**Mandatory** on all nodes.

While Ganeti is developed with the ability to modularly run on
different virtualization environments in mind, the only two currently
usable on a live system are Xen and KVM. Supported Xen versions are:
3.0.3, 3.0.4 and 3.1. Supported KVM versions are 72 and above.

Please follow your distribution's recommended way to install and set
up Xen, or install Xen from the upstream source, if you wish,
following their manual. For KVM, make sure you have a KVM-enabled
kernel and the KVM tools.

After installing Xen, you need to reboot into your new system. On some
distributions this might involve configuring GRUB appropriately,
whereas others will configure it automatically when you install the
respective kernels. For KVM no reboot should be necessary.

.. admonition:: Xen on Debian

   Under Lenny or Etch you can install the relevant
   ``xen-linux-system`` package, which will pull in both the
   hypervisor and the relevant kernel. Also, if you are installing a
   32-bit Lenny/Etch, you should install the ``libc6-xen`` package
   (run ``apt-get install libc6-xen``).

Xen settings
~~~~~~~~~~~~

It's recommended that dom0 is restricted to a low amount of memory
(512MiB or 1GiB is reasonable) and that memory ballooning is disabled
in the file ``/etc/xen/xend-config.sxp`` by setting the value
``dom0-min-mem`` to 0, like this::

  (dom0-min-mem 0)

For optimum performance when running both CPU and I/O intensive
instances, it's also recommended that the dom0 is restricted to one
CPU only, for example by booting with the kernel parameter ``nosmp``.

It is recommended that you disable Xen's automatic save of virtual
machines at system shutdown and subsequent restore of them at reboot.
To achieve this, make sure the variable ``XENDOMAINS_SAVE`` in the
file ``/etc/default/xendomains`` is set to an empty value.

.. admonition:: Debian

   Besides the ballooning change which you need to set in
   ``/etc/xen/xend-config.sxp``, you need to set the memory and nosmp
   parameters in the file ``/boot/grub/menu.lst``. You need to modify
   the variable ``xenhopt`` to add ``dom0_mem=1024M`` like this::

     ## Xen hypervisor options to use with the default Xen boot option
     # xenhopt=dom0_mem=1024M

   and the ``xenkopt`` needs to include the ``nosmp`` option like
   this::

     ## Xen Linux kernel options to use with the default Xen boot option
     # xenkopt=nosmp

   Any existing parameters can be left in place: it's ok to have
   ``xenkopt=console=tty0 nosmp``, for example. After modifying the
   files, you need to run::

     /sbin/update-grub

If you want to run HVM instances too with Ganeti and want VNC access
to the console of your instances, set the following two entries in
``/etc/xen/xend-config.sxp``::

  (vnc-listen '0.0.0.0')
  (vncpasswd '')

You need to restart the Xen daemon for these settings to take effect::

  /etc/init.d/xend restart
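
After the reboot you can verify that the memory and CPU restrictions
took effect with the standard Xen tools; the figures shown below are
purely illustrative::

  # xm list Domain-0
  Name            ID   Mem VCPUs      State   Time(s)
  Domain-0         0  1024     1     r-----     45.2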

Selecting the instance kernel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After you have installed Xen, you need to tell Ganeti exactly what
kernel to use for the instances it will create. This is done by
creating a symlink from your actual kernel to
``/boot/vmlinuz-2.6-xenU``, and one from your initrd to
``/boot/initrd-2.6-xenU``. Note that if you don't use an initrd for
the domU kernel, you don't need to create the initrd symlink.

.. admonition:: Debian

   After installation of the ``xen-linux-system`` package, you need
   to run (replace the exact version number with the one you have)::

     cd /boot
     ln -s vmlinuz-2.6.26-1-xen-amd64 vmlinuz-2.6-xenU
     ln -s initrd.img-2.6.26-1-xen-amd64 initrd-2.6-xenU
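
As a quick sanity check, ``ls -lL`` dereferences symlinks, so the
following should list regular files rather than fail (skip the initrd
if you don't use one)::

  ls -lL /boot/vmlinuz-2.6-xenU /boot/initrd-2.6-xenU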

Installing DRBD
+++++++++++++++

Recommended on all nodes: DRBD_ is required if you want to use the
high availability (HA) features of Ganeti, but optional if you don't
require HA or only run Ganeti on single-node clusters. You can upgrade
a non-HA cluster to an HA one later, but you might need to export and
re-import all your instances to take advantage of the new features.

.. _DRBD: http://www.drbd.org/

Supported DRBD versions: 8.0.x. It's recommended to have at least
version 8.0.12.

Now the bad news: unless your distribution already provides it,
installing DRBD might involve recompiling your kernel or at least
fiddling with it. Hopefully at least the Xen-ified kernel source to
start from will be provided.

The good news is that you don't need to configure DRBD at all. Ganeti
will do it for you for every instance you set up. If you have the
DRBD utils installed and the module in your kernel you're fine. Please
check that your system is configured to load the module at every boot,
and that it passes the option ``minor_count=255`` to the module. This
will allow you to use up to 128 instances per node (128 should be
enough for most clusters, though).

.. admonition:: Debian

   On Debian, you can just install (build) the DRBD 8.0.x module with
   the following commands (make sure you are running the Xen
   kernel)::

     apt-get install drbd8-source drbd8-utils
     m-a update
     m-a a-i drbd8
     echo drbd minor_count=128 >> /etc/modules
     depmod -a
     modprobe drbd minor_count=128

   It is also recommended that you comment out the default resources
   in the ``/etc/drbd.conf`` file, so that the init script doesn't try
   to configure any drbd devices. You can do this by prefixing all
   *resource* lines in the file with the keyword *skip*, like this::

     skip resource r0 {
       ...
     }

     skip resource "r1" {
       ...
     }
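
Once the module is loaded, ``/proc/drbd`` should exist; reading it is
a simple way to confirm that the kernel side is in place (the version
shown here is only an example)::

  # cat /proc/drbd
  version: 8.0.14 (api:86/proto:86)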

Other required software
+++++++++++++++++++++++

Besides Xen and DRBD, you will need to install the following (on all
nodes):

- LVM version 2, `<http://sourceware.org/lvm2/>`_

- OpenSSL, `<http://www.openssl.org/>`_

- OpenSSH, `<http://www.openssh.com/portable.html>`_

- bridge utilities, `<http://bridge.sourceforge.net/>`_

- iproute2, `<http://developer.osdl.org/dev/iproute2>`_

- arping (part of the iputils package),
  `<ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz>`_

- Python version 2.4 or 2.5, `<http://www.python.org>`_

- Python OpenSSL bindings, `<http://pyopenssl.sourceforge.net/>`_

- simplejson Python module, `<http://www.undefined.org/python/#simplejson>`_

- pyparsing Python module, `<http://pyparsing.wikispaces.com/>`_

- pyinotify Python module, `<http://trac.dbzteam.org/pyinotify>`_

These programs are supplied as part of most Linux distributions, so
usually they can be installed via apt or similar methods. Also, many
of them will already be installed on a standard machine.

.. admonition:: Debian

   You can use this command line to install all needed packages::

     # apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
       python python-pyopenssl openssl python-pyparsing python-simplejson \
       python-pyinotify

Setting up the environment for Ganeti
-------------------------------------

Configuring the network
+++++++++++++++++++++++

**Mandatory** on all nodes.

You can run Ganeti either in "bridged mode" or in "routed mode". In
bridged mode, the default, the instances' network interfaces will be
attached to a software bridge running in dom0. Xen by default creates
such a bridge at startup, but your distribution might have a different
way to do things, and you'll definitely need to set it up manually
under KVM.

Beware that the default name Ganeti uses is ``xen-br0`` (which was
used in Xen 2.0) while Xen 3.0 uses ``xenbr0`` by default. The default
bridge your Ganeti cluster will use for new instances can be specified
at cluster initialization time.

If you want to run in "routed mode" you need to specify that at
cluster init time (using the ``--nicparams`` option), and then no
bridge will be needed. In this mode instance traffic will be routed by
dom0, instead of bridged.

In order to use "routed mode" under Xen, you'll need to change the
relevant parameters in the Xen config file. Under KVM instead, no
config change is necessary, but you still need to set up your network
interfaces correctly.

By default, under KVM, the "link" parameter you specify per-NIC will
represent, if non-empty, a different routing table name or number to
use for your instances. This allows isolation between different
instance groups, and different routing policies between node traffic
and instance traffic.

You will need to configure the basic routes and rules of your routing
tables outside of Ganeti. The vif scripts will only add /32 routes to
your instances, through their interface, in the table you specified
(under KVM, and in the main table under Xen).
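
As a sketch only: assuming you have created a dedicated routing table
(here number ``100``, a purely illustrative value) on your nodes,
routed mode could be requested at cluster initialization along these
lines; check ``gnt-cluster init --help`` for the authoritative
parameter syntax::

  gnt-cluster init --nicparams mode=routed,link=100 cluster1.example.com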

.. admonition:: Bridging under Debian

   The recommended way to configure the Xen bridge is to edit your
   ``/etc/network/interfaces`` file and substitute your normal
   Ethernet stanza with the following snippet::

     auto xen-br0
     iface xen-br0 inet static
        address YOUR_IP_ADDRESS
        netmask YOUR_NETMASK
        network YOUR_NETWORK
        broadcast YOUR_BROADCAST_ADDRESS
        gateway YOUR_GATEWAY
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

   The following commands need to be executed on the local console::

     ifdown eth0
     ifup xen-br0

   To check if the bridge is set up, use the ``ip`` and ``brctl show``
   commands::

     # ip a show xen-br0
     9: xen-br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue
         link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
         inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
         inet6 fe80::220:fcff:fe1e:d55d/64 scope link
            valid_lft forever preferred_lft forever

     # brctl show xen-br0
     bridge name     bridge id               STP enabled     interfaces
     xen-br0         8000.0020fc1ed55d       no              eth0

Configuring LVM
+++++++++++++++

**Mandatory** on all nodes.

The volume group is required to be at least 20GiB.

If you haven't configured your LVM volume group at install time you
need to do it before trying to initialize the Ganeti cluster. This is
done by formatting the devices/partitions you want to use for it and
then adding them to the relevant volume group::

  pvcreate /dev/sda3
  vgcreate xenvg /dev/sda3

or::

  pvcreate /dev/sdb1
  pvcreate /dev/sdc1
  vgcreate xenvg /dev/sdb1 /dev/sdc1

If you want to add a device later you can do so with the *vgextend*
command::

  pvcreate /dev/sdd1
  vgextend xenvg /dev/sdd1

Optional: it is recommended to configure LVM not to scan the DRBD
devices for physical volumes. This can be accomplished by editing
``/etc/lvm/lvm.conf`` and adding the ``/dev/drbd[0-9]+`` regular
expression to the ``filter`` variable, like this::

  filter = ["r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]
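
You can verify the result with the ``vgs`` command; the sizes below
are, of course, just an example::

  # vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  xenvg   1   0   0 wz--n- 185.29G 185.29G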

Installing Ganeti
+++++++++++++++++

**Mandatory** on all nodes.

It's now time to install the Ganeti software itself. Download the
source from the project page at `<http://code.google.com/p/ganeti/>`_,
and install it (replace 2.0.0 with the latest version)::

  tar xvzf ganeti-2.0.0.tar.gz
  cd ganeti-2.0.0
  ./configure --localstatedir=/var --sysconfdir=/etc
  make
  make install
  mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export

You also need to copy the file ``doc/examples/ganeti.initd`` from the
source archive to ``/etc/init.d/ganeti`` and register it with your
distribution's startup scripts, for example in Debian::

  update-rc.d ganeti defaults 20 80

In order to automatically restart failed instances, you need to set up
a cron job that runs the *ganeti-watcher* command. A sample cron file
is provided in the source at ``doc/examples/ganeti.cron`` and you can
copy that (possibly altering the path) to ``/etc/cron.d/ganeti``.
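
A minimal ``/etc/cron.d/ganeti`` entry could look like the following
sketch; the interval and the install path shown are illustrative, so
prefer adapting the shipped ``doc/examples/ganeti.cron`` file instead::

  # Restart failed instances every five minutes
  */5 * * * * root /usr/local/sbin/ganeti-watcher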

Installing the Operating System support packages
++++++++++++++++++++++++++++++++++++++++++++++++

**Mandatory** on all nodes.

To be able to install instances you need to have an Operating System
installation script. An example OS that works under Debian and can
install Debian and Ubuntu instance OSes is provided on the project web
site. Download it from the project page and follow the instructions in
the ``README`` file. Here is the installation procedure (replace 0.7
with the latest version that is compatible with your Ganeti
version)::

  cd /usr/local/src/
  wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-0.7.tar.gz
  tar xzf ganeti-instance-debootstrap-0.7.tar.gz
  cd ganeti-instance-debootstrap-0.7
  ./configure
  make
  make install

In order to use this OS definition, you need to have internet access
from your nodes and have the *debootstrap*, *dump* and *restore*
commands installed on all nodes. Also, if the OS is configured to
partition the instance's disk in
``/etc/default/ganeti-instance-debootstrap``, you will need *kpartx*
installed.

.. admonition:: Debian

   Use this command on all nodes to install the required packages::

     apt-get install debootstrap dump kpartx

Alternatively, you can create your own OS definitions. See the manpage
:manpage:`ganeti-os-interface`.

Initializing the cluster
++++++++++++++++++++++++

**Mandatory** on one node per cluster.

The last step is to initialize the cluster. After you've repeated the
above process on all of your nodes, choose one as the master, and
execute::

  gnt-cluster init <CLUSTERNAME>

The *CLUSTERNAME* is a hostname, which must be resolvable (e.g. it
must exist in DNS or in ``/etc/hosts``) by all the nodes in the
cluster. You must choose a name different from any of the nodes' names
for a multi-node cluster. In general the best choice is to have a
unique name for a cluster, even if it consists of only one machine, as
you will be able to expand it later without any problems. Please note
that the hostname used for this must resolve to an IP address reserved
**exclusively** for this purpose, and cannot be the name of the first
(master) node.

If you want to use a bridge which is not ``xen-br0``, or no bridge at
all, use the ``--nicparams`` option to set the default NIC parameters
accordingly (see the network configuration section above).

If the bridge name you are using is not ``xen-br0``, use the *-b
<BRIDGENAME>* option to specify the bridge name. In this case, you
should also use the *--master-netdev <BRIDGENAME>* option with the
same BRIDGENAME argument.

You can use a different name than ``xenvg`` for the volume group (but
note that the name must be identical on all nodes). In this case you
need to specify it by passing the *-g <VGNAME>* option to
``gnt-cluster init``.

To set up the cluster as an HVM cluster, use the
``--enabled-hypervisors=xen-hvm`` option to enable the HVM hypervisor
(you can also add ``,xen-pvm`` to enable the PVM one too). You will
also need to create the VNC cluster password file
``/etc/ganeti/vnc-cluster-password`` which contains one line with the
default VNC password for the cluster.

To set up the cluster for KVM-only usage (KVM and Xen cannot be
mixed), pass ``--enabled-hypervisors=kvm`` to the init command.

You can also invoke the command with the ``--help`` option in order to
see all the possibilities.
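
Putting several of these options together, an initialization command
for a hypothetical Xen PVM cluster using a non-default bridge and
volume group might look like this (all names are examples only)::

  gnt-cluster init -b br0 --master-netdev br0 -g ganetivg \
    --enabled-hypervisors=xen-pvm cluster1.example.com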

Joining the nodes to the cluster
++++++++++++++++++++++++++++++++

**Mandatory** for all the other nodes.

After you have initialized your cluster you need to join the other
nodes to it. You can do so by executing the following command on the
master node::

  gnt-node add <NODENAME>

Separate replication network
++++++++++++++++++++++++++++

**Optional**

Ganeti uses DRBD to mirror the disk of the virtual instances between
nodes. To use a dedicated network interface for this (in order to
improve performance or to enhance security) you need to configure an
additional interface for each node. Use the *-s* option with
``gnt-cluster init`` and ``gnt-node add`` to specify the IP address of
this secondary interface to use for each node. Note that if you
specified this option at cluster setup time, you must afterwards use
it for every node add operation.
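
For example (the addresses below are purely illustrative, taken from
a hypothetical dedicated replication subnet)::

  gnt-cluster init -s 192.168.2.1 cluster1.example.com
  gnt-node add -s 192.168.2.2 node2.example.com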

Testing the setup
+++++++++++++++++

Execute the ``gnt-node list`` command to see all nodes in the
cluster::

  # gnt-node list
  Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
  node1.example.com 197404 197404   2047  1896   125     0     0
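
You can also run ``gnt-cluster verify``, which performs a series of
cluster-wide sanity checks and is a convenient way to catch
configuration problems early::

  gnt-cluster verify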

Setting up and managing virtual instances
-----------------------------------------

Setting up virtual instances
++++++++++++++++++++++++++++

This step shows how to set up a virtual instance with either
non-mirrored disks (``plain``) or with network mirrored disks
(``drbd``). All commands need to be executed on the Ganeti master
node (the one on which ``gnt-cluster init`` was run). Verify that the
OS scripts are present on all cluster nodes with ``gnt-os list``.

To create a virtual instance, you need a hostname which is resolvable
(DNS or ``/etc/hosts`` on all nodes). The following command will
create a non-mirrored instance for you::

  gnt-instance add -t plain -s 1G -n node1 -o debootstrap instance1.example.com
  * creating instance disks...
  adding instance instance1.example.com to cluster config
   - INFO: Waiting for instance instance1.example.com to sync disks.
   - INFO: Instance instance1.example.com's disks are in sync.
  creating os for instance instance1.example.com on node node1.example.com
  * running the instance OS create scripts...
  * starting instance...

The above instance will have no network interface enabled. You can
access it over the virtual console with ``gnt-instance console
instance1``. There is no password for root. As this is a Debian
instance, you can modify the ``/etc/network/interfaces`` file to set
up the network interface (eth0 is the name of the interface provided
to the instance).

To create a network mirrored instance, change the argument to the *-t*
option from ``plain`` to ``drbd`` and specify the node on which the
mirror should reside with the second value of the *--node* option,
like this (note that the command output includes timestamps which have
been removed for clarity)::

  # gnt-instance add -t drbd -s 1G -n node1:node2 -o debootstrap instance2
  * creating instance disks...
  adding instance instance2.example.com to cluster config
   - INFO: Waiting for instance instance2.example.com to sync disks.
   - INFO: - device disk/0: 35.50% done, 11 estimated seconds remaining
   - INFO: - device disk/0: 100.00% done, 0 estimated seconds remaining
   - INFO: Instance instance2.example.com's disks are in sync.
  creating os for instance instance2.example.com on node node1.example.com
  * running the instance OS create scripts...
  * starting instance...

Managing virtual instances
++++++++++++++++++++++++++

All commands need to be executed on the Ganeti master node.

To access the console of an instance, run::

  gnt-instance console INSTANCENAME

To shut down an instance, run::

  gnt-instance shutdown INSTANCENAME

To start up an instance, run::

  gnt-instance startup INSTANCENAME

To fail over an instance to its secondary node (only possible with
``drbd`` disk templates), run::

  gnt-instance failover INSTANCENAME

For more instance and cluster administration details, see the
*Ganeti administrator's guide*.