<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
]>
<article class="specification">
  <articleinfo>
    <title>Ganeti installation tutorial</title>
  </articleinfo>
  <para>Documents Ganeti version 2.0</para>

  <sect1>
    <title>Introduction</title>

    <para>
      Ganeti is a cluster virtualization management system based on Xen
      or KVM. This document explains how to bootstrap a Ganeti node (Xen
      <literal>dom0</literal>), create a running cluster and install a
      virtual instance (Xen <literal>domU</literal>). You need to repeat
      most of the steps in this document for every node you want to
      install, but we recommend creating some semi-automatic procedure
      if you plan to deploy Ganeti on a medium or large scale.
    </para>

    <para>
      A basic Ganeti terminology glossary is provided in the
      introductory section of the <emphasis>Ganeti administrator's
      guide</emphasis>. Please refer to that document if you are
      uncertain about the terms we are using.
    </para>

    <para>
      Ganeti has been developed for Linux and is
      distribution-agnostic. This documentation will use Debian Lenny as
      an example system, but the examples can easily be translated to
      any other distribution. You are expected to be familiar with your
      distribution, its package management system, and Xen or KVM before
      trying to use Ganeti.
    </para>

    <para>This document is divided into two main sections:

      <itemizedlist>
        <listitem>
          <simpara>Installation of the base system and base
          components</simpara>
        </listitem>
        <listitem>
          <simpara>Configuration of the environment for
          Ganeti</simpara>
        </listitem>
      </itemizedlist>

      Each of these is divided into sub-sections. While a full Ganeti
      system will need all of the steps specified, some are not strictly
      required for every environment. Which ones they are, and why, is
      specified in the corresponding sections.
    </para>

  </sect1>

  <sect1>
    <title>Installing the base system and base components</title>

    <sect2>
      <title>Hardware requirements</title>

      <para>
        Any system supported by your Linux distribution is fine. 64-bit
        systems are better as they can support more memory.
      </para>

      <para>
        Any disk drive recognized by Linux
        (<literal>IDE</literal>/<literal>SCSI</literal>/<literal>SATA</literal>/etc.)
        is supported in Ganeti. Note that no shared storage
        (e.g. <literal>SAN</literal>) is needed to get high-availability
        features. It is highly recommended to use more than one disk
        drive to improve speed, but Ganeti also works with one disk per
        machine.
      </para>
    </sect2>

    <sect2>
      <title>Installing the base system</title>

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

      <para>
        It is advised to start with a clean, minimal install of the
        operating system. The only requirement you need to be aware of
        at this stage is to partition leaving enough space for a big
        (<emphasis role="strong">minimum
        <constant>20GiB</constant></emphasis>) LVM volume group which
        will then host your instance filesystems, if you want to use all
        Ganeti features. The volume group name Ganeti 2.0 uses (by
        default) is <emphasis>xenvg</emphasis>.
      </para>

      <para>
        You can also use file-based storage only, without LVM, but this
        is not detailed in this document.
      </para>

      <para>
        While you can use an existing system, please note that the
        Ganeti installation is intrusive in terms of changes to the
        system configuration, and it's best to use a newly-installed
        system without important data on it.
      </para>

      <para>
        Also, for best results, it's advised that the nodes have as
        similar a hardware and software configuration as possible. This
        will make administration much easier.
      </para>

      <sect3>
        <title>Hostname issues</title>
        <para>
          Note that Ganeti requires the hostnames of the systems
          (i.e. what the <computeroutput>hostname</computeroutput>
          command outputs) to be fully-qualified names, not short
          names. In other words, you should use
          <literal>node1.example.com</literal> as a hostname and not
          just <literal>node1</literal>.
        </para>

        <formalpara>
          <title>Debian</title>
          <para>
            Note that Debian Lenny configures the hostname differently
            than you need it for Ganeti. For example, this is what it
            puts in <filename>/etc/hosts</filename> in certain
            situations:
<screen>
127.0.0.1       localhost
127.0.1.1       node1.example.com node1
</screen>

            but for Ganeti you need to have:
<screen>
127.0.0.1       localhost
192.168.1.1     node1.example.com node1
</screen>
            replacing <literal>192.168.1.1</literal> with your node's
            address. Also, the file <filename>/etc/hostname</filename>
            which configures the hostname of the system should contain
            <literal>node1.example.com</literal> and not just
            <literal>node1</literal> (you need to run the command
            <computeroutput>/etc/init.d/hostname.sh
            start</computeroutput> after changing the file).
          </para>
        </formalpara>
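        <para>
          As a quick sanity check (an illustrative snippet, not part of
          Ganeti itself), you can verify that the reported hostname
          contains a dot, i.e. that it is fully qualified:
<screen>
# Warn if the system hostname is a short name rather than an FQDN.
case "$(hostname)" in
  *.*) echo "hostname is fully qualified" ;;
  *)   echo "short hostname - fix /etc/hostname and /etc/hosts" ;;
esac
</screen>
        </para>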
      </sect3>

    </sect2>

    <sect2>
      <title>Installing Xen</title>

      <para>
        <emphasis role="strong">Mandatory</emphasis> on all nodes.
      </para>

      <para>
        While Ganeti is developed with the ability to modularly run on
        different virtualization environments in mind, the only two
        currently usable on a live system are <ulink
        url="http://xen.xensource.com/">Xen</ulink> and KVM. Supported
        Xen versions are: <simplelist type="inline">
        <member><literal>3.0.3</literal></member>
        <member><literal>3.0.4</literal></member>
        <member><literal>3.1</literal></member></simplelist>.
      </para>

      <para>
        Please follow your distribution's recommended way to install and
        set up Xen, or install Xen from the upstream source, if you
        wish, following their manual. For KVM, make sure you have a
        KVM-enabled kernel and the KVM tools.
      </para>

      <para>
        After installing either hypervisor, you need to reboot into your
        new system. On some distributions this might involve configuring
        GRUB appropriately, whereas others will configure it
        automatically when you install the respective kernels.
      </para>

      <formalpara><title>Debian</title>
        <para>
          Under Debian Lenny or Etch you can install the relevant
          <literal>xen-linux-system</literal> package, which will pull
          in both the hypervisor and the relevant kernel. Also, if you
          are installing a 32-bit Lenny/Etch, you should install the
          <computeroutput>libc6-xen</computeroutput> package (run
          <computeroutput>apt-get install libc6-xen</computeroutput>).
        </para>
      </formalpara>

      <sect3>
        <title>Xen settings</title>

        <para>
          It's recommended that dom0 is restricted to a low amount of
          memory (<constant>512MiB</constant> or
          <constant>1GiB</constant> is reasonable) and that memory
          ballooning is disabled in the file
          <filename>/etc/xen/xend-config.sxp</filename> by setting the
          value <literal>dom0-min-mem</literal> to
          <constant>0</constant>, like this:
          <computeroutput>(dom0-min-mem 0)</computeroutput>
        </para>

        <para>
          For optimum performance when running both CPU- and
          I/O-intensive instances, it's also recommended that the dom0
          is restricted to one CPU only, for example by booting with the
          kernel parameter <literal>nosmp</literal>.
        </para>

        <para>
          It is recommended that you disable Xen's automatic save of
          virtual machines at system shutdown and their subsequent
          restore at reboot. To do so, make sure the variable
          <literal>XENDOMAINS_SAVE</literal> in the file
          <filename>/etc/default/xendomains</filename> is set to an
          empty value.
        </para>
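        <para>
          For example, the relevant lines in
          <filename>/etc/default/xendomains</filename> would look like
          this (illustrative values; the
          <literal>XENDOMAINS_RESTORE</literal> variable may not exist
          in every version of the script):
<screen>
# Do not save running domains on shutdown...
XENDOMAINS_SAVE=""
# ...and do not try to restore any at boot.
XENDOMAINS_RESTORE=false
</screen>
        </para>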

        <formalpara>
          <title>Debian</title>
          <para>
            Besides the ballooning change which you need to set in
            <filename>/etc/xen/xend-config.sxp</filename>, you need to
            set the memory and nosmp parameters in the file
            <filename>/boot/grub/menu.lst</filename>. You need to
            modify the variable <literal>xenhopt</literal> to add
            <userinput>dom0_mem=1024M</userinput> like this:
<screen>
## Xen hypervisor options to use with the default Xen boot option
# xenhopt=dom0_mem=1024M
</screen>
            and the <literal>xenkopt</literal> needs to include the
            <userinput>nosmp</userinput> option like this:
<screen>
## Xen Linux kernel options to use with the default Xen boot option
# xenkopt=nosmp
</screen>

            Any existing parameters can be left in place: it's OK to
            have <computeroutput>xenkopt=console=tty0
            nosmp</computeroutput>, for example. After modifying the
            files, you need to run:
<screen>
/sbin/update-grub
</screen>
          </para>
        </formalpara>
        <para>
          If you want to run HVM instances too with Ganeti and want VNC
          access to the console of your instances, set the following two
          entries in <filename>/etc/xen/xend-config.sxp</filename>:
<screen>
(vnc-listen '0.0.0.0')
(vncpasswd '')
</screen>
          You need to restart the Xen daemon for these settings to take
          effect:
<screen>
/etc/init.d/xend restart
</screen>
        </para>

      </sect3>

      <sect3>
        <title>Selecting the instance kernel</title>

        <para>
          After you have installed Xen, you need to tell Ganeti exactly
          what kernel to use for the instances it will create. This is
          done by creating a <emphasis>symlink</emphasis> from your
          actual kernel to
          <filename>/boot/vmlinuz-2.6-xenU</filename>, and one from
          your initrd to
          <filename>/boot/initrd-2.6-xenU</filename>. Note that if you
          don't use an initrd for the <literal>domU</literal> kernel,
          you don't need to create the initrd symlink.
        </para>

        <formalpara>
          <title>Debian</title>
          <para>
            After installation of the
            <literal>xen-linux-system</literal> package, you need to
            run (replace the exact version number with the one you
            have):
<screen>
cd /boot
ln -s vmlinuz-2.6.18-5-xen-686 vmlinuz-2.6-xenU
ln -s initrd.img-2.6.18-5-xen-686 initrd-2.6-xenU
</screen>
          </para>
        </formalpara>
      </sect3>

    </sect2>

    <sect2>
      <title>Installing DRBD</title>

      <para>
        Recommended on all nodes: <ulink
        url="http://www.drbd.org/">DRBD</ulink> is required if you want
        to use the high availability (HA) features of Ganeti, but
        optional if you don't require HA or only run Ganeti on
        single-node clusters. You can upgrade a non-HA cluster to an HA
        one later, but you might need to export and re-import all your
        instances to take advantage of the new features.
      </para>

      <para>
        Supported DRBD versions: <literal>8.0.x</literal>. It's
        recommended to have at least version
        <literal>8.0.12</literal>.
      </para>

      <para>
        Now the bad news: unless your distribution already provides it,
        installing DRBD might involve recompiling your kernel or at
        least fiddling with it. Hopefully at least the Xen-ified kernel
        source to start from will be provided.
      </para>

      <para>
        The good news is that you don't need to configure DRBD at all.
        Ganeti will do it for you for every instance you set up. If you
        have the DRBD utils installed and the module in your kernel,
        you're fine. Please check that your system is configured to
        load the module at every boot, and that it passes the following
        option to the module:
        <computeroutput>minor_count=128</computeroutput>. This will
        allow you to use up to 128 DRBD minors per node (for most
        clusters <constant>128</constant> should be enough).
      </para>

      <formalpara><title>Debian</title>
        <para>
          You can just install (build) the DRBD 8.0.x module with the
          following commands (make sure you are running the Xen
          kernel):
        </para>
      </formalpara>

<screen>
apt-get install drbd8-source drbd8-utils
m-a update
m-a a-i drbd8
echo drbd minor_count=128 >> /etc/modules
depmod -a
modprobe drbd minor_count=128
</screen>

      <para>
        It is also recommended that you comment out the default
        resources in the <filename>/etc/drbd.conf</filename> file, so
        that the init script doesn't try to configure any DRBD
        devices. You can do this by prefixing all
        <literal>resource</literal> lines in the file with the keyword
        <literal>skip</literal>, like this:
      </para>

<screen>
skip resource r0 {
  ...
}

skip resource "r1" {
  ...
}
</screen>
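      <para>
        If the file contains many resources, a one-liner like the
        following (an illustrative sketch; back up the file first) will
        add the prefix for you:
<screen>
# Prefix every top-level "resource" line with "skip".
sed -i 's/^resource /skip resource /' /etc/drbd.conf
</screen>
      </para>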

    </sect2>

    <sect2>
      <title>Other required software</title>

      <para>Besides Xen and DRBD, you will need to install the
      following (on all nodes):</para>

      <itemizedlist>
        <listitem>
          <simpara><ulink url="http://sourceware.org/lvm2/">LVM
          version 2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssl.org/">OpenSSL</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.openssh.com/portable.html">OpenSSH</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://bridge.sourceforge.net/">Bridge
          utilities</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://developer.osdl.org/dev/iproute2">iproute2</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz">arping</ulink>
          (part of the iputils package)</simpara>
        </listitem>
        <listitem>
          <simpara><ulink url="http://www.python.org">Python
          2.4</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://pyopenssl.sourceforge.net/">Python OpenSSL
          bindings</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://www.undefined.org/python/#simplejson">simplejson
          Python module</ulink></simpara>
        </listitem>
        <listitem>
          <simpara><ulink
          url="http://pyparsing.wikispaces.com/">pyparsing Python
          module</ulink></simpara>
        </listitem>
      </itemizedlist>

      <para>
        These programs are supplied as part of most Linux
        distributions, so usually they can be installed via apt or
        similar methods. Also, many of them will already be installed
        on a standard machine.
      </para>

      <formalpara><title>Debian</title>
        <para>You can use this command line to install all of
        them:</para>
      </formalpara>
<screen>
# apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
  python python-pyopenssl openssl python-pyparsing python-simplejson
</screen>

    </sect2>

  </sect1>

  <sect1>
    <title>Setting up the environment for Ganeti</title>

    <sect2>
      <title>Configuring the network</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all
      nodes.</para>

      <para>
        Ganeti relies on Xen running in "bridge mode", which means the
        instances' network interfaces will be attached to a software
        bridge running in dom0. Xen by default creates such a bridge at
        startup, but your distribution might have a different way to do
        things.
      </para>

      <para>
        Beware that the default name Ganeti uses is
        <hardware>xen-br0</hardware> (which was used in Xen 2.0) while
        Xen 3.0 uses <hardware>xenbr0</hardware> by default. The
        default bridge your Ganeti cluster will use for new instances
        can be specified at cluster initialization time.
      </para>

      <formalpara><title>Debian</title>
        <para>
          The recommended Debian way to configure the Xen bridge is to
          edit your <filename>/etc/network/interfaces</filename> file
          and substitute your normal Ethernet stanza with the following
          snippet:

<screen>
auto xen-br0
iface xen-br0 inet static
   address <replaceable>YOUR_IP_ADDRESS</replaceable>
   netmask <replaceable>YOUR_NETMASK</replaceable>
   network <replaceable>YOUR_NETWORK</replaceable>
   broadcast <replaceable>YOUR_BROADCAST_ADDRESS</replaceable>
   gateway <replaceable>YOUR_GATEWAY</replaceable>
   bridge_ports eth0
   bridge_stp off
   bridge_fd 0
</screen>
        </para>
      </formalpara>

      <para>
        The following commands need to be executed on the local
        console, since taking down <literal>eth0</literal> will
        interrupt any remote connection:
      </para>
<screen>
ifdown eth0
ifup xen-br0
</screen>

      <para>
        To check that the bridge is set up, use <command>ip</command>
        and <command>brctl show</command>:
<screen>
# ip a show xen-br0
9: xen-br0: &lt;BROADCAST,MULTICAST,UP,10000&gt; mtu 1500 qdisc noqueue
    link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
    inet6 fe80::220:fcff:fe1e:d55d/64 scope link
       valid_lft forever preferred_lft forever

# brctl show xen-br0
bridge name     bridge id               STP enabled     interfaces
xen-br0         8000.0020fc1ed55d       no              eth0
</screen>
      </para>

    </sect2>

    <sect2>
      <title>Configuring LVM</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all
      nodes.</para>

      <note>
        <simpara>The volume group is required to be at least
        <constant>20GiB</constant>.</simpara>
      </note>
      <para>
        If you haven't configured your LVM volume group at install
        time, you need to do it before trying to initialize the Ganeti
        cluster. This is done by formatting the devices/partitions you
        want to use for it and then adding them to the relevant volume
        group:

<screen>
pvcreate /dev/sda3
vgcreate xenvg /dev/sda3
</screen>
        or
<screen>
pvcreate /dev/sdb1
pvcreate /dev/sdc1
vgcreate xenvg /dev/sdb1 /dev/sdc1
</screen>
      </para>

      <para>
        If you want to add a device later you can do so with the
        <citerefentry><refentrytitle>vgextend</refentrytitle>
        <manvolnum>8</manvolnum></citerefentry> command:
      </para>

<screen>
pvcreate /dev/sdd1
vgextend xenvg /dev/sdd1
</screen>

      <formalpara>
        <title>Optional</title>
        <para>
          It is recommended to configure LVM not to scan the DRBD
          devices for physical volumes. This can be accomplished by
          editing <filename>/etc/lvm/lvm.conf</filename> and adding the
          <literal>/dev/drbd[0-9]+</literal> regular expression to the
          <literal>filter</literal> variable, like this:
<screen>
filter = [ "r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]
</screen>
        </para>
      </formalpara>

    </sect2>

    <sect2>
      <title>Installing Ganeti</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all
      nodes.</para>

      <para>
        It's now time to install the Ganeti software itself. Download
        the source from <ulink
        url="http://code.google.com/p/ganeti/"></ulink>.
      </para>

<screen>
tar xvzf ganeti-@GANETI_VERSION@.tar.gz
cd ganeti-@GANETI_VERSION@
./configure --localstatedir=/var --sysconfdir=/etc
make
make install
mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export
</screen>

      <para>
        You also need to copy the file
        <filename>doc/examples/ganeti.initd</filename> from the source
        archive to <filename>/etc/init.d/ganeti</filename> and register
        it with your distribution's startup scripts, for example in
        Debian:
      </para>
      <screen>update-rc.d ganeti defaults 20 80</screen>

      <para>
        In order to automatically restart failed instances, you need to
        set up a cron job to run the
        <computeroutput>ganeti-watcher</computeroutput> program. A
        sample cron file is provided in the source at
        <filename>doc/examples/ganeti.cron</filename>; you can copy
        that (adjusting the path if necessary) to
        <filename>/etc/cron.d/ganeti</filename>.
      </para>
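      <para>
        The resulting cron file would contain a single line along these
        lines (illustrative; check the shipped
        <filename>ganeti.cron</filename> for the exact path and
        interval):
<screen>
# Restart failed instances every five minutes.
*/5 * * * * root /usr/local/sbin/ganeti-watcher
</screen>
      </para>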

    </sect2>

    <sect2>
      <title>Installing the Operating System support packages</title>

      <para><emphasis role="strong">Mandatory</emphasis> on all
      nodes.</para>

      <para>
        To be able to install instances you need to have an Operating
        System installation script. An example OS that works under
        Debian and can install Debian and Ubuntu instance OSes is
        provided on the project web site. Download it from <ulink
        url="http://code.google.com/p/ganeti/"></ulink> and follow the
        instructions in the <filename>README</filename> file. Here is
        the installation procedure (replace <constant>0.7</constant>
        with the latest version that is compatible with your Ganeti
        version):
      </para>

<screen>
cd /usr/local/src/
wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-0.7.tar.gz
tar xzf ganeti-instance-debootstrap-0.7.tar.gz
cd ganeti-instance-debootstrap-0.7
./configure
make
make install
</screen>

      <para>
        In order to use this OS definition, you need to have internet
        access from your nodes and have the <citerefentry>
        <refentrytitle>debootstrap</refentrytitle>
        <manvolnum>8</manvolnum></citerefentry>, <citerefentry>
        <refentrytitle>dump</refentrytitle><manvolnum>8</manvolnum>
        </citerefentry> and <citerefentry>
        <refentrytitle>restore</refentrytitle>
        <manvolnum>8</manvolnum> </citerefentry> commands installed on
        all nodes. Also, if the OS is configured to partition the
        instance's disk in
        <filename>/etc/default/ganeti-instance-debootstrap</filename>,
        you will need <command>kpartx</command> installed.
      </para>
      <formalpara>
        <title>Debian</title>
        <para>
          Use this command on all nodes to install the required
          packages:

          <screen>apt-get install debootstrap dump kpartx</screen>
        </para>
      </formalpara>

      <para>
        Alternatively, you can create your own OS definitions. See the
        manpage
        <citerefentry>
        <refentrytitle>ganeti-os-interface</refentrytitle>
        <manvolnum>8</manvolnum>
        </citerefentry>.
      </para>

    </sect2>

    <sect2>
      <title>Initializing the cluster</title>

      <para><emphasis role="strong">Mandatory:</emphasis> only on one
      node per cluster.</para>

      <para>
        The last step is to initialize the cluster. After you've
        repeated the above process on all of your nodes, choose one as
        the master, and execute:
      </para>

<screen>
gnt-cluster init <replaceable>CLUSTERNAME</replaceable>
</screen>

      <para>
        The <replaceable>CLUSTERNAME</replaceable> is a hostname, which
        must be resolvable (e.g. it must exist in DNS or in
        <filename>/etc/hosts</filename>) by all the nodes in the
        cluster. You must choose a name different from any of the
        nodes' names for a multi-node cluster. In general the best
        choice is to have a unique name for a cluster, even if it
        consists of only one machine, as you will be able to expand it
        later without any problems. Please note that the hostname used
        for this must resolve to an IP address reserved <emphasis
        role="strong">exclusively</emphasis> for this purpose.
      </para>
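      <para>
        For example, with a cluster named
        <literal>cluster1.example.com</literal> you could add a line
        like the following (an illustrative address) to
        <filename>/etc/hosts</filename> on every node, in addition to
        or instead of your DNS setup:
<screen>
192.168.1.10    cluster1.example.com cluster1
</screen>
      </para>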

      <para>
        If the bridge name you are using is not
        <literal>xen-br0</literal>, use the <option>-b
        <replaceable>BRIDGENAME</replaceable></option> option to
        specify the bridge name. In this case, you should also use the
        <option>--master-netdev
        <replaceable>BRIDGENAME</replaceable></option> option with the
        same <replaceable>BRIDGENAME</replaceable> argument.
      </para>

      <para>
        You can use a different name than <literal>xenvg</literal> for
        the volume group (but note that the name must be identical on
        all nodes). In this case you need to specify it by passing the
        <option>-g <replaceable>VGNAME</replaceable></option> option to
        <computeroutput>gnt-cluster init</computeroutput>.
      </para>
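      <para>
        Putting these options together, an initialization on a system
        using the <literal>xenbr0</literal> bridge and a volume group
        named <literal>ganetivg</literal> might look like this
        (illustrative names):
<screen>
gnt-cluster init -b xenbr0 --master-netdev xenbr0 -g ganetivg \
  <replaceable>CLUSTERNAME</replaceable>
</screen>
      </para>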

      <para>
        To set up the cluster as an HVM cluster, use the
        <option>--enabled-hypervisors=xen-hvm</option> option to enable
        the HVM hypervisor (you can also add
        <userinput>,xen-pvm</userinput> to enable the PVM one too). You
        will also need to create the VNC cluster password file
        <filename>/etc/ganeti/vnc-cluster-password</filename>, which
        contains one line with the default VNC password for the
        cluster.
      </para>

      <para>
        To set up the cluster for KVM-only usage (KVM and Xen cannot be
        mixed), pass <option>--enabled-hypervisors=kvm</option> to the
        init command.
      </para>

      <para>
        You can also invoke the command with the
        <option>--help</option> option in order to see all the
        possibilities.
      </para>

    </sect2>

    <sect2>
      <title>Joining the nodes to the cluster</title>

      <para>
        <emphasis role="strong">Mandatory:</emphasis> for all the other
        nodes.
      </para>

      <para>
        After you have initialized your cluster you need to join the
        other nodes to it. You can do so by executing the following
        command on the master node:
      </para>
<screen>
gnt-node add <replaceable>NODENAME</replaceable>
</screen>
    </sect2>

    <sect2>
      <title>Separate replication network</title>

      <para><emphasis role="strong">Optional</emphasis></para>
      <para>
        Ganeti uses DRBD to mirror the disks of the virtual instances
        between nodes. To use a dedicated network interface for this
        (in order to improve performance or to enhance security) you
        need to configure an additional interface for each node. Use
        the <option>-s</option> option with
        <computeroutput>gnt-cluster init</computeroutput> and
        <computeroutput>gnt-node add</computeroutput> to specify the IP
        address of this secondary interface to use for each node. Note
        that if you specified this option at cluster setup time, you
        must afterwards use it for every node add operation.
      </para>
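      <para>
        For example, with a dedicated replication network on
        <literal>192.168.2.0/24</literal> (illustrative addresses), you
        would initialize the cluster and then add a node like this:
<screen>
gnt-cluster init -s 192.168.2.1 <replaceable>CLUSTERNAME</replaceable>
gnt-node add -s 192.168.2.2 node2.example.com
</screen>
      </para>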
    </sect2>

    <sect2>
      <title>Testing the setup</title>

      <para>
        Execute the <computeroutput>gnt-node list</computeroutput>
        command to see all nodes in the cluster:
<screen>
# gnt-node list
Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
node1.example.com 197404 197404   2047  1896   125     0     0
</screen>
      </para>
    </sect2>

  </sect1>

  <sect1>
    <title>Setting up and managing virtual instances</title>
    <sect2>
      <title>Setting up virtual instances</title>
      <para>
        This step shows how to set up a virtual instance with either
        non-mirrored disks (<computeroutput>plain</computeroutput>) or
        with network-mirrored disks
        (<computeroutput>drbd</computeroutput>). All commands need to
        be executed on the Ganeti master node (the one on which
        <computeroutput>gnt-cluster init</computeroutput> was
        run). Verify that the OS scripts are present on all cluster
        nodes with <computeroutput>gnt-os list</computeroutput>.
      </para>
      <para>
        To create a virtual instance, you need a hostname which is
        resolvable (via DNS or <filename>/etc/hosts</filename> on all
        nodes). The following command will create a non-mirrored
        instance for you:
      </para>
<screen>
gnt-instance add --node=node1 -o debootstrap -t plain inst1.example.com
* creating instance disks...
adding instance inst1.example.com to cluster config
Waiting for instance inst1.example.com to sync disks.
Instance inst1.example.com's disks are in sync.
creating os for instance inst1.example.com on node node1.example.com
* running the instance OS create scripts...
</screen>

      <para>
        The above instance will have no network interface enabled. You
        can access it over the virtual console with
        <computeroutput>gnt-instance console
        <literal>inst1</literal></computeroutput>. There is no password
        for root. As this is a Debian instance, you can modify the
        <filename>/etc/network/interfaces</filename> file to set up the
        network interface (<literal>eth0</literal> is the name of the
        interface provided to the instance).
      </para>
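      <para>
        For example, a static configuration inside the instance could
        look like this (an illustrative stanza; substitute the
        instance's own addresses):
<screen>
auto eth0
iface eth0 inet static
   address <replaceable>INSTANCE_IP</replaceable>
   netmask <replaceable>INSTANCE_NETMASK</replaceable>
   gateway <replaceable>INSTANCE_GATEWAY</replaceable>
</screen>
      </para>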

      <para>
        To create a network-mirrored instance, change the argument to
        the <option>-t</option> option from <literal>plain</literal> to
        <literal>drbd</literal> and specify the node on which the
        mirror should reside with the second value of the
        <option>--node</option> option, like this:
      </para>

<screen>
# gnt-instance add -t drbd -n node1:node2 -o debootstrap instance2
* creating instance disks...
adding instance instance2 to cluster config
Waiting for instance instance2 to sync disks.
 - device sdb:  3.50% done, 304 estimated seconds remaining
 - device sdb: 21.70% done, 270 estimated seconds remaining
 - device sdb: 39.80% done, 247 estimated seconds remaining
 - device sdb: 58.10% done, 121 estimated seconds remaining
 - device sdb: 76.30% done, 72 estimated seconds remaining
 - device sdb: 94.80% done, 18 estimated seconds remaining
Instance instance2's disks are in sync.
creating os for instance instance2 on node node1.example.com
* running the instance OS create scripts...
* starting instance...
</screen>

    </sect2>

    <sect2>
      <title>Managing virtual instances</title>
      <para>
        All commands need to be executed on the Ganeti master node.
      </para>

      <para>
        To access the console of an instance, use
        <computeroutput>gnt-instance console
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        To shut down an instance, use <computeroutput>gnt-instance
        shutdown
        <replaceable>INSTANCENAME</replaceable></computeroutput>. To
        start it up again, use <computeroutput>gnt-instance startup
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        To fail over an instance to its secondary node (only possible
        with the <literal>drbd</literal> disk template), use
        <computeroutput>gnt-instance failover
        <replaceable>INSTANCENAME</replaceable></computeroutput>.
      </para>

      <para>
        For more instance and cluster administration details, see the
        <emphasis>Ganeti administrator's guide</emphasis>.
      </para>

    </sect2>

  </sect1>

</article>