<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
]>
<article class="specification">
<articleinfo>
<title>Ganeti installation tutorial</title>
</articleinfo>
<para>Documents Ganeti version 1.2</para>

<sect1>
<title>Introduction</title>

<para>
Ganeti is a cluster virtualization management system based on
Xen. This document explains how to bootstrap a Ganeti node (Xen
<literal>dom0</literal>), create a running cluster, and install a
virtual instance (Xen <literal>domU</literal>). You need to
repeat most of the steps in this document for every node you
want to install, but of course we recommend creating some
semi-automatic procedure if you plan to deploy Ganeti on a
medium or large scale.
</para>

<para>
A basic Ganeti terminology glossary is provided in the
introductory section of the <emphasis>Ganeti administrator's
guide</emphasis>. Please refer to that document if you are
uncertain about the terms we are using.
</para>

<para>
Ganeti has been developed for Linux and is
distribution-agnostic. This documentation will use Debian Etch
as an example system, but the examples can easily be translated
to any other distribution. You are expected to be familiar with
your distribution, its package management system, and Xen before
trying to use Ganeti.
</para>

<para>This document is divided into two main sections:

<itemizedlist>
<listitem>
<simpara>Installation of the base system and base
components</simpara>
</listitem>
<listitem>
<simpara>Configuration of the environment for
Ganeti</simpara>
</listitem>
</itemizedlist>

Each of these is divided into sub-sections. While a full Ganeti
system will need all of the steps specified, some are not strictly
required for every environment. Which ones they are, and why, is
specified in the corresponding sections.
</para>

</sect1>

<sect1>
<title>Installing the base system and base components</title>

<sect2>
<title>Hardware requirements</title>

<para>
Any system supported by your Linux distribution is fine. 64-bit
systems are better, as they can support more memory.
</para>

<para>
Any disk drive recognized by Linux
(<literal>IDE</literal>/<literal>SCSI</literal>/<literal>SATA</literal>/etc.)
is supported by Ganeti. Note that no shared storage
(e.g. <literal>SAN</literal>) is needed to get the high-availability
features. It is highly recommended to use more than one disk drive
to improve speed, but Ganeti also works with one disk per machine.
</para>

</sect2>

<sect2>
<title>Installing the base system</title>

<para>
<emphasis role="strong">Mandatory</emphasis> on all nodes.
</para>

<para>
It is advised to start with a clean, minimal install of the
operating system. The only requirement you need to be aware of
at this stage is to partition leaving enough space for a big
(<emphasis role="strong">minimum
<constant>20GiB</constant></emphasis>) LVM volume group which
will then host your instance filesystems. The volume group
name Ganeti 1.2 uses (by default) is
<emphasis>xenvg</emphasis>.
</para>

<para>
While you can use an existing system, please note that the
Ganeti installation is intrusive in terms of changes to the
system configuration, and it's best to use a newly-installed
system without important data on it.
</para>

<para>
Also, for best results, it's advised that the nodes have
hardware and software configurations that are as similar as
possible. This will make administration much easier.
</para>

<sect3>
<title>Hostname issues</title>
<para>
Note that Ganeti requires the hostnames of the systems
(i.e. what the <computeroutput>hostname</computeroutput>
command outputs) to be fully-qualified names, not short
names. In other words, you should use
<literal>node1.example.com</literal> as a hostname and not
just <literal>node1</literal>.
</para>

<formalpara>
<title>Debian</title>
<para>
Note that Debian Etch configures the hostname differently
than you need it for Ganeti. For example, this is what
Etch puts in <filename>/etc/hosts</filename> in certain
situations:
<screen>
127.0.0.1       localhost
127.0.1.1       node1.example.com node1
</screen>

but for Ganeti you need to have:
<screen>
127.0.0.1       localhost
192.168.1.1     node1.example.com node1
</screen>
replacing <literal>192.168.1.1</literal> with your node's
address. Also, the file <filename>/etc/hostname</filename>,
which configures the hostname of the system, should contain
<literal>node1.example.com</literal> and not just
<literal>node1</literal> (you need to run the command
<computeroutput>/etc/init.d/hostname.sh
start</computeroutput> after changing the file).
</para>
</formalpara>
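<para>
As a sketch (assuming your node's address is
<literal>192.168.1.1</literal> and its name is
<literal>node1.example.com</literal>; adapt both before running
anything), the whole change can be made from a shell like this:
</para>

<screen>
sed -i 's/^127\.0\.1\.1/192.168.1.1/' /etc/hosts
echo node1.example.com > /etc/hostname
/etc/init.d/hostname.sh start
hostname --fqdn
</screen>

<para>
The last command should print the fully-qualified name; if it
does not, re-check both files.
</para>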
</sect3>

</sect2>

<sect2>
<title>Installing Xen</title>

<para>
<emphasis role="strong">Mandatory</emphasis> on all nodes.
</para>

<para>
While Ganeti is developed with the ability to modularly run on
different virtualization environments in mind, the only one
currently usable on a live system is <ulink
url="http://xen.xensource.com/">Xen</ulink>. Supported
versions are: <simplelist type="inline">
<member><literal>3.0.3</literal></member>
<member><literal>3.0.4</literal></member>
<member><literal>3.1</literal></member> </simplelist>.
</para>

<para>
Please follow your distribution's recommended way to install
and set up Xen, or install Xen from the upstream source, if
you wish, following their manual.
</para>

<para>
After installing Xen you need to reboot into your Xen-ified
dom0 system. On some distributions this might involve
configuring GRUB appropriately, whereas others will configure
it automatically when you install Xen from a package.
</para>

<formalpara><title>Debian</title>
<para>
Under Debian Etch or Sarge+backports you can install the
relevant <literal>xen-linux-system</literal> package, which
will pull in both the hypervisor and the relevant
kernel. Also, if you are installing a 32-bit Etch, you should
install the <literal>libc6-xen</literal> package
(run <computeroutput>apt-get install
libc6-xen</computeroutput>).
</para>
</formalpara>

<sect3>
<title>Xen settings</title>

<para>
It's recommended that dom0 is restricted to a low amount of
memory (<constant>512MiB</constant> is reasonable) and that
memory ballooning is disabled in the file
<filename>/etc/xen/xend-config.sxp</filename> by setting the
value <literal>dom0-min-mem</literal> to
<constant>0</constant>, like this:
<computeroutput>(dom0-min-mem 0)</computeroutput>
</para>

<para>
For optimum performance when running both CPU- and
I/O-intensive instances, it's also recommended that the dom0 is
restricted to one CPU only, for example by booting with the
kernel parameter <literal>nosmp</literal>.
</para>

<formalpara>
<title>Debian</title>
<para>
Besides the ballooning change which you need to set in
<filename>/etc/xen/xend-config.sxp</filename>, you need to
set the memory and nosmp parameters in the file
<filename>/boot/grub/menu.lst</filename>. You need to
modify the variable <literal>xenhopt</literal> to add
<userinput>dom0_mem=512M</userinput> like this:
<screen>
## Xen hypervisor options to use with the default Xen boot option
# xenhopt=dom0_mem=512M
</screen>
and the <literal>xenkopt</literal> variable needs to include
the <userinput>nosmp</userinput> option like this:
<screen>
## Xen Linux kernel options to use with the default Xen boot option
# xenkopt=nosmp
</screen>

Any existing parameters can be left in place: it's OK to
have <computeroutput>xenkopt=console=tty0
nosmp</computeroutput>, for example. After modifying the
files, you need to run:
<screen>
/sbin/update-grub
</screen>
</para>
</formalpara>
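<para>
If you prefer to script the <filename>menu.lst</filename>
change, something along these lines should work (a sketch,
assuming the <literal>xenhopt</literal> and
<literal>xenkopt</literal> lines are still in their default
commented form; double-check the file afterwards):
</para>

<screen>
sed -i -e 's/^# xenhopt=.*/# xenhopt=dom0_mem=512M/' \
       -e 's/^# xenkopt=\(.*\)/# xenkopt=\1 nosmp/' /boot/grub/menu.lst
/sbin/update-grub
</screen>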

</sect3>

<sect3>
<title>Selecting the instance kernel</title>

<para>
After you have installed Xen, you need to tell Ganeti
exactly what kernel to use for the instances it will
create. This is done by creating a
<emphasis>symlink</emphasis> from your actual kernel to
<filename>/boot/vmlinuz-2.6-xenU</filename>, and one from
your initrd to
<filename>/boot/initrd-2.6-xenU</filename>. Note that if you
don't use an initrd for the <literal>domU</literal> kernel,
you don't need to create the initrd symlink.
</para>

<formalpara>
<title>Debian</title>
<para>
After installation of the
<literal>xen-linux-system</literal> package, you need to
run (replace the exact version number with the one you
have):
<screen>
cd /boot
ln -s vmlinuz-2.6.18-5-xen-686 vmlinuz-2.6-xenU
ln -s initrd.img-2.6.18-5-xen-686 initrd-2.6-xenU
</screen>
</para>
</formalpara>
</sect3>

</sect2>

<sect2>
<title>Installing DRBD</title>

<para>
Recommended on all nodes: <ulink
url="http://www.drbd.org/">DRBD</ulink> is required if you
want to use the high availability (HA) features of Ganeti, but
optional if you don't require HA or only run Ganeti on
single-node clusters. You can upgrade a non-HA cluster to an
HA one later, but you might need to export and re-import all
your instances to take advantage of the new features.
</para>

<para>
Supported DRBD version: the <literal>0.7</literal>
series. It's recommended to have at least version
<literal>0.7.24</literal> if you use <command>udev</command>,
since older versions have a bug related to device discovery
which can be triggered in cases of hard drive failure.
</para>

<para>
Now the bad news: unless your distribution already provides
it, installing DRBD might involve recompiling your kernel or
at least fiddling with it. Hopefully your distribution will at
least provide the Xen-ified kernel source to start from.
</para>

<para>
The good news is that you don't need to configure DRBD at all.
Ganeti will do it for you for every instance you set up. If
you have the DRBD utils installed and the module in your
kernel, you're fine. Please check that your system is
configured to load the module at every boot, and that it
passes the following option to the module:
<computeroutput>minor_count=64</computeroutput> (this will
allow you to use up to 32 instances per node).
</para>

<formalpara><title>Debian</title>
<para>
You can just install (build) the DRBD 0.7 module with the
following commands (make sure you are running the Xen
kernel):
</para>
</formalpara>

<screen>
apt-get install drbd0.7-module-source drbd0.7-utils
m-a update
m-a a-i drbd0.7
echo drbd minor_count=64 >> /etc/modules
modprobe drbd minor_count=64
</screen>

<para>
It is also recommended that you comment out the default
resources in the <filename>/etc/drbd.conf</filename> file, so
that the init script doesn't try to configure any DRBD
devices. You can do this by prefixing all
<literal>resource</literal> lines in the file with the keyword
<literal>skip</literal>, like this:
</para>

<screen>
skip resource r0 {
...
}

skip resource "r1" {
...
}
</screen>

</sect2>

<sect2>
<title>Other required software</title>

<para>Besides Xen and DRBD, you will need to install the
following (on all nodes):</para>

<itemizedlist>
<listitem>
<simpara><ulink url="http://sourceware.org/lvm2/">LVM
version 2</ulink></simpara>
</listitem>
<listitem>
<simpara><ulink
url="http://www.openssl.org/">OpenSSL</ulink></simpara>
</listitem>
<listitem>
<simpara><ulink
url="http://www.openssh.com/portable.html">OpenSSH</ulink></simpara>
</listitem>
<listitem>
<simpara><ulink url="http://bridge.sourceforge.net/">Bridge
utilities</ulink></simpara>
</listitem>
<listitem>
<simpara><ulink
url="http://developer.osdl.org/dev/iproute2">iproute2</ulink></simpara>
</listitem>
<listitem>
<simpara><ulink
url="ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz">arping</ulink>
(part of the iputils package)</simpara>
</listitem>
<listitem>
<simpara><ulink
url="http://www.kernel.org/pub/linux/utils/raid/mdadm/">mdadm</ulink>
(Linux software RAID tools)</simpara>
</listitem>
<listitem>
<simpara><ulink url="http://www.python.org">Python 2.4</ulink></simpara>
</listitem>
<listitem>
<simpara><ulink url="http://twistedmatrix.com/">Python
Twisted library</ulink> - the core library is
enough</simpara>
</listitem>
<listitem>
<simpara><ulink
url="http://pyopenssl.sourceforge.net/">Python OpenSSL
bindings</ulink></simpara>
</listitem>
<listitem>
<simpara><ulink
url="http://www.undefined.org/python/#simplejson">simplejson Python
module</ulink></simpara>
</listitem>
<listitem>
<simpara><ulink
url="http://pyparsing.wikispaces.com/">pyparsing Python
module</ulink></simpara>
</listitem>
</itemizedlist>

<para>
These programs are supplied as part of most Linux
distributions, so usually they can be installed via apt or
similar methods. Also, many of them will already be installed
on a standard machine.
</para>

<formalpara><title>Debian</title>

<para>You can use this command line to install all of them:</para>

</formalpara>
<screen>
# apt-get install lvm2 ssh bridge-utils iproute iputils-arping \
  python2.4 python-twisted-core python-pyopenssl openssl \
  python-simplejson python-pyparsing mdadm
</screen>

</sect2>

</sect1>


<sect1>
<title>Setting up the environment for Ganeti</title>

<sect2>
<title>Configuring the network</title>

<para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

<para>
Ganeti relies on Xen running in "bridge mode", which means the
instances' network interfaces will be attached to a software bridge
running in dom0. Xen by default creates such a bridge at startup, but
your distribution might have a different way to do things.
</para>

<para>
Beware that the default name Ganeti uses is
<hardware>xen-br0</hardware> (which was used in Xen 2.0)
while Xen 3.0 uses <hardware>xenbr0</hardware> by
default. The default bridge your Ganeti cluster will use for new
instances can be specified at cluster initialization time.
</para>

<formalpara><title>Debian</title>
<para>
The recommended Debian way to configure the Xen bridge is to
edit your <filename>/etc/network/interfaces</filename> file
and substitute your normal Ethernet stanza with the
following snippet:

<screen>
auto xen-br0
iface xen-br0 inet static
    address <replaceable>YOUR_IP_ADDRESS</replaceable>
    netmask <replaceable>YOUR_NETMASK</replaceable>
    network <replaceable>YOUR_NETWORK</replaceable>
    broadcast <replaceable>YOUR_BROADCAST_ADDRESS</replaceable>
    gateway <replaceable>YOUR_GATEWAY</replaceable>
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
</screen>
</para>
</formalpara>

<para>
The following commands need to be executed on the local console:
</para>
<screen>
ifdown eth0
ifup xen-br0
</screen>

<para>
To check if the bridge is set up, use <command>ip</command>
and <command>brctl show</command>:
</para>

<screen>
# ip a show xen-br0
9: xen-br0: &lt;BROADCAST,MULTICAST,UP,10000&gt; mtu 1500 qdisc noqueue
    link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0
    inet6 fe80::220:fcff:fe1e:d55d/64 scope link
       valid_lft forever preferred_lft forever

# brctl show xen-br0
bridge name     bridge id               STP enabled     interfaces
xen-br0         8000.0020fc1ed55d       no              eth0
</screen>

</sect2>

<sect2>
<title>Configuring LVM</title>

<para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

<note>
<simpara>The volume group is required to be at least
<constant>20GiB</constant>.</simpara>
</note>
<para>
If you haven't configured your LVM volume group at install
time, you need to do it before trying to initialize the Ganeti
cluster. This is done by formatting the devices/partitions you
want to use for it and then adding them to the relevant volume
group:

<screen>
pvcreate /dev/sda3
vgcreate xenvg /dev/sda3
</screen>
or
<screen>
pvcreate /dev/sdb1
pvcreate /dev/sdc1
vgcreate xenvg /dev/sdb1 /dev/sdc1
</screen>
</para>

<para>
If you want to add a device later, you can do so with the
<citerefentry><refentrytitle>vgextend</refentrytitle>
<manvolnum>8</manvolnum></citerefentry> command:
</para>

<screen>
pvcreate /dev/sdd1
vgextend xenvg /dev/sdd1
</screen>

<formalpara>
<title>Optional</title>
<para>
It is recommended to configure LVM not to scan the DRBD
devices for physical volumes. This can be accomplished by
editing <filename>/etc/lvm/lvm.conf</filename> and adding
the <literal>/dev/drbd[0-9]+</literal> regular expression to
the <literal>filter</literal> variable, like this:
<screen>
filter = [ "r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ]
</screen>
</para>
</formalpara>

</sect2>

<sect2>
<title>Installing Ganeti</title>

<para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

<para>
It's now time to install the Ganeti software itself. Download
the source from <ulink
url="http://code.google.com/p/ganeti/"></ulink>.
</para>

<screen>
tar xvzf ganeti-1.2b1.tar.gz
cd ganeti-1.2b1
./configure --localstatedir=/var --sysconfdir=/etc
make
make install
mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export
</screen>

<para>
You also need to copy the file
<filename>doc/examples/ganeti.initd</filename>
from the source archive to
<filename>/etc/init.d/ganeti</filename> and register it with
your distribution's startup scripts, for example in Debian:
</para>
<screen>update-rc.d ganeti defaults 20 80</screen>
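<para>
The copy step itself can be done like this (a sketch, assuming
you are still in the unpacked <filename>ganeti-1.2b1</filename>
source directory; the <command>chmod</command> is an extra
precaution in case the archive did not preserve the execute
bit):
</para>

<screen>
cp doc/examples/ganeti.initd /etc/init.d/ganeti
chmod +x /etc/init.d/ganeti
</screen>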

<para>
In order to automatically restart failed instances, you need
to set up a cron job to run the
<computeroutput>ganeti-watcher</computeroutput> program. A
sample cron file is provided in the source at
<filename>doc/examples/ganeti.cron</filename>; you can copy
it (altering the path if necessary) to
<filename>/etc/cron.d/ganeti</filename>.
</para>
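<para>
For example, again from the unpacked source directory:
</para>

<screen>
cp doc/examples/ganeti.cron /etc/cron.d/ganeti
</screen>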

</sect2>

<sect2>
<title>Installing the Operating System support packages</title>

<para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para>

<para>
To be able to install instances you need to have an Operating
System installation script. An example for Debian Etch is
provided on the project web site. Download it from <ulink
url="http://code.google.com/p/ganeti/"></ulink> and follow the
instructions in the <filename>README</filename> file. Here is
the installation procedure:
</para>

<screen>
cd /srv/ganeti/os
tar xvf instance-debian-etch-0.1.tar
mv instance-debian-etch-0.1 debian-etch
</screen>

<para>
In order to use this OS definition, you need to have internet
access from your nodes and have the <citerefentry>
<refentrytitle>debootstrap</refentrytitle>
<manvolnum>8</manvolnum></citerefentry>, <citerefentry>
<refentrytitle>dump</refentrytitle><manvolnum>8</manvolnum>
</citerefentry> and <citerefentry>
<refentrytitle>restore</refentrytitle>
<manvolnum>8</manvolnum> </citerefentry> commands installed on
all nodes.
</para>
<formalpara>
<title>Debian</title>
<para>
Use this command on all nodes to install the required
packages:

<screen>apt-get install debootstrap dump</screen>
</para>
</formalpara>

<para>
Alternatively, you can create your own OS definitions. See the
manpage
<citerefentry>
<refentrytitle>ganeti-os-interface</refentrytitle>
<manvolnum>8</manvolnum>
</citerefentry>.
</para>

</sect2>

<sect2>
<title>Initializing the cluster</title>

<para><emphasis role="strong">Mandatory:</emphasis> only on one
node per cluster.</para>

<para>The last step is to initialize the cluster. After you've
repeated the above process on all of your nodes, choose one as the
master, and execute:
</para>

<screen>
gnt-cluster init <replaceable>CLUSTERNAME</replaceable>
</screen>

<para>
The <replaceable>CLUSTERNAME</replaceable> is a hostname,
which must be resolvable (e.g. it must exist in DNS or in
<filename>/etc/hosts</filename>) by all the nodes in the
cluster. For a multi-node cluster, you must choose a name
different from any of the nodes' names. In general the best
choice is to have a unique name for the cluster, even if it
consists of only one machine, as you will then be able to
expand it later without any problems.
</para>

<para>
If the bridge name you are using is not
<literal>xen-br0</literal>, use the <option>-b
<replaceable>BRIDGENAME</replaceable></option> option to
specify the bridge name. In this case, you should also use the
<option>--master-netdev
<replaceable>BRIDGENAME</replaceable></option> option with the
same <replaceable>BRIDGENAME</replaceable> argument.
</para>

<para>
You can use a different name than <literal>xenvg</literal> for
the volume group (but note that the name must be identical on
all nodes). In this case you need to specify it by passing the
<option>-g <replaceable>VGNAME</replaceable></option> option
to <computeroutput>gnt-cluster init</computeroutput>.
</para>
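<para>
Putting these options together, a cluster using a bridge named
<literal>br0</literal> and a volume group named
<literal>ganetivg</literal> (both hypothetical names used only
for illustration) would be initialized like this:
</para>

<screen>
gnt-cluster init -b br0 --master-netdev br0 -g ganetivg cluster1.example.com
</screen>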

<para>
You can also invoke the command with the
<option>--help</option> option in order to see all the
possibilities.
</para>

</sect2>

<sect2>
<title>Joining the nodes to the cluster</title>

<para>
<emphasis role="strong">Mandatory:</emphasis> for all the
other nodes.
</para>

<para>
After you have initialized your cluster you need to join the
other nodes to it. You can do so by executing the following
command on the master node:
</para>
<screen>
gnt-node add <replaceable>NODENAME</replaceable>
</screen>
</sect2>

<sect2>
<title>Separate replication network</title>

<para><emphasis role="strong">Optional</emphasis></para>
<para>
Ganeti uses DRBD to mirror the disks of the virtual instances
between nodes. To use a dedicated network interface for this
(in order to improve performance or to enhance security) you
need to configure an additional interface for each node. Use
the <option>-s</option> option with
<computeroutput>gnt-cluster init</computeroutput> and
<computeroutput>gnt-node add</computeroutput> to specify the
IP address of this secondary interface to use for each
node. Note that if you specified this option at cluster setup
time, you must afterwards use it for every node add operation.
</para>
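<para>
For example, if the secondary interfaces sit on a dedicated
(hypothetical) <literal>192.168.2.0/24</literal> network:
</para>

<screen>
gnt-cluster init -s 192.168.2.1 <replaceable>CLUSTERNAME</replaceable>
gnt-node add -s 192.168.2.2 <replaceable>NODENAME</replaceable>
</screen>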
</sect2>

<sect2>
<title>Testing the setup</title>

<para>
Execute the <computeroutput>gnt-node list</computeroutput>
command to see all nodes in the cluster:
<screen>
# gnt-node list
Node              DTotal  DFree MTotal MNode MFree Pinst Sinst
node1.example.com 197404 197404   2047  1896   125     0     0
</screen>
</para>
</sect2>

</sect1>

<sect1>
<title>Setting up and managing virtual instances</title>
<sect2>
<title>Setting up virtual instances</title>
<para>
This step shows how to set up a virtual instance with either
non-mirrored disks (<computeroutput>plain</computeroutput>) or
with network-mirrored disks
(<computeroutput>remote_raid1</computeroutput>). All commands
need to be executed on the Ganeti master node (the one on
which <computeroutput>gnt-cluster init</computeroutput> was
run). Verify that the OS scripts are present on all cluster
nodes with <computeroutput>gnt-os list</computeroutput>.
</para>
<para>
To create a virtual instance, you need a hostname which is
resolvable (via DNS or <filename>/etc/hosts</filename> on all
nodes). The following command will create a non-mirrored
instance for you:
</para>
<screen>
# gnt-instance add --node=node1 -o debian-etch -t plain inst1.example.com
* creating instance disks...
adding instance inst1.example.com to cluster config
Waiting for instance inst1.example.com to sync disks.
Instance inst1.example.com's disks are in sync.
creating os for instance inst1.example.com on node node1.example.com
* running the instance OS create scripts...
</screen>

<para>
The above instance will have no network interface enabled.
You can access it over the virtual console with
<computeroutput>gnt-instance console inst1</computeroutput>.
There is no password for root. As this is a Debian instance,
you can modify the
<filename>/etc/network/interfaces</filename> file to set up
the network interface (<literal>eth0</literal> is the name of
the interface provided to the instance).
</para>

<para>
To create a network-mirrored instance, change the argument to
the <option>-t</option> option from <literal>plain</literal>
to <literal>remote_raid1</literal> and specify the node on
which the mirror should reside with the
<option>--secondary-node</option> option, like this:
</para>

<screen>
# gnt-instance add -t remote_raid1 --secondary-node node1 \
  -n node2 -o debian-etch instance2
* creating instance disks...
adding instance instance2 to cluster config
Waiting for instance instance2 to sync disks.
- device sdb:  3.50% done, 304 estimated seconds remaining
- device sdb: 21.70% done, 270 estimated seconds remaining
- device sdb: 39.80% done, 247 estimated seconds remaining
- device sdb: 58.10% done, 121 estimated seconds remaining
- device sdb: 76.30% done, 72 estimated seconds remaining
- device sdb: 94.80% done, 18 estimated seconds remaining
Instance instance2's disks are in sync.
creating os for instance instance2 on node node2.example.com
* running the instance OS create scripts...
* starting instance...
</screen>

</sect2>

<sect2>
<title>Managing virtual instances</title>
<para>
All commands need to be executed on the Ganeti master node.
</para>

<para>
To access the console of an instance, use
<computeroutput>gnt-instance console
<replaceable>INSTANCENAME</replaceable></computeroutput>.
</para>

<para>
To shut down an instance, use <computeroutput>gnt-instance
shutdown
<replaceable>INSTANCENAME</replaceable></computeroutput>. To
start it up again, use <computeroutput>gnt-instance startup
<replaceable>INSTANCENAME</replaceable></computeroutput>.
</para>

<para>
To fail over an instance to its secondary node (only possible
in the <literal>remote_raid1</literal> setup), use
<computeroutput>gnt-instance failover
<replaceable>INSTANCENAME</replaceable></computeroutput>.
</para>

<para>
For more instance and cluster administration details, see the
<emphasis>Ganeti administrator's guide</emphasis>.
</para>

</sect2>

</sect1>

</article>