<!-- doc/admin.sgml @ 36e23a40 -->
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
]>
<article class="specification">
  <articleinfo>
    <title>Ganeti administrator's guide</title>
  </articleinfo>

  <para>Documents Ganeti version 2.0</para>

  <sect1>
    <title>Introduction</title>

    <para>
      Ganeti is virtualization cluster management software. Before
      using it, you are expected to be a system administrator
      familiar with your Linux distribution and with the Xen or KVM
      virtualization environments.
    </para>

    <para>
      The various components of Ganeti all have man pages and
      interactive help. This manual, though, will help you get
      familiar with the system by explaining the most common
      operations, grouped by related use.
    </para>

    <para>
      After a terminology glossary and a section on the
      prerequisites needed to use this manual, the rest of this
      document is divided into three main sections, which group
      different features of Ganeti:
      <itemizedlist>
        <listitem>
          <simpara>Instance Management</simpara>
        </listitem>
        <listitem>
          <simpara>High Availability Features</simpara>
        </listitem>
        <listitem>
          <simpara>Debugging Features</simpara>
        </listitem>
      </itemizedlist>
    </para>

    <sect2>
      <title>Ganeti terminology</title>

      <para>
        This section provides a short introduction to Ganeti
        terminology, which may be useful when reading the rest of
        the document.

        <glosslist>
          <glossentry>
            <glossterm>Cluster</glossterm>
            <glossdef>
              <simpara>
                A set of machines (nodes) that cooperate to offer a
                coherent, highly available virtualization service.
              </simpara>
            </glossdef>
          </glossentry>
          <glossentry>
            <glossterm>Node</glossterm>
            <glossdef>
              <simpara>
                A physical machine which is a member of a cluster.
                Nodes are the basic cluster infrastructure, and are
                not fault tolerant.
              </simpara>
            </glossdef>
          </glossentry>
          <glossentry>
            <glossterm>Master node</glossterm>
            <glossdef>
              <simpara>
                The node which controls the cluster, and from which
                all Ganeti commands must be run.
              </simpara>
            </glossdef>
          </glossentry>
          <glossentry>
            <glossterm>Instance</glossterm>
            <glossdef>
              <simpara>
                A virtual machine which runs on a cluster. It can be
                a fault-tolerant, highly available entity.
              </simpara>
            </glossdef>
          </glossentry>
          <glossentry>
            <glossterm>Pool</glossterm>
            <glossdef>
              <simpara>
                A set of clusters sharing the same network.
              </simpara>
            </glossdef>
          </glossentry>
          <glossentry>
            <glossterm>Meta-Cluster</glossterm>
            <glossdef>
              <simpara>
                Anything that concerns more than one cluster.
              </simpara>
            </glossdef>
          </glossentry>
        </glosslist>
      </para>
    </sect2>

    <sect2>
      <title>Prerequisites</title>

      <para>
        You need to have your Ganeti cluster installed and
        configured before you try any of the commands in this
        document. Please follow the <emphasis>Ganeti installation
        tutorial</emphasis> for instructions on how to do that.
      </para>
    </sect2>

  </sect1>

  <sect1>
    <title>Managing Instances</title>

    <sect2>
      <title>Adding/Removing an instance</title>

      <para>
        Adding a new virtual instance to your Ganeti cluster is
        really easy. The command is:

        <synopsis>gnt-instance add -n <replaceable>TARGET_NODE<optional>:SECONDARY_NODE</optional></replaceable> -o <replaceable>OS_TYPE</replaceable> -t <replaceable>DISK_TEMPLATE</replaceable> <replaceable>INSTANCE_NAME</replaceable></synopsis>

        The instance name must be resolvable (e.g. exist in DNS) and
        usually resolves to an address in the same subnet as the
        cluster itself. Options you can give to this command
        include:

        <itemizedlist>
          <listitem>
            <simpara>The disk size (<option>-s</option>) for a
            single-disk instance, or multiple <option>--disk
            <replaceable>N</replaceable>:size=<replaceable>SIZE</replaceable></option>
            options for multi-disk instances</simpara>
          </listitem>
          <listitem>
            <simpara>The memory size (<option>-B memory</option>)</simpara>
          </listitem>
          <listitem>
            <simpara>The number of virtual CPUs (<option>-B vcpus</option>)</simpara>
          </listitem>
          <listitem>
            <para>
              Arguments for the NICs of the instance; by default, a
              single-NIC instance is created. The IP and/or bridge
              of the NIC can be changed via <option>--nic
              0:ip=<replaceable>IP</replaceable>,bridge=<replaceable>BRIDGE</replaceable></option>
            </para>
          </listitem>
        </itemizedlist>
      </para>
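
      <para>
        Putting these options together, a small single-disk instance
        could be created as follows (the node, OS and instance names
        below are only examples, and the exact parameter syntax may
        differ between Ganeti versions):

        <screen>
gnt-instance add -n node1.example.com -o debootstrap -t plain \
  -s 10g -B memory=512M,vcpus=2 instance1.example.com
        </screen>
      </para>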

      <para>There are four types of disk template you can choose from:</para>

      <variablelist>
        <varlistentry>
          <term>diskless</term>
          <listitem>
            <para>The instance has no disks. Only used for
            special-purpose operating systems or for testing.</para>
          </listitem>
        </varlistentry>

        <varlistentry>
          <term>file</term>
          <listitem>
            <para>The instance will use plain files as the backend
            for its disks. No redundancy is provided, and this is
            somewhat more difficult to configure for high
            performance.</para>
          </listitem>
        </varlistentry>

        <varlistentry>
          <term>plain</term>
          <listitem>
            <para>The instance will use LVM devices as the backend
            for its disks. No redundancy is provided.</para>
          </listitem>
        </varlistentry>

        <varlistentry>
          <term>drbd</term>
          <listitem>
            <simpara><emphasis role="strong">Note:</emphasis> This
            is only valid for multi-node clusters using DRBD
            8.0.x.</simpara>
            <simpara>
              A mirror is set up between the local node and a remote
              one, which must be specified as the second value of
              the <option>--node</option> option. Use this option to
              obtain a highly available instance that can be failed
              over to a remote node should the primary one fail.
            </simpara>
          </listitem>
        </varlistentry>

      </variablelist>

      <para>
        For example, if you want to create a highly available
        instance, use the drbd disk template:
        <synopsis>gnt-instance add -n <replaceable>TARGET_NODE</replaceable><optional>:<replaceable>SECONDARY_NODE</replaceable></optional> -o <replaceable>OS_TYPE</replaceable> -t drbd \
  <replaceable>INSTANCE_NAME</replaceable></synopsis>
      </para>

      <para>
        To know which operating systems your cluster supports you
        can use
        <synopsis>gnt-os list</synopsis>
      </para>

      <para>
        Removing an instance is even easier than creating one. This
        operation is irreversible and destroys all the contents of
        your instance. Use with care:

        <synopsis>gnt-instance remove <replaceable>INSTANCE_NAME</replaceable></synopsis>
      </para>
    </sect2>

    <sect2>
      <title>Starting/Stopping an instance</title>

      <para>
        Instances are automatically started at instance creation
        time. To manually start one which is currently stopped you
        can run:

        <synopsis>gnt-instance startup <replaceable>INSTANCE_NAME</replaceable></synopsis>

        While the command to stop one is:

        <synopsis>gnt-instance shutdown <replaceable>INSTANCE_NAME</replaceable></synopsis>

        The command to see all the instances configured and their
        status is:

        <synopsis>gnt-instance list</synopsis>
      </para>
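
      <para>
        For example, to restart a (hypothetical) instance and then
        check that it is running again:

        <screen>
gnt-instance shutdown instance1.example.com
gnt-instance startup instance1.example.com
gnt-instance list
        </screen>
      </para>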

      <para>
        Do not use the Xen commands to stop instances. If you run,
        for example, <emphasis>xm shutdown</emphasis> or
        <emphasis>xm destroy</emphasis> on an instance, Ganeti will
        automatically restart it (via
        <citerefentry><refentrytitle>ganeti-watcher</refentrytitle>
        <manvolnum>8</manvolnum></citerefentry>).
      </para>

    </sect2>

    <sect2>
      <title>Exporting/Importing an instance</title>

      <para>
        You can create a snapshot of an instance's disk and Ganeti
        configuration, which you can then back up, or import into
        another cluster. The way to export an instance is:

        <synopsis>gnt-backup export -n <replaceable>TARGET_NODE</replaceable> <replaceable>INSTANCE_NAME</replaceable></synopsis>

        The target node can be any node in the cluster with enough
        space under <filename class="directory">/srv/ganeti</filename>
        to hold the instance image. Use the
        <option>--noshutdown</option> option to snapshot an instance
        without rebooting it. Any previous snapshot of the same
        instance existing cluster-wide under <filename
        class="directory">/srv/ganeti</filename> will be removed by
        this operation: if you want to keep them, move them out of
        the Ganeti exports directory.
      </para>
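
      <para>
        For example, to export a (hypothetical) instance to one of
        your nodes:

        <screen>
gnt-backup export -n node2.example.com instance1.example.com
        </screen>
      </para>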

      <para>
        Importing an instance is similar to creating a new one. The
        command is:

        <synopsis>gnt-backup import -n <replaceable>TARGET_NODE</replaceable> -t <replaceable>DISK_TEMPLATE</replaceable> --src-node=<replaceable>NODE</replaceable> --src-dir=<replaceable>DIR</replaceable> <replaceable>INSTANCE_NAME</replaceable></synopsis>

        Most of the options available for the command
        <emphasis>gnt-instance add</emphasis> are supported here
        too.
      </para>
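
      <para>
        As an illustration, importing a previously exported instance
        could look like this (the node names and the source
        directory below are only examples):

        <screen>
gnt-backup import -n node1.example.com -t plain \
  --src-node=node2.example.com --src-dir=/srv/ganeti/export/instance1.example.com \
  instance1.example.com
        </screen>
      </para>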
    </sect2>

  </sect1>

  <sect1>
    <title>High availability features</title>

    <note>
      <simpara>This section only applies to multi-node clusters.</simpara>
    </note>

    <sect2>
      <title>Failing over an instance</title>

      <para>
        If an instance is built in highly available mode, you can at
        any time fail it over to its secondary node, even if the
        primary has somehow failed and is no longer up. Doing so is
        really easy; on the master node you can just run:

        <synopsis>gnt-instance failover <replaceable>INSTANCE_NAME</replaceable></synopsis>

        That's it. After the command completes the secondary node is
        now the primary, and vice versa.
      </para>
    </sect2>

    <sect2>
      <title>Live migrating an instance</title>

      <para>
        If an instance is built in highly available mode, is
        currently running and both its nodes are running fine, you
        can migrate it to its secondary node, without downtime. On
        the master node you need to run:

        <synopsis>gnt-instance migrate <replaceable>INSTANCE_NAME</replaceable></synopsis>
      </para>
    </sect2>

    <sect2>
      <title>Replacing an instance's disks</title>

      <para>
        So what if instead the secondary node for an instance has
        failed, or you plan to remove a node from your cluster, and
        you have failed over all its instances, but it is still the
        secondary for some? The solution here is to replace the
        instance's disks, changing the secondary node:
        <synopsis>gnt-instance replace-disks <option>-n <replaceable>NODE</replaceable></option> <replaceable>INSTANCE_NAME</replaceable></synopsis>

        This process is a bit long, but involves no instance
        downtime, and at the end of it the instance has changed its
        secondary node, to which it can if necessary be failed over.
      </para>
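
      <para>
        For example, to make a (hypothetical) spare node the new
        secondary of an instance:

        <screen>
gnt-instance replace-disks -n node3.example.com instance1.example.com
        </screen>
      </para>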
    </sect2>

    <sect2>
      <title>Failing over the master node</title>

      <para>
        This is all good as long as the Ganeti master node is up.
        Should it go down, or should you wish to decommission it,
        just run on any other node the command:

        <synopsis>gnt-cluster masterfailover</synopsis>

        and the node you ran it on is now the new master.
      </para>
    </sect2>
    <sect2>
      <title>Adding/Removing nodes</title>

      <para>
        And of course, now that you know how to move instances
        around, it's easy to free up a node, and then you can remove
        it from the cluster:

        <synopsis>gnt-node remove <replaceable>NODE_NAME</replaceable></synopsis>

        and maybe add a new one:

        <synopsis>gnt-node add <optional><option>--secondary-ip=<replaceable>ADDRESS</replaceable></option></optional> <replaceable>NODE_NAME</replaceable></synopsis>
      </para>
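
      <para>
        As an example, decommissioning one node and adding a
        replacement might look like this (the node names and the
        secondary IP address are purely illustrative):

        <screen>
gnt-node remove node4.example.com
gnt-node add --secondary-ip=192.0.2.40 node5.example.com
        </screen>
      </para>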
    </sect2>
  </sect1>

  <sect1>
    <title>Debugging Features</title>

    <para>
      At some point you might need to do some debugging operations
      on your cluster or on your instances. This section will help
      you with the most commonly used debugging functionality.
    </para>

    <sect2>
      <title>Accessing an instance's disks</title>

      <para>
        From an instance's primary node you have access to its
        disks. Never ever mount the underlying logical volume
        manually on a fault-tolerant instance, or you risk breaking
        replication. The correct way to access them is to run the
        command:

        <synopsis>gnt-instance activate-disks <replaceable>INSTANCE_NAME</replaceable></synopsis>

        and then access the device that gets created. After you've
        finished, you can deactivate the disks with the
        deactivate-disks command, which works in the same way.
      </para>
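
      <para>
        A typical debugging session on the primary node might
        therefore look like this (the instance name is an example):

        <screen>
gnt-instance activate-disks instance1.example.com
# ... inspect the activated block devices ...
gnt-instance deactivate-disks instance1.example.com
        </screen>
      </para>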
    </sect2>

    <sect2>
      <title>Accessing an instance's console</title>

      <para>
        The command to access a running instance's console is:

        <synopsis>gnt-instance console <replaceable>INSTANCE_NAME</replaceable></synopsis>

        Use the console normally and then type
        <userinput>^]</userinput> when done, to exit.
      </para>
    </sect2>

    <sect2>
      <title>Debugging instance OS definitions</title>

      <para>
        Should you have any problems with operating system support,
        the command to run to see a complete status for all your
        nodes is:

        <synopsis>gnt-os diagnose</synopsis>
      </para>

    </sect2>

    <sect2>
      <title>Cluster-wide debugging</title>

      <para>
        The gnt-cluster command offers several options to run tests
        or execute cluster-wide operations. For example:

        <screen>
gnt-cluster command
gnt-cluster copyfile
gnt-cluster verify
gnt-cluster verify-disks
gnt-cluster getmaster
gnt-cluster version
        </screen>
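
        For instance, <emphasis>gnt-cluster command</emphasis> runs
        a shell command on all nodes; the invocation below, which
        checks the uptime of every node, is only an illustration:

        <screen>
gnt-cluster command uptime
        </screen>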

        See the man page <citerefentry>
        <refentrytitle>gnt-cluster</refentrytitle>
        <manvolnum>8</manvolnum> </citerefentry> to learn more about
        their usage.
      </para>
    </sect2>

  </sect1>

</article>