Ganeti administrator's guide
============================

Documents Ganeti version |version|

.. contents::

.. highlight:: text

Introduction
------------

Ganeti is virtualization cluster management software. You are expected
to be a system administrator familiar with your Linux distribution and
the Xen or KVM virtualization environments before using it.

The various components of Ganeti all have man pages and interactive
help. This manual, though, will help you get familiar with the system
by explaining the most common operations, grouped by related use.

After a terminology glossary and a section on the prerequisites needed
to use this manual, the rest of this document is divided into sections
for the different targets that a command affects: instances, nodes, etc.

.. _terminology-label:

Ganeti terminology
++++++++++++++++++

This section provides a small introduction to Ganeti terminology, which
might be useful when reading the rest of the document.

Cluster
~~~~~~~

A set of machines (nodes) that cooperate to offer a coherent, highly
available virtualization service under a single administration domain.

Node
~~~~

A physical machine which is a member of a cluster. Nodes are the basic
cluster infrastructure, and they don't need to be fault tolerant in
order to achieve high availability for instances.

Nodes can be added to and removed (if they host no instances) from the
cluster at will. In an HA cluster and only with HA instances, the loss
of any single node will not cause disk data loss for any instance; of
course, a node crash will cause the crash of its primary instances.

A node belonging to a cluster can be in one of the following roles at a
given time:

- *master* node, which is the node from which the cluster is controlled
- *master candidate* node, only nodes in this role have the full cluster
  configuration and knowledge, and only master candidates can become the
  master node
- *regular* node, which is the state in which most nodes will be on
  bigger clusters (>20 nodes)
- *drained* node, nodes in this state are functioning normally but
  cannot receive new instances; the intention is that nodes in this role
  have some issue and they are being evacuated for hardware repairs
- *offline* node, in which there is a record in the cluster
  configuration about the node, but the daemons on the master node will
  not talk to this node; any instances declared as having an offline
  node as either primary or secondary will be flagged as an error in the
  cluster verify operation

Depending on the role, each node will run a set of daemons:

- the :command:`ganeti-noded` daemon, which controls the manipulation of
  this node's hardware resources; it runs on all nodes which are in a
  cluster
- the :command:`ganeti-confd` daemon (Ganeti 2.1+) which runs on all
  nodes, but is only functional on master candidate nodes; this daemon
  can be disabled at configuration time if you don't need its
  functionality
- the :command:`ganeti-rapi` daemon which runs on the master node and
  offers an HTTP-based API for the cluster
- the :command:`ganeti-masterd` daemon which runs on the master node and
  allows control of the cluster

Besides the node role, there are other node flags that influence its
behaviour:

- the *master_capable* flag denotes whether the node can ever become a
  master candidate; setting this to 'no' means that auto-promotion will
  never make this node a master candidate; this flag can be useful for a
  remote node that only runs local instances, and having it become a
  master is impractical due to networking or other constraints
- the *vm_capable* flag denotes whether the node can host instances or
  not; for example, one might use a non-vm_capable node just as a master
  candidate, for configuration backups; setting this flag to no
  disallows placement of instances on this node, deactivates hypervisor
  and related checks on it (e.g. bridge checks, LVM check, etc.), and
  removes it from cluster capacity computations


Instance
~~~~~~~~

A virtual machine which runs on a cluster. It can be a fault tolerant,
highly available entity.

An instance has various parameters, which are classified in three
categories: hypervisor-related parameters (called ``hvparams``), general
parameters (called ``beparams``) and per-network-card parameters (called
``nicparams``). All these parameters can be modified either at instance
level or via defaults at cluster level.
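
For example, assuming an instance named ``instance1`` (a hypothetical
name) and keys taken from the :command:`gnt-instance` and
:command:`gnt-cluster` manpages, a sketch of inspecting and overriding
such parameters looks like::

  # show the instance's current hvparams/beparams/nicparams
  gnt-instance info instance1
  # override a hypervisor and a backend parameter at instance level
  gnt-instance modify -H root_path=/dev/xvda1 -B vcpus=2 instance1
  # or change the cluster-wide default instead
  gnt-cluster modify -B vcpus=2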

Disk template
~~~~~~~~~~~~~

There are multiple options for the storage provided to an instance;
while the instance sees the same virtual drive in all cases, the
node-level configuration varies between them.

There are five disk templates you can choose from:

diskless
  The instance has no disks. Only used for special purpose operating
  systems or for testing.

file
  The instance will use plain files as backend for its disks. No
  redundancy is provided, and this is somewhat more difficult to
  configure for high performance.

plain
  The instance will use LVM devices as backend for its disks. No
  redundancy is provided.

drbd
  .. note:: This is only valid for multi-node clusters using DRBD 8.0+

  A mirror is set between the local node and a remote one, which must be
  specified with the second value of the --node option. Use this option
  to obtain a highly available instance that can be failed over to a
  remote node should the primary one fail.

rbd
  The instance will use Volumes inside a RADOS cluster as backend for
  its disks. It will access them using the RADOS block device (RBD).

IAllocator
~~~~~~~~~~

A framework for using external (user-provided) scripts to compute the
placement of instances on the cluster nodes. This eliminates the need to
manually specify nodes in instance add, instance moves, node evacuate,
etc.

In order for Ganeti to be able to use these scripts, they must be placed
in the iallocator directory (usually ``lib/ganeti/iallocators`` under
the installation prefix, e.g. ``/usr/local``).
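
As a sketch, assuming the ``hail`` allocator shipped with the htools
package is installed (the instance and OS names below are only
examples), placement can be delegated to it instead of naming nodes by
hand::

  # let the allocator pick the nodes instead of passing -n
  gnt-instance add -I hail -o debootstrap -t drbd -s 10G instance2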

“Primary” and “secondary” concepts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An instance has a primary node and, depending on the disk configuration,
might also have a secondary node. The instance always runs on the
primary node and only uses its secondary node for disk replication.

Similarly, the terms primary and secondary instances, when talking about
a node, refer to the set of instances having the given node as primary,
respectively secondary.

Tags
~~~~

Tags are short strings that can be attached either to the cluster
itself, or to nodes or instances. They are useful as a very simplistic
information store for helping with cluster administration, for example
by attaching owner information to each instance after it's created::

  gnt-instance add … instance1
  gnt-instance add-tags instance1 owner:user2

And then by listing each instance and its tags, this information could
be used for contacting the users of each instance.
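
For example, a sketch of reading those tags back together with the
instance names (``tags`` is assumed here to be one of the output fields
accepted by the ``-o`` option of :command:`gnt-instance list`)::

  gnt-instance list -o name,tags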

Jobs and OpCodes
~~~~~~~~~~~~~~~~

While not directly visible to an end-user, it's useful to know that a
basic cluster operation (e.g. starting an instance) is represented
internally by Ganeti as an *OpCode* (abbreviation from operation
code). These OpCodes are executed as part of a *Job*. The OpCodes in a
single Job are processed serially by Ganeti, but different Jobs will be
processed (depending on resource availability) in parallel. They will
not be executed in the submission order, but depending on resource
availability, locks and (starting with Ganeti 2.3) priority. An earlier
job may have to wait for a lock while a newer job doesn't need any locks
and can be executed right away. Operations requiring a certain order
need to be submitted as a single job, or the client must submit one job
at a time and wait for it to finish before continuing.

For example, shutting down the entire cluster can be done by running the
command ``gnt-instance shutdown --all``, which will submit for each
instance a separate job containing the “shutdown instance” OpCode.
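
The resulting jobs can then be followed from the master node; a sketch,
assuming the generic ``--submit`` option (which queues the jobs and
returns immediately)::

  # submit the shutdown jobs without waiting for them...
  gnt-instance shutdown --all --submit
  # ...and watch their progress in the job queue
  gnt-job list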


Prerequisites
+++++++++++++

You need to have your Ganeti cluster installed and configured before you
try any of the commands in this document. Please follow the
:doc:`install` document for instructions on how to do that.

Instance management
-------------------

Adding an instance
++++++++++++++++++

The add operation might seem complex due to the many parameters it
accepts, but once you have understood the (few) required parameters and
the customisation capabilities you will see it is an easy operation.

The add operation requires at minimum five parameters:

- the OS for the instance
- the disk template
- the disk count and size
- the node specification or alternatively the iallocator to use
- and finally the instance name

The OS for the instance must be visible in the output of the command
``gnt-os list`` and specifies which guest OS to install on the instance.

The disk template specifies what kind of storage to use as backend for
the (virtual) disks presented to the instance; note that for instances
with multiple virtual disks, they all must be of the same type.

The node(s) on which the instance will run can be given either manually,
via the ``-n`` option, or computed automatically by Ganeti, if you have
installed any iallocator script.

With the above parameters in mind, the command is::

  gnt-instance add \
    -n TARGET_NODE:SECONDARY_NODE \
    -o OS_TYPE \
    -t DISK_TEMPLATE -s DISK_SIZE \
    INSTANCE_NAME

The instance name must be resolvable (e.g. exist in DNS) and usually
points to an address in the same subnet as the cluster itself.

The above command has the minimum required options; other options you
can give include, among others:

- The maximum/minimum memory size (``-B maxmem``, ``-B minmem``)
  (``-B memory`` can be used to specify only one size)

- The number of virtual CPUs (``-B vcpus``)

- Arguments for the NICs of the instance; by default, a single-NIC
  instance is created. The IP and/or bridge of the NIC can be changed
  via ``--nic 0:ip=IP,bridge=BRIDGE``

See the manpage for gnt-instance for the detailed option list.

For example, if you want to create a highly available instance, with a
single disk of 50GB and the default memory size, having primary node
``node1`` and secondary node ``node3``, use the following command::

  gnt-instance add -n node1:node3 -o debootstrap -t drbd -s 50G \
    instance1

There is also a command for batch instance creation from a
specification file, see the ``batch-create`` operation in the
gnt-instance manual page.

Regular instance operations
+++++++++++++++++++++++++++

Removal
~~~~~~~

Removing an instance is even easier than creating one. This operation is
irreversible and destroys all the contents of your instance. Use with
care::

  gnt-instance remove INSTANCE_NAME

.. _instance-startup-label:

Startup/shutdown
~~~~~~~~~~~~~~~~

Instances are automatically started at instance creation time. To
manually start one which is currently stopped you can run::

  gnt-instance startup INSTANCE_NAME

Ganeti will start an instance with up to its maximum instance memory. If
not enough memory is available Ganeti will use all the available memory
down to the instance minimum memory. If not even that amount of memory
is free Ganeti will refuse to start the instance.
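
The two limits are the ``maxmem`` and ``minmem`` backend parameters; a
sketch of adjusting them (values in MiB) on a hypothetical ``instance1``
before starting it::

  gnt-instance modify -B maxmem=1024,minmem=512 instance1
  gnt-instance startup instance1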

Note that this will not work when an instance is in a permanently
stopped state ``offline``. In this case, you will first have to
put it back to online mode by running::

  gnt-instance modify --online INSTANCE_NAME

The command to stop the running instance is::

  gnt-instance shutdown INSTANCE_NAME

If you want to shut the instance down more permanently, so that it
does not require dynamically allocated resources (memory and vcpus),
after shutting down an instance, execute the following::

  gnt-instance modify --offline INSTANCE_NAME

.. warning:: Do not use the Xen or KVM commands directly to stop
   instances. If you run for example ``xm shutdown`` or ``xm destroy``
   on an instance, Ganeti will automatically restart it (via
   the :command:`ganeti-watcher` command which is launched via cron).

Querying instances
~~~~~~~~~~~~~~~~~~

There are two ways to get information about instances: listing
instances, which produces tabular output containing a given set of
fields about each instance, and querying detailed information about a
set of instances.

The command to see all the instances configured and their status is::

  gnt-instance list

The command can return a custom set of information when using the ``-o``
option (as always, check the manpage for a detailed specification). Each
instance will be represented on a line, thus making it easy to parse
this output via the usual shell utilities (grep, sed, etc.).
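
For example, a sketch of a custom field selection (the field names used
here are assumed to be among those documented in the manpage)::

  gnt-instance list -o name,pnode,snodes,status,oper_ram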

To get more detailed information about an instance, you can run::

  gnt-instance info INSTANCE

which will give a multi-line block of information about the instance,
its hardware resources (especially its disks and their redundancy
status), etc. This is harder to parse and is more expensive than the
list operation, but returns much more detailed information.

Changing an instance's runtime memory
+++++++++++++++++++++++++++++++++++++

Ganeti will always make sure an instance has a value between its maximum
and its minimum memory available as runtime memory. As of version 2.6
Ganeti will only choose a size different than the maximum size when
starting up, failing over, or migrating an instance on a node with less
than the maximum memory available. It won't resize other instances in
order to free up space for an instance.

If you find that you need more memory on a node, any instance can be
manually resized without downtime, with the command::

  gnt-instance modify -m SIZE INSTANCE_NAME

The same command can also be used to increase the memory available on an
instance, provided that enough free memory is available on its node, and
the specified size is not larger than the maximum memory size the
instance had when it was first booted (an instance will be unable to see
new memory above the maximum that was specified to the hypervisor at its
boot time; if it needs to grow further, a reboot becomes necessary).
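
A sketch of both directions, assuming a hypothetical ``instance1`` that
was booted with a 2048 MiB maximum::

  # temporarily shrink the runtime memory to free memory on the node
  gnt-instance modify -m 1024 instance1
  # later grow it back (must not exceed the boot-time maximum)
  gnt-instance modify -m 2048 instance1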

Export/Import
+++++++++++++

You can create a snapshot of an instance disk and its Ganeti
configuration, which you can then back up, or import into another
cluster. The way to export an instance is::

  gnt-backup export -n TARGET_NODE INSTANCE_NAME

The target node can be any node in the cluster with enough space under
``/srv/ganeti`` to hold the instance image. Use the ``--noshutdown``
option to snapshot an instance without rebooting it. Note that Ganeti
only keeps one snapshot for an instance - any previous snapshot of the
same instance existing cluster-wide under ``/srv/ganeti`` will be
removed by this operation: if you want to keep them, you need to move
them out of the Ganeti exports directory.

Importing an instance is similar to creating a new one, but additionally
one must specify the location of the snapshot. The command is::

  gnt-backup import -n TARGET_NODE \
    --src-node=NODE --src-dir=DIR INSTANCE_NAME

By default, parameters will be read from the export information, but you
can of course pass them in via the command line - most of the options
available for the command :command:`gnt-instance add` are supported here
too.

Import of foreign instances
+++++++++++++++++++++++++++

It is possible to import a foreign instance whose disk data is already
stored as LVM volumes without copying it over: the disk adoption mode.

For this, ensure that the original, non-managed instance is stopped,
then create a Ganeti instance in the usual way, except that instead of
passing the disk information you specify the current volumes::

  gnt-instance add -t plain -n HOME_NODE ... \
    --disk 0:adopt=lv_name[,vg=vg_name] INSTANCE_NAME

This will take over the given logical volumes, rename them to the Ganeti
standard (UUID-based), and start the instance directly without
installing an OS on them. If you configure the hypervisor similarly to
the non-managed configuration that the instance had, the transition
should be seamless for the instance. For more than one disk, just pass
another disk parameter (e.g. ``--disk 1:adopt=...``).

Instance kernel selection
+++++++++++++++++++++++++

The kernel that instances use to boot can come either from the node, or
from the instances themselves, depending on the setup.

Xen-PVM
~~~~~~~

With Xen PVM, there are three options.

First, you can use a kernel from the node, by setting the hypervisor
parameters as follows (a sketch follows the list):

- ``kernel_path`` to a valid file on the node (and appropriately
  ``initrd_path``)
- ``kernel_args`` optionally set to a valid Linux setting (e.g. ``ro``)
- ``root_path`` to a valid setting (e.g. ``/dev/xvda1``)
- ``bootloader_path`` and ``bootloader_args`` to empty
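
A sketch of setting these as cluster-wide defaults for the ``xen-pvm``
hypervisor (the paths and the ``instance1`` name are only examples; use
the kernel actually installed on your nodes)::

  gnt-cluster modify -H xen-pvm:kernel_path=/boot/vmlinuz-2.6-xenU,root_path=/dev/xvda1,kernel_args=ro
  # or override a single parameter for one instance
  gnt-instance modify -H root_path=/dev/xvda2 instance1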

Alternatively, you can delegate the kernel management to instances, and
use either ``pvgrub`` or the deprecated ``pygrub``. For this, you must
install the kernels and initrds in the instance and create a valid GRUB
v1 configuration file.

For ``pvgrub`` (new in version 2.4.2), you need to set:

- ``kernel_path`` to point to the ``pvgrub`` loader present on the node
  (e.g. ``/usr/lib/xen/boot/pv-grub-x86_32.gz``)
- ``kernel_args`` to the path to the GRUB config file, relative to the
  instance (e.g. ``(hd0,0)/grub/menu.lst``)
- ``root_path`` **must** be empty
- ``bootloader_path`` and ``bootloader_args`` to empty

While ``pygrub`` is deprecated, here is how you can configure it:

- ``bootloader_path`` to the pygrub binary (e.g. ``/usr/bin/pygrub``)
- the other settings are not important

More information can be found in the Xen wiki pages for `pvgrub
<http://wiki.xensource.com/xenwiki/PvGrub>`_ and `pygrub
<http://wiki.xensource.com/xenwiki/PyGrub>`_.

KVM
~~~

For KVM, the kernel can also be loaded either way.

For loading the kernels from the node, you need to set:

- ``kernel_path`` to a valid value
- ``initrd_path`` optionally set if you use an initrd
- ``kernel_args`` optionally set to a valid value (e.g. ``ro``)

If you want instead to have the instance boot from its disk (and execute
its bootloader), simply set the ``kernel_path`` parameter to an empty
string, and all the others will be ignored.
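
A sketch of both setups for a hypothetical KVM instance ``instance1``
(the kernel path is an example, and passing an empty value on the
command line is assumed here to clear the parameter)::

  # boot the kernel from the node
  gnt-instance modify -H kernel_path=/boot/vmlinuz-3.2-kvmU,kernel_args=ro instance1
  # or boot from the instance's own disk and bootloader
  gnt-instance modify -H kernel_path= instance1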

Instance HA features
--------------------

.. note:: This section only applies to multi-node clusters

.. _instance-change-primary-label:

Changing the primary node
+++++++++++++++++++++++++

There are three ways to exchange an instance's primary and secondary
nodes; the right one to choose depends on how the instance has been
created and the status of its current primary node. See
:ref:`rest-redundancy-label` for information on changing the secondary
node. Note that it's only possible to change the primary node to the
secondary and vice-versa; a direct change of the primary node to a
third node, while keeping the current secondary, is not possible in a
single step, only via multiple operations as detailed in
:ref:`instance-relocation-label`.

Failing over an instance
~~~~~~~~~~~~~~~~~~~~~~~~

If an instance is built in highly available mode you can at any time
fail it over to its secondary node, even if the primary has somehow
failed and it's not up anymore. Doing it is really easy; on the master
node you can just run::

  gnt-instance failover INSTANCE_NAME

That's it. After the command completes the secondary node is now the
primary, and vice-versa.

The instance will be started with an amount of memory between its
``maxmem`` and its ``minmem`` value, depending on the free memory on its
target node, or the operation will fail if that's not possible. See
:ref:`instance-startup-label` for details.

If the instance's disk template is of type rbd, then you can specify
the target node (which can be any node) explicitly, or specify an
iallocator plugin. If you omit both, the default iallocator will be
used to determine the target node::

  gnt-instance failover -n TARGET_NODE INSTANCE_NAME

Live migrating an instance
~~~~~~~~~~~~~~~~~~~~~~~~~~

If an instance is built in highly available mode, it is currently
running and both its nodes are running fine, you can migrate it over to
its secondary node, without downtime. On the master node you need to
run::

  gnt-instance migrate INSTANCE_NAME

The current load on the instance and its memory size will influence how
long the migration will take. In any case, for both KVM and Xen
hypervisors, the migration will be transparent to the instance.

If the destination node has less memory than the instance's current
runtime memory, but at least the instance's minimum memory available,
Ganeti will automatically reduce the instance runtime memory before
migrating it, unless the ``--no-runtime-changes`` option is passed, in
which case the target node should have at least the instance's current
runtime memory free.

If the instance's disk template is of type rbd, then you can specify
the target node (which can be any node) explicitly, or specify an
iallocator plugin. If you omit both, the default iallocator will be
used to determine the target node::

  gnt-instance migrate -n TARGET_NODE INSTANCE_NAME

Moving an instance (offline)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If an instance has not been created as mirrored, then the only way to
change its primary node is to execute the move command::

  gnt-instance move -n NEW_NODE INSTANCE

This has a few prerequisites:

- the instance must be stopped
- its current primary node must be on-line and healthy
- the disks of the instance must not have any errors

Since this operation actually copies the data from the old node to the
new node, expect it to take a time proportional to the size of the
instance's disks and the speed of both the nodes' I/O system and their
networking.

Disk operations
+++++++++++++++

Disk failures are a common cause of errors in any server
deployment. Ganeti offers protection from single-node failure if your
instances were created in HA mode, and it also offers ways to restore
redundancy after a failure.

Preparing for disk operations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is important to note that for Ganeti to be able to do any disk
operation, the Linux machines on top of which Ganeti runs must be
consistent; for LVM, this means that the LVM commands must not return
failures; it is common that after a complete disk failure, any LVM
command aborts with an error similar to::

  # vgs
  /dev/sdb1: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdb1: read failed after 0 of 4096 at 750153695232: Input/output
  error
  /dev/sdb1: read failed after 0 of 4096 at 0: Input/output error
  Couldn't find device with uuid
  't30jmN-4Rcf-Fr5e-CURS-pawt-z0jU-m1TgeJ'.
  Couldn't find all physical volumes for volume group xenvg.

Before restoring an instance's disks to healthy status, you need to fix
the volume group used by Ganeti so that we can actually create and
manage the logical volumes. This is usually done in a multi-step
process:

#. first, if the disk is completely gone and LVM commands exit with
   “Couldn't find device with uuid…” then you need to run the command::

     vgreduce --removemissing VOLUME_GROUP

#. after the above command, the LVM commands should be executing
   normally (warnings are normal, but the commands will not fail
   completely).

#. if the failed disk is still visible in the output of the ``pvs``
   command, you need to disallow allocations on it by running::

     pvchange -x n /dev/DISK

At this point, the volume group should be consistent and any bad
physical volumes should no longer be available for allocation.

Note that since version 2.1 Ganeti provides some commands to automate
these two operations, see :ref:`storage-units-label`.

.. _rest-redundancy-label:

Restoring redundancy for DRBD-based instances
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Consider a DRBD instance where the storage on one of its two nodes has
failed. Depending on which node (primary or secondary) has failed, you
have three options at hand:

- if the storage on the primary node has failed, you need to re-create
  the disks on it
- if the storage on the secondary node has failed, you can either
  re-create the disks on it or change the secondary and recreate
  redundancy on the new secondary node

Of course, at any point it's possible to force re-creation of disks even
though everything is already fine.

For all three cases, the ``replace-disks`` operation can be used::

  # re-create disks on the primary node
  gnt-instance replace-disks -p INSTANCE_NAME
  # re-create disks on the current secondary
  gnt-instance replace-disks -s INSTANCE_NAME
  # change the secondary node, via manual specification
  gnt-instance replace-disks -n NODE INSTANCE_NAME
  # change the secondary node, via an iallocator script
  gnt-instance replace-disks -I SCRIPT INSTANCE_NAME
  # since Ganeti 2.1: automatically fix the primary or secondary node
  gnt-instance replace-disks -a INSTANCE_NAME

Since the process involves copying all data from the working node to the
target node, it will take a while, depending on the instance's disk
size, node I/O system and network speed. But it is (barring any network
interruption) completely transparent for the instance.

Re-creating disks for non-redundant instances
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.1

For non-redundant instances, there isn't a copy (except backups) to
re-create the disks from. But it's possible to at least re-create empty
disks, after which a reinstall can be run, via the ``recreate-disks``
command::

  gnt-instance recreate-disks INSTANCE

Note that this will fail if the disks already exist.

Conversion of an instance's disk type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to convert between a non-redundant instance of type
``plain`` (LVM storage) and redundant ``drbd`` via the ``gnt-instance
modify`` command::

  # start with a non-redundant instance
  gnt-instance add -t plain ... INSTANCE

  # later convert it to redundant
  gnt-instance stop INSTANCE
  gnt-instance modify -t drbd -n NEW_SECONDARY INSTANCE
  gnt-instance start INSTANCE

  # and convert it back
  gnt-instance stop INSTANCE
  gnt-instance modify -t plain INSTANCE
  gnt-instance start INSTANCE

The conversion must be done while the instance is stopped, and
converting from plain to drbd template presents a small risk, especially
if the instance has multiple disks and/or if one node fails during the
conversion procedure. As such, it's recommended (as always) to make
sure that downtime for manual recovery is acceptable and that the
instance has up-to-date backups.

Debugging instances
+++++++++++++++++++

Accessing an instance's disks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From an instance's primary node you can have access to its disks. Never
ever mount the underlying logical volume manually on a fault tolerant
instance, or you will break replication and your data will be
inconsistent. The correct way to access an instance's disks is to run
(on the master node, as usual) the command::

  gnt-instance activate-disks INSTANCE

And then, *on the primary node of the instance*, access the device that
gets created. For example, you could mount the given disks, then edit
files on the filesystem, etc.

Note that with partitioned disks (as opposed to whole-disk filesystems),
you will need to use a tool like :manpage:`kpartx(8)`::

  node1# gnt-instance activate-disks instance1
  …
  node1# ssh node3
  node3# kpartx -l /dev/…
  node3# kpartx -a /dev/…
  node3# mount /dev/mapper/… /mnt/
  # edit files under mnt as desired
  node3# umount /mnt/
  node3# kpartx -d /dev/…
  node3# exit
  node1#

After you've finished you can deactivate them with the deactivate-disks
command, which works in the same way::

  gnt-instance deactivate-disks INSTANCE

Note that if any process started by you is still using the disks, the
above command will error out, and you **must** clean up and ensure that
the above command runs successfully before you start the instance,
otherwise the instance will suffer corruption.

Accessing an instance's console
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The command to access a running instance's console is::

  gnt-instance console INSTANCE_NAME

Use the console normally and then type ``^]`` when done, to exit.

Other instance operations
+++++++++++++++++++++++++

Reboot
~~~~~~

There is a wrapper command for rebooting instances::

  gnt-instance reboot instance2

By default, this does the equivalent of shutting down and then starting
the instance, but it accepts parameters to perform a soft reboot (via
the hypervisor), a hard reboot (hypervisor shutdown and then startup) or
a full one (the default, which also de-configures and then configures
again the disks of the instance).

Instance OS definitions debugging
+++++++++++++++++++++++++++++++++

Should you have any problems with instance operating systems, the
command to see a complete status for all your nodes is::

  gnt-os diagnose

.. _instance-relocation-label:

Instance relocation
~~~~~~~~~~~~~~~~~~~

While it is not possible to move an instance from nodes ``(A, B)`` to
nodes ``(C, D)`` in a single move, it is possible to do so in a few
steps::

  # instance is located on A, B
  node1# gnt-instance replace-disks -n nodeC instance1
  # instance has moved from (A, B) to (A, C)
  # we now flip the primary/secondary nodes
  node1# gnt-instance migrate instance1
  # instance lives on (C, A)
  # we can then change A to D via:
  node1# gnt-instance replace-disks -n nodeD instance1

Which brings it into the final configuration of ``(C, D)``. Note that we
needed to do two replace-disks operations (two copies of the instance
disks), because we needed to get rid of both the original nodes (A and
B).

Node operations
---------------

There are far fewer node operations available than for instances, but
they are equally important for maintaining a healthy cluster.

Add/readd
+++++++++

It is at any time possible to extend the cluster with one more node, by
using the node add operation::

  gnt-node add NEW_NODE

If the cluster has a replication network defined, then you need to pass
the ``-s REPLICATION_IP`` parameter to this option.

A variation of this command can be used to re-configure a node if its
Ganeti configuration is broken, for example if it has been reinstalled
by mistake::

  gnt-node add --readd EXISTING_NODE

This will reinitialise the node as if it had been newly added, but while
keeping its existing configuration in the cluster (primary/secondary IP,
etc.); in other words you won't need to use ``-s`` here.

Changing the node role
++++++++++++++++++++++

A node can be in different roles, as explained in the
:ref:`terminology-label` section. Promoting a node to the master role is
special, while the other roles are all handled via a single command.

Failing over the master node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you want to promote a different node to the master role (for whatever
reason), run on any other master-candidate node the command::

  gnt-cluster master-failover

and the node you ran it on is now the new master. In case you try to run
this on a non master-candidate node, you will get an error telling you
which nodes are valid.

Changing between the other roles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``gnt-node modify`` command can be used to select a new role::

  # change to master candidate
  gnt-node modify -C yes NODE
  # change to drained status
  gnt-node modify -D yes NODE
  # change to offline status
  gnt-node modify -O yes NODE
  # change to regular mode (reset all flags)
  gnt-node modify -O no -D no -C no NODE

Note that the cluster requires that at any point in time, a certain
number of nodes are master candidates, so changing from master candidate
to other roles might fail. It is recommended to either force the
operation (via the ``--force`` option) or first change the number of
master candidates in the cluster - see :ref:`cluster-config-label`.

Evacuating nodes
++++++++++++++++

There are two steps to moving instances off a node:

- moving the primary instances (actually converting them into secondary
  instances)
- moving the secondary instances (including any instances converted in
  the step above)

Primary instance conversion
~~~~~~~~~~~~~~~~~~~~~~~~~~~

For this step, you can use either individual instance move
commands (as seen in :ref:`instance-change-primary-label`) or the bulk
per-node versions; these are::

  gnt-node migrate NODE
  gnt-node evacuate NODE

Note that the instance “move” command doesn't currently have a node
equivalent.

Both these commands, or the equivalent per-instance command, will make
this node the secondary node for the respective instances, whereas their
current secondary node will become primary. Note that it is not possible
to change in one step the primary node to another node as primary, while
keeping the same secondary node.

Secondary instance evacuation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the evacuation of secondary instances, a command called
:command:`gnt-node evacuate` is provided and its syntax is::

  gnt-node evacuate -I IALLOCATOR_SCRIPT NODE
  gnt-node evacuate -n DESTINATION_NODE NODE

The first version will compute the new secondary for each instance in
turn using the given iallocator script, whereas the second one will
simply move all instances to DESTINATION_NODE.

Removal
+++++++

Once a node no longer has any instances (neither primary nor secondary),
it's easy to remove it from the cluster::

  gnt-node remove NODE_NAME

This will deconfigure the node, stop the ganeti daemons on it and leave
it, hopefully, as it was before it joined the cluster.

Storage handling
++++++++++++++++

When using LVM (either standalone or with DRBD), it can become tedious
to debug and fix it in case of errors. Furthermore, even file-based
storage can become complicated to handle manually on many hosts. Ganeti
provides a couple of commands to help with automation.

Logical volumes
~~~~~~~~~~~~~~~

This is a command specific to LVM handling. It allows listing the
logical volumes on a given node or on all nodes and their association to
instances via the ``volumes`` command::

  node1# gnt-node volumes
  Node  PhysDev   VG    Name             Size Instance
  node1 /dev/sdb1 xenvg e61fbc97-….disk0 512M instance17
  node1 /dev/sdb1 xenvg ebd1a7d1-….disk0 512M instance19
  node2 /dev/sdb1 xenvg 0af08a3d-….disk0 512M instance20
  node2 /dev/sdb1 xenvg cc012285-….disk0 512M instance16
  node2 /dev/sdb1 xenvg f0fac192-….disk0 512M instance18

The above command maps each logical volume to a volume group and
underlying physical volume and (possibly) to an instance.

.. _storage-units-label:

Generalized storage handling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.1

Starting with Ganeti 2.1, a new storage framework has been implemented
that tries to abstract the handling of the storage type the cluster
uses.

First is listing the backend storage units and their space situation::

  node1# gnt-node list-storage
  Node  Name        Size Used   Free
  node1 /dev/sda7 673.8G   0M 673.8G
  node1 /dev/sdb1 698.6G 1.5G 697.1G
  node2 /dev/sda7 673.8G   0M 673.8G
  node2 /dev/sdb1 698.6G 1.0G 697.6G

The default is to list LVM physical volumes. It's also possible to list
the LVM volume groups::

  node1# gnt-node list-storage -t lvm-vg
  Node  Name  Size
  node1 xenvg 1.3T
  node2 xenvg 1.3T

Next is repairing storage units, which is currently only implemented for
volume groups and does the equivalent of ``vgreduce --removemissing``::

  node1# gnt-node repair-storage node2 lvm-vg xenvg
  Sun Oct 25 22:21:45 2009 Repairing storage unit 'xenvg' on node2 ...

Last is the modification of volume properties, which is (again) only
implemented for LVM physical volumes and allows toggling the
``allocatable`` value::

  node1# gnt-node modify-storage --allocatable=no node2 lvm-pv /dev/sdb1

Use of the storage commands
~~~~~~~~~~~~~~~~~~~~~~~~~~~

All these commands are needed when recovering a node from a disk
failure:

- first, we need to recover from complete LVM failure (due to a missing
  disk), by running the ``repair-storage`` command
- second, we need to change allocation on any partially-broken disk
  (i.e. LVM still sees it, but it has bad blocks) by running
  ``modify-storage``
- then we can evacuate the instances as needed, as sketched below
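
A sketch of the whole sequence, for a hypothetical failed ``/dev/sdb1``
on ``node2`` with ``node3`` as the evacuation target::

  # make the volume group consistent again
  gnt-node repair-storage node2 lvm-vg xenvg
  # stop allocations on the partially-broken physical volume
  gnt-node modify-storage --allocatable=no node2 lvm-pv /dev/sdb1
  # move primary instances away, then the secondaries
  gnt-node migrate node2
  gnt-node evacuate -n node3 node2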


Cluster operations
------------------

Besides the cluster initialisation command (which is detailed in the
:doc:`install` document) and the master failover command which is
explained under node handling, there are a couple of other cluster
operations available.

.. _cluster-config-label:

Standard operations
+++++++++++++++++++

One of the few commands that can be run on any node (not only the
master) is the ``getmaster`` command::

  node2# gnt-cluster getmaster
  node1.example.com
  node2#

It is possible to query and change global cluster parameters via the
``info`` and ``modify`` commands::

  node1# gnt-cluster info
  Cluster name: cluster.example.com
  Cluster UUID: 07805e6f-f0af-4310-95f1-572862ee939c
  Creation time: 2009-09-25 05:04:15
  Modification time: 2009-10-18 22:11:47
  Master node: node1.example.com
  Architecture (this node): 64bit (x86_64)
  …
  Tags: foo
  Default hypervisor: xen-pvm
  Enabled hypervisors: xen-pvm
  Hypervisor parameters:
    - xen-pvm:
        root_path: /dev/sda1
        …
  Cluster parameters:
    - candidate pool size: 10
    …
  Default instance parameters:
    - default:
        memory: 128
        …
  Default nic parameters:
    - default:
        link: xen-br0
        …

The various parameters above can be changed via the ``modify``
commands as follows:

- the hypervisor parameters can be changed via ``modify -H
  xen-pvm:root_path=…``, and so on for other hypervisors/key/values
- the "default instance parameters" are changeable via ``modify -B
  parameter=value…`` syntax
- the cluster parameters are changeable via separate options to the
  modify command (e.g. ``--candidate-pool-size``, etc.), as sketched
  below
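
A sketch of each of the three forms (the values are only examples)::

  # a hypervisor parameter
  gnt-cluster modify -H xen-pvm:root_path=/dev/xvda1
  # a default instance (backend) parameter
  gnt-cluster modify -B vcpus=2
  # a cluster parameter, via its dedicated option
  gnt-cluster modify --candidate-pool-size=5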

For the detailed option list, see the :manpage:`gnt-cluster(8)` man
page.

The cluster version can be obtained via the ``version`` command::

  node1# gnt-cluster version
  Software version: 2.1.0
  Internode protocol: 20
  Configuration format: 2010000
  OS api version: 15
  Export interface: 0

This is not very useful except when debugging Ganeti.

Global node commands
++++++++++++++++++++

There are two commands provided for replicating files to all nodes of a
cluster and for running commands on all the nodes::

  node1# gnt-cluster copyfile /path/to/file
  node1# gnt-cluster command ls -l /path/to/file

These are simple wrappers over scp/ssh and more advanced usage can be
obtained using :manpage:`dsh(1)` and similar commands. But they are
useful to update an OS script from the master node, for example.

Cluster verification
++++++++++++++++++++

There are three commands that relate to global cluster checks. The first
one is ``verify``, which gives an overview of the cluster state,
highlighting any issues. In normal operation, this command should return
no ``ERROR`` messages::

  node1# gnt-cluster verify
  Sun Oct 25 23:08:58 2009 * Verifying global settings
  Sun Oct 25 23:08:58 2009 * Gathering data (2 nodes)
  Sun Oct 25 23:09:00 2009 * Verifying node status
  Sun Oct 25 23:09:00 2009 * Verifying instance status
  Sun Oct 25 23:09:00 2009 * Verifying orphan volumes
  Sun Oct 25 23:09:00 2009 * Verifying remaining instances
  Sun Oct 25 23:09:00 2009 * Verifying N+1 Memory redundancy
  Sun Oct 25 23:09:00 2009 * Other Notes
  Sun Oct 25 23:09:00 2009   - NOTICE: 5 non-redundant instance(s) found.
  Sun Oct 25 23:09:00 2009 * Hooks Results

The second command is ``verify-disks``, which checks that the instances'
disks have the correct status based on the desired instance state
(up/down)::

  node1# gnt-cluster verify-disks

Note that this command will show no output when disks are healthy.

The last command is used to repair any discrepancies between Ganeti's
recorded disk size and the actual disk size (disk size information is
needed for proper activation and growth of DRBD-based disks)::

  node1# gnt-cluster repair-disk-sizes
  Sun Oct 25 23:13:16 2009  - INFO: Disk 0 of instance instance1 has mismatched size, correcting: recorded 512, actual 2048
  Sun Oct 25 23:13:17 2009  - WARNING: Invalid result from node node4, ignoring node results

The above shows one instance having a wrong disk size, and a node which
returned invalid data, and thus we ignored all primary instances of that
node.

Configuration redistribution
++++++++++++++++++++++++++++

If the verify command complains about file mismatches between the master
and other nodes, due to some node problems or if you manually modified
configuration files, you can force a push of the master configuration
to all other nodes via the ``redist-conf`` command::

  node1# gnt-cluster redist-conf
  node1#

This command will be silent unless there are problems sending updates to
the other nodes.


Cluster renaming
++++++++++++++++

It is possible to rename a cluster, or to change its IP address, via the
``rename`` command. If only the IP has changed, you need to pass the
current name and Ganeti will realise its IP has changed::

  node1# gnt-cluster rename cluster.example.com
  This will rename the cluster to 'cluster.example.com'. If
  you are connected over the network to the cluster name, the operation
  is very dangerous as the IP address will be removed from the node and
  the change may not go through. Continue?
  y/[n]/?: y
  Failure: prerequisites not met for this operation:
  Neither the name nor the IP address of the cluster has changed

In the above output, neither value has changed since the cluster
initialisation so the operation is not completed.

Queue operations
++++++++++++++++

The job queue execution in Ganeti 2.0 and higher can be inspected,
suspended and resumed via the ``queue`` command::

  node1~# gnt-cluster queue info
  The drain flag is unset
  node1~# gnt-cluster queue drain
  node1~# gnt-instance stop instance1
  Failed to submit job for instance1: Job queue is drained, refusing job
  node1~# gnt-cluster queue info
  The drain flag is set
  node1~# gnt-cluster queue undrain

This is most useful if you have an active cluster and you need to
upgrade the Ganeti software, or simply restart the software on any node:

#. suspend the queue via ``queue drain``
#. wait until there are no more running jobs via ``gnt-job list``
#. restart the master or another node, or upgrade the software
#. resume the queue via ``queue undrain``

.. note:: this command only stores a local flag file, and if you
   failover the master, it will not have effect on the new master.


Watcher control
+++++++++++++++

The :manpage:`ganeti-watcher` is a program, usually scheduled via
``cron``, that takes care of cluster maintenance operations (restarting
downed instances, activating down DRBD disks, etc.). However, during
maintenance and troubleshooting, this can get in your way; disabling it
by commenting out the cron job is not ideal, as this can be
forgotten. Thus there are some commands for automated control of the
watcher: ``pause``, ``info`` and ``continue``::

  node1~# gnt-cluster watcher info
  The watcher is not paused.
  node1~# gnt-cluster watcher pause 1h
  The watcher is paused until Mon Oct 26 00:30:37 2009.
  node1~# gnt-cluster watcher info
  The watcher is paused until Mon Oct 26 00:30:37 2009.
  node1~# ganeti-watcher -d
  2009-10-25 23:30:47,984: pid=28867 ganeti-watcher:486 DEBUG Pause has been set, exiting
  node1~# gnt-cluster watcher continue
  The watcher is no longer paused.
  node1~# ganeti-watcher -d
  2009-10-25 23:31:04,789: pid=28976 ganeti-watcher:345 DEBUG Archived 0 jobs, left 0
  2009-10-25 23:31:05,884: pid=28976 ganeti-watcher:280 DEBUG Got data from cluster, writing instance status file
  2009-10-25 23:31:06,061: pid=28976 ganeti-watcher:150 DEBUG Data didn't change, just touching status file
  node1~# gnt-cluster watcher info
  The watcher is not paused.
  node1~#

The exact details of the argument to the ``pause`` command are available
in the manpage.

.. note:: this command only stores a local flag file, and if you
   failover the master, it will not have effect on the new master.

Node auto-maintenance
+++++++++++++++++++++

If the cluster parameter ``maintain_node_health`` is enabled (see the
manpage for :command:`gnt-cluster`, the init and modify subcommands),
then the following will happen automatically:

- the watcher will shut down any instances running on offline nodes
- the watcher will deactivate any DRBD devices on offline nodes

In the future, more actions are planned, so only enable this parameter
if the nodes are completely dedicated to Ganeti; otherwise it might be
possible to lose data due to auto-maintenance actions.
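
A sketch of turning this on for an existing cluster, assuming the
``--maintain-node-health`` option of :command:`gnt-cluster modify`::

  gnt-cluster modify --maintain-node-health=yes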

Removing a cluster entirely
+++++++++++++++++++++++++++

The usual method to clean up a cluster is to run ``gnt-cluster
destroy``; however, if the Ganeti installation is broken in any way then
this will not run.

It is possible in such a case to clean up manually most if not all
traces of a cluster installation by following these steps on all of the
nodes:

1. Shutdown all instances. This depends on the virtualisation method
   used (Xen, KVM, etc.):

   - Xen: run ``xm list`` and ``xm destroy`` on all the non-Domain-0
     instances
   - KVM: kill all the KVM processes
   - chroot: kill all processes under the chroot mountpoints

2. If using DRBD, shutdown all DRBD minors (which should at this time no
   longer be in use by instances); on each node, run ``drbdsetup
   /dev/drbdN down`` for each active DRBD minor.

3. If using LVM, clean up the Ganeti volume group; if only Ganeti
   created logical volumes (and you are not sharing the volume group
   with the OS, for example), then simply running ``lvremove -f xenvg``
   (replace 'xenvg' with your volume group name) should do the required
   cleanup.

4. If using file-based storage, remove recursively all files and
   directories under your file-storage directory: ``rm -rf
   /srv/ganeti/file-storage/*`` replacing the path with the correct path
   for your cluster.

5. Stop the ganeti daemons (``/etc/init.d/ganeti stop``) and kill any
   that remain alive (``pgrep ganeti`` and ``pkill ganeti``).

6. Remove the ganeti state directory (``rm -rf /var/lib/ganeti/*``),
   replacing the path with the correct path for your installation.

7. If using RBD, run ``rbd unmap /dev/rbdN`` to unmap the RBD disks.
   Then remove the RBD disk images used by Ganeti, identified by their
   UUIDs (``rbd rm uuid.rbd.diskN``).

On the master node, remove the cluster from the master-netdev (usually
``xen-br0`` for bridged mode, otherwise ``eth0`` or similar), by running
``ip a del $clusterip/32 dev xen-br0`` (use the correct cluster ip and
network device name).

At this point, the machines are ready for a cluster creation; in case
you want to remove Ganeti completely, you need to also undo some of the
SSH changes and log directories:

- ``rm -rf /var/log/ganeti /srv/ganeti`` (replace with the correct
  paths)
- remove from ``/root/.ssh`` the keys that Ganeti added (check the
  ``authorized_keys`` and ``id_dsa`` files)
- regenerate the host's SSH keys (check the OpenSSH startup scripts)
- uninstall Ganeti

Otherwise, if you plan to re-create the cluster, you can just go ahead
and rerun ``gnt-cluster init``.
1290 |
|
1291 |
Tags handling
-------------

The tags handling (addition, removal, listing) is similar for all the
objects that support it (instances, nodes, and the cluster).

Limitations
+++++++++++

Note that the set of characters allowed in a tag and the maximum tag
length are restricted. Currently the maximum length is 128 characters,
there can be at most 4096 tags per object, and the allowed characters
are alphanumeric characters plus ``.+*/:@-``.

Operations
++++++++++

Tags can be added via ``add-tags``::

  gnt-instance add-tags INSTANCE a b c
  gnt-node add-tags NODE a b c
  gnt-cluster add-tags a b c


The above commands add three tags to an instance, to a node and to the
cluster. Note that the cluster command only takes tags as arguments,
whereas the node and instance commands first require the node and
instance name.

Tags can also be added from a file, via the ``--from=FILENAME``
argument. The file is expected to contain one tag per line; see the
example below.

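As an illustrative sketch (the file name and tag values below are made
up), such a file and its use would look like::

  # Illustrative file name and tags only.
  node1# cat /tmp/mytags
  owner:bar
  environment:test
  rack:a4
  node1# gnt-instance add-tags --from=/tmp/mytags instance1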

Tags can also be removed via a syntax very similar to the add one::

  gnt-instance remove-tags INSTANCE a b c

And listed via::

  gnt-instance list-tags
  gnt-node list-tags
  gnt-cluster list-tags

Global tag search
+++++++++++++++++

It is also possible to execute a global search on all the tags defined
in the cluster configuration, via a cluster command::

  gnt-cluster search-tags REGEXP

The parameter expected is a regular expression (see
:manpage:`regex(7)`). This will return all tags that match the search,
together with the object they are defined in (the names being shown in
a hierarchical fashion)::

  node1# gnt-cluster search-tags o
  /cluster foo
  /instances/instance1 owner:bar


Job operations
--------------

The various jobs submitted by the instance/node/cluster commands can be
examined, canceled and archived by various invocations of the
``gnt-job`` command.

First is the job list command::

  node1# gnt-job list
  17771 success INSTANCE_QUERY_DATA
  17773 success CLUSTER_VERIFY_DISKS
  17775 success CLUSTER_REPAIR_DISK_SIZES
  17776 error CLUSTER_RENAME(cluster.example.com)
  17780 success CLUSTER_REDIST_CONF
  17792 success INSTANCE_REBOOT(instance1.example.com)

More detailed information about a job can be found via the ``info``
command::

  node1# gnt-job info 17776
  Job ID: 17776
    Status: error
    Received: 2009-10-25 23:18:02.180569
    Processing start: 2009-10-25 23:18:02.200335 (delta 0.019766s)
    Processing end: 2009-10-25 23:18:02.279743 (delta 0.079408s)
    Total processing time: 0.099174 seconds
    Opcodes:
      OP_CLUSTER_RENAME
        Status: error
        Processing start: 2009-10-25 23:18:02.200335
        Processing end: 2009-10-25 23:18:02.252282
        Input fields:
          name: cluster.example.com
        Result:
          OpPrereqError
          [Neither the name nor the IP address of the cluster has changed]
        Execution log:

During the execution of a job, it's possible to follow the output of a
job, similar to the log that one gets from the ``gnt-`` commands, via
the watch command::

  node1# gnt-instance add --submit … instance1
  JobID: 17818
  node1# gnt-job watch 17818
  Output from job 17818 follows
  -----------------------------
  Mon Oct 26 00:22:48 2009 - INFO: Selected nodes for instance instance1 via iallocator dumb: node1, node2
  Mon Oct 26 00:22:49 2009 * creating instance disks...
  Mon Oct 26 00:22:52 2009 adding instance instance1 to cluster config
  Mon Oct 26 00:22:52 2009 - INFO: Waiting for instance instance1 to sync disks.
  …
  Mon Oct 26 00:23:03 2009 creating os for instance instance1 on node node1
  Mon Oct 26 00:23:03 2009 * running the instance OS create scripts...
  Mon Oct 26 00:23:13 2009 * starting instance...
  node1#

This is useful if you need to follow a job's progress from multiple
terminals.

A job that has not yet started to run can be canceled::

  node1# gnt-job cancel 17810

But not one that has already started execution::

  node1# gnt-job cancel 17805
  Job 17805 is no longer waiting in the queue

There are two queues for jobs: the *current* and the *archive*
queue. Jobs are initially submitted to the current queue, and they stay
in that queue until they have finished execution (either successfully
or not). At that point, they can be moved into the archive queue using
e.g. ``gnt-job autoarchive all``. The ``ganeti-watcher`` script will do
this automatically 6 hours after a job is finished. The
``ganeti-cleaner`` script will then remove the archived jobs from the
archive directory after three weeks.

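A minimal illustration, using the ``autoarchive`` invocation mentioned
above to move all finished jobs out of the current queue (after which
they no longer appear in the default listing)::

  node1# gnt-job autoarchive all
  node1# gnt-job list
  node1#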

Note that ``gnt-job list`` only shows jobs in the current queue.
Archived jobs can be viewed using ``gnt-job info <id>``.

Special Ganeti deployments
--------------------------

Since Ganeti 2.4, it is possible to extend the Ganeti deployment with
two custom scenarios: Ganeti inside Ganeti and the multi-site model.

Running Ganeti under Ganeti
+++++++++++++++++++++++++++

It is sometimes useful to be able to use a Ganeti instance as a Ganeti
node (part of another cluster, usually). One example scenario is two
small clusters, where we want to have an additional master candidate
that holds the cluster configuration and can be used for helping with
the master voting process.

However, these Ganeti instances should not host instances themselves,
and should not be considered in the normal capacity planning,
evacuation strategies, etc. In order to accomplish this, mark these
nodes as non-``vm_capable``::

  node1# gnt-node modify --vm-capable=no node3

The vm_capable status can be listed as usual via ``gnt-node list``::

  node1# gnt-node list -oname,vm_capable
  Node  VMCapable
  node1 Y
  node2 Y
  node3 N

When this flag is set, the cluster will not do any operations that
relate to instances on such nodes, e.g. hypervisor operations,
disk-related operations, etc. Basically they will just keep the ssconf
files and, if they are master candidates, the full configuration.

Multi-site model
++++++++++++++++

If Ganeti is deployed in a multi-site model, with each site being a
node group (so that instances are not relocated across the WAN by
mistake), it is conceivable that either the WAN latency is high or that
some sites have a lower reliability than others. In this case, it
doesn't make sense to replicate the job information across all sites
(or even outside of a “central” node group), so it should be possible
to restrict which nodes can become master candidates via the
auto-promotion algorithm.

Ganeti 2.4 introduces for this purpose a new ``master_capable`` flag,
which (when unset) prevents nodes from being marked as master
candidates, either manually or automatically.

As usual, the node modify operation can change this flag::

  node1# gnt-node modify --auto-promote --master-capable=no node3
  Fri Jan 7 06:23:07 2011 - INFO: Demoting from master candidate
  Fri Jan 7 06:23:08 2011 - INFO: Promoted nodes to master candidate role: node4
  Modified node node3
   - master_capable -> False
   - master_candidate -> False

And the node list operation will list this flag::

  node1# gnt-node list -oname,master_capable node1 node2 node3
  Node  MasterCapable
  node1 Y
  node2 Y
  node3 N

Note that marking a node both not ``vm_capable`` and not
``master_capable`` makes the node practically unusable from Ganeti's
point of view. Hence these two flags should usually be used in a
complementary fashion: some nodes will only be master candidates
(master_capable but not vm_capable), and other nodes will only hold
instances (vm_capable but not master_capable).


Ganeti tools
------------

Besides the usual ``gnt-`` and ``ganeti-`` commands which are provided
and installed in ``$prefix/sbin`` at install time, there are a couple
of other tools installed which are seldom used but can be helpful in
some cases.

lvmstrap
++++++++

The ``lvmstrap`` tool, introduced in the :ref:`configure-lvm-label`
section, has two modes of operation:

- ``diskinfo`` shows the discovered disks on the system and their
  status
- ``create`` takes all not-in-use disks and creates a volume group out
  of them

.. warning:: The ``create`` argument to this command causes data loss!

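A minimal illustration of the two modes; the installation path is an
assumption (many installations ship the helper tools under
``/usr/lib/ganeti/tools``), so adjust it to your setup::

  # Path is an assumption; adjust to your installation.
  # Read-only inspection of the discovered disks.
  node1# /usr/lib/ganeti/tools/lvmstrap diskinfo
  # Destructive: only run on hosts whose unused disks are dedicated to Ganeti.
  node1# /usr/lib/ganeti/tools/lvmstrap create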

cfgupgrade
++++++++++

The ``cfgupgrade`` tool is used to upgrade between major (and minor)
Ganeti versions. Point releases are usually transparent for the admin.

More information about the upgrade procedure is listed on the wiki at
http://code.google.com/p/ganeti/wiki/UpgradeNotes.

There is also a script designed to upgrade from Ganeti 1.2 to 2.0,
called ``cfgupgrade12``.

cfgshell
++++++++

.. note:: This command is not actively maintained; make sure you back
   up your configuration before using it

This can be used as an alternative to direct editing of the main
configuration file if Ganeti has a bug that prevents you, for example,
from removing an instance or a node from the configuration file.

.. _burnin-label:

burnin
++++++

.. warning:: This command will erase existing instances if given as
   arguments!

This tool is used to exercise either the hardware of machines or,
alternatively, the Ganeti software. It is safe to run on an existing
cluster **as long as you don't pass it existing instance names**.

The command will, by default, execute a comprehensive set of operations
against a list of instances, these being:

- creation
- disk replacement (for redundant instances)
- failover and migration (for redundant instances)
- move (for non-redundant instances)
- disk growth
- add disks, remove disk
- add NICs, remove NICs
- export and then import
- rename
- reboot
- shutdown/startup
- and finally removal of the test instances

Executing all these operations will test that the hardware performs
well: the creation, disk replace, disk add and disk growth will
exercise the storage and network; the migrate command will test the
memory of the systems. Depending on the passed options, it can also
test that the instance OS definitions are properly executing the
rename, import and export operations.

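A sketch of a typical invocation; the installation path, the
``debootstrap`` OS name and the ``-o`` option are assumptions to verify
against ``burnin --help`` for your version, and the instance names must
not already exist on the cluster::

  # Path and options are assumptions; check burnin --help first.
  node1# /usr/lib/ganeti/tools/burnin -o debootstrap \
      burnin-test1.example.com burnin-test2.example.com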

sanitize-config
+++++++++++++++

This tool takes the Ganeti configuration and outputs a "sanitized"
version, by randomizing or clearing:

- DRBD secrets and cluster public key (always)
- host names (optional)
- IPs (optional)
- OS names (optional)
- LV names (optional, only useful for very old clusters which still
  have instances whose LVs are based on the instance name)

By default, all optional items are activated except the LV name
randomization. When passing ``--no-randomization``, which disables the
optional items (i.e. just the DRBD secrets and cluster public keys are
randomized), the resulting file can be used as a safety copy of the
cluster config: while not trivial, the layout of the cluster can be
recreated from it, and if the instance disks have not been lost it
permits recovery from the loss of all master candidates.


move-instance
+++++++++++++

See :doc:`separate documentation for move-instance <move-instance>`.

.. TODO: document cluster-merge tool


Other Ganeti projects
---------------------

Below is a list (which might not be up-to-date) of additional projects
that can be useful in a Ganeti deployment. They can be downloaded from
the project site (http://code.google.com/p/ganeti/) and the
repositories are also on the project git site (http://git.ganeti.org).

NBMA tools
++++++++++

The ``ganeti-nbma`` software is designed to allow instances to live on
a separate, virtual network from the nodes, and in an environment where
nodes are not guaranteed to be able to reach each other via
multicasting or broadcasting. For more information see the README in
the source archive.

ganeti-htools
+++++++++++++

Before Ganeti version 2.5, this was a standalone project; since that
version it is integrated into the Ganeti codebase (see
:doc:`install-quick` for instructions on how to enable it). If you run
an older Ganeti version, you will have to download and build it
separately.

For more information and installation instructions, see the README file
in the source archive.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: