Ganeti administrator's guide
============================

Documents Ganeti version |version|

.. contents::

.. highlight:: text

Introduction
------------

Ganeti is virtualization cluster management software. You are expected
to be a system administrator familiar with your Linux distribution and
the Xen or KVM virtualization environments before using it.

The various components of Ganeti all have man pages and interactive
help. This manual, though, will help you get familiar with the system
by explaining the most common operations, grouped by related use.

After a terminology glossary and a section on the prerequisites needed
to use this manual, the rest of this document is divided into sections
for the different targets that a command affects: instances, nodes,
etc.

.. _terminology-label:

Ganeti terminology
++++++++++++++++++

This section provides a small introduction to Ganeti terminology, which
might be useful when reading the rest of the document.

Cluster
~~~~~~~

A set of machines (nodes) that cooperate to offer a coherent, highly
available virtualization service under a single administration domain.

Node
~~~~

A physical machine which is a member of a cluster. Nodes are the basic
cluster infrastructure, and they don't need to be fault tolerant in
order to achieve high availability for instances.

Nodes can be added and removed (if they host no instances) at will from
the cluster. In an HA cluster and only with HA instances, the loss of
any single node will not cause disk data loss for any instance; of
course, a node crash will cause the crash of its primary instances.

A node belonging to a cluster can be in one of the following roles at a
given time:

- *master* node, which is the node from which the cluster is controlled
- *master candidate* node, only nodes in this role have the full cluster
  configuration and knowledge, and only master candidates can become the
  master node
- *regular* node, which is the state in which most nodes will be on
  bigger clusters (>20 nodes)
- *drained* node, nodes in this state are functioning normally but they
  cannot receive new instances; the intention is that nodes in this role
  have some issue and they are being evacuated for hardware repairs
- *offline* node, in which there is a record in the cluster
  configuration about the node, but the daemons on the master node will
  not talk to this node; any instances declared as having an offline
  node as either primary or secondary will be flagged as an error in the
  cluster verify operation

Depending on the role, each node will run a set of daemons:

- the :command:`ganeti-noded` daemon, which controls the manipulation
  of this node's hardware resources; it runs on all nodes which are in
  a cluster
- the :command:`ganeti-confd` daemon (Ganeti 2.1+) which runs on all
  nodes, but is only functional on master candidate nodes
- the :command:`ganeti-rapi` daemon which runs on the master node and
  offers an HTTP-based API for the cluster
- the :command:`ganeti-masterd` daemon which runs on the master node
  and allows control of the cluster
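
As a quick way to see the current role of each node, one can list the
role-related fields; a sketch, assuming the ``master_candidate``,
``drained`` and ``offline`` output fields of :command:`gnt-node list`
are available in your version::

  gnt-node list -o name,master_candidate,drained,offline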

Instance
~~~~~~~~

A virtual machine which runs on a cluster. It can be a fault tolerant,
highly available entity.

An instance has various parameters, which are classified in three
categories: hypervisor-related parameters (called ``hvparams``), general
parameters (called ``beparams``) and per-network-card parameters (called
``nicparams``). All these parameters can be modified either at instance
level or via defaults at cluster level.
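
For illustration, a minimal sketch of both levels (parameter keys other
than ``memory`` depend on your setup)::

  # set a backend parameter on a single instance
  gnt-instance modify -B memory=512 instance1
  # set the cluster-wide default instead
  gnt-cluster modify -B memory=512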

Disk template
~~~~~~~~~~~~~

There are multiple options for the storage provided to an instance;
while the instance sees the same virtual drive in all cases, the
node-level configuration varies between them.

There are four disk templates you can choose from:

diskless
  The instance has no disks. Only used for special purpose operating
  systems or for testing.

file
  The instance will use plain files as backend for its disks. No
  redundancy is provided, and this is somewhat more difficult to
  configure for high performance.

plain
  The instance will use LVM devices as backend for its disks. No
  redundancy is provided.

drbd
  .. note:: This is only valid for multi-node clusters using DRBD 8.0+

  A mirror is set between the local node and a remote one, which must be
  specified with the second value of the --node option. Use this option
  to obtain a highly available instance that can be failed over to a
  remote node should the primary one fail.

IAllocator
~~~~~~~~~~

A framework for using external (user-provided) scripts to compute the
placement of instances on the cluster nodes. This eliminates the need
to manually specify nodes in instance add, instance moves, node
evacuate, etc.

In order for Ganeti to be able to use these scripts, they must be
placed in the iallocator directory (usually ``lib/ganeti/iallocators``
under the installation prefix, e.g. ``/usr/local``).
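
Once such a script is installed, commands that normally take node names
can select it by its file name instead; a sketch, assuming a script
installed as ``dumb``::

  gnt-instance add -I dumb -o OS_TYPE -t drbd -s DISK_SIZE INSTANCE_NAME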

“Primary” and “secondary” concepts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An instance has a primary node and, depending on the disk
configuration, might also have a secondary node. The instance always
runs on the primary node and only uses its secondary node for disk
replication.

Similarly, the terms “primary” and “secondary” instances, when talking
about a node, refer to the set of instances having the given node as
primary, respectively secondary.

Tags
~~~~

Tags are short strings that can be attached either to the cluster
itself, or to nodes or instances. They are useful as a very simplistic
information store for helping with cluster administration, for example
by attaching owner information to each instance after it's created::

  gnt-instance add … instance1
  gnt-instance add-tags instance1 owner:user2

And then by listing each instance and its tags, this information could
be used for contacting the users of each instance.
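
A sketch of such a listing (assuming the ``tags`` output field of
:command:`gnt-instance list` is available in your version)::

  gnt-instance list -o name,tags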

Jobs and OpCodes
~~~~~~~~~~~~~~~~

While not directly visible by an end-user, it's useful to know that a
basic cluster operation (e.g. starting an instance) is represented
internally by Ganeti as an *OpCode* (abbreviation from operation
code). These OpCodes are executed as part of a *Job*. The OpCodes in a
single Job are processed serially by Ganeti, but different Jobs will be
processed (depending on resource availability) in parallel.

For example, shutting down the entire cluster can be done by running the
command ``gnt-instance shutdown --all``, which will submit for each
instance a separate job containing the “shutdown instance” OpCode.

Prerequisites
+++++++++++++

You need to have your Ganeti cluster installed and configured before
you try any of the commands in this document. Please follow the
:doc:`install` document for instructions on how to do that.

Instance management
-------------------

Adding an instance
++++++++++++++++++

The add operation might seem complex due to the many parameters it
accepts, but once you have understood the (few) required parameters and
the customisation capabilities you will see it is an easy operation.

The add operation requires at minimum five parameters:

- the OS for the instance
- the disk template
- the disk count and size
- the node specification or alternatively the iallocator to use
- and finally the instance name

The OS for the instance must be visible in the output of the command
``gnt-os list`` and specifies which guest OS to install on the
instance.

The disk template specifies what kind of storage to use as backend for
the (virtual) disks presented to the instance; note that for instances
with multiple virtual disks, they all must be of the same type.

The node(s) on which the instance will run can be given either
manually, via the ``-n`` option, or computed automatically by Ganeti,
if you have installed any iallocator script.

With the above parameters in mind, the command is::

  gnt-instance add \
    -n TARGET_NODE:SECONDARY_NODE \
    -o OS_TYPE \
    -t DISK_TEMPLATE -s DISK_SIZE \
    INSTANCE_NAME

The instance name must be resolvable (e.g. exist in DNS) and usually
points to an address in the same subnet as the cluster itself.

The above command has the minimum required options; other options you
can give include, among others:

- The memory size (``-B memory``)

- The number of virtual CPUs (``-B vcpus``)

- Arguments for the NICs of the instance; by default, a single-NIC
  instance is created. The IP and/or bridge of the NIC can be changed
  via ``--nic 0:ip=IP,bridge=BRIDGE``

See the manpage for gnt-instance for the detailed option list.
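
As an illustration, a sketch combining these options (all node,
instance and address names below are placeholders)::

  gnt-instance add \
    -n node1:node2 -o debootstrap -t drbd -s 10G \
    -B memory=512,vcpus=2 \
    --nic 0:ip=192.0.2.10,bridge=xen-br0 \
    instance2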

For example if you want to create a highly available instance, with a
single disk of 50GB and the default memory size, having primary node
``node1`` and secondary node ``node3``, use the following command::

  gnt-instance add -n node1:node3 -o debootstrap -t drbd -s 50G \
    instance1

There is also a command for batch instance creation from a
specification file, see the ``batch-create`` operation in the
gnt-instance manual page.

Regular instance operations
+++++++++++++++++++++++++++

Removal
~~~~~~~

Removing an instance is even easier than creating one. This operation
is irreversible and destroys all the contents of your instance. Use
with care::

  gnt-instance remove INSTANCE_NAME

Startup/shutdown
~~~~~~~~~~~~~~~~

Instances are automatically started at instance creation time. To
manually start one which is currently stopped you can run::

  gnt-instance startup INSTANCE_NAME

While the command to stop one is::

  gnt-instance shutdown INSTANCE_NAME

.. warning:: Do not use the Xen or KVM commands directly to stop
   instances. If you run for example ``xm shutdown`` or ``xm destroy``
   on an instance, Ganeti will automatically restart it (via the
   :command:`ganeti-watcher` command which is launched via cron).

Querying instances
~~~~~~~~~~~~~~~~~~

There are two ways to get information about instances: listing
instances, which produces tabular output containing a given set of
fields about each instance, and querying detailed information about a
set of instances.

The command to see all the instances configured and their status is::

  gnt-instance list

The command can return a custom set of information when using the
``-o`` option (as always, check the manpage for a detailed
specification). Each instance will be represented on a line, thus
making it easy to parse this output via the usual shell utilities
(grep, sed, etc.).
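
For example, a sketch that selects a few common fields (check
:manpage:`gnt-instance(8)` for the exact field names supported by your
version)::

  gnt-instance list -o name,status,pnode,snodes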

To get more detailed information about an instance, you can run::

  gnt-instance info INSTANCE

which will give a multi-line block of information about the instance,
its hardware resources (especially its disks and their redundancy
status), etc. This is harder to parse and is more expensive than the
list operation, but returns much more detailed information.

Export/Import
+++++++++++++

You can create a snapshot of an instance disk and its Ganeti
configuration, which you can then back up, or import into another
cluster. The way to export an instance is::

  gnt-backup export -n TARGET_NODE INSTANCE_NAME

The target node can be any node in the cluster with enough space under
``/srv/ganeti`` to hold the instance image. Use the ``--noshutdown``
option to snapshot an instance without rebooting it. Note that Ganeti
only keeps one snapshot for an instance - any previous snapshot of the
same instance existing cluster-wide under ``/srv/ganeti`` will be
removed by this operation: if you want to keep them, you need to move
them out of the Ganeti exports directory.

Importing an instance is similar to creating a new one, but
additionally one must specify the location of the snapshot. The command
is::

  gnt-backup import -n TARGET_NODE \
    --src-node=NODE --src-dir=DIR INSTANCE_NAME

By default, parameters will be read from the export information, but
you can of course pass them in via the command line - most of the
options available for the command :command:`gnt-instance add` are
supported here too.
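
A sketch with concrete values (assuming the default export directory of
``/srv/ganeti/export``; adjust the paths to your installation)::

  gnt-backup import -n node1 \
    --src-node=node2 --src-dir=/srv/ganeti/export/instance1 \
    instance1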

Import of foreign instances
+++++++++++++++++++++++++++

It is possible to import a foreign instance whose disk data is already
stored as LVM volumes without going through copying it: the disk
adoption mode.

For this, ensure that the original, non-managed instance is stopped,
then create a Ganeti instance in the usual way, except that instead of
passing the disk information you specify the current volumes::

  gnt-instance add -t plain -n HOME_NODE ... \
    --disk 0:adopt=lv_name INSTANCE_NAME

This will take over the given logical volumes, rename them to the
Ganeti standard (UUID-based), and start the instance directly, without
installing the OS on them. If you configure the hypervisor similarly to
the non-managed configuration that the instance had, the transition
should be seamless for the instance. For more than one disk, just pass
another disk parameter (e.g. ``--disk 1:adopt=...``).

Instance HA features
--------------------

.. note:: This section only applies to multi-node clusters

.. _instance-change-primary-label:

Changing the primary node
+++++++++++++++++++++++++

There are three ways to exchange an instance's primary and secondary
nodes; the right one to choose depends on how the instance has been
created and the status of its current primary node. See
:ref:`rest-redundancy-label` for information on changing the secondary
node. Note that it's only possible to change the primary node to the
secondary and vice-versa; a direct change of the primary node with a
third node, while keeping the current secondary, is not possible in a
single step, only via multiple operations as detailed in
:ref:`instance-relocation-label`.

Failing over an instance
~~~~~~~~~~~~~~~~~~~~~~~~

If an instance is built in highly available mode you can at any time
fail it over to its secondary node, even if the primary has somehow
failed and it's not up anymore. Doing it is really easy, on the master
node you can just run::

  gnt-instance failover INSTANCE_NAME

That's it. After the command completes the secondary node is now the
primary, and vice-versa.

Live migrating an instance
~~~~~~~~~~~~~~~~~~~~~~~~~~

If an instance is built in highly available mode, it currently runs and
both its nodes are running fine, you can migrate it over to its
secondary node, without downtime. On the master node you need to run::

  gnt-instance migrate INSTANCE_NAME

The current load on the instance and its memory size will influence how
long the migration will take. In any case, for both KVM and Xen
hypervisors, the migration will be transparent to the instance.

Moving an instance (offline)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If an instance has not been created as mirrored, then the only way to
change its primary node is to execute the move command::

  gnt-instance move -n NEW_NODE INSTANCE

This has a few prerequisites:

- the instance must be stopped
- its current primary node must be on-line and healthy
- the disks of the instance must not have any errors

Since this operation actually copies the data from the old node to the
new node, expect it to take a time proportional to the size of the
instance's disks and the speed of both the nodes' I/O system and their
networking.

Disk operations
+++++++++++++++

Disk failures are a common cause of errors in any server
deployment. Ganeti offers protection from single-node failure if your
instances were created in HA mode, and it also offers ways to restore
redundancy after a failure.

Preparing for disk operations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is important to note that for Ganeti to be able to do any disk
operation, the Linux machines on top of which Ganeti runs must be
consistent; for LVM, this means that the LVM commands must not return
failures; it is common that after a complete disk failure, any LVM
command aborts with an error similar to::

  # vgs
  /dev/sdb1: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdb1: read failed after 0 of 4096 at 750153695232: Input/output
  error
  /dev/sdb1: read failed after 0 of 4096 at 0: Input/output error
  Couldn't find device with uuid
  't30jmN-4Rcf-Fr5e-CURS-pawt-z0jU-m1TgeJ'.
  Couldn't find all physical volumes for volume group xenvg.

Before restoring an instance's disks to healthy status, you need to
fix the volume group used by Ganeti so that we can actually create and
manage the logical volumes. This is usually done in a multi-step
process:

#. first, if the disk is completely gone and LVM commands exit with
   “Couldn't find device with uuid…” then you need to run the command::

    vgreduce --removemissing VOLUME_GROUP

#. after the above command, the LVM commands should be executing
   normally (warnings are normal, but the commands will not fail
   completely).

#. if the failed disk is still visible in the output of the ``pvs``
   command, you need to deactivate it from allocations by running::

    pvchange -x n /dev/DISK

At this point, the volume group should be consistent and any bad
physical volumes should no longer be available for allocation.

Note that since version 2.1 Ganeti provides some commands to automate
these two operations, see :ref:`storage-units-label`.

.. _rest-redundancy-label:

Restoring redundancy for DRBD-based instances
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A DRBD instance has two nodes, and the storage on one of them has
failed. Depending on which node (primary or secondary) has failed, you
have three options at hand:

- if the storage on the primary node has failed, you need to re-create
  the disks on it
- if the storage on the secondary node has failed, you can either
  re-create the disks on it or change the secondary and recreate
  redundancy on the new secondary node

Of course, at any point it's possible to force re-creation of disks
even though everything is already fine.

For all three cases, the ``replace-disks`` operation can be used::

  # re-create disks on the primary node
  gnt-instance replace-disks -p INSTANCE_NAME
  # re-create disks on the current secondary
  gnt-instance replace-disks -s INSTANCE_NAME
  # change the secondary node, via manual specification
  gnt-instance replace-disks -n NODE INSTANCE_NAME
  # change the secondary node, via an iallocator script
  gnt-instance replace-disks -I SCRIPT INSTANCE_NAME
  # since Ganeti 2.1: automatically fix the primary or secondary node
  gnt-instance replace-disks -a INSTANCE_NAME

Since the process involves copying all data from the working node to
the target node, it will take a while, depending on the instance's disk
size, node I/O system and network speed. But it is (barring any network
interruption) completely transparent for the instance.

Re-creating disks for non-redundant instances
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.1

For non-redundant instances, there isn't a copy (except backups) from
which to re-create the disks. But it's possible to at least re-create
empty disks, after which a reinstall can be run, via the
``recreate-disks`` command::

  gnt-instance recreate-disks INSTANCE

Note that this will fail if the disks already exist.

Conversion of an instance's disk type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to convert between a non-redundant instance of type
``plain`` (LVM storage) and redundant ``drbd`` via the ``gnt-instance
modify`` command::

  # start with a non-redundant instance
  gnt-instance add -t plain ... INSTANCE

  # later convert it to redundant
  gnt-instance stop INSTANCE
  gnt-instance modify -t drbd INSTANCE
  gnt-instance start INSTANCE

  # and convert it back
  gnt-instance stop INSTANCE
  gnt-instance modify -t plain INSTANCE
  gnt-instance start INSTANCE

The conversion must be done while the instance is stopped, and
converting from plain to drbd template presents a small risk,
especially if the instance has multiple disks and/or if one node fails
during the conversion procedure. As such, it's recommended (as always)
to make sure that downtime for manual recovery is acceptable and that
the instance has up-to-date backups.

Debugging instances
+++++++++++++++++++

Accessing an instance's disks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From an instance's primary node you can have access to its disks. Never
ever mount the underlying logical volume manually on a fault tolerant
instance, or you will break replication and your data will be
inconsistent. The correct way to access an instance's disks is to run
(on the master node, as usual) the command::

  gnt-instance activate-disks INSTANCE

And then, *on the primary node of the instance*, access the device that
gets created. For example, you could mount the given disks, then edit
files on the filesystem, etc.

Note that with partitioned disks (as opposed to whole-disk
filesystems), you will need to use a tool like :manpage:`kpartx(8)`::

  node1# gnt-instance activate-disks instance1

  node1# ssh node3
  node3# kpartx -l /dev/…
  node3# kpartx -a /dev/…
  node3# mount /dev/mapper/… /mnt/
  # edit files under mnt as desired
  node3# umount /mnt/
  node3# kpartx -d /dev/…
  node3# exit
  node1#

After you've finished you can deactivate them with the
deactivate-disks command, which works in the same way::

  gnt-instance deactivate-disks INSTANCE

Note that if any process started by you is still using the disks, the
above command will error out, and you **must** clean up and ensure that
the above command runs successfully before you start the instance,
otherwise the instance will suffer corruption.

Accessing an instance's console
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The command to access a running instance's console is::

  gnt-instance console INSTANCE_NAME

Use the console normally and then type ``^]`` when done, to exit.

Other instance operations
+++++++++++++++++++++++++

Reboot
~~~~~~

There is a wrapper command for rebooting instances::

  gnt-instance reboot instance2

By default, this does the equivalent of shutting down and then starting
the instance, but it accepts parameters to perform a soft-reboot (via
the hypervisor), a hard reboot (hypervisor shutdown and then startup)
or a full one (the default, which also de-configures and then
configures again the disks of the instance).
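
For example, to request a particular reboot type (a sketch; see
:manpage:`gnt-instance(8)` for the accepted ``--type`` values)::

  gnt-instance reboot --type=soft instance2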

Instance OS definitions debugging
+++++++++++++++++++++++++++++++++

Should you have any problems with instance operating systems, the
command to see a complete status for all your nodes is::

   gnt-os diagnose

.. _instance-relocation-label:

Instance relocation
~~~~~~~~~~~~~~~~~~~

While it is not possible to move an instance from nodes ``(A, B)`` to
nodes ``(C, D)`` in a single move, it is possible to do so in a few
steps::

  # instance is located on A, B
  node1# gnt-instance replace-disks -n nodeC instance1
  # instance has moved from (A, B) to (A, C)
  # we now flip the primary/secondary nodes
  node1# gnt-instance migrate instance1
  # instance lives on (C, A)
  # we can then change A to D via:
  node1# gnt-instance replace-disks -n nodeD instance1

Which brings it into the final configuration of ``(C, D)``. Note that
we needed to do two replace-disks operations (two copies of the
instance disks), because we needed to get rid of both the original
nodes (A and B).

Node operations
---------------

There are far fewer node operations available than for instances, but
they are equivalently important for maintaining a healthy cluster.

Add/readd
+++++++++

It is at any time possible to extend the cluster with one more node, by
using the node add operation::

  gnt-node add NEW_NODE

If the cluster has a replication network defined, then you need to pass
the ``-s REPLICATION_IP`` parameter to this command.

A variation of this command can be used to re-configure a node if its
Ganeti configuration is broken, for example if it has been reinstalled
by mistake::

  gnt-node add --readd EXISTING_NODE

This will reinitialise the node as if it had been newly added, but
while keeping its existing configuration in the cluster
(primary/secondary IP, etc.), in other words you won't need to use
``-s`` here.

Changing the node role
++++++++++++++++++++++

A node can be in different roles, as explained in the
:ref:`terminology-label` section. Promoting a node to the master role is
special, while the other roles are handled all via a single command.

Failing over the master node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you want to promote a different node to the master role (for
whatever reason), run on any other master-candidate node the command::

  gnt-cluster masterfailover

and the node you ran it on is now the new master. In case you try to
run this on a non master-candidate node, you will get an error telling
you which nodes are valid.

Changing between the other roles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``gnt-node modify`` command can be used to select a new role::

  # change to master candidate
  gnt-node modify -C yes NODE
  # change to drained status
  gnt-node modify -D yes NODE
  # change to offline status
  gnt-node modify -O yes NODE
  # change to regular mode (reset all flags)
  gnt-node modify -O no -D no -C no NODE

Note that the cluster requires that at any point in time, a certain
number of nodes are master candidates, so changing from master
candidate to other roles might fail. It is recommended to either force
the operation (via the ``--force`` option) or first change the number
of master candidates in the cluster - see :ref:`cluster-config-label`.
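
For example, a sketch of demoting a node when the candidate pool is
already at its minimum (the values are illustrative only)::

  # allow one fewer master candidate...
  gnt-cluster modify --candidate-pool-size=9
  # ...then the demotion no longer needs --force
  gnt-node modify -C no node4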

Evacuating nodes
++++++++++++++++

There are two steps to moving instances off a node:

- moving the primary instances (actually converting them into secondary
  instances)
- moving the secondary instances (including any instances converted in
  the step above)

Primary instance conversion
~~~~~~~~~~~~~~~~~~~~~~~~~~~

For this step, you can use either individual instance move
commands (as seen in :ref:`instance-change-primary-label`) or the bulk
per-node versions; these are::

  gnt-node migrate NODE
  gnt-node evacuate NODE

Note that the instance “move” command doesn't currently have a node
equivalent.

Both these commands, or the equivalent per-instance command, will make
this node the secondary node for the respective instances, whereas
their current secondary node will become primary. Note that it is not
possible to change, in one step, the primary node to another node while
keeping the same secondary node.

Secondary instance evacuation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the evacuation of secondary instances, a command called
:command:`gnt-node evacuate` is provided and its syntax is::

  gnt-node evacuate -I IALLOCATOR_SCRIPT NODE
  gnt-node evacuate -n DESTINATION_NODE NODE

The first version will compute the new secondary for each instance in
turn using the given iallocator script, whereas the second one will
simply move all instances to DESTINATION_NODE.

Removal
+++++++

Once a node no longer has any instances (neither primary nor
secondary), it's easy to remove it from the cluster::

  gnt-node remove NODE_NAME

This will deconfigure the node, stop the ganeti daemons on it and
leave it, hopefully, as it was before it joined the cluster.

Storage handling
++++++++++++++++

When using LVM (either standalone or with DRBD), it can become tedious
to debug and fix it in case of errors. Furthermore, even file-based
storage can become complicated to handle manually on many hosts. Ganeti
provides a couple of commands to help with automation.

Logical volumes
~~~~~~~~~~~~~~~

This is a command specific to LVM handling. It allows listing the
logical volumes on a given node or on all nodes and their association
to instances via the ``volumes`` command::

  node1# gnt-node volumes
  Node  PhysDev   VG    Name             Size Instance
  node1 /dev/sdb1 xenvg e61fbc97-….disk0 512M instance17
  node1 /dev/sdb1 xenvg ebd1a7d1-….disk0 512M instance19
  node2 /dev/sdb1 xenvg 0af08a3d-….disk0 512M instance20
  node2 /dev/sdb1 xenvg cc012285-….disk0 512M instance16
  node2 /dev/sdb1 xenvg f0fac192-….disk0 512M instance18

The above command maps each logical volume to a volume group and
underlying physical volume and (possibly) to an instance.

.. _storage-units-label:

Generalized storage handling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.1

Starting with Ganeti 2.1, a new storage framework has been implemented
that tries to abstract the handling of the storage type the cluster
uses.

First is listing the backend storage and their space situation::

  node1# gnt-node list-storage
  Node  Name        Size Used   Free
  node1 /dev/sda7 673.8G   0M 673.8G
  node1 /dev/sdb1 698.6G 1.5G 697.1G
  node2 /dev/sda7 673.8G   0M 673.8G
  node2 /dev/sdb1 698.6G 1.0G 697.6G

The default is to list LVM physical volumes. It's also possible to
list the LVM volume groups::

  node1# gnt-node list-storage -t lvm-vg
  Node  Name  Size
  node1 xenvg 1.3T
  node2 xenvg 1.3T

Next is repairing storage units, which is currently only implemented
for volume groups and does the equivalent of ``vgreduce --removemissing``::

  node1# gnt-node repair-storage node2 lvm-vg xenvg
  Sun Oct 25 22:21:45 2009 Repairing storage unit 'xenvg' on node2 ...

Last is the modification of volume properties, which is (again) only
implemented for LVM physical volumes and allows toggling the
``allocatable`` value::

  node1# gnt-node modify-storage --allocatable=no node2 lvm-pv /dev/sdb1

Use of the storage commands
~~~~~~~~~~~~~~~~~~~~~~~~~~~

All these commands are needed when recovering a node from a disk
failure:

- first, we need to recover from complete LVM failure (due to missing
  disk), by running the ``repair-storage`` command
- second, we need to change allocation on any partially-broken disk
  (i.e. LVM still sees it, but it has bad blocks) by running
  ``modify-storage``
- then we can evacuate the instances as needed
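
A condensed sketch of that recovery sequence (node, volume group and
device names are placeholders)::

  # make the volume group consistent again
  node1# gnt-node repair-storage node2 lvm-vg xenvg
  # stop allocations on the partially-broken disk
  node1# gnt-node modify-storage --allocatable=no node2 lvm-pv /dev/sdb1
  # finally move the instances away
  node1# gnt-node evacuate -n node3 node2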

Cluster operations
------------------

Beside the cluster initialisation command (which is detailed in the
:doc:`install` document) and the master failover command which is
explained under node handling, there are a couple of other cluster
operations available.

.. _cluster-config-label:

Standard operations
+++++++++++++++++++

One of the few commands that can be run on any node (not only the
master) is the ``getmaster`` command::

  node2# gnt-cluster getmaster
  node1.example.com
  node2#

It is possible to query and change global cluster parameters via the
``info`` and ``modify`` commands::

  node1# gnt-cluster info
  Cluster name: cluster.example.com
  Cluster UUID: 07805e6f-f0af-4310-95f1-572862ee939c
  Creation time: 2009-09-25 05:04:15
  Modification time: 2009-10-18 22:11:47
  Master node: node1.example.com
  Architecture (this node): 64bit (x86_64)
  …
  Tags: foo
  Default hypervisor: xen-pvm
  Enabled hypervisors: xen-pvm
  Hypervisor parameters:
    - xen-pvm:
        root_path: /dev/sda1
        …
  Cluster parameters:
    - candidate pool size: 10
      …
  Default instance parameters:
    - default:
        memory: 128
        …
  Default nic parameters:
    - default:
        link: xen-br0
        …

The various parameters above can be changed via the ``modify``
command as follows:

- the hypervisor parameters can be changed via ``modify -H
  xen-pvm:root_path=…``, and so on for other hypervisors/key/values
- the "default instance parameters" are changeable via ``modify -B
  parameter=value…`` syntax
- the cluster parameters are changeable via separate options to the
  modify command (e.g. ``--candidate-pool-size``, etc.)

For a detailed option list see the :manpage:`gnt-cluster(8)` man page.

The cluster version can be obtained via the ``version`` command::

  node1# gnt-cluster version
  Software version: 2.1.0
  Internode protocol: 20
  Configuration format: 2010000
  OS api version: 15
  Export interface: 0

This is not very useful except when debugging Ganeti.

Global node commands
++++++++++++++++++++

There are two commands provided for replicating files to all nodes of
a cluster and for running commands on all the nodes::

  node1# gnt-cluster copyfile /path/to/file
  node1# gnt-cluster command ls -l /path/to/file

These are simple wrappers over scp/ssh and more advanced usage can be
obtained using :manpage:`dsh(1)` and similar commands. But they are
useful to update an OS script from the master node, for example.
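
For example, to push a locally modified OS definition file to all nodes
(the path below is purely illustrative; use the location of your own OS
scripts)::

  node1# gnt-cluster copyfile /srv/ganeti/os/custom-os/create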

Cluster verification
++++++++++++++++++++

There are three commands that relate to global cluster checks. The
first one is ``verify`` which gives an overview on the cluster state,
highlighting any issues. In normal operation, this command should
return no ``ERROR`` messages::

  node1# gnt-cluster verify
  Sun Oct 25 23:08:58 2009 * Verifying global settings
  Sun Oct 25 23:08:58 2009 * Gathering data (2 nodes)
  Sun Oct 25 23:09:00 2009 * Verifying node status
  Sun Oct 25 23:09:00 2009 * Verifying instance status
  Sun Oct 25 23:09:00 2009 * Verifying orphan volumes
  Sun Oct 25 23:09:00 2009 * Verifying remaining instances
  Sun Oct 25 23:09:00 2009 * Verifying N+1 Memory redundancy
  Sun Oct 25 23:09:00 2009 * Other Notes
  Sun Oct 25 23:09:00 2009   - NOTICE: 5 non-redundant instance(s) found.
  Sun Oct 25 23:09:00 2009 * Hooks Results

The second command is ``verify-disks``, which checks that the
instances' disks have the correct status based on the desired instance
state (up/down)::

  node1# gnt-cluster verify-disks

Note that this command will show no output when disks are healthy.

The last command is used to repair any discrepancies between Ganeti's
recorded disk size and the actual disk size (disk size information is
needed for proper activation and growth of DRBD-based disks)::

  node1# gnt-cluster repair-disk-sizes
  Sun Oct 25 23:13:16 2009  - INFO: Disk 0 of instance instance1 has mismatched size, correcting: recorded 512, actual 2048
  Sun Oct 25 23:13:17 2009  - WARNING: Invalid result from node node4, ignoring node results

The above shows one instance having a wrong disk size, and a node
which returned invalid data, and thus we ignored all primary instances
of that node.

Configuration redistribution
++++++++++++++++++++++++++++

If the verify command complains about file mismatches between the
master and other nodes, due to some node problems or if you manually
modified configuration files, you can force a push of the master
configuration to all other nodes via the ``redist-conf`` command::

  node1# gnt-cluster redist-conf
  node1#

This command will be silent unless there are problems sending updates
to the other nodes.

Cluster renaming
++++++++++++++++

It is possible to rename a cluster, or to change its IP address, via
the ``rename`` command. If only the IP has changed, you need to pass
the current name and Ganeti will realise its IP has changed::

  node1# gnt-cluster rename cluster.example.com
  This will rename the cluster to 'cluster.example.com'. If
  you are connected over the network to the cluster name, the operation
  is very dangerous as the IP address will be removed from the node and
  the change may not go through. Continue?
  y/[n]/?: y
  Failure: prerequisites not met for this operation:
  Neither the name nor the IP address of the cluster has changed

In the above output, neither value has changed since the cluster
initialisation so the operation is not completed.

Queue operations
++++++++++++++++

The job queue execution in Ganeti 2.0 and higher can be inspected,
suspended and resumed via the ``queue`` command::

  node1~# gnt-cluster queue info
  The drain flag is unset
  node1~# gnt-cluster queue drain
  node1~# gnt-instance stop instance1
  Failed to submit job for instance1: Job queue is drained, refusing job
  node1~# gnt-cluster queue info
  The drain flag is set
  node1~# gnt-cluster queue undrain

This is most useful if you have an active cluster and you need to
upgrade the Ganeti software, or simply restart the software on any
node:

#. suspend the queue via ``queue drain``
#. wait until there are no more running jobs via ``gnt-job list``
#. restart the master or another node, or upgrade the software
#. resume the queue via ``queue undrain``

.. note:: this command only stores a local flag file, and if you
   failover the master, it will not have effect on the new master.

Watcher control
+++++++++++++++

The :manpage:`ganeti-watcher` is a program, usually scheduled via
``cron``, that takes care of cluster maintenance operations (restarting
downed instances, activating down DRBD disks, etc.). However, during
maintenance and troubleshooting, this can get in your way; disabling it
by commenting out the cron job is not so good as this can be
forgotten. Thus there are some commands for automated control of the
watcher: ``pause``, ``info`` and ``continue``::

  node1~# gnt-cluster watcher info
  The watcher is not paused.
  node1~# gnt-cluster watcher pause 1h
  The watcher is paused until Mon Oct 26 00:30:37 2009.
  node1~# gnt-cluster watcher info
  The watcher is paused until Mon Oct 26 00:30:37 2009.
  node1~# ganeti-watcher -d
  2009-10-25 23:30:47,984:  pid=28867 ganeti-watcher:486 DEBUG Pause has been set, exiting
  node1~# gnt-cluster watcher continue
  The watcher is no longer paused.
  node1~# ganeti-watcher -d
  2009-10-25 23:31:04,789:  pid=28976 ganeti-watcher:345 DEBUG Archived 0 jobs, left 0
  2009-10-25 23:31:05,884:  pid=28976 ganeti-watcher:280 DEBUG Got data from cluster, writing instance status file
  2009-10-25 23:31:06,061:  pid=28976 ganeti-watcher:150 DEBUG Data didn't change, just touching status file
  node1~# gnt-cluster watcher info
  The watcher is not paused.
  node1~#

The exact details of the argument to the ``pause`` command are
available in the manpage.

.. note:: this command only stores a local flag file, and if you
   failover the master, it will not have effect on the new master.

Node auto-maintenance
+++++++++++++++++++++

If the cluster parameter ``maintain_node_health`` is enabled (see the
manpage for :command:`gnt-cluster`, the init and modify subcommands),
then the following will happen automatically:

- the watcher will shutdown any instances running on offline nodes
- the watcher will deactivate any DRBD devices on offline nodes

In the future, more actions are planned, so only enable this parameter
if the nodes are completely dedicated to Ganeti; otherwise it might be
possible to lose data due to auto-maintenance actions.

Removing a cluster entirely
+++++++++++++++++++++++++++

The usual method to cleanup a cluster is to run ``gnt-cluster destroy``;
however, if the Ganeti installation is broken in any way then this will
not run.

It is possible in such a case to cleanup manually most if not all
traces of a cluster installation by following these steps on all of
the nodes:

1. Shutdown all instances. This depends on the virtualisation method
   used (Xen, KVM, etc.):

  - Xen: run ``xm list`` and ``xm destroy`` on all the non-Domain-0
    instances
  - KVM: kill all the KVM processes
  - chroot: kill all processes under the chroot mountpoints

2. If using DRBD, shutdown all DRBD minors (which should by this time
   be no longer in use by instances); on each node, run ``drbdsetup
   /dev/drbdN down`` for each active DRBD minor.

3. If using LVM, cleanup the Ganeti volume group; if only Ganeti
   created logical volumes (and you are not sharing the volume group
   with the OS, for example), then simply running ``lvremove -f xenvg``
   (replace 'xenvg' with your volume group name) should do the required
   cleanup.

4. If using file-based storage, remove recursively all files and
   directories under your file-storage directory: ``rm -rf
   /srv/ganeti/file-storage/*`` replacing the path with the correct
   path for your cluster.

5. Stop the ganeti daemons (``/etc/init.d/ganeti stop``) and kill any
   that remain alive (``pgrep ganeti`` and ``pkill ganeti``).

6. Remove the ganeti state directory (``rm -rf /var/lib/ganeti/*``),
   replacing the path with the correct path for your installation.

On the master node, remove the cluster from the master-netdev (usually
``xen-br0`` for bridged mode, otherwise ``eth0`` or similar), by
running ``ip a del $clusterip/32 dev xen-br0`` (use the correct cluster
ip and network device name).

At this point, the machines are ready for a cluster creation; in case
you want to remove Ganeti completely, you need to also undo some of
the SSH changes and log directories:

- ``rm -rf /var/log/ganeti /srv/ganeti`` (replace with the correct
  paths)
- remove from ``/root/.ssh`` the keys that Ganeti added (check the
  ``authorized_keys`` and ``id_dsa`` files)
- regenerate the host's SSH keys (check the OpenSSH startup scripts)
- uninstall Ganeti

Otherwise, if you plan to re-create the cluster, you can just go ahead
and rerun ``gnt-cluster init``.

Tags handling
-------------

The tags handling (addition, removal, listing) is similar for all the
objects that support it (instances, nodes, and the cluster).

Limitations
+++++++++++

Note that the set of characters present in a tag and the maximum tag
length are restricted. Currently the maximum length is 128 characters,
there can be at most 4096 tags per object, and the set of allowed
characters consists of alphanumeric characters plus ``.+*/:-``.

Operations
++++++++++

Tags can be added via ``add-tags``::

  gnt-instance add-tags INSTANCE a b c
  gnt-node add-tags NODE a b c
  gnt-cluster add-tags a b c

The above commands add three tags to an instance, to a node and to the
cluster. Note that the cluster command only takes tags as arguments,
whereas the node and instance commands first require the node and
instance name.

Tags can also be added from a file, via the ``--from=FILENAME``
argument. The file is expected to contain one tag per line.
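
For example (a sketch; the file name is purely illustrative)::

  node1# cat /tmp/mytags
  owner:user2
  environment:testing
  node1# gnt-instance add-tags --from=/tmp/mytags instance1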

Tags can also be removed via a syntax very similar to the add one::

  gnt-instance remove-tags INSTANCE a b c

And listed via::

  gnt-instance list-tags
  gnt-node list-tags
  gnt-cluster list-tags

Global tag search
+++++++++++++++++

It is also possible to execute a global search on all the tags defined
in the cluster configuration, via a cluster command::

  gnt-cluster search-tags REGEXP

The parameter expected is a regular expression (see
:manpage:`regex(7)`). This will return all tags that match the search,
together with the object they are defined in (the names being shown in
a hierarchical kind of way)::

  node1# gnt-cluster search-tags o
  /cluster foo
  /instances/instance1 owner:bar

Job operations
--------------

The various jobs submitted by the instance/node/cluster commands can be
examined, canceled and archived by various invocations of the
``gnt-job`` command.

First is the job list command::

  node1# gnt-job list
  17771 success INSTANCE_QUERY_DATA
  17773 success CLUSTER_VERIFY_DISKS
  17775 success CLUSTER_REPAIR_DISK_SIZES
  17776 error   CLUSTER_RENAME(cluster.example.com)
  17780 success CLUSTER_REDIST_CONF
  17792 success INSTANCE_REBOOT(instance1.example.com)

More detailed information about a job can be found via the ``info``
command::

  node1# gnt-job info 17776
  Job ID: 17776
    Status: error
    Received:         2009-10-25 23:18:02.180569
    Processing start: 2009-10-25 23:18:02.200335 (delta 0.019766s)
    Processing end:   2009-10-25 23:18:02.279743 (delta 0.079408s)
    Total processing time: 0.099174 seconds
    Opcodes:
      OP_CLUSTER_RENAME
        Status: error
        Processing start: 2009-10-25 23:18:02.200335
        Processing end:   2009-10-25 23:18:02.252282
        Input fields:
          name: cluster.example.com
        Result:
          OpPrereqError
          [Neither the name nor the IP address of the cluster has changed]
        Execution log:

During the execution of a job, it's possible to follow the output of a
job, similar to the log that one gets from the ``gnt-`` commands, via
the watch command::

  node1# gnt-instance add --submit … instance1
  JobID: 17818
  node1# gnt-job watch 17818
  Output from job 17818 follows
  -----------------------------
  Mon Oct 26 00:22:48 2009  - INFO: Selected nodes for instance instance1 via iallocator dumb: node1, node2
  Mon Oct 26 00:22:49 2009 * creating instance disks...
  Mon Oct 26 00:22:52 2009 adding instance instance1 to cluster config
  Mon Oct 26 00:22:52 2009  - INFO: Waiting for instance instance1 to sync disks.
  …
  Mon Oct 26 00:23:03 2009 creating os for instance xen-devi-18.fra.corp.google.com on node mpgntac4.fra.corp.google.com
  Mon Oct 26 00:23:03 2009 * running the instance OS create scripts...
  Mon Oct 26 00:23:13 2009 * starting instance...
  node1#

This is useful if you need to follow a job's progress from multiple
terminals.

A job that has not yet started to run can be canceled::

  node1# gnt-job cancel 17810

But not one that has already started execution::

  node1# gnt-job cancel 17805
  Job 17805 is no longer waiting in the queue

There are two queues for jobs: the *current* and the *archive*
queue. Jobs are initially submitted to the current queue, and they stay
in that queue until they have finished execution (either successfully
or not). At that point, they can be moved into the archive queue, and
the ganeti-watcher script will do this automatically after 6 hours. The
ganeti-cleaner script will remove the jobs from the archive directory
after three weeks.
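
Jobs can also be moved to the archive queue by hand; a sketch, using a
job ID from the listing above::

  node1# gnt-job archive 17771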

Note that only jobs in the current queue can be viewed via the list
and info commands; Ganeti itself doesn't examine the archive
directory. If you need to see an older job, either move the file
manually into the top-level queue directory, or look at its contents
(it's a JSON-formatted file).

Ganeti tools
------------

Beside the usual ``gnt-`` and ``ganeti-`` commands which are provided
and installed in ``$prefix/sbin`` at install time, there are a couple
of other tools installed which are used seldom but can be helpful in
some cases.

lvmstrap
++++++++

The ``lvmstrap`` tool, introduced in the :ref:`configure-lvm-label`
section, has two modes of operation:

- ``diskinfo`` shows the discovered disks on the system and their
  status
- ``create`` takes all not-in-use disks and creates a volume group out
  of them

.. warning:: The ``create`` argument to this command causes data-loss!

cfgupgrade
++++++++++

The ``cfgupgrade`` tool is used to upgrade between major (and minor)
Ganeti versions. Point-releases are usually transparent for the admin.

More information about the upgrade procedure is listed on the wiki at
http://code.google.com/p/ganeti/wiki/UpgradeNotes.

There is also a script designed to upgrade from Ganeti 1.2 to 2.0,
called ``cfgupgrade12``.

cfgshell
++++++++

.. note:: This command is not actively maintained; make sure you backup
   your configuration before using it

This can be used as an alternative to direct editing of the
main configuration file if Ganeti has a bug and prevents you, for
example, from removing an instance or a node from the configuration
file.

.. _burnin-label:

burnin
++++++

.. warning:: This command will erase existing instances if given as
   arguments!

This tool is used to exercise either the hardware of machines or
alternatively the Ganeti software. It is safe to run on an existing
cluster **as long as you don't pass it existing instance names**.

The command will, by default, execute a comprehensive set of operations
against a list of instances, these being:

- creation
- disk replacement (for redundant instances)
- failover and migration (for redundant instances)
- move (for non-redundant instances)
- disk growth
- add disks, remove disk
- add NICs, remove NICs
- export and then import
- rename
- reboot
- shutdown/startup
- and finally removal of the test instances

Executing all these operations will test that the hardware performs
well: the creation, disk replace, disk add and disk growth will
exercise the storage and network; the migrate command will test the
memory of the systems. Depending on the passed options, it can also
test that the instance OS definitions properly execute the rename,
import and export operations.

sanitize-config
+++++++++++++++

This tool takes the Ganeti configuration and outputs a "sanitized"
version, by randomizing or clearing:

- DRBD secrets and cluster public key (always)
- host names (optional)
- IPs (optional)
- OS names (optional)
- LV names (optional, only useful for very old clusters which still
  have instances whose LVs are based on the instance name)

By default, all optional items are activated except the LV name
randomization. When passing ``--no-randomization``, which disables the
optional items (i.e. just the DRBD secrets and cluster public keys are
randomized), the resulting file can be used as a safety copy of the
cluster config - while not trivial, the layout of the cluster can be
recreated from it and if the instance disks have not been lost it
permits recovery from the loss of all master candidates.

Other Ganeti projects
---------------------

There are two other Ganeti-related projects that can be useful in a
Ganeti deployment. These can be downloaded from the project site
(http://code.google.com/p/ganeti/) and the repositories are also on the
project git site (http://git.ganeti.org).

NBMA tools
++++++++++

The ``ganeti-nbma`` software is designed to allow instances to live on
a separate, virtual network from the nodes, and in an environment where
nodes are not guaranteed to be able to reach each other via
multicasting or broadcasting. For more information see the README in
the source archive.

ganeti-htools
+++++++++++++

The ``ganeti-htools`` software consists of a set of tools:

- ``hail``: an advanced iallocator script compared to Ganeti's builtin
  one
- ``hbal``: a tool for rebalancing the cluster, i.e. moving instances
  around in order to better use the resources on the nodes
- ``hspace``: a tool for estimating the available capacity of a
  cluster, so that capacity planning can be done efficiently

For more information and installation instructions, see the README
file in the source archive.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: