gnt-cluster(8) Ganeti | Version @GANETI_VERSION@
================================================

Name
----

gnt-cluster - Ganeti administration, cluster-wide

Synopsis
--------

**gnt-cluster** {command} [arguments...]

DESCRIPTION
-----------

The **gnt-cluster** command is used for cluster-wide administration in
the Ganeti system.

COMMANDS
--------

ACTIVATE-MASTER-IP
~~~~~~~~~~~~~~~~~~

**activate-master-ip**

Activates the master IP on the master node.

COMMAND
~~~~~~~

**command** [-n *node*] [-g *group*] [-M] {*command*}

Executes a command on all nodes. This command is designed for simple
usage. For more complex use cases the commands **dsh**\(1) or **cssh**\(1)
should be used instead.

If the option ``-n`` is not given, the command will be executed on all
nodes, otherwise it will be executed only on the node(s) specified. Use
the option multiple times for running it on multiple nodes, like::

    # gnt-cluster command -n node1.example.com -n node2.example.com date

The ``-g`` option can be used to run a command only on a specific node
group, e.g.::

    # gnt-cluster command -g default date

The ``-M`` option can be used to prepend the node name to all output
lines. The ``--failure-only`` option hides successful commands, making
it easier to see failures.

The command is executed serially on the selected nodes. If the
master node is present in the list, the command will be executed
last on the master. Regarding the other nodes, the execution order
is somewhat alphabetic, so that node2.example.com will be earlier
than node10.example.com but after node1.example.com.

So given the node names node1, node2, node3, node10, node11, with
node3 being the master, the order will be: node1, node2, node10,
node11, node3.

The command is constructed by concatenating all other command line
arguments. For example, to list the contents of the /etc directory
on all nodes, run::

    # gnt-cluster command ls -l /etc

and the command which will be executed will be ``ls -l /etc``.

COPYFILE
~~~~~~~~

| **copyfile** [\--use-replication-network] [-n *node*] [-g *group*]
| {*file*}

Copies a file to all or to some nodes. The argument specifies the
source file (on the current system), the ``-n`` argument specifies
the target node, or nodes if the option is given multiple times. If
``-n`` is not given at all, the file will be copied to all nodes. The
``-g`` option can be used to only select nodes in a specific node group.
Passing the ``--use-replication-network`` option will cause the
copy to be done over the replication network (only matters if the
primary/secondary IPs are different). Example::

    # gnt-cluster copyfile -n node1.example.com -n node2.example.com /tmp/test

This will copy the file /tmp/test from the current node to the two
named nodes.

DEACTIVATE-MASTER-IP
~~~~~~~~~~~~~~~~~~~~

**deactivate-master-ip** [\--yes]

Deactivates the master IP on the master node.

This should be run only locally or on a connection to the node IP
directly, as a connection to the master IP will be broken by this
operation. Because of this risk, it will require user confirmation
unless the ``--yes`` option is passed.

DESTROY
~~~~~~~

**destroy** {\--yes-do-it}

Remove all configuration files related to the cluster, so that a
**gnt-cluster init** can be done again afterwards.

Since this is a dangerous command, you are required to pass the
``--yes-do-it`` argument.

EPO
~~~

**epo** [\--on] [\--groups|\--all] [\--power-delay] *arguments*

Performs an emergency power-off on nodes given as arguments. If
``--groups`` is given, arguments are node groups. If ``--all`` is
provided, the whole cluster will be shut down.

The ``--on`` flag recovers the cluster after an emergency power-off.
When powering on the cluster you can use ``--power-delay`` to define the
time in seconds (fractions allowed) waited between powering on
individual nodes.

Please note that the master node will not be turned down or up
automatically. It will just be left in a state where you can manually
perform the shutdown of that one node. If the master is in the list of
affected nodes and this is not a complete cluster emergency power-off
(e.g. one using ``--all``), you're required to do a master failover
to another, unaffected node.
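
For example, assuming a node group named ``default`` (the group name
here is purely illustrative), the whole group could be powered off and
later powered back on with a two-second delay between nodes::

    # gnt-cluster epo --groups default
    # gnt-cluster epo --on --power-delay=2 --groups default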

GETMASTER
~~~~~~~~~

**getmaster**

Displays the current master node.

INFO
~~~~

**info** [\--roman]

Shows runtime cluster information: cluster name, architecture (32
or 64 bit), master node, node list and instance list.

Passing the ``--roman`` option, gnt-cluster info will try to print
its integer fields in a Latin-friendly way. This allows further
diffusion of Ganeti among ancient cultures.

SHOW-ISPECS-CMD
~~~~~~~~~~~~~~~

**show-ispecs-cmd**

Shows the command line that can be used to recreate the cluster with the
same options relative to specs in the instance policies.

INIT
~~~~

| **init**
| [{-s|\--secondary-ip} *secondary\_ip*]
| [\--vg-name *vg-name*]
| [\--master-netdev *interface-name*]
| [\--master-netmask *netmask*]
| [\--use-external-mip-script {yes \| no}]
| [{-m|\--mac-prefix} *mac-prefix*]
| [\--no-lvm-storage]
| [\--no-etc-hosts]
| [\--no-ssh-init]
| [\--file-storage-dir *dir*]
| [\--enabled-hypervisors *hypervisors*]
| [{-H|\--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
| [{-B|\--backend-parameters} *be-param*=*value*[,*be-param*=*value*...]]
| [{-N|\--nic-parameters} *nic-param*=*value*[,*nic-param*=*value*...]]
| [{-D|\--disk-parameters} *disk-template*:*disk-param*=*value*[,*disk-param*=*value*...]]
| [\--maintain-node-health {yes \| no}]
| [\--uid-pool *user-id pool definition*]
| [{-I|\--default-iallocator} *default instance allocator*]
| [\--primary-ip-version *version*]
| [\--prealloc-wipe-disks {yes \| no}]
| [\--node-parameters *ndparams*]
| [{-C|\--candidate-pool-size} *candidate\_pool\_size*]
| [\--specs-cpu-count *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-disk-count *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-disk-size *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-mem-size *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-nic-count *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--ipolicy-std-specs *spec*=*value* [,*spec*=*value*...]]
| [\--ipolicy-bounds-specs *bounds_ispecs*]
| [\--ipolicy-disk-templates *template* [,*template*...]]
| [\--ipolicy-spindle-ratio *ratio*]
| [\--ipolicy-vcpu-ratio *ratio*]
| [\--disk-state *diskstate*]
| [\--hypervisor-state *hvstate*]
| [\--enabled-disk-templates *template* [,*template*...]]
| {*clustername*}

This command is only run once initially on the first node of the
cluster. It will initialize the cluster configuration, set up the
SSH keys, start the daemons on the master node, etc. in order to have
a working one-node cluster.
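
As an illustration only (the hypervisor and cluster name below are
placeholders, not recommendations), a minimal initialization could
look like::

    # gnt-cluster init --enabled-hypervisors kvm cluster1.example.com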

Note that the *clustername* is not any random name. It has to be
resolvable to an IP address using DNS, and it is best if you give the
fully-qualified domain name. This hostname must resolve to an IP
address reserved exclusively for this purpose, i.e. not already in
use.

The cluster can run in two modes: single-homed or dual-homed. In the
first case, all traffic (public traffic, inter-node traffic and
data replication traffic) goes over the same interface. In the
dual-homed case, the data replication traffic goes over the second
network. The ``-s (--secondary-ip)`` option here marks the cluster as
dual-homed and its parameter represents this node's address on the
second network. If you initialise the cluster with ``-s``, all nodes
added must have a secondary IP as well.

Note that for Ganeti it doesn't matter if the secondary network is
actually a separate physical network, or is done using tunneling,
etc. For performance reasons, it's recommended to use a separate
network, of course.

The ``--vg-name`` option will let you specify a volume group
different from "xenvg" for Ganeti to use when creating instance
disks. This volume group must have the same name on all nodes. Once
the cluster is initialized this can be altered by using the
**modify** command. Note that if the volume group name is modified after
the cluster creation and DRBD support is enabled, you might have to
manually modify the metavg as well.

If you don't want to use LVM storage at all, use
the ``--no-lvm-storage`` option. Once the cluster is initialized
you can change this setup with the **modify** command.

The ``--master-netdev`` option is useful for specifying a different
interface on which the master will activate its IP address. It's
important that all nodes have this interface because you'll need it
for a master failover.

The ``--master-netmask`` option allows you to specify a netmask for the
master IP. The netmask must be specified as an integer, and will be
interpreted as a CIDR netmask. The default value is 32 for an IPv4
address and 128 for an IPv6 address.

The ``--use-external-mip-script`` option allows you to specify whether
to use a user-supplied master IP address setup script, whose location is
``@SYSCONFDIR@/ganeti/scripts/master-ip-setup``. If the option value is
set to False, the default script (located at
``@PKGLIBDIR@/tools/master-ip-setup``) will be executed.

The ``-m (--mac-prefix)`` option will let you specify a three-byte
prefix under which the virtual MAC addresses of your instances will be
generated. The prefix must be specified in the format ``XX:XX:XX`` and
the default is ``aa:00:00``.

The ``--no-lvm-storage`` option allows you to initialize the
cluster without LVM support. This means that only instances using
files as their storage backend can be created. Once the
cluster is initialized you can change this setup with the
**modify** command.

The ``--no-etc-hosts`` option allows you to initialize the cluster
without modifying the /etc/hosts file.

The ``--no-ssh-init`` option allows you to initialize the cluster
without creating or distributing SSH key pairs.

The ``--file-storage-dir`` option allows you to set the directory to
use for storing the instance disk files when using file storage as
backend for instance disks.

The ``--prealloc-wipe-disks`` option sets a cluster-wide configuration
value for wiping disks prior to allocation and size changes
(``gnt-instance grow-disk``). This increases security at the instance
level, as the instance can't access untouched data from its underlying
storage.

The ``--enabled-hypervisors`` option allows you to set the list of
hypervisors that will be enabled for this cluster. Instance
hypervisors can only be chosen from the list of enabled
hypervisors, and the first entry of this list will be used by
default. Currently, the following hypervisors are available:

xen-pvm
    Xen PVM hypervisor

xen-hvm
    Xen HVM hypervisor

kvm
    Linux KVM hypervisor

chroot
    a simple chroot manager that starts chroot based on a script at the
    root of the filesystem holding the chroot

fake
    fake hypervisor for development/testing

Either a single hypervisor name or a comma-separated list of
hypervisor names can be specified. If this option is not specified,
only the xen-pvm hypervisor is enabled by default.

The ``-H (--hypervisor-parameters)`` option allows you to set default
hypervisor-specific parameters for the cluster. The format of this
option is the name of the hypervisor, followed by a colon and a
comma-separated list of key=value pairs. The keys available for each
hypervisor are detailed in the **gnt-instance**\(8) man page, under the
**add** command, plus the following parameters which are only
configurable globally (at cluster level):

migration\_port
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies the TCP port to use for live-migration. For
    Xen, the same port should be configured on all nodes in the
    ``@XEN_CONFIG_DIR@/xend-config.sxp`` file, under the key
    "xend-relocation-port".

migration\_bandwidth
    Valid for the KVM hypervisor.

    This option specifies the maximum bandwidth that KVM will use for
    instance live migrations. The value is in MiB/s.

    This option is only effective with kvm versions >= 78 and qemu-kvm
    versions >= 0.10.0.
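
For example, these cluster-wide KVM settings could be set at **init**
time or adjusted later with **modify**; the port and bandwidth values
below are only illustrative::

    # gnt-cluster modify -H kvm:migration_port=8102,migration_bandwidth=32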

The ``-B (--backend-parameters)`` option allows you to set the default
backend parameters for the cluster. The parameter format is a
comma-separated list of key=value pairs with the following supported
keys:

vcpus
    Number of VCPUs to set for an instance by default, must be an
    integer, will be set to 1 if not specified.

maxmem
    Maximum amount of memory to allocate for an instance by default, can
    be either an integer or an integer followed by a unit (M for
    mebibytes and G for gibibytes are supported), will be set to 128M if
    not specified.

minmem
    Minimum amount of memory to allocate for an instance by default, can
    be either an integer or an integer followed by a unit (M for
    mebibytes and G for gibibytes are supported), will be set to 128M if
    not specified.

auto\_balance
    Value of the auto\_balance flag for instances to use by default,
    will be set to true if not specified.

always\_failover
    Default value for the ``always_failover`` flag for instances; if
    not set, ``False`` is used.
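
A possible example, with purely illustrative values, setting
cluster-wide defaults for memory and VCPUs::

    # gnt-cluster modify -B maxmem=1G,minmem=512M,vcpus=2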

The ``-N (--nic-parameters)`` option allows you to set the default
network interface parameters for the cluster. The parameter format is a
comma-separated list of key=value pairs with the following supported
keys:

mode
    The default NIC mode, one of ``routed``, ``bridged`` or
    ``openvswitch``.

link
    In ``bridged`` or ``openvswitch`` mode the default interface where
    to attach NICs. In ``routed`` mode it represents a
    hypervisor-vif-script-dependent value to allow different instance
    groups. For example, under the KVM default network script it is
    interpreted as a routing table number or name. Openvswitch support
    is also hypervisor-dependent and currently works for the default KVM
    network script. Under Xen a custom network script must be provided.
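
For instance, to default to bridged networking over a bridge named
``br0`` (the bridge name is an assumption about the local setup)::

    # gnt-cluster modify -N mode=bridged,link=br0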

The ``-D (--disk-parameters)`` option allows you to set the default disk
template parameters at cluster level. The format used for this option is
similar to the one used by the ``-H`` option: the disk template name
must be specified first, followed by a colon and by a comma-separated
list of key=value pairs. These parameters can only be specified at
cluster and node group level; the cluster-level parameters are inherited
by the node group at the moment of its creation, and can be further
modified at node group level using the **gnt-group**\(8) command.

The following is the list of disk parameters available for the **drbd**
template, with measurement units specified in square brackets at the end
of the description (when applicable):

resync-rate
    Static re-synchronization rate. [KiB/s]

data-stripes
    Number of stripes to use for data LVs.

meta-stripes
    Number of stripes to use for meta LVs.

disk-barriers
    What kind of barriers to **disable** for disks. It can either assume
    the value "n", meaning no barrier disabled, or a non-empty string
    containing a subset of the characters "bfd". "b" means disable disk
    barriers, "f" means disable disk flushes, "d" disables disk drains.

meta-barriers
    Boolean value indicating whether the meta barriers should be
    disabled (True) or not (False).

metavg
    String containing the name of the default LVM volume group for DRBD
    metadata. By default, it is set to ``xenvg``. It can be overridden
    during the instance creation process by using the ``metavg`` key of
    the ``--disk`` parameter.

disk-custom
    String containing additional parameters to be appended to the
    arguments list of ``drbdsetup disk``.

net-custom
    String containing additional parameters to be appended to the
    arguments list of ``drbdsetup net``.

dynamic-resync
    Boolean indicating whether to use the dynamic resync speed
    controller or not. If enabled, c-plan-ahead must be non-zero and all
    the c-* parameters will be used by DRBD. Otherwise, the value of
    resync-rate will be used as a static resync speed.

c-plan-ahead
    Agility factor of the dynamic resync speed controller (the higher
    the value, the slower the algorithm will adapt the resync speed). A
    value of 0 (the default) disables the controller. [ds]

c-fill-target
    Maximum amount of in-flight resync data for the dynamic resync speed
    controller. [sectors]

c-delay-target
    Maximum estimated peer response latency for the dynamic resync speed
    controller. [ds]

c-min-rate
    Minimum resync speed for the dynamic resync speed controller. [KiB/s]

c-max-rate
    Upper bound on resync speed for the dynamic resync speed controller.
    [KiB/s]

List of parameters available for the **plain** template:

stripes
    Number of stripes to use for new LVs.

List of parameters available for the **rbd** template:

pool
    The RADOS cluster pool, inside which all rbd volumes will reside.
    When a new RADOS cluster is deployed, the default pool to put rbd
    volumes (Images in RADOS terminology) is 'rbd'.
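
As an example of the syntax only (the values are illustrative, not
tuning advice), drbd and plain parameters could be set with::

    # gnt-cluster modify -D drbd:metavg=xenvg,resync-rate=65536
    # gnt-cluster modify -D plain:stripes=2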

The option ``--maintain-node-health`` allows one to enable/disable
automatic maintenance actions on nodes. Currently these include
automatic shutdown of instances and deactivation of DRBD devices on
offline nodes; in the future it might be extended to automatic
removal of unknown LVM volumes, etc. Note that this option is only
useful if the use of ``ganeti-confd`` was enabled at compilation.

The ``--uid-pool`` option initializes the user-id pool. The
*user-id pool definition* can contain a list of user-ids and/or a
list of user-id ranges. The parameter format is a comma-separated
list of numeric user-ids or user-id ranges. The ranges are defined
by a lower and higher boundary, separated by a dash. The boundaries
are inclusive. If the ``--uid-pool`` option is not supplied, the
user-id pool is initialized to an empty list. An empty list means
that the user-id pool feature is disabled.
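
For example, a pool consisting of the single user-id 5000 plus the
inclusive range 6000-6020 (arbitrary numbers chosen for illustration)
would be written as::

    # gnt-cluster modify --uid-pool 5000,6000-6020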

The ``-I (--default-iallocator)`` option specifies the default
instance allocator. The instance allocator will be used for operations
like instance creation, instance and node migration, etc. when no
manual override is specified. If this option is not specified and
htools was not enabled at build time, the default instance allocator
will be blank, which means that relevant operations will require the
administrator to manually specify either an instance allocator, or a
set of nodes. If the option is not specified but htools was enabled,
the default iallocator will be **hail**\(1) (assuming it can be found
on disk). The default iallocator can be changed later using the
**modify** command.

The ``--primary-ip-version`` option specifies the IP version used
for the primary address. Possible values are 4 and 6 for IPv4 and
IPv6, respectively. This option is used when resolving node names
and the cluster name.

The ``--node-parameters`` option allows you to set default node
parameters for the cluster. Please see **ganeti**\(7) for more
information about supported key=value pairs.

The ``-C (--candidate-pool-size)`` option specifies the
``candidate_pool_size`` cluster parameter. This is the number of nodes
that the master will try to keep as master\_candidates. For more
details about this role and other node roles, see **ganeti**\(7).

The ``--specs-...`` and ``--ipolicy-...`` options specify the instance
policy on the cluster. The ``--ipolicy-bounds-specs`` option sets the
minimum and maximum specifications for instances. The format is:
min:*param*=*value*,.../max:*param*=*value*,... and further
specification pairs can be added by using ``//`` as a separator (see
the example below). The ``--ipolicy-std-specs`` option takes a list of
parameter/value pairs. For both options, *param* can be:

- ``cpu-count``: number of VCPUs for an instance
- ``disk-count``: number of disks for an instance
- ``disk-size``: size of each disk
- ``memory-size``: instance memory
- ``nic-count``: number of network interfaces
- ``spindle-use``: spindle usage for an instance

For the ``--specs-...`` options, each option can have three values:
``min``, ``max`` and ``std``, which can also be modified on group level
(except for ``std``, which is defined once for the entire cluster).
Please note that ``std`` values are not the same as defaults set by
``--beparams``, but they are used for the capacity calculations.

- ``--specs-cpu-count`` limits the number of VCPUs that can be used by an
  instance.
- ``--specs-disk-count`` limits the number of disks
- ``--specs-disk-size`` limits the disk size for every disk used
- ``--specs-mem-size`` limits the amount of memory available
- ``--specs-nic-count`` sets limits on the number of NICs used

The ``--ipolicy-spindle-ratio`` and ``--ipolicy-vcpu-ratio`` options
take a decimal number. The ``--ipolicy-disk-templates`` option takes a
comma-separated list of disk templates.

- ``--ipolicy-disk-templates`` limits the allowed disk templates
- ``--ipolicy-spindle-ratio`` limits the instances-spindles ratio
- ``--ipolicy-vcpu-ratio`` limits the vcpu-cpu ratio

All the instance policy elements can be overridden at group level. Group
level overrides can be removed by specifying ``default`` as the value of
an item.
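
To illustrate the syntax only (the numbers are placeholders, not
sizing recommendations), bounds and standard specs might be set like::

    # gnt-cluster modify --ipolicy-bounds-specs min:cpu-count=1/max:cpu-count=8
    # gnt-cluster modify --ipolicy-std-specs cpu-count=2,memory-size=1024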

For details about how to use ``--hypervisor-state`` and ``--disk-state``
have a look at **ganeti**\(7).

The ``--enabled-disk-templates`` option specifies a list of disk templates
that can be used by instances of the cluster. For the possible values in
this list, see **gnt-instance**\(8). Note that in contrast to the list of
disk templates in the ipolicy, this list is a hard restriction. It is not
possible to create instances with disk templates that are not enabled in
the cluster. It is also not possible to disable a disk template when there
are still instances using it.

MASTER-FAILOVER
~~~~~~~~~~~~~~~

**master-failover** [\--no-voting] [\--yes-do-it]

Fail over the master role to the current node.

The ``--no-voting`` option skips the remote node agreement checks.
This is dangerous, but necessary in some cases (for example failing
over the master role in a 2-node cluster with the original master
down). If the original master then comes up, it won't be able to
start its master daemon because it won't have enough votes, but
neither will the new master, if the master daemon ever needs a
restart. You can pass ``--no-voting`` to **ganeti-masterd** on the new
master to solve this problem, and run **gnt-cluster redist-conf**
to make sure the cluster is consistent again.

The option ``--yes-do-it`` is used together with ``--no-voting``, for
skipping the interactive checks. This is even more dangerous, and should
only be used in conjunction with other means (e.g. a HA suite) to
confirm that the operation is indeed safe.

MASTER-PING
~~~~~~~~~~~

**master-ping**

Checks if the master daemon is alive.

If the master daemon is alive and can respond to a basic query (the
equivalent of **gnt-cluster info**), then the exit code of the
command will be 0. If the master daemon is not alive (either due to
a crash or because this is not the master node), the exit code will
be 1.
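
Since only the exit code matters, a typical use is in a shell test;
for example (the message is of course arbitrary)::

    # gnt-cluster master-ping && echo "master daemon is alive"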

MODIFY
~~~~~~

| **modify** [\--submit]
| [\--vg-name *vg-name*]
| [\--no-lvm-storage]
| [\--enabled-hypervisors *hypervisors*]
| [{-H|\--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
| [{-B|\--backend-parameters} *be-param*=*value*[,*be-param*=*value*...]]
| [{-N|\--nic-parameters} *nic-param*=*value*[,*nic-param*=*value*...]]
| [{-D|\--disk-parameters} *disk-template*:*disk-param*=*value*[,*disk-param*=*value*...]]
| [\--uid-pool *user-id pool definition*]
| [\--add-uids *user-id pool definition*]
| [\--remove-uids *user-id pool definition*]
| [{-C|\--candidate-pool-size} *candidate\_pool\_size*]
| [\--maintain-node-health {yes \| no}]
| [\--prealloc-wipe-disks {yes \| no}]
| [{-I|\--default-iallocator} *default instance allocator*]
| [\--reserved-lvs=*NAMES*]
| [\--node-parameters *ndparams*]
| [\--master-netdev *interface-name*]
| [\--master-netmask *netmask*]
| [\--use-external-mip-script {yes \| no}]
| [\--hypervisor-state *hvstate*]
| [\--disk-state *diskstate*]
| [\--ipolicy-std-specs *spec*=*value* [,*spec*=*value*...]]
| [\--ipolicy-bounds-specs *bounds_ispecs*]
| [\--ipolicy-disk-templates *template* [,*template*...]]
| [\--ipolicy-spindle-ratio *ratio*]
| [\--ipolicy-vcpu-ratio *ratio*]
| [\--enabled-disk-templates *template* [,*template*...]]

Modify the options for the cluster.

The ``--vg-name``, ``--no-lvm-storage``, ``--enabled-hypervisors``,
``-H (--hypervisor-parameters)``, ``-B (--backend-parameters)``,
``-D (--disk-parameters)``, ``--nic-parameters``, ``-C
(--candidate-pool-size)``, ``--maintain-node-health``,
``--prealloc-wipe-disks``, ``--uid-pool``, ``--node-parameters``,
``--master-netdev``, ``--master-netmask``, ``--use-external-mip-script``,
and ``--enabled-disk-templates`` options are described in the **init**
command.

The ``--hypervisor-state`` and ``--disk-state`` options are described in
detail in **ganeti**\(7).

The ``--add-uids`` and ``--remove-uids`` options can be used to
modify the user-id pool by adding/removing a list of user-ids or
user-id ranges.

The option ``--reserved-lvs`` specifies a list (comma-separated) of
logical volume names (regular expressions) that will be
ignored by the cluster verify operation. This is useful if the
volume group used for Ganeti is shared with the system for other
uses. Note that it's not recommended to create and mark as ignored
logical volume names which match Ganeti's own name format (starting
with UUID and then .diskN), as this option only skips the
verification, but not the actual use of the names given.

To remove all reserved logical volumes, pass in an empty argument
to the option, as in ``--reserved-lvs=`` or ``--reserved-lvs ''``.

The ``-I (--default-iallocator)`` option is described in the **init**
command. To clear the default iallocator, just pass an empty string
('').

The ``--ipolicy-...`` options are described in the **init** command.

See **ganeti**\(7) for a description of ``--submit`` and other common
options.
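
For example (the values are illustrative), the candidate pool size and
node health maintenance could be changed in one call::

    # gnt-cluster modify -C 10 --maintain-node-health yes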

QUEUE
~~~~~

**queue** {drain | undrain | info}

Change job queue properties.

The ``drain`` option sets the drain flag on the job queue. No new
jobs will be accepted, but jobs already in the queue will be
processed.

The ``undrain`` option will unset the drain flag on the job queue. New
jobs will be accepted.

The ``info`` option shows the properties of the job queue.

WATCHER
~~~~~~~

**watcher** {pause *duration* | continue | info}

Make the watcher pause or let it continue.

The ``pause`` option causes the watcher to pause for *duration*
seconds.

The ``continue`` option will let the watcher continue.

The ``info`` option shows whether the watcher is currently paused.

REDIST-CONF
~~~~~~~~~~~

**redist-conf** [\--submit]

This command forces a full push of configuration files from the
master node to the other nodes in the cluster. This is normally not
needed, but can be run if the **verify** command complains about
configuration mismatches.

See **ganeti**\(7) for a description of ``--submit`` and other common
options.

RENAME
~~~~~~

**rename** [-f] {*name*}

Renames the cluster and in the process updates the master IP
address to the one the new name resolves to. At least one of either
the name or the IP address must be different, otherwise the
operation will be aborted.

Note that since this command can be dangerous (especially when run
over SSH), the command will require confirmation unless run with
the ``-f`` option.

RENEW-CRYPTO
~~~~~~~~~~~~

| **renew-crypto** [-f]
| [\--new-cluster-certificate] [\--new-confd-hmac-key]
| [\--new-rapi-certificate] [\--rapi-certificate *rapi-cert*]
| [\--new-spice-certificate | \--spice-certificate *spice-cert*
| \--spice-ca-certificate *spice-ca-cert*]
| [\--new-cluster-domain-secret] [\--cluster-domain-secret *filename*]

This command will stop all Ganeti daemons in the cluster and start
them again once the new certificates and keys are replicated. The
options ``--new-cluster-certificate`` and ``--new-confd-hmac-key``
can be used to regenerate, respectively, the cluster-internal SSL
certificate and the HMAC key used by **ganeti-confd**\(8).

To generate a new self-signed RAPI certificate (used by
**ganeti-rapi**\(8)) specify ``--new-rapi-certificate``. If you want to
use your own certificate, e.g. one signed by a certificate
authority (CA), pass its filename to ``--rapi-certificate``.

To generate a new self-signed SPICE certificate, used for SPICE
connections to the KVM hypervisor, specify the
``--new-spice-certificate`` option. If you want to provide a
certificate, pass its filename to ``--spice-certificate`` and pass the
signing CA certificate to ``--spice-ca-certificate``.

Finally, ``--new-cluster-domain-secret`` generates a new, random
cluster domain secret, and ``--cluster-domain-secret`` reads the
secret from a file. The cluster domain secret is used to sign
information exchanged between separate clusters via a third party.

    
755
REPAIR-DISK-SIZES
756
~~~~~~~~~~~~~~~~~
757

    
758
**repair-disk-sizes** [instance...]
759

    
760
This command checks that the recorded size of the given instance's
761
disks matches the actual size and updates any mismatches found.
762
This is needed if the Ganeti configuration is no longer consistent
763
with reality, as it will impact some disk operations. If no
764
arguments are given, all instances will be checked.
765

    
766
Note that only active disks can be checked by this command; in case
767
a disk cannot be activated it's advised to use
768
**gnt-instance activate-disks \--ignore-size ...** to force
769
activation without regard to the current size.
770

    
771
When the all disk sizes are consistent, the command will return no
772
output. Otherwise it will log details about the inconsistencies in
773
the configuration.

VERIFY
~~~~~~

| **verify** [\--no-nplus1-mem] [\--node-group *nodegroup*]
| [\--error-codes] [{-I|\--ignore-errors} *errorcode*]
| [{-I|\--ignore-errors} *errorcode*...]

Verify correctness of cluster configuration. This is safe with
respect to running instances, and incurs no downtime of the
instances.

If the ``--no-nplus1-mem`` option is given, Ganeti won't check
whether, if it loses a node, it can restart all the instances on
their secondaries (and report an error otherwise).

With ``--node-group``, restrict the verification to those nodes and
instances that live in the named group. This will not verify global
settings, but will allow one to perform verification of a group while
other operations are ongoing in other groups.

The ``--error-codes`` option outputs each error in the following
parseable format: *ftype*:*ecode*:*edomain*:*name*:*msg*.
These fields have the following meaning:

ftype
    Failure type. Can be *WARNING* or *ERROR*.

ecode
    Error code of the failure. See below for a list of error codes.

edomain
    Can be *cluster*, *node* or *instance*.

name
    Contains the name of the item that is affected by the failure.

msg
    Contains a descriptive message about the error.

``gnt-cluster verify`` will have a non-zero exit code if at least one of
the failures found is of type *ERROR*.

The ``--ignore-errors`` option can be used to change this behaviour,
because it demotes the error represented by the error code received as a
parameter to a warning. The option must be repeated for each error that
should be ignored (e.g.: ``-I ENODEVERSION -I ENODEORPHANLV``). The
``--error-codes`` option can be used to determine the error code of a
given error.
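
A possible invocation combining these options (the error code shown is
the one from the example above)::

    # gnt-cluster verify --error-codes -I ENODEVERSION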

List of error codes:

@CONSTANTS_ECODES@

VERIFY-DISKS
~~~~~~~~~~~~

**verify-disks**

The command checks which instances have degraded DRBD disks and
activates the disks of those instances.

This command is run from the **ganeti-watcher** tool, which also
has a different, complementary algorithm for doing this check.
Together, these two should ensure that DRBD disks are kept
consistent.

VERSION
~~~~~~~

**version**

Show the cluster version.

Tags
~~~~

ADD-TAGS
^^^^^^^^

**add-tags** [\--from *file*] {*tag*...}

Add tags to the cluster. If any of the tags contains invalid
characters, the entire operation will abort.

If the ``--from`` option is given, the list of tags will be
extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line
(if you do, both sources will be used). A file name of - will be
interpreted as stdin.
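
For example, with purely illustrative tag names and file path::

    # gnt-cluster add-tags staging critical
    # gnt-cluster add-tags --from /tmp/extra-tags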

LIST-TAGS
^^^^^^^^^

**list-tags**

List the tags of the cluster.

REMOVE-TAGS
^^^^^^^^^^^

**remove-tags** [\--from *file*] {*tag*...}

Remove tags from the cluster. If any of the tags does not exist on
the cluster, the entire operation will abort.

If the ``--from`` option is given, the list of tags to be removed will
be extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line (if
you do, tags from both sources will be removed). A file name of - will
be interpreted as stdin.

SEARCH-TAGS
^^^^^^^^^^^

**search-tags** {*pattern*}

Searches the tags on all objects in the cluster (the cluster
itself, the nodes and the instances) for a given pattern. The
pattern is interpreted as a regular expression and a search will be
done on it (i.e. the given pattern is not anchored to the beginning
of the string; if you want that, prefix the pattern with ^).

If no tags match the pattern, the exit code of the command
will be one. If there is at least one match, the exit code will be
zero. Each match is listed on one line, the object and the tag
separated by a space. The cluster will be listed as /cluster, a
node will be listed as /nodes/*name*, and an instance as
/instances/*name*. Example:

::

    # gnt-cluster search-tags time
    /cluster ctime:2007-09-01
    /nodes/node1.example.com mtime:2007-10-04

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: