gnt-cluster(8) Ganeti | Version @GANETI_VERSION@
================================================

Name
----

gnt-cluster - Ganeti administration, cluster-wide

Synopsis
--------

**gnt-cluster** {command} [arguments...]

DESCRIPTION
-----------

The **gnt-cluster** command is used for cluster-wide administration in the
Ganeti system.

COMMANDS
--------

ACTIVATE-MASTER-IP
~~~~~~~~~~~~~~~~~~

**activate-master-ip**

Activates the master IP on the master node.

COMMAND
~~~~~~~

**command** [-n *node*] [-g *group*] [-M] [\--failure-only] {*command*}

Executes a command on all nodes. This command is designed for simple
usage. For more complex use cases the commands **dsh**\(1) or **cssh**\(1)
should be used instead.

If the option ``-n`` is not given, the command will be executed on all
nodes, otherwise it will be executed only on the node(s) specified. Use
the option multiple times for running it on multiple nodes, like::

    # gnt-cluster command -n node1.example.com -n node2.example.com date

The ``-g`` option can be used to run a command only on a specific node
group, e.g.::

    # gnt-cluster command -g default date

The ``-M`` option can be used to prepend the node name to all output
lines. The ``--failure-only`` option hides successful commands, making
it easier to see failures.

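For example, both options can be combined to tag each output line with
its node name while showing only nodes where the command failed (the
``uptime`` payload command here is just an illustration)::

    # gnt-cluster command -M --failure-only uptime
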
The command is executed serially on the selected nodes. If the
master node is present in the list, the command will be executed
last on the master. Regarding the other nodes, the execution order
is somewhat alphabetic, so that node2.example.com will be earlier
than node10.example.com but after node1.example.com.

So given the node names node1, node2, node3, node10, node11, with
node3 being the master, the order will be: node1, node2, node10,
node11, node3.

The command is constructed by concatenating all other command line
arguments. For example, to list the contents of the /etc directory
on all nodes, run::

    # gnt-cluster command ls -l /etc

and the command which will be executed will be ``ls -l /etc``.

COPYFILE
~~~~~~~~

| **copyfile** [\--use-replication-network] [-n *node*] [-g *group*]
| {*file*}

Copies a file to all or to some nodes. The argument specifies the
source file (on the current system), the ``-n`` argument specifies
the target node, or nodes if the option is given multiple times. If
``-n`` is not given at all, the file will be copied to all nodes. The
``-g`` option can be used to only select nodes in a specific node group.
Passing the ``--use-replication-network`` option will cause the
copy to be done over the replication network (only matters if the
primary/secondary IPs are different). Example::

    # gnt-cluster copyfile -n node1.example.com -n node2.example.com /tmp/test

This will copy the file /tmp/test from the current node to the two
named nodes.

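Similarly (the group name here is only an example), a file can be pushed
to all nodes of one node group over the replication network::

    # gnt-cluster copyfile -g group1 --use-replication-network /tmp/test
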
DEACTIVATE-MASTER-IP
~~~~~~~~~~~~~~~~~~~~

**deactivate-master-ip** [\--yes]

Deactivates the master IP on the master node.

This should be run only locally or on a connection to the node IP
directly, as a connection to the master IP will be broken by this
operation. Because of this risk, it will require user confirmation
unless the ``--yes`` option is passed.

DESTROY
~~~~~~~

**destroy** {\--yes-do-it}

Remove all configuration files related to the cluster, so that a
**gnt-cluster init** can be done again afterwards.

Since this is a dangerous command, you are required to pass the
argument ``--yes-do-it``.

EPO
~~~

**epo** [\--on] [\--groups|\--all] [\--power-delay *delay*] *arguments*

Performs an emergency power-off on nodes given as arguments. If
``--groups`` is given, arguments are node groups. If ``--all`` is
provided, the whole cluster will be shut down.

The ``--on`` flag recovers the cluster after an emergency power-off.
When powering on the cluster you can use ``--power-delay`` to define the
time in seconds (fractions allowed) to wait between powering on
individual nodes.

Please note that the master node will not be turned down or up
automatically. It will just be left in a state where you can manually
perform the shutdown of that one node. If the master is in the list of
affected nodes and this is not a complete cluster emergency power-off
(e.g. using ``--all``), you're required to do a master failover to
another node not affected.

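As an illustration (the group name and delay value are placeholders), a
node group could be powered off in an emergency and later powered back
on with a five-second delay between nodes::

    # gnt-cluster epo --groups group1
    # gnt-cluster epo --on --power-delay 5 --groups group1
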
GETMASTER
~~~~~~~~~

**getmaster**

Displays the current master node.

INFO
~~~~

**info** [\--roman]

Shows runtime cluster information: cluster name, architecture (32
or 64 bit), master node, node list and instance list.

With the ``--roman`` option, **gnt-cluster info** will try to print
its integer fields in a Latin-friendly way. This allows further
diffusion of Ganeti among ancient cultures.

SHOW-ISPECS-CMD
~~~~~~~~~~~~~~~

**show-ispecs-cmd**

Shows the command line that can be used to recreate the cluster with
the same options relative to the specs in the instance policies.

INIT
~~~~

| **init**
| [{-s|\--secondary-ip} *secondary\_ip*]
| [\--vg-name *vg-name*]
| [\--master-netdev *interface-name*]
| [\--master-netmask *netmask*]
| [\--use-external-mip-script {yes \| no}]
| [{-m|\--mac-prefix} *mac-prefix*]
| [\--no-etc-hosts]
| [\--no-ssh-init]
| [\--file-storage-dir *dir*]
| [\--enabled-hypervisors *hypervisors*]
| [{-H|\--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
| [{-B|\--backend-parameters} *be-param*=*value*[,*be-param*=*value*...]]
| [{-N|\--nic-parameters} *nic-param*=*value*[,*nic-param*=*value*...]]
| [{-D|\--disk-parameters} *disk-template*:*disk-param*=*value*[,*disk-param*=*value*...]]
| [\--maintain-node-health {yes \| no}]
| [\--uid-pool *user-id pool definition*]
| [{-I|\--default-iallocator} *default instance allocator*]
| [\--primary-ip-version *version*]
| [\--prealloc-wipe-disks {yes \| no}]
| [\--node-parameters *ndparams*]
| [{-C|\--candidate-pool-size} *candidate\_pool\_size*]
| [\--specs-cpu-count *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-disk-count *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-disk-size *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-mem-size *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-nic-count *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--ipolicy-std-specs *spec*=*value* [,*spec*=*value*...]]
| [\--ipolicy-bounds-specs *bounds_ispecs*]
| [\--ipolicy-disk-templates *template* [,*template*...]]
| [\--ipolicy-spindle-ratio *ratio*]
| [\--ipolicy-vcpu-ratio *ratio*]
| [\--disk-state *diskstate*]
| [\--hypervisor-state *hvstate*]
| [\--drbd-usermode-helper *helper*]
| [\--enabled-disk-templates *template* [,*template*...]]
| {*clustername*}

This command is only run once initially on the first node of the
cluster. It will initialize the cluster configuration, set up the
SSH keys, start the daemons on the master node, etc. in order to have
a working one-node cluster.

Note that the *clustername* is not any random name. It has to be
resolvable to an IP address using DNS, and it is best if you give the
fully-qualified domain name. This hostname must resolve to an IP
address reserved exclusively for this purpose, i.e. not already in
use.

The cluster can run in two modes: single-homed or dual-homed. In the
first case, all traffic (public traffic, inter-node traffic and
data replication traffic) goes over the same interface. In the
dual-homed case, the data replication traffic goes over the second
network. The ``-s (--secondary-ip)`` option here marks the cluster as
dual-homed and its parameter represents this node's address on the
second network. If you initialize the cluster with ``-s``, all nodes
added must have a secondary IP as well.

Note that for Ganeti it doesn't matter if the secondary network is
actually a separate physical network, or is done using tunneling,
etc. For performance reasons, it's recommended to use a separate
network, of course.

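For illustration only (the cluster name, interface and secondary
address are placeholders), a dual-homed cluster could be initialized
with something like::

    # gnt-cluster init -s 192.0.2.10 --master-netdev eth0 \
      --enabled-hypervisors kvm cluster.example.com
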
The ``--vg-name`` option will let you specify a volume group
different from "xenvg" for Ganeti to use when creating instance
disks. This volume group must have the same name on all nodes. Once
the cluster is initialized this can be altered by using the
**modify** command. Note that if the volume group name is modified after
the cluster creation and DRBD support is enabled, you might have to
manually modify the metavg as well.

If you don't want to use LVM storage at all, use
the ``--enabled-disk-templates`` option to restrict the set of enabled
disk templates. Once the cluster is initialized
you can change this setup with the **modify** command.

The ``--master-netdev`` option is useful for specifying a different
interface on which the master will activate its IP address. It's
important that all nodes have this interface because you'll need it
for a master failover.

The ``--master-netmask`` option allows you to specify a netmask for the
master IP. The netmask must be specified as an integer, and will be
interpreted as a CIDR netmask. The default value is 32 for an IPv4
address and 128 for an IPv6 address.

The ``--use-external-mip-script`` option allows you to specify whether to
use a user-supplied master IP address setup script, whose location is
``@SYSCONFDIR@/ganeti/scripts/master-ip-setup``. If the option value is
set to no, the default script (located at
``@PKGLIBDIR@/tools/master-ip-setup``) will be executed.

The ``-m (--mac-prefix)`` option will let you specify a three-byte
prefix under which the virtual MAC addresses of your instances will be
generated. The prefix must be specified in the format ``XX:XX:XX`` and
the default is ``aa:00:00``.

The ``--no-etc-hosts`` option allows you to initialize the cluster
without modifying the /etc/hosts file.

The ``--no-ssh-init`` option allows you to initialize the cluster
without creating or distributing SSH key pairs.

The ``--file-storage-dir`` option allows you to set the directory to
use for storing the instance disk files when using file storage as
backend for instance disks. Note that the file storage dir must be
an allowed directory for file storage. Those directories are specified
in the ``@SYSCONFDIR@/ganeti/file-storage-paths`` file. The file storage
directory can also be a subdirectory of an allowed one.

The ``--prealloc-wipe-disks`` option sets a cluster-wide configuration
value for wiping disks prior to allocation and size changes
(``gnt-instance grow-disk``). This increases security on the instance
level, as the instance can't access untouched data from its underlying
storage.

The ``--enabled-hypervisors`` option allows you to set the list of
hypervisors that will be enabled for this cluster. Instance
hypervisors can only be chosen from the list of enabled
hypervisors, and the first entry of this list will be used by
default. Currently, the following hypervisors are available:

xen-pvm
    Xen PVM hypervisor

xen-hvm
    Xen HVM hypervisor

kvm
    Linux KVM hypervisor

chroot
    a simple chroot manager that starts chroot based on a script at the
    root of the filesystem holding the chroot

fake
    fake hypervisor for development/testing

Either a single hypervisor name or a comma-separated list of
hypervisor names can be specified. If this option is not specified,
only the xen-pvm hypervisor is enabled by default.

The ``-H (--hypervisor-parameters)`` option allows you to set default
hypervisor specific parameters for the cluster. The format of this
option is the name of the hypervisor, followed by a colon and a
comma-separated list of key=value pairs. The keys available for each
hypervisor are detailed in the **gnt-instance**\(8) man page, in the
**add** command, plus the following parameters, which are only
configurable globally (at cluster level):

migration\_port
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies the TCP port to use for live migration. For
    Xen, the same port should be configured on all nodes in the
    ``@XEN_CONFIG_DIR@/xend-config.sxp`` file, under the key
    "xend-relocation-port".

migration\_bandwidth
    Valid for the KVM hypervisor.

    This option specifies the maximum bandwidth that KVM will use for
    instance live migrations. The value is in MiB/s.

    This option is only effective with kvm versions >= 78 and qemu-kvm
    versions >= 0.10.0.

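For example (the port and bandwidth values are arbitrary), cluster-wide
KVM defaults could be set at creation time with::

    # gnt-cluster init -H kvm:migration_port=8102,migration_bandwidth=100 \
      --enabled-hypervisors kvm cluster.example.com
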
The ``-B (--backend-parameters)`` option allows you to set the default
backend parameters for the cluster. The parameter format is a
comma-separated list of key=value pairs with the following supported
keys:

vcpus
    Number of VCPUs to set for an instance by default, must be an
    integer, will be set to 1 if not specified.

maxmem
    Maximum amount of memory to allocate for an instance by default, can
    be either an integer or an integer followed by a unit (M for
    mebibytes and G for gibibytes are supported), will be set to 128M if
    not specified.

minmem
    Minimum amount of memory to allocate for an instance by default, can
    be either an integer or an integer followed by a unit (M for
    mebibytes and G for gibibytes are supported), will be set to 128M if
    not specified.

auto\_balance
    Value of the auto\_balance flag for instances to use by default,
    will be set to true if not specified.

always\_failover
    Default value for the ``always_failover`` flag for instances; if
    not set, ``False`` is used.

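As an example (the values are arbitrary), cluster-wide backend defaults
could be provided at creation time like this::

    # gnt-cluster init -B maxmem=1G,minmem=512M,vcpus=2 cluster.example.com
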
The ``-N (--nic-parameters)`` option allows you to set the default
network interface parameters for the cluster. The parameter format is a
comma-separated list of key=value pairs with the following supported
keys:

mode
    The default NIC mode, one of ``routed``, ``bridged`` or
    ``openvswitch``.

link
    In ``bridged`` or ``openvswitch`` mode the default interface where
    to attach NICs. In ``routed`` mode it represents a
    hypervisor-vif-script dependent value to allow different instance
    groups. For example under the KVM default network script it is
    interpreted as a routing table number or name. Openvswitch support
    is also hypervisor dependent and currently works for the default KVM
    network script. Under Xen a custom network script must be provided.

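For instance (the bridge name is a placeholder), bridged networking over
a bridge called ``br0`` could be made the cluster default with::

    # gnt-cluster init -N mode=bridged,link=br0 cluster.example.com
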
The ``-D (--disk-parameters)`` option allows you to set the default disk
template parameters at cluster level. The format used for this option is
similar to the one used by the ``-H`` option: the disk template name
must be specified first, followed by a colon and by a comma-separated
list of key-value pairs. These parameters can only be specified at
cluster and node group level; the cluster-level parameters are inherited
by the node group at the moment of its creation, and can be further
modified at node group level using the **gnt-group**\(8) command.

The following is the list of disk parameters available for the **drbd**
template, with measurement units specified in square brackets at the end
of the description (when applicable):

resync-rate
    Static re-synchronization rate. [KiB/s]

data-stripes
    Number of stripes to use for data LVs.

meta-stripes
    Number of stripes to use for meta LVs.

disk-barriers
    What kind of barriers to **disable** for disks. It can either assume
    the value "n", meaning that no barriers are disabled, or a non-empty
    string containing a subset of the characters "bfd". "b" means disable
    disk barriers, "f" means disable disk flushes, "d" disables disk
    drains.

meta-barriers
    Boolean value indicating whether the meta barriers should be
    disabled (True) or not (False).

metavg
    String containing the name of the default LVM volume group for DRBD
    metadata. By default, it is set to ``xenvg``. It can be overridden
    during the instance creation process by using the ``metavg`` key of
    the ``--disk`` parameter.

disk-custom
    String containing additional parameters to be appended to the
    arguments list of ``drbdsetup disk``.

net-custom
    String containing additional parameters to be appended to the
    arguments list of ``drbdsetup net``.

protocol
    Replication protocol for the DRBD device. Has to be either "A", "B"
    or "C". Refer to the DRBD documentation for further information
    about the differences between the protocols.

dynamic-resync
    Boolean indicating whether to use the dynamic resync speed
    controller or not. If enabled, c-plan-ahead must be non-zero and all
    the c-* parameters will be used by DRBD. Otherwise, the value of
    resync-rate will be used as a static resync speed.

c-plan-ahead
    Agility factor of the dynamic resync speed controller (the higher,
    the slower the algorithm will adapt the resync speed). A value of 0
    (the default) disables the controller. [ds]

c-fill-target
    Maximum amount of in-flight resync data for the dynamic resync speed
    controller. [sectors]

c-delay-target
    Maximum estimated peer response latency for the dynamic resync speed
    controller. [ds]

c-min-rate
    Minimum resync speed for the dynamic resync speed controller. [KiB/s]

c-max-rate
    Upper bound on resync speed for the dynamic resync speed controller.
    [KiB/s]

List of parameters available for the **plain** template:

stripes
    Number of stripes to use for new LVs.

List of parameters available for the **rbd** template:

pool
    The RADOS cluster pool, inside which all rbd volumes will reside.
    When a new RADOS cluster is deployed, the default pool to put rbd
    volumes (Images in RADOS terminology) is 'rbd'.

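As a sketch (the rate and the metadata volume group name are only
examples), DRBD disk parameters could be set at cluster creation with::

    # gnt-cluster init -D drbd:resync-rate=61440,metavg=ganetivg \
      cluster.example.com
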
The option ``--maintain-node-health`` allows one to enable/disable
automatic maintenance actions on nodes. Currently these include
automatic shutdown of instances and deactivation of DRBD devices on
offline nodes; in the future it might be extended to automatic
removal of unknown LVM volumes, etc. Note that this option is only
useful if the use of ``ganeti-confd`` was enabled at compilation.

The ``--uid-pool`` option initializes the user-id pool. The
*user-id pool definition* can contain a list of user-ids and/or a
list of user-id ranges. The parameter format is a comma-separated
list of numeric user-ids or user-id ranges. The ranges are defined
by a lower and higher boundary, separated by a dash. The boundaries
are inclusive. If the ``--uid-pool`` option is not supplied, the
user-id pool is initialized to an empty list. An empty list means
that the user-id pool feature is disabled.

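For example (the numbers themselves are arbitrary), the following
reserves the user-ids 4000 through 4019 plus the single user-id 4100::

    # gnt-cluster init --uid-pool 4000-4019,4100 cluster.example.com
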
The ``-I (--default-iallocator)`` option specifies the default
instance allocator. The instance allocator will be used for operations
like instance creation, instance and node migration, etc. when no
manual override is specified. If this option is not specified and
htools was not enabled at build time, the default instance allocator
will be blank, which means that relevant operations will require the
administrator to manually specify either an instance allocator, or a
set of nodes. If the option is not specified but htools was enabled,
the default iallocator will be **hail**\(1) (assuming it can be found
on disk). The default iallocator can be changed later using the
**modify** command.

The ``--primary-ip-version`` option specifies the IP version used
for the primary address. Possible values are 4 and 6 for IPv4 and
IPv6, respectively. This option is used when resolving node names
and the cluster name.

The ``--node-parameters`` option allows you to set default node
parameters for the cluster. Please see **ganeti**\(7) for more
information about supported key=value pairs.

The ``-C (--candidate-pool-size)`` option specifies the
``candidate_pool_size`` cluster parameter. This is the number of nodes
that the master will try to keep as master\_candidates. For more
details about this role and other node roles, see **ganeti**\(7).

The ``--specs-...`` and ``--ipolicy-...`` options specify the instance
policy on the cluster. The ``--ipolicy-bounds-specs`` option sets the
minimum and maximum specifications for instances. The format is:
min:*param*=*value*,.../max:*param*=*value*,... and further
specification pairs can be added by using ``//`` as a separator. The
``--ipolicy-std-specs`` option takes a list of parameter/value pairs.
For both options, *param* can be:

- ``cpu-count``: number of VCPUs for an instance
- ``disk-count``: number of disks for an instance
- ``disk-size``: size of each disk
- ``memory-size``: instance memory
- ``nic-count``: number of network interfaces
- ``spindle-use``: spindle usage for an instance

For the ``--specs-...`` options, each option can have three values:
``min``, ``max`` and ``std``, which can also be modified on group level
(except for ``std``, which is defined once for the entire cluster).
Please note that ``std`` values are not the same as defaults set by
``--beparams``, but they are used for the capacity calculations.

- ``--specs-cpu-count`` limits the number of VCPUs that can be used by an
  instance.
- ``--specs-disk-count`` limits the number of disks
- ``--specs-disk-size`` limits the disk size for every disk used
- ``--specs-mem-size`` limits the amount of memory available
- ``--specs-nic-count`` sets limits on the number of NICs used

The ``--ipolicy-spindle-ratio`` and ``--ipolicy-vcpu-ratio`` options
take a decimal number. The ``--ipolicy-disk-templates`` option takes a
comma-separated list of disk templates.

- ``--ipolicy-disk-templates`` limits the allowed disk templates
- ``--ipolicy-spindle-ratio`` limits the instances-spindles ratio
- ``--ipolicy-vcpu-ratio`` limits the vcpu-cpu ratio

All the instance policy elements can be overridden at group level. Group
level overrides can be removed by specifying ``default`` as the value of
an item.

The ``--drbd-usermode-helper`` option can be used to specify a usermode
helper. Check that this string is the one used by the DRBD kernel.

For details about how to use ``--hypervisor-state`` and ``--disk-state``,
have a look at **ganeti**\(7).

The ``--enabled-disk-templates`` option specifies a list of disk templates
that can be used by instances of the cluster. For the possible values in
this list, see **gnt-instance**\(8). Note that in contrast to the list of
disk templates in the ipolicy, this list is a hard restriction. It is not
possible to create instances with disk templates that are not enabled in
the cluster. It is also not possible to disable a disk template when there
are still instances using it. The first disk template in the list of
enabled disk templates is the default disk template. It will be used for
instance creation, if no disk template is requested explicitly.

MASTER-FAILOVER
~~~~~~~~~~~~~~~

**master-failover** [\--no-voting] [\--yes-do-it]

Fails over the master role to the current node.

The ``--no-voting`` option skips the remote node agreement checks.
This is dangerous, but necessary in some cases (for example failing
over the master role in a 2-node cluster with the original master
down). If the original master then comes up, it won't be able to
start its master daemon because it won't have enough votes, but
neither will the new master, if the master daemon ever needs a restart.
You can pass ``--no-voting`` to **ganeti-masterd** on the new
master to solve this problem, and run **gnt-cluster redist-conf**
to make sure the cluster is consistent again.

The option ``--yes-do-it`` is used together with ``--no-voting``, for
skipping the interactive checks. This is even more dangerous, and should
only be used in conjunction with other means (e.g. an HA suite) to
confirm that the operation is indeed safe.

MASTER-PING
~~~~~~~~~~~

**master-ping**

Checks if the master daemon is alive.

If the master daemon is alive and can respond to a basic query (the
equivalent of **gnt-cluster info**), then the exit code of the
command will be 0. If the master daemon is not alive (either due to
a crash or because this is not the master node), the exit code will
be 1.

MODIFY
~~~~~~

| **modify** [\--submit] [\--print-job-id]
| [\--force]
| [\--vg-name *vg-name*]
| [\--enabled-hypervisors *hypervisors*]
| [{-H|\--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
| [{-B|\--backend-parameters} *be-param*=*value*[,*be-param*=*value*...]]
| [{-N|\--nic-parameters} *nic-param*=*value*[,*nic-param*=*value*...]]
| [{-D|\--disk-parameters} *disk-template*:*disk-param*=*value*[,*disk-param*=*value*...]]
| [\--uid-pool *user-id pool definition*]
| [\--add-uids *user-id pool definition*]
| [\--remove-uids *user-id pool definition*]
| [{-C|\--candidate-pool-size} *candidate\_pool\_size*]
| [\--maintain-node-health {yes \| no}]
| [\--prealloc-wipe-disks {yes \| no}]
| [{-I|\--default-iallocator} *default instance allocator*]
| [\--reserved-lvs=*NAMES*]
| [\--node-parameters *ndparams*]
| [\--master-netdev *interface-name*]
| [\--master-netmask *netmask*]
| [\--use-external-mip-script {yes \| no}]
| [\--hypervisor-state *hvstate*]
| [\--disk-state *diskstate*]
| [\--ipolicy-std-specs *spec*=*value* [,*spec*=*value*...]]
| [\--ipolicy-bounds-specs *bounds_ispecs*]
| [\--ipolicy-disk-templates *template* [,*template*...]]
| [\--ipolicy-spindle-ratio *ratio*]
| [\--ipolicy-vcpu-ratio *ratio*]
| [\--enabled-disk-templates *template* [,*template*...]]
| [\--drbd-usermode-helper *helper*]

Modify the options for the cluster.

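For example (the values are arbitrary), the candidate pool size and the
default instance allocator could be changed on a running cluster with::

    # gnt-cluster modify -C 10 --default-iallocator hail
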
The ``--vg-name``, ``--enabled-hypervisors``, ``-H (--hypervisor-parameters)``,
``-B (--backend-parameters)``, ``-D (--disk-parameters)``, ``--nic-parameters``,
``-C (--candidate-pool-size)``, ``--maintain-node-health``,
``--prealloc-wipe-disks``, ``--uid-pool``, ``--node-parameters``,
``--master-netdev``, ``--master-netmask``, ``--use-external-mip-script``,
``--drbd-usermode-helper``, and ``--enabled-disk-templates`` options are
described in the **init** command.

The ``--hypervisor-state`` and ``--disk-state`` options are described in
detail in **ganeti**\(7).

The ``--add-uids`` and ``--remove-uids`` options can be used to
modify the user-id pool by adding/removing a list of user-ids or
user-id ranges.

The option ``--reserved-lvs`` specifies a list (comma-separated) of
logical volume names (regular expressions) that will be
ignored by the cluster verify operation. This is useful if the
volume group used for Ganeti is shared with the system for other
uses. Note that it's not recommended to create and mark as ignored
logical volume names which match Ganeti's own name format (starting
with UUID and then .diskN), as this option only skips the
verification, but not the actual use of the names given.

To remove all reserved logical volumes, pass in an empty argument
to the option, as in ``--reserved-lvs=`` or ``--reserved-lvs ''``.

The ``-I (--default-iallocator)`` option is described in the **init**
command. To clear the default iallocator, just pass an empty string
('').

The ``--ipolicy-...`` options are described in the **init** command.

See **ganeti**\(7) for a description of ``--submit`` and other common
options.

QUEUE
~~~~~

**queue** {drain | undrain | info}

Change job queue properties.

The ``drain`` option sets the drain flag on the job queue. No new
jobs will be accepted, but jobs already in the queue will be
processed.

The ``undrain`` option will unset the drain flag on the job queue. New
jobs will be accepted.

The ``info`` option shows the properties of the job queue.

WATCHER
~~~~~~~

**watcher** {pause *duration* | continue | info}

Make the watcher pause or let it continue.

The ``pause`` option causes the watcher to pause for *duration*
seconds.

The ``continue`` option will let the watcher continue.

The ``info`` option shows whether the watcher is currently paused.

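For example, to pause the watcher for one hour (the duration is given
in seconds and is only illustrative)::

    # gnt-cluster watcher pause 3600
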
REDIST-CONF
~~~~~~~~~~~

**redist-conf** [\--submit] [\--print-job-id]

This command forces a full push of configuration files from the
master node to the other nodes in the cluster. This is normally not
needed, but can be run if the **verify** command complains about
configuration mismatches.

See **ganeti**\(7) for a description of ``--submit`` and other common
options.

RENAME
~~~~~~

**rename** [-f] {*name*}

Renames the cluster and in the process updates the master IP
address to the one the new name resolves to. At least one of either
the name or the IP address must be different, otherwise the
operation will be aborted.

Note that since this command can be dangerous (especially when run
over SSH), the command will require confirmation unless run with
the ``-f`` option.

RENEW-CRYPTO
~~~~~~~~~~~~

| **renew-crypto** [-f]
| [\--new-cluster-certificate] [\--new-confd-hmac-key]
| [\--new-rapi-certificate] [\--rapi-certificate *rapi-cert*]
| [\--new-spice-certificate | \--spice-certificate *spice-cert*
| \--spice-ca-certificate *spice-ca-cert*]
| [\--new-cluster-domain-secret] [\--cluster-domain-secret *filename*]

This command will stop all Ganeti daemons in the cluster and start
them again once the new certificates and keys are replicated. The
options ``--new-cluster-certificate`` and ``--new-confd-hmac-key``
can be used to regenerate the cluster-internal SSL certificate and the
HMAC key used by **ganeti-confd**\(8), respectively.

To generate a new self-signed RAPI certificate (used by
**ganeti-rapi**\(8)) specify ``--new-rapi-certificate``. If you want to
use your own certificate, e.g. one signed by a certificate
authority (CA), pass its filename to ``--rapi-certificate``.

To generate a new self-signed SPICE certificate, used for SPICE
connections to the KVM hypervisor, specify the
``--new-spice-certificate`` option. If you want to provide a
certificate, pass its filename to ``--spice-certificate`` and pass the
signing CA certificate to ``--spice-ca-certificate``.

Finally ``--new-cluster-domain-secret`` generates a new, random
cluster domain secret, and ``--cluster-domain-secret`` reads the
secret from a file. The cluster domain secret is used to sign
information exchanged between separate clusters via a third party.

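As an example, a new self-signed RAPI certificate and a new confd HMAC
key could be generated in a single run with::

    # gnt-cluster renew-crypto --new-rapi-certificate --new-confd-hmac-key
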
REPAIR-DISK-SIZES
~~~~~~~~~~~~~~~~~

**repair-disk-sizes** [instance...]

This command checks that the recorded size of the given instance's
disks matches the actual size and updates any mismatches found.
This is needed if the Ganeti configuration is no longer consistent
with reality, as it will impact some disk operations. If no
arguments are given, all instances will be checked. When exclusive
storage is active, spindles are updated as well.

Note that only active disks can be checked by this command; in case
a disk cannot be activated it's advised to use
**gnt-instance activate-disks \--ignore-size ...** to force
activation without regard to the current size.

When all the disk sizes are consistent, the command will return no
output. Otherwise it will log details about the inconsistencies in
the configuration.

VERIFY
~~~~~~

| **verify** [\--no-nplus1-mem] [\--node-group *nodegroup*]
| [\--error-codes] [{-I|\--ignore-errors} *errorcode*]
| [{-I|\--ignore-errors} *errorcode*...]

Verify correctness of cluster configuration. This is safe with
respect to running instances, and incurs no downtime of the
instances.

If the ``--no-nplus1-mem`` option is given, Ganeti won't check
whether, if it loses a node, it can restart all the instances on
their secondaries (and report an error otherwise).

With ``--node-group``, restrict the verification to those nodes and
instances that live in the named group. This will not verify global
settings, but will allow performing verification of a group while other
operations are ongoing in other groups.

The ``--error-codes`` option outputs each error in the following
parseable format: *ftype*:*ecode*:*edomain*:*name*:*msg*.
These fields have the following meaning:

ftype
    Failure type. Can be *WARNING* or *ERROR*.

ecode
    Error code of the failure. See below for a list of error codes.

edomain
    Can be *cluster*, *node* or *instance*.

name
    Contains the name of the item that is affected by the failure.

msg
    Contains a descriptive error message about the error.

``gnt-cluster verify`` will have a non-zero exit code if at least one of
the failures that are found is of type *ERROR*.

The ``--ignore-errors`` option can be used to change this behaviour,
because it demotes the error represented by the error code received as a
parameter to a warning. The option must be repeated for each error that
should be ignored (e.g.: ``-I ENODEVERSION -I ENODEORPHANLV``). The
``--error-codes`` option can be used to determine the error code of a
given error.

List of error codes:

@CONSTANTS_ECODES@

VERIFY-DISKS
~~~~~~~~~~~~

**verify-disks**

The command checks which instances have degraded DRBD disks and
activates the disks of those instances.

This command is run from the **ganeti-watcher** tool, which also
has a different, complementary algorithm for doing this check.
Together, these two should ensure that DRBD disks are kept
consistent.

VERSION
~~~~~~~

**version**

Show the cluster version.

Tags
~~~~

ADD-TAGS
^^^^^^^^

**add-tags** [\--from *file*] {*tag*...}

Add tags to the cluster. If any of the tags contains invalid
characters, the entire operation will abort.

If the ``--from`` option is given, the list of tags will be
extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line
(if you do, both sources will be used). A file name of - will be
interpreted as stdin.

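For example (the tag values are arbitrary)::

    # gnt-cluster add-tags owner:admins environment:production
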
LIST-TAGS
^^^^^^^^^

**list-tags**

List the tags of the cluster.

REMOVE-TAGS
^^^^^^^^^^^

**remove-tags** [\--from *file*] {*tag*...}

Remove tags from the cluster. If any of the tags do not exist
on the cluster, the entire operation will abort.

If the ``--from`` option is given, the list of tags to be removed will
be extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line (if
you do, tags from both sources will be removed). A file name of - will
be interpreted as stdin.

SEARCH-TAGS
^^^^^^^^^^^

**search-tags** {*pattern*}

Searches the tags on all objects in the cluster (the cluster
itself, the nodes and the instances) for a given pattern. The
pattern is interpreted as a regular expression and a search will be
done on it (i.e. the given pattern is not anchored to the beginning
of the string; if you want that, prefix the pattern with ^).

If no tags match the pattern, the exit code of the command
will be one. If there is at least one match, the exit code will be
zero. Each match is listed on one line, the object and the tag
separated by a space. The cluster will be listed as /cluster, a
node will be listed as /nodes/*name*, and an instance as
/instances/*name*. Example:

::

    # gnt-cluster search-tags time
    /cluster ctime:2007-09-01
    /nodes/node1.example.com mtime:2007-10-04

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: