gnt-cluster(8) Ganeti | Version @GANETI_VERSION@
================================================

Name
----

gnt-cluster - Ganeti administration, cluster-wide

Synopsis
--------

**gnt-cluster** {command} [arguments...]

DESCRIPTION
-----------

The **gnt-cluster** command is used for cluster-wide administration in
the Ganeti system.

COMMANDS
--------

ACTIVATE-MASTER-IP
~~~~~~~~~~~~~~~~~~

**activate-master-ip**

Activates the master IP on the master node.

COMMAND
~~~~~~~

**command** [-n *node*] [-g *group*] [-M] {*command*}

Executes a command on all nodes. This command is designed for simple
usage. For more complex use cases, the commands **dsh**\(1) or
**cssh**\(1) should be used instead.

If the option ``-n`` is not given, the command will be executed on all
nodes, otherwise it will be executed only on the node(s) specified. Use
the option multiple times to run it on multiple nodes, like::

    # gnt-cluster command -n node1.example.com -n node2.example.com date

The ``-g`` option can be used to run a command only on a specific node
group, e.g.::

    # gnt-cluster command -g default date

The ``-M`` option can be used to prepend the node name to all output
lines. The ``--failure-only`` option hides successful commands, making
it easier to see failures.

The command is executed serially on the selected nodes. If the
master node is present in the list, the command will be executed
last on the master. Regarding the other nodes, the execution order
is somewhat alphabetic, so that node2.example.com will be earlier
than node10.example.com but after node1.example.com.

So given the node names node1, node2, node3, node10, node11, with
node3 being the master, the order will be: node1, node2, node10,
node11, node3.

The command is constructed by concatenating all other command line
arguments. For example, to list the contents of the /etc directory
on all nodes, run::

    # gnt-cluster command ls -l /etc

and the command which will be executed will be ``ls -l /etc``.

COPYFILE
~~~~~~~~

| **copyfile** [\--use-replication-network] [-n *node*] [-g *group*]
| {*file*}

Copies a file to all or to some nodes. The argument specifies the
source file (on the current system), the ``-n`` argument specifies
the target node, or nodes if the option is given multiple times. If
``-n`` is not given at all, the file will be copied to all nodes. The
``-g`` option can be used to only select nodes in a specific node group.
Passing the ``--use-replication-network`` option will cause the
copy to be done over the replication network (only matters if the
primary/secondary IPs are different). Example::

    # gnt-cluster copyfile -n node1.example.com -n node2.example.com /tmp/test

This will copy the file /tmp/test from the current node to the two
named nodes.

DEACTIVATE-MASTER-IP
~~~~~~~~~~~~~~~~~~~~

**deactivate-master-ip** [\--yes]

Deactivates the master IP on the master node.

This should be run only locally or on a connection to the node IP
directly, as a connection to the master IP will be broken by this
operation. Because of this risk it will require user confirmation
unless the ``--yes`` option is passed.

DESTROY
~~~~~~~

**destroy** {\--yes-do-it}

Remove all configuration files related to the cluster, so that a
**gnt-cluster init** can be done again afterwards.

Since this is a dangerous command, you are required to pass the
argument *\--yes-do-it*.

EPO
~~~

**epo** [\--on] [\--groups|\--all] [\--power-delay *seconds*] *arguments*

Performs an emergency power-off on nodes given as arguments. If
``--groups`` is given, arguments are node groups. If ``--all`` is
provided, the whole cluster will be shut down.

The ``--on`` flag recovers the cluster after an emergency power-off.
When powering on the cluster you can use ``--power-delay`` to define the
time in seconds (fractions allowed) waited between powering on
individual nodes.

Please note that the master node will not be powered down or up
automatically. It will just be left in a state where you can manually
perform the shutdown of that one node. If the master is in the list of
affected nodes and this is not a complete cluster emergency power-off
(e.g. using ``--all``), you're required to do a master failover to
another, unaffected node.

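Assuming the whole cluster was previously powered off with ``--all``, a
possible staggered power-on (the delay value is purely illustrative)
might be::

    # gnt-cluster epo --on --power-delay 10 --all
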
GETMASTER
~~~~~~~~~

**getmaster**

Displays the current master node.

INFO
~~~~

**info** [\--roman]

Shows runtime cluster information: cluster name, architecture (32
or 64 bit), master node, node list and instance list.

When passing the ``--roman`` option, **gnt-cluster info** will try to
print its integer fields in a Latin-friendly way. This allows further
diffusion of Ganeti among ancient cultures.

SHOW-ISPECS-CMD
~~~~~~~~~~~~~~~

**show-ispecs-cmd**

Shows the command line that can be used to recreate the cluster with the
same options related to specs in the instance policies.

INIT
~~~~

| **init**
| [{-s|\--secondary-ip} *secondary\_ip*]
| [\--vg-name *vg-name*]
| [\--master-netdev *interface-name*]
| [\--master-netmask *netmask*]
| [\--use-external-mip-script {yes \| no}]
| [{-m|\--mac-prefix} *mac-prefix*]
| [\--no-etc-hosts]
| [\--no-ssh-init]
| [\--file-storage-dir *dir*]
| [\--shared-file-storage-dir *dir*]
| [\--enabled-hypervisors *hypervisors*]
| [{-H|\--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
| [{-B|\--backend-parameters} *be-param*=*value*[,*be-param*=*value*...]]
| [{-N|\--nic-parameters} *nic-param*=*value*[,*nic-param*=*value*...]]
| [{-D|\--disk-parameters} *disk-template*:*disk-param*=*value*[,*disk-param*=*value*...]]
| [\--maintain-node-health {yes \| no}]
| [\--uid-pool *user-id pool definition*]
| [{-I|\--default-iallocator} *default instance allocator*]
| [\--primary-ip-version *version*]
| [\--prealloc-wipe-disks {yes \| no}]
| [\--node-parameters *ndparams*]
| [{-C|\--candidate-pool-size} *candidate\_pool\_size*]
| [\--specs-cpu-count *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-disk-count *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-disk-size *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-mem-size *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-nic-count *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--ipolicy-std-specs *spec*=*value* [,*spec*=*value*...]]
| [\--ipolicy-bounds-specs *bounds_ispecs*]
| [\--ipolicy-disk-templates *template* [,*template*...]]
| [\--ipolicy-spindle-ratio *ratio*]
| [\--ipolicy-vcpu-ratio *ratio*]
| [\--disk-state *diskstate*]
| [\--hypervisor-state *hvstate*]
| [\--drbd-usermode-helper *helper*]
| [\--enabled-disk-templates *template* [,*template*...]]
| {*clustername*}

This command is only run once initially on the first node of the
cluster. It will initialize the cluster configuration, set up the
SSH keys, start the daemons on the master node, etc. in order to have
a working one-node cluster.

Note that the *clustername* is not any random name. It has to be
resolvable to an IP address using DNS, and it is best if you give the
fully-qualified domain name. This hostname must resolve to an IP
address reserved exclusively for this purpose, i.e. not already in
use.

The cluster can run in two modes: single-homed or dual-homed. In the
first case, all traffic (public traffic, inter-node traffic and
data replication traffic) goes over the same interface. In the
dual-homed case, the data replication traffic goes over the second
network. The ``-s (--secondary-ip)`` option here marks the cluster as
dual-homed and its parameter represents this node's address on the
second network. If you initialize the cluster with ``-s``, all nodes
added must have a secondary IP as well.

Note that for Ganeti it doesn't matter if the secondary network is
actually a separate physical network, or is done using tunneling,
etc. For performance reasons, it's recommended to use a separate
network, of course.

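For illustration, a minimal dual-homed initialization (the cluster name
and secondary address are placeholders) might look like::

    # gnt-cluster init -s 192.0.2.10 cluster.example.com
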
The ``--vg-name`` option will let you specify a volume group
different from "xenvg" for Ganeti to use when creating instance
disks. This volume group must have the same name on all nodes. Once
the cluster is initialized this can be altered by using the
**modify** command. Note that if the volume group name is modified after
the cluster creation and DRBD support is enabled, you might have to
manually modify the metavg as well.

If you don't want to use LVM storage at all, use
the ``--enabled-disk-templates`` option to restrict the set of enabled
disk templates. Once the cluster is initialized
you can change this setup with the **modify** command.

The ``--master-netdev`` option is useful for specifying a different
interface on which the master will activate its IP address. It's
important that all nodes have this interface because you'll need it
for a master failover.

The ``--master-netmask`` option allows you to specify a netmask for the
master IP. The netmask must be specified as an integer, and will be
interpreted as a CIDR netmask. The default value is 32 for an IPv4
address and 128 for an IPv6 address.

The ``--use-external-mip-script`` option allows you to specify whether
to use a user-supplied master IP address setup script, whose location is
``@SYSCONFDIR@/ganeti/scripts/master-ip-setup``. If the option value is
set to False, the default script (located at
``@PKGLIBDIR@/tools/master-ip-setup``) will be executed.

The ``-m (--mac-prefix)`` option will let you specify a three-byte
prefix under which the virtual MAC addresses of your instances will be
generated. The prefix must be specified in the format ``XX:XX:XX`` and
the default is ``aa:00:00``.

The ``--no-etc-hosts`` option allows you to initialize the cluster
without modifying the /etc/hosts file.

The ``--no-ssh-init`` option allows you to initialize the cluster
without creating or distributing SSH key pairs.

The ``--file-storage-dir`` and ``--shared-file-storage-dir`` options
allow you to set the directory to use for storing the instance disk
files when using the file storage backend or the shared file storage
backend, respectively, for instance disks. Note that the file and
shared file storage directories must be allowed directories for file
storage. Those directories are specified in the
``@SYSCONFDIR@/ganeti/file-storage-paths`` file.
The file storage directory can also be a subdirectory of an allowed one.
The file storage directory should be present on all nodes.

The ``--prealloc-wipe-disks`` option sets a cluster-wide configuration
value for wiping disks prior to allocation and size changes
(``gnt-instance grow-disk``). This increases security at the instance
level, as the instance can't access untouched data from its underlying
storage.

The ``--enabled-hypervisors`` option allows you to set the list of
hypervisors that will be enabled for this cluster. Instance
hypervisors can only be chosen from the list of enabled
hypervisors, and the first entry of this list will be used by
default. Currently, the following hypervisors are available:

xen-pvm
    Xen PVM hypervisor

xen-hvm
    Xen HVM hypervisor

kvm
    Linux KVM hypervisor

chroot
    a simple chroot manager that starts chroot based on a script at the
    root of the filesystem holding the chroot

fake
    fake hypervisor for development/testing

Either a single hypervisor name or a comma-separated list of
hypervisor names can be specified. If this option is not specified,
only the xen-pvm hypervisor is enabled by default.

The ``-H (--hypervisor-parameters)`` option allows you to set default
hypervisor-specific parameters for the cluster. The format of this
option is the name of the hypervisor, followed by a colon and a
comma-separated list of key=value pairs. The keys available for each
hypervisor are detailed in the **gnt-instance**\(8) man page, in the
**add** command, plus the following parameters, which are only
configurable globally (at cluster level):

migration\_port
    Valid for the Xen PVM and KVM hypervisors.

    This option specifies the TCP port to use for live-migration. For
    Xen, the same port should be configured on all nodes in the
    ``@XEN_CONFIG_DIR@/xend-config.sxp`` file, under the key
    "xend-relocation-port".

migration\_bandwidth
    Valid for the KVM hypervisor.

    This option specifies the maximum bandwidth that KVM will use for
    instance live migrations. The value is in MiB/s.

    This option is only effective with kvm versions >= 78 and qemu-kvm
    versions >= 0.10.0.

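As an illustration (the values are arbitrary), these cluster-level
hypervisor defaults could later be adjusted with something like::

    # gnt-cluster modify -H kvm:migration_port=8102,migration_bandwidth=32
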
The ``-B (--backend-parameters)`` option allows you to set the default
backend parameters for the cluster. The parameter format is a
comma-separated list of key=value pairs with the following supported
keys:

vcpus
    Number of VCPUs to set for an instance by default, must be an
    integer, will be set to 1 if not specified.

maxmem
    Maximum amount of memory to allocate for an instance by default, can
    be either an integer or an integer followed by a unit (M for
    mebibytes and G for gibibytes are supported), will be set to 128M if
    not specified.

minmem
    Minimum amount of memory to allocate for an instance by default, can
    be either an integer or an integer followed by a unit (M for
    mebibytes and G for gibibytes are supported), will be set to 128M if
    not specified.

auto\_balance
    Value of the auto\_balance flag for instances to use by default,
    will be set to true if not specified.

always\_failover
    Default value for the ``always_failover`` flag for instances; if
    not set, ``False`` is used.

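For example (values purely illustrative), the cluster-wide backend
defaults could be set with::

    # gnt-cluster modify -B maxmem=512M,minmem=256M,vcpus=2
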
The ``-N (--nic-parameters)`` option allows you to set the default
network interface parameters for the cluster. The parameter format is a
comma-separated list of key=value pairs with the following supported
keys:

mode
    The default NIC mode, one of ``routed``, ``bridged`` or
    ``openvswitch``.

link
    In ``bridged`` or ``openvswitch`` mode the default interface where
    to attach NICs. In ``routed`` mode it represents a
    hypervisor-vif-script dependent value to allow different instance
    groups. For example under the KVM default network script it is
    interpreted as a routing table number or name. Openvswitch support
    is also hypervisor dependent and currently works for the default KVM
    network script. Under Xen a custom network script must be provided.

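As a sketch (the bridge name ``br0`` is only an example), bridged
networking could be made the default with::

    # gnt-cluster modify -N mode=bridged,link=br0
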
The ``-D (--disk-parameters)`` option allows you to set the default disk
template parameters at cluster level. The format used for this option is
similar to the one used by the ``-H`` option: the disk template name
must be specified first, followed by a colon and by a comma-separated
list of key-value pairs. These parameters can only be specified at
cluster and node group level; the cluster-level parameters are inherited
by the node group at the moment of its creation, and can be further
modified at node group level using the **gnt-group**\(8) command.

The following is the list of disk parameters available for the **drbd**
template, with measurement units specified in square brackets at the end
of the description (when applicable):

resync-rate
    Static re-synchronization rate. [KiB/s]

data-stripes
    Number of stripes to use for data LVs.

meta-stripes
    Number of stripes to use for meta LVs.

disk-barriers
    What kind of barriers to **disable** for disks. It can either assume
    the value "n", meaning no barrier disabled, or a non-empty string
    containing a subset of the characters "bfd". "b" means disable disk
    barriers, "f" means disable disk flushes, "d" disables disk drains.

meta-barriers
    Boolean value indicating whether the meta barriers should be
    disabled (True) or not (False).

metavg
    String containing the name of the default LVM volume group for DRBD
    metadata. By default, it is set to ``xenvg``. It can be overridden
    during the instance creation process by using the ``metavg`` key of
    the ``--disk`` parameter.

disk-custom
    String containing additional parameters to be appended to the
    arguments list of ``drbdsetup disk``.

net-custom
    String containing additional parameters to be appended to the
    arguments list of ``drbdsetup net``.

protocol
    Replication protocol for the DRBD device. Has to be either "A", "B"
    or "C". Refer to the DRBD documentation for further information
    about the differences between the protocols.

dynamic-resync
    Boolean indicating whether to use the dynamic resync speed
    controller or not. If enabled, c-plan-ahead must be non-zero and all
    the c-* parameters will be used by DRBD. Otherwise, the value of
    resync-rate will be used as a static resync speed.

c-plan-ahead
    Agility factor of the dynamic resync speed controller (the higher
    the value, the slower the algorithm will adapt the resync speed). A
    value of 0 (the default) disables the controller. [ds]

c-fill-target
    Maximum amount of in-flight resync data for the dynamic resync speed
    controller. [sectors]

c-delay-target
    Maximum estimated peer response latency for the dynamic resync speed
    controller. [ds]

c-min-rate
    Minimum resync speed for the dynamic resync speed controller. [KiB/s]

c-max-rate
    Upper bound on resync speed for the dynamic resync speed controller.
    [KiB/s]

List of parameters available for the **plain** template:

stripes
    Number of stripes to use for new LVs.

List of parameters available for the **rbd** template:

pool
    The RADOS cluster pool, inside which all rbd volumes will reside.
    When a new RADOS cluster is deployed, the default pool to put rbd
    volumes (Images in RADOS terminology) is 'rbd'.

access
    If 'userspace', instances will access their disks directly without
    going through a block device, avoiding expensive context switches
    with kernel space and the potential for deadlocks_ in low memory
    scenarios.

    The default value is 'kernelspace' and it disables this behaviour.
    This setting may only be changed to 'userspace' if all instance
    disks in the affected group or cluster can be accessed in userspace.

    Attempting to use this feature without rbd support compiled into KVM
    results in a "no such file or directory" error message.

.. _deadlocks: http://tracker.ceph.com/issues/3076

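For illustration (the parameter values are arbitrary), DRBD disk
defaults could be tuned with::

    # gnt-cluster modify -D drbd:resync-rate=61440,metavg=xenvg
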
The option ``--maintain-node-health`` allows one to enable/disable
automatic maintenance actions on nodes. Currently these include
automatic shutdown of instances and deactivation of DRBD devices on
offline nodes; in the future it might be extended to automatic
removal of unknown LVM volumes, etc. Note that this option is only
useful if the use of ``ganeti-confd`` was enabled at compilation.

The ``--uid-pool`` option initializes the user-id pool. The
*user-id pool definition* can contain a list of user-ids and/or a
list of user-id ranges. The parameter format is a comma-separated
list of numeric user-ids or user-id ranges. The ranges are defined
by a lower and a higher boundary, separated by a dash. The boundaries
are inclusive. If the ``--uid-pool`` option is not supplied, the
user-id pool is initialized to an empty list. An empty list means
that the user-id pool feature is disabled.

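For example, a pool of twenty user-ids plus one extra id (the numbers
are illustrative) would be written as::

    # gnt-cluster init --uid-pool 4000-4019,4100 cluster.example.com
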
The ``-I (--default-iallocator)`` option specifies the default
instance allocator. The instance allocator will be used for operations
like instance creation, instance and node migration, etc. when no
manual override is specified. If this option is not specified and
htools was not enabled at build time, the default instance allocator
will be blank, which means that relevant operations will require the
administrator to manually specify either an instance allocator, or a
set of nodes. If the option is not specified but htools was enabled,
the default iallocator will be **hail**\(1) (assuming it can be found
on disk). The default iallocator can be changed later using the
**modify** command.

The ``--primary-ip-version`` option specifies the IP version used
for the primary address. Possible values are 4 and 6 for IPv4 and
IPv6, respectively. This option is used when resolving node names
and the cluster name.

The ``--node-parameters`` option allows you to set default node
parameters for the cluster. Please see **ganeti**\(7) for more
information about supported key=value pairs.

The ``-C (--candidate-pool-size)`` option specifies the
``candidate_pool_size`` cluster parameter. This is the number of nodes
that the master will try to keep as master\_candidates. For more
details about this role and other node roles, see **ganeti**\(7).

The ``--specs-...`` and ``--ipolicy-...`` options specify the instance
policy on the cluster. The ``--ipolicy-bounds-specs`` option sets the
minimum and maximum specifications for instances. The format is:
min:*param*=*value*,.../max:*param*=*value*,... and further
specification pairs can be added by using ``//`` as a separator. The
``--ipolicy-std-specs`` option takes a list of parameter/value pairs.
For both options, *param* can be one of the following (an illustrative
invocation is shown after the list):

- ``cpu-count``: number of VCPUs for an instance
- ``disk-count``: number of disks for an instance
- ``disk-size``: size of each disk
- ``memory-size``: instance memory
- ``nic-count``: number of network interfaces
- ``spindle-use``: spindle usage for an instance

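A possible combination (all values arbitrary) could look like::

    # gnt-cluster modify --ipolicy-std-specs cpu-count=1,memory-size=512 \
      --ipolicy-bounds-specs min:memory-size=128/max:memory-size=32768
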
For the ``--specs-...`` options, each option can have three values:
``min``, ``max`` and ``std``, which can also be modified on group level
(except for ``std``, which is defined once for the entire cluster).
Please note that ``std`` values are not the same as defaults set by
``--beparams``, but they are used for the capacity calculations.

- ``--specs-cpu-count`` limits the number of VCPUs that can be used by an
  instance.
- ``--specs-disk-count`` limits the number of disks
- ``--specs-disk-size`` limits the disk size for every disk used
- ``--specs-mem-size`` limits the amount of memory available
- ``--specs-nic-count`` sets limits on the number of NICs used

The ``--ipolicy-spindle-ratio`` option takes a decimal number. The
``--ipolicy-disk-templates`` option takes a comma-separated list of disk
templates. This list of disk templates must be a subset of the list
of cluster-wide enabled disk templates (which can be set with
``--enabled-disk-templates``).

- ``--ipolicy-spindle-ratio`` limits the instances-spindles ratio
- ``--ipolicy-vcpu-ratio`` limits the vcpu-cpu ratio

All the instance policy elements can be overridden at group level. Group
level overrides can be removed by specifying ``default`` as the value of
an item.

The ``--drbd-usermode-helper`` option can be used to specify a usermode
helper. Check that this string is the one used by the DRBD kernel module.

For details about how to use ``--hypervisor-state`` and ``--disk-state``
have a look at **ganeti**\(7).

The ``--enabled-disk-templates`` option specifies a list of disk templates
that can be used by instances of the cluster. For the possible values in
this list, see **gnt-instance**\(8). Note that in contrast to the list of
disk templates in the ipolicy, this list is a hard restriction. It is not
possible to create instances with disk templates that are not enabled in
the cluster. It is also not possible to disable a disk template when there
are still instances using it. The first disk template in the list of
enabled disk templates is the default disk template. It will be used for
instance creation, if no disk template is requested explicitly.

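Putting some of this together, a hypothetical initialization that
enables only the ``drbd`` and ``plain`` templates (making ``drbd`` the
default) might be::

    # gnt-cluster init --enabled-disk-templates drbd,plain cluster.example.com
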
MASTER-FAILOVER
~~~~~~~~~~~~~~~

**master-failover** [\--no-voting] [\--yes-do-it]

Failover the master role to the current node.

The ``--no-voting`` option skips the remote node agreement checks.
This is dangerous, but necessary in some cases (for example failing
over the master role in a 2-node cluster with the original master
down). If the original master then comes up, it won't be able to
start its master daemon because it won't have enough votes, but so
won't the new master, if the master daemon ever needs a restart.
You can pass ``--no-voting`` to **ganeti-masterd** on the new
master to solve this problem, and run **gnt-cluster redist-conf**
to make sure the cluster is consistent again.

The option ``--yes-do-it`` is used together with ``--no-voting``, for
skipping the interactive checks. This is even more dangerous, and should
only be used in conjunction with other means (e.g. an HA suite) to
confirm that the operation is indeed safe.

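As a sketch of the two-node scenario described above, run on the
surviving node that should become the master::

    # gnt-cluster master-failover --no-voting
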
MASTER-PING
~~~~~~~~~~~

**master-ping**

Checks if the master daemon is alive.

If the master daemon is alive and can respond to a basic query (the
equivalent of **gnt-cluster info**), then the exit code of the
command will be 0. If the master daemon is not alive (either due to
a crash or because this is not the master node), the exit code will
be 1.

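Since only the exit code matters, an illustrative shell check could
look like::

    # gnt-cluster master-ping && echo "master daemon is alive"
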
MODIFY
~~~~~~

| **modify** [\--submit] [\--print-job-id]
| [\--force]
| [\--vg-name *vg-name*]
| [\--enabled-hypervisors *hypervisors*]
| [{-H|\--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
| [{-B|\--backend-parameters} *be-param*=*value*[,*be-param*=*value*...]]
| [{-N|\--nic-parameters} *nic-param*=*value*[,*nic-param*=*value*...]]
| [{-D|\--disk-parameters} *disk-template*:*disk-param*=*value*[,*disk-param*=*value*...]]
| [\--uid-pool *user-id pool definition*]
| [\--add-uids *user-id pool definition*]
| [\--remove-uids *user-id pool definition*]
| [{-C|\--candidate-pool-size} *candidate\_pool\_size*]
| [\--maintain-node-health {yes \| no}]
| [\--prealloc-wipe-disks {yes \| no}]
| [{-I|\--default-iallocator} *default instance allocator*]
| [\--reserved-lvs=*NAMES*]
| [\--node-parameters *ndparams*]
| [\--master-netdev *interface-name*]
| [\--master-netmask *netmask*]
| [\--use-external-mip-script {yes \| no}]
| [\--hypervisor-state *hvstate*]
| [\--disk-state *diskstate*]
| [\--ipolicy-std-specs *spec*=*value* [,*spec*=*value*...]]
| [\--ipolicy-bounds-specs *bounds_ispecs*]
| [\--ipolicy-disk-templates *template* [,*template*...]]
| [\--ipolicy-spindle-ratio *ratio*]
| [\--ipolicy-vcpu-ratio *ratio*]
| [\--enabled-disk-templates *template* [,*template*...]]
| [\--drbd-usermode-helper *helper*]
| [\--file-storage-dir *dir*]
| [\--shared-file-storage-dir *dir*]

Modify the options for the cluster.

The ``--vg-name``, ``--enabled-hypervisors``, ``-H (--hypervisor-parameters)``,
``-B (--backend-parameters)``, ``-D (--disk-parameters)``, ``--nic-parameters``,
``-C (--candidate-pool-size)``, ``--maintain-node-health``,
``--prealloc-wipe-disks``, ``--uid-pool``, ``--node-parameters``,
``--master-netdev``, ``--master-netmask``, ``--use-external-mip-script``,
``--drbd-usermode-helper``, ``--file-storage-dir``,
``--shared-file-storage-dir``, and ``--enabled-disk-templates`` options are
described in the **init** command.

The ``--hypervisor-state`` and ``--disk-state`` options are described in
detail in **ganeti**\(7).

The ``--add-uids`` and ``--remove-uids`` options can be used to
modify the user-id pool by adding/removing a list of user-ids or
user-id ranges.

The option ``--reserved-lvs`` specifies a list (comma-separated) of
logical volume names (regular expressions) that will be
ignored by the cluster verify operation. This is useful if the
volume group used for Ganeti is shared with the system for other
uses. Note that it's not recommended to create and mark as ignored
logical volume names which match Ganeti's own name format (starting
with UUID and then .diskN), as this option only skips the
verification, but not the actual use of the names given.

To remove all reserved logical volumes, pass in an empty argument
to the option, as in ``--reserved-lvs=`` or ``--reserved-lvs ''``.

The ``-I (--default-iallocator)`` option is described in the **init**
command. To clear the default iallocator, just pass an empty string
('').

The ``--ipolicy-...`` options are described in the **init** command.

See **ganeti**\(7) for a description of ``--submit`` and other common
options.

QUEUE
~~~~~

**queue** {drain | undrain | info}

Change job queue properties.

The ``drain`` option sets the drain flag on the job queue. No new
jobs will be accepted, but jobs already in the queue will be
processed.

The ``undrain`` option will unset the drain flag on the job queue. New
jobs will be accepted.

The ``info`` option shows the properties of the job queue.

WATCHER
~~~~~~~

**watcher** {pause *duration* | continue | info}

Make the watcher pause or let it continue.

The ``pause`` option causes the watcher to pause for *duration*
seconds.

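For example, to pause the watcher for one hour (the duration is given
in seconds and the value is illustrative)::

    # gnt-cluster watcher pause 3600
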
The ``continue`` option will let the watcher continue.

The ``info`` option shows whether the watcher is currently paused.

REDIST-CONF
~~~~~~~~~~~

**redist-conf** [\--submit] [\--print-job-id]

This command forces a full push of configuration files from the
master node to the other nodes in the cluster. This is normally not
needed, but can be run if the **verify** command complains about
configuration mismatches.

See **ganeti**\(7) for a description of ``--submit`` and other common
options.

RENAME
~~~~~~

**rename** [-f] {*name*}

Renames the cluster and in the process updates the master IP
address to the one the new name resolves to. At least one of either
the name or the IP address must be different, otherwise the
operation will be aborted.

Note that since this command can be dangerous (especially when run
over SSH), the command will require confirmation unless run with
the ``-f`` option.

RENEW-CRYPTO
~~~~~~~~~~~~

| **renew-crypto** [-f]
| [\--new-cluster-certificate] [\--new-confd-hmac-key]
| [\--new-rapi-certificate] [\--rapi-certificate *rapi-cert*]
| [\--new-spice-certificate | \--spice-certificate *spice-cert*
| \--spice-ca-certificate *spice-ca-cert*]
| [\--new-cluster-domain-secret] [\--cluster-domain-secret *filename*]

This command will stop all Ganeti daemons in the cluster and start
them again once the new certificates and keys are replicated. The
options ``--new-cluster-certificate`` and ``--new-confd-hmac-key``
can be used to regenerate the cluster-internal SSL certificate and
the HMAC key used by **ganeti-confd**\(8), respectively.

To generate a new self-signed RAPI certificate (used by
**ganeti-rapi**\(8)) specify ``--new-rapi-certificate``. If you want to
use your own certificate, e.g. one signed by a certificate
authority (CA), pass its filename to ``--rapi-certificate``.

To generate a new self-signed SPICE certificate, used for SPICE
connections to the KVM hypervisor, specify the
``--new-spice-certificate`` option. If you want to provide a
certificate, pass its filename to ``--spice-certificate`` and pass the
signing CA certificate to ``--spice-ca-certificate``.

Finally ``--new-cluster-domain-secret`` generates a new, random
cluster domain secret, and ``--cluster-domain-secret`` reads the
secret from a file. The cluster domain secret is used to sign
information exchanged between separate clusters via a third party.

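For instance, to regenerate just the self-signed RAPI certificate
(daemons will be restarted as described above)::

    # gnt-cluster renew-crypto --new-rapi-certificate
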
REPAIR-DISK-SIZES
~~~~~~~~~~~~~~~~~

**repair-disk-sizes** [instance...]

This command checks that the recorded size of the given instance's
disks matches the actual size and updates any mismatches found.
This is needed if the Ganeti configuration is no longer consistent
with reality, as it will impact some disk operations. If no
arguments are given, all instances will be checked. When exclusive
storage is active, spindles are also updated.

Note that only active disks can be checked by this command; in case
a disk cannot be activated it's advised to use
**gnt-instance activate-disks \--ignore-size ...** to force
activation without regard to the current size.

When all the disk sizes are consistent, the command will return no
output. Otherwise it will log details about the inconsistencies in
the configuration.

VERIFY
~~~~~~

| **verify** [\--no-nplus1-mem] [\--node-group *nodegroup*]
| [\--error-codes] [{-I|\--ignore-errors} *errorcode*]
| [{-I|\--ignore-errors} *errorcode*...]

Verify correctness of cluster configuration. This is safe with
respect to running instances, and incurs no downtime of the
instances.

If the ``--no-nplus1-mem`` option is given, Ganeti won't check
whether, if it loses a node, it can restart all the instances on
their secondaries (and report an error otherwise).

With ``--node-group``, restrict the verification to those nodes and
instances that live in the named group. This will not verify global
settings, but will allow you to perform verification of a group while
other operations are ongoing in other groups.

The ``--error-codes`` option outputs each error in the following
parseable format: *ftype*:*ecode*:*edomain*:*name*:*msg*.
These fields have the following meaning:

ftype
    Failure type. Can be *WARNING* or *ERROR*.

ecode
    Error code of the failure. See below for a list of error codes.

edomain
    Can be *cluster*, *node* or *instance*.

name
    Contains the name of the item that is affected by the failure.

msg
    Contains a descriptive error message about the error.

``gnt-cluster verify`` will have a non-zero exit code if at least one of
the failures that are found is of type *ERROR*.

The ``--ignore-errors`` option can be used to change this behaviour,
because it demotes the error represented by the error code received as a
parameter to a warning. The option must be repeated for each error that
should be ignored (e.g.: ``-I ENODEVERSION -I ENODEORPHANLV``). The
``--error-codes`` option can be used to determine the error code of a
given error.

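As an example, a verification that prints parseable error codes but
demotes orphaned-LV errors to warnings could be run as::

    # gnt-cluster verify --error-codes -I ENODEORPHANLV
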
List of error codes:

@CONSTANTS_ECODES@

VERIFY-DISKS
~~~~~~~~~~~~

**verify-disks**

The command checks which instances have degraded DRBD disks and
activates the disks of those instances.

This command is run from the **ganeti-watcher** tool, which also
has a different, complementary algorithm for doing this check.
Together, these two should ensure that DRBD disks are kept
consistent.

VERSION
~~~~~~~

**version**

Show the cluster version.

Tags
~~~~

ADD-TAGS
^^^^^^^^

**add-tags** [\--from *file*] {*tag*...}

Add tags to the cluster. If any of the tags contains invalid
characters, the entire operation will abort.

If the ``--from`` option is given, the list of tags will be
extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line
(if you do, both sources will be used). A file name of - will be
interpreted as stdin.

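For example (the tag and the file name are only illustrative)::

    # gnt-cluster add-tags --from /tmp/extra-tags staging
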
LIST-TAGS
^^^^^^^^^

**list-tags**

List the tags of the cluster.

REMOVE-TAGS
^^^^^^^^^^^

**remove-tags** [\--from *file*] {*tag*...}

Remove tags from the cluster. If any of the tags does not exist
on the cluster, the entire operation will abort.

If the ``--from`` option is given, the list of tags to be removed will
be extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line (if
you do, tags from both sources will be removed). A file name of - will
be interpreted as stdin.

SEARCH-TAGS
^^^^^^^^^^^

**search-tags** {*pattern*}

Searches the tags on all objects in the cluster (the cluster
itself, the nodes and the instances) for a given pattern. The
pattern is interpreted as a regular expression and a search will be
done on it (i.e. the given pattern is not anchored to the beginning
of the string; if you want that, prefix the pattern with ^).

If no tags match the pattern, the exit code of the command
will be one. If there is at least one match, the exit code will be
zero. Each match is listed on one line, the object and the tag
separated by a space. The cluster will be listed as /cluster, a
node will be listed as /nodes/*name*, and an instance as
/instances/*name*. Example:

::

    # gnt-cluster search-tags time
    /cluster ctime:2007-09-01
    /nodes/node1.example.com mtime:2007-10-04

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: