gnt-cluster(8) Ganeti | Version @GANETI_VERSION@
================================================

Name
----

gnt-cluster - Ganeti administration, cluster-wide

Synopsis
--------

**gnt-cluster** {command} [arguments...]

DESCRIPTION
-----------

The **gnt-cluster** command is used for cluster-wide administration in
the Ganeti system.

COMMANDS
--------

ADD-TAGS
~~~~~~~~

**add-tags** [--from *file*] {*tag*...}

Add tags to the cluster. If any of the tags contains invalid
characters, the entire operation will abort.

If the ``--from`` option is given, the list of tags will be
extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line
(if you do, both sources will be used). A file name of - will be
interpreted as stdin.
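
For example, to add a tag given on the command line together with
tags read from a file (the tag names and the file path here are only
illustrative)::

    # gnt-cluster add-tags --from /tmp/cluster-tags environment:production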

COMMAND
~~~~~~~

**command** [-n *node*] [-g *group*] {*command*}

Executes a command on all nodes. If the option ``-n`` is not given,
the command will be executed on all nodes; otherwise it will be
executed only on the node(s) specified. Use the option multiple
times for running it on multiple nodes, like::

    # gnt-cluster command -n node1.example.com -n node2.example.com date

The ``-g`` option can be used to run a command only on a specific node
group, e.g.::

    # gnt-cluster command -g default date

The command is executed serially on the selected nodes. If the
master node is present in the list, the command will be executed
last on the master. Regarding the other nodes, the execution order
is somewhat alphabetic, so that node2.example.com will be earlier
than node10.example.com but after node1.example.com.

So given the node names node1, node2, node3, node10, node11, with
node3 being the master, the order will be: node1, node2, node10,
node11, node3.

The command is constructed by concatenating all other command line
arguments. For example, to list the contents of the /etc directory
on all nodes, run::

    # gnt-cluster command ls -l /etc

and the command which will be executed will be ``ls -l /etc``.

COPYFILE
~~~~~~~~

| **copyfile** [--use-replication-network] [-n *node*] [-g *group*]
| {*file*}

Copies a file to all or to some nodes. The argument specifies the
source file (on the current system), the ``-n`` argument specifies
the target node, or nodes if the option is given multiple times. If
``-n`` is not given at all, the file will be copied to all nodes. The
``-g`` option can be used to only select nodes in a specific node group.
Passing the ``--use-replication-network`` option will cause the
copy to be done over the replication network (only matters if the
primary/secondary IPs are different). Example::

    # gnt-cluster copyfile -n node1.example.com -n node2.example.com /tmp/test

This will copy the file /tmp/test from the current node to the two
named nodes.

DESTROY
~~~~~~~

**destroy** {--yes-do-it}

Remove all configuration files related to the cluster, so that a
**gnt-cluster init** can be done again afterwards.

Since this is a dangerous command, you are required to pass the
argument ``--yes-do-it``.

EPO
~~~

**epo** [--on] [--groups|--all] [--power-delay] *arguments*

Performs an emergency power-off on nodes given as arguments. If
``--groups`` is given, arguments are node groups. If ``--all`` is
provided, the whole cluster will be shut down.

The ``--on`` flag recovers the cluster after an emergency power-off.
When powering on the cluster you can use ``--power-delay`` to define the
time in seconds (fractions allowed) waited between powering on
individual nodes.

Please note that the master node will not be turned off or on
automatically. It will just be left in a state where you can manually
perform the shutdown of that one node. If the master is in the list of
affected nodes and this is not a complete cluster emergency power-off
(e.g. one using ``--all``), you're required to do a master failover to
another node not affected.
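
For example, to power off two nodes and later recover them, waiting
five seconds between nodes when powering back on (node names and the
delay value are illustrative, assuming the delay is passed as the
option's value)::

    # gnt-cluster epo node1.example.com node2.example.com
    # gnt-cluster epo --on --power-delay=5 node1.example.com node2.example.com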

GETMASTER
~~~~~~~~~

**getmaster**

Displays the current master node.

INFO
~~~~

**info** [--roman]

Shows runtime cluster information: cluster name, architecture (32
or 64 bit), master node, node list and instance list.

Passing the ``--roman`` option, **gnt-cluster info** will try to print
its integer fields in a Latin-friendly way. This allows further
diffusion of Ganeti among ancient cultures.

INIT
~~~~

| **init**
| [{-s|--secondary-ip} *secondary\_ip*]
| [--vg-name *vg-name*]
| [--master-netdev *interface-name*]
| [{-m|--mac-prefix} *mac-prefix*]
| [--no-lvm-storage]
| [--no-etc-hosts]
| [--no-ssh-init]
| [--file-storage-dir *dir*]
| [--enabled-hypervisors *hypervisors*]
| [{-H|--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
| [{-B|--backend-parameters} *be-param*=*value* [,*be-param*=*value*...]]
| [{-N|--nic-parameters} *nic-param*=*value* [,*nic-param*=*value*...]]
| [--maintain-node-health {yes \| no}]
| [--uid-pool *user-id pool definition*]
| [{-I|--default-iallocator} *default instance allocator*]
| [--primary-ip-version *version*]
| [--prealloc-wipe-disks {yes \| no}]
| [--node-parameters *ndparams*]
| [{-C|--candidate-pool-size} *candidate\_pool\_size*]
| {*clustername*}

This command is only run once initially on the first node of the
cluster. It will initialize the cluster configuration, set up the
SSH keys, start the daemons on the master node, etc. in order to have
a working one-node cluster.

Note that the *clustername* is not any random name. It has to be
resolvable to an IP address using DNS, and it is best if you give the
fully-qualified domain name. This hostname must resolve to an IP
address reserved exclusively for this purpose, i.e. not already in
use.

The cluster can run in two modes: single-homed or dual-homed. In the
first case, all traffic (public traffic, inter-node traffic and
data replication traffic) goes over the same interface. In the
dual-homed case, the data replication traffic goes over the second
network. The ``-s (--secondary-ip)`` option here marks the cluster as
dual-homed and its parameter represents this node's address on the
second network. If you initialize the cluster with ``-s``, all nodes
added must have a secondary IP as well.

Note that for Ganeti it doesn't matter if the secondary network is
actually a separate physical network, or is done using tunneling,
etc. For performance reasons, it's recommended to use a separate
network, of course.

The ``--vg-name`` option will let you specify a volume group
different from "xenvg" for Ganeti to use when creating instance
disks. This volume group must have the same name on all nodes. Once
the cluster is initialized this can be altered by using the
**modify** command. If you don't want to use lvm storage at all, use
the ``--no-lvm-storage`` option.

The ``--master-netdev`` option is useful for specifying a different
interface on which the master will activate its IP address. It's
important that all nodes have this interface because you'll need it
for a master failover.

The ``-m (--mac-prefix)`` option will let you specify a three byte
prefix under which the virtual MAC addresses of your instances will be
generated. The prefix must be specified in the format ``XX:XX:XX`` and
the default is ``aa:00:00``.

The ``--no-lvm-storage`` option allows you to initialize the
cluster without lvm support. This means that only instances using
files as their storage backend can be created. Once the
cluster is initialized you can change this setup with the
**modify** command.

The ``--no-etc-hosts`` option allows you to initialize the cluster
without modifying the /etc/hosts file.

The ``--no-ssh-init`` option allows you to initialize the cluster
without creating or distributing SSH key pairs.

The ``--file-storage-dir`` option allows you to set the directory to
use for storing the instance disk files when using file storage as
backend for instance disks.

The ``--prealloc-wipe-disks`` option sets a cluster-wide configuration
value for wiping disks prior to allocation. This increases security
at the instance level, as the instance can't access data left over on
its underlying storage.

The ``--enabled-hypervisors`` option allows you to set the list of
hypervisors that will be enabled for this cluster. Instance
hypervisors can only be chosen from the list of enabled
hypervisors, and the first entry of this list will be used by
default. Currently, the following hypervisors are available:

xen-pvm
  Xen PVM hypervisor

xen-hvm
  Xen HVM hypervisor

kvm
  Linux KVM hypervisor

chroot
  a simple chroot manager that starts chroot based on a script at the
  root of the filesystem holding the chroot

fake
  fake hypervisor for development/testing

Either a single hypervisor name or a comma-separated list of
hypervisor names can be specified. If this option is not specified,
only the xen-pvm hypervisor is enabled by default.

The ``-H (--hypervisor-parameters)`` option allows you to set default
hypervisor-specific parameters for the cluster. The format of this
option is the name of the hypervisor, followed by a colon and a
comma-separated list of key=value pairs. The keys available for each
hypervisor are detailed in the **gnt-instance**(8) man page, in the
**add** command, plus the following parameters which are only
configurable globally (at cluster level):

migration\_port
  Valid for the Xen PVM and KVM hypervisors.

  This option specifies the TCP port to use for live-migration. For
  Xen, the same port should be configured on all nodes in the
  ``/etc/xen/xend-config.sxp`` file, under the key
  "xend-relocation-port".

migration\_bandwidth
  Valid for the KVM hypervisor.

  This option specifies the maximum bandwidth that KVM will use for
  instance live migrations. The value is in MiB/s.

  This option is only effective with kvm versions >= 78 and qemu-kvm
  versions >= 0.10.0.
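
For example, to fix the KVM live-migration port and cap the migration
bandwidth at cluster initialization (the port, bandwidth and cluster
name are only illustrative)::

    # gnt-cluster init -H kvm:migration_port=8102,migration_bandwidth=32 \
      cluster.example.com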

The ``-B (--backend-parameters)`` option allows you to set the default
backend parameters for the cluster. The parameter format is a
comma-separated list of key=value pairs with the following supported
keys:

vcpus
  Number of VCPUs to set for an instance by default, must be an
  integer, will be set to 1 if not specified.

memory
  Amount of memory to allocate for an instance by default, can be
  either an integer or an integer followed by a unit (M for mebibytes
  and G for gibibytes are supported), will be set to 128M if not
  specified.

auto\_balance
  Value of the auto\_balance flag for instances to use by default,
  will be set to true if not specified.
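
As an example, the following would make new instances default to 2
VCPUs and 512M of memory (the values and the cluster name are only
illustrative)::

    # gnt-cluster init -B vcpus=2,memory=512M cluster.example.com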

The ``-N (--nic-parameters)`` option allows you to set the default NIC
parameters for the cluster. The parameter format is a comma-separated
list of key=value pairs with the following supported keys:

mode
  The default NIC mode, 'routed' or 'bridged'.

link
  In bridged mode the default NIC bridge. In routed mode it
  represents a hypervisor-vif-script dependent value to allow
  different instance groups. For example under the KVM default
  network script it is interpreted as a routing table number or
  name.
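
For example, to have instance NICs attach to a bridge named br0 by
default (the bridge and cluster names are illustrative)::

    # gnt-cluster init -N mode=bridged,link=br0 cluster.example.com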

The option ``--maintain-node-health`` allows one to enable/disable
automatic maintenance actions on nodes. Currently these include
automatic shutdown of instances and deactivation of DRBD devices on
offline nodes; in the future it might be extended to automatic
removal of unknown LVM volumes, etc.

The ``--uid-pool`` option initializes the user-id pool. The
*user-id pool definition* can contain a list of user-ids and/or a
list of user-id ranges. The parameter format is a comma-separated
list of numeric user-ids or user-id ranges. The ranges are defined
by a lower and higher boundary, separated by a dash. The boundaries
are inclusive. If the ``--uid-pool`` option is not supplied, the
user-id pool is initialized to an empty list. An empty list means
that the user-id pool feature is disabled.
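
For example, to reserve the user-id range 4000-4019 plus the single
user-id 5000 (the values and cluster name are illustrative)::

    # gnt-cluster init --uid-pool 4000-4019,5000 cluster.example.com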

The ``-I (--default-iallocator)`` option specifies the default
instance allocator. The instance allocator will be used for operations
like instance creation, instance and node migration, etc. when no
manual override is specified. If this option is not specified and
htools was not enabled at build time, the default instance allocator
will be blank, which means that relevant operations will require the
administrator to manually specify either an instance allocator, or a
set of nodes. If the option is not specified but htools was enabled,
the default iallocator will be **hail**(1) (assuming it can be found
on disk). The default iallocator can be changed later using the
**modify** command.

The ``--primary-ip-version`` option specifies the IP version used
for the primary address. Possible values are 4 and 6 for IPv4 and
IPv6, respectively. This option is used when resolving node names
and the cluster name.

The ``--node-parameters`` option allows you to set default node
parameters for the cluster. Please see **ganeti**(7) for more
information about supported key=value pairs.

The ``-C (--candidate-pool-size)`` option specifies the
``candidate_pool_size`` cluster parameter. This is the number of nodes
that the master will try to keep as master\_candidates. For more
details about this role and other node roles, see **ganeti**(7).
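
Putting several of these options together, a dual-homed KVM cluster
could be initialized as follows (all names and addresses are
illustrative)::

    # gnt-cluster init --enabled-hypervisors kvm \
      -s 192.0.2.1 --vg-name ganetivg -C 5 \
      cluster.example.com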

LIST-TAGS
~~~~~~~~~

**list-tags**

List the tags of the cluster.

MASTER-FAILOVER
~~~~~~~~~~~~~~~

**master-failover** [--no-voting]

Failover the master role to the current node.

The ``--no-voting`` option skips the remote node agreement checks.
This is dangerous, but necessary in some cases (for example failing
over the master role in a 2 node cluster with the original master
down). If the original master then comes up, it won't be able to
start its master daemon because it won't have enough votes, but
neither will the new master, if the master daemon ever needs a
restart. You can pass ``--no-voting`` to **ganeti-masterd** on the new
master to solve this problem, and run **gnt-cluster redist-conf**
to make sure the cluster is consistent again.

MASTER-PING
~~~~~~~~~~~

**master-ping**

Checks if the master daemon is alive.

If the master daemon is alive and can respond to a basic query (the
equivalent of **gnt-cluster info**), then the exit code of the
command will be 0. If the master daemon is not alive (either due to
a crash or because this is not the master node), the exit code will
be 1.
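
Since the result is reported purely through the exit code, the command
is convenient in shell scripts, for example::

    # gnt-cluster master-ping && echo "master daemon is alive"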

MODIFY
~~~~~~

| **modify**
| [--vg-name *vg-name*]
| [--no-lvm-storage]
| [--enabled-hypervisors *hypervisors*]
| [{-H|--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
| [{-B|--backend-parameters} *be-param*=*value* [,*be-param*=*value*...]]
| [{-N|--nic-parameters} *nic-param*=*value* [,*nic-param*=*value*...]]
| [--uid-pool *user-id pool definition*]
| [--add-uids *user-id pool definition*]
| [--remove-uids *user-id pool definition*]
| [{-C|--candidate-pool-size} *candidate\_pool\_size*]
| [--maintain-node-health {yes \| no}]
| [--prealloc-wipe-disks {yes \| no}]
| [{-I|--default-iallocator} *default instance allocator*]
| [--reserved-lvs=*NAMES*]
| [--node-parameters *ndparams*]
| [--master-netdev *interface-name*]

Modify the options for the cluster.

The ``--vg-name``, ``--no-lvm-storage``, ``--enabled-hypervisors``,
``-H (--hypervisor-parameters)``, ``-B (--backend-parameters)``,
``--nic-parameters``, ``-C (--candidate-pool-size)``,
``--maintain-node-health``, ``--prealloc-wipe-disks``, ``--uid-pool``,
``--node-parameters`` and ``--master-netdev`` options are described in
the **init** command.

The ``--add-uids`` and ``--remove-uids`` options can be used to
modify the user-id pool by adding/removing a list of user-ids or
user-id ranges.

The option ``--reserved-lvs`` specifies a list (comma-separated) of
logical volume names (regular expressions) that will be
ignored by the cluster verify operation. This is useful if the
volume group used for Ganeti is shared with the system for other
uses. Note that it's not recommended to create and mark as ignored
logical volume names which match Ganeti's own name format (starting
with a UUID and then .diskN), as this option only skips the
verification, but not the actual use of the names given.

To remove all reserved logical volumes, pass in an empty argument
to the option, as in ``--reserved-lvs=`` or ``--reserved-lvs ''``.

The ``-I (--default-iallocator)`` is described in the **init**
command. To clear the default iallocator, just pass an empty string
('').
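
For example, to grow the candidate pool and reserve all logical
volumes whose names end in ``-backup`` (the pool size and the pattern
are illustrative)::

    # gnt-cluster modify -C 10 --reserved-lvs='.*-backup'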

QUEUE
~~~~~

**queue** {drain | undrain | info}

Change job queue properties.

The ``drain`` option sets the drain flag on the job queue. No new
jobs will be accepted, but jobs already in the queue will be
processed.

The ``undrain`` option unsets the drain flag on the job queue. New
jobs will be accepted.

The ``info`` option shows the properties of the job queue.
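
A typical maintenance sequence might be::

    # gnt-cluster queue drain
    # gnt-cluster queue info
    # gnt-cluster queue undrain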

WATCHER
~~~~~~~

**watcher** {pause *duration* | continue | info}

Make the watcher pause or let it continue.

The ``pause`` option causes the watcher to pause for *duration*
seconds.

The ``continue`` option will let the watcher continue.

The ``info`` option shows whether the watcher is currently paused.
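
For example, to pause the watcher for one hour during maintenance and
resume it afterwards::

    # gnt-cluster watcher pause 3600
    # gnt-cluster watcher continue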

REDIST-CONF
~~~~~~~~~~~

**redist-conf** [--submit]

This command forces a full push of configuration files from the
master node to the other nodes in the cluster. This is normally not
needed, but can be run if the **verify** command complains about
configuration mismatches.

The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.

REMOVE-TAGS
~~~~~~~~~~~

**remove-tags** [--from *file*] {*tag*...}

Remove tags from the cluster. If any of the tags do not exist
on the cluster, the entire operation will abort.

If the ``--from`` option is given, the list of tags to be removed will
be extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line (if
you do, tags from both sources will be removed). A file name of - will
be interpreted as stdin.

RENAME
~~~~~~

**rename** [-f] {*name*}

Renames the cluster and in the process updates the master IP
address to the one the new name resolves to. At least one of either
the name or the IP address must be different, otherwise the
operation will be aborted.

Note that since this command can be dangerous (especially when run
over SSH), the command will require confirmation unless run with
the ``-f`` option.

RENEW-CRYPTO
~~~~~~~~~~~~

| **renew-crypto** [-f]
| [--new-cluster-certificate] [--new-confd-hmac-key]
| [--new-rapi-certificate] [--rapi-certificate *rapi-cert*]
| [--new-spice-certificate | --spice-certificate *spice-cert*
| --spice-ca-certificate *spice-ca-cert*]
| [--new-cluster-domain-secret] [--cluster-domain-secret *filename*]

This command will stop all Ganeti daemons in the cluster and start
them again once the new certificates and keys are replicated. The
option ``--new-cluster-certificate`` can be used to regenerate the
cluster-internal SSL certificate, and ``--new-confd-hmac-key`` to
regenerate the HMAC key used by **ganeti-confd**(8).

To generate a new self-signed RAPI certificate (used by
**ganeti-rapi**(8)) specify ``--new-rapi-certificate``. If you want to
use your own certificate, e.g. one signed by a certificate
authority (CA), pass its filename to ``--rapi-certificate``.

To generate a new self-signed SPICE certificate, used by SPICE
connections to the KVM hypervisor, specify the
``--new-spice-certificate`` option. If you want to provide a
certificate, pass its filename to ``--spice-certificate`` and pass the
signing CA certificate to ``--spice-ca-certificate``.

``--new-cluster-domain-secret`` generates a new, random cluster
domain secret. ``--cluster-domain-secret`` reads the secret from a
file. The cluster domain secret is used to sign information
exchanged between separate clusters via a third party.
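
For example, to replace just the self-signed RAPI certificate::

    # gnt-cluster renew-crypto --new-rapi-certificate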

REPAIR-DISK-SIZES
~~~~~~~~~~~~~~~~~

**repair-disk-sizes** [instance...]

This command checks that the recorded size of the given instances'
disks matches the actual size and updates any mismatches found.
This is needed if the Ganeti configuration is no longer consistent
with reality, as it will impact some disk operations. If no
arguments are given, all instances will be checked.

Note that only active disks can be checked by this command; in case
a disk cannot be activated it's advised to use
**gnt-instance activate-disks --ignore-size ...** to force
activation without regard to the current size.

When all disk sizes are consistent, the command will return no
output. Otherwise it will log details about the inconsistencies in
the configuration.

SEARCH-TAGS
~~~~~~~~~~~

**search-tags** {*pattern*}

Searches the tags on all objects in the cluster (the cluster
itself, the nodes and the instances) for a given pattern. The
pattern is interpreted as a regular expression and a search will be
done on it (i.e. the given pattern is not anchored to the beginning
of the string; if you want that, prefix the pattern with ^).

If no tags match the pattern, the exit code of the command
will be one. If there is at least one match, the exit code will be
zero. Each match is listed on one line, the object and the tag
separated by a space. The cluster will be listed as /cluster, a
node will be listed as /nodes/*name*, and an instance as
/instances/*name*. Example::

    # gnt-cluster search-tags time
    /cluster ctime:2007-09-01
    /nodes/node1.example.com mtime:2007-10-04

VERIFY
~~~~~~

**verify** [--no-nplus1-mem] [--node-group *nodegroup*]

Verify correctness of cluster configuration. This is safe with
respect to running instances, and incurs no downtime of the
instances.

If the ``--no-nplus1-mem`` option is given, Ganeti won't check
whether, if it loses a node, it can restart all the instances on
their secondaries (and report an error otherwise).

With ``--node-group``, restrict the verification to those nodes and
instances that live in the named group. This will not verify global
settings, but will allow verification of a group to be performed while
other operations are ongoing in other groups.
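
For example, to verify just the nodes and instances of one group while
jobs run elsewhere in the cluster (the group name is illustrative)::

    # gnt-cluster verify --node-group group1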

VERIFY-DISKS
~~~~~~~~~~~~

**verify-disks**

The command checks which instances have degraded DRBD disks and
activates the disks of those instances.

This command is run from the **ganeti-watcher** tool, which also
has a different, complementary algorithm for doing this check.
Together, these two should ensure that DRBD disks are kept
consistent.

VERSION
~~~~~~~

**version**

Show the cluster version.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: