gnt-cluster(8) Ganeti | Version @GANETI_VERSION@
================================================

Name
----

gnt-cluster - Ganeti administration, cluster-wide

Synopsis
--------

**gnt-cluster** {command} [arguments...]

DESCRIPTION
-----------

The **gnt-cluster** command is used for cluster-wide administration in
the Ganeti system.

COMMANDS
--------

ACTIVATE-MASTER-IP
~~~~~~~~~~~~~~~~~~

**activate-master-ip**

Activates the master IP on the master node.

COMMAND
~~~~~~~

**command** [-n *node*] [-g *group*] [-M] {*command*}

Executes a command on all nodes. This command is designed for simple
usage. For more complex use cases, the commands **dsh**\(1) or
**cssh**\(1) should be used instead.

If the option ``-n`` is not given, the command will be executed on all
nodes, otherwise it will be executed only on the node(s) specified. Use
the option multiple times for running it on multiple nodes, like::

    # gnt-cluster command -n node1.example.com -n node2.example.com date

The ``-g`` option can be used to run a command only on a specific node
group, e.g.::

    # gnt-cluster command -g default date

The ``-M`` option can be used to prepend the node name to all output
lines. The ``--failure-only`` option hides successful commands, making
it easier to see failures.
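Combining both options makes it easy to spot the nodes where a command
fails; for instance (the ``ssh`` service name here is purely
illustrative)::

    # gnt-cluster command -M --failure-only systemctl is-active ssh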

The command is executed serially on the selected nodes. If the
master node is present in the list, the command will be executed
last on the master. Regarding the other nodes, the execution order
is somewhat alphabetic, so that node2.example.com will be earlier
than node10.example.com but after node1.example.com.

So given the node names node1, node2, node3, node10, node11, with
node3 being the master, the order will be: node1, node2, node10,
node11, node3.

The command is constructed by concatenating all other command line
arguments. For example, to list the contents of the /etc directory
on all nodes, run::

    # gnt-cluster command ls -l /etc

and the command which will be executed will be ``ls -l /etc``.

COPYFILE
~~~~~~~~

| **copyfile** [\--use-replication-network] [-n *node*] [-g *group*]
| {*file*}

Copies a file to all or to some nodes. The argument specifies the
source file (on the current system), the ``-n`` argument specifies
the target node, or nodes if the option is given multiple times. If
``-n`` is not given at all, the file will be copied to all nodes. The
``-g`` option can be used to only select nodes in a specific node group.
Passing the ``--use-replication-network`` option will cause the
copy to be done over the replication network (only matters if the
primary/secondary IPs are different). Example::

    # gnt-cluster copyfile -n node1.example.com -n node2.example.com /tmp/test

This will copy the file /tmp/test from the current node to the two
named nodes.

DEACTIVATE-MASTER-IP
~~~~~~~~~~~~~~~~~~~~

**deactivate-master-ip** [\--yes]

Deactivates the master IP on the master node.

This should be run only locally, or over a connection to the node's own
IP address directly, as a connection to the master IP will be broken by
this operation. Because of this risk it will require user confirmation
unless the ``--yes`` option is passed.
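For example, when logged in on the master node itself, the confirmation
prompt can be skipped::

    # gnt-cluster deactivate-master-ip --yes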

DESTROY
~~~~~~~

**destroy** {\--yes-do-it}

Remove all configuration files related to the cluster, so that a
**gnt-cluster init** can be done again afterwards.

Since this is a dangerous command, you are required to pass the
argument ``--yes-do-it``.

EPO
~~~

**epo** [\--on] [\--groups|\--all] [\--power-delay] *arguments*

Performs an emergency power-off on nodes given as arguments. If
``--groups`` is given, arguments are node groups. If ``--all`` is
provided, the whole cluster will be shut down.

The ``--on`` flag recovers the cluster after an emergency power-off.
When powering on the cluster you can use ``--power-delay`` to define the
time in seconds (fractions allowed) waited between powering on
individual nodes.

Please note that the master node will not be turned down or up
automatically. It will just be left in a state where you can manually
perform the shutdown of that one node. If the master is in the list of
affected nodes and this is not a complete cluster emergency power-off
(i.e. one using ``--all``), you are required to do a master failover to
another, unaffected node.
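As an illustration, two ordinary nodes could be powered off and the
whole cluster later brought back up with a short delay between nodes
(node names and delay value are examples only)::

    # gnt-cluster epo node1.example.com node2.example.com
    # gnt-cluster epo --on --all --power-delay 2.5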

GETMASTER
~~~~~~~~~

**getmaster**

Displays the current master node.

INFO
~~~~

**info** [\--roman]

Shows runtime cluster information: cluster name, architecture (32
or 64 bit), master node, node list and instance list.

Passing the ``--roman`` option, gnt-cluster info will try to print
its integer fields in a latin-friendly way. This allows further
diffusion of Ganeti among ancient cultures.

SHOW-ISPECS-CMD
~~~~~~~~~~~~~~~

**show-ispecs-cmd**

Shows the command line that can be used to recreate the cluster with the
same options relative to specs in the instance policies.

INIT
~~~~

| **init**
| [{-s|\--secondary-ip} *secondary\_ip*]
| [\--vg-name *vg-name*]
| [\--master-netdev *interface-name*]
| [\--master-netmask *netmask*]
| [\--use-external-mip-script {yes \| no}]
| [{-m|\--mac-prefix} *mac-prefix*]
| [\--no-etc-hosts]
| [\--no-ssh-init]
| [\--file-storage-dir *dir*]
| [\--shared-file-storage-dir *dir*]
| [\--enabled-hypervisors *hypervisors*]
| [{-H|\--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
| [{-B|\--backend-parameters} *be-param*=*value*[,*be-param*=*value*...]]
| [{-N|\--nic-parameters} *nic-param*=*value*[,*nic-param*=*value*...]]
| [{-D|\--disk-parameters} *disk-template*:*disk-param*=*value*[,*disk-param*=*value*...]]
| [\--maintain-node-health {yes \| no}]
| [\--uid-pool *user-id pool definition*]
| [{-I|\--default-iallocator} *default instance allocator*]
| [\--default-iallocator-params *ial-param*=*value*,*ial-param*=*value*]
| [\--primary-ip-version *version*]
| [\--prealloc-wipe-disks {yes \| no}]
| [\--node-parameters *ndparams*]
| [{-C|\--candidate-pool-size} *candidate\_pool\_size*]
| [\--specs-cpu-count *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-disk-count *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-disk-size *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-mem-size *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--specs-nic-count *spec-param*=*value* [,*spec-param*=*value*...]]
| [\--ipolicy-std-specs *spec*=*value* [,*spec*=*value*...]]
| [\--ipolicy-bounds-specs *bounds_ispecs*]
| [\--ipolicy-disk-templates *template* [,*template*...]]
| [\--ipolicy-spindle-ratio *ratio*]
| [\--ipolicy-vcpu-ratio *ratio*]
| [\--disk-state *diskstate*]
| [\--hypervisor-state *hvstate*]
| [\--drbd-usermode-helper *helper*]
| [\--enabled-disk-templates *template* [,*template*...]]
| {*clustername*}

This command is run only once initially, on the first node of the
cluster. It will initialize the cluster configuration, set up the
SSH keys, start the daemons on the master node, etc., in order to have
a working one-node cluster.

Note that the *clustername* is not any random name. It has to be
resolvable to an IP address using DNS, and it is best if you give the
fully-qualified domain name. This hostname must resolve to an IP
address reserved exclusively for this purpose, i.e. not already in
use.

The cluster can run in two modes: single-homed or dual-homed. In the
first case, all traffic (public traffic, inter-node traffic and
data replication traffic) goes over the same interface. In the
dual-homed case, the data replication traffic goes over the second
network. The ``-s (--secondary-ip)`` option here marks the cluster as
dual-homed and its parameter represents this node's address on the
second network. If you initialise the cluster with ``-s``, all nodes
added must have a secondary IP as well.
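For instance, a dual-homed cluster might be created like this (the
address and cluster name are illustrative only)::

    # gnt-cluster init -s 192.168.1.1 cluster.example.com

Every node subsequently added would then also need a ``-s`` address on
that second network.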

Note that for Ganeti it doesn't matter if the secondary network is
actually a separate physical network, or is done using tunneling,
etc. For performance reasons, it's recommended to use a separate
network, of course.

The ``--vg-name`` option will let you specify a volume group
different than "xenvg" for Ganeti to use when creating instance
disks. This volume group must have the same name on all nodes. Once
the cluster is initialized this can be altered by using the
**modify** command. Note that if the volume group name is modified after
the cluster creation and DRBD support is enabled, you might have to
manually modify the metavg as well.

If you don't want to use LVM storage at all, use
the ``--enabled-disk-templates`` option to restrict the set of enabled
disk templates. Once the cluster is initialized
you can change this setup with the **modify** command.

The ``--master-netdev`` option is useful for specifying a different
interface on which the master will activate its IP address. It's
important that all nodes have this interface because you'll need it
for a master failover.

The ``--master-netmask`` option allows specifying a netmask for the
master IP. The netmask must be specified as an integer, and will be
interpreted as a CIDR netmask. The default value is 32 for an IPv4
address and 128 for an IPv6 address.

The ``--use-external-mip-script`` option allows specifying whether to
use a user-supplied master IP address setup script, whose location is
``@SYSCONFDIR@/ganeti/scripts/master-ip-setup``. If the option value is
set to False, the default script (located at
``@PKGLIBDIR@/tools/master-ip-setup``) will be executed.

The ``-m (--mac-prefix)`` option will let you specify a three-byte
prefix under which the virtual MAC addresses of your instances will be
generated. The prefix must be specified in the format ``XX:XX:XX`` and
the default is ``aa:00:00``.

The ``--no-etc-hosts`` option allows you to initialize the cluster
without modifying the /etc/hosts file.

The ``--no-ssh-init`` option allows you to initialize the cluster
without creating or distributing SSH key pairs.

The ``--file-storage-dir`` and ``--shared-file-storage-dir`` options
allow you to set the directory to use for storing the instance disk
files when using the file storage backend or the shared file storage
backend, respectively, for instance disks. Note that the file and
shared file storage dir must be an allowed directory for file storage.
Those directories are specified in the
``@SYSCONFDIR@/ganeti/file-storage-paths`` file.
The file storage directory can also be a subdirectory of an allowed one.
The file storage directory should be present on all nodes.

The ``--prealloc-wipe-disks`` option sets a cluster-wide configuration
value for wiping disks prior to allocation and size changes
(``gnt-instance grow-disk``). This increases security at the instance
level, as the instance can't access untouched data from its underlying
storage.

The ``--enabled-hypervisors`` option allows you to set the list of
hypervisors that will be enabled for this cluster. Instance
hypervisors can only be chosen from the list of enabled
hypervisors, and the first entry of this list will be used by
default. Currently, the following hypervisors are available:

xen-pvm
  Xen PVM hypervisor

xen-hvm
  Xen HVM hypervisor

kvm
  Linux KVM hypervisor

chroot
  a simple chroot manager that starts chroot based on a script at the
  root of the filesystem holding the chroot

fake
  fake hypervisor for development/testing

Either a single hypervisor name or a comma-separated list of
hypervisor names can be specified. If this option is not specified,
only the xen-pvm hypervisor is enabled by default.

The ``-H (--hypervisor-parameters)`` option allows you to set default
hypervisor-specific parameters for the cluster. The format of this
option is the name of the hypervisor, followed by a colon and a
comma-separated list of key=value pairs. The keys available for each
hypervisor are detailed in the **gnt-instance**\(8) man page, in the
**add** command, plus the following parameters which are only
configurable globally (at cluster level):

migration\_port
  Valid for the Xen PVM and KVM hypervisors.

  This option specifies the TCP port to use for live-migration. For
  Xen, the same port should be configured on all nodes in the
  ``@XEN_CONFIG_DIR@/xend-config.sxp`` file, under the key
  "xend-relocation-port".

migration\_bandwidth
  Valid for the KVM hypervisor.

  This option specifies the maximum bandwidth that KVM will use for
  instance live migrations. The value is in MiB/s.

  This option is only effective with kvm versions >= 78 and qemu-kvm
  versions >= 0.10.0.
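As an illustration, enabling only KVM and setting a cluster-wide
migration bandwidth at init time might look like this (the values are
examples only)::

    # gnt-cluster init --enabled-hypervisors kvm \
        -H kvm:migration_bandwidth=100 cluster.example.com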

The ``-B (--backend-parameters)`` option allows you to set the default
backend parameters for the cluster. The parameter format is a
comma-separated list of key=value pairs with the following supported
keys:

vcpus
  Number of VCPUs to set for an instance by default, must be an
  integer, will be set to 1 if not specified.

maxmem
  Maximum amount of memory to allocate for an instance by default, can
  be either an integer or an integer followed by a unit (M for
  mebibytes and G for gibibytes are supported), will be set to 128M if
  not specified.

minmem
  Minimum amount of memory to allocate for an instance by default, can
  be either an integer or an integer followed by a unit (M for
  mebibytes and G for gibibytes are supported), will be set to 128M if
  not specified.

auto\_balance
  Value of the auto\_balance flag for instances to use by default,
  will be set to true if not specified.

always\_failover
  Default value for the ``always_failover`` flag for instances; if
  not set, ``False`` is used.
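For example, cluster-wide backend defaults could be set at init time
like this (the values are chosen purely for illustration)::

    # gnt-cluster init -B vcpus=2,minmem=512M,maxmem=1G cluster.example.com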

The ``-N (--nic-parameters)`` option allows you to set the default
network interface parameters for the cluster. The parameter format is a
comma-separated list of key=value pairs with the following supported
keys:

mode
  The default NIC mode, one of ``routed``, ``bridged`` or
  ``openvswitch``.

link
  In ``bridged`` or ``openvswitch`` mode, the default interface to
  which to attach NICs. In ``routed`` mode it represents a
  hypervisor-vif-script-dependent value to allow different instance
  groups. For example, under the KVM default network script it is
  interpreted as a routing table number or name. Openvswitch support
  is also hypervisor-dependent and currently works for the default KVM
  network script. Under Xen a custom network script must be provided.

The ``-D (--disk-parameters)`` option allows you to set the default disk
template parameters at cluster level. The format used for this option is
similar to the one used by the ``-H`` option: the disk template name
must be specified first, followed by a colon and by a comma-separated
list of key=value pairs. These parameters can only be specified at
cluster and node group level; the cluster-level parameters are inherited
by the node group at the moment of its creation, and can be further
modified at node group level using the **gnt-group**\(8) command.
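A hypothetical example combining both options (the bridge name and the
resync rate are illustrative)::

    # gnt-cluster init -N mode=bridged,link=br0 \
        -D drbd:resync-rate=61440 cluster.example.com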

The following is the list of disk parameters available for the **drbd**
template, with measurement units specified in square brackets at the end
of the description (when applicable):

resync-rate
  Static re-synchronization rate. [KiB/s]

data-stripes
  Number of stripes to use for data LVs.

meta-stripes
  Number of stripes to use for meta LVs.

disk-barriers
  What kind of barriers to **disable** for disks. It can either assume
  the value "n", meaning no barriers are disabled, or a non-empty string
  containing a subset of the characters "bfd". "b" means disable disk
  barriers, "f" means disable disk flushes, "d" disables disk drains.

meta-barriers
  Boolean value indicating whether the meta barriers should be
  disabled (True) or not (False).

metavg
  String containing the name of the default LVM volume group for DRBD
  metadata. By default, it is set to ``xenvg``. It can be overridden
  during the instance creation process by using the ``metavg`` key of
  the ``--disk`` parameter.

disk-custom
  String containing additional parameters to be appended to the
  arguments list of ``drbdsetup disk``.

net-custom
  String containing additional parameters to be appended to the
  arguments list of ``drbdsetup net``.

protocol
  Replication protocol for the DRBD device. Has to be either "A", "B"
  or "C". Refer to the DRBD documentation for further information
  about the differences between the protocols.

dynamic-resync
  Boolean indicating whether to use the dynamic resync speed
  controller or not. If enabled, c-plan-ahead must be non-zero and all
  the c-* parameters will be used by DRBD. Otherwise, the value of
  resync-rate will be used as a static resync speed.

c-plan-ahead
  Agility factor of the dynamic resync speed controller (the higher the
  value, the slower the algorithm will adapt the resync speed). A value
  of 0 (the default) disables the controller. [ds]

c-fill-target
  Maximum amount of in-flight resync data for the dynamic resync speed
  controller. [sectors]

c-delay-target
  Maximum estimated peer response latency for the dynamic resync speed
  controller. [ds]

c-min-rate
  Minimum resync speed for the dynamic resync speed controller. [KiB/s]

c-max-rate
  Upper bound on resync speed for the dynamic resync speed controller.
  [KiB/s]

List of parameters available for the **plain** template:

stripes
  Number of stripes to use for new LVs.

List of parameters available for the **rbd** template:

pool
  The RADOS cluster pool, inside which all rbd volumes will reside.
  When a new RADOS cluster is deployed, the default pool to put rbd
  volumes (Images in RADOS terminology) is 'rbd'.

access
  If 'userspace', instances will access their disks directly without
  going through a block device, avoiding expensive context switches
  with kernel space and the potential for deadlocks_ in low memory
  scenarios.

  The default value is 'kernelspace' and it disables this behaviour.
  This setting may only be changed to 'userspace' if all instance
  disks in the affected group or cluster can be accessed in userspace.

  Attempts to use this feature without rbd support compiled into KVM
  result in a "no such file or directory" error message.

.. _deadlocks: http://tracker.ceph.com/issues/3076

The option ``--maintain-node-health`` allows one to enable/disable
automatic maintenance actions on nodes. Currently these include
automatic shutdown of instances and deactivation of DRBD devices on
offline nodes; in the future it might be extended to automatic
removal of unknown LVM volumes, etc. Note that this option is only
useful if the use of ``ganeti-confd`` was enabled at compilation.

The ``--uid-pool`` option initializes the user-id pool. The
*user-id pool definition* can contain a list of user-ids and/or a
list of user-id ranges. The parameter format is a comma-separated
list of numeric user-ids or user-id ranges. The ranges are defined
by a lower and higher boundary, separated by a dash. The boundaries
are inclusive. If the ``--uid-pool`` option is not supplied, the
user-id pool is initialized to an empty list. An empty list means
that the user-id pool feature is disabled.
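For example, a pool consisting of the inclusive range 3000-3019 plus the
single user-id 4000 (the values are illustrative) would be written as::

    # gnt-cluster init --uid-pool 3000-3019,4000 cluster.example.com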

The ``-I (--default-iallocator)`` option specifies the default
instance allocator. The instance allocator will be used for operations
like instance creation, instance and node migration, etc. when no
manual override is specified. If this option is not specified and
htools was not enabled at build time, the default instance allocator
will be blank, which means that relevant operations will require the
administrator to manually specify either an instance allocator, or a
set of nodes. If the option is not specified but htools was enabled,
the default iallocator will be **hail**\(1) (assuming it can be found
on disk). The default iallocator can be changed later using the
**modify** command.

The option ``--default-iallocator-params`` sets the cluster-wide
iallocator parameters used by the default iallocator only on instance
allocations.

The ``--primary-ip-version`` option specifies the IP version used
for the primary address. Possible values are 4 and 6 for IPv4 and
IPv6, respectively. This option is used when resolving node names
and the cluster name.

The ``--node-parameters`` option allows you to set default node
parameters for the cluster. Please see **ganeti**\(7) for more
information about supported key=value pairs.

The ``-C (--candidate-pool-size)`` option specifies the
``candidate_pool_size`` cluster parameter. This is the number of nodes
that the master will try to keep as master\_candidates. For more
details about this role and other node roles, see **ganeti**\(7).

The ``--specs-...`` and ``--ipolicy-...`` options specify the instance
policy on the cluster. The ``--ipolicy-bounds-specs`` option sets the
minimum and maximum specifications for instances. The format is:
min:*param*=*value*,.../max:*param*=*value*,... and further
specification pairs can be added by using ``//`` as a separator. The
``--ipolicy-std-specs`` option takes a list of parameter/value pairs.
For both options, *param* can be:

- ``cpu-count``: number of VCPUs for an instance
- ``disk-count``: number of disks for an instance
- ``disk-size``: size of each disk
- ``memory-size``: instance memory
- ``nic-count``: number of network interfaces
- ``spindle-use``: spindle usage for an instance
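For instance, bounds for the smallest and largest permitted instances
could be expressed like this (all numbers are illustrative)::

    # gnt-cluster init \
        --ipolicy-bounds-specs min:cpu-count=1,memory-size=128/max:cpu-count=8,memory-size=16384 \
        cluster.example.com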

For the ``--specs-...`` options, each option can have three values:
``min``, ``max`` and ``std``, which can also be modified on group level
(except for ``std``, which is defined once for the entire cluster).
Please note that ``std`` values are not the same as defaults set by
``--beparams``, but they are used for the capacity calculations.

- ``--specs-cpu-count`` limits the number of VCPUs that can be used by an
  instance.
- ``--specs-disk-count`` limits the number of disks
- ``--specs-disk-size`` limits the disk size for every disk used
- ``--specs-mem-size`` limits the amount of memory available
- ``--specs-nic-count`` sets limits on the number of NICs used

The ``--ipolicy-spindle-ratio`` option takes a decimal number. The
``--ipolicy-disk-templates`` option takes a comma-separated list of disk
templates. This list of disk templates must be a subset of the list
of cluster-wide enabled disk templates (which can be set with
``--enabled-disk-templates``).

- ``--ipolicy-spindle-ratio`` limits the instances-spindles ratio
- ``--ipolicy-vcpu-ratio`` limits the vcpu-cpu ratio

All the instance policy elements can be overridden at group level. Group
level overrides can be removed by specifying ``default`` as the value of
an item.

The ``--drbd-usermode-helper`` option can be used to specify a usermode
helper. Check that this string is the one used by the DRBD kernel.

For details about how to use ``--hypervisor-state`` and ``--disk-state``
have a look at **ganeti**\(7).

The ``--enabled-disk-templates`` option specifies a list of disk templates
that can be used by instances of the cluster. For the possible values in
this list, see **gnt-instance**\(8). Note that, in contrast to the list of
disk templates in the ipolicy, this list is a hard restriction. It is not
possible to create instances with disk templates that are not enabled in
the cluster. It is also not possible to disable a disk template when there
are still instances using it. The first disk template in the list of
enabled disk templates is the default disk template. It will be used for
instance creation, if no disk template is requested explicitly.

MASTER-FAILOVER
~~~~~~~~~~~~~~~

**master-failover** [\--no-voting] [\--yes-do-it]

Failover the master role to the current node.

The ``--no-voting`` option skips the remote node agreement checks.
This is dangerous, but necessary in some cases (for example failing
over the master role in a 2-node cluster with the original master
down). If the original master then comes up, it won't be able to
start its master daemon because it won't have enough votes, but
neither will the new master, if the master daemon ever needs a
restart. You can pass ``--no-voting`` to **ganeti-masterd** on the new
master to solve this problem, and run **gnt-cluster redist-conf**
to make sure the cluster is consistent again.

The option ``--yes-do-it`` is used together with ``--no-voting``, for
skipping the interactive checks. This is even more dangerous, and should
only be used in conjunction with other means (e.g. an HA suite) to
confirm that the operation is indeed safe.

MASTER-PING
~~~~~~~~~~~

**master-ping**

Checks if the master daemon is alive.

If the master daemon is alive and can respond to a basic query (the
equivalent of **gnt-cluster info**), then the exit code of the
command will be 0. If the master daemon is not alive (either due to
a crash or because this is not the master node), the exit code will
be 1.

MODIFY
~~~~~~

| **modify** [\--submit] [\--print-job-id]
| [\--force]
| [\--vg-name *vg-name*]
| [\--enabled-hypervisors *hypervisors*]
| [{-H|\--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
| [{-B|\--backend-parameters} *be-param*=*value*[,*be-param*=*value*...]]
| [{-N|\--nic-parameters} *nic-param*=*value*[,*nic-param*=*value*...]]
| [{-D|\--disk-parameters} *disk-template*:*disk-param*=*value*[,*disk-param*=*value*...]]
| [\--uid-pool *user-id pool definition*]
| [\--add-uids *user-id pool definition*]
| [\--remove-uids *user-id pool definition*]
| [{-C|\--candidate-pool-size} *candidate\_pool\_size*]
| [\--maintain-node-health {yes \| no}]
| [\--prealloc-wipe-disks {yes \| no}]
| [{-I|\--default-iallocator} *default instance allocator*]
| [\--default-iallocator-params *ial-param*=*value*,*ial-param*=*value*]
| [\--reserved-lvs=*NAMES*]
| [\--node-parameters *ndparams*]
| [\--master-netdev *interface-name*]
| [\--master-netmask *netmask*]
| [\--use-external-mip-script {yes \| no}]
| [\--hypervisor-state *hvstate*]
| [\--disk-state *diskstate*]
| [\--ipolicy-std-specs *spec*=*value* [,*spec*=*value*...]]
| [\--ipolicy-bounds-specs *bounds_ispecs*]
| [\--ipolicy-disk-templates *template* [,*template*...]]
| [\--ipolicy-spindle-ratio *ratio*]
| [\--ipolicy-vcpu-ratio *ratio*]
| [\--enabled-disk-templates *template* [,*template*...]]
| [\--drbd-usermode-helper *helper*]
| [\--file-storage-dir *dir*]
| [\--shared-file-storage-dir *dir*]

Modify the options for the cluster.

The ``--vg-name``, ``--enabled-hypervisors``, ``-H (--hypervisor-parameters)``,
``-B (--backend-parameters)``, ``-D (--disk-parameters)``, ``--nic-parameters``,
``-C (--candidate-pool-size)``, ``--maintain-node-health``,
``--prealloc-wipe-disks``, ``--uid-pool``, ``--node-parameters``,
``--master-netdev``, ``--master-netmask``, ``--use-external-mip-script``,
``--drbd-usermode-helper``, ``--file-storage-dir``,
``--shared-file-storage-dir``, and ``--enabled-disk-templates`` options are
described in the **init** command.
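For example, enlarging the candidate pool and switching the default
iallocator on an existing cluster might look like this (the values are
illustrative)::

    # gnt-cluster modify -C 10 -I hail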

The ``--hypervisor-state`` and ``--disk-state`` options are described in
detail in **ganeti**\(7).

The ``--add-uids`` and ``--remove-uids`` options can be used to
modify the user-id pool by adding/removing a list of user-ids or
user-id ranges.

The option ``--reserved-lvs`` specifies a list (comma-separated) of
logical volume group names (regular expressions) that will be
ignored by the cluster verify operation. This is useful if the
volume group used for Ganeti is shared with the system for other
uses. Note that it's not recommended to create and mark as ignored
logical volume names which match Ganeti's own name format (starting
with a UUID and then .diskN), as this option only skips the
verification, but not the actual use of the names given.

To remove all reserved logical volumes, pass in an empty argument
to the option, as in ``--reserved-lvs=`` or ``--reserved-lvs ''``.

The ``-I (--default-iallocator)`` option is described in the **init**
command. To clear the default iallocator, just pass an empty string
('').

The option ``--default-iallocator-params`` is described in the **init**
command. To clear the default iallocator parameters, just pass an empty
string ('').

The ``--ipolicy-...`` options are described in the **init** command.

See **ganeti**\(7) for a description of ``--submit`` and other common
options.
705 |
|
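For example, the candidate pool size and the reserved-LV list can be
changed in a single invocation (the values shown are illustrative)::

  # gnt-cluster modify -C 10 --reserved-lvs 'xenvg/.*-tmp'
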
QUEUE
~~~~~

**queue** {drain | undrain | info}

Change job queue properties.

The ``drain`` option sets the drain flag on the job queue. No new
jobs will be accepted, but jobs already in the queue will be
processed.

The ``undrain`` option unsets the drain flag on the job queue. New
jobs will be accepted.

The ``info`` option shows the properties of the job queue.

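For example, before maintenance the queue can be drained, inspected,
and later re-enabled::

  # gnt-cluster queue drain
  # gnt-cluster queue info
  # gnt-cluster queue undrain
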
WATCHER
~~~~~~~

**watcher** {pause *duration* | continue | info}

Make the watcher pause or let it continue.

The ``pause`` option causes the watcher to pause for *duration*
seconds.

The ``continue`` option will let the watcher continue.

The ``info`` option shows whether the watcher is currently paused.

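For example, to pause the watcher for one hour during maintenance and
then resume it::

  # gnt-cluster watcher pause 3600
  # gnt-cluster watcher continue
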
REDIST-CONF
~~~~~~~~~~~

**redist-conf** [\--submit] [\--print-job-id]

This command forces a full push of configuration files from the
master node to the other nodes in the cluster. This is normally not
needed, but can be run if **verify** complains about configuration
mismatches.

See **ganeti**\(7) for a description of ``--submit`` and other common
options.

RENAME
~~~~~~

**rename** [-f] {*name*}

Renames the cluster and in the process updates the master IP
address to the one the new name resolves to. At least one of either
the name or the IP address must be different, otherwise the
operation will be aborted.

Note that since this command can be dangerous (especially when run
over SSH), the command will require confirmation unless run with
the ``-f`` option.

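For example (the new cluster name is illustrative; ``-f`` skips the
confirmation prompt)::

  # gnt-cluster rename -f cluster2.example.com
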
RENEW-CRYPTO
~~~~~~~~~~~~

| **renew-crypto** [-f]
| [\--new-cluster-certificate] [\--new-confd-hmac-key]
| [\--new-rapi-certificate] [\--rapi-certificate *rapi-cert*]
| [\--new-spice-certificate | \--spice-certificate *spice-cert*
| \--spice-ca-certificate *spice-ca-cert*]
| [\--new-cluster-domain-secret] [\--cluster-domain-secret *filename*]

This command will stop all Ganeti daemons in the cluster and start
them again once the new certificates and keys are replicated. The
options ``--new-cluster-certificate`` and ``--new-confd-hmac-key``
can be used to regenerate the cluster-internal SSL certificate and
the HMAC key used by **ganeti-confd**\(8), respectively.

To generate a new self-signed RAPI certificate (used by
**ganeti-rapi**\(8)) specify ``--new-rapi-certificate``. If you want to
use your own certificate, e.g. one signed by a certificate
authority (CA), pass its filename to ``--rapi-certificate``.

To generate a new self-signed SPICE certificate, used for SPICE
connections to the KVM hypervisor, specify the
``--new-spice-certificate`` option. If you want to provide a
certificate, pass its filename to ``--spice-certificate`` and pass the
signing CA certificate to ``--spice-ca-certificate``.

Finally ``--new-cluster-domain-secret`` generates a new, random
cluster domain secret, and ``--cluster-domain-secret`` reads the
secret from a file. The cluster domain secret is used to sign
information exchanged between separate clusters via a third party.

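For example, to replace the RAPI certificate with one signed by your
own CA (the file name is illustrative)::

  # gnt-cluster renew-crypto --rapi-certificate /etc/ganeti/rapi.pem
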
REPAIR-DISK-SIZES
~~~~~~~~~~~~~~~~~

**repair-disk-sizes** [instance...]

This command checks that the recorded size of the given instance's
disks matches the actual size and updates any mismatches found.
This is needed if the Ganeti configuration is no longer consistent
with reality, as it will impact some disk operations. If no
arguments are given, all instances will be checked. When exclusive
storage is active, spindles are updated as well.

Note that only active disks can be checked by this command; in case
a disk cannot be activated it's advised to use
**gnt-instance activate-disks \--ignore-size ...** to force
activation without regard to the current size.

When all the disk sizes are consistent, the command will return no
output. Otherwise it will log details about the inconsistencies in
the configuration.

UPGRADE
~~~~~~~

**upgrade** {\--to *version* | \--resume}

This command safely switches all nodes of the cluster to a new Ganeti
version. It is a prerequisite that the new version is already installed,
albeit not activated, on all nodes; this requisite is checked before any
actions are done.

If called with the ``--resume`` option, any pending upgrade that was
interrupted by a power failure or similar event on the master is
continued. The command does nothing if not run on the master node, or
if no upgrade is in progress.


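For example, to switch the cluster to an already-installed version (the
version number is illustrative)::

  # gnt-cluster upgrade --to 2.16
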
VERIFY
~~~~~~

| **verify** [\--no-nplus1-mem] [\--node-group *nodegroup*]
| [\--error-codes] [{-I|\--ignore-errors} *errorcode*]
| [{-I|\--ignore-errors} *errorcode*...]

Verify correctness of the cluster configuration. This is safe with
respect to running instances, and incurs no downtime of the
instances.

If the ``--no-nplus1-mem`` option is given, Ganeti won't check
whether, if it loses a node, it can restart all the instances on
their secondaries (and report an error otherwise).

With ``--node-group``, restrict the verification to those nodes and
instances that live in the named group. This will not verify global
settings, but will allow verifying a group while other operations
are ongoing in other groups.

The ``--error-codes`` option outputs each error in the following
parseable format: *ftype*:*ecode*:*edomain*:*name*:*msg*.
These fields have the following meaning:

ftype
  Failure type. Can be *WARNING* or *ERROR*.

ecode
  Error code of the failure. See below for a list of error codes.

edomain
  Can be *cluster*, *node* or *instance*.

name
  Contains the name of the item that is affected by the failure.

msg
  Contains a descriptive error message about the error.

``gnt-cluster verify`` will have a non-zero exit code if at least one of
the failures found is of type *ERROR*.

The ``--ignore-errors`` option can be used to change this behaviour,
because it demotes the error represented by the error code received as a
parameter to a warning. The option must be repeated for each error that
should be ignored (e.g.: ``-I ENODEVERSION -I ENODEORPHANLV``). The
``--error-codes`` option can be used to determine the error code of a
given error.

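For example, to verify a single node group while demoting orphan-LV
errors to warnings (the group name is illustrative)::

  # gnt-cluster verify --node-group default -I ENODEORPHANLV
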
List of error codes:

@CONSTANTS_ECODES@

VERIFY-DISKS
~~~~~~~~~~~~

**verify-disks**

The command checks which instances have degraded DRBD disks and
activates the disks of those instances.

This command is run from the **ganeti-watcher** tool, which also
has a different, complementary algorithm for doing this check.
Together, these two should ensure that DRBD disks are kept
consistent.

VERSION
~~~~~~~

**version**

Show the cluster version.

Tags
~~~~

ADD-TAGS
^^^^^^^^

**add-tags** [\--from *file*] {*tag*...}

Add tags to the cluster. If any of the tags contains invalid
characters, the entire operation will abort.

If the ``--from`` option is given, the list of tags will be
extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line
(if you do, both sources will be used). A file name of - will be
interpreted as stdin.

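For example, tags can be given on the command line, read from a file,
or piped in via stdin (the tag names are illustrative)::

  # gnt-cluster add-tags owner:admin env:production
  # echo env:test | gnt-cluster add-tags --from -
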
LIST-TAGS
^^^^^^^^^

**list-tags**

List the tags of the cluster.

REMOVE-TAGS
^^^^^^^^^^^

**remove-tags** [\--from *file*] {*tag*...}

Remove tags from the cluster. If any of the tags does not exist on
the cluster, the entire operation will abort.

If the ``--from`` option is given, the list of tags to be removed will
be extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line (if
you do, tags from both sources will be removed). A file name of - will
be interpreted as stdin.

SEARCH-TAGS
^^^^^^^^^^^

**search-tags** {*pattern*}

Searches the tags on all objects in the cluster (the cluster
itself, the nodes and the instances) for a given pattern. The
pattern is interpreted as a regular expression and a search will be
done on it (i.e. the given pattern is not anchored to the beginning
of the string; if you want that, prefix the pattern with ^).

If no tags match the pattern, the exit code of the command will be
one. If there is at least one match, the exit code will be zero.
Each match is listed on one line, the object and the tag separated
by a space. The cluster will be listed as /cluster, a node will be
listed as /nodes/*name*, and an instance as /instances/*name*.
Example::

  # gnt-cluster search-tags time
  /cluster ctime:2007-09-01
  /nodes/node1.example.com mtime:2007-10-04

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: