gnt-node(8) Ganeti | Version @GANETI_VERSION@
=============================================

Name
----

gnt-node - Node administration

Synopsis
--------

**gnt-node** {command} [arguments...]

DESCRIPTION
-----------

The **gnt-node** command is used for managing the (physical) nodes in
the Ganeti system.

COMMANDS
--------

ADD
~~~

| **add** [--readd] [--force-join] [-s *secondary\_ip*] [-g *nodegroup*]
| [--master-capable=``yes|no``] [--vm-capable=``yes|no``]
| [--node-parameters *ndparams*]
| {*nodename*}

Adds the given node to the cluster.

This command is used to join a new node to the cluster. You will
have to provide the password for root of the node to be able to add
the node to the cluster. The command needs to be run on the Ganeti
master.

Note that the command is potentially destructive, as it will
forcibly join the specified host to the cluster, not paying attention
to its current status (it could be already in a cluster, etc.).

The ``-s`` option is used in dual-home clusters and specifies the new
node's IP in the secondary network. See the discussion in
**gnt-cluster**(8) for more information.

In case you're readding a node after hardware failure, you can use
the ``--readd`` parameter. In this case, you don't need to pass the
secondary IP again, as it will be reused from the cluster. Also, the
drained and offline flags of the node will be cleared before
re-adding it.

The ``--force-join`` option is used to proceed with adding a node even
if it already appears to belong to another cluster. This is used
during cluster merging, for example.

The ``-g`` option is used to add the new node into a specific node
group, specified by UUID or name. If only one node group exists you
can skip this option, otherwise it's mandatory.

The ``vm_capable``, ``master_capable`` and ``ndparams`` options are
described in **ganeti**(7), and are used to set the properties of the
new node.

Example::

    # gnt-node add node5.example.com
    # gnt-node add -s 192.0.2.5 node5.example.com
    # gnt-node add -g group2 -s 192.0.2.9 node9.group2.example.com

ADD-TAGS
~~~~~~~~

**add-tags** [--from *file*] {*nodename*} {*tag*...}

Add tags to the given node. If any of the tags contains invalid
characters, the entire operation will abort.

If the ``--from`` option is given, the list of tags will be
extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line
(if you do, both sources will be used). A file name of - will be
interpreted as stdin.

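As a sketch of the ``--from`` file format (the tag names and the
target node below are made up for illustration), each line of the
file is exactly one tag:

```shell
# Hypothetical tags file for --from: one tag per line. The gnt-node
# call is shown commented out because it needs a live cluster; a file
# name of "-" would read the same lines from stdin instead.
printf 'rack:a2\nenv:prod\n' > tags.txt
# gnt-node add-tags --from tags.txt node5.example.com
cat tags.txt
```
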
EVACUATE
~~~~~~~~

**evacuate** [-f] [--early-release] [--iallocator *NAME* \|
--new-secondary *destination\_node*] {*node*...}

This command will move all secondary instances away from the given
node(s). It works only for instances having a drbd disk template.

The new location for the instances can be specified in two ways:

- as a single node for all instances, via the ``--new-secondary``
  option

- or via the ``--iallocator`` option, giving a script name as
  parameter, so each instance will in turn be placed on the (per the
  script) optimal node

The ``--early-release`` option changes the code so that the old
storage on the node being evacuated is removed early (before the
resync is completed) and the internal Ganeti locks are also released
for both the current secondary and the new secondary, thus allowing
more parallelism in the cluster operation. This should be used only
when recovering from a disk failure on the current secondary (thus
the old storage is already broken) or when the storage on the primary
node is known to be fine (thus we won't need the old storage for
potential recovery).

Example::

    # gnt-node evacuate -I dumb node3.example.com

FAILOVER
~~~~~~~~

**failover** [-f] [--ignore-consistency] {*node*}

This command will fail over all instances having the given node as
primary to their secondary nodes. This works only for instances
having a drbd disk template.

Normally the failover will check the consistency of the disks before
failing over the instance. If you are trying to fail over instances
off a dead node, this check will fail. Use the
``--ignore-consistency`` option for this purpose.

Example::

    # gnt-node failover node1.example.com

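To see which instances such a failover would touch, the pinst_list
field from **gnt-node list** can be split into individual names. The
sample line below is invented; it stands in for the output of a
hypothetical ``gnt-node list --no-headers --separator='|' -o
name,pinst_list node1.example.com`` run:

```shell
# Turn the comma-separated pinst_list field into one instance name
# per line. The sample variable replaces real cluster output.
sample='node1.example.com|instance1.example.com,instance2.example.com'
printf '%s\n' "$sample" | cut -d'|' -f2 | tr ',' '\n'
# -> instance1.example.com
#    instance2.example.com
```
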
INFO
~~~~

**info** [*node*...]

Show detailed information about the nodes in the cluster. If you
don't give any arguments, all nodes will be shown; otherwise the
output will be restricted to the given names.

LIST
~~~~

| **list**
| [--no-headers] [--separator=*SEPARATOR*]
| [--units=*UNITS*] [-v] [-o *[+]FIELD,...*]
| [node...]

Lists the nodes in the cluster.

The ``--no-headers`` option will skip the initial header line. The
``--separator`` option takes an argument which denotes what will be
used between the output fields. Both these options are to help
scripting.

The units used to display the numeric values in the output vary,
depending on the options given. By default, the values will be
formatted in the most appropriate unit. If the ``--separator``
option is given, then the values are shown in mebibytes to allow
parsing by scripts. In both cases, the ``--units`` option can be
used to enforce a given output unit.

Queries of nodes will be done in parallel with any running jobs. This
might give inconsistent results for the free disk/memory.

The ``-v`` option activates verbose mode, which changes the display of
special field states (see **ganeti**(7)).

The ``-o`` option takes a comma-separated list of output fields.
The available fields and their meaning are:

name
  the node name

pinst_cnt
  the number of instances having this node as primary

pinst_list
  the list of instances having this node as primary, comma separated

sinst_cnt
  the number of instances having this node as a secondary node

sinst_list
  the list of instances having this node as a secondary node, comma
  separated

pip
  the primary ip of this node (used for cluster communication)

sip
  the secondary ip of this node (used for data replication in dual-ip
  clusters, see **gnt-cluster**(8))

dtotal
  total disk space in the volume group used for instance disk
  allocations

dfree
  available disk space in the volume group

mtotal
  total memory on the physical node

mnode
  the memory used by the node itself

mfree
  memory available for instance allocations

bootid
  the node bootid value; this is a Linux-specific feature that
  assigns a new UUID to the node at each boot and can be used to
  detect node reboots (by tracking changes in this value)

tags
  comma-separated list of the node's tags

serial_no
  the so called 'serial number' of the node; this is a numeric field
  that is incremented each time the node is modified, and it can be
  used to detect modifications

ctime
  the creation time of the node; note that this field contains spaces
  and as such it's harder to parse

  if this attribute is not present (e.g. when upgrading from older
  versions), then "N/A" will be shown instead

mtime
  the last modification time of the node; note that this field
  contains spaces and as such it's harder to parse

  if this attribute is not present (e.g. when upgrading from older
  versions), then "N/A" will be shown instead

uuid
  Show the UUID of the node (generated automatically by Ganeti)

ctotal
  the total number of logical processors

cnodes
  the number of NUMA domains on the node, if the hypervisor can
  export this information

csockets
  the number of physical CPU sockets, if the hypervisor can export
  this information

master_candidate
  whether the node is a master candidate or not

drained
  whether the node is drained or not; the cluster still communicates
  with drained nodes but excludes them from allocation operations

offline
  whether the node is offline or not; if offline, the cluster does
  not communicate with offline nodes; useful for nodes that are not
  reachable in order to avoid delays

role
  A condensed version of the node flags; this field will output a
  one-character field, with the following possible values:

  - *M* for the master node

  - *C* for a master candidate

  - *R* for a regular node

  - *D* for a drained node

  - *O* for an offline node

master_capable
  whether the node can become a master candidate

vm_capable
  whether the node can host instances

group
  the name of the node's group, if known (the query is done without
  locking, so data consistency is not guaranteed)

group.uuid
  the UUID of the node's group

If the value of the option starts with the character ``+``, the new
fields will be added to the default list. This allows one to quickly
see the default list plus a few other fields, instead of retyping
the entire list of fields.

Note that some of these fields are known from the configuration of
the cluster (e.g. name, pinst, sinst, pip, sip), and thus the master
does not need to contact the node for this data (making the listing
fast if only fields from this set are selected), whereas the other
fields are "live" fields and require a query to the cluster nodes.

Depending on the virtualization type and implementation details, the
mtotal, mnode and mfree fields may have slightly varying meanings. For
example, some solutions share the node memory with the pool of memory
used for instances (KVM), whereas others have separate memory for the
node and for the instances (Xen).

If no node names are given, then all nodes are queried. Otherwise,
only the given nodes will be listed.

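For scripting, combining ``--no-headers`` with ``--separator``
produces one easily splittable line per node. The sample data below
is made up; it stands in for the output of a hypothetical
``gnt-node list --no-headers --separator='|' -o name,mfree`` run:

```shell
# Sum the free memory (second field) over all nodes. The sample
# variable replaces real cluster output; values are in mebibytes, as
# documented for --separator mode.
sample='node1.example.com|1024
node2.example.com|2048'
printf '%s\n' "$sample" | awk -F'|' '{sum += $2} END {print sum}'
# -> 3072
```
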
LIST-FIELDS
~~~~~~~~~~~

**list-fields** [field...]

Lists available fields for nodes.

LIST-TAGS
~~~~~~~~~

**list-tags** {*nodename*}

List the tags of the given node.

MIGRATE
~~~~~~~

**migrate** [-f] [--non-live] [--migration-mode=live\|non-live]
{*node*}

This command will migrate all instances having the given node as
primary to their secondary nodes. This works only for instances
having a drbd disk template.

As for the **gnt-instance migrate** command, the options
``--non-live`` and ``--migration-mode`` can be given to influence
the migration type.

Example::

    # gnt-node migrate node1.example.com

MODIFY
~~~~~~

| **modify** [-f] [--submit]
| [--master-candidate=``yes|no``] [--drained=``yes|no``] [--offline=``yes|no``]
| [--master-capable=``yes|no``] [--vm-capable=``yes|no``] [--auto-promote]
| [-s *secondary_ip*]
| [--node-parameters *ndparams*]
| [--node-powered=``yes|no``]
| {*node*}

This command changes the role of the node. Each option takes
either a literal yes or no, and only one option should be given as
yes. The meaning of the roles and flags is described in the
manpage **ganeti**(7).

``--node-powered`` can be used to modify the state-of-record if it no
longer reflects reality.

In case a node is demoted from the master candidate role, the
operation will be refused unless you pass the ``--auto-promote``
option. This option will cause the operation to lock all cluster nodes
(thus it will not be able to run in parallel with most other jobs),
but it allows automated maintenance of the cluster candidate pool. If
locking all cluster nodes is too expensive, another option is to
manually promote another node to master candidate before demoting the
current one.

Example (setting a node offline, which will demote it from master
candidate role if it is in that role)::

    # gnt-node modify --offline=yes node1.example.com

The ``-s`` option can be used to change the node's secondary ip. No
drbd instances may be running on the node while this operation takes
place.

Example (setting the node back to online and master candidate)::

    # gnt-node modify --offline=no --master-candidate=yes node1.example.com

REMOVE
~~~~~~

**remove** {*nodename*}

Removes a node from the cluster. Instances must be removed or
migrated to another cluster beforehand.

Example::

    # gnt-node remove node5.example.com

REMOVE-TAGS
~~~~~~~~~~~

**remove-tags** [--from *file*] {*nodename*} {*tag*...}

Remove tags from the given node. If any of the tags do not exist
on the node, the entire operation will abort.

If the ``--from`` option is given, the list of tags to be removed will
be extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line (if
you do, tags from both sources will be removed). A file name of - will
be interpreted as stdin.

VOLUMES
~~~~~~~

| **volumes** [--no-headers] [--human-readable]
| [--separator=*SEPARATOR*] [--output=*FIELDS*]
| [*node*...]

Lists all logical volumes and their physical disks from the node(s)
provided.

The ``--no-headers`` option will skip the initial header line. The
``--separator`` option takes an argument which denotes what will be
used between the output fields. Both these options are to help
scripting.

The units used to display the numeric values in the output vary,
depending on the options given. By default, the values will be
formatted in the most appropriate unit. If the ``--separator``
option is given, then the values are shown in mebibytes to allow
parsing by scripts. In both cases, the ``--units`` option can be
used to enforce a given output unit.

The ``-o`` option takes a comma-separated list of output fields.
The available fields and their meaning are:

node
  the node name on which the volume exists

phys
  the physical drive (on which the LVM physical volume lives)

vg
  the volume group name

name
  the logical volume name

size
  the logical volume size

instance
  The name of the instance to which this volume belongs, or (in case
  it's an orphan volume) the character "-"

Example::

    # gnt-node volumes node5.example.com
    Node              PhysDev   VG    Name                                 Size Instance
    node1.example.com /dev/hdc1 xenvg instance1.example.com-sda_11000.meta 128  instance1.example.com
    node1.example.com /dev/hdc1 xenvg instance1.example.com-sda_11001.data 256  instance1.example.com

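Since orphan volumes carry "-" in the instance field, a script can
single them out. The sample lines below are invented; they stand in
for separator-delimited output of a hypothetical
``gnt-node volumes --no-headers --separator='|'`` run:

```shell
# Print the names (fourth field) of orphan logical volumes, i.e.
# rows whose instance field (sixth) is "-". Sample data is made up.
sample='node1.example.com|/dev/hdc1|xenvg|inst1-sda.data|256|instance1.example.com
node1.example.com|/dev/hdc1|xenvg|stale-lv|128|-'
printf '%s\n' "$sample" | awk -F'|' '$6 == "-" {print $4}'
# -> stale-lv
```
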
LIST-STORAGE
~~~~~~~~~~~~

| **list-storage** [--no-headers] [--human-readable]
| [--separator=*SEPARATOR*] [--storage-type=*STORAGE\_TYPE*]
| [--output=*FIELDS*]
| [*node*...]

Lists the available storage units and their details for the given
node(s).

The ``--no-headers`` option will skip the initial header line. The
``--separator`` option takes an argument which denotes what will be
used between the output fields. Both these options are to help
scripting.

The units used to display the numeric values in the output vary,
depending on the options given. By default, the values will be
formatted in the most appropriate unit. If the ``--separator``
option is given, then the values are shown in mebibytes to allow
parsing by scripts. In both cases, the ``--units`` option can be
used to enforce a given output unit.

The ``--storage-type`` option can be used to choose a storage unit
type. Possible choices are lvm-pv, lvm-vg or file.

The ``-o`` option takes a comma-separated list of output fields.
The available fields and their meaning are:

node
  the node name on which the volume exists

type
  the type of the storage unit (currently just what is passed in via
  ``--storage-type``)

name
  the path/identifier of the storage unit

size
  total size of the unit; for the file type see a note below

used
  used space in the unit; for the file type see a note below

free
  available disk space

allocatable
  whether the unit is available for allocation (only lvm-pv can
  change this setting, the other types always report true)

Note that for the "file" type, the total disk space might not equal
the sum of used and free, due to the method Ganeti uses to compute
each of them. The total and free values are computed as the total and
free space values for the filesystem to which the directory belongs,
whereas the used space is computed from the used space under that
directory *only*, which is not necessarily the root of the
filesystem; files outside the file storage directory can therefore
use disk space and cause a mismatch in the values.

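A numeric sketch of that mismatch (the figures below are invented,
not taken from a real cluster):

```shell
# total/free are filesystem-wide, used covers only the storage
# directory, so the difference is space consumed elsewhere on the
# same filesystem.
fs_total=1000   # MiB: "size" column (whole filesystem)
fs_free=300     # MiB: "free" column (whole filesystem)
dir_used=500    # MiB: "used" column (storage directory only)
echo $((fs_total - fs_free - dir_used))
# -> 200, the MiB used by files outside the storage directory
```
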
Example::

    node1# gnt-node list-storage node2
    Node  Type   Name        Size Used   Free Allocatable
    node2 lvm-pv /dev/sda7 673.8G 1.5G 672.3G Y
    node2 lvm-pv /dev/sdb1 698.6G   0M 698.6G Y

MODIFY-STORAGE
~~~~~~~~~~~~~~

**modify-storage** [``--allocatable=yes|no``]
{*node*} {*storage-type*} {*volume-name*}

Modifies storage volumes on a node. Only LVM physical volumes can
be modified at the moment. They have a storage type of "lvm-pv".

Example::

    # gnt-node modify-storage --allocatable no node5.example.com lvm-pv /dev/sdb1

REPAIR-STORAGE
~~~~~~~~~~~~~~

**repair-storage** [--ignore-consistency] {*node*} {*storage-type*}
{*volume-name*}

Repairs a storage volume on a node. Only LVM volume groups can be
repaired at this time. They have the storage type "lvm-vg".

On LVM volume groups, **repair-storage** runs "vgreduce
--removemissing".

**Caution:** Running this command can lead to data loss. Use it with
care.

The ``--ignore-consistency`` option will ignore any inconsistent
disks (on the nodes paired with this one). Use of this option is
most likely to lead to data loss.

Example::

    # gnt-node repair-storage node5.example.com lvm-vg xenvg

POWERCYCLE
~~~~~~~~~~

**powercycle** [``--yes``] [``--force``] {*node*}

This command (tries to) forcefully reboot a node. It is a command
that can be used if the node environment is broken, such that the
admin can no longer log in over ssh, but the Ganeti node daemon is
still working.

Note that this command is not guaranteed to work; how effective the
reboot attempt is depends on the hypervisor. For Linux, this command
requires that the kernel option CONFIG\_MAGIC\_SYSRQ is enabled.

The ``--yes`` option can be used to skip confirmation, while the
``--force`` option is needed if the target node is the master
node.

POWER
~~~~~

**power** on|off|cycle|status {*node*}

This command calls out to out-of-band management to change the power
state of the given node. With ``status`` you get the power status as
reported by the out-of-band management script.