gnt-node(8) Ganeti | Version @GANETI_VERSION@
=============================================

Name
----

gnt-node - Node administration

Synopsis
--------

**gnt-node** {command} [arguments...]

DESCRIPTION
-----------

The **gnt-node** command is used for managing the (physical) nodes in
the Ganeti system.

COMMANDS
--------

ADD
~~~

| **add** [--readd] [-s *secondary\_ip*] [-g *nodegroup*]
| [--master-capable=``yes|no``] [--vm-capable=``yes|no``]
| [--node-parameters *ndparams*]
| {*nodename*}

Adds the given node to the cluster.

This command is used to join a new node to the cluster. You will
have to provide the password for root of the node to be able to add
the node to the cluster. The command needs to be run on the Ganeti
master.

Note that the command is potentially destructive, as it will
forcibly join the specified host to the cluster, not paying
attention to its current status (it could already be in a cluster,
etc.).

The ``-s`` option is used in dual-homed clusters and specifies the
new node's IP in the secondary network. See the discussion in
**gnt-cluster**(8) for more information.

In case you're re-adding a node after hardware failure, you can use
the ``--readd`` parameter. In this case, you don't need to pass the
secondary IP again; it will be reused from the cluster. Also, the
drained and offline flags of the node will be cleared before
re-adding it.

The ``-g`` option is used to add the new node to a specific node
group, specified by UUID or name. If only one node group exists you
can skip this option, otherwise it's mandatory.

The ``vm_capable``, ``master_capable`` and ``ndparams`` options are
described in **ganeti**(7), and are used to set the properties of
the new node.

Example::

    # gnt-node add node5.example.com
    # gnt-node add -s 192.0.2.5 node5.example.com
    # gnt-node add -g group2 -s 192.0.2.9 node9.group2.example.com

ADD-TAGS
~~~~~~~~

**add-tags** [--from *file*] {*nodename*} {*tag*...}

Add tags to the given node. If any of the tags contains invalid
characters, the entire operation will abort.

If the ``--from`` option is given, the list of tags will be
extended with the contents of that file (each line becomes a tag).
In this case, there is no need to pass tags on the command line (if
you do, both sources will be used). A file name of - will be
interpreted as stdin.
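
Example (the tag names and file are illustrative; the second
invocation reads tags from stdin via the special file name -)::

    # gnt-node add-tags node5.example.com rack:a4
    # cat mytags.txt | gnt-node add-tags --from - node5.example.com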

EVACUATE
~~~~~~~~

**evacuate** [-f] [--early-release] [--iallocator *NAME* \|
--new-secondary *destination\_node*] {*node*...}

This command will move all secondary instances away from the given
node(s). It works only for instances having a drbd disk template.

The new location for the instances can be specified in two ways:

- as a single node for all instances, via the ``--new-secondary``
  option

- or via the ``--iallocator`` option, giving a script name as
  parameter, so that each instance is in turn placed on the node
  the script considers optimal

The ``--early-release`` option changes the code so that the old
storage on the node being evacuated is removed early (before the
resync is completed) and the internal Ganeti locks are also
released for both the current secondary and the new secondary, thus
allowing more parallelism in the cluster operation. This should be
used only when recovering from a disk failure on the current
secondary (thus the old storage is already broken) or when the
storage on the primary node is known to be fine (thus we won't need
the old storage for potential recovery).

Example::

    # gnt-node evacuate -I dumb node3.example.com


FAILOVER
~~~~~~~~

**failover** [-f] [--ignore-consistency] {*node*}

This command will fail over all instances having the given node as
primary to their secondary nodes. This works only for instances
having a drbd disk template.

Normally the failover will check the consistency of the disks
before failing over the instance. If you are trying to fail over
instances off a dead node, this will fail. Use the
``--ignore-consistency`` option for this purpose.

Example::

    # gnt-node failover node1.example.com


INFO
~~~~

**info** [*node*...]

Show detailed information about the nodes in the cluster. If you
don't give any arguments, all nodes will be shown; otherwise the
output will be restricted to the given names.

LIST
~~~~

| **list** [--sync]
| [--no-headers] [--separator=*SEPARATOR*]
| [--units=*UNITS*] [-o *[+]FIELD,...*]
| [--roman]
| [node...]

Lists the nodes in the cluster.

The ``--no-headers`` option will skip the initial header line. The
``--separator`` option takes an argument which denotes what will be
used between the output fields. Both these options are meant to
help scripting.
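
For instance, a script can filter nodes on a numeric field of the
separator-delimited output. The snippet below is only a sketch: the
sample data simulates plausible ``gnt-node list --no-headers
--separator=: -o name,mtotal,mfree`` output (node names and numbers
are made up), and awk selects on the third field.

```shell
# Sketch: filtering separator-delimited "gnt-node list" output.
# The sample below stands in for real output of
#   gnt-node list --no-headers --separator=: -o name,mtotal,mfree
# (hypothetical nodes and values).
sample='node1.example.com:1024:512
node2.example.com:2048:1024'

# Print the name of every node with more than 600 MiB free memory
# (field 3 is mfree in the assumed field list).
printf '%s\n' "$sample" | awk -F: '$3 > 600 { print $1 }'
```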

The units used to display the numeric values in the output vary,
depending on the options given. By default, the values will be
formatted in the most appropriate unit. If the ``--separator``
option is given, then the values are shown in mebibytes to allow
parsing by scripts. In both cases, the ``--units`` option can be
used to enforce a given output unit.
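
Example (forcing mebibytes so that the values are uniform; ``m`` as
the unit name follows the generic list options described in
**ganeti**(7))::

    # gnt-node list --units=m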

By default, the query of nodes will be done in parallel with any
running jobs. This might give inconsistent results for the free
disk/memory. The ``--sync`` option can be used to grab locks for
all the nodes and ensure a consistent view of the cluster (but this
might stall the query for a long time).

Passing the ``--roman`` option, **gnt-node list** will try to
output some of its fields in a latin-friendly way. For backwards
compatibility this is not the default.

The ``-o`` option takes a comma-separated list of output fields.
The available fields and their meaning are:

name
  the node name

pinst_cnt
  the number of instances having this node as primary

pinst_list
  the list of instances having this node as primary, comma separated

sinst_cnt
  the number of instances having this node as a secondary node

sinst_list
  the list of instances having this node as a secondary node, comma
  separated

pip
  the primary IP of this node (used for cluster communication)

sip
  the secondary IP of this node (used for data replication in
  dual-ip clusters, see **gnt-cluster**(8))

dtotal
  total disk space in the volume group used for instance disk
  allocations

dfree
  available disk space in the volume group

mtotal
  total memory on the physical node

mnode
  the memory used by the node itself

mfree
  memory available for instance allocations

bootid
  the node bootid value; this is a Linux-specific feature that
  assigns a new UUID to the node at each boot and can be used to
  detect node reboots (by tracking changes in this value)

tags
  comma-separated list of the node's tags

serial_no
  the so-called 'serial number' of the node; this is a numeric
  field that is incremented each time the node is modified, and it
  can be used to detect modifications

ctime
  the creation time of the node; note that this field contains
  spaces and as such it's harder to parse

  if this attribute is not present (e.g. when upgrading from older
  versions), then "N/A" will be shown instead

mtime
  the last modification time of the node; note that this field
  contains spaces and as such it's harder to parse

  if this attribute is not present (e.g. when upgrading from older
  versions), then "N/A" will be shown instead

uuid
  the UUID of the node (generated automatically by Ganeti)

ctotal
  the total number of logical processors

cnodes
  the number of NUMA domains on the node, if the hypervisor can
  export this information

csockets
  the number of physical CPU sockets, if the hypervisor can export
  this information

master_candidate
  whether the node is a master candidate or not

drained
  whether the node is drained or not; the cluster still
  communicates with drained nodes but excludes them from allocation
  operations

offline
  whether the node is offline or not; if offline, the cluster does
  not communicate with offline nodes; useful for nodes that are not
  reachable in order to avoid delays

role
  a condensed version of the node flags; this field will output a
  one-character field, with the following possible values:

  - *M* for the master node

  - *C* for a master candidate

  - *R* for a regular node

  - *D* for a drained node

  - *O* for an offline node

master_capable
  whether the node can become a master candidate

vm_capable
  whether the node can host instances

group
  the name of the node's group, if known (the query is done without
  locking, so data consistency is not guaranteed)

group.uuid
  the UUID of the node's group

If the value of the option starts with the character ``+``, the new
fields will be added to the default list. This allows you to
quickly see the default list plus a few other fields, instead of
retyping the entire list of fields.
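
Example (extending rather than replacing the default field list;
the extra fields are taken from the list above)::

    # gnt-node list -o +ctotal,csockets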

Note that some of these fields are known from the configuration of
the cluster (e.g. name, pinst, sinst, pip, sip) and thus the master
does not need to contact the node for this data (making the listing
fast if only fields from this set are selected), whereas the other
fields are "live" fields and require a query to the cluster nodes.

Depending on the virtualization type and implementation details,
the mtotal, mnode and mfree fields may have slightly varying
meanings. For example, some solutions share the node memory with
the pool of memory used for instances (KVM), whereas others have
separate memory for the node and for the instances (Xen).

If no node names are given, then all nodes are queried. Otherwise,
only the given nodes will be listed.

LIST-TAGS
~~~~~~~~~

**list-tags** {*nodename*}

List the tags of the given node.

MIGRATE
~~~~~~~

**migrate** [-f] [--non-live] [--migration-mode=live\|non-live]
{*node*}

This command will migrate all instances having the given node as
primary to their secondary nodes. This works only for instances
having a drbd disk template.

As for the **gnt-instance migrate** command, the options
``--non-live`` and ``--migration-mode`` can be given to influence
the migration type.

Example::

    # gnt-node migrate node1.example.com


MODIFY
~~~~~~

| **modify** [-f] [--submit]
| [--master-candidate=``yes|no``] [--drained=``yes|no``] [--offline=``yes|no``]
| [--master-capable=``yes|no``] [--vm-capable=``yes|no``] [--auto-promote]
| [-s *secondary_ip*]
| [--node-parameters *ndparams*]
| {*node*}

This command changes the role of the node. Each option takes either
a literal yes or no, and only one option should be given as yes.
The meaning of the roles and flags is described in the manpage
**ganeti**(7).

In case a node is demoted from the master candidate role, the
operation will be refused unless you pass the ``--auto-promote``
option. This option will cause the operation to lock all cluster
nodes (thus it will not be able to run in parallel with most other
jobs), but it allows automated maintenance of the cluster candidate
pool. If locking all cluster nodes is too expensive, another option
is to manually promote another node to master candidate before
demoting the current one.

Example (setting a node offline, which will demote it from master
candidate role if it is in that role)::

    # gnt-node modify --offline=yes node1.example.com

The ``-s`` option can be used to change the node's secondary IP. No
drbd instances can be running on the node while this operation is
taking place.
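
Example (changing the secondary IP; the address is illustrative)::

    # gnt-node modify -s 192.0.2.42 node1.example.com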

Example (setting the node back to online and master candidate)::

    # gnt-node modify --offline=no --master-candidate=yes node1.example.com


REMOVE
~~~~~~

**remove** {*nodename*}

Removes a node from the cluster. Instances must be removed from the
node or migrated away beforehand.

Example::

    # gnt-node remove node5.example.com


REMOVE-TAGS
~~~~~~~~~~~

**remove-tags** [--from *file*] {*nodename*} {*tag*...}

Remove tags from the given node. If any of the tags does not exist
on the node, the entire operation will abort.

If the ``--from`` option is given, the list of tags to be removed
will be extended with the contents of that file (each line becomes
a tag). In this case, there is no need to pass tags on the command
line (if you do, tags from both sources will be removed). A file
name of - will be interpreted as stdin.

VOLUMES
~~~~~~~

| **volumes** [--no-headers] [--human-readable]
| [--separator=*SEPARATOR*] [--output=*FIELDS*]
| [*node*...]

Lists all logical volumes and their physical disks from the node(s)
provided.

The ``--no-headers`` option will skip the initial header line. The
``--separator`` option takes an argument which denotes what will be
used between the output fields. Both these options are meant to
help scripting.

The units used to display the numeric values in the output vary,
depending on the options given. By default, the values will be
formatted in the most appropriate unit. If the ``--separator``
option is given, then the values are shown in mebibytes to allow
parsing by scripts. In both cases, the ``--units`` option can be
used to enforce a given output unit.

The ``-o`` option takes a comma-separated list of output fields.
The available fields and their meaning are:

node
  the node name on which the volume exists

phys
  the physical drive (on which the LVM physical volume lives)

vg
  the volume group name

name
  the logical volume name

size
  the logical volume size

instance
  the name of the instance to which this volume belongs, or (in
  case it's an orphan volume) the character "-"

Example::

    # gnt-node volumes node5.example.com
    Node PhysDev VG Name Size Instance
    node1.example.com /dev/hdc1 xenvg instance1.example.com-sda_11000.meta 128 instance1.example.com
    node1.example.com /dev/hdc1 xenvg instance1.example.com-sda_11001.data 256 instance1.example.com


LIST-STORAGE
~~~~~~~~~~~~

| **list-storage** [--no-headers] [--human-readable]
| [--separator=*SEPARATOR*] [--storage-type=*STORAGE\_TYPE*]
| [--output=*FIELDS*]
| [*node*...]

Lists the available storage units and their details for the given
node(s).

The ``--no-headers`` option will skip the initial header line. The
``--separator`` option takes an argument which denotes what will be
used between the output fields. Both these options are meant to
help scripting.

The units used to display the numeric values in the output vary,
depending on the options given. By default, the values will be
formatted in the most appropriate unit. If the ``--separator``
option is given, then the values are shown in mebibytes to allow
parsing by scripts. In both cases, the ``--units`` option can be
used to enforce a given output unit.

The ``--storage-type`` option can be used to choose a storage unit
type. Possible choices are lvm-pv, lvm-vg or file.
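
Example (restricting the listing to LVM volume groups; the node
name is illustrative)::

    # gnt-node list-storage --storage-type lvm-vg node2.example.com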

The ``-o`` option takes a comma-separated list of output fields.
The available fields and their meaning are:

node
  the node name on which the volume exists

type
  the type of the storage unit (currently just what is passed in
  via ``--storage-type``)

name
  the path/identifier of the storage unit

size
  total size of the unit; for the file type see a note below

used
  used space in the unit; for the file type see a note below

free
  available disk space

allocatable
  whether the unit is available for allocation (only lvm-pv can
  change this setting, the other types always report true)

Note that for the "file" type, the total disk space might not equal
the sum of used and free, due to the method Ganeti uses to compute
each of them. The total and free values are computed as the total
and free space values for the filesystem to which the directory
belongs, but the used space is computed from the used space under
that directory *only*, which might not necessarily be the root of
the filesystem, and as such there could be files outside the file
storage directory using disk space and causing a mismatch in the
values.

Example::

    node1# gnt-node list-storage node2
    Node Type Name Size Used Free Allocatable
    node2 lvm-pv /dev/sda7 673.8G 1.5G 672.3G Y
    node2 lvm-pv /dev/sdb1 698.6G 0M 698.6G Y


MODIFY-STORAGE
~~~~~~~~~~~~~~

**modify-storage** [``--allocatable=yes|no``]
{*node*} {*storage-type*} {*volume-name*}

Modifies storage volumes on a node. Only LVM physical volumes can
be modified at the moment. They have a storage type of "lvm-pv".

Example::

    # gnt-node modify-storage --allocatable no node5.example.com lvm-pv /dev/sdb1


REPAIR-STORAGE
~~~~~~~~~~~~~~

**repair-storage** [--ignore-consistency] {*node*} {*storage-type*}
{*volume-name*}

Repairs a storage volume on a node. Only LVM volume groups can be
repaired at this time. They have the storage type "lvm-vg".

On LVM volume groups, **repair-storage** runs ``vgreduce
--removemissing``.

**Caution:** Running this command can lead to data loss. Use it
with care.

The ``--ignore-consistency`` option will ignore any inconsistent
disks (on the nodes paired with this one). Use of this option is
most likely to lead to data loss.

Example::

    # gnt-node repair-storage node5.example.com lvm-vg xenvg


POWERCYCLE
~~~~~~~~~~

**powercycle** [``--yes``] [``--force``] {*node*}

This command (tries to) forcefully reboot a node. It is a command
that can be used if the node environment is broken, such that the
admin can no longer log in over SSH, but the Ganeti node daemon is
still working.

Note that this command is not guaranteed to work; it depends on the
hypervisor how effective the reboot attempt is. For Linux, this
command requires that the kernel option CONFIG\_MAGIC\_SYSRQ is
enabled.

The ``--yes`` option can be used to skip confirmation, while the
``--force`` option is needed if the target node is the master
node.
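
Example (skipping the confirmation prompt; the node name is
illustrative)::

    # gnt-node powercycle --yes node4.example.com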