Ganeti customisation using hooks
================================

Documents Ganeti version 2.8

.. contents::

Introduction
------------

In order to allow customisation of operations, Ganeti runs scripts in
sub-directories of ``@SYSCONFDIR@/ganeti/hooks``. These sub-directories
are named ``$hook-$phase.d``, where ``$phase`` is either ``pre`` or
``post`` and ``$hook`` matches the directory name given for a hook (e.g.
``cluster-verify-post.d`` or ``node-add-pre.d``).

This is similar to the ``/etc/network/`` structure present in Debian
for network interface handling.

Organisation
------------

For every operation, two sets of scripts are run:

- pre phase (for authorization/checking)
- post phase (for logging)

Also, for each operation, the scripts are run on one or more nodes,
depending on the operation type.

Note that, even though we call them scripts, we are actually talking
about any executable.

*pre* scripts
~~~~~~~~~~~~~

The *pre* scripts have a definite target: to check that the operation
is allowed given the site-specific constraints. You could have, for
example, a rule that says every new instance is required to exist in
a database; to implement this, you could write a script that checks
the new instance's parameters against your database.

What matters for these scripts is their return code (zero for
success, non-zero for failure). However, if they modify the
environment in any way, they should be idempotent, as failed
executions could be restarted and thus the script(s) run again with
exactly the same parameters.

Note that if a node is unreachable at the time a hook is run, this
will not be interpreted as a deny for the execution. In other words,
only an actual error returned from a script will cause an abort, not
an unreachable node.

Therefore, if you want to guarantee that a hook script is run and
denies an action, it's best to put it on the master node.

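As an illustration of the database-check idea above, a *pre* hook could
verify the target instance against a site-local allowlist. The allowlist
file and its location are hypothetical; the check is wrapped in a
function here so it can be demonstrated inline, whereas a real hook
would simply ``exit`` with the result of the check:

```shell
#!/bin/sh
# Sketch of a *pre* hook check: allow an operation only when the
# target instance is registered in a site-local allowlist file.
# Only the exit code matters to Ganeti: 0 allows, non-zero denies.

# Return 0 if the name appears as an exact line in the allowlist.
instance_is_registered() {
    grep -qxF -- "$1" "$2"
}

# Demonstration with a temporary allowlist standing in for the
# hypothetical /etc/ganeti/instance-allowlist:
allowlist=$(mktemp)
echo "instance2.example.com" >"$allowlist"

for name in instance2.example.com rogue.example.com; do
    if instance_is_registered "$name" "$allowlist"; then
        echo "$name: allowed"
    else
        echo "$name: denied"
    fi
done
rm -f "$allowlist"
```

In a real hook the instance name would come from the
``GANETI_INSTANCE_NAME`` environment variable; remember that pre hooks
must also be idempotent, since restarted jobs re-run them with the
same parameters.
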
*post* scripts
~~~~~~~~~~~~~~

These scripts should do whatever you need as a reaction to the
completion of an operation. Their return code is not checked (but
logged), and they should not depend on the fact that the *pre* scripts
have been run.

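A minimal *post* hook might, for example, emit one audit line per
completed operation. The default values below are placeholders for
what Ganeti exports; stdout is captured to the hook log files:

```shell
#!/bin/sh
# Sketch of a *post* hook: print one audit line for the completed
# operation. The exit code of a post hook is logged but ignored.

audit_line() {
    # $1 = opcode, $2 = object type, $3 = target name
    printf '%s %s %s %s\n' \
        "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2" "$3"
}

audit_line "${GANETI_OP_CODE:-OP_INSTANCE_STARTUP}" \
           "${GANETI_OBJECT_TYPE:-INSTANCE}" \
           "${GANETI_INSTANCE_NAME:-instance2.example.com}"
```
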
Naming
~~~~~~

The allowed names for the scripts consist (similarly to *run-parts*)
of upper- and lower-case letters, digits, underscores and hyphens; in
other words, they must match the regexp ``^[a-zA-Z0-9_-]+$``. Also,
non-executable scripts will be ignored.

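Whether a candidate file name satisfies this rule can be checked with
the same regexp, for example:

```shell
#!/bin/sh
# Check a candidate hook name against the allowed character class
# (letters, digits, underscores and hyphens, as with run-parts).

is_valid_hook_name() {
    printf '%s' "$1" | grep -qE '^[a-zA-Z0-9_-]+$'
}

is_valid_hook_name "10-check_db" && echo "10-check_db: valid"
is_valid_hook_name "10 check.sh" || echo "10 check.sh: invalid"
```
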
Order of execution
~~~~~~~~~~~~~~~~~~

On a single node, the scripts in a directory are run in lexicographic
order (more exactly, the Python string comparison order). It is
advisable to implement the usual *NN-name* convention, where *NN* is a
two-digit number.

For an operation whose hooks are run on multiple nodes, there is no
specific ordering of nodes with regard to hooks execution; you should
assume that the scripts are run in parallel on the target nodes
(keeping on each node the above specified ordering). If you need any
kind of inter-node synchronisation, you have to implement it yourself
in the scripts.

Execution environment
~~~~~~~~~~~~~~~~~~~~~

The scripts will be run as follows:

- no command line arguments

- no controlling *tty*

- stdin is actually */dev/null*

- stdout and stderr are directed to files

- PATH is reset to :pyeval:`constants.HOOKS_PATH`

- the environment is cleared, and only Ganeti-specific variables will
  be left

All information about the cluster is passed using environment
variables. Different operations will have slightly different
environments, but most of the variables are common.

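A convenient way to explore what a particular operation passes in is a
throwaway hook that simply dumps its Ganeti-provided environment
(stdout ends up in the hook log files):

```shell
#!/bin/sh
# List every variable Ganeti passed to the hook, sorted by name.
dump_ganeti_env() {
    env | grep '^GANETI_' | sort
}

# Outside a real hook run there may be no GANETI_ variables at all.
dump_ganeti_env || true
```
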
Operation list
--------------

Node operations
~~~~~~~~~~~~~~~

OP_NODE_ADD
+++++++++++

Adds a node to the cluster.

:directory: node-add
:env. vars: NODE_NAME, NODE_PIP, NODE_SIP, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: all existing nodes
:post-execution: all nodes plus the new node


OP_NODE_REMOVE
++++++++++++++

Removes a node from the cluster. On the removed node, the hooks are
called during the execution of the operation and not after its
completion.

:directory: node-remove
:env. vars: NODE_NAME
:pre-execution: all existing nodes except the removed node
:post-execution: all existing nodes

OP_NODE_SET_PARAMS
++++++++++++++++++

Changes a node's parameters.

:directory: node-modify
:env. vars: MASTER_CANDIDATE, OFFLINE, DRAINED, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: master node, the target node
:post-execution: master node, the target node

OP_NODE_MIGRATE
+++++++++++++++

Relocates secondary instances from a node.

:directory: node-migrate
:env. vars: NODE_NAME
:pre-execution: master node
:post-execution: master node


Node group operations
~~~~~~~~~~~~~~~~~~~~~

OP_GROUP_ADD
++++++++++++

Adds a node group to the cluster.

:directory: group-add
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_SET_PARAMS
+++++++++++++++++++

Changes a node group's parameters.

:directory: group-modify
:env. vars: GROUP_NAME, NEW_ALLOC_POLICY
:pre-execution: master node
:post-execution: master node

OP_GROUP_REMOVE
+++++++++++++++

Removes a node group from the cluster. Since the node group must be
empty for removal to succeed, the concept of "nodes in the group" does
not exist, and the hook is only executed on the master node.

:directory: group-remove
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_RENAME
+++++++++++++++

Renames a node group.

:directory: group-rename
:env. vars: OLD_NAME, NEW_NAME
:pre-execution: master node and all nodes in the group
:post-execution: master node and all nodes in the group

OP_GROUP_EVACUATE
+++++++++++++++++

Evacuates a node group.

:directory: group-evacuate
:env. vars: GROUP_NAME, TARGET_GROUPS
:pre-execution: master node and all nodes in the group
:post-execution: master node and all nodes in the group

Network operations
~~~~~~~~~~~~~~~~~~

OP_NETWORK_ADD
++++++++++++++

Adds a network to the cluster.

:directory: network-add
:env. vars: NETWORK_NAME, NETWORK_SUBNET, NETWORK_GATEWAY, NETWORK_SUBNET6,
            NETWORK_GATEWAY6, NETWORK_MAC_PREFIX, NETWORK_TAGS
:pre-execution: master node
:post-execution: master node

OP_NETWORK_REMOVE
+++++++++++++++++

Removes a network from the cluster.

:directory: network-remove
:env. vars: NETWORK_NAME
:pre-execution: master node
:post-execution: master node

OP_NETWORK_CONNECT
++++++++++++++++++

Connects a network to a nodegroup.

:directory: network-connect
:env. vars: GROUP_NAME, NETWORK_NAME,
            GROUP_NETWORK_MODE, GROUP_NETWORK_LINK,
            NETWORK_SUBNET, NETWORK_GATEWAY, NETWORK_SUBNET6,
            NETWORK_GATEWAY6, NETWORK_MAC_PREFIX, NETWORK_TAGS
:pre-execution: nodegroup nodes
:post-execution: nodegroup nodes


OP_NETWORK_DISCONNECT
+++++++++++++++++++++

Disconnects a network from a nodegroup.

:directory: network-disconnect
:env. vars: GROUP_NAME, NETWORK_NAME,
            GROUP_NETWORK_MODE, GROUP_NETWORK_LINK,
            NETWORK_SUBNET, NETWORK_GATEWAY, NETWORK_SUBNET6,
            NETWORK_GATEWAY6, NETWORK_MAC_PREFIX, NETWORK_TAGS
:pre-execution: nodegroup nodes
:post-execution: nodegroup nodes


OP_NETWORK_SET_PARAMS
+++++++++++++++++++++

Modifies a network.

:directory: network-modify
:env. vars: NETWORK_NAME, NETWORK_SUBNET, NETWORK_GATEWAY, NETWORK_SUBNET6,
            NETWORK_GATEWAY6, NETWORK_MAC_PREFIX, NETWORK_TAGS
:pre-execution: master node
:post-execution: master node


Instance operations
~~~~~~~~~~~~~~~~~~~

All instance operations take at least the following variables:
INSTANCE_NAME, INSTANCE_PRIMARY, INSTANCE_SECONDARY,
INSTANCE_OS_TYPE, INSTANCE_DISK_TEMPLATE, INSTANCE_MEMORY,
INSTANCE_DISK_SIZES, INSTANCE_VCPUS, INSTANCE_NIC_COUNT,
INSTANCE_NICn_IP, INSTANCE_NICn_BRIDGE, INSTANCE_NICn_MAC,
INSTANCE_NICn_NETWORK,
INSTANCE_NICn_NETWORK_UUID, INSTANCE_NICn_NETWORK_SUBNET,
INSTANCE_NICn_NETWORK_GATEWAY, INSTANCE_NICn_NETWORK_SUBNET6,
INSTANCE_NICn_NETWORK_GATEWAY6, INSTANCE_NICn_NETWORK_MAC_PREFIX,
INSTANCE_DISK_COUNT, INSTANCE_DISKn_SIZE, INSTANCE_DISKn_MODE.

The INSTANCE_NICn_* and INSTANCE_DISKn_* variables represent the
properties of the *n*-th NIC and disk, and are zero-indexed.

The INSTANCE_NICn_NETWORK_* variables are only passed if a NIC's
network parameter is set (that is, if the NIC is associated with a
network defined via ``gnt-network``).

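Because the per-disk and per-NIC variables are zero-indexed and
accompanied by a count, a hook can loop over them. A sketch with
stand-in values (a real hook would rely on the environment Ganeti
exports instead of setting these itself):

```shell
#!/bin/sh
# Iterate over the per-disk variables using GANETI_INSTANCE_DISK_COUNT.
# The stand-in values below mimic what Ganeti would export.
GANETI_INSTANCE_DISK_COUNT=2
GANETI_INSTANCE_DISK0_SIZE=128 GANETI_INSTANCE_DISK0_MODE=rw
GANETI_INSTANCE_DISK1_SIZE=512 GANETI_INSTANCE_DISK1_MODE=ro

total=0
n=0
while [ "$n" -lt "$GANETI_INSTANCE_DISK_COUNT" ]; do
    # Indirect expansion: build the variable name, then read it.
    eval size="\$GANETI_INSTANCE_DISK${n}_SIZE"
    eval mode="\$GANETI_INSTANCE_DISK${n}_MODE"
    echo "disk $n: ${size} MiB (${mode})"
    total=$((total + size))
    n=$((n + 1))
done
echo "total disk size: ${total} MiB"
```
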
OP_INSTANCE_CREATE
++++++++++++++++++

Creates a new instance.

:directory: instance-add
:env. vars: ADD_MODE, SRC_NODE, SRC_PATH, SRC_IMAGES
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REINSTALL
+++++++++++++++++++++

Reinstalls an instance.

:directory: instance-reinstall
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_BACKUP_EXPORT
++++++++++++++++

Exports the instance.

:directory: instance-export
:env. vars: EXPORT_MODE, EXPORT_NODE, EXPORT_DO_SHUTDOWN, REMOVE_INSTANCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_STARTUP
+++++++++++++++++++

Starts an instance.

:directory: instance-start
:env. vars: FORCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SHUTDOWN
++++++++++++++++++++

Stops an instance.

:directory: instance-stop
:env. vars: TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REBOOT
++++++++++++++++++

Reboots an instance.

:directory: instance-reboot
:env. vars: IGNORE_SECONDARIES, REBOOT_TYPE, SHUTDOWN_TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SET_PARAMS
++++++++++++++++++++++

Modifies the instance parameters.

:directory: instance-modify
:env. vars: NEW_DISK_TEMPLATE, RUNTIME_MEMORY
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SNAPSHOT
++++++++++++++++++++

Takes a snapshot of an instance's disk (the instance must use the
ext disk template).

:directory: instance-snapshot
:env. vars:
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_FAILOVER
++++++++++++++++++++

Fails over an instance. In the post phase, INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before the failover.

:directory: instance-failover
:env. vars: IGNORE_CONSISTENCY, SHUTDOWN_TIMEOUT, OLD_PRIMARY,
            OLD_SECONDARY, NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, secondary node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MIGRATE
+++++++++++++++++++

Migrates an instance. In the post phase, INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before the migration.

:directory: instance-migrate
:env. vars: MIGRATE_LIVE, MIGRATE_CLEANUP, OLD_PRIMARY, OLD_SECONDARY,
            NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes


408 |
OP_INSTANCE_REMOVE |
409 |
++++++++++++++++++ |
410 |
|
411 |
Remove an instance. |
412 |
|
413 |
:directory: instance-remove |
414 |
:env. vars: SHUTDOWN_TIMEOUT |
415 |
:pre-execution: master node |
416 |
:post-execution: master node, primary and secondary nodes |
417 |
|
418 |
OP_INSTANCE_GROW_DISK |
419 |
+++++++++++++++++++++ |
420 |
|
421 |
Grows the disk of an instance. |
422 |
|
423 |
:directory: disk-grow |
424 |
:env. vars: DISK, AMOUNT |
425 |
:pre-execution: master node, primary and secondary nodes |
426 |
:post-execution: master node, primary and secondary nodes |
427 |
|
428 |
OP_INSTANCE_RENAME |
429 |
++++++++++++++++++ |
430 |
|
431 |
Renames an instance. |
432 |
|
433 |
:directory: instance-rename |
434 |
:env. vars: INSTANCE_NEW_NAME |
435 |
:pre-execution: master node, primary and secondary nodes |
436 |
:post-execution: master node, primary and secondary nodes |
437 |
|
438 |
OP_INSTANCE_MOVE |
439 |
++++++++++++++++ |
440 |
|
441 |
Move an instance by data-copying. |
442 |
|
443 |
:directory: instance-move |
444 |
:env. vars: TARGET_NODE, SHUTDOWN_TIMEOUT |
445 |
:pre-execution: master node, primary and target nodes |
446 |
:post-execution: master node, primary and target nodes |
447 |
|
448 |
OP_INSTANCE_RECREATE_DISKS |
449 |
++++++++++++++++++++++++++ |
450 |
|
451 |
Recreate an instance's missing disks. |
452 |
|
453 |
:directory: instance-recreate-disks |
454 |
:env. vars: only the standard instance vars |
455 |
:pre-execution: master node, primary and secondary nodes |
456 |
:post-execution: master node, primary and secondary nodes |
457 |
|
458 |
OP_INSTANCE_REPLACE_DISKS |
459 |
+++++++++++++++++++++++++ |
460 |
|
461 |
Replace the disks of an instance. |
462 |
|
463 |
:directory: mirrors-replace |
464 |
:env. vars: MODE, NEW_SECONDARY, OLD_SECONDARY |
465 |
:pre-execution: master node, primary and new secondary nodes |
466 |
:post-execution: master node, primary and new secondary nodes |
467 |
|
468 |
OP_INSTANCE_CHANGE_GROUP |
469 |
++++++++++++++++++++++++ |
470 |
|
471 |
Moves an instance to another group. |
472 |
|
473 |
:directory: instance-change-group |
474 |
:env. vars: TARGET_GROUPS |
475 |
:pre-execution: master node |
476 |
:post-execution: master node |
477 |
|
478 |
|
Cluster operations
~~~~~~~~~~~~~~~~~~

OP_CLUSTER_POST_INIT
++++++++++++++++++++

This hook is called via a special "empty" LU right after cluster
initialization.

:directory: cluster-init
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_DESTROY
++++++++++++++++++

The post phase of this hook is called during the execution of the
destroy operation and not after its completion.

:directory: cluster-destroy
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_VERIFY_GROUP
+++++++++++++++++++++++

Verifies all nodes in a group. This is a special LU with regard to
hooks, as the result of the opcode will be combined with the result of
the post-execution hooks, in order to allow administrators to enhance
the cluster verification procedure.

:directory: cluster-verify
:env. vars: CLUSTER, MASTER, CLUSTER_TAGS, NODE_TAGS_<name>
:pre-execution: none
:post-execution: all nodes in a group

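As an example of enhancing verification in this way (the threshold and
the check itself are arbitrary illustrations), a cluster-verify *post*
hook could report a nearly-full configuration filesystem; anything it
prints is merged into the ``gnt-cluster verify`` report:

```shell
#!/bin/sh
# Sketch of a cluster-verify *post* hook: warn when the filesystem
# holding the Ganeti configuration directory is nearly full.
datadir="${GANETI_DATA_DIR:-/var/lib/ganeti}"
threshold=90   # percent; an arbitrary example value

# Parse the "Use%" column of POSIX df output for the directory.
usage=$(df -P "$datadir" 2>/dev/null \
        | awk 'NR == 2 { sub(/%/, "", $5); print $5 }')

if [ -n "$usage" ] && [ "$usage" -ge "$threshold" ]; then
    echo "WARNING: $datadir filesystem is ${usage}% full"
fi
```
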
OP_CLUSTER_RENAME
+++++++++++++++++

Renames the cluster.

:directory: cluster-rename
:env. vars: NEW_NAME
:pre-execution: master node
:post-execution: master node

OP_CLUSTER_SET_PARAMS
+++++++++++++++++++++

Modifies the cluster parameters.

:directory: cluster-modify
:env. vars: NEW_VG_NAME
:pre-execution: master node
:post-execution: master node

Virtual operation :pyeval:`constants.FAKE_OP_MASTER_TURNUP`
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This doesn't correspond to an actual opcode, but it is called when the
master IP is activated.

:directory: master-ip-turnup
:env. vars: MASTER_NETDEV, MASTER_IP, MASTER_NETMASK, CLUSTER_IP_VERSION
:pre-execution: master node
:post-execution: master node

Virtual operation :pyeval:`constants.FAKE_OP_MASTER_TURNDOWN`
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This doesn't correspond to an actual opcode, but it is called when the
master IP is deactivated.

:directory: master-ip-turndown
:env. vars: MASTER_NETDEV, MASTER_IP, MASTER_NETMASK, CLUSTER_IP_VERSION
:pre-execution: master node
:post-execution: master node


Obsolete operations
~~~~~~~~~~~~~~~~~~~

The following operations are no longer present, or no longer execute
hooks, in Ganeti 2.0:

- OP_INIT_CLUSTER
- OP_MASTER_FAILOVER
- OP_INSTANCE_ADD_MDDRBD
- OP_INSTANCE_REMOVE_MDDRBD


Environment variables
---------------------

Note that all variables listed here are actually prefixed with *GANETI_*
in order to provide a clear namespace. In addition, post-execution
scripts receive another set of variables, prefixed with *GANETI_POST_*,
representing the status after the opcode has executed.

Common variables
~~~~~~~~~~~~~~~~

This is the list of environment variables supported by all operations:

HOOKS_VERSION
  Documents the hooks interface version. In case this doesn't match
  what the script expects, it should not run. This document describes
  version 2 of the interface.

HOOKS_PHASE
  One of *PRE* or *POST*, denoting which phase we are in.

CLUSTER
  The cluster name.

MASTER
  The master node.

OP_CODE
  One of the *OP_* values from the list of operations.

OBJECT_TYPE
  One of ``INSTANCE``, ``NODE``, ``CLUSTER``.

DATA_DIR
  The path to the Ganeti configuration directory (to read, for
  example, the *ssconf* files).

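A defensive script can check these variables before doing any work. A
sketch (the lowercase phase values match the example environment shown
at the end of this document):

```shell
#!/bin/sh
# Guard sketch: refuse to act unless the hooks interface version and
# phase look like what this script was written against.
check_hook_env() {
    [ "${GANETI_HOOKS_VERSION:-}" = "2" ] || return 1
    case "${GANETI_HOOKS_PHASE:-}" in
        pre|post) return 0 ;;
        *) return 1 ;;
    esac
}

# Demonstration values; a real hook receives these from Ganeti.
GANETI_HOOKS_VERSION=2
GANETI_HOOKS_PHASE=post

check_hook_env && echo "hook environment ok"
```
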
Specialised variables
~~~~~~~~~~~~~~~~~~~~~

This is the list of variables which are specific to one or more
operations.

CLUSTER_IP_VERSION
  IP version of the master IP (4 or 6).

INSTANCE_NAME
  The name of the instance which is the target of the operation.

INSTANCE_BE_x,y,z,...
  Instance BE params. There is one variable per BE param. For example,
  GANETI_INSTANCE_BE_auto_balance.

INSTANCE_DISK_TEMPLATE
  The disk type for the instance.

NEW_DISK_TEMPLATE
  The new disk type for the instance.

INSTANCE_DISK_COUNT
  The number of disks for the instance.

INSTANCE_DISKn_SIZE
  The size of disk *n* for the instance.

INSTANCE_DISKn_MODE
  Either *rw* for a read-write disk or *ro* for a read-only one.

INSTANCE_HV_x,y,z,...
  Instance hypervisor options. There is one variable per option. For
  example, GANETI_INSTANCE_HV_use_bootloader.

INSTANCE_HYPERVISOR
  The instance hypervisor.

INSTANCE_NIC_COUNT
  The number of NICs for the instance.

INSTANCE_NICn_BRIDGE
  The bridge to which the *n*-th NIC of the instance is attached.

INSTANCE_NICn_IP
  The IP (if any) of the *n*-th NIC of the instance.

INSTANCE_NICn_MAC
  The MAC address of the *n*-th NIC of the instance.

INSTANCE_NICn_MODE
  The mode of the *n*-th NIC of the instance.

INSTANCE_OS_TYPE
  The name of the instance OS.

INSTANCE_PRIMARY
  The name of the node which is the primary for the instance. Note
  that for migrations/failovers you shouldn't rely on this variable,
  since the nodes change during the execution; use the
  OLD_PRIMARY/NEW_PRIMARY values instead.

INSTANCE_SECONDARY
  Space-separated list of secondary nodes for the instance. Note that
  for migrations/failovers you shouldn't rely on this variable, since
  the nodes change during the execution; use the
  OLD_SECONDARY/NEW_SECONDARY values instead.

INSTANCE_MEMORY
  The memory size (in MiB) of the instance.

INSTANCE_VCPUS
  The number of virtual CPUs for the instance.

INSTANCE_STATUS
  The run status of the instance.

MASTER_CAPABLE
  Whether a node is capable of being promoted to master.

VM_CAPABLE
  Whether the node can host instances.

MASTER_NETDEV
  Network device of the master IP.

MASTER_IP
  The master IP.

MASTER_NETMASK
  Netmask of the master IP.

INSTANCE_TAGS
  A space-delimited list of the instance's tags.

NODE_NAME
  The target node of this operation (not the node on which the hook
  runs).

NODE_PIP
  The primary IP of the target node (the one over which inter-node
  communication is done).

NODE_SIP
  The secondary IP of the target node (the one over which drbd
  replication is done). This can be equal to the primary IP, in case
  the cluster is not dual-homed.

FORCE
  This is provided by some operations when the user gave this flag.

IGNORE_CONSISTENCY
  The user has specified this flag. It is used when failing over
  instances in case the primary node is down.

ADD_MODE
  The mode of the instance creation: either *create* for creating
  from scratch or *import* for restoring from an exported image.

SRC_NODE, SRC_PATH, SRC_IMAGE
  In case the instance has been added by import, these variables are
  defined and point to the source node, the source path (the
  directory containing the image and the config file) and the source
  disk image file.

NEW_SECONDARY
  The name of the node on which the new mirror component is being
  added (for replace disk). This can be the name of the current
  secondary, if the new mirror is on the same secondary. For
  migrations/failovers, this is the old primary node.

OLD_SECONDARY
  The name of the old secondary in the replace-disks command. Note
  that this can be equal to the new secondary if the secondary node
  hasn't actually changed. For migrations/failovers, this is the new
  primary node.

OLD_PRIMARY, NEW_PRIMARY
  For migrations/failovers, the old and respectively new primary
  nodes. These two mirror the NEW_SECONDARY/OLD_SECONDARY variables.

EXPORT_MODE
  The instance export mode: either "remote" or "local".

EXPORT_NODE
  The node on which the exported image of the instance was created.

EXPORT_DO_SHUTDOWN
  This variable tells whether the instance was shut down while doing
  the export. In the "was shut down" case, it's likely that the
  filesystem is consistent, whereas in the "was not shut down" case,
  the filesystem would need a check (journal replay or full fsck) in
  order to guarantee consistency.

REMOVE_INSTANCE
  Whether the instance was removed from the node.

SHUTDOWN_TIMEOUT
  Amount of time to wait for the instance to shut down.

TIMEOUT
  Amount of time to wait before aborting the op.

OLD_NAME, NEW_NAME
  Old/new name of the node group.

GROUP_NAME
  The name of the node group.

NEW_ALLOC_POLICY
  The new allocation policy for the node group.

CLUSTER_TAGS
  The list of cluster tags, space separated.

NODE_TAGS_<name>
  The list of tags for node *<name>*, space separated.

Examples
--------

The startup of an instance will pass this environment to the hook
script::

  GANETI_CLUSTER=cluster1.example.com
  GANETI_DATA_DIR=/var/lib/ganeti
  GANETI_FORCE=False
  GANETI_HOOKS_PATH=instance-start
  GANETI_HOOKS_PHASE=post
  GANETI_HOOKS_VERSION=2
  GANETI_INSTANCE_DISK0_MODE=rw
  GANETI_INSTANCE_DISK0_SIZE=128
  GANETI_INSTANCE_DISK_COUNT=1
  GANETI_INSTANCE_DISK_TEMPLATE=drbd
  GANETI_INSTANCE_MEMORY=128
  GANETI_INSTANCE_NAME=instance2.example.com
  GANETI_INSTANCE_NIC0_BRIDGE=xen-br0
  GANETI_INSTANCE_NIC0_IP=
  GANETI_INSTANCE_NIC0_MAC=aa:00:00:a5:91:58
  GANETI_INSTANCE_NIC_COUNT=1
  GANETI_INSTANCE_OS_TYPE=debootstrap
  GANETI_INSTANCE_PRIMARY=node3.example.com
  GANETI_INSTANCE_SECONDARY=node5.example.com
  GANETI_INSTANCE_STATUS=down
  GANETI_INSTANCE_VCPUS=1
  GANETI_MASTER=node1.example.com
  GANETI_OBJECT_TYPE=INSTANCE
  GANETI_OP_CODE=OP_INSTANCE_STARTUP
  GANETI_OP_TARGET=instance2.example.com

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: