Ganeti customisation using hooks
================================

Documents Ganeti version 2.0

.. contents::

Introduction
------------

In order to allow customisation of operations, Ganeti runs scripts
under ``/etc/ganeti/hooks`` based on certain rules.

This is similar to the ``/etc/network/`` structure present in Debian
for network interface handling.

Organisation
------------

For every operation, two sets of scripts are run:

- pre phase (for authorization/checking)
- post phase (for logging)

Also, for each operation, the scripts are run on one or more nodes,
depending on the operation type.

Note that, even though we call them scripts, we are actually talking
about any executable.

*pre* scripts
~~~~~~~~~~~~~

The *pre* scripts have a definite target: to check that the operation
is allowed given the site-specific constraints. You could have, for
example, a rule that says every new instance is required to exist in
a database; to implement this, you could write a script that checks
the new instance parameters against your database.
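
As a sketch of this idea (the inventory file path and format are
invented for the example, not part of Ganeti), such a check could
look like::

  #!/usr/bin/env python
  # Illustrative pre-hook: deny creation of instances that are not
  # listed in a site-local inventory file. Being read-only, the
  # check is naturally idempotent.

  import os
  import sys

  INVENTORY = "/etc/ganeti/instances.allowed"  # example path

  def main():
      name = os.environ.get("GANETI_INSTANCE_NAME")
      if not name:
          return 0  # not an instance operation; nothing to check
      try:
          allowed = set(line.strip() for line in open(INVENTORY))
      except IOError:
          sys.stderr.write("cannot read %s\n" % INVENTORY)
          return 1  # cannot validate, so deny
      if name not in allowed:
          sys.stderr.write("instance %s not in inventory\n" % name)
          return 1  # non-zero exit in the pre phase denies the op
      return 0

  if __name__ == "__main__":
      sys.exit(main())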

The objective of these scripts should be their return code (zero or
non-zero for success and failure). However, if they modify the
environment in any way, they should be idempotent, as failed
executions could be restarted and thus the script(s) run again with
exactly the same parameters.

Note that if a node is unreachable at the time a hook is run, this
will not be interpreted as a deny for the execution. In other words,
only an actual error returned from a script will cause an abort, not
an unreachable node.

Therefore, if you want to guarantee that a hook script is run and
denies an action, it's best to put it on the master node.

*post* scripts
~~~~~~~~~~~~~~

These scripts should do whatever you need as a reaction to the
completion of an operation. Their return code is not checked (but
logged), and they should not depend on the fact that the *pre* scripts
have been run.

Naming
~~~~~~

The allowed names for the scripts consist of (similar to *run-parts*)
upper and lower case letters, digits, underscores and hyphens; in
other words, the regexp ``^[a-zA-Z0-9_-]+$``. Also, non-executable
scripts will be ignored.
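
For reference, the rule can be checked programmatically; a minimal
sketch (the helper name is illustrative)::

  import re

  # Hook names may only contain letters, digits, underscores and
  # hyphens; anything else (and any non-executable file) is skipped.
  _NAME_RE = re.compile(r"^[a-zA-Z0-9_-]+$")

  def is_valid_hook_name(name):
      return _NAME_RE.match(name) is not None

  assert is_valid_hook_name("50-check-inventory")
  assert not is_valid_hook_name("50.check")  # dots are not allowed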

Order of execution
~~~~~~~~~~~~~~~~~~

On a single node, the scripts in a directory are run in lexicographic
order (more exactly, the Python string comparison order). It is
advisable to implement the usual *NN-name* convention where *NN* is a
two-digit number.
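
Keep in mind that lexicographic order is not numeric order, which is
why the two-digit prefix matters::

  >>> sorted(["9-cleanup", "10-check"])
  ['10-check', '9-cleanup']   # "1" < "9", so 10- runs first
  >>> sorted(["09-cleanup", "10-check"])
  ['09-cleanup', '10-check']  # zero-padding restores the intent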

For an operation whose hooks are run on multiple nodes, there is no
specific ordering of nodes with regard to hooks execution; you should
assume that the scripts are run in parallel on the target nodes
(keeping on each node the above specified ordering). If you need any
kind of inter-node synchronisation, you have to implement it yourself
in the scripts.

Execution environment
~~~~~~~~~~~~~~~~~~~~~

The scripts will be run as follows:

- no command line arguments

- no controlling *tty*

- stdin is actually */dev/null*

- stdout and stderr are directed to files

- PATH is reset to ``/sbin:/bin:/usr/sbin:/usr/bin``

- the environment is cleared, and only Ganeti-specific variables will
  be left

All information about the cluster is passed using environment
variables. Different operations will have slightly different
environments, but most of the variables are common.
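
Putting these rules together, a minimal skeleton for a hook (any
operation, either phase) could look like the sketch below; the log
file location is an arbitrary choice for the example::

  #!/usr/bin/env python
  # Minimal hook skeleton: no arguments, no tty, no stdin -- all
  # input arrives via GANETI_* environment variables.

  import os
  import sys
  import time

  LOG = "/var/log/ganeti-hooks.log"  # example location

  def main():
      phase = os.environ.get("GANETI_HOOKS_PHASE", "?")
      op = os.environ.get("GANETI_OP_CODE", "?")
      with open(LOG, "a") as f:
          f.write("%s %s %s\n" % (time.ctime(), phase, op))
      # In the pre phase a non-zero exit aborts the operation; in
      # the post phase the exit code is only logged.
      return 0

  if __name__ == "__main__":
      sys.exit(main())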

Operation list
--------------

Node operations
~~~~~~~~~~~~~~~

OP_NODE_ADD
+++++++++++

Adds a node to the cluster.

:directory: node-add
:env. vars: NODE_NAME, NODE_PIP, NODE_SIP, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: all existing nodes
:post-execution: all nodes plus the new node

OP_NODE_REMOVE
++++++++++++++

Removes a node from the cluster. On the removed node the hooks are
called during the execution of the operation and not after its
completion.

:directory: node-remove
:env. vars: NODE_NAME
:pre-execution: all existing nodes except the removed node
:post-execution: all existing nodes

OP_NODE_SET_PARAMS
++++++++++++++++++

Changes a node's parameters.

:directory: node-modify
:env. vars: MASTER_CANDIDATE, OFFLINE, DRAINED, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: master node, the target node
:post-execution: master node, the target node

OP_NODE_EVACUATE
++++++++++++++++

Relocates secondary instances from a node.

:directory: node-evacuate
:env. vars: NEW_SECONDARY, NODE_NAME
:pre-execution: master node, target node
:post-execution: master node, target node

OP_NODE_MIGRATE
+++++++++++++++

Migrates all primary instances away from a node.

:directory: node-migrate
:env. vars: NODE_NAME
:pre-execution: master node
:post-execution: master node

Node group operations
~~~~~~~~~~~~~~~~~~~~~

OP_GROUP_ADD
++++++++++++

Adds a node group to the cluster.

:directory: group-add
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_SET_PARAMS
+++++++++++++++++++

Changes a node group's parameters.

:directory: group-modify
:env. vars: GROUP_NAME, NEW_ALLOC_POLICY
:pre-execution: master node
:post-execution: master node

OP_GROUP_REMOVE
+++++++++++++++

Removes a node group from the cluster. Since the node group must be
empty for removal to succeed, the concept of "nodes in the group"
does not exist, and the hook is only executed on the master node.

:directory: group-remove
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_RENAME
+++++++++++++++

Renames a node group.

:directory: group-rename
:env. vars: OLD_NAME, NEW_NAME
:pre-execution: master node and all nodes in the group
:post-execution: master node and all nodes in the group

OP_GROUP_EVACUATE
+++++++++++++++++

Evacuates a node group.

:directory: group-evacuate
:env. vars: GROUP_NAME, TARGET_GROUPS
:pre-execution: master node and all nodes in the group
:post-execution: master node and all nodes in the group

Instance operations
~~~~~~~~~~~~~~~~~~~

All instance operations take at least the following variables:
INSTANCE_NAME, INSTANCE_PRIMARY, INSTANCE_SECONDARY,
INSTANCE_OS_TYPE, INSTANCE_DISK_TEMPLATE, INSTANCE_MEMORY,
INSTANCE_DISK_SIZES, INSTANCE_VCPUS, INSTANCE_NIC_COUNT,
INSTANCE_NICn_IP, INSTANCE_NICn_BRIDGE, INSTANCE_NICn_MAC,
INSTANCE_DISK_COUNT, INSTANCE_DISKn_SIZE, INSTANCE_DISKn_MODE.

The INSTANCE_NICn_* and INSTANCE_DISKn_* variables represent the
properties of the *n*-th NIC and disk, and are zero-indexed.
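
Since the per-NIC and per-disk variables are indexed, a hook will
typically loop over the ``*_COUNT`` values to collect them; a short
sketch::

  import os

  def _get(name):
      # All variables are prefixed with GANETI_ (see "Environment
      # variables" below).
      return os.environ.get("GANETI_" + name)

  disks = []
  for n in range(int(_get("INSTANCE_DISK_COUNT") or 0)):
      disks.append({"size": _get("INSTANCE_DISK%d_SIZE" % n),
                    "mode": _get("INSTANCE_DISK%d_MODE" % n)})

  nics = []
  for n in range(int(_get("INSTANCE_NIC_COUNT") or 0)):
      nics.append({"ip": _get("INSTANCE_NIC%d_IP" % n),
                   "mac": _get("INSTANCE_NIC%d_MAC" % n),
                   "bridge": _get("INSTANCE_NIC%d_BRIDGE" % n)})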

OP_INSTANCE_CREATE
++++++++++++++++++

Creates a new instance.

:directory: instance-add
:env. vars: ADD_MODE, SRC_NODE, SRC_PATH, SRC_IMAGES
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REINSTALL
+++++++++++++++++++++

Reinstalls an instance.

:directory: instance-reinstall
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_BACKUP_EXPORT
++++++++++++++++

Exports the instance.

:directory: instance-export
:env. vars: EXPORT_MODE, EXPORT_NODE, EXPORT_DO_SHUTDOWN, REMOVE_INSTANCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_STARTUP
+++++++++++++++++++

Starts an instance.

:directory: instance-start
:env. vars: FORCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SHUTDOWN
++++++++++++++++++++

Stops an instance.

:directory: instance-stop
:env. vars: TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REBOOT
++++++++++++++++++

Reboots an instance.

:directory: instance-reboot
:env. vars: IGNORE_SECONDARIES, REBOOT_TYPE, SHUTDOWN_TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SET_PARAMS
++++++++++++++++++++++

Modifies the instance parameters.

:directory: instance-modify
:env. vars: NEW_DISK_TEMPLATE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_FAILOVER
++++++++++++++++++++

Fails over an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before failover.

:directory: instance-failover
:env. vars: IGNORE_CONSISTENCY, SHUTDOWN_TIMEOUT, OLD_PRIMARY, OLD_SECONDARY, NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, secondary node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MIGRATE
+++++++++++++++++++

Migrates an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before migration.

:directory: instance-migrate
:env. vars: MIGRATE_LIVE, MIGRATE_CLEANUP, OLD_PRIMARY, OLD_SECONDARY, NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, secondary node
:post-execution: master node, primary and secondary nodes
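
As the notes above suggest, post hooks for failover and migration
should rely on OLD_PRIMARY/NEW_PRIMARY rather than INSTANCE_PRIMARY.
A sketch of a post hook recording where an instance moved (the log
path is invented for the example)::

  #!/usr/bin/env python
  # Illustrative post hook for instance-failover/instance-migrate:
  # record the primary node change using the OLD_/NEW_ variables,
  # which are stable across the operation.

  import os
  import sys

  def main():
      if os.environ.get("GANETI_HOOKS_PHASE") != "post":
          return 0
      name = os.environ.get("GANETI_INSTANCE_NAME", "?")
      old = os.environ.get("GANETI_OLD_PRIMARY", "?")
      new = os.environ.get("GANETI_NEW_PRIMARY", "?")
      with open("/var/log/ganeti-moves.log", "a") as f:  # example
          f.write("%s: %s -> %s\n" % (name, old, new))
      return 0

  if __name__ == "__main__":
      sys.exit(main())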

OP_INSTANCE_REMOVE
++++++++++++++++++

Removes an instance.

:directory: instance-remove
:env. vars: SHUTDOWN_TIMEOUT
:pre-execution: master node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_GROW_DISK
+++++++++++++++++++++

Grows the disk of an instance.

:directory: disk-grow
:env. vars: DISK, AMOUNT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_RENAME
++++++++++++++++++

Renames an instance.

:directory: instance-rename
:env. vars: INSTANCE_NEW_NAME
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MOVE
++++++++++++++++

Moves an instance by data-copying.

:directory: instance-move
:env. vars: TARGET_NODE, SHUTDOWN_TIMEOUT
:pre-execution: master node, primary and target nodes
:post-execution: master node, primary and target nodes

OP_INSTANCE_RECREATE_DISKS
++++++++++++++++++++++++++

Recreates an instance's missing disks.

:directory: instance-recreate-disks
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REPLACE_DISKS
+++++++++++++++++++++++++

Replaces the disks of an instance.

:directory: mirrors-replace
:env. vars: MODE, NEW_SECONDARY, OLD_SECONDARY
:pre-execution: master node, primary and new secondary nodes
:post-execution: master node, primary and new secondary nodes

OP_INSTANCE_CHANGE_GROUP
++++++++++++++++++++++++

Moves an instance to another group.

:directory: instance-change-group
:env. vars: TARGET_GROUPS
:pre-execution: master node
:post-execution: master node

Cluster operations
~~~~~~~~~~~~~~~~~~

OP_CLUSTER_POST_INIT
++++++++++++++++++++

This hook is called via a special "empty" LU right after cluster
initialization.

:directory: cluster-init
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_DESTROY
++++++++++++++++++

The post phase of this hook is called during the execution of the
destroy operation and not after its completion.

:directory: cluster-destroy
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_VERIFY_GROUP
+++++++++++++++++++++++

Verifies all nodes in a group. This is a special LU with regard to
hooks, as the result of the opcode will be combined with the result
of post-execution hooks, in order to allow administrators to enhance
the cluster verification procedure.

:directory: cluster-verify
:env. vars: CLUSTER, MASTER, CLUSTER_TAGS, NODE_TAGS_<name>
:pre-execution: none
:post-execution: all nodes in a group
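
For example, a cluster-verify post hook can emit additional
diagnostics, which are then folded into the verification result. A
sketch (the free-space threshold is invented for the example)::

  #!/usr/bin/env python
  # Illustrative cluster-verify post hook: complain when the root
  # filesystem on this node is nearly full. Output and exit code
  # are combined with the result of the verify operation.

  import os
  import sys

  def main():
      st = os.statvfs("/")
      free = st.f_bavail * st.f_frsize
      total = st.f_blocks * st.f_frsize
      if total and float(free) / total < 0.10:  # example threshold
          sys.stdout.write("root filesystem below 10% free\n")
          return 1  # flag a problem
      return 0

  if __name__ == "__main__":
      sys.exit(main())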

OP_CLUSTER_RENAME
+++++++++++++++++

Renames the cluster.

:directory: cluster-rename
:env. vars: NEW_NAME
:pre-execution: master node
:post-execution: master node

OP_CLUSTER_SET_PARAMS
+++++++++++++++++++++

Modifies the cluster parameters.

:directory: cluster-modify
:env. vars: NEW_VG_NAME
:pre-execution: master node
:post-execution: master node

Obsolete operations
~~~~~~~~~~~~~~~~~~~

The following operations are no longer present or don't execute hooks
anymore in Ganeti 2.0:

- OP_INIT_CLUSTER
- OP_MASTER_FAILOVER
- OP_INSTANCE_ADD_MDDRBD
- OP_INSTANCE_REMOVE_MDDRBD

Environment variables
---------------------

Note that all variables listed here are actually prefixed with
*GANETI_* in order to provide a clear namespace. In addition,
post-execution scripts receive another set of variables, prefixed
with *GANETI_POST_*, representing the status after the opcode
executed.

Common variables
~~~~~~~~~~~~~~~~

This is the list of environment variables supported by all
operations:

HOOKS_VERSION
  Documents the hooks interface version. In case this doesn't match
  what the script expects, it should not run; see the version guard
  sketched after this list. This document conforms to version 2.

HOOKS_PHASE
  One of *PRE* or *POST* denoting which phase we are in.

CLUSTER
  The cluster name.

MASTER
  The master node.

OP_CODE
  One of the *OP_* values from the list of operations.

OBJECT_TYPE
  One of ``INSTANCE``, ``NODE``, ``CLUSTER``.

DATA_DIR
  The path to the Ganeti configuration directory (to read, for
  example, the *ssconf* files).
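
As mentioned under HOOKS_VERSION, a script should check the
interface version before acting; a minimal guard::

  import os
  import sys

  EXPECTED = "2"

  if os.environ.get("GANETI_HOOKS_VERSION") != EXPECTED:
      # Unknown interface version: do nothing rather than act on
      # variables whose meaning may have changed. Exiting 0 keeps a
      # pre hook from blocking the operation; exit non-zero instead
      # if you prefer to fail closed.
      sys.exit(0)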

Specialised variables
~~~~~~~~~~~~~~~~~~~~~

This is the list of variables which are specific to one or more
operations.

INSTANCE_NAME
  The name of the instance which is the target of the operation.

INSTANCE_BE_x,y,z,...
  Instance BE params. There is one variable per BE param; for
  example, GANETI_INSTANCE_BE_auto_balance.

INSTANCE_DISK_TEMPLATE
  The disk type for the instance.

NEW_DISK_TEMPLATE
  The new disk type for the instance.

INSTANCE_DISK_COUNT
  The number of disks for the instance.

INSTANCE_DISKn_SIZE
  The size of disk *n* for the instance.

INSTANCE_DISKn_MODE
  Either *rw* for a read-write disk or *ro* for a read-only one.

INSTANCE_HV_x,y,z,...
  Instance hypervisor options. There is one variable per option; for
  example, GANETI_INSTANCE_HV_use_bootloader.

INSTANCE_HYPERVISOR
  The instance hypervisor.

INSTANCE_NIC_COUNT
  The number of NICs for the instance.

INSTANCE_NICn_BRIDGE
  The bridge to which the *n*-th NIC of the instance is attached.

INSTANCE_NICn_IP
  The IP (if any) of the *n*-th NIC of the instance.

INSTANCE_NICn_MAC
  The MAC address of the *n*-th NIC of the instance.

INSTANCE_NICn_MODE
  The mode of the *n*-th NIC of the instance.

INSTANCE_OS_TYPE
  The name of the instance OS.

INSTANCE_PRIMARY
  The name of the node which is the primary for the instance. Note
  that for migrations/failovers you shouldn't rely on this variable,
  since the nodes change during the execution; use the
  OLD_PRIMARY/NEW_PRIMARY values instead.

INSTANCE_SECONDARY
  Space-separated list of secondary nodes for the instance. Note
  that for migrations/failovers you shouldn't rely on this variable,
  since the nodes change during the execution; use the
  OLD_SECONDARY/NEW_SECONDARY values instead.

INSTANCE_MEMORY
  The memory size (in MiB) of the instance.

INSTANCE_VCPUS
  The number of virtual CPUs for the instance.

INSTANCE_STATUS
  The run status of the instance.

MASTER_CAPABLE
  Whether the node is capable of being promoted to master.

VM_CAPABLE
  Whether the node can host instances.

INSTANCE_TAGS
  A space-delimited list of the instance's tags.

NODE_NAME
  The target node of this operation (not the node on which the hook
  runs).

NODE_PIP
  The primary IP of the target node (the one over which inter-node
  communication is done).

NODE_SIP
  The secondary IP of the target node (the one over which DRBD
  replication is done). This can be equal to the primary IP, in case
  the cluster is not dual-homed.

FORCE
  This is provided by some operations when the user gave this flag.

IGNORE_CONSISTENCY
  The user has specified this flag. It is used when failing over
  instances in case the primary node is down.

ADD_MODE
  The mode of the instance creation: either *create* for creation
  from scratch or *import* for restoring from an exported image.

SRC_NODE, SRC_PATH, SRC_IMAGE
  In case the instance has been added by import, these variables are
  defined and point to the source node, source path (the directory
  containing the image and the config file) and the source disk
  image file.

NEW_SECONDARY
  The name of the node on which the new mirror component is being
  added (for replace disk). This can be the name of the current
  secondary, if the new mirror is on the same secondary. For
  migrations/failovers, this is the old primary node.

OLD_SECONDARY
  The name of the old secondary in the replace-disks command. Note
  that this can be equal to the new secondary if the secondary node
  hasn't actually changed. For migrations/failovers, this is the new
  primary node.

OLD_PRIMARY, NEW_PRIMARY
  For migrations/failovers, the old and respectively new primary
  nodes. These two mirror the NEW_SECONDARY/OLD_SECONDARY variables.

EXPORT_MODE
  The instance export mode. Either "remote" or "local".

EXPORT_NODE
  The node on which the instance was exported.

EXPORT_DO_SHUTDOWN
  This variable tells if the instance has been shut down or not
  while doing the export. In the "was shut down" case, it's likely
  that the filesystem is consistent, whereas in the "did not shut
  down" case, the filesystem would need a check (journal replay or
  full fsck) in order to guarantee consistency.

REMOVE_INSTANCE
  Whether the instance was removed from the node.

SHUTDOWN_TIMEOUT
  Amount of time to wait for the instance to shut down.

TIMEOUT
  Amount of time to wait before aborting the operation.

OLD_NAME, NEW_NAME
  Old/new name of the node group.

GROUP_NAME
  The name of the node group.

NEW_ALLOC_POLICY
  The new allocation policy for the node group.

CLUSTER_TAGS
  The list of cluster tags, space separated.

NODE_TAGS_<name>
  The list of tags for node *<name>*, space separated.

Examples
--------

The startup of an instance will pass this environment to the hook
script::

  GANETI_CLUSTER=cluster1.example.com
  GANETI_DATA_DIR=/var/lib/ganeti
  GANETI_FORCE=False
  GANETI_HOOKS_PATH=instance-start
  GANETI_HOOKS_PHASE=post
  GANETI_HOOKS_VERSION=2
  GANETI_INSTANCE_DISK0_MODE=rw
  GANETI_INSTANCE_DISK0_SIZE=128
  GANETI_INSTANCE_DISK_COUNT=1
  GANETI_INSTANCE_DISK_TEMPLATE=drbd
  GANETI_INSTANCE_MEMORY=128
  GANETI_INSTANCE_NAME=instance2.example.com
  GANETI_INSTANCE_NIC0_BRIDGE=xen-br0
  GANETI_INSTANCE_NIC0_IP=
  GANETI_INSTANCE_NIC0_MAC=aa:00:00:a5:91:58
  GANETI_INSTANCE_NIC_COUNT=1
  GANETI_INSTANCE_OS_TYPE=debootstrap
  GANETI_INSTANCE_PRIMARY=node3.example.com
  GANETI_INSTANCE_SECONDARY=node5.example.com
  GANETI_INSTANCE_STATUS=down
  GANETI_INSTANCE_VCPUS=1
  GANETI_MASTER=node1.example.com
  GANETI_OBJECT_TYPE=INSTANCE
  GANETI_OP_CODE=OP_INSTANCE_STARTUP
  GANETI_OP_TARGET=instance2.example.com
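
When developing hooks it can help to capture the exact environment a
hook receives; a small debugging sketch that appends it to a file
(the path is an arbitrary example)::

  #!/usr/bin/env python
  # Debugging aid: dump all GANETI_* variables this hook received.

  import os

  with open("/tmp/ganeti-hook-env.log", "a") as f:  # example path
      for key in sorted(os.environ):
          if key.startswith("GANETI_"):
              f.write("%s=%s\n" % (key, os.environ[key]))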

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: