Ganeti customisation using hooks
================================

Documents Ganeti version 2.7

.. contents::

Introduction
------------

In order to allow customisation of operations, Ganeti runs scripts in
sub-directories of ``@SYSCONFDIR@/ganeti/hooks``. These sub-directories
are named ``$hook-$phase.d``, where ``$phase`` is either ``pre`` or
``post`` and ``$hook`` matches the directory name given for a hook (e.g.
``cluster-verify-post.d`` or ``node-add-pre.d``).

This is similar to the ``/etc/network/`` structure present in Debian
for network interface handling.

Organisation
------------

For every operation, two sets of scripts are run:

- pre phase (for authorization/checking)
- post phase (for logging)

Also, for each operation, the scripts are run on one or more nodes,
depending on the operation type.

Note that, even though we call them scripts, we are actually talking
about any executable.

*pre* scripts
~~~~~~~~~~~~~

The *pre* scripts have a definite target: to check that the operation
is allowed given the site-specific constraints. You could have, for
example, a rule that says every new instance is required to exist in
a database; to implement this, you could write a script that checks
the new instance parameters against your database.

The main output of these scripts is their return code (zero for
success, non-zero for failure). However, if they modify the
environment in any way, they should be idempotent, as failed
executions could be restarted and thus the script(s) run again with
exactly the same parameters.

Note that if a node is unreachable at the time a hook is run, this
will not be interpreted as a denial of the execution. In other words,
only an actual error returned from a script will cause an abort, not
an unreachable node.

Therefore, if you want to guarantee that a hook script is run and
denies an action, it's best to put it on the master node.
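
As an illustration only, here is a minimal *pre* hook sketch in Python
implementing the database-style check described above; the allow-list
file path and the "fail closed" policy are assumptions made for this
example, not part of Ganeti::

  #!/usr/bin/env python
  # Hypothetical script for instance-add-pre.d/: deny instance creation
  # unless the instance name appears in a local allow-list file.
  import os
  import sys

  ALLOWED = "/etc/ganeti/allowed-instances"  # assumed site-specific file

  def main():
      name = os.environ.get("GANETI_INSTANCE_NAME")
      if name is None:
          # Not an instance operation; nothing to check.
          return 0
      try:
          with open(ALLOWED) as handle:
              allowed = set(line.strip() for line in handle)
      except IOError:
          # Fail closed: without the allow-list, deny the operation.
          sys.stderr.write("cannot read %s\n" % ALLOWED)
          return 1
      return 0 if name in allowed else 1

  if __name__ == "__main__":
      sys.exit(main())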

*post* scripts
~~~~~~~~~~~~~~

These scripts should do whatever you need as a reaction to the
completion of an operation. Their return code is not checked (but
logged), and they should not depend on the fact that the *pre* scripts
have been run.
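
For example, a minimal *post* hook sketch (an assumption made for
illustration, not something shipped with Ganeti) could simply record
which opcode completed and in which phase::

  #!/usr/bin/env python
  # Hypothetical script for a *-post.d/ directory: append a one-line
  # record of the completed operation to a local log file.
  import os
  import time

  LOG_FILE = "/var/log/ganeti-hooks.log"  # assumed log location

  with open(LOG_FILE, "a") as logfile:
      logfile.write("%s phase=%s opcode=%s object=%s\n" % (
          time.strftime("%Y-%m-%d %H:%M:%S"),
          os.environ.get("GANETI_HOOKS_PHASE", "?"),
          os.environ.get("GANETI_OP_CODE", "?"),
          os.environ.get("GANETI_OBJECT_TYPE", "?")))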

Naming
~~~~~~

The allowed names for the scripts consist of (similar to *run-parts*)
upper and lower case letters, digits, underscores and hyphens; in
other words, names matching the regexp ``^[a-zA-Z0-9_-]+$``.
Non-executable scripts will be ignored.

Order of execution
~~~~~~~~~~~~~~~~~~

On a single node, the scripts in a directory are run in lexicographic
order (more exactly, the Python string comparison order). It is
advisable to implement the usual *NN-name* convention where *NN* is a
two digit number.
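
Because the ordering is string-based, the two-digit prefix matters; a
quick sketch of the comparison that is effectively performed (the hook
names below are made up for illustration)::

  >>> sorted(["9-cleanup", "10-setup", "05-check"])
  ['05-check', '10-setup', '9-cleanup']

Without zero-padding, ``9-cleanup`` would run *after* ``10-setup``,
which is rarely what is intended.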

For an operation whose hooks are run on multiple nodes, there is no
specific ordering of nodes with regard to hooks execution; you should
assume that the scripts are run in parallel on the target nodes
(keeping on each node the above specified ordering). If you need any
kind of inter-node synchronisation, you have to implement it yourself
in the scripts.

Execution environment
~~~~~~~~~~~~~~~~~~~~~

The scripts will be run as follows:

- no command line arguments

- no controlling *tty*

- stdin is actually */dev/null*

- stdout and stderr are directed to files

- PATH is reset to :pyeval:`constants.HOOKS_PATH`

- the environment is cleared, and only Ganeti-specific variables will
  be left

All information about the cluster is passed using environment
variables. Different operations will have slightly different
environments, but most of the variables are common.
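
Since everything a hook needs arrives through the environment, and
stdout/stderr end up in log files, a generic hook typically starts by
checking the interface version and then reading the ``GANETI_*``
variables; a sketch (assumed to be usable from any hook directory)::

  #!/usr/bin/env python
  # Sketch of a generic hook: verify the hooks interface version, then
  # dump all Ganeti-provided variables to stderr (which is logged).
  import os
  import sys

  if os.environ.get("GANETI_HOOKS_VERSION") != "2":
      # Unknown interface version: do nothing rather than misbehave.
      sys.exit(0)

  for key in sorted(os.environ):
      if key.startswith("GANETI_"):
          sys.stderr.write("%s=%s\n" % (key, os.environ[key]))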

Operation list
--------------

Node operations
~~~~~~~~~~~~~~~

OP_NODE_ADD
+++++++++++

Adds a node to the cluster.

:directory: node-add
:env. vars: NODE_NAME, NODE_PIP, NODE_SIP, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: all existing nodes
:post-execution: all nodes plus the new node

OP_NODE_REMOVE
++++++++++++++

Removes a node from the cluster. On the removed node the hooks are
called during the execution of the operation and not after its
completion.

:directory: node-remove
:env. vars: NODE_NAME
:pre-execution: all existing nodes except the removed node
:post-execution: all existing nodes

OP_NODE_SET_PARAMS
++++++++++++++++++

Changes a node's parameters.

:directory: node-modify
:env. vars: MASTER_CANDIDATE, OFFLINE, DRAINED, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: master node, the target node
:post-execution: master node, the target node

OP_NODE_MIGRATE
+++++++++++++++

Relocates secondary instances from a node.

:directory: node-migrate
:env. vars: NODE_NAME
:pre-execution: master node
:post-execution: master node

Node group operations
~~~~~~~~~~~~~~~~~~~~~

OP_GROUP_ADD
++++++++++++

Adds a node group to the cluster.

:directory: group-add
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_SET_PARAMS
+++++++++++++++++++

Changes a node group's parameters.

:directory: group-modify
:env. vars: GROUP_NAME, NEW_ALLOC_POLICY
:pre-execution: master node
:post-execution: master node

OP_GROUP_REMOVE
+++++++++++++++

Removes a node group from the cluster. Since the node group must be
empty for removal to succeed, the concept of "nodes in the group" does
not exist, and the hook is only executed on the master node.

:directory: group-remove
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_RENAME
+++++++++++++++

Renames a node group.

:directory: group-rename
:env. vars: OLD_NAME, NEW_NAME
:pre-execution: master node and all nodes in the group
:post-execution: master node and all nodes in the group

OP_GROUP_EVACUATE
+++++++++++++++++

Evacuates a node group.

:directory: group-evacuate
:env. vars: GROUP_NAME, TARGET_GROUPS
:pre-execution: master node and all nodes in the group
:post-execution: master node and all nodes in the group

Network operations
~~~~~~~~~~~~~~~~~~

OP_NETWORK_ADD
++++++++++++++

Adds a network to the cluster.

:directory: network-add
:env. vars: NETWORK_NAME, NETWORK_SUBNET, NETWORK_GATEWAY, NETWORK_SUBNET6,
            NETWORK_GATEWAY6, NETWORK_TYPE, NETWORK_MAC_PREFIX, NETWORK_TAGS
:pre-execution: master node
:post-execution: master node

OP_NETWORK_REMOVE
+++++++++++++++++

Removes a network from the cluster.

:directory: network-remove
:env. vars: NETWORK_NAME
:pre-execution: master node
:post-execution: master node

OP_NETWORK_CONNECT
++++++++++++++++++

Connects a network to a node group.

:directory: network-connect
:env. vars: GROUP_NAME, NETWORK_NAME,
            GROUP_NETWORK_MODE, GROUP_NETWORK_LINK,
            NETWORK_SUBNET, NETWORK_GATEWAY, NETWORK_SUBNET6,
            NETWORK_GATEWAY6, NETWORK_TYPE, NETWORK_MAC_PREFIX, NETWORK_TAGS
:pre-execution: nodegroup nodes
:post-execution: nodegroup nodes

OP_NETWORK_DISCONNECT
+++++++++++++++++++++

Disconnects a network from a node group.

:directory: network-disconnect
:env. vars: GROUP_NAME, NETWORK_NAME,
            GROUP_NETWORK_MODE, GROUP_NETWORK_LINK,
            NETWORK_SUBNET, NETWORK_GATEWAY, NETWORK_SUBNET6,
            NETWORK_GATEWAY6, NETWORK_TYPE, NETWORK_MAC_PREFIX, NETWORK_TAGS
:pre-execution: nodegroup nodes
:post-execution: nodegroup nodes

OP_NETWORK_SET_PARAMS
+++++++++++++++++++++

Modifies a network.

:directory: network-modify
:env. vars: NETWORK_NAME, NETWORK_SUBNET, NETWORK_GATEWAY, NETWORK_SUBNET6,
            NETWORK_GATEWAY6, NETWORK_TYPE, NETWORK_MAC_PREFIX, NETWORK_TAGS
:pre-execution: master node
:post-execution: master node

Instance operations
~~~~~~~~~~~~~~~~~~~

All instance operations take at least the following variables:
INSTANCE_NAME, INSTANCE_PRIMARY, INSTANCE_SECONDARY,
INSTANCE_OS_TYPE, INSTANCE_DISK_TEMPLATE, INSTANCE_MEMORY,
INSTANCE_DISK_SIZES, INSTANCE_VCPUS, INSTANCE_NIC_COUNT,
INSTANCE_NICn_IP, INSTANCE_NICn_BRIDGE, INSTANCE_NICn_MAC,
INSTANCE_NICn_NETWORK, INSTANCE_NICn_NETWORK_FAMILY,
INSTANCE_NICn_NETWORK_UUID, INSTANCE_NICn_NETWORK_SUBNET,
INSTANCE_NICn_NETWORK_GATEWAY, INSTANCE_NICn_NETWORK_SUBNET6,
INSTANCE_NICn_NETWORK_GATEWAY6, INSTANCE_NICn_NETWORK_MAC_PREFIX,
INSTANCE_NICn_NETWORK_TYPE, INSTANCE_DISK_COUNT, INSTANCE_DISKn_SIZE,
INSTANCE_DISKn_MODE.

The INSTANCE_NICn_* and INSTANCE_DISKn_* variables represent the
properties of the *n* -th NIC and disk, and are zero-indexed.

The INSTANCE_NICn_NETWORK_* variables are only passed if a NIC's
network parameter is set (that is, if the NIC is associated with a
network defined via ``gnt-network``).
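
Because the count variables and the zero-based indices line up, a hook
can walk over all disks (or NICs) with a simple loop; a sketch, using
the *GANETI_* prefix described in the *Environment variables* section
below::

  #!/usr/bin/env python
  # Sketch: print the size and mode of every disk of the instance.
  import os

  env = os.environ
  for n in range(int(env.get("GANETI_INSTANCE_DISK_COUNT", "0"))):
      size = env.get("GANETI_INSTANCE_DISK%d_SIZE" % n)
      mode = env.get("GANETI_INSTANCE_DISK%d_MODE" % n)
      print("disk %d: size=%s mode=%s" % (n, size, mode))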

OP_INSTANCE_CREATE
++++++++++++++++++

Creates a new instance.

:directory: instance-add
:env. vars: ADD_MODE, SRC_NODE, SRC_PATH, SRC_IMAGES
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REINSTALL
+++++++++++++++++++++

Reinstalls an instance.

:directory: instance-reinstall
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_BACKUP_EXPORT
++++++++++++++++

Exports the instance.

:directory: instance-export
:env. vars: EXPORT_MODE, EXPORT_NODE, EXPORT_DO_SHUTDOWN, REMOVE_INSTANCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_STARTUP
+++++++++++++++++++

Starts an instance.

:directory: instance-start
:env. vars: FORCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SHUTDOWN
++++++++++++++++++++

Stops an instance.

:directory: instance-stop
:env. vars: TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REBOOT
++++++++++++++++++

Reboots an instance.

:directory: instance-reboot
:env. vars: IGNORE_SECONDARIES, REBOOT_TYPE, SHUTDOWN_TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SET_PARAMS
++++++++++++++++++++++

Modifies the instance parameters.

:directory: instance-modify
:env. vars: NEW_DISK_TEMPLATE, RUNTIME_MEMORY
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_FAILOVER
++++++++++++++++++++

Fails over an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before the failover.

:directory: instance-failover
:env. vars: IGNORE_CONSISTENCY, SHUTDOWN_TIMEOUT, OLD_PRIMARY,
            OLD_SECONDARY, NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, secondary node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MIGRATE
+++++++++++++++++++

Migrates an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before the migration.

:directory: instance-migrate
:env. vars: MIGRATE_LIVE, MIGRATE_CLEANUP, OLD_PRIMARY, OLD_SECONDARY,
            NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REMOVE
++++++++++++++++++

Removes an instance.

:directory: instance-remove
:env. vars: SHUTDOWN_TIMEOUT
:pre-execution: master node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_GROW_DISK
+++++++++++++++++++++

Grows a disk of an instance.

:directory: disk-grow
:env. vars: DISK, AMOUNT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_RENAME
++++++++++++++++++

Renames an instance.

:directory: instance-rename
:env. vars: INSTANCE_NEW_NAME
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MOVE
++++++++++++++++

Moves an instance by data-copying.

:directory: instance-move
:env. vars: TARGET_NODE, SHUTDOWN_TIMEOUT
:pre-execution: master node, primary and target nodes
:post-execution: master node, primary and target nodes

OP_INSTANCE_RECREATE_DISKS
++++++++++++++++++++++++++

Recreates an instance's missing disks.

:directory: instance-recreate-disks
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REPLACE_DISKS
+++++++++++++++++++++++++

Replaces the disks of an instance.

:directory: mirrors-replace
:env. vars: MODE, NEW_SECONDARY, OLD_SECONDARY
:pre-execution: master node, primary and new secondary nodes
:post-execution: master node, primary and new secondary nodes

OP_INSTANCE_CHANGE_GROUP
++++++++++++++++++++++++

Moves an instance to another group.

:directory: instance-change-group
:env. vars: TARGET_GROUPS
:pre-execution: master node
:post-execution: master node

Cluster operations
~~~~~~~~~~~~~~~~~~

OP_CLUSTER_POST_INIT
++++++++++++++++++++

This hook is called via a special "empty" LU right after cluster
initialization.

:directory: cluster-init
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_DESTROY
++++++++++++++++++

The post phase of this hook is called during the execution of the
destroy operation and not after its completion.

:directory: cluster-destroy
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_VERIFY_GROUP
+++++++++++++++++++++++

Verifies all nodes in a group. This is a special LU with regard to
hooks, as the result of the opcode will be combined with the result of
the post-execution hooks, in order to allow administrators to enhance
the cluster verification procedure.

:directory: cluster-verify
:env. vars: CLUSTER, MASTER, CLUSTER_TAGS, NODE_TAGS_<name>
:pre-execution: none
:post-execution: all nodes in a group
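
Since the output of these post hooks is folded into the verification
report, such a hook usually prints human-readable findings and signals
problems through its exit code. An illustrative sketch (the checked
path and the 1 GiB threshold are assumptions for this example)::

  #!/usr/bin/env python
  # Sketch for cluster-verify-post.d/: warn when the filesystem that
  # holds the Ganeti configuration is running low on free space.
  import os
  import sys

  THRESHOLD = 1024 * 1024 * 1024  # 1 GiB, a site-specific choice

  data_dir = os.environ.get("GANETI_DATA_DIR", "/var/lib/ganeti")
  stats = os.statvfs(data_dir)
  free = stats.f_bavail * stats.f_frsize
  if free < THRESHOLD:
      print("only %d MiB free under %s" % (free // (1024 * 1024),
                                           data_dir))
      sys.exit(1)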

OP_CLUSTER_RENAME
+++++++++++++++++

Renames the cluster.

:directory: cluster-rename
:env. vars: NEW_NAME
:pre-execution: master node
:post-execution: master node

OP_CLUSTER_SET_PARAMS
+++++++++++++++++++++

Modifies the cluster parameters.

:directory: cluster-modify
:env. vars: NEW_VG_NAME
:pre-execution: master node
:post-execution: master node

Virtual operation :pyeval:`constants.FAKE_OP_MASTER_TURNUP`
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This doesn't correspond to an actual op-code, but it is called when the
master IP is activated.

:directory: master-ip-turnup
:env. vars: MASTER_NETDEV, MASTER_IP, MASTER_NETMASK, CLUSTER_IP_VERSION
:pre-execution: master node
:post-execution: master node

Virtual operation :pyeval:`constants.FAKE_OP_MASTER_TURNDOWN`
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This doesn't correspond to an actual op-code, but it is called when the
master IP is deactivated.

:directory: master-ip-turndown
:env. vars: MASTER_NETDEV, MASTER_IP, MASTER_NETMASK, CLUSTER_IP_VERSION
:pre-execution: master node
:post-execution: master node

Obsolete operations
~~~~~~~~~~~~~~~~~~~

The following operations are no longer present or don't execute hooks
anymore in Ganeti 2.0:

- OP_INIT_CLUSTER
- OP_MASTER_FAILOVER
- OP_INSTANCE_ADD_MDDRBD
- OP_INSTANCE_REMOVE_MDDRBD

Environment variables
---------------------

Note that all variables listed here are actually prefixed with *GANETI_*
in order to provide a clear namespace. In addition, post-execution
scripts receive another set of variables, prefixed with *GANETI_POST_*,
representing the status after the opcode has executed.
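
In a *post* hook this means each piece of state can be read twice, once
per prefix; for example, a sketch that reports whether an instance's
primary node changed during the opcode (variable availability depends
on the operation, as described above)::

  #!/usr/bin/env python
  # Sketch: compare the pre- and post-opcode view of the primary node.
  import os

  before = os.environ.get("GANETI_INSTANCE_PRIMARY")
  after = os.environ.get("GANETI_POST_INSTANCE_PRIMARY")
  if before and after and before != after:
      print("primary node changed from %s to %s" % (before, after))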

Common variables
~~~~~~~~~~~~~~~~

This is the list of environment variables supported by all operations:

HOOKS_VERSION
  Documents the hooks interface version. In case this doesn't match
  what the script expects, it should not run. This document conforms
  to version 2.

HOOKS_PHASE
  One of *PRE* or *POST* denoting which phase we are in.

CLUSTER
  The cluster name.

MASTER
  The master node.

OP_CODE
  One of the *OP_* values from the list of operations.

OBJECT_TYPE
  One of ``INSTANCE``, ``NODE``, ``CLUSTER``.

DATA_DIR
  The path to the Ganeti configuration directory (to read, for
  example, the *ssconf* files).

Specialised variables
~~~~~~~~~~~~~~~~~~~~~

This is the list of variables which are specific to one or more
operations.

CLUSTER_IP_VERSION
  IP version of the master IP (4 or 6).

INSTANCE_NAME
  The name of the instance which is the target of the operation.

INSTANCE_BE_x,y,z,...
  Instance BE params. There is one variable per BE param. For example,
  GANETI_INSTANCE_BE_auto_balance.

INSTANCE_DISK_TEMPLATE
  The disk type for the instance.

NEW_DISK_TEMPLATE
  The new disk type for the instance.

INSTANCE_DISK_COUNT
  The number of disks for the instance.

INSTANCE_DISKn_SIZE
  The size of disk *n* for the instance.

INSTANCE_DISKn_MODE
  Either *rw* for a read-write disk or *ro* for a read-only one.

INSTANCE_HV_x,y,z,...
  Instance hypervisor options. There is one variable per option. For
  example, GANETI_INSTANCE_HV_use_bootloader.

INSTANCE_HYPERVISOR
  The instance hypervisor.

INSTANCE_NIC_COUNT
  The number of NICs for the instance.

INSTANCE_NICn_BRIDGE
  The bridge to which the *n* -th NIC of the instance is attached.

INSTANCE_NICn_IP
  The IP (if any) of the *n* -th NIC of the instance.

INSTANCE_NICn_MAC
  The MAC address of the *n* -th NIC of the instance.

INSTANCE_NICn_MODE
  The mode of the *n* -th NIC of the instance.

INSTANCE_OS_TYPE
  The name of the instance OS.

INSTANCE_PRIMARY
  The name of the node which is the primary for the instance. Note that
  for migrations/failovers, you shouldn't rely on this variable since
  the nodes change during the execution, but on the
  OLD_PRIMARY/NEW_PRIMARY values.

INSTANCE_SECONDARY
  Space-separated list of secondary nodes for the instance. Note that
  for migrations/failovers, you shouldn't rely on this variable since
  the nodes change during the execution, but on the
  OLD_SECONDARY/NEW_SECONDARY values.

INSTANCE_MEMORY
  The memory size (in MiBs) of the instance.

INSTANCE_VCPUS
  The number of virtual CPUs for the instance.

INSTANCE_STATUS
  The run status of the instance.

MASTER_CAPABLE
  Whether a node is capable of being promoted to master.

VM_CAPABLE
  Whether the node can host instances.

MASTER_NETDEV
  Network device of the master IP.

MASTER_IP
  The master IP.

MASTER_NETMASK
  Netmask of the master IP.

INSTANCE_TAGS
  A space-delimited list of the instance's tags.

NODE_NAME
  The target node of this operation (not the node on which the hook
  runs).

NODE_PIP
  The primary IP of the target node (the one over which inter-node
  communication is done).

NODE_SIP
  The secondary IP of the target node (the one over which DRBD
  replication is done). This can be equal to the primary IP, in case
  the cluster is not dual-homed.

FORCE
  This is provided by some operations when the user gave this flag.

IGNORE_CONSISTENCY
  The user has specified this flag. It is used when failing over
  instances in case the primary node is down.

ADD_MODE
  The mode of the instance creation: either *create* for creation from
  scratch or *import* for restoring from an exported image.

SRC_NODE, SRC_PATH, SRC_IMAGE
  In case the instance has been added by import, these variables are
  defined and point to the source node, source path (the directory
  containing the image and the config file) and the source disk image
  file.

NEW_SECONDARY
  The name of the node on which the new mirror component is being
  added (for replace disk). This can be the name of the current
  secondary, if the new mirror is on the same secondary. For
  migrations/failovers, this is the old primary node.

OLD_SECONDARY
  The name of the old secondary in the replace-disks command. Note that
  this can be equal to the new secondary if the secondary node hasn't
  actually changed. For migrations/failovers, this is the new primary
  node.

OLD_PRIMARY, NEW_PRIMARY
  For migrations/failovers, the old and respectively new primary
  nodes. These two mirror the NEW_SECONDARY/OLD_SECONDARY variables.

EXPORT_MODE
  The instance export mode. Either "remote" or "local".

EXPORT_NODE
  The node on which the exported image of the instance was created.

EXPORT_DO_SHUTDOWN
  This variable tells whether the instance was shut down while doing
  the export. In the "was shut down" case, it's likely that the
  filesystem is consistent, whereas in the "did not shut down" case,
  the filesystem would need a check (journal replay or full fsck) in
  order to guarantee consistency.

REMOVE_INSTANCE
  Whether the instance was removed from the node.

SHUTDOWN_TIMEOUT
  Amount of time to wait for the instance to shut down.

TIMEOUT
  Amount of time to wait before aborting the op.

OLD_NAME, NEW_NAME
  Old/new name of the node group.

GROUP_NAME
  The name of the node group.

NEW_ALLOC_POLICY
  The new allocation policy for the node group.

CLUSTER_TAGS
  The list of cluster tags, space separated.

NODE_TAGS_<name>
  The list of tags for node *<name>*, space separated.

Examples
--------

The startup of an instance will pass this environment to the hook
script::

  GANETI_CLUSTER=cluster1.example.com
  GANETI_DATA_DIR=/var/lib/ganeti
  GANETI_FORCE=False
  GANETI_HOOKS_PATH=instance-start
  GANETI_HOOKS_PHASE=post
  GANETI_HOOKS_VERSION=2
  GANETI_INSTANCE_DISK0_MODE=rw
  GANETI_INSTANCE_DISK0_SIZE=128
  GANETI_INSTANCE_DISK_COUNT=1
  GANETI_INSTANCE_DISK_TEMPLATE=drbd
  GANETI_INSTANCE_MEMORY=128
  GANETI_INSTANCE_NAME=instance2.example.com
  GANETI_INSTANCE_NIC0_BRIDGE=xen-br0
  GANETI_INSTANCE_NIC0_IP=
  GANETI_INSTANCE_NIC0_MAC=aa:00:00:a5:91:58
  GANETI_INSTANCE_NIC_COUNT=1
  GANETI_INSTANCE_OS_TYPE=debootstrap
  GANETI_INSTANCE_PRIMARY=node3.example.com
  GANETI_INSTANCE_SECONDARY=node5.example.com
  GANETI_INSTANCE_STATUS=down
  GANETI_INSTANCE_VCPUS=1
  GANETI_MASTER=node1.example.com
  GANETI_OBJECT_TYPE=INSTANCE
  GANETI_OP_CODE=OP_INSTANCE_STARTUP
  GANETI_OP_TARGET=instance2.example.com

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: