Ganeti customisation using hooks
================================

Documents Ganeti version 2.7

.. contents::

Introduction
------------

In order to allow customisation of operations, Ganeti runs scripts in
sub-directories of ``@SYSCONFDIR@/ganeti/hooks``. These sub-directories
are named ``$hook-$phase.d``, where ``$phase`` is either ``pre`` or
``post`` and ``$hook`` matches the directory name given for a hook (e.g.
``cluster-verify-post.d`` or ``node-add-pre.d``).

This is similar to the ``/etc/network/`` structure present in Debian
for network interface handling.
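
For illustration only, a cluster that customises instance creation and
cluster verification might carry a layout like the following (the
script names are hypothetical)::

  @SYSCONFDIR@/ganeti/hooks/
    instance-add-pre.d/
      10-check-inventory
    instance-start-post.d/
      50-update-monitoring
    cluster-verify-post.d/
      90-site-checks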

Organisation
------------

For every operation, two sets of scripts are run:

- pre phase (for authorization/checking)
- post phase (for logging)

Also, for each operation, the scripts are run on one or more nodes,
depending on the operation type.

Note that, even though we call them scripts, we are actually talking
about any executable.

*pre* scripts
~~~~~~~~~~~~~

The *pre* scripts have a definite target: to check that the operation
is allowed given the site-specific constraints. You could have, for
example, a rule that says every new instance is required to exist in
a database; to implement this, you could write a script that checks
the new instance parameters against your database.
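
A minimal sketch of such a pre hook, assuming a plain-text inventory
file (the path and file format are made up for this example; a real
site would query its own database)::

  #!/usr/bin/env python
  # instance-add-pre.d/10-check-inventory: refuse instances that are
  # not listed in the site inventory.  Exit code 0 allows the
  # operation, any other exit code makes Ganeti abort it.
  import os
  import sys

  INVENTORY = "/etc/site/known-instances.txt"  # hypothetical path

  def main():
      name = os.environ.get("GANETI_INSTANCE_NAME")
      if not name:
          # Not an instance operation; nothing to check.
          return 0
      try:
          with open(INVENTORY) as inv:
              known = set(line.strip() for line in inv)
      except IOError:
          sys.stderr.write("cannot read inventory, denying %s\n" % name)
          return 1
      if name not in known:
          sys.stderr.write("instance %s not in inventory\n" % name)
          return 1
      return 0

  if __name__ == "__main__":
      sys.exit(main())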

The only thing that matters for these scripts is their return code
(zero for success, non-zero for failure). However, if they modify the
environment in any way, they should be idempotent, as failed
executions could be restarted and thus the script(s) run again with
exactly the same parameters.

Note that if a node is unreachable at the time a hook is run, this
will not be interpreted as a denial of the execution. In other words,
only an actual error returned from a script will cause the operation
to abort, not an unreachable node.

Therefore, if you want to guarantee that a hook script is run and
denies an action, it's best to put it on the master node.

*post* scripts
~~~~~~~~~~~~~~

These scripts should do whatever you need as a reaction to the
completion of an operation. Their return code is not checked (but
logged), and they should not depend on the fact that the *pre* scripts
have been run.
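
As an illustration, a post hook might simply record what happened,
e.g. by appending to a site-local log file (the path below is
hypothetical)::

  #!/usr/bin/env python
  # instance-start-post.d/50-log-operation: record completed operations.
  # The return code of post hooks is only logged, so a failure here
  # does not affect the operation itself.
  import os
  import time

  LOG_FILE = "/var/log/site/ganeti-hooks.log"  # hypothetical path

  entry = "%s %s %s\n" % (
      time.strftime("%Y-%m-%d %H:%M:%S"),
      os.environ.get("GANETI_OP_CODE", "UNKNOWN_OP"),
      os.environ.get("GANETI_INSTANCE_NAME", "-"),
  )
  with open(LOG_FILE, "a") as logfile:
      logfile.write(entry)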

Naming
~~~~~~

The allowed names for the scripts consist (similarly to *run-parts*) of
upper and lower case letters, digits, underscores and hyphens; in other
words, they must match the regexp ``^[a-zA-Z0-9_-]+$``. Also,
non-executable scripts will be ignored.


Order of execution
~~~~~~~~~~~~~~~~~~

On a single node, the scripts in a directory are run in lexicographic
order (more exactly, the Python string comparison order). It is
advisable to implement the usual *NN-name* convention where *NN* is a
two-digit number.
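
For example, a hook directory might look like this (hypothetical
names), and the scripts would run in the order shown::

  node-add-pre.d/
    10-verify-dns
    20-check-inventory
    99-notify-admins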

For an operation whose hooks are run on multiple nodes, there is no
specific ordering of nodes with regard to hooks execution; you should
assume that the scripts are run in parallel on the target nodes
(keeping on each node the above specified ordering). If you need any
kind of inter-node synchronisation, you have to implement it yourself
in the scripts.

Execution environment
~~~~~~~~~~~~~~~~~~~~~

The scripts will be run as follows:

- no command line arguments

- no controlling *tty*

- stdin is actually */dev/null*

- stdout and stderr are directed to files

- PATH is reset to :pyeval:`constants.HOOKS_PATH`

- the environment is cleared, and only ganeti-specific variables will
  be left


All information about the cluster is passed using environment
variables. Different operations will have slightly different
environments, but most of the variables are common.

Operation list
--------------

Node operations
~~~~~~~~~~~~~~~

OP_NODE_ADD
+++++++++++

Adds a node to the cluster.

:directory: node-add
:env. vars: NODE_NAME, NODE_PIP, NODE_SIP, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: all existing nodes
:post-execution: all nodes plus the new node


OP_NODE_REMOVE
++++++++++++++

Removes a node from the cluster. On the removed node the hooks are
called during the execution of the operation and not after its
completion.

:directory: node-remove
:env. vars: NODE_NAME
:pre-execution: all existing nodes except the removed node
:post-execution: all existing nodes

OP_NODE_SET_PARAMS
++++++++++++++++++

Changes a node's parameters.

:directory: node-modify
:env. vars: MASTER_CANDIDATE, OFFLINE, DRAINED, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: master node, the target node
:post-execution: master node, the target node

OP_NODE_MIGRATE
++++++++++++++++

Relocates secondary instances from a node.

:directory: node-migrate
:env. vars: NODE_NAME
:pre-execution: master node
:post-execution: master node


Node group operations
~~~~~~~~~~~~~~~~~~~~~

OP_GROUP_ADD
++++++++++++

Adds a node group to the cluster.

:directory: group-add
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_SET_PARAMS
+++++++++++++++++++

Changes a node group's parameters.

:directory: group-modify
:env. vars: GROUP_NAME, NEW_ALLOC_POLICY
:pre-execution: master node
:post-execution: master node

OP_GROUP_REMOVE
+++++++++++++++

Removes a node group from the cluster. Since the node group must be
empty for removal to succeed, the concept of "nodes in the group" does
not exist, and the hook is only executed on the master node.

:directory: group-remove
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_RENAME
+++++++++++++++

Renames a node group.

:directory: group-rename
:env. vars: OLD_NAME, NEW_NAME
:pre-execution: master node and all nodes in the group
:post-execution: master node and all nodes in the group

OP_GROUP_EVACUATE
+++++++++++++++++

Evacuates a node group.

:directory: group-evacuate
:env. vars: GROUP_NAME, TARGET_GROUPS
:pre-execution: master node and all nodes in the group
:post-execution: master node and all nodes in the group

Network operations
~~~~~~~~~~~~~~~~~~

OP_NETWORK_ADD
++++++++++++++

Adds a network to the cluster.

:directory: network-add
:env. vars: NETWORK_NAME, NETWORK_SUBNET, NETWORK_GATEWAY, NETWORK_SUBNET6,
            NETWORK_GATEWAY6, NETWORK_MAC_PREFIX, NETWORK_TAGS
:pre-execution: master node
:post-execution: master node

OP_NETWORK_REMOVE
+++++++++++++++++

Removes a network from the cluster.

:directory: network-remove
:env. vars: NETWORK_NAME
:pre-execution: master node
:post-execution: master node

OP_NETWORK_CONNECT
++++++++++++++++++

Connects a network to a nodegroup.

:directory: network-connect
:env. vars: GROUP_NAME, NETWORK_NAME,
            GROUP_NETWORK_MODE, GROUP_NETWORK_LINK,
            NETWORK_SUBNET, NETWORK_GATEWAY, NETWORK_SUBNET6,
            NETWORK_GATEWAY6, NETWORK_MAC_PREFIX, NETWORK_TAGS
:pre-execution: nodegroup nodes
:post-execution: nodegroup nodes


OP_NETWORK_DISCONNECT
+++++++++++++++++++++

Disconnects a network from a nodegroup.

:directory: network-disconnect
:env. vars: GROUP_NAME, NETWORK_NAME,
            GROUP_NETWORK_MODE, GROUP_NETWORK_LINK,
            NETWORK_SUBNET, NETWORK_GATEWAY, NETWORK_SUBNET6,
            NETWORK_GATEWAY6, NETWORK_MAC_PREFIX, NETWORK_TAGS
:pre-execution: nodegroup nodes
:post-execution: nodegroup nodes


OP_NETWORK_SET_PARAMS
+++++++++++++++++++++

Modifies a network.

:directory: network-modify
:env. vars: NETWORK_NAME, NETWORK_SUBNET, NETWORK_GATEWAY, NETWORK_SUBNET6,
            NETWORK_GATEWAY6, NETWORK_MAC_PREFIX, NETWORK_TAGS
:pre-execution: master node
:post-execution: master node


Instance operations
~~~~~~~~~~~~~~~~~~~

All instance operations take at least the following variables:
INSTANCE_NAME, INSTANCE_PRIMARY, INSTANCE_SECONDARY,
INSTANCE_OS_TYPE, INSTANCE_DISK_TEMPLATE, INSTANCE_MEMORY,
INSTANCE_DISK_SIZES, INSTANCE_VCPUS, INSTANCE_NIC_COUNT,
INSTANCE_NICn_IP, INSTANCE_NICn_BRIDGE, INSTANCE_NICn_MAC,
INSTANCE_NICn_NETWORK,
INSTANCE_NICn_NETWORK_UUID, INSTANCE_NICn_NETWORK_SUBNET,
INSTANCE_NICn_NETWORK_GATEWAY, INSTANCE_NICn_NETWORK_SUBNET6,
INSTANCE_NICn_NETWORK_GATEWAY6, INSTANCE_NICn_NETWORK_MAC_PREFIX,
INSTANCE_DISK_COUNT, INSTANCE_DISKn_SIZE, INSTANCE_DISKn_MODE.

The INSTANCE_NICn_* and INSTANCE_DISKn_* variables represent the
properties of the *n*-th NIC and disk, and are zero-indexed.

The INSTANCE_NICn_NETWORK_* variables are only passed if a NIC's network
parameter is set (that is, if the NIC is associated with a network
defined via ``gnt-network``).
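
A hook that needs the per-NIC or per-disk values can reconstruct them
from the count variables; a minimal sketch (remember that all variables
carry the *GANETI_* prefix described in the Environment variables
section below)::

  import os

  def instance_nics():
      """Return a list of (ip, mac) tuples for the instance's NICs."""
      count = int(os.environ.get("GANETI_INSTANCE_NIC_COUNT", "0"))
      nics = []
      for n in range(count):
          ip = os.environ.get("GANETI_INSTANCE_NIC%d_IP" % n, "")
          mac = os.environ.get("GANETI_INSTANCE_NIC%d_MAC" % n, "")
          nics.append((ip, mac))
      return nics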


OP_INSTANCE_CREATE
++++++++++++++++++

Creates a new instance.

:directory: instance-add
:env. vars: ADD_MODE, SRC_NODE, SRC_PATH, SRC_IMAGES
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REINSTALL
+++++++++++++++++++++

Reinstalls an instance.

:directory: instance-reinstall
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_BACKUP_EXPORT
++++++++++++++++

Exports the instance.

:directory: instance-export
:env. vars: EXPORT_MODE, EXPORT_NODE, EXPORT_DO_SHUTDOWN, REMOVE_INSTANCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_STARTUP
+++++++++++++++++++

Starts an instance.

:directory: instance-start
:env. vars: FORCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SHUTDOWN
++++++++++++++++++++

Stops an instance.

:directory: instance-stop
:env. vars: TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REBOOT
++++++++++++++++++

Reboots an instance.

:directory: instance-reboot
:env. vars: IGNORE_SECONDARIES, REBOOT_TYPE, SHUTDOWN_TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SET_PARAMS
++++++++++++++++++++++

Modifies the instance parameters.

:directory: instance-modify
:env. vars: NEW_DISK_TEMPLATE, RUNTIME_MEMORY
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_FAILOVER
++++++++++++++++++++

Fails over an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before failover.

:directory: instance-failover
:env. vars: IGNORE_CONSISTENCY, SHUTDOWN_TIMEOUT, OLD_PRIMARY,
            OLD_SECONDARY, NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, secondary node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MIGRATE
++++++++++++++++++++

Migrates an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before migration.

:directory: instance-migrate
:env. vars: MIGRATE_LIVE, MIGRATE_CLEANUP, OLD_PRIMARY, OLD_SECONDARY,
            NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes


OP_INSTANCE_REMOVE
++++++++++++++++++

Removes an instance.

:directory: instance-remove
:env. vars: SHUTDOWN_TIMEOUT
:pre-execution: master node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_GROW_DISK
+++++++++++++++++++++

Grows the disk of an instance.

:directory: disk-grow
:env. vars: DISK, AMOUNT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_RENAME
++++++++++++++++++

Renames an instance.

:directory: instance-rename
:env. vars: INSTANCE_NEW_NAME
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MOVE
++++++++++++++++

Moves an instance by data-copying.

:directory: instance-move
:env. vars: TARGET_NODE, SHUTDOWN_TIMEOUT
:pre-execution: master node, primary and target nodes
:post-execution: master node, primary and target nodes

OP_INSTANCE_RECREATE_DISKS
++++++++++++++++++++++++++

Recreates an instance's missing disks.

:directory: instance-recreate-disks
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REPLACE_DISKS
+++++++++++++++++++++++++

Replaces the disks of an instance.

:directory: mirrors-replace
:env. vars: MODE, NEW_SECONDARY, OLD_SECONDARY
:pre-execution: master node, primary and new secondary nodes
:post-execution: master node, primary and new secondary nodes

OP_INSTANCE_CHANGE_GROUP
++++++++++++++++++++++++

Moves an instance to another group.

:directory: instance-change-group
:env. vars: TARGET_GROUPS
:pre-execution: master node
:post-execution: master node


Cluster operations
~~~~~~~~~~~~~~~~~~

OP_CLUSTER_POST_INIT
++++++++++++++++++++

This hook is called via a special "empty" LU right after cluster
initialization.

:directory: cluster-init
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_DESTROY
++++++++++++++++++

The post phase of this hook is called during the execution of the
destroy operation and not after its completion.

:directory: cluster-destroy
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_VERIFY_GROUP
+++++++++++++++++++++++

Verifies all nodes in a group. This is a special LU with regard to
hooks, as the result of the opcode will be combined with the result of
post-execution hooks, in order to allow administrators to enhance the
cluster verification procedure.

:directory: cluster-verify
:env. vars: CLUSTER, MASTER, CLUSTER_TAGS, NODE_TAGS_<name>
:pre-execution: none
:post-execution: all nodes in a group
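
As an illustration, a site could hang its own checks off this hook; a
minimal sketch with a placeholder check (the check itself is made up,
and how the hook's output and exit code surface in the verification
report is determined by Ganeti when it combines the results)::

  #!/usr/bin/env python
  # cluster-verify-post.d/90-site-checks: example extra verification.
  import os
  import sys

  def main():
      problems = []
      # Placeholder site-specific check: warn if no node tag variables
      # were passed to this hook.
      node_tag_vars = [k for k in os.environ
                       if k.startswith("GANETI_NODE_TAGS_")]
      if not node_tag_vars:
          problems.append("no node tags defined")
      for problem in problems:
          sys.stdout.write("site-check: %s\n" % problem)
      return 1 if problems else 0

  if __name__ == "__main__":
      sys.exit(main())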

OP_CLUSTER_RENAME
+++++++++++++++++

Renames the cluster.

:directory: cluster-rename
:env. vars: NEW_NAME
:pre-execution: master node
:post-execution: master node

OP_CLUSTER_SET_PARAMS
+++++++++++++++++++++

Modifies the cluster parameters.

:directory: cluster-modify
:env. vars: NEW_VG_NAME
:pre-execution: master node
:post-execution: master node

Virtual operation :pyeval:`constants.FAKE_OP_MASTER_TURNUP`
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This doesn't correspond to an actual op-code, but it is called when the
master IP is activated.

:directory: master-ip-turnup
:env. vars: MASTER_NETDEV, MASTER_IP, MASTER_NETMASK, CLUSTER_IP_VERSION
:pre-execution: master node
:post-execution: master node

Virtual operation :pyeval:`constants.FAKE_OP_MASTER_TURNDOWN`
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This doesn't correspond to an actual op-code, but it is called when the
master IP is deactivated.

:directory: master-ip-turndown
:env. vars: MASTER_NETDEV, MASTER_IP, MASTER_NETMASK, CLUSTER_IP_VERSION
:pre-execution: master node
:post-execution: master node


Obsolete operations
~~~~~~~~~~~~~~~~~~~

The following operations are no longer present or don't execute hooks
anymore in Ganeti 2.0:

- OP_INIT_CLUSTER
- OP_MASTER_FAILOVER
- OP_INSTANCE_ADD_MDDRBD
- OP_INSTANCE_REMOVE_MDDRBD


Environment variables
---------------------

Note that all variables listed here are actually prefixed with *GANETI_*
in order to provide a clear namespace. In addition, post-execution
scripts receive another set of variables, prefixed with *GANETI_POST_*,
representing the status after the opcode executed.

Common variables
~~~~~~~~~~~~~~~~

This is the list of environment variables supported by all operations:

HOOKS_VERSION
  Documents the hooks interface version. In case this doesn't match
  what the script expects, it should not run (a version check is
  sketched after this list). This document conforms to version 2.

HOOKS_PHASE
  One of *PRE* or *POST* denoting which phase we are in.

CLUSTER
  The cluster name.

MASTER
  The master node.

OP_CODE
  One of the *OP_* values from the list of operations.

OBJECT_TYPE
  One of ``INSTANCE``, ``NODE``, ``CLUSTER``.

DATA_DIR
  The path to the Ganeti configuration directory (to read, for
  example, the *ssconf* files).
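
A hook can use these common variables for a defensive preamble; a
minimal sketch of the version check mentioned above, plus branching on
the phase::

  import os
  import sys

  EXPECTED_HOOKS_VERSION = "2"

  def main():
      version = os.environ.get("GANETI_HOOKS_VERSION")
      if version != EXPECTED_HOOKS_VERSION:
          # Unknown interface version: do nothing.  A stricter site
          # might prefer to exit non-zero here in the pre phase.
          sys.stderr.write("unexpected hooks version %r\n" % version)
          return 0
      # The example environment below shows lowercase phase values.
      phase = os.environ.get("GANETI_HOOKS_PHASE", "").lower()
      op_code = os.environ.get("GANETI_OP_CODE", "UNKNOWN_OP")
      if phase == "pre":
          # Site-specific checks go here; a non-zero exit in the pre
          # phase makes Ganeti abort the operation.
          return 0
      # Post phase: the exit code is only logged, so just report.
      sys.stdout.write("completed %s\n" % op_code)
      return 0

  if __name__ == "__main__":
      sys.exit(main())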


Specialised variables
~~~~~~~~~~~~~~~~~~~~~

This is the list of variables which are specific to one or more
operations.

CLUSTER_IP_VERSION
  IP version of the master IP (4 or 6).

INSTANCE_NAME
  The name of the instance which is the target of the operation.

INSTANCE_BE_x,y,z,...
  Instance BE params. There is one variable per BE param. For
  instance, GANETI_INSTANCE_BE_auto_balance.

INSTANCE_DISK_TEMPLATE
  The disk type for the instance.

NEW_DISK_TEMPLATE
  The new disk type for the instance.

INSTANCE_DISK_COUNT
  The number of disks for the instance.

INSTANCE_DISKn_SIZE
  The size of disk *n* for the instance.

INSTANCE_DISKn_MODE
  Either *rw* for a read-write disk or *ro* for a read-only one.

INSTANCE_HV_x,y,z,...
  Instance hypervisor options. There is one variable per option. For
  instance, GANETI_INSTANCE_HV_use_bootloader.

INSTANCE_HYPERVISOR
  The instance hypervisor.

INSTANCE_NIC_COUNT
  The number of NICs for the instance.

INSTANCE_NICn_BRIDGE
  The bridge to which the *n*-th NIC of the instance is attached.

INSTANCE_NICn_IP
  The IP (if any) of the *n*-th NIC of the instance.

INSTANCE_NICn_MAC
  The MAC address of the *n*-th NIC of the instance.

INSTANCE_NICn_MODE
  The mode of the *n*-th NIC of the instance.

INSTANCE_OS_TYPE
  The name of the instance OS.

INSTANCE_PRIMARY
  The name of the node which is the primary for the instance. Note that
  for migrations/failovers, you shouldn't rely on this variable, since
  the nodes change during the execution; use the
  OLD_PRIMARY/NEW_PRIMARY values instead.

INSTANCE_SECONDARY
  Space-separated list of secondary nodes for the instance. Note that
  for migrations/failovers, you shouldn't rely on this variable, since
  the nodes change during the execution; use the
  OLD_SECONDARY/NEW_SECONDARY values instead.

INSTANCE_MEMORY
  The memory size (in MiBs) of the instance.

INSTANCE_VCPUS
  The number of virtual CPUs for the instance.

INSTANCE_STATUS
  The run status of the instance.

MASTER_CAPABLE
  Whether a node is capable of being promoted to master.

VM_CAPABLE
  Whether the node can host instances.

MASTER_NETDEV
  Network device of the master IP.

MASTER_IP
  The master IP.

MASTER_NETMASK
  Netmask of the master IP.

INSTANCE_TAGS
  A space-delimited list of the instance's tags.

NODE_NAME
  The target node of this operation (not the node on which the hook
  runs).

NODE_PIP
  The primary IP of the target node (the one over which inter-node
  communication is done).

NODE_SIP
  The secondary IP of the target node (the one over which drbd
  replication is done). This can be equal to the primary IP, in case
  the cluster is not dual-homed.

FORCE
  This is provided by some operations when the user gives this flag.

IGNORE_CONSISTENCY
  The user has specified this flag. It is used when failing over
  instances in case the primary node is down.

ADD_MODE
  The mode of the instance creation: either *create* for creating from
  scratch or *import* for restoring from an exported image.

SRC_NODE, SRC_PATH, SRC_IMAGE
  In case the instance has been added by import, these variables are
  defined and point to the source node, source path (the directory
  containing the image and the config file) and the source disk image
  file.

NEW_SECONDARY
  The name of the node on which the new mirror component is being
  added (for replace disk). This can be the name of the current
  secondary, if the new mirror is on the same secondary. For
  migrations/failovers, this is the old primary node.

OLD_SECONDARY
  The name of the old secondary in the replace-disks command. Note that
  this can be equal to the new secondary if the secondary node hasn't
  actually changed. For migrations/failovers, this is the new primary
  node.

OLD_PRIMARY, NEW_PRIMARY
  For migrations/failovers, the old and respectively new primary
  nodes. These two mirror the NEW_SECONDARY/OLD_SECONDARY variables.

EXPORT_MODE
  The instance export mode. Either "remote" or "local".

EXPORT_NODE
  The node on which the export of the instance was done.

EXPORT_DO_SHUTDOWN
  This variable tells whether the instance was shut down while doing
  the export. In the "was shut down" case, it's likely that the
  filesystem is consistent, whereas in the "was not shut down" case,
  the filesystem would need a check (journal replay or full fsck) in
  order to guarantee consistency.

REMOVE_INSTANCE
  Whether the instance was removed from the node.

SHUTDOWN_TIMEOUT
  Amount of time to wait for the instance to shut down.

TIMEOUT
  Amount of time to wait before aborting the op.

OLD_NAME, NEW_NAME
  Old/new name of the node group.

GROUP_NAME
  The name of the node group.

NEW_ALLOC_POLICY
  The new allocation policy for the node group.

CLUSTER_TAGS
  The list of cluster tags, space separated.

NODE_TAGS_<name>
  The list of tags for node *<name>*, space separated.

Examples
--------

The startup of an instance will pass this environment to the hook
script::

  GANETI_CLUSTER=cluster1.example.com
  GANETI_DATA_DIR=/var/lib/ganeti
  GANETI_FORCE=False
  GANETI_HOOKS_PATH=instance-start
  GANETI_HOOKS_PHASE=post
  GANETI_HOOKS_VERSION=2
  GANETI_INSTANCE_DISK0_MODE=rw
  GANETI_INSTANCE_DISK0_SIZE=128
  GANETI_INSTANCE_DISK_COUNT=1
  GANETI_INSTANCE_DISK_TEMPLATE=drbd
  GANETI_INSTANCE_MEMORY=128
  GANETI_INSTANCE_NAME=instance2.example.com
  GANETI_INSTANCE_NIC0_BRIDGE=xen-br0
  GANETI_INSTANCE_NIC0_IP=
  GANETI_INSTANCE_NIC0_MAC=aa:00:00:a5:91:58
  GANETI_INSTANCE_NIC_COUNT=1
  GANETI_INSTANCE_OS_TYPE=debootstrap
  GANETI_INSTANCE_PRIMARY=node3.example.com
  GANETI_INSTANCE_SECONDARY=node5.example.com
  GANETI_INSTANCE_STATUS=down
  GANETI_INSTANCE_VCPUS=1
  GANETI_MASTER=node1.example.com
  GANETI_OBJECT_TYPE=INSTANCE
  GANETI_OP_CODE=OP_INSTANCE_STARTUP
  GANETI_OP_TARGET=instance2.example.com

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: