Ganeti customisation using hooks
================================

Documents Ganeti version 2.0

.. contents::

Introduction
------------

In order to allow customisation of operations, Ganeti runs scripts
under ``/etc/ganeti/hooks`` based on certain rules.

This is similar to the ``/etc/network/`` structure present in Debian
for network interface handling.

Organisation
------------

For every operation, two sets of scripts are run:

- *pre* phase (for authorisation/checking)
- *post* phase (for logging)

Also, for each operation, the scripts are run on one or more nodes,
depending on the operation type.

Note that, even though we call them scripts, we are actually talking
about any executable.

*pre* scripts
~~~~~~~~~~~~~

The *pre* scripts have a definite target: to check that the operation
is allowed given the site-specific constraints. You could have, for
example, a rule that says every new instance is required to exist in
a database; to implement this, you could write a script that checks
the new instance parameters against your database.

The objective of these scripts should be their return code (zero or
non-zero for success and failure). However, if they modify the
environment in any way, they should be idempotent, as failed
executions could be restarted and thus the script(s) run again with
exactly the same parameters.

Note that if a node is unreachable at the time a hook is run, this
will not be interpreted as a deny for the execution. In other words,
only an actual error returned from a script will cause an abort, and
not an unreachable node.

Therefore, if you want to guarantee that a hook script is run and
denies an action, it's best to put it on the master node.
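
As a sketch of such a *pre* script (the allowlist and every name other
than GANETI_INSTANCE_NAME are assumptions for illustration, not part of
Ganeti):

```shell
#!/bin/sh
# Hypothetical pre hook: allow an instance operation only if the
# instance name appears on a site-local allowlist (a stand-in for the
# database check described above). Only GANETI_INSTANCE_NAME comes from
# Ganeti; everything else here is illustrative.

ALLOWED_INSTANCES="instance1.example.com instance2.example.com"

is_allowed() {
    # Exit status 0 if $1 is in ALLOWED_INSTANCES, 1 otherwise.
    for name in $ALLOWED_INSTANCES; do
        if [ "$name" = "$1" ]; then
            return 0
        fi
    done
    return 1
}

# A pre hook talks to Ganeti only through its exit code: 0 allows the
# operation, non-zero aborts it.
if is_allowed "${GANETI_INSTANCE_NAME:-instance1.example.com}"; then
    echo "allowed"
else
    echo "denied"
fi
```

Because such a script may be re-run after a failed execution, a
read-only check like this is naturally idempotent.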

*post* scripts
~~~~~~~~~~~~~~

These scripts should do whatever you need as a reaction to the
completion of an operation. Their return code is not checked (but
logged), and they should not depend on the fact that the *pre* scripts
have been run.
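
A minimal *post* script in this spirit might just record what happened;
the log file location and the sample variable values are assumptions
for illustration:

```shell
#!/bin/sh
# Hypothetical post hook: append one line per completed operation.
# In a real hook GANETI_OP_CODE and GANETI_OBJECT_TYPE are set by
# Ganeti; they are exported here only so the example is runnable.

export GANETI_OP_CODE=OP_INSTANCE_STARTUP
export GANETI_OBJECT_TYPE=INSTANCE

LOGFILE=$(mktemp)   # a real hook would use a fixed path instead

log_op() {
    # Timestamped record of the opcode and the object it acted on.
    printf '%s %s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
        "$GANETI_OP_CODE" "$GANETI_OBJECT_TYPE" >> "$LOGFILE"
}

log_op
cat "$LOGFILE"
```

Whatever such a script prints ends up in the files stdout/stderr are
redirected to, and its exit code is only logged, so a failure here
never blocks the operation.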

Naming
~~~~~~

The allowed names for the scripts consist of (similar to *run-parts*)
upper and lower case letters, digits, underscores and hyphens; in
other words, they must match the regexp ``^[a-zA-Z0-9_-]+$``. Also,
non-executable scripts will be ignored.
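
The naming rule can be checked mechanically; a small sketch:

```shell
#!/bin/sh
# Check candidate hook names against the rule above: only letters,
# digits, underscores and hyphens are accepted.

valid_hook_name() {
    printf '%s' "$1" | grep -Eq '^[a-zA-Z0-9_-]+$'
}

for name in 10-check-db DB_check 'backup.sh' 'has space'; do
    if valid_hook_name "$name"; then
        echo "$name: would run"
    else
        echo "$name: ignored"
    fi
done
```

Note that ``backup.sh`` is rejected: the dot is not in the allowed set,
a common surprise when copying scripts into the hooks directory.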

Order of execution
~~~~~~~~~~~~~~~~~~

On a single node, the scripts in a directory are run in lexicographic
order (more exactly, the Python string comparison order). It is
advisable to implement the usual *NN-name* convention where *NN* is a
two-digit number.

For an operation whose hooks are run on multiple nodes, there is no
specific ordering of nodes with regard to hooks execution; you should
assume that the scripts are run in parallel on the target nodes
(keeping on each node the ordering specified above). If you need any
kind of inter-node synchronisation, you have to implement it yourself
in the scripts.
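
The reason for the two-digit prefix is easy to demonstrate with plain
string sorting (``LC_ALL=C`` approximates the Python string comparison
order):

```shell
#!/bin/sh
# String comparison sorts "10-late" before "9-early", because '1' < '9';
# zero-padding the prefix restores the intended order.
export LC_ALL=C

echo "unpadded:"
printf '%s\n' 9-early 10-late | sort

echo "padded:"
printf '%s\n' 09-early 10-late | sort
```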

Execution environment
~~~~~~~~~~~~~~~~~~~~~

The scripts will be run as follows:

- no command line arguments
- no controlling *tty*
- stdin is actually */dev/null*
- stdout and stderr are directed to files
- PATH is reset to ``/sbin:/bin:/usr/sbin:/usr/bin``
- the environment is cleared, and only Ganeti-specific variables will
  be left

All information about the cluster is passed using environment
variables. Different operations will have slightly different
environments, but most of the variables are common.
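
Since a hook receives no arguments and no stdin, everything it needs
arrives via the environment. A first debugging hook can simply capture
that environment (the two exports stand in for what Ganeti would set):

```shell
#!/bin/sh
# Print the Ganeti-provided part of the environment, sorted. The two
# sample exports below replace what Ganeti itself would provide.
export GANETI_HOOKS_PHASE=post
export GANETI_OP_CODE=OP_INSTANCE_STARTUP

dump_ganeti_env() {
    env | grep '^GANETI_' | sort
}

dump_ganeti_env
```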

Operation list
--------------

Node operations
~~~~~~~~~~~~~~~

OP_NODE_ADD
+++++++++++

Adds a node to the cluster.

:directory: node-add
:env. vars: NODE_NAME, NODE_PIP, NODE_SIP, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: all existing nodes
:post-execution: all nodes plus the new node

OP_NODE_REMOVE
++++++++++++++

Removes a node from the cluster. On the removed node the hooks are
called during the execution of the operation and not after its
completion.

:directory: node-remove
:env. vars: NODE_NAME
:pre-execution: all existing nodes except the removed node
:post-execution: all existing nodes

OP_NODE_SET_PARAMS
++++++++++++++++++

Changes a node's parameters.

:directory: node-modify
:env. vars: MASTER_CANDIDATE, OFFLINE, DRAINED, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: master node, the target node
:post-execution: master node, the target node

OP_NODE_EVACUATE
++++++++++++++++

Relocate secondary instances from a node.

:directory: node-evacuate
:env. vars: NEW_SECONDARY, NODE_NAME
:pre-execution: master node, target node
:post-execution: master node, target node

OP_NODE_MIGRATE
+++++++++++++++

Migrates all primary instances from a node.

:directory: node-migrate
:env. vars: NODE_NAME
:pre-execution: master node
:post-execution: master node

Node group operations
~~~~~~~~~~~~~~~~~~~~~

OP_GROUP_ADD
++++++++++++

Adds a node group to the cluster.

:directory: group-add
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_SET_PARAMS
+++++++++++++++++++

Changes a node group's parameters.

:directory: group-modify
:env. vars: GROUP_NAME, NEW_ALLOC_POLICY
:pre-execution: master node
:post-execution: master node

OP_GROUP_REMOVE
+++++++++++++++

Removes a node group from the cluster. Since the node group must be
empty for removal to succeed, the concept of "nodes in the group" does
not exist, and the hook is only executed on the master node.

:directory: group-remove
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_RENAME
+++++++++++++++

Renames a node group.

:directory: group-rename
:env. vars: OLD_NAME, NEW_NAME
:pre-execution: master node and all nodes in the group
:post-execution: master node and all nodes in the group

Instance operations
~~~~~~~~~~~~~~~~~~~

All instance operations take at least the following variables:
INSTANCE_NAME, INSTANCE_PRIMARY, INSTANCE_SECONDARY,
INSTANCE_OS_TYPE, INSTANCE_DISK_TEMPLATE, INSTANCE_MEMORY,
INSTANCE_DISK_SIZES, INSTANCE_VCPUS, INSTANCE_NIC_COUNT,
INSTANCE_NICn_IP, INSTANCE_NICn_BRIDGE, INSTANCE_NICn_MAC,
INSTANCE_DISK_COUNT, INSTANCE_DISKn_SIZE, INSTANCE_DISKn_MODE.

The INSTANCE_NICn_* and INSTANCE_DISKn_* variables represent the
properties of the *n*-th NIC and disk, and are zero-indexed.
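
Because plain shell has no arrays, iterating over these indexed
variables takes a little indirection; the disk sizes below are sample
values standing in for what Ganeti would export:

```shell
#!/bin/sh
# Walk the zero-indexed per-disk variables of an instance. The three
# exports are sample data; in a real hook, Ganeti sets them (with the
# GANETI_ prefix, as described under "Environment variables").
export GANETI_INSTANCE_DISK_COUNT=2
export GANETI_INSTANCE_DISK0_SIZE=128
export GANETI_INSTANCE_DISK1_SIZE=512

list_disks() {
    i=0
    while [ "$i" -lt "$GANETI_INSTANCE_DISK_COUNT" ]; do
        # eval resolves the variable name built from the index.
        eval "size=\$GANETI_INSTANCE_DISK${i}_SIZE"
        echo "disk $i size: $size"
        i=$((i + 1))
    done
}

list_disks
```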

OP_INSTANCE_CREATE
++++++++++++++++++

Creates a new instance.

:directory: instance-add
:env. vars: ADD_MODE, SRC_NODE, SRC_PATH, SRC_IMAGES
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REINSTALL
+++++++++++++++++++++

Reinstalls an instance.

:directory: instance-reinstall
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_BACKUP_EXPORT
++++++++++++++++

Exports the instance.

:directory: instance-export
:env. vars: EXPORT_MODE, EXPORT_NODE, EXPORT_DO_SHUTDOWN, REMOVE_INSTANCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_STARTUP
+++++++++++++++++++

Starts an instance.

:directory: instance-start
:env. vars: FORCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SHUTDOWN
++++++++++++++++++++

Stops an instance.

:directory: instance-stop
:env. vars: TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REBOOT
++++++++++++++++++

Reboots an instance.

:directory: instance-reboot
:env. vars: IGNORE_SECONDARIES, REBOOT_TYPE, SHUTDOWN_TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SET_PARAMS
++++++++++++++++++++++

Modifies the instance parameters.

:directory: instance-modify
:env. vars: NEW_DISK_TEMPLATE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_FAILOVER
++++++++++++++++++++

Fails over an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before failover.

:directory: instance-failover
:env. vars: IGNORE_CONSISTENCY, SHUTDOWN_TIMEOUT, OLD_PRIMARY, OLD_SECONDARY, NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, secondary node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MIGRATE
+++++++++++++++++++

Migrates an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before migration.

:directory: instance-migrate
:env. vars: MIGRATE_LIVE, MIGRATE_CLEANUP, OLD_PRIMARY, OLD_SECONDARY, NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, secondary node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REMOVE
++++++++++++++++++

Removes an instance.

:directory: instance-remove
:env. vars: SHUTDOWN_TIMEOUT
:pre-execution: master node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_GROW_DISK
+++++++++++++++++++++

Grows a disk of an instance.

:directory: disk-grow
:env. vars: DISK, AMOUNT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_RENAME
++++++++++++++++++

Renames an instance.

:directory: instance-rename
:env. vars: INSTANCE_NEW_NAME
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MOVE
++++++++++++++++

Moves an instance by data-copying.

:directory: instance-move
:env. vars: TARGET_NODE, SHUTDOWN_TIMEOUT
:pre-execution: master node, primary and target nodes
:post-execution: master node, primary and target nodes

OP_INSTANCE_RECREATE_DISKS
++++++++++++++++++++++++++

Recreates an instance's missing disks.

:directory: instance-recreate-disks
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REPLACE_DISKS
+++++++++++++++++++++++++

Replaces the disks of an instance.

:directory: mirrors-replace
:env. vars: MODE, NEW_SECONDARY, OLD_SECONDARY
:pre-execution: master node, primary and new secondary nodes
:post-execution: master node, primary and new secondary nodes

Cluster operations
~~~~~~~~~~~~~~~~~~

OP_CLUSTER_POST_INIT
++++++++++++++++++++

This hook is called via a special "empty" LU right after cluster
initialization.

:directory: cluster-init
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_DESTROY
++++++++++++++++++

The post phase of this hook is called during the execution of the
destroy operation and not after its completion.

:directory: cluster-destroy
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_VERIFY_GROUP
+++++++++++++++++++++++

Verifies all nodes in a group. This is a special LU with regard to
hooks, as the result of the opcode will be combined with the result of
post-execution hooks, in order to allow administrators to enhance the
cluster verification procedure.

:directory: cluster-verify
:env. vars: CLUSTER, MASTER, CLUSTER_TAGS, NODE_TAGS_<name>
:pre-execution: none
:post-execution: all nodes in a group

OP_CLUSTER_RENAME
+++++++++++++++++

Renames the cluster.

:directory: cluster-rename
:env. vars: NEW_NAME
:pre-execution: master node
:post-execution: master node

OP_CLUSTER_SET_PARAMS
+++++++++++++++++++++

Modifies the cluster parameters.

:directory: cluster-modify
:env. vars: NEW_VG_NAME
:pre-execution: master node
:post-execution: master node

Obsolete operations
~~~~~~~~~~~~~~~~~~~

The following operations are no longer present, or don't execute hooks
anymore, in Ganeti 2.0:

- OP_INIT_CLUSTER
- OP_MASTER_FAILOVER
- OP_INSTANCE_ADD_MDDRBD
- OP_INSTANCE_REMOVE_MDDRBD

Environment variables
---------------------

Note that all variables listed here are actually prefixed with *GANETI_*
in order to provide a clear namespace. In addition, post-execution
scripts receive another set of variables, prefixed with *GANETI_POST_*,
representing the status after the opcode has executed.

Common variables
~~~~~~~~~~~~~~~~

This is the list of environment variables supported by all operations:

HOOKS_VERSION
  Documents the hooks interface version. In case this doesn't match
  what the script expects, it should not run. This document conforms
  to version 2.
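
A defensive hook can turn that recommendation into code; the sample
export mimics what Ganeti would set:

```shell
#!/bin/sh
# Refuse to do anything when the hooks interface version is not the one
# this script was written against. GANETI_HOOKS_VERSION is normally set
# by Ganeti; it is exported here only to make the sketch runnable.
export GANETI_HOOKS_VERSION=2

EXPECTED_VERSION=2

version_ok() {
    [ "${GANETI_HOOKS_VERSION:-0}" = "$EXPECTED_VERSION" ]
}

if version_ok; then
    echo "hooks interface version $GANETI_HOOKS_VERSION, proceeding"
else
    echo "unexpected hooks interface version, doing nothing" >&2
fi
```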

HOOKS_PHASE
  One of *PRE* or *POST*, denoting which phase we are in.

CLUSTER
  The cluster name.

MASTER
  The master node.

OP_CODE
  One of the *OP_* values from the list of operations.

OBJECT_TYPE
  One of ``INSTANCE``, ``NODE``, ``CLUSTER``.

DATA_DIR
  The path to the Ganeti configuration directory (to read, for
  example, the *ssconf* files).

Specialised variables
~~~~~~~~~~~~~~~~~~~~~

This is the list of variables which are specific to one or more
operations.

INSTANCE_NAME
  The name of the instance which is the target of the operation.

INSTANCE_BE_x,y,z,...
  Instance BE params. There is one variable per BE param. For
  instance, GANETI_INSTANCE_BE_auto_balance.

INSTANCE_DISK_TEMPLATE
  The disk type for the instance.

NEW_DISK_TEMPLATE
  The new disk type for the instance.

INSTANCE_DISK_COUNT
  The number of disks for the instance.

INSTANCE_DISKn_SIZE
  The size of disk *n* for the instance.

INSTANCE_DISKn_MODE
  Either *rw* for a read-write disk or *ro* for a read-only one.

INSTANCE_HV_x,y,z,...
  Instance hypervisor options. There is one variable per option. For
  instance, GANETI_INSTANCE_HV_use_bootloader.

INSTANCE_HYPERVISOR
  The instance hypervisor.

INSTANCE_NIC_COUNT
  The number of NICs for the instance.

INSTANCE_NICn_BRIDGE
  The bridge to which the *n*-th NIC of the instance is attached.

INSTANCE_NICn_IP
  The IP (if any) of the *n*-th NIC of the instance.

INSTANCE_NICn_MAC
  The MAC address of the *n*-th NIC of the instance.

INSTANCE_NICn_MODE
  The mode of the *n*-th NIC of the instance.

INSTANCE_OS_TYPE
  The name of the instance OS.

INSTANCE_PRIMARY
  The name of the node which is the primary for the instance. Note that
  for migrations/failovers, you shouldn't rely on this variable, since
  the nodes change during the execution, but on the
  OLD_PRIMARY/NEW_PRIMARY values.

INSTANCE_SECONDARY
  Space-separated list of secondary nodes for the instance. Note that
  for migrations/failovers, you shouldn't rely on this variable, since
  the nodes change during the execution, but on the
  OLD_SECONDARY/NEW_SECONDARY values.

INSTANCE_MEMORY
  The memory size (in MiB) of the instance.

INSTANCE_VCPUS
  The number of virtual CPUs for the instance.

INSTANCE_STATUS
  The run status of the instance.

MASTER_CAPABLE
  Whether a node is capable of being promoted to master.

VM_CAPABLE
  Whether the node can host instances.

NODE_NAME
  The target node of this operation (not the node on which the hook
  runs).

NODE_PIP
  The primary IP of the target node (the one over which inter-node
  communication is done).

NODE_SIP
  The secondary IP of the target node (the one over which DRBD
  replication is done). This can be equal to the primary IP, in case
  the cluster is not dual-homed.

FORCE
  This is provided by some operations when the user gave this flag.

IGNORE_CONSISTENCY
  The user has specified this flag. It is used when failing over
  instances in case the primary node is down.

ADD_MODE
  The mode of the instance create: either *create* for create from
  scratch or *import* for restoring from an exported image.

SRC_NODE, SRC_PATH, SRC_IMAGE
  In case the instance has been added by import, these variables are
  defined and point to the source node, source path (the directory
  containing the image and the config file) and the source disk image
  file.

NEW_SECONDARY
  The name of the node on which the new mirror component is being
  added (for replace disk). This can be the name of the current
  secondary, if the new mirror is on the same secondary. For
  migrations/failovers, this is the old primary node.

OLD_SECONDARY
  The name of the old secondary in the replace-disks command. Note that
  this can be equal to the new secondary if the secondary node hasn't
  actually changed. For migrations/failovers, this is the new primary
  node.

OLD_PRIMARY, NEW_PRIMARY
  For migrations/failovers, the old and, respectively, new primary
  nodes. These two mirror the NEW_SECONDARY/OLD_SECONDARY variables.

EXPORT_MODE
  The instance export mode: either "remote" or "local".

EXPORT_NODE
  The node on which the exported image of the instance was written.

EXPORT_DO_SHUTDOWN
  This variable tells if the instance has been shut down or not while
  doing the export. In the "was shut down" case, it's likely that the
  filesystem is consistent, whereas in the "did not shut down" case,
  the filesystem would need a check (journal replay or full fsck) in
  order to guarantee consistency.

REMOVE_INSTANCE
  Whether the instance was removed from the node.

SHUTDOWN_TIMEOUT
  Amount of time to wait for the instance to shut down.

TIMEOUT
  Amount of time to wait before aborting the op.

OLD_NAME, NEW_NAME
  Old/new name of the node group.

GROUP_NAME
  The name of the node group.

NEW_ALLOC_POLICY
  The new allocation policy for the node group.

CLUSTER_TAGS
  The list of cluster tags, space separated.

NODE_TAGS_<name>
  The list of tags for node *<name>*, space separated.

Examples
--------

The startup of an instance will pass this environment to the hook
script::

  GANETI_CLUSTER=cluster1.example.com
  GANETI_DATA_DIR=/var/lib/ganeti
  GANETI_FORCE=False
  GANETI_HOOKS_PATH=instance-start
  GANETI_HOOKS_PHASE=post
  GANETI_HOOKS_VERSION=2
  GANETI_INSTANCE_DISK0_MODE=rw
  GANETI_INSTANCE_DISK0_SIZE=128
  GANETI_INSTANCE_DISK_COUNT=1
  GANETI_INSTANCE_DISK_TEMPLATE=drbd
  GANETI_INSTANCE_MEMORY=128
  GANETI_INSTANCE_NAME=instance2.example.com
  GANETI_INSTANCE_NIC0_BRIDGE=xen-br0
  GANETI_INSTANCE_NIC0_IP=
  GANETI_INSTANCE_NIC0_MAC=aa:00:00:a5:91:58
  GANETI_INSTANCE_NIC_COUNT=1
  GANETI_INSTANCE_OS_TYPE=debootstrap
  GANETI_INSTANCE_PRIMARY=node3.example.com
  GANETI_INSTANCE_SECONDARY=node5.example.com
  GANETI_INSTANCE_STATUS=down
  GANETI_INSTANCE_VCPUS=1
  GANETI_MASTER=node1.example.com
  GANETI_OBJECT_TYPE=INSTANCE
  GANETI_OP_CODE=OP_INSTANCE_STARTUP
  GANETI_OP_TARGET=instance2.example.com

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: