.. doc/hooks.rst @ 8ac5c5d7

Ganeti customisation using hooks
================================

Documents Ganeti version 2.0

.. contents::

Introduction
------------

To allow customisation of operations, Ganeti runs scripts under
``/etc/ganeti/hooks``, based on certain rules.

This is similar to the ``/etc/network/`` structure present in Debian
for network interface handling.

Organisation
------------

For every operation, two sets of scripts are run:

- pre phase (for authorization/checking)
- post phase (for logging)

Also, for each operation, the scripts are run on one or more nodes,
depending on the operation type.

Note that, even though we call them scripts, we are actually talking
about any kind of executable.

*pre* scripts
~~~~~~~~~~~~~

The *pre* scripts have a definite target: to check that the operation
is allowed given the site-specific constraints. You could have, for
example, a rule that says every new instance is required to exist in
a database; to implement this, you could write a script that checks
the new instance parameters against your database.

The objective of these scripts should be their return code (zero or
non-zero for success and failure). However, if they modify the
environment in any way, they should be idempotent, as failed
executions could be restarted and thus the script(s) run again with
exactly the same parameters.

Note that if a node is unreachable at the time a hook is run, this
will not be interpreted as a deny for the execution. In other words,
only an actual error returned from a script will cause an abort, and
not an unreachable node.

Therefore, if you want to guarantee that a hook script is run and
denies an action, it's best to put it on the master node.
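
As a sketch of such a *pre* script, the following hypothetical hook
denies an operation unless the target instance appears in a locally
maintained whitelist file. The whitelist path and the policy itself
are assumptions made for this example, not anything Ganeti defines:

```shell
#!/bin/sh
# Hypothetical pre hook: allow the operation only when the target
# instance is listed in a site-local whitelist file. The default
# path below is an example, not a Ganeti-defined location.

hook_main() {
    # If we are not in the pre phase (or not run by Ganeti at all),
    # do nothing: post hooks must never block anything.
    [ "${GANETI_HOOKS_PHASE:-post}" = "pre" ] || return 0

    # A pure check modifies nothing, so re-running it after a
    # restarted execution is trivially idempotent.
    if grep -qx "${GANETI_INSTANCE_NAME:-}" \
            "${WHITELIST:-/etc/ganeti/allowed-instances}" 2>/dev/null
    then
        return 0        # zero: allow the operation
    fi
    echo "instance ${GANETI_INSTANCE_NAME:-?} not whitelisted" >&2
    return 1            # non-zero: abort the operation
}

hook_main
```

Such a script would be installed, executable, in the relevant hooks
directory (e.g. the one for *instance-add*) on the master node, since
only the master gives the run-and-deny guarantee discussed above.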

*post* scripts
~~~~~~~~~~~~~~

These scripts should do whatever you need as a reaction to the
completion of an operation. Their return code is not checked (but
logged), and they should not depend on the fact that the *pre* scripts
have been run.
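
A minimal *post* script might simply record what happened; the log
file location below is an assumption made for the sketch:

```shell
#!/bin/sh
# Sketch of a post hook: append one line per completed operation.
# The default log path is an example choice, not a Ganeti default.

log_operation() {
    printf '%s %s %s\n' \
        "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" \
        "${GANETI_OP_CODE:-unknown}" \
        "${GANETI_INSTANCE_NAME:--}" \
        >> "${HOOK_LOG:-/var/log/ganeti-hooks.log}"
}

# Only log in the post phase; the return code of post hooks is not
# checked anyway, but being explicit costs nothing.
if [ "${GANETI_HOOKS_PHASE:-}" = "post" ]; then
    log_operation
fi
```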

Naming
~~~~~~

The allowed names for the scripts consist of (similar to *run-parts*)
upper and lower case letters, digits, underscores and hyphens; in
other words, names matching the regexp ``^[a-zA-Z0-9_-]+$``. Also,
non-executable scripts will be ignored.
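
The naming rule can be checked mechanically; a small sketch:

```shell
# Check a candidate file name against the ^[a-zA-Z0-9_-]+$ rule
# described above (run-parts-style names).
is_valid_hook_name() {
    printf '%s' "$1" | grep -Eq '^[a-zA-Z0-9_-]+$'
}
```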

Order of execution
~~~~~~~~~~~~~~~~~~

On a single node, the scripts in a directory are run in lexicographic
order (more exactly, the Python string comparison order). It is
advisable to implement the usual *NN-name* convention where *NN* is a
two digit number.

For an operation whose hooks are run on multiple nodes, there is no
specific ordering of nodes with regard to hooks execution; you should
assume that the scripts are run in parallel on the target nodes
(keeping on each node the above specified ordering). If you need any
kind of inter-node synchronisation, you have to implement it yourself
in the scripts.

Execution environment
~~~~~~~~~~~~~~~~~~~~~

The scripts will be run as follows:

- no command line arguments

- no controlling *tty*

- stdin is actually */dev/null*

- stdout and stderr are directed to files

- PATH is reset to ``/sbin:/bin:/usr/sbin:/usr/bin``

- the environment is cleared, and only ganeti-specific variables will
  be left

All information about the cluster is passed using environment
variables. Different operations will have slightly different
environments, but most of the variables are common.
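
Since hooks run without a terminal and with a scrubbed environment, a
common debugging aid is a hook that simply dumps whatever GANETI_*
variables it received; its stdout ends up in the file Ganeti redirects
it to:

```shell
#!/bin/sh
# Debugging hook sketch: list every GANETI_* variable received,
# sorted for easier comparison between runs.
dump_ganeti_env() {
    env | grep '^GANETI_' | sort
}

dump_ganeti_env
```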

Operation list
--------------

Node operations
~~~~~~~~~~~~~~~

OP_NODE_ADD
+++++++++++

Adds a node to the cluster.

:directory: node-add
:env. vars: NODE_NAME, NODE_PIP, NODE_SIP, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: all existing nodes
:post-execution: all nodes plus the new node

OP_NODE_REMOVE
++++++++++++++

Removes a node from the cluster. On the removed node the hooks are
called during the execution of the operation and not after its
completion.

:directory: node-remove
:env. vars: NODE_NAME
:pre-execution: all existing nodes except the removed node
:post-execution: all existing nodes

OP_NODE_SET_PARAMS
++++++++++++++++++

Changes a node's parameters.

:directory: node-modify
:env. vars: MASTER_CANDIDATE, OFFLINE, DRAINED, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: master node, the target node
:post-execution: master node, the target node

OP_NODE_EVACUATE
++++++++++++++++

Relocates secondary instances from a node.

:directory: node-evacuate
:env. vars: NEW_SECONDARY, NODE_NAME
:pre-execution: master node, target node
:post-execution: master node, target node

OP_NODE_MIGRATE
+++++++++++++++

Relocates instances from the node.

:directory: node-migrate
:env. vars: NODE_NAME
:pre-execution: master node
:post-execution: master node

Node group operations
~~~~~~~~~~~~~~~~~~~~~

OP_GROUP_ADD
++++++++++++

Adds a node group to the cluster.

:directory: group-add
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_SET_PARAMS
+++++++++++++++++++

Changes a node group's parameters.

:directory: group-modify
:env. vars: GROUP_NAME, NEW_ALLOC_POLICY
:pre-execution: master node
:post-execution: master node

OP_GROUP_REMOVE
+++++++++++++++

Removes a node group from the cluster. Since the node group must be
empty for removal to succeed, the concept of "nodes in the group" does
not exist, and the hook is only executed on the master node.

:directory: group-remove
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_RENAME
+++++++++++++++

Renames a node group.

:directory: group-rename
:env. vars: OLD_NAME, NEW_NAME
:pre-execution: master node and all nodes in the group
:post-execution: master node and all nodes in the group

Instance operations
~~~~~~~~~~~~~~~~~~~

All instance operations take at least the following variables:
INSTANCE_NAME, INSTANCE_PRIMARY, INSTANCE_SECONDARY,
INSTANCE_OS_TYPE, INSTANCE_DISK_TEMPLATE, INSTANCE_MEMORY,
INSTANCE_DISK_SIZES, INSTANCE_VCPUS, INSTANCE_NIC_COUNT,
INSTANCE_NICn_IP, INSTANCE_NICn_BRIDGE, INSTANCE_NICn_MAC,
INSTANCE_DISK_COUNT, INSTANCE_DISKn_SIZE, INSTANCE_DISKn_MODE.

The INSTANCE_NICn_* and INSTANCE_DISKn_* variables represent the
properties of the *n*-th NIC and disk, and are zero-indexed.
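
For example, a hook can walk the zero-indexed NIC variables like this
(a sketch in POSIX shell; the variables carry the GANETI_ prefix
described in the environment section below):

```shell
#!/bin/sh
# Iterate over the instance's NICs using INSTANCE_NIC_COUNT and the
# zero-indexed INSTANCE_NICn_* variables (GANETI_ prefix included).
list_nic_macs() {
    i=0
    while [ "$i" -lt "${GANETI_INSTANCE_NIC_COUNT:-0}" ]; do
        # Indirect expansion via eval, since POSIX sh has no arrays.
        eval "mac=\${GANETI_INSTANCE_NIC${i}_MAC:-}"
        printf 'nic%d %s\n' "$i" "$mac"
        i=$((i + 1))
    done
}

list_nic_macs
```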

OP_INSTANCE_CREATE
++++++++++++++++++

Creates a new instance.

:directory: instance-add
:env. vars: ADD_MODE, SRC_NODE, SRC_PATH, SRC_IMAGES
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REINSTALL
+++++++++++++++++++++

Reinstalls an instance.

:directory: instance-reinstall
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_BACKUP_EXPORT
++++++++++++++++

Exports the instance.

:directory: instance-export
:env. vars: EXPORT_MODE, EXPORT_NODE, EXPORT_DO_SHUTDOWN, REMOVE_INSTANCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_STARTUP
+++++++++++++++++++

Starts an instance.

:directory: instance-start
:env. vars: FORCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SHUTDOWN
++++++++++++++++++++

Stops an instance.

:directory: instance-stop
:env. vars: TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REBOOT
++++++++++++++++++

Reboots an instance.

:directory: instance-reboot
:env. vars: IGNORE_SECONDARIES, REBOOT_TYPE, SHUTDOWN_TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SET_PARAMS
++++++++++++++++++++++

Modifies the instance parameters.

:directory: instance-modify
:env. vars: NEW_DISK_TEMPLATE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_FAILOVER
++++++++++++++++++++

Fails over an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before failover.

:directory: instance-failover
:env. vars: IGNORE_CONSISTENCY, SHUTDOWN_TIMEOUT, OLD_PRIMARY, OLD_SECONDARY, NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, secondary node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MIGRATE
+++++++++++++++++++

Migrates an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before migration.

:directory: instance-migrate
:env. vars: MIGRATE_LIVE, MIGRATE_CLEANUP, OLD_PRIMARY, OLD_SECONDARY, NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, secondary node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REMOVE
++++++++++++++++++

Removes an instance.

:directory: instance-remove
:env. vars: SHUTDOWN_TIMEOUT
:pre-execution: master node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_GROW_DISK
+++++++++++++++++++++

Grows the disk of an instance.

:directory: disk-grow
:env. vars: DISK, AMOUNT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_RENAME
++++++++++++++++++

Renames an instance.

:directory: instance-rename
:env. vars: INSTANCE_NEW_NAME
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MOVE
++++++++++++++++

Moves an instance by data-copying.

:directory: instance-move
:env. vars: TARGET_NODE, SHUTDOWN_TIMEOUT
:pre-execution: master node, primary and target nodes
:post-execution: master node, primary and target nodes

OP_INSTANCE_RECREATE_DISKS
++++++++++++++++++++++++++

Recreates an instance's missing disks.

:directory: instance-recreate-disks
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REPLACE_DISKS
+++++++++++++++++++++++++

Replaces the disks of an instance.

:directory: mirrors-replace
:env. vars: MODE, NEW_SECONDARY, OLD_SECONDARY
:pre-execution: master node, primary and new secondary nodes
:post-execution: master node, primary and new secondary nodes

Cluster operations
~~~~~~~~~~~~~~~~~~

OP_CLUSTER_POST_INIT
++++++++++++++++++++

This hook is called via a special "empty" LU right after cluster
initialization.

:directory: cluster-init
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_DESTROY
++++++++++++++++++

The post phase of this hook is called during the execution of the
destroy operation and not after its completion.

:directory: cluster-destroy
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_VERIFY
+++++++++++++++++

Verifies the cluster status. This is a special LU with regard to
hooks, as the result of the opcode will be combined with the result of
post-execution hooks, in order to allow administrators to enhance the
cluster verification procedure.

:directory: cluster-verify
:env. vars: CLUSTER, MASTER, CLUSTER_TAGS, NODE_TAGS_<name>
:pre-execution: none
:post-execution: all nodes
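
Because the stdout of cluster-verify post hooks is merged into the
verification results, a site can surface its own warnings this way.
A sketch; the free-space threshold and the path variable are made-up
example values:

```shell
#!/bin/sh
# Sketch of a cluster-verify post hook: whatever it prints to stdout
# is combined with the "gnt-cluster verify" output. The 1 GiB
# threshold below is an arbitrary example value.
verify_root_space() {
    avail_kib=$(df -Pk "${CHECK_PATH:-/}" | awk 'NR==2 {print $4}')
    if [ "${avail_kib:-0}" -lt 1048576 ]; then
        echo "WARNING: low free space on ${CHECK_PATH:-/} (${avail_kib} KiB)"
    fi
}

verify_root_space
```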

OP_CLUSTER_RENAME
+++++++++++++++++

Renames the cluster.

:directory: cluster-rename
:env. vars: NEW_NAME
:pre-execution: master node
:post-execution: master node

OP_CLUSTER_SET_PARAMS
+++++++++++++++++++++

Modifies the cluster parameters.

:directory: cluster-modify
:env. vars: NEW_VG_NAME
:pre-execution: master node
:post-execution: master node

Obsolete operations
~~~~~~~~~~~~~~~~~~~

The following operations are no longer present or don't execute hooks
anymore in Ganeti 2.0:

- OP_INIT_CLUSTER
- OP_MASTER_FAILOVER
- OP_INSTANCE_ADD_MDDRBD
- OP_INSTANCE_REMOVE_MDDRBD

Environment variables
---------------------

Note that all variables listed here are actually prefixed with
*GANETI_* in order to provide a clear namespace.

Common variables
~~~~~~~~~~~~~~~~

This is the list of environment variables supported by all operations:

HOOKS_VERSION
  Documents the hooks interface version. In case this doesn't match
  what the script expects, it should not run. This document conforms
  to version 2.
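
A hook can guard itself accordingly, for example:

```shell
# Bail out early when the hooks interface version is not the one
# this script was written against.
check_hooks_version() {
    [ "${GANETI_HOOKS_VERSION:-}" = "2" ]
}
```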

HOOKS_PHASE
  One of *PRE* or *POST* denoting the phase we are in.

CLUSTER
  The cluster name.

MASTER
  The master node.

OP_CODE
  One of the *OP_* values from the list of operations.

OBJECT_TYPE
  One of ``INSTANCE``, ``NODE``, ``CLUSTER``.

DATA_DIR
  The path to the Ganeti configuration directory (to read, for
  example, the *ssconf* files).

Specialised variables
~~~~~~~~~~~~~~~~~~~~~

This is the list of variables which are specific to one or more
operations.

INSTANCE_NAME
  The name of the instance which is the target of the operation.

INSTANCE_BE_x,y,z,...
  Instance BE params. There is one variable per BE param; for
  instance, GANETI_INSTANCE_BE_auto_balance.

INSTANCE_DISK_TEMPLATE
  The disk type for the instance.

NEW_DISK_TEMPLATE
  The new disk type for the instance.

INSTANCE_DISK_COUNT
  The number of disks for the instance.

INSTANCE_DISKn_SIZE
  The size of disk *n* for the instance.

INSTANCE_DISKn_MODE
  Either *rw* for a read-write disk or *ro* for a read-only one.

INSTANCE_HV_x,y,z,...
  Instance hypervisor options. There is one variable per option; for
  instance, GANETI_INSTANCE_HV_use_bootloader.

INSTANCE_HYPERVISOR
  The instance hypervisor.

INSTANCE_NIC_COUNT
  The number of NICs for the instance.

INSTANCE_NICn_BRIDGE
  The bridge to which the *n*-th NIC of the instance is attached.

INSTANCE_NICn_IP
  The IP (if any) of the *n*-th NIC of the instance.

INSTANCE_NICn_MAC
  The MAC address of the *n*-th NIC of the instance.

INSTANCE_NICn_MODE
  The mode of the *n*-th NIC of the instance.

INSTANCE_OS_TYPE
  The name of the instance OS.

INSTANCE_PRIMARY
  The name of the node which is the primary for the instance. Note
  that for migrations/failovers, you shouldn't rely on this variable,
  since the nodes change during the execution; rely on the
  OLD_PRIMARY/NEW_PRIMARY values instead.

INSTANCE_SECONDARY
  Space-separated list of secondary nodes for the instance. Note that
  for migrations/failovers, you shouldn't rely on this variable,
  since the nodes change during the execution; rely on the
  OLD_SECONDARY/NEW_SECONDARY values instead.

INSTANCE_MEMORY
  The memory size (in MiB) of the instance.

INSTANCE_VCPUS
  The number of virtual CPUs for the instance.

INSTANCE_STATUS
  The run status of the instance.

MASTER_CAPABLE
  Whether a node is capable of being promoted to master.

VM_CAPABLE
  Whether the node can host instances.

NODE_NAME
  The target node of this operation (not the node on which the hook
  runs).

NODE_PIP
  The primary IP of the target node (the one over which inter-node
  communication is done).

NODE_SIP
  The secondary IP of the target node (the one over which drbd
  replication is done). This can be equal to the primary IP, in case
  the cluster is not dual-homed.

FORCE
  This is provided by some operations when the user gave this flag.

IGNORE_CONSISTENCY
  The user has specified this flag. It is used when failing over
  instances in case the primary node is down.

ADD_MODE
  The mode of the instance create: either *create* for create from
  scratch or *import* for restoring from an exported image.

SRC_NODE, SRC_PATH, SRC_IMAGE
  In case the instance has been added by import, these variables are
  defined and point to the source node, source path (the directory
  containing the image and the config file) and the source disk image
  file.

NEW_SECONDARY
  The name of the node on which the new mirror component is being
  added (for replace disk). This can be the name of the current
  secondary, if the new mirror is on the same secondary. For
  migrations/failovers, this is the old primary node.

OLD_SECONDARY
  The name of the old secondary in the replace-disks command. Note
  that this can be equal to the new secondary if the secondary node
  hasn't actually changed. For migrations/failovers, this is the new
  primary node.

OLD_PRIMARY, NEW_PRIMARY
  For migrations/failovers, the old and respectively new primary
  nodes. These two mirror the NEW_SECONDARY/OLD_SECONDARY variables.

EXPORT_MODE
  The instance export mode. Either "remote" or "local".

EXPORT_NODE
  The node on which the export of the instance was done.

EXPORT_DO_SHUTDOWN
  This variable tells if the instance has been shut down or not while
  doing the export. In the "was shut down" case, it's likely that the
  filesystem is consistent, whereas in the "did not shut down" case,
  the filesystem would need a check (journal replay or full fsck) in
  order to guarantee consistency.

REMOVE_INSTANCE
  Whether the instance was removed from the node.

SHUTDOWN_TIMEOUT
  Amount of time to wait for the instance to shut down.

TIMEOUT
  Amount of time to wait before aborting the op.

OLD_NAME, NEW_NAME
  Old/new name of the node group.

GROUP_NAME
  The name of the node group.

NEW_ALLOC_POLICY
  The new allocation policy for the node group.

CLUSTER_TAGS
  The list of cluster tags, space separated.

NODE_TAGS_<name>
  The list of tags for node *<name>*, space separated.

Examples
--------

The startup of an instance will pass this environment to the hook
script::

  GANETI_CLUSTER=cluster1.example.com
  GANETI_DATA_DIR=/var/lib/ganeti
  GANETI_FORCE=False
  GANETI_HOOKS_PATH=instance-start
  GANETI_HOOKS_PHASE=post
  GANETI_HOOKS_VERSION=2
  GANETI_INSTANCE_DISK0_MODE=rw
  GANETI_INSTANCE_DISK0_SIZE=128
  GANETI_INSTANCE_DISK_COUNT=1
  GANETI_INSTANCE_DISK_TEMPLATE=drbd
  GANETI_INSTANCE_MEMORY=128
  GANETI_INSTANCE_NAME=instance2.example.com
  GANETI_INSTANCE_NIC0_BRIDGE=xen-br0
  GANETI_INSTANCE_NIC0_IP=
  GANETI_INSTANCE_NIC0_MAC=aa:00:00:a5:91:58
  GANETI_INSTANCE_NIC_COUNT=1
  GANETI_INSTANCE_OS_TYPE=debootstrap
  GANETI_INSTANCE_PRIMARY=node3.example.com
  GANETI_INSTANCE_SECONDARY=node5.example.com
  GANETI_INSTANCE_STATUS=down
  GANETI_INSTANCE_VCPUS=1
  GANETI_MASTER=node1.example.com
  GANETI_OBJECT_TYPE=INSTANCE
  GANETI_OP_CODE=OP_INSTANCE_STARTUP
  GANETI_OP_TARGET=instance2.example.com
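
A post hook receiving the environment above could, for instance,
produce a one-line report (a hypothetical example; the output goes to
the file Ganeti redirects stdout to):

```shell
#!/bin/sh
# Consume the startup environment shown above: report which node
# the instance was started on.
report_startup() {
    printf '%s started on %s\n' \
        "${GANETI_INSTANCE_NAME:-?}" "${GANETI_INSTANCE_PRIMARY:-?}"
}

report_startup
```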

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: