Revision 4d6443f4

b/Makefile.am

 docsgml = \
-	doc/hooks.sgml \
 	doc/install.sgml \
 	doc/rapi.sgml

 docrst = \
 	doc/admin.rst \
 	doc/design-2.0.rst \
+	doc/hooks.rst \
 	doc/iallocator.rst \
 	doc/security.rst
b/doc/hooks.rst

Ganeti customisation using hooks
================================

Documents ganeti version 2.0

.. contents::

Introduction
------------

In order to allow customisation of operations, ganeti runs scripts
under ``/etc/ganeti/hooks`` based on certain rules.

This is similar to the ``/etc/network/`` structure present in Debian
for network interface handling.

Organisation
------------

For every operation, two sets of scripts are run:

- pre phase (for authorization/checking)
- post phase (for logging)

Also, for each operation, the scripts are run on one or more nodes,
depending on the operation type.

Note that, even though we call them scripts, we are actually talking
about any executable.

*pre* scripts
~~~~~~~~~~~~~

The *pre* scripts have a definite target: to check that the operation
is allowed given the site-specific constraints. You could have, for
example, a rule that says every new instance is required to exist in
a database; to implement this, you could write a script that checks
the new instance parameters against your database.

The objective of these scripts should be their return code (zero or
non-zero for success and failure). However, if they modify the
environment in any way, they should be idempotent, as failed
executions could be restarted and thus the script(s) run again with
exactly the same parameters.

Note that if a node is unreachable at the time a hook is run, this
will not be interpreted as a denial of the execution. In other words,
only an actual error returned from a script will cause an abort, not
an unreachable node.

Therefore, if you want to guarantee that a hook script is run and
denies an action, it's best to put it on the master node.
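As a concrete illustration, here is a minimal sketch of a *pre* hook
written in Python. The path ``/etc/ganeti/allowed-instances`` and the
policy it implements are hypothetical site-specific choices, not part
of Ganeti itself:

```python
import os
import sys

def check_instance(name, allowed):
    """Return the hook's exit code: zero allows the operation,
    non-zero denies it."""
    return 0 if name in allowed else 1

def main():
    # Ganeti passes all data via GANETI_-prefixed environment variables.
    name = os.environ.get("GANETI_INSTANCE_NAME", "")
    # Hypothetical site-local file listing permitted instance names.
    try:
        with open("/etc/ganeti/allowed-instances") as f:
            allowed = set(line.strip() for line in f if line.strip())
    except IOError:
        allowed = set()  # no list present: deny every instance
    sys.exit(check_instance(name, allowed))
```

A real hook would simply end with a call to ``main()``; since the hook
may be re-run after a failed execution, the read-only check above is
trivially idempotent.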
*post* scripts
~~~~~~~~~~~~~~

These scripts should do whatever you need as a reaction to the
completion of an operation. Their return code is not checked (but
logged), and they should not depend on the fact that the *pre* scripts
have been run.

Naming
~~~~~~

The allowed names for the scripts consist of (similar to *run-parts*)
upper and lower case letters, digits, underscores and hyphens; in
other words, the regexp ``^[a-zA-Z0-9_-]+$``. Also, non-executable
scripts will be ignored.

Order of execution
~~~~~~~~~~~~~~~~~~

On a single node, the scripts in a directory are run in lexicographic
order (more exactly, the python string comparison order). It is
advisable to implement the usual *NN-name* convention where *NN* is a
two-digit number.

For an operation whose hooks are run on multiple nodes, there is no
specific ordering of nodes with regard to hooks execution; you should
assume that the scripts are run in parallel on the target nodes
(keeping on each node the above specified ordering). If you need any
kind of inter-node synchronisation, you have to implement it yourself
in the scripts.
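The naming and single-node ordering rules can be mimicked as follows.
This is an illustrative sketch of a *run-parts*-style selection, not
Ganeti's actual implementation:

```python
import os
import re
import stat

# Allowed hook names: upper/lower case letters, digits, underscores,
# hyphens -- i.e. the regexp from the Naming section.
VALID_NAME = re.compile(r"^[a-zA-Z0-9_-]+$")

def runnable_hooks(directory):
    """Return hook basenames in python string sort order, keeping only
    validly named, executable regular files."""
    hooks = []
    for name in sorted(os.listdir(directory)):
        if not VALID_NAME.match(name):
            continue  # e.g. "10-check.bak" is skipped (dot not allowed)
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.stat(path).st_mode & stat.S_IXUSR:
            hooks.append(name)
    return hooks
```

Note that plain string sorting is why the two-digit *NN-* prefix
matters: ``"9-foo"`` would sort after ``"10-bar"``, while ``"09-foo"``
sorts before it.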
Execution environment
~~~~~~~~~~~~~~~~~~~~~

The scripts will be run as follows:

- no command line arguments
- no controlling *tty*
- stdin is actually */dev/null*
- stdout and stderr are directed to files
- PATH is reset to ``/sbin:/bin:/usr/sbin:/usr/bin``
- the environment is cleared, and only ganeti-specific variables will
  be left

All information about the cluster is passed using environment
variables. Different operations will have slightly different
environments, but most of the variables are common.
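Given this contract, a hook skeleton might start like the sketch
below. The helper names are illustrative; the phase value is lowered
before comparison because the interface documents *PRE*/*POST* while
the example dump later in this document shows lowercase ``post``:

```python
PREFIX = "GANETI_"

def hook_env(environ):
    """Collect the GANETI_-prefixed variables, with the prefix stripped."""
    return {k[len(PREFIX):]: v
            for k, v in environ.items()
            if k.startswith(PREFIX)}

def run(env):
    """Return the hook's exit code, based on the common variables."""
    # Refuse to run against an interface version we don't understand.
    if env.get("HOOKS_VERSION") != "2":
        return 1
    phase = env.get("HOOKS_PHASE", "").lower()  # "pre" or "post"
    op_code = env.get("OP_CODE", "")
    # ... site-specific logic keyed on (phase, op_code) goes here ...
    return 0
```

A hook built on this would finish with
``sys.exit(run(hook_env(os.environ)))``.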
Operation list
--------------

Node operations
~~~~~~~~~~~~~~~

OP_ADD_NODE
+++++++++++

Adds a node to the cluster.

:directory: node-add
:env. vars: NODE_NAME, NODE_PIP, NODE_SIP
:pre-execution: all existing nodes
:post-execution: all nodes plus the new node

OP_REMOVE_NODE
++++++++++++++

Removes a node from the cluster.

:directory: node-remove
:env. vars: NODE_NAME
:pre-execution: all existing nodes except the removed node
:post-execution: all existing nodes except the removed node

OP_NODE_SET_PARAMS
++++++++++++++++++

Changes a node's parameters.

:directory: node-modify
:env. vars: MASTER_CANDIDATE, OFFLINE, DRAINED
:pre-execution: master node, the target node
:post-execution: master node, the target node

Instance operations
~~~~~~~~~~~~~~~~~~~

All instance operations take at least the following variables:
INSTANCE_NAME, INSTANCE_PRIMARY, INSTANCE_SECONDARIES,
INSTANCE_OS_TYPE, INSTANCE_DISK_TEMPLATE, INSTANCE_MEMORY,
INSTANCE_DISK_SIZES, INSTANCE_VCPUS, INSTANCE_NIC_COUNT,
INSTANCE_NICn_IP, INSTANCE_NICn_BRIDGE, INSTANCE_NICn_MAC,
INSTANCE_DISK_COUNT, INSTANCE_DISKn_SIZE, INSTANCE_DISKn_MODE.

The INSTANCE_NICn_* and INSTANCE_DISKn_* variables represent the
properties of the *n*-th NIC and disk, and are zero-indexed.
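Because the per-device variables are zero-indexed and paired with a
count variable, a hook can recover the full disk configuration by
name construction (shown here with the *GANETI_* prefix the variables
actually carry in the environment):

```python
def instance_disks(environ):
    """Yield a (size, mode) pair for each instance disk, using the
    zero-indexed GANETI_INSTANCE_DISKn_* variables together with
    GANETI_INSTANCE_DISK_COUNT."""
    count = int(environ.get("GANETI_INSTANCE_DISK_COUNT", "0"))
    for n in range(count):
        yield (environ.get("GANETI_INSTANCE_DISK%d_SIZE" % n),
               environ.get("GANETI_INSTANCE_DISK%d_MODE" % n))
```

The NIC variables (``GANETI_INSTANCE_NICn_IP``, ``_BRIDGE``, ``_MAC``
with ``GANETI_INSTANCE_NIC_COUNT``) can be walked the same way.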
OP_INSTANCE_ADD
+++++++++++++++

Creates a new instance.

:directory: instance-add
:env. vars: ADD_MODE, SRC_NODE, SRC_PATH, SRC_IMAGES
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REINSTALL
+++++++++++++++++++++

Reinstalls an instance.

:directory: instance-reinstall
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_BACKUP_EXPORT
++++++++++++++++

Exports the instance.

:directory: instance-export
:env. vars: EXPORT_NODE, EXPORT_DO_SHUTDOWN
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_START
+++++++++++++++++

Starts an instance.

:directory: instance-start
:env. vars: INSTANCE_NAME, INSTANCE_PRIMARY, INSTANCE_SECONDARIES, FORCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SHUTDOWN
++++++++++++++++++++

Stops an instance.

:directory: instance-shutdown
:env. vars: INSTANCE_NAME, INSTANCE_PRIMARY, INSTANCE_SECONDARIES
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REBOOT
++++++++++++++++++

Reboots an instance.

:directory: instance-reboot
:env. vars: IGNORE_SECONDARIES, REBOOT_TYPE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MODIFY
++++++++++++++++++

Modifies the instance parameters.

:directory: instance-modify
:env. vars: INSTANCE_NAME, MEM_SIZE, VCPUS, INSTANCE_IP
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_FAILOVER
++++++++++++++++++++

Fails over an instance.

:directory: instance-failover
:env. vars: IGNORE_CONSISTENCY
:pre-execution: master node, secondary node
:post-execution: master node, secondary node

OP_INSTANCE_MIGRATE
+++++++++++++++++++

Migrates an instance.

:directory: instance-failover
:env. vars: INSTANCE_MIGRATE_LIVE, INSTANCE_MIGRATE_CLEANUP
:pre-execution: master node, secondary node
:post-execution: master node, secondary node

OP_INSTANCE_REMOVE
++++++++++++++++++

Removes an instance.

:directory: instance-remove
:env. vars: INSTANCE_NAME, INSTANCE_PRIMARY, INSTANCE_SECONDARIES
:pre-execution: master node
:post-execution: master node

OP_INSTANCE_REPLACE_DISKS
+++++++++++++++++++++++++

Replaces an instance's disks.

:directory: mirror-replace
:env. vars: MODE, NEW_SECONDARY, OLD_SECONDARY
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_GROW_DISK
+++++++++++++++++++++

Grows the disk of an instance.

:directory: disk-grow
:env. vars: DISK, AMOUNT
:pre-execution: master node, primary node
:post-execution: master node, primary node

OP_INSTANCE_RENAME
++++++++++++++++++

Renames an instance.

:directory: instance-rename
:env. vars: INSTANCE_NEW_NAME
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

Cluster operations
~~~~~~~~~~~~~~~~~~

OP_CLUSTER_VERIFY
+++++++++++++++++

Verifies the cluster status. This is a special LU with regard to
hooks, as the result of the opcode will be combined with the result of
post-execution hooks, in order to allow administrators to enhance the
cluster verification procedure.

:directory: cluster-verify
:env. vars: CLUSTER, MASTER
:pre-execution: none
:post-execution: all nodes

OP_CLUSTER_RENAME
+++++++++++++++++

Renames the cluster.

:directory: cluster-rename
:env. vars: NEW_NAME
:pre-execution: master node
:post-execution: master node

OP_CLUSTER_SET_PARAMS
+++++++++++++++++++++

Modifies the cluster parameters.

:directory: cluster-modify
:env. vars: NEW_VG_NAME
:pre-execution: master node
:post-execution: master node

Obsolete operations
~~~~~~~~~~~~~~~~~~~

The following operations are no longer present, or no longer execute
hooks, in Ganeti 2.0:

- OP_INIT_CLUSTER
- OP_MASTER_FAILOVER
- OP_INSTANCE_ADD_MDDRBD
- OP_INSTANCE_REMOVE_MDDRBD

Environment variables
---------------------

Note that all variables listed here are actually prefixed with
*GANETI_* in order to provide a clear namespace.

Common variables
~~~~~~~~~~~~~~~~

This is the list of environment variables supported by all operations:

HOOKS_VERSION
  Documents the hooks interface version. In case this doesn't match
  what the script expects, it should not run. This document conforms
  to version 2.

HOOKS_PHASE
  One of *PRE* or *POST*, denoting which phase we are in.

CLUSTER
  The cluster name.

MASTER
  The master node.

OP_CODE
  One of the *OP_* values from the list of operations.

OBJECT_TYPE
  One of ``INSTANCE``, ``NODE``, ``CLUSTER``.

DATA_DIR
  The path to the Ganeti configuration directory (to read, for
  example, the *ssconf* files).

Specialised variables
~~~~~~~~~~~~~~~~~~~~~

This is the list of variables which are specific to one or more
operations.

INSTANCE_NAME
  The name of the instance which is the target of the operation.

INSTANCE_DISK_TEMPLATE
  The disk type for the instance.

INSTANCE_DISK_COUNT
  The number of disks for the instance.

INSTANCE_DISKn_SIZE
  The size of disk *n* for the instance.

INSTANCE_DISKn_MODE
  Either *rw* for a read-write disk or *ro* for a read-only one.

INSTANCE_NIC_COUNT
  The number of NICs for the instance.

INSTANCE_NICn_BRIDGE
  The bridge to which the *n*-th NIC of the instance is attached.

INSTANCE_NICn_IP
  The IP (if any) of the *n*-th NIC of the instance.

INSTANCE_NICn_MAC
  The MAC address of the *n*-th NIC of the instance.

INSTANCE_OS_TYPE
  The name of the instance OS.

INSTANCE_PRIMARY
  The name of the node which is the primary for the instance.

INSTANCE_SECONDARIES
  Space-separated list of secondary nodes for the instance.

INSTANCE_MEMORY
  The memory size (in MiB) of the instance.

INSTANCE_VCPUS
  The number of virtual CPUs for the instance.

INSTANCE_STATUS
  The run status of the instance.

NODE_NAME
  The target node of this operation (not the node on which the hook
  runs).

NODE_PIP
  The primary IP of the target node (the one over which inter-node
  communication is done).

NODE_SIP
  The secondary IP of the target node (the one over which drbd
  replication is done). This can be equal to the primary IP, in case
  the cluster is not dual-homed.

FORCE
  This is provided by some operations when the user gave this flag.

IGNORE_CONSISTENCY
  The user has specified this flag. It is used when failing over
  instances in case the primary node is down.

ADD_MODE
  The mode of the instance creation: either *create* for creation
  from scratch or *import* for restoring from an exported image.

SRC_NODE, SRC_PATH, SRC_IMAGE
  In case the instance has been added by import, these variables are
  defined and point to the source node, source path (the directory
  containing the image and the config file) and the source disk image
  file.

NEW_SECONDARY
  The name of the node on which the new mirror component is being
  added. This can be the name of the current secondary, if the new
  mirror is on the same secondary.

OLD_SECONDARY
  The name of the old secondary in the replace-disks command. Note
  that this can be equal to the new secondary if the secondary node
  hasn't actually changed.

EXPORT_NODE
  The node on which the export of the instance was done.

EXPORT_DO_SHUTDOWN
  This variable tells whether the instance was shut down while doing
  the export. In the "was shut down" case, it's likely that the
  filesystem is consistent, whereas in the "was not shut down" case,
  the filesystem would need a check (journal replay or full fsck) in
  order to guarantee consistency.

Examples
--------

The startup of an instance will pass this environment to the hook
script::

  GANETI_CLUSTER=cluster1.example.com
  GANETI_DATA_DIR=/var/lib/ganeti
  GANETI_FORCE=False
  GANETI_HOOKS_PATH=instance-start
  GANETI_HOOKS_PHASE=post
  GANETI_HOOKS_VERSION=2
  GANETI_INSTANCE_DISK0_MODE=rw
  GANETI_INSTANCE_DISK0_SIZE=128
  GANETI_INSTANCE_DISK_COUNT=1
  GANETI_INSTANCE_DISK_TEMPLATE=drbd
  GANETI_INSTANCE_MEMORY=128
  GANETI_INSTANCE_NAME=instance2.example.com
  GANETI_INSTANCE_NIC0_BRIDGE=xen-br0
  GANETI_INSTANCE_NIC0_IP=
  GANETI_INSTANCE_NIC0_MAC=aa:00:00:a5:91:58
  GANETI_INSTANCE_NIC_COUNT=1
  GANETI_INSTANCE_OS_TYPE=debootstrap
  GANETI_INSTANCE_PRIMARY=node3.example.com
  GANETI_INSTANCE_SECONDARIES=node5.example.com
  GANETI_INSTANCE_STATUS=down
  GANETI_INSTANCE_VCPUS=1
  GANETI_MASTER=node1.example.com
  GANETI_OBJECT_TYPE=INSTANCE
  GANETI_OP_CODE=OP_INSTANCE_STARTUP
  GANETI_OP_TARGET=instance2.example.com
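A dump such as the one above can be turned back into a dictionary, for
instance to exercise hook logic outside of Ganeti. This parser is a
small illustrative helper, not part of the hooks interface:

```python
def parse_env_listing(text):
    """Parse NAME=value lines (as printed by ``env``) into a dict.
    Empty values, like GANETI_INSTANCE_NIC0_IP above, are preserved."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and "=" in line:
            name, value = line.split("=", 1)
            env[name] = value
    return env
```

Splitting on the first ``=`` only keeps values that themselves contain
``=`` intact.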
/dev/null
1
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
2
]>
3
  <article class="specification">
4
  <articleinfo>
5
    <title>Ganeti customisation using hooks</title>
6
  </articleinfo>
7
  <para>Documents ganeti version 1.2</para>
8
  <section>
9
    <title>Introduction</title>
10

  
11
    <para>
12
      In order to allow customisation of operations, ganeti will run
13
      scripts under <filename
14
      class="directory">/etc/ganeti/hooks</filename> based on certain
15
      rules.
16
    </para>
17

  
18
      <para>This is similar to the <filename
19
      class="directory">/etc/network/</filename> structure present in
20
      Debian for network interface handling.</para>
21

  
22
    </section>
23

  
24
    <section>
25
      <title>Organisation</title>
26

  
27
      <para>For every operation, two sets of scripts are run:
28

  
29
      <itemizedlist>
30
          <listitem>
31
            <simpara>pre phase (for authorization/checking)</simpara>
32
          </listitem>
33
          <listitem>
34
            <simpara>post phase (for logging)</simpara>
35
          </listitem>
36
        </itemizedlist>
37
      </para>
38

  
39
      <para>Also, for each operation, the scripts are run on one or
40
      more nodes, depending on the operation type.</para>
41

  
42
      <para>Note that, even though we call them scripts, we are
43
      actually talking about any executable.</para>
44

  
45
      <section>
46
        <title><emphasis>pre</emphasis> scripts</title>
47

  
48
        <para>The <emphasis>pre</emphasis> scripts have a definite
49
        target: to check that the operation is allowed given the
50
        site-specific constraints. You could have, for example, a rule
51
        that says every new instance is required to exists in a
52
        database; to implement this, you could write a script that
53
        checks the new instance parameters against your
54
        database.</para>
55

  
56
        <para>The objective of these scripts should be their return
57
        code (zero or non-zero for success and failure). However, if
58
        they modify the environment in any way, they should be
59
        idempotent, as failed executions could be restarted and thus
60
        the script(s) run again with exactly the same
61
        parameters.</para>
62

  
63
      <para>
64
        Note that if a node is unreachable at the time a hooks is run,
65
        this will not be interpreted as a deny for the execution. In
66
        other words, only an actual error returned from a script will
67
        cause abort, and not an unreachable node.
68
      </para>
69

  
70
      <para>
71
        Therefore, if you want to guarantee that a hook script is run
72
        and denies an action, it's best to put it on the master node.
73
      </para>
74

  
75
      </section>
76

  
77
      <section>
78
        <title><emphasis>post</emphasis> scripts</title>
79

  
80
        <para>These scripts should do whatever you need as a reaction
81
        to the completion of an operation. Their return code is not
82
        checked (but logged), and they should not depend on the fact
83
        that the <emphasis>pre</emphasis> scripts have been
84
        run.</para>
85

  
86
      </section>
87

  
88
      <section>
89
        <title>Naming</title>
90

  
91
        <para>The allowed names for the scripts consist of (similar to
92
        <citerefentry> <refentrytitle>run-parts</refentrytitle>
93
        <manvolnum>8</manvolnum> </citerefentry>) upper and lower
94
        case, digits, underscores and hyphens. In other words, the
95
        regexp
96
        <computeroutput>^[a-zA-Z0-9_-]+$</computeroutput>. Also,
97
        non-executable scripts will be ignored.
98
        </para>
99
      </section>
100

  
101
      <section>
102
        <title>Order of execution</title>
103

  
104
        <para>On a single node, the scripts in a directory are run in
105
        lexicographic order (more exactly, the python string
106
        comparison order). It is advisable to implement the usual
107
        <emphasis>NN-name</emphasis> convention where
108
        <emphasis>NN</emphasis> is a two digit number.</para>
109

  
110
        <para>For an operation whose hooks are run on multiple nodes,
111
        there is no specific ordering of nodes with regard to hooks
112
        execution; you should assume that the scripts are run in
113
        parallel on the target nodes (keeping on each node the above
114
        specified ordering).  If you need any kind of inter-node
115
        synchronisation, you have to implement it yourself in the
116
        scripts.</para>
117

  
118
      </section>
119

  
120
      <section>
121
        <title>Execution environment</title>
122

  
123
        <para>The scripts will be run as follows:
124
          <itemizedlist>
125
            <listitem>
126
              <simpara>no command line arguments</simpara>
127
            </listitem>
128
            <listitem>
129
              <simpara>no controlling <acronym>tty</acronym></simpara>
130
            </listitem>
131
            <listitem>
132
              <simpara><varname>stdin</varname> is
133
                actually <filename>/dev/null</filename></simpara>
134
            </listitem>
135
            <listitem>
136
              <simpara><varname>stdout</varname> and
137
                <varname>stderr</varname> are directed to
138
                files</simpara>
139
            </listitem>
140
            <listitem>
141
              <simpara>the <varname>PATH</varname> is reset to
142
                <literal>/sbin:/bin:/usr/sbin:/usr/bin</literal></simpara>
143
            </listitem>
144
            <listitem>
145
              <simpara>the environment is cleared, and only
146
                ganeti-specific variables will be left</simpara>
147
            </listitem>
148
          </itemizedlist>
149

  
150
        </para>
151

  
152
      <para>All informations about the cluster is passed using
153
      environment variables. Different operations will have sligthly
154
      different environments, but most of the variables are
155
      common.</para>
156

  
157
    </section>
158

  
159

  
160
    <section>
161
      <title>Operation list</title>
162
      <table>
163
        <title>Operation list</title>
164
        <tgroup cols="7">
165
          <colspec>
166
          <colspec>
167
          <colspec>
168
          <colspec>
169
          <colspec>
170
          <colspec colname="prehooks">
171
          <colspec colname="posthooks">
172
          <spanspec namest="prehooks" nameend="posthooks"
173
            spanname="bothhooks">
174
          <thead>
175
            <row>
176
              <entry>Operation ID</entry>
177
              <entry>Directory prefix</entry>
178
              <entry>Description</entry>
179
              <entry>Command</entry>
180
              <entry>Supported env. variables</entry>
181
              <entry><emphasis>pre</emphasis> hooks</entry>
182
              <entry><emphasis>post</emphasis> hooks</entry>
183
            </row>
184
          </thead>
185
          <tbody>
186
            <row>
187
              <entry>OP_INIT_CLUSTER</entry>
188
              <entry><filename class="directory">cluster-init</filename></entry>
189
              <entry>Initialises the cluster</entry>
190
              <entry><computeroutput>gnt-cluster init</computeroutput></entry>
191
              <entry><constant>CLUSTER</constant>, <constant>MASTER</constant></entry>
192
              <entry spanname="bothhooks">master node, cluster name</entry>
193
            </row>
194
            <row>
195
              <entry>OP_MASTER_FAILOVER</entry>
196
              <entry><filename class="directory">master-failover</filename></entry>
197
              <entry>Changes the master</entry>
198
              <entry><computeroutput>gnt-cluster master-failover</computeroutput></entry>
199
              <entry><constant>OLD_MASTER</constant>, <constant>NEW_MASTER</constant></entry>
200
              <entry>the new master</entry>
201
              <entry>all nodes</entry>
202
            </row>
203
            <row>
204
              <entry>OP_ADD_NODE</entry>
205
              <entry><filename class="directory">node-add</filename></entry>
206
              <entry>Adds a new node to the cluster</entry>
207
              <entry><computeroutput>gnt-node add</computeroutput></entry>
208
              <entry><constant>NODE_NAME</constant>, <constant>NODE_PIP</constant>, <constant>NODE_SIP</constant></entry>
209
              <entry>all existing nodes</entry>
210
              <entry>all existing nodes plus the new node</entry>
211
            </row>
212
            <row>
213
              <entry>OP_REMOVE_NODE</entry>
214
              <entry><filename class="directory">node-remove</filename></entry>
215
              <entry>Removes a node from the cluster</entry>
216
              <entry><computeroutput>gnt-node remove</computeroutput></entry>
217
              <entry><constant>NODE_NAME</constant></entry>
218
              <entry spanname="bothhooks">all existing nodes except the removed node</entry>
219
            </row>
220
            <row>
221
              <entry>OP_INSTANCE_ADD</entry>
222
              <entry><filename class="directory">instance-add</filename></entry>
223
              <entry>Creates a new instance</entry>
224
              <entry><computeroutput>gnt-instance add</computeroutput></entry>
225
              <entry><constant>INSTANCE_NAME</constant>, <constant>INSTANCE_PRIMARY</constant>, <constant>INSTANCE_SECONDARIES</constant>, <constant>DISK_TEMPLATE</constant>, <constant>MEM_SIZE</constant>, <constant>DISK_SIZE</constant>, <constant>SWAP_SIZE</constant>, <constant>VCPUS</constant>, <constant>INSTANCE_IP</constant>, <constant>INSTANCE_ADD_MODE</constant>, <constant>SRC_NODE</constant>, <constant>SRC_PATH</constant>, <constant>SRC_IMAGE</constant></entry>
226
              <entry spanname="bothhooks" morerows="4">master node, primary and
227
                   secondary nodes</entry>
228
            </row>
229
            <row>
230
              <entry>OP_BACKUP_EXPORT</entry>
231
              <entry><filename class="directory">instance-export</filename></entry>
232
              <entry>Export the instance</entry>
233
              <entry><computeroutput>gnt-backup export</computeroutput></entry>
234
              <entry><constant>INSTANCE_NAME</constant>, <constant>EXPORT_NODE</constant>, <constant>EXPORT_DO_SHUTDOWN</constant></entry>
235
            </row>
236
            <row>
237
              <entry>OP_INSTANCE_START</entry>
238
              <entry><filename class="directory">instance-start</filename></entry>
239
              <entry>Starts an instance</entry>
240
              <entry><computeroutput>gnt-instance start</computeroutput></entry>
241
              <entry><constant>INSTANCE_NAME</constant>, <constant>INSTANCE_PRIMARY</constant>, <constant>INSTANCE_SECONDARIES</constant>, <constant>FORCE</constant></entry>
242
            </row>
243
            <row>
244
              <entry>OP_INSTANCE_SHUTDOWN</entry>
245
              <entry><filename class="directory">instance-shutdown</filename></entry>
246
              <entry>Stops an instance</entry>
247
              <entry><computeroutput>gnt-instance shutdown</computeroutput></entry>
248
              <entry><constant>INSTANCE_NAME</constant>, <constant>INSTANCE_PRIMARY</constant>, <constant>INSTANCE_SECONDARIES</constant></entry>
249
            </row>
250
            <row>
251
              <entry>OP_INSTANCE_MODIFY</entry>
252
              <entry><filename class="directory">instance-modify</filename></entry>
253
              <entry>Modifies the instance parameters.</entry>
254
              <entry><computeroutput>gnt-instance modify</computeroutput></entry>
255
              <entry><constant>INSTANCE_NAME</constant>, <constant>MEM_SIZE</constant>, <constant>VCPUS</constant>, <constant>INSTANCE_IP</constant></entry>
256
            </row>
257
            <row>
258
              <entry>OP_INSTANCE_FAILOVER</entry>
259
              <entry><filename class="directory">instance-failover</filename></entry>
260
              <entry>Failover an instance</entry>
261
              <entry><computeroutput>gnt-instance start</computeroutput></entry>
262
              <entry><constant>INSTANCE_NAME</constant>, <constant>INSTANCE_PRIMARY</constant>, <constant>INSTANCE_SECONDARIES</constant>, <constant>IGNORE_CONSISTENCY</constant></entry>
263
            </row>
264
            <row>
265
              <entry>OP_INSTANCE_REMOVE</entry>
266
              <entry><filename class="directory">instance-remove</filename></entry>
267
              <entry>Remove an instance</entry>
268
              <entry><computeroutput>gnt-instance remove</computeroutput></entry>
269
              <entry><constant>INSTANCE_NAME</constant>, <constant>INSTANCE_PRIMARY</constant>, <constant>INSTANCE_SECONDARIES</constant></entry>
270
              <entry spanname="bothhooks">master node</entry>
271
            </row>
272
            <row>
273
              <entry>OP_INSTANCE_ADD_MDDRBD</entry>
274
              <entry><filename class="directory">mirror-add</filename></entry>
275
              <entry>Adds a mirror component</entry>
276
              <entry><computeroutput>gnt-instance add-mirror</computeroutput></entry>
277
              <entry><constant>INSTANCE_NAME</constant>, <constant>NEW_SECONDARY</constant>, <constant>DISK_NAME</constant></entry>
278
            </row>
279
            <row>
280
              <entry>OP_INSTANCE_REMOVE_MDDRBD</entry>
281
              <entry><filename class="directory">mirror-remove</filename></entry>
282
              <entry>Removes a mirror component</entry>
283
              <entry><computeroutput>gnt-instance remove-mirror</computeroutput></entry>
284
              <entry><constant>INSTANCE_NAME</constant>, <constant>OLD_SECONDARY</constant>, <constant>DISK_NAME</constant>, <constant>DISK_ID</constant></entry>
285
            </row>
286
            <row>
287
              <entry>OP_INSTANCE_REPLACE_DISKS</entry>
288
              <entry><filename class="directory">mirror-replace</filename></entry>
289
              <entry>Replace all mirror components</entry>
290
              <entry><computeroutput>gnt-instance replace-disks</computeroutput></entry>
291
              <entry><constant>INSTANCE_NAME</constant>, <constant>OLD_SECONDARY</constant>, <constant>NEW_SECONDARY</constant></entry>
292

  
293
            </row>
294
            <row>
295
              <entry>OP_CLUSTER_VERIFY</entry>
296
              <entry><filename class="directory">cluster-verify</filename></entry>
297
              <entry>Verifies the cluster status</entry>
298
              <entry><computeroutput>gnt-cluster verify</computeroutput></entry>
299
              <entry><constant>CLUSTER</constant>, <constant>MASTER</constant></entry>
300
              <entry>NONE</entry>
301
              <entry>all nodes</entry>
302
            </row>
303
          </tbody>
304
        </tgroup>
305
      </table>
306
    </section>
307

  
308
    <section>
      <title>Environment variables</title>

      <para>Note that all variables listed here are actually prefixed
      with <constant>GANETI_</constant> in order to provide a
      different namespace.</para>

      <section>
        <title>Common variables</title>

        <para>This is the list of environment variables supported by
        all operations:</para>

        <variablelist>
          <varlistentry>
            <term>HOOKS_VERSION</term>
            <listitem>
              <para>Documents the hooks interface version. If this
            does not match what the script expects, the script should
            not run. This document describes version
            <literal>1</literal>.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>HOOKS_PHASE</term>
            <listitem>
              <para>one of <constant>PRE</constant> or
              <constant>POST</constant>, denoting the phase we are
              in.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>CLUSTER</term>
            <listitem>
              <para>the cluster name</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>MASTER</term>
            <listitem>
              <para>the master node</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>OP_ID</term>
            <listitem>
              <para>one of the <constant>OP_*</constant> values from
              the table of operations</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>OBJECT_TYPE</term>
            <listitem>
              <para>one of <simplelist type="inline">
                  <member><constant>INSTANCE</constant></member>
                  <member><constant>NODE</constant></member>
                  <member><constant>CLUSTER</constant></member>
                </simplelist>, showing the target of the operation.
             </para>
            </listitem>
          </varlistentry>
          <!-- commented out since it causes problems in our rpc
               multi-node optimised calls
          <varlistentry>
            <term>HOST_NAME</term>
            <listitem>
              <para>The name of the node the hook is run on as known by
            the cluster.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>HOST_TYPE</term>
            <listitem>
              <para>one of <simplelist type="inline">
                  <member><constant>MASTER</constant></member>
                  <member><constant>NODE</constant></member>
                </simplelist>, showing the role of this node in the cluster.
             </para>
            </listitem>
          </varlistentry>
          -->
        </variablelist>
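As a concrete illustration of the common variables, a minimal hook skeleton could refuse to run against an unknown interface version and then dispatch on the phase. This is only a sketch, not an official Ganeti example; the sample <constant>GANETI_*</constant> values set at the end stand in for the environment a real hook would receive.

```shell
#!/bin/sh
# Sketch of a generic hook script; not shipped with Ganeti.
# All variables documented here arrive prefixed with GANETI_.

run_hook() {
    # Refuse to run against a hooks interface we don't understand.
    if [ "$GANETI_HOOKS_VERSION" != "1" ]; then
        echo "unsupported hooks version: $GANETI_HOOKS_VERSION" >&2
        return 1
    fi
    case "$GANETI_HOOKS_PHASE" in
        PRE)
            # pre phase: authorization/checking; a non-zero exit
            # signals that the operation should not proceed
            echo "pre-check for $GANETI_OP_ID on cluster $GANETI_CLUSTER"
            ;;
        POST)
            # post phase: logging only
            echo "post: $GANETI_OP_ID finished (master: $GANETI_MASTER)"
            ;;
        *)
            echo "unknown phase: $GANETI_HOOKS_PHASE" >&2
            return 1
            ;;
    esac
}

# Sample values, standing in for what Ganeti would export:
GANETI_HOOKS_VERSION=1
GANETI_HOOKS_PHASE=PRE
GANETI_OP_ID=OP_CLUSTER_VERIFY
GANETI_CLUSTER=example-cluster
GANETI_MASTER=node1.example.com
run_hook
```

Because pre-phase scripts may be run again after a failed and restarted execution, any environment-modifying logic added to such a skeleton should stay idempotent.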
      </section>

      <section>
        <title>Specialised variables</title>

        <para>This is the list of variables which are specific to one
        or more operations.</para>
        <variablelist>
          <varlistentry>
            <term>INSTANCE_NAME</term>
            <listitem>
              <para>The name of the instance which is the target of
              the operation.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>INSTANCE_DISK_TYPE</term>
            <listitem>
              <para>The disk type for the instance.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>INSTANCE_DISK_SIZE</term>
            <listitem>
              <para>The (OS) disk size for the instance.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>INSTANCE_OS</term>
            <listitem>
              <para>The name of the instance OS.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>INSTANCE_PRIMARY</term>
            <listitem>
              <para>The name of the node which is the primary for the
              instance.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>INSTANCE_SECONDARIES</term>
            <listitem>
              <para>Space-separated list of secondary nodes for the
              instance.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>NODE_NAME</term>
            <listitem>
              <para>The target node of this operation (not the node on
              which the hook runs).</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>NODE_PIP</term>
            <listitem>
              <para>The primary IP of the target node (the one over
              which inter-node communication is done).</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>NODE_SIP</term>
            <listitem>
              <para>The secondary IP of the target node (the one over
              which DRBD replication is done). This can be equal to
              the primary IP, in case the cluster is not
              dual-homed.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>OLD_MASTER</term>
            <term>NEW_MASTER</term>
            <listitem>
              <para>The old and the new master, respectively, for the
              master failover operation.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>FORCE</term>
            <listitem>
              <para>This is provided by some operations when the user
              gave this flag.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>IGNORE_CONSISTENCY</term>
            <listitem>
              <para>The user has specified this flag. It is used when
              failing over instances in case the primary node is
              down.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>MEM_SIZE, DISK_SIZE, SWAP_SIZE, VCPUS</term>
            <listitem>
              <para>The memory, disk and swap sizes and the number of
              processors selected for the instance (in
              <command>gnt-instance add</command> or
              <command>gnt-instance modify</command>).</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>INSTANCE_IP</term>
            <listitem>
              <para>If defined, the instance IP passed to the
              <command>gnt-instance add</command> and
              <command>gnt-instance set</command> commands; if not
              defined, no IP has been configured for the
              instance.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>DISK_TEMPLATE</term>
            <listitem>
              <para>The disk template type when creating the instance.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>INSTANCE_ADD_MODE</term>
            <listitem>
              <para>The mode of the create: either
              <constant>create</constant> for create from scratch or
              <constant>import</constant> for restoring from an
              exported image.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>SRC_NODE, SRC_PATH, SRC_IMAGE</term>
            <listitem>
              <para>In case the instance has been added by import,
              these variables are defined and point to the source
              node, source path (the directory containing the image
              and the config file) and the source disk image
              file.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>DISK_NAME</term>
            <listitem>
              <para>The disk name (either <filename>sda</filename> or
              <filename>sdb</filename>) in mirror operations
              (add/remove mirror).</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>DISK_ID</term>
            <listitem>
              <para>The disk id for mirror remove operations. You can
              look this up using <command>gnt-instance
              info</command>.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>NEW_SECONDARY</term>
            <listitem>
              <para>The name of the node on which the new mirror
              component is being added. This can be the name of the
              current secondary, if the new mirror is on the same
              secondary.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>OLD_SECONDARY</term>
            <listitem>
              <para>The name of the old secondary. This is used in
              both <command>replace-disks</command> and
              <command>remove-mirror</command>. Note that this can be
              equal to the new secondary (only
              <command>replace-disks</command> has both variables) if
              the secondary node hasn't actually changed.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>EXPORT_NODE</term>
            <listitem>
              <para>The node on which the export of the instance was
              done.</para>
            </listitem>
          </varlistentry>
          <varlistentry>
            <term>EXPORT_DO_SHUTDOWN</term>
            <listitem>
              <para>This variable tells whether the instance was shut
              down while doing the export. In the "was shutdown" case,
              it's likely that the filesystem is consistent, whereas
              in the "did not shutdown" case, the filesystem would
              need a check (journal replay or full fsck) in order to
              guarantee consistency.</para>
            </listitem>
          </varlistentry>
        </variablelist>
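To make the specialised variables concrete, a pre-phase hook for instance creation might enforce a local sizing policy, in the spirit of the database check mentioned in the introduction. This is only a sketch, not part of Ganeti: the <constant>OP_INSTANCE_ADD</constant> guard and the 8192 MB limit are illustrative assumptions about one site's policy.

```shell
#!/bin/sh
# Sketch of a pre-phase hook enforcing a hypothetical site sizing
# policy; the limit below is an example, not a Ganeti default.

MAX_MEM_MB=8192    # hypothetical site policy

check_instance_add() {
    # Only judge instance creation; allow everything else.
    [ "$GANETI_OP_ID" = "OP_INSTANCE_ADD" ] || return 0
    # MEM_SIZE is exported by gnt-instance add, prefixed with GANETI_.
    if [ "${GANETI_MEM_SIZE:-0}" -gt "$MAX_MEM_MB" ]; then
        echo "denied: ${GANETI_MEM_SIZE}MB exceeds ${MAX_MEM_MB}MB" >&2
        return 1
    fi
    echo "allowed: $GANETI_INSTANCE_NAME on $GANETI_INSTANCE_PRIMARY"
}

# A non-zero exit in the pre phase signals that the operation
# should be denied.
check_instance_add
```

Since failed executions can be restarted, a check like this is safe to run repeatedly: it reads its inputs from the environment and modifies nothing.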

      </section>

    </section>

  </section>
  </article>
