Ganeti customisation using hooks
================================

Documents Ganeti version 2.6

.. contents::

Introduction
------------

In order to allow customisation of operations, Ganeti runs scripts in
sub-directories of ``@SYSCONFDIR@/ganeti/hooks``. These sub-directories
are named ``$hook-$phase.d``, where ``$phase`` is either ``pre`` or
``post`` and ``$hook`` matches the directory name given for a hook (e.g.
``cluster-verify-post.d`` or ``node-add-pre.d``).

This is similar to the ``/etc/network/`` structure present in Debian
for network interface handling.
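
For example, a cluster that customises instance creation and cluster
verification might use a layout like the following (the directory
names follow the rule above; the script names are purely
illustrative)::

  @SYSCONFDIR@/ganeti/hooks/
    instance-add-pre.d/
      10-check-dns
      20-check-inventory
    instance-add-post.d/
      50-register-instance
    cluster-verify-post.d/
      10-site-checks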

Organisation
------------

For every operation, two sets of scripts are run:

- pre phase (for authorization/checking)
- post phase (for logging)

Also, for each operation, the scripts are run on one or more nodes,
depending on the operation type.

Note that, even though we call them scripts, we are actually talking
about any executable.

*pre* scripts
~~~~~~~~~~~~~

The *pre* scripts have a definite target: to check that the operation
is allowed given the site-specific constraints. You could have, for
example, a rule that says every new instance is required to exist in
a database; to implement this, you could write a script that checks
the new instance parameters against your database.

The only interface of these scripts should be their return code (zero
for success, non-zero for failure). However, if they modify the
environment in any way, they should be idempotent, as failed
executions could be restarted and thus the script(s) run again with
exactly the same parameters.
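
As an illustration of the database-check idea above, the following is
a minimal sketch of a *pre* hook that could be dropped into
``instance-add-pre.d``. The allowlist file and its one-name-per-line
format are assumptions made for this example, not part of Ganeti::

  #!/usr/bin/env python
  # Sketch of an instance-add-pre.d hook: deny creation of instances
  # that are not listed in a site-local allowlist file. The file path
  # and format are assumptions made for this example.
  import os
  import sys

  ALLOWLIST = "/etc/ganeti/approved-instances.txt"  # hypothetical

  def main():
      name = os.environ.get("GANETI_INSTANCE_NAME")
      if os.environ.get("GANETI_HOOKS_PHASE") != "pre" or not name:
          return 0
      try:
          with open(ALLOWLIST) as f:
              allowed = set(line.strip() for line in f)
      except IOError:
          return 0  # no allowlist present: site policy here is to allow
      if name not in allowed:
          sys.stderr.write("instance %s is not approved\n" % name)
          return 1  # a non-zero return code denies the operation
      return 0

  if __name__ == "__main__":
      sys.exit(main())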

Note that if a node is unreachable at the time a hook is run, this
will not be interpreted as a denial of the execution. In other words,
only an actual error returned from a script will cause an abort, not
an unreachable node.

Therefore, if you want to guarantee that a hook script is run and
denies an action, it's best to put it on the master node.

*post* scripts
~~~~~~~~~~~~~~

These scripts should do whatever you need as a reaction to the
completion of an operation. Their return code is not checked (but
logged), and they should not depend on the fact that the *pre* scripts
have been run.

Naming
~~~~~~

The allowed names for the scripts consist of (similar to *run-parts*)
upper and lower case letters, digits, underscores and hyphens; in
other words, they must match the regexp ``^[a-zA-Z0-9_-]+$``. Also,
non-executable scripts will be ignored.
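
A quick way to check whether a given file would be picked up at all is
to test its name against that regexp and its mode for the executable
bit; a small sketch (the exact permission check Ganeti performs is an
assumption here)::

  import os
  import re
  import stat

  NAME_RE = re.compile(r"^[a-zA-Z0-9_-]+$")

  def would_run(path):
      """Check whether a hook file would be considered at all."""
      if not NAME_RE.match(os.path.basename(path)):
          return False  # e.g. "10-check.sh" is skipped: dots not allowed
      mode = os.stat(path).st_mode
      return bool(mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))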

Order of execution
~~~~~~~~~~~~~~~~~~

On a single node, the scripts in a directory are run in lexicographic
order (more exactly, the Python string comparison order). It is
advisable to implement the usual *NN-name* convention where *NN* is a
two-digit number.

For an operation whose hooks are run on multiple nodes, there is no
specific ordering of nodes with regard to hooks execution; you should
assume that the scripts are run in parallel on the target nodes
(keeping on each node the above specified ordering). If you need any
kind of inter-node synchronisation, you have to implement it yourself
in the scripts.

Execution environment
~~~~~~~~~~~~~~~~~~~~~

The scripts will be run as follows:

- no command line arguments

- no controlling *tty*

- stdin is actually */dev/null*

- stdout and stderr are directed to files

- PATH is reset to :pyeval:`constants.HOOKS_PATH`

- the environment is cleared, and only ganeti-specific variables will
  be left

All information about the cluster is passed using environment
variables. Different operations will have slightly different
environments, but most of the variables are common.

Operation list
--------------

Node operations
~~~~~~~~~~~~~~~

OP_NODE_ADD
+++++++++++

Adds a node to the cluster.

:directory: node-add
:env. vars: NODE_NAME, NODE_PIP, NODE_SIP, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: all existing nodes
:post-execution: all nodes plus the new node

OP_NODE_REMOVE
++++++++++++++

Removes a node from the cluster. On the removed node the hooks are
called during the execution of the operation and not after its
completion.

:directory: node-remove
:env. vars: NODE_NAME
:pre-execution: all existing nodes except the removed node
:post-execution: all existing nodes

OP_NODE_SET_PARAMS
++++++++++++++++++

Changes a node's parameters.

:directory: node-modify
:env. vars: MASTER_CANDIDATE, OFFLINE, DRAINED, MASTER_CAPABLE, VM_CAPABLE
:pre-execution: master node, the target node
:post-execution: master node, the target node

OP_NODE_MIGRATE
+++++++++++++++

Relocates secondary instances from a node.

:directory: node-migrate
:env. vars: NODE_NAME
:pre-execution: master node
:post-execution: master node

Node group operations
~~~~~~~~~~~~~~~~~~~~~

OP_GROUP_ADD
++++++++++++

Adds a node group to the cluster.

:directory: group-add
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_SET_PARAMS
+++++++++++++++++++

Changes a node group's parameters.

:directory: group-modify
:env. vars: GROUP_NAME, NEW_ALLOC_POLICY
:pre-execution: master node
:post-execution: master node

OP_GROUP_REMOVE
+++++++++++++++

Removes a node group from the cluster. Since the node group must be
empty for removal to succeed, the concept of "nodes in the group" does
not exist, and the hook is only executed on the master node.

:directory: group-remove
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_GROUP_RENAME
+++++++++++++++

Renames a node group.

:directory: group-rename
:env. vars: OLD_NAME, NEW_NAME
:pre-execution: master node and all nodes in the group
:post-execution: master node and all nodes in the group

OP_GROUP_EVACUATE
+++++++++++++++++

Evacuates a node group.

:directory: group-evacuate
:env. vars: GROUP_NAME, TARGET_GROUPS
:pre-execution: master node and all nodes in the group
:post-execution: master node and all nodes in the group

Instance operations
~~~~~~~~~~~~~~~~~~~

All instance operations take at least the following variables:
INSTANCE_NAME, INSTANCE_PRIMARY, INSTANCE_SECONDARY,
INSTANCE_OS_TYPE, INSTANCE_DISK_TEMPLATE, INSTANCE_MEMORY,
INSTANCE_DISK_SIZES, INSTANCE_VCPUS, INSTANCE_NIC_COUNT,
INSTANCE_NICn_IP, INSTANCE_NICn_BRIDGE, INSTANCE_NICn_MAC,
INSTANCE_DISK_COUNT, INSTANCE_DISKn_SIZE, INSTANCE_DISKn_MODE.

The INSTANCE_NICn_* and INSTANCE_DISKn_* variables represent the
properties of the *n* -th NIC and disk, and are zero-indexed.
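
For example, a hook can walk these indexed variables with a loop over
the ``*_COUNT`` values; a small sketch (remember that in the actual
hook environment the names are prefixed with ``GANETI_``, as described
under `Environment variables`_)::

  import os

  def ganeti_var(name, default=None):
      """Read a hook variable (prefixed with GANETI_ in the environment)."""
      return os.environ.get("GANETI_" + name, default)

  def instance_disks():
      """Yield (size, mode) for every disk of the target instance."""
      for n in range(int(ganeti_var("INSTANCE_DISK_COUNT", "0"))):
          yield (ganeti_var("INSTANCE_DISK%d_SIZE" % n),
                 ganeti_var("INSTANCE_DISK%d_MODE" % n))

  def instance_nics():
      """Yield (ip, mac, bridge) for every NIC of the target instance."""
      for n in range(int(ganeti_var("INSTANCE_NIC_COUNT", "0"))):
          yield (ganeti_var("INSTANCE_NIC%d_IP" % n),
                 ganeti_var("INSTANCE_NIC%d_MAC" % n),
                 ganeti_var("INSTANCE_NIC%d_BRIDGE" % n))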

OP_INSTANCE_CREATE
++++++++++++++++++

Creates a new instance.

:directory: instance-add
:env. vars: ADD_MODE, SRC_NODE, SRC_PATH, SRC_IMAGES
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REINSTALL
+++++++++++++++++++++

Reinstalls an instance.

:directory: instance-reinstall
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_BACKUP_EXPORT
++++++++++++++++

Exports the instance.

:directory: instance-export
:env. vars: EXPORT_MODE, EXPORT_NODE, EXPORT_DO_SHUTDOWN, REMOVE_INSTANCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_STARTUP
+++++++++++++++++++

Starts an instance.

:directory: instance-start
:env. vars: FORCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SHUTDOWN
++++++++++++++++++++

Stops an instance.

:directory: instance-stop
:env. vars: TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REBOOT
++++++++++++++++++

Reboots an instance.

:directory: instance-reboot
:env. vars: IGNORE_SECONDARIES, REBOOT_TYPE, SHUTDOWN_TIMEOUT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SET_PARAMS
++++++++++++++++++++++

Modifies the instance parameters.

:directory: instance-modify
:env. vars: NEW_DISK_TEMPLATE, RUNTIME_MEMORY
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_FAILOVER
++++++++++++++++++++

Fails over an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before failover.

:directory: instance-failover
:env. vars: IGNORE_CONSISTENCY, SHUTDOWN_TIMEOUT, OLD_PRIMARY, OLD_SECONDARY, NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, secondary node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MIGRATE
+++++++++++++++++++

Migrates an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARY refer to the nodes that were respectively primary
and secondary before migration.

:directory: instance-migrate
:env. vars: MIGRATE_LIVE, MIGRATE_CLEANUP, OLD_PRIMARY, OLD_SECONDARY, NEW_PRIMARY, NEW_SECONDARY
:pre-execution: master node, secondary node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REMOVE
++++++++++++++++++

Removes an instance.

:directory: instance-remove
:env. vars: SHUTDOWN_TIMEOUT
:pre-execution: master node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_GROW_DISK
+++++++++++++++++++++

Grows the disk of an instance.

:directory: disk-grow
:env. vars: DISK, AMOUNT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_RENAME
++++++++++++++++++

Renames an instance.

:directory: instance-rename
:env. vars: INSTANCE_NEW_NAME
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MOVE
++++++++++++++++

Moves an instance by data-copying.

:directory: instance-move
:env. vars: TARGET_NODE, SHUTDOWN_TIMEOUT
:pre-execution: master node, primary and target nodes
:post-execution: master node, primary and target nodes

OP_INSTANCE_RECREATE_DISKS
++++++++++++++++++++++++++

Recreates an instance's missing disks.

:directory: instance-recreate-disks
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REPLACE_DISKS
+++++++++++++++++++++++++

Replaces the disks of an instance.

:directory: mirrors-replace
:env. vars: MODE, NEW_SECONDARY, OLD_SECONDARY
:pre-execution: master node, primary and new secondary nodes
:post-execution: master node, primary and new secondary nodes

OP_INSTANCE_CHANGE_GROUP
++++++++++++++++++++++++

Moves an instance to another group.

:directory: instance-change-group
:env. vars: TARGET_GROUPS
:pre-execution: master node
:post-execution: master node

Cluster operations
~~~~~~~~~~~~~~~~~~

OP_CLUSTER_POST_INIT
++++++++++++++++++++

This hook is called via a special "empty" LU right after cluster
initialization.

:directory: cluster-init
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_DESTROY
++++++++++++++++++

The post phase of this hook is called during the execution of the
destroy operation and not after its completion.

:directory: cluster-destroy
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_VERIFY_GROUP
+++++++++++++++++++++++

Verifies all nodes in a group. This is a special LU with regard to
hooks, as the result of the opcode will be combined with the result of
post-execution hooks, in order to allow administrators to enhance the
cluster verification procedure.

:directory: cluster-verify
:env. vars: CLUSTER, MASTER, CLUSTER_TAGS, NODE_TAGS_<name>
:pre-execution: none
:post-execution: all nodes in a group
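
For example, a site-specific check can be added as a hook in
``cluster-verify-post.d``; the sketch below assumes that whatever the
script prints, together with its exit code, is surfaced in the
``gnt-cluster verify`` output, and the mount point it checks is purely
an example::

  #!/usr/bin/env python
  # Sketch of a cluster-verify-post.d hook adding a site-specific
  # check; the required mount point is an example, not a Ganeti
  # default.
  import os
  import sys

  REQUIRED_MOUNT = "/srv/ganeti"  # hypothetical site requirement

  def main():
      errors = []
      if not os.path.ismount(REQUIRED_MOUNT):
          errors.append("%s is not mounted" % REQUIRED_MOUNT)
      for msg in errors:
          print(msg)  # reported together with the verification results
      return 1 if errors else 0

  if __name__ == "__main__":
      sys.exit(main())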

OP_CLUSTER_RENAME
+++++++++++++++++

Renames the cluster.

:directory: cluster-rename
:env. vars: NEW_NAME
:pre-execution: master node
:post-execution: master node

OP_CLUSTER_SET_PARAMS
+++++++++++++++++++++

Modifies the cluster parameters.

:directory: cluster-modify
:env. vars: NEW_VG_NAME
:pre-execution: master node
:post-execution: master node

Virtual operation :pyeval:`constants.FAKE_OP_MASTER_TURNUP`
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This doesn't correspond to an actual op-code, but it is called when the
master IP is activated.

:directory: master-ip-turnup
:env. vars: MASTER_NETDEV, MASTER_IP, MASTER_NETMASK, CLUSTER_IP_VERSION
:pre-execution: master node
:post-execution: master node

Virtual operation :pyeval:`constants.FAKE_OP_MASTER_TURNDOWN`
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This doesn't correspond to an actual op-code, but it is called when the
master IP is deactivated.

:directory: master-ip-turndown
:env. vars: MASTER_NETDEV, MASTER_IP, MASTER_NETMASK, CLUSTER_IP_VERSION
:pre-execution: master node
:post-execution: master node

Obsolete operations
~~~~~~~~~~~~~~~~~~~

The following operations are no longer present, or no longer execute
hooks, as of Ganeti 2.0:

- OP_INIT_CLUSTER
- OP_MASTER_FAILOVER
- OP_INSTANCE_ADD_MDDRBD
- OP_INSTANCE_REMOVE_MDDRBD

Environment variables
---------------------

Note that all variables listed here are actually prefixed with *GANETI_*
in order to provide a clear namespace. In addition, post-execution
scripts receive another set of variables, prefixed with *GANETI_POST_*,
representing the status after the opcode executed.
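
A post-execution script can therefore compare the before/after view of
the same variable; a small sketch (which variables actually get a
``GANETI_POST_`` counterpart depends on the opcode)::

  import os

  def changed_variables():
      """Yield (name, before, after) for variables whose value changed."""
      env = os.environ
      for key, after in env.items():
          if not key.startswith("GANETI_POST_"):
              continue
          name = key[len("GANETI_POST_"):]
          before = env.get("GANETI_" + name)
          if before != after:
              yield (name, before, after)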

Common variables
~~~~~~~~~~~~~~~~

This is the list of environment variables supported by all operations:

HOOKS_VERSION
  Documents the hooks interface version. In case this doesn't match
  what the script expects, it should not run. This document conforms
  to version 2 of the interface.

HOOKS_PHASE
  One of *PRE* or *POST*, denoting which phase we are in.

CLUSTER
  The cluster name.

MASTER
  The master node.

OP_CODE
  One of the *OP_* values from the list of operations.

OBJECT_TYPE
  One of ``INSTANCE``, ``NODE``, ``CLUSTER``.

DATA_DIR
  The path to the Ganeti configuration directory (to read, for
  example, the *ssconf* files).
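
For example, a hook can read one of the *ssconf* files relative to
this directory; a sketch (the ``ssconf_<key>`` file naming is an
assumption about the usual ssconf layout)::

  import os

  def read_ssconf(key):
      """Return the contents of an ssconf file below GANETI_DATA_DIR."""
      data_dir = os.environ.get("GANETI_DATA_DIR", "/var/lib/ganeti")
      with open(os.path.join(data_dir, "ssconf_%s" % key)) as f:
          return f.read().strip()

  # e.g. read_ssconf("cluster_name"), assuming such a file exists on
  # the node running the hook.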

Specialised variables
~~~~~~~~~~~~~~~~~~~~~

This is the list of variables which are specific to one or more
operations.

CLUSTER_IP_VERSION
  IP version of the master IP (4 or 6).

INSTANCE_NAME
  The name of the instance which is the target of the operation.

INSTANCE_BE_x,y,z,...
  Instance BE params. There is one variable per BE param. For
  example, GANETI_INSTANCE_BE_auto_balance.

INSTANCE_DISK_TEMPLATE
  The disk type for the instance.

NEW_DISK_TEMPLATE
  The new disk type for the instance.

INSTANCE_DISK_COUNT
  The number of disks for the instance.

INSTANCE_DISKn_SIZE
  The size of disk *n* for the instance.

INSTANCE_DISKn_MODE
  Either *rw* for a read-write disk or *ro* for a read-only one.

INSTANCE_HV_x,y,z,...
  Instance hypervisor options. There is one variable per option. For
  example, GANETI_INSTANCE_HV_use_bootloader.

INSTANCE_HYPERVISOR
  The instance hypervisor.

INSTANCE_NIC_COUNT
  The number of NICs for the instance.

INSTANCE_NICn_BRIDGE
  The bridge to which the *n* -th NIC of the instance is attached.

INSTANCE_NICn_IP
  The IP (if any) of the *n* -th NIC of the instance.

INSTANCE_NICn_MAC
  The MAC address of the *n* -th NIC of the instance.

INSTANCE_NICn_MODE
  The mode of the *n* -th NIC of the instance.

INSTANCE_OS_TYPE
  The name of the instance OS.

INSTANCE_PRIMARY
  The name of the node which is the primary for the instance. Note
  that for migrations/failovers, you shouldn't rely on this variable,
  since the nodes change during execution; use the
  OLD_PRIMARY/NEW_PRIMARY values instead.

INSTANCE_SECONDARY
  Space-separated list of secondary nodes for the instance. Note that
  for migrations/failovers, you shouldn't rely on this variable,
  since the nodes change during execution; use the
  OLD_SECONDARY/NEW_SECONDARY values instead.

INSTANCE_MEMORY
  The memory size (in MiB) of the instance.

INSTANCE_VCPUS
  The number of virtual CPUs for the instance.

INSTANCE_STATUS
  The run status of the instance.

MASTER_CAPABLE
  Whether a node is capable of being promoted to master.

VM_CAPABLE
  Whether the node can host instances.

MASTER_NETDEV
  The network device of the master IP.

MASTER_IP
  The master IP.

MASTER_NETMASK
  The netmask of the master IP.

INSTANCE_TAGS
  A space-delimited list of the instance's tags.

NODE_NAME
  The target node of this operation (not the node on which the hook
  runs).

NODE_PIP
  The primary IP of the target node (the one over which inter-node
  communication is done).

NODE_SIP
  The secondary IP of the target node (the one over which DRBD
  replication is done). This can be equal to the primary IP, in case
  the cluster is not dual-homed.

FORCE
  This is provided by some operations when the user gave this flag.

IGNORE_CONSISTENCY
  The user has specified this flag. It is used when failing over
  instances in case the primary node is down.

ADD_MODE
  The mode of the instance creation: either *create* for creation
  from scratch or *import* for restoring from an exported image.

SRC_NODE, SRC_PATH, SRC_IMAGE
  In case the instance has been added by import, these variables are
  defined and point to the source node, source path (the directory
  containing the image and the config file) and the source disk image
  file.

NEW_SECONDARY
  The name of the node on which the new mirror component is being
  added (for replace-disks). This can be the name of the current
  secondary, if the new mirror is on the same secondary. For
  migrations/failovers, this is the old primary node.

OLD_SECONDARY
  The name of the old secondary in the replace-disks command. Note
  that this can be equal to the new secondary if the secondary node
  hasn't actually changed. For migrations/failovers, this is the new
  primary node.

OLD_PRIMARY, NEW_PRIMARY
  For migrations/failovers, the old and, respectively, new primary
  nodes. These two mirror the NEW_SECONDARY/OLD_SECONDARY variables.

EXPORT_MODE
  The instance export mode. Either "remote" or "local".

EXPORT_NODE
  The node on which the exported image of the instance was created.

EXPORT_DO_SHUTDOWN
  This variable tells whether the instance was shut down while doing
  the export. In the "was shut down" case, it's likely that the
  filesystem is consistent, whereas in the "was not shut down" case,
  the filesystem would need a check (journal replay or full fsck) in
  order to guarantee consistency.

REMOVE_INSTANCE
  Whether the instance was removed from the node.

SHUTDOWN_TIMEOUT
  Amount of time to wait for the instance to shut down.

TIMEOUT
  Amount of time to wait before aborting the op.

OLD_NAME, NEW_NAME
  Old/new name of the node group.

GROUP_NAME
  The name of the node group.

NEW_ALLOC_POLICY
  The new allocation policy for the node group.

CLUSTER_TAGS
  The list of cluster tags, space separated.

NODE_TAGS_<name>
  The list of tags for node *<name>*, space separated.

Examples
--------

The startup of an instance will pass this environment to the hook
script::

  GANETI_CLUSTER=cluster1.example.com
  GANETI_DATA_DIR=/var/lib/ganeti
  GANETI_FORCE=False
  GANETI_HOOKS_PATH=instance-start
  GANETI_HOOKS_PHASE=post
  GANETI_HOOKS_VERSION=2
  GANETI_INSTANCE_DISK0_MODE=rw
  GANETI_INSTANCE_DISK0_SIZE=128
  GANETI_INSTANCE_DISK_COUNT=1
  GANETI_INSTANCE_DISK_TEMPLATE=drbd
  GANETI_INSTANCE_MEMORY=128
  GANETI_INSTANCE_NAME=instance2.example.com
  GANETI_INSTANCE_NIC0_BRIDGE=xen-br0
  GANETI_INSTANCE_NIC0_IP=
  GANETI_INSTANCE_NIC0_MAC=aa:00:00:a5:91:58
  GANETI_INSTANCE_NIC_COUNT=1
  GANETI_INSTANCE_OS_TYPE=debootstrap
  GANETI_INSTANCE_PRIMARY=node3.example.com
  GANETI_INSTANCE_SECONDARY=node5.example.com
  GANETI_INSTANCE_STATUS=down
  GANETI_INSTANCE_VCPUS=1
  GANETI_MASTER=node1.example.com
  GANETI_OBJECT_TYPE=INSTANCE
  GANETI_OP_CODE=OP_INSTANCE_STARTUP
  GANETI_OP_TARGET=instance2.example.com
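
A *post* hook for ``instance-start-post.d`` that consumes this
environment could look like the following sketch (the log file
location is arbitrary and purely an example)::

  #!/usr/bin/env python
  # Sketch of an instance-start-post.d hook: append a one-line record
  # of the operation to a site-local log file (the path is arbitrary).
  import os
  import sys
  import time

  LOG_FILE = "/var/log/ganeti-site-hooks.log"  # hypothetical

  def main():
      env = os.environ
      if env.get("GANETI_HOOKS_VERSION") != "2":
          return 0  # unknown interface version, do nothing
      record = "%s %s phase=%s instance=%s primary=%s status=%s\n" % (
          time.strftime("%Y-%m-%d %H:%M:%S"),
          env.get("GANETI_OP_CODE", "?"),
          env.get("GANETI_HOOKS_PHASE", "?"),
          env.get("GANETI_INSTANCE_NAME", "?"),
          env.get("GANETI_INSTANCE_PRIMARY", "?"),
          env.get("GANETI_INSTANCE_STATUS", "?"))
      with open(LOG_FILE, "a") as f:
          f.write(record)
      return 0  # the return code of post hooks is only logged anyway

  if __name__ == "__main__":
      sys.exit(main())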

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: