Ganeti customisation using hooks
================================

Documents ganeti version 2.0

.. contents::

Introduction
------------

In order to allow customisation of operations, ganeti runs scripts
under ``/etc/ganeti/hooks`` based on certain rules.

This is similar to the ``/etc/network/`` structure present in Debian
for network interface handling.

Organisation
------------

For every operation, two sets of scripts are run:

- pre phase (for authorisation/checking)
- post phase (for logging)

Also, for each operation, the scripts are run on one or more nodes,
depending on the operation type.

Note that, even though we call them scripts, we are actually talking
about any executable.

*pre* scripts
~~~~~~~~~~~~~

The *pre* scripts have a definite target: to check that the operation
is allowed given the site-specific constraints. You could have, for
example, a rule that says every new instance is required to exist in
a database; to implement this, you could write a script that checks
the new instance parameters against your database.

What matters for these scripts is their return code (zero for
success, non-zero for failure). However, if they modify the
environment in any way, they should be idempotent, as failed
executions could be restarted and thus the script(s) run again with
exactly the same parameters.

Note that if a node is unreachable at the time a hook is run, this
will not be interpreted as a denial of the execution. In other words,
only an actual error returned from a script will cause the operation
to abort; an unreachable node will not.

Therefore, if you want to guarantee that a hook script is run and
denies an action, it's best to put it on the master node.

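
For example, the database rule above could be implemented by a *pre*
hook along the lines of the following Python sketch. The inventory
file path and its one-name-per-line format are assumptions made up
for this example; since the hook only reads state, the idempotency
requirement is trivially satisfied::

  #!/usr/bin/env python
  # Hypothetical pre hook: allow instance operations only for
  # instances listed in a site-local inventory file.
  # Exit code 0 permits the operation; non-zero denies it.
  import os
  import sys

  INVENTORY = "/etc/ganeti/approved-instances"  # site-specific assumption

  def main():
      name = os.environ.get("GANETI_INSTANCE_NAME")
      if not name:
          return 0  # not an instance operation, nothing to check
      try:
          approved = open(INVENTORY).read().split()
      except IOError:
          return 1  # fail closed if the inventory cannot be read
      return 0 if name in approved else 1

  if __name__ == "__main__":
      sys.exit(main())
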

*post* scripts
~~~~~~~~~~~~~~

These scripts should do whatever you need as a reaction to the
completion of an operation. Their return code is not checked (but
logged), and they should not depend on the fact that the *pre* scripts
have been run.

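
A *post* hook, by contrast, only reacts to the completed operation. A
minimal sketch that appends one line per operation to a site-local log
(the log path is an assumption for this example)::

  #!/usr/bin/env python
  # Hypothetical post hook: record every completed operation.
  # Its return code is logged by ganeti but otherwise ignored.
  import os
  import time

  LOG = "/var/log/ganeti-site-hooks.log"  # site-specific assumption

  with open(LOG, "a") as logfile:
      logfile.write("%s %s %s\n" % (time.strftime("%Y-%m-%d %H:%M:%S"),
                                    os.environ.get("GANETI_OP_CODE", "?"),
                                    os.environ.get("GANETI_OBJECT_TYPE", "?")))
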

Naming
~~~~~~

The allowed names for the scripts consist of (similar to *run-parts*)
upper and lower case letters, digits, underscores and hyphens; in
other words, the regexp ``^[a-zA-Z0-9_-]+$``. Also, non-executable
scripts will be ignored.

Order of execution
~~~~~~~~~~~~~~~~~~

On a single node, the scripts in a directory are run in lexicographic
order (more exactly, the python string comparison order). It is
advisable to implement the usual *NN-name* convention where *NN* is a
two-digit number.

For an operation whose hooks are run on multiple nodes, there is no
specific ordering of nodes with regard to hooks execution; you should
assume that the scripts are run in parallel on the target nodes
(keeping on each node the above specified ordering). If you need any
kind of inter-node synchronisation, you have to implement it yourself
in the scripts.

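
Both the naming rule and the per-node ordering can be reproduced in a
few lines of Python; the following sketch shows only the selection
logic, it is not ganeti's actual implementation::

  # Sketch: which hooks in a directory would run, and in what order.
  import os
  import re

  VALID_NAME = re.compile(r"^[a-zA-Z0-9_-]+$")

  def runnable_hooks(hooks_dir):
      names = []
      for name in os.listdir(hooks_dir):
          if not VALID_NAME.match(name):
              continue  # e.g. "10-log.sh" is skipped: dots are not allowed
          if not os.access(os.path.join(hooks_dir, name), os.X_OK):
              continue  # non-executable entries are ignored
          names.append(name)
      return sorted(names)  # string order, so "10-first" < "20-second"
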

Execution environment
~~~~~~~~~~~~~~~~~~~~~

The scripts will be run as follows:

- no command line arguments

- no controlling *tty*

- stdin is actually */dev/null*

- stdout and stderr are directed to files

- PATH is reset to ``/sbin:/bin:/usr/sbin:/usr/bin``

- the environment is cleared, and only ganeti-specific variables will
  be left

All information about the cluster is passed using environment
variables. Different operations will have slightly different
environments, but most of the variables are common.


Operation list
--------------

Node operations
~~~~~~~~~~~~~~~

OP_ADD_NODE
+++++++++++

Adds a node to the cluster.

:directory: node-add
:env. vars: NODE_NAME, NODE_PIP, NODE_SIP
:pre-execution: all existing nodes
:post-execution: all nodes plus the new node

OP_REMOVE_NODE
++++++++++++++

Removes a node from the cluster. On the removed node the hooks are
called during the execution of the operation and not after its
completion.

:directory: node-remove
:env. vars: NODE_NAME
:pre-execution: all existing nodes except the removed node
:post-execution: all existing nodes

OP_NODE_SET_PARAMS
++++++++++++++++++

Changes a node's parameters.

:directory: node-modify
:env. vars: MASTER_CANDIDATE, OFFLINE, DRAINED
:pre-execution: master node, the target node
:post-execution: master node, the target node

OP_NODE_EVACUATE
++++++++++++++++

Relocates secondary instances from a node.

:directory: node-evacuate
:env. vars: NEW_SECONDARY, NODE_NAME
:pre-execution: master node, target node
:post-execution: master node, target node

OP_NODE_MIGRATE
+++++++++++++++

Migrates all primary instances off a node.

:directory: node-migrate
:env. vars: NODE_NAME
:pre-execution: master node
:post-execution: master node

Node group operations
~~~~~~~~~~~~~~~~~~~~~

OP_ADD_GROUP
++++++++++++

Adds a node group to the cluster.

:directory: group-add
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_REMOVE_GROUP
+++++++++++++++

Removes a node group from the cluster. Since the node group must be
empty for removal to succeed, the concept of "nodes in the group" does
not exist, and the hook is only executed on the master node.

:directory: group-remove
:env. vars: GROUP_NAME
:pre-execution: master node
:post-execution: master node

OP_RENAME_GROUP
+++++++++++++++

Renames a node group.

:directory: group-rename
:env. vars: OLD_NAME, NEW_NAME
:pre-execution: master node and all nodes in the group
:post-execution: master node and all nodes in the group

Instance operations
~~~~~~~~~~~~~~~~~~~

All instance operations take at least the following variables:
INSTANCE_NAME, INSTANCE_PRIMARY, INSTANCE_SECONDARIES,
INSTANCE_OS_TYPE, INSTANCE_DISK_TEMPLATE, INSTANCE_MEMORY,
INSTANCE_DISK_SIZES, INSTANCE_VCPUS, INSTANCE_NIC_COUNT,
INSTANCE_NICn_IP, INSTANCE_NICn_BRIDGE, INSTANCE_NICn_MAC,
INSTANCE_DISK_COUNT, INSTANCE_DISKn_SIZE, INSTANCE_DISKn_MODE.

The INSTANCE_NICn_* and INSTANCE_DISKn_* variables represent the
properties of the *n*-th NIC and disk, and are zero-indexed.

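
Since the counts are exported alongside the per-index variables, a
hook can enumerate all disks and NICs with a simple loop, as in this
Python sketch::

  # Sketch: walk the zero-indexed disk and NIC variables.
  import os

  def env(name, default=""):
      return os.environ.get("GANETI_" + name, default)

  for n in range(int(env("INSTANCE_DISK_COUNT", "0"))):
      print("disk %d: %s MiB, mode %s" %
            (n, env("INSTANCE_DISK%d_SIZE" % n),
             env("INSTANCE_DISK%d_MODE" % n)))

  for n in range(int(env("INSTANCE_NIC_COUNT", "0"))):
      print("nic %d: mac %s, ip %s, bridge %s" %
            (n, env("INSTANCE_NIC%d_MAC" % n), env("INSTANCE_NIC%d_IP" % n),
             env("INSTANCE_NIC%d_BRIDGE" % n)))
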

OP_INSTANCE_ADD
+++++++++++++++

Creates a new instance.

:directory: instance-add
:env. vars: ADD_MODE, SRC_NODE, SRC_PATH, SRC_IMAGES
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REINSTALL
+++++++++++++++++++++

Reinstalls an instance.

:directory: instance-reinstall
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_BACKUP_EXPORT
++++++++++++++++

Exports the instance.

:directory: instance-export
:env. vars: EXPORT_NODE, EXPORT_DO_SHUTDOWN
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_START
+++++++++++++++++

Starts an instance.

:directory: instance-start
:env. vars: FORCE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_SHUTDOWN
++++++++++++++++++++

Stops an instance.

:directory: instance-stop
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REBOOT
++++++++++++++++++

Reboots an instance.

:directory: instance-reboot
:env. vars: IGNORE_SECONDARIES, REBOOT_TYPE
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MODIFY
++++++++++++++++++

Modifies the instance parameters.

:directory: instance-modify
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_FAILOVER
++++++++++++++++++++

Fails over an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARIES refer to the nodes that were respectively primary
and secondary before failover.

:directory: instance-failover
:env. vars: IGNORE_CONSISTENCY, OLD_SECONDARY, OLD_PRIMARY, NEW_SECONDARY, NEW_PRIMARY
:pre-execution: master node, secondary node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MIGRATE
+++++++++++++++++++

Migrates an instance. In the post phase INSTANCE_PRIMARY and
INSTANCE_SECONDARIES refer to the nodes that were respectively primary
and secondary before migration.

:directory: instance-migrate
:env. vars: MIGRATE_LIVE, MIGRATE_CLEANUP, OLD_SECONDARY, OLD_PRIMARY, NEW_SECONDARY, NEW_PRIMARY
:pre-execution: master node, secondary node
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REMOVE
++++++++++++++++++

Removes an instance.

:directory: instance-remove
:env. vars: only the standard instance vars
:pre-execution: master node
:post-execution: master node, primary and secondary nodes


OP_INSTANCE_GROW_DISK
+++++++++++++++++++++

Grows a disk of an instance.

:directory: disk-grow
:env. vars: DISK, AMOUNT
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_RENAME
++++++++++++++++++

Renames an instance.

:directory: instance-rename
:env. vars: INSTANCE_NEW_NAME
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_MOVE
++++++++++++++++

Moves an instance by copying its data.

:directory: instance-move
:env. vars: TARGET_NODE
:pre-execution: master node, primary and target nodes
:post-execution: master node, primary and target nodes

OP_INSTANCE_RECREATE_DISKS
++++++++++++++++++++++++++

Recreates an instance's missing disks.

:directory: instance-recreate-disks
:env. vars: only the standard instance vars
:pre-execution: master node, primary and secondary nodes
:post-execution: master node, primary and secondary nodes

OP_INSTANCE_REPLACE_DISKS
+++++++++++++++++++++++++

Replaces the disks of an instance.

:directory: mirrors-replace
:env. vars: MODE, NEW_SECONDARY, OLD_SECONDARY
:pre-execution: master node, primary and new secondary nodes
:post-execution: master node, primary and new secondary nodes

Cluster operations
~~~~~~~~~~~~~~~~~~

OP_POST_INIT_CLUSTER
++++++++++++++++++++

This hook is called via a special "empty" LU right after cluster
initialization.

:directory: cluster-init
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_DESTROY_CLUSTER
++++++++++++++++++

The post phase of this hook is called during the execution of the
destroy operation and not after its completion.

:directory: cluster-destroy
:env. vars: none
:pre-execution: none
:post-execution: master node

OP_CLUSTER_VERIFY
+++++++++++++++++

Verifies the cluster status. This is a special LU with regard to
hooks, as the result of the opcode will be combined with the result of
post-execution hooks, in order to allow administrators to enhance the
cluster verification procedure.

:directory: cluster-verify
:env. vars: CLUSTER, MASTER, CLUSTER_TAGS, NODE_TAGS_<name>
:pre-execution: none
:post-execution: all nodes

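
Since post-execution hook results are folded into the verification
output, a node-local site check can be surfaced this way. The check
below (root filesystem usage) is merely an illustrative assumption,
not something ganeti ships::

  #!/usr/bin/env python
  # Hypothetical cluster-verify post hook: complain when the root
  # filesystem on this node is more than 90% full.
  import os
  import sys

  st = os.statvfs("/")
  used = 1.0 - float(st.f_bavail) / st.f_blocks
  if used > 0.90:
      sys.stderr.write("root filesystem is %.0f%% full\n" % (used * 100))
      sys.exit(1)
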

OP_CLUSTER_RENAME
+++++++++++++++++

Renames the cluster.

:directory: cluster-rename
:env. vars: NEW_NAME
:pre-execution: master node
:post-execution: master node

OP_CLUSTER_SET_PARAMS
+++++++++++++++++++++

Modifies the cluster parameters.

:directory: cluster-modify
:env. vars: NEW_VG_NAME
:pre-execution: master node
:post-execution: master node

Obsolete operations
~~~~~~~~~~~~~~~~~~~

The following operations are no longer present or don't execute hooks
anymore in Ganeti 2.0:

- OP_INIT_CLUSTER
- OP_MASTER_FAILOVER
- OP_INSTANCE_ADD_MDDRBD
- OP_INSTANCE_REMOVE_MDDRBD

Environment variables
---------------------

Note that all variables listed here are actually prefixed with
*GANETI_* in order to provide a clear namespace.

Common variables
~~~~~~~~~~~~~~~~

This is the list of environment variables supported by all operations:

HOOKS_VERSION
  Documents the hooks interface version. If this doesn't match what
  the script expects, it should not run. This document describes
  version 2 of the interface.

HOOKS_PHASE
  One of *PRE* or *POST*, denoting which phase we are in.

CLUSTER
  The cluster name.

MASTER
  The master node.

OP_CODE
  One of the *OP_* values from the list of operations.

OBJECT_TYPE
  One of ``INSTANCE``, ``NODE``, ``CLUSTER``.

DATA_DIR
  The path to the Ganeti configuration directory (to read, for
  example, the *ssconf* files).

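
Putting the common variables together, a typical hook starts by
checking the interface version before doing anything else. A minimal
sketch follows; whether to deny or to do nothing on a version mismatch
is a site policy choice::

  # Sketch of a defensive hook preamble using the common variables.
  import os
  import sys

  def main():
      if os.environ.get("GANETI_HOOKS_VERSION") != "2":
          return 0  # unknown interface: do nothing rather than misbehave
      phase = os.environ.get("GANETI_HOOKS_PHASE")
      op_code = os.environ.get("GANETI_OP_CODE")
      if phase == "pre":
          pass  # site-specific checks for op_code go here
      else:
          pass  # site-specific logging for op_code goes here
      return 0

  if __name__ == "__main__":
      sys.exit(main())
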

Specialised variables
~~~~~~~~~~~~~~~~~~~~~

This is the list of variables which are specific to one or more
operations.

INSTANCE_NAME
  The name of the instance which is the target of the operation.

INSTANCE_DISK_TEMPLATE
  The disk type for the instance.

INSTANCE_DISK_COUNT
  The number of disks for the instance.

INSTANCE_DISKn_SIZE
  The size of disk *n* for the instance.

INSTANCE_DISKn_MODE
  Either *rw* for a read-write disk or *ro* for a read-only one.

INSTANCE_NIC_COUNT
  The number of NICs for the instance.

INSTANCE_NICn_BRIDGE
  The bridge to which the *n*-th NIC of the instance is attached.

INSTANCE_NICn_IP
  The IP (if any) of the *n*-th NIC of the instance.

INSTANCE_NICn_MAC
  The MAC address of the *n*-th NIC of the instance.

INSTANCE_OS_TYPE
  The name of the instance OS.

INSTANCE_PRIMARY
  The name of the node which is the primary for the instance. Note
  that for migrations/failovers you shouldn't rely on this variable,
  since the nodes change during the execution; rely on the
  OLD_PRIMARY/NEW_PRIMARY values instead.

INSTANCE_SECONDARIES
  Space-separated list of secondary nodes for the instance. Note
  that for migrations/failovers you shouldn't rely on this variable,
  since the nodes change during the execution; rely on the
  OLD_SECONDARY/NEW_SECONDARY values instead.

INSTANCE_MEMORY
  The memory size (in MiB) of the instance.

INSTANCE_VCPUS
  The number of virtual CPUs for the instance.

INSTANCE_STATUS
  The run status of the instance.

NODE_NAME
  The target node of this operation (not the node on which the hook
  runs).

NODE_PIP
  The primary IP of the target node (the one over which inter-node
  communication is done).

NODE_SIP
  The secondary IP of the target node (the one over which DRBD
  replication is done). This can be equal to the primary IP, in case
  the cluster is not dual-homed.

FORCE
  Provided by some operations when the user specified this flag.

IGNORE_CONSISTENCY
  The user has specified this flag. It is used when failing over
  instances in case the primary node is down.

ADD_MODE
  The mode of the instance creation: either *create* for creation
  from scratch or *import* for restoring from an exported image.

SRC_NODE, SRC_PATH, SRC_IMAGE
  In case the instance has been added by import, these variables are
  defined and point to the source node, source path (the directory
  containing the image and the config file) and the source disk image
  file.

NEW_SECONDARY
  The name of the node on which the new mirror component is being
  added (for replace disk). This can be the name of the current
  secondary, if the new mirror is on the same secondary. For
  migrations/failovers, this is the old primary node.

OLD_SECONDARY
  The name of the old secondary in the replace-disks command. Note
  that this can be equal to the new secondary if the secondary node
  hasn't actually changed. For migrations/failovers, this is the new
  primary node.

OLD_PRIMARY, NEW_PRIMARY
  For migrations/failovers, the old and respectively new primary
  nodes. These two mirror the NEW_SECONDARY/OLD_SECONDARY variables.

EXPORT_NODE
  The node to which the instance was exported.

EXPORT_DO_SHUTDOWN
  This variable tells whether the instance was shut down while doing
  the export. In the "was shut down" case, it's likely that the
  filesystem is consistent, whereas in the "was not shut down" case,
  the filesystem would need a check (journal replay or full fsck) in
  order to guarantee consistency.

CLUSTER_TAGS
  The list of cluster tags, space separated.

NODE_TAGS_<name>
  The list of tags for node *<name>*, space separated.

Examples
--------

The startup of an instance will pass this environment to the hook
script::

  GANETI_CLUSTER=cluster1.example.com
  GANETI_DATA_DIR=/var/lib/ganeti
  GANETI_FORCE=False
  GANETI_HOOKS_PATH=instance-start
  GANETI_HOOKS_PHASE=post
  GANETI_HOOKS_VERSION=2
  GANETI_INSTANCE_DISK0_MODE=rw
  GANETI_INSTANCE_DISK0_SIZE=128
  GANETI_INSTANCE_DISK_COUNT=1
  GANETI_INSTANCE_DISK_TEMPLATE=drbd
  GANETI_INSTANCE_MEMORY=128
  GANETI_INSTANCE_NAME=instance2.example.com
  GANETI_INSTANCE_NIC0_BRIDGE=xen-br0
  GANETI_INSTANCE_NIC0_IP=
  GANETI_INSTANCE_NIC0_MAC=aa:00:00:a5:91:58
  GANETI_INSTANCE_NIC_COUNT=1
  GANETI_INSTANCE_OS_TYPE=debootstrap
  GANETI_INSTANCE_PRIMARY=node3.example.com
  GANETI_INSTANCE_SECONDARIES=node5.example.com
  GANETI_INSTANCE_STATUS=down
  GANETI_INSTANCE_VCPUS=1
  GANETI_MASTER=node1.example.com
  GANETI_OBJECT_TYPE=INSTANCE
  GANETI_OP_CODE=OP_INSTANCE_STARTUP
  GANETI_OP_TARGET=instance2.example.com

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: