HBAL(1) Ganeti | Version @GANETI_VERSION@
=========================================

NAME
----

hbal \- Cluster balancer for Ganeti

SYNOPSIS
--------

**hbal** {backend options...} [algorithm options...] [reporting options...]
{ **-m** *cluster* | **-L[** *path* **] [-X]** | **-t** *data-file* |
**-I** *path* }

**[ \--max-cpu *cpu-ratio* ]**
**[ \--min-disk *disk-ratio* ]**
**[ -g *delta* ]** **[ \--min-gain-limit *threshold* ]**
**[ \--no-disk-moves ]**
**[ \--no-instance-moves ]**
**[ -U *util-file* ]**
**[ \--evac-mode ]**
**[ \--select-instances *inst...* ]**
**[ \--exclude-instances *inst...* ]**

**[ -p[ *fields* ] ]**
**[ \--print-instances ]**
DESCRIPTION
-----------

hbal is a cluster balancer that looks at the current state of the
cluster (nodes with their total and free disk, memory, etc.) and
instance placement and computes a series of steps designed to bring
the cluster into a better state.

The algorithm used is designed to be stable (i.e. it will give you the
same results when restarting it from the middle of the solution) and
reasonably fast. It is not, however, designed to be a perfect algorithm:
it is possible to make it go into a corner from which it can find no
improvement, because it looks only one "step" ahead.

By default, the program will show the solution incrementally as it is
computed, in a somewhat cryptic format; to get the actual Ganeti
command list, use the **-C** option.
ALGORITHM
---------

The program works in independent steps; at each step, we compute the
best instance move that lowers the cluster score.

The possible move types for an instance are combinations of
failover/migrate and replace-disks such that we change one of the
instance nodes, and the other one remains (but possibly with a changed
role, e.g. from primary it becomes secondary). The list is:

- failover (f)
- replace secondary (r)
- replace primary, a composite move (f, r, f)
- failover and replace secondary, also composite (f, r)
- replace secondary and failover, also composite (r, f)

We don't do the only remaining possibility of replacing both nodes
(r,f,r,f or the equivalent f,r,f,r), since this move needs an
exhaustive search over both candidate primary and secondary nodes, and
is O(n*n) in the number of nodes. Furthermore, it doesn't seem to
give better scores but will result in more disk replacements.
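The per-step search described above can be sketched as follows. This is a minimal illustration with hypothetical names, not actual Ganeti/htools code: for each candidate node, the four moves that change exactly one of the instance's two nodes are enumerated.

```python
# Illustrative sketch (hypothetical names, not Ganeti code) of the
# candidate moves hbal considers for a DRBD instance with primary
# node "pri" and secondary node "sec"; "n" is a candidate new node.
def candidate_moves(pri, sec, nodes):
    for n in nodes:
        if n in (pri, sec):
            continue
        # each move changes exactly one of the two instance nodes;
        # the second and third tuple fields are the resulting
        # (primary, secondary) pair
        yield ("replace-secondary", pri, n)            # r
        yield ("replace-primary", n, sec)              # f, r, f
        yield ("failover-replace-secondary", sec, n)   # f, r
        yield ("replace-secondary-failover", n, pri)   # r, f
```

A full replace-both-nodes search would instead iterate over all ordered pairs of candidate nodes, which is exactly the O(n*n) search the text rules out.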
PLACEMENT RESTRICTIONS
~~~~~~~~~~~~~~~~~~~~~~

At each step, we prevent an instance move if it would cause:

- a node to go into N+1 failure state
- an instance to move onto an offline node (offline nodes are either
  read from the cluster or declared with *-O*; drained nodes are
  considered offline)
- an exclusion-tag based conflict (exclusion tags are read from the
  cluster and/or defined via the *\--exclusion-tags* option)
- the maximum vcpu/pcpu ratio to be exceeded (configured via
  *\--max-cpu*)
- the minimum free disk percentage to go below the configured limit
  (configured via *\--min-disk*)
CLUSTER SCORING
~~~~~~~~~~~~~~~

As said before, the algorithm tries to minimise the cluster score at
each step. Currently this score is computed as a weighted sum of the
following components:

- standard deviation of the percent of free memory
- standard deviation of the percent of reserved memory
- standard deviation of the percent of free disk
- count of nodes failing the N+1 check
- count of instances living (either as primary or secondary) on
  offline nodes; in the sense of hbal (and the other htools), drained
  nodes are considered offline
- count of instances living (as primary) on offline nodes; this
  differs from the above metric by helping failover of such instances
- standard deviation of the ratio of virtual-to-physical cpus (for
  primary instances of the node)
- standard deviation of the dynamic load on the nodes, for cpus,
  memory, disk and network

The free memory and free disk values help ensure that all nodes are
somewhat balanced in their resource usage. The reserved memory helps
ensure that nodes are somewhat balanced in holding secondary
instances, and that no node keeps too much memory reserved for
N+1. And finally, the N+1 percentage helps guide the algorithm towards
eliminating N+1 failures, if possible.
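The weighted-sum idea can be sketched in a few lines. The field names and unit weights below are hypothetical, chosen only to illustrate the structure; hbal's actual metrics and coefficients differ.

```python
import statistics

# Illustrative sketch of a weighted-sum cluster score: standard
# deviations of per-node percentages (values in [0, 1]) plus raw
# counts for the hard constraints. Field names and weights are
# hypothetical, not hbal's real ones.
def cluster_score(nodes, weights=(1.0, 1.0, 1.0, 1.0, 1.0)):
    w_mem, w_rmem, w_dsk, w_n1, w_off = weights
    score = 0.0
    score += w_mem * statistics.pstdev(n["free_mem_pct"] for n in nodes)
    score += w_rmem * statistics.pstdev(n["reserved_mem_pct"] for n in nodes)
    score += w_dsk * statistics.pstdev(n["free_disk_pct"] for n in nodes)
    # counts dominate the (small) standard deviations, acting as
    # soft-but-strong constraints
    score += w_n1 * sum(1 for n in nodes if n["n1_fail"])
    score += w_off * sum(n["off_instances"] for n in nodes)
    return score
```

On a perfectly balanced cluster (identical percentages everywhere, no N+1 failures, no offline instances) every term is zero, matching the "all metrics would be zero" remark below.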
Except for the N+1 failures and offline instances counts, we use the
standard deviation since, when used with values within a fixed range
(we use percents expressed as values between zero and one), it gives
consistent results across all metrics (there are some small issues
related to different means, but it works generally well). The 'count'
type values will have higher scores and thus will matter more for
balancing; thus these are better for hard constraints (like evacuating
nodes and fixing N+1 failures). For example, the offline instances
count (i.e. the number of instances living on offline nodes) will
cause the algorithm to actively move instances away from offline
nodes. This, coupled with the restriction on placement given by
offline nodes, will cause evacuation of such nodes.
The dynamic load values need to be read from an external file (Ganeti
doesn't supply them), and are computed for each node as: sum of
primary instance cpu load, sum of primary instance memory load, sum of
primary and secondary instance disk load (as DRBD generates write load
on secondary nodes too in the normal case, and in degraded scenarios
also read load), and sum of primary instance network load. An example
of how to generate these values for input to hbal would be to track
``xm list`` for instances over a day, compute the delta of the cpu
values, and feed that via the *-U* option for all instances (keeping
the other metrics as one). For the algorithm to work, all that is
needed is that the values are consistent for a metric across all
instances (e.g. all instances use cpu% to report cpu usage, and not
something related to the number of CPU seconds used if the CPUs are
different), and that they are normalised to between zero and one. Note
that it's recommended not to have zero as the load value for any
instance metric, since then secondary instances are not well balanced.
On a perfectly balanced cluster (all nodes the same size, all
instances the same size and spread across the nodes equally), the
values for all metrics would be zero. This doesn't happen too often in
practice.
OFFLINE INSTANCES
~~~~~~~~~~~~~~~~~

Since current Ganeti versions do not report the memory used by offline
(down) instances, ignoring the run status of instances will cause
wrong calculations. For this reason, the algorithm subtracts the
memory size of down instances from the free node memory of their
primary node, in effect simulating the startup of such instances.
EXCLUSION TAGS
~~~~~~~~~~~~~~

The exclusion tags mechanism is designed to prevent instances which
run the same workload (e.g. two DNS servers) from landing on the same
node, which would make the respective node a SPOF for the given
service.

It works by tagging instances with certain tags and then building
exclusion maps based on these. Which tags are actually used is
configured either via the command line (option *\--exclusion-tags*)
or via adding them to the cluster tags:

\--exclusion-tags=a,b
  This will make all instance tags of the form *a:\**, *b:\** be
  considered for the exclusion map

cluster tags *htools:iextags:a*, *htools:iextags:b*
  This will make instance tags *a:\**, *b:\** be considered for the
  exclusion map. More precisely, the suffix of cluster tags starting
  with *htools:iextags:* will become the prefix of the exclusion tags.

Both the above forms mean that two instances both having (e.g.) the
tag *a:foo* or *b:bar* won't end up on the same node.
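The exclusion-map idea can be sketched as follows; the helper names are hypothetical (this is not Ganeti code). Two instances conflict when they share a tag under one of the configured exclusion prefixes.

```python
# Hypothetical sketch of the exclusion-tag conflict check: collect
# each instance's tags that fall under a configured prefix, then
# flag a conflict if two instances share any such tag.
def exclusion_keys(instance_tags, prefixes):
    return {t for t in instance_tags
            for p in prefixes if t.startswith(p + ":")}

def conflict(tags_a, tags_b, prefixes=("a", "b")):
    # True when placing both instances on one node would make that
    # node a SPOF for the shared workload
    return bool(exclusion_keys(tags_a, prefixes) &
                exclusion_keys(tags_b, prefixes))
```

For example, two instances both tagged *a:foo* conflict, while *a:foo* and *a:bar* do not.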
OPTIONS
-------

The options that can be passed to the program are as follows:

-C, \--print-commands
  Print the command list at the end of the run. Without this, the
  program will only show a shorter, but cryptic, output.

  Note that the moves list will be split into independent steps,
  called "jobsets", but only for visual inspection, not for actual
  parallelisation. It is not possible to parallelise these directly
  when executed via "gnt-instance" commands, since a compound command
  (e.g. failover and replace-disks) must be executed
  serially. Parallel execution is only possible when using the Luxi
  backend and the *-L* option.
  The algorithm for splitting the moves into jobsets is to
  accumulate moves until the next move touches nodes already
  touched by the current moves; this means we can't execute them in
  parallel (due to resource allocation in Ganeti) and thus we start a
  new jobset.
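The splitting rule just described can be sketched as follows (illustrative names, not Ganeti code): each move records which nodes it touches, and a new jobset starts whenever a move collides with the current set.

```python
# Sketch of the jobset-splitting rule: accumulate moves until the
# next one touches a node already touched in the current jobset,
# then close the jobset and start a new one. A "move" here is a
# hypothetical (instance, nodes_touched) pair.
def split_jobsets(moves):
    jobsets, current, touched = [], [], set()
    for move in moves:
        _, nodes = move
        if touched & set(nodes):      # conflicts with current jobset
            jobsets.append(current)
            current, touched = [], set()
        current.append(move)
        touched |= set(nodes)
    if current:
        jobsets.append(current)
    return jobsets
```

Moves inside one jobset touch disjoint node sets and could therefore run in parallel; the jobsets themselves run one after another.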
-p, \--print-nodes
  Prints the before and after node status, in a format designed to allow
  the user to understand the node's most important parameters. See the
  man page **htools**\(1) for more details about this option.

\--print-instances
  Prints the before and after instance map. This is less useful than the
  node status, but it can help in understanding instance moves.
-O *name*
  This option (which can be given multiple times) will mark nodes as
  being *offline*. This means a couple of things:

  - instances won't be placed on these nodes, not even temporarily;
    e.g. the *replace primary* move is not available if the secondary
    node is offline, since this move requires a failover.
  - these nodes will not be included in the score calculation (except
    for the percentage of instances on offline nodes)

  Note that the algorithm will also mark as offline any nodes which
  are reported by RAPI as such, or that have "?" in file-based input
  in the status field.
-e *score*, \--min-score=*score*
  This parameter denotes the minimum score we are happy with and alters
  the computation in two ways:

  - if the cluster has an initial score lower than this value, then we
    don't enter the algorithm at all, and exit with success
  - during the iterative process, if we reach a score lower than this
    value, we exit the algorithm

  The default value of the parameter is currently ``1e-9`` (chosen
  empirically).
-g *delta*, \--min-gain=*delta*
  Since the balancing algorithm can sometimes result in just very tiny
  improvements that bring less gain than they cost in relocation
  time, this parameter (defaulting to 0.01) represents the minimum
  gain we require during a step in order to continue balancing.
\--min-gain-limit=*threshold*
  The above min-gain option will only take effect if the cluster score
  is already below *threshold* (defaults to 0.1). The rationale behind
  this setting is that at high cluster scores (badly balanced
  clusters), we don't want to abort the rebalance too quickly, as
  later gains might still be significant. However, under the
  threshold, the total remaining gain is at most the threshold value,
  so we can abort early.
\--no-disk-moves
  This parameter prevents hbal from using disk move
  (i.e. "gnt-instance replace-disks") operations. This will result in
  much quicker balancing, but of course the improvements are
  limited. It is up to the user to decide when to use one or the other.

\--no-instance-moves
  This parameter prevents hbal from using instance move
  (i.e. "gnt-instance migrate/failover") operations. hbal will then
  only use the slower disk-replacement operations, and will also
  provide a worse balance, but this can be useful if moving instances
  around is deemed unsafe or not preferred.
\--evac-mode
  This parameter restricts the list of instances considered for moving
  to the ones living on offline/drained nodes. It can be used as a
  (bulk) replacement for Ganeti's own *gnt-node evacuate*, with the
  note that it doesn't guarantee full evacuation.

\--select-instances=*instances*
  This parameter marks the given instances (as a comma-separated list)
  as the only ones being moved during the rebalance.

\--exclude-instances=*instances*
  This parameter excludes the given instances (as a comma-separated
  list) from being moved during the rebalance.
-U *util-file*
  This parameter specifies a file holding instance dynamic utilisation
  information that will be used to tweak the balancing algorithm to
  equalise load on the nodes (as opposed to static resource
  usage). The file is in the format "instance_name cpu_util mem_util
  disk_util net_util" where the "_util" parameters are interpreted as
  numbers and the instance name must match exactly the instance as
  read from Ganeti. In case of unknown instance names, the program
  will abort.

  If not given, the default values are one for all metrics and thus
  dynamic utilisation has only one effect on the algorithm: the
  equalisation of the secondary instances across nodes (this is the
  only metric that is not tracked by another, dedicated value, and
  thus the disk load of instances will cause secondary instance
  equalisation). Note that a value of one will also slightly influence
  the primary instance count, but that is already tracked via other
  metrics and thus the influence of the dynamic utilisation will be
  practically insignificant.
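A reader of the "-U" file as described above can be sketched in a few lines (a hypothetical helper, not hbal's actual parser; skipping blank lines is an assumption for readability):

```python
# Sketch of a parser for the utilisation file format described
# above: one instance per line, four load values normalised to
# [0, 1] (or 1.0 to keep a metric neutral). Hypothetical helper,
# not hbal's real reader; blank-line skipping is an assumption.
def parse_util_file(text):
    loads = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        name, cpu, mem, dsk, net = line.split()
        loads[name] = tuple(float(v) for v in (cpu, mem, dsk, net))
    return loads
```

For example, the line ``instance1 0.5 0.3 0.2 0.1`` yields the load tuple ``(0.5, 0.3, 0.2, 0.1)`` for ``instance1``.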
-S *filename*, \--save-cluster=*filename*
  If given, the state of the cluster before the balancing is saved to
  the given file plus the extension "original"
  (i.e. *filename*.original), and the state at the end of the
  balancing is saved to the given file plus the extension "balanced"
  (i.e. *filename*.balanced). This allows re-feeding the cluster state
  to either hbal itself or for example hspace via the ``-t`` option.
-t *datafile*, \--text-data=*datafile*
  Backend specification: the name of the file holding node and instance
  information (if not collecting via RAPI or LUXI). This or one of the
  other backends must be selected. The option is described in the man
  page **htools**\(1).

-m *cluster*
  Backend specification: collect data directly from the *cluster* given
  as an argument via RAPI. The option is described in the man page
  **htools**\(1).

-L [*path*]
  Backend specification: collect data directly from the master daemon,
  which is to be contacted via LUXI (an internal Ganeti protocol). The
  option is described in the man page **htools**\(1).
-X
  When using the Luxi backend, hbal can also execute the given
  commands. The execution method is to execute the individual jobsets
  (see the *-C* option for details) in separate stages, aborting if at
  any time a jobset doesn't have all jobs successful. Each step in the
  balancing solution will be translated into exactly one Ganeti job
  (having between one and three OpCodes), and all the steps in a
  jobset will be executed in parallel. The jobsets themselves are
  executed serially.

  The execution of the job series can be interrupted; see below for
  details.
-l *N*, \--max-length=*N*
  Restrict the solution to this length. This can be used for example
  to automate the execution of the balancing.

\--max-cpu=*cpu-ratio*
  The maximum virtual-to-physical cpu ratio, as a floating point number
  greater than or equal to one. For example, specifying *cpu-ratio* as
  **2.5** means that, for a 4-cpu machine, a maximum of 10 virtual cpus
  should be allowed to be in use for primary instances. A value of
  exactly one means there will be no over-subscription of CPU (except
  for the CPU time used by the node itself), and values below one do not
  make sense, as that means other resources (e.g. disk) won't be fully
  utilised due to CPU restrictions.

\--min-disk=*disk-ratio*
  The minimum amount of free disk space remaining, as a floating point
  number. For example, specifying *disk-ratio* as **0.25** means that
  at least one quarter of disk space should be left free on nodes.
-G *uuid*, \--group=*uuid*
  On a multi-group cluster, select this group for
  processing. Otherwise hbal will abort, since it cannot balance
  multiple groups at the same time.

-v, \--verbose
  Increase the output verbosity. Each usage of this option will
  increase the verbosity (currently more than 2 doesn't make sense)
  from the default of one.

-q, \--quiet
  Decrease the output verbosity. Each usage of this option will
  decrease the verbosity (less than zero doesn't make sense) from the
  default of one.

-V, \--version
  Just show the program version and exit.
SIGNAL HANDLING
---------------

When executing jobs via LUXI (using the ``-X`` option), normally hbal
will execute all jobs until either one errors out or all the jobs finish
successfully.

Since balancing can take a long time, it is possible to stop hbal early
in two ways:

- by sending a ``SIGINT`` (``^C``), hbal will register the termination
  request, and will wait until the currently submitted jobs finish, at
  which point it will exit (with exit code 0 if all jobs finished
  correctly, otherwise with exit code 1 as usual)

- by sending a ``SIGTERM``, hbal will immediately exit (with exit code
  2\); it is the responsibility of the user to follow up with Ganeti
  and check the result of the currently-executing jobs

Note that in any situation, it's perfectly safe to kill hbal, either via
the above signals or via any other signal (e.g. ``SIGQUIT``,
``SIGKILL``), since the jobs themselves are processed by Ganeti whereas
hbal (after submission) only watches their progression. In this case,
the user will have to query Ganeti for job results.
EXIT STATUS
-----------

The exit status of the command will be zero, unless for some reason
the algorithm failed (e.g. wrong node or instance data), invalid
command line options were given, or (in case of job execution) one of
the jobs failed.

Once job execution via Luxi has started (``-X``), if the balancing was
interrupted early (via *SIGINT*, or via ``--max-length``) but all jobs
executed successfully, then the exit status is zero; a non-zero exit
code means that the cluster state should be investigated, since a job
failed or we couldn't compute its status, and this can also point to a
problem on the Ganeti side.
BUGS
----

The program does not check all its input data for consistency, and
sometimes aborts with cryptic error messages when given invalid data.

The algorithm is not perfect.
EXAMPLES
--------

Note that these examples are not for the latest version (they don't
have full node data).

With the default options, the program shows each individual step and
the improvement it brings in cluster score::
  Loaded 20 nodes, 80 instances
  Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
  Initial score: 0.52329131
  Trying to minimize the CV...
   1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
   2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
   3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
   4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
   5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
   6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
   7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
   8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
   9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
  10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
  11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
  12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
  13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
  14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
  15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
  Cluster score improved from 0.52329131 to 0.00252594
In the above output, we can see:

- the input data (here from files) shows a cluster with 20 nodes and
  80 instances
- the cluster is not initially N+1 compliant
- the initial score is 0.52329131

The step list follows, showing the instance, its initial
primary/secondary nodes, the new primary/secondary nodes, the cluster
score after the move, and the actions taken in this step (with 'f'
denoting failover/migrate and 'r' denoting replace secondary).

Finally, the program shows the improvement in cluster score.
A more detailed output is obtained via the *-C* and *-p* options::
  Loaded 20 nodes, 80 instances
  Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
  Initial cluster status:
  N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
   * node1  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
     node2  32762 31280 12000  1861  1026   0   8 0.95476 0.55179
   * node3  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
   * node4  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
   * node5  32762  1280  6000  1861   978   5   5 0.03907 0.52573
   * node6  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
   * node7  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
     node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   * node10 32762  7280 12000  1861  1026   4   4 0.22221 0.55179
     node11 32762  7280  6000  1861   922   4   5 0.22221 0.49577
     node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node13 32762  7280  6000  1861   922   4   5 0.22221 0.49577
     node14 32762  7280  6000  1861   922   4   5 0.22221 0.49577
   * node15 32762  7280 12000  1861  1131   4   3 0.22221 0.60782
     node16 32762 31280     0  1861  1860   0   0 0.95476 1.00000
     node17 32762  7280  6000  1861  1106   5   3 0.22221 0.59479
   * node18 32762  1280  6000  1396   561   5   3 0.03907 0.40239
   * node19 32762  1280  6000  1861  1026   5   3 0.03907 0.55179
     node20 32762 13280 12000  1861   689   3   9 0.40535 0.37068

  Initial score: 0.52329131
  Trying to minimize the CV...
   1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
   2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
   3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
   4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
   5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
   6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
   7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
   8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
   9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
  10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
  11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
  12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
  13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
  14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
  15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
  Cluster score improved from 0.52329131 to 0.00252594

  Commands to run to reach the above solution:

    echo gnt-instance migrate instance14
    echo gnt-instance replace-disks -n node16 instance14
    echo gnt-instance migrate instance14

    echo gnt-instance migrate instance54
    echo gnt-instance replace-disks -n node16 instance54
    echo gnt-instance migrate instance54

    echo gnt-instance migrate instance4
    echo gnt-instance replace-disks -n node16 instance4

    echo gnt-instance replace-disks -n node2 instance48
    echo gnt-instance migrate instance48

    echo gnt-instance replace-disks -n node16 instance93
    echo gnt-instance migrate instance93

    echo gnt-instance replace-disks -n node2 instance89
    echo gnt-instance migrate instance89

    echo gnt-instance replace-disks -n node16 instance5
    echo gnt-instance migrate instance5

    echo gnt-instance migrate instance94
    echo gnt-instance replace-disks -n node16 instance94

    echo gnt-instance migrate instance44
    echo gnt-instance replace-disks -n node15 instance44

    echo gnt-instance replace-disks -n node16 instance62

    echo gnt-instance replace-disks -n node16 instance13

    echo gnt-instance replace-disks -n node7 instance19

    echo gnt-instance replace-disks -n node1 instance43

    echo gnt-instance replace-disks -n node4 instance1

    echo gnt-instance replace-disks -n node17 instance58

  Final cluster status:
  N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
     node1  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node2  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node3  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node4  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node5  32762  7280  6000  1861  1078   4   5 0.22221 0.57947
     node6  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node7  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node10 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node11 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
     node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node13 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
     node14 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
     node15 32762  7280  6000  1861  1031   4   4 0.22221 0.55408
     node16 32762  7280  6000  1861  1060   4   4 0.22221 0.57007
     node17 32762  7280  6000  1861  1006   5   4 0.22221 0.54105
     node18 32762  7280  6000  1396   761   4   2 0.22221 0.54570
     node19 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node20 32762 13280  6000  1861  1089   3   5 0.40535 0.58565
Here we see, besides the step list, the initial and final cluster
status, with the final one showing all nodes being N+1 compliant, and
the command list to reach the final solution. In the initial listing,
we see which nodes are not N+1 compliant.

The algorithm is stable as long as each step above is fully completed,
e.g. in step 8, both the migrate and the replace-disks are
done. Otherwise, if only the migrate is done, the input data is
changed in a way that the program will output a different solution
list (but hopefully will end in the same state).
.. vim: set textwidth=72 :