1 HBAL(1) Ganeti | Version @GANETI_VERSION@
2 =========================================
NAME
----

hbal \- Cluster balancer for Ganeti
SYNOPSIS
--------

**hbal** {backend options...} [algorithm options...] [reporting options...]
{ **-m** *cluster* | **-L[** *path* **]** [**-X**] | **-t** *data-file* }
23 **[ --max-cpu *cpu-ratio* ]**
24 **[ --min-disk *disk-ratio* ]**
27 **[ -g *delta* ]** **[ --min-gain-limit *threshold* ]**
29 **[ --no-disk-moves ]**
30 **[ --no-instance-moves ]**
31 **[ -U *util-file* ]**
33 **[ --exclude-instances *inst...* ]**
38 **[ -p[ *fields* ] ]**
39 **[ --print-instances ]**
DESCRIPTION
-----------

hbal is a cluster balancer that looks at the current state of the
48 cluster (nodes with their total and free disk, memory, etc.) and
49 instance placement and computes a series of steps designed to bring
50 the cluster into a better state.
52 The algorithm used is designed to be stable (i.e. it will give you the
53 same results when restarting it from the middle of the solution) and
54 reasonably fast. It is not, however, designed to be a perfect
55 algorithm--it is possible to make it go into a corner from which
56 it can find no improvement, because it looks only one "step" ahead.
58 By default, the program will show the solution incrementally as it is
59 computed, in a somewhat cryptic format; for getting the actual Ganeti
60 command list, use the **-C** option.
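For example, a quick balancing run over RAPI against a hypothetical
cluster named cluster.example.com, also printing the command list,
could look like::

  hbal -m cluster.example.com -C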
ALGORITHM
---------

The program works in independent steps; at each step, we compute the
66 best instance move that lowers the cluster score.
The possible move types for an instance are combinations of
failover/migrate and replace-disks such that we change one of the
instance nodes while the other one remains (but possibly with a
changed role, e.g. from primary it becomes secondary). The list is:
- failover (f)
- replace secondary (r)
75 - replace primary, a composite move (f, r, f)
76 - failover and replace secondary, also composite (f, r)
77 - replace secondary and failover, also composite (r, f)
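To make the composite moves concrete, here is a sketch of the
commands a *replace primary* move (f, r, f) expands to, for a
hypothetical instance0 moving from node1:node2 to node3:node2::

  # f: failover/migrate; node2 becomes primary, node1 secondary
  gnt-instance migrate instance0
  # r: replace the secondary node1 with node3
  gnt-instance replace-disks -n node3 instance0
  # f: failover/migrate again; node3 becomes primary, node2 secondary
  gnt-instance migrate instance0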
We don't do the only remaining possibility of replacing both nodes
(r,f,r,f or the equivalent f,r,f,r) since this move needs an
exhaustive search over both candidate primary and secondary nodes,
and is O(n*n) in the number of nodes. Furthermore, it doesn't seem to
give better scores but would result in more disk replacements.
85 PLACEMENT RESTRICTIONS
86 ~~~~~~~~~~~~~~~~~~~~~~
88 At each step, we prevent an instance move if it would cause:
90 - a node to go into N+1 failure state
91 - an instance to move onto an offline node (offline nodes are either
92 read from the cluster or declared with *-O*)
93 - an exclusion-tag based conflict (exclusion tags are read from the
94 cluster and/or defined via the *--exclusion-tags* option)
95 - a max vcpu/pcpu ratio to be exceeded (configured via *--max-cpu*)
96 - min disk free percentage to go below the configured limit
97 (configured via *--min-disk*)
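For example, the following invocation (with a hypothetical node name
and illustrative limits) marks node4 as offline and applies both
ratio restrictions while searching for moves::

  hbal -L -O node4 --max-cpu 2.5 --min-disk 0.2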
CLUSTER SCORING
~~~~~~~~~~~~~~~

As said before, the algorithm tries to minimise the cluster score at
each step. Currently this score is computed as a sum of the following
components:
106 - standard deviation of the percent of free memory
107 - standard deviation of the percent of reserved memory
108 - standard deviation of the percent of free disk
109 - count of nodes failing N+1 check
- count of instances living (either as primary or secondary) on
  offline nodes
112 - count of instances living (as primary) on offline nodes; this
113 differs from the above metric by helping failover of such instances
115 - standard deviation of the ratio of virtual-to-physical cpus (for
116 primary instances of the node)
117 - standard deviation of the dynamic load on the nodes, for cpus,
118 memory, disk and network
120 The free memory and free disk values help ensure that all nodes are
121 somewhat balanced in their resource usage. The reserved memory helps
122 to ensure that nodes are somewhat balanced in holding secondary
123 instances, and that no node keeps too much memory reserved for
124 N+1. And finally, the N+1 percentage helps guide the algorithm towards
125 eliminating N+1 failures, if possible.
127 Except for the N+1 failures and offline instances counts, we use the
128 standard deviation since when used with values within a fixed range
129 (we use percents expressed as values between zero and one) it gives
130 consistent results across all metrics (there are some small issues
131 related to different means, but it works generally well). The 'count'
type values will have a higher score and thus will matter more for
balancing; this makes them better suited for hard constraints (like evacuating
134 nodes and fixing N+1 failures). For example, the offline instances
135 count (i.e. the number of instances living on offline nodes) will
136 cause the algorithm to actively move instances away from offline
137 nodes. This, coupled with the restriction on placement given by
138 offline nodes, will cause evacuation of such nodes.
140 The dynamic load values need to be read from an external file (Ganeti
141 doesn't supply them), and are computed for each node as: sum of
142 primary instance cpu load, sum of primary instance memory load, sum of
143 primary and secondary instance disk load (as DRBD generates write load
144 on secondary nodes too in normal case and in degraded scenarios also
read load), and sum of primary instance network load. An example of
how to generate these values for input to hbal would be to track
``xm list`` for instances over a day, compute the delta of the cpu
values, and feed that via the *-U* option for all instances (keeping
149 the other metrics as one). For the algorithm to work, all that is
150 needed is that the values are consistent for a metric across all
151 instances (e.g. all instances use cpu% to report cpu usage, and not
152 something related to number of CPU seconds used if the CPUs are
153 different), and that they are normalised to between zero and one. Note
154 that it's recommended to not have zero as the load value for any
155 instance metric since then secondary instances are not well balanced.
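A minimal sketch of such a collection, assuming Xen's ``xm list``
output (instance name in the first column, cumulative cpu seconds in
the sixth) and hypothetical file names, could be::

  # sample the cumulative cpu time twice, one day apart
  xm list | awk 'NR > 1 && $1 != "Domain-0" { print $1, $6 }' | sort > cpu.before
  sleep 86400
  xm list | awk 'NR > 1 && $1 != "Domain-0" { print $1, $6 }' | sort > cpu.after
  # cpu load is the delta over the interval, clamped to (0, 1] since
  # zero load values are discouraged above; other metrics stay at one
  join cpu.before cpu.after | \
    awk '{ d = ($3 - $2) / 86400;
           if (d > 1) d = 1; if (d <= 0) d = 0.01;
           print $1, d, 1, 1, 1 }' > util.data
  hbal -L -U util.data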
157 On a perfectly balanced cluster (all nodes the same size, all
158 instances the same size and spread across the nodes equally), the
values for all metrics would be zero. This doesn't happen too often in
practice :)

OFFLINE INSTANCES
~~~~~~~~~~~~~~~~~

Since current Ganeti versions do not report the memory used by offline
166 (down) instances, ignoring the run status of instances will cause
167 wrong calculations. For this reason, the algorithm subtracts the
168 memory size of down instances from the free node memory of their
169 primary node, in effect simulating the startup of such instances.
EXCLUSION TAGS
~~~~~~~~~~~~~~

The exclusion tags mechanism is designed to prevent instances which
run the same workload (e.g. two DNS servers) from landing on the same
node, which would make the respective node a SPOF for the given
service.
178 It works by tagging instances with certain tags and then building
179 exclusion maps based on these. Which tags are actually used is
180 configured either via the command line (option *--exclusion-tags*)
181 or via adding them to the cluster tags:
--exclusion-tags=a,b
This will make all instance tags of the form *a:\**, *b:\** be
185 considered for the exclusion map
187 cluster tags *htools:iextags:a*, *htools:iextags:b*
188 This will make instance tags *a:\**, *b:\** be considered for the
189 exclusion map. More precisely, the suffix of cluster tags starting
190 with *htools:iextags:* will become the prefix of the exclusion tags.
Both the above forms mean that two instances both having (e.g.) the
tag *a:foo* or *b:bar* won't end up on the same node.
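For example, to keep two DNS servers apart, one could define the tag
prefix at cluster level and give both (hypothetical) instances the
same tag::

  gnt-cluster add-tags htools:iextags:service
  gnt-instance add-tags dns1.example.com service:dns
  gnt-instance add-tags dns2.example.com service:dns

Passing ``--exclusion-tags=service`` on the hbal command line instead
of setting the cluster tag would have the same effect.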
OPTIONS
-------

The options that can be passed to the program are as follows:
-C, --print-commands
Print the command list at the end of the run. Without this, the
202 program will only show a shorter, but cryptic output.
204 Note that the moves list will be split into independent steps,
called "jobsets", but only for visual inspection, not for actual
parallelisation. It is not possible to parallelise these directly
207 when executed via "gnt-instance" commands, since a compound command
208 (e.g. failover and replace-disks) must be executed
209 serially. Parallel execution is only possible when using the Luxi
210 backend and the *-L* option.
The algorithm for splitting the moves into jobsets is to accumulate
moves until the next move touches nodes already touched by the
current moves; this means we can't execute in parallel (due to
resource allocation in Ganeti) and thus we start a new jobset.
-p[*fields*], --print-nodes[=*fields*]
Prints the before and after node status, in a format designed to
220 allow the user to understand the node's most important parameters.
222 It is possible to customise the listed information by passing a
223 comma-separated list of field names to this option (the field list
224 is currently undocumented), or to extend the default field list by
225 prefixing the additional field list with a plus sign. By default,
226 the node list will contain the following information:
F
a character denoting the status of the node, with '-' meaning an
offline node, '*' meaning N+1 failure and blank meaning a good
node
t_mem
the total node memory
n_mem
the memory used by the node itself
i_mem
the memory used by instances
x_mem
amount of memory which seems to be in use but cannot be determined
247 why or by which instance; usually this means that the hypervisor
248 has some overhead or that there are other reporting errors
f_mem
the free node memory
r_mem
the reserved node memory, which is the amount of free memory
needed for N+1 compliance
t_dsk
total disk
f_dsk
free disk
pcpu
the number of physical cpus on the node
vcpu
the number of virtual cpus allocated to primary instances
pri
number of primary instances
sec
number of secondary instances
p_fmem
percent of free memory
p_fdsk
percent of free disk
r_cpu
ratio of virtual to physical cpus
lCpu
the dynamic CPU load (if the information is available)
lMem
the dynamic memory load (if the information is available)
lDsk
the dynamic disk load (if the information is available)
lNet
the dynamic net load (if the information is available)
--print-instances
Prints the before and after instance map. This is less useful than
the node status, but it can help in understanding instance moves.
-o, --oneline
Only shows a one-line output from the program, designed for the case
when one wants to look at multiple clusters at once and check their
status.
305 The line will contain four fields:
307 - initial cluster score
308 - number of steps in the solution
309 - final cluster score
310 - improvement in the cluster score
-O *name*
This option (which can be given multiple times) will mark nodes as
314 being *offline*. This means a couple of things:
316 - instances won't be placed on these nodes, not even temporarily;
317 e.g. the *replace primary* move is not available if the secondary
318 node is offline, since this move requires a failover.
319 - these nodes will not be included in the score calculation (except
320 for the percentage of instances on offline nodes)
Note that the algorithm will also mark as offline any nodes which are
reported by RAPI as such, or that have "?" in file-based input in the
status field.
326 -e *score*, --min-score=*score*
327 This parameter denotes the minimum score we are happy with and alters
328 the computation in two ways:
330 - if the cluster has the initial score lower than this value, then we
331 don't enter the algorithm at all, and exit with success
332 - during the iterative process, if we reach a score lower than this
333 value, we exit the algorithm
The default value of the parameter is currently ``1e-9`` (chosen
empirically).
338 -g *delta*, --min-gain=*delta*
Since the balancing algorithm can sometimes result in just very tiny
improvements that bring less gain than they cost in relocation
341 time, this parameter (defaulting to 0.01) represents the minimum
342 gain we require during a step, to continue balancing.
344 --min-gain-limit=*threshold*
345 The above min-gain option will only take effect if the cluster score
346 is already below *threshold* (defaults to 0.1). The rationale behind
347 this setting is that at high cluster scores (badly balanced
348 clusters), we don't want to abort the rebalance too quickly, as
349 later gains might still be significant. However, under the
threshold, the total gain is only the threshold value, so we can
abort early.
--no-disk-moves
This parameter prevents hbal from using disk move
(i.e. "gnt-instance replace-disks") operations. This will result in
a much quicker balancing, but of course the improvements are
limited. It is up to the user to decide when to use one or the other.
--no-instance-moves
This parameter prevents hbal from using instance move
(i.e. "gnt-instance migrate/failover") operations. hbal will then
only use the slow disk-replacement operations; this also yields a
worse balance, but can be useful if moving instances around is
deemed unsafe or not desired.
--evac-mode
This parameter restricts the list of instances considered for moving
368 to the ones living on offline/drained nodes. It can be used as a
369 (bulk) replacement for Ganeti's own *gnt-node evacuate*, with the
370 note that it doesn't guarantee full evacuation.
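For example, a sketch of a bulk evacuation of all offline/drained
nodes, executed directly via the Luxi backend::

  hbal -L -X --evac-mode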
372 --exclude-instances=*instances*
This parameter excludes the given instances (as a comma-separated
list) from being moved during the rebalance.
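For instance, to keep two (hypothetical) instances in place while
balancing the rest::

  hbal -L --exclude-instances=instance2.example.com,instance7.example.com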
-U *util-file*
This parameter specifies a file holding instance dynamic utilisation
378 information that will be used to tweak the balancing algorithm to
379 equalise load on the nodes (as opposed to static resource
380 usage). The file is in the format "instance_name cpu_util mem_util
381 disk_util net_util" where the "_util" parameters are interpreted as
382 numbers and the instance name must match exactly the instance as
read from Ganeti. In case of unknown instance names, the program
will abort.
386 If not given, the default values are one for all metrics and thus
387 dynamic utilisation has only one effect on the algorithm: the
388 equalisation of the secondary instances across nodes (this is the
389 only metric that is not tracked by another, dedicated value, and
390 thus the disk load of instances will cause secondary instance
equalisation). Note that a value of one will also slightly influence
the primary instance count, but that is already tracked via other
393 metrics and thus the influence of the dynamic utilisation will be
394 practically insignificant.
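As an illustration, a hand-written *util-file* for three hypothetical
instances, equalising only the cpu load and keeping the other metrics
at one, could look like::

  instance1.example.com 0.30 1 1 1
  instance2.example.com 0.65 1 1 1
  instance3.example.com 0.10 1 1 1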
396 -t *datafile*, --text-data=*datafile*
397 The name of the file holding node and instance information (if not
collecting via RAPI or LUXI). This or one of the other backends must
be selected.
401 -S *filename*, --save-cluster=*filename*
402 If given, the state of the cluster before the balancing is saved to
403 the given file plus the extension "original"
404 (i.e. *filename*.original), and the state at the end of the
405 balancing is saved to the given file plus the extension "balanced"
406 (i.e. *filename*.balanced). This allows re-feeding the cluster state
407 to either hbal itself or for example hspace.
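For example, saving the state and then re-feeding the balanced result
(e.g. to check whether further improvements are possible) could be
done, with hypothetical file names, as::

  hbal -L -S /tmp/hbal-state
  hbal -t /tmp/hbal-state.balanced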
-m *cluster*
Collect data directly from the *cluster* given as an argument via
411 RAPI. If the argument doesn't contain a colon (:), then it is
412 converted into a fully-built URL via prepending ``https://`` and
413 appending the default RAPI port, otherwise it's considered a
414 fully-specified URL and is used as-is.
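For example, assuming the default RAPI port (5080), the following two
invocations with a hypothetical cluster name should be equivalent::

  hbal -m cluster1.example.com
  hbal -m https://cluster1.example.com:5080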
-L [*path*]
Collect data directly from the master daemon, which is to be
contacted via LUXI (an internal Ganeti protocol). An optional
419 *path* argument is interpreted as the path to the unix socket on
420 which the master daemon listens; otherwise, the default path used by
421 ganeti when installed with *--localstatedir=/var* is used.
-X
When using the Luxi backend, hbal can also execute the given
425 commands. The execution method is to execute the individual jobsets
426 (see the *-C* option for details) in separate stages, aborting if at
427 any time a jobset doesn't have all jobs successful. Each step in the
428 balancing solution will be translated into exactly one Ganeti job
429 (having between one and three OpCodes), and all the steps in a
jobset will be executed in parallel. The jobsets themselves are
executed serially.
433 -l *N*, --max-length=*N*
434 Restrict the solution to this length. This can be used for example
435 to automate the execution of the balancing.
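For example, to compute and directly execute at most three balancing
steps via the master daemon::

  hbal -L -X -l 3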
--max-cpu=*cpu-ratio*
The maximum virtual to physical cpu ratio, as a positive floating
point number. For example, specifying *cpu-ratio* as **2.5** means
that, for a 4-cpu machine, a maximum of 10 virtual cpus should be
allowed to be in use for primary instances. A value of exactly one
means there is no CPU over-subscription at all.
445 --min-disk=*disk-ratio*
446 The minimum amount of free disk space remaining, as a floating point
447 number. For example, specifying *disk-ratio* as **0.25** means that
448 at least one quarter of disk space should be left free on nodes.
450 -G *uuid*, --group=*uuid*
On a multi-group cluster, select this group for
452 processing. Otherwise hbal will abort, since it cannot balance
453 multiple groups at the same time.
-v, --verbose
Increase the output verbosity. Each usage of this option will
457 increase the verbosity (currently more than 2 doesn't make sense)
458 from the default of one.
-q, --quiet
Decrease the output verbosity. Each usage of this option will
decrease the verbosity (less than zero doesn't make sense) from the
default of one.
-V, --version
Just show the program version and exit.
EXIT STATUS
-----------

The exit status of the command will be zero, unless for some reason
472 the algorithm fatally failed (e.g. wrong node or instance data), or
473 (in case of job execution) any job has failed.
BUGS
----

The program does not check its input data for consistency, and aborts
with cryptic error messages in this case.
481 The algorithm is not perfect.
The output format is not easily scriptable, and the program should
feed moves directly into Ganeti (either via RAPI or via a gnt-debug
input file).
EXAMPLE
-------

Note that these examples are not for the latest version (they don't
491 have full node data).
496 With the default options, the program shows each individual step and
497 the improvements it brings in cluster score::
500 Loaded 20 nodes, 80 instances
501 Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
502 Initial score: 0.52329131
503 Trying to minimize the CV...
504 1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
505 2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
506 3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
507 4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
508 5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
509 6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
510 7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
511 8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
512 9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
513 10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
514 11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
515 12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
516 13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
517 14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
518 15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
519 Cluster score improved from 0.52329131 to 0.00252594
521 In the above output, we can see:
- the input data (here from files) shows a cluster with 20 nodes and
  80 instances
525 - the cluster is not initially N+1 compliant
526 - the initial score is 0.52329131
The step list follows, showing the instance, its initial
primary/secondary nodes, the new primary/secondary nodes, the
cluster score, and the actions taken in this step (with 'f' denoting
failover/migrate and 'r' denoting replace secondary).
533 Finally, the program shows the improvement in cluster score.
535 A more detailed output is obtained via the *-C* and *-p* options::
538 Loaded 20 nodes, 80 instances
539 Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
540 Initial cluster status:
541 N1 Name t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
542 * node1 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
543 node2 32762 31280 12000 1861 1026 0 8 0.95476 0.55179
544 * node3 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
545 * node4 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
546 * node5 32762 1280 6000 1861 978 5 5 0.03907 0.52573
547 * node6 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
548 * node7 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
549 node8 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
550 node9 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
551 * node10 32762 7280 12000 1861 1026 4 4 0.22221 0.55179
552 node11 32762 7280 6000 1861 922 4 5 0.22221 0.49577
553 node12 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
554 node13 32762 7280 6000 1861 922 4 5 0.22221 0.49577
555 node14 32762 7280 6000 1861 922 4 5 0.22221 0.49577
556 * node15 32762 7280 12000 1861 1131 4 3 0.22221 0.60782
557 node16 32762 31280 0 1861 1860 0 0 0.95476 1.00000
558 node17 32762 7280 6000 1861 1106 5 3 0.22221 0.59479
559 * node18 32762 1280 6000 1396 561 5 3 0.03907 0.40239
560 * node19 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
561 node20 32762 13280 12000 1861 689 3 9 0.40535 0.37068
563 Initial score: 0.52329131
564 Trying to minimize the CV...
565 1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
566 2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
567 3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
568 4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
569 5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
570 6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
571 7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
572 8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
573 9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
574 10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
575 11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
576 12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
577 13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
578 14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
579 15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
580 Cluster score improved from 0.52329131 to 0.00252594
Commands to run to reach the above solution::
584 echo gnt-instance migrate instance14
585 echo gnt-instance replace-disks -n node16 instance14
586 echo gnt-instance migrate instance14
588 echo gnt-instance migrate instance54
589 echo gnt-instance replace-disks -n node16 instance54
590 echo gnt-instance migrate instance54
592 echo gnt-instance migrate instance4
593 echo gnt-instance replace-disks -n node16 instance4
595 echo gnt-instance replace-disks -n node2 instance48
596 echo gnt-instance migrate instance48
598 echo gnt-instance replace-disks -n node16 instance93
599 echo gnt-instance migrate instance93
601 echo gnt-instance replace-disks -n node2 instance89
602 echo gnt-instance migrate instance89
604 echo gnt-instance replace-disks -n node16 instance5
605 echo gnt-instance migrate instance5
607 echo gnt-instance migrate instance94
608 echo gnt-instance replace-disks -n node16 instance94
610 echo gnt-instance migrate instance44
611 echo gnt-instance replace-disks -n node15 instance44
613 echo gnt-instance replace-disks -n node16 instance62
615 echo gnt-instance replace-disks -n node16 instance13
617 echo gnt-instance replace-disks -n node7 instance19
619 echo gnt-instance replace-disks -n node1 instance43
621 echo gnt-instance replace-disks -n node4 instance1
623 echo gnt-instance replace-disks -n node17 instance58
625 Final cluster status:
626 N1 Name t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
627 node1 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
628 node2 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
629 node3 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
630 node4 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
631 node5 32762 7280 6000 1861 1078 4 5 0.22221 0.57947
632 node6 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
633 node7 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
634 node8 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
635 node9 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
636 node10 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
637 node11 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
638 node12 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
639 node13 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
640 node14 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
641 node15 32762 7280 6000 1861 1031 4 4 0.22221 0.55408
642 node16 32762 7280 6000 1861 1060 4 4 0.22221 0.57007
643 node17 32762 7280 6000 1861 1006 5 4 0.22221 0.54105
644 node18 32762 7280 6000 1396 761 4 2 0.22221 0.54570
645 node19 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
646 node20 32762 13280 6000 1861 1089 3 5 0.40535 0.58565
Here we see, besides the step list, the initial and final cluster
649 status, with the final one showing all nodes being N+1 compliant, and
650 the command list to reach the final solution. In the initial listing,
651 we see which nodes are not N+1 compliant.
653 The algorithm is stable as long as each step above is fully completed,
654 e.g. in step 8, both the migrate and the replace-disks are
655 done. Otherwise, if only the migrate is done, the input data is
656 changed in a way that the program will output a different solution
657 list (but hopefully will end in the same state).
659 .. vim: set textwidth=72 :