1 HBAL(1) Ganeti | Version @GANETI_VERSION@
2 =========================================
7 hbal \- Cluster balancer for Ganeti
12 **hbal** {backend options...} [algorithm options...] [reporting options...]
19 { **-m** *cluster* | **-L[** *path* **] [-X]** | **-t** *data-file* }
23 **[ --max-cpu *cpu-ratio* ]**
24 **[ --min-disk *disk-ratio* ]**
27 **[ -g *delta* ]** **[ --min-gain-limit *threshold* ]**
29 **[ --no-disk-moves ]**
30 **[ --no-instance-moves ]**
31 **[ -U *util-file* ]**
33 **[ --select-instances *inst...* ]**
34 **[ --exclude-instances *inst...* ]**
39 **[ -p[ *fields* ] ]**
40 **[ --print-instances ]**
48 hbal is a cluster balancer that looks at the current state of the
49 cluster (nodes with their total and free disk, memory, etc.) and
50 instance placement and computes a series of steps designed to bring
51 the cluster into a better state.
53 The algorithm used is designed to be stable (i.e. it will give you the
54 same results when restarting it from the middle of the solution) and
55 reasonably fast. It is not, however, designed to be a perfect
56 algorithm--it is possible to make it go into a corner from which
57 it can find no improvement, because it looks only one "step" ahead.
59 By default, the program will show the solution incrementally as it is
60 computed, in a somewhat cryptic format; for getting the actual Ganeti
61 command list, use the **-C** option.
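For example, a minimal invocation that reads the cluster via the LUXI
backend and prints the resulting command list (both options are
described below) could look like::

  hbal -L -C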
66 The program works in independent steps; at each step, we compute the
67 best instance move that lowers the cluster score.
The possible move types for an instance are combinations of
failover/migrate and replace-disks such that we change one of the
instance nodes, and the other one remains (but possibly with a changed
role, e.g. from primary it becomes secondary). The list is:
75 - replace secondary (r)
76 - replace primary, a composite move (f, r, f)
77 - failover and replace secondary, also composite (f, r)
78 - replace secondary and failover, also composite (r, f)
We don't do the only remaining possibility of replacing both nodes
(r,f,r,f or the equivalent f,r,f,r), since this move needs an
exhaustive search over both candidate primary and secondary nodes, and
is O(n*n) in the number of nodes. Furthermore, it doesn't seem to
give better scores but will result in more disk replacements.
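As an illustration, for a hypothetical instance *instance1* whose
primary node should be replaced by a hypothetical *node3*, the
composite *replace primary* move (f, r, f) corresponds roughly to the
following command sequence (this is the kind of list the **-C** option
prints)::

  gnt-instance migrate instance1
  gnt-instance replace-disks -n node3 instance1
  gnt-instance migrate instance1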
86 PLACEMENT RESTRICTIONS
87 ~~~~~~~~~~~~~~~~~~~~~~
89 At each step, we prevent an instance move if it would cause:
91 - a node to go into N+1 failure state
92 - an instance to move onto an offline node (offline nodes are either
93 read from the cluster or declared with *-O*)
94 - an exclusion-tag based conflict (exclusion tags are read from the
95 cluster and/or defined via the *--exclusion-tags* option)
96 - a max vcpu/pcpu ratio to be exceeded (configured via *--max-cpu*)
97 - min disk free percentage to go below the configured limit
98 (configured via *--min-disk*)
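Several of these restrictions can be supplied or tuned on the command
line; for example, a run that declares a hypothetical *node4* as
offline and tightens the CPU and disk limits could look like::

  hbal -L -O node4 --max-cpu=2.5 --min-disk=0.2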
103 As said before, the algorithm tries to minimise the cluster score at
104 each step. Currently this score is computed as a sum of the following
107 - standard deviation of the percent of free memory
108 - standard deviation of the percent of reserved memory
109 - standard deviation of the percent of free disk
110 - count of nodes failing N+1 check
- count of instances living (either as primary or secondary) on
  offline nodes
113 - count of instances living (as primary) on offline nodes; this
114 differs from the above metric by helping failover of such instances
116 - standard deviation of the ratio of virtual-to-physical cpus (for
117 primary instances of the node)
118 - standard deviation of the dynamic load on the nodes, for cpus,
119 memory, disk and network
121 The free memory and free disk values help ensure that all nodes are
122 somewhat balanced in their resource usage. The reserved memory helps
123 to ensure that nodes are somewhat balanced in holding secondary
124 instances, and that no node keeps too much memory reserved for
125 N+1. And finally, the N+1 percentage helps guide the algorithm towards
126 eliminating N+1 failures, if possible.
128 Except for the N+1 failures and offline instances counts, we use the
129 standard deviation since when used with values within a fixed range
130 (we use percents expressed as values between zero and one) it gives
131 consistent results across all metrics (there are some small issues
related to different means, but it works generally well). The 'count'
type values will have a higher score and thus matter more for
balancing; this makes them better suited for hard constraints (like
evacuating nodes and fixing N+1 failures). For example, the offline instances
136 count (i.e. the number of instances living on offline nodes) will
137 cause the algorithm to actively move instances away from offline
138 nodes. This, coupled with the restriction on placement given by
139 offline nodes, will cause evacuation of such nodes.
141 The dynamic load values need to be read from an external file (Ganeti
142 doesn't supply them), and are computed for each node as: sum of
143 primary instance cpu load, sum of primary instance memory load, sum of
primary and secondary instance disk load (as DRBD generates write load
on secondary nodes too in the normal case, and also read load in
degraded scenarios), and sum of primary instance network load. An
example of how to generate these values for input to hbal would be to
track ``xm list`` for instances over a day, compute the delta of the
cpu values, and feed that via the *-U* option for all instances
(keeping the other metrics as one). For the algorithm to work, all that is
151 needed is that the values are consistent for a metric across all
152 instances (e.g. all instances use cpu% to report cpu usage, and not
153 something related to number of CPU seconds used if the CPUs are
154 different), and that they are normalised to between zero and one. Note
155 that it's recommended to not have zero as the load value for any
156 instance metric since then secondary instances are not well balanced.
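As a purely illustrative sketch (hypothetical instance names and
values), a small utilisation data set in the format accepted by the
*-U* option (described below), with all values already normalised to
the 0..1 range, could look like::

  instance1 0.35 0.50 0.20 0.10
  instance2 0.70 0.50 0.60 0.15
  instance3 0.10 0.50 0.05 0.05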
158 On a perfectly balanced cluster (all nodes the same size, all
159 instances the same size and spread across the nodes equally), the
values for all metrics would be zero. This doesn't happen too often in
practice.
166 Since current Ganeti versions do not report the memory used by offline
167 (down) instances, ignoring the run status of instances will cause
168 wrong calculations. For this reason, the algorithm subtracts the
169 memory size of down instances from the free node memory of their
170 primary node, in effect simulating the startup of such instances.
The exclusion tags mechanism is designed to prevent instances which
run the same workload (e.g. two DNS servers) from landing on the same
node, which would make the respective node a SPOF for the given service.
179 It works by tagging instances with certain tags and then building
180 exclusion maps based on these. Which tags are actually used is
181 configured either via the command line (option *--exclusion-tags*)
182 or via adding them to the cluster tags:
--exclusion-tags=a,b
This will make all instance tags of the form *a:\**, *b:\** be
considered for the exclusion map
188 cluster tags *htools:iextags:a*, *htools:iextags:b*
189 This will make instance tags *a:\**, *b:\** be considered for the
190 exclusion map. More precisely, the suffix of cluster tags starting
191 with *htools:iextags:* will become the prefix of the exclusion tags.
Both the above forms mean that two instances both having (e.g.) the
tag *a:foo* or *b:bar* won't end up on the same node.
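As a sketch using hypothetical tag and instance names, the cluster-tag
form could be set up with the standard Ganeti tag commands::

  gnt-cluster add-tags htools:iextags:service
  gnt-instance add-tags dns1 service:dns
  gnt-instance add-tags dns2 service:dns

With this in place, hbal will treat *dns1* and *dns2* as conflicting
and avoid placing them on the same node.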
199 The options that can be passed to the program are as follows:
-C, --print-commands
Print the command list at the end of the run. Without this, the
203 program will only show a shorter, but cryptic output.
Note that the moves list will be split into independent steps,
called "jobsets", but only for visual inspection, not for actual
parallelisation. It is not possible to parallelise these directly
208 when executed via "gnt-instance" commands, since a compound command
209 (e.g. failover and replace-disks) must be executed
210 serially. Parallel execution is only possible when using the Luxi
211 backend and the *-L* option.
The algorithm for splitting the moves into jobsets accumulates moves
until the next move touches nodes already touched by the current
moves; this means we can't execute them in parallel (due to resource
allocation in Ganeti), and thus we start a new jobset.
-p, --print-nodes
Prints the before and after node status, in a format designed to allow
221 the user to understand the node's most important parameters. See the
222 man page **htools**(1) for more details about this option.
--print-instances
Prints the before and after instance map. This is less useful than
the node status, but it can help in understanding instance moves.
-O *name*
This option (which can be given multiple times) will mark nodes as
230 being *offline*. This means a couple of things:
232 - instances won't be placed on these nodes, not even temporarily;
233 e.g. the *replace primary* move is not available if the secondary
234 node is offline, since this move requires a failover.
235 - these nodes will not be included in the score calculation (except
236 for the percentage of instances on offline nodes)
Note that the algorithm will also mark as offline any nodes which are
reported by RAPI as such, or that have "?" in file-based input in
any numeric fields.
242 -e *score*, --min-score=*score*
243 This parameter denotes the minimum score we are happy with and alters
244 the computation in two ways:
246 - if the cluster has the initial score lower than this value, then we
247 don't enter the algorithm at all, and exit with success
248 - during the iterative process, if we reach a score lower than this
249 value, we exit the algorithm
The default value of the parameter is currently ``1e-9`` (chosen
empirically).
254 -g *delta*, --min-gain=*delta*
Since the balancing algorithm can sometimes result in just very tiny
improvements, which bring less gain than they cost in relocation
time, this parameter (defaulting to 0.01) represents the minimum
gain we require during a step in order to continue balancing.
260 --min-gain-limit=*threshold*
261 The above min-gain option will only take effect if the cluster score
262 is already below *threshold* (defaults to 0.1). The rationale behind
263 this setting is that at high cluster scores (badly balanced
264 clusters), we don't want to abort the rebalance too quickly, as
later gains might still be significant. However, under the
threshold, the total gain is only the threshold value, so we can
exit early.
--no-disk-moves
This parameter prevents hbal from using disk move
(i.e. "gnt-instance replace-disks") operations. This will result in
a much quicker balancing, but of course the improvements are
limited. It is up to the user to decide when to use one or the other.
--no-instance-moves
This parameter prevents hbal from using instance moves
(i.e. "gnt-instance migrate/failover") operations. This will only use
the slow disk-replacement operations, and will also provide a worse
balance, but can be useful if moving instances around is deemed unsafe
or not preferred.
--evac-mode
This parameter restricts the list of instances considered for moving
284 to the ones living on offline/drained nodes. It can be used as a
285 (bulk) replacement for Ganeti's own *gnt-node evacuate*, with the
286 note that it doesn't guarantee full evacuation.
288 --select-instances=*instances*
289 This parameter marks the given instances (as a comma-separated list)
290 as the only ones being moved during the rebalance.
--exclude-instances=*instances*
This parameter prevents the given instances (as a comma-separated
list) from being moved during the rebalance.
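For example, to restrict a rebalance to two hypothetical instances, or
conversely to pin them in place while everything else may move, one
could run::

  hbal -L --select-instances=instance1,instance2
  hbal -L --exclude-instances=instance1,instance2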
-U *util-file*
This parameter specifies a file holding instance dynamic utilisation
298 information that will be used to tweak the balancing algorithm to
299 equalise load on the nodes (as opposed to static resource
300 usage). The file is in the format "instance_name cpu_util mem_util
301 disk_util net_util" where the "_util" parameters are interpreted as
302 numbers and the instance name must match exactly the instance as
read from Ganeti. In case of unknown instance names, the program
will abort.
306 If not given, the default values are one for all metrics and thus
307 dynamic utilisation has only one effect on the algorithm: the
308 equalisation of the secondary instances across nodes (this is the
309 only metric that is not tracked by another, dedicated value, and
310 thus the disk load of instances will cause secondary instance
equalisation). Note that a value of one will also slightly influence
the primary instance count, but that is already tracked via other
metrics and thus the influence of the dynamic utilisation will be
practically insignificant.
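A hypothetical run feeding such a file (here called *util.data*) into
the balancing would then be::

  hbal -L -U util.data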
316 -S *filename*, --save-cluster=*filename*
317 If given, the state of the cluster before the balancing is saved to
318 the given file plus the extension "original"
319 (i.e. *filename*.original), and the state at the end of the
320 balancing is saved to the given file plus the extension "balanced"
321 (i.e. *filename*.balanced). This allows re-feeding the cluster state
322 to either hbal itself or for example hspace via the ``-t`` option.
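For example, a hypothetical invocation saving the state under
*/tmp/hbal-state* would produce */tmp/hbal-state.original* and
*/tmp/hbal-state.balanced*, which can later be re-read via the text
backend::

  hbal -L -S /tmp/hbal-state
  hbal -t /tmp/hbal-state.balanced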
324 -t *datafile*, --text-data=*datafile*
325 Backend specification: the name of the file holding node and instance
326 information (if not collecting via RAPI or LUXI). This or one of the
other backends must be selected. The option is described in the man
page **htools**(1).
-m *cluster*
Backend specification: collect data directly from the *cluster* given
as an argument via RAPI. The option is described in the man page
**htools**(1).
-L [*path*]
Backend specification: collect data directly from the master daemon,
337 which is to be contacted via LUXI (an internal Ganeti protocol). The
338 option is described in the man page **htools**(1).
-X
When using the Luxi backend, hbal can also execute the given
342 commands. The execution method is to execute the individual jobsets
343 (see the *-C* option for details) in separate stages, aborting if at
344 any time a jobset doesn't have all jobs successful. Each step in the
345 balancing solution will be translated into exactly one Ganeti job
346 (having between one and three OpCodes), and all the steps in a
jobset will be executed in parallel. The jobsets themselves are
executed serially.
The execution of the job series can be interrupted, see below for
signal handling.
353 -l *N*, --max-length=*N*
354 Restrict the solution to this length. This can be used for example
355 to automate the execution of the balancing.
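For example, a hypothetical cron-driven setup that executes at most
three balancing steps per run could use::

  hbal -L -X -l 3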
357 --max-cpu=*cpu-ratio*
358 The maximum virtual to physical cpu ratio, as a floating point number
359 greater than or equal to one. For example, specifying *cpu-ratio* as
360 **2.5** means that, for a 4-cpu machine, a maximum of 10 virtual cpus
361 should be allowed to be in use for primary instances. A value of
362 exactly one means there will be no over-subscription of CPU (except
363 for the CPU time used by the node itself), and values below one do not
364 make sense, as that means other resources (e.g. disk) won't be fully
365 utilised due to CPU restrictions.
367 --min-disk=*disk-ratio*
368 The minimum amount of free disk space remaining, as a floating point
369 number. For example, specifying *disk-ratio* as **0.25** means that
370 at least one quarter of disk space should be left free on nodes.
372 -G *uuid*, --group=*uuid*
On a multi-group cluster, select this group for
374 processing. Otherwise hbal will abort, since it cannot balance
375 multiple groups at the same time.
-v, --verbose
Increase the output verbosity. Each usage of this option will
379 increase the verbosity (currently more than 2 doesn't make sense)
380 from the default of one.
-q, --quiet
Decrease the output verbosity. Each usage of this option will
decrease the verbosity (less than zero doesn't make sense) from the
default of one.
-V, --version
Just show the program version and exit.
393 When executing jobs via LUXI (using the ``-X`` option), normally hbal
will execute all jobs until either one errors out or all the jobs finish
successfully.
Since balancing can take a long time, it is possible to stop hbal early
in two ways:
400 - by sending a ``SIGINT`` (``^C``), hbal will register the termination
401 request, and will wait until the currently submitted jobs finish, at
402 which point it will exit (with exit code 1)
403 - by sending a ``SIGTERM``, hbal will immediately exit (with exit code
404 2); it is the responsibility of the user to follow up with Ganeti the
405 result of the currently-executing jobs
407 Note that in any situation, it's perfectly safe to kill hbal, either via
408 the above signals or via any other signal (e.g. ``SIGQUIT``,
409 ``SIGKILL``), since the jobs themselves are processed by Ganeti whereas
410 hbal (after submission) only watches their progression. In this case,
the user will again have to query Ganeti for job results.
416 The exit status of the command will be zero, unless for some reason the
417 algorithm fatally failed (e.g. wrong node or instance data), or (in case
of job execution) either one of the jobs has failed or the balancing was
interrupted early.
The program does not check all its input data for consistency, and
sometimes aborts with cryptic error messages when given invalid data.
427 The algorithm is not perfect.
432 Note that these examples are not for the latest version (they don't
433 have full node data).
438 With the default options, the program shows each individual step and
439 the improvements it brings in cluster score::
442 Loaded 20 nodes, 80 instances
443 Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
444 Initial score: 0.52329131
445 Trying to minimize the CV...
446 1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
447 2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
448 3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
449 4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
450 5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
451 6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
452 7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
453 8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
454 9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
455 10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
456 11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
457 12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
458 13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
459 14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
460 15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
461 Cluster score improved from 0.52329131 to 0.00252594
463 In the above output, we can see:
- the input data (here from files) shows a cluster with 20 nodes and
  80 instances
467 - the cluster is not initially N+1 compliant
468 - the initial score is 0.52329131
The step list follows, showing the instance, its initial
primary/secondary nodes, the new primary/secondary nodes, the new
cluster score, and the actions taken in this step (with 'f' denoting
failover/migrate and 'r' denoting replace secondary).
475 Finally, the program shows the improvement in cluster score.
477 A more detailed output is obtained via the *-C* and *-p* options::
480 Loaded 20 nodes, 80 instances
481 Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
482 Initial cluster status:
483 N1 Name t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
484 * node1 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
485 node2 32762 31280 12000 1861 1026 0 8 0.95476 0.55179
486 * node3 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
487 * node4 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
488 * node5 32762 1280 6000 1861 978 5 5 0.03907 0.52573
489 * node6 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
490 * node7 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
491 node8 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
492 node9 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
493 * node10 32762 7280 12000 1861 1026 4 4 0.22221 0.55179
494 node11 32762 7280 6000 1861 922 4 5 0.22221 0.49577
495 node12 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
496 node13 32762 7280 6000 1861 922 4 5 0.22221 0.49577
497 node14 32762 7280 6000 1861 922 4 5 0.22221 0.49577
498 * node15 32762 7280 12000 1861 1131 4 3 0.22221 0.60782
499 node16 32762 31280 0 1861 1860 0 0 0.95476 1.00000
500 node17 32762 7280 6000 1861 1106 5 3 0.22221 0.59479
501 * node18 32762 1280 6000 1396 561 5 3 0.03907 0.40239
502 * node19 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
503 node20 32762 13280 12000 1861 689 3 9 0.40535 0.37068
505 Initial score: 0.52329131
506 Trying to minimize the CV...
507 1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
508 2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
509 3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
510 4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
511 5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
512 6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
513 7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
514 8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
515 9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
516 10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
517 11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
518 12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
519 13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
520 14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
521 15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
522 Cluster score improved from 0.52329131 to 0.00252594
524 Commands to run to reach the above solution:
526 echo gnt-instance migrate instance14
527 echo gnt-instance replace-disks -n node16 instance14
528 echo gnt-instance migrate instance14
530 echo gnt-instance migrate instance54
531 echo gnt-instance replace-disks -n node16 instance54
532 echo gnt-instance migrate instance54
534 echo gnt-instance migrate instance4
535 echo gnt-instance replace-disks -n node16 instance4
537 echo gnt-instance replace-disks -n node2 instance48
538 echo gnt-instance migrate instance48
540 echo gnt-instance replace-disks -n node16 instance93
541 echo gnt-instance migrate instance93
543 echo gnt-instance replace-disks -n node2 instance89
544 echo gnt-instance migrate instance89
546 echo gnt-instance replace-disks -n node16 instance5
547 echo gnt-instance migrate instance5
549 echo gnt-instance migrate instance94
550 echo gnt-instance replace-disks -n node16 instance94
552 echo gnt-instance migrate instance44
553 echo gnt-instance replace-disks -n node15 instance44
555 echo gnt-instance replace-disks -n node16 instance62
557 echo gnt-instance replace-disks -n node16 instance13
559 echo gnt-instance replace-disks -n node7 instance19
561 echo gnt-instance replace-disks -n node1 instance43
563 echo gnt-instance replace-disks -n node4 instance1
565 echo gnt-instance replace-disks -n node17 instance58
567 Final cluster status:
568 N1 Name t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
569 node1 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
570 node2 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
571 node3 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
572 node4 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
573 node5 32762 7280 6000 1861 1078 4 5 0.22221 0.57947
574 node6 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
575 node7 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
576 node8 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
577 node9 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
578 node10 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
579 node11 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
580 node12 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
581 node13 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
582 node14 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
583 node15 32762 7280 6000 1861 1031 4 4 0.22221 0.55408
584 node16 32762 7280 6000 1861 1060 4 4 0.22221 0.57007
585 node17 32762 7280 6000 1861 1006 5 4 0.22221 0.54105
586 node18 32762 7280 6000 1396 761 4 2 0.22221 0.54570
587 node19 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
588 node20 32762 13280 6000 1861 1089 3 5 0.40535 0.58565
Here we see, besides the step list, the initial and final cluster
591 status, with the final one showing all nodes being N+1 compliant, and
592 the command list to reach the final solution. In the initial listing,
593 we see which nodes are not N+1 compliant.
595 The algorithm is stable as long as each step above is fully completed,
596 e.g. in step 8, both the migrate and the replace-disks are
597 done. Otherwise, if only the migrate is done, the input data is
598 changed in a way that the program will output a different solution
599 list (but hopefully will end in the same state).
601 .. vim: set textwidth=72 :