1 HBAL(1) Ganeti | Version @GANETI_VERSION@
2 =========================================
7 hbal \- Cluster balancer for Ganeti
12 **hbal** {backend options...} [algorithm options...] [reporting options...]
19 { **-m** *cluster* | **-L[** *path* **] [-X]** | **-t** *data-file* |
24 **[ \--max-cpu *cpu-ratio* ]**
25 **[ \--min-disk *disk-ratio* ]**
28 **[ -g *delta* ]** **[ \--min-gain-limit *threshold* ]**
30 **[ \--no-disk-moves ]**
31 **[ \--no-instance-moves ]**
32 **[ -U *util-file* ]**
34 **[ \--select-instances *inst...* ]**
35 **[ \--exclude-instances *inst...* ]**
40 **[ -p[ *fields* ] ]**
41 **[ \--print-instances ]**
49 hbal is a cluster balancer that looks at the current state of the
50 cluster (nodes with their total and free disk, memory, etc.) and
51 instance placement and computes a series of steps designed to bring
52 the cluster into a better state.
54 The algorithm used is designed to be stable (i.e. it will give you the
55 same results when restarting it from the middle of the solution) and
56 reasonably fast. It is not, however, designed to be a perfect algorithm:
57 it is possible to make it go into a corner from which it can find no
58 improvement, because it looks only one "step" ahead.
60 By default, the program will show the solution incrementally as it is
61 computed, in a somewhat cryptic format; for getting the actual Ganeti
62 command list, use the **-C** option.
67 The program works in independent steps; at each step, we compute the
68 best instance move that lowers the cluster score.
The possible move types for an instance are combinations of
failover/migrate and replace-disks such that we change one of the
instance's nodes while the other one remains (but possibly with a
changed role, e.g. from primary it becomes secondary). The list is:
76 - replace secondary (r)
77 - replace primary, a composite move (f, r, f)
78 - failover and replace secondary, also composite (f, r)
79 - replace secondary and failover, also composite (r, f)
We don't do the only remaining possibility of replacing both nodes
(r,f,r,f or the equivalent f,r,f,r) since this move needs an
exhaustive search over both candidate primary and secondary nodes, and
is O(n*n) in the number of nodes. Furthermore, it doesn't seem to
give better scores but will result in more disk replacements.
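As an illustration only (this is not hbal code), the composite moves can
be traced as sequences of the two primitive operations, with 'f' denoting
failover/migrate and 'r' denoting replace-secondary; a minimal Python
sketch::

  # Illustrative sketch: the four move types expressed as sequences of the
  # primitive operations 'f' (failover/migrate) and 'r' (replace secondary).
  MOVE_TYPES = {
      "replace secondary": ["r"],
      "replace primary": ["f", "r", "f"],
      "failover and replace secondary": ["f", "r"],
      "replace secondary and failover": ["r", "f"],
  }

  def apply_move(primary, secondary, steps, new_node):
      """Track how the (primary, secondary) pair evolves for a move type."""
      for step in steps:
          if step == "f":            # failover/migrate swaps the two roles
              primary, secondary = secondary, primary
          else:                      # 'r' replaces the current secondary
              secondary = new_node
      return primary, secondary

  # A "replace primary" of an instance on node1:node10, using node16 as the
  # new node, ends up as node16:node10 (compare step 1 in the EXAMPLE below).
  print(apply_move("node1", "node10", MOVE_TYPES["replace primary"], "node16"))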
87 PLACEMENT RESTRICTIONS
88 ~~~~~~~~~~~~~~~~~~~~~~
90 At each step, we prevent an instance move if it would cause:
92 - a node to go into N+1 failure state
93 - an instance to move onto an offline node (offline nodes are either
94 read from the cluster or declared with *-O*)
95 - an exclusion-tag based conflict (exclusion tags are read from the
96 cluster and/or defined via the *\--exclusion-tags* option)
97 - a max vcpu/pcpu ratio to be exceeded (configured via *\--max-cpu*)
98 - min disk free percentage to go below the configured limit
99 (configured via *\--min-disk*)
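To make these restrictions concrete, here is a minimal, hypothetical
sketch of such a feasibility check (the dictionary fields, the simplified
N+1 test and the default thresholds are assumptions for illustration, not
hbal's actual data model)::

  # Hypothetical sketch of the per-move feasibility check described above.
  def move_allowed(inst, new_pri, new_sec, max_cpu=2.0, min_disk=0.1):
      nodes = (new_pri, new_sec)
      if any(n["offline"] for n in nodes):           # never use offline nodes
          return False
      # very simplified N+1 test: the new primary must still have enough free
      # memory left to cover what it has reserved for its secondary instances
      if new_pri["free_mem"] - inst["mem"] < new_pri["reserved_mem"]:
          return False
      if inst["excl_tags"] & new_pri["excl_tags"]:   # exclusion-tag conflict
          return False
      if (new_pri["vcpus"] + inst["vcpus"]) / new_pri["pcpus"] > max_cpu:
          return False                               # --max-cpu exceeded
      if any((n["free_disk"] - inst["disk"]) / n["total_disk"] < min_disk
             for n in nodes):
          return False                               # --min-disk violated
      return True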
104 As said before, the algorithm tries to minimise the cluster score at
each step. Currently this score is computed as a sum of the following
components:
108 - standard deviation of the percent of free memory
109 - standard deviation of the percent of reserved memory
110 - standard deviation of the percent of free disk
111 - count of nodes failing N+1 check
- count of instances living (either as primary or secondary) on
  offline nodes
114 - count of instances living (as primary) on offline nodes; this
115 differs from the above metric by helping failover of such instances
117 - standard deviation of the ratio of virtual-to-physical cpus (for
118 primary instances of the node)
119 - standard deviation of the dynamic load on the nodes, for cpus,
120 memory, disk and network
122 The free memory and free disk values help ensure that all nodes are
123 somewhat balanced in their resource usage. The reserved memory helps
124 to ensure that nodes are somewhat balanced in holding secondary
125 instances, and that no node keeps too much memory reserved for
126 N+1. And finally, the N+1 percentage helps guide the algorithm towards
127 eliminating N+1 failures, if possible.
129 Except for the N+1 failures and offline instances counts, we use the
130 standard deviation since when used with values within a fixed range
131 (we use percents expressed as values between zero and one) it gives
132 consistent results across all metrics (there are some small issues
related to different means, but it works generally well). The 'count'
type values will have a higher score and thus will matter more for
balancing; as such they are better suited for hard constraints (like
evacuating nodes and fixing N+1 failures). For example, the offline instances
137 count (i.e. the number of instances living on offline nodes) will
138 cause the algorithm to actively move instances away from offline
139 nodes. This, coupled with the restriction on placement given by
140 offline nodes, will cause evacuation of such nodes.
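As a rough illustration of this scoring idea (a simplified sketch using
only a few of the metrics listed above, with no claim to match hbal's
exact metric set or weights)::

  # Simplified cluster score: standard deviations of per-node percentages
  # plus a "count" style metric, as described above (illustrative only).
  from statistics import pstdev

  def cluster_score(nodes):
      free_mem = [n["free_mem"] / n["total_mem"] for n in nodes]
      res_mem = [n["reserved_mem"] / n["total_mem"] for n in nodes]
      free_dsk = [n["free_disk"] / n["total_disk"] for n in nodes]
      n1_fail = sum(1 for n in nodes if not n["n1_ok"])
      return pstdev(free_mem) + pstdev(res_mem) + pstdev(free_dsk) + n1_fail

  balanced = [{"free_mem": 7280, "total_mem": 32762, "reserved_mem": 6000,
               "free_disk": 1026, "total_disk": 1861, "n1_ok": True}] * 3
  skewed = balanced[:2] + [{"free_mem": 1280, "total_mem": 32762,
                            "reserved_mem": 12000, "free_disk": 500,
                            "total_disk": 1861, "n1_ok": False}]
  print(cluster_score(balanced))   # 0.0 -- perfectly balanced
  print(cluster_score(skewed))     # > 1 -- uneven usage plus one N+1 failure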
142 The dynamic load values need to be read from an external file (Ganeti
143 doesn't supply them), and are computed for each node as: sum of
144 primary instance cpu load, sum of primary instance memory load, sum of
primary and secondary instance disk load (since DRBD generates write
load on secondary nodes too in the normal case, and also read load in
degraded scenarios), and sum of primary instance network load. An
example of how to generate these values for input to hbal would be to
track ``xm list`` for instances over a day, compute the delta of the
cpu values, and feed that via the *-U* option for all instances
(keeping the other metrics as one). For the algorithm to work, all that is
152 needed is that the values are consistent for a metric across all
153 instances (e.g. all instances use cpu% to report cpu usage, and not
154 something related to number of CPU seconds used if the CPUs are
155 different), and that they are normalised to between zero and one. Note
156 that it's recommended to not have zero as the load value for any
157 instance metric since then secondary instances are not well balanced.
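For example, a small helper along the following lines (illustrative only;
the input samples and the output file name are assumptions) could turn
collected per-instance CPU percentages into a utilisation file suitable
for *-U*, keeping the other metrics at one::

  # Illustrative sketch: write a utilisation file in the format expected by
  # the -U option ("instance_name cpu_util mem_util disk_util net_util"),
  # normalising measured CPU usage to (0, 1] and keeping other metrics at 1.
  cpu_pct = {"instance14": 85.0, "instance54": 10.0, "instance4": 40.0}

  peak = max(cpu_pct.values())
  with open("util.data", "w") as out:
      for name, pct in sorted(cpu_pct.items()):
          # avoid exactly zero, so that secondary instances stay balanced
          cpu_util = max(pct / peak, 0.01)
          out.write("%s %.3f 1 1 1\n" % (name, cpu_util))

The resulting file would then be passed to hbal via ``-U util.data``.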
159 On a perfectly balanced cluster (all nodes the same size, all
160 instances the same size and spread across the nodes equally), the
values for all metrics would be zero. This doesn't happen too often in
practice.
167 Since current Ganeti versions do not report the memory used by offline
168 (down) instances, ignoring the run status of instances will cause
169 wrong calculations. For this reason, the algorithm subtracts the
170 memory size of down instances from the free node memory of their
171 primary node, in effect simulating the startup of such instances.
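In other words, the adjustment is roughly equivalent to the following
sketch (field names are assumptions for illustration)::

  # Sketch of the free-memory correction for down instances described above.
  def account_down_instances(nodes, instances):
      for inst in instances:
          if not inst["running"]:
              # pretend the down instance is already started on its primary
              nodes[inst["primary"]]["free_mem"] -= inst["mem"]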
The exclusion tags mechanism is designed to prevent instances which
run the same workload (e.g. two DNS servers) from landing on the same
node, which would make the respective node a SPOF for the given service.
180 It works by tagging instances with certain tags and then building
181 exclusion maps based on these. Which tags are actually used is
182 configured either via the command line (option *\--exclusion-tags*)
183 or via adding them to the cluster tags:
185 \--exclusion-tags=a,b
186 This will make all instance tags of the form *a:\**, *b:\** be
187 considered for the exclusion map
189 cluster tags *htools:iextags:a*, *htools:iextags:b*
190 This will make instance tags *a:\**, *b:\** be considered for the
191 exclusion map. More precisely, the suffix of cluster tags starting
192 with *htools:iextags:* will become the prefix of the exclusion tags.
194 Both the above forms mean that two instances both having (e.g.) the
tag *a:foo* or *b:bar* won't end up on the same node.
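A minimal sketch of how such an exclusion map could be built from the
configured tags (illustrative only; the tag values below are examples)::

  # Illustrative sketch: derive exclusion-tag prefixes from cluster tags and
  # from the command line, then compute the exclusion keys of an instance.
  CLUSTER_TAG_PREFIX = "htools:iextags:"

  def exclusion_prefixes(cluster_tags, cmdline_tags=()):
      prefixes = set(cmdline_tags)
      for tag in cluster_tags:
          if tag.startswith(CLUSTER_TAG_PREFIX):
              prefixes.add(tag[len(CLUSTER_TAG_PREFIX):])
      return prefixes

  def exclusion_keys(instance_tags, prefixes):
      # an instance tag "a:foo" with configured prefix "a" becomes a key
      return {t for t in instance_tags if t.split(":", 1)[0] in prefixes}

  prefixes = exclusion_prefixes({"htools:iextags:a"}, cmdline_tags={"b"})
  inst1 = exclusion_keys({"a:foo", "other"}, prefixes)
  inst2 = exclusion_keys({"a:foo", "b:bar"}, prefixes)
  print(inst1 & inst2)   # non-empty: the two instances must not share a node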
200 The options that can be passed to the program are as follows:
202 -C, \--print-commands
203 Print the command list at the end of the run. Without this, the
204 program will only show a shorter, but cryptic output.
206 Note that the moves list will be split into independent steps,
207 called "jobsets", but only for visual inspection, not for actually
208 parallelisation. It is not possible to parallelise these directly
209 when executed via "gnt-instance" commands, since a compound command
210 (e.g. failover and replace-disks) must be executed
211 serially. Parallel execution is only possible when using the Luxi
212 backend and the *-L* option.
The algorithm for splitting the moves into jobsets is to accumulate
moves until the next move touches nodes already touched by the current
moves; this means we can't execute them in parallel (due to resource
allocation in Ganeti), and thus we start a new jobset.
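A rough sketch of that splitting rule (illustrative, not hbal's
implementation)::

  # Illustrative sketch: group moves into jobsets; a move that touches a node
  # already used by the current jobset starts a new one.
  def split_into_jobsets(moves):
      """moves: list of (instance, nodes_touched) pairs, in solution order."""
      jobsets, current, used = [], [], set()
      for instance, nodes in moves:
          if used & set(nodes):         # conflict: close the current jobset
              jobsets.append(current)
              current, used = [], set()
          current.append(instance)
          used |= set(nodes)
      if current:
          jobsets.append(current)
      return jobsets

  moves = [("instance14", ("node1", "node10", "node16")),
           ("instance54", ("node4", "node15", "node16")),  # reuses node16
           ("instance19", ("node10", "node11", "node7"))]
  print(split_into_jobsets(moves))
  # [['instance14'], ['instance54', 'instance19']]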
-p, \--print-nodes
Prints the before and after node status, in a format designed to allow
222 the user to understand the node's most important parameters. See the
223 man page **htools**(1) for more details about this option.
\--print-instances
Prints the before and after instance map. This is less useful than the
227 node status, but it can help in understanding instance moves.
-O *name*
This option (which can be given multiple times) will mark nodes as
231 being *offline*. This means a couple of things:
233 - instances won't be placed on these nodes, not even temporarily;
234 e.g. the *replace primary* move is not available if the secondary
235 node is offline, since this move requires a failover.
236 - these nodes will not be included in the score calculation (except
237 for the percentage of instances on offline nodes)
Note that the algorithm will also mark as offline any nodes which are
reported by RAPI as such, or that have "?" in file-based input in any
numeric fields.
243 -e *score*, \--min-score=*score*
244 This parameter denotes the minimum score we are happy with and alters
245 the computation in two ways:
- if the cluster's initial score is lower than this value, then we
248 don't enter the algorithm at all, and exit with success
249 - during the iterative process, if we reach a score lower than this
250 value, we exit the algorithm
The default value of the parameter is currently ``1e-9`` (chosen
empirically).
255 -g *delta*, \--min-gain=*delta*
256 Since the balancing algorithm can sometimes result in just very tiny
improvements that bring less gain than they cost in relocation
258 time, this parameter (defaulting to 0.01) represents the minimum
259 gain we require during a step, to continue balancing.
261 \--min-gain-limit=*threshold*
262 The above min-gain option will only take effect if the cluster score
263 is already below *threshold* (defaults to 0.1). The rationale behind
264 this setting is that at high cluster scores (badly balanced
265 clusters), we don't want to abort the rebalance too quickly, as
266 later gains might still be significant. However, under the
threshold, the total gain is only the threshold value, so we can
abort early.
\--no-disk-moves
This parameter prevents hbal from using disk move
(i.e. "gnt-instance replace-disks") operations. This will result in
a much quicker balancing, but of course the improvements are
limited. It is up to the user to decide when to use one or the other.
\--no-instance-moves
This parameter prevents hbal from using instance moves
(i.e. "gnt-instance migrate/failover") operations. This will only use
the slow disk-replacement operations, and will also provide a worse
balance, but can be useful if moving instances around is deemed unsafe
or not preferred.
\--evac-mode
This parameter restricts the list of instances considered for moving
285 to the ones living on offline/drained nodes. It can be used as a
286 (bulk) replacement for Ganeti's own *gnt-node evacuate*, with the
287 note that it doesn't guarantee full evacuation.
289 \--select-instances=*instances*
290 This parameter marks the given instances (as a comma-separated list)
291 as the only ones being moved during the rebalance.
293 \--exclude-instances=*instances*
This parameter excludes the given instances (as a comma-separated list)
295 from being moved during the rebalance.
-U *util-file*
This parameter specifies a file holding instance dynamic utilisation
299 information that will be used to tweak the balancing algorithm to
300 equalise load on the nodes (as opposed to static resource
301 usage). The file is in the format "instance_name cpu_util mem_util
302 disk_util net_util" where the "_util" parameters are interpreted as
303 numbers and the instance name must match exactly the instance as
read from Ganeti. In case of unknown instance names, the program
will abort.
307 If not given, the default values are one for all metrics and thus
308 dynamic utilisation has only one effect on the algorithm: the
309 equalisation of the secondary instances across nodes (this is the
310 only metric that is not tracked by another, dedicated value, and
311 thus the disk load of instances will cause secondary instance
equalisation). Note that a value of one will also slightly influence
313 the primary instance count, but that is already tracked via other
314 metrics and thus the influence of the dynamic utilisation will be
315 practically insignificant.
317 -S *filename*, \--save-cluster=*filename*
318 If given, the state of the cluster before the balancing is saved to
319 the given file plus the extension "original"
320 (i.e. *filename*.original), and the state at the end of the
321 balancing is saved to the given file plus the extension "balanced"
322 (i.e. *filename*.balanced). This allows re-feeding the cluster state
323 to either hbal itself or for example hspace via the ``-t`` option.
325 -t *datafile*, \--text-data=*datafile*
326 Backend specification: the name of the file holding node and instance
327 information (if not collecting via RAPI or LUXI). This or one of the
other backends must be selected. The option is described in the man
page **htools**(1).
-m *cluster*
Backend specification: collect data directly from the *cluster* given
as an argument via RAPI. The option is described in the man page
**htools**(1).
-L [*path*]
Backend specification: collect data directly from the master daemon,
338 which is to be contacted via LUXI (an internal Ganeti protocol). The
339 option is described in the man page **htools**(1).
-X
When using the Luxi backend, hbal can also execute the given
343 commands. The execution method is to execute the individual jobsets
344 (see the *-C* option for details) in separate stages, aborting if at
345 any time a jobset doesn't have all jobs successful. Each step in the
346 balancing solution will be translated into exactly one Ganeti job
347 (having between one and three OpCodes), and all the steps in a
jobset will be executed in parallel. The jobsets themselves are
executed serially.

The execution of the job series can be interrupted, see below for
signal handling.
354 -l *N*, \--max-length=*N*
355 Restrict the solution to this length. This can be used for example
356 to automate the execution of the balancing.
358 \--max-cpu=*cpu-ratio*
359 The maximum virtual to physical cpu ratio, as a floating point number
360 greater than or equal to one. For example, specifying *cpu-ratio* as
361 **2.5** means that, for a 4-cpu machine, a maximum of 10 virtual cpus
362 should be allowed to be in use for primary instances. A value of
363 exactly one means there will be no over-subscription of CPU (except
364 for the CPU time used by the node itself), and values below one do not
365 make sense, as that means other resources (e.g. disk) won't be fully
366 utilised due to CPU restrictions.
368 \--min-disk=*disk-ratio*
369 The minimum amount of free disk space remaining, as a floating point
370 number. For example, specifying *disk-ratio* as **0.25** means that
371 at least one quarter of disk space should be left free on nodes.
373 -G *uuid*, \--group=*uuid*
On a multi-group cluster, select this group for
375 processing. Otherwise hbal will abort, since it cannot balance
376 multiple groups at the same time.
-v, \--verbose
Increase the output verbosity. Each usage of this option will
380 increase the verbosity (currently more than 2 doesn't make sense)
381 from the default of one.
-q, \--quiet
Decrease the output verbosity. Each usage of this option will
decrease the verbosity (less than zero doesn't make sense) from the
default of one.
-V, \--version
Just show the program version and exit.
394 When executing jobs via LUXI (using the ``-X`` option), normally hbal
will execute all jobs until either one errors out or all the jobs
finish successfully.
Since balancing can take a long time, it is possible to stop hbal
early in two ways:
401 - by sending a ``SIGINT`` (``^C``), hbal will register the termination
402 request, and will wait until the currently submitted jobs finish, at
403 which point it will exit (with exit code 1)
404 - by sending a ``SIGTERM``, hbal will immediately exit (with exit code
405 2); it is the responsibility of the user to follow up with Ganeti the
406 result of the currently-executing jobs
408 Note that in any situation, it's perfectly safe to kill hbal, either via
409 the above signals or via any other signal (e.g. ``SIGQUIT``,
410 ``SIGKILL``), since the jobs themselves are processed by Ganeti whereas
411 hbal (after submission) only watches their progression. In this case,
the user will again have to query Ganeti for job results.
417 The exit status of the command will be zero, unless for some reason the
418 algorithm fatally failed (e.g. wrong node or instance data), or (in case
of job execution) either one of the jobs has failed or the balancing
was interrupted early.
425 The program does not check all its input data for consistency, and
sometimes aborts with cryptic error messages on invalid data.
428 The algorithm is not perfect.
433 Note that these examples are not for the latest version (they don't
434 have full node data).
439 With the default options, the program shows each individual step and
440 the improvements it brings in cluster score::
443 Loaded 20 nodes, 80 instances
444 Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
445 Initial score: 0.52329131
446 Trying to minimize the CV...
447 1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
448 2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
449 3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
450 4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
451 5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
452 6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
453 7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
454 8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
455 9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
456 10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
457 11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
458 12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
459 13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
460 14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
461 15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
462 Cluster score improved from 0.52329131 to 0.00252594
464 In the above output, we can see:
- the input data (here from files) shows a cluster with 20 nodes and
  80 instances
468 - the cluster is not initially N+1 compliant
469 - the initial score is 0.52329131
471 The step list follows, showing the instance, its initial
primary/secondary nodes, the new primary/secondary pair, the new cluster score,
473 and the actions taken in this step (with 'f' denoting failover/migrate
474 and 'r' denoting replace secondary).
476 Finally, the program shows the improvement in cluster score.
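For reference, one such step line can be broken into its fields along
these lines (a small sketch assuming the format shown above)::

  # Illustrative parser for a single hbal solution step line.
  def parse_step(line):
      left, actions = line.split(" a=", 1)
      fields = left.split()
      return {
          "step": int(fields[0].rstrip(".")),
          "instance": fields[1],
          "old_nodes": tuple(fields[2].split(":")),   # primary:secondary
          "new_nodes": tuple(fields[4].split(":")),
          "score": float(fields[5]),                  # cluster score after the move
          "actions": actions.split(),                 # 'f' or 'r:<node>'
      }

  print(parse_step(
      "1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f"))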
478 A more detailed output is obtained via the *-C* and *-p* options::
481 Loaded 20 nodes, 80 instances
482 Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
483 Initial cluster status:
484 N1 Name t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
485 * node1 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
486 node2 32762 31280 12000 1861 1026 0 8 0.95476 0.55179
487 * node3 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
488 * node4 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
489 * node5 32762 1280 6000 1861 978 5 5 0.03907 0.52573
490 * node6 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
491 * node7 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
492 node8 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
493 node9 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
494 * node10 32762 7280 12000 1861 1026 4 4 0.22221 0.55179
495 node11 32762 7280 6000 1861 922 4 5 0.22221 0.49577
496 node12 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
497 node13 32762 7280 6000 1861 922 4 5 0.22221 0.49577
498 node14 32762 7280 6000 1861 922 4 5 0.22221 0.49577
499 * node15 32762 7280 12000 1861 1131 4 3 0.22221 0.60782
500 node16 32762 31280 0 1861 1860 0 0 0.95476 1.00000
501 node17 32762 7280 6000 1861 1106 5 3 0.22221 0.59479
502 * node18 32762 1280 6000 1396 561 5 3 0.03907 0.40239
503 * node19 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
504 node20 32762 13280 12000 1861 689 3 9 0.40535 0.37068
506 Initial score: 0.52329131
507 Trying to minimize the CV...
508 1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
509 2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
510 3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
511 4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
512 5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
513 6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
514 7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
515 8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
516 9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
517 10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
518 11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
519 12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
520 13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
521 14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
522 15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
523 Cluster score improved from 0.52329131 to 0.00252594
525 Commands to run to reach the above solution:
527 echo gnt-instance migrate instance14
528 echo gnt-instance replace-disks -n node16 instance14
529 echo gnt-instance migrate instance14
531 echo gnt-instance migrate instance54
532 echo gnt-instance replace-disks -n node16 instance54
533 echo gnt-instance migrate instance54
535 echo gnt-instance migrate instance4
536 echo gnt-instance replace-disks -n node16 instance4
538 echo gnt-instance replace-disks -n node2 instance48
539 echo gnt-instance migrate instance48
541 echo gnt-instance replace-disks -n node16 instance93
542 echo gnt-instance migrate instance93
544 echo gnt-instance replace-disks -n node2 instance89
545 echo gnt-instance migrate instance89
547 echo gnt-instance replace-disks -n node16 instance5
548 echo gnt-instance migrate instance5
550 echo gnt-instance migrate instance94
551 echo gnt-instance replace-disks -n node16 instance94
553 echo gnt-instance migrate instance44
554 echo gnt-instance replace-disks -n node15 instance44
556 echo gnt-instance replace-disks -n node16 instance62
558 echo gnt-instance replace-disks -n node16 instance13
560 echo gnt-instance replace-disks -n node7 instance19
562 echo gnt-instance replace-disks -n node1 instance43
564 echo gnt-instance replace-disks -n node4 instance1
566 echo gnt-instance replace-disks -n node17 instance58
568 Final cluster status:
569 N1 Name t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
570 node1 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
571 node2 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
572 node3 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
573 node4 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
574 node5 32762 7280 6000 1861 1078 4 5 0.22221 0.57947
575 node6 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
576 node7 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
577 node8 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
578 node9 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
579 node10 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
580 node11 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
581 node12 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
582 node13 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
583 node14 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
584 node15 32762 7280 6000 1861 1031 4 4 0.22221 0.55408
585 node16 32762 7280 6000 1861 1060 4 4 0.22221 0.57007
586 node17 32762 7280 6000 1861 1006 5 4 0.22221 0.54105
587 node18 32762 7280 6000 1396 761 4 2 0.22221 0.54570
588 node19 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
589 node20 32762 13280 6000 1861 1089 3 5 0.40535 0.58565
Here we see, besides the step list, the initial and final cluster
592 status, with the final one showing all nodes being N+1 compliant, and
593 the command list to reach the final solution. In the initial listing,
594 we see which nodes are not N+1 compliant.
596 The algorithm is stable as long as each step above is fully completed,
597 e.g. in step 8, both the migrate and the replace-disks are
598 done. Otherwise, if only the migrate is done, the input data is
599 changed in a way that the program will output a different solution
600 list (but hopefully will end in the same state).
602 .. vim: set textwidth=72 :