HBAL(1) htools | Ganeti H-tools
===============================

NAME
----

hbal \- Cluster balancer for Ganeti

SYNOPSIS
--------

**hbal** {backend options...} [algorithm options...] [reporting options...]

**hbal** --version

Backend options:

{ **-m** *cluster* | **-L[** *path* **]** **[-X]** | **-t** *data-file* }

Algorithm options:

**[ --max-cpu *cpu-ratio* ]**
**[ --min-disk *disk-ratio* ]**
**[ -g *delta* ]** **[ --min-gain-limit *threshold* ]**
**[ --no-disk-moves ]**
**[ -U *util-file* ]**
**[ --exclude-instances *inst...* ]**

Reporting options:

**[ -p[ *fields* ] ]**
**[ --print-instances ]**

DESCRIPTION
-----------

hbal is a cluster balancer that looks at the current state of the
cluster (nodes with their total and free disk, memory, etc.) and
instance placement and computes a series of steps designed to bring
the cluster into a better state.

The algorithm used is designed to be stable (i.e. it will give you the
same results when restarting it from the middle of the solution) and
reasonably fast. It is not, however, designed to be a perfect
algorithm: it is possible to make it go into a corner from which it
can find no improvement, because it looks only one "step" ahead.

By default, the program will show the solution incrementally as it is
computed, in a somewhat cryptic format; for getting the actual Ganeti
command list, use the **-C** option.
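
For instance, a minimal sketch of two typical runs (assuming a cluster
reachable via the Luxi backend) could be::

  # show the balancing steps, in the compact format
  hbal -L

  # the same, but also print the equivalent Ganeti command list
  hbal -L -C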

ALGORITHM
---------

The program works in independent steps; at each step, we compute the
best instance move that lowers the cluster score.

The possible move types for an instance are combinations of
failover/migrate and replace-disks such that we change one of the
instance nodes, and the other one remains (but possibly with changed
role, e.g. from primary it becomes secondary). The list is:

- failover (f)
- replace secondary (r)
- replace primary, a composite move (f, r, f)
- failover and replace secondary, also composite (f, r)
- replace secondary and failover, also composite (r, f)

We don't do the only remaining possibility of replacing both nodes
(r,f,r,f or the equivalent f,r,f,r) since these moves need an
exhaustive search over both candidate primary and secondary nodes, and
are O(n*n) in the number of nodes. Furthermore, they don't seem to
give better scores but will result in more disk replacements.
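
As an illustration of how a composite move maps to actual commands,
the *replace primary* move (f, r, f) for an instance translates into a
sequence like the following (taken from the example later in this
page)::

  gnt-instance migrate instance14
  gnt-instance replace-disks -n node16 instance14
  gnt-instance migrate instance14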

PLACEMENT RESTRICTIONS
~~~~~~~~~~~~~~~~~~~~~~

At each step, we prevent an instance move if it would cause:

- a node to go into N+1 failure state
- an instance to move onto an offline node (offline nodes are either
  read from the cluster or declared with *-O*)
- an exclusion-tag based conflict (exclusion tags are read from the
  cluster and/or defined via the *--exclusion-tags* option)
- a max vcpu/pcpu ratio to be exceeded (configured via *--max-cpu*)
- min disk free percentage to go below the configured limit
  (configured via *--min-disk*)
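
Several of these restrictions map directly to command-line options; a
hypothetical invocation configuring some of them could be::

  # mark node4 as offline and use "a" and "b" as exclusion-tag
  # prefixes
  hbal -L -O node4 --exclusion-tags a,b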

CLUSTER SCORING
~~~~~~~~~~~~~~~

As said before, the algorithm tries to minimise the cluster score at
each step. Currently this score is computed as a sum of the following
components:

- standard deviation of the percent of free memory
- standard deviation of the percent of reserved memory
- standard deviation of the percent of free disk
- count of nodes failing N+1 check
- count of instances living (either as primary or secondary) on
  offline nodes
- count of instances living (as primary) on offline nodes; this
  differs from the above metric by helping failover of such instances
  in 2-node clusters
- standard deviation of the ratio of virtual-to-physical cpus (for
  primary instances of the node)
- standard deviation of the dynamic load on the nodes, for cpus,
  memory, disk and network

The free memory and free disk values help ensure that all nodes are
somewhat balanced in their resource usage. The reserved memory helps
to ensure that nodes are somewhat balanced in holding secondary
instances, and that no node keeps too much memory reserved for
N+1. And finally, the N+1 percentage helps guide the algorithm towards
eliminating N+1 failures, if possible.

Except for the N+1 failures and offline instances counts, we use the
standard deviation since, when used with values within a fixed range
(we use percents expressed as values between zero and one), it gives
consistent results across all metrics (there are some small issues
related to different means, but it works generally well). The 'count'
type values will have higher scores and thus will matter more for
balancing; this makes them better suited for hard constraints (like
evacuating nodes and fixing N+1 failures). For example, the offline
instances count (i.e. the number of instances living on offline nodes)
will cause the algorithm to actively move instances away from offline
nodes. This, coupled with the restriction on placement given by
offline nodes, will cause evacuation of such nodes.

The dynamic load values need to be read from an external file (Ganeti
doesn't supply them), and are computed for each node as: sum of
primary instance cpu load, sum of primary instance memory load, sum of
primary and secondary instance disk load (as DRBD generates write load
on secondary nodes too in the normal case, and also read load in
degraded scenarios), and sum of primary instance network load. An
example of how to generate these values for input to hbal would be to
track ``xm list`` for the instances over a day, compute the delta of
the cpu values, and feed that via the *-U* option for all instances
(keeping the other metrics as one). For the algorithm to work, all
that is needed is that the values are consistent for a metric across
all instances (e.g. all instances use cpu% to report cpu usage, and
not something related to the number of CPU seconds used if the CPUs
are different), and that they are normalised to between zero and
one. Note that it's recommended not to have zero as the load value for
any instance metric, since then secondary instances are not well
balanced.
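
For reference, the input file (described in detail under the *-U*
option below) holds one instance per line, the columns being the
instance name followed by its cpu, memory, disk and network
utilisation; a hypothetical file with normalised, non-zero values
could look like::

  instance1 0.30 0.50 0.10 0.05
  instance2 0.70 0.20 0.40 0.10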

On a perfectly balanced cluster (all nodes the same size, all
instances the same size and spread across the nodes equally), the
values for all metrics would be zero. This doesn't happen too often in
practice :)

OFFLINE INSTANCES
~~~~~~~~~~~~~~~~~

Since current Ganeti versions do not report the memory used by offline
(down) instances, ignoring the run status of instances will cause
wrong calculations. For this reason, the algorithm subtracts the
memory size of down instances from the free node memory of their
primary node, in effect simulating the startup of such instances.

EXCLUSION TAGS
~~~~~~~~~~~~~~

The exclusion tags mechanism is designed to prevent instances which
run the same workload (e.g. two DNS servers) from landing on the same
node, which would make the respective node a SPOF for the given
service.

It works by tagging instances with certain tags and then building
exclusion maps based on these. Which tags are actually used is
configured either via the command line (option *--exclusion-tags*)
or via adding them to the cluster tags:

--exclusion-tags=a,b
  This will make all instance tags of the form *a:\**, *b:\** be
  considered for the exclusion map

cluster tags *htools:iextags:a*, *htools:iextags:b*
  This will make instance tags *a:\**, *b:\** be considered for the
  exclusion map. More precisely, the suffix of cluster tags starting
  with *htools:iextags:* will become the prefix of the exclusion tags.

Both the above forms mean that two instances both having (e.g.) the
tag *a:foo* or *b:bar* won't end up on the same node.
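
As a sketch, assuming a hypothetical service name *dns* used as the
exclusion-tag prefix, the tagging could be done with the standard
Ganeti tag commands::

  # declare "dns" as an exclusion-tag prefix via a cluster tag
  gnt-cluster add-tags htools:iextags:dns

  # give both DNS servers the same tag, so hbal keeps them apart
  gnt-instance add-tags instance1 dns:server
  gnt-instance add-tags instance2 dns:server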

OPTIONS
-------

The options that can be passed to the program are as follows:

-C[*file*], --print-commands[=*file*]
  Print the command list at the end of the run. Without this, the
  program will only show a shorter, but cryptic output.

  Note that the moves list will be split into independent steps,
  called "jobsets", but only for visual inspection, not for actual
  parallelisation. It is not possible to parallelise these directly
  when executed via "gnt-instance" commands, since a compound command
  (e.g. failover and replace-disks) must be executed
  serially. Parallel execution is only possible when using the Luxi
  backend and the *-L* option.

  The algorithm for splitting the moves into jobsets is to accumulate
  moves until the next move touches nodes already touched by the
  current moves; this means we can't execute them in parallel (due to
  resource allocation in Ganeti) and thus we start a new jobset.

-p[*fields*], --print-nodes
  Prints the before and after node status, in a format designed to
  allow the user to understand the node's most important parameters.

  It is possible to customise the listed information by passing a
  comma-separated list of field names to this option (the field list
  is currently undocumented), or to extend the default field list by
  prefixing the additional field list with a plus sign. By default,
  the node list will contain the following information:

  F
    a character denoting the status of the node, with '-' meaning an
    offline node, '*' meaning N+1 failure and blank meaning a good
    node

  Name
    the node name

  t_mem
    the total node memory

  n_mem
    the memory used by the node itself

  i_mem
    the memory used by instances

  x_mem
    the amount of memory which seems to be in use but cannot be
    accounted for by any instance; usually this means that the
    hypervisor has some overhead or that there are other reporting
    errors

  f_mem
    the free node memory

  r_mem
    the reserved node memory, which is the amount of free memory
    needed for N+1 compliance

  t_dsk
    total disk

  f_dsk
    free disk

  pcpu
    the number of physical cpus on the node

  vcpu
    the number of virtual cpus allocated to primary instances

  pri
    number of primary instances

  sec
    number of secondary instances

  p_fmem
    percent of free memory

  p_fdsk
    percent of free disk

  r_cpu
    ratio of virtual to physical cpus

  lCpu
    the dynamic CPU load (if the information is available)

  lMem
    the dynamic memory load (if the information is available)

  lDsk
    the dynamic disk load (if the information is available)

  lNet
    the dynamic net load (if the information is available)

--print-instances
  Prints the before and after instance map. This is less useful than
  the node status, but it can help in understanding instance moves.

-o, --oneline
  Only shows a one-line output from the program, designed for the case
  when one wants to look at multiple clusters at once and check their
  status.

  The line will contain four fields:

  - initial cluster score
  - number of steps in the solution
  - final cluster score
  - improvement in the cluster score

-O *name*
  This option (which can be given multiple times) will mark nodes as
  being *offline*. This means a couple of things:

  - instances won't be placed on these nodes, not even temporarily;
    e.g. the *replace primary* move is not available if the secondary
    node is offline, since this move requires a failover.
  - these nodes will not be included in the score calculation (except
    for the percentage of instances on offline nodes)

  Note that the algorithm will also mark as offline any nodes which
  are reported by RAPI as such, or that have "?" in file-based input
  in the offline column.

-e *score*, --min-score=*score*
  This parameter denotes the minimum score we are happy with and
  alters the computation in two ways:

  - if the cluster has the initial score lower than this value, then
    we don't enter the algorithm at all, and exit with success
  - during the iterative process, if we reach a score lower than this
    value, we exit the algorithm

  The default value of the parameter is currently ``1e-9`` (chosen
  empirically).

-g *delta*, --min-gain=*delta*
  Since the balancing algorithm can sometimes result in just very tiny
  improvements, that bring less gain than they cost in relocation
  time, this parameter (defaulting to 0.01) represents the minimum
  gain we require during a step, to continue balancing.

--min-gain-limit=*threshold*
  The above min-gain option will only take effect if the cluster score
  is already below *threshold* (defaults to 0.1). The rationale behind
  this setting is that at high cluster scores (badly balanced
  clusters), we don't want to abort the rebalance too quickly, as
  later gains might still be significant. However, under the
  threshold, the total remaining gain is at most the threshold value,
  so we can exit early.

--no-disk-moves
  This parameter prevents hbal from using disk move
  (i.e. "gnt-instance replace-disks") operations. This will result in
  a much quicker balancing, but of course the improvements are
  limited. It is up to the user to decide when to use one or another.

--evac-mode
  This parameter restricts the list of instances considered for moving
  to the ones living on offline/drained nodes. It can be used as a
  (bulk) replacement for Ganeti's own *gnt-node evacuate*, with the
  note that it doesn't guarantee full evacuation.
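
  A sketch of such a bulk evacuation (assuming a Luxi-reachable
  cluster) could be::

    # move instances away from offline/drained nodes and execute the
    # resulting jobs
    hbal -L -X --evac-mode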

--exclude-instances=*instances*
  This parameter prevents the given instances (as a comma-separated
  list) from being moved during the rebalance.

-U *util-file*
  This parameter specifies a file holding instance dynamic utilisation
  information that will be used to tweak the balancing algorithm to
  equalise load on the nodes (as opposed to static resource
  usage). The file is in the format "instance_name cpu_util mem_util
  disk_util net_util" where the "_util" parameters are interpreted as
  numbers and the instance name must match exactly the instance as
  read from Ganeti. In case of unknown instance names, the program
  will abort.

  If not given, the default values are one for all metrics and thus
  dynamic utilisation has only one effect on the algorithm: the
  equalisation of the secondary instances across nodes (this is the
  only metric that is not tracked by another, dedicated value, and
  thus the disk load of instances will cause secondary instance
  equalisation). Note that a value of one will also influence slightly
  the primary instance count, but that is already tracked via other
  metrics and thus the influence of the dynamic utilisation will be
  practically insignificant.
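
  For example, using a hypothetical utilisation file such as the one
  shown in the ALGORITHM section::

    hbal -L -U util.data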

-t *datafile*, --text-data=*datafile*
  The name of the file holding node and instance information (if not
  collecting via RAPI or LUXI). This or one of the other backends must
  be selected.

-S *filename*, --save-cluster=*filename*
  If given, the state of the cluster before the balancing is saved to
  the given file plus the extension "original"
  (i.e. *filename*.original), and the state at the end of the
  balancing is saved to the given file plus the extension "balanced"
  (i.e. *filename*.balanced). This allows re-feeding the cluster state
  to either hbal itself or for example hspace.
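
  A sketch of saving and then re-feeding the cluster state (the file
  name is hypothetical)::

    # writes cluster-state.original and cluster-state.balanced
    hbal -L -S cluster-state

    # re-run the balancer on the balanced snapshot
    hbal -t cluster-state.balanced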

-m *cluster*
  Collect data directly from the *cluster* given as an argument via
  RAPI. If the argument doesn't contain a colon (:), then it is
  converted into a fully-built URL via prepending ``https://`` and
  appending the default RAPI port, otherwise it's considered a
  fully-specified URL and is used as-is.
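
  For instance, with a hypothetical cluster name and 5080 as the
  default RAPI port, the following two invocations would be
  equivalent::

    hbal -m cluster.example.com
    hbal -m https://cluster.example.com:5080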

-L [*path*]
  Collect data directly from the master daemon, which is to be
  contacted via luxi (an internal Ganeti protocol). An optional *path*
  argument is interpreted as the path to the unix socket on which the
  master daemon listens; otherwise, the default path used by Ganeti
  when installed with *--localstatedir=/var* is used.

-X
  When using the Luxi backend, hbal can also execute the given
  commands. The execution method is to execute the individual jobsets
  (see the *-C* option for details) in separate stages, aborting if at
  any time a jobset doesn't have all jobs successful. Each step in the
  balancing solution will be translated into exactly one Ganeti job
  (having between one and three OpCodes), and all the steps in a
  jobset will be executed in parallel. The jobsets themselves are
  executed serially.
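
  For example, to compute and directly execute at most five balancing
  steps (using the *-l* option described below)::

    hbal -L -X -l 5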

-l *N*, --max-length=*N*
  Restrict the solution to this length. This can be used for example
  to automate the execution of the balancing.

--max-cpu=*cpu-ratio*
  The maximum virtual to physical cpu ratio, as a floating point
  number greater than or equal to one. For example, specifying
  *cpu-ratio* as **2.5** means that, for a 4-cpu machine, a maximum of
  10 virtual cpus should be allowed to be in use for primary
  instances. A value of exactly one means there will be no CPU
  over-subscription, while values below one don't make sense, as that
  means other resources (e.g. disk) won't be fully utilised due to the
  CPU restriction.

--min-disk=*disk-ratio*
  The minimum amount of free disk space remaining, as a floating point
  number. For example, specifying *disk-ratio* as **0.25** means that
  at least one quarter of disk space should be left free on nodes.
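
  For example, a hypothetical run enforcing both resource ratios::

    # allow up to 2.5 virtual cpus per physical cpu and require a
    # quarter of each node's disk space to stay free
    hbal -L --max-cpu 2.5 --min-disk 0.25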

-G *uuid*, --group=*uuid*
  On a multi-group cluster, select this group for processing.
  Otherwise hbal will abort, since it cannot balance multiple groups
  at the same time.

-v, --verbose
  Increase the output verbosity. Each usage of this option will
  increase the verbosity (currently more than 2 doesn't make sense)
  from the default of one.

-q, --quiet
  Decrease the output verbosity. Each usage of this option will
  decrease the verbosity (less than zero doesn't make sense) from the
  default of one.

-V, --version
  Just show the program version and exit.

EXIT STATUS
-----------

The exit status of the command will be zero, unless for some reason
the algorithm fatally failed (e.g. wrong node or instance data), or
(in case of job execution) any job has failed.

ENVIRONMENT
-----------

If the variables **HTOOLS_NODES** and **HTOOLS_INSTANCES** are present
in the environment, they will override the default names for the nodes
and instances files. These will of course have no effect when the RAPI
or Luxi backends are used.
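
A sketch of overriding the file names for the file-based backend (the
paths are hypothetical)::

  HTOOLS_NODES=/tmp/nodes HTOOLS_INSTANCES=/tmp/instances hbal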

BUGS
----

The program does not check its input data for consistency, and aborts
with cryptic error messages in this case.

The algorithm is not perfect.

The output format is not easily scriptable, and the program should
feed moves directly into Ganeti (either via RAPI or via a gnt-debug
input file).

EXAMPLE
-------

Note that these examples are not for the latest version (they don't
have full node data).

Basic cluster example
~~~~~~~~~~~~~~~~~~~~~

With the default options, the program shows each individual step and
the improvements it brings in cluster score::

  Loaded 20 nodes, 80 instances
  Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
  Initial score: 0.52329131
  Trying to minimize the CV...
     1. instance14  node1:node10  => node16:node10 0.42109120 a=f r:node16 f
     2. instance54  node4:node15  => node16:node15 0.31904594 a=f r:node16 f
     3. instance4   node5:node2   => node2:node16  0.26611015 a=f r:node16
     4. instance48  node18:node20 => node2:node18  0.21361717 a=r:node2 f
     5. instance93  node19:node18 => node16:node19 0.16166425 a=r:node16 f
     6. instance89  node3:node20  => node2:node3   0.11005629 a=r:node2 f
     7. instance5   node6:node2   => node16:node6  0.05841589 a=r:node16 f
     8. instance94  node7:node20  => node20:node16 0.00658759 a=f r:node16
     9. instance44  node20:node2  => node2:node15  0.00438740 a=f r:node15
    10. instance62  node14:node18 => node14:node16 0.00390087 a=r:node16
    11. instance13  node11:node14 => node11:node16 0.00361787 a=r:node16
    12. instance19  node10:node11 => node10:node7  0.00336636 a=r:node7
    13. instance43  node12:node13 => node12:node1  0.00305681 a=r:node1
    14. instance1   node1:node2   => node1:node4   0.00263124 a=r:node4
    15. instance58  node19:node20 => node19:node17 0.00252594 a=r:node17
  Cluster score improved from 0.52329131 to 0.00252594

In the above output, we can see:

- the input data (here from files) shows a cluster with 20 nodes and
  80 instances
- the cluster is not initially N+1 compliant
- the initial score is 0.52329131

The step list follows, showing the instance, its initial
primary/secondary nodes, the new primary/secondary nodes, the cluster
score after the move, and the actions taken in this step (with 'f'
denoting failover/migrate and 'r' denoting replace secondary).

Finally, the program shows the improvement in cluster score.

A more detailed output is obtained via the *-C* and *-p* options::

  Loaded 20 nodes, 80 instances
  Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
  Initial cluster status:
  N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec  p_fmem  p_fdsk
  *  node1  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
     node2  32762 31280 12000  1861  1026   0   8 0.95476 0.55179
  *  node3  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
  *  node4  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
  *  node5  32762  1280  6000  1861   978   5   5 0.03907 0.52573
  *  node6  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
  *  node7  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
     node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
  *  node10 32762  7280 12000  1861  1026   4   4 0.22221 0.55179
     node11 32762  7280  6000  1861   922   4   5 0.22221 0.49577
     node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node13 32762  7280  6000  1861   922   4   5 0.22221 0.49577
     node14 32762  7280  6000  1861   922   4   5 0.22221 0.49577
  *  node15 32762  7280 12000  1861  1131   4   3 0.22221 0.60782
     node16 32762 31280     0  1861  1860   0   0 0.95476 1.00000
     node17 32762  7280  6000  1861  1106   5   3 0.22221 0.59479
  *  node18 32762  1280  6000  1396   561   5   3 0.03907 0.40239
  *  node19 32762  1280  6000  1861  1026   5   3 0.03907 0.55179
     node20 32762 13280 12000  1861   689   3   9 0.40535 0.37068

  Initial score: 0.52329131
  Trying to minimize the CV...
     1. instance14  node1:node10  => node16:node10 0.42109120 a=f r:node16 f
     2. instance54  node4:node15  => node16:node15 0.31904594 a=f r:node16 f
     3. instance4   node5:node2   => node2:node16  0.26611015 a=f r:node16
     4. instance48  node18:node20 => node2:node18  0.21361717 a=r:node2 f
     5. instance93  node19:node18 => node16:node19 0.16166425 a=r:node16 f
     6. instance89  node3:node20  => node2:node3   0.11005629 a=r:node2 f
     7. instance5   node6:node2   => node16:node6  0.05841589 a=r:node16 f
     8. instance94  node7:node20  => node20:node16 0.00658759 a=f r:node16
     9. instance44  node20:node2  => node2:node15  0.00438740 a=f r:node15
    10. instance62  node14:node18 => node14:node16 0.00390087 a=r:node16
    11. instance13  node11:node14 => node11:node16 0.00361787 a=r:node16
    12. instance19  node10:node11 => node10:node7  0.00336636 a=r:node7
    13. instance43  node12:node13 => node12:node1  0.00305681 a=r:node1
    14. instance1   node1:node2   => node1:node4   0.00263124 a=r:node4
    15. instance58  node19:node20 => node19:node17 0.00252594 a=r:node17
  Cluster score improved from 0.52329131 to 0.00252594

  Commands to run to reach the above solution:
    echo step 1
    echo gnt-instance migrate instance14
    echo gnt-instance replace-disks -n node16 instance14
    echo gnt-instance migrate instance14
    echo step 2
    echo gnt-instance migrate instance54
    echo gnt-instance replace-disks -n node16 instance54
    echo gnt-instance migrate instance54
    echo step 3
    echo gnt-instance migrate instance4
    echo gnt-instance replace-disks -n node16 instance4
    echo step 4
    echo gnt-instance replace-disks -n node2 instance48
    echo gnt-instance migrate instance48
    echo step 5
    echo gnt-instance replace-disks -n node16 instance93
    echo gnt-instance migrate instance93
    echo step 6
    echo gnt-instance replace-disks -n node2 instance89
    echo gnt-instance migrate instance89
    echo step 7
    echo gnt-instance replace-disks -n node16 instance5
    echo gnt-instance migrate instance5
    echo step 8
    echo gnt-instance migrate instance94
    echo gnt-instance replace-disks -n node16 instance94
    echo step 9
    echo gnt-instance migrate instance44
    echo gnt-instance replace-disks -n node15 instance44
    echo step 10
    echo gnt-instance replace-disks -n node16 instance62
    echo step 11
    echo gnt-instance replace-disks -n node16 instance13
    echo step 12
    echo gnt-instance replace-disks -n node7 instance19
    echo step 13
    echo gnt-instance replace-disks -n node1 instance43
    echo step 14
    echo gnt-instance replace-disks -n node4 instance1
    echo step 15
    echo gnt-instance replace-disks -n node17 instance58

  Final cluster status:
  N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec  p_fmem  p_fdsk
     node1  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node2  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node3  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node4  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node5  32762  7280  6000  1861  1078   4   5 0.22221 0.57947
     node6  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node7  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node10 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node11 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
     node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node13 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
     node14 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
     node15 32762  7280  6000  1861  1031   4   4 0.22221 0.55408
     node16 32762  7280  6000  1861  1060   4   4 0.22221 0.57007
     node17 32762  7280  6000  1861  1006   5   4 0.22221 0.54105
     node18 32762  7280  6000  1396   761   4   2 0.22221 0.54570
     node19 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     node20 32762 13280  6000  1861  1089   3   5 0.40535 0.58565

Here we see, besides the step list, the initial and final cluster
status, with the final one showing all nodes being N+1 compliant, and
the command list to reach the final solution. In the initial listing,
we see which nodes are not N+1 compliant.

The algorithm is stable as long as each step above is fully completed,
e.g. in step 8, both the migrate and the replace-disks are
done. Otherwise, if only the migrate is done, the input data is
changed in a way that the program will output a different solution
list (but hopefully will end in the same state).

SEE ALSO
--------

**hspace**(1), **hscan**(1), **hail**(1), **ganeti**(7),
**gnt-instance**(8), **gnt-node**(8)

COPYRIGHT
---------

Copyright (C) 2009, 2010 Google Inc. Permission is granted to copy,
distribute and/or modify under the terms of the GNU General Public
License as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.

On Debian systems, the complete text of the GNU General Public License
can be found in /usr/share/common-licenses/GPL.