1 .TH HBAL 1 2009-03-23 htools "Ganeti H-tools"
3 hbal \- Cluster balancer for Ganeti
7 .B "[backend options...]"
8 .B "[algorithm options...]"
9 .B "[reporting options...]"
16 .BI "[ -m " cluster " ]"
18 .BI "[ -L[" path "] [-X]]"
20 .BI "[ -t " data-file " ]"
24 .BI "[ --max-cpu " cpu-ratio " ]"
25 .BI "[ --min-disk " disk-ratio " ]"
26 .BI "[ -l " limit " ]"
27 .BI "[ -e " score " ]"
28 .BI "[ -O " name... " ]"
29 .B "[ --no-disk-moves ]"
30 .BI "[ -U " util-file " ]"
32 .BI "[ --exclude-instances " inst... " ]"
36 .BI "[ -C[" file "] ]"
37 .BI "[ -p[" fields "] ]"
38 .B "[ --print-instances ]"
44 hbal is a cluster balancer that looks at the current state of the
45 cluster (nodes with their total and free disk, memory, etc.) and
46 instance placement and computes a series of steps designed to bring
47 the cluster into a better state.
49 The algorithm used is designed to be stable (i.e. it will give you the
50 same results when restarting it from the middle of the solution) and
51 reasonably fast. It is not, however, designed to be a perfect
52 algorithm \(em it is possible to make it go into a corner from which
53 it can find no improvement, because it looks only one "step" ahead.
55 By default, the program will show the solution incrementally as it is
56 computed, in a somewhat cryptic format; for getting the actual Ganeti
57 command list, use the \fB-C\fR option.
61 The program works in independent steps; at each step, we compute the
62 best instance move that lowers the cluster score.
64 The possible move types for an instance are combinations of
65 failover/migrate and replace-disks such that we change one of the
66 instance's nodes while the other one remains (but possibly with a changed
67 role, e.g. from primary it becomes secondary). The list is:
77 replace primary, a composite move (f, r, f)
80 failover and replace secondary, also composite (f, r)
83 replace secondary and failover, also composite (r, f)
86 We don't do the only remaining possibility of replacing both nodes
87 (r,f,r,f or the equivalent f,r,f,r), since this move needs an
88 exhaustive search over both candidate primary and secondary nodes, and
89 is O(n*n) in the number of nodes. Furthermore, it doesn't seem to
90 give better scores but would result in more disk replacements.
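As an illustration, the node pairs these moves produce can be sketched in Python (a hypothetical helper, not hbal code; 'f' denotes failover/migrate and 'r:X' replace-disks with new secondary X):

```python
# Hypothetical sketch of the move list above: for an instance on
# (pri, sec), enumerate each move's action string and the resulting
# (primary, secondary) node pair.
def possible_moves(pri, sec, nodes):
    out = [("f", (sec, pri))]                   # plain failover
    for n in nodes - {pri, sec}:
        out.append(("r:%s" % n, (pri, n)))      # replace secondary
        out.append(("f r:%s f" % n, (n, sec)))  # replace primary (f, r, f)
        out.append(("f r:%s" % n, (sec, n)))    # failover, replace secondary (f, r)
        out.append(("r:%s f" % n, (n, pri)))    # replace secondary, failover (r, f)
    return out
```

For example, the composite "replace primary" move first fails over, then replaces the (now) secondary, then fails over again, ending with the instance primary on the new node.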
92 .SS PLACEMENT RESTRICTIONS
94 At each step, we prevent an instance move if it would cause:
99 a node to go into N+1 failure state
102 an instance to move onto an offline node (offline nodes are either
103 read from the cluster or declared with \fI-O\fR)
106 an exclusion-tag based conflict (exclusion tags are read from the
107 cluster and/or defined via the \fI--exclusion-tags\fR option)
110 a max vcpu/pcpu ratio to be exceeded (configured via \fI--max-cpu\fR)
113 min disk free percentage to go below the configured limit (configured
114 via \fI--min-disk\fR)
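These restrictions can be sketched as a single predicate (an illustrative sketch only: the field names and default thresholds are hypothetical, and the N+1 check, which needs full cluster state, is omitted):

```python
# Illustrative placement check, not hbal's actual code. Returns True
# if moving `inst` onto `node` passes the documented restrictions.
def move_allowed(node, inst, max_cpu=4.0, min_disk=0.1):
    if node["offline"]:
        return False                                  # offline nodes excluded
    vcpu_ratio = (node["vcpus_used"] + inst["vcpus"]) / node["pcpus"]
    if vcpu_ratio > max_cpu:
        return False                                  # --max-cpu exceeded
    disk_ratio = (node["free_disk"] - inst["disk"]) / node["total_disk"]
    if disk_ratio < min_disk:
        return False                                  # --min-disk violated
    if inst["excl_tags"] & node["excl_tags"]:
        return False                                  # exclusion-tag conflict
    return True
```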
118 As said before, the algorithm tries to minimise the cluster score at
119 each step. Currently this score is computed as a sum of the following
124 standard deviation of the percent of free memory
127 standard deviation of the percent of reserved memory
130 standard deviation of the percent of free disk
133 count of nodes failing N+1 check
136 count of instances living (either as primary or secondary) on offline nodes
140 count of instances living (as primary) on offline nodes; this differs
141 from the above metric by helping failover of such instances in 2-node clusters
145 standard deviation of the ratio of virtual-to-physical cpus (for
146 primary instances of the node)
149 standard deviation of the dynamic load on the nodes, for cpus,
150 memory, disk and network
153 The free memory and free disk values help ensure that all nodes are
154 somewhat balanced in their resource usage. The reserved memory helps
155 to ensure that nodes are somewhat balanced in holding secondary
156 instances, and that no node keeps too much memory reserved for
157 N+1. And finally, the N+1 count helps guide the algorithm towards
158 eliminating N+1 failures, if possible.
160 Except for the N+1 failures and offline instances counts, we use the
161 standard deviation since when used with values within a fixed range
162 (we use percents expressed as values between zero and one) it gives
163 consistent results across all metrics (there are some small issues
164 related to different means, but it works generally well). The 'count'
165 type values will have a higher score and thus matter more for
166 balancing; this makes them better suited for hard constraints (like evacuating
167 nodes and fixing N+1 failures). For example, the offline instances
168 count (i.e. the number of instances living on offline nodes) will
169 cause the algorithm to actively move instances away from offline
170 nodes. This, coupled with the restriction on placement given by
171 offline nodes, will cause evacuation of such nodes.
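The scoring idea can be sketched as follows (a simplified illustration with hypothetical field names, not hbal's exact metric list or weights):

```python
# Simplified sketch of the score: standard deviations of percentage
# metrics (values in [0, 1]) plus raw counts for the hard constraints.
from statistics import pstdev

def cluster_score(nodes):
    col = lambda key: [n[key] for n in nodes]
    score = pstdev(col("p_free_mem"))               # free memory percent
    score += pstdev(col("p_reserved_mem"))          # reserved memory percent
    score += pstdev(col("p_free_disk"))             # free disk percent
    score += sum(1 for n in nodes if n["n1_fail"])  # N+1 failure count
    score += sum(n["offline_insts"] for n in nodes) # instances on offline nodes
    return score
```

Because the percentage metrics stay below one while the counts are whole numbers, any N+1 failure or offline instance dominates the score, which is what drives the evacuation behaviour described above.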
173 The dynamic load values need to be read from an external file (Ganeti
174 doesn't supply them), and are computed for each node as: sum of
175 primary instance cpu load, sum of primary instance memory load, sum of
176 primary and secondary instance disk load (as DRBD generates write load
177 on secondary nodes too in the normal case, and in degraded scenarios
178 read load as well), and sum of primary instance network load. An example of
179 how to generate these values for input to hbal would be to track "xm
180 list" output for the instances over a day, compute the delta of the cpu
181 values, and feed that via the \fI-U\fR option for all instances (and
182 keep the other metrics as one). For the algorithm to work, all that is
183 needed is that the values are consistent for a metric across all
184 instances (e.g. all instances use cpu% to report cpu usage, and not
185 something related to number of CPU seconds used if the CPUs are
186 different), and that they are normalised to values between zero and one. Note
187 that it's recommended not to use zero as the load value for any
188 instance metric, since then secondary instances are not well balanced.
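Following that suggestion, producing a \fI-U\fR file from two cpu-seconds samples taken a day apart might look like this (a hypothetical helper; only the "instance cpu mem disk net" line format is taken from the \fI-U\fR option's documentation):

```python
# Hypothetical sketch: turn per-instance cpu-seconds deltas into
# normalised load values in (0, 1], keeping the other metrics at one
# and avoiding exact zeros (zero loads unbalance secondary placement).
def make_util_lines(cpu_secs_t0, cpu_secs_t1):
    deltas = {name: cpu_secs_t1[name] - cpu_secs_t0[name]
              for name in cpu_secs_t0}
    peak = max(deltas.values()) or 1
    return ["%s %.4f 1 1 1" % (name, max(delta / peak, 0.01))
            for name, delta in sorted(deltas.items())]
```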
190 On a perfectly balanced cluster (all nodes the same size, all
191 instances the same size and spread across the nodes equally), the
192 values for all metrics would be zero. This doesn't happen too often in practice.
195 .SS OFFLINE INSTANCES
197 Since current Ganeti versions do not report the memory used by offline
198 (down) instances, ignoring the run status of instances will cause
199 wrong calculations. For this reason, the algorithm subtracts the
200 memory size of down instances from the free node memory of their
201 primary node, in effect simulating the startup of such instances.
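The adjustment amounts to the following (an illustrative sketch with hypothetical field names):

```python
# Sketch: subtract the memory of down instances from the free memory
# of their primary node, simulating their startup.
def adjusted_free_mem(free_mem, primary_instances):
    for inst in primary_instances:
        if not inst["running"]:
            free_mem -= inst["mem"]
    return free_mem
```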
205 The exclusion tags mechanism is designed to prevent instances which
206 run the same workload (e.g. two DNS servers) from landing on the same node,
207 which would make the respective node a SPOF for the given service.
209 It works by tagging instances with certain tags and then building
210 exclusion maps based on these. Which tags are actually used is
211 configured either via the command line (option \fI--exclusion-tags\fR)
212 or via adding them to the cluster tags:
215 .B --exclusion-tags=a,b
216 This will make all instance tags of the form \fIa:*\fR, \fIb:*\fR be
217 considered for the exclusion map
220 cluster tags \fBhtools:iextags:a\fR, \fBhtools:iextags:b\fR
221 This will make instance tags \fIa:*\fR, \fIb:*\fR be considered for
222 the exclusion map. More precisely, the suffix of cluster tags starting
223 with \fBhtools:iextags:\fR will become the prefix of the exclusion tags.
227 Both of the above forms mean that two instances both having (e.g.) the
228 tag \fIa:foo\fR or \fIb:bar\fR won't end up on the same node.
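The mechanism can be sketched as follows (an illustrative predicate, assuming exclusion prefixes a and b as in the examples above):

```python
# Sketch: two instances conflict when they share a tag whose prefix
# is one of the configured exclusion-tag prefixes.
def exclusion_conflict(tags1, tags2, prefixes=("a", "b")):
    def excl(tags):
        return {t for t in tags
                if any(t.startswith(p + ":") for p in prefixes)}
    return bool(excl(tags1) & excl(tags2))
```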
231 The options that can be passed to the program are as follows:
233 .B -C, --print-commands
234 Print the command list at the end of the run. Without this, the
235 program will only show a shorter but more cryptic output.
237 Note that the moves list will be split into independent steps, called
238 "jobsets", but only for visual inspection, not for actual
239 parallelisation. It is not possible to parallelise these directly when
240 executed via "gnt-instance" commands, since a compound command
241 (e.g. failover and replace\-disks) must be executed serially. Parallel
242 execution is only possible when using the Luxi backend and the \fI-X\fR option.
245 The algorithm for splitting the moves into jobsets is by accumulating
246 moves until the next move is touching nodes already touched by the
247 current moves; this means we can't execute in parallel (due to
248 resource allocation in Ganeti) and thus we start a new jobset.
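The splitting rule can be sketched as follows (an illustrative sketch; each move is represented here as an instance name plus the set of nodes it touches):

```python
# Sketch of jobset splitting: accumulate moves until one touches a
# node already touched by the current jobset, then start a new one.
def split_jobsets(moves):
    jobsets, current, touched = [], [], set()
    for name, nodes in moves:
        if touched & nodes:          # conflict: close the current jobset
            jobsets.append(current)
            current, touched = [], set()
        current.append(name)
        touched |= nodes
    if current:
        jobsets.append(current)
    return jobsets
```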
252 Prints the before and after node status, in a format designed to allow
253 the user to understand the node's most important parameters.
255 It is possible to customise the listed information by passing a
256 comma\(hyseparated list of field names to this option (the field list is
257 currently undocumented). By default, the node list will contain these
262 a character denoting the status of the node, with '\-' meaning an
263 offline node, '*' meaning N+1 failure and blank meaning a good node
269 the total node memory
272 the memory used by the node itself
275 the memory used by instances
278 the amount of memory which seems to be in use but for which it cannot be
279 determined why or by which instance; usually this means that the hypervisor has some
280 overhead or that there are other reporting errors
286 the reserved node memory, which is the amount of free memory needed for N+1 compliance
296 the number of physical cpus on the node
299 the number of virtual cpus allocated to primary instances
302 number of primary instances
305 number of secondary instances
308 percent of free memory
314 ratio of virtual to physical cpus
317 the dynamic CPU load (if the information is available)
320 the dynamic memory load (if the information is available)
323 the dynamic disk load (if the information is available)
326 the dynamic net load (if the information is available)
331 Prints the before and after instance map. This is less useful than the
332 node status, but it can help in understanding instance moves.
336 Only shows a one\(hyline output from the program, designed for the case
337 when one wants to look at multiple clusters at once and check their status.
345 initial cluster score
348 number of steps in the solution
354 improvement in the cluster score
360 This option (which can be given multiple times) will mark nodes as
361 being \fIoffline\fR. This means a couple of things:
366 instances won't be placed on these nodes, not even temporarily;
367 e.g. the \fIreplace primary\fR move is not available if the secondary
368 node is offline, since this move requires a failover.
371 these nodes will not be included in the score calculation (except for
372 the percentage of instances on offline nodes)
374 Note that hbal will also mark as offline any nodes which are reported
375 by RAPI as such, or that have "?" in file\(hybased input in any numeric fields.
380 .BI "-e" score ", --min-score=" score
381 This parameter denotes the minimum score we are happy with and alters
382 the computation in two ways:
387 if the initial score of the cluster is lower than this value, then we
388 don't enter the algorithm at all, and exit with success
391 during the iterative process, if we reach a score lower than this
392 value, we exit the algorithm
394 The default value of the parameter is currently \fI1e-9\fR (chosen semi-arbitrarily).
399 .BI "--no-disk-moves"
400 This parameter prevents hbal from using disk move (i.e. "gnt\-instance
401 replace\-disks") operations. This results in much quicker
402 balancing, but of course the improvements are limited. It is up to the
403 user to decide when to use one or the other.
407 This parameter restricts the list of instances considered for moving
408 to the ones living on offline/drained nodes. It can be used as a
409 (bulk) replacement for Ganeti's own \fIgnt-node evacuate\fR, with the
410 note that it doesn't guarantee full evacuation.
413 .BI "--exclude-instances " instances
414 This parameter prevents the given instances (specified as a comma-separated list)
415 from being moved during the rebalance. Note that the instances must be
416 given by their full names (as reported by Ganeti).
420 This parameter specifies a file holding instance dynamic utilisation
421 information that will be used to tweak the balancing algorithm to
422 equalise load on the nodes (as opposed to static resource usage). The
423 file is in the format "instance_name cpu_util mem_util disk_util
424 net_util" where the "_util" parameters are interpreted as numbers and
425 the instance name must match exactly the instance as read from
426 Ganeti. In case of unknown instance names, the program will abort.
428 If not given, the default values are one for all metrics, and thus
429 dynamic utilisation has only one effect on the algorithm: the
430 equalisation of the secondary instances across nodes (this is the only
431 metric that is not tracked by another, dedicated value, and thus the
432 disk load of instances will cause secondary instance
433 equalisation). Note that a value of one will also slightly influence the
434 primary instance count, but that is already tracked via other metrics
435 and thus the influence of the dynamic utilisation will be practically insignificant.
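A reader for the documented file format could look like this (a sketch, not hbal's actual parser):

```python
# Sketch: parse "instance_name cpu_util mem_util disk_util net_util"
# lines, aborting on instance names not known to the cluster.
def parse_util_file(text, known_instances):
    util = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        name, cpu, mem, disk, net = line.split()
        if name not in known_instances:
            raise SystemExit("unknown instance: " + name)
        util[name] = tuple(float(v) for v in (cpu, mem, disk, net))
    return util
```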
439 .BI "-t" datafile ", --text-data=" datafile
440 The name of the file holding node and instance information (if not
441 collecting via RAPI or LUXI). This or one of the other backends must be selected.
446 Collect data directly from the \fIcluster\fR given as an argument
448 via RAPI. If the argument doesn't contain a colon
449 (:), then it is converted into a fully\(hybuilt URL by prepending
450 https:// and appending the default RAPI port; otherwise it's
451 considered a fully\(hyspecified URL and is used as\(hyis.
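The URL construction rule can be sketched as follows (5080 is assumed here to be the default RAPI port):

```python
# Sketch: arguments without a colon become https://host:port URLs;
# anything else is treated as a fully-specified URL.
def rapi_url(arg, default_port=5080):
    if ":" not in arg:
        return "https://%s:%d" % (arg, default_port)
    return arg
```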
455 Collect data directly from the master daemon, which is to be contacted
456 via LUXI (an internal Ganeti protocol). An optional \fIpath\fR
457 argument is interpreted as the path to the unix socket on which the
458 master daemon listens; otherwise, the default path used by Ganeti when
459 installed with \fI--localstatedir=/var\fR is used.
463 When using the Luxi backend, hbal can also execute the given
464 commands. The execution method is to execute the individual jobsets
465 (see the \fI-C\fR option for details) in separate stages, aborting if
466 at any point a jobset does not have all its jobs successful. Each step in
467 the balancing solution will be translated into exactly one Ganeti job
468 (having between one and three OpCodes), and all the steps in a jobset
469 will be executed in parallel. The jobsets themselves are executed serially.
473 .BI "-l" N ", --max-length=" N
474 Restrict the solution to this length. This can be used for example to
475 automate the execution of the balancing.
478 .BI "--max-cpu " cpu-ratio
479 The maximum virtual\(hyto\(hyphysical cpu ratio, as a floating point
480 number greater than or equal to one. For example, specifying \fIcpu-ratio\fR
481 as \fB2.5\fR means that, for a 4\(hycpu machine, a maximum of 10
482 virtual cpus should be allowed to be in use for primary instances. A
483 value of exactly one means there will be no over\(hysubscription of cpu.
487 .BI "--min-disk " disk-ratio
488 The minimum amount of free disk space remaining, as a floating point
489 number. For example, specifying \fIdisk-ratio\fR as \fB0.25\fR means
490 that at least one quarter of disk space should be left free on nodes.
494 Increase the output verbosity. Each usage of this option will increase
495 the verbosity (currently more than 2 doesn't make sense) from the default of one.
500 Decrease the output verbosity. Each usage of this option will decrease
501 the verbosity (less than zero doesn't make sense) from the default of one.
506 Just show the program version and exit.
510 The exit status of the command will be zero, unless for some reason
511 the algorithm failed fatally (e.g. wrong node or instance data).
515 If the variables \fBHTOOLS_NODES\fR and \fBHTOOLS_INSTANCES\fR are
516 present in the environment, they will override the default names for
517 the nodes and instances files. These will, of course, have no effect
518 when the RAPI or Luxi backends are used.
522 The program does not check its input data for consistency, and aborts
523 with cryptic error messages in that case.
525 The algorithm is not perfect.
527 The output format is not easily scriptable, and the program should
528 feed moves directly into Ganeti (either via RAPI or via a gnt\-debug input file).
533 Note that these examples are not for the latest version (they don't have
538 With the default options, the program shows each individual step and
539 the improvements it brings in cluster score:
544 Loaded 20 nodes, 80 instances
545 Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
546 Initial score: 0.52329131
547 Trying to minimize the CV...
548 1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
549 2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
550 3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
551 4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
552 5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
553 6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
554 7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
555 8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
556 9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
557 10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
558 11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
559 12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
560 13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
561 14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
562 15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
563 Cluster score improved from 0.52329131 to 0.00252594
567 In the above output, we can see:
568 - the input data (here from files) shows a cluster with 20 nodes and 80 instances
570 - the cluster is not initially N+1 compliant
571 - the initial score is 0.52329131
573 The step list follows, showing the instance, its initial
574 primary/secondary nodes, the new primary/secondary nodes, the new cluster
575 score, and the actions taken in this step (with 'f' denoting failover/migrate
576 and 'r' denoting replace secondary).
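The sample step lines can be picked apart like this (an illustrative parser keyed to the example output shown here, not a stable interface):

```python
# Sketch: split a step line such as
#   "1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f"
# into its documented parts.
def parse_step(line):
    fields = line.split()
    inst = fields[1]
    old_pri, old_sec = fields[2].split(":")
    new_pri, new_sec = fields[4].split(":")
    score = float(fields[5])
    actions = " ".join(fields[6:])[2:]   # drop the leading "a="
    return inst, (old_pri, old_sec), (new_pri, new_sec), score, actions
```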
578 Finally, the program shows the improvement in cluster score.
580 A more detailed output is obtained via the \fB-C\fR and \fB-p\fR options:
585 Loaded 20 nodes, 80 instances
586 Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
587 Initial cluster status:
588 N1 Name t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
589 * node1 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
590 node2 32762 31280 12000 1861 1026 0 8 0.95476 0.55179
591 * node3 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
592 * node4 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
593 * node5 32762 1280 6000 1861 978 5 5 0.03907 0.52573
594 * node6 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
595 * node7 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
596 node8 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
597 node9 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
598 * node10 32762 7280 12000 1861 1026 4 4 0.22221 0.55179
599 node11 32762 7280 6000 1861 922 4 5 0.22221 0.49577
600 node12 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
601 node13 32762 7280 6000 1861 922 4 5 0.22221 0.49577
602 node14 32762 7280 6000 1861 922 4 5 0.22221 0.49577
603 * node15 32762 7280 12000 1861 1131 4 3 0.22221 0.60782
604 node16 32762 31280 0 1861 1860 0 0 0.95476 1.00000
605 node17 32762 7280 6000 1861 1106 5 3 0.22221 0.59479
606 * node18 32762 1280 6000 1396 561 5 3 0.03907 0.40239
607 * node19 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
608 node20 32762 13280 12000 1861 689 3 9 0.40535 0.37068
610 Initial score: 0.52329131
611 Trying to minimize the CV...
612 1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
613 2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
614 3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
615 4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
616 5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
617 6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
618 7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
619 8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
620 9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
621 10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
622 11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
623 12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
624 13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
625 14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
626 15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
627 Cluster score improved from 0.52329131 to 0.00252594
629 Commands to run to reach the above solution:
631 echo gnt\-instance migrate instance14
632 echo gnt\-instance replace\-disks \-n node16 instance14
633 echo gnt\-instance migrate instance14
635 echo gnt\-instance migrate instance54
636 echo gnt\-instance replace\-disks \-n node16 instance54
637 echo gnt\-instance migrate instance54
639 echo gnt\-instance migrate instance4
640 echo gnt\-instance replace\-disks \-n node16 instance4
642 echo gnt\-instance replace\-disks \-n node2 instance48
643 echo gnt\-instance migrate instance48
645 echo gnt\-instance replace\-disks \-n node16 instance93
646 echo gnt\-instance migrate instance93
648 echo gnt\-instance replace\-disks \-n node2 instance89
649 echo gnt\-instance migrate instance89
651 echo gnt\-instance replace\-disks \-n node16 instance5
652 echo gnt\-instance migrate instance5
654 echo gnt\-instance migrate instance94
655 echo gnt\-instance replace\-disks \-n node16 instance94
657 echo gnt\-instance migrate instance44
658 echo gnt\-instance replace\-disks \-n node15 instance44
660 echo gnt\-instance replace\-disks \-n node16 instance62
662 echo gnt\-instance replace\-disks \-n node16 instance13
664 echo gnt\-instance replace\-disks \-n node7 instance19
666 echo gnt\-instance replace\-disks \-n node1 instance43
668 echo gnt\-instance replace\-disks \-n node4 instance1
670 echo gnt\-instance replace\-disks \-n node17 instance58
672 Final cluster status:
673 N1 Name t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
674 node1 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
675 node2 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
676 node3 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
677 node4 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
678 node5 32762 7280 6000 1861 1078 4 5 0.22221 0.57947
679 node6 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
680 node7 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
681 node8 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
682 node9 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
683 node10 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
684 node11 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
685 node12 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
686 node13 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
687 node14 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
688 node15 32762 7280 6000 1861 1031 4 4 0.22221 0.55408
689 node16 32762 7280 6000 1861 1060 4 4 0.22221 0.57007
690 node17 32762 7280 6000 1861 1006 5 4 0.22221 0.54105
691 node18 32762 7280 6000 1396 761 4 2 0.22221 0.54570
692 node19 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
693 node20 32762 13280 6000 1861 1089 3 5 0.40535 0.58565
698 Here we see, besides the step list, the initial and final cluster
699 status, with the final one showing all nodes being N+1 compliant, and
700 the command list to reach the final solution. In the initial listing,
701 we see which nodes are not N+1 compliant.
703 The algorithm is stable as long as each step above is fully completed,
704 e.g. in step 8, both the migrate and the replace\-disks are
705 done. Otherwise, if only the migrate is done, the input data is
706 changed in such a way that the program will output a different solution
707 list (but hopefully will end in the same state).
710 .BR hspace "(1), " hscan "(1), " hail "(1), "
711 .BR ganeti "(7), " gnt-instance "(8), " gnt-node "(8)"
715 Copyright (C) 2009 Google Inc. Permission is granted to copy,
716 distribute and/or modify under the terms of the GNU General Public
717 License as published by the Free Software Foundation; either version 2
718 of the License, or (at your option) any later version.
720 On Debian systems, the complete text of the GNU General Public License
721 can be found in /usr/share/common-licenses/GPL.