1 .TH HBAL 1 2009-03-23 htools "Ganeti H-tools"
3 hbal \- Cluster balancer for Ganeti
7 .B "[backend options...]"
8 .B "[algorithm options...]"
9 .B "[reporting options...]"
16 .BI "[ -m " cluster " ]"
18 .BI "[ -L[" path "] [-X]]"
20 .BI "[ -t " data-file " ]"
24 .BI "[ --max-cpu " cpu-ratio " ]"
25 .BI "[ --min-disk " disk-ratio " ]"
26 .BI "[ -l " limit " ]"
27 .BI "[ -e " score " ]"
28 .BI "[ -O " name... " ]"
29 .B "[ --no-disk-moves ]"
30 .BI "[ -U " util-file " ]"
32 .BI "[ --exclude-instances " inst... " ]"
36 .BI "[ -C[" file "] ]"
37 .BI "[ -p[" fields "] ]"
38 .B "[ --print-instances ]"
44 hbal is a cluster balancer that looks at the current state of the
45 cluster (nodes with their total and free disk, memory, etc.) and
46 instance placement and computes a series of steps designed to bring
47 the cluster into a better state.
49 The algorithm used is designed to be stable (i.e. it will give you the
50 same results when restarting it from the middle of the solution) and
51 reasonably fast. It is not, however, designed to be a perfect
52 algorithm \(em it is possible to make it go into a corner from which
53 it can find no improvement, because it looks only one "step" ahead.
55 By default, the program will show the solution incrementally as it is
56 computed, in a somewhat cryptic format; for getting the actual Ganeti
57 command list, use the \fB-C\fR option.
61 The program works in independent steps; at each step, we compute the
62 best instance move that lowers the cluster score.
The possible move types for an instance are combinations of
65 failover/migrate and replace-disks such that we change one of the
66 instance nodes, and the other one remains (but possibly with changed
67 role, e.g. from primary it becomes secondary). The list is:
77 replace primary, a composite move (f, r, f)
80 failover and replace secondary, also composite (f, r)
83 replace secondary and failover, also composite (r, f)
86 We don't do the only remaining possibility of replacing both nodes
(r,f,r,f or the equivalent f,r,f,r) since this move needs an
exhaustive search over both candidate primary and secondary nodes, and
is O(n*n) in the number of nodes. Furthermore, it doesn't seem to
give better scores but will result in more disk replacements.
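These move types can be written down as sequences of elementary
operations. The sketch below (the dictionary and its key names are
ours, for illustration only) uses 'f' for failover/migrate and 'r' for
replace secondary, matching the "a=..." actions printed in hbal's
output:

```python
# Instance move types as sequences of elementary operations:
#   "f" = failover/migrate, "r" = replace secondary.
# The key names are illustrative, not identifiers used by hbal itself.
MOVE_TYPES = {
    "failover": ["f"],                             # swap primary/secondary
    "replace-secondary": ["r"],                    # move the secondary only
    "replace-primary": ["f", "r", "f"],            # composite move
    "failover-and-replace-secondary": ["f", "r"],  # composite move
    "replace-secondary-and-failover": ["r", "f"],  # composite move
}
```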
92 .SS PLACEMENT RESTRICTIONS
94 At each step, we prevent an instance move if it would cause:
99 a node to go into N+1 failure state
102 an instance to move onto an offline node (offline nodes are either
103 read from the cluster or declared with \fI-O\fR)
106 an exclusion-tag based conflict (exclusion tags are read from the
107 cluster and/or defined via the \fI--exclusion-tags\fR option)
110 a max vcpu/pcpu ratio to be exceeded (configured via \fI--max-cpu\fR)
113 min disk free percentage to go below the configured limit (configured
114 via \fI--min-disk\fR)
118 As said before, the algorithm tries to minimise the cluster score at
each step. Currently this score is computed as a sum of the following
components:
124 standard deviation of the percent of free memory
127 standard deviation of the percent of reserved memory
130 standard deviation of the percent of free disk
133 count of nodes failing N+1 check
count of instances living (either as primary or secondary) on
offline nodes
count of instances living (as primary) on offline nodes; this differs
from the above metric by helping failover of such instances in 2-node
clusters
145 standard deviation of the ratio of virtual-to-physical cpus (for
146 primary instances of the node)
149 standard deviation of the dynamic load on the nodes, for cpus,
150 memory, disk and network
153 The free memory and free disk values help ensure that all nodes are
154 somewhat balanced in their resource usage. The reserved memory helps
155 to ensure that nodes are somewhat balanced in holding secondary
156 instances, and that no node keeps too much memory reserved for
157 N+1. And finally, the N+1 percentage helps guide the algorithm towards
158 eliminating N+1 failures, if possible.
160 Except for the N+1 failures and offline instances counts, we use the
161 standard deviation since when used with values within a fixed range
162 (we use percents expressed as values between zero and one) it gives
163 consistent results across all metrics (there are some small issues
164 related to different means, but it works generally well). The 'count'
type values will have a higher score and thus will matter more for
balancing; this makes them better suited for hard constraints (like
evacuating nodes and fixing N+1 failures). For example, the offline instances
168 count (i.e. the number of instances living on offline nodes) will
169 cause the algorithm to actively move instances away from offline
170 nodes. This, coupled with the restriction on placement given by
171 offline nodes, will cause evacuation of such nodes.
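As a rough illustration of such a score (the node values below are
hypothetical, and this is not hbal's exact code or weighting):

```python
import statistics

def stddev_component(values):
    """Population standard deviation of per-node percentages (0.0-1.0)."""
    return statistics.pstdev(values)

# Hypothetical per-node values for two of the metrics above:
pct_free_mem = [0.04, 0.95, 0.04, 0.22]
pct_free_dsk = [0.55, 0.55, 0.52, 0.55]
n_plus_1_failures = 3  # a 'count' metric, added as-is

# An unbalanced cluster yields a larger sum; count metrics dominate.
score = (stddev_component(pct_free_mem)
         + stddev_component(pct_free_dsk)
         + n_plus_1_failures)
```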
173 The dynamic load values need to be read from an external file (Ganeti
174 doesn't supply them), and are computed for each node as: sum of
175 primary instance cpu load, sum of primary instance memory load, sum of
176 primary and secondary instance disk load (as DRBD generates write load
on secondary nodes too in the normal case, and also read load in
degraded scenarios), and sum of primary instance network load. One way
to generate these values for input to hbal would be to track "xm
list" output for the instances over a day, compute the delta of the
cpu values, and feed that via the \fI-U\fR option for all instances
(keeping the other metrics as one). For the algorithm to work, all that is
183 needed is that the values are consistent for a metric across all
184 instances (e.g. all instances use cpu% to report cpu usage, and not
185 something related to number of CPU seconds used if the CPUs are
different), and that they are normalised to between zero and one. Note
that it's recommended not to use zero as the load value for any
instance metric, since then secondary instances are not well balanced.
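A sketch of deriving one such normalised value from two cumulative
CPU-second samples (the function, the sample numbers, and the 0.001
clamping floor are illustrative assumptions, not part of hbal):

```python
def cpu_load(cpu_s_t0, cpu_s_t1, interval_s, n_cpus):
    """Approximate CPU utilisation in [0, 1] from two samples of
    cumulative CPU seconds (e.g. taken from "xm list" a day apart)."""
    util = (cpu_s_t1 - cpu_s_t0) / (interval_s * n_cpus)
    # Avoid zero load values, per the note above on secondary balancing.
    return min(max(util, 0.001), 1.0)

# One line of a "-U" file: cpu from samples, other metrics kept at one.
line = "instance1 %.3f 1.0 1.0 1.0" % cpu_load(1000.0, 2800.0, 86400.0, 2)
```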
190 On a perfectly balanced cluster (all nodes the same size, all
191 instances the same size and spread across the nodes equally), the
values for all metrics would be zero. This doesn't happen too often in
practice.
195 .SS OFFLINE INSTANCES
197 Since current Ganeti versions do not report the memory used by offline
198 (down) instances, ignoring the run status of instances will cause
199 wrong calculations. For this reason, the algorithm subtracts the
200 memory size of down instances from the free node memory of their
201 primary node, in effect simulating the startup of such instances.
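In effect, for each primary node (a sketch of the adjustment, not
hbal's code; the numbers are illustrative):

```python
def effective_free_mem(node_free_mem, down_instance_mem_sizes):
    """Simulate starting the down instances by subtracting their memory
    from the free memory of their primary node (all values in MiB)."""
    return node_free_mem - sum(down_instance_mem_sizes)

# A node reporting 7280 MiB free with two down 2048 MiB instances is
# treated as if it only had 3184 MiB free.
adjusted = effective_free_mem(7280, [2048, 2048])
```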
The exclusion tags mechanism is designed to prevent instances which
run the same workload (e.g. two DNS servers) from landing on the same
node, which would make that node a SPOF for the given service.
209 It works by tagging instances with certain tags and then building
210 exclusion maps based on these. Which tags are actually used is
211 configured either via the command line (option \fI--exclusion-tags\fR)
212 or via adding them to the cluster tags:
215 .B --exclusion-tags=a,b
216 This will make all instance tags of the form \fIa:*\fR, \fIb:*\fR be
217 considered for the exclusion map
220 cluster tags \fBhtools:iextags:a\fR, \fBhtools:iextags:b\fR
221 This will make instance tags \fIa:*\fR, \fIb:*\fR be considered for
the exclusion map. More precisely, the suffix of cluster tags starting
with \fBhtools:iextags:\fR will become the prefix of the exclusion
tags.
Both the above forms mean that two instances both having (e.g.) the
tag \fIa:foo\fR or \fIb:bar\fR won't end up on the same node.
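A sketch of how such an exclusion map can be built and checked
(illustrative code, not hbal's implementation; the instance names are
made up):

```python
def exclusion_conflicts(prefixes, instance_tags):
    """Group instances by exclusion tag (tags whose prefix, up to the
    first colon, is in `prefixes`) and report groups that must not
    share a node."""
    groups = {}
    for inst, tags in instance_tags.items():
        for tag in tags:
            prefix, _, _ = tag.partition(":")
            if prefix in prefixes:
                groups.setdefault(tag, set()).add(inst)
    return {tag: insts for tag, insts in groups.items() if len(insts) > 1}

# Two DNS servers tagged a:dns may not end up on the same node:
conflicts = exclusion_conflicts(
    ["a", "b"],
    {"dns1": ["a:dns"], "dns2": ["a:dns"], "web1": ["b:web"]},
)
```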
231 The options that can be passed to the program are as follows:
233 .B -C, --print-commands
Print the command list at the end of the run. Without this, the
program will only show a shorter but cryptic output.
Note that the moves list will be split into independent steps, called
"jobsets", but only for visual inspection, not for actual
parallelisation. It is not possible to parallelise these directly when
executed via "gnt-instance" commands, since a compound command
(e.g. failover and replace\-disks) must be executed serially. Parallel
execution is only possible when using the Luxi backend and the
\fI-X\fR option.
The algorithm for splitting the moves into jobsets works by
accumulating moves until the next move touches nodes already touched
by the current moves; this means they can't be executed in parallel
(due to resource allocation in Ganeti), and thus a new jobset is
started.
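The splitting rule can be sketched as follows (an illustrative
reimplementation, not hbal's actual code):

```python
def split_jobsets(moves):
    """Accumulate moves into a jobset until the next move touches a node
    already touched by the current jobset; then start a new jobset."""
    jobsets, current, touched = [], [], set()
    for move in moves:
        nodes = set(move["nodes"])
        if current and nodes & touched:
            # Conflict on a node: close the current jobset, open a new one.
            jobsets.append(current)
            current, touched = [], set()
        current.append(move)
        touched |= nodes
    if current:
        jobsets.append(current)
    return jobsets

# The second move reuses node16, so it can't run in parallel with the first:
moves = [
    {"instance": "instance14", "nodes": ["node1", "node10", "node16"]},
    {"instance": "instance54", "nodes": ["node4", "node15", "node16"]},
    {"instance": "instance48", "nodes": ["node18", "node20", "node2"]},
]
```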
252 Prints the before and after node status, in a format designed to allow
253 the user to understand the node's most important parameters.
255 It is possible to customise the listed information by passing a
256 comma\(hyseparated list of field names to this option (the field list
257 is currently undocumented), or to extend the default field list by
258 prefixing the additional field list with a plus sign. By default, the
259 node list will contain the following information:
263 a character denoting the status of the node, with '\-' meaning an
264 offline node, '*' meaning N+1 failure and blank meaning a good node
270 the total node memory
273 the memory used by the node itself
276 the memory used by instances
the amount of memory which seems to be in use but which cannot be
attributed to any particular instance; usually this means that the
hypervisor has some overhead or that there are other reporting errors
the reserved node memory, which is the amount of free memory needed
for N+1 compliance
297 the number of physical cpus on the node
300 the number of virtual cpus allocated to primary instances
303 number of primary instances
306 number of secondary instances
309 percent of free memory
315 ratio of virtual to physical cpus
318 the dynamic CPU load (if the information is available)
321 the dynamic memory load (if the information is available)
324 the dynamic disk load (if the information is available)
327 the dynamic net load (if the information is available)
Prints the before and after instance map. This is less useful than the
node status, but it can help in understanding instance moves.
Only shows a one\(hyline output from the program, designed for the case
when one wants to look at multiple clusters at once and check their
status.
341 The line will contain four fields:
346 initial cluster score
349 number of steps in the solution
355 improvement in the cluster score
361 This option (which can be given multiple times) will mark nodes as
362 being \fIoffline\fR. This means a couple of things:
367 instances won't be placed on these nodes, not even temporarily;
368 e.g. the \fIreplace primary\fR move is not available if the secondary
369 node is offline, since this move requires a failover.
372 these nodes will not be included in the score calculation (except for
373 the percentage of instances on offline nodes)
Note that hbal will also mark as offline any nodes which are reported
by RAPI as such, or that have "?" in file\(hybased input in any numeric
fields.
382 This parameter denotes the minimum score we are happy with and alters
383 the computation in two ways:
388 if the cluster has the initial score lower than this value, then we
389 don't enter the algorithm at all, and exit with success
392 during the iterative process, if we reach a score lower than this
393 value, we exit the algorithm
The default value of the parameter is currently \fI1e-9\fR (chosen
semi\(hyarbitrarily).
400 .BI "--no-disk-moves"
401 This parameter prevents hbal from using disk move (i.e. "gnt\-instance
402 replace\-disks") operations. This will result in a much quicker
balancing, but of course the improvements are limited. It is up to the
user to decide when to use one mode or the other.
408 This parameter restricts the list of instances considered for moving
409 to the ones living on offline/drained nodes. It can be used as a
410 (bulk) replacement for Ganeti's own \fIgnt-node evacuate\fR, with the
411 note that it doesn't guarantee full evacuation.
414 .BI "--exclude-instances " instances
This parameter prevents the instances given (as a comma-separated
list) from being moved during the rebalance.
420 This parameter specifies a file holding instance dynamic utilisation
421 information that will be used to tweak the balancing algorithm to
422 equalise load on the nodes (as opposed to static resource usage). The
423 file is in the format "instance_name cpu_util mem_util disk_util
424 net_util" where the "_util" parameters are interpreted as numbers and
the instance name must exactly match the instance name as read from
Ganeti. In case of unknown instance names, the program will abort.
428 If not given, the default values are one for all metrics and thus
429 dynamic utilisation has only one effect on the algorithm: the
430 equalisation of the secondary instances across nodes (this is the only
431 metric that is not tracked by another, dedicated value, and thus the
432 disk load of instances will cause secondary instance
equalisation). Note that a value of one will also slightly influence
the primary instance count, but that is already tracked via other
metrics and thus the influence of the dynamic utilisation will be
practically insignificant.
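A sketch of a parser for this file format (illustrative, not hbal's
code; the instance names and values are made up):

```python
def parse_util_file(text):
    """Parse "-U" data: one "name cpu mem disk net" record per line,
    with the four utilisation values read as floats."""
    utils = {}
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        name, cpu, mem, dsk, net = line.split()
        utils[name] = (float(cpu), float(mem), float(dsk), float(net))
    return utils

sample = "instance1 0.30 0.50 0.20 0.10\ninstance2 0.70 0.50 0.40 0.30\n"
utils = parse_util_file(sample)
```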
439 .BI "-t" datafile ", --text-data=" datafile
The name of the file holding node and instance information (if not
collecting via RAPI or LUXI). This or one of the other backends must
be selected.
445 .BI "-S" datafile ", --save-cluster=" datafile
446 If given, the state of the cluster at the end of the balancing is
447 saved to the given file. This allows re-feeding the cluster state to
448 either hbal itself or for example hspace.
Collect data directly from the \fIcluster\fR given as an argument via
RAPI. If the argument doesn't contain a colon
455 (:), then it is converted into a fully\(hybuilt URL via prepending
456 https:// and appending the default RAPI port, otherwise it's
457 considered a fully\(hyspecified URL and is used as\(hyis.
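The URL completion can be sketched as follows (the default RAPI port
of 5080 is an assumption here; check your Ganeti installation):

```python
def build_rapi_url(arg, default_port=5080):
    """If the argument has no colon, build a full URL by prepending
    https:// and appending the default RAPI port; otherwise the
    argument is taken as a fully-specified URL and used as-is."""
    if ":" in arg:
        return arg
    return "https://%s:%d" % (arg, default_port)

# build_rapi_url("cluster3") -> "https://cluster3:5080"
```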
Collect data directly from the master daemon, which is to be contacted
via luxi (an internal Ganeti protocol). An optional \fIpath\fR
argument is interpreted as the path to the unix socket on which the
master daemon listens; otherwise, the default path used by Ganeti when
installed with \fI--localstatedir=/var\fR is used.
469 When using the Luxi backend, hbal can also execute the given
470 commands. The execution method is to execute the individual jobsets
471 (see the \fI-C\fR option for details) in separate stages, aborting if
472 at any time a jobset doesn't have all jobs successful. Each step in
473 the balancing solution will be translated into exactly one Ganeti job
474 (having between one and three OpCodes), and all the steps in a jobset
will be executed in parallel. The jobsets themselves are executed
serially.
479 .BI "-l" N ", --max-length=" N
480 Restrict the solution to this length. This can be used for example to
481 automate the execution of the balancing.
.BI "--max-cpu " cpu-ratio
The maximum virtual\(hyto\(hyphysical cpu ratio, as a floating point
number greater than or equal to one. For example, specifying
\fIcpu-ratio\fR as \fB2.5\fR means that, for a 4\(hycpu machine, a
maximum of 10 virtual cpus should be allowed to be in use for primary
instances. A value of exactly one means there is no CPU
over\(hysubscription at all.
493 .BI "--min-disk " disk-ratio
494 The minimum amount of free disk space remaining, as a floating point
495 number. For example, specifying \fIdisk-ratio\fR as \fB0.25\fR means
496 that at least one quarter of disk space should be left free on nodes.
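The two restrictions can be illustrated together (a sketch, not hbal's
implementation; node values are made up):

```python
def violates_limits(vcpus, pcpus, free_dsk, total_dsk,
                    max_cpu=2.5, min_disk=0.25):
    """True if a node would exceed the vcpu/pcpu ratio or fall below
    the minimum free-disk fraction."""
    return (vcpus / pcpus > max_cpu) or (free_dsk / total_dsk < min_disk)

# A 4-pcpu node may hold at most 2.5 * 4 = 10 vcpus of primary instances:
assert violates_limits(11, 4, 500, 1000)      # cpu ratio 2.75 > 2.5
assert not violates_limits(10, 4, 500, 1000)  # ratio 2.5, disk 0.50 ok
```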
Increase the output verbosity. Each usage of this option will increase
the verbosity (currently more than 2 doesn't make sense) from the
default of one.
Decrease the output verbosity. Each usage of this option will decrease
the verbosity (less than zero doesn't make sense) from the default of
one.
512 Just show the program version and exit.
The exit status of the command will be zero, unless for some reason
517 the algorithm fatally failed (e.g. wrong node or instance data).
521 If the variables \fBHTOOLS_NODES\fR and \fBHTOOLS_INSTANCES\fR are
522 present in the environment, they will override the default names for
the nodes and instances files. These will of course have no effect
when the RAPI or Luxi backends are used.
The program does not check its input data for consistency, and aborts
with cryptic error messages in such cases.
531 The algorithm is not perfect.
The output format is not easily scriptable, and the program should
feed moves directly into Ganeti (either via RAPI or via a gnt\-debug
input file).
Note that these examples are not for the latest version (they don't have
544 With the default options, the program shows each individual step and
545 the improvements it brings in cluster score:
550 Loaded 20 nodes, 80 instances
551 Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
552 Initial score: 0.52329131
553 Trying to minimize the CV...
554 1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
555 2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
556 3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
557 4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
558 5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
559 6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
560 7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
561 8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
562 9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
563 10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
564 11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
565 12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
566 13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
567 14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
568 15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
569 Cluster score improved from 0.52329131 to 0.00252594
573 In the above output, we can see:
- the input data (here from files) shows a cluster with 20 nodes and
  80 instances
576 - the cluster is not initially N+1 compliant
577 - the initial score is 0.52329131
The step list follows, showing the instance, its initial
primary/secondary nodes, the new primary/secondary nodes, the cluster
score after the move, and the actions taken in this step (with 'f'
denoting failover/migrate and 'r' denoting replace secondary).
584 Finally, the program shows the improvement in cluster score.
586 A more detailed output is obtained via the \fB-C\fR and \fB-p\fR options:
591 Loaded 20 nodes, 80 instances
592 Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
593 Initial cluster status:
594 N1 Name t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
595 * node1 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
596 node2 32762 31280 12000 1861 1026 0 8 0.95476 0.55179
597 * node3 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
598 * node4 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
599 * node5 32762 1280 6000 1861 978 5 5 0.03907 0.52573
600 * node6 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
601 * node7 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
602 node8 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
603 node9 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
604 * node10 32762 7280 12000 1861 1026 4 4 0.22221 0.55179
605 node11 32762 7280 6000 1861 922 4 5 0.22221 0.49577
606 node12 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
607 node13 32762 7280 6000 1861 922 4 5 0.22221 0.49577
608 node14 32762 7280 6000 1861 922 4 5 0.22221 0.49577
609 * node15 32762 7280 12000 1861 1131 4 3 0.22221 0.60782
610 node16 32762 31280 0 1861 1860 0 0 0.95476 1.00000
611 node17 32762 7280 6000 1861 1106 5 3 0.22221 0.59479
612 * node18 32762 1280 6000 1396 561 5 3 0.03907 0.40239
613 * node19 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
614 node20 32762 13280 12000 1861 689 3 9 0.40535 0.37068
616 Initial score: 0.52329131
617 Trying to minimize the CV...
618 1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
619 2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
620 3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
621 4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
622 5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
623 6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
624 7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
625 8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
626 9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
627 10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
628 11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
629 12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
630 13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
631 14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
632 15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
633 Cluster score improved from 0.52329131 to 0.00252594
635 Commands to run to reach the above solution:
637 echo gnt\-instance migrate instance14
638 echo gnt\-instance replace\-disks \-n node16 instance14
639 echo gnt\-instance migrate instance14
641 echo gnt\-instance migrate instance54
642 echo gnt\-instance replace\-disks \-n node16 instance54
643 echo gnt\-instance migrate instance54
645 echo gnt\-instance migrate instance4
646 echo gnt\-instance replace\-disks \-n node16 instance4
648 echo gnt\-instance replace\-disks \-n node2 instance48
649 echo gnt\-instance migrate instance48
651 echo gnt\-instance replace\-disks \-n node16 instance93
652 echo gnt\-instance migrate instance93
654 echo gnt\-instance replace\-disks \-n node2 instance89
655 echo gnt\-instance migrate instance89
657 echo gnt\-instance replace\-disks \-n node16 instance5
658 echo gnt\-instance migrate instance5
660 echo gnt\-instance migrate instance94
661 echo gnt\-instance replace\-disks \-n node16 instance94
663 echo gnt\-instance migrate instance44
664 echo gnt\-instance replace\-disks \-n node15 instance44
666 echo gnt\-instance replace\-disks \-n node16 instance62
668 echo gnt\-instance replace\-disks \-n node16 instance13
670 echo gnt\-instance replace\-disks \-n node7 instance19
672 echo gnt\-instance replace\-disks \-n node1 instance43
674 echo gnt\-instance replace\-disks \-n node4 instance1
676 echo gnt\-instance replace\-disks \-n node17 instance58
678 Final cluster status:
679 N1 Name t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
680 node1 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
681 node2 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
682 node3 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
683 node4 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
684 node5 32762 7280 6000 1861 1078 4 5 0.22221 0.57947
685 node6 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
686 node7 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
687 node8 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
688 node9 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
689 node10 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
690 node11 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
691 node12 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
692 node13 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
693 node14 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
694 node15 32762 7280 6000 1861 1031 4 4 0.22221 0.55408
695 node16 32762 7280 6000 1861 1060 4 4 0.22221 0.57007
696 node17 32762 7280 6000 1861 1006 5 4 0.22221 0.54105
697 node18 32762 7280 6000 1396 761 4 2 0.22221 0.54570
698 node19 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
699 node20 32762 13280 6000 1861 1089 3 5 0.40535 0.58565
Here we see, besides the step list, the initial and final cluster
status, with the final one showing all nodes being N+1 compliant, and
the command list to reach the final solution. In the initial listing,
we see which nodes are not N+1 compliant.
709 The algorithm is stable as long as each step above is fully completed,
710 e.g. in step 8, both the migrate and the replace\-disks are
done. Otherwise, if only the migrate is done, the input data is
changed in such a way that the program will output a different
solution list (but hopefully will end in the same state).
716 .BR hspace "(1), " hscan "(1), " hail "(1), "
717 .BR ganeti "(7), " gnt-instance "(8), " gnt-node "(8)"
721 Copyright (C) 2009 Google Inc. Permission is granted to copy,
722 distribute and/or modify under the terms of the GNU General Public
723 License as published by the Free Software Foundation; either version 2
724 of the License, or (at your option) any later version.
726 On Debian systems, the complete text of the GNU General Public License
727 can be found in /usr/share/common-licenses/GPL.