.TH HBAL 1 2009-03-23 htools "Ganeti H-tools"
.SH NAME
hbal \- Cluster balancer for Ganeti
.SH SYNOPSIS
.B hbal
.B "[backend options...]"
.B "[algorithm options...]"
.B "[reporting options...]"

.BI "[ -m " cluster " ]"
.BI "[ -L[" path "] [-X]]"
.BI "[ -n " nodes-file " ]"
.BI "[ -i " instances-file " ]"

.BI "[ --max-cpu " cpu-ratio " ]"
.BI "[ --min-disk " disk-ratio " ]"
.BI "[ -l " limit " ]"
.BI "[ -e " score " ]"
.BI "[ -O " name... " ]"
.B "[ --no-disk-moves ]"

.BI "[ -C[" file "] ]"
.B "[ --print-instances ]"
.SH DESCRIPTION
hbal is a cluster balancer that looks at the current state of the
cluster (nodes with their total and free disk, memory, etc.) and
instance placement, and computes a series of steps designed to bring
the cluster into a better state.

The algorithm is designed to be stable (i.e. it will give the same
results when restarted from the middle of the solution) and
reasonably fast. It is not, however, a perfect algorithm: it is
possible to drive it into a corner from which it can find no further
improvement, because it only looks one "step" ahead.

By default, the program shows the solution incrementally as it is
computed, in a somewhat cryptic format; to get the actual Ganeti
command list, use the \fB-C\fR option.
.SH ALGORITHM
The program works in independent steps; at each step, we compute the
best instance move that lowers the cluster score.

The possible move types for an instance are combinations of
failover/migrate and replace-disks such that we change one of the
instance nodes while the other one remains (but possibly with a
changed role, e.g. from primary it becomes secondary). The list is:
.RS 4
.TP 3
\(em
failover (f)
.TP
\(em
replace secondary (r)
.TP
\(em
replace primary, a composite move (f, r, f)
.TP
\(em
failover and replace secondary, also composite (f, r)
.TP
\(em
replace secondary and failover, also composite (r, f)
.RE
.PP
We don't implement the only remaining possibility, replacing both
nodes (r,f,r,f or the equivalent f,r,f,r), since this move needs an
exhaustive search over both candidate primary and secondary nodes,
and is O(n*n) in the number of nodes. Furthermore, it doesn't seem to
give better scores but would result in more disk replacements.
As said before, the algorithm tries to minimise the cluster score at
each step. Currently this score is computed as a sum of the following
components:
.RS 4
.TP 3
\(em
coefficient of variance of the percent of free memory
.TP
\(em
coefficient of variance of the percent of reserved memory
.TP
\(em
coefficient of variance of the percent of free disk
.TP
\(em
percentage of nodes failing the N+1 check
.TP
\(em
percentage of instances living (either as primary or secondary) on
offline nodes
.TP
\(em
coefficient of variance of the ratio of virtual-to-physical cpus (for
primary instances of the node)
.RE
The free memory and free disk values help ensure that all nodes are
somewhat balanced in their resource usage. The reserved memory helps
ensure that nodes are somewhat balanced in holding secondary
instances, and that no node keeps too much memory reserved for
N+1. Finally, the N+1 percentage helps guide the algorithm towards
eliminating N+1 failures, if possible.
Except for the N+1 failures and offline instances percentages, we use
the coefficient of variance since this brings the values into the
same unit, so to speak, with a restricted range of values (between
zero and one). The percentage of N+1 failures, while also in this
numeric range, doesn't actually have the same meaning, but it has
been shown to work well in practice.

The alternative for the N+1 check, using the coefficient of variance
of (N+1 fail=1, N+1 pass=0) across nodes, could hint the algorithm
into creating more N+1 failures if most nodes are failing N+1
already. Since creating N+1 failures is not allowed by other rules of
the algorithm, the N+1 check would then simply stop working.
The offline instances percentage (meaning the percentage of instances
living on offline nodes) causes the algorithm to actively move
instances away from offline nodes. This, coupled with the placement
restrictions imposed by offline nodes, results in the evacuation of
such nodes.

On a perfectly balanced cluster (all nodes the same size, all
instances the same size and spread equally across the nodes), all
values would be zero. This doesn't happen too often in practice :)
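As an illustration of the scoring idea above, the following Python
sketch (not hbal's actual Haskell implementation; the node fields are
hypothetical and only a subset of the metrics is shown) sums
coefficients of variance with the N+1 failure percentage:

```python
from statistics import mean, pstdev

def coeff_var(values):
    """Coefficient of variance: population std dev divided by mean."""
    m = mean(values)
    return pstdev(values) / m if m else 0.0

def cluster_score(nodes):
    # Simplified score: CV of free-memory percent, CV of free-disk
    # percent, plus the fraction of nodes failing the N+1 check.
    free_mem = [n["free_mem"] / n["total_mem"] for n in nodes]
    free_dsk = [n["free_dsk"] / n["total_dsk"] for n in nodes]
    n1_pct = sum(1 for n in nodes if n["n1_fail"]) / len(nodes)
    return coeff_var(free_mem) + coeff_var(free_dsk) + n1_pct

# Two nodes loosely modelled on the example tables below.
nodes = [
    {"free_mem": 1280, "total_mem": 32762,
     "free_dsk": 1026, "total_dsk": 1861, "n1_fail": True},
    {"free_mem": 31280, "total_mem": 32762,
     "free_dsk": 1860, "total_dsk": 1861, "n1_fail": False},
]
print(cluster_score(nodes))
```

With identical, N+1-passing nodes every component is zero, matching
the "perfectly balanced cluster" remark above.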
.SS OFFLINE INSTANCES

Since current Ganeti versions do not report the memory used by
offline (down) instances, ignoring the run status of instances would
cause wrong calculations. For this reason, the algorithm subtracts
the memory size of down instances from the free memory of their
primary node, in effect simulating the startup of such instances.
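The adjustment described above can be sketched as follows (a minimal
illustration, not hbal's code; the field names are hypothetical):

```python
def adjust_free_memory(nodes, instances):
    """Subtract the memory of down instances from their primary
    node's free memory, simulating their startup."""
    free = {name: n["free_mem"] for name, n in nodes.items()}
    for inst in instances:
        if not inst["running"]:
            free[inst["pnode"]] -= inst["mem"]
    return free

nodes = {"node1": {"free_mem": 4096}}
instances = [{"pnode": "node1", "mem": 1024, "running": False}]
print(adjust_free_memory(nodes, instances))  # {'node1': 3072}
```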
.SS OTHER POSSIBLE METRICS

It would be desirable to add more metrics to the algorithm,
especially dynamically-computed metrics, such as:
.RS 4
.TP 3
\(em
CPU usage of instances
.RE
.SH OPTIONS

The options that can be passed to the program are as follows:
.TP
.B -C, --print-commands
Print the command list at the end of the run. Without this, the
program will only show the shorter, but cryptic, output.

Note that the moves list will be split into independent steps called
"jobsets", but only for visual inspection, not for actual
parallelisation. These cannot be parallelised directly when executed
via "gnt-instance" commands, since a compound command (e.g. failover
and replace-disks) must be executed serially. Parallel execution is
only possible when using the Luxi backend and the \fB-X\fR option.

The algorithm for splitting the moves into jobsets is to accumulate
moves until the next move touches nodes already touched by the
current moves; such a move cannot be executed in parallel (due to
resource allocation in Ganeti), so a new jobset is started.
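The splitting rule above can be sketched like this (an illustrative
model, not hbal's implementation; the move tuples are hypothetical):

```python
def split_jobsets(moves):
    """Each move is (instance, set-of-nodes-it-touches). Accumulate
    moves into a jobset until one touches an already-touched node,
    then start a new jobset."""
    jobsets, current, touched = [], [], set()
    for inst, nodes in moves:
        if touched & nodes:          # conflict: can't run in parallel
            jobsets.append(current)
            current, touched = [], set()
        current.append(inst)
        touched |= nodes
    if current:
        jobsets.append(current)
    return jobsets

moves = [
    ("instance14", {"node1", "node10", "node16"}),
    ("instance54", {"node4", "node15", "node16"}),  # node16 reused
    ("instance19", {"node10", "node11", "node7"}),
]
print(split_jobsets(moves))  # → [['instance14'], ['instance54', 'instance19']]
```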
.TP
.B -p, --print-nodes
Prints the before and after node status, in a format designed to
allow the user to understand the node's most important parameters.
.TP
.B --print-instances
Prints the before and after instance map. This is less useful than
the node status, but it can help in understanding instance moves.
The node list will contain this information:
.RS 4
.TP 3
\(em
a character denoting the status of the node, with '-' meaning an
offline node, '*' meaning an N+1 failure, and blank meaning a good
node
.TP
\(em
the node name
.TP
\(em
the total node memory
.TP
\(em
the memory used by the node itself
.TP
\(em
the memory used by instances
.TP
\(em
the amount of memory which seems to be in use but whose use cannot be
attributed to any instance; usually this means that the hypervisor
has some overhead or that there are other reporting errors
.TP
\(em
the remaining free node memory
.TP
\(em
the reserved node memory, which is the amount of free memory needed
for N+1 compliance
.TP
\(em
the total disk of the node
.TP
\(em
the free disk of the node
.TP
\(em
the number of physical cpus on the node
.TP
\(em
the number of virtual cpus allocated to primary instances
.TP
\(em
the number of primary instances
.TP
\(em
the number of secondary instances
.TP
\(em
the percent of free memory
.TP
\(em
the percent of free disk
.TP
\(em
the ratio of virtual to physical cpus
.TP
\(em
the dynamic CPU load (if the information is available)
.TP
\(em
the dynamic memory load (if the information is available)
.TP
\(em
the dynamic disk load (if the information is available)
.TP
\(em
the dynamic net load (if the information is available)
.RE
.TP
.B -o, --oneline
Only show a one-line output from the program, designed for the case
when one wants to look at multiple clusters at once and check their
status.

The line will contain four fields:
.RS 4
.TP 3
\(em
the initial cluster score
.TP
\(em
the number of steps in the solution
.TP
\(em
the final cluster score
.TP
\(em
the improvement in the cluster score
.RE
.TP
.BI "-O " name...
This option (which can be given multiple times) will mark nodes as
being \fIoffline\fR. This means a couple of things:
.RS 4
.TP 3
\(em
instances won't be placed on these nodes, not even temporarily;
e.g. the \fIreplace primary\fR move is not available if the secondary
node is offline, since this move requires a failover
.TP
\(em
these nodes will not be included in the score calculation (except for
the percentage of instances on offline nodes)
.RE
.PP
Note that hbal will also mark as offline any nodes which are reported
by RAPI as such, or that have "?" in file-based input in any numeric
fields.
.TP
.BI "-e " score ", --min-score=" score
This parameter denotes the minimum score we are happy with and alters
the computation in two ways:
.RS 4
.TP 3
\(em
if the initial cluster score is lower than this value, we don't enter
the algorithm at all, and exit with success
.TP
\(em
during the iterative process, if we reach a score lower than this
value, we exit the algorithm
.RE
.PP
The default value of the parameter is currently \fI1e-9\fR (chosen
empirically).
.TP
.BI "--no-disk-moves"
This parameter prevents hbal from using disk move (i.e.
"gnt-instance replace-disks") operations. This results in much
quicker balancing, but the achievable improvements are of course
limited. It is up to the user to decide when to use one mode or the
other.
.TP
.BI "-n " nodefile ", --nodes=" nodefile
The name of the file holding node information (if not collecting via
RAPI), instead of the default \fInodes\fR file (but see below how to
customize the default value via the environment).
.TP
.BI "-i " instancefile ", --instances=" instancefile
The name of the file holding instance information (if not collecting
via RAPI), instead of the default \fIinstances\fR file (but see below
how to customize the default value via the environment).
.TP
.BI "-m " cluster
Collect data not from files but directly from the given
.I cluster
via RAPI. If the argument doesn't contain a colon (:), then it is
converted into a fully-built URL by prepending https:// and appending
the default RAPI port; otherwise it's considered a fully-specified
URL and is used as-is.
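The URL handling described above amounts to the following sketch (an
illustration only; the default RAPI port of 5080 is an assumption,
check your Ganeti installation):

```python
DEFAULT_RAPI_PORT = 5080  # assumed default; verify against your setup

def build_rapi_url(arg):
    """Normalise the -m argument as described above."""
    if ":" in arg:
        return arg  # already a fully-specified URL, used as-is
    return "https://%s:%d" % (arg, DEFAULT_RAPI_PORT)

print(build_rapi_url("cluster.example.com"))
print(build_rapi_url("https://cluster.example.com:5080"))
```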
.TP
.BI "-L[" path "]"
Collect data not from files but directly from the master daemon,
which is contacted via luxi (an internal Ganeti protocol). An
optional \fIpath\fR argument is interpreted as the path to the unix
socket on which the master daemon listens; otherwise, the default
path used by Ganeti (when installed with "--localstatedir=/var") is
used.
.TP
.B -X
When using the Luxi backend, hbal can also execute the given
commands. The execution method is to execute the individual jobsets
(see the \fB-C\fR option for details) in separate stages, aborting if
at any time a jobset doesn't have all jobs successful. Each step in
the balancing solution will be translated into exactly one Ganeti job
(having between one and three OpCodes), and all the steps in a jobset
will be executed in parallel. The jobsets themselves are executed
serially.
.TP
.BI "-l " N ", --max-length=" N
Restrict the solution to this length. This can be used for example to
automate the execution of the balancing.
.TP
.BI "--max-cpu " cpu-ratio
The maximum virtual-to-physical cpu ratio, as a floating point number
greater than or equal to one. For example, specifying \fIcpu-ratio\fR
as \fB2.5\fR means that, for a 4-cpu machine, a maximum of 10 virtual
cpus should be allowed to be in use for primary instances. A value of
exactly one means there will be no CPU over-subscription; values
below one don't make sense, as that would mean other resources
(e.g. disk) can't be fully used due to CPU restrictions.
.TP
.BI "--min-disk " disk-ratio
The minimum amount of free disk space remaining, as a floating point
number. For example, specifying \fIdisk-ratio\fR as \fB0.25\fR means
that at least one quarter of disk space should be left free on nodes.
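The two limits above can be checked with a small sketch like the
following (hypothetical field names; not hbal's implementation):

```python
def node_within_limits(node, max_cpu=None, min_disk=None):
    """Return True if the node satisfies the --max-cpu and
    --min-disk style limits described above."""
    if max_cpu is not None:
        # e.g. max_cpu=2.5 allows up to 10 vcpus on a 4-cpu node
        if node["virt_cpus"] / node["phys_cpus"] > max_cpu:
            return False
    if min_disk is not None:
        # e.g. min_disk=0.25 keeps a quarter of disk space free
        if node["free_dsk"] / node["total_dsk"] < min_disk:
            return False
    return True

node = {"phys_cpus": 4, "virt_cpus": 10,
        "free_dsk": 500, "total_dsk": 1861}
print(node_within_limits(node, max_cpu=2.5, min_disk=0.25))  # True
```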
.TP
.B -v, --verbose
Increase the output verbosity. Each usage of this option will
increase the verbosity (currently more than 2 doesn't make sense)
from the default of one.
.TP
.B -q, --quiet
Decrease the output verbosity. Each usage of this option will
decrease the verbosity (less than zero doesn't make sense) from the
default of one.
.TP
.B -V, --version
Just show the program version and exit.
.SH EXIT STATUS

The exit status of the command will be zero, unless for some reason
the algorithm failed fatally (e.g. wrong node or instance data).
.SH ENVIRONMENT

If the variables \fBHTOOLS_NODES\fR and \fBHTOOLS_INSTANCES\fR are
present in the environment, they will override the default names for
the nodes and instances files. These will of course have no effect
when data is collected via RAPI or Luxi.
.SH BUGS

The program does not check its input data for consistency, and aborts
with cryptic error messages in this case.

The algorithm is not perfect.

The output format is not easily scriptable, and the program should
feed moves directly into Ganeti (either via RAPI or via a gnt-debug
input file).
.SH EXAMPLES

Note that these examples are not for the latest version (they don't
have full node data).

With the default options, the program shows each individual step and
the improvement it brings in cluster score:
.in +4n
.nf
Loaded 20 nodes, 80 instances
Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
Initial score: 0.52329131
Trying to minimize the CV...
 1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
 2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
 3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
 4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
 5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
 6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
 7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
 8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
 9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
Cluster score improved from 0.52329131 to 0.00252594
.fi
.in
In the above output, we can see:
- the input data (here from files) shows a cluster with 20 nodes and
  80 instances
- the cluster is not initially N+1 compliant
- the initial score is 0.52329131

The step list follows, showing the instance, its initial
primary/secondary nodes, the new primary/secondary nodes, the new
cluster score, and the actions taken in this step (with 'f' denoting
failover/migrate and 'r' denoting replace secondary).

Finally, the program shows the improvement in cluster score.
A more detailed output is obtained via the \fB-C\fR and \fB-p\fR
options:
.in +4n
.nf
Loaded 20 nodes, 80 instances
Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
Initial cluster status:
N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
 * node1  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
   node2  32762 31280 12000  1861  1026   0   8 0.95476 0.55179
 * node3  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
 * node4  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
 * node5  32762  1280  6000  1861   978   5   5 0.03907 0.52573
 * node6  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
 * node7  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
   node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
 * node10 32762  7280 12000  1861  1026   4   4 0.22221 0.55179
   node11 32762  7280  6000  1861   922   4   5 0.22221 0.49577
   node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node13 32762  7280  6000  1861   922   4   5 0.22221 0.49577
   node14 32762  7280  6000  1861   922   4   5 0.22221 0.49577
 * node15 32762  7280 12000  1861  1131   4   3 0.22221 0.60782
   node16 32762 31280     0  1861  1860   0   0 0.95476 1.00000
   node17 32762  7280  6000  1861  1106   5   3 0.22221 0.59479
 * node18 32762  1280  6000  1396   561   5   3 0.03907 0.40239
 * node19 32762  1280  6000  1861  1026   5   3 0.03907 0.55179
   node20 32762 13280 12000  1861   689   3   9 0.40535 0.37068

Initial score: 0.52329131
Trying to minimize the CV...
 1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
 2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
 3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
 4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
 5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
 6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
 7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
 8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
 9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
Cluster score improved from 0.52329131 to 0.00252594

Commands to run to reach the above solution:
echo gnt-instance migrate instance14
echo gnt-instance replace-disks -n node16 instance14
echo gnt-instance migrate instance14

echo gnt-instance migrate instance54
echo gnt-instance replace-disks -n node16 instance54
echo gnt-instance migrate instance54

echo gnt-instance migrate instance4
echo gnt-instance replace-disks -n node16 instance4

echo gnt-instance replace-disks -n node2 instance48
echo gnt-instance migrate instance48

echo gnt-instance replace-disks -n node16 instance93
echo gnt-instance migrate instance93

echo gnt-instance replace-disks -n node2 instance89
echo gnt-instance migrate instance89

echo gnt-instance replace-disks -n node16 instance5
echo gnt-instance migrate instance5

echo gnt-instance migrate instance94
echo gnt-instance replace-disks -n node16 instance94

echo gnt-instance migrate instance44
echo gnt-instance replace-disks -n node15 instance44

echo gnt-instance replace-disks -n node16 instance62

echo gnt-instance replace-disks -n node16 instance13

echo gnt-instance replace-disks -n node7 instance19

echo gnt-instance replace-disks -n node1 instance43

echo gnt-instance replace-disks -n node4 instance1

echo gnt-instance replace-disks -n node17 instance58

Final cluster status:
N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
   node1  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node2  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node3  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node4  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node5  32762  7280  6000  1861  1078   4   5 0.22221 0.57947
   node6  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node7  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node10 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node11 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
   node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node13 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
   node14 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
   node15 32762  7280  6000  1861  1031   4   4 0.22221 0.55408
   node16 32762  7280  6000  1861  1060   4   4 0.22221 0.57007
   node17 32762  7280  6000  1861  1006   5   4 0.22221 0.54105
   node18 32762  7280  6000  1396   761   4   2 0.22221 0.54570
   node19 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node20 32762 13280  6000  1861  1089   3   5 0.40535 0.58565
.fi
.in
Here we see, besides the step list, the initial and final cluster
status (with the final one showing all nodes being N+1 compliant) and
the command list to reach the final solution. In the initial listing,
we can see which nodes are not N+1 compliant.

The algorithm is stable as long as each step above is fully
completed, e.g. in step 8, both the migrate and the replace-disks are
done. Otherwise, if only the migrate is done, the input data is
changed in a way that will make the program output a different
solution list (which should hopefully end in the same state).
.SH SEE ALSO

.BR hspace "(1), " hscan "(1), " hail "(1), "
.BR ganeti "(7), " gnt-instance "(8), " gnt-node "(8)"
.SH COPYRIGHT

Copyright (C) 2009 Google Inc. Permission is granted to copy,
distribute and/or modify under the terms of the GNU General Public
License as published by the Free Software Foundation; either version
2 of the License, or (at your option) any later version.

On Debian systems, the complete text of the GNU General Public
License can be found in /usr/share/common-licenses/GPL.