.TH HBAL 1 2009-03-23 htools "Ganeti H-tools"
.SH NAME
hbal \- Cluster balancer for Ganeti

.SH SYNOPSIS
.B hbal
.B "[-C]"
.B "[-p]"
.B "[-o]"
.B "[-v... | -q]"
.BI "[-l" limit "]"
.BI "[-O" name... "]"
.BI "[-e" score "]"
.BI "[-m " cluster "]"
.BI "[-n " nodes-file "]"
.BI "[-i " instances-file "]"

.B hbal
.B --version

.SH DESCRIPTION
hbal is a cluster balancer that looks at the current state of the
cluster (nodes with their total and free disk, memory, etc.) and
instance placement and computes a series of steps designed to bring
the cluster into a better state.

The algorithm to do so is designed to be stable (i.e. it will give you
the same results when restarting it from the middle of the solution)
and reasonably fast. It is not, however, designed to be a perfect
algorithm - it can get stuck in a corner from which it finds no
further improvement, because it only looks one "step" ahead.

By default, the program will show the solution incrementally as it is
computed, in a somewhat cryptic format; to get the actual Ganeti
command list, use the \fB-C\fR option.

.SS ALGORITHM

The program works in independent steps; at each step, we compute the
best instance move that lowers the cluster score.

The possible move types for an instance are combinations of
failover/migrate and replace-disks such that we change one of the
instance nodes, and the other one remains (but possibly with a changed
role, e.g. from primary it becomes secondary). The list is:
.RS 4
.TP 3
\(em
failover (f)
.TP
\(em
replace secondary (r)
.TP
\(em
replace primary, a composite move (f, r, f)
.TP
\(em
failover and replace secondary, also composite (f, r)
.TP
\(em
replace secondary and failover, also composite (r, f)
.RE

We don't do the only remaining possibility of replacing both nodes
(r,f,r,f or the equivalent f,r,f,r) since this move needs an
exhaustive search over both candidate primary and secondary nodes, and
is O(n*n) in the number of nodes. Furthermore, it doesn't seem to
give better scores but will result in more disk replacements.

.SS CLUSTER SCORING

As said before, the algorithm tries to minimise the cluster score at
each step. Currently this score is computed as a sum of the following
components:
.RS 4
.TP 3
\(em
coefficient of variation of the percent of free memory
.TP
\(em
coefficient of variation of the percent of reserved memory
.TP
\(em
coefficient of variation of the percent of free disk
.TP
\(em
percentage of nodes failing the N+1 check
.TP
\(em
percentage of instances living (either as primary or secondary) on
offline nodes
.RE

The free memory and free disk values help ensure that all nodes are
somewhat balanced in their resource usage. The reserved memory helps
to ensure that nodes are somewhat balanced in holding secondary
instances, and that no node keeps too much memory reserved for
N+1. And finally, the N+1 percentage helps guide the algorithm towards
eliminating N+1 failures, if possible.

Except for the N+1 failures and offline instances percentages, we use
the coefficient of variation since this brings the values into the
same unit, so to speak, and into a restricted domain of values
(between zero and one). The percentage of N+1 failures, while also in
this numeric range, doesn't actually have the same meaning, but it has
been shown to work well.
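
For reference, the coefficient of variation of a list of values is the
standard deviation divided by the mean; a sketch of the per-metric
computation (not necessarily the exact expression used by the program)
is:
.RS 4
.nf
CV(x_1, ..., x_n) = stddev(x_1, ..., x_n) / mean(x_1, ..., x_n)
.fi
.RE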

The alternative of using, for the N+1 checks, the coefficient of
variation of (N+1 fail=1, N+1 pass=0) across nodes could hint the
algorithm to create even more N+1 failures if most nodes already fail
N+1. Since creating new N+1 failures is not allowed by other rules of
the algorithm, the N+1 checks would simply stop working in that case.

The offline instances percentage (meaning the percentage of instances
living on offline nodes) will cause the algorithm to actively move
instances away from offline nodes. This, coupled with the restriction
on placement imposed by offline nodes, will cause the evacuation of
such nodes.

On a perfectly balanced cluster (all nodes the same size, all
instances the same size and spread across the nodes equally), all
values would be zero. This doesn't happen too often in practice :)

.SS OFFLINE INSTANCES

Since current Ganeti versions do not report the memory used by offline
(down) instances, ignoring the run status of instances will cause
wrong calculations. For this reason, the algorithm subtracts the
memory size of down instances from the free node memory of their
primary node, in effect simulating the startup of such instances.
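
For example (with made-up numbers), if a node reports 7280 MiB of free
memory but is the primary node of a stopped instance configured with
1024 MiB of memory, the algorithm will treat that node as having only
7280 \- 1024 = 6256 MiB free.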

.SS OTHER POSSIBLE METRICS

It would be desirable to add more metrics to the algorithm, especially
dynamically-computed metrics, such as:
.RS 4
.TP 3
\(em
CPU usage of instances, combined with VCPU versus PCPU count
.TP
\(em
Disk IO usage
.TP
\(em
Network IO
.RE

.SH OPTIONS
The options that can be passed to the program are as follows:
.TP
.B -C, --print-commands
Print the command list at the end of the run. Without this, the
program will only show a shorter but more cryptic output.
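.RS
For example (with a hypothetical cluster name and output file), the
command list can be saved for later review with:
.nf
$ hbal -C -m cluster.example.com > hbal-commands.txt
.fi
.RE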
.TP
.B -p, --print-nodes
Print the before and after node status, in a format designed to allow
the user to understand the node's most important parameters.

The node list will contain the following fields:
.RS
.TP
.B F
a character denoting the status of the node, with '-' meaning an
offline node, '*' meaning N+1 failure and blank meaning a good node
.TP
.B Name
the node name
.TP
.B t_mem
the total node memory
.TP
.B n_mem
the memory used by the node itself
.TP
.B i_mem
the memory used by instances
.TP
.B x_mem
the amount of memory which seems to be in use but for which it cannot
be determined why or by which instance it is used; usually this means
that the hypervisor has some overhead or that there are other
reporting errors
.TP
.B f_mem
the free node memory
.TP
.B r_mem
the reserved node memory, which is the amount of free memory needed
for N+1 compliance
.TP
.B t_dsk
total disk
.TP
.B f_dsk
free disk
.TP
.B pri
number of primary instances
.TP
.B sec
number of secondary instances
.TP
.B p_fmem
percent of free memory
.TP
.B p_fdsk
percent of free disk
.RE

.TP
.B -o, --oneline
Only show a one-line output from the program, designed for the case
when one wants to look at multiple clusters at once and check their
status.

The line will contain four fields:
.RS
.RS 4
.TP 3
\(em
initial cluster score
.TP
\(em
number of steps in the solution
.TP
\(em
final cluster score
.TP
\(em
improvement in the cluster score
.RE
.RE
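.RS
For example (with hypothetical cluster names), the status of several
clusters can be checked via RAPI with:
.nf
$ hbal -o -m cluster1.example.com
$ hbal -o -m cluster2.example.com
.fi
.RE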

.TP
.BI "-O " name
This option (which can be given multiple times) will mark nodes as
being \fIoffline\fR. This means a couple of things:
.RS
.RS 4
.TP 3
\(em
instances won't be placed on these nodes, not even temporarily;
e.g. the \fIreplace primary\fR move is not available if the secondary
node is offline, since this move requires a failover.
.TP
\(em
these nodes will not be included in the score calculation (except for
the percentage of instances on offline nodes)
.RE
Note that hbal will also mark as offline any nodes which are reported
by RAPI as such, or that have "?" in file-based input in any numeric
fields.
.RE
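.RS
For example (with hypothetical node names), marking two nodes offline
so that the balancer evacuates them could look like:
.nf
$ hbal -O node3 -O node5 -C
.fi
.RE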

.TP
.BI "-e" score ", --min-score=" score
This parameter denotes the minimum score we are happy with and alters
the computation in two ways:
.RS
.RS 4
.TP 3
\(em
if the cluster's initial score is lower than this value, then we
don't enter the algorithm at all, and exit with success
.TP
\(em
during the iterative process, if we reach a score lower than this
value, we exit the algorithm
.RE
The default value of the parameter is currently \fI1e-9\fR (chosen
empirically).
.RE
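.RS
For example, to stop balancing as soon as the cluster score drops
below an arbitrary threshold of 0.01:
.nf
$ hbal -e 0.01 -C
.fi
.RE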

.TP
.BI "-n" nodefile ", --nodes=" nodefile
The name of the file holding node information (if not collecting via
RAPI), instead of the default \fInodes\fR file (but see below how to
customize the default value via the environment).

.TP
.BI "-i" instancefile ", --instances=" instancefile
The name of the file holding instance information (if not collecting
via RAPI), instead of the default \fIinstances\fR file (but see below
how to customize the default value via the environment).
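.RS
For example (with hypothetical file names), alternate input files can
be specified with:
.nf
$ hbal -n /tmp/cluster.nodes -i /tmp/cluster.instances
.fi
.RE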

.TP
.BI "-m" cluster
Collect data not from files but directly from the
.I cluster
given as an argument via RAPI. This works for both Ganeti 1.2 and
Ganeti 2.0.
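.RS
For example (with a hypothetical cluster name):
.nf
$ hbal -m cluster.example.com
.fi
.RE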

.TP
.BI "-l" N ", --max-length=" N
Restrict the solution to this length. This can be used for example to
automate the execution of the balancing.
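.RS
For example, to compute and print at most three balancing steps at a
time:
.nf
$ hbal -l 3 -C
.fi
.RE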

.TP
.B -v, --verbose
Increase the output verbosity. Each usage of this option will increase
the verbosity (currently more than 2 doesn't make sense) from the
default of one.

.TP
.B -q, --quiet
Decrease the output verbosity. Each usage of this option will decrease
the verbosity (less than zero doesn't make sense) from the default of
one.

.TP
.B -V, --version
Just show the program version and exit.

.SH EXIT STATUS

The exit status of the command will be zero, unless for some reason
the algorithm failed fatally (e.g. wrong node or instance data).
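
Since a non-zero exit code only signals a fatal failure, a wrapper
script could simply check it, for example (with a hypothetical cluster
name):
.in +4n
.nf
$ hbal -m cluster.example.com || echo "hbal failed"
.fi
.in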

.SH ENVIRONMENT

If the variables \fBHTOOLS_NODES\fR and \fBHTOOLS_INSTANCES\fR are
present in the environment, they will override the default names for
the nodes and instances files. These of course have no effect when
RAPI is used.
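
For example (with hypothetical paths), the defaults can be overridden
for a single run with:
.in +4n
.nf
$ HTOOLS_NODES=/tmp/nodes HTOOLS_INSTANCES=/tmp/instances hbal
.fi
.in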

.SH BUGS

The program does not check its input data for consistency, and aborts
with cryptic error messages when the data is inconsistent.

The algorithm is not perfect.

The algorithm doesn't deal with non-\fBdrbd\fR instances, and chokes
on input data which contains such instances.

The output format is not easily scriptable, and the program should
feed moves directly into Ganeti (either via RAPI or via a gnt-debug
input file).

.SH EXAMPLE

Note that these examples are not from the latest version (they don't
have the full node data).

.SS Default output

With the default options, the program shows each individual step and
the improvement it brings to the cluster score:

.in +4n
.nf
.RB "$" " hbal"
Loaded 20 nodes, 80 instances
Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
Initial score: 0.52329131
Trying to minimize the CV...
    1. instance14  node1:node10  => node16:node10 0.42109120 a=f r:node16 f
    2. instance54  node4:node15  => node16:node15 0.31904594 a=f r:node16 f
    3. instance4   node5:node2   => node2:node16  0.26611015 a=f r:node16
    4. instance48  node18:node20 => node2:node18  0.21361717 a=r:node2 f
    5. instance93  node19:node18 => node16:node19 0.16166425 a=r:node16 f
    6. instance89  node3:node20  => node2:node3   0.11005629 a=r:node2 f
    7. instance5   node6:node2   => node16:node6  0.05841589 a=r:node16 f
    8. instance94  node7:node20  => node20:node16 0.00658759 a=f r:node16
    9. instance44  node20:node2  => node2:node15  0.00438740 a=f r:node15
   10. instance62  node14:node18 => node14:node16 0.00390087 a=r:node16
   11. instance13  node11:node14 => node11:node16 0.00361787 a=r:node16
   12. instance19  node10:node11 => node10:node7  0.00336636 a=r:node7
   13. instance43  node12:node13 => node12:node1  0.00305681 a=r:node1
   14. instance1   node1:node2   => node1:node4   0.00263124 a=r:node4
   15. instance58  node19:node20 => node19:node17 0.00252594 a=r:node17
Cluster score improved from 0.52329131 to 0.00252594
.fi
.in

In the above output, we can see:
  - the input data (here from files) shows a cluster with 20 nodes and
    80 instances
  - the cluster is not initially N+1 compliant
  - the initial score is 0.52329131

The step list follows, showing the instance, its initial
primary/secondary nodes, the new primary/secondary nodes, the new
cluster score, and the actions taken in this step (with 'f' denoting
failover/migrate and 'r' denoting replace secondary).
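
As a concrete reading of the first step above: instance14, currently
running on node1 (primary) and node10 (secondary), is moved to node16
(primary) and node10 (secondary); the cluster score after this move is
0.42109120, and the actions taken are a failover, a replace-secondary
with node16 and another failover (a=f r:node16 f), i.e. the composite
\fIreplace primary\fR move.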

Finally, the program shows the improvement in cluster score.

A more detailed output is obtained via the \fB-C\fR and \fB-p\fR options:

.in +4n
.nf
.RB "$" " hbal"
Loaded 20 nodes, 80 instances
Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
Initial cluster status:
N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec  p_fmem  p_fdsk
 * node1  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
   node2  32762 31280 12000  1861  1026   0   8 0.95476 0.55179
 * node3  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
 * node4  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
 * node5  32762  1280  6000  1861   978   5   5 0.03907 0.52573
 * node6  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
 * node7  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
   node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
 * node10 32762  7280 12000  1861  1026   4   4 0.22221 0.55179
   node11 32762  7280  6000  1861   922   4   5 0.22221 0.49577
   node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node13 32762  7280  6000  1861   922   4   5 0.22221 0.49577
   node14 32762  7280  6000  1861   922   4   5 0.22221 0.49577
 * node15 32762  7280 12000  1861  1131   4   3 0.22221 0.60782
   node16 32762 31280     0  1861  1860   0   0 0.95476 1.00000
   node17 32762  7280  6000  1861  1106   5   3 0.22221 0.59479
 * node18 32762  1280  6000  1396   561   5   3 0.03907 0.40239
 * node19 32762  1280  6000  1861  1026   5   3 0.03907 0.55179
   node20 32762 13280 12000  1861   689   3   9 0.40535 0.37068

Initial score: 0.52329131
Trying to minimize the CV...
    1. instance14  node1:node10  => node16:node10 0.42109120 a=f r:node16 f
    2. instance54  node4:node15  => node16:node15 0.31904594 a=f r:node16 f
    3. instance4   node5:node2   => node2:node16  0.26611015 a=f r:node16
    4. instance48  node18:node20 => node2:node18  0.21361717 a=r:node2 f
    5. instance93  node19:node18 => node16:node19 0.16166425 a=r:node16 f
    6. instance89  node3:node20  => node2:node3   0.11005629 a=r:node2 f
    7. instance5   node6:node2   => node16:node6  0.05841589 a=r:node16 f
    8. instance94  node7:node20  => node20:node16 0.00658759 a=f r:node16
    9. instance44  node20:node2  => node2:node15  0.00438740 a=f r:node15
   10. instance62  node14:node18 => node14:node16 0.00390087 a=r:node16
   11. instance13  node11:node14 => node11:node16 0.00361787 a=r:node16
   12. instance19  node10:node11 => node10:node7  0.00336636 a=r:node7
   13. instance43  node12:node13 => node12:node1  0.00305681 a=r:node1
   14. instance1   node1:node2   => node1:node4   0.00263124 a=r:node4
   15. instance58  node19:node20 => node19:node17 0.00252594 a=r:node17
Cluster score improved from 0.52329131 to 0.00252594

Commands to run to reach the above solution:
  echo step 1
  echo gnt-instance migrate instance14
  echo gnt-instance replace-disks -n node16 instance14
  echo gnt-instance migrate instance14
  echo step 2
  echo gnt-instance migrate instance54
  echo gnt-instance replace-disks -n node16 instance54
  echo gnt-instance migrate instance54
  echo step 3
  echo gnt-instance migrate instance4
  echo gnt-instance replace-disks -n node16 instance4
  echo step 4
  echo gnt-instance replace-disks -n node2 instance48
  echo gnt-instance migrate instance48
  echo step 5
  echo gnt-instance replace-disks -n node16 instance93
  echo gnt-instance migrate instance93
  echo step 6
  echo gnt-instance replace-disks -n node2 instance89
  echo gnt-instance migrate instance89
  echo step 7
  echo gnt-instance replace-disks -n node16 instance5
  echo gnt-instance migrate instance5
  echo step 8
  echo gnt-instance migrate instance94
  echo gnt-instance replace-disks -n node16 instance94
  echo step 9
  echo gnt-instance migrate instance44
  echo gnt-instance replace-disks -n node15 instance44
  echo step 10
  echo gnt-instance replace-disks -n node16 instance62
  echo step 11
  echo gnt-instance replace-disks -n node16 instance13
  echo step 12
  echo gnt-instance replace-disks -n node7 instance19
  echo step 13
  echo gnt-instance replace-disks -n node1 instance43
  echo step 14
  echo gnt-instance replace-disks -n node4 instance1
  echo step 15
  echo gnt-instance replace-disks -n node17 instance58

Final cluster status:
N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec  p_fmem  p_fdsk
   node1  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node2  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node3  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node4  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node5  32762  7280  6000  1861  1078   4   5 0.22221 0.57947
   node6  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node7  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node10 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node11 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
   node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node13 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
   node14 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
   node15 32762  7280  6000  1861  1031   4   4 0.22221 0.55408
   node16 32762  7280  6000  1861  1060   4   4 0.22221 0.57007
   node17 32762  7280  6000  1861  1006   5   4 0.22221 0.54105
   node18 32762  7280  6000  1396   761   4   2 0.22221 0.54570
   node19 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
   node20 32762 13280  6000  1861  1089   3   5 0.40535 0.58565

.fi
.in

Here we see, besides the step list, the initial and final cluster
status, with the final one showing all nodes being N+1 compliant, and
the command list to reach the final solution. In the initial listing,
we see which nodes are not N+1 compliant.

The algorithm is stable as long as each step above is fully completed,
e.g. in step 8, both the migrate and the replace-disks are
done. Otherwise, if only the migrate is done, the input data is
changed in such a way that the program will output a different
solution list (but one that hopefully ends in the same final state).

.SH SEE ALSO
.BR hn1 "(1), " hscan "(1), " ganeti "(7), " gnt-instance "(8), "
.BR gnt-node "(8)"