
.TH HBAL 1 2009-03-23 htools "Ganeti H-tools"
.SH NAME
hbal \- Cluster balancer for Ganeti

.SH SYNOPSIS
.B hbal
.B "[-C]"
.B "[-p]"
.B "[-o]"
.BI "[-l" limit "]"
.BI "[-O" name... "]"
.BI "[-m " cluster "]"
.BI "[-n " nodes-file " ]"
.BI "[-i " instances-file "]"

.B hbal
.B --version

.SH DESCRIPTION
hbal is a cluster balancer that looks at the current state of the
cluster (nodes with their total and free disk, memory, etc.) and
instance placement and computes a series of steps designed to bring
the cluster into a better state.

The algorithm to do so is designed to be stable (i.e. it will give you
the same results when restarting it from the middle of the solution)
and reasonably fast. It is not, however, designed to be a perfect
algorithm - it is possible to make it go into a corner from which it
can find no improvement, because it only looks one "step" ahead.

By default, the program will show the solution incrementally as it is
computed, in a somewhat cryptic format; for getting the actual Ganeti
command list, use the \fB-C\fR option.

.SS ALGORITHM

The program works in independent steps; at each step, we compute the
best instance move that lowers the cluster score.

The possible move types for an instance are combinations of
failover/migrate and replace-disks such that we change one of the
instance nodes, and the other one remains (but possibly with a changed
role, e.g. from primary it becomes secondary). The list is:
.RS 4
.TP 3
\(em
failover (f)
.TP
\(em
replace secondary (r)
.TP
\(em
replace primary, a composite move (f, r, f)
.TP
\(em
failover and replace secondary, also composite (f, r)
.TP
\(em
replace secondary and failover, also composite (r, f)
.RE

We don't do the only remaining possibility of replacing both nodes
(r,f,r,f or the equivalent f,r,f,r) since this move needs an
exhaustive search over both candidate primary and secondary nodes, and
is O(n*n) in the number of nodes. Furthermore, it doesn't seem to
give better scores but will result in more disk replacements.
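
The following is a minimal Python sketch of this greedy,
one-step-lookahead loop (the real implementation lives in the htools
Haskell sources; the \fIscore\fR, \fIapply_move\fR and
\fIcandidate_moves\fR helpers below are hypothetical stand-ins for the
actual cluster model):

.in +4n
.nf
def balance(cluster, score, apply_move, candidate_moves, max_steps=None):
    """Repeatedly apply the single best move until nothing improves.

    candidate_moves() is assumed to yield every legal (instance, move)
    pair built from the five move types listed above.
    """
    steps = []
    current = score(cluster)
    while max_steps is None or len(steps) < max_steps:
        best = None
        for move in candidate_moves(cluster):
            new_cluster = apply_move(cluster, move)
            new_score = score(new_cluster)
            if new_score < current and (best is None or new_score < best[1]):
                best = (move, new_score, new_cluster)
        if best is None:
            # no single move lowers the score any further; stop here
            break
        move, current, cluster = best
        steps.append((move, current))
    return steps
.fi
.in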

.SS CLUSTER SCORING

As said before, the algorithm tries to minimise the cluster score at
each step. Currently this score is computed as a sum of the following
components:
.RS 4
.TP 3
\(em
coefficient of variance of the percent of free memory
.TP
\(em
coefficient of variance of the percent of reserved memory
.TP
\(em
coefficient of variance of the percent of free disk
.TP
\(em
percentage of nodes failing N+1 check
.TP
\(em
percentage of instances living (either as primary or secondary) on
offline nodes
.RE

The free memory and free disk values help ensure that all nodes are
somewhat balanced in their resource usage. The reserved memory helps
to ensure that nodes are somewhat balanced in holding secondary
instances, and that no node keeps too much memory reserved for
N+1. And finally, the N+1 percentage helps guide the algorithm towards
eliminating N+1 failures, if possible.

Except for the N+1 failures and offline instances percentage, we use
the coefficient of variance since this brings the values into the same
unit, so to speak, and with a restricted domain of values (between
zero and one). The percentage of N+1 failures, while also in this
numeric range, doesn't actually have the same meaning, but it has been
shown to work well.
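
As an illustration only (the actual metric weights and the full metric
list live in the htools sources), a Python sketch of a score built
this way could look as follows; the node and instance attribute names
are hypothetical:

.in +4n
.nf
from statistics import mean, pstdev

def coeff_var(values):
    # coefficient of variance: standard deviation over mean
    m = mean(values)
    return pstdev(values) / m if m else 0.0

def cluster_score(nodes, instances):
    # p_fmem, p_rmem, p_fdsk are assumed to be fractions in [0, 1];
    # fails_n1 and offline are booleans; pnode/snode are node objects
    online = [n for n in nodes if not n.offline]
    score = coeff_var([n.p_fmem for n in online])
    score += coeff_var([n.p_rmem for n in online])
    score += coeff_var([n.p_fdsk for n in online])
    score += sum(1 for n in online if n.fails_n1) / len(online)
    score += sum(1 for i in instances
                 if i.pnode.offline or i.snode.offline) / len(instances)
    return score
.fi
.in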

The alternative of using, for the N+1 checks, the coefficient of
variance of (N+1 fail=1, N+1 pass=0) across nodes could hint the
algorithm to create more N+1 failures if most nodes already fail
N+1. Since this (making N+1 failures) is not allowed by other rules of
the algorithm, the N+1 checks would simply not work anymore in this
case.

The offline instances percentage (meaning the percentage of instances
living on offline nodes) will cause the algorithm to actively move
instances away from offline nodes. This, coupled with the restriction
on placement given by offline nodes, will cause evacuation of such
nodes.

On a perfectly balanced cluster (all nodes the same size, all
instances the same size and spread across the nodes equally), all
values would be zero. This doesn't happen too often in practice :)

.SS OFFLINE INSTANCES

Since current Ganeti versions do not report the memory used by offline
(down) instances, ignoring the run status of instances will cause
wrong calculations. For this reason, the algorithm subtracts the
memory size of down instances from the free node memory of their
primary node, in effect simulating the startup of such instances.
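
A minimal sketch of this adjustment (the attribute names are
illustrative, not the real data model):

.in +4n
.nf
def adjust_for_down_instances(instances):
    # charge the memory of each stopped instance to its primary node,
    # as if the instance were about to be started there
    for inst in instances:
        if not inst.running:
            inst.pnode.f_mem -= inst.mem
.fi
.in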

.SS OTHER POSSIBLE METRICS

It would be desirable to add more metrics to the algorithm, especially
dynamically-computed metrics, such as:
.RS 4
.TP 3
\(em
CPU usage of instances, combined with VCPU versus PCPU count
.TP
\(em
Disk IO usage
.TP
\(em
Network IO
.RE

.SH OPTIONS
The options that can be passed to the program are as follows:
.TP
.B -C, --print-commands
Print the command list at the end of the run. Without this, the
program will only show a shorter but cryptic output.
.TP
.B -p, --print-nodes
Prints the before and after node status, in a format designed to allow
the user to understand the node's most important parameters.

The node list will contain the following information:
.RS
.TP
.B F
a character denoting the status of the node, with '-' meaning an
offline node, '*' meaning N+1 failure and blank meaning a good node
.TP
.B Name
the node name
.TP
.B t_mem
the total node memory
.TP
.B n_mem
the memory used by the node itself
.TP
.B i_mem
the memory used by instances
.TP
.B x_mem
the amount of memory which seems to be in use but cannot be attributed
to any instance; usually this means that the hypervisor has some
overhead or that there are other reporting errors
.TP
.B f_mem
the free node memory
.TP
.B r_mem
the reserved node memory, which is the amount of free memory needed
for N+1 compliance
.TP
.B t_dsk
total disk
.TP
.B f_dsk
free disk
.TP
.B pri
number of primary instances
.TP
.B sec
number of secondary instances
.TP
.B p_fmem
percent of free memory
.TP
.B p_fdsk
percent of free disk
.RE
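
Judging from the example tables later in this page, \fBp_fmem\fR
appears to be simply f_mem divided by t_mem (a quick check, for
illustration only):

.in +4n
.nf
# node1 in the example below has t_mem=32762 and f_mem=1280,
# and its listed p_fmem is 0.03907
t_mem, f_mem = 32762, 1280
print(round(f_mem / t_mem, 5))   # prints 0.03907
.fi
.in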

.TP
.B -o, --oneline
Only shows a one-line output from the program, designed for the case
when one wants to look at multiple clusters at once and check their
status.

The line will contain four fields:
.RS
.RS 4
.TP 3
\(em
initial cluster score
.TP
\(em
number of steps in the solution
.TP
\(em
final cluster score
.TP
\(em
improvement in the cluster score
.RE
.RE

.TP
.BI "-O " name
This option (which can be given multiple times) will mark nodes as
being \fIoffline\fR. This means a couple of things:
.RS
.RS 4
.TP 3
\(em
instances won't be placed on these nodes, not even temporarily;
e.g. the \fIreplace primary\fR move is not available if the secondary
node is offline, since this move requires a failover.
.TP
\(em
these nodes will not be included in the score calculation (except for
the percentage of instances on offline nodes)
.RE
.RE

.TP
.BI "-n" nodefile ", --nodes=" nodefile
The name of the file holding node information (if not collecting via
RAPI), instead of the default
.I nodes
file.

.TP
.BI "-i" instancefile ", --instances=" instancefile
The name of the file holding instance information (if not collecting
via RAPI), instead of the default
.I instances
file.

.TP
.BI "-m" cluster
Collect data not from files but directly from the
.I cluster
given as an argument via RAPI. This works for both Ganeti 1.2 and
Ganeti 2.0.

.TP
.BI "-l" N ", --max-length=" N
Restrict the solution to this length. This can be used for example to
automate the execution of the balancing.

.TP
.B -v, --verbose
Increase the output verbosity. Each usage of this option will increase
the verbosity (currently more than 2 doesn't make sense) from the
default of zero.

.TP
.B -V, --version
Just show the program version and exit.

.SH EXIT STATUS

The exit status of the command will be zero, unless for some reason
the algorithm failed fatally (e.g. wrong node or instance data).

.SH BUGS

The program does not check its input data for consistency, and aborts
with cryptic error messages in this case.

The algorithm is not perfect.

The algorithm doesn't deal with non-\fBdrbd\fR instances, and chokes
on input data which has such instances.

The output format is not easily scriptable, and the program should
feed moves directly into Ganeti (either via RAPI or via a gnt-debug
input file).

.SH EXAMPLE

Note that these examples are not for the latest version (they don't
have full node data).

.SS Default output

With the default options, the program shows each individual step and
the improvements it brings in cluster score:

.in +4n
.nf
.RB "$" " hbal"
Loaded 20 nodes, 80 instances
Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
Initial score: 0.52329131
Trying to minimize the CV...
    1. instance14  node1:node10  => node16:node10 0.42109120 a=f r:node16 f
    2. instance54  node4:node15  => node16:node15 0.31904594 a=f r:node16 f
    3. instance4   node5:node2   => node2:node16  0.26611015 a=f r:node16
    4. instance48  node18:node20 => node2:node18  0.21361717 a=r:node2 f
    5. instance93  node19:node18 => node16:node19 0.16166425 a=r:node16 f
    6. instance89  node3:node20  => node2:node3   0.11005629 a=r:node2 f
    7. instance5   node6:node2   => node16:node6  0.05841589 a=r:node16 f
    8. instance94  node7:node20  => node20:node16 0.00658759 a=f r:node16
    9. instance44  node20:node2  => node2:node15  0.00438740 a=f r:node15
   10. instance62  node14:node18 => node14:node16 0.00390087 a=r:node16
   11. instance13  node11:node14 => node11:node16 0.00361787 a=r:node16
   12. instance19  node10:node11 => node10:node7  0.00336636 a=r:node7
   13. instance43  node12:node13 => node12:node1  0.00305681 a=r:node1
   14. instance1   node1:node2   => node1:node4   0.00263124 a=r:node4
   15. instance58  node19:node20 => node19:node17 0.00252594 a=r:node17
Cluster score improved from 0.52329131 to 0.00252594
.fi
.in

In the above output, we can see:
342
  - the input data (here from files) shows a cluster with 20 nodes and
343
    80 instances
344
  - the cluster is not initially N+1 compliant
345
  - the initial score is 0.52329131
346

    
347
The step list follows, showing the instance, its initial
348
primary/secondary nodes, the new primary secondary, the cluster list,
349
and the actions taken in this step (with 'f' denoting failover/migrate
350
and 'r' denoting replace secondary).
351

    
352
Finally, the program shows the improvement in cluster score.
353

    
354
A more detailed output is obtained via the \fB-C\fR and \fB-p\fR options:
355

    
356
.in +4n
357
.nf
358
.RB "$" " hbal"
359
Loaded 20 nodes, 80 instances
360
Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
361
Initial cluster status:
362
N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec  p_fmem  p_fdsk
363
 * node1  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
364
   node2  32762 31280 12000  1861  1026   0   8 0.95476 0.55179
365
 * node3  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
366
 * node4  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
367
 * node5  32762  1280  6000  1861   978   5   5 0.03907 0.52573
368
 * node6  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
369
 * node7  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
370
   node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
371
   node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
372
 * node10 32762  7280 12000  1861  1026   4   4 0.22221 0.55179
373
   node11 32762  7280  6000  1861   922   4   5 0.22221 0.49577
374
   node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
375
   node13 32762  7280  6000  1861   922   4   5 0.22221 0.49577
376
   node14 32762  7280  6000  1861   922   4   5 0.22221 0.49577
377
 * node15 32762  7280 12000  1861  1131   4   3 0.22221 0.60782
378
   node16 32762 31280     0  1861  1860   0   0 0.95476 1.00000
379
   node17 32762  7280  6000  1861  1106   5   3 0.22221 0.59479
380
 * node18 32762  1280  6000  1396   561   5   3 0.03907 0.40239
381
 * node19 32762  1280  6000  1861  1026   5   3 0.03907 0.55179
382
   node20 32762 13280 12000  1861   689   3   9 0.40535 0.37068
383

    
384
Initial score: 0.52329131
385
Trying to minimize the CV...
386
    1. instance14  node1:node10  => node16:node10 0.42109120 a=f r:node16 f
387
    2. instance54  node4:node15  => node16:node15 0.31904594 a=f r:node16 f
388
    3. instance4   node5:node2   => node2:node16  0.26611015 a=f r:node16
389
    4. instance48  node18:node20 => node2:node18  0.21361717 a=r:node2 f
390
    5. instance93  node19:node18 => node16:node19 0.16166425 a=r:node16 f
391
    6. instance89  node3:node20  => node2:node3   0.11005629 a=r:node2 f
392
    7. instance5   node6:node2   => node16:node6  0.05841589 a=r:node16 f
393
    8. instance94  node7:node20  => node20:node16 0.00658759 a=f r:node16
394
    9. instance44  node20:node2  => node2:node15  0.00438740 a=f r:node15
395
   10. instance62  node14:node18 => node14:node16 0.00390087 a=r:node16
396
   11. instance13  node11:node14 => node11:node16 0.00361787 a=r:node16
397
   12. instance19  node10:node11 => node10:node7  0.00336636 a=r:node7
398
   13. instance43  node12:node13 => node12:node1  0.00305681 a=r:node1
399
   14. instance1   node1:node2   => node1:node4   0.00263124 a=r:node4
400
   15. instance58  node19:node20 => node19:node17 0.00252594 a=r:node17
401
Cluster score improved from 0.52329131 to 0.00252594
402

    
403
Commands to run to reach the above solution:
404
  echo step 1
405
  echo gnt-instance migrate instance14
406
  echo gnt-instance replace-disks -n node16 instance14
407
  echo gnt-instance migrate instance14
408
  echo step 2
409
  echo gnt-instance migrate instance54
410
  echo gnt-instance replace-disks -n node16 instance54
411
  echo gnt-instance migrate instance54
412
  echo step 3
413
  echo gnt-instance migrate instance4
414
  echo gnt-instance replace-disks -n node16 instance4
415
  echo step 4
416
  echo gnt-instance replace-disks -n node2 instance48
417
  echo gnt-instance migrate instance48
418
  echo step 5
419
  echo gnt-instance replace-disks -n node16 instance93
420
  echo gnt-instance migrate instance93
421
  echo step 6
422
  echo gnt-instance replace-disks -n node2 instance89
423
  echo gnt-instance migrate instance89
424
  echo step 7
425
  echo gnt-instance replace-disks -n node16 instance5
426
  echo gnt-instance migrate instance5
427
  echo step 8
428
  echo gnt-instance migrate instance94
429
  echo gnt-instance replace-disks -n node16 instance94
430
  echo step 9
431
  echo gnt-instance migrate instance44
432
  echo gnt-instance replace-disks -n node15 instance44
433
  echo step 10
434
  echo gnt-instance replace-disks -n node16 instance62
435
  echo step 11
436
  echo gnt-instance replace-disks -n node16 instance13
437
  echo step 12
438
  echo gnt-instance replace-disks -n node7 instance19
439
  echo step 13
440
  echo gnt-instance replace-disks -n node1 instance43
441
  echo step 14
442
  echo gnt-instance replace-disks -n node4 instance1
443
  echo step 15
444
  echo gnt-instance replace-disks -n node17 instance58
445

    
446
Final cluster status:
447
N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec  p_fmem  p_fdsk
448
   node1  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
449
   node2  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
450
   node3  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
451
   node4  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
452
   node5  32762  7280  6000  1861  1078   4   5 0.22221 0.57947
453
   node6  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
454
   node7  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
455
   node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
456
   node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
457
   node10 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
458
   node11 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
459
   node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
460
   node13 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
461
   node14 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
462
   node15 32762  7280  6000  1861  1031   4   4 0.22221 0.55408
463
   node16 32762  7280  6000  1861  1060   4   4 0.22221 0.57007
464
   node17 32762  7280  6000  1861  1006   5   4 0.22221 0.54105
465
   node18 32762  7280  6000  1396   761   4   2 0.22221 0.54570
466
   node19 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
467
   node20 32762 13280  6000  1861  1089   3   5 0.40535 0.58565
468

    
469
.fi
470
.in
471

    
Here we see, besides the step list, the initial and final cluster
status, with the final one showing all nodes being N+1 compliant, and
the command list to reach the final solution. In the initial listing,
we see which nodes are not N+1 compliant.

The algorithm is stable as long as each step above is fully completed,
e.g. in step 8, both the migrate and the replace-disks are
done. Otherwise, if only the migrate is done, the input data is
changed in a way that the program will output a different solution
list (but hopefully will end in the same state).

.SH SEE ALSO
.BR hn1 "(1), " hscan "(1), " ganeti "(7), " gnt-instance "(8), "
.BR gnt-node "(8)"