Revision a7a8f280

b/Ganeti/HTools/Instance.hs
@@ -97,7 +97,7 @@
              , pNode = pn
              , sNode = sn
              , idx = -1
-             , util = T.zeroUtil
+             , util = T.baseUtil
              }

 -- | Changes the index.
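For readers without the sources at hand, the hunk above swaps the default utilisation assigned to a freshly created instance from zeroUtil to baseUtil. A minimal sketch of how the two defaults plausibly differ, assuming the T module is Ganeti.HTools.Types and that DynUtil carries one weight per tracked resource (the field names here are illustrative, not guaranteed to match the sources):

    -- Sketch only: per-instance dynamic utilisation, one weight per resource.
    data DynUtil = DynUtil
        { cpuWeight :: Double
        , memWeight :: Double
        , dskWeight :: Double
        , netWeight :: Double
        } deriving (Show, Eq)

    -- Old default: no load at all, so instances without -U data drop out of
    -- the dynamic-utilisation balancing entirely.
    zeroUtil :: DynUtil
    zeroUtil = DynUtil 0 0 0 0

    -- New default: a uniform unit load, so instances without -U data still
    -- contribute equally (see the hbal.1 changes below).
    baseUtil :: DynUtil
    baseUtil = DynUtil 1 1 1 1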
b/hbal.1
@@ -156,14 +156,17 @@
 how to generate these values for input to hbal would be to track "xm
 list" for instance over a day and by computing the delta of the cpu
 values, and feed that via the \fI-U\fR option for all instances (and
-keep the other metrics as zero). For the algorithm to work, all that
-is needed is that the values are consistent for a metric across all
+keep the other metrics as one). For the algorithm to work, all that is
+needed is that the values are consistent for a metric across all
 instances (e.g. all instances use cpu% to report cpu usage, but they
-could represent network bandwith in Gbps).
+could represent network bandwith in Gbps). Note that it's recommended
+to not have zero as the load value for any instance metric since then
+secondary instances are not well balanced.

 On a perfectly balanced cluster (all nodes the same size, all
-instances the same size and spread across the nodes equally), all
-values would be zero. This doesn't happen too often in practice :)
+instances the same size and spread across the nodes equally), the
+values for all metrics would be zero. This doesn't happen too often in
+practice :)

 .SS OFFLINE INSTANCES

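As an aside, the "xm list" tracking procedure described in the hunk above can be scripted in a few lines. The sketch below is not part of this revision: it turns two cumulative cpu-time samples into lines for the \fI-U\fR file, keeping the other metrics at one as the updated text recommends. The file layout assumed here (instance name followed by cpu/mem/disk/net values) is an assumption; check the man page for the authoritative format.

    -- Sketch, assuming one "instance cpu mem disk net" line per instance
    -- in the utilisation file.
    import qualified Data.Map as M

    type Sample = M.Map String Double   -- instance name -> cumulative cpu seconds

    -- Given the sampling interval in seconds and two samples, emit one
    -- -U line per instance: measured cpu load, other metrics left at one.
    utilLines :: Double -> Sample -> Sample -> [String]
    utilLines interval before after =
      [ unwords [name, show (max 0 (cur - old) / interval), "1", "1", "1"]
      | (name, cur) <- M.toList after
      , Just old <- [M.lookup name before]
      ]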
@@ -371,8 +374,15 @@
 the instance name must match exactly the instance as read from
 Ganeti. In case of unknown instance names, the program will abort.

-If not given, the default values are zero for all metrics and thus
-dynamic utilisation has no effect on the balancing algorithm.
+If not given, the default values are one for all metrics and thus
+dynamic utilisation has only one effect on the algorithm: the
+equalisation of the secondary instances across nodes (this is the only
+metric that is not tracked by another, dedicated value, and thus the
+disk load of instances will cause secondary instance
+equalisation). Note that value of one will also influence slightly the
+primary instance count, but that is already tracked via other metrics
+and thus the influence of the dynamic utilisation will be practically
+insignificant.

 .TP
 .BI "-n" nodefile ", --nodes=" nodefile
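To make the new wording above more concrete: a default of one still changes the result because, per the text, the disk metric is the one also applied to secondary nodes. A simplified, assumed accumulation is sketched below; the real logic lives in the htools node handling, and the function and field names are illustrative only.

    -- Sketch only: how per-node load could accumulate from instances.
    data DynUtil = DynUtil { cpu, mem, dsk, net :: Double }

    -- A primary instance contributes all four metrics to its node ...
    addPrimary :: DynUtil -> DynUtil -> DynUtil
    addPrimary nodeLoad u = DynUtil (cpu nodeLoad + cpu u) (mem nodeLoad + mem u)
                                    (dsk nodeLoad + dsk u) (net nodeLoad + net u)

    -- ... while a secondary contributes only its disk load, so with every
    -- metric defaulting to one the per-node secondary counts become the
    -- only thing the dynamic utilisation ends up equalising.
    addSecondary :: DynUtil -> DynUtil -> DynUtil
    addSecondary nodeLoad u = nodeLoad { dsk = dsk nodeLoad + dsk u }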
