HBAL(1) Ganeti | Version @GANETI_VERSION@
=========================================

NAME
----

hbal \- Cluster balancer for Ganeti

SYNOPSIS
--------

**hbal** {backend options...} [algorithm options...] [reporting options...]

**hbal** \--version


Backend options:

{ **-m** *cluster* | **-L[** *path* **] [-X]** | **-t** *data-file* |
**-I** *path* }

Algorithm options:

**[ \--max-cpu *cpu-ratio* ]**
**[ \--min-disk *disk-ratio* ]**
**[ -l *limit* ]**
**[ -e *score* ]**
**[ -g *delta* ]** **[ \--min-gain-limit *threshold* ]**
**[ -O *name...* ]**
**[ \--no-disk-moves ]**
**[ \--no-instance-moves ]**
**[ -U *util-file* ]**
**[ \--evac-mode ]**
**[ \--select-instances *inst...* ]**
**[ \--exclude-instances *inst...* ]**

Reporting options:

**[ -C[ *file* ] ]**
**[ -p[ *fields* ] ]**
**[ \--print-instances ]**
**[ -S *file* ]**
**[ -v... | -q ]**


DESCRIPTION
-----------

hbal is a cluster balancer that looks at the current state of the
cluster (nodes with their total and free disk, memory, etc.) and
instance placement and computes a series of steps designed to bring
the cluster into a better state.

The algorithm used is designed to be stable (i.e. it will give you the
same results when restarting it from the middle of the solution) and
reasonably fast. It is not, however, designed to be a perfect algorithm:
it is possible to make it go into a corner from which it can find no
improvement, because it looks only one "step" ahead.

By default, the program will show the solution incrementally as it is
computed, in a somewhat cryptic format; for getting the actual Ganeti
command list, use the **-C** option.

ALGORITHM
~~~~~~~~~

The program works in independent steps; at each step, we compute the
best instance move that lowers the cluster score.

The possible move types for an instance are combinations of
failover/migrate and replace-disks such that we change one of the
instance nodes, and the other one remains (but possibly with a changed
role, e.g. from primary it becomes secondary). The list is:

- failover (f)
- replace secondary (r)
- replace primary, a composite move (f, r, f)
- failover and replace secondary, also composite (f, r)
- replace secondary and failover, also composite (r, f)

We don't do the only remaining possibility of replacing both nodes
(r,f,r,f or the equivalent f,r,f,r) since this move needs an
exhaustive search over both candidate primary and secondary nodes, and
is O(n*n) in the number of nodes. Furthermore, it doesn't seem to
give better scores but will result in more disk replacements.

PLACEMENT RESTRICTIONS
~~~~~~~~~~~~~~~~~~~~~~

At each step, we prevent an instance move if it would cause:

- a node to go into N+1 failure state
- an instance to move onto an offline node (offline nodes are either
  read from the cluster or declared with *-O*)
- an exclusion-tag based conflict (exclusion tags are read from the
  cluster and/or defined via the *\--exclusion-tags* option)
- a max vcpu/pcpu ratio to be exceeded (configured via *\--max-cpu*)
- min disk free percentage to go below the configured limit
  (configured via *\--min-disk*)

CLUSTER SCORING
~~~~~~~~~~~~~~~

As said before, the algorithm tries to minimise the cluster score at
each step. Currently this score is computed as a sum of the following
components:

- standard deviation of the percent of free memory
- standard deviation of the percent of reserved memory
- standard deviation of the percent of free disk
- count of nodes failing N+1 check
- count of instances living (either as primary or secondary) on
  offline nodes
- count of instances living (as primary) on offline nodes; this
  differs from the above metric by helping failover of such instances
  in 2-node clusters
- standard deviation of the ratio of virtual-to-physical cpus (for
  primary instances of the node)
- standard deviation of the dynamic load on the nodes, for cpus,
  memory, disk and network

The free memory and free disk values help ensure that all nodes are
somewhat balanced in their resource usage. The reserved memory helps
to ensure that nodes are somewhat balanced in holding secondary
instances, and that no node keeps too much memory reserved for
N+1. And finally, the N+1 percentage helps guide the algorithm towards
eliminating N+1 failures, if possible.

Except for the N+1 failures and offline instances counts, we use the
standard deviation since when used with values within a fixed range
(we use percents expressed as values between zero and one) it gives
consistent results across all metrics (there are some small issues
related to different means, but it works generally well). The 'count'
type values will have a higher score and thus will matter more for
balancing; they are therefore better for hard constraints (like
evacuating nodes and fixing N+1 failures). For example, the offline
instances count (i.e. the number of instances living on offline nodes)
will cause the algorithm to actively move instances away from offline
nodes. This, coupled with the restriction on placement given by
offline nodes, will cause evacuation of such nodes.

The dynamic load values need to be read from an external file (Ganeti
doesn't supply them), and are computed for each node as: sum of
primary instance cpu load, sum of primary instance memory load, sum of
primary and secondary instance disk load (as DRBD generates write load
on secondary nodes too in the normal case, and in degraded scenarios
also read load), and sum of primary instance network load. An example
of how to generate these values for input to hbal would be to track
``xm list`` for instances over a day, compute the delta of the cpu
values, and feed that via the *-U* option for all instances (keeping
the other metrics as one). For the algorithm to work, all that is
needed is that the values are consistent for a metric across all
instances (e.g. all instances use cpu% to report cpu usage, and not
something related to number of CPU seconds used if the CPUs are
different), and that they are normalised to between zero and one. Note
that it's recommended to not have zero as the load value for any
instance metric since then secondary instances are not well balanced.

On a perfectly balanced cluster (all nodes the same size, all
instances the same size and spread across the nodes equally), the
values for all metrics would be zero. This doesn't happen too often in
practice :)

OFFLINE INSTANCES
~~~~~~~~~~~~~~~~~

Since current Ganeti versions do not report the memory used by offline
(down) instances, ignoring the run status of instances will cause
wrong calculations. For this reason, the algorithm subtracts the
memory size of down instances from the free node memory of their
primary node, in effect simulating the startup of such instances.

EXCLUSION TAGS
~~~~~~~~~~~~~~

The exclusion tags mechanism is designed to prevent instances which
run the same workload (e.g. two DNS servers) from landing on the same
node, which would make the respective node a SPOF for the given
service.

It works by tagging instances with certain tags and then building
exclusion maps based on these. Which tags are actually used is
configured either via the command line (option *\--exclusion-tags*)
or via adding them to the cluster tags:

\--exclusion-tags=a,b
  This will make all instance tags of the form *a:\**, *b:\** be
  considered for the exclusion map

cluster tags *htools:iextags:a*, *htools:iextags:b*
  This will make instance tags *a:\**, *b:\** be considered for the
  exclusion map. More precisely, the suffix of cluster tags starting
  with *htools:iextags:* will become the prefix of the exclusion tags.

Both the above forms mean that two instances both having (e.g.) the
tag *a:foo* or *b:bar* won't end up on the same node.

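As an illustration of the second form, the following commands (the
instance and tag names are purely illustrative) would keep two DNS
servers apart::

    # declare "service" as an exclusion-tag prefix
    gnt-cluster add-tags htools:iextags:service

    # give both DNS instances the same exclusion tag
    gnt-instance add-tags dns1 service:dns
    gnt-instance add-tags dns2 service:dns
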
OPTIONS
-------

The options that can be passed to the program are as follows:

-C, \--print-commands
  Print the command list at the end of the run. Without this, the
  program will only show a shorter, but cryptic output.

  Note that the moves list will be split into independent steps,
  called "jobsets", but only for visual inspection, not for actual
  parallelisation. It is not possible to parallelise these directly
  when executed via "gnt-instance" commands, since a compound command
  (e.g. failover and replace-disks) must be executed
  serially. Parallel execution is only possible when using the Luxi
  backend and the *-L* option.

  The moves are split into jobsets by accumulating moves until the
  next move touches nodes already touched by the current moves; this
  means the moves can no longer be executed in parallel (due to
  resource allocation in Ganeti), and thus a new jobset is started.

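  For example, combining this option with the node report produces
  output like that shown in the EXAMPLE section below (here using the
  Luxi backend)::

    $ hbal -L -C -p
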
-p, \--print-nodes
  Prints the before and after node status, in a format designed to allow
  the user to understand the node's most important parameters. See the
  man page **htools**(1) for more details about this option.

\--print-instances
  Prints the before and after instance map. This is less useful than
  the node status, but it can help in understanding instance moves.

-O *name*
  This option (which can be given multiple times) will mark nodes as
  being *offline*. This means a couple of things:

  - instances won't be placed on these nodes, not even temporarily;
    e.g. the *replace primary* move is not available if the secondary
    node is offline, since this move requires a failover.
  - these nodes will not be included in the score calculation (except
    for the percentage of instances on offline nodes)

  Note that the algorithm will also mark as offline any nodes which
  are reported by RAPI as such, or that have "?" in file-based input
  in any numeric fields.

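  For example, to compute (and print commands for) a rebalance that
  treats two nodes as offline, effectively draining them (node names
  are illustrative)::

    $ hbal -L -C -O node3.example.com -O node4.example.com
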
-e *score*, \--min-score=*score*
  This parameter denotes the minimum score we are happy with and alters
  the computation in two ways:

  - if the cluster has an initial score lower than this value, then we
    don't enter the algorithm at all, and exit with success
  - during the iterative process, if we reach a score lower than this
    value, we exit the algorithm

  The default value of the parameter is currently ``1e-9`` (chosen
  empirically).

-g *delta*, \--min-gain=*delta*
  Since the balancing algorithm can sometimes result in just very tiny
  improvements, that bring less gain than they cost in relocation
  time, this parameter (defaulting to 0.01) represents the minimum
  gain we require during a step, to continue balancing.

\--min-gain-limit=*threshold*
  The above min-gain option will only take effect if the cluster score
  is already below *threshold* (defaults to 0.1). The rationale behind
  this setting is that at high cluster scores (badly balanced
  clusters), we don't want to abort the rebalance too quickly, as
  later gains might still be significant. However, under the
  threshold, the total gain is only the threshold value, so we can
  exit early.

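  As an illustration, the following run stops as soon as a step gains
  less than 0.05 in score, but applies that cut-off only once the
  score has dropped below 0.5 (both values chosen purely for
  illustration)::

    $ hbal -L -g 0.05 --min-gain-limit=0.5
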
\--no-disk-moves
  This parameter prevents hbal from using disk move
  (i.e. "gnt-instance replace-disks") operations. This will result in
  a much quicker balancing, but of course the improvements are
  limited. It is up to the user to decide when to use one or the other.

\--no-instance-moves
  This parameter prevents hbal from using instance move
  (i.e. "gnt-instance migrate/failover") operations. hbal will then
  only use the slow disk-replacement operations, and will also provide
  a worse balance, but this can be useful if moving instances around
  is deemed unsafe or not preferred.

\--evac-mode
  This parameter restricts the list of instances considered for moving
  to the ones living on offline/drained nodes. It can be used as a
  (bulk) replacement for Ganeti's own *gnt-node evacuate*, with the
  note that it doesn't guarantee full evacuation.

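  For example, to move away only the instances of a node that hbal
  should treat as offline, submitting the resulting jobs directly via
  Luxi (the node name is illustrative)::

    $ hbal -L -X --evac-mode -O node7.example.com
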
\--select-instances=*instances*
  This parameter marks the given instances (as a comma-separated list)
  as the only ones being moved during the rebalance.

\--exclude-instances=*instances*
  This parameter prevents the given instances (as a comma-separated
  list) from being moved during the rebalance.

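  For example (instance names are illustrative), to rebalance while
  keeping two sensitive instances pinned to their current nodes::

    $ hbal -L -C --exclude-instances=db1.example.com,db2.example.com
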
-U *util-file*
  This parameter specifies a file holding instance dynamic utilisation
  information that will be used to tweak the balancing algorithm to
  equalise load on the nodes (as opposed to static resource
  usage). The file is in the format "instance_name cpu_util mem_util
  disk_util net_util" where the "_util" parameters are interpreted as
  numbers and the instance name must match exactly the instance as
  read from Ganeti. In case of unknown instance names, the program
  will abort.

  If not given, the default values are one for all metrics and thus
  dynamic utilisation has only one effect on the algorithm: the
  equalisation of the secondary instances across nodes (this is the
  only metric that is not tracked by another, dedicated value, and
  thus the disk load of instances will cause secondary instance
  equalisation). Note that a value of one will also slightly influence
  the primary instance count, but that is already tracked via other
  metrics and thus the influence of the dynamic utilisation will be
  practically insignificant.

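  A minimal utilisation file in the format described above might look
  like this (instance names and the normalised load values are
  illustrative)::

    instance1.example.com 0.30 0.20 0.15 0.05
    instance2.example.com 0.85 0.60 0.30 0.40
    instance3.example.com 0.10 0.10 0.05 0.01

  It would then be passed to hbal with, e.g., ``hbal -L -U util.data``.
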
-S *filename*, \--save-cluster=*filename*
  If given, the state of the cluster before the balancing is saved to
  the given file plus the extension "original"
  (i.e. *filename*.original), and the state at the end of the
  balancing is saved to the given file plus the extension "balanced"
  (i.e. *filename*.balanced). This allows re-feeding the cluster state
  to either hbal itself or for example hspace via the ``-t`` option.

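  For example (the paths are illustrative), the post-balancing state
  can be saved and later re-examined offline via the text backend::

    $ hbal -L -S /tmp/cluster-state
    $ hbal -t /tmp/cluster-state.balanced -p
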
-t *datafile*, \--text-data=*datafile*
  Backend specification: the name of the file holding node and instance
  information (if not collecting via RAPI or LUXI). This or one of the
  other backends must be selected. The option is described in the man
  page **htools**(1).

-m *cluster*
  Backend specification: collect data directly from the *cluster* given
  as an argument via RAPI. The option is described in the man page
  **htools**(1).

-L [*path*]
  Backend specification: collect data directly from the master daemon,
  which is to be contacted via LUXI (an internal Ganeti protocol). The
  option is described in the man page **htools**(1).

-X
  When using the Luxi backend, hbal can also execute the given
  commands. The execution method is to execute the individual jobsets
  (see the *-C* option for details) in separate stages, aborting if at
  any time a jobset doesn't have all its jobs successful. Each step in
  the balancing solution will be translated into exactly one Ganeti job
  (having between one and three OpCodes), and all the steps in a
  jobset will be executed in parallel. The jobsets themselves are
  executed serially.

  The execution of the job series can be interrupted, see below for
  signal handling.

-l *N*, \--max-length=*N*
  Restrict the solution to this length. This can be used for example
  to automate the execution of the balancing.

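  For example, a periodic (e.g. cron-driven) invocation that executes
  at most five balancing steps per run, so that each run stays short,
  could look like this (the step limit is illustrative)::

    $ hbal -L -X -l 5
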
\--max-cpu=*cpu-ratio*
  The maximum virtual to physical cpu ratio, as a floating point number
  greater than or equal to one. For example, specifying *cpu-ratio* as
  **2.5** means that, for a 4-cpu machine, a maximum of 10 virtual cpus
  should be allowed to be in use for primary instances. A value of
  exactly one means there will be no over-subscription of CPU (except
  for the CPU time used by the node itself), and values below one do not
  make sense, as that means other resources (e.g. disk) won't be fully
  utilised due to CPU restrictions.

\--min-disk=*disk-ratio*
  The minimum amount of free disk space remaining, as a floating point
  number. For example, specifying *disk-ratio* as **0.25** means that
  at least one quarter of disk space should be left free on nodes.

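  As an illustration, the following run caps CPU over-subscription at
  a 2:1 ratio and keeps at least 20% of disk space free on every node
  (both limits are illustrative)::

    $ hbal -L -C --max-cpu=2.0 --min-disk=0.2
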
-G *uuid*, \--group=*uuid*
  On a multi-group cluster, select this group for
  processing. Otherwise hbal will abort, since it cannot balance
  multiple groups at the same time.

-v, \--verbose
  Increase the output verbosity. Each usage of this option will
  increase the verbosity (currently more than 2 doesn't make sense)
  from the default of one.

-q, \--quiet
  Decrease the output verbosity. Each usage of this option will
  decrease the verbosity (less than zero doesn't make sense) from the
  default of one.

-V, \--version
  Just show the program version and exit.

SIGNAL HANDLING
---------------

When executing jobs via LUXI (using the ``-X`` option), normally hbal
will execute all jobs until either one errors out or all the jobs finish
successfully.

Since balancing can take a long time, it is possible to stop hbal early
in two ways:

- by sending a ``SIGINT`` (``^C``), hbal will register the termination
  request, and will wait until the currently submitted jobs finish, at
  which point it will exit (with exit code 1)
- by sending a ``SIGTERM``, hbal will immediately exit (with exit code
  2); it is the responsibility of the user to follow up with Ganeti on
  the result of the currently-executing jobs

Note that in any situation, it's perfectly safe to kill hbal, either via
the above signals or via any other signal (e.g. ``SIGQUIT``,
``SIGKILL``), since the jobs themselves are processed by Ganeti whereas
hbal (after submission) only watches their progression. In this case,
the user will again have to query Ganeti for job results.

EXIT STATUS
-----------

The exit status of the command will be zero, unless for some reason
the algorithm failed (e.g. wrong node or instance data), the command
line options were invalid, or (in case of job execution) one of the
jobs has failed.

Once job execution via Luxi has started (``-X``), if the balancing was
interrupted early (via *SIGINT*, or via ``--max-length``) but all jobs
executed successfully, then the exit status is zero; a non-zero exit
code means that the cluster state should be investigated, since a job
failed or we couldn't compute its status, and this can also point to a
problem on the Ganeti side.

BUGS
----

The program does not check all its input data for consistency, and
sometimes aborts with cryptic error messages when given invalid data.

The algorithm is not perfect.

EXAMPLE
-------

Note that these examples are not for the latest version (they don't
have full node data).

Default output
~~~~~~~~~~~~~~

With the default options, the program shows each individual step and
the improvements it brings in cluster score::

    $ hbal
    Loaded 20 nodes, 80 instances
    Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
    Initial score: 0.52329131
    Trying to minimize the CV...
    1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
    2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
    3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
    4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
    5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
    6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
    7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
    8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
    9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
    10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
    11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
    12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
    13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
    14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
    15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
    Cluster score improved from 0.52329131 to 0.00252594

In the above output, we can see:

- the input data (here from files) shows a cluster with 20 nodes and
  80 instances
- the cluster is not initially N+1 compliant
- the initial score is 0.52329131

The step list follows, showing the instance, its initial
primary/secondary nodes, the new primary and secondary nodes, the new
cluster score, and the actions taken in this step (with 'f' denoting
failover/migrate and 'r' denoting replace secondary).

Finally, the program shows the improvement in cluster score.

A more detailed output is obtained via the *-C* and *-p* options::

    $ hbal
    Loaded 20 nodes, 80 instances
    Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
    Initial cluster status:
    N1 Name t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
    * node1 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
      node2 32762 31280 12000 1861 1026 0 8 0.95476 0.55179
    * node3 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
    * node4 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
    * node5 32762 1280 6000 1861 978 5 5 0.03907 0.52573
    * node6 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
    * node7 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
      node8 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
      node9 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
    * node10 32762 7280 12000 1861 1026 4 4 0.22221 0.55179
      node11 32762 7280 6000 1861 922 4 5 0.22221 0.49577
      node12 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
      node13 32762 7280 6000 1861 922 4 5 0.22221 0.49577
      node14 32762 7280 6000 1861 922 4 5 0.22221 0.49577
    * node15 32762 7280 12000 1861 1131 4 3 0.22221 0.60782
      node16 32762 31280 0 1861 1860 0 0 0.95476 1.00000
      node17 32762 7280 6000 1861 1106 5 3 0.22221 0.59479
    * node18 32762 1280 6000 1396 561 5 3 0.03907 0.40239
    * node19 32762 1280 6000 1861 1026 5 3 0.03907 0.55179
      node20 32762 13280 12000 1861 689 3 9 0.40535 0.37068

    Initial score: 0.52329131
    Trying to minimize the CV...
    1. instance14 node1:node10 => node16:node10 0.42109120 a=f r:node16 f
    2. instance54 node4:node15 => node16:node15 0.31904594 a=f r:node16 f
    3. instance4 node5:node2 => node2:node16 0.26611015 a=f r:node16
    4. instance48 node18:node20 => node2:node18 0.21361717 a=r:node2 f
    5. instance93 node19:node18 => node16:node19 0.16166425 a=r:node16 f
    6. instance89 node3:node20 => node2:node3 0.11005629 a=r:node2 f
    7. instance5 node6:node2 => node16:node6 0.05841589 a=r:node16 f
    8. instance94 node7:node20 => node20:node16 0.00658759 a=f r:node16
    9. instance44 node20:node2 => node2:node15 0.00438740 a=f r:node15
    10. instance62 node14:node18 => node14:node16 0.00390087 a=r:node16
    11. instance13 node11:node14 => node11:node16 0.00361787 a=r:node16
    12. instance19 node10:node11 => node10:node7 0.00336636 a=r:node7
    13. instance43 node12:node13 => node12:node1 0.00305681 a=r:node1
    14. instance1 node1:node2 => node1:node4 0.00263124 a=r:node4
    15. instance58 node19:node20 => node19:node17 0.00252594 a=r:node17
    Cluster score improved from 0.52329131 to 0.00252594

    Commands to run to reach the above solution:
    echo step 1
    echo gnt-instance migrate instance14
    echo gnt-instance replace-disks -n node16 instance14
    echo gnt-instance migrate instance14
    echo step 2
    echo gnt-instance migrate instance54
    echo gnt-instance replace-disks -n node16 instance54
    echo gnt-instance migrate instance54
    echo step 3
    echo gnt-instance migrate instance4
    echo gnt-instance replace-disks -n node16 instance4
    echo step 4
    echo gnt-instance replace-disks -n node2 instance48
    echo gnt-instance migrate instance48
    echo step 5
    echo gnt-instance replace-disks -n node16 instance93
    echo gnt-instance migrate instance93
    echo step 6
    echo gnt-instance replace-disks -n node2 instance89
    echo gnt-instance migrate instance89
    echo step 7
    echo gnt-instance replace-disks -n node16 instance5
    echo gnt-instance migrate instance5
    echo step 8
    echo gnt-instance migrate instance94
    echo gnt-instance replace-disks -n node16 instance94
    echo step 9
    echo gnt-instance migrate instance44
    echo gnt-instance replace-disks -n node15 instance44
    echo step 10
    echo gnt-instance replace-disks -n node16 instance62
    echo step 11
    echo gnt-instance replace-disks -n node16 instance13
    echo step 12
    echo gnt-instance replace-disks -n node7 instance19
    echo step 13
    echo gnt-instance replace-disks -n node1 instance43
    echo step 14
    echo gnt-instance replace-disks -n node4 instance1
    echo step 15
    echo gnt-instance replace-disks -n node17 instance58

    Final cluster status:
    N1 Name t_mem f_mem r_mem t_dsk f_dsk pri sec p_fmem p_fdsk
      node1 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
      node2 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
      node3 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
      node4 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
      node5 32762 7280 6000 1861 1078 4 5 0.22221 0.57947
      node6 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
      node7 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
      node8 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
      node9 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
      node10 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
      node11 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
      node12 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
      node13 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
      node14 32762 7280 6000 1861 1022 4 4 0.22221 0.54951
      node15 32762 7280 6000 1861 1031 4 4 0.22221 0.55408
      node16 32762 7280 6000 1861 1060 4 4 0.22221 0.57007
      node17 32762 7280 6000 1861 1006 5 4 0.22221 0.54105
      node18 32762 7280 6000 1396 761 4 2 0.22221 0.54570
      node19 32762 7280 6000 1861 1026 4 4 0.22221 0.55179
      node20 32762 13280 6000 1861 1089 3 5 0.40535 0.58565

597 |
Here we see, beside the step list, the initial and final cluster |
598 |
status, with the final one showing all nodes being N+1 compliant, and |
599 |
the command list to reach the final solution. In the initial listing, |
600 |
we see which nodes are not N+1 compliant. |
601 |
|
602 |
The algorithm is stable as long as each step above is fully completed, |
603 |
e.g. in step 8, both the migrate and the replace-disks are |
604 |
done. Otherwise, if only the migrate is done, the input data is |
605 |
changed in a way that the program will output a different solution |
606 |
list (but hopefully will end in the same state). |
.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: