HBAL(1) Ganeti | Version @GANETI_VERSION@
=========================================

NAME
----

hbal \- Cluster balancer for Ganeti

SYNOPSIS
--------

**hbal** {backend options...} [algorithm options...] [reporting options...]

**hbal** --version

Backend options:

{ **-m** *cluster* | **-L[** *path* **] [-X]** | **-t** *data-file* }

Algorithm options:

**[ --max-cpu *cpu-ratio* ]**
**[ --min-disk *disk-ratio* ]**
**[ -l *limit* ]**
**[ -e *score* ]**
**[ -g *delta* ]** **[ --min-gain-limit *threshold* ]**
**[ -O *name...* ]**
**[ --no-disk-moves ]**
**[ --no-instance-moves ]**
**[ -U *util-file* ]**
**[ --evac-mode ]**
**[ --select-instances *inst...* ]**
**[ --exclude-instances *inst...* ]**

Reporting options:

**[ -C[ *file* ] ]**
**[ -p[ *fields* ] ]**
**[ --print-instances ]**
**[ -o ]**
**[ -v... | -q ]**

DESCRIPTION
-----------

hbal is a cluster balancer that looks at the current state of the
cluster (nodes with their total and free disk, memory, etc.) and
instance placement and computes a series of steps designed to bring
the cluster into a better state.

The algorithm used is designed to be stable (i.e. it will give you the
same results when restarting it from the middle of the solution) and
reasonably fast. It is not, however, designed to be a perfect
algorithm: it is possible to make it go into a corner from which
it can find no improvement, because it looks only one "step" ahead.

By default, the program will show the solution incrementally as it is
computed, in a somewhat cryptic format; for getting the actual Ganeti
command list, use the **-C** option.
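
For example, a typical invocation against a live cluster might be (a
sketch; the individual options are described below)::

    hbal -L -C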

ALGORITHM
~~~~~~~~~

The program works in independent steps; at each step, we compute the
best instance move that lowers the cluster score.

The possible move types for an instance are combinations of
failover/migrate and replace-disks such that we change one of the
instance nodes, and the other one remains (but possibly with a changed
role, e.g. from primary it becomes secondary). The list is:

- failover (f)
- replace secondary (r)
- replace primary, a composite move (f, r, f)
- failover and replace secondary, also composite (f, r)
- replace secondary and failover, also composite (r, f)

We don't do the only remaining possibility of replacing both nodes
(r,f,r,f or the equivalent f,r,f,r) since this move needs an
exhaustive search over both candidate primary and secondary nodes, and
is O(n*n) in the number of nodes. Furthermore, it doesn't seem to
give better scores but would result in more disk replacements.
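
As an illustration, the *replace primary* move (f, r, f) for a
hypothetical instance *instance0* translates into the following
command sequence (using migrate rather than failover for a running
instance, as in the example section below)::

    gnt-instance migrate instance0                 # f: old secondary takes over
    gnt-instance replace-disks -n node3 instance0  # r: old primary replaced by node3
    gnt-instance migrate instance0                 # f: node3 becomes the new primary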

PLACEMENT RESTRICTIONS
~~~~~~~~~~~~~~~~~~~~~~

At each step, we prevent an instance move if it would cause:

- a node to go into N+1 failure state
- an instance to move onto an offline node (offline nodes are either
  read from the cluster or declared with *-O*)
- an exclusion-tag based conflict (exclusion tags are read from the
  cluster and/or defined via the *--exclusion-tags* option)
- the maximum vcpu/pcpu ratio to be exceeded (configured via *--max-cpu*)
- the minimum free disk percentage to go below the configured limit
  (configured via *--min-disk*)

CLUSTER SCORING
~~~~~~~~~~~~~~~

As said before, the algorithm tries to minimise the cluster score at
each step. Currently this score is computed as a sum of the following
components:

- standard deviation of the percent of free memory
- standard deviation of the percent of reserved memory
- standard deviation of the percent of free disk
- count of nodes failing N+1 check
- count of instances living (either as primary or secondary) on
  offline nodes
- count of instances living (as primary) on offline nodes; this
  differs from the above metric by helping failover of such instances
  in 2-node clusters
- standard deviation of the ratio of virtual-to-physical cpus (for
  primary instances of the node)
- standard deviation of the dynamic load on the nodes, for cpus,
  memory, disk and network

The free memory and free disk values help ensure that all nodes are
somewhat balanced in their resource usage. The reserved memory helps
to ensure that nodes are somewhat balanced in holding secondary
instances, and that no node keeps too much memory reserved for
N+1. And finally, the N+1 percentage helps guide the algorithm towards
eliminating N+1 failures, if possible.

Except for the N+1 failures and offline instances counts, we use the
standard deviation since, when used with values within a fixed range
(we use percents expressed as values between zero and one), it gives
consistent results across all metrics (there are some small issues
related to different means, but it works generally well). The 'count'
type values will have a higher score and thus matter more for
balancing; this makes them better suited for hard constraints (like
evacuating nodes and fixing N+1 failures). For example, the offline
instances count (i.e. the number of instances living on offline
nodes) will cause the algorithm to actively move instances away from
offline nodes. This, coupled with the restriction on placement given
by offline nodes, will cause evacuation of such nodes.
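
As a small worked illustration of one such component, consider the
free-memory percentages of a hypothetical three-node cluster, scored
(for this sketch) with the population form of the standard deviation,
values rounded::

    p_fmem: 0.04  0.95  0.22
    mean  = (0.04 + 0.95 + 0.22) / 3 = 0.403
    stdev = sqrt(((0.04 - 0.403)^2 + (0.95 - 0.403)^2
                  + (0.22 - 0.403)^2) / 3) = 0.393

Moving memory load from the second node towards the other two lowers
this value, and with it the cluster score.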

The dynamic load values need to be read from an external file (Ganeti
doesn't supply them), and are computed for each node as: sum of
primary instance cpu load, sum of primary instance memory load, sum of
primary and secondary instance disk load (as DRBD generates write load
on secondary nodes too in the normal case, and in degraded scenarios
also read load), and sum of primary instance network load. An example
of how to generate these values for input to hbal would be to track
``xm list`` for instances over a day, compute the delta of the cpu
values, and feed that via the *-U* option for all instances (keeping
the other metrics as one). For the algorithm to work, all that is
needed is that the values are consistent for a metric across all
instances (e.g. all instances use cpu% to report cpu usage, and not
something related to the number of CPU seconds used if the CPUs are
different), and that they are normalised to between zero and one. Note
that it's recommended not to have zero as the load value for any
instance metric, since then secondary instances are not well balanced.
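
A utilisation file produced this way (hypothetical instance names; the
exact format is described under the *-U* option below) could look
like::

    instance1 0.30 1 1 1
    instance2 0.64 1 1 1
    instance3 0.05 1 1 1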

On a perfectly balanced cluster (all nodes the same size, all
instances the same size and spread across the nodes equally), the
values for all metrics would be zero. This doesn't happen too often in
practice :)

OFFLINE INSTANCES
~~~~~~~~~~~~~~~~~

Since current Ganeti versions do not report the memory used by offline
(down) instances, ignoring the run status of instances will cause
wrong calculations. For this reason, the algorithm subtracts the
memory size of down instances from the free node memory of their
primary node, in effect simulating the startup of such instances.

EXCLUSION TAGS
~~~~~~~~~~~~~~

The exclusion tags mechanism is designed to prevent instances which
run the same workload (e.g. two DNS servers) from landing on the same
node, which would make the respective node a SPOF for the given
service.

It works by tagging instances with certain tags and then building
exclusion maps based on these. Which tags are actually used is
configured either via the command line (option *--exclusion-tags*)
or via adding them to the cluster tags:

--exclusion-tags=a,b
  This will make all instance tags of the form *a:\**, *b:\** be
  considered for the exclusion map

cluster tags *htools:iextags:a*, *htools:iextags:b*
  This will make instance tags *a:\**, *b:\** be considered for the
  exclusion map. More precisely, the suffix of cluster tags starting
  with *htools:iextags:* will become the prefix of the exclusion tags.

Both the above forms mean that two instances both having (e.g.) the
tag *a:foo* or *b:bar* won't end up on the same node.
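
For example, a sketch of the second form using Ganeti's tag commands
(the *dns* prefix and instance names are hypothetical)::

    # declare "dns" as an exclusion-tag prefix at the cluster level
    gnt-cluster add-tags htools:iextags:dns
    # give both DNS servers the same exclusion tag, so that hbal
    # will never place them on a common node
    gnt-instance add-tags dns1 dns:resolver
    gnt-instance add-tags dns2 dns:resolver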

OPTIONS
-------

The options that can be passed to the program are as follows:

-C, --print-commands
  Print the command list at the end of the run. Without this, the
  program will only show a shorter, but cryptic output.

  Note that the moves list will be split into independent steps,
  called "jobsets", but only for visual inspection, not for actual
  parallelisation. It is not possible to parallelise these directly
  when executed via "gnt-instance" commands, since a compound command
  (e.g. failover and replace-disks) must be executed
  serially. Parallel execution is only possible when using the Luxi
  backend and the *-L* option.

  The algorithm for splitting the moves into jobsets accumulates moves
  until the next move touches nodes already touched by the current
  moves; such a move can't be executed in parallel (due to resource
  allocation in Ganeti), so a new jobset is started.

-p, --print-nodes
  Prints the before and after node status, in a format designed to allow
  the user to understand the node's most important parameters. See the
  man page **htools**(1) for more details about this option.

--print-instances
  Prints the before and after instance map. This is less useful than
  the node status, but it can help in understanding instance moves.

-o, --oneline
  Only shows a one-line output from the program, designed for the case
  when one wants to look at multiple clusters at once and check their
  status.

  The line will contain four fields:

  - initial cluster score
  - number of steps in the solution
  - final cluster score
  - improvement in the cluster score

-O *name*
  This option (which can be given multiple times) will mark nodes as
  being *offline*. This means a couple of things:

  - instances won't be placed on these nodes, not even temporarily;
    e.g. the *replace primary* move is not available if the secondary
    node is offline, since this move requires a failover.
  - these nodes will not be included in the score calculation (except
    for the percentage of instances on offline nodes)

  Note that the algorithm will also mark as offline any nodes which
  are reported by RAPI as such, or that have "?" in file-based input
  in any numeric fields.

-e *score*, --min-score=*score*
  This parameter denotes the minimum score we are happy with and alters
  the computation in two ways:

  - if the initial cluster score is lower than this value, then we
    don't enter the algorithm at all, and exit with success
  - during the iterative process, if we reach a score lower than this
    value, we exit the algorithm

  The default value of the parameter is currently ``1e-9`` (chosen
  empirically).

-g *delta*, --min-gain=*delta*
  Since the balancing algorithm can sometimes result in just very tiny
  improvements, that bring less gain than they cost in relocation
  time, this parameter (defaulting to 0.01) represents the minimum
  gain we require during a step, to continue balancing.

--min-gain-limit=*threshold*
  The above min-gain option will only take effect if the cluster score
  is already below *threshold* (defaults to 0.1). The rationale behind
  this setting is that at high cluster scores (badly balanced
  clusters), we don't want to abort the rebalance too quickly, as
  later gains might still be significant. However, under the
  threshold, the total remaining gain is at most the threshold value,
  so we can exit early.

--no-disk-moves
  This parameter prevents hbal from using disk move
  (i.e. "gnt-instance replace-disks") operations. This will result in
  a much quicker balancing, but of course the improvements are
  limited. It is up to the user to decide when to use one or another.

--no-instance-moves
  This parameter prevents hbal from using instance moves
  (i.e. "gnt-instance migrate/failover") operations. This will only use
  the slow disk-replacement operations, and will also provide a worse
  balance, but can be useful if moving instances around is deemed unsafe
  or not preferred.

--evac-mode
  This parameter restricts the list of instances considered for moving
  to the ones living on offline/drained nodes. It can be used as a
  (bulk) replacement for Ganeti's own *gnt-node evacuate*, with the
  note that it doesn't guarantee full evacuation.
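
  For example, a sketch of evacuating a node by marking it offline for
  the computation (hypothetical node name; see also the *-O* and *-X*
  options)::

    hbal -L -O node5 --evac-mode -C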

--select-instances=*instances*
  This parameter marks the given instances (as a comma-separated list)
  as the only ones being moved during the rebalance.

--exclude-instances=*instances*
  This parameter excludes the given instances (as a comma-separated
  list) from being moved during the rebalance.
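
  For example, to rebalance while pinning two sensitive instances in
  place (hypothetical names)::

    hbal -L -C --exclude-instances=instance1,instance2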

-U *util-file*
  This parameter specifies a file holding instance dynamic utilisation
  information that will be used to tweak the balancing algorithm to
  equalise load on the nodes (as opposed to static resource
  usage). The file is in the format "instance_name cpu_util mem_util
  disk_util net_util" where the "_util" parameters are interpreted as
  numbers and the instance name must match exactly the instance as
  read from Ganeti. In case of unknown instance names, the program
  will abort.

  If not given, the default values are one for all metrics, and thus
  dynamic utilisation has only one effect on the algorithm: the
  equalisation of the secondary instances across nodes (this is the
  only metric that is not tracked by another, dedicated value, and
  thus the disk load of instances will cause secondary instance
  equalisation). Note that a value of one will also slightly influence
  the primary instance count, but that is already tracked via other
  metrics and thus the influence of the dynamic utilisation will be
  practically insignificant.
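
  A sketch of using such a file (hypothetical name; see also the
  example under *CLUSTER SCORING*)::

    hbal -L -C -U util.txt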

-S *filename*, --save-cluster=*filename*
  If given, the state of the cluster before the balancing is saved to
  the given file plus the extension "original"
  (i.e. *filename*.original), and the state at the end of the
  balancing is saved to the given file plus the extension "balanced"
  (i.e. *filename*.balanced). This allows re-feeding the cluster state
  to either hbal itself or for example hspace via the ``-t`` option.
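
  For example (hypothetical file name), one could save both states and
  then inspect the projected end state offline::

    hbal -L -C -S /tmp/cluster-state
    hbal -t /tmp/cluster-state.balanced -p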

-t *datafile*, --text-data=*datafile*
  Backend specification: the name of the file holding node and instance
  information (if not collecting via RAPI or LUXI). This or one of the
  other backends must be selected. The option is described in the man
  page **htools**(1).

-m *cluster*
  Backend specification: collect data directly from the *cluster* given
  as an argument via RAPI. The option is described in the man page
  **htools**(1).

-L [*path*]
  Backend specification: collect data directly from the master daemon,
  which is to be contacted via LUXI (an internal Ganeti protocol). The
  option is described in the man page **htools**(1).

-X
  When using the Luxi backend, hbal can also execute the given
  commands. The execution method is to execute the individual jobsets
  (see the *-C* option for details) in separate stages, aborting if at
  any time a jobset doesn't have all jobs successful. Each step in the
  balancing solution will be translated into exactly one Ganeti job
  (having between one and three OpCodes), and all the steps in a
  jobset will be executed in parallel. The jobsets themselves are
  executed serially.

  The execution of the job series can be interrupted; see below for
  signal handling.

-l *N*, --max-length=*N*
  Restrict the solution to this length. This can be used for example
  to automate the execution of the balancing.
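
  For example, a sketch that submits and executes at most three moves
  via the Luxi backend (see the *-X* option above)::

    hbal -L -X -l 3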

--max-cpu=*cpu-ratio*
  The maximum virtual to physical cpu ratio, as a floating point number
  greater than or equal to one. For example, specifying *cpu-ratio* as
  **2.5** means that, for a 4-cpu machine, a maximum of 10 virtual cpus
  should be allowed to be in use for primary instances. A value of
  exactly one means there will be no over-subscription of CPU (except
  for the CPU time used by the node itself), and values below one do not
  make sense, as that means other resources (e.g. disk) won't be fully
  utilised due to CPU restrictions.

--min-disk=*disk-ratio*
  The minimum amount of free disk space remaining, as a floating point
  number. For example, specifying *disk-ratio* as **0.25** means that
  at least one quarter of disk space should be left free on nodes.

-G *uuid*, --group=*uuid*
  On a multi-group cluster, select this group for
  processing. Otherwise hbal will abort, since it cannot balance
  multiple groups at the same time.

-v, --verbose
  Increase the output verbosity. Each usage of this option will
  increase the verbosity (currently more than 2 doesn't make sense)
  from the default of one.

-q, --quiet
  Decrease the output verbosity. Each usage of this option will
  decrease the verbosity (less than zero doesn't make sense) from the
  default of one.

-V, --version
  Just show the program version and exit.

SIGNAL HANDLING
---------------

When executing jobs via LUXI (using the ``-X`` option), normally hbal
will execute all jobs until either one errors out or all the jobs finish
successfully.

Since balancing can take a long time, it is possible to stop hbal early
in two ways:

- by sending a ``SIGINT`` (``^C``), hbal will register the termination
  request, and will wait until the currently submitted jobs finish, at
  which point it will exit (with exit code 1)
- by sending a ``SIGTERM``, hbal will immediately exit (with exit code
  2); it is the responsibility of the user to follow up with Ganeti on
  the result of the currently-executing jobs

Note that in any situation, it's perfectly safe to kill hbal, either via
the above signals or via any other signal (e.g. ``SIGQUIT``,
``SIGKILL``), since the jobs themselves are processed by Ganeti whereas
hbal (after submission) only watches their progression. In this case,
the user will again have to query Ganeti for job results.

EXIT STATUS
-----------

The exit status of the command will be zero, unless for some reason the
algorithm fatally failed (e.g. wrong node or instance data), or (in case
of job execution) either one of the jobs has failed or the balancing was
interrupted early.

BUGS
----

The program does not check all its input data for consistency, and
sometimes aborts with cryptic error messages when given invalid data.

The algorithm is not perfect.

EXAMPLE
-------

Note that these examples are not for the latest version (they don't
have full node data).

Default output
~~~~~~~~~~~~~~

With the default options, the program shows each individual step and
the improvements it brings in cluster score::

    $ hbal
    Loaded 20 nodes, 80 instances
    Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
    Initial score: 0.52329131
    Trying to minimize the CV...
        1. instance14  node1:node10  => node16:node10 0.42109120 a=f r:node16 f
        2. instance54  node4:node15  => node16:node15 0.31904594 a=f r:node16 f
        3. instance4   node5:node2   => node2:node16  0.26611015 a=f r:node16
        4. instance48  node18:node20 => node2:node18  0.21361717 a=r:node2 f
        5. instance93  node19:node18 => node16:node19 0.16166425 a=r:node16 f
        6. instance89  node3:node20  => node2:node3   0.11005629 a=r:node2 f
        7. instance5   node6:node2   => node16:node6  0.05841589 a=r:node16 f
        8. instance94  node7:node20  => node20:node16 0.00658759 a=f r:node16
        9. instance44  node20:node2  => node2:node15  0.00438740 a=f r:node15
       10. instance62  node14:node18 => node14:node16 0.00390087 a=r:node16
       11. instance13  node11:node14 => node11:node16 0.00361787 a=r:node16
       12. instance19  node10:node11 => node10:node7  0.00336636 a=r:node7
       13. instance43  node12:node13 => node12:node1  0.00305681 a=r:node1
       14. instance1   node1:node2   => node1:node4   0.00263124 a=r:node4
       15. instance58  node19:node20 => node19:node17 0.00252594 a=r:node17
    Cluster score improved from 0.52329131 to 0.00252594

In the above output, we can see:

- the input data (here from files) shows a cluster with 20 nodes and
  80 instances
- the cluster is not initially N+1 compliant
- the initial score is 0.52329131

The step list follows, showing the instance, its initial
primary/secondary nodes, the new primary/secondary pair, the new
cluster score, and the actions taken in this step (with 'f' denoting
failover/migrate and 'r' denoting replace secondary).

Finally, the program shows the improvement in cluster score.

A more detailed output is obtained via the *-C* and *-p* options::

    $ hbal
    Loaded 20 nodes, 80 instances
    Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
    Initial cluster status:
    N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec  p_fmem  p_fdsk
     * node1  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
       node2  32762 31280 12000  1861  1026   0   8 0.95476 0.55179
     * node3  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
     * node4  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
     * node5  32762  1280  6000  1861   978   5   5 0.03907 0.52573
     * node6  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
     * node7  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
       node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     * node10 32762  7280 12000  1861  1026   4   4 0.22221 0.55179
       node11 32762  7280  6000  1861   922   4   5 0.22221 0.49577
       node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node13 32762  7280  6000  1861   922   4   5 0.22221 0.49577
       node14 32762  7280  6000  1861   922   4   5 0.22221 0.49577
     * node15 32762  7280 12000  1861  1131   4   3 0.22221 0.60782
       node16 32762 31280     0  1861  1860   0   0 0.95476 1.00000
       node17 32762  7280  6000  1861  1106   5   3 0.22221 0.59479
     * node18 32762  1280  6000  1396   561   5   3 0.03907 0.40239
     * node19 32762  1280  6000  1861  1026   5   3 0.03907 0.55179
       node20 32762 13280 12000  1861   689   3   9 0.40535 0.37068

    Initial score: 0.52329131
    Trying to minimize the CV...
        1. instance14  node1:node10  => node16:node10 0.42109120 a=f r:node16 f
        2. instance54  node4:node15  => node16:node15 0.31904594 a=f r:node16 f
        3. instance4   node5:node2   => node2:node16  0.26611015 a=f r:node16
        4. instance48  node18:node20 => node2:node18  0.21361717 a=r:node2 f
        5. instance93  node19:node18 => node16:node19 0.16166425 a=r:node16 f
        6. instance89  node3:node20  => node2:node3   0.11005629 a=r:node2 f
        7. instance5   node6:node2   => node16:node6  0.05841589 a=r:node16 f
        8. instance94  node7:node20  => node20:node16 0.00658759 a=f r:node16
        9. instance44  node20:node2  => node2:node15  0.00438740 a=f r:node15
       10. instance62  node14:node18 => node14:node16 0.00390087 a=r:node16
       11. instance13  node11:node14 => node11:node16 0.00361787 a=r:node16
       12. instance19  node10:node11 => node10:node7  0.00336636 a=r:node7
       13. instance43  node12:node13 => node12:node1  0.00305681 a=r:node1
       14. instance1   node1:node2   => node1:node4   0.00263124 a=r:node4
       15. instance58  node19:node20 => node19:node17 0.00252594 a=r:node17
    Cluster score improved from 0.52329131 to 0.00252594

    Commands to run to reach the above solution:
      echo step 1
      echo gnt-instance migrate instance14
      echo gnt-instance replace-disks -n node16 instance14
      echo gnt-instance migrate instance14
      echo step 2
      echo gnt-instance migrate instance54
      echo gnt-instance replace-disks -n node16 instance54
      echo gnt-instance migrate instance54
      echo step 3
      echo gnt-instance migrate instance4
      echo gnt-instance replace-disks -n node16 instance4
      echo step 4
      echo gnt-instance replace-disks -n node2 instance48
      echo gnt-instance migrate instance48
      echo step 5
      echo gnt-instance replace-disks -n node16 instance93
      echo gnt-instance migrate instance93
      echo step 6
      echo gnt-instance replace-disks -n node2 instance89
      echo gnt-instance migrate instance89
      echo step 7
      echo gnt-instance replace-disks -n node16 instance5
      echo gnt-instance migrate instance5
      echo step 8
      echo gnt-instance migrate instance94
      echo gnt-instance replace-disks -n node16 instance94
      echo step 9
      echo gnt-instance migrate instance44
      echo gnt-instance replace-disks -n node15 instance44
      echo step 10
      echo gnt-instance replace-disks -n node16 instance62
      echo step 11
      echo gnt-instance replace-disks -n node16 instance13
      echo step 12
      echo gnt-instance replace-disks -n node7 instance19
      echo step 13
      echo gnt-instance replace-disks -n node1 instance43
      echo step 14
      echo gnt-instance replace-disks -n node4 instance1
      echo step 15
      echo gnt-instance replace-disks -n node17 instance58

    Final cluster status:
    N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec  p_fmem  p_fdsk
       node1  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node2  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node3  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node4  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node5  32762  7280  6000  1861  1078   4   5 0.22221 0.57947
       node6  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node7  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node10 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node11 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
       node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node13 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
       node14 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
       node15 32762  7280  6000  1861  1031   4   4 0.22221 0.55408
       node16 32762  7280  6000  1861  1060   4   4 0.22221 0.57007
       node17 32762  7280  6000  1861  1006   5   4 0.22221 0.54105
       node18 32762  7280  6000  1396   761   4   2 0.22221 0.54570
       node19 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node20 32762 13280  6000  1861  1089   3   5 0.40535 0.58565

Here we see, besides the step list, the initial and final cluster
status, with the final one showing all nodes being N+1 compliant, and
the command list to reach the final solution. In the initial listing,
we see which nodes are not N+1 compliant.

The algorithm is stable as long as each step above is fully completed,
e.g. in step 8, both the migrate and the replace-disks are
done. Otherwise, if only the migrate is done, the input data changes
in a way that makes the program output a different solution list (but
hopefully one that will end in the same state).

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: