Revision 5a19bd35
--- a/Makefile.am
+++ b/Makefile.am
@@ -447,13 +447,17 @@
 	man/gnt-instance.8 \
 	man/gnt-job.8 \
 	man/gnt-node.8 \
-	man/gnt-os.8
+	man/gnt-os.8 \
+	man/hail.1 \
+	man/hbal.1 \
+	man/hscan.1 \
+	man/hspace.1
 
-manrst = $(patsubst %.7,%.rst,$(patsubst %.8,%.rst,$(man_MANS)))
+manrst = $(patsubst %.1,%.rst,$(patsubst %.7,%.rst,$(patsubst %.8,%.rst,$(man_MANS))))
 manhtml = $(patsubst %.rst,%.html,$(manrst))
 mangen = $(patsubst %.rst,%.gen,$(manrst))
 maninput = \
-	$(patsubst %.7,%.7.in,$(patsubst %.8,%.8.in,$(man_MANS))) \
+	$(patsubst %.1,%.1.in,$(patsubst %.7,%.7.in,$(patsubst %.8,%.8.in,$(man_MANS)))) \
 	$(patsubst %.html,%.html.in,$(manhtml)) \
 	man/footer.man man/footer.html $(mangen)
 
@@ -635,7 +639,7 @@
 man/%.gen: man/%.rst lib/query.py lib/build/sphinx_ext.py
 	PYTHONPATH=. $(RUN_IN_TEMPDIR) $(CURDIR)/$(DOCPP) < $< > $@
 
-man/%.7.in man/%.8.in: man/%.gen man/footer.man
+man/%.7.in man/%.8.in man/%.1.in: man/%.gen man/footer.man
 	@test -n "$(PANDOC)" || \
 	  { echo 'pandoc' not found during configure; exit 1; }
 	set -o pipefail ; \
@@ -650,6 +654,9 @@
 	$(PANDOC) -s -f rst -t html -A man/footer.html $< | \
 	  sed -e 's/\\@/@/g' > $@
 
+man/%.1: man/%.1.in $(REPLACE_VARS_SED)
+	sed -f $(REPLACE_VARS_SED) < $< > $@
+
 man/%.7: man/%.7.in $(REPLACE_VARS_SED)
 	sed -f $(REPLACE_VARS_SED) < $< > $@
 
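For a single page, the chain added above behaves roughly as follows (a
sketch only: the `$(DOCPP)` preprocessor and `$(REPLACE_VARS_SED)` script
are defined elsewhere in the Makefile, and the pandoc invocation for the
man output is abbreviated here):

    # man/hbal.rst -> man/hbal.gen: run the documentation preprocessor
    PYTHONPATH=. $DOCPP < man/hbal.rst > man/hbal.gen
    # man/hbal.gen -> man/hbal.1.in: render to man format with pandoc
    pandoc -s -f rst -t man -A man/footer.man man/hbal.gen > man/hbal.1.in
    # man/hbal.1.in -> man/hbal.1: substitute configure-time variables
    sed -f $REPLACE_VARS_SED < man/hbal.1.in > man/hbal.1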
--- /dev/null
+++ b/man/hail.rst

HAIL(1) htools | Ganeti H-tools
===============================

NAME
----

hail - Ganeti IAllocator plugin

SYNOPSIS
--------

**hail** [ **-t** *datafile* | **--simulate** *spec* ] *input-file*

**hail** --version

DESCRIPTION
-----------
hail is a Ganeti IAllocator plugin that allows automatic instance
placement and automatic instance secondary node replacement using the
same algorithm as **hbal**(1).

The program takes input via a JSON file containing the current cluster
state and the request details, and outputs (on stdout) a JSON-formatted
response. In case of critical failures, the error message is printed
on stderr and the exit code is changed to show failure.
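For example, to exercise the plugin by hand on a saved request (the
file name here is illustrative; the request format itself is defined
by the Ganeti IAllocator protocol)::

    $ hail allocate-request.json > response.json
    $ echo $?
    0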
ALGORITHM
~~~~~~~~~

The program uses a simplified version of the hbal algorithm.

For relocations, we try to change the secondary node of the instance
to all the other valid nodes; the node which results in the best
cluster score is chosen.

For single-node allocations (non-mirrored instances), we again select
the node which, when chosen as the primary node, gives the best
score.

For dual-node allocations (mirrored instances), we choose the best
pair; this is the only choice where the algorithm is non-trivial
with regard to cluster size.

For node evacuations (*multi-evacuate* mode), we iterate over all
instances which live as secondaries on those nodes and try to relocate
them using the single-instance relocation algorithm.

In all cases, the cluster scoring is identical to the hbal algorithm.
OPTIONS
-------

The options that can be passed to the program are as follows:

-p, --print-nodes
  Prints the before and after node status, in a format designed to
  allow the user to understand the node's most important
  parameters. See the man page **hbal**(1) for more details about this
  field.

-t *datafile*, --text-data=*datafile*
  The name of the file holding cluster information, to override the
  data in the JSON request itself. This is mostly used for debugging.

--simulate *description*
  Similar to the **-t** option, this allows overriding the cluster
  data with a simulated cluster. For details about the description,
  see the man page **hspace**(1).
CONFIGURATION
-------------

For the tag-exclusion configuration (see the manpage of hbal for more
details), the list of which instance tags to consider as exclusion
tags will be read from the cluster tags, configured as follows:

- get all cluster tags starting with **htools:iextags:**
- use their suffix as the prefix for exclusion tags

For example, given a cluster tag like **htools:iextags:service**,
all instance tags of the form **service:X** will be considered as
exclusion tags, meaning that (e.g.) two instances which both have a
tag **service:foo** will not be placed on the same primary node.
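A minimal sketch of such a setup, using a hypothetical *service*
prefix and instance names (``gnt-cluster add-tags`` and
``gnt-instance add-tags`` are the standard Ganeti tagging commands)::

    $ gnt-cluster add-tags htools:iextags:service
    $ gnt-instance add-tags dns1.example.com service:dns
    $ gnt-instance add-tags dns2.example.com service:dns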
EXIT STATUS
-----------

The exit status of the command will be zero, unless for some reason
the algorithm fatally failed (e.g. wrong node or instance data).
SEE ALSO
--------

**hbal**(1), **hspace**(1), **hscan**(1), **ganeti**(7),
**gnt-instance**(8), **gnt-node**(8)

COPYRIGHT
---------

Copyright (C) 2009, 2010, 2011 Google Inc. Permission is granted to
copy, distribute and/or modify under the terms of the GNU General
Public License as published by the Free Software Foundation; either
version 2 of the License, or (at your option) any later version.

On Debian systems, the complete text of the GNU General Public License
can be found in /usr/share/common-licenses/GPL.
--- /dev/null
+++ b/man/hbal.rst

HBAL(1) htools | Ganeti H-tools
===============================

NAME
----

hbal - Cluster balancer for Ganeti

SYNOPSIS
--------

**hbal** {backend options...} [algorithm options...] [reporting options...]

**hbal** --version

Backend options:

{ **-m** *cluster* | **-L[** *path* **] [-X]** | **-t** *data-file* }

Algorithm options:

**[ --max-cpu *cpu-ratio* ]**
**[ --min-disk *disk-ratio* ]**
**[ -l *limit* ]**
**[ -e *score* ]**
**[ -g *delta* ]** **[ --min-gain-limit *threshold* ]**
**[ -O *name...* ]**
**[ --no-disk-moves ]**
**[ -U *util-file* ]**
**[ --evac-mode ]**
**[ --exclude-instances *inst...* ]**

Reporting options:

**[ -C[ *file* ] ]**
**[ -p[ *fields* ] ]**
**[ --print-instances ]**
**[ -o ]**
**[ -v... | -q ]**
44 |
----------- |
|
45 |
|
|
46 |
hbal is a cluster balancer that looks at the current state of the |
|
47 |
cluster (nodes with their total and free disk, memory, etc.) and |
|
48 |
instance placement and computes a series of steps designed to bring |
|
49 |
the cluster into a better state. |
|
50 |
|
|
51 |
The algorithm used is designed to be stable (i.e. it will give you the |
|
52 |
same results when restarting it from the middle of the solution) and |
|
53 |
reasonably fast. It is not, however, designed to be a perfect |
|
54 |
algorithm--it is possible to make it go into a corner from which |
|
55 |
it can find no improvement, because it looks only one "step" ahead. |
|
56 |
|
|
57 |
By default, the program will show the solution incrementally as it is |
|
58 |
computed, in a somewhat cryptic format; for getting the actual Ganeti |
|
59 |
command list, use the **-C** option. |
|
60 |
|
|
ALGORITHM
~~~~~~~~~

The program works in independent steps; at each step, we compute the
best instance move that lowers the cluster score.

The possible move types for an instance are combinations of
failover/migrate and replace-disks such that we change one of the
instance nodes, and the other one remains (but possibly with changed
role, e.g. from primary it becomes secondary). The list is:

- failover (f)
- replace secondary (r)
- replace primary, a composite move (f, r, f)
- failover and replace secondary, also composite (f, r)
- replace secondary and failover, also composite (r, f)

We don't do the only remaining possibility of replacing both nodes
(r,f,r,f or the equivalent f,r,f,r), since this move needs an
exhaustive search over both candidate primary and secondary nodes, and
is O(n*n) in the number of nodes. Furthermore, it doesn't seem to
give better scores but will result in more disk replacements.
PLACEMENT RESTRICTIONS
~~~~~~~~~~~~~~~~~~~~~~

At each step, we prevent an instance move if it would cause:

- a node to go into N+1 failure state
- an instance to move onto an offline node (offline nodes are either
  read from the cluster or declared with *-O*)
- an exclusion-tag based conflict (exclusion tags are read from the
  cluster and/or defined via the *--exclusion-tags* option)
- a max vcpu/pcpu ratio to be exceeded (configured via *--max-cpu*)
- min disk free percentage to go below the configured limit
  (configured via *--min-disk*)
CLUSTER SCORING
~~~~~~~~~~~~~~~

As said before, the algorithm tries to minimise the cluster score at
each step. Currently this score is computed as a sum of the following
components:

- standard deviation of the percent of free memory
- standard deviation of the percent of reserved memory
- standard deviation of the percent of free disk
- count of nodes failing N+1 check
- count of instances living (either as primary or secondary) on
  offline nodes
- count of instances living (as primary) on offline nodes; this
  differs from the above metric by helping failover of such instances
  in 2-node clusters
- standard deviation of the ratio of virtual-to-physical cpus (for
  primary instances of the node)
- standard deviation of the dynamic load on the nodes, for cpus,
  memory, disk and network

The free memory and free disk values help ensure that all nodes are
somewhat balanced in their resource usage. The reserved memory helps
ensure that nodes are somewhat balanced in holding secondary
instances, and that no node keeps too much memory reserved for
N+1. And finally, the N+1 percentage helps guide the algorithm towards
eliminating N+1 failures, if possible.

Except for the N+1 failures and offline instances counts, we use the
standard deviation since, when used with values within a fixed range
(we use percents expressed as values between zero and one), it gives
consistent results across all metrics (there are some small issues
related to different means, but it works generally well). The 'count'
type values will have a higher score and thus will matter more for
balancing; these are thus better for hard constraints (like evacuating
nodes and fixing N+1 failures). For example, the offline instances
count (i.e. the number of instances living on offline nodes) will
cause the algorithm to actively move instances away from offline
nodes. This, coupled with the restriction on placement given by
offline nodes, will cause evacuation of such nodes.

The dynamic load values need to be read from an external file (Ganeti
doesn't supply them), and are computed for each node as: sum of
primary instance cpu load, sum of primary instance memory load, sum of
primary and secondary instance disk load (as DRBD generates write load
on secondary nodes too in the normal case, and in degraded scenarios
also read load), and sum of primary instance network load. An example
of how to generate these values for input to hbal would be to track
``xm list`` for instances over a day, compute the delta of the cpu
values, and feed that via the *-U* option for all instances (keeping
the other metrics as one). For the algorithm to work, all that is
needed is that the values are consistent for a metric across all
instances (e.g. all instances use cpu% to report cpu usage, and not
something related to the number of CPU seconds used if the CPUs are
different), and that they are normalised to between zero and one. Note
that it's recommended to not have zero as the load value for any
instance metric, since then secondary instances are not well balanced.
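As a rough sketch of this idea (not a supported tool; Domain-0 would
have to be filtered out and the results clamped to the zero-one
range), one could snapshot the cumulative CPU time twice, a day apart,
and feed the normalised delta to hbal::

    $ xm list | awk 'NR>1 { print $1, $6 }' | sort > cpu.before
    # ... 24 hours later ...
    $ xm list | awk 'NR>1 { print $1, $6 }' | sort > cpu.after
    $ join cpu.before cpu.after | \
        awk '{ printf "%s %.5f 1 1 1\n", $1, ($3 - $2) / 86400 }' > dynu.txt
    $ hbal -L -U dynu.txt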
On a perfectly balanced cluster (all nodes the same size, all
instances the same size and spread across the nodes equally), the
values for all metrics would be zero. This doesn't happen too often in
practice :)
OFFLINE INSTANCES
~~~~~~~~~~~~~~~~~

Since current Ganeti versions do not report the memory used by offline
(down) instances, ignoring the run status of instances will cause
wrong calculations. For this reason, the algorithm subtracts the
memory size of down instances from the free node memory of their
primary node, in effect simulating the startup of such instances.
EXCLUSION TAGS
~~~~~~~~~~~~~~

The exclusion tags mechanism is designed to prevent instances which
run the same workload (e.g. two DNS servers) from landing on the same
node, which would make the respective node a SPOF for the given
service.

It works by tagging instances with certain tags and then building
exclusion maps based on these. Which tags are actually used is
configured either via the command line (option *--exclusion-tags*)
or via adding them to the cluster tags:

--exclusion-tags=a,b
  This will make all instance tags of the form *a:\**, *b:\** be
  considered for the exclusion map

cluster tags *htools:iextags:a*, *htools:iextags:b*
  This will make instance tags *a:\**, *b:\** be considered for the
  exclusion map. More precisely, the suffix of cluster tags starting
  with *htools:iextags:* will become the prefix of the exclusion tags.

Both of the above forms mean that two instances both having (e.g.) the
tag *a:foo* or *b:bar* won't end up on the same node.
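For example, a balancing run that treats the (hypothetical) tag
prefixes *service* and *dns* as exclusion tags, and prints the
resulting command list, could be invoked as::

    $ hbal -L --exclusion-tags=service,dns -C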
OPTIONS
-------

The options that can be passed to the program are as follows:

-C, --print-commands
  Print the command list at the end of the run. Without this, the
  program will only show a shorter, but cryptic output.

  Note that the moves list will be split into independent steps,
  called "jobsets", but only for visual inspection, not for actual
  parallelisation. It is not possible to parallelise these directly
  when executed via "gnt-instance" commands, since a compound command
  (e.g. failover and replace-disks) must be executed
  serially. Parallel execution is only possible when using the Luxi
  backend and the *-L* option.

  The algorithm for splitting the moves into jobsets is to accumulate
  moves until the next move touches nodes already touched by the
  current moves; this means the accumulated moves can't be executed in
  parallel (due to resource allocation in Ganeti), and thus a new
  jobset is started.
|
218 |
Prints the before and after node status, in a format designed to |
|
219 |
allow the user to understand the node's most important parameters. |
|
220 |
|
|
221 |
It is possible to customise the listed information by passing a |
|
222 |
comma-separated list of field names to this option (the field list |
|
223 |
is currently undocumented), or to extend the default field list by |
|
224 |
prefixing the additional field list with a plus sign. By default, |
|
225 |
the node list will contain the following information: |
|
226 |
|
|
227 |
F |
|
228 |
a character denoting the status of the node, with '-' meaning an |
|
229 |
offline node, '*' meaning N+1 failure and blank meaning a good |
|
230 |
node |
|
231 |
|
|
232 |
Name |
|
233 |
the node name |
|
234 |
|
|
235 |
t_mem |
|
236 |
the total node memory |
|
237 |
|
|
238 |
n_mem |
|
239 |
the memory used by the node itself |
|
240 |
|
|
241 |
i_mem |
|
242 |
the memory used by instances |
|
243 |
|
|
244 |
x_mem |
|
245 |
amount memory which seems to be in use but cannot be determined |
|
246 |
why or by which instance; usually this means that the hypervisor |
|
247 |
has some overhead or that there are other reporting errors |
|
248 |
|
|
249 |
f_mem |
|
250 |
the free node memory |
|
251 |
|
|
252 |
r_mem |
|
253 |
the reserved node memory, which is the amount of free memory |
|
254 |
needed for N+1 compliance |
|
255 |
|
|
256 |
t_dsk |
|
257 |
total disk |
|
258 |
|
|
259 |
f_dsk |
|
260 |
free disk |
|
261 |
|
|
262 |
pcpu |
|
263 |
the number of physical cpus on the node |
|
264 |
|
|
265 |
vcpu |
|
266 |
the number of virtual cpus allocated to primary instances |
|
267 |
|
|
268 |
pcnt |
|
269 |
number of primary instances |
|
270 |
|
|
271 |
scnt |
|
272 |
number of secondary instances |
|
273 |
|
|
274 |
p_fmem |
|
275 |
percent of free memory |
|
276 |
|
|
277 |
p_fdsk |
|
278 |
percent of free disk |
|
279 |
|
|
280 |
r_cpu |
|
281 |
ratio of virtual to physical cpus |
|
282 |
|
|
283 |
lCpu |
|
284 |
the dynamic CPU load (if the information is available) |
|
285 |
|
|
286 |
lMem |
|
287 |
the dynamic memory load (if the information is available) |
|
288 |
|
|
289 |
lDsk |
|
290 |
the dynamic disk load (if the information is available) |
|
291 |
|
|
292 |
lNet |
|
293 |
the dynamic net load (if the information is available) |
|
294 |
|
|
--print-instances
  Prints the before and after instance map. This is less useful than
  the node status, but it can help in understanding instance moves.

-o, --oneline
  Only shows a one-line output from the program, designed for the case
  when one wants to look at multiple clusters at once and check their
  status.

  The line will contain four fields:

  - initial cluster score
  - number of steps in the solution
  - final cluster score
  - improvement in the cluster score

-O *name*
  This option (which can be given multiple times) will mark nodes as
  being *offline*. This means a couple of things:

  - instances won't be placed on these nodes, not even temporarily;
    e.g. the *replace primary* move is not available if the secondary
    node is offline, since this move requires a failover.
  - these nodes will not be included in the score calculation (except
    for the percentage of instances on offline nodes)

  Note that the algorithm will also mark as offline any nodes which
  are reported by RAPI as such, or that have "?" in file-based input
  in any numeric fields.

-e *score*, --min-score=*score*
  This parameter denotes the minimum score we are happy with and alters
  the computation in two ways:

  - if the cluster's initial score is lower than this value, we
    don't enter the algorithm at all, and exit with success
  - during the iterative process, if we reach a score lower than this
    value, we exit the algorithm

  The default value of the parameter is currently ``1e-9`` (chosen
  empirically).

-g *delta*, --min-gain=*delta*
  Since the balancing algorithm can sometimes result in just very tiny
  improvements that bring less gain than they cost in relocation
  time, this parameter (defaulting to 0.01) represents the minimum
  gain we require during a step, to continue balancing.

--min-gain-limit=*threshold*
  The above min-gain option will only take effect if the cluster score
  is already below *threshold* (defaults to 0.1). The rationale behind
  this setting is that at high cluster scores (badly balanced
  clusters), we don't want to abort the rebalance too quickly, as
  later gains might still be significant. However, under the
  threshold, the total gain is only the threshold value, so we can
  exit early.

--no-disk-moves
  This parameter prevents hbal from using disk move
  (i.e. "gnt-instance replace-disks") operations. This will result in
  a much quicker balancing, but of course the improvements are
  limited. It is up to the user to decide when to use one or the
  other.

--evac-mode
  This parameter restricts the list of instances considered for moving
  to the ones living on offline/drained nodes. It can be used as a
  (bulk) replacement for Ganeti's own *gnt-node evacuate*, with the
  note that it doesn't guarantee full evacuation.

--exclude-instances=*instances*
  This parameter excludes the given instances (as a comma-separated
  list) from being moved during the rebalance.
-U *util-file*
  This parameter specifies a file holding instance dynamic utilisation
  information that will be used to tweak the balancing algorithm to
  equalise load on the nodes (as opposed to static resource
  usage). The file is in the format "instance_name cpu_util mem_util
  disk_util net_util" where the "_util" parameters are interpreted as
  numbers and the instance name must match exactly the instance as
  read from Ganeti. In case of unknown instance names, the program
  will abort.
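  For illustration, a file for two hypothetical instances, loading
  only the cpu metric and keeping the other metrics at one, could
  look like::

    instance1.example.com 0.35 1 1 1
    instance2.example.com 0.72 1 1 1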
  If not given, the default values are one for all metrics and thus
  dynamic utilisation has only one effect on the algorithm: the
  equalisation of the secondary instances across nodes (this is the
  only metric that is not tracked by another, dedicated value, and
  thus the disk load of instances will cause secondary instance
  equalisation). Note that a value of one will also slightly influence
  the primary instance count, but that is already tracked via other
  metrics and thus the influence of the dynamic utilisation will be
  practically insignificant.
-t *datafile*, --text-data=*datafile*
  The name of the file holding node and instance information (if not
  collecting via RAPI or LUXI). This or one of the other backends must
  be selected.

-S *filename*, --save-cluster=*filename*
  If given, the state of the cluster before the balancing is saved to
  the given file plus the extension "original"
  (i.e. *filename*.original), and the state at the end of the
  balancing is saved to the given file plus the extension "balanced"
  (i.e. *filename*.balanced). This allows re-feeding the cluster state
  to either hbal itself or for example hspace.

-m *cluster*
  Collect data directly from the *cluster* given as an argument via
  RAPI. If the argument doesn't contain a colon (:), then it is
  converted into a fully-built URL by prepending ``https://`` and
  appending the default RAPI port, otherwise it's considered a
  fully-specified URL and is used as-is.

-L [*path*]
  Collect data directly from the master daemon, which is to be
  contacted via luxi (an internal Ganeti protocol). An optional
  *path* argument is interpreted as the path to the unix socket on
  which the master daemon listens; otherwise, the default path used by
  Ganeti when installed with *--localstatedir=/var* is used.

-X
  When using the Luxi backend, hbal can also execute the given
  commands. The execution method is to execute the individual jobsets
  (see the *-C* option for details) in separate stages, aborting if at
  any time a jobset doesn't have all jobs successful. Each step in the
  balancing solution will be translated into exactly one Ganeti job
  (having between one and three OpCodes), and all the steps in a
  jobset will be executed in parallel. The jobsets themselves are
  executed serially.

-l *N*, --max-length=*N*
  Restrict the solution to this length. This can be used for example
  to automate the execution of the balancing.

--max-cpu=*cpu-ratio*
  The maximum virtual to physical cpu ratio, as a floating point
  number greater than or equal to one. For example, specifying
  *cpu-ratio* as **2.5** means that, for a 4-cpu machine, a maximum of
  10 virtual cpus should be allowed to be in use for primary
  instances. A value below one doesn't make sense, as that would mean
  the physical cpus could never be fully used.
--min-disk=*disk-ratio*
  The minimum amount of free disk space remaining, as a floating point
  number. For example, specifying *disk-ratio* as **0.25** means that
  at least one quarter of disk space should be left free on nodes.

-G *uuid*, --group=*uuid*
  On a multi-group cluster, select this group for
  processing. Otherwise hbal will abort, since it cannot balance
  multiple groups at the same time.

-v, --verbose
  Increase the output verbosity. Each usage of this option will
  increase the verbosity (currently more than 2 doesn't make sense)
  from the default of one.

-q, --quiet
  Decrease the output verbosity. Each usage of this option will
  decrease the verbosity (less than zero doesn't make sense) from the
  default of one.

-V, --version
  Just show the program version and exit.
EXIT STATUS
-----------

The exit status of the command will be zero, unless for some reason
the algorithm fatally failed (e.g. wrong node or instance data), or
(in case of job execution) any job has failed.

BUGS
----

The program does not check its input data for consistency, and aborts
with cryptic error messages in this case.

The algorithm is not perfect.

The output format is not easily scriptable, and the program should
feed moves directly into Ganeti (either via RAPI or via a gnt-debug
input file).
EXAMPLE
-------

Note that these examples are not for the latest version (they don't
have full node data).

Default output
~~~~~~~~~~~~~~

With the default options, the program shows each individual step and
the improvements it brings in cluster score::

    $ hbal
    Loaded 20 nodes, 80 instances
    Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
    Initial score: 0.52329131
    Trying to minimize the CV...
        1. instance14  node1:node10  => node16:node10 0.42109120 a=f r:node16 f
        2. instance54  node4:node15  => node16:node15 0.31904594 a=f r:node16 f
        3. instance4   node5:node2   => node2:node16  0.26611015 a=f r:node16
        4. instance48  node18:node20 => node2:node18  0.21361717 a=r:node2 f
        5. instance93  node19:node18 => node16:node19 0.16166425 a=r:node16 f
        6. instance89  node3:node20  => node2:node3   0.11005629 a=r:node2 f
        7. instance5   node6:node2   => node16:node6  0.05841589 a=r:node16 f
        8. instance94  node7:node20  => node20:node16 0.00658759 a=f r:node16
        9. instance44  node20:node2  => node2:node15  0.00438740 a=f r:node15
       10. instance62  node14:node18 => node14:node16 0.00390087 a=r:node16
       11. instance13  node11:node14 => node11:node16 0.00361787 a=r:node16
       12. instance19  node10:node11 => node10:node7  0.00336636 a=r:node7
       13. instance43  node12:node13 => node12:node1  0.00305681 a=r:node1
       14. instance1   node1:node2   => node1:node4   0.00263124 a=r:node4
       15. instance58  node19:node20 => node19:node17 0.00252594 a=r:node17
    Cluster score improved from 0.52329131 to 0.00252594

In the above output, we can see:

- the input data (here from files) shows a cluster with 20 nodes and
  80 instances
- the cluster is not initially N+1 compliant
- the initial score is 0.52329131

The step list follows, showing the instance, its initial
primary/secondary nodes, the new primary/secondary nodes, the
resulting cluster score, and the actions taken in this step (with 'f'
denoting failover/migrate and 'r' denoting replace secondary).

Finally, the program shows the improvement in cluster score.
A more detailed output is obtained via the *-C* and *-p* options::

    $ hbal
    Loaded 20 nodes, 80 instances
    Cluster is not N+1 happy, continuing but no guarantee that the cluster will end N+1 happy.
    Initial cluster status:
    N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec  p_fmem  p_fdsk
     * node1  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
       node2  32762 31280 12000  1861  1026   0   8 0.95476 0.55179
     * node3  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
     * node4  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
     * node5  32762  1280  6000  1861   978   5   5 0.03907 0.52573
     * node6  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
     * node7  32762  1280  6000  1861  1026   5   3 0.03907 0.55179
       node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
     * node10 32762  7280 12000  1861  1026   4   4 0.22221 0.55179
       node11 32762  7280  6000  1861   922   4   5 0.22221 0.49577
       node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node13 32762  7280  6000  1861   922   4   5 0.22221 0.49577
       node14 32762  7280  6000  1861   922   4   5 0.22221 0.49577
     * node15 32762  7280 12000  1861  1131   4   3 0.22221 0.60782
       node16 32762 31280     0  1861  1860   0   0 0.95476 1.00000
       node17 32762  7280  6000  1861  1106   5   3 0.22221 0.59479
     * node18 32762  1280  6000  1396   561   5   3 0.03907 0.40239
     * node19 32762  1280  6000  1861  1026   5   3 0.03907 0.55179
       node20 32762 13280 12000  1861   689   3   9 0.40535 0.37068

    Initial score: 0.52329131
    Trying to minimize the CV...
        1. instance14  node1:node10  => node16:node10 0.42109120 a=f r:node16 f
        2. instance54  node4:node15  => node16:node15 0.31904594 a=f r:node16 f
        3. instance4   node5:node2   => node2:node16  0.26611015 a=f r:node16
        4. instance48  node18:node20 => node2:node18  0.21361717 a=r:node2 f
        5. instance93  node19:node18 => node16:node19 0.16166425 a=r:node16 f
        6. instance89  node3:node20  => node2:node3   0.11005629 a=r:node2 f
        7. instance5   node6:node2   => node16:node6  0.05841589 a=r:node16 f
        8. instance94  node7:node20  => node20:node16 0.00658759 a=f r:node16
        9. instance44  node20:node2  => node2:node15  0.00438740 a=f r:node15
       10. instance62  node14:node18 => node14:node16 0.00390087 a=r:node16
       11. instance13  node11:node14 => node11:node16 0.00361787 a=r:node16
       12. instance19  node10:node11 => node10:node7  0.00336636 a=r:node7
       13. instance43  node12:node13 => node12:node1  0.00305681 a=r:node1
       14. instance1   node1:node2   => node1:node4   0.00263124 a=r:node4
       15. instance58  node19:node20 => node19:node17 0.00252594 a=r:node17
    Cluster score improved from 0.52329131 to 0.00252594

    Commands to run to reach the above solution:
      echo step 1
      echo gnt-instance migrate instance14
      echo gnt-instance replace-disks -n node16 instance14
      echo gnt-instance migrate instance14
      echo step 2
      echo gnt-instance migrate instance54
      echo gnt-instance replace-disks -n node16 instance54
      echo gnt-instance migrate instance54
      echo step 3
      echo gnt-instance migrate instance4
      echo gnt-instance replace-disks -n node16 instance4
      echo step 4
      echo gnt-instance replace-disks -n node2 instance48
      echo gnt-instance migrate instance48
      echo step 5
      echo gnt-instance replace-disks -n node16 instance93
      echo gnt-instance migrate instance93
      echo step 6
      echo gnt-instance replace-disks -n node2 instance89
      echo gnt-instance migrate instance89
      echo step 7
      echo gnt-instance replace-disks -n node16 instance5
      echo gnt-instance migrate instance5
      echo step 8
      echo gnt-instance migrate instance94
      echo gnt-instance replace-disks -n node16 instance94
      echo step 9
      echo gnt-instance migrate instance44
      echo gnt-instance replace-disks -n node15 instance44
      echo step 10
      echo gnt-instance replace-disks -n node16 instance62
      echo step 11
      echo gnt-instance replace-disks -n node16 instance13
      echo step 12
      echo gnt-instance replace-disks -n node7 instance19
      echo step 13
      echo gnt-instance replace-disks -n node1 instance43
      echo step 14
      echo gnt-instance replace-disks -n node4 instance1
      echo step 15
      echo gnt-instance replace-disks -n node17 instance58

    Final cluster status:
    N1 Name   t_mem f_mem r_mem t_dsk f_dsk pri sec  p_fmem  p_fdsk
       node1  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node2  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node3  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node4  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node5  32762  7280  6000  1861  1078   4   5 0.22221 0.57947
       node6  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node7  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node8  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node9  32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node10 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node11 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
       node12 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node13 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
       node14 32762  7280  6000  1861  1022   4   4 0.22221 0.54951
       node15 32762  7280  6000  1861  1031   4   4 0.22221 0.55408
       node16 32762  7280  6000  1861  1060   4   4 0.22221 0.57007
       node17 32762  7280  6000  1861  1006   5   4 0.22221 0.54105
       node18 32762  7280  6000  1396   761   4   2 0.22221 0.54570
       node19 32762  7280  6000  1861  1026   4   4 0.22221 0.55179
       node20 32762 13280  6000  1861  1089   3   5 0.40535 0.58565

Here we see, beside the step list, the initial and final cluster
status, with the final one showing all nodes being N+1 compliant, and
the command list to reach the final solution. In the initial listing,
we see which nodes are not N+1 compliant.

The algorithm is stable as long as each step above is fully completed,
e.g. in step 8, both the migrate and the replace-disks are
done. Otherwise, if only the migrate is done, the input data is
changed in a way that the program will output a different solution
list (but hopefully will end in the same state).
SEE ALSO
--------

**hspace**(1), **hscan**(1), **hail**(1), **ganeti**(7),
**gnt-instance**(8), **gnt-node**(8)

COPYRIGHT
---------

Copyright (C) 2009, 2010, 2011 Google Inc. Permission is granted to
copy, distribute and/or modify under the terms of the GNU General
Public License as published by the Free Software Foundation; either
version 2 of the License, or (at your option) any later version.

On Debian systems, the complete text of the GNU General Public License
can be found in /usr/share/common-licenses/GPL.
---|---|---|
1 |
HSCAN(1) htools | Ganeti H-tools |
|
2 |
================================ |
|
3 |
|
|
4 |
NAME |
|
5 |
---- |
|
6 |
|
|
7 |
hscan - Scan clusters via RAPI and save node/instance data |
|
8 |
|
|
9 |
SYNOPSIS |
|
10 |
-------- |
|
11 |
|
|
12 |
**hscan** [-p] [--no-headers] [-d *path* ] *cluster...* |
|
13 |
|
|
14 |
**hscan** --version |
|
15 |
|
|
DESCRIPTION
-----------

hscan is a tool for scanning clusters via RAPI and saving their data
in the input format used by **hbal**(1) and **hspace**(1). It will
also show a one-line score for each cluster scanned or, if desired,
the cluster state as shown by the **-p** option to the other tools.

For each cluster, one file named *cluster***.data** will be generated
holding the node and instance data. This file can then be used in
**hbal**(1) or **hspace**(1) via the *-t* option. In case the
cluster name contains slashes (as can happen when the cluster is a
fully-specified URL), these will be replaced with underscores.

The one-line output for each cluster will show the following:

Name
  The name of the cluster (or the IP address that was given, etc.)

Nodes
  The number of nodes in the cluster

Inst
  The number of instances in the cluster

BNode
  The number of nodes failing N+1

BInst
  The number of instances living on N+1-failed nodes

t_mem
  Total memory in the cluster

f_mem
  Free memory in the cluster

t_disk
  Total disk in the cluster

f_disk
  Free disk space in the cluster

Score
  The score of the cluster, as would be reported by **hbal**(1) if run
  on the generated data files.

In case of errors while collecting data, all fields after the name of
the cluster are replaced with the error display.

**Note:** this output format is not yet final, so it should not be
used for scripting yet.
|
70 |
------- |
|
71 |
|
|
72 |
The options that can be passed to the program are as follows: |
|
73 |
|
|
74 |
-p, --print-nodes |
|
75 |
Prints the node status for each cluster after the cluster's one-line |
|
76 |
status display, in a format designed to allow the user to understand |
|
77 |
the node's most important parameters. For details, see the man page |
|
78 |
for **hbal**(1). |
|
79 |
|
|
80 |
-d *path* |
|
81 |
Save the node and instance data for each cluster under *path*, |
|
82 |
instead of the current directory. |
|
83 |
|
|
84 |
-V, --version |
|
85 |
Just show the program version and exit. |
|
86 |
|
|
87 |
EXIT STATUS |
|
88 |
----------- |
|
89 |
|
|
90 |
The exist status of the command will be zero, unless for some reason |
|
91 |
loading the input data failed fatally (e.g. wrong node or instance |
|
92 |
data). |
|
93 |
|
|
94 |
BUGS |
|
95 |
---- |
|
96 |
|
|
97 |
The program does not check its input data for consistency, and aborts |
|
98 |
with cryptic errors messages in this case. |
|
99 |
|
|
EXAMPLE
-------

::

    $ hscan cluster1
    Name     Nodes Inst BNode BInst t_mem f_mem t_disk f_disk      Score
    cluster1     2    2     0     0  1008   652    255    253 0.24404762
    $ ls -l cluster1.data
    -rw-r--r-- 1 root root 364 2009-03-23 07:26 cluster1.data
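The generated data file can then be fed directly to the other htools
programs, for example::

    $ hbal -t cluster1.data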
SEE ALSO
--------

**hbal**(1), **hspace**(1), **hail**(1), **ganeti**(7),
**gnt-instance**(8), **gnt-node**(8)

COPYRIGHT
---------

Copyright (C) 2009, 2010, 2011 Google Inc. Permission is granted to
copy, distribute and/or modify under the terms of the GNU General
Public License as published by the Free Software Foundation; either
version 2 of the License, or (at your option) any later version.

On Debian systems, the complete text of the GNU General Public License
can be found in /usr/share/common-licenses/GPL.
--- /dev/null
+++ b/man/hspace.rst

HSPACE(1) htools | Ganeti H-tools
=================================

NAME
----

hspace - Cluster space analyzer for Ganeti

SYNOPSIS
--------

**hspace** {backend options...} [algorithm options...] [request options...]
[ -p [*fields*] ] [-v... | -q]

**hspace** --version

Backend options:

{ **-m** *cluster* | **-L[** *path* **] [-X]** | **-t** *data-file* |
**--simulate** *spec* }

Algorithm options:

**[ --max-cpu *cpu-ratio* ]**
**[ --min-disk *disk-ratio* ]**
**[ -O *name...* ]**

Request options:

**[--memory** *mem* **]**
**[--disk** *disk* **]**
**[--req-nodes** *req-nodes* **]**
**[--vcpus** *vcpus* **]**
**[--tiered-alloc** *spec* **]**
DESCRIPTION
-----------

hspace computes how many additional instances can be fit on a cluster,
while maintaining N+1 status.

The program will try to place instances, all of the same size, on the
cluster, until the point where we don't have any N+1 possible
allocation. It uses the exact same allocation algorithm as the hail
iallocator plugin.

The output of the program is designed to be interpreted as a shell
fragment (or parsed as a *key=value* file). Options which extend the
output (e.g. -p, -v) will output the additional information on stderr
(such that the stdout is still parseable).
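For example, since the stdout output is a valid shell fragment, a
wrapper script can source it and inspect the keys directly (the data
file name here is illustrative; the keys are described below)::

    $ hspace -t cluster1.data > hspace.out
    $ . ./hspace.out
    $ test "$HTS_OK" = "1" && echo "can allocate $HTS_ALLOC_COUNT instances"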
The following keys are available in the output of the script (all
prefixed with *HTS_*):

SPEC_MEM, SPEC_DSK, SPEC_CPU, SPEC_RQN
  These represent the specifications of the instance model used for
  allocation (the memory, disk, cpu, requested nodes).

CLUSTER_MEM, CLUSTER_DSK, CLUSTER_CPU, CLUSTER_NODES
  These represent the total memory, disk, CPU count and total nodes in
  the cluster.

INI_SCORE, FIN_SCORE
  These are the initial (current) and final cluster score (see the hbal
  man page for details about the scoring algorithm).

INI_INST_CNT, FIN_INST_CNT
  The initial and final instance count.

INI_MEM_FREE, FIN_MEM_FREE
  The initial and final total free memory in the cluster (but this
  doesn't necessarily mean available for use).

INI_MEM_AVAIL, FIN_MEM_AVAIL
  The initial and final total available memory for allocation in the
  cluster. If allocating redundant instances, new instances could
  increase the reserved memory, so it doesn't necessarily mean the
  entirety of this memory can be used for new instance allocations.

INI_MEM_RESVD, FIN_MEM_RESVD
  The initial and final reserved memory (for redundancy/N+1 purposes).

INI_MEM_INST, FIN_MEM_INST
  The initial and final memory used for instances (actual runtime used
  RAM).

INI_MEM_OVERHEAD, FIN_MEM_OVERHEAD
  The initial and final memory overhead, i.e. memory used for the node
  itself and unaccounted memory (e.g. due to hypervisor overhead).

INI_MEM_EFF, FIN_MEM_EFF
  The initial and final memory efficiency, represented as instance
  memory divided by total memory.

INI_DSK_FREE, INI_DSK_AVAIL, INI_DSK_RESVD, INI_DSK_INST, INI_DSK_EFF
  Initial disk stats, similar to the memory ones.

FIN_DSK_FREE, FIN_DSK_AVAIL, FIN_DSK_RESVD, FIN_DSK_INST, FIN_DSK_EFF
  Final disk stats, similar to the memory ones.

INI_CPU_INST, FIN_CPU_INST
  Initial and final number of virtual CPUs used by instances.

INI_CPU_EFF, FIN_CPU_EFF
  The initial and final CPU efficiency, represented as the count of
  virtual instance CPUs divided by the total physical CPU count.

INI_MNODE_MEM_AVAIL, FIN_MNODE_MEM_AVAIL
  The initial and final maximum per-node available memory. This is not
  very useful as a metric but can give an impression of the status of
  the nodes; as an example, this value restricts the maximum instance
  size that can still be created on the cluster.

INI_MNODE_DSK_AVAIL, FIN_MNODE_DSK_AVAIL
  Like the above but for disk.

TSPEC
  If the tiered allocation mode has been enabled, this parameter holds
  the pairs of specifications and counts of instances that can be
  created in this mode. The value of the key is a space-separated list
  of values; each value is of the form *memory,disk,vcpu=count* where
  the memory, disk and vcpu are the values for the current spec, and
  count is how many instances of this spec can be created. A complete
  value for this variable could be: **4096,102400,2=225
  2560,102400,2=20 512,102400,2=21**.

KM_USED_CPU, KM_USED_NPU, KM_USED_MEM, KM_USED_DSK
  These represent the metrics of used resources at the start of the
  computation (only for tiered allocation mode). The NPU value is the
  "normalized" CPU count, i.e. the number of virtual CPUs divided by
  the maximum ratio of virtual to physical CPUs.

KM_POOL_CPU, KM_POOL_NPU, KM_POOL_MEM, KM_POOL_DSK
  These represent the total resources allocated during the tiered
  allocation process. In effect, they represent how much is readily
  available for allocation.

KM_UNAV_CPU, KM_UNAV_NPU, KM_UNAV_MEM, KM_UNAV_DSK
  These represent the resources left over (either free as in
  unallocable, or allocable on their own) after the tiered allocation
  has been completed. They better represent the actually unallocable
  resources, where allocation stops because some other resource has
  been exhausted. For example, the cluster might still have 100GiB
  disk free, but with no memory left for instances we cannot allocate
  another instance, so in effect the disk space is unallocable. Note
  that the CPUs here represent instance virtual CPUs, and in case the
  *--max-cpu* option hasn't been specified this will be -1.

ALLOC_USAGE
  The current usage, represented as the initial number of instances
  divided by the final number of instances.

ALLOC_COUNT
  The number of instances allocated (delta between FIN_INST_CNT and
  INI_INST_CNT).

ALLOC_FAIL*_CNT
  For the last allocation attempt (which would have increased
  FIN_INST_CNT by one, had it succeeded), this is the count of the
  failure reasons per failure type; currently defined are FAILMEM,
  FAILDISK and FAILCPU, which represent errors due to not enough
  memory, disk and CPUs, and FAILN1, which represents a non N+1
  compliant cluster on which we can't allocate instances at all.

ALLOC_FAIL_REASON
  The reason for most of the failures, being one of the above FAIL*
  strings.

OK
  A marker representing the successful end of the computation, and
  having value "1". If this key is not present in the output, it means
  that the computation failed and any values present should not be
  relied upon.

If the tiered allocation mode is enabled, then many of the INI_/FIN_
metrics will also be displayed with a TRL_ prefix, denoting the
cluster status at the end of the tiered allocation run.
OPTIONS |
|
184 |
------- |
|
185 |
|
|
186 |
The options that can be passed to the program are as follows: |
|
187 |
|
|
188 |
--memory *mem* |
|
189 |
The memory size of the instances to be placed (defaults to 4GiB). |
|
190 |
|
|
191 |
--disk *disk* |
|
192 |
The disk size of the instances to be placed (defaults to 100GiB). |
|
193 |
|
|
194 |
--req-nodes *num-nodes* |
|
195 |
The number of nodes for the instances; the default of two means |
|
196 |
mirrored instances, while passing one means plain type instances. |
|
197 |
|
|
198 |
--vcpus *vcpus* |
|
199 |
The number of VCPUs of the instances to be placed (defaults to 1). |
|
200 |
|
|
--max-cpu=*cpu-ratio*
  The maximum virtual to physical cpu ratio, as a floating point
  number greater than or equal to one. For example, specifying
  *cpu-ratio* as **2.5** means that, for a 4-cpu machine, a maximum of
  10 virtual cpus should be allowed to be in use for primary
  instances. A value below one doesn't make sense, as that would mean
  the physical cpus could never be fully used.

--min-disk=*disk-ratio*
  The minimum amount of free disk space remaining, as a floating point
  number. For example, specifying *disk-ratio* as **0.25** means that
  at least one quarter of disk space should be left free on nodes.
|
215 |
Prints the before and after node status, in a format designed to |
|
216 |
allow the user to understand the node's most important parameters. |
|
217 |
|
|
218 |
It is possible to customise the listed information by passing a |
|
219 |
comma-separated list of field names to this option (the field list |
|
220 |
is currently undocumented), or to extend the default field list by |
|
221 |
prefixing the additional field list with a plus sign. By default, |
|
222 |
the node list will contain the following information: |
|
223 |
|
|
224 |
F |
|
225 |
a character denoting the status of the node, with '-' meaning an |
|
226 |
offline node, '*' meaning N+1 failure and blank meaning a good |
|
227 |
node |
|
228 |
|
|
229 |
Name |
|
230 |
the node name |
|
231 |
|
|
232 |
t_mem |
|
233 |
the total node memory |
|
234 |
|
|
235 |
n_mem |
|
236 |
the memory used by the node itself |
|
237 |
|
|
238 |
i_mem |
|
239 |
the memory used by instances |
|
240 |
|
|
241 |
x_mem |
|
242 |
amount memory which seems to be in use but cannot be determined |
|
243 |
why or by which instance; usually this means that the hypervisor |
|
244 |
has some overhead or that there are other reporting errors |
|
245 |
|
|
246 |
f_mem |
|
247 |
the free node memory |
|
248 |
|
|
249 |
r_mem |
|
250 |
the reserved node memory, which is the amount of free memory |
|
251 |
needed for N+1 compliance |
|
252 |
|
|
253 |
t_dsk |
|
254 |
total disk |
|
255 |
|
|
256 |
f_dsk |
|
257 |
free disk |
|
258 |
|
|
259 |
pcpu |
|
260 |
the number of physical cpus on the node |
|
261 |
|
|
262 |
vcpu |
|
263 |
the number of virtual cpus allocated to primary instances |
|
264 |
|
|
265 |
pcnt |
|
266 |
number of primary instances |
|
267 |
|
|
268 |
scnt |
|
269 |
number of secondary instances |
|
270 |
|
|
271 |
p_fmem |
|
272 |
percent of free memory |
|
273 |
|
|
274 |
p_fdsk |
|
275 |
percent of free disk |
|
276 |
|
|
277 |
r_cpu |
|
278 |
ratio of virtual to physical cpus |
|
279 |
|
|
280 |
lCpu |
|
281 |
the dynamic CPU load (if the information is available) |
|
282 |
|
|
283 |
lMem |
|
284 |
the dynamic memory load (if the information is available) |
|
285 |
|
|
286 |
lDsk |
|
287 |
the dynamic disk load (if the information is available) |
|
288 |
|
|
289 |
lNet |
|
290 |
the dynamic net load (if the information is available) |
|
291 |
|
|
292 |
-O *name* |
|
293 |
This option (which can be given multiple times) will mark nodes as |
|
294 |
being *offline*. This means a couple of things: |
|
295 |
|
|
296 |
- instances won't be placed on these nodes, not even temporarily; |
|
297 |
e.g. the *replace primary* move is not available if the secondary |
|
298 |
node is offline, since this move requires a failover. |
|
299 |
- these nodes will not be included in the score calculation (except |
|
300 |
for the percentage of instances on offline nodes) |
|
301 |
|
|
302 |
Note that the algorithm will also mark as offline any nodes which |
|
303 |
are reported by RAPI as such, or that have "?" in file-based input |
|
304 |
in any numeric fields. |
|
305 |
|
|
-t *datafile*, --text-data=*datafile*
  The name of the file holding node and instance information (if not
  collecting via RAPI or LUXI). This or one of the other backends must
  be selected.

-S *filename*, --save-cluster=*filename*
  If given, the state of the cluster at the end of the allocation is
  saved to a file named *filename.alloc*, and if tiered allocation is
  enabled, the state after tiered allocation will be saved to
  *filename.tiered*. This allows re-feeding the cluster state to
  either hspace itself (with different parameters) or for example
  hbal.

-m *cluster*
  Collect data directly from the *cluster* given as an argument via
  RAPI. If the argument doesn't contain a colon (:), then it is
  converted into a fully-built URL by prepending ``https://`` and
  appending the default RAPI port, otherwise it's considered a
  fully-specified URL and is used as-is.

-L [*path*]
  Collect data directly from the master daemon, which is to be
  contacted via luxi (an internal Ganeti protocol). An optional
  *path* argument is interpreted as the path to the unix socket on
  which the master daemon listens; otherwise, the default path used by
  Ganeti when installed with *--localstatedir=/var* is used.
333 |
--simulate *description* |
|
334 |
Instead of using actual data, build an empty cluster given a node |
|
335 |
description. The *description* parameter must be a comma-separated |
|
336 |
list of five elements, describing in order: |
|
337 |
|
|
338 |
- the allocation policy for this node group |
|
339 |
- the number of nodes in the cluster |
|
340 |
- the disk size of the nodes, in mebibytes |
|
341 |
- the memory size of the nodes, in mebibytes |
|
342 |
- the cpu core count for the nodes |
|
343 |
|
|
344 |
An example description would be **preferred,20,102400,16384,4** |
|
345 |
describing a 20-node cluster where each node has 100GiB of disk |
|
346 |
space, 16GiB of memory and 4 CPU cores. Note that all nodes must |
|
347 |
have the same specs currently. |
|
348 |
|
|
349 |
This option can be given multiple times, and each new use defines a |
|
350 |
new node group. Hence different node groups can have different |
|
351 |
allocation policies and node count/specifications. |
|
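For example, simulating the 20-node group described above plus a
second, hypothetical group with a different allocation policy and
smaller nodes::

    hspace --simulate=preferred,20,102400,16384,4 \
      --simulate=last_resort,5,51200,8192,2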
352 |
|
|
353 |
--tiered-alloc *spec* |
|
354 |
Besides the standard, fixed-size allocation, also do a tiered |
|
355 |
allocation scheme where the algorithm starts from the given |
|
356 |
specification and allocates until there is no more space; then it |
|
357 |
decreases the specification and tries the allocation again. The |
|
358 |
decrease is done on the metric that last failed during |
|
359 |
allocation. The specification given is similar to the *--simulate* |
|
360 |
option and it holds: |
|
361 |
|
|
362 |
- the disk size of the instance |
|
363 |
- the memory size of the instance |
|
364 |
the vcpu count for the instance |
|
365 |
|
|
366 |
An example description would be *10240,8192,2* describing an initial |
|
367 |
starting specification of 10GiB of disk space, 8GiB of memory and 2 |
|
368 |
VCPUs. |
|
369 |
|
|
370 |
Also note that the normal allocation and the tiered allocation are |
|
371 |
independent, and both start from the initial cluster state; as such, |
|
372 |
the instance counts for these two modes are not related to one |
|
373 |
another. |
|
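For example, running tiered allocation on a simulated cluster,
starting from the specification discussed above::

    hspace --simulate=preferred,20,102400,16384,4 --tiered-alloc=10240,8192,2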
374 |
|
|
375 |
-v, --verbose |
|
376 |
Increase the output verbosity. Each use of this option will |
|
377 |
increase the verbosity (values above two currently have no effect) |
|
378 |
from the default of one. |
|
379 |
|
|
380 |
-q, --quiet |
|
381 |
Decrease the output verbosity. Each use of this option will |
|
382 |
decrease the verbosity (values below zero have no effect) from the |
|
383 |
default of one. |
|
384 |
|
|
385 |
-V, --version |
|
386 |
Just show the program version and exit. |
|
387 |
|
|
388 |
EXIT STATUS |
|
389 |
----------- |
|
390 |
|
|
391 |
The exit status of the command will be zero, unless for some reason |
|
392 |
the algorithm fatally failed (e.g. wrong node or instance data). |
|
393 |
|
|
394 |
BUGS |
|
395 |
---- |
|
396 |
|
|
397 |
The algorithm is highly dependent on the number of nodes; its runtime |
|
398 |
grows exponentially with this number, and as such is impractical for |
|
399 |
very large clusters. |
|
400 |
|
|
401 |
The algorithm doesn't rebalance the cluster or try to get the optimal |
|
402 |
fit; it just allocates in the best place for the current step, without |
|
403 |
taking into consideration the impact on future placements. |
|
404 |
|
|
405 |
SEE ALSO |
|
406 |
-------- |
|
407 |
|
|
408 |
**hbal**(1), **hscan**(1), **hail**(1), **ganeti**(7), |
|
409 |
**gnt-instance**(8), **gnt-node**(8) |
|
410 |
|
|
411 |
COPYRIGHT |
|
412 |
--------- |
|
413 |
|
|
414 |
Copyright (C) 2009, 2010, 2011 Google Inc. Permission is granted to |
|
415 |
copy, distribute and/or modify under the terms of the GNU General |
|
416 |
Public License as published by the Free Software Foundation; either |
|
417 |
version 2 of the License, or (at your option) any later version. |
|
418 |
|
|
419 |
On Debian systems, the complete text of the GNU General Public License |
|
420 |
can be found in /usr/share/common-licenses/GPL. |
b/man/footer.rst | ||
---|---|---|
23 | 23 |
daemon), **ganeti-masterd**(8) (master daemon), **ganeti-rapi**(8) |
24 | 24 |
(remote API daemon). |
25 | 25 |
|
26 |
Ganeti htools: **hbal**(1) (cluster balancer), **hspace**(1) (capacity |
|
27 |
calculation), **hail**(1) (IAllocator plugin), **hscan**(1) (data |
|
28 |
gatherer from remote clusters). |
|
29 |
|
|
26 | 30 |
COPYRIGHT |
27 | 31 |
--------- |
28 | 32 |
|
29 |
Copyright (C) 2006, 2007, 2008, 2009, 2010 Google Inc. Permission
|
|
30 |
is granted to copy, distribute and/or modify under the terms of the
|
|
31 |
GNU General Public License as published by the Free Software
|
|
32 |
Foundation; either version 2 of the License, or (at your option)
|
|
33 |
any later version. |
|
33 |
Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 Google
|
|
34 |
Inc. Permission is granted to copy, distribute and/or modify under the
|
|
35 |
terms of the GNU General Public License as published by the Free
|
|
36 |
Software Foundation; either version 2 of the License, or (at your
|
|
37 |
option) any later version.
|
|
34 | 38 |
|
35 | 39 |
On Debian systems, the complete text of the GNU General Public |
36 | 40 |
License can be found in /usr/share/common-licenses/GPL. |
b/man/hail.rst | ||
---|---|---|
1 |
HAIL(1) Ganeti | Version @GANETI_VERSION@ |
|
2 |
========================================= |
|
3 |
|
|
4 |
NAME |
|
5 |
---- |
|
6 |
|
|
7 |
hail - Ganeti IAllocator plugin |
|
8 |
|
|
9 |
SYNOPSIS |
|
10 |
-------- |
|
11 |
|
|
12 |
**hail** [ **-t** *datafile* | **--simulate** *spec* ] *input-file* |
|
13 |
|
|
14 |
**hail** --version |
|
15 |
|
|
16 |
DESCRIPTION |
|
17 |
----------- |
|
18 |
|
|
19 |
hail is a Ganeti IAllocator plugin that allows automatic instance |
|
20 |
placement and automatic instance secondary node replacement using the |
|
21 |
same algorithm as **hbal**(1). |
|
22 |
|
|
23 |
The program takes its input via a JSON file containing the current cluster |
|
24 |
state and the request details, and outputs (on stdout) a JSON-formatted |
|
25 |
response. In case of critical failures, the error message is printed |
|
26 |
on stderr and the exit code is changed to show failure. |
|
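For example, given a request file prepared by Ganeti (the file name
here is illustrative), a manual run would look like::

    hail /tmp/allocate-request.json > response.json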
27 |
|
|
28 |
ALGORITHM |
|
29 |
~~~~~~~~~ |
|
30 |
|
|
31 |
The program uses a simplified version of the hbal algorithm. |
|
32 |
|
|
33 |
For relocations, we try to change the secondary node of the instance |
|
34 |
to all the other valid nodes; the node which results in the best |
|
35 |
cluster score is chosen. |
|
36 |
|
|
37 |
For single-node allocations (non-mirrored instances), again we |
|
38 |
select the node which, when chosen as the primary node, gives the best |
|
39 |
score. |
|
40 |
|
|
41 |
For dual-node allocations (mirrored instances), we choose the best |
|
42 |
pair; this is the only choice where the algorithm is non-trivial |
|
43 |
with regard to cluster size. |
|
44 |
|
|
45 |
For node evacuations (*multi-evacuate* mode), we iterate over all |
|
46 |
instances which live as secondaries on those nodes and try to relocate |
|
47 |
them using the single-instance relocation algorithm. |
|
48 |
|
|
49 |
In all cases, the cluster scoring is identical to the hbal algorithm. |
|
50 |
|
|
51 |
OPTIONS |
|
52 |
------- |
|
53 |
|
|
54 |
The options that can be passed to the program are as follows: |
|
55 |
|
|
56 |
-p, --print-nodes |
|
57 |
Prints the before and after node status, in a format designed to |
|
58 |
allow the user to understand the node's most important |
|
59 |
parameters. See the man page **hbal**(1) for more details about this |
|
60 |
option. |
|
61 |
|
|
62 |
-t *datafile*, --text-data=*datafile* |
|
63 |
The name of the file holding cluster information, to override the |