HSPACE(1) Ganeti | Version @GANETI_VERSION@
===========================================

NAME
----

hspace - Cluster space analyzer for Ganeti

SYNOPSIS
--------

**hspace** {backend options...} [algorithm options...] [request options...]
[output options...] [-v... | -q]

**hspace** --version

Backend options:

{ **-m** *cluster* | **-L[** *path* **] [-X]** | **-t** *data-file* |
**--simulate** *spec* }

Algorithm options:

**[ --max-cpu *cpu-ratio* ]**
**[ --min-disk *disk-ratio* ]**
**[ -O *name...* ]**

Request options:

**[--disk-template** *template* **]**

**[--standard-alloc** *disk,ram,cpu* **]**

**[--tiered-alloc** *disk,ram,cpu* **]**

Output options:

**[--machine-readable**[=*CHOICE*] **]**
**[-p**[*fields*]**]**

DESCRIPTION
-----------

hspace computes how many additional instances can fit on a cluster
while maintaining N+1 status.

The program will try to place instances, all of the same size, on the
cluster, until the point where no further N+1-compliant allocation is
possible. It uses the exact same allocation algorithm as the hail
iallocator plugin in *allocate* mode.

The output of the program is designed either for human consumption (the
default) or, when enabled with the ``--machine-readable`` option
(described further below), for machine consumption. In the latter case,
it is intended to be interpreted as a shell fragment (or parsed as a
*key=value* file). Options which extend the output (e.g. -p, -v) will
output the additional information on stderr (such that the stdout is
still parseable).

By default, the instance specifications will be read from the cluster;
the options ``--standard-alloc`` and ``--tiered-alloc`` can be used to
override them.

The following keys are available in the machine-readable output of the
script (all prefixed with *HTS_*):
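
As an illustration, the machine-readable output can be consumed as a
*key=value* file. The following Python sketch shows one way to parse it;
the sample text uses made-up values, not real hspace output:

```python
# Minimal sketch: parse hspace's machine-readable (key=value) output
# into a dict. The sample text is illustrative, not real hspace output.
sample = """\
HTS_SPEC_MEM=4096
HTS_SPEC_DSK=102400
HTS_SPEC_CPU=2
HTS_INI_INST_CNT=10
HTS_FIN_INST_CNT=25
HTS_OK=1
"""

def parse_hspace(text):
    """Parse key=value lines, stripping the HTS_ prefix."""
    result = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            result[key.removeprefix("HTS_")] = value
    return result

data = parse_hspace(sample)
# Per the OK key semantics below: only trust the values if OK is "1".
ok = data.get("OK") == "1"
```

Note the check on the ``OK`` key: as described below, the values should
not be relied upon if that key is missing.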

SPEC_MEM, SPEC_DSK, SPEC_CPU, SPEC_RQN, SPEC_DISK_TEMPLATE
  These represent the specifications of the instance model used for
  allocation (the memory, disk, cpu, requested nodes, disk template).

TSPEC_INI_MEM, TSPEC_INI_DSK, TSPEC_INI_CPU, ...
  Only defined when the tiered allocation mode is enabled, these are
  similar to the above specifications but show the initial starting spec
  for tiered allocation.

CLUSTER_MEM, CLUSTER_DSK, CLUSTER_CPU, CLUSTER_NODES
  These represent the total memory, disk, CPU count and total nodes in
  the cluster.

INI_SCORE, FIN_SCORE
  These are the initial (current) and final cluster score (see the hbal
  man page for details about the scoring algorithm).

INI_INST_CNT, FIN_INST_CNT
  The initial and final instance count.

INI_MEM_FREE, FIN_MEM_FREE
  The initial and final total free memory in the cluster (but this
  doesn't necessarily mean available for use).

INI_MEM_AVAIL, FIN_MEM_AVAIL
  The initial and final total available memory for allocation in the
  cluster. If allocating redundant instances, new instances could
  increase the reserved memory, so not necessarily all of this memory
  can be used for new instance allocations.

INI_MEM_RESVD, FIN_MEM_RESVD
  The initial and final reserved memory (for redundancy/N+1 purposes).

INI_MEM_INST, FIN_MEM_INST
  The initial and final memory used for instances (actual runtime used
  RAM).

INI_MEM_OVERHEAD, FIN_MEM_OVERHEAD
  The initial and final memory overhead, i.e. memory used for the node
  itself and unaccounted memory (e.g. due to hypervisor overhead).

INI_MEM_EFF, FIN_MEM_EFF
  The initial and final memory efficiency, represented as instance
  memory divided by total memory.

INI_DSK_FREE, INI_DSK_AVAIL, INI_DSK_RESVD, INI_DSK_INST, INI_DSK_EFF
  Initial disk stats, similar to the memory ones.

FIN_DSK_FREE, FIN_DSK_AVAIL, FIN_DSK_RESVD, FIN_DSK_INST, FIN_DSK_EFF
  Final disk stats, similar to the memory ones.

INI_CPU_INST, FIN_CPU_INST
  Initial and final number of virtual CPUs used by instances.

INI_CPU_EFF, FIN_CPU_EFF
  The initial and final CPU efficiency, represented as the count of
  virtual instance CPUs divided by the total physical CPU count.

INI_MNODE_MEM_AVAIL, FIN_MNODE_MEM_AVAIL
  The initial and final maximum per-node available memory. This is not
  very useful as a metric, but it can give an impression of the status
  of the nodes; as an example, this value restricts the maximum
  instance size that can still be created on the cluster.

INI_MNODE_DSK_AVAIL, FIN_MNODE_DSK_AVAIL
  Like the above, but for disk.

TSPEC
  This parameter holds the pairs of specifications and counts of
  instances that can be created in the *tiered allocation* mode. The
  value of the key is a space-separated list of values; each value is of
  the form *memory,disk,vcpu=count*, where memory, disk and vcpu are
  the values for the current spec, and count is how many instances of
  this spec can be created. A complete value for this variable could be:
  **4096,102400,2=225 2560,102400,2=20 512,102400,2=21**.
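
The *memory,disk,vcpu=count* format can be parsed as in this Python
sketch; the input is the example value from this page, not output from
a real cluster:

```python
# Minimal sketch: parse a TSPEC value into (memory, disk, vcpu, count)
# tuples. The input string is the man page's example, not real output.
def parse_tspec(value):
    """Split 'mem,dsk,cpu=count' entries from a space-separated list."""
    tiers = []
    for entry in value.split():
        spec, _, count = entry.partition("=")
        mem, dsk, cpu = spec.split(",")
        tiers.append((int(mem), int(dsk), int(cpu), int(count)))
    return tiers

tiers = parse_tspec("4096,102400,2=225 2560,102400,2=20 512,102400,2=21")
# Total memory the tiered allocation would consume, in MiB:
total_mem = sum(mem * count for (mem, dsk, cpu, count) in tiers)
```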

KM_USED_CPU, KM_USED_NPU, KM_USED_MEM, KM_USED_DSK
  These represent the metrics of used resources at the start of the
  computation (only for tiered allocation mode). The NPU value is the
  "normalized" CPU count, i.e. the number of virtual CPUs divided by
  the maximum ratio of virtual to physical CPUs.
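
The normalization above can be sketched as follows; the numbers are
purely illustrative:

```python
# Minimal sketch of the NPU ("normalized" CPU) computation described
# above: virtual CPUs divided by the maximum virtual-to-physical ratio
# (the --max-cpu value).
def normalized_cpus(virtual_cpus, max_cpu_ratio):
    return virtual_cpus / max_cpu_ratio

# With 20 virtual CPUs and --max-cpu=2.5, the normalized count is the
# number of physical CPUs these vCPUs "cost" at the full ratio.
npu = normalized_cpus(20, 2.5)
```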

KM_POOL_CPU, KM_POOL_NPU, KM_POOL_MEM, KM_POOL_DSK
  These represent the total resources allocated during the tiered
  allocation process. In effect, they represent how much is readily
  available for allocation.

KM_UNAV_CPU, KM_UNAV_NPU, KM_UNAV_MEM, KM_UNAV_DSK
  These represent the resources left over (either free, as in
  unallocable, or allocable on their own) after the tiered allocation
  has been completed. They better represent the actually unallocable
  resources, which cannot be used because some other resource has been
  exhausted. For example, the cluster might still have 100GiB disk
  free, but with no memory left for instances, we cannot allocate
  another instance, so in effect the disk space is unallocable. Note
  that the CPUs here represent instance virtual CPUs, and in case the
  *--max-cpu* option hasn't been specified this will be -1.

ALLOC_USAGE
  The current usage, represented as the initial number of instances
  divided by the final number of instances.

ALLOC_COUNT
  The number of instances allocated (delta between FIN_INST_CNT and
  INI_INST_CNT).
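
The relation between these two keys and the instance counts can be
sketched as follows; the INI/FIN values are made up:

```python
# Minimal sketch of how ALLOC_COUNT and ALLOC_USAGE derive from the
# instance counts. The counts here are illustrative only.
ini_inst_cnt = 10
fin_inst_cnt = 25

alloc_count = fin_inst_cnt - ini_inst_cnt   # ALLOC_COUNT: instances added
alloc_usage = ini_inst_cnt / fin_inst_cnt   # ALLOC_USAGE: usage ratio
```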

ALLOC_FAIL*_CNT
  For the last attempt at allocation (which would have increased
  FIN_INST_CNT by one, had it succeeded), this is the count of
  the failure reasons per failure type; currently defined are FAILMEM,
  FAILDISK and FAILCPU, which represent errors due to not enough
  memory, disk and CPUs, and FAILN1, which represents a non-N+1
  compliant cluster on which we can't allocate instances at all.

ALLOC_FAIL_REASON
  The reason for most of the failures, being one of the above FAIL*
  strings.

OK
  A marker representing the successful end of the computation, having
  the value "1". If this key is not present in the output, it means
  that the computation failed and any values present should not be
  relied upon.

Many of the INI_/FIN_ metrics will also be displayed with a TRL_
prefix, denoting the cluster status at the end of the tiered
allocation run.

The human output format should be self-explanatory, so it is not
described further.

OPTIONS
-------

The options that can be passed to the program are as follows:

--disk-template *template*
  The disk template for the instance; one of the Ganeti disk templates
  (e.g. plain, drbd, and so on) should be passed in.

--max-cpu=*cpu-ratio*
  The maximum virtual to physical cpu ratio, as a floating point number
  greater than or equal to one. For example, specifying *cpu-ratio* as
  **2.5** means that, for a 4-cpu machine, a maximum of 10 virtual cpus
  should be allowed to be in use for primary instances. A value of
  exactly one means there will be no over-subscription of CPU (except
  for the CPU time used by the node itself), and values below one do not
  make sense, as that means other resources (e.g. disk) won't be fully
  utilised due to CPU restrictions.
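
The example calculation above can be sketched as:

```python
# Minimal sketch of the --max-cpu limit from the example above: with a
# ratio of 2.5 on a 4-CPU node, at most 10 virtual CPUs may be in use
# by primary instances.
import math

def max_virtual_cpus(physical_cpus, cpu_ratio):
    return math.floor(physical_cpus * cpu_ratio)

limit = max_virtual_cpus(4, 2.5)
```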

--min-disk=*disk-ratio*
  The minimum amount of free disk space remaining, as a floating point
  number. For example, specifying *disk-ratio* as **0.25** means that
  at least one quarter of disk space should be left free on nodes.

-l *rounds*, --max-length=*rounds*
  Restrict the number of instance allocations to this length. This is
  not very useful in practice, but can be used for testing hspace
  itself, or to limit the runtime for very big clusters.

-p, --print-nodes
  Prints the before and after node status, in a format designed to allow
  the user to understand the node's most important parameters. See the
  man page **htools**(1) for more details about this option.

-O *name*
  This option (which can be given multiple times) will mark nodes as
  being *offline*. This means a couple of things:

  - instances won't be placed on these nodes, not even temporarily;
    e.g. the *replace primary* move is not available if the secondary
    node is offline, since this move requires a failover.
  - these nodes will not be included in the score calculation (except
    for the percentage of instances on offline nodes)

  Note that the algorithm will also mark as offline any nodes which
  are reported by RAPI as such, or that have "?" in file-based input
  in any numeric fields.

-S *filename*, --save-cluster=*filename*
  If given, the state of the cluster at the end of the allocation is
  saved to a file named *filename.alloc*, and if tiered allocation is
  enabled, the state after tiered allocation will be saved to
  *filename.tiered*. This allows re-feeding the cluster state to
  either hspace itself (with different parameters) or for example
  hbal, via the ``-t`` option.

-t *datafile*, --text-data=*datafile*
  Backend specification: the name of the file holding node and instance
  information (if not collecting via RAPI or LUXI). This or one of the
  other backends must be selected. The option is described in the man
  page **htools**(1).

-m *cluster*
  Backend specification: collect data directly from the *cluster* given
  as an argument via RAPI. The option is described in the man page
  **htools**(1).

-L [*path*]
  Backend specification: collect data directly from the master daemon,
  which is to be contacted via LUXI (an internal Ganeti protocol). The
  option is described in the man page **htools**(1).

--simulate *description*
  Backend specification: similar to the **-t** option, this allows
  overriding the cluster data with a simulated cluster. For details
  about the description, see the man page **htools**(1).

--standard-alloc *disk,ram,cpu*
  This option overrides the instance size read from the cluster for the
  *standard* allocation mode, where we simply allocate instances of the
  same, fixed size until the cluster runs out of space.

  The specification given is similar to the *--simulate* option and it
  holds:

  - the disk size of the instance (units can be used)
  - the memory size of the instance (units can be used)
  - the vcpu count for the instance

  An example description would be *100G,4g,2*, describing an instance
  specification of 100GB of disk space, 4GiB of memory and 2 VCPUs.

--tiered-alloc *disk,ram,cpu*
  This option overrides the instance size for the *tiered* allocation
  mode. In this mode, the algorithm starts from the given specification
  and allocates until there is no more space; then it decreases the
  specification and tries the allocation again. The decrease is done on
  the metric that last failed during allocation. The argument should
  have the same format as for ``--standard-alloc``.

  Also note that the normal allocation and the tiered allocation are
  independent, and both start from the initial cluster state; as such,
  the instance counts for these two modes are not related to one
  another.
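
The tiered loop described above can be illustrated with a toy sketch on
a single aggregate resource pool. Real hspace places instances on
individual nodes with N+1 checks; this only shows the idea of shrinking
the metric that failed and retrying, and the halving step and all
numbers here are made up:

```python
# Toy sketch of tiered allocation: allocate the current spec until a
# resource runs out, record the tier, shrink the most exhausted metric,
# and retry until the spec cannot shrink further. Illustrative only.
def tiered_allocate(pool, spec, min_spec):
    """pool, spec, min_spec: dicts with 'disk', 'ram' and 'cpu' keys."""
    tiers = []                      # list of (spec, count) pairs
    spec = dict(spec)
    while True:
        count = 0
        while all(pool[k] >= spec[k] for k in spec):
            for k in spec:          # "allocate" one instance
                pool[k] -= spec[k]
            count += 1
        if count:
            tiers.append((dict(spec), count))
        # Shrink the metric that is most exhausted relative to the spec.
        failing = min(spec, key=lambda k: pool[k] / spec[k])
        if spec[failing] // 2 < min_spec[failing]:
            return tiers
        spec[failing] //= 2

result = tiered_allocate(
    pool={"disk": 1040, "ram": 64, "cpu": 100},
    spec={"disk": 100, "ram": 4, "cpu": 2},
    min_spec={"disk": 25, "ram": 1, "cpu": 1},
)
```

With these made-up numbers the disk runs out first, so the disk size is
halved until another tier fits, mirroring the TSPEC output format above.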

--machine-readable[=*choice*]
  By default, the output of the program is in "human-readable" format,
  i.e. text descriptions. By passing this flag you can either enable
  (``--machine-readable`` or ``--machine-readable=yes``) or explicitly
  disable (``--machine-readable=no``) the machine-readable format
  described above.

-v, --verbose
  Increase the output verbosity. Each usage of this option will
  increase the verbosity (currently more than 2 doesn't make sense)
  from the default of one.

-q, --quiet
  Decrease the output verbosity. Each usage of this option will
  decrease the verbosity (less than zero doesn't make sense) from the
  default of one.

-V, --version
  Just show the program version and exit.

UNITS
~~~~~

By default, all unit-accepting options use mebibytes. Using the
lower-case letters *m*, *g* and *t* (or their longer equivalents
*mib*, *gib*, *tib*, for which case doesn't matter), explicit binary
units can be selected. Units in the SI system can be selected using the
upper-case letters *M*, *G* and *T* (or their longer equivalents
*MB*, *GB*, *TB*, for which case doesn't matter).

More details about the difference between the SI and binary systems can
be read in the **units**(7) man page.
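
The unit rules can be sketched as follows; this is an illustration of
the rules above, not hspace's actual parser:

```python
# Minimal sketch of the unit rules: a bare number means MiB; lower-case
# m/g/t (or mib/gib/tib, any case) select binary units; upper-case
# M/G/T (or MB/GB/TB, any case) select SI units. Returns MiB.
import re

BINARY = {"m": 1, "g": 1024, "t": 1024 ** 2}            # MiB per unit
SI = {"m": 10 ** 6 / 2 ** 20, "g": 10 ** 9 / 2 ** 20,   # MB/GB/TB in MiB
      "t": 10 ** 12 / 2 ** 20}

def parse_size(text):
    match = re.fullmatch(r"(\d+)([a-zA-Z]*)", text)
    number, suffix = int(match.group(1)), match.group(2)
    if suffix == "":
        return number                       # default: mebibytes
    if suffix.lower() in ("mib", "gib", "tib") or suffix in ("m", "g", "t"):
        return number * BINARY[suffix[0].lower()]
    if suffix.lower() in ("mb", "gb", "tb") or suffix in ("M", "G", "T"):
        return number * SI[suffix[0].lower()]
    raise ValueError("unknown unit suffix: " + suffix)
```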

EXIT STATUS
-----------

The exit status of the command will be zero, unless for some reason
the algorithm fatally failed (e.g. wrong node or instance data).

BUGS
----

The algorithm is highly dependent on the number of nodes; its runtime
grows exponentially with this number, and as such is impractical for
really big clusters.

The algorithm doesn't rebalance the cluster or try to get the optimal
fit; it just allocates in the best place for the current step, without
taking into consideration the impact on future placements.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: