.TH HSPACE 1 2009-06-01 htools "Ganeti H-tools"
hspace \- Cluster space analyzer for Ganeti
.B "[backend options...]"
.B "[algorithm options...]"
.B "[request options...]"
.BI "[ -m " cluster " ]"
.BI "[ -n " nodes-file " ]"
.BI "[ -i " instances-file " ]"
.BI "[ --max-cpu " cpu-ratio " ]"
.BI "[ --min-disk " disk-ratio " ]"
.BI "[ -O " name... " ]"
.BI "[--memory " mem "]"
.BI "[--disk " disk "]"
.BI "[--req-nodes " req-nodes "]"
hspace computes how many additional instances can be fit on a cluster,
while maintaining N+1 status.

The program will try to place instances, all of the same size, on the
cluster, until the point where we don't have any N+1 possible
allocation. It uses the exact same allocation algorithm as the hail
iallocator plugin.
The output of the program is designed to be interpreted as a shell
fragment (or parsed as a \fIkey=value\fR file). Options which extend
the output (e.g. -p, -v) will print the additional information on
stderr (so that stdout remains parseable).

The following keys are available in the output of the script (all
prefixed with \fIHTS_\fR):
.I SPEC_MEM, SPEC_DSK, SPEC_CPU, SPEC_RQN
These represent the specifications of the instance model used for
allocation (the memory, disk, cpu, requested nodes).
.I CLUSTER_MEM, CLUSTER_DSK, CLUSTER_CPU, CLUSTER_NODES
These represent the total memory, disk, CPU count and total number of
nodes in the cluster.
.I INI_SCORE, FIN_SCORE
These are the initial (current) and final cluster score (see the hbal
man page for details about the scoring algorithm).

.I INI_INST_CNT, FIN_INST_CNT
The initial and final instance count.
.I INI_MEM_FREE, FIN_MEM_FREE
The initial and final total free memory in the cluster (which does not
necessarily mean it is all available for use).
.I INI_MEM_AVAIL, FIN_MEM_AVAIL
The initial and final total memory available for allocation in the
cluster. When allocating redundant instances, new allocations can also
increase the reserved memory, so not all of this memory is necessarily
usable for new instance allocations.
.I INI_MEM_RESVD, FIN_MEM_RESVD
The initial and final reserved memory (for redundancy/N+1 purposes).
.I INI_MEM_INST, FIN_MEM_INST
The initial and final memory used for instances (the actual runtime
memory in use).
.I INI_MEM_OVERHEAD, FIN_MEM_OVERHEAD
The initial and final memory overhead: memory used for the node
itself and unaccounted memory (e.g. due to hypervisor overhead).
.I INI_MEM_EFF, FIN_MEM_EFF
The initial and final memory efficiency, represented as instance
memory divided by total memory.
.I INI_DSK_FREE, INI_DSK_AVAIL, INI_DSK_RESVD, INI_DSK_INST, INI_DSK_EFF
Initial disk stats, similar to the memory ones.

.I FIN_DSK_FREE, FIN_DSK_AVAIL, FIN_DSK_RESVD, FIN_DSK_INST, FIN_DSK_EFF
Final disk stats, similar to the memory ones.
.I INI_CPU_INST, FIN_CPU_INST
Initial and final number of virtual CPUs used by instances.

.I INI_CPU_EFF, FIN_CPU_EFF
The initial and final CPU efficiency, represented as the count of
virtual instance CPUs divided by the total physical CPU count.
.I INI_MNODE_MEM_AVAIL, FIN_MNODE_MEM_AVAIL
The initial and final maximum per-node available memory. This is not
very useful as a metric on its own, but it gives an impression of the
status of the nodes; for example, this value limits the maximum size
of an instance that can still be created on the cluster.
.I INI_MNODE_DSK_AVAIL, FIN_MNODE_DSK_AVAIL
Like the above but for disk.
The current usage, represented as the initial number of instances
divided by the final number of instances.
The number of instances allocated (the delta between FIN_INST_CNT and
INI_INST_CNT).
For the last attempt at allocation (which would have increased
FIN_INST_CNT by one, had it succeeded), this is the count of the
failure reasons per failure type; currently defined are FAILMEM,
FAILDISK and FAILCPU, which represent errors due to not enough memory,
disk and CPUs, and FAILN1, which represents a cluster that is not N+1
compliant and on which no instances can be allocated at all.
The reason for most of the failures, being one of the above FAIL*
codes.
A marker representing the successful end of the computation, having
the value "1". If this key is not present in the output, it means that
the computation failed and any values present should not be relied
upon.
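Since the output is a flat \fIkey=value\fR fragment, it can be sourced
directly by a shell script. A minimal sketch (the sample values below
are hypothetical, and only a few of the keys are shown):

```shell
# Write a hypothetical hspace output fragment to a file (in practice
# this would be something like: hspace ... > hspace.out).
cat > hspace.out <<'EOF'
HTS_SPEC_MEM=4096
HTS_INI_INST_CNT=10
HTS_FIN_INST_CNT=25
HTS_OK=1
EOF

# Source the fragment, then check the HTS_OK marker before trusting
# any of the other values.
. ./hspace.out
if [ "${HTS_OK:-0}" = "1" ]; then
    allocated=$((HTS_FIN_INST_CNT - HTS_INI_INST_CNT))
    echo "allocated $allocated instances"
fi
```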
The options that can be passed to the program are as follows:

The memory size of the instances to be placed (defaults to 4GiB).

The disk size of the instances to be placed (defaults to 100GiB).
.BI "--req-nodes " num-nodes
The number of nodes for the instances; the default of two means
mirrored instances, while passing one means plain type instances.
.BI "--max-cpu " cpu-ratio
The maximum virtual-to-physical cpu ratio, as a floating point number
(usually greater than one). For example, specifying \fIcpu-ratio\fR as
\fB2.5\fR means that, for a 4-cpu machine, a maximum of 10 virtual
cpus should be allowed to be in use for primary instances.
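The arithmetic behind the ratio above can be illustrated with plain
shell (this is not an hspace invocation, just a worked example):

```shell
# With a cpu-ratio of 2.5 on a node with 4 physical cpus, at most
# 2.5 * 4 = 10 virtual cpus may be used by primary instances.
ratio=2.5
pcpus=4
max_vcpus=$(awk -v r="$ratio" -v p="$pcpus" 'BEGIN { printf "%d\n", r * p }')
echo "$max_vcpus"   # prints 10
```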
.BI "--min-disk " disk-ratio
The minimum amount of free disk space remaining, as a floating point
number. For example, specifying \fIdisk-ratio\fR as \fB0.25\fR means
that at least one quarter of disk space should be left free on nodes.
A value of one doesn't make sense though, as that means no disk space
can be used at all.
Prints the before and after node status, in a format designed to allow
the user to understand the node's most important parameters.

The node list will contain this information:
a character denoting the status of the node, with '-' meaning an
offline node, '*' meaning an N+1 failure and blank meaning a good node
the total node memory

the memory used by the node itself

the memory used by instances
the amount of memory which seems to be in use, but for which it cannot
be determined why or by which instance; usually this means that the
hypervisor has some overhead or that there are other reporting errors
the reserved node memory, which is the amount of free memory needed
for N+1 compliance
the number of physical cpus on the node

the number of virtual cpus allocated to primary instances

number of primary instances

number of secondary instances

percent of free memory

ratio of virtual to physical cpus
This option (which can be given multiple times) will mark nodes as
being \fIoffline\fR, and instances won't be placed on these nodes.

Note that hspace will also mark as offline any nodes which are
reported by RAPI as such, or that have "?" in file-based input in any
numeric fields.
.BI "-n" nodefile ", --nodes=" nodefile
The name of the file holding node information (if not collecting via
RAPI), instead of the default \fInodes\fR file (but see below how to
customize the default value via the environment).

.BI "-i" instancefile ", --instances=" instancefile
The name of the file holding instance information (if not collecting
via RAPI), instead of the default \fIinstances\fR file (but see below
how to customize the default value via the environment).
Collect data not from files but directly from the \fIcluster\fR given
as an argument via RAPI. If the argument doesn't contain a colon (:),
then it is converted into a fully-built URL by prepending https:// and
appending the default RAPI port; otherwise it's considered a
fully-specified URL and is used as-is.
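The URL handling described above can be sketched as a small shell
function; the function name and the default RAPI port value (5080)
are illustrative assumptions of this example:

```shell
# Mimic the argument handling of -m: arguments without a colon are
# expanded to a full URL, everything else is passed through as-is.
rapi_url() {
    case "$1" in
        *:*) printf '%s\n' "$1" ;;                # fully-specified URL
        *)   printf 'https://%s:5080\n' "$1" ;;   # prepend scheme, append port
    esac
}

rapi_url "cluster.example.com"    # -> https://cluster.example.com:5080
rapi_url "https://cluster:5080"   # -> https://cluster:5080
```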
Collect data not from files but directly from the master daemon, which
is to be contacted via luxi (an internal Ganeti protocol). An optional
\fIpath\fR argument is interpreted as the path to the unix socket on
which the master daemon listens; otherwise, the default path used by
Ganeti when installed with "--localstatedir=/var" is used.
Increase the output verbosity. Each usage of this option will increase
the verbosity (currently more than 2 doesn't make sense) from the
default of one. At verbosity 2, the location of the new instances is
shown on standard error.
Decrease the output verbosity. Each usage of this option will decrease
the verbosity (less than zero doesn't make sense) from the default of
one.
Just show the program version and exit.
The exit status of the command will be zero, unless for some reason
the algorithm fatally failed (e.g. wrong node or instance data).
The algorithm is highly dependent on the number of nodes; its runtime
grows exponentially with this number, and as such is impractical for
large clusters.
The algorithm doesn't rebalance the cluster or try to get the optimal
fit; it just allocates in the best place for the current step, without
taking into consideration the impact on future placements.
If the variables \fBHTOOLS_NODES\fR and \fBHTOOLS_INSTANCES\fR are
present in the environment, they will override the default names for
the nodes and instances files. These will of course have no effect
when the \fB-m\fR or \fB-L\fR options are used.
.BR hbal "(1), " hscan "(1), " ganeti "(7), " gnt-instance "(8), "
Copyright (C) 2009 Google Inc. Permission is granted to copy,
distribute and/or modify under the terms of the GNU General Public
License as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.

On Debian systems, the complete text of the GNU General Public License
can be found in /usr/share/common-licenses/GPL.