=================
Ganeti 2.1 design
=================

This document describes the major changes in Ganeti 2.1 compared to
the 2.0 version.

The 2.1 version will be a relatively small release. Its main aim is to avoid
changing too much of the core code, while addressing issues and adding new
features and improvements over 2.0, in a timely fashion.

.. contents:: :depth: 3

Objective
=========

Ganeti 2.1 will add features to help further automation of cluster
operations, further improve scalability to even bigger clusters, and make it
easier to debug the Ganeti core.

Background
==========

Overview
========

Detailed design
===============

As for 2.0 we divide the 2.1 design into three areas:

- core changes, which affect the master daemon/job queue/locking or all/most
  logical units
- logical unit/feature changes
- external interface changes (eg. command line, os api, hooks, ...)

Core changes
------------

Storage units modelling
~~~~~~~~~~~~~~~~~~~~~~~

Currently, Ganeti has a good model of the block devices for instances
(e.g. LVM logical volumes, files, DRBD devices, etc.) but none of the
storage pools that are providing the space for these front-end
devices. For example, there are hardcoded inter-node RPC calls for
volume group listing, file storage creation/deletion, etc.

The storage units framework will implement a generic handling for all
kinds of storage backends:

- LVM physical volumes
- LVM volume groups
- File-based storage directories
- any other future storage method

There will be a generic list of methods that each storage unit type
will provide, like:

- list of storage units of this type
- check status of the storage unit

Additionally, there will be specific methods for each storage unit type, for
example:

- enable/disable allocations on a specific PV
- file storage directory creation/deletion
- VG consistency fixing

This will allow a much better modelling and unification of the various
RPC calls related to backend storage pools in the future. Ganeti 2.1 is
intended to add the basics of the framework, and not necessarily move
all the current VG/file-based operations to it.

Note that while we model both LVM PVs and LVM VGs, the framework will
**not** model any relationship between the different types. In other
words, we model neither inheritance nor stacking, since this is
too complex for our needs. While a ``vgreduce`` operation on a LVM VG
could actually remove a PV from it, this will not be handled at the
framework level, but at the individual operation level. The goal is that
this is a lightweight framework for abstracting the different storage
operations, and not for modelling the storage hierarchy.
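
As a rough illustration of the intended shape of this framework (class and
method names here are hypothetical, not the final API), each storage unit type
could expose the generic methods plus its own type-specific operations::

  class StorageBase(object):
    """Generic interface every storage unit type provides (sketch only)."""

    def List(self):
      """Return the storage units of this type present on this node."""
      raise NotImplementedError

    def GetStatus(self, name):
      """Return status information for the named storage unit."""
      raise NotImplementedError


  class LvmPvStorage(StorageBase):
    """LVM physical volumes, adding their type-specific operations."""

    def SetAllocatable(self, name, allocatable):
      """Enable or disable allocations on a specific PV."""
      raise NotImplementedError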

Feature changes
---------------

Ganeti Confd
~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

In Ganeti 2.0 all nodes are equal, but some are more equal than others. In
particular they are divided between "master", "master candidates" and "normal".
(Moreover they can be offline or drained, but this is not important for the
current discussion). In general the whole configuration is only replicated to
master candidates, and some partial information is spread to all nodes via
ssconf.

This change was done so that the most frequent Ganeti operations didn't need to
contact all nodes, and so clusters could become bigger. If we want more
information to be available on all nodes, we need to add more ssconf values,
which counter-balances that change, or to talk with the master node, which
is not designed to happen now, and requires its availability.

Information such as the instance->primary_node mapping will be needed on all
nodes, and we also want to make sure services external to the cluster can query
this information as well. This information must be available at all times, so
we can't query it through RAPI, which would be a single point of failure, as
it's only available on the master.


Proposed changes
++++++++++++++++

In order to allow fast, highly available, read-only access to some
configuration values, we'll create a new ganeti-confd daemon, which will run on
master candidates. This daemon will talk via UDP, and authenticate messages
using HMAC with a cluster-wide shared key.

An interested client can query a value by making a request to a subset of the
cluster master candidates. It will then wait to get a few responses, and use
the one with the highest configuration serial number (which will always be
included in the answer). If some candidates are stale, or we are in the middle
of a configuration update, various master candidates may return different
values, and this should make sure the most recent information is used.

In order to prevent replay attacks, queries will contain the current unix
timestamp according to the client, and the server will verify that its own
timestamp is within the same 5-minute range (this requires synchronized clocks,
which is a good idea anyway). Queries will also contain a "salt" which they
expect the answers to be sent with, and clients are supposed to accept only
answers which contain the salt generated by them.

The configuration daemon will be able to answer simple queries such as:

- master candidates list
- master node
- offline nodes
- instance list
- instance primary nodes

Wire protocol
^^^^^^^^^^^^^

A confd query will look like this, on the wire::

  {
    "msg": "{\"type\": 1,
             \"rsalt\": \"9aa6ce92-8336-11de-af38-001d093e835f\",
             \"protocol\": 1,
             \"query\": \"node1.example.com\"}\n",
    "salt": "1249637704",
    "hmac": "4a4139b2c3c5921f7e439469a0a45ad200aead0f"
  }

Detailed explanation of the various fields:

- 'msg' contains a JSON-encoded query, its fields are:

  - 'protocol', integer, is the confd protocol version (initially just
    constants.CONFD_PROTOCOL_VERSION, with a value of 1)
  - 'type', integer, is the query type. For example "node role by name" or
    "node primary ip by instance ip". Constants will be provided for the
    actual available query types.
  - 'query', string, is the search key. For example an IP, or a node name.
  - 'rsalt', string, is the required response salt. The client must use it to
    recognize which answer it's getting.

- 'salt' must be the current unix timestamp, according to the client. Servers
  can refuse messages which have wrong timing, according to their
  configuration and clock.
- 'hmac' is an HMAC signature of salt+msg, with the cluster HMAC key

If an answer comes back (which is optional, since confd works over UDP) it will
be in this format::

  {
    "msg": "{\"status\": 0,
             \"answer\": 0,
             \"serial\": 42,
             \"protocol\": 1}\n",
    "salt": "9aa6ce92-8336-11de-af38-001d093e835f",
    "hmac": "aaeccc0dff9328fdf7967cb600b6a80a6a9332af"
  }

Where:

- 'msg' contains a JSON-encoded answer, its fields are:

  - 'protocol', integer, is the confd protocol version (initially just
    constants.CONFD_PROTOCOL_VERSION, with a value of 1)
  - 'status', integer, is the error code. Initially just 0 for 'ok' or 1 for
    'error' (in which case the answer contains an error detail, rather than an
    answer), but in the future it may be expanded to have more meanings (e.g.
    2, the answer is compressed)
  - 'answer' is the actual answer. Its type and meaning are query-specific.
    For example for "node primary ip by instance ip" queries it will be a
    string containing an IP address, for "node role by name" queries it will
    be an integer which encodes the role (master, candidate, drained, offline)
    according to constants.

- 'salt' is the requested salt from the query. A client can use it to recognize
  which query the answer is answering.
- 'hmac' is an HMAC signature of salt+msg, with the cluster HMAC key
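
As a client-side illustration of the above (a sketch only; function names and
the exact helpers used inside Ganeti may differ), building and signing such a
query could look like this in Python::

  import hashlib
  import hmac
  import json
  import time
  import uuid

  def BuildConfdQuery(hmac_key, query_type, query):
    """Build a signed confd query packet (illustrative sketch)."""
    msg = json.dumps({
      "protocol": 1,               # confd protocol version
      "type": query_type,          # e.g. the "node role by name" constant
      "query": query,              # search key, e.g. a node name
      "rsalt": str(uuid.uuid4()),  # salt the answer must be sent back with
      }) + "\n"
    salt = str(int(time.time()))   # replay protection: client's unix time
    sig = hmac.new(hmac_key, salt + msg, hashlib.sha1).hexdigest()
    return {"msg": msg, "salt": salt, "hmac": sig}

A client would send the JSON encoding of this dictionary over UDP to a few
master candidates, and keep only answers that carry its own 'rsalt' and a
valid HMAC.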


Redistribute Config
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently LURedistributeConfig triggers a copy of the updated configuration
file to all master candidates and of the ssconf files to all nodes. There are
other files which are maintained manually but which are important to keep in
sync. These are:

- rapi SSL key certificate file (rapi.pem) (on master candidates)
- rapi user/password file rapi_users (on master candidates)

Furthermore there are some files which are hypervisor specific but which we may
want to keep in sync:

- the xen-hvm hypervisor uses one shared file for all vnc passwords, and copies
  the file once, during node add. This design is subject to revision to be able
  to have different passwords for different groups of instances via the use of
  hypervisor parameters, and to allow xen-hvm and kvm to use the same system to
  provide password-protected vnc sessions. In general, though, it would be
  useful if the vnc password files were copied as well, to avoid unwanted vnc
  password changes on instance failover/migrate.

Optionally the admin may want to also ship files such as the global xend.conf
file, and the network scripts, to all nodes.

Proposed changes
++++++++++++++++

RedistributeConfig will be changed to also copy the rapi files, and to ask
every enabled hypervisor for a list of additional files to copy. We may also
want to add a global list of files on the cluster object, which will be
propagated as well, or a hook to calculate them. If we implement this feature
there should be a way to specify whether a file must be shipped to all nodes or
just master candidates.

This code will also be shared (via tasklets or by other means, if tasklets are
not ready for 2.1) with the AddNode and SetNodeParams LUs (so that the relevant
files will be automatically shipped to new master candidates as they are set).

VNC Console Password
~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently only the xen-hvm hypervisor supports setting a password to connect
to the instances' VNC console, and it has one common password stored in a file.

This doesn't allow different passwords for different instances/groups of
instances, and makes it necessary to remember to copy the file around the
cluster when the password changes.

Proposed changes
++++++++++++++++

We'll change the VNC password file to a vnc_password_file hypervisor parameter.
This way it can have a cluster default, but also a different value for each
instance. The VNC enabled hypervisors (xen and kvm) will publish all the
password files in use through the cluster so that a redistribute-config will
ship them to all nodes (see the Redistribute Config proposed changes above).

The current VNC_PASSWORD_FILE constant will be removed, but its value will be
used as the default HV_VNC_PASSWORD_FILE value, thus retaining backwards
compatibility with 2.0.

The code to export the list of VNC password files from the hypervisors to
RedistributeConfig will be shared between the KVM and xen-hvm hypervisors.
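
For illustration, the resulting hypervisor parameter layout could look roughly
like this (a sketch; everything except the ``vnc_password_file`` parameter name
used above is only an example)::

  # cluster-wide defaults, per enabled hypervisor (paths are examples)
  cluster_hvparams = {
    "xen-hvm": {"vnc_password_file": "/etc/ganeti/vnc-cluster-password"},
    "kvm": {"vnc_password_file": "/etc/ganeti/vnc-cluster-password"},
  }

  # a single instance may override the cluster default
  instance_hvparams = {"vnc_password_file": "/etc/ganeti/vnc-webservers"}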

Disk/Net parameters
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently disks and network interfaces have a few tweakable options and all the
rest is left to a default we chose. We're finding that we need more and more to
tweak some of these parameters, for example to disable barriers for DRBD
devices, or to allow striping for the LVM volumes.

Moreover for many of these parameters it will be nice to have cluster-wide
defaults, and then be able to change them per disk/interface.

Proposed changes
++++++++++++++++

We will add new cluster-level diskparams and netparams, which will contain all
the tweakable parameters. All values which have a sensible cluster-wide default
will go into this new structure, while parameters which have unique values will
not.

Example of network parameters:

- mode: bridge/route
- link: for mode "bridge" the bridge to connect to, for mode "route" it can
  contain the routing table, or the destination interface

Example of disk parameters:

- stripe: lvm stripes
- stripe_size: lvm stripe size
- meta_flushes: drbd, enable/disable metadata "barriers"
- data_flushes: drbd, enable/disable data "barriers"

Some parameters are bound to be disk-type specific (drbd vs. lvm vs. files) or
hypervisor specific (nic models, for example), but for now they will all live
in the same structure. Each component is supposed to validate only the
parameters it knows about, and ganeti itself will make sure that no "globally
unknown" parameters are added, and that no parameters have overridden meanings
for different components.

The parameters will be kept, as for the BEPARAMS, in a "default" category,
which will allow us to expand on them by creating instance "classes" in the
future. Instance classes are not a feature we plan to implement in 2.1, though.
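
For illustration only (the parameter names follow the examples above, and the
values are arbitrary), the cluster-level defaults could be stored along these
lines::

  # netparams "default" category (sketch, values are examples)
  netparams = {
    "default": {
      "mode": "bridge",       # "bridge" or "route"
      "link": "xen-br0",      # bridge name, or routing table/interface
    },
  }

  # diskparams "default" category (sketch, values are examples)
  diskparams = {
    "default": {
      "stripe": 1,            # lvm stripes
      "stripe_size": 65536,   # lvm stripe size, in bytes
      "meta_flushes": True,   # drbd metadata "barriers"
      "data_flushes": True,   # drbd data "barriers"
    },
  }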

Non-bridged instances support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently each instance NIC must be connected to a bridge, and if the bridge is
not specified the default cluster one is used. This makes it impossible to use
the vif-route xen network scripts, or other alternative mechanisms that don't
need a bridge to work.

Proposed changes
++++++++++++++++

The new "mode" network parameter will distinguish between bridged interfaces
and routed ones.

When mode is "bridge" the "link" parameter will contain the bridge the instance
should be connected to, effectively keeping things as they are today. The value
has been migrated from a nic field to a parameter to allow for easier
manipulation of the cluster default.

When mode is "route" the ip field of the interface will become mandatory, to
allow for a route to be set. In the future we may also want to accept multiple
IPs or IP/mask values for this purpose. We will evaluate possible meanings of
the link parameter to signify a routing table to be used, which would allow for
isolation between instance groups (as happens today with different bridges).

For now we won't add a parameter to specify which network script gets called
for which instance, so in a mixed cluster the network script must be able to
handle both cases. The default kvm vif script will be changed to do so. (Xen
doesn't have a Ganeti-provided script, so nothing will be done for that
hypervisor.)


Automated disk repairs infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Replacing defective disks in an automated fashion is quite difficult with the
current version of Ganeti. These changes will introduce additional
functionality and interfaces to simplify automating disk replacements on a
Ganeti node.

Fix node volume group
+++++++++++++++++++++

This is the most difficult addition, as it can lead to data loss if it's not
properly safeguarded.

The operation must be done only when all the other nodes that have instances in
common with the target node are fine, i.e. this is the only node with problems,
and we also have to double-check that all instances on this node have at least
a good copy of the data.

This might mean that we have to enhance the GetMirrorStatus calls, and
introduce a smarter version that can tell us more about the status of an
instance.

Stop allocation on a given PV
+++++++++++++++++++++++++++++

This is somewhat simple. First we need a "list PVs" opcode (and its associated
logical unit) and then a "set PV status" opcode/LU. These in combination should
allow both checking and changing the disk/PV status.

Instance disk status
++++++++++++++++++++

This new opcode or opcode change must list the instance-disk-index and node
combinations of the instance together with their status. This will allow
determining what part of the instance is broken (if any).

Repair instance
+++++++++++++++

This new opcode/LU/RAPI call will run ``replace-disks -p`` as needed, in order
to fix the instance status. It only affects primary instances; secondaries can
just be moved away.

Migrate node
++++++++++++

This new opcode/LU/RAPI call will take over the current ``gnt-node migrate``
code and run migrate for all instances on the node.

Evacuate node
+++++++++++++

This new opcode/LU/RAPI call will take over the current ``gnt-node evacuate``
code and run replace-secondary with an iallocator script for all instances on
the node.


External interface changes
--------------------------

OS API
~~~~~~

The OS API of Ganeti 2.0 has been built with extensibility in mind. Since we
pass everything as environment variables it's a lot easier to send new
information to the OSes without breaking backwards compatibility. This section
of the design outlines the proposed extensions to the API and their
implementation.

API Version Compatibility Handling
++++++++++++++++++++++++++++++++++

In 2.1 there will be a new OS API version (e.g. 15), which should be mostly
compatible with API 10, except for some newly added variables. Since it's easy
to not pass some variables, we'll be able to handle Ganeti 2.0 OSes by just
filtering out the newly added pieces of information. We will still encourage
OSes to declare support for the new API after checking that the new variables
don't create any conflict for them, and we will drop API 10 support after
Ganeti 2.1 has been released.

New Environment variables
+++++++++++++++++++++++++

Some variables have never been added to the OS API but would definitely be
useful for the OSes. We plan to add an INSTANCE_HYPERVISOR variable to allow
the OS to make changes relevant to the virtualization the instance is going to
use. Since this field is immutable for each instance, the OS can tailor the
install without having to make sure the instance can run under any
virtualization technology.
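
As a small illustration (a sketch; real OS scripts are usually shell scripts,
and the values shown are just the usual hypervisor names), an install script
could branch on the new variable like this::

  import os

  # INSTANCE_HYPERVISOR is the only new variable this paragraph commits to
  hypervisor = os.environ.get("INSTANCE_HYPERVISOR", "")

  if hypervisor == "kvm":
    print("tailoring the install for kvm")
  elif hypervisor.startswith("xen"):
    print("tailoring the install for xen")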

We also want the OS to know the particular hypervisor parameters, to be able to
customize the install even more. Since the parameters can change, though, we
will pass them only as an "FYI": if an OS ties some instance functionality to
the value of a particular hypervisor parameter, manual changes or a reinstall
may be needed to adapt the instance to the new environment. This is not a
regression compared to today, because even if the OSes are left blind about
this information, sometimes they still need to make compromises and cannot
satisfy all possible parameter values.

OS Variants
+++++++++++

Currently we are witnessing some degree of "OS proliferation" just to change a
simple installation behavior. This means that the same OS gets installed on
the cluster multiple times, with different names, to customize just one
installation behavior. Usually such OSes try to share as much as possible
through symlinks, but this still causes complications on the user side,
especially when multiple parameters must be cross-matched.

For example today if you want to install debian etch, lenny or squeeze you
probably need to install the debootstrap OS multiple times, changing its
configuration file, and calling it debootstrap-etch, debootstrap-lenny or
debootstrap-squeeze. Furthermore if you have for example a "server" and a
"development" environment which install different packages/configuration files
and must be available for all installs you'll probably end up with
debootstrap-etch-server, debootstrap-etch-dev, debootstrap-lenny-server,
debootstrap-lenny-dev, etc. Crossing more than two parameters quickly becomes
unmanageable.

In order to avoid this we plan to make OSes more customizable, by allowing each
OS to declare a list of variants which can be used to customize it. The
variants list is mandatory and must be written, one variant per line, in the
new "variants.list" file inside the main os dir. At least one variant must be
supported. When choosing the OS, exactly one variant will have to be specified,
and it will be encoded in the OS name as <OS-name>+<variant>. As is the case
today, it will be possible to change an instance's OS at creation or install
time.
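
For example (a hypothetical layout, reusing the debootstrap OS mentioned
above), a ``variants.list`` file could contain::

  etch
  lenny
  squeeze

which would make the OS selectable as ``debootstrap+etch``,
``debootstrap+lenny`` or ``debootstrap+squeeze``, with a single copy of the OS
scripts handling all three.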

The 2.1 OS list will be the combination of each OS, plus its supported
variants. This will cause the name proliferation to remain, but at least the
internal OS code will be simplified to just parsing the passed variant,
without the need for symlinks or code duplication.

Also we expect the OSes to declare only "interesting" variants, but to accept
some non-declared ones which a user will be able to pass in by overriding the
checks ganeti does. This will be useful for allowing some variations to be used
without polluting the OS list (per-OS documentation should list all supported
variants). If a variant which is not internally supported is forced through,
the OS scripts should abort.

In the future (post 2.1) we may want to move to full-fledged parameters, all
orthogonal to each other (for example "architecture" (i386, amd64), "suite"
(lenny, squeeze, ...), etc.), as opposed to the variant, which is a single
parameter where you need a different variant for each combination you want to
support. In this case we envision the variants being moved inside Ganeti and
associated with lists of parameter->value associations, which will then be
passed to the OS.


IAllocator changes
~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

The iallocator interface allows creation of instances without manually
specifying nodes, but instead by specifying plugins which will do the
required computations and produce a valid node list.

However, the interface is quite awkward to use:

- one cannot set a 'default' iallocator script
- one cannot use it to easily test if allocation would succeed
- some new functionality, such as rebalancing clusters and calculating
  capacity estimates, is needed

Proposed changes
++++++++++++++++

There are two areas of improvement proposed:

- improving the use of the current interface
- extending the IAllocator API to cover more automation


Default iallocator names
^^^^^^^^^^^^^^^^^^^^^^^^

The cluster will hold, for each type of iallocator, a (possibly empty)
list of modules that will be used automatically.

If the list is empty, the behaviour will remain the same.

If the list has one entry, then ganeti will behave as if
'--iallocator' was specified on the command line, i.e. use this
allocator by default. If the user however passed nodes, those will be
used in preference.

If the list has multiple entries, they will be tried in order until
one gives a successful answer.
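
A minimal sketch of that fallback logic (function and variable names here are
hypothetical, not the actual implementation)::

  def RunDefaultAllocators(default_iallocators, run_allocator, request):
    """Try each configured default iallocator in order (illustrative only).

    default_iallocators: possibly empty list of allocator names
    run_allocator: callable running one allocator, returning (success, result)
    request: the allocation request to satisfy

    """
    for name in default_iallocators:
      success, result = run_allocator(name, request)
      if success:
        return result
    raise RuntimeError("no default iallocator produced a successful answer")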

Dry-run allocation
^^^^^^^^^^^^^^^^^^

The create instance LU will get a new 'dry-run' option that will just
simulate the placement, and return the chosen node-lists after running
all the usual checks.

Cluster balancing
^^^^^^^^^^^^^^^^^

Instance additions/removals/moves can create a situation where load on the
nodes is not spread equally. For this, a new iallocator mode will be
implemented called ``balance``, in which the plugin, given the current
cluster state and a maximum number of operations, will need to
compute the instance relocations needed in order to achieve a "better"
(by whatever metric the script believes is better) cluster.

Cluster capacity calculation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In this mode, called ``capacity``, given an instance specification and
the current cluster state (similar to the ``allocate`` mode), the
plugin needs to return:

- how many instances can be allocated on the cluster with that specification
- on which nodes these will be allocated (in order)