=================
Ganeti 2.1 design
=================

This document describes the major changes in Ganeti 2.1 compared to
the 2.0 version.

The 2.1 version will be a relatively small release. Its main aim is to avoid
changing too much of the core code, while addressing issues and adding new
features and improvements over 2.0, in a timely fashion.

.. contents:: :depth: 4

Objective
=========

Ganeti 2.1 will add features to help further automate cluster
operations, further improve scalability to even bigger clusters, and make it
easier to debug the Ganeti core.

Background
==========

Overview
========

Detailed design
===============

As for 2.0 we divide the 2.1 design into three areas:

- core changes, which affect the master daemon/job queue/locking or all/most
  logical units
- logical unit/feature changes
- external interface changes (e.g. command line, OS API, hooks, ...)

Core changes
------------

Storage units modelling
~~~~~~~~~~~~~~~~~~~~~~~

Currently, Ganeti has a good model of the block devices for instances
(e.g. LVM logical volumes, files, DRBD devices, etc.) but none of the
storage pools that are providing the space for these front-end
devices. For example, there are hardcoded inter-node RPC calls for
volume group listing, file storage creation/deletion, etc.

The storage units framework will implement a generic handling for all
kinds of storage backends:

- LVM physical volumes
- LVM volume groups
- File-based storage directories
- any other future storage method

There will be a generic list of methods that each storage unit type
will provide, like:

- list of storage units of this type
- check status of the storage unit

Additionally, there will be specific methods for each storage unit type, for
example:

- enable/disable allocations on a specific PV
- file storage directory creation/deletion
- VG consistency fixing
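
As a sketch, such a framework could be structured around a small base class,
with one subclass per storage unit type; all names here are illustrative
assumptions, not the final API::

  class _StorageUnit(object):
    """Base class for all storage unit types."""

    def List(self):
      """Return all storage units of this type on this node."""
      raise NotImplementedError

    def GetStatus(self, name):
      """Return the status of the given storage unit."""
      raise NotImplementedError

  class LvmPvUnit(_StorageUnit):
    """LVM physical volumes, with PV-specific extra methods."""

    def SetAllocatable(self, name, allocatable):
      # Could be implemented on top of "pvchange -x y/n <pv>".
      raise NotImplementedError

  class FileStorageUnit(_StorageUnit):
    """File-based storage directories."""

    def Create(self, path):
      raise NotImplementedError

    def Remove(self, path):
      raise NotImplementedError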

    
This will allow a much better modeling and unification of the various
RPC calls related to backend storage pools in the future. Ganeti 2.1 is
intended to add the basics of the framework, and not necessarily move
all the current VG/file-based operations to it.

Note that while we model both LVM PVs and LVM VGs, the framework will
**not** model any relationship between the different types. In other
words, we model neither inheritance nor stacking, since this is
too complex for our needs. While a ``vgreduce`` operation on a LVM VG
could actually remove a PV from it, this will not be handled at the
framework level, but at the individual operation level. The goal is that
this is a lightweight framework, for abstracting the different storage
operations, and not for modelling the storage hierarchy.

Locking improvements
~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

The class ``LockSet`` (see ``lib/locking.py``) is a container for one or many
``SharedLock`` instances. It provides an interface to add/remove locks and to
acquire and subsequently release any number of those locks contained in it.

Locks in a ``LockSet`` are always acquired in alphabetic order. Due to the way
we're using locks for nodes and instances (the single cluster lock isn't
affected by this issue) this can lead to long delays when acquiring locks if
another operation tries to acquire multiple locks but has to wait for yet
another operation.

In the following demonstration we assume we have the instance locks ``inst1``,
``inst2``, ``inst3`` and ``inst4``.

#. Operation A grabs the lock for instance ``inst4``.
#. Operation B wants to acquire all instance locks in alphabetic order, but it
   has to wait for ``inst4``.
#. Operation C tries to lock ``inst1``, but it has to wait until
   Operation B (which is trying to acquire all locks) releases the lock again.
#. Operation A finishes and releases the lock on ``inst4``. Operation B can
   continue and eventually releases all locks.
#. Operation C can get the ``inst1`` lock and finishes.

Technically there's no need for Operation C to wait for Operation A, and
subsequently Operation B, to finish. Operation B can't continue until
Operation A is done (it has to wait for ``inst4``), anyway.

Proposed changes
++++++++++++++++

Non-blocking lock acquiring
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Acquiring locks for OpCode execution is always done in blocking mode. The
acquire calls won't return until the lock has successfully been acquired (or
an error occurred, although we won't cover that case here).

``SharedLock`` and ``LockSet`` must be able to be acquired in a
non-blocking way. They must support a timeout and abort trying to acquire
the lock(s) after the specified amount of time.
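
A minimal sketch of what a timed acquire could look like (the details are
illustrative, not the final ``lib/locking.py`` implementation)::

  import threading
  import time

  class SharedLock(object):
    def __init__(self):
      self._lock = threading.Lock()

    def acquire(self, timeout=None):
      """Acquire the lock, giving up after ``timeout`` seconds.

      Returns True if the lock was acquired, False on timeout.

      """
      if timeout is None:
        self._lock.acquire()
        return True
      # Poll with a small sleep; a real implementation would rather use
      # condition variables than busy-waiting.
      deadline = time.time() + timeout
      while True:
        if self._lock.acquire(False):
          return True
        if time.time() >= deadline:
          return False
        time.sleep(0.05)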

Retry acquiring locks
^^^^^^^^^^^^^^^^^^^^^

To prevent other operations from waiting for a long time, such as described in
the demonstration before, ``LockSet`` must not keep locks for a prolonged
period of time when trying to acquire two or more locks. Instead, if it fails
to acquire all requested locks within an increasing timeout, it should release
all locks it did acquire and sleep for some time before retrying.

A good timeout value needs to be determined. In any case, ``LockSet`` should
proceed to acquire the locks in blocking mode after a few (unsuccessful)
attempts to acquire all requested locks.

One proposal for the timeout is to use ``2**tries`` seconds, where ``tries``
is the number of unsuccessful tries.

In the demonstration before this would allow Operation C to continue after
Operation B unsuccessfully tried to acquire all locks and released all
acquired locks (``inst1``, ``inst2`` and ``inst3``) again.
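
As a sketch, the retry loop with the proposed ``2**tries`` backoff could look
like this (the ``LockSet`` method signatures are assumptions for
illustration)::

  _MAX_NONBLOCKING_TRIES = 5

  def acquire_all(lockset, names):
    """Acquire all locks in ``names``, releasing and retrying on failure."""
    for tries in range(_MAX_NONBLOCKING_TRIES):
      # Try to get all requested locks within an increasing timeout.
      if lockset.acquire(names, timeout=2 ** tries):
        return
      # Partial failure: release whatever was acquired, so that other
      # operations (like Operation C above) can make progress.
      lockset.release()
    # After a few unsuccessful attempts, fall back to blocking mode.
    lockset.acquire(names)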

Other solutions discussed
+++++++++++++++++++++++++

There was also some discussion on going one step further and extending the job
queue (see ``lib/jqueue.py``) to select the next task for a worker depending
on whether it can acquire the necessary locks. While this may reduce the
number of necessary worker threads and/or increase throughput on large
clusters with many jobs, it also brings many potential problems, such as
contention and increased memory usage, with it. As this would be an extension
of the changes proposed before it could be implemented at a later point in
time, but we decided to stay with the simpler solution for now.

Feature changes
---------------

Ganeti Confd
~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

In Ganeti 2.0 all nodes are equal, but some are more equal than others. In
particular they are divided between "master", "master candidates" and
"normal". (Moreover they can be offline or drained, but this is not important
for the current discussion). In general the whole configuration is only
replicated to master candidates, and some partial information is spread to all
nodes via ssconf.

This change was done so that the most frequent Ganeti operations didn't need
to contact all nodes, and so clusters could become bigger. If we want more
information to be available on all nodes, we either need to add more ssconf
values, which would counter-balance that change, or to talk with the master
node, which is not designed to happen now, and requires its availability.

Information such as the instance->primary_node mapping will be needed on all
nodes, and we also want to make sure services external to the cluster can
query this information as well. This information must be available at all
times, so we can't query it through RAPI, which would be a single point of
failure, as it's only available on the master.

Proposed changes
++++++++++++++++

In order to allow fast and highly available read-only access to some
configuration values, we'll create a new ganeti-confd daemon, which will run
on master candidates. This daemon will talk via UDP, and authenticate messages
using HMAC with a cluster-wide shared key. This key will be generated at
cluster init time, and stored on the cluster alongside the ganeti SSL keys,
and readable only by root.

An interested client can query a value by making a request to a subset of the
cluster master candidates. It will then wait to get a few responses, and use
the one with the highest configuration serial number. Since the configuration
serial number is increased each time the ganeti config is updated, and the
serial number is included in all answers, this can be used to make sure to use
the most recent answer, in case some master candidates are stale or in the
middle of a configuration update.

In order to prevent replay attacks queries will contain the current unix
timestamp according to the client, and the server will verify that its own
timestamp is within a 5-minute range of it (this requires synchronized clocks,
which is a good idea anyway). Queries will also contain a "salt" which they
expect the answers to be sent with, and clients are supposed to accept only
answers which contain the salt they generated.

The configuration daemon will be able to answer simple queries such as:

- master candidates list
- master node
- offline nodes
- instance list
- instance primary nodes

Wire protocol
^^^^^^^^^^^^^

A confd query will look like this, on the wire::

  {
    "msg": "{\"type\": 1,
             \"rsalt\": \"9aa6ce92-8336-11de-af38-001d093e835f\",
             \"protocol\": 1,
             \"query\": \"node1.example.com\"}\n",
    "salt": "1249637704",
    "hmac": "4a4139b2c3c5921f7e439469a0a45ad200aead0f"
  }

Detailed explanation of the various fields:

- 'msg' contains a JSON-encoded query, its fields are:

  - 'protocol', integer, is the confd protocol version (initially just
    constants.CONFD_PROTOCOL_VERSION, with a value of 1)
  - 'type', integer, is the query type. For example "node role by name" or
    "node primary ip by instance ip". Constants will be provided for the
    actual available query types.
  - 'query', string, is the search key. For example an ip, or a node name.
  - 'rsalt', string, is the required response salt. The client must use it to
    recognize which answer it's getting.

- 'salt' must be the current unix timestamp, according to the client. Servers
  can refuse messages which have a wrong timing, according to their
  configuration and clock.
- 'hmac' is an hmac signature of salt+msg, with the cluster hmac key
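
As a sketch, building and signing such a query on the client side could look
like this (a simplified illustration, not the actual confd client code)::

  import hmac
  import json
  import time
  import uuid
  from hashlib import sha1

  def make_query(hmac_key, query_type, query):
    """Serialize and sign a confd query.

    ``hmac_key`` is the cluster-wide shared key, as bytes.

    """
    msg = json.dumps({
      "protocol": 1,
      "type": query_type,
      "query": query,
      # The response salt lets the client match answers to this query.
      "rsalt": str(uuid.uuid4()),
      }) + "\n"
    # The packet salt is the current unix timestamp; servers use it to
    # reject replayed or badly-timed messages.
    salt = str(int(time.time()))
    digest = hmac.new(hmac_key, (salt + msg).encode("utf-8"),
                      sha1).hexdigest()
    return json.dumps({"msg": msg, "salt": salt, "hmac": digest})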

If an answer comes back (which is optional, since confd works over UDP) it
will be in this format::

  {
    "msg": "{\"status\": 0,
             \"answer\": 0,
             \"serial\": 42,
             \"protocol\": 1}\n",
    "salt": "9aa6ce92-8336-11de-af38-001d093e835f",
    "hmac": "aaeccc0dff9328fdf7967cb600b6a80a6a9332af"
  }

Where:

- 'msg' contains a JSON-encoded answer, its fields are:

  - 'protocol', integer, is the confd protocol version (initially just
    constants.CONFD_PROTOCOL_VERSION, with a value of 1)
  - 'status', integer, is the error code. Initially just 0 for 'ok' or 1 for
    'error' (in which case answer contains an error detail, rather than an
    answer), but in the future it may be expanded to have more meanings
    (e.g. 2, the answer is compressed)
  - 'answer', is the actual answer. Its type and meaning is query specific.
    For example for "node primary ip by instance ip" queries it will be a
    string containing an IP address, for "node role by name" queries it will
    be an integer which encodes the role (master, candidate, drained, offline)
    according to constants.

- 'salt' is the requested salt from the query. A client can use it to
  recognize what query the answer is answering.
- 'hmac' is an hmac signature of salt+msg, with the cluster hmac key
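
On the client side, verifying the received answers and picking the most
recent one could then look like this sketch (again illustrative, mirroring
``make_query`` above)::

  import hmac
  import json
  from hashlib import sha1

  def pick_answer(hmac_key, rsalt, packets):
    """Verify received packets and return the freshest valid answer."""
    best = None
    for packet in packets:
      data = json.loads(packet)
      # Accept only answers carrying the salt we generated for this query.
      if data["salt"] != rsalt:
        continue
      # Check the hmac signature of salt+msg with the cluster key.
      expected = hmac.new(hmac_key,
                          (data["salt"] + data["msg"]).encode("utf-8"),
                          sha1).hexdigest()
      if not hmac.compare_digest(expected, data["hmac"]):
        continue
      answer = json.loads(data["msg"])
      # Keep the answer with the highest configuration serial number.
      if best is None or answer["serial"] > best["serial"]:
        best = answer
    return best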

Redistribute Config
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently LURedistributeConfig triggers a copy of the updated configuration
file to all master candidates and of the ssconf files to all nodes. There are
other files which are maintained manually but which are important to keep in
sync. These are:

- rapi SSL key certificate file (rapi.pem) (on master candidates)
- rapi user/password file rapi_users (on master candidates)

Furthermore there are some files which are hypervisor specific but we may want
to keep in sync:

- the xen-hvm hypervisor uses one shared file for all vnc passwords, and
  copies the file once, during node add. This design is subject to revision to
  be able to have different passwords for different groups of instances via
  the use of hypervisor parameters, and to allow xen-hvm and kvm to use the
  same system to provide password-protected vnc sessions. In general, though,
  it would be useful if the vnc password files were copied as well, to avoid
  unwanted vnc password changes on instance failover/migrate.

Optionally the admin may want to also ship files such as the global xend.conf
file, and the network scripts to all nodes.

Proposed changes
++++++++++++++++

RedistributeConfig will be changed to also copy the rapi files, and to call
every enabled hypervisor asking for a list of additional files to copy. Users
will have the possibility to populate a file containing a list of files to be
distributed; this file will be propagated as well. Such a solution is really
simple to implement and it's easily usable by scripts.

This code will also be shared (via tasklets or by other means, if tasklets
are not ready for 2.1) with the AddNode and SetNodeParams LUs (so that the
relevant files will be automatically shipped to new master candidates as they
are set).

VNC Console Password
~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently just the xen-hvm hypervisor supports setting a password to connect
to the instances' VNC console, and has one common password stored in a file.

This doesn't allow different passwords for different instances/groups of
instances, and makes it necessary to remember to copy the file around the
cluster when the password changes.

Proposed changes
++++++++++++++++

We'll change the VNC password file to a vnc_password_file hypervisor
parameter. This way it can have a cluster default, but also a different value
for each instance. The VNC enabled hypervisors (xen and kvm) will publish all
the password files in use through the cluster so that a redistribute-config
will ship them to all nodes (see the Redistribute Config proposed changes
above).

The current VNC_PASSWORD_FILE constant will be removed, but its value will be
used as the default HV_VNC_PASSWORD_FILE value, thus retaining backwards
compatibility with 2.0.

The code to export the list of VNC password files from the hypervisors to
RedistributeConfig will be shared between the KVM and xen-hvm hypervisors.

Disk/Net parameters
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently disks and network interfaces have a few tweakable options and all
the rest is left to a default we chose. We're finding that we need more and
more to tweak some of these parameters, for example to disable barriers for
DRBD devices, or allow striping for the LVM volumes.

Moreover for many of these parameters it will be nice to have cluster-wide
defaults, and then be able to change them per disk/interface.

Proposed changes
++++++++++++++++

We will add new cluster-level diskparams and netparams, which will contain
all the tweakable parameters. All values which have a sensible cluster-wide
default will go into this new structure, while parameters which have unique
values will not.

Example of network parameters:

  - mode: bridge/route
  - link: for mode "bridge" the bridge to connect to, for mode "route" it can
    contain the routing table, or the destination interface

Example of disk parameters:

  - stripe: lvm stripes
  - stripe_size: lvm stripe size
  - meta_flushes: drbd, enable/disable metadata "barriers"
  - data_flushes: drbd, enable/disable data "barriers"
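
As an illustration, the cluster could hold these defaults in simple
dictionaries along these lines (the exact key names and the "default"
category layout are assumptions for this example)::

  netparams = {
    "default": {
      "mode": "bridge",       # or "route"
      "link": "xen-br0",      # bridge name, or routing table/interface
      },
    }

  diskparams = {
    "default": {
      "stripe": 1,            # number of lvm stripes
      "stripe_size": 64,      # lvm stripe size
      "meta_flushes": True,   # drbd metadata "barriers"
      "data_flushes": True,   # drbd data "barriers"
      },
    }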

Some parameters are bound to be disk-type specific (drbd vs. lvm vs. files)
or hypervisor specific (nic models for example), but for now they will all
live in the same structure. Each component is supposed to validate only the
parameters it knows about, and ganeti itself will make sure that no "globally
unknown" parameters are added, and that no parameters have overridden meanings
for different components.

The parameters will be kept, as for BEPARAMS, in a "default" category, which
will allow us to expand on it by creating instance "classes" in the future.
Instance classes are not a feature we plan to implement in 2.1, though.

Non bridged instances support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently each instance NIC must be connected to a bridge, and if the bridge
is not specified the default cluster one is used. This makes it impossible to
use the vif-route xen network scripts, or other alternative mechanisms that
don't need a bridge to work.

Proposed changes
++++++++++++++++

The new "mode" network parameter will distinguish between bridged interfaces
and routed ones.

When mode is "bridge" the "link" parameter will contain the bridge the
instance should be connected to, effectively working as today. The value has
been migrated from a nic field to a parameter to allow for easier
manipulation of the cluster default.

When mode is "route" the ip field of the interface will become mandatory, to
allow for a route to be set. In the future we may want also to accept
multiple IPs or IP/mask values for this purpose. We will evaluate possible
meanings of the link parameter to signify a routing table to be used, which
would allow for insulation between instance groups (as today happens for
different bridges).

For now we won't add a parameter to specify which network script gets called
for which instance, so in a mixed cluster the network script must be able to
handle both cases. The default kvm vif script will be changed to do so. (Xen
doesn't have a ganeti provided script, so nothing will be done for that
hypervisor.)
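
A sketch of how such a dual-mode network script could dispatch on the new
parameter (the environment variable names are illustrative assumptions)::

  import os
  import subprocess

  def setup_interface():
    iface = os.environ["INTERFACE"]   # tap device created by the hypervisor
    mode = os.environ["MODE"]         # "bridge" or "route"
    link = os.environ["LINK"]         # bridge name, or routing table
    ip = os.environ.get("IP")         # mandatory when mode is "route"

    if mode == "bridge":
      # Bridged setup: just attach the interface to the configured bridge.
      subprocess.check_call(["brctl", "addif", link, iface])
    else:
      # Routed setup: bring the interface up and route the instance's IP
      # to it, optionally in a dedicated routing table.
      subprocess.check_call(["ip", "link", "set", iface, "up"])
      cmd = ["ip", "route", "add", "%s/32" % ip, "dev", iface]
      if link:
        cmd.extend(["table", link])
      subprocess.check_call(cmd)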

Automated disk repairs infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Replacing defective disks in an automated fashion is quite difficult with the
current version of Ganeti. These changes will introduce additional
functionality and interfaces to simplify automating disk replacements on a
Ganeti node.

Fix node volume group
+++++++++++++++++++++

This is the most difficult addition, as it can lead to data loss if it's not
properly safeguarded.

The operation must be done only when all the other nodes that have instances
in common with the target node are fine, i.e. this is the only node with
problems, and also we have to double-check that all instances on this node
have at least a good copy of the data.

This might mean that we have to enhance the GetMirrorStatus calls, and
introduce a smarter version that can tell us more about the status of an
instance.

Stop allocation on a given PV
+++++++++++++++++++++++++++++

This is somewhat simple. First we need a "list PVs" opcode (and its
associated logical unit) and then a set PV status opcode/LU. These in
combination should allow both checking and changing the disk/PV status.

Instance disk status
++++++++++++++++++++

This new opcode or opcode change must list the instance-disk-index and node
combinations of the instance together with their status. This will allow
determining what part of the instance is broken (if any).

Repair instance
+++++++++++++++

This new opcode/LU/RAPI call will run ``replace-disks -p`` as needed, in
order to fix the instance status. It only affects primary instances;
secondaries can just be moved away.

Migrate node
++++++++++++

This new opcode/LU/RAPI call will take over the current ``gnt-node migrate``
code and run migrate for all instances on the node.

Evacuate node
+++++++++++++

This new opcode/LU/RAPI call will take over the current ``gnt-node evacuate``
code and run replace-secondary with an iallocator script for all instances on
the node.

External interface changes
--------------------------

OS API
~~~~~~

The OS API of Ganeti 2.0 has been built with extensibility in mind. Since we
pass everything as environment variables it's a lot easier to send new
information to the OSes without breaking retrocompatibility. This section of
the design outlines the proposed extensions to the API and their
implementation.

API Version Compatibility Handling
++++++++++++++++++++++++++++++++++

In 2.1 there will be a new OS API version (e.g. 15), which should be mostly
compatible with api 10, except for some newly added variables. Since it's
easy not to pass some variables we'll be able to handle Ganeti 2.0 OSes by
just filtering out the newly added pieces of information. We will still
encourage OSes to declare support for the new API after checking that the new
variables don't provide any conflict for them, and we will drop api 10
support after ganeti 2.1 has been released.

New Environment variables
+++++++++++++++++++++++++

Some variables have never been added to the OS api but would definitely be
useful for the OSes. We plan to add an INSTANCE_HYPERVISOR variable to allow
the OS to make changes relevant to the virtualization the instance is going
to use. Since this field is immutable for each instance, the OS can tailor
the install to it, without having to make sure the instance can run under any
virtualization technology.
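
For illustration, an OS create script written in Python could branch on the
new variable like this (the package choices are just an example)::

  import os

  hypervisor = os.environ.get("INSTANCE_HYPERVISOR", "")

  if hypervisor == "xen-pvm":
    # Paravirtualized Xen: a kernel bootable by the domU is enough.
    packages = ["linux-image-xen"]
  else:
    # Fully virtualized instances (xen-hvm, kvm) boot their own
    # bootloader from the instance's disk.
    packages = ["linux-image", "grub"]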

We also want the OS to know the particular hypervisor parameters, to be able
to customize the install even more. Since the parameters can change, though,
we will pass them only as an "FYI": if an OS ties some instance functionality
to the value of a particular hypervisor parameter, manual changes or a
reinstall may be needed to adapt the instance to the new environment. This is
not a regression as of today, because even if the OSes are left blind about
this information, sometimes they still need to make compromises and cannot
satisfy all possible parameter values.

OS Variants
+++++++++++

Currently we are witnessing some degree of "os proliferation" just to change
a simple installation behavior. This means that the same OS gets installed on
the cluster multiple times, with different names, to customize just one
installation behavior. Usually such OSes try to share as much as possible
through symlinks, but this still causes complications on the user side,
especially when multiple parameters must be cross-matched.

For example today if you want to install debian etch, lenny or squeeze you
probably need to install the debootstrap OS multiple times, changing its
configuration file, and calling it debootstrap-etch, debootstrap-lenny or
debootstrap-squeeze. Furthermore if you have for example a "server" and a
"development" environment which installs different packages/configuration
files and must be available for all installs you'll probably end up with
debootstrap-etch-server, debootstrap-etch-dev, debootstrap-lenny-server,
debootstrap-lenny-dev, etc. Crossing more than two parameters quickly becomes
unmanageable.

In order to avoid this we plan to make OSes more customizable, by allowing
each OS to declare a list of variants which can be used to customize it. The
variants list is mandatory and must be written, one variant per line, in the
new "variants.list" file inside the main os dir. At least one variant must be
supported. When choosing the OS exactly one variant will have to be
specified, and will be encoded in the os name as <OS-name>+<variant>. As for
today it will be possible to change an instance's OS at creation or install
time.
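
For example, a debootstrap OS shipping a ``variants.list`` with the
contents::

  etch
  lenny
  squeeze

would be selectable as ``debootstrap+etch``, ``debootstrap+lenny`` or
``debootstrap+squeeze``, with a single copy of the OS scripts.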

The 2.1 OS list will be the combination of each OS, plus its supported
variants. This will cause the name proliferation to remain, but at least the
internal OS code will be simplified to just parsing the passed variant,
without the need for symlinks or code duplication.

Also we expect the OSes to declare only "interesting" variants, but to accept
some non-declared ones which a user will be able to pass in by overriding the
checks ganeti does. This will be useful for allowing some variations to be
used without polluting the OS list (per-OS documentation should list all
supported variants). If a variant which is not internally supported is forced
through, the OS scripts should abort.

In the future (post 2.1) we may want to move to full-fledged parameters all
orthogonal to each other (for example "architecture" (i386, amd64), "suite"
(lenny, squeeze, ...), etc). (As opposed to the variant, which is a single
parameter, and you need a different variant for all the set of combinations
you want to support). In this case we envision the variants to be moved
inside of Ganeti and be associated with lists of parameter->value
associations, which will then be passed to the OS.

IAllocator changes
~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

The iallocator interface allows creation of instances without manually
specifying nodes, but instead by specifying plugins which will do the
required computations and produce a valid node list.

However, the interface is quite awkward to use:

- one cannot set a 'default' iallocator script
- one cannot use it to easily test if allocation would succeed
- some new functionality, such as rebalancing clusters and calculating
  capacity estimates, is needed

Proposed changes
++++++++++++++++

There are two areas of improvement proposed:

- improving the use of the current interface
- extending the IAllocator API to cover more automation

Default iallocator names
^^^^^^^^^^^^^^^^^^^^^^^^

The cluster will hold, for each type of iallocator, a (possibly empty)
list of modules that will be used automatically.

If the list is empty, the behaviour will remain the same.

If the list has one entry, then ganeti will behave as if
'--iallocator' was specified on the command line, i.e. use this
allocator by default. If the user however passed nodes, those will be
used in preference.

If the list has multiple entries, they will be tried in order until
one gives a successful answer.
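
A sketch of the resulting selection logic (the function and attribute names
are illustrative assumptions)::

  def choose_nodes(default_allocators, request, user_nodes=None):
    """Pick nodes for an instance, honouring the default allocator list.

    ``default_allocators`` is the cluster's (possibly empty) ordered list
    of iallocator plugins, each a callable returning (success, nodes).

    """
    if user_nodes:
      # Nodes explicitly passed by the user always take preference.
      return user_nodes
    # Try the configured default allocators in order, first success wins.
    for allocator in default_allocators:
      success, nodes = allocator(request)
      if success:
        return nodes
    raise ValueError("No allocator could satisfy the request")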

Dry-run allocation
^^^^^^^^^^^^^^^^^^

The create instance LU will get a new 'dry-run' option that will just
simulate the placement, and return the chosen node-lists after running
all the usual checks.

Cluster balancing
^^^^^^^^^^^^^^^^^

Instance adds/removals/moves can create a situation where load on the
nodes is not spread equally. For this, a new iallocator mode will be
implemented called ``balance`` in which the plugin, given the current
cluster state, and a maximum number of operations, will need to
compute the instance relocations needed in order to achieve a "better"
(for whatever the script believes is better) cluster.

Cluster capacity calculation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In this mode, called ``capacity``, given an instance specification and
the current cluster state (similar to the ``allocate`` mode), the
plugin needs to return:

- how many instances can be allocated on the cluster with that specification
- on which nodes these will be allocated (in order)
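
As a sketch, a ``capacity`` answer could be structured like the existing
iallocator responses; the field names below (beyond ``success`` and ``info``)
are illustrative assumptions, not part of this design::

  {
    "success": true,
    "info": "capacity estimate for the given instance specification",
    "instances": 12,
    "nodes": [
      ["node1.example.com", "node2.example.com"],
      ["node3.example.com", "node4.example.com"]
    ]
  }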