=================
Ganeti 2.1 design
=================

This document describes the major changes in Ganeti 2.1 compared to
the 2.0 version.

The 2.1 version will be a relatively small release. Its main aim is to avoid
changing too much of the core code, while addressing issues and adding new
features and improvements over 2.0, in a timely fashion.

.. contents:: :depth: 4

Objective
=========

Ganeti 2.1 will add features to help further automate cluster
operations, further improve scalability to even bigger clusters, and make it
easier to debug the Ganeti core.

Background
==========

Overview
========

Detailed design
===============

As for 2.0 we divide the 2.1 design into three areas:

- core changes, which affect the master daemon/job queue/locking or all/most
  logical units
- logical unit/feature changes
- external interface changes (e.g. command line, OS API, hooks, ...)

Core changes
------------

Storage units modelling
~~~~~~~~~~~~~~~~~~~~~~~

Currently, Ganeti has a good model of the block devices for instances
(e.g. LVM logical volumes, files, DRBD devices, etc.) but none of the
storage pools that provide the space for these front-end
devices. For example, there are hardcoded inter-node RPC calls for
volume group listing, file storage creation/deletion, etc.

The storage units framework will implement generic handling for all
kinds of storage backends:

- LVM physical volumes
- LVM volume groups
- File-based storage directories
- any other future storage method

There will be a generic list of methods that each storage unit type
will provide, like:

- list of storage units of this type
- check status of the storage unit

Additionally, there will be specific methods for each storage unit type,
for example:

- enable/disable allocations on a specific PV
- file storage directory creation/deletion
- VG consistency fixing

This will allow a much better modelling and unification of the various
RPC calls related to backend storage pools in the future. Ganeti 2.1 is
intended to add the basics of the framework, and not necessarily move
all the current VG/file-based operations to it.

Note that while we model both LVM PVs and LVM VGs, the framework will
**not** model any relationship between the different types. In other
words, we model neither inheritance nor stacking, since this is
too complex for our needs. While a ``vgreduce`` operation on an LVM VG
could actually remove a PV from it, this will not be handled at the
framework level, but at the individual operation level. The goal is that
this is a lightweight framework for abstracting the different storage
operations, and not for modelling the storage hierarchy.
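
As an illustration only, a minimal sketch of such a storage unit abstraction
could look like the following; the class and method names are assumptions
made for this example, not the final API::

  class StorageUnit(object):
    """Base class for one type of storage unit (illustrative sketch)."""

    def List(self):
      """Return the names of all storage units of this type on this node."""
      raise NotImplementedError()

    def GetStatus(self, name):
      """Return status information for the named storage unit."""
      raise NotImplementedError()


  class LvmPvStorage(StorageUnit):
    """LVM physical volumes, with PV-specific operations."""

    def SetAllocatable(self, name, allocatable):
      """Enable or disable allocations on a specific PV."""
      raise NotImplementedError()


  class FileStorage(StorageUnit):
    """File-based storage directories."""

    def CreateDirectory(self, path):
      raise NotImplementedError()

    def RemoveDirectory(self, path):
      raise NotImplementedError()

The per-type RPC calls (volume group listing, file storage directory
creation/deletion, etc.) would then be routed through such classes instead of
being hardcoded.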

Locking improvements
~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

The class ``LockSet`` (see ``lib/locking.py``) is a container for one or
many ``SharedLock`` instances. It provides an interface to add/remove locks
and to acquire and subsequently release any number of those locks contained
in it.

Locks in a ``LockSet`` are always acquired in alphabetic order. Due to the
way we're using locks for nodes and instances (the single cluster lock isn't
affected by this issue) this can lead to long delays when acquiring locks if
another operation tries to acquire multiple locks but has to wait for yet
another operation.

In the following demonstration we assume the instance locks ``inst1``,
``inst2``, ``inst3`` and ``inst4`` exist.

#. Operation A grabs the lock for instance ``inst4``.
#. Operation B wants to acquire all instance locks in alphabetic order, but
   it has to wait for ``inst4``.
#. Operation C tries to lock ``inst1``, but it has to wait until
   Operation B (which is trying to acquire all locks) releases the lock
   again.
#. Operation A finishes and releases the lock on ``inst4``. Operation B can
   continue and eventually releases all locks.
#. Operation C can get the ``inst1`` lock and finishes.

Technically there's no need for Operation C to wait for Operation A, and
subsequently Operation B, to finish. Operation B can't continue until
Operation A is done (it has to wait for ``inst4``) anyway.

Proposed changes
++++++++++++++++

Non-blocking lock acquiring
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Acquiring locks for OpCode execution is always done in blocking mode: the
calls won't return until the lock has successfully been acquired (or an
error occurred, although we won't cover that case here).

``SharedLock`` and ``LockSet`` must be able to be acquired in a non-blocking
way. They must support a timeout and abort trying to acquire the lock(s)
after the specified amount of time.

Retry acquiring locks
^^^^^^^^^^^^^^^^^^^^^

To prevent other operations from waiting for a long time, such as described
in the demonstration before, ``LockSet`` must not keep locks for a prolonged
period of time when trying to acquire two or more locks. Instead it should,
with an increasing timeout for acquiring all locks, release all locks again
and sleep some time if it fails to acquire all requested locks.

A good timeout value needs to be determined. In any case ``LockSet`` should
proceed to acquire locks in blocking mode after a few (unsuccessful)
attempts to acquire all requested locks.

One proposal for the timeout is to use ``2**tries`` seconds, where ``tries``
is the number of unsuccessful tries.

In the demonstration before this would allow Operation C to continue after
Operation B unsuccessfully tried to acquire all locks and released all
acquired locks (``inst1``, ``inst2`` and ``inst3``) again.
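
The following sketch illustrates the intended acquire-and-retry behaviour.
It is a simplified illustration, assuming a hypothetical
``acquire(names, timeout=...)`` call that returns ``None`` (after releasing
any partially acquired locks) when not all locks could be obtained in time;
it is not the actual ``LockSet`` implementation::

  import time

  def AcquireWithBackoff(lockset, names, max_tries=4):
    """Try to acquire all locks, backing off and retrying on failure."""
    for tries in range(max_tries):
      acquired = lockset.acquire(names, timeout=2 ** tries)
      if acquired is not None:
        return acquired
      # Not all locks could be acquired within the timeout; everything has
      # been released again, letting other operations (e.g. Operation C in
      # the demonstration above) make progress in the meantime.
      time.sleep(1)
    # After a few unsuccessful attempts, fall back to blocking mode.
    return lockset.acquire(names)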

Other solutions discussed
+++++++++++++++++++++++++

There was also some discussion on going one step further and extending the
job queue (see ``lib/jqueue.py``) to select the next task for a worker
depending on whether it can acquire the necessary locks. While this may
reduce the number of necessary worker threads and/or increase throughput on
large clusters with many jobs, it also brings many potential problems, such
as contention and increased memory usage, with it. As this would be an
extension of the changes proposed before, it could be implemented at a later
point in time, but we decided to stay with the simpler solution for now.


Feature changes
---------------

Ganeti Confd
~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

In Ganeti 2.0 all nodes are equal, but some are more equal than others. In
particular they are divided between "master", "master candidates" and
"normal" nodes. (Moreover they can be offline or drained, but this is not
important for the current discussion.) In general the whole configuration is
only replicated to master candidates, and some partial information is spread
to all nodes via ssconf.

This change was done so that the most frequent Ganeti operations didn't need
to contact all nodes, and so clusters could become bigger. If we want more
information to be available on all nodes, we need to add more ssconf values,
which counter-balances that change, or to talk with the master node, which is
not designed to happen now, and requires its availability.

Information such as the instance->primary_node mapping will be needed on all
nodes, and we also want to make sure services external to the cluster can
query this information as well. This information must be available at all
times, so we can't query it through RAPI, which would be a single point of
failure, as it's only available on the master.


Proposed changes
++++++++++++++++

In order to allow fast and highly available read-only access to some
configuration values, we'll create a new ganeti-confd daemon, which will run
on master candidates. This daemon will talk via UDP, and authenticate
messages using HMAC with a cluster-wide shared key. This key will be
generated at cluster init time, stored on the cluster alongside the ganeti
SSL keys, and readable only by root.

An interested client can query a value by making a request to a subset of the
cluster master candidates. It will then wait to get a few responses, and use
the one with the highest configuration serial number. Since the configuration
serial number is increased each time the ganeti config is updated, and the
serial number is included in all answers, this can be used to make sure to
use the most recent answer, in case some master candidates are stale or in
the middle of a configuration update.

In order to prevent replay attacks, queries will contain the current unix
timestamp according to the client, and the server will verify that its own
timestamp is within a 5-minute range of it (this requires synchronized
clocks, which is a good idea anyway). Queries will also contain a "salt"
which they expect the answers to be sent with, and clients are supposed to
accept only answers which contain a salt generated by them.

The configuration daemon will be able to answer simple queries such as:

- master candidates list
- master node
- offline nodes
- instance list
- instance primary nodes

Wire protocol
^^^^^^^^^^^^^

A confd query will look like this, on the wire::

  {
    "msg": "{\"type\": 1,
             \"rsalt\": \"9aa6ce92-8336-11de-af38-001d093e835f\",
             \"protocol\": 1,
             \"query\": \"node1.example.com\"}\n",
    "salt": "1249637704",
    "hmac": "4a4139b2c3c5921f7e439469a0a45ad200aead0f"
  }

Detailed explanation of the various fields:

- 'msg' contains a JSON-encoded query, its fields are:

  - 'protocol', integer, is the confd protocol version (initially just
    constants.CONFD_PROTOCOL_VERSION, with a value of 1)
  - 'type', integer, is the query type. For example "node role by name" or
    "node primary ip by instance ip". Constants will be provided for the
    actual available query types.
  - 'query', string, is the search key. For example an IP, or a node name.
  - 'rsalt', string, is the required response salt. The client must use it to
    recognize which answer it's getting.

- 'salt' must be the current unix timestamp, according to the client. Servers
  can refuse messages which have a wrong timing, according to their
  configuration and clock.
- 'hmac' is an HMAC signature of salt+msg, with the cluster HMAC key
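
As an illustration of the signing scheme, a client could build such a query
as in the sketch below. The helper name is hypothetical, the digest algorithm
(SHA-1) is an assumption inferred from the 40-character example digests
above, and a real client would use the provided query-type constants rather
than a bare integer::

  import hashlib
  import hmac
  import json
  import time
  import uuid

  def BuildConfdQuery(hmac_key, query_type, query):
    """Return the on-the-wire JSON for a confd query, plus its rsalt.

    hmac_key is the cluster-wide shared key, as bytes.
    """
    rsalt = str(uuid.uuid4())
    msg = json.dumps({
      "protocol": 1,          # constants.CONFD_PROTOCOL_VERSION
      "type": query_type,     # e.g. "node role by name"
      "query": query,         # search key, e.g. an IP or a node name
      "rsalt": rsalt,         # salt the answer must be sent back with
      })
    salt = str(int(time.time()))  # replay protection, checked by the server
    signature = hmac.new(hmac_key, (salt + msg).encode("utf-8"),
                         hashlib.sha1).hexdigest()
    wire = json.dumps({"msg": msg, "salt": salt, "hmac": signature})
    return wire, rsalt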

If an answer comes back (which is optional, since confd works over UDP) it
will be in this format::

  {
    "msg": "{\"status\": 0,
             \"answer\": 0,
             \"serial\": 42,
             \"protocol\": 1}\n",
    "salt": "9aa6ce92-8336-11de-af38-001d093e835f",
    "hmac": "aaeccc0dff9328fdf7967cb600b6a80a6a9332af"
  }

Where:

- 'msg' contains a JSON-encoded answer, its fields are:

  - 'protocol', integer, is the confd protocol version (initially just
    constants.CONFD_PROTOCOL_VERSION, with a value of 1)
  - 'status', integer, is the error code. Initially just 0 for 'ok' or 1 for
    'error' (in which case answer contains an error detail, rather than an
    answer), but in the future it may be expanded to have more meanings
    (e.g. 2, the answer is compressed)
  - 'answer' is the actual answer. Its type and meaning are query specific.
    For example for "node primary ip by instance ip" queries it will be a
    string containing an IP address, for "node role by name" queries it will
    be an integer which encodes the role (master, candidate, drained,
    offline) according to constants.

- 'salt' is the requested salt from the query. A client can use it to
  recognize what query the answer is answering.
- 'hmac' is an HMAC signature of salt+msg, with the cluster HMAC key
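
On the receiving side, a client would verify the signature and the response
salt, and keep the answer with the highest configuration serial number. A
sketch of this (again assuming SHA-1 for the HMAC, as the example digests
suggest, and a hypothetical helper name) could be::

  import hashlib
  import hmac
  import json

  def PickBestAnswer(hmac_key, rsalt, wire_answers):
    """Verify a list of raw confd answers and return the most recent one."""
    best = None
    for wire in wire_answers:
      packet = json.loads(wire)
      expected = hmac.new(hmac_key,
                          (packet["salt"] + packet["msg"]).encode("utf-8"),
                          hashlib.sha1).hexdigest()
      if not hmac.compare_digest(expected, packet["hmac"]):
        continue  # signature doesn't match the cluster key, discard
      if packet["salt"] != rsalt:
        continue  # not an answer to our query, discard
      answer = json.loads(packet["msg"])
      if best is None or answer["serial"] > best["serial"]:
        best = answer  # prefer the highest configuration serial number
    return best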

Redistribute Config
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently LURedistributeConfig triggers a copy of the updated configuration
file to all master candidates and of the ssconf files to all nodes. There are
other files which are maintained manually but which are important to keep in
sync. These are:

- rapi SSL key certificate file (rapi.pem) (on master candidates)
- rapi user/password file rapi_users (on master candidates)

Furthermore there are some files which are hypervisor specific but we may
want to keep in sync:

- the xen-hvm hypervisor uses one shared file for all vnc passwords, and
  copies the file once, during node add. This design is subject to revision
  to be able to have different passwords for different groups of instances
  via the use of hypervisor parameters, and to allow xen-hvm and kvm to use
  an equal system to provide password-protected vnc sessions. In general,
  though, it would be useful if the vnc password files were copied as well,
  to avoid unwanted vnc password changes on instance failover/migrate.

Optionally the admin may want to also ship files such as the global xend.conf
file, and the network scripts to all nodes.

Proposed changes
++++++++++++++++

RedistributeConfig will be changed to copy also the rapi files, and to call
every enabled hypervisor asking for a list of additional files to copy. Users
will have the possibility to populate a file containing a list of files to be
distributed; this file will be propagated as well. Such a solution is really
simple to implement and easily usable by scripts.

This code will also be shared (via tasklets or by other means, if tasklets
are not ready for 2.1) with the AddNode and SetNodeParams LUs (so that the
relevant files will be automatically shipped to new master candidates as they
are set).

VNC Console Password
~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently just the xen-hvm hypervisor supports setting a password to connect
to the instances' VNC console, and has one common password stored in a file.

This doesn't allow different passwords for different instances/groups of
instances, and makes it necessary to remember to copy the file around the
cluster when the password changes.

Proposed changes
++++++++++++++++

We'll change the VNC password file to a vnc_password_file hypervisor
parameter. This way it can have a cluster default, but also a different value
for each instance. The VNC enabled hypervisors (xen and kvm) will publish all
the password files in use through the cluster so that a redistribute-config
will ship them to all nodes (see the Redistribute Config proposed changes
above).

The current VNC_PASSWORD_FILE constant will be removed, but its value will be
used as the default HV_VNC_PASSWORD_FILE value, thus retaining backwards
compatibility with 2.0.

The code to export the list of VNC password files from the hypervisors to
RedistributeConfig will be shared between the KVM and xen-hvm hypervisors.

Disk/Net parameters
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently disks and network interfaces have a few tweakable options and all
the rest is left to a default we chose. We're finding more and more that we
need to tweak some of these parameters, for example to disable barriers for
DRBD devices, or to allow striping for the LVM volumes.

Moreover for many of these parameters it would be nice to have cluster-wide
defaults, and then be able to change them per disk/interface.

Proposed changes
++++++++++++++++

We will add new cluster-level diskparams and netparams, which will contain
all the tweakable parameters. All values which have a sensible cluster-wide
default will go into this new structure, while parameters which have unique
values will not.

Example of network parameters:
  - mode: bridge/route
  - link: for mode "bridge" the bridge to connect to, for mode "route" it can
    contain the routing table, or the destination interface

Example of disk parameters:
  - stripe: lvm stripes
  - stripe_size: lvm stripe size
  - meta_flushes: drbd, enable/disable metadata "barriers"
  - data_flushes: drbd, enable/disable data "barriers"

Some parameters are bound to be disk-type specific (drbd vs. lvm vs. files)
or hypervisor specific (nic models for example), but for now they will all
live in the same structure. Each component is supposed to validate only the
parameters it knows about, and ganeti itself will make sure that no "globally
unknown" parameters are added, and that no parameters have overridden
meanings for different components.

The parameters will be kept, as for the BEPARAMS, in a "default" category,
which will allow us to expand on it by creating instance "classes" in the
future. Instance classes are not a feature we plan to implement in 2.1,
though.
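
As an illustration of the intended shape of the data (field names follow the
examples above, but the exact structure is an assumption of this sketch, not
the final format), the cluster-level defaults and a per-NIC override could
look like::

  # Cluster-wide defaults, kept in a "default" category as for BEPARAMS.
  cluster_netparams = {
    "default": {
      "mode": "bridge",      # bridge/route
      "link": "xen-br0",     # bridge to connect to when mode is "bridge"
      },
    }

  cluster_diskparams = {
    "default": {
      "stripe": 1,           # lvm stripes
      "stripe_size": 64,     # lvm stripe size
      "meta_flushes": True,  # drbd metadata "barriers"
      "data_flushes": True,  # drbd data "barriers"
      },
    }

  # A single NIC only needs to override what differs from the defaults.
  nic_params_override = {"mode": "route", "link": "100"}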

Non bridged instances support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently each instance NIC must be connected to a bridge, and if the bridge
is not specified the default cluster one is used. This makes it impossible to
use the vif-route xen network scripts, or other alternative mechanisms that
don't need a bridge to work.

Proposed changes
++++++++++++++++

The new "mode" network parameter will distinguish between bridged interfaces
and routed ones.

When mode is "bridge" the "link" parameter will contain the bridge the
instance should be connected to, effectively keeping behaviour as it is
today. The value has been migrated from a nic field to a parameter to allow
for an easier manipulation of the cluster default.

When mode is "route" the ip field of the interface will become mandatory, to
allow for a route to be set. In the future we may also want to accept
multiple IPs or IP/mask values for this purpose. We will evaluate possible
meanings of the link parameter to signify a routing table to be used, which
would allow for insulation between instance groups (as today happens for
different bridges).
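
For instance (values illustrative; ``mode`` and ``link`` are the new
parameters, while ``ip`` remains a NIC field), a bridged NIC and a routed NIC
could be configured along the lines of::

  # Bridged: "link" names the bridge, ip stays optional.
  bridged_nic = {"mode": "bridge", "link": "xen-br0", "ip": None}

  # Routed: ip becomes mandatory; "link" may later select a routing table.
  routed_nic = {"mode": "route", "link": "100", "ip": "192.0.2.10"}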

For now we won't add a parameter to specify which network script gets called
for which instance, so in a mixed cluster the network script must be able to
handle both cases. The default kvm vif script will be changed to do so. (Xen
doesn't have a ganeti-provided script, so nothing will be done for that
hypervisor.)

Introducing persistent UUIDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

Some objects in the Ganeti configuration are tracked by their name
while also supporting renames. This creates an extra difficulty,
because neither Ganeti nor external management tools can then track
the actual entity, and due to the name change it behaves like a new
one.

Proposed changes part 1
+++++++++++++++++++++++

We will change Ganeti to use UUIDs for entity tracking, but in a
staggered way. In 2.1, we will simply add a "uuid" attribute to each
of the instances, the nodes and the cluster itself. This will be
reported on instance creation for nodes, and on node adds for the
nodes. It will of course be available for querying via the
OpQueryNodes/Instance and cluster information, and via RAPI as well.

Note that Ganeti will not provide any way to change this attribute.

Upgrading from Ganeti 2.0 will automatically add a "uuid" attribute
to all entities missing it.


Proposed changes part 2
+++++++++++++++++++++++

In the next release (e.g. 2.2), the tracking of objects will change
from the name to the UUID internally, and externally Ganeti will
accept both forms of identification; e.g. an RAPI call would be made
either against ``/2/instances/foo.bar`` or against
``/2/instances/bb3b2e42…``. Since an FQDN must have at least a dot,
and dots are not valid characters in UUIDs, we will not have namespace
issues.
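
A lookup helper could therefore distinguish the two forms with a simple
check, sketched below (the helper name and surrounding API are assumptions)::

  def LooksLikeUuid(identifier):
    """Return True if the identifier is a UUID rather than an FQDN.

    An FQDN must contain at least one dot, while dots are not valid
    characters in UUIDs, so the two namespaces cannot collide.
    """
    return "." not in identifier

  # e.g. LooksLikeUuid("foo.bar") -> False (instance name)
  #      LooksLikeUuid("9aa6ce92-8336-11de-af38-001d093e835f") -> True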

Another change here is that node identification (during cluster
operations/queries like master startup, "am I the master?" and
similar) could be done via UUIDs, which is more stable than the current
hostname-based scheme.

Internal tracking refers to the way the configuration is stored; a
DRBD disk of an instance refers to the node name (so that IPs can be
changed easily), but this is still a problem for name changes; thus
these will be changed to point to the node UUID to ease renames.

The advantage of this change (after the second round of changes) is
that node rename becomes trivial, whereas today node rename would
require a complete lock of all instances.


Automated disk repairs infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Replacing defective disks in an automated fashion is quite difficult with the
current version of Ganeti. These changes will introduce additional
functionality and interfaces to simplify automating disk replacements on a
Ganeti node.

Fix node volume group
+++++++++++++++++++++

This is the most difficult addition, as it can lead to data loss if it's not
properly safeguarded.

The operation must be done only when all the other nodes that have instances
in common with the target node are fine, i.e. this is the only node with
problems, and also we have to double-check that all instances on this node
have at least a good copy of the data.

This might mean that we have to enhance the GetMirrorStatus calls, and
introduce a smarter version that can tell us more about the status of an
instance.

Stop allocation on a given PV
+++++++++++++++++++++++++++++

This is somewhat simple. First we need a "list PVs" opcode (and its
associated logical unit) and then a set PV status opcode/LU. These in
combination should allow both checking and changing the disk/PV status.

Instance disk status
++++++++++++++++++++

This new opcode or opcode change must list the instance-disk-index and node
combinations of the instance together with their status. This will allow
determining what part of the instance is broken (if any).

Repair instance
+++++++++++++++

This new opcode/LU/RAPI call will run ``replace-disks -p`` as needed, in
order to fix the instance status. It only affects primary instances;
secondaries can just be moved away.

Migrate node
++++++++++++

This new opcode/LU/RAPI call will take over the current ``gnt-node migrate``
code and run migrate for all instances on the node.

Evacuate node
+++++++++++++

This new opcode/LU/RAPI call will take over the current ``gnt-node evacuate``
code and run replace-secondary with an iallocator script for all instances on
the node.


External interface changes
--------------------------

OS API
~~~~~~

The OS API of Ganeti 2.0 has been built with extensibility in mind. Since we
pass everything as environment variables it's a lot easier to send new
information to the OSes without breaking backwards compatibility. This
section of the design outlines the proposed extensions to the API and their
implementation.

API Version Compatibility Handling
++++++++++++++++++++++++++++++++++

In 2.1 there will be a new OS API version (e.g. 15), which should be mostly
compatible with API 10, except for some newly added variables. Since it's
easy not to pass some variables we'll be able to handle Ganeti 2.0 OSes by
just filtering out the newly added pieces of information. We will still
encourage OSes to declare support for the new API after checking that the new
variables don't cause any conflict for them, and we will drop API 10 support
after Ganeti 2.1 has been released.

New Environment variables
+++++++++++++++++++++++++

Some variables have never been added to the OS API but would definitely be
useful for the OSes. We plan to add an INSTANCE_HYPERVISOR variable to allow
the OS to make changes relevant to the virtualization the instance is going
to use. Since this field is immutable for each instance, the OS can tailor
the install without having to make sure the instance can run under any
virtualization technology.

We also want the OS to know the particular hypervisor parameters, to be able
to customize the install even more. Since the parameters can change, though,
we will pass them only as an "FYI": if an OS ties some instance functionality
to the value of a particular hypervisor parameter, manual changes or a
reinstall may be needed to adapt the instance to the new environment. This is
not a regression compared to today, because even if the OSes are left blind
about this information, sometimes they still need to make compromises and
cannot satisfy all possible parameter values.

OS Variants
+++++++++++

Currently we are witnessing some degree of "OS proliferation" just to change
a simple installation behaviour. This means that the same OS gets installed
on the cluster multiple times, with different names, to customize just one
installation behaviour. Usually such OSes try to share as much as possible
through symlinks, but this still causes complications on the user side,
especially when multiple parameters must be cross-matched.

For example today if you want to install debian etch, lenny or squeeze you
probably need to install the debootstrap OS multiple times, changing its
configuration file, and calling it debootstrap-etch, debootstrap-lenny or
debootstrap-squeeze. Furthermore if you have for example a "server" and a
"development" environment which installs different packages/configuration
files and must be available for all installs you'll probably end up with
debootstrap-etch-server, debootstrap-etch-dev, debootstrap-lenny-server,
debootstrap-lenny-dev, etc. Crossing more than two parameters quickly becomes
unmanageable.

In order to avoid this we plan to make OSes more customizable, by allowing
each OS to declare a list of variants which can be used to customize it. The
variants list is mandatory and must be written, one variant per line, in the
new "variants.list" file inside the main os dir. At least one variant must be
supported. When choosing the OS exactly one variant will have to be
specified, and will be encoded in the OS name as <OS-name>+<variant>. As for
today, it will be possible to change an instance's OS at creation or install
time.
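
A small sketch of how the name encoding and the "variants.list" check could
work (the helper names are illustrative, not the final implementation)::

  def SplitOsName(os_name):
    """Split "debootstrap+lenny" into ("debootstrap", "lenny")."""
    if "+" in os_name:
      name, variant = os_name.split("+", 1)
    else:
      name, variant = os_name, None
    return name, variant

  def LoadVariants(variants_file):
    """Read the declared variants, one per line, from variants.list."""
    with open(variants_file) as fd:
      return [line.strip() for line in fd if line.strip()]

  # e.g. SplitOsName("debootstrap+lenny") -> ("debootstrap", "lenny");
  # a variant missing from variants.list would be rejected unless the user
  # explicitly overrides the check.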

The 2.1 OS list will be the combination of each OS, plus its supported
variants. This will cause the name proliferation to remain, but at least the
internal OS code will be simplified to just parsing the passed variant,
without the need for symlinks or code duplication.

Also we expect the OSes to declare only "interesting" variants, but to accept
some non-declared ones which a user will be able to pass in by overriding the
checks ganeti does. This will be useful for allowing some variations to be
used without polluting the OS list (per-OS documentation should list all
supported variants). If a variant which is not internally supported is forced
through, the OS scripts should abort.

In the future (post 2.1) we may want to move to fully fledged parameters, all
orthogonal to each other (for example "architecture" (i386, amd64), "suite"
(lenny, squeeze, ...), etc.), as opposed to the variant, which is a single
parameter for which you need a different variant for every combination you
want to support. In this case we envision the variants being moved inside
Ganeti and associated with lists of parameter->value associations, which will
then be passed to the OS.


IAllocator changes
~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

The iallocator interface allows creation of instances without manually
specifying nodes, but instead by specifying plugins which will do the
required computations and produce a valid node list.

However, the interface is quite awkward to use:

- one cannot set a 'default' iallocator script
- one cannot use it to easily test if allocation would succeed
- some new functionality, such as rebalancing clusters and calculating
  capacity estimates, is needed

Proposed changes
++++++++++++++++

There are two areas of improvement proposed:

- improving the use of the current interface
- extending the IAllocator API to cover more automation


Default iallocator names
^^^^^^^^^^^^^^^^^^^^^^^^

The cluster will hold, for each type of iallocator, a (possibly empty)
list of modules that will be used automatically.

If the list is empty, the behaviour will remain the same.

If the list has one entry, then ganeti will behave as if
'--iallocator' was specified on the command line, i.e. use this
allocator by default. If the user however passed nodes, those will be
used in preference.

If the list has multiple entries, they will be tried in order until
one gives a successful answer.
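
A sketch of this selection logic (simplified, with assumed helper and
parameter names) could look like::

  def ChooseNodes(default_allocators, run_allocator, request, user_nodes=None):
    """Pick nodes via the configured default iallocators, if any.

    ``run_allocator(name, request)`` is assumed to invoke the named plugin
    and raise an exception on failure.
    """
    if user_nodes:
      # Nodes explicitly passed by the user take preference.
      return user_nodes
    if not default_allocators:
      # Empty list: the behaviour remains the same as today.
      raise ValueError("no nodes given and no default iallocator configured")
    errors = []
    for name in default_allocators:
      try:
        return run_allocator(name, request)
      except Exception as err:
        # Try the next allocator in the list until one succeeds.
        errors.append("%s: %s" % (name, err))
    raise ValueError("all default iallocators failed: " + "; ".join(errors))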

Dry-run allocation
^^^^^^^^^^^^^^^^^^

The create instance LU will get a new 'dry-run' option that will just
simulate the placement, and return the chosen node-lists after running
all the usual checks.

Cluster balancing
^^^^^^^^^^^^^^^^^

Instance additions/removals/moves can create a situation where load on the
nodes is not spread equally. For this, a new iallocator mode will be
implemented called ``balance`` in which the plugin, given the current
cluster state, and a maximum number of operations, will need to
compute the instance relocations needed in order to achieve a "better"
cluster (for whatever definition of "better" the script uses).

Cluster capacity calculation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In this mode, called ``capacity``, given an instance specification and
the current cluster state (similar to the ``allocate`` mode), the
plugin needs to return:

- how many instances can be allocated on the cluster with that specification
- on which nodes these will be allocated (in order)