=================
Ganeti 2.1 design
=================

This document describes the major changes in Ganeti 2.1 compared to
the 2.0 version.

The 2.1 version will be a relatively small release. Its main aim is to avoid
changing too much of the core code, while addressing issues and adding new
features and improvements over 2.0, in a timely fashion.

.. contents:: :depth: 3

Objective
=========

Ganeti 2.1 will add features to help further automation of cluster
operations, further improve scalability to even bigger clusters, and make
it easier to debug the Ganeti core.

Background
==========

Overview
========

Detailed design
===============

As for 2.0 we divide the 2.1 design into three areas:

- core changes, which affect the master daemon/job queue/locking or
  all/most logical units
- logical unit/feature changes
- external interface changes (e.g. command line, OS API, hooks, ...)

Core changes
------------

Storage units modelling
~~~~~~~~~~~~~~~~~~~~~~~

Currently, Ganeti has a good model of the block devices for instances
(e.g. LVM logical volumes, files, DRBD devices, etc.) but none of the
storage pools that are providing the space for these front-end
devices. For example, there are hardcoded inter-node RPC calls for
volume group listing, file storage creation/deletion, etc.

The storage units framework will implement a generic handling for all
kinds of storage backends:

- LVM physical volumes
- LVM volume groups
- File-based storage directories
- any other future storage method

There will be a generic list of methods that each storage unit type
will provide, like:

- list of storage units of this type
- check status of the storage unit

Additionally, there will be specific methods for each storage unit type, for
example:

- enable/disable allocations on a specific PV
- file storage directory creation/deletion
- VG consistency fixing

This will allow a much better modelling and unification of the various
RPC calls related to backend storage pools in the future. Ganeti 2.1 is
intended to add the basics of the framework, and not necessarily move
all the current VG/file-based operations to it.

Note that while we model both LVM PVs and LVM VGs, the framework will
**not** model any relationship between the different types. In other
words, we model neither inheritance nor stacking, since this is
too complex for our needs. While a ``vgreduce`` operation on an LVM VG
could actually remove a PV from it, this will not be handled at the
framework level, but at the individual operation level. The goal is that
this is a lightweight framework for abstracting the different storage
operations, not for modelling the storage hierarchy.

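As a purely illustrative sketch (the class and method names here are
hypothetical, not the actual Ganeti 2.1 API), the common interface could
look like this::

  class StorageUnit(object):
    """Hypothetical base class for one storage unit type."""

    def List(self):
      """Return the storage units of this type present on the node."""
      raise NotImplementedError()

    def GetStatus(self, name):
      """Return the status of a single storage unit."""
      raise NotImplementedError()

    def Modify(self, name, **params):
      """Run a type-specific operation on a storage unit."""
      raise NotImplementedError()


  class LvmPvStorage(StorageUnit):
    """LVM physical volumes, one possible concrete type."""

    def Modify(self, name, allocatable=True):
      # would wrap e.g. "pvchange -x y|n <name>" to enable or disable
      # allocations on this PV
      pass

Each backend (PVs, VGs, file directories) would then implement only the
methods that make sense for it, leaving the rest unimplemented.
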
Feature changes
---------------

Ganeti Confd
~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

In Ganeti 2.0 all nodes are equal, but some are more equal than others. In
particular they are divided between "master", "master candidates" and
"normal" nodes. (Moreover they can be offline or drained, but this is not
important for the current discussion.) In general the whole configuration is
only replicated to master candidates, and some partial information is spread
to all nodes via ssconf.

This change was done so that the most frequent Ganeti operations didn't need
to contact all nodes, and so clusters could become bigger. If we want more
information to be available on all nodes, we either need to add more ssconf
values, which counter-balances that change, or to talk to the master node,
which is not designed to happen now, and requires its availability.

Information such as the instance->primary_node mapping will be needed on all
nodes, and we also want to make sure services external to the cluster can
query this information as well. This information must be available at all
times, so we can't query it through RAPI, which would be a single point of
failure, as it's only available on the master.

Proposed changes
++++++++++++++++

In order to allow fast and highly available read-only access to some
configuration values, we'll create a new ganeti-confd daemon, which will run
on master candidates. This daemon will talk via UDP, and authenticate
messages using HMAC with a cluster-wide shared key. This key will be
generated at cluster init time, and stored on the cluster alongside the
Ganeti SSL keys, and readable only by root.

An interested client can query a value by making a request to a subset of
the cluster master candidates. It will then wait to get a few responses, and
use the one with the highest configuration serial number. Since the
configuration serial number is increased each time the ganeti config is
updated, and the serial number is included in all answers, this can be used
to make sure to use the most recent answer, in case some master candidates
are stale or in the middle of a configuration update.

In order to prevent replay attacks queries will contain the current unix
timestamp according to the client, and the server will verify that its own
timestamp is within the same 5-minute range (this requires synchronized
clocks, which is a good idea anyway). Queries will also contain a "salt"
which they expect the answers to be sent with, and clients are supposed to
accept only answers which contain a salt generated by them.

The configuration daemon will be able to answer simple queries such as:

- master candidates list
- master node
- offline nodes
- instance list
- instance primary nodes

Wire protocol
^^^^^^^^^^^^^

A confd query will look like this, on the wire::

  {
    "msg": "{\"type\": 1,
             \"rsalt\": \"9aa6ce92-8336-11de-af38-001d093e835f\",
             \"protocol\": 1,
             \"query\": \"node1.example.com\"}\n",
    "salt": "1249637704",
    "hmac": "4a4139b2c3c5921f7e439469a0a45ad200aead0f"
  }

Detailed explanation of the various fields:

- 'msg' contains a JSON-encoded query, its fields are:

  - 'protocol', integer, is the confd protocol version (initially just
    constants.CONFD_PROTOCOL_VERSION, with a value of 1)
  - 'type', integer, is the query type. For example "node role by name" or
    "node primary ip by instance ip". Constants will be provided for the
    actual available query types.
  - 'query', string, is the search key. For example an IP, or a node name.
  - 'rsalt', string, is the required response salt. The client must use it
    to recognize which answer it's getting.

- 'salt' must be the current unix timestamp, according to the client.
  Servers can refuse messages which have a wrong timing, according to their
  configuration and clock.
- 'hmac' is an HMAC signature of salt+msg, with the cluster HMAC key

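For illustration only, a client could build and sign such a query along the
lines of the following Python sketch (not actual Ganeti code; the query type
value and the key handling are placeholders)::

  import hmac
  import json
  import time
  import uuid
  from hashlib import sha1

  def PrepareConfdQuery(hmac_key, qtype, query):
    """Build a signed confd query datagram (illustrative only)."""
    rsalt = str(uuid.uuid4())          # salt we expect back in the answer
    msg = json.dumps({
      "protocol": 1,                   # constants.CONFD_PROTOCOL_VERSION
      "type": qtype,                   # e.g. the "node role by name" constant
      "query": query,                  # e.g. an IP or a node name
      "rsalt": rsalt,
      })
    salt = str(int(time.time()))       # replay protection, +/- 5 minutes
    signature = hmac.new(hmac_key, salt + msg, sha1).hexdigest()
    return rsalt, json.dumps({"msg": msg, "salt": salt, "hmac": signature})

The resulting string would then be sent over UDP to a few master candidates.
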
If an answer comes back (which is optional, since confd works over UDP) it
will be in this format::

  {
    "msg": "{\"status\": 0,
             \"answer\": 0,
             \"serial\": 42,
             \"protocol\": 1}\n",
    "salt": "9aa6ce92-8336-11de-af38-001d093e835f",
    "hmac": "aaeccc0dff9328fdf7967cb600b6a80a6a9332af"
  }

Where:

- 'msg' contains a JSON-encoded answer, its fields are:

  - 'protocol', integer, is the confd protocol version (initially just
    constants.CONFD_PROTOCOL_VERSION, with a value of 1)
  - 'status', integer, is the error code. Initially just 0 for 'ok' or 1 for
    'error' (in which case 'answer' contains an error detail, rather than an
    answer), but in the future it may be expanded to have more meanings
    (e.g. 2: the answer is compressed)
  - 'answer' is the actual answer. Its type and meaning is query specific.
    For example for "node primary ip by instance ip" queries it will be a
    string containing an IP address, for "node role by name" queries it will
    be an integer which encodes the role (master, candidate, drained,
    offline) according to constants.

- 'salt' is the requested salt from the query. A client can use it to
  recognize what query the answer is answering.
- 'hmac' is an HMAC signature of salt+msg, with the cluster HMAC key

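On the client side, answers gathered from several master candidates can then
be verified and reconciled. The following sketch (again illustrative, not
actual Ganeti code) shows the intended logic: keep only authenticated
answers carrying our salt, and prefer the highest configuration serial::

  import hmac
  import json
  from hashlib import sha1

  def PickBestAnswer(hmac_key, rsalt, datagrams):
    """Return the most recent verified answer among several responses."""
    best = None
    for data in datagrams:
      wrapper = json.loads(data)
      expected = hmac.new(hmac_key, wrapper["salt"] + wrapper["msg"],
                          sha1).hexdigest()
      if wrapper["hmac"] != expected:
        continue                     # not signed with the cluster key
      if wrapper["salt"] != rsalt:
        continue                     # answer to some other query of ours
      answer = json.loads(wrapper["msg"])
      if best is None or answer["serial"] > best["serial"]:
        best = answer                # higher serial == more recent config
    return best
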
Redistribute Config
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently LURedistributeConfig triggers a copy of the updated configuration
file to all master candidates and of the ssconf files to all nodes. There
are other files which are maintained manually but which are important to
keep in sync. These are:

- the RAPI SSL key/certificate file (rapi.pem) (on master candidates)
- the RAPI user/password file rapi_users (on master candidates)

Furthermore there are some files which are hypervisor specific but which we
may want to keep in sync:

- the xen-hvm hypervisor uses one shared file for all VNC passwords, and
  copies the file once, during node add. This design is subject to revision
  to be able to have different passwords for different groups of instances
  via the use of hypervisor parameters, and to allow xen-hvm and kvm to use
  the same system to provide password-protected VNC sessions. In general,
  though, it would be useful if the VNC password files were copied as well,
  to avoid unwanted VNC password changes on instance failover/migrate.

Optionally the admin may want to also ship files such as the global
xend.conf file, and the network scripts, to all nodes.

Proposed changes
++++++++++++++++

RedistributeConfig will be changed to also copy the RAPI files, and to call
every enabled hypervisor asking for a list of additional files to copy.
Users will have the possibility to populate a file containing a list of
files to be distributed; this file will be propagated as well. Such a
solution is really simple to implement and is easily usable by scripts.

This code will also be shared (via tasklets or by other means, if tasklets
are not ready for 2.1) with the AddNode and SetNodeParams LUs (so that the
relevant files will be automatically shipped to new master candidates as
they are set).

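Conceptually, gathering the extra files could look like the sketch below
(function names, paths and the format of the user-maintained list are
illustrative only, not the final implementation)::

  import os

  def ComputeDistributedFiles(hv_ancillary_files, user_list_path):
    """Collect extra files to ship with RedistributeConfig (sketch).

    hv_ancillary_files: mapping of hypervisor name -> list of extra files
    user_list_path: admin-maintained list of files, one path per line
    """
    files = set([
      "/var/lib/ganeti/rapi.pem",    # RAPI SSL certificate (example path)
      "/var/lib/ganeti/rapi_users",  # RAPI user/password file (example path)
      ])
    # every enabled hypervisor contributes its own files, e.g. VNC
    # password files
    for extra in hv_ancillary_files.values():
      files.update(extra)
    # plus the user-populated list of files to distribute
    if os.path.exists(user_list_path):
      for line in open(user_list_path):
        line = line.strip()
        if line and not line.startswith("#"):
          files.add(line)
    return files
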
VNC Console Password
~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently just the xen-hvm hypervisor supports setting a password to connect
to the instances' VNC console, and has one common password stored in a file.

This doesn't allow different passwords for different instances/groups of
instances, and makes it necessary to remember to copy the file around the
cluster when the password changes.

Proposed changes
++++++++++++++++

We'll change the VNC password file to a vnc_password_file hypervisor
parameter. This way it can have a cluster default, but also a different
value for each instance. The VNC-enabled hypervisors (xen and kvm) will
publish all the password files in use through the cluster so that a
redistribute-config will ship them to all nodes (see the Redistribute Config
proposed changes above).

The current VNC_PASSWORD_FILE constant will be removed, but its value will
be used as the default HV_VNC_PASSWORD_FILE value, thus retaining backwards
compatibility with 2.0.

The code to export the list of VNC password files from the hypervisors to
RedistributeConfig will be shared between the KVM and xen-hvm hypervisors.

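For example (purely hypothetical paths and values), a cluster-wide default
and a per-instance override could look like::

  # cluster-level hypervisor parameters for xen-hvm
  cluster_hvparams = {
    "vnc_password_file": "/etc/ganeti/vnc-cluster-password",
  }

  # override for one instance (or group of instances) with its own file
  instance_hvparams = {
    "vnc_password_file": "/etc/ganeti/vnc-groupA-password",
  }
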
Disk/Net parameters
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently disks and network interfaces have a few tweakable options and all
the rest is left to a default we chose. We're finding that we need more and
more to tweak some of these parameters, for example to disable barriers for
DRBD devices, or allow striping for the LVM volumes.

Moreover for many of these parameters it will be nice to have cluster-wide
defaults, and then be able to change them per disk/interface.

Proposed changes
++++++++++++++++

We will add new cluster-level diskparams and netparams, which will contain
all the tweakable parameters. All values which have a sensible cluster-wide
default will go into this new structure, while parameters which have unique
values will not.

Example of network parameters:

  - mode: bridge/route
  - link: for mode "bridge" the bridge to connect to, for mode "route" it
    can contain the routing table, or the destination interface

Example of disk parameters:

  - stripe: lvm stripes
  - stripe_size: lvm stripe size
  - meta_flushes: drbd, enable/disable metadata "barriers"
  - data_flushes: drbd, enable/disable data "barriers"

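As a purely illustrative sketch (names and values are examples, not the
final schema), the cluster-level defaults could be stored as simple
dictionaries along these lines::

  # cluster-wide defaults; individual disks/NICs may override entries
  diskparams = {
    "stripe": 1,             # LVM stripes
    "stripe_size": 64,       # LVM stripe size
    "meta_flushes": True,    # DRBD metadata "barriers"
    "data_flushes": True,    # DRBD data "barriers"
  }

  netparams = {
    "mode": "bridge",        # or "route"
    "link": "xen-br0",       # bridge to attach to when mode is "bridge"
  }
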
Some parameters are bound to be disk-type specific (DRBD vs. LVM vs. files)
or hypervisor specific (NIC models for example), but for now they will all
live in the same structure. Each component is supposed to validate only the
parameters it knows about, and ganeti itself will make sure that no
"globally unknown" parameters are added, and that no parameters have
overridden meanings for different components.

The parameters will be kept, as for the BEPARAMS, in a "default" category,
which will allow us to expand on it by creating instance "classes" in the
future. Instance classes are not a feature we plan to implement in 2.1,
though.

Non-bridged instances support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently each instance NIC must be connected to a bridge, and if the bridge
is not specified the default cluster one is used. This makes it impossible
to use the vif-route xen network scripts, or other alternative mechanisms
that don't need a bridge to work.

Proposed changes
++++++++++++++++

The new "mode" network parameter will distinguish between bridged interfaces
and routed ones.

When mode is "bridge" the "link" parameter will contain the bridge the
instance should be connected to, effectively keeping things as they are
today. The value has been migrated from a NIC field to a parameter to allow
for an easier manipulation of the cluster default.

When mode is "route" the ip field of the interface will become mandatory, to
allow for a route to be set. In the future we may also want to accept
multiple IPs or IP/mask values for this purpose. We will evaluate possible
meanings of the link parameter to signify a routing table to be used, which
would allow for isolation between instance groups (as today happens for
different bridges).

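As an illustration (the concrete values are examples only), a bridged and a
routed NIC could then end up with parameters such as::

  # bridged NIC: "link" names the bridge to attach to
  nic_bridged = {"mode": "bridge", "link": "xen-br0", "ip": None}

  # routed NIC: "ip" becomes mandatory, "link" could name a routing table
  nic_routed = {"mode": "route", "link": "100", "ip": "192.0.2.10"}
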
For now we won't add a parameter to specify which network script gets called
for which instance, so in a mixed cluster the network script must be able to
handle both cases. The default kvm vif script will be changed to do so. (Xen
doesn't have a ganeti-provided script, so nothing will be done for that
hypervisor.)

Automated disk repairs infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Replacing defective disks in an automated fashion is quite difficult with
the current version of Ganeti. These changes will introduce additional
functionality and interfaces to simplify automating disk replacements on a
Ganeti node.

Fix node volume group
+++++++++++++++++++++

This is the most difficult addition, as it can lead to data loss if it's not
properly safeguarded.

The operation must be done only when all the other nodes that have instances
in common with the target node are fine, i.e. this is the only node with
problems, and also we have to double-check that all instances on this node
have at least a good copy of the data.

This might mean that we have to enhance the GetMirrorStatus calls, and
introduce a smarter version that can tell us more about the status of an
instance.

Stop allocation on a given PV
+++++++++++++++++++++++++++++

This is somewhat simple. First we need a "list PVs" opcode (and its
associated logical unit) and then a set PV status opcode/LU. These in
combination should allow both checking and changing the disk/PV status.

Instance disk status
++++++++++++++++++++

This new opcode or opcode change must list the instance-disk-index and node
combinations of the instance together with their status. This will allow
determining what part of the instance is broken (if any).

Repair instance
+++++++++++++++

This new opcode/LU/RAPI call will run ``replace-disks -p`` as needed, in
order to fix the instance status. It only affects primary instances;
secondaries can just be moved away.

Migrate node
++++++++++++

This new opcode/LU/RAPI call will take over the current ``gnt-node migrate``
code and run migrate for all instances on the node.

Evacuate node
+++++++++++++

This new opcode/LU/RAPI call will take over the current ``gnt-node evacuate``
code and run replace-secondary with an iallocator script for all instances
on the node.

External interface changes
--------------------------

OS API
~~~~~~

The OS API of Ganeti 2.0 has been built with extensibility in mind. Since we
pass everything as environment variables it's a lot easier to send new
information to the OSes without breaking backwards compatibility. This
section of the design outlines the proposed extensions to the API and their
implementation.

API Version Compatibility Handling
++++++++++++++++++++++++++++++++++

In 2.1 there will be a new OS API version (e.g. 15), which should be mostly
compatible with API 10, except for some newly added variables. Since it's
easy to simply not pass some variables we'll be able to handle Ganeti 2.0
OSes by just filtering out the newly added pieces of information. We will
still encourage OSes to declare support for the new API after checking that
the new variables don't cause any conflict for them, and we will drop API 10
support after Ganeti 2.1 has been released.

New Environment variables
+++++++++++++++++++++++++

Some variables have never been added to the OS API but would definitely be
useful for the OSes. We plan to add an INSTANCE_HYPERVISOR variable to allow
the OS to make changes relevant to the virtualization the instance is going
to use. Since this field is immutable for each instance, the OS can tailor
the install to it, without having to make sure the instance can run under
any virtualization technology.

We also want the OS to know the particular hypervisor parameters, to be able
to customize the install even more. Since the parameters can change, though,
we will pass them only as an "FYI": if an OS ties some instance
functionality to the value of a particular hypervisor parameter, manual
changes or a reinstall may be needed to adapt the instance to the new
environment. This is not a regression as of today, because even if the OSes
are left blind about this information, sometimes they still need to make
compromises and cannot satisfy all possible parameter values.

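An OS install script could then branch on this information; for example (a
minimal sketch: only INSTANCE_HYPERVISOR is proposed above, everything else
in this snippet is hypothetical)::

  #!/usr/bin/python
  import os
  import sys

  hypervisor = os.environ.get("INSTANCE_HYPERVISOR")

  if hypervisor == "xen-pvm":
    # paravirtualized Xen: e.g. set up a Xen-capable kernel/pygrub
    fully_virtualized = False
  elif hypervisor in ("xen-hvm", "kvm"):
    # fully virtualized: e.g. install a bootloader into the MBR instead
    fully_virtualized = True
  else:
    sys.stderr.write("Unknown or missing INSTANCE_HYPERVISOR\n")
    sys.exit(1)
  # ...continue with the hypervisor-specific part of the install
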
OS Variants
+++++++++++

Currently we are witnessing some degree of "OS proliferation" just to change
a simple installation behavior. This means that the same OS gets installed
on the cluster multiple times, with different names, to customize just one
installation behavior. Usually such OSes try to share as much as possible
through symlinks, but this still causes complications on the user side,
especially when multiple parameters must be cross-matched.

For example today if you want to install Debian etch, lenny or squeeze you
probably need to install the debootstrap OS multiple times, changing its
configuration file, and calling it debootstrap-etch, debootstrap-lenny or
debootstrap-squeeze. Furthermore if you have for example a "server" and a
"development" environment which installs different packages/configuration
files and must be available for all installs you'll probably end up with
debootstrap-etch-server, debootstrap-etch-dev, debootstrap-lenny-server,
debootstrap-lenny-dev, etc. Crossing more than two parameters quickly
becomes unmanageable.

In order to avoid this we plan to make OSes more customizable, by allowing
each OS to declare a list of variants which can be used to customize it. The
variants list is mandatory and must be written, one variant per line, in the
new "variants.list" file inside the main OS dir. At least one variant must
be supported. When choosing the OS exactly one variant will have to be
specified, and will be encoded in the OS name as <OS-name>+<variant>. As for
today it will be possible to change an instance's OS at creation or install
time.

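For example, a debootstrap OS shipping a hypothetical "variants.list" such
as::

  etch
  lenny
  squeeze

would be selectable by the user as ``debootstrap+etch``,
``debootstrap+lenny`` or ``debootstrap+squeeze``.
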
The 2.1 OS list will be the combination of each OS, plus its supported
variants. This will cause the name proliferation to remain, but at least the
internal OS code will be simplified to just parsing the passed variant,
without the need for symlinks or code duplication.

Also we expect the OSes to declare only "interesting" variants, but to
accept some non-declared ones which a user will be able to pass in by
overriding the checks ganeti does. This will be useful for allowing some
variations to be used without polluting the OS list (per-OS documentation
should list all supported variants). If a variant which is not internally
supported is forced through, the OS scripts should abort.

In the future (post 2.1) we may want to move to full-fledged parameters, all
orthogonal to each other (for example "architecture" (i386, amd64), "suite"
(lenny, squeeze, ...), etc.), as opposed to the variant, which is a single
parameter and requires a different variant for each combination you want to
support. In this case we envision the variants to be moved inside of Ganeti
and be associated with lists of parameter->value associations, which will
then be passed to the OS.

IAllocator changes
~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

The iallocator interface allows creation of instances without manually
specifying nodes, but instead by specifying plugins which will do the
required computations and produce a valid node list.

However, the interface is quite awkward to use:

- one cannot set a 'default' iallocator script
- one cannot use it to easily test if allocation would succeed
- some new functionality, such as rebalancing clusters and calculating
  capacity estimates, is needed

Proposed changes
++++++++++++++++

There are two areas of improvement proposed:

- improving the use of the current interface
- extending the IAllocator API to cover more automation

Default iallocator names
^^^^^^^^^^^^^^^^^^^^^^^^

The cluster will hold, for each type of iallocator, a (possibly empty)
list of modules that will be used automatically.

If the list is empty, the behaviour will remain the same.

If the list has one entry, then ganeti will behave as if
'--iallocator' was specified on the command line, i.e. use this
allocator by default. If the user however passed nodes, those will be
used in preference.

If the list has multiple entries, they will be tried in order until
one gives a successful answer.

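The selection logic could look roughly like the following sketch (the
function name and its arguments are hypothetical, for illustration only)::

  def ChooseNodes(user_nodes, default_allocators, run_allocator, request):
    """Pick nodes for an instance, honouring the defaults list (sketch).

    run_allocator is a callable invoking one iallocator plugin and
    returning (success, nodes).
    """
    if user_nodes:
      # nodes explicitly passed by the user always take preference
      return user_nodes
    if not default_allocators:
      # empty list: same behaviour as today, the request must name nodes
      raise ValueError("no nodes given and no default iallocator set")
    for name in default_allocators:
      success, nodes = run_allocator(name, request)
      if success:
        return nodes
    raise ValueError("all default iallocators failed")
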
Dry-run allocation
^^^^^^^^^^^^^^^^^^

The create instance LU will get a new 'dry-run' option that will just
simulate the placement, and return the chosen node-lists after running
all the usual checks.

Cluster balancing
^^^^^^^^^^^^^^^^^

Instance additions/removals/moves can create a situation where load on the
nodes is not spread equally. For this, a new iallocator mode will be
implemented called ``balance`` in which the plugin, given the current
cluster state, and a maximum number of operations, will need to
compute the instance relocations needed in order to achieve a "better"
(for whatever definition of "better" the script uses) cluster.

Cluster capacity calculation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In this mode, called ``capacity``, given an instance specification and
the current cluster state (similar to the ``allocate`` mode), the
plugin needs to return:

- how many instances can be allocated on the cluster with that specification
- on which nodes these will be allocated (in order)