=================
Ganeti 2.1 design
=================

This document describes the major changes in Ganeti 2.1 compared to
the 2.0 version.

The 2.1 version will be a relatively small release. Its main aim is to avoid
changing too much of the core code, while addressing issues and adding new
features and improvements over 2.0, in a timely fashion.

.. contents:: :depth: 3

Objective
=========

Ganeti 2.1 will add features to help further automate cluster
operations, further improve scalability to even bigger clusters, and
make it easier to debug the Ganeti core.

Background
==========

Overview
========

Detailed design
===============

As for 2.0, we divide the 2.1 design into three areas:

- core changes, which affect the master daemon/job queue/locking or
  all/most logical units
- logical unit/feature changes
- external interface changes (e.g. command line, OS API, hooks, ...)

Core changes
------------

Storage units modelling
~~~~~~~~~~~~~~~~~~~~~~~

Currently, Ganeti has a good model of the block devices for instances
(e.g. LVM logical volumes, files, DRBD devices, etc.) but none of the
storage pools that provide the space for these front-end devices. For
example, there are hardcoded inter-node RPC calls for volume group
listing, file storage creation/deletion, etc.

The storage units framework will implement generic handling for all
kinds of storage backends:

- LVM physical volumes
- LVM volume groups
- File-based storage directories
- any other future storage method

There will be a generic list of methods that each storage unit type
will provide, like:

- list of storage units of this type
- check status of the storage unit

Additionally, there will be methods specific to each storage unit type,
for example:

- enable/disable allocations on a specific PV
- file storage directory creation/deletion
- VG consistency fixing
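
A minimal sketch of how such a framework could look (all class and method
names here are illustrative assumptions, not the actual Ganeti API)::

  class _StorageBase:
    """Base class for a storage unit type (hypothetical)."""

    def List(self):
      """Return all storage units of this type."""
      raise NotImplementedError()

    def GetStatus(self, name):
      """Return the status of the given storage unit."""
      raise NotImplementedError()

  class LvmPvStorage(_StorageBase):
    """LVM physical volume handling."""

    def SetAllocatable(self, name, allocatable):
      """Type-specific method: enable/disable allocations on a PV."""
      # could wrap e.g. "pvchange -x y|n <name>"
      raise NotImplementedError()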

This will allow much better modeling and unification of the various
RPC calls related to backend storage pools in the future. Ganeti 2.1 is
intended to add the basics of the framework, and not necessarily move
all the current VG/file-based operations to it.

Note that while we model both LVM PVs and LVM VGs, the framework will
**not** model any relationship between the different types. In other
words, we model neither inheritance nor stacking, since this is too
complex for our needs. While a ``vgreduce`` operation on a LVM VG could
actually remove a PV from it, this will not be handled at the framework
level, but at the individual operation level. The goal is a lightweight
framework for abstracting the different storage operations, not for
modelling the storage hierarchy.

Feature changes
---------------

Ganeti Confd
~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

In Ganeti 2.0 all nodes are equal, but some are more equal than others. In
particular, they are divided between "master", "master candidates" and
"normal" nodes. (Moreover, they can be offline or drained, but this is not
important for the current discussion.) In general the whole configuration is
only replicated to master candidates, and some partial information is spread
to all nodes via ssconf.

This change was done so that the most frequent Ganeti operations didn't need
to contact all nodes, and so clusters could become bigger. If we want more
information to be available on all nodes, we either need to add more ssconf
values, which counter-balances that change, or to talk to the master node,
which is not designed to happen now and requires the master's availability.

Information such as the instance->primary_node mapping will be needed on all
nodes, and we also want to make sure services external to the cluster can
query this information as well. This information must be available at all
times, so we can't query it through RAPI, which would be a single point of
failure, as it's only available on the master.

Proposed changes
++++++++++++++++

In order to allow fast and highly available read-only access to some
configuration values, we'll create a new ganeti-confd daemon, which will run
on master candidates. This daemon will talk via UDP, and authenticate
messages using HMAC with a cluster-wide shared key.

An interested client can query a value by making a request to a subset of the
cluster master candidates. It will then wait to get a few responses, and use
the one with the highest configuration serial number (which will always be
included in the answer). If some candidates are stale, or we are in the
middle of a configuration update, various master candidates may return
different values, and this scheme should make sure the most recent
information is used.

In order to prevent replay attacks, queries will contain the current unix
timestamp according to the client, and the server will verify that its own
timestamp is within a 5-minute range of it (this requires synchronized
clocks, which is a good idea anyway). Queries will also contain a "salt",
which they expect the answers to be sent with, and clients are supposed to
accept only answers which contain the salt they generated.

The configuration daemon will be able to answer simple queries such as:

- master candidates list
- master node
- offline nodes
- instance list
- instance primary nodes
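
A sketch of how a request could be authenticated and checked for freshness
(the wire format and all names here are illustrative assumptions, not the
actual confd protocol)::

  import hashlib
  import hmac
  import json
  import time

  MAX_SKEW = 300  # seconds; queries outside this window are rejected

  def sign_query(key, query, salt):
    """Client side: build a signed query (key is the shared bytes key)."""
    payload = json.dumps({"query": query, "salt": salt,
                          "timestamp": int(time.time())})
    mac = hmac.new(key, payload.encode(), hashlib.sha1).hexdigest()
    return json.dumps({"payload": payload, "hmac": mac})

  def check_query(key, raw):
    """Server side: verify the HMAC and the timestamp window."""
    msg = json.loads(raw)
    mac = hmac.new(key, msg["payload"].encode(), hashlib.sha1).hexdigest()
    if not hmac.compare_digest(mac, msg["hmac"]):
      raise ValueError("bad HMAC")
    payload = json.loads(msg["payload"])
    if abs(time.time() - payload["timestamp"]) > MAX_SKEW:
      raise ValueError("query timestamp out of range")
    return payload

The client would then collect the answers carrying its salt and keep the one
with the highest configuration serial number.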

Redistribute Config
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently LURedistributeConfig triggers a copy of the updated configuration
file to all master candidates and of the ssconf files to all nodes. There are
other files which are maintained manually but which are important to keep in
sync. These are:

- rapi SSL key certificate file (rapi.pem) (on master candidates)
- rapi user/password file rapi_users (on master candidates)

Furthermore there are some files which are hypervisor specific but which we
may want to keep in sync:

- the xen-hvm hypervisor uses one shared file for all vnc passwords, and
  copies the file once, during node add. This design is subject to revision
  to be able to have different passwords for different groups of instances
  via the use of hypervisor parameters, and to allow xen-hvm and kvm to use
  the same system to provide password-protected vnc sessions. In general,
  though, it would be useful if the vnc password files were copied as well,
  to avoid unwanted vnc password changes on instance failover/migrate.

Optionally the admin may also want to ship files such as the global xend.conf
file and the network scripts to all nodes.

Proposed changes
++++++++++++++++

RedistributeConfig will be changed to also copy the rapi files, and to call
every enabled hypervisor asking for a list of additional files to copy. We
may also want to add a global list of files on the cluster object, which will
be propagated as well, or a hook to calculate them. If we implement this
feature there should be a way to specify whether a file must be shipped to
all nodes or just to master candidates.

This code will also be shared (via tasklets or by other means, if tasklets
are not ready for 2.1) with the AddNode and SetNodeParams LUs (so that the
relevant files will be automatically shipped to new master candidates as they
are set).
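
A sketch of the per-hypervisor hook this implies (the method name is an
illustrative assumption)::

  class BaseHypervisor:
    @classmethod
    def GetAdditionalConfigFiles(cls):
      """Return paths RedistributeConfig should ship along."""
      return []

  class XenHvmHypervisor(BaseHypervisor):
    @classmethod
    def GetAdditionalConfigFiles(cls):
      # the shared VNC password file (the path is an assumption)
      return ["/etc/ganeti/vnc-cluster-password"]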

VNC Console Password
~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently just the xen-hvm hypervisor supports setting a password to connect
to the instances' VNC consoles, and it has one common password stored in a
file.

This doesn't allow different passwords for different instances/groups of
instances, and makes it necessary to remember to copy the file around the
cluster when the password changes.

Proposed changes
++++++++++++++++

We'll change the VNC password file to a vnc_password_file hypervisor
parameter. This way it can have a cluster default, but also a different value
for each instance. The VNC-enabled hypervisors (xen and kvm) will publish all
the password files in use through the cluster so that a redistribute-config
will ship them to all nodes (see the Redistribute Config proposed changes
above).

The current VNC_PASSWORD_FILE constant will be removed, but its value will be
used as the default HV_VNC_PASSWORD_FILE value, thus retaining backwards
compatibility with 2.0.

The code to export the list of VNC password files from the hypervisors to
RedistributeConfig will be shared between the KVM and xen-hvm hypervisors.
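
A sketch of the resulting parameter handling (paths and dict layout are
illustrative assumptions)::

  # cluster-wide defaults per hypervisor; the old constant's value is
  # reused as the default, for backwards compatibility with 2.0
  cluster_hvparams = {
    "xen-hvm": {"vnc_password_file": "/etc/ganeti/vnc-cluster-password"},
  }

  # per-instance override, giving this instance its own password file
  instance_hvparams = {"vnc_password_file": "/etc/ganeti/vnc-group1-password"}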

Disk/Net parameters
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently disks and network interfaces have a few tweakable options and all
the rest is left to a default we chose. We're finding that we increasingly
need to tweak some of these parameters, for example to disable barriers for
DRBD devices, or to allow striping for the LVM volumes.

Moreover, for many of these parameters it would be nice to have cluster-wide
defaults, and then be able to change them per disk/interface.

Proposed changes
++++++++++++++++

We will add new cluster-level diskparams and netparams, which will contain
all the tweakable parameters. All values which have a sensible cluster-wide
default will go into this new structure, while parameters which have unique
values will not.

Example of network parameters:

  - mode: bridge/route
  - link: for mode "bridge" the bridge to connect to, for mode "route" it can
    contain the routing table, or the destination interface

Example of disk parameters:

  - stripe: lvm stripes
  - stripe_size: lvm stripe size
  - meta_flushes: drbd, enable/disable metadata "barriers"
  - data_flushes: drbd, enable/disable data "barriers"

Some parameters are bound to be disk-type specific (drbd vs. lvm vs. files)
or hypervisor specific (nic models, for example), but for now they will all
live in the same structure. Each component is supposed to validate only the
parameters it knows about, and ganeti itself will make sure that no "globally
unknown" parameters are added, and that no parameters have overridden
meanings for different components.

The parameters will be kept, as for the BEPARAMS, in a "default" category,
which will allow us to expand on it by creating instance "classes" in the
future. Instance classes are not a feature we plan to implement in 2.1,
though.
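
A sketch of how these structures could look in the configuration (keys and
values are illustrative assumptions)::

  diskparams = {
    "default": {
      "stripe": 1,
      "stripe_size": "64k",
      "meta_flushes": True,
      "data_flushes": True,
    },
  }

  netparams = {
    "default": {
      "mode": "bridge",
      "link": "xen-br0",
    },
  }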

Non-bridged instances support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently each instance NIC must be connected to a bridge, and if the bridge
is not specified the default cluster one is used. This makes it impossible to
use the vif-route xen network scripts, or other alternative mechanisms that
don't need a bridge to work.

Proposed changes
++++++++++++++++

The new "mode" network parameter will distinguish between bridged interfaces
and routed ones.

When mode is "bridge" the "link" parameter will contain the bridge the
instance should be connected to, effectively keeping things as they are
today. The value has been migrated from a nic field to a parameter to allow
for easier manipulation of the cluster default.

When mode is "route" the ip field of the interface will become mandatory, to
allow for a route to be set. In the future we may also want to accept
multiple IPs or IP/mask values for this purpose. We will evaluate possible
meanings of the link parameter to signify a routing table to be used, which
would allow for isolation between instance groups (as happens today for
different bridges).

For now we won't add a parameter to specify which network script gets called
for which instance, so in a mixed cluster the network script must be able to
handle both cases. The default kvm vif script will be changed to do so. (Xen
doesn't have a Ganeti-provided script, so nothing will be done for that
hypervisor.)
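
A sketch of the logic a mode-aware network script needs (environment variable
names are illustrative assumptions)::

  #!/usr/bin/env python
  # Hypothetical kvm vif script handling both NIC modes.
  import os
  import subprocess
  import sys

  def main():
    iface = sys.argv[1]
    mode = os.environ.get("MODE", "bridge")

    if mode == "bridge":
      subprocess.check_call(["brctl", "addif", os.environ["LINK"], iface])
    elif mode == "route":
      # in routed mode the instance IP is mandatory
      subprocess.check_call(["ip", "link", "set", iface, "up"])
      subprocess.check_call(["ip", "route", "add",
                             "%s/32" % os.environ["IP"], "dev", iface])
    else:
      sys.exit("unknown NIC mode: %s" % mode)

  if __name__ == "__main__":
    main()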

External interface changes
--------------------------

OS API
~~~~~~

The OS API of Ganeti 2.0 has been built with extensibility in mind. Since we
pass everything as environment variables it's a lot easier to send new
information to the OSes without breaking backwards compatibility. This
section of the design outlines the proposed extensions to the API and their
implementation.

API Version Compatibility Handling
++++++++++++++++++++++++++++++++++

In 2.1 there will be a new OS API version (e.g. 15), which should be mostly
compatible with API 10, except for some newly added variables. Since it's
easy to simply not pass some variables, we'll be able to handle Ganeti 2.0
OSes by filtering out the newly added pieces of information. We will still
encourage OSes to declare support for the new API after checking that the new
variables don't cause any conflict for them, and we will drop API 10 support
after Ganeti 2.1 has been released.

New Environment variables
+++++++++++++++++++++++++

Some variables have never been added to the OS API but would definitely be
useful for the OSes. We plan to add an INSTANCE_HYPERVISOR variable to allow
the OS to make changes relevant to the virtualization the instance is going
to use. Since this field is immutable for each instance, the OS can tailor
the install to it without having to ensure that the instance can run under
any virtualization technology.

We also want the OS to know the particular hypervisor parameters, to be able
to customize the install even more. Since the parameters can change, though,
we will pass them only as an "FYI": if an OS ties some instance functionality
to the value of a particular hypervisor parameter, manual changes or a
reinstall may be needed to adapt the instance to the new environment. This is
not a regression compared to today, because even if the OSes are left blind
about this information, sometimes they still need to make compromises and
cannot satisfy all possible parameter values.
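
A sketch of how an OS create script could branch on the new variable (the
branches shown are illustrative)::

  import os

  hypervisor = os.environ.get("INSTANCE_HYPERVISOR")
  if hypervisor == "xen-pvm":
    # e.g. install a paravirtualization-friendly kernel setup
    pass
  elif hypervisor == "kvm":
    # e.g. set up a serial console getty
    pass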

OS Flavours
+++++++++++

Currently we are witnessing a degree of "OS proliferation" just to change a
simple installation behavior. This means that the same OS gets installed on
the cluster multiple times, with different names, to customize just one
installation behavior. Usually such OSes try to share as much as possible
through symlinks, but this still causes complications on the user side,
especially when multiple parameters must be cross-matched.

For example, today if you want to install debian etch, lenny or squeeze you
probably need to install the debootstrap OS multiple times, changing its
configuration file, and calling it debootstrap-etch, debootstrap-lenny or
debootstrap-squeeze. Furthermore, if you have for example a "server" and a
"development" environment which install different packages/configuration
files and must be available for all installs, you'll probably end up with
debootstrap-etch-server, debootstrap-etch-dev, debootstrap-lenny-server,
debootstrap-lenny-dev, etc. Crossing more than two parameters quickly becomes
unmanageable.

In order to avoid this we plan to make OSes more customizable, by allowing
each OS to declare a list of flavours which can be used to customize it. The
flavours list is mandatory for new API OSes and must contain at least one
supported flavour. When choosing the OS, exactly one flavour will have to be
specified, and it will be encoded in the OS name as <OS-name>+<flavour>. As
today, it will be possible to change an instance's OS at creation or install
time.

The 2.1 OS list will be the combination of each OS plus its supported
flavours. This will cause the name proliferation to remain, but at least the
internal OS code will be simplified to just parsing the passed flavour,
without the need for symlinks or code duplication.
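
A minimal sketch of the name handling this implies (the function name is an
illustrative assumption)::

  def SplitOsName(name):
    """Split an "<OS-name>+<flavour>" value into its components.

    Names without a flavour return None as the flavour part.
    """
    if "+" in name:
      os_name, flavour = name.split("+", 1)
      return os_name, flavour
    return name, None

  # e.g. SplitOsName("debootstrap+etch") -> ("debootstrap", "etch")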

We also expect the OSes to declare only "interesting" flavours, but to accept
some non-declared ones which a user will be able to pass in by overriding the
checks ganeti does. This will be useful for allowing some variations to be
used without polluting the OS list (per-OS documentation should list all
supported flavours). If a flavour which is not internally supported is forced
through, the OS scripts should abort.

In the future (post 2.1) we may want to move to full-fledged orthogonal
parameters for the OSes. In this case we envision the flavours to be moved
inside of Ganeti and be associated with lists of parameter->value
associations, which will then be passed to the OS.

IAllocator changes
~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

The iallocator interface allows creation of instances without manually
specifying nodes, but instead by specifying plugins which will do the
required computations and produce a valid node list.

However, the interface is quite awkward to use:

- one cannot set a 'default' iallocator script
- one cannot use it to easily test if allocation would succeed
- some new functionality, such as rebalancing clusters and calculating
  capacity estimates, is needed

Proposed changes
++++++++++++++++

There are two areas of improvement proposed:

- improving the use of the current interface
- extending the IAllocator API to cover more automation

Default iallocator names
^^^^^^^^^^^^^^^^^^^^^^^^

The cluster will hold, for each type of iallocator, a (possibly empty)
list of modules that will be used automatically.

If the list is empty, the behaviour will remain the same.

If the list has one entry, then ganeti will behave as if '--iallocator' was
specified on the command line, i.e. use this allocator by default. If the
user however passed nodes, those will be used in preference.

If the list has multiple entries, they will be tried in order until one
gives a successful answer.
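
A sketch of the selection logic described above (all names are illustrative
assumptions)::

  def ChooseNodes(user_nodes, default_allocators, request):
    """Prefer explicit nodes, else try the default allocators in order."""
    if user_nodes:
      return user_nodes
    for name in default_allocators:
      result = RunIAllocator(name, request)  # hypothetical runner
      if result.success:
        return result.nodes
    raise RuntimeError("no default iallocator could satisfy the request")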

Dry-run allocation
^^^^^^^^^^^^^^^^^^

The create instance LU will get a new 'dry-run' option that will just
simulate the placement, and return the chosen node-lists after running
all the usual checks.

Cluster balancing
^^^^^^^^^^^^^^^^^

Instance adds/removals/moves can create a situation where load on the
nodes is not spread equally. For this, a new iallocator mode called
``balance`` will be implemented, in which the plugin, given the current
cluster state and a maximum number of operations, will need to compute
the instance relocations needed in order to achieve a "better" cluster
(by whatever metric the script believes is better).

434
Cluster capacity calculation
435
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
436

    
437
In this mode, called ``capacity``, given an instance specification and
438
the current cluster state (similar to the ``allocate`` mode), the
439
plugin needs to return:
440

    
441
- how many instances can be allocated on the cluster with that specification
442
- on which nodes these will be allocated (in order)
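
A sketch of what a ``capacity`` answer could contain, shown as a parsed
Python structure (the exact keys are illustrative assumptions, not a
finalized protocol)::

  {
    "success": True,
    "info": "can fit 12 more instances of this specification",
    "result": {
      "count": 12,
      "nodes": [["node1", "node2"], ["node3", "node4"]],
    },
  }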