=================
Ganeti 2.1 design
=================

This document describes the major changes in Ganeti 2.1 compared to
the 2.0 version.

The 2.1 version will be a relatively small release. Its main aim is to avoid
changing too much of the core code, while addressing issues and adding new
features and improvements over 2.0, in a timely fashion.

.. contents:: :depth: 3

Objective
=========

Ganeti 2.1 will add features to help further automate cluster
operations, further improve scalability to even bigger clusters, and make it
easier to debug the Ganeti core.

Background
==========

Overview
========

Detailed design
===============

As for 2.0 we divide the 2.1 design into three areas:

- core changes, which affect the master daemon/job queue/locking or all/most
  logical units
- logical unit/feature changes
- external interface changes (eg. command line, OS API, hooks, ...)

Core changes
------------

Feature changes
---------------

Redistribute Config
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently LURedistributeConfig triggers a copy of the updated configuration
file to all master candidates and of the ssconf files to all nodes. There are
other files which are maintained manually but which are important to keep in
sync. These are:

- rapi SSL key certificate file (rapi.pem) (on master candidates)
- rapi user/password file rapi_users (on master candidates)

Furthermore there are some files which are hypervisor specific but we may want
to keep in sync:

- the xen-hvm hypervisor uses one shared file for all vnc passwords, and copies
  the file once, during node add. This design is subject to revision to be able
  to have different passwords for different groups of instances via the use of
  hypervisor parameters, and to allow xen-hvm and kvm to use an equal system to
  provide password-protected vnc sessions. In general, though, it would be
  useful if the vnc password files were copied as well, to avoid unwanted vnc
  password changes on instance failover/migrate.

Optionally the admin may want to also ship files such as the global xend.conf
file, and the network scripts to all nodes.

Proposed changes
++++++++++++++++

RedistributeConfig will be changed to also copy the rapi files, and to ask
every enabled hypervisor for a list of additional files to copy. We may also
want to add a global list of files on the cluster object, which will be
propagated as well, or a hook to calculate them. If we implement this feature
there should be a way to specify whether a file must be shipped to all nodes or
just master candidates.

This code will also be shared (via tasklets or by other means, if tasklets are
not ready for 2.1) with the AddNode and SetNodeParams LUs (so that the relevant
files will be automatically shipped to new master candidates as they are set).
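
As a rough illustration, the file-gathering step could look like the
following sketch; the ``GetAncillaryFiles`` method name and the attribute
names on the cluster object are assumptions of this sketch, not a settled
API:

```python
# Sketch: gathering the full set of files to redistribute. The method
# name GetAncillaryFiles and the cluster attribute names are hypothetical.

def compute_files_to_distribute(cluster, hypervisors):
    """Return (all_nodes_files, mc_only_files) to be shipped."""
    # Files kept in sync on master candidates only.
    mc_only = ["/var/lib/ganeti/config.data",
               "/var/lib/ganeti/rapi.pem",
               "/var/lib/ganeti/rapi_users"]
    # Files that must reach every node (e.g. the ssconf files).
    all_nodes = list(cluster.ssconf_files)
    # Ask each enabled hypervisor for extra files it wants shipped.
    for hv in hypervisors:
        all_nodes.extend(hv.GetAncillaryFiles())
    # A global, admin-maintained list held on the cluster object.
    all_nodes.extend(cluster.extra_files)
    return all_nodes, mc_only
```

Keeping the hypervisor-specific lists behind a per-hypervisor method is what
would let the same code serve RedistributeConfig, AddNode and SetNodeParams.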

VNC Console Password
~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently just the xen-hvm hypervisor supports setting a password to connect
to the instances' VNC console, and has one common password stored in a file.

This doesn't allow different passwords for different instances/groups of
instances, and makes it necessary to remember to copy the file around the
cluster when the password changes.

Proposed changes
++++++++++++++++

We'll change the VNC password file to a vnc_password_file hypervisor parameter.
This way it can have a cluster default, but also a different value for each
instance. The VNC enabled hypervisors (xen and kvm) will publish all the
password files in use through the cluster so that a redistribute-config will
ship them to all nodes (see the Redistribute Config proposed changes above).

The current VNC_PASSWORD_FILE constant will be removed, but its value will be
used as the default HV_VNC_PASSWORD_FILE value, thus retaining backwards
compatibility with 2.0.

The code to export the list of VNC password files from the hypervisors to
RedistributeConfig will be shared between the KVM and xen-hvm hypervisors.
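
The shared export step could be sketched as follows; the attribute names
(``hvparams`` on the cluster and instance objects) are illustrative
assumptions:

```python
# Sketch: collecting every vnc_password_file value in use, so that
# RedistributeConfig can ship the files. Attribute names are hypothetical.

def collect_vnc_password_files(cluster, instances):
    """Return the unique, sorted set of VNC password files in use."""
    files = set()
    # The cluster-wide default for the hypervisor parameter.
    default = cluster.hvparams.get("vnc_password_file")
    if default:
        files.add(default)
    # Per-instance overrides of the same parameter.
    for inst in instances:
        path = inst.hvparams.get("vnc_password_file")
        if path:
            files.add(path)
    return sorted(files)
```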

Disk/Net parameters
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently disks and network interfaces have a few tweakable options and all the
rest is left to a default we chose. We're finding that we need more and more to
tweak some of these parameters, for example to disable barriers for DRBD
devices, or allow striping for the LVM volumes.

Moreover for many of these parameters it will be nice to have cluster-wide
defaults, and then be able to change them per disk/interface.

Proposed changes
++++++++++++++++

We will add new cluster level diskparams and netparams, which will contain all
the tweakable parameters. All values which have a sensible cluster-wide default
will go into this new structure, while parameters which have unique values will
not.

Example of network parameters:

- mode: bridge/route
- link: for mode "bridge" the bridge to connect to, for mode "route" it can
  contain the routing table, or the destination interface

Example of disk parameters:

- stripe: lvm stripes
- stripe_size: lvm stripe size
- meta_flushes: drbd, enable/disable metadata "barriers"
- data_flushes: drbd, enable/disable data "barriers"

Some parameters are bound to be disk-type specific (drbd, vs lvm, vs files) or
hypervisor specific (nic models for example), but for now they will all live in
the same structure. Each component is supposed to validate only the parameters
it knows about, and ganeti itself will make sure that no "globally unknown"
parameters are added, and that no parameters have overridden meanings for
different components.

The parameters will be kept, as for the BEPARAMS, in a "default" category,
which will allow us to expand on by creating instance "classes" in the future.
Instance classes are not a feature we plan to implement in 2.1, though.
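
A possible shape for the new cluster-level structures, using only the example
parameters listed above (the concrete values are illustrative, not proposed
defaults):

```python
# Sketch: cluster-level netparams/diskparams kept under a "default"
# category, mirroring how BEPARAMS are stored. Values are illustrative.

CLUSTER_NETPARAMS = {
    "default": {
        "mode": "bridge",      # bridge/route
        "link": "xen-br0",     # bridge name, or routing table for "route"
    },
}

CLUSTER_DISKPARAMS = {
    "default": {
        "stripe": 1,           # number of lvm stripes
        "stripe_size": 64,     # lvm stripe size (illustrative unit: KiB)
        "meta_flushes": True,  # drbd metadata "barriers"
        "data_flushes": True,  # drbd data "barriers"
    },
}
```

Keeping everything under "default" leaves room for the future instance
"classes" without changing the on-disk layout again.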

Non bridged instances support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently each instance NIC must be connected to a bridge, and if the bridge is
not specified the default cluster one is used. This makes it impossible to use
the vif-route xen network scripts, or other alternative mechanisms that don't
need a bridge to work.

Proposed changes
++++++++++++++++

The new "mode" network parameter will distinguish between bridged interfaces
and routed ones.

When mode is "bridge" the "link" parameter will contain the bridge the instance
should be connected to, effectively keeping the current behaviour. The value
has been migrated from a nic field to a parameter to allow for an easier
manipulation of the cluster default.

When mode is "route" the ip field of the interface will become mandatory, to
allow for a route to be set. In the future we may also want to accept multiple
IPs or IP/mask values for this purpose. We will evaluate possible meanings of
the link parameter to signify a routing table to be used, which would allow for
insulation between instance groups (as happens today for different bridges).

For now we won't add a parameter to specify which network script gets called
for which instance, so in a mixed cluster the network script must be able to
handle both cases. The default kvm vif script will be changed to do so. (Xen
doesn't have a ganeti provided script, so nothing will be done for that
hypervisor.)
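
The dual-mode dispatch the kvm vif script will need can be sketched as below.
The real script is a shell script; this Python version only illustrates the
branching on "mode", and the command lines are simplified:

```python
# Sketch: build the commands a dual-mode vif script would run for a tap
# interface. Returning the commands (instead of executing them) keeps
# the sketch testable; the real script would execute ip/brctl directly.

def nic_commands(tap, mode, link, ip=None):
    """Return the commands needed to wire up a tap interface."""
    cmds = [["ip", "link", "set", tap, "up"]]
    if mode == "bridge":
        # "link" holds the bridge to attach the interface to.
        cmds.append(["brctl", "addif", link, tap])
    elif mode == "route":
        # In routed mode the instance IP is mandatory, so a host
        # route can be pointed at the tap interface.
        if not ip:
            raise ValueError("routed NICs require an IP")
        cmd = ["ip", "route", "add", "%s/32" % ip, "dev", tap]
        if link:
            # "link" may name the routing table to use.
            cmd.extend(["table", link])
        cmds.append(cmd)
    else:
        raise ValueError("unknown NIC mode %r" % mode)
    return cmds
```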

External interface changes
--------------------------

OS API
~~~~~~

The OS API of Ganeti 2.0 has been built with extensibility in mind. Since we
pass everything as environment variables it's a lot easier to send new
information to the OSes without breaking backwards compatibility. This section
of the design outlines the proposed extensions to the API and their
implementation.

API Version Compatibility Handling
++++++++++++++++++++++++++++++++++

In 2.1 there will be a new OS API version (eg. 15), which should be mostly
compatible with api 10, except for some newly added variables. Since it's easy
not to pass some variables we'll be able to handle Ganeti 2.0 OSes by just
filtering out the newly added pieces of information. We will still encourage
OSes to declare support for the new API after checking that the new variables
don't create any conflict for them, and we will drop api 10 support after
ganeti 2.1 has been released.
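
The filtering can be sketched as a simple table of which variables appeared in
which API version; the version number and variable list below are assumptions
for illustration:

```python
# Sketch: drop environment variables an older OS API version doesn't
# know about. The version number (15) and the variable list are
# illustrative, not the final definition.

NEW_VARS_BY_VERSION = {
    15: ["INSTANCE_HYPERVISOR"],  # plus the new HV parameter variables
}

def filter_env_for_os(env, os_api_version):
    """Return env restricted to what the OS's declared API supports."""
    filtered = dict(env)
    for version, names in NEW_VARS_BY_VERSION.items():
        if os_api_version < version:
            for name in names:
                filtered.pop(name, None)
    return filtered
```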

New Environment variables
+++++++++++++++++++++++++

Some variables have never been added to the OS api but would definitely be
useful for the OSes. We plan to add an INSTANCE_HYPERVISOR variable to allow
the OS to make changes relevant to the virtualization the instance is going to
use. Since this field is immutable for each instance, the OS can tailor the
install to it, without having to make sure the instance can run under any
virtualization technology.

We also want the OS to know the particular hypervisor parameters, to be able to
customize the install even more. Since the parameters can change, though, we
will pass them only as an "FYI": if an OS ties some instance functionality to
the value of a particular hypervisor parameter, manual changes or a reinstall
may be needed to adapt the instance to the new environment. This is not a
regression as of today, because even if the OSes are left blind about this
information, sometimes they still need to make compromises and cannot satisfy
all possible parameter values.
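
Building the extra environment could look like the sketch below; the
``INSTANCE_HV_<NAME>`` naming for the "FYI" parameters is an assumption of
this sketch, not a decided convention:

```python
# Sketch: the additional OS API environment for an install. The
# INSTANCE_HV_<NAME> prefix for hypervisor parameters is hypothetical.

def build_os_env(instance_name, hypervisor, hvparams):
    """Return the new OS API environment variables for an install."""
    env = {
        "INSTANCE_NAME": instance_name,
        # Immutable per instance: the OS may tailor the install to it.
        "INSTANCE_HYPERVISOR": hypervisor,
    }
    # Hypervisor parameters, passed as informational ("FYI") values
    # only, since they can change after the install.
    for name, value in hvparams.items():
        env["INSTANCE_HV_%s" % name.upper()] = str(value)
    return env
```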

OS Parameters
+++++++++++++

Currently we are witnessing some degree of "os proliferation" just to change
a simple installation behavior. This means that the same OS gets installed on
the cluster multiple times, with different names, to customize just one
installation behavior. Usually such OSes try to share as much as possible
through symlinks, but this still causes complications on the user side,
especially when multiple parameters must be cross-matched.

For example today if you want to install debian etch, lenny or squeeze you
probably need to install the debootstrap OS multiple times, changing its
configuration file, and calling it debootstrap-etch, debootstrap-lenny or
debootstrap-squeeze. Furthermore if you have for example a "server" and a
"development" environment which installs different packages/configuration files
and must be available for all installs you'll probably end up with
debootstrap-etch-server, debootstrap-etch-dev, debootstrap-lenny-server,
debootstrap-lenny-dev, etc. Crossing more than two parameters quickly becomes
unmanageable.

In order to avoid this we plan to make OSes more customizable, by allowing
arbitrary flags to be passed to them. These will be special "OS parameters"
which will be handled by Ganeti mostly as hypervisor or backend parameters are.
This slightly complicates the interface, but allows one OS (for example
"debootstrap") to be customized without requiring copies to perform different
actions.

Each OS will be able to declare which parameters it supports by listing them
one per line in a special "parameters" file in the OS dir. The parameters can
have a per-os cluster default, or be specified at instance creation time. They
will then be passed to the OS scripts as INSTANCE_OS_PARAMETER_<NAME> with
their specified value. The only value checking that will be performed is that
the os parameter value is a string, with only "normal" characters in it.

It will be impossible to change parameters for an instance, except at reinstall
time. Upon reinstall with a different OS the parameters will by default be
discarded and reset to the default (or passed) values, unless a special
--keep-known-os-parameters flag is passed.
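
Reading the declaration file and checking supplied values could be sketched as
follows; the character class standing in for "normal" characters is one
possible reading of that loose requirement, not a specification:

```python
# Sketch: loading an OS's declared parameters and validating supplied
# values. The allowed-character set is an illustrative assumption.

import os
import re

_VALID_VALUE = re.compile(r"^[A-Za-z0-9._+=:-]*$")

def load_declared_parameters(os_dir):
    """Read the one-name-per-line "parameters" file of an OS."""
    path = os.path.join(os_dir, "parameters")
    with open(path) as fobj:
        return [line.strip() for line in fobj if line.strip()]

def check_os_params(declared, params):
    """Validate user-supplied OS parameters against the declaration."""
    for name, value in params.items():
        if name not in declared:
            raise ValueError("OS does not declare parameter %r" % name)
        if not isinstance(value, str) or not _VALID_VALUE.match(value):
            raise ValueError("invalid value for parameter %r" % name)
```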

IAllocator changes
~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

The iallocator interface allows creation of instances without manually
specifying nodes, but instead by specifying plugins which will do the
required computations and produce a valid node list.

However, the interface is quite awkward to use:

- one cannot set a 'default' iallocator script
- one cannot use it to easily test if allocation would succeed
- some new functionality, such as rebalancing clusters and calculating
  capacity estimates, is needed

Proposed changes
++++++++++++++++

There are two areas of improvement proposed:

- improving the use of the current interface
- extending the IAllocator API to cover more automation


Default iallocator names
^^^^^^^^^^^^^^^^^^^^^^^^

The cluster will hold, for each type of iallocator, a (possibly empty)
list of modules that will be used automatically.

If the list is empty, the behaviour will remain the same.

If the list has one entry, then ganeti will behave as if
'--iallocator' was specified on the command line, i.e. use this
allocator by default. If the user however passed nodes, those will be
used in preference.

If the list has multiple entries, they will be tried in order until
one gives a successful answer.
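
The three rules above can be sketched as a small resolution function; the
function and allocator names are hypothetical:

```python
# Sketch: applying the default-iallocator rules for a placement.
# Names are hypothetical; the semantics follow the rules above.

def pick_allocation(default_allocators, run_allocator, user_nodes=None):
    """Resolve a node placement using the cluster's default allocators.

    default_allocators: the cluster's (possibly empty) ordered list.
    run_allocator: callable(name) -> node list, or None on failure.
    user_nodes: nodes explicitly passed by the user, if any.
    """
    # Explicitly passed nodes win over any configured allocator.
    if user_nodes:
        return user_nodes
    # Try each configured allocator in order until one succeeds.
    for name in default_allocators:
        result = run_allocator(name)
        if result is not None:
            return result
    # Empty list (or no successful answer): behave as today, i.e.
    # no automatic placement happens here.
    return None
```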

Dry-run allocation
^^^^^^^^^^^^^^^^^^

The create instance LU will get a new 'dry-run' option that will just
simulate the placement, and return the chosen node lists after running
all the usual checks.

Cluster balancing
^^^^^^^^^^^^^^^^^

Instance adds/removals/moves can create a situation where the load on
the nodes is not spread equally. For this, a new iallocator mode will be
implemented called ``balance`` in which the plugin, given the current
cluster state and a maximum number of operations, will need to
compute the instance relocations needed in order to achieve a "better"
(by whatever criteria the script considers better) cluster.

Cluster capacity calculation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In this mode, called ``capacity``, given an instance specification and
the current cluster state (similar to the ``allocate`` mode), the
plugin needs to return:

- how many instances can be allocated on the cluster with that specification
- on which nodes these will be allocated (in order)
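
For illustration, a ``capacity`` answer could be represented as below; the key
names and node pairs are invented for this sketch, since the design does not
fix a result format:

```python
# Sketch: a possible shape for a ``capacity`` answer, mirroring the two
# required pieces of information. Key names and nodes are illustrative.

capacity_result = {
    # How many instances with the given spec fit on the cluster.
    "instance_count": 3,
    # Where they would be allocated, in order (primary, secondary).
    "allocations": [
        ("node1", "node2"),
        ("node2", "node3"),
        ("node3", "node1"),
    ],
}
```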