=================
Ganeti 2.2 design
=================

This document describes the major changes in Ganeti 2.2 compared to
the 2.1 version.

The 2.2 version will be a relatively small release. Its main aim is to
avoid changing too much of the core code, while addressing issues and
adding new features and improvements over 2.1, in a timely fashion.

.. contents:: :depth: 4

Objective
=========

Background
==========

Overview
========

Detailed design
===============

As for 2.1, we divide the 2.2 design into three areas:

- core changes, which affect the master daemon/job queue/locking or
  all/most logical units
- logical unit/feature changes
- external interface changes (e.g. command line, OS API, hooks, ...)

Core changes
------------

Remote procedure call timeouts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

The current RPC protocol used by Ganeti is based on HTTP. Every request
consists of an HTTP PUT request (e.g. ``PUT /hooks_runner HTTP/1.0``)
and doesn't return until the function called has returned. Parameters
and return values are encoded using JSON.

On the server side, ``ganeti-noded`` handles every incoming connection
in a separate process by forking just after accepting the connection.
This process exits after sending the response.

There is one major problem with this design: timeouts cannot be used on
a per-request basis. Neither the client nor the server knows how long a
request will take. Even if requests could be grouped into different
categories (e.g. fast and slow), this would not be reliable.

If a node has an issue or the network connection fails while a request
is being handled, the master daemon can wait for a long time for the
connection to time out (e.g. due to the operating system's underlying
TCP keep-alive packets or timeouts). While the settings for keep-alive
packets can be changed using Linux-specific socket options, we prefer
to use application-level timeouts because these cover both the
machine-down and the unresponsive-node-daemon cases.

Proposed changes
++++++++++++++++

RPC glossary
^^^^^^^^^^^^

Function call ID
  Unique identifier returned by ``ganeti-noded`` after invoking a
  function.
Function process
  Process started by ``ganeti-noded`` to call the actual (backend)
  function.

Protocol
^^^^^^^^

Initially we chose HTTP as our RPC protocol because there were existing
libraries, which, unfortunately, turned out to lack important features
(such as SSL certificate authentication) and we had to write our own.

This proposal can easily be implemented using HTTP, though it would
likely be more efficient and less complicated to use the LUXI protocol
already used to communicate between client tools and the Ganeti master
daemon. This proposal should be implemented using HTTP as its underlying
protocol; switching to another protocol can occur at a later point.

The LUXI protocol currently contains two functions, ``WaitForJobChange``
and ``AutoArchiveJobs``, which can take a longer time. They both support
a parameter to specify the timeout. This timeout is usually chosen as
roughly half of the socket timeout, guaranteeing a response before the
socket times out. After the specified amount of time,
``AutoArchiveJobs`` returns and reports the number of archived jobs.
``WaitForJobChange`` returns and reports a timeout. In both cases, the
functions can be called again.

A similar model can be used for the inter-node RPC protocol. In some
sense, the node daemon will implement a light variant of *"node daemon
jobs"*. When the function call is sent, it specifies an initial timeout.
If the function didn't finish within this timeout, a response is sent
with a unique identifier, the function call ID. The client can then
choose to wait again for the function to finish, with a new timeout.
Inter-node RPC calls would no longer block indefinitely and there would
be an implicit ping mechanism.

Request handling
^^^^^^^^^^^^^^^^

To support the protocol changes described above, the way the node daemon
handles requests will have to change. Instead of forking and handling
every connection in a separate process, there should be one child
process per function call and the master process will handle the
communication with clients and the function processes using asynchronous
I/O.

Function processes communicate with the parent process via stdio and
possibly their exit status. Every function process has a unique
identifier, though it shouldn't be the process ID alone (PIDs can be
recycled and are prone to race conditions for this use case). The
proposed format is ``${ppid}:${cpid}:${time}:${random}``, where ``ppid``
is the ``ganeti-noded`` PID, ``cpid`` the child's PID, ``time`` the
current Unix timestamp with decimal places and ``random`` at least 16
random bits.
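
A minimal sketch of how such an identifier could be generated (the
helper name is made up for illustration)::

  import os
  import random
  import time

  def _MakeFunctionProcessId(child_pid):
    """Build a ``${ppid}:${cpid}:${time}:${random}`` identifier."""
    return "%d:%d:%.6f:%04x" % (os.getpid(), child_pid, time.time(),
                                random.randint(0, (1 << 16) - 1))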

The following operations will be supported:

``StartFunction(fn_name, fn_args, timeout)``
  Starts the function specified by ``fn_name`` with arguments in
  ``fn_args`` and waits up to ``timeout`` seconds for the function
  to finish. Fire-and-forget calls can be made by specifying a timeout
  of 0 seconds (e.g. for powercycling the node). Returns three values:
  the function call ID (if not finished), whether the function finished
  (or timed out) and the function's return value.
``WaitForFunction(fnc_id, timeout)``
  Waits up to ``timeout`` seconds for the function call to finish. The
  return value is the same as for ``StartFunction``.
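
From the caller's point of view, these two operations combine into a
simple polling loop. The sketch below assumes a hypothetical ``rpc``
client object exposing them; it is not the actual Ganeti RPC layer::

  def CallWithTimeout(rpc, fn_name, fn_args, timeout=10):
    """Call a remote function, re-waiting until it finishes."""
    (fnc_id, finished, result) = rpc.StartFunction(fn_name, fn_args, timeout)
    while not finished:
      # The function is still running on the node; each iteration also
      # acts as an implicit ping of the node daemon
      (fnc_id, finished, result) = rpc.WaitForFunction(fnc_id, timeout)
    return result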

In the future, ``StartFunction`` could support an additional parameter
to specify after how long the function process should be aborted.

Simplified timing diagram::

  Master daemon        Node daemon                      Function process
   |
  Call function
  (timeout 10s) -----> Parse request and fork for ----> Start function
                       calling actual function, then     |
                       wait up to 10s for function to    |
                       finish                            |
                        |                                |
                       ...                              ...
                        |                                |
  Examine return <----  |                                |
  value and wait                                         |
  again -------------> Wait another 10s for function     |
                        |                                |
                       ...                              ...
                        |                                |
  Examine return <----  |                                |
  value and wait                                         |
  again -------------> Wait another 10s for function     |
                        |                                |
                       ...                              ...
                        |                                |
                        |                               Function ends,
                       Get return value and forward <-- process exits
  Process return <---- it to caller
  value and continue
   |

.. TODO: Convert diagram above to graphviz/dot graphic

On process termination (e.g. after having been sent a ``SIGTERM`` or
``SIGINT`` signal), ``ganeti-noded`` should send ``SIGTERM`` to all
function processes and wait for all of them to terminate.
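
A rough sketch of that termination logic, assuming the parent keeps a
mapping of function call IDs to child PIDs (names are illustrative)::

  import os
  import signal

  def _TerminateFunctionProcesses(fn_processes, sig=signal.SIGTERM):
    """Forward a signal to all function processes and reap them."""
    for cpid in fn_processes.values():
      try:
        os.kill(cpid, sig)
      except OSError:
        pass  # process already exited
    for cpid in fn_processes.values():
      try:
        os.waitpid(cpid, 0)
      except OSError:
        pass  # already reaped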

Inter-cluster instance moves
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

With the current design of Ganeti, moving whole instances between
different clusters involves a lot of manual work. There are several ways
to move instances, one of them being to export the instance and manually
copy all data to the new cluster before importing it again. Manual
changes to the instance's configuration, such as the IP address, may be
necessary in the new environment. The goal is to improve and automate
this process in Ganeti 2.2.

Proposed changes
++++++++++++++++

Authorization, Authentication and Security
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Until now, each Ganeti cluster was a self-contained entity and wouldn't
talk to other Ganeti clusters. Nodes within clusters only had to trust
the other nodes in the same cluster and the network used for replication
was trusted, too (hence the ability to use a separate, local network
for replication).

For inter-cluster instance transfers this model must be weakened. Nodes
in one cluster will have to talk to nodes in other clusters, sometimes
in other locations and, most importantly, via untrusted network
connections.

Various options have been considered for securing and authenticating
the data transfer from one machine to another. To reduce the risk of
accidentally overwriting data due to software bugs, authenticating the
arriving data was considered critical. Eventually we decided to use
socat's OpenSSL options (``OPENSSL:``, ``OPENSSL-LISTEN:`` et al), which
provide us with encryption, authentication and authorization when used
with separate keys and certificates.
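
For illustration only (not the actual implementation), the relevant
socat addresses could be used roughly as follows, each side presenting
its own key and certificate while verifying the peer's; the port and
file names are hypothetical::

  def _DestListenCmd(port, my_cert, my_key, src_cert):
    """Destination node: listen and require the source's certificate."""
    return ["socat", "-u",
            "OPENSSL-LISTEN:%s,reuseaddr,cert=%s,key=%s,cafile=%s,verify=1" %
            (port, my_cert, my_key, src_cert),
            "STDOUT"]

  def _SrcConnectCmd(host, port, my_cert, my_key, dest_cert):
    """Source node: connect and verify the destination's certificate."""
    return ["socat", "-u", "STDIN",
            "OPENSSL:%s:%s,cert=%s,key=%s,cafile=%s,verify=1" %
            (host, port, my_cert, my_key, dest_cert)]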

Combinations of OpenSSH, GnuPG and Netcat were deemed too complex to set
up from within Ganeti. Any solution involving OpenSSH would require a
dedicated user with a home directory and likely automated modifications
to the user's ``$HOME/.ssh/authorized_keys`` file. When using Netcat,
GnuPG or another encryption method would be necessary to transfer the
data over an untrusted network. socat combines both in one program and
is already a dependency.

Each of the two clusters will have to generate an RSA key. The public
parts are exchanged between the clusters by a third party, such as an
administrator or a system interacting with Ganeti via the remote API
("third party" from here on). After receiving each other's public key,
the clusters can start talking to each other.

All encrypted connections must be verified on both sides. Neither side
may accept unverified certificates. The generated certificate should
only be valid for the time necessary to move the instance.

For additional protection of the instance data, the two clusters can
verify the certificates exchanged via the third party by signing them
using HMAC with a key shared among the involved clusters. If the third
party does not know this secret, it can't forge the certificates and
redirect the data. Unless disabled by a new cluster parameter, verifying
the HMAC must be mandatory. The HMAC will be prepended to the
certificate and only covers the certificate (from ``-----BEGIN
CERTIFICATE-----`` to ``-----END CERTIFICATE-----``).
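
A minimal sketch of this signing scheme (the helper names and the use of
SHA-1 as digest are illustrative, not final)::

  import hashlib
  import hmac
  import re

  _CERT_RE = re.compile(r"-----BEGIN CERTIFICATE-----.*?"
                        r"-----END CERTIFICATE-----", re.S)

  def _SignCert(secret, pem):
    """Prepend an HMAC covering only the certificate part."""
    cert = _CERT_RE.search(pem).group(0)
    mac = hmac.new(secret.encode(), cert.encode(), hashlib.sha1).hexdigest()
    return "%s\n%s\n" % (mac, cert)

  def _VerifyCert(secret, signed):
    """Check the prepended HMAC, raising if it doesn't match."""
    (mac, rest) = signed.split("\n", 1)
    cert = rest.strip()
    expected = hmac.new(secret.encode(), cert.encode(),
                        hashlib.sha1).hexdigest()
    if mac != expected:
      raise ValueError("HMAC verification failed")
    return cert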

On the web, the destination cluster would be equivalent to an HTTPS
server requiring verifiable client certificates. The browser would be
equivalent to the source cluster and must verify the server's
certificate while providing a client certificate to the server.

Copying data
^^^^^^^^^^^^

To simplify the implementation, we decided to operate at a block-device
level only, allowing us to easily support non-DRBD instance moves.

Inter-cluster instance moves will re-use the existing export and import
scripts supplied by instance OS definitions. Unlike simply copying the
raw data, this allows using filesystem-specific utilities to dump only
used parts of the disk and to exclude certain disks from the move.
Compression should be used to further reduce the amount of data
transferred.

The export script writes all data to stdout and the import script reads
it from stdin again. To avoid copying data and reduce disk space
consumption, everything is read from the disk and sent straight over the
network, where it is written directly to the new block device.

Workflow
^^^^^^^^

#. Third party tells source cluster to shut down instance, asks for the
   instance specification and for the public part of an encryption key
#. Third party tells destination cluster to create an instance with the
   same specifications as on source cluster and to prepare for an
   instance move with the key received from the source cluster and
   receives the public part of the destination's encryption key
#. Third party hands public part of the destination's encryption key
   together with all necessary information to source cluster and tells
   it to start the move
#. Source cluster connects to destination cluster for each disk and
   transfers its data using the instance OS definition's export and
   import scripts
#. Due to the asynchronous nature of the whole process, the destination
   cluster checks whether all disks have been transferred every time
   after transferring a single disk; if so, it destroys the encryption
   key
#. After sending all disks, the source cluster destroys its key
#. Destination cluster runs OS definition's rename script to adjust
   instance settings if needed (e.g. IP address)
#. Destination cluster starts the instance if requested at the beginning
   by the third party
#. Source cluster removes the instance if requested
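
Seen from the third party, this is roughly the following sequence of
calls. The ``src`` and ``dest`` client objects and all method names are
purely hypothetical placeholders, not an existing Ganeti API::

  def MoveInstance(src, dest, instance, start=True, remove_source=True):
    """Orchestrate an instance move between two clusters."""
    (spec, src_key) = src.ShutdownAndGetSpec(instance)
    dest_key = dest.CreateAndPrepareImport(spec, src_key)
    src.StartMove(instance, dest_key)   # per-disk transfers happen here
    dest.WaitForImport(instance)        # both sides destroy their keys
    dest.RenameInstance(instance)       # e.g. adjust the IP address
    if start:
      dest.StartInstance(instance)
    if remove_source:
      src.RemoveInstance(instance)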

Miscellaneous notes
^^^^^^^^^^^^^^^^^^^

- A very similar system could also be used for instance exports within
  the same cluster. Currently OpenSSH is being used, but could be
  replaced by socat and SSL/TLS.
- During the design of inter-cluster instance moves we also discussed
  encrypting instance exports using GnuPG.
- While most instances should have exactly the same configuration as
  on the source cluster, setting them up with a different disk layout
  might be helpful in some use-cases.
- A cleanup operation, similar to the one available for failed instance
  migrations, should be provided.
- ``ganeti-watcher`` should remove instances pending a move from another
  cluster after a certain amount of time. This takes care of failures
  somewhere in the process.
- RSA keys can be generated using the existing
  ``bootstrap.GenerateSelfSignedSslCert`` function, though it might be
  useful to not write both parts into a single file, requiring small
  changes to the function. The public part always starts with
  ``-----BEGIN CERTIFICATE-----`` and ends with ``-----END
  CERTIFICATE-----`` (see the sketch after this list).
- The source and destination cluster might be different when it comes
  to available hypervisors, kernels, etc. The destination cluster should
  refuse to accept an instance move if it can't fulfill an instance's
  requirements.
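
As referenced in the list above, splitting the combined file written by
``GenerateSelfSignedSslCert`` could look roughly like this, so that only
the public part is handed over (paths and helper name are made up)::

  def _SplitPemFile(combined_path, cert_path, key_path):
    """Write the certificate and the private key to separate files."""
    begin = "-----BEGIN CERTIFICATE-----"
    end = "-----END CERTIFICATE-----"
    data = open(combined_path).read()
    (start, stop) = (data.index(begin), data.index(end) + len(end))
    open(cert_path, "w").write(data[start:stop] + "\n")
    open(key_path, "w").write(data[:start] + data[stop:])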

Feature changes
---------------

KVM Security
~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

Currently all kvm processes run as root. Taking ownership of the
hypervisor process, from inside a virtual machine, would mean a full
compromise of the whole Ganeti cluster, knowledge of all Ganeti
authentication secrets, full access to all running instances, and the
option of subverting other basic services on the cluster (e.g. ssh).

Proposed changes
++++++++++++++++

We would like to decrease the attack surface available if a hypervisor
is compromised. We can do so by adding different features to Ganeti
which restrict what a compromised hypervisor can do, in the absence of
a local privilege escalation attack, to subvert the node.

Dropping privileges in kvm to a single user (easy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By passing the ``-runas`` option to kvm, we can make it drop privileges.
The user can be chosen by a hypervisor parameter, so that each instance
can have its own user, but by default they will all run under the same
one. It should be very easy to implement, and can easily be backported
to 2.1.X.
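
In terms of the kvm command line built by Ganeti, this amounts to
something like the following sketch; the hypervisor parameter name used
here is hypothetical::

  def _AddRunAs(kvm_cmd, hvparams):
    """Make the kvm process drop privileges to a configured user."""
    user = hvparams.get("security_user")  # hypothetical parameter name
    if user:
      kvm_cmd.extend(["-runas", user])
    return kvm_cmd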

This mode protects the Ganeti cluster from a subverted hypervisor, but
doesn't protect the instances from each other, unless care is taken
to specify a different user for each. This would prevent the worst
attacks, including:

- logging in to other nodes
- administering the Ganeti cluster
- subverting other services

But the following would remain an option:

- terminate other VMs (but not start them again, as that requires root
  privileges to set up networking) (unless different users are used)
- trace other VMs, and probably subvert them and access their data
  (unless different users are used)
- send network traffic from the node
- read unprotected data on the node filesystem

Running kvm in a chroot (slightly harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By passing the ``-chroot`` option to kvm, we can restrict the kvm
process to its own (possibly empty) root directory. We need to set this
area up so that the instance disks and control sockets are accessible,
which requires slightly more work at the Ganeti level.
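
A sketch of that extra work, assuming a per-instance, mostly empty
directory (the path and helper name are made up)::

  import os

  def _AddChroot(kvm_cmd, instance_name):
    """Confine the kvm process to its own root directory."""
    chroot_dir = "/var/run/ganeti/kvm-chroot/%s" % instance_name
    if not os.path.isdir(chroot_dir):
      os.makedirs(chroot_dir, 0o700)
    # The instance disks and control sockets must stay reachable, e.g.
    # by passing them as file descriptors or making them available
    # inside the chroot before starting kvm
    kvm_cmd.extend(["-chroot", chroot_dir])
    return kvm_cmd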

Breaking out into a chroot would mean:

- far fewer options for finding a local privilege escalation vector
- the impossibility of writing local data, if the chroot is set up
  correctly
- the impossibility of reading filesystem data on the host

It would still be possible though to:

- terminate other VMs
- trace other VMs, and possibly subvert them (if a tracer can be
  installed in the chroot)
- send network traffic from the node

Running kvm with a pool of users (slightly harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If rather than passing a single user as a hypervisor parameter, we have
a pool of usable ones, we can dynamically choose a free one to use and
thus guarantee that each machine will be separate from the others,
without putting the burden of this on the cluster administrator.

This would make interference between machines impossible, and can still
be combined with the chroot benefits.
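
A minimal sketch of the allocation step, assuming the pool and the set
of users already running instances are known (names are illustrative)::

  def _PickFreeUser(user_pool, users_in_use):
    """Return a pool user not currently running any instance."""
    for user in user_pool:
      if user not in users_in_use:
        return user
    raise RuntimeError("No free user left in the pool")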

Running iptables rules to limit network interaction (easy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These don't need to be handled by Ganeti, but we can ship examples. If
the users used to run VMs were blocked from sending some or all network
traffic, it would become impossible for a broken-into hypervisor to send
arbitrary data on the node network, which is especially useful when the
instance and the node network are separated (using ganeti-nbma or a
separate set of network interfaces), or when a separate replication
network is maintained. We need to experiment to see how much restriction
we can properly apply, without limiting the instances' legitimate
traffic.
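
One example of the kind of rule we could ship (only a sketch; the exact
policy needs the experimentation mentioned above). In a typical bridged
setup the instance's own traffic goes through its tap interface and the
bridge rather than the node's ``OUTPUT`` chain, so only traffic
generated directly by a (possibly subverted) kvm process is affected::

  import subprocess

  def _BlockNodeTrafficFrom(vm_user):
    """Drop locally-generated packets owned by the given VM user."""
    subprocess.check_call(["iptables", "-A", "OUTPUT",
                           "-m", "owner", "--uid-owner", vm_user,
                           "-j", "DROP"])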

Running kvm inside a container (even harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Recent Linux kernels support different process namespaces and control
groups. PIDs, users, filesystems and even network interfaces can be
separated. If we can set up Ganeti to run kvm in a separate container
we could insulate all the host processes from even being visible if the
hypervisor gets broken into. Most probably separating the network
namespace would require one extra hop in the host, through a veth
interface, thus reducing performance, so we may want to avoid that, and
just rely on iptables.

Implementation plan
+++++++++++++++++++

We will first implement dropping privileges for kvm processes as a
single user, and most probably backport it to 2.1. Then we'll ship
example iptables rules to show how the user can be limited in its
network activities.  After that we'll implement chroot restriction for
kvm processes, and extend the user limitation to use a user pool.

Finally we'll look into namespaces and containers, although that might
slip to after the 2.2 release.

External interface changes
--------------------------

.. vim: set textwidth=72 :