=================
Ganeti 2.2 design
=================

This document describes the major changes in Ganeti 2.2 compared to
the 2.1 version.

The 2.2 version will be a relatively small release. Its main aim is to
avoid changing too much of the core code, while addressing issues and
adding new features and improvements over 2.1, in a timely fashion.

.. contents:: :depth: 4

Objective
=========

Background
==========

Overview
========

Detailed design
===============

As for 2.1 we divide the 2.2 design into three areas:

- core changes, which affect the master daemon/job queue/locking or
  all/most logical units
- logical unit/feature changes
- external interface changes (eg. command line, os api, hooks, ...)

Core changes
------------

Remote procedure call timeouts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

The current RPC protocol used by Ganeti is based on HTTP. Every request
consists of an HTTP PUT request (e.g. ``PUT /hooks_runner HTTP/1.0``)
and doesn't return until the function called has returned. Parameters
and return values are encoded using JSON.

On the server side, ``ganeti-noded`` handles every incoming connection
in a separate process by forking just after accepting the connection.
This process exits after sending the response.

There is one major problem with this design: timeouts cannot be used on
a per-request basis. Neither the client nor the server knows how long a
request will take. Even if requests could be grouped into rough
categories (e.g. fast and slow), such grouping would not be reliable.

If a node has an issue or the network connection fails while a request
is being handled, the master daemon can wait for a long time for the
connection to time out (e.g. due to the operating system's underlying
TCP keep-alive packets or timeouts). While the settings for keep-alive
packets can be changed using Linux-specific socket options, we prefer
application-level timeouts because these cover both the machine-down
and the unresponsive-node-daemon cases.
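
As an illustration only (not Ganeti code), an application-level timeout
can be enforced on a plain socket regardless of the kernel's keep-alive
settings; the host name and port below are examples::

  import socket

  # Sketch: enforce a per-request deadline at the application level
  sock = socket.create_connection(("node1.example.com", 1811),
                                  timeout=10.0)
  try:
      sock.sendall(b"PUT /hooks_runner HTTP/1.0\r\n\r\n")
      reply = sock.recv(4096)  # raises socket.timeout after 10s of silence
  except socket.timeout:
      reply = None  # treat the node as unresponsive; retry or fail over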

Proposed changes
++++++++++++++++

RPC glossary
^^^^^^^^^^^^

Function call ID
  Unique identifier returned by ``ganeti-noded`` after invoking a
  function.
Function process
  Process started by ``ganeti-noded`` to call the actual (backend)
  function.

Protocol
^^^^^^^^

Initially we chose HTTP as our RPC protocol because there were existing
libraries, which, unfortunately, turned out to miss important features
(such as SSL certificate authentication), so we had to write our own.

This proposal can easily be implemented using HTTP, though it would
likely be more efficient and less complicated to use the LUXI protocol
already used to communicate between client tools and the Ganeti master
daemon. Switching to another protocol can occur at a later point. This
proposal should be implemented using HTTP as its underlying protocol.

The LUXI protocol currently contains two functions, ``WaitForJobChange``
and ``AutoArchiveJobs``, which can take a long time. Both support a
parameter to specify a timeout. This timeout is usually chosen as
roughly half of the socket timeout, guaranteeing a response before the
socket times out. After the specified amount of time,
``AutoArchiveJobs`` returns and reports the number of archived jobs,
while ``WaitForJobChange`` returns and reports a timeout. In both cases,
the functions can be called again.

A similar model can be used for the inter-node RPC protocol. In some
sense, the node daemon will implement a light variant of *"node daemon
jobs"*. When the function call is sent, it specifies an initial timeout.
If the function doesn't finish within this timeout, a response is sent
with a unique identifier, the function call ID. The client can then
choose to wait for the function to finish again, with another timeout.
Inter-node RPC calls would no longer block indefinitely, and there
would be an implicit ping mechanism.

Request handling
^^^^^^^^^^^^^^^^

To support the protocol changes described above, the way the node daemon
handles requests will have to change. Instead of forking and handling
every connection in a separate process, there should be one child
process per function call, while the master process handles the
communication with clients and the function processes using asynchronous
I/O.
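
The following minimal sketch illustrates this model (one process per
call, multiplexed with ``select()``); it assumes nothing about the real
``ganeti-noded`` internals and all names are illustrative::

  import os
  import select

  def start_function_process(fn, args):
      # Fork one child per function call; the child writes its result
      # to a pipe and exits, the parent keeps only the read end
      rfd, wfd = os.pipe()
      pid = os.fork()
      if pid == 0:
          os.close(rfd)
          os.write(wfd, str(fn(*args)).encode())
          os._exit(0)
      os.close(wfd)
      return pid, rfd

  def serve(procs):
      # Parent event loop: watch all function-process pipes at once
      # instead of blocking on any single child
      fds = dict((rfd, pid) for (pid, rfd) in procs)
      while fds:
          ready, _, _ = select.select(list(fds), [], [], 1.0)
          for rfd in ready:
              data = os.read(rfd, 4096)
              if data:
                  print("pid %s returned: %s" % (fds[rfd], data.decode()))
              else:
                  # EOF: the function process has finished; reap it
                  os.close(rfd)
                  os.waitpid(fds.pop(rfd), 0)

  serve([start_function_process(pow, (2, 16)),
         start_function_process(sum, ([1, 2, 3],))])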

Function processes communicate with the parent process via stdio and
possibly their exit status. Every function process has a unique
identifier, though it shouldn't be the process ID only (PIDs can be
recycled and are prone to race conditions for this use case). The
proposed format is ``${ppid}:${cpid}:${time}:${random}``, where ``ppid``
is the ``ganeti-noded`` PID, ``cpid`` the child's PID, ``time`` the
current Unix timestamp with decimal places and ``random`` at least 16
random bits.
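
A sketch of how such an identifier could be built (the helper name is
illustrative, not from the Ganeti codebase)::

  import os
  import random
  import time

  def make_fn_call_id(child_pid):
      # ${ppid}:${cpid}:${time}:${random}: PIDs alone can be recycled,
      # so add a sub-second timestamp and at least 16 random bits
      return "%d:%d:%.6f:%04x" % (os.getpid(), child_pid, time.time(),
                                  random.getrandbits(16))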

The following operations will be supported:

``StartFunction(fn_name, fn_args, timeout)``
  Starts the function specified by ``fn_name`` with the arguments in
  ``fn_args`` and waits up to ``timeout`` seconds for the function
  to finish. Fire-and-forget calls can be made by specifying a timeout
  of 0 seconds (e.g. for powercycling the node). Returns three values:
  the function call ID (if not finished), whether the function finished
  (or timed out) and the function's return value.
``WaitForFunction(fnc_id, timeout)``
  Waits up to ``timeout`` seconds for the function call to finish.
  Return value same as for ``StartFunction``.
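
From the caller's perspective this results in a simple polling loop. A
sketch, assuming a hypothetical ``node`` client object exposing the two
operations above::

  (fn_call_id, finished, result) = node.StartFunction("hooks_runner",
                                                      fn_args, timeout=10)
  while not finished:
      # Each round trip doubles as an implicit ping of the node daemon
      (fn_call_id, finished, result) = node.WaitForFunction(fn_call_id,
                                                            timeout=10)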

In the future, ``StartFunction`` could support an additional parameter
to specify after how long the function process should be aborted.

Simplified timing diagram::

  Master daemon        Node daemon                      Function process
   |
  Call function
  (timeout 10s) -----> Parse request and fork for ----> Start function
                       calling actual function, then     |
                       wait up to 10s for function to    |
                       finish                            |
                        |                                |
                       ...                              ...
                        |                                |
  Examine return <----  |                                |
  value and wait                                         |
  again -------------> Wait another 10s for function     |
                        |                                |
                       ...                              ...
                        |                                |
  Examine return <----  |                                |
  value and wait                                         |
  again -------------> Wait another 10s for function     |
                        |                                |
                       ...                              ...
                        |                                |
                        |                               Function ends,
                       Get return value and forward <-- process exits
  Process return <---- it to caller
  value and continue
   |

.. TODO: Convert diagram above to graphviz/dot graphic

On process termination (e.g. after having been sent a ``SIGTERM`` or
``SIGINT`` signal), ``ganeti-noded`` should send ``SIGTERM`` to all
function processes and wait for all of them to terminate.


Inter-cluster instance moves
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

With the current design of Ganeti, moving whole instances between
different clusters involves a lot of manual work. There are several ways
to move instances, one of them being to export the instance, manually
copying all data to the new cluster before importing it again. Manual
changes to the instance's configuration, such as the IP address, may be
necessary in the new environment. The goal is to improve and automate
this process in Ganeti 2.2.
192

    
193
Proposed changes
194
++++++++++++++++
195

    
196
Authorization, Authentication and Security
197
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
198

    
199
Until now, each Ganeti cluster was a self-contained entity and wouldn't
200
talk to other Ganeti clusters. Nodes within clusters only had to trust
201
the other nodes in the same cluster and the network used for replication
202
was trusted, too (hence the ability the use a separate, local network
203
for replication).

For inter-cluster instance transfers this model must be weakened. Nodes
in one cluster will have to talk to nodes in other clusters, sometimes
in other locations and, most importantly, via untrusted network
connections.

Various options have been considered for securing and authenticating the
data transfer from one machine to another. To reduce the risk of
accidentally overwriting data due to software bugs, authenticating the
arriving data was considered critical. Eventually we decided to use
socat's OpenSSL options (``OPENSSL:``, ``OPENSSL-LISTEN:`` et al.),
which provide us with encryption, authentication and authorization when
used with separate keys and certificates.

Combinations of OpenSSH, GnuPG and Netcat were deemed too complex to set
up from within Ganeti. Any solution involving OpenSSH would require a
dedicated user with a home directory and likely automated modifications
to the user's ``$HOME/.ssh/authorized_keys`` file. When using Netcat,
GnuPG or another encryption method would be necessary to transfer the
data over an untrusted network. socat combines both in one program and
is already a dependency.

Each of the two clusters will have to generate an RSA key. The public
parts are exchanged between the clusters by a third party, such as an
administrator or a system interacting with Ganeti via the remote API
("third party" from here on). After receiving each other's public key,
the clusters can start talking to each other.

All encrypted connections must be verified on both sides. Neither side
may accept unverified certificates. The generated certificate should
only be valid for the time necessary to move the instance.

For additional protection of the instance data, the two clusters can
verify the certificates and destination information exchanged via the
third party by checking an HMAC signature using a key shared among the
involved clusters. By default this secret key will be a random string
unique to the cluster, generated by running SHA1 over 20 bytes read from
``/dev/urandom``, and the administrator must synchronize the secrets
between clusters before instances can be moved. If the third party does
not know the secret, it can't forge the certificates or redirect the
data. Unless disabled by a new cluster parameter, verifying the HMAC
signatures must be mandatory. The HMAC signature for X509 certificates
will be prepended to the certificate similar to an RFC822 header and
only covers the certificate (from ``-----BEGIN CERTIFICATE-----`` to
``-----END CERTIFICATE-----``). The header name will be
``X-Ganeti-Signature``.
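
A sketch of how such a signature could be computed and checked; the
``$salt$hmac`` header format mirrors the example shown in the pseudo
code below, but these helpers are illustrative, not Ganeti's::

  import hashlib
  import hmac

  def sign_x509_cert(cert_pem, secret, salt):
      # One possible salted-HMAC construction; the signature covers only
      # the PEM block (-----BEGIN to -----END CERTIFICATE-----)
      sig = hmac.new(secret, salt + cert_pem, hashlib.sha1).hexdigest()
      header = b"X-Ganeti-Signature: $" + salt + b"$" + sig.encode()
      return header + b"\n" + cert_pem

  def verify_x509_cert(signed_pem, secret):
      # Strip the header, recompute the HMAC and compare
      header, cert_pem = signed_pem.split(b"\n", 1)
      _, salt, sig = header.split(b"$")
      good = hmac.new(secret, salt + cert_pem, hashlib.sha1).hexdigest()
      return hmac.compare_digest(sig, good.encode())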

On the web, the destination cluster would be equivalent to an HTTPS
server requiring verifiable client certificates. The browser would be
equivalent to the source cluster and must verify the server's
certificate while providing a client certificate to the server.

Copying data
^^^^^^^^^^^^

To simplify the implementation, we decided to operate at a block-device
level only, allowing us to easily support non-DRBD instance moves.

Inter-cluster instance moves will re-use the existing export and import
scripts supplied by instance OS definitions. Unlike simply copying the
raw data, this allows the use of filesystem-specific utilities to dump
only used parts of the disk and to exclude certain disks from the move.
Compression should be used to further reduce the amount of data
transferred.

The export script writes all data to stdout and the import script reads
it from stdin again. To avoid copying data and to reduce disk space
consumption, everything is read from the disk and sent over the network
directly, where it is written straight to the new block device.
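
Conceptually, the data path for each disk looks like this (a simplified
illustration, not an exact command line)::

  export script -> compression -> socat (TLS) -> untrusted network
    -> socat (TLS) -> decompression -> import script -> new block device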

Workflow
^^^^^^^^

#. Third party tells source cluster to shut down instance, asks for the
   instance specification and for the public part of an encryption key

   - Instance information can already be retrieved using an existing API
     (``OpQueryInstanceData``).
   - An RSA encryption key and a corresponding self-signed X509
     certificate are generated using the "openssl" command. This key
     will be used to encrypt the data sent to the destination cluster.

     - Private keys never leave the cluster.
     - The public part (the X509 certificate) is signed using HMAC with
       salting and a secret shared between Ganeti clusters.

#. Third party tells destination cluster to create an instance with the
   same specifications as on the source cluster and to prepare for an
   instance move with the key received from the source cluster, and
   receives the public part of the destination's encryption key

   - The current API to create instances (``OpCreateInstance``) will be
     extended to support an import from a remote cluster.
   - A valid, unexpired X509 certificate signed with the destination
     cluster's secret will be required. By verifying the signature, we
     know the third party didn't modify the certificate.

     - The private keys never leave their cluster, hence the third party
       cannot decrypt or intercept the instance's data by modifying the
       IP address or port sent by the destination cluster.

   - The destination cluster generates another key and certificate,
     signs and sends it to the third party, who will have to pass it to
     the API for exporting an instance (``OpExportInstance``). This
     certificate is used to ensure we're sending the disk data to the
     correct destination cluster.
   - Once a disk can be imported, the API sends the destination
     information (IP address and TCP port) together with an HMAC
     signature to the third party.

#. Third party hands the public part of the destination's encryption key
   together with all necessary information to the source cluster and
   tells it to start the move

   - The existing API for exporting instances (``OpExportInstance``)
     will be extended to export instances to remote clusters.

#. Source cluster connects to destination cluster for each disk and
   transfers its data using the instance OS definition's export and
   import scripts

   - Before starting, the source cluster must verify the HMAC signature
     of the certificate and destination information (IP address and TCP
     port).
   - When connecting to the remote machine, strong certificate checks
     must be employed.

#. Due to the asynchronous nature of the whole process, the destination
   cluster checks whether all disks have been transferred every time
   after transferring a single disk; if so, it destroys the encryption
   key
#. After sending all disks, the source cluster destroys its key
#. Destination cluster runs OS definition's rename script to adjust
   instance settings if needed (e.g. IP address)
#. Destination cluster starts the instance if requested at the beginning
   by the third party
#. Source cluster removes the instance if requested

Instance move in pseudo code
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. highlight:: python

The following pseudo code describes a script moving instances between
clusters and what happens on both clusters.

#. Script is started, gets the instance name and destination cluster::

    (instance_name, dest_cluster_name) = sys.argv[1:]

    # Get destination cluster object
    dest_cluster = db.FindCluster(dest_cluster_name)

    # Use database to find source cluster
    src_cluster = db.FindClusterByInstance(instance_name)

#. Script tells source cluster to stop instance::

    # Stop instance
    src_cluster.StopInstance(instance_name)

    # Get instance specification (memory, disk, etc.)
    inst_spec = src_cluster.GetInstanceInfo(instance_name)

    (src_key_name, src_cert) = src_cluster.CreateX509Certificate()

#. ``CreateX509Certificate`` on source cluster::

    key_file = mkstemp()
    cert_file = "%s.cert" % key_file
    RunCmd(["/usr/bin/openssl", "req", "-new",
            "-newkey", "rsa:1024", "-days", "1",
            "-nodes", "-x509", "-batch",
            "-keyout", key_file, "-out", cert_file])

    plain_cert = utils.ReadFile(cert_file)

    # HMAC-sign using the secret key; this prepends an
    # "X-Ganeti-Signature" header to the certificate
    signed_cert = utils.SignX509Certificate(plain_cert,
      utils.ReadFile(constants.X509_SIGNKEY_FILE))

    # The certificate now looks like the following:
    #
    #   X-Ganeti-Signature: $1234$28676f0516c6ab68062b[…]
    #   -----BEGIN CERTIFICATE-----
    #   MIICsDCCAhmgAwIBAgI[…]
    #   -----END CERTIFICATE-----

    # Return name of key file and signed certificate in PEM format
    return (os.path.basename(key_file), signed_cert)

#. Script creates instance on destination cluster and waits for move to
   finish::

    dest_cluster.CreateInstance(mode=constants.REMOTE_IMPORT,
                                spec=inst_spec,
                                source_cert=src_cert)

    # Wait until the destination cluster gives us its certificate and
    # the destination information for every disk
    dest_cert = None
    disk_info = {}
    while not (dest_cert and len(disk_info) == len(inst_spec.disks)):
      tmp = dest_cluster.WaitOutput()
      if isinstance(tmp, Certificate):
        dest_cert = tmp
      elif isinstance(tmp, DiskInfo):
        # DiskInfo contains destination address and port
        disk_info[tmp.index] = tmp

    # Tell source cluster to export disks
    for disk in disk_info.values():
      src_cluster.ExportDisk(instance_name, disk=disk,
                             key_name=src_key_name,
                             dest_cert=dest_cert)

    print ("Instance %s successfully moved to %s" %
           (instance_name, dest_cluster.name))

#. ``CreateInstance`` on destination cluster::

    # …

    if mode == constants.REMOTE_IMPORT:
      # Make sure certificate was not modified since it was generated by
      # source cluster (which must use the same secret)
      if (not utils.VerifySignedX509Cert(source_cert,
            utils.ReadFile(constants.X509_SIGNKEY_FILE))):
        raise Error("Certificate not signed with this cluster's secret")

      if utils.CheckExpiredX509Cert(source_cert):
        raise Error("X509 certificate is expired")

      source_cert_file = utils.WriteTempFile(source_cert)

      # See above for X509 certificate generation and signing
      (key_name, signed_cert) = CreateSignedX509Certificate()

      SendToClient("x509-cert", signed_cert)

      for disk in instance.disks:
        # Start socat
        RunCmd(("socat"
                " OPENSSL-LISTEN:%s,…,key=%s,cert=%s,cafile=%s,verify=1"
                " stdout > /dev/disk…") %
               (port, GetRsaKeyPath(key_name, private=True),
                GetRsaKeyPath(key_name, private=False), source_cert_file))
        SendToClient("send-disk-to", disk, ip_address, port)

      DestroyX509Cert(key_name)

      RunRenameScript(instance_name)

#. ``ExportDisk`` on source cluster::

    # Make sure certificate was not modified since it was generated by
    # destination cluster (which must use the same secret)
    if (not utils.VerifySignedX509Cert(cert_pem,
          utils.ReadFile(constants.X509_SIGNKEY_FILE))):
      raise Error("Certificate not signed with this cluster's secret")

    if utils.CheckExpiredX509Cert(cert_pem):
      raise Error("X509 certificate is expired")

    dest_cert_file = utils.WriteTempFile(cert_pem)

    # Start socat
    RunCmd(("socat stdin"
            " OPENSSL:%s:%s,…,key=%s,cert=%s,cafile=%s,verify=1"
            " < /dev/disk…") %
           (disk.host, disk.port,
            GetRsaKeyPath(key_name, private=True),
            GetRsaKeyPath(key_name, private=False), dest_cert_file))

    if instance.all_disks_done:
      DestroyX509Cert(key_name)
.. highlight:: text

Miscellaneous notes
^^^^^^^^^^^^^^^^^^^

- A very similar system could also be used for instance exports within
  the same cluster. Currently OpenSSH is being used, but it could be
  replaced by socat and SSL/TLS.
- During the design of inter-cluster instance moves we also discussed
  encrypting instance exports using GnuPG.
- While most instances should have exactly the same configuration as
  on the source cluster, setting them up with a different disk layout
  might be helpful in some use cases.
- A cleanup operation, similar to the one available for failed instance
  migrations, should be provided.
- ``ganeti-watcher`` should remove instances pending a move from another
  cluster after a certain amount of time. This takes care of failures
  somewhere in the process.
- RSA keys can be generated using the existing
  ``bootstrap.GenerateSelfSignedSslCert`` function, though it might be
  useful to not write both parts into a single file, requiring small
  changes to the function. The public part always starts with
  ``-----BEGIN CERTIFICATE-----`` and ends with ``-----END
  CERTIFICATE-----``.
- The source and destination cluster might differ in available
  hypervisors, kernels, etc. The destination cluster should refuse to
  accept an instance move if it can't fulfill an instance's
  requirements.


Feature changes
---------------

KVM Security
~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

Currently all kvm processes run as root. Taking ownership of the
hypervisor process, from inside a virtual machine, would mean a full
compromise of the whole Ganeti cluster, knowledge of all Ganeti
authentication secrets, full access to all running instances, and the
option of subverting other basic services on the cluster (eg: ssh).

Proposed changes
++++++++++++++++

We would like to decrease the attack surface available if a hypervisor
is compromised. We can do so by adding features to Ganeti that restrict
what a broken hypervisor can do to subvert the node, in the absence of
a local privilege escalation attack.

Dropping privileges in kvm to a single user (easy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By passing the ``-runas`` option to kvm, we can make it drop privileges.
The user can be chosen via a hypervisor parameter, so that each instance
can have its own user, but by default they will all run under the same
one. This should be very easy to implement, and can easily be backported
to 2.1.X.
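
For illustration, a kvm invocation using this option might look like the
following (the user name and disk path are examples, not Ganeti
defaults)::

  kvm -runas ganeti-kvm-instance1 \
      -drive file=/dev/xenvg/instance1.disk0 ...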

This mode protects the Ganeti cluster from a subverted hypervisor, but
doesn't protect the instances from each other, unless care is taken to
specify a different user for each. This would prevent the worst
attacks, including:

- logging in to other nodes
- administering the Ganeti cluster
- subverting other services

But the following would remain an option:

- terminating other VMs (but not starting them again, as that requires
  root privileges to set up networking) (unless different users are
  used)
- tracing other VMs, and probably subverting them and accessing their
  data (unless different users are used)
- sending network traffic from the node
- reading unprotected data on the node filesystem

Running kvm in a chroot (slightly harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By passing the ``-chroot`` option to kvm, we can restrict the kvm
process to its own (possibly empty) root directory. We need to set this
area up so that the instance disks and control sockets are accessible,
so it would require slightly more work at the Ganeti level.
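
An illustrative invocation combining both restrictions (the chroot path
is an assumption, not a Ganeti default)::

  kvm -chroot /var/run/ganeti/kvm-chroot/instance1 \
      -runas ganeti-kvm-instance1 ...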

Breaking out in a chroot would mean:

- far fewer options for finding a local privilege escalation vector
- no way to write local data, if the chroot is set up correctly
- no way to read filesystem data on the host

It would still be possible, though, to:

- terminate other VMs
- trace other VMs, and possibly subvert them (if a tracer can be
  installed in the chroot)
- send network traffic from the node


Running kvm with a pool of users (slightly harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If, rather than passing a single user as a hypervisor parameter, we have
a pool of usable ones, we can dynamically choose a free one to use and
thus guarantee that each machine will be separate from the others,
without putting the burden of this on the cluster administrator.

This would make interference between machines impossible, and it can
still be combined with the chroot benefits.

Running iptables rules to limit network interaction (easy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These don't need to be handled by Ganeti, but we can ship examples. If
the users used to run VMs were blocked from sending some or all network
traffic, it would become impossible for a broken-into hypervisor to
send arbitrary data on the node network. This is especially useful when
the instance and the node network are separated (using ganeti-nbma or a
separate set of network interfaces), or when a separate replication
network is maintained. We need to experiment to see how much restriction
we can properly apply without limiting the instances' legitimate
traffic.
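
As an example of the kind of rule we could ship (illustrative only; the
``owner`` match applies to locally generated packets, while the
instance's own traffic, which enters through a tap device, is not
affected)::

  # Drop any locally generated traffic from the user a VM runs as
  iptables -A OUTPUT -m owner --uid-owner ganeti-kvm-instance1 -j DROP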


Running kvm inside a container (even harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Recent Linux kernels support different process namespaces through
control groups. PIDs, users, filesystems and even network interfaces can
be separated. If we can set up Ganeti to run kvm in a separate container,
we could insulate all host processes from even being visible if the
hypervisor gets broken into. Most probably, separating the network
namespace would require one extra hop in the host, through a veth
interface, thus reducing performance, so we may want to avoid that and
just rely on iptables.

Implementation plan
+++++++++++++++++++

We will first implement dropping privileges for kvm processes to a
single user, and most probably backport it to 2.1. Then we'll ship
example iptables rules to show how the user can be limited in its
network activities. After that we'll implement chroot restriction for
kvm processes, and extend the user limitation to use a user pool.

Finally we'll look into namespaces and containers, although that might
slip after the 2.2 release.

External interface changes
--------------------------

.. vim: set textwidth=72 :