=================
Ganeti 2.2 design
=================

This document describes the major changes in Ganeti 2.2 compared to
the 2.1 version.

The 2.2 version will be a relatively small release. Its main aim is to
avoid changing too much of the core code, while addressing issues and
adding new features and improvements over 2.1, in a timely fashion.

    
.. contents:: :depth: 4

Objective
=========

Background
==========

Overview
========

Detailed design
===============

As for 2.1, we divide the 2.2 design into three areas:

- core changes, which affect the master daemon/job queue/locking or
  all/most logical units
- logical unit/feature changes
- external interface changes (e.g. command line, OS API, hooks, ...)

    
Core changes
------------

Master Daemon Scaling improvements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

Currently the Ganeti master daemon is based on four sets of threads:

- The main thread (1 thread) just accepts connections on the master
  socket
- The client worker pool (16 threads) handles those connections,
  one thread per connected socket, parses luxi requests, and sends data
  back to the clients
- The job queue worker pool (25 threads) executes the actual jobs
  submitted by the clients
- The rpc worker pool (10 threads) interacts with the nodes via
  HTTP-based RPC

This means that every masterd currently runs 52 threads to do its job.
Being able to reduce the number of thread sets would make the master's
architecture a lot simpler. Moreover, having fewer threads can help
decrease lock contention, log pollution and memory usage.
Also, with the current architecture, masterd suffers from quite a few
scalability issues:

    
- Since the 16 client worker threads handle one connection each, it's
  very easy to exhaust them by just connecting to masterd 16 times and
  not sending any data. While we could perhaps make those pools
  resizable, increasing the number of threads won't help with lock
  contention.
- Some luxi operations (in particular REQ_WAIT_FOR_JOB_CHANGE) make the
  relevant client thread block on its job for a relatively long time.
  This makes it easier to exhaust the 16 client threads.
- The job queue lock is quite heavily contended, and certain easily
  reproducible workloads show that it's very easy to put masterd in
  trouble: for example, running ~15 background instance reinstall jobs
  results in a master daemon that, even without having exhausted the
  client worker threads, can't answer simple job list requests or
  submit more jobs.

    
Proposed changes
++++++++++++++++

In order to be able to interact with the master daemon even when it's
under heavy load, and to make it simpler to add core functionality
(such as an asynchronous rpc client), we propose three subsequent levels
of changes to the master core architecture.

After making this change we'll be able to re-evaluate the size of our
thread pool, if we see that we can make most threads in the client
worker pool always idle. In the future we should also investigate making
the rpc client asynchronous as well, so that we can make masterd a lot
smaller in number of threads and memory size, and thus also easier to
understand, debug, and scale.

Connection handling
^^^^^^^^^^^^^^^^^^^

We'll move the main thread of ganeti-masterd to asyncore, so that it can
share the mainloop code with all other Ganeti daemons. Then all luxi
clients will be asyncore clients, and I/O to/from them will be handled
by the master thread asynchronously. Data will be read from the client
sockets as it becomes available and kept in a buffer; when a complete
message is found, it's passed to a client worker thread for parsing and
processing. The client worker thread is responsible for serializing the
reply, which can then be sent asynchronously by the main thread on the
socket.

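As an illustration, the following sketch shows the kind of dispatcher
this implies. It is not existing masterd code; the ``worker_pool``
object, its ``AddTask`` method and the use of a single end-of-message
byte are assumptions made only for the example::

  import asyncore

  class LuxiClientHandler(asyncore.dispatcher_with_send):
    """Reads luxi messages asynchronously, hands them to worker threads."""

    _EOM = "\3"  # assumed end-of-message marker

    def __init__(self, sock, worker_pool):
      asyncore.dispatcher_with_send.__init__(self, sock)
      self._worker_pool = worker_pool
      self._inbuf = ""

    def handle_read(self):
      # Never block waiting for a full message; buffer whatever is there
      self._inbuf += self.recv(4096)
      while self._EOM in self._inbuf:
        (msg, self._inbuf) = self._inbuf.split(self._EOM, 1)
        # Parsing and processing happen in a worker thread, not here
        self._worker_pool.AddTask((self, msg))

    def SendReply(self, serialized_reply):
      # Called with the already-serialized reply; the asyncore mainloop
      # flushes the output buffer as the socket becomes writable
      self.send(serialized_reply + self._EOM)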
    
Wait for job change
^^^^^^^^^^^^^^^^^^^

The REQ_WAIT_FOR_JOB_CHANGE luxi request is changed to be
subscription-based, so that the executing thread doesn't have to be
waiting for the changes to arrive. Threads producing messages (job queue
executors) will make sure that when there is a change another thread is
awakened and delivers it to the waiting clients. This can be either a
dedicated "wait for job changes" thread or pool, or one of the client
workers, depending on what's easier to implement. In either case the
main asyncore thread will only be involved in pushing the actual data,
and not in fetching/serializing it.

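A minimal sketch of such a subscription registry follows. The class and
method names are illustrative only; whether the callbacks run in a
dedicated thread or in a client worker is left open, as described above::

  import threading

  class JobChangeSubscriptions(object):
    """Lets job executors notify waiters without tying up client threads."""

    def __init__(self):
      self._lock = threading.Lock()
      self._subscribers = {}  # job_id -> list of callback functions

    def Subscribe(self, job_id, callback):
      """Registers a callback to be run on the next change of a job."""
      self._lock.acquire()
      try:
        self._subscribers.setdefault(job_id, []).append(callback)
      finally:
        self._lock.release()

    def NotifyChange(self, job_id, job_info):
      """Called by a job queue executor after it has updated a job."""
      self._lock.acquire()
      try:
        callbacks = self._subscribers.pop(job_id, [])
      finally:
        self._lock.release()
      for callback in callbacks:
        # The callback only pushes already-fetched data to the waiting
        # client, e.g. via the asyncore mainloop
        callback(job_info)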
    
Other features to look at when implementing this code are:

  - Possibility not to need the job lock to know which updates to push.
  - Possibility to signal clients that are about to time out, because no
    update has been received, that they should not despair and keep
    waiting (luxi level keepalive).
  - Possibility to defer updates if they are too frequent, providing
    them at a maximum rate (lower priority).

    
Job Queue lock
^^^^^^^^^^^^^^

Our tests show that the job queue lock is a point of high contention.
We'll try to decrease its contention, either by locking at a finer
granularity, by using shared/exclusive locks, or by reducing the size of
the critical sections. This section of the design should be updated with
the proposed changes for the 2.2 release, with regard to the job queue.

    
Remote procedure call timeouts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

The current RPC protocol used by Ganeti is based on HTTP. Every request
consists of an HTTP PUT request (e.g. ``PUT /hooks_runner HTTP/1.0``)
and doesn't return until the function called has returned. Parameters
and return values are encoded using JSON.

On the server side, ``ganeti-noded`` handles every incoming connection
in a separate process by forking just after accepting the connection.
This process exits after sending the response.

There is one major problem with this design: timeouts cannot be used on
a per-request basis. Neither the client nor the server knows how long a
call will take. Even if we might be able to group requests into
different categories (e.g. fast and slow), this is not reliable.

If a node has an issue or the network connection fails while a request
is being handled, the master daemon can wait for a long time for the
connection to time out (e.g. due to the operating system's underlying
TCP keep-alive packets or timeouts). While the settings for keep-alive
packets can be changed using Linux-specific socket options, we prefer to
use application-level timeouts because these cover both the machine-down
and the unresponsive-node-daemon cases.

    
Proposed changes
++++++++++++++++

RPC glossary
^^^^^^^^^^^^

Function call ID
  Unique identifier returned by ``ganeti-noded`` after invoking a
  function.
Function process
  Process started by ``ganeti-noded`` to call the actual (backend)
  function.

    
Protocol
^^^^^^^^

Initially we chose HTTP as our RPC protocol because there were existing
libraries, which, unfortunately, turned out to miss important features
(such as SSL certificate authentication) and we had to write our own.

This proposal can easily be implemented using HTTP, though it would
likely be more efficient and less complicated to use the LUXI protocol
already used to communicate between client tools and the Ganeti master
daemon. Switching to another protocol can occur at a later point; for
now, this proposal should be implemented using HTTP as its underlying
protocol.

The LUXI protocol currently contains two functions, ``WaitForJobChange``
and ``AutoArchiveJobs``, which can take a long time. They both support
a parameter to specify the timeout. This timeout is usually chosen as
roughly half of the socket timeout, guaranteeing a response before the
socket times out. After the specified amount of time,
``AutoArchiveJobs`` returns and reports the number of archived jobs;
``WaitForJobChange`` returns and reports a timeout. In both cases, the
functions can be called again.

A similar model can be used for the inter-node RPC protocol. In some
sense, the node daemon will implement a light variant of *"node daemon
jobs"*. When the function call is sent, it specifies an initial timeout.
If the function didn't finish within this timeout, a response is sent
with a unique identifier, the function call ID. The client can then
choose to wait again, with another timeout, for the function to finish.
Inter-node RPC calls would no longer block indefinitely, and there would
be an implicit ping mechanism.

    
Request handling
^^^^^^^^^^^^^^^^

To support the protocol changes described above, the way the node daemon
handles requests will have to change. Instead of forking and handling
every connection in a separate process, there should be one child
process per function call, and the master process will handle the
communication with clients and the function processes using asynchronous
I/O.

Function processes communicate with the parent process via stdio and
possibly their exit status. Every function process has a unique
identifier, though it shouldn't be the process ID only (PIDs can be
recycled and are prone to race conditions for this use case). The
proposed format is ``${ppid}:${cpid}:${time}:${random}``, where ``ppid``
is the ``ganeti-noded`` PID, ``cpid`` the child's PID, ``time`` the
current Unix timestamp with decimal places and ``random`` at least 16
random bits.

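A possible way to build such an identifier is sketched below; the helper
name is illustrative and not part of the actual node daemon code::

  import os
  import random
  import time

  def _MakeFunctionCallId(child_pid):
    """Builds a unique identifier for a function process.

    Combining the node daemon's PID, the child's PID, the timestamp and
    some random bits ensures that a recycled PID can't be mistaken for
    an older, unrelated function call.

    """
    return "%d:%d:%.6f:%x" % (os.getpid(), child_pid, time.time(),
                              random.getrandbits(16))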
    
The following operations will be supported (a short usage sketch follows
the list):

``StartFunction(fn_name, fn_args, timeout)``
  Starts a function specified by ``fn_name`` with arguments in
  ``fn_args`` and waits up to ``timeout`` seconds for the function
  to finish. Fire-and-forget calls can be made by specifying a timeout
  of 0 seconds (e.g. for powercycling the node). Returns three values:
  the function call ID (if not finished), whether the function finished
  (or timed out) and the function's return value.
``WaitForFunction(fnc_id, timeout)``
  Waits up to ``timeout`` seconds for the function call to finish. The
  return value is the same as for ``StartFunction``.

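From the master's point of view, calling a node function then becomes a
loop similar to the sketch below, where ``node.StartFunction`` and
``node.WaitForFunction`` stand for the operations described above rather
than for existing code::

  def CallNodeFunction(node, fn_name, fn_args, timeout=10):
    """Calls a function on a node, re-waiting until it has finished."""
    (fn_id, finished, result) = node.StartFunction(fn_name, fn_args, timeout)
    while not finished:
      # Every iteration doubles as an implicit ping: if the node or the
      # network is down, this call fails quickly instead of blocking
      (fn_id, finished, result) = node.WaitForFunction(fn_id, timeout)
    return result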
    
In the future, ``StartFunction`` could support an additional parameter
to specify after how long the function process should be aborted.

Simplified timing diagram::

  Master daemon        Node daemon                      Function process
   |
  Call function
  (timeout 10s) -----> Parse request and fork for ----> Start function
                       calling actual function, then     |
                       wait up to 10s for function to    |
                       finish                            |
                        |                                |
                       ...                              ...
                        |                                |
  Examine return <----  |                                |
  value and wait                                         |
  again -------------> Wait another 10s for function     |
                        |                                |
                       ...                              ...
                        |                                |
  Examine return <----  |                                |
  value and wait                                         |
  again -------------> Wait another 10s for function     |
                        |                                |
                       ...                              ...
                        |                                |
                        |                               Function ends,
                       Get return value and forward <-- process exits
  Process return <---- it to caller
  value and continue
   |

.. TODO: Convert diagram above to graphviz/dot graphic

On process termination (e.g. after having been sent a ``SIGTERM`` or
``SIGINT`` signal), ``ganeti-noded`` should send ``SIGTERM`` to all
function processes and wait for all of them to terminate.

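A minimal sketch of that termination step, assuming the daemon keeps the
PIDs of its function processes in a list (the helper below is
illustrative only)::

  import os
  import signal

  def _TerminateFunctionProcesses(child_pids):
    """Forwards SIGTERM to all function processes and waits for them."""
    for pid in child_pids:
      os.kill(pid, signal.SIGTERM)
    for pid in child_pids:
      # Reap the children so no function process outlives the node daemon
      os.waitpid(pid, 0)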
    
Inter-cluster instance moves
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

With the current design of Ganeti, moving whole instances between
different clusters involves a lot of manual work. There are several ways
to move instances, one of them being to export the instance, manually
copying all data to the new cluster before importing it again. Manual
changes to the instance's configuration, such as the IP address, may be
necessary in the new environment. The goal is to improve and automate
this process in Ganeti 2.2.

Proposed changes
++++++++++++++++

Authorization, Authentication and Security
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Until now, each Ganeti cluster was a self-contained entity and wouldn't
talk to other Ganeti clusters. Nodes within clusters only had to trust
the other nodes in the same cluster and the network used for replication
was trusted, too (hence the ability to use a separate, local network
for replication).

For inter-cluster instance transfers this model must be weakened. Nodes
in one cluster will have to talk to nodes in other clusters, sometimes
in other locations and, most importantly, via untrusted network
connections.

Various options have been considered for securing and authenticating the
data transfer from one machine to another. To reduce the risk of
accidentally overwriting data due to software bugs, authenticating the
arriving data was considered critical. Eventually we decided to use
socat's OpenSSL options (``OPENSSL:``, ``OPENSSL-LISTEN:`` et al), which
provide us with encryption, authentication and authorization when used
with separate keys and certificates.

Combinations of OpenSSH, GnuPG and Netcat were deemed too complex to set
up from within Ganeti. Any solution involving OpenSSH would require a
dedicated user with a home directory and likely automated modifications
to the user's ``$HOME/.ssh/authorized_keys`` file. When using Netcat,
GnuPG or another encryption method would be necessary to transfer the
data over an untrusted network. socat combines both in one program and
is already a dependency.

Each of the two clusters will have to generate an RSA key. The public
parts are exchanged between the clusters by a third party, such as an
administrator or a system interacting with Ganeti via the remote API
("third party" from here on). After receiving each other's public key,
the clusters can start talking to each other.

All encrypted connections must be verified on both sides. Neither side
may accept unverified certificates. The generated certificate should
only be valid for the time necessary to move the instance.

    
For additional protection of the instance data, the two clusters can
verify the certificates and destination information exchanged via the
third party by checking an HMAC signature using a key shared among the
involved clusters. By default this secret key will be a random string
unique to the cluster, generated by running SHA1 over 20 bytes read from
``/dev/urandom``, and the administrator must synchronize the secrets
between clusters before instances can be moved. If the third party does
not know the secret, it can't forge the certificates or redirect the
data. Unless disabled by a new cluster parameter, verifying the HMAC
signatures must be mandatory. The HMAC signature for X509 certificates
will be prepended to the certificate, similar to an RFC822 header, and
only covers the certificate (from ``-----BEGIN CERTIFICATE-----`` to
``-----END CERTIFICATE-----``). The header name will be
``X-Ganeti-Signature`` and its value will have the format
``$salt/$hash`` (salt and hash separated by a slash). The salt may only
contain characters in the range ``[a-zA-Z0-9]``.

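The following sketch shows how such a signature could be computed and
prepended; the function name is illustrative, and only the header format
described above is taken from this design::

  import hashlib
  import hmac

  _SIGNATURE_HEADER = "X-Ganeti-Signature"

  def SignCertificate(cert_pem, secret, salt):
    """Prepends an HMAC signature header to a PEM X509 certificate."""
    assert salt.isalnum(), "Salt must only contain [a-zA-Z0-9]"
    # How exactly the salt enters the HMAC (here: prepended to the signed
    # text) is an implementation detail not fixed by this design
    hmac_hex = hmac.new(secret, salt + cert_pem, hashlib.sha1).hexdigest()
    return "%s: %s/%s\n%s" % (_SIGNATURE_HEADER, salt, hmac_hex, cert_pem)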
    
On the web, the destination cluster would be equivalent to an HTTPS
server requiring verifiable client certificates. The browser would be
equivalent to the source cluster and must verify the server's
certificate while providing a client certificate to the server.

Copying data
^^^^^^^^^^^^

To simplify the implementation, we decided to operate at a block-device
level only, allowing us to easily support non-DRBD instance moves.

Intra-cluster instance moves will re-use the existing export and import
scripts supplied by instance OS definitions. Unlike simply copying the
raw data, this allows using filesystem-specific utilities to dump only
used parts of the disk and to exclude certain disks from the move.
Compression should be used to further reduce the amount of data
transferred.

The export script writes all data to stdout and the import script reads
it from stdin again. To avoid copying data and reduce disk space
consumption, everything is read from the disk and sent over the network
directly, where it's written straight to the new block device on the
other side.

    
Workflow
^^^^^^^^

#. Third party tells source cluster to shut down instance, asks for the
   instance specification and for the public part of an encryption key

   - Instance information can already be retrieved using an existing API
     (``OpQueryInstanceData``).
   - An RSA encryption key and a corresponding self-signed X509
     certificate are generated using the "openssl" command. This key
     will be used to encrypt the data sent to the destination cluster.

     - Private keys never leave the cluster.
     - The public part (the X509 certificate) is signed using HMAC with
       salting and a secret shared between Ganeti clusters.

#. Third party tells destination cluster to create an instance with the
   same specifications as on source cluster and to prepare for an
   instance move with the key received from the source cluster and
   receives the public part of the destination's encryption key

   - The current API to create instances (``OpCreateInstance``) will be
     extended to support an import from a remote cluster.
   - A valid, unexpired X509 certificate signed with the destination
     cluster's secret will be required. By verifying the signature, we
     know the third party didn't modify the certificate.

     - The private keys never leave their cluster, hence the third party
       can not decrypt or intercept the instance's data by modifying the
       IP address or port sent by the destination cluster.

   - The destination cluster generates another key and certificate,
     signs and sends it to the third party, who will have to pass it to
     the API for exporting an instance (``OpExportInstance``). This
     certificate is used to ensure we're sending the disk data to the
     correct destination cluster.
   - Once a disk can be imported, the API sends the destination
     information (IP address and TCP port) together with an HMAC
     signature to the third party.

#. Third party hands public part of the destination's encryption key
   together with all necessary information to source cluster and tells
   it to start the move

   - The existing API for exporting instances (``OpExportInstance``)
     will be extended to export instances to remote clusters.

#. Source cluster connects to destination cluster for each disk and
   transfers its data using the instance OS definition's export and
   import scripts

   - Before starting, the source cluster must verify the HMAC signature
     of the certificate and destination information (IP address and TCP
     port).
   - When connecting to the remote machine, strong certificate checks
     must be employed.

#. Due to the asynchronous nature of the whole process, the destination
   cluster checks whether all disks have been transferred every time
   after transferring a single disk; if so, it destroys the encryption
   key
#. After sending all disks, the source cluster destroys its key
#. Destination cluster runs OS definition's rename script to adjust
   instance settings if needed (e.g. IP address)
#. Destination cluster starts the instance if requested at the beginning
   by the third party
#. Source cluster removes the instance if requested

    
Instance move in pseudo code
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. highlight:: python

The following pseudo code describes a script moving instances between
clusters and what happens on both clusters.

#. Script is started, gets the instance name and destination cluster::

    (instance_name, dest_cluster_name) = sys.argv[1:]

    # Get destination cluster object
    dest_cluster = db.FindCluster(dest_cluster_name)

    # Use database to find source cluster
    src_cluster = db.FindClusterByInstance(instance_name)

#. Script tells source cluster to stop instance::

    # Stop instance
    src_cluster.StopInstance(instance_name)

    # Get instance specification (memory, disk, etc.)
    inst_spec = src_cluster.GetInstanceInfo(instance_name)

    (src_key_name, src_cert) = src_cluster.CreateX509Certificate()

#. ``CreateX509Certificate`` on source cluster::

    key_file = mkstemp()
    cert_file = "%s.cert" % key_file
    RunCmd(["/usr/bin/openssl", "req", "-new",
            "-newkey", "rsa:1024", "-days", "1",
            "-nodes", "-x509", "-batch",
            "-keyout", key_file, "-out", cert_file])

    plain_cert = utils.ReadFile(cert_file)

    # HMAC sign using secret key, this adds a "X-Ganeti-Signature"
    # header to the beginning of the certificate
    signed_cert = utils.SignX509Certificate(plain_cert,
      utils.ReadFile(constants.X509_SIGNKEY_FILE))

    # The certificate now looks like the following:
    #
    #   X-Ganeti-Signature: 1234/28676f0516c6ab68062b[…]
    #   -----BEGIN CERTIFICATE-----
    #   MIICsDCCAhmgAwIBAgI[…]
    #   -----END CERTIFICATE-----

    # Return name of key file and signed certificate in PEM format
    return (os.path.basename(key_file), signed_cert)

#. Script creates instance on destination cluster and waits for move to
   finish::

    dest_cluster.CreateInstance(mode=constants.REMOTE_IMPORT,
                                spec=inst_spec,
                                source_cert=src_cert)

    # Wait until destination cluster gives us its certificate and the
    # destination information for all disks
    dest_cert = None
    disk_info = {}
    while not (dest_cert and len(disk_info) >= len(inst_spec.disks)):
      tmp = dest_cluster.WaitOutput()
      if tmp is Certificate:
        dest_cert = tmp
      elif tmp is DiskInfo:
        # DiskInfo contains destination address and port
        disk_info[tmp.index] = tmp

    # Tell source cluster to export disks
    for disk in disk_info.values():
      src_cluster.ExportDisk(instance_name, disk=disk,
                             key_name=src_key_name,
                             dest_cert=dest_cert)

    print ("Instance %s successfully moved to %s" %
           (instance_name, dest_cluster.name))

#. ``CreateInstance`` on destination cluster::

    # …

    if mode == constants.REMOTE_IMPORT:
      # Make sure certificate was not modified since it was generated by
      # source cluster (which must use the same secret)
      if (not utils.VerifySignedX509Cert(source_cert,
            utils.ReadFile(constants.X509_SIGNKEY_FILE))):
        raise Error("Certificate not signed with this cluster's secret")

      if utils.CheckExpiredX509Cert(source_cert):
        raise Error("X509 certificate is expired")

      source_cert_file = utils.WriteTempFile(source_cert)

      # See above for X509 certificate generation and signing
      (key_name, signed_cert) = CreateSignedX509Certificate()

      SendToClient("x509-cert", signed_cert)

      for disk in instance.disks:
        # Start socat
        RunCmd(("socat"
                " OPENSSL-LISTEN:%s,…,key=%s,cert=%s,cafile=%s,verify=1"
                " stdout > /dev/disk…") %
               (port, GetRsaKeyPath(key_name, private=True),
                GetRsaKeyPath(key_name, private=False), source_cert_file))
        SendToClient("send-disk-to", disk, ip_address, port)

      DestroyX509Cert(key_name)

      RunRenameScript(instance_name)

#. ``ExportDisk`` on source cluster::

    # Make sure certificate was not modified since it was generated by
    # destination cluster (which must use the same secret)
    if (not utils.VerifySignedX509Cert(cert_pem,
          utils.ReadFile(constants.X509_SIGNKEY_FILE))):
      raise Error("Certificate not signed with this cluster's secret")

    if utils.CheckExpiredX509Cert(cert_pem):
      raise Error("X509 certificate is expired")

    dest_cert_file = utils.WriteTempFile(cert_pem)

    # Start socat
    RunCmd(("socat stdin"
            " OPENSSL:%s:%s,…,key=%s,cert=%s,cafile=%s,verify=1"
            " < /dev/disk…") %
           (disk.host, disk.port,
            GetRsaKeyPath(key_name, private=True),
            GetRsaKeyPath(key_name, private=False), dest_cert_file))

    if instance.all_disks_done:
      DestroyX509Cert(key_name)

.. highlight:: text

    
Miscellaneous notes
^^^^^^^^^^^^^^^^^^^

- A very similar system could also be used for instance exports within
  the same cluster. Currently OpenSSH is being used, but could be
  replaced by socat and SSL/TLS.
- During the design of intra-cluster instance moves we also discussed
  encrypting instance exports using GnuPG.
- While most instances should have exactly the same configuration as
  on the source cluster, setting them up with a different disk layout
  might be helpful in some use-cases.
- A cleanup operation, similar to the one available for failed instance
  migrations, should be provided.
- ``ganeti-watcher`` should remove instances pending a move from another
  cluster after a certain amount of time. This takes care of failures
  somewhere in the process.
- RSA keys can be generated using the existing
  ``bootstrap.GenerateSelfSignedSslCert`` function, though it might be
  useful to not write both parts into a single file, requiring small
  changes to the function. The public part always starts with
  ``-----BEGIN CERTIFICATE-----`` and ends with ``-----END
  CERTIFICATE-----``.
- The source and destination cluster might be different when it comes
  to available hypervisors, kernels, etc. The destination cluster should
  refuse to accept an instance move if it can't fulfill an instance's
  requirements.

    
Feature changes
---------------

KVM Security
~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

Currently all kvm processes run as root. Taking ownership of the
hypervisor process, from inside a virtual machine, would mean a full
compromise of the whole Ganeti cluster, knowledge of all Ganeti
authentication secrets, full access to all running instances, and the
option of subverting other basic services on the cluster (e.g. ssh).

Proposed changes
++++++++++++++++

We would like to decrease the attack surface available if a hypervisor
is compromised. We can do so by adding different features to Ganeti,
which will restrict what a compromised hypervisor can do to subvert the
node, in the absence of a local privilege escalation attack.

Dropping privileges in kvm to a single user (easy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By passing the ``-runas`` option to kvm, we can make it drop privileges.
The user can be chosen by a hypervisor parameter, so that each instance
can have its own user, but by default they will all run under the same
one. It should be very easy to implement, and can easily be backported
to 2.1.X.

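A sketch of what this could look like when the KVM command line is built
follows; the ``security_user`` parameter name and the default user are
illustrative only, not settled hypervisor parameters::

  def _GetKvmRunasArgs(hvparams):
    """Returns the extra KVM arguments used to drop root privileges."""
    # KVM switches to this user after initialization, so a compromised
    # guest no longer runs with root privileges on the node
    user = hvparams.get("security_user", "ganeti-kvm")
    return ["-runas", user]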
    
This mode protects the Ganeti cluster from a subverted hypervisor, but
doesn't protect the instances from each other, unless care is taken to
specify a different user for each. This would prevent the worst attacks,
including:

- logging in to other nodes
- administering the Ganeti cluster
- subverting other services

But the following would remain an option:

- terminate other VMs (but not start them again, as that requires root
  privileges to set up networking) (unless different users are used)
- trace other VMs, and probably subvert them and access their data
  (unless different users are used)
- send network traffic from the node
- read unprotected data on the node filesystem

Running kvm in a chroot (slightly harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By passing the ``-chroot`` option to kvm, we can restrict the kvm
process in its own (possibly empty) root directory. We need to set this
area up so that the instance disks and control sockets are accessible,
so it would require slightly more work at the Ganeti level.

Breaking out of the VM into such a chroot would mean:

- a lot fewer options to find a local privilege escalation vector
- the impossibility to write local data, if the chroot is set up
  correctly
- the impossibility to read filesystem data on the host

It would still be possible though to:

- terminate other VMs
- trace other VMs, and possibly subvert them (if a tracer can be
  installed in the chroot)
- send network traffic from the node

Running kvm with a pool of users (slightly harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If rather than passing a single user as a hypervisor parameter, we have
a pool of usable ones, we can dynamically choose a free one to use and
thus guarantee that each machine will be separate from the others,
without putting the burden of this on the cluster administrator.

This would mean that interference between machines would be impossible,
and it can still be combined with the chroot benefits.

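A minimal sketch of choosing a free user from such a pool, where the
pool itself and the per-instance ``security_user`` value are assumptions
of the example rather than decided details::

  def _ChooseFreePoolUser(pool_users, running_instances):
    """Picks a pool user not used by any instance running on this node."""
    users_in_use = set(inst.hvparams.get("security_user")
                       for inst in running_instances)
    for user in pool_users:
      if user not in users_in_use:
        return user
    raise RuntimeError("No free user left in the KVM user pool")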
    
Running iptables rules to limit network interaction (easy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These don't need to be handled by Ganeti, but we can ship examples. If
the users used to run VMs were blocked from sending some or all network
traffic, it would become impossible for a broken-into hypervisor to send
arbitrary data on the node network, which is especially useful when the
instance and the node network are separated (using ganeti-nbma or a
separate set of network interfaces), or when a separate replication
network is maintained. We need to experiment to see how much restriction
we can properly apply, without limiting the instance's legitimate
traffic.

Running kvm inside a container (even harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Recent Linux kernels support different process namespaces through
control groups. PIDs, users, filesystems and even network interfaces can
be separated. If we can set up Ganeti to run kvm in a separate container
we could insulate all the host processes from even being visible if the
hypervisor gets broken into. Most probably separating the network
namespace would require one extra hop in the host, through a veth
interface, thus reducing performance, so we may want to avoid that, and
just rely on iptables.

Implementation plan
+++++++++++++++++++

We will first implement dropping privileges for kvm processes as a
single user, and most probably backport it to 2.1. Then we'll ship
example iptables rules to show how the user can be limited in its
network activities. After that we'll implement chroot restriction for
kvm processes, and extend the user limitation to use a user pool.

Finally we'll look into namespaces and containers, although that might
slip after the 2.2 release.

External interface changes
--------------------------

.. vim: set textwidth=72 :