=================
Ganeti 2.2 design
=================

This document describes the major changes in Ganeti 2.2 compared to
the 2.1 version.

The 2.2 version will be a relatively small release. Its main aim is to
avoid changing too much of the core code, while addressing issues and
adding new features and improvements over 2.1, in a timely fashion.

.. contents:: :depth: 4

Detailed design
===============

As for 2.1 we divide the 2.2 design into three areas:

- core changes, which affect the master daemon/job queue/locking or
  all/most logical units
- logical unit/feature changes
- external interface changes (e.g. command line, OS API, hooks, ...)

Core changes
------------

Master Daemon Scaling improvements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

Currently the Ganeti master daemon is based on four sets of threads:

- The main thread (1 thread) just accepts connections on the master
  socket
- The client worker pool (16 threads) handles those connections,
  one thread per connected socket, parses luxi requests, and sends data
  back to the clients
- The job queue worker pool (25 threads) executes the actual jobs
  submitted by the clients
- The rpc worker pool (10 threads) interacts with the nodes via
  http-based-rpc

This means that every masterd currently runs 52 threads to do its job.
Being able to reduce the number of thread sets would make the master's
architecture a lot simpler. Moreover having fewer threads can help
decrease lock contention, log pollution and memory usage.
Also, with the current architecture, masterd suffers from quite a few
scalability issues:

Core daemon connection handling
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Since the 16 client worker threads handle one connection each, it's very
easy to exhaust them, by just connecting to masterd 16 times and not
sending any data. While we could perhaps make those pools resizable,
increasing the number of threads won't help with lock contention nor
with better handling of long-running operations, where the client must
be kept informed that everything is proceeding and thus doesn't need to
time out.

Wait for job change
^^^^^^^^^^^^^^^^^^^

The REQ_WAIT_FOR_JOB_CHANGE luxi operation makes the relevant client
thread block on its job for a relatively long time. This is another easy
way to exhaust the 16 client threads, and a place where clients often
time out. Moreover, this operation increases the job queue lock
contention (see below).

Job Queue lock
^^^^^^^^^^^^^^

The job queue lock is quite heavily contended, and certain easily
reproducible workloads show that it's very easy to put masterd in
trouble: for example running ~15 background instance reinstall jobs
results in a master daemon that, even before the client worker threads
are exhausted, can't answer simple job list requests, or submit more
jobs.

Currently the job queue lock is an exclusive non-fair lock protecting
the following job queue methods (called by the client workers).

  - AddNode
  - RemoveNode
  - SubmitJob
  - SubmitManyJobs
  - WaitForJobChanges
  - CancelJob
  - ArchiveJob
  - AutoArchiveJobs
  - QueryJobs
  - Shutdown

Moreover the job queue lock is acquired outside of the job queue in two
other classes:

  - jqueue._JobQueueWorker (in RunTask) before executing the opcode,
    after finishing its execution and when handling an exception.
  - jqueue._OpExecCallbacks (in NotifyStart and Feedback) when the
    processor (mcpu.Processor) is about to start working on the opcode
    (after acquiring the necessary locks) and when any data is sent back
    via the feedback function.

Of those the major critical points are:

  - Submit[Many]Job, QueryJobs, WaitForJobChanges, which can easily slow
    down and block client threads, up to the point of making the
    respective clients time out.
  - The code paths in NotifyStart, Feedback, and RunTask, which slow
    down job processing between clients and otherwise non-related jobs.

To increase the pain:

  - WaitForJobChanges is a bad offender because it's implemented with a
    notified condition which wakes up waiting threads, which then try to
    acquire the global lock again
  - Many should-be-fast code paths are slowed down by replicating the
    change to remote nodes, and thus waiting, with the lock held, on
    remote rpcs to complete (starting, finishing, and submitting jobs)

Proposed changes
++++++++++++++++

In order to be able to interact with the master daemon even when it's
under heavy load, and to make it simpler to add core functionality
(such as an asynchronous rpc client) we propose three subsequent levels
of changes to the master core architecture.

After making these changes we'll be able to re-evaluate the size of our
thread pools, if we see that we can make most threads in the client
worker pool always idle. In the future we should also investigate making
the rpc client asynchronous as well, so that we can make masterd a lot
smaller in number of threads, and memory size, and thus also easier to
understand, debug, and scale.

Connection handling
^^^^^^^^^^^^^^^^^^^

We'll move the main thread of ganeti-masterd to asyncore, so that it can
share the mainloop code with all other Ganeti daemons. Then all luxi
clients will be asyncore clients, and I/O to/from them will be handled
by the master thread asynchronously. Data will be read from the client
sockets as it becomes available, and kept in a buffer; when a complete
message is found, it's passed to a client worker thread for parsing and
processing. The client worker thread is responsible for serializing the
reply, which can then be sent asynchronously by the main thread on the
socket.

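As an illustration, the intended flow could look roughly like the
following sketch, assuming an end-of-message delimiter and illustrative
``AddTask``/``send_reply`` helpers (none of this is the final API)::

  import asyncore

  class LuxiClientHandler(asyncore.dispatcher):
    """Buffers luxi data and hands complete messages to a worker."""

    def __init__(self, sock, workerpool):
      asyncore.dispatcher.__init__(self, sock)
      self._workerpool = workerpool
      self._inbuf = ""
      self._outbuf = ""

    def handle_read(self):
      # Read whatever is available and keep it in a buffer
      self._inbuf += self.recv(4096)
      while "\3" in self._inbuf:
        # A complete message was received; parsing, processing and
        # serializing the reply happen in a client worker thread
        (msg, self._inbuf) = self._inbuf.split("\3", 1)
        self._workerpool.AddTask(self, msg)

    def send_reply(self, serialized):
      # Called by the worker with the serialized reply; the main
      # thread pushes it out asynchronously via handle_write()
      self._outbuf += serialized + "\3"

    def writable(self):
      return bool(self._outbuf)

    def handle_write(self):
      sent = self.send(self._outbuf)
      self._outbuf = self._outbuf[sent:]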
    
Wait for job change
^^^^^^^^^^^^^^^^^^^

The REQ_WAIT_FOR_JOB_CHANGE luxi request is changed to be
subscription-based, so that the executing thread doesn't have to be
waiting for the changes to arrive. Threads producing messages (job queue
executors) will make sure that when there is a change another thread is
awakened to deliver it to the waiting clients. This can be either a
dedicated "wait for job changes" thread or pool, or one of the client
workers, depending on what's easier to implement. In either case the
main asyncore thread will only be involved in pushing the actual data,
and not in fetching/serializing it (a sketch of this flow follows the
list below).

Other features to look at when implementing this code are:

  - Possibility not to need the job lock to know which updates to push:
    if the thread producing the data pushes a copy of the update for the
    waiting clients, the thread sending it won't need to acquire the
    lock again to fetch the actual data.
  - Possibility to signal clients that are about to time out, when no
    update has been received, that they should keep waiting (luxi level
    keepalive).
  - Possibility to defer updates if they are too frequent, providing
    them at a maximum rate (lower priority).

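A minimal sketch of the subscription bookkeeping, assuming an
illustrative ``push_pool`` worker pool that delivers already-serialized
updates to the main asyncore thread (names are not the final API)::

  class JobChangeNotifier(object):
    """Tracks which clients wait on which job and pushes updates."""

    def __init__(self, push_pool):
      self._push_pool = push_pool
      self._subscribers = {}  # job_id -> set of client handlers

    def Subscribe(self, job_id, client):
      self._subscribers.setdefault(job_id, set()).add(client)

    def Notify(self, job_id, update):
      # Called by the job executor with a copy of the update, so the
      # sending side doesn't need the job queue lock to fetch the data
      for client in self._subscribers.get(job_id, ()):
        self._push_pool.AddTask(client, update)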
    
Job Queue lock
^^^^^^^^^^^^^^

In order to decrease the job queue lock contention, we will change the
code paths in the following ways, initially (a sketch of the intended
locking order follows the list):

  - A per-job lock will be introduced. All operations affecting only one
    job (for example feedback, starting/finishing notifications,
    subscribing to or watching a job) will only require the job lock.
    This should be a leaf lock, but if a situation arises in which it
    must be acquired together with the global job queue lock the global
    one must always be acquired last (for the global section).
  - The locks will be converted to a sharedlock. Any read-only operation
    will be able to proceed in parallel.
  - During remote update (which happens already per-job) we'll drop the
    job lock level to shared mode, so that read-only activities (for
    example job change notifications or QueryJobs calls) will be able to
    proceed in parallel.
  - The wait for job changes improvements proposed above will be
    implemented.

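A minimal sketch of the intended locking order, assuming a
``SharedLock``-style primitive with ``acquire(shared=...)`` and
``release()``, and illustrative helper names::

  def UpdateAndReplicateJob(queue, job, changes):
    # Only the per-job (leaf) lock is needed; the global queue lock
    # is not touched at all on this path
    job.lock.acquire(shared=0)
    try:
      job.ApplyChanges(changes)
    finally:
      job.lock.release()
    # The slow replication to remote nodes runs under the shared level,
    # so QueryJobs and job change notifications can proceed in parallel
    job.lock.acquire(shared=1)
    try:
      queue.ReplicateJobToNodes(job)
    finally:
      job.lock.release()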
    
In the future other improvements may include splitting off some of the
work (eg replication of a job to remote nodes) to a separate thread pool
or asynchronous thread, not tied to the code path for answering client
requests or the one executing the "real" work. This can be discussed
again after we have used the more granular job queue in production and
tested its benefits.

Remote procedure call timeouts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

The current RPC protocol used by Ganeti is based on HTTP. Every request
consists of an HTTP PUT request (e.g. ``PUT /hooks_runner HTTP/1.0``)
and doesn't return until the function called has returned. Parameters
and return values are encoded using JSON.

On the server side, ``ganeti-noded`` handles every incoming connection
in a separate process by forking just after accepting the connection.
This process exits after sending the response.

There is one major problem with this design: timeouts cannot be used on
a per-request basis. Neither client nor server knows how long a request
will take. Even if we were able to group requests into different
categories (e.g. fast and slow), this would not be reliable.

If a node has an issue or the network connection fails while a request
is being handled, the master daemon can wait for a long time for the
connection to time out (e.g. due to the operating system's underlying
TCP keep-alive packets or timeouts). While the settings for keep-alive
packets can be changed using Linux-specific socket options, we prefer to
use application-level timeouts because these cover both the machine-down
and the unresponsive-node-daemon cases.

Proposed changes
++++++++++++++++

RPC glossary
^^^^^^^^^^^^

Function call ID
  Unique identifier returned by ``ganeti-noded`` after invoking a
  function.
Function process
  Process started by ``ganeti-noded`` to call the actual (backend)
  function.

Protocol
^^^^^^^^

Initially we chose HTTP as our RPC protocol because there were existing
libraries, which, unfortunately, turned out to lack important features
(such as SSL certificate authentication), so we had to write our own.

This proposal can easily be implemented using HTTP, though it would
likely be more efficient and less complicated to use the LUXI protocol
already used to communicate between client tools and the Ganeti master
daemon. Nevertheless, this proposal should be implemented using HTTP as
its underlying protocol; switching to another protocol can occur at a
later point.

The LUXI protocol currently contains two functions, ``WaitForJobChange``
and ``AutoArchiveJobs``, which can take a long time. They both support
a parameter to specify the timeout. This timeout is usually chosen as
roughly half of the socket timeout, guaranteeing a response before the
socket times out. After the specified amount of time,
``AutoArchiveJobs`` returns and reports the number of archived jobs.
``WaitForJobChange`` returns and reports a timeout. In both cases, the
functions can be called again.

A similar model can be used for the inter-node RPC protocol. In some
sense, the node daemon will implement a light variant of *"node daemon
jobs"*. When the function call is sent, it specifies an initial timeout.
If the function doesn't finish within this timeout, a response is sent
with a unique identifier, the function call ID. The client can then
choose to wait again, with a timeout, for the function to finish.
Inter-node RPC calls would no longer block indefinitely and there
would be an implicit ping-mechanism.

Request handling
^^^^^^^^^^^^^^^^

To support the protocol changes described above, the way the node daemon
handles requests will have to change. Instead of forking and handling
every connection in a separate process, there should be one child
process per function call and the master process will handle the
communication with clients and the function processes using asynchronous
I/O.

Function processes communicate with the parent process via stdio and
possibly their exit status. Every function process has a unique
identifier, though it shouldn't be the process ID only (PIDs can be
recycled and are prone to race conditions for this use case). The
proposed format is ``${ppid}:${cpid}:${time}:${random}``, where ``ppid``
is the ``ganeti-noded`` PID, ``cpid`` the child's PID, ``time`` the
current Unix timestamp with decimal places and ``random`` at least 16
random bits.

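A minimal sketch of how such an identifier could be built (the helper
name is illustrative only)::

  import os
  import random
  import time

  def _NewFunctionCallId(child_pid):
    # ${ppid}:${cpid}:${time}:${random}, as described above
    return "%d:%d:%.6f:%04x" % (os.getpid(), child_pid, time.time(),
                                random.randint(0, (1 << 16) - 1))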
    
The following operations will be supported:

``StartFunction(fn_name, fn_args, timeout)``
  Starts a function specified by ``fn_name`` with arguments in
  ``fn_args`` and waits up to ``timeout`` seconds for the function
  to finish. Fire-and-forget calls can be made by specifying a timeout
  of 0 seconds (e.g. for powercycling the node). Returns three values:
  function call ID (if not finished), whether function finished (or
  timeout) and the function's return value.
``WaitForFunction(fnc_id, timeout)``
  Waits up to ``timeout`` seconds for the function call to finish.
  Return value same as ``StartFunction``.

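A rough caller-side sketch of how the master daemon could drive these
two operations; the ``rpc.*`` wrapper names are illustrative only::

  def CallNodeFunction(node, fn_name, fn_args, step_timeout=10):
    # Ask the node daemon to start the function and wait a first round
    (call_id, finished, result) = rpc.StartFunction(node, fn_name,
                                                    fn_args, step_timeout)
    # Keep polling until the function process has finished; every round
    # trip doubles as an application-level ping of the node daemon
    while not finished:
      (call_id, finished, result) = rpc.WaitForFunction(node, call_id,
                                                        step_timeout)
    return result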
    
In the future, ``StartFunction`` could support an additional parameter
to specify after how long the function process should be aborted.

Simplified timing diagram::

  Master daemon        Node daemon                      Function process
   |
  Call function
  (timeout 10s) -----> Parse request and fork for ----> Start function
                       calling actual function, then     |
                       wait up to 10s for function to    |
                       finish                            |
                        |                                |
                       ...                              ...
                        |                                |
  Examine return <----  |                                |
  value and wait                                         |
  again -------------> Wait another 10s for function     |
                        |                                |
                       ...                              ...
                        |                                |
  Examine return <----  |                                |
  value and wait                                         |
  again -------------> Wait another 10s for function     |
                        |                                |
                       ...                              ...
                        |                                |
                        |                               Function ends,
                       Get return value and forward <-- process exits
  Process return <---- it to caller
  value and continue
   |

.. TODO: Convert diagram above to graphviz/dot graphic

On process termination (e.g. after having been sent a ``SIGTERM`` or
``SIGINT`` signal), ``ganeti-noded`` should send ``SIGTERM`` to all
function processes and wait for all of them to terminate.

Inter-cluster instance moves
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

With the current design of Ganeti, moving whole instances between
different clusters involves a lot of manual work. There are several ways
to move instances, one of them being to export the instance, manually
copying all data to the new cluster before importing it again. Manual
changes to the instance's configuration, such as the IP address, may be
necessary in the new environment. The goal is to improve and automate
this process in Ganeti 2.2.

Proposed changes
++++++++++++++++

Authorization, Authentication and Security
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Until now, each Ganeti cluster was a self-contained entity and wouldn't
talk to other Ganeti clusters. Nodes within clusters only had to trust
the other nodes in the same cluster and the network used for replication
was trusted, too (hence the ability to use a separate, local network
for replication).

For inter-cluster instance transfers this model must be weakened. Nodes
in one cluster will have to talk to nodes in other clusters, sometimes
in other locations and, most importantly, via untrusted network
connections.

Various options have been considered for securing and authenticating the
data transfer from one machine to another. To reduce the risk of
accidentally overwriting data due to software bugs, authenticating the
arriving data was considered critical. Eventually we decided to use
socat's OpenSSL options (``OPENSSL:``, ``OPENSSL-LISTEN:`` et al), which
provide us with encryption, authentication and authorization when used
with separate keys and certificates.

Combinations of OpenSSH, GnuPG and Netcat were deemed too complex to set
up from within Ganeti. Any solution involving OpenSSH would require a
dedicated user with a home directory and likely automated modifications
to the user's ``$HOME/.ssh/authorized_keys`` file. When using Netcat,
GnuPG or another encryption method would be necessary to transfer the
data over an untrusted network. socat combines both in one program and
is already a dependency.

Each of the two clusters will have to generate an RSA key. The public
parts are exchanged between the clusters by a third party, such as an
administrator or a system interacting with Ganeti via the remote API
("third party" from here on). After receiving each other's public key,
the clusters can start talking to each other.

All encrypted connections must be verified on both sides. Neither side
may accept unverified certificates. The generated certificate should
only be valid for the time necessary to move the instance.

For additional protection of the instance data, the two clusters can
verify the certificates and destination information exchanged via the
third party by checking an HMAC signature using a key shared among the
involved clusters. By default this secret key will be a random string
unique to the cluster, generated by running SHA1 over 20 bytes read from
``/dev/urandom`` and the administrator must synchronize the secrets
between clusters before instances can be moved. If the third party does
not know the secret, it can't forge the certificates or redirect the
data. Unless disabled by a new cluster parameter, verifying the HMAC
signatures must be mandatory. The HMAC signature for X509 certificates
will be prepended to the certificate similar to an :rfc:`822` header and
only covers the certificate (from ``-----BEGIN CERTIFICATE-----`` to
``-----END CERTIFICATE-----``). The header name will be
``X-Ganeti-Signature`` and its value will have the format
``$salt/$hash`` (salt and hash separated by slash). The salt may only
contain characters in the range ``[a-zA-Z0-9]``.

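A minimal sketch of how such a signature could be computed; the exact
way salt and certificate are combined is illustrative, only the header
format is taken from this design::

  import hashlib
  import hmac
  import re

  _CERT_RE = re.compile(r"-----BEGIN CERTIFICATE-----.*?"
                        r"-----END CERTIFICATE-----", re.S)

  def SignX509Certificate(cert_pem, secret, salt):
    assert re.match(r"^[a-zA-Z0-9]+$", salt), "Invalid salt"
    cert = _CERT_RE.search(cert_pem).group(0)
    sig = hmac.new(secret, salt + cert, hashlib.sha1).hexdigest()
    return "X-Ganeti-Signature: %s/%s\n%s" % (salt, sig, cert)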
    
On the web, the destination cluster would be equivalent to an HTTPS
server requiring verifiable client certificates. The browser would be
equivalent to the source cluster and must verify the server's
certificate while providing a client certificate to the server.

Copying data
^^^^^^^^^^^^

To simplify the implementation, we decided to operate at a block-device
level only, allowing us to easily support non-DRBD instance moves.

Inter-cluster instance moves will re-use the existing export and import
scripts supplied by instance OS definitions. Unlike simply copying the
raw data, this allows using filesystem-specific utilities to dump only
used parts of the disk and to exclude certain disks from the move.
Compression should be used to further reduce the amount of data
transferred.

The export script writes all data to stdout and the import script reads
it from stdin again. To avoid copying data and to reduce disk space
consumption, everything is read from the disk and sent over the network
directly, where it'll be written to the new block device directly again.

Workflow
^^^^^^^^

#. Third party tells source cluster to shut down instance, asks for the
   instance specification and for the public part of an encryption key

   - Instance information can already be retrieved using an existing API
     (``OpQueryInstanceData``).
   - An RSA encryption key and a corresponding self-signed X509
     certificate is generated using the "openssl" command. This key will
     be used to encrypt the data sent to the destination cluster.

     - Private keys never leave the cluster.
     - The public part (the X509 certificate) is signed using HMAC with
       salting and a secret shared between Ganeti clusters.

#. Third party tells destination cluster to create an instance with the
   same specifications as on source cluster and to prepare for an
   instance move with the key received from the source cluster, and
   receives the public part of the destination's encryption key

   - The current API to create instances (``OpCreateInstance``) will be
     extended to support an import from a remote cluster.
   - A valid, unexpired X509 certificate signed with the destination
     cluster's secret will be required. By verifying the signature, we
     know the third party didn't modify the certificate.

     - The private keys never leave their cluster, hence the third party
       can not decrypt or intercept the instance's data by modifying the
       IP address or port sent by the destination cluster.

   - The destination cluster generates another key and certificate,
     signs and sends it to the third party, who will have to pass it to
     the API for exporting an instance (``OpExportInstance``). This
     certificate is used to ensure we're sending the disk data to the
     correct destination cluster.
   - Once a disk can be imported, the API sends the destination
     information (IP address and TCP port) together with an HMAC
     signature to the third party.

#. Third party hands public part of the destination's encryption key
   together with all necessary information to source cluster and tells
   it to start the move

   - The existing API for exporting instances (``OpExportInstance``)
     will be extended to export instances to remote clusters.

#. Source cluster connects to destination cluster for each disk and
   transfers its data using the instance OS definition's export and
   import scripts

   - Before starting, the source cluster must verify the HMAC signature
     of the certificate and destination information (IP address and TCP
     port).
   - When connecting to the remote machine, strong certificate checks
     must be employed.

#. Due to the asynchronous nature of the whole process, the destination
   cluster checks after each disk transfer whether all disks have been
   transferred; if so, it destroys the encryption key
#. After sending all disks, the source cluster destroys its key
#. Destination cluster runs OS definition's rename script to adjust
   instance settings if needed (e.g. IP address)
#. Destination cluster starts the instance if requested at the beginning
   by the third party
#. Source cluster removes the instance if requested

Instance move in pseudo code
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. highlight:: python

The following pseudo code describes a script moving instances between
clusters and what happens on both clusters.

#. Script is started, gets the instance name and destination cluster::

    (instance_name, dest_cluster_name) = sys.argv[1:]

    # Get destination cluster object
    dest_cluster = db.FindCluster(dest_cluster_name)

    # Use database to find source cluster
    src_cluster = db.FindClusterByInstance(instance_name)

#. Script tells source cluster to stop instance::

    # Stop instance
    src_cluster.StopInstance(instance_name)

    # Get instance specification (memory, disk, etc.)
    inst_spec = src_cluster.GetInstanceInfo(instance_name)

    (src_key_name, src_cert) = src_cluster.CreateX509Certificate()

#. ``CreateX509Certificate`` on source cluster::

    key_file = mkstemp()
    cert_file = "%s.cert" % key_file
    RunCmd(["/usr/bin/openssl", "req", "-new",
            "-newkey", "rsa:1024", "-days", "1",
            "-nodes", "-x509", "-batch",
            "-keyout", key_file, "-out", cert_file])

    plain_cert = utils.ReadFile(cert_file)

    # HMAC sign using secret key; this adds a "X-Ganeti-Signature"
    # header to the beginning of the certificate
    signed_cert = utils.SignX509Certificate(plain_cert,
      utils.ReadFile(constants.X509_SIGNKEY_FILE))

    # The certificate now looks like the following:
    #
    #   X-Ganeti-Signature: 1234/28676f0516c6ab68062b[…]
    #   -----BEGIN CERTIFICATE-----
    #   MIICsDCCAhmgAwIBAgI[…]
    #   -----END CERTIFICATE-----

    # Return name of key file and signed certificate in PEM format
    return (os.path.basename(key_file), signed_cert)

#. Script creates instance on destination cluster and waits for move to
   finish::

    dest_cluster.CreateInstance(mode=constants.REMOTE_IMPORT,
                                spec=inst_spec,
                                source_cert=src_cert)

    # Wait until destination cluster gives us its certificate and the
    # destination information for every disk
    dest_cert = None
    disk_info = {}
    while not (dest_cert and len(disk_info) == len(inst_spec.disks)):
      tmp = dest_cluster.WaitOutput()
      if isinstance(tmp, Certificate):
        dest_cert = tmp
      elif isinstance(tmp, DiskInfo):
        # DiskInfo contains destination address and port
        disk_info[tmp.index] = tmp

    # Tell source cluster to export disks
    for disk in disk_info.values():
      src_cluster.ExportDisk(instance_name, disk=disk,
                             key_name=src_key_name,
                             dest_cert=dest_cert)

    print ("Instance %s successfully moved to %s" %
           (instance_name, dest_cluster.name))

#. ``CreateInstance`` on destination cluster::

    # …

    if mode == constants.REMOTE_IMPORT:
      # Make sure certificate was not modified since it was generated by
      # source cluster (which must use the same secret)
      if (not utils.VerifySignedX509Cert(source_cert,
            utils.ReadFile(constants.X509_SIGNKEY_FILE))):
        raise Error("Certificate not signed with this cluster's secret")

      if utils.CheckExpiredX509Cert(source_cert):
        raise Error("X509 certificate is expired")

      source_cert_file = utils.WriteTempFile(source_cert)

      # See above for X509 certificate generation and signing
      (key_name, signed_cert) = CreateSignedX509Certificate()

      SendToClient("x509-cert", signed_cert)

      for disk in instance.disks:
        # Start socat
        RunCmd(("socat"
                " OPENSSL-LISTEN:%s,…,key=%s,cert=%s,cafile=%s,verify=1"
                " stdout > /dev/disk…") %
               (port, GetRsaKeyPath(key_name, private=True),
                GetRsaKeyPath(key_name, private=False), source_cert_file))
        SendToClient("send-disk-to", disk, ip_address, port)

      DestroyX509Cert(key_name)

      RunRenameScript(instance_name)

#. ``ExportDisk`` on source cluster::

    # Make sure certificate was not modified since it was generated by
    # destination cluster (which must use the same secret)
    if (not utils.VerifySignedX509Cert(cert_pem,
          utils.ReadFile(constants.X509_SIGNKEY_FILE))):
      raise Error("Certificate not signed with this cluster's secret")

    if utils.CheckExpiredX509Cert(cert_pem):
      raise Error("X509 certificate is expired")

    dest_cert_file = utils.WriteTempFile(cert_pem)

    # Start socat
    RunCmd(("socat stdin"
            " OPENSSL:%s:%s,…,key=%s,cert=%s,cafile=%s,verify=1"
            " < /dev/disk…") %
           (disk.host, disk.port,
            GetRsaKeyPath(key_name, private=True),
            GetRsaKeyPath(key_name, private=False), dest_cert_file))

    if instance.all_disks_done:
      DestroyX509Cert(key_name)

.. highlight:: text

Miscellaneous notes
^^^^^^^^^^^^^^^^^^^

- A very similar system could also be used for instance exports within
  the same cluster. Currently OpenSSH is being used, but could be
  replaced by socat and SSL/TLS.
- During the design of inter-cluster instance moves we also discussed
  encrypting instance exports using GnuPG.
- While most instances should have exactly the same configuration as
  on the source cluster, setting them up with a different disk layout
  might be helpful in some use-cases.
- A cleanup operation, similar to the one available for failed instance
  migrations, should be provided.
- ``ganeti-watcher`` should remove instances pending a move from another
  cluster after a certain amount of time. This takes care of failures
  somewhere in the process.
- RSA keys can be generated using the existing
  ``bootstrap.GenerateSelfSignedSslCert`` function, though it might be
  useful to not write both parts into a single file, requiring small
  changes to the function. The public part always starts with
  ``-----BEGIN CERTIFICATE-----`` and ends with ``-----END
  CERTIFICATE-----``.
- The source and destination cluster might be different when it comes
  to available hypervisors, kernels, etc. The destination cluster should
  refuse to accept an instance move if it can't fulfill an instance's
  requirements.

Privilege separation
~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

All Ganeti daemons run as the root user. This is not ideal from a
security perspective, as a successful exploit of any daemon gives the
attacker full access to the system.

In order to overcome this situation we'll allow Ganeti to run its
daemons under different users and a dedicated group. This will also
allow some useful side effects, like letting users in that group run
some ``gnt-*`` commands.

Implementation
++++++++++++++

For Ganeti 2.2 the implementation will be focused on the RAPI daemon
only. This involves changes to ``daemons.py`` so that it's possible to
drop privileges when daemonizing the process. This will, however, be a
short-term solution, to be replaced by dropping privileges already at
daemon startup in Ganeti 2.3.

It also needs changes in the master daemon to create the socket with new
permissions/owners to allow RAPI access. There will be no other
permission/owner changes in the file structure, as the RAPI daemon is
started with root permissions: during startup it will read all needed
files and only then drop privileges, before contacting the master
daemon.

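A minimal sketch of the privilege drop itself; the user and group names
are illustrative, the real ones will come from the build-time/cluster
configuration::

  import grp
  import os
  import pwd

  def DropPrivileges(username="gnt-rapi", groupname="gnt-daemons"):
    # All root-only files must have been read before calling this
    uid = pwd.getpwnam(username).pw_uid
    gid = grp.getgrnam(groupname).gr_gid
    # The group must be changed first, while we still are root
    os.setgroups([gid])
    os.setgid(gid)
    os.setuid(uid)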
    
Feature changes
---------------

KVM Security
~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

Currently all kvm processes run as root. Taking ownership of the
hypervisor process, from inside a virtual machine, would mean a full
compromise of the whole Ganeti cluster, knowledge of all Ganeti
authentication secrets, full access to all running instances, and the
option of subverting other basic services on the cluster (e.g. ssh).

Proposed changes
++++++++++++++++

We would like to decrease the attack surface available if a hypervisor
is compromised. We can do so by adding different features to Ganeti
which, in the absence of a local privilege escalation attack, will
restrict a broken hypervisor's possibilities to subvert the node.

Dropping privileges in kvm to a single user (easy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By passing the ``-runas`` option to kvm, we can make it drop privileges.
The user can be chosen via a hypervisor parameter, so that each instance
can have its own user, but by default they will all run under the same
one. It should be very easy to implement, and can easily be backported
to 2.1.X.

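A minimal sketch of what this could look like when building the kvm
command line; the hypervisor parameter names used here are hypothetical::

  def _AppendSecurityOptions(kvm_cmd, hvp):
    # Run the instance under an unprivileged user, if configured
    security_user = hvp.get("security_user")  # hypothetical parameter
    if security_user:
      kvm_cmd.extend(["-runas", security_user])
    # Optionally confine the process to a directory (see the chroot
    # section below)
    chroot_dir = hvp.get("chroot_dir")  # hypothetical parameter
    if chroot_dir:
      kvm_cmd.extend(["-chroot", chroot_dir])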
    
This mode protects the Ganeti cluster from a subverted hypervisor, but
doesn't protect the instances between each other, unless care is taken
to specify a different user for each. This would prevent the worst
attacks, including:

- logging in to other nodes
- administering the Ganeti cluster
- subverting other services

But the following would remain an option:

- terminate other VMs (but not start them again, as that requires root
  privileges to set up networking) (unless different users are used)
- trace other VMs, and probably subvert them and access their data
  (unless different users are used)
- send network traffic from the node
- read unprotected data on the node filesystem

Running kvm in a chroot (slightly harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By passing the ``-chroot`` option to kvm, we can restrict the kvm
process in its own (possibly empty) root directory. We need to set this
area up so that the instance disks and control sockets are accessible,
so it would require slightly more work at the Ganeti level.

Breaking out into a chroot would mean:

- a lot fewer options to find a local privilege escalation vector
- the impossibility to write local data, if the chroot is set up
  correctly
- the impossibility to read filesystem data on the host

It would still be possible though to:

- terminate other VMs
- trace other VMs, and possibly subvert them (if a tracer can be
  installed in the chroot)
- send network traffic from the node

Running kvm with a pool of users (slightly harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If rather than passing a single user as a hypervisor parameter, we have
a pool of usable ones, we can dynamically choose a free one to use and
thus guarantee that each machine will be separate from the others,
without putting the burden of this on the cluster administrator.

This would mean interference between machines would be impossible, and
can still be combined with the chroot benefits.

Running iptables rules to limit network interaction (easy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These don't need to be handled by Ganeti, but we can ship examples. If
the users used to run VMs were blocked from sending some or all network
traffic, it would become impossible for a broken-into hypervisor to send
arbitrary data on the node network, which is especially useful when the
instance and the node network are separated (using ganeti-nbma or a
separate set of network interfaces), or when a separate replication
network is maintained. We need to experiment to see how much restriction
we can properly apply, without limiting the instances' legitimate
traffic.

Running kvm inside a container (even harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Recent Linux kernels support different process namespaces through
control groups. PIDs, users, filesystems and even network interfaces can
be separated. If we can set up Ganeti to run kvm in a separate container
we could insulate all the host processes from even being visible if the
hypervisor gets broken into. Most probably separating the network
namespace would require one extra hop in the host, through a veth
interface, thus reducing performance, so we may want to avoid that, and
just rely on iptables.

Implementation plan
+++++++++++++++++++

We will first implement dropping privileges for kvm processes as a
single user, and most probably backport it to 2.1. Then we'll ship
example iptables rules to show how the user can be limited in its
network activities. After that we'll implement chroot restriction for
kvm processes, and extend the user limitation to use a user pool.

Finally we'll look into namespaces and containers, although that might
slip after the 2.2 release.

External interface changes
--------------------------

OS API
~~~~~~

The OS variants implementation in Ganeti 2.1 didn't prove to be useful
enough to alleviate the need to hack around the Ganeti API in order to
provide flexible OS parameters.

As such, for Ganeti 2.2 we will provide support for arbitrary OS
parameters. However, since OSes are not registered in Ganeti, but
instead discovered at runtime, the interface is not entirely
straightforward.

Furthermore, to support the system administrator in keeping OSes
properly in sync across the nodes of a cluster, Ganeti will also verify
the consistency of a new ``os_version`` file (if it exists).

These changes to the OS API will bump the API version to 20.

OS version
++++++++++

A new ``os_version`` file will be supported by Ganeti. This file is not
required, but if it exists, its contents will be checked for consistency
across nodes. The file should hold only one line of text (any extra data
will be discarded), and its contents will be shown in the OS information
and diagnose commands.

It is recommended that OS authors update the contents of this file for
any changes; at a minimum, modifications that change the behaviour of
import/export scripts must increase the version, since they break
intra-cluster migration.

Parameters
++++++++++

The interface between Ganeti and the OS scripts will be based on
environment variables, and as such the parameters and their values will
need to be valid in this context.

Names
^^^^^

The parameter names will be declared in a new file, ``parameters.list``,
together with a one-line documentation string for each
(whitespace-separated). Example::

  $ cat parameters.list
  ns1    Specifies the first name server to add to /etc/resolv.conf
  extra_packages  Specifies additional packages to install
  rootfs_size     Specifies the root filesystem size (the rest will be left unallocated)
  track  Specifies the distribution track, one of 'stable', 'testing' or 'unstable'

As seen above, the documentation can be separated from the names by
multiple spaces/tabs.

The parameter names as read from the file will be used for the command
line interface in lowercased form; as such, there shouldn't be any two
parameters which differ in case only.

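A minimal sketch of how such a file could be parsed (the helper name is
illustrative)::

  def ParseParametersList(text):
    """Returns a dict mapping lowercased parameter names to their docs."""
    params = {}
    for line in text.splitlines():
      line = line.strip()
      if not line:
        continue
      # Name and documentation are separated by arbitrary whitespace
      fields = line.split(None, 1)
      if len(fields) > 1:
        doc = fields[1]
      else:
        doc = ""
      params[fields[0].lower()] = doc
    return params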
    
Values
^^^^^^

The values of the parameters are, from Ganeti's point of view,
completely freeform. If a given parameter has, from the OS' point of
view, a fixed set of valid values, these should be documented as such
and verified by the OS, but Ganeti will not handle such parameters
specially.

An empty value must be handled identically to a missing parameter. In
other words, the validation script should only test for non-empty
values, and not for declared versus undeclared parameters.

Furthermore, each parameter should have an (internal to the OS) default
value, that will be used if not passed from Ganeti. More precisely, it
should be possible for any parameter to specify a value that will have
the same effect as not passing the parameter, and in no case should
the absence of a parameter be treated as an exceptional case (outside
the value space).

Environment variables
+++++++++++++++++++++

The parameters will be exposed in the environment in upper case and
prefixed with the string ``OSP_``. For example, a parameter declared in
``parameters.list`` as ``ns1`` will appear in the environment as the
variable ``OSP_NS1``.

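A minimal sketch of how the environment could be built from a name/value
dictionary (the helper name is illustrative)::

  def BuildOsParamsEnv(osparams):
    """Converts OS parameters into OSP_* environment variables."""
    env = {}
    for (name, value) in osparams.items():
      env["OSP_%s" % name.upper()] = value
    return env

  # Example: {"ns1": "192.0.2.1"} becomes {"OSP_NS1": "192.0.2.1"}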
    
Validation
++++++++++

For the purpose of parameter name/value validation, the OS scripts
*must* provide an additional script, named ``verify``. This script will
be called with the argument ``parameters``, and all the parameters will
be passed in via environment variables, as described above.

The script should signify success/failure based on its exit code, and
show explanatory messages either on its standard output or standard
error. These messages will be passed on to the master, and stored in
the OpCode result/error message.

The parameters must be constructed to be independent of the instance
specifications. In general, the validation script will only be called
with the parameter variables set, but not with the normal per-instance
variables, in order for Ganeti to be able to validate default parameters
too, when they change. Validation will only be performed on one cluster
node, and it will be up to the Ganeti administrator to keep the OS
scripts in sync between all nodes.

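A minimal sketch of how the ``verify`` script could be invoked, assuming
the existing ``utils.RunCmd`` helper accepts an environment override
(the remaining names are illustrative)::

  import os

  def VerifyOsParameters(os_dir, osparams):
    # The parameters are passed via OSP_* variables on top of the
    # normal environment
    env = os.environ.copy()
    env.update(BuildOsParamsEnv(osparams))
    result = utils.RunCmd([os.path.join(os_dir, "verify"), "parameters"],
                          env=env)
    if result.failed:
      raise errors.OpPrereqError("OS parameters validation failed: %s" %
                                 result.output)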
    
Instance operations
+++++++++++++++++++

The parameters will be passed, as described above, to all the other
instance operations (creation, import, export). Ideally, these scripts
will not abort with parameter validation errors, if the ``verify``
script has verified them correctly.

Note: when changing an instance's OS type, any OS parameters defined at
instance level will be kept as-is. If the parameters differ between the
new and the old OS, the user should manually remove/update them as
needed.

Declaration and modification
++++++++++++++++++++++++++++

Since the OSes are not registered in Ganeti, we will only make a 'weak'
link between the parameters as declared in Ganeti and the actual OSes
existing on the cluster.

It will be possible to declare parameters either globally, per cluster
(where they are indexed per OS/variant), or individually, per
instance. The declaration of parameters will not be tied to currently
existing OSes. When specifying a parameter, if the OS exists, it will be
validated; if not, then it will simply be stored as-is.

A special note is that it will not be possible to 'unset' at instance
level a parameter that is declared globally. Instead, at instance level
the parameter should be given an explicit value, or the default value as
explained above.

CLI interface
+++++++++++++

The modification of global (default) parameters will be done via the
``gnt-os`` command, and the per-instance parameters via the
``gnt-instance`` command. Both these commands will take an additional
``--os-parameters`` or ``-O`` flag that specifies the parameters in the
familiar comma-separated, key=value format. For removing a parameter, a
``-key`` syntax will be used, e.g.::

  # initial modification
  $ gnt-instance modify -O use_dhcp=true instance1
  # later revert (to the cluster default, or the OS default if not
  # defined at cluster level)
  $ gnt-instance modify -O -use_dhcp instance1

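A minimal sketch of how such an option value could be parsed into
additions and removals (the helper name is illustrative)::

  def ParseOsParamsOption(value):
    """Parses e.g. "use_dhcp=true,-track" into ({...}, [...])."""
    additions = {}
    removals = []
    for item in value.split(","):
      item = item.strip()
      if not item:
        continue
      if item.startswith("-"):
        removals.append(item[1:])
      else:
        (key, val) = item.split("=", 1)
        additions[key] = val
    return (additions, removals)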
    
Internal storage
++++++++++++++++

Internally, the OS parameters will be stored in a new ``osparams``
attribute. The global parameters will be stored on the cluster object,
and the value of this attribute will be a dictionary indexed by OS name
(this also accepts an OS+variant name, which will override a simple OS
name, see below), and for values the name/value dictionary. For the
instances, the value will be directly the name/value dictionary.

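Illustratively, the stored data could look like this (the OS names and
values are examples only)::

  # On the cluster object, indexed by OS or OS+variant name
  cluster.osparams = {
    "debootstrap": {"ns1": "192.0.2.1"},
    "debootstrap+testing": {"track": "testing"},
    }

  # On an instance object, directly the name/value dictionary
  instance.osparams = {"rootfs_size": "10G"}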
    
Overriding rules
++++++++++++++++

Any instance-specific parameters will override any variant-specific
parameters, which in turn will override any global parameters. The
global parameters, in turn, override the built-in defaults (of the OS
scripts).

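A minimal sketch of the resulting merge order; the helper name is
illustrative, and the OS scripts' built-in defaults are applied by the
scripts themselves, so they don't appear here::

  def GetEffectiveOsParams(cluster, instance, os_name, variant):
    params = {}
    # Global (cluster-level) parameters for the plain OS name first
    params.update(cluster.osparams.get(os_name, {}))
    # Variant-specific parameters override the plain OS name ones
    if variant:
      params.update(cluster.osparams.get("%s+%s" % (os_name, variant), {}))
    # Instance-specific parameters override everything else
    params.update(instance.osparams)
    return params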
    
.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: