=============================
Improvements of Node Security
=============================

This document describes an enhancement of Ganeti's security by restricting
the distribution of security-sensitive data to the master and master
candidates only.

Note: In this document, we will use the term 'normal node' for a node that
is neither master nor master-candidate.

.. contents:: :depth: 4

Objective
=========

Up till 2.10, Ganeti distributes security-relevant keys to all nodes,
including nodes that are neither master nor master-candidates. Those
keys are the private and public SSH keys for node communication and the
SSL certificate and private key for RPC communication. The objective of
this design is to limit the set of nodes that can establish ssh and RPC
connections to the master and master candidates.

As pointed out in
`issue 377 <https://code.google.com/p/ganeti/issues/detail?id=377>`_, this
is a security risk. Since all nodes have these keys, compromising
any of those nodes would possibly give an attacker access to all other
machines in the cluster. Reducing the set of nodes that are able to
make ssh and RPC connections to the master and master candidates would
significantly reduce the risk simply because fewer machines would be a
valuable target for attackers.

Note: For bigger installations of Ganeti, it is advisable to run master
candidate nodes as non-vm-capable nodes. This reduces the attack
surface for hypervisor exploitation.


Detailed design
===============

Current state and shortcomings
------------------------------

Currently (as of 2.10), all nodes hold the following information:

- the ssh host keys (public and private)
- the ssh root keys (public and private)
- node daemon certificate (the SSL client certificate and its
  corresponding private key)

Concerning ssh, this setup contains the following security issue. Since
all nodes of a cluster can ssh as root into any other cluster node, one
compromised node can harm all other nodes of a cluster.

Regarding the SSL encryption of the RPC communication with the node
daemon, we currently have the following setup. There is only one
certificate, which is used as both client and server certificate. Besides
the SSL client verification, we check if the used client certificate is
the same as the certificate stored on the server.

This means that any node running a node daemon can also act as an RPC
client and use it to issue RPC calls to other cluster nodes. This in
turn means that any compromised node could be used to make RPC calls to
any node (including itself) to gain full control over VMs. This could
be used by an attacker to, for example, bring down the VMs or exploit bugs
in the virtualization stacks to gain access to the host machines as well.

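The certificate check described above amounts to a plain equality test
(a simplified sketch, not the actual noded code)::

  def current_rpc_client_check(peer_cert_pem, local_cert_pem):
      # Every node holds the same certificate, so this check only
      # establishes that the peer is *some* cluster node; any
      # compromised node passes it as well.
      return peer_cert_pem == local_cert_pem
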
Proposal concerning SSH host key distribution
---------------------------------------------

We propose the following design regarding the SSH host key handling. The
root keys are untouched by this design.

Each node gets its own ssh private/public key pair, but only the public
keys of the master candidates get added to the ``authorized_keys`` file
of all nodes. This has the following advantages:

- Only master candidates can ssh into other nodes, thus compromised
  nodes cannot compromise the cluster further.
- One can remove a compromised master candidate from a cluster
  (including removing its public key from all nodes' ``authorized_keys``
  file) without having to regenerate and distribute new ssh keys for all
  master candidates. (Even though it is good practice to do that anyway,
  since the compromising of the other master candidates might have taken
  place already.)
- If an (uncompromised) master candidate is offlined to be sent for
  repair due to a hardware failure before Ganeti can remove any keys
  from it (for example when the network adapter of the machine is broken),
  we don't have to worry about the keys being on a machine that is
  physically accessible.

To ensure security while transferring public key information and
updating the ``authorized_keys``, several other changes are necessary:

- Any distribution of keys (in this case only public keys) is done via
  SSH and not via RPC. An attacker who has RPC control should not be
  able to get SSH access where he did not already have SSH access.
- The only RPC calls that are made in this context are from the master
  daemon to the node daemon on its own host, and noded ensures as much
  as possible that the change to be made does not harm the cluster's
  security boundary.
- The nodes that are potential master candidates keep a list of public
  keys of potential master candidates of the cluster in a separate
  file called ``ganeti_pub_keys`` to keep track of which keys could
  possibly be added to the ``authorized_keys`` files of the nodes. We come
  to what "potential" means in this case in the next section. The key
  list is only transferred via SSH or written directly by noded. It
  is not stored in the cluster config, because the config is
  distributed via RPC.

The following sections describe in detail which Ganeti commands are
affected by the proposed changes.

RAPI
~~~~

The design goal to limit SSH powers to master candidates conflicts with
the current powers a user of the RAPI interface would have. The
``master_capable`` flag of nodes can be modified via RAPI.
That means that an attacker with access to the RAPI interface can make
all non-master-capable nodes master-capable, and then increase the master
candidate pool size till all machines are master candidates (or at least
a particular machine that he is aiming for). This means that with RAPI
access and a compromised normal node, one can make this node a master
candidate and then still have the power to compromise the whole cluster.

To mitigate this issue, we propose the following changes:

- Add a flag to the cluster configuration,
  ``master_capability_rapi_modifiable``, which indicates whether or
  not it should be possible to modify the ``master_capable`` flag of
  nodes via RAPI. The flag is set to ``False`` by default and can
  itself only be changed on the commandline. In this design doc, we
  refer to the flag as the "rapi flag" from here on.
- Only if the ``master_capability_rapi_modifiable`` switch is set to
  ``True`` is it possible to modify the master-capability flag of
  nodes.

With this setup, there are the following definitions of "potential
master candidates" depending on the rapi flag (a short sketch of the
distinction follows after this list):

- If the rapi flag is set to ``True``, all cluster nodes are potential
  master candidates, because, as described above, all of them can
  eventually be made master candidates via RAPI and thus, security-wise,
  we haven't won anything over the current SSH handling.
- If the rapi flag is set to ``False``, only the master-capable nodes
  are considered potential master candidates, as it is not possible to
  make them master candidates via RAPI at all.

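The distinction can be summarized as follows (a hypothetical helper,
shown for illustration only)::

  def potential_master_candidates(nodes, rapi_flag):
      """Return the nodes whose public keys may ever be distributed."""
      if rapi_flag:
          # Any node could be promoted via RAPI, so all nodes qualify.
          return list(nodes)
      # Otherwise, only master-capable nodes can ever be promoted.
      return [node for node in nodes if node.master_capable]
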
Note that when the rapi flag is changed, the state of the
``ganeti_pub_keys`` file on all nodes has to be updated accordingly.
This should be done in the client script ``gnt_cluster`` before the
RPC call to update the configuration is made, because this way, if
someone tried to perform that RPC call on the master to trick it into
thinking that the flag is enabled, it would not help, as the content of
the ``ganeti_pub_keys`` file is a crucial part in the design of the
distribution of the SSH keys.

Note: One could think of always allowing the master-capability to be
disabled via RAPI and just restricting the enabling of it, thus making
it possible to RAPI-"freeze" a node's master-capability state once it is
disabled. However, we think these are rather confusing semantics for the
involved flags and thus we go with the proposed design.

Note that this change will break RAPI compatibility, at least if the
rapi flag is not explicitly set to ``True``. We made this choice to
have the more secure option as the default, because otherwise it is
unlikely to be widely used.


Cluster initialization
~~~~~~~~~~~~~~~~~~~~~~

On cluster initialization, the following steps are taken in
bootstrap.py:

- A public/private key pair is generated (as before), but only used
  by the first (and thus master) node. In particular, the private key
  never leaves the node.
- A mapping of node UUIDs to public SSH keys is created and stored
  as a text file in ``/var/lib/ganeti/ganeti_pub_keys``, only accessible
  by root (permissions 0600). The master node's UUID and its public
  key are added as the first entry. The format of the file is one
  line per node, each line composed as ``node_uuid ssh_key`` (an
  illustrative example follows after this list).
- The node's public key is added to its own ``authorized_keys`` file.

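For illustration, a ``ganeti_pub_keys`` file with two entries might look
like this (UUIDs and key material shortened and made up)::

  6a28c82f-2e91-4e4b-8013-851197a4e6a5 ssh-rsa AAAAB3NzaC1yc2EAAA...
  d2bbba53-9c8f-4a57-a25e-8b1efa71e1f9 ssh-rsa AAAAB3NzaC1yc2EBBB...
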
(Re-)Adding nodes to a cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

According to :doc:`design-node-add`, Ganeti transfers the ssh keys to
every node that gets added to the cluster.

Adding a new node will require the following steps.

In gnt_node.py:

- On the new node, a new public/private SSH key pair is generated.
- The public key of the new node is fetched (via SSH) to the master
  node and, if it is a potential master candidate (see definition above),
  it is added to the ``ganeti_pub_keys`` list on the master node.
- The public keys of all current master candidates are added to the
  new node's ``authorized_keys`` file (also via SSH).

In LUNodeAdd in cmdlib/node.py:

- The LUNodeAdd determines whether or not the new node is a master
  candidate and in any case updates the cluster's configuration with the
  new node's information. (This is not changed by the proposed design.)
- If the new node is a master candidate, we make an RPC call to the node
  daemon of the master node to add the new node's public key to all
  nodes' ``authorized_keys`` files. The implementation of this RPC call
  has to be extra careful, as described in the next steps, because
  compromised RPC security should not compromise SSH security.

RPC call execution in noded (on master node):

- Check that the public key of the new node is in the
  ``ganeti_pub_keys`` file of the master node to make sure that no keys
  of nodes outside the Ganeti cluster and no keys that are not potential
  master candidates gain SSH access in the cluster (see the sketch
  after this list).
- Via SSH, transfer the new node's public key to all nodes (including
  the new node) and add it to their ``authorized_keys`` file.
- The ``ganeti_pub_keys`` file is transferred via SSH to all
  potential master candidate nodes except the master node
  (including the new one).

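The first check could be sketched as follows (hypothetical helper names,
not the actual noded implementation)::

  def assert_key_is_known(pub_keys_path, node_uuid, candidate_key):
      """Refuse to distribute a key that is not in ganeti_pub_keys."""
      with open(pub_keys_path) as keyfile:
          for line in keyfile:
              uuid, _, key = line.strip().partition(" ")
              if uuid == node_uuid and key == candidate_key:
                  return
      # An unknown key might come from a compromised RPC client, so
      # the requested change is rejected.
      raise ValueError("Key of node %s not in %s" %
                       (node_uuid, pub_keys_path))
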
In case of readding a node that used to be in the cluster before,
handling of the SSH keys would basically be the same; in particular, a
new SSH key pair is generated for the node, because we cannot be sure
that the old key pair has not been compromised while the node was
offlined.


Pro- and demoting a node to/from master candidate
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the role of a node is changed from 'normal' to 'master_candidate',
the procedure is the same as for adding nodes from the step "In
LUNodeAdd ..." on.

If a node gets demoted to 'normal', the master daemon makes a similar
RPC call to the master node's node daemon as for adding a node.

In the RPC call, noded will perform the following steps:

- Check that the public key of the node to be demoted is indeed in the
  ``ganeti_pub_keys`` file, to avoid deleting ssh keys of machines that
  don't belong to the cluster (and thus potentially locking out the
  administrator).
- Via SSH, remove the key from all nodes' ``authorized_keys`` files
  (sketched below).

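The removal step could be sketched like this (illustrative only; the
real implementation performs this via SSH on each node)::

  def remove_public_key(authorized_keys_path, pub_key):
      """Drop all authorized_keys lines containing the given key."""
      with open(authorized_keys_path) as keyfile:
          lines = keyfile.readlines()
      with open(authorized_keys_path, "w") as keyfile:
          keyfile.writelines(line for line in lines
                             if pub_key not in line)
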
This affects the behavior of the following commands:

::

  gnt-node modify --master-candidate=yes
  gnt-node modify --master-candidate=no [--auto-promote]

If the node was already a master candidate before the command to promote
it was issued, Ganeti does not do anything.

Note that when you demote a node from master candidate to normal node, another
master-capable and normal node will be promoted to master candidate. For this
newly promoted node, the same changes apply as if it was explicitly promoted.

The same behavior should be ensured for the corresponding rapi command.


Offlining and onlining a node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When a node is offlined, it immediately loses its role as master or master
candidate as well. When it is onlined again, it will become master
candidate again if it was one before. The handling of the keys should be done
in the same way as when the node is explicitly promoted or demoted to or from
master candidate. See the previous section for details.

This affects the commands:

::

  gnt-node modify --offline=yes
  gnt-node modify --offline=no [--auto-promote]

For offlining, the removal of the keys is particularly important, as the
detection of a compromised node might be the very reason for the offlining.
Of course, we cannot guarantee that removal of the key is always successful,
because the node might not be reachable anymore. Even though it is a
best-effort operation, it is still an improvement over the status quo,
because currently Ganeti does not even try to remove any keys.

The same behavior should be ensured for the corresponding rapi command.


Cluster verify
~~~~~~~~~~~~~~

So far, 'gnt-cluster verify' checks the SSH connectivity of all nodes to
all other nodes. We propose to replace this by the following checks:

- For all master candidates, we check if they can connect to any other node
  in the cluster (other master candidates and normal nodes).
- We check if the ``ganeti_pub_keys`` file contains keys of nodes that
  are no longer in the cluster or that are not potential master
  candidates.
- For all normal nodes, we check that their keys do not appear in other
  nodes' ``authorized_keys`` files. For now, we will only emit a warning
  rather than an error if this check fails, because Ganeti might be
  run in a setup where Ganeti is not the only system manipulating the
  SSH keys (see the sketch after this list).

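The last check might be sketched as follows (hypothetical helper, for
illustration only)::

  def check_stray_normal_node_keys(normal_node_keys, authorized_keys):
      """Warn if a normal node's key grants SSH access anywhere."""
      warnings = []
      for node_name, pub_key in normal_node_keys.items():
          if any(pub_key in line for line in authorized_keys):
              # Only a warning, since a system other than Ganeti might
              # legitimately manage this key.
              warnings.append("Key of normal node %s found in"
                              " authorized_keys" % node_name)
      return warnings
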
Upgrades
~~~~~~~~

When upgrading from a version that has the previous SSH setup to the one
proposed in this design, the upgrade procedure has to involve the
following steps in the post-upgrade hook:

- For all nodes, new SSH key pairs are generated.
- All nodes and their public keys are added to the ``ganeti_pub_keys``
  file and the file is copied to all nodes.
- All keys of master candidate nodes are added to the
  ``authorized_keys`` files of all other nodes.

Since this upgrade significantly changes the configuration of the
cluster's nodes, we will add a note to the UPGRADE notes to make the
administrator aware of this fact (in case he intends to enable access
from normal nodes to master candidates for reasons other than Ganeti's
use of the machines).

Also, in any operation where Ganeti creates new SSH keys, the old keys
will be backed up and not simply overwritten.


Downgrades
~~~~~~~~~~

These downgrading steps will be implemented from 2.12 to 2.11:

- The master node's private/public key pair will be distributed to all
  nodes (via SSH) and the individual SSH keys will be backed up.
- The obsolete individual ssh keys will be removed from all nodes'
  ``authorized_keys`` files.


Renew-Crypto
~~~~~~~~~~~~

The ``gnt-cluster renew-crypto`` command is not affected by the proposed
changes related to SSH.


Proposal regarding node daemon certificates
-------------------------------------------

Regarding the node daemon certificates, we propose the following changes
in the design.

- Instead of using the same certificate for all nodes as both server
  and client certificate, we generate a common server certificate (and
  the corresponding private key) for all nodes and a different client
  certificate (and the corresponding private key) for each node. All
  those certificates will be self-signed for now. The client
  certificates will use the node UUID as serial number to ensure
  uniqueness within the cluster.
- In addition, we store a mapping of
  (node UUID, client certificate digest) in the cluster's configuration
  and ssconf for hosts that are master or master candidate.
  The client certificate digest is a hash of the client certificate.
  We suggest a 'sha1' hash here. We will call this mapping 'candidate map'
  from here on.
- The node daemon will be modified in a way that, on an incoming RPC
  request, it first performs a client verification (same as before) to
  ensure that the requesting host is indeed the holder of the
  corresponding private key. Additionally, it compares the digest of
  the certificate of the incoming request to the respective entry of
  the candidate map. If the digest does not match the entry of the host
  in the mapping, or is not included in the mapping at all, the SSL
  connection is refused (a sketch of this check follows after this
  list).

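The additional verification step might look roughly like this (a minimal
sketch with hypothetical names; the actual noded code will differ)::

  import hashlib

  def verify_rpc_client(peer_cert_der, node_uuid, candidate_map):
      """Accept an RPC connection only from nodes in the candidate map.

      ``peer_cert_der`` is the peer's certificate in DER encoding,
      obtained after the usual SSL client verification has already
      succeeded.
      """
      expected_digest = candidate_map.get(node_uuid)
      if expected_digest is None:
          # Not a master candidate (or not a cluster node): refuse.
          return False
      # A mismatch means the peer presented a certificate other than
      # the one recorded in the cluster configuration.
      return hashlib.sha1(peer_cert_der).hexdigest() == expected_digest
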
This design has the following advantages:

- A compromised normal node cannot issue RPC calls, because it will
  not be in the candidate map. (See the ``Drawbacks`` section regarding
  an indirect way of achieving this, though.)
- A compromised master candidate would be able to issue RPC requests,
  but on detection of its compromised state, it can be removed from the
  cluster (and thus from the candidate map) without the need for
  redistribution of any certificates, because the other master candidates
  can continue using their own certificates. However, it is best
  practice to issue a complete key renewal even in this case, unless one
  can ensure that no actions compromising other nodes have already been
  carried out.
- A compromised node would not be able to use the other (possibly master
  candidate) nodes' information from the candidate map to issue RPCs,
  because the config just stores the digests and not the certificates
  themselves.
- A compromised node would be able to obtain another node's certificate
  by waiting for incoming RPCs from this other node. However, the node
  cannot use the certificate to issue RPC calls, because the SSL client
  verification would require the node to hold the corresponding private
  key as well.

Drawbacks of this design:

- Complexity of node and certificate management will be increased (see
  following sections for details).
- If the candidate map is not distributed fast enough to all nodes after
  an update of the configuration, it might be possible to issue RPC calls
  from a compromised master candidate node that has already been removed
  from the Ganeti cluster. However, this is still a better
  situation than before and an inherent problem when one wants to
  distinguish between master candidates and normal nodes.
- A compromised master candidate would still be able to issue RPC calls
  if it uses ssh to retrieve another master candidate's client
  certificate and the corresponding private SSL key. This is an issue
  even with the first part of the improved handling of ssh keys in this
  design (limiting ssh keys to master candidates), but it will be
  eliminated with the second part of the design (separate ssh keys for
  each master candidate).
- Even though this proposal is an improvement over the previous
  situation in Ganeti, it still does not use the full power of SSL. For
  further improvements, see Section "Related and future work".

Alternative proposals:

- Instead of generating a client certificate per node, one could think
  of just generating two different client certificates, one for normal
  nodes and one for master candidates. Noded could then just check if
  the requesting node has the master candidate certificate. The drawback
  of this proposal is that once one master candidate gets compromised,
  all master candidates would need to get a new certificate, even if the
  compromised master candidate had not yet fetched the certificates
  from the other master candidates via ssh.
- In addition to our main proposal, one could think of including a
  piece of data (for example the node's host name or UUID) in the RPC
  call which is encrypted with the requesting node's private key. The
  node daemon could check if the datum can be decrypted using the node's
  certificate. However, this would provide functionality similar to
  SSL's built-in client verification and add significant complexity
  to Ganeti's RPC protocol.

In the following sections, we describe how our design affects various
Ganeti operations.


Cluster initialization
~~~~~~~~~~~~~~~~~~~~~~

On cluster initialization, so far only the node daemon certificate was
created. With our design, two certificates (and corresponding keys)
need to be created: a server certificate to be distributed to all nodes
and a client certificate only to be used by this particular node. In the
following, we use the term node daemon certificate for the server
certificate only.

In the cluster configuration, the candidate map is created. It is
populated with the respective entry for the master node. It is also
written to ssconf.

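Creating such a self-signed client certificate, with the node's UUID as
serial number, might look roughly like this (a sketch using pyOpenSSL;
names and parameters are illustrative)::

  import uuid
  from OpenSSL import crypto

  def create_client_cert(node_uuid, validity_days=1825):
      """Generate a self-signed client certificate for one node."""
      key = crypto.PKey()
      key.generate_key(crypto.TYPE_RSA, 2048)
      cert = crypto.X509()
      # The node UUID serves as the serial number, ensuring uniqueness
      # within the cluster.
      cert.set_serial_number(uuid.UUID(node_uuid).int)
      cert.gmtime_adj_notBefore(0)
      cert.gmtime_adj_notAfter(validity_days * 24 * 3600)
      cert.set_issuer(cert.get_subject())  # self-signed
      cert.set_pubkey(key)
      cert.sign(key, "sha1")
      return key, cert
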
(Re-)Adding nodes
~~~~~~~~~~~~~~~~~

When a node is added, the server certificate is copied to the node (as
before). Additionally, a new client certificate (and the corresponding
private key) is created on the new node to be used only by the new node
as client certificate.

If the new node is a master candidate, the candidate map is extended by
the new node's data. As before, the updated configuration is distributed
to all nodes (as complete configuration on the master candidates and
ssconf on all nodes). Note that distribution of the configuration after
adding a node is already implemented, since all nodes hold the list of
nodes in the cluster in ssconf anyway.

If the configuration for whatever reason already holds an entry for this
node, it will be overridden.

When readding a node, the procedure is the same as for adding a node.


Promotion and demotion of master candidates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When a normal node gets promoted to master candidate, an entry has to be
added to the candidate map and the updated configuration has to be
distributed to all nodes. If there was already an entry for the node,
we override it.

On demotion of a master candidate, the node's entry in the candidate map
gets removed and the updated configuration gets redistributed.

The same procedure applies to onlining and offlining master candidates.


Cluster verify
~~~~~~~~~~~~~~

Cluster verify will be extended by the following checks:

- Whether each entry in the candidate map indeed corresponds to a master
  candidate.
- Whether the master candidates' certificate digests match their entries
  in the candidate map.
- Whether no node tries to use the certificate of another node. In
  particular, it is important to check that no normal node tries to
  use the certificate of a master candidate.


Crypto renewal
~~~~~~~~~~~~~~

Currently, when the cluster's cryptographic tokens are renewed using the
``gnt-cluster renew-crypto`` command, the node daemon certificate is
renewed (among others). Option ``--new-cluster-certificate`` renews the
node daemon certificate only.

By adding an option ``--new-node-certificates``, we offer to renew the
client certificates. Whenever the client certificates are renewed, the
candidate map has to be updated and redistributed.

If, for whatever reason, the candidate map becomes inconsistent (for
example due to inconsistent updating after a demotion or offlining), the
user can use this option to renew the client certificates and update the
candidate certificate map.

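For illustration, renewing only the client certificates would then be
done with the proposed option (not yet part of any released version)::

  gnt-cluster renew-crypto --new-node-certificates
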
Further considerations
----------------------

Watcher
~~~~~~~

The watcher is a script that is run on all nodes at regular intervals. The
changes proposed in this design will not affect the watcher's implementation,
because it behaves differently on the master than on non-master nodes.

Only on the master does it issue query calls, which would require a client
certificate of a node in the candidate mapping. This is the case for the
master node. On non-master nodes, its only external communication is done via
the ConfD protocol, which uses the hmac key, which is present on all nodes.
Besides that, the watcher does not make any ssh connections, and thus is
not affected by the changes in ssh key handling either.


Other Keys and Daemons
~~~~~~~~~~~~~~~~~~~~~~

Ganeti handles a couple of other keys/certificates that have not been mentioned
in this design so far. Also, daemons other than the ones mentioned so far
perform intra-cluster communication. Neither the keys nor the daemons will
be affected by this design, for several reasons:

- The hmac key used by ConfD (see :doc:`design-2.1`): the hmac key is still
  distributed to all nodes, because it was designed to be used for
  communicating with ConfD, which should be possible from all nodes.
  For example, the monitoring daemon, which runs on all nodes, uses it to
  retrieve information from ConfD. However, since communication with ConfD
  is read-only, a compromised node holding the hmac key does not enable an
  attacker to change the cluster's state.

- The WConfD daemon writes the configuration to all master candidates
  via RPC. Since it only runs on the master node, its ability to run
  RPC requests is maintained with this design.

- The rapi SSL key certificate and rapi user/password file 'rapi_users' are
  already only copied to the master candidates (see :doc:`design-2.1`,
  Section ``Redistribute Config``).

- The spice certificates are still distributed to all nodes, since it should
  be possible to use spice to access VMs on any cluster node.

- The cluster domain secret is used for inter-cluster instance moves.
  Since instances can be moved from any normal node of the source cluster to
  any normal node of the destination cluster, the presence of this
  secret on all nodes is necessary.


Related and Future Work
~~~~~~~~~~~~~~~~~~~~~~~

There are a couple of suggestions on how to improve the SSL setup even
more. As a trade-off with respect to complexity and implementation
effort, we did not implement them yet (as of version 2.11) but describe
them here for future reference.

- All SSL certificates that Ganeti uses so far are self-signed. It would
  increase the security if they were signed by a common CA. There is
  already a design doc for a Ganeti CA which was suggested in a
  different context (related to import/export). This would also be a
  benefit for the RPC calls. See design doc :doc:`design-impexp2` for
  more information. Implementing a CA is rather complex, because it
  would also mean supporting renewal of the CA certificate and providing
  and supporting infrastructure to revoke compromised certificates.
- An extension of the previous suggestion would be to even enable the
  system administrator to use an external CA. Especially in bigger
  setups, where an SSL infrastructure already exists, it would be useful
  if Ganeti could simply be integrated with it, rather than forcing the
  user to use the Ganeti CA.
- A lighter version of using a CA would be to use the server certificate
  to sign the client certificates instead of using self-signed
  certificates for both. The problem here is that this would make
  renewing the server certificate rather complicated, because all client
  certificates would need to be re-signed and redistributed as well,
  which leads to interesting chicken-and-egg problems when this is done
  via RPC calls.
- Ganeti RPC calls are currently done without checking if the hostname
  of the node matches the common name of the certificate. This
  might be a desirable feature, but would increase the effort when a
  node is renamed.
- The typical use case for SSL is to have one certificate per node
  rather than one shared certificate (Ganeti's noded server certificate)
  and a client certificate. One could change the design in a way that
  only one certificate per node is used, but this would require a common
  CA so that the validity of the certificate can be established by every
  node in the cluster.
- With the proposed design, the serial numbers of the client
  certificates are set to the node UUIDs. This is technically also not
  compliant with how SSL is supposed to be used, as the serial numbers
  should reflect the enumeration of certificates created by the CA. Once
  a CA is implemented, it might be reasonable to change this
  accordingly. The implementation of the proposed design also has the
  drawback of the serial number not changing even if the certificate is
  replaced by a new one (for example when calling ``gnt-cluster
  renew-crypto``), which also does not comply with the way SSL was
  designed to be used.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: