=================
Ganeti 2.2 design
=================

This document describes the major changes in Ganeti 2.2 compared to
the 2.1 version.

The 2.2 version will be a relatively small release. Its main aim is to
avoid changing too much of the core code, while addressing issues and
adding new features and improvements over 2.1, in a timely fashion.

.. contents:: :depth: 4

Objective
=========

Background
==========

Overview
========

Detailed design
===============

As for 2.1 we divide the 2.2 design into three areas:

- core changes, which affect the master daemon/job queue/locking or
  all/most logical units
- logical unit/feature changes
- external interface changes (eg. command line, os api, hooks, ...)

Core changes
------------

Master Daemon Scaling improvements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

Currently the Ganeti master daemon is based on four sets of threads:

- The main thread (1 thread) just accepts connections on the master
  socket
- The client worker pool (16 threads) handles those connections,
  one thread per connected socket, parses luxi requests, and sends data
  back to the clients
- The job queue worker pool (25 threads) executes the actual jobs
  submitted by the clients
- The rpc worker pool (10 threads) interacts with the nodes via
  http-based-rpc

This means that every masterd currently runs 52 threads to do its job.
Being able to reduce the number of thread sets would make the master's
architecture a lot simpler. Moreover, having fewer threads can help
decrease lock contention, log pollution and memory usage.
Also, with the current architecture, masterd suffers from quite a few
scalability issues:

Core daemon connection handling
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Since the 16 client worker threads handle one connection each, it's very
easy to exhaust them, by just connecting to masterd 16 times and not
sending any data. While we could perhaps make those pools resizable,
increasing the number of threads won't help with lock contention, nor
with better handling of long-running operations, where the client must
be kept informed that everything is proceeding so that it doesn't need
to time out.

Wait for job change
^^^^^^^^^^^^^^^^^^^

The REQ_WAIT_FOR_JOB_CHANGE luxi operation makes the relevant client
thread block on its job for a relatively long time. This is another easy
way to exhaust the 16 client threads, and a place where clients often
time out. Moreover, this operation increases job queue lock contention
(see below).

Job Queue lock
^^^^^^^^^^^^^^

The job queue lock is quite heavily contended, and certain easily
reproducible workloads show that it's very easy to put masterd in
trouble: for example running ~15 background instance reinstall jobs
results in a master daemon that, even without having exhausted the
client worker threads, can't answer simple job list requests, or
submit more jobs.

Currently the job queue lock is an exclusive non-fair lock insulating
the following job queue methods (called by the client workers).

- AddNode
- RemoveNode
- SubmitJob
- SubmitManyJobs
- WaitForJobChanges
- CancelJob
- ArchiveJob
- AutoArchiveJobs
- QueryJobs
- Shutdown

Moreover the job queue lock is acquired outside of the job queue in two
other classes:

- jqueue._JobQueueWorker (in RunTask) before executing the opcode, after
  finishing its execution and when handling an exception.
- jqueue._OpExecCallbacks (in NotifyStart and Feedback) when the
  processor (mcpu.Processor) is about to start working on the opcode
  (after acquiring the necessary locks) and when any data is sent back
  via the feedback function.

Of those the major critical points are:

- Submit[Many]Job, QueryJobs, WaitForJobChanges, which can easily slow
  down and block client threads to the point of making the respective
  clients time out.
- The code paths in NotifyStart, Feedback, and RunTask, which slow
  down job processing between clients and otherwise non-related jobs.

To increase the pain:

- WaitForJobChanges is a bad offender because it's implemented with a
  notified condition which wakes up the waiting threads, which then try
  to acquire the global lock again
- Many should-be-fast code paths are slowed down by replicating the
  change to remote nodes, and thus waiting, with the lock held, on
  remote rpcs to complete (starting, finishing, and submitting jobs)

Proposed changes
++++++++++++++++

In order to be able to interact with the master daemon even when it's
under heavy load, and to make it simpler to add core functionality
(such as an asynchronous rpc client) we propose three subsequent levels
of changes to the master core architecture.

After making these changes we'll be able to re-evaluate the size of our
thread pools, for example if we see that most threads in the client
worker pool are now always idle. In the future we should also
investigate making the rpc client asynchronous as well, so that we can
make masterd a lot smaller in number of threads and memory size, and
thus also easier to understand, debug, and scale.

Connection handling
^^^^^^^^^^^^^^^^^^^

We'll move the main thread of ganeti-masterd to asyncore, so that it can
share the mainloop code with all other Ganeti daemons. Then all luxi
clients will be asyncore clients, and I/O to/from them will be handled
by the master thread asynchronously. Data will be read from the client
sockets as it becomes available, and kept in a buffer, then when a
complete message is found, it's passed to a client worker thread for
parsing and processing. The client worker thread is responsible for
serializing the reply, which can then be sent asynchronously by the main
thread on the socket.
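
The following minimal sketch illustrates the intended split. It is not
the actual masterd code: the end-of-message framing byte, the
``ProcessRequest`` placeholder and the direct output-buffer hand-off
are assumptions made for brevity (a real implementation needs a
thread-safe hand-off back to the asyncore loop)::

  import asyncore
  import socket
  import threading
  from Queue import Queue      # Python 2, matching the pseudo code below

  EOM = chr(3)                 # assumed end-of-message marker

  class ClientHandler(asyncore.dispatcher):
    """One per connected luxi client; only buffers and frames messages."""
    def __init__(self, sock, workqueue):
      asyncore.dispatcher.__init__(self, sock)
      self._inbuf = ""
      self._outbuf = ""
      self._workqueue = workqueue

    def handle_read(self):
      self._inbuf += self.recv(4096)
      while EOM in self._inbuf:
        (msg, self._inbuf) = self._inbuf.split(EOM, 1)
        # Parsing and processing happen in a client worker thread
        self._workqueue.put((self, msg))

    def queue_reply(self, serialized):
      # Called by a worker with an already-serialized reply
      self._outbuf += serialized + EOM

    def writable(self):
      return bool(self._outbuf)

    def handle_write(self):
      sent = self.send(self._outbuf)
      self._outbuf = self._outbuf[sent:]

  class MasterServer(asyncore.dispatcher):
    """Accepts connections on the master socket (main thread only)."""
    def __init__(self, path, workqueue):
      asyncore.dispatcher.__init__(self)
      self._workqueue = workqueue
      self.create_socket(socket.AF_UNIX, socket.SOCK_STREAM)
      self.bind(path)
      self.listen(5)

    def handle_accept(self):
      pair = self.accept()
      if pair is not None:
        ClientHandler(pair[0], self._workqueue)

  def ProcessRequest(msg):     # placeholder for luxi parsing + execution
    return msg

  def ClientWorker(workqueue):
    while True:
      (handler, msg) = workqueue.get()
      handler.queue_reply(ProcessRequest(msg))

  def Main(path="/tmp/example-master.sock"):
    workqueue = Queue()
    MasterServer(path, workqueue)
    for _ in range(16):
      t = threading.Thread(target=ClientWorker, args=(workqueue,))
      t.daemon = True
      t.start()
    asyncore.loop()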

Wait for job change
^^^^^^^^^^^^^^^^^^^

The REQ_WAIT_FOR_JOB_CHANGE luxi request is changed to be
subscription-based, so that the executing thread doesn't have to be
waiting for the changes to arrive. Threads producing messages (job queue
executors) will make sure that when there is a change another thread is
woken up to deliver it to the waiting clients. This can be either a
dedicated "wait for job changes" thread or pool, or one of the client
workers, depending on what's easier to implement. In either case the
main asyncore thread will only be involved in pushing of the actual
data, and not in fetching/serializing it.
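
A minimal sketch of such a subscription mechanism (the class and method
names are illustrative, not the actual Ganeti classes; ``queue_reply``
is the hand-off assumed in the previous sketch): producers publish a
pre-serialized copy of the update and a single delivery thread fans it
out to the subscribed connections::

  import threading
  from Queue import Queue

  class JobChangeNotifier(object):
    """Fans job updates out to subscribed luxi connections."""
    def __init__(self):
      self._lock = threading.Lock()
      self._subscribers = {}       # job_id -> set of client handlers
      self._updates = Queue()      # (job_id, serialized update) pairs
      t = threading.Thread(target=self._Deliver)
      t.daemon = True
      t.start()

    def Subscribe(self, job_id, handler):
      self._lock.acquire()
      try:
        self._subscribers.setdefault(job_id, set()).add(handler)
      finally:
        self._lock.release()

    def Publish(self, job_id, serialized_update):
      # Called by the job executors: they hand over a *copy* of the
      # update, so the delivering thread never needs the job queue lock
      self._updates.put((job_id, serialized_update))

    def _Deliver(self):
      while True:
        (job_id, update) = self._updates.get()
        self._lock.acquire()
        try:
          handlers = list(self._subscribers.get(job_id, ()))
        finally:
          self._lock.release()
        for handler in handlers:
          # queue_reply only appends to the connection's output buffer;
          # the asyncore main thread does the actual socket write
          handler.queue_reply(update)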

Other features to look at when implementing this code are:

- Possibility not to need the job lock to know which updates to push:
  if the thread producing the data pushes a copy of the update for the
  waiting clients, the thread sending it won't need to acquire the
  lock again to fetch the actual data.
- Possibility to signal clients that are about to time out, when no
  update has been received, to keep waiting instead of giving up (luxi
  level keepalive).
- Possibility to defer updates if they are too frequent, providing
  them at a maximum rate (lower priority).

Job Queue lock
^^^^^^^^^^^^^^

In order to decrease the job queue lock contention, we will change the
code paths in the following ways, initially:

- A per-job lock will be introduced. All operations affecting only one
  job (for example feedback, starting/finishing notifications,
  subscribing to or watching a job) will only require the job lock.
  This should be a leaf lock, but if a situation arises in which it
  must be acquired together with the global job queue lock the global
  one must always be acquired last (for the global section).
- The locks will be converted to a sharedlock. Any read-only operation
  will be able to proceed in parallel.
- During remote update (which happens already per-job) we'll drop the
  job lock level to shared mode, so that activities that acquire the
  lock for reading (for example job change notifications or QueryJobs
  calls) will be able to proceed in parallel.
- The wait for job changes improvements proposed above will be
  implemented.

In the future other improvements may include splitting off some of the
work (e.g. replication of a job to remote nodes) to a separate thread
pool or asynchronous thread, not tied to the code path for answering
client requests or the one executing the "real" work. This can be
discussed again after we have used the more granular job queue in
production and tested its benefits.
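
To illustrate the shared/exclusive pattern proposed above, the sketch
below uses a deliberately simplified reader/writer lock (it is not
Ganeti's ``locking.SharedLock``, and the job serialization and
replication callables are placeholders): the job is modified under the
exclusive per-job lock, and the slow replication RPC runs with the lock
held only in shared mode so readers can proceed in parallel::

  import threading

  class SharedLock(object):
    """Minimal shared/exclusive lock, for illustration only."""
    def __init__(self):
      self._cond = threading.Condition()
      self._readers = 0
      self._exclusive = False

    def acquire(self, shared=0):
      self._cond.acquire()
      try:
        if shared:
          while self._exclusive:
            self._cond.wait()
          self._readers += 1
        else:
          while self._exclusive or self._readers:
            self._cond.wait()
          self._exclusive = True
      finally:
        self._cond.release()

    def release(self):
      self._cond.acquire()
      try:
        if self._exclusive:
          self._exclusive = False
        else:
          self._readers -= 1
        self._cond.notifyAll()
      finally:
        self._cond.release()

  def UpdateJob(job, job_lock, entry, replicate_fn):
    # Job-local modification: exclusive per-job (leaf) lock only
    job_lock.acquire()
    try:
      job.append(entry)
      serialized = repr(job)      # stand-in for real serialization
    finally:
      job_lock.release()
    # Remote replication: shared mode, so QueryJobs and job change
    # notifications can read the job in parallel with the slow RPC
    job_lock.acquire(shared=1)
    try:
      replicate_fn(serialized)
    finally:
      job_lock.release()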


Remote procedure call timeouts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

The current RPC protocol used by Ganeti is based on HTTP. Every request
consists of an HTTP PUT request (e.g. ``PUT /hooks_runner HTTP/1.0``)
and doesn't return until the function called has returned. Parameters
and return values are encoded using JSON.

On the server side, ``ganeti-noded`` handles every incoming connection
in a separate process by forking just after accepting the connection.
This process exits after sending the response.

There is one major problem with this design: timeouts cannot be used on
a per-request basis. Neither the client nor the server knows how long a
request will take. Even if we might be able to group requests into
different categories (e.g. fast and slow), this is not reliable.

If a node has an issue or the network connection fails while a request
is being handled, the master daemon can wait for a long time for the
connection to time out (e.g. due to the operating system's underlying
TCP keep-alive packets or timeouts). While the settings for keep-alive
packets can be changed using Linux-specific socket options, we prefer to
use application-level timeouts because these cover both machine down and
unresponsive node daemon cases.

Proposed changes
++++++++++++++++

RPC glossary
^^^^^^^^^^^^

Function call ID
  Unique identifier returned by ``ganeti-noded`` after invoking a
  function.
Function process
  Process started by ``ganeti-noded`` to call actual (backend) function.

Protocol
^^^^^^^^

Initially we chose HTTP as our RPC protocol because there were existing
libraries, which, unfortunately, turned out to miss important features
(such as SSL certificate authentication) and we had to write our own.

This proposal can easily be implemented using HTTP, though it would
likely be more efficient and less complicated to use the LUXI protocol
already used to communicate between client tools and the Ganeti master
daemon. Switching to another protocol can occur at a later point. This
proposal should be implemented using HTTP as its underlying protocol.

The LUXI protocol currently contains two functions, ``WaitForJobChange``
and ``AutoArchiveJobs``, which can take a longer time. They both support
a parameter to specify the timeout. This timeout is usually chosen as
roughly half of the socket timeout, guaranteeing a response before the
socket times out. After the specified amount of time,
``AutoArchiveJobs`` returns and reports the number of archived jobs.
``WaitForJobChange`` returns and reports a timeout. In both cases, the
functions can be called again.

A similar model can be used for the inter-node RPC protocol. In some
sense, the node daemon will implement a light variant of *"node daemon
jobs"*. When the function call is sent, it specifies an initial timeout.
If the function didn't finish within this timeout, a response is sent
with a unique identifier, the function call ID. The client can then
choose to wait for the function to finish again with a timeout.
Inter-node RPC calls would no longer be blocking indefinitely and there
would be an implicit ping-mechanism.

Request handling
^^^^^^^^^^^^^^^^

To support the protocol changes described above, the way the node daemon
handles requests will have to change. Instead of forking and handling
every connection in a separate process, there should be one child
process per function call and the master process will handle the
communication with clients and the function processes using asynchronous
I/O.

Function processes communicate with the parent process via stdio and
possibly their exit status. Every function process has a unique
identifier, though it shouldn't be the process ID only (PIDs can be
recycled and are prone to race conditions for this use case). The
proposed format is ``${ppid}:${cpid}:${time}:${random}``, where ``ppid``
is the ``ganeti-noded`` PID, ``cpid`` the child's PID, ``time`` the
current Unix timestamp with decimal places and ``random`` at least 16
random bits.
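
A minimal sketch of how such an identifier could be built (illustrative
only, not the actual ``ganeti-noded`` code)::

  import os
  import random
  import time

  def GenerateFunctionCallId(child_pid):
    """Builds a ${ppid}:${cpid}:${time}:${random} identifier."""
    return "%d:%d:%.6f:%d" % (os.getpid(),       # ganeti-noded PID
                              child_pid,         # function process PID
                              time.time(),       # timestamp with decimals
                              random.getrandbits(16))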

The following operations will be supported:

``StartFunction(fn_name, fn_args, timeout)``
  Starts a function specified by ``fn_name`` with arguments in
  ``fn_args`` and waits up to ``timeout`` seconds for the function
  to finish. Fire-and-forget calls can be made by specifying a timeout
  of 0 seconds (e.g. for powercycling the node). Returns three values:
  function call ID (if not finished), whether function finished (or
  timeout) and the function's return value.
``WaitForFunction(fnc_id, timeout)``
  Waits up to ``timeout`` seconds for function call to finish. Return
  value same as ``StartFunction``.

In the future, ``StartFunction`` could support an additional parameter
to specify after how long the function process should be aborted.
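
From the caller's side the two operations combine into a simple polling
loop; the sketch below is illustrative only (``rpc_client`` and its
method names are assumptions, not the actual Ganeti RPC client)::

  def CallWithTimeout(rpc_client, fn_name, fn_args, step_timeout=10):
    """Calls a remote function, re-waiting in step_timeout intervals."""
    (fn_id, finished, result) = rpc_client.StartFunction(fn_name, fn_args,
                                                         step_timeout)
    while not finished:
      # Each round trip doubles as an implicit ping of the node daemon;
      # a dead node or network failure surfaces within step_timeout
      (fn_id, finished, result) = rpc_client.WaitForFunction(fn_id,
                                                             step_timeout)
    return result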

Simplified timing diagram::

 Master daemon        Node daemon                      Function process
  |
 Call function
 (timeout 10s) -----> Parse request and fork for ----> Start function
                        calling actual function, then     |
                        wait up to 10s for function to    |
                        finish                            |
                          |                               |
                         ...                             ...
                          |                               |
 Examine return <----     |                               |
 value and wait                                           |
 again -------------> Wait another 10s for function       |
                          |                               |
                         ...                             ...
                          |                               |
 Examine return <----     |                               |
 value and wait                                           |
 again -------------> Wait another 10s for function       |
                          |                               |
                         ...                             ...
                          |                               |
                          |                          Function ends,
                      Get return value and forward <-- process exits
 Process return <---- it to caller
 value and continue
  |

.. TODO: Convert diagram above to graphviz/dot graphic

On process termination (e.g. after having been sent a ``SIGTERM`` or
``SIGINT`` signal), ``ganeti-noded`` should send ``SIGTERM`` to all
function processes and wait for all of them to terminate.


Inter-cluster instance moves
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

With the current design of Ganeti, moving whole instances between
different clusters involves a lot of manual work. There are several ways
to move instances, one of them being to export the instance, manually
copying all data to the new cluster before importing it again. Manual
changes to the instance's configuration, such as the IP address, may be
necessary in the new environment. The goal is to improve and automate
this process in Ganeti 2.2.

Proposed changes
++++++++++++++++

Authorization, Authentication and Security
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Until now, each Ganeti cluster was a self-contained entity and wouldn't
talk to other Ganeti clusters. Nodes within clusters only had to trust
the other nodes in the same cluster and the network used for replication
was trusted, too (hence the ability to use a separate, local network
for replication).

For inter-cluster instance transfers this model must be weakened. Nodes
in one cluster will have to talk to nodes in other clusters, sometimes
in other locations and, most importantly, via untrusted network
connections.

Various options have been considered for securing and authenticating the
data transfer from one machine to another. To reduce the risk of
accidentally overwriting data due to software bugs, authenticating the
arriving data was considered critical. Eventually we decided to use
socat's OpenSSL options (``OPENSSL:``, ``OPENSSL-LISTEN:`` et al), which
provide us with encryption, authentication and authorization when used
with separate keys and certificates.

Combinations of OpenSSH, GnuPG and Netcat were deemed too complex to set
up from within Ganeti. Any solution involving OpenSSH would require a
dedicated user with a home directory and likely automated modifications
to the user's ``$HOME/.ssh/authorized_keys`` file. When using Netcat,
GnuPG or another encryption method would be necessary to transfer the
data over an untrusted network. socat combines both in one program and
is already a dependency.

Each of the two clusters will have to generate an RSA key. The public
parts are exchanged between the clusters by a third party, such as an
administrator or a system interacting with Ganeti via the remote API
("third party" from here on). After receiving each other's public key,
the clusters can start talking to each other.

All encrypted connections must be verified on both sides. Neither side
may accept unverified certificates. The generated certificate should
only be valid for the time necessary to move the instance.

For additional protection of the instance data, the two clusters can
verify the certificates and destination information exchanged via the
third party by checking an HMAC signature using a key shared among the
involved clusters. By default this secret key will be a random string
unique to the cluster, generated by running SHA1 over 20 bytes read from
``/dev/urandom``, and the administrator must synchronize the secrets
between clusters before instances can be moved. If the third party does
not know the secret, it can't forge the certificates or redirect the
data. Unless disabled by a new cluster parameter, verifying the HMAC
signatures must be mandatory. The HMAC signature for X509 certificates
will be prepended to the certificate similar to an RFC822 header and
only covers the certificate (from ``-----BEGIN CERTIFICATE-----`` to
``-----END CERTIFICATE-----``). The header name will be
``X-Ganeti-Signature`` and its value will have the format
``$salt/$hash`` (salt and hash separated by slash). The salt may only
contain characters in the range ``[a-zA-Z0-9]``.
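
A minimal sketch of the signing and verification steps described above
(illustrative only; the real helpers live in Ganeti's ``utils`` module
and may differ in detail, e.g. in how the salt enters the MAC)::

  import hmac
  import re
  from hashlib import sha1

  HEADER = "X-Ganeti-Signature"

  def SignX509Certificate(cert_pem, secret, salt):
    """Prepends an HMAC header covering the PEM certificate only."""
    if not re.match(r"^[a-zA-Z0-9]+$", salt):
      raise ValueError("Invalid salt")
    digest = hmac.new(secret, salt + cert_pem, sha1).hexdigest()
    return "%s: %s/%s\n%s" % (HEADER, salt, digest, cert_pem)

  def VerifySignedX509Certificate(signed_pem, secret):
    """Returns the bare certificate if the signature checks out."""
    (header, cert_pem) = signed_pem.split("\n", 1)
    if not header.startswith(HEADER + ": "):
      raise ValueError("Missing %s header" % HEADER)
    (salt, digest) = header[len(HEADER) + 2:].split("/")
    expected = hmac.new(secret, salt + cert_pem, sha1).hexdigest()
    if digest != expected:
      raise ValueError("Certificate not signed with this cluster's secret")
    return cert_pem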

On the web, the destination cluster would be equivalent to an HTTPS
server requiring verifiable client certificates. The browser would be
equivalent to the source cluster and must verify the server's
certificate while providing a client certificate to the server.

Copying data
^^^^^^^^^^^^

To simplify the implementation, we decided to operate at a block-device
level only, allowing us to easily support non-DRBD instance moves.

Inter-cluster instance moves will re-use the existing export and import
scripts supplied by instance OS definitions. Unlike simply copying the
raw data, this allows using filesystem-specific utilities to dump only
used parts of the disk and to exclude certain disks from the move.
Compression should be used to further reduce the amount of data
transferred.

The export script writes all data to stdout and the import script reads
it from stdin again. To avoid copying data and reduce disk space
consumption, everything is read from the disk and sent over the network
directly, where it'll be written to the new block device directly again.

Workflow
^^^^^^^^

#. Third party tells source cluster to shut down instance, asks for the
   instance specification and for the public part of an encryption key

   - Instance information can already be retrieved using an existing API
     (``OpQueryInstanceData``).
   - An RSA encryption key and a corresponding self-signed X509
     certificate is generated using the "openssl" command. This key will
     be used to encrypt the data sent to the destination cluster.

     - Private keys never leave the cluster.
     - The public part (the X509 certificate) is signed using HMAC with
       salting and a secret shared between Ganeti clusters.

#. Third party tells destination cluster to create an instance with the
   same specifications as on source cluster and to prepare for an
   instance move with the key received from the source cluster and
   receives the public part of the destination's encryption key

   - The current API to create instances (``OpCreateInstance``) will be
     extended to support an import from a remote cluster.
   - A valid, unexpired X509 certificate signed with the destination
     cluster's secret will be required. By verifying the signature, we
     know the third party didn't modify the certificate.

     - The private keys never leave their cluster, hence the third party
       can not decrypt or intercept the instance's data by modifying the
       IP address or port sent by the destination cluster.

   - The destination cluster generates another key and certificate,
     signs and sends it to the third party, who will have to pass it to
     the API for exporting an instance (``OpExportInstance``). This
     certificate is used to ensure we're sending the disk data to the
     correct destination cluster.
   - Once a disk can be imported, the API sends the destination
     information (IP address and TCP port) together with an HMAC
     signature to the third party.

#. Third party hands public part of the destination's encryption key
   together with all necessary information to source cluster and tells
   it to start the move

   - The existing API for exporting instances (``OpExportInstance``)
     will be extended to export instances to remote clusters.

#. Source cluster connects to destination cluster for each disk and
   transfers its data using the instance OS definition's export and
   import scripts

   - Before starting, the source cluster must verify the HMAC signature
     of the certificate and destination information (IP address and TCP
     port).
   - When connecting to the remote machine, strong certificate checks
     must be employed.

#. Due to the asynchronous nature of the whole process, the destination
   cluster checks whether all disks have been transferred every time
   after transferring a single disk; if so, it destroys the encryption
   key
#. After sending all disks, the source cluster destroys its key
#. Destination cluster runs OS definition's rename script to adjust
   instance settings if needed (e.g. IP address)
#. Destination cluster starts the instance if requested at the beginning
   by the third party
#. Source cluster removes the instance if requested

Instance move in pseudo code
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. highlight:: python

The following pseudo code describes a script moving instances between
clusters and what happens on both clusters.

#. Script is started, gets the instance name and destination cluster::

     (instance_name, dest_cluster_name) = sys.argv[1:]

     # Get destination cluster object
     dest_cluster = db.FindCluster(dest_cluster_name)

     # Use database to find source cluster
     src_cluster = db.FindClusterByInstance(instance_name)

#. Script tells source cluster to stop instance::

     # Stop instance
     src_cluster.StopInstance(instance_name)

     # Get instance specification (memory, disk, etc.)
     inst_spec = src_cluster.GetInstanceInfo(instance_name)

     (src_key_name, src_cert) = src_cluster.CreateX509Certificate()

#. ``CreateX509Certificate`` on source cluster::

     key_file = mkstemp()
     cert_file = "%s.cert" % key_file
     RunCmd(["/usr/bin/openssl", "req", "-new",
             "-newkey", "rsa:1024", "-days", "1",
             "-nodes", "-x509", "-batch",
             "-keyout", key_file, "-out", cert_file])

     plain_cert = utils.ReadFile(cert_file)

     # HMAC sign using secret key, this adds a "X-Ganeti-Signature"
     # header to the beginning of the certificate
     signed_cert = utils.SignX509Certificate(plain_cert,
       utils.ReadFile(constants.X509_SIGNKEY_FILE))

     # The certificate now looks like the following:
     #
     # X-Ganeti-Signature: 1234/28676f0516c6ab68062b[…]
     # -----BEGIN CERTIFICATE-----
     # MIICsDCCAhmgAwIBAgI[…]
     # -----END CERTIFICATE-----

     # Return name of key file and signed certificate in PEM format
     return (os.path.basename(key_file), signed_cert)

#. Script creates instance on destination cluster and waits for move to
   finish::

     dest_cluster.CreateInstance(mode=constants.REMOTE_IMPORT,
                                 spec=inst_spec,
                                 source_cert=src_cert)

     # Wait until destination cluster gives us its certificate and the
     # destination information for all disks
     dest_cert = None
     disk_info = {}
     while not (dest_cert and len(disk_info) == len(inst_spec.disks)):
       tmp = dest_cluster.WaitOutput()
       if tmp is Certificate:
         dest_cert = tmp
       elif tmp is DiskInfo:
         # DiskInfo contains destination address and port
         disk_info[tmp.index] = tmp

     # Tell source cluster to export disks
     for disk in disk_info.values():
       src_cluster.ExportDisk(instance_name, disk=disk,
                              key_name=src_key_name,
                              dest_cert=dest_cert)

     print ("Instance %s successfully moved to %s" %
            (instance_name, dest_cluster.name))

#. ``CreateInstance`` on destination cluster::

     # …

     if mode == constants.REMOTE_IMPORT:
       # Make sure certificate was not modified since it was generated by
       # source cluster (which must use the same secret)
       if (not utils.VerifySignedX509Cert(source_cert,
                 utils.ReadFile(constants.X509_SIGNKEY_FILE))):
         raise Error("Certificate not signed with this cluster's secret")

       if utils.CheckExpiredX509Cert(source_cert):
         raise Error("X509 certificate is expired")

       source_cert_file = utils.WriteTempFile(source_cert)

       # See above for X509 certificate generation and signing
       (key_name, signed_cert) = CreateSignedX509Certificate()

       SendToClient("x509-cert", signed_cert)

       for disk in instance.disks:
         # Start socat
         RunCmd(("socat"
                 " OPENSSL-LISTEN:%s,…,key=%s,cert=%s,cafile=%s,verify=1"
                 " stdout > /dev/disk…") %
                (port, GetRsaKeyPath(key_name, private=True),
                 GetRsaKeyPath(key_name, private=False), src_cert_file))
         SendToClient("send-disk-to", disk, ip_address, port)

       DestroyX509Cert(key_name)

       RunRenameScript(instance_name)

#. ``ExportDisk`` on source cluster::

     # Make sure certificate was not modified since it was generated by
     # destination cluster (which must use the same secret)
     if (not utils.VerifySignedX509Cert(cert_pem,
               utils.ReadFile(constants.X509_SIGNKEY_FILE))):
       raise Error("Certificate not signed with this cluster's secret")

     if utils.CheckExpiredX509Cert(cert_pem):
       raise Error("X509 certificate is expired")

     dest_cert_file = utils.WriteTempFile(cert_pem)

     # Start socat
     RunCmd(("socat stdin"
             " OPENSSL:%s:%s,…,key=%s,cert=%s,cafile=%s,verify=1"
             " < /dev/disk…") %
            (disk.host, disk.port,
             GetRsaKeyPath(key_name, private=True),
             GetRsaKeyPath(key_name, private=False), dest_cert_file))

     if instance.all_disks_done:
       DestroyX509Cert(key_name)

.. highlight:: text

Miscellaneous notes
^^^^^^^^^^^^^^^^^^^

- A very similar system could also be used for instance exports within
  the same cluster. Currently OpenSSH is being used, but could be
  replaced by socat and SSL/TLS.
- During the design of inter-cluster instance moves we also discussed
  encrypting instance exports using GnuPG.
- While most instances should have exactly the same configuration as
  on the source cluster, setting them up with a different disk layout
  might be helpful in some use-cases.
- A cleanup operation, similar to the one available for failed instance
  migrations, should be provided.
- ``ganeti-watcher`` should remove instances pending a move from another
  cluster after a certain amount of time. This takes care of failures
  somewhere in the process.
- RSA keys can be generated using the existing
  ``bootstrap.GenerateSelfSignedSslCert`` function, though it might be
  useful to not write both parts into a single file, requiring small
  changes to the function. The public part always starts with
  ``-----BEGIN CERTIFICATE-----`` and ends with ``-----END
  CERTIFICATE-----``.
- The source and destination cluster might be different when it comes
  to available hypervisors, kernels, etc. The destination cluster should
  refuse to accept an instance move if it can't fulfill an instance's
  requirements.


Feature changes
---------------

KVM Security
~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

Currently all kvm processes run as root. Taking ownership of the
hypervisor process, from inside a virtual machine, would mean a full
compromise of the whole Ganeti cluster, knowledge of all Ganeti
authentication secrets, full access to all running instances, and the
option of subverting other basic services on the cluster (eg: ssh).

Proposed changes
++++++++++++++++

We would like to decrease the attack surface available if a hypervisor
is compromised. We can do so by adding different features to Ganeti
which will restrict what a broken hypervisor can do to subvert the
node, in the absence of a local privilege escalation attack.

Dropping privileges in kvm to a single user (easy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By passing the ``-runas`` option to kvm, we can make it drop privileges.
The user can be chosen by a hypervisor parameter, so that each instance
can have its own user, but by default they will all run under the same
one. It should be very easy to implement, and can easily be backported
to 2.1.X.

This mode protects the Ganeti cluster from a subverted hypervisor, but
doesn't protect the instances between each other, unless care is taken
to specify a different user for each. This would prevent the worst
attacks, including:

- logging in to other nodes
- administering the Ganeti cluster
- subverting other services

But the following would remain an option:

- terminate other VMs (but not start them again, as that requires root
  privileges to set up networking) (unless different users are used)
- trace other VMs, and probably subvert them and access their data
  (unless different users are used)
- send network traffic from the node
- read unprotected data on the node filesystem

Running kvm in a chroot (slightly harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By passing the ``-chroot`` option to kvm, we can restrict the kvm
process in its own (possibly empty) root directory. We need to set this
area up so that the instance disks and control sockets are accessible,
so it would require slightly more work at the Ganeti level.
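
As a rough illustration of the two restrictions described so far, the
kvm command line could be extended along the following lines (a sketch
only: the user name, chroot path and the rest of the command line are
made-up examples, not what Ganeti will actually generate)::

  # Hypothetical fragment of the code building the kvm command line
  kvm_cmd = ["kvm", "-m", "512",
             "-drive", "file=/dev/xenvg/example-disk0"]

  runas_user = "ganeti-kvm-example"              # from a hypervisor parameter
  chroot_dir = "/srv/ganeti/kvm-chroot/example"  # set up by Ganeti beforehand

  # Drop root privileges after start-up ...
  kvm_cmd.extend(["-runas", runas_user])
  # ... and confine the process to an (ideally empty) directory
  kvm_cmd.extend(["-chroot", chroot_dir])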

Breaking out in a chroot would mean:

- a lot less options to find a local privilege escalation vector
- the impossibility to write local data, if the chroot is set up
  correctly
- the impossibility to read filesystem data on the host

It would still be possible though to:

- terminate other VMs
- trace other VMs, and possibly subvert them (if a tracer can be
  installed in the chroot)
- send network traffic from the node


Running kvm with a pool of users (slightly harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If rather than passing a single user as a hypervisor parameter, we have
a pool of usable ones, we can dynamically choose a free one to use and
thus guarantee that each machine will be separate from the others,
without putting the burden of this on the cluster administrator.

This would mean interfering between machines would be impossible, and
can still be combined with the chroot benefits.

Running iptables rules to limit network interaction (easy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These don't need to be handled by Ganeti, but we can ship examples. If
the users used to run VMs were blocked from sending some or all network
traffic, it would become impossible for a broken-into hypervisor to
send arbitrary data on the node network, which is especially useful
when the instance and the node network are separated (using ganeti-nbma
or a separate set of network interfaces), or when a separate
replication network is maintained. We need to experiment to see how
much restriction we can properly apply, without limiting the instances'
legitimate traffic.
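
As a rough illustration of the kind of example we could ship (the user
name is a made-up placeholder and the exact rules would have to be
refined against real traffic patterns), a rule using the iptables
``owner`` match could drop all locally generated traffic of a given kvm
pool user::

  import subprocess

  def BlockKvmUserTraffic(username):
    """Drops all packets locally generated by the given (kvm) user."""
    subprocess.check_call(["iptables", "-A", "OUTPUT",
                           "-m", "owner", "--uid-owner", username,
                           "-j", "DROP"])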


Running kvm inside a container (even harder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Recent Linux kernels support different process namespaces through
control groups. PIDs, users, filesystems and even network interfaces can
be separated. If we can set up Ganeti to run kvm in a separate container
we could insulate all the host processes from being even visible if the
hypervisor gets broken into. Most probably separating the network
namespace would require one extra hop in the host, through a veth
interface, thus reducing performance, so we may want to avoid that, and
just rely on iptables.

Implementation plan
+++++++++++++++++++

We will first implement dropping privileges for kvm processes as a
single user, and most probably backport it to 2.1. Then we'll ship
example iptables rules to show how the user can be limited in its
network activities. After that we'll implement chroot restriction for
kvm processes, and extend the user limitation to use a user pool.

Finally we'll look into namespaces and containers, although that might
slip after the 2.2 release.


External interface changes
--------------------------


OS API
~~~~~~

The OS variants implementation in Ganeti 2.1 didn't prove to be useful
enough to alleviate the need to hack around the Ganeti API in order to
provide flexible OS parameters.

As such, for Ganeti 2.2 we will provide support for arbitrary OS
parameters. However, since OSes are not registered in Ganeti, but
instead discovered at runtime, the interface is not entirely
straightforward.

Furthermore, to support the system administrator in keeping OSes
properly in sync across the nodes of a cluster, Ganeti will also verify
(if existing) the consistency of a new ``os_version`` file.

These changes to the OS API will bump the API version to 20.


OS version
++++++++++

A new ``os_version`` file will be supported by Ganeti. This file is not
required, but if existing, its contents will be checked for consistency
across nodes. The file should hold only one line of text (any extra data
will be discarded), and its contents will be shown in the OS information
and diagnose commands.

It is recommended that OS authors update the version stored in this file
for any changes; at a minimum, modifications that change the behaviour
of import/export scripts must increase the version, since they break
intra-cluster migration.

Parameters
++++++++++

The interface between Ganeti and the OS scripts will be based on
environment variables, and as such the parameters and their values will
need to be valid in this context.

Names
^^^^^

The parameter names will be declared in a new file, ``parameters.list``,
together with a one-line documentation (whitespace-separated). Example::

  $ cat parameters.list
  ns1             Specifies the first name server to add to /etc/resolv.conf
  extra_packages  Specifies additional packages to install
  rootfs_size     Specifies the root filesystem size (the rest will be left unallocated)
  track           Specifies the distribution track, one of 'stable', 'testing' or 'unstable'

As seen above, the documentation can be separated from the names via
multiple spaces/tabs.

The parameter names as read from the file will be used for the command
line interface in lowercased form; as such, there shouldn't be any two
parameters which differ in case only.
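
A sketch of how such a file could be parsed on the Ganeti side
(illustrative only; the actual loader may differ)::

  def ParseParametersList(text):
    """Returns a dict mapping lowercased parameter names to their doc."""
    params = {}
    for line in text.splitlines():
      line = line.strip()
      if not line:
        continue
      # The name is separated from its documentation by whitespace
      fields = line.split(None, 1)
      name = fields[0].lower()
      doc = fields[1] if len(fields) > 1 else ""
      if name in params:
        raise ValueError("Duplicate parameter name: %s" % name)
      params[name] = doc
    return params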

Values
^^^^^^

The values of the parameters are, from Ganeti's point of view,
completely freeform. If a given parameter has, from the OS' point of
view, a fixed set of valid values, these should be documented as such
and verified by the OS, but Ganeti will not handle such parameters
specially.

An empty value must be handled identically to a missing parameter. In
other words, the validation script should only test for non-empty
values, and not for declared versus undeclared parameters.

Furthermore, each parameter should have an (internal to the OS) default
value, that will be used if not passed from Ganeti. More precisely, it
should be possible for any parameter to specify a value that will have
the same effect as not passing the parameter, and in no case should the
absence of a parameter be treated as an exceptional case (outside the
value space).


Environment variables
+++++++++++++++++++++

The parameters will be exposed in the environment in upper case and
prefixed with the string ``OSP_``. For example, a parameter declared in
the ``parameters.list`` file as ``ns1`` will appear in the environment
as the variable ``OSP_NS1``.
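
A minimal sketch of how this environment could be built before invoking
an OS script (illustrative, not the actual backend code)::

  def BuildOSParamsEnv(os_params):
    """Turns {"ns1": "192.0.2.1", ...} into {"OSP_NS1": "192.0.2.1", ...}."""
    env = {}
    for (name, value) in os_params.items():
      env["OSP_%s" % name.upper()] = value
    return env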

Validation
++++++++++

For the purpose of parameter name/value validation, the OS scripts
*must* provide an additional script, named ``verify``. This script will
be called with the argument ``parameters``, and all the parameters will
be passed in via environment variables, as described above.

The script should signify success/failure based on its exit code, and
show explanatory messages either on its standard output or standard
error. These messages will be passed on to the master, and stored in
the OpCode result/error message.
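
For illustration, a ``verify`` script could look like the following
sketch (written in Python here, although OS definitions are free to use
any language; the checks shown are made-up examples for the parameters
from the earlier ``parameters.list`` listing)::

  #!/usr/bin/python
  #
  # Example "verify" script: called as "verify parameters", reads the
  # OSP_* variables and exits non-zero with a message on error.

  import os
  import sys

  def main():
    if sys.argv[1:] != ["parameters"]:
      sys.stderr.write("Unsupported argument(s): %s\n" % sys.argv[1:])
      return 1

    # Only non-empty values are checked; an empty or missing value means
    # "use the OS default"
    track = os.environ.get("OSP_TRACK", "")
    if track and track not in ("stable", "testing", "unstable"):
      sys.stderr.write("Invalid track %r\n" % track)
      return 1

    rootfs_size = os.environ.get("OSP_ROOTFS_SIZE", "")
    if rootfs_size and not rootfs_size.isdigit():
      sys.stderr.write("rootfs_size must be a number\n")
      return 1

    return 0

  if __name__ == "__main__":
    sys.exit(main())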

The parameters must be constructed to be independent of the instance
specifications. In general, the validation script will only be called
with the parameter variables set, but not with the normal per-instance
variables, in order for Ganeti to be able to validate default parameters
too, when they change. Validation will only be performed on one cluster
node, and it will be up to the Ganeti administrator to keep the OS
scripts in sync between all nodes.

Instance operations
+++++++++++++++++++

The parameters will be passed, as described above, to all the other
instance operations (creation, import, export). Ideally, these scripts
will not abort with parameter validation errors, if the ``verify``
script has verified them correctly.

Note: when changing an instance's OS type, any OS parameters defined at
instance level will be kept as-is. If the parameters differ between the
new and the old OS, the user should manually remove/update them as
needed.

Declaration and modification
++++++++++++++++++++++++++++

Since the OSes are not registered in Ganeti, we will only make a 'weak'
link between the parameters as declared in Ganeti and the actual OSes
existing on the cluster.

It will be possible to declare parameters either globally, per cluster
(where they are indexed per OS/variant), or individually, per
instance. The declaration of parameters will not be tied to currently
existing OSes. When specifying a parameter, if the OS exists, it will be
validated; if not, then it will simply be stored as-is.

A special note is that it will not be possible to 'unset' at instance
level a parameter that is declared globally. Instead, at instance level
the parameter should be given an explicit value, or the default value as
explained above.

CLI interface
+++++++++++++

The modification of global (default) parameters will be done via the
``gnt-os`` command, and the per-instance parameters via the
``gnt-instance`` command. Both these commands will take an additional
``--os-parameters`` or ``-O`` flag that specifies the parameters in the
familiar comma-separated, key=value format. For removing a parameter, a
``-key`` syntax will be used, e.g.::

  # initial modification
  $ gnt-instance modify -O use_dhcp=true instance1
  # later revert (to the cluster default, or the OS default if not
  # defined at cluster level)
  $ gnt-instance modify -O -use_dhcp instance1

Internal storage
++++++++++++++++

Internally, the OS parameters will be stored in a new ``osparams``
attribute. The global parameters will be stored on the cluster object,
and the value of this attribute will be a dictionary indexed by OS name
(this also accepts an OS+variant name, which will override a simple OS
name, see below), with the parameter name/value dictionaries as values.
For the instances, the value will be directly the name/value dictionary.

Overriding rules
++++++++++++++++

Any instance-specific parameters will override any variant-specific
parameters, which in turn will override any global parameters. The
global parameters, in turn, override the built-in defaults (of the OS
scripts).
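
Expressed as a dictionary merge, the precedence could look roughly like
this (a sketch; the exact key format for variants is assumed here to be
``os+variant``, and the OS built-in defaults never reach Ganeti, they
simply apply to whatever remains unset)::

  def GetEffectiveOSParams(cluster_osparams, instance_osparams,
                           os_name, variant=None):
    """Merges cluster-level and instance-level OS parameters."""
    params = {}
    # Cluster-level parameters for the plain OS name ...
    params.update(cluster_osparams.get(os_name, {}))
    # ... are overridden by an OS+variant entry, if present ...
    if variant:
      params.update(cluster_osparams.get("%s+%s" % (os_name, variant), {}))
    # ... and instance-level parameters win over everything
    params.update(instance_osparams)
    return params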


.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: