=================
Ganeti 2.1 design
=================

This document describes the major changes in Ganeti 2.1 compared to
the 2.0 version.

The 2.1 version will be a relatively small release. Its main aim is to
avoid changing too much of the core code, while addressing issues and
adding new features and improvements over 2.0, in a timely fashion.

.. contents:: :depth: 4

Objective
=========

Ganeti 2.1 will add features to help further automate cluster
operations, further improve scalability to even bigger clusters, and
make it easier to debug the Ganeti core.

Detailed design
===============

As for 2.0, we divide the 2.1 design into three areas:

- core changes, which affect the master daemon/job queue/locking or
  all/most logical units
- logical unit/feature changes
- external interface changes (e.g. command line, OS API, hooks, ...)

Core changes
------------

Storage units modelling
~~~~~~~~~~~~~~~~~~~~~~~

Currently, Ganeti has a good model of the block devices for instances
(e.g. LVM logical volumes, files, DRBD devices, etc.) but none of the
storage pools that are providing the space for these front-end
devices. For example, there are hardcoded inter-node RPC calls for
volume group listing, file storage creation/deletion, etc.

The storage units framework will implement a generic handling for all
kinds of storage backends:

- LVM physical volumes
- LVM volume groups
- File-based storage directories
- any other future storage method

There will be a generic list of methods that each storage unit type
will provide, like:

- list of storage units of this type
- check status of the storage unit

Additionally, there will be specific methods for each storage unit
type, for example:

- enable/disable allocations on a specific PV
- file storage directory creation/deletion
- VG consistency fixing

This will allow a much better modelling and unification of the various
RPC calls related to backend storage pools in the future. Ganeti 2.1 is
intended to add the basics of the framework, and not necessarily move
all the current VG/file-based operations to it.

Note that while we model both LVM PVs and LVM VGs, the framework will
**not** model any relationship between the different types. In other
words, we model neither inheritance nor stacking, since this is too
complex for our needs. While a ``vgreduce`` operation on a LVM VG
could actually remove a PV from it, this will not be handled at the
framework level, but at the individual operation level. The goal is
that this is a lightweight framework, for abstracting the different
storage operations, and not for modelling the storage hierarchy.


Locking improvements
~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

The class ``LockSet`` (see ``lib/locking.py``) is a container for one or
many ``SharedLock`` instances. It provides an interface to add/remove
locks and to acquire and subsequently release any number of those locks
contained in it.

Locks in a ``LockSet`` are always acquired in alphabetic order. Due to
the way we're using locks for nodes and instances (the single cluster
lock isn't affected by this issue) this can lead to long delays when
acquiring locks if another operation tries to acquire multiple locks but
has to wait for yet another operation.

In the following demonstration we assume we have the instance locks
``inst1``, ``inst2``, ``inst3`` and ``inst4``.

#. Operation A grabs the lock for instance ``inst4``.
#. Operation B wants to acquire all instance locks in alphabetic order,
   but it has to wait for ``inst4``.
#. Operation C tries to lock ``inst1``, but it has to wait until
   Operation B (which is trying to acquire all locks) releases the lock
   again.
#. Operation A finishes and releases the lock on ``inst4``. Operation B
   can continue and eventually releases all locks.
#. Operation C can get the ``inst1`` lock and finishes.

Technically there's no need for Operation C to wait for Operation A, and
subsequently Operation B, to finish. Operation B can't continue until
Operation A is done (it has to wait for ``inst4``) anyway.

Proposed changes
++++++++++++++++

Non-blocking lock acquiring
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Acquiring locks for OpCode execution is always done in blocking mode.
The calls won't return until the lock has successfully been acquired (or
an error occurred, although we won't cover that case here).

``SharedLock`` and ``LockSet`` must be able to be acquired in a
non-blocking way. They must support a timeout and abort trying to
acquire the lock(s) after the specified amount of time.

Retry acquiring locks
^^^^^^^^^^^^^^^^^^^^^

To prevent other operations from waiting for a long time, such as
described in the demonstration before, ``LockSet`` must not keep locks
for a prolonged period of time when trying to acquire two or more locks.
Instead it should try to acquire all requested locks within an
increasing timeout, and if it fails to get them all in time, release the
locks already acquired and sleep for some time before retrying.

A good timeout value needs to be determined. In any case, ``LockSet``
should proceed to acquire locks in blocking mode after a few
(unsuccessful) attempts to acquire all requested locks.

One proposal for the timeout is to use ``2**tries`` seconds, where
``tries`` is the number of unsuccessful tries.

In the demonstration before this would allow Operation C to continue
after Operation B unsuccessfully tried to acquire all locks and released
all acquired locks (``inst1``, ``inst2`` and ``inst3``) again.
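
The retry logic could be sketched as follows; this is illustrative only
and assumes a ``LockSet.acquire`` call that accepts a ``timeout``
argument, returns ``False`` when the timeout expires, and releases any
partially acquired locks on failure::

  import time

  def AcquireWithBackoff(lockset, names, max_tries=5):
    """Try to acquire all locks, with a 2**tries timeout per attempt."""
    for tries in range(max_tries):
      if lockset.acquire(names, timeout=2 ** tries):
        return True
      # Failed within the timeout: all locks have been released again,
      # sleep a bit before the next attempt
      time.sleep(0.1)
    # After a few unsuccessful attempts, fall back to blocking mode
    return lockset.acquire(names, timeout=None)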

    
Other solutions discussed
+++++++++++++++++++++++++

There was also some discussion on going one step further and extending
the job queue (see ``lib/jqueue.py``) to select the next task for a
worker depending on whether it can acquire the necessary locks. While
this may reduce the number of necessary worker threads and/or increase
throughput on large clusters with many jobs, it also brings many
potential problems, such as contention and increased memory usage, with
it. As this would be an extension of the changes proposed before it
could be implemented at a later point in time, but we decided to stay
with the simpler solution for now.

Implementation details
++++++++++++++++++++++

``SharedLock`` redesign
^^^^^^^^^^^^^^^^^^^^^^^

The current design of ``SharedLock`` is not good for supporting timeouts
when acquiring a lock and there are also minor fairness issues in it. We
plan to address both with a redesign. A proof of concept implementation
was written and resulted in significantly simpler code.

Currently ``SharedLock`` uses two separate queues for shared and
exclusive acquires and waiters get to run in turns. This means if an
exclusive acquire is released, the lock will allow shared waiters to run
and vice versa. Although it's still fair in the end, there is a slight
bias towards shared waiters in the current implementation. The same
implementation with two separate queues can not support timeouts without
adding a lot of complexity.

Our proposed redesign changes ``SharedLock`` to have only one single
queue. There will be one condition (see Condition_ for a note about
performance) in the queue per exclusive acquire and two for all shared
acquires (see below for an explanation). The maximum queue length will
always be ``2 + (number of exclusive acquires waiting)``. The number of
queue entries for shared acquires can vary from 0 to 2.

The two conditions for shared acquires are a bit special. They will be
used in turn. When the lock is instantiated, no conditions are in the
queue. As soon as the first shared acquire arrives (and there are
holder(s) or waiting acquires; see Acquire_), the active condition is
added to the queue. Until it becomes the topmost condition in the queue
and has been notified, any shared acquire is added to this active
condition. When the active condition is notified, the conditions are
swapped and further shared acquires are added to the previously inactive
condition (which has now become the active condition). After all waiters
on the previously active (now inactive) and now notified condition
received the notification, it is removed from the queue of pending
acquires.

This means shared acquires will skip any exclusive acquire in the queue.
We believe it's better to improve parallelization on operations only
asking for shared (or read-only) locks. Exclusive operations holding the
same lock can not be parallelized.


Acquire
*******

For exclusive acquires a new condition is created and appended to the
queue. Shared acquires are added to the active condition for shared
acquires and if the condition is not yet on the queue, it's appended.

The next step is to wait for our condition to be on the top of the queue
(to guarantee fairness). If the timeout expired, we return to the caller
without acquiring the lock. On every notification we check whether the
lock has been deleted, in which case an error is returned to the caller.

The lock can be acquired if we're on top of the queue (there is no one
else ahead of us). For an exclusive acquire, there must not be other
exclusive or shared holders. For a shared acquire, there must not be an
exclusive holder. If these conditions are all true, the lock is
acquired and we return to the caller. In any other case we wait again on
the condition.

If it was the last waiter on a condition, the condition is removed from
the queue.

Optimization: There's no need to touch the queue if there are no pending
acquires and no current holders. The caller can have the lock
immediately.

.. graphviz:: design-2.1-lock-acquire.dot


Release
*******

First the lock removes the caller from the internal owner list. If there
are pending acquires in the queue, the first (the oldest) condition is
notified.

If the first condition was the active condition for shared acquires, the
inactive condition will be made active. This ensures fairness with
exclusive locks by forcing consecutive shared acquires to wait in the
queue.

.. graphviz:: design-2.1-lock-release.dot
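
For illustration, the exclusive-only core of the acquire/release logic
described above could look like the following sketch; shared acquires,
timeouts and deletion are deliberately left out, and this is not the
actual Ganeti implementation::

  import collections
  import threading

  class ExclusiveLockSketch(object):
    """Single-queue, exclusive-only sketch of the redesigned lock."""

    def __init__(self):
      self.__lock = threading.Lock()
      self.__queue = collections.deque()
      self.__owner = None

    def acquire(self):
      with self.__lock:
        if not self.__queue and self.__owner is None:
          # Optimization: no pending acquires and no current holder
          self.__owner = threading.current_thread()
          return True
        # One condition per exclusive acquire, appended to the queue
        cond = threading.Condition(self.__lock)
        self.__queue.append(cond)
        while self.__queue[0] is not cond or self.__owner is not None:
          cond.wait()
        self.__queue.popleft()
        self.__owner = threading.current_thread()
        return True

    def release(self):
      with self.__lock:
        self.__owner = None
        if self.__queue:
          # Notify the first (the oldest) pending condition
          self.__queue[0].notify_all()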

    
Delete
******

The caller must either hold the lock in exclusive mode already or the
lock must be acquired in exclusive mode. Trying to delete a lock while
it's held in shared mode must fail.

After ensuring the lock is held in exclusive mode, the lock will mark
itself as deleted and continue to notify all pending acquires. They will
wake up, notice the deleted lock and return an error to the caller.


Condition
^^^^^^^^^

Note: This is not necessary for the locking changes above, but it may be
a good optimization (pending performance tests).

The existing locking code in Ganeti 2.0 uses Python's built-in
``threading.Condition`` class. Unfortunately ``Condition`` implements
timeouts by sleeping 1ms to 20ms between tries to acquire the condition
lock in non-blocking mode. This requires unnecessary context switches
and contention on the CPython GIL (Global Interpreter Lock).

By using POSIX pipes (see ``pipe(2)``) we can use the operating system's
support for timeouts on file descriptors (see ``select(2)``). A custom
condition class will have to be written for this.

On instantiation the class creates a pipe. After each notification the
previous pipe is abandoned and re-created (technically the old pipe
needs to stay around until all notifications have been delivered).

All waiting clients of the condition use ``select(2)`` or ``poll(2)`` to
wait for notifications, optionally with a timeout. A notification will
be signalled to the waiting clients by closing the pipe. If the pipe
wasn't closed during the timeout, the waiting function returns to its
caller nonetheless.
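
A very small sketch of this mechanism is shown below; it is one-shot
(it does not re-create the pipe after a notification) and leaves out
the interaction with the condition's lock::

  import os
  import select

  class PipeNotifierSketch(object):
    """One-shot wait/notify primitive built on a POSIX pipe."""

    def __init__(self):
      self._read_fd, self._write_fd = os.pipe()

    def wait(self, timeout=None):
      """Block until notified or until the timeout expires."""
      # Closing the write end makes the read end readable (EOF), so
      # select() returns; on timeout it simply returns empty lists
      select.select([self._read_fd], [], [], timeout)

    def notifyAll(self):
      """Wake up all current and future waiters by closing the pipe."""
      os.close(self._write_fd)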

    
Node daemon availability
~~~~~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently, when a Ganeti node suffers serious system disk damage, the
migration/failover of an instance may not correctly shut down the
virtual machine on the broken node, causing instance duplication. The
``gnt-node powercycle`` command can be used to force a node reboot and
thus to avoid duplicated instances. This command relies on node daemon
availability, though, and thus can fail if the node daemon has some
pages swapped out of RAM, for example.


Proposed changes
++++++++++++++++

The proposed solution forces the node daemon to run exclusively in RAM.
It uses python ctypes to call ``mlockall(MCL_CURRENT | MCL_FUTURE)`` on
the node daemon process and all its children. In addition another log
handler has been implemented for the node daemon to redirect to
``/dev/console`` messages that cannot be written to the logfile.

With these changes the node daemon can successfully run basic tasks such
as a powercycle request even when the system disk is heavily damaged and
reading/writing to disk fails constantly.
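
The ``mlockall`` call itself could be done roughly as follows (a sketch
only; the ``MCL_*`` values shown are the common Linux ones and should
really be taken from the system headers)::

  import ctypes
  import ctypes.util

  MCL_CURRENT = 1
  MCL_FUTURE = 2

  def LockProcessMemory():
    """Lock all current and future pages of this process into RAM."""
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
      raise OSError(ctypes.get_errno(), "mlockall(2) failed")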

    
New Features
------------

Automated Ganeti Cluster Merger
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current situation
+++++++++++++++++

Currently there's no easy way to merge two or more clusters together.
But in order to optimize resources this is a needed missing piece. The
goal of this design doc is to come up with an easy to use solution which
allows you to merge two or more clusters together.

Initial contact
+++++++++++++++

As the design of Ganeti is based on an autonomous system, Ganeti by
itself has no way to reach nodes outside of its cluster. To overcome
this situation we're required to prepare the cluster before we can go
ahead with the actual merge: we have to replace at least the ssh keys on
the affected nodes before we can do any operation with the ``gnt-``
commands.

To make this an automated process we'll ask the user to provide us with
the root password of every cluster we have to merge. We use the password
to grab the current ``id_dsa`` key and then rely on that ssh key for any
further communication to be made until the cluster is fully merged.

Cluster merge
+++++++++++++

After initial contact we do the cluster merge:

1. Grab the list of nodes
2. On all nodes add our own ``id_dsa.pub`` key to ``authorized_keys``
3. Stop all instances running on the merging cluster
4. Disable ``ganeti-watcher`` as it tries to restart Ganeti daemons
5. Stop all Ganeti daemons on all merging nodes
6. Grab the ``config.data`` from the master of the merging cluster
7. Stop local ``ganeti-masterd``
8. Merge the config:

   1. Open our own cluster ``config.data``
   2. Open cluster ``config.data`` of the merging cluster
   3. Grab all nodes of the merging cluster
   4. Set ``master_candidate`` to false on all merging nodes
   5. Add the nodes to our own cluster ``config.data``
   6. Grab all the instances on the merging cluster
   7. Adjust the port if the instance has drbd layout:

      1. In ``logical_id`` (index 2)
      2. In ``physical_id`` (index 1 and 3)

   8. Add the instances to our own cluster ``config.data``

9. Start ``ganeti-masterd`` with ``--no-voting`` ``--yes-do-it``
10. ``gnt-node add --readd`` on all merging nodes
11. ``gnt-cluster redist-conf``
12. Restart ``ganeti-masterd`` normally
13. Enable ``ganeti-watcher`` again
14. Start all merging instances again

Rollback
++++++++

Until we actually (re)add any nodes we can abort and rollback the merge
at any point. After merging the config, though, we have to get the
backup copy of ``config.data`` (from another master candidate node). And
for security reasons it's a good idea to undo the ``id_dsa.pub``
distribution by going to every affected node and removing the
``id_dsa.pub`` key again. We also have to keep in mind that the Ganeti
daemons have to be started and the instances brought up again.

Verification
++++++++++++

Last but not least we should verify that the merge was successful.
Therefore we run ``gnt-cluster verify``, which ensures that the cluster
overall is in a healthy state. Additionally it's also possible to
compare the list of instances/nodes with a list made prior to the
upgrade to make sure we didn't lose any data/instance/node.

Appendix
++++++++

cluster-merge.py
^^^^^^^^^^^^^^^^

Used to merge the cluster config. This is a POC and might differ from
actual production code.

::

  #!/usr/bin/python

  import sys
  from ganeti import config
  from ganeti import constants

  c_mine = config.ConfigWriter(offline=True)
  c_other = config.ConfigWriter(sys.argv[1])

  fake_id = 0
  for node in c_other.GetNodeList():
    node_info = c_other.GetNodeInfo(node)
    node_info.master_candidate = False
    c_mine.AddNode(node_info, str(fake_id))
    fake_id += 1

  for instance in c_other.GetInstanceList():
    instance_info = c_other.GetInstanceInfo(instance)
    for dsk in instance_info.disks:
      if dsk.dev_type in constants.LDS_DRBD:
        port = c_mine.AllocatePort()
        logical_id = list(dsk.logical_id)
        logical_id[2] = port
        dsk.logical_id = tuple(logical_id)
        physical_id = list(dsk.physical_id)
        physical_id[1] = physical_id[3] = port
        dsk.physical_id = tuple(physical_id)
    c_mine.AddInstance(instance_info, str(fake_id))
    fake_id += 1


Feature changes
---------------

Ganeti Confd
~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

In Ganeti 2.0 all nodes are equal, but some are more equal than others.
In particular they are divided between "master", "master candidates" and
"normal". (Moreover they can be offline or drained, but this is not
important for the current discussion). In general the whole
configuration is only replicated to master candidates, and some partial
information is spread to all nodes via ssconf.

This change was done so that the most frequent Ganeti operations didn't
need to contact all nodes, and so clusters could become bigger. If we
want more information to be available on all nodes, we either need to
add more ssconf values, which counter-balances that change, or to talk
with the master node, which is not designed to happen now, and requires
its availability.

Information such as the instance->primary_node mapping will be needed on
all nodes, and we also want to make sure services external to the
cluster can query this information as well. This information must be
available at all times, so we can't query it through RAPI, which would
be a single point of failure, as it's only available on the master.


Proposed changes
++++++++++++++++

In order to allow fast and highly available read-only access to some
configuration values, we'll create a new ganeti-confd daemon, which will
run on master candidates. This daemon will talk via UDP, and
authenticate messages using HMAC with a cluster-wide shared key. This
key will be generated at cluster init time, and stored on the cluster
alongside the ganeti SSL keys, and readable only by root.

An interested client can query a value by making a request to a subset
of the cluster master candidates. It will then wait to get a few
responses, and use the one with the highest configuration serial number.
Since the configuration serial number is increased each time the ganeti
config is updated, and the serial number is included in all answers,
this can be used to make sure to use the most recent answer, in case
some master candidates are stale or in the middle of a configuration
update.

In order to prevent replay attacks queries will contain the current unix
timestamp according to the client, and the server will verify that its
own timestamp is within a 5-minute range of it (this requires
synchronized clocks, which is a good idea anyway). Queries will also
contain a "salt" which they expect the answers to be sent with, and
clients are supposed to accept only answers which contain a salt
generated by them.

The configuration daemon will be able to answer simple queries such as:

- master candidates list
- master node
- offline nodes
- instance list
- instance primary nodes

Wire protocol
^^^^^^^^^^^^^

A confd query will look like this, on the wire::

  plj0{
    "msg": "{\"type\": 1,
             \"rsalt\": \"9aa6ce92-8336-11de-af38-001d093e835f\",
             \"protocol\": 1,
             \"query\": \"node1.example.com\"}\n",
    "salt": "1249637704",
    "hmac": "4a4139b2c3c5921f7e439469a0a45ad200aead0f"
  }

``plj0`` is a fourcc that details the message content. It stands for
plain json 0, and can be changed as we move on to different types of
protocols (for example protocol buffers, or encrypted json). What
follows is a json encoded string, with the following fields:

- ``msg`` contains a JSON-encoded query, its fields are:

  - ``protocol``, integer, is the confd protocol version (initially
    just ``constants.CONFD_PROTOCOL_VERSION``, with a value of 1)
  - ``type``, integer, is the query type. For example "node role by
    name" or "node primary ip by instance ip". Constants will be
    provided for the actual available query types
  - ``query`` is a multi-type field (depending on the ``type`` field):

    - it can be missing, when the request is fully determined by the
      ``type`` field
    - it can contain a string which denotes the search key: for
      example an IP, or a node name
    - it can contain a dictionary, in which case the actual details
      vary further per request type

  - ``rsalt``, string, is the required response salt; the client must
    use it to recognize which answer it's getting.

- ``salt`` must be the current unix timestamp, according to the
  client; servers should refuse messages which have a wrong timing,
  according to their configuration and clock
- ``hmac`` is an hmac signature of salt+msg, with the cluster hmac key

If an answer comes back (which is optional, since confd works over UDP)
it will be in this format::

  plj0{
    "msg": "{\"status\": 0,
             \"answer\": 0,
             \"serial\": 42,
             \"protocol\": 1}\n",
    "salt": "9aa6ce92-8336-11de-af38-001d093e835f",
    "hmac": "aaeccc0dff9328fdf7967cb600b6a80a6a9332af"
  }

Where:

- ``plj0`` is the message type magic fourcc, as discussed above
- ``msg`` contains a JSON-encoded answer, its fields are:

  - ``protocol``, integer, is the confd protocol version (initially
    just ``constants.CONFD_PROTOCOL_VERSION``, with a value of 1)
  - ``status``, integer, is the error code; initially just ``0`` for
    'ok' or ``1`` for 'error' (in which case answer contains an error
    detail, rather than an answer), but in the future it may be
    expanded to have more meanings (e.g. ``2`` if the answer is
    compressed)
  - ``answer`` is the actual answer; its type and meaning is query
    specific: for example for "node primary ip by instance ip" queries
    it will be a string containing an IP address, for "node role by
    name" queries it will be an integer which encodes the role
    (master, candidate, drained, offline) according to constants

- ``salt`` is the requested salt from the query; a client can use it
  to recognize what query the answer is answering.
- ``hmac`` is an hmac signature of salt+msg, with the cluster hmac key
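
As an illustration, a client could build and check such messages along
these lines; the key handling and helper names are simplified and not
the actual confd implementation::

  import hmac
  import json
  import time
  import uuid
  from hashlib import sha1

  def PackConfdRequest(query_type, query, hmac_key):
    """Build a signed "plj0" confd request (hmac_key must be bytes)."""
    msg = json.dumps({"protocol": 1, "type": query_type, "query": query,
                      "rsalt": str(uuid.uuid4())})
    salt = str(int(time.time()))
    sig = hmac.new(hmac_key, (salt + msg).encode("utf-8"), sha1).hexdigest()
    return "plj0" + json.dumps({"msg": msg, "salt": salt, "hmac": sig})

  def CheckConfdAnswer(payload, expected_rsalt, hmac_key):
    """Verify the signature and salt of an answer, return its msg."""
    data = json.loads(payload[len("plj0"):])
    sig = hmac.new(hmac_key, (data["salt"] + data["msg"]).encode("utf-8"),
                   sha1).hexdigest()
    if sig != data["hmac"] or data["salt"] != expected_rsalt:
      raise ValueError("Invalid or unexpected confd answer")
    return json.loads(data["msg"])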

    
Redistribute Config
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently LUClusterRedistConf triggers a copy of the updated
configuration file to all master candidates and of the ssconf files to
all nodes. There are other files which are maintained manually but which
are important to keep in sync. These are:

- rapi SSL key certificate file (rapi.pem) (on master candidates)
- rapi user/password file rapi_users (on master candidates)

Furthermore there are some files which are hypervisor specific but we
may want to keep in sync:

- the xen-hvm hypervisor uses one shared file for all vnc passwords, and
  copies the file once, during node add. This design is subject to
  revision to be able to have different passwords for different groups
  of instances via the use of hypervisor parameters, and to allow
  xen-hvm and kvm to use the same mechanism to provide
  password-protected vnc sessions. In general, though, it would be
  useful if the vnc password files were copied as well, to avoid
  unwanted vnc password changes on instance failover/migrate.

Optionally the admin may want to also ship files such as the global
xend.conf file, and the network scripts to all nodes.

Proposed changes
++++++++++++++++

RedistributeConfig will be changed to also copy the rapi files, and to
call every enabled hypervisor asking for a list of additional files to
copy. Users will have the possibility to populate a file containing a
list of files to be distributed; this file will be propagated as well.
Such a solution is really simple to implement and it's easily usable by
scripts.

This code will also be shared (via tasklets or by other means, if
tasklets are not ready for 2.1) with the AddNode and SetNodeParams LUs
(so that the relevant files will be automatically shipped to new master
candidates as they are set).

VNC Console Password
~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently just the xen-hvm hypervisor supports setting a password to
connect to the instances' VNC console, and has one common password
stored in a file.

This doesn't allow different passwords for different instances/groups of
instances, and makes it necessary to remember to copy the file around
the cluster when the password changes.

Proposed changes
++++++++++++++++

We'll change the VNC password file to a vnc_password_file hypervisor
parameter. This way it can have a cluster default, but also a different
value for each instance. The VNC enabled hypervisors (xen and kvm) will
publish all the password files in use through the cluster so that a
redistribute-config will ship them to all nodes (see the Redistribute
Config proposed changes above).

The current VNC_PASSWORD_FILE constant will be removed, but its value
will be used as the default HV_VNC_PASSWORD_FILE value, thus retaining
backwards compatibility with 2.0.

The code to export the list of VNC password files from the hypervisors
to RedistributeConfig will be shared between the KVM and xen-hvm
hypervisors.

Disk/Net parameters
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently disks and network interfaces have a few tweakable options and
all the rest is left to a default we chose. We're finding that we need
more and more to tweak some of these parameters, for example to disable
barriers for DRBD devices, or allow striping for the LVM volumes.

Moreover for many of these parameters it would be nice to have
cluster-wide defaults, and then be able to change them per
disk/interface.

Proposed changes
++++++++++++++++

We will add new cluster level diskparams and netparams, which will
contain all the tweakable parameters. All values which have a sensible
cluster-wide default will go into this new structure while parameters
which have unique values will not.

Example of network parameters:
  - mode: bridge/route
  - link: for mode "bridge" the bridge to connect to, for mode "route"
    it can contain the routing table, or the destination interface

Example of disk parameters:
  - stripe: lvm stripes
  - stripe_size: lvm stripe size
  - meta_flushes: drbd, enable/disable metadata "barriers"
  - data_flushes: drbd, enable/disable data "barriers"

Some parameters are bound to be disk-type specific (drbd vs. lvm vs.
files) or hypervisor specific (nic models for example), but for now they
will all live in the same structure. Each component is supposed to
validate only the parameters it knows about, and ganeti itself will make
sure that no "globally unknown" parameters are added, and that no
parameters have overridden meanings for different components.

The parameters will be kept, as for the BEPARAMS, in a "default"
category, which will allow us to expand on them by creating instance
"classes" in the future. Instance classes are not a feature we plan to
implement in 2.1, though.
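
As an illustration only, the resulting cluster-level structures could
look roughly like this (names and values are examples, not the final
schema)::

  # Cluster-level defaults, kept in a "default" category so that
  # instance "classes" can be added later without changing the layout
  netparams = {
    "default": {
      "mode": "bridge",
      "link": "xen-br0",
    },
  }

  diskparams = {
    "default": {
      "stripe": 1,
      "stripe_size": 64,      # LVM stripe size, in KiB
      "meta_flushes": True,   # DRBD metadata "barriers"
      "data_flushes": True,   # DRBD data "barriers"
    },
  }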

    
Global hypervisor parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently all hypervisor parameters are modifiable both globally
(cluster level) and at instance level. However, there is no other
framework to hold hypervisor-specific parameters, so if we want to add
a new class of hypervisor parameters that only makes sense on a global
level, we have to change the hvparams framework.

Proposed changes
++++++++++++++++

We add a new (global, not per-hypervisor) list of parameters which are
not changeable on a per-instance level. The create, modify and query
instance operations are changed to not allow/show these parameters.

Furthermore, to allow transition of parameters to the global list, and
to allow cleanup of inadvertently-customised parameters, the
``UpgradeConfig()`` method of instances will drop any such parameters
from their list of hvparams, such that a restart of the master daemon
is all that is needed for cleaning these up.
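
A sketch of that cleanup step (the set of global-only parameter names
is purely hypothetical here)::

  # Hypothetical set of global-only hypervisor parameters
  GLOBAL_HVPARAMS = frozenset(["migration_port"])

  def UpgradeInstanceHvParams(hvparams):
    """Drop global-only parameters from an instance's hvparams dict."""
    for name in GLOBAL_HVPARAMS:
      hvparams.pop(name, None)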

    
Also, the framework is simple enough that if we need to replicate it
at beparams level we can do so easily.


Non bridged instances support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently each instance NIC must be connected to a bridge, and if the
bridge is not specified the default cluster one is used. This makes it
impossible to use the vif-route xen network scripts, or other
alternative mechanisms that don't need a bridge to work.

Proposed changes
++++++++++++++++

The new "mode" network parameter will distinguish between bridged
interfaces and routed ones.

When mode is "bridge" the "link" parameter will contain the bridge the
instance should be connected to, effectively keeping the current
behaviour. The value has been migrated from a nic field to a parameter
to allow for an easier manipulation of the cluster default.

When mode is "route" the ip field of the interface will become
mandatory, to allow for a route to be set. In the future we may want
also to accept multiple IPs or IP/mask values for this purpose. We will
evaluate possible meanings of the link parameter to signify a routing
table to be used, which would allow for isolation between instance
groups (as today happens for different bridges).
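
For illustration, a NIC relying on the cluster default and one
overriding it to routed mode could be described like this; the exact
field names are illustrative, the semantics follow the description
above::

  # Uses the cluster-wide default nic parameters ("bridge" mode)
  nic_bridged = {"ip": None, "nicparams": {}}

  # Routed NIC: "ip" becomes mandatory, "link" may name a routing table
  nic_routed = {
    "ip": "192.0.2.10",
    "nicparams": {"mode": "route", "link": "100"},
  }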

    
For now we won't add a parameter to specify which network script gets
called for which instance, so in a mixed cluster the network script must
be able to handle both cases. The default kvm vif script will be changed
to do so. (Xen doesn't have a ganeti provided script, so nothing will be
done for that hypervisor.)

Introducing persistent UUIDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

Some objects in the Ganeti configurations are tracked by their name
while also supporting renames. This creates an extra difficulty,
because neither Ganeti nor external management tools can then track
the actual entity, and due to the name change it behaves like a new
one.

Proposed changes part 1
+++++++++++++++++++++++

We will change Ganeti to use UUIDs for entity tracking, but in a
staggered way. In 2.1, we will simply add a “uuid” attribute to each
of the instances, nodes and the cluster itself. This will be reported
on instance creation for instances, and on node add for nodes. It will
of course be available for querying via the OpNodeQuery/Instance and
cluster information, and via RAPI as well.

Note that Ganeti will not provide any way to change this attribute.

Upgrading from Ganeti 2.0 will automatically add a “uuid” attribute
to all entities missing it.
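
The upgrade step could be as simple as the following sketch (using the
standard library ``uuid`` module; the attribute name is as described
above, the helper name is illustrative)::

  import uuid

  def AddMissingUuids(entities):
    """Give every config object a persistent uuid if it lacks one."""
    for entity in entities:
      if getattr(entity, "uuid", None) is None:
        entity.uuid = str(uuid.uuid4())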

    
Proposed changes part 2
+++++++++++++++++++++++

In the next release (e.g. 2.2), the tracking of objects will change
from the name to the UUID internally, and externally Ganeti will
accept both forms of identification; e.g. an RAPI call would be made
either against ``/2/instances/foo.bar`` or against
``/2/instances/bb3b2e42…``. Since an FQDN must have at least a dot,
and dots are not valid characters in UUIDs, we will not have namespace
issues.

Another change here is that node identification (during cluster
operations/queries like master startup, “am I the master?” and
similar) could be done via UUIDs which is more stable than the current
hostname-based scheme.

Internal tracking refers to the way the configuration is stored; a
DRBD disk of an instance refers to the node name (so that IPs can be
changed easily), but this is still a problem for name changes; thus
these will be changed to point to the node UUID to ease renames.

The advantage of this change (after the second round of changes) is
that node rename becomes trivial, whereas today node rename would
require a complete lock of all instances.


Automated disk repairs infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Replacing defective disks in an automated fashion is quite difficult
with the current version of Ganeti. These changes will introduce
additional functionality and interfaces to simplify automating disk
replacements on a Ganeti node.

Fix node volume group
+++++++++++++++++++++

This is the most difficult addition, as it can lead to data loss if it's
not properly safeguarded.

The operation must be done only when all the other nodes that have
instances in common with the target node are fine, i.e. this is the only
node with problems, and also we have to double-check that all instances
on this node have at least a good copy of the data.

This might mean that we have to enhance the GetMirrorStatus calls, and
introduce a smarter version that can tell us more about the status of an
instance.

Stop allocation on a given PV
+++++++++++++++++++++++++++++

This is somewhat simple. First we need a "list PVs" opcode (and its
associated logical unit) and then a set PV status opcode/LU. These in
combination should allow both checking and changing the disk/PV status.

Instance disk status
++++++++++++++++++++

This new opcode or opcode change must list the instance-disk-index and
node combinations of the instance together with their status. This will
allow determining what part of the instance is broken (if any).

Repair instance
+++++++++++++++

This new opcode/LU/RAPI call will run ``replace-disks -p`` as needed, in
order to fix the instance status. It only affects primary instances;
secondaries can just be moved away.

Migrate node
++++++++++++

This new opcode/LU/RAPI call will take over the current ``gnt-node
migrate`` code and run migrate for all instances on the node.

Evacuate node
+++++++++++++

This new opcode/LU/RAPI call will take over the current ``gnt-node
evacuate`` code and run replace-secondary with an iallocator script for
all instances on the node.


User-id pool
~~~~~~~~~~~~

In order to allow running different processes under unique user-ids
on a node, we introduce the user-id pool concept.

The user-id pool is a cluster-wide configuration parameter.
It is a list of user-ids and/or user-id ranges that are reserved
for running Ganeti processes (including KVM instances).
The code guarantees that on a given node a given user-id is only
handed out if there is no other process running with that user-id.

Please note that this can only be guaranteed if all processes in
the system - that run under a user-id belonging to the pool - are
started by reserving a user-id first. That can be accomplished
either by using the RequestUnusedUid() function to get an unused
user-id or by implementing the same locking mechanism.

Implementation
++++++++++++++

The functions that are specific to the user-id pool feature are located
in a separate module: ``lib/uidpool.py``.

Storage
^^^^^^^

The user-id pool is a single cluster parameter. It is stored in the
*Cluster* object under the ``uid_pool`` name as a list of integer
tuples. These tuples represent the boundaries of user-id ranges.
For single user-ids, the boundaries are equal.

The internal user-id pool representation is converted into a
string: a newline separated list of user-ids or user-id ranges.
This string representation is distributed to all the nodes via the
*ssconf* mechanism. This means that the user-id pool can be
accessed in a read-only way on any node without consulting the master
node or master candidate nodes.

Initial value
^^^^^^^^^^^^^

The value of the user-id pool cluster parameter can be initialized
at cluster initialization time using the

``gnt-cluster init --uid-pool <uid-pool definition> ...``

command.

As there is no sensible default value for the user-id pool parameter,
it is initialized to an empty list if no ``--uid-pool`` option is
supplied at cluster init time.

If the user-id pool is empty, the user-id pool feature is considered
to be disabled.

Manipulation
^^^^^^^^^^^^

The user-id pool cluster parameter can be modified from the
command-line with the following commands:

- ``gnt-cluster modify --uid-pool <uid-pool definition>``
- ``gnt-cluster modify --add-uids <uid-pool definition>``
- ``gnt-cluster modify --remove-uids <uid-pool definition>``

The ``--uid-pool`` option overwrites the current setting with the
supplied ``<uid-pool definition>``, while
``--add-uids``/``--remove-uids`` adds/removes the listed uids
or uid-ranges from the pool.

The ``<uid-pool definition>`` should be a comma-separated list of
user-ids or user-id ranges. A range should be defined by a lower and
a higher boundary. The boundaries should be separated with a dash.
The boundaries are inclusive.
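
A parser for this format could look like the following sketch (the
real ``uidpool.ParseUidPool`` may differ)::

  def ParseUidPoolDefinition(value, separator=","):
    """Parse e.g. "1000-1009,1100" into [(1000, 1009), (1100, 1100)]."""
    ranges = []
    for item in value.split(separator):
      item = item.strip()
      if not item:
        continue
      if "-" in item:
        lower, higher = item.split("-", 1)
      else:
        lower = higher = item
      ranges.append((int(lower), int(higher)))
    return ranges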

    
The ``<uid-pool definition>`` is parsed into the internal
representation, sanity-checked and stored in the ``uid_pool``
attribute of the *Cluster* object.

It is also immediately converted into a string (formatted in the
input format) and distributed to all nodes via the *ssconf* mechanism.

Inspection
^^^^^^^^^^

The current value of the user-id pool cluster parameter is printed
by the ``gnt-cluster info`` command.

The output format is accepted by the ``gnt-cluster modify --uid-pool``
command.

Locking
^^^^^^^

The ``uidpool.py`` module provides a function (``RequestUnusedUid``)
for requesting an unused user-id from the pool.

This will try to find a random user-id that is not currently in use.
The algorithm is the following (a sketch of the locking step follows
the list):

1) Randomize the list of user-ids in the user-id pool
2) Iterate over this randomized UID list
3) Create a lock file (it doesn't matter if it already exists)
4) Acquire an exclusive POSIX lock on the file, to provide mutual
   exclusion for the following non-atomic operations
5) Check if there is a process in the system with the given UID
6) If there isn't, return the UID, otherwise unlock the file and
   continue the iteration over the user-ids
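
The locking step (items 3 to 5 above) could be implemented along these
lines; the lock directory is illustrative and error handling is
minimal::

  import fcntl
  import os

  def LockUid(uid, lock_dir="/var/run/ganeti/uid-pool"):
    """Create and exclusively lock the lock file for a user-id.

    Returns the open file descriptor; keeping it open keeps the lock.
    """
    uid_file = os.path.join(lock_dir, str(uid))
    # It doesn't matter if the lock file already exists
    fd = os.open(uid_file, os.O_RDWR | os.O_CREAT, 0o600)
    try:
      # LOCK_NB: fail instead of blocking if another process holds it
      fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError:
      os.close(fd)
      raise
    return fd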

    
The user can then start a new process with this user-id.
Once a process is successfully started, the exclusive POSIX lock can
be released, but the lock file will remain in the filesystem.
The presence of such a lock file means that the given user-id is most
probably in use. The lack of a uid lock file does not guarantee that
there are no processes with that user-id.

After acquiring the exclusive POSIX lock, ``RequestUnusedUid``
always performs a check to see if there is a process running with the
given uid.

A user-id can be returned to the pool, by calling the
``ReleaseUid`` function. This will remove the corresponding lock file.
Note that it doesn't check if there is any process still running
with that user-id. The removal of the lock file only means that there
are most probably no processes with the given user-id. This helps
in speeding up the process of finding a user-id that is guaranteed to
be unused.

There is a convenience function, called ``ExecWithUnusedUid`` that
wraps the execution of a function (or any callable) that requires a
unique user-id. ``ExecWithUnusedUid`` takes care of requesting an
unused user-id and unlocking the lock file. It also automatically
returns the user-id to the pool if the callable raises an exception.

Code examples
+++++++++++++

Requesting a user-id from the pool:

::

  from ganeti import ssconf
  from ganeti import uidpool

  # Get list of all user-ids in the uid-pool from ssconf
  ss = ssconf.SimpleStore()
  uid_pool = uidpool.ParseUidPool(ss.GetUidPool(), separator="\n")
  all_uids = set(uidpool.ExpandUidPool(uid_pool))

  uid = uidpool.RequestUnusedUid(all_uids)
  try:
    <start a process with the UID>
    # Once the process is started, we can release the file lock
    uid.Unlock()
  except ..., err:
    # Return the UID to the pool
    uidpool.ReleaseUid(uid)


Releasing a user-id:

::

  from ganeti import uidpool

  uid = <get the UID the process is running under>
  <stop the process>
  uidpool.ReleaseUid(uid)


External interface changes
--------------------------

OS API
~~~~~~

The OS API of Ganeti 2.0 has been built with extensibility in mind.
Since we pass everything as environment variables it's a lot easier to
send new information to the OSes without breaking retrocompatibility.
This section of the design outlines the proposed extensions to the API
and their implementation.

API Version Compatibility Handling
++++++++++++++++++++++++++++++++++

In 2.1 there will be a new OS API version (e.g. 15), which should be
mostly compatible with API 10, except for some newly added variables.
Since it's easy not to pass some variables we'll be able to handle
Ganeti 2.0 OSes by just filtering out the newly added pieces of
information. We will still encourage OSes to declare support for the new
API after checking that the new variables don't create any conflict for
them, and we will drop API 10 support after Ganeti 2.1 has been
released.

New Environment variables
+++++++++++++++++++++++++

Some variables have never been added to the OS API but would definitely
be useful for the OSes. We plan to add an INSTANCE_HYPERVISOR variable
to allow the OS to make changes relevant to the virtualization the
instance is going to use. Since this field is immutable for each
instance, the OS can tailor the install to it without having to make
sure the instance can run under any virtualization technology.

We also want the OS to know the particular hypervisor parameters, to be
able to customize the install even more. Since the parameters can
change, though, we will pass them only as an "FYI": if an OS ties some
instance functionality to the value of a particular hypervisor parameter
manual changes or a reinstall may be needed to adapt the instance to the
new environment. This is not a regression as of today, because even if
the OSes are left blind about this information, sometimes they still
need to make compromises and cannot satisfy all possible parameter
values.

OS Variants
+++++++++++

Currently we are witnessing some degree of "OS proliferation" just to
change a simple installation behavior. This means that the same OS gets
installed on the cluster multiple times, with different names, to
customize just one installation behavior. Usually such OSes try to share
as much as possible through symlinks, but this still causes
complications on the user side, especially when multiple parameters must
be cross-matched.

For example today if you want to install debian etch, lenny or squeeze
you probably need to install the debootstrap OS multiple times, changing
its configuration file, and calling it debootstrap-etch,
debootstrap-lenny or debootstrap-squeeze. Furthermore if you have for
example a "server" and a "development" environment which installs
different packages/configuration files and must be available for all
installs you'll probably end up with debootstrap-etch-server,
debootstrap-etch-dev, debootstrap-lenny-server, debootstrap-lenny-dev,
etc. Crossing more than two parameters quickly becomes unmanageable.

In order to avoid this we plan to make OSes more customizable, by
allowing each OS to declare a list of variants which can be used to
customize it. The variants list is mandatory and must be written, one
variant per line, in the new "variants.list" file inside the main os
dir. At least one variant must be supported. When choosing the OS
exactly one variant will have to be specified, and will be encoded in
the os name as <OS-name>+<variant>. As for today it will be possible to
change an instance's OS at creation or install time.
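
A sketch of how the variant could be split off and checked against the
OS's "variants.list" file (helper names are illustrative)::

  import os.path

  def SplitOsVariant(os_spec, os_root):
    """Split "debootstrap+etch" into name and variant and validate it."""
    if "+" not in os_spec:
      return os_spec, None
    name, variant = os_spec.split("+", 1)
    variants_file = os.path.join(os_root, name, "variants.list")
    with open(variants_file) as fd:
      declared = [line.strip() for line in fd if line.strip()]
    if variant not in declared:
      # Per the design, a user may still force a non-declared variant
      raise ValueError("Variant '%s' not declared by OS '%s'" %
                       (variant, name))
    return name, variant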

    
The 2.1 OS list will be the combination of each OS, plus its supported
variants. This will cause the name proliferation to remain, but at
least the internal OS code will be simplified to just parsing the passed
variant, without the need for symlinks or code duplication.

Also we expect the OSes to declare only "interesting" variants, but to
accept some non-declared ones which a user will be able to pass in by
overriding the checks ganeti does. This will be useful for allowing some
variations to be used without polluting the OS list (per-OS
documentation should list all supported variants). If a variant which is
not internally supported is forced through, the OS scripts should abort.

In the future (post 2.1) we may want to move to full fledged parameters
all orthogonal to each other (for example "architecture" (i386, amd64),
"suite" (lenny, squeeze, ...), etc). (As opposed to the variant, which
is a single parameter, and you need a different variant for all the set
of combinations you want to support.) In this case we envision the
variants to be moved inside of Ganeti and be associated with lists of
parameter->value associations, which will then be passed to the OS.


IAllocator changes
~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

The iallocator interface allows creation of instances without manually
specifying nodes, but instead by specifying plugins which will do the
required computations and produce a valid node list.

However, the interface is quite awkward to use:

- one cannot set a 'default' iallocator script
- one cannot use it to easily test if allocation would succeed
- some new functionality, such as rebalancing clusters and calculating
  capacity estimates, is needed

Proposed changes
++++++++++++++++

There are two areas of improvement proposed:

- improving the use of the current interface
- extending the IAllocator API to cover more automation


Default iallocator names
^^^^^^^^^^^^^^^^^^^^^^^^

The cluster will hold, for each type of iallocator, a (possibly empty)
list of modules that will be used automatically.

If the list is empty, the behaviour will remain the same.

If the list has one entry, then ganeti will behave as if
``--iallocator`` was specified on the command line, i.e. use this
allocator by default. If the user however passed nodes, those will be
used in preference.

If the list has multiple entries, they will be tried in order until
one gives a successful answer.
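
The intended lookup behaviour, as a sketch (the plugin invocation and
its error type are placeholders, not an existing API)::

  class IAllocatorError(Exception):
    """Placeholder for the real iallocator failure exception."""

  def AllocateWithDefaults(default_allocators, run_plugin, request):
    """Try the configured default iallocators in order.

    ``run_plugin(name, request)`` should raise IAllocatorError when the
    plugin cannot produce a successful answer.
    """
    last_error = IAllocatorError("no default iallocator configured")
    for name in default_allocators:
      try:
        return run_plugin(name, request)
      except IAllocatorError as err:
        last_error = err
    raise last_error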

    
Dry-run allocation
^^^^^^^^^^^^^^^^^^

The create instance LU will get a new 'dry-run' option that will just
simulate the placement, and return the chosen node-lists after running
all the usual checks.

Cluster balancing
^^^^^^^^^^^^^^^^^

Instance additions/removals/moves can create a situation where load on
the nodes is not spread equally. For this, a new iallocator mode will be
implemented called ``balance`` in which the plugin, given the current
cluster state, and a maximum number of operations, will need to
compute the instance relocations needed in order to achieve a "better"
(for whatever the script believes is better) cluster.

Cluster capacity calculation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In this mode, called ``capacity``, given an instance specification and
the current cluster state (similar to the ``allocate`` mode), the
plugin needs to return (an illustrative answer follows the list):

- how many instances can be allocated on the cluster with that
  specification
- on which nodes these will be allocated (in order)
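
An illustrative answer (the field names are examples only, not a
defined format)::

  # The cluster can hold three more instances of the given
  # specification, to be placed on these (primary, secondary) pairs
  capacity_answer = {
    "instances": 3,
    "nodes": [
      ["node1.example.com", "node2.example.com"],
      ["node3.example.com", "node1.example.com"],
      ["node2.example.com", "node3.example.com"],
    ],
  }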

    
.. vim: set textwidth=72 :