This document describes the major changes in Ganeti 2.1 compared to
the 2.0 version.

The 2.1 version will be a relatively small release. Its main aim is to
avoid changing too much of the core code, while addressing issues and
adding new features and improvements over 2.0, in a timely fashion.

.. contents:: :depth: 4

Ganeti 2.1 will add features to help further automation of cluster
operations, further improve scalability to even bigger clusters, and
make it easier to debug the Ganeti core.

As for 2.0, we divide the 2.1 design into three areas:

- core changes, which affect the master daemon/job queue/locking or
  all/most logical units
- logical unit/feature changes
- external interface changes (e.g. command line, OS API, hooks, ...)

Storage units modelling
~~~~~~~~~~~~~~~~~~~~~~~

Currently, Ganeti has a good model of the block devices for instances
(e.g. LVM logical volumes, files, DRBD devices, etc.) but no model of
the storage pools that provide the space for these front-end devices.
For example, there are hardcoded inter-node RPC calls for volume group
listing, file storage creation/deletion, etc.

The storage units framework will implement generic handling for all
kinds of storage backends:

- LVM physical volumes
- LVM volume groups
- File-based storage directories
- any other future storage method

There will be a generic list of methods that each storage unit type
must support:

- list of storage units of this type
- check status of the storage unit

Additionally, there will be specific methods for each storage unit
type, for example:

- enable/disable allocations on a specific PV
- file storage directory creation/deletion
- VG consistency fixing

This will allow much better modelling and unification of the various
RPC calls related to backend storage pools in the future. Ganeti 2.1 is
intended to add the basics of the framework, and not necessarily move
all the current VG/file-based operations to it.

Note that while we model both LVM PVs and LVM VGs, the framework will
**not** model any relationship between the different types. In other
words, we model neither inheritance nor stacking, since this is too
complex for our needs. While a ``vgreduce`` operation on a LVM VG could
actually remove a PV from it, this will not be handled at the framework
level, but at individual operation level. The goal is that this is a
lightweight framework, for abstracting the different storage
operations, and not for modelling the storage hierarchy.

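To make the intended shape concrete, here is a sketch of such a generic
method list; the class and method names are illustrative assumptions,
not the final Ganeti API::

  class _StorageBase(object):
    """Sketch of a storage unit type (LVM PV, file directory, ...)."""

    def List(self, name, fields):
      """Returns a list of storage units of this type."""
      raise NotImplementedError()

    def Modify(self, name, changes):
      """Applies a type-specific change, e.g. (dis)allowing allocations
      on a PV.

      """
      raise NotImplementedError()

    def Execute(self, name, op):
      """Runs a type-specific operation, e.g. fixing VG consistency."""
      raise NotImplementedError()
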
Locking improvements
~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

The class ``LockSet`` (see ``lib/locking.py``) is a container for one or
many ``SharedLock`` instances. It provides an interface to add/remove
locks and to acquire and subsequently release any number of those locks
contained in it.

Locks in a ``LockSet`` are always acquired in alphabetic order. Due to
the way we're using locks for nodes and instances (the single cluster
lock isn't affected by this issue) this can lead to long delays when
acquiring locks if another operation tries to acquire multiple locks but
has to wait for yet another operation.

In the following demonstration we assume we have the instance locks
``inst1``, ``inst2``, ``inst3`` and ``inst4``.

#. Operation A grabs the lock for instance ``inst4``.
#. Operation B wants to acquire all instance locks in alphabetic order,
   but it has to wait for ``inst4``.
#. Operation C tries to lock ``inst1``, but it has to wait until
   Operation B (which is trying to acquire all locks) releases the lock
   again.
#. Operation A finishes and releases the lock on ``inst4``. Operation B
   can continue and eventually releases all locks.
#. Operation C can get the ``inst1`` lock and finishes.

Technically there's no need for Operation C to wait for Operation A, and
subsequently Operation B, to finish. Operation B can't continue until
Operation A is done (it has to wait for ``inst4``) anyway.

Proposed changes
++++++++++++++++

Non-blocking lock acquiring
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Acquiring locks for OpCode execution is always done in blocking mode.
The calls won't return until the lock has successfully been acquired
(or an error occurred, although we won't cover that case here).

``SharedLock`` and ``LockSet`` must be able to be acquired in a
non-blocking way. They must support a timeout and abort trying to
acquire the lock(s) after the specified amount of time.

Retry acquiring locks
^^^^^^^^^^^^^^^^^^^^^

To prevent other operations from waiting for a long time, such as
described in the demonstration above, ``LockSet`` must not keep locks
for a prolonged period of time when trying to acquire two or more locks.
Instead it should, with an increasing timeout for acquiring all locks,
release all locks again and sleep some time if it fails to acquire all
requested locks.

A good timeout value needs to be determined. In any case, ``LockSet``
should proceed to acquire locks in blocking mode after a few
(unsuccessful) attempts to acquire all requested locks.

One proposal for the timeout is to use ``2**tries`` seconds, where
``tries`` is the number of unsuccessful tries.

In the demonstration above this would allow Operation C to continue
after Operation B unsuccessfully tried to acquire all locks and released
all acquired locks (``inst1``, ``inst2`` and ``inst3``) again.

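A sketch of such a retry loop, assuming a ``lockset.acquire(names,
timeout=...)`` call that returns ``None`` on timeout (the exact
signature is part of the changes above and may differ)::

  import random
  import time

  def AcquireWithBackoff(lockset, names, max_tries=5):
    """Illustration only: acquire all locks with increasing timeouts."""
    for tries in range(max_tries):
      acquired = lockset.acquire(names, timeout=2 ** tries)
      if acquired is not None:
        return acquired
      # The timeout expired and acquire() released any locks it held;
      # sleep a bit so the blocking operation can make progress
      time.sleep(random.random())
    # After a few unsuccessful attempts, fall back to blocking mode
    return lockset.acquire(names)
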
Other solutions discussed
+++++++++++++++++++++++++

There was also some discussion on going one step further and extending
the job queue (see ``lib/jqueue.py``) to select the next task for a
worker depending on whether it can acquire the necessary locks. While
this may reduce the number of necessary worker threads and/or increase
throughput on large clusters with many jobs, it also brings many
potential problems with it, such as contention and increased memory
usage. As this would be an extension of the changes proposed before, it
could be implemented at a later point in time, but we decided to stay
with the simpler solution for now.

Implementation details
++++++++++++++++++++++

``SharedLock`` redesign
^^^^^^^^^^^^^^^^^^^^^^^

The current design of ``SharedLock`` is not good for supporting timeouts
when acquiring a lock and there are also minor fairness issues in it. We
plan to address both with a redesign. A proof of concept implementation
was written and resulted in significantly simpler code.

Currently ``SharedLock`` uses two separate queues for shared and
exclusive acquires, and waiters get to run in turns. This means if an
exclusive acquire is released, the lock will allow shared waiters to run
and vice versa. Although it's still fair in the end, there is a slight
bias towards shared waiters in the current implementation. The same
implementation with two separate queues cannot support timeouts without
adding a lot of complexity.

Our proposed redesign changes ``SharedLock`` to have only one single
queue. There will be one condition (see Condition_ for a note about
performance) in the queue per exclusive acquire and two for all shared
acquires (see below for an explanation). The maximum queue length will
always be ``2 + (number of exclusive acquires waiting)``. The number of
queue entries for shared acquires can vary from 0 to 2.

The two conditions for shared acquires are a bit special. They will be
used in turn. When the lock is instantiated, no conditions are in the
queue. As soon as the first shared acquire arrives (and there are
holder(s) or waiting acquires; see Acquire_), the active condition is
added to the queue. Until it becomes the topmost condition in the queue
and has been notified, any shared acquire is added to this active
condition. When the active condition is notified, the conditions are
swapped and further shared acquires are added to the previously inactive
condition (which has now become the active condition). After all waiters
on the previously active (now inactive) and now notified condition
have received the notification, it is removed from the queue of pending
acquires.

This means shared acquires will skip any exclusive acquire in the queue.
We believe it's better to improve parallelization on operations only
asking for shared (or read-only) locks. Exclusive operations holding the
same lock cannot be parallelized.

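Structurally, the redesigned lock could look like the sketch below;
``PipeCondition`` stands for the pipe-based condition class proposed in
Condition_, and all names are illustrative rather than final::

  import threading

  # Stand-in for the pipe-based condition proposed in Condition_ below
  PipeCondition = threading.Condition

  class SharedLockSketch(object):
    def __init__(self):
      self.__lock = threading.Lock()
      # Single FIFO queue of conditions: one entry per exclusive
      # acquire, plus at most two entries for all shared acquires
      self.__pending = []
      # The two conditions used in turn for shared acquires; they are
      # swapped whenever the active one is notified
      self.__active_shr_c = PipeCondition(self.__lock)
      self.__inactive_shr_c = PipeCondition(self.__lock)
      # Current holders: a set of shared owners or one exclusive owner
      self.__shr = set()
      self.__exc = None
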
Acquire
^^^^^^^

For exclusive acquires a new condition is created and appended to the
queue. Shared acquires are added to the active condition for shared
acquires and, if the condition is not yet on the queue, it's appended.

The next step is to wait for our condition to be on the top of the queue
(to guarantee fairness). If the timeout expired, we return to the caller
without acquiring the lock. On every notification we check whether the
lock has been deleted, in which case an error is returned to the caller.

The lock can be acquired if we're on top of the queue (there is no one
else ahead of us). For an exclusive acquire, there must not be other
exclusive or shared holders. For a shared acquire, there must not be an
exclusive holder. If these conditions are all true, the lock is
acquired and we return to the caller. In any other case we wait again on
the condition.

If it was the last waiter on a condition, the condition is removed from
the queue.

Optimization: There's no need to touch the queue if there are no pending
acquires and no current holders. The caller can have the lock
immediately.

.. image:: design-2.1-lock-acquire.png

Release
^^^^^^^

First the lock removes the caller from the internal owner list. If there
are pending acquires in the queue, the first (the oldest) condition is
notified.

If the first condition was the active condition for shared acquires, the
inactive condition will be made active. This ensures fairness with
exclusive locks by forcing consecutive shared acquires to wait in the
queue.

.. image:: design-2.1-lock-release.png

Delete
^^^^^^

The caller must either hold the lock in exclusive mode already or the
lock must be acquired in exclusive mode. Trying to delete a lock while
it's held in shared mode must fail.

After ensuring the lock is held in exclusive mode, the lock will mark
itself as deleted and continue to notify all pending acquires. They will
wake up, notice the deleted lock and return an error to the caller.

Condition
^^^^^^^^^

Note: This is not necessary for the locking changes above, but it may be
a good optimization (pending performance tests).

The existing locking code in Ganeti 2.0 uses Python's built-in
``threading.Condition`` class. Unfortunately ``Condition`` implements
timeouts by sleeping 1ms to 20ms between tries to acquire the condition
lock in non-blocking mode. This requires unnecessary context switches
and contention on the CPython GIL (Global Interpreter Lock).

By using POSIX pipes (see ``pipe(2)``) we can use the operating system's
support for timeouts on file descriptors (see ``select(2)``). A custom
condition class will have to be written for this.

On instantiation the class creates a pipe. After each notification the
previous pipe is abandoned and re-created (technically the old pipe
needs to stay around until all notifications have been delivered).

All waiting clients of the condition use ``select(2)`` or ``poll(2)`` to
wait for notifications, optionally with a timeout. A notification will
be signalled to the waiting clients by closing the pipe. If the pipe
wasn't closed during the timeout, the waiting function returns to its
caller.

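A minimal sketch of the mechanism (Linux semantics assumed; the real
class must also integrate with the lock's mutex and keep an abandoned
pipe alive until every waiter has seen the notification)::

  import os
  import select

  class PipeNotifier(object):
    """One-shot notification through a POSIX pipe."""

    def __init__(self):
      self._read_fd, self._write_fd = os.pipe()

    def wait(self, timeout=None):
      """Returns True if notified, False if the timeout expired."""
      # select(2) provides the operating system's timeout support; the
      # read end becomes readable (EOF) once the write end is closed
      readable, _, _ = select.select([self._read_fd], [], [], timeout)
      return bool(readable)

    def notify_all(self):
      """Signals all current waiters by closing the pipe's write end."""
      os.close(self._write_fd)
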
Node daemon availability
~~~~~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently, when a Ganeti node suffers serious system disk damage, the
migration/failover of an instance may not correctly shut down the
virtual machine on the broken node, causing instance duplication. The
``gnt-node powercycle`` command can be used to force a node reboot and
thus to avoid duplicated instances. This command relies on node daemon
availability, though, and thus can fail if the node daemon has some
pages swapped out of RAM, for example.

Proposed changes
++++++++++++++++

The proposed solution forces the node daemon to run exclusively in RAM.
It uses Python ctypes to call ``mlockall(MCL_CURRENT | MCL_FUTURE)`` on
the node daemon process and all its children. In addition, another log
handler has been implemented for the node daemon to redirect to
``/dev/console`` any messages that cannot be written to the logfile.

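A sketch of the ctypes call, assuming the Linux values of the ``MCL_*``
constants; the actual node daemon code may differ::

  import ctypes
  import ctypes.util

  def Mlockall():
    """Locks the current process' virtual address space into RAM."""
    libc_name = ctypes.util.find_library("c")
    if libc_name is None:
      raise OSError("Cannot find the C library")
    libc = ctypes.CDLL(libc_name, use_errno=True)
    MCL_CURRENT = 1  # lock all pages currently mapped (Linux value)
    MCL_FUTURE = 2   # lock all future mappings too (Linux value)
    if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
      raise OSError(ctypes.get_errno(), "mlockall() failed")
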
With these changes the node daemon can successfully run basic tasks such
as a powercycle request even when the system disk is heavily damaged and
reading/writing to disk fails constantly.

New Features
------------

Automated Ganeti Cluster Merger
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Currently there's no easy way to merge two or more clusters together.
But in order to optimize resources this is a needed missing piece. The
goal of this design doc is to come up with an easy to use solution which
allows you to merge two or more clusters together.

Initial contact
+++++++++++++++

As the design of Ganeti is based on an autonomous system, Ganeti by
itself has no way to reach nodes outside of its cluster. To overcome
this situation we're required to prepare the cluster before we can go
ahead with the actual merge: we have to replace at least the ssh keys on
the affected nodes before we can do any operation with the ``gnt-``
commands.

To make this an automated process we'll ask the user to provide us with
the root password of every cluster we have to merge. We use the password
to grab the current ``id_dsa`` key and then rely on that ssh key for any
further communication to be made until the cluster is fully merged.

Cluster merge
+++++++++++++

After initial contact we do the cluster merge:

1. Grab the list of nodes
2. On all nodes add our own ``id_dsa.pub`` key to ``authorized_keys``
3. Stop all instances running on the merging cluster
4. Disable ``ganeti-watcher`` as it tries to restart Ganeti daemons
5. Stop all Ganeti daemons on all merging nodes
6. Grab the ``config.data`` from the master of the merging cluster
7. Stop local ``ganeti-masterd``
8. Merge the config:

   1. Open our own cluster ``config.data``
   2. Open cluster ``config.data`` of the merging cluster
   3. Grab all nodes of the merging cluster
   4. Set ``master_candidate`` to false on all merging nodes
   5. Add the nodes to our own cluster ``config.data``
   6. Grab all the instances on the merging cluster
   7. Adjust the port if the instance has drbd layout:

      1. In ``logical_id`` (index 2)
      2. In ``physical_id`` (index 1 and 3)

   8. Add the instances to our own cluster ``config.data``

9. Start ``ganeti-masterd`` with ``--no-voting`` ``--yes-do-it``
10. ``gnt-node add --readd`` on all merging nodes
11. ``gnt-cluster redist-conf``
12. Restart ``ganeti-masterd`` normally
13. Enable ``ganeti-watcher`` again
14. Start all merging instances again

Rollback
++++++++

Until we actually (re)add any nodes we can abort and rollback the merge
at any point. After merging the config, though, we have to get the
backup copy of ``config.data`` (from another master candidate node). And
for security reasons it's a good idea to undo ``id_dsa.pub``
distribution by going to every affected node and removing the
``id_dsa.pub`` key again. Also we have to keep in mind that we have to
start the Ganeti daemons and the instances again.

Verification
++++++++++++

Last but not least we should verify that the merge was successful.
Therefore we run ``gnt-cluster verify``, which ensures that the cluster
overall is in a healthy state. Additionally it's also possible to
compare the list of instances/nodes with a list made prior to the merge
to make sure we didn't lose any data/instance/node.

Appendix
++++++++

The script below is used to merge the cluster config. This is a POC and
might differ from actual production code.

::

  import sys

  from ganeti import config
  from ganeti import constants

  # Load our own config and the config of the merging cluster
  c_mine = config.ConfigWriter(offline=True)
  c_other = config.ConfigWriter(sys.argv[1])

  # Add the merging cluster's nodes, demoted to normal nodes
  fake_id = 0
  for node in c_other.GetNodeList():
    node_info = c_other.GetNodeInfo(node)
    node_info.master_candidate = False
    c_mine.AddNode(node_info, str(fake_id))
    fake_id += 1

  # Add the merging cluster's instances, re-allocating the DRBD ports
  for instance in c_other.GetInstanceList():
    instance_info = c_other.GetInstanceInfo(instance)
    for dsk in instance_info.disks:
      if dsk.dev_type in constants.LDS_DRBD:
        port = c_mine.AllocatePort()
        logical_id = list(dsk.logical_id)
        logical_id[2] = port
        dsk.logical_id = tuple(logical_id)
        physical_id = list(dsk.physical_id)
        physical_id[1] = physical_id[3] = port
        dsk.physical_id = tuple(physical_id)
    c_mine.AddInstance(instance_info, str(fake_id))
    fake_id += 1

Feature changes
---------------

Ganeti Confd
~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

In Ganeti 2.0 all nodes are equal, but some are more equal than others.
In particular they are divided between "master", "master candidates" and
"normal". (Moreover they can be offline or drained, but this is not
important for the current discussion). In general the whole
configuration is only replicated to master candidates, and some partial
information is spread to all nodes via ssconf.

This change was done so that the most frequent Ganeti operations didn't
need to contact all nodes, and so clusters could become bigger. If we
want more information to be available on all nodes, we need to add more
ssconf values, which is counter-balancing the change, or to talk with
the master node, which is not designed to happen now, and requires its
availability.

Information such as the instance->primary_node mapping will be needed on
all nodes, and we also want to make sure services external to the
cluster can query this information as well. This information must be
available at all times, so we can't query it through RAPI, which would
be a single point of failure, as it's only available on the master.

Proposed changes
++++++++++++++++

In order to allow fast and highly available read-only access to some
configuration values, we'll create a new ganeti-confd daemon, which will
run on master candidates. This daemon will talk via UDP, and
authenticate messages using HMAC with a cluster-wide shared key. This
key will be generated at cluster init time, and stored on the cluster
alongside the ganeti SSL keys, and readable only by root.

An interested client can query a value by making a request to a subset
of the cluster master candidates. It will then wait to get a few
responses, and use the one with the highest configuration serial number.
Since the configuration serial number is increased each time the ganeti
config is updated, and the serial number is included in all answers,
this can be used to make sure to use the most recent answer, in case
some master candidates are stale or in the middle of a configuration
update.

In order to prevent replay attacks, queries will contain the current
unix timestamp according to the client, and the server will verify that
its timestamp is in the same 5-minute range (this requires synchronized
clocks, which is a good idea anyway). Queries will also contain a "salt"
which they expect the answers to be sent with, and clients are supposed
to accept only answers which contain salt generated by them.

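A sketch of the server-side checks described above (SHA-1 is assumed as
the HMAC digest, matching the length of the example signatures below)::

  import hashlib
  import hmac
  import time

  MAX_SKEW = 300  # the 5 minutes timestamp window, in seconds

  def VerifyQuery(msg, salt, signature, hmac_key):
    """Checks timestamp and hmac signature of a received confd query."""
    if abs(time.time() - int(salt)) > MAX_SKEW:
      return False  # replay protection: timestamp outside the window
    expected = hmac.new(hmac_key, salt + msg, hashlib.sha1).hexdigest()
    return expected == signature
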
The configuration daemon will be able to answer simple queries such as:

- master candidates list
- instance primary nodes

A confd query will look like this, on the wire::

  plj0{
    "msg": "{\"type\": 1,
             \"rsalt\": \"9aa6ce92-8336-11de-af38-001d093e835f\",
             \"protocol\": 1,
             \"query\": \"node1.example.com\"}\n",
    "salt": "1249637704",
    "hmac": "4a4139b2c3c5921f7e439469a0a45ad200aead0f"
  }

"plj0" is a fourcc that details the message content. It stands for plain
json 0, and can be changed as we move on to different types of protocols
(for example protocol buffers, or encrypted json). What follows is a
json encoded string, with the following fields:

- 'msg' contains a JSON-encoded query, its fields are:

  - 'protocol', integer, is the confd protocol version (initially just
    constants.CONFD_PROTOCOL_VERSION, with a value of 1)
  - 'type', integer, is the query type. For example "node role by name"
    or "node primary ip by instance ip". Constants will be provided for
    the actual available query types.
  - 'query', string, is the search key. For example an ip, or a node
    name.
  - 'rsalt', string, is the required response salt. The client must use
    it to recognize which answer it's getting.

- 'salt' must be the current unix timestamp, according to the client.
  Servers can refuse messages which have a wrong timing, according to
  their configuration and clock.
- 'hmac' is an hmac signature of salt+msg, with the cluster hmac key

If an answer comes back (which is optional, since confd works over UDP)
it will be in this format::

  plj0{
    "msg": "{\"status\": 0,
             \"answer\": ...,
             \"protocol\": 1}\n",
    "salt": "9aa6ce92-8336-11de-af38-001d093e835f",
    "hmac": "aaeccc0dff9328fdf7967cb600b6a80a6a9332af"
  }

Where:

- 'plj0' is the message type magic fourcc, as discussed above
- 'msg' contains a JSON-encoded answer, its fields are:

  - 'protocol', integer, is the confd protocol version (initially just
    constants.CONFD_PROTOCOL_VERSION, with a value of 1)
  - 'status', integer, is the error code. Initially just 0 for 'ok' or
    1 for 'error' (in which case the answer contains an error detail,
    rather than an answer), but in the future it may be expanded to have
    more meanings (e.g. 2, the answer is compressed)
  - 'answer' is the actual answer. Its type and meaning is query
    specific. For example for "node primary ip by instance ip" queries
    it will be a string containing an IP address, for "node role by
    name" queries it will be an integer which encodes the role (master,
    candidate, drained, offline) according to constants.

- 'salt' is the requested salt from the query. A client can use it to
  recognize what query the answer is answering.
- 'hmac' is an hmac signature of salt+msg, with the cluster hmac key

Redistribute Config
~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently LUClusterRedistConf triggers a copy of the updated
configuration file to all master candidates and of the ssconf files to
all nodes. There are other files which are maintained manually but which
are important to keep in sync. These are:

- rapi SSL key certificate file (rapi.pem) (on master candidates)
- rapi user/password file rapi_users (on master candidates)

Furthermore there are some files which are hypervisor specific but which
we may want to keep in sync:

- the xen-hvm hypervisor uses one shared file for all vnc passwords, and
  copies the file once, during node add. This design is subject to
  revision to be able to have different passwords for different groups
  of instances via the use of hypervisor parameters, and to allow
  xen-hvm and kvm to use an equal system to provide password-protected
  vnc sessions. In general, though, it would be useful if the vnc
  password files were copied as well, to avoid unwanted vnc password
  changes on instance failover/migrate.

Optionally the admin may want to also ship files such as the global
xend.conf file, and the network scripts to all nodes.

Proposed changes
++++++++++++++++

RedistributeConfig will be changed to copy also the rapi files, and to
call every enabled hypervisor asking for a list of additional files to
copy. Users will have the possibility to populate a file containing a
list of files to be distributed; this file will be propagated as well.
Such a solution is really simple to implement and it's easily usable by
scripts.

This code will also be shared (via tasklets or by other means, if
tasklets are not ready for 2.1) with the AddNode and SetNodeParams LUs
(so that the relevant files will be automatically shipped to new master
candidates as they are set).

VNC Console Password
~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently just the xen-hvm hypervisor supports setting a password to
connect to the instances' VNC console, and it has one common password
stored in a file.

This doesn't allow different passwords for different instances/groups of
instances, and makes it necessary to remember to copy the file around
the cluster when the password changes.

Proposed changes
++++++++++++++++

We'll change the VNC password file to a vnc_password_file hypervisor
parameter. This way it can have a cluster default, but also a different
value for each instance. The VNC enabled hypervisors (xen and kvm) will
publish all the password files in use through the cluster so that a
redistribute-config will ship them to all nodes (see the Redistribute
Config proposed changes above).

The current VNC_PASSWORD_FILE constant will be removed, but its value
will be used as the default HV_VNC_PASSWORD_FILE value, thus retaining
backwards compatibility with 2.0.

The code to export the list of VNC password files from the hypervisors
to RedistributeConfig will be shared between the KVM and xen-hvm
hypervisors.

Disk/network parameters
~~~~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently disks and network interfaces have a few tweakable options and
all the rest is left to a default we chose. We're finding that we
increasingly need to tweak some of these parameters, for example to
disable barriers for DRBD devices, or allow striping for the LVM
volumes.

Moreover for many of these parameters it will be nice to have
cluster-wide defaults, and then be able to change them per
instance.

Proposed changes
++++++++++++++++

We will add new cluster-level diskparams and netparams, which will
contain all the tweakable parameters. All values which have a sensible
cluster-wide default will go into this new structure, while parameters
which have unique values will not.

Example of network parameters:

- mode: bridge/route
- link: for mode "bridge" the bridge to connect to, for mode "route" it
  can contain the routing table, or the destination interface

Example of disk parameters:

- stripe: lvm stripes
- stripe_size: lvm stripe size
- meta_flushes: drbd, enable/disable metadata "barriers"
- data_flushes: drbd, enable/disable data "barriers"

Some parameters are bound to be disk-type specific (drbd vs. lvm vs.
files) or hypervisor specific (nic models for example), but for now they
will all live in the same structure. Each component is supposed to
validate only the parameters it knows about, and ganeti itself will make
sure that no "globally unknown" parameters are added, and that no
parameters have overridden meanings for different components.

The parameters will be kept, as for the BEPARAMS, into a "default"
category, which will allow us to expand on by creating instance
"classes" in the future. Instance classes are not a feature we plan to
implement in 2.1, though.

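As an illustration, the new structures could look like this; the key
names follow the examples above but are not final::

  diskparams = {
    "default": {
      "stripe": 1,            # lvm stripes
      "stripe_size": 64,      # lvm stripe size
      "meta_flushes": True,   # drbd metadata "barriers"
      "data_flushes": True,   # drbd data "barriers"
    },
  }

  netparams = {
    "default": {
      "mode": "bridge",       # bridge/route
      "link": "xen-br0",      # the bridge to connect to, in bridged mode
    },
  }
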
Global hypervisor parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently all hypervisor parameters are modifiable both globally
(cluster level) and at instance level. However, there is no other
framework to hold hypervisor-specific parameters, so if we want to add
a new class of hypervisor parameters that only makes sense on a global
level, we have to change the hvparams framework.

Proposed changes
++++++++++++++++

We add a new (global, not per-hypervisor) list of parameters which are
not changeable on a per-instance level. The create, modify and query
instance operations are changed to not allow/show these parameters.

Furthermore, to allow transition of parameters to the global list, and
to allow cleanup of inadvertently-customised parameters, the
``UpgradeConfig()`` method of instances will drop any such parameters
from their list of hvparams, such that a restart of the master daemon
is all that is needed for cleaning these up.

Also, the framework is simple enough that if we need to replicate it
at beparams level we can do so easily.

Non bridged instances support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

Currently each instance NIC must be connected to a bridge, and if the
bridge is not specified the default cluster one is used. This makes it
impossible to use the vif-route xen network scripts, or other
alternative mechanisms that don't need a bridge to work.

Proposed changes
++++++++++++++++

The new "mode" network parameter will distinguish between bridged
interfaces and routed ones.

When mode is "bridge" the "link" parameter will contain the bridge the
instance should be connected to, effectively making things as today. The
value has been migrated from a nic field to a parameter to allow for an
easier manipulation of the cluster default.

When mode is "route" the ip field of the interface will become
mandatory, to allow for a route to be set. In the future we may want
also to accept multiple IPs or IP/mask values for this purpose. We will
evaluate possible meanings of the link parameter to signify a routing
table to be used, which would allow for insulation between instance
groups (as today happens for different bridges).

For now we won't add a parameter to specify which network script gets
called for which instance, so in a mixed cluster the network script must
be able to handle both cases. The default kvm vif script will be changed
to do so. (Xen doesn't have a ganeti provided script, so nothing will be
done for that hypervisor.)

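A sketch of the branching such a script needs, with illustrative
commands (the exact script logic is not part of this design)::

  import subprocess

  def SetupTap(tap, nic_mode, nic_link, nic_ip):
    """Configures a tap device for either bridged or routed mode."""
    if nic_mode == "bridge":
      # "link" holds the bridge to connect the tap interface to
      subprocess.check_call(["brctl", "addif", nic_link, tap])
    elif nic_mode == "route":
      # "ip" is mandatory in routed mode; "link" may name a routing
      # table to provide insulation between instance groups
      subprocess.check_call(["ip", "route", "add", "%s/32" % nic_ip,
                             "dev", tap, "table", nic_link or "main"])
    else:
      raise ValueError("Unknown NIC mode %r" % nic_mode)
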
Introducing persistent UUIDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Current state and shortcomings
++++++++++++++++++++++++++++++

Some objects in the Ganeti configuration are tracked by their name
while also supporting renames. This creates an extra difficulty,
because neither Ganeti nor external management tools can then track
the actual entity, and due to the name change it behaves like a new
one.

Proposed changes part 1
+++++++++++++++++++++++

We will change Ganeti to use UUIDs for entity tracking, but in a
staggered way. In 2.1, we will simply add a “uuid” attribute to each
of the instances, nodes and the cluster itself. This will be reported
on instance creation for instances, and on node adds for nodes. It will
of course be available for querying via the OpNodeQuery/Instance and
cluster information, and via RAPI as well.

Note that Ganeti will not provide any way to change this attribute.

Upgrading from Ganeti 2.0 will automatically add a “uuid” attribute
to all entities missing it.

Proposed changes part 2
+++++++++++++++++++++++

In the next release (e.g. 2.2), the tracking of objects will change
from the name to the UUID internally, and externally Ganeti will
accept both forms of identification; e.g. a RAPI call would be made
either against ``/2/instances/foo.bar`` or against
``/2/instances/bb3b2e42…``. Since an FQDN must have at least a dot,
and dots are not valid characters in UUIDs, we will not have namespace
issues.

Another change here is that node identification (during cluster
operations/queries like master startup, “am I the master?” and
similar) could be done via UUIDs which is more stable than the current
hostname-based scheme.

Internal tracking refers to the way the configuration is stored; a
DRBD disk of an instance refers to the node name (so that IPs can be
changed easily), but this is still a problem for name changes; thus
these will be changed to point to the node UUID to ease renames.

The advantage of this change (after the second round of changes) is
that node rename becomes trivial, whereas today node rename would
require a complete lock of all instances.

Automated disk repairs infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Replacing defective disks in an automated fashion is quite difficult
with the current version of Ganeti. These changes will introduce
additional functionality and interfaces to simplify automating disk
replacements on a Ganeti node.

Fix node volume group
+++++++++++++++++++++

This is the most difficult addition, as it can lead to data loss if it's
not properly safeguarded.

The operation must be done only when all the other nodes that have
instances in common with the target node are fine, i.e. this is the only
node with problems, and also we have to double-check that all instances
on this node have at least a good copy of the data.

This might mean that we have to enhance the GetMirrorStatus calls, and
introduce a smarter version that can tell us more about the status of
the instance.

Stop allocation on a given PV
+++++++++++++++++++++++++++++

This is somewhat simple. First we need a "list PVs" opcode (and its
associated logical unit) and then a set PV status opcode/LU. These in
combination should allow both checking and changing the disk/PV status.

Instance disk status
++++++++++++++++++++

This new opcode or opcode change must list the instance-disk-index and
node combinations of the instance together with their status. This will
allow determining what part of the instance is broken (if any).

Repair instance
+++++++++++++++

This new opcode/LU/RAPI call will run ``replace-disks -p`` as needed, in
order to fix the instance status. It only affects primary instances;
secondaries can just be moved away.

Migrate node
++++++++++++

This new opcode/LU/RAPI call will take over the current ``gnt-node
migrate`` code and run migrate for all instances on the node.

Evacuate node
+++++++++++++

This new opcode/LU/RAPI call will take over the current ``gnt-node
evacuate`` code and run replace-secondary with an iallocator script for
all instances on the node.

User-id pool
~~~~~~~~~~~~

In order to allow running different processes under unique user-ids
on a node, we introduce the user-id pool concept.

The user-id pool is a cluster-wide configuration parameter.
It is a list of user-ids and/or user-id ranges that are reserved
for running Ganeti processes (including KVM instances).
The code guarantees that on a given node a given user-id is only
handed out if there is no other process running with that user-id.

Please note that this can only be guaranteed if all processes in
the system - that run under a user-id belonging to the pool - are
started by reserving a user-id first. That can be accomplished
either by using the RequestUnusedUid() function to get an unused
user-id or by implementing the same locking mechanism.

The functions that are specific to the user-id pool feature are located
in a separate module: ``lib/uidpool.py``.

The user-id pool is a single cluster parameter. It is stored in the
*Cluster* object under the ``uid_pool`` name as a list of integer
tuples. These tuples represent the boundaries of user-id ranges.
For single user-ids, the boundaries are equal.

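For example, a pool containing the user-id range 1000-1009 plus the
single user-id 1100 would be stored as::

  # list of (lower, upper) boundary tuples; for single user-ids the
  # boundaries are equal
  uid_pool = [(1000, 1009), (1100, 1100)]
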
The internal user-id pool representation is converted into a
string: a newline separated list of user-ids or user-id ranges.
This string representation is distributed to all the nodes via the
*ssconf* mechanism. This means that the user-id pool can be
accessed in a read-only way on any node without consulting the master
node or master candidate nodes.

The value of the user-id pool cluster parameter can be initialized
at cluster initialization time using the

``gnt-cluster init --uid-pool <uid-pool definition> ...``

command.

As there is no sensible default value for the user-id pool parameter,
it is initialized to an empty list if no ``--uid-pool`` option is
supplied at cluster init time.

If the user-id pool is empty, the user-id pool feature is considered
to be disabled.

The user-id pool cluster parameter can be modified from the
command-line with the following commands:

- ``gnt-cluster modify --uid-pool <uid-pool definition>``
- ``gnt-cluster modify --add-uids <uid-pool definition>``
- ``gnt-cluster modify --remove-uids <uid-pool definition>``

The ``--uid-pool`` option overwrites the current setting with the
supplied ``<uid-pool definition>``, while
``--add-uids``/``--remove-uids`` adds/removes the listed uids
or uid-ranges from the pool.

The ``<uid-pool definition>`` should be a comma-separated list of
user-ids or user-id ranges. A range should be defined by a lower and
a higher boundary. The boundaries should be separated with a dash.
The boundaries are inclusive.

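For example, the following command (with made-up values) would reserve
the range 1000-1009 plus the single user-id 1100::

  gnt-cluster modify --uid-pool 1000-1009,1100
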
The ``<uid-pool definition>`` is parsed into the internal
representation, sanity-checked and stored in the ``uid_pool``
attribute of the *Cluster* object.

It is also immediately converted into a string (formatted in the
input format) and distributed to all nodes via the *ssconf* mechanism.

The current value of the user-id pool cluster parameter is printed
by the ``gnt-cluster info`` command.

The output format is accepted by the ``gnt-cluster modify --uid-pool``
command.

The ``uidpool.py`` module provides a function (``RequestUnusedUid``)
for requesting an unused user-id from the pool.

This will try to find a random user-id that is not currently in use.
The algorithm is the following:

1) Randomize the list of user-ids in the user-id pool
2) Iterate over this randomized UID list
3) Create a lock file (it doesn't matter if it already exists)
4) Acquire an exclusive POSIX lock on the file, to provide mutual
   exclusion for the following non-atomic operations
5) Check if there is a process in the system with the given UID
6) If there isn't, return the UID, otherwise unlock the file and
   continue the iteration over the user-ids

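A sketch of steps 3)-6) for a single candidate user-id; the lock
directory layout is an assumption made for the example::

  import fcntl
  import os

  def _UidInUse(uid):
    """Checks whether any process currently runs under the given uid."""
    return any(os.stat("/proc/" + name).st_uid == uid
               for name in os.listdir("/proc") if name.isdigit())

  def _TryClaimUid(uid, lockdir):
    """Returns a locked fd for the uid's lock file, or None if in use."""
    fd = os.open(os.path.join(lockdir, str(uid)),
                 os.O_CREAT | os.O_RDWR, 0600)
    try:
      # The exclusive POSIX lock provides mutual exclusion for the
      # following non-atomic check
      fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError:
      os.close(fd)
      return None
    if _UidInUse(uid):
      os.close(fd)  # this releases the lock, too
      return None
    return fd  # keep the lock until the new process has been started
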
The user can then start a new process with this user-id.
Once a process is successfully started, the exclusive POSIX lock can
be released, but the lock file will remain in the filesystem.
The presence of such a lock file means that the given user-id is most
probably in use. The lack of a uid lock file does not guarantee that
there are no processes with that user-id.

After acquiring the exclusive POSIX lock, ``RequestUnusedUid``
always performs a check to see if there is a process running with the
given user-id.

A user-id can be returned to the pool by calling the
``ReleaseUid`` function. This will remove the corresponding lock file.
Note that it doesn't check if there is any process still running
with that user-id. The removal of the lock file only means that there
are most probably no processes with the given user-id. This helps
in speeding up the process of finding a user-id that is guaranteed to
be unused.

There is a convenience function, called ``ExecWithUnusedUid``, that
wraps the execution of a function (or any callable) that requires a
unique user-id. ``ExecWithUnusedUid`` takes care of requesting an
unused user-id and unlocking the lock file. It also automatically
returns the user-id to the pool if the callable raises an exception.

Requesting a user-id from the pool:

::

  from ganeti import ssconf
  from ganeti import uidpool

  # Get list of all user-ids in the uid-pool from ssconf
  ss = ssconf.SimpleStore()
  uid_pool = uidpool.ParseUidPool(ss.GetUidPool(), separator="\n")
  all_uids = set(uidpool.ExpandUidPool(uid_pool))

  uid = uidpool.RequestUnusedUid(all_uids)
  try:
    <start a process with the UID>
    # Once the process is started, we can release the file lock
    uid.Unlock()
  except ..., err:
    # Return the UID to the pool
    uidpool.ReleaseUid(uid)

Releasing a user-id:

::

  from ganeti import uidpool

  uid = <get the UID the process is running under>
  uidpool.ReleaseUid(uid)

External interface changes
--------------------------

OS API
~~~~~~

The OS API of Ganeti 2.0 has been built with extensibility in mind.
Since we pass everything as environment variables it's a lot easier to
send new information to the OSes without breaking backwards
compatibility. This section of the design outlines the proposed
extensions to the API and their implementation.

API Version Compatibility Handling
++++++++++++++++++++++++++++++++++

In 2.1 there will be a new OS API version (e.g. 15), which should be
mostly compatible with API 10, except for some newly added variables.
Since it's easy not to pass some variables we'll be able to handle
Ganeti 2.0 OSes by just filtering out the newly added pieces of
information. We will still encourage OSes to declare support for the new
API after checking that the new variables don't provide any conflict for
them, and we will drop API 10 support after Ganeti 2.1 has been
released.

New Environment variables
+++++++++++++++++++++++++

Some variables have never been added to the OS API but would definitely
be useful for the OSes. We plan to add an INSTANCE_HYPERVISOR variable
to allow the OS to make changes relevant to the virtualization the
instance is going to use. Since this field is immutable for each
instance, the OS can tailor the install to it without having to make
sure the instance can run under any virtualization technology.

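For instance, an OS install script could branch on the new variable
like this (a sketch; only INSTANCE_HYPERVISOR is proposed by this
design, the branch contents are made up)::

  import os

  # INSTANCE_HYPERVISOR is the variable proposed above; the values
  # shown ("kvm", "xen-pvm") match Ganeti's hypervisor names
  hypervisor = os.environ["INSTANCE_HYPERVISOR"]
  if hypervisor == "kvm":
    pass  # e.g. install a serial console getty for KVM
  elif hypervisor == "xen-pvm":
    pass  # e.g. skip the bootloader, Xen PVM boots the kernel directly
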
We also want the OS to know the particular hypervisor parameters, to be
able to customize the install even more. Since the parameters can
change, though, we will pass them only as an "FYI": if an OS ties some
instance functionality to the value of a particular hypervisor
parameter, manual changes or a reinstall may be needed to adapt the
instance to the new environment. This is not a regression as of today,
because even if the OSes are left blind about this information,
sometimes they still need to make compromises and cannot satisfy all
possible parameter values.

OS Variants
+++++++++++

Currently we are witnessing some degree of "OS proliferation" just to
change a simple installation behavior. This means that the same OS gets
installed on the cluster multiple times, with different names, to
customize just one installation behavior. Usually such OSes try to share
as much as possible through symlinks, but this still causes
complications on the user side, especially when multiple parameters must
be cross-matched.

For example today if you want to install debian etch, lenny or squeeze
you probably need to install the debootstrap OS multiple times, changing
its configuration file, and calling it debootstrap-etch,
debootstrap-lenny or debootstrap-squeeze. Furthermore if you have for
example a "server" and a "development" environment which installs
different packages/configuration files and must be available for all
installs you'll probably end up with debootstrap-etch-server,
debootstrap-etch-dev, debootstrap-lenny-server, debootstrap-lenny-dev,
etc. Crossing more than two parameters quickly becomes unmanageable.

In order to avoid this we plan to make OSes more customizable, by
allowing each OS to declare a list of variants which can be used to
customize it. The variants list is mandatory and must be written, one
variant per line, in the new "variants.list" file inside the main os
dir. At least one variant must be supported. When choosing the OS
exactly one variant will have to be specified, and will be encoded in
the os name as <OS-name>+<variant>. As for today, it will be possible
to change an instance's OS at creation or install time.

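For example, a single ``debootstrap`` OS could declare a
``variants.list`` such as::

  etch
  lenny
  squeeze

and would then show up in the OS list as ``debootstrap+etch``,
``debootstrap+lenny`` and ``debootstrap+squeeze``, all backed by the
same set of scripts.
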
The 2.1 OS list will be the combination of each OS, plus its supported
variants. This will cause the name proliferation to remain, but at
least the internal OS code will be simplified to just parsing the passed
variant, without the need for symlinks or code duplication.

Also we expect the OSes to declare only "interesting" variants, but to
accept some non-declared ones which a user will be able to pass in by
overriding the checks ganeti does. This will be useful for allowing some
variations to be used without polluting the OS list (per-OS
documentation should list all supported variants). If a variant which is
not internally supported is forced through, the OS scripts should abort.

In the future (post 2.1) we may want to move to full fledged parameters
all orthogonal to each other (for example "architecture" (i386, amd64),
"suite" (lenny, squeeze, ...), etc). (As opposed to the variant, which
is a single parameter, and you need a different variant for all the set
of combinations you want to support). In this case we envision the
variants to be moved inside of Ganeti and be associated with lists of
parameter->value associations, which will then be passed to the OS.

IAllocator changes
~~~~~~~~~~~~~~~~~~

Current State and shortcomings
++++++++++++++++++++++++++++++

The iallocator interface allows creation of instances without manually
specifying nodes, but instead by specifying plugins which will do the
required computations and produce a valid node list.

However, the interface is quite awkward to use:

- one cannot set a 'default' iallocator script
- one cannot use it to easily test if allocation would succeed
- some new functionality, such as rebalancing clusters and calculating
  capacity estimates, is needed

Proposed changes
++++++++++++++++

There are two proposed areas of improvement:

- improving the use of the current interface
- extending the IAllocator API to cover more automation

Default iallocator names
^^^^^^^^^^^^^^^^^^^^^^^^

The cluster will hold, for each type of iallocator, a (possibly empty)
list of modules that will be used automatically.

If the list is empty, the behaviour will remain the same.

If the list has one entry, then ganeti will behave as if
'--iallocator' was specified on the command line, i.e. use this
allocator by default. If the user however passed nodes, those will be
used.

If the list has multiple entries, they will be tried in order until
one gives a successful answer.

Dry-run mode
^^^^^^^^^^^^

The create instance LU will get a new 'dry-run' option that will just
simulate the placement, and return the chosen node-lists after running
all the usual checks.

Cluster balancing
^^^^^^^^^^^^^^^^^

Instance adds/removals/moves can create a situation where load on the
nodes is not spread equally. For this, a new iallocator mode will be
implemented called ``balance`` in which the plugin, given the current
cluster state, and a maximum number of operations, will need to
compute the instance relocations needed in order to achieve a "better"
(for whatever the script believes is better) cluster.

Cluster capacity calculation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In this mode, called ``capacity``, given an instance specification and
the current cluster state (similar to the ``allocate`` mode), the
plugin needs to return:

- how many instances can be allocated on the cluster with that
  specification
- on which nodes these will be allocated (in order)

.. vim: set textwidth=72 :