This document describes the major changes in Ganeti 2.1 compared to
the 2.0 version.
8 The 2.1 version will be a relatively small release. Its main aim is to
9 avoid changing too much of the core code, while addressing issues and
10 adding new features and improvements over 2.0, in a timely fashion.
12 .. contents:: :depth: 4
Ganeti 2.1 will add features to help further automation of cluster
operations, further improve scalability to even bigger clusters, and
19 make it easier to debug the Ganeti core.
As with 2.0, we divide the 2.1 design into three areas:
32 - core changes, which affect the master daemon/job queue/locking or
33 all/most logical units
34 - logical unit/feature changes
- external interface changes (e.g. command line, OS API, hooks, ...)
40 Storage units modelling
41 ~~~~~~~~~~~~~~~~~~~~~~~
43 Currently, Ganeti has a good model of the block devices for instances
44 (e.g. LVM logical volumes, files, DRBD devices, etc.) but none of the
45 storage pools that are providing the space for these front-end
46 devices. For example, there are hardcoded inter-node RPC calls for
47 volume group listing, file storage creation/deletion, etc.
49 The storage units framework will implement a generic handling for all
50 kinds of storage backends:
- LVM physical volumes
- LVM volume groups
- File-based storage directories
55 - any other future storage method
There will be a generic list of methods that each storage unit type
will provide:
60 - list of storage units of this type
61 - check status of the storage unit
Additionally, there will be specific methods for each storage unit
type, for example:
66 - enable/disable allocations on a specific PV
67 - file storage directory creation/deletion
68 - VG consistency fixing
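A purely illustrative sketch of how such a framework could be laid out
follows; the class and method names are hypothetical and do not
represent the final API::

  class StorageUnitType:
    """Generic interface every storage unit type will provide."""

    def List(self):
      """Return the storage units of this type visible on this node."""
      raise NotImplementedError

    def GetStatus(self, name):
      """Return the status of the given storage unit."""
      raise NotImplementedError


  class LvmPvStorage(StorageUnitType):
    """LVM physical volumes, with PV-specific extra methods."""

    def SetAllocatable(self, name, allocatable):
      """Enable or disable allocations on a specific PV."""


  class FileStorage(StorageUnitType):
    """File-based storage directories."""

    def CreateDirectory(self, path):
      """Create a file storage directory."""

    def RemoveDirectory(self, path):
      """Remove an (empty) file storage directory."""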
70 This will allow a much better modeling and unification of the various
RPC calls related to backend storage pools in the future. Ganeti 2.1 is
intended to add the basics of the framework, and not necessarily move
all the current VG/file-based operations to it.
75 Note that while we model both LVM PVs and LVM VGs, the framework will
76 **not** model any relationship between the different types. In other
words, we model neither inheritance nor stacking, since this is
too complex for our needs. While a ``vgreduce`` operation on an LVM VG
could actually remove a PV from it, this will not be handled at the
framework level, but at the individual operation level. The goal is a
lightweight framework for abstracting the different storage
operations, not one for modelling the storage hierarchy.
88 Current State and shortcomings
89 ++++++++++++++++++++++++++++++
91 The class ``LockSet`` (see ``lib/locking.py``) is a container for one or
92 many ``SharedLock`` instances. It provides an interface to add/remove
locks and to acquire and subsequently release any number of those
locks.
96 Locks in a ``LockSet`` are always acquired in alphabetic order. Due to
97 the way we're using locks for nodes and instances (the single cluster
98 lock isn't affected by this issue) this can lead to long delays when
99 acquiring locks if another operation tries to acquire multiple locks but
100 has to wait for yet another operation.
In the following demonstration, assume we have the instance locks
``inst1``, ``inst2``, ``inst3`` and ``inst4``.
105 #. Operation A grabs lock for instance ``inst4``.
106 #. Operation B wants to acquire all instance locks in alphabetic order,
107 but it has to wait for ``inst4``.
108 #. Operation C tries to lock ``inst1``, but it has to wait until
Operation B (which is trying to acquire all locks) releases the lock
on ``inst1``.
#. Operation A finishes and releases the lock on ``inst4``. Operation B
can
112 continue and eventually releases all locks.
#. Operation C can get the ``inst1`` lock and finish.
115 Technically there's no need for Operation C to wait for Operation A, and
116 subsequently Operation B, to finish. Operation B can't continue until
117 Operation A is done (it has to wait for ``inst4``), anyway.
122 Non-blocking lock acquiring
123 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
Acquiring locks for OpCode execution is always done in blocking mode.
The acquire functions won't return until the lock has been successfully
acquired (or an error occurred, although we won't cover that case here).
129 ``SharedLock`` and ``LockSet`` must be able to be acquired in a
130 non-blocking way. They must support a timeout and abort trying to
131 acquire the lock(s) after the specified amount of time.
133 Retry acquiring locks
134 ^^^^^^^^^^^^^^^^^^^^^
136 To prevent other operations from waiting for a long time, such as
137 described in the demonstration before, ``LockSet`` must not keep locks
138 for a prolonged period of time when trying to acquire two or more locks.
139 Instead it should, with an increasing timeout for acquiring all locks,
release all locks again and sleep some time if it fails to acquire all
requested locks.
A good timeout value needs to be determined. In any case, ``LockSet``
should proceed to acquire locks in blocking mode after a few
145 (unsuccessful) attempts to acquire all requested locks.
147 One proposal for the timeout is to use ``2**tries`` seconds, where
148 ``tries`` is the number of unsuccessful tries.
150 In the demonstration before this would allow Operation C to continue
151 after Operation B unsuccessfully tried to acquire all locks and released
152 all acquired locks (``inst1``, ``inst2`` and ``inst3``) again.
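As a minimal illustration of the proposed behaviour (purely a sketch;
``_try_acquire_all`` is a hypothetical helper that attempts to get all
requested locks within the timeout and releases any partially acquired
locks on failure)::

  import time

  def AcquireWithBackoff(lockset, names, max_tries=5):
    """Acquire all locks in 'names', backing off between attempts."""
    for tries in range(max_tries):
      # Proposed increasing timeout: 2**tries seconds per attempt.
      if lockset._try_acquire_all(names, timeout=2 ** tries):
        return
      # All partially acquired locks were released; give the other
      # operations a chance to make progress before retrying.
      time.sleep(0.1)
    # After a few unsuccessful attempts, proceed in blocking mode.
    lockset._try_acquire_all(names, timeout=None)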
154 Other solutions discussed
155 +++++++++++++++++++++++++
There was also some discussion about going one step further and
extending the job queue (see ``lib/jqueue.py``) to select the next task
for a worker
159 depending on whether it can acquire the necessary locks. While this may
160 reduce the number of necessary worker threads and/or increase throughput
161 on large clusters with many jobs, it also brings many potential
162 problems, such as contention and increased memory usage, with it. As
163 this would be an extension of the changes proposed before it could be
164 implemented at a later point in time, but we decided to stay with the
165 simpler solution for now.
167 Implementation details
168 ++++++++++++++++++++++
170 ``SharedLock`` redesign
171 ^^^^^^^^^^^^^^^^^^^^^^^
The current design of ``SharedLock`` is not well suited to supporting
timeouts when acquiring a lock, and it also has minor fairness issues. We
175 plan to address both with a redesign. A proof of concept implementation
176 was written and resulted in significantly simpler code.
178 Currently ``SharedLock`` uses two separate queues for shared and
179 exclusive acquires and waiters get to run in turns. This means if an
180 exclusive acquire is released, the lock will allow shared waiters to run
181 and vice versa. Although it's still fair in the end there is a slight
bias towards shared waiters in the current implementation. The same
implementation with two separate queues cannot support timeouts without
adding a lot of complexity.
Our proposed redesign changes ``SharedLock`` to have only a single
queue. There will be one condition (see Condition_ for a note about
188 performance) in the queue per exclusive acquire and two for all shared
189 acquires (see below for an explanation). The maximum queue length will
190 always be ``2 + (number of exclusive acquires waiting)``. The number of
191 queue entries for shared acquires can vary from 0 to 2.
193 The two conditions for shared acquires are a bit special. They will be
194 used in turn. When the lock is instantiated, no conditions are in the
195 queue. As soon as the first shared acquire arrives (and there are
196 holder(s) or waiting acquires; see Acquire_), the active condition is
197 added to the queue. Until it becomes the topmost condition in the queue
198 and has been notified, any shared acquire is added to this active
199 condition. When the active condition is notified, the conditions are
200 swapped and further shared acquires are added to the previously inactive
201 condition (which has now become the active condition). After all waiters
202 on the previously active (now inactive) and now notified condition
received the notification, it is removed from the queue of pending
acquires.
206 This means shared acquires will skip any exclusive acquire in the queue.
207 We believe it's better to improve parallelization on operations only
208 asking for shared (or read-only) locks. Exclusive operations holding the
same lock cannot be parallelized.
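The following rough sketch only illustrates the data layout described
above; it is not the actual implementation and the attribute names are
made up::

  import collections
  import threading

  class SharedLockSketch:
    def __init__(self):
      self.__lock = threading.Lock()
      # Single FIFO queue of conditions: each exclusive acquire gets its
      # own condition, all shared acquires share the currently "active"
      # one of the two pre-created shared conditions.
      self.__queue = collections.deque()
      self.__shared_conds = [threading.Condition(self.__lock),
                             threading.Condition(self.__lock)]
      self.__active_shared = 0
      self.__shr_owners = set()
      self.__exc_owner = None

    def __swap_shared_conditions(self):
      # Called once the active shared condition has been notified, so
      # that later shared acquires queue up behind waiting exclusive
      # acquires instead of skipping ahead forever.
      self.__active_shared = 1 - self.__active_shared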
Acquire
^^^^^^^

For exclusive acquires a new condition is created and appended to the
216 queue. Shared acquires are added to the active condition for shared
217 acquires and if the condition is not yet on the queue, it's appended.
219 The next step is to wait for our condition to be on the top of the queue
(to guarantee fairness). If the timeout expires, we return to the caller
221 without acquiring the lock. On every notification we check whether the
222 lock has been deleted, in which case an error is returned to the caller.
224 The lock can be acquired if we're on top of the queue (there is no one
225 else ahead of us). For an exclusive acquire, there must not be other
226 exclusive or shared holders. For a shared acquire, there must not be an
227 exclusive holder. If these conditions are all true, the lock is
acquired and we return to the caller. In any other case we wait again on
the condition.
If it was the last waiter on a condition, the condition is removed from
the queue.
Optimization: There's no need to touch the queue if there are no pending
acquires and no current holders. The caller can have the lock
immediately.
238 .. image:: design-2.1-lock-acquire.png
Release
^^^^^^^

First the lock removes the caller from the internal owner list. If there
are pending acquires in the queue, the first (the oldest) condition is
notified.
248 If the first condition was the active condition for shared acquires, the
249 inactive condition will be made active. This ensures fairness with
exclusive locks by forcing consecutive shared acquires to wait in the
queue.
253 .. image:: design-2.1-lock-release.png
Delete
^^^^^^

The caller must either hold the lock in exclusive mode already or the
260 lock must be acquired in exclusive mode. Trying to delete a lock while
261 it's held in shared mode must fail.
263 After ensuring the lock is held in exclusive mode, the lock will mark
264 itself as deleted and continue to notify all pending acquires. They will
265 wake up, notice the deleted lock and return an error to the caller.
Condition
^^^^^^^^^

Note: This is not necessary for the locking changes above, but it may be
272 a good optimization (pending performance tests).
274 The existing locking code in Ganeti 2.0 uses Python's built-in
275 ``threading.Condition`` class. Unfortunately ``Condition`` implements
276 timeouts by sleeping 1ms to 20ms between tries to acquire the condition
277 lock in non-blocking mode. This requires unnecessary context switches
278 and contention on the CPython GIL (Global Interpreter Lock).
280 By using POSIX pipes (see ``pipe(2)``) we can use the operating system's
281 support for timeouts on file descriptors (see ``select(2)``). A custom
282 condition class will have to be written for this.
284 On instantiation the class creates a pipe. After each notification the
285 previous pipe is abandoned and re-created (technically the old pipe
286 needs to stay around until all notifications have been delivered).
288 All waiting clients of the condition use ``select(2)`` or ``poll(2)`` to
289 wait for notifications, optionally with a timeout. A notification will
290 be signalled to the waiting clients by closing the pipe. If the pipe
wasn't closed during the timeout, the waiting function returns to its
caller.
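A simplified sketch of such a condition class follows; error handling
and the bookkeeping needed to keep an old pipe alive until all waiters
have seen its notification are omitted::

  import os
  import select

  class PipeConditionSketch:
    """Condition-like class using a pipe and select(2) for timeouts."""

    def __init__(self, lock):
      self._lock = lock
      (self._read_fd, self._write_fd) = os.pipe()

    def wait(self, timeout=None):
      # The caller holds self._lock; release it while waiting, just as
      # threading.Condition does.
      read_fd = self._read_fd
      self._lock.release()
      try:
        # select(2) returns as soon as the write side is closed, or
        # when the timeout (in seconds, None for "forever") expires.
        select.select([read_fd], [], [], timeout)
      finally:
        self._lock.acquire()

    def notifyAll(self):
      # Closing the write side wakes up every waiter blocked in
      # select(); a fresh pipe serves clients arriving afterwards.
      # (Reclaiming the old read end is left out of this sketch.)
      os.close(self._write_fd)
      (self._read_fd, self._write_fd) = os.pipe()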
301 Current State and shortcomings
302 ++++++++++++++++++++++++++++++
304 In Ganeti 2.0 all nodes are equal, but some are more equal than others.
305 In particular they are divided between "master", "master candidates" and
306 "normal". (Moreover they can be offline or drained, but this is not
307 important for the current discussion). In general the whole
308 configuration is only replicated to master candidates, and some partial
309 information is spread to all nodes via ssconf.
311 This change was done so that the most frequent Ganeti operations didn't
312 need to contact all nodes, and so clusters could become bigger. If we
313 want more information to be available on all nodes, we need to add more
ssconf values, which counter-balances the change, or to talk with
the master node, which is not designed to happen now, and requires its
availability.
318 Information such as the instance->primary_node mapping will be needed on
319 all nodes, and we also want to make sure services external to the
320 cluster can query this information as well. This information must be
321 available at all times, so we can't query it through RAPI, which would
322 be a single point of failure, as it's only available on the master.
In order to allow fast and highly available read-only access to some
329 configuration values, we'll create a new ganeti-confd daemon, which will
330 run on master candidates. This daemon will talk via UDP, and
331 authenticate messages using HMAC with a cluster-wide shared key. This
key will be generated at cluster init time, and stored on the cluster
alongside the ganeti SSL keys, and readable only by root.
335 An interested client can query a value by making a request to a subset
336 of the cluster master candidates. It will then wait to get a few
337 responses, and use the one with the highest configuration serial number.
338 Since the configuration serial number is increased each time the ganeti
339 config is updated, and the serial number is included in all answers,
340 this can be used to make sure to use the most recent answer, in case
some master candidates are stale or in the middle of a configuration
update.
In order to prevent replay attacks, queries will contain the current
unix timestamp according to the client, and the server will verify that
its own timestamp is within five minutes of it (this requires synchronized
347 clocks, which is a good idea anyway). Queries will also contain a "salt"
348 which they expect the answers to be sent with, and clients are supposed
to accept only answers which contain the salt generated by them.
351 The configuration daemon will be able to answer simple queries such as:
353 - master candidates list
357 - instance primary nodes
362 A confd query will look like this, on the wire::
  plj0{
    "msg": "{\"type\": 1,
             \"rsalt\": \"9aa6ce92-8336-11de-af38-001d093e835f\",
             \"protocol\": 1,
             \"query\": \"node1.example.com\"}\n",
    "salt": "1249637704",
    "hmac": "4a4139b2c3c5921f7e439469a0a45ad200aead0f"
  }
373 "plj0" is a fourcc that details the message content. It stands for plain
json 0, and can be changed as we move on to different types of
protocols (for example protocol buffers, or encrypted json). What
follows is a JSON-encoded string, with the following fields:
378 - 'msg' contains a JSON-encoded query, its fields are:
380 - 'protocol', integer, is the confd protocol version (initially just
381 constants.CONFD_PROTOCOL_VERSION, with a value of 1)
382 - 'type', integer, is the query type. For example "node role by name"
383 or "node primary ip by instance ip". Constants will be provided for
384 the actual available query types.
- 'query', string, is the search key. For example an ip, or a node
  name.
387 - 'rsalt', string, is the required response salt. The client must use
388 it to recognize which answer it's getting.
390 - 'salt' must be the current unix timestamp, according to the client.
391 Servers can refuse messages which have a wrong timing, according to
392 their configuration and clock.
393 - 'hmac' is an hmac signature of salt+msg, with the cluster hmac key
395 If an answer comes back (which is optional, since confd works over UDP)
396 it will be in this format::
  plj0{
    "msg": "{\"status\": 0,
             \"answer\": ...,
             \"protocol\": 1}\n",
    "salt": "9aa6ce92-8336-11de-af38-001d093e835f",
    "hmac": "aaeccc0dff9328fdf7967cb600b6a80a6a9332af"
  }
- 'plj0' is the message type magic fourcc, as discussed above
410 - 'msg' contains a JSON-encoded answer, its fields are:
412 - 'protocol', integer, is the confd protocol version (initially just
413 constants.CONFD_PROTOCOL_VERSION, with a value of 1)
414 - 'status', integer, is the error code. Initially just 0 for 'ok' or
415 '1' for 'error' (in which case answer contains an error detail,
416 rather than an answer), but in the future it may be expanded to have
417 more meanings (eg: 2, the answer is compressed)
418 - 'answer', is the actual answer. Its type and meaning is query
419 specific. For example for "node primary ip by instance ip" queries
420 it will be a string containing an IP address, for "node role by
421 name" queries it will be an integer which encodes the role (master,
422 candidate, drained, offline) according to constants.
424 - 'salt' is the requested salt from the query. A client can use it to
425 recognize what query the answer is answering.
426 - 'hmac' is an hmac signature of salt+msg, with the cluster hmac key
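To make the flow concrete, here is a rough sketch of how a client
could build and send such a query; the constants, the port number and
the choice of SHA-1 as HMAC digest are illustrative assumptions, not
part of this design::

  import hashlib
  import hmac
  import json    # simplejson on older Python versions
  import socket
  import time
  import uuid

  def BuildConfdQuery(hmac_key, query_type, query):
    """Serialize and sign a confd query as described above."""
    rsalt = str(uuid.uuid4())
    msg = json.dumps({"protocol": 1,       # CONFD_PROTOCOL_VERSION
                      "type": query_type,  # e.g. "node role by name"
                      "query": query,
                      "rsalt": rsalt})
    salt = str(int(time.time()))
    signature = hmac.new(hmac_key, salt + msg, hashlib.sha1).hexdigest()
    payload = "plj0" + json.dumps({"msg": msg,
                                   "salt": salt,
                                   "hmac": signature})
    return payload, rsalt

  # Send the query via UDP to a subset of the master candidates and
  # keep the best (highest serial number) of the replies.
  payload, rsalt = BuildConfdQuery("shared-key", 1, "node1.example.com")
  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  for candidate in ["cand1.example.com", "cand2.example.com"]:
    sock.sendto(payload, (candidate, 1814))  # port is illustrative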
432 Current State and shortcomings
433 ++++++++++++++++++++++++++++++
435 Currently LURedistributeConfig triggers a copy of the updated
436 configuration file to all master candidates and of the ssconf files to
437 all nodes. There are other files which are maintained manually but which
438 are important to keep in sync. These are:
440 - rapi SSL key certificate file (rapi.pem) (on master candidates)
441 - rapi user/password file rapi_users (on master candidates)
443 Furthermore there are some files which are hypervisor specific but we
444 may want to keep in sync:
446 - the xen-hvm hypervisor uses one shared file for all vnc passwords, and
447 copies the file once, during node add. This design is subject to
448 revision to be able to have different passwords for different groups
449 of instances via the use of hypervisor parameters, and to allow
xen-hvm and kvm to use the same system to provide password-protected
vnc sessions. In general, though, it would be useful if the vnc
452 password files were copied as well, to avoid unwanted vnc password
453 changes on instance failover/migrate.
455 Optionally the admin may want to also ship files such as the global
456 xend.conf file, and the network scripts to all nodes.
RedistributeConfig will be changed to also copy the rapi files, and to
call every enabled hypervisor asking for a list of additional files to
copy. Users will be able to populate a file containing a list of files
to be distributed; this file will be propagated as well. Such a solution
is simple to implement and easily usable by scripts.
This code will also be shared (via tasklets or by other means, if
469 tasklets are not ready for 2.1) with the AddNode and SetNodeParams LUs
470 (so that the relevant files will be automatically shipped to new master
471 candidates as they are set).
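As an illustration, the per-hypervisor part could take a shape similar
to the sketch below; the ``GetAncillaryFiles`` name is hypothetical and
the real interface will be defined during implementation::

  from ganeti import constants

  class XenHvmHypervisorSketch:
    """Illustrative only; not the real hypervisor abstraction."""

    @classmethod
    def GetAncillaryFiles(cls):
      # Extra files RedistributeConfig should ship to all nodes, e.g.
      # the shared VNC password file used by xen-hvm today.
      return [constants.VNC_PASSWORD_FILE]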
476 Current State and shortcomings
477 ++++++++++++++++++++++++++++++
Currently just the xen-hvm hypervisor supports setting a password to
connect to the instances' VNC console, and has one common password
stored in a file.
483 This doesn't allow different passwords for different instances/groups of
484 instances, and makes it necessary to remember to copy the file around
485 the cluster when the password changes.
490 We'll change the VNC password file to a vnc_password_file hypervisor
491 parameter. This way it can have a cluster default, but also a different
492 value for each instance. The VNC enabled hypervisors (xen and kvm) will
493 publish all the password files in use through the cluster so that a
494 redistribute-config will ship them to all nodes (see the Redistribute
495 Config proposed changes above).
497 The current VNC_PASSWORD_FILE constant will be removed, but its value
498 will be used as the default HV_VNC_PASSWORD_FILE value, thus retaining
499 backwards compatibility with 2.0.
501 The code to export the list of VNC password files from the hypervisors
to RedistributeConfig will be shared between the KVM and xen-hvm
classes.
508 Current State and shortcomings
509 ++++++++++++++++++++++++++++++
511 Currently disks and network interfaces have a few tweakable options and
all the rest is left to defaults we chose. We increasingly find that
we need to tweak some of these parameters, for example to disable
514 barriers for DRBD devices, or allow striping for the LVM volumes.
516 Moreover for many of these parameters it will be nice to have
cluster-wide defaults, and then be able to change them per
disk/interface.
We will add new cluster-level diskparams and netparams, which will
524 contain all the tweakable parameters. All values which have a sensible
525 cluster-wide default will go into this new structure while parameters
526 which have unique values will not.
528 Example of network parameters:
530 - link: for mode "bridge" the bridge to connect to, for mode route it
531 can contain the routing table, or the destination interface
Example of disk parameters:

- stripe: lvm stripes
535 - stripe_size: lvm stripe size
536 - meta_flushes: drbd, enable/disable metadata "barriers"
537 - data_flushes: drbd, enable/disable data "barriers"
Some parameters are bound to be disk-type specific (drbd vs. lvm vs.
files) or hypervisor specific (nic models for example), but for now they
541 will all live in the same structure. Each component is supposed to
542 validate only the parameters it knows about, and ganeti itself will make
543 sure that no "globally unknown" parameters are added, and that no
544 parameters have overridden meanings for different components.
The parameters will be kept, as for the BEPARAMS, in a "default"
category, which will allow us to expand on them by creating instance
"classes" in the future. Instance classes are not a feature we plan to
implement in 2.1, though.
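Purely as an illustration of the idea (key names and values below are
examples, not a final schema), the cluster-level defaults could look
like::

  netparams = {
    "default": {
      "mode": "bridge",       # or "route"
      "link": "xen-br0",      # bridge, or routing table/interface
    },
  }

  diskparams = {
    "default": {
      "stripe": 1,            # lvm stripes
      "stripe_size": 64,      # lvm stripe size
      "meta_flushes": True,   # drbd metadata "barriers"
      "data_flushes": True,   # drbd data "barriers"
    },
  }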
552 Global hypervisor parameters
553 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
555 Current State and shortcomings
556 ++++++++++++++++++++++++++++++
558 Currently all hypervisor parameters are modifiable both globally
559 (cluster level) and at instance level. However, there is no other
framework to hold hypervisor-specific parameters, so if we want to add
561 a new class of hypervisor parameters that only makes sense on a global
562 level, we have to change the hvparams framework.
567 We add a new (global, not per-hypervisor) list of parameters which are
568 not changeable on a per-instance level. The create, modify and query
569 instance operations are changed to not allow/show these parameters.
571 Furthermore, to allow transition of parameters to the global list, and
to allow cleanup of inadvertently-customised parameters, the
573 ``UpgradeConfig()`` method of instances will drop any such parameters
574 from their list of hvparams, such that a restart of the master daemon
575 is all that is needed for cleaning these up.
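A minimal sketch of that cleanup (illustrative only; the list of
global-only parameter names and its final name are still to be
defined)::

  GLOBAL_ONLY_HVPARAMS = frozenset()  # to be filled with the global names

  class InstanceSketch:
    """Illustrative stand-in for the real objects.Instance class."""

    def __init__(self, hvparams):
      self.hvparams = hvparams

    def UpgradeConfig(self):
      # Drop parameters that are only changeable at cluster level, so a
      # master daemon restart is enough to clean them up.
      for name in GLOBAL_ONLY_HVPARAMS:
        self.hvparams.pop(name, None)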
577 Also, the framework is simple enough that if we need to replicate it
578 at beparams level we can do so easily.
581 Non bridged instances support
582 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
584 Current State and shortcomings
585 ++++++++++++++++++++++++++++++
587 Currently each instance NIC must be connected to a bridge, and if the
588 bridge is not specified the default cluster one is used. This makes it
589 impossible to use the vif-route xen network scripts, or other
590 alternative mechanisms that don't need a bridge to work.
595 The new "mode" network parameter will distinguish between bridged
596 interfaces and routed ones.
598 When mode is "bridge" the "link" parameter will contain the bridge the
instance should be connected to, effectively making things work as
they do today. The value has been migrated from a nic field to a
parameter to allow for an
601 easier manipulation of the cluster default.
603 When mode is "route" the ip field of the interface will become
604 mandatory, to allow for a route to be set. In the future we may want
605 also to accept multiple IPs or IP/mask values for this purpose. We will
606 evaluate possible meanings of the link parameter to signify a routing
table to be used, which would allow for isolation between instance
608 groups (as today happens for different bridges).
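For illustration only (the values are made up, and the exact parameter
layout is described above), the two modes would roughly look like::

  bridged_nic = {"mode": "bridge", "link": "xen-br0", "ip": None}
  routed_nic = {"mode": "route",
                "link": "100",          # e.g. a routing table
                "ip": "192.0.2.10"}     # mandatory in "route" mode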
610 For now we won't add a parameter to specify which network script gets
611 called for which instance, so in a mixed cluster the network script must
612 be able to handle both cases. The default kvm vif script will be changed
to do so. (Xen doesn't have a ganeti-provided script, so nothing will be
done for that hypervisor.)
616 Introducing persistent UUIDs
617 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
619 Current state and shortcomings
620 ++++++++++++++++++++++++++++++
Some objects in the Ganeti configuration are tracked by their name
while also supporting renames. This creates an extra difficulty,
because neither Ganeti nor external management tools can then track
the actual entity, and due to the name change it behaves like a new
one.
628 Proposed changes part 1
629 +++++++++++++++++++++++
631 We will change Ganeti to use UUIDs for entity tracking, but in a
632 staggered way. In 2.1, we will simply add an “uuid” attribute to each
of the instances, nodes and the cluster itself. This will be reported
at instance creation time for instances, and at node add time for
nodes. It will of course be available for querying via the
OpQueryNodes/Instance and cluster information, and via RAPI as well.
638 Note that Ganeti will not provide any way to change this attribute.
640 Upgrading from Ganeti 2.0 will automatically add an ‘uuid’ attribute
641 to all entities missing it.
644 Proposed changes part 2
645 +++++++++++++++++++++++
647 In the next release (e.g. 2.2), the tracking of objects will change
648 from the name to the UUID internally, and externally Ganeti will
649 accept both forms of identification; e.g. an RAPI call would be made
650 either against ``/2/instances/foo.bar`` or against
651 ``/2/instances/bb3b2e42…``. Since an FQDN must have at least a dot,
and dots are not valid characters in UUIDs, we will not have namespace
issues.
655 Another change here is that node identification (during cluster
656 operations/queries like master startup, “am I the master?” and
similar) could be done via UUIDs, which are more stable than the
current hostname-based scheme.
660 Internal tracking refers to the way the configuration is stored; a
661 DRBD disk of an instance refers to the node name (so that IPs can be
662 changed easily), but this is still a problem for name changes; thus
663 these will be changed to point to the node UUID to ease renames.
The advantage of this change (after the second round of changes) is
666 that node rename becomes trivial, whereas today node rename would
667 require a complete lock of all instances.
670 Automated disk repairs infrastructure
671 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
673 Replacing defective disks in an automated fashion is quite difficult
674 with the current version of Ganeti. These changes will introduce
675 additional functionality and interfaces to simplify automating disk
676 replacements on a Ganeti node.
678 Fix node volume group
679 +++++++++++++++++++++
This is the most difficult addition, as it can lead to data loss if it's
682 not properly safeguarded.
684 The operation must be done only when all the other nodes that have
685 instances in common with the target node are fine, i.e. this is the only
686 node with problems, and also we have to double-check that all instances
687 on this node have at least a good copy of the data.
This might mean that we have to enhance the GetMirrorStatus calls, and
introduce a smarter version that can tell us more about the status of
the mirror.
693 Stop allocation on a given PV
694 +++++++++++++++++++++++++++++
696 This is somewhat simple. First we need a "list PVs" opcode (and its
697 associated logical unit) and then a set PV status opcode/LU. These in
698 combination should allow both checking and changing the disk/PV status.
703 This new opcode or opcode change must list the instance-disk-index and
704 node combinations of the instance together with their status. This will
705 allow determining what part of the instance is broken (if any).
710 This new opcode/LU/RAPI call will run ``replace-disks -p`` as needed, in
711 order to fix the instance status. It only affects primary instances;
712 secondaries can just be moved away.
717 This new opcode/LU/RAPI call will take over the current ``gnt-node
718 migrate`` code and run migrate for all instances on the node.
723 This new opcode/LU/RAPI call will take over the current ``gnt-node
724 evacuate`` code and run replace-secondary with an iallocator script for
725 all instances on the node.
728 External interface changes
729 --------------------------
734 The OS API of Ganeti 2.0 has been built with extensibility in mind.
735 Since we pass everything as environment variables it's a lot easier to
send new information to the OSes without breaking backwards
compatibility.
737 This section of the design outlines the proposed extensions to the API
738 and their implementation.
740 API Version Compatibility Handling
741 ++++++++++++++++++++++++++++++++++
In 2.1 there will be a new OS API version (e.g. 15), which should be
mostly compatible with api 10, except for some newly added variables.
Since it's easy not to pass some variables we'll be able to handle
Ganeti 2.0 OSes by just filtering out the newly added pieces of
information. We will still encourage OSes to declare support for the
new API after checking that the new variables don't cause any conflict
for them, and we will drop api 10 support after ganeti 2.1 has been
released.
751 New Environment variables
752 +++++++++++++++++++++++++
754 Some variables have never been added to the OS api but would definitely
755 be useful for the OSes. We plan to add an INSTANCE_HYPERVISOR variable
756 to allow the OS to make changes relevant to the virtualization the
757 instance is going to use. Since this field is immutable for each
instance, the OS can tailor the install to it, without having to make
sure the instance can run under any virtualization technology.
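For example, an OS create script could branch on the new variable along
these lines (a sketch only; real OS scripts may be written in any
language, and the hypervisor names are the usual Ganeti ones)::

  import os

  hypervisor = os.environ["INSTANCE_HYPERVISOR"]
  if hypervisor == "xen-pvm":
    # e.g. install a kernel suitable for paravirtualized Xen
    pass
  elif hypervisor in ("xen-hvm", "kvm"):
    # e.g. install a bootloader and a distribution kernel
    pass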
761 We also want the OS to know the particular hypervisor parameters, to be
762 able to customize the install even more. Since the parameters can
763 change, though, we will pass them only as an "FYI": if an OS ties some
instance functionality to the value of a particular hypervisor
parameter, manual changes or a reinstall may be needed to adapt the
instance to the new environment. This is not a regression compared to
today, because even if the OSes are left blind about this information,
sometimes they still need to make compromises and cannot satisfy all
possible parameter values.
Currently we are witnessing a degree of "OS proliferation" just to
change a simple installation behavior. This means that the same OS gets
installed on the cluster multiple times, with different names, to
customize just one installation behavior. Usually such OSes try to share
as much as possible through symlinks, but this still causes
complications on the user side, especially when multiple parameters must
be customized.
782 For example today if you want to install debian etch, lenny or squeeze
783 you probably need to install the debootstrap OS multiple times, changing
784 its configuration file, and calling it debootstrap-etch,
785 debootstrap-lenny or debootstrap-squeeze. Furthermore if you have for
786 example a "server" and a "development" environment which installs
787 different packages/configuration files and must be available for all
installs you'll probably end up with debootstrap-etch-server,
debootstrap-etch-dev, debootstrap-lenny-server, debootstrap-lenny-dev,
etc. Crossing more than two parameters quickly becomes unmanageable.
792 In order to avoid this we plan to make OSes more customizable, by
793 allowing each OS to declare a list of variants which can be used to
794 customize it. The variants list is mandatory and must be written, one
795 variant per line, in the new "variants.list" file inside the main os
dir. At least one variant must be supported. When choosing the
797 OS exactly one variant will have to be specified, and will be encoded in
the os name as <OS-name>+<variant>. As is the case today, it will be
possible to change an instance's OS at creation or install time.
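For example, a debootstrap OS could ship a ``variants.list`` like the
following, resulting in the OS names ``debootstrap+etch``,
``debootstrap+lenny`` and ``debootstrap+squeeze`` being available::

  etch
  lenny
  squeeze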
801 The 2.1 OS list will be the combination of each OS, plus its supported
variants. This will cause the name proliferation to remain, but at
803 least the internal OS code will be simplified to just parsing the passed
804 variant, without the need for symlinks or code duplication.
806 Also we expect the OSes to declare only "interesting" variants, but to
807 accept some non-declared ones which a user will be able to pass in by
808 overriding the checks ganeti does. This will be useful for allowing some
809 variations to be used without polluting the OS list (per-OS
810 documentation should list all supported variants). If a variant which is
811 not internally supported is forced through, the OS scripts should abort.
In the future (post 2.1) we may want to move to full-fledged parameters
814 all orthogonal to each other (for example "architecture" (i386, amd64),
815 "suite" (lenny, squeeze, ...), etc). (As opposed to the variant, which
816 is a single parameter, and you need a different variant for all the set
of combinations you want to support). In this case we envision the
variants to be moved inside of Ganeti and be associated with lists of
parameter->value associations, which will then be passed to the OS.
825 Current State and shortcomings
826 ++++++++++++++++++++++++++++++
828 The iallocator interface allows creation of instances without manually
829 specifying nodes, but instead by specifying plugins which will do the
830 required computations and produce a valid node list.
However, the interface is quite awkward to use:
834 - one cannot set a 'default' iallocator script
835 - one cannot use it to easily test if allocation would succeed
- some new functionality, such as rebalancing clusters and calculating
  capacity estimates, is needed
There are two areas of improvement proposed:
844 - improving the use of the current interface
845 - extending the IAllocator API to cover more automation
848 Default iallocator names
849 ^^^^^^^^^^^^^^^^^^^^^^^^
851 The cluster will hold, for each type of iallocator, a (possibly empty)
852 list of modules that will be used automatically.
854 If the list is empty, the behaviour will remain the same.
If the list has one entry, then ganeti will behave as if
'--iallocator' was specified on the command line, i.e. use this
allocator by default. If the user however passed nodes, those will be
used.
861 If the list has multiple entries, they will be tried in order until
862 one gives a successful answer.
867 The create instance LU will get a new 'dry-run' option that will just
868 simulate the placement, and return the chosen node-lists after running
869 all the usual checks.
Instance adds/removals/moves can create a situation where load on the
875 nodes is not spread equally. For this, a new iallocator mode will be
876 implemented called ``balance`` in which the plugin, given the current
877 cluster state, and a maximum number of operations, will need to
compute the instance relocations needed in order to achieve a "better"
cluster (by whatever metric the script considers better).
881 Cluster capacity calculation
882 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
884 In this mode, called ``capacity``, given an instance specification and
885 the current cluster state (similar to the ``allocate`` mode), the
886 plugin needs to return:
- how many instances can be allocated on the cluster with that
  specification
- on which nodes these will be allocated (in order)
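Purely as an illustration (the exact result format is not fixed by this
design), a ``capacity`` answer could convey something like::

  {"instances": 12,
   "nodes": ["node4.example.com", "node1.example.com",
             "node2.example.com"]}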
892 .. vim: set textwidth=72 :