====================================
Moving instances across node groups
====================================

This design document explains the changes needed in Ganeti to perform
instance moves across node groups. Reader familiarity with the following
existing documents is advised:

- :doc:`Current IAllocator specification <iallocator>`
- :doc:`Shared storage model in 2.3+ <design-shared-storage>`

Motivation and design proposal
==============================

At the moment, moving instances away from their primary or secondary
nodes with the ``relocate`` and ``multi-evacuate`` IAllocator calls
restricts target nodes to those on the same node group. This ensures a
mobility domain is never crossed, and allows normal operation of each
node group to be confined within itself.

It is desirable, however, to have a way of moving instances across node
groups so that, for example, it is possible to move a set of instances
to another group for policy reasons, or completely empty a given group
to perform maintenance operations.

To implement this, we propose a new ``multi-relocate`` IAllocator call
that will be able to compute inter-group instance moves, taking into
account mobility domains as appropriate. The interface proposed below
should be enough to cover the use cases mentioned above.

Detailed design
===============

We introduce a new ``multi-relocate`` IAllocator call whose input will
be a list of instances to move, and a "mode of operation" that will
determine what groups will be candidates to receive the new instances.

The mode of operation will be one of:

- *Stay in group*: the instances will be moved off their current nodes,
  but will stay in the same group; this is what the ``relocate`` call
  does, but here it can act on multiple instances. (Typically, the
  source nodes will be marked as drained, to avoid just exchanging
  instances among them.)

- *Change group*: this mode accepts one extra parameter, a list of node
  group UUIDs; the instances will be moved away from their current
  group, to any of the groups in this list. If the list is empty, the
  request is, simply, "change group": the instances are placed in any
  group but their original one.

- *Any*: for each instance, any group is valid, including its current
  one.

In all modes, the groups' ``alloc_policy`` attribute will be honored.
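
As a rough illustration only, a ``multi-relocate`` request along these
lines could be sketched as a JSON document built from a Python dict. The
field names used here (``reloc_mode``, ``target_groups``) and the mode
identifiers are assumptions for the sake of the example, not part of the
finalized protocol:

```python
import json

# Hypothetical sketch of a "multi-relocate" IAllocator request body.
# Field and mode names are illustrative assumptions, not the final
# wire format.
request = {
    "type": "multi-relocate",
    # One of "keep-group", "change-group" or "any" (names assumed).
    "reloc_mode": "change-group",
    # Only meaningful in "change-group" mode; an empty list means
    # "any group but the current one".
    "target_groups": ["uuid-of-group-b"],
    # The instances to move, identified by name.
    "instances": ["instance1.example.com", "instance2.example.com"],
}

print(json.dumps(request, indent=2))
```

The empty ``target_groups`` case falls out naturally here: omitting all
UUIDs leaves the allocator free to pick any group other than the
instances' current one.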

Result
------

In all storage models, an inter-group move can be modeled as a sequence
of **replace secondary** and **failover** operations (when shared
storage is used, they will all be failover operations within the
corresponding mobility domain). This will be represented as a list of
``(instance, [operations])`` pairs.

For replace secondary operations, a new secondary node must be
specified. For failover operations, a node *may* be specified when
necessary, e.g. when shared storage is in use and there's no designated
secondary for the instance.
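
A minimal sketch of how such a result could be represented, with Python
tuples standing in for the JSON the allocator would actually emit (the
operation names and encodings below are assumptions for illustration,
not the finalized format):

```python
# Hypothetical multi-relocate result: a list of
# (instance, [operations]) pairs.
result = [
    # DRBD-style instance: replace the secondary with a node in the
    # target group, then fail over onto it.
    ("instance1.example.com",
     [("replace-secondary", "node3.example.com"),
      ("failover", None)]),
    # Shared-storage instance: a single failover, with an explicit
    # target node since there is no designated secondary.
    ("instance2.example.com",
     [("failover", "node4.example.com")]),
]

# Walk the move plan, printing one step per line.
for instance, operations in result:
    for op, node in operations:
        print(instance, op, node or "-")
```

Note how the two storage models map onto the same structure: the
shared-storage instance needs only a failover entry, while the DRBD
instance needs a replace-secondary step before its failover.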

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: