Design for parallelized instance creations and opportunistic locking
====================================================================
.. contents:: :depth: 3


Current state and shortcomings
------------------------------
As of Ganeti 2.6, instance creations acquire all node locks when an
:doc:`instance allocator <iallocator>` (henceforth "iallocator") is
used. In situations where many instances should be created in a short
timeframe, there is a lot of congestion on node locks. Effectively,
all instance creations are serialized, even on big clusters with
multiple groups.
The situation gets worse when disk wiping is enabled (see
:manpage:`gnt-cluster(8)`) as that can take, depending on disk size
and hardware performance, from minutes to hours. Not waiting for DRBD
disks to synchronize (``wait_for_sync=false``) makes instance
creations slightly faster, but there's a risk of impacting I/O of
other instances.


Proposed changes
----------------
The target is to speed up instance creations in combination with an
iallocator even when the cluster's balance is sacrificed in the
process. The cluster can later be re-balanced using ``hbal``. The main
objective is to reduce the number of node locks acquired for creation
and to release unused locks as fast as possible (the latter is already
being done). To do this safely, several changes are necessary.
Locking library
~~~~~~~~~~~~~~~
Instead of forcibly acquiring all node locks for creating an instance
using an iallocator, only those currently available will be acquired.
To this end, the locking library must be extended to implement
opportunistic locking. Lock sets must be able to acquire only those
locks that are available at the time, ignoring and not waiting for
locks held by another thread.
Locks (``SharedLock``) already support a timeout of zero. Such a zero
timeout is different from a blocking acquisition, for which the
timeout would be ``None``.
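
A minimal sketch of the difference, using Python's standard
``threading.Lock`` in place of Ganeti's ``SharedLock`` (the helper
name here is ours, not Ganeti's):

.. code-block:: python

   import threading

   def opportunistic_acquire(locks):
       """Try each lock without waiting; return the names obtained."""
       acquired = []
       for name, lock in sorted(locks.items()):
           # A zero timeout (here: blocking=False) gives up
           # immediately instead of waiting for the current holder.
           if lock.acquire(blocking=False):
               acquired.append(name)
       return acquired

   locks = {"node1": threading.Lock(), "node2": threading.Lock()}
   locks["node2"].acquire()  # simulate another thread holding node2
   print(opportunistic_acquire(locks))  # prints ['node1']

A blocking acquisition would instead wait indefinitely (timeout
``None``) for ``node2`` to be released.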
Lock sets can essentially be acquired in two different modes. One is
to acquire the whole set, which in turn also blocks other threads from
adding new locks; the other is to acquire specific locks by name. The
function to acquire locks in a set accepts a timeout which, unless
``None`` (a blocking acquisition), covers the whole duration of
acquiring the lock set's internal lock (if necessary) as well as the
member locks. For opportunistic acquisitions the timeout is only
meaningful when acquiring the whole set, in which case it is used only
for acquiring the set's internal lock (used to block lock additions).
For acquiring member locks the timeout is effectively zero to make
them opportunistic.
A new and optional boolean parameter named ``opportunistic`` is added
to ``LockSet.acquire`` and re-exported through
``GanetiLockManager.acquire`` for use by ``mcpu``. Internally, lock
sets do the lock acquisition using a helper function,
``__acquire_inner``, which will be extended to support opportunistic
acquisitions. The algorithm is very similar to acquiring the whole
set, with the difference that acquisitions which time out are ignored
(the timeout in this case is zero).
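
The extension could look roughly like the following sketch (a
simplification; the real ``__acquire_inner`` also handles shared
versus exclusive modes and lock monitoring, which are omitted here):

.. code-block:: python

   import threading

   def acquire_inner(members, opportunistic=False, timeout=None):
       """Acquire member locks; skip unavailable ones if opportunistic.

       Sketch of the proposed behaviour only, not Ganeti's actual
       implementation.
       """
       acquired = []
       for lock in members:
           if opportunistic:
               # Effectively a zero timeout: never wait for a holder.
               ok = lock.acquire(blocking=False)
           elif timeout is None:
               ok = lock.acquire()  # blocking acquisition
           else:
               ok = lock.acquire(timeout=timeout)
           if ok:
               acquired.append(lock)
           elif not opportunistic:
               # Timing out is an error in blocking mode: roll back.
               for a in acquired:
                   a.release()
               raise TimeoutError("timed out acquiring member locks")
           # Opportunistic mode: simply skip unavailable locks.
       return acquired

   free, held = threading.Lock(), threading.Lock()
   held.acquire()  # simulate a lock owned by another thread
   got = acquire_inner([free, held], opportunistic=True)
   print(len(got))  # prints 1: only the free lock was acquired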


New lock level
~~~~~~~~~~~~~~
With opportunistic locking used for instance creations (controlled by
a parameter), multiple such requests can start at (essentially) the
same time and compete for node locks. Some logical units, such as
``LUClusterVerifyGroup``, need to acquire all node locks; while they
hold them, all instance allocations would fail to get their locks. The
same applies when multiple instance creations are started at roughly
the same time.
To avoid situations where an opcode holding all or many node locks
causes allocations to fail, a new lock level must be added to control
allocations. The logical units for instance failover and migration can
only safely determine whether they need all node locks after the
instance lock has been acquired. Therefore the new lock level, named
"node-alloc" (shorthand for "node-allocation"), will be inserted after
instances (``LEVEL_INSTANCE``) and before node groups
(``LEVEL_NODEGROUP``). Similar to the "big cluster lock" ("BGL"),
there is only a single lock at this level, named "node allocation
lock" ("NAL").
As a rule of thumb, the node allocation lock must be acquired in the
same mode as nodes and/or node resources. If all or a large number of
node locks are acquired, the node allocation lock should be acquired
as well. Special attention should be given to logical units started
for all node groups, such as ``LUGroupVerifyDisks``, as they also
block many nodes over a short amount of time.


iallocator
~~~~~~~~~~
The :doc:`iallocator interface <iallocator>` does not need any
modification. When an instance is created, the information for all
nodes is passed to the iallocator plugin. Nodes for which the lock
couldn't be acquired, and which therefore shouldn't be used for the
instance in question, will be shown as offline.
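
For illustration, marking unlocked nodes as offline in the data passed
to the plugin might look like the following (the helper and field
names are assumptions for this sketch, not the actual interface):

.. code-block:: python

   def annotate_node_data(node_data, acquired_locks):
       """Show nodes whose lock was not obtained as offline.

       Sketch only; the real iallocator input has many more fields.
       """
       result = {}
       for name, info in node_data.items():
           info = dict(info)  # do not mutate the caller's data
           if name not in acquired_locks:
               info["offline"] = True
           result[name] = info
       return result

   nodes = {"node1": {"offline": False}, "node2": {"offline": False}}
   out = annotate_node_data(nodes, {"node1"})
   print(out["node2"]["offline"])  # prints True: lock not acquired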


Opcodes
~~~~~~~
The opcodes ``OpInstanceCreate`` and ``OpInstanceMultiAlloc`` will
gain a new parameter to enable opportunistic locking. By default this
mode is disabled so as not to break backward compatibility.
A new error type is added to describe a temporary lack of resources.
Its name will be ``ECODE_TEMP_NORES``. With opportunistic locks the
opcodes mentioned before have only a partial view of the cluster and
can no longer decide whether an instance could not be allocated due to
the locks it has been given or because the whole cluster lacks
resources. Therefore, upon encountering this error code, the job
submitter is required to make this decision by re-submitting the job
or by redirecting it to another cluster.
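
A submitter's handling of the new error code could be sketched as
follows (the retry helper, its policy, and the error-code value are
hypothetical; only the ``ECODE_TEMP_NORES`` name comes from this
design):

.. code-block:: python

   ECODE_TEMP_NORES = "temp_insufficient_resources"  # illustrative

   def submit_with_retry(submit, max_attempts=3):
       """Re-submit on a temporary lack of resources (sketch only)."""
       for _ in range(max_attempts):
           status, result = submit()
           if status != ECODE_TEMP_NORES:
               return result
       # Still failing: give up here; a real submitter might instead
       # redirect the job to another cluster.
       raise RuntimeError("temporary lack of resources persisted")

   attempts = []

   def fake_submit():
       attempts.append(1)
       if len(attempts) < 2:
           return (ECODE_TEMP_NORES, None)
       return ("success", "job-1234")

   print(submit_with_retry(fake_submit))  # prints job-1234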
.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: