=============================================================================
Management of storage types and disk templates, incl. storage space reporting
=============================================================================

.. contents:: :depth: 4

Background
==========

Currently, there is no consistent management of different variants of storage
in Ganeti. One direct consequence is that storage space reporting is currently
broken for all storage that is not based on LVM technology. This design looks
at the root causes and proposes a way to fix it.

Proposed changes
================

We propose to streamline the handling of different storage types and disk
templates. Currently, there is no consistent implementation for dis/enabling
disk templates and/or storage types.

Our idea is to introduce a list of enabled disk templates, which can be
used by instances in the cluster. Based on this list, we want to provide
storage reporting mechanisms for the available disk templates. Since some
disk templates share the same underlying storage technology (for example,
``drbd`` and ``plain`` are based on ``lvm``), we map disk templates to storage
types and implement storage space reporting for each storage type.

Configuration changes
---------------------

Add a new attribute "enabled_disk_templates" (type: list of strings) to the
cluster config which holds disk templates, for example "drbd", "file", or
"ext". This attribute represents the list of disk templates that are enabled
cluster-wide for usage by instances. It will not be possible to create
instances with a disk template that is not enabled, nor will it be possible
to remove a disk template from the list while there are still instances
using it.

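A minimal sketch of the two consistency checks this implies (the helper
names are hypothetical; ``errors.OpPrereqError`` is Ganeti's usual
prerequisite error)::

  from ganeti import errors

  def CheckDiskTemplateEnabled(cluster, disk_template):
    """Refuse instance creation with a disk template that is not enabled."""
    if disk_template not in cluster.enabled_disk_templates:
      raise errors.OpPrereqError("Disk template '%s' is not enabled in the"
                                 " cluster" % disk_template)

  def CheckDiskTemplateRemovable(instances, disk_template):
    """Refuse to disable a disk template that is still in use."""
    users = [inst.name for inst in instances
             if inst.disk_template == disk_template]
    if users:
      raise errors.OpPrereqError("Cannot disable disk template '%s', still"
                                 " used by: %s" %
                                 (disk_template, ", ".join(users)))
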
The list of enabled disk templates can contain any non-empty subset of
the currently implemented disk templates: ``blockdev``, ``diskless``,
``drbd``, ``ext``, ``file``, ``plain``, ``rbd``, and ``sharedfile``. See
``DISK_TEMPLATES`` in ``constants.py``.

Note that the above-mentioned list of enabled disk templates is just a
"mechanism" parameter that defines which disk templates the cluster can use.
Further restrictions on what is allowed can be expressed in the ipolicy,
which is not covered in this design doc. Note that it is possible to force an
instance to use a disk template that is not allowed by the ipolicy; this is
not possible if the template is not enabled by the cluster.

The ipolicy also contains a list of enabled disk templates. Since the
cluster-wide enabled disk templates should be the stronger constraint, the
list of enabled disk templates in the ipolicy should be a subset of those. In
case the user tries to create an inconsistent situation here, ``gnt-cluster``
should emit a warning.

We consider the first disk template in the list to be the default template
for instance creation and storage reporting. This will remove the need to
specify the disk template with ``-t`` on instance creation. Note: It would be
better to take the default disk template from the node-group-specific
ipolicy. However, when using the iallocator, the node group can only be
determined from the node, which is chosen by the iallocator, which in turn
needs the disk template first. To solve this chicken-and-egg problem, we
first need to extend ``gnt-instance add`` to accept a node group in the
first place.

Currently, cluster-wide dis/enabling of disk templates is not implemented
consistently. ``lvm``-based disk templates are enabled by specifying a volume
group name on cluster initialization and can only be disabled by explicitly
using the option ``--no-lvm-storage``. This will be replaced by adding or
removing ``drbd`` and ``plain`` from the set of enabled disk templates.

Until now, file storage and shared file storage could be dis/enabled at
``./configure`` time. This will also be replaced by adding/removing the
respective disk templates from the set of enabled disk templates.

There is currently no possibility to dis/enable the disk templates
``diskless``, ``blockdev``, ``ext``, and ``rbd``. By introducing the set of
enabled disk templates, we will require these disk templates to be explicitly
enabled in order to be used. The idea is that the administrator of the
cluster can tailor the cluster configuration to what is actually needed in
the cluster. There is hope that this will lead to cleaner code, better
performance, and fewer bugs.

When upgrading the configuration from a version that did not have the list
of enabled disk templates, we have to decide which disk templates are enabled
based on the current configuration of the cluster. We propose the following
update logic to be implemented in the online update of the config in
the ``Cluster`` class in ``objects.py`` (a sketch of this logic follows
below):

- If a ``volume_group_name`` exists, then enable ``drbd`` and ``plain``.
  (TODO: can we narrow that down further?)
- If ``file`` or ``sharedfile`` was enabled at configure time, add the
  respective disk template to the list of enabled disk templates.
- For the disk templates ``diskless``, ``blockdev``, ``ext``, and ``rbd``, we
  inspect the current cluster configuration regarding whether or not there
  are instances that use one of those disk templates. We will add only those
  that are currently in use.

The order in which the list of enabled disk templates is built up will be
determined by a preference order based on when in the history of Ganeti the
disk templates were introduced (thus being a heuristic for which are used
more than others).

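A sketch of this upgrade logic (the function name, the configure-time
constants, and the preference order shown are assumptions for illustration
only)::

  from ganeti import constants

  # Illustrative preference order, older disk templates first (assumption).
  _TEMPLATE_PREFERENCE = ["plain", "drbd", "file", "sharedfile", "diskless",
                          "blockdev", "ext", "rbd"]

  def UpgradeEnabledDiskTemplates(cluster, instances):
    """Derive 'enabled_disk_templates' for configs that predate it."""
    enabled = set()
    if cluster.volume_group_name:
      enabled.update(["drbd", "plain"])
    # File storage was dis/enabled at ./configure time before this change.
    if constants.ENABLE_FILE_STORAGE:
      enabled.add("file")
    if constants.ENABLE_SHARED_FILE_STORAGE:
      enabled.add("sharedfile")
    # These templates are only enabled if they are actually in use.
    for inst in instances:
      if inst.disk_template in ("diskless", "blockdev", "ext", "rbd"):
        enabled.add(inst.disk_template)
    return [t for t in _TEMPLATE_PREFERENCE if t in enabled]
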
The list of enabled disk templates can be specified on cluster initialization
with ``gnt-cluster init`` using the optional parameter
``--enabled-disk-templates``. If it is not set, it will be set to a default
set of enabled disk templates, consisting of ``drbd`` and ``plain``. The list
can be shrunk or extended by ``gnt-cluster modify`` using the same parameter.

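For example (hypothetical session; the exact value syntax is assumed here to
be a comma-separated list)::

  > gnt-cluster init --enabled-disk-templates drbd,plain mycluster
  > gnt-cluster modify --enabled-disk-templates drbd,plain,file
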
Storage reporting
-----------------

The storage reporting in ``gnt-node list`` will be the first user of the
newly introduced list of enabled disk templates. Currently, storage reporting
works only for lvm-based storage. We want to extend that and report storage
for the enabled disk templates. By default, ``gnt-node list`` will only
report on storage of the default disk template (the first in the list of
enabled disk templates). One can explicitly ask for storage reporting on the
other enabled disk templates with the ``-o`` option.

Some of the currently implemented disk templates share the same base storage
technology. Since the storage reporting is based on the underlying technology
rather than on the user-facing disk templates, we introduce storage types to
represent the underlying technology. There will be a mapping from disk
templates to storage types, which will be used by the storage reporting
backend to pick the right method for estimating the storage for the
different disk templates.

The proposed storage types are ``blockdev``, ``diskless``, ``ext``, ``file``,
``lvm-pv``, ``lvm-vg``, and ``rados``.

The mapping from disk templates to storage types will be: ``drbd`` and
``plain`` to ``lvm-vg``, ``file`` and ``sharedfile`` to ``file``, and all
others to their obvious counterparts.

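Expressed as a dictionary, using the template and storage type names from
this design (a sketch, not the final constant)::

  # Mapping from disk templates to their underlying storage types; ``drbd``
  # and ``plain`` share the LVM volume group, ``file`` and ``sharedfile``
  # share the file storage type.
  DISK_TEMPLATE_TO_STORAGE_TYPE = {
    "blockdev": "blockdev",
    "diskless": "diskless",
    "drbd": "lvm-vg",
    "ext": "ext",
    "file": "file",
    "plain": "lvm-vg",
    "rbd": "rados",
    "sharedfile": "file",
  }
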
Note that no disk template maps to ``lvm-pv``, because this storage type is
currently only used to enable the user to mark physical volumes as
(un)allocatable. (See ``man gnt-node``.) It is not possible to create an
instance directly on a storage unit of type ``lvm-pv``, therefore it is not
included in the mapping.

The storage reporting for file storage will report space on the file storage
dir, which is currently limited to one directory. In the future, if we have
support for more directories, or for per-nodegroup directories, this can be
changed.

For now, we will implement storage reporting only for non-shared storage,
that is, the disk templates ``file``, ``plain``, and ``drbd``. For the disk
template ``diskless``, there is obviously nothing to report. When
implementing storage reporting for ``file``, we can also use it for
``sharedfile``, since it uses the same file system mechanisms to determine
the free space. In the future, we can optimize storage reporting for shared
storage by not querying all nodes that use a common shared file system for
the same space information.

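For the file-based disk templates, the free space can be determined with
standard file system calls; a minimal sketch (the function name is
hypothetical; ``path`` is assumed to be the configured storage directory)::

  import os

  def GetFileStorageSpaceInfo(path):
    """Report total and free space of the file system backing 'path'."""
    st = os.statvfs(path)
    return {
      "total": st.f_blocks * st.f_frsize,  # file system size in bytes
      "free": st.f_bavail * st.f_frsize,   # bytes available to non-root
    }
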
In the future, we will extend storage reporting to shared storage types like
``rados`` and ``ext``. Note that it will not make sense to query each node
for storage reporting on a storage unit that is used by several nodes.

We will not implement storage reporting for the ``blockdev`` disk template,
because block devices are always adopted after being provided by the system
administrator, thus coming from outside Ganeti. There is no point in storage
reporting for block devices, because Ganeti will never try to allocate
storage inside a block device.

RPC changes
-----------

The noded RPC call that reports node storage space will be changed to accept
a list of <storage_type>,<key> string tuples. For each of them, it will
report the amount of free storage space found on storage <key> as known by
the requested storage type. Depending on the storage type, the key would be
a volume group name in the case of lvm, a directory name for file-based
storage, and a rados pool name for rados storage.

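As an illustration, a request payload could look like the following (the
shapes are hypothetical; only the <storage_type>,<key> structure is fixed by
this design, and the key names match the examples further below)::

  # One <storage_type>,<key> tuple per storage unit to query.
  storage_units = [
    ("lvm-vg", "myvg"),   # key: volume group name
    ("file", "mydir"),    # key: file storage directory
    ("rados", "mypool"),  # key: rados pool name
  ]
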
Through a mapping of storage types to storage calculation functions, masterd
will know which mechanism each storage type uses for storage calculation and
will invoke only the needed ones.

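A sketch of such a dispatch table (hypothetical; only the ``file`` backend
is sketched above, the other entries are placeholders for analogous
backends)::

  STORAGE_SPACE_BACKENDS = {
    "lvm-vg": lambda key: NotImplemented,  # e.g. query the volume group 'key'
    "file": GetFileStorageSpaceInfo,       # sketch from above
    "rados": lambda key: NotImplemented,   # e.g. query the rados pool 'key'
  }

  def GetStorageSpace(storage_type, key):
    """Dispatch a space query to the backend for the given storage type."""
    return STORAGE_SPACE_BACKENDS[storage_type](key)
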
Note that for ``file`` and ``sharedfile``, the node knows which directories
are allowed and won't allow any other directory to be queried, for security
reasons. The actual path still needs to be passed to distinguish the two, as
the storage type will be the same for both.

These calculations will be implemented in the node storage system (currently
``lib/storage.py``), but querying will still happen through the
``node info`` call, to avoid requiring an extra RPC each time.

Ganeti reporting
----------------

``gnt-node list`` can be queried for the different disk templates, if they
are enabled. By default, it will just report information about the default
disk template. Examples::

  > gnt-node list
  Node                       DTotal DFree MTotal MNode MFree Pinst Sinst
  mynode1                      3.6T  3.6T  64.0G 1023M 62.2G     1     0
  mynode2                      3.6T  3.6T  64.0G 1023M 62.0G     2     1
  mynode3                      3.6T  3.6T  64.0G 1023M 62.3G     0     2

  > gnt-node list -o dtotal/drbd,dfree/file
  Node      DTotal (drbd, myvg) DFree (file, mydir)
  mynode1                 3.6T                    -
  mynode2                 3.6T                    -

Note that for ``drbd``, we only report the space of the volume group, and
only if it was not renamed to something other than the default volume group
name. With this design, there is also no possibility to ask about the meta
volume group. We restrict the design here to make the transition to storage
pools easier (as it is an interim state only). It is the administrator's
responsibility to ensure that there is enough space for the meta volume
group.

When storage pools are implemented, we will switch from referencing the disk
template to referencing the storage pool name. For that, of course, the pool
names need to be unique across all storage types. For ``drbd``, we will use
the default 'drbd' storage pool and possibly a second lvm-based storage pool
for the metavg. It will be possible to rename storage pools (thus also the
default lvm storage pool). There will be new functionality to ask which
storage pools are available and of what type. Storage pools will have a
storage pool type, which is one of the disk templates. There can be more
than one storage pool based on the same disk template, therefore we will
then start referencing the storage pool name instead of the disk template.

``gnt-cluster info`` will report which disk templates are enabled, i.e.
which ones are supported according to the cluster configuration. Example
output::

  > gnt-cluster info
  [...]
  Cluster parameters:
    - [...]
    - enabled disk templates: plain, drbd, sharedfile, rados
    - [...]

``gnt-node list-storage`` will not be affected by any changes, since this
design is restricted to free storage reporting for non-shared storage types.

Allocator changes
-----------------

The iallocator protocol doesn't need to change: since we know which disk
template an instance has, we'll pass only the "free" value for that disk
template to the iallocator when asking for an allocation to be made. Note
that for DRBD, we currently ignore the case when vg and metavg are
different, and we only consider the main volume group. Fixing this is
outside the scope of this design.

With this design, we ensure forward compatibility with respect to storage
pools. For now, we'll report space for all available disk templates that are
based on non-shared storage types; in the future, for all available storage
pools.

Rebalancing changes
-------------------

Hbal will not need changes, as it already handles this; we do not foresee
any changes to it.

Space reporting changes
-----------------------

By default, hspace will report space assuming that allocation happens on the
default disk template for the cluster/node group. An option will be added to
manually specify a different disk template.

Interactions with Partitioned Ganeti
------------------------------------

The design for :doc:`Partitioned Ganeti <design-partitioned>` also deals
with reporting free space. Partitioned Ganeti has a different way to report
free space for LVM on nodes where the ``exclusive_storage`` flag is set.
That doesn't interact directly with this design, as the specifics of how the
free space is computed are not in the scope of this design. But the
``node info`` call contains the value of the ``exclusive_storage`` flag,
which is currently only meaningful for the LVM storage type. Additional
flags like the ``exclusive_storage`` flag for lvm might be useful for other
disk templates / storage types as well. We therefore extend the RPC call
from <storage_type>,<key> to <storage_type>,<key>,[<param>] to include any
disk-template-specific (or storage-type-specific) parameters in the RPC
call.

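Continuing the earlier request sketch, an lvm storage unit could then carry
the flag as its only parameter (illustrative only)::

  # <storage_type>, <key>, [<params>]
  storage_units = [
    ("lvm-vg", "myvg", [True]),  # param: the exclusive_storage flag
    ("file", "mydir", []),       # no storage-type-specific parameters
  ]
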
The reporting of free spindles, also part of Partitioned Ganeti, is not
covered by this design doc, as spindles are seen as a separate resource.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: