============================
Storage free space reporting
============================

.. contents:: :depth: 4

Background
==========

Currently, space reporting is broken for all storage types except drbd
and lvm (plain). This design looks at the root causes and proposes a
way to fix it.

Proposed changes
================

The changes below will streamline Ganeti to properly support
interaction with different storage types.

Configuration changes
---------------------

Each storage type will have a new "pools" parameter added (a list of
strings). This will be the list of volume groups for plain and drbd
(note that at this level we make no distinction between allowed vgs
and metavgs), the list of rados pools for rados, or the storage
directory for file and sharedfile. The parameters already present in
the cluster config object will be moved to the storage parameters.

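As an illustration, the per-storage-type parameters could be laid out as
in the sketch below. The key names and example values are hypothetical,
not the final configuration schema:

```python
# Hypothetical sketch of per-storage-type parameters in the cluster
# config; key names and values are illustrative, not the final schema.
storage_params = {
    "lvm": {
        # VGs usable for plain/drbd; no vg/metavg distinction here
        "pools": ["xenvg", "xenvg-meta"],
    },
    "rados": {
        "pools": ["ganeti-rbd"],
    },
    "file": {
        # limited to a single entry for now
        "pools": ["/srv/ganeti/file-storage"],
    },
}

def allowed_pools(storage_type):
    """Return the list of pools enabled for a storage type."""
    return storage_params.get(storage_type, {}).get("pools", [])
```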
Since file and sharedfile currently support only a single directory,
this list will be limited to one entry. If support for multiple
directories, or for per-nodegroup directories, is added in the future,
this restriction can be lifted.

Note that these are just "mechanism" parameters that define which
storage pools the cluster can use. Further filtering of what is
allowed can go in the ipolicy, but those changes are not covered in
this design doc.

Since the ipolicy currently has a list of enabled storage types, we'll
use that to decide which storage type is the default, to select it for
new instance creations, and for reporting.

Enabling and disabling storage types at ``./configure`` time will
eventually be removed.

RPC changes
-----------

The noded RPC call that reports node storage space will be changed to
accept a list of ``(method, key)`` string tuples. For each of them it
will report the amount of free space found on storage ``key`` as known
by the requested method. Methods are, for example, ``lvm``,
``filesystem`` and ``rados``; the key would be a volume group name for
lvm, a directory name for filesystem, and a pool name for rados.

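The node-side dispatch on such tuples could look like the sketch below.
The reporting functions here are illustrative stand-ins (the real code
would query ``vgs`` and the filesystem), and the names are made up for
this example:

```python
# Sketch of node-side dispatch for the space-reporting RPC; the
# per-method functions are stubs standing in for the real node code.

def _lvm_free(vg_name):
    # the real implementation would query the VG (e.g. via "vgs")
    return {"xenvg": 102400}.get(vg_name)

def _filesystem_free(path):
    # the real implementation would use os.statvfs(path)
    return {"/srv/ganeti/file-storage": 51200}.get(path)

_METHODS = {
    "lvm": _lvm_free,
    "filesystem": _filesystem_free,
}

def get_storage_space(queries):
    """Report free space for a list of (method, key) tuples.

    Unknown methods or keys yield None rather than failing the whole
    request, so one bad entry doesn't hide the others.
    """
    result = []
    for method, key in queries:
        fn = _METHODS.get(method)
        result.append((method, key, fn(key) if fn else None))
    return result
```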
Masterd will know (through a constant map) which storage type uses
which method for space calculation (e.g. ``plain`` and ``drbd`` use
``lvm``, ``file`` and ``sharedfile`` use ``filesystem``, etc.) and
query the needed one (or all of the needed ones).

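The constant map can be very small; a sketch follows (the constant name
is hypothetical, and the storage-type strings follow the examples given
above):

```python
# Maps each storage type to the method used to compute its free space.
STORAGE_TYPE_TO_METHOD = {
    "plain": "lvm",
    "drbd": "lvm",
    "file": "filesystem",
    "sharedfile": "filesystem",
    "rbd": "rados",
}

def methods_for(enabled_types):
    """Return the set of space-reporting methods masterd must query."""
    return {STORAGE_TYPE_TO_METHOD[t] for t in enabled_types}
```

Using a set means that a cluster with only ``plain`` and ``drbd``
enabled triggers a single ``lvm`` query rather than one per type.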
Note that for file and sharedfile the node knows which directories are
allowed and, for security reasons, won't allow any other directory to
be queried. The actual path still needs to be passed to distinguish
the two, as the method is the same for both.

These calculations will be implemented in the node storage system
(currently ``lib/storage.py``), but querying will still happen through
the ``node info`` call, to avoid requiring an extra RPC each time.

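For the ``filesystem`` method, the per-directory computation reduces to
a statvfs call plus the allowed-directory check described above. A
minimal sketch, assuming a hypothetical allowed-directory list:

```python
import os

# Directories the node may report on (illustrative value; in practice
# this comes from the cluster's storage parameters).
ALLOWED_DIRS = frozenset(["/srv/ganeti/file-storage"])

def filesystem_free_mb(path):
    """Return free space in MiB for an allowed storage directory."""
    if path not in ALLOWED_DIRS:
        # refuse to probe arbitrary paths, per the security note above
        raise ValueError("Directory %s not allowed" % path)
    st = os.statvfs(path)
    # f_bavail: blocks available to unprivileged users; f_frsize:
    # fundamental block size
    return (st.f_bavail * st.f_frsize) // (1024 * 1024)
```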
Ganeti reporting
----------------

``gnt-node list`` will by default report information just about the
default storage type. It will be possible to add fields asking about
other storage types, if they are enabled.

``gnt-node info`` will report information about all enabled storage
types, without querying them (it will just say which ones are
supported according to the cluster configuration).

``gnt-node list-storage`` will change to report information about all
available storage pools in each storage type. An extra flag will be
added to filter by storage pool name (alternatively, we could allow
querying by a list of ``type:pool`` string tuples, for a more
comprehensive filter).

Allocator changes
-----------------

The iallocator protocol doesn't need to change: since we know which
storage type an instance has, we'll pass only the "free" value for
that storage type to the iallocator when asking for an allocation to
be made. Note that for DRBD we currently ignore the case when vg and
metavg are different, and only consider the main VG. Fixing this is
outside the scope of this design.

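Picking the single "free" figure to pass for an instance could look like
the sketch below. The node-info field names and dict layout are assumed
for illustration, not the actual iallocator schema:

```python
# Maps disk templates to the (hypothetical) node-info field holding
# the relevant free-space figure.
_FREE_FIELD = {
    "plain": "lvm_free",
    "drbd": "lvm_free",   # only the main VG is considered (see above)
    "file": "fs_free",
    "sharedfile": "fs_free",
}

def free_for_allocation(node_info, disk_template):
    """Pick the free-space value relevant to the instance's storage."""
    return node_info[_FREE_FIELD[disk_template]]
```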
Rebalancing changes
-------------------

Hbal already handles this, so we don't forecast any changes to it.

Space reporting changes
-----------------------

Hspace will by default report space assuming the allocation will
happen on the default storage type for the cluster/nodegroup. An
option will be added to manually specify a different storage type.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: