=======================
Ganeti monitoring agent
=======================

.. contents:: :depth: 4

    
This is a design document detailing the implementation of a Ganeti
monitoring agent report system that can be queried by a monitoring
system to calculate health information for a Ganeti cluster.

    
Current state and shortcomings
==============================

There is currently no monitoring support in Ganeti. While we don't want
to build something like Nagios or Pacemaker as part of Ganeti, it would
be useful if such tools could easily extract information from a Ganeti
machine in order to take actions (example actions include logging an
outage for future reporting or alerting a person or system about it).

    
Proposed changes
================

Each Ganeti node should export a status page that can be queried by a
monitoring system. Such a status page will be exported on a network
port and will be encoded in JSON (simple text) over HTTP.

The choice of JSON is obvious, as we already depend on it in Ganeti
and thus don't need to add extra libraries to use it, as opposed to
what would happen with XML or some other markup format.

    
Location of agent report
------------------------

The report will be available from all nodes, and will cover all
node-local resources. This allows more real-time information to be
available, at the cost of querying all nodes.

    
Information reported
--------------------

The monitoring agent system will report on the following basic information:

- Instance status
- Instance disk status
- Status of storage for instances
- Ganeti daemons status, CPU usage, memory footprint
- Hypervisor resources report (memory, CPU, network interfaces)
- Node OS resources report (memory, CPU, network interfaces)
- Information from a plugin system

    
Instance status
+++++++++++++++

At the moment each node knows which instances are running on it and
which instances it is primary for, but not why an instance might not
be running. On the other hand, we don't want to distribute full
instance "admin" status information to all nodes, because of the
performance impact this would have.

    
As such we propose that:

- Any operation that can affect instance status will have an optional
  "reason" attached to it (at opcode level). This can be used, for
  example, to distinguish an admin request from a scheduled maintenance
  or an automated tool's work. If this reason is not passed, Ganeti
  will just use the information it has about the source of the request:
  for example a cli shutdown operation will have "cli:shutdown" as a
  reason, and a cli failover operation will have "cli:failover".
  Operations coming from the remote API will use "rapi" instead of
  "cli". Of course setting a real site-specific reason is still
  preferred (see the sketch after this list).
- RPCs that affect the instance status will be changed so that the
  "reason" and the version of the config object they ran on are passed
  to them. They will then export the new expected instance status,
  together with the associated reason and object version, to the
  status report system, which will in turn export them.
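
As an illustration, a shutdown opcode annotated in this way might
carry the reason in a dedicated field, roughly as in the following
sketch (the exact serialization and field names are not defined by
this document and are shown here for illustration only)::

  {
    "OP_ID" : "OP_INSTANCE_SHUTDOWN",
    "instance_name" : "instance1.example.com",
    "reason" : "cli:shutdown"
  }

A site-specific reason, e.g. one set by a maintenance tool, would
simply replace the default string shown here.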

    
Monitoring and auditing systems can then use the reason to understand
the cause of an instance status, and they can use the object version
to understand the freshness of their data even in the absence of
atomic cross-node reporting: for example, if they see an instance "up"
on a node after seeing it running on a previous one, they can compare
these values to understand which data is freshest, and repoll the
"older" node. Of course, if they keep seeing this status, it
represents an error (either an instance continuously "flapping"
between nodes, or an instance constantly up on more than one node),
which should be reported and acted upon.

The instance status will be reported by each node, for the instances
it is primary for, and will contain at least:

    
- The instance name
- The instance UUID (stable across name changes)
- The instance running status (up or down)
- The uptime, as detected by the hypervisor
- The timestamp of the last known change
- The timestamp of when the status was last checked (see caching,
  below)
- The last known reason for change, if any
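
A hypothetical status entry for one instance could then look roughly
as follows; all field names are illustrative only, and the uptime is
assumed here to be null for an instance that is not running::

  {
    "name" : "instance1.example.com",
    "uuid" : "aba14a86-57c8-4284-8ded-d3e3632ed757",
    "status" : "down",
    "uptime" : null,
    "mtime" : 1351607182000000000,
    "check_time" : 1351607200000000000,
    "reason" : "cli:shutdown"
  }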

    
More information about all the fields and their types will be
available in the "Format of the report" section.

Note that as soon as a node knows it's no longer the primary for an
instance, it will stop reporting status for it: this means the
instance will either disappear, if it has been deleted, or appear on
another node, if it has been moved.

    
Instance Disk status
++++++++++++++++++++

As for the instance status, Ganeti currently has only partial
information about its instance disks: in particular, each node is
unaware of the disk-to-instance mapping, which exists only on the
master.

For this design document we plan to fix this by changing all RPCs that
create a backend storage device or that put an already existing one in
use, passing the relevant instance to the node. The node can then
export these to the status reporting tool.

While we haven't implemented these RPC changes yet, we'll use confd to
fetch this information in the data collector.

    
Since Ganeti supports many types of disks for instances (drbd, rbd,
plain, file) we will export both a "generic" status, which will work
for any type of disk and will be very opaque (at minimum just a
"healthy" or "error" state, plus perhaps some human readable comment),
and a "per-type" status, which will explain more about the internal
details but will not be compatible between different storage types
(and will, for example, export the drbd connection status, sync
progress, and so on).
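
For a drbd disk, for example, the two levels might be combined roughly
as in the following sketch (the per-type fields shown here are
illustrative only)::

  {
    "status" : "healthy",
    "comment" : "",
    "type" : "drbd",
    "drbd" : { "connection" : "Connected",
               "role" : "Primary",
               "sync_percent" : 100 }
  }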

    
Status of storage for instances
+++++++++++++++++++++++++++++++

The node will also report on all storage types it knows about for the
current node (this is right now hardcoded to the enabled storage
types, and will in the future be tied to the enabled storage pools for
the nodegroup). For this kind of information too we will report both a
generic health status (healthy or error) for each type of storage, and
some generic statistics (free space, used space, total visible space).
In addition, type-specific information can be exported: for example,
in case of error, the nature of the error can be disclosed as
type-specific information. Examples of these are "backend pv
unavailable" for lvm storage, "unreachable" for network-based storage,
or "filesystem error" for filesystem-based implementations.
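
As a sketch, the entry for one storage type could look like the
following, here for an lvm setup with a failed physical volume (field
names and units are illustrative; sizes might for example be expressed
in mebibytes)::

  {
    "type" : "lvm",
    "status" : "error",
    "error" : "backend pv unavailable",
    "total" : 153600,
    "used" : 51200,
    "free" : 102400
  }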

    
Ganeti daemons status
+++++++++++++++++++++

Ganeti will report what information it has about its own daemons: this
includes memory usage, uptime, and CPU usage. This should allow
identifying possible problems with the Ganeti system itself: for
example, memory leaks, crashes and high resource utilization should be
evident when analyzing this information.
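
A per-daemon entry might then look like the following sketch (field
names and units are illustrative only)::

  {
    "daemon" : "ganeti-noded",
    "uptime" : 123456,
    "cpu_percent" : 0.5,
    "memory_kb" : 51200
  }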

    
Ganeti daemons will also be able to export extra internal information
to the status reporting, through the plugin system (see below).

Hypervisor resources report
+++++++++++++++++++++++++++

Each hypervisor has a view of system resources that is sometimes
different from the one the OS sees (for example in Xen the Node OS,
running as Dom0, has access to only part of those resources). In this
section we'll report all the information we can in a "non hypervisor
specific" way. Each hypervisor can then add extra specific information
that is not generic enough to be abstracted.

    
Node OS resources report
++++++++++++++++++++++++

Since Ganeti assumes it's running on Linux, it's useful to export some
basic information as seen by the host system. This includes the number
and status of CPUs, memory, filesystems and network interfaces, as
well as the versions of the components Ganeti interacts with (Linux,
drbd, hypervisor, etc.).

    
176
Note that we won't go into any hardware specific details (e.g. querying a
177
node RAID is outside the scope of this, and can be implemented as a
178
plugin) but we can easily just report the information above, since it's
179
standard enough across all systems.
180

    
181
Plugin system
182
+++++++++++++
183

    
184
The monitoring system will be equipped with a plugin system that can
185
export specific local information through it. The plugin system will be
186
in the form of either scripts whose output will be inserted in the
187
report, plain text files which will be inserted into the report, or
188
local unix or network sockets from which the information has to be read.
189
This should allow most flexibility for implementing an efficient system,
190
while being able to keep it as simple as possible.
191

    
192
The plugin system is expected to be used by local installations to
193
export any installation specific information that they want to be
194
monitored, about either hardware or software on their systems.
195
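
For example, a script plugin checking a RAID controller could simply
print a report object such as the following, with its findings in the
"data" field (the collector name and the contents of "data" are
illustrative; the mandatory fields follow the report format described
below)::

  {
    "name" : "raid-status",
    "version" : "1.0",
    "format_version" : 1,
    "timestamp" : 1351607182000000000,
    "data" : { "controller_state" : "optimal", "failed_disks" : 0 }
  }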

    
Format of the query
-------------------

The query will be an HTTP GET request on a particular port. At the
beginning it will only be possible to query the full status report.
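
A query could then be as simple as the following sketch, where PORT
stands for the port number, which is yet to be defined::

  GET / HTTP/1.1
  Host: node1.example.com:PORT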

    
Format of the report
--------------------

The report will be in JSON format, and it will present an array of
report objects. Each report object will be produced by a specific data
collector. Each report object includes some mandatory fields, to be
provided by all the data collectors, and a field to contain data
collector-specific data.

Here follows a minimal example of a report::

    
  [
  {
      "name" : "TheCollectorIdentifier",
      "version" : "1.2",
      "format_version" : 1,
      "timestamp" : 1351607182000000000,
      "data" : { "plugin_specific_data" : "go_here" }
  },
  {
      "name" : "AnotherDataCollector",
      "version" : "B",
      "format_version" : 7,
      "timestamp" : 1351609526123854000,
      "data" : { "plugin_specific" : "data",
                 "some_late_data" : { "timestamp" : "SPECIFIC_TIME",
                                      ... }
               }
  }
  ]

    
Here is the description of the mandatory fields of each object:

name
  the name of the data collector that produced this part of the
  report. It is supposed to be unique inside a report.

version
  the version of the data collector that produced this part of the
  report. Built-in data collectors (as opposed to those implemented as
  plugins) should have "B" as the version number.

format_version
  the format of what is represented in the "data" field for each data
  collector might change over time. Every time this happens, the
  format_version should be changed, so that whoever reads the report
  knows what format to expect, and how to correctly interpret it.

timestamp
  the time when the reported data were gathered. It has to be
  expressed in nanoseconds since the unix epoch (0:00:00 January 01,
  1970). If not enough precision is available (or needed) it can be
  padded with zeroes: for example, a measurement taken with one-second
  precision at unix time 1351607182 would be reported as
  1351607182000000000. If a report object needs multiple timestamps,
  it can add more and/or override this one inside its own "data"
  section.

data
  this field contains all the data generated by the data collector, in
  its own independently defined format. The monitoring agent could
  check this syntactically (according to the JSON specification) but
  not semantically.

    
Data collectors
---------------

In order to ease testing, as well as to make it simple to reuse this
subsystem, it will be possible to run just the "data collectors" on
each node without passing through the agent daemon. Each data
collector will report specific data about its subsystem and will be
documented separately.

If a data collector is run independently, it should print its report
on stdout, following the format of a single data collector report
object, as described in the previous section.

    
280

    
281
Mode of operation
282
-----------------
283

    
284
In order to be able to report information fast the monitoring agent
285
daemon will keep an in-memory or on-disk cache of the status, which will
286
be returned when queries are made. The status system will then
287
periodically check resources to make sure the status is up to date.
288

    
289
Different parts of the report will be queried at different speeds. These
290
will depend on:
291
- how often they vary (or we expect them to vary)
292
- how fast they are to query
293
- how important their freshness is
294

    
295
Of course the last parameter is installation specific, and while we'll
296
try to have defaults, it will be configurable. The first two instead we
297
can use adaptively to query a certain resource faster or slower
298
depending on those two parameters.
299

    
300

    
Implementation place
--------------------

The status daemon will be implemented as a standalone Haskell daemon.
In the future it should be easy to merge multiple daemons into one
with multiple entry points, should we find out that this saves
resources and doesn't impact functionality.

The libekg library should be looked at for easily providing metrics in
JSON format.

    
Implementation order
--------------------

We will implement the agent system in this order:

- initial example data collectors (e.g. for drbd and instance status;
  data collector-specific report formats TBD)
- initial daemon for exporting data
- RPC updates for instance status reasons and disk-to-instance mapping
- more data collectors
- cache layer for the daemon (if needed)

    
Future work
===========

As a future step it could be useful to "centralize" all this reporting
data in a single place. This could for example be just the master
node, or all the master candidates. We will evaluate doing this after
the first node-local version has been developed and tested.

Another possible change is replacing the "read-only" RPCs with queries
to the agent system, thus having only one way of collecting
information from the nodes, both for a monitoring system and for
Ganeti itself.

One extra feature we may need is a way to query for only sub-parts of
the report (e.g. instance status only). This can be done by passing
arguments to the HTTP GET, which will be defined when we get to this
functionality.

Finally, the :doc:`autorepair system <design-autorepair>` can be
expanded to use the monitoring agent system as a source of information
to decide which repairs it can perform.

    
.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: