=======================
Ganeti monitoring agent
=======================

.. contents:: :depth: 4

This is a design document detailing the implementation of a Ganeti
monitoring agent report system that can be queried by a monitoring
system to calculate health information for a Ganeti cluster.

Current state and shortcomings
==============================

There is currently no monitoring support in Ganeti. While we don't want
to build something like Nagios or Pacemaker as part of Ganeti, it would
be useful if such tools could easily extract information from a Ganeti
machine in order to take actions (example actions include logging an
outage for future reporting or alerting a person or system about it).

Proposed changes
================

Each Ganeti node should export a status page that can be queried by a
monitoring system. Such a status page will be exported on a network port
and will be encoded in JSON (simple text) over HTTP.

The choice of JSON is obvious as we already depend on it in Ganeti and
thus we don't need to add extra libraries to use it, as opposed to what
would happen for XML or some other markup format.

Location of agent report
------------------------

The report will be available from all nodes, and will cover all
node-local resources. This allows more real-time information to be
available, at the cost of querying all nodes.

Information reported
--------------------

The monitoring agent system will report on the following basic information:

- Instance status
- Instance disk status
- Status of storage for instances
- Ganeti daemons status, CPU usage, memory footprint
- Hypervisor resources report (memory, CPU, network interfaces)
- Node OS resources report (memory, CPU, network interfaces)
- Information from a plugin system

Format of the report
--------------------

The report will be in JSON format, and it will present an array of
report objects.
Each report object will be produced by a specific data collector.
Each report object includes some mandatory fields, to be provided by all
the data collectors:

``name``
  The name of the data collector that produced this part of the report.
  It is supposed to be unique inside a report.

``version``
  The version of the data collector that produces this part of the
  report. Built-in data collectors (as opposed to those implemented as
  plugins) should have "B" as the version number.

``format_version``
  The format of what is represented in the "data" field for each data
  collector might change over time. Every time this happens, the
  format_version should be changed, so that whoever reads the report
  knows what format to expect, and how to correctly interpret it.

``timestamp``
  The time when the reported data were gathered. It has to be expressed
  in nanoseconds since the unix epoch (0:00:00 January 01, 1970). If not
  enough precision is available (or needed) it can be padded with
  zeroes. If a report object needs multiple timestamps, it can add more
  and/or override this one inside its own "data" section.

``category``
  A collector can belong to a given category of collectors (e.g.: storage
  collectors, daemon collector). This means that it will have to provide a
  minimum set of prescribed fields, as documented for each category.
  This field will contain the name of the category the collector belongs to,
  if any, or just the ``null`` value.

``kind``
  Two kinds of collectors are possible:
  `Performance reporting collectors`_ and `Status reporting collectors`_.
  The respective paragraphs will describe them and the value of this field.

``data``
  This field contains all the data generated by the specific data collector,
  in its own independently defined format. The monitoring agent could check
  this syntactically (according to the JSON specifications) but not
  semantically.

Here follows a minimal example of a report::

  [
  {
      "name" : "TheCollectorIdentifier",
      "version" : "1.2",
      "format_version" : 1,
      "timestamp" : 1351607182000000000,
      "category" : null,
      "kind" : 0,
      "data" : { "plugin_specific_data" : "go_here" }
  },
  {
      "name" : "AnotherDataCollector",
      "version" : "B",
      "format_version" : 7,
      "timestamp" : 1351609526123854000,
      "category" : "storage",
      "kind" : 1,
      "data" : { "status" : { "code" : 1,
                              "message" : "Error on disk 2"
                            },
                 "plugin_specific" : "data",
                 "some_late_data" : { "timestamp" : 1351609526123942720,
                                      ...
                                    }
               }
  }
  ]

Performance reporting collectors
++++++++++++++++++++++++++++++++

These collectors only provide data about some component of the system,
without giving any interpretation of its meaning.

The value of the ``kind`` field of the report will be ``0``.

Status reporting collectors
+++++++++++++++++++++++++++

These collectors will provide information about the status of some
component of Ganeti, or of a component managed by Ganeti.

The value of their ``kind`` field will be ``1``.

The rationale behind this kind of collector is that there are some situations
where exporting data about the underlying subsystems would expose potential
issues. But if Ganeti itself is able to fix the problem (and is going to do
so), conflicts might arise between Ganeti and something/somebody else trying
to fix the same problem.
Also, some external monitoring systems might not be aware of the internals of a
particular subsystem (e.g.: DRBD) and might only exploit the high level
response of its data collector, alerting an administrator if anything is wrong.
Still, completely hiding the underlying data is not a good idea, as it might
still be of use in some cases. So status reporting collectors will provide two
output modes: one just exporting high level information about the status,
and one also exporting all the data they gathered.
The default output mode will be the status-only one. The verbose output mode
providing all the data can be selected through a command line parameter (for
stand-alone data collectors) or through the HTTP request to the monitoring
agent (when collectors are executed as part of it).

When exporting just the status, each status reporting collector will provide,
in its ``data`` section, at least the following field:

``status``
  summarizes the status of the component being monitored and consists of two
  subfields:

  ``code``
    It assumes a numeric value, encoded in such a way as to allow using a
    bitset to easily distinguish which states are currently present in the
    whole cluster. If the bitwise OR of all the ``status`` fields is 0, the
    cluster is completely healthy (see the worked example below).
    The status codes are as follows:

    ``0``
      The collector can determine that everything is working as
      intended.

    ``1``
      Something is temporarily wrong but it is being automatically fixed by
      Ganeti.
      There is no need for external intervention.

    ``2``
      The collector has failed to understand whether the status is good or
      bad. Further analysis is required. Interpret this status as a
      potentially dangerous situation.

    ``4``
      The collector can determine that something is wrong and Ganeti has no
      way to fix it autonomously. External intervention is required.

  ``message``
    A message to better explain the reason of the status.
    The exact format of the message string is data collector dependent.

    The field is mandatory, but the content can be an empty string if the
    ``code`` is ``0`` (working as intended) or ``1`` (being fixed
    automatically).

    If the status code is ``2``, the message should explain why it was not
    possible to determine a proper status.
    If the status code is ``4``, the message should specify what has gone
    wrong.

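To illustrate the bitset encoding described above (the codes are those
defined in this design; the aggregation itself is up to the monitoring
system), ORing the ``code`` values collected across the cluster keeps
track of every distinct state that occurred::

  0 | 0 | 0 = 0    every collector reports 0: cluster completely healthy
  1 | 0 | 4 = 5    both "being fixed automatically" (1) and "external
                   intervention required" (4) are present somewhere
  1 | 2     = 3    an automatic fix is in progress (1) and one collector
                   could not determine a status (2)

Since each state is a distinct bit, the OR records which states are
present, although it does not say how many collectors reported each one.
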
The ``data`` section will also contain all the fields describing the gathered
data, according to a collector-specific format.

Instance status
+++++++++++++++

At the moment each node knows which instances are running on it and which
instances it is primary for, but not why an instance might not be
running. On the other hand we don't want to distribute full instance
"admin" status information to all nodes, because of the performance
impact this would have.

As such we propose that:

- Any operation that can affect instance status will have an optional
  "reason" attached to it (at opcode level). This can be used for
  example to distinguish an admin request from a scheduled maintenance
  or an automated tool's work. If this reason is not passed, Ganeti will
  just use the information it has about the source of the request.
  This reason information will be structured according to the
  :doc:`Ganeti reason trail <design-reason-trail>` design document.
- RPCs that affect the instance status will be changed so that the
  "reason" and the version of the config object they ran on are passed to
  them. They will then export the new expected instance status, together
  with the associated reason and object version, to the status report
  system, which will then export those itself.

Monitoring and auditing systems can then use the reason to understand
the cause of an instance status, and they can use the timestamp to
understand the freshness of their data even in the absence of atomic
cross-node reporting: for example if they see an instance "up" on a node
after seeing it running on a previous one, they can compare these values
to understand which data is freshest, and repoll the "older" node. Of
course if they keep seeing this status it represents an error (either an
instance continuously "flapping" between nodes, or an instance constantly
up on more than one), which should be reported and acted upon.

The instance status will be reported by each node, for the instances it
is primary for, and the ``data`` section of the report will contain a list
of instances, with at least the following fields for each instance:

``name``
  The name of the instance.

``uuid``
  The UUID of the instance (stable on name change).

``admin_state``
  The status of the instance (up/down/offline) as requested by the admin.

``actual_state``
  The actual status of the instance. It can be ``up``, ``down``, or
  ``hung`` if the instance is up but it appears to be completely stuck.

``uptime``
  The uptime of the instance (if it is up, "null" otherwise).

``mtime``
  The timestamp of the last known change to the instance state.

``state_reason``
  The last known reason for state change of the instance, described according
  to the JSON representation of a reason trail, as detailed in the
  :doc:`reason trail design document <design-reason-trail>`.

``status``
  It represents the status of the instance, and its format is the same as that
  of the ``status`` field of `Status reporting collectors`_.

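As an illustrative sketch only (the instance name, timestamps and reason
trail entry are made up, and the exact reason trail encoding is specified
in the reason trail design document), one element of the ``data`` list
could look like::

  { "name" : "instance1.example.com",
    "uuid" : "11111111-2222-3333-4444-555555555555",
    "admin_state" : "up",
    "actual_state" : "up",
    "uptime" : 722.5,
    "mtime" : 1351609526123854000,
    "state_reason" : [ [ "gnt:opcode:startup", "scheduled maintenance",
                         1351609526123854000 ] ],
    "status" : { "code" : 0, "message" : "" }
  }
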
Each hypervisor should provide its own instance status data collector,
possibly with the addition of more specific fields.
The ``category`` field of all of them will be ``instance``.
The ``kind`` field will be ``1``.

Note that as soon as a node knows it's not the primary anymore for an
instance it will stop reporting status for it: this means the instance
will either disappear, if it has been deleted, or appear on another
node, if it's been moved.

The ``code`` of the ``status`` field of the report of the instance status
data collector will be:

``0``
  if ``status`` is ``0`` for all the instances it is reporting about.

``1``
  otherwise.

Storage status
++++++++++++++

The storage status collectors will be a series of data collectors
(drbd, rbd, plain, file) that will gather data about all the storage types
for the current node (this is right now hardcoded to the enabled storage
types, and in the future tied to the enabled storage pools for the
nodegroup).

The ``name`` of each of these collectors will reflect what storage type
each of them refers to.

The ``category`` field of these collectors will be ``storage``.

The ``kind`` field will be ``1`` (`Status reporting collectors`_).

The ``data`` section of the report will provide at least the following fields:

``free``
  The amount of free space (in KBytes).

``used``
  The amount of used space (in KBytes).

``total``
  The total visible space (in KBytes).

Each specific storage type might provide more type-specific fields.

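For example, a ``plain`` (LVM) storage collector reporting the fields
above might produce a ``data`` section such as the following sketch (all
sizes are illustrative)::

  { "status" : { "code" : 0, "message" : "" },
    "free" : 244318208,
    "used" : 17825792,
    "total" : 262144000
  }
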
In case of error, the ``message`` subfield of the ``status`` field of the
report of the storage status collector will disclose the nature of the error
as type-specific information. Examples of these are "backend pv unavailable"
for lvm storage, "unreachable" for network based storage or "filesystem error"
for filesystem based implementations.

DRBD status
***********

This data collector will run only on nodes where DRBD is actually
present and it will gather information about DRBD devices.

Its ``kind`` in the report will be ``1`` (`Status reporting collectors`_).

Its ``category`` field in the report will contain the value ``storage``.

When executed in verbose mode, the ``data`` section of the report of this
collector will provide the following fields:

``versionInfo``
  Information about the DRBD version number, given by a combination of
  any (but at least one) of the following fields:

  ``version``
    The DRBD driver version.

  ``api``
    The API version number.

  ``proto``
    The protocol version.

  ``srcversion``
    The version of the source files.

  ``gitHash``
    Git hash of the source files.

  ``buildBy``
    Who built the binary, and, optionally, when.

``device``
  A list of structures, each describing a DRBD device (a minor) and containing
  the following fields:

  ``minor``
    The device minor number.

  ``connectionState``
    The state of the connection. If it is "Unconfigured", all the following
    fields are not present.

  ``localRole``
    The role of the local resource.

  ``remoteRole``
    The role of the remote resource.

  ``localState``
    The status of the local disk.

  ``remoteState``
    The status of the remote disk.

  ``replicationProtocol``
    The replication protocol being used.

  ``ioFlags``
    The input/output flags.

  ``perfIndicators``
    The performance indicators. This field will contain the following
    sub-fields:

    ``networkSend``
      KiB of data sent on the network.

    ``networkReceive``
      KiB of data received from the network.

    ``diskWrite``
      KiB of data written on local disk.

    ``diskRead``
      KiB of data read from the local disk.

    ``activityLog``
      Number of updates of the activity log.

    ``bitMap``
      Number of updates to the bitmap area of the metadata.

    ``localCount``
      Number of open requests to the local I/O subsystem.

    ``pending``
      Number of requests sent to the partner but not yet answered.

    ``unacknowledged``
      Number of requests received by the partner but still to be answered.

    ``applicationPending``
      Number of block input/output requests forwarded to DRBD but that have
      not yet been answered.

    ``epochs``
      (Optional) Number of epoch objects. Not provided by all DRBD versions.

    ``writeOrder``
      (Optional) Currently used write ordering method. Not provided by all DRBD
      versions.

    ``outOfSync``
      (Optional) KiB of storage currently out of sync. Not provided by all DRBD
      versions.

  ``syncStatus``
    (Optional) The status of the synchronization of the disk. This is present
    only if the disk is being synchronized, and includes the following fields:

    ``percentage``
      The percentage of synchronized data.

    ``progress``
      How far the synchronization is. Written as "x/y", where x and y are
      integer numbers expressed in the measurement unit stated in
      ``progressUnit``.

    ``progressUnit``
      The measurement unit for the progress indicator.

    ``timeToFinish``
      The expected time before finishing the synchronization.

    ``speed``
      The speed of the synchronization.

    ``want``
      The desired speed of the synchronization.

    ``speedUnit``
      The measurement unit of the ``speed`` and ``want`` values. Expressed
      as "size/time".

  ``instance``
    The name of the Ganeti instance this disk is associated with.

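As a sketch of the verbose output (all values are illustrative and several
optional fields are omitted), one element of the ``device`` list could
look like::

  { "minor" : 0,
    "connectionState" : "Connected",
    "localRole" : "Primary",
    "remoteRole" : "Secondary",
    "localState" : "UpToDate",
    "remoteState" : "UpToDate",
    "replicationProtocol" : "C",
    "ioFlags" : "r----",
    "perfIndicators" : { "networkSend" : 1024, "networkReceive" : 2048,
                         "diskWrite" : 4096, "diskRead" : 512,
                         "activityLog" : 12, "bitMap" : 0,
                         "localCount" : 0, "pending" : 0,
                         "unacknowledged" : 0, "applicationPending" : 0 },
    "instance" : "instance1.example.com"
  }
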
Ganeti daemons status
+++++++++++++++++++++

Ganeti will report what information it has about its own daemons.
This should allow identifying possible problems with the Ganeti system itself:
for example memory leaks, crashes and high resource utilization should be
evident by analyzing this information.

The ``kind`` field will be ``1`` (`Status reporting collectors`_).

Each daemon will have its own data collector, and each of them will have
a ``category`` field valued ``daemon``.

When executed in verbose mode, their ``data`` section will include at least:

``memory``
  The amount of used memory.

``size_unit``
  The measurement unit used for the memory.

``uptime``
  The uptime of the daemon.

``CPU usage``
  How much cpu the daemon is using (percentage).

Any other daemon-specific information can be included as well in the ``data``
section.

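A verbose report for a single daemon could then carry a ``data`` section
like the following sketch (the values are illustrative only)::

  { "status" : { "code" : 0, "message" : "" },
    "memory" : 63432,
    "size_unit" : "KiB",
    "uptime" : 262001.1,
    "CPU usage" : 0.5
  }
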
Hypervisor resources report
+++++++++++++++++++++++++++

Each hypervisor has a view of system resources that sometimes is
different than the one the OS sees (for example in Xen the Node OS,
running as Dom0, has access to only part of those resources). In this
section we'll report all information we can in a "non hypervisor
specific" way. Each hypervisor can then add extra specific information
that is not generic enough to be abstracted.

The ``kind`` field will be ``0`` (`Performance reporting collectors`_).

Each of the hypervisor data collectors will have ``hypervisor`` as the
value of its ``category`` field.

Node OS resources report
++++++++++++++++++++++++

Since Ganeti assumes it's running on Linux, it's useful to export some
basic information as seen by the host system.

The ``category`` field of the report will be ``null``.

The ``kind`` field will be ``0`` (`Performance reporting collectors`_).

The ``data`` section will include:

``cpu_number``
  The number of available cpus.

``cpus``
  A list with one element per cpu, showing its average load.

``memory``
  The current view of memory (free, used, cached, etc.).

``filesystem``
  A list with one element per filesystem, showing a summary of the
  total/available space.

``NICs``
  A list with one element per network interface, showing the amount of
  sent/received data, error rate, IP address of the interface, etc.

``versions``
  A map using the name of a component Ganeti interacts with (Linux, DRBD,
  hypervisor, etc.) as the key and its version number as the value.

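Putting these together, the ``data`` section of a node OS report could
look roughly like the sketch below (all values are illustrative, and the
exact sub-structure of ``memory``, ``filesystem`` and ``NICs`` is left to
the implementation)::

  { "cpu_number" : 4,
    "cpus" : [ 0.15, 0.02, 0.31, 0.08 ],
    "memory" : { "free" : 2048000, "used" : 1024000, "cached" : 512000 },
    "filesystem" : [ { "mountpoint" : "/", "total" : 104857600,
                       "available" : 52428800 } ],
    "NICs" : [ { "name" : "eth0", "ip" : "192.0.2.10",
                 "sent" : 123456, "received" : 654321, "errors" : 0 } ],
    "versions" : { "linux" : "3.2.0", "drbd" : "8.3.11" }
  }
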
Note that we won't go into any hardware specific details (e.g. querying a
node RAID is outside the scope of this, and can be implemented as a
plugin) but we can easily just report the information above, since it's
standard enough across all systems.

Format of the query
-------------------

.. include:: monitoring-query-format.rst

Instance disk status propagation
--------------------------------

As with the instance status, Ganeti currently has only partial information
about its instance disks: in particular each node is unaware of the disk
to instance mapping, which exists only on the master.

For this design doc we plan to fix this by changing all RPCs that create
a backend storage or that put an already existing one in use, passing the
relevant instance to the node. The node can then export these to the
status reporting tool.

While we haven't implemented these RPC changes yet, we'll use Confd to
fetch this information in the data collectors.

Plugin system
-------------

The monitoring system will be equipped with a plugin system, so that
specific local information can be exported through it.

The plugin system is expected to be used by local installations to
export any installation specific information that they want to be
monitored, about either hardware or software on their systems.

The plugin system will be in the form of either scripts or binaries whose
output will be inserted in the report.

Eventually support for other kinds of plugins might be added as well, such as
plain text files which will be inserted into the report, or local unix or
network sockets from which the information has to be read. This should allow
the most flexibility for implementing an efficient system, while being able
to keep it as simple as possible.

Data collectors
---------------

In order to ease testing, as well as to make it simple to reuse this
subsystem, it will be possible to run just the "data collectors" on each
node without passing through the agent daemon.

If a data collector is run independently, it should print its report on
stdout, according to the format corresponding to a single data collector
report object, as described in the previous paragraphs.

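For example, a stand-alone run of a (hypothetical) DRBD collector on a
healthy node would print a single report object similar to this sketch::

  { "name" : "drbd",
    "version" : "B",
    "format_version" : 1,
    "timestamp" : 1351609526123854000,
    "category" : "storage",
    "kind" : 1,
    "data" : { "status" : { "code" : 0, "message" : "" } }
  }
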
Mode of operation
-----------------

In order to be able to report information fast the monitoring agent
daemon will keep an in-memory or on-disk cache of the status, which will
be returned when queries are made. The status system will then
periodically check resources to make sure the status is up to date.

Different parts of the report will be queried at different speeds. These
will depend on:

- how often they vary (or we expect them to vary)
- how fast they are to query
- how important their freshness is

Of course the last parameter is installation specific, and while we'll
try to have defaults, it will be configurable. The first two instead we
can use adaptively to query a certain resource faster or slower
depending on those two parameters.

When run as stand-alone binaries, the data collectors will not use any
caching system, but will just fetch and return the data immediately.

Implementation place
--------------------

The status daemon will be implemented as a standalone Haskell daemon. In
the future it should be easy to merge multiple daemons into one with
multiple entry points, should we find out it saves resources and doesn't
impact functionality.

The libekg library should be looked at for easily providing metrics in
JSON format.

Implementation order
--------------------

We will implement the agent system in this order:

- initial example data collectors (e.g. for DRBD and instance status)
- initial daemon for exporting data, integrating the existing collectors
- plugin system
- RPC updates for instance status reasons and disk to instance mapping
- cache layer for the daemon
- more data collectors

Future work
===========

As a future step it can be useful to "centralize" all this reporting
data in a single place. This for example can be just the master node, or
all the master candidates. We will evaluate doing this after the first
node-local version has been developed and tested.

Another possible change is replacing the "read-only" RPCs with queries
to the agent system, thus having only one way of collecting information
from the nodes, both for a monitoring system and for Ganeti itself.

One extra feature we may need is a way to query for only sub-parts of
the report (e.g. instance status only). This can be done by passing
arguments to the HTTP GET, which will be defined when we get to this
functionality.

Finally, the :doc:`autorepair system <design-autorepair>` can be
expanded to use the monitoring agent system as a source of information
to decide which repairs it can perform.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: