=======================
Ganeti monitoring agent
=======================

.. contents:: :depth: 4

This is a design document detailing the implementation of a Ganeti
monitoring agent report system that can be queried by a monitoring
system to calculate health information for a Ganeti cluster.

Current state and shortcomings
==============================

There is currently no monitoring support in Ganeti. While we don't want
to build something like Nagios or Pacemaker as part of Ganeti, it would
be useful if such tools could easily extract information from a Ganeti
machine in order to take actions (example actions include logging an
outage for future reporting or alerting a person or system about it).

Proposed changes
================

Each Ganeti node should export a status page that can be queried by a
monitoring system. Such a status page will be exported on a network port
and will be encoded in JSON (simple text) over HTTP.

The choice of JSON is obvious as we already depend on it in Ganeti and
thus we don't need to add extra libraries to use it, as opposed to what
would happen for XML or some other markup format.

Location of agent report
------------------------

The report will be available from all nodes, and be concerned with all
node-local resources. This allows more real-time information to be
available, at the cost of querying all nodes.

Information reported
--------------------

The monitoring agent system will report on the following basic information:

- Instance status
- Instance disk status
- Status of storage for instances
- Ganeti daemons status, CPU usage, memory footprint
- Hypervisor resources report (memory, CPU, network interfaces)
- Node OS resources report (memory, CPU, network interfaces)
- Information from a plugin system

Format of the report
--------------------

The report will be in JSON format, and it will present an array of
report objects.
Each report object will be produced by a specific data collector.
Each report object includes some mandatory fields, to be provided by all
the data collectors:

``name``
  The name of the data collector that produced this part of the report.
  It is supposed to be unique inside a report.

``version``
  The version of the data collector that produces this part of the
  report. Built-in data collectors (as opposed to those implemented as
  plugins) should have "B" as the version number.

``format_version``
  The format of what is represented in the "data" field for each data
  collector might change over time. Every time this happens, the
  format_version should be changed, so that whoever reads the report
  knows what format to expect, and how to correctly interpret it.

``timestamp``
  The time when the reported data were gathered. It has to be expressed
  in nanoseconds since the unix epoch (0:00:00 January 01, 1970). If not
  enough precision is available (or needed) it can be padded with
  zeroes. If a report object needs multiple timestamps, it can add more
  and/or override this one inside its own "data" section.

``category``
  A collector can belong to a given category of collectors (e.g.: storage
  collectors, daemon collectors). This means that it will have to provide a
  minimum set of prescribed fields, as documented for each category.
  This field will contain the name of the category the collector belongs to,
  if any, or just the ``null`` value.

``kind``
  Two kinds of collectors are possible:
  `Performance reporting collectors`_ and `Status reporting collectors`_.
  The respective paragraphs will describe them and the value of this field.

``data``
  This field contains all the data generated by the specific data collector,
  in its own independently defined format. The monitoring agent could check
  this syntactically (according to the JSON specification) but not
  semantically.

Here follows a minimal example of a report::

  [
  {
      "name" : "TheCollectorIdentifier",
      "version" : "1.2",
      "format_version" : 1,
      "timestamp" : 1351607182000000000,
      "category" : null,
      "kind" : 0,
      "data" : { "plugin_specific_data" : "go_here" }
  },
  {
      "name" : "AnotherDataCollector",
      "version" : "B",
      "format_version" : 7,
      "timestamp" : 1351609526123854000,
      "category" : "storage",
      "kind" : 1,
      "data" : { "status" : { "code" : 1,
                              "message" : "Error on disk 2"
                            },
                 "plugin_specific" : "data",
                 "some_late_data" : { "timestamp" : 1351609526123942720,
                                      ...
                                    }
               }
  }
  ]
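
The mandatory part of this format lends itself to mechanical checking.
The following Python fragment is only an illustrative sketch of such a
check on the consumer side (the function name is an assumption, not part
of the design)::

  import json

  MANDATORY_FIELDS = ("name", "version", "format_version", "timestamp",
                      "category", "kind", "data")

  def check_report(text):
      """Check that every report object carries the mandatory fields."""
      for obj in json.loads(text):
          missing = [f for f in MANDATORY_FIELDS if f not in obj]
          if missing:
              raise ValueError("report %r lacks fields: %s"
                               % (obj.get("name"), ", ".join(missing)))
          if obj["kind"] not in (0, 1):
              raise ValueError("unknown collector kind: %r" % obj["kind"])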

Performance reporting collectors
++++++++++++++++++++++++++++++++

These collectors only provide data about some component of the system,
without giving any interpretation of their meaning.

The value of the ``kind`` field of the report will be ``0``.

Status reporting collectors
+++++++++++++++++++++++++++

These collectors will provide information about the status of some
component of Ganeti, or of some component managed by Ganeti.

The value of their ``kind`` field will be ``1``.

The rationale behind this kind of collectors is that there are some situations
where exporting data about the underlying subsystems would expose potential
issues. But if Ganeti itself is able (and going) to fix the problem, conflicts
might arise between Ganeti and something/somebody else trying to fix the same
problem.
Also, some external monitoring systems might not be aware of the internals of
a particular subsystem (e.g.: DRBD) and might only exploit the high level
response of its data collector, alerting an administrator if anything is
wrong.
Still, completely hiding the underlying data is not a good idea, as they might
still be of use in some cases. So status reporting plugins will provide two
output modes: one just exporting high level information about the status,
and one also exporting all the data they gathered.
The default output mode will be the status-only one. Through a command line
parameter (for stand-alone data collectors) or through the HTTP request to
the monitoring agent (when collectors are executed as part of it) the verbose
output mode providing all the data can be selected.

When exporting just the status, each status reporting collector will provide,
in its ``data`` section, at least the following field:

``status``
  summarizes the status of the component being monitored and consists of two
  subfields:

  ``code``
    It assumes a numeric value, encoded in such a way as to allow using a
    bitset to easily distinguish which states are currently present in the
    whole cluster. If the bitwise OR of all the ``status`` fields is 0, the
    cluster is completely healthy (a small sketch of this aggregation is
    given after this field list).
    The status codes are as follows:

    ``0``
      The collector can determine that everything is working as
      intended.

    ``1``
      Something is temporarily wrong but it is being automatically fixed
      by Ganeti. There is no need for external intervention.

    ``2``
      The collector has failed to understand whether the status is good or
      bad. Further analysis is required. Interpret this status as a
      potentially dangerous situation.

    ``4``
      The collector can determine that something is wrong and Ganeti has no
      way to fix it autonomously. External intervention is required.

  ``message``
    A message to better explain the reason of the status.
    The exact format of the message string is data collector dependent.

    The field is mandatory, but the content can be an empty string if the
    ``code`` is ``0`` (working as intended) or ``1`` (being fixed
    automatically).

    If the status code is ``2``, the message should explain why it was not
    possible to determine a proper status.
    If the status code is ``4``, the message should specify what has gone
    wrong.
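
To illustrate the intended use of the bitset encoding, a consumer could
aggregate the codes of all collected reports as in the following sketch
(the helper name and the input shape are assumptions made only for this
example)::

  def cluster_health(status_codes):
      """OR together the ``status`` codes gathered from all reports.

      A result of 0 means the cluster is completely healthy; any other
      value is a bitset of the non-healthy states present (e.g. 5, that
      is 1 | 4, means something is being fixed automatically somewhere
      while something else needs external intervention).
      """
      health = 0
      for code in status_codes:
          health |= code
      return health

  assert cluster_health([0, 1, 4]) == 5  # codes from three collectors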

The ``data`` section will also contain all the fields describing the gathered
data, according to a collector-specific format.

Instance status
+++++++++++++++

At the moment each node knows which instances are running on it, which
instances it is primary for, but not the cause why an instance might not
be running. On the other hand we don't want to distribute full instance
"admin" status information to all nodes, because of the performance
impact this would have.

As such we propose that:

- Any operation that can affect instance status will have an optional
  "reason" attached to it (at opcode level). This can be used for
  example to distinguish an admin request from a scheduled maintenance
  or an automated tool's work. If this reason is not passed, Ganeti will
  just use the information it has about the source of the request: for
  example a cli shutdown operation will have "cli:shutdown" as a reason,
  a cli failover operation will have "cli:failover". Operations coming
  from the remote API will use "rapi" instead of "cli". Of course
  setting a real site-specific reason is still preferred.
- RPCs that affect the instance status will be changed so that the
  "reason" and the version of the config object they ran on are passed to
  them. They will then export the new expected instance status, together
  with the associated reason and object version, to the status report
  system, which then will export those themselves.

Monitoring and auditing systems can then use the reason to understand
the cause of an instance status, and they can use the timestamp to
understand the freshness of their data even in the absence of atomic
cross-node reporting: for example if they see an instance "up" on a node
after seeing it running on a previous one, they can compare these values
to understand which data is freshest, and repoll the "older" node. Of
course if they keep seeing this status this represents an error (either
an instance continuously "flapping" between nodes, or an instance
constantly up on more than one), which should be reported and acted
upon.

The instance status will be reported by each node, for the instances it
is primary for, and the ``data`` section of the report will contain a
list of instances, with at least the following fields for each instance:

``name``
  The name of the instance.

``uuid``
  The UUID of the instance (stable on name change).

``admin_state``
  The status of the instance (up/down/offline) as requested by the admin.

``actual_state``
  The actual status of the instance. It can be ``up``, ``down``, or
  ``hung`` if the instance is up but it appears to be completely stuck.

``uptime``
  The uptime of the instance (if it is up, ``null`` otherwise).

``mtime``
  The timestamp of the last known change to the instance state.

``state_reason``
  The last known reason for state change, described according to the
  following subfields:

  ``text``
    Either a user-provided reason (if any), or the name of the command that
    triggered the state change, as a fallback.

  ``jobID``
    The ID of the job that caused the state change.

  ``source``
    Where the state change was triggered (RAPI, CLI).

``status``
  It represents the status of the instance, and its format is the same as
  that of the ``status`` field of `Status reporting collectors`_.

Each hypervisor should provide its own instance status data collector,
possibly with the addition of more specific fields.
The ``category`` field of all of them will be ``instance``.
The ``kind`` field will be ``1``.

Note that as soon as a node knows it's not the primary anymore for an
instance it will stop reporting status for it: this means the instance
will either disappear, if it has been deleted, or appear on another
node, if it's been moved.

The ``code`` of the ``status`` field of the report of the Instance status
data collector will be (a short sketch of this derivation follows the list):

``0``
  if ``status`` is ``0`` for all the instances it is reporting about.

``1``
  otherwise.
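
As an illustration of this rule, the derivation could be sketched as
follows (the function name and the shape of the instance entries are
assumptions made only for this example)::

  def instance_collector_code(instances):
      """Return 0 if every instance reports status code 0, 1 otherwise."""
      if all(inst["status"]["code"] == 0 for inst in instances):
          return 0
      return 1

  # Example with two hypothetical instance entries.
  instances = [
      {"name": "web1", "status": {"code": 0, "message": ""}},
      {"name": "db1", "status": {"code": 4, "message": "instance is hung"}},
  ]
  assert instance_collector_code(instances) == 1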

Storage status
++++++++++++++

The storage status collectors will be a series of data collectors
(drbd, rbd, plain, file) that will gather data about all the storage types
for the current node (this is right now hardcoded to the enabled storage
types, and in the future tied to the enabled storage pools for the
nodegroup).

The ``name`` of each of these collectors will reflect what storage type
each of them refers to.

The ``category`` field of these collectors will be ``storage``.

The ``kind`` field will be ``1`` (`Status reporting collectors`_).

The ``data`` section of the report will provide at least the following fields:

``free``
  The amount of free space (in KBytes).

``used``
  The amount of used space (in KBytes).

``total``
  The total visible space (in KBytes).

Each specific storage type might provide more type-specific fields.
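
For illustration only, the ``data`` section of a hypothetical ``plain``
(LVM-backed) storage collector might then look like the following sketch,
written here as a Python literal with made-up numbers::

  plain_storage_data = {
      "status": {"code": 0, "message": ""},
      "free": 524288,    # KBytes still available in the volume group
      "used": 1048576,   # KBytes currently allocated to logical volumes
      "total": 1572864,  # total visible KBytes
      # type-specific fields could follow, e.g. the volume group name
  }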

In case of error, the ``message`` subfield of the ``status`` field of the
report of the storage status collector will disclose the nature of the
error as type-specific information. Examples of these are "backend pv
unavailable" for lvm storage, "unreachable" for network based storage or
"filesystem error" for filesystem based implementations.

DRBD status
***********

This data collector will run only on nodes where DRBD is actually
present and it will gather information about DRBD devices.

Its ``kind`` in the report will be ``1`` (`Status reporting collectors`_).

Its ``category`` field in the report will contain the value ``storage``.

When executed in verbose mode, the ``data`` section of the report of this
collector will provide the following fields (a small parsing sketch is
given after the field list):

``versionInfo``
  Information about the DRBD version number, given by a combination of
  any (but at least one) of the following fields:

  ``version``
    The DRBD driver version.

  ``api``
    The API version number.

  ``proto``
    The protocol version.

  ``srcversion``
    The version of the source files.

  ``gitHash``
    Git hash of the source files.

  ``buildBy``
    Who built the binary, and, optionally, when.

``device``
  A list of structures, each describing a DRBD device (a minor) and containing
  the following fields:

  ``minor``
    The device minor number.

  ``connectionState``
    The state of the connection. If it is "Unconfigured", all the following
    fields are not present.

  ``localRole``
    The role of the local resource.

  ``remoteRole``
    The role of the remote resource.

  ``localState``
    The status of the local disk.

  ``remoteState``
    The status of the remote disk.

  ``replicationProtocol``
    The replication protocol being used.

  ``ioFlags``
    The input/output flags.

  ``perfIndicators``
    The performance indicators. This field will contain the following
    sub-fields:

    ``networkSend``
      KiB of data sent on the network.

    ``networkReceive``
      KiB of data received from the network.

    ``diskWrite``
      KiB of data written on local disk.

    ``diskRead``
      KiB of data read from the local disk.

    ``activityLog``
      Number of updates of the activity log.

    ``bitMap``
      Number of updates to the bitmap area of the metadata.

    ``localCount``
      Number of open requests to the local I/O subsystem.

    ``pending``
      Number of requests sent to the partner but not yet answered.

    ``unacknowledged``
      Number of requests received by the partner but still to be answered.

    ``applicationPending``
      Number of block input/output requests forwarded to DRBD but that have
      not yet been answered.

    ``epochs``
      (Optional) Number of epoch objects. Not provided by all DRBD versions.

    ``writeOrder``
      (Optional) Currently used write ordering method. Not provided by all
      DRBD versions.

    ``outOfSync``
      (Optional) KiB of storage currently out of sync. Not provided by all
      DRBD versions.

  ``syncStatus``
    (Optional) The status of the synchronization of the disk. This is present
    only if the disk is being synchronized, and includes the following fields:

    ``percentage``
      The percentage of synchronized data.

    ``progress``
      How far the synchronization is. Written as "x/y", where x and y are
      integer numbers expressed in the measurement unit stated in
      ``progressUnit``.

    ``progressUnit``
      The measurement unit for the progress indicator.

    ``timeToFinish``
      The expected time before finishing the synchronization.

    ``speed``
      The speed of the synchronization.

    ``want``
      The desired speed of the synchronization.

    ``speedUnit``
      The measurement unit of the ``speed`` and ``want`` values. Expressed
      as "size/time".

  ``instance``
    The name of the Ganeti instance this disk is associated with.
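
The performance indicators above closely mirror the per-device counters
exposed by the kernel. The following fragment is a purely illustrative
sketch of how they could be extracted, assuming the common ``/proc/drbd``
counter line layout (abbreviations such as ``ns``, ``nr``, ``dw``, ``dr``);
it is not a specification of the collector's implementation::

  # Map /proc/drbd counter abbreviations to the report sub-field names.
  _PERF_KEYS = {
      "ns": "networkSend", "nr": "networkReceive",
      "dw": "diskWrite", "dr": "diskRead",
      "al": "activityLog", "bm": "bitMap",
      "lo": "localCount", "pe": "pending",
      "ua": "unacknowledged", "ap": "applicationPending",
      "ep": "epochs", "oos": "outOfSync",
  }

  def parse_perf_indicators(line):
      """Parse a counter line like 'ns:189 nr:0 dw:189 dr:1028 ...'."""
      result = {}
      for token in line.split():
          key, _, value = token.partition(":")
          if key in _PERF_KEYS:
              result[_PERF_KEYS[key]] = int(value)
          elif key == "wo":  # write ordering method is a letter, not a number
              result["writeOrder"] = value
      return result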


Ganeti daemons status
+++++++++++++++++++++

Ganeti will report what information it has about its own daemons.
This should allow identifying possible problems with the Ganeti system itself:
for example memory leaks, crashes and high resource utilization should be
evident by analyzing this information.

The ``kind`` field will be ``1`` (`Status reporting collectors`_).

Each daemon will have its own data collector, and each of them will have
a ``category`` field valued ``daemon``.

When executed in verbose mode, their ``data`` section will include at least:

``memory``
  The amount of used memory.

``size_unit``
  The measurement unit used for the memory.

``uptime``
  The uptime of the daemon.

``CPU usage``
  How much CPU the daemon is using (percentage).

Any other daemon-specific information can be included as well in the ``data``
section.

Hypervisor resources report
+++++++++++++++++++++++++++

Each hypervisor has a view of system resources that sometimes is
different from the one the OS sees (for example in Xen the Node OS,
running as Dom0, has access to only part of those resources). In this
section we'll report all information we can in a "non hypervisor
specific" way. Each hypervisor can then add extra specific information
that is not generic enough to be abstracted.

The ``kind`` field will be ``0`` (`Performance reporting collectors`_).

Each of the hypervisor data collectors will have the ``category`` field set
to ``hypervisor``.

Node OS resources report
++++++++++++++++++++++++

Since Ganeti assumes it's running on Linux, it's useful to export some
basic information as seen by the host system.

The ``category`` field of the report will be ``null``.

The ``kind`` field will be ``0`` (`Performance reporting collectors`_).

The ``data`` section will include:

``cpu_number``
  The number of available CPUs.

``cpus``
  A list with one element per CPU, showing its average load.

``memory``
  The current view of memory (free, used, cached, etc.)

``filesystem``
  A list with one element per filesystem, showing a summary of the
  total/available space.

``NICs``
  A list with one element per network interface, showing the amount of
  sent/received data, error rate, IP address of the interface, etc.

``versions``
  A map using the name of a component Ganeti interacts with (Linux, DRBD,
  hypervisor, etc.) as the key and its version number as the value.

Note that we won't go into any hardware specific details (e.g. querying a
node RAID is outside the scope of this, and can be implemented as a
plugin) but we can easily just report the information above, since it's
standard enough across all systems.
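
Part of this information can be gathered with the standard library alone;
the following fragment is only a sketch of the idea (field names as defined
above, data sources chosen just for the example), not the collector's
actual implementation::

  import os
  import shutil

  def node_os_data():
      """Gather a small subset of the node OS report using only the stdlib."""
      usage = shutil.disk_usage("/")
      return {
          "cpu_number": os.cpu_count(),
          # os.getloadavg() gives one value per sampling window, not per
          # CPU; a real collector would read /proc/stat for per-CPU load.
          "cpus": list(os.getloadavg()),
          "filesystem": [{"mountpoint": "/",
                          "total": usage.total,
                          "free": usage.free}],
      }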

Format of the query
-------------------

The queries to the monitoring agent will be HTTP GET requests on port 1815.
The answer will be encoded in JSON format and will depend on the specific
accessed resource.

If a request is sent to a non-existent resource, a 404 error will be
returned by the HTTP server.

The following paragraphs will present the existing resources supported by the
current protocol version, that is version 1.

``/``
+++++

The root resource. It will return the list of the supported protocol version
numbers.

Currently, this will include only version 1.

``/1``
++++++

Not an actual resource per se, it is the root of all the resources of protocol
version 1.

If requested through GET, the null JSON value will be returned.

``/1/list/collectors``
++++++++++++++++++++++

Returns a list of tuples (kind, category, name) showing all the collectors
available in the system.

``/1/report/all``
+++++++++++++++++

A list of the reports of all the data collectors, as described in the section
`Format of the report`_.

`Status reporting collectors`_ will provide their output in non-verbose format.
The verbose format can be requested by adding the parameter ``verbose=1`` to
the request.

``/1/report/[category]/[collector_name]``
+++++++++++++++++++++++++++++++++++++++++

Returns the report of the collector ``[collector_name]`` that belongs to the
specified ``[category]``.

If a collector does not belong to any category, ``collector`` will be used as
the value for ``[category]``.

`Status reporting collectors`_ will provide their output in non-verbose format.
The verbose format can be requested by adding the parameter ``verbose=1`` to
the request.
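
As an example of a client interaction, fetching the verbose report of all
collectors from a node could be sketched as follows (standard library only;
the node name is just a placeholder)::

  import json
  import urllib.request

  def fetch_report(node, resource="/1/report/all", verbose=False):
      """GET a report resource from the monitoring agent of the given node."""
      url = "http://%s:1815%s" % (node, resource)
      if verbose:
          url += "?verbose=1"
      with urllib.request.urlopen(url) as response:
          return json.loads(response.read().decode("utf-8"))

  # e.g. fetch_report("node1.example.com", verbose=True)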

Instance disk status propagation
--------------------------------

As for the instance status, Ganeti currently has only partial information
about its instance disks: in particular each node is unaware of the disk
to instance mapping, which exists only on the master.

For this design doc we plan to fix this by changing all RPCs that create
a backend storage or that put an already existing one in use and passing
the relevant instance to the node. The node can then export these to the
status reporting tool.

While we haven't implemented these RPC changes yet, we'll use Confd to
fetch this information in the data collectors.

Plugin system
-------------

The monitoring system will be equipped with a plugin system through which
specific local information can be exported.

The plugin system is expected to be used by local installations to
export any installation specific information that they want to be
monitored, about either hardware or software on their systems.

The plugin system will be in the form of either scripts or binaries whose
output will be inserted in the report.

Eventually support for other kinds of plugins might be added as well, such as
plain text files which will be inserted into the report, or local unix or
network sockets from which the information has to be read. This should allow
most flexibility for implementing an efficient system, while being able to
keep it as simple as possible.

Data collectors
---------------

In order to ease testing as well as to make it simple to reuse this
subsystem it will be possible to run just the "data collectors" on each
node without passing through the agent daemon.

If a data collector is run independently, it should print its report on
stdout, according to the format corresponding to a single data collector
report object, as described in the previous paragraphs.
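
A stand-alone collector thus reduces to a small program that emits a single
report object on stdout. The following is a minimal sketch of that contract
(the collector name, the gathering function and the ``--verbose`` flag are
illustrative assumptions, not something prescribed by this design)::

  import json
  import sys
  import time

  def gather_data(verbose):
      # A real collector would inspect the component it monitors here.
      data = {"status": {"code": 0, "message": ""}}
      if verbose:
          data["details"] = {"example": "verbose-only data"}
      return data

  def main():
      verbose = "--verbose" in sys.argv[1:]
      report = {
          "name": "example_collector",
          "version": "1.0",
          "format_version": 1,
          "timestamp": int(time.time() * 1e9),  # nanoseconds since the epoch
          "category": None,
          "kind": 1,
          "data": gather_data(verbose),
      }
      json.dump(report, sys.stdout)

  if __name__ == "__main__":
      main()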

Mode of operation
-----------------

In order to be able to report information fast, the monitoring agent
daemon will keep an in-memory or on-disk cache of the status, which will
be returned when queries are made. The status system will then
periodically check resources to make sure the status is up to date.

Different parts of the report will be queried at different speeds. These
will depend on:

- how often they vary (or we expect them to vary)
- how fast they are to query
- how important their freshness is

Of course the last parameter is installation specific, and while we'll
try to have defaults, it will be configurable. The first two, instead,
can be used adaptively to query a certain resource faster or slower.

When run as stand-alone binaries, the data collectors will not use any
caching system, but will just fetch and return the data immediately.

Implementation place
--------------------

The status daemon will be implemented as a standalone Haskell daemon. In
the future it should be easy to merge multiple daemons into one with
multiple entry points, should we find out it saves resources and doesn't
impact functionality.

The libekg library should be looked at for easily providing metrics in
JSON format.

Implementation order
--------------------

We will implement the agent system in this order:

- initial example data collectors (e.g. for DRBD and instance status)
- initial daemon for exporting data, integrating the existing collectors
- plugin system
- RPC updates for instance status reasons and disk to instance mapping
- cache layer for the daemon
- more data collectors


Future work
===========

As a future step it can be useful to "centralize" all this reporting
data in a single place. This for example can be just the master node, or
all the master candidates. We will evaluate doing this after the first
node-local version has been developed and tested.

Another possible change is replacing the "read-only" RPCs with queries
to the agent system, thus having only one way of collecting information
from the nodes, both for a monitoring system and for Ganeti itself.

One extra feature we may need is a way to query for only sub-parts of
the report (e.g. instance status only). This can be done by passing
arguments to the HTTP GET, which will be defined when we get to this
functionality.

Finally the autorepair system (see the :doc:`autorepair system design
<design-autorepair>`) can be expanded to use the monitoring agent system
as a source of information to decide which repairs it can perform.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: