=======================
Ganeti monitoring agent
=======================

.. contents:: :depth: 4

This is a design document detailing the implementation of a Ganeti
monitoring agent report system, that can be queried by a monitoring
system to calculate health information for a Ganeti cluster.

Current state and shortcomings
==============================

There is currently no monitoring support in Ganeti. While we don't want
to build something like Nagios or Pacemaker as part of Ganeti, it would
be useful if such tools could easily extract information from a Ganeti
machine in order to take actions (example actions include logging an
outage for future reporting or alerting a person or system about it).

Proposed changes
================

Each Ganeti node should export a status page that can be queried by a
monitoring system. Such a status page will be exported on a network port
and will be encoded in JSON (simple text) over HTTP.

The choice of JSON is obvious as we already depend on it in Ganeti and
thus we don't need to add extra libraries to use it, as opposed to what
would happen for XML or some other markup format.

Location of agent report
------------------------

The report will be available from all nodes, and will cover all
node-local resources. This allows more real-time information to be
available, at the cost of querying all nodes.

Information reported
--------------------

The monitoring agent system will report on the following basic information:

- Instance status
- Instance disk status
- Status of storage for instances
- Ganeti daemons status, CPU usage, memory footprint
- Hypervisor resources report (memory, CPU, network interfaces)
- Node OS resources report (memory, CPU, network interfaces)
- Information from a plugin system

Format of the report
--------------------

The report will be in JSON format, and it will present an array of
report objects.
Each report object will be produced by a specific data collector.
Each report object includes some mandatory fields, to be provided by all
data collectors:

``name``
  The name of the data collector that produced this part of the report.
  It is supposed to be unique inside a report.

``version``
  The version of the data collector that produces this part of the
  report. Built-in data collectors (as opposed to those implemented as
  plugins) should have "B" as the version number.

``format_version``
  The format of what is represented in the "data" field for each data
  collector might change over time. Every time this happens, the
  format_version should be changed, so that whoever reads the report
  knows what format to expect, and how to correctly interpret it.

``timestamp``
  The time when the reported data were gathered. It has to be expressed
  in nanoseconds since the unix epoch (0:00:00 January 01, 1970). If not
  enough precision is available (or needed) it can be padded with
  zeroes (a small conversion sketch is shown after the example report
  below). If a report object needs multiple timestamps, it can add more
  and/or override this one inside its own "data" section.

``category``
  A collector can belong to a given category of collectors (e.g.: storage
  collectors, daemon collector). This means that it will have to provide a
  minimum set of prescribed fields, as documented for each category.
  This field will contain the name of the category the collector belongs to,
  if any, or just the ``null`` value.

``kind``
  Two kinds of collectors are possible:
  `Performance reporting collectors`_ and `Status reporting collectors`_.
  The respective paragraphs will describe them and the value of this field.

``data``
  This field contains all the data generated by the specific data collector,
  in its own independently defined format. The monitoring agent could check
  this syntactically (according to the JSON specifications) but not
  semantically.

Here follows a minimal example of a report::
104 "name" : "TheCollectorIdentifier",
106 "format_version" : 1,
107 "timestamp" : 1351607182000000000,
110 "data" : { "plugin_specific_data" : "go_here" }
113 "name" : "AnotherDataCollector",
115 "format_version" : 7,
116 "timestamp" : 1351609526123854000,
117 "category" : "storage",
119 "data" : { "status" : { "code" : 1,
120 "message" : "Error on disk 2"
122 "plugin_specific" : "data",
123 "some_late_data" : { "timestamp" : 1351609526123942720,

Performance reporting collectors
++++++++++++++++++++++++++++++++

These collectors only provide data about some component of the system, without
giving any interpretation over their meaning.

The value of the ``kind`` field of the report will be ``0``.

Status reporting collectors
+++++++++++++++++++++++++++

These collectors will provide information about the status of some
component of Ganeti, or of some component managed by Ganeti.

The value of their ``kind`` field will be ``1``.

The rationale behind this kind of collectors is that there are some situations
where exporting data about the underlying subsystems would expose potential
issues. But if Ganeti itself is able (and going) to fix the problem, conflicts
might arise between Ganeti and something/somebody else trying to fix the same
problem.
Also, some external monitoring systems might not be aware of the internals of a
particular subsystem (e.g.: DRBD) and might only exploit the high level
response of its data collector, alerting an administrator if anything is wrong.
Still, completely hiding the underlying data is not a good idea, as they might
still be of use in some cases. So status reporting plugins will provide two
output modes: one just exporting high level information about the status,
and one also exporting all the data they gathered.
The default output mode will be the status-only one. Through a command line
parameter (for stand-alone data collectors) or through the HTTP request to the
monitoring agent (when collectors are executed as part of it) the verbose
output mode providing all the data can be selected.

When exporting just the status each status reporting collector will provide,
in its ``data`` section, at least the following field:

``status``
  summarizes the status of the component being monitored and consists of two
  fields:

  ``code``
    It assumes a numeric value, encoded in such a way to allow using a bitset
    to easily distinguish which states are currently present in the whole
    cluster. If the bitwise OR of all the ``status`` fields is 0, the cluster
    is completely healthy (see the sketch at the end of this section).
    The status codes are as follows:

    ``0``
      The collector can determine that everything is working as
      intended.

    ``1``
      Something is temporarily wrong but it is being automatically fixed by
      Ganeti.
      There is no need of external intervention.

    ``2``
      The collector has failed to understand whether the status is good or
      bad. Further analysis is required. Interpret this status as a
      potentially dangerous situation.

    ``4``
      The collector can determine that something is wrong and Ganeti has no
      way to fix it autonomously. External intervention is required.

  ``message``
    A message to better explain the reason of the status.
    The exact format of the message string is data collector dependent.

    The field is mandatory, but the content can be an empty string if the
    ``code`` is ``0`` (working as intended) or ``1`` (being fixed
    automatically).

    If the status code is ``2``, the message should explain why it was not
    possible to determine a proper status.
    If the status code is ``4``, the message should specify what has gone
    wrong.

The ``data`` section will also contain all the fields describing the gathered
data, according to a collector-specific format.
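
As an illustration of the bitset encoding described above, the following
minimal sketch (Python; ``all_reports`` is a hypothetical, already decoded
list of report objects, not an API provided by Ganeti) shows how a monitoring
system could derive a cluster-wide health value::

  def cluster_status_code(all_reports):
      """Bitwise OR of the status codes of all status reporting collectors."""
      combined = 0
      for report in all_reports:
          if report.get("kind") == 1:  # only status reporting collectors
              combined |= report["data"]["status"]["code"]
      return combined  # 0 means every monitored component is healthy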

Instance status
+++++++++++++++

At the moment each node knows which instances are running on it, which
instances it is primary for, but not the cause why an instance might not
be running. On the other hand we don't want to distribute full instance
"admin" status information to all nodes, because of the performance
impact this would have.

As such we propose that:

- Any operation that can affect instance status will have an optional
  "reason" attached to it (at opcode level). This can be used for
  example to distinguish an admin request from a scheduled maintenance
  or an automated tool's work. If this reason is not passed, Ganeti will
  just use the information it has about the source of the request: for
  example a cli shutdown operation will have "cli:shutdown" as a reason,
  a cli failover operation will have "cli:failover". Operations coming
  from the remote API will use "rapi" instead of "cli". Of course
  setting a real site-specific reason is still preferred.
- RPCs that affect the instance status will be changed so that the
  "reason" and the version of the config object they ran on is passed to
  them. They will then export the new expected instance status, together
  with the associated reason and object version to the status report
  system, which then will export those themselves.

Monitoring and auditing systems can then use the reason to understand
the cause of an instance status, and they can use the timestamp to
understand the freshness of their data even in the absence of an atomic
cross-node reporting: for example if they see an instance "up" on a node
after seeing it running on a previous one, they can compare these values
to understand which data is freshest, and repoll the "older" node. Of
course if they keep seeing this status this represents an error (either
an instance continuously "flapping" between nodes, or an instance is
constantly up on more than one), which should be reported and acted
upon.

The instance status will be on each node, for the instances it is
primary for, and its ``data`` section of the report will contain a list
of instances, with at least the following fields for each instance:

- The name of the instance.
- The UUID of the instance (stable on name change).
- The status of the instance (up/down/offline) as requested by the admin.
- The actual status of the instance. It can be ``up``, ``down``, or
  ``hung`` if the instance is up but it appears to be completely stuck.
- The uptime of the instance (if it is up, "null" otherwise).
- The timestamp of the last known change to the instance state.
- The last known reason for the state change, consisting of:

  - either a user-provided reason (if any), or the name of the command
    that triggered the state change, as a fallback;
  - the ID of the job that caused the state change;
  - where the state change was triggered (RAPI, CLI).

- A ``status`` field representing the status of the instance; its format is
  the same as that of the ``status`` field of `Status reporting collectors`_.

Each hypervisor should provide its own instance status data collector, possibly
with the addition of more, specific, fields.
The ``category`` field of all of them will be ``instance``.
The ``kind`` field will be ``1``.

Note that as soon as a node knows it's not the primary anymore for an
instance it will stop reporting status for it: this means the instance
will either disappear, if it has been deleted, or appear on another
node, if it's been moved.

The ``code`` of the ``status`` field of the report of the Instance status data
collector will be ``0`` (working as intended) if ``status`` is ``0`` for all
the instances it is reporting about.

Storage status
++++++++++++++

The storage status collectors will be a series of data collectors
(drbd, rbd, plain, file) that will gather data about all the storage types
for the current node (this is right now hardcoded to the enabled storage
types, and in the future tied to the enabled storage pools for the nodegroup).

The ``name`` of each of these collectors will reflect the storage type it
reports about.

The ``category`` field of these collectors will be ``storage``.

The ``kind`` field will be ``1`` (`Status reporting collectors`_).

The ``data`` section of the report will provide at least the following fields:

- The amount of free space (in KBytes).
- The amount of used space (in KBytes).
- The total visible space (in KBytes).

Each specific storage type might provide more type-specific fields.

In case of error, the ``message`` subfield of the ``status`` field of the
report of the storage collectors will disclose the nature of the error
as a type specific information. Examples of these are "backend pv unavailable"
for lvm storage, "unreachable" for network based storage or "filesystem error"
for filesystem based implementations.
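
As an example of how such numbers could be obtained, the following sketch
(Python, hypothetical and limited to file-based storage; the real collectors
may gather the values differently) computes the three figures for a given
directory::

  import os

  def file_storage_space(path):
      """Return (free, used, total) in KiB for the filesystem holding path."""
      st = os.statvfs(path)
      total = st.f_blocks * st.f_frsize // 1024
      free = st.f_bavail * st.f_frsize // 1024  # space available to non-root
      return free, total - free, total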

DRBD status
+++++++++++

This data collector will run only on nodes where DRBD is actually
present and it will gather information about DRBD devices.

Its ``kind`` in the report will be ``1`` (`Status reporting collectors`_).

Its ``category`` field in the report will contain the value ``storage``.

When executed in verbose mode, the ``data`` section of the report of this
collector will provide the following fields:

- Information about the DRBD version number, given by a combination of
  any (but at least one) of the following fields:

  - The DRBD driver version.
  - The API version number.
  - The protocol version.
  - The version of the source files.
  - The Git hash of the source files.
  - Who built the binary, and, optionally, when.

- A list of structures, each describing a DRBD device (a minor) and containing
  the following fields:

  - The device minor number.
  - The state of the connection. If it is "Unconfigured", all the following
    fields are not present.
  - The role of the local resource.
  - The role of the remote resource.
  - The status of the local disk.
  - The status of the remote disk.
  - ``replicationProtocol``: the replication protocol being used.
  - The input/output flags.
  - The performance indicators. This field will contain the following
    sub-fields:

    - KiB of data sent on the network.
    - KiB of data received from the network.
    - KiB of data written on local disk.
    - KiB of data read from the local disk.
    - Number of updates of the activity log.
    - Number of updates to the bitmap area of the metadata.
    - Number of open requests to the local I/O subsystem.
    - Number of requests sent to the partner but not yet answered.
    - Number of requests received by the partner but still to be answered.
    - ``applicationPending``: number of block input/output requests forwarded
      to DRBD but not yet answered by DRBD.
    - (Optional) Number of epoch objects. Not provided by all DRBD versions.
    - (Optional) Currently used write ordering method. Not provided by all
      DRBD versions.
    - (Optional) KiB of storage currently out of sync. Not provided by all
      DRBD versions.

  - (Optional) The status of the synchronization of the disk. This is present
    only if the disk is being synchronized, and includes the following fields:

    - The percentage of synchronized data.
    - How far the synchronization is, written as "x/y", where x and y are
      integer numbers expressed in the measurement unit stated in the next
      field.
    - The measurement unit for the progress indicator.
    - The expected time before finishing the synchronization.
    - ``speed``: the speed of the synchronization.
    - ``want``: the desired speed of the synchronization.
    - The measurement unit of the ``speed`` and ``want`` values.

  - The name of the Ganeti instance this disk is associated to.
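
Most of the per-device information listed above is available from
``/proc/drbd``. The following rough sketch (Python, purely illustrative; the
actual collector will need a complete parser) shows how, for instance, the
connection state of each minor could be extracted::

  import re

  def drbd_connection_states(proc_drbd="/proc/drbd"):
      """Map each DRBD minor to its connection state, e.g. {0: "Connected"}."""
      states = {}
      with open(proc_drbd) as proc_file:
          for line in proc_file:
              # Device lines look like:
              #  0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate ...
              match = re.match(r"\s*(\d+):\s+cs:(\S+)", line)
              if match:
                  states[int(match.group(1))] = match.group(2)
      return states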

Ganeti daemons status
+++++++++++++++++++++

Ganeti will report what information it has about its own daemons.
This should allow identifying possible problems with the Ganeti system itself:
for example memory leaks, crashes and high resource utilization should be
evident by analyzing this information.

The ``kind`` field will be ``1`` (`Status reporting collectors`_).

Each daemon will have its own data collector, and each of them will have
a ``category`` field valued ``daemon``.

When executed in verbose mode, their ``data`` section will include at least:

- The amount of used memory.
- The measurement unit used for the memory.
- The uptime of the daemon.
- How much CPU the daemon is using (percentage).

Any other daemon-specific information can be included as well in the ``data``
section of the report.
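
On Linux, the memory figure could for instance be derived from the ``VmRSS``
line of ``/proc/<pid>/status``, as in this hypothetical sketch (the actual
collectors may well use a different source)::

  def daemon_rss_kib(pid):
      """Resident memory of a process in KiB, read from /proc/<pid>/status."""
      with open("/proc/%d/status" % pid) as status_file:
          for line in status_file:
              if line.startswith("VmRSS:"):
                  return int(line.split()[1])  # the value is reported in kB
      return None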

Hypervisor resources report
+++++++++++++++++++++++++++

Each hypervisor has a view of system resources that sometimes is
different than the one the OS sees (for example in Xen the Node OS,
running as Dom0, has access to only part of those resources). In this
section we'll report all information we can in a "non hypervisor
specific" way. Each hypervisor can then add extra specific information
that is not generic enough to be abstracted.

The ``kind`` field will be ``0`` (`Performance reporting collectors`_).

Each of the hypervisor data collectors will be of ``category``:
``hypervisor``.

Node OS resources report
++++++++++++++++++++++++

Since Ganeti assumes it's running on Linux, it's useful to export some
basic information as seen by the host system.

The ``category`` field of the report will be ``null``.

The ``kind`` field will be ``0`` (`Performance reporting collectors`_).

The ``data`` section will include:

- The number of available CPUs.
- A list with one element per CPU, showing its average load.
- The current view of memory (free, used, cached, etc.).
- A list with one element per filesystem, showing a summary of the
  total/available space.
- A list with one element per network interface, showing the amount of
  sent/received data, error rate, IP address of the interface, etc.
- A map using the name of a component Ganeti interacts with (Linux, drbd,
  hypervisor, etc.) as the key and its version number as the value.

Note that we won't go into any hardware specific details (e.g. querying a
node's RAID is outside the scope of this, and can be implemented as a
plugin) but we can easily just report the information above, since it's
standard enough across all systems.
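
Much of this is directly available from standard Linux interfaces; for
instance the number of CPUs and the system load averages could be gathered as
in the following sketch (Python; the field names and the exact sources are
illustrative assumptions, not part of the specification)::

  import os

  def node_cpu_info():
      """Number of online CPUs and the 1/5/15 minute system load averages."""
      cpu_number = os.sysconf("SC_NPROCESSORS_ONLN")
      return {"cpu_number": cpu_number,
              "load_average": list(os.getloadavg())}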

Format of the query
-------------------

The queries to the monitoring agent will be HTTP GET requests on port 1815.
The answer will be encoded in JSON format and will depend on the specific
resource requested.

If a request is sent to a non-existing resource, a 404 error will be returned
by the HTTP server.

The following paragraphs will present the existing resources supported by the
current protocol version, that is, version 1.

``/``
+++++

The root resource. It will return the list of the supported protocol version
numbers.

Currently, this will include only version 1.

``/1``
++++++

Not an actual resource per-se, it is the root of all the resources of protocol
version 1.

If requested through GET, the null JSON value will be returned.

``/1/list/collectors``
++++++++++++++++++++++

Returns a list of tuples (kind, category, name) showing all the collectors
available in the system.

``/1/report/all``
+++++++++++++++++

A list of the reports of all the data collectors, as described in the section
`Format of the report`_.

`Status reporting collectors`_ will provide their output in non-verbose format.
The verbose format can be requested by adding the parameter ``verbose=1`` to
the request.

``/1/report/[category]/[collector_name]``
+++++++++++++++++++++++++++++++++++++++++

Returns the report of the collector ``[collector_name]`` that belongs to the
specified ``[category]``.

If a collector does not belong to any category, ``collector`` will be used as
the value for ``[category]``.

`Status reporting collectors`_ will provide their output in non-verbose format.
The verbose format can be requested by adding the parameter ``verbose=1`` to
the request.
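
To illustrate the protocol, this is a minimal sketch of how a monitoring
system could fetch the full verbose report from a node (Python standard
library only; the host name is just an example)::

  import json
  import urllib.request

  # The monitoring agent listens on port 1815 on every node.
  url = "http://node1.example.com:1815/1/report/all?verbose=1"
  with urllib.request.urlopen(url) as response:
      reports = json.load(response)

  for report in reports:
      print(report["name"], report["category"], report["kind"])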

Instance disk status propagation
--------------------------------

As for the instance status, Ganeti currently has only partial information
about its instance disks: in particular each node is unaware of the disk to
instance mapping, which exists only on the master.

For this design doc we plan to fix this by changing all RPCs that create
a backend storage or that put an already existing one in use and passing
the relevant instance to the node. The node can then export these to the
status reporting tool.

While we haven't implemented these RPC changes yet, we'll use Confd to
fetch this information in the data collectors.

Plugin system
-------------

The monitoring system will be equipped with a plugin system that can
export specific local information through it.

The plugin system is expected to be used by local installations to
export any installation specific information that they want to be
monitored, about either hardware or software on their systems.

The plugin system will be in the form of either scripts or binaries whose
output will be inserted in the report.

Eventually support for other kinds of plugins might be added as well, such as
plain text files which will be inserted into the report, or local unix or
network sockets from which the information has to be read. This should allow
most flexibility for implementing an efficient system, while being able to
keep it as simple as possible.

Data collectors
---------------

In order to ease testing as well as to make it simple to reuse this
subsystem it will be possible to run just the "data collectors" on each
node without passing through the agent daemon.

If a data collector is run independently, it should print on stdout its
report, according to the format corresponding to a single data collector
report object, as described in the previous paragraphs.
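
For example, a minimal stand-alone collector (here a hypothetical plugin
written in Python; real collectors may be written in any language) would just
emit a single report object on stdout::

  import json
  import time

  # Hypothetical stand-alone data collector emitting one report object,
  # following the mandatory fields described in "Format of the report".
  report = {
      "name": "example-collector",
      "version": "1.0",
      "format_version": 1,
      "timestamp": int(time.time()) * 1000000000,
      "category": None,          # no category
      "kind": 0,                 # performance reporting collector
      "data": {"answer": 42},    # collector-specific payload
  }
  print(json.dumps(report))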

Mode of operation
-----------------

In order to be able to report information fast the monitoring agent
daemon will keep an in-memory or on-disk cache of the status, which will
be returned when queries are made. The status system will then
periodically check resources to make sure the status is up to date.

Different parts of the report will be queried at different speeds. These
will depend on:

- how often they vary (or we expect them to vary)
- how fast they are to query
- how important their freshness is

Of course the last parameter is installation specific, and while we'll
try to have defaults, it will be configurable. The first two instead we
can use adaptively to query a certain resource faster or slower
depending on those two parameters.

When run as stand-alone binaries, the data collectors will not use any
caching system, and will just fetch and return the data immediately.

Implementation place
--------------------

The status daemon will be implemented as a standalone Haskell daemon. In
the future it should be easy to merge multiple daemons into one with
multiple entry points, should we find out it saves resources and doesn't
impact functionality.

The libekg library should be looked at for easily providing metrics in
JSON format.

Implementation order
--------------------

We will implement the agent system in this order:

- initial example data collectors (e.g. for drbd and instance status)
- initial daemon for exporting data, integrating the existing collectors
- RPC updates for instance status reasons and disk to instance mapping
- cache layer for the daemon
- more data collectors

Future work
===========

As a future step it can be useful to "centralize" all this reporting
data in a single place. This for example can be just the master node, or
all the master candidates. We will evaluate doing this after the first
node-local version has been developed and tested.

Another possible change is replacing the "read-only" RPCs with queries
to the agent system, thus having only one way of collecting information
from the nodes, both for a monitoring system and for Ganeti itself.

One extra feature we may need is a way to query for only sub-parts of
the report (e.g. instance status only). This can be done by passing
arguments to the HTTP GET, which will be defined when we get to this
point.

Finally, the :doc:`autorepair system <design-autorepair>` (see its design
document) can be expanded to use the monitoring agent system as a source
of information to decide which repairs it can perform.

.. vim: set textwidth=72 :