
=======================
Ganeti monitoring agent
=======================

.. contents:: :depth: 4

This is a design document detailing the implementation of a Ganeti
monitoring agent report system that can be queried by a monitoring
system to calculate health information for a Ganeti cluster.

Current state and shortcomings
==============================

There is currently no monitoring support in Ganeti. While we don't want
to build something like Nagios or Pacemaker as part of Ganeti, it would
be useful if such tools could easily extract information from a Ganeti
machine in order to take actions (example actions include logging an
outage for future reporting or alerting a person or system about it).

Proposed changes
================

Each Ganeti node should export a status page that can be queried by a
monitoring system. This status page will be exported on a network port
and will be encoded in JSON (simple text) over HTTP.

The choice of JSON is obvious as we already depend on it in Ganeti and
thus we don't need to add extra libraries to use it, as opposed to what
would happen for XML or some other markup format.

Location of agent report
------------------------

The report will be available from all nodes, and will be concerned with
all node-local resources. This allows more real-time information to be
available, at the cost of querying all nodes.

Information reported
--------------------

The monitoring agent system will report on the following basic information:

- Instance status
- Instance disk status
- Status of storage for instances
- Ganeti daemons status, CPU usage, memory footprint
- Hypervisor resources report (memory, CPU, network interfaces)
- Node OS resources report (memory, CPU, network interfaces)
- Information from a plugin system

Format of the report
--------------------

The report will be in JSON format, and it will present an array of
report objects.
Each report object will be produced by a specific data collector.
Each report object includes some mandatory fields, to be provided by all
the data collectors:

``name``
  The name of the data collector that produced this part of the report.
  It is supposed to be unique inside a report.

``version``
  The version of the data collector that produces this part of the
  report. Built-in data collectors (as opposed to those implemented as
  plugins) should have "B" as the version number.

``format_version``
  The format of what is represented in the "data" field for each data
  collector might change over time. Every time this happens, the
  format_version should be changed, so that whoever reads the report
  knows what format to expect, and how to correctly interpret it.

``timestamp``
  The time when the reported data were gathered. It has to be expressed
  in nanoseconds since the Unix epoch (0:00:00 January 01, 1970). If not
  enough precision is available (or needed) it can be padded with
  zeroes. If a report object needs multiple timestamps, it can add more
  and/or override this one inside its own "data" section.

``category``
  A collector can belong to a given category of collectors (e.g.: storage
  collectors, daemon collectors). This means that it will have to provide a
  minimum set of prescribed fields, as documented for each category.
  This field will contain the name of the category the collector belongs to,
  if any, or just the ``null`` value.

``kind``
  Two kinds of collectors are possible:
  `Performance reporting collectors`_ and `Status reporting collectors`_.
  The respective paragraphs will describe them and the value of this field.

``data``
  This field contains all the data generated by the specific data collector,
  in its own independently defined format. The monitoring agent could check
  this syntactically (according to the JSON specifications) but not
  semantically.

Here follows a minimal example of a report::

  [
  {
      "name" : "TheCollectorIdentifier",
      "version" : "1.2",
      "format_version" : 1,
      "timestamp" : 1351607182000000000,
      "category" : null,
      "kind" : 0,
      "data" : { "plugin_specific_data" : "go_here" }
  },
  {
      "name" : "AnotherDataCollector",
      "version" : "B",
      "format_version" : 7,
      "timestamp" : 1351609526123854000,
      "category" : "storage",
      "kind" : 1,
      "data" : { "status" : { "code" : 1,
                              "message" : "Error on disk 2"
                            },
                 "plugin_specific" : "data",
                 "some_late_data" : { "timestamp" : 1351609526123942720,
                                      ...
                                    }
               }
  }
  ]
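
As an illustration of how a consumer might use this format, the following
minimal Python sketch (the helper name and the strictness of the checks
are our own, not part of the specification) validates the mandatory
fields of each report object::

  import json

  MANDATORY_FIELDS = ["name", "version", "format_version", "timestamp",
                      "category", "kind", "data"]

  def parse_report(text):
      """Decode a JSON report and check its collector-independent fields."""
      reports = json.loads(text)
      for obj in reports:
          for field in MANDATORY_FIELDS:
              if field not in obj:
                  raise ValueError("report from %s lacks field %s" %
                                   (obj.get("name", "<unknown>"), field))
          if obj["kind"] not in (0, 1):
              raise ValueError("unknown collector kind: %s" % obj["kind"])
      return reports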

Performance reporting collectors
++++++++++++++++++++++++++++++++

These collectors only provide data about some component of the system, without
giving any interpretation over their meaning.

The value of the ``kind`` field of the report will be ``0``.

Status reporting collectors
+++++++++++++++++++++++++++

These collectors will provide information about the status of some
component of Ganeti, or managed by Ganeti.

The value of their ``kind`` field will be ``1``.

The rationale behind this kind of collector is that there are some situations
where exporting data about the underlying subsystems would expose potential
issues. But if Ganeti itself is able to fix the problem (and is going to do
so), conflicts might arise between Ganeti and something/somebody else trying
to fix the same problem.
Also, some external monitoring systems might not be aware of the internals of a
particular subsystem (e.g.: DRBD) and might only exploit the high level
response of its data collector, alerting an administrator if anything is wrong.
Still, completely hiding the underlying data is not a good idea, as they might
still be of use in some cases. So status reporting plugins will provide two
output modes: one exporting just high level information about the status,
and one also exporting all the data they gathered.
The default output mode will be the status-only one. The verbose output mode,
providing all the data, can be selected through a command line parameter (for
stand-alone data collectors) or through the HTTP request to the monitoring
agent (when collectors are executed as part of it).

When exporting just the status, each status reporting collector will provide,
in its ``data`` section, at least the following field:

``status``
  summarizes the status of the component being monitored and consists of two
  subfields:

  ``code``
    It assumes a numeric value, encoded in such a way as to allow using a
    bitset to easily distinguish which states are currently present in the
    whole cluster: if the bitwise OR of all the ``status`` codes is 0, the
    cluster is completely healthy.
    The status codes are as follows:

    ``0``
      The collector can determine that everything is working as
      intended.

    ``1``
      Something is temporarily wrong but it is being automatically fixed by
      Ganeti.
      There is no need for external intervention.

    ``2``
      The collector has failed to understand whether the status is good or
      bad. Further analysis is required. Interpret this status as a
      potentially dangerous situation.

    ``4``
      The collector can determine that something is wrong and Ganeti has no
      way to fix it autonomously. External intervention is required.

  ``message``
    A message to better explain the reason of the status.
    The exact format of the message string is data collector dependent.

    The field is mandatory, but the content can be an empty string if the
    ``code`` is ``0`` (working as intended) or ``1`` (being fixed
    automatically).

    If the status code is ``2``, the message should explain why it was not
    possible to determine a proper status.
    If the status code is ``4``, the message should specify what has gone
    wrong.

The ``data`` section will also contain all the fields describing the gathered
data, according to a collector-specific format.
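
As a sketch of how a monitoring system could exploit this encoding (the
function below is illustrative, not part of Ganeti)::

  def cluster_status_code(reports):
      """OR together the status codes of all status reporting collectors.

      With the encoding above, a result of 0 means the whole cluster is
      healthy, while each bit set in the result tells which of the
      non-healthy states (1, 2 or 4) is present somewhere in the cluster.
      """
      code = 0
      for obj in reports:
          if obj["kind"] == 1:
              code |= obj["data"]["status"]["code"]
      return code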

Instance status
+++++++++++++++

At the moment each node knows which instances are running on it, and which
instances it is primary for, but not the reason why an instance might not
be running. On the other hand we don't want to distribute full instance
"admin" status information to all nodes, because of the performance
impact this would have.

As such we propose that:

- Any operation that can affect instance status will have an optional
  "reason" attached to it (at opcode level). This can be used for
  example to distinguish an admin request from a scheduled maintenance
  or an automated tool's work. If this reason is not passed, Ganeti will
  just use the information it has about the source of the request.
  This reason information will be structured according to the
  :doc:`Ganeti reason trail <design-reason-trail>` design document.
- RPCs that affect the instance status will be changed so that the
  "reason" and the version of the config object they ran on are passed to
  them. They will then export the new expected instance status, together
  with the associated reason and object version, to the status report
  system, which will then export them.

Monitoring and auditing systems can then use the reason to understand
the cause of an instance status, and they can use the timestamp to
understand the freshness of their data even in the absence of atomic
cross-node reporting: for example if they see an instance "up" on a node
after seeing it running on a previous one, they can compare these values
to understand which data is freshest, and repoll the "older" node. Of
course if they keep seeing this status, it represents an error (either
an instance continuously "flapping" between nodes, or an instance
constantly up on more than one), which should be reported and acted
upon.

The instance status will be reported on each node, for the instances it
is primary for, and the ``data`` section of the report will contain a
list of instances, named ``instances``, with at least the following
fields for each instance:

``name``
  The name of the instance.

``uuid``
  The UUID of the instance (stable on name change).

``admin_state``
  The status of the instance (up/down/offline) as requested by the admin.

``actual_state``
  The actual status of the instance. It can be ``up``, ``down``, or
  ``hung`` if the instance is up but it appears to be completely stuck.

``uptime``
  The uptime of the instance (if it is up, "null" otherwise).

``mtime``
  The timestamp of the last known change to the instance state.

``state_reason``
  The last known reason for state change of the instance, described according
  to the JSON representation of a reason trail, as detailed in the :doc:`reason
  trail design document <design-reason-trail>`.

``status``
  It represents the status of the instance, and its format is the same as that
  of the ``status`` field of `Status reporting collectors`_.

Each hypervisor should provide its own instance status data collector, possibly
with the addition of more specific fields.
The ``category`` field of all of them will be ``instance``.
The ``kind`` field will be ``1``.

Note that as soon as a node knows it's not the primary anymore for an
instance it will stop reporting status for it: this means the instance
will either disappear, if it has been deleted, or appear on another
node, if it's been moved.

The ``code`` of the ``status`` field of the report of the instance status data
collector will be:

``0``
  if ``status`` is ``0`` for all the instances it is reporting about.

``1``
  otherwise.
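
For illustration, the ``data`` section of such a report could look like the
following (the instance and its values are made up, and the content of
``state_reason`` is defined by the reason trail design document)::

  { "instances" : [
      { "name" : "instance1.example.com",
        "uuid" : "b342bd61-d118-4d4e-a56a-5ca69936f806",
        "admin_state" : "up",
        "actual_state" : "up",
        "uptime" : 4310,
        "mtime" : 1351609200000000000,
        "state_reason" : [ ... ],
        "status" : { "code" : 0, "message" : "" }
      }
    ]
  }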

Storage collectors
++++++++++++++++++

The storage collectors will be a series of data collectors
that will gather data about storage for the current node. The collection
will be performed at different granularity and abstraction levels, from
the physical disks, through partitions and logical volumes, to the specific
storage types used by Ganeti itself (drbd, rbd, plain, file).

The ``name`` of each of these collectors will reflect what storage type each
of them refers to.

The ``category`` field of these collectors will be ``storage``.

The ``kind`` field will depend on the specific collector.

Each ``storage`` collector's ``data`` section will provide collector-specific
fields.

In case of error, the ``message`` subfield of the ``status`` field of the
report of these collectors will disclose the nature of the error as
type-specific information. Examples of these are "backend pv unavailable"
for lvm storage, "unreachable" for network based storage or "filesystem error"
for filesystem based implementations.

Diskstats collector
*******************

This storage data collector will gather information about the status of the
disks installed in the system, as listed in the ``/proc/diskstats`` file. This
means that not only physical hard drives, but also ramdisks and loopback
devices will be listed.

Its ``kind`` in the report will be ``0`` (`Performance reporting collectors`_).

Its ``category`` field in the report will contain the value ``storage``.

When executed in verbose mode, the ``data`` section of the report of this
collector will be a list of items, each representing one disk and providing
the following fields:

``major``
  The major number of the device.

``minor``
  The minor number of the device.

``name``
  The name of the device.

``readsNum``
  This is the total number of reads completed successfully.

``mergedReads``
  Reads which are adjacent to each other may be merged for efficiency. Thus
  two 4K reads may become one 8K read before it is ultimately handed to the
  disk, and so it will be counted (and queued) as only one I/O. This field
  specifies how often this was done.

``secRead``
  This is the total number of sectors read successfully.

``timeRead``
  This is the total number of milliseconds spent by all reads.

``writes``
  This is the total number of writes completed successfully.

``mergedWrites``
  Writes which are adjacent to each other may be merged for efficiency. Thus
  two 4K writes may become one 8K write before it is ultimately handed to the
  disk, and so it will be counted (and queued) as only one I/O. This field
  specifies how often this was done.

``secWritten``
  This is the total number of sectors written successfully.

``timeWrite``
  This is the total number of milliseconds spent by all writes.

``ios``
  The number of I/Os currently in progress.
  This is the only field that should go to zero; it is incremented as requests
  are given to the appropriate ``struct request_queue`` and decremented as
  they finish.

``timeIO``
  The number of milliseconds spent doing I/Os. This field increases so long
  as field ``ios`` is nonzero.

``wIOmillis``
  The weighted number of milliseconds spent doing I/Os.
  This field is incremented at each I/O start, I/O completion, I/O merge,
  or read of these stats by the number of I/Os in progress (field ``ios``)
  times the number of milliseconds spent doing I/O since the last update of
  this field. This can provide an easy measure of both I/O completion time
  and the backlog that may be accumulating.
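
As an illustration, the following minimal sketch shows how such a collector
could map each line of ``/proc/diskstats`` to the fields above (it assumes
the classic 14-column layout of that file; the function name is our own)::

  def collect_diskstats(path="/proc/diskstats"):
      """Map each line of /proc/diskstats to the fields described above."""
      keys = ["major", "minor", "name", "readsNum", "mergedReads",
              "secRead", "timeRead", "writes", "mergedWrites",
              "secWritten", "timeWrite", "ios", "timeIO", "wIOmillis"]
      disks = []
      with open(path) as stats:
          for line in stats:
              # Pair each column with its field name; all fields except the
              # device name are integer counters.
              disk = dict(zip(keys, line.split()))
              for key in keys:
                  if key != "name":
                      disk[key] = int(disk[key])
              disks.append(disk)
      return disks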

DRBD status
***********

This data collector will run only on nodes where DRBD is actually
present and it will gather information about DRBD devices.

Its ``kind`` in the report will be ``1`` (`Status reporting collectors`_).

Its ``category`` field in the report will contain the value ``storage``.

When executed in verbose mode, the ``data`` section of the report of this
collector will provide the following fields:

``versionInfo``
  Information about the DRBD version number, given by a combination of
  any (but at least one) of the following fields:

  ``version``
    The DRBD driver version.

  ``api``
    The API version number.

  ``proto``
    The protocol version.

  ``srcversion``
    The version of the source files.

  ``gitHash``
    Git hash of the source files.

  ``buildBy``
    Who built the binary, and, optionally, when.

``device``
  A list of structures, each describing a DRBD device (a minor) and containing
  the following fields:

  ``minor``
    The device minor number.

  ``connectionState``
    The state of the connection. If it is "Unconfigured", none of the
    following fields are present.

  ``localRole``
    The role of the local resource.

  ``remoteRole``
    The role of the remote resource.

  ``localState``
    The status of the local disk.

  ``remoteState``
    The status of the remote disk.

  ``replicationProtocol``
    The replication protocol being used.

  ``ioFlags``
    The input/output flags.

  ``perfIndicators``
    The performance indicators. This field will contain the following
    sub-fields:

    ``networkSend``
      KiB of data sent on the network.

    ``networkReceive``
      KiB of data received from the network.

    ``diskWrite``
      KiB of data written on the local disk.

    ``diskRead``
      KiB of data read from the local disk.

    ``activityLog``
      Number of updates of the activity log.

    ``bitMap``
      Number of updates to the bitmap area of the metadata.

    ``localCount``
      Number of open requests to the local I/O subsystem.

    ``pending``
      Number of requests sent to the partner but not yet answered.

    ``unacknowledged``
      Number of requests received by the partner but still to be answered.

    ``applicationPending``
      Number of block input/output requests forwarded to DRBD that have not
      yet been answered.

    ``epochs``
      (Optional) Number of epoch objects. Not provided by all DRBD versions.

    ``writeOrder``
      (Optional) Currently used write ordering method. Not provided by all DRBD
      versions.

    ``outOfSync``
      (Optional) KiB of storage currently out of sync. Not provided by all DRBD
      versions.

  ``syncStatus``
    (Optional) The status of the synchronization of the disk. This is present
    only if the disk is being synchronized, and includes the following fields:

    ``percentage``
      The percentage of synchronized data.

    ``progress``
      How far the synchronization is. Written as "x/y", where x and y are
      integer numbers expressed in the measurement unit stated in
      ``progressUnit``.

    ``progressUnit``
      The measurement unit for the progress indicator.

    ``timeToFinish``
      The expected time before finishing the synchronization.

    ``speed``
      The speed of the synchronization.

    ``want``
      The desired speed of the synchronization.

    ``speedUnit``
      The measurement unit of the ``speed`` and ``want`` values. Expressed
      as "size/time".

  ``instance``
    The name of the Ganeti instance this disk is associated with.
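
For illustration only, one entry of the ``device`` list could look like this
(the values are made up, and the optional fields are omitted)::

  { "minor" : 0,
    "connectionState" : "Connected",
    "localRole" : "Primary",
    "remoteRole" : "Secondary",
    "localState" : "UpToDate",
    "remoteState" : "UpToDate",
    "replicationProtocol" : "C",
    "ioFlags" : "r---",
    "perfIndicators" : { "networkSend" : 103236,
                         "networkReceive" : 27,
                         "diskWrite" : 103234,
                         "diskRead" : 0,
                         "activityLog" : 27,
                         "bitMap" : 0,
                         "localCount" : 0,
                         "pending" : 0,
                         "unacknowledged" : 0,
                         "applicationPending" : 0
                       },
    "instance" : "instance1.example.com"
  }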

Ganeti daemons status
+++++++++++++++++++++

Ganeti will report what information it has about its own daemons.
This should allow identifying possible problems with the Ganeti system itself:
for example memory leaks, crashes and high resource utilization should be
evident by analyzing this information.

The ``kind`` field will be ``1`` (`Status reporting collectors`_).

Each daemon will have its own data collector, and each of them will have
a ``category`` field valued ``daemon``.

When executed in verbose mode, their ``data`` section will include at least:

``memory``
  The amount of used memory.

``size_unit``
  The measurement unit used for the memory.

``uptime``
  The uptime of the daemon.

``CPU usage``
  How much CPU the daemon is using (as a percentage).

Any other daemon-specific information can be included as well in the ``data``
section.
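
For illustration, the ``data`` section of a daemon collector could look
like this (the values are made up)::

  { "memory" : 24.3,
    "size_unit" : "MiB",
    "uptime" : 86400,
    "CPU usage" : 0.5
  }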

Hypervisor resources report
+++++++++++++++++++++++++++

Each hypervisor has a view of system resources that sometimes is
different from the one the OS sees (for example in Xen the Node OS,
running as Dom0, has access to only part of those resources). In this
section we'll report all information we can in a "non hypervisor
specific" way. Each hypervisor can then add extra specific information
that is not generic enough to be abstracted.

The ``kind`` field will be ``0`` (`Performance reporting collectors`_).

Each of the hypervisor data collectors will have ``hypervisor`` as the
value of its ``category`` field.

Node OS resources report
++++++++++++++++++++++++

Since Ganeti assumes it's running on Linux, it's useful to export some
basic information as seen by the host system.

The ``category`` field of the report will be ``null``.

The ``kind`` field will be ``0`` (`Performance reporting collectors`_).

The ``data`` section will include:

``cpu_number``
  The number of available CPUs.

``cpus``
  A list with one element per CPU, showing its average load.

``memory``
  The current view of memory (free, used, cached, etc.)

``filesystem``
  A list with one element per filesystem, showing a summary of the
  total/available space.

``NICs``
  A list with one element per network interface, showing the amount of
  sent/received data, error rate, IP address of the interface, etc.

``versions``
  A map using the name of a component Ganeti interacts with (Linux, drbd,
  hypervisor, etc.) as the key and its version number as the value.

Note that we won't go into any hardware-specific details (e.g. querying a
node's RAID is outside the scope of this, and can be implemented as a
plugin) but we can easily just report the information above, since it's
standard enough across all systems.
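
As a purely illustrative example (the sub-structure of these fields is not
prescribed by this design, so the layout below is made up), the ``data``
section could look like::

  { "cpu_number" : 4,
    "cpus" : [ 0.12, 0.07, 0.33, 0.02 ],
    "memory" : { "total" : 16384, "free" : 4096, "cached" : 2048 },
    "filesystem" : [ { "name" : "/", "total" : 51200, "available" : 20480 } ],
    "NICs" : [ { "name" : "eth0", "ip" : "192.0.2.10",
                 "sent" : 1234567, "received" : 7654321, "errors" : 0 } ],
    "versions" : { "linux" : "3.2.0", "drbd" : "8.3.11" }
  }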

Format of the query
-------------------

The queries to the monitoring agent will be HTTP GET requests on port 1815.
The answer will be encoded in JSON format and will depend on the specific
accessed resource.

If a request is sent to a non-existing resource, a 404 error will be returned
by the HTTP server.

The following paragraphs will present the existing resources supported by the
current protocol version, that is version 1.

``/``
+++++
The root resource. It will return the list of the supported protocol version
numbers.

Currently, this will include only version 1.

``/1``
++++++
Not an actual resource per se, it is the root of all the resources of protocol
version 1.

If requested through GET, the null JSON value will be returned.

``/1/list/collectors``
++++++++++++++++++++++
Returns a list of tuples (kind, category, name) showing all the collectors
available in the system.

``/1/report/all``
+++++++++++++++++
A list of the reports of all the data collectors, as described in the section
`Format of the report`_.

`Status reporting collectors`_ will provide their output in non-verbose format.
The verbose format can be requested by adding the parameter ``verbose=1`` to the
request.

``/1/report/[category]/[collector_name]``
+++++++++++++++++++++++++++++++++++++++++
Returns the report of the collector ``[collector_name]`` that belongs to the
specified ``[category]``.

The ``category`` has to be written in lowercase.

If a collector does not belong to any category, ``default`` will have to be
used as the value for ``[category]``.

`Status reporting collectors`_ will provide their output in non-verbose format.
The verbose format can be requested by adding the parameter ``verbose=1`` to the
request.
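
As a usage example, a monitoring system could fetch all the reports of a node
with a plain HTTP GET. A minimal sketch in Python 2 (in keeping with the
Ganeti code base of the time; the node name is made up)::

  import json
  import urllib2

  def fetch_reports(node, verbose=False):
      """Fetch and decode the report list exported by a node."""
      url = "http://%s:1815/1/report/all" % node
      if verbose:
          url += "?verbose=1"
      return json.loads(urllib2.urlopen(url).read())

  reports = fetch_reports("node1.example.com", verbose=True)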

Instance disk status propagation
--------------------------------

As for the instance status, Ganeti currently has only partial information
about its instance disks: in particular, each node is unaware of the
disk-to-instance mapping, which exists only on the master.

For this design doc we plan to fix this by changing all RPCs that create
backend storage or that put an already existing one in use, so that the
relevant instance is passed to the node. The node can then export this
information to the status reporting tool.

While we haven't implemented these RPC changes yet, we'll use Confd to
fetch this information in the data collectors.

Plugin system
-------------

The monitoring system will be equipped with a plugin system, so that
installation-specific local information can be exported through it.

The plugin system is expected to be used by local installations to
export any installation-specific information that they want to be
monitored, about either hardware or software on their systems.

The plugins will be in the form of either scripts or binaries whose output
will be inserted in the report.

Eventually support for other kinds of plugins might be added as well, such as
plain text files which will be inserted into the report, or local unix or
network sockets from which the information has to be read. This should allow
the most flexibility for implementing an efficient system, while being able
to keep it as simple as possible.

Data collectors
---------------

In order to ease testing, as well as to make it simple to reuse this
subsystem, it will be possible to run just the "data collectors" on each
node without passing through the agent daemon.

If a data collector is run independently, it should print its report on
stdout, according to the format corresponding to a single data collector
report object, as described in the previous paragraphs.
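
A stand-alone collector could thus be as simple as the following sketch (the
collector name and payload are made up, and a real collector would also
honour the verbose/status-only switch described above)::

  import json
  import time

  def main():
      """Print a single data collector report object on stdout."""
      report = {
          "name": "ExampleCollector",
          "version": "1.0",
          "format_version": 1,
          # Nanoseconds since the Unix epoch, as the format requires.
          "timestamp": int(time.time() * 1e9),
          "category": None,
          "kind": 0,
          "data": {"example_metric": 42},
      }
      print(json.dumps(report))

  if __name__ == "__main__":
      main()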

Mode of operation
-----------------

In order to be able to report information quickly, the monitoring agent
daemon will keep an in-memory or on-disk cache of the status, which will
be returned when queries are made. The status system will then
periodically check resources to make sure the status is up to date.

Different parts of the report will be queried at different speeds. These
will depend on:

- how often they vary (or we expect them to vary)
- how fast they are to query
- how important their freshness is

Of course the last parameter is installation-specific, and while we'll
try to have defaults, it will be configurable. The first two, instead,
can be used adaptively to query a certain resource faster or slower.
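
A minimal sketch of such a cache, with one refresh interval per collector
(the interval-adjustment policy is left out, and all names here are
illustrative)::

  import time

  class CollectorCache(object):
      """Cache each collector's report, refreshing it at its own pace."""

      def __init__(self, collectors, intervals):
          self.collectors = collectors  # name -> callable building a report
          self.intervals = intervals    # name -> refresh period, in seconds
          self._cache = {}              # name -> (expiry time, report)

      def get_report(self, name):
          """Return the cached report, refreshing it if it has expired."""
          expiry, report = self._cache.get(name, (0, None))
          now = time.time()
          if now >= expiry:
              report = self.collectors[name]()
              self._cache[name] = (now + self.intervals[name], report)
          return report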

When run as stand-alone binaries, the data collectors will not use any
caching system, and will just fetch and return the data immediately.

Implementation place
--------------------

The status daemon will be implemented as a standalone Haskell daemon. In
the future it should be easy to merge multiple daemons into one with
multiple entry points, should we find out it saves resources and doesn't
impact functionality.

The libekg library should be looked at for easily providing metrics in
JSON format.

Implementation order
--------------------

We will implement the agent system in this order:

- initial example data collectors (e.g. for DRBD and instance status)
- initial daemon for exporting data, integrating the existing collectors
- plugin system
- RPC updates for instance status reasons and disk-to-instance mapping
- cache layer for the daemon
- more data collectors


Future work
===========

As a future step it can be useful to "centralize" all this reporting
data in a single place. This could be, for example, just the master
node, or all the master candidates. We will evaluate doing this after
the first node-local version has been developed and tested.

Another possible change is replacing the "read-only" RPCs with queries
to the agent system, thus having only one way of collecting information
from the nodes, for both a monitoring system and Ganeti itself.

One extra feature we may need is a way to query for only sub-parts of
the report (e.g. instance status only). This can be done by passing
arguments to the HTTP GET, which will be defined when we get to this
functionality.

Finally, the :doc:`autorepair system <design-autorepair>` can be
expanded to use the monitoring agent system as a source of information
to decide which repairs it can perform.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: