=======================
Ganeti monitoring agent
=======================

.. contents:: :depth: 4

This is a design document detailing the implementation of a Ganeti
monitoring agent report system that can be queried by a monitoring
system to calculate health information for a Ganeti cluster.

Current state and shortcomings
==============================

There is currently no monitoring support in Ganeti. While we don't want
to build something like Nagios or Pacemaker as part of Ganeti, it would
be useful if such tools could easily extract information from a Ganeti
machine in order to take actions (example actions include logging an
outage for future reporting or alerting a person or system about it).

Proposed changes
================

Each Ganeti node should export a status page that can be queried by a
monitoring system. Such a status page will be exported on a network port
and will be encoded in JSON (simple text) over HTTP.

The choice of JSON is obvious as we already depend on it in Ganeti and
thus we don't need to add extra libraries to use it, as opposed to what
would happen for XML or some other markup format.

Location of agent report
------------------------

The report will be available from all nodes, and will cover all
node-local resources. This allows more real-time information to be
available, at the cost of querying all nodes.

Information reported
--------------------

The monitoring agent system will report on the following basic information:

- Instance status
- Instance disk status
- Status of storage for instances
- Ganeti daemons status, CPU usage, memory footprint
- Hypervisor resources report (memory, CPU, network interfaces)
- Node OS resources report (memory, CPU, network interfaces)
- Information from a plugin system

Format of the report
--------------------

The report will be in JSON format, and it will present an array
of report objects.
Each report object will be produced by a specific data collector.
Each report object includes some mandatory fields, to be provided by all
the data collectors:

``name``
  The name of the data collector that produced this part of the report.
  It is supposed to be unique inside a report.

``version``
  The version of the data collector that produces this part of the
  report. Built-in data collectors (as opposed to those implemented as
  plugins) should have "B" as the version number.

``format_version``
  The format of what is represented in the "data" field for each data
  collector might change over time. Every time this happens, the
  format_version should be changed, so that whoever reads the report knows
  what format to expect, and how to correctly interpret it.

``timestamp``
  The time when the reported data were gathered. It has to be expressed
  in nanoseconds since the unix epoch (0:00:00 January 01, 1970). If not
  enough precision is available (or needed) it can be padded with
  zeroes. If a report object needs multiple timestamps, it can add more
  and/or override this one inside its own "data" section.

``category``
  A collector can belong to a given category of collectors (e.g.: storage
  collectors, daemon collector). This means that it will have to provide a
  minimum set of prescribed fields, as documented for each category.
  This field will contain the name of the category the collector belongs to,
  if any, or just the ``null`` value.

``kind``
  Two kinds of collectors are possible:
  `Performance reporting collectors`_ and `Status reporting collectors`_.
  The respective paragraphs will describe them and the value of this field.

``data``
  This field contains all the data generated by the specific data collector,
  in its own independently defined format. The monitoring agent could check
  this syntactically (according to the JSON specifications) but not
  semantically.

Here follows a minimal example of a report::

  [
  {
      "name" : "TheCollectorIdentifier",
      "version" : "1.2",
      "format_version" : 1,
      "timestamp" : 1351607182000000000,
      "category" : null,
      "kind" : 0,
      "data" : { "plugin_specific_data" : "go_here" }
  },
  {
      "name" : "AnotherDataCollector",
      "version" : "B",
      "format_version" : 7,
      "timestamp" : 1351609526123854000,
      "category" : "storage",
      "kind" : 1,
      "data" : { "status" : { "code" : 1,
                              "message" : "Error on disk 2"
                            },
                 "plugin_specific" : "data",
                 "some_late_data" : { "timestamp" : 1351609526123942720,
                                      ...
                                    }
               }
  }
  ]

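As an illustration only, here is a minimal Python sketch of how a
monitoring system could fetch such a report and validate the mandatory
fields; the node name, port and URL path used here are placeholders, not
something defined by this design::

  # Hypothetical client-side check of the report's mandatory fields.
  # Host, port and path are placeholders, not defined by this design.
  import json
  import urllib.request

  MANDATORY_FIELDS = ("name", "version", "format_version", "timestamp",
                      "category", "kind", "data")

  def fetch_and_check(url="http://node1.example.com:1815/report"):
      with urllib.request.urlopen(url) as resp:
          reports = json.loads(resp.read().decode("utf-8"))
      for report in reports:
          missing = [f for f in MANDATORY_FIELDS if f not in report]
          if missing:
              raise ValueError("collector %s lacks mandatory fields: %s"
                               % (report.get("name", "<unknown>"), missing))
      return reports
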
Performance reporting collectors
++++++++++++++++++++++++++++++++

These collectors only provide data about some component of the system, without
giving any interpretation of their meaning.

The value of the ``kind`` field of the report will be ``0``.

Status reporting collectors
+++++++++++++++++++++++++++

These collectors will provide information about the status of some
component of ganeti, or managed by ganeti.

The value of their ``kind`` field will be ``1``.

The rationale behind this kind of collectors is that there are some situations
where exporting data about the underlying subsystems would expose potential
issues. But if Ganeti itself is able (and going) to fix the problem, conflicts
might arise between Ganeti and something/somebody else trying to fix the same
problem.
Also, some external monitoring systems might not be aware of the internals of a
particular subsystem (e.g.: DRBD) and might only exploit the high level
response of its data collector, alerting an administrator if anything is wrong.
Still, completely hiding the underlying data is not a good idea, as they might
still be of use in some cases. So status reporting plugins will provide two
output modes: one just exporting high level information about the status,
and one also exporting all the data they gathered.
The default output mode will be the status-only one. Through a command line
parameter (for stand-alone data collectors) or through the HTTP request to the
monitoring agent (when collectors are executed as part of it) the verbose
output mode providing all the data can be selected.

When exporting just the status, each status reporting collector will provide,
in its ``data`` section, at least the following field:

``status``
  summarizes the status of the component being monitored and consists of two
  subfields:

  ``code``
    It assumes a numeric value, encoded in such a way to allow using a bitset
    to easily distinguish which states are currently present in the whole
    cluster. If the bitwise OR of all the ``status`` fields is 0, the cluster
    is completely healthy.
    The status codes are as follows:

    ``0``
      The collector can determine that everything is working as
      intended.

    ``1``
      Something is temporarily wrong but it is being automatically fixed by
      Ganeti.
      There is no need of external intervention.

    ``2``
      The collector has failed to understand whether the status is good or
      bad. Further analysis is required. Interpret this status as a
      potentially dangerous situation.

    ``4``
      The collector can determine that something is wrong and Ganeti has no
      way to fix it autonomously. External intervention is required.

  ``message``
    A message to better explain the reason of the status.
    The exact format of the message string is data collector dependent.

    The field is mandatory, but the content can be an empty string if the
    ``code`` is ``0`` (working as intended) or ``1`` (being fixed
    automatically).

    If the status code is ``2``, the message should explain why it was not
    possible to determine a proper status.
    If the status code is ``4``, the message should specify what has gone
    wrong.

The ``data`` section will also contain all the fields describing the gathered
data, according to a collector-specific format.

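Since the codes are bit values, cluster-wide aggregation reduces to a
bitwise OR of all the collected codes. A minimal illustrative sketch (the
helper function is hypothetical, not part of the agent)::

  # Hypothetical aggregation of status codes gathered from all nodes.
  # "reports" is a list of report objects from status reporting collectors.
  def cluster_status(reports):
      aggregate = 0
      for report in reports:
          aggregate |= report["data"]["status"]["code"]
      return aggregate  # 0 means the whole cluster is healthy

  # Example: bits 1 and 4 both set means "being fixed by Ganeti" and
  # "external intervention required" are both present in the cluster.
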
Instance status
+++++++++++++++

At the moment each node knows which instances are running on it, which
instances it is primary for, but not the cause why an instance might not
be running. On the other hand we don't want to distribute full instance
"admin" status information to all nodes, because of the performance
impact this would have.

As such we propose that:

- Any operation that can affect instance status will have an optional
  "reason" attached to it (at opcode level). This can be used for
  example to distinguish an admin request from a scheduled maintenance
  or an automated tool's work. If this reason is not passed, Ganeti will
  just use the information it has about the source of the request.
  This reason information will be structured according to the
  :doc:`Ganeti reason trail <design-reason-trail>` design document.
- RPCs that affect the instance status will be changed so that the
  "reason" and the version of the config object they ran on is passed to
  them. They will then export the new expected instance status, together
  with the associated reason and object version, to the status report
  system, which will then export those itself.

Monitoring and auditing systems can then use the reason to understand
the cause of an instance status, and they can use the timestamp to
understand the freshness of their data even in the absence of atomic
cross-node reporting: for example if they see an instance "up" on a node
after seeing it running on a previous one, they can compare these values
to understand which data is freshest, and repoll the "older" node. Of
course if they keep seeing this status this represents an error (either
an instance continuously "flapping" between nodes, or an instance
constantly up on more than one), which should be reported and acted
upon.

The instance status will be reported on each node, for the instances it is
primary for, and the ``data`` section of the report will contain a list
of instances, named ``instances``, with at least the following fields for
each instance:

``name``
  The name of the instance.

``uuid``
  The UUID of the instance (stable on name change).

``admin_state``
  The status of the instance (up/down/offline) as requested by the admin.

``actual_state``
  The actual status of the instance. It can be ``up``, ``down``, or
  ``hung`` if the instance is up but it appears to be completely stuck.

``uptime``
  The uptime of the instance (if it is up, ``null`` otherwise).

``mtime``
  The timestamp of the last known change to the instance state.

``state_reason``
  The last known reason for state change of the instance, described according
  to the JSON representation of a reason trail, as detailed in the :doc:`reason
  trail design document <design-reason-trail>`.

``status``
  It represents the status of the instance, and its format is the same as that
  of the ``status`` field of `Status reporting collectors`_.

Each hypervisor should provide its own instance status data collector, possibly
with the addition of more, specific, fields.
The ``category`` field of all of them will be ``instance``.
The ``kind`` field will be ``1``.

Note that as soon as a node knows it's not the primary anymore for an
instance it will stop reporting status for it: this means the instance
will either disappear, if it has been deleted, or appear on another
node, if it's been moved.

The ``code`` of the ``status`` field of the report of the Instance status data
collector will be:

``0``
  if ``status`` is ``0`` for all the instances it is reporting about.

``1``
  otherwise.

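As an illustration, the collector-level code can be derived from the
per-instance entries along these lines (function and variable names are
purely illustrative)::

  # Illustrative only: derive the collector-level status code from the
  # per-instance status codes in the "instances" list of the data section.
  def instance_collector_code(instances):
      if all(inst["status"]["code"] == 0 for inst in instances):
          return 0
      return 1
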
Storage collectors
++++++++++++++++++

The storage collectors will be a series of data collectors
that will gather data about storage for the current node. The collection
will be performed at different granularity and abstraction levels, from
the physical disks, to partitions, to logical volumes and to the specific
storage types used by Ganeti itself (drbd, rbd, plain, file).

The ``name`` of each of these collectors will reflect what storage type each of
them refers to.

The ``category`` field of these collectors will be ``storage``.

The ``kind`` field will depend on the specific collector.

Each ``storage`` collector's ``data`` section will provide collector-specific
fields.

The various storage collectors will provide keys to join the data they provide,
in order to allow the user to get a better understanding of the system. E.g.:
through device names, or instance names.

Diskstats collector
*******************

This storage data collector will gather information about the status of the
disks installed in the system, as listed in the /proc/diskstats file. This means
that not only physical hard drives, but also ramdisks and loopback devices will
be listed.

Its ``kind`` in the report will be ``0`` (`Performance reporting collectors`_).

Its ``category`` field in the report will contain the value ``storage``.

When executed in verbose mode, the ``data`` section of the report of this
collector will be a list of items, each representing one disk and providing
the following fields:

``major``
  The major number of the device.

``minor``
  The minor number of the device.

``name``
  The name of the device.

``readsNum``
  This is the total number of reads completed successfully.

``mergedReads``
  Reads which are adjacent to each other may be merged for efficiency. Thus
  two 4K reads may become one 8K read before it is ultimately handed to the
  disk, and so it will be counted (and queued) as only one I/O. This field
  specifies how often this was done.

``secRead``
  This is the total number of sectors read successfully.

``timeRead``
  This is the total number of milliseconds spent by all reads.

``writes``
  This is the total number of writes completed successfully.

``mergedWrites``
  Writes which are adjacent to each other may be merged for efficiency. Thus
  two 4K writes may become one 8K write before it is ultimately handed to the
  disk, and so it will be counted (and queued) as only one I/O. This field
  specifies how often this was done.

``secWritten``
  This is the total number of sectors written successfully.

``timeWrite``
  This is the total number of milliseconds spent by all writes.

``ios``
  The number of I/Os currently in progress.
  The only field that should go to zero, it is incremented as requests are
  given to the appropriate struct request_queue and decremented as they finish.

``timeIO``
  The number of milliseconds spent doing I/Os. This field increases so long
  as field ``ios`` is nonzero.

``wIOmillis``
  The weighted number of milliseconds spent doing I/Os.
  This field is incremented at each I/O start, I/O completion, I/O merge,
  or read of these stats by the number of I/Os in progress (field ``ios``)
  times the number of milliseconds spent doing I/O since the last update of
  this field. This can provide an easy measure of both I/O completion time
  and the backlog that may be accumulating.

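For illustration only, a stand-alone Python sketch of such a collector
follows; it maps the first eleven statistics columns of ``/proc/diskstats``
to the fields above (the real collector will live in the agent and may
differ)::

  # Illustrative stand-alone parser for /proc/diskstats, mapping the first
  # 11 statistics columns to the field names defined above.
  import json

  FIELDS = ("readsNum", "mergedReads", "secRead", "timeRead",
            "writes", "mergedWrites", "secWritten", "timeWrite",
            "ios", "timeIO", "wIOmillis")

  def collect_diskstats(path="/proc/diskstats"):
      disks = []
      with open(path) as stats:
          for line in stats:
              parts = line.split()
              entry = {"major": int(parts[0]),
                       "minor": int(parts[1]),
                       "name": parts[2]}
              entry.update(zip(FIELDS, map(int, parts[3:3 + len(FIELDS)])))
              disks.append(entry)
      return disks

  if __name__ == "__main__":
      print(json.dumps(collect_diskstats()))
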
Logical Volume collector
************************

This data collector will gather information about the attributes of logical
volumes present in the system.

Its ``kind`` in the report will be ``0`` (`Performance reporting collectors`_).

Its ``category`` field in the report will contain the value ``storage``.

The ``data`` section of the report of this collector will be a list of items,
each representing one logical volume and providing the following fields:

``uuid``
  The UUID of the logical volume.

``name``
  The name of the logical volume.

``attr``
  The attributes of the logical volume.

``major``
  Persistent major number or -1 if not persistent.

``minor``
  Persistent minor number or -1 if not persistent.

``kernel_major``
  Currently assigned major number or -1 if LV is not active.

``kernel_minor``
  Currently assigned minor number or -1 if LV is not active.

``size``
  Size of LV in bytes.

``seg_count``
  Number of segments in LV.

``tags``
  Tags, if any.

``modules``
  Kernel device-mapper modules required for this LV, if any.

``vg_uuid``
  Unique identifier of the volume group.

``vg_name``
  Name of the volume group.

``segtype``
  Type of LV segment.

``seg_start``
  Offset within the LV to the start of the segment in bytes.

``seg_start_pe``
  Offset within the LV to the start of the segment in physical extents.

``seg_size``
  Size of the segment in bytes.

``seg_tags``
  Tags for the segment, if any.

``seg_pe_ranges``
  Ranges of Physical Extents of underlying devices in lvs command line format.

``devices``
  Underlying devices used with starting extent numbers.

``instance``
  The name of the instance this LV is used by, or ``null`` if it was not
  possible to determine it.

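Most of these attributes map onto output columns of the ``lvs`` command.
The following rough sketch gathers only a subset of the fields and is
illustrative only; the exact ``lvs`` options and column handling may need
adjusting::

  # Rough illustration: gather a subset of the LV fields by shelling out
  # to "lvs". Option names and output columns may need adjusting.
  import subprocess

  def collect_lvs():
      out = subprocess.check_output(
          ["lvs", "--noheadings", "--nosuffix", "--units", "b",
           "--separator", "|",
           "-o", "lv_uuid,lv_name,lv_attr,lv_size,vg_name"],
          universal_newlines=True)
      volumes = []
      for line in out.splitlines():
          uuid, name, attr, size, vg_name = line.strip().split("|")
          volumes.append({"uuid": uuid, "name": name, "attr": attr,
                          "size": int(float(size)), "vg_name": vg_name})
      return volumes
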
DRBD status
***********

This data collector will run only on nodes where DRBD is actually
present and it will gather information about DRBD devices.

Its ``kind`` in the report will be ``1`` (`Status reporting collectors`_).

Its ``category`` field in the report will contain the value ``storage``.

When executed in verbose mode, the ``data`` section of the report of this
collector will provide the following fields:

``versionInfo``
  Information about the DRBD version number, given by a combination of
  any (but at least one) of the following fields:

  ``version``
    The DRBD driver version.

  ``api``
    The API version number.

  ``proto``
    The protocol version.

  ``srcversion``
    The version of the source files.

  ``gitHash``
    Git hash of the source files.

  ``buildBy``
    Who built the binary, and, optionally, when.

``device``
  A list of structures, each describing a DRBD device (a minor) and containing
  the following fields:

  ``minor``
    The device minor number.

  ``connectionState``
    The state of the connection. If it is "Unconfigured", all the following
    fields are not present.

  ``localRole``
    The role of the local resource.

  ``remoteRole``
    The role of the remote resource.

  ``localState``
    The status of the local disk.

  ``remoteState``
    The status of the remote disk.

  ``replicationProtocol``
    The replication protocol being used.

  ``ioFlags``
    The input/output flags.

  ``perfIndicators``
    The performance indicators. This field will contain the following
    sub-fields:

    ``networkSend``
      KiB of data sent on the network.

    ``networkReceive``
      KiB of data received from the network.

    ``diskWrite``
      KiB of data written on the local disk.

    ``diskRead``
      KiB of data read from the local disk.

    ``activityLog``
      Number of updates of the activity log.

    ``bitMap``
      Number of updates to the bitmap area of the metadata.

    ``localCount``
      Number of open requests to the local I/O subsystem.

    ``pending``
      Number of requests sent to the partner but not yet answered.

    ``unacknowledged``
      Number of requests received by the partner but still to be answered.

    ``applicationPending``
      Number of block input/output requests forwarded to DRBD but that have
      not yet been answered.

    ``epochs``
      (Optional) Number of epoch objects. Not provided by all DRBD versions.

    ``writeOrder``
      (Optional) Currently used write ordering method. Not provided by all DRBD
      versions.

    ``outOfSync``
      (Optional) KiB of storage currently out of sync. Not provided by all DRBD
      versions.

  ``syncStatus``
    (Optional) The status of the synchronization of the disk. This is present
    only if the disk is being synchronized, and includes the following fields:

    ``percentage``
      The percentage of synchronized data.

    ``progress``
      How far the synchronization is. Written as "x/y", where x and y are
      integer numbers expressed in the measurement unit stated in
      ``progressUnit``.

    ``progressUnit``
      The measurement unit for the progress indicator.

    ``timeToFinish``
      The expected time before finishing the synchronization.

    ``speed``
      The speed of the synchronization.

    ``want``
      The desired speed of the synchronization.

    ``speedUnit``
      The measurement unit of the ``speed`` and ``want`` values. Expressed
      as "size/time".

  ``instance``
    The name of the Ganeti instance this disk is associated to.

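As an illustration, the ``versionInfo`` fields could be extracted from the
header of ``/proc/drbd`` roughly as follows (the regular expressions are a
simplification and may not match every DRBD release)::

  # Simplified sketch: extract versionInfo fields from /proc/drbd.
  # The regular expressions may not cover every DRBD release.
  import re

  def drbd_version_info(path="/proc/drbd"):
      with open(path) as proc:
          text = proc.read()
      info = {}
      match = re.search(r"version:\s*(\S+)\s*\(api:(\d+)/proto:(\S+)\)", text)
      if match:
          info["version"] = match.group(1)
          info["api"] = int(match.group(2))
          info["proto"] = match.group(3)
      match = re.search(r"srcversion:\s*(\S+)", text)
      if match:
          info["srcversion"] = match.group(1)
      return info
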
Ganeti daemons status
+++++++++++++++++++++

Ganeti will report what information it has about its own daemons.
This should allow identifying possible problems with the Ganeti system itself:
for example memory leaks, crashes and high resource utilization should be
evident by analyzing this information.

The ``kind`` field will be ``1`` (`Status reporting collectors`_).

Each daemon will have its own data collector, and each of them will have
a ``category`` field valued ``daemon``.

When executed in verbose mode, their ``data`` section will include at least:

``memory``
  The amount of used memory.

``size_unit``
  The measurement unit used for the memory.

``uptime``
  The uptime of the daemon.

``CPU usage``
  How much CPU the daemon is using (percentage).

Any other daemon-specific information can be included as well in the ``data``
section.

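A rough sketch of how the memory and uptime figures could be gathered on
Linux from ``/proc``, given a daemon's PID (CPU usage is omitted, as it
requires sampling over an interval; this is illustrative, not the actual
implementation)::

  # Rough sketch: memory and uptime of a daemon from /proc, given its pid.
  # CPU usage is omitted because it requires sampling over an interval.
  import os

  def daemon_stats(pid):
      with open("/proc/%d/status" % pid) as status:
          rss_kib = next(int(line.split()[1]) for line in status
                         if line.startswith("VmRSS:"))
      with open("/proc/%d/stat" % pid) as stat:
          # starttime is field 22 of /proc/<pid>/stat, in clock ticks
          start_ticks = int(stat.read().rsplit(")", 1)[1].split()[19])
      with open("/proc/uptime") as up:
          system_uptime = float(up.read().split()[0])
      uptime = system_uptime - start_ticks / os.sysconf("SC_CLK_TCK")
      return {"memory": rss_kib, "size_unit": "KiB", "uptime": uptime}
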
Hypervisor resources report
+++++++++++++++++++++++++++

Each hypervisor has a view of system resources that sometimes is
different than the one the OS sees (for example in Xen the Node OS,
running as Dom0, has access to only part of those resources). In this
section we'll report all information we can in a "non hypervisor
specific" way. Each hypervisor can then add extra specific information
that is not generic enough to be abstracted.

The ``kind`` field will be ``0`` (`Performance reporting collectors`_).

Each of the hypervisor data collectors will be of ``category``: ``hypervisor``.

Node OS resources report
++++++++++++++++++++++++

Since Ganeti assumes it's running on Linux, it's useful to export some
basic information as seen by the host system.

The ``category`` field of the report will be ``null``.

The ``kind`` field will be ``0`` (`Performance reporting collectors`_).

The ``data`` section will include:

``cpu_number``
  The number of available cpus.

``cpus``
  A list with one element per cpu, showing its average load.

``memory``
  The current view of memory (free, used, cached, etc.)

``filesystem``
  A list with one element per filesystem, showing a summary of the
  total/available space.

``NICs``
  A list with one element per network interface, showing the amount of
  sent/received data, error rate, IP address of the interface, etc.

``versions``
  A map using the name of a component Ganeti interacts with (Linux, drbd,
  hypervisor, etc) as the key and its version number as the value.

Note that we won't go into any hardware specific details (e.g. querying a
node RAID is outside the scope of this, and can be implemented as a
plugin) but we can easily just report the information above, since it's
standard enough across all systems.

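Part of this data is available from standard Linux interfaces. As an
illustration only, the sketch below covers ``cpu_number`` and a raw
``memory`` view::

  # Illustrative only: gather cpu_number and a raw memory view on Linux.
  import multiprocessing

  def node_os_report():
      meminfo = {}
      with open("/proc/meminfo") as mem:
          for line in mem:
              key, value = line.split(":", 1)
              meminfo[key] = int(value.split()[0])  # values are in KiB
      return {"cpu_number": multiprocessing.cpu_count(),
              "memory": {"total": meminfo["MemTotal"],
                         "free": meminfo["MemFree"],
                         "cached": meminfo["Cached"]}}
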
Format of the query
-------------------

.. include:: monitoring-query-format.rst

Instance disk status propagation
--------------------------------

As for the instance status, Ganeti currently has only partial information about
its instance disks: in particular each node is unaware of the disk to
instance mapping, which exists only on the master.

For this design doc we plan to fix this by changing all RPCs that create
a backend storage or that put an already existing one in use, passing
the relevant instance to the node. The node can then export these to the
status reporting tool.

While we haven't implemented these RPC changes yet, we'll use Confd to
fetch this information in the data collectors.

Plugin system
-------------

The monitoring system will be equipped with a plugin system that can
export specific local information through it.

The plugin system is expected to be used by local installations to
export any installation specific information that they want to be
monitored, about either hardware or software on their systems.

The plugin system will be in the form of either scripts or binaries whose
output will be inserted in the report.

Eventually support for other kinds of plugins might be added as well, such as
plain text files which will be inserted into the report, or local unix or
network sockets from which the information has to be read. This should allow
most flexibility for implementing an efficient system, while being able to keep
it as simple as possible.

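A minimal sketch of such a plugin, printing a single report object in the
format described above (the collector name and the data it reports are
placeholders)::

  #!/usr/bin/env python
  # Minimal example plugin: print one report object on stdout.
  # The collector name and the data it reports are placeholders.
  import json
  import time

  report = {
      "name": "example-raid-plugin",
      "version": "0.1",
      "format_version": 1,
      "timestamp": int(time.time() * 1e9),  # nanoseconds since the epoch
      "category": None,
      "kind": 0,
      "data": {"raid_status": "optimal"},
  }

  print(json.dumps(report))
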
Data collectors
---------------

In order to ease testing, as well as to make it simple to reuse this
subsystem, it will be possible to run just the "data collectors" on each
node without passing through the agent daemon.

If a data collector is run independently, it should print on stdout its
report, according to the format corresponding to a single data collector
report object, as described in the previous paragraphs.

Mode of operation
-----------------

In order to be able to report information fast the monitoring agent
daemon will keep an in-memory or on-disk cache of the status, which will
be returned when queries are made. The status system will then
periodically check resources to make sure the status is up to date.

Different parts of the report will be queried at different speeds. These
will depend on:

- how often they vary (or we expect them to vary)
- how fast they are to query
- how important their freshness is

Of course the last parameter is installation specific, and while we'll
try to have defaults, it will be configurable. The first two instead we
can use adaptively to query a certain resource faster or slower
depending on those two parameters.

When run as stand-alone binaries, the data collectors will not use any
caching system, and will just fetch and return the data immediately.

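As a purely illustrative sketch, the cache could keep one report object
per collector and refresh each one according to its own interval::

  # Illustrative cache keeping one report object per collector and
  # refreshing it only when its own interval has expired.
  import time

  class ReportCache(object):
      def __init__(self, collectors):
          # collectors: {name: (collect_function, refresh_interval_seconds)}
          self._collectors = collectors
          self._cache = {}  # name -> (report, last_refresh)

      def report(self):
          now = time.time()
          for name, (collect, interval) in self._collectors.items():
              cached = self._cache.get(name)
              if cached is None or now - cached[1] >= interval:
                  self._cache[name] = (collect(), now)
          return [entry[0] for entry in self._cache.values()]
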
Implementation place
--------------------

The status daemon will be implemented as a standalone Haskell daemon. In
the future it should be easy to merge multiple daemons into one with
multiple entry points, should we find out it saves resources and doesn't
impact functionality.

The libekg library should be looked at for easily providing metrics in
JSON format.

Implementation order
--------------------

We will implement the agent system in this order:

- initial example data collectors (eg. for drbd and instance status)
- initial daemon for exporting data, integrating the existing collectors
- plugin system
- RPC updates for instance status reasons and disk to instance mapping
- cache layer for the daemon
- more data collectors

Future work
===========

As a future step it can be useful to "centralize" all this reporting
data in a single place. This for example can be just the master node, or
all the master candidates. We will evaluate doing this after the first
node-local version has been developed and tested.

Another possible change is replacing the "read-only" RPCs with queries
to the agent system, thus having only one way of collecting information
from the nodes, both for a monitoring system and for Ganeti itself.

One extra feature we may need is a way to query for only sub-parts of
the report (eg. instance status only). This can be done by passing
arguments to the HTTP GET, which will be defined when we get to this
functionality.

Finally, the :doc:`autorepair system <design-autorepair>` (see its design
document) can be expanded to use the monitoring agent system as a
source of information to decide which repairs it can perform.

.. vim: set textwidth=72 :
.. Local Variables:
.. mode: rst
.. fill-column: 72
.. End: