python-esmonitor
================
**Modular monitoring tool for logging data to various timeseries databases**

Quick start
-----------
* Install: `pip3 install -r requirements.txt ; python3 setup.py install`
* Configure: `cd examples ; vim config.json`
* Run: `pymonitor -c config.json`

Configuring
-----------
The config file should contain a JSON object with the keys `backend` and `monitors`. The `backend` object contains a
`type` key that selects which database backend to use; the remaining keys are specific to that backend.

For Elasticsearch 6.x, `url` should be the full URL of the Elasticsearch server:
```
{
    "backend": {
        "type": "elasticsearch",
        "url": "http://192.168.1.210:8297/"
    },
```
Or, for InfluxDB, several fields describing the connection:
```
{
    "backend": {
        "type": "influxdb",
        "host": "10.0.0.10",
        "port": "8086",
        "user": "root",
        "password": "root",
        "database": "monitoring"
    },
```
The `monitors` key contains a list of monitor modules to run:
```
    "monitors": [
        {
            "type": "diskspace",
            "freq": "30",
            "args": {
                "filesystems": [
                    "/",
                    "/tmp/monitor"
                ]
            }
        },
        { ... }
    ]
}
```
The name of the module to run for a monitor is given by `type`. The `freq` option is the frequency, in seconds, at
which the monitor checks and reports data. If the monitor being used takes any options, they can be passed as an
object with the `args` option.
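
For reference, a complete minimal `config.json` assembled from the fragments above (the Elasticsearch backend plus the
`diskspace` monitor, reusing the same example values) might look like:
```
{
    "backend": {
        "type": "elasticsearch",
        "url": "http://192.168.1.210:8297/"
    },
    "monitors": [
        {
            "type": "diskspace",
            "freq": "30",
            "args": {
                "filesystems": ["/", "/tmp/monitor"]
            }
        }
    ]
}
```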
A YAML config can also be used. The data structure must be identical and the filename MUST end in `.yml`.
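
As a sketch, the Elasticsearch backend and `diskspace` monitor above would translate to `.yml` form like this (same
structure and values, just a different serialization):
```
backend:
  type: elasticsearch
  url: http://192.168.1.210:8297/
monitors:
  - type: diskspace
    freq: "30"
    args:
      filesystems:
        - /
        - /tmp/monitor
```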
Developing Modules
------------------
**How to create a module:**

Add a new Python file in *pymonitor/monitors/*, such as `uptime.py`. Add a function with the same name as the file, accepting any needed params as keyword args:
```
from pymonitor import Metric

def uptime():
```
Add your code to retrieve any metrics:
```
    with open("/proc/uptime", "r") as f:
        uptime_stats = {"uptime": int(float(f.read().split(" ")[0]))}
```
This function must yield one or more `Metric` objects. Each object will be sent to the database backend, with a `type`
field matching the name of this module ("uptime"). The system hostname, IP address, and timestamp are added
automatically.
```
    yield Metric(uptime_stats)
```
The module file must set a variable named `mapping`. For backends that need it, such as Elasticsearch, this contains
data mapping information so our data is structured correctly. This value is used verbatim, so any other Elasticsearch
options for this type can be specified here.
```
mapping = {
"uptime": {
2018-10-04 18:50:34 -07:00
"type": "integer"
2015-12-06 16:03:02 -08:00
}
}
2018-10-04 18:50:34 -07:00
2015-12-06 16:03:02 -08:00
```
Finally, it's often convenient to test your monitor by adding some code so the script can be run individually:
```
if __name__ == '__main__':
    for item in uptime():
        print(item["uptime"])
```
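Putting the pieces together, the complete `uptime.py` module assembled from the snippets above would look roughly
like this:
```
from pymonitor import Metric

def uptime():
    # Read the system uptime, in whole seconds, from /proc/uptime
    with open("/proc/uptime", "r") as f:
        uptime_stats = {"uptime": int(float(f.read().split(" ")[0]))}
    yield Metric(uptime_stats)

# Field mapping used by backends that need one (e.g. Elasticsearch)
mapping = {
    "uptime": {
        "type": "integer"
    }
}

if __name__ == '__main__':
    for item in uptime():
        print(item["uptime"])
```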
Since this module is named 'uptime' and takes no args, the following added to the monitors array in `config.json` would activate it:
```
{
    "type": "uptime",
    "freq": "30",
    "args": {}
}
```
Roadmap
-------
* Complete API docs
* More builtin monitors
* Local logging in case ES can't be reached