Influxdb backend

parent a2cfc9059d
commit de5b6e2e71

README.md: 65 changed lines
@@ -1,6 +1,6 @@
 python-esmonitor
 ================
-**Modular monitoring tool for logging data to elasticsearch**
+**Modular monitoring tool for logging data to various timeseries databases**

 Quick start
 -----------
@@ -8,21 +8,37 @@ Quick start
 * Install: `pip3 install -r requirements.txt ; python3 setup.py install`
 * Configure: `cd examples ; vim config.json`
 * Run: `pymonitor -c config.json`

 Requires the [python elasticsearch module](https://github.com/elastic/elasticsearch-py).

 Configuring
 -----------

-The config file should contain a json object with the keys `backend` and `monitors`. Backend contains only one key, `url`. This should be the full url to elasticsearch:
+The config file should contain a json object with the keys `backend` and `monitors`. Backend contains a key, `type`, to
+select which database backend to use. The remaining keys are specific to that database.
+
+For Elasticsearch 6.x, this should be the full url to elasticsearch:

 ```
 {
     "backend": {
+        "type": "elasticsearch",
         "url": "http://192.168.1.210:8297/"
     },
 ```
+
+Or for InfluxDB, several fields describing the connection:
+
+```
+{
+    "backend": {
+        "type": "influxdb",
+        "host": "10.0.0.10",
+        "port": "8086",
+        "user": "root",
+        "password": "root",
+        "database": "monitoring"
+    },
+```

 The `monitors` key contains a list of monitor modules to run:

 ```
@@ -42,10 +58,13 @@ The `monitors` key contains a list of monitor modules to run:
 }
 ```

-The name of the module to run for a monitor is `type`. The `freq` option is the frequency, in seconds, that this monitor will check and report data. If the monitor being used takes any options, they can be passed as a object with the `args` option,
+The name of the module to run for a monitor is `type`. The `freq` option is the frequency, in seconds, at which this monitor
+will check and report data. If the monitor being used takes any options, they can be passed as an object with the
+`args` option.
+
+A yaml config can also be used. The data structure must be identical and the filename MUST end in `.yml`.


 Developing Modules
 ------------------
@@ -53,6 +72,9 @@ Developing Modules

 Add a new python file in *pymonitor/monitors/*, such as `uptime.py`. Add a function named the same as the file, accepting any needed params as keyword args:
 ```
+from pymonitor import Metric
+
+
 def uptime():
 ```
 Add your code to retrieve any metrics:
@@ -60,29 +82,38 @@ Add your code to retrieve any metrics:
 with open("/proc/uptime", "r") as f:
     uptime_stats = {"uptime": int(float(f.read().split(" ")[0]))}
 ```
-This function must yield one or more dictionaries. This dictonary will be sent as a document to elasticsearch, with a `_type` matching the name if this module ("uptime"). System hostname, ip address, and timestamp will be added automatically.
+This function must yield one or more Metric objects. This object will be sent to the database backend, with a `type`
+field matching the name of this module ("uptime"). System hostname, ip address, and timestamp will be
+added automatically.

 ```
-yield uptime_stats
+yield Metric(uptime_stats)
 ```
-The module file must set a variable named `mapping`. This contains data mapping information sent to elasticsearch so our data is structured correctly. This value is used verbatim, so any other elasticsearch options for this type can be specified here.
+The module file must set a variable named `mapping`. For backends that need it, such as Elasticsearch, this contains
+data mapping information so our data is structured correctly. This value is used verbatim, so any other Elasticsearch
+options for this type can be specified here.

 ```
 mapping = {
     "uptime": {
-        "properties": {
-            "uptime": {
-                "type": "integer"
-            }
-        }
+        "type": "integer"
     }
 }
 ```

 Finally, it's often convenient to test your monitor by adding some code so the script can be run individually:

 ```
 if __name__ == '__main__':
     for item in uptime():
         print(item["uptime"])
 ```

 Since this module is named 'uptime' and takes no args, the following added to the monitors array in `config.json` would activate it:

 ```
 {
     "type":"uptime",
@@ -90,16 +121,10 @@ Since this module is named 'uptime' and takes no args, the following added to the
     "args":{}
 }
 ```

 Roadmap
 -------

 * Complete API docs
 * More builtin monitors
 * Local logging in case ES can't be reached

 Changelog
 ---------

 *0.1.0:* renamed fields with names containing dots for elasticsearch 2.0 compatibility
 *0.0.1:* initial release!
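Pulling the README snippets above together, a complete monitor module might read as follows. This is a sketch rather than a file from this commit: the `Metric` class is an inlined stand-in for `pymonitor.Metric`, and the `proc_uptime` parameter is an invented addition so the example can run against a fake file.

```python
class Metric(object):
    """Stand-in for pymonitor.Metric so this sketch runs standalone."""
    def __init__(self, values, tags=None):
        self.values = values
        self.tags = tags or {}


def uptime(proc_uptime="/proc/uptime"):
    # /proc/uptime holds "<seconds up> <seconds idle>"; keep the first field
    with open(proc_uptime, "r") as f:
        uptime_stats = {"uptime": int(float(f.read().split(" ")[0]))}
    yield Metric(uptime_stats)


# Simplified per-field mapping form used by this commit
mapping = {
    "uptime": {
        "type": "integer"
    }
}
```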
@@ -1,26 +1,23 @@
 backend:
   type: elasticsearch
-  url: 'http://10.0.3.15:9200/'
+  url: 'http://10.0.0.10:9200/'
 monitors:
 - type: uptime
-  freq: '30'
+  freq: 30
   args: {}
 - type: load
-  freq: '30'
+  freq: 30
   args: {}
 - type: meminfo
   freq: '30'
   args: {}
 - type: procs
-  freq: '30'
+  freq: 30
   args: {}
 - type: diskspace
-  freq: '30'
+  freq: 30
   args:
     filesystems:
     - '/'
     - '/var'
     - '/home'
 - type: diskio
-  freq: '30'
+  freq: 30
   args: {}
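The `type` switch that these configs rely on can be sketched as a small dispatch table. The `BACKENDS` registry and its string values here are illustrative only, not the daemon's actual wiring:

```python
import json

# Hypothetical registry from config "type" values to backend class names
BACKENDS = {"elasticsearch": "ESBackend", "influxdb": "InfluxBackend"}

conf = json.loads("""
{
    "backend": {
        "type": "influxdb",
        "host": "10.0.0.10",
        "port": "8086",
        "user": "root",
        "password": "root",
        "database": "monitoring"
    },
    "monitors": []
}
""")

backend_conf = conf["backend"]
backend_cls = BACKENDS[backend_conf["type"]]
```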
@@ -0,0 +1,27 @@
+backend:
+  type: influxdb
+  host: 10.0.0.10
+  port: 8086
+  user: root
+  password: root
+  database: monitoring
+monitors:
+- type: uptime
+  freq: 30
+  args: {}
+- type: load
+  freq: 30
+  args: {}
+- type: meminfo
+  freq: 30
+  args: {}
+- type: diskspace
+  freq: 30
+  args:
+    filesystems:
+    - '/'
+    - '/var'
+    - '/home'
+- type: diskio
+  freq: 30
+  args: {}
@@ -1,43 +0,0 @@
-{
-    "backend": {
-        "url": "http://10.0.3.15:9200/"
-    },
-    "monitors": [
-        {
-            "type":"uptime",
-            "freq":"30",
-            "args":{}
-        },
-        {
-            "type":"load",
-            "freq":"30",
-            "args":{}
-        },
-        {
-            "type":"meminfo",
-            "freq":"30",
-            "args":{}
-        },
-        {
-            "type":"procs",
-            "freq":"30",
-            "args":{}
-        },
-        {
-            "type":"diskspace",
-            "freq":"30",
-            "args": {
-                "filesystems": [
-                    "/",
-                    "/var",
-                    "/home"
-                ]
-            }
-        },
-        {
-            "type":"diskio",
-            "freq":"30",
-            "args":{}
-        }
-    ]
-}
@@ -1,5 +1,38 @@
 __version__ = "0.2.0"
+from itertools import chain
+import logging
+from pymonitor.builtins import sysinfo
+
+
+class Backend(object):
+    """
+    Base class for data storage backends
+    """
+    def __init__(self, master, conf):
+        self.master = master
+        self.conf = conf
+        self.sysinfo = {}
+        self.logger = logging.getLogger("monitordaemon.backend")
+        self.update_sys_info()
+
+    def update_sys_info(self):
+        """
+        Fetch generic system info that is sent with every piece of monitoring data
+        """
+        self.sysinfo["hostname"] = sysinfo.hostname()
+        self.sysinfo["ipaddr"] = sysinfo.ipaddr()
+
+    def connect(self):
+        """
+        Connect to the backend and do any prep work
+        """
+        raise NotImplementedError()
+
+    def add_data(self, metric):
+        """
+        Accept a Metric() object and send it off to the backend
+        """
+        raise NotImplementedError()
+
+
 class Metric(object):
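Writing a new storage backend means subclassing `Backend` and overriding `connect()` and `add_data()`. As a sketch, a hypothetical backend that just buffers metrics in memory (the `MemoryBackend` name is invented, and the base class is inlined here as a stand-in for `pymonitor.Backend` so the example is self-contained):

```python
import logging


class Backend(object):
    """Inlined stand-in mirroring pymonitor.Backend."""
    def __init__(self, master, conf):
        self.master = master
        self.conf = conf
        self.sysinfo = {}
        self.logger = logging.getLogger("monitordaemon.backend")

    def connect(self):
        raise NotImplementedError()

    def add_data(self, metric):
        raise NotImplementedError()


class MemoryBackend(Backend):
    """Hypothetical backend: collects metrics in a list instead of a database."""
    def connect(self):
        self.buffer = []

    def add_data(self, metric):
        self.buffer.append(metric)


backend = MemoryBackend(master=None, conf={})
backend.connect()
backend.add_data({"uptime": 1234})
```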
@@ -2,13 +2,13 @@

 from threading import Thread
 from time import time, sleep
-from pymonitor.builtins import sysinfo
 import traceback
 import datetime
 import logging
 import json
 import sys
 import os
+from pymonitor.elasticsearch import ESBackend
+from pymonitor.influxdb import InfluxBackend


 class MonitorDaemon(Thread):
@@ -57,124 +57,6 @@ class MonitorDaemon(Thread):
             monitor_thread.shutdown()


-class Backend(object):
-    """
-    Base class for data storage backends
-    """
-    def __init__(self, master, conf):
-        self.master = master
-        self.conf = conf
-        self.logger = logging.getLogger("monitordaemon.backend")
-
-    def connect(self):
-        """
-        Connect to the backend and do any prep work
-        """
-        raise NotImplementedError()
-
-    def update_sys_info(self):
-        """
-        Fetch generic system info that is sent with every piece of monitoring data
-        """
-        self.sysinfo["hostname"] = sysinfo.hostname()
-        #self.sysinfo["hostname_raw"] = self.sysinfo["hostname"]
-        #self.sysinfo["ipaddr"] = sysinfo.ipaddr()
-
-    def add_data(self, metric):
-        """
-        Accept a Metric() object and send it off to the backend
-        """
-        raise NotImplementedError()
-
-
-class InfluxBackend(Backend):
-    pass
-
-
-class ESBackend(Backend):
-    def __init__(self, master, conf):
-        """
-        Init elasticsearch client
-        """
-        super().__init__(master, conf)
-        self.mapping = {}
-
-        self.sysinfo = {}
-        self.update_sys_info()
-        #self.logger.debug("running on %(hostname)s (%(ipaddr)s)" % self.sysinfo)
-
-    def connect(self):
-        self.logger.debug("connecting to elasticsearch at %s" % self.conf["url"])
-        from elasticsearch import Elasticsearch
-        self.es = Elasticsearch([self.conf["url"]])
-        self.logger.debug("connected to backend")
-
-        for monitor_thread in self.master.threads:
-            self.mapping.update(monitor_thread.imported.mapping)
-        self.logger.debug("final mapping: ", self.mapping)
-        self.create_mapping_template()
-
-        self.current_index = ""
-        self.check_index()
-
-    def get_index_name(self):
-        """
-        Return name of current index such as 'monitor-2015.12.05'
-        """
-        return "monitor-%s" % datetime.datetime.now().strftime("%Y.%m.%d")
-
-    def check_index(self):
-        """
-        Called before adding any data to ES. Checks if a new index should be created due to date change
-        """
-        indexName = self.get_index_name()
-        if indexName != self.current_index:
-            self.create_index(indexName)
-
-    def create_index(self, indexName):
-        """
-        Check if current index exists, and if not, create it
-        """
-        if not self.es.indices.exists(index=indexName):
-            self.es.indices.create(index=indexName, ignore=400)  # ignore already exists error
-        self.current_index = indexName
-
-    def create_mapping_template(self):
-        default_fields = {"ipaddr": {"type": "ip"},  # TODO i dont like how these default fields are handled in general
-                          "hostname": {"type": "text"},
-                          "hostname_raw": {"type": "keyword"},
-                          "@timestamp": {"type": "date"}}  #"field": "@timestamp"
-
-        fields = dict(**self.mapping, **default_fields)
-        template = {"index_patterns": ["monitor-*"],
-                    "settings": {"number_of_shards": 1},  # TODO shard info from config file
-                    "mappings": {"_default_": {"properties": fields}}}
-        self.logger.debug("creating template with body %s", json.dumps(template, indent=4))
-        self.es.indices.put_template(name="monitor", body=template)
-
-    def add_data(self, metric):
-        """
-        Submit a piece of monitoring data
-        """
-        self.check_index()
-
-        metric.tags.update(**self.sysinfo)
-        metric.values["@timestamp"] = datetime.datetime.utcnow().isoformat()  # TODO elasticsearch server-side timestamp
-
-        metric_dict = {}
-        metric_dict.update(metric.values)
-        metric_dict.update(metric.tags)
-
-        # We'll likely group by tags on the eventual frontend, and under elasticsearch this works best if the entire
-        # field is handled as a single keyword. Duplicate all tags into ${NAME}_raw fields, expected to be not analyzed
-        for k, v in metric.tags.items():
-            metric_dict["{}_raw".format(k)] = v
-
-        self.logger.debug("logging type %s: %s" % (metric.tags["type"], metric))
-        res = self.es.index(index=self.current_index, doc_type="monitor_data", body=metric_dict)
-        self.logger.debug("%s created %s" % (metric.tags["type"], res["_id"]))
-
-
 class MonitorThread(Thread):
     def __init__(self, config, backend):
         """
@@ -231,13 +113,13 @@ class MonitorThread(Thread):
         self.alive = False


-def run_cli():
+def main():
     from optparse import OptionParser

     parser = OptionParser()
     parser.add_option("-c", "--config", action="store", type="string", dest="config", help="Path to config file")
     parser.add_option("-l", "--logging", action="store", dest="logging", help="Logging level", default="INFO",
-                      choices=['WARN', 'CRITICAL', 'WARNING', 'INFO', 'ERROR', 'DEBUG'])
+                      choices=['CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG'])

     (options, args) = parser.parse_args()

@@ -252,9 +134,9 @@ def run_cli():
         sys.exit()

     with open(options.config, "r") as c:
-        if options.config[-5:] == '.json':
+        if options.config.endswith('.json'):
             conf = json.load(c)
-        elif options.config[-4:] == '.yml':
+        elif options.config.endswith('.yml'):
             from yaml import load as yaml_load
             conf = yaml_load(c)
         else:
@@ -269,7 +151,3 @@ def run_cli():
     except KeyboardInterrupt:
         print("")
         daemon.shutdown()
-
-
-if __name__ == '__main__':
-    run_cli()
@@ -0,0 +1,84 @@
+from pymonitor import Backend
+import datetime
+import json
+
+
+class ESBackend(Backend):
+    def __init__(self, master, conf):
+        """
+        Init elasticsearch client
+        """
+        super().__init__(master, conf)
+        self.mapping = {}
+        self.current_index = None
+
+    def connect(self):
+        self.logger.debug("connecting to elasticsearch at %s" % self.conf["url"])
+        from elasticsearch import Elasticsearch
+        self.es = Elasticsearch([self.conf["url"]])
+        self.logger.debug("connected to backend")
+
+        for monitor_thread in self.master.threads:
+            self.mapping.update(monitor_thread.imported.mapping)
+        self.logger.debug("final mapping: %s", self.mapping)
+        self.create_mapping_template()
+
+        self.check_index()
+
+    def get_index_name(self):
+        """
+        Return name of current index such as 'monitor-2015.12.05'
+        """
+        return "monitor-%s" % datetime.datetime.now().strftime("%Y.%m.%d")
+
+    def check_index(self):
+        """
+        Called before adding any data to ES. Checks if a new index should be created due to date change
+        """
+        indexName = self.get_index_name()
+        if indexName != self.current_index:
+            self.create_index(indexName)
+
+    def create_index(self, indexName):
+        """
+        Check if current index exists, and if not, create it
+        """
+        if not self.es.indices.exists(index=indexName):
+            self.es.indices.create(index=indexName, ignore=400)  # ignore already exists error
+        self.current_index = indexName
+
+    def create_mapping_template(self):
+        default_fields = {"ipaddr": {"type": "ip"},  # TODO i dont like how these default fields are handled in general
+                          "hostname": {"type": "text"},
+                          "hostname_raw": {"type": "keyword"},
+                          "@timestamp": {"type": "date"}}
+
+        fields = dict(**self.mapping)
+        fields.update(**default_fields)
+        template = {"index_patterns": ["monitor-*"],
+                    "settings": {"number_of_shards": 1},  # TODO shard info from config file
+                    "mappings": {"_default_": {"properties": fields}}}
+        self.logger.debug("creating template with body %s", json.dumps(template, indent=4))
+        self.es.indices.put_template(name="monitor", body=template)
+
+    def add_data(self, metric):
+        """
+        Submit a piece of monitoring data
+        """
+        self.check_index()
+
+        metric.tags.update(**self.sysinfo)
+        metric.values["@timestamp"] = datetime.datetime.utcnow().isoformat()
+
+        metric_dict = {}
+        metric_dict.update(metric.values)
+        metric_dict.update(metric.tags)
+
+        # We'll likely group by tags on the eventual frontend, and under elasticsearch this works best if the entire
+        # field is handled as a single keyword. Duplicate all tags into ${NAME}_raw fields, expected to be not analyzed
+        for k, v in metric.tags.items():
+            metric_dict["{}_raw".format(k)] = v
+
+        self.logger.debug("logging type %s: %s" % (metric.tags["type"], metric))
+        res = self.es.index(index=self.current_index, doc_type="monitor_data", body=metric_dict)
+        self.logger.debug("%s created %s" % (metric.tags["type"], res["_id"]))
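The daily index rollover in `get_index_name`/`check_index` reduces to formatting a date into an index name. The same logic standalone, with the date made a parameter so it can be checked against the docstring's example:

```python
import datetime


def index_name_for(day):
    # Mirrors ESBackend.get_index_name for an arbitrary date
    return "monitor-%s" % day.strftime("%Y.%m.%d")


name = index_name_for(datetime.date(2015, 12, 5))
```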
@@ -0,0 +1,33 @@
+from pymonitor import Backend
+from influxdb import InfluxDBClient
+import datetime
+
+
+class InfluxBackend(Backend):
+    def __init__(self, master, conf):
+        super().__init__(master, conf)
+        self.client = None
+
+    def connect(self):
+        """
+        Connect to the backend and do any prep work
+        """
+        self.client = InfluxDBClient(self.conf["host"], self.conf["port"], self.conf["user"], self.conf["password"])
+        dbname = self.conf.get("database", "monitoring")
+        self.client.create_database(dbname)
+        self.client.switch_database(dbname)
+
+    def add_data(self, metric):
+        """
+        Accept a Metric() object and send it off to the backend
+        """
+        metric.tags.update(**self.sysinfo)
+        body = [{
+            "measurement": metric.tags["type"],
+            "tags": metric.tags,
+            "time": datetime.datetime.utcnow().isoformat(),
+            "fields": metric.values
+        }]
+        self.client.write_points(body)
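`write_points` takes a list of point dicts with `measurement`, `tags`, `time`, and `fields` keys. Shaping a metric into that structure can be exercised without a live InfluxDB; the `build_point` helper below is invented for this sketch, and the tag/field values are illustrative:

```python
import datetime


def build_point(measurement, tags, values, when=None):
    """Shape one metric into the point dict format accepted by write_points()."""
    when = when or datetime.datetime.utcnow()
    return {
        "measurement": measurement,
        "tags": tags,
        "time": when.isoformat(),
        "fields": values,
    }


point = build_point("uptime", {"type": "uptime", "hostname": "example"},
                    {"uptime": 1234}, when=datetime.datetime(2018, 8, 20, 12, 0, 0))
```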
@@ -21,8 +21,8 @@ def diskio(disks=[]):
             "writes": stats.write_count,
             "read": stats.read_bytes,
             "written": stats.write_bytes,
-            "read_size": round(stats.read_bytes / stats.read_count, 2) if stats.read_count > 0 else 0,
-            "write_size": round(stats.write_bytes / stats.write_count, 2) if stats.write_count > 0 else 0
+            "read_size": round(stats.read_bytes / stats.read_count, 2) if stats.read_count > 0 else 0.0,
+            "write_size": round(stats.write_bytes / stats.write_count, 2) if stats.write_count > 0 else 0.0
         }
         yield Metric(stats, {"disk": disk})

@@ -11,7 +11,7 @@ def diskspace(filesystems=[], discover=True, omit=[]):
     filesystems param will be ignored.
     :param omit: list of paths that, if they prefix a discovered mountpoint, cause it not to be reported on
     """
-    filesystems = [f.rstrip("/") for f in filesystems]
+    filesystems = [f.rstrip("/") if f != "/" else f for f in filesystems]
     if discover:
         with open("/proc/mounts") as f:
             for line in f.readlines():
@@ -39,11 +39,11 @@ def diskspace(filesystems=[], discover=True, omit=[]):
             "inodesused": stats.f_files - stats.f_favail
         }

-        info["diskpctused"] = round(info["diskused"] / info["disksize"] if info["disksize"] > 0 else 0, 5)
-        info["diskpctfree"] = round(info["diskfree"] / info["disksize"] if info["disksize"] > 0 else 0, 5)
+        info["diskpctused"] = round(info["diskused"] / info["disksize"] if info["disksize"] > 0 else 0.0, 5)
+        info["diskpctfree"] = round(info["diskfree"] / info["disksize"] if info["disksize"] > 0 else 0.0, 5)

-        info["inodesused_pct"] = round(info["inodesused"] / info["inodesmax"] if info["inodesmax"] > 0 else 0, 5)
-        info["inodesfree_pct"] = round(info["inodesfree"] / info["inodesmax"] if info["inodesmax"] > 0 else 0, 5)
+        info["inodesused_pct"] = round(info["inodesused"] / info["inodesmax"] if info["inodesmax"] > 0 else 0.0, 5)
+        info["inodesfree_pct"] = round(info["inodesfree"] / info["inodesmax"] if info["inodesmax"] > 0 else 0.0, 5)

         yield Metric(info, {"fs": fs})
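The `x / y if y > 0 else 0.0` guard repeated through these monitors could be factored into a single helper; a sketch (the `pct` name is invented here, not part of the commit):

```python
def pct(part, total, places=5):
    """part/total rounded to `places` decimal places, or 0.0 when total is not positive."""
    return round(part / total if total > 0 else 0.0, places)


used = pct(50, 200)   # fraction of disk used
empty = pct(5, 0)     # guards against ZeroDivisionError on empty/zero-size totals
```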
@@ -3,9 +3,9 @@ from pymonitor import Metric
 def load():
     with open("/proc/loadavg", "r") as f:
         m1, m5, m15, procs, pid = f.read().strip().split(" ")
-        yield Metric({"load_1m": m1,
-                      "load_5m": m5,
-                      "load_15m": m15})
+        yield Metric({"load_1m": float(m1),
+                      "load_5m": float(m5),
+                      "load_15m": float(m15)})


 mapping = {
@@ -11,9 +11,9 @@ computed_fields = {
     "mempctfree_nocache": lambda items: 1 - round((items["memtotal"] - items["memfree"] - items["cached"]) /
                                                   items["memtotal"], 5),
     "swappctused": lambda items: round((items["swaptotal"] - items["swapfree"]) /
-                                       items["swaptotal"] if items["swaptotal"] > 0 else 0, 5),
+                                       items["swaptotal"] if items["swaptotal"] > 0 else 0.0, 5),
     "swappctfree": lambda items: 1 - round((items["swaptotal"] - items["swapfree"]) /
-                                           items["swaptotal"] if items["swaptotal"] > 0 else 0, 5)
+                                           items["swaptotal"] if items["swaptotal"] > 0 else 0.0, 5)
 }
|
@ -1,4 +1,13 @@
|
|||
certifi==2018.8.13
|
||||
chardet==3.0.4
|
||||
elasticsearch==6.3.1
|
||||
psutil==3.3.0
|
||||
PyYAML==3.11
|
||||
idna==2.7
|
||||
influxdb==5.2.0
|
||||
psutil==5.4.7
|
||||
pymonitor==0.2.0
|
||||
python-dateutil==2.7.3
|
||||
pytz==2018.5
|
||||
PyYAML==3.13
|
||||
requests==2.19.1
|
||||
six==1.11.0
|
||||
urllib3==1.23
|
||||
|
|