Metadata-Version: 2.1
Name: prometheus-client
Version: 0.8.0
Summary: Python client for the Prometheus monitoring system.
Home-page: https://github.com/prometheus/client_python
Author: Brian Brazil
Author-email: brian.brazil@robustperception.io
License: Apache Software License 2.0
Keywords: prometheus monitoring instrumentation client
Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: System Administrators
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: System :: Monitoring
Classifier: License :: OSI Approved :: Apache Software License
Description-Content-Type: text/markdown
Provides-Extra: twisted
Requires-Dist: twisted ; extra == 'twisted'

# Prometheus Python Client

The official Python 2 and 3 client for [Prometheus](http://prometheus.io).

## Three Step Demo

**One**: Install the client:
```
pip install prometheus_client
```

**Two**: Paste the following into a Python interpreter:
```python
from prometheus_client import start_http_server, Summary
import random
import time

# Create a metric to track time spent and requests made.
REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')

# Decorate function with metric.
@REQUEST_TIME.time()
def process_request(t):
    """A dummy function that takes some time."""
    time.sleep(t)

if __name__ == '__main__':
    # Start up the server to expose the metrics.
    start_http_server(8000)
    # Generate some requests.
    while True:
        process_request(random.random())
```

**Three**: Visit [http://localhost:8000/](http://localhost:8000/) to view the metrics.
From one easy-to-use decorator you get:
* `request_processing_seconds_count`: Number of times this function was called.
* `request_processing_seconds_sum`: Total amount of time spent in this function.

Prometheus's `rate` function allows calculation of both requests per second
and latency over time from this data.

In addition, if you're on Linux the `process` metrics expose CPU, memory and
other information about the process for free!
## Installation

```
pip install prometheus_client
```

This package can be found on
[PyPI](https://pypi.python.org/pypi/prometheus_client).

## Instrumenting

Four types of metric are offered: Counter, Gauge, Summary and Histogram.
See the documentation on [metric types](http://prometheus.io/docs/concepts/metric_types/)
and [instrumentation best practices](https://prometheus.io/docs/practices/instrumentation/#counter-vs-gauge-summary-vs-histogram)
on how to use them.
### Counter

Counters go up, and reset when the process restarts.

```python
from prometheus_client import Counter
c = Counter('my_failures', 'Description of counter')
c.inc()     # Increment by 1
c.inc(1.6)  # Increment by given value
```
If there is a suffix of `_total` on the metric name, it will be removed. When
exposing the time series for a counter, a `_total` suffix will be added. This is
for compatibility between OpenMetrics and the Prometheus text format, as OpenMetrics
requires the `_total` suffix.
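For illustration, a minimal sketch of the suffix handling (the metric name is illustrative and chosen not to collide with the example above):

```python
from prometheus_client import Counter

# The _total suffix given here is stripped internally; the exposed
# time series is still named my_errors_total.
c2 = Counter('my_errors_total', 'Description of counter')
c2.inc()
```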
There are utilities to count exceptions raised:

```python
@c.count_exceptions()
def f():
    pass

with c.count_exceptions():
    pass

# Count only one type of exception
with c.count_exceptions(ValueError):
    pass
```
### Gauge

Gauges can go up and down.

```python
from prometheus_client import Gauge
g = Gauge('my_inprogress_requests', 'Description of gauge')
g.inc()      # Increment by 1
g.dec(10)    # Decrement by given value
g.set(4.2)   # Set to a given value
```

There are utilities for common use cases:

```python
g.set_to_current_time()   # Set to current unixtime

# Increment when entered, decrement when exited.
@g.track_inprogress()
def f():
    pass

with g.track_inprogress():
    pass
```
A Gauge can also take its value from a callback:

```python
d = Gauge('data_objects', 'Number of objects')
my_dict = {}
d.set_function(lambda: len(my_dict))
```

### Summary

Summaries track the size and number of events.

```python
from prometheus_client import Summary
s = Summary('request_latency_seconds', 'Description of summary')
s.observe(4.7)  # Observe 4.7 (seconds in this case)
```
There are utilities for timing code:

```python
@s.time()
def f():
    pass

with s.time():
    pass
```

The Python client doesn't store or expose quantile information at this time.
### Histogram

Histograms track the size and number of events in buckets.
This allows for aggregatable calculation of quantiles.

```python
from prometheus_client import Histogram
h = Histogram('request_latency_seconds', 'Description of histogram')
h.observe(4.7)  # Observe 4.7 (seconds in this case)
```
The default buckets are intended to cover a typical web/RPC request from milliseconds to seconds.
They can be overridden by passing a `buckets` keyword argument to `Histogram`.
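For example, a minimal sketch with custom buckets (the metric name and boundary values are illustrative):

```python
from prometheus_client import Histogram

# Bucket upper bounds in seconds; a final +Inf bucket is appended automatically.
h2 = Histogram('backend_latency_seconds', 'Description of histogram',
               buckets=[0.01, 0.05, 0.1, 0.5, 1.0, 5.0])
h2.observe(0.42)
```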
There are utilities for timing code:

```python
@h.time()
def f():
    pass

with h.time():
    pass
```
### Info

Info tracks key-value information, usually about a whole target.

```python
from prometheus_client import Info
i = Info('my_build_version', 'Description of info')
i.info({'version': '1.2.3', 'buildhost': 'foo@bar'})
```

### Enum

Enum tracks which of a set of states something is currently in.

```python
from prometheus_client import Enum
e = Enum('my_task_state', 'Description of enum',
         states=['starting', 'running', 'stopped'])
e.state('running')
```
### Labels

All metrics can have labels, allowing grouping of related time series.

See the best practices on [naming](http://prometheus.io/docs/practices/naming/)
and [labels](http://prometheus.io/docs/practices/instrumentation/#use-labels).

Taking a counter as an example:

```python
from prometheus_client import Counter
c = Counter('my_requests_total', 'HTTP requests', ['method', 'endpoint'])
c.labels('get', '/').inc()
c.labels('post', '/submit').inc()
```

Labels can also be passed as keyword arguments:

```python
from prometheus_client import Counter
c = Counter('my_requests_total', 'HTTP requests', ['method', 'endpoint'])
c.labels(method='get', endpoint='/').inc()
c.labels(method='post', endpoint='/submit').inc()
```
### Process Collector

The Python client automatically exports metrics about process CPU usage, RAM,
file descriptors and start time. These all have the prefix `process`, and
are currently only available on Linux.

The `namespace` and `pid` constructor arguments allow for exporting metrics about
other processes, for example:

```python
from prometheus_client import ProcessCollector

ProcessCollector(namespace='mydaemon', pid=lambda: open('/var/run/daemon.pid').read())
```
### Platform Collector

The client also automatically exports some metadata about Python. If using Jython,
metadata about the JVM in use is also included. This information is available as
labels on the `python_info` metric. The value of the metric is 1, since it is the
labels that carry information.
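To see those labels locally, one minimal sketch is to dump the text exposition of the default registry:

```python
from prometheus_client import REGISTRY, generate_latest

# The output includes a python_info sample whose labels describe the interpreter.
print(generate_latest(REGISTRY).decode('utf-8'))
```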
## Exporting

There are several options for exporting metrics.

### HTTP

Metrics are usually exposed over HTTP, to be read by the Prometheus server.

The easiest way to do this is via `start_http_server`, which will start an HTTP
server in a daemon thread on the given port:

```python
from prometheus_client import start_http_server
start_http_server(8000)
```

Visit [http://localhost:8000/](http://localhost:8000/) to view the metrics.

To add Prometheus exposition to an existing HTTP server, see the `MetricsHandler` class
which provides a `BaseHTTPRequestHandler`. It also serves as a simple example of how
to write a custom endpoint.
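A minimal sketch of using it with the standard library's HTTP server (Python 3 module names; the port is arbitrary):

```python
from http.server import HTTPServer
from prometheus_client import MetricsHandler

# Serve the default registry's metrics from this server.
httpd = HTTPServer(('', 8000), MetricsHandler)
httpd.serve_forever()
```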
#### Twisted

To use Prometheus with [Twisted](https://twistedmatrix.com/), there is `MetricsResource` which exposes metrics as a Twisted resource.

```python
from prometheus_client.twisted import MetricsResource
from twisted.web.server import Site
from twisted.web.resource import Resource
from twisted.internet import reactor

root = Resource()
root.putChild(b'metrics', MetricsResource())

factory = Site(root)
reactor.listenTCP(8000, factory)
reactor.run()
```
#### WSGI

To use Prometheus with [WSGI](http://wsgi.readthedocs.org/en/latest/), there is
`make_wsgi_app` which creates a WSGI application.

```python
from prometheus_client import make_wsgi_app
from wsgiref.simple_server import make_server

app = make_wsgi_app()
httpd = make_server('', 8000, app)
httpd.serve_forever()
```

Such an application can be useful when integrating Prometheus metrics with WSGI
apps.

The method `start_wsgi_server` can be used to serve the metrics through the
WSGI reference implementation in a new thread.

```python
from prometheus_client import start_wsgi_server
start_wsgi_server(8000)
```
#### ASGI

To use Prometheus with [ASGI](http://asgi.readthedocs.org/en/latest/), there is
`make_asgi_app` which creates an ASGI application.

```python
from prometheus_client import make_asgi_app
app = make_asgi_app()
```

Such an application can be useful when integrating Prometheus metrics with ASGI
apps.
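For example, a hedged sketch of mounting it next to an existing ASGI application using Starlette (Starlette is not a dependency of this package; the routing shown is an assumption about your framework):

```python
from starlette.applications import Starlette
from starlette.routing import Mount

from prometheus_client import make_asgi_app

# Serve Prometheus metrics under /metrics alongside the rest of the ASGI app.
app = Starlette(routes=[Mount("/metrics", app=make_asgi_app())])
```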
#### Flask

To use Prometheus with [Flask](http://flask.pocoo.org/) we need to serve metrics through a Prometheus WSGI application. This can be achieved using [Flask's application dispatching](http://flask.pocoo.org/docs/latest/patterns/appdispatch/). Below is a working example.

Save the snippet below in a `myapp.py` file:

```python
from flask import Flask
from werkzeug.middleware.dispatcher import DispatcherMiddleware
from prometheus_client import make_wsgi_app

# Create my app
app = Flask(__name__)

# Add prometheus wsgi middleware to route /metrics requests
app_dispatch = DispatcherMiddleware(app, {
    '/metrics': make_wsgi_app()
})
```

Run the example web application like this:

```bash
# Install uwsgi if you do not have it
pip install uwsgi

uwsgi --http 127.0.0.1:8000 --wsgi-file myapp.py --callable app_dispatch
```

Visit http://localhost:8000/metrics to see the metrics.
### Node exporter textfile collector

The [textfile collector](https://github.com/prometheus/node_exporter#textfile-collector)
allows machine-level statistics to be exported out via the Node exporter.

This is useful for monitoring cronjobs, or for writing cronjobs to expose metrics
about a machine system that the Node exporter does not support or would not make sense
to perform at every scrape (for example, anything involving subprocesses).

```python
from prometheus_client import CollectorRegistry, Gauge, write_to_textfile

registry = CollectorRegistry()
g = Gauge('raid_status', '1 if raid array is okay', registry=registry)
g.set(1)
write_to_textfile('/configured/textfile/path/raid.prom', registry)
```

A separate registry is used, as the default registry may contain other metrics
such as those from the Process Collector.
## Exporting to a Pushgateway

The [Pushgateway](https://github.com/prometheus/pushgateway)
allows ephemeral and batch jobs to expose their metrics to Prometheus.

```python
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
g = Gauge('job_last_success_unixtime', 'Last time a batch job successfully finished', registry=registry)
g.set_to_current_time()
push_to_gateway('localhost:9091', job='batchA', registry=registry)
```

A separate registry is used, as the default registry may contain other metrics
such as those from the Process Collector.
Pushgateway functions take a grouping key. `push_to_gateway` replaces metrics
with the same grouping key, `pushadd_to_gateway` only replaces metrics with the
same name and grouping key, and `delete_from_gateway` deletes metrics with the
given job and grouping key. See the
[Pushgateway documentation](https://github.com/prometheus/pushgateway/blob/master/README.md)
for more information.

`instance_ip_grouping_key` returns a grouping key with the instance label set
to the host's IP address.
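A minimal sketch of these functions (the gateway address, job name and metric are illustrative):

```python
from prometheus_client import (CollectorRegistry, Gauge, push_to_gateway,
                               pushadd_to_gateway, delete_from_gateway)
from prometheus_client.exposition import instance_ip_grouping_key

registry = CollectorRegistry()
g = Gauge('job_last_success_unixtime', 'Last time a batch job successfully finished',
          registry=registry)
g.set_to_current_time()

# Add to (rather than replace) the metrics under this job's grouping key.
pushadd_to_gateway('localhost:9091', job='batchA', registry=registry)

# Push under a grouping key that sets the instance label to this host's IP.
push_to_gateway('localhost:9091', job='batchA', registry=registry,
                grouping_key=instance_ip_grouping_key())

# Delete everything previously pushed under the given job and grouping key.
delete_from_gateway('localhost:9091', job='batchA')
```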
### Handlers for authentication

If the push gateway you are connecting to is protected with HTTP Basic Auth,
you can use a special handler to set the Authorization header.

```python
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway
from prometheus_client.exposition import basic_auth_handler

def my_auth_handler(url, method, timeout, headers, data):
    username = 'foobar'
    password = 'secret123'
    return basic_auth_handler(url, method, timeout, headers, data, username, password)

registry = CollectorRegistry()
g = Gauge('job_last_success_unixtime', 'Last time a batch job successfully finished', registry=registry)
g.set_to_current_time()
push_to_gateway('localhost:9091', job='batchA', registry=registry, handler=my_auth_handler)
```
## Bridges

It is also possible to expose metrics to systems other than Prometheus.
This allows you to take advantage of Prometheus instrumentation even
if you are not quite ready to fully transition to Prometheus yet.

### Graphite

Metrics are pushed over TCP in the Graphite plaintext format.

```python
from prometheus_client.bridge.graphite import GraphiteBridge

gb = GraphiteBridge(('graphite.your.org', 2003))
# Push once.
gb.push()
# Push every 10 seconds in a daemon thread.
gb.start(10.0)
```
## Custom Collectors

Sometimes it is not possible to directly instrument code, as it is not
in your control. This requires you to proxy metrics from other systems.

To do so you need to create a custom collector, for example:

```python
from prometheus_client.core import GaugeMetricFamily, CounterMetricFamily, REGISTRY

class CustomCollector(object):
    def collect(self):
        yield GaugeMetricFamily('my_gauge', 'Help text', value=7)
        c = CounterMetricFamily('my_counter_total', 'Help text', labels=['foo'])
        c.add_metric(['bar'], 1.7)
        c.add_metric(['baz'], 3.8)
        yield c

REGISTRY.register(CustomCollector())
```
`SummaryMetricFamily` and `HistogramMetricFamily` work similarly.

A collector may implement a `describe` method which returns metrics in the same
format as `collect` (though you don't have to include the samples). This is
used to predetermine the names of time series a `CollectorRegistry` exposes and
thus to detect collisions and duplicate registrations.

Usually custom collectors do not have to implement `describe`. If `describe` is
not implemented and the CollectorRegistry was created with `auto_describe=True`
(which is the case for the default registry) then `collect` will be called at
registration time instead of `describe`. If this could cause problems, either
implement a proper `describe`, or if that's not practical have `describe`
return an empty list.
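As a minimal sketch, a collector that opts out of describe-time collection could look like this (the collector and metric are illustrative):

```python
from prometheus_client.core import GaugeMetricFamily, REGISTRY

class MyCollector(object):
    def describe(self):
        # An empty list avoids collect() being called at registration time.
        return []

    def collect(self):
        yield GaugeMetricFamily('my_other_gauge', 'Help text', value=7)

REGISTRY.register(MyCollector())
```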
## Multiprocess Mode (Gunicorn)

Prometheus client libraries presume a threaded model, where metrics are shared
across workers. This doesn't work so well for languages such as Python where
it's common to have processes rather than threads to handle large workloads.

To handle this the client library can be put in multiprocess mode.
This comes with a number of limitations:

- Registries can not be used as normal, all instantiated metrics are exported
- Custom collectors do not work (e.g. cpu and memory metrics)
- Info and Enum metrics do not work
- The pushgateway cannot be used
- Gauges cannot use the `pid` label
There are several steps to getting this working:

**1. Gunicorn deployment**:

The `prometheus_multiproc_dir` environment variable must be set to a directory
that the client library can use for metrics. This directory must be wiped
between Gunicorn runs (before startup is recommended).

This environment variable should be set from a start-up shell script,
and not directly from Python (otherwise it may not propagate to child processes).
**2. Metrics collector**:

The application must initialize a new `CollectorRegistry`,
and store the multi-process collector inside.

```python
from prometheus_client import multiprocess
from prometheus_client import generate_latest, CollectorRegistry, CONTENT_TYPE_LATEST

# Expose metrics.
def app(environ, start_response):
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    data = generate_latest(registry)
    status = '200 OK'
    response_headers = [
        ('Content-type', CONTENT_TYPE_LATEST),
        ('Content-Length', str(len(data)))
    ]
    start_response(status, response_headers)
    return iter([data])
```
**3. Gunicorn configuration**:

The `gunicorn` configuration file needs to include the following function:

```python
from prometheus_client import multiprocess

def child_exit(server, worker):
    multiprocess.mark_process_dead(worker.pid)
```
**4. Metrics tuning (Gauge)**:

When `Gauge` metrics are used, additional tuning needs to be performed.
Gauges have several modes they can run in, which can be selected with the `multiprocess_mode` parameter.

- 'all': Default. Return a timeseries per process alive or dead.
- 'liveall': Return a timeseries per process that is still alive.
- 'livesum': Return a single timeseries that is the sum of the values of alive processes.
- 'max': Return a single timeseries that is the maximum of the values of all processes, alive or dead.
- 'min': Return a single timeseries that is the minimum of the values of all processes, alive or dead.

```python
from prometheus_client import Gauge

# Example gauge
IN_PROGRESS = Gauge("inprogress_requests", "help", multiprocess_mode='livesum')
```
## Parser

The Python client supports parsing the Prometheus text format.
This is intended for advanced use cases where you have servers
exposing Prometheus metrics and need to get them into some other
system.

```python
from prometheus_client.parser import text_string_to_metric_families

for family in text_string_to_metric_families(u"my_gauge 1.0\n"):
    for sample in family.samples:
        print("Name: {0} Labels: {1} Value: {2}".format(*sample))
```