# PostgreSQL plugin
Provides a native Zabbix solution for monitoring PostgreSQL (an object-relational database system).
It can monitor several PostgreSQL instances simultaneously, remote or local to the Zabbix Agent.
Native connection encryption is supported. The plugin keeps connections open to reduce network
congestion, latency, and CPU and memory usage. It works best in conjunction with the official
[PostgreSQL template](https://git.zabbix.com/projects/ZBX/repos/zabbix/browse/templates/db/postgresql_agent2).
You can extend it or create your own template for your specific needs.

## Requirements
- Zabbix Agent 2
- Go >= 1.13 (required only to build from source)

## Supported versions
PostgreSQL versions 10, 11 and 12

## Installation
The plugin is supplied as part of Zabbix Agent 2 and does not require any special installation steps.
Once Zabbix Agent 2 is installed, the plugin is ready to work. You only need to make sure that a PostgreSQL instance is
available for connection and that you have the necessary access rights.

## Configuration
The Zabbix Agent's configuration file is used to configure plugins.

**Plugins.Postgres.CallTimeout** — the maximum time in seconds to wait for a request to complete.
*Default value:* equals the global Timeout configuration parameter.
*Limits:* 1-30

**Plugins.Postgres.Timeout** — the maximum time in seconds to wait for a connection to be established.
*Default value:* equals the global Timeout configuration parameter.
*Limits:* 1-30

**Plugins.Postgres.CustomQueriesPath** — full pathname of a directory containing *.sql* files with custom queries.
*Default value:* — (the feature is disabled by default)

**Plugins.Postgres.KeepAlive** — the time to wait before unused connections are closed.
*Default value:* 300 sec.
*Limits:* 60-900

**Plugins.Postgres.Sessions.<session_name>.TLSConnect** — encryption type for the PostgreSQL connection; "<session_name>" should be replaced with the name of a session.
*Default value:*
*Accepted values:* required, verify_ca, verify_full

**Plugins.Postgres.Sessions.<session_name>.TLSCAFile** — full pathname of a file containing the top-level CA(s) certificate(s) for PostgreSQL.
*Default value:*

**Plugins.Postgres.Sessions.<session_name>.TLSCertFile** — full pathname of a file containing the PostgreSQL certificate or certificate chain.
*Default value:*

**Plugins.Postgres.Sessions.<session_name>.TLSKeyFile** — full pathname of a file containing the PostgreSQL private key.
*Default value:*
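
For example, a minimal agent configuration fragment combining these parameters might look like this (the values below are illustrative only; adjust them to your environment):

    Plugins.Postgres.CallTimeout=10
    Plugins.Postgres.Timeout=5
    Plugins.Postgres.KeepAlive=300
    Plugins.Postgres.CustomQueriesPath=/etc/zabbix/postgres/sql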

### Configuring connection
A connection can be configured using either keys' parameters or named sessions.

*Notes*:
* It is not possible to mix the named-session and keys' parameters ways of configuration.
* You can leave any connection parameter empty; a hard-coded default value will be used in such a case.
* TLS information can be passed only with sessions.
* Embedded URI credentials (userinfo) are forbidden and will be ignored, so you cannot pass credentials this way:

      pgsql.ping[tcp://user:password@127.0.0.1/postgres] — WRONG

  The correct way is:

      pgsql.ping[tcp://127.0.0.1,user,password,postgres]

* The only supported network schemas for a URI are "tcp" and "unix".
Examples of valid URIs:
  - tcp://127.0.0.1:5432
  - tcp://localhost
  - localhost
  - unix:/var/run/postgresql/.s.PGSQL.5432 (**Note:** a full socket file path is expected, not a socket directory)
  - /var/run/postgresql/.s.PGSQL.5432

#### Using keys' parameters
The common parameters for all keys are: [ConnString][,User][,Password][,Database],
where ConnString can be either a URI or a session name.
ConnString will be treated as a URI if no session with the given name is found.
If you use ConnString as a session name, just skip the rest of the connection parameters.
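
For instance, both of the following item keys check the same instance, assuming a named session "Prod" is defined for it in the agent configuration (the user, password and database below are placeholders):

    pgsql.ping[tcp://192.168.1.1:5432,<user>,<password>,postgres]
    pgsql.ping[Prod]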

#### Using named sessions
Named sessions allow you to define specific parameters for each PostgreSQL instance. Currently, the following
parameters are supported: Uri, User, Password, Service, TLSConnect, TLSCAFile, TLSCertFile and TLSKeyFile.
It is a slightly more secure way to store credentials compared to item keys or macros.

E.g.: suppose you have two PostgreSQL instances: "Prod" and "Test".
You should add the following options to the agent configuration file:

    Plugins.Postgres.Sessions.Prod.Uri=tcp://192.168.1.1:5432
    Plugins.Postgres.Sessions.Prod.User=<UserForProd>
    Plugins.Postgres.Sessions.Prod.Password=<PasswordForProd>
    Plugins.Postgres.Sessions.Prod.Database=proddb
    Plugins.Postgres.Sessions.Prod.TLSConnect=verify_full
    Plugins.Postgres.Sessions.Prod.TLSCAFile=/path/to/ca_file
    Plugins.Postgres.Sessions.Prod.TLSCertFile=/path/to/cert_file
    Plugins.Postgres.Sessions.Prod.TLSKeyFile=/path/to/key_file

    Plugins.Postgres.Sessions.Test.Uri=tcp://192.168.0.1:5432
    Plugins.Postgres.Sessions.Test.User=<UserForTest>
    Plugins.Postgres.Sessions.Test.Password=<PasswordForTest>
    Plugins.Postgres.Sessions.Test.Service=testdb
    Plugins.Postgres.Sessions.Test.TLSConnect=verify_ca
    Plugins.Postgres.Sessions.Test.TLSCAFile=/path/to/test/ca_file
    Plugins.Postgres.Sessions.Test.TLSCertFile=/path/to/test/cert_file
    Plugins.Postgres.Sessions.Test.TLSKeyFile=/path/to/test/key_file

Then you will be able to use these names as the first parameter (ConnString) in keys instead of URIs, e.g.:

    pgsql.ping[Prod]
    pgsql.ping[Test]

*Note*: session names are case-sensitive; the first letter of a name must be upper-cased.

## Supported keys
**pgsql.archive[\<commonParams\>]** — returns info about archive files.
*Returns:* Result of the
```sql
SELECT row_to_json(T)
FROM (SELECT archived_count, failed_count FROM pg_stat_archiver) T;

SELECT row_to_json(T)
FROM (SELECT count(name) AS count_files,
      coalesce(sum((pg_stat_file('./pg_wal/' || rtrim(ready.name, '.ready'))).size), 0) AS size_files
      FROM (SELECT name
            FROM pg_ls_dir('./pg_wal/archive_status') name
            WHERE right(name, 6) = '.ready') ready) T;
```
> SQL queries in JSON format.

The JSON is then processed by the following dependent items:
- pgsql.archive.count_archived_files — number of WAL files that have been successfully archived.
- pgsql.archive.failed_trying_to_archive — number of failed attempts to archive WAL files.
- pgsql.archive.count_files_to_archive — number of files to archive.
- pgsql.archive.size_files_to_archive — size of files to archive.

**pgsql.autovacum.count[\<commonParams\>]** — number of autovacuum workers.
*Returns:* Result of the
```sql
SELECT count(*)
FROM pg_catalog.pg_stat_activity
WHERE query LIKE '%%autovacuum%%'
AND state <> 'idle'
AND pid <> pg_catalog.pg_backend_pid()
```
> SQL query.

**pgsql.bgwriter[\<commonParams\>]** — statistics about the background writer process's activity.
*Returns:* Result of the
```sql
SELECT row_to_json(T)
FROM (
SELECT
checkpoints_timed
, checkpoints_req
, checkpoint_write_time
, checkpoint_sync_time
, buffers_checkpoint
, buffers_clean
, maxwritten_clean
, buffers_backend
, buffers_backend_fsync
, buffers_alloc
FROM pg_catalog.pg_stat_bgwriter
) T
```
> SQL query in JSON format.

The JSON is then processed by the following dependent items:
- pgsql.bgwriter.buffers_alloc — number of buffers allocated.
- pgsql.bgwriter.buffers_backend — number of buffers written directly by a backend.
- pgsql.bgwriter.maxwritten_clean — number of times the background writer stopped a cleaning scan because it had
written too many buffers.
- pgsql.bgwriter.buffers_backend_fsync — number of times a backend had to execute its own fsync call (normally the
background writer handles those even when the backend does its own write).
- pgsql.bgwriter.buffers_clean — number of buffers written by the background writer.
- pgsql.bgwriter.buffers_checkpoint — number of buffers written during checkpoints.
- pgsql.bgwriter.checkpoints_timed — number of scheduled checkpoints that have been performed.
- pgsql.bgwriter.checkpoints_req — number of requested checkpoints that have been performed.
- pgsql.bgwriter.checkpoint_write_time — total amount of time that has been spent in the portion of checkpoint
processing where files are written to disk, in milliseconds.
- pgsql.bgwriter.sync_time — total amount of time that has been spent in the portion of checkpoint processing where
files are synchronized to disk, in milliseconds.

**pgsql.cache.hit[\<commonParams\>]** — cache hit rate.
*Returns:* Result of the
```sql
SELECT round(sum(blks_hit)*100/sum(blks_hit+blks_read), 2)
FROM pg_catalog.pg_stat_database;
```
> SQL query, as a percentage.

**pgsql.connections[\<commonParams\>]** — connections by types.
*Returns:* Result of the
```sql
SELECT row_to_json(T)
FROM (
SELECT
sum(CASE WHEN state = 'active' THEN 1 ELSE 0 END) AS active,
sum(CASE WHEN state = 'idle' THEN 1 ELSE 0 END) AS idle,
sum(CASE WHEN state = 'idle in transaction' THEN 1 ELSE 0 END) AS idle_in_transaction,
sum(CASE WHEN state = 'idle in transaction (aborted)' THEN 1 ELSE 0 END) AS idle_in_transaction_aborted,
sum(CASE WHEN state = 'fastpath function call' THEN 1 ELSE 0 END) AS fastpath_function_call,
count(*) AS total,
count(*)*100/(SELECT current_setting('max_connections')::int) AS total_pct,
sum(CASE WHEN wait_event IS NOT NULL THEN 1 ELSE 0 END) AS waiting,
(SELECT count(*) FROM pg_prepared_xacts) AS prepared
FROM pg_stat_activity
WHERE datid is not NULL) T;
```
> SQL query in JSON format.

The JSON is then processed by the following dependent items of pgsql.connections:
- pgsql.connections.active — the backend is executing a query.
- pgsql.connections.fastpath_function_call — the backend is executing a fast-path function.
- pgsql.connections.idle — the backend is waiting for a new client command.
- pgsql.connections.idle_in_transaction — the backend is in a transaction, but is not currently executing a query.
- pgsql.connections.idle_in_transaction_aborted — similar to idle in transaction, except one of the statements in the
transaction caused an error.
- pgsql.connections.prepared — number of prepared transactions.
- pgsql.connections.total — total number of connections.
- pgsql.connections.total_pct — percentage of total connections with respect to the 'max_connections' setting of the
PostgreSQL server.
- pgsql.connections.waiting — number of waiting connections.

**pgsql.custom.query[\<commonParams\>,queryName[,args...]]** — returns the result of a custom query.
*Parameters:*
queryName (required) — name of a custom query (must be equal to the name of an .sql file without the extension).
args (optional) — one or more arguments to pass to the query.
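
For example, assuming a file long_tx.sql exists in the custom queries directory and takes one argument (say, a duration threshold in seconds), the key could look like this (connection parameters are placeholders):

    pgsql.custom.query[tcp://127.0.0.1:5432,<user>,<password>,postgres,long_tx,600]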

**pgsql.dbstat[\<commonParams\>]** — statistics per database. Used in databases discovery.
*Returns:* Result of the
```sql
SELECT
json_object_agg(coalesce(datname,'null'), row_to_json(T))
FROM (
SELECT
datname
, numbackends as numbackends
, xact_commit as xact_commit
, xact_rollback as xact_rollback
, blks_read as blks_read
, blks_hit as blks_hit
, tup_returned as tup_returned
, tup_fetched as tup_fetched
, tup_inserted as tup_inserted
, tup_updated as tup_updated
, tup_deleted as tup_deleted
, conflicts as conflicts
, temp_files as temp_files
, temp_bytes as temp_bytes
, deadlocks as deadlocks
, %s as checksum_failures
, blk_read_time as blk_read_time
, blk_write_time as blk_write_time
FROM pg_catalog.pg_stat_database
) T;
```
> SQL query in JSON format.

The JSON is then processed by the following dependent items:
- pgsql.dbstat.numbackends["{#DBNAME}"] — number of backends currently connected to this database.
- pgsql.dbstat.sum.blk_read_time["{#DBNAME}"] — time spent reading data file blocks by backends in this database,
in milliseconds.
- pgsql.dbstat.sum.blk_write_time["{#DBNAME}"] — time spent writing data file blocks by backends in this database,
in milliseconds.
- pgsql.dbstat.sum.checksum_failures["{#DBNAME}"] — number of data page checksum failures detected in this database
(or on a shared object), or NULL if data checksums are not enabled (PostgreSQL version 12 only).
- pgsql.dbstat.blks_read.rate["{#DBNAME}"] — number of disk blocks read in this database.
- pgsql.dbstat.deadlocks.rate["{#DBNAME}"] — number of deadlocks detected in this database.
- pgsql.dbstat.blks_hit.rate["{#DBNAME}"] — number of times disk blocks were found already in the buffer cache, so that
a read was not necessary (this only includes hits in the PostgreSQL buffer cache, not the operating system's file
system cache).
- pgsql.dbstat.xact_rollback.rate["{#DBNAME}"] — number of transactions in this database that have been rolled back.
- pgsql.dbstat.xact_commit.rate["{#DBNAME}"] — number of transactions in this database that have been committed.
- pgsql.dbstat.tup_updated.rate["{#DBNAME}"] — number of rows updated by queries in this database.
- pgsql.dbstat.tup_returned.rate["{#DBNAME}"] — number of rows returned by queries in this database.
- pgsql.dbstat.tup_inserted.rate["{#DBNAME}"] — number of rows inserted by queries in this database.
- pgsql.dbstat.tup_fetched.rate["{#DBNAME}"] — number of rows fetched by queries in this database.
- pgsql.dbstat.tup_deleted.rate["{#DBNAME}"] — number of rows deleted by queries in this database.
- pgsql.dbstat.conflicts.rate["{#DBNAME}"] — number of queries canceled due to conflicts with recovery in this database.
Conflicts occur only on standby servers; see pg_stat_database_conflicts for details.
- pgsql.dbstat.temp_files.rate["{#DBNAME}"] — number of temporary files created by queries in this database.
All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing), and
regardless of the log_temp_files setting.
- pgsql.dbstat.temp_bytes.rate["{#DBNAME}"] — total amount of data written to temporary files by queries in this
database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the
log_temp_files setting.

**pgsql.dbstat.sum[\<commonParams\>]** — statistics for all databases combined.
*Returns:* Result of the
```sql
SELECT row_to_json(T)
FROM (
SELECT
sum(numbackends) as numbackends
, sum(xact_commit) as xact_commit
, sum(xact_rollback) as xact_rollback
, sum(blks_read) as blks_read
, sum(blks_hit) as blks_hit
, sum(tup_returned) as tup_returned
, sum(tup_fetched) as tup_fetched
, sum(tup_inserted) as tup_inserted
, sum(tup_updated) as tup_updated
, sum(tup_deleted) as tup_deleted
, sum(conflicts) as conflicts
, sum(temp_files) as temp_files
, sum(temp_bytes) as temp_bytes
, sum(deadlocks) as deadlocks
, sum(checksum_failures) as checksum_failures
, sum(blk_read_time) as blk_read_time
, sum(blk_write_time) as blk_write_time
FROM pg_catalog.pg_stat_database
) T
```
> SQL query in JSON format.

The JSON is then processed by the following dependent items:
- pgsql.dbstat.sum.numbackends — number of backends currently connected.
- pgsql.dbstat.sum.blk_read_time — time spent reading data file blocks by backends, in milliseconds.
- pgsql.dbstat.sum.blk_write_time — time spent writing data file blocks by backends, in milliseconds.
- pgsql.dbstat.sum.checksum_failures — number of data page checksum failures detected (or on a shared object), or NULL
if data checksums are not enabled (PostgreSQL version 12 only).
- pgsql.dbstat.sum.xact_commit — number of transactions that have been committed.
- pgsql.dbstat.sum.conflicts — number of queries canceled due to conflicts with recovery.
Conflicts occur only on standby servers; see pg_stat_database_conflicts for details.
- pgsql.dbstat.sum.deadlocks — number of deadlocks detected.
- pgsql.dbstat.sum.blks_read — number of disk blocks read.
- pgsql.dbstat.sum.blks_hit — number of times disk blocks were found already in the buffer cache, so that a read was not
necessary (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache).
- pgsql.dbstat.sum.temp_bytes — total amount of data written to temporary files by queries. All
temporary files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files
setting.
- pgsql.dbstat.sum.temp_files — number of temporary files created by queries. All temporary files are
counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the
log_temp_files setting.
- pgsql.dbstat.sum.xact_rollback — number of transactions that have been rolled back.
- pgsql.dbstat.sum.tup_deleted — number of rows deleted by queries.
- pgsql.dbstat.sum.tup_fetched — number of rows fetched by queries.
- pgsql.dbstat.sum.tup_inserted — number of rows inserted by queries.
- pgsql.dbstat.sum.tup_returned — number of rows returned by queries.
- pgsql.dbstat.sum.tup_updated — number of rows updated by queries.

**pgsql.db.age[\<commonParams\>]** — age of the oldest xid for the specific database. Used in databases discovery.
*Returns:* Result of the
```sql
SELECT age(datfrozenxid)
FROM pg_catalog.pg_database
WHERE datistemplate = false
AND datname = <dbName>
```
> SQL query for a specific database, in transactions.

**pgsql.db.bloating_tables[\<commonParams\>]** — number of bloating tables per database. Used in databases discovery.
*Returns:* Result of the
```sql
SELECT count(*)
FROM pg_catalog.pg_stat_all_tables
WHERE (n_dead_tup/(n_live_tup+n_dead_tup)::float8) > 0.2
AND (n_live_tup+n_dead_tup) > 50;
```
> SQL query.

The result of this query differs depending on the database to which the agent is currently connected.

**pgsql.db.discovery[\<commonParams\>]** — databases discovery.
*Returns:* Result of the
```sql
SELECT json_build_object('data',json_agg(json_build_object('{#DBNAME}',d.datname)))
FROM pg_database d
WHERE NOT datistemplate
AND datallowconn;
```
> SQL query in LLD JSON format.

**pgsql.db.size[\<commonParams\>]** — database size in bytes. Used in databases discovery.
*Returns:* Result of the
```sql
SELECT pg_database_size(datname::text)
FROM pg_catalog.pg_database
WHERE datistemplate = false
AND datname = <dbName>;
```
> SQL query for a specific database, in bytes.

**pgsql.locks[\<commonParams\>]** — locks statistics per database. Used in databases discovery.
*Returns:* Result of the
```sql
WITH T AS
(SELECT
db.datname dbname,
lower(replace(Q.mode, 'Lock', '')) AS MODE,
coalesce(T.qty, 0) val
FROM pg_database db
JOIN (
VALUES ('AccessShareLock'), ('RowShareLock'), ('RowExclusiveLock'), ('ShareUpdateExclusiveLock'),
('ShareLock'), ('ShareRowExclusiveLock'), ('ExclusiveLock'), ('AccessExclusiveLock')) Q(MODE) ON TRUE
NATURAL LEFT JOIN
(SELECT datname,
MODE,
count(MODE) qty
FROM pg_locks lc
RIGHT JOIN pg_database db ON db.oid = lc.database
GROUP BY 1, 2) T
WHERE NOT db.datistemplate
ORDER BY 1, 2)
SELECT json_object_agg(dbname, row_to_json(T2))
FROM
(SELECT dbname,
sum(val) AS total,
sum(CASE WHEN MODE = 'accessexclusive' THEN val END) AS accessexclusive,
sum(CASE WHEN MODE = 'accessshare' THEN val END) AS accessshare,
sum(CASE WHEN MODE = 'exclusive' THEN val END) AS exclusive,
sum(CASE WHEN MODE = 'rowexclusive' THEN val END) AS rowexclusive,
sum(CASE WHEN MODE = 'rowshare' THEN val END) AS rowshare,
sum(CASE WHEN MODE = 'share' THEN val END) AS share,
sum(CASE WHEN MODE = 'sharerowexclusive' THEN val END) AS sharerowexclusive,
sum(CASE WHEN MODE = 'shareupdateexclusive' THEN val END) AS shareupdateexclusive
FROM T
GROUP BY dbname) T2;
```
> SQL query in JSON format.

The JSON is then processed by the following dependent items:
- pgsql.locks.accessexclusive["{#DBNAME}"] — number of access exclusive locks.
- pgsql.locks.accessshare["{#DBNAME}"] — number of access share locks.
- pgsql.locks.exclusive["{#DBNAME}"] — number of exclusive locks.
- pgsql.locks.rowexclusive["{#DBNAME}"] — number of row exclusive locks.
- pgsql.locks.rowshare["{#DBNAME}"] — number of row share locks.
- pgsql.locks.share["{#DBNAME}"] — number of share locks.
- pgsql.locks.sharerowexclusive["{#DBNAME}"] — number of share row exclusive locks.
- pgsql.locks.shareupdateexclusive["{#DBNAME}"] — number of share update exclusive locks.

**pgsql.oldest.xid[\<commonParams\>]** — PostgreSQL age of the oldest XID.
*Returns:* Result of the
```sql
SELECT greatest(max(age(backend_xmin)), max(age(backend_xid)))
FROM pg_catalog.pg_stat_activity
```
> SQL query.

**pgsql.ping[\<commonParams\>]** — tests whether a connection is alive or not.
*Returns:*
- "1" if the connection is alive.
- "0" if the connection is broken (returned if there was any error during the test, including AUTH and configuration issues).

**pgsql.replication.count[uri,username,password]** — number of standby servers.
*Returns:* Result of the
```sql
SELECT count(*) FROM pg_stat_replication
```
> SQL query.

**pgsql.replication_lag.b[uri,username,password]** — replication lag in bytes.
*Returns:* Result of the
```sql
SELECT pg_catalog.pg_wal_lsn_diff(received_lsn, pg_last_wal_replay_lsn())
FROM pg_stat_wal_receiver;
```
> SQL query in bytes.

**pgsql.replication_lag.sec[uri,username,password]** — replication lag in seconds.
*Returns:* Result of the
```sql
SELECT
CASE
WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn() THEN 0
ELSE
COALESCE(EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp())::integer, 0)
END as lag
```
> SQL query in seconds.

**pgsql.replication.recovery_role[uri,username,password]** — recovery status.
*Returns:*
- 1 — recovery is still in progress (standby mode).
- 0 — master mode.

**pgsql.replication.status[uri,username,password]** — status of replication.
*Returns:*
- 0 — streaming is down.
- 1 — streaming is up.
- 2 — master mode.

**pgsql.replication.process[uri,username,password]** — flush lag, write lag and replay lag for each sender process.
*Returns:* Result of the
```sql
SELECT json_object_agg(application_name, row_to_json(T))
FROM (
    SELECT
        CONCAT(application_name, ' ', pid) AS application_name,
        EXTRACT(epoch FROM COALESCE(flush_lag, '0'::interval)) as flush_lag,
        EXTRACT(epoch FROM COALESCE(replay_lag, '0'::interval)) as replay_lag,
        EXTRACT(epoch FROM COALESCE(write_lag, '0'::interval)) as write_lag
    FROM pg_stat_replication
) T;
```
> SQL query in JSON format.

**pgsql.replication.process.discovery[uri,username,password]** — replication process name discovery.
*Returns:* Result of the
```sql
SELECT
json_build_object('data',
json_agg(json_build_object('{#APPLICATION_NAME}',
CONCAT(application_name, ' ', pid))))
FROM
pg_stat_replication
```
> SQL query in LLD JSON format.

**pgsql.uptime[\<commonParams\>]** — PostgreSQL uptime, in seconds.
*Returns:* Result of the
```sql
SELECT date_part('epoch', now() - pg_postmaster_start_time());
```
> SQL query in seconds.

**pgsql.wal.stat[\<commonParams\>]** — returns WAL statistics.
*Returns:* Result of the
```sql
SELECT row_to_json(T)
FROM (
SELECT
pg_wal_lsn_diff(pg_current_wal_lsn(), '0/00000000') AS write,
count(*) AS count
FROM pg_ls_waldir()
) T;
```
> SQL query in JSON format.

The JSON is then processed by the following dependent items:
- pgsql.wal.count — number of WAL files.
- pgsql.wal.write — WAL LSN used, in bytes.

## Custom queries
It is possible to extend the functionality of the plugin using user-defined queries. To do that, place all your
queries as *.sql files in the directory specified in Plugins.Postgres.CustomQueriesPath (there is no default path).
For example, you could have the following tree:

    /etc/zabbix/postgres/sql/
    ├── long_tx.sql
    ├── payment.sql
    └── top_proc.sql

You should then set Plugins.Postgres.CustomQueriesPath=/etc/zabbix/postgres/sql

Once the queries are in place, you can execute them:

    pgsql.custom.query[<commonParams>,top_proc]
    pgsql.custom.query[<commonParams>,long_tx,600]

You can pass as many parameters to a query as you need.
The syntax for placeholder parameters uses "$#", where "#" is the index number of a parameter.
E.g.:
```sql
/* payment.sql */

SELECT
    amount
FROM
    payment
WHERE
    user = $1
    AND service_id = $2
    AND date = $3
```

    pgsql.custom.query[<commonParams>,payment,"John Doe",1,"10/25/2020"]

## Troubleshooting
The plugin uses the Zabbix Agent's logs. You can increase the debugging level of the Zabbix Agent if you need more
details about what is happening.
583