Starting with version 1.8.0, pgmetrics can extract information from
PostgreSQL log files and make it available in its JSON output.
Currently, the following information is collected:
- Query execution plans logged by the auto_explain extension. Plans in JSON
or text format are collected, along with the SQL query text, the username
of the user executing the query, and the name of the database on which it
was executed (see the configuration sketch after this list).
- Autovacuum log entries, with the name of the table being autovacuumed,
the start time and the duration.
- Deadlock detection logs that include the queries that caused the deadlock.
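For auto_explain plans to appear in the logs at all, the extension must be
preloaded and configured on the PostgreSQL side. A minimal postgresql.conf
sketch follows; the 1-second threshold and the JSON format are illustrative
values, not requirements:

    # Load auto_explain and log plans of slow queries
    shared_preload_libraries = 'auto_explain'   # requires a server restart
    auto_explain.log_min_duration = '1s'        # log plans of queries slower than 1s
    auto_explain.log_format = 'json'            # pgmetrics also understands text format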
Locating the Log Files
pgmetrics will attempt to find the log file(s) at the following locations, in
this order:
- the path to a single log file specified on the command line with the
--log-file option
- all the files in the directory specified on the command line with the
--log-dir option
- the file reported by the pg_current_logfile() function, if available
- the most recent file in /var/log/postgresql
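For example, the two command-line forms above can be used like this; the
paths and the database name are placeholders:

    # point pgmetrics at a single log file
    pgmetrics --log-file=/var/lib/pgsql/data/log/postgresql-Mon.log mydb

    # or at a directory containing multiple log files
    pgmetrics --log-dir=/var/lib/pgsql/data/log mydb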
Note: this behavior changed between pgmetrics v1.9.0 and v1.10.0.
Specifying How Much to Collect
By default, pgmetrics will examine the last 5 minutes' worth of log file
content, since pgmetrics is intended to be invoked periodically (for example,
every 5 minutes) to collect metrics and information. You can change this
duration using the --log-span command-line option.
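As a sketch, a crontab entry that runs pgmetrics every 5 minutes and widens
the window to the last 10 minutes of logs might look like this; the output
format, output path and database name are placeholders:

    # collect metrics (including the last 10 minutes of logs) every 5 minutes
    */5 * * * *  pgmetrics --log-span=10 -f json -o /tmp/pgmetrics.json mydb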
Log Line Prefix
pgmetrics reads the log_line_prefix configuration setting directly from the
database. Any value for this setting is acceptable to pgmetrics, as long as it
includes one of %t (timestamp without milliseconds), %m (timestamp with
milliseconds) or %n (epoch timestamp). Additionally, it is highly recommended
to include %u (username) and %d (database name).
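For example, the following postgresql.conf setting meets these requirements;
it is one common choice, not the only acceptable one:

    # timestamp with milliseconds, PID, then user@database for session processes
    log_line_prefix = '%m [%p] %q%u@%d '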
Skipping Log File Processing
pgmetrics will attempt to read and process logs by default. If this behavior
is not desired, disable it using the --omit=log command-line option.
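For example (the database name is a placeholder):

    # collect everything except log file information
    pgmetrics --omit=log mydb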
CSV Logs
Starting with version 1.10.0, pgmetrics supports reading from CSV logs. If the
setting log_destination contains csvlog and the setting logging_collector is
enabled, then pgmetrics assumes that the log files are in CSV format.
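A minimal postgresql.conf sketch that produces CSV logs (changing
logging_collector requires a server restart):

    # write logs in CSV format via the logging collector
    logging_collector = on
    log_destination = 'csvlog'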