# Configure PostgreSQL logs for Flow Production Tracking Enterprise
Local installations of Flow Production Tracking are no longer offered. This documentation is intended only for those with existing instances of Flow Production Tracking Enterprise Docker.
This article explains how to configure PostgreSQL logging for Flow Production Tracking Enterprise. Tuning PostgreSQL logging is very useful for troubleshooting performance issues, bad query patterns, and database locks.
## Scope
This document is intended for Flow Production Tracking System Administrators (local installs).
The following procedure is valid for PostgreSQL version 15.
## PostgreSQL log configuration
PostgreSQL can be configured to log slow queries and locks, which is very helpful information when trying to diagnose performance issues. Flow Production Tracking is, in the end, a layer over PostgreSQL that lets users build queries. These queries are not always optimized, and unoptimized queries are usually what hurts Flow Production Tracking performance.
Enabling slow query logging and changing the log format for pgBadger is easy and only requires a PostgreSQL configuration reload (no downtime). The following steps configure PostgreSQL to output that information. In addition, they make your logs parsable by pgBadger, a useful PostgreSQL diagnostic tool.
## Assumptions
- The PostgreSQL version is 15
Become the root user.
sudo su -
Back up the current configuration file. Revert to this backup and restart PostgreSQL if things go wrong.
cp /var/lib/pgsql/15/data/postgresql.conf /var/lib/pgsql/15/data/postgresql.conf.bak
Edit the configuration.
vi /var/lib/pgsql/15/data/postgresql.conf
Change the following. You can use a different line prefix format, but make sure it stays compatible with pgBadger.
log_min_duration_statement = 500 # In ms
log_line_prefix = '%t [%p]: [%l-1] db=%d '
Log requests that are waiting for a lock for more than 1 second by default.
log_lock_waits = on
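The 1-second threshold comes from PostgreSQL's `deadlock_timeout` setting. As a sketch, if you wanted lock waits logged only after 2 seconds, you could also set the following (adjust the value to your environment):
deadlock_timeout = 2s # log_lock_waits uses this value as its wait threshold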
Reload the PostgreSQL configuration. This won't disturb current connections.
sudo su postgres
/usr/pgsql-15/bin/pg_ctl reload -D /var/lib/pgsql/15/data/
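Alternatively, the configuration can be reloaded from a database session using PostgreSQL's built-in `pg_reload_conf()` function:
psql -c "SELECT pg_reload_conf();" # Run as the postgres user; returns t on success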
## Further tuning
You can gradually reduce `log_min_duration_statement` down to 100 ms if the database server performance is not affected. The more data you log, the better your chances of detecting issues using the database logs.
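As a sketch of how that tuning could be applied without editing the file by hand, `ALTER SYSTEM` writes the override to `postgresql.auto.conf` and takes effect after a reload (run as the postgres superuser; the 250 ms value mirrors what we use on our production servers, see the FAQ below):
psql -c "ALTER SYSTEM SET log_min_duration_statement = '250ms';" # Written to postgresql.auto.conf
psql -c "SELECT pg_reload_conf();" # Apply without downtime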
## pgBadger
[pgBadger](https://github.com/darold/pgbadger) is a very helpful tool that analyzes PostgreSQL logs and produces reports highlighting possible performance issues. PostgreSQL logs need to be configured properly for pgBadger to work; the above procedure should have made your logs compatible with this tool.
## pgBadger installation
Install pgBadger on the database server:
sudo yum install -y pgbadger
Generate a report for a specific day (adjust the path if your `log_directory` setting differs):
sudo pgbadger /var/lib/pgsql/15/data/pg_log/postgresql-Mon.log
The resulting report will be written to `out.html` in the current directory.
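If you prefer the report in a specific location, pgBadger accepts an output path with `-o` (the path below is just an example):
sudo pgbadger -o /tmp/pgbadger-report.html /var/lib/pgsql/15/data/pg_log/postgresql-Mon.log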
## Report generation strategy
We recommend generating reports for completed days. Daily reports can even be automated, as in the cron sketch below. Daily reports are useful for identifying bad patterns at a high level.
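A minimal cron sketch, assuming the log path used above and a hypothetical `/var/reports` directory that the job can write to:
# Run at 04:00 and report on yesterday's log (crontab requires % to be escaped as \%)
0 4 * * * pgbadger -o /var/reports/pgbadger-$(date -d yesterday +\%F).html /var/lib/pgsql/15/data/pg_log/postgresql-$(date -d yesterday +\%a).log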
You can also `grep` the database logs for a specific time slice before feeding them to pgBadger, as in the sketch below. If there is a specific period in the day when your studio has issues, generating a report for that period only can be very useful.
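For example, with the `%t` timestamp prefix configured above, a one-hour slice could be extracted like this (the date and hour are placeholders; pgBadger reads from standard input when given `-`). pgBadger's `--begin`/`--end` options can achieve the same result:
grep "2024-05-06 14:" /var/lib/pgsql/15/data/pg_log/postgresql-Mon.log | pgbadger -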
## Understanding pgBadger
### Overview page
This page will help you identify your activity peaks. Remember that only queries slower than the `log_min_duration_statement` value are logged, and therefore only those appear in this report.
### Locks
Gives information about database locks. This information can be critical, as a main source of contention is database locks that prevent users from querying the database.
### Top
Information about the most expensive queries. Queries over 10 seconds are always bad and should be investigated. Indexes can be added by the Flow Production Tracking Operations team, but more often than not, these queries must be optimized at the application level. As a System Administrator, your role is not necessarily to understand these queries, but to find pointers to the users and scripts that could be doing harmful operations. Look at the variable values; they can give useful information about what is causing performance issues.
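Once a suspicious value turns up in a report, a simple next step is to grep the raw logs for it and inspect the full statements around each match (the value below is purely hypothetical):
grep -n "suspicious_value" /var/lib/pgsql/15/data/pg_log/postgresql-Mon.log # -n prints line numbers with each match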
More information can be found at [https://github.com/darold/pgbadger](https://github.com/darold/pgbadger).
## FAQ
### Will activating slow query logging hurt performance?
Logging does add load on disk I/O, but since not all queries are logged, the added load should not be significant. On our production servers, we log all queries slower than 250 ms. Nonetheless, we recommend keeping an eye on your server load in the days following the activation.
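A simple way to watch disk I/O during that period, assuming the sysstat package is installed:
iostat -x 5 # Extended per-device utilization statistics, refreshed every 5 seconds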
### Fine, I have a pgBadger report. What are the next steps?
Logs may seem cryptic at first, but the more you look at them, the more they'll talk to you. Try looking for values that hint at which page, API script, or user may be issuing harmful queries. The "Top" and "Locks" sections are particularly useful for finding performance issues and bottlenecks. You can also ask for help through [Autodesk Support](https://www.autodesk.com/support/contact-support).