How to Enable Slow Query Log


Nov 6, 2025 - 10:52

The Slow Query Log is one of the most powerful diagnostic tools available to database administrators, developers, and system engineers working with relational databases such as MySQL, MariaDB, and PostgreSQL. It records queries that take longer than a specified threshold to execute, providing critical insights into performance bottlenecks, inefficient indexing, and resource-heavy operations. Enabling the Slow Query Log is not merely a technical configuration step; it's a proactive strategy for maintaining database health, optimizing application responsiveness, and preventing system degradation under load.

Many applications suffer from slow page loads, timeouts, or intermittent failures that are ultimately rooted in poorly performing database queries. Without visibility into which queries are causing delays, troubleshooting becomes a game of guesswork. The Slow Query Log transforms this ambiguity into actionable data. By capturing the exact SQL statements, execution times, and resource usage, it empowers teams to identify and fix problematic queries before they impact end users.

This guide provides a comprehensive, step-by-step walkthrough on how to enable the Slow Query Log across multiple database systems. We'll cover configuration details, best practices for tuning thresholds, tools to analyze the logs, real-world examples of query optimization, and answers to common questions. Whether you're managing a small web application or a high-traffic enterprise system, understanding and leveraging the Slow Query Log is essential for sustainable performance.

Step-by-Step Guide

Enabling Slow Query Log in MySQL

MySQL is one of the most widely used relational databases, and enabling its Slow Query Log is straightforward but requires attention to configuration details. The process varies slightly depending on whether you're using MySQL 5.6 and earlier or MySQL 5.7 and later.

First, locate your MySQL configuration file. On most Linux systems, this is typically found at /etc/mysql/my.cnf or /etc/my.cnf. On systems using systemd, you may also find configuration in /etc/mysql/mysql.conf.d/mysqld.cnf. On Windows, the file is usually named my.ini and located in the MySQL installation directory.

Open the configuration file in a text editor with administrative privileges. Add or modify the following lines under the [mysqld] section:

slow_query_log = 1

slow_query_log_file = /var/log/mysql/mysql-slow.log

long_query_time = 2

log_queries_not_using_indexes = 1

Lets break down each directive:

  • slow_query_log = 1: Enables the Slow Query Log. Set to 0 to disable.
  • slow_query_log_file: Specifies the path and filename where the log will be written. Ensure the directory exists and the MySQL process has write permissions.
  • long_query_time: Defines the minimum execution time (in seconds) for a query to be logged. The default is 10 seconds; setting it to 1 or 2 is recommended for development and staging environments.
  • log_queries_not_using_indexes: Logs queries that do not use indexes, even if they execute quickly. This helps identify potential indexing issues before they become performance problems.
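Before restarting, the edited file can be sanity-checked with a short Python sketch. This is illustrative only: my.cnf is close enough to INI format for the standard-library configparser when the values are simple key = value pairs; real files using !include directives or bare flags would need extra handling.

```python
import configparser

# Sketch: verify the [mysqld] slow-log settings before restarting the server.
# The inline string stands in for reading /etc/mysql/my.cnf from disk.
MY_CNF = """
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
log_queries_not_using_indexes = 1
"""

cfg = configparser.ConfigParser(allow_no_value=True)  # my.cnf allows bare flags
cfg.read_string(MY_CNF)

section = cfg["mysqld"]
assert section.getint("slow_query_log") == 1, "slow query log is not enabled"
assert section.getfloat("long_query_time") <= 2, "threshold looks too high"
print("slow-log config OK:", section["slow_query_log_file"])
```

To check the real file, replace `read_string` with `cfg.read("/etc/mysql/my.cnf")`.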

After making changes, restart the MySQL service for the configuration to take effect:

sudo systemctl restart mysql

On some systems, you may need to use:

sudo systemctl restart mysqld

To verify that the Slow Query Log is active, connect to MySQL using the command-line client:

mysql -u root -p

Then run:

SHOW VARIABLES LIKE 'slow_query_log';

SHOW VARIABLES LIKE 'slow_query_log_file';

SHOW VARIABLES LIKE 'long_query_time';

If the values reflect your configuration, the log is enabled. You can also check the log file directly:

tail -f /var/log/mysql/mysql-slow.log
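For a feel of what analysis tools do with this file, here is a minimal Python sketch that pulls the Query_time and Rows_examined figures out of slow-log entries. It is illustrative, not production-ready: real entries carry extra "# ..." metadata lines that vary by server version.

```python
import re

# One "# Query_time: ..." header line precedes each logged statement.
ENTRY_RE = re.compile(
    r"# Query_time: (?P<qt>[\d.]+)\s+Lock_time: (?P<lt>[\d.]+)"
    r"\s+Rows_sent: (?P<rs>\d+)\s+Rows_examined: (?P<rx>\d+)"
)

def parse_slow_log(text):
    """Return a list of dicts with query_time, rows_examined, and statement."""
    entries, current = [], None
    for line in text.splitlines():
        m = ENTRY_RE.match(line)
        if m:
            current = {"query_time": float(m.group("qt")),
                       "rows_examined": int(m.group("rx"))}
        elif current is not None and line.strip() and not line.startswith(("#", "SET")):
            current["statement"] = line.strip()
            entries.append(current)
            current = None
    return entries

sample = """\
# Time: 2024-04-01T08:15:23.123456Z
# User@Host: app_user[app_user] @ localhost []
# Query_time: 6.789012  Lock_time: 0.000123 Rows_sent: 10  Rows_examined: 892345
SET timestamp=1712000123;
SELECT * FROM products WHERE category_id = 45;
"""

for e in parse_slow_log(sample):
    print(e["query_time"], e["rows_examined"], e["statement"])
```

In practice you would point this at the configured log file rather than an inline sample.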

Enabling Slow Query Log in MariaDB

MariaDB, a community-developed fork of MySQL, uses the same Slow Query Log configuration syntax. The steps are nearly identical to MySQL.

Open the MariaDB configuration file, typically located at /etc/mysql/mariadb.conf.d/50-server.cnf or /etc/my.cnf.d/server.cnf. Add the following under the [mysqld] section:

slow_query_log = 1

slow_query_log_file = /var/log/mariadb/mariadb-slow.log

long_query_time = 1

log_queries_not_using_indexes = 1

Ensure the log directory exists and is writable:

sudo mkdir -p /var/log/mariadb

sudo chown mysql:mysql /var/log/mariadb

Restart the service:

sudo systemctl restart mariadb

Verify the settings using the MariaDB client:

mysql -u root -p

SHOW VARIABLES LIKE 'slow_query_log%';

SHOW VARIABLES LIKE 'long_query_time';

Enabling Slow Query Log in PostgreSQL

PostgreSQL does not have a direct equivalent to MySQL's Slow Query Log, but it provides similar functionality through its log_min_duration_statement parameter. This setting logs any query that takes longer than the specified duration (in milliseconds).

Locate your PostgreSQL configuration file, typically named postgresql.conf. Its location varies by installation:

  • Ubuntu/Debian: /etc/postgresql/[version]/main/postgresql.conf
  • CentOS/RHEL: /var/lib/pgsql/[version]/data/postgresql.conf

Open the file and locate or add the following lines:

log_min_duration_statement = 1000

log_statement = 'none'

log_destination = 'stderr'

logging_collector = on

log_directory = '/var/log/postgresql'

log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'

Heres what each setting does:

  • log_min_duration_statement = 1000: Logs any statement taking longer than 1000 milliseconds (1 second). Adjust based on your performance expectations.
  • log_statement: Set to 'none' to avoid logging every query. You can also use 'ddl' or 'mod' for more targeted logging.
  • logging_collector = on: Captures log messages sent to stderr and redirects them into log files instead of leaving them on the server's stderr stream.
  • log_directory and log_filename: Define where logs are stored and how they are named.

Create the log directory if it doesn't exist:

sudo mkdir -p /var/log/postgresql

sudo chown postgres:postgres /var/log/postgresql

Restart PostgreSQL to apply changes:

sudo systemctl restart postgresql

To verify the configuration, connect to your database and run:

SHOW log_min_duration_statement;

SHOW logging_collector;

Check the log files in the specified directory:

ls -la /var/log/postgresql/

tail -f /var/log/postgresql/postgresql-*.log
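A short Python sketch shows how the duration can be extracted from such a log line; the exact prefix (timestamp, PID, user) depends on your log_line_prefix setting, so the sample line below is an assumption modeled on a typical configuration.

```python
import re

# Sample line as emitted when log_min_duration_statement fires.
LINE = ("2024-04-01 08:22:15 UTC [12345]: user=app_user,db=app "
        "LOG:  duration: 4820.321 ms  statement: SELECT * FROM user_settings;")

# The "duration: ... ms  statement: ..." tail is stable; the prefix is not.
m = re.search(r"duration: (?P<ms>[\d.]+) ms\s+statement: (?P<sql>.+)$", LINE)
if m and float(m.group("ms")) > 1000:
    print(f"slow ({m.group('ms')} ms): {m.group('sql')}")
```

Tools like pgBadger (covered below) do this parsing for you across whole log directories.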

Enabling Slow Query Log in SQL Server

Microsoft SQL Server does not have a native Slow Query Log, but it offers robust alternatives through Extended Events and Query Store.

Option 1: Using Extended Events

Extended Events is the modern, lightweight replacement for SQL Server Profiler. To capture slow queries:

  1. Open SQL Server Management Studio (SSMS).
  2. Expand Management → Extended Events → Sessions.
  3. Right-click and select New Session.
  4. Name the session (e.g., SlowQueries).
  5. Under Events Library, add the event sql_statement_completed.
  6. Click Configure next to the event and set a filter: duration > 5000000 (5 seconds in microseconds).
  7. Under Data Storage, select Ring Buffer or File Target. File Target is recommended for long-term analysis.
  8. Click OK and start the session.

Option 2: Using Query Store

Query Store (available in SQL Server 2016+) automatically captures query performance data. Enable it per database:

ALTER DATABASE [YourDatabaseName] SET QUERY_STORE = ON;

ALTER DATABASE [YourDatabaseName] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

Once enabled, navigate to the database → Query Store in SSMS to view top resource-consuming queries by duration, CPU, or I/O.

Best Practices

Set Appropriate Thresholds

The long_query_time (or equivalent) threshold should be tuned to your environment. A value too high (e.g., 10 seconds) may miss subtle performance issues. A value too low (e.g., 0.1 seconds) may flood the log with irrelevant data, making analysis difficult.

Recommendations:

  • Development/Testing: Set to 0.5–1 second to catch early issues.
  • Staging: Set to 1–2 seconds to simulate production behavior.
  • Production: Set to 2–5 seconds to avoid excessive logging while still capturing critical queries.

Monitor log volume over time and adjust thresholds accordingly. If logs grow beyond 1–2 GB per day, increase the threshold or implement log rotation.
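As a rough back-of-the-envelope check, expected log growth at a given threshold can be estimated from the rate of over-threshold queries. The numbers below are made-up assumptions for illustration, not measurements:

```python
# Rough sketch: estimate daily slow-log growth for a candidate threshold.
slow_queries_per_sec = 5   # assumed rate of queries crossing the threshold
avg_entry_bytes = 600      # assumed size of one entry (header lines + SQL text)

bytes_per_day = slow_queries_per_sec * avg_entry_bytes * 86_400
print(f"~{bytes_per_day / 1024**3:.1f} GiB/day")  # prints ~0.2 GiB/day
```

If the estimate lands well above 1–2 GB/day, raise the threshold or tighten rotation before enabling the log in production.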

Use Log Rotation

Slow Query Logs can grow rapidly, especially on high-traffic systems. Unmanaged logs can consume disk space and degrade performance.

On Linux systems, use logrotate to automate log rotation. Create a configuration file at /etc/logrotate.d/mysql-slow:

/var/log/mysql/mysql-slow.log {
    daily
    missingok
    rotate 7
    compress
    delaycompress
    notifempty
    create 640 mysql adm
    sharedscripts
    postrotate
        /usr/bin/mysqladmin flush-logs > /dev/null 2>&1 || true
    endscript
}

Test the configuration:

sudo logrotate -d /etc/logrotate.d/mysql-slow

Apply it:

sudo logrotate -f /etc/logrotate.d/mysql-slow

Separate Logs by Environment

Never use the same Slow Query Log file across development, staging, and production environments. Each environment has different traffic patterns and query behavior. Mixing logs makes analysis inaccurate and misleading.

Use distinct log files:

  • Production: /var/log/mysql/prod-slow.log
  • Staging: /var/log/mysql/stage-slow.log
  • Development: /var/log/mysql/dev-slow.log

This allows you to analyze performance trends independently and avoid contamination from non-production activity.

Enable Index Usage Logging

Always enable log_queries_not_using_indexes in MySQL/MariaDB. Queries that scan entire tables without indexes are often the most resource-intensive and easiest to fix. This setting helps you identify missing indexes before they cause production outages.

Be aware: This may increase log volume significantly. Use it selectively during performance tuning windows, then disable it once indexing is optimized.

Monitor Log File Permissions

Ensure the database user has write permissions to the log directory. If the MySQL or PostgreSQL process cannot write to the log file, the log will fail silently. Check ownership and permissions regularly:

ls -l /var/log/mysql/mysql-slow.log

The file should be owned by the database user (e.g., mysql or postgres) and writable by that user.

Integrate with Monitoring Tools

Manual log analysis is time-consuming. Integrate Slow Query Logs with monitoring platforms like Prometheus + Grafana, Datadog, or New Relic. Many tools can parse log files and visualize slow query trends over time.

For example, use pt-query-digest (from Percona Toolkit) to generate summary reports and feed them into a dashboard. Schedule it as a cron job:

0 2 * * * /usr/bin/pt-query-digest /var/log/mysql/mysql-slow.log > /var/log/mysql/slow-report-$(date +\%F).txt

Review Logs Regularly

Enable the log, but don't ignore it. Schedule weekly reviews of slow query reports. Assign ownership to a database administrator or senior developer. Treat slow queries as technical debt: address them proactively, not reactively.

Avoid Logging All Queries

While tempting, logging every query (log_queries_not_using_indexes = 1 combined with long_query_time = 0) is rarely practical in production. It generates massive volumes of data, consumes I/O, and makes analysis unmanageable. Use it only during targeted performance investigations.

Tools and Resources

Percona Toolkit pt-query-digest

pt-query-digest is the industry-standard tool for analyzing MySQL and MariaDB Slow Query Logs. It parses log files and generates a human-readable report ranking queries by total execution time, lock time, rows examined, and more.

Install it on Ubuntu/Debian:

sudo apt-get install percona-toolkit

On CentOS/RHEL:

sudo yum install percona-toolkit

Run it against your log:

pt-query-digest /var/log/mysql/mysql-slow.log

The output includes:

  • Top queries by total time
  • Query frequency
  • Rows examined vs. rows sent
  • Execution plan hints

Example output snippet:

# Query 1: 0.25 QPS, 0.20x concurrency, ID 0x1234567890ABCDEF at byte 12345
# This item is included in the report because it matches --limit.
# Scores: V/M = 1.11
# Time range: 2024-04-01T08:00:00 to 2024-04-01T09:00:00
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count        100     100
# Exec time    100    100s      1s      2s      1s      2s      0s      1s
# Lock time    100   100ms    50us    20ms     1ms     2ms     2ms     1ms
# Rows sent    100   10000       0      50     100      49       2      99
# Rows examine 100 1000000       0  100000  100000   99999       0   99999
# Query size   100  15.56k     155     155     155     155       0     155
# String:
# Databases    production
# Hosts        192.168.1.10
# Users        app_user
# Query_time distribution
#   1us
#  10us
# 100us
#   1ms
#  10ms
# 100ms
#    1s  ###########################################################
#  10s+
# Tables
#    SHOW TABLE STATUS LIKE 'orders'\G
#    SHOW CREATE TABLE orders\G
# EXPLAIN /*!50100 PARTITIONS*/
SELECT SUM(amount) FROM orders WHERE user_id = ? AND created_at > ?\G

This report immediately reveals that a single query is scanning 100,000 rows per execution, likely due to a missing index on user_id or created_at.

MySQL Workbench Performance Dashboard

MySQL Workbench includes a built-in Performance Dashboard that connects to live MySQL instances and displays slow queries in real time. It's ideal for interactive analysis during development.

Open MySQL Workbench → Connect to your server → Navigate to Performance → Performance Dashboard.

Under Slow Queries, you'll see a live list of queries with execution time, rows examined, and lock time. Click any query to view its execution plan and suggest indexes.

pgBadger PostgreSQL Log Analyzer

pgBadger is a fast, standalone log analyzer for PostgreSQL. It generates rich HTML reports from PostgreSQL logs, including slow queries, top functions, and connection patterns.

Install it via Perl CPAN:

cpan App::pgbadger

Or use package managers:

sudo apt-get install pgbadger

Generate a report:

pgbadger -f stderr /var/log/postgresql/postgresql-*.log -o /var/log/postgresql/report.html

Open report.html in a browser to view detailed visualizations, including top slow queries, query types, and duration trends.

Cloud-Based Solutions

For cloud-hosted databases, leverage native tools:

  • AWS RDS: Enable Enhanced Monitoring and use the Slow Query Log section in the RDS console. Export logs to S3 and analyze with Athena.
  • Google Cloud SQL: Use Cloud Logging to filter for slow queries and integrate with Looker Studio.
  • Microsoft Azure Database for MySQL/PostgreSQL: Enable Query Store and use the Query Performance Insight feature.

Custom Scripts and Automation

Write simple shell or Python scripts to automate log analysis. For example, a short Python script using a slow-log parsing library (the py-mysqlslowlog module here is illustrative; any parser exposing per-query attributes works the same way) can extract and alert on queries with high rows-examined counts:

import mysqlslowlog

for query in mysqlslowlog.parse('/var/log/mysql/mysql-slow.log'):
    if query.rows_examined > 10000:
        print(f"High rows examined: {query.query} | Rows: {query.rows_examined}")

Integrate this into your CI/CD pipeline or alerting system to notify developers when new slow queries are introduced.

Real Examples

Example 1: Missing Index on WHERE Clause

Scenario: A web application's product search page loads slowly during peak hours. Users report delays of 5–8 seconds.

Log Entry:

# Time: 2024-04-01T08:15:23.123456Z
# User@Host: app_user[app_user] @ localhost []
# Query_time: 6.789012  Lock_time: 0.000123 Rows_sent: 10  Rows_examined: 892345
SET timestamp=1712000123;
SELECT * FROM products WHERE category_id = 45 AND status = 'active' ORDER BY created_at DESC LIMIT 10;

Analysis: The query examines nearly 900,000 rows to return 10 results. This indicates a missing composite index on (category_id, status, created_at).

Fix: Add the index:

CREATE INDEX idx_products_category_status_created ON products (category_id, status, created_at);

Result: After the index is created, the same query now examines 15 rows and executes in 0.012 seconds.

Example 2: Query with Suboptimal JOIN

Scenario: A reporting dashboard loads slowly. The database server shows high CPU usage.

Log Entry:

# Time: 2024-04-01T09:30:45.678901Z
# User@Host: report_user[report_user] @ analytics-server []
# Query_time: 12.456789  Lock_time: 0.000000 Rows_sent: 5000  Rows_examined: 12000000
SET timestamp=1712004645;
SELECT u.name, o.total, p.name AS product_name
FROM users u
JOIN orders o ON u.id = o.user_id
JOIN products p ON o.product_id = p.id
WHERE o.created_at BETWEEN '2024-01-01' AND '2024-03-31';

Analysis: The query scans 12 million rows. The orders table lacks an index on created_at, forcing a full table scan. The JOINs are correct, but the filtering happens too late.

Fix: Add an index on orders(created_at) and consider partitioning the table by date if it's very large.

CREATE INDEX idx_orders_created ON orders (created_at);

Result: Query time drops from 12 seconds to 0.8 seconds. CPU usage on the server returns to normal.

Example 3: PostgreSQL Query Without Index on JSONB Field

Scenario: A microservice storing user preferences in a JSONB column experiences high latency.

Log Entry:

2024-04-01 08:22:15 UTC [12345]: [1-1] user=app_user,db=app,host=192.168.1.100 LOG: duration: 4820.321 ms statement: SELECT * FROM user_settings WHERE preferences @> '{"theme": "dark", "notifications": true}';

Analysis: The query uses a JSONB containment operator (@>) but lacks a GIN index on the preferences column.

Fix: Create a GIN index:

CREATE INDEX idx_user_settings_preferences_gin ON user_settings USING GIN (preferences);

Result: Query time reduces from 4.8 seconds to 8 milliseconds.

Example 4: N+1 Query Problem

Scenario: A CMS loads a blog post with comments. The page takes 4 seconds to render.

Log Entry (MySQL):

# Time: 2024-04-01T10:10:10.123456Z
# User@Host: webapp[webapp] @ frontend-server []
# Query_time: 0.012345  Lock_time: 0.000001 Rows_sent: 1  Rows_examined: 1
SET timestamp=1712007010;
SELECT * FROM posts WHERE id = 12345;

Repeated 50 times:

# Query_time: 0.009876  Lock_time: 0.000000 Rows_sent: 5  Rows_examined: 5
SET timestamp=1712007010;
SELECT * FROM comments WHERE post_id = 12345;

Analysis: This is a classic N+1 query problem. The application loads one post, then executes 50 separate queries to fetch comments instead of a single batched query. Each query is fast, but the cumulative round-trip time is high.

Fix: Modify the application code to fetch all comments in a single query:

SELECT * FROM comments WHERE post_id IN (12345);

Result: 50 queries reduced to 1. Page load time drops from 4 seconds to 0.3 seconds.
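The pattern and its fix can be sketched in a few lines of Python, using an in-memory SQLite database as a stand-in for the production schema; table names mirror the example above but the setup is otherwise hypothetical.

```python
import sqlite3

# Toy schema: 3 posts, 5 comments each.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, post_id INTEGER)")
post_ids = [1, 2, 3]
conn.executemany("INSERT INTO posts (id) VALUES (?)", [(i,) for i in post_ids])
conn.executemany("INSERT INTO comments (post_id) VALUES (?)",
                 [(i,) for i in post_ids for _ in range(5)])

# N+1 anti-pattern: one round-trip per post.
n_plus_1 = [conn.execute("SELECT * FROM comments WHERE post_id = ?",
                         (pid,)).fetchall()
            for pid in post_ids]

# Fix: a single query covering all posts with IN (...).
placeholders = ",".join("?" * len(post_ids))
batched = conn.execute(
    f"SELECT * FROM comments WHERE post_id IN ({placeholders})",
    post_ids).fetchall()

# Same rows either way; the batched form uses 1 query instead of N.
assert sum(len(rows) for rows in n_plus_1) == len(batched) == 15
```

Most ORMs expose this fix as eager loading (e.g., a "prefetch" or "includes" option) rather than hand-written IN clauses.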

FAQs

What is the difference between slow query log and general query log?

The Slow Query Log only records queries that exceed a specified execution time threshold. The General Query Log records every query executed by the server, regardless of performance. The General Query Log is useful for auditing and debugging but generates massive log files and should never be enabled in production for extended periods.

Can I enable Slow Query Log without restarting the database?

In MySQL and MariaDB, you can enable the Slow Query Log dynamically without restarting:

SET GLOBAL slow_query_log = 'ON';

SET GLOBAL long_query_time = 2;

SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';

However, changes to slow_query_log_file may require a restart on some versions. Always verify the setting with SHOW VARIABLES.

In PostgreSQL, you can reload the configuration without restarting:

SELECT pg_reload_conf();

This applies changes to postgresql.conf without interrupting connections.

Why is my Slow Query Log empty even after enabling it?

Common reasons include:

  • The long_query_time threshold is too high for your workload.
  • The log file path is incorrect or not writable.
  • The database has no slow queries; your application may already be well-optimized.
  • You're querying a different instance than the one you configured.

Test by running a deliberately slow query:

SELECT SLEEP(5);

If it appears in the log, your configuration is correct.

How often should I analyze the Slow Query Log?

For production systems, analyze logs weekly. For high-traffic applications, use automated tools to generate daily reports and alert on new or regressing queries. In development, analyze logs after every major code deployment.

Does enabling Slow Query Log affect database performance?

Yes, but minimally. Writing to a log file adds slight I/O overhead. On modern SSDs and well-tuned systems, this impact is negligible (typically less than 1% CPU usage). The performance cost of not identifying slow queries far outweighs the cost of logging.

Can I use Slow Query Log with replication?

Yes. In MySQL, you can enable log_slow_slave_statements to log slow queries executed on replica servers. This helps identify replication lag caused by slow queries on replicas.

What should I do if a query is slow but uses an index?

Even with an index, queries can be slow due to:

  • Using functions on indexed columns (e.g., WHERE YEAR(date_column) = 2024)
  • Index selectivity issues (e.g., indexing a column with only 2 distinct values)
  • Large result sets requiring sorting or temporary tables
  • Lock contention or I/O bottlenecks

Use EXPLAIN or EXPLAIN ANALYZE to inspect the execution plan. Look for Using filesort, Using temporary, or high rows values.

Is it safe to delete old Slow Query Log files?

Yes. Once you've analyzed and archived the logs, you can safely delete them. Use log rotation to automate this process. Never delete logs while the database is actively writing to them; always rotate or restart the service first.

Conclusion

Enabling the Slow Query Log is not a one-time task; it's a continuous practice essential for maintaining high-performance database systems. Whether you're running MySQL, MariaDB, PostgreSQL, or SQL Server, the ability to capture, analyze, and act on slow queries transforms your approach to performance from reactive to proactive.

This guide has walked you through the configuration steps across multiple platforms, emphasized best practices for log management, introduced powerful analysis tools like pt-query-digest and pgBadger, and demonstrated real-world examples where identifying a single slow query led to dramatic performance gains.

The most important takeaway: slow queries are symptoms, not root causes. They reveal deeper issues: missing indexes, inefficient joins, application-level anti-patterns, or poor schema design. By regularly reviewing your Slow Query Log, you don't just fix queries; you improve your entire system's architecture.

Start small: enable the log in your staging environment, set a reasonable threshold, and run a weekly report. Gradually extend the practice to production. Over time, you'll reduce latency, improve user satisfaction, and build more resilient applications. The Slow Query Log isn't just a diagnostic tool; it's your database's early warning system. Use it wisely.