How To Read the Query Logs

Henry Hwang -

There is already some documentation regarding query logging, but here we'll focus on understanding the query log entries and how you can use them to identify long-running queries.

Why Bother?

Although the UI is useful in identifying potential query issues and performing historical comparisons, reviewing the query logs may uncover issues that don't show up in the UI. 

Query Log

By default, the ClustrixDB logs are located in the following directory:


Each query log entry is timestamped and categorized as:

  • SLOW: query execution time exceeded the global variable session_log_slow_threshold_ms value (default is 20000 ms); enabled by default with global variable session_log_slow_queries.
  • DDL: query is a DDL (CREATE, DROP, ALTER) or a SET GLOBAL; enabled by default with global variable session_log_ddl.
  • BAD: query reads more rows than were returned which may indicate a bad plan or missing index; disabled by default with global variable session_log_bad_queries.
  • SQLERR: a syntax error, timeout, permission denied, etc.; enabled by default with global variable session_log_error_queries.
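As a quick sanity check, you can count how many entries of each category a log contains. A minimal grep sketch (the sample file here is fabricated from the entry format shown later in this article; point the grep at your real query.log instead):

```shell
# Build a tiny sample log to demonstrate; replace /tmp/query.log.sample
# with your actual log, e.g. /data/clustrix/log/query.log.
cat > /tmp/query.log.sample <<'EOF'
2014-04-28 00:00:00.630935 UTC 2014 clxnode001 clxnode: INSTR SLOW SID:3756073989 db=test user=user1@ ac=Y sql="delete from test_tbl" time 220365.5ms
2014-04-28 00:00:05.918485 UTC 2014 clxnode001 clxnode: INSTR DDL SID:3751805957 db=#undef user=root@localhost ac=N sql="trim binlog" time 1236.3ms
EOF

# Count log entries per category (SLOW, DDL, BAD, SQLERR)
grep -o 'INSTR [A-Z]*' /tmp/query.log.sample | sort | uniq -c
```

This gives a rough profile of what the logger has been recording before you dig into individual entries.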

Global Variables

Query logging is governed by these global variables (defaults shown):

mysql> show global variables like 'session_log%';
+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| session_log_bad_queries        | false |
| session_log_bad_read_ratio     |   100 |
| session_log_bad_read_threshold |  4000 |
| session_log_ddl                |  true |
| session_log_error_queries      |  true |
| session_log_slow_queries       |  true |
| session_log_slow_threshold_ms  | 20000 |
+--------------------------------+-------+
7 rows in set (0.00 sec)

From v5.2 onwards, measurement units are included in variable and field names to make the units of their values explicit. For example, the global variable session_log_slow_threshold_ms was called session_log_slow_threshold prior to v5.2.

Query Log Entry

Each query log entry is made up of the following fields:

[timestamp] [nodename] [process name] [category] [SID] [db] [user] [ac] [xid] [sql] [status] [execution time] [reads] [inserts] [deletes] [updates] [counts] [rows_read] [forwards] [rows_output] [semaphore_matches] [runtime] [attempts]

  • Timestamp: Includes the timezone. It is extremely important to have the clocks in sync on all the nodes.
  • Hostname: Name of the node where the entry was logged.
  • Process name: The Clustrix process name, "clxnode". Will be "real" on the legacy appliance.
  • Category: Type of entry in the log. SLOW, DDL, BAD or SQLERR.
  • SID: session ID. Useful when trying to identify a particular session with the queries that were logged.
  • DB: database name the query was run from.
  • User: User executing the query. If you're using statement-based replication, you may want to search for the replication account when troubleshooting statements from the master.
  • AC: auto-commit. This is useful to determine whether the query was in an explicit transaction. Note that DDLs will always have ac=N, even when they are not in an explicit transaction.
  • XID: transaction ID. Useful to link a session to an XID when troubleshooting locking issues.
  • SQL: full query, unless the query exceeds the 4KB statement buffer; the logger will then insert ellipses in the middle of the query and remove part of the query to fit the 4KB limit.
  • Status: result of the query like rows affected or error message.
  • Time: total time from when the query was received until the results were returned, including compilation and execution. This is the value to look at for SLOW queries.

The rest are internal counters which may be of interest in some troubleshooting exercises.
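Individual fields can be pulled out of an entry with standard text tools. A rough sed sketch (the entry is abbreviated from the samples that follow):

```shell
# An abbreviated entry in the format described above
cat > /tmp/entry.sample <<'EOF'
sql="select ..." [Ok: 1 rows selected] time 1672.7ms; reads: 3; inserts: 1; rows_read: 1485493; forwards: 4; rows_output: 3; runtime: 1119399218; attempts: 1
EOF

# Extract the execution time and rows_read from each entry
sed 's/.*time \([0-9.]*\)ms.*rows_read: \([0-9]*\);.*/time_ms=\1 rows_read=\2/' /tmp/entry.sample
# → time_ms=1672.7 rows_read=1485493
```

The same pattern extends to any of the other counters if a troubleshooting exercise calls for them.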

Sample Log Entries

Here are some sample log entries:

2014-04-28 00:00:00.630935 UTC 2014 clxnode001 clxnode: INSTR SLOW SID:3756073989 db=test user=user1@ ac=Y xid=535eff740597800a sql="delete from test_tbl" [Ok: 1103697 rows deleted] time 220365.5ms; reads: 1103700; inserts: 3311091; deletes: 6622182; updates: 0; counts: 1103697; rows_read: 2207394; forwards: 9933282; rows_output: 2; semaphore_matches: 0; runtime: 80800528947; attempts: 1

2014-04-28 00:00:05.918485 UTC 2014 clxnode001 clxnode: INSTR DDL SID:3751805957 db=#undef user=root@localhost ac=N xid=535eff796dffe80a sql="trim binlog `binlogtest` before file 'binlogtest.010086'" [no error] time 1236.3ms; reads: 0; inserts: 0; deletes: 0; updates: 0; counts: 0; rows_read: 0; forwards: 0; rows_output: 0; semaphore_matches: 0; runtime: 0; attempts: 1

2014-04-28 00:00:00.988627 UTC 2014 clxnode001 clxnode: INSTR SQLERR SID:3737319429 db=test user=user2@ ac=Y xid=535eff74fac4780a sql="SELECT ID INTO ID_Rule FROM test2_tbl WHERE ID = ID2" [NOT FOUND] time 0.7ms; reads: 3; inserts: 1; deletes: 0; updates: 0; counts: 0; rows_read: 4; forwards: 3; rows_output: 2; semaphore_matches: 0; runtime: 84060; attempts: 1

2014-04-28 00:01:06.780817 UTC 2014 clxnode001 clxnode: INSTR BAD SID:3737319429 db=test user=user2@ ac=Y xid=535effb51a2a580a sql="SELECT IFNULL(ID, 0) AS ID, COL2, IFNULL(DATE1,'2014-01-01'), IFNULL(COL4, 0), IFNULL(Code,''), IFNULL(Date2, '2050-01-01'), IFNULL(Date3,'2014-01-01'), IFNULL(Col8, '') FROM test3_tbl WHERE ID = OtherID" [Ok: 1 rows selected] time 1672.7ms; reads: 3; inserts: 1; deletes: 0; updates: 0; counts: 1; rows_read: 1485493; forwards: 4; rows_output: 3; semaphore_matches: 0; runtime: 1119399218; attempts: 1


Some observations gleaned from the sample entries above:

  • The first query (delete from test_tbl) has been logged as SLOW because it took 220 secs (recorded as 220365.5ms) to delete 1103697 rows. When troubleshooting replication slave lags, you may want to look for large "rows deleted" or "rows updated" in the MASTER query logs if you suspect large writes to be the cause of the slave lag.
  • Note that ac=N on the second log entry as trim binlog is a DDL, but other statements could also be in an explicit transaction when started with START TRANSACTION or BEGIN.
  • Entries 3 and 4 are from the same session as they have the same session ID.
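Since the SID ties entries together, grepping for a session ID is a quick way to see everything one session did. A minimal sketch (the sample file is fabricated from the entries above; run the grep against your collected log instead):

```shell
# Fabricated, abbreviated log lines for demonstration
cat > /tmp/q.log.sample <<'EOF'
... INSTR SQLERR SID:3737319429 db=test user=user2@ sql="SELECT ID INTO ID_Rule ..." time 0.7ms
... INSTR BAD SID:3737319429 db=test user=user2@ sql="SELECT IFNULL(ID, 0) ..." time 1672.7ms
... INSTR SLOW SID:3756073989 db=test user=user1@ sql="delete from test_tbl" time 220365.5ms
EOF

# All entries logged for one session
grep 'SID:3737319429' /tmp/q.log.sample
```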

Collecting the Log Entries

With a distributed database, each node will only log what it is processing so you will need to look at the query logs from all of the nodes to identify long running queries.

It's generally easier to work with the log entries in one location, so use the clx utility to gather them from all of the nodes.

For the legacy appliance, the "uncontinued" script is also useful, as longer queries will be split across lines with "---continued---". The uncontinued utility strips out this string and rejoins the lines to make the log entries more readable.

The following is an example of what you may want to run to collect the entries. Replace the logfile names and date/timestamp accordingly:

cd /data/clustrix/  # or some other location like /tmp that has sufficient space
/opt/clustrix/bin/clx cmd "grep '^YYYY-MM-DD' /data/clustrix/log/query.log" > q.log.todaysdate
# For the legacy appliance the log path is slightly different:
clx cmd "grep '^YYYY-MM-DD' /var/log/query.log" > q.log.todaysdate
cat q.log.todaysdate | ./uncontinued > q.log.todaysdate.uc

Identifying Slow Queries

With the extracted query log from above, you can parse out the longest running queries with:

export TMPDIR=/clustrix  # sort needs temp space; for the appliances, point TMPDIR at /clustrix
cat q.log.todaysdate.uc | sed 's/^.*time \([0-9]*\).*$/\1 &/' | sort -rn > longrunningq.todaysdate

The longest running queries would be at the top of the output file. Here's a sample output after processing the query logs for long running queries:

103166 Jul 11 17:47:58.330347 UTC 2014 clxnode1 real: INSTR SLOW SID:2464620545 db=test user=user1@ ac=Y xid=53c1881e3c271802 sql="select col1, col2, col3, col4, col5, col6, col7, col8, col9 from tbl12 where col1>25 and col2=0 and col3<>200" [Ok: 0 rows selected] time 103166.3ms; reads: 5; inserts: 1; deletes: 0; updates: 0; counts: 0; rows_read: 1154141; forwards: 5; rows_output: 2; semaphore_matches: 0; runtime: 4921071027; attempts: 1
58055 Jul 11 17:25:39.453805 UTC 2014 clxnode1 real: INSTR SLOW SID:2464048129 db=test user=user1@ ac=Y xid=53c1831077c1f802 sql="select col1, col2, col3, col4, col5, col6, col7, col8, col9, col10, col11, col12, col13 from tbl2 where col1 like '123456' and date(col2) = '2014-07-11'" [Ok: 0 rows selected] time 58055.0ms; reads: 5; inserts: 1; deletes: 0; updates: 0; counts: 0; rows_read: 11797794; forwards: 5; rows_output: 2; semaphore_matches: 0; runtime: 40765655404; attempts: 1
55666 Jul 11 17:46:10.868463 UTC 2014 clxnode1 real: INSTR SLOW SID:2445069313 db=test user=user1@ ac=Y xid=53c187e245f42002 sql="select col1, col2, col3, col4, col5, col6, col7, col8, col9, col10, col11, col12, col13, col4, col15, col16, col17, col18, col19, col20, col21, col22, col23, col24, col25, col26, col27, col28, col29, col30, col31, col32, col33, col34, col35, col36, col37, col38, col39, col40, col41, col42, col43, col44, col45, col46, col47, col48, col49, col50, col51, col52, col53, col54, col55, col56, col57 from tbl3 where col1='1234567'" [Ok: 1 rows selected] time 55666.1ms; reads: 5; inserts: 1; deletes: 0; updates: 0; counts: 1; rows_read: 11798489; forwards: 6; rows_output: 3; semaphore_matches: 0; runtime: 125292702595; attempts: 1


Once you have identified the long-running queries, you will need to determine why they are slow. There could be various reasons for this:

  • heavy system load
  • poor data distribution
  • missing indexes
  • network latencies
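For the missing-index case in particular, a large gap between rows_read and rows_output is the telltale sign (the same signal the BAD category uses). A rough awk sketch over an uncontinued log, with an arbitrary ratio threshold of 1000 (tune to taste; the sample file is fabricated):

```shell
# Fabricated, abbreviated uncontinued log entries
cat > /tmp/q.log.uc.sample <<'EOF'
query1 sql="select ... where ID = OtherID" time 1672.7ms; rows_read: 1485493; forwards: 4; rows_output: 3; attempts: 1
query2 sql="select ... where pk = 5" time 0.7ms; rows_read: 4; forwards: 3; rows_output: 2; attempts: 1
EOF

# Print entries that read over 1000x more rows than they output
awk '{
  if (!match($0, /rows_read: [0-9]+/)) next
  read = substr($0, RSTART + 11, RLENGTH - 11) + 0
  if (!match($0, /rows_output: [0-9]+/)) next
  out = substr($0, RSTART + 13, RLENGTH - 13) + 0
  if (out > 0 && read / out > 1000) print
}' /tmp/q.log.uc.sample
```

Only query1 is printed here, since it read roughly half a million rows for each row it returned.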

We may cover these topics in future postings.

Best Practice

The best practice would be to use both the UI and the query log processing to identify possible sources of performance issues.

You may want to set up a nightly cron job that collects the previous day's query log entries, parses out the SLOW, BAD and SQLERR entries, and lets you review them daily, just to make sure that your application or users aren't loading the cluster with poor queries. Fixing poorly performing queries frees up system resources to process more queries and makes the cluster more responsive.
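A sketch of the filtering step for such a nightly job, assuming the previous day's entries have already been collected with clx as shown earlier (the sample file and the script path in the crontab line are hypothetical):

```shell
# /tmp/q.log.yesterday stands in for the file produced by the clx grep step
cat > /tmp/q.log.yesterday <<'EOF'
entry1 clxnode: INSTR SLOW SID:1 sql="..." time 30000.1ms
entry2 clxnode: INSTR DDL SID:2 sql="..." time 12.0ms
entry3 clxnode: INSTR SQLERR SID:3 sql="..." time 0.7ms
EOF

# Keep only the categories worth a daily review
grep -E 'INSTR (SLOW|BAD|SQLERR)' /tmp/q.log.yesterday > /tmp/q.log.yesterday.review
cat /tmp/q.log.yesterday.review

# Example crontab entry (hypothetical script wrapping the collection + grep):
# 30 0 * * * /usr/local/bin/query_log_review.sh
```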


Henry Hwang:
    With version 6 of ClustrixDB, the query log entries have some additional information regarding how often and how long a query has been waiting to acquire a lock.  Here's a part of a query log entry from an update statement:

    time 350374.9ms; reads: 434; inserts: 1093; deletes: 0; updates: 434; counts: 217; rows_read: 434; forwards: 1942; broadcasts: 0; rows_output: 191; semaphore_matches: 0; runtime_ns: 40999258; bm_fixes: 3015; bm_loads: 0; bm_waittime_ns: 0; lockman_waits: 110; lockman_waittime_ms: 2905551; trxstate_waits: 2; trxstate_waittime_ms: 1; wal_perm_waittime_ms: 0; bm_perm_waittime_ms: 0; attempts: 8

    Note that this update query took 350 seconds to execute, but out of that elapsed time, the query was waiting 290 seconds to acquire the locks.  It also tried to acquire the lock 110 times before being successful.

    You can look for query log entries that don't have "lockman_waits: 0" and that might help narrow down what queries and tables are being affected.

    You can see whether there is any long running query that is blocking these other queries from acquiring locks.
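That filter can be expressed directly with grep; a minimal sketch over a fabricated v6-style entry:

```shell
# Fabricated, abbreviated v6 log entries
cat > /tmp/q.log.v6.sample <<'EOF'
update1 sql="update t1 set ..." time 350374.9ms; lockman_waits: 110; lockman_waittime_ms: 2905551; attempts: 8
update2 sql="update t2 set ..." time 12.3ms; lockman_waits: 0; lockman_waittime_ms: 0; attempts: 1
EOF

# Entries that waited on locks at least once
grep 'lockman_waits:' /tmp/q.log.v6.sample | grep -v 'lockman_waits: 0;'
```

Here only update1 survives the filter, which matches the lock-wait behavior described above.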
