MariaDB/query performance
Here is a recipe for measuring query performance in order to compare two different servers, or the same server at two different points in time or with two different MySQL/MariaDB versions:
Gathering data
Run the mysql command line client:
# mysql --defaults-file=/root/.my.cnf
SET @slow_query_log_file := @@GLOBAL.slow_query_log_file;
SET @log_slow_rate_limit := @@GLOBAL.log_slow_rate_limit;
SET @long_query_time := @@GLOBAL.long_query_time;
SET @log_slow_verbosity := @@GLOBAL.log_slow_verbosity;
SET @slow_query_log := @@GLOBAL.slow_query_log;
-- save the current values to a file, just in case we drop the connection
SELECT @slow_query_log, @slow_query_log_file, @long_query_time, @log_slow_rate_limit, @log_slow_verbosity INTO OUTFILE '/tmp/variables.txt';
SET GLOBAL slow_query_log_file := '/tmp/slow.log';
SET GLOBAL slow_query_log := ON;
SET GLOBAL log_slow_rate_limit := 20; -- set an appropriate rate: only 1 in every log_slow_rate_limit queries will be logged
SET GLOBAL long_query_time := 0; -- we log all queries
SET GLOBAL log_slow_verbosity := 'query_plan'; -- extended logging
...
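To confirm the new settings took effect, you can read them back; a quick sanity check, for example:
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';
SHOW GLOBAL VARIABLES LIKE 'long_query_time';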
In another session, we now monitor the size of /tmp/slow.log so that it does not grow disproportionately.
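One simple way to keep an eye on it, for example:
watch -n 60 'ls -lh /tmp/slow.log'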
I found that a rate of 20 for regular hosts (and 100 for busy hosts), gathered over 24 hours, is a good value.
If the client application uses some kind of connection pooling, you may want to restart at least one client application/connection pool/ORM: session-scoped settings such as long_query_time only apply to connections established after the change, so long-lived pooled connections will keep logging with the old values.
After some time passes (e.g. 10 GB of queries), we revert the changes:
SET GLOBAL slow_query_log := @slow_query_log;
SET GLOBAL long_query_time := @long_query_time;
SET GLOBAL log_slow_rate_limit := @log_slow_rate_limit;
SET GLOBAL slow_query_log_file := @slow_query_log_file;
SET GLOBAL log_slow_verbosity := @log_slow_verbosity;
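If the original connection was dropped and the session variables above were lost, the previous values can still be recovered from the file we saved earlier and set back by hand:
cat /tmp/variables.txt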
Analysing data
If you do not want to affect your production server, copy the file /tmp/slow.log to a server where you can max out CPU usage without problems (pt-query-digest only uses one core, though).
I recommend compressing this file before sending it away: compressing it with something like bzip2 can shrink it about 10x!
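For example:
bzip2 -9 /tmp/slow.log
This replaces the file with /tmp/slow.log.bz2 (add -k to keep the original).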
Important: never take the file out of the cluster, as it will probably contain sensitive data (query contents).
To run pt-query-digest, just write:
pt-query-digest --no-version-check --limit="100%" slow.log > /tmp/pt-query-digest.txt
If you have a compressed file, you do not need to decompress it to disk first; you can stream it (pv also shows a progress bar):
pv slow.log.bz2 | bzip2 -d -c | pt-query-digest --no-version-check --limit="100%" > /tmp/pt-query-digest.txt
This new file, pt-query-digest.txt, will contain a report of all logged queries, with their frequency, performance and statistics.
This log can also be used as input for other tools, such as pt-upgrade.
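As a sketch, pt-upgrade can replay the logged queries on two hosts and compare their results and timings; host1 and host2 below are placeholders for the two servers being compared:
pt-upgrade slow.log h=host1 h=host2 > /tmp/pt-upgrade.txt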
Reports generated
- Parser Cache (2015-05-25)
- External Storage (2015-06-04)
- Core Production: