How To Measure MySQL Query Performance with mysqlslap
Introduction
MySQL comes with a handy little diagnostic tool called mysqlslap that's been around since version 5.1.4. It's a benchmarking tool that can help DBAs and developers load test their database servers.
mysqlslap can emulate a large number of client connections hitting the database server at the same time. The load testing parameters are fully configurable and the results from different test runs can be used to fine-tune database design or hardware resources.
In this tutorial we will learn how to use mysqlslap to load test a MySQL database with some basic queries and see how benchmarking can help us fine-tune those queries. After some basic demonstrations, we will run through a fairly realistic test scenario where we create a copy of an existing database for testing, glean queries from a log, and run the test from a script.
The commands, packages, and files shown in this tutorial were tested on CentOS 7. The concepts remain the same for other distributions.
What size server should I use?
If you're interested in benchmarking a specific database server, you should test on a server with the same specifications and with an exact copy of your database installed.
If you want to run through this tutorial for learning purposes and execute every command in it, we recommend at least a 2 GB Droplet. As the commands in this tutorial are meant to tax the server, you may find that they time out on a smaller server.
The sample output in this tutorial was produced in a variety of ways to optimize the examples for teaching.
Step One — Installing MySQL Community Server on a Test System
We will begin by installing a fresh copy of MySQL Community Server on a test server. You should not run any commands or queries from this tutorial on a production database server.
These tests are meant to stress the test server and could cause lag or downtime on a production server. This tutorial was tested with the following environment:
- CentOS 7
- Commands executed by a sudo user
- 2 GB Droplet recommended; keep in mind that the benchmark results shown in this tutorial were produced for teaching purposes and do not reflect specific DigitalOcean benchmarks
First, we will create a directory to hold all the files related to this tutorial. This will help keep things tidy. Navigate into this directory:
sudo mkdir /mysqlslap_tutorial
cd /mysqlslap_tutorial
Next, we will download the MySQL Community Release yum repository. The repository we are downloading is for Red Hat Enterprise Linux 7 which works for CentOS 7:
sudo wget http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm
Next, we can run the rpm -Uvh command to install the repository:
sudo rpm -Uvh mysql-community-release-el7-5.noarch.rpm
Check that the repositories have been installed by looking at the contents of the /etc/yum.repos.d folder:
sudo ls -l /etc/yum.repos.d
The output should look like this:
-rw-r--r--. 1 root root 1612 Jul 4 21:00 CentOS-Base.repo
-rw-r--r--. 1 root root 640 Jul 4 21:00 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root 1331 Jul 4 21:00 CentOS-Sources.repo
-rw-r--r--. 1 root root 156 Jul 4 21:00 CentOS-Vault.repo
-rw-r--r--. 1 root root 1209 Jan 29 2014 mysql-community.repo
-rw-r--r--. 1 root root 1060 Jan 29 2014 mysql-community-source.repo
We can also check that the correct MySQL release is enabled for installation:
sudo yum repolist enabled | grep mysql
In our case, MySQL 5.6 Community Server is what we wanted:
mysql-connectors-community/x86_64 MySQL Connectors Community 10
mysql-tools-community/x86_64 MySQL Tools Community 6
mysql56-community/x86_64 MySQL 5.6 Community Server 64
Install the MySQL Community Server:
sudo yum install mysql-community-server
Once the process completes, let's check the components installed:
sudo yum list installed | grep mysql
The list should look like this:
mysql-community-client.x86_64 5.6.20-4.el7 @mysql56-community
mysql-community-common.x86_64 5.6.20-4.el7 @mysql56-community
mysql-community-libs.x86_64 5.6.20-4.el7 @mysql56-community
mysql-community-release.noarch el7-5 installed
mysql-community-server.x86_64 5.6.20-4.el7 @mysql56-community
Next we need to make sure the MySQL daemon is running and starts automatically when the server boots. Check the status of the mysqld daemon:
sudo systemctl status mysqld.service
If it's stopped, it will show this output:
mysqld.service - MySQL Community Server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled)
Active: inactive (dead)
Start the service:
sudo systemctl start mysqld.service
Make sure it is configured to auto-start at boot time:
sudo systemctl enable mysqld.service
Finally, we have to secure MySQL:
sudo mysql_secure_installation
This will bring up a series of prompts; we'll show them below along with the answers you should enter. At the beginning there is no password for the MySQL root user, so just press Enter.
At the prompts, you'll need to provide a new, secure root password of your own choosing. You should answer y to set the root password, remove the anonymous database user account, disallow remote root login, drop the test database, and reload the privilege tables:
...
Enter current password for root (enter for none):
OK, successfully used password, moving on...
...
Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
...
Remove anonymous users? [Y/n] y
... Success!
...
Disallow root login remotely? [Y/n] y
... Success!
Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
...
Reload privilege tables now? [Y/n] y
... Success!
Cleaning up...
We can now connect to the database and make sure everything is working:
sudo mysql -h localhost -u root -p
Enter the root MySQL password you just set at the prompt. You should see output like the following:
Enter password:
Welcome to the MySQL monitor....
mysql>
At the mysql> prompt, enter the command to view all of your databases:
show databases;
You should see output like the following:
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)
Finally, let's create a user account called sysadmin. This account will be used to log in to MySQL instead of the root user. Be sure to replace mypassword with your own password for this user. We will also grant all privileges to this account. At the MySQL prompt, enter these commands:
create user sysadmin identified by 'mypassword';
Output:
Query OK, 0 rows affected (0.00 sec)
Grant the privileges:
grant all on *.* to sysadmin;
Output:
Query OK, 0 rows affected (0.01 sec)
Let's go back to the operating system prompt for now:
quit;
Output:
Bye
Step Two — Installing a Sample Database
Next, we need to install a sample database for testing. This database is called employees and it's freely accessible from the MySQL web site. The database can also be downloaded from Launchpad. The employees database was developed by Patrick Crews and Giuseppe Maxia. The original data was created by Fusheng Wang and Carlo Zaniolo at Siemens Corporate Research.
We are choosing the employees database because it features a large data set. The database structure is simple enough: it has only six tables, but the data it contains includes more than 300,000 employee records (the salaries table itself has nearly three million rows). This will help us emulate a more realistic production workload.
First, let's make sure we're in the /mysqlslap_tutorial directory:
cd /mysqlslap_tutorial
Download the latest version of the employees sample database:
sudo wget https://launchpad.net/test-db/employees-db-1/1.0.6/+download/employees_db-full-1.0.6.tar.bz2
Install the bzip2 tool so we can unzip the archive:
sudo yum install bzip2
Unzip the database archive. This will take a minute. We are doing it in two steps here:
sudo bzip2 -dfv employees_db-full-1.0.6.tar.bz2
sudo tar -xf employees_db-full-1.0.6.tar
The contents will be uncompressed into a separate, new directory called employees_db. We need to navigate into this directory to run the query that installs the database. The contents include a README document, a change log, data dumps, and various SQL query files that will create the database structures:
cd employees_db
ls -l
Here's what you should see:
-rw-r--r--. 1 501 games 752 Mar 30 2009 Changelog
-rw-r--r--. 1 501 games 6460 Oct 9 2008 employees_partitioned2.sql
-rw-r--r--. 1 501 games 7624 Feb 6 2009 employees_partitioned3.sql
-rw-r--r--. 1 501 games 5660 Feb 6 2009 employees_partitioned.sql
-rw-r--r--. 1 501 games 3861 Nov 28 2008 employees.sql
-rw-r--r--. 1 501 games 241 Jul 30 2008 load_departments.dump
-rw-r--r--. 1 501 games 13828291 Mar 30 2009 load_dept_emp.dump
-rw-r--r--. 1 501 games 1043 Jul 30 2008 load_dept_manager.dump
-rw-r--r--. 1 501 games 17422825 Jul 30 2008 load_employees.dump
-rw-r--r--. 1 501 games 115848997 Jul 30 2008 load_salaries.dump
-rw-r--r--. 1 501 games 21265449 Jul 30 2008 load_titles.dump
-rw-r--r--. 1 501 games 3889 Mar 30 2009 objects.sql
-rw-r--r--. 1 501 games 2211 Jul 30 2008 README
-rw-r--r--. 1 501 games 4455 Mar 30 2009 test_employees_md5.sql
-rw-r--r--. 1 501 games 4450 Mar 30 2009 test_employees_sha.sql
Run this command to connect to MySQL and run the employees.sql script, which will create the database and load the data:
sudo mysql -h localhost -u sysadmin -p -t < employees.sql
At the prompt, enter the password you created for the sysadmin MySQL user in the previous section.
The process output will look like this. It will take a minute or so to run:
+-----------------------------+
| INFO |
+-----------------------------+
| CREATING DATABASE STRUCTURE |
+-----------------------------+
+------------------------+
| INFO |
+------------------------+
| storage engine: InnoDB |
+------------------------+
+---------------------+
| INFO |
+---------------------+
| LOADING departments |
+---------------------+
+-------------------+
| INFO |
+-------------------+
| LOADING employees |
+-------------------+
+------------------+
| INFO |
+------------------+
| LOADING dept_emp |
+------------------+
+----------------------+
| INFO |
+----------------------+
| LOADING dept_manager |
+----------------------+
+----------------+
| INFO |
+----------------+
| LOADING titles |
+----------------+
+------------------+
| INFO |
+------------------+
| LOADING salaries |
+------------------+
Now you can log into MySQL and run some basic queries to check that the data was imported successfully.
sudo mysql -h localhost -u sysadmin -p
Enter the password for the sysadmin MySQL user.
Check the list of databases for the new employees database:
show databases;
Output:
+--------------------+
| Database |
+--------------------+
| information_schema |
| employees |
| mysql |
| performance_schema |
+--------------------+
4 rows in set (0.01 sec)
Use the employees database:
use employees;
Check the tables in it:
show tables;
Output:
+---------------------+
| Tables_in_employees |
+---------------------+
| departments |
| dept_emp |
| dept_manager |
| employees |
| salaries |
| titles |
+---------------------+
6 rows in set (0.01 sec)
If you want to, you can check the details for each of these tables. We'll just check the information for the titles table:
describe titles;
Output:
+-----------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+-------+
| emp_no | int(11) | NO | PRI | NULL | |
| title | varchar(50) | NO | PRI | NULL | |
| from_date | date | NO | PRI | NULL | |
| to_date | date | YES | | NULL | |
+-----------+-------------+------+-----+---------+-------+
4 rows in set (0.01 sec)
Check the number of entries:
mysql> select count(*) from titles;
+----------+
| count(*) |
+----------+
| 443308 |
+----------+
1 row in set (0.14 sec)
Check any of the other data you want. We can now go back to our operating system prompt:
quit;
Step Three — Using mysqlslap
We can now start using mysqlslap. mysqlslap can be invoked from a regular shell prompt so there's no need to explicitly log in to MySQL. For this tutorial, though, we will open another terminal connection to our Linux server and start a new MySQL session from there with the sysadmin user we created before, so we can check and update a few things in MySQL more easily. So, we'll have one prompt open with our sudo user, and one prompt logged into MySQL.
Before we get into specific commands for testing, you may want to take a look at this list of the most useful mysqlslap options. This can help you design your own mysqlslap commands later.
Option | What it means |
---|---
--user | MySQL username to connect to the database server |
--password | Password for the user account. It's best to leave it blank on the command line so you are prompted for it |
--host | MySQL database server name |
--port | Port number for connecting to MySQL if the default is not used |
--concurrency | The number of simultaneous client connections mysqlslap will emulate |
--iterations | The number of times the test query will be run |
--create-schema | The database where the query will be run |
--query | The query to execute. This can either be a SQL query string or a path to a SQL script file |
--create | The query to create a table. Again, this can be a query string or a path to a SQL file |
--delimiter | The delimiter used to separate multiple SQL statements |
--engine | The MySQL database engine to use (e.g., InnoDB) |
--auto-generate-sql | Lets MySQL perform load testing with its own auto-generated SQL command |
Use Case: Benchmarking with Auto-generated SQL and Data
We will begin by using mysqlslap's auto-generate-sql feature. When we use auto-generated SQL, mysqlslap will create a separate temporary database - aptly called mysqlslap. This database will have a simple table in it with one integer and one varchar type column populated with sample data. This can be a quick and easy way to check the overall performance of the database server.
We start by testing a single client connection doing one iteration of an auto-generated SQL:
sudo mysqlslap --user=sysadmin --password --host=localhost --auto-generate-sql --verbose
The output should look like this:
Benchmark
Average number of seconds to run all queries: 0.009 seconds
Minimum number of seconds to run all queries: 0.009 seconds
Maximum number of seconds to run all queries: 0.009 seconds
Number of clients running queries: 1
Average number of queries per client: 0
mysqlslap reports a few benchmarking statistics as shown in the output. It reports the average, minimum, and maximum number of seconds it took to run the query. We can also see that the number of client connections used for this load test was one.
Now try 50 concurrent connections, and have the auto-generated query run 10 times:
sudo mysqlslap --user=sysadmin --password --host=localhost --concurrency=50 --iterations=10 --auto-generate-sql --verbose
What this command means is that fifty simulated client connections will each throw the same test query at the same time, and this test will be repeated ten times.
The output shows us a marked difference with the increased load:
Benchmark
Average number of seconds to run all queries: 0.197 seconds
Minimum number of seconds to run all queries: 0.168 seconds
Maximum number of seconds to run all queries: 0.399 seconds
Number of clients running queries: 50
Average number of queries per client: 0
Note how the Number of clients running queries: field is now showing a value of 50. The average number of queries per client is zero.
Auto-generated SQL creates a simple table with two fields. In most production environments, table structures will be much larger than that. We can instruct mysqlslap to emulate this by adding extra fields to the test table with two new parameters: --number-char-cols and --number-int-cols. These parameters specify the number of varchar and int columns to add to the test table.
In the following example, we are testing with an auto-generated SQL query that runs against a table with 5 numeric columns and 20 character type columns. We are also simulating 50 client connections and we want the test to repeat 100 times:
sudo mysqlslap --user=sysadmin --password --host=localhost --concurrency=50 --iterations=100 --number-int-cols=5 --number-char-cols=20 --auto-generate-sql --verbose
This one should take a bit longer. While the test is running, we can switch to the other terminal window where we have a MySQL session running and see what's going on. Note that if you wait too long, the test will complete and you won't be able to see the test database.
From the MySQL prompt:
show databases;
Note the mysqlslap database:
+--------------------+
| Database |
+--------------------+
| information_schema |
| employees |
| mysql |
| mysqlslap |
| performance_schema |
+--------------------+
5 rows in set (0.01 sec)
You can check the table in the test database if you want to; it's called t1.
Check your other terminal window again. When the test finishes, you'll find that the performance has slowed down even more with the increased load:
Benchmark
Average number of seconds to run all queries: 0.695 seconds
Minimum number of seconds to run all queries: 0.627 seconds
Maximum number of seconds to run all queries: 1.442 seconds
Number of clients running queries: 50
Average number of queries per client: 0
Go back to your MySQL terminal session. We can see that mysqlslap has dropped its throwaway database. At the MySQL prompt:
show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| employees |
| mysql |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)
Use Case: Benchmarking with Custom Queries
Auto-generated SQL is good if you are evaluating the server's physical resources. It's useful when you want to find the level of load a given system can take.
When you want to troubleshoot performance for a specific database-dependent application, however, you'll want to test real queries on real data. These queries might be coming from your web or application server.
For now, we'll assume you know the specific query you want to test. In the next section we'll show you a way to find queries that are running on your server.
We will begin with in-line queries. You can give an in-line query to mysqlslap with the --query option. The SQL statements can't have line breaks in them, and they need to be delimited by semicolons (;). The queries also need to be enclosed in double quotes.
In the following code snippet we are running a simple query against the dept_emp table. The dept_emp table has more than three hundred thousand records. Note how we have specified the employees database with the --create-schema option:
sudo mysqlslap --user=sysadmin --password --host=localhost --concurrency=50 --iterations=10 --create-schema=employees --query="SELECT * FROM dept_emp;" --verbose
This will take a while to run. You should receive a performance benchmark like this after a minute or two:
Benchmark
Average number of seconds to run all queries: 18.486 seconds
Minimum number of seconds to run all queries: 15.590 seconds
Maximum number of seconds to run all queries: 28.381 seconds
Number of clients running queries: 50
Average number of queries per client: 1
(Note: If this query hangs for more than ten minutes or does not give any output, you should try it again with a lower number for --concurrency and/or --iterations, or try it on a bigger server.)
Next, we will use multiple SQL statements in the --query parameter. In the following example we are terminating each query with a semicolon. mysqlslap knows we are using a number of separate SQL commands because we have specified the --delimiter option:
sudo mysqlslap --user=sysadmin --password --host=localhost --concurrency=20 --iterations=10 --create-schema=employees --query="SELECT * FROM employees;SELECT * FROM titles;SELECT * FROM dept_emp;SELECT * FROM dept_manager;SELECT * FROM departments;" --delimiter=";" --verbose
This test uses the same number of iterations but fewer concurrent connections (20 instead of 50). Even so, performance was slower for the multiple SELECT statements (averaging 23.8 seconds vs. 18.486 seconds):
Benchmark
Average number of seconds to run all queries: 23.800 seconds
Minimum number of seconds to run all queries: 22.751 seconds
Maximum number of seconds to run all queries: 26.788 seconds
Number of clients running queries: 20
Average number of queries per client: 5
Production SQL statements can be complicated. It's easier to add a complicated SQL statement to a script than to type it out for tests. So, we can instruct mysqlslap to read the query from a script file.
To illustrate this, let's create a script file from the SQL commands. We can use the code snippet below to create such a file:
sudo echo "SELECT * FROM employees;SELECT * FROM titles;SELECT * FROM dept_emp;SELECT * FROM dept_manager;SELECT * FROM departments;" > ~/select_query.sql
sudo cp ~/select_query.sql /mysqlslap_tutorial/
The select_query.sql file now holds all five SELECT statements.
Since this script has multiple queries, we can introduce a new testing concept: mysqlslap can parallelize the queries. We do this by specifying the total number of queries the test clients should execute, with the --number-of-queries option. So, if we have 50 connections and 1,000 queries to run, each client will execute approximately 20 queries.
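The per-client split is simple integer division; a quick sketch using the numbers from this example:

```shell
# How many queries each simulated client runs when --number-of-queries=1000
# is spread across --concurrency=50 clients.
total_queries=1000
concurrency=50
echo "$(( total_queries / concurrency )) queries per client"
# prints: 20 queries per client
```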
Finally, we can also use the --debug-info switch, which will give us an indication of the computing resources used.
In the following code snippet, we are asking mysqlslap to use the script file we just created. We are also specifying the number-of-queries parameter. The process will be repeated twice and we want debugging information in the output:
sudo mysqlslap --user=sysadmin --password --host=localhost --concurrency=20 --number-of-queries=1000 --create-schema=employees --query="/mysqlslap_tutorial/select_query.sql" --delimiter=";" --verbose --iterations=2 --debug-info
After this command completes, we can see some interesting results:
Benchmark
Average number of seconds to run all queries: 217.151 seconds
Minimum number of seconds to run all queries: 213.368 seconds
Maximum number of seconds to run all queries: 220.934 seconds
Number of clients running queries: 20
Average number of queries per client: 50
User time 58.16, System time 18.31
Maximum resident set size 909008, Integral resident set size 0
Non-physical pagefaults 2353672, Physical pagefaults 0, Swaps 0
Blocks in 0 out 0, Messages in 0 out 0, Signals 0
Voluntary context switches 102785, Involuntary context switches 43
Here the average number of seconds to run all queries in our MySQL instance is 217 seconds, almost 4 minutes. While that was certainly affected by the amount of RAM and CPU available to our virtual machine, it was also due to the large number of queries from a moderate number of client connections repeating twice.
We can see there were a large number of non-physical page faults. Page faults happen when data cannot be found in memory and the system has to go and fetch it from the swap file on disk. The output also shows CPU-related information. In this case we see a large number of context switches.
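To turn figures like these into a rough throughput number, you can divide the queries per iteration by the average run time. A quick sketch, using the constants from this run (awk handles the floating-point division):

```shell
# Rough throughput from the run above: 1000 queries per iteration,
# 217.151 s average time to run them all.
queries=1000
avg_seconds=217.151
awk -v q="$queries" -v s="$avg_seconds" \
    'BEGIN { printf "%.1f queries/second\n", q / s }'
```

On these numbers, that works out to about 4.6 queries per second under this load.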
Use Case: Practical Benchmarking Scenario and Capturing Live Queries
So far in our examples, we have been running queries against the original employees database. That's something DBAs certainly wouldn't want you to do, and for good reason: you don't want to add load to your production database, and you don't want to run test queries that might delete, update, or insert data into your production tables.
We'll show you how to make a backup of a production database and copy it to a testing environment. In this example it's on the same server, but ideally you would copy it to a separate server with the same hardware capacity.
More importantly, we'll show you how to record queries live from the production database and add them to a testing script. That is, you'll get queries from the production database, but run tests against the test database.
The general steps are as follows, and you can use them for any mysqlslap test:
1. Copy the production database to a test environment.
2. Configure MySQL to record and capture all connection requests and queries on the production database.
3. Simulate the use case you are trying to test. For example, if you run a shopping cart, you should buy something to trigger all the appropriate database queries from your application.
4. Turn off query logging.
5. Look at the query log and make a list of the queries you want to test.
6. Create a test file for each query you want to test.
7. Run the tests.
8. Use the output to improve your database performance.
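The capture steps (2 and 4 above) come down to two SQL statements run on the production server. Here is a dry-run sketch of that sequence; the `run_sql` helper is hypothetical and only echoes the statements instead of executing them, so substitute a real mysql invocation (and your own log file name) when you use it:

```shell
#!/bin/sh
# Dry-run sketch of the capture workflow. Replace the echo in run_sql
# with a real client call such as: mysql -u sysadmin -p -e "$1"
run_sql() { echo "would run: $1"; }

# Step 2: enable the general query log (the file name is just an example)
run_sql "SET GLOBAL general_log=1, general_log_file='capture_queries.log';"
# Step 3: exercise your application here (checkout flow, slow page, etc.)
# Step 4: disable logging as soon as the run is finished
run_sql "SET GLOBAL general_log=0;"
```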
To start, let's create a backup of the employees database. We will create a separate directory for its backup:
sudo mkdir /mysqlslap_tutorial/mysqlbackup
cd /mysqlslap_tutorial/mysqlbackup
Create the backup and move it to the new directory:
sudo mysqldump --user sysadmin --password --host localhost employees > ~/employees_backup.sql
sudo cp ~/employees_backup.sql /mysqlslap_tutorial/mysqlbackup/
Go to your MySQL test server. Create the employees_backup database:
CREATE DATABASE employees_backup;
At this point, if you are using a separate server for testing, you should copy the employees_backup.sql file over to it. From your main terminal session, import the backup data into the employees_backup database:
sudo mysql -u sysadmin -p employees_backup < /mysqlslap_tutorial/mysqlbackup/employees_backup.sql
On your production MySQL database server, enable the MySQL general query log and provide a file name for it. The general query log captures connection, disconnection, and query activities for a MySQL database instance.
SET GLOBAL general_log=1, general_log_file='capture_queries.log';
Now run the queries that you want to test on the production MySQL server. In this example we will run a query from the command line. However, you may want to generate queries from your application instead of running them directly. If you have a slow process or website page that you want to test, you should run through that process or access that web page now. For example, if you are running a shopping cart, you may want to complete the checkout process now, which should trigger all the appropriate queries on the database server.
This is the query we will run on the production MySQL server. First use the right database:
USE employees;
Now run the query:
SELECT e.first_name, e.last_name, d.dept_name, t.title, t.from_date, t.to_date FROM employees e INNER JOIN dept_emp de ON e.emp_no=de.emp_no INNER JOIN departments d ON de.dept_no=d.dept_no INNER JOIN titles t ON e.emp_no=t.emp_no ORDER BY e.first_name, e.last_name, d.dept_name, t.from_date;
Expected output:
489903 rows in set (4.33 sec)
We will turn off general logging when the query completes:
SET GLOBAL general_log=0;
Note that if you leave logging on, queries will continue to be added to the log, which may make testing harder. So, make sure you disable the log right after finishing your test. Let's check that the log file was created in the /var/lib/mysql directory:
sudo ls -l /var/lib/mysql/capt*
-rw-rw----. 1 mysql mysql 861 Sep 24 15:09 /var/lib/mysql/capture_queries.log
Let's copy this file to our MySQL test directory. If you're using a separate server for testing, copy it to that server.
sudo cp /var/lib/mysql/capture_queries.log /mysqlslap_tutorial/
There should be quite a bit of data in this log file. In this example, the query we want should be near the end. Check the last part of the file:
sudo tail /mysqlslap_tutorial/capture_queries.log
Expected output:
6294 Query show databases
6294 Query show tables
6294 Field List departments
6294 Field List dept_emp
6294 Field List dept_manager
6294 Field List employees
6294 Field List salaries
6294 Field List titles
140930 15:34:52 6294 Query SELECT e.first_name, e.last_name, d.dept_name, t.title, t.from_date, t.to_date FROM employees e INNER JOIN dept_emp de ON e.emp_no=de.emp_no INNER JOIN departments d ON de.dept_no=d.dept_no INNER JOIN titles t ON e.emp_no=t.emp_no ORDER BY e.first_name, e.last_name, d.dept_name, t.from_date
140930 15:35:06 6294 Query SET GLOBAL general_log=0
This log shows SQL commands and their timestamps. The SQL SELECT statement near the end of the file is what we are interested in. It should be exactly the same as the command we ran on the production database, since that's where we captured it.
In this example, we already knew the query. But, in a production environment, this method can be very useful for finding queries that you may not necessarily know about that are running on your server.
Note that if you ran or triggered different queries while logging, this file will look completely different. In a real scenario this file could be inundated with hundreds of entries coming from all different connections. Your goal is to find the query or queries that are causing a bottleneck. You can start by making a list of every line that includes the text Query. Then you'll have a list of exactly what queries were run on your database during the test.
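One quick way to build that list is to filter the log for lines tagged Query. The sketch below runs against a small sample fragment (mimicking the log format shown above) so it is self-contained; point the grep at your real log file instead:

```shell
# Create a small sample of a general query log, in the format shown above.
cat > /tmp/sample_general.log <<'EOF'
6294 Query  show databases
6294 Field List  departments
140930 15:34:52  6294 Query  SELECT * FROM dept_emp
EOF

# Keep only the lines that record an executed query.
grep 'Query' /tmp/sample_general.log
```

In a real log you would then copy each interesting statement into its own test file.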
For each query that you want to test, copy it into a file that ends with a .sql extension.
For example:
sudo vi /mysqlslap_tutorial/capture_queries.sql
The contents should be the MySQL query you want to test, without any line breaks and without a semicolon at the end:
SELECT e.first_name, e.last_name, d.dept_name, t.title, t.from_date, t.to_date FROM employees e INNER JOIN dept_emp de ON e.emp_no=de.emp_no INNER JOIN departments d ON de.dept_no=d.dept_no INNER JOIN titles t ON e.emp_no=t.emp_no ORDER BY e.first_name, e.last_name, d.dept_name, t.from_date
Next, make sure the query results are not cached. Go back to your test MySQL session. Run the following command:
RESET QUERY CACHE;
Now it's time to run the mysqlslap utility with the script file. Make sure you use the correct script file name in the --query parameter. We will use only ten concurrent connections and repeat the test twice. Run this from your test server:
sudo mysqlslap --user=sysadmin --password --host=localhost --concurrency=10 --iterations=2 --create-schema=employees_backup --query="/mysqlslap_tutorial/capture_queries.sql" --verbose
The benchmark output looks like this in our system:
Benchmark
Average number of seconds to run all queries: 68.692 seconds
Minimum number of seconds to run all queries: 59.301 seconds
Maximum number of seconds to run all queries: 78.084 seconds
Number of clients running queries: 10
Average number of queries per client: 1
So how can we improve this benchmark?
You'll need a certain amount of familiarity with MySQL queries to assess what the query is doing.
Looking back at the query, we can see it's doing a number of joins across multiple tables. The query shows employee job histories, and in doing so it joins different tables on the emp_no field. It also uses the dept_no field for joining, but since there are only a few department records, we will ignore this. Since there are many emp_no entries in the database, it's logical to assume that creating indexes on the emp_no field could improve the query.
With a little practice, once you've located queries that are taxing the server (that's the part that mysqlslap helps with!), you'll be able to make assessments about the queries based on your knowledge of MySQL and your database.
Next, you can try to improve your database or the queries that are being executed on it.
In our case, let's add the indexes we mentioned above. We will create three indexes on emp_no: one on the emp_no field in the employees table, another on the emp_no field in the dept_emp table, and the last one on the emp_no field in the titles table.
Let's go to our test MySQL session and execute the following commands:
USE employees_backup;
CREATE INDEX employees_empno ON employees(emp_no);
CREATE INDEX dept_emp_empno ON dept_emp(emp_no);
CREATE INDEX titles_empno ON titles(emp_no);
Coming back to our main terminal window on the test server, if we execute mysqlslap with the same parameters, we will see a difference in the benchmark:
sudo mysqlslap --user=sysadmin --password --host=localhost --concurrency=10 --iterations=2 --create-schema=employees_backup --query="/mysqlslap_tutorial/capture_queries.sql" --verbose
Benchmark
Average number of seconds to run all queries: 55.869 seconds
Minimum number of seconds to run all queries: 55.706 seconds
Maximum number of seconds to run all queries: 56.033 seconds
Number of clients running queries: 10
Average number of queries per client: 1
We can see that there is an immediate improvement in the average, minimum, and maximum time to execute the query. Instead of an average 68 seconds, the query now executes in 55 seconds. That's an improvement of 13 seconds for the same load.
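Expressed as a relative change, using the two averages reported above:

```shell
# Relative improvement between the two benchmark averages above.
before=68.692
after=55.869
awk -v b="$before" -v a="$after" \
    'BEGIN { printf "%.1f%% faster\n", (b - a) / b * 100 }'
```

That's roughly an 18.7% reduction in average run time from adding the three indexes.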
Since this database change produced a good result in the test environment, you can now consider rolling it out to your production database server, although keep in mind that database changes typically have trade-offs in their advantages and disadvantages.
You can repeat the process of testing commands and improvements with all of the queries you gleaned from your log.
Troubleshooting - mysqlslap Doesn't Show Output
If you run a test command and don't get any output, this is a good indication that your server resources could be maxed out. Symptoms include a missing Benchmark output or an error like mysqlslap: Error when storing result: 2013 Lost connection to MySQL server during query.
You may want to try the test again with a smaller number in the --concurrency or --iterations parameter. Alternately, you can try upgrading your test server environment.
This can be a good way to find the outer limits of your database server's capacity.
Conclusion
mysqlslap is a simple, light-weight tool that's easy to use and that integrates natively with the MySQL database engine. It's available for all editions of MySQL from version 5.1.4.
In this tutorial we have seen how to use mysqlslap with its various options and played around with a sample database. You can download other sample databases from the MySQL site and practice with those too. As we mentioned before, please don't run tests on a production database server.
The last use case in this tutorial involved only one query. While we improved the performance of that query somewhat by adding extra indexes to all three tables, the process may not be so simple in real life. Adding extra indexes can sometimes degrade system performance and DBAs often need to weigh the benefits of adding an extra index against the performance cost it may incur.
Real life testing scenarios are more complex, but this should give you the tools to get started with testing and improving your database performance.