MySQL’s optimizer can do a lot of things, but subqueries are not always handled well. Take a look at the IN subquery below. If you see DEPENDENT SUBQUERY in your EXPLAIN plan, you may want to take a second look. This will run slow as a dog when the tables get large.
SELECT * FROM bucket
WHERE bucket_id IN (
SELECT bucket_id FROM bucket_items
WHERE item_id = 1);
Here’s what the EXPLAIN looks like.
(sean@localhost:mysql.sock) [test]> explain select * from bucket where bucket_id in (select bucket_id from bucket_items where item_id = 1);
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | PRIMARY | bucket | ALL | NULL | NULL | NULL | NULL | 3 | Using where |
| 2 | DEPENDENT SUBQUERY | bucket_items | ALL | NULL | NULL | NULL | NULL | 2 | Using where |
2 rows in set (0.00 sec)
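One common workaround, sketched here on the assumption that each (bucket_id, item_id) pair appears only once in bucket_items, is to rewrite the IN subquery as a plain JOIN so the optimizer can drive the lookup from bucket_items instead of re-running the subquery for every bucket row. The aliases are illustrative; if pairs can repeat, use SELECT DISTINCT b.* or an EXISTS form instead:
SELECT b.*
FROM bucket b
INNER JOIN bucket_items bi ON bi.bucket_id = b.bucket_id
WHERE bi.item_id = 1;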
Lots and lots of web applications need to page through information, from customer records to the albums in your iTunes collection. So as web developers and architects, it’s important that we do all this efficiently.
Start by looking at how you’re fetching information from your MySQL database. We’ve outlined three ways to do just that.
Of course such a solution would only work if you were paging by ID. If you page by name, it might get messier as there may be more than one person with the same name. If ID doesn’t work for your application, perhaps returning paged users by USERNAME might work. Those would be unique:
SELECT id, username
FROM users
WHERE username > 'email@example.com'
ORDER BY username LIMIT 10;
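For this to stay fast, the username column needs an index so MySQL can seek straight past the last value you saw and read the next ten rows in order. A minimal sketch, assuming the users table from the query above:
ALTER TABLE users ADD UNIQUE INDEX idx_username (username);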
Paging queries can be slow in SQL because they often involve the OFFSET keyword, which tells the server you only want a subset of the rows. The problem is that the server typically scans, collects, and then discards all the preceding rows first. With a deferred join, or by maintaining a place or position column, you can avoid this and speed up your database dramatically.
2. Try using a Deferred Join
This is an interesting trick. Suppose you have pages of customers. Each page displays ten customers. The query will use LIMIT to get ten records, and OFFSET to skip all the previous page results. When you get to the 100th page, it’s doing LIMIT 10 OFFSET 990. So the server has to go and read all those records, then discard them.
SELECT id, name, address, phone FROM customers ORDER BY name LIMIT 10 OFFSET 990;
MySQL first scans an index, then retrieves the rows from the table by primary key id. So it’s doing double lookups and so forth. Turns out you can make this faster with a tricky thing called a deferred join.
The inside piece just uses the primary key. An EXPLAIN plan shows us “Using index”, which we love!
SELECT id FROM customers
ORDER BY name
LIMIT 10 OFFSET 990;
Now combine this using an INNER JOIN to get the ten rows and data you want:
SELECT id, name, address, phone
FROM customers
INNER JOIN (
SELECT id FROM customers
ORDER BY name
LIMIT 10 OFFSET 990)
AS my_results USING(id);
That’s pretty cool!
3. Maintain a Page or Place column
Another way to keep the optimizer from retrieving rows it doesn’t need is to maintain a column for the page, place, or position. Yes, you need to update that column whenever you (a) INSERT a row, (b) DELETE a row, or (c) move a row with UPDATE. This could get messy with a page column, but a straight place or position column may be easier to maintain.
SELECT id, name, address, phone
FROM customers
WHERE page = 100
ORDER BY name;
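Keeping that column accurate is the tricky part. Here’s a minimal sketch, assuming a customers table with a position column numbered in name order; the id value is made up for illustration:
-- capture the position of the row we're about to remove
SET @pos = (SELECT position FROM customers WHERE id = 1234);
DELETE FROM customers WHERE id = 1234;
-- close the gap so the page math stays correct
UPDATE customers SET position = position - 1 WHERE position > @pos;
With a straight position column, a page is then just a range, for example WHERE position BETWEEN 991 AND 1000.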
When I say debutantes, it’s a nod to beginners, for this book provides a very solid and complete introduction to the topic of MySQL. It starts with installing the software and setting up your environment, then moves on to really understanding the SQL language, from commands to create objects, to ones for adding and modifying data, and then to writing code around it.
There’s a thorough discussion of datatypes, stored procedures, functions and views.
Paul Dubois’ definitive reference makes an excellent complement to High Performance MySQL. They should sit alongside each other on your database bookshelf.
For developers there are chapters on writing applications in C, another for Perl and a third for PHP.
For DBAs there are chapters on security, backups, replication, understanding the data directory and general server administration. There is also good coverage of both MySQL 5.5 and the newly released 5.6.
What I like about this book
You can think of this book as a definitive reference to MySQL. It includes much of the online documentation that you would find at Oracle’s site, such as the command and variable references, and detailed explanations of how to use the client tools.
Dubois also goes beyond the online documentation, though, giving you more background on the concepts and a broader, more complete discussion.
Want to find out how far your slaves *really* are behind? pt-heartbeat is your friend.
Want to analyze your slow query log to produce a useful summary report? pt-query-digest to the rescue.
I also see no mention of innotop, which I would also say is an essential tool. These aren’t really advanced topics, so it’s unclear why they are missing. In the real world you need these tools to do your job.
My more general criticism is that the book lacks real-world advice from a seasoned DBA. At times the writing feels more like the official line on how things work. But in day-to-day devops and operations, things can be quite different.
For example, stored procedures. In MySQL they are there, but using them brings real performance challenges, and they’re not always compatible with replication. Given all of that, why include a whole chapter with endless discussion of them, without strong reservations? It would lead a novice user or developer to incorporate them into an application, only to be shocked and surprised at the problems they bring.
Another example, looking through the system variables reference, I see the sync_binlog option. There is a short caution “…lower values provide greater safety in the event of a crash, but also affect performance more adversely”. Now reading this as a novice DBA I might think great, crash protection. But having tried this parameter in production, I found a huge impact on performance and had to disable it. What’s the advice here? It’s a bit confusing.
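If you want to see the trade-off yourself, here’s a minimal sketch of checking and changing the setting at runtime; the values shown are simply the two extremes:
-- see the current setting
SHOW GLOBAL VARIABLES LIKE 'sync_binlog';
-- flush the binary log to disk after every commit (safest, slowest)
SET GLOBAL sync_binlog = 1;
-- let the operating system decide when to flush (fastest, least safe)
SET GLOBAL sync_binlog = 0;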
This is a really great book as an introduction to MySQL and for delving into intermediate topics. I would sit it on your bookshelf alongside High Performance MySQL. Where this book lacks hard-won advice, you can turn to the latter, and what High Performance MySQL lacks in terms of introductory material, this book covers in spades. They make a great complement to each other.
Why does it do this? UNION is defined that way in SQL. Duplicates must be removed and this is an efficient way for the MySQL engine to remove them. Combine results, sort, remove duplicates and return the set.
Queries with UNION can be accelerated in two ways. Switch to UNION ALL or try to push ORDER BY, LIMIT and WHERE conditions inside each subquery. You’ll be glad you did!
What if we did UNION ALL? The intermediate result would simply contain all eleven records from both tables, with no deduplication step. Either way, with the WHERE clause applied outside, MySQL has to work on that 11 record temp table.
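As a rough sketch, filtering outside the union looks something like this, with the WHERE applied to a derived table built from both selects (the all_shirts alias is just illustrative):
SELECT type, `release`
FROM (
SELECT type, `release` FROM short_sleeve
UNION
SELECT type, `release` FROM long_sleeve
) AS all_shirts
WHERE `release` >= 2013;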
But it would be much faster to move the WHERE inside each subquery like this:
(SELECT type, `release` FROM short_sleeve WHERE `release` >= 2013)
UNION
(SELECT type, `release` FROM long_sleeve WHERE `release` >= 2013);
That would be operating on a combined 3 record table. Faster to sort & remove duplicates. Smaller result sets cache better too, providing a pay forward dividend. That’s what performance optimization is all about!
Remember, with multi-million row sets in each part of this query, the benefit of the optimization becomes obvious. We’re using very small results here to make visualizing easier.
You can also use this optimization for ORDER BY and for LIMIT conditions. By reducing the number of records returned by EACH PART of the UNION, you reduce the work that happens at the stage where they are all combined.
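Here’s a rough sketch of that idea using the same example tables, pushing ORDER BY and LIMIT into each part and then applying them once more to the combined result; the choice of sort column is just illustrative:
(SELECT type, `release` FROM short_sleeve WHERE `release` >= 2013 ORDER BY `release` LIMIT 10)
UNION ALL
(SELECT type, `release` FROM long_sleeve WHERE `release` >= 2013 ORDER BY `release` LIMIT 10)
ORDER BY `release` LIMIT 10;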
If you’re seeing some UNION queries in your slow query log, I suggest you try this optimization out and see if you can tweak them into something faster.
All servers use disk to store files. Operating system libraries, webserver & application code, and most importantly databases all use disk constantly.
So disk speed is crucial to server speed.
Disk speed is crucial for MySQL databases. It has been a real challenge in multi-tenant environments like Amazon’s EBS. The provisioned IOPS feature addresses this head on, allowing customers to lock in great MySQL database performance!
Since Amazon is a multi-tenant environment, other customers are using that same network, and hitting those same disks. So if your neighbors are seeing a lot of traffic to disk, your web application can slow down. Not good!
What is Provisioned IOPS
We’ll agree that it’s one of the worst branded features ever, but you should know about it and use it, especially for your MySQL databases.
Provisioned means that you’ll lock in performance in advance, and IOPS stands for input/output operations per second. Think of it as google juice for your cloud database servers!
1. Slow Disk I/O – RAID 5 – Multi-tenant EBS
Disk is the foundation of all your servers, and the base of their performance. True, with larger and larger main memory, more data is available in cache, but a server still needs to constantly read from disk and flush things from memory. So it’s a very, very important component of performance and scalability.
What’s wrong with Raid 5?
RAID 5 was designed to give you more space using fewer disks. It’s often used in a server with few slots, or because ops misunderstand how badly it will impact performance. On a database server it can be particularly bad.
All writes see a performance hit. What’s worse, if you lose a disk, the RAID, though technically still online, will perform SO SLOWLY as to be effectively offline. And a rebuild takes many hours. Worse still is the risk of losing another drive during that rebuild. What if you have to order a replacement drive and it takes a couple of days?
RAID 10 is the solution. Mirror each set of two drives, then stripe over those. Even with only four slots available, it’s worth it. Good read performance, good write performance, and protection.
What the heck is multi-tenant?
In the cloud, you share servers, network & disk just like you share apartments in a building. Hence the name. Amazon’s EBS, or Elastic Block Store, extends this metaphor, offering you the welcome flexibility of a storage network. But your bottleneck can be fighting with other tenants on that same network.
Default servers do have this problem, but AWS has addressed it with a little known but VERY VERY useful feature called Provisioned IOPS. It’s a technical name, but it means you can lock in reliable disk I/O. Just what the scalability doctor ordered.
MySQL is good at a lot of things, but it’s not ideal for managing application queues. Do you have a table like JOBS in your database, with a status column including values like “queued”, “working”, and “completed”? If so you’re probably using the database to queue work in your application.
It’s not a great use of MySQL because of locking problems that come up, as well as the search and scan to find the next task.
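To make that concrete, here’s a hypothetical sketch of the pattern; the jobs table and column names are made up. Every worker runs roughly the same polling query, and the FOR UPDATE lock is where they pile up behind one another:
-- each worker polls for the next piece of work
BEGIN;
SELECT id FROM jobs
WHERE status = 'queued'
ORDER BY created_at
LIMIT 1
FOR UPDATE;
-- 42 stands in for the id returned above; claim it, then commit and get to work
UPDATE jobs SET status = 'working' WHERE id = 42;
COMMIT;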
Luckily there are great solutions for developers. RabbitMQ is a great queuing solution, as is Amazon’s SQS. What’s more, as external services they’re easier to scale.
Scalability becomes key to your business as your customer base grows. But it doesn’t have to be impossible. Disk I/O, caching, queuing and searching are all key areas where you can make a big dent in a manageable way. Juggle your technical debt too, and you’re golden!
Oracle has full text search support, so why shouldn’t we assume the same in MySQL? Well MySQL *does* have it, but in many versions only with the old MyISAM storage engine. It has its set of corruption problems, and isn’t really very performant.
Better to use a proven search solution like Apache Solr. It is built specifically for search, includes excellent library support for developers of most modern web languages, and best of all is easy to SCALE! Just add more servers on your network, or distribute them globally.
For folks interested in the bleeding edge, full-text search is coming to InnoDB, the crash-safe and transactional storage engine, in 5.6. That said, you’re still probably better off going with an external solution like Solr, or Sphinx with the MySQL SphinxSE plugin.
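If you do want to kick the tires on the 5.6 feature, here’s a minimal sketch; the articles table and its columns are made up for illustration:
CREATE TABLE articles (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(200),
  body TEXT,
  FULLTEXT (title, body)
) ENGINE=InnoDB;

SELECT id, title FROM articles
WHERE MATCH(title, body) AGAINST('scalability' IN NATURAL LANGUAGE MODE);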
Cache, cache, and cache some more. Your webservers should use a solid memcache or other object cache between them & the database. All those little result sets will sit in resident memory, waiting for future web pages that need them.
Next use a page cache such as Varnish. This sits in front of the webserver; think of it as a mini-webserver that handles very simple pages, but in a very high speed way. Like a pack of motorbikes weaving down an otherwise packed freeway, it frees your webserver up to do more complex work.
Browser caching is also important. But you can’t get at your customers’ browsers, or can you? Well, not directly, but you can instruct them what to cache. Do that with proper expires headers. Have your system administrator configure Apache to support this.
Technical debt can bite. What is it? As you’re developing a new idea, you’ll build prototypes. As those get deployed to customers, change gets harder, and the things you glossed over earlier become problems. One team leaves, another inherits the application, and the problems multiply. Over time you build up technical debt as your team spends more time supporting old code and fixing bugs, and less time building new features. At some point a rewrite of the problem code becomes necessary.