Accidental DBA's Guide to MySQL Management

So you’ve been tasked with managing the MySQL databases in your environment, but you’re not sure where to start.  Here’s the quick & dirty guide. Oh yeah, and for those who love our stuff, take a look to your right. See that subscribe button? Grab our newsletter!

1. Installation

The “yum” tool is your friend.  If you’re using Debian, you’ll use apt-get, but it’s very similar. You can do a “yum list” to see what packages are available. We prefer to use the Percona distribution of MySQL.  It’s fully compatible with the stock MySQL distribution, but usually a bit ahead in terms of tweaks and fixes.  Also, if you’re not sure, go with MySQL 5.5 for new installations.

$ rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
$ yum install Percona-Server-client-55
$ yum install Percona-Server-shared-55
$ yum install Percona-Server-shared-compat
$ yum install Percona-Server-server-55

The last command will create a fresh database for you as well.

Already have data in an existing database? Then you can migrate between MySQL and Oracle.

2. Set up replication

MySQL replication is a process you’ll need to set up over and over again. It’s statement based in MySQL: INSERT, UPDATE, DELETE & CREATE statements are transferred to the slave database, and applied by a thread running on that box.
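
Before you begin, the slave will need a replication account on the primary; the CHANGE MASTER example in step F below assumes one called ‘rep’. A minimal sketch (the password and subnet here are placeholders):

mysql> grant replication slave on *.* to 'rep'@'10.20.30.%' identified by 'rep';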

The steps to set up are as follows:

A. lock the primary with FLUSH TABLES WITH READ LOCK;

B. issue SHOW MASTER STATUS and note the current file & position

C. make a copy of the data. You can dump the data:

$ mysqldump -A --single-transaction > full_primary.mysql

Alternatively you can use xtrabackup to set up replication without locking!

D. copy the dump to the slave database (scp works, but rsync is even better as it can restart if the connection dies).

E. import the dump on the slave box (overwrites everything so make sure you got your boxes straight!)

$ mysql < full_primary.mysql

F. point to the master

mysql> change master to
> master_user='rep',
> master_password='rep',
> master_host='10.20.30.40',
> master_log_file='bin-log.001122',
> master_log_pos=11995533;

G. start replication & check

mysql> start slave;
mysql> show slave status\G

You should see something like this:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

3. Analyze slow query & tune

If you’re managing an existing MySQL database and you hit a performance blip, it’s likely due to something that has changed. You may be getting a spike in user traffic, that’s new! Or you may have some new application code that has been recently deployed, that’s new SQL that’s running in your database. What to do?

If you haven’t already, enable the slow query log:

mysql> set global slow_query_log=1;
mysql> set global long_query_time=0.50;
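
These settings are lost on restart, so if you want them to stick, you can also add them to my.cnf. A sketch (the log file path here is just the typical default):

[mysqld]
slow_query_log = 1
long_query_time = 0.5
slow_query_log_file = /var/lib/mysql/server-slow.log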

Now wait a while. A few hours perhaps, or a few days. The file defaults to hostname-slow.log in the datadir, for example:

/var/lib/mysql/server-slow.log

Now analyze it. You’ll use a tool from the percona toolkit to do that. If you haven’t already done so, install the percona toolkit as well.

$ yum install percona-toolkit
$ pt-query-digest /var/lib/mysql/server-slow.log > /tmp/server-report.txt

Once you’ve done that, “less” the file and review it. You’ll likely see the top five queries account for 75% of the output. That’s good news, because it means less query tuning. Concentrate on those five and you’ll get the most bang for your buck.

Bounce your opinions about the queries off of the developers who build application code. Ask them where the code originates. What are those pages doing?  Check the tables, are there missing indexes? Look at the EXPLAIN output. Consider tuning the table data structures, multi-column or covering indexes. There is typically a lot that can improve these troublesome queries.
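
As a quick illustration, here’s the kind of check you might do against a hypothetical orders table (the table, column and index names are made up for the example):

mysql> explain select * from orders where customer_id = 42\G
mysql> alter table orders add index idx_customer_id (customer_id);
mysql> explain select * from orders where customer_id = 42\G

If the first EXPLAIN shows a full table scan (type: ALL) and the second shows the new index in the key column, you’ve found your fix.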

4. Monitoring command line tools

You’ll want to have a battery of day-to-day tools at your disposal for interactive monitoring of the database.  Don’t go overboard. Obsessive tuning means obsessively turning knobs and dials. If there are no problems, you’re likely to create some.  So keep that in mind.

innotop is a “top” like utility for monitoring what’s happening inside your little database universe.  It’s probably already available through yum and the “epel” repository:

$ yum install innotop

First edit the .my.cnf file in your home directory and add:
[client]
user=root
password=mypw

From there you should be able to just fire up innotop without problems.

mysqltuner is a catch-all tool that does a once-over of your server, and gives you some nice feedback.  Get a copy as follows:

$ wget http://mysqltuner.pl/ -O mysqltuner.pl

Then run it:
$ chmod +x mysqltuner.pl
$ ./mysqltuner.pl

Here are a couple of useful mysql shell commands to get database information:

mysql> show processlist;
mysql> show engine innodb status\G
mysql> show status;

There is also one last tool which can come in handy for reviewing a new MySQL server. Also from percona toolkit, the summary tool. Run it as follows:

$ pt-summary

5. Backups

You absolutely need to know about backups if you want to sleep at night. Hardware and database servers fail, software has bugs that bite. And if all that doesn’t get you, people make mistakes. So-called operator error will surely get you at some point. There are three main types:

A. cold backups

With the database shutdown, make a complete copy of the /var/lib/mysql directory, along with perhaps the /etc/my.cnf file. That together amounts to a cold backup of your database.
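
In practice that looks something like this (a sketch – the init script name and the backup destination will vary on your system):

$ /etc/init.d/mysql stop
$ tar czvf /backups/mysql-cold-backup.tar.gz /var/lib/mysql /etc/my.cnf
$ /etc/init.d/mysql start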

B. hot backups

There has long been a commercial tool for MySQL that provides this (MySQL Enterprise Backup, formerly InnoDB Hot Backup). But we’re all very lucky to also have the open source Percona xtrabackup at our disposal. Here’s a howto using it for replication setup.
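
The short version looks like this – a sketch, with placeholder paths; see the howto for the full procedure:

$ innobackupex /backups/
$ innobackupex --apply-log /backups/2012-04-24_16-30-00/

The first command takes the backup while the database stays online; the second prepares it so it’s consistent and ready to restore.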

C. logical backups

These will generate a file containing all the CREATE statements to recreate all your objects, and then INSERT statements to add data.

$ mysqldump -A > my_database_dump.mysql

6. Review existing servers

The percona toolkit summary tool is a great place to start.

$ pt-summary

Want to compare the my.cnf files of two different servers?

$ pt-config-diff h=localhost h=10.20.30.40

Of course you’ll want to review the my.cnf file overall. Be sure you have looked at these variables (a rough example configuration follows the list):

tmp_table_size
max_heap_table_size
default_storage_engine
read_buffer_size
read_rnd_buffer_size
sort_buffer_size
join_buffer_size
log_slow_queries
log_bin
innodb_log_buffer_size
innodb_log_file_size
innodb_buffer_pool_size
key_buffer_size (for MyISAM)
query_cache_size
max_allowed_packet
max_connections
table_open_cache
thread_cache_size
thread_concurrency
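
There’s no one-size-fits-all answer for these, but as a very rough sketch, a dedicated InnoDB box with 8GB of RAM might start somewhere around the following. Every value here is an assumption to be adjusted for your own workload:

[mysqld]
innodb_buffer_pool_size = 6G
innodb_log_file_size = 256M
innodb_log_buffer_size = 8M
tmp_table_size = 64M
max_heap_table_size = 64M
max_connections = 300
thread_cache_size = 32
slow_query_log = 1
log_bin = /var/lib/mysql/bin-log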

7. Security essentials

The output of the pt-summary and mysqltuner.pl scripts should give you some useful information here. Be sure to have passwords set on all accounts. Use fewer privileges by default, and only add additional ones to accounts as necessary.

You can use wildcards for the IP address, but try to be as specific as possible. Allow a subnet, not the whole internet – ‘10.20.30.%’ for example, instead of just ‘%’.
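
For example, a grant scoped to one schema and one subnet might look like this (the schema, user and password are placeholders):

mysql> grant select, insert, update, delete on myapp.* to 'appuser'@'10.20.30.%' identified by 'a-strong-password';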

Also keep in mind that at the operating system or command line level, anyone with root access can really mess up your database. Writing to the wrong datafile or changing permissions can hose a running database very quickly.

8. Monitoring

Use a monitoring system such as Nagios to keep an eye on things.  At minimum check for the following (a sketch of a couple of plugin checks follows the list):

A. connect to db
B. server load average
C. disk partitions have free space
D. replication running – see above IO & SQL running status messages
E. no swapping – plenty of free memory
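
If you have the standard nagios-plugins installed, a couple of these can be covered like this. The plugin path, thresholds and monitoring account are assumptions for the sketch:

$ /usr/lib64/nagios/plugins/check_mysql -H localhost -u monitor -p monitorpw
$ /usr/lib64/nagios/plugins/check_mysql -H localhost -u monitor -p monitorpw -S
$ /usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /var/lib/mysql

The -S flag checks slave status, covering the replication item above.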

9. Ongoing maintenance

Periodically it’s a good idea to review your systems even when they’re running smoothly. Don’t go overboard with this, however. As they say, if it ain’t broke, don’t fix it. A few commands covering the checks below are sketched after the list.

A. check for unused & duplicate indexes
B. check for table fragmentation
C. perform table checks (if using MyISAM)
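
Here’s that sketch. It assumes the .my.cnf credentials from section 4, and the table name is a placeholder:

$ pt-duplicate-key-checker h=localhost
$ pt-index-usage /var/lib/mysql/server-slow.log
mysql> show table status like 'some_table'\G
mysql> check table some_table;
mysql> optimize table some_table;

SHOW TABLE STATUS exposes a Data_free figure, a rough indicator of fragmentation. Keep in mind OPTIMIZE TABLE rebuilds the table and locks it while it runs.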

10. Manage the surprises

MySQL is full of surprises. In the Oracle world you might be surprised at how arcane some things are to setup, or how much babysitting they require. Or you might be surprised at how obscure some tuning & troubleshooting techniques are. In the MySQL world there are big surprises too. Albeit sometimes of a different sort.

A. replication checksums

One that continues to defy my expectations is replication. Even if it is running without error, you still have more checking to do. Unfortunately many DBAs don’t even know this!  That’s because MySQL replication can drift out of sync without error. We go into specific details of what things can cause this, but more importantly how to check and prevent it, by bulletproofing MySQL with table checksums.

B. test & confirm restores of backups

Spin up a cloud server in Amazon EC2, and restore your logical dump or hot backup onto that box. Point a test application at that database and verify that all is well. It may seem obvious that a backup will do all this, but problems creep in when a filesystem fills up, or some command had the wrong flag or option included. There can be even bigger problems if some piece or section of the database was simply overlooked.  It’s surprising how easy it is to run into this trouble. Testing also gives you a sense of what restore time looks like in the real world – a bit of information your boss is sure to appreciate.

If you made it this far, you know you want to grab the newsletter.

Ten things to remember about MySQL backups

  1. Use Hot Backups

    Hot backups are an excellent way to back up MySQL.  They can run without blocking your application, and save a ton of restore time.  Percona’s xtrabackup tool is a great way to do this.  We wrote a how-to on using xtrabackup for hot backups.

  2. Use Logical Backups

    Just because we love hot backups using xtrabackup doesn’t mean mysqldump isn’t useful.  Want to load data into Amazon RDS?  Want to isolate and load only one schema, or just one table?  All these great uses make mysqldump indispensable.  Use it in combination with periodic hot backups to give you more recovery options.

  3. Replication isn’t a backup

    While replication provides a great way to keep a hot copy of your production database, it’s not the same as a backup.  Why?  Operator error, that’s why!  People make mistakes, drop tables and database schemas that later need to be restored.  This can and will happen, so head off the disaster by doing real backups.

    As an additional note, if you’re using replication, you surely want to perform regular checksums of your data.  These ensure that the primary and secondary do indeed contain the same data.

  4. Firedrills & Restore Time

    The only way to be sure your backup is complete is to test restoring everything.  Yes it’s a pain, but it will inevitably be a learning experience.  You’ll document the process to speed it up in future tests, you’ll learn how long recovery takes, and you’ll find additional pieces of the pie that must be kept in place.  Doing this in advance of d-day is well worth the effort.

    Different backups have different recovery times.  In the industry vernacular, your RTO or recovery time objective should inform what will work for you.  Although a mysqldump may take 30 minutes to complete, your restore of that data might take 8 hours or more.  That’s due in part to rebuilding all those indexes.  When you perform the dump, one CREATE INDEX statement is formulated from the data dictionary, but on import the data must be sorted and organized to rebuild the index from scratch.  Percona’s mysqldump utility will capitalize on MySQL’s fast index rebuild for InnoDB tables.  According to the Percona guys this can bring a big improvement in import time.  Yet another great reason to use the Percona distro!

  5. Transaction Logs

    If you want to be able to do point-in-time recovery, you’ll need all the binlog files as well.  These are being created all the time, as new transactions are committed in your database.  If your last backup was last night at 3am, and you want to recover today up until 3pm, you’ll need all the binary logs from the intervening hours to apply to that backup.  This process is called point-in-time recovery, and can bring your database restore up to the most recently committed transactions.
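
    As a sketch, after restoring last night’s dump you’d replay the day’s binlogs up to the desired point like this (the file names and cutoff time are placeholders):

    $ mysqlbinlog --stop-datetime="2012-04-24 15:00:00" /var/lib/mysql/bin-log.001122 /var/lib/mysql/bin-log.001123 | mysql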

  6. Backup Config Files

    Don’t forget that lonely /etc/my.cnf file.  That’s an important part of a backup if you’re rebuilding on a newly built server.  It may not need to be backed up with the same frequency, but it should be included.

  7. Stored Code & Grants

    Stored procedures, triggers and functions are all stored in the mysql database schema.  If you are doing a restore of just one database schema, you may not have these, or they may make the restore more complicated.  So it can be a good idea to back up code separately.  mysqldump can do this with the --routines option.  Hot backups, by their nature, will capture everything in the entire instance – that is, all database schemas including the system ones.

    Grants are another thing you may want to back up separately.  For the same reasons as stored code, grants are stored in the system tables.  Percona toolkit includes a nice tool for this called pt-show-grants.  We recommend running this periodically anyway, as it’ll give you some perspective on permissions granted in your database.  You’re reviewing those, right?
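
    A sketch of both (the schema name is a placeholder):

    $ mysqldump --no-data --routines myapp > myapp_schema_and_code.sql
    $ pt-show-grants > all_grants.sql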

  8. Events & Cronjobs

    MySQL allows the running of events inside the database.  SHOW EVENTS or SHOW EVENTS FROM schema_name will display the events scheduled.

    You may also have cronjobs enabled.  Use crontab -l to display those for specific users.  Be sure to check at least the “mysql” and “root” users as well as other possible application users on the server.

  9. Monitoring

    Backups are a nit-picky job, and often you don’t know whether they’re complete until it’s time to restore.  That’s why we recommend the firedrills above – they’re very important.  You can also monitor the backups themselves.  Use an error log with mysqldump or xtrabackup, and check that logfile for new messages.  In addition you can check the size of the resulting backup file.  If it has changed measurably from recent backup sizes, it may indicate problems.  Is your backup size 0?  Something serious is wrong.  Half the size of recent ones?  It may have failed halfway through, or the filesystem filled up.
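
    For example (the paths are placeholders):

    $ mysqldump -A > /backups/full_2012-04-24.mysql 2> /backups/full_2012-04-24.err
    $ ls -lh /backups/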

  10. Security

    This is an often overlooked area, but it may be a concern for some environments.  Is the data contained in your backup sensitive?  Consider where the backups are stored and how long they are retained.  Think about who has access to those files, and make use of the least-privilege rule.

Like our stuff? Don’t forget to grab our newsletter!

Bulletproofing MySQL replication with checksums

Also find Sean Hull’s ramblings on twitter @hullsean.

Are your MySQL replicas running well? You might not even know if they aren’t. One of the scariest things about MySQL replication is that it can drift out of sync with the master “silently”. No errors, no warnings.

  1. What and Why?

    MySQL’s replication solution evolved as a statement based technology. Instead of sending actual block changes, MySQL just has to log committed transactions, and reapply those on the slave side. This affords a wonderful array of topologies and different uses, but has its drawbacks. The biggest occur when data does not get updated or changed in the same way on the slave. If you’re new to MySQL or coming from the Oracle world you might expect that this would flag an error. But there are many scenarios in which MySQL will not flag an error:

    • mixed transactional and non-transactional tables
    • use of non-deterministic functions such as uuid()
    • stored procedures and functions
    • update with LIMIT clause

    There are others but suffice it to say if you want to rely on your slave being consistent, you need to check it!

  2. The solution – mathematical checksums

    If you’re a seasoned Linux user, you’re probably familiar with the md5sum command. It creates a checksum of a file. You can do so on different servers to compare a file in a mathematically exact way. In fact rsync uses this technique to efficiently determine what files or pieces of files need to be copied across a network. That’s what makes it so fast!

    It turns out that MySQL can checksum tables too. However were we to build our own solution, we might have trouble doing so manually as table data is constantly in a state of flux.

    Enter Percona’s pt-table-checksum tool formerly part of Maatkit. Run it periodically against your master schemas or the entire instance if you like. It will store checksums of all of your tables in a special checksum table. The data from this table then will propagate through replication to all of your connected slaves.

    The tool then has a check mode, which allows you to verify all the connected slaves are ok, or report the differences if it finds any.

  3. Step-by-step Setup

    First you’ll need to grab a copy of the Percona toolkit. Note that if you previously installed Maatkit you may want to delete those old scripts to avoid confusion – mk-table-checksum if you used Maatkit, or pt-table-checksum if you have the 1.0 versions. You likely installed them using wget or a perl Makefile, so you may need to go and remove them manually.

    Assuming you’ve already got the percona repository installed issue:

    $ yum install -y percona-toolkit

    I’ve found some of the maatkit tools to be rather fussy about getting all the options right. The first thing to do which will help simplify this is to add a section in your local user’s “.my.cnf” file like this:

    [client]
    user=root
    password=myrootpassword

    That way the percona tools will look for this whenever it needs authentication credentials. Otherwise we assume localhost for this example, so you should verify you can connect with the mysql client as root from localhost.

    Now let’s checksum the “mysql” system schema.

    $ pt-table-checksum --replicate=test.checksum --create-replicate-table --databases=mysql localhost

    Note the --create-replicate-table option. You only need this option the first time. From there the test.checksum table will exist.

    You should see some output that looks like this:

    TS ERRORS DIFFS ROWS CHUNKS SKIPPED TIME TABLE
    04-24T16:06:45 0 0 0 1 0 0.099 mysql.columns_priv
    04-24T16:06:45 0 0 32 1 0 0.100 mysql.db
    04-24T16:06:45 0 0 0 1 0 0.096 mysql.event
    04-24T16:06:45 0 0 0 1 0 0.096 mysql.func
    04-24T16:06:45 0 0 38 1 0 0.102 mysql.help_category
    04-24T16:06:45 0 0 452 1 0 0.106 mysql.help_keyword
    04-24T16:06:46 0 0 993 1 0 0.096 mysql.help_relation
    04-24T16:06:46 0 0 506 1 0 0.100 mysql.help_topic
    04-24T16:06:46 0 0 0 1 0 0.099 mysql.host
    04-24T16:06:46 0 0 0 1 0 0.104 mysql.ndb_binlog_index
    04-24T16:06:46 0 0 0 1 0 0.107 mysql.plugin
    04-24T16:06:46 0 1 1 1 0 0.115 mysql.proc
    04-24T16:06:46 0 0 0 1 0 0.186 mysql.procs_priv
    04-24T16:06:46 0 1 1 1 0 0.097 mysql.proxies_priv
    04-24T16:06:47 0 0 0 1 0 0.097 mysql.servers
    04-24T16:06:47 0 0 0 1 0 0.096 mysql.tables_priv
    04-24T16:06:47 0 0 0 1 0 0.098 mysql.time_zone
    04-24T16:06:47 0 0 0 1 0 0.097 mysql.time_zone_leap_second
    04-24T16:06:47 0 0 0 1 0 0.100 mysql.time_zone_name
    04-24T16:06:47 0 0 0 1 0 0.100 mysql.time_zone_transition
    04-24T16:06:47 0 0 0 1 0 0.095 mysql.time_zone_transition_type
    04-24T16:06:47 0 1 38 1 0 0.100 mysql.user

  4. How to check slaves

    Once you’ve collected all those fancy checksums for your tables, nicely timestamped, you’ll want to verify that your slaves are happily in sync. You can do that with the following command, also on the master:

    $ pt-table-checksum --replicate=test.checksum --replicate-check-only --databases=mysql localhost

    If there are no differences you’ll see no output. If you have a difference it’ll look something like this:

    Differences on ip-10-15-27-19
    TABLE CHUNK CNT_DIFF CRC_DIFF CHUNK_INDEX LOWER_BOUNDARY UPPER_BOUNDARY
    mysql.user 1 1 1

    In our case you can see we created some users on the slaves accidentally, hence the differences. It illustrates how easy it is for differences to creep into your environment and also how easy it now is to find them!

  5. Special Cases

    Since one of my clients uses Drupal, they’ve had trouble replicating the semaphore table. This table is a MyISAM table, and unfortunately no one dares convert it to InnoDB. So from time to time some gunk builds up in there, and it fails on the slave. We could clean out the table, but we decided to just filter out this one table. Since Drupal doesn’t use fully qualified schema.table names in its code, only USE, we have found this to be safe.

    However the Percona toolkit explicitly checks for replication filters and will not run. It’ll stop with an error as follows:

    $ pt-table-checksum --replicate=test.checksum --databases=sean --ignore-tables=semaphore localhost

    04-24T15:59:29 Replication filters are set on these hosts:
    ip-10-15-27-19
    replicate_ignore_table = sean.semaphore
    ip-10-15-27-72
    replicate_ignore_table = sean.semaphore
    ip-10-15-27-18
    replicate_ignore_table = sean.semaphore
    Please read the --check-replication-filters documentation to learn how to solve this problem. at /usr/bin/pt-table-checksum line 6166.

The solution is the --nocheck-replication-filters option. Keep in mind that this sanity check is there for a reason, so be sure to skip the relevant tables in your checksum building, and checksum checks.

To build checksums skipping the semaphore table use this command:

$ pt-table-checksum --replicate=test.checksum --ignore-tables=prod.semaphore --nocheck-replication-filters localhost

Now you can check your slaves but ignore the semaphore table:


$ pt-table-checksum --replicate=test.checksum --replicate-check-only --ignore-tables=prod.semaphore --nocheck-replication-filters localhost

We also found a bug which prevented us from specifying multiple tables on the --ignore-tables line. So we used multiple invocations to do different schemas like this:


$ pt-table-checksum --replicate=test.checksum --replicate-check-only --databases=prod1 --ignore-tables=prod1.semaphore --nocheck-replication-filters localhost

$ pt-table-checksum --replicate=test.checksum --replicate-check-only --databases=prod2 --ignore-tables=prod2.semaphore --nocheck-replication-filters localhost

  6. Crash Protection

    If you’ve used MySQL replication for any length of time, you’ve probably seen a server crash. MySQL replication can have trouble restarting if you’re using temporary tables, as they’ll be missing upon restart. Also, MySQL before 5.5 leaves syncing the replication info files to the operating system, so they may be incorrect after a crash.

    1. Use MySQL 5.5 if possible

    There are some new parameters in 5.5 that protect the info files from a crash. These are a great addition, and will make your slave databases more bulletproof.

    sync_master_info = 1
    sync_relay_log = 1
    sync_relay_log_info = 1

    2. Don’t use temporary tables

    After a restart they’ll simply be gone, so queries requiring or running against them will fail.

    The Percona guys’ new book, High Performance MySQL, Third Edition, suggests an alternative to temporary tables: use a special schema to hold your temp data, but create the tables as normal permanent tables. Be sure your application creates them with unique names, using connection_id() for example. Lastly, have a cleanup process drop tables periodically, based on closed connection_ids.
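
    A minimal sketch of that pattern – the scratch schema name and column list are made up for the example:

    mysql> create database if not exists scratch;
    mysql> set @t := concat('scratch.report_', connection_id());
    mysql> set @ddl := concat('create table ', @t, ' (id int, total decimal(10,2)) engine=innodb');
    mysql> prepare stmt from @ddl;
    mysql> execute stmt;
    mysql> deallocate prepare stmt;

    The cleanup job can then drop any scratch.report_% table whose connection id no longer shows up in SHOW PROCESSLIST.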