Tag Archives: backups

Accidental DBA's Guide to MySQL Management

So you've been tasked with managing the MySQL databases in your environment, but you're not sure where to start.  Here's the quick & dirty guide. Oh yeah, and for those who love our stuff, take a look to your right. See that subscribe button? Grab our newsletter!

1. Installation

The "yum" tool is your friend.  If you're using Debian, you'll use apt-get, but it's very similar. You can do a "yum list" to see what packages are available. We prefer to use the Percona distribution of MySQL.  It's fully compatible with the stock MySQL distribution, but usually a bit ahead in terms of tweaks and fixes.  Also, if you're not sure, go with MySQL 5.5 for new installations.

$ rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
$ yum install Percona-Server-client-55
$ yum install Percona-Server-shared-55
$ yum install Percona-Server-shared-compat
$ yum install Percona-Server-server-55

The last command will create a fresh database for you as well.

Already have data in an existing database? Then you can migrate between MySQL and Oracle.

2. Set up replication

MySQL replication is a process you'll need to set up over and over again. It's statement based in MySQL: the INSERT, UPDATE, DELETE & CREATE statements run on the primary are transferred to the slave database, and applied by a thread running on that box.

The steps to setup are as follows:

A. lock the primary with FLUSH TABLES WITH READ LOCK;

B. issue SHOW MASTER STATUS and note the current file & position

C. make a copy of the data. You can dump the data:

$ mysqldump -A --single-transaction > full_primary.mysql

Alternatively you can use xtrabackup to set up replication without locking!

D. copy the dump to the slave database (scp works, but rsync is even better as it can restart if the connection dies).

E. import the dump on the slave box (this overwrites everything, so make sure you've got your boxes straight!)

$ mysql < full_primary.mysql
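Note that the 'rep' account used in the next step has to exist on the primary. A minimal sketch, assuming your slave connects from a 10.20.30.x subnet (adjust the host pattern and password to your environment):

mysql> CREATE USER 'rep'@'10.20.30.%' IDENTIFIED BY 'rep';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'rep'@'10.20.30.%';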

F. point to the master

mysql> change master to
    -> master_user='rep',
    -> master_password='rep',
    -> master_host='',
    -> master_log_file='bin-log.001122',
    -> master_log_pos=11995533;

G. start replication & check

mysql> start slave;
mysql> show slave status\G

You should see something like this:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

3. Analyze slow query & tune

If you're managing an existing MySQL database and you hit a performance blip, it's likely due to something that has changed.  You may be getting a spike in user traffic: that's new. Or some application code may have been recently deployed: that's new SQL running in your database. What to do?

If you haven’t already, enable the slow query log:

mysql> set global slow_query_log=1;
mysql> set global long_query_time=0.50;
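These settings don't survive a restart. To make them permanent, you can also add them to my.cnf; a sketch, naming the log file explicitly:

[mysqld]
slow_query_log = 1
long_query_time = 0.50
slow_query_log_file = /var/lib/mysql/server-slow.log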

Now wait a while. A few hours perhaps, or a few days. By default the file is written to your datadir as <hostname>-slow.log; in the examples below we'll assume /var/lib/mysql/server-slow.log.
Now analyze it. You’ll use a tool from the percona toolkit to do that. If you haven’t already done so, install the percona toolkit as well.

$ yum install percona-toolkit
$ pt-query-digest /var/lib/mysql/server-slow.log > /tmp/server-report.txt

Once you've done that, "less" the file and review it. You'll likely see the top five queries account for 75% of the output. That's good news, because it means less query tuning. Concentrate on those five and you'll get the most bang for your buck.

Bounce your opinions about the queries off of the developers who build the application code. Ask them where the code originates. What are those pages doing?  Check the tables: are there missing indexes? Look at the EXPLAIN output. Consider tuning the table data structures, or adding multi-column or covering indexes. There is typically a lot that can be improved in these troublesome queries.
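For example, a quick sketch with a hypothetical table and column:

mysql> EXPLAIN SELECT * FROM orders WHERE customer_id = 42\G
mysql> ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);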

4. Monitoring command line tools

You’ll want to have a battery of day-to-day tools at your disposal for interactive monitoring of the database.  Don’t go overboard. Obsessive tuning means obsessively turning knobs and dials. If there are no problems, you’re likely to create some.  So keep that in mind.

innotop is a “top” like utility for monitoring what’s happening inside your little database universe.  It’s probably already available through yum and the “epel” repository:

$ yum install innotop

First edit the .my.cnf file in your home directory and add your MySQL login credentials so innotop can connect.
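A minimal sketch, substituting your own account details:

[client]
user=root
password=s3kret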

From there you should be able to just fire up innotop without problems.

mysqltuner is a catch-all tool that does a once-over of your server and gives you some nice feedback.  Get a copy as follows:

$ wget http://mysqltuner.pl/ -O mysqltuner.pl

Then run it:
$ chmod +x mysqltuner.pl
$ ./mysqltuner.pl

Here are a couple of useful mysql shell commands to get database information:

mysql> show processlist;
mysql> show engine innodb status\G
mysql> show status;

There is also one last tool which can come in handy for reviewing a new MySQL server: the summary tool, also from percona toolkit. Run it as follows:

$ pt-summary

5. Backups

You absolutely need to know about backups if you want to sleep at night. Hardware and database servers fail, and software has bugs that bite. And if all that doesn't get you, people make mistakes. So-called operator error will surely get you at some point. There are three main types of backups:

A. cold backups

With the database shut down, make a complete copy of the /var/lib/mysql directory, along with perhaps the /etc/my.cnf file. Together those amount to a cold backup of your database.
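A minimal sketch of that procedure (the init script name and backup path can vary by distribution):

$ /etc/init.d/mysql stop
$ tar czvf /backups/mysql_cold_backup.tar.gz /var/lib/mysql /etc/my.cnf
$ /etc/init.d/mysql start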

B. hot backups

MySQL's commercial offering (MySQL Enterprise Backup) has provided this for some time. But we're all very lucky to also have the open source Percona xtrabackup at our disposal. Here's a howto on using it for replication setup.

C. logical backups

These will generate a file containing all the CREATE statements to recreate all your objects, and then INSERT statements to add data.

$ mysqldump -A > my_database_dump.mysql
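You can also dump just one schema, and use --single-transaction to get a consistent snapshot of InnoDB tables without locking; the schema name here is a placeholder:

$ mysqldump --single-transaction my_database > my_database_dump.mysql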

6. Review existing servers

The percona toolkit summary tool is a great place to start.

$ pt-summary

Want to compare the my.cnf files of two different servers?

$ pt-config-diff h=localhost h=

Of course you’ll want to review the my.cnf file overall. Be sure you have looked at these variables:

key_buffer_size (for MyISAM)

7. Security essentials

The output of the pt-summary and mysqltuner.pl scripts should give you some useful information here. Be sure to have passwords set on all accounts. Grant fewer privileges by default, and add additional ones to accounts only as necessary.

You can use wildcards for the IP address, but try to be as specific as possible. Allow a subnet, not the whole internet: '10.20.30.%' for example, instead of just '%'.
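For example, with a hypothetical application user and schema:

mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'app_user'@'10.20.30.%' IDENTIFIED BY 's3kret';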

Also keep in mind that at the operating system or command line level, anyone with root access can really mess up your database. Writing to the wrong datafile or changing permissions can hose a running database very quickly.

8. Monitoring

Use a monitoring system such as Nagios to keep an eye on things.  At minimum check for:

A. connect to db
B. server load average
C. disk partitions have free space
D. replication running – see the IO & SQL running status messages above, and the check sketched below
E. no swapping – plenty of free memory
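A minimal replication check along those lines (this assumes credentials are in .my.cnf, so the mysql client can connect without flags):

$ mysql -e "SHOW SLAVE STATUS\G" | egrep "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"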

9. Ongoing maintenance

Periodically it’s a good idea to review your systems even when they’re running smoothly. Don’t go overboard with this however. As they say if it ain’t broke, don’t fix it.

A. check for unused & duplicate indexes
B. check for table fragmentation
C. perform table checks (if using MyISAM)
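Percona toolkit and plain SQL cover all three; the table name below is a placeholder:

$ pt-duplicate-key-checker h=localhost
mysql> SHOW TABLE STATUS;        -- compare Data_free to Data_length for fragmentation
mysql> OPTIMIZE TABLE my_table;  -- rebuild & defragment
mysql> CHECK TABLE my_table;     -- table check, mainly for MyISAM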

10. Manage the surprises

MySQL is full of surprises. In the Oracle world you might be surprised at how arcane some things are to setup, or how much babysitting they require. Or you might be surprised at how obscure some tuning & troubleshooting techniques are. In the MySQL world there are big surprises too. Albeit sometimes of a different sort.

A. replication checksums

One area that continues to defy my expectations is replication. Even if it is running without error, you still have more checking to do. Unfortunately many DBAs don't even know this!  That's because MySQL replication can drift out of sync without error. We go into specific details of what can cause this, and more importantly how to check for and prevent it, in bulletproofing MySQL with table checksums.
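pt-table-checksum from percona toolkit does the heavy lifting here. A sketch, run from the master, with a placeholder hostname; the second command reports any differences found on the slaves:

$ pt-table-checksum h=master-db.example.com
$ pt-table-checksum --replicate-check-only h=master-db.example.com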

B. test & confirm restores of backups

Spin up a cloud server in Amazon EC2, and restore your logical dump or hotbackup onto that box. Point a test application at that database and verify that all is well. It may seem obvious that a backup will do all this, but trouble creeps in when a filesystem fills up, or some command had the wrong flag or option included. There can be even bigger problems if some piece or section of the database was simply overlooked.  It's surprising how easy it is to run into this trouble. Testing also gives you a sense of what restore time looks like in the real world, a bit of information your boss is sure to appreciate.

If you made it this far, you know you want to grab the newsletter.

Ten things to remember about MySQL backups

  1. Use Hot Backups

    Hot backups are an excellent way to backup MySQL.  They can run without blocking your application, and save tons on restore time.  Percona's xtrabackup tool is a great way to do this.  We wrote a how-to on using xtrabackup for hotbackups.

  2. Use Logical Backups

    Just because we love hot backups using xtrabackup doesn't mean mysqldump isn't useful.  Want to load data into Amazon RDS?  Want to isolate and load only one schema, or just one table?  All these great uses make mysqldump indispensable.  Use it in combination with periodic hot backups to give you more recovery options.

  3. Replication isn't a backup

    While replication provides a great way to keep a hot copy of your production database, it's not the same as a backup.  Why?  Operator error, that's why!  People make mistakes, dropping tables and database schemas that later need to be restored.  This can and will happen, so head off the disaster by doing real backups.

    As an additional note, if you’re using replication, you surely want to perform regular checksums of your data.  These ensure that the primary and secondary do indeed contain the same data.

  4. Firedrills & Restore Time

    The only way to be sure your backup is complete is to test restoring everything.  Yes it's a pain, but it will inevitably be a learning experience.  You'll document the process to speed it up in future tests, you'll learn how long recovery takes, and you'll find additional pieces of the pie that must be kept in place.  Doing this in advance of d-day is invaluable.

    Different backups have different recovery times.  In the industry vernacular, your RTO or recovery time objective should inform what will work for you.  Although a mysqldump may take 30 minutes to complete, your restore of that data might take 8 hours or more.  That's due in part to rebuilding all those indexes.  When you perform the dump, one CREATE INDEX statement is formulated from the data dictionary, but on import the data must be sorted and organized to rebuild the index from scratch.  Percona's mysqldump utility will capitalize on MySQL's fast index rebuild for InnoDB tables.  According to the Percona guys this can bring a big improvement in import time.  Yet another great reason to use the Percona distro!

  5. Transaction Logs

    If you want to be able to do point-in-time recovery, you'll need all the binlog files as well.  These are created continuously, as new transactions are committed in your database.  If your last backup was last night at 3am, and you want to recover up to 3pm today, you'll need all the binary logs from the intervening hours to apply to that backup.  This process is called point-in-time recovery, and can bring your database restore up to the last committed transaction.

  6. Backup Config Files

    Don't forget that lonely /etc/my.cnf file.  That's an important part of a backup if you're rebuilding on a newly built server.  It may not need to be backed up with the same frequency, but it should be included.

  7. Stored Code & Grants

    Stored procedures, triggers and functions are all stored in the mysql database schema.  If you are doing a restore of just one database schema, you may not have these, or it may make the restore more complicated.  So it can be a good idea to backup code separately.  mysqldump can do this with the --routines option.  Hot backups by their nature will capture everything in the entire instance, that is all database schemas including the system ones.

    Grants are another thing you may want to backup separately.  For the same reasons as stored code, grants are stored in the system tables.  Percona toolkit includes a nice tool for this called pt-show-grants.  We recommend running this periodically anyway, as it'll give you some perspective on the permissions granted in your database.  You're reviewing those, right?

  8. Events & Cronjobs

    MySQL allows the running of events inside the database.  SHOW EVENTS or SHOW EVENTS FROM schema_name will display the scheduled events.

    You may also have cronjobs enabled.  Use crontab -l to display those for specific users.  Be sure to check at least “mysql” and “root” users as well as other possible application users on the server.

  9. Monitoring

    Backups are a nit-picky job, and often you don't know if they're complete until it's time to restore.  That's why we recommend the firedrills above; they're very important.  You can also monitor the backups themselves.  Use an error log with mysqldump or xtrabackup, and check that logfile for new messages.  In addition, you can check the size of the resulting backup file.  If it has changed measurably from recent backup sizes, that may indicate problems.  Is your backup size 0?  Something serious is wrong.  Half the size of recent ones?  It may have failed halfway through, or the filesystem filled up.

  10. Security

    This is an often overlooked area, but it may be a concern for some environments.  Is the data contained in your backup sensitive?  Consider where the backups are stored and retained for the long term.  Consider who has access to those files, and make use of the least-privileges rule.

Like our stuff? Don’t forget to grab our newsletter!

Point-in-time Recovery – What is it and why is it important?

Web-facing database servers receive a barrage of activity 24 hours a day.  Sessions are managed for users logging in, ratings are clicked and comments are added.  Even more complex are web-based ecommerce applications.  All of this activity is organized into small chunks called transactions.  They are discrete sets of changes.  If you're editing a word processing document, it might autosave every five minutes.  If you're doing something in Excel, it may provide a similar feature.  There is also a built-in mechanism for undo and redo of recent edits you have made.  These are all analogous to transactions in a database.

These transactions are important because they are all written to logfiles.  That's what makes replication possible: those changes are replayed on another database server downstream.

If you have lost your database server because of hardware failure or instance failure in EC2, you'll be faced with the challenge of restoring your database server.  How is this accomplished?  Well the first step would be to restore from the last full backup you have, perhaps a full database dump that you perform every day late at night.  Great, now you've restored to 2am.  How do you get the rest of your data?

That is where point-in-time recovery comes in.  Since those transactions were being written to your transaction logs, all the changes made to your database since the last full backup must be reapplied.  In MySQL this transaction log is called the binlog, and there is a mysqlbinlog utility that reads the transaction log files, and replays those statements.  You’ll tell it the start time – in this case 2am when the backup happened.  And you’ll tell it the end time, which is the point-in-time you want to recover to.  That time will likely be the time you lost your database server hardware.
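A sketch of that replay; the file names and datetimes here are placeholders:

$ mysqlbinlog --start-datetime="2012-06-15 02:00:00" \
      --stop-datetime="2012-06-15 15:00:00" \
      bin-log.001122 bin-log.001123 | mysql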

Point-in-time recovery is crucial to high availability, so be sure to back up your binlogs right alongside the full database backups you take every night.  If you lose the server or disk that the database is hosted on, you'll want an alternate copy of those binlogs available for recovery!

Quora discussion on Point-in-time Recovery by Sean Hull

Offsite Backups – What are they and why are they important?

Backups are obviously an important part of any managed infrastructure deployment.  Computing systems are inherently fallible, through operator error or hardware failure.  Existing systems must be backed up, from configurations, software and media files, to the backend data store.

In a managed hosting environment or cloud hosting environment, it is convenient to use various filesystem snapshot technologies to perform backups of entire disk volumes in one go.  These are powerful, fast, reliable, and easy to execute.  In Amazon EC2 for example these EBS snapshots are stored on S3.  But what happens if your data center goes down – through network outage or power failure?  Or further what happens if S3 goes offline?  Similar failures can affect traditional managed hosting facilities as well.

This is where offsite backups come in handy.  You would then be able to rebuild your application stack and infrastructure even with your entire production environment offline.  That's peace of mind!  Offsite backups can come in many different flavors:

  • mysqldump of the entire database, performed daily and copied to alternate hosting facility
  • semi-synchronous replication slave to alternate datacenter or region
  • DRBD setup – distributed filesystem upon which your database runs
  • replicated copy of version control repository – housing software, documentation & configurations

Offsite backups can also be coupled with a frequent sync of the binlog files (transaction logs).  These in combination with your full database dump will allow you to perform point-in-time recovery to the exact point the outage began, further reducing potential data loss.

Offsite Backups – What are they – discussed on Quora by Sean Hull

Business Continuity Planning – What is it and why is it important?

BCP, or BCRP if you want to also include "resiliency" in the acronym, basically outlines planning for the worst.  In the old days you had a filing cabinet with documents; for example there might be a central government office which houses birth certificates or titles and deeds.  Perhaps a copy of those documents is regularly created, and kept offsite in case of fire.

In the digital era, more and more companies have most of their assets in digital form.  So those digital copies, and the contents thereof, must be protected.  Since running computing systems and data centers always involves operational risk, planning for the worst is the essence of business continuity planning.

For example you may have redundant switches, routers and networking cables, but what if the whole data center goes down?  Do you have offsite backups that you’re creating regularly?  When was the last time those backups were tested with a full application restore and fire drill?

Discussed by Sean Hull on Quora – What is BCP and why is it important?

Deploying MySQL on Amazon EC2 – 8 Best Practices

Also find Sean Hull’s ramblings on twitter @hullsean.

There are a lot of considerations for deploying MySQL in the Cloud.  Some concepts and details won’t be obvious to DBAs used to deploying on traditional servers.  Here are eight best practices which will certainly set you off on the right foot.

This article is part of a multi-part series Intro to EC2 Cloud Deployments.

1. Replication

Master-Slave replication is easy to set up, and provides a hot online copy of your data.  One or more slaves can also be used for scaling your database tier horizontally.

Master-Master active/passive replication can also be used to bring higher uptime, and allow some operations such as ALTER statements and database upgrades to be done online with no downtime.  The secondary master can be used for offloading read queries, and additional slaves can also be added as in the master-slave configuration.

Caution: MySQL’s replication can drift silently out of sync with the master. If you’re using statement based replication with MySQL, be sure to perform integrity checking to make your setup run smoothly. Here’s our guide to bulletproofing MySQL replication.

2. Security

You'll want to create an AWS security group for databases which opens port 3306, but not to the internet at large: allow access only from your AWS-defined webserver security group.  You may also decide to use a single box and security group which allows port 22 (ssh) from the internet at large.  All ssh connections will then come in through that box, and internal security groups (database & webserver groups) should only allow port 22 connections from that security group.

When you set up replication, you'll be creating users and granting privileges.  You'll need to grant to the wildcard '%' hostname designation, as your internal and external IPs will change each time a server dies.  This is safe since you expose your database server's port 3306 only to other AWS security groups, and not to internet hosts.

You may also decide to use an encrypted filesystem for your database mount point, your database backups, and/or your entire filesystem.  Be particularly careful of your most sensitive data.  If compliance requirements dictate, choose to store very sensitive data outside of the cloud and secure network connections to incorporate it into application pages.

Be particularly careful of your AWS logins.  The password recovery mechanism in Amazon Web Services is all that prevents an attacker from controlling your entire infrastructure, after all.

3. Backups

There are a few ways to backup a MySQL database.  By far the easiest way in EC2 is using the AWS snapshot mechanism for EBS volumes.  Keep in mind you’ll want to encrypt these snapshots as S3 may not be as secure as you might like.   Although you’ll need to lock your MySQL tables during the snapshot, it will typically only take a few seconds before you can release the database locks.
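The sequence looks roughly like this; the volume id is a placeholder, and the snapshot command here comes from Amazon's classic ec2-api-tools (adjust to whatever client you use):

mysql> FLUSH TABLES WITH READ LOCK;
-- from a second shell, while the lock is held:
--   $ ec2-create-snapshot vol-xxxxxxxx
mysql> UNLOCK TABLES;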

Now snapshots are great, but they can only be used within the AWS environment, so it also behooves you to be performing additional backups, and moving them offsite either to another cloud provider or to your own internal servers.  For this your choices are logical backups or hotbackups.

mysqldump can perform logical backups for you.  These backups perform a SELECT * on every table in your database, so they can take quite some time, and really destroy the warm blocks in your InnoDB buffer cache.  What's more, rebuilding a database from a dump can take quite some time.  All these factors should be considered before deciding a dump is the best option for you.

xtrabackup is a great open source tool available from Percona.  It can perform hotbackups of all MySQL tables including MyISAM, InnoDB and XtraDB if you use them.  This means the database will be online, not locking tables, with smarter less destructive hits to your buffer cache and database server as a whole.  The hotbackup will build a complete copy of your datadir, so bringing up the server from a backup involves setting the datadir in your my.cnf file and starting.
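A sketch using the innobackupex wrapper that ships with xtrabackup; the backup path and timestamped directory are placeholders, and the second command prepares the backup for use:

$ innobackupex /backups/
$ innobackupex --apply-log /backups/2012-06-15_02-00-00/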

We wrote a handy guide to using hotbackups to set up replication.

4. Disk I/O

Obviously Disk I/O is of paramount importance for any database server, including MySQL.  In AWS you do not want to use instance store storage at all.  Be sure your AMI is built on EBS, and further, use a separate EBS mount point for the database datadir.

An even better configuration than the above, though slightly more complex to configure, is a software raid stripe across a number of EBS volumes.  Linux's software raid will create an md0 device file on which you will then create a filesystem – use xfs.  Keep in mind that this arrangement will require some care during snapshotting, but it can still work well.  The performance gains are well worth it!
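A sketch of that raid-0 setup, assuming four EBS volumes already attached as /dev/sdf through /dev/sdi:

$ mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
$ mkfs.xfs /dev/md0
$ mount /dev/md0 /var/lib/mysql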

5. Network & IPs

When configuring Master & Slave replication, be sure to use the internal or private IPs and internal domain names so as not to incur additional network charges.  The same goes for your webservers which will point to your master database, and one or more slaves for read queries.

6. Availability Zones

Amazon Web Services provides a tremendous leap in options for high availability.  Take advantage of availability zones by putting one or more of your slaves in a separate zone where possible.  Interestingly if you ensure the use of internal or private IP addresses and names, you will not incur additional network charges to servers in other availability zones.

7. Disaster Recovery

EC2 servers are out of the gates *NOT* as reliable as traditional servers.  This should send shivers down your spine if you're trying to treat AWS like a traditional hosted environment.  You shouldn't.  It should force you to get serious about disaster recovery.  Build bulletproof scripts to spin up your servers from custom built AMIs, and test them.  Finally you're taking disaster recovery as seriously as you always wanted to.  Take advantage of Availability Zones as well, and plan for various different failure scenarios.

8. Vertical and Horizontal Scaling

Interestingly, vertical scaling can be done quite easily in EC2.  If you start with a 64-bit AMI, you can stop such a server without losing the root EBS mount.  From there you can start a new larger instance in EC2, attach that existing EBS root volume, and voila, you've VERTICALLY scaled your server in place.  This is quite a powerful feature at the system administrator's disposal.  Devops has never been smarter!  You can do the same to scale *DOWN* if you are no longer using all the power you thought you'd need.  Combine this phenomenal AWS feature with MySQL master-master active/passive configuration, and you can scale vertically with ZERO downtime.  Powerful indeed.

We wrote an EC2 Autoscaling Guide for MySQL that you should review.

Along with vertical scaling, you'll also want the ability to scale out, that is, add more servers to the mix as required, and scale back when your needs reduce.  Build smarts into your application so you can point SELECT queries to read-only slaves.  Many web applications do the bulk of their work in SELECTs, so being able to scale those horizontally is very powerful and compelling.  By baking this logic into the application you also allow the application to check for slave lag.  If your slave is lagging behind the master you can see stale or missing data.  In those cases your application can choose to go to the master to get the freshest data.
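A sketch of that lag check, against a placeholder slave host:

$ mysql -h slave-db.example.com -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master
# if the value is NULL or large, route this read to the master instead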

What about RDS?

Wondering whether RDS is right for you? It may be. We wrote a comprehensive guide to evaluating RDS over MySQL.

If you read this far, you should grab our newsletter!