Category Archives: All

3 things CEOs should know about the Cloud

You’ve heard all the buzz and spiel about the cloud, and there are good reasons to want to get there. On-demand compute power makes new levels of scalability possible. Low up-front costs mean shifting capital expenditure to operating expenditure and saving a bundle in the process. But we won’t give you any more of the rah-rah marketing hoopla. You’ve heard enough of that. Instead we’ll gently play devil’s advocate for a moment, and give you a few things to think about when deploying applications with a cloud provider. Our focus is mainly on Amazon EC2.

You might also be interested in a wide-reaching introduction to deploying on Amazon EC2.

  1. Funky Performance
    One of the biggest hurdles we see clients struggle with on Amazon EC2 is performance, and it is rooted in the nature of shared resources. Servers, just like desktops, rely on CPU, memory, network and disk. In the virtual datacenter you can be given more than your fair share without even knowing it. More bandwidth, more CPU, more disk? Who would complain? But when your application starts behaving erratically because you’re suddenly competing for disk resources, you’ll quickly feel the flip side of that coin. Stocks go up, and they can just as easily come right back down.

    Variability around disk I/O seems to be the one that hits applications the hardest, especially the database tier of many web applications. If your application requires extremely high database transaction throughput, you would do well to consider physical servers and a real RAID array to host your database server. Read more about IOPS.

  2. Uncertain Reliability – A Loaded Gun
    Everybody has heard the saying, don’t hand someone a loaded gun. In the case of Amazon servers, you really do load your applications onto fickle and neurotic servers.

    Imagine you open a car rental business. You could have two brand new fully reliable cars to rent out to customers. Your customers would be very happy, but you’d have a very small business. Alternatively you could have twenty used Pintos. You’d have some breaking down a lot, but as long as you keep ten of them rented at a time, your business is booming.

    In the Amazon world you have all the tools to keep your Ford Pintos running, but it’s important to think long and hard about reliability, redundancy, and automation. Read more about Failures, Lessons & the Chaos Monkey

  3. Iffy Support
    Managed hosting providers vary drastically in terms of the support you can expect. Companies like Rackspace, Servint or Datapipe have support built into their DNA. They’ve grown up around having a support tech that your team can reach when they’re having trouble.

    Amazon takes the opposite approach. They give you all the tools to do everything yourself. But in a crunch it can be great to have that service available to help troubleshoot and diagnose a problem. Although they’re now offering support contracts, it’s not how they started out.

    If you have a crack operations team at your disposal, or you hire a third-party provider like Heavyweight Internet Group, Amazon Web Services gives you the flexibility and power to build phenomenal and scalable architectures. But if you’re a very small team without tons of technical know-how, you may well do better with a service-oriented provider like Rackspace et al.

A few more considerations…

  • Will your cloud provider go out of business?
  • Could a subpoena against your provider draw you into the net?
  • Since you don’t know where your sensitive data is, should you consider encryption?
  • Should you keep additional backups outside of the cloud?
  • Should you use multiple cloud providers?
  • Should you be concerned about the lack of perimeter security?

Ten things to remember about MySQL backups

  1. Use Hot Backups
    Hot backups are an excellent way to back up MySQL. They can run without blocking your application, and they save tons of restore time. Percona’s xtrabackup tool is a great way to do this. We wrote a how-to on using xtrabackup for hot backups.
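
    As a rough sketch (the paths and credentials are placeholders, and exact options vary between xtrabackup releases; older versions wrap these steps in the innobackupex script), a nightly hot backup plus the prepare step might look something like this:

    $ xtrabackup --backup --user=root --password=xxxx --target-dir=/backups/nightly

    $ xtrabackup --prepare --target-dir=/backups/nightly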

  2. Use Logical Backups
    Just because we love hot backups using xtrabackup doesn’t mean mysqldump isn’t useful.  Want to load data into Amazon RDS?  Want to isolate and load only one schema, or just one table?  All these great uses make mysqldump indispensable.  Use it in combination with periodic hot backups to give you more recovery options.
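
    For instance (the schema and table names here are invented), grabbing one schema or a single table is a one-liner:

    $ mysqldump --single-transaction --routines mydb > mydb.sql

    $ mysqldump --single-transaction mydb customers > customers.sql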

  3. Replication isn’t a backup
    While replication provides a great way to keep a hot copy of your production database, it’s not the same as a backup.  Why?  Operator error, that’s why!  People make mistakes, drop tables and database schemas that later need to be restored.  This can and will happen, so head off the disaster by doing real backups.

    As an additional note, if you’re using replication, you surely want to perform regular checksums of your data.  These ensure that the primary and secondary do indeed contain the same data.

  4. Firedrills & Restore Time
    The only way to be sure your backup is complete is to test restoring everything. Yes it’s a pain, but it will inevitably be a learning experience. You’ll document the process to speed it up in future tests, you’ll learn how long recovery takes, and you’ll find additional pieces of the pie that must be kept in place. Doing this in advance of d-day is invaluable.

    Different backups have different recovery times. In the industry vernacular, your RTO, or recovery time objective, should inform what will work for you. Although a mysqldump may take 30 minutes to complete, your restore of that data might take 8 hours or more. That’s due in part to rebuilding all those indexes. When you perform the dump, the CREATE INDEX statements are generated from the data dictionary in an instant, but on import the data must be sorted and organized to rebuild each index from scratch. The mysqldump that ships with Percona Server can capitalize on InnoDB’s fast index creation. According to the Percona guys this can bring a big improvement in import time. Yet another great reason to use the Percona distro!
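
    As a rough illustration of measuring both sides of that equation (the --innodb-optimize-keys flag is specific to Percona Server’s build of mysqldump, and restore_host is a stand-in for a scratch server you restore onto):

    $ time mysqldump --single-transaction --innodb-optimize-keys mydb > mydb.sql

    $ time mysql -h restore_host mydb < mydb.sql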

  5. Transaction Logs
    If you want to be able to do point-in-time recovery, you’ll need all the binlog files as well. These are being created all the time, as new transactions are committed in your database. If your last backup was last night at 3am and you want to recover up to 3pm today, you’ll need all the binary logs from the intervening hours to apply to that backup. This process is called point-in-time recovery, and it can bring your database restore up to the last committed transaction.
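
    Roughly, replaying the binlogs on top of the restored backup looks like this (the file names and timestamps are made up for the example):

    $ mysqlbinlog --start-datetime="2012-04-24 03:00:00" --stop-datetime="2012-04-24 15:00:00" mysql-bin.000142 mysql-bin.000143 | mysql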

  6. Backup Config Files
    Don’t forget that lonely /etc/my.cnf file.  That’s an important part of a backup if you’re rebuilding on a newly built server.  It may not need to be backed up with the same frequency, but should be included.

  7. Stored Code & Grants
    Stored procedures, triggers and functions are all stored in the mysql database schema. If you are doing a restore of just one database schema, you may not have this, or it may make the restore more complicated. So it can be a good idea to backup code separately. mysqldump can do this with the --routines option. Hot backups, by their nature, will capture everything in the entire instance – that is, all database schemas including the system ones.

    Grants are another thing you may want to backup separately. For the same reasons as stored code, grants are stored in the system tables. Percona Toolkit includes a nice tool for this called pt-show-grants. We recommend running this periodically anyway, as it’ll give you some perspective on the permissions granted in your database. You’re reviewing those, right?
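
    For example (file names are arbitrary, and credentials are assumed to be in .my.cnf), you can snapshot schema plus stored code, and grants, separately like so:

    $ mysqldump --no-data --routines mydb > mydb_schema_and_code.sql

    $ pt-show-grants > grants.sql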

  8. Events & Cronjobs
    MySQL allows the running of events inside the database.  SHOW EVENTS or SHOW EVENTS FROM schema_name will display the scheduled events.

    You may also have cronjobs enabled.  Use crontab -l to display those for specific users.  Be sure to check at least “mysql” and “root” users as well as other possible application users on the server.
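
    Put together, a quick audit might look like this (mydb is a placeholder schema, and listing another user’s crontab requires root):

    $ mysql -e "SHOW EVENTS" mydb

    $ crontab -l -u mysql

    $ crontab -l -u root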

  9. Monitoring
    Backups are a nit-picky job, and often you don’t know if they’re complete until it’s time to restore. That’s why we recommend the firedrills above; they’re very important. You can also monitor the backups themselves. Use an error log with mysqldump or xtrabackup, and check that logfile for new messages. In addition you can check the size of the resulting backup file. If it has changed measurably from recent backup sizes, that may indicate problems. Is your backup size 0? Something serious is wrong. Half the size of recent ones? It may have failed halfway through, or the filesystem filled up.
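
    As a minimal sketch of that kind of check (file names are invented), something along these lines can run right after the backup job:

    $ mysqldump --single-transaction --all-databases 2>dump.err | gzip > backup.sql.gz

    $ [ -s backup.sql.gz ] || echo "backup file is empty!"

    $ grep -i error dump.err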

  10. Security
    This is an often overlooked area, but it may be a concern for some environments. Is the data contained in your backup sensitive? Consider where the backups are stored and how long they’re retained. Think about who has access to those files, and apply the rule of least privilege.
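
    One low-tech sketch (the key file and names are illustrative): lock down permissions on the backup files, and encrypt them before they leave the server if the data warrants it.

    $ chmod 600 /backups/nightly/*

    $ gzip -c backup.sql | openssl enc -aes-256-cbc -salt -pass file:/root/.backup_key -out backup.sql.gz.enc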

Like our stuff? Don’t forget to grab our newsletter!

Tyranny of a Google vote

Image by Hajo de Reijger, politicallyillustrated.com

For the past year I’ve been seeing headlines and blog posts analyzing the effect of Google’s latest algorithm update, dubbed Panda. There was much talk of unfair relegation from the first page of Google search results, and general indignation in the SEO community.

As with any subject of which I have only cursory knowledge, I didn’t think much of it. I thought that as long as I didn’t engage in link-buying and what is known as “black hat” tactics, the search engines would be fair. What I didn’t realise about Google was how subjective it has become in ranking websites. I was particularly tripped up in the area of duplicate content.

Some of my articles are syndicated to DZone.com, a hub for tech bloggers. They’d approached me about a year ago asking if I’d like them to carry our content. It seemed like a good way to gain visibility so I agreed.

Recently, I ran a Google search on the actual content that was syndicated and found the following results:

Title                          DZone rank   iheavy rank
Zero Downtime – What is it?    #21          (not in first 5 pages)
Deploying MySQL on EC2         #1           (not in first 5 pages)
Cloud Computing Use Cases      #5           (not in first 5 pages)

Now DZone.com’s PageRank is 6 while iheavy.com’s is 3. Google’s algorithm is probably weighting DZone’s PageRank more heavily, and serving up those results at the expense of the original. It could be that the algorithm can’t determine which copy is the original, but Google’s bots know full well when content is published, so it knows the iheavy.com content was created first.

More likely, that knowledge simply cannot supersede the ranking algorithm, which is one based on popularity.

What’s Original Content?

All of this raises the question of the value of original content. In this case we’ve given DZone permission to carry our content, not knowing whether that would be good or bad for us. Nevertheless, it does seem to go against common sense, and perhaps against Internet users’ intuition, that they are not clicking through to the original creator of the content.

Can Google’s Algorithm Discern Original Work? Is there an incentive to do so?

The update brings up interesting questions about Google’s ranking algorithm. In a world where the popularity indicator is given the highest weight, will we find what we are looking for? In my case, I write articles related to my areas of expertise: web architecture, scalability and general tech consulting matters. Of course I want people to find my site when they’re looking for solutions to problems relevant to them. Yet in Google’s calculations, popularity trumps provenance.

Many bigger sites are experiencing the same thing, and at an even larger scale. According to SEOmoz, Panda is forcing the role of the SEO to change into that of a web strategist. While traditional SEO methods of optimizing for keywords and putting out quality content still count, design and user experience, shareability and likeability (the “signals” that could predict a site’s popularity) now influence how your site ranks overall.

As an independent business owner with limited resources, I invest my off hours in writing better articles that appeal to people searching for MySQL or scalability consulting, and less in the bells and whistles that raise a site’s popularity. But if that’s what Google favors, then I’ll probably have to rethink my approach.

As much as I can squeeze out of a busy schedule, there’s not a high chance that this website can surpass a giant such as Dzone in popularity.

The Internet is meant to be a place where the pint-sized have a fair chance at making an impression. With the way search algorithms have evolved, things are looking more like a reality TV talent contest, where skill alone, without good looks and a nice smile, is just not enough to win the popular vote.

The Power of Habit by Charles Duhigg

Habits. We all have them. The good ones we celebrate, but the bad ones we struggle with. Duhigg’s book may introduce some ideas to those of us less familiar with the behavioral sciences, but it fails to effectively teach us how to form good habits and break the bad ones.

The book is filled with pages of stories about successful brands such as Pepsodent, which Duhigg credits with turning the brushing of teeth into a daily routine, and perhaps more tenuous ones about leaders such as Paul O’Neill, the CEO of Alcoa, who purportedly turned around the fortunes of an ailing organisation by changing its safety practices.

From cue to routine to reward, we must first identify the habit; then, in a way that parallels the success of Alcoholics Anonymous, we replace the routine while keeping the cue and reward. In discussing the success of AA and others, he brings up the importance of belief in the long-term success of habit change. He references William James’ famous quote: “Believe that life is worth living, and your belief will help create that fact”.

Still, I couldn’t help thinking that for the average business manager it lacked actionable advice of the kind you might find in Jim Collins’ Good to Great or Chip Conley’s Peak. Those books also feature excellent storytelling, but they break things down into a very specific set of steps and attributes that an organization or individual can apply today.

Duhigg’s writing is easy to read, and that’s probably the book’s greatest strength. Yet with most of it grounded more in interesting anecdotes than in credible research, the examples unfortunately make for more entertaining reading than deep insight.

Bulletproofing MySQL replication with checksums

bulletproof glass

Also find Sean Hull’s ramblings on twitter @hullsean.

Are your MySQL replicas running well? You might not even know if they aren’t. One of the scariest things about MySQL replication is that it can drift out of sync with the master “silently”. No errors, no warnings.

  1. What and Why?
    MySQL’s replication solution evolved as a statement-based technology. Instead of sending actual block changes, MySQL just has to log committed transactions and reapply those on the slave side. This affords a wonderful array of topologies and different uses, but it has its drawbacks. The biggest occur when data does not get updated or changed in the same way on the slave. If you’re new to MySQL or coming from the Oracle world, you might expect that this would flag an error. But there are many scenarios in which MySQL will not flag an error:

    • mixed transactional and non-transactional tables
    • use of non-deterministic functions such as uuid()
    • stored procedures and functions
    • update with LIMIT clause

    There are others but suffice it to say if you want to rely on your slave being consistent, you need to check it!
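
    To make one of those cases concrete (the table and column names are invented), a statement like the following is shipped to the slave as-is under statement-based replication, and without an ORDER BY there is no guarantee the slave picks the same ten rows the master did:

    $ mysql -e "UPDATE prod.jobs SET status = 'done' WHERE status = 'new' LIMIT 10"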

  2. The solution – mathematical checksums
    If you’re a seasoned Linux user, you’re probably familiar with the md5sum command. It creates a checksum on a file. You can do so on different servers to compare a file in a mathematically exact way. In fact rsync uses this technique to efficiently determine what files or pieces of files need to be copied across a network. That’s what makes it so fast!

    It turns out that MySQL can checksum tables too. However, were we to build our own solution, we might have trouble doing so manually, as table data is constantly in a state of flux.

    Enter Percona’s pt-table-checksum tool formerly part of Maatkit. Run it periodically against your master schemas or the entire instance if you like. It will store checksums of all of your tables in a special checksum table. The data from this table then will propagate through replication to all of your connected slaves.

    The tool then has a check mode, which allows you to verify all the connected slaves are ok, or report the differences if it finds any.

  3. Step-by-step Setup
    First you’ll need to grab a copy of the Percona Toolkit. Note that if you previously installed Maatkit, you may want to delete those old scripts to avoid confusion: mk-table-checksum if you used Maatkit, or pt-table-checksum if you have the 1.0 versions. You likely installed them using wget or a perl Makefile, so you may need to go and remove them manually.

    Assuming you’ve already got the Percona repository installed, issue:

    $ yum install -y percona-toolkit

    I’ve found some of these tools to be rather fussy about getting all the options right. The first thing to do, which will help simplify this, is to add a section to your local user’s “.my.cnf” file like this:

    [client]

    user=root

    password=myrootpassword

    That way the Percona tools will look there whenever they need authentication credentials. Otherwise we assume localhost for this example, so you should verify that you can connect with the mysql client as root from localhost.
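
    Since that file now holds the root password, it’s worth locking down its permissions as well, for example:

    $ chmod 600 ~/.my.cnf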

    Now let’s checksum the “mysql” system schema.

    $ pt-table-checksum --replicate=test.checksum --create-replicate-table --databases=mysql localhost

    Note the --create-replicate-table option. You only need this option the first time. After that, the test.checksum table will already exist.

    You should see some output that looks like this:

    TS ERRORS DIFFS ROWS CHUNKS SKIPPED TIME TABLE

    04-24T16:06:45 0 0 0 1 0 0.099 mysql.columns_priv

    04-24T16:06:45 0 0 32 1 0 0.100 mysql.db

    04-24T16:06:45 0 0 0 1 0 0.096 mysql.event

    04-24T16:06:45 0 0 0 1 0 0.096 mysql.func

    04-24T16:06:45 0 0 38 1 0 0.102 mysql.help_category

    04-24T16:06:45 0 0 452 1 0 0.106 mysql.help_keyword

    04-24T16:06:46 0 0 993 1 0 0.096 mysql.help_relation

    04-24T16:06:46 0 0 506 1 0 0.100 mysql.help_topic

    04-24T16:06:46 0 0 0 1 0 0.099 mysql.host

    04-24T16:06:46 0 0 0 1 0 0.104 mysql.ndb_binlog_index

    04-24T16:06:46 0 0 0 1 0 0.107 mysql.plugin

    04-24T16:06:46 0 1 1 1 0 0.115 mysql.proc

    04-24T16:06:46 0 0 0 1 0 0.186 mysql.procs_priv

    04-24T16:06:46 0 1 1 1 0 0.097 mysql.proxies_priv

    04-24T16:06:47 0 0 0 1 0 0.097 mysql.servers

    04-24T16:06:47 0 0 0 1 0 0.096 mysql.tables_priv

    04-24T16:06:47 0 0 0 1 0 0.098 mysql.time_zone

    04-24T16:06:47 0 0 0 1 0 0.097 mysql.time_zone_leap_second

    04-24T16:06:47 0 0 0 1 0 0.100 mysql.time_zone_name

    04-24T16:06:47 0 0 0 1 0 0.100 mysql.time_zone_transition

    04-24T16:06:47 0 0 0 1 0 0.095 mysql.time_zone_transition_type

    04-24T16:06:47 0 1 38 1 0 0.100 mysql.user

  4. How to check slaves
    Once you’ve collected all those fancy checksums for your tables, nicely timestamped, you’ll want to verify that your slaves are happily in sync. You can do that with the following command, also on the master:

    $ pt-table-checksum --replicate=test.checksum --replicate-check-only --databases=mysql localhost

    If there are no differences you’ll see no output. If you have a difference it’ll look something like this:

    Differences on ip-10-15-27-19

    TABLE CHUNK CNT_DIFF CRC_DIFF CHUNK_INDEX LOWER_BOUNDARY UPPER_BOUNDARY

    mysql.user 1 1 1

    In our case you can see we created some users on the slaves accidentally, hence the differences. It illustrates how easy it is for differences to creep into your environment and also how easy it now is to find them!
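
    If you prefer to look at the raw data, you can also query the checksum table directly on each slave. A rough sketch (the column names follow the table that pt-table-checksum creates, so verify them against your version of the tool):

    $ mysql -e "SELECT db, tbl, this_cnt, master_cnt, this_crc, master_crc FROM test.checksum WHERE master_crc <> this_crc OR master_cnt <> this_cnt OR ISNULL(master_crc) <> ISNULL(this_crc)"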

  5. Special Cases
    One of my clients uses Drupal, and they’ve had trouble replicating the semaphore table. This table is a MyISAM table, and unfortunately no one dares convert it to InnoDB. So from time to time some gunk builds up in there, and it fails on the slave. We could clean out the table, but we decided to just filter out this one table. Since Drupal doesn’t use fully qualified schema.table names in its code, only USE statements, we have found this to be safe.

    However the Percona Toolkit explicitly checks for replication filters and will not run. It’ll stop with an error as follows:

    $ pt-table-checksum --replicate=test.checksum --databases=sean --ignore-tables=semaphore localhost

    04-24T15:59:29 Replication filters are set on these hosts:

    ip-10.15.27.19

    replicate_ignore_table = sean.semaphore

    ip-10-15-27-72

    replicate_ignore_table = sean.semaphore

    ip-10-15-27-18

    replicate_ignore_table = sean.semaphore

    Please read the --check-replication-filters documentation to learn how to solve this problem. at /usr/bin/pt-table-checksum line 6166.

The solution is the --nocheck-replication-filters option. Keep in mind that this sanity check is there for a reason, so be sure to skip the relevant tables both when building your checksums and when checking them.

To build checksums skipping the semaphore table use this command:

$ pt-table-checksum --replicate=test.checksum --ignore-tables=prod.semaphore --nocheck-replication-filters localhost

Now you can check your slaves but ignore the semaphore table:


$ pt-table-checksum --replicate=test.checksum --replicate-check-only --ignore-tables=prod.semaphore --nocheck-replication-filters localhost

We also found a bug which prevented us from specifying multiple tables with the ignore-tables option. So we used multiple invocations to handle different schemas, like this:


$ pt-table-checksum --replicate=test.checksum --replicate-check-only --databases=prod1 --ignore-tables=prod1.semaphore --nocheck-replication-filters localhost

$ pt-table-checksum --replicate=test.checksum --replicate-check-only --databases=prod2 --ignore-tables=prod2.semaphore --nocheck-replication-filters localhost

  6. Crash Protection
    If you’ve used MySQL replication for any length of time, you’ve probably seen a server crash. MySQL replication can have trouble restarting if you’re using temporary tables, as they’ll be missing upon restart. Also, MySQL before 5.5 leaves syncing of the replication info files to the operating system, so they may be incorrect after a crash.

    1. Use MySQL 5.5 if possible

    There are some new parameters in 5.5 that protect the info files from a crash. These are a great addition, and will make your slave databases more bulletproof.

    sync_master_info = 1

    sync_relay_log = 1

    sync_relay_log_info = 1

    2. Don’t use temporary tables

    After a restart they’ll simply be gone, so queries requiring or running against them will fail.

     The Percona guys’ new book, High Performance MySQL, Third Edition, suggests an alternative to temporary tables: use a special schema to hold your temp data, but create the tables as normal permanent tables. Be sure your application creates them with unique names, using the connection_id() for example. Lastly, have a cleanup process drop these tables periodically, based on closed connection_ids.