
Migrating to the Cloud – Why and why not?

A lot of technical forums and discussions have highlighted the limitations of EC2, and how it loses on performance when compared to physical servers of equal cost.  They argue that you can get much more hardware and bigger iron for the same money, so it seems foolhardy to turn to the cloud.  Why the mad rush to the cloud, then?  Of course if all you're looking at is performance, it might seem odd indeed.  But another way of looking at it is this: if performance in the cloud is not as good, then performance is clearly not the driving factor behind cloud adoption.

CIOs and CTOs are often asking questions more along the lines of, "Can we deploy in the cloud and live with the performance limitations, and if so, how do we get there?"

Another common question: "Is it a good idea to deploy your database in the cloud?"  It depends!  Let's take a look at some of the strengths and weaknesses; then you decide.

8 big strengths of the cloud

  1. Flexibility in disaster recovery – it becomes a script, no need to buy additional hardware (see the sketch after this list)
  2. Easier roll out of patches and upgrades
  3. Reduced operational headache – scripting and automation becomes central
  4. Uniquely suited to seasonal traffic patterns – keep online only the capacity you’re using
  5. Low initial investment
  6. Auto-scaling – set thresholds and deploy new capacity automatically
  7. Easy compromise response – take the server offline and spin up a new one
  8. Easy setup of dev, qa & test environments
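
Several of these strengths come down to the same thing: infrastructure becomes code.  As a minimal sketch of what "disaster recovery becomes a script" means, here's how a replacement server could be launched from a prebuilt machine image using Python and the boto3 library.  The region, AMI id, and instance type are hypothetical placeholders.

```python
import boto3  # assumes AWS credentials are already configured

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a replacement server from a prebuilt machine image (AMI).
# A real DR runbook would also attach volumes, restore data from
# backups, and update DNS.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI id
    InstanceType="m5.large",           # hypothetical instance type
    MinCount=1,
    MaxCount=1,
)
print("replacement instance:", response["Instances"][0]["InstanceId"])
```

Because recovery is just a script, the same few lines serve equally well for disaster recovery, compromise response, and spinning up dev, qa & test environments.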

Some challenges with deploying in the cloud

  1. Big cultural shift in how operations is done
  2. Lower SLAs and less reliable virtual servers – mitigate with automation
  3. No perimeter security – new model for managing & locking down servers
  4. Where is my data?  — concerns over compliance and privacy
  5. Variable disk performance – can be problematic for MySQL databases
  6. New procurement process can be a hurdle

Many of these challenges can be mitigated.  The promise of infrastructure deployed in the cloud is huge, so gradual adoption is perhaps the best option for many firms.  Mitigate the weaknesses of the cloud as follows:

  • Use encrypted filesystems and backups where necessary (see the backup sketch after this list)
  • Keep offsite backups in-house or at an alternate cloud provider
  • Work around variable EBS performance by caching at every layer of your application stack
  • Employ configuration management & automation tools such as Puppet & Chef
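
On the encrypted backups point, here's a minimal sketch of one way to do it: pipe mysqldump through gpg so the dump never touches disk unencrypted.  The database name, user, and recipient key are hypothetical placeholders.

```python
import subprocess

# Dump the database (hypothetical user and database name)...
dump = subprocess.Popen(
    ["mysqldump", "--single-transaction", "-u", "backup_user", "app_db"],
    stdout=subprocess.PIPE,
)

# ...and encrypt the stream with gpg before it is written anywhere.
with open("app_db.sql.gpg", "wb") as out:
    subprocess.check_call(
        ["gpg", "--encrypt", "--recipient", "ops@example.com"],
        stdin=dump.stdout,
        stdout=out,
    )
dump.stdout.close()
dump.wait()
```

The resulting file can then be shipped offsite, in-house or to an alternate cloud provider, without exposing the data in transit or at rest.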

Quora discussion – Why or why not to migrate to the cloud?

Root Cause Analysis – What is it and why is it important?

Root Cause Analysis is the process of identifying the ultimate source and cause of an outage.  When an outage causes serious downtime of a website, organizations are typically in crisis mode.  The urgency of resolution sometimes pushes aside due process, change management, and general caution.  Root Cause Analysis attempts, as much as possible, to preserve logfiles, configurations, and the current state of systems for later analysis.

With traditional physical servers, outages can be caused by hardware failure, operator error, or a security breach.  Since you're dealing with one physical machine, resolving the issue necessarily means disturbing the very things that broke.  So caution and later analysis must be balanced against immediate problem resolution.

Another silver lining of cloud-hosted solutions is around root cause analysis.  If a server is breached, for example, it can be shut down immediately while its current state is preserved as a disk or EBS snapshot.  A new server can then be fired up from an AMI, your server rebuilt from scripts or a template, and you're back up and running.  Save the snapshot for later analysis.
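
As a concrete illustration, here's a minimal sketch of that sequence using Python and the boto3 library; the instance and volume ids are hypothetical placeholders.

```python
import boto3  # assumes AWS credentials are already configured

ec2 = boto3.client("ec2", region_name="us-east-1")

BREACHED_INSTANCE = "i-0123456789abcdef0"  # hypothetical instance id
BREACHED_VOLUME = "vol-0123456789abcdef0"  # its root EBS volume

# 1. Take the compromised server offline immediately.
ec2.stop_instances(InstanceIds=[BREACHED_INSTANCE])

# 2. Freeze its disk state as an EBS snapshot for later analysis.
snap = ec2.create_snapshot(
    VolumeId=BREACHED_VOLUME,
    Description="preserved for root cause analysis",
)
print("snapshot for forensics:", snap["SnapshotId"])

# 3. A replacement can now be launched from a known-good AMI,
#    as in the disaster recovery sketch earlier in this post.
```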

The same approach works for analyzing outages caused by operator error.  Hardware failures are more common and expected in cloud-hosted environments, which should, and really must, push adoption of infrastructure best practices, namely keeping scripts at hand that can rebuild everything from bare metal.

More discussion of root cause analysis by Sean Hull on Quora.

Database Replication – What is it and why is it important?

Database replication is a service for shipping changes to your database off to a copy housed on another server, potentially even in another data center.

Changes to the records in your database are organized into transactions, each of which gets a unique identifier or change number.  Those groups of changes are written to transaction logs, which are then shipped across to the sibling database server and applied there.

Oracle offers this facility in the form of a standby database, aka Data Guard.  Although it can be messy to set up, it is fairly bulletproof.  If you don't see any errors in your logfiles, you can rest assured that the data on your main or master database server looks exactly the same as what's on your secondary server.  With MySQL the situation is a bit more complicated.  MySQL replication was built around statements, not changed data.  So those same statements or instructions (SQL DML & DDL statements) get rerun on the secondary database server.  Some of the problems associated with this are:

  1. MySQL standby transactions may execute at different times than on the master
  2. Serialization is important to maintain order.  Transactions need to be applied in the same order as on the primary.  However, not all tables & storage engines in MySQL are transactional, which can cause trouble and potentially break replication on the slave side.
  3. MySQL replication can fail silently, that is, data can drift out of sync with the master without your knowing
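
Despite those caveats, the basic mechanics of pointing a slave at its master are simple.  Here's a minimal sketch using mysql-connector-python; the hostnames, credentials, and binlog coordinates are hypothetical placeholders, and the coordinates would come from running SHOW MASTER STATUS on the primary.

```python
import mysql.connector  # pip install mysql-connector-python

# Connect to the secondary (slave) server -- hypothetical host & credentials.
slave = mysql.connector.connect(host="db-slave", user="root", password="secret")
cur = slave.cursor()

# Point the slave at the master's binary log and start replicating.
cur.execute("""
    CHANGE MASTER TO
      MASTER_HOST = 'db-master',
      MASTER_USER = 'repl',
      MASTER_PASSWORD = 'repl_secret',
      MASTER_LOG_FILE = 'mysql-bin.000001',
      MASTER_LOG_POS = 4
""")
cur.execute("START SLAVE")

# SHOW SLAVE STATUS reports whether the IO & SQL threads are running.
cur.execute("SHOW SLAVE STATUS")
print(cur.fetchone())
```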

Caveats aside, MySQL replication is powerful and quite fast.  With the right caution and the correct tools, it can be as robust and reliable as Oracle's own standby database technology.

  • Use checksum tools like Maatkit to verify & compare tables on primary & secondary servers (see the sketch after this list)
  • Use rigorous monitoring to watch error logs, and checksum logs hourly if necessary
  • Use Maatkit's sync tool to resync tables if they get out of whack
  • Be especially vigilant if you’re taking backups off the secondary server
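
Maatkit's checksum tool (mk-table-checksum) is the right choice for production, since it checksums tables in chunks while they're live.  But the underlying idea can be illustrated with MySQL's built-in CHECKSUM TABLE statement, comparing the result on primary & secondary.  A simplified sketch, with hypothetical hosts, credentials, and table names, assuming the tables are quiet while you check:

```python
import mysql.connector

def table_checksum(host, table):
    """Return MySQL's CHECKSUM TABLE value for one table on one server."""
    conn = mysql.connector.connect(host=host, user="root", password="secret")
    cur = conn.cursor()
    cur.execute("CHECKSUM TABLE " + table)
    _, checksum = cur.fetchone()
    conn.close()
    return checksum

for table in ("users", "orders"):
    primary = table_checksum("db-master", table)
    secondary = table_checksum("db-slave", table)
    if primary != secondary:
        print("DRIFT on %s: master=%s, slave=%s" % (table, primary, secondary))
```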

Replication can also be a powerful high availability tool in your arsenal.  Set up as a master-master cluster, you can keep your application pointed at the "active" side while performing blocking operations on the inactive database server.  Since a master-master cluster, also known as circular replication, applies changes to both databases, you'll need to stop replication during the blocking operation.  Once it has completed on the inactive side, redirect the webservers & application to the database where you completed the operation, then re-enable replication and watch the databases catch up with each other!
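
Here's a minimal sketch of that dance, assuming a hypothetical pair of hosts, db-active and db-inactive, and a slow ALTER TABLE as the blocking operation:

```python
import mysql.connector

active = mysql.connector.connect(host="db-active", user="root", password="secret")
inactive = mysql.connector.connect(host="db-inactive", user="root", password="secret")

# 1. Stop the slave thread on the ACTIVE side, so the blocking ALTER
#    doesn't replicate over while it is still serving traffic.
active.cursor().execute("STOP SLAVE")

# 2. Run the blocking operation on the INACTIVE side.
inactive.cursor().execute("ALTER TABLE orders ADD COLUMN notes VARCHAR(255)")

# 3. Repoint the webservers & application at db-inactive here
#    (load balancer, DNS, or config change) -- it becomes the active side.

# 4. Re-enable replication on the former active side; it replays the
#    ALTER from the binlog and the two databases catch up.
active.cursor().execute("START SLAVE")
```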

Quora discussions – What is database replication?

MySQL Cluster In The Cloud – A Manager's Guide

The term clustering is often used loosely in the context of enterprise databases.  In relation to MySQL in the cloud you can configure:

  1. Master-master active/passive
  2. Sharded MySQL Database
  3. NDB Cluster

Master-Master active/passive replication

Also sometimes known as circular replication, this is used for high availability.  You can perform operations on the inactive node (backups, alter tables, or slow operations), then switch roles so the inactive node becomes active.  You would then perform the same operations on the former master.  The application sees "zero downtime" because it is always pointing at the active master database.  In addition, the inactive master can be used as a read-only slave to run SELECT queries and large reporting queries.  This is quite powerful, as typical web applications tend to have 80% or more of their work performed by read-only queries such as browsing, viewing, and verifying data and information.
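
That read/write split can be as simple as routing in the application layer.  A naive sketch, with hypothetical hosts and credentials (a real deployment would also handle failover and replication lag):

```python
import mysql.connector

writer = mysql.connector.connect(host="db-active", user="app", password="secret")
reader = mysql.connector.connect(host="db-inactive", user="app", password="secret")

def run(sql, params=()):
    """Send SELECTs to the read-only master, everything else to the active one."""
    conn = reader if sql.lstrip().upper().startswith("SELECT") else writer
    cur = conn.cursor()
    cur.execute(sql, params)
    return cur

rows = run("SELECT id, name FROM users WHERE id = %s", (42,)).fetchall()
```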

Sharded MySQL Database

This is similar to what in the Oracle world is called "application partitioning".  In fact, before Oracle 10g, most Parallel Server and RAC installations required you to do this.  For example, a user table might be sharded by putting names A-F on node A, G-L on node B, and so forth.

You can also achieve this somewhat transparently with user_ids.  MySQL has an AUTO_INCREMENT column attribute to handle serving up unique ids, plus a pair of cluster-friendly settings, auto_increment_increment and auto_increment_offset.  So in an example where you had *TWO* nodes, all EVEN numbered IDs would be generated on node A and all ODD numbered IDs on node B.  The nodes still replicate changes to each other, yet avoid collisions.
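
A minimal sketch of that configuration, with hypothetical hosts and credentials (in practice you'd also set these in my.cnf so they survive a restart):

```python
import mysql.connector

# Node A hands out EVEN ids (2, 4, 6, ...), node B hands out ODD ids
# (1, 3, 5, ...); both step by 2, so they never collide.
for host, offset in (("db-node-a", 2), ("db-node-b", 1)):
    conn = mysql.connector.connect(host=host, user="root", password="secret")
    cur = conn.cursor()
    cur.execute("SET GLOBAL auto_increment_increment = 2")
    cur.execute("SET GLOBAL auto_increment_offset = %d" % offset)
    conn.close()
```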

Obviously all this has to be done with care, as the database does not otherwise prevent you from doing things that would break replication or compromise your data integrity.

One further caution with sharding your database is that although it increases write throughput by horizontally scaling the master, it ultimately reduces availability.   An outage of any server in the cluster means at least a partial outage of the cluster itself.

NDB Cluster

This is actually a storage engine, and can be used in conjunction with InnoDB and MyISAM tables.  Normally you would use it sparingly, for a few special tables, to provide availability and read/write access to multiple masters.  It is decidedly *NOT* like Oracle RAC, though many mistake it for that technology.

MySQL Clustering In The Cloud

The most common MySQL cluster configuration we see in the Amazon EC2 environment is by far the master-master configuration described above.  By itself it provides higher availability of the master node, plus a single read-only node on which you can horizontally scale your application queries.  What's more, you can add additional read-only slaves to this setup, allowing you to scale out tremendously.