
Cloud Computing – Disciplined Deployments

With traditional managed hosting solutions, we have best practices, we have business continuity plans, we have disaster recovery, we document our processes and all the moving parts in our infrastructure.  At least we pay lip service to these goals, though from time to time we admit to getting sidetracked by bigger fish to fry, high priorities and the emergency of the day.  We add “firedrill” to our to-do list, promising we’ll test restoring our backups.  But many times it is only in an actual emergency that we find out whether we really have all the pieces backed up and can reassemble them properly.


Cloud Computing is different.  These goals can no longer be lofty ideals; they must be put into practice.  Here’s why.

  1. Virtual servers are not as reliable as physical servers
  2. Amazon EC2 has a lower SLA than many managed hosting providers
  3. Devops introduces a new paradigm – infrastructure scripts can be version controlled
  4. The EC2 environment demands scripting and repeatability
  5. New flexibility and peace of mind

Unreliable Servers

EC2 virtual servers can and will die.  Your spinup scripts and infrastructure should treat this possibility not as some far-off anomalous event, but as a day-to-day concern.  With proper scripts and testing of various scenarios, this becomes manageable.  Use snapshots to back up EBS root volumes, and build spinup scripts with AMIs that have all the components your application requires.  Then test, test and test again.
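For illustration, here’s a minimal sketch of both steps using Python and the boto3 AWS SDK – the volume ID, AMI ID and instance type are placeholders you’d replace with your own:

```python
# Minimal sketch, Python + boto3.  All IDs below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Snapshot the EBS root volume so a dead server can be rebuilt from it.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly root volume backup",
)
print("snapshot started:", snap["SnapshotId"])

# Spin up a replacement from a pre-built AMI that has everything the app needs.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
)
print("replacement instance:", resp["Instances"][0]["InstanceId"])
```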

Amazon EC2’s SLA – Only 99.95%

The computing industry throws the 99.999% or five-nines uptime SLA standard around a lot.  That amounts to less than six minutes of downtime per year.  Amazon’s 99.95% allows for 263 minutes of downtime per year, and greater downtime merely gets you a credit on your account.  With that in mind, repeatable processes and scripts to bring your infrastructure back up in different availability zones or even different datacenters are a necessity.  Along with your infrastructure scripts, offsite backups also become a wise choice.  You should further take advantage of availability zones and regions to make your infrastructure more robust.  By using private IP addresses and the internal network, you can host a MySQL database slave in a separate zone, for instance.  You can also use GDLB, or Geographically Distributed Load Balancing, to send customers on the west coast to that zone, and those on the east coast to one closer to them.  In the event that one region or availability zone goes out, your application is still responding, though perhaps with slightly degraded performance.
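Here’s a minimal boto3 sketch of launching that slave into a different availability zone – the AMI ID and zone names are placeholders:

```python
# Minimal sketch, Python + boto3; AMI ID and zone names are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch the MySQL slave in a different availability zone from the master.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": "us-east-1b"},  # master lives in us-east-1a
)
print(resp["Instances"][0]["Placement"]["AvailabilityZone"])
```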

Devops – Infrastructure as Code

With traditional hosting, you either physically manage all of the components in your infrastructure, or have someone do it for you.  Either way a phone call is required to get things done.  With EC2, every piece of your infrastructure can be managed from code, so your infrastructure itself can be managed as software.  Whether you’re using the waterfall method or agile as your software development lifecycle, you have the new flexibility to place all of these scripts and configuration files in version control.  This raises the manageability of your environment tremendously.  It also provides a kind of ongoing documentation of all of the moving parts.  In short, it forces you to deliver on all of those best practices you’ve been preaching over the years.

EC2 Environment Considerations

When servers get restarted they get new IP addresses – both private and public.  This may affect configuration files for everything from webservers to mail servers, and database replication too.  Your new server may mount an external EBS volume which contains your database.  If that’s the case, your start scripts should check for that volume, and not start MySQL until they find it.  To further complicate things, you may choose to use software raid over a handful of EBS volumes to get better performance.
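A minimal startup-script sketch along these lines might look like the following – the device name, mount point and service name are assumptions that vary by AMI:

```python
# Minimal startup-script sketch: don't start MySQL until the data volume
# is attached and mounted.  Device, mount point and service name are
# assumptions that vary by distribution and AMI.
import os
import subprocess
import time

DEVICE = "/dev/xvdf"            # EBS volume holding the MySQL datadir
MOUNT_POINT = "/var/lib/mysql"

# Wait for the block device to appear after attachment.
while not os.path.exists(DEVICE):
    time.sleep(5)

# Mount it if it isn't mounted yet, then start MySQL.
if not os.path.ismount(MOUNT_POINT):
    subprocess.check_call(["mount", DEVICE, MOUNT_POINT])
subprocess.check_call(["service", "mysql", "start"])
```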

The more special cases you have, the more quickly you realize how important it is to manage these things in software.  And the more often the process needs to be repeated, the more time the scripts will save you.

New Flexibility in the Cloud

Ultimately, if you take into consideration less reliable virtual servers, and mitigate that with zones, regions and automated scripts, you can then enjoy all the new benefits of the cloud:

  • autoscaling
  • easy test & dev environment setup
  • robust load & scalability testing
  • vertically scaling servers in place – in minutes!
  • pause a server – incurring only storage costs for days or months as you like
  • cheaper costs for applications with seasonal traffic patterns
  • no huge up-front costs

MySQL Cluster In The Cloud – Managers Guide

The term clustering is often used loosely in the context of enterprise databases.  In relation to MySQL in the cloud you can configure:

  1. Master-master active/passive
  2. Sharded MySQL Database
  3. NDB Cluster

Master-Master active/passive replication

Also sometimes known as circular replication, this is used for high availability.  You can perform operations on the inactive node (backups, alter tables or slow operations) then switch roles so the inactive node becomes active.  You would then perform the same operations on the former master.  Applications see “zero downtime” because they are always pointing at the active master database.  In addition the inactive master can be used as a read-only slave to run SELECT queries and large reporting queries.  This is quite powerful, as typical web applications tend to have 80% or more of their work performed by read-only queries such as browsing, viewing, and verifying data and information.
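Here’s a minimal sketch of the role switch using Python and the pymysql driver – hostnames and credentials are placeholders, and a real switchover would also repoint the application at the new active master:

```python
# Minimal failover sketch (pymysql): flip the read_only flag so the formerly
# passive master becomes active.  Hostnames/credentials are placeholders.
import pymysql

def set_read_only(host, value):
    conn = pymysql.connect(host=host, user="admin", password="secret")
    with conn.cursor() as cur:
        cur.execute("SET GLOBAL read_only = %s", (1 if value else 0,))
    conn.close()

set_read_only("master-a.internal", True)    # old active goes passive
set_read_only("master-b.internal", False)   # old passive takes over writes
```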

Sharded MySQL Database

This is similar to what in the Oracle world is called “application partitioning”.  In fact, before Oracle 10 most Oracle Parallel Server and RAC installations required you to do this.  For example a user table might be sharded by putting names A-F on node A, G-L on node B and so forth.

You can also achieve this somewhat transparently with user_ids.  MySQL has an auto_increment column attribute to handle serving up unique ids.  It also has a pair of cluster-friendly settings, auto_increment_increment and auto_increment_offset.  So in an example where you had *TWO* nodes, all EVEN numbered IDs would be generated on node A and all ODD numbered IDs would be generated on node B.  The nodes would still replicate changes to each other, yet avoid collisions.
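Here’s a minimal pymysql sketch of that even/odd scheme – in practice you’d set these values in each node’s my.cnf; hostnames and credentials are placeholders:

```python
# Minimal sketch (pymysql) of the even/odd ID scheme.  In production you
# would put these settings in each node's my.cnf instead.
import pymysql

settings = {
    "node-a.internal": 2,  # offset 2 -> generates even IDs: 2, 4, 6, ...
    "node-b.internal": 1,  # offset 1 -> generates odd IDs: 1, 3, 5, ...
}
for host, offset in settings.items():
    conn = pymysql.connect(host=host, user="admin", password="secret")
    with conn.cursor() as cur:
        cur.execute("SET GLOBAL auto_increment_increment = 2")
        cur.execute("SET GLOBAL auto_increment_offset = %s", (offset,))
    conn.close()
```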

Obviously all this has to be done with care, as the database does not otherwise prevent you from doing things that would break replication and your data integrity.

One further caution with sharding your database is that although it increases write throughput by horizontally scaling the master, it ultimately reduces availability.   An outage of any server in the cluster means at least a partial outage of the cluster itself.

NDB Cluster

This is actually a storage engine, and can be used in conjunction with InnoDB and MyISAM tables.  Normally you would use it sparingly for a few special tables, providing availability and read/write access to multiple masters.  This is decidedly *NOT* like Oracle RAC, though many mistake it for that technology.

MySQL Clustering In The Cloud

The most common MySQL cluster configuration we see in the Amazon EC2 environment is by far the Master-Master configuration described above.  By itself it provides higher availability of the master node, and a single read-only node with which you can horizontally scale your application queries.  What’s more, you can add additional read-only slaves to this setup, allowing you to scale out tremendously.

Deploying MySQL on Amazon EC2 – 8 Best Practices

Also find Sean Hull’s ramblings on twitter @hullsean.

There are a lot of considerations for deploying MySQL in the Cloud.  Some concepts and details won’t be obvious to DBAs used to deploying on traditional servers.  Here are eight best practices which will certainly set you off on the right foot.

This article is part of a multi-part series Intro to EC2 Cloud Deployments.

1. Replication

Master-Slave replication is easy to set up, and provides a hot online copy of your data.  One or more slaves can also be used for scaling your database tier horizontally.
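For illustration, here’s a minimal pymysql sketch of pointing a fresh slave at its master – the host, credentials and binlog coordinates are placeholders you’d take from the master’s SHOW MASTER STATUS output:

```python
# Minimal sketch (pymysql): point a freshly-loaded slave at its master.
# Host, credentials and binlog coordinates below are placeholders.
import pymysql

conn = pymysql.connect(host="slave.internal", user="admin", password="secret")
with conn.cursor() as cur:
    cur.execute(
        "CHANGE MASTER TO MASTER_HOST='master.internal', "
        "MASTER_USER='repl', MASTER_PASSWORD='replpass', "
        "MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107"
    )
    cur.execute("START SLAVE")
conn.close()
```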

Master-Master active/passive replication can also be used to provide higher uptime, and to allow some operations such as ALTER statements and database upgrades to be done online with no downtime.  The secondary master can be used for offloading read queries, and additional slaves can also be added as in the master-slave configuration.

Caution: MySQL replication can drift silently out of sync with the master. If you’re using statement-based replication with MySQL, be sure to perform integrity checking to make your setup run smoothly. Here’s our guide to bulletproofing MySQL replication.

2. Security

You’ll want to create an AWS security group for databases which opens port 3306 – not to the internet at large, but only to your AWS-defined webserver security group.  You may also decide to use a single box and security group which allows port 22 (ssh) from the internet at large.  All ssh connections will then come in through that box, and internal security groups (database & webserver groups) should only allow port 22 connections from that security group.
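Here’s a minimal boto3 sketch of that database security group rule – the group IDs are placeholders:

```python
# Minimal sketch (boto3): allow MySQL traffic into the database security
# group only from the webserver security group, never from 0.0.0.0/0.
# Group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.authorize_security_group_ingress(
    GroupId="sg-0db0123456789abcd",       # database security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        # source is the webserver group, not a CIDR block
        "UserIdGroupPairs": [{"GroupId": "sg-0web123456789abcd"}],
    }],
)
```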

When you set up replication, you’ll be creating users and granting privileges.  You’ll need to grant to the wildcard ‘%’ hostname designation, as your internal and external IPs will change each time a server dies. This is safe since you expose your database server’s port 3306 only to other AWS security groups, and not to internet hosts.
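A minimal pymysql sketch of such a grant follows – MySQL 5.x-era syntax, with placeholder user and password:

```python
# Minimal sketch (pymysql) of the wildcard replication grant described above.
# MySQL 5.x-era GRANT syntax; user and password are placeholders.
import pymysql

conn = pymysql.connect(host="master.internal", user="admin", password="secret")
with conn.cursor() as cur:
    # '%' hostname: safe here because port 3306 is only reachable from
    # other AWS security groups, not from the internet.
    cur.execute(
        "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' "
        "IDENTIFIED BY 'replpass'"
    )
conn.close()
```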

You may also decide to use an encrypted filesystem for your database mount point, your database backups, and/or your entire filesystem.  Be particularly careful with your most sensitive data.  If compliance requirements dictate, choose to store very sensitive data outside of the cloud, and use secure network connections to incorporate it into application pages.

Be particularly careful of your AWS logins.  The password recovery mechanism in Amazon Web Services is all that prevents an attacker from controlling your entire infrastructure, after all.

3. Backups

There are a few ways to back up a MySQL database.  By far the easiest way in EC2 is using the AWS snapshot mechanism for EBS volumes.  Keep in mind you’ll want to encrypt these snapshots, as S3 may not be as secure as you might like.  Although you’ll need to lock your MySQL tables during the snapshot, it will typically only take a few seconds before you can release the database locks.
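Here’s a minimal sketch of that lock-snapshot-unlock dance using pymysql and boto3 – the host and volume ID are placeholders:

```python
# Minimal sketch: hold a global read lock just long enough to start the
# snapshot of the EBS volume holding the datadir.  Host and volume ID
# are placeholders.
import boto3
import pymysql

ec2 = boto3.client("ec2", region_name="us-east-1")
conn = pymysql.connect(host="db.internal", user="admin", password="secret")

with conn.cursor() as cur:
    cur.execute("FLUSH TABLES WITH READ LOCK")   # quiesce writes briefly
    try:
        # returns quickly; the snapshot completes in the background
        snap = ec2.create_snapshot(
            VolumeId="vol-0123456789abcdef0",
            Description="mysql datadir snapshot",
        )
    finally:
        cur.execute("UNLOCK TABLES")             # locks held only seconds

conn.close()
print("snapshot started:", snap["SnapshotId"])
```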

Now snapshots are great, but they can only be used within the AWS environment, so it also behooves you to perform additional backups and move them offsite, either to another cloud provider or to your own internal servers.  For this your choices are logical backups or hotbackups.

mysqldump can perform logical backups for you.  These backups perform SELECT * on every table in your database, so they can take quite some time, and really destroy the warm blocks in your InnoDB buffer cache.  What’s more, rebuilding a database from a dump can take quite some time.  All these factors should be considered before deciding a dump is the best option for you.

xtrabackup is a great open source tool available from Percona.  It can perform hotbackups of all MySQL tables including MyISAM, InnoDB and XtraDB if you use them.  This means the database stays online, without locking tables, and with smarter, less destructive hits to your buffer cache and database server as a whole.  The hotbackup builds a complete copy of your datadir, so bringing up the server from a backup involves setting the datadir in your my.cnf file and starting the server.
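For illustration, here’s a minimal Python sketch of both offsite options driven through subprocess – credentials and paths are placeholders, and you’d pick one approach, not both:

```python
# Minimal sketch of both offsite-backup options via subprocess.
# Credentials and paths are placeholders; pick one approach, not both.
import subprocess

# Logical backup: portable, but slow and hard on the buffer cache.
with open("/backups/all-databases.sql", "wb") as out:
    subprocess.check_call(
        ["mysqldump", "--single-transaction", "--all-databases",
         "-u", "root", "-psecret"],
        stdout=out,
    )

# Hotbackup with Percona's tool: copies the datadir while MySQL stays online.
subprocess.check_call(
    ["innobackupex", "--user=root", "--password=secret", "/backups/hot"]
)
```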

We wrote a handy guide to using hotbackups to set up replication.

4. Disk I/O

Obviously Disk I/O is of paramount importance for the performance of any database server, including MySQL.  In AWS you do not want to use instance store storage at all.  Be sure your AMI is built on EBS, and further, use a separate EBS mount point for the database datadir.

An even better configuration than the above, though slightly more complex to set up, is a software raid stripe across a number of EBS volumes.  Linux’s software raid will create an md0 device file, on which you then create a filesystem – use xfs.  Keep in mind that this arrangement will require some care during snapshotting, but it can still work well.  The performance gains are well worth it!
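A minimal sketch of building such a stripe from a startup script follows – the device names are assumptions that vary by AMI:

```python
# Minimal sketch: stripe four attached EBS volumes into one md device and
# put xfs on it.  Device names are placeholders and vary by AMI.
import subprocess

devices = ["/dev/xvdf", "/dev/xvdg", "/dev/xvdh", "/dev/xvdi"]

# RAID 0 stripe across the EBS volumes for better throughput.
subprocess.check_call(
    ["mdadm", "--create", "/dev/md0", "--level=0",
     "--raid-devices=" + str(len(devices))] + devices
)
subprocess.check_call(["mkfs.xfs", "/dev/md0"])
subprocess.check_call(["mount", "/dev/md0", "/var/lib/mysql"])
```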

5. Network & IPs

When configuring Master & Slave replication, be sure to use the internal or private IPs and internal domain names so as not to incur additional network charges.  The same goes for your webservers which will point to your master database, and one or more slaves for read queries.

6. Availability Zones

Amazon Web Services provides a tremendous leap in options for high availability.  Take advantage of availability zones by putting one or more of your slaves in a separate zone where possible.  Interestingly, if you ensure the use of internal or private IP addresses and names, you will not incur additional network charges to servers in other availability zones.

7. Disaster Recovery

EC2 servers are out of the gate *NOT* as reliable as traditional servers.  This should send shivers down your spine if you’re trying to treat AWS like a traditional hosted environment.  You shouldn’t.  It should force you to get serious about disaster recovery.  Build bulletproof scripts to spin up your servers from custom-built AMIs, and test them.  Finally you’re taking disaster recovery as seriously as you always wanted to.  Take advantage of Availability Zones as well, and test various failure scenarios.

8. Vertical and Horizontal Scaling

Interestingly, vertical scaling can be done quite easily in EC2.  If you start with a 64bit AMI, you can stop such a server without losing the root EBS mount.  From there you can start a new larger instance in EC2 using that existing EBS root volume, and voila – you’ve VERTICALLY scaled your server in place.  This is quite a powerful feature at the system administrator’s disposal.  Devops has never been smarter!  You can do the same to scale *DOWN* if you are no longer using all the power you thought you’d need.  Combine this phenomenal AWS feature with the MySQL master-master active/passive configuration, and you can scale vertically with ZERO downtime.  Powerful indeed.
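Here’s a minimal boto3 sketch of that stop-resize-start sequence – the instance ID and target type are placeholders:

```python
# Minimal sketch (boto3) of in-place vertical scaling: stop the EBS-backed
# instance, change its type, start it again.  ID and types are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# The root EBS volume survives the stop; only the instance size changes.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.xlarge"},
)
ec2.start_instances(InstanceIds=[instance_id])
```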

We wrote an EC2 Autoscaling Guide for MySQL that you should review.

Along with vertical scaling, you’ll also want the ability to scale out – that is, add more servers to the mix as required, and scale back when your needs reduce.  Build smarts into your application so you can point SELECT queries to read-only slaves.  Many web applications exhibit the bulk of their work in SELECTs, so being able to scale those horizontally is very powerful and compelling.  By baking this logic into the application you also allow the application to check for slave lag.  If your slave is lagging slightly behind the master, you can see stale or missing data.  In those cases your application can choose to go to the master to get the freshest data.
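Here’s a minimal pymysql sketch of that routing decision – hostnames, credentials and the lag threshold are placeholders:

```python
# Minimal sketch (pymysql): route SELECTs to the slave unless it is
# lagging beyond a threshold, in which case fall back to the master.
import pymysql

MAX_LAG_SECONDS = 5   # placeholder threshold; tune to your tolerance

def connection_for_reads():
    slave = pymysql.connect(host="slave.internal", user="app",
                            password="secret", db="mydb",
                            cursorclass=pymysql.cursors.DictCursor)
    with slave.cursor() as cur:
        cur.execute("SHOW SLAVE STATUS")
        status = cur.fetchone()
    lag = status and status.get("Seconds_Behind_Master")
    if lag is not None and lag <= MAX_LAG_SECONDS:
        return slave
    slave.close()  # stale or broken replication: go get the freshest data
    return pymysql.connect(host="master.internal", user="app",
                           password="secret", db="mydb")
```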

What about RDS?

Wondering whether RDS is right for you? It may be. We wrote a comprehensive guide to evaluating RDS over MySQL.

If you read this far, you should grab our newsletter!

Managing Security in Amazon Web Services

Security is on everyone’s mind when talking about the cloud.  What are some important considerations?

For the web operations team:

  1. AWS has no perimeter security – should this be an overriding concern?
  2. How do I manage authentication keys?
  3. How do I harden my machine images?


Amazon’s security groups can provide strong security if used properly.  Create security groups with specific minimum privileges, and do not expose your sensitive data – i.e. your database – to the internet directly, but only to other security groups.  On the positive side, AWS security groups mean there is no single point to mount an attack against, as there is with a traditional enterprise’s network perimeter.  What’s more there is no opportunity to accidentally erase network rules, since they are defined in groups in AWS.

Authentication keys can be managed in a couple of different ways.  One way is to build them into the AMI.  From there, any server spun up from that AMI will be accessible by the owner of those credentials.  Alternatively, a more flexible approach is to pass in the credentials when you spin up the server, allowing you to dynamically control who has access to that server.
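For illustration, here’s a minimal boto3 sketch of passing in credentials at spinup time – the AMI ID and keypair name are placeholders:

```python
# Minimal sketch (boto3): inject the SSH keypair at launch time instead of
# baking credentials into the AMI.  Names and IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    KeyName="ops-team-keypair",  # controls who can SSH into this server
)
print(resp["Instances"][0]["InstanceId"])
```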

Hardening your AMIs in EC2 is much like hardening any Unix or Linux server.  Disable user accounts, ssh password authentication, and unnecessary services.  Consider a tool like AppArmor to fence applications in and keep them out of areas they don’t belong.  This should be an ongoing process, and one to repeat if the unfortunate happens and you are compromised.

You should also consider:

  • The AWS password recovery mechanism is not as secure as a traditional managed hosting provider’s.  Use a very strong password to lock down your AWS account and monitor its usage.
  • Consider encrypted filesystems for your database mount point.  Pass in the decryption key at server spinup time.
  • Consider storing particularly sensitive data outside of the cloud, and expose it through an SSL API call.
  • Consider encrypting your backups.  S3 security is not proven.

For CTOs and Operations Managers:

  1. Where is my data physically located?
  2. Should I rely entirely on one provider?
  3. What if my cloud provider does not sufficiently protect the network?

Although you do not know where your data is physically located in S3 and EC2, you have the choice of whether or not to encrypt your data and/or the entire filesystem.  You also control access to the server.  So from a technical standpoint it may not matter whether you control where the server is physically.  Of course laws, standards and compliance rules may dictate otherwise.

You also don’t want to put all your eggs in one basket.  There are all sorts of things that can happen to a provider, from going out of business, to lawsuits that directly or indirectly affect you, to political pressure, as in the Wikileaks case.  A cloud provider may well choose the easier road and pull the plug rather than deal with complicated legal entanglements.  For all these reasons you should be keeping regular backups of your data, either on in-house servers or at a second provider.

As a further insurance option, consider host intrusion detection software.  This will give you additional peace of mind against the potential of your cloud provider not sufficiently protecting their own network.

Additionally consider that:

  • A simple password recovery mechanism in AWS is all that sits between a hacker and your entire infrastructure.  Choose a very secure password, and monitor its usage.
  • EC2 servers are not nearly as reliable as traditional physical servers.  Test your deployment scripts, and your disaster recovery scenarios, again and again.
  • Responding to a compromise will be much easier in the cloud.  Spin up a replacement server, and keep the EBS volume around for later analysis.

As with any new paradigm there is an element of the unknown and unproven which we are understandably concerned about.  Cloud hosted servers and computing can be just as secure if not more secure than traditional managed servers, or servers you can physically touch in-house.

How To Build Highly Scalable Web Applications For The Cloud

Scalability in the cloud depends a lot on application design.  Keep these important points in mind when you are designing your web application and you will scale much more naturally and easily in the cloud.


1. Think twice before sharding

  • it increases your infrastructure and application complexity
  • it reduces availability – more servers mean more outages
  • you have to worry about globally unique primary keys

2. Bake read/write database access into the application

  • allows you to check for stale data, and fall back to the write master
  • creates higher availability for read-only data
  • gracefully degrade to read-only website functionality if master goes down
  • horizontal scalability melds nicely with cloud infrastructure and IAAS

3. Save application state in the database

  • avoid in-memory locking structures that won’t scale with multiple web application servers
  • consider a database field for managing application locks (see the sketch after this list)
  • consider stored procedures for isolating and insulating developers from db particulars
  • a last updated timestamp field can be your friend
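Here’s a minimal pymysql sketch combining a lock field with a last-updated timestamp – the table and column names are placeholders:

```python
# Minimal sketch (pymysql) of a database-backed application lock with a
# last-updated timestamp; table and column names are placeholders.
import pymysql

conn = pymysql.connect(host="master.internal", user="app",
                       password="secret", db="mydb", autocommit=True)
with conn.cursor() as cur:
    # one-time setup
    cur.execute("""
        CREATE TABLE IF NOT EXISTS app_locks (
            name       VARCHAR(64) PRIMARY KEY,
            locked_by  VARCHAR(64) NULL,
            updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
                       ON UPDATE CURRENT_TIMESTAMP
        )
    """)
    cur.execute("INSERT IGNORE INTO app_locks (name) VALUES ('nightly-job')")

    # atomic acquire: succeeds on exactly one web server
    rows = cur.execute(
        "UPDATE app_locks SET locked_by = %s "
        "WHERE name = 'nightly-job' AND locked_by IS NULL",
        ("web-1",),
    )
    if rows == 1:
        print("lock acquired")
```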

4. Consider Dynamic or Auto-scaling

  • a great feature of the cloud – spin up new servers to handle load on-demand
  • lean towards being proactive rather than reactive and measure growth and trends
  • watch the procurement process closely lest it come back to bite you

5. Setup Monitoring and Metrics

  • see trends over time
  • spot application trouble and bottlenecks
  • determine if your tuning efforts are paying off
  • review a traffic spike after the fact (see the sketch after this list)
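For illustration, here’s a minimal boto3 sketch that pulls an hour of CPUUtilization data to review after a spike – the instance ID is a placeholder:

```python
# Minimal sketch (boto3): pull an hour of CPU metrics for one instance to
# review a traffic spike after the fact.  The instance ID is a placeholder.
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch", region_name="us-east-1")
stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,             # five-minute buckets
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))
```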

The cloud is not a silver bullet that can automatically scale any web application.  Software design is still a crucial factor.  Bake in these features with the right flexibility and foresight, and you’ll manage your website’s growth patterns with ease.

Have questions or need help with scalability?  Call us:  +1-213-537-4465

Review: Host Your Web Site In The Cloud, Amazon Web Services Made Easy

Jeff Barr’s book on AWS is a very readable howto and a quick way to get started with EC2, S3, CloudFront, CloudWatch and SimpleDB.  It is short on theory, but long on all the details of really getting your hands dirty.  Learn how to:

  • get started using the APIs to spinup servers
  • create a load balancer
  • add and remove application servers
  • build custom AMIs
  • create EBS volumes, attach them to your instances & format them
  • snapshot EBS volumes
  • use RAID with EBS
  • set up CloudWatch to monitor your instances
  • set up triggers with CloudWatch to enable AutoScaling

I would have liked to see examples in Chef rather than PHP, but hey you can’t have everything!

Review: Host Your Web Site In The Cloud by Jeff Barr

5 Steps to Cloud Computing

Believe it or not, you can actually start playing around with virtual servers that are as real and powerful as the physical servers you’re already used to deploying.  And you can do it for literally pennies per month.

  1. Sign up for an Amazon account or use the one you buy books with.
  2. Browse over to http://aws.amazon.com & click Sign Up Now
  3. Navigate to AWS Management Console, follow the Amazon EC2 link, and click Launch Instance
  4. Download ElasticFox or the API tools & configure your credentials for easy browser or command line control of your virtual infrastructure and deployments.
  5. Terminate instances & delete volumes & snapshots so you’ll have no recurring charges.

At a mere 8 and 1/2 cents per hour, you can play around with the technology with no real ongoing costs.  And you can do it with your existing Amazon account and credit card info.

Good stuff!