Tag Archives: amazon ec2

3 things CEOs should know about the Cloud

You’ve heard all the buzz and spiel about the cloud, and there are good reasons to want to get there. On-demand compute power makes new levels of scalability possible. Low upfront costs mean moving capital expenditure to operating expenditure and saving a bundle in the process. We won’t give you any more of the rah-rah marketing hoopla. You’ve heard enough of that. Instead we’ll gently play devil’s advocate for a moment, and give you a few things to think about when deploying applications with a cloud provider. Our focus is mainly on Amazon EC2.

You might also be interested in a wide reaching introduction to deploying on Amazon EC2.

  1. Funky Performance

    One of the biggest hurdles we see clients struggle with on Amazon EC2 is performance, and it’s rooted in the nature of shared resources. Computer servers, just like desktops, rely on CPU, memory, network and disk. In the virtual datacenter, you can be given more than your fair share without even knowing it. More bandwidth, more CPU, more disk? Who would complain? But when your application suddenly has to compete for disk resources and behaves erratically, you’ll quickly feel the flip side of that coin. Stocks go up, and they can just as easily come right back down.

    Variability around disk I/O seems to be the factor that hits applications the hardest, especially the database tier of many web applications. If your application requires extremely high database transaction throughput, you would do well to consider physical servers and a real RAID array to host your database server. Read more about IOPs

  2. Uncertain Reliability – A Loaded Gun

    Everybody has heard the saying: don’t hand someone a loaded gun. In the case of Amazon servers, you really are loading your applications onto fickle and neurotic servers.

    Imagine you open a car rental business. You could have two brand new fully reliable cars to rent out to customers. Your customers would be very happy, but you’d have a very small business. Alternatively you could have twenty used Pintos. You’d have some breaking down a lot, but as long as you keep ten of them rented at a time, your business is booming.

    In the Amazon world you have all the tools to keep your Ford Pintos running, but it’s important to think long and hard about reliability, redundancy, and automation. Read more about Failures, Lessons & the Chaos Monkey

  3. Iffy Support

    Managed hosting providers vary drastically in terms of the support you can expect. Companies like Rackspace, Servint or Datapipe have support built into their DNA. They’ve grown up around having a support tech that your team can reach when they’re having trouble.

    Amazon takes the opposite approach. They give you all the tools to do everything yourself. But in a crunch it can be great to have that service available to help troubleshoot and diagnose a problem. Although Amazon now offers support contracts, it’s not how they started out.

    If you have a crack operations team at your disposal, or you hire a third party provider like Heavyweight Internet Group, Amazon Web Services gives you the flexibility and power to build phenomenal and scalable architectures. But if you’re a very small team without tons of technical know-how, you may well do better with a service-oriented provider like Rackspace et al.

A few more considerations…

  • Will your cloud provider go out of business?
  • Could a subpoena against your provider draw you into the net?
  • Since you don’t know where your sensitive data physically resides, should you consider encryption?
  • Should you keep additional backups outside of the cloud?
  • Should you use multiple cloud providers?
  • Should you be concerned about the lack of perimeter security?

Autoscaling MySQL on Amazon EC2

Also find Sean Hull’s ramblings on twitter @hullsean.

Autoscaling your webserver tier is typically straightforward. Create an image of your Apache server, either with source code baked in or set to sync files down from S3 upon spinup. Roll that image into the autoscale configuration and you’re all set.
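For reference, here’s roughly what that looks like with Amazon’s Auto Scaling command line tools. This is a sketch only; the AMI id, names, zone and sizes are placeholders for your own values.

$ as-create-launch-config web-lc --image-id ami-xxxxxxxx --instance-type m1.small --key my-keypair --group web-group
$ as-create-auto-scaling-group web-asg --launch-configuration web-lc --availability-zones us-east-1a --min-size 2 --max-size 8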
With the database tier, though, things can be a bit tricky. The typical configuration we see has a single master database that your application writes to. But scaling out, or horizontally, on Amazon EC2 should be as easy as adding more slaves, right? Why not automate that process?

Below we’ve set out to answer some of the questions you’re likely to face when setting up slaves against your master. We’ve included instructions on building an AMI that automatically spins up as a slave. Fancy!

  1. How can I autoscale my database tier?
    1. Build an auto-starting MySQL slave against your master.
    2. Configure those slaves to spin up on demand. Amazon’s autoscaling with a load balancer is one option; another is to roll your own solution, monitoring thresholds on your servers and spinning up or dropping off slaves as necessary. A minimal sketch of the latter follows.
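    Here’s a minimal sketch of the roll-your-own approach, run from cron with the ec2-api-tools. The load threshold, AMI id, keypair and security group are all hypothetical, and a real version would also need logic to retire idle slaves:

    #!/bin/sh
    # spin up another read slave when the one-minute load average stays high
    THRESHOLD=4
    LOAD=`uptime | sed 's/.*load average: //' | cut -d, -f1 | cut -d. -f1`
    if [ "$LOAD" -gt "$THRESHOLD" ]
    then
        # launches the auto-building slave AMI described below
        ec2-run-instances ami-xxxxxxxx -t m1.large -k my-keypair -g db-group
    fi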
  2. Does an AWS snapshot capture subvolume data or just the SIZE of the attached volume?

    If you have an attached EBS volume and you create a new AMI off of that instance, you will capture the entire root volume plus your attached volume’s data. In fact, we find this a great way to create an auto-building slave in the cloud.

  3. How do I freeze MySQL during an AWS snapshot?

    mysql> flush tables with read lock;
    mysql> system xfs_freeze -f /data

    At this point you can create the image using the Amazon web console, ylastic, or the ec2-create-image API call from the command line. When the server you are imaging restarts – as it will do by default – it will come back up with the /data partition unfrozen and mysql’s tables unlocked again. Voila!

    If you’re not using xfs for your /data filesystem, you should be. It’s fast! The xfsprogs docs seem to indicate this may also work with foreign filesystems. Check the docs for details.
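    Putting it together, a consistent image run looks something like this minimal sketch. The instance id and image name are placeholders; note that ec2-create-image reboots the instance by default, which is what releases the lock and unfreezes /data:

    mysql> flush tables with read lock;
    mysql> system xfs_freeze -f /data

    Then, from your workstation (or the console or ylastic):

    $ ec2-create-image i-abcd1234 -n "mysql-slave-ami"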

  4. How do I build an AMI MySQL slave that autoconnects to the master?

    Install the mysql_serverid script from question 5 below, then:

    1. Configure mysql to use your /data EBS mount.
    2. Set all your my.cnf settings including server_id
    3. Configure the instance as a slave in the normal way.
    4. When using GRANT to create the ‘rep’ user on the master, specify the host with a subnet wildcard. For example ‘10.20.%’. That will subsequently allow any 10.20.x.y server to connect and replicate (see the example after these steps).
    5. Point the slave at the master.
    6. When all is running properly, edit the my.cnf file and remove server_id. Don’t restart mysql.
    7. Freeze the filesystem as described above.
    8. Use the Amazon console, ylastic or API call to create your new image.
    9. Test it of course, to make sure it spins up, sets server_id and connects to master.
    10. Make a change in the test schema, and verify that it propagates to all slaves.
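    For step 4, the grant on the master looks like this; the subnet and password are examples only:

    mysql> grant replication slave on *.* to 'rep'@'10.20.%' identified by 'rep_password';
    mysql> flush privileges;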
  5. How do I set server_id uniquely?

    As you hopefully already know, in a MySQL replication environment each node requires a unique server_id setting. In my Amazon Machine Images, I want the server to start up and, if it doesn’t find server_id in the /etc/my.cnf file, add it there, correctly! Is that so much to ask?

    Here’s what I did. Fire up your editor of choice and drop in this bit of code:

    #!/bin/sh
    # set server_id at boot if it's not already present
    if grep -q "server_id" /etc/my.cnf
    then
        : # do nothing - it's already set
    else
        # extract numeric component from hostname - in the Amazon
        # environment this is derived from the server's internal IP
        export server_id=`echo $HOSTNAME | sed 's/[^0-9]*//g'`
        echo "server_id=$server_id" >> /etc/my.cnf
        # restart mysql so the new setting takes effect
        /etc/init.d/mysql restart
    fi


    Save that snippet as /root/mysql_serverid. Also be sure to make it executable:

    $ chmod +x /root/mysql_serverid

    Then just append it to your /etc/rc.local file with an editor or echo:

    $ echo "/root/mysql_serverid" >> /etc/rc.local

    Assuming your my.cnf file does *NOT* contain the server_id setting when you re-image, it’ll set this automagically each time you spin up a new server off of that AMI. Nice!

  6. Can you easily slave off of a slave? How?

    It’s not terribly different from slaving off of a normal master.

    1. First enable slave updates. The setting is not dynamic, so if you don’t already have it set, you’ll have to restart your slave:

       log_slave_updates=true

    2. Get an initial snapshot of your slave data. You can do that the locking way:

       mysql> flush tables with read lock;
       mysql> show master status\G
       mysql> system mysqldump -A > full_slave_dump.mysql
       mysql> unlock tables;

       You may also choose to use Percona’s excellent xtrabackup utility to create hotbackups without locking any tables. We are very lucky to have an open-source tool like this at our disposal. MySQL Enterprise Backup from Oracle Corp can also do this.

    3. On the new slave, seed the database with the dump created above:

       $ mysql < full_slave_dump.mysql

    4. Now point your new slave at the original slave:

       mysql> change master to master_user='rep', master_password='rep', master_host='', master_log_file='server-bin-log.000004', master_log_pos=399;
       mysql> start slave;
       mysql> show slave status\G

  7. The slave’s master is set as an IP address. Is there another way?

    It’s possible to use hostnames in MySQL replication, however it’s not recommended. Why? Because of the wacky world of DNS. Suffice it to say, MySQL has to do a lot of work to resolve those names into IP addresses, and a hiccup in DNS can potentially interrupt all MySQL services, as sessions will fail to authenticate. To avoid this problem do two things:

    1. Set this parameter in my.cnf:

       skip_name_resolve = true

    2. Remove entries in the mysql.user table where the hostname is not an IP address. Those entries will be invalid for authentication after setting the above parameter. A quick way to spot them follows.
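    Here’s a minimal sketch for step 2. The first query flags grant-table entries whose host contains letters (other than localhost); the user in the drop statement is purely an example:

    mysql> select user, host from mysql.user where host <> 'localhost' and host regexp '[a-z]';
    mysql> drop user 'app'@'webhost.example.com';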
  8. Doesn’t RDS take care of all of this for me?

    RDS is Amazon’s Relational Database Service, which is built on MySQL. It presents MySQL as a service, which brings certain benefits to administrators and startups:

    • Simpler administration. Nuts and bolts are handled for you.
    • Push-button replication. No more struggling with the nuances and issues of MySQL’s replication management.

    Simplicity of administration, of course, has its downsides. Depending on your environment, these may or may not be dealbreakers:

    • No access to the slow query log. This is huge. The single best tool for troubleshooting slow database response is this log file. Queries are a large part of keeping a relational database server healthy and happy, and without this facility you are severely limited.

    • Locked-in downtime window. When you sign up for RDS, you must define a thirty-minute weekly maintenance window during which your instance *COULD* be unavailable. When you host yourself, you may not require regular downtime at all, especially if you’re using master-master MySQL in a zero-downtime configuration.

    • Can’t use Percona Server to host your MySQL data. Percona Server is a high-performance distribution of MySQL which typically rolls in serious performance tweaks and updates before they make it to the community edition. It’s well worth considering, but you won’t be able to run it in RDS.

    • No access to the filesystem, server metrics & command line. Again, these are crucial for troubleshooting problems. Gathering data about what’s really happening on the server is how you begin to diagnose and troubleshoot a server stall or pileup.

    • You are beholden to Amazon’s support services if things go awry. That’s because you won’t have access to the raw iron to diagnose and troubleshoot things yourself. Want to call in an outside consultant to help you debug or troubleshoot? You’ll have your hands tied without access to the underlying server.

    • You can’t replicate to a non-RDS database. Have your own datacenter connected to Amazon via VPC? Want to replicate to a server there? RDS won’t fit the bill. You’ll have to roll your own, as we’ve described above. And if you want to replicate to an alternate cloud provider, again RDS won’t work for you.

4 Considerations Migrating to The Cloud

When migrating to the cloud consider security and resource variability, the cultural shift for operations and the new cost model. Continue reading 4 Considerations Migrating to The Cloud

Open Source Enables the Cloud

With the fast growth of virtualized data centers, and companies like Google, Amazon and Facebook, it’s easy to forget how much is built on open-source components, aka commodity software.  In a very real way open-source has enabled the huge explosion of commodity hardware, the fast growth of the internet itself, and now the further acceleration through cloud services, cloud infrastructure, and virtualization of data centers.

Your typical internet stack and application now stands on the shoulders of tens of thousands of open source developers and projects.  Let’s look at a few of them. Continue reading Open Source Enables the Cloud

3 Ways to Boost Cloud Scalability

Deploying in the Amazon cloud is touted as a great way to achieve high scalability while paying only for the computing power you use. How do you get the best scalability from the technology? Continue reading 3 Ways to Boost Cloud Scalability

Review – Test Driven Infrastructure with Chef – Stephen Nelson-Smith

In search of a good book on Chef itself, I picked up this new title on O’Reilly.  It’s one of their new format books, small in size, only 75 pages.

There was some very good material in this book.  Mr. Nelson-Smith’s writing style is good, readable, and informative.  The discussion of risks of infrastructure as code was instructive.  With the advent of APIs to build out virtual data centers, the idea of automating every aspect of systems administration, and building infrastructure itself as code is a new one.  So an honest discussion of the risks of such an approach is bold and much needed.  I also liked the introduction to Chef itself, and the discussion of installation.

Chef isn’t really the main focus of this book, unfortunately.  The book spends a lot of time introducing us to Agile Development, and specifically test driven development.  While these are lofty goals, and the first time I’ve seen treatment of the topic in relation to provisioning cloud infrastructure, I did feel too much time was spent on that.  Continue reading Review – Test Driven Infrastructure with Chef – Stephen Nelson-Smith

IOPs – What is it and why is it important?

IOPs are an attempt to standardize comparison of disk speeds across different environments.  When you turn on a computer, everything must be read from disk, but thereafter things are kept in memory.  However applications typically read and write to disk frequently.  When you move to enterprise class applications, especially relational databases, a lot of disk I/O is happening so performance of disk resources is crucial.

For a basic single SATA drive that you might have in a server or laptop, you can typically get 30-40 IOPs from it.  These numbers vary if you are talking about random versus sequential reads or writes.  Picture the needle on a vinyl record.  The groove passes under it slowly near the center, and faster around the outside edge.  The same goes for the magnetic head inside your harddrive, which is why outer tracks deliver data faster than inner ones.

In the Amazon EC2 environment, there is a lot of variability in performance from EBS.  You can stripe across four separate EBS volumes, which will be in four different locations on the underlying storage, and you’ll get a big boost in disk I/O.  Disk performance will also vary across the m1.small, m1.large and m1.xlarge instance types, with the larger instances getting the lion’s share of network bandwidth, and so better disk I/O performance.  But in the end your best EBS performance will be in the range of 500-1000 IOPs.  That’s not huge by physical hardware standards, so an extremely disk-intensive application will probably not perform well in the Amazon cloud.

Still, the economic pressures and the infrastructure and business flexibility of cloud computing continue to push adoption, so expect the trend to continue.

Quora discussion – What are IOPs and why are they important?

Deploying MySQL on Amazon EC2 – 8 Best Practices

Also find Sean Hull’s ramblings on twitter @hullsean.

There are a lot of considerations for deploying MySQL in the Cloud.  Some concepts and details won’t be obvious to DBAs used to deploying on traditional servers.  Here are eight best practices which will certainly set you off on the right foot.

This article is part of a multi-part series Intro to EC2 Cloud Deployments.

1. Replication

Master-Slave replication is easy to setup, and provides a hot online copy of your data.  One or more slaves can also be used for scaling your database tier horizontally.

Master-Master active/passive replication can also be used to bring higher uptime, and allow some operations such as ALTER statements and database upgrades to be done online with no downtime.  The secondary master can be used for offloading read queries, and additional slaves can also be added as in the master-slave configuration.

Caution: MySQL’s replication can drift silently out of sync with the master. If you’re using statement based replication with MySQL, be sure to perform integrity checking to make your setup run smoothly. Here’s our guide to bulletproofing MySQL replication.
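One way to do that integrity checking is with Percona Toolkit’s pt-table-checksum, which checksums each table through replication so you can compare results on the slaves. A rough sketch, with hypothetical connection details, and flags that may vary by toolkit version:

$ pt-table-checksum --replicate=percona.checksums h=master-host,u=root

mysql> select db, tbl from percona.checksums where master_cnt <> this_cnt or master_crc <> this_crc;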

2. Security

You’ll want to create an AWS security group for your databases which opens port 3306, but don’t allow access from the internet at large, only from your AWS-defined webserver security group.  You may also decide to use a single box and security group which allows port 22 (ssh) from the internet at large.  All ssh connections then come in through that box, and the internal security groups (database & webserver groups) should only allow port 22 connections from that bastion’s security group.
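With the classic ec2-api-tools, that database group setup looks roughly like the following sketch. The group names and source account id are placeholders:

$ ec2-add-group db-group -d "MySQL servers"
$ ec2-authorize db-group -P tcp -p 3306 -o web-group -u 111122223333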

When you setup replication, you’ll be creating users and granting privileges.  You’ll need to grant to the wildcard ‘%’ hostname designation as your internal and external IPs will change each time a server dies. This is safe since you expose your database server port 3306 only to other AWS security groups, and no internet hosts.

You may also decide to use an encrypted filesystem for your database mount point, your database backups, and/or your entire filesystem.  Be particularly careful of your most sensitive data.  If compliance requirements dictate, choose to store very sensitive data outside of the cloud and secure network connections to incorporate it into application pages.

Be particularly careful of your AWS logins.  The password recovery mechanism in Amazon Web Services is all that prevents an attacker from controlling your entire infrastructure, after all.

3. Backups

There are a few ways to backup a MySQL database.  By far the easiest way in EC2 is using the AWS snapshot mechanism for EBS volumes.  Keep in mind you’ll want to encrypt these snapshots as S3 may not be as secure as you might like.   Although you’ll need to lock your MySQL tables during the snapshot, it will typically only take a few seconds before you can release the database locks.

Now snapshots are great, but they can only be used within the AWS environment, so it also behooves you to be performing additional backups, and moving them offsite either to another cloud provider or to your own internal servers.  For this your choices are logical backups or hotbackups.

mysqldump can perform logical backups for you.  These backups do a SELECT * on every table in your database, so they can take quite some time, and they really destroy the warm blocks in your InnoDB buffer cache.  What’s more, rebuilding a database from a dump can be quite slow as well.  All these factors should be considered before deciding a dump is the best option for you.

xtrabackup is a great open source tool available from Percona.  It can perform hotbackups of all MySQL tables including MyISAM, InnoDB and XtraDB if you use them.  This means the database will be online, not locking tables, with smarter less destructive hits to your buffer cache and database server as a whole.  The hotbackup will build a complete copy of your datadir, so bringing up the server from a backup involves setting the datadir in your my.cnf file and starting.
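A typical run with xtrabackup’s innobackupex wrapper looks something like this; the backup directory and timestamp are examples only:

$ innobackupex /backups
$ innobackupex --apply-log /backups/2011-10-01_12-00-00

After the apply-log step, point the datadir in your my.cnf at that directory (or copy it into place) and start mysqld.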

We wrote a handy guide to using hotbackups to setup replication.

4. Disk I/O

Obviously disk I/O is of paramount importance for any database server, including MySQL.  In AWS you do not want to use instance store storage at all.  Be sure your AMI is built on EBS, and further, use a separate EBS mount point for the database datadir.

An even better configuration than the above, though slightly more complex to set up, is a software RAID stripe across a number of EBS volumes.  Linux’s software RAID will create an md0 device file on which you then create a filesystem – use xfs.  Keep in mind that this arrangement will require some care during snapshotting, but it can still work well.  The performance gains are well worth it!
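As a sketch, building the four-volume stripe might look like this, assuming your EBS volumes are attached as /dev/sdf through /dev/sdi:

$ mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
$ mkfs.xfs /dev/md0
$ mkdir -p /data
$ mount /dev/md0 /data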

5. Network & IPs

When configuring Master & Slave replication, be sure to use the internal or private IPs and internal domain names so as not to incur additional network charges.  The same goes for your webservers which will point to your master database, and one or more slaves for read queries.

6. Availability Zones

Amazon Web Services provides a tremendous leap in options for high availability.  Take advantage of availability zones by putting one or more of your slaves in a separate zone where possible.  Interestingly if you ensure the use of internal or private IP addresses and names, you will not incur additional network charges to servers in other availability zones.

7. Disaster Recovery

EC2 servers are, out of the gate, *NOT* as reliable as traditional servers.  This should send shivers down your spine if you’re trying to treat AWS like a traditional hosted environment.  You shouldn’t.  It should force you to get serious about disaster recovery.  Build bulletproof scripts to spin up your servers from custom-built AMIs and test them.  Finally you’re taking disaster recovery as seriously as you always wanted to.  Take advantage of Availability Zones as well, and rehearse various failure scenarios.

8. Vertical and Horizontal Scaling

Interestingly, vertical scaling can be done quite easily in EC2.  If you start with a 64-bit AMI, you can stop such a server without losing the root EBS mount.  From there you can start a new, larger instance on that existing EBS root volume and voila, you’ve VERTICALLY scaled your server in place.  This is quite a powerful feature at the system administrator’s disposal.  Devops has never been smarter!  You can do the same to scale *DOWN* if you are no longer using all the power you thought you’d need.  Combine this phenomenal AWS feature with a MySQL master-master active/passive configuration, and you can scale vertically with ZERO downtime.  Powerful indeed.
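The EC2 API tools can also do the stop/resize/start dance in place.  A minimal sketch, with a hypothetical instance id:

$ ec2-stop-instances i-abcd1234
$ ec2-modify-instance-attribute i-abcd1234 --instance-type m1.xlarge
$ ec2-start-instances i-abcd1234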

We wrote an EC2 Autoscaling Guide for MySQL that you should review.

Along with vertical scaling, you’ll also want the ability to scale out, that is, add more servers to the mix as required, and scale back when your needs reduce.  Build smarts into your application so you can point SELECT queries to read-only slaves.  Many web applications do the bulk of their work in SELECTs, so being able to scale those horizontally is very powerful and compelling.  By baking this logic into the application you also allow the application to check for slave lag.  If your slave is lagging behind the master, you may see stale or missing data.  In those cases your application can choose to go to the master to get the freshest data.
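A crude way to check slave lag from the shell; the hostname is a placeholder, and your application would perform the equivalent check before routing SELECTs:

$ mysql -h slave1.internal -e 'show slave status\G' | grep Seconds_Behind_Master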

What about RDS?

Wondering whether RDS is right for you? It may be. We wrote a comprehensive guide to evaluating RDS over MySQL.

If you read this far, you should grab our newsletter!

Success Story–Media and Entertainment Conglomerate

The Business

A website aggregating twitter feeds for celebrities, with sophisticated search functionality.

The Problem

Recently acquired by a large media and entertainment conglomerate, the company had already seen its traffic triple.  What’s more, they expected their unique pageviews to grow by 20 to 30 times in the coming six months.

Our Process

We worked closely with the lead architect and designer of the site to understand some of the technical difficulties they were encountering.  We discussed key areas of the site, and where performance was most lacking.

Next we reviewed the underlying infrastructure with an eye for misconfigurations, misuse of or badly allocated resources, and general configuration best practices.  They used Amazon EC2 cloud hosted servers for the database, webserver, and other components of the application.

The Solution

Our first round of reviews spanned a couple of days.  We found many issues with the configuration which could dramatically affect performance.  We adjusted settings in both the webserver and the database to get the most out of the platform on which they were hosted.  These initial changes reduced the load average on the server from a steady 10.0 to an average of 2.0.

Our second round of review involved a serious look at the application.  We worked closely with the developer to understand what the application was doing.  We identified the areas of the application with the heaviest footprint on the server, and worked with the developer to tune those specific areas.  In addition we examined the underlying database structures and tables, looking for relevant indexes and adding them as necessary to support the specific requirements of the application.

After this second round of changes, tweaks, adjustments, and rearchitecting, the load average on the server was reduced dramatically, to a mere 0.10.  The overall effect was dramatic.  With a hundredfold reduction in the load on the server, the website’s performance was snappy and very responsive.  The end user experience was noticeably changed.  A smile comes to your face when you visit your favorite site and find it working fast and furious!


The results to the business were dramatic.  Not only were their short-term troubles addressed, with the site handling the new traffic without a hiccup, but they now had the confidence and peace of mind to go forward with new advertising campaigns, secure in the knowledge that the site really could perform, and handle a 20 to 30 times increase in traffic with ease.