Category Archives: CTO/CIO

iHeavy Insights 83 – Shoe Leather Cost

Shoe leather cost is similar to opportunity cost.  It refers to the cost of counteracting inflation by keeping less of your assets in cash.  That strategy requires more trips to the bank, and all the extra walking wears out the leather in your shoes.

All joking aside, it’s an interesting idea.  It highlights the hidden costs lurking in different strategies.  There are hidden costs to using coupons, loyalty cards and frequent flyer miles, to managing assets & investments, to hiring resources, and in general to running a business.  Let’s look at a few. Continue reading iHeavy Insights 83 – Shoe Leather Cost

Book Review – Rework

Rework is chock full of ideas

Jason Fried and David Heinemeier Hansson’s new book REWORK is one of the best startup business books I’ve read since Alan Weiss’ Million Dollar Consulting.  If you’re already a fan of their Signal vs Noise blog, you’ll be familiar with their terse style.  Sharp and to the point.

Which is why you can pick it up and read it in a few hours.  You’ll want to, because it’s well written and pared down to essentials.  In fact the book reads like their workflow advice: less mass, do it yourself, cut out the fat, concentrate on essentials.  They are clearly practicing what they preach, which I like. Continue reading Book Review – Rework

8 Questions to ask an AWS Expert

If you’re headhunting a cloud computing expert, specifically someone who knows Amazon Web Services (AWS) and EC2, you’ll want a battery of questions to assess their knowledge.  As with any technical interview, focus on concepts and the big picture.  As the 37Signals folks like to say, “hire for attitude, train for skill”.  Absolutely!

New: Top questions for hiring a serverless lambda expert

Also new: Top questions to ask on a devops expert interview

And: How to hire a developer that doesn’t suck

If you want more general info about Amazon Web Services, read our Intro to EC2 Deployments.

    1. Explain Elastic Block Storage.  What type of performance can you expect?  How do you back it up?  How do you improve performance?

    EBS is a virtualized SAN, or storage area network.  That means it is RAID storage to start with, so it’s redundant and fault tolerant.  If disks die in that RAID, you don’t lose data.  Great!  It is also virtualized, so you can provision and allocate storage, and attach it to your server with various API calls.  No calling the storage expert and asking him or her to run specialized commands from the hardware vendor.

    Performance on EBS can exhibit variability.  That is, it can go above the SLA performance level, then drop below it.  The SLA provides you with an average disk I/O rate you can expect.  This can frustrate some folks, especially performance experts who expect reliable, consistent disk throughput on a server.  Traditional physically hosted servers behave that way.  Virtual AWS instances do not.

    Related: Is Amazon too big to fail?

    Back up EBS volumes using the snapshot facility, via an API call or a GUI like ElasticFox.

    Improve performance by using Linux software RAID and striping across four volumes.
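
    A minimal sketch of both, assuming the AWS command line tools are installed, with hypothetical volume and device IDs:

        # Snapshot an EBS volume via an API call (volume ID is hypothetical)
        aws ec2 create-snapshot --volume-id vol-0abc1234 \
            --description "nightly backup of db data volume"

        # Stripe across four attached EBS volumes with Linux software RAID 0
        # (device names are examples and vary by AMI)
        mdadm --create /dev/md0 --level=0 --raid-devices=4 \
            /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
        mkfs.ext4 /dev/md0
        mount /dev/md0 /data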

    2. What is S3?  What is it used for? Should encryption be used?

    S3 stands for Simple Storage Service.  You can think of it like FTP storage: you can move files to and from it, but you cannot mount it like a filesystem.  AWS automatically puts your snapshots and AMIs there.  Encryption should be considered for sensitive data, as S3 is a proprietary technology developed by Amazon themselves, and as yet unproven from a security standpoint.
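
    A quick sketch of both options, assuming the AWS command line tools and a hypothetical bucket name:

        # Server-side encryption: Amazon encrypts the object at rest
        aws s3 cp backup.tar.gz s3://my-backups/ --sse AES256

        # Client-side encryption: encrypt before upload,
        # so Amazon never sees the plaintext
        gpg --symmetric backup.tar.gz
        aws s3 cp backup.tar.gz.gpg s3://my-backups/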

    3. What is an AMI?  How do I build one?

    AMI stands for Amazon Machine Image.  It is effectively a snapshot of the root filesystem.  Commodity hardware servers have a BIOS that points to the master boot record in the first block of a disk.  A disk image, though, can sit anywhere physically on a disk, so Linux can boot from an arbitrary location on the EBS storage network.

    Need an AWS expert? Email me for a quote hullsean @ gmail.com

    Build a new AMI by first spinning up an instance from a trusted AMI.  Then add packages and components as required.  Be wary of putting sensitive data onto an AMI.  For instance, your access credentials should be added to an instance after spinup.  With a database, mount an outside volume that holds your MySQL data after spinup as well.
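
    Once the instance is configured, one API call images it.  A sketch with the AWS command line tools and hypothetical IDs:

        # Install packages on the running instance first, e.g.:
        #   sudo apt-get update && sudo apt-get install -y nginx

        # Then create the new AMI from the configured instance
        aws ec2 create-image --instance-id i-0abc1234 \
            --name "webserver-base-$(date +%Y%m%d)" \
            --description "nginx base image, no credentials baked in"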

    4. Can I vertically scale an Amazon instance? How?

    Yes.  This is an incredible feature of AWS and cloud virtualization.  Spinup a new, larger instance than the one you are currently running.  Pause that new instance, detach its root EBS volume, and discard it.  Then stop your live instance and detach its root volume.  Note the unique device ID, attach that root volume to your new server, and start it again.  Voila, you have scaled vertically in-place!
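
    The same dance, sketched with the AWS command line tools (all instance and volume IDs are hypothetical):

        # Stop both instances, discard the new one's root volume
        aws ec2 stop-instances --instance-ids i-oldsmall i-newlarge
        aws ec2 detach-volume --volume-id vol-newroot      # discard this one
        aws ec2 detach-volume --volume-id vol-oldroot      # keep this one

        # Attach the old root volume to the larger instance and boot it
        aws ec2 attach-volume --volume-id vol-oldroot \
            --instance-id i-newlarge --device /dev/xvda    # device name varies by AMI
        aws ec2 start-instances --instance-ids i-newlarge

    These days you can also simply stop the instance, change its type via the modify-instance-attribute API call, and start it again.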

    5. What is auto-scaling? How does it work?

    Autoscaling is a feature of AWS that lets you provision and spinup new instances automatically, without the need for your intervention.  You do this by setting thresholds and metrics to monitor.  When those thresholds are crossed, a new instance of your choosing will be spun up, configured, and rolled into the load balancer pool.  Voila, you’ve scaled horizontally without any operator intervention!
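
    A minimal sketch with the AWS command line tools (group names, AMI IDs and sizes are all hypothetical):

        # Define what to launch, then the group that launches it
        aws autoscaling create-launch-configuration \
            --launch-configuration-name web-lc \
            --image-id ami-0abc1234 --instance-type t3.small

        aws autoscaling create-auto-scaling-group \
            --auto-scaling-group-name web-asg \
            --launch-configuration-name web-lc \
            --min-size 2 --max-size 10 \
            --availability-zones us-east-1a us-east-1b \
            --load-balancer-names web-elb

        # Add one instance when triggered; wire the returned policy ARN
        # to a CloudWatch alarm on your chosen metric
        aws autoscaling put-scaling-policy \
            --auto-scaling-group-name web-asg \
            --policy-name scale-up --scaling-adjustment 1 \
            --adjustment-type ChangeInCapacity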

    Also: Are we fast approaching cloud-mageddon?

    With MySQL databases autoscaling can get a little dicey, so we wrote a guide to autoscaling MySQL on Amazon EC2.

    6. What automation tools can I use to spinup servers?

    The most obvious way is to roll your own scripts using the AWS API tools.  Such scripts could be written in bash, python or another language of your choice.  The next option is to use a configuration management and provisioning tool like Puppet or, better, its successor Opscode Chef.  Ansible is also an excellent option because it doesn’t require an agent, and can run your shell scripts as-is.  You might also look towards CloudFormation or Terraform.  The resulting code captures your entire infrastructure, and can be checked into your git repository & version controlled.  You can even unit test this way!
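
    A roll-your-own sketch in bash, assuming the AWS command line tools and hypothetical AMI and key names:

        #!/bin/bash
        # Spinup a server and wait for it to come online
        INSTANCE_ID=$(aws ec2 run-instances \
            --image-id ami-0abc1234 --instance-type t3.small \
            --key-name mykey --count 1 \
            --query 'Instances[0].InstanceId' --output text)

        aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"

        # Print the public IP so follow-on scripts can configure the box
        aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
            --query 'Reservations[0].Instances[0].PublicIpAddress' --output text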

    7. What is configuration management?  Why would I want to use it with cloud provisioning of resources?

    Configuration management has been around for a long time in web operations and systems administration, yet its cultural popularity has been limited.  Most systems administrators configure machines the way software was developed before version control: by manually making changes on servers.  Each server can then be, and usually is, slightly different.  Troubleshooting, though, is straightforward: you login to the box and operate on it directly.  Configuration management brings a large automation tool into the picture, managing servers like the strings of a puppet.  This enforces standardization, best practices, and reproducibility, as all configs are versioned and managed.  It also introduces a new way of working, which is the biggest hurdle to its adoption.

    Read: When hosting data on Amazon turns bloodsport

    Enter the cloud, and configuration management becomes even more critical.  That’s because virtual servers such as Amazon’s EC2 instances are much less reliable than physical ones.  You absolutely need a mechanism to rebuild them as-is at any moment.  This pushes best practices like automation, reproducibility and disaster recovery into center stage.

    While on the subject of configuration management, take a quick peek at our devops hiring guide.

    8. Explain how you would simulate perimeter security using the Amazon Web Services model.

    Traditional perimeter security, the kind we’re familiar with from firewalls and so forth, is not supported in the Amazon EC2 world.  Instead, AWS supports security groups.  One can create a security group for a jump box with ssh access – only port 22 open.  From there, a webserver group and a database group are created.  The webserver group allows 80 and 443 from the world, but port 22 *only* from the jump box group.  Further, the database group allows port 3306 from the webserver group and port 22 from the jump box group.  Add any machines to the webserver group and they can all hit the database.  No one from the world can, and no one can directly ssh to any of your boxes.
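
    Sketched with the AWS command line tools (the group names are hypothetical):

        aws ec2 create-security-group --group-name jumpbox --description "ssh entry point"
        aws ec2 create-security-group --group-name webserver --description "web tier"
        aws ec2 create-security-group --group-name database --description "db tier"

        # Jump box: ssh from anywhere (or restrict --cidr to your subnet)
        aws ec2 authorize-security-group-ingress --group-name jumpbox \
            --protocol tcp --port 22 --cidr 0.0.0.0/0

        # Web tier: 80/443 from the world, ssh only from the jump box group
        aws ec2 authorize-security-group-ingress --group-name webserver \
            --protocol tcp --port 80 --cidr 0.0.0.0/0
        aws ec2 authorize-security-group-ingress --group-name webserver \
            --protocol tcp --port 443 --cidr 0.0.0.0/0
        aws ec2 authorize-security-group-ingress --group-name webserver \
            --protocol tcp --port 22 --source-group jumpbox

        # DB tier: 3306 from the web tier, ssh from the jump box group
        aws ec2 authorize-security-group-ingress --group-name database \
            --protocol tcp --port 3306 --source-group webserver
        aws ec2 authorize-security-group-ingress --group-name database \
            --protocol tcp --port 22 --source-group jumpbox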

    The more full-featured way to go is VPC.  That’s Amazon’s acronym for Virtual Private Cloud.  You can create virtual networks, both private & public, with subnets etc., all within VPCs.  You then spinup servers & resources inside those virtual networks.  VPCs can be controlled with security groups or the more powerful but messy access control lists.

    Also: A history lesson for cloud detractors – January 2012

    Want to further lock this configuration down?  Only allow ssh access from specific IP addresses on your network, or allow just your subnet.

Did you make it this far?!?! Grab our newsletter.

The New Commodity Hardware Craze aka Cloud Computing

Does anyone remember 15 years ago when the dot-com boom was just starting?  A lot of companies were running on Sun.  Sun was the best hardware you could buy for the price.  It was reliable, and a lot of engineers had experience with its operating system, SunOS, a flavor of Unix.

Yet suddenly companies were switching to cheap, crappy hardware.  The stuff failed more often, had lower quality control, and used cheaper, slower buses.  Despite all of that, cutting edge firms and startups were moving to commodity hardware in droves.  Why was that? Continue reading The New Commodity Hardware Craze aka Cloud Computing

5 Ways to Avoid EC2 Outages

1. Backup outside of the Cloud

Some of the high-profile companies affected by Amazon’s April 2011 outage could have recovered had they kept a backup of their entire site outside of the cloud.  With any hosting provider, managed traditional data center and cloud provider alike, alternate backups are always a good idea.  A MySQL logical backup and/or incremental backup can be copied regularly offsite or to an alternate cloud provider.  That’s real insurance! Continue reading 5 Ways to Avoid EC2 Outages
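
A minimal sketch of such an offsite copy (host and file names are hypothetical):

    # Nightly logical backup
    mysqldump --single-transaction --all-databases | gzip > backup-$(date +%F).sql.gz

    # Ship it outside the primary cloud: in-house or an alternate provider
    rsync -av backup-$(date +%F).sql.gz backupuser@offsite.example.com:/backups/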

3 Ways to Boost Cloud Scalability

Deploying in the Amazon cloud is touted as a great way to achieve high scalability while paying only for the computing power you use. How do you get the best scalability from the technology? Continue reading 3 Ways to Boost Cloud Scalability

iHeavy Insights 82 – Better Practices

Best practices: a term we hear thrown around a lot.  But like that New Year’s diet, it too often ends up more talk than action.

Manage Processes

Operator error, i.e. typing the wrong command, is always a risk.  Logging into the wrong server to drop a database, or mistyping a dump command so that you write data into the database instead of out of it: these are risks that operations folks face every day.

Accountability is important, so be sure all of your systems folks login to their own accounts.  Apply the least-privileges model: give permissions on an as-needed basis.

Set prompts with big bold names that indicate production servers and their purpose.  Automate repetitive commands that are prone to typos.
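
For example, a loud red prompt in root’s .bashrc on the production box (the name and color are a sketch, not a standard):

    export PS1='\[\e[1;41m\][PROD-DB1]\[\e[0m\] \u@\h:\w\$ '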

Don’t be afraid to give developers read-only accounts on production servers.

Communicate Clearly

Regular team meetings, à la Agile standups, are a great way to encourage folks to communicate.  Bring the developers and operations folks together.  Ask everyone in turn to voice their current to-dos, their concerns and the risks they see.  Encourage everyone to listen with an open mind and consider different perspectives.

Communication is a cultural attribute, so it comes from the top.  Encourage it as a CTO or CIO by asking questions, communicating your concerns, and repeating your own requests in different words.  Listen to what your team is saying, rephrase their concerns back to them, and explain how and when they will be addressed.

Document Processes

A culture of documenting services and processes is healthy.  It provides a central location and knowledge base for the team.  It also prevents sliding into the situation where only one team member understands how to administer critical business components.  Were that person to be unavailable or to leave the company, you’re stuck reverse engineering your infrastructure and guessing at architectural decisions.

Better Practices

Rather than think of best practices as something you need to achieve today, think of them as an ongoing, day-to-day quest for improvement.

  • Repetitive manual processes – employ automation & script those processes where possible
  • Steps that require investigation and research – document them
  • Production changes – communicate with business units, QA & operations
  • Always be improving – keep striving for better practices

Amazon Web Services – What is it and why is it important?

Amazon Web Services is a division of Amazon the bookseller, devoted solely to infrastructure and internet servers.  These are the building blocks of data centers, the workhorses of the internet.  AWS’s cloud computing offerings allow a business to set up, or “spinup” in the jargon of cloud computing, new compute resources at will.  Need a small single-CPU 32-bit Ubuntu server with two 20G disks attached?  It’s one command and 30 seconds away!
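
That one command, sketched with the modern AWS command line tools (the AMI ID is hypothetical):

    aws ec2 run-instances --image-id ami-0abc1234 --instance-type t2.micro \
        --block-device-mappings \
        '[{"DeviceName":"/dev/xvdf","Ebs":{"VolumeSize":20}},
          {"DeviceName":"/dev/xvdg","Ebs":{"VolumeSize":20}}]'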

As we discussed previously, infrastructure provisioning has evolved dramatically over the past fifteen years, from something that took time and cost a lot to the fast, automatic process it is today with cloud computing.  This has also brought a dramatic culture shift in the way systems administration is done: from a fairly manual process of physical machines and software configuration, where new services took weeks to set up, to a scriptable and automatable process that can take seconds.

This new realm of cloud computing infrastructure and provisioning is called Infrastructure as a Service or IaaS, and Amazon Web Services is one of the largest providers of such compute resources.  They’re not the only ones of course.  Others include:

  • Rackspace Cloud
  • Joyent
  • GoGrid
  • Terremark
  • 3Tera
  • IBM
  • Microsoft
  • Enomaly
  • AT&T

Cloud computing is still in its infancy, but it is growing quickly.  Amazon themselves had a major data center outage in April that we discussed in detail.  It sent some hot internet startups into a tailspin!

More discussion of Amazon Web Services on Quora – Sean Hull

Point-in-time Recovery – What is it and why is it important?

Web-facing database servers receive a barrage of activity 24 hours a day.  Sessions are managed for users logging in, ratings are clicked and comments are added.  Even more complex are web-based ecommerce applications.  All of this activity is organized into small chunks called transactions.  They are discrete sets of changes.  If you’re editing a word processing document, it might autosave every five minutes.  Excel may provide a similar feature, along with a built-in mechanism for undo and redo of recent edits.  These are all analogous to transactions in a database.

Transactions are important because they are all written to logfiles.  Those logs make replication possible, by replaying the changes on another database server downstream.

If you have lost your database server because of hardware failure, or instance failure in EC2, you’ll be faced with the challenge of restoring your database server.  How is this accomplished?  The first step is to restore from the last full backup you have, perhaps a full database dump that you perform every day late at night.  Great, now you’ve restored to 2am.  How do you get the rest of your data?

That is where point-in-time recovery comes in.  Since those transactions were being written to your transaction logs, all the changes made to your database since the last full backup must be reapplied.  In MySQL this transaction log is called the binlog, and there is a mysqlbinlog utility that reads the transaction log files, and replays those statements.  You’ll tell it the start time – in this case 2am when the backup happened.  And you’ll tell it the end time, which is the point-in-time you want to recover to.  That time will likely be the time you lost your database server hardware.
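
In practice it looks something like this (file names and times are examples):

    # First restore the last full backup, taken at 2am
    mysql < full_backup_2am.sql

    # Then replay everything from 2am up to the moment of failure
    mysqlbinlog --start-datetime="2011-04-21 02:00:00" \
                --stop-datetime="2011-04-21 14:05:00" \
                mysql-bin.000123 mysql-bin.000124 | mysql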

Point-in-time recovery is crucial to high availability, so be sure to back up your binlogs right alongside the full database backups you keep every night.  If you lose the server or disk that the database is hosted on, you’ll want an alternate copy of those binlogs available for recovery!

Quora discussion on Point-in-time Recovery by Sean Hull

Migrating to the Cloud – Why and why not?

A lot of technical forums and discussions have highlighted the limitations of EC2, and how it loses on performance when compared to physical servers of equal cost.  They argue that you can get much more hardware and bigger iron for the same money, so it seems foolhardy to turn to the cloud.  Why the mad rush to the cloud, then?  If all you’re looking at is performance, it might seem odd indeed.  But look at it another way: if performance is not as good, then performance is clearly not the driving factor behind cloud adoption.

CIOs and CTOs are often asking questions more along the lines of, “Can we deploy in the cloud and live with the performance limitations, and if so, how do we get there?”

Another question, “Is it a good idea to deploy your database in the cloud?”  It depends!  Let’s take a look at some of the strengths and weaknesses, then you decide.

8 big strengths of the cloud

  1. Flexibility in disaster recovery – it becomes a script, no need to buy additional hardware
  2. Easier roll out of patches and upgrades
  3. Reduced operational headache – scripting and automation becomes central
  4. Uniquely suited to seasonal traffic patterns – keep online only the capacity you’re using
  5. Low initial investment
  6. Auto-scaling – set thresholds and deploy new capacity automatically
  7. Easy compromise response – take server offline and spinup a new one
  8. Easy setup of dev, qa & test environments

Some challenges with deploying in the cloud

  1. Big cultural shift in how operations is done
  2. Lower SLAs and less reliable virtual servers – mitigate with automation
  3. No perimeter security – new model for managing & locking down servers
  4. Where is my data?  — concerns over compliance and privacy
  5. Variable disk performance – can be problematic for MySQL databases
  6. New procurement process can be a hurdle

Many of these challenges can be mitigated.  The promise of infrastructure deployed in the cloud is huge, so digging in with gradual adoption is perhaps the best option for many firms.  Mitigate the weaknesses of the cloud by:

  • Using encrypted filesystems and backups where necessary (see the sketch after this list)
  • Keeping offsite backups in-house or at an alternate cloud provider
  • Caching at every layer of your application stack, to mitigate variable EBS performance
  • Employing configuration management & automation tools such as Puppet & Chef
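
A minimal sketch of the encryption piece, assuming the AWS command line tools (sizes, zones and file names are hypothetical):

    # Create an EBS volume that is encrypted at rest
    aws ec2 create-volume --size 100 --availability-zone us-east-1a --encrypted

    # Encrypt a backup before it leaves the server (gpg prompts for a passphrase)
    mysqldump --single-transaction mydb | gzip | \
        gpg --symmetric --cipher-algo AES256 -o mydb-$(date +%F).sql.gz.gpg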

Quora discussion – Why or why not to migrate to the cloud?