Tag Archives: automation

Marching towards continuous deployment


If you’re like a lot of small dev teams & startups, you’ve dreamed of jumping on the continuous deployment train, but still aren’t quite there.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

You’ve got your code in some sort of repository. Now what? As it turns out the concepts aren’t terribly complicated. The hardest part is figuring out the process that works for your team.

1. Make a single script for deployment

Can you build easily? You want to take steps to simplify the build process & work towards everything being done from a single script. This might be an Ant or Maven script. It might be Rake if you’re using Ruby. Or it might be a makefile. Either way, it organizes dependencies, checks your system & compiles things if necessary.
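
If you’re wondering what that single entry point might look like, here’s a minimal sketch in Python. The stage commands (pip, make, pytest) are just hypothetical placeholders for whatever your stack actually uses:

```python
#!/usr/bin/env python
"""build.py -- one entry point for the whole build.

A minimal sketch; the commands below are placeholders
for whatever your stack uses (ant, mvn, rake, make...).
"""
import subprocess
import sys

# Each stage is a name plus the shell command that runs it.
STAGES = [
    ("deps",    "pip install -r requirements.txt"),
    ("compile", "make all"),
    ("test",    "pytest tests/"),
]

def run(name, cmd):
    print("==> %s: %s" % (name, cmd))
    # Fail fast: a non-zero exit code aborts the whole build.
    if subprocess.call(cmd, shell=True) != 0:
        sys.exit("build failed at stage '%s'" % name)

if __name__ == "__main__":
    for name, cmd in STAGES:
        run(name, cmd)
    print("build OK")
```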

Also: Do startups need techops?

2. Do nightly builds

If you’re currently doing manual builds, work towards doing them nightly. Once you have that under your belt, you can schedule them to happen automatically every night. But you’re not there yet. You want to improve the build process first: work on its performance (point 3 below) and its quality (point 4).

Related: Is there a devops talent gap?

3. Is your build process slow?

If it takes a long time to do the build, it takes a long time to get to the point where you can smoke test. You want to shorten this time so you can iterate faster. Look at ways to improve the performance of the whole chain. What’s it getting stuck on?
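
One way to find out where it’s getting stuck is simply to time each stage. A rough sketch, again with placeholder commands:

```python
import subprocess
import time

# Hypothetical stages -- substitute your own build commands.
STAGES = [
    ("deps",    "pip install -r requirements.txt"),
    ("compile", "make all"),
    ("test",    "pytest tests/"),
]

timings = {}
for name, cmd in STAGES:
    start = time.time()
    subprocess.call(cmd, shell=True)
    timings[name] = time.time() - start

# Report the slowest stage first -- that's your bottleneck.
for name, secs in sorted(timings.items(), key=lambda kv: -kv[1]):
    print("%-10s %6.1fs" % (name, secs))
```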

Read: 3 things devops can learn from aviation

4. Is your build quality poor?

Your tests are going to verify application functionality, do security checks, performance or even compliance checks. If these are often failing, you need to dig in and find the source of the trouble. Are the tests not specific enough? Or are you passing your tests, but finding a lot of bugs in QA?

It may take your team some time to get better at building the right tests, and reducing the bugs that show up in QA. But this is crucial to building the confidence level where you’ll be ready to automate the whole pipeline. As you become more confident in your tests, you’ll be confident enough to deploy to production automatically.
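
A smoke test doesn’t need to be elaborate to start building that confidence. Here’s a bare-bones sketch using only the Python standard library; the staging URL & the “ok” marker are hypothetical placeholders:

```python
"""A bare-bones post-deploy smoke test.
The staging URL & expected marker are hypothetical placeholders."""
from urllib.request import urlopen

STAGING_URL = "http://staging.example.com/health"

def test_app_responds():
    resp = urlopen(STAGING_URL, timeout=10)
    # The health endpoint should answer 200 with an "ok" marker in the body.
    assert resp.getcode() == 200
    assert "ok" in resp.read().decode()

if __name__ == "__main__":
    test_app_responds()
    print("smoke test passed")
```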

Also: How to deploy on Amazon EC2 with Vagrant

5. Evaluate tools to help

Continuous deployment is a lot about process. Once you’ve gotten a handle on that, you’ll have a much better idea of what you want out of the tools. Amazon’s own CodePipeline is one possible build server you can use. Because it’s a service, it’s one less server you have to manage. And of course Jenkins is a popular option; even with Jenkins there is a service-based offering, from CloudBees. You might also take a look at CircleCI & Travis, newer service-based offerings. Although they don’t have all the plugins & integrations of Jenkins, they’ve learned from bumps in the road and improved the formula.

We like CircleCI because it has a smaller footprint than Jenkins, integrates with Slack & Hipchat, and has Docker support as well.

Also: 5 Tips for better database change management

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Is AWS too complex for small dev teams & startups?


I was discussing a server outage with a colleague recently. AWS had done some confusing things, and the team was rallying to troubleshoot & fix.

He made an offhand comment that caught my attention…


AWS is too complex for small dev teams. I’d recommend we host in a traditional datacenter.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

It’s an interesting point. For all the fanfare over Amazon, lost in the shuffle is the staggering complexity that we’re taking on. For small firms, this is a cost that’s often forgotten when we drink the on-demand Kool-Aid that is EC2.

Here are my thoughts…

1. Over 70 services offered

Every time I log in to the AWS console there’s a new service offering. Lambda & serverless computing. CodeDeploy, Redshift, EMR, VPCs, developer tools, IoT, the list goes on. If you haven’t enabled MFA on your IAM accounts, you’re not alone!
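
You can see the sprawl right from the boto3 SDK itself; this little snippet counts the services your installed SDK knows about:

```python
import boto3

# Every service name the installed boto3 SDK can talk to.
services = boto3.Session().get_available_services()
print("boto3 knows about %d AWS services" % len(services))
for svc in services:
    print(" -", svc)
```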

Also: Is Amazon too big to fail?

2. Still complex to build high availability

The song I hear out of Amazon is: we offer all the components for a high availability infrastructure. Multiple availability zones, regions, load balancers, autoscaling, geo & latency DNS routing. What’s more, companies like Netflix have open sourced tools to help.

But at a lot of startups that I see, all these components are not in use, nor are they well understood. Many admins are still using Amazon like an old-school datacenter. And that’s not good.

Sometimes it seems that AWS is a patient in need of constant medication.
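
To make those building blocks a little more concrete, here’s a minimal boto3 sketch of an autoscaling group spread across two availability zones. The names, AMI & sizes are all hypothetical placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# A launch configuration tells AWS what each instance looks like.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",   # hypothetical name
    ImageId="ami-12345678",             # hypothetical AMI
    InstanceType="t2.micro",
)

# The group keeps at least two instances alive, spread across
# two availability zones for high availability.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=4,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```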

Related: Are we fast approaching cloud-mageddon?

3. Need a dedicated devops

As AWS becomes more complex, and the offering more robust, so too grows the need for dedicated ops. If your devs are already out of bandwidth, but you don’t quite have the need for a fulltime resource, a consultant may be an option. Round out the team & keep costs manageable.

If you’re looking for an aws solutions architect, we can help!

Check out: Does Amazon eat its own dogfood?

4. Orchestration involves many moving parts

Infrastructure as code offers the promise of completely versioning all your servers, configurations and changes. From there we can apply test driven development & bring a more professional level of service to our business. That’s the theory anyway.

In practice it brings an incredible number of new toolsets to master, and a more complex stack besides. All those components can have bugs and need troubleshooting. This sometimes just kicks the can down the road, moving the complexity elsewhere.

It’s not clear that for smaller shops, all this complexity is manageable.

Also: 5 things toxic to scalability

5. Troubleshooting failed deployments

I was looking at a problem with a broken deploy recently. Turns out a developer had copy & pasted some code solution off the internet, possibly from a tutorial, and broke deployments to staging.

Yes, perhaps this was avoidable, and more checks & balances could have caught it. But my thought is that continuous integration & continuous deployment are not a panacea. More complexity brings a more complex web to unweave.

I sometimes wonder if we aren’t fast approaching cloud-mageddon?

Read: Why Airbnb didn’t have to fail?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Sean Hull interviewed on the Doppler Cloud podcast

I recently got a chance to talk with Mike Kavis over at Cloud Technology Partners. It was fun to get away from the keyboard, and in front of the microphone for a change.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

1. Docker

Docker is making deployments easier & easier. But as the pace accelerates, are we introducing vulnerabilities & scalability problems faster than we can fix them?

Also: Are generalists better at scaling the web?

2. Redshift

I’ve blogged that I don’t work with recruiters but I do chat with them regularly.

In a recent conversation a recruiter asked me:

“Why is it that suddenly everyone is looking for Redshift?”

I’m seeing the same trend. And if you look at Hadoop you might see why. Writing SQL queries against Redshift data is wildly simpler than writing EMR jobs for Hadoop.

Related: Why Dropbox didn’t have to fail?

3. Devops automation

These days I hear a lot of talk that all of operations is software development. Are you still SSHing into boxes? You’re doing it wrong!

Read: When hosting data on Amazon turns bloodsport

4. Hardware solves all speed problems

Having performance problems? Scale out! Database slow? Scale up! These days it seems the old short-sighted way of thinking is back with a vengeance. Throw hardware at the problem and kick the can down the road.

Also: How to hire a developer that doesn’t suck?

5. Amazon disrupting VC

During dot-com version one-point-oh, you’d need hundreds of thousands to buy hardware & software licenses to get an idea off the ground. That necessarily meant raising real VC money.

Amazon web services & on-demand computing has brought world class infrastructure to even the smallest startups. For just dollars, they can get started.

Now we’re seeing startups get going with micro investments from the likes of Angel List syndicates. Cutting traditional VCs right out of the equation.

Also: Is Amazon too big to fail?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

5 tech challenges I’m thinking about today


Technical operations & startup tech are experiencing an incredible upheaval which is bringing a lot of great things.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

Here are some of the questions it raises for me.

1. Are we adopting Docker without enough consideration?

Container deployments are accelerating at a blistering pace. I was reading Julian Dunn recently, and he had an interesting critical post Are container deployments like an oncoming train?

He argues that we should be wary of a few trends. One is taking legacy applications and blindly containerizing them. Now we can keep them alive forever. :) He also argues that there is a tendency for folks who aren’t particularly technical or qualified to start evangelizing it everywhere. A balm for every ailment!

Also: Is Amazon too big to fail?

2. Is Redshift supplanting Hadoop & Spark for startup analytics?

In a recent blog post I asked Is Redshift outpacing Hadoop as the big data warehouse for startups?

On the one hand this is exciting. Speed & agility are always good, right? But what of more Amazon & vendor lock-in?

Related: Did Dropbox have to fail?

3. Does devops automation make all of operations a software development exercise?

I asked this question a while back on my blog. Is automation killing old-school operations?

Automation suites like Chef & Puppet are very valuable in enabling the administration of fleets of servers in the cloud. They’re essential. But there’s some risk in moving further away from the bare metal: we might weaken the everyday tuning & troubleshooting skills that are essential to technical operations.

Read: When hosting data on Amazon turns bloodsport?

4. Is the cloud encouraging the old pattern of throwing hardware at the problem?

Want to scale your application? Forget tighter code. Don’t worry about tuning SQL queries that could be made 1000x faster. We’re in the cloud. Just scale out!

That’s right, with virtualization we can elastically scale anything. Infinitely. :)

I’ve argued that throwing hardware at the problem is like kicking the can down the road. Eventually you have to pay your technical debt & tune your application.

Also: Are SQL databases dead?

5. Is Amazon disrupting venture capital itself?

I’m no expert on the VC business. But Ben Thompson & James Allworth surely are. And they suggested that because of AWS, startups can set up their software for pennies.

This resonates loud & clear for me. Why? Because in the 90’s I remember startups needing major venture money to buy Sun hardware & Oracle licenses to get going. A half million easy.

They asked: Is Amazon Web Services enabling AngelList syndicates to disrupt the venture capital business? That’s a pretty interesting perspective. It would be ironic if all of this disruption that VCs bring to entrenched businesses began to unravel their own!

Also: Are we fast approaching cloud-mageddon?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

5 core pieces of the Amazon Cloud puzzle to get your project off the ground


One of the most common engagements I do is working with firms in and around the NYC startup sector. I evaluate AWS infrastructures & applications built in the Amazon cloud.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

I’ve seen some patterns in customers’ usage of Amazon. Below is a laundry list of the most important ones.

On our products & pricing page you can see more detail including how we perform a performance review and a sample executive summary.

1. Use automation

When you first start using Amazon Web Services to host your application, you, like many before you, may think of it like your old-school hosting. Set up a machine, configure it, get your code running. The traditional model of systems administration. It’s fine for a single server, but if you’re managing a more complex deploy with continuous integration, or want to be resilient to regular server failures, you need to do more.

Enter the various automation tools on offer. The simplest of the three Amazon provides is Elastic Beanstalk. If you’re using a very standard stack & don’t need a lot of customizations, this may well work for you.

With more complex deployments you’ll likely want to look at OpsWorks. Sound familiar? That’s because it *is* Opscode Chef. Everything you can do with Chef & all the templates out there will work with Amazon’s offering. Let AWS manage your templates & make sure your servers are in the right state, just like hosted Chef.

If you want to get down to the assembly language layer of infrastructure in Amazon, you’ll eventually be dealing with CloudFormation. This is JSON code which defines everything, from a server with an attached EBS volume, to a VPC with security rules, IAM users & everything in between. It is ultimately what these other services utilize under the hood.
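
For a taste of that assembly language, here’s a hedged sketch: the smallest possible CloudFormation template, a single EC2 instance, launched with boto3. The AMI & stack name are placeholders:

```python
import json
import boto3

# The smallest possible template: one EC2 instance.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",   # placeholder AMI
                "InstanceType": "t2.micro",
            },
        }
    },
}

cf = boto3.client("cloudformation", region_name="us-east-1")
cf.create_stack(
    StackName="my-first-stack",   # placeholder name
    TemplateBody=json.dumps(template),
)
```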

Also: Is Amazon too big to fail?

2. Use Advisor & Alerts

Amazon has a few cool tools to help you manage your infrastructure better. One is called Trusted Advisor. This helps you by reviewing your AWS usage against best practices. Cost, performance, security & high availability are the big focal points.

In order to make best use of alerts, you’ll want to do a few things. First define an autoscaling group. Even if you don’t want to use autoscaling, putting your instance into one allows Amazon to do the monitoring you’ll want.

Next you’ll want to analyze your CloudWatch metrics for usage patterns. Notice a spike? It could be a job that is running, or it could be seasonal traffic that you need to manage. Once you have some ideas here, you can set alerts around normal & problematic usage patterns.
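
Once you know what normal looks like, an alert takes only a few lines of boto3. A sketch, with the instance id & SNS topic as placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm if average CPU stays above 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="web-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0abc123"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    # Placeholder SNS topic that pages the team.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```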

Related: Are we fast approaching cloud-mageddon?

3. Use Multi-factor at Login

If you haven’t already done so, you’ll want to enable multi-factor authentication on your AWS account. This provides much more security than a password (even a sufficiently long one) ever can. You can use Google Authenticator to generate the MFA codes, and associate it with your smartphone.

While you’re at it, you’ll want to create at least one alternate IAM account so you’re not logging in through the root AWS account. This adds a layer of security to your infrastructure. Consider creating an account for your command line tools to spin up components in the cloud.

You can also use MFA for your command line SSH logins. This is also recommended & not terribly hard to set up.
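
A quick way to audit your progress is to walk the IAM users & flag anyone without an MFA device. A minimal boto3 sketch (pagination omitted for brevity):

```python
import boto3

iam = boto3.client("iam")

# Flag any IAM user that has no MFA device attached.
for user in iam.list_users()["Users"]:
    name = user["UserName"]
    devices = iam.list_mfa_devices(UserName=name)["MFADevices"]
    if not devices:
        print("WARNING: %s has no MFA enabled" % name)
```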

Read: When hosting data on Amazon turns bloodsport

4. Use virtual networking

Amazon offers Virtual Private Cloud, which allows you to create virtual networks within the Amazon cloud. Set your own IP address range, create route tables, gateways, subnets & control security settings.

There is another interesting offering called VPC peering. Previously, if you wanted to route between two VPCs or across the internet to your office network, you’d have to run a box within your VPC to do the networking. This became a single point of failure, and also had to be administered.

With VPC peering, Amazon can do this at the virtualization layer, without extra cost, without single point of failure & without overhead. You can even use VPC peering to network between two AWS accounts. Cool stuff!
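
Setting up a peering connection between two VPCs takes just a few calls. A boto3 sketch, with all the resource ids as placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection between two VPCs (placeholder ids).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",
    PeerVpcId="vpc-22222222",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The owner of the peer VPC accepts the request.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Finally, route traffic for the peer's CIDR over the peering link.
ec2.create_route(
    RouteTableId="rtb-33333333",          # placeholder route table
    DestinationCidrBlock="10.1.0.0/16",   # placeholder peer CIDR
    VpcPeeringConnectionId=pcx_id,
)
```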

Also: Are SQL databases dead?

5. Size instances & I/O

I worked with one startup that had been founded in 2010. They had initially built their infrastructure on AWS, so they chose instances based on what was available at the time: m1.large & m1.xlarge. A smart choice then, but oh how things evolve in the Amazon world.

Now those instance types are “previous generation”. Newer instances offer SSD, more CPU & better I/O for roughly the same price. If you’re in this position, be sure to evaluate upgrading your instances.

If you’re on Amazon RDS, you may not be able to get to the newer instance sizes until you upgrade your database. Does upgrading MySQL involve much more downtime on Amazon RDS? In my experience it surely does.

Along with instance sizes, you’ll also want to evaluate disk I/O options. By default, instances in Amazon are multi-tenant and use disk as a shared resource, so you’ll see I/O performance go up & down dramatically. This can kill database performance & can be painful. There are solutions, though expensive ones: consider provisioned IOPS and additional SSD storage.
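
If shared disk is hurting you, provisioned IOPS buys a guaranteed rate. A boto3 sketch of creating such a volume; the zone & size are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# An io1 volume with a guaranteed 2000 IOPS, instead of taking
# whatever the multi-tenant shared disk happens to give you.
ec2.create_volume(
    AvailabilityZone="us-east-1a",   # placeholder zone
    Size=100,                        # GB, placeholder
    VolumeType="io1",
    Iops=2000,
)
```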

Also: Is the difference between dev & ops a four-letter word?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

5 Things I just learned from James Turnbull about Docker


Join 28,000 others and follow Sean Hull on twitter @hullsean.

I just got my hands on a copy of James Turnbull’s new book The Docker Book. It’s an excellent introduction to Linux containers & the powerful things you can do with them. It’s 335 pages covering all the introductory topics to get you up and running and then more advanced topics like working with the docker API, building services & extending docker.

Here’s what I learned…

1. Containers aren’t new

The technology we today call containers is based on the Unix chroot mechanism, which was introduced way back in the 80’s.

With traditional virtualization, we use a hypervisor layer, so we emulate hardware. The virtual machine running on top can run anything, from Windows to different flavors & versions of Unix. It appears to be a completely separate piece of hardware.

With containers we move up to the operating system level, and we create isolation between users. These users all share the same parent operating system. This means it requires dramatically less overhead. That means speed!

Docker is an automation layer built on Linux Containers, or LXC. To applications it looks like they have their own machine: their own userspace, their own filesystem, their own network.

Also: Is Apple betting against big data?

2. No more VirtualBoxes

Are you tired of waiting for your VMs to spin up? Building dev & test environments becomes lightning fast with Docker. This accelerates software development, and makes a lot of things easier.
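
You can feel the difference with the Docker SDK for Python; a throwaway container starts in about a second, versus minutes for a full VM boot. A sketch:

```python
import time
import docker  # pip install docker

client = docker.from_env()

start = time.time()
# Run a throwaway container (the very first run also pulls the image,
# so it takes longer; subsequent runs start in roughly a second).
output = client.containers.run(
    "ubuntu", "echo hello from a container", remove=True
)
print(output.decode().strip())
print("took %.1fs" % (time.time() - start))
```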

Also: When prospects mislead

3. Images, registries & containers

Images share some of the properties of images in hypervisor virtualization. However they are implemented with union file systems. While VirtualBox images take some time to boot, as the entire filesystem must be read & code executed anew, Docker images are more like source code to the LXC subsystem.

Registries store your public and private images. The Docker Hub is one popular one. You can also host & deploy your own docker registry as your needs dictate.

Like VMs, containers can be started & stopped at will, albeit at lightning-fast speed. They can also be deleted, much as a VM can be.

Also: What can new york fashion week teach Chad Dickerson about Net Neutrality?

4. Lightning fast sandboxes

As we mentioned containers are fast. Did we mention really fast?

This can facilitate unit testing & continuous integration. A lot of shops are starting to use Jenkins for continuous integration, and fast testing is key to this process.

Also: Is automation killing old-school technical operations?

5. They work with Vagrant

Are you already using Vagrant to automate deployment of virtual environments? If so, the transition is easy. Here Docker becomes your provisioner.

Mark Stratmann put together a great how-to, Implementing a Vagrant / Docker Dev environment, which we’d recommend you take a look at. You can also head over to the Vagrant docs themselves.

Also: Which tech do startups use most?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Is automation killing old-school operations?


Join 27,000 others and follow Sean Hull on twitter @hullsean.

I was shocked to find this article on ReadWrite: The Truth About DevOps: IT Isn’t Dead; It’s not even Dying. Wait a second, do people really think this?

Truth is I have heard whispers of this before. I was at a meetup recently where the speaker claimed “With more automation you can eliminate ops. You can then spend more on devs”. To an audience of mostly developers & startup founders, I can imagine the appeal.

1. Does less ops mean more devs?

If you’re listening to a platform service sales person or a developer who needs more resources to get his or her job done, no one would be surprised to hear this. If we can automate away managing the stack, we’ll be able to clear the way for the real work that needs to be done!

This is a very seductive perspective. But it may be akin to taking on technical debt, ignoring the complexity of operations and the perspective that can inform a longer view.


Puppet Labs’ Luke Kanies says “Become uniquely valuable. Become great at something the market finds useful.” I couldn’t agree more.

Read: Are SQL Databases Dead?

2. What happens when developers leave?

I would argue that ops have a longer view of the product lifecycle. I for one have been brought into many projects after the first round of developers have left, and teams are trying to support that software five years after the first version was built.

That sort of long term view, of how to refresh performance, and revitalize code is a unique one. It isn’t the “building the future” mindset, the sexy products, and disruptive first mover “we’re changing the world” mentality.

It’s a more stodgy & conservative one. The mindset is of reliability, simplicity, and long term support.

Also: How to hire a developer that doesn’t suck

3. What’s your mandate?

From what I’ve seen, devs & ops are divided by a four letter word.

That word, I believe, is “risk”. Devs have a mandate from the business to build features & directly answer customer requests today. Ops have a mandate for reliability, working against change and thinking in terms of making all that change manageable.

Different mandates mean different perspectives.

Related: What is Devops & why is it important?

4. Can infrastructure live as code?

Puppet, along with infrastructure automation & configuration management tools like Chef, offers the promise of fully automated infrastructure. But the truth is much, much more complex. As typical technology stacks expand from load balancer, webserver & database to multiple databases, caching server, search server, puppet masters, package repositories, monitoring & metrics collection & jump boxes, we’re all reaching a saturation point.

Yes, automation helps with that saturation, but ultimately you need people with those wide-ranging skills to manage the complex web of dependencies when things fail.

And fail they will.

Check out: Why are MySQL DBA’s and ops so hard to find?

5. ORMs and architecture

If you aren’t familiar, ORM is a rather dry sounding name for a component that is regularly overlooked. It’s middleware sitting between application & database, and it drastically simplifies developers’ lives. It helps them write better code and get on with the work of delivering to the business. It’s no wonder ORMs are popular.

But as Ward Cunningham eloquently explains, they are surely technical debt that eventually must get paid. Indeed.

There is broad agreement among professional DBAs: each query should be written, tuned and deployed, just like any other bit of code. Handing that process to a library is doomed to failure. Yet ORMs are still evolving, and the dream lives on.
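
For a concrete taste of that debt, here’s a hedged SQLAlchemy sketch: an innocent-looking loop that silently issues one query per row, the classic N+1 pattern no DBA would write by hand:

```python
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    orders = relationship("Order")

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"))

# echo=True prints every SQL statement, so you can watch them pile up.
engine = create_engine("sqlite://", echo=True)
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([User(name="alice"), User(name="bob")])
    session.commit()

    # One query fetches the users...
    for user in session.query(User).all():
        # ...but each .orders access lazily fires ANOTHER query:
        # N users means N+1 round trips to the database.
        print(user.name, len(user.orders))
```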

And all that because devs & ops have completely different perspectives. We need both of them to run modern internet applications. Let’s not forget that, folks. :)

Read this: Do managers and CTO’s underestimate operational costs?

Want more? Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

5 cloud ideas that aren’t actually true


Join 20,000 others and follow Sean Hull’s scalability, startup & innovation content on twitter @hullsean.

Cloud computing is heralding us into a wonderful era where computing can be bought in small increments, like a utility. This changes the whole way we plan, manage budgets, and accelerates startups making them more agile.

But it’s not all wine & roses up there. I’ve heard a few refrains from clients over the years, and thought I’d share some of the most common.

1. Scaling is automatic

Rather recently I was working with a client on building some sophisticated reports. They needed to slice & dice customer data, over various time series, and summarize with invoices & tracking data. Unfortunately their dataset was large, in the half terabyte range.


Client: Can we just load all this data into the cloud?
Me: Yes we can do that. Build a system in Amazon public cloud, can support large datasets.
Client: I want it to scale easily. So we won’t have these slow reports. And as we add data, it’ll just manage it easily for us.
Me: Well it’s a little bit more complicated than that, unfortunately.

Unfortunately this is a conversation I have quite often. A lot of the press around cloud scalability centers on auto-scaling, Amazon’s renowned & superb virtualization feature. Yes, it’s true you can roll out webservers to scale out this way, but that’s not the end of the story. Typical web applications have a lot of components, from caching servers, to search servers, and of course their backend datastore.

But can we scrap our relational database, such as MySQL, and go with one that scales out of the box like Riak, Cassandra or DynamoDB?

Those NoSQL solutions are built to be distributed from the start, it’s true. And they lend themselves to that type of architecture. However, if you’ve built up a dataset in MySQL or Oracle, and more so an application around that, you’ll have to migrate data into the NoSQL solution. That process will take some time.

Like teaching a fish to fly, it may take some time. They do well in water, but evolution takes a bit longer.

Related: RDS or MySQL 10 use cases

2. Disaster recovery is free

In the traditional datacenter, when you want DR, you set up a parallel environment. Hopefully not in the same room, same city, or even the same coast. Preferably you do so in a different region. What you can’t get around is dishing out cash for that second datacenter. You need the servers, just in case.

In the cloud, things are different. That’s why we’re here, right? In Amazon you have regions already set up & available for plug-n-play use. Set up your various components, servers & software, and configure. Once you’ve verified you can failover to the parallel environment, you can just turn off all those instances. Great, no big charges for all that iron you’d pay for to keep the rooms warm in an old-school datacenter. Or do you?

As it turns out, since you don’t have this environment running all the time, you’ll want to test it more often, running fire drills to bring the servers back online. That’ll incur some costs in terms of manpower. You’ll also want some scripts to start those servers up, and/or detailed documentation on how to do that. And don’t lose that documentation either, will you?

You may also want to build some infrastructure as code unit tests. Things change, code checkouts evolve, especially in the agile & continuous integration world. Devops beware!
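
Such a unit test can be as simple as asserting that the DR environment actually answers. A sketch; the hostnames & ports are hypothetical:

```python
"""Fire-drill check: does the DR environment actually come up?
Hostnames & ports here are hypothetical placeholders."""
import socket

DR_SERVICES = [
    ("dr-web.example.com", 443),
    ("dr-db.example.com", 3306),
]

def test_dr_services_reachable():
    for host, port in DR_SERVICES:
        # A simple TCP connect; real drills would go further
        # (log in, run a query, check replication lag...).
        sock = socket.create_connection((host, port), timeout=5)
        sock.close()

if __name__ == "__main__":
    test_dr_services_reachable()
    print("DR fire-drill checks passed")
```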

Read this: Why a killer title can make or break your content efforts

3. Machines are fast

Fast, fast, fast. That’s what we expect, things keep getting faster, right? Hard to believe then that the world of computing took a big step backward when it jumped into the cloud. Something similar happened when we jumped to commodity Linux a decade ago.

In amazon, it’s a multi-tenant world. And just like apartment buildings, popular restaurants, or busy highways you must share. When things are quiet you may have the road to yourself, but it’ll never be as quiet as a dirt road in the country!

Amazon is making big strides though. They now offer memory-optimized & storage-optimized instances. And an even bigger development is the addition of the most important feature for performance & scalability. That said, the network & EBS can still be a real bottleneck.

Also: What is a relational database & why is it important?

4. Backups aren’t necessary

I’ve experienced a few horror stories over the years. I wrote about one noteworthy case in When fat fingers take down your business.

True, EBS snapshots make backing up your whole server, well, a snap! That said, a few extra steps have to happen (flush the filesystem & lock tables) to make this work for a relational database like MySQL or Oracle. And suddenly you have a verification step that you also need to perform. You see, no backup is valid until it’s been restored, remember?

But even with these wonderful disk snapshots, you’ll still want to do database dumps, and perhaps table dumps. Operator error, deleting the wrong data, or dropping the wrong tables, will always be a risk. Ignore backups at your own peril!
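
Here’s a hedged sketch of those extra snapshot steps for MySQL on EBS: hold a read lock while the snapshot is initiated, then release it right away. The volume id & credentials are placeholders, and pymysql is just one client library you might use:

```python
import boto3
import pymysql  # pip install pymysql

# Placeholder credentials & volume id.
conn = pymysql.connect(host="127.0.0.1", user="root", password="secret")
ec2 = boto3.client("ec2", region_name="us-east-1")

cur = conn.cursor()
# Quiesce MySQL so the on-disk state is consistent.
cur.execute("FLUSH TABLES WITH READ LOCK")
try:
    # Kick off the EBS snapshot while the lock is held;
    # it's point-in-time once it has been initiated.
    snap = ec2.create_snapshot(
        VolumeId="vol-0abc12345",   # placeholder volume
        Description="mysql consistent snapshot",
    )
    print("started snapshot", snap["SnapshotId"])
finally:
    # Release the lock immediately -- don't hold up writes.
    cur.execute("UNLOCK TABLES")
    conn.close()
```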

Check this: Why CTOs underestimate operational costs

5. Outages won’t happen

In an ideal world, everything is redundant, and outages will be a thing of the past. We’ll finally reach five nines uptime and devops everywhere will be out of work. :)

It’s true that Amazon provides all the components to build redundancy into your architecture, and very cutting edge firms that have taken Netflix’s approach with Chaos Monkey are seeing big improvements here. But Airbnb did fail, and at root it was an Amazon outage that shouldn’t ever have happened.

Read: Why Oracle won’t kill MySQL

Get more. Monthly insights about scalability, startups & innovation. Our latest Are SQL Databases Dead?

3 Things Devops Can Learn from Aviation

I recently went on a flight with a pilot friend & two others. With only 3 passengers, you get up front and center to all the action. Besides getting to chat directly with the pilot, you also see the many checks & balances in place. They call it the Warrior aviation checklist. It inspired this post on disaster recovery in the datacenter.

Join 8000 others and follow Sean Hull on twitter @hullsean.

1. Have a real plan B

Looking over the document you can see there are procedures for engine failure in flight! That’s right: those small planes can have an engine failure, and they glide. So for starters, the technology is designed for failure. But the technology design is not the end of the story.

When you have procedures and processes in place, and training to deal with failure, you’re expecting and planning for it.

Also check out 5 Conversational Ways to Evaluate Great Consultants.

Expect & plan for failure. Use technologies designed to fail gracefully and build software to do so. Then put procedures in place to detect & alert you so you can put those processes into place quickly.

2. Use checklists

Checklists are an important part of good process. We use them for code deploys. We can use them for disaster recovery too. But how to create them?

Firedrills! That’s right, run through the entire process of disaster recovery with your team & document it. As you do so, you’ll be creating the master checklist for real disaster recovery. Keep it in a safe place, or better, multiple places, so you’ll have it when you really need it.

Read this Scalability Tips & Greatest Hits.

3. Trust your instruments

Modern infrastructures include tens or hundreds of servers. With cloud instances so simple to spin up, we add new services and servers daily. What to do?

You obviously need to use the machines to monitor the machines. Have a good monitoring system like Nagios, and metrics collection such as New Relic to help you stay ahead of failure.

Setup the right monitoring automation, and then trust the instruments!

Related Real disaster recovery lessons from Sandy.

Get some in your inbox: Exclusive monthly Scalable Startups. We share tips and special content. Here’s a sample

Amazon Web Services – What is it and why is it important?

Amazon Web Services is a division of Amazon the bookseller, but this part of the business is devoted solely to infrastructure and internet servers. These are the building blocks of data centers, the workhorses of the internet. AWS’s offering of cloud computing solutions allows a business to set up, or “spinup” in the jargon of cloud computing, new compute resources at will. Need a small single-CPU 32-bit Ubuntu server with two 20G disks attached? It’s one command and 30 seconds away!
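
That “one command” is literal. With today’s boto3 SDK, the same spinup looks roughly like this; the AMI id is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One call and about 30 seconds later you have a server.
resp = ec2.run_instances(
    ImageId="ami-12345678",   # placeholder Ubuntu AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print("launched", resp["Instances"][0]["InstanceId"])
```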

As we discussed previously, infrastructure provisioning has evolved dramatically over the past fifteen years, from something that took time and cost a lot, to the fast, automatic process it is today with cloud computing. This has also brought a dramatic culture shift in the way systems administration is done: from a fairly manual process of physical machines and software configuration, one that took weeks to set up new services, to a scriptable, automatable process that can take seconds.

This new realm of cloud computing infrastructure and provisioning is called Infrastructure as a Service or IaaS, and Amazon Web Services is one of the largest providers of such compute resources.  They’re not the only ones of course.  Others include:

  • Rackspace Cloud
  • Joyent
  • GoGrid
  • Terremark
  • 3Tera
  • IBM
  • Microsoft
  • Enomaly
  • AT&T

Cloud computing is still in its infancy, but it is growing quickly. Amazon themselves had a major data center outage in April that we discussed in detail. It sent some hot internet startups into a tailspin!

More discussion of Amazon Web Services on Quora – Sean Hull