Tag Archives: automation

Is a dangerous anti-ops movement gaining momentum?

I was talking with a colleague recently. He asked me …

What do you think of the #no-ops movement that seems to be gaining ground? How is it related to devops?

It’s an interesting question. With technologies like Lambda & Docker containers, the role, responsibilities & challenges of operations are definitely changing quickly.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

The tooling & automation stacks that are available now are great.  Groundbreaking. Paradigm shifting.  But there’s another devops story that’s buried there waiting to be heard…

1. What is ops anyway?

What exactly is operations anyway? Charity Majors wrote an amazing piece – WTF is operations – which I highly recommend reading.

At root, operations is about providing a safe nest where software can live. From incubation, to birth, then care & feeding to maturity.

Also: Why Reddit CTO Martin Weiner wants a boring tech stack

2. Is Noops possible?

I think the trend toward a #NoOps movement is a dangerous one.

At first glance this might seem reflexive on my part.  After all I’ve specialized in operations & databases for years.  But I think there’s something more insidious here.

Devs are often presiding over the first wave of software. That’s the initial period of perhaps five years, where frenetic product development is happening.  After those years have passed, early innovators are long gone, and an OPS team is trying to keep things running, and patch where necessary.  This is when more conservative thinking, and the perspective of fewer moving parts & a simpler infrastructure seems so obvious.  All the technical debt is piled up & it’s hard to find the front door.

There’s an interesting article The ops identity crisis by Susan Fowler that I’d recommend for further reading.

Related: Is zero downtime even possible on RDS?

3. The dev mandate

I’ve sat in on teams talking about getting rid of ops & how it’ll mean more money to spend on devs etc.  It’s always a surprising sentiment to hear.

I would argue that developers have a mandate to build product & functionality that can directly help customers. This is in essence a mandate for change. Faster, more agile & responsive means quicker to market & more responsive to changes there.

Read: Five reasons to move data to Amazon Redshift

4. The ops mandate

I’ve also heard the other camp, ops, talking about how stupid & short-sighted devs can be. Deploying the latest shiny toys, without thinking through operational or long-term considerations.

The ops mandate then is for this longer-term view. How can we keep systems stable at 2am? How can we keep them chugging along after five or more years?

This great article Happiness is a boring stack by Jason Kester really sums up the sentiment. The sure & steady, standard & reliable stack wins the operations test every time.

Also: Is Amazon too big to fail?

5. Coming together

Ultimately dev & ops have different mandates.  One for change & new product features, the other against change, for long term stability.  It’s about striking a balance between the two.

It’s always a dance. That’s why dev & ops need to come together. That’s really what devops is all about.

For some further reading, I found Julia Evans’ piece What Is Devops to be an excellent read.

Also: Is the difference between dev & ops a four-letter word?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How I use Terraform & Composer to automate wordpress on aws

How I setup wordpress to deploy automatically on aws

You want to make your wordpress site bulletproof? No server outage worries? Want to make it faster & more reliable? And host it on cheaper components?

I was after all these gains & also wanted to kick the tires on some of Amazon’s latest devops offerings. So I plotted a way forward to completely automate the deployment of my blog, hosted on wordpress.

Here’s how!

Join 28,000 others and follow Sean Hull on twitter @hullsean.

The article is divided into two parts…

Deploy a wordpress site on aws – decouple assets (part 1)

In this one I decouple the assets from the website. What do I mean by this? By moving the db to its own server, or RDS for even simpler management, my webserver can be stopped & started or terminated at will, without losing all my content. Cool.

You’ll also need to decouple your assets. Those are all the files in the uploads directory. Amazon’s S3 offering is purpose-built for this use case. It also comes with easy cloudfront integration for object caching, and lifecycle management to give your files backups over time. Cool!

Deploy a wordpress site on aws – automate (part 2)

In the second part we move on to all the automation pieces. We’ll use PHP’s Composer to manage dependencies. That’s fancy talk for fetching wordpress itself, and all of our plugins.

1. Isolate your config files

Create a directory & put your config files in it.

$ mkdir iheavy
$ cd iheavy
$ touch htaccess
$ touch httpd.conf
$ touch wp-config.php
$ touch a_simple_pingdom_test.php
$ touch composer.json
$ zip -r iheavy-config.zip *
$ aws s3 cp iheavy-config.zip s3://my-config-bucket/

In a future post we’re going to put all these files in version control. Amazon’s CodeCommit offers git repositories much like Github, but integrated right into your account. Once you have your files there, you can use CodeDeploy to automatically place files on your server.

We chose to leave this step out, to simplify the server role you need, for your new EC2 webserver instance. In our case it only needs S3 permissions!
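If you do want to get a jump on version control now, a minimal sketch might look like the following. It assumes you’ve already set up git credentials for CodeCommit; the repository name is just an example.

$ aws codecommit create-repository --repository-name iheavy-config
$ git init
$ git add htaccess httpd.conf wp-config.php composer.json
$ git commit -m "initial config files"
$ git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/iheavy-config
$ git push origin master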

Also: When devops means resistance to change

2. Build your terraform script

Terraform is a lot like Vagrant. I wrote a Howto deploy on EC2 with Vagrant article a couple years ago.

The terraform configuration formalizes what you are asking of Amazon’s API. What size instance? Which AMI? What VPC should I launch in? Which role should my instance assume to get the S3 access it needs? And lastly how do we make sure it gets the same Elastic IP each time it boots?

All the magic is inside the terraform config.

Here’s what I see:

levanter:~ sean$ cat iheavy.tf

resource "aws_iam_role" "web_iam_role" {
    name = "web_iam_role"
    assume_role_policy = <<EOF
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "sts:AssumeRole", "Principal": { "Service": "ec2.amazonaws.com" } } ] }
EOF
}

The full config goes on to define the instance, Elastic IP & S3 bucket you’ll see in the apply output. And here's what it looks like when I ask terraform to build my infrastructure:

levanter:~ sean$ terraform apply
aws_iam_instance_profile.web_instance_profile: Refreshing state... (ID: web_instance_profile)
aws_iam_role.web_iam_role: Refreshing state... (ID: web_iam_role)
aws_s3_bucket.apps_bucket: Refreshing state... (ID: iheavy)
aws_iam_role_policy.web_iam_role_policy: Refreshing state... (ID: web_iam_role:web_iam_role_policy)
aws_instance.iheavy: Refreshing state... (ID: i-14e92e24)
aws_eip.bar: Refreshing state... (ID: eipalloc-78732a47)
aws_instance.iheavy: Creating...
  ami:                               "" => "ami-1a249873"
  availability_zone:                 "" => ""
  ebs_block_device.#:                "" => ""
  ephemeral_block_device.#:          "" => ""
  iam_instance_profile:              "" => "web_instance_profile"
  instance_state:                    "" => ""
  instance_type:                     "" => "t1.micro"
  key_name:                          "" => "iheavy"
  network_interface_id:              "" => ""
  placement_group:                   "" => ""
  private_dns:                       "" => ""
  private_ip:                        "" => ""
  public_dns:                        "" => ""
  public_ip:                         "" => ""
  root_block_device.#:               "" => ""
  security_groups.#:                 "" => ""
  source_dest_check:                 "" => "true"
  subnet_id:                         "" => "subnet-1f866434"
  tenancy:                           "" => ""
  user_data:                         "" => "ca8a661fffe09e4392b6813fbac68e62e9fd28b4"
  vpc_security_group_ids.#:          "" => "1"
  vpc_security_group_ids.2457389707: "" => "sg-46f0f223"
aws_instance.iheavy: Still creating... (10s elapsed)
aws_instance.iheavy: Still creating... (20s elapsed)
aws_instance.iheavy: Creation complete
aws_eip.bar: Modifying...
  instance: "" => "i-6af3345a"
aws_eip_association.eip_assoc: Creating...
  allocation_id:        "" => "eipalloc-78732a47"
  instance_id:          "" => "i-6af3345a"
  network_interface_id: "" => ""
  private_ip_address:   "" => ""
  public_ip:            "" => ""
aws_eip.bar: Modifications complete
aws_eip_association.eip_assoc: Creation complete

Apply complete! Resources: 2 added, 1 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
levanter:~ sean$ 
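One habit worth building in (my suggestion, not part of the walkthrough above): preview with terraform plan before you apply, and keep a copy of that terraform.tfstate somewhere safe. The bucket path below is just an example.

$ terraform plan      # preview what would change, without touching anything
$ terraform apply     # make the changes
$ # back up the state file terraform just warned us to keep safe
$ aws s3 cp terraform.tfstate s3://my-config-bucket/tfstate/terraform.tfstate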

Also: Is Amazon too big to fail?

3. Use Composer to automate wordpress install

There is a PHP package manager called composer. It manages dependencies and we depend on a few things. First WordPress itself, and second the various plugins we have installed.

The file is a JSON file. Pretty vanilla. Have a look:

{
    "name": "acme/brilliant-wordpress-site",
    "description": "My brilliant WordPress site",
    "repositories":[
        {
            "type":"composer",
            "url":"https://wpackagist.org"
        }
    ],
    "require": {
       "aws/aws-sdk-php":"*",
	    "wpackagist-plugin/medium":"1.4.0",
       "wpackagist-plugin/google-sitemap-generator":"3.2.9",
       "wpackagist-plugin/amp":"0.3.1",
	    "wpackagist-plugin/w3-total-cache":"0.9.3",
	    "wpackagist-plugin/wordpress-importer":"0.6.1",
	    "wpackagist-plugin/yet-another-related-posts-plugin":"4.0.7",
	    "wpackagist-plugin/better-wp-security":"5.3.7",
	    "wpackagist-plugin/disqus-comment-system":"2.74",
	    "wpackagist-plugin/amazon-s3-and-cloudfront":"1.1",
	    "wpackagist-plugin/amazon-web-services":"1.0",
	    "wpackagist-plugin/feedburner-plugin":"1.48",
       "wpackagist-theme/hueman":"*",
	    "php": ">=5.3",
	    "johnpbloch/wordpress": "4.6.1"
    },
    "autoload": {
        "psr-0": {
            "Acme": "src/"
        }
    }

}
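If you’d like to kick the tires locally before baking this into the server build, here’s a rough sketch. It assumes PHP & Composer are installed on your workstation; the directories are where johnpbloch/wordpress & the wpackagist installers drop things by default.

$ cd iheavy
$ composer install          # read composer.json & fetch wordpress plus all the plugins
$ ls wordpress/             # wordpress core lands here
$ ls wp-content/plugins/    # the wpackagist plugins land here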

Read: Is aws a patient that needs constant medication?

4. Build your user-data script

This captures all the commands you run once the instance starts. Update packages, install your own, move & configure files. You name it!

#!/bin/sh

# update & install packages: editor, mysql client, php, git, aws cli & image libraries
yum update -y
yum install emacs -y
yum install mysql -y
yum install php -y
yum install git -y
yum install aws-cli -y
yum install gd -y
yum install php-gd -y
yum install ImageMagick -y
yum install php-mysql -y


# install & enable apache
yum install -y httpd24
service httpd start
chkconfig httpd on

# configure mysql password file
echo "[client]" >> /root/.my.cnf
echo "host=my-rds.ccccjjjjuuuu.us-east-1.rds.amazonaws.com" >> /root/.my.cnf
echo "user=root" >> /root/.my.cnf
echo "password=abc123" >> /root/.my.cnf


# install PHP composer
export COMPOSER_HOME=/root
echo "installing composer..."
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php -r "if (hash_file('SHA384', 'composer-setup.php') === 'e115a8dc7871f15d853148a7fbac7da27d6c0030b848d9b3dc09e2a0388afed865e6a3d6b3c0fad45c48e2b5fc1196ae') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
php composer-setup.php
php -r "unlink('composer-setup.php');"
mv composer.phar /usr/local/bin/composer


# fetch config files from private S3 folder
aws s3 cp s3://iheavy-config/iheavy_files.zip .

# unzip files
unzip iheavy_files.zip 

# use composer to get wordpress & plugins
composer update

# move wordpress software
mv wordpress/* /var/www/html/

# move plugins
mv wp-content/plugins/* /var/www/html/wp-content/plugins/

# move pingdom test
mv a_simple_pingdom_test.php /var/www/html

# move htaccess
mv htaccess /var/www/html/.htaccess

# move httpd.conf
mv iheavy_httpd.conf /etc/httpd/conf.d

# move our wp-config into place
mv wp-config.php /var/www/html

# restart apache
service httpd restart

# allow apache to create uploads & any files inside wp-content
chown apache /var/www/html/wp-content

You can monitor things as they're being installed. Use ssh to reach your new instance. Then as root:

$ tail -f /var/log/cloud-init.log

Related: Does Amazon eat it's own dogfood?

5. Time to test

Visit the domain name you specified inside your /etc/httpd/conf.d/mysite.conf

You have full automation now. Don't believe me? Go ahead & TERMINATE the instance in your aws console. Now drop back to your terminal and do:

$ terraform apply

Terraform will figure out that the resources that *should* be there are missing, and go ahead and build them for you. AGAIN. Fully automated style!

Don't forget your analytics beacon code

Hopefully you remember how your analytics is configured. The beacon code makes an API call every time a page is loaded. This tells google analytics or other monitoring systems what your users are doing, and how much time they're spending & where.

This typically goes in the header.php file. We'll leave it as an exercise to automate this piece yourself!

Also: Is AWS too complex for small dev teams?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don't work with recruiters

Deploy wordpress on aws by first decoupling assets

You want to make your wordpress site bulletproof? No server outage worries? Want to make it faster & more reliable? And host it on cheaper components?

I was after all these gains & also wanted to kick the tires on some of Amazon’s latest devops offerings. So I plotted a way forward to completely automate the deployment of my blog, hosted on wordpress.

Here’s how!

Join 28,000 others and follow Sean Hull on twitter @hullsean.

The article is divided into two parts…

Deploy a wordpress site on aws – decouple assets (part 1)

In this one I decouple the assets from the website. What do I mean by this? By moving the db to its own server, or RDS for even simpler management, my webserver can be stopped & started or terminated at will, without losing all my content. Cool.

You’ll also need to decouple your assets. Those are all the files in the uploads directory. Amazon’s S3 offering is purpose-built for this use case. It also comes with easy cloudfront integration for object caching, and lifecycle management to give your files backups over time. Cool!

Terraform a wordpress site on aws – automate deploy (part 2)

In the second part we move on to all the automation pieces. We’ll use PHP’s Composer to manage dependencies. That’s fancy talk for fetching wordpress itself, and all of our plugins.

1. Get your content into S3

How to do it?

A. Move your content

$ cd html/wp-content/
$ aws s3 cp --recursive uploads s3://iheavy/wp-content/uploads/

Don’t have the aws command line tools installed?

$ yum install aws-cli -y
$ aws configure

B. Edit your .htaccess file with these lines:

RewriteEngine On
RewriteRule ^wp-content/uploads/(.*)$ http://s3.amazonaws.com/your-bucket/wp-content/uploads/$1 [P]

The above steps handle all *existing* content. However you also want new content to go to S3. For that wordpress needs to understand how to put files there. Luckily there’s a plugin to help!

C. Fetch WP Offload S3 Lite

You’ll see the plugin below in our composer.json file as “amazon-s3-and-cloudfront”

Theoretically you need to specify your aws credentials inside the wp-config.php. However this is insecure. You don’t ever want stuff like that in S3 or any code repository. What to do?

The best way is to use AWS IAM roles for your instance. These give the whole instance access to API calls without storing credentials. Cool! Read more about AWS roles for instances.
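If you’d rather wire up such a role by hand with the aws cli, a minimal sketch follows. The names match the terraform example in the companion post; trust.json & s3-policy.json are policy documents you’d write yourself.

$ aws iam create-role --role-name web_iam_role --assume-role-policy-document file://trust.json
$ aws iam put-role-policy --role-name web_iam_role --policy-name s3-access --policy-document file://s3-policy.json
$ aws iam create-instance-profile --instance-profile-name web_instance_profile
$ aws iam add-role-to-instance-profile --instance-profile-name web_instance_profile --role-name web_iam_role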

Related: Is there a devops talent gap?

2. Move your database to RDS

You may also use a roll-your-own MySQL instance. The point here is to make it a different EC2 instance. That way you can kill & rebuild the webserver at will. This offers us some cool advantages.

A. Create an RDS instance in a private subnet.

o Be sure it has no access to the outside world.
o Note the root user & password.
o Note the endpoint or hostname.

I recommend changing the password from your old instance. That way you can’t accidentally login to your old db. Well it’s still possible, but it’s one step harder.

B. mysqldump your current wp db from another server

$ cd /tmp
$ mysqldump --opt wp_database > wp_database.mysql

C. Copy that dump to an instance in the same VPC & subnet as the rds instance

$ scp -i .ssh/mykey.pem ec2-user@oldbox.com:/tmp/wp_database.mysql /tmp/

D. Import the data into your new db

$ cd /tmp
$ echo "create database wp_database" | mysql
$ mysql wp_database < wp_database.mysql

E. Edit your wp-config.php

define('DB_PASSWORD', 'abc123');
define('DB_HOST', 'my-rds.ccccjjjjuuuu.us-east-1.rds.amazonaws.com');

Also: When hosting data on Amazon turns bloodsport?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Marching towards continuous deployment

If you’re like a lot of small dev teams & startups, you’ve dreamed of jumping on the continuous deployment train, but still aren’t quite there.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

You’ve got your code in some sort of repository. Now what? As it turns out the concepts aren’t terribly complicated. The hardest part is figuring out the process that works for your team.

1. Make a single script for deployment

Can you build easily? You want to take steps to simplify the build process & work towards everything being done from a single script. This might be an ant or maven script. It might be rake if you’re using ruby. Or it might be a makefile. Whichever you choose, it organizes dependencies, checks your system & compiles things if necessary.
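As a concrete (if simplified) sketch, here’s the shape such a script might take. The helper script names are hypothetical; the point is that one command runs the whole build.

#!/bin/sh
# build.sh -- single entry point for the whole build
set -e                          # stop at the first error

./install_deps.sh               # resolve & fetch dependencies
./run_tests.sh                  # run the test suite; any failure aborts the build
tar -czf build.tar.gz src/      # package the artifact
echo "build complete: build.tar.gz"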

Also: Do startups need techops?

2. Do nightly builds

If you’re currently doing manual builds, work towards doing them nightly. Once you have that under your belt you can schedule them to happen automatically every night. But you’re not there yet. You want to improve the build process first. Is it slow? Is the build quality poor?
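Scheduling that nightly run is a one-line cron entry. A sketch, assuming a build script like the one above lives at /opt/build/build.sh:

# added via crontab -e: run the build at 2am every night & keep a log
0 2 * * * /opt/build/build.sh >> /var/log/nightly_build.log 2>&1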

Related: Is there a devops talent gap?

3. Is your build process slow?

If it takes a long time to do the build, it takes a long time to get to the point where you can smoke test. You want to shorten this time, so you can iterate faster. Look at ways to improve the overall performance of the whole chain. What’s it getting stuck on?

Read: 3 things devops can learn from aviation

4. Is your build quality poor?

Your tests are going to verify application functionality, and do security, performance or even compliance checks. If these are often failing, you need to dig in to find the source of your trouble. Are the tests not specific enough? Or are you passing your tests, but finding a lot of bugs in QA?

It may take your team some time to get better at building the right tests, and reducing the bugs that show up in QA. But this will be crucial to raising your confidence level to the point where you’re ready to automate the whole pipeline. As you become more confident in your tests, you’ll be confident enough to deploy to production automatically.

Also: How to deploy on Amazon EC2 with Vagrant

5. Evaluate tools to help

Continuous deployment is a lot about process. Once you’ve gotten a handle on that, you’ll have a much better idea of what you want out of the tools. Amazon’s own CodePipeline is one possible build server you can use. Because it’s a service, it’s one less server you have to manage. And of course Jenkins is a popular option. Even with Jenkins there is a service-based offering from CloudBees. You might also take a look at CircleCI & Travis, newer service-based offerings which, although they don’t have all the plugins & integrations of Jenkins, have learned from bumps in the road and improved the formula.

We like CircleCI because it’s open source, smaller footprint than Jenkins, integrates with Slack & Hipchat, and has Docker support as well.

Also: 5 Tips for better database change management

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Is AWS too complex for small dev teams & startups?

I was discussing a server outage with a colleague recently. AWS had done some confusing things, and the team was rallying to troubleshoot & fix.

He made an offhand comment that caught my attention…


AWS is too complex for small dev teams. I’d recommend we host in a traditional datacenter.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

It’s an interesting point. For all the fanfare over Amazon, lost in the shuffle is the staggering complexity that we’re taking on. For small firms, this is a cost that’s often forgotten when we drink the on-demand Kool-Aid that is EC2.

Here are my thoughts…

1. Over 70 services offered

Every time I log in to the AWS console there’s a new service offering. Lambda & serverless computing. CodeDeploy, Redshift, EMR, VPCs, developer tools, IoT, the list goes on. If you haven’t enabled MFA on your IAM accounts you’re not alone!

Also: Is Amazon too big to fail?

2. Still complex to build high availability

The song I hear out of Amazon is: we offer all the components for a high availability infrastructure. Multiple availability zones, regions, load balancers, autoscaling, geo & latency DNS routing. What’s more, companies like Netflix have open sourced tools to help.

But at a lot of startups that I see, all these components are not in use, nor are they well understood. Many admins are still using Amazon like an old-school datacenter. And that’s not good.

Sometimes it seems that AWS is a patient in need of constant medication.

Related: Are we fast approaching cloud-mageddon?

3. Need a dedicated devops

As AWS becomes more complex, and the offering more robust, so too grows the need for dedicated ops. If your devs are already out of bandwidth, but you don’t quite need a full-time resource, a consultant may be an option. Round out the team & keep costs manageable.

If you’re looking for an aws solutions architect, we can help!

Check out: Does Amazon eat it’s own dogfood?

4. Orchestration involves many moving parts

Infrastructure as code offers the promise of completely versioning all your servers, configurations and changes. From there we can apply test driven development & bring a more professional level of service to our business. That’s the theory anyway.

In practice it brings an incredible number of new toolsets to master and a more complex stack besides. All those components can have bugs, need troubleshooting. This sometimes just kicks the can down the road, moving the complexity elsewhere.

It’s not clear that for smaller shops, all this complexity is manageable.

Also: 5 things toxic to scalability

5. Troubleshooting failed deployments

I was looking at a problem with a broken deploy recently. Turns out a developer had copy & pasted some code solution off the internet, possibly from a tutorial, and broke deployments to staging.

Yes, perhaps this was avoidable, and more checks & balances could fix it. But my thought is that continuous integration & continuous deployment are not a panacea. More complexity brings a more complex web to unweave.

I sometimes wonder if we aren’t fast approaching cloud-mageddon?

Read: Why Airbnb didn’t have to fail?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Sean Hull interviewed on the Doppler Cloud podcast

I recently got a chance to talk with Mike Kavis over at Cloud Technology Partners. It was fun to get away from the keyboard, and in front of the microphone for a change.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

1. Docker

Docker is making deployments easier & easier. But as the pace accelerates, are we introducing vulnerabilities & scalability problems faster than we can fix them?

Also: Are generalists better at scaling the web?

2. Redshift

I’ve blogged that I don’t work with recruiters but I do chat with them regularly.

In a recent conversation a recruiter asked me:

“Why is it that suddenly everyone is looking for Redshift?”

I’m seeing the same trend. And if you look at Hadoop you might see why. Writing SQL queries against Redshift data is wildly simpler than writing EMR jobs for Hadoop.

Related: Why Dropbox didn’t have to fail?

3. Devops automation

These days I hear a lot of talk that all operations is software development. Are you still SSHing into boxes? You’re doing it wrong!

Read: When hosting data on Amazon turns bloodsport

4. Hardware solves all speed problems

Having performance problems? Scale out! Database slow? Scale up! These days it seems the old short-sighted way of thinking is back with a vengeance. Throw hardware at the problem and kick the can down the road.

Also: How to hire a developer that doesn’t suck?

5. Amazon disrupting VC

During dot-com version one-point-oh, you’d need hundreds of thousands of dollars for hardware & software licenses to get an idea off the ground. That necessarily meant real VC money.

Amazon web services & on-demand computing has brought world class infrastructure to even the smallest startups. For just dollars, they can get started.

Now we’re seeing startups get going with micro investments from the likes of Angel List syndicates. Cutting traditional VCs right out of the equation.

Also: Is Amazon too big to fail?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

5 tech challenges I’m thinking about today

Technical operations & startup tech are experiencing an incredible upheaval which is bringing a lot of great things.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

Here are some of the questions it raises for me.

1. Are we adopting Docker without enough consideration?

Container deployments are accelerating at a blistering pace. I was reading Julian Dunn recently, and he had an interesting critical post Are container deployments like an oncoming train?

He argues that we should be wary of a few trends. One is taking legacy applications and blindly containerizing them. Now we can keep them alive forever. 🙂 He also argues that there is a tendency for folks who aren’t particularly technical or qualified to start evangelizing it everywhere. A balm for every ailment!

Also: Is Amazon too big to fail?

2. Is Redshift supplanting hadoop & spark for startup analytics?

In a recent blog post I asked Is Redshift outpacing hadoop as the big data warehouse for startups.

On the one hand this is exciting. Speed & agility are always good, right? But what about more Amazon & vendor lock-in?

Related: Did Dropbox have to fail?

3. Does devops automation make all of operations a software development exercise?

I asked this question a while back on my blog. Is automation killing old-school operations?

Automation suites like Chef & Puppet are very valuable in enabling the administration of fleets of servers in the cloud. They’re essential. But there’s some risk that in moving further away from the bare metal, we weaken the everyday tuning & troubleshooting skills that technical operations depends on.

Read: When hosting data on Amazon turns bloodsport?

4. Is the cloud encouraging the old pattern of throwing hardware at the problem?

Want to scale your application? Forget tighter code. Don’t worry about tuning SQL queries that could be made 1000x faster. We’re in the cloud. Just scale out!

That’s right with virtualization, we can elastically scale anything. Infinitely. 🙂

I’ve argued that throwing hardware at the problem is like kicking the can down the road. Eventually you have to pay your technical debt & tune your application.

Also: Are SQL databases dead?

5. Is Amazon disrupting venture capital itself?

I’m no expert on the VC business. But Ben Thompson & James Allworth surely are. And they suggested that because of AWS, startups can set up their software for pennies.

This resonates loud & clear for me. Why? Because in the 90’s I remember startups needing major venture money to buy Sun hardware & Oracle licenses to get going. A half million easy.

They asked: is Amazon Web Services enabling AngelList syndicates to disrupt the venture capital business? That’s a pretty interesting perspective. It would be ironic if all of this disruption that VCs bring to entrenched businesses began to unravel their own!

Also: Are we fast approaching cloud-mageddon?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

5 core pieces of the Amazon Cloud puzzle to get your project off the ground

One of the most common engagements I do is working with firms in and around the NYC startup sector. I evaluate AWS infrastructures & applications built in the Amazon cloud.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

I’ve seen some patterns in customers’ usage of Amazon. Below is a laundry list of the most important ones.

On our products & pricing page you can see more detail including how we perform a performance review and a sample executive summary.

1. Use automation

When you first start using Amazon Web Services to host your application, you, like many before you, may treat it like your old-school hosting. Set up a machine, configure it, get your code running. The traditional model of systems administration. It’s fine for a single server, but if you’re managing a more complex deploy with continuous integration, or want to be resilient to regular server failures, you need to do more.

Enter the various automation tools on offer. The simplest of these is Elastic Beanstalk. If you’re using a very standard stack & don’t need a lot of customizations, this may well work for you.

With more complex deployments you’ll likely want to look at OpsWorks. Sound familiar? That’s because it *is* Opscode Chef. Everything you can do with Chef & all the templates out there will work with Amazon’s offering. Let AWS manage your templates & make sure your servers are in the right state, just like hosted Chef.

If you want to get down to the assembly language layer of infrastructure in Amazon, you’ll eventually be dealing with CloudFormation. This is JSON code which defines everything, from a server with an attached EBS volume, to a VPC with security rules, IAM users & everything in between. It is ultimately what these other services utilize under the hood.
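To get a feel for it from the command line, here’s a hedged sketch. The stack name & template file are placeholders; the JSON template itself is something you’d author.

$ aws cloudformation validate-template --template-body file://mystack.json
$ aws cloudformation create-stack --stack-name my-stack --template-body file://mystack.json --capabilities CAPABILITY_IAM
$ aws cloudformation describe-stacks --stack-name my-stack     # watch progress & outputs
$ aws cloudformation delete-stack --stack-name my-stack        # tear it all down again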

Also: Is Amazon too big to fail?

2. Use Advisor & Alerts

Amazon has a few cool tools to help you manage your infrastructure better. One is called Trusted Advisor. This helps by checking your aws usage against best practices. Cost, performance, security & high availability are the big focal points.

In order to make best use of alerts, you’ll want to do a few things. First define an auto scaling group. Even if you don’t want to use autoscaling, putting your instance into one allows amazon to do the monitoring you’ll want.

Next you’ll want to analyze your CloudWatch metrics for usage patterns. Notice a spike? It could be a job that is running, or it could be a seasonal traffic spike that you need to manage. Once you have some ideas here, you can set alerts around normal & problematic usage patterns.
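As one example, here’s roughly how a CPU alert on that auto scaling group might look with the aws cli. The group name & SNS topic ARN are placeholders for your own.

$ aws cloudwatch put-metric-alarm \
    --alarm-name web-cpu-high \
    --namespace AWS/EC2 --metric-name CPUUtilization \
    --dimensions Name=AutoScalingGroupName,Value=my-asg \
    --statistic Average --period 300 --evaluation-periods 2 \
    --threshold 80 --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts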

Related: Are we fast approaching cloud-mageddon?

3. Use Multi-factor at Login

If you haven’t already done so, you’ll want to enable multi-factor authentication on your AWS account. This provides much more security than a password (even a sufficiently long one) ever can. You can use Google Authenticator on your smartphone to generate the MFA codes.

While you’re at it, you’ll want to create at least one alternate IAM account so you’re not logging in through the root AWS account. This adds a layer of security to your infrastructure. Consider creating an account for your command line tools to spin up components in the cloud.
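A minimal sketch of that alternate account with the aws cli might look like this. The user name & attached policy are examples only; scope the policy to what your tooling actually needs.

$ aws iam create-user --user-name deploy
$ aws iam create-access-key --user-name deploy     # keys for your command line tools
$ aws iam attach-user-policy --user-name deploy --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess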

You can also use MFA for your command line SSH logins. This is also recommended & not terribly hard to setup.

Read: When hosting data on Amazon turns bloodsport

4. Use virtual networking

Amazon offers Virtual Private Cloud which allows you to create virtual networks within the Amazon cloud. Set your own ip address range, create route tables, gateways, subnets & control security settings.

There is another interesting offering called VPC peering. Previously, if you wanted to route between two VPCs or across the internet to your office network, you’d have to run a box within your VPC to do the networking. This became a single point of failure, and also had to be administered.

With VPC peering, Amazon can do this at the virtualization layer, without extra cost, without single point of failure & without overhead. You can even use VPC peering to network between two AWS accounts. Cool stuff!
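If you want to poke at this from the command line rather than the console, a rough sketch (CIDR ranges & resource IDs below are placeholders):

$ aws ec2 create-vpc --cidr-block 10.0.0.0/16
$ aws ec2 create-subnet --vpc-id vpc-11111111 --cidr-block 10.0.1.0/24
$ # peer two VPCs -- no NAT box to babysit, no single point of failure
$ aws ec2 create-vpc-peering-connection --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222
$ aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-33333333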

Also: Are SQL databases dead?

5. Size instances & I/O

I worked with one startup that had been founded in 2010. They had initially built their infrastructure on AWS so they chose instances based on what was available at the time. Those were m1.large & m1.xlarge. A smart choice at the time, but oh how things evolve in the amazon world.

Now those instance types are “previous generation”. Newer instances offer SSD, more CPU & better I/O for roughly the same price. If you’re in this position, be sure to evaluate upgrading your instances.

If you’re on Amazon RDS, you may not be able to get to the newer instance sizes until you upgrade your database. Does upgrading MySQL involve much more downtime on Amazon RDS? In my experience it surely does.

Along with instance sizes, you’ll also want to evaluate disk I/O options. By default, instances in Amazon are multi-tenant and use disk as a shared resource, so they’ll see I/O performance go up & down dramatically. This can kill database performance & can be painful. There are solutions, though they can be expensive. Consider looking at provisioned IOPS and additional SSD storage.
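For example, a provisioned IOPS EBS volume can be created & attached like so. The size, IOPS & IDs are placeholders; on RDS you’d choose provisioned IOPS when modifying the instance instead.

$ aws ec2 create-volume --volume-type io1 --iops 2000 --size 100 --availability-zone us-east-1a
$ aws ec2 attach-volume --volume-id vol-44444444 --instance-id i-0abcd123 --device /dev/sdf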

Also: Is the difference between dev & ops a four-letter word?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

5 Things I just learned from James Turnbull about Docker

Join 28,000 others and follow Sean Hull on twitter @hullsean.

I just got my hands on a copy of James Turnbull’s new book The Docker Book. It’s an excellent introduction to Linux containers & the powerful things you can do with them. It’s 335 pages covering all the introductory topics to get you up and running and then more advanced topics like working with the docker API, building services & extending docker.

Here’s what I learned…

1. Containers aren’t new

The technology we today call containers is based on Unix’s chroot mechanism, which was introduced way back in the ’80s.

With traditional virtualization, we use a hypervisor layer, so we emulate hardware. The virtual machine running on top, can run anything, from Windows, to different flavors & versions of unix. It appears to be a completely separate piece of hardware.

With containers we move up to the operating system level, and we create isolation between users. These users all share the same parent operating system. This means it requires dramatically less overhead. That means speed!

Docker is an automation layer built on Lightweight Linux Containers or LXC. To applications it looks like they have their own machine, their own userspace, their own filesystem, their own network.

Also: Is Apple betting against big data?

2. No more VirtualBoxes

Are you tired of waiting for your VMs to spin up? Building dev & test environments becomes lightning fast with Docker. This accelerates software development, and makes a lot of things easier.

Also: When prospects mislead

3. Images, registries & containers

Images share some of the properties of images in hypervisor virtualization. However they are implemented with union file systems. While VirtualBox images take some time to boot, as the entire filesystem must be read & code executed anew, docker images are more like source code to the LXC subsystem.

Registries store your public and private images. The Docker Hub is one popular one. You can also host & deploy your own docker registry as your needs dictate.

Like VMs, containers can be started & stopped at will, albeit at lightning speed. They can also be deleted much as a VM can be.
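In practice the lifecycle looks something like this (the image & container name are just examples):

$ docker pull ubuntu                            # fetch an image from the registry
$ docker run -d --name web ubuntu sleep 3600    # start a container from it
$ docker stop web                               # stop it
$ docker start web                              # start it again, in about a second
$ docker rm -f web                              # delete it, much as you would a VM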

Also: What can new york fashion week teach Chad Dickerson about Net Neutrality?

4. Lightning fast sandboxes

As we mentioned containers are fast. Did we mention really fast?

This can facilitate unit testing & continuous integration. A lot of shops are starting to use Jenkins for continuous integration, and fast testing is key to this process.

Also: Is automation killing old-school technical operations?

5. They work with Vagrant

Are you already using Vagrant to automate deployment of virtual environments? If so, the transition is easy. Here Docker becomes your provisioner.

Mark Stratmann put together a great how to, Implementing a Vagrant / Docker Dev environment which we’d recommend you take a look at. You can also head over to the Vagrant docs themselves.

Also: Which tech do startups use most?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Is automation killing old-school operations?

Join 27,000 others and follow Sean Hull on twitter @hullsean.

I was shocked to find this article on ReadWrite: The Truth About DevOps: IT Isn’t Dead; It’s not even Dying. Wait a second, do people really think this?

Truth is I have heard whispers of this before. I was at a meetup recently where the speaker claimed “With more automation you can eliminate ops. You can then spend more on devs”. To an audience of mostly developers & startup founders, I can imagine the appeal.

1. Does less ops mean more devs?

If you’re listening to a platform service sales person or a developer who needs more resources to get his or her job done, no one would be surprised to hear this. If we can automate away managing the stack, we’ll be able to clear the way for the real work that needs to be done!

This is a very seductive perspective. But it may be akin to taking on technical debt, ignoring the complexity of operations and the perspective that can inform a longer view.

Puppet Labs’ Luke Kanies says “Become uniquely valuable. Become great at something the market finds useful.” I couldn’t agree more.

Read: Are SQL Databases Dead?

2. What happens when developers leave?

I would argue that ops have a longer view of product lifecycle. I for one have been brought in to many projects after the first round of developers have left, and teams are trying to support that software five years after the first version was built.

That sort of long term view, of how to refresh performance, and revitalize code is a unique one. It isn’t the “building the future” mindset, the sexy products, and disruptive first mover “we’re changing the world” mentality.

It’s a more stodgy & conservative one. The mindset is of reliability, simplicity, and long term support.

Also: How to hire a developer that doesn’t suck

3. What’s your mandate?

From what I’ve seen, devs & ops are divided by a four letter word.

That word I believe is “risk”. Devs have a mandate from the business to build features & directly answer to customer requests today. Ops have a mandate for reliability, working against change and thinking in terms of making all that change manageable.

Different mandates mean different perspectives.

Related: What is Devops & why is it important?

4. Can infrastructure live as code?

Puppet along with infrastructure automation & configuration management tools like Chef offer the promise of fully automated infrastructure. But the truth is much much more complex. As typical technology stacks expand from load balancer, webserver & database, to multiple databases, caching server, search server, puppet masters, package repositories, monitoring & metrics collection & jump boxes we’re all reaching a saturation point.

Yes automation helps with that saturation, but ultimately you need people with those wide ranging skills, to manage the complex web of dependencies when things fail.

And fail they will.

Check out: Why are MySQL DBA’s and ops so hard to find?

5. ORMs and architecture

If you aren’t familiar, ORM is a rather dry-sounding name for a component that is regularly overlooked. It’s middleware sitting between application & database, and it drastically simplifies developers’ lives. It helps them write better code and get on with the work of delivering to the business. It’s no wonder ORMs are popular.

But as Ward Cunningham eloquently explains, they are surely technical debt that eventually must get paid. Indeed.

There is broad agreement among professional DBAs: each query should be written, tuned & deployed just like any other bit of code. Handing that process to a library is doomed to failure. Yet ORMs are still evolving, and the dream lives on.

And all that because devs & ops have completely different perspectives. We need both of them to run modern internet applications. Let’s not forget that, folks. 🙂

Read this: Do managers and CTO’s underestimate operational costs?

Want more? Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters