Before you do infrastructure as code, consider your workflow carefully


What happens with infrastructure as code when you want to make a change on prod?

I’ve been working on automation for a few years now. When you build your cloud infrastructure with code, you really take everything to a whole new level.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

You might be wondering, what does the day-to-day workflow look like? It’s not terribly different from regular software development. You create a branch, write some code, test it, and commit it. But there are some differences.

1. Make a change on production

In this scenario, my development environment references the repo directly on my laptop. There is a module, but it’s referenced locally like this:

source = "../"
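
For reference, a minimal sketch of what that dev main.tf module block might look like (the module name and the variable passed in are placeholders, not my exact file):

module "app" {
  source = "../"

  # pass along whatever variables your module expects, for example:
  environment_name = "dev"
}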

So here are the steps:

1. Make your change to terraform in main repo

This happens in the root source directory. Make your changes to .tf files, and save them.

2. Apply change on dev environment

You haven’t committed any changes to the git repo yet. You want to test them. Make sure there are no syntax errors, and they actually build the cloud resources you expect.

$ terraform plan
$ terraform apply

fix errors, etc.

3. Redeploy containers

If you’re using ECS, your code above may have changed a task definition, or other resources. You may need to update the service, which will force the containers to redeploy fresh, with any updates from ECR etc.

4. Eyeball test

You’ll need to ssh to the ECS host and attach to the containers. There you can review the environment, or verify that docker ps shows your new containers running.

5. Commit changes to version control

Ok, now you’re happy with the changes on dev. Things seem to work, what next? You’ll want to commit your changes to the git repo:

$ git commit -am "added some variables to the application task definition"

6. Now tag your code – we’ll use v1.5

$ git tag -a v1.5 -m "added variables to app task definition"
$ git push origin v1.5

Be sure to push the tag (the second command above).

7. Update stage terraform module to use v1.5

In your stage main.tf where your stage module definition is, change the source line:

source = "git::https://github.com/hullsean/infra-repo.git?ref=v1.5"
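
So the whole stage module block might look roughly like this (again, the module name and the variable passed in are placeholders):

module "app" {
  source = "git::https://github.com/hullsean/infra-repo.git?ref=v1.5"

  environment_name = "stage"
}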

8. Apply changes to stage

$ terraform init
$ terraform plan
$ terraform apply

Note that you have to do terraform init this time. That’s because you are using a new version of your code. So terraform has to go and fetch the whole thing, and cache it in .terraform directory.

9. Apply change on prod

Redo steps 7 & 8 for your prod-module main.tf.

Related: When you have to take the fall

2. Pros of infrastructure as code

o very professional pipeline
o pipeline can be further automated with tests
o very safe changes on prod
o infra changes managed carefully in version control
o you can back out changes, or see how you got here
o you can audit what has happened historically

Related: When clients don’t pay

3. Cons of over automating

o no easy way to sidestep
o manual changes will break everything
o you have to have a strong knowledge of Terraform
o you need a strong in-depth knowledge of AWS
o the whole team has to be on-board with automation
o you can’t just go in and tweak things

Related: Why i ask for a deposit

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How to setup an Amazon EKS demo with Terraform


Since EKS is pretty new, there aren’t a lot of howtos on it yet.

I wanted to follow along with Amazon’s Getting started with EKS & Kubernetes Guide.

However I didn’t want to use cloudformation. We all know Terraform is far superior!

Join 38,000 others and follow Sean Hull on twitter @hullsean.

With that I went to work getting it going. And I learned a few lessons along the way.

My steps follow the Amazon guide above pretty closely, including setting up the guestbook app. The only big difference is that I’m using Terraform.

1. Create the EKS service role

Create a file called eks-iam-role.tf and add the following:

resource "aws_iam_role" "demo-cluster" {
  name = "terraform-eks-demo-cluster"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "demo-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = "${aws_iam_role.demo-cluster.name}"
}

resource "aws_iam_role_policy_attachment" "demo-cluster-AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = "${aws_iam_role.demo-cluster.name}"
}

Note this demo-cluster role gets attached to the EKS cluster resource, which we define in step #3 below.

Related: How to setup Amazon ECS with Terraform

2. Create the EKS vpc

Here’s the code to create the VPC. I’m using the Terraform community module to do this.

There are two things to notice here. One is that I reference the eks-region variable. Add it to your vars.tf, set to "us-east-1" or whatever region you like. Also add cluster-name to your vars.tf, along with eks-azs, eks-private-cidrs, eks-public-cidrs and environment_name, since those get referenced below too.
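
Here’s a minimal sketch of what that vars.tf might look like. The defaults below are just placeholders (the cluster name matches the one used in the kubeconfig later on):

variable "eks-region" {
  default = "us-east-1"
}

variable "cluster-name" {
  default = "sean-eks"
}

variable "eks-azs" {
  type    = "list"
  default = ["us-east-1a", "us-east-1b"]
}

variable "eks-private-cidrs" {
  type    = "list"
  default = ["10.0.1.0/24", "10.0.2.0/24"]
}

variable "eks-public-cidrs" {
  type    = "list"
  default = ["10.0.101.0/24", "10.0.102.0/24"]
}

variable "environment_name" {
  default = "dev"
}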

Also notice the special tags. Those are super important. If you don’t tag your resources properly, kubernetes won’t be able to do its thing. Or rather EKS won’t. I had this problem early on and it is very hard to diagnose. The tags in this vpc module will propagate to the subnets and security groups, which is also crucial.

#
provider "aws" {
  region = "${var.eks-region}"
}

#
module "eks-vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "eks-vpc"
  cidr = "10.0.0.0/16"

  azs             = "${var.eks-azs}"
  private_subnets = "${var.eks-private-cidrs}"
  public_subnets  = "${var.eks-public-cidrs}"

  enable_nat_gateway = false
  single_nat_gateway = true

  #  reuse_nat_ips        = "${var.eks-reuse-eip}"
  enable_vpn_gateway = false

  #  external_nat_ip_ids  = ["${var.eks-nat-fixed-eip}"]
  enable_dns_hostnames = true

  tags = {
    Terraform                                   = "true"
    Environment                                 = "${var.environment_name}"
    "kubernetes.io/cluster/${var.cluster-name}" = "shared"
  }
}

resource "aws_security_group_rule" "allow_http" {
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "TCP"
  security_group_id = "${module.eks-vpc.default_security_group_id}"
  cidr_blocks       = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "allow_guestbook" {
  type              = "ingress"
  from_port         = 3000
  to_port           = 3000
  protocol          = "TCP"
  security_group_id = "${module.eks-vpc.default_security_group_id}"
  cidr_blocks       = ["0.0.0.0/0"]
}

Related: How I resolved some tough Docker problems when i was troubleshooting amazon ECS

3. Create the EKS Cluster

Creating the cluster is a short bit of terraform code, shown below, using the aws_eks_cluster resource.

#
# main EKS terraform resource definition
#
resource "aws_eks_cluster" "eks-cluster" {
  name = "${var.cluster-name}"

  role_arn = "${aws_iam_role.demo-cluster.arn}"

  vpc_config {
    subnet_ids = ["${module.eks-vpc.public_subnets}"]
  }
}

output "endpoint" {
  value = "${aws_eks_cluster.eks-cluster.endpoint}"
}

output "kubeconfig-certificate-authority-data" {
  value = "${aws_eks_cluster.eks-cluster.certificate_authority.0.data}"
}

Related: Is Amazon too big to fail?

4. Install & configure kubectl

The AWS docs are pretty good on this point.

First you need to install the client on your local desktop. For me, I used brew, the macOS package manager. You’ll also need the heptio-authenticator-aws binary. Again refer to the aws docs for help on this.

The main pieces you will add are a directory (~/.kube) and the file ~/.kube/config, edited as follows:

apiVersion: v1
clusters:
- cluster:
    server: https://3A3C22EEF7477792E917CB0118DD3X22.yl4.us-east-1.eks.amazonaws.com
    certificate-authority-data: "a-really-really-long-string-of-characters"
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "sean-eks"
      #  - "-r"
      #  - "arn:aws:iam::12345678901:role/sean-eks-role"
      #env:
      #  - name: AWS_PROFILE
      #    value: "seancli"

Related: Is AWS too complex for small dev teams?

5. Spinup the worker nodes

This is definitely the largest file in your terraform EKS code. Let me walk you through it a bit.

First we attach some policies to our role. These are all essential to EKS. They’re predefined but you need to group them together.

Then you need to create a security group for your worker nodes. Notice this also has the special kubernetes tag added. Be sure that it’s there or you’ll have problems.

Then we add some additional ingress rules, which allow the workers & the control plane of kubernetes to communicate with each other.

Next you’ll see some serious user-data code. This handles all the startup action, on the worker node instances. Notice we reference some variables here, so be sure those are defined.

Lastly we create a launch configuration, and autoscaling group. Notice we give it the AMI as defined in the aws docs. These are EKS optimized images, with all the supporting software. Notice also they are only available currently in us-east-1 and us-west-1.

Notice also that the autoscaling group has the special kubernetes tag. As I’ve been saying over and over, that’s super important.

#
# EKS Worker Nodes Resources
#  * IAM role allowing Kubernetes actions to access other AWS services
#  * EC2 Security Group to allow networking traffic
#  * Data source to fetch latest EKS worker AMI
#  * AutoScaling Launch Configuration to configure worker instances
#  * AutoScaling Group to launch worker instances
#

resource "aws_iam_role" "demo-node" {
  name = "terraform-eks-demo-node"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "demo-node-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = "${aws_iam_role.demo-node.name}"
}

resource "aws_iam_role_policy_attachment" "demo-node-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = "${aws_iam_role.demo-node.name}"
}

resource "aws_iam_role_policy_attachment" "demo-node-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = "${aws_iam_role.demo-node.name}"
}

resource "aws_iam_role_policy_attachment" "demo-node-lb" {
  policy_arn = "arn:aws:iam::12345678901:policy/eks-lb-policy"
  role       = "${aws_iam_role.demo-node.name}"
}

resource "aws_iam_instance_profile" "demo-node" {
  name = "terraform-eks-demo"
  role = "${aws_iam_role.demo-node.name}"
}

resource "aws_security_group" "demo-node" {
  name        = "terraform-eks-demo-node"
  description = "Security group for all nodes in the cluster"

  #  vpc_id      = "${aws_vpc.demo.id}"
  vpc_id = "${module.eks-vpc.vpc_id}"

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = "${
    map(
     "Name", "terraform-eks-demo-node",
     "kubernetes.io/cluster/${var.cluster-name}", "owned",
    )
  }"
}

resource "aws_security_group_rule" "demo-node-ingress-self" {
  description              = "Allow node to communicate with each other"
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = "${aws_security_group.demo-node.id}"
  source_security_group_id = "${aws_security_group.demo-node.id}"
  to_port                  = 65535
  type                     = "ingress"
}

resource "aws_security_group_rule" "demo-node-ingress-cluster" {
  description              = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
  from_port                = 1025
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.demo-node.id}"
  source_security_group_id = "${module.eks-vpc.default_security_group_id}"
  to_port                  = 65535
  type                     = "ingress"
}

data "aws_ami" "eks-worker" {
  filter {
    name   = "name"
    values = ["eks-worker-*"]
  }

  most_recent = true
  owners      = ["602401143452"] # Amazon
}

# EKS currently documents this required userdata for EKS worker nodes to
# properly configure Kubernetes applications on the EC2 instance.
# We utilize a Terraform local here to simplify Base64 encoding this
# information into the AutoScaling Launch Configuration.
# More information: https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-nodegroup.yaml
locals {
  demo-node-userdata = <<USERDATA
#!/bin/bash -xe

CA_CERTIFICATE_DIRECTORY=/etc/kubernetes/pki
CA_CERTIFICATE_FILE_PATH=$CA_CERTIFICATE_DIRECTORY/ca.crt
mkdir -p $CA_CERTIFICATE_DIRECTORY
echo "${aws_eks_cluster.eks-cluster.certificate_authority.0.data}" | base64 -d >  $CA_CERTIFICATE_FILE_PATH
INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
sed -i s,MASTER_ENDPOINT,${aws_eks_cluster.eks-cluster.endpoint},g /var/lib/kubelet/kubeconfig
sed -i s,CLUSTER_NAME,${var.cluster-name},g /var/lib/kubelet/kubeconfig
sed -i s,REGION,${var.eks-region},g /etc/systemd/system/kubelet.service
sed -i s,MAX_PODS,20,g /etc/systemd/system/kubelet.service
sed -i s,MASTER_ENDPOINT,${aws_eks_cluster.eks-cluster.endpoint},g /etc/systemd/system/kubelet.service
sed -i s,INTERNAL_IP,$INTERNAL_IP,g /etc/systemd/system/kubelet.service
DNS_CLUSTER_IP=10.100.0.10
if [[ $INTERNAL_IP == 10.* ]] ; then DNS_CLUSTER_IP=172.20.0.10; fi
sed -i s,DNS_CLUSTER_IP,$DNS_CLUSTER_IP,g /etc/systemd/system/kubelet.service
sed -i s,CERTIFICATE_AUTHORITY_FILE,$CA_CERTIFICATE_FILE_PATH,g /var/lib/kubelet/kubeconfig
sed -i s,CLIENT_CA_FILE,$CA_CERTIFICATE_FILE_PATH,g  /etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl restart kubelet
USERDATA
}

resource "aws_launch_configuration" "demo" {
  associate_public_ip_address = true
  iam_instance_profile        = "${aws_iam_instance_profile.demo-node.name}"
  image_id                    = "${data.aws_ami.eks-worker.id}"
  instance_type               = "m4.large"
  name_prefix                 = "terraform-eks-demo"
  security_groups             = ["${aws_security_group.demo-node.id}"]
  user_data_base64            = "${base64encode(local.demo-node-userdata)}"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "demo" {
  desired_capacity     = 2
  launch_configuration = "${aws_launch_configuration.demo.id}"
  max_size             = 2
  min_size             = 1
  name                 = "terraform-eks-demo"

  #  vpc_zone_identifier  = ["${aws_subnet.demo.*.id}"]
  vpc_zone_identifier = ["${module.eks-vpc.public_subnets}"]

  tag {
    key                 = "Name"
    value               = "eks-worker-node"
    propagate_at_launch = true
  }

  tag {
    key                 = "kubernetes.io/cluster/${var.cluster-name}"
    value               = "owned"
    propagate_at_launch = true
  }
}

Related: How to hire a developer that doesn’t suck

6. Enable & Test worker nodes

If you haven’t already done so, apply all your above terraform:

$ terraform init
$ terraform plan
$ terraform apply

After all that runs, all your resources are created. Now edit the file "aws-auth-cm.yaml" with the following contents:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::12345678901:role/terraform-eks-demo-node
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Then apply it to your cluster:

$ kubectl apply -f aws-auth-cm.yaml

You should be able to use kubectl to view node status:

$ kubectl get nodes
NAME                           STATUS    ROLES     AGE       VERSION
ip-10-0-101-189.ec2.internal   Ready     <none>    10d       v1.10.3
ip-10-0-102-182.ec2.internal   Ready     <none>    10d       v1.10.3
$ 

Related: Why would I help a customer that’s not paying?

7. Setup guestbook app

Finally you can follow the exact steps in the AWS docs to create the app. Here they are again:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.3/examples/guestbook-go/redis-master-controller.json
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.3/examples/guestbook-go/redis-master-service.json
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.3/examples/guestbook-go/redis-slave-controller.json
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.3/examples/guestbook-go/redis-slave-service.json
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.3/examples/guestbook-go/guestbook-controller.json
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.3/examples/guestbook-go/guestbook-service.json

Then you can get the endpoint with kubectl:

$ kubectl get services        
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP        PORT(S)          AGE
guestbook      LoadBalancer   172.20.177.126   aaaaa555ee87c...   3000:31710/TCP   4d
kubernetes     ClusterIP      172.20.0.1       <none>             443/TCP          10d
redis-master   ClusterIP      172.20.242.65    <none>             6379/TCP         4d
redis-slave    ClusterIP      172.20.163.1     <none>             6379/TCP         4d
$ 

Use "kubectl get services -o wide" to see the entire EXTERNAL-IP. If that is stuck saying <pending>, you likely have an issue with your node IAM role, or are missing the special kubernetes tags. So check on those. It shouldn’t show <pending> for more than a minute really.

Hope you got everything working.

Good luck and if you have questions, post them in the comments & I’ll try to help out!

Related: How to migrate my skills to the cloud?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

What are the key aws skills and how do you interview for them?


Whether you’re striving for a new role as a Devops engineer, or a startup looking to hire one, you’ll need to be on the lookout for specific skills.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

I’ve been on both sides of the fence, at times interviewing candidates, and other times the candidate looking to impress to win a new role.

Here are my suggestions…

Devops Pipeline

Jenkins isn’t the only build server, but it’s been around a long time, so it’s everywhere. You can also do well with CircleCI or Travis. Or even Amazon’s own CodeBuild & CodePipeline.

You should also be comfortable with a configuration management system. Ansible is my personal favorite but obviously there is lots of Puppet & Chef out there too. Talk about a playbook you wrote, how it configures the server, installs packages, edits configs and restarts services.

Bonus points if you can talk about handling deployments with autoscaling groups. Those dynamic environments can’t easily be captured in static host manifests, so talk about how you handle that.

Of course you should also be strong with Git, bitbucket or codecommit. Talk about how you create a branch, what’s gitflow and when/how do you tag a release.

Also be ready to talk about how a code checkin can trigger a post-commit hook, which can then go and build your application, or spin up new infra to test your code.

Related: How to avoid insane AWS bills

CloudFormation or Terraform

I’m partial to Terraform. Terraform is to CloudFormation as macOS or the iPhone is to Android or Windows. Why do I say that? Well, it’s more polished and a nicer language to write in. CloudFormation is downright ugly. But hey, both get the job done.

Talk about some code you wrote, how you configured IAM roles and instance profiles, or how you spin up an ECS cluster with Terraform, for example.
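
If you want a concrete talking point, even a small sketch like this shows you’ve touched the moving parts (the names below are placeholders, not any particular project):

resource "aws_iam_role" "ecs_node" {
  name = "demo-ecs-node"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_instance_profile" "ecs_node" {
  name = "demo-ecs-node-profile"
  role = "${aws_iam_role.ecs_node.name}"
}

resource "aws_ecs_cluster" "demo" {
  name = "demo-cluster"
}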

Related: How best to do discovery in cloud and devops engagements?

AWS Services

There are lots of them. But the core services are what you should be ready to talk about. CloudWatch for centralized logging. How does it integrate with ECS or EKS?

Route53, how do you create a zone? How do you do geo load balancing? How does it integrate with CertificateManager? Can Terraform build these things?
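
It can. A tiny sketch of a zone and record (the domain and address below are placeholders):

resource "aws_route53_zone" "main" {
  name = "example.com"
}

resource "aws_route53_record" "www" {
  zone_id = "${aws_route53_zone.main.zone_id}"
  name    = "www.example.com"
  type    = "A"
  ttl     = "300"
  records = ["203.0.113.10"]
}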

EC2 is the basic compute service. Tell me what happens when an instance dies? When it boots? What is a user-data script? How would you use one? What’s an AMI? How do you build them?

What about virtual networking? What is a VPC? And a private subnet? What’s a public subnet? How do you deploy a NAT? What’s it for? How do security groups work?

What are S3 buckets? Talk about infrequently accessed storage. How about glacier? What are lifecycle policies? How do you do cross region replication? How do you setup cloudfront? What’s a distribution?

What types of load balancers are there? Classic & Application are the main ones. How do they differ? ALB is smarter, it can integrate with ECS for example. What are some settings I should be concerned with? What about healthchecks?

What is Autoscaling? How do I setup EC2 instances to do this? What’s an autoscaling group? Target? How does it work with ECS? What about EKS?

Devops isn’t about writing application code, but you’re surely going to be writing jobs. What language do you like? Python and shell scripting  are a start. What about Lambda? Talk about frameworks to deploy applications.

Related: Are you getting good at Terraform or wrestling with a bear?

Databases

You should have some strong database skills even if you’re not the day-to-day DBA. Amazon RDS certainly makes administering a bit easier most of the time. But upgrades often require downtime, and unfortunately that’s wired into the service. I see mostly PostgreSQL, MySQL & Aurora. Get comfortable tuning SQL queries and optimizing. Analyze your slow query log and explain the output.

Amazon’s analytics offering is getting stronger. The purpose-built Redshift is everywhere these days. It may use a postgresql driver, but there’s a lot more under the hood. You may also want to look at Spectrum, which provides an EXTERNAL TABLE type interface to query data directly from S3.

Not on Redshift yet? Well you can use Athena as an interface directly onto your data sitting in S3. Even quicker.

For larger data analysis or folks that have systems built around the technology, Hadoop deployments or EMR may be good to know as well. At least be able to talk intelligently about it.

Related: Is zero downtime even possible on RDS?

Questions

Have you written any CloudFormation templates or Terraform code? For example, how do you create a VPC with private & public subnets, plus a bastion box, with Terraform? What gotchas do you run into?

If you are given a design document, how do you proceed from there? How do you build infra around those requirements? What is your first step? What questions would you ask about the doc?

What do you know about Nodejs? Or Python? Why do you prefer that language?

If you were asked to store 500 terabytes of data on AWS and were going to do analysis of the data, what would be your first choice? Why? Let’s say you evaluated S3 and Athena, and found the performance wasn’t there, what would you move to? Redshift? How would you load the data?

Describe a multi-az VPC setup that you recommend. How do you deploy multiple subnets in a high availability arrangement?

Related: Why generalists are better at scaling the web

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How to use terraform to setup vpc & bastion box


If you’re building infrastructure on AWS or GCP you need a sandbox in which to place your toys. That sandbox is called a VPC. It’s one of those lovely acronyms that we in the tech world take for granted.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

Those letters stand for Virtual Private Cloud, one of many networks within your cloud, that serve as a firewall, controlling access to servers, applications and other resources.

1. What is it for?

VPC partitions off your cloud, allowing you to control who gets into what. A VPC typically has a private zone and a public zone.

Within your private zone you’ll have two or more private subnets, and within your public zone, two or more public subnets. These each sit in different availability zones, or data centers within a region. Having at least two means you can be redundant right from the start.

Related: 30 questions to ask a serverless fanboy

2. How to setup the VPC

Terraform has some excellent community modules that help you get on the ground running. One of those facilitates creating a VPC for you. When you create your VPC, the main things you want to think about are:

o what region am I building in?
o what AZs do I want to use?
o what network CIDRs do I want to use?

You’ll have important outputs when you build your vpc. In particular the private subnets, public subnets and default security group, which you will reference over and over in all of your terraform code. That’s because RDS databases, ec2 instances, redis clusters and many other resources sit inside of a subnet (see the example just after the module block below).

module "my-vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a","us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  reuse_nat_ips        = false
  enable_vpn_gateway   = false
  enable_dns_hostnames = true

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}
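
Once that module is applied, you reference its outputs from other resources. For example, a hypothetical RDS subnet group living in those private subnets might look like this:

resource "aws_db_subnet_group" "my-db-subnets" {
  name       = "my-db-subnets"
  subnet_ids = ["${module.my-vpc.private_subnets}"]
}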


Note, this module can do a *lot* more. For example you can attach an unchanging or fixed IP (elastic IP in aws terminology) to the NAT device. This is useful so that your application appears to be coming from a single box all the time. It allows upstream providers, APIs and other integrations to whitelist you, and allows your application and servers to tie into those services predictably and cleanly.

Also note that we created some nice tags. These tags become more and more important as you automate more of your infrastructure, because you will dig through the dashboard from time to time and can easily figure out what is what. You can also use a tag such as “monitoring = yes” to filter for resources that your monitoring system should tie into.

Related: How to use terraform to automate wordpress site deployment

3. How to add the bastion

You want to deploy all servers in private subnets. Everything, and I mean everything. That’s because the internet is a dangerous place these days. From there you provide only two ways to reach those resources. A load balancer fronts your applications, opening ports 80, 443 or other relevant ports. And a jump box fronts your ssh access.

Place the bastion box in your PUBLIC subnet, so that you can reach it from the outside internet.

Again we’re using an amazing community terraform module, which also implements another cool feature for us. Note we deploy mykey onto the box. Think of this as your master key. But you may want to provide other users access to these machines. In that case, simply place their public keys into my-public-keys-bucket.

Terraform will automatically deploy a key copying job onto this box via user-data script. The job will run via cron every 15 minutes, and copy (sync rather) public keys into the authorized keys file. This will allow you to add/remove users easily.

There are of course many more sophisticated networks which would require more nuanced user control, but this method is great for starters. 🙂

module "my-bastion" {
  source                      = "github.com/terraform-community-modules/tf_aws_bastion_s3_keys"
  instance_type               = "t2.micro"
  ami                         = "ami-976152f2"
  region                      = "us-east-1"
  key_name                    = "mykey"
  iam_instance_profile        = "s3_readonly"
  s3_bucket_name              = "my-public-keys-bucket"
  vpc_id                      = "${module.my-vpc.vpc_id}"
  subnet_ids                  = "${module.my-vpc.public_subnets}"
  keys_update_frequency       = "*/15 * * * *"
  additional_user_data_script = "date"
  name  = "my-bastion"
  associate_public_ip_address = true
  ssh_user = "ec2-user"
}

# allow ssh coming from bastion to boxes in vpc
#
resource "aws_security_group_rule" "allow_ssh" {
  type            = "ingress"
  from_port       = 22
  to_port         = 22
  protocol        = "tcp"
  security_group_id = "${module.my-vpc.default_security_group_id}"
  source_security_group_id = "${module.my-bastion.security_group_id}" 
}

Related: How to automate Amazon ECS and Docker with Terraform

4. Add an EC2 instance

Now that we have a bastion box in the public subnet, we can use it as a jump box to resources sitting in the private subnets.

Let’s add an ec2 instance in one of our private subnets first. Then in the test section, you can actually reach those boxes by configuring your ssh config.

Here’s the code to create an ec2 instance. Create a file testbox.tf and add these lines. Then do the usual “$ terraform plan && terraform apply”

resource "aws_instance" "example" {
  ami           = "ami-976152f2"
  instance_type = "t2.micro"
  subnet_id     = "${element(module.my-vpc.private_subnets, 0)}"
  key_name      = "mykey"
}
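
If you like, you can also add an output so terraform prints the private IP you’ll need for your ssh config in the testing step below:

output "testbox_private_ip" {
  value = "${aws_instance.example.private_ip}"
}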

Related: How do I migrate my skills to the cloud?

5. Testing

In order to test, you’ll need to edit your local ssh config file. This sits in ~/.ssh/config and defines names you can use on your local machine, to hit resources out there on the internet via ssh. Each definition includes a host, an ssh key and a user.

Below we define our bastion box. With that saved to our ssh config file, we can do “$ ssh bastion” and login to it without any password. Excellent!

The second section is even cooler. Remember that our testbox sits in a private subnet, so there is no route to it from the internet at all. Even if we changed its security group to allow all ports from all source IPs, it would still not be reachable. 10.0.1.19 is not an internet IP, it is one only defined within the world of our private subnet.

The second section defines how to use bastion as a proxy to reach the testbox. Once that is added to our ssh config file, we can do “$ ssh testbox” and magically reach it in one hop, by using the bastion as a proxy.

Host bastion
   Hostname ec2-22-205-135-133.compute-1.amazonaws.com
   IdentityFile ~/.ssh/mykey.pem
   User ec2-user
   ForwardAgent yes


Host testbox
   Hostname 10.0.1.19
   IdentityFile ~/.ssh/mykey.pem
   User ec2-user
   ProxyCommand ssh bastion -W %h:%p

Related: Is AWS too complex for small dev teams?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Very easy cloudformation template comparison with simple terraform for beginners


If you search a bit on google, you’ll find lots of sample templates for both of these systems. However I found they had a lot of complexity.

When you’re just starting, you want a very simple example. So I thought I’d put one together.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

I’m going to compare both terraform & cloudformation. They get you to the same endpoint, but do it slightly differently.

Very basic terraform template

Ok, you’ve got terraform installed right? If not there are howtos here.

Now let’s create a server.

Create a directory "terraform" then cd into it. Create this file as main.tf:

provider "aws" {
    region = "us-east-1"
}
resource "aws_instance" "example" {
    ami = "ami-40d28157"
    subnet_id = "subnet-111ddaaa"
    instance_type = "t2.micro"
    key_name = "seanKey"
}

Please change the subnet to a valid one for you. In the real world you would definitely *not* hardcode a subnet like this. But I wanted to keep this example very simple. Don’t know what subnet to use? Navigate your aws dashboard over to “VPC” and dig around.

Also of course edit for your key.
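
If you’d rather not hardcode those values even in a toy example, a slightly more flexible version of the same main.tf pulls them out into variables (the defaults are still just placeholders):

variable "subnet_id" {
  default = "subnet-111ddaaa"
}

variable "key_name" {
  default = "seanKey"
}

resource "aws_instance" "example" {
  ami           = "ami-40d28157"
  subnet_id     = "${var.subnet_id}"
  instance_type = "t2.micro"
  key_name      = "${var.key_name}"
}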

Ok, you’re ready to test. Let’s first ask terraform what it will do with the “plan” command:

levanter:terraform sean$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_instance.example
    ami:                      "ami-40d28157"
    availability_zone:        ""
    ebs_block_device.#:       ""
    ephemeral_block_device.#: ""
    instance_state:           ""
    instance_type:            "t2.micro"
    key_name:                 "seanKey"
    network_interface_id:     ""
    placement_group:          ""
    private_dns:              ""
    private_ip:               ""
    public_dns:               ""
    public_ip:                ""
    root_block_device.#:      ""
    security_groups.#:        ""
    source_dest_check:        "true"
    subnet_id:                "subnet-111ddaaa"
    tenancy:                  ""
    vpc_security_group_ids.#: ""


Plan: 1 to add, 0 to change, 0 to destroy.
levanter:terraform sean$

Related: What is devops and why is it important?

Build & change with Terraform

Next you want to ask terraform to go ahead and do the work, because above we only did a dry-run.

levanter:terraform sean$ terraform apply
aws_instance.example: Creating...
  ami:                      "" => "ami-40d28157"
  availability_zone:        "" => ""
  ebs_block_device.#:       "" => ""
  ephemeral_block_device.#: "" => ""
  instance_state:           "" => ""
  instance_type:            "" => "t2.micro"
  key_name:                 "" => "seanKey"
  network_interface_id:     "" => ""
  placement_group:          "" => ""
  private_dns:              "" => ""
  private_ip:               "" => ""
  public_dns:               "" => ""
  public_ip:                "" => ""
  root_block_device.#:      "" => ""
  security_groups.#:        "" => ""
  source_dest_check:        "" => "true"
  subnet_id:                "" => "subnet-111ddaaa"
  tenancy:                  "" => ""
  vpc_security_group_ids.#: "" => ""
aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Still creating... (20s elapsed)
aws_instance.example: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
levanter:terraform sean$ 

One thing I like is terraform shows us the progress at the command line. Cloudformation isn’t so nicely finished. 🙂

Ok, let’s add a tag name to our server. We’re going to add just three lines to our main.tf file:

provider "aws" {
    region = "us-east-1"
}

resource "aws_instance" "example" {
    ami = "ami-40d28157"
    subnet_id = "subnet-111ddaaa"
    instance_type = "t2.micro"
    key_name = "seanKey"
    tags {
        Name = "terraform-box"
    }
}

Now we do terraform apply again. Look how easy that change is to make!

levanter:terraform sean$ terraform apply
aws_instance.example: Refreshing state... (ID: i-0ddd063bbbbce56e2)
aws_instance.example: Modifying...
  tags.%:    "0" => "1"
  tags.Name: "" => "terraform-box"
aws_instance.example: Modifications complete

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
levanter:terraform sean$ 

Navigate to the EC2 dashboard and you should see the first column showing your new name.

That was cool!

Chances are you don’t wanna leave these components sitting around. Let’s cleanup. That’s easy too!

levanter:terraform sean$ terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_instance.example: Refreshing state... (ID: i-0ddd063bbbbce56e2)
aws_instance.example: Destroying...
aws_instance.example: Still destroying... (10s elapsed)
aws_instance.example: Still destroying... (20s elapsed)
aws_instance.example: Still destroying... (30s elapsed)
aws_instance.example: Still destroying... (40s elapsed)
aws_instance.example: Still destroying... (50s elapsed)
aws_instance.example: Still destroying... (1m0s elapsed)
aws_instance.example: Destruction complete

Destroy complete! Resources: 1 destroyed.
levanter:terraform sean$ 

Related: Top questions to ask on a devops interview

Very basic CloudFormation template example

Hopefully you wrote down your subnet name & keyname. So this will be easy.

Let’s create a “cfn” directory and cd into it.

Next edit sean-instance.yml

AWSTemplateFormatVersion: '2010-09-09'

Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      SubnetId: subnet-333dfe6a
      KeyName: "iheavy"
      ImageId: "ami-40d28157"

Now let’s build that with cloudformation. You need to have the awscli installed. Here’s amazon’s howto.

Now let’s create. CloudFormation organizes things as "stacks".

aws cloudformation create-stack --template-body file://sean-instance.yml --stack-name cfn-test

Since I didn’t define “outputs” to keep the yaml simple, the command above should just return without error.

You can go into the aws dashboard, and navigate to “CloudFormation” and see the stack being created. You can also see under “EC2” a new instance has been created.

Related: How do I migrate my skills to the cloud?

Add an instance name with tags in Cloud Formation

As we did with terraform, let’s add a name to the server. This is just a tag, not a hostname, so it’s only useful throughout the AWS API.

AWSTemplateFormatVersion: '2010-09-09'

Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      SubnetId: subnet-333dfe6a
      KeyName: "iheavy"
      ImageId: "ami-40d28157"
      Tags:
        - Key: "Name"
          Value: "cfn-box"

Note the three new lines at the bottom. Ok, let’s apply those changes:

levanter:cfn sean$ aws cloudformation update-stack --template-body file://sean-instance.yml --stack-name cfn-test

Navigate to the EC2 dashboard and you should see the first column showing your new name.

Time to cleanup. Let’s delete that stack:

levanter:cfn sean$ aws cloudformation delete-stack --stack-name cfn-test
levanter:cfn sean$ 

Related: Is upgrading Amazon RDS like a sh*t storm that will not end?

Conclusions

Terraform supports JSON or its own HCL (HashiCorp configuration language). Actually the latter format is better supported.

On the CloudFormation side you can use yaml or json.

However CloudFormation can be clunky and frustrating to work with. For example, doing a dry-run in terraform is easy. Just use "plan". And isn’t that something we’re going to do over and over?

In CloudFormation there is a “validate-template” option, but this just checks your JSON or YAML. It doesn’t hit amazon’s API or test things in any real way. They have added something called Change Sets, but I haven’t tried them too much yet.

Also CloudFormation’s error messages are really lacking. They often give you a syntax error or tell you a resource is incomplete without real details on where or how. It makes debugging slow and tedious. Sometimes I see errors at create-stack calls. Other times that succeeds only to find errors within the CloudFormation dashboard.

Terraform is wayyyyy better.

Related: Is Amazon Web Services too complex for small dev teams?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How I use Terraform & Composer to automate wordpress on aws


How I setup wordpress to deploy automatically on aws

You want to make your wordpress site bulletproof? No server outage worries? Want to make it faster & more reliable? And also host it on cheaper components?

I was after all these gains & also wanted to kick the tires on some of Amazon’s latest devops offerings. So I plotted a way forward to completely automate the deployment of my blog, hosted on wordpress.

Here’s how!

Join 28,000 others and follow Sean Hull on twitter @hullsean.

The article is divided into two parts…

Deploy a wordpress site on aws – decouple assets (part 1)

In this one I decouple the assets from the website. What do I mean by this? By moving the db to its own server, or RDS for even simpler management, my server can be stopped & started or terminated at will, without losing all my content. Cool.

You’ll also need to decouple your assets. Those are all the files in the uploads directory. Amazon’s S3 offering is purpose built for this use case. It also comes with easy cloudfront integration for object caching, and lifecycle management to give your files backups over time. Cool!
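
Here’s a rough sketch of what that uploads bucket might look like in terraform, with versioning turned on so you keep file history (the bucket name is a placeholder):

resource "aws_s3_bucket" "uploads" {
  bucket = "my-wordpress-uploads"
  acl    = "private"

  versioning {
    enabled = true
  }
}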

Deploy a wordpress site on aws – automate (part 2)

In the second part we move into all the automation pieces. We’ll use PHP’s Composer to manage dependencies. That’s fancy talk for fetching wordpress itself, and all of our plugins.

1. Isolate your config files

Create a directory & put your config files in it.

$ mkdir iheavy
$ cd iheavy
$ touch htaccess
$ touch httpd.conf
$ touch wp-config.php
$ touch a_simple_pingdom_test.php
$ touch composer.json
$ zip -r iheavy-config.zip *
$ aws s3 cp iheavy-config.zip s3://my-config-bucket/

In a future post we’re going to put all these files in version control. Amazon’s CodeCommit is feature compatible with Github, but integrated right into your account. Once you have your files there, you can use CodeDeploy to automatically place files on your server.

We chose to leave this step out, to simplify the server role you need, for your new EC2 webserver instance. In our case it only needs S3 permissions!

Also: When devops means resistance to change

2. Build your terraform script

Terraform is a lot like Vagrant. I wrote a Howto deploy on EC2 with Vagrant article a couple years ago.

The terraform configuration formalizes what you are asking of Amazon’s API. What size instance? Which AMI? What VPC should I launch in? Which role should my instance assume to get S3 access it needs? And lastly how do we make sure it gets the same Elastic IP each time it boots?

All the magic is inside the terraform config.

Here’s what I see:

levanter:~ sean$ cat iheavy.tf

resource "aws_iam_role" "web_iam_role" {
    name = "web_iam_role"
    assume_role_policy = <
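
That file goes on to define the role policy, instance profile, an S3 bucket, the EC2 instance and an Elastic IP (you can see them all in the refresh lines of the output below). A rough sketch of just the instance and EIP pieces, based on the values in that output, might look something like this:

resource "aws_instance" "iheavy" {
  ami                  = "ami-1a249873"
  instance_type        = "t1.micro"
  subnet_id            = "subnet-1f866434"
  key_name             = "iheavy"
  iam_instance_profile = "${aws_iam_instance_profile.web_instance_profile.name}"

  # the user-data startup script from step 4; the filename here is just an example
  user_data = "${file("userdata.sh")}"
}

resource "aws_eip" "bar" {
  vpc = true
}

resource "aws_eip_association" "eip_assoc" {
  instance_id   = "${aws_instance.iheavy.id}"
  allocation_id = "${aws_eip.bar.id}"
}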

And here's what it looks like when I ask terraform to build my infrastructure:

levanter:~ sean$ terraform apply
aws_iam_instance_profile.web_instance_profile: Refreshing state... (ID: web_instance_profile)
aws_iam_role.web_iam_role: Refreshing state... (ID: web_iam_role)
aws_s3_bucket.apps_bucket: Refreshing state... (ID: iheavy)
aws_iam_role_policy.web_iam_role_policy: Refreshing state... (ID: web_iam_role:web_iam_role_policy)
aws_instance.iheavy: Refreshing state... (ID: i-14e92e24)
aws_eip.bar: Refreshing state... (ID: eipalloc-78732a47)
aws_instance.iheavy: Creating...
  ami:                               "" => "ami-1a249873"
  availability_zone:                 "" => ""
  ebs_block_device.#:                "" => ""
  ephemeral_block_device.#:          "" => ""
  iam_instance_profile:              "" => "web_instance_profile"
  instance_state:                    "" => ""
  instance_type:                     "" => "t1.micro"
  key_name:                          "" => "iheavy"
  network_interface_id:              "" => ""
  placement_group:                   "" => ""
  private_dns:                       "" => ""
  private_ip:                        "" => ""
  public_dns:                        "" => ""
  public_ip:                         "" => ""
  root_block_device.#:               "" => ""
  security_groups.#:                 "" => ""
  source_dest_check:                 "" => "true"
  subnet_id:                         "" => "subnet-1f866434"
  tenancy:                           "" => ""
  user_data:                         "" => "ca8a661fffe09e4392b6813fbac68e62e9fd28b4"
  vpc_security_group_ids.#:          "" => "1"
  vpc_security_group_ids.2457389707: "" => "sg-46f0f223"
aws_instance.iheavy: Still creating... (10s elapsed)
aws_instance.iheavy: Still creating... (20s elapsed)
aws_instance.iheavy: Creation complete
aws_eip.bar: Modifying...
  instance: "" => "i-6af3345a"
aws_eip_association.eip_assoc: Creating...
  allocation_id:        "" => "eipalloc-78732a47"
  instance_id:          "" => "i-6af3345a"
  network_interface_id: "" => ""
  private_ip_address:   "" => ""
  public_ip:            "" => ""
aws_eip.bar: Modifications complete
aws_eip_association.eip_assoc: Creation complete

Apply complete! Resources: 2 added, 1 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
levanter:~ sean$ 

Also: Is Amazon too big to fail?

3. Use Composer to automate wordpress install

There is a PHP package manager called composer. It manages dependencies and we depend on a few things. First WordPress itself, and second the various plugins we have installed.

The file is a JSON file. Pretty vanilla. Have a look:

{
    "name": "acme/brilliant-wordpress-site",
    "description": "My brilliant WordPress site",
    "repositories":[
        {
            "type":"composer",
            "url":"https://wpackagist.org"
        }
    ],
    "require": {
       "aws/aws-sdk-php":"*",
	    "wpackagist-plugin/medium":"1.4.0",
       "wpackagist-plugin/google-sitemap-generator":"3.2.9",
       "wpackagist-plugin/amp":"0.3.1",
	    "wpackagist-plugin/w3-total-cache":"0.9.3",
	    "wpackagist-plugin/wordpress-importer":"0.6.1",
	    "wpackagist-plugin/yet-another-related-posts-plugin":"4.0.7",
	    "wpackagist-plugin/better-wp-security":"5.3.7",
	    "wpackagist-plugin/disqus-comment-system":"2.74",
	    "wpackagist-plugin/amazon-s3-and-cloudfront":"1.1",
	    "wpackagist-plugin/amazon-web-services":"1.0",
	    "wpackagist-plugin/feedburner-plugin":"1.48",
       "wpackagist-theme/hueman":"*",
	    "php": ">=5.3",
	    "johnpbloch/wordpress": "4.6.1"
    },
    "autoload": {
        "psr-0": {
            "Acme": "src/"
        }
    }

}

Read: Is aws a patient that needs constant medication?

4. Build your user-data script

This captures all the commands you run once the instance starts. Update packages, install your own, move & configure files. You name it!

#!/bin/sh

yum update -y
yum install emacs -y
yum install mysql -y
yum install php -y
yum install git -y
yum install aws-cli -y
yum install gd -y
yum install php-gd -y
yum install ImageMagick -y
yum install php-mysql -y


yum install -y httpd24 
service httpd start
chkconfig httpd on

# configure mysql password file
echo "[client]" >> /root/.my.cnf
echo "host=my-rds.ccccjjjjuuuu.us-east-1.rds.amazonaws.com" >> /root/.my.cnf
echo "user=root" >> /root/.my.cnf
echo "password=abc123" >> /root/.my.cnf


# install PHP composer
export COMPOSE_HOME=/root
echo "installing composer..."
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php -r "if (hash_file('SHA384', 'composer-setup.php') === 'e115a8dc7871f15d853148a7fbac7da27d6c0030b848d9b3dc09e2a0388afed865e6a3d6b3c0fad45c48e2b5fc1196ae') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
php composer-setup.php
php -r "unlink('composer-setup.php');"
mv composer.phar /usr/local/bin/composer


# fetch config files from private S3 folder
aws s3 cp s3://iheavy-config/iheavy_files.zip .

# unzip files
unzip iheavy_files.zip 

# use composer to get wordpress & plugins
composer update

# move wordpress software
mv wordpress/* /var/www/html/

# move plugins
mv wp-content/plugins/* /var/www/html/wp-content/plugins/

# move pingdom test
mv a_simple_pingdom_test.php /var/www/html

# move htaccess
mv htaccess /var/www/html/.htaccess

# move httpd.conf
mv iheavy_httpd.conf /etc/httpd/conf.d

# move our wp-config into place
mv wp-config.php /var/www/html

# restart apache
service httpd restart

# allow apache to create uploads & any files inside wp-content
chown apache /var/www/html/wp-content

You can monitor things as they're being installed. Use ssh to reach your new instance. Then as root:

$ tail -f /var/log/cloud-init.log

Related: Does Amazon eat it's own dogfood?

5. Time to test

Visit the domain name you specified inside your /etc/httpd/conf.d/mysite.conf

You have full automation now. Don't believe me? Go ahead & TERMINATE the instance in your aws console. Now drop back to your terminal and do:

$ terraform apply

Terraform will figure out that the resources that *should* be there are missing, and go ahead and build them for you. AGAIN. Fully automated style!

Don't forget your analytics beacon code

Hopefully you remember how your analytics is configured. The beacon code makes an API call every time a page is loaded. This tells google analytics or other monitoring systems what your users are doing, and how much time they're spending & where.

This typically goes in the header.php file. We'll leave it as an exercise to automate this piece yourself!

Also: Is AWS too complex for small dev teams?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don't work with recruiters