I have a new appreciation for Agile, and not because it worked


A couple of years ago I worked at a startup in the online publishing space. But this story isn’t about online publishing. This story is about deadlines, and not meeting them.

For those who don’t know me, I’m a professional services consultant. I’ve worked on a project basis for 90% of my two-plus decades in this profession. I mention this to give my opinions and perspectives some context. Although I’m not a manager, I’ve worked under more managers than most. Because I work on 5-10 projects per year, that comes to close to two hundred over my career.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

My career path has given me a unique perspective on teams, leadership and motivation. Of course, like anyone who’s worked in tech, I’ve seen a lot of Agile.

Daily standups are de rigueur of course. As are breaking up your work into two week sprints, and assigning story points to those tasks.

I have to admit, there were times when Agile seemed simply the most fashionable way teams worked. And I came to believe it was more trendy than functional.

Doesn’t everybody already work off tight todo lists, break tasks down into manageable pieces, and always deliver what they promised? It may be because I’ve spent a lot of time managing myself, and surviving as a freelancer that I’ve picked up some of these habits. But I digress…

1. Crunch time

While we as a team had been working on two week sprints, something happened to derail us. Suddenly a major customer was in jeopardy, because we had not delivered on sales promises.

Although we *could* build what was being promised, we were stumbling over the details.

As an emergency plan, we dropped our current sprint tasks, and marshaled our forces towards this new goal. Everyone on the team knew the customer was hanging by a thread.

Related: When you have to take the fall

2. We need to deliver production by Friday

While this edict looks great on paper, it didn’t work out so well. Developers and ops weren’t clear which systems were included in “production”, and how they needed to be connected together.

Each engineer ended up interpreting the goal in their own way, often assuming that others were shouldering ultimate responsibility for delivery.

“Well I did my part; this other part of the team is responsible for those other pieces…” was the refrain I heard. Sadly the deadline was missed, and everything was a mess. Only after management dug in and sifted through the rubble did they actually break up the tasks, assign story points, and give each engineer actionable items to work on.

Related: When clients don’t pay

3. Please work together to make that happen

This doesn’t work because “production” is not an actionable item.

Actionable is much more specific:
o deploy this container
o setup these five servers
o fix these three bugs and push code
o setup these new environment variables

Why is “production” not specific?

Which application? Which layer or API must work? Are there intervening steps before production will work? Are there individual tasks for each engineer?

To me you need to “break things down” further if you have tasks that span multiple people, and multiple sessions. I think of a session as a 2-3 hour bucket of productive work. For me it is also the length of time my laptop battery takes to drain from 100% to 0%. When that happens I know I need a break.

And I know that’s how I get chunks of work done.

So to take this to a more specific level, if Friday is 5 work days away, I figure I have 12 increments of work I can do in that time, and if I have 3 engineers, then I need 36 chunks of work.

If you assign 36 chunks of work I believe you are much more likely to get 25-30 of those chunks done.

If you stick to the one macro goal of “get production to work”, engineers may quietly drop the ball, figuring: well, there are dependent tasks that are not done, so we’re not gonna get there anyway. And since the goal points at everyone, nobody shoulders the failure.

Whereas if you have 36 chunks of work, individual engineers are less likely to drag their feet, because it’ll be clear that the holdup was three of John’s tasks…

Related: Why i ask for a deposit

All of this gave me a new appreciation for Agile. It truly does keep teams on track, and focuses everyone on actionable work. It gives management more transparency into what’s working, and engineers the feedback they need to get to the finish line.

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How to avoid legal problems in consulting


I posted a newsletter recently entitled “When Clients Don’t Pay”.

I got a lot of responses in email, which is always encouraging. I’m happy to know that folks are reading and getting something out of my ideas.

One colleague suggested that I modify my last point about going to court. He suggested that legal action does make sense after other avenues are exhausted.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

My feeling about avoiding court has only grown stronger over the years.

There are usually only a few reasons a customer won’t pay. In my experience, each of them is avoidable without going to court.

Here are my thoughts on those…

1. Misaligned on tasks, deliverables or deadlines

I find weekly progress reports and endless notes go a long way towards avoiding this problem. If it does arise, there is usually something specific in those notes that can be remedied.

One also needs to be willing to compromise. Putting yourself in the other’s shoes will help to understand their perspective.

Communicate, communicate, and communicate more!

Related: When you have to take the fall

2. Budget problems

Here there isn’t a lot you can do anyway. Although companies are obligated by law to meet payroll, they have no such obligation to vendors. If they are out of cash, will court really resolve that?

My way of heading off this problem: bill and invoice in smaller increments, get a deposit, and keep on top of things so larger debts don’t build up.

Related: The fine art of resistance

3. Shady customers

These I usually suss out well before becoming engaged. I’ve had a few incidents where a prospect was meeting me to get “free advice”. They ask a lot of architectural questions, and take careful notes. Then they don’t engage, or use their own people to implement.

One situation in particular I remember was around scalability. The product was a website & app for teachers. From the beginning they built it to sync data instantly. As they got bigger and more customers used the platform, their servers became heavily loaded.

I suggested, instead of looking for a technical solution, why not offer your customers silver, bronze & gold service levels? The gold customers get their own servers, and can sync all the time. For the silver ones, once-a-day would probably suffice. That means much less load on the servers, because 75% of customers would go silver, 20% bronze and 5% gold.

They actually ran with the idea and implemented it, but never hired me, even for an hour of work. I knew they implemented it because I had a friend inside the company. Experiences like that teach you quite a lot about business, and about how you conduct yourself.

This has happened a few times, and I guess it’s part of doing business. But usually that comes out before we go much further, so in a sense it’s a blessing in disguise. 🙂

Related: How to hire a developer that doesn’t suck

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

What are the key AWS skills and how do you interview for them?


Whether you’re striving for a new role as a Devops engineer, or a startup looking to hire one, you’ll need to be on the lookout for specific skills.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

I’ve been on both sides of the fence, at times interviewing candidates, and other times the candidate looking to impress to win a new role.

Here are my suggestions…

Devops Pipeline

Jenkins isn’t the only build server, but it’s been around a long time, so it’s everywhere. You can also do well with CircleCI or Travis. Or even Amazon’s own CodeBuild & CodePipeline.

You should also be comfortable with a configuration management system. Ansible is my personal favorite but obviously there is lots of Puppet & Chef out there too. Talk about a playbook you wrote, how it configures the server, installs packages, edits configs and restarts services.

Bonus points if you can talk about handling deployments with autoscaling groups. Those dynamic environments can’t easily be captured in static host manifests, so talk about how you handle that.

Of course you should also be strong with Git, whether Bitbucket or CodeCommit. Talk about how you create a branch, what gitflow is, and when/how you tag a release.
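For instance, a typical flow you might narrate goes like this (branch and tag names here are just examples):

# create a feature branch and do your work
$ git checkout -b feature/healthcheck
$ git commit -am "add healthcheck endpoint"
# merge back to master and tag the release
$ git checkout master
$ git merge feature/healthcheck
$ git tag -a v1.2.0 -m "release with healthcheck"
$ git push origin master --tags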

Also be ready to talk about how a code checkin can trigger a post commit hook, which then can go and build your application, or new infra to test your code.

Related: How to avoid insane AWS bills

CloudFormation or Terraform

I’m partial to Terraform. Terraform is to CloudFormation as MacOSX or iPhone is to Android or Windows. Why do I say that? Well, it’s more polished and a nicer language to write in. CloudFormation is downright ugly. But hey, both get the job done.

Talk about some code you wrote, how you configured IAM roles and instance profiles, and how you spin up an ECS cluster with Terraform, for example.
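As a minimal sketch, the cluster itself is only a few lines of Terraform (the name is a placeholder, and a real cluster still needs instances or task definitions behind it):

resource "aws_ecs_cluster" "my_cluster" {
  name = "my-app-cluster"
}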

Related: How best to do discovery in cloud and devops engagements?

AWS Services

There are lots of them. But the core services are what you should be ready to talk about. CloudWatch for centralized logging: how does it integrate with ECS or EKS?

Route53: how do you create a zone? How do you do geo load balancing? How does it integrate with CertificateManager? Can Terraform build these things?
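For instance, a zone plus a record is a quick sketch in Terraform (domain and address are made up):

resource "aws_route53_zone" "main" {
  name = "example.com"
}

resource "aws_route53_record" "www" {
  zone_id = "${aws_route53_zone.main.zone_id}"
  name    = "www.example.com"
  type    = "A"
  ttl     = "300"
  records = ["10.0.101.15"]
}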

EC2 is the basic compute service. What happens when an instance dies? When it boots? What is a user-data script? How would you use one? What’s an AMI? How do you build them?
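As a reminder, a user-data script is just a script that runs once at first boot. A minimal example, assuming an Amazon Linux AMI:

#!/bin/bash
# runs as root on first boot
yum update -y
yum install -y httpd
service httpd start
chkconfig httpd on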

What about virtual networking? What is a VPC? And a private subnet? What’s a public subnet? How do you deploy a NAT? What’s it for? How do security groups work?

What are S3 buckets? Talk about infrequently accessed storage. How about Glacier? What are lifecycle policies? How do you do cross region replication? How do you set up CloudFront? What’s a distribution?

What types of load balancers are there? Classic & Application are the main ones. How do they differ? ALB is smarter, it can integrate with ECS for example. What are some settings I should be concerned with? What about healthchecks?

What is Autoscaling? How do I setup EC2 instances to do this? What’s an autoscaling group? Target? How does it work with ECS? What about EKS?

Devops isn’t about writing application code, but you’re surely going to be writing jobs. What language do you like? Python and shell scripting are a start. What about Lambda? Talk about frameworks to deploy applications.

Related: Are you getting good at Terraform or wrestling with a bear?

Databases

You should have some strong database skills even if you’re not the day-to-day DBA. Amazon RDS certainly makes administering a bit easier most of the time. But upgrades often require downtime, and unfortunately that’s wired into the service. I see mostly Postgresql, MySQL & Aurora. Get comfortable tuning and optimizing SQL queries. Analyze your slow query log and walk through the output.

Amazon’s analytics offering is getting stronger. The purpose-built Redshift is everywhere these days. It may use a postgresql driver, but there’s a lot more under the hood. You also may want to look at Spectrum, which provides an EXTERNAL TABLE type interface to query data directly from S3.

Not on Redshift yet? Well you can use Athena as an interface directly onto your data sitting in S3. Even quicker to get started.

For larger data analysis or folks that have systems built around the technology, Hadoop deployments or EMR may be good to know as well. At least be able to talk intelligently about it.

Related: Is zero downtime even possible on RDS?

Questions

Have you written any CloudFormation templates or Terraform code? For example, how do you create a VPC with private & public subnets, plus a bastion box, with Terraform? What gotchas do you run into?

If you are given a design document, how do you proceed from there? How do you build infra around those requirements? What is your first step? What questions would you ask about the doc?

What do you know about Nodejs? Or Python? Why do you prefer that language?

If you were asked to store 500 terabytes of data on AWS and were going to do analysis of the data, what would be your first choice? Why? Let’s say you evaluated S3 and Athena, and found the performance wasn’t there; what would you move to? Redshift? How would you load the data?

Describe a multi-az VPC setup that you recommend. How do you deploy multiple subnets in a high availability arrangement?

Related: Why generalists are better at scaling the web

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

I tried to build infrastructure as code with Terraform and Amazon. It didn’t go as I expected.


As I was building infrastructure code, I stumbled quite a few times. You hit a wall and you have to work through those confusing and frustrating moments.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

Here are a few of the lessons I learned in the process of building code for AWS. It’s not easy but when you get there you can enjoy the vistas. They’re pretty amazing.

Don’t pass credentials

As you build your applications, there are moments where components need to use AWS in some way. Your webserver needs to use S3 or your ELK box needs to use CloudWatch. Maybe you want to do an RDS backup, or list EC2 instances.

However it’s not safe to pass your access_key and secret_access_key around. Those should be for your desktop only. So how best to handle this in the cloud?

IAM roles to the rescue. These are collections of privileges. The cool thing is they can be assigned at the INSTANCE LEVEL. Meaning your whole server has permissions to use said resources.

Do this by first creating a role with the privileges you want. Create a json policy document which outlines the specific rules as you see fit. Then create an instance profile for that role.

When you create your ec2 instance in Terraform, you’ll specify that instance profile. Either by ARN or if Terraform created it, by resource ID.
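Putting those steps together, here’s a minimal sketch (the trust and access policy json files are placeholders you’d write yourself):

resource "aws_iam_role" "app" {
  name               = "app-role"
  assume_role_policy = "${file("ec2-trust.json")}"
}

resource "aws_iam_role_policy" "app_s3" {
  name   = "app-s3-access"
  role   = "${aws_iam_role.app.id}"
  policy = "${file("s3-access.json")}"
}

resource "aws_iam_instance_profile" "app" {
  name = "app-profile"
  role = "${aws_iam_role.app.name}"
}

resource "aws_instance" "app" {
  ami                  = "ami-976152f2"
  instance_type        = "t2.micro"
  iam_instance_profile = "${aws_iam_instance_profile.app.name}"
}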

Related: How to avoid insane AWS bills

Keep passwords out of code

Even though we know it should not happen, sometimes it does. We need to be vigilant to stay on top of this problem. There are projects like Pivotal’s credential scan. This can be used to check your source files for passwords.

What about something like RDS? You’re going to need to specify a password in your Terraform code right? Wrong! You can define a variable with no default as follows:

variable "my_rds_pass" {
  description = "password for rds database"
}

When Terraform comes upon this variable in your code, but sees there is no “default” value, it will prompt you when you do “$ terraform apply”.
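If you’d rather skip the prompt, you can also pass the value on the command line or through an environment variable:

# pass the variable explicitly (mind your shell history)
$ terraform apply -var 'my_rds_pass=mysecretpass'
# or export it as TF_VAR_<variable name>
$ TF_VAR_my_rds_pass=mysecretpass terraform apply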

Related: How best to do discovery in cloud and devops engagements?

Versioning your code

When you first start building terraform code, chances are you create a directory, and some tf files, then do your “$ terraform apply”. When you watch that infra build for the first time, it’s exciting!

After you add more components, your code gets more complex. Hopefully you’ve created a git repo to house your code. You can check in and commit the files, so you have them in a safe place. But of course there’s more to the equation than this.

How do you handle multiple environments, dev, stage & production all using the same code?

That’s where modules come in. Now at the beginning you may well have a module that looks like this:

module "all-proj" {

  source = "../"

  myvar = "true"
  myregion = "us-east-1"
  myami = "ami-64300001"
}

Etc and so on. That’s the first step in the right direction. However, if you change your source code, all of your environments will now be using that code. They will get it as soon as you do “$ terraform apply” for each. That’s fine, but it doesn’t scale well.

Ultimately you want to manage your code like other software projects. So as you make changes, you’ll want to tag it.

So go ahead and checkin your latest changes:

# push your latest changes
$ git push origin master
# now tag it
$ git tag -a v0.1 -m "my latest coolest infra"
# now push the tags
$ git push origin v0.1

Great now you want to modify your module slightly. As follows:

module "all-proj" {

  source = "git::https://[email protected]/hullsean/myproj-infra.git?ref=v0.1"

  myvar = "true"
  myregion = "us-east-1"
  myami = "ami-64300001"
}

Cool! Now each dev, stage and prod can reference a different version. So you are free to work on the infra without interrupting stage or prod. When you’re ready to promote that code, checkin, tag and update stage.

You could go a step further to be more agile, and have a post-commit hook that triggers the stage terraform apply. This though requires you to build solid infra tests. Checkout testinfra and terratest.
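A bare-bones sketch of such a hook (the directory layout is an assumption, and in real life you’d gate the apply on those tests passing):

#!/bin/sh
# .git/hooks/post-commit -- push the latest code to stage
cd terraform/stage || exit 1
terraform init -input=false
terraform apply -auto-approve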

Related: Are you getting good at Terraform or wrestling with a bear?

Managing RDS backups

Amazon’s RDS service is a bit weird. I wrote in the past asking Is upgrading RDS like a shit-storm that will not end? Yes, I’ve had my grievances.

My recent discovery is even more serious! Terraform wants to build infra, and it wants to be able to later destroy that infra. In the case of databases, obviously the previous state is one you want to keep. You want that to persist beyond the infra build. Obvious, no?

Apparently not to the folks at Amazon. When you destroy an RDS instance it will destroy all the old backups you created. I have no idea why anyone would want this. Certainly not as a default behavior. What’s worse you can’t copy those backups elsewhere. Why not? They’re probably sitting in S3 anyway!

While you can take a final backup when you destroy an RDS instance, that’s wonderful and I recommend it. However it’s not enough. I highly suggest you take matters into your own hands. Build a script that calls pg_dump yourself, and copy those .sql or .dump files to S3 for safe keeping.
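A minimal sketch of such a script; the host, user, database and bucket names are all placeholders:

#!/bin/bash
# dump the database and ship it to S3 for safe keeping
STAMP=$(date +%Y%m%d%H%M)
pg_dump -h mydb.example.us-east-1.rds.amazonaws.com -U myuser -Fc mydb > /tmp/mydb-$STAMP.dump
aws s3 cp /tmp/mydb-$STAMP.dump s3://my-db-archive-bucket/backups/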

Related: Is zero downtime even possible on RDS?

When to use force_destroy on S3 buckets

As with RDS, when you create S3 buckets with your infra, you want to be able to clean up later. But the trouble is that once you create a bucket, you’ll likely fill it with objects and files.

What then happens is when you go to do “$ terraform destroy” it will fail with an error. This makes sense as a default behavior. We don’t want data disappearing without our knowledge.

However you do want to be able to clean up. So what to do? Two things.

Firstly, create a process, perhaps a Lambda job or bucket replication, to regularly sync your S3 bucket to your permanent bucket archive location. Run that every fifteen minutes, or as often as you need.
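The simplest version of that process is a cron entry doing an s3 sync (bucket names are examples):

# crontab: sync to the archive bucket every fifteen minutes
*/15 * * * * aws s3 sync s3://my-app-bucket s3://my-archive-bucket --quiet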

Then add a force_destroy line to your s3 bucket resource. Here’s an example s3 bucket for storing load balancer logs:

data "aws_elb_service_account" "main" {}

resource "aws_s3_bucket" "lb_logs" {
  count         = "${var.create-logs-bucket ? 1 : 0}"
  force_destroy = "${var.force-destroy-logs-bucket}"
  bucket        = "${var.lb-logs-bucket}"
  acl           = "private"

  policy = <<POLICY
{
  "Id": "Policy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::${var.lb-logs-bucket}/*",
      "Principal": {
        "AWS": [
          "${data.aws_elb_service_account.main.arn}"
        ]
      }
    }
  ]
}
POLICY

  tags {
    Environment = "${var.environment_name}"
  }
}


Related: Why generalists are better at scaling the web

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How to avoid insane AWS bills


I was flipping through the AWS news recently and ran into this article by Juan Ramallo – I was billed 14k on AWS!

Scary stuff!

Join 38,000 others and follow Sean Hull on twitter @hullsean.

When you see headlines like this, your first instinct as a CTO is probably, “Am I at risk?” And then “What are the chances of this happening to me?”

Truth can be stranger than fiction. Our efforts as devops should go towards mitigating risk, and reducing the potential for these kinds of things to happen.

1. Use AWS instance profiles instead

The credentials that AWS provides are great for enabling the awscli. That’s because you control your desktop tightly. Don’t you?

But passing them around in your application code is prone to trouble. Eventually they’ll end up in a git repo. Not good!

The solution is applying AWS IAM permissions at the instance level. That’s right, you can grant an instance permissions to read or write an S3 bucket, describe instances, create & write to DynamoDB, or anything else in AWS. The entire cloud is API configurable. You create a custom policy for your instance, and attach it to a named instance profile.

When you spinup your EC2 instance, or later modify it, you attach that instance profile, and voila! The instance has those permissions! No messy credentials required!
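If you like to see the moving parts, here’s roughly how it looks with the awscli. The profile, role and instance id are examples, and the role itself must already exist:

$ aws iam create-instance-profile --instance-profile-name app-profile
$ aws iam add-role-to-instance-profile --instance-profile-name app-profile --role-name app-role
$ aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=app-profile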

Related: Is Amazon too big to fail?

2. Enable 2 factor authentication

If you haven’t already, you should force 2 factor authentication on all of your IAM users. It’s an extra step, but well worth it. Here’s how to set it up.

Mobile phones support all sorts of 2FA apps now, from Duo, to Authenticator, and many more.

Related: Is AWS too complex for small dev teams?

3. Scan your code for credentials

Encourage developers to use tools like Pivotal’s Credentials Scan.

Hey, while you’re at it, why not add a post-commit hook to your code repo in git? Have it run the credentials scan each time code is committed. And when it finds trouble, it should email the whole team.
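A sketch of what that hook might look like; scan-for-creds stands in for whatever scanner you choose, and the address is a placeholder:

#!/bin/sh
# .git/hooks/post-commit -- scan the fresh commit for secrets
if ! scan-for-creds .; then
  echo "credential scan flagged commit $(git rev-parse HEAD)" | mail -s "credentials alert" team@example.com
fi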

This will get everybody on board quick!

Related: Are we fast approaching cloud-mageddon?

4. Scan your S3 Buckets

Open S3 buckets can be a real disaster, offering up your private assets & business data to the world. What to do about it?

Scan your S3 buckets regularly.
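Here’s a quick-and-dirty sketch with the awscli, flagging any bucket whose ACL grants access to the AllUsers group (a thorough scan would also inspect bucket policies):

# flag buckets whose ACLs grant access to everyone
for b in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
  grants=$(aws s3api get-bucket-acl --bucket $b \
    --query 'Grants[?Grantee.URI==`http://acs.amazonaws.com/groups/global/AllUsers`]' --output text)
  [ -n "$grants" ] && echo "bucket $b is public!"
done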

Also, you can tie this scanning process into a monitoring alert. That way, as soon as an errant bucket is found, you’re notified of the problem. Better safe than sorry!

Related: Which tech do startups use most?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How to find freelance work


I’ve decided to take the plunge, and begin a career as a freelancer. What do you think of services like UpWork? Can I build a business around that?

Join 38,000 others and follow Sean Hull on twitter @hullsean.

There are lots of services that promise the same thing. Headshops too are businesses built around reselling you to customers.

1. Whose relationship?

On those platforms you are a commodity. And further you don’t control the relationship. Upwork becomes your customer.

This is a crucial point. You can’t negotiate additional services or fees, or build on the relationship. Because your customer is UpWork. They control the business they bring to you.

Just remember, your boss/client/customer is the one who writes you a check.

Related: When you have to take the fall

2. Learn sales

If you think you’re not so great at sales, join the club. It’s a real talent, and one not everybody is born with.

But if you want to work for yourself, it’s absolutely crucial. So get practicing!

Related: When clients don’t pay

3. Go to events

The ways I have found: network at meetups, blog weekly, and send out a newsletter monthly. Add everyone you ever meet to your newsletter. Write interesting things & appeal to a broad audience. Some who receive your newsletter will not read it, but they will see your name pop up in their inbox once a month.

Related: Why i ask for a deposit

4. Expand

As you network, ask others for recommendations. Events, private email lists, single day conferences, forums etc.

Related: Can progress reports help consulting engagements succeed?

5. Craft an origin story

And don’t forget to tell your story. And tell it well. Craft a memorable origin narrative. Practice, and add or remove things that resonate with people you meet. Even ask people: what do you think about my presentation? Any suggestions? Is it confusing, enticing, exciting?

Related: Why do people leave consulting?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How to succeed with fixed price projects


Bidding on projects is an art as much as a science. Exciting a customer around skills and past successes is as important as being able to see details that haven’t yet materialized.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

So how does one approach this challenge? One way is to steer towards time and materials, and let things evolve in their own way. But that may not always work.

Here are my thoughts on how to navigate a fixed-fee project.

Overhead costs

When thinking about costing of projects, there are a lot of hidden costs. For fulltime folks, there is the cost of overhead around office space, supplies, training, liability & health insurance, retirement, time off and even severance in some cases.

There is also the cost of time: to hire the right team, manage them, and bring all the pieces together to get a successful product out the door.

Lots of intangibles.

Related: Can progress reports help you achieve successful engagements?

Evolving scope

When looking at a project, to come up with a realistic fixed bid, the scope must be carefully considered. If the bridge has two spans at either end and you decide to add one in the middle, does that mean a project of twice the size?

Both the vendor and manager must together attempt to break down the full scope into smaller pieces. Inevitably there will be some amount of emergent tasks and the scope will change and evolve.

Both consultant and customer must be realistic about this. You can call them product features, or in the agile universe stories, but at the end of the day, when you have many pieces, surprises will happen.

The devil is surely in the details!

Related: How best to do discovery in cloud and devops engagements?

Horse Trading Skills

Given that we know things will change, the customer and vendor should plan for change.

If both parties have a realistic perspective, there is the possibility of exchanging original scoped items for emergent or evolving scoped surprises.

That is, both need to be comfortable doing some sort of horse trading to keep the levels balanced. The client then gets some leeway, as does the consultant in deliverables.

It’s not easy, but it’s truly necessary in a fixed-price project. Because scope never really sits still.

Related: Why do people leave consulting?

Underbidding

Another approach that may work is underbidding to win the project. Here your scope is expected to change, and it becomes a painful process each time it does. If you are strong on sales this may work, but you’re sure to get an endless stream of change orders, and many many scrapes and bruises.

Related: Why I ask for a deposit

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How to use terraform to setup vpc & bastion box


If you’re building infrastructure on AWS or GCP you need a sandbox in which to place your toys. That sandbox is called a VPC. It’s one of those lovely acronyms that we in the tech world take for granted.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

Those letters stand for Virtual Private Cloud, one of many networks within your cloud, that serve as a firewall, controlling access to servers, applications and other resources.

1. What is it for?

A VPC partitions off your cloud, allowing you to control who gets into what. A VPC typically has a private zone and a public zone.

Within your private zone you’ll have two or more private subnets, and within your public zone, two or more public subnets. These each sit in different availability zones, or data centers within a region. Having at least two means you can be redundant right from the start.

Related: 30 questions to ask a serverless fanboy

2. How to setup the VPC

Terraform has some excellent community modules that help you hit the ground running. One of those facilitates creating a VPC for you. When you create your VPC, the main things you want to think about are:

o what region am I building in?
o what AZs do I want to use?
o what network CIDRs to use?

You’ll have important outputs when you build your vpc. In particular the private subnets, public subnets and default security groups, which you will reference over and over in all of your terraform code. That’s because RDS databases, ec2 instances, redis clusters and many other resources sit inside of a subnet.

module "my-vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a","us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  reuse_nat_ips        = false
  enable_vpn_gateway   = false
  enable_dns_hostnames = true

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}


Note, this module can do a *lot* more. For example you can attach an unchanging or fixed IP (an elastic IP in AWS terminology) to the NAT device. This is useful so that your application appears to be coming from a single box all the time. It allows upstream providers, APIs and other integrations to whitelist you, and allows your application and servers to tie into those services predictably and cleanly.

Also note that we created some nice tags. These tags become more and more important as you automate more of your infrastructure, because you will dig through the dashboard from time to time and can easily figure out what is what. You can also use a tag such as “monitoring = yes” to filter for resources that your monitoring system should tie into.

Related: How to use terraform to automate wordpress site deployment

3. How to add the bastion

You want to deploy all servers in private subnets. That’s because the internet is a dangerous place these days. Everything, and I mean everything, goes in private subnets. From there you provide only two ways to reach those resources. A load balancer fronts your applications, opening ports 80, 443 or other relevant ports. And a jump box fronts your ssh access.

Place the bastion box in your PUBLIC subnet, so that you can reach it from the outside internet.

Again we’re using an amazing community terraform module, which also implements another cool feature for us. Note we deploy mykey onto the box. Think of this as your master key. But you may want to provide other users access to these machines. In that case, simply place their public keys into my-public-keys-bucket.

Terraform will automatically deploy a key copying job onto this box via user-data script. The job will run via cron every 15 minutes, and copy (sync rather) public keys into the authorized keys file. This will allow you to add/remove users easily.

There are of course many more sophisticated networks which would require more nuanced user control, but this method is great for starters. 🙂

module "my-bastion" {
  source                      = "github.com/terraform-community-modules/tf_aws_bastion_s3_keys"
  instance_type               = "t2.micro"
  ami                         = "ami-976152f2"
  region                      = "us-east-1"
  key_name                    = "mykey"
  iam_instance_profile        = "s3_readonly"
  s3_bucket_name              = "my-public-keys-bucket"
  vpc_id                      = "${module.my-vpc.vpc_id}"
  subnet_ids                  = "${module.my-vpc.public_subnets}"
  keys_update_frequency       = "*/15 * * * *"
  additional_user_data_script = "date"
  name  = "my-bastion"
  associate_public_ip_address = true
  ssh_user = "ec2-user"
}

# allow ssh coming from bastion to boxes in vpc
#
resource "aws_security_group_rule" "allow_ssh" {
  type            = "ingress"
  from_port       = 22
  to_port         = 22
  protocol        = "tcp"
  security_group_id = "${module.my-vpc.default_security_group_id}"
  source_security_group_id = "${module.my-bastion.security_group_id}" 
}

Related: How to automate Amazon ECS and Docker with Terraform

4. Add an EC2 instance

Now that we have a bastion box in the public subnet, we can use it as a jump box to resources sitting in the private subnets.

Let’s add an ec2 instance in one of our private subnets first. Then in the test section, you can actually reach those boxes by configuring your ssh config.

Here’s the code to create an ec2 instance. Create a file testbox.tf and add these lines. Then do the usual “$ terraform plan && terraform apply”

resource "aws_instance" "example" {
  ami           = "ami-976152f2"
  instance_type = "t2.micro"
  subnet_id = "${module.my-vpc.public_subnets}"
  key_name = "mykey"
}

Related: How do I migrate my skills to the cloud?

5. Testing

In order to test, you’ll need to edit your local ssh config file. This sits in ~/.ssh/config and defines names you can use on your local machine, to hit resources out there on the internet via ssh. Each definition includes a host, an ssh key and a user.

Below we define our bastion box. With that saved to our ssh config file, we can do “$ ssh bastion” and login to it without any password. Excellent!

The second section is even cooler. Remember that our testbox sits in a private subnet, so there is no route to it from the internet at all. Even if we changed its security group to allow all ports from all source IPs, it would still not be reachable. 10.0.1.19 is not an internet IP; it is one only defined within the world of our private subnet.

The second section defines how to use bastion as a proxy to reach the testbox. Once that is added to our ssh config file, we can do “$ ssh testbox” and magically reach it in one hop, by using the bastion as a proxy.

Host bastion
   Hostname ec2-22-205-135-133.compute-1.amazonaws.com
   IdentityFile ~/.ssh/mykey.pem
   User ec2-user
   ForwardAgent yes


Host testbox
   Hostname 10.0.1.19
   IdentityFile ~/.ssh/mykey.pem
   User ec2-user
   ProxyCommand ssh bastion -W %h:%p

Related: Is AWS too complex for small dev teams?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How best to do discovery in cloud & devops engagements?


Customers reach out asking me to do implementations: architecting applications, deploying code to the cloud, optimizing, tuning, and automating all the things.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

But there is also a portion of engagements that require an amount of discovery. Some of that is technical in nature, and some is more around people and process.

Here are my thoughts.

1. Technical discovery

This is the most obvious type of discovery I might do. It would involve code reviews to begin, and then architecture reviews. Diagrams, microservice communication, apis and so forth.

Here’s a sample executive summary I did for one engagement, with names changed.

Next there is infrastructure, which of course should be defined in code. Terraform and CloudFormation provide good solutions here.

There also is hopefully documentation to review. This includes README’s and code comments, but also confluence docs as well.

Related: Can progress reports help engagements succeed?

2. Process discovery

The importance of understanding how the engineering team builds software, and gets new features to customers, cannot be overstated.

What is the methodology? How are deployments managed? Do they break often? How quickly can a developer get changes to production?

I’d recommend this a16z podcast on devops to get a better understanding of this process.

Related: When clients don’t pay

3. Team discovery

This is another area that is key to success. Is there an offshore team? Are SRE’s working remote? Are devs all here in New York or elsewhere? How well is communication happening? Are there trouble spots? Bottlenecks?

In particular it’s worth looking at strengths, weaknesses, opportunities and threats to the team and its cohesion.

Related: A CTO must never do this

4. Tools discovery

I’m often surprised how many firms don’t know what they have. As enterprises grow and teams turn over, institutional knowledge can sometimes walk out the door with departing staff.

In these cases a review of the systems and tools in place can be very helpful: tracking a product, its deployment, and the components in place to facilitate that.

This process can uncover surprises and much room for improvement.

Related: When you have to take the fall

5. In Summary

I’ve uncovered opportunities for improvement in all four of these areas. Although technical discovery is high on the list, the other areas can also be ripe for investigation.

Production quality, efficiency, speed of execution, and overall team morale and communication all contribute to the velocity of the firm in the marketplace.

Related: Why generalists are better at scaling the web

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

What does your dream job look like?


I see this question a lot because I’m often on the lookout for new opportunities. So I speak with a lot of recruiters, hiring managers and CTOs. It’s an interesting question.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

When I think about it, there are a few ways to break it down. Here’s what differentiates firms for me.

1. What pace are you looking for?

Is work-life balance the most important thing for you? That is, do you want to leave at 5pm and not be on call nights and weekends?

Alternatively are you after the fast-paced, always on, blistering hockey stick growth startup phase? That’s also exciting, although it may make work-life balance tougher.

Not to say the world is divided up into only two types, but I do think this is an interesting way to divide up the world.

Related: Why I don’t work with recruiters

2. What engineering culture do you like?

Do you prefer an engineering organization that does things cleanly and concisely, with truly best practices and high code quality, though perhaps with greater process control?

Or would you prefer a more cowboy style, with less process, where you can move quickly and get things out the door?

Related: How to hack job search?

3. What type of teams do you enjoy?

In some organizations that are smaller, you get a chance to wear a lot of hats. You aren’t so specialized, because there are fewer total team members. For example, there may not be one person devoted to database work, so one developer takes on that responsibility. While there is no devops team, another developer automates infrastructure.

Alternatively do you prefer more clearly defined job roles? That may be a larger org that has many more engineers. In that way you can own your own tiny slice, and focus just on that skillset or tool.

Both are valid of course, but they may be different types of orgs or companies at different stages in their development.

Related: Questions to ask for a devops interview

4. What’s your overall motivation?

This is an interesting question. For me personally, I prefer to have the biggest business impact. If I can come into an organization and raise the bar, even if the bar wasn’t high to begin with, that is very satisfying. If I don’t get to use the coolest wiz-bang technologies that’s ok with me.

Alternatively there are some organizations that are facing much more challenging problems. These tend to be very hard technical problems, where the bar is already quite high. In those you may be surrounded by very talented engineers indeed, and the baseline for entry is already quite high.

Again both are valid, just a matter of what type of environment you thrive in.

Related: How to hire a developer that doesn’t suck?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters