Before you do infrastructure as code, consider your workflow carefully


What happens with infrastructure as code when you want to make a change on prod?

I’ve been working on automation for a few years now. When you build your cloud infrastructure with code, you really take everything to a whole new level.


You might be wondering, what does the day-to-day workflow look like? It’s not terribly different from regular software development. You create a branch, write some code, test it, and commit it. But there are some differences.

1. Make a change on production

In this scenario, my development branch references the repo directly on my laptop. There is a module, but it references the code locally, like this:

source = "../"

So here are the steps:

1. Make your change to terraform in the main repo

This happens in the root source directory. Make your changes to .tf files, and save them.

2. Apply change on dev environment

You haven’t committed any changes to the git repo yet. You want to test them. Make sure there are no syntax errors, and they actually build the cloud resources you expect.

$ terraform plan
$ terraform apply

Fix errors, etc.

3. Redeploy containers

If you’re using ECS, your code above may have changed a task definition, or other resources. You may need to update the service. This will force the containers to redeploy fresh with any updates from ECR etc.
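If it’s just the service that needs a bounce, a one-liner with the AWS CLI will force that redeploy. The cluster and service names here are hypothetical placeholders:

$ # force a fresh deployment, pulling the latest task definition & image
$ aws ecs update-service --cluster my-cluster --service my-app --force-new-deployment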

4. Eyeball test

You’ll need to ssh to the ecs-host and attach to the containers. There you can review the environment, or verify that docker ps shows your new containers are running.

5. Commit changes to version control

Ok, now you’re happy with the changes on dev. Things seem to work, what next? You’ll want to commit your changes to the git repo:

$ git commit -am "added some variables to the application task definition"

6. Now tag your code – we’ll use v1.5

$ git tag -a v1.5 -m "added variables to app task definition"
$ git push origin v1.5

Be sure to push the tag (the second command above).

7. Update stage terraform module to use v1.5

In your stage main.tf where your stage module definition is, change the source line:

source = "git::https://github.com/hullsean/infra-repo.git?ref=v1.5"

8. Apply changes to stage

$ terraform init
$ terraform plan
$ terraform apply

Note that you have to do terraform init this time. That’s because you are using a new version of your code, so terraform has to go and fetch the whole thing, and cache it in the .terraform directory.

9. Apply change on prod

Redo steps 7 & 8 for your prod-module main.tf.

Related: When you have to take the fall

2. Pros of infrastructure code

o very professional pipeline
o pipeline can be further automated with tests
o very safe changes on prod
o infra changes managed carefully in version control
o you can back out changes, or see how you got here
o you can audit what has happened historically

Related: When clients don’t pay

3. Cons of over automating

o no easy way to sidestep
o manual changes will break everything
o you have to have a strong knowledge of Terraform
o you need a strong in-depth knowledge of AWS
o the whole team has to be on-board with automation
o you can’t just go in and tweak things

Related: Why i ask for a deposit


How to avoid legal problems in consulting


I posted a newsletter recently entitled “When Clients Don’t Pay”.

I got a lot of responses in email, which is always encouraging. I’m happy to know that folks are reading and getting something out of my ideas.

One colleague suggested that I modify my last point about going to court. He suggested that legal action does make sense after other avenues are exhausted.


My feeling about avoiding court has only grown stronger over the years.

There are usually only a few reasons a customer won’t pay. In my experience each of them is avoidable without going to court.

Here are my thoughts on those…

1. Misaligned on tasks, deliverables or deadlines

I find weekly progress reports and endless notes go a long way towards avoiding this problem. If it does arise, there is usually something specific in those notes that can be remedied.

One also needs to be willing to compromise. Putting yourself in the other’s shoes will help to understand their perspective.

Communicate, communicate, and communicate more!

Related: When you have to take the fall

2. Budget problems

Here there isn’t a lot to do anyway. Although companies are obligated by law to meet payroll, they have no such obligation to vendors. If they are out of cash, will court really resolve that?

My way of heading off this problem is billing and invoicing in smaller increments, getting a deposit, and keeping on top of things so larger debts don’t build up.

Related: The fine art of resistance

3. Shady customers

These I usually suss out well before becoming engaged. I’ve had a few incidents where a prospect was meeting me to get “free advice”. They ask a lot of architectural questions, and take careful notes. Then they don’t engage, or use their own people to implement.

One situation in particular I remember was around scalability. The product was a website & app for teachers. From the beginning they built it to sync data instantly. As they got bigger and more customers used the platform, their servers became heavily loaded.

I suggested, instead of looking for a technical solution, why not offer your customers silver, bronze & gold service levels? For the gold customers, yeah they get their own servers, and can sync all the time. But for the silver ones, once-a-day would probably suffice. Much less load on the servers, because 75% of customers would go silver, 20% bronze and 5% gold.

They actually ran with the idea and implemented it, but never hired me even for an hour of work. I knew they implemented it because I had a friend in the company. It is experiences like that which teach you quite a lot about business and about how you conduct yourself.

This has happened a few times, and I guess it’s part of doing business. But usually that comes out before we go much further, so in a sense it’s a blessing in disguise. 🙂

Related: How to hire a developer that doesn’t suck


I tried to build infrastructure as code with Terraform and Amazon. It didn’t go as I expected.


As I was building infrastructure code, I stumbled quite a few times. You hit a wall and you have to work through those confusing and frustrating moments.


Here are a few of the lessons I learned in the process of building code for AWS. It’s not easy but when you get there you can enjoy the vistas. They’re pretty amazing.

Don’t pass credentials

As you build your applications, there are moments where components need to use AWS in some way. Your webserver needs to use S3 or your ELK box needs to use CloudWatch. Maybe you want to do an RDS backup, or list EC2 instances.

However it’s not safe to pass your access_key and secret_access_key around. Those should be for your desktop only. So how best to handle this in the cloud?

IAM roles to the rescue. These are collections of privileges. The cool thing is they can be assigned at the INSTANCE LEVEL. Meaning your whole server has permissions to use said resources.

Do this by first creating a role with the privileges you want. Create a json policy document which outlines the specific rules as you see fit. Then create an instance profile for that role.

When you create your ec2 instance in Terraform, you’ll specify that instance profile. Either by ARN or if Terraform created it, by resource ID.
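Here’s a minimal Terraform sketch of that whole chain, from role to instance profile to instance. The names and the read-only S3 policy are hypothetical placeholders; adjust the actions and resources to suit:

# role that ec2 instances can assume
resource "aws_iam_role" "app_role" {
  name = "app-role"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

# the privileges themselves, e.g. read-only S3
resource "aws_iam_role_policy" "app_s3" {
  name = "app-s3-readonly"
  role = "${aws_iam_role.app_role.id}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": "*"
    }
  ]
}
POLICY
}

# wrap the role in an instance profile
resource "aws_iam_instance_profile" "app_profile" {
  name = "app-profile"
  role = "${aws_iam_role.app_role.name}"
}

# assign it at the instance level -- no credentials on the box
resource "aws_instance" "app" {
  ami                  = "ami-976152f2"
  instance_type        = "t2.micro"
  iam_instance_profile = "${aws_iam_instance_profile.app_profile.name}"
}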

Related: How to avoid insane AWS bills

Keep passwords out of code

Even though we know it should not happen, sometimes it does. We need to be vigilant to stay on top of this problem. There are projects like Pivotal’s credential scan. This can be used to check your source files for passwords.

What about something like RDS? You’re going to need to specify a password in your Terraform code right? Wrong! You can define a variable with no default as follows:

variable "my_rds_pass" {
  description = "password for rds database"
}

When Terraform comes upon this variable in your code, but sees there is no “default” value, it will prompt you when you do “$ terraform apply”.
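If you’d rather not be prompted, you can pass the value on the command line or through an environment variable. Both are standard Terraform behavior:

$ terraform apply -var "my_rds_pass=supersecret"
$ # or via the environment
$ TF_VAR_my_rds_pass=supersecret terraform apply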

Related: How best to do discovery in cloud and devops engagements?

Versioning your code

When you first start building terraform code, chances are you create a directory, and some tf files, then do your “$ terraform apply”. When you watch that infra build for the first time, it’s exciting!

After you add more components, your code gets more complex. Hopefully you’ve created a git repo to house your code. You can check in and commit the files, so you have them in a safe place. But of course there’s more to the equation than this.

How do you handle multiple environments, dev, stage & production all using the same code?

That’s where modules come in. Now at the beginning you may well have a module that looks like this:

module "all-proj" {

  source = "../"

  myvar = "true"
  myregion = "us-east-1"
  myami = "ami-64300001"
}

And so on. That’s a first step in the right direction. However, if you change your source code, all of your environments will now be using that code. They will get it as soon as you do “$ terraform apply” for each. That’s fine, but it doesn’t scale well.

Ultimately you want to manage your code like other software projects. So as you make changes, you’ll want to tag it.

So go ahead and check in your latest changes:

# push your latest changes
$ git push origin master
# now tag it
$ git tag -a v0.1 -m "my latest coolest infra"
# now push the tags
$ git push origin v0.1

Great. Now you want to modify your module slightly, as follows:

module "all-proj" {

  source = "git::https://[email protected]/hullsean/myproj-infra.git?ref=v0.1"

  myvar = "true"
  myregion = "us-east-1"
  myami = "ami-64300001"
}

Cool! Now each dev, stage and prod can reference a different version. So you are free to work on the infra without interrupting stage or prod. When you’re ready to promote that code, checkin, tag and update stage.

You could go a step further to be more agile, and have a post-commit hook that triggers the stage terraform apply. This though requires you to build solid infra tests. Check out testinfra and terratest.
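A bare-bones sketch of such a hook, assuming a hypothetical stage/ directory holding your stage configuration. A real pipeline would run those infra tests before the apply:

#!/bin/sh
# .git/hooks/post-commit -- hypothetical sketch, not production ready
cd stage || exit 1
terraform init
terraform plan -out=stage.plan && terraform apply stage.plan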

Related: Are you getting good at Terraform or wrestling with a bear?

Managing RDS backups

Amazon’s RDS service is a bit weird. I wrote in the past asking Is upgrading RDS like a shit-storm that will not end?. Yes I’ve had my grievances.

My recent discovery is even more serious! Terraform wants to build infra. And it wants to be able to later destroy that infra. In the case of databases, obviously the previous state is one you want to keep. You want that to be perpetual, beyond the infra build. Obvious, no?

Apparently not to the folks at Amazon. When you destroy an RDS instance it will destroy all the old backups you created. I have no idea why anyone would want this. Certainly not as a default behavior. What’s worse you can’t copy those backups elsewhere. Why not? They’re probably sitting in S3 anyway!

While you can take a final backup when you destroy an RDS instance, that’s wonderful and I recommend it. However that’s not enough. I highly suggest you take matters into your own hands. Build a script that calls pg_dump yourself, and copy those .sql or .dump files to S3 for safe keeping.
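Something like this cron-able sketch will do. The endpoint, database and bucket names are hypothetical, and the password is assumed to come from ~/.pgpass or PGPASSWORD:

#!/bin/bash
# dump the database and ship it to S3 for safe keeping
STAMP=$(date +%F)
pg_dump -h mydb.xxxxx.us-east-1.rds.amazonaws.com -U myuser mydb > /tmp/mydb-$STAMP.sql
aws s3 cp /tmp/mydb-$STAMP.sql s3://my-db-archive-bucket/
rm /tmp/mydb-$STAMP.sql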

Related: Is zero downtime even possible on RDS?

When to use force_destroy on S3 buckets

As with RDS, when you create S3 buckets with your infra, you want to be able to cleanup later. But the trouble is that once you create a bucket, you’ll likely fill it with objects and files.

What then happens is when you go to do “$ terraform destroy” it will fail with an error. This makes sense as a default behavior. We don’t want data disappearing without our knowledge.

However you do want to be able to cleanup. So what to do? Two things.

Firstly, create a process, perhaps a lambda job or other bucket replication to regularly sync your s3 bucket to your permanent bucket archive location. Run that every fifteen minutes or as often as you need.
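The simplest version of that process is an aws s3 sync job in cron. The bucket names here are hypothetical:

# crontab entry: sync the live bucket to the archive every fifteen minutes
*/15 * * * * aws s3 sync s3://my-live-bucket s3://my-archive-bucket --quiet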

Then add a force_destroy line to your s3 bucket resource. Here’s an example s3 bucket for storing load balancer logs:

data "aws_elb_service_account" "main" {}

resource "aws_s3_bucket" "lb_logs" {
  count         = "${var.create-logs-bucket ? 1 : 0}"
  force_destroy = "${var.force-destroy-logs-bucket}"
  bucket        = "${var.lb-logs-bucket}"
  acl           = "private"

  policy = <<POLICY
{
  "Id": "Policy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::${var.lb-logs-bucket}/*",
      "Principal": {
        "AWS": [
          "${data.aws_elb_service_account.main.arn}"
        ]
      }
    }
  ]
}
POLICY

  tags {
    Environment = "${var.environment_name}"
  }
}


Related: Why generalists are better at scaling the web


How to find freelance work


I’ve decided to take the plunge, and begin a career as a freelancer. What do you think of services like UpWork? Can I build a business around that?


There are lots of services that promise the same thing. Headshops too are businesses built around reselling you to customers.

1. Whose relationship?

On those platforms you are a commodity. And further you don’t control the relationship. Upwork becomes your customer.

This is a crucial point. You can’t negotiate additional services or fees, or build on the relationship. Because your customer is UpWork. They control the business they bring to you.

Just remember, your boss/client/customer is the one who writes you a check.

Related: When you have to take the fall

2. Learn sales

If you think you’re not so great at sales, join the club. It’s a real talent, and one not everybody is born with.

But if you want to work for yourself, it’s absolutely crucial. So get practicing!

Related: When clients don’t pay

3. Go to events

The ways I have found: network, go to meetups, blog weekly, and send out a newsletter monthly. Add everyone you ever meet to your newsletter. Write interesting things & appeal to a broad audience. Some people receiving your newsletter will not read it, but they will see your name pop up in their inbox once a month.

Related: Why i ask for a deposit

4. Expand

As you network, ask others for recommendations. Events, private email lists, single day conferences, forums etc.

Related: Can progress reports help consulting engagements succeed?

5. Craft an origin story

And don’t forget to tell your story. And tell it well. Craft a memorable origin narrative. Practice, and add or remove things that resonate with people you meet. Even ask people, what do you think about my presentation? Any suggestions? Is it confusing, enticing, exciting?

Related: Why do people leave consulting?


How to succeed with fixed price projects


Bidding on projects is an art as much as a science. Exciting a customer around skills and past successes is as important as being able to see details that haven’t yet materialized.


So how does one approach this challenge? One way is to steer towards time and materials, and let things evolve in their own way. But that may not always work.

Here are my thoughts on how to navigate a fixed-fee project.

Overhead costs

When thinking about the costing of projects, there are a lot of hidden costs. For full-time folks, there is the cost of overhead around office space, supplies, training, liability & health insurance, retirement, time off and even severance in some cases.

There is also the cost of time, to hire the right team, manage them, and bring all the pieces together to get a successful product out the door.

Lots of intangibles.

Related: Can progress reports help you achieve successful engagements?

Evolving scope

When looking at a project, to come up with a realistic fixed bid, the scope must be carefully considered. If the bridge has two spans at either end and you decide to add one in the middle, does that mean a project of twice the size?

Both the vendor and manager must together attempt to break down the full scope into smaller pieces. Inevitably there will be some amount of emergent tasks and the scope will change and evolve.

Both consultant and customer must be realistic about this. You can call them product features or in the agile universe stories, but at the end of the day when you have many pieces surprises will happen.

The devil is surely in the details!

Related: How best to do discovery in cloud and devops engagements?

Horse Trading Skills

Given that we know things will change, the customer and vendor should plan for change.

If both parties have a realistic perspective, there is the possibility of exchanging original scoped items for emergent or evolving scoped surprises.

That is, both need to be comfortable doing some sort of horse trading to keep the levels balanced. The client then gets some leeway, as does the consultant in deliverables.

It’s not easy, but it’s truly necessary in a fixed-price project. Because a scope never really sits still.

Related: Why do people leave consulting?

Underbidding

Another approach that may work is underbidding to win the project. Here your scope is expected to change, and it becomes a painful process each time. If you are strong on sales, this may work, but you’re sure to get an endless stream of change orders, and many many scrapes and bruises.

Related: Why I ask for a deposit


How to use terraform to set up vpc & bastion box


If you’re building infrastructure on AWS or GCP you need a sandbox in which to place your toys. That sandbox is called a VPC. It’s one of those lovely acronyms that we in the tech world take for granted.


Those letters stand for Virtual Private Cloud, a network within your cloud that serves as a firewall, controlling access to servers, applications and other resources.

1. What is it for?

A VPC partitions off your cloud, allowing you to control who gets into what. A VPC typically has a private zone and a public zone.

Within your private zone you’ll have two or more private subnets, and within your public zone, two or more public subnets. These each sit in different availability zones, or data centers within a region. Having at least two means you can be redundant right from the start.

Related: 30 questions to ask a serverless fanboy

2. How to setup the VPC

Terraform has some excellent community modules that help you hit the ground running. One of those facilitates creating a VPC for you. When you create your VPC, the main things you want to think about are:

o what region am I building in?
o what AZs do I want to use?
o what network CIDRs to use?

You’ll have important outputs when you build your vpc. In particular the private subnets, public subnets and default security groups, which you will reference over and over in all of your terraform code. That’s because RDS databases, ec2 instances, redis clusters and many other resources sit inside of a subnet.

module "my-vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a","us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  reuse_nat_ips        = false
  enable_vpn_gateway   = false
  enable_dns_hostnames = true

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}


Note, this module can do a *lot* more. For example you can attach an unchanging or fixed IP (elastic IP in aws terminology) to the NAT device. This is useful so that your application appears to be coming from a single box all the time. It allows upstream providers, APIs and other integrations to whitelist you, and allows your application and servers to tie into those services predictably and cleanly.

Also note that we created some nice tags. These tags become more and more important as you automate more of your infrastructure, because you will dig through the dashboard from time to time and can easily figure out what is what. You can also use a tag such as “monitoring = yes” to filter for resources that your monitoring system should tie into.

Related: How to use terraform to automate wordpress site deployment

3. How to add the bastion

You want to deploy all servers in private subnets. That’s because the internet is a dangerous place these days. Everything, and I mean everything, is a target. From there you provide only two ways to reach those resources. A load balancer fronts your applications, opening ports 80, 443 or other relevant ports. And a jump box fronts your ssh access.

Place the bastion box in your PUBLIC subnet, so that you can reach it from the outside internet.

Again we’re using an amazing community terraform module, which also implements another cool feature for us. Note we deploy mykey onto the box. Think of this as your master key. But you may want to provide other users access to these machines. In that case, simply place their public keys into my-public-keys-bucket.

Terraform will automatically deploy a key copying job onto this box via user-data script. The job will run via cron every 15 minutes, and copy (sync rather) public keys into the authorized keys file. This will allow you to add/remove users easily.

There are of course many more sophisticated networks which would require more nuanced user control, but this method is great for starters. 🙂

module "my-bastion" {
  source                      = "github.com/terraform-community-modules/tf_aws_bastion_s3_keys"
  instance_type               = "t2.micro"
  ami                         = "ami-976152f2"
  region                      = "us-east-1"
  key_name                    = "mykey"
  iam_instance_profile        = "s3_readonly"
  s3_bucket_name              = "my-public-keys-bucket"
  vpc_id                      = "${module.my-vpc.vpc_id}"
  subnet_ids                  = "${module.my-vpc.public_subnets}"
  keys_update_frequency       = "*/15 * * * *"
  additional_user_data_script = "date"
  name  = "my-bastion"
  associate_public_ip_address = true
  ssh_user = "ec2-user"
}

# allow ssh coming from bastion to boxes in vpc
#
resource "aws_security_group_rule" "allow_ssh" {
  type            = "ingress"
  from_port       = 22
  to_port         = 22
  protocol        = "tcp"
  security_group_id = "${module.my-vpc.default_security_group_id}"
  source_security_group_id = "${module.my-bastion.security_group_id}" 
}

Related: How to automate Amazon ECS and Docker with Terraform

4. Add an EC2 instance

Now that we have a bastion box in the public subnet, we can use it as a jump box to resources sitting in the private subnets.

Let’s add an ec2 instance in one of our private subnets first. Then in the test section, you can actually reach those boxes by configuring your ssh config.

Here’s the code to create an ec2 instance. Create a file testbox.tf and add these lines. Then do the usual “$ terraform plan && terraform apply”

resource "aws_instance" "example" {
  ami           = "ami-976152f2"
  instance_type = "t2.micro"
  subnet_id = "${module.my-vpc.public_subnets}"
  key_name = "mykey"
}

Related: How do I migrate my skills to the cloud?

5. Testing

In order to test, you’ll need to edit your local ssh config file. This sits in ~/.ssh/config and defines names you can use on your local machine, to hit resources out there on the internet via ssh. Each definition includes a host, an ssh key and a user.

Below we define our bastion box. With that saved to our ssh config file, we can do “$ ssh bastion” and login to it without any password. Excellent!

The second section is even cooler. Remember that our testbox sits in a private subnet, so there is no route to it from the internet at all. Even if we changed its security group to allow all ports from all source IPs, it would still not be reachable. 10.0.1.19 is not an internet IP, it is one only defined within the world of our private subnet.

The second section defines how to use bastion as a proxy to reach the testbox. Once that is added to our ssh config file, we can do “$ ssh testbox” and magically reach it in one hop, by using the bastion as a proxy.

Host bastion
   Hostname ec2-22-205-135-133.compute-1.amazonaws.com
   IdentityFile ~/.ssh/mykey.pem
   User ec2-user
   ForwardAgent yes


Host testbox
   Hostname 10.0.1.19
   IdentityFile ~/.ssh/mykey.pem
   User ec2-user
   ProxyCommand ssh bastion -W %h:%p

Related: Is AWS too complex for small dev teams?


How best to do discovery in cloud & devops engagements?


Customers reach out to me asking to do implementations, that is architecting applications, deploying code to the cloud, optimizing, tuning, and automating all the things.


But there is also a portion of engagements that require some amount of discovery. Some of that is technical in nature, and some is more around people and process.

Here are my thoughts.

1. Technical discovery

This is the most obvious type of discovery I might do. It would involve code reviews to begin, and then architecture reviews. Diagrams, microservice communication, apis and so forth.

Here’s a sample executive summary I did for one engagement, with names changed.

Next there is infrastructure, which of course should be defined in code. Terraform and CloudFormation provide good solutions here.

There also is hopefully documentation to review. This includes README’s and code comments, but also confluence docs as well.

Related: Can progress reports help engagements succeed?

2. Process discovery

The importance of understanding how the engineering team builds software and gets new features to customers cannot be overstated.

What is the methodology? How are deployments managed? Do they break often? How quickly can a developer get changes to production?

I’d recommend this a16z podcast on devops to get a better understanding of this process.

Related: When clients don’t pay

3. Team discovery

This is another area that is key to success. Is there an offshore team? Are SRE’s working remote? Are devs all here in New York or elsewhere? How well is communication happening? Are there trouble spots? Bottlenecks?

In particular it’s worth looking at strengths, weaknesses, opportunities and threats to the team and its cohesion.

Related: A CTO must never do this

4. Tools discovery

I’m often surprised how many firms don’t know what they have. As enterprises grow and teams turn over, institutional knowledge sometimes walks out the door with them.

In these cases review of systems and tools in place can be very helpful. Tracking a product, its deployment, and the components in place to facilitate that.

This process can uncover surprises and much room for improvement.

Related: When you have to take the fall

5. In Summary

I’ve uncovered opportunities for improvement in all four of these areas. Although technical discovery is high on the list, the other areas can also be ripe for investigation.

Production quality, efficiency and speed of execution, as well as overall team morale and communication, all contribute to the velocity of the firm in the marketplace.

Related: Why generalists are better at scaling the web


How to find consulting clients?


I get asked this question by people a lot. Whenever I attend conferences, meetups, or social events. How do you *do* consulting? I’d love to be doing that, how do I get there?


From what I can tell, the most important factor is being hungry. How do you teach that? If you’re fiercely independent to begin with, you may be willing to have a roommate, or do without luxury for a while, in order to build up your nest egg. To be sure you need some cushion to get started.

But you also need customers. Where the heck do you find them? Here are some tips…

1. What not to do?

The first thing you’ll find is that recruiters seem to have a lot of “customers”. Maybe I can just go that route. Sure, but just remember, those are not *your* customers. Your customer is the recruiting company. They have the relationship.

Why does this matter? Because you can’t build a business this way. You are effectively working as an employee of the recruiting company. Nothing wrong with it, but it’s not an entrepreneurial path. You don’t really have responsibility nor control over the full lifecycle of your business.

So again I would summarize, don’t use “hire a freelancer” sites.

And don’t use consulting headshops or recruiters.

Related: When you’re hired to solve a people problem

2. Do socialize

So how then? Well you need to first *peer* with economic buyers. What do I mean by that? Well if you go to technical conferences, that’s fine, but your peers are other engineers. These are not buyers. At best you may get a weak referral.

Hiring managers, CTOs, directors of operations, all will attend events where their peers will be found. If you want to be a professional services consultant, these folks are whom you need to socialize with.

First, experiment. Go to a lot of different types of events, meetups, small single track conferences. Ask where others go, and what’s on their radar. Also introduce people to each other. This may sound counterintuitive, but it’s important. Don’t have an agenda of “I’m looking for a job” but rather, I have a lot to offer.

Network by interviewing. This may appear to be an odd one, but it sometimes works. Take any interviews you can. Discuss how you solve problems. Learn by failing a few. If you talk to ten firms that have a seat to fill, one of them may go the consulting route, even though that wasn’t their original thought.

Talk to recruiters. It may sound at odds with what I said above, but it can be very valuable. Recruiters have their finger on the pulse. Even if they’re not physicians, they can measure the pulse. So too they may not know redshift from Oracle, but they’re hearing what the industry is looking for. They have a great perspective to share.

Go to non-peer events. Expand your horizons. Surprise yourself with who might attend other events. Tell your story. In the process, ask others where they spend time.

Ping all the people. Yes keep in touch with folks. You may create a newsletter to help with this. See below.

Keep the pipeline warm. Once you get a gig, you may quickly give up the socializing because you just want to do the work. But this will fail in the long term. You have to like the socializing and keep doing it. Even while you have an engagement or two going.

Always *GET* cards. Giving them is fine, but be sure to get the contact of the other person. 99% will not follow up. That’s your job!

Related: When clients don’t pay

3. Build your reputation

When people search your name, they should find you. On social networks like Linkedin, github, twitter, google plus, Slideshare, StackOverflow etc. Create profiles on all of these. Link them back to your professional site.

You *do* have a professional site right? If not go get a domain right now. devopsguru.io, backenddeveloper.guru, whatever! The domain doesn’t matter that much, most traffic will come from google, and it won’t be going to your homepage anyway.

Speak at non-peer events & conferences. Lunch & learns. Co-working spaces, incubators. These all have events, they’re all looking for experts. You may also apply to CFP’s regularly. Hey you might even get some conference passes out of it!

As I also mentioned above, a newsletter is also a good idea. Add every single person you meet in your professional context. It gives you something else to talk about as you are socializing. 🙂

Related: Why i ask for a deposit

4. Positioning

This one is counterintuitive. Why can’t I just do the thing I love?

Well sure maybe you can. But finding an underserved niche is a fast track to success. To my mind it’s crucial. So find out what is in demand. I know you’ve been talking to recruiters, right?

So yes pay attention to the wind. And pivot as necessary. Keep reading and stay up to date on new tech.

Related: Can progress reports help your engagements succeed?

5. What you might find

Don’t expect to get in at large companies like google & facebook, or with defense contractors. There’s a terrible amount of bureaucracy, and you would need a larger team to become an approved vendor. Also many of these larger well organized firms already have tons of talent.

You’re better suited to less organized, or newer companies. Because you want to be able to raise the bar for them. Better you start where the bar is a bit lower.

Examples…

o small early stage startups

These folks have some money, but they are still small, so they may not need a full-size engineering organization. They also need things done yesterday. So they are ripe with opportunities.

o medium size startups

Same as above. But they may be having trouble finding your skills. Because you’ve found that niche, right?

o greenfield

Startups building an MVP, where the sky’s the limit. You have the opportunity to build out the first gen. Go for it!

o second generation & legacy

Once a startup has seen its first round of developers leave, they may be in a spot where the business is great, but the product needs lifting. You’re looking at a quote-unquote legacy application, and you need to use your skills to tune, troubleshoot, and identify technical debt.

Related: Why do people leave consulting?


What makes a highly valued docker expert?


What exactly do we need to know about to manage docker effectively? What are the main pain points?


The basics aren’t tough. You need to know the anatomy of a Dockerfile, and how to set up a docker-compose.yml to ease the headache of docker run. You also should know how to manage docker images, and use docker ps to find out what’s currently running. And get an interactive shell (docker exec -it container_id bash). You’ll also make friends with inspect. But what else?
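For reference, a minimal docker-compose.yml for a hypothetical web app looks something like this, one service, a port mapping and an environment variable:

version: "3"
services:
  web:
    # build from the Dockerfile in the current directory
    build: .
    ports:
      - "8080:80"
    environment:
      - APP_ENV=dev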

1. Manage image bloat

Docker images can get quite large. Even as you try to pare them down they can grow. Why is this?

Turns out the architecture of docker means as you add more stuff, it creates more “layers”. So even as you delete files, the lower or earlier layers still contain your files.

One option, during a package install you can do this:

RUN apt-get update && apt-get install -y mypkg && rm -rf /var/lib/apt/lists/*

This will immediately cleanup the crap that apt-get built from, without it ever becoming permanent in that layer. Cool! As long as you use “&&” it is part of that same RUN command, and thus part of that same layer.

Another option is you can flatten a big image. Something like this should work:

$ docker export 0453814a47b3 | docker import - newimage

Related: 30 questions to ask a serverless fanboy

2. Orchestrate

Running docker containers on dev is great, and it can be a fast and easy way to get things running. Plus it can work across dev environments well, so it solves a lot of problems.

But what about when you want to get those containers up into the cloud? That’s where orchestration comes in. At the moment you can use docker’s own swarm or choose fleet or mesos.

But the biggest players seem to be kubernetes & ECS. The former of course is what all the cool kids in town are using, and coupled with the Helm package manager, it becomes a very manageable system. Get your pods, services, volumes, replicasets & deployments ready to go!

On the other hand Amazon is pushing ahead with its Elastic Container Service, which is native to AWS, and not open source. It works well, allowing you to apply a json manifest to create a task. Then just as with kubernetes you create a “service” to run one or more copies of that. Think of the task as a docker-compose file. It’s in json, but it basically specifies the same types of things. Entrypoint, ports, base image, environment etc.
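As a sketch, the containerDefinitions portion of such a task might look like this. The name, image and values here are hypothetical:

[
  {
    "name": "my-app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    "memory": 256,
    "portMappings": [
      { "containerPort": 80, "hostPort": 80 }
    ],
    "environment": [
      { "name": "APP_ENV", "value": "production" }
    ]
  }
]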

For those wanting to go multi-cloud, kubernetes certainly has an appeal. But amazon is on the attack. They have announced a service to further ease container deployments, dubbed Amazon Fargate. Remember how Lambda allowed you to just deploy your *code* into the cloud, and let amazon worry about the rest? Imagine you can do that with containers, and that’s what Fargate is.

Check out what Krish has to say – Why Kubernetes should be scared of AWS

Related: What’s the luckiest thing that’s happened in your career?

3. Registries & Deployment

There are a few different options for where to store those docker images.

One choice is dockerhub. It’s not feature rich, but it does the job. There is also Quay.io. Alternatively you can run your own registry. It’s as easy as:

$ docker run -d -p 5000:5000 registry:2

Of course if you’re running your own registry, now you need to manage that, and think about its uptime and dependability to your deployment pipeline.

If you’re using ECS, you’ll be able to use ECR which is a private docker registry that comes with your AWS account. I think you can use this, even if you’re not on ECS. The login process is a little weird.

Once you have those pieces in place, you can do some fun things. Your jenkins deploy pipeline can use docker containers for testing, to spin up a copy of your app just to run some unit tests, or it can build your images and push them to your registry, for later use in ECS tasks or Kubernetes manifests. Awesome sauce!

Related: Is Amazon Web Services too complex for small dev teams?


Curve ball technology questions and solutions


I’ve been on the phone with a lot of companies lately. You might be surprised that some of the challenges firms struggle with in the cloud, are repeated over and over.


I put together some of the most common ones I’ve heard, and my thoughts on the right solution.

1. How would you load test?

Here’s an interesting question. Do you talk about tools? How do you approach the problem?

The first thing I talk about is simulating real users. If your site normally has 1000 active users, how will it behave when it has 5000, 10,000, 100,000 or 1million? We can simulate this by using a load testing tool, and monitoring the infrastructure and database during that test.
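As a starting point, even a quick run with Apache Bench simulates concurrent users. The URL and numbers here are arbitrary:

$ # 10,000 requests, 100 at a time
$ ab -n 10000 -c 100 https://www.example.com/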

But how accurate are those tests? What do active users do? Login to the site? Edit and change some data? Where do active users spend most of their time? Are there some areas of the site that are busier than others? What about some dark corner of the site or product that gets less use, but is also less tuned? Suddenly a few users decide that want that feature, and performance slides!

Real world usage patterns are unpredictable. There is as much art as science to this type of load testing.

Related: 30 questions to ask a serverless fanboy

2. Why is Amazon S3’s 99.999999999% promise *not* enough??

I’ve heard people say before that S3 is extremely reliable. But is it?

According to their SLA, the durability guarantee is 11 nines. What does this mean? Durability is confidence that a file is saved. That you will not lose it. It’s on storage, and that storage has redundant copies. Great. You can be confident you will never lose a file.

What about uptime? That SLA is 99.9%, and do the math: 0.1% of a 30-day month is about 43 minutes. Surprise! That’s the better part of an hour of DOWNTIME per month. And if your product fails when S3 files are missing, guess what, your business is down for that long every month.

That’s actually quite a *lot* of downtime.

Solution: You better be doing cross-region replication. You have the tools and the cloud, now get to work!
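Here’s a minimal Terraform sketch of cross-region replication. The bucket names are hypothetical, versioning must be enabled on both ends, and the replication role (with the usual S3 replication permissions) is assumed to already exist:

variable "replication_role_arn" {}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

# the replica bucket in a second region
resource "aws_s3_bucket" "replica" {
  provider = "aws.west"
  bucket   = "my-assets-replica"

  versioning {
    enabled = true
  }
}

# the primary bucket, replicating everything to the replica
resource "aws_s3_bucket" "primary" {
  bucket = "my-assets"

  versioning {
    enabled = true
  }

  replication_configuration {
    role = "${var.replication_role_arn}"

    rules {
      id     = "replicate-everything"
      prefix = ""
      status = "Enabled"

      destination {
        bucket        = "${aws_s3_bucket.replica.arn}"
        storage_class = "STANDARD"
      }
    }
  }
}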

Related: What’s the luckiest thing that’s happened in your career?

3. Why is continuous integration not about tools?

I hear a lot of talk about continuous integration. I’ve even seen it as a line item on a todo list I was handed. Hmmm…

I asked the manager, “so it says here setup CI/CD. Are there already unit tests written? What about integration tests?” Turns out the team is not yet on board with writing tests. I gently explain that automated builds are not going to get very far without tests to run. 🙂

CI/CD requires the team to be on-board. It’s first a cultural change in how you develop code.

It means more regular code checkins. It means every engineer promises not to break the build.

It means write enough tests for good code coverage.

Related: How I use progress reports to achieve consulting success

4. What can VPC peering do for you?

Amazon has made VPC peering a lot easier. But what the heck is it?

In the world of cloud networking, everything is virtual. You have VPCs and inside those you have public and private subnets. What if you have multiple AWS accounts? VPC peering can allow you to connect backend private subnets, without going across the public internet at all.

As security becomes front & center for more businesses, this can be a huge win.

What’s more, it’s easier now because it is semi-managed by AWS.
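A minimal sketch of same-account peering in Terraform. The variables are hypothetical, and each side still needs routes (and security group rules) pointing across the connection:

variable "vpc_id" {}
variable "peer_vpc_id" {}
variable "private_route_table_id" {}

# peer two VPCs in the same account & region
resource "aws_vpc_peering_connection" "peer" {
  vpc_id      = "${var.vpc_id}"
  peer_vpc_id = "${var.peer_vpc_id}"
  auto_accept = true
}

# route traffic bound for the peer's CIDR over the peering connection
resource "aws_route" "to_peer" {
  route_table_id            = "${var.private_route_table_id}"
  destination_cidr_block    = "10.1.0.0/16"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.peer.id}"
}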

Related: Is upgrading Amazon RDS like a sh*t storm that will not end?

5. Why go with a New York based resource?

As more and more small startups put together teams to build their MVP, the offshore market has never been hotter. And there are very talented engineers in faraway places, from Eastern Europe, to India and China. And South America too.

At 1/3 to 1/4 the price, why hire US based people? Well one reason might be compliance. If you have sensitive data, it may need to be handled by US nationals.

Why New York based? Well there is the value of being face-to-face and working side by side with teams. It may also ease the language barrier & communication. And timezone challenges sometimes make communication difficult.

And lastly ownership. With resources that are focused solely on you, and for which you are a big customer, you’re likely to get more personalized focused attention.

Related: Is Amazon Web Services too complex for small dev teams?

6. What are some common antipatterns in the cloud?

Antipatterns are interesting. Because you see them regularly, and yet they are the *wrong* way to solve a problem, either they’re slower, or there is a better more reliable way to solve it.

o Using EFS, Amazon’s NFS solution, instead of putting assets in S3.

It might help you avoid rewriting code, but in the end S3 is definitely the way to go.

o Hardcoded IPs in security group rules instead of naming a group.

Yes I’ve seen this. If you specify each webserver individually, what happens when you autoscale? Answer, the new nodes break! The solution is to put all the webservers in a group, and then add a security group rule allowing access from that group (see the sketch at the end of this list). Voila!

o Passing credentials around instead of using AWS instance level roles

Credentials are the bane of applications. You hardcode them and things break later. Or they create a vulnerability that you forget about. That’s why AWS invented roles. But did you know a server *itself* can have a role? That means that server, and any software running on it, has permissions to certain APIs within the amazon universe. You can change a server’s role or its underlying policies while the server is still running. No restart required!

o Implementing CI/CD as a task item

Don’t forget culture & process are the big hurdles. Installing a tool is easy. Getting everyone using it every day is the challenge!

o Reducing and managing docker image bloat

As you change your docker images, you add layers. Even as you delete things, the total image size only grows! Seems counterintuitive. What’s more, when you do all that work with yum or apt-get, those packages stay lying around. One way is to install packages and then clean up, all in one command. Another way is to export and import a finished image.

o ssh-ing into servers or giving devs kubectl

Old habits die hard! I was watching Kelsey Hightower’s keynote at KubeCon. He made some great points about kubernetes. If you give all the devs kubectl, then it’s kind of like allowing everybody to SSH into the boxes. It’s not the way to do it!
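And circling back to the hardcoded-IP antipattern above, here’s that fix sketched in Terraform, with hypothetical group IDs. The webservers admit traffic from the load balancer’s security group, no individual addresses required:

variable "webserver_sg_id" {}
variable "lb_sg_id" {}

# allow http from the load balancer group, not from hardcoded IPs
resource "aws_security_group_rule" "web_from_lb" {
  type                     = "ingress"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  security_group_id        = "${var.webserver_sg_id}"
  source_security_group_id = "${var.lb_sg_id}"
}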

Related: Which tech do startups use most?
