Is there a serious skills shortage in the devops space?


As devops adoption picks up pace, the signs are everywhere. Infrastructure as code, once a backwater concept and a hoped-for ideal, has become essential to many startups.

Why might that be?

Join 37,000 others and follow Sean Hull on twitter @hullsean.

My theory is that devops enables the business in a lot of profound ways. Sure it means one sysadmin can do much more, manage a fleet of servers, and support a large user base. But it goes much deeper than that.

Being able to stand up your entire dev, qa, or production environment at the click of a button transforms software delivery dramatically. It means it can happen more often, more easily, and with less risk to the business. It means you can do things like blue/green deployments, rolling out features without any risk, while the old production environment keeps running in parallel.

What kind of chops does it take?

Strong generalist skills

For starters you’ll need a pragmatist mindset. Not fanatical about one technology, but open to the many choices available. And as a generalist, you start with a familiarity with a broad spectrum of skills, from coding, troubleshooting & debugging, to performance tuning & integration testing.

Stir into the mix good operating system fundamentals, top to bottom knowledge of Unix & Linux, networking, configuration and more. Maybe you’ve built kernels, compiled packages by hand, or better yet contributed to a few open source projects yourself.

You’ll be comfortable with databases, frontend frameworks, backend technologies & APIs. But that’s not all. You’ll need a broad understanding of cloud technologies, from GCP to AWS: S3, EC2, VPCs, EBS, webservers, caching servers, load balancing, Route53 DNS, serverless Lambda. Add to all of that programmable infrastructure through CloudFormation or Terraform.

Related: 30 questions to ask a serverless fanboy

Competent programmer

Although as a devops engineer you probably won’t be doing frontend dev, you’ll need a cursory understanding of it. You should be competent at Python and perhaps Nodejs. Maybe Ruby & bash scripts. You’ll need to understand JSON & YAML, plus CloudFormation & Terraform, if you want to deliver IaC.

Related: Does a 4-letter-word divide dev & ops?

Strong sysadmin with ops mindset

These are fundamental. But what does that mean? Ops mindset is born out of necessity. Having seen failures & outages, you prioritize around uptime. A simpler stack means fewer moving parts & less to manage. Do as Martin Weiner would suggest & use boring tech.

But you’ll also need to reason about all these components. That’ll come from dozens of debug & troubleshooting sessions you’ll do through years of practice.

Related: How to hire a developer that doesn’t suck

Understand build systems & deployment models

Build systems like CircleCI, Jenkins or Gitlab offer a way to automate code delivery. And as their use becomes more widespread, knowing them becomes de rigueur. But it doesn’t end there.

With deployments you’ll have a lot to choose from. At the very simplest, a single-target deploy; from there, all-at-once, minimum-in-service and rolling upgrades. But if you have completely automated your dev, qa & prod infra buildout, you can dive into blue/green deployments, where you build a completely new infra for each deploy, test it, then tear down the old.

Related: Is AWS too complex for small dev teams?

Personality to communicate across organization

I think if you’ve made it this far you will agree that the technical know-how is a broad spectrum of modern computing expertise. But you’ll also need excellent people skills to put all this into practice.

That’s because devops is also about organizational transformation. Yes devs & ops have to get up to speed on the tech, but the organization has to get on board too. Many entrenched orgs pay lip service to devops, but still do a lot of things manually. That’s as much out of fear as it is technical debt.

But getting past that requires evangelizing and advocacy. For that, a leader in the devops department will need superb people skills. They’ll communicate concepts broadly across the organization to win hearts and minds.

Related: Will Microservices just die already?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Very easy CloudFormation template comparison with simple Terraform for beginners


If you search a bit on Google, you’ll find lots of sample templates for both of these systems. However, I found they had a lot of complexity.

When you’re just starting, you want a very simple example. So I thought I’d put one together.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

I’m going to compare both Terraform & CloudFormation. They get you to the same endpoint, but do it slightly differently.

Very basic terraform template

Ok, you’ve got terraform installed right? If not, there are howtos here.

Now let’s create a server.

Create a directory “terraform” then cd into it. Edit this file as main.tf

provider "aws" {
    region = "us-east-1"
}
resource "aws_instance" "example" {
    ami = "ami-40d28157"
    subnet_id = "subnet-111ddaaa"
    instance_type = "t2.micro"
    key_name = "seanKey"
}

Please change the subnet to a valid one for you. In the real world you would definitely *not* hardcode a subnet like this. But I wanted to keep this example very simple. Don’t know what subnet to use? Navigate your aws dashboard over to “VPC” and dig around.

Also of course edit for your key.
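Note: depending on your terraform version, you may first need to initialize the working directory, so the aws provider plugin gets downloaded:

$ terraform init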

Ok, you’re ready to test. Let’s first ask terraform what it will do with the “plan” command:

levanter:terraform sean$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_instance.example
    ami:                      "ami-40d28157"
    availability_zone:        ""
    ebs_block_device.#:       ""
    ephemeral_block_device.#: ""
    instance_state:           ""
    instance_type:            "t2.micro"
    key_name:                 "seanKey"
    network_interface_id:     ""
    placement_group:          ""
    private_dns:              ""
    private_ip:               ""
    public_dns:               ""
    public_ip:                ""
    root_block_device.#:      ""
    security_groups.#:        ""
    source_dest_check:        "true"
    subnet_id:                "subnet-111ddaaa"
    tenancy:                  ""
    vpc_security_group_ids.#: ""


Plan: 1 to add, 0 to change, 0 to destroy.
levanter:terraform sean$

Related: What is devops and why is it important?

Build & change with Terraform

Next you want to ask terraform to go ahead and do the work, because above we only did a dry run.

levanter:terraform sean$ terraform apply
aws_instance.example: Creating...
  ami:                      "" => "ami-40d28157"
  availability_zone:        "" => ""
  ebs_block_device.#:       "" => ""
  ephemeral_block_device.#: "" => ""
  instance_state:           "" => ""
  instance_type:            "" => "t2.micro"
  key_name:                 "" => "seanKey"
  network_interface_id:     "" => ""
  placement_group:          "" => ""
  private_dns:              "" => ""
  private_ip:               "" => ""
  public_dns:               "" => ""
  public_ip:                "" => ""
  root_block_device.#:      "" => ""
  security_groups.#:        "" => ""
  source_dest_check:        "" => "true"
  subnet_id:                "" => "subnet-111ddaaa"
  tenancy:                  "" => ""
  vpc_security_group_ids.#: "" => ""
aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Still creating... (20s elapsed)
aws_instance.example: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
levanter:terraform sean$ 

One thing I like is that terraform shows us the progress at the command line. CloudFormation isn’t so nicely finished. 🙂

Ok, let’s add a tag name to our server. We’re going to add just three lines to our main.tf file:

provider "aws" {
    region = "us-east-1"
}

resource "aws_instance" "example" {
    ami = "ami-40d28157"
    subnet_id = "subnet-111ddaaa"
    instance_type = "t2.micro"
    tags {
        Name = "terraform-box"
    }
}

Now we do terraform apply again. Look how easy that change is to make!


levanter:terraform sean$ terraform apply
aws_instance.example: Refreshing state... (ID: i-0ddd063bbbbce56e2)
aws_instance.example: Modifying...
  tags.%:    "0" => "1"
  tags.Name: "" => "terraform-box"
aws_instance.example: Modifications complete

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
levanter:terraform sean$ 

Navigate to the EC2 dashboard and you should see the first column showing your new name.

That was cool!

Chances are you don’t wanna leave these components sitting around. Let’s cleanup. That’s easy too!

levanter:terraform sean$ terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_instance.example: Refreshing state... (ID: i-0ddd063bbbbce56e2)
aws_instance.example: Destroying...
aws_instance.example: Still destroying... (10s elapsed)
aws_instance.example: Still destroying... (20s elapsed)
aws_instance.example: Still destroying... (30s elapsed)
aws_instance.example: Still destroying... (40s elapsed)
aws_instance.example: Still destroying... (50s elapsed)
aws_instance.example: Still destroying... (1m0s elapsed)
aws_instance.example: Destruction complete

Destroy complete! Resources: 1 destroyed.
levanter:terraform sean$ 

Related: Top questions to ask on a devops interview

Very basic CloudFormation template example

Hopefully you wrote down your subnet name & keyname. So this will be easy.

Let’s create a “cfn” directory and cd into it.

Next edit sean-instance.yml

AWSTemplateFormatVersion: '2010-09-09'

Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      SubnetId: subnet-333dfe6a
      KeyName: "iheavy"
      ImageId: "ami-40d28157"

Now let’s build that with cloudformation. You need to have the awscli installed. Here’s amazon’s howto.

Now let’s create. CloudFormation organizes things as “stacks”.

aws cloudformation create-stack --template-body file://sean-instance.yml --stack-name cfn-test

Since I didn’t define “outputs” to keep the yaml simple, the command above should just return without error.

You can go into the aws dashboard, and navigate to “CloudFormation” and see the stack being created. You can also see under “EC2” a new instance has been created.
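Prefer the command line? You can also poll the stack status with describe-stacks. Once creation finishes, you should see something like this:

$ aws cloudformation describe-stacks --stack-name cfn-test --query 'Stacks[0].StackStatus' --output text
CREATE_COMPLETE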

Related: How do I migrate my skills to the cloud?

Add an instance name with tags in CloudFormation

As we did with terraform, let’s add a name to the server. This is just a tag, not a hostname, so it’s only useful throughout the AWS API.

AWSTemplateFormatVersion: '2010-09-09'

Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      SubnetId: subnet-333dfe6a
      KeyName: "iheavy"
      ImageId: "ami-40d28157"
      Tags:
        - Key: "Name"
          Value: "cfn-box"

Note the three new lines at the bottom. Ok, let’s apply those changes:

levanter:cfn sean$ aws cloudformation update-stack --template-body file://sean-instance.yml --stack-name cfn-test

Navigate to the EC2 dashboard and you should see the first column showing your new name.

Time to cleanup. Let’s delete that stack:

levanter:cfn sean$ aws cloudformation delete-stack --stack-name cfn-test
levanter:cfn sean$ 

Related: Is upgrading Amazon RDS like a sh*t storm that will not end?

Conclusions

Terraform supports just JSON or its own HCL (HashiCorp Configuration Language). The latter is actually the better supported format.

On the CloudFormation side you can use YAML or JSON.

However CloudFormation can be clunky and frustrating to work with. For example, a dry run in terraform is easy: just use “plan”. And isn’t that something we’re going to do over and over?

In CloudFormation there is a “validate-template” option, but this just checks your JSON or YAML. It doesn’t hit amazon’s API or test things in any real way. They have added something called Change Sets, but I haven’t tried them too much yet.
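For completeness, here’s what those look like. validate-template only does the syntax check, while a change set gives you a rough preview, a bit closer to terraform’s plan (the change set name below is arbitrary):

$ aws cloudformation validate-template --template-body file://sean-instance.yml
$ aws cloudformation create-change-set --stack-name cfn-test --change-set-name my-preview --template-body file://sean-instance.yml
$ aws cloudformation describe-change-set --stack-name cfn-test --change-set-name my-preview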

Also CloudFormation’s error messages are really lacking. They often give you a syntax error, or tell you a resource is incomplete, without real details on where or how. It makes debugging slow and tedious. Sometimes I see errors at create-stack time. Other times the create succeeds, only for errors to surface within the CloudFormation dashboard.

Terraform is wayyyyy better.

Related: Is Amazon Web Services too complex for small dev teams?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How can I get started with lambda and nodejs in 5 minutes?


I know these learn-to-do-x in 5 minutes type articles are a dime a dozen. But it’s true, we’re short on time, and we just wanna jump in. So let’s go!

Join 38,000 others and follow Sean Hull on twitter @hullsean.

Rather than go the old route of doing everything manually, and struggling, we’re going to give ourselves a skeleton to start with.

Enter the serverless framework. What’s it do? It’s a command line tool written in nodejs, which allows you to create a lambda project from a template.

From there you edit a yml file to tell serverless what to build & how. Then you put your code inside of the handler.js file. Sounds simple, right?

1. Create

If you haven’t already done it, install nodejs. There are lots of docs on the interwebs. For mac users, “brew install node” does the trick!

Next install the serverless package.

$ npm install -g serverless

Great! If you got dependency errors, get digging. Those moments of troubleshooting & patience teach you a lot. 🙂

Ok, now let’s kick the tires. We’ll create our new project.

$ serverless create --template aws-nodejs --path myEndpoint
$ cd myEndpoint

Related: 30 questions to ask a serverless fanboy

2. Edit serverless.yml

service: myEndpoint

frameworkVersion: ">=1.1.0 <2.0.0"

provider:
  name: aws
  runtime: nodejs4.3

functions:
  currentTime:
    handler: handler.endpoint
    events:
      - http:
          path: ping
          method: get

Ok, what are we looking at here? frameworkVersion pins the version of the serverless framework. Provider is aws, because serverless is attempting to build cross-platform support. You may also use azure, openwhisk, google cloud functions etc. Runtime is your language.

Under functions, our main one is currentTime. handler tells the serverless framework what code to match up with your function name. And finally events tells serverless about the API endpoint to configure.

There's a lot of magic going on under the hood. The serverless framework is using CloudFormation to build things in the background for you. CloudFormation is like Latin: it is a foundational construct for the entire AWS world. You can formalize any object, from servers to sqs queues, dynamodb tables, security groups, IAM users, S3 buckets, ebs volumes etc etc. You get the idea.

Want to see what serverless did? Head over to your aws dashboard, navigate to CloudFormation. You should see a new stack there called myEndpoint-dev. Scroll down and click the "Template" tab. You'll see the exact JSON code in all its gory detail!

Related: 5 surprising features of Amazon Lambda serverless computing

3. Edit handler.js

Next up let's add a bit of code.

'use strict';

// return the current time in JSON format
module.exports.endpoint = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: `Hello, the current time is ${new Date().toTimeString()}.`,
    }),
  };

  callback(null, response);
};

Whenever this function gets called, we'll just return the current time. Pretty self-explanatory.
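Before deploying, you can also run the function right on your laptop. The serverless framework supports a local invoke, which is handy for a quick sanity check:

$ serverless invoke local --function currentTime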

Related: Are you getting errors building lambda functions? I got you covered!

4. Deploy!

Now the fun part. Let's deploy the code.

$ serverless deploy

Simple command, but it's doing a lot of work. Serverless framework is packaging up your nodejs code into a zip file and uploading it to aws for you. You should see some output telling you what happened.

$ serverless deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (1.2 KB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
........................
Serverless: Stack update finished...
Service Information
service: myEndpoint
stage: dev
region: us-east-1
stack: myEndpoint-dev
api keys:
  None
endpoints:
  GET - https://ABCDEFGHIJK.execute-api.us-east-1.amazonaws.com/dev/ping
functions:
  currentTime: myEndpoint-dev-currentTime
$

Related: Is Amazon too big to fail?

5. Test

Awesome, now it's time to make sure it's working.

You can invoke the function directly using serverless' "invoke" command like this:

$ serverless invoke --function currentTime --log
{
    "statusCode": 200,
    "body": "{\"message\":\"Hello, the current time is 20:46:02 GMT+0000 (UTC).\"}"
}
--------------------------------------------------------------------
START RequestId: ed5e427c-fe22-11e7-90cc-a1fe66d674ce Version: $LATEST
END RequestId: ed5e427c-fe22-11e7-90cc-a1fe66d674ce
REPORT RequestId: ed5e427c-fe22-11e7-90cc-a1fe66d674ce	Duration: 0.67 ms	Billed Duration: 100 ms 	Memory Size: 1024 MB	Max Memory Used: 21 MB	


$

But we created an API endpoint, didn't we? Yep. You can hit that too. If you have a browser open, go ahead and copy/paste the url listed in the endpoints section of your deploy output.

You can also use curl like this:

$ curl https://ABCDEFGHIJK.execute-api.us-east-1.amazonaws.com/dev/ping
{"message":"Hello, the current time is 20:46:18 GMT+0000 (UTC)."}
$ 

Related: Is Amazon Web Services too complex for small dev teams?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don't work with recruiters

Is Alex Hudson right that software architecture is failing?


I read Hacker News, aka Y Combinator’s popular top 100. I never fail to find useful, surprising & stimulating reading there.

I recently stumbled on Alex Hudson’s “software architecture is failing”.

It’s very good, I recommend reading it.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

But why did it grab my attention, you might ask? Perhaps I’m a naysayer. But I do find there is a lot of hype, and a lot of sex in software today. It’s as though the shiniest, newest, coolest toys are the ones getting the spotlight.

So when I find an alternative view, I sit up and take notice.

1. Are we making systems too complex?

Right out of the gates, Alex makes a great point:

“We’re not delivering quickly enough!”. “Our systems are too complex to maintain!”. “The application we delivered last year is completely legacy now but it’s too difficult to replace!”.

Our industry’s obsession with the newest & coolest toys, means we’re building things that don’t last very long. A real & ongoing problem.

Related: Why does Reddit CTO Martin Weiner advocate boring tech?

2. Smaller enterprises

One thing Alex pointed out that really struck a nerve was this:

For those in tech who are not working at Facebook/Google/Amazon, we’re simply not talking enough about what systems at smaller enterprises look like.

I couldn’t agree more. As a profession, we watch closely what the big guys are doing. And that’s useful to a point. But for many smaller companies, using such architectures would be over-engineering in the extreme. Not to mention extremely costly!

Related: How I use terraform & composer to automate wordpress on AWS

3. Not bleeding & far from the edge

Another choice quote from Alex’s piece:


“It’s totally legacy, and no-one maintains it – it just sits there working, except for the occasions it doesn’t. The problem is replacing it is so hard, it’s got great performance, and the business doesn’t want to spend time replacing something working”. This is the problem being ahead of the curve – the definition of “success” (it works great, it’s reliable, it’s performant, we don’t need to think about it) looks a hell of a lot like the definition of “legacy”.

We know the term bleeding edge because it’s tough being out there trail blazing. Here I agree that sometimes legacy is also boring, yet eminently reliable.

Related: 30 questions to ask a serverless fanboy

4. Reduce, reuse, recycle

Should we build it or should we buy it? Here’s what Alex says:


I think we’re often getting the build/buy decision wrong. Software development should be the tool of last resort: “we’re building this because it doesn’t exist in the form we need it”.

Well said. Sure, we should consider integration costs & testing. And using a service brings other tradeoffs to balance. But it means we don’t have to own that code.

Better to focus on our business core competency.

Related: Is Amazon about to disrupt your data warehouse?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How do I migrate my skills to the cloud?



Hi, I’m currently an IT professional and I’m training for the AWS Solutions Architect – Associate exam. My question is how to gain some valuable hands-on experience without quitting the well-paying consulting gig I currently have, which is not cloud based. I was thinking, perhaps I could do some cloud work part time after I get certified.

Join 38,000 others and follow Sean Hull on twitter @hullsean.


I work in the public sector and the IT contract prohibits the agency from engaging any cloud solutions until the current contract expires in 2019. But I can’t just sit there without using these new skills – I’ll lose them. And if I jump ship I’ll lose $$$ because I don’t have the cloud experience.


Hi George,

Here’s what I’d suggest:

1. Set up your AWS account

A. open aws account, secure with 2FA & create IAM roles

First things first, if you don’t already have one, go sign up. Takes 5 minutes & a credit card.

From there be sure to enable two-factor authentication. Then stop using your root account! Create a new IAM user with permissions to the command line & API. Then use that to authenticate. You’ll be using the awscli python package.
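Here’s a rough sketch of that flow from the command line, once you’ve generated keys for your new IAM user:

$ pip install awscli              # install the aws command line tools
$ aws configure                   # paste in the IAM user's access key & secret
$ aws sts get-caller-identity     # confirm you're acting as the IAM user, not root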

Also: Is Amazon too big to fail?

2. Automatic deployments

B. plugin a github project
C. setup CI & deployment
D. get comfy with Ansible

Got a pet project on github? If not it’s time to start one. 🙂

You can also alternatively use Amazon’s own CodeCommit, which is a drop-in replacement for github and works fine too. Get your code in there.

Next set up CodeDeploy so that you can deploy that application to your EC2 instance with one command, along the lines of the sketch below.
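Assuming you’ve already created a CodeDeploy application “myapp” with a deployment group “production” (both names hypothetical), that one command might look like this:

$ aws deploy create-deployment --application-name myapp --deployment-group-name production --github-location repository=myuser/myrepo,commitId=abc1234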

But you’re not done yet. Now automate the spinup of the EC2 instance itself with Ansible. If you’re comfortable with shell scripts, or other operational tools, the learning curve should be pretty easy for you.

Read: Is AWS too complex for small dev teams? The growing demand for Cloud SRE

3. Clusters

E. play around with kubernetes or docker swarm

Both of these technologies allow you to spin up & control a fleet of containers running on a fixed set of EC2 instances. You may also use Amazon ECS, which is a similar type of offering.
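For example, with docker swarm you can get a tiny cluster going in a couple of commands:

$ docker swarm init                                # make this node a swarm manager
$ docker service create --name web --replicas 3 -p 80:80 nginx
$ docker service ls                                # check on your little fleet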

Related: How to deploy on EC2 with Vagrant

4. Version your infrastructure

F. use terraform or cloudformation to manage your aws objects
G. put your terraform code into version control
H. test rollback & roll foward infrastructure changes

Amazon provides CloudFormation as its foundational templating system. You can use JSON or YAML. Basically you can describe every object in your account, from IAM users, to VPCs, RDS instances to EC2, lambda code & on & on, all inside of a template file.

Terraform is a sort of cloud-agnostic version of the same thing. It’s also more feature rich & has got a huge following. All reasons to consider it.

Once you’ve got all your objects in templates, you can check these files into your git or CodeCommit repository. Then updating infrastructure is like updating any other piece of code. Now you’re self-documenting, and you can roll forward & backward if you make a mistake!
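In practice the loop looks something like this (the commit message is just an example):

$ git add main.tf
$ git commit -m "add reporting instance"   # describe the infra change
$ terraform plan                           # dry-run & review the diff
$ terraform apply                          # roll forward
$ git revert HEAD                          # made a mistake? roll the code back...
$ terraform apply                          # ...and apply the old definition again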

Related: How I use terraform & composer to automate wordpress on AWS

5. Learn serverless

I. get familiar with lambda & use serverless framework

Building applications & deploying only code is the newest paradigm shift happening in cloud computing. On Amazon you have Lambda, on Google you have Cloud Functions.

Related: 30 questions to ask a serverless fanboy

6. Bonus: database skills

J. Learn RDS – MySQL, Postgres, Aurora, Oracle, SQLServer etc

For a bonus page on your resume, dig into Amazon Relational Database Service or RDS. The platform supports various databases, so try out the ones you know already first. You’ll find that there are a few surprises. I wrote Is upgrading RDS like a sh*t storm that will not end?. That was after a very frustrating weekend upgrading a customer’s production RDS instance. 🙂
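Spinning up a small test instance from the command line is a good first exercise. A sketch, where the identifier, username & password are examples only:

$ aws rds create-db-instance --db-instance-identifier mytest --db-instance-class db.t2.micro --engine mysql --allocated-storage 20 --master-username admin --master-user-password MySecret123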

Related: Is Amazon about to disrupt your data warehouse?

7. Bonus: Data warehousing

K. Redshift, Spectrum, Glue, Quicksight etc

If you’re interested in the data side of the house, there is a *LOT* happening at AWS. From their Spectrum technology, which allows you to keep most of your data in S3 and still query it, to Glue, which provides an ETL-as-a-service offering.

You can also use a world-class columnar storage database called Redshift. This is purpose-built for reporting & batch jobs. It’s not going to meet your transactional web-backend needs, but it will bring up those Tableau reports blazingly fast!
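You can kick the tires here from the command line too. A minimal single-node cluster sketch, again with example names & password:

$ aws redshift create-cluster --cluster-identifier mywarehouse --cluster-type single-node --node-type dc1.large --master-username admin --master-user-password MySecret123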

Related: Is Amazon about to disrupt your data warehouse?

8. Now go find that cloud deployment job!


With the above under your belt there’s plenty of work for you. There is tons of demand right now for this stuff.

Did you learn all that? You’ve now got very, very in-demand skills. The recruiters will be chomping at the bit. Update those buzzwords (I mean keywords). This will help match you with folks looking for someone just like you!

Related: Why I don’t work with recruiters

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

What’s the *real* way to deploy on Google Cloud?


I was talking to a customer recently and they asked about deployments. They wanted to do things the real way. Here’s a snippet…

I’m helping out a company called Blue Marble and they are getting ready to deploy a new POS system. The app has been built using a Node.js back-end and Google Cloud Datastore for storage. The current dev build is hosted on AWS and connects to Google for the data bits.

Join 38,000 others and follow Sean Hull on twitter @hullsean.


For prod launch, they are interested in migrating to the “real” way of deployment on Google for everything.

They are pressed on time and looking for someone who can jump in quickly. Are you available? Do you have Google Cloud expertise?

Here’s what I said.

Cultural hurdles


Yep, I’ve used BigQuery & GCE.

What are they looking for specifically? Full deployment automation? Multiple deploys per day?

I’ve found that sometimes the biggest hurdle to fully automated deploys can be cultural issues.

In other words yes you can automate your deployment so it is push button, get all the artifacts & moving parts automated. Then deploy without much intervention. But to go from that to the team having *faith* in the system, that is a challenge.

Also: Why would I help a customer that’s not paying?

Unit testing


Once the process has been streamlined, a lot often still needs to happen around unit & smoke tests.

If the team isn’t already in the habit of building tests for each bit of code, this may take some time. Also building tests can be an art in itself. What are the edge cases? What values are out of bounds?

Consider for example the odd vulnerabilities that show up when hackers type SQL code into fields where devs expected ordinary values. Sanity checking, anyone?

Read: Is AWS too complex for small dev teams? The growing demand for Cloud SRE

Integration testing

What makes this all even more complicated is integration testing. Today many applications use various third party APIs, service-based authentication, and even web-based databases like Firebase. All of these can complicate testing.

Related: How to build an operational datastore on Amazon Redshift with S3

Getting there

Although your project, startup or business may be pressed for time, that may not change the realities of development. Your team has to become culturally ready to be completely agile. Many teams choose a middle ground of automating much of the deployment process, but still having a person in the loop just in case.

Same with testing. Sure automating can make you more agile & more efficient. But you’ll never automate out creative thinking, problem solving & ownership of the product.

Related: Why did Flatiron School fail?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Is Amazon about to disrupt your data warehouse?


Amazon is about to launch a product called Glue. This is the last piece in the data warehousing puzzle. With that in place, Amazon will own you! Or at least have push-button products to meet all of an enterprise’s varying needs.

Even if you’re a small startup, you can do big-shot enterprise data warehousing. That means everyone can use cutting edge data driven techniques for product & business decisions.

Join 33,000 others and follow Sean Hull on twitter @hullsean.

What is Redshift?

Redshift is like the OLAP databases of years past, the Oracles of the world, purpose-built for warehousing data. Obviously without the crazy licensing model Oracle was famous for. With Amazon you can get an enterprise-class data warehouse for modest hourly prices.

If my recent conversations with recruiters about Redshift demand are any indication, there’s been a sudden uptick in startups looking for Redshift expertise.

Also: Top serverless interview questions for hiring aws lambda experts

What is Spectrum?

Spectrum is a very new extension of Redshift allowing you to access & query S3 file data directly. This means you can keep petabytes of data accessible without loading it first. You’ll still ETL and load portions of it into Redshift, but with Spectrum you can query the offline data too.

In the old Oracle days this was called an EXTERNAL TABLE. I mention this only to say that Amazon isn’t doing anything that hasn’t been done before. Rather, they’re bringing these advanced features within reach of everyday startups. That’s cool.

Related: Which engineering roles are in greatest demand?

What is Glue?

Glue is still in beta, but if Amazon’s re:Invent talk is any indication, it’s set to disrupt an entire industry. Wow!

Glue first catalogs your data sources. What does that mean? It scans them & models their schemas.

It then generates sample python ETL code. Modify it, or write your own. Share your code via git. Or borrow other open source pieces that already address your specific ETL use case!

Lastly it includes a job scheduler which handles dependencies. Job A must be completed before B can run and so forth. Error handling & logging are also all included.

Since these are native Amazon services, of course they’re going to integrate with their dangerously fast Redshift warehouse.

Read: Can on-demand consulting save startups time & money?

What is serverless?

I’ve written about how to throw fastballs at a serverless fanboy and even how to hire a serverless expert. But really what is it?

Serverless means deploying functions directly into the cloud. No servers, no configuration. All the systems administration & automation is hidden. No more devops to argue with! Amazon’s own offering is called Lambda.

Also: 30 questions to ask a serverless fanboy

What is Quicksight?

Amazon’s even jumped into the fray at the presentation layer. Quicksight is a BI tool along the lines of Mode, Domo, Looker or Tableau.

Now it’s possible to stay completely within the cozy Amazon ecosystem even for business insight and analytics.

Also: What can startups learn from the DYN DNS outage?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Top questions to ask a devops expert when hiring or preparing for job & interview

Strip by Randall Munroe; xkcd.com

Whether you’re a hiring manager, head of HR or recruiter, you are probably looking for a devops expert. These days good ones are not easy to find. The spectrum of tools & technologies is broad. To manage today’s cloud you need a generalist.

Join 33,000 others and follow Sean Hull on twitter @hullsean.

If you’re a devops expert looking for a job, these are also some essential questions to have in your pocket. Be able to elaborate on these high level concepts, as they’re crucial in today’s agile startups.

Check out: 8 questions to ask an aws ec2 expert

Also new: Top questions to ask on a devops expert interview

And: How to hire a developer that doesn’t suck

1. How do you automate deployments?

A. Get your code in version control (git)

Believe it or not there are small one-person teams that haven’t done this. But even for those, there’s real benefit. Get on it!
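If you’re in that boat, the first steps take all of five minutes (the repo URL is hypothetical):

$ git init
$ git add .
$ git commit -m "first commit"
$ git remote add origin git@github.com:myuser/myapp.git
$ git push -u origin master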

B. Evolve to one script push-button deploy (script)

If deploying new code involves a lot of manual steps, move a file here, set a config there, set a variable, set up an S3 bucket, etc, then start scripting. That midnight deploy process should be one master script which includes all the logic.

It’s a process to get there, but keep the goal in sight.
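What might that master script look like? Here’s a bare-bones sketch; the hostname & helper script are placeholders for your own:

#!/bin/bash
set -e                                               # stop on the first error
git pull origin master                               # grab the latest code
./run_tests.sh                                       # hypothetical test script; bail if it fails
rsync -av build/ deploy@prod-server:/var/www/app/    # ship the artifacts
ssh deploy@prod-server 'sudo systemctl restart app'  # restart the service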

C. Build confidence over many iterations (team process & agile)

As you continue to deploy manually with a master script, you’ll iron out more details, contingencies, and problems. Over time you’ll gain confidence that the script does the job.

D. Employ continuous integration Tools to formalize process (CircleCI, Jenkins)

Now that you’ve formalized your deploy in code, putting these CI tools to use becomes easier, because at this stage they’re essentially custom-built for your process!

E. 10 deploys per day (long term goal)

Your longer term goal is 10 deploys a day. After you’ve automated tests, team confidence will grow around developers being able to deploy to production. On smaller teams of 1-5 people this may still be only 10 deploys per week, but still a useful benchmark.

Also: Top serverless interview questions for hiring aws lambda experts

2. What are microservices?

Microservices are about two-pizza teams: small enough that there’s little bureaucracy, able to be agile, focused on one business function, and iterating quickly without logjams with other business teams & functions.

Microservices interact with each other through APIs, deploy their own components, and use their own isolated data stores.

Function as a service, Amazon Lambda, or serverless computing enables microservices in a huge way.

Related: Which engineering roles are in greatest demand?

3. What is serverless computing?

Serverless computing is a model where servers & infrastructure do not need to be formalized. Only the code is deployed, and the platform, AWS Lambda for example, takes care of instant provisioning of containers & VMs when the code gets called.

Events within the cloud environment, such as a file added to an S3 bucket, trigger the serverless functions. API Gateway endpoints can also trigger the functions to run.

Authentication services are used for user login & identity management such as Auth0 or Amazon Cognito. The backend data store could be Dynamodb or Google’s Firebase for example.

Read: Can on-demand consulting save startups time & money?

4. What is containerization?

Containers are like faster-deploying VMs. They have all the advantages of an image or snapshot of a server. Why is this useful? Because you can containerize your microservices, so each one does one thing. One has a webserver, with a specific version of xyz.

Containers can also help with legacy applications, as you isolate older versions & dependencies that those applications still rely on.

Containers enable developers to set up environments quickly, and be more agile.
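For instance, pinning a webserver to a specific version is a one-liner:

$ docker run -d --name web -p 8080:80 nginx:1.15   # run a pinned nginx in the background
$ docker ps                                        # confirm the container is up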

Also: 30 questions to ask a serverless fanboy

5. What is CloudFormation?

CloudFormation formalizes all of your cloud infrastructure into JSON files. Want to add an IAM user, S3 bucket, RDS database, or EC2 server? Want to configure a VPC, subnet or access control list? All these things can be formalized into CloudFormation files.

Once you’ve started down this road, you can check your infrastructure definitions into version control, and manage them just like you manage all your other code. Want to do unit tests? Have at it. Now you can test & deploy with more confidence.

Terraform is a cloud-agnostic alternative to CloudFormation, with even more power built in.

Also: What can startups learn from the DYN DNS outage?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Key lessons from the Devops Handbook

I picked up a copy of the DevOps Handbook.

This is not a book about how to set up Amazon servers, how to use git, CodePipeline or Jenkins. It’s not about Chef or Ansible or other tools.

Join 33,000 others and follow Sean Hull on twitter @hullsean.

This is a book about processes & people. It’s about how & why automation & world-class infrastructure will make your business more agile, raise quality & increase productivity.

1. Infrastructure in version control

With technologies like Terraform and CloudFormation, the entire state of your infrastructure can be captured. That means you can manage it just like any other code.

Also: Myth of five nines – Why high availability is overrated

2. Pushbutton builds

You’ve heard it before. Automate your builds. That means putting everything in version control, from environment building scripts, to configs, artifacts & reference data. Once you can do that, you’re on your way to automating production deploys completely.

Related: 5 ways to move data to amazon redshift

3. Devs & Ops comingled

In the devops world, devs should learn about operations, infrastructure, performance & more. What’s more, operations teams should work closely with devs.

Read: Why were dev & ops siloed job roles?

4. Servers as cattle not pets

In the old days, we logged into servers & provided personal care & feeding. We treated them like pets.

In the new world of devops, we should treat servers like cattle. When one begins to fail, take it out back and shoot it. (tbh i don’t love the analogy, but it carries some meaning…)

Also: Are SQL databases dead?

5. Open to learnings & failures

Organizations that are open to failures, without playing the blame game, learn quicker & recover from problems faster.

Also: Is Amazon too big to fail?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

30 questions to ask a serverless fanboy

Everyone is hot under the collar again. So-called serverless or no-ops services are popping up everywhere, allowing you to deploy “just code” into the cloud. Not only won’t you have to login to a server, you won’t even have to know they’re there.

Your code is called by cloud events, such as a file upload or a hit on an http endpoint. Behind the scenes, through the magic of containers & autoscaling, Amazon & others are able to provision in milliseconds.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

Pretty cool. Yes, even as it outsources the operations role to invisible teams behind Amazon Lambda, Google Cloud Functions or Webtask, it’s also making companies more agile, and allowing startup innovation to happen even faster.

Believe it or not I’m a fan too.

That said I thought it would be fun to poke a hole in the bubble, and throw some criticisms at the technology. I mean going serverless today is still bleeding edge, and everyone isn’t cut out to be a pioneer!

With that, here’s 30 questions to throw at the serverless fanboys (and ladies!)…

1. Security

o Are you comfortable removing the barrier around your database?
o With more services, there is more surface area. How do you prevent malicious code?
o How do you know your vendor is doing security right?
o How transparent is your vendor about vulnerabilities?

Also: Myth of five nines – Why high availability is overrated

2. Testing

o How do you do integration testing with multiple vendor service components?
o How do you test your API Gateway configurations?
o Is there a way to version control changes to API Gateway configs?
o Can Terraform or CloudFormation help with this?
o How do you do load testing with a third party db backend?
o Are your QA tests hitting the prod backend db?
o Can you easily create & destroy test dbs?

Related: 5 ways to move data to amazon redshift

3. Management

o How do you do zero downtime deployments with Lambda?
o Is there a way to deploy functions in groups, all at once?
o How do you manage vendor lock-in at the monitoring & tools level but also code & services?
o How do you mitigate your vendor’s maintenance? Downtime? Upgrades?
o How do you plan for a move to an alternate vendor? Database import & export may not be ideal, plus code & infrastructure would need to be duplicated.
o How do you manage a third party service for authentication? What are the pros & cons there?
o What are the pros & cons of using a service-based backend database?
o How do you manage redundancy of code when every client needs to talk to backend db?

Read: Why were dev & ops siloed job roles?

4. Monitoring & debugging

o How do you build a third-party monitoring tool? Where are the APIs?
o When you’re down, is it your app or a system-wide problem?
o Where is the New Relic for Lambda?
o How do you degrade gracefully when using multiple vendors?
o How do you monitor execution duration so your function doesn’t fail unexpectedly?
o How do you monitor your account wide limits so dev deploy doesn’t take down production?

Also: Are SQL databases dead?

5. Performance

o How do you handle startup latency?
o How do you optimize code for mobile?
o Does battery life preclude a large codebase on client?
o How do you do caching on server when each invocation resets everything?
o How do you do database connection pooling?

Also: Is Amazon too big to fail?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters