25 lessons from Adrian Mouat’s Using Docker book

I spent some time digging through Adrian Mouat’s great book on Docker. Although it’s almost two years old now, it is still chock full of useful information on container goodness.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

I flipped through page after page, and chapter after chapter, and found the bits that I thought were particularly useful. And I have summarized those here.

1. Basics

o docker-compose organizes docker runs with a yaml config
o multiple services in one container is an antipattern
o deleting files doesn’t reduce container size, because they still exist in a previous layer
o export followed by import can be a quick way to reduce image size (see the sketch after this list)
o docker-machine allows you to provision containers on virtual hosts locally or in the cloud
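
Here’s roughly what that export/import trick looks like. A minimal sketch; the container & image names are made up, and note that import drops metadata like CMD & ENV, so you’ll need to set those again:

$ docker export my-container | docker import - myapp:flat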

Related: 5 surprising features of Amazon Lambda serverless computing

2. Testing

o build a private registry node, then push & pull images through it with your deploy pipeline
o unit tests are key and provide tests for individual functions in your code
o component tests are also important, to test api endpoints for example
o integration tests can be useful, verifying an auth service or external API is working with the app
o end-to-end tests verify that the entire application is working

Related: 30 questions to ask a serverless fanboy

3. Networking

o by default containers can talk to each other; consider --icc=false & --iptables=true (see the sketch after this list)
o pass secrets with env variables, or better yet use a file, vault or kms
o SkyDNS on top of etcd can provide a powerful service discovery solution
o use the registrator project to automatically register containers when they start
o orchestration with swarm (native), fleet, mesos or Kubernetes
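
For reference, here’s roughly how you lock down inter-container traffic at the daemon. Treat it as a sketch, since the flag spelling has moved around between docker versions:

$ dockerd --icc=false --iptables=true

With icc off on the default bridge, only containers you explicitly connect, say on a user-defined network via docker network create, can talk to each other.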

Related: Is upgrading Amazon RDS like a sh*t storm that will not end?

4. Security

o don’t run as root – because a breakout would have root on the host
o use limits on memory, cpu, restarts & filesystem to avoid DoS
o defang setuid root binaries with a find -perm +6000 & chmod a-s (see the sketch after this list)
o use gpg keys & verify checksums when downloading software
o selinux & AppArmor may help, but buyer beware
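
The setuid defang looks something like this. Modern GNU find spells the permission test -perm /6000 (older versions used +6000); run it as root, for example in a Dockerfile RUN step:

$ find / -perm /6000 -type f -exec chmod a-s {} \; 2>/dev/null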

Related: Is Amazon Web Services too complex for small dev teams?

5. Miscellaneous

o you can use logspout to send docker container logs to logstash
o add elasticsearch on top with kibana as a frontend for a great searchable logging UI
o Jason Wilder’s docker-gen can streamline config file creation from templates
o we can modularize compose files with the extends keyword (like a library import)
o audit containers & use docker diff to find issues (see the sketch after this list)
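
And docker diff is as simple as it sounds. Given a container name it lists files added (A), changed (C) or deleted (D) since the image was built. The output below is hypothetical:

$ docker diff my-container
C /var/log
A /var/log/app.log
D /tmp/build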

Related: Are you getting errors building lambda functions? I got you covered!

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

6 Devops interview questions


Devops is in serious demand these days. At every meetup or tech event I attend, I hear a recruiter or startup founder talking about it. It seems everyone wants to see benefits of talented operations brought to their business.

Join 37,000 others and follow Sean Hull on twitter @hullsean.

That said the skill set is very broad, which explains why there aren’t more devs picking up the baton.

I thought it would be helpful to put together a list of interview questions. There are certainly others, but here’s what I came up with.

1. Explain the gitflow release process

As a devops engineer you should have a good foundation about software delivery. With that you should understand git very well, especially the standard workflow.

Although there are other methods to manage code, one solid & proven method is gitflow. In a nutshell you have two main branches, development & master. Developers checkout a new branch to add a feature, and push it back to development branch. Your stage server can be built automatically off of this branch.

Periodically you will want to release a new version of the software. For this you merge development to master. UAT is then built automatically off of the master branch. When acceptance testing is done, you deploy off of master to production. Hence the saying always ship trunk.

Bonus points if you know that hotfixes are done directly off the master branch & pushed straight out that way.
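
A minimal sketch of the cycle at the command line, using the branch names above (the feature name is made up):

$ git checkout development
$ git checkout -b feature/login-page       # start a feature
$ git commit -am "add login page"
$ git checkout development
$ git merge feature/login-page             # feature lands in development
$ git checkout master
$ git merge development                    # cut a release
$ git tag -a v1.2.0 -m "release 1.2.0"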

Related: 8 questions to ask an AWS expert

2. How do you provision resources?

There are a lot of tools in the devops toolbox these days. One that is great at provisioning resources is Terraform. With it you can specify in declarative code everything your application will need to run in the cloud. From IAM users, roles & groups, dynamodb tables, rds instances, VPCs & subnets, security groups, ec2 instances, ebs volumes, S3 buckets and more.

You may also choose to use CloudFormation of course, but in my experience terraform is more polished. What’s more, it supports multi-cloud. Want to deploy in GCP or Azure? Just port your templates & you’re up and running in no time.

It takes some time to get used to the new workflow of building things in terraform rather than at the AWS cli or dashboard, but once you do you’ll see benefits right away. You gain all the advantages of versioning code that we see with other software development. Want to roll back? No problem. Want to do unit tests against your infrastructure? You can do that too!
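
The day-to-day loop is short. A sketch:

$ terraform plan      # dry-run: show what would change
$ terraform apply     # build it
$ git revert HEAD     # "rollback" is just reverting the code...
$ terraform apply     # ...and re-applying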

Related: Does a 4-letter-word divide dev & ops?

3. How do you configure servers?

The four big choices for configuration management these days are Ansible, Salt, Chef & Puppet. For my money Ansible has some nice advantages.

First, it doesn’t require an agent. As long as you have SSH access to your box, you can manage it with Ansible. Plus your existing shell scripts are pretty easy to port to playbooks. Ansible also does not require a server to house your playbooks. Simply keep them in your git repository, and check them out to your desktop. Then run ansible-playbook on the yaml file. Voila, server configuration!
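
For example, something like this, assuming SSH access to a host & a site.yml playbook in your checkout (both hypothetical):

$ echo "web1.example.com" > hosts       # inventory is just a text file
$ ansible all -i hosts -m ping          # verify connectivity, no agent needed
$ ansible-playbook -i hosts site.yml    # apply the playbook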

Related: How to hire a developer that doesn’t suck

4. What does testing enable?

Unit testing & integration testing are super important parts of continuous integration. As you automate your tests, you formalize how your site & code should behave. That way when you automate the deployment, you can also automate the test process. Let the software do the drudgework of making sure a new feature hasn’t broken anything on the site.

As you automate more tests, you accelerate the software development process, because you’re doing less and less manually. That means being more agile, and makes the business more nimble.

Related: Is AWS too complex for small dev teams?

5. Explain a use case for Docker

Docker is a low-overhead way to run virtual machines on your local box or in the cloud. Although they’re not strictly distinct machines, nor do they need to boot an OS, they give you many of those benefits.

Docker can encapsulate legacy applications, allowing you to deploy them to servers that might not otherwise be easy to set up with older packages & software versions.

Docker can be used to build test boxes, during your deploy process to facilitate continuous integration testing.

Docker can be used to provision boxes in the cloud, and with swarm you can orchestrate clusters too. Pretty cool!
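
As a sketch, wrapping a legacy app pinned to old packages might look like this (the image name is made up):

$ docker run -d --name legacy-crm -p 8080:80 legacy-crm:php5.3
$ curl localhost:8080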

Related: Will Microservices just die already?

6. How is communication relevant to devops?

Since devops brings a new process of continuous delivery to the organization, it involves some risk. Actually, doing things the old way involves more risk in the long term, because things can and will break. With automation, you can recover from failure more quickly.

But this new world requires a leap of faith. It’s not right for every organization or in every case, and you’ll likely strike a balance between what the devops holy book says and what your org can tolerate. Inevitably, communication becomes very important as you advocate for new ways of doing things.

Related: How do I migrate my skills to the cloud?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Is there a serious skills shortage around the devops space?


As devops adoption picks up pace, the signs are everywhere. Infrastructure as code, once a backwater concept and a hoped-for ideal, has become essential to many startups.

Why might that be?

Join 37,000 others and follow Sean Hull on twitter @hullsean.

My theory is that devops enables the business in a lot of profound ways. Sure it means one sysadmin can do much more, manage a fleet of servers, and support a large user base. But it goes much deeper than that.

Being able to stand up your entire dev, qa, or production environment at the click of a button transforms software delivery dramatically. It means it can happen more often, more easily, and with less risk to the business. It means you can do things like blue/green deployments, rolling out features without any risk to the production environment running in parallel.

What kind of chops does it take?

Strong generalist skills

For starters you’ll need a pragmatist mindset. Not fanatical about one technology, but open to the many choices available. And as a generalist, you start with a familiarity with a broad spectrum of skills, from coding, troubleshooting & debugging, to performance tuning & integration testing.

Stir into the mix good operating system fundamentals, top to bottom knowledge of Unix & Linux, networking, configuration and more. Maybe you’ve built kernels, compiled packages by hand, or better yet contributed to a few open source projects yourself.

You’ll be comfortable with databases, frontend frameworks, backend technologies & APIs. But that’s not all. You’ll need a broad understanding of cloud technologies, from GCP to AWS. S3, EC2, VPCs, EBS, webservers, caching servers, load balancing, Route53 DNS, serverless lambda. Add to all of that programmable infrastructure through CloudFormation or Terraform.

Related: 30 questions to ask a serverless fanboy

Competent programmer

Although as a devops engineer you probably won’t be doing frontend dev, you’ll need at least a cursory understanding of it. You should be competent at Python and perhaps Nodejs. Maybe Ruby & bash scripts. You’ll need to understand JSON & Yaml, CloudFormation & Terraform if you want to deliver IAC.

Related: Does a 4-letter-word divide dev & ops?

Strong sysadmin with ops mindset

These are fundamental. But what does that mean? An ops mindset is born out of necessity. Having seen failures & outages, you prioritize around uptime. A simpler stack means fewer moving parts & less to manage. Do as Martin Weiner would suggest & use boring tech.

But you’ll also need to reason about all these components. That’ll come from dozens of debug & troubleshooting sessions you’ll do through years of practice.

Related: How to hire a developer that doesn’t suck

Understand build systems & deployment models

Build systems like CircleCI, Jenkins or Gitlab offer a way to automate code delivery. And as their use becomes more widespread, knowing them becomes de rigueur. But it doesn’t end there.

With deployments you’ll have a lot to choose from. At the very simplest a single target deploy, then all-at-once, minimum-in-service and rolling upgrades. But if you have completely automated your dev, qa & prod infra buildout, you can dive into blue/green deployments, where you make a completely new infra for each deploy, test, then tear down the old.

Related: Is AWS too complex for small dev teams?

Personality to communicate across organization

I think if you’ve made it this far you will agree that the technical know-how is a broad spectrum of modern computing expertise. But you’ll also need excellent people skills to put all this into practice.

That’s because devops is also about organizational transformation. Yes devs & ops have to get up to speed on the tech, but the organization has to get on board too. Many entrenched orgs pay lip service to devops, but still do a lot of things manually. This is as much out of fear as it is technical debt.

But getting past that requires evangelizing and advocating. For that, a leader in the devops department will need superb people skills. They’ll communicate concepts broadly across the organization to win hearts and minds.

Related: Will Microservices just die already?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Very easy cloudformation template comparison with simple terraform for beginners


If you search a bit on google, you’ll find lots of sample templates for both of these systems. However I found they had a lot of complexity.

When you’re just starting, you want a very simple example. So I thought I’d put one together.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

I’m going to compare both terraform & cloudformation. They get you to the same endpoint, but do it slightly differently.

Very basic terraform template

Ok, you’ve got terraform installed right? If not, there are howtos here.

Now let’s create a server.

Create a directory “terraform” then cd into it. Edit this file as main.tf

provider "aws" {
    region = "us-east-1"
}
resource "aws_instance" "example" {
    ami = "ami-40d28157"
    subnet_id = "subnet-111ddaaa"
    instance_type = "t2.micro"
    key_name = "seanKey"
}

Please change the subnet to a valid one for you. In the real world you would definitely *not* hardcode a subnet like this. But I wanted to keep this example very simple. Don’t know what subnet to use? Navigate your aws dashboard over to “VPC” and dig around.
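
If you have the awscli installed, you can also list your subnets without leaving the terminal:

$ aws ec2 describe-subnets --query 'Subnets[].[SubnetId,VpcId,CidrBlock]' --output table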

Also of course edit for your key.

Ok, you’re ready to test. Let’s first ask terraform what it will do with the “plan” command:

levanter:terraform sean$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_instance.example
    ami:                      "ami-40d28157"
    availability_zone:        ""
    ebs_block_device.#:       ""
    ephemeral_block_device.#: ""
    instance_state:           ""
    instance_type:            "t2.micro"
    key_name:                 "seanKey"
    network_interface_id:     ""
    placement_group:          ""
    private_dns:              ""
    private_ip:               ""
    public_dns:               ""
    public_ip:                ""
    root_block_device.#:      ""
    security_groups.#:        ""
    source_dest_check:        "true"
    subnet_id:                "subnet-111ddaaa"
    tenancy:                  ""
    vpc_security_group_ids.#: ""


Plan: 1 to add, 0 to change, 0 to destroy.
levanter:terraform sean$

Related: What is devops and why is it important?

Build & change with Terraform

Next you want to ask terraform to go ahead and do the work. Because above we only did a dry-run.

levanter:terraform sean$ terraform apply
aws_instance.example: Creating...
  ami:                      "" => "ami-40d28157"
  availability_zone:        "" => ""
  ebs_block_device.#:       "" => ""
  ephemeral_block_device.#: "" => ""
  instance_state:           "" => ""
  instance_type:            "" => "t2.micro"
  key_name:                 "" => "seanKey"
  network_interface_id:     "" => ""
  placement_group:          "" => ""
  private_dns:              "" => ""
  private_ip:               "" => ""
  public_dns:               "" => ""
  public_ip:                "" => ""
  root_block_device.#:      "" => ""
  security_groups.#:        "" => ""
  source_dest_check:        "" => "true"
  subnet_id:                "" => "subnet-111ddaaa"
  tenancy:                  "" => ""
  vpc_security_group_ids.#: "" => ""
aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Still creating... (20s elapsed)
aws_instance.example: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
levanter:terraform sean$ 

One thing I like is terraform shows us the progress at the command line. Cloudformation isn’t so nicely finished. 🙂

Ok, let’s add a tag name to our server. We’re going to add just three lines to our main.tf file:

provider "aws" {
    region = "us-east-1"
}

resource "aws_instance" "example" {
    ami = "ami-40d28157"
    subnet_id = "subnet-111ddaaa"
    instance_type = "t2.micro"
    key_name = "seanKey"
    tags {
        Name = "terraform-box"
    }
}

Now we do terraform apply again. Look how easy that change is to make!

levanter:terraform sean$ terraform apply
aws_instance.example: Refreshing state... (ID: i-0ddd063bbbbce56e2)
aws_instance.example: Modifying...
  tags.%:    "0" => "1"
  tags.Name: "" => "terraform-box"
aws_instance.example: Modifications complete

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
levanter:terraform sean$ 

Navigate to the EC2 dashboard and you should see the first column showing your new name.

That was cool!

Chances are you don’t wanna leave these components sitting around. Let’s clean up. That’s easy too!

levanter:terraform sean$ terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_instance.example: Refreshing state... (ID: i-0ddd063bbbbce56e2)
aws_instance.example: Destroying...
aws_instance.example: Still destroying... (10s elapsed)
aws_instance.example: Still destroying... (20s elapsed)
aws_instance.example: Still destroying... (30s elapsed)
aws_instance.example: Still destroying... (40s elapsed)
aws_instance.example: Still destroying... (50s elapsed)
aws_instance.example: Still destroying... (1m0s elapsed)
aws_instance.example: Destruction complete

Destroy complete! Resources: 1 destroyed.
levanter:terraform sean$ 

Related: Top questions to ask on a devops interview

Very basic CloudFormation template example

Hopefully you wrote down your subnet name & keyname. So this will be easy.

Let’s create a “cfn” directory and cd into it.

Next edit sean-instance.yml

AWSTemplateFormatVersion: '2010-09-09'

Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      SubnetId: subnet-333dfe6a
      KeyName: "iheavy"
      ImageId: "ami-40d28157"

Now let’s build that with cloudformation. You need to have the awscli installed. Here’s amazon’s howto.

Now let’s create. Cloudformation organizes things as “stacks”.

aws cloudformation create-stack --template-body file://sean-instance.yml --stack-name cfn-test

Since I didn’t define “outputs” to keep the yaml simple, the command above should just return without error.

You can go into the aws dashboard, and navigate to “CloudFormation” and see the stack being created. You can also see under “EC2” a new instance has been created.
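
You can also poll the stack status from the command line; once it finishes you should see something like this:

$ aws cloudformation describe-stacks --stack-name cfn-test --query 'Stacks[0].StackStatus'
"CREATE_COMPLETE"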

Related: How do I migrate my skills to the cloud?

Add an instance name with tags in CloudFormation

As we did with terraform, let’s add a name to the server. This is just a tag, not a hostname, so it’s only useful throughout the AWS API.

AWSTemplateFormatVersion: '2010-09-09'

Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      SubnetId: subnet-333dfe6a
      KeyName: "iheavy"
      ImageId: "ami-40d28157"
      Tags:
        - Key: "Name"
          Value: "cfn-box"

Note the three new lines at the bottom. Ok, let’s apply those changes:

levanter:cfn sean$ aws cloudformation update-stack --template-body file://sean-instance.yml --stack-name cfn-test

Navigate to the EC2 dashboard and you should see the first column showing your new name.

Time to clean up. Let’s delete that stack:

levanter:cfn sean$ aws cloudformation delete-stack --stack-name cfn-test
levanter:cfn sean$ 

Related: Is upgrading Amazon RDS like a sh*t storm that will not end?

Conclusions

Terraform supports JSON or its own HCL (HashiCorp Configuration Language). The latter format is better supported.

On the CloudFormation side you can use yaml or json.

However CloudFormation can be clunky and frustrating to work with. For example, a dry-run in terraform is easy. Just use “plan”. And isn’t that something we’re going to do over and over?

In CloudFormation there is a “validate-template” option, but this just checks your JSON or YAML. It doesn’t hit amazon’s API or test things in any real way. They have added something called Change Sets, but I haven’t tried them too much yet.
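
For the record, a change set dry-run looks something like this (the change set name is made up):

$ aws cloudformation create-change-set --stack-name cfn-test --template-body file://sean-instance.yml --change-set-name add-name-tag
$ aws cloudformation describe-change-set --stack-name cfn-test --change-set-name add-name-tag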

Also CloudFormation’s error messages are really lacking. They often give you a syntax error or tell you a resource is incomplete without real details on where or how. It makes debugging slow and tedious. Sometimes I see errors at create-stack calls. Other times that succeeds, only to find errors within the CloudFormation dashboard.

Terraform is wayyyyy better.

Related: Is Amazon Web Services too complex for small dev teams?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Will microservices just die already?


I was just reading Dave Kerr’s piece The Death of Microservice Madness in 2018.

Not just because it has an awesome title, but because it was trending on news.ycombinator.com for a while, and that is always a good quality signal.

And I’m all about quality. 🙂

Join 37,000 others and follow Sean Hull on twitter @hullsean.

I quickly found that I agreed with him on a lot of points. There were also a bunch of serious criticisms in there that I hadn’t heard before.

Here are some of my comments on the piece:


Dave, this piece is genius. You hit on a lot of stuff here, and offered critical thought with such finesse. It’s not easy to stand up and be contrary to the trends!

o increased complexity for devs
– so true, setting up the entire suite of services on dev is tough
– and let’s not forget about integration testing, which also becomes tough

Check out: The Myth of five nines


o systems have poorly defined boundaries
– very true. We can break them up into easy teams at the start, but over time things get messy, and they overlap.

Read: Lambda & serverless interview questions


o complexities of state

– Do you use a monolithic db? If so the architecture isn’t really microservices.
– If each service has its own, transactions that touch multiple services become very tough.
– And what about backups for all these individual databases?
– How about at restore time? How do you manage them all to restore at a SINGLE POINT IN TIME?

Check out: my get started with serverless & lambda in 5 minutes guide


o Databases without schemas push logic into the application

– They sure do. And ones without complex joins do too. It’s a dirty little secret of NoSQL.

Check out: How to hire a developer that doesn’t suck

o Versioning can be hard
– Absolutely. Sure each service has its own version, but as Dave says you have to manage cross-version compatibility. If they are truly independent, this will drift over time in an unpredictably complex way
– And what about backup versions?

Related: Lambda & serverless interview questions

o Distributed transactions
– with a monolithic db broken up into little pieces, sometimes… maybe often, you will need to do things across data in multiple services. Then what?

I like the graphic Dave put together. It’s great. I do like serverless too. I’m also critical of it. I wrote a piece 30 questions to ask a serverless fanboy 🙂

Also: Is AWS too complex for small dev teams?

Get more. I write one piece every month & share it through email. Tech, startups & innovation. My latest Can daily notes help projects succeed?

How is automation impacting the dba role?


I was at a dinner party recently, and talking with some colleagues. I had worked with them years back on Oracle systems.

One colleague, Maria, said she really enjoyed my newsletter.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

She went on to say how much has changed in the last decade. We talked about how the database administrator, as a career role, wasn’t really being hired for much these days. Things had changed. Evolved a lot.

How do you keep up with all the new technology, she asked?

I went on to talk about Amazon RDS, EC2, lambda & serverless as really exciting stuff. And let’s not forget terraform (I wrote a howto on terraform), ansible, jenkins and all the other deployment automation technologies.

We talked about Redshift too. It seems to be everywhere these days and starting to supplant hadoop as the warehouse of choice for analytics.

It was a great conversation, and afterward I decided to summarize my thoughts. Here’s how I think automation and the cloud are impacting the dba role.

My career pivots

Over the years I’ve poured all those computer science algorithms, coding & hardware skills into a lot of areas. Tools & popular languages change. Frameworks change. But solid deductive reasoning remains priceless.

o C++ Developer

Fresh out of college I was doing Object Oriented Programming on the Macintosh with CodeWarrior & PowerPlant. C++ development is no joke, and daily coding builds strength in a lot of areas. Turns out the application was a database application, so I was already getting my feet wet with databases.

o Jack of all trades developer & Unix admin

One type of job role that I highly recommend early on is as a generalist. At a small startup with less than ten employees, you become the primary technology solutions architect. So any projects that come along you get your hands dirty with. I was able to land one of these roles. I got to work on Windows one day, Mac programming another & Unix administration & Oracle yet another day.

o Oracle DBA

The third pivot was to work primarily on Oracle. I attended Oracle conferences & my peers were Oracle admins. Interestingly, many of the Oracle “experts” came from more of a business background, not computer science. So to have a more technical foundation really made you stand out.

For the startups I worked with, I was a performance guru, scalability expert. Managers may know they have Oracle in the mix, but ultimately the end goal is to speed up the website & make the business run. The technical nuts & bolts of Oracle DBA were almost incidental.

o MySQL & Postgres

As Linux matured, so did a lot of other open source projects. In particular the two big open source databases, MySQL & Postgres, became viable.

Suddenly startups were willing to put their businesses on these technologies. They could avoid huge fees in Oracle licenses. Still there were not a lot of career database experts around, so this proved a good niche to focus on.

o RDS & Redshift on Amazon Cloud

Fast forward a few more years and it’s my fifth career pivot. Amazon Web Services bursts on the scene. Every startup is deploying their applications in the cloud. And they’re using Amazon RDS, their managed database service, to do it. That meant the traditional DBA role was less crucial. Sure the business still needed data expertise, but usually not as a dedicated role.

Time to shift gears and pour all of that Linux & server building experience into cloud deployments & migrating to the cloud.

o Devops, data, scalability & performance

Now of course the big sysadmin type role is usually called an SRE or Devops role. SRE being site reliability engineer. New name but many of the same responsibilities.

Now though infrastructure as code becomes front & center. Tools like CloudFormation & Terraform, plus Ansible, Chef & Jenkins are all quite mature, and being used everywhere.

Check out your infrastructure code from git, and run terraform apply. Minutes later you have rebuilt your entire stack, from bare metal to fully functioning & autoscaling application. Cool!

Related: 30 questions to ask a serverless fanboy

How I’ve steered DBA skills

There’s no doubt that data expertise & management skills are still huge. But the career role of database administrator has evolved quite a bit.

Related: 5 surprising features of Amazon Lambda serverless computing

Pros of automation & managing databases

For DBAs who are looking at the cloud from the old way of doing things, there’s a lot to love about it.

Automation brings repeatability to work & jobs. This is great. It raises the bar & makes us more professional, reducing manual processes & mistakes.

Infrastructure as code is self-documenting. It means we have a better idea of day-to-day processes, and can more easily hand off to new folks as we change roles or companies.

Related: Why generalists are better at scaling the web

Cons of automation & databases

However these days cloud, automation & microservices have brought a lot of madness too! Don’t believe me? Check out this piece on microservice madness.

With microservices you have more databases across the enterprise, on more platforms. How do you restore all at the same time? How do you do point-in-time recovery? What if your managed service goes down?

Migration scripts have become popular to make DDL changes in the database. Going forward (adding columns or tables) is great. But should we be letting our deployment automation roll *BACK* DDL changes? Remember, that deletes data, right? 🙂

What about database drop & rebuild? Or throwing databases in a docker container? No bueno. But we’re seeing this more and more. New performance problems are cropping up because of that.

What about when your database upgrades automatically? Remember when you use a managed service, it is built for 1000 users, not one. So if your use case is different you may struggle.

In my experience upgrading RDS was a nightmare. Database as a service upgrades lack visibility. You don’t have OS or SSH access so you can’t keep track of things. You just simply wait.

No longer do we have “zero downtime”. With amazon RDS you have guaranteed downtime upgrades. No seriously.

As the field of databases fragments, we are wearing many more hats. If you like this challenge & enjoy being a generalist, you may feel at home here. But it is a long way from one platform one skill set career path.

Also, fragmented db platforms mean more complex recovery. I can’t stress this enough. It would become practically impossible to restore all microservices, all their underlying databases & all systems to one single point in time, if you needed to.

Related: Is upgrading Amazon RDS like a sh*t storm that will not end?

DBAs, it’s time to step up and pivot

As the DBA role evolves, it also brings great opportunity. Those with solid database & data skills are sorely needed at startups and many fortune 500 organizations.

What I’m seeing is that organizations have lost much of the discipline they had as separate dba or operations departments. Schemaless databases have proliferated, and performance has suffered.

All these are more complex now, but strong DBA, performance & troubleshooting skills are needed now more than ever.

Related: The art of resistance in tech consulting

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How can I get started with lambda and nodejs in 5 minutes?


I know these learn-to-do-x in 5 minutes type articles are a dime a dozen. But it’s true, we’re short on time, and we just wanna jump in. So let’s go!

Join 38,000 others and follow Sean Hull on twitter @hullsean.

Rather than go the old route of doing everything manually, and struggling, we’re going to give ourselves a skeleton to start with.

Enter the serverless framework. What does it do? It’s a command line tool written in nodejs, which allows you to create a lambda project from a template.

From there you edit a yml file to tell serverless what to build & how. Then you put your code inside of the handler.js file. Sounds simple right?

1. Create

If you haven’t already done it, install nodejs. There are lots of docs on the interwebs. For mac users, “brew install node” does the trick!

Next install the serverless package.

$ npm install -g serverless

Great! If you got dependency errors, get digging. Those moments of troubleshooting & patience teach you a lot. 🙂

Ok, now let’s kick the tires. We’ll create our new project.

$ serverless create --template aws-nodejs --path myEndpoint
$ cd myEndpoint

Related: 30 questions to ask a serverless fanboy

2. Edit serverless.yml

service: myEndpoint

frameworkVersion: ">=1.1.0 <2.0.0"

provider:
  name: aws
  runtime: nodejs4.3

functions:
  currentTime:
    handler: handler.endpoint
    events:
      - http:
          path: ping
          method: get

Ok, what are we looking at here? frameworkVersion is the range of serverless framework versions we’ll work with. Provider is aws, because serverless is attempting to build cross-platform support. You may also use azure, openwhisk, google cloud functions etc. Runtime is your language.

Under functions, our main one is currentTime. handler tells the serverless framework what code to match up with your function name. And finally events tell serverless about the API endpoint to configure.

There's a lot of magic going on under the hood. The serverless framework is using CloudFormation to build things in the background for you. CloudFormation is like Latin; it is a foundational construct to the entire AWS world. You can formalize any object, from servers to sqs queues, dynamodb tables, security groups, IAM users, S3 buckets, ebs volumes etc etc. You get the idea.

Want to see what serverless did? Head over to your aws dashboard, navigate to CloudFormation. You should see a new stack there called myEndpoint-dev. Scroll down and click the "Template" tab. You'll see the exact JSON code in all its gory detail!
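
You can also pull that generated template from the command line:

$ aws cloudformation get-template --stack-name myEndpoint-dev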

Related: 5 surprising features of Amazon Lambda serverless computing

3. Edit handler.js

Next up let's add a bit of code.

'use strict';

// return the current time in JSON format
module.exports.endpoint = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: `Hello, the current time is ${new Date().toTimeString()}.`,
    }),
  };

  callback(null, response);
};

Whenever this function gets called, we'll just return the current time. Pretty self explanatory.

Related: Are you getting errors building lambda functions? I got you covered!

4. Deploy!

Now the fun part. Let's deploy the code.

$ serverless deploy

Simple command, but it's doing a lot of work. Serverless framework is packaging up your nodejs code into a zip file and uploading it to aws for you. You should see some output telling you what happened.

$ serverless deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (1.2 KB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
........................
Serverless: Stack update finished...
Service Information
service: myEndpoint
stage: dev
region: us-east-1
stack: myEndpoint-dev
api keys:
  None
endpoints:
  GET - https://ABCDEFGHIJK.execute-api.us-east-1.amazonaws.com/dev/ping
functions:
  currentTime: myEndpoint-dev-currentTime
$

Related: Is Amazon too big to fail?

5. Test

Awesome, now it's time to make sure it's working.

You can invoke the function directly using serverless' "invoke" command like this:

$ serverless invoke --function currentTime --log
{
    "statusCode": 200,
    "body": "{\"message\":\"Hello, the current time is 20:46:02 GMT+0000 (UTC).\"}"
}
--------------------------------------------------------------------
START RequestId: ed5e427c-fe22-11e7-90cc-a1fe66d674ce Version: $LATEST
END RequestId: ed5e427c-fe22-11e7-90cc-a1fe66d674ce
REPORT RequestId: ed5e427c-fe22-11e7-90cc-a1fe66d674ce	Duration: 0.67 ms	Billed Duration: 100 ms 	Memory Size: 1024 MB	Max Memory Used: 21 MB	


$

But we created an API endpoint didn't we? Yep. You can hit that. If you have a browser open, go ahead and copy/paste the url listed in the endpoints section of your deploy output.

You can also use curl like this:

$ curl https://ABCDEFGHIJK.execute-api.us-east-1.amazonaws.com/dev/ping
{"message":"Hello, the current time is 20:46:18 GMT+0000 (UTC)."}
$ 

Related: Is Amazon Web Services too complex for small dev teams?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don't work with recruiters

Can misfits teach us a thing or two about innovation?

I just finished reading Alexa Clay & Kyra Maya Phillips’ tour de force, The Misfit Economy.

(Yes that’s an affiliate link. The first one I’ve ever posted on this blog. If you like the book, please 🙂 buy through my link.)
I have to admit I was surprised & delighted by the book.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

Alexa & Kyra offer us a tantalizing question. Could it be that we could learn a lot from oddball innovators at the edge of the economy? When I say edge, I really mean it. They interview Sam Hostetler, who is building a business around milking camels, and then there’s Abdi Hasan, a pirate from Galkayo in northern Somalia. Yeah really! Or what about the German copycats Wimdu, who built a complete replica of Airbnb by reverse engineering it.

1. Hack the cold call

Take the example of Lance Weiler. Early on the industry was very against digital. They didn’t see it as really making films.

“Part of [Lance] Weiler’s success was due to his ability to work the system. He wrote letters to major production companies telling them he wanted to make the first digital motion picture. After he didn’t hear back, he took a page from the con man’s handbook and wrote the same letters but intentionally misaddressed them so they were sent to the wrong companies. Sony for example would get a letter intended for Barco.”

He was later able to bring digital projection to Cannes & Sundance!

“For Weiler his big epiphany was when he realized he could be creative across all of it [the business]. Not just in the art product, but in financing, distribution, and business aspects of artistic production.”

Related: The art of resistance or when you have to be the bad guy

2. Copy the product

The German brothers Oliver, Marc & Alexander Samwer are a superb example of how copying can build the prowess to compete against innovators that were first to market.

“in 1998 Marc Samwer had an instinct that eBay would thrive in the German market… his brothers agreed… they contacted eBay via email numerous times, recommending that the company replicate its platform in Germany. Claiming that eBay failed to respond, the brothers started their own German-language auction site, Alando, which was then purchased by eBay for 38 million euros (over $50 million) only 100 days after its debut. Had the Samwers not copied, eBay might have remained complacent, not realizing its potential within the German market.”

Although not mentioned in the book, Inditex, the wildly successful firm behind fashion brand Zara, did much the same thing to the fashion industry. By mastering the supply chain, they enabled their company to take designs from the runway & replicate them, turning designs into real clothing in stores in just two weeks! And indeed they really do replicate, borrow & straight copy those designs from what they see at fashion week. Sad & brilliant at the same time.

Related: When you have to take the fall

3. Don’t forget to hustle


“In the lexicon of the Misfit Economy, we define “hustle” as making something out of nothing. To move fast, to trade one thing for another, and to proactively create your own opportunities rather than waiting for opportunity to come your way. To hustle means getting your hands dirty, being lean and facile, working hard, being resourceful and resilient, and showing or having gumption, chutzpah, or mojo.”

And after all, isn’t that everything the startup industry aspires to? Agile teams? Growth hackers? Scrappy startups & innovation?

Related: When clients don’t pay

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Can daily notes help you work better with clients?


Years ago I was working at a customer for a few weeks. There was some confusion as to what was going on, in terms of progress. Things weren’t moving as quickly as they expected.

After a lot of back and forth, I suggested I could provide detailed notes of what I had done.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

After I put together my in-depth notes the customer was really happy. It seems these notes had highlighted a few problems that they didn’t know about. They even highlighted some people issues, where communication was blocked. What’s more, the notes underlined what I was doing, and this really improved the customer’s confidence in the work product.

1. Visibility

Keeping daily notes is a habit I found useful over and over again. If your client or customer comes to you and says, why are we paying $X, you can provide the notes as a detailed explanation of what they have gotten for their money.

Related: Are generalists better at scaling the web?

2. Transparency

Transparency is a door that swings both ways. As I mentioned above it can be great when the customer is not sure how much work was done, or what the bill is for. But it can also highlight things they may not want done. For instance perhaps you were investigating a problem authenticating to a server. You determined that it was an important piece.

When the customer sees this in your notes they may say “Oh we don’t need to deal with that system. Please leave it alone” or they may say “We actually have Rakesh available to help us with that piece, so please communicate with him and he can resolve that”.

Related: When clients don’t pay

3. Trust

Most important of all, keeping detailed notes helps build trust. Many customers, hiring managers & CTOs are not command-line technical. And that’s perfectly normal. However looking over a long list of notes like these provides great insight to them as to what you do from day-to-day.

Do they need to know what every line means? No. But the visibility goes a long long way toward building trust in the consultant client relationship.

Related: 5 conversational ways to evaluate great consultants

Week 1 April 1 – April 10

Here’s a sample of the kind of notes I keep. Actually they cover a ten day period, but that’s because the initial day was towards the end of a week.

Friday April 1st
o coord with Jake on getting started
o dropbox for password, creds & server docs
o reviewing system network diagram
o reviewing techlist excel doc
– techlist
– server list & access
– database access
– projects -old
o reviewing systems access.docx
o testing AWS login credentials
– issue with permissions
– coordinating with Jake on Admin access
o testing AWS creds again
– access to all AWS services
– IAM for seanhull user
– enabling MFA for user
o questions for outgoing op Roger

Sat April 2nd (no hours)

Sun April 3rd (no hours)

Mon April 4th
o coord with Jake to get onboarded
o sending W9 form to Acme Inc.
o setup slack
o plan for today
– review aws servers
– review dg servers
– questions for Roger
– review docs
o coord with Roger on VPN access
– reach out to Larry
– emailed Larry CC Jake
– Larry requests Acme access CC to mgmt
– turns out VPN access isn’t required
– can just whitelist IP inside the relevant security groups
– coord with J, going ahead to add whitelist 1.2.3.4/32
o updating Acmemedia-sandbox security group
– trying to reach host, coord with Roger
– asked to drop ssh key onto servers
– asked about .ssh/config file – Did you get from Jake?
– found the AWS PEM folder that I overlooked 🙂
o configuring .ssh/config file
– copying up to iheavy.com
– setting permissions 600 on pem files
– ssh to sandbox successful!!
o adding whitelist to Acmemedia-prod security group
o updating Jake – access is working

Tue April 5th
o coord with Jake on todo list for today
o verifying mysql access
– review security groups
– no whitelisted IPs
– can reach from webserver?
– test db1 MySQL access via webserver, OK
– test db2 MySQL access via webserver OK
o reviewing monitoring system
– testing nagios access
– locating configurations
– reviewing dashboard
– understanding tests
– down system db1 – 108 days – why?
– down system p1 – LB1 sailthru check down for 85 days why?
– down staging – 174 days why?
– emailed nagios questions to Roger
– request to add me to nagios notifications group
o coord with Roger on questions
– nagios setup & stopped checks
– add to admin group
o github access for sandbox details doc
o login to Acmemedia wp
– check list of 25 plugins
– review recent backups on abc (8)
o login to DDD wordpress
– check list of 33 plugins
– review recent backups in abc (8)
o login to EEE wordpress
– check list of 31 plugins
– review recent backups in abc (37)
o login to FFF wordpress
– check list of 35 plugins
– review recent backups in abc (8)
o login to DDD
o login to EEE
o login to FFF
o emailed Roger – request details about Glasgow server
o review various Acme github pages

Also: The art of resistance or when you have to be the bad guy

Wed April 6th
o coord with Jake on todos for today
o reviewing github pages docs on various system processes
– git deployment server page
– git deployment process
– new deploy process Nov 2015
– wiki pages are a bit sparse overall
o tested jenkins login
– found API cache clear
– found varnish cache clear
o understand separation of dev & production
o digging into Jenkins docs
o understanding build process
o tried login to EEEv2 wp login, don’t have pass
– coordinating with Jake on that login
o checking on nfs disk full nagios alert
– can’t reach box
– notified Jake & Roger via slack
– slack with Lester
– yes nfs01 space 90% is normal
– new launch of EEE tomorrow & old stuff will be deleted then
o updating nfs security group
– ssh login working now.
o getting diskspace error on prod04
– messaged Lester, related to EEE launch tonight
o email from Jake – local dev & test environment setups are slow
– very overengineered for simple wordpress site
– not using multisites, so have FOUR SEPARATE setups
– different plugins on each install
– four sets of logins
– four places to update
– four places to test/qa
– migration may be complex based on custom Acme plugins
– shortcodes compatibility across four sites
– not using ithemes security plugin
o discuss with Lester on slack
– API is hosted on datagram
– single point of failure for the site currently
– outage there would take the site down
– migrate to AWS using internal loadbalancer & webservers in 2 AZs

Thu April 7th
o call with Jake on EEEv2 launch today
– general observations of Acme sites & architecture
o reviewing access.Acmemedia.com
o discuss with Jake
– hosting media files on S3 vs nfs
– using multisite
– using wordpress through API only
– javascript based static site builder
– moving API to amazon EC2
– create slave MySQL db of master MySQL currently in datadotnet
o discuss with Roger
– launch plan
– two vhosts new.EEE.com
– old.EEE.com
– simply restart apache to enable switch
– refresh maxCDN after launch
o review EEEv2 deploy steps
– pre-deploy steps
– DNS for old.EEE.com
– add vhosts EEEv2.conf
– restart apache
– restart varnish
– clear maxcdn
o verified login to access.Acmemedia.com
– API log is in /var/log/httpd/production-access.log
– login as sandy & root
o not able to login to dashboard.Acmemedia.com
– tried admin & pass in datagram docs

o meeting onsite with Jake & Roger
– discuss deployment process
– discuss legacy systems
– discuss NFS vs S3 for media files
– discuss plugins & management
– discuss wordpress version upgrade process
– discuss plugin version upgrade process
– discuss Jenkins access, configs, success & error logs
– discuss managing secrets file
– script that takes webserver out of load balancer while apache restarting
o met Rachel, Louis, Lester, Rick, Stuart, Jack

Fri April 8th
o testing Acme stage build
o emailed Roger further questions
– where is secrets file configuration & process
– composer is PHP dependency management
– what are the steps to upgrade plugin only
o summarizing & notes on Acme
o put together steps for complete firedrill
– questions for Roger, requesting help with process
– build webserver with varnish & apache
– should setup separate NFS server
– should use Acmemedia.com bc it uses API heavily
– setup copy of API server & db
– setup mysql instance for wordpress
– setup amazon cloudfront for content
o outline additional questions for Roger
– how to upgrade plugin only
– composer for php dependency management
– how are secrets files managed & deployed outside developer access
o secrets management
– asked Roger for clarification
o plugin-only installs
– reviewed jenkins configs
– various questions to Roger
– composer:install seems to be the key change (not just deploy which does all?)
– why is STAGING PLUGIN DEPLOY for ORIM different?
o what happens when github account is disabled!!
– jenkins changes for new github deploy account
– THIS WOULD BREAK ALL DEPLOYS & CI/CD pipeline
– capistrano changes?
– any other changes on sandbox
– any other dependencies for Roger github?
o email step-by-step outline to add a plugin
– reviewing steps with Roger
– making sure no missing pieces

Sat April 9th (1 off-hour)
o receiving nagios alert for p1
o emailed Roger, Jake about issue
o slack messaged Jake
o raises question about off-hours coverage

Sun April 10th (2 off-hours)

o p1 still throwing errors
o coordinating with Lester & Ralph on Slack
– reiterated this is *not* an issue with NFS
– because of large number of nagios alerts, p1 lost in the shuffle
– p1 is new error, 97% so more dire than the NFS issue
– Lester attempting to login, fails because of AWS security group
– adding his *own* home IP as whitelist (devs have access to AWS console)
– first time logging in from home?
– Lester deleted old DDD logfiles to clear up 1.2G
– plan to touch base again tomorrow about issue
o emailing Jake about status
o questions for Jake
– how to manage on-call & alerts
– how to manage developer access
– Roger mentioned secrets files are not shared with devs
o Lester questions, comments on servers & diskspace

Related: When you have to take the fall

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Can I get more done by taking some dream time?


When I have a long todo list and a million things on my plate, my usual tactic is to just plow through it. Take a short break to eat, but then get right back to work. My feeling is, if it’s weighing on the back of my mind, I won’t enjoy the downtime anyway.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

Recently though I had a very different experience. And it surprised me.

1. Too much to do

A colleague of mine asked me to meetup for beers. We planned to talk technology, and to catchup on what we were both working on.

As the night rolled on he had some delays, and I wanted to cancel too. After all I had a ton of work to do, and didn’t think I would enjoy myself. I really felt like I’d be worrying about all this work on my plate. It’s like taking a vacation when you have a deadline. It doesn’t feel quite right.

Related: Can a growth mindset help you recover from setbacks?

2. My surprise

We ended up meeting anyway. At first I wasn’t totally relaxed, but then we started talking. Our conversation turned to the evolution of datacenters. How they used to be on premise, then there were lots of hosting companies. And then Amazon changed everything!

We talked about evolution of tooling & automation. Although system administrators of old have been writing bash scripts forever to make their jobs easier, the proliferation of tools for deployment has allowed smaller ops teams to control fleets of servers. As my friend & colleague was newly starting a job on Amazon Web Services, a lot of this cloud stuff was new to him. So talking about it from a teaching vantage point, made me realize how strong I was in a lot of this stuff.

We talked about docker & containerization too. Even the origins back in the late 70’s with Unix chroot, all the way up to Docker today. I explained to him that he could think of a container almost like a unix user, but with a more self-contained view of the whole system. In many ways a container acts like a vm, with its own filesystem and processes.

We talked a lot about aws, how S3 was an evolution of FTP in the old days, but much much better, how VPCs worked and the virtualization of networking, how VMs in the AWS world match with bare metal or not, how they share EBS storage. How Amazon has built a database service RDS around popular platforms like Oracle, MySQL & Postgres.

We shared a lot of ideas & brainstorming. About coding, C versus Java versus Python, package management, dependencies and on and on. He also mentioned he needed to build a test script to talk to an Amazon queue. I explained that it should be quite easy, and which libraries to look for.

Related: How I use terraform & composer to automate wordpress on AWS

3. Breaking through hurdles

It’s funny how dramatically different I felt after we got together. I all of a sudden had tons of new ideas bouncing around in my head.

Instead of waiting for the next day, after our get together, I went straight to the terminal. I quickly finished a coding challenge I was working on and struggling with. Easy peasy!

After that I felt inspired further. I created an Amazon SQS queue with the dashboard, and then wrote some python code to talk to that queue.
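
The python itself is beside the point, but for reference the same round trip looks something like this with the awscli (the queue name is made up):

$ aws sqs create-queue --queue-name test-queue
$ QURL=$(aws sqs get-queue-url --queue-name test-queue --query QueueUrl --output text)
$ aws sqs send-message --queue-url "$QURL" --message-body "hello world"
$ aws sqs receive-message --queue-url "$QURL"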

I created a git repo & checked in my code. All within a couple of hours!

I was just sitting there laughing. Because I felt such relief that I’d made progress.

It was a big surprise that such a circuitous route got me there.

I guess the takeaway is that mental play or dream time is important to making progress. Otherwise you’re just working in a vacuum!

Related: What I’ve learned from 10 years of blogging

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters