6 Devops interview questions


Devops is in serious demand these days. At every meetup or tech event I attend, I hear a recruiter or startup founder talking about it. It seems everyone wants to see the benefits of talented operations brought to their business.

Join 37,000 others and follow Sean Hull on twitter @hullsean.

That said the skill set is very broad, which explains why there aren’t more devs picking up the baton.

I thought it would be helpful to put together a list of interview questions. There are certainly others, but here’s what I came up with.

1. Explain the gitflow release process

As a devops engineer you should have a good foundation in software delivery. With that you should understand git very well, especially the standard workflow.

Although there are other methods to manage code, one solid & proven method is gitflow. In a nutshell you have two main branches, development & master. Developers check out a new branch to add a feature, and push it back to the development branch. Your stage server can be built automatically off of this branch.

Periodically you will want to release a new version of the software. For this you merge development to master. UAT is then built automatically off of the master branch. When acceptance testing is done, you deploy off of master to production. Hence the saying, "always ship trunk."

Bonus points if you know that hotfixes are done directly off the master branch & pushed straight out that way.
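To make that concrete, here's a minimal sketch of the flow in plain git commands. The branch layout matches the description above; the feature name & version tag are hypothetical:

    # start a feature off the development branch
    git checkout development
    git checkout -b feature/search-widget

    # ...commit work, then merge it back & push, triggering the stage build
    git checkout development
    git merge --no-ff feature/search-widget
    git push origin development

    # cut a release: merge development to master & tag it
    git checkout master
    git merge --no-ff development
    git tag -a v1.4.0 -m "release 1.4.0"
    git push origin master --tags    # UAT & production deploy off master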

Related: 8 questions to ask an AWS expert

2. How do you provision resources?

There are a lot of tools in the devops toolbox these days. One that is great at provisioning resources is Terraform. With it you can specify in declarative code everything your application will need to run in the cloud: IAM users, roles & groups, dynamodb tables, rds instances, VPCs & subnets, security groups, ec2 instances, ebs volumes, S3 buckets and more.

You may also choose to use CloudFormation of course, but in my experience terraform is more polished. What’s more it supports multi-cloud. Want to deploy in GCP or Azure? Just port your templates & you’re up and running in no time.

It takes some time to get used to the new workflow of building things in terraform rather than at the AWS cli or dashboard, but once you do you’ll see benefits right away. You gain all the advantages of versioning code we see with other software development. Want to roll back? No problem. Want to do unit tests against your infrastructure? You can do that too!
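The day-to-day loop is brief. A minimal sketch, assuming your templates live in the current directory & your state is already configured:

    terraform init                    # download providers, set up state
    terraform plan -out=plan.tfplan   # preview exactly what will change
    terraform apply plan.tfplan       # build only what the plan promised

    # rolling back is just applying an older revision of your code
    git checkout v1.2.0 -- .          # hypothetical tagged revision
    terraform plan -out=rollback.tfplan
    terraform apply rollback.tfplan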

Related: Does a 4-letter-word divide dev & ops?

3. How do you configure servers?

The four big choices for configuration management these days are Ansible, Salt, Chef & Puppet. For my money Ansible has some nice advantages.

First it doesn’t require an agent. As long as you have SSH access to your box, you can manage it with Ansible. Plus your existing shell scripts are pretty easy to port to playbooks. Ansible also does not require a server to house your playbooks. Simply keep them in your git repository, and check them out to your desktop. Then run ansible-playbook on the yaml file. Voila, server configuration!
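For example, something like the following, where the repo, inventory file & playbook names are all hypothetical:

    # pull down your playbooks just like any other code
    git clone git@github.com:example/playbooks.git && cd playbooks

    # dry run first, to see what would change on the box
    ansible-playbook -i inventory webserver.yml --check

    # then configure the server for real, over plain SSH
    ansible-playbook -i inventory webserver.yml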

Related: How to hire a developer that doesn’t suck

4. What does testing enable?

Unit testing & integration testing are super important parts of continuous integration. As you automate your tests, you formalize how your site & code should behave. That way when you automate the deployment, you can also automate the test process. Let the software do the drudgework of making sure a new feature hasn’t broken anything on the site.

As you automate more tests, you accelerate the software development process, because you’re doing less and less manually. That means being more agile, and makes the business more nimble.
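At its simplest, that automation is a short script your CI tool runs on every push, failing the deploy if any test fails. A sketch, where each helper script is hypothetical:

    #!/bin/bash
    set -e                        # stop the pipeline on the first failure

    ./run_unit_tests.sh           # hypothetical unit test runner
    ./run_integration_tests.sh    # hypothetical integration suite

    # only reached if every test above passed
    ./deploy_to_stage.sh          # hypothetical deploy step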

Related: Is AWS too complex for small dev teams?

5. Explain a use case for Docker

Docker is a low-overhead way to run virtual machines on your local box or in the cloud. Although they’re not strictly distinct machines, nor do they need to boot an OS, they give you many of those benefits.

Docker can encapsulate legacy applications, allowing you to deploy them to servers that might not otherwise be easy to setup with older packages & software versions.

Docker can be used to build test boxes, during your deploy process to facilitate continuous integration testing.

Docker can be used to provision boxes in the cloud, and with swarm you can orchestrate clusters too. Pretty cool!
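For instance, a throwaway test box is one docker run away. A minimal sketch, with a hypothetical image name & health endpoint:

    # wrap a legacy app & its old dependencies into an image
    docker build -t legacy-app:1.0 .

    # spin up a disposable test box for integration testing
    docker run -d --name test-box -p 8080:80 legacy-app:1.0
    curl -s http://localhost:8080/health

    # tear it down when the test run is done
    docker rm -f test-box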

Related: Will Microservices just die already?

6. How is communication relevant to Devops?

Since devops brings a new process of continuous delivery to the organization, it involves some risk. Actually doing things the old way involves more risk in the long term, because things can and will break. With automation, you can recover from failure more quickly.

But this new world requires a leap of faith. It’s not right for every organization or in every case, and you’ll likely strike a balance between what the devops holy book says and what your org can tolerate. However, inevitably communication becomes very important as you advocate for new ways of doing things.

Related: How do I migrate my skills to the cloud?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Is there a serious skills shortage around the devops space?


As devops adoption picks up pace, the signs are everywhere. Infrastructure as code, once a backwater concept and a hoped-for ideal, has become essential to many startups.

Why might that be?

Join 37,000 others and follow Sean Hull on twitter @hullsean.

My theory is that devops enables the business in a lot of profound ways. Sure it means one sysadmin can do much more, manage a fleet of servers, and support a large user base. But it goes much deeper than that.

Being able to stand up your entire dev, qa, or production environment at the click of a button transforms software delivery dramatically. It means it can happen more often, more easily, and with less risk to the business. It means you can do things like blue/green deployments, rolling out features without any risk to the production environment running in parallel.

What kind of chops does it take?

Strong generalist skills

For starters you’ll need a pragmatist mindset. Not fanatical about one technology, but open to the many choices available. And as a generalist, you start with a familiarity with a broad spectrum of skills, from coding, troubleshooting & debugging, to performance tuning & integration testing.

Stir into the mix good operating system fundamentals, top to bottom knowledge of Unix & Linux, networking, configuration and more. Maybe you’ve built kernels, compiled packages by hand, or better yet contributed to a few open source projects yourself.

You’ll be comfortable with databases, frontend frameworks, backend technologies & APIs. But that’s not all. You’ll need a broad understanding of cloud technologies, from GCP to AWS. S3, EC2, VPCs, EBS, webservers, caching servers, load balancing, Route53 DNS, serverless lambda. Add to all of that programmable infrastructure through CloudFormation or Terraform.

Related: 30 questions to ask a serverless fanboy

Competent programmer

Although as a devop you probably won’t be doing frontend dev, you’ll need some cursory understanding of those technologies. You should be competent at Python and perhaps Nodejs. Maybe Ruby & bash scripts. You’ll need to understand JSON & Yaml, CloudFormation & Terraform if you want to deliver IaC.

Related: Does a 4-letter-word divide dev & ops?

Strong sysadmin with ops mindset

These are fundamental. But what does that mean? Ops mindset is born out of necessity. Having seen failures & outages, you prioritize around uptime. A simpler stack means fewer moving parts & less to manage. Do as Martin Weiner would suggest & use boring tech.

But you’ll also need to reason about all these components. That’ll come from dozens of debug & troubleshooting sessions you’ll do through years of practice.

Related: How to hire a developer that doesn’t suck

Understand build systems & deployment models

Build systems like CircleCI, Jenkins or GitLab offer a way to automate code delivery. And as their use becomes more widespread, knowing them becomes de rigueur. But it doesn’t end there.

With deployments you’ll have a lot to choose from. At the very simplest, a single-target deploy; beyond that, all-at-once, minimum-in-service and rolling upgrades. But if you have completely automated your dev, qa & prod infra buildout, you can dive into blue/green deployments, where you make a completely new infra for each deploy, test, then tear down the old.
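A blue/green cutover might look roughly like this. The variable, test script & DNS helper are hypothetical; only the terraform commands are real:

    # build the green stack alongside the running blue one
    terraform apply -var env=green

    # smoke test green before it takes any traffic
    ./smoke_test.sh green.example.com

    # flip traffic over, then retire blue
    ./repoint_dns.sh app.example.com green
    terraform destroy -var env=blue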

Related: Is AWS too complex for small dev teams?

Personality to communicate across organization

I think if you’ve made it this far you will agree that the technical know-how is a broad spectrum of modern computing expertise. But you’ll also need excellent people skills to put all this into practice.

That’s because devops is also about organizational transformation. Yes devs & ops have to get up to speed on the tech, but the organization has to get on board too. Many entrenched orgs pay lip service to devops, but still do a lot of things manually. This is as much out of fear as it is technical debt.

But getting past that requires evangelizing, and advocating. For that a leader in the devops department will need superb people skills. They’ll communicate concepts broadly across the organization to win hearts and minds.

Related: Will Microservices just die already?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How is automation impacting the dba role?


I was at a dinner party recently, and talking with some colleagues. I had worked with them years back on Oracle systems.

One colleague Maria said she really enjoyed my newsletter.

Join 38,000 others and follow Sean Hull on twitter @hullsean.

She went on to say how much has changed in the last decade. We talked about how the database administrator, as a career role, wasn’t really being hired for much these days. Things had changed. Evolved a lot.

How do you keep up with all the new technology, she asked?

I went on to talk about Amazon RDS, EC2, lambda & serverless as really exciting stuff. And let’s not forget terraform (I wrote a howto on terraform), ansible, jenkins and all the other deployment automation technologies.

We talked about Redshift too. It seems to be everywhere these days and is starting to supplant hadoop as the warehouse of choice for analytics.

It was a great conversation, and afterward I decided to summarize my thoughts. Here’s how I think automation and the cloud are impacting the dba role.

My career pivots

Over the years I’ve poured all those computer science algorithms, coding & hardware skills into a lot of areas. Tools & popular languages change. Frameworks change. But solid deductive reasoning remains priceless.

o C++ Developer

Fresh out of college I was doing Object Oriented Programming on the Macintosh with CodeWarrior & PowerPlant. C++ development is no joke, and daily coding builds strength in a lot of areas. Turns out the application was a database application, so I was already getting my feet wet with databases.

o Jack of all trades developer & Unix admin

One type of job role that I highly recommend early on is as a generalist. At a small startup with less than ten employees, you become the primary technology solutions architect. So any projects that come along you get your hands dirty with. I was able to land one of these roles. I got to work on Windows one day, Mac programming another & Unix administration & Oracle yet another day.

o Oracle DBA

The third pivot was to work primarily on Oracle. I attended Oracle conferences & my peers were Oracle admins. Interestingly, many of the Oracle “experts” came from more of a business background, not computer science. So to have a more technical foundation really made you stand out.

For the startups I worked with, I was a performance guru, scalability expert. Managers may know they have Oracle in the mix, but ultimately the end goal is to speed up the website & make the business run. The technical nuts & bolts of Oracle DBA were almost incidental.

o MySQL & Postgres

As Linux matured, so did a lot of other open source projects. In particular the two big Open Source databases, MySQL & Postgres became viable.

Suddenly startups were willing to put their businesses on these technologies. They could avoid huge fees in Oracle licenses. Still there were not a lot of career database experts around, so this proved a good niche to focus on.

o RDS & Redshift on Amazon Cloud

Fast forward a few more years and it’s my fifth career pivot. Amazon Web Services bursts on the scene. Every startup is deploying their applications in the cloud. And they’re using Amazon RDS, their managed database service, to do it. That meant the traditional DBA role was less crucial. Sure the business still needed data expertise, but usually not as a dedicated role.

Time to shift gears and pour all of that Linux & server building experience into cloud deployments & migrating to the cloud.

o Devops, data, scalability & performance

Now of course the big sysadmin type role is usually called an SRE or Devops role. SRE being site reliability engineer. New name but many of the same responsibilities.

Now though infrastructure as code becomes front & center. Tools like CloudFormation & Terraform, plus Ansible, Chef & Jenkins are all quite mature, and being used everywhere.

Check out your infrastructure code from git, and run terraform apply. And minutes later you have rebuilt your entire stack from bare metal to fully functioning & autoscaling application. Cool!

Related: 30 questions to ask a serverless fanboy

How I’ve steered DBA skills

There’s no doubt that data expertise & management skills are still huge. But the career role of database administrator has evolved quite a bit.

Related: 5 surprising features of Amazon Lambda serverless computing

Pros of automation & managing databases

For DBAs who are looking at the cloud from the old way of doing things, there’s a lot to love about it.

Automation brings repeatability to work & jobs. This is great. It raises the bar & makes us more professional, reducing manual processes & mistakes.

Infrastructure as code is self documenting. It means we have a better idea of day-to-day processes, and can more easily handoff to new folks as we change roles or companies.

Related: Why generalists are better at scaling the web

Cons of automation & databases

However these days cloud, automation & microservices have brought a lot of madness too! Don’t believe me? Check out this piece on microservice madness.

With microservices you have more databases across the enterprise, on more platforms. How do you restore all at the same time? How do you do point-in-time recovery? What if your managed service goes down?

Migration scripts have become popular to make DDL changes in the database. Going forward (adding columns or tables) is great. But should we be letting our deployment automation roll *BACK* DDL changes? Remember that deletes data, right? 🙂

What about database drop & rebuild? Or throwing databases in a docker container? No bueno. But we’re seeing this more and more. New performance problems are cropping up because of that.

What about when your database upgrades automatically? Remember when you use a managed service, it is built for 1000 users, not one. So if your use case is different you may struggle.

In my experience upgrading RDS was a nightmare. Database as a service upgrades lack visibility. You don’t have OS or SSH access so you can’t keep track of things. You just simply wait.

No longer do we have “zero downtime”. With Amazon RDS you have guaranteed-downtime upgrades. No, seriously.

As the field of databases fragments, we are wearing many more hats. If you like this challenge & enjoy being a generalist, you may feel at home here. But it is a long way from the one-platform, one-skill-set career path.

Also, fragmented db platforms mean more complex recovery. I can’t stress this enough. It would become practically impossible to restore all microservices, all their underlying databases & all systems to one single point in time, if you needed to.

Related: Is upgrading Amazon RDS like a sh*t storm that will not end?

DBAs, it’s time to step up and pivot

As the DBA role evolves, it also brings great opportunity. Those with solid database & data skills are sorely needed at startups and many fortune 500 organizations.

What I’m seeing is that organizations have lost much of the discipline they had as separate dba or operations departments. Schemaless databases have proliferated, and performance has suffered.

All these are more complex now, but strong DBA, performance & troubleshooting skills are needed now more than ever.

Related: The art of resistance in tech consulting

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Top questions to ask a devops expert when hiring or preparing for a job interview

Strip by Randall Munroe; xkcd.com

Whether you’re a hiring manager, head of HR or recruiter, you are probably looking for a devops expert. These days good ones are not easy to find. The spectrum of tools & technologies is broad. To manage today’s cloud you need a generalist.

Join 33,000 others and follow Sean Hull on twitter @hullsean.

If you’re a devops expert and looking for a job, these are also some essential questions you should have in your pocket. Be able to elaborate on these high level concepts, as they’re crucial in today’s agile startups.

Check out: 8 questions to ask an aws ec2 expert

Also new: Top questions to ask on a devops expert interview

And: How to hire a developer that doesn’t suck

1. How do you automate deployments?

A. Get your code in version control (git)

Believe it or not there are small one-person teams that haven’t done this. But even for those, there’s real benefit. Get on it!

B. Evolve to one script push-button deploy (script)

If deploying new code involves a lot of manual steps (move a file here, set a config there, set a variable, set up an S3 bucket, etc.), then start scripting. That midnight deploy process should be one master script which includes all the logic. (See the sketch at the end of this section.)

It’s a process to get there, but keep the goal in sight.

C. Build confidence over many iterations (team process & agile)

As you continue to deploy manually with a master script, you’ll iron out more details, contingencies, and problems. Over time you’ll gain confidence that the script does the job.

D. Employ continuous integration Tools to formalize process (CircleCI, Jenkins)

Now that you’ve formalized your deploy in code, putting these CI tools to use becomes easier, because the hard work is already captured in scripts custom built for your process!

E. 10 deploys per day (long term goal)

Your longer term goal is 10 deploys a day. After you’ve automated tests, team confidence will grow around developers being able to deploy to production. On smaller teams of 1-5 people this may still be only 10 deploys per week, but still a useful benchmark.
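To make step B concrete, here’s a hedged sketch of what that one master script might grow into. Every helper script & bucket name here is hypothetical:

    #!/bin/bash
    set -euo pipefail             # fail fast & loudly

    git pull origin master        # grab the release
    ./run_tests.sh                # hypothetical test gate
    ./build_artifacts.sh          # hypothetical build step

    aws s3 cp config/app.conf s3://example-configs/   # hypothetical bucket
    ./restart_app_servers.sh      # hypothetical rolling restart
    ./smoke_test.sh               # hypothetical post-deploy check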

Also: Top serverless interview questions for hiring aws lambda experts

2. What are microservices?

Microservices are about two-pizza teams. Small enough that there’s little bureaucracy. Able to be agile, focused on one business function. Able to iterate quickly, without logjams with other business teams & functions.

Microservices interact with each other through APIs, deploy their own components, and use their own isolated data stores.

Function as a service, Amazon Lambda, or serverless computing enables microservices in a huge way.

Related: Which engineering roles are in greatest demand?

3. What is serverless computing?

Serverless computing is a model where servers & infrastructure do not need to be provisioned or managed by you. Only the code is deployed, and the platform, AWS Lambda for example, takes care of instant provisioning of containers & VMs when the code gets called.

Events within the cloud environment, such as a file added to an S3 bucket, trigger the serverless functions. API Gateway endpoints can also trigger the functions to run.

Authentication services are used for user login & identity management such as Auth0 or Amazon Cognito. The backend data store could be Dynamodb or Google’s Firebase for example.

Read: Can on-demand consulting save startups time & money?

4. What is containerization?

Containers are like faster-deploying VMs. They have all the advantages of an image or snapshot of a server. Why is this useful? Because you can containerize your microservices, so each one does one thing. One has a webserver, with a specific version of xyz.

Containers can also help with legacy applications, as you isolate older versions & dependencies that those applications still rely on.

Containers enable developers to setup environments quickly, and be more agile.

Also: 30 questions to ask a serverless fanboy

5. What is CloudFormation?

CloudFormation formalizes all of your cloud infrastructure into JSON files. Want to add an IAM user, S3 bucket, rds database, or EC2 server? Want to configure a VPC, subnet or access control list? All these things can be formalized into cloudformation files.

Once you’ve started down this road, you can check your infrastructure definitions into version control, and manage them just like you manage all your other code. Want to do unit tests? Have at it. Now you can test & deploy with more confidence.

Terraform covers the same ground as CloudFormation, with even more power built in.
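From the command line the workflow is short. A sketch, with a hypothetical template file & stack name:

    # validate the template before touching anything
    aws cloudformation validate-template --template-body file://stack.json

    # create or update the stack in one idempotent call
    aws cloudformation deploy \
      --template-file stack.json \
      --stack-name my-app-stack

    # inspect what was built
    aws cloudformation describe-stacks --stack-name my-app-stack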

Also: What can startups learn from the DYN DNS outage?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Key lessons from the Devops Handbook

I picked up a copy of the DevOps Handbook.

This is not a book about how to setup Amazon servers, how to use git, CodePipeline or Jenkins. It’s not about Chef or Ansible or other tools.

Join 33,000 others and follow Sean Hull on twitter @hullsean.

This is a book about processes & people. It’s about how & why automation & world-class infrastructure will make your business more agile, raise quality & increase productivity.

1. Infrastructure in version control

With technologies like Terraform and CloudFormation, the entire state of your infrastructure can be captured. That means you can manage it just like any other code.

Also: Myth of five nines – Why high availability is overrated

2. Pushbutton builds

You’ve heard it before. Automate your builds. That means putting everything in version control, from environment building scripts, to configs, artifacts & reference data. Once you can do that, you’re on your way to automating production deploys completely.

Related: 5 ways to move data to amazon redshift

3. Devs & Ops comingled

In the devops world, devs should learn about operations, infrastructure, performance & more. What’s more, operations teams should work closely with devs.

Read: Why were dev & ops siloed job roles?

4. Servers as cattle not pets

In the old days, we logged into servers & provided personal care & feeding. We treated them like pets.

In the new world of devops, we should treat servers like cattle. When it begins to fail, take it out back and shoot it. (tbh i don’t love the analogy, but it carries some meaning…)

Also: Are SQL databases dead?

5. Open to learnings & failures

Organizations that are open to failures, without playing the blame game, learn quicker & recover from problems faster.

Also: Is Amazon too big to fail?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

30 questions to ask a serverless fanboy

Everyone is hot under the collar again. So-called serverless or no-ops services are popping up everywhere, allowing you to deploy “just code” into the cloud. Not only won’t you have to log in to a server, you won’t even have to know they’re there.

Your code runs as it’s called by cloud events, such as a file upload or a hit on an http endpoint. Behind the scenes, through the magic of containers & autoscaling, Amazon & others are able to provision in milliseconds.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

Pretty cool. Yes even as it outsources the operations role to invisible teams behind Amazon Lambda, Google Cloud Functions or Webtask it’s also making companies more agile, and allowing startup innovation to happen even faster.

Believe it or not I’m a fan too.

That said I thought it would be fun to poke a hole in the bubble, and throw some criticisms at the technology. I mean going serverless today is still bleeding edge, and not everyone is cut out to be a pioneer!

With that, here’s 30 questions to throw at the serverless fanboys (and ladies!)…

1. Security

o Are you comfortable removing the barrier around your database?
o With more services, there is more surface area. How do you prevent malicious code?
o How do you know your vendor is doing security right?
o How transparent is your vendor about vulnerabilities?

Also: Myth of five nines – Why high availability is overrated

2. Testing

o How do you do integration testing with multiple vendor service components?
o How do you test your API Gateway configurations?
o Is there a way to version control changes to API Gateway configs?
o Can Terraform or CloudFormation help with this?
o How do you do load testing with a third party db backend?
o Are your QA tests hitting the prod backend db?
o Can you easily create & destroy test dbs?

Related: 5 ways to move data to amazon redshift

3. Management

o How do you do zero downtime deployments with Lambda?
o Is there a way to deploy functions in groups, all at once?
o How do you manage vendor lock-in at the monitoring & tools level but also code & services?
o How do you mitigate your vendor’s maintenance? Downtime? Upgrades?
o How do you plan for a move to an alternate vendor? Database import & export may not be ideal, plus code & infrastructure would need to be duplicated.
o How do you manage a third party service for authentication? What are the pros & cons there?
o What are the pros & cons of using a service-based backend database?
o How do you manage redundancy of code when every client needs to talk to backend db?

Read: Why were dev & ops siloed job roles?

4. Monitoring & debugging

o How do you build a third-party monitoring tool? Where are the APIs?
o When you’re down, is it your app or a system-wide problem?
o Where is the New Relic for Lambda?
o How do you degrade gracefully when using multiple vendors?
o How do you monitor execution duration so your function doesn’t fail unexpectedly?
o How do you monitor your account wide limits so dev deploy doesn’t take down production?

Also: Are SQL databases dead?

5. Performance

o How do you handle startup latency?
o How do you optimize code for mobile?
o Does battery life preclude a large codebase on client?
o How do you do caching on server when each invocation resets everything?
o How do you do database connection pooling?

Also: Is Amazon too big to fail?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

5 surprising features in Amazon’s Lambda serverless offering

Amazon is building out its serverless offering at a rapid clip. Lambda makes a great solution for a lot of different use cases including:

o a hybrid approach, building lambda functions for small pieces of your application, sitting alongside your full application, working in concert with it

o working with Kinesis Firehose to add ETL functionality into your pipeline. Extract, Transform & Load is a method of moving data out of relational or backend transactional databases, into a form better fit for reporting & analytics.

o retrofitting your API? Layer Lambda functions in front, to allow you to rebuild in a managed way.

o a natural way to build microservices, with each function as its own little universe

Join 32,000 others and follow Sean Hull on twitter @hullsean.

Great, tons of ways to put serverless to use. What’s Amazon doing to make it even better? Here are some of the features you’ll find indispensable in building with Lambda.

1. Versioned functions

As your serverless functions get more sophisticated, you’ll want to control & deploy different versions. Lambda supports this, allowing you to upload multiple copies of the same function. Coupled with Aliases below, this becomes a very powerful feature.

Also: When hosting data on Amazon turns bloodsport

2. Aliases

As you deploy multiple versions of your functions in AWS, you don’t want to recreate the API endpoints each time. That’s where aliases come in. Create one alias for dev, another for test, and a third for production. That way when new versions of those are deployed, all you have to do is change the alias & QA or customers will be hitting the new code. Cool!
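With the AWS CLI that flow looks roughly like this. The function name, zip file & version numbers are hypothetical:

    # upload new code & freeze it as an immutable version
    aws lambda update-function-code --function-name myfunc \
      --zip-file fileb://myfunc.zip
    aws lambda publish-version --function-name myfunc

    # create the prod alias once, pointing at version 1
    aws lambda create-alias --function-name myfunc \
      --name prod --function-version 1

    # later, repoint prod at a newly published version, say 7
    aws lambda update-alias --function-name myfunc \
      --name prod --function-version 7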

Related: Are you getting errors building lambda functions?

3. Caching & throttling

Using the API gateway, we can do some fancy footwork with Lambda. First we can enable caching to speed up access to our endpoint, controlling the time-to-live & capacity of the cache easily. We’ll also need to invalidate the cache when we make changes & redeploy our functions.

Throttling is another useful feature, allowing you to control the maximum number of times your function can be called per second on average (the rate) and maximum number of times (burst limit). These can be set at both the stage & method levels.

Read: Is Amazon too big to fail?

4. Stage variables

Creating multiple stages for dev, test & production means you can separate out environment variables with more granular control. For example suppose you have access & secret keys to reach S3. You can set stage variables for these to avoid committing any credentials or secrets in your code. Definitely don’t do that!

Allowing multiple copies of stage variables means you can set them separately for dev, test & production.

Also: How to deploy on Amazon EC2 with Vagrant?

5. Logging

You can enable logging in your Lambda function configuration. This will send error and/or info warning messages out to CloudWatch.

You may also choose to log all of the request & response data. This is controlled in the API Gateway settings for individual stages.

Also: Is Amazon RDS hard to manage?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

As cloud expands, does legacy grow too?

I was recently reading Drew Bell’s post Legacy systems are everywhere. It struck a deep chord for me.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

Drew first touches on a story of upgrading an application with legacy components, taking pieces offline, and rebuilding to eliminate technical debt.

He then tells a parallel story of renovations in his new home. Well new for him, but an old building, with old building problems.

I’ve gone through some similar experiences so I thought I’d share some of those.

o A publishing company on AWS

I worked with one company in publishing. They had built a complex automation pipeline to deploy code. As a lead engineer planned to exit, I was brought in to provide support during transition. As with large complex websites, there was a lot that was done right, and some things done in old ways. Documenting all the pieces and digging up the dead bodies was a big part of the job.

Also: Is a dangerous anti-ops movement gaining momentum?

o Renovating a kitchen

In parallel to the above project, I was renovating my kitchen, in a new home in Brooklyn. Taking on this project myself, I dutifully assembled IKEA cabinets, and laid them out to spec. As I began the painstaking process of leveling for the countertop, I ran into trouble. Measurement after measurement didn’t add up. It seemed one section was shorter than another, where the counter should go.

Since I needed to add support for a dishwasher, that had to be measured correctly. Yet the level tool told a different story than the yardstick. Finally after thinking about it for a few hours, I put the level on the floor itself. Turns out the floor wasn’t level! That explained why cabinets were shorter in one area than another.

Also: How do we lock down systems from disgruntled engineers?

o Legacy in 5-7 years?

Complex systems like software exhibit a lot of the same surprises as old buildings. That was one surprise I wasn’t expecting. As houses are renovated on the 15-30 year timeframe, software seems to experience a five to seven year cycle.

Whether it’s a consequence of shifting sands in the underlying stack, databases, frameworks or cloud components, or the changing needs of product & customers, the renovations never stop.

Also: Is AWS a patient that needs constant medication?

o Opportunity everywhere

As companies large & small migrate pieces of their systems to the cloud, move to microservices or rebuild on serverless, the opportunities are endless. It seems every firm is renovating their kitchen these days, putting on a new roof or upgrading their data pipeline.

Also: Is AWS too big to fail?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Does AWS have a dirty little secret?


I was recently talking with a colleague of mine about where AWS is today. Obviously companies are migrating to EC2 & the cloud rapidly. The growth rates are staggering.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

The question was…

“What’s good and bad with Amazon today?”

It’s an interesting question. I think there are some dirty little secrets here, but also some very surprising bright spots. This is my take.

1. VPC is not well understood  (FAIL)

This is the biggest one in my mind. Amazon’s security model is all new to traditional ops folks. Many customers I see deploy in “classic EC2”. Others deploy haphazardly in their own VPC, without a clear plan.

The best practice is to have one or more VPCs, with private & public subnets. Put databases in private, webservers in public. Then create a jump box in the public subnet, and funnel all ssh connections through there, allow any source IP, use users for authentication & auditing (only on this box), then use google-authenticator for 2factor at the command line. It also provides an easy way to decommission accounts, and lock out users who leave the company.
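Funneling connections that way is a single ssh flag. A sketch, with hypothetical host names & a private IP:

    # hop through the public jump box to a host in the private subnet
    ssh -J ec2-user@jump.example.com ec2-user@10.0.2.15

    # or persist it in ~/.ssh/config with a ProxyJump entry:
    #   Host private-db
    #       HostName 10.0.2.15
    #       User ec2-user
    #       ProxyJump jump.example.com
    ssh private-db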

However most customers have done little of this, or a mixture, but not all of it. So GETTING TO BEST PRACTICES around vpc would mean deploying a vpc as described, then moving each and every one of your boxes & services over there. Imagine the risk to production services. Imagine the chances of error, even if you’re using Chef or your own standardized AMIs.

Also: Are we fast approaching cloud-mageddon?

2. Feature fatigue (FAIL)

Another problem is a sort of “paradox of choice”. That is that Amazon is releasing so many new offerings so quickly, few engineers know it all. So you find a lot of shops implementing things wrong because they didn’t understand a feature. In other words AWS already solved the problem.

OpenRoad comes to mind. They’ve got media files on the filesystem, when S3 is plainly Amazon’s purpose-built service for this.

Is AWS too complex for small dev teams & startups?

Related: Does Amazon eat it’s own dogfood? Apparently yes!

3. Required redundancy & automation  (FAIL)

The model here is what Netflix has done with ChaosMonkey. They literally knock machines offline to test their setup. The problem is detected, and new hardware brought online automatically. Deploying across AZs is another example. As Amazon says, we give you the tools, it’s up to you to implement the resiliency.

But few firms do this. They’re deployed on Amazon as if it’s a traditional hosting platform. So they’re at risk in various ways. Of Amazon outages. Of hardware problems under the VMs. Of EBS network issues, of localized outages, etc.

Read: Is Amazon too big to fail?

4. Lambda  (WIN)

I went to the serverless conference a week ago. It was exciting to see what is happening. It is truly the *bleeding edge* of cloud. IBM & Azure & Google all have a serverless offering now.

The potential here is huge. Eliminating *ALL* of the server management headaches, from packages to config management & scaling, hiding all of that could have a huge upside. What’s more it takes the on-demand model even further. You have no compute running idle until you hit an endpoint. Cost savings could be huge. Wonder if it has the potential to cannibalize Amazon’s own EC2 … we’ll see.

Charity Majors wrote a very good critical piece – WTF is Operations? #serverless

Also: Is the difference between dev & ops a four-letter word?

5. Redshift  (WIN)

Seems like *everybody* is deploying a data warehouse on Redshift these days. It’s no wonder, because they already have their transactional database, their web backend on RDS of some kind. So it makes sense that Amazon would build an offering for reporting.

I’ve heard customers rave about reports that took 10 hours on MySQL running in under a minute on Redshift. It’s not surprising, because MySQL wasn’t built for the size of servers it’s being deployed on today. So it doesn’t make good use of all that memory. Even with SSD drives, query plans can execute badly.

Also: Is there a better way to build a warehouse in 2016?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Some thoughts on 12 factor apps


I was talking with a colleague recently about an upcoming project.

In the summary of technologies, he listed 12 factor, microservices, containers, orchestration, CI and nodejs. All familiar to everyone out there, right?

Join 28,000 others and follow Sean Hull on twitter @hullsean.

Actually it was the first I had heard of 12 factor, so I did a bit of reading.

1. How to treat your data resources

12 factor recommends that backing services be treated like attached resources. Databases are loosely coupled to the applications, making it easier to replace a badly behaving database, or connect multiple ones.
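In practice that usually means the app finds its database through the environment, so swapping databases is a config change, not a code change. A sketch, with a hypothetical connection string & launcher:

    # the database is an attached resource, named only in the environment
    export DATABASE_URL="postgres://app:secret@db.example.com:5432/prod"
    ./start_app.sh    # hypothetical launcher, reads DATABASE_URL

    # replace a badly behaving database without touching application code
    export DATABASE_URL="postgres://app:secret@db-standby.example.com:5432/prod"
    ./start_app.sh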

Also: Is the difference between dev & ops a four-letter word?

2. Stay loosely coupled

In 12 Fractured Apps Kelsey Hightower adds that this loose coupling can be taken a step further. Applications shouldn’t even assume the database is available. Why not fall back to some useful state, even when the database isn’t online. Great idea!

Related: Is Amazon too big to fail?

3. Degrade gracefully

A read-only or browse-only mode is another example of this. Allow your application to have multiple decoupled database resources, some that are read-only. The application behaves intelligently based on what’s available. I’ve advocated those before in Why Dropbox didn’t have to fail.

Read: When hosting data on Amazon turns bloodsport


The twelve-factor app appears to be an excellent guideline for building clean applications that are easier to deploy and manage. I’ll be delving into it more in future posts, so check back!

Read: Are we fast approaching cloud-mageddon?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters