Category Archives: All

How to build an operational datastore on AWS with S3 & Redshift


You’re building your data warehouse, and getting data into Redshift. You’ve got your ETL pipeline running, and presentation layer talking to the warehouse. Great.

But how to get access to that source data? Wouldn’t it be nice if that was close by too?

Join 35,000 others and follow Sean Hull on twitter @hullsean.

It may be you have 10-zillion rows of source data and don’t want or need to get all of that into Redshift and keep it there. But it would be nice to have access to it when you do.

Enter EXTERNAL tables, aka Spectrum. Now you can keep all your raw data in S3, as an in-place operational datastore, before it’s been reworked and transformed. Use SQL to access it right where it sits.

Get all the advantages of lifecycle management in S3, and don’t pay all the redshift costs for data you don’t need all the time. Cool!

Let’s see how it works.

What is an EXTERNAL table?

Spectrum is Amazon’s rebranding of an old database technology called EXTERNAL TABLES. Back in the 90’s Oracle pioneered this work, allowing you to essentially map a CSV file that sits outside the database proper. This means you can query all that juicy data sitting in flat files. Cool!

Athena allows you to query this stuff as a service, native to AWS. Spectrum allows you to create those external tables inside of Redshift.

Also: Top serverless interview questions for hiring aws lambda experts

Give Redshift permissions

Go into IAM and create a new role called “SeanSpectrumRole”. Assign the policy AmazonS3ReadOnlyAccess. It looks like this:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}

If you’re using the dashboard you just pick the policy from the named list. However if you’re using CloudFormation, you’ll use the code above.
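
If you prefer the command line, here’s a rough sketch of the same setup with the aws cli. The trust policy filename below is a placeholder; its body should simply allow the redshift.amazonaws.com service to assume the role.

# create the role with a trust policy allowing redshift.amazonaws.com to assume it
$ aws iam create-role --role-name SeanSpectrumRole \
--assume-role-policy-document file://redshift-trust.json
# attach the AWS managed read-only S3 policy
$ aws iam attach-role-policy --role-name SeanSpectrumRole \
--policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess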

Now navigate your aws console to the Redshift dashboard, click clusters, and click the checkbox for your cluster. Probably there’s only one.

Now click the “Manage IAM Roles” button, and a dialog should pop up. Select the role you created earlier, SeanSpectrumRole. Then click “Apply Changes”.

The beauty of the AWS world is that servers themselves can have API permissions. In this case we gave the Redshift cluster itself access to S3, for our use below!
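
If you’d rather script this step than click through the dashboard, something like this should do it. The cluster identifier & account id are placeholders for your own.

# attach the spectrum role to your existing cluster
$ aws redshift modify-cluster-iam-roles \
--cluster-identifier my-redshift-cluster \
--add-iam-roles arn:aws:iam::999999999999:role/SeanSpectrumRole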

Related: Which engineering roles are in greatest demand?

Create your spectrum schema

First you must create a spectrum schema. Here’s the syntax:


create external schema spectrum_schema
from data catalog
database 'sean'
region 'us-east-1'
iam_role 'arn:aws:iam::9999999999999:role/SeanSpectrumRole';

Read: Can on-demand consulting save startups time & money?

Upload your data to S3 bucket

Here we create an s3 bucket called sean_spectrum, then upload one csv file named sean_numbers.txt.


$ aws s3api create-bucket --bucket sean_spectrum --region us-east-1
{
"Location": "/sean_spectrum"
}
$ cd spectrum/
$ cat sean_numbers.txt
21,Dr.,Who,44-22-55-77-88
35,Bat,Man,317-222-4777
15,Wonder,Woman,999-324-7878
99,Storm,Cloud,367-399-6767
75,Marvel,Girl,222-333-9595
32,Quick,Silver,22-33-77-99
12,Scarlet,Witch,23-35-47-555
$ aws s3 cp sean_numbers.txt s3://sean_spectrum/
upload: ./sean_numbers.txt to s3://sean_spectrum/sean_numbers.txt
$ aws s3 ls s3://sean_spectrum/
2017-05-18 20:28:41 193 sean_numbers.txt
$

Note the names. The external table won’t map to the single file sean_numbers.txt. It maps to the whole s3://sean_spectrum/ location, and all files inside that directory will be queried. So make sure they have consistent formats!

Also: 30 questions to ask a serverless fanboy

Create & query your external table

Here’s how you create your external table. Note this is just a map to the data. The data is still stored in S3. It is not brought into Redshift except to slice, dice & present.


mydb=# create external table spectrum_schema.sean_numbers(
id int,
fname string,
lname string,
phone string)
row format delimited
fields terminated by ','
stored as textfile
location 's3://sean_spectrum/';

Here’s how you query it:


mydb=# select * from spectrum_schema.sean_numbers order by id;
id | fname | lname | phone
----+---------+---------+----------------
12 | Scarlet | Witch | 23-35-47-555
15 | Wonder | Woman | 999-324-7878
21 | Dr. | Who | 44-22-55-77-88
32 | Quick | Silver | 22-33-77-99
35 | Bat | Man | 317-222-4777
75 | Marvel | Girl | 222-333-9595
99 | Storm | Cloud | 367-399-6767

Cool. We reordered data read from an S3 file!!!

Although you can’t create a view over a redshift table *AND* an S3 external table, you can query them together.

So for example if I have a table in redshift with addresses, I can join them together:

mydb=# select a.id, a.fname, a.lname, a.phone, b.address
from spectrum_schema.sean_numbers a, sean_addresses b
where a.id = b.id order by id;

id | fname | lname | phone | address
----+---------+---------+----------------+--------------
12 | Scarlet | Witch | 23-35-47-555 | 10 main st
15 | Wonder | Woman | 999-324-7878 | 25 center st
21 | Dr. | Who | 44-22-55-77-88 | 32 broadway
32 | Quick | Silver | 22-33-77-99 | 1 first st
35 | Bat | Man | 317-222-4777 | 99 west st
75 | Marvel | Girl | 222-333-9595 | 66 East Ave
99 | Storm | Cloud | 367-399-6767 | 50 North st

Also: What can startups learn from the DYN DNS outage?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Will SQL just die already?

With tons of new NoSQL database offerings every day, developers & architects have a lot of options. Cassandra, MongoDB, CouchDB, DynamoDB & Firebase to name a few.

Join 33,000 others and follow Sean Hull on twitter @hullsean.

What’s more, in the data warehouse space you have Hadoop, which can churn through terabytes of data and get you results back before lunchtime!

So when I stumbled on this article SQL is 43 years old, I was intrigued.

Answer the questions you haven’t thought of

No-SQL databases are great if you know how you want to access the data. Users come from the users table, and that’s that!

But if later on you want to ask questions like, which users watched this video, which users are active, which users spent $100 in January? Those queries may not be possible, because NoSQL databases can’t join across those other tables.

Relational databases shine when you need to aggregate your data, reorganize it, or ask unanticipated questions. And aren’t those most of the interesting questions?

Also: Top serverless interview questions for hiring aws lambda experts

Big Query, Redshift & even Hive speak SQL

I wrote that despite Hadoop’s recent popularity, Redshift seems to be eating its lunch. And what would you know, surprise surprise, Amazon’s newish data warehousing solution speaks SQL! What’s more there’s Apache Hive, which allows you to query Hadoop with, drumroll please… SQL!

BigQuery is the other major big data offering, from none other than Google. And it too uses SQL!

Related: Which engineering roles are in greatest demand?

Still dominant

If you look at Stackoverflow’s developer survey, you’ll see that SQL is the second most popular language. Why might that be? For one thing it’s simple to learn. Enough that even business users can write simple requests, join & aggregate data.

Read: Can on-demand consulting save startups time & money?

Rugged, Proven & Open

SQL, having been around so long, is a fairly open standard. Sure there are extensions of it, but most of the basic stuff is there in all the products. That means you learn it once, and can interact with databases across the spectrum. That’s a win for everybody.

Also: 30 questions to ask a serverless fanboy

Business users can write it

Another underappreciated feature is that basic queries are easy to write. They don’t require complex syntax like a Hadoop job, or your favorite imperative programming language. The queries are readable, almost English-like sentences.

Given all that, it seems SQL is likely to be around for a long time to come!

Also: What can startups learn from the DYN DNS outage?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Is Amazon about to disrupt your data warehouse?


Amazon is about to launch a product called Glue. As you can see below, this is the last piece in the data warehousing puzzle. With that in place, Amazon will own you! Or at least have push-button products to meet all of the enterprise’s varying needs.

Even if you’re a small startup, you can do big-shot big enterprise data warehousing. That means everyone can use cutting edge data driven techniques for product & business decisions.

Join 33,000 others and follow Sean Hull on twitter @hullsean.

What is Redshift

Redshift is like the OLAP databases of years past, the Oracles of the world purpose-built for warehousing data. Obviously without the crazy licensing model Oracle was famous for. With Amazon you can get an enterprise-class data warehouse for modest hourly prices.

If my recent conversations with recruiters about Redshift demand are any indication, there’s been a sudden uptick in startups looking for redshift expertise.

Also: Top serverless interview questions for hiring aws lambda experts

What is Spectrum?

Spectrum is a very new extension of Redshift allowing you to access & query S3 file data directly. This means you can have petabytes of data accessible before you ever load it. So you will still ETL and load portions of it, but with Spectrum you can access the offline data too.

In the old Oracle days this was called an EXTERNAL TABLE. I mention this only to say that Amazon isn’t doing anything that hasn’t been done before. Rather they’re bringing these advanced features within reach of everyday startups. That’s cool.

Related: Which engineering roles are in greatest demand?

What is glue?

Glue is still in beta, but if the RE:Invent talk above is any indication, it’s set to disrupt an entire industry. Wow!

Glue first catalogs your data sources. What does this mean? It scans them & models their schemas.

It then generates sample Python ETL code. Modify it, or write your own. Share your code on Git. Or borrow other open source pieces that already address your specific ETL use case!

Lastly it includes a job scheduler which handles dependencies. Job A must be completed before B can run and so forth. Error handling & logging are also all included.

Since these are native Amazon services, of course they’re going to integrate with their dangerously fast Redshift warehouse.

Read: Can on-demand consulting save startups time & money?

What is serverless?

I’ve written about how to throw fastballs at a serverless fanboy and even how to hire a serverless expert. But really what is it?

Serverless means deploying functions directly into the cloud. No servers, no configuration. All the systems administration & automation is hidden. No more devops to argue with! Amazon’s own offering is called Lambda.

Also: 30 questions to ask a serverless fanboy

What is Quicksight?

Amazon’s even jumped into the fray at the presentation layer. Quicksight is a BI tool along the lines of Mode, Domo, Looker or Tableau.

Now it’s possible to stay completely within the cozy Amazon ecosystem even for business insight and analytics.

Also: What can startups learn from the DYN DNS outage?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Top questions to ask a devops expert when hiring or preparing for job & interview

Strip by Randall Munroe; xkcd.com

Whether you’re a hiring manager, head of HR or recruiter, you are probably looking for a devops expert. These days good ones are not easy to find. The spectrum of tools & technologies is broad. To manage today’s cloud you need a generalist.

Join 33,000 others and follow Sean Hull on twitter @hullsean.

If you’re a devops expert and looking for a job, these are also some essential questions you should have in your pocket. Be able to elaborate on these high level concepts as they’re crucial in today’s agile startups.

Check out: 8 questions to ask an aws ec2 expert

Also new: Top questions to ask on a devops expert interview

And: How to hire a developer that doesn’t suck

1. How do you automate deployments?

A. Get your code in version control (git)

Believe it or not, there are small one-person teams that haven’t done this. But even for those, there’s real benefit. Get on it!

B. Evolve to one script push-button deploy (script)

If deploying new code involves a lot of manual steps, move a file here, set a config there, set a variable, set up an S3 bucket, etc., then start scripting. That midnight deploy process should be one master script which includes all the logic.

It’s a process to get there, but keep the goal in sight.
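
To make the idea concrete, here’s a minimal sketch of what that master script might look like. The hosts, paths & helper scripts are placeholders for whatever your stack actually uses.

#!/bin/bash
# deploy.sh -- one push-button deploy (sketch only)
set -e                                  # stop at the first failure
git pull origin master                  # grab the latest code
./run_tests.sh                          # don't deploy if tests fail
aws s3 cp config/app.conf s3://my-app-config/   # push config artifacts
ssh deploy@web01 'cd /var/www/app && git pull && sudo service app restart'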

C. Build confidence over many iterations (team process & agile)

As you continue to deploy manually with a master script, you’ll iron out more details, contingencies, and problems. Over time you’ll gain confidence that the script does the job.

D. Employ continuous integration Tools to formalize process (CircleCI, Jenkins)

Now that you’ve formalized your deploy in code, putting these CI tools to use becomes easier, because they’ll be driving scripts you’ve already custom built at this stage!

E. 10 deploys per day (long term goal)

Your longer term goal is 10 deploys a day. After you’ve automated tests, team confidence will grow around developers being able to deploy to production. On smaller teams of 1-5 people this may still be only 10 deploys per week, but still a useful benchmark.

Also: Top serverless interview questions for hiring aws lambda experts

2. What are microservices?

Microservices are about two-pizza teams. Small enough that there’s little bureaucracy. Able to be agile, focused on one business function. Iterate quickly without logjams with other business teams & functions.

Microservices interact with each other through APIs, deploy their own components, and use their own isolated data stores.

Function as a service, Amazon Lambda, or serverless computing enables microservices in a huge way.

Related: Which engineering roles are in greatest demand?

3. What is serverless computing?

Serverless computing is a model where servers & infrastructure do not need to be formalized. Only the code is deployed, and the platform, AWS Lambda for example, takes care of instant provisioning of containers & VMs when the code gets called.

Events within the cloud environment, such as a file added to an S3 bucket, trigger the serverless functions. API Gateway endpoints can also trigger the functions to run.

Authentication services are used for user login & identity management such as Auth0 or Amazon Cognito. The backend data store could be Dynamodb or Google’s Firebase for example.
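
To make the “only the code is deployed” part concrete, here’s roughly what pushing a single function looks like with the aws cli. The function name, role arn & handler are placeholders, and the zipfile is just your code plus dependencies.

# create a lambda function from a local zipfile of code
$ aws lambda create-function \
--function-name hello-world \
--runtime python3.6 \
--role arn:aws:iam::999999999999:role/my-lambda-role \
--handler handler.lambda_handler \
--zip-file fileb://function.zip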

Read: Can on-demand consulting save startups time & money?

4. What is containerization?

Containers are like faster deploying VMs. They have all the advantages of an image or snapshot of a server. Why is this useful? Because you can containerize your microservices, so each one does one thing. One has a webserver, with a specific version of xyz.

Containers can also help with legacy applications, as you isolate older versions & dependencies that those applications still rely on.

Containers enable developers to set up environments quickly, and be more agile.
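
For a feel of the workflow, here’s a bare-bones sketch, assuming a Dockerfile already describes the webserver image you want.

# build an image for one microservice & run it locally
$ docker build -t my-microservice .
$ docker run -d -p 8080:80 my-microservice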

Also: 30 questions to ask a serverless fanboy

5. What is CloudFormation?

CloudFormation formalizes all of your cloud infrastructure into JSON files. Want to add an IAM user, S3 bucket, RDS database, or EC2 server? Want to configure a VPC, subnet or access control list? All these things can be formalized into CloudFormation files.

Once you’ve started down this road, you can check your infrastructure definitions into version control, and manage them just like you manage all your other code. Want to do unit tests? Have at it. Now you can test & deploy with more confidence.
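
Once the definitions are in version control, driving them from the command line is straightforward. A rough sketch, with the template filename & stack name as placeholders:

# sanity-check the template, then build the stack from it
$ aws cloudformation validate-template --template-body file://infra.json
$ aws cloudformation create-stack --stack-name my-stack \
--template-body file://infra.json \
--capabilities CAPABILITY_IAM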

Terraform is a third-party alternative to CloudFormation, with even more power built in.

Also: What can startups learn from the DYN DNS outage?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Top Amazon Lambda questions for hiring a serverless expert


If you’re looking to fill a job role that says microservices, or find an expert that knows all about serverless computing, you’ll want to have a battery of questions to ask them.

Join 33,000 others and follow Sean Hull on twitter @hullsean.

For technical interviews, I like to focus on concepts & the big picture. Which rules out coding exercises or other puzzles, which I think are distracting from the process. I really like what the guys at 37 Signals say:

“Hire for attitude. Train for skill.”

So let’s get started.

1. How do you automate deployment?

Programming lambda functions is much like programming in other areas, with some particular challenges. When you first dive in, you’ll use the Amazon dashboard to upload a zipfile with your code. But as you become more proficient, you’ll want to create a deployment pipeline.

o What features in Amazon facilitate automatic deployments?

AWS Lambda supports environment variables. Use these for credentials & other data you don’t want in your deployment package.

Amazon’s serverless offering also supports aliases. You can have a dev, stage & production alias. That way you can deploy functions for testing, without interrupting production code. What’s more, when you are ready to push to production, the endpoint doesn’t change.
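
Here’s a rough sketch of that flow with the aws cli. The function name & version numbers are placeholders.

# upload new code, then freeze it as a numbered version
$ aws lambda update-function-code --function-name my-func --zip-file fileb://my-func.zip
$ aws lambda publish-version --function-name my-func
# point a production alias at whichever version should be live
$ aws lambda create-alias --function-name my-func --name production --function-version 2
$ aws lambda update-alias --function-name my-func --name production --function-version 3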

o What frameworks are available for serverless?

Serverless Framework is the most full featured option. It fully supports Amazon Lambda & as of 1.0 provides support for other platforms such as IBM OpenWhisk, Google Cloud Functions & Azure Functions. There is also something called SAM or Serverless Application Model which extends CloudFormation. With this, you can script changes to API Gateway, DynamoDB & Cognito authentication stuff.
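
With the Serverless Framework, a minimal project & deploy looks roughly like this. The service path & stage name are placeholders.

# install the framework, scaffold a python service, deploy it to a dev stage
$ npm install -g serverless
$ serverless create --template aws-python --path my-service
$ cd my-service
$ serverless deploy --stage dev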

If you’re using Auth0 instead of Cognito or Firebase instead of Dynamodb, you’ll have to come up with your own way to automate changes there.

Also: Is the difference between dev & ops a four-letter word?

2. What are the pros of serverless?

Why are we moving to a serverless computing model? What are the advantages & benefits of it?

o easier operations means faster time to market
o large application components become managed
o reduced costs, only pay while code is running
o faster deploy means more experimentation, more agile
o no more worry about which servers will this code run on?
o reduced people costs & less infrastructure
o no chef playbooks to manage, no deploy keys or IAM roles

Related: Is automation killing old-school operations?

3. What are the cons of serverless?

There are a lot of fanboys of serverless, because of the promise & hope of this new paradigm. But what about healthy criticism? A little dose of reality can identify a critical & active mind.

o With Lambda you have less vendor control which could mean… more downtime, system limits, sudden cost changes, loss of functionality or features and possible forced API upgrades. Remember that Amazon will choose the needs of the many over your specific application idiosyncrasies.

o There’s no dedicated hardware option with serverless. So you have the multi-tenant security & performance challenges of other customers’ code. You may even bump into problems because of other customers’ errors!

o Vendor lock-in is a real obvious issue. Changing to Google Cloud Functions or Azure Functions would mean new deployment & monitoring tools, a code rewrite & rearchitect, and new infrastructure too. You would also have to export & import your data. How easy does Amazon make this process?

o You can no longer store application & state data in local server memory. Because each instantiation of a function will effectively be a new “server”. So everything must be stored in the database. This may affect performance.

o Testing is more complicated. With multiple vendors, integration testing becomes more crucial. Also how do you create a dev db instance? How do you fully test offline on a laptop?

o You could hit system wide limits. For example a big dev deploy could take out production functions by hitting an AWS account limit. You would thus have DDoS’d yourself! You can also hit the 5 minute execution time limit, and your code will get aborted!

o How do you do zero downtime deployments? Since Amazon currently deploys function-by-function, if you have a group of 10 or 20 that act as a unit, they will get deployed in pieces. So your app would need to be taken offline during that period, or it would be executing some functions from the old version & some from the new version together. With unpredictable results.

Read: Do managers underestimate operational cost?

4. How does security change?

o In serverless you may use multiple vendors, such as Auth0 for authentication, and perhaps Firebase for your data. With Lambda as your serverless platform you now have three vendors to work with. More vendors means a larger area across which hackers may attack your application.

o With the function as a service application model, you lose the protective wall around your database. It is no longer safely deployed & hidden behind a private subnet. Is this sufficient protection of your key data assets?

Also: Is the difference between dev & ops a four-letter word?

5. How do you troubleshoot & debug microservices?

o Monitoring & debugging is still very limited. This becomes a more complex process in the serverless world. You can log error & warning messages to CloudWatch.

o Currently Lambda doesn’t have any open API for third party tooling. This will probably come with time, but again it’s hard to see & examine a serverless function “server” while it is running.

o For example there is no New Relic for serverless.

o Performance tuning may be a bit of a guessing game in the serverless space right now. Amazon will surely be expanding its offering, and this is one area that will need attention.

Also: Is the difference between dev & ops a four-letter word?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

What engineering roles are most in demand at startups?


I was just reading over StackOverflow’s 2017 Developer survey. As it turns out there were some surprising findings.

Join 33,000 others and follow Sean Hull on twitter @hullsean.

One that stood out was databases. In the media, one hears more and more about NoSQL databases like Cassandra, Dynamo & Firebase. Despite all that MySQL seems to remain the most popular database by a large margin. Legacy indeed!

1. Databases

MySQL is still the most popular db by a large margin at 56%. Followed by SQL Server at 39%, SQLite at 27% and Postgres at 27%.

Related: Is Amazon too big to fail?

2. Most popular language

Javascript sits at number one for Web developers, sysadmins & Data Scientists alike. Followed by SQL.

Read: Are SQL Databases dead?

3. Most popular framework

Node.js at 47%. It’s followed by AngularJS at 44%.

Also: 5 ways to move data to Amazon Redshift

4. Most loved database

Redis sits at number one here at 65%, followed by Postgres & Mongo.

Also: Myth of five nines – why HA is overrated

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Key lessons from the Devops Handbook

I picked up a copy of the DevOps Handbook.

This is not a book about how to set up Amazon servers, how to use git, CodePipeline or Jenkins. It’s not about Chef or Ansible or other tools.

Join 33,000 others and follow Sean Hull on twitter @hullsean.

This is a book about processes & people. It’s about how & why automation & world-class infrastructure will make your business more agile, raise quality & increase productivity.

1. Infrastructure in version control

With technologies like Terraform and CloudFormation, the entire state of your infrastructure can be captured. That means you can manage it just like any other code.

Also: Myth of five nines – Why high availability is overrated

2. Pushbutton builds

You’ve heard it before. Automate your builds. That means putting everything in version control, from environment building scripts, to configs, artifacts & reference data. Once you can do that, you’re on your way to automating production deploys completely.

Related: 5 ways to move data to amazon redshift

3. Devs & Ops comingled

In the devops world, devs should learn about operations, infrastructure, performance & more. What’s more, operations teams should work closely with devs.

Read: Why were dev & ops siloed job roles?

4. Servers as cattle not pets

In the old days, we logged into servers & provided personal care & feeding. We treated them like pets.

In the new world of devops, we should treat servers like cattle. When one begins to fail, take it out back and shoot it. (tbh I don’t love the analogy, but it carries some meaning…)

Also: Are SQL databases dead?

5. Open to learnings & failures

Organizations that are open to failures, without playing the blame game, learn quicker & recover from problems faster.

Also: Is Amazon too big to fail?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

30 questions to ask a serverless fanboy

Everyone is hot under the collar again. So-called serverless or no-ops services are popping up everywhere, allowing you to deploy “just code” into the cloud. Not only won’t you have to log in to a server, you won’t even have to know they’re there.

As cloud events occur, such as a file upload or a hit on an http endpoint, your code runs. Behind the scenes, through the magic of containers & autoscaling, Amazon & others are able to provision in milliseconds.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

Pretty cool. Yes, even as it outsources the operations role to invisible teams behind Amazon Lambda, Google Cloud Functions or Webtask, it’s also making companies more agile, and allowing startup innovation to happen even faster.

Believe it or not I’m a fan too.

That said I thought it would be fun to poke a hole in the bubble, and throw some criticisms at the technology. I mean going serverless today is still bleeding edge, and everyone isn’t cut out to be a pioneer!

With that, here’s 30 questions to throw at the serverless fanboys (and ladies!)…

1. Security

o Are you comfortable removing the barrier around your database?
o With more services, there is more surface area. How do you prevent malicious code?
o How do you know your vendor is doing security right?
o How transparent is your vendor about vulnerabilities?

Also: Myth of five nines – Why high availability is overrated

2. Testing

o How do you do integration testing with multiple vendor service components?
o How do you test your API Gateway configurations?
o Is there a way to version control changes to API Gateway configs?
o Can Terraform or CloudFormation help with this?
o How do you do load testing with a third party db backend?
o Are your QA tests hitting the prod backend db?
o Can you easily create & destroy test dbs?

Related: 5 ways to move data to amazon redshift

3. Management

o How do you do zero downtime deployments with Lambda?
o Is there a way to deploy functions in groups, all at once?
o How do you manage vendor lock-in at the monitoring & tools level but also code & services?
o How do you mitigate your vendor’s maintenance? Downtime? Upgrades?
o How do you plan for a move to an alternate vendor? Database import & export may not be ideal, plus code & infrastructure would need to be duplicated.
o How do you manage a third party service for authentication? What are the pros & cons there?
o What are the pros & cons of using a service-based backend database?
o How do you manage redundancy of code when every client needs to talk to backend db?

Read: Why were dev & ops siloed job roles?

4. Monitoring & debugging

o How do you build a third-party monitoring tool? Where are the APIs?
o When you’re down, is it your app or a system-wide problem?
o Where is the New Relic for Lambda?
o How do you degrade gracefully when using multiple vendors?
o How do you monitor execution duration so your function doesn’t fail unexpectedly?
o How do you monitor your account wide limits so dev deploy doesn’t take down production?

Also: Are SQL databases dead?

5. Performance

o How do you handle startup latency?
o How do you optimize code for mobile?
o Does battery life preclude a large codebase on client?
o How do you do caching on server when each invocation resets everything?
o How do you do database connection pooling?

Also: Is Amazon too big to fail?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Some irresistible reading for March – outages, code, databases, legacy & hiring


I decided this week to write a different type of blog post. Because some of my favorite newsletters are lists of articles on topics of the day.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

Here’s what I’m reading right now.

1. On Outages

While everyone is scrambling to figure out why part of the internet went down … wait, is S3 part of the internet, really? While I’m figuring out if it is a service of Amazon, or if Amazon is so big that Amazon *is* the internet now…

Let’s look at s3 architectural flaws in depth.

Meanwhile Gitlab had an outage too in which they *gasp* lost data. Seriously? An outage is one thing, losing data though. Hmmm…

And this article is brilliant on so many levels. Not least because Matthew knows that “post truth” is a trending topic now, and uses it in his title. So here we go, AWS Service status truth in a post truth world. Wow!

And meanwhile the Atlantic tries to track down where exactly are those Amazon datacenters?

Also: Is Amazon too big to fail?

2. On Code

Project wise I’m fiddling around with a few fun things.

Take a look at Jeff Geerling’s Ansible on a Mac playbooks. Nice!

And meanwhile a very nice deep dive on Amazon Lambda serverless best practices.

Brandur Leach explains how to build awesome APIs aka ones that are robust & idempotent

Meanwhile Frans Rosen explains how to 0wn slack. And no you don’t want this. 🙂

Related: 5 surprising features in Amazon’s serverless Lambda offering

3. On Hiring & Talent

Are you a rock star dev or a digital nomad? Take a look at the 12 best international cities to live in for software devs.

And if you’re wondering who’s hiring? Well just about everyone!

Devs are you blogging? You should be.

Looking to learn or teach… check out codementor.

Also: why did dev & ops used to be separate job roles?

4. On Legacy Systems

I loved Drew Bell’s story of stumbling into home ownership, attempting to fix a doorbell, and falling down a familiar rabbit hole. With parallels to legacy software systems… aka anything older than, oh say, five years?

Ian Bogost ruminates why nothing works anymore… and I don’t think an hour goes by where I don’t ask myself the same question!

Also: Are we fast approaching cloud-mageddon?

5. On Databases

If you grew up on the virtual world of the cloud, you may have never touched hardware besides your own laptop. Developing in this world may completely remove us from understanding those pesky underlying physical layers. Yes indeed folks containers do run in “virtual” machines, but those themselves are running on metal, somewhere down the stack.

With that let’s not forget that No, databases are not for containers… but a healthy reminder ain’t bad..

Meanwhile Larry’s mothership is sinking…(hint: Oracle) Does anybody really care? Now’s the time to revisit Mike Wilson’s classic The difference between god and Larry Ellison.

Read: Are SQL Databases Dead?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

5 surprising features in Amazon’s Lambda serverless offering

Amazon is building out its serverless offering at a rapid clip. Lambda makes a great solution for a lot of different use cases including:

o a hybrid approach, building lambda functions for small pieces of your application, sitting along side your full application, working in concert with it

o working with Kinesis Firehose to add ETL functionality into your pipeline. Extract, Transform & Load is a method of transforming data from relational or backend transactional databases into a form better fit for reporting & analytics.

o retrofitting your API? Layer Lambda functions in front, to allow you to rebuild in a managed way.

o a natural way to build microservices, with each function as its own little universe

Join 32,000 others and follow Sean Hull on twitter @hullsean.

Great, tons of ways to put serverless to use. What’s Amazon doing to make it even better? Here are some of the features you’ll find indispensable in building with Lambda.

1. Versioned functions

As your serverless functions get more sophisticated, you’ll want to control & deploy different versions. Lambda supports this, allowing you to upload multiple copies of the same function. Coupled with Aliases below, this becomes a very powerful feature.

Also: When hosting data on Amazon turns bloodsport

2. Aliases

As you deploy multiple versions of your functions in AWS, you don’t want to recreate the API endpoints each time. That’s where aliases come in. Create one alias for dev, another for test, and a third for production. That way when new versions of those are deployed, all you have to do is change the alias & QA or customers will be hitting the new code. Cool!
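
For example, cutting QA or production over to a newly published version is a one-liner. The function name & version number here are placeholders.

# repoint the production alias at version 4 of the function
$ aws lambda update-alias --function-name my-func --name production --function-version 4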

Related: Are you getting errors building lambda functions?

3. Caching & throttling

Using the API Gateway, we can do some fancy footwork with Lambda. First we can enable caching to speed up access to our endpoint. Control the time-to-live & capacity of the cache easily. We’ll also need to invalidate the cache when we make changes & redeploy our functions.

Throttling is another useful feature, allowing you to control the maximum number of times your function can be called per second on average (the rate) and maximum number of times (burst limit). These can be set at both the stage & method levels.
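
Here’s roughly how those switches look from the aws cli. The api id & stage name are placeholders, and the patch paths may need adjusting for your own setup.

# turn on a 0.5GB response cache for the prod stage
$ aws apigateway update-stage --rest-api-id a1b2c3d4e5 --stage-name prod \
--patch-operations op=replace,path=/cacheClusterEnabled,value=true op=replace,path=/cacheClusterSize,value=0.5
# throttle all methods in the stage to 100 req/sec with bursts of 200
$ aws apigateway update-stage --rest-api-id a1b2c3d4e5 --stage-name prod \
--patch-operations op=replace,path=/*/*/throttling/rateLimit,value=100 op=replace,path=/*/*/throttling/burstLimit,value=200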

Read: Is Amazon too big to fail?

4. Stage variables

Creating multiple stages, for dev, test & production means you can separate out and control environment variables with more granular control. For example suppose you have access & secret keys to reach S3. You can set environment variables for these to avoid committing any credentials or secrets in your code. Definitely don’t do that!

Allowing multiple copies of stage variables means you can set them separately for dev, test & production.
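
For example, setting a per-stage variable with the aws cli might look like this. The api id, stage name & variable are placeholders.

# set an s3Bucket stage variable on the dev stage
$ aws apigateway update-stage --rest-api-id a1b2c3d4e5 --stage-name dev \
--patch-operations op=replace,path=/variables/s3Bucket,value=my-dev-bucket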

Also: How to deploy on Amazon EC2 with Vagrant?

5. Logging

You can enable logging in your Lambda function configuration. This will send error and/or info warning messages out to CloudWatch.

You may also choose to log all of the request & response data. This is controlled in the API Gateway settings for individual stages.
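
Once logging is on, the messages land in CloudWatch Logs, where you can filter them from the cli. A quick sketch, with the function name as a placeholder:

# pull recent ERROR lines from the function's log group
$ aws logs filter-log-events \
--log-group-name /aws/lambda/my-function \
--filter-pattern ERROR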

Also: Is Amazon RDS hard to manage?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters