5 surprising features in Amazon’s Lambda serverless offering

Amazon is building out its serverless offering at a rapid clip. Lambda makes a great solution for a lot of different use cases, including:

o a hybrid approach, building lambda functions for small pieces of your application, sitting alongside your full application & working in concert with it

o working with Kinesis Firehose to add ETL functionality to your pipeline. Extract, Transform & Load is a method of moving data from a relational or backend transactional database into one better fit for reporting & analytics.

o retrofitting your API? Layer Lambda functions in front, allowing you to rebuild it in a managed way.

o a natural way to build microservices, with each function as its own little universe

Join 32,000 others and follow Sean Hull on twitter @hullsean.

Great, tons of ways to put serverless to use. What’s Amazon doing to make it even better? Here are some of the features you’ll find indispensable in building with Lambda.

1. Versioned functions

As your serverless functions get more sophisticated, you’ll want to control & deploy different versions. Lambda supports this, allowing you to upload multiple copies of the same function. Coupled with Aliases below, this becomes a very powerful feature.
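For example, with the aws cli you might publish & list versions like this (the function name is just a placeholder):

$ aws lambda publish-version --function-name my-function --description "version with new parser"
$ aws lambda list-versions-by-function --function-name my-function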

Also: When hosting data on Amazon turns bloodsport

2. Aliases

As you deploy multiple versions of your functions in AWS, you don’t want to recreate the API endpoints each time. That’s where aliases come in. Create one alias for dev, another for test, and a third for production. That way when new versions of those are deployed, all you have to do is change the alias & QA or customers will be hitting the new code. Cool!
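Here’s a rough sketch with the aws cli. Function name & version numbers are placeholders:

$ aws lambda create-alias --function-name my-function --name prod --function-version 3

# later, point production at a newer version without touching any endpoints
$ aws lambda update-alias --function-name my-function --name prod --function-version 4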

Related: Are you getting errors building lambda functions?

3. Caching & throttling

Using the API Gateway, we can do some fancy footwork with Lambda. First we can enable caching to speed up access to our endpoint. Control the time-to-live & capacity of the cache easily. We’ll also need to invalidate the cache when we make changes & redeploy our functions.

Throttling is another useful feature, allowing you to control the number of times your function can be called per second on average (the rate limit) and in short bursts (the burst limit). These can be set at both the stage & method levels.
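Both knobs live in the API Gateway stage settings. A sketch with the aws cli might look like this (api id, stage & numbers are placeholders):

$ aws apigateway update-stage --rest-api-id a1b2c3d4 --stage-name prod \
    --patch-operations op=replace,path=/cacheClusterEnabled,value=true \
                       op=replace,path=/cacheClusterSize,value=0.5 \
                       op=replace,path='/*/*/caching/ttlInSeconds',value=300 \
                       op=replace,path='/*/*/throttling/rateLimit',value=100 \
                       op=replace,path='/*/*/throttling/burstLimit',value=200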

Read: Is Amazon too big to fail?

4. Stage variables

Creating multiple stages for dev, test & production means you can separate out and control variables with more granular control. For example, suppose you have access & secret keys to reach S3. You can set stage variables for these to avoid committing any credentials or secrets in your code. Definitely don’t do that!

Allowing multiple copies of stage variables means you can set them separately for dev, test & production.
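A sketch with the aws cli, pointing each stage at its own bucket (api id, variable & bucket names are made up):

$ aws apigateway update-stage --rest-api-id a1b2c3d4 --stage-name dev \
    --patch-operations op=replace,path=/variables/uploadBucket,value=myapp-dev-uploads
$ aws apigateway update-stage --rest-api-id a1b2c3d4 --stage-name prod \
    --patch-operations op=replace,path=/variables/uploadBucket,value=myapp-prod-uploads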

Also: How to deploy on Amazon EC2 with Vagrant?

5. Logging

You can enable logging in your Lambda function configuration. This will send error and/or info messages out to CloudWatch.

You may also choose to log all of the request & response data. This is controlled in the API Gateway settings for individual stages.
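Assuming you’ve already pointed API Gateway at a CloudWatch logging role in your account settings, a sketch like this turns on execution logging & full request/response tracing for a stage:

$ aws apigateway update-stage --rest-api-id a1b2c3d4 --stage-name prod \
    --patch-operations op=replace,path='/*/*/logging/loglevel',value=INFO \
                       op=replace,path='/*/*/logging/dataTrace',value=true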

Also: Is Amazon RDS hard to manage?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

As cloud expands, does legacy grow too?

I was recently reading Drew Bell’s post Legacy systems are everywhere. It struck a deep chord for me.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

Drew first touches on a story of upgrading an application with legacy components, taking pieces offline, and rebuilding to eliminate technical debt.

He then tells a parallel story of renovations in his new home. Well new for him, but an old building, with old building problems.

I’ve gone through some similar experiences so I thought I’d share some of those.

o A publishing company on AWS

I worked with one company in publishing. They had built a complex automation pipeline to deploy code. As a lead engineer planned to exit, I was brought in to provide support during transition. As with large complex websites, there was a lot that was done right, and some things done in old ways. Documenting all the pieces and digging up the dead bodies was a big part of the job.

Also: Is a dangerous anti-ops movement gaining momentum?

o Renovating a kitchen

In parallel to the above project, I was renovating my kitchen, in a new home in Brooklyn. Taking on this project myself, I dutifully assembled IKEA cabinets, and laid them out to spec. As I began the painstaking process of leveling for the countertop, I ran into trouble. Measurement after measurement didn’t add up. It seemed one section was shorter than another, where the counter should go.

Since I needed to add support for a dishwasher, that had to be measured correctly. Yet the level tool told a different story than the yardstick. Finally after thinking about it for a few hours, I put the level on the floor itself. Turns out the floor wasn’t level! That explained why cabinets were shorter in one area than another.

Also: How do we lock down systems from disgruntled engineers?

o Legacy in 5-7 years?

Complex systems like software exhibit a lot of the same surprises as old buildings. That was one surprise I wasn’t expecting. As houses are renovated on a 15-30 year timeframe, software seems to experience a five to seven year cycle.

Whether it’s a consequence of shifting sands in the underlying stack, databases, frameworks or cloud components, or the changing needs of product & customers, the rebuild seems to come due on that schedule.

Also: Is AWS a patient that needs constant medication?

o Opportunity everywhere

As companies large & small migrate pieces of their systems to the cloud, move to microservices or rebuild on serverless, the opportunities are endless. It seems every firm is renovating their kitchen these days, putting on a new roof or upgrading their data pipeline.

Also: Is AWS too big to fail?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

What products & improvements are new on AWS?

Amazon is releasing new products & services to its global cloud compute network at a rate that has all of our heads spinning.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

Here’s new stuff worth mentioning around databases & data.

1. For ETL – AWS GLUE

Moving data from your transactional MySQL or Aurora database to your reporting database isn’t always easy.

In the past you could use a service like xplenty or Alooma.

Now Amazon themselves are getting into the ETL game, providing a new service called Glue.

Also: RDS or Mysql? 10 use cases

2. Query S3 with Athena

Chances are if you’re using AWS for anything, you’ve got data in S3. And wouldn’t it be nice to pick that apart and dig through it, where it sits?

Oracle had a feature called “external tables” and MySQL had something similar. Now Amazon is offering that natively within its own cloud universe. Thanks to some tricky lambda code, now you can do that. Don’t worry how they did it, because it’s been packaged into a nice easy service for your use!
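A sketch of what a query looks like from the aws cli. The database, table & results bucket are placeholders you’d set up in Athena first:

$ aws athena start-query-execution \
    --query-string "SELECT status_code, count(*) FROM weblogs GROUP BY status_code" \
    --query-execution-context Database=mylogs \
    --result-configuration OutputLocation=s3://my-athena-results/
$ aws athena get-query-results --query-execution-id <id-returned-above>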

Related: When you have to take the fall – consulting war stories

3. Business Intelligence with QuickSight

If you’re a data driven startup, and who isn’t these days, you’re going to have a business unit building reports. Tableau or Looker may be in your wheelhouse.

Amazon is obviously seeing the opportunity here, and competing with their own partners. Check out Amazon Quicksight for details.

Read: Is upgrading RDS like a sh*t storm that will not end?

4. Expanded RDS

RDS is obviously a very popular offering. And even though zero downtime is very hard to achieve with RDS, you’ll save plenty on DBAs and admins you don’t have to hire!

If you hadn’t heard, there is now MariaDB support. And with it, there’s a migration path from MySQL to MariaDB as well.

Using MariaDB may bring you performance advantages & improvements. But RDS may mitigate this by productizing & standardizing things.

You can also now move encrypted snapshots across regions. In my view this isn’t really a new feature, but rather fixing something that was broken before. The previous limitation was really more a symptom of their global network of data centers, than any built feature per se.
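A sketch of a cross-region copy with the aws cli. Identifiers are placeholders, and the KMS key has to live in the destination region:

$ aws rds copy-db-snapshot --region us-west-2 --source-region us-east-1 \
    --source-db-snapshot-identifier arn:aws:rds:us-east-1:123456789012:snapshot:mydb-snap \
    --target-db-snapshot-identifier mydb-snap-west \
    --kms-key-id arn:aws:kms:us-west-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab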

Also: Is the difference between dev & ops a four-letter word?

5. Expanded Redshift

As I’ve blogged before, everybody is excited about Redshift these days.

Amazon has introduced some new features.

o better loading of sorted data

This is done behind the scenes to load data quickly, and keep it stored efficiently. No more vacuuming after a big load!

o user & database rate limiting

Limit connections on a per user or per database level. Useful!

o storage estimates on analyze

When you perform the analyze command, you can get storage information so it’s easier to decide datatypes & compression type. Nifty!

Also: Is Redshift outpacing Hadoop as the big data warehouse for startups?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How can startups learn from the Dyn DNS outage?


As most have heard by now, last Friday saw a serious DDOS attack against one of the major US DNS providers, Dyn.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

DNS being such a critical dependency, this affected many businesses across the board. We’re talking Twitter, Etsy, Github, Airbnb & Reddit to name just a few. In fact Amazon Web Services itself was severely affected. And with so many companies hosting on the Amazon cloud, it’s no wonder this took down so much of the internet.

1. What happened?

According to Brian Krebs, a Mirai botnet was responsible for the attack. What’s even scarier, those requests originated from IoT devices. You know, baby monitors, webcams & DVRs. You’ve secured those, right? 🙂

Brian has posted a list of IoT device makers whose products have backdoors & default passwords and were involved in the attack. Interesting indeed.

Also: Is a dangerous anti-ops movement gaining momentum?

2. What can be done?

Companies like Dyn & Cloudflare among others spend plenty of energy & engineering resources studying attacks like this one, and figuring out how to reduce risk exposure.

But what about your startup in particular? How can we learn from these types of outages? There are a number of ways that I outline below.

Also: How do we lock down systems from disgruntled engineers?

3. What are your dependencies?

After an outage like the Dyn one, it’s an opportunity to survey your systems. Take stock of what technologies, software & services you rely on. This is something your ops team can & likely wants to do.

What components does your stack rely on? Which versions are hardest to upgrade? What hardware or services do you rely on? Which APIs do you call out to? Which steps or processes are still manual?

Related: The myth of five nines

4. Put your eggs in many baskets

Awareness around your dependencies helps you see where you may need to build in redundancy. Can you set up a second cloud provider for DR? Can you use an alternate API to get data when your primary is out? For which dependencies are your hands tied? Where are your weaknesses?

Read: Is AWS too complex for small dev teams?

5. Don’t assume five nines

The gold standard in technology & startup land has been 5 nines availability. This is the SLA we’re expected to shoot for. I’ve argued before (see: myth of five nines) that it’s rarely ever achieved. Outages like this one, bringing hours-long downtime, kill your 5 nines promise for years. That’s because 5 nines allows only about 5 ½ minutes of downtime per year (0.001% of 525,600 minutes is roughly 5.3 minutes)!

Better to accept that outages can & will happen, manage & mitigate, and be realistic with your team & your customers.

Also: Is AWS a patient that needs constant medication?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Is a dangerous anti-ops movement gaining momentum?


I was talking with a colleague recently. He asked me …

What do you think of the #no-ops movement that seems to be gaining ground? How is it related to devops?

It’s an interesting question. With technologies like lambda & docker containers, the role & responsibilities & challenges of operations are definitely changing quickly.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

The tooling & automation stacks that are available now are great. Groundbreaking. Paradigm shifting. But there’s another devops story that’s buried there waiting to be heard…

1. What is ops anyway?

What exactly is operations anyway? Charity Majors wrote an amazing piece – WTF is operations which I highly recommend reading.

At root, operations is about providing a safe nest where software can live. From incubation, to birth, then care & feeding to maturity.

Also: Why Reddit CTO Martin Weiner wants a boring tech stack

2. Is Noops possible?

The trend toward a #noops movement is, I think, a dangerous one.

At first glance this might seem reflexive on my part. After all I’ve specialized in operations & databases for years. But I think there’s something more insidious here.

Devs are often presiding over the first wave of software. That’s the initial period of perhaps five years, where frenetic product development is happening. After those years have passed, the early innovators are long gone, and an ops team is trying to keep things running & patch where necessary. This is when more conservative thinking, and the perspective of fewer moving parts & a simpler infrastructure, seems so obvious. All the technical debt is piled up & it’s hard to find the front door.

There’s an interesting article The ops identity crisis by Susan Fowler that I’d recommend for further reading.

Related: Is zero downtime even possible on RDS?

3. The dev mandate

I’ve sat in on teams talking about getting rid of ops & how it’ll mean more money to spend on devs etc. It’s always a surprising sentiment to hear.

I would argue that developers have a mandate to build product & functionality that can directly help customers. This is in essence a mandate for change. Faster, more agile & responsive means quicker to market & quicker to react to changes there.

Read: Five reasons to move data to Amazon Redshift

4. The ops mandate

I’ve also heard the other camp, ops talking about how stupid & short-sighted devs can be. Deploying the latest shiny toys, without giving operational or long term considerations any thought.

The ops mandate then is for this longer term view. How can we keep systems stable at 2am? How can we keep them chugging along after five or more years?

This great article Happiness is a boring stack by Jason Kester really sums up the sentiment. The sure & steady, standard & reliable stack wins the operations test every time.

Also: Is Amazon too big to fail?

5. Coming together

Ultimately dev & ops have different mandates. One for change & new product features, the other against change, for long term stability. It’s about striking a balance between the two.

It’s always a dance. That’s why dev & ops need to come together. That’s really what devops is all about.

For some further reading, I found Julia Evans’ piece What Is Devops to be an excellent read.

Also: Is the difference between dev & ops a four-letter word?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How I use Terraform & Composer to automate wordpress on aws


How I set up wordpress to deploy automatically on aws

You want to make your wordpress site bulletproof? No server outage worries? Want to make it faster & more reliable? And also host it on cheaper components?

I was after all these gains & also wanted to kick the tires on some of Amazon’s latest devops offerings. So I plotted a way forward to completely automate the deployment of my blog, hosted on wordpress.

Here’s how!

Join 28,000 others and follow Sean Hull on twitter @hullsean.

The article is divided into two parts…

Deploy a wordpress site on aws – decouple assets (part 1)

In this one I decouple the assets from the website. What do I mean by this? By moving the db to its own server, or to RDS for even simpler management, my webserver can be stopped & started or terminated at will, without losing all my content. Cool.

You’ll also need to decouple your assets. Those are all the files in the uploads directory. Amazon’s S3 offering is purpose built for this use case. It also comes with easy cloudfront integration for object caching, and lifecycle management to give your files backups over time. Cool!

Deploy a wordpress site on aws – automate (part 2)

In the second part we move into all the automation pieces. We’ll use PHP’s Composer to manage dependencies. That’s fancy talk for fetching wordpress itself, and all of our plugins.

1. Isolate your config files

Create a directory & put your config files in it.

$ mkdir iheavy
$ cd iheavy
$ touch htaccess
$ touch httpd.conf
$ touch wp-config.php
$ touch a_simple_pingdom_test.php
$ touch composer.json
$ zip -r iheavy-config.zip *
$ aws s3 cp iheavy-config.zip s3://my-config-bucket/

In a future post we’re going to put all these files in version control. Amazon’s CodeCommit is feature compatible with Github, but integrated right into your account. Once you have your files there, you can use CodeDeploy to automatically place files on your server.

We chose to leave this step out, to simplify the server role you need, for your new EC2 webserver instance. In our case it only needs S3 permissions!
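For reference, here’s a minimal sketch of what that S3-only policy might look like (the terraform below manages it as an aws_iam_role_policy, but you could also attach it by hand). The bucket name is the one from step 1, adjust to taste:

$ cat web_s3_policy.json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::my-config-bucket", "arn:aws:s3:::my-config-bucket/*"]
  }]
}

$ aws iam put-role-policy --role-name web_iam_role \
    --policy-name web_s3_read --policy-document file://web_s3_policy.json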

Also: When devops means resistance to change

2. Build your terraform script

Terraform is a lot like Vagrant. I wrote a Howto deploy on EC2 with Vagrant article a couple years ago.

The terraform configuration formalizes what you are asking of Amazon’s API. What size instance? Which AMI? What VPC should I launch in? Which role should my instance assume to get S3 access it needs? And lastly how do we make sure it gets the same Elastic IP each time it boots?

All the magic is inside the terraform config.

Here’s what I see:

levanter:~ sean$ cat iheavy.tf

resource "aws_iam_role" "web_iam_role" {
    name = "web_iam_role"
    assume_role_policy = <

And here's what it looks like when I ask terraform to build my infrastructure:

levanter:~ sean$ terraform apply
aws_iam_instance_profile.web_instance_profile: Refreshing state... (ID: web_instance_profile)
aws_iam_role.web_iam_role: Refreshing state... (ID: web_iam_role)
aws_s3_bucket.apps_bucket: Refreshing state... (ID: iheavy)
aws_iam_role_policy.web_iam_role_policy: Refreshing state... (ID: web_iam_role:web_iam_role_policy)
aws_instance.iheavy: Refreshing state... (ID: i-14e92e24)
aws_eip.bar: Refreshing state... (ID: eipalloc-78732a47)
aws_instance.iheavy: Creating...
  ami:                               "" => "ami-1a249873"
  availability_zone:                 "" => ""
  ebs_block_device.#:                "" => ""
  ephemeral_block_device.#:          "" => ""
  iam_instance_profile:              "" => "web_instance_profile"
  instance_state:                    "" => ""
  instance_type:                     "" => "t1.micro"
  key_name:                          "" => "iheavy"
  network_interface_id:              "" => ""
  placement_group:                   "" => ""
  private_dns:                       "" => ""
  private_ip:                        "" => ""
  public_dns:                        "" => ""
  public_ip:                         "" => ""
  root_block_device.#:               "" => ""
  security_groups.#:                 "" => ""
  source_dest_check:                 "" => "true"
  subnet_id:                         "" => "subnet-1f866434"
  tenancy:                           "" => ""
  user_data:                         "" => "ca8a661fffe09e4392b6813fbac68e62e9fd28b4"
  vpc_security_group_ids.#:          "" => "1"
  vpc_security_group_ids.2457389707: "" => "sg-46f0f223"
aws_instance.iheavy: Still creating... (10s elapsed)
aws_instance.iheavy: Still creating... (20s elapsed)
aws_instance.iheavy: Creation complete
aws_eip.bar: Modifying...
  instance: "" => "i-6af3345a"
aws_eip_association.eip_assoc: Creating...
  allocation_id:        "" => "eipalloc-78732a47"
  instance_id:          "" => "i-6af3345a"
  network_interface_id: "" => ""
  private_ip_address:   "" => ""
  public_ip:            "" => ""
aws_eip.bar: Modifications complete
aws_eip_association.eip_assoc: Creation complete

Apply complete! Resources: 2 added, 1 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
levanter:~ sean$ 

Also: Is Amazon too big to fail?

3. Use Composer to automate wordpress install

There is a PHP package manager called composer. It manages dependencies and we depend on a few things. First WordPress itself, and second the various plugins we have installed.

The composer.json file is pretty vanilla. Have a look:

{
    "name": "acme/brilliant-wordpress-site",
    "description": "My brilliant WordPress site",
    "repositories":[
        {
            "type":"composer",
            "url":"https://wpackagist.org"
        }
    ],
    "require": {
       "aws/aws-sdk-php":"*",
	    "wpackagist-plugin/medium":"1.4.0",
       "wpackagist-plugin/google-sitemap-generator":"3.2.9",
       "wpackagist-plugin/amp":"0.3.1",
	    "wpackagist-plugin/w3-total-cache":"0.9.3",
	    "wpackagist-plugin/wordpress-importer":"0.6.1",
	    "wpackagist-plugin/yet-another-related-posts-plugin":"4.0.7",
	    "wpackagist-plugin/better-wp-security":"5.3.7",
	    "wpackagist-plugin/disqus-comment-system":"2.74",
	    "wpackagist-plugin/amazon-s3-and-cloudfront":"1.1",
	    "wpackagist-plugin/amazon-web-services":"1.0",
	    "wpackagist-plugin/feedburner-plugin":"1.48",
       "wpackagist-theme/hueman":"*",
	    "php": ">=5.3",
	    "johnpbloch/wordpress": "4.6.1"
    },
    "autoload": {
        "psr-0": {
            "Acme": "src/"
        }
    }

}

Read: Is aws a patient that needs constant medication?

4. Build your user-data script

This captures all the commands you run once the instance starts. Update packages, install your own, move & configure files. You name it!

#!/bin/sh

yum update -y
yum install emacs -y
yum install mysql -y
yum install php -y
yum install git -y
yum install aws-cli -y
yum install gd -y
yum install php-gd -y
yum install ImageMagick -y
yum install php-mysql -y


yum install -y httpd24 
service httpd start
chkconfig httpd on

# configure mysql password file
echo "[client]" >> /root/.my.cnf
echo "host=my-rds.ccccjjjjuuuu.us-east-1.rds.amazonaws.com" >> /root/.my.cnf
echo "user=root" >> /root/.my.cnf
echo "password=abc123" >> /root/.my.cnf


# install PHP composer
export COMPOSER_HOME=/root
echo "installing composer..."
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php -r "if (hash_file('SHA384', 'composer-setup.php') === 'e115a8dc7871f15d853148a7fbac7da27d6c0030b848d9b3dc09e2a0388afed865e6a3d6b3c0fad45c48e2b5fc1196ae') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
php composer-setup.php
php -r "unlink('composer-setup.php');"
mv composer.phar /usr/local/bin/composer


# fetch config files from private S3 folder
aws s3 cp s3://iheavy-config/iheavy_files.zip .

# unzip files
unzip iheavy_files.zip 

# use composer to get wordpress & plugins
composer update

# move wordpress software
mv wordpress/* /var/www/html/

# move plugins
mv wp-content/plugins/* /var/www/html/wp-content/plugins/

# move pingdom test
mv a_simple_pingdom_test.php /var/www/html

# move htaccess
mv htaccess /var/www/html/.htaccess

# move httpd.conf
mv iheavy_httpd.conf /etc/httpd/conf.d

# move our wp-config into place
mv wp-config.php /var/www/html

# restart apache
service httpd restart

# allow apache to create uploads & any files inside wp-content
chown apache /var/www/html/wp-content

You can monitor things as they're being installed. Use ssh to reach your new instance. Then as root:

$ tail -f /var/log/cloud-init.log

Related: Does Amazon eat it's own dogfood?

5. Time to test

Visit the domain name you specified inside your /etc/httpd/conf.d/mysite.conf

You have full automation now. Don't believe me? Go ahead & TERMINATE the instance in your aws console. Now drop back to your terminal and do:

$ terraform apply

Terraform will figure out that the resources that *should* be there are missing, and go ahead and build them for you. AGAIN. Fully automated style!

Don't forget your analytics beacon code

Hopefully you remember how your analytics is configured. The beacon code makes an API call every time a page is loaded. This tells Google Analytics or other monitoring systems what your users are doing, how much time they're spending & where.

This typically goes in the header.php file. We'll leave it as an exercise to automate this piece yourself!

Also: Is AWS too complex for small dev teams?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don't work with recruiters

Deploy wordpress on aws by first decoupling assets


You want to make your wordpress site bulletproof? No server outage worries? Want to make it faster & more reliable? And also host it on cheaper components?

I was after all these gains & also wanted to kick the tires on some of Amazon’s latest devops offerings. So I plotted a way forward to completely automate the deployment of my blog, hosted on wordpress.

Here’s how!

Join 28,000 others and follow Sean Hull on twitter @hullsean.

The article is divided into two parts…

Deploy a wordpress site on aws – decouple assets (part 1)

In this one I decouple the assets from the website. What do I mean by this? By moving the db to its own server, or to RDS for even simpler management, my webserver can be stopped & started or terminated at will, without losing all my content. Cool.

You’ll also need to decouple your assets. Those are all the files in the uploads directory. Amazon’s S3 offering is purpose built for this use case. It also comes with easy cloudfront integration for object caching, and lifecycle management to give your files backups over time. Cool!

Terraform a wordpress site on aws – automate deploy (part 2)

In the second part we move into all the automation pieces. We’ll use PHP’s Composer to manage dependencies. That’s fancy talk for fetching wordpress itself, and all of our plugins.

1. Get your content into S3

How to do it?

A. move your content

$ cd html/wp-content/
$ aws s3 cp uploads s3://iheavy/wp-content/uploads/ --recursive

Don’t have the aws command line tools installed?

$ yum install aws-cli -y
$ aws configure

B. Edit your .htaccess file with these lines:

RewriteEngine On
RewriteRule ^wp-content/uploads/(.*)$ http://s3.amazonaws.com/your-bucket/wp-content/uploads/$1 [P]

These above steps handle all *existing* content. However you also want new content to go to S3. For that wordpress needs to understand how to put files there. Luckily there’s a plugin to help!

C. Fetch WP Offload S3 Lite

You’ll see the plugin below in our composer.json file as “amazon-s3-and-cloudfront”

Theoretically you need to specify your aws credentials inside the wp-config.php. However this is insecure. You don’t ever want stuff like that in S3 or any code repository. What to do?

The best way is to use AWS ROLES for your instance. These give the whole instance access to API calls without credentials. Cool! Read more about AWS roles for instances.
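If you’re building the webserver by hand rather than with the terraform from part 2, a sketch of wiring up a role looks something like this (profile, role & instance ids are placeholders; newer cli versions can attach a profile to a running instance):

$ aws iam create-instance-profile --instance-profile-name wp-web-profile
$ aws iam add-role-to-instance-profile --instance-profile-name wp-web-profile --role-name web_iam_role
$ aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=wp-web-profile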

Related: Is there a devops talent gap?

2. Move your database to RDS

You may also use a roll-your-own MySQL instance. The point here is to make it a different EC2 instance. That way you can kill & rebuild the webserver at will. This offers us some cool advantages.

A. Create an RDS instance in a private subnet.

o Be sure it has no access to the outside world.
o note the root user, password
o note the endpoint or hostname

I recommend changing the password from your old instance. That way you can’t accidentally login to your old db. Well it’s still possible, but it’s one step harder.
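Here’s a sketch of step A with the aws cli. Instance class, names, subnet group & security group are placeholders; note --no-publicly-accessible keeps it off the internet:

$ aws rds create-db-instance --db-instance-identifier wp-database \
    --engine mysql --db-instance-class db.t2.micro --allocated-storage 20 \
    --master-username wpadmin --master-user-password 'pick-a-new-password' \
    --db-subnet-group-name my-private-subnets --vpc-security-group-ids sg-0123abcd \
    --no-publicly-accessible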

B. mysqldump your current wp db from another server

$ cd /tmp
$ mysqldump --opt wp_database > wp_database.mysql

C. copy that dump to an instance in the same VPC & subnet as the rds instance

$ scp -i .ssh/mykey.pem ec2-user@oldbox.com:/tmp/wp_database.mysql /tmp/

D. import the data into your new db

$ cd /tmp
$ echo "create database wp_database" | mysql
$ mysql wp_database < wp_database.mysql

E. Edit your wp-config.php

define('DB_PASSWORD', 'abc123');
define('DB_HOST', 'my-rds.ccccjjjjuuuu.us-east-1.rds.amazonaws.com');

Also: When hosting data on Amazon turns bloodsport?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

5 things you didn’t know about Dynamodb that are hurting you bad


If you’re like a lot of folks you’re building an application in AWS & using a NoSQL database for persistent data. Dynamodb fits the bill nicely. Little or no ops to worry about, at least in the traditional sense.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

However there are knobs to turn & dials to set. Here are a few you should be thinking about.

1. You can replicate across regions

Dynamodb introduced a feature in 2015 called streams. If you come from the relational database world, you can think of streams like a transaction log. They capture before & after images of your data. Couple those with useful lambda functions, and you have triggers that can do anything you want.

Turns out Amazon has been all over this, and already built a library to do cross-region replication with streams. Pretty cool!
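Enabling a stream on an existing table is a one-liner with the aws cli (the table name is a placeholder):

$ aws dynamodb update-table --table-name orders \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES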

Also: Is aws too complex for small dev teams?

2. You can manage retrieval costs

Dynamodb automatically creates and manages an index on the primary key. But chances are that your application will read data based on other columns too. You can create secondary indexes on these other columns to support those access patterns. Without an index Dynamodb would have to scan every row to find your data, but an index dramatically reduces that, making data retrieval faster too!
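As a sketch, here’s how you might add a global secondary index to an existing table with the aws cli. Table, attribute & capacity numbers are placeholders:

$ aws dynamodb update-table --table-name orders \
    --attribute-definitions AttributeName=customer_id,AttributeType=S \
    --global-secondary-index-updates '[{"Create": {
        "IndexName": "customer_id-index",
        "KeySchema": [{"AttributeName": "customer_id", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "ALL"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5}}}]'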

Related: Does Amazon eat it’s own dogfood?

3. You can do SQL Like queries

That’s right, if you thought NoSQL meant no SQL you were only half right. By loading your Dynamodb data into HDFS, you can let Elastic MapReduce have at it. And that opens the door to using HiveQL to query the data the way you wanted to in the first place.

Convoluted? Yes. But this is the brave new world of the cloud!

Read: Is Amazon too big to fail?

4. Partitions are handy & useful

By default dynamo is partitioning your data behind the scenes. Because that’s what good distributed databases are supposed to do. It does so using the primary key to figure out where the data should go. And just like with Redshift, you have the option of also using a sort key, which organizes the data within each partition. This is important. Going across those different partitions brings latency costs that will surprise you.

Also: When hosting data on Amazon turned bloodsport

5. Metrics are your partner in performance

CloudWatch provides all sorts of instrumentation for Dynamodb. Read & write activity, throttling, errors & latency are just a few of the things you can see.
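For example, here’s a sketch that pulls a day of consumed read capacity for one table (table name & dates are placeholders):

$ aws cloudwatch get-metric-statistics --namespace AWS/DynamoDB \
    --metric-name ConsumedReadCapacityUnits \
    --dimensions Name=TableName,Value=orders \
    --start-time 2017-01-01T00:00:00Z --end-time 2017-01-02T00:00:00Z \
    --period 300 --statistics Sum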

Also: Is aws the patient that needs constant medication?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How do we lock down cloud systems from disgruntled engineers?


I worked with a customer last year, on a short term assignment. A brilliant engineer had built their infrastructure, automated deployments, and managed all the systems. Sadly, despite all the sleepless nights and dedication, they hadn’t managed to build up a good rapport with management.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

I’ve seen this happen so many times, and I do find it a bit sad. Here’s an engineer who’s working his butt off, really wants the company to succeed. Really cares about the systems. But doesn’t connect well with people, often is dismissive, disrespectful or talks down to people like they’re stupid. All of this burns bridges, and there’s a lot of bad feelings between all parties.

How do you manage the exit process? Here’s a battery of recommendations for changing credentials & logins so that systems can’t be accessed anymore.

1. Lock out API access

You can do this by removing the administrator policy, or any other policies & groups their IAM user might have. That way you keep the account around *just in case*. This will also prevent them from doing anything on the console, but you can see if they attempt any logins.
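A sketch of those steps with the aws cli. The username & key id are placeholders, and if their admin rights came through a group you’d remove them from the group instead:

# deactivate their api keys, but keep the user around for auditing
$ aws iam list-access-keys --user-name departing-engineer
$ aws iam update-access-key --user-name departing-engineer \
    --access-key-id AKIAEXAMPLEKEYID --status Inactive

# strip the administrator policy
$ aws iam detach-user-policy --user-name departing-engineer \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess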

Also: Is AWS too complex for small dev teams?

2. Lock out of servers

They may have the private keys for various servers in your environment. So to lock them out, scan through all the security groups, and make sure their whitelisted IPs are gone.

Are you using a bastion box for access? That’s ideal, because then you only have one access point. Eliminate their login and audit access there. Then you’ve covered your bases.
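Here’s a sketch of hunting down & revoking a whitelisted IP with the aws cli (the IP & group id are placeholders):

# find any security groups that whitelist their home IP
$ aws ec2 describe-security-groups \
    --filters Name=ip-permission.cidr,Values=203.0.113.45/32 \
    --query 'SecurityGroups[].GroupId'

# then revoke the rule in each group found
$ aws ec2 revoke-security-group-ingress --group-id sg-0123abcd \
    --protocol tcp --port 22 --cidr 203.0.113.45/32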

Related: Does Amazon eat it’s own dogfood?

3. Update deployment keys

At one of my customers the outgoing op had set up many moving parts & automated & orchestrated all the deployment processes beautifully. However he also used his personal github key inside jenkins. So when it went to deploy, it used those credentials to get the code from github. Oops.

We ended up creating a company github account, then updating jenkins with those credentials. There were of course other places in the capistrano bits that also needed to be reviewed.

Read: Is aws a patient that needs constant medication?

4. Update dashboard logins

Monitoring with NewRelic or Nagios? Perhaps you have a centralized dashboard for your internal apps? Or you’re using Slack? Be sure to remove the departing engineer’s accounts & rotate any shared passwords on those services too.

Also: Is Amazon too big to fail?

5. Audit Non-key based logins

Have some servers outside of AWS in a traditional datacenter? Or even servers in AWS that are using usernames & passwords? Be sure to audit the full list of systems, and change passwords or disable accounts for the outgoing sysop.

Also: When hosting data on Amazon turns bloodsport?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

How do we measure devotion?


I was talking recently over email with a hiring manager. Jamie (not his real name) wanted to hire me, but was set against consulting. While that by itself is understandable, he seemed to equate it with devotion. This troubled me. Here’s the quote below.

Join 32,000 others and follow Sean Hull on twitter @hullsean.


While I am sure your skills are excellent, I guess what I am trying to gauge is your desire to quit consulting and join us full time.  I am looking for you to share my vision of changing publishing through data.   Let me be clear: I am not looking for a contractor.  Acme is a fabulous company and I need a person devoted to Acme and to our data assets.

1. Devotion on vacation

Here’s my response. All names have been changed.


I understand Jamie.

I hear you about devotion, I think it’s very important too. In 2010, I was working at MGC. After 3 months, they hired a large remote DBA firm out of Canada, to manage the database systems & my contract concluded.

A few weeks later and a few hours before a plane flight,  I got a harried call.  Can you help us? Database replication is broken & our site is offline.   I jumped on skype to chat with the team, even as I was packing my bags.  I went to the airport, and got on WIFI again.  In-flight on my way to California I remained online to help repair the systems & bring everything back.  It took a few more days and half of my vacation to get things working again, but I wanted to help.

My boss at MGC kept me on for 1 ½ years after that. He felt I was devoted & gave them the very best service.

If you change your mind, or would like to discuss further, don’t hesitate to reach out.

Also: What happens when clients don’t pay?

2. Devotion to a manager

I had another experience years back with company Media Inc. Working under a very good CTO, I was surrounded by a team who was also very loyal to him. After about a year, he decided to leave. He had gotten a very enticing offer from another firm. Although he made a great effort to leave the ship in good condition, the crew felt the ship rocking a bit. A temporary CTO was brought on who had a very different style.

As the ship continued to rock at sea, finally a new CTO was found. He however was not popular at all. He had a swagger & tended to throw his weight around, irritating the team, and making them fear they might be thrown from the ship. Slowly they began to leave. After three months, six out of eight on the team had left. There was one old-school Oracle guy still left, and me.

Although he certainly had a different style than the previous boss, it didn’t bother me much. I told him I’d stay as long as he needed me. I was also working remote so I didn’t deal with some of the day-to-day politics.

My devotion was to the business, databases & systems. I accomplished this by being devoted to my own business.

Related: Why I ask customers for a deposit?

3. Devotion to vesting

I worked at another firm about three years ago. Let’s call them Growing Fast Inc. While the firm itself was gaining ground & getting customers like Nike & Walmart, it still had an engineering team of only ten. You could say it was boxing way above its weight.

While it tried to grow, it hired an outside CTO to help. His style was primarily management facing, while the team’s problems were based in technology. With tons of technical debt & a lack of real leadership, the engineering team was floundering. Lots of infighting was making things worse.

Suddenly a key team member decided to quit. The following week another, and after that two more. All told four left. When you consider how small the team was, and further that the remaining members were basically founders, a different picture emerges. Four out of six non-founders had left in two weeks, roughly 66% of the engineering team. The only other guy who stayed had his visa sponsored by Growing Fast Inc.

The founders who stayed were all vested. Everyone else quit because of mismanagement.

Read: 5 conversational ways to evaluate great consultants

4. Devotion to code & data

In an industry as competitive as software & technology, it’s often devotion to building things that wins the day. Using the latest & greatest languages, databases & tech stack can carry a lot of weight.

Managing technical debt can make a difference too. Developers don’t want to be asked to constantly walk a minefield of other developers’ mistakes. A minefield needs to be cleaned up for the business to flourish.

Also: 5 things I learned about trust & advising clients?

5. Devotion through & through

Running a startup isn’t easy. Many fail after 3 or 5 years. I’m devoted to my business. I’ve been an entrepreneur for 20 years, and built my practice into a success.

The year after 9/11 & again after 2008 were the most difficult periods to tough it out. It’s been hard fought & I wouldn’t shutter the doors of my own business easily. It affords me the opportunity to attend AWS pop-up loft lectures, go to conferences & meetups, blog about technology topics, & pivot as the technological winds change.

I’ve found all of this makes me extremely valuable to firms looking for expertise. I have independence & perspective that’s hard to find. I’m also there for firms that have been looking to fill a role, and need help sooner rather than later.

Also: A CTO must never do this

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters