
How can we secretly deploy this key to production?


I worked with a client some time back that was in a very unfortunate position indeed. Through various management shuffling, the COO was the new sheriff in town. Which is fine and normal.

Except in this case, the entire operations team was across the sea.

Join 35,000 others and follow Sean Hull on twitter @hullsean.

Now in many or most cases that would be fine. The new COO asks for the root credentials, works with devops to get new hires on board, and moves forward. But these were not normal circumstances.

No, here the turf war began in earnest. Threats were even lobbed about what damage could be done.

1. Get a handle on all the systems

Upon starting the engagement, I attempted to inventory everything. What AWS logins & IAM accounts were there? What servers were deployed? How were the networks and security groups set up?
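If you’re doing a similar inventory pass, a few boto3 calls will pull the basics. Here’s a rough sketch in Python, assuming read-only credentials are already configured, with pagination and error handling omitted for brevity:

import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Who has logins & IAM accounts?
for user in iam.list_users()["Users"]:
    print("IAM user:", user["UserName"])

# What servers are deployed?
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print("EC2 instance:", instance["InstanceId"], instance["State"]["Name"])

# How are the security groups set up?
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    print("Security group:", sg["GroupId"], sg["GroupName"])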

I followed many of the suggestions I outlined here: locking down cloud systems.

However we hit some hurdles, because I did not have my key deployed on production servers. I also could not update the deployed keys. So we were stuck in a holding pattern.

Read: Is AWS too complex?

2. Can you deploy this key in secret?

As we moved through the mess, and still could not gain control of systems, I received a strange request.

Can you deploy your key to production in secret?

Well of course what I’m thinking at this point is, I sure hope not. If a new engineer could bypass the devops team and get the keys changed on the servers, that would amount to a giant security hole. As it turned out, it was not possible to do.

Related: How do I migrate my skills to the cloud?

3. Git is not a good way to sidestep human communication

What I realized is that we were trying to use git to deploy a key, which would have communicated our intentions to the devops team anyway. What’s more it would have communicated it in a strange way. Why is this happening, and what are *they* trying to do over there in New York?

Better to communicate the normal way: hey, we hired this new engineer and we would like to give him access to these production servers. End of story.

Related: What mistakes did you make when starting as a consultant?

4. How things got resolved

We attempted to get the key installed on systems, which did precipitate a conversation: who is this guy Sean, and what’s his role? All of that stalled for a while.

In the end a new CTO was hired, and the devops lead in Odessa reported to this new guy. This was a savvy move, since the new CTO didn’t yet have a relationship, good or bad, with the loose cannon in Ukraine.

As we stepped forward carefully, we got a hold of credentials, and then planned an early morning cutover. While multiple engineers were poised at their keyboards, the devops lead in Ukraine was being fired. So we were locking him out at the same time.

Related: How to hire a developer that doesn’t suck?

5. No technology can sidestep hard conversations

It all went smoothly, and a lot was learned.

What I learned was that when strange requests surface, usually it means some bigger conversation needs to happen.

No technology can sidestep those hard conversations.

Related: 30 questions to ask a serverless fanboy



Does Amazon’s security work well for startups?


I was sifting through my project & progress reports from former clients today. Something struck me loud and clear. It seems 4 out of 5 of them don’t implement VPC best practices.

Join 35,000 others and follow Sean Hull on twitter @hullsean.

Which raises the question again and again: is the service just too damn complicated? I wrote about this topic before… Is AWS a bit too complex for most, or at least for smaller dev teams?

1. No private subnets

What are those you ask? I really hope you’re not asking that.

The best-practice way to deploy on Amazon is inside a VPC. This provides a logical grouping. You could have dev, stage and prod VPCs, and perhaps a utility one for other more permanent services.

Within that VPC, you want to have everything deployed in one or more private subnets. These are each mapped to a specific AZ in that region. The AZ maps to a physical datacenter, a single building within that region. These private subnets have *NO route to the internet*.

How do you reach resources in the private subnet? You must be coming from the public subnet deployed within that same VPC. All the routing rules enforce this. Only two types of resources belong in the public subnet: a load balancer for 80/443 traffic, and a jump or bastion box for ssh.
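If you’re standing this layout up from scratch, here’s a rough boto3 sketch in Python. The region, AZ and CIDR blocks are placeholders, and tagging plus error handling are omitted:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One VPC, with a public and a private subnet in the same AZ.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
public = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                           AvailabilityZone="us-east-1a")["Subnet"]["SubnetId"]
private = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24",
                            AvailabilityZone="us-east-1a")["Subnet"]["SubnetId"]

# Internet gateway + route table for the PUBLIC subnet only.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public)

# The private subnet gets NO such route. Instances there can only be
# reached via the bastion box or load balancer in the public subnet.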

Read: How can 1% of something equal nothing?

2. Security groups with all ports open

Another thing that I see more often than you might guess is all ports open by some wildcard rule. *BAD*. We all know it’s bad, but it happens. And then it gets forgotten. We see developers doing it as a temporary fix to get something working and forget to later plug up the hole.

Even for security groups that don’t have this problem, they often allow port 22 from anywhere on the internet (0.0.0.0/0). This is unnecessary and rather reckless. Everyone should be coming from known source IPs. This can be an office network, or it can be some other trusted server on the internet. Or a block of IPs that you’ll always have assigned.

And of course don’t have your database port open. MySQL and Postgres don’t have particularly great protections here.
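A quick audit script can catch these holes. Here’s a minimal boto3 sketch that flags any ingress rule open to the whole internet; in practice you’d whitelist the load balancer’s 80/443 rules as expected exceptions:

import boto3

ec2 = boto3.client("ec2")

# Flag any ingress rule open to 0.0.0.0/0.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg["IpPermissions"]:
        for ip_range in rule.get("IpRanges", []):
            if ip_range["CidrIp"] == "0.0.0.0/0":
                print("OPEN TO WORLD:", sg["GroupId"], sg["GroupName"],
                      "ports", rule.get("FromPort", "all"), "-",
                      rule.get("ToPort", "all"))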

Related: Is Fred Wilson right about dealing in an honest, direct and transparent way?

3. No flowlogs enabled

Flowlogs allow you to log things at the network flow level. Want to know about failed ssh attempts? Log that. Want to know about other ports? Log that too.

If you are funneling all your connections through a jump box, you can configure your VPC flowlogs monitoring just for that box itself. You may also want to watch what’s happening with the load balancer too.

Flowlogs work at the network interface layer of your VPC, so you’ll need to understand VPCs in depth.
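Enabling them is a single API call. A hedged boto3 sketch follows; the VPC id, log group name and IAM role ARN are placeholders you’d swap in:

import boto3

ec2 = boto3.client("ec2")

# Log ALL traffic for the whole VPC to a CloudWatch Logs group.
# For just the jump box, pass ResourceType="NetworkInterface"
# and that box's ENI id instead.
ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],   # placeholder VPC id
    TrafficType="ALL",                       # or "REJECT" for failed attempts only
    LogGroupName="vpc-flowlogs",             # placeholder log group
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flowlogs-role",  # placeholder role
)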

Related: What mistakes did you make when starting as a consultant?



How do we secure an existing aws hosted application?


What if you don’t have the luxury of a greenfield? You are looking at an already built application, and asking yourself, how do I secure this?

Join 35,000 others and follow Sean Hull on twitter @hullsean.

One can think of it as a giant labyrinth, with many turns and many paths. Some of those paths have not had light shining in them for some time. So you’ll need to be cautious, thorough, and vigilant.

Here are some notes on where to start.

1. Scanning – code

One area you’ll need to dig into is the application code itself. If you don’t have the luxury to push new code, you’ll need to verify what version is deployed, and scan the repository for keys or passwords. You can also scan on the server itself. Better to double your efforts.
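For the repository scan, even a crude pattern match will surface the worst offenders. Here’s a minimal Python sketch, assuming a local checkout in ./repo; the patterns are a starting point, not a complete list:

import os
import re

# AKIA... matches classic AWS access key ids; extend as needed.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key id
    re.compile(r"(?i)(password|secret|api_key)\s*[=:]"),  # suspicious assignments
]

for root, dirs, files in os.walk("repo"):
    dirs[:] = [d for d in dirs if d != ".git"]  # skip git internals
    for name in files:
        path = os.path.join(root, name)
        try:
            text = open(path, errors="ignore").read()
        except OSError:
            continue
        for pattern in PATTERNS:
            for match in pattern.finditer(text):
                print(f"{path}: {match.group(0)}")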

Read: What do the best engineers do better?

2. Scanning – network

Your VPC is obviously your first layer of defense. Scan the routing tables to make sure there aren’t unexpected routes to the internet, and review the network ACLs for open ports or overly broad whitelisted IPs.

Do the same sort of review for security groups, as those are an alternative method for configuring access to servers.
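Here’s a minimal boto3 sketch of that routing review. It lists every route that leads to the internet, so you can eyeball which subnets are actually exposed (pagination omitted):

import boto3

ec2 = boto3.client("ec2")

# Print every default route to the internet and the subnets it covers.
for rt in ec2.describe_route_tables()["RouteTables"]:
    for route in rt["Routes"]:
        if route.get("DestinationCidrBlock") == "0.0.0.0/0":
            target = route.get("GatewayId") or route.get("NatGatewayId", "?")
            subnets = [a.get("SubnetId") for a in rt["Associations"]]
            print(rt["RouteTableId"], "->", target, "subnets:", subnets)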

AWS has a service called Flowlogs, which can be enabled. It gives you detailed network-layer logging, which you can then scan for trouble.

Related: Is Fred Wilson right about dealing in an honest, direct and transparent way?

3. Scanning – IAM, keys & console

Your existing devs probably have keys to some or all of the EC2 boxes. If you don’t want to relaunch all of these boxes with new keys, or don’t have the luxury to do that, you’ll need to lock down the security groups, whitelisted IPs and VPC routing rules.

You’ll also need to carefully review IAM roles & policies. Amazon Inspector may be a useful tool to scan your environment, find glaring holes, and enforce best practices. But you’ll also want to do your own scanning, both automated and manual, eyeballing the accounts yourself.

You’ll also want to lock down console access, especially the root account, and any others that have administrator privileges. Enable password policies and password rotation, as well as multi-factor authentication. There is also a nice toggle for “alert on login”. You certainly want to know about those!
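Part of that IAM review can be automated. A short boto3 sketch, assuming read-only credentials, that flags users without MFA and checks whether a password policy exists at all:

import boto3

iam = boto3.client("iam")

# Flag users with no MFA device attached.
for user in iam.list_users()["Users"]:
    name = user["UserName"]
    if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
        print("NO MFA:", name)

# get_account_password_policy raises NoSuchEntity if none is set.
try:
    policy = iam.get_account_password_policy()["PasswordPolicy"]
    print("Password policy:", policy)
except iam.exceptions.NoSuchEntityException:
    print("No password policy configured!")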

Related: What mistakes did you make when starting as a consultant?

4. Scanning – services

Review all of the AWS services that are deployed. Ask yourself some of these questions:

o which regions & availability zones am I deployed in?
o what elastic IPs do I have configured where?
o what IAM roles & policies do I have created?
o what databases, API gateways & S3 buckets are configured?
o etc…

Cloudtrail can be a great help here as it can log all sorts of useful information. You can then scan those logs for problems.
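For example, here’s a small boto3 sketch that pulls the last day of console logins out of Cloudtrail, a quick way to see who is actually touching the account:

import boto3
from datetime import datetime, timedelta

ct = boto3.client("cloudtrail")

# Look up ConsoleLogin events from the past 24 hours.
events = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "ConsoleLogin"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "?"))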

Related: Why did mailchimp fraudulently charge my credit card?

5. Rebuilding

The scanning approach can work, but there is a strong need to be thorough. If you miss one whitelisted IP or existing ssh key, you can leave the whole network open to a crafty intruder.

Another option is to rebuild the whole application. This gives you the time to:

o automate the whole stack with terraform
o test that everything is working
o plan for failover
o ensure that every bucket is secure, with lifecycle policies enabled
o ensure that every EBS volume is encrypted (see the sketch below)
o enable cloudtrail, cloudwatch etc
o potentially setup in a *brand new* aws account, for even more confidence
o backup all the pieces of the application as you go
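As one example, the encrypted-EBS item above can be verified with a couple of lines of boto3 (pagination omitted):

import boto3

ec2 = boto3.client("ec2")

# Collect any volumes that slipped through without encryption.
unencrypted = [
    v["VolumeId"]
    for v in ec2.describe_volumes()["Volumes"]
    if not v["Encrypted"]
]
print("Unencrypted volumes:", unencrypted or "none")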

Read: Did Disney+ have to fail?
