
Why is there a company whose only purpose is to help me with my AWS bill?


I know everyone is talking about the pandemic at the moment, but I thought I'd sidestep the topic and continue to discuss what I know best. Which is public cloud!

Join 35,000 others and follow Sean Hull on twitter @hullsean.

Of late, I see more and more discussions on CTO forums, and in the news, about surprise AWS bills. Did you hear about NASA's cloud deployment fail?

So I thought I’d provide some history, and a few ideas…

1. Oracle days

Believe it or not, this has happened before. In the days when I did a lot of Oracle work, I remember how punishing their licensing was. And companies were pretty much hamstrung, because they needed the technology to run their business.

Given the demand, new firms like Miro Consulting stepped in to help. Their business is specifically built to help customers with out-of-control Oracle bills.

Read: How to hire a developer that doesn’t suck

2. Repeating refrain

What’s this got to do with Amazon Web Services you say?

Just today I discovered a firm that promises to help you lower your AWS bill by 20-40%.

Promises aside, if there is a business doing this, there must be money to be made in it. So that speaks to a sizeable market opportunity.

Related: When you have to take the fall

3. AWS has a webinar on the topic

Hey, if you're thinking you can manage it yourself, why not look to AWS for assistance on lowering your bill?

Something tells me there’s a conflict of interest there, but I digress.

Point is, with this topic drawing growing interest, I think you'll see more businesses stepping into this niche.

Related: How do I migrate my skills to the cloud?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest: Why I don't work with recruiters.


What can we learn from NASA’s AWS fail?


I was just reading The Register, which is sort of the UK's version of Slashdot, and they had a jaw-dropping headline: NASA moved 247 petabytes into AWS and then later learned about EGRESS costs.

OMG! Face palm. Wow.

Join 35,000 others and follow Sean Hull on twitter @hullsean.

To say this is a disaster is an understatement. Could it have been prevented? Not by strategic thinking alone. I believe a certain amount of real-world testing & prototyping is the only way.

Here are my thoughts…

1. Expect hidden costs

Every time I check, there are more AWS services. Just now I did some googling, and the number stands at 170. Not only is it tough to keep up with all of those, but the offerings are constantly evolving, gaining new features and so forth. That means the pricing and costs are evolving too.

All this adds up to an enormously complex web of interconnecting pieces, so it is near impossible to predict costs in advance. The solution? Prototype.

And this would have helped save NASA. Because you would build, feature test, and load test too. Out of all that would have come a rough cost estimate, one that included EGRESS charges.

There are no guarantees in this game, but it is surely getting complicated.
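
To make this concrete, here's a minimal sketch, in Python with boto3, of pulling last month's data-transfer-out spend from Cost Explorer. This is the kind of number a prototype run surfaces before it surprises you. It assumes your credentials are configured and Cost Explorer is enabled on the account.

import boto3
from datetime import date, timedelta

ce = boto3.client("ce")

end = date.today().replace(day=1)                  # first day of this month
start = (end - timedelta(days=1)).replace(day=1)   # first day of last month

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

# Sum every usage type that looks like data transfer out, i.e. egress
egress = sum(
    float(g["Metrics"]["UnblendedCost"]["Amount"])
    for g in resp["ResultsByTime"][0]["Groups"]
    if "DataTransfer-Out" in g["Keys"][0]
)
print(f"Egress spend last month: ${egress:,.2f}")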

Read: How can 1% of something equal nothing?

2. Vendor lock-in is not dead

With the receding of some of the big old-world vendors like Oracle, many have forgotten the shark-like tactics they used with startups.

The model was something like this. Send in the big guns, nicely dressed, to get you on board. Finesse the sale. Offer deep discounts, and get the customer on Oracle. After a year, maybe two, start squeezing. You'd be surprised how much blood comes out of diamonds.

These days we feel freer to port our applications between cloud vendors. Even if almost everybody is on Amazon already. But this NASA story really highlights the huge organizational cost of migrating to the cloud. You architect your application around the vendor, you do cost planning, and so on. So once you're there, it's hard to unravel.

Related: Is Fred Wilson right about dealing in an honest, direct and transparent way?

3. New possible hacking vector

Since costs are tied to usage in the public cloud, this has implications for hacking. If a bad actor wants to cause you harm, they can now just use your service more.

Don’t like company A? Write some bots to access them from obscure locations, and ramp up those egress costs. With all the complexity of the cloud, are most firms monitoring for this sort of thing? I don’t see it in my engagements.
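
What would watching for this even look like? Here's one modest sketch, again Python with boto3: a CloudWatch billing alarm that fires when the month-to-date bill passes a threshold. It assumes billing alerts are enabled on the account (a one-time setting), and the SNS topic ARN and the $500 threshold are placeholders for illustration.

import boto3

# Billing metrics only live in us-east-1
cw = boto3.client("cloudwatch", region_name="us-east-1")

cw.put_metric_alarm(
    AlarmName="estimated-charges-high",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                    # check every six hours
    EvaluationPeriods=1,
    Threshold=500.0,                 # placeholder: alert past $500 month-to-date
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
)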

Something similar happened to me. I wrote about it: When Mailchimp fraudulently charged my credit card. It really happened. Do I think it was intentional? You'll have to read the article to get my 360-degree take on it.

Related: What mistakes did you make when starting as a consultant?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest: Why I don't work with recruiters.


How to avoid insane AWS bills


I was flipping through the AWS news recently and ran into this article by Juan Ramallo: I was billed 14k on AWS!

Scary stuff!

Join 38,000 others and follow Sean Hull on twitter @hullsean.

When you see headlines like this, your first instinct as a CTO is probably, “Am I at risk?” And then “What are the chances of this happening to me?”

Truth can be stranger than fiction. Our efforts as devops should go towards mitigating risk, and reducing the potential for these kinds of things to happen.

1. Use AWS instance profiles instead

Those credentials that AWS provides are great for enabling the awscli. That's because you control your desktop tightly. Don't you?

But passing them around in your application code is prone to trouble. Eventually they’ll end up in a git repo. Not good!

The solution is applying AWS IAM permissions at the instance level. That's right, you can grant an instance permission to read or write an S3 bucket, describe instances, create & write to DynamoDB, or anything else in AWS. The entire cloud is API configurable. You create a custom policy for your instance, and attach it to a named instance profile.

When you spin up your EC2 instance, or modify it later, you attach that instance profile, and voila! The instance has those permissions. No messy credentials required!
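
Here's a minimal sketch of that flow, in Python with boto3. The role name, profile name, and bucket are placeholders for illustration; the shape is what matters: a role EC2 can assume, a custom policy, and a named instance profile wrapping the role.

import json
import boto3

iam = boto3.client("iam")

# Trust policy: let EC2 assume this role
assume_role = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Custom policy: read-only access to one S3 bucket (placeholder name)
read_bucket = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::my-app-bucket",
                     "arn:aws:s3:::my-app-bucket/*"],
    }],
}

iam.create_role(RoleName="app-server-role",
                AssumeRolePolicyDocument=json.dumps(assume_role))
iam.put_role_policy(RoleName="app-server-role",
                    PolicyName="read-app-bucket",
                    PolicyDocument=json.dumps(read_bucket))

# The named instance profile wraps the role
iam.create_instance_profile(InstanceProfileName="app-server-profile")
iam.add_role_to_instance_profile(InstanceProfileName="app-server-profile",
                                 RoleName="app-server-role")

From there, launch with ec2.run_instances(..., IamInstanceProfile={"Name": "app-server-profile"}), and your application code just calls S3. The SDK picks up the role's temporary credentials automatically.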

Related: Is Amazon too big to fail?

2. Enable two-factor authentication

If you haven't already, you should force two-factor authentication on all of your IAM users. It's an extra step, but well worth it. Here's how to set it up.

Mobile phones support all sorts of 2FA apps now, from Duo to Authenticator, and many more.
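
And how do you find the stragglers? Here's a quick audit sketch with boto3, listing every IAM user with no MFA device enrolled.

import boto3

iam = boto3.client("iam")

# Walk all IAM users and flag anyone without an MFA device
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
        if not devices:
            print(f"No MFA enrolled: {user['UserName']}")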

Related: Is AWS too complex for small dev teams?

3. Scan your code for credentials

Encourage developers to use tools like Pivotal’s Credentials Scan.

Hey, while you're at it, why not add a post-commit hook to your git repo? Have it run the credentials scan each time code is committed. And when it finds trouble, have it email the whole team.

This will get everybody on board quick!
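
For the curious, here's a bare-bones sketch of such a hook in Python. Drop it in .git/hooks/post-commit and make it executable. The regexes and the notification step (a print here; swap in email or a chat webhook) are illustrative only.

#!/usr/bin/env python3
import re
import subprocess

# Diff of the commit that just landed
diff = subprocess.run(["git", "show", "HEAD"],
                      capture_output=True, text=True).stdout

# AWS access key IDs are 20 chars starting with AKIA; also flag likely secrets
patterns = [r"AKIA[0-9A-Z]{16}", r"(?i)aws_secret_access_key\s*=\s*\S+"]

hits = [m.group(0) for p in patterns for m in re.finditer(p, diff)]
if hits:
    print("WARNING: possible AWS credentials in the last commit:")
    for h in hits:
        print("  ", h)
    # Here's where you'd notify the team: smtplib, a Slack webhook, etc.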

Related: Are we fast approaching cloud-mageddon?

4. Scan your S3 Buckets

Open S3 buckets can be a real disaster, offering up your private assets & business data to the world. What to do about it?

Scan your S3 buckets regularly.

You can also tie this scanning process into a monitoring alert. That way, as soon as an errant bucket is found, you're notified of the problem. Better safe than sorry!
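
A simple scan could look like this boto3 sketch, which flags any bucket whose ACL grants access to the global AllUsers group, i.e. the public. Note it only checks ACLs; a bucket policy can open a bucket too, so treat this as a starting point.

import boto3

s3 = boto3.client("s3")

# Grantee URI that means "everyone on the internet"
PUBLIC = "http://acs.amazonaws.com/groups/global/AllUsers"

for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    grantees = [g["Grantee"].get("URI", "") for g in acl["Grants"]]
    if PUBLIC in grantees:
        print(f"PUBLIC bucket: {bucket['Name']}")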

Related: Which tech do startups use most?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest: Why I don't work with recruiters.