
What Can We Learn from NASA’s AWS Fail?

I was just reading The Register, which is sort of the UK’s version of Slashdot, and they had a jaw-dropping headline: NASA moved 247 petabytes into AWS and only later learned about EGRESS costs.

OMG! Facepalm. Wow.

To say this is a disaster is an understatement. Could it have been prevented? Not by strategic thinking alone. I believe a certain amount of real-world testing and prototyping is the only way.

Let’s explore what we can do to avoid this mistake.


Things to Learn from NASA’s AWS Mistake

Here are my thoughts…

1. Expect Hidden Costs

Every time I check, there are more AWS services. Just now I did some googling, and the number stands at 170. Not only is it tough to keep up with all of those, but the offerings are constantly evolving, getting new features, and so forth. That means the pricing and costs are evolving too.

All this adds up to an enormously complex web of interconnecting pieces, so it is nearly impossible to predict prices in advance. The solution? Prototype.

And this would have helped NASA. When you prototype, you build, feature test, and load test. Out of all of that comes a realistic cost estimate, one that includes EGRESS costs.
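To make that concrete, here’s a minimal back-of-the-envelope sketch in Python. Every number in it is an assumption for illustration (a made-up read rate and a flat $0.09/GB egress price), not NASA’s actual figures, but it’s exactly the kind of thing a prototype or load test lets you fill in with real measurements:

```python
# Back-of-the-envelope egress estimate you could feed with real load-test numbers.
# All figures below are illustrative assumptions, not NASA's actual usage or rates.

GB_PER_PB = 1_000_000  # 1 petabyte expressed in (decimal) gigabytes

stored_pb = 247                 # size of the archive, in petabytes
fraction_read_per_month = 0.02  # assume 2% of the archive gets downloaded each month
egress_price_per_gb = 0.09      # assumed flat $/GB price for data transfer out to the internet

egress_gb = stored_pb * GB_PER_PB * fraction_read_per_month
monthly_cost = egress_gb * egress_price_per_gb

print(f"Estimated egress volume: {egress_gb:,.0f} GB/month")
print(f"Estimated egress cost:   ${monthly_cost:,.0f}/month")
```

Even a crude model like this, once the assumptions are replaced with numbers measured during prototyping, puts egress on the radar long before the first bill arrives.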

There are no guarantees in this game, but it is surely getting complicated.

Read: How can 1% of something equal nothing?

2. Vendor Lock-In Is Not Dead

As some of the big old-world vendors like Oracle have receded, many have forgotten the shark-like tactics they used on startup companies.

The model was something like this. Send in the big guns, nicely dressed, to get you on board. Finesse the sale. Offer deep discounts, and get the customer on Oracle. After a year, maybe two, start squeezing. You’d be surprised how much blood comes out of diamonds.

These days we feel freer to port our applications to different cloud vendors, even if, for the most part, everybody is on Amazon already. But this NASA story really highlights the great organizational cost of migrating to the cloud. You architect your application, you do cost planning, and so on. Once you’re there, it’s hard to unravel.

Related: Is Fred Wilson right about dealing in an honest, direct, and transparent way?

3. New Possible Hacking Vector

Since costs are tied to usage in the public cloud, this has implications for hacking. If a bad actor wants to cause you harm, they can now just use your service more.

Don’t like company A? Write some bots to access them from obscure locations, and ramp up those egress costs. With all the complexity of the cloud, are most firms monitoring for this sort of thing? I don’t see it in my engagements.
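It’s not a full defense, but a cheap first guardrail is a billing alarm, so a usage-driven cost spike at least pages somebody. Here’s a minimal sketch using boto3; the threshold and SNS topic ARN are placeholders, and it assumes you’ve already enabled billing alerts on the account (billing metrics are only published in us-east-1):

```python
import boto3

# Billing metrics are published to CloudWatch in us-east-1 only.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-budget",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                 # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=5000.0,             # placeholder: alert past $5,000 of month-to-date spend
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[
        "arn:aws:sns:us-east-1:123456789012:billing-alerts"  # placeholder SNS topic
    ],
)
```

Pair it with per-service alarms or AWS Budgets if you want to catch an egress spike specifically, rather than total spend.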

Something similar happened to me. I wrote: When Mailchimp fraudulently charged my credit card. It really happened. Do I think it was intentional? You’ll have to read the article to get my 360-degree take on it.

Conclusion

To conclude, NASA moved 247 petabytes into AWS and only later discovered how steep the EGRESS costs would be. Real-world testing and prototyping could have surfaced those charges early. It’s tough to keep up with AWS’s constantly evolving services and predict prices in advance, so expect hidden costs, take vendor lock-in seriously, and keep an eye on usage-driven billing. If you have questions, drop them in the comments. Thanks for reading!
