Software development has always made use of libraries, off-the-shelf components that are shared between different projects. These allow you to stand on the shoulders of others and build bigger things. Frameworks do the same thing: they provide a context to build on. Ruby on Rails, for example, provides a great starting framework for building web applications, managing sessions in an elegant way. Continue reading “5 Scalability Pitfalls to Avoid”
When migrating to the cloud, consider security and resource variability, the cultural shift for operations, and the new cost model. Continue reading “4 Considerations Migrating to The Cloud”
Spencer Johnson is a great writer. His business book classic was a real page-turner. He takes a page from the REWORK book, and that’s a good thing.
Who Moved My Cheese is a story about mice living in a maze, happy and content that they have an unlimited supply of cheese. Then one day the cheese runs out. Continue reading “Review – Who Moved My Cheese”
There are a lot of components that make up modern internet websites, and a lot of places to get stuck in the mud. Website performance starts with the browser: what caching it is doing and what bandwidth it has to your server; what the webserver is doing (whether and how it caches) and whether it has sufficient memory; then what the application code is doing; and lastly how it interacts with the backend database. Continue reading “Top 3 Questions From Clients”
With the fast growth of virtualized data centers, and companies like Google, Amazon and Facebook, it’s easy to forget how much is built on open-source components, aka commodity software. In a very real way open-source has enabled the huge explosion of commodity hardware, the fast growth of the internet itself, and now the further acceleration through cloud services, cloud infrastructure, and virtualization of data centers.
Your typical internet stack and application now stands on the shoulders of tens of thousands of open source developers and projects. Let’s look at a few of them. Continue reading “Open Source Enables the Cloud”
One very strong case for cloud computing is that it can satisfy applications with seasonal traffic patterns. One way to test the advantages of the cloud is through a hybrid approach.
Cloud infrastructure can be built completely through scripts. You can spin up specific AMIs or machine images, automatically install and update packages, install your credentials, start up services, and you’re running.
All of these steps can be performed in advance of your need, at little cost. Simply build and test. When you’re finished, shut down those instances. What you walk away with is the scripts. What do we mean?
The power here is that you carry zero cost for that burst capacity until you need it. You’ve already built the automation scripts and have them in place. When your capacity planning warrants it, spin up additional compute power and watch your internet application scale horizontally. Once your busy season is over, scale back and disable your usage until you need it again.
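To make “zero cost until you need it” concrete, here is a back-of-the-envelope comparison in Python. The fleet sizes and hourly rate are hypothetical placeholders, not real AWS pricing:

```python
HOURS_PER_MONTH = 730

def monthly_cost(instances, hourly_rate):
    """Cost of running a fleet of servers for one month."""
    return instances * hourly_rate * HOURS_PER_MONTH

rate = 0.10              # hypothetical $/hour per instance, not actual AWS pricing
baseline, peak = 4, 12   # hypothetical fleet sizes

# Physical-hardware model: sized for peak capacity all 12 months.
always_on = 12 * monthly_cost(peak, rate)

# Scripted burst model: baseline for 10 months, peak only for the 2 busy months.
burst = 10 * monthly_cost(baseline, rate) + 2 * monthly_cost(peak, rate)

print(f"always-on: ${always_on:,.0f}  burst: ${burst:,.0f}")
```

Under these made-up numbers the seasonal approach costs less than half of keeping peak capacity running year-round, and the gap widens the spikier your traffic is.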
Shoe leather cost is similar to opportunity cost. It refers to the cost of counteracting inflation by keeping less of your assets in cash. That strategy requires more trips to the bank and more walking, and so incurs a cost in the wearing out of the leather in your shoes.
All joking aside, it’s an interesting idea. It highlights how there are all sorts of hidden costs to different strategies. There are hidden costs to using coupons, loyalty cards, frequent flyer miles, managing assets & investments, hiring resources and in general running a business. Let’s look at a few. Continue reading “iHeavy Insights 83 – Shoe Leather Cost”
Rework is chock-full of ideas
Jason Fried and David Heinemeier Hansson’s new book REWORK is one of the best startup business books I’ve read since Alan Weiss’ Million Dollar Consulting. If you’re already a fan of their Signal vs. Noise blog, you’ll be familiar with their terse style. Sharp and to the point.
Which is why you can pick it up and read it in a few hours. You’ll want to, because it’s well written and pared down to essentials. In fact the book reads like their workflow advice: less mass, do it yourself, cut out the fat, concentrate on essentials. As such they are clearly practicing what they preach, which I like. Continue reading “Book Review – Rework”
If you’re headhunting a cloud computing expert, specifically someone who knows Amazon Web Services (AWS) and EC2, you’ll want to have a battery of questions to ask them to assess their knowledge. As with any technical interview, focus on concepts and the big picture. As the 37signals folks like to say, “hire for attitude, train for skill”. Absolutely!
If you want more general info about Amazon Web Services, read our Intro to EC2 Deployments.
1. Explain Elastic Block Storage. What type of performance can you expect? How do you back it up? How do you improve performance?
EBS is a virtualized SAN, or storage area network. That means it is RAID storage to start with, so it’s redundant and fault tolerant. If disks die in that RAID, you don’t lose data. Great! It is also virtualized, so you can provision and allocate storage and attach it to your server with various API calls. No calling the storage expert and asking him or her to run specialized commands from the hardware vendor.
Performance on EBS can exhibit variability. That is, it can go above the SLA performance level, then drop below it. The SLA gives you an average disk I/O rate you can expect. This can frustrate some folks, especially performance experts who expect reliable, consistent disk throughput from a server. Traditional physically hosted servers behave that way. Virtual AWS instances do not.
Related: Is Amazon too big to fail?
Back up EBS volumes by using the snapshot facility, via API call or a GUI interface like ElasticFox.
Improve performance by using Linux software RAID and striping across four volumes.
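Striping helps because RAID-0 lays chunks out round-robin across the volumes, so a large sequential read can pull from all four at once. A toy model of that layout (a sketch of the idea, not real mdadm or EBS behavior):

```python
def stripe(num_chunks, volumes=4):
    """RAID-0 layout: chunk i lands on volume i mod N, round-robin."""
    return {chunk: chunk % volumes for chunk in range(num_chunks)}

layout = stripe(8)
per_volume = [sum(1 for v in layout.values() if v == vol) for vol in range(4)]
print(per_volume)   # every volume holds an equal share, so reads parallelize
```

In practice the array is built with Linux’s mdadm tool, and aggregate throughput approaches four times a single volume, still subject to EBS’s variability.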
2. What is S3? What is it used for? Should encryption be used?
S3 stands for Simple Storage Service. You can think of it like FTP storage: you can move files to and from it, but not mount it like a filesystem. AWS automatically puts your snapshots there, as well as AMIs. Encryption should be considered for sensitive data, as S3 is a proprietary technology developed by Amazon itself, and as yet unproven from a security standpoint.
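A toy model of S3’s object semantics may help: it is a flat key/value store that you PUT and GET whole objects from, not a filesystem you mount. The bucket contents and key names here are purely illustrative:

```python
bucket = {}   # one S3 bucket modeled as a flat dict

def put_object(key, body):
    bucket[key] = body        # whole-object writes, no partial updates

def get_object(key):
    return bucket[key]        # whole-object reads by key

put_object("backups/2011-05-01/db.sql.gz", b"dump bytes")
assert get_object("backups/2011-05-01/db.sql.gz") == b"dump bytes"

# There is no mkdir or mount point: "backups/2011-05-01/" is just part of the key.
assert "backups" not in bucket
```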
3. What is an AMI? How do I build one?
AMI stands for Amazon Machine Image. It is effectively a snapshot of the root filesystem. Commodity hardware servers have a BIOS that points to the master boot record of the first block on a disk. A disk image, though, can sit anywhere physically on a disk, so Linux can boot from an arbitrary location on the EBS storage network.
Need an AWS expert? Email me for a quote hullsean @ gmail.com
Build a new AMI by first spinning up an instance from a trusted AMI, then adding packages and components as required. Be wary of putting sensitive data onto an AMI. For instance, your access credentials should be added to an instance after spin-up. With a database, mount an outside volume that holds your MySQL data after spin-up as well.
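A small sketch of that ordering rule: generic packages get baked into the image, while credentials and data volumes are attached only after an instance boots. The function and file names are illustrative, not an AWS API:

```python
def build_ami(base_files, packages):
    """Snapshot an image from a trusted base plus generic packages only."""
    image = dict(base_files)
    for pkg in packages:
        image[f"/usr/bin/{pkg}"] = f"{pkg} binary"
    return image

def boot_instance(image, credentials=None, data_volume=None):
    """Secrets and data are attached after spin-up, never baked into the AMI."""
    instance = dict(image)
    if credentials:
        instance["/root/.aws/credentials"] = credentials
    if data_volume:
        instance["/var/lib/mysql"] = data_volume
    return instance

ami = build_ami({"/bin/sh": "shell"}, ["nginx", "mysql-client"])
server = boot_instance(ami, credentials="PLACEHOLDER-KEY", data_volume="mysql-data")

assert "/root/.aws/credentials" not in ami    # nothing sensitive on the image
assert "/root/.aws/credentials" in server     # injected only at boot time
```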
4. Can I vertically scale an Amazon instance? How?
Yes. This is an incredible feature of AWS and cloud virtualization. Spin up a new, larger instance than the one you are currently running. Pause that instance, then detach its root EBS volume and discard it. Next, stop your live instance and detach its root volume. Note the unique device ID and attach that root volume to your new server. Then start it again. Voila, you have scaled vertically in place!
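Here is that sequence as a toy state model. This is purely illustrative: the real steps are EC2 operations (stop, detach-volume, attach-volume, start), and the volume ID below is made up:

```python
class Instance:
    """Tiny stand-in for an EC2 server; only what the walkthrough needs."""
    def __init__(self, size):
        self.size = size
        self.state = "running"
        self.root_volume = None

    def stop(self):
        self.state = "stopped"

    def start(self):
        assert self.root_volume is not None, "no root volume attached"
        self.state = "running"

def move_root_volume(old, new):
    """Detach the root volume from one stopped instance, attach it to another."""
    assert old.state == "stopped" and new.state == "stopped"
    new.root_volume, old.root_volume = old.root_volume, None

small = Instance("m1.small")
small.root_volume = "vol-1234abcd"   # made-up volume ID
large = Instance("m1.large")         # the new, larger instance

large.stop()
large.root_volume = None             # discard the new server's scratch root
small.stop()                         # stop the live instance
move_root_volume(small, large)       # same disk, bigger server
large.start()
```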
5. What is auto-scaling? How does it work?
Autoscaling is a feature of AWS which allows you to automatically configure, provision, and spin up new instances without the need for your intervention. You do this by setting thresholds and metrics to monitor. When those thresholds are crossed, a new instance of your choosing will be spun up, configured, and rolled into the load balancer pool. Voila, you’ve scaled horizontally without any operator intervention!
With MySQL databases autoscaling can get a little dicey, so we wrote a guide to autoscaling MySQL on Amazon EC2.
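At its core the autoscaling loop is simple threshold logic. A minimal sketch, with hypothetical CPU thresholds and fleet limits standing in for the metrics and alarms AWS evaluates for you:

```python
def desired_capacity(current, cpu_percent, scale_up_at=70, scale_down_at=25,
                     minimum=2, maximum=10):
    """Return the new instance count after one evaluation period."""
    if cpu_percent > scale_up_at and current < maximum:
        return current + 1          # threshold crossed: add an instance
    if cpu_percent < scale_down_at and current > minimum:
        return current - 1          # load dropped: retire an instance
    return current                  # within bounds: no change

fleet = 2
for cpu in [80, 85, 90, 40, 20, 15]:   # simulated load samples over time
    fleet = desired_capacity(fleet, cpu)
print(fleet)
```

The fleet grows through the load spike and shrinks back afterward, which is exactly the seasonal pattern that makes autoscaling pay off.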
6. What automation tools can I use to spin up servers?
The most obvious way is to roll your own scripts using the AWS API tools. Such scripts could be written in bash, Python, or another language of your choice. The next option is to use a configuration management and provisioning tool like Puppet or, better, its successor Opscode Chef. Ansible is also an excellent option because it doesn’t require an agent and can run your shell scripts as-is. You might also look toward CloudFormation or Terraform. The resulting code captures your entire infrastructure, and can be checked into your git repository & version controlled. You can even unit test this way!
7. What is configuration management? Why would I want to use it with cloud provisioning of resources?
Configuration management has been around for a long time in web operations and systems administration, yet its cultural popularity has been limited. Most systems administrators configure machines the way software was developed before version control: by manually making changes on servers. Each server can then be, and usually is, slightly different. Troubleshooting, though, is straightforward, as you log in to the box and operate on it directly. Configuration management brings a large automation tool into the picture, managing servers like the strings of a puppet. This forces standardization, best practices, and reproducibility, as all configs are versioned and managed. It also introduces a new way of working, which is the biggest hurdle to its adoption.
Enter the cloud, and configuration management becomes even more critical. That’s because virtual servers such as Amazon’s EC2 instances are much less reliable than physical ones. You absolutely need a mechanism to rebuild them as-is at any moment. This pushes best practices like automation, reproducibility, and disaster recovery to center stage.
While on the subject of configuration management, take a quick peek at our hiring a devops guide.
8. Explain how you would simulate perimeter security using the Amazon Web Services model.
The traditional perimeter security we’re already familiar with, using firewalls and so forth, is not supported in the Amazon EC2 world. Instead, AWS supports security groups. One can create a security group for a jump box with ssh access, only port 22 open. From there, a webserver group and a database group are created. The webserver group allows ports 80 and 443 from the world, but port 22 *only* from the jump box group. Further, the database group allows port 3306 from the webserver group and port 22 from the jump box group. Add any machines to the webserver group and they can all hit the database. No one from the world can, and no one can ssh directly to any of your boxes.
The more full-featured way to go is VPC, Amazon’s acronym for Virtual Private Cloud. You can create virtual networks, both private and public, with subnets and so on, all within VPCs. You then spin up servers and resources inside those virtual networks. VPCs can be controlled with security groups or the more powerful but messy access control lists.
Want to lock this configuration down further? Only allow ssh access from specific IP addresses on your network, or allow just your subnet.
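The jump box / webserver / database scheme above can be modeled in a few lines, which makes a nice whiteboard exercise for this interview question. Group names and rules here are illustrative:

```python
# Each security group maps to the set of (source, port) pairs it admits.
# A source is either another group's name or "world" (0.0.0.0/0).
RULES = {
    "jump": {("world", 22)},                                # ssh from anywhere
    "web":  {("world", 80), ("world", 443), ("jump", 22)},  # site open, ssh via jump
    "db":   {("web", 3306), ("jump", 22)},                  # MySQL from web tier only
}

def allowed(source, dest_group, port):
    """Is traffic from `source` allowed into `dest_group` on `port`?"""
    return (source, port) in RULES.get(dest_group, set())

assert allowed("world", "web", 443)      # anyone can reach the site
assert not allowed("world", "db", 3306)  # the world can't touch MySQL
assert not allowed("world", "web", 22)   # ssh only via the jump box
assert allowed("jump", "db", 22)
```

A good candidate will reason about reachability exactly this way: by group membership and rule, not by individual IP address.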
Did you make it this far?!?! Grab our newsletter.
Does anyone remember 15 years ago when the dot-com boom was just starting? A lot of companies were running on Sun. Sun was the best hardware you could buy for the price. It was reliable, and a lot of engineers had experience with its operating system, SunOS, a flavor of Unix.
Yet suddenly companies were switching to cheap, crappy hardware. The stuff failed more often, had lower quality control, and used cheaper, slower buses. Despite all that, cutting-edge firms and startups were moving to commodity hardware in droves. Why was that? Continue reading “The New Commodity Hardware Craze aka Cloud Computing”