There are a lot of components that make up a modern website, and a lot of places to get stuck in the mud. Website performance starts with the browser: what caching it is doing, its bandwidth to your server, what the webserver is doing (caching or not, and how), whether the webserver has sufficient memory, then what the application code is doing, and lastly how it is interacting with the backend database.
With all this complexity, it's no wonder so many sites struggle. An analysis typically starts with load testing to stress your setup so you can watch for leaks. Then tools are applied to the webserver tier and the database tier to see where the bottleneck lies. It may be in the network itself, and how much data is passed back and forth. Or it may be badly formed queries asking the database to do more work than it needs to. Imagine looking up a friend's phone number with only a first name: you'll be there all day digging through the phone book, getting nowhere. That's what happens when a database isn't indexed properly.
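The phone-book analogy is easy to see for yourself. Here's a small sketch using SQLite (via Python's standard library); the table and column names are hypothetical, but the before-and-after query plans show exactly what a missing index costs:

```python
import sqlite3

# Hypothetical phone-book table to illustrate the analogy above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE phone_book (first_name TEXT, last_name TEXT, number TEXT)")
conn.executemany(
    "INSERT INTO phone_book VALUES (?, ?, ?)",
    [("Ann", "Adams", "555-0100"), ("Bob", "Baker", "555-0101"), ("Cal", "Cole", "555-0102")],
)

# Without an index, the database has to scan every row to find a match.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT number FROM phone_book WHERE last_name = ?", ("Baker",)
).fetchall()
print(plan)  # the plan detail reports a full table SCAN

# Add an index on the column we actually search by...
conn.execute("CREATE INDEX idx_last_name ON phone_book (last_name)")

# ...and the same query now seeks directly through the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT number FROM phone_book WHERE last_name = ?", ("Baker",)
).fetchall()
print(plan)  # the plan detail reports a SEARCH USING INDEX idx_last_name
```

The same `EXPLAIN`-style inspection exists in MySQL and PostgreSQL, and it's usually the first tool we reach for when a query is dragging.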
At the end of the day we’ve seen these types of scenarios over and over, and know where to look to sniff out the trouble.
2. How can we make our database go faster?
Databases are tremendous workhorses, and they do a hero's job of storing and retrieving just the information we want. But they require care and feeding.
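What does "care and feeding" look like in practice? A minimal sketch, again using SQLite as a stand-in (the commands differ by database, but every engine has equivalents for refreshing planner statistics, reclaiming dead space, and sanity-checking its files):

```python
import sqlite3

# Stand-in table; in real life this would be your production schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)", [(9.99,), (24.50,), (3.25,)])
conn.commit()  # VACUUM cannot run inside an open transaction

conn.execute("ANALYZE")  # refresh the statistics the query planner relies on
conn.execute("VACUUM")   # reclaim space left behind by deletes and updates

# Quick health check: SQLite reports "ok" when the file is not corrupted.
status = conn.execute("PRAGMA integrity_check").fetchone()[0]
print(status)  # ok
```

In PostgreSQL the analogues are `ANALYZE` and `VACUUM` (or autovacuum tuning); in MySQL, `ANALYZE TABLE` and `OPTIMIZE TABLE`. The point is the same: schedule the feeding, don't wait for the slowdown.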
3. Are we covered as far as backups & high availability?
I hear this question a lot as well, and it's a good one to be concerned about. With all your crown jewels in a central data store, you want to know for sure that you can restore it. What's the best way to know? Actually restore your backups. Run a fire drill. Rebuild the whole thing! Here are all the pieces you'll need to consider:
If you're lucky enough to be in the cloud, all of this can be scripted. Run fire drills to your heart's content, and kill the instances when you're satisfied that you're covered!
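The scripted fire drill can be as simple as: take a backup, restore it into a fresh instance, and verify the data actually came back. Here's a sketch using SQLite's online backup API; the paths and the `users` table are hypothetical, and for PostgreSQL or MySQL you'd swap in `pg_dump`/`pg_restore` or `mysqldump` against a throwaway instance:

```python
import os
import sqlite3
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    live_path = os.path.join(tmp, "live.db")
    backup_path = os.path.join(tmp, "backup.db")

    # Stand-in for the production database.
    live = sqlite3.connect(live_path)
    live.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    live.executemany("INSERT INTO users (email) VALUES (?)",
                     [("a@example.com",), ("b@example.com",)])
    live.commit()

    # Take the backup while the database is live.
    backup = sqlite3.connect(backup_path)
    live.backup(backup)
    backup.close()
    live.close()

    # The drill itself: restore into a brand-new connection and verify.
    restored = sqlite3.connect(backup_path)
    count = restored.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 2, "fire drill failed: restored row count does not match"
    print("fire drill passed:", count, "rows restored")
    restored.close()
```

A backup you've never restored is a hope, not a backup; the verification step at the end is the whole point of the drill.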