Category Archives: Scalability

Is AWS the patient that needs constant medication?

I was just reading High Scalability about Why Swiftype moved off Amazon EC2 to Softlayer and saw great wins!

We’ve all heard by now how awesome the cloud is. Spin up infrastructure instantly. Just add water! No up front costs! Autoscale to meet seasonal application demands!

But less well known or even understood by most engineering teams are the seasonal weather patterns of the cloud environment itself!

Join 28,000 others and follow Sean Hull on twitter @hullsean.

Sure there are firms like Netflix, who have turned the fickle cloud into a model of virtue & reliability. But most of the firms I work with every day have moved to Amazon as though it’s regular bare-metal. And encountered some real problems in the process.

1. Everyday hardware outages

Many of the firms I’ve seen hosted on AWS don’t realize that the servers fail so often. Amazon actually chooses cheap commodity components as a cost-savings measure. The assumption is that resilience should be built into your infrastructure using devops practices & automation tools like Chef & Puppet.

The sad reality is most firms provision the usual way, through the dashboard, with no safety net.

Also: Is your cloud speeding for a scalability cliff

2. Ongoing network problems

Network latency is a big problem on Amazon. And it will affect you more than you expect. One reason is you’re most likely sitting on EBS as your storage. EBS? That’s Elastic Block Storage, Amazon’s NAS solution. Your little cheapo instance has to cross the network to get to storage. That *WILL* affect your performance.

If you’re not already doing so, please start using their most important & easily missed performance feature – provisioned IOPS.
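
If you’re scripting your provisioning anyway, here’s a minimal sketch of requesting provisioned IOPS with boto3, Amazon’s Python SDK. The availability zone, sizes, IOPS figures and the RDS instance name below are placeholders I’ve made up, so adjust them for your own environment.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an EBS volume with provisioned IOPS (io1) rather than the
# default general purpose storage. Size & Iops are placeholders.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=200,
    VolumeType="io1",
    Iops=2000,
)
print("created volume:", volume["VolumeId"])

# The same idea applies to RDS: move an existing instance onto
# provisioned IOPS storage. "mydbinstance" is a made-up identifier.
rds = boto3.client("rds", region_name="us-east-1")
rds.modify_db_instance(
    DBInstanceIdentifier="mydbinstance",
    Iops=2000,
    ApplyImmediately=True,
)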

Related: The chaos theory of cloud scalability

3. Hard to be as resilient as Netflix

By now we’ve all heard of firms such as Netflix building their Chaos Monkey to actively knock out servers, in an effort to test their self-healing infrastructure.

From what I’m seeing at startups, most have a bit of devops in place, a bit of automation, such as autoscaling around the webservers. But little in terms of cross-region deployments. What’s more, their database tier is protected only by multi-az or just a read-replica or two. These are fine for what they are, but will require real intervention when (not if) the server fails.

I recommend building a browse-only mode for your application, to eliminate downtime in these cases.

Read: 8 questions to ask an aws expert

4. Provisioning isn’t your only problem

But the cloud gives me instant infrastructure. I can spin up servers & configure components through an API! Yes this is a major benefit of the cloud, compared to 1-2 hours in traditional environments like Softlayer or Rackspace. But weigh that against an outage every couple of years in those environments, versus Amazon’s hardware failing a couple of times a year, more if you’re unlucky.

Meanwhile you’re going to deal with seasonal weather problems *INSIDE* your datacenter. Think of these as swarms of customers invading your servers, like a DDOS attack, but self-inflicted.

Amazon is like a weak immune system attacking itself all the time, requiring constant medication to keep the host alive!

Also: 5 Things toxic to scalability

5. RDS is going to bite you

Besides all these other problems, I’m seeing more customers build their applications on the managed database solution MySQL RDS. I’ve found RDS terribly hard to manage. It introduces downtime at every turn, where standard MySQL would incur none.

In my experience, upgrading RDS is a shit-storm that will not end!

Also: Does open source enable the cloud?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Can entrepreneurs learn from how science beat Cholera?

When I picked up Johnson’s book, I knew nothing about Cholera. Sure I’d heard the name, but I didn’t know what a plague it was, during the 19th century.

Johnson’s book is at once a thriller about the deadly progress of the disease. But in that story, we learn of the squalor the inhabitants of Victorian England endured before public works & sanitation. We learn of architecture & city planning, statistics & how epidemiology was born. The evidence is woven together with map making, the study of pandemics, information design, environmentalism & modern crisis management.

Join 28,000 others and follow Sean Hull on twitter @hullsean.

“It is a great testimony to the connectedness of life on earth that the fates of the largest and the tiniest life should be so closely dependent on each other. In a city like Victorian London, unchallenged by military threats and bursting with new forms of capital and energy, microbes were the primary force reigning in the city’s otherwise runaway growth, precisely because London had offered Vibrio cholerae (not to mention countless other species of bacterium) precisely what it had offered stock-brokers and coffee-house proprietors and sewer-hunters: a whole new way of making a living.”

1. Scientific pollination

John Snow was the investigator who solved the riddle. He didn’t believe that putrid smells carried disease, the prevailing miasma theory of the day.

“Part of what made Snow’s map groundbreaking was the fact that it wedded state-of-the-art information design to a scientifically valid theory of cholera’s transmission. “

Also: Did Airbnb have to fail?

2. Public health by another name

“The first defining act of a modern, centralized public health authority was to poison an entire urban population”

Although they didn’t know it at the time, the dumping of waste water directly into the Thames river was in fact poisoning people & wildlife in the surrounding areas.

In large part the establishment was blinded by its belief in miasma, the theory that disease originated from bad smells & thus traveled through the air.

Related: 5 reasons to move data to amazon redshift

3. A Generalist saves the day

The interesting thing about John Snow was how much of a generalist he really was. Because of this he was able to see things across disciplines that others of the time were not able to see.

“Snow himself was a kind of one-man coffeehouse: one of the primary reasons he was able to cut through the fog of miasma was his multi-disciplinary approach, as a practicing physician, mapmaker, inventor, chemist, demographer & medical detective”

Read: Are generalists better at scaling the web?

4. Enabling the growth of modern cities

The discovery of the cause of Cholera prompted the city of London to undertake a huge public works project, to build sewers that would flush waste water out to sea. Truly, it was this discovery and its solution that enabled much of the population density we see in the 21st century. Without modern sanitation, cities of 20 million plus would certainly not be possible.

Also: Is automation killing old-school operations?

5. Bird Flu & modern crisis management

In recent years we have seen public health officials around the world push for Thai poultry workers to get their flu shots. Why? Because although avian flu only spreads from animals to humans now, were it to encounter a person who had our run-of-the-mill flu, it could quickly learn to move from human to human.

All these disciplines of science, epidemiology and so forth direct decisions we make today. And much of that is informed by what Snow pioneered with the study of cholera.

Also: 5 Things toxic to scalability

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Is Agile right for fixing performance issues?

I was sifting through the CTO school email list recently, and the discussion of performance tuning came up. One manager had posted asking how to organize sprints, and break down stories for the process.

Join 29,000 others and follow Sean Hull on twitter @hullsean.

Another CTO chimed in with a response…

“Agile is not right for fixing performance issues.”

I agree with him & here’s why.

1. Agile roadblocks

At a very high level, agile seeks to organize work around sprints of a few weeks, and sets of stories within those sprints. The assumption here is that you have a set of identified issues. With software development, you have features you’re building. With performance tuning, it’s all about investigation.

How long will it take to solve the crime? Very good question!

Also: 5 things toxic to scalability

2. Reproduce problem

Are you seeing general site slowness? Is there a particular feature that loads extremely slowly? Or is there a report that runs forever? Whatever it is, you must first be able to reproduce it. If it’s general site slowness, identify when it is happening.

Related: Are SQL Databases dead?

3. Search for bottlenecks

Once you’ve reproduced your problem, next you want to start digging. Looking at logfiles can help you find errors, such as timeouts. The database has a slow query log, which you’ll definitely want to review. Slow queries can be surfaced by new code deploys, or middleware in front of your database, such as an ORM.

If you find your logfiles aren’t enabled, it’s a good first step to turn them on. Also look at how you’re caching. The browser should be directed to cache, assets should be on a CDN, a page cache should protect your application server, and an object cache should sit in front of your database.
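
As a concrete first step, here’s a rough sketch of turning the MySQL slow query log on from Python. I’m assuming the pymysql driver and placeholder credentials; SET GLOBAL needs a privileged user, and on RDS you’d flip these settings in the parameter group instead.

import pymysql

# Placeholder connection details -- point this at your own server.
conn = pymysql.connect(host="127.0.0.1", user="admin", password="secret")

with conn.cursor() as cur:
    # Log any statement that takes longer than one second.
    cur.execute("SET GLOBAL slow_query_log = 1")
    cur.execute("SET GLOBAL long_query_time = 1")
    cur.execute("SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log'")

    # Confirm the settings took effect.
    cur.execute("SHOW GLOBAL VARIABLES LIKE 'slow_query%'")
    for name, value in cur.fetchall():
        print(name, value)

conn.close()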

Read: Is five nines a myth that just won’t die?

4. Find the root cause

As you dig deeper into your problem, you’ll likely uncover the root of your scalability problem. Likely causes include synchronous, serial or locking processes & requests, object relational modelers, lack of caching or new code that has not been tuned well.

Also: Did Airbnb, reddit , heroku & flipboard have to fail?

5. Optimize

This is what I think of as the fun part. You’ve measured the issues, found the problem. Now it’s time to fix it. This is an exciting moment, to bring real benefit to the business. Eliminating a performance problem can feel like springtime at the end of a long cold winter!

Also: Is zero downtime even possible on RDS?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

The chaos theory of cloud scalability

The.Rohit – Flickr

Reading Benedict Evans weekly newsletter, you’re bound to bump into something new & useful. His newsletter covers Mobile, but that also means it touches on a lot of other areas of tech, innovation & startups.

This week he pointed me to A Weissman’s The Chaos Theory of Startups. He argues a VC’s job is to help a startup identify the right framework. It’s about finding the signal in the noise.

Join 29,000 others and follow Sean Hull on twitter @hullsean.

I think you can carry this idea over to technical operations today. Here are a few key maxims I follow to keep you on the scalable track.

1. Degrade gracefully

You’ve heard it before, but have you done anything about it?

Build a read-only or browse-only mode into your application. Do it now. You will thank me. When your database goes down unexpectedly (with RDS this might happen sooner than you think), you want to be able to use your lovely read-only slave database. Browse-only mode forces developers to add read-only support to most application functions, keeping the site up and running without a full, visible and ugly outage.

Which brings me to point two: be sure to have copies of your production database. Real, live, read-only copies. In Amazon speak this is a read-replica; in MySQL it’s a slave database. Most startups I see these days have this, but if you’re one of the ones dragging your feet, do this now.
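
Here’s a minimal sketch of how that looks in application code, in Python. The hostnames, credentials & the BROWSE_ONLY switch are placeholders I’ve made up for illustration; the point is that reads go to the replica, and every write path checks the flag before touching the master.

import os
import pymysql

# Flip this switch (config file, environment variable, admin page)
# when the master is down or under maintenance.
BROWSE_ONLY = os.environ.get("BROWSE_ONLY", "0") == "1"

# Placeholder hosts: a writable master and a read-only replica.
MASTER = {"host": "db-master.internal", "user": "app", "password": "secret", "db": "myapp"}
REPLICA = {"host": "db-replica.internal", "user": "app", "password": "secret", "db": "myapp"}

class BrowseOnlyError(Exception):
    """Raised when a write is attempted in browse-only mode."""

def read_connection():
    # Reads always go to the replica, keeping load off the master.
    return pymysql.connect(**REPLICA)

def write_connection():
    if BROWSE_ONLY:
        # The app catches this and shows a friendly "temporarily
        # read-only" message instead of a 500 page.
        raise BrowseOnlyError("site is in browse-only mode")
    return pymysql.connect(**MASTER)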

Also: Is the difference between dev & ops a four-letter word?

2. Monitor & measure

Amazon’s CloudWatch is fine for what it is, and so is New Relic. But employing a dedicated tool just for monitoring, such as Nagios & Cacti, can give you much more granular intelligence about what’s happening with your infrastructure. Nagios gives you the monitoring & alerting, Cacti gives you the history. It’s like a BI reporting tool for infrastructure.

Related: Is automation killing old-school operations?

3. Keep components simple

Keep it simple, stupid. Don’t adopt new technologies, languages, or versions of software without first vetting them. Ask questions:

o Is there an existing piece of software or service that can overlap this new one, killing two birds with one stone?
o Does everybody know this new technology?
o Does this choice of technology solve any other broad problems we have?
o Is there a large community around the project?
o Are there a lot of engineers with experience in this chosen technology?

Tellingly, many startups don’t have an operations person to start with. In those, the danger is developers choose new solutions, with no push back.

I asked… Does a four letter word divide dev and ops?

Read: Do managers underestimate operational cost?

4. Don’t force database abstraction

Object Relational Modelers, aka database middleware, are great in theory. We want a library that takes database & SQL drudgery away from developers. Why reinvent the wheel?

The trouble is that database-independent code doesn’t work, and never has. ORMs are painfully inefficient, selecting all columns, or repeatedly reading rows from tables. This causes serious traffic jams inside your database.

They come in various guises: Cake PHP, Active Record for Ruby, Hibernate for Java, SQL Alchemy for Python.
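
To make that traffic jam concrete, here’s a small self-contained illustration using Python’s built-in sqlite3. The first loop is the row-at-a-time, N+1 pattern a lazily configured ORM tends to generate under the hood; the single join below it is what you’d write by hand.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 9.99), (2, 1, 19.99), (3, 2, 5.00);
""")

# The N+1 pattern a lazy-loading ORM quietly produces:
# one query for the users, then one more query per user.
for user_id, name in conn.execute("SELECT id, name FROM users").fetchall():
    orders = conn.execute(
        "SELECT total FROM orders WHERE user_id = ?", (user_id,)
    ).fetchall()
    print(name, sum(total for (total,) in orders))

# The hand-written alternative: one round trip, one join.
rows = conn.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
""").fetchall()
print(rows)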

Also: Is the difference between dev & ops a four-letter word?

5. Be asynchronous

This means don’t make your application code wait. Make asynchronous calls to APIs & check back later, use software queues so traffic backups don’t clog your components & communication.
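
Here’s a hedged sketch of the idea using nothing but Python’s standard library: the request handler drops work onto a queue and returns immediately, while a background worker drains it at its own pace. In production you’d reach for something durable like SQS, RabbitMQ or Resque, but the shape is the same.

import queue
import threading
import time

work_queue = queue.Queue()

def worker():
    # Background thread: drain the queue at its own pace.
    while True:
        job = work_queue.get()
        print("processing", job)
        time.sleep(0.5)          # stand-in for a slow API call or email send
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_request(user_id):
    # The request handler doesn't wait for the slow part; it enqueues
    # the job and returns to the user right away.
    work_queue.put({"task": "send_welcome_email", "user_id": user_id})
    return "ok"

print(handle_request(42))
work_queue.join()   # in a real app the worker just keeps running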

Avoid any type of two-phase or multi-phase commit. These are common in clustered databases, forcing a serialization point so nodes can agree on what data looks like. However they’re toxic to scalability. Look for technologies that use eventually consistent algorithms.

Also: Is the difference between dev & ops a four-letter word?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Why you need a performance dashboard like StackExchange

Most startups talk about performance as crucial. But often, with all the other pressing business demands, it can be forgotten until it becomes a real problem.

Flipping through High Scalability today, I found a post about Stack Exchange’s performance dashboard.

Join 28,000 others and follow Sean Hull on twitter @hullsean.

The dashboard for Stack Exchange performance is truly a tectonic shift. They have done a tremendous job with the design, to make this all visually appealing.

But to focus just on the visual aesthetics would be to miss many of the other impacts to the business.

1. Highlight reliability to the business

Many dashboards, from Cacti to New Relic, present performance data. But they’re also quite technical and complicated to understand. This inhibits their usefulness across the business.

The dashboard at Stack Exchange boils performance down to the essentials. What customers are viewing, how quickly the site is serving them, and where bottlenecks are if any.

Also: Is the difference between dev & ops a four-letter word?

2. What’s our architecture?

Another thing their dashboard does is illustrate their infrastructure clearly.

I can’t count the number of startups I’ve worked at where there are extra services running, odd side utility boxes performing tasks, and general disorganization. In some cases engineering can’t tell you what a particular service or server does.

By outlining the architecture here, they create a living network diagram that everyone benefits from.

Related: Is automation killing old-school operations?

3. Because Fred Wilson says so

If you’re not convinced by what Google says, consider Fred Wilson, who surely should know. He says speed is an essential feature. In fact *the* essential feature.

The 10 Golden Principles of Successful Web Apps from Carsonified on Vimeo.

Read: Do managers underestimate operational cost?

4. Focus on page loading times!

If you scroll to the very bottom of the dashboard, you have two metrics. Homepage load time, and their “questions” page. The homepage is a metric everyone can look at, as many customers will arrive at your site through this portal. The questions page will be different for everyone. But there will be some essential page or business process that it highlights.

By boiling it down to just these two metrics, we focus on what’s most important. All of this computing power, all these servers & networks, working together to bring the fastest page load times possible!

Also: Is the difference between dev & ops a four-letter word?

5. Expose reliability to the customer

This performance page doesn’t just face the business. It also faces the customers. It lets them know how important speed is, and underscores how seriously the business takes its customers. Having an outage, or a spike that’s slowing you down? Customers have some transparency into what’s happening.

Also: Is the difference between dev & ops a four-letter word?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Is automation killing old-school operations?

Join 27,000 others and follow Sean Hull on twitter @hullsean.

I was shocked to find this article on ReadWrite: The Truth About DevOps: IT Isn’t Dead; It’s not even Dying. Wait a second, do people really think this?

Truth is I have heard whispers of this before. I was at a meetup recently where the speaker claimed “With more automation you can eliminate ops. You can then spend more on devs”. To an audience of mostly developers & startup founders, I can imagine the appeal.

1. Does less ops mean more devs?

If you’re listening to a platform service sales person or a developer who needs more resources to get his or her job done, no one would be surprised to hear this. If we can automate away managing the stack, we’ll be able to clear the way for the real work that needs to be done!

This is a very seductive perspective. But it may be akin to taking on technical debt, ignoring the complexity of operations and the perspective that can inform a longer view.

Puppet Labs’ Luke Kanies says “Become uniquely valuable. Become great at something the market finds useful.” I couldn’t agree more.

Read: Are SQL Databases Dead?

2. What happens when developers leave?

I would argue that ops have a longer view of product lifecycle. I for one have been brought in to many projects after the first round of developers have left, and teams are trying to support that software five years after the first version was built.

That sort of long term view, of how to refresh performance, and revitalize code is a unique one. It isn’t the “building the future” mindset, the sexy products, and disruptive first mover “we’re changing the world” mentality.

It’s a more stodgy & conservative one. The mindset is of reliability, simplicity, and long term support.

Also: How to hire a developer that doesn’t suck

3. What’s your mandate?

From what I’ve seen, devs & ops are divided by a four letter word.

That word I believe is “risk”. Devs have a mandate from the business to build features & directly answer customer requests today. Ops have a mandate for reliability, working against change and thinking in terms of making all that change manageable.

Different mandates mean different perspectives.

Related: What is Devops & why is it important?

4. Can infrastructure live as code?

Puppet, along with infrastructure automation & configuration management tools like Chef, offers the promise of fully automated infrastructure. But the truth is much, much more complex. As typical technology stacks expand from load balancer, webserver & database to multiple databases, caching server, search server, puppet masters, package repositories, monitoring & metrics collection & jump boxes, we’re all reaching a saturation point.

Yes automation helps with that saturation, but ultimately you need people with those wide-ranging skills to manage the complex web of dependencies when things fail.

And fail they will.

Check out: Why are MySQL DBA’s and ops so hard to find?

5. ORM’s and architecture

If you aren’t familiar, ORM is a rather dry-sounding name for a component that is regularly overlooked. It’s middleware sitting between application & database, and it drastically simplifies developers’ lives. It helps them write better code and get on with the work of delivering to the business. It’s no wonder ORMs are popular.

But as Ward Cunningham eloquently explains, they are surely technical debt that eventually must get paid. Indeed.

There is broad agreement among professional DBA’s. Each query should be written, each one tuned, and each one deployed. Just like any other bit of code. Handing that process to a library is doomed to failure. Yet ORM’s are still evolving, and the dream still lives on.

And all that because devs & ops have completely different perspectives. We need both of them to run modern internet applications. Let’s not forget that, folks. :)

Read this: Do managers and CTO’s underestimate operational costs?

Want more? Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Which tech do startups use most?

Leo Polovets of Susa Ventures publishes an excellent blog called Coding VC. There you can find some excellent posts, such as pitches by analogy, and an algorithm for seed round valuations and analyzing product hunt data.

He recently wrote a blog post about a topic near and dear to my heart, Which Technologies do Startups Use. It’s worth a look.

One thing to keep in mind looking over the data is that these are AngelList startups. So that’s not a cross section of all startups, nor does it cover more mature companies.

In my experience startups can get it right by starting fresh, evaluating the spectrum of new technologies out there, balancing sheer solution power with a bit of prudence and long term thinking.

I like to ask these questions:

o Which technologies are fast & high performance?
o Which technologies have a big, vibrant & robust community?
o Which technologies can I find plenty of engineers to support?
o Which technologies have low operational overhead?
o Which technologies have low development overhead?

1. Database: MySQL

MySQL holds a slight lead according to the AngelList data. In my experience it’s not overly complex to set up, and there are some experienced DBAs out there. That said, database expertise can still be hard to find.

We hear a lot about MongoDB these days, and it is surely growing in popularity. Although it doesn’t support joins and arbitrary slicing and dicing of data, it is a very powerful database engine. If your application needs more straightforward data access, it can bring you amazing speed improvements.

Postgres is a close third. It’s a very sophisticated database engine. Although it may have a smaller community than MySQL, overall it’s a more full featured database. I’d have no reservations recommending it.

Also: Top MySQL DBA Interview questions

2. Hosting: Amazon

Amazon Web Services is obviously the giant in the room. They’re big, they’re cheap, they’re nimble. You have a lot of options for server types, they’ve fixed many of the problems around disk I/O and so forth. Although you may still experience latency around multi-tenant related problems, you’ll benefit from a truly global reach, and huge cost savings from the volume of customers they support.

Heroku is included although they’re a different type of service. In some sense their offering is one part operations team & one part automation. Yes ultimately you are getting hosting & virtualization, but some things are tied down. Amazon RDS provides some parallels here. I wrote Is Amazon RDS hard to manage?. Long term you’re likely going to switch to AWS, Joyent or Rackspace for real scale.

I was surprised to see Azure on the list at all here, as I rarely see startups build on Microsoft technologies. It may work for the desktop & office, but it’s not the right choice for the datacenter.

Read: Are generalists better at scaling the web?

3. Languages: Javascript

Javascript & Node.js are clearly very popular. They are also highly scalable.

In my experience I see a lot of PHP & of course Ruby too. Java, although there is a lot of it out there, can tend to be a bear as a web dev language, and bring some additional complication, weight and overhead.

Related: Is Hunter Walk right about operations & startups?

4. Search: Elastic Search

I like that they broke apart search technology as a separate category. It is a key component of most web applications, and I do see a lot of Elastic Search & Solr.

That said I think this may be a bit skewed. I think by far the number one solution would be NO SPECIFIC SEARCH technology. That’s right, many times devs choose a database-centric approach, like FULLTEXT or others that perform painfully badly.

If this is you, consider these search solutions. They will bring you huge performance gains.

Check this: Are SQL Databases Dead?

5. Automation: Chef

As with search above, I’d argue there is a far more prevalent trend, and that the real #1 is to use none of these automation technologies at all.

Although I do think Chef, Docker & Puppet can bring you real benefits, it’s a matter of having them in the right hands. Do you have an operations team that is comfortable using them? When they leave in a year’s time, will your new devops also know the technology you’re using? Can you find a good balance between automation & manual configuration, and document accordingly?

Read: Why are database & operations experts so hard to find?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

5 ways startups misstep on scalability

Join 27,000 others and follow Sean Hull on twitter @hullsean.

1. Ignoring the database

Yes, your internet site sits on top of a database. Have you forgotten to take care of it?

Like a garden, it must be watered & tended. And a gardener will always scold you for leaving plants to wither. I guess as a database administrator by background, I’ve seen a lot of this. But truth be told it is in large part the cause of slowness.

Queries

Are you writing them? If you’re using an ORM middleware, you may be leaving this heavy lifting to a library. These are inefficient. Avoid the Vietnam of computer science.

Testing

Now that you’re writing SQL, are you testing & tuning each one? Think of it like turning apps off on your phone that you’re not using. Saves memory, battery, and general headache.

Monitoring

Now that you’ve gotten in the habit of caring for your database, keep at it. Monitor its health regularly. NewRelic as a service or Cacti, Ganglia or Collectd if you’d like to roll your own. Real data can reap real benefits.

Read: Are SQL Databases Dead

2. Shortage of caching

You’ve heard it before, we’ll say it again, make sure you’re caching. But where?

Content Network

Amazon has its CloudFront, Rackspace uses Akamai. There are many choices but the results are the same. Static assets such as HTML & CSS files, images & video content, all part of the page you are serving, get dished out closer to your users. It’s like only asking them to go to a corner deli for a soda, rather than the closest supermarket.

Webserver tier

There are many things you can do to cache at the webserver. In particular you can configure it to tell browsers to cache objects. One example is the Cache-Control header. That means a longer time-to-live, so objects don’t expire right away by default. You can always expire them manually. There are also ways to compress objects as well. See How to cache websites & boost speed.
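
Normally you’d set this in your nginx or Apache config, but here’s a minimal sketch of the same idea from application code, using only Python’s standard library, so you can see exactly what the browser receives.

from wsgiref.simple_server import make_server

def app(environ, start_response):
    # A long max-age tells the browser (and any CDN in front of you)
    # to hold onto this object for a week instead of re-requesting it.
    headers = [
        ("Content-Type", "text/css"),
        ("Cache-Control", "public, max-age=604800"),
    ]
    start_response("200 OK", headers)
    return [b"body { color: #333; }"]

if __name__ == "__main__":
    # Try: curl -I http://localhost:8000/ to see the Cache-Control header.
    make_server("", 8000, app).serve_forever()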

Between webserver & database

Are you using Memcache or redis? Caching here can reduce load on your database by as much as 10x. That’s like getting 10 servers for free, or one large one that would cost 10x the price!

Most languages such as PHP provide libraries to interact with memcache. Whenever you make a call out to your database, first check memcache. If you find your key, fetch the value & done. Otherwise grab the answer from the database, and pop it into memcache.
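
In Python that check-the-cache-first pattern looks roughly like this. I’m assuming the pymemcache client library, and expensive_database_query() is a made-up stand-in for your real database call; any memcache client with get & set will do.

import json
from pymemcache.client.base import Client

mc = Client(("127.0.0.1", 11211))   # placeholder memcache host

def expensive_database_query(user_id):
    # Stand-in for the real database call you're trying to protect.
    return {"user_id": user_id, "name": "alice"}

def get_user(user_id):
    key = "user:%d" % user_id

    # 1. Check memcache first.
    cached = mc.get(key)
    if cached is not None:
        return json.loads(cached)

    # 2. On a miss, grab the answer from the database...
    user = expensive_database_query(user_id)

    # 3. ...and pop it into memcache for next time.
    mc.set(key, json.dumps(user).encode("utf-8"), expire=300)
    return user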

At the database

Databases of all kinds, be they Postgres, Oracle, or MySQL, have a query cache. Be sure you’ve enabled & tuned yours. Also check that your buffer cache is sizeable enough to fit your most frequently hit data. A hit ratio can give you a cheap guesstimate on this.
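
Here’s a rough way to eyeball that hit ratio on MySQL/InnoDB from Python, again assuming the pymysql driver and placeholder credentials. A ratio that has drifted well below 99% on a busy box is a hint the buffer pool may be undersized.

import pymysql

# Placeholder connection details.
conn = pymysql.connect(host="127.0.0.1", user="admin", password="secret")

with conn.cursor() as cur:
    cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'")
    status = {name: int(value) for name, value in cur.fetchall()}

conn.close()

reads = status["Innodb_buffer_pool_reads"]              # had to hit disk
requests = status["Innodb_buffer_pool_read_requests"]   # served logically
hit_ratio = 100.0 * (1 - reads / float(requests or 1))
print("buffer pool hit ratio: %.2f%%" % hit_ratio)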

Related: Why a four letter word divides dev and ops

3. Missing metrics collection

In a recent article Why Scalability is big business I talked about collecting metrics. These are invaluable.

If you’re a home owner or renting, and want to know what you spent on energy in the past year, what do you do? You look at your heating bills for the winter months. Similarly, collecting real data on all your servers, like with cacti, or a service like NewRelic allows you to do the same thing with your servers & infrastructure.

Real hindsight, and real visibility helps everyone from operations teams, to business units evaluating past problems.

Also: Why a killer title can make or break your content efforts

4. Not building feature flags

Tractor trailers use two tires on every axle. If one fails, you are still on the road. Planes use redundant engines. Having switches built into your application to turn off non-essential features may seem abstract when your deadlines for features are looming.

But operational switches for your devops team should be seen as a good foundation, solid bedrock to build on. It means you can do the maintenance that you will need to do, and do it without interrupting customers. It also means when your site gets hammered, and we hope that day will come, you can adjust the dials, and not go down.
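
Here’s a hedged sketch of such a switch in Python: a tiny flag store the application checks before rendering non-essential features. The flag file path and flag names are made up; in practice you might keep the flags in redis, a database table or your config management, so ops can flip them without a deploy.

import json
import os

# Placeholder path -- ops edit this file to flip features on & off.
FLAG_FILE = os.environ.get("FLAG_FILE", "/etc/myapp/flags.json")

def flag_enabled(name, default=True):
    """Return the current state of a feature flag, defaulting to on."""
    try:
        with open(FLAG_FILE) as f:
            flags = json.load(f)
    except (IOError, ValueError):
        # If the flag file is missing or broken, fail open.
        return default
    return bool(flags.get(name, default))

def render_recommendations():
    return "<div>recommended for you...</div>"

def render_activity_feed():
    return "<div>recent activity...</div>"

def render_homepage():
    page = ["<h1>Welcome</h1>"]
    # Non-essential features get a switch, so ops can shed load or do
    # maintenance without taking the whole site down.
    if flag_enabled("recommendations"):
        page.append(render_recommendations())
    if flag_enabled("activity_feed"):
        page.append(render_activity_feed())
    return "".join(page)

print(render_homepage())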

Related: Is Amazon RDS Difficult to Manage

5. Building on a single database

Various NoSQL databases like MongoDB, Cassandra & Hbase are distributed out of the box. Keep in mind though they make various tradeoffs to achieve this.

Meanwhile the vast majority of web applications are still built on reliable relational databases. But they don’t scale seamlessly. Build a read-only mode into your application and you’ll thank yourself for years to come. This means you can browse, even while the master database is offline. What’s more it means you can scale more easily.

Avoid solutions that try to scale writes across multiple servers. Partitioning, aka sharding, is terribly complex to get right, both in planning & layout. Let’s not forget backups: how do you piece a consistent picture back together from 8 shards, each with its own backup? Recipe for trouble. There are some new cluster options for MySQL, such as Galera. Oracle has its own take. But in the end you’ll do better to get a bigger box for your central datastore and keep it central.

Related: How to Deploy on Amazon with Vagrant

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Are startup CEO’s hiding their scalability problems?

Join 27,000 others and follow Sean Hull on twitter @hullsean.

Your site is running fine, right? You have 1000 customers, and it usually runs smoothly. Just this one lingering question: why does it take five high-performance EC2 instances to run the database, all on flash drives? Good question!

The truth is, one of the highest-trafficked sites I managed pulled in 100 million uniques a month, and only used three backend databases. That site was one of those wildly popular celebrity gossip sites, the ultimate guilty pleasure when you’re at the office and can’t watch reality tv!

Snickering aside, this is huge traffic. And all of the above was built on Drupal, with no ORM in the mix. It could even run, albeit noticeably slower, while memcache was disabled.

1. Servers with solid state drives

I’m very excited to see Amazon introduce servers with SSD drives. They can bring you a 100x improvement in disk I/O, and that, my friends, is the be-all and end-all for databases. So why complain?

If you deploy on these boxes right out of the gate, it may be like using a crutch. You become dependent on it, and ignore real performance tuning. Solid state drives still won’t make up for that ORM middleware you’re using.

Also: Do managers & CEO’s underestimate operational costs?

2. Memcache saving your bad queries

Memcache is also a powerful tool. It sits between the database and your webservers, reducing load on the database by as much as 10x. That’s a great way to get better response time, and reduce drag on your db tier. But it’s still worthwhile performance tuning without it.

Why? If you can get your site to run without caching, it will run blazingly fast *with* it. Don’t use it as a crutch, use it as rocket fuel for your well tuned site.

Read this: Do startups need techops?

3. A legion of read slaves

I’ve seen smaller sites, using a ton of read slaves. All of it deployed to cover up slow & redundant queries pouring out of an ORM middleware layer, in this case Cake PHP.

Again, read slaves are great, but tune & test with less hardware, and get the performance up the hard way. With elbow grease!

Related: Howto automate MySQL query analysis with Amazon RDS

4. Really really big memory

64G, 128G, 256G of main memory? If I wax on about the days when you’d get excited by 64k, I’ll sound like an old timer. But with those extreme limitations, you had to write tight code. Otherwise it just wouldn’t do anything.

Really really big memory of today’s servers allows us to get lazy. I hear developers say “Hey, the database is 10G of data, and we have 64G main memory, so the whole thing will fit in memory. Problem solved!”

Duhhh… No. Why not? Because you still have to slice and dice that data. You still have to scan through for bits & pieces that aren’t indexed, then sort, and organize that into temporary memory space. In DBA speak, you’re still doing a ton of logical IOs.

Picture it another way: imagine the days when you were on horseback, riding across the west. You traveled light because, frankly, your horse could carry only so much. Then along come cars, and you start loading up the trunk. You add the kitchen sink, and the rear end is dragging on the ground. All seems fine until you hit a steep mountain, and your car is almost stalling at 20mph. If you had only carried the same load as you did on horseback, you’d be speeding across the country at a lightning pace.

Read: Is Amazon RDS hard to manage?

5. Deploying poor code

Deadlines are looming, and new features must be deployed. So performance testing can wait until later. The code works after all.

Been there, done that. Code gets deployed and all of a sudden there are spikes on server load in the evening. Ops & DBA teams are screaming, “Who wrote this code?”.

Load testing should be a part of everyday QA & test. It’s the only way to avoid growing scalability problems.

Check this: Are SQL databases dead?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Howto automate MySQL slow query analysis with amazon RDS

If you’ve used relational databases for more than ten minutes, I hope you’ve heard of slow queries. They’re those pesky little gremlins slowing down your startup, and preventing the scalability you so desperately need.

Luckily there’s a solution. What I’ve found is if I send a report to developers every week, it keeps these issues front and center, for folks that are very busy indeed.

The script below is for RDS, but you can surely modify it if you have a physical server or roll-your-own MySQL box on Amazon. Take a look & enjoy!

Join 26,000 others and follow Sean Hull on twitter @hullsean.

1. install percona tools

Percona, as many probably already know, is a wildly successful services firm that supports MySQL and related technologies. They also have a very popular & scalable MySQL distribution by the same name.

Even if you’re not using Percona MySQL, you definitely want to get ahold of the percona toolkit. It provides all sorts of useful tools, including the one this article is based on, query-digest.

This tool takes your stock MySQL slow query logfile as input, and summarizes it into a very useful and readable report. Formerly mk-query-digest, it’s now called pt-query-digest. See below.

You can install the percona tools easily by grabbing the repository file and installing that with rpm. From there you can just use yum or apt-get depending on your distribution.

Related: Why a killer title can make or break your content efforts

2. install aws command line tool

Amazon has consolidated all its command line tools into a single one called just “aws”. The options can be a little arcane, and the error messages misleading besides. What’s good though is that it’s slightly easier to install & configure.

Do you already use Python? Install it this way:


$ pip install awscli

If not, you’ll need to dig into the aws cli installation instructions further.

Also: Do managers underestimate operational costs?

3. edit .aws/config

After you get the tool installed, you need to setup your environment. I edited a file named /home/shull/.aws/config as follows:


[default]
region = us-east-1
aws_access_key_id = BLIBJZMKLWIL5UTNRBMQ
aws_secret_access_key = MF5J/2z7HmN92lQUrV12ZO/FBXNjDVjL52TNRWsG

You can find those access_key_id and secret_access_key values on your Amazon dashboard. Click the upper right hand corner under your name, and select the menu item “Security Credentials”.

Check out: Are SQL Databases Dead?

4. edit send_query_report.sh

I wrote the script below so you can fairly easily edit it.


#!/bin/bash
#

# get the rds db instanceID from command line (or crontab) entry
#
AWS_INSTANCE=$1

# here's where we'll store the latest slowquery.log
#
SLOWLOG=/tmp/rds_slow.log
#SLOWLOG=`/bin/ls -tr /home/shull/*.log | /usr/bin/tail -1`

# fetch slow query log from rds box
# here I always grab the latest one.
#
/usr/local/bin/aws rds download-db-log-file-portion --db-instance-identifier $AWS_INSTANCE --output text --log-file-name slowquery/mysql-slowquery.log > $SLOWLOG

# query report output
SLOWREPORT=/tmp/reportoutput.txt

# pt-query-digest location
MKQD=/usr/local/bin/pt-query-digest

# run the tool to get analysis report
$MKQD $SLOWLOG > $SLOWREPORT

# today's date in a variable
TODAY=`/bin/date +\%m/\%d/\%Y-\%H:\%S`
#YESTERDAY=`/bin/date -d "1 day ago" +\%m/\%d/\%Y-\%H:\%S`

# report subject
SUBJECT="Sean Query Report -- $TODAY "

# recipient
EMAIL="hullsean@gmail.com"

# send an email using /bin/mail
/usr/bin/mailx -s "$SUBJECT" "$EMAIL" < $SLOWREPORT

Note, if you don't have mailx installed, it should be available in your repository. Use apt-get or yum as necessary to get it installed.

Also: Is high availability overrated & near impossible to deliver?

5. Add to crontab

After you've tested the above script from command line, you will want to add it to a weekly cron job. Voila, automation! Don't forget to chmod +x to make it executable. :)


00 09 * * 5 /home/shull/send_query_report.sh seandb

Read: Are MySQL DBA's impossible to find?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don't work with recruiters