Category Archives: High Availability

MySQL needs a single master to check data integrity

MySQL slaves can drift out of sync. Many of our clients are surprised to find data differences across their replication topology once we do some checking and sniffing around. Such checks require a single reliable, authoritative master to compare against.

Click through to the end for multi-master solutions that work with MySQL.

Reason 5 – No way to do integrity checking

MySQL’s statement-based replication has a number of problems. It is notoriously bad at handling stored procedures, and there are also lots of problems around non-deterministic functions. When you’re lucky, it fails with an error. But many environments have replication problems and data inconsistencies that they don’t even know about.

This problem of data drift is well known in MySQL replication. Luckily there is a solution. Formerly part of Maatkit, now part of Percona Toolkit, the table checksum tool attacks this problem head on. pt-table-checksum is a great tool, but it won’t help you much when you don’t have a single master.
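Here’s a minimal sketch of running it, assuming checksums go to the default percona.checksums table; host, user and database names are placeholders:

[code]
# Run from the master. pt-table-checksum writes checksum queries into
# percona.checksums; replication carries them to the slaves, where the
# per-chunk results can be compared against the master's.
pt-table-checksum h=master.example.com,u=checksum_user,p=secret \
  --databases=app_db \
  --replicate=percona.checksums

# Then, on each slave, look for chunks that differ from the master:
mysql -h slave1.example.com -u checksum_user -p -e "
  SELECT db, tbl, chunk, this_crc, master_crc
  FROM percona.checksums
  WHERE master_crc <> this_crc OR ISNULL(master_crc) <> ISNULL(this_crc);"
[/code]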

Which database holds the correct view of all your data?

NEXT: Reason 6 – Active-active breaks transaction isolation

PREV: Reason 4 – Cannot clone & build a slave without a single master

Want more? Grab our Scalable Startups monthly for more tips and special content. Here’s a sample

MySQL requires an authoritative master to build slaves

In MySQL database operations, you often need to rebuild slaves. They fail for a lot of different reasons, fall out of sync, or crash. When this happens you may find you need to reclone and start fresh. This is normally done by finding your authoritative master database and taking a hot backup from it.
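A rough sketch of that reclone with Percona XtraBackup follows; hostnames, credentials, paths and the binlog coordinates shown are all placeholders, and the details vary by version:

[code]
# On the authoritative master: take a hot backup and prepare it
xtrabackup --backup --user=backup_user --password=secret --target-dir=/backups/clone
xtrabackup --prepare --target-dir=/backups/clone

# Ship it to the broken slave (mysqld stopped, old datadir cleared),
# then point the slave at the master using the coordinates recorded
# in xtrabackup_binlog_info
rsync -a /backups/clone/ slave1:/var/lib/mysql/
cat /backups/clone/xtrabackup_binlog_info

mysql -h slave1 -u root -p -e "
  CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000123',
    MASTER_LOG_POS=45678;
  START SLAVE;"
[/code]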

Click through to the end for multi-master solutions that work with MySQL.

Reason 4 – You cannot reclone without single master

If you use active-active multi-master replication, you no longer have a single authoritative master copy of all your data. That means you have no reliable master to clone from, if a replica breaks.

If you’re an operational dba, you probably know that MySQL replication breaks regularly. You always need to be able to go back to a fresh copy when things get mixed up on the slaves.

NEXT: Reason 5 – No way to do integrity checking

PREV: Reason 3 – Limitations of MySQL row-based replication

Want more? Grab our Scalable Startups monthly for more tips and special content. Here’s a sample

Limitations of MySQL row-based replication

MySQL offers a few different options for how you perform replication. Statement-based has been around a lot longer, and though it has some troublesome characteristics, they’re well known and can be managed. What’s more, it supports online schema changes with a multi-master active-passive setup. We recommend this solution.

Row-based replication is newer. It attempts to address problems like those introduced by non-deterministic functions and replicating stored procedures. But it introduces its own challenges.
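The choice between them comes down to the binlog_format setting. A quick sketch of checking and switching it, assuming you have the SUPER privilege; STATEMENT, ROW and MIXED are the valid values:

[code]
# See which format the master is currently writing to its binlogs
mysql -u root -p -e "SHOW VARIABLES LIKE 'binlog_format';"

# Switch at runtime (affects new sessions); make it permanent with
# binlog_format = MIXED under [mysqld] in my.cnf
mysql -u root -p -e "SET GLOBAL binlog_format = 'MIXED';"
[/code]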

Click through to the end for multi-master solutions that work with MySQL.

Reason 3 – Row-based replication limitations

Row-based replication addresses some of the limitations of statement-based replication.

o works better with stored procedures
o reduces problems associated with non-deterministic functions

But it creates a few of its own, and some are show-stoppers:

o breaks if the target table is missing, or if its storage engine, column order or data types differ from the master
o doesn’t write the SQL statements to the binlogs, which are handy for troubleshooting
o harder to do point-in-time recovery without SQL in the binlogs
o harder to do online schema changes by switching masters

NEXT: Reason 4 – Cannot reclone without a single master

PREV: Reason 2 – MySQL Replication is prone to failure

Want more? Grab our Scalable Startups monthly for more tips and special content. Here’s a sample

Why does MySQL replication fail?

When considering active-active multi-master, you must consider its foundation technology. Although MySQL replication is straightforward to set up, it can fail in a myriad of ways. Most of those are known and well understood, and we can solve them only if we use the technology in the standard way.

Click through to the end for multi-master solutions that work with MySQL.

Reason 2 – Replication is brittle to start with

– statement-based replication – a throw-it-over-the-fence architecture
– non-deterministic functions cause unpredictable behavior
– no globally unique transaction id (like Oracle’s SCN – system change number)
– no built-in checksums – pt-table-checksum is a must
– replication position syncing to index files is prone to breaking
– temp tables disappear after a restart
– row-based is still a new code path – doesn’t support zero-downtime schema changes
– even with row-based, MySQL can fall back to statement-based
– row-based does not include SQL in the binlogs
– MySQL replica slaves die frequently and require a re-clone from the authoritative master (see the quick health check sketch below)
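A quick health check on any slave shows how fragile the thread is; a sketch, with credentials as placeholders:

[code]
# Both replication threads should say Yes, lag should be near zero,
# and the Last_*_Error fields should be empty
mysql -u root -p -e "SHOW SLAVE STATUS\G" | \
  egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_.*Error'
[/code]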

NEXT: Reason 3 – MySQL row-based replication has limitations

PREV: Reason 1 – auto increment settings create new problems

Want more? Grab our Scalable Startups monthly for more tips and special content. Here’s a sample

Facebook, Is Anybody Listening?

If you weren’t actually using Facebook on Monday, you probably heard a coworker or friend complain it was down. Can you believe it?!?!

Also find Sean Hull’s ramblings on twitter @hullsean.

What Happened?

Facebook explained that they hit a DNS glitch. DNS is the internet’s phone book, but it’s all automated. It turns website names into numbers, and like phone numbers those route you to the right place. A mismatch here will send you to the wrong place, and hence no Facebook for you!
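You can watch that lookup happen yourself. A tiny example with the dig utility (the numbers returned will vary):

[code]
# Ask DNS for the numbers (IP addresses) behind the name. If this returns
# nothing, or the wrong numbers, your browser can't find Facebook either.
dig +short facebook.com
[/code]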

[quote]
Always on, 24×7 uptime has become de rigueur, almost a holy mantra that no one questions. But as we rely more heavily on web services for business, availability grows in importance. We need realistic expectations about uptime so we can plan accordingly.
[/quote]

Achieving HA in the Amazon cloud is even harder. Look at the outage that took out Reddit & AirBNB.

Who should care?

Whether Facebook is online or not may seem like fun & games until you start tying business processes to the site. And we’re not just talking fan pages here. Think of Facebook logins on sites like Spotify, Disqus, Xobni, Vimeo, CNN The Forum & Digg, to name a few.

As more businesses rely on your platform, outages quickly multiply with collateral damage.

Read this: The Myth of Five Nines – Why High Availability is Overrated.

Expectations of Perfection

The power grid can’t stay up with only five minutes of downtime per year, so why should we expect online businesses to live up to this standard? I work with a lot of startups, and universally 24×7 is expected. Other clients I work with, such as hedge funds, legal firms or news providers, don’t always have this expectation. Even among banks, it is only the very largest, globally operating ones that promise 24×7 services.

I would argue it is cultural. Look at this whitepaper, Bellcore Standards – Myth versus Reality. The real world is messier than calculations and probabilities. It’s time we brought the bar down a notch and gave operations folks a pat on the back for the heroic effort they put in, and the huge uptime they’re already providing!

What did we learn from Sandy? A lot about disaster recovery, that’s what.

Want more? Grab our Scalable Startups monthly for more tips and special content. Here’s a sample

No iPhones Were Harmed in the Creation of this Outage

Apple’s recent iMessage outage had some users confused. What do you mean I can’t text my favorite cat photos?? How can Apple do this to me!?!?

What happened?

Apple provides services to everyone who uses its platform. iCloud, for example, stores your contacts, calendar, photos, apps and documents in the cloud. No more syncing to iTunes to make sure all your stuff is backed up. It’s automatic, in the cloud. That is, of course, unless iCloud is down.

Same goes for iMessage. Apple has quietly introduced this as a more feature-rich version of text messaging. It’s great until the service isn’t available. What gives?

All these services are backed, magically or not so magically, by computer servers. These computers sit in datacenters, managed by operations teams and, to some degree, by automation. All the things that brought down AWS, and AirBNB & Reddit with it, could also take out Apple. A serious storm like Sandy also presents real risks.

[quote]
iMessage is a text and SMS replacement service for iPhones & iPads. It is more feature-rich, offering device synchronization, group texting & return receipts. But in a very big way it is also an attempt by Apple to muscle into the market and further extend its platform reach.
[/quote]

100% uptime ain’t easy

Even for firms that promise insanely good uptime, five nines remains very very hard to achieve in practice.

For starters, all the components behind your service need to be redundant: multiple load balancers, webservers, caching servers, and of course the databases that hold all your business assets.

But as the repeated AWS outages attest, even redundancy here isn’t enough. You also need to use multiple cloud providers. Here you can mirror across clouds so even an outage in one won’t bring down your business.

What about in the world of messaging? Well, you can bet your customers don’t know or care about high availability, uptime, or any of these other web operations buzzwords. But they sure understand when they can’t use their service. It may give companies like Apple pause as they try to stretch themselves into areas outside their core business of iPhones, iPads, and the iOS platform itself.

iMessage – messaging standards power play

When I first upgraded to an iPhone 4S, the first thing I noticed was the light blue bubbles when texting certain people. Why was that, I wondered? I quickly found out about iMessage, which was conveniently configured to replace my old and trusty text messaging.

Texts or SMS work across all phones, smartphone or not, and Apple or not. But open standards don’t lend themselves well to market muscle and dominance. So it makes sense that Apple would be pushing into this space. I’ve met more than one BlackBerry owner who loved using BBM to keep in touch with colleagues. It’s like your own private club. And that muscle further strengthens Apple’s platform overall. Just take a look at how the Android Ecosystem is broken if you need an example of what not to do.

The flip side is it means you have more to manage. More servers, more services, more dimensions to your business. More frequent outages that can tarnish your reputation.

[quote]
A lot of complaining, and publicity like the iMessage outage received, may just be an indication that you’re big enough for people to care.
[/quote]

Alternatives abound…

There is huge competition in the messaging space. The outage and its publicity further underline this fact.

For example, on the iPhone for messaging there are ChatOn, WhatsApp, LINE, Skype & WeChat, just to name a few.

Interestingly, while researching this article, I downloaded WhatsApp to give it a try. Only 99 cents, why not. Turns out that they had not one, but two outages, just a week ago. Seems Apple isn’t the only one experiencing growing pains.

A lot of complaining and publicity could be a sign that you’re big enough for people to care!

Read this far? Grab our Scalable Startups monthly.

AirBNB didn't have to fail

Today part of Amazon Web Services failed, taking down with it a slew of startups that all run on Amazon’s cloud infrastructure. AirBNB was one of the biggest, but Heroku, Reddit, Minecraft, Flipboard & Coursera went down with it too. It’s not the first time. What the heck happened, and why should we care?

1. Root Cause

The AWS service allows companies like AirBNB to build web applications, and host them on servers owned and managed by Amazon. The so-called raw iron of this army of compute power sits in datacenters. Each datacenter is a zone, and there are many in each of their service regions including US East (Northern Virginia), US West (Oregon), US West (Northern California), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), South America (Sao Paulo), and AWS GovCloud.

Today one of those datacenters in the Northern Virginia region had a failure. What does this mean? Essentially firms like AirBNB that hosted their applications ONLY in Northern Virginia experienced outages.

As it turns out, Amazon has a service level agreement of 99.95% availability. We’ve long since said goodbye to the five nines. HA is overrated.

2. Use Redundancy

Although there are lots of pieces and components to a web infrastructure, two big ones are webservers and database servers. Turns out AirBNB could make both of these tiers redundant. How do we do it?

On the database side, you can use Amazon’s multi-AZ option or, alternately, read replicas. Each has different service characteristics, so you’ll have to evaluate your application to figure out what will work for you.

Then there is the option to host MySQL or Percona Server directly on Amazon servers yourself and use replication.
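For the RDS route, both options are a command away with the AWS CLI. A rough sketch only; instance identifiers, sizes and passwords are placeholders:

[code]
# Multi-AZ MySQL instance: a synchronous standby in another availability zone
aws rds create-db-instance \
  --db-instance-identifier mydb \
  --engine mysql \
  --db-instance-class db.m1.large \
  --allocated-storage 100 \
  --master-username admin \
  --master-user-password 'secret' \
  --multi-az

# Or hang an asynchronous read replica off an existing instance
aws rds create-db-instance-read-replica \
  --db-instance-identifier mydb-replica \
  --source-db-instance-identifier mydb
[/code]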

[quote]Using redundant components like placing webservers and databases in multiple regions, AirBNB could avoid an Amazon outage like Monday’s that affected only Northern Virginia.[/quote]
When do I want RDS versus MySQL? Here are some use cases for RDS versus roll-your-own MySQL.

Now that you’re using multiple zones and regions for your database, the hard work is done. Webservers can be hosted in different regions easily, and don’t require complicated replication to do it.

3. Have a browsing only mode

Another step AirBNB can take to be resilient is to build a browsing only mode into their application. Often we hear about this option for performing maintenance without downtime. But it’s even more valuable during a situation like this. In a real outage you don’t have control over how long it lasts or WHEN it happens. So a browsing only mode can provide real insurance.

For a site like AirBNB this would mean the entire website stays up and operating. Customers could browse and view listings; only when they went to book a room would they encounter an error. That would affect a very small segment of their customers, and be a much less painful PR problem.

Facebook has experienced intermittent outages of its service. People hardly notice, because they’ll often only see a message when they’re trying to comment on someone’s wall post, send a message or upload a photo. The site is still operating, but not allowing changes. That’s what a browsing only mode affords you.
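At the database tier, a crude version of browsing only mode is flipping the server your application writes to into read-only, so reads keep flowing while ordinary writes are refused. A sketch only; note that SUPER users bypass read_only, and the application still has to handle the refused writes gracefully:

[code]
# Serve reads, refuse writes from ordinary application accounts
mysql -u root -p -e "SET GLOBAL read_only = ON;"

# Back to normal once the emergency is over
mysql -u root -p -e "SET GLOBAL read_only = OFF;"
[/code]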

[quote]A browsing only mode can make a big difference, keeping most of the site up even when transactions or publishing are blocked.
[/quote]

Drupal, an open source CMS that powers sites like Adweek.com, TheHollywoodReporter.com, and Economist.com, supports a browsing only mode out of the box. An Amazon outage like this one would only stop editors from publishing new stories temporarily. A huge win for sites that get 50 to 100 million, with an m, pageviews per month.

4. Web Applications need Feature Flags

Feature flags give you an on/off switch. Build them into the heavy-duty parts of your site, and you can disable those parts in an emergency. Host components in multiple availability zones for extra peace of mind.
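The mechanics don’t have to be fancy. The sketch below assumes a hypothetical feature_flags table that the application consults before rendering a heavy feature, so ops can flip it off during an incident:

[code]
# Hypothetical app.feature_flags table, checked by the application per request
mysql -h db.example.com -u app_admin -p -e "
  UPDATE app.feature_flags SET enabled = 0 WHERE name = 'search_suggestions';"

# Flip it back on once the emergency has passed
mysql -h db.example.com -u app_admin -p -e "
  UPDATE app.feature_flags SET enabled = 1 WHERE name = 'search_suggestions';"
[/code]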

One of our all-time most popular posts, 5 Things Toxic to Scalability, included some in-depth discussion of feature flags.

5. Consider Netflix’s Simian Army

Netflix takes a very progressive approach to availability. They bake redundancy and automation right into all of their infrastructure. Then they run an app called the Chaos Monkey which essentially causes outages, randomly. If resilience from constantly falling and getting back up can’t make you stronger, I don’t know what can!

Take a look at the Netflix blog for details on intentional load & stress testing.

6. Use multiple cloud providers

If all of the above isn’t enough for you, you could take it further and do as George Reese of enstratus recommends: use multiple cloud providers. Not being beholden to one company could help in more situations than just these types of service disruptions.

Basic EC2 Best Practices mean building redundancy into your infrastructure. Multiple cloud providers simply take that one step further.

Read this far? Grab our newsletter on scalability and startups!

10 ways I avoid trouble in database operations

1. Avoid destructive commands

From time to time I’m working with new recruits and bringing them up to speed in operations. The first thing I emphasize is care with destructive commands.

What do I mean here? Well, there are all sorts of them. SQL commands such as DROP TABLE & DROP DATABASE, but also TRUNCATE and DELETE, are all destructive. They’re easy to execute but harder to undo. Think of all the steps it would take to restore from your backup.

If you are logged in as root there are many, many ways to shoot your own foot. I hope you know this, right? rm has options that can be very difficult to step back from, like -r (recursive) and -f (force). Better to not use the command at all and just move the file or directory you’re working on by renaming it. You can always delete it later.
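The rename trick looks something like this; the path is a placeholder:

[code]
# Instead of rm -rf, park the directory out of the way first
mv /var/lib/mysql-old /var/lib/mysql-old.trash.$(date +%Y%m%d)

# Days later, once you're sure nothing misses it, actually delete it
# rm -rf /var/lib/mysql-old.trash.20121203
[/code]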

2. Set your command prompts

When working on the command line, your prompt is crucial. You check it over and over to make sure you’re working on the right box. At the OS, your prompt can tell you if you’re root or not, what directory you’re sitting in, and what’s the hostname of the box. With a few different terminals open, it’s very easy to execute a heavy loading command or destructive command on the wrong box. Check thrice, cut once!

You can also set your mysql prompt. This provides similar insurance. It can tell you the default schema you’re in, the user you’re logged in as, and the hostname or localhost. It is one more piece in the risk aversion puzzle.
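A sketch of both prompts; adjust the details to taste:

[code]
# Shell prompt in ~/.bashrc: user, host and current directory
export PS1='\u@\h:\w\$ '

# mysql prompt: user, host and default schema. Either set it in the
# [mysql] section of ~/.my.cnf as  prompt = "\u@\h [\d]> "
# or export it as an environment variable:
export MYSQL_PS1="\u@\h [\d]> "
[/code]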

3. Perform backups & test them

I know, I know, we’re all doing backups already. Well, I sure hope so. But if you’re getting on a system for the first time, your very first impulse should be to check and find out what types of backups are being done. If they’re not being done, you should set them up. I don’t care how big the database is. If size is raised as an obstacle, you need to sell or educate management on what might happen if that data is lost. Paint some ugly scenarios. It’s not always easy to see urgency in these things without a good war story or two.

We wrote a guide to using xtrabackup for hotbackups. These can be done online, even while your production database is serving customers, without table locking or other downtime.
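A sketch of a nightly hot backup that also checks whether it actually finished; paths, credentials and the alert address are placeholders:

[code]
#!/bin/bash
# Nightly hot backup with xtrabackup, plus a crude "did it finish?" check
DIR=/backups/$(date +%Y%m%d)
LOG=$DIR.log

xtrabackup --backup --user=backup_user --password=secret --target-dir="$DIR" 2> "$LOG"

# xtrabackup reports 'completed OK!' on success; alert if that's missing
if ! grep -q 'completed OK!' "$LOG"; then
  mail -s "MySQL backup FAILED on $(hostname)" ops@example.com < "$LOG"
fi

# Periodically go further: prepare the backup and restore it onto a scratch box
# xtrabackup --prepare --target-dir="$DIR"
[/code]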

4. Stay off production machines

This may sound funny to some of you, but I live by it. If it ain’t broke, don’t go and try to fix it! You don’t need to be on all these boxes all the time. That goes for other folks too: don’t give devs access to every production box. Too many hands in the pie, so to speak. Also limit root users. But again, if those systems are running well, you don’t have to log in and poke around every five minutes. That just brings more chances for operator error.

5. Avoid change as much as possible

This one might sound controversial but it’s saved me more than once.

I worked at one firm a few years back managing the MySQL servers. The Oracle DBA was going on vacation for a few weeks, so I was picking up the reins for a bit. I met with the DBA for some brain dump sessions, and he outlined the main things that can and do go wrong. He also asked that I avoid any table alterations.

Sure enough, ten days into his vacation, a problem arose in the application. One page on the site was failing silently because of a missing field that needed to be added. I resisted. A fight ensued. Suddenly a lot of money was at stake if this change wasn’t pushed through. I continued to resist. I explained that if such a change were not done correctly, it would very likely break replication, knocking over a row of other dominoes and causing an unpredictable mess.

I also knew I only had to hold on for a few more days. The resident DBA would be returning, and he could juggle the change. You see, Oracle was set up to use multi-master replication, so those changes needed to go through a rather complex process to be applied. Done incorrectly, the damage would have taken days to clean up and caused much more financial harm.

The DBA was very thankful for my resistance, and management somewhat magically found a solution to the application & edit problem.

Push back is very important sometimes.

[quote]
Many of these ten tips are great characteristics to select for in the DBA hiring process. If you’re a candidate, emphasize your caution and track record with uptime. If you’re a manager, ask candidates about how they handle these situations. We wrote a MySQL DBA hiring guide too.

[/quote]

6. Monitor important things

You should monitor your OS syslog and MySQL error log for starters. But also watch your slow query log for new activity; analyze it and send the reports along to the devs, with your own analysis. Monitor your disk partitions: you don’t ever want disks to fill up. Monitor load average, and have a check that a database login or some other simple transaction can succeed. You can even monitor your backups to make sure they complete without error. Use your judgement to decide which checks satisfy these requirements.
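A bare-bones sketch of those checks, meant to be run from cron; thresholds and the alert address are placeholders:

[code]
#!/bin/bash
# Minimal checks: full disks, a database login, and load average
ALERT=ops@example.com

# Any partition at 90% or more?
FULL=$(df -P | awk '0+$5 >= 90')
[ -n "$FULL" ] && echo "$FULL" | mail -s "disk almost full on $(hostname)" "$ALERT"

# Can we actually log in to MySQL?
mysqladmin --user=monitor --password=secret ping > /dev/null 2>&1 \
  || echo "mysql ping failed" | mail -s "MySQL down on $(hostname)" "$ALERT"

# Keep a trail of load average you can eyeball or graph later
uptime >> /var/log/loadavg.log
[/code]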

7. Use one or more slaves & checksum

MySQL slave databases are a great way to provide insurance. You can use a lagging slave to provide insurance against operator error, or one of those destructive commands we mentioned above. Have it lag a few hours behind the master so you have that much of a window. At night this slave may even be fresh enough to use for backups.
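In MySQL 5.6 and later the lag is built in; on older versions pt-slave-delay from Percona Toolkit does the same job. A sketch, with the delay and credentials as placeholders:

[code]
# On the lagging slave (MySQL 5.6+): stay 4 hours (14400 seconds) behind
mysql -u root -p -e "STOP SLAVE; CHANGE MASTER TO MASTER_DELAY = 14400; START SLAVE;"

# On older servers, pt-slave-delay keeps the SQL thread behind instead
# pt-slave-delay --delay 4h h=localhost,u=root,p=secret
[/code]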

Also, since MySQL uses statement-based replication, data can get out of sync over time. Those problems may or may not flag errors. So use a tool to compare your master and slave for data consistency. We wrote a howto on using checksums to do just that.

8. Be very careful of automatic failover

Automation is wonderful when it works. We dream of a data center that works like clockwork, with robots that never sleep. We can work towards this ideal, and in some cases get close. But it’s important to understand that failure is by nature *not* what we predicted. The myriad ways that complex systems can fail boggles the mind, and surprises even seasoned veterans of operations. So maintain a healthy suspicion of this type of automation. Understand that if you automate things to happen at this crucial time, you can potentially put yourself in an even *more* compromised position than the failure itself would have.

Sometimes monitoring, alerting, and manual intervention are the more prudent path. Your mileage may vary of course.

9. Be paranoid

It takes many years of doing ops to realize you can never be paranoid enough. Already checked that you’re on the right host, and about to execute some command? Quit the shell prompt and check again. Go back and ask the team if that table really needs to be dropped. Try to rephrase what you’re about to do in different words. Email out again to the team and wait some time before you pull the trigger. Check one more time that you have a fresh backup.

Delay that destructive command as long as you possibly can.

10. Keep it simple

I know, I know, we all want to use that new command or tool, or jump on the latest hardware and take it for a spin. We want to build beautiful architectures that perform great feats of magic. But the fewer moving parts there are, the fewer things can go wrong. And in ops, your job is stability and availability. Can you avoid multi-master replication and go with just basic master-slave replication in MySQL? That’s simpler. Can you have fewer schemas or fewer filter rules? Can you skip the complicated HA layer and use monitoring and manual failover?

Made it this far? Grab our newsletter.

The myth of five nines – Why high availability is overrated


Join 12,000 others and follow Sean Hull on Twitter @hullsean.

In the Internet world, 24×7 has become the de facto standard. Websites must be always on, available 24 hours a day, 365 days a year. In our pursuit of perfection, uptime is being measured down to three decimal places, that is, being up 99.999% of the time; in short, five-nines.

Just like a mantra, when repeated enough it becomes second nature and we don’t give the idea a second thought. We don’t stop to consider that while it may be generally a good thing to have, is five-nines necessary and is it realistic for the business?

Also: How to hire a developer that doesn’t suck

In my dealings with small businesses, I’ve found that the ones that have been around longer, with more seasoned managers, tend to take a more flexible and pragmatic view of the five-nines standard. Some even see periods of outage during off hours as – *gasp* – no problem at all! On the other hand, it is a universal truth held by the next-big-idea startups that 24×7 is do or die. To them, a slight interruption in service will send the wrong signal to customers.

The sense I get is that businesses that have been around longer have more faith in their customers and are confident about what their customers want and how to deliver it.  Meanwhile startups who are building a customer base feel the need to make an impression and are thus more sensitive to perceived limitations in their service.

Of course the type of business you run might well inform your policy here. Short outages in payments and e-commerce sites could translate into lost revenue while perhaps a mobile game company might have a little more room to breathe.

Related: Why generalists are better at scaling the web

Sustaining five nines is too expensive for some

The truth is, sustaining high availability at the standard of five-nines costs a lot of money. These costs come from buying more servers, whether as physical infrastructure or in the cloud. In addition you’ll likely involve more software components and configuration complexity. And here’s a hard truth: with all that complexity also comes more risk. More moving parts means more components that can fail. Those additional components can fail from bugs, misconfiguration, or interoperability issues.

What’s more, pushing for that marginal 0.009% increase in high availability means you’ll require more people and create more processes.

Read this: Why reddit didn’t have to fail

Complex architecture downtime

In a client engagement back in 2011, I worked with a firm in the online education space. Their architecture was quite complex. Although they had web servers and database servers, the standard internet stack, they did not have standardized operations. So they had the Apache web server on some boxes, and Nginx on others. What’s more, they had different versions of each as well as different distributions of Linux, from Ubuntu to Red Hat Enterprise Linux. On the database side they had instances on various boxes, and since they weren’t all centralized they were not all being backed up. During one simple maintenance operation, a couple of configurations were rearranged, bringing the site down and blocking e-commerce transactions for over an hour. It wasn’t a failure of technology but a failure of people and processes, made worse by the hazard of an overly complex infrastructure.

In another engagement at a financial media firm, I worked closely with the CTO outlining how we could architect an absolutely zero downtime infrastructure.  When he warned that “We have no room for *ANY* downtime,” alarm bells were ringing in my head already.

Also: Why RDS doesn’t support Maria DB or Percona

When I hear talk of five-nines, I hear marketing rhetoric, not real-world risk reduction. Take for example the power grid outage that hit the Northeast in 2003. That took out power across large swaths of the country for over 24 hours. Five-nines allows roughly 0.00001 × 365 × 24 × 60 ≈ 5.3 minutes of downtime per year, so a single 24-hour outage burns through about 274 years’ worth of that allowance. In real terms, anyone hosted in the Northeast failed five-nines miserably.

For true high availability look at better management of processes

So what can we do in the real-world to improve availability?  Some of the biggest impacts will come from reducing so-called operator error, and mistakes of people and processes.

Before you think of aiming for five-nines,  first ask some of these questions:

o Do you test servers?
o Do you monitor logfiles?
o Do you have network wide monitoring in place?
o Do you verify backups?
o Do you monitor disk partitions?
o Do you watch load average?
o Do you monitor your server system logs for disk errors and warnings?
o Do you watch disk subsystem logs for errors? (the most likely component in hardware to fail is a disk)
o Do you have server analytics?  Do you collect server system metrics?
o Do you perform fire drills?
o Have you considered managed hosting?

If you’re thinking about and answering these questions you’re well on your way to improving availability and uptime.

Read this: Top MySQL interview questions for DBAs, hiring managers & recruiters

Want more? Grab our Scalable Startups monthly for more tips and special content. Here’s a sample