Category Archives: Database Management

Replicate MySQL to Amazon Redshift with Tungsten: The good, the bad & the ugly

tungsten replicator

Heterogeneous replication involves moving data from one database platform to another. This is a complicated endeavor, because datatypes, date & time formats, and a whole lot more tend to differ across platforms. In fact it’s so complex that many enterprises simply employ a commercial solution to take away the drudgery.

Join 31,000 others and follow Sean Hull on twitter @hullsean.

Enter Tungsten, which supports these types of deployments on platforms such as PostgreSQL, MongoDB, Oracle, Redshift and Vertica. And with custom-built appliers, the possibilities are endless!

With that I’ve set out to get things working with Amazon Redshift. If you’re still struggling with the basics check out Wrestling with bears or how I tamed Tungsten Replicator.

1. Connect to Redshift

The first thing you’ll need to do is allow your Tungsten boxes to reach Redshift. Seems obvious, but when you’re juggling all these apples & oranges for the first time, it may slip your mind.

Configure your AWS security group to allow tungsten boxes

Get the external IP address of your tungsten box. If the box is in DNS, ping will resolve the IP for you, even if the ICMP packets themselves don’t get through.


$ ping tungsten01.mydomain.net

Add 10.20.30.40/32 to your Redshift security config. I created a special group called Tungsten and added the two tungsten boxes by IP address. That’s because these machines were on a different AWS account. If they’re on the same account, you could allow the entire EC2 group, and be done.
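If you prefer the aws CLI to the console, the same grant looks roughly like this. A sketch, assuming an EC2-Classic style cluster security group named tungsten (clusters in a VPC use regular EC2 security groups instead):


$ aws redshift authorize-cluster-security-group-ingress \
    --cluster-security-group-name tungsten \
    --cidrip 10.20.30.40/32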

Install psql client

The best way I found to test the connection was psql. Install that:


$ apt-get install postgresql-client

Verify your connection:


$ psql -p 5439 -h 10.20.10.20 --username=root -d dwh

Also: Are SQL Databases dead?

2. Configure S3 access

Tungsten uses S3 heavily to move data into Redshift.

(I outlined this previously in 5 Reasons to move data to Amazon Redshift.)

Install s3tools package

Tungsten uses s3cmd to interface with the Amazon S3 API. Let’s install that:


$ apt-get install s3cmd

Now edit the tungsten user’s .s3cfg file. Change these lines:


[default]
access_key = AAAAAAA
secret_key = BBBBBBB
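Before going further, it’s worth confirming those credentials actually work as the tungsten user, so Tungsten isn’t the first thing to discover they don’t. A quick sanity check with s3cmd, assuming the bucket name used below:


$ s3cmd ls s3://tungstenbucket
$ echo hello > /tmp/hello.txt
$ s3cmd put /tmp/hello.txt s3://tungstenbucket/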

Lastly edit Tungsten’s /opt/continuent/share/s3-config-redshift.json. There are four parameters:


{
"awsS3Path" : "s3://tungstenbucket",
"awsAccessKey" : "AAAAAAA",
"awsSecretKey" : "BBBBBBB",
"cleanUpS3Files" : "false"
}

Related: Is Oracle killing MySQL?

3. Create tables on Redshift

In a heterogeneous environment, that is, where the source and destination databases are different platforms, Tungsten cannot create tables for you.

It will however, give you a helping hand in the process. Enter the ddlscan tool, which scans the CREATE TABLE statements on your source database, and generates them for your target platform.

For each table in the source database, there will be a stage table in Redshift:


$ ddlscan jdbc:mysql://localhost:3306/test -user sync -db test -template ddl-mysql-redshift-staging.vm > test_stage.sql

$ cat test_stage.sql
/*
SQL generated on Thu Jun 04 20:06:45 UTC 2015 by ./ddlscan utility of Tungsten

url = jdbc:mysql:thin://tungsten01.mydomain.net:3306/test?jdbcCompliantTruncation=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&allowMultiQueries=true&yearIsDateType=false
user = sync
dbName = test
*/

CREATE SCHEMA test;

DROP TABLE test.stage_xxx_sean;
CREATE TABLE test.stage_xxx_sean
(
tungsten_opcode CHAR(2),
tungsten_seqno INT,
tungsten_row_id INT,
tungsten_commit_timestamp TIMESTAMP,
c1 VARCHAR(256) /* VARCHAR(64) */,
id INT,
PRIMARY KEY (tungsten_opcode, tungsten_seqno, tungsten_row_id)
);

And also a base table in Redshift:


$ ddlscan jdbc:mysql://localhost:3306/test -user sync -db test -template ddl-mysql-redshift.vm > test.sql

$ cat test.sql
/*
SQL generated on Thu Jun 04 20:06:51 UTC 2015 by ./ddlscan utility of Tungsten

url = jdbc:mysql:thin://tungsten01.mydomain.net:3306/test?jdbcCompliantTruncation=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&allowMultiQueries=true&yearIsDateType=false
user = sync
dbName = test
*/

CREATE SCHEMA test;

DROP TABLE test.sean;
CREATE TABLE test.sean
(
c1 VARCHAR(256) /* VARCHAR(64) */,
id INT,
PRIMARY KEY (id)
);

Lastly, apply those scripts to your Redshift database:


$ psql -p 5439 -h 10.20.10.20 --username=root -d dwh
dwh# \i test_stage.sql
dwh# \i test.sql

Read: Are we fast approaching cloud-mageddon?

4. Troubleshoot applier

Encountered “Delimiter Not Found” issue

This issue was mysterious, and remains so a bit. Here’s what I did to work around it:

I had an issue with the path, but fixed that:


  "awsS3Path" : "s3://tungstenbucket",

It was causing an interim bucket to be created, but fixing it did not solve things.

Ok, so I hacked this a bit.

Can anyone help me troubleshoot what happened & why?

A. I skipped transactions

I brought the applier back online with this command.


trepctl -service redshift online -skip-seqno 1,1-100

B. I did lots of inserts & deletes on MySQL

I then did about 200 of these:


mysql> insert into test.sean values ('hi there', 20);
mysql> delete from test.sean where id = 20;
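If you’d rather not type 200 of those by hand, a quick shell loop produces the same churn. A sketch, assuming your MySQL credentials live in ~/.my.cnf:


$ for i in $(seq 1 200); do
>   mysql test -e "insert into test.sean values ('hi there', 20); delete from test.sean where id = 20;"
> done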

C. Now seeing data


dwh=# select * from test.sean;
                 c1                  | id 
-------------------------------------+----
 working......                       | 25
 hello sean i have an exclamation !! | 27
 hello sean i came from mysql        | 26
(3 rows)

I also set cleanUpS3Files to false. Now I’m seeing files like this:
test-sean-417.csv
test-sean-418.csv
test-sean-419.csv
test-sean-420.csv

So that indicates all those INSERTs followed by DELETEs went through, and cleaned up after themselves.

Also: How do I find entrepreneurial focus?

5. Test data & table changes

A. Tested INSERT

At first the csv files were getting cleaned up by Tungsten. I added this option to the s3-config-redshift.json file:


"cleanUpS3Files" : "false",

Then the files are kept around so we can review them. An insert record shows up in S3 like this:


"I","417","1","2015-06-05 17:44:35.000","tungsten new csv file? ","33",null

B. Tested DELETE

A DELETE record shows up in S3 like this:


"D","419","1","2015-06-05 17:45:48.000",null,"26",null

C. Tested UPDATE

An UPDATE record shows up in S3 like this:


"D","420","1","2015-06-05 17:48:55.000",null,"31",null
"I","420","2","2015-06-05 17:48:55.000","changed message text for redshift+tungsten update","31",null

D. Tested ALTER TABLE

As mentioned previously, ALTER TABLE is *NOT* supported. However after doing the ALTER, the applier does *NOT* go offline, and there are no errors. That’s because Tungsten filters DDL statements like these in a heterogeneous environment.

The applier *DOES* go offline after you try a new INSERT. That’s because it gets a new row for the INSERT that no longer matches the table definition.

“trepctl status” shows the following:

pendingExceptionMessage: CSV loading failed: schema=test table=sean CSV file=/tmp/staging/redshift/staging0/test-sean-413.csv message=Wrapped org.postgresql.util.PSQLException: ERROR: Load into table 'stage_xxx_sean' failed. Check 'stl_load_errors' system table for details. (../../tungsten-replicator//samples/scripts/batch/redshift.js#145)
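That message points you at Redshift’s stl_load_errors system table. From psql you can see exactly which column the COPY choked on, with something along these lines:


dwh=# select starttime, filename, line_number, colname, err_reason
dwh-# from stl_load_errors order by starttime desc limit 5;

In my case, the fix was to apply the same ALTER to both the base table and the stage table on the Redshift side: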

redshift# alter table test.sean add column c3 integer default null;

redshift# alter table test.stage_xxx_sean add column c3 integer default null;

Then I brought the applier back online:

$ trepctl -service redshift online

Then check the status. It should say ONLINE for state.


$ trepctl status
Processing status command...
NAME VALUE
---- -----
appliedLastEventId : mysqld-bin.000022:0000000000000566;-1
appliedLastSeqno : 424
appliedLatency : 300585.739
autoRecoveryEnabled : false
autoRecoveryTotal : 0
channels : 1
clusterName : redshift
currentEventId : NONE
currentTimeMillis : 1433878195573
dataServerHost : my-dw.aaaa.us-east-1.redshift.amazonaws.com
extensions :
host : my-dw.aaaa.us-east-1.redshift.amazonaws.com
latestEpochNumber : 0
masterConnectUri : thl://tungsten01.mydomain.net:2112/
masterListenUri : null
maximumStoredSeqNo : 424
minimumStoredSeqNo : 0
offlineRequests : NONE
pendingError : NONE
pendingErrorCode : NONE
pendingErrorEventId : NONE
pendingErrorSeqno : -1
pendingExceptionMessage: NONE
pipelineSource : thl://tungsten01.mydomain.net:2112/
relativeLatency : 304511.573
resourcePrecedence : 99
rmiPort : 10000
role : slave
seqnoType : java.lang.Long
serviceName : redshift
serviceType : local
simpleServiceName : redshift
siteName : default
sourceId : my-dw.aaaa.us-east-1.redshift.amazonaws.com
state : ONLINE
timeInStateSeconds : 351940.007
timezone : GMT
transitioningTo :
uptimeSeconds : 600921.759
useSSLConnection : false
version : Tungsten Replicator 4.0.0 build 18
Finished status command...
$

Lastly, let’s see what’s in the table. Fire up the PostgreSQL shell and take a look:


dwh=# select * from test.sean;
                        c1                         | id | c3
---------------------------------------------------+----+----
 working......                                     | 25 |
 hello sean i have an exclamation !!               | 27 |
 hello will i break?                               | 30 |
 some more records                                 | 32 |
 tungsten new csv file?                            | 33 |
 another tungsten csv file?                        | 34 |
 changed message text for redshift+tungsten update | 31 |
(7 rows)

Also: Was Fred Wilson wrong about Apple?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Wrestling with bears or how I tamed Tungsten replicator

tungsten replicator

I just dove into Tungsten replicator very recently, as I need to replicate from Amazon RDS to Redshift. I’d heard a lot of great things about Tungsten, but had yet to really dig in.

Join 28,000 others and follow Sean Hull on twitter @hullsean.

I fetched the binary and began to dig through the docs. Within a day I felt like I was sinking in quicksand. Why was this thing so darn complicated? To my mind unix software is a config file simple.cfg, a logfile simple.log, a daemon simpled. Open some ports & voila, you’re cooking.

Unfortunately for beginners, Tungsten takes a very different approach. Although it supports .ini files for configuration, the docs encourage these huge commands: “configure”, which in Tungsten speak means generate a config file, and “install”, which literally means install the software for you into /opt/continuent.

After various posts to the forums, and a lot of head scratching, I discovered the cookbook. At first I thought this referred to Puppet or Chef type cookbooks, but it’s something different. It allowed me to set up a very basic Tungsten deployment, just to see things working.

Tungsten supports all sorts of “topologies”. For this example I am just doing master-slave. There are two nodes and each has mysql running on it. Node1 serves as the master, and runs the tungsten replicator (master node service). And node2 serves as the slave and runs the tungsten applier (slave node service).

Good luck, and I hope this helps others shorten the learning curve!

1. Download tarball (on master)

Note the forums indicated they may be moving off of google code. So this url may change.


$ cd /tmp
$ wget http://downloads.tungsten-replicator.org/download.php?file=tungsten-replicator-oss-4.0.0-18.tar.gz

Also: Why Airbnb didn’t have to fail

2. Expand the tarball (on master)

Use your vast unix skills to expand the tarball!


$ mv download.php\?file\=tungsten-replicator-oss-4.0.0-18.tar.gz tungsten.tgz
$ tar xvzf tungsten.tgz
$ mv tungsten-replicator-4.0.0-18 stage

Also: Is the difference between dev & ops a four-letter word?

3. Install MySQL (each box)

Hopefully you’ve done this before. Pretty straightforward:


$ apt-get install mysql-client mysql-server

Also: Are SQL Databases Dead?

4. Edit cookbook files (on master)

Inside /tmp/stage/cookbook you’ll need to make a few simple edits:

edit COMMON_NODES.sh

Edit NODE1 and NODE2 and comment out other lines.


export NODE1=ip-172-31-0-117
export NODE2=ip-172-31-1-188

edit NODES_MASTER_SLAVE.sh


export MASTERS=($NODE1)
export SLAVES=($NODE2)

edit USER_VALUES.sh


export TUNGSTEN_BASE=/opt/continuent
export MY_CNF=/etc/mysql/my.cnf
export DATABASE_USER=sync
export DATABASE_SUPER_USER=root
export DATABASE_PASSWORD=

Also: 5 Things toxic to scalability

5. Create install directory (each box)

The /tmp/stage directory you created above is just a holding ground for the tarball. Continuent in its infinite wisdom wants to install itself. So, create a directory for it:

As root:


$ mkdir /opt/continuent
$ chown tungsten /opt/continuent

Then as tungsten:


$ touch /opt/continuent/testfile.txt

Also: How to deploy on Amazon EC2 with Vagrant?

6. Configure aws security groups

AWS security group permissions are key to getting any of this tungsten stuff working. And there are a lot of moving parts here. Ping is required for various tests, as is MySQL’s port. But you’ll also need to enable the Tungsten History Log port 2112 and the replicator RMI ports (see the CLI sketch after this list).

o ping ICMP from your group
o enable inbound 3306 from your group
o enable THL – inbound 2112 from your group
o enable RMI – inbound 10000, 10001 from your group
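If you manage these rules from the aws CLI, the grants look roughly like this. A sketch, assuming both boxes live in a security group named tungsten, with the group referencing itself as the source:


$ aws ec2 authorize-security-group-ingress --group-name tungsten \
    --protocol tcp --port 2112 --source-group tungsten
$ aws ec2 authorize-security-group-ingress --group-name tungsten \
    --protocol tcp --port 10000-10001 --source-group tungsten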

Also: Howto interview an AWS expert for managers, recruiters & engineers alike

7. Test the mysql client against the other box (each box)


$ mysql -h ip-172-31-1-188 -u root

$ mysql -h ip-172-31-0-117 -u root

If these hang, verify 3306 in your aws security groups. If you get a mysql error, be sure the “host” is appropriate. Check with:


mysql> select user, host, password from mysql.user where user = 'root';

For more info see Connect to MySQL in the Amazon public cloud.

Also: 5 Reasons to move to amazon redshift

8. Setup ssh auto-login (each box)

You need to be able to login to each box as a user “tungsten” without a password.


$ adduser tungsten
$ ssh-keygen
$ scp .ssh/id_rsa.pub tungsten@ip-172-31-1-188:/home/tungsten/.ssh/authorized_keys

Be sure the authorized_keys file is 600:


$ chmod 600 .ssh/authorized_keys
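Now confirm the auto-login actually works. This should run the command on the remote box without any password prompt:


$ ssh tungsten@ip-172-31-1-188 hostname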

Also: Did MySQL & Mongo have a beautiful baby called Aurora

9. Install ruby (each box)

You need to have ruby on both boxes.


$ apt-get install ruby

Also: Top interview questions for a MySQL expert – recruiters, managers & candidates

10. Create sync user (each box)

Both of your mysql instances will need a user that tungsten connects as. I called it “sync”.


mysql> create user 'sync'@'%' identified by 'secret';
mysql> grant all privileges on *.* to 'sync'@'%';
mysql> flush privileges;

Note, you may want to use a blank password at this stage, to eliminate that as a potential problem to debug. Also consider the security implications of '%' and consider a subnet wildcard such as '10.0.%'.
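Either way, verify from the other box that the grant actually lets the sync user in over the network, before the installer trips on it:


$ mysql -u sync -psecret -h ip-172-31-0-117 -e 'select 1'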

Also: Myth of five nines why high availability is overrated

11. Enable binary log (each box)

Enable the mysql binary log with the log_bin parameter. Also ensure that /var/lib/mysql is readable by the tungsten user, either by changing /var/lib to 655 or by adding tungsten to the mysql group. Test this as well, using less or cat on one of the binlog files.
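A quick sanity check on both counts; log_bin should come back ON, and the directory listing should succeed as the tungsten user:


mysql> show variables like 'log_bin';

$ su - tungsten -c "ls -l /var/lib/mysql/"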

Also: What is high availability and why is it important?

12. Update MySQL startup settings (each box)

Fire up your favorite editor and update /etc/mysql/my.cnf settings (both servers). The following are the main ones:


server_id = 1 (server_id=2 for slave)
sync_binlog = 1
max_allowed_packet = 52m
open_files_limit = 65535
innodb_log_file_size=50m
#bind_address localhost (comment it to disable)

Note that changing innodb_log_file_size is tricky. You’ll need to stop mysql, rename old ib_logfile0 to ib_logfile0.old and ib_logfile1 to ib_logfile1.old. Then change the param in my.cnf and start mysql. Otherwise you’ll get errors on startup.
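In other words, the dance looks something like this:


$ service mysql stop
$ mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.old
$ mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.old
# now bump innodb_log_file_size in /etc/mysql/my.cnf
$ service mysql start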

Also: Is the difference between dev & ops a four-letter word?

13. Run the installer (on master)

This step is fairly straightforward. If there are problems in this step, you’ll see ERROR lines in the output. Sift through them and resolve one by one.


$ ./cookbook/install_master_slave

Also: RDS or MySQL – Ten use cases

14. Check tungsten status (each box)

The “trepctl” utility allows you to check the current status. It does a lot more, but for now that’s enough. If you want to make it easier add “/opt/continuent/tungsten/tungsten-replicator/bin” to your path.
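That is, something like:


$ export PATH=$PATH:/opt/continuent/tungsten/tungsten-replicator/bin
$ trepctl status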

Also notice the line “state”. It should be ONLINE. If it says “GOING-ONLINE:SYNCHRONIZING” that likely means you didn’t open up the tungsten ports. You’ll need both RMI ports 10000 and 10001, as well as THL port 2112. We all know how finicky AWS security groups can be. I’ll leave it to you as an exercise to confirm those are open.

ON MASTER


root@ip-172-31-0-117:/opt/continuent/tungsten/tungsten-replicator/bin# ./trepctl status
Processing status command...
NAME VALUE
---- -----
appliedLastEventId : mysqld-bin.000005:0000000000000321;235
appliedLastSeqno : 5
appliedLatency : 34824.086
autoRecoveryEnabled : false
autoRecoveryTotal : 0
channels : 1
clusterName : cookbook
currentEventId : mysqld-bin.000005:0000000000000321
currentTimeMillis : 1432840323683
dataServerHost : ip-172-31-0-117
extensions :
host : ip-172-31-0-117
latestEpochNumber : 0
masterConnectUri : thl://localhost:/
masterListenUri : thl://ip-172-31-0-117:2112/
maximumStoredSeqNo : 5
minimumStoredSeqNo : 0
offlineRequests : NONE
pendingError : NONE
pendingErrorCode : NONE
pendingErrorEventId : NONE
pendingErrorSeqno : -1
pendingExceptionMessage: NONE
pipelineSource : /var/lib/mysql
relativeLatency : 44450.683
resourcePrecedence : 99
rmiPort : 10000
role : master
seqnoType : java.lang.Long
serviceName : cookbook
serviceType : local
simpleServiceName : cookbook
siteName : default
sourceId : ip-172-31-0-117
state : ONLINE
timeInStateSeconds : 82860.917
timezone : GMT
transitioningTo :
uptimeSeconds : 82861.492
useSSLConnection : false
version : Tungsten Replicator 4.0.0 build 18
Finished status command...
root@ip-172-31-0-117:/opt/continuent/tungsten/tungsten-replicator/bin#

ON SLAVE


tungsten@ip-172-31-1-188:/opt/continuent/tungsten/tungsten-replicator/bin$ ./trepctl status
Processing status command...
NAME VALUE
---- -----
appliedLastEventId : mysqld-bin.000005:0000000000000321;235
appliedLastSeqno : 5
appliedLatency : 42569.62
autoRecoveryEnabled : false
autoRecoveryTotal : 0
channels : 1
clusterName : cookbook
currentEventId : NONE
currentTimeMillis : 1432840348936
dataServerHost : ip-172-31-1-188
extensions :
host : ip-172-31-1-188
latestEpochNumber : 0
masterConnectUri : thl://ip-172-31-0-117:2112/
masterListenUri : thl://ip-172-31-1-188:2112/
maximumStoredSeqNo : 5
minimumStoredSeqNo : 0
offlineRequests : NONE
pendingError : NONE
pendingErrorCode : NONE
pendingErrorEventId : NONE
pendingErrorSeqno : -1
pendingExceptionMessage: NONE
pipelineSource : thl://ip-172-31-0-117:2112/
relativeLatency : 44475.936
resourcePrecedence : 99
rmiPort : 10000
role : slave
seqnoType : java.lang.Long
serviceName : cookbook
serviceType : local
simpleServiceName : cookbook
siteName : default
sourceId : ip-172-31-1-188
state : ONLINE
timeInStateSeconds : 1906.6
timezone : GMT
transitioningTo :
uptimeSeconds : 82882.334
useSSLConnection : false
version : Tungsten Replicator 4.0.0 build 18
Finished status command...
tungsten@ip-172-31-1-188:/opt/continuent/tungsten/tungsten-replicator/bin$

Also: Is zero downtime even possible with Amazon RDS?

15. Perform simple test

Create a table on the master mysql node.


mysql master> create database test;
mysql master> create table test.sean (c1 varchar(64));
mysql master> insert into test.sean values ('hi there');
mysql master> insert into test.sean values ('this should show up in tungsten thl file');
mysql master> insert into test.sean values ('new data on tungsten02 THL??');

Verify that the table is on the slave mysql node.


mysql slave> show databases;
mysql slave> select * from test.sean;
+------------------------------------------+
| c1                                       |
+------------------------------------------+
| hi there                                 |
| this should show up in tungsten thl file |
| new data on tungsten02 THL??             |
+------------------------------------------+
3 rows in set (0.00 sec)

Success!

Also: Why are MySQL & other database experts so hard to find?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Is upgrading RDS like a shit-storm that will not end?

aws logo

Join 29,000 others and follow Sean Hull on twitter @hullsean.

Can RDS worsen an outage? That’s another way to think about this question. In my experience, it very clearly increases outages, by tying one or both hands behind your back. Believe me when I say, that is terribly frustrating when you’re putting out fires!

1. Changing Parameters

An everyday occurrence is the need to change database parameters. Want to enable a log? Great, no problem. Except in RDS it becomes a problem! Ok, you’re thinking, why is that?

In regular MySQL, you log in with the shell & issue SET GLOBAL parameter = value; Nice, easy, straightforward. No servers restarting, no nonsense. If the parameter requires a reboot, MySQL will tell you.

In RDS, the process is waay more complex. First you edit a parameter group. You can copy an existing one, or change the one you’re using. If that parameter group applies to many servers, be careful!

Ok, what next? Now you APPLY that new parameter group. You can do so immediately, or during the next maintenance window. Here’s the tricky part. Is Amazon going to restart my instance? That’s something your boss or manager will surely ask you. You might think it would only do so if the parameter in question required it. But I tried to enable the general log recently, and Amazon gave me a status of “pending-reboot”. This change shouldn’t require that! I’m sitting there scared Amazon might suddenly decide to reboot a production server for no reason!
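For what it’s worth, the same dance from the aws CLI looks roughly like this. A sketch, with hypothetical names my-params and seandb; ParameterApplyStatus in the second command’s output is where that dreaded “pending-reboot” shows up:


$ aws rds modify-db-parameter-group \
    --db-parameter-group-name my-params \
    --parameters "ParameterName=general_log,ParameterValue=1,ApplyMethod=immediate"
$ aws rds describe-db-instances --db-instance-identifier seandb \
    --query 'DBInstances[0].DBParameterGroups'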

This is where you feel you’ve lost control. You can dig through docs all you want, but you can’t ever say for sure if a managed service will behave predictably. There’s already more layers of software between you and your relational database. Not what you want.

Also: Did MySQL & Mongo have a beautiful baby called Aurora?

2. How much longer?

Another question you’ll ask yourself is, how long will this maintenance take? With MySQL at the command line, you can run through test after test & time the process. When you go to perform tasks offhours, you already have a clear picture.

With RDS, things can’t be predicted. Servers are restarted when they needn’t be. Rebuilds take forever, and you have no progress bar. EBS performance has a hiccup and your snapshot time doubles. The troubles go on and on.

Related: Is automation killing old-school operations?

3. Why did Amazon just force an OS upgrade?

Here’s another surprise I ran into. Again we have a managed solution, so Amazon must take opportunities when they can. But you pay for it in unpredictability.

I was going to perform a MySQL 5.1 to 5.5 upgrade, and I’d run through test after test in advance. I timed the process to about 45 minutes. Then I went to do it in production. Amazon decided to throw in the OS upgrade too, adding 40 minutes of surprise time. What’s worse? No progress bar on that either.

Upgrades are nerve wracking enough, without this kind of stuff scaring the daylights out of you.

Read: Do managers underestimate operational cost?

4. What’s happening on my server?

All of the questions about progress are opaque on RDS because you lack the command line. You can’t watch processes, disk I/O or any of the granular stuff. In my surgery analogy below, it’s as though you can’t touch the patient, find their pulse or gauge if their skin is cold, clammy or pale.

Also: Is the difference between dev & ops a four-letter word?

5. Surgery with blunt instruments

At the end of the day, RDS feels like surgery with blunt instruments. If command line were your scalpel, windows & GUI tools may be your remote video surgery. And worse still, RDS would be like doing surgery on the Opportunity Mars rover, after it’s landed & stuck in a valley. Everything is delayed, it’s hard to tell what’s going on, and the worst environment to work in when you have an emergency with your database.

If you have any operations experience, deploy your own MySQL on an EC2 instance. You’ll thank yourself later.

Also: Is zero downtime even possible on RDS?

Upside to RDS

Is there any upside? Why do people use it? Push-button replication. Check. Push-button multi-az, check. Those are great if you have no DBA. Automated backups so you don’t shoot yourself in the foot, check.

I guess there is *something* to love.

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Is Zero downtime even possible on RDS?

amazon rds mysql

Join 29,000 others and follow Sean Hull on twitter @hullsean.

Oh RDS, you offer such promise, but damn it if the devil isn’t always buried in the details.

Diving into a recent project, I’ve been looking at upgrading RDS MySQL. Major MySQL upgrades can be a bit messy. Since the entire engine is rebuilt, query performance can change, syntax can break, and surely triggers & stored procedures can have problems.

That’s not even getting into it with storage engines. Still have some tables on MyISAM? Beware.

The conclusion I would make is if you want zero downtime, or even nearly zero, you’re going to want to roll your own MySQL on EC2 instances.

Read: Why high availability is so very hard to deliver

1. How long did that upgrade take?

First thing I set out to do was upgrade a test instance. One of the first questions my client asked, how long did that take? “Ummm… you know I can’t tell you clearly.” For an engineer this is the worst feeling. We live & die by finding answers. When your hands are tied, you really can’t say what’s going on behind the curtain.

While I’m sitting at the web dashboard, I feel like I’m trying to pickup a needle with thick leather gloves. Nothing to grasp here. At one point the dashboard was still spinning, and I was curious what was happening. I logged out and back in again, and found the entire upgrade step had already completed. I think that added five minutes to perceived downtime.

Sure I can look at the RDS instance log, and tell you when RDS logged various events. But when did the machine go offline, and when did it return for users? That’s a harder question to answer.

Without command line, I can’t monitor the process carefully, and minimize downtime. I can only give you a broad brush idea of what’s happening.

Also: RDS or MySQL 10 use cases

2. Did we need to restart the instance?

RDS insists on rebooting the instance itself every time it performs a “Modify” operation. Often restarting the MySQL process would have been enough! This is like hunting squirrels with a bazooka. Definitely overkill.

As a DBA, it’s frustrating to watch the minutes spin by while your hands are tied. At some point I’m starting to wonder… Why am I even here?

Related: Howto automate MySQL slow query analysis with Amazon RDS

3. EBS Snapshots are blunt instruments

RDS provides some protection against a failed upgrade. The process will automatically snapshot your volume before it begins. That’s great. If I spend

See also: Is Amazon RDS hard to manage

4. Even promoting a read-replica sucks

I also evaluated using a read-replica. Here you spin up a slave first. You then upgrade *THAT* box to 5.6 ahead of your master. While your master is still sending data to the slave, your downtime would in theory be very minimal. Put the master in read-only mode, wait a few seconds for the slave to catch up, switch the application to point to the slave, then promote it!

All that would work well with command line, as your instances don’t restart. But with RDS, it takes over seven long minutes!

Read this: 5 Reasons to move data to Amazon Redshift

5. RDS can upgrade to MySQL 5.6!

MySQL 5.6 introduced a new timestamp datatype which allows for fractional seconds. Great feature, but it means the on-disk datastructures are different. Uh oh!

If you’re replicating from MySQL 5.5 to 5.6 it will break, because the rows flow out in one format, and break the 5.6 formatted datafiles! Not good.

The solution requires running ALTER commands on the master beforehand. That in turn locks up tables. So it turns out promoting a read-replica is a non-starter for 5.5 to 5.6. It doesn’t really save much.

All of this devil in the details stuff is terrible when you don’t have command line access.

Read: Are SQL databases dead?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Howto automate MySQL slow query analysis with amazon RDS


If you’ve used relational databases for more than ten minutes, I hope you’ve heard of slow queries. Those are the pesky little gremlins that are slowing down your startup, and preventing the scalability you so desperately need.

Luckily there’s a solution. What I’ve found is if I send a report to developers every week, it keeps these issues front and center, for folks that are very busy indeed.

The script below is for RDS, but you can surely modify it if you have a physical server or roll-your-own MySQL box on Amazon. Take a look & enjoy!

Join 26,000 others and follow Sean Hull on twitter @hullsean.

1. install percona tools

Percona, as many probably already know, is a wildly successful services firm that supports MySQL and related technologies. They also have a very popular & scalable MySQL distribution by the same name.

Even if you’re not using Percona MySQL, you definitely want to get ahold of the percona toolkit. It provides all sorts of useful tools, including the one this article is based on, query-digest.

This tool takes your stock MySQL slow query logfile as input, and summarizes it into a very useful and readable report. Formerly mk-query-digest, it’s now called pt-query-digest. See below.

You can install the percona tools easily by grabbing the repository file and installing that with rpm. From there you can just use yum or apt-get depending on your distribution.
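On a RedHat flavor that goes roughly like this; grab the current percona-release package from percona.com, since the exact URL moves around. On Debian & Ubuntu, percona-toolkit is generally available straight from apt:


$ rpm -Uvh percona-release.rpm    # the repo package fetched from percona.com
$ yum install percona-toolkit

$ apt-get install percona-toolkit    # debian/ubuntu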

Related: Why a killer title can make or break your content efforts

2. install aws command line tool

Amazon has consolidated all its command line tools into a single one called just “aws”. The options can be a little arcane, and the error messages misleading besides. What’s good though is that it’s slightly easier to install & configure.

Do you already use Python? Install it this way:


$ pip install awscli

If not, you’ll need to dig into the aws cli installation instructions further.
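Either way, verify the install. And note that “aws configure” will write the config file from the next step for you, if you’d rather answer prompts than edit files:


$ aws --version
$ aws configure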

Also: Do managers underestimate operational costs?

3. edit .aws/config

After you get the tool installed, you need to setup your environment. I edited a file named /home/shull/.aws/config as follows:


[default]
region = us-east-1
aws_access_key_id = BLIBJZMKLWIL5UTNRBMQ
aws_secret_access_key = MF5J/2z7HmN92lQUrV12ZO/FBXNjDVjL52TNRWsG

Those access_key_id and secret_access_key values you can find on your amazon dashboard. Click the upper right hand corner under your name, and select the menu item “Security Credentials”.

Check out: Are SQL Databases Dead?

4. edit send_query_report.sh

I wrote the script below so you can fairly easily edit it.


#!/bin/bash
#

# get the rds db instanceID from command line (or crontab) entry
#
AWS_INSTANCE=$1

# here's where we'll store the latest slowquery.log
#
SLOWLOG=/tmp/rds_slow.log
#SLOWLOG=`/bin/ls -tr /home/shull/*.log | /usr/bin/tail -1`

# fetch slow query log from rds box
# here I always grab the latest one.
#
/usr/local/bin/aws rds download-db-log-file-portion --db-instance-identifier $AWS_INSTANCE --output text --log-file-name slowquery/mysql-slowquery.log > $SLOWLOG

# query report output
SLOWREPORT=/tmp/reportoutput.txt

# pt-query-digest location
MKQD=/usr/local/bin/pt-query-digest

# run the tool to get analysis report
$MKQD $SLOWLOG > $SLOWREPORT

# today's date in a variable
TODAY=`/bin/date +\%m/\%d/\%Y-\%H:\%S`
#YESTERDAY=`/bin/date -d "1 day ago" +\%m/\%d/\%Y-\%H:\%S`

# report subject
SUBJECT="Sean Query Report -- $TODAY "

# recipient
EMAIL="hullsean@gmail.com"

# send an email using /bin/mail
/usr/bin/mailx -s "$SUBJECT" "$EMAIL" < $SLOWREPORT

Note, if you don't have mailx installed, it should be available in your repository. Use apt-get or yum as necessary to get it installed.
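Make the script executable and give it a dry run from the command line before wiring it into cron:


$ chmod +x send_query_report.sh
$ ./send_query_report.sh seandb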

Also: Is high availability overrated & near impossible to deliver?

5. Add to crontab

After you've tested the above script from command line, you will want to add it to a weekly cron job. Voila, automation! Don't forget to chmod +x to make it executable. :)


00 09 * * 5 /home/shull/send_query_report.sh seandb

Read: Are MySQL DBA's impossible to find?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don't work with recruiters

Connect to MySQL in the Amazon Public Cloud

MySQL on Amazon Cloud AWS

Troubleshooting MySQL on Amazon can be a real test of patience. There are quite a few different things to watch out for in terms of connectivity & networking. Sometimes a checklist can help.

Join 16,000 others and follow Sean Hull on twitter @hullsean.

Here’s my exhaustive list of things that can block you.

1. Be sure to create users & grants

Chances are you did something like this to create your user:


mysql> CREATE USER 'sean'@'localhost' IDENTIFIED BY 'password';
mysql> GRANT ALL PRIVILEGES ON sean_schema.* TO 'sean'@'localhost' WITH GRANT OPTION;

But that won’t help you when connecting from a remote Amazon box. So what to do? Here’s an example:


mysql> CREATE USER 'sean'@'10.10.%' IDENTIFIED BY 'password';
mysql> GRANT ALL PRIVILEGES ON sean_schema.* TO 'sean'@'10.10.%' WITH GRANT OPTION;

You may need to make your source IP wildcard *more* aggressive. For example consider '10.%'. You *may* even go with '%', which allows *all* source IPs. This may sound dangerous, but if you use a tight security group (see item #3 below), you can still be safe.
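Whatever you settle on, double-check what MySQL actually has on file. The host value must match where your webserver connections really come from:


mysql> SHOW GRANTS FOR 'sean'@'10.10.%';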

Related: Why Oracle Won’t Kill MySQL

2. Make sure iptables is not a problem

IPTables is a Linux service that acts like a private firewall for each server. Some AMIs will have it enabled by default. If you’re having trouble like I did, this can definitely trip you up. That’s because your connection will fail silently without telling you, hey the OS won’t let me into that port!

If you are a networking pro you’ve probably already fiddled with iptables. Feel free to add specific rules, and keep it turned on. However I’d recommend just disabling it completely, and using your Amazon security groups to protect your ports.


$ /etc/init.d/iptables stop
$ chkconfig --list iptables
iptables 0:off 1:off 2:on 3:on 4:on 5:on 6:off
$ chkconfig --del iptables
$ chkconfig --list iptables
service iptables supports chkconfig, but is not referenced in any runlevel (run 'chkconfig --add iptables')

Also: Are SQL Databases Dying Out?

3. Test & verify amazon security group settings

Security groups in Amazon can be tricky. I recommend the following:

o create a security group webserver_group
- allow port 80 from 0.0.0.0/0
- allow port 443 from 0.0.0.0/0
- allow port 22 from <your admin IPs>

o create a security group db_group
- allow port 22 from <your admin IPs>
- allow 3306 from <webserver_group>

What’s happening here? We can’t specify a fixed set of IP addresses, because they can change in Amazon. So essentially what we’ve done is say *any* server in our webserver_group security group can connect to port 3306. Pretty cool right?

This means we’re pretty locked down. No internet connections to 3306, so we can be a little looser (see item #1 above) about our grants and source IPs.

What about if you want to use your GUI tools to hit your Amazon hosted MySQL boxes? Say you like to use the Oracle Workbench, Navicat or Toad to connect to MySQL. One way you could do this is configure your db_group to allow 3306 from your office subnet. Then anyone VPN’d into your office will be able to use the tools they like.

Another option is to use Amazon VPC for your servers. You’ll set up an Amazon Virtual Private Gateway, which is a direct VPN connection between Amazon’s datacenter and your datacenter. This can be a messy process, and you’ll want to ask your network admin to help. Once it’s set up, Amazon boxes appear to sit on your office or datacenter network. Cool stuff!


If the connection is blocked somewhere along the way, you’ll see something like this:

$ mysql -h xxx.xxx.xxx.xxx -u admin -p
Enter password:
ERROR 2003 (HY000): Can't connect to MySQL server on 'xxx.xxx.xxx.xxx'

Read this: Why are MySQL experts in such short supply?

4. MySQL network settings

If MySQL is bound to the wrong IP address you can have real problems. First be sure skip_networking is OFF. If it is ON change it in /etc/my.cnf & restart MySQL.


mysql> show variables like 'skip_net%';
+-----------------+-------+
| Variable_name | Value |
+-----------------+-------+
| skip_networking | OFF |
+-----------------+-------+
1 row in set (0.00 sec)

The other MySQL setting that can be problematic is bind-address. First check what it is set to:


$ cat /etc/my.cnf | grep bind
bind-address=127.0.0.1

This isn’t going to allow remote connections. In amazon however, your IP address may change upon reboot. So there is a special setting to allow binding to any IP:


bind-address=0.0.0.0
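Restart MySQL after changing it, then confirm it’s listening on all interfaces; you want to see 0.0.0.0:3306 rather than 127.0.0.1:3306:


$ service mysqld restart
$ netstat -tln | grep 3306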

Related: Bulletproofing MySQL Replication with Checksums

5. installing mysql client & telnet for troubleshooting

You have two options for troubleshooting on the webserver side. If you simply try the mysql command line, you may get blocked up if the network settings & security groups aren’t configured right. So use telnet first.


$ yum install -y telnet

$ telnet 10.10.10.1 3306
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
4
5.1.71??gu9Y6B'/y9Oay`QV Connection closed by foreign host.
$

If you don't get a response, it's not an issue with users or grants, but rather that the port isn't open. Check iptables, check bind-address and check security groups.

Check this: Top MySQL DBA Interview Questions

6. SE Linux related issues

SE Linux will do a lot of good, if managed properly. However if you're not aware of its existence, it can be very, very frustrating. Symptoms can be as nonspecific as allergies, a cold or the flu. It can monitor files, and prevent MySQL from being able to write where it needs to.
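A quick way to rule SELinux in or out: check the mode, and temporarily set it permissive (this lasts until reboot). For a permanent fix you'd want proper file contexts rather than leaving it disabled:


$ getenforce
$ setenforce 0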

Read this: Migrating MySQL to Oracle

7. RPM & later centos yum repo install conflicts

I had real problems doing a custom install for a customer. They didn't want to use a repository for various reasons, but preferred downloading RPMs. There were a few other customizations which were tripping things up.

Based on all the connectivity issues I was having, I backed out of the RPM based install, and then ran through a stock yum install. After doing that, I started seeing these weird errors in mysqld.log:

120328 21:32:40 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use
120328 21:32:40 [ERROR] Do you already have another mysqld server running on port: 3306 ?
120328 21:32:40 [ERROR] Aborting

Running "netstat -nat | grep 3306" in my terminal gave the following:

tcp4 0 0 *.3306 . LISTEN

I spent hours spinning my wheels and not able to figure out what was happening here. At first it seemed a leftover pid file was the culprit. In the end it appeared the *old* /etc/init.d/mysql script was still in place, and the new yum packages wouldn't work with that.

I ended up just scrapping the whole box, and starting from scratch. Sometimes you have to do that. After a clean build, all was fine.

Related: RDS or MySQL 10 Use Cases

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don't work with recruiters

Are SQL Databases Dead?

mesa verde city

I like the image of this city of Mesa Verde. It’s fascinating to see how ancient cities were built, especially as an inhabitant of one of the world’s largest cities today, New York.

I’m a long time relational database guy. I worked at scores of dot-coms in the ’90s as an old-guard Oracle DBA, and pivoted to MySQL in the new century. Would a guy like me, who’s seen 20 years of relational database dominance, really believe they could be dying?

There’s a lot to be excited about in this new realm of db, and some interesting bigger trends that are pushing things in a new way.

Join 15,100 others and follow Sean Hull on twitter @hullsean.

1. Growing use of ORMs

ORM probably sounds like some strange fossil archeologists just dug up in the ancient city of Mesa Verde. But they’re important. You may know them by their real-life names: Hibernate, Active Record, SQLAlchemy and Cake. There are many others. Object Relational Mappers provide a middleware between developers and the SQL of your chosen relational database. They abstract away the nitty gritty, and encapsulate it into a library.

In a way they’re like code generators. Markus Winand talks about them in SQL Performance Explained, warning of the “eager fetching” problem. This is DBA speak for specifying all columns (SELECT *) or fetching all rows, when you don’t need them all. It’s inefficient in terms of asking the database to read & cache all that data, but also to send it across the network and then discard it on the webserver side. Like a lazy housekeeper, the clutter & dust will grow to overwhelm you.
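To make “eager fetching” concrete, here’s a contrived before & after; the orders table and its columns are hypothetical:


SELECT * FROM orders;    -- drags every column of every row across the wire

SELECT id, total         -- fetches only what the page actually needs
FROM orders
WHERE customer_id = 42
LIMIT 20;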

Martin Fowler is the author of the great book NoSQL Distilled. He tries to walk the fence in his post ORM Hate, balancing developers’ love of ORMs against the obvious need for scalability. Ted Neward calls ORMs the Vietnam of Computer Science.

Mattias Geniar points out that bad ORMs are infinitely worse than bad SQL, and there’s another piece on High Scalability by Drewsky, The Case Against ORM Frameworks.

If you agree the ORM conversation is still a huge mess, you’ll be excited to know that NoSQL sidesteps it completely. These databases are built out of the box to interface more like data structures than rows and columns. So you eliminate the scalability problems ORMs introduce when you go NoSQL. That makes developers happy, and pleases DBAs and techops too. Win!

Read: Why Oracle won’t kill MySQL

2. Widening field of options

NoSQL databases are not simply key value stores, though some like Memcache and Riak do fit that mold.

Mongodb offers configurable consistency & durability, & the advantages of document storage; no need for an ORM here. You also have a mix of indexing options that go a little deeper than other NoSQL solutions. A sort of middle ground solution that offers the best of both worlds.

Cassandra, a powerful db that is clustered out of the box. All nodes are writeable, and there are various ways to handle conflict resolution to suit your needs. Cassandra can grow big, and naturally takes advantage of cloud nodes. It also has a nice feature to naturally age out data, based on settings you control. No more monumental archiving jobs.

Hbase is the database part of Hadoop, based on Google’s seminal Bigtable paper.

Redis is another option with growing popularity. It’s a key-value store, but allows more complex data in its buckets, such as hashes, lists, sets and sorted sets. Developers should be salivating at this one.

Also: 5 Great Things about Markus Winand’s Book SQL Performance Explained

3. Lowering bar

The old world of relational databases treats data as sacrosanct. DBAs are tasked with protecting its integrity & consistency. They manage backups to protect against disaster. In this world, every bit of data written is as sacred as any other, whether it’s your bank account balance, or a comment added to a facebook discussion.

But modern non-relational databases introduce the idea of eventual consistency. DBAs and architects would say we are relaxing our durability requirements. What they mean is data can get slightly out of sync, and we’re ok with that. We’ll build our web applications to plan for that, or even, in the case of Riak, expose the levers of durability directly to the developers, allowing them to make some changes instant, while others stay more lax and lazy.

Check this: Why high availability is so very hard to deliver

4. Cloud demands

Virtualized environments like Amazon EC2, give easy access to legions of servers. Availability zones & regions only widen the deployment options. So deploying a single writeable master, the way traditional relational databases work best, is not natural.

Databases like Cassandra, Mongo & Redis are clustered right out of the box. They grew up in this virtual datacenter environment and feel comfortable there.

Related: Why I wrote the book on Oracle & Open Source

5. Only DBAs understand them

Devs may whine at this statement, and to be fair it’s a generalization. The popularity of ORMs speaks volumes here. Anything to eliminate the dreaded SQL writing. Meanwhile DBAs bemoan the use of ORMs for they represent everything they’re trying to fix.

SQL is hard enough, but the ugly truth is each database vendor has their own implementation, their own optimizations, their own optimal tweaks. Even between database versions, SQL code may not perform consistently.

Identifying slow SQL and tweaking it remains one of the primary tasks of performance tuning, for this reason. It hasn’t changed much in my two decades on the job.

Also: Why bemoaning AWS performance sounds like Linux detractors circa 1999

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

5 great things about Markus Winand’s book SQL Performance Explained

markus winand sql performance explained book

Join 12,100 others and follow Sean Hull on twitter @hullsean.

1. Covers databases broadly

You may not have noticed, but there’s a whole spectrum of relational databases on offer. Of course in the database world, most get infatuated with one, and that becomes their bread & butter before long. Their life, their passion, their devotion.

That’s fine as far as it goes, but Winand really stands out, offering a spectrum of ideas and optimization techniques for different platforms. If you’re an Oracle-only or MySQL-only DBA you’ll gain a lot from this book. But even more importantly, if you work in professional services and need to communicate with DBAs brought up on one of these platforms, it becomes like a rosetta stone for SQL query tuning.

Read: Why devops talent is in short supply

2. Shows you how to fish

I find database books and methods fall into two broad categories. There’s the “call Oracle support” method, where you’ll be handed one very specific set of steps, commands, and a path to solve each specific problem. It’s more about memorization; it’s like they actually hand you the fish.

Then there is the investigative method, where you learn how to use a magnifying glass to look at fingerprints, and check for DNA samples, and interrogate suspects. You know learn the tools of the trade.

That’s what Markus brings you, in all its delicious glory.

Read: Why high availability is so very hard to deliver

3. It’s concise

Another gripe I have with technical books is the publishing model: this is a textbook, and since the price tag is large, let’s make the physical book large! Of course no one wants to carry those books around. I even recently bought a kindle to solve this problem.

SQL Performance Explained is more a paperback book form factor, and that means you can tote it around with you easily, and keep it with you at work. Read it on the train, commuting to work.

200 pages packed cover to cover with all sorts of good chapters, including a primer on indexes & types, scalability & performance, joins, clustering, Top-N queries, DML, and more.

Read this: Why a four letter word divides dev and ops

4. It’s technical but accessible

If you’re a real rock-bottom beginner, you might want to brush up a bit on your SQL syntax, and some of the basics. You could also keep a 101 book side-by-side while you’re reading this one.

For the intermediate & advanced DBAs out there, this book will sit comfortably in your paws as you flip the pages and learn something new. For instance just today I learned that Postgres can index NULLs while MySQL, Oracle and SQL Server cannot. Learn something new every day.

Related: MySQL interview guide for managers and candidates alike

5. Gives you answers you can use today

After twenty years of consulting, I’ve seen a few patterns emerge. Besides the spectrum of team & communication challenges, firms hitting the performance wall often have issues with their relational databases.

Yes, those databases are sometimes on the wrong hardware, or there are other obscure problems with setup or configuration. But the bulk of issues center on badly written SQL.

SQL is a much reviled language and often misunderstood. And it doesn’t seem like developers have gotten that much better at it over the years. It would explain the rise of NoSQL databases, as they often speak REST or xml, no need for pesky sequel.

One parting note. For all the devs and architects out there who want to sing the virtues of ORMs, this book hits that squarely on the nose. By showing how differently each relational database implements SQL, performs work, and optimizes, Winand also illustrates the naivete behind trying to write database independent application code.

If you’re a developer and don’t know how to profile a query or run explain plan, don’t walk, run to your closest Amazon.com store and get this book!
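And profiling a query really is that easy. In MySQL you just prefix the query with EXPLAIN (the other platforms have their own equivalents); the table here is hypothetical:


mysql> EXPLAIN SELECT id, total FROM orders WHERE customer_id = 42;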

Also: 5 more things deadly to scalability

Criticisms

If I were to offer two slight criticisms, it would be these. First, the index is a bit wonky. When I look under “P” for example, there’s no Postgres, while one quarter of the book is obviously devoted to that platform. Further, looking up NULL, which is covered in depth in various places in the book, yields only one entry in the index, p54 on Oracle. So the index could be a bit more robust to be useful.

The other criticism is more perhaps my bias. On page 96, when he discusses ORMs I thought he was rather… shall we say gentle. Although he clearly states that “eager fetching” is problematic, I don’t think he goes far enough to condemn it. In my experience ORMs are always trouble.

Then again why am I complaining, their use keeps me forever employed.

Want a copy? Markus Winand’s book site has all the goods!

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Why Oracle won’t kill MySQL

oracle mysql database

Join 15,000 others and follow Sean Hull on twitter @hullsean.

1. MySQL does not compete with Oracle

It’s a myth that MySQL somehow poses a threat to Oracle. Oracle’s customers tend to be large enterprises running apps like e-business suite. These are certified to run on Oracle, and further they sit close to finance.

MySQL tends to be a choice of scrappy but nimble startups for their web-facing applications. They want to deploy in the cloud, and don’t want to deal with licenses. Plus they have the techops chops to handle the bushwhacking of open source.

Related: Why I wrote the book on Oracle & Open Source

2. Oracle bought Sun for the hardware business

Remember when Oracle acquired Sun? A lot of folks assumed Larry was after MySQL. Grab it & slowly smother it. But actually it was more frosting on the cake. Larry had for years expressed interest in cubes and clusters, and building an Oracle appliance. Whether this ever came to profitable fruition in the form of Exadata remains to be seen. But buying Sun for a song helped him do this.

Also: Why bemoaning AWS performance sounds like Linux detractors circa 1999

3. Larry blows with the wind on open source

He’s money minded, so you’ll see that come first in his decisions.

In the late 90’s, when a customer might spend $100k on Sun and $100k on Oracle licenses, Larry realized porting to Linux and pushing commodity hardware would be a win. So he pushed Linux, and customers could then spend $20k on commodity hardware and $180k on Oracle licenses. Imagine the $10 million budget, if you’re having trouble with the math here.

He also eventually moved the middle tier to Apache for similar reasons. I would argue Oracle corp overall pays mostly lip service to contributing to open source, though they do contribute to some degree.

Read: Why MySQL dbas are so hard to find

4. MySQL support business is real

What’s more, just as they adopted Linux and then offered their “unbreakable Linux” distro, with pricey support along with it, they’re doing similar things with MySQL. For enterprise customers, and those already comfortable with making the call to Redwood Shores, sales folks will happily direct them to support contracts and enterprise add-ons. Naturally.

Read: Why your startup needs real techops

5. There are real viable alternatives to keep balance

And let’s not forget folks, there are already a bunch of forks. There’s the popular and ever growing MariaDB, which Google has put their muscle behind.

Of course let’s not forget the very popular, very capable, and very bulletproof Percona distribution, along with the Percona toolkit and xtrabackup for real hotbackups.

And for those looking to experiment, there’s Drizzle, a work in progress and complete rewrite, though unfortunately not a drop-in replacement.

Read this: What’s the four letter word dividing dev and ops?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

What is eventually consistent and will it work for you?

aws dynamodb

In the tech world we’re fond of inventing all sorts of technical terms that are admittedly kind of confusing.

Join 13,500 others and follow Sean Hull on twitter @hullsean.

I was attending an excellent talk recently called Data at Scale, part of the Database Month series that Eric Benari hosts. In it Mark Uhrmacher presented some phenomenal solutions which worked for flash-sale site ideeli. They allowed ideeli to support its incredible business model, where 15% of traffic would happen in 15 minutes every day. As he called it, a “self-imposed denial of service attack”. Interesting analogy.

What occurred to me though, is that a lot of companies and startups struggle to understand which database solutions will work for them, what the strengths and weaknesses of each are, and further, what tradeoffs they’ll grapple with.

One concept that we hear a lot is “eventually consistent”. Many of the new NoSQL databases achieve their speed & availability this way. But what’s it all about?

Let’s change a smartphone contact

I’m sure you have a smartphone in your pocket, and for demonstration’s sake I’ll use the iPhone configured with iCloud.

Let’s go ahead and dial up your *OWN* contact card. Click “Edit” and go ahead and change something. Let’s change your title to “rock star”. Now click “Done”. We’ll wait a minute. Now go to your desktop and open up Contacts. Scroll through to your contact and verify that the Title field now shows “rock star”.

How does all this happen? When you click the “Done” button, the iPhone sends changes up to iCloud. iCloud then lets your laptop know a change has happened, and those then sync up.

Now let’s run through the same exercise, but change it in two places. We’ll change the smartphone contact to “Founder” and the desktop Contacts record title to “Consultant”. Wait a little bit and you’ll notice they will both eventually show “Consultant”.

Also: Why a killer title can make or break your content efforts

How long were laptop & phone out of sync?

As you probably noticed, iCloud seems to lean in favor of the desktop client. It’s not clear to me what rules it uses here, nor does it seem to be configurable. Nevertheless, eventually both the desktop and smartphone will have the same contact card for you. Quite a feat of magic!

Read: Why high availability is so very hard to deliver

Handling collisions

There is only one *YOU*, and presumably your digital rolodex reflects that too. You have one and only one contact card. Or do you? As far as these digital tools are concerned there are actually THREE! One on your desktop, one in iCloud and one on your phone. Each time you change it in any of those places, it syncs *UP* to iCloud and then down to the other devices.

Collisions happen if you make changes in two places. Imagine if you’re a road warrior and your laptop was offline for some days, or your smartphone for that matter. In those cases that syncing would happen much later, and collisions more likely.

Related: Why the twitter IPO made a shocking admission on scalability

In the high frequency world of online databases

With online databases, all of this becomes vastly more complex. Web based applications may have 100,000 simultaneous users. Some may be coming from EMEA, while others come from the Americas. It gets pretty darn complex when you have databases in each of those regions.

We deploy applications this way so that one datacenter, say the East Coast region, can fail, but all the others still operate. They can still change data, read and write, without being impacted by the New York outage.

Once that datacenter is restored, the databases will then sync up and reconcile missing data.


MariaDB and Amazon RDS read replicas

MySQL and its variants MariaDB, Percona and Amazon RDS can do something like this with read-replicas. The read-only copies of the database are asynchronous and take time to catch up to changes. You can have the read-only copies in different regions.

This improves availability for browsing your application, but not for making changes. In other words, MySQL can use this method to scale reads but not writes. That’s why I recommend your applications also support a browse-only mode, which means availability won’t be impacted if your authoritative master dies.

Although you can try to do the same for writes by sharding your MySQL instances, this starts to get very messy very fast. Imagine backing up 10 shards, 10x the complexity, and even more when you want to go and do a restore.
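To be fair, the read-replica part really is push-button on RDS. A sketch with the aws CLI, assuming a hypothetical instance named seandb:


$ aws rds create-db-instance-read-replica \
    --db-instance-identifier seandb-replica \
    --source-db-instance-identifier seandb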

Read: Why devops talent is in short supply

Amazon’s Dynamo DB

Amazon’s DynamoDB is a technology based on the original Dynamo whitepaper, which attempts to solve a whole class of problems by relaxing constraints, settling for eventual consistency.

What you get is more availability, it’s hard for the whole cluster to go down. That’s great for applications because they can continue to operate if one or more nodes fails. It also scales writes, which is a sort of holy grail in the database world as it’s typically hard to do.

But remember all this comes at a cost. Traditionally scaling writes is hard to do because all changes are kept in one place. You maintain a single authoritative master. If you want to imagine why this matters, think back to our smartphone example. We changed our contact card on our phone and our desktop at the same time. One of those two changes won the battle. But that’s a case where we’re not overly concerned.

If you imagine a bank doing the same thing, and you wire $1000 via phone and desktop, you can quickly see that there is a whole class of applications that won’t be happy with eventually consistent. Your web application may be one of those. Or it may not. Consider carefully before you go with Amazon RDS or DynamoDB as your datastore.

Read: Why startups need more than great developers to achieve scalability

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters