Tag Archives: big data

Five ways to get your data into Redshift


Everybody is hot under the collar these days over Redshift. I heard one customer say a query that took 10 hours before now finishes in under a minute. Without modification. When businesses see a 600x speedup, that can change the way they do business.

Join 32,000 others and follow Sean Hull on twitter @hullsean.

What’s more, Redshift is easy to deploy. No complicated licenses like in the Oracle days. No hardware, just create your cluster & go.

So you’ve made the decision, and you have data in your transactional database, MySQL RDS or Postgres. Now what?

Here are some systems that will help you move your data over on the regular, and keep it in sync. Most of these are near real-time, so you can expect reports to reflect the data your business created today.

1. RJ Metrics Pipeline

One of the simplest options is RJ Metrics Pipeline. Set up a trial account, configure your Redshift credentials in the warehouse section (port, user, password, endpoint) and save. Then configure your data source. For MySQL, specify hostname, user, password & port. You get the option to go through an ssh tunnel for security. That’s good. You’ll also be given the GRANT statements to create a user in MySQL for RJM.

rjmetrics table config screen

RJM uses a primary or unique key to figure out which rows have changed. Well, that’s only true if you’re using incremental refresh. If you’re using complete refresh, it just selects all the data & replaces it each time.
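
To make that incremental vs. complete refresh distinction concrete, here’s a minimal Python sketch of the two strategies. The table & column names are made up, and this only illustrates the general approach, not what RJM actually runs under the hood.

    # Sketch of incremental refresh (high-water mark on a key) vs. complete refresh.
    # Table and column names are hypothetical.
    import pymysql

    def fetch_changes(conn, last_seen_id):
        # Incremental: only pull rows past the high-water mark on a primary/unique key
        with conn.cursor() as cur:
            cur.execute("SELECT * FROM orders WHERE id > %s ORDER BY id", (last_seen_id,))
            return cur.fetchall()

    def fetch_everything(conn):
        # Complete refresh: select all the data & replace the target each time
        with conn.cursor() as cur:
            cur.execute("SELECT * FROM orders")
            return cur.fetchall()

    conn = pymysql.connect(host="mysql.example.com", user="sync", password="secret", database="app")
    rows = fetch_changes(conn, last_seen_id=123456)   # ship these rows to Redshift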

The user interface is a bit clunky. You have to go in and CONFIGURE EACH TABLE you want to replicate. There’s no REPLICATE-ALL option. This is a pain. If you have 500 tables, it might take hours to configure them all.

Also since RJM isn’t CDC (change data capture) based, it won’t be as close to real-time as some of the other options.

Still RJM works and it’s pretty point-n-click.

Also: Is Amazon too big to fail?

2. xplenty

xplenty is really a lot more than just a sync tool. It’s a full-featured ETL system. Want to avoid writing tons of Python jobs to convert datatypes, transform 0 to paid & 1 to free, things like that? Well, xplenty is built to let you assemble ETL pipelines without writing code.
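
For a sense of the kind of hand-rolled Python a tool like this is meant to spare you, here’s the sort of tiny transform job you’d otherwise be writing. The field names & file paths are hypothetical; only the 0-to-paid, 1-to-free mapping comes from the example above.

    # The sort of hand-rolled transform job an ETL tool is meant to replace.
    # Field names and file paths are illustrative only.
    import csv

    PLAN_MAP = {"0": "paid", "1": "free"}

    def transform(row):
        row["plan"] = PLAN_MAP.get(row["plan"], "unknown")    # 0 -> paid, 1 -> free
        row["signup_date"] = row["signup_date"][:10]          # datetime -> date string
        return row

    with open("users_raw.csv") as src, open("users_clean.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            writer.writerow(transform(row))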

xplenty main dashboard

It’s a bit complex to set up at first, but very full featured. It is the DIY developer’s or DBA’s tool of the bunch. If you need hardcore functionality, xplenty seems to have it.

Also: When hosting data on Amazon turns bloodsport?

Is Data your dirty little secret?

3. Alooma

Alooma might well be the most interesting of the bunch.

After a few stumbles during the setup process, we managed to get this up and running smoothly. Again, as with xplenty & Fivetran, it uses CDC to grab changes from the MySQL binlogs. That means you get near real-time sync.

alooma dashboard

Although it’s a bit more complex to set up than Fivetran, it gives you a lot more. There’s excellent visibility around data errors, which you *will* have. Knowing where they happen means your data team can be very proactive. This is great for the business.

What’s more, there is a Python-based Code Engine which allows you to write bits of code that transform data in the pipeline. That’s huge! Want to do some simple ETL? This is a way to do that. You can also send notifications, or requeue events. All this means you get a state-of-the-art pipeline, with good configurability & logging.
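
Conceptually, a pipeline transform is just a small function that takes an event and returns it (or drops it). The sketch below is only an illustration of that idea; Alooma’s actual Code Engine API, event format & field names will differ.

    # Conceptual sketch of an in-pipeline transform -- not Alooma's actual API.
    # Events are assumed to arrive as dicts; field names are made up.

    def transform(event):
        # Simple ETL: normalize a field in flight
        if event.get("plan") == "0":
            event["plan"] = "paid"
        elif event.get("plan") == "1":
            event["plan"] = "free"

        # Drop obviously broken events so they can be requeued or inspected later
        if not event.get("user_id"):
            return None    # stands in for "skip / requeue this event"

        return event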

Read: Is aws a patient that needs constant medication?

4. Fivetran

Fivetran is super point-n-click. It is CDC based like Flydata & Alooma, so you’re gonna get near realtime sync with low overhead. It monitors your binlogs for changed data, and ships it to Redshift. No mess.

The dashboard is simple, the setup is trivial, and it just seems to work. Least pain, best bang.
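
If you’re curious what “monitoring the binlogs” looks like under the hood, here’s a bare-bones sketch using the open source python-mysql-replication library. Connection settings are placeholders, and a real service does far more (batching, retries, schema handling); this just shows the CDC idea.

    # Bare-bones CDC: tail the MySQL binlog and print row changes as they happen.
    # Connection settings are placeholders.
    from pymysqlreplication import BinLogStreamReader
    from pymysqlreplication.row_event import (
        WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent,
    )

    MYSQL = {"host": "mysql.example.com", "port": 3306, "user": "repl", "passwd": "secret"}

    stream = BinLogStreamReader(
        connection_settings=MYSQL,
        server_id=100,                  # must be unique among replicas
        only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
        blocking=True,
        resume_stream=True,
    )

    for event in stream:
        for row in event.rows:
            # A real pipeline would buffer these up and COPY them into Redshift
            print(event.table, row)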

Related: Does Amazon eat its own dogfood?

5. Other options

There are lots of other ways to get data into Redshift.


Flydata

I did manage to get Flydata working at a customer last year. It’s a very viable option. I wrote at length about that solution, so I’ll leave you to read all about it there.

AWS Data Pipeline

I’ve started to kick the tires of AWS Data Pipeline but haven’t decided if it’s the best option for customers.

Nightly rebuild

The Donors Choose Tech Blog posted about their project which can move data from Postgres to Redshift. You can find the project here.

This will do a *full* reload each night, so if your db is too big for that, it might need modifications. Also, if you’re using MySQL as the source db, you’ll need to change some code. One thing I found in there was Perl & sed commands to transform your source schema CREATE & ALTER statements into Redshift-compatible ones. That in itself is worth a look.
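
The same schema-munging idea in Python looks roughly like the snippet below. These are just a few common substitutions (Redshift has no storage engines, no AUTO_INCREMENT, and a narrower set of datatypes), not a complete converter.

    # Rough Python equivalent of the Perl & sed schema munging: rewrite MySQL DDL
    # into something Redshift will accept. Only a handful of common substitutions shown.
    import re

    def mysql_ddl_to_redshift(ddl):
        ddl = re.sub(r"ENGINE=\w+", "", ddl)                      # Redshift has no storage engines
        ddl = re.sub(r"AUTO_INCREMENT(=\d+)?", "", ddl)           # use IDENTITY columns if needed
        ddl = re.sub(r"\bdatetime\b", "timestamp", ddl, flags=re.I)
        ddl = re.sub(r"\b(tiny|medium|long)?text\b", "varchar(65535)", ddl, flags=re.I)
        ddl = re.sub(r"\bunsigned\b", "", ddl, flags=re.I)
        ddl = re.sub(r"DEFAULT CHARSET=\w+", "", ddl)
        return ddl

    print(mysql_ddl_to_redshift(
        "CREATE TABLE users (id int unsigned AUTO_INCREMENT, bio text, created datetime) "
        "ENGINE=InnoDB DEFAULT CHARSET=utf8;"
    ))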

Lambda to the rescue

The awslabs github team has put together a Lambda-based Redshift loader. This might be just what you need. Remember though that you’ll need to deliver your source data as CSV files to S3 on the regular. So you’ll need some method to dump it. But if you have that half of the equation, this is ideal.
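
If you need that dump-to-S3 half, a cron-able Python sketch might look like this. The host, bucket & table names are placeholders, and a real job would handle escaping, compression & incremental extracts.

    # Dump a MySQL table to CSV and push it to S3, where a Lambda-based loader can pick it up.
    # Host, bucket and table names are placeholders.
    import csv
    import boto3
    import pymysql

    conn = pymysql.connect(host="mysql.example.com", user="etl", password="secret", database="app")
    with conn.cursor() as cur, open("/tmp/orders.csv", "w", newline="") as f:
        cur.execute("SELECT * FROM orders")
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cur.description])    # header row
        writer.writerows(cur.fetchall())
    conn.close()

    # Ship it to S3 on the regular (cron, Jenkins, whatever you have)
    boto3.client("s3").upload_file("/tmp/orders.csv", "my-redshift-staging-bucket", "orders/orders.csv")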

Data Migration Service or DMS

This appears to have supported Redshift early on, but does not appear to do so now. I’ve gotten conflicting reports, so I should dig a bit more. Anybody want to comment on this one?


Tungsten

I tried & tried & tried to get Tungsten to work. I did have some success, but was still blocked by data problems which remained unresolved. To my mind the project is still broken, or at least very buggy.

Also: Is AWS too complex for small dev teams?


What happens when entrepreneurs treat data as a product?


I’ve been reading DJ Patil’s thoughts on building data products. As the chief data scientist of the United States, he knows a thing or two.

Join 30,000 others and follow Sean Hull on twitter @hullsean.

I also attended a recent Look & Tell event, where Lincoln Ritter talked about Data Democracy at Animoto. He echoed many of Patil’s lessons.

I took away a few key lessons from these that seem to be repeating refrains…

1. UX of data

UX design involves looking at how customers actually use a product in the real world. What parts of the product work for them, how they flow through that product and so on.

That same design sense can be applied to data. At a high level, that means exposing data in a measured, meaningful & authoritative way. Not all the tables & all the data points, but rather the key ones that help the business make decisions. Then layering on top discovery tools like Looker, to allow biz-ops to make more informed decisions.

Also: 5 Reasons to move data to Amazon Redshift

2. Be iterative

Clean data, presented to business operations in a meaningful way, allows them to explore the data, and find useful trends. What’s more with good discovery tools, biz-ops is empowered to do their own reporting.

All this reduces the need to go to engineering for each report. It reduces friction and facilitates faster iteration. That’s agile!

Related: Is automation killing old-school operations?

3. Be authoritative

Handing the keys to the data kingdom over to the business means more eyes on the prize. That may well surface data inconsistencies. Each such case can reduce trust in your data.

Being authoritative means building checks into your data feeds, and identifying where data is amiss. Then fixing it at the source.

Read: Are SQL Databases dead?

4. Spot checks & balances

Spot checks on data are like unit tests on code. They keep you honest. Those rules for how your business works, and what your data should look like, can be captured in code, then applied as tests against source data.
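
Here’s a toy example of what those rules-as-code can look like. The specific checks (no missing emails, no negative order totals, yesterday’s volume above some floor) are stand-ins for whatever rules your business actually has.

    # Toy data spot checks, run against the source (or the warehouse) on a schedule.
    # The rules themselves are stand-ins for your real business rules.
    import pymysql

    def run_spot_checks(conn):
        failures = []
        with conn.cursor() as cur:
            cur.execute("SELECT COUNT(*) FROM users WHERE email IS NULL OR email = ''")
            if cur.fetchone()[0] > 0:
                failures.append("users with missing email")

            cur.execute("SELECT COUNT(*) FROM orders WHERE total < 0")
            if cur.fetchone()[0] > 0:
                failures.append("orders with negative totals")

            cur.execute("SELECT COUNT(*) FROM orders WHERE created_at >= CURDATE() - INTERVAL 1 DAY")
            if cur.fetchone()[0] < 100:          # assumed floor for a normal day
                failures.append("suspiciously low order volume yesterday")
        return failures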

Also: Is Apple betting against big data?

5. Monitoring for data outages

As data is treated as a product, it should be monitored just like other production systems. A data inconsistency or failed spot check then becomes an “outage”. By taking these just as seriously, and firefighting just as you would any other outage, you can build trust in that data, as those fires become less frequent.
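
Wiring the spot checks above into your existing alerting is what makes the “outage” framing real. A minimal sketch, assuming some alerting webhook (the URL here is hypothetical):

    # Treat a failed spot check like any other production alert.
    # The webhook URL is hypothetical -- point it at PagerDuty, Slack, or whatever you page on.
    import requests

    def alert_on_failures(failures):
        if not failures:
            return
        requests.post(
            "https://alerts.example.com/hooks/data-quality",
            json={"severity": "outage", "failed_checks": failures},
            timeout=10,
        )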

Also: Why Airbnb didn’t have to fail


The Needle in Big Data Noise

Join 5500 others and follow Sean Hull on twitter @hullsean.


Also take a look at: I hacked Disqus Digests to discover new blogs

Who the heck is Bayes

Thomas Bayes was a scientist & thinker, Fellow of the Royal Society, and back in 1763 author of “An Essay toward Solving a Problem in the Doctrine of Chances”. His method advocated learning by approximation, to get closer and closer to the truth by gathering more information, and factoring that into probabilities & predictions.

What isn’t acceptable under Bayes’s theorem is to pretend that you don’t have any prior beliefs. You should work to reduce your biases, but to say you have none is a sign that you have many.
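
Bayes’ rule itself is one line of arithmetic: posterior = likelihood × prior ÷ evidence. Here’s a tiny worked example with made-up numbers, just to show the “update your beliefs as information arrives” mechanic.

    # Bayes' rule with made-up numbers: how much should one positive test move your belief?
    prior = 0.01               # P(condition): a 1% base rate is our prior belief
    sensitivity = 0.95         # P(positive | condition)
    false_positive = 0.05      # P(positive | no condition)

    evidence = sensitivity * prior + false_positive * (1 - prior)    # P(positive)
    posterior = sensitivity * prior / evidence                       # P(condition | positive)

    print(round(posterior, 3))   # ~0.161 -- far from certain; gather more information and update again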

Why should you care?

From hurricane & earthquake warnings, to financial storms or terrorism, prediction is more important than ever. Epidemiologists can make use of Bayesian techniques to protect populations, gamblers can use it in sport, and investors for markets.

See also Amp up your blog traffic by improving your pagerank

Why Nate Silver is different

Nate is famous for predicting the 2012 presidential election with uncanny accuracy. So the book is an in-depth look at how he thinks, and how he works with data. He talks of Hedgehogs – those who believe in big ideas and work from large principles – versus Foxes, who see the world as messy, often inconsistent and unpredictable, but who nevertheless tend to produce better though less definitive predictions. The philosophy is less about modeling, and more about testing and adjusting along the way to get closer to the truth.

See also Sales sucks, but a bear market offers hard lessons

For engineers & startups

Nate interviewed John Sanders, a scout for the LA Dodgers. He identified five abilities and characteristics that predict success in baseball. Looking at them together, I think they can well predict success in startup land too.

1. Preparedness & work ethic
2. Concentration & focus
3. Competitiveness & self-confidence
4. Stress management & humility
5. Adaptiveness & learning ability

The book is a bit technical and sometimes long-winded. But it is chock-full of real insight, and wisdom that we can all put to use in our careers and businesses.

Also: AirBNB didn’t have to fail – AWS outages be damned!


Big Data – What is it and why is it important?

There’s lots of debate about exactly what constitutes “big” when talking about big data.  Technical folks may be inclined to want a specific number.

But when most CTOs and operations managers are talking about big data, they mean data warehouse and analytics databases.  Data warehouses are unique in that they are tuned to run large reporting queries and churn through large multi-million row tables.  Here you load up on indexes to support those reports, because the data is not constantly changing as in a web-facing transaction oriented database.

More and more databases such as MySQL which were originally built as web-facing databases are being used to support big data analytics.  MySQL does have some advanced features to support large databases such as partitioned tables, but many operations still cannot be done *online* such as table alters, and index creation.  In these cases configuring MySQL in a master-master active/passive cluster provides higher availability.  Perform blocking operations on the inactive side of the cluster, and then switch the active node.
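
In practice, that switch dance looks something like the sketch below: run the blocking DDL on the passive node, let replication catch up, then flip traffic. Host names are placeholders, and the actual “switch” is usually a VIP, proxy or DNS change rather than anything in this script.

    # Sketch of the active/passive dance: run blocking DDL on the passive node, then switch.
    # Host names are placeholders; the real switch is typically a VIP, proxy, or DNS change.
    import pymysql
    from pymysql.cursors import DictCursor

    PASSIVE = "db2.example.com"    # assumed to be taking no application traffic right now

    conn = pymysql.connect(host=PASSIVE, user="admin", password="secret", database="app",
                           cursorclass=DictCursor)
    with conn.cursor() as cur:
        # Blocking operations (ALTERs, index builds) run here without touching the active node
        cur.execute("ALTER TABLE orders ADD INDEX idx_created_at (created_at)")
        # Check replication lag before switching traffic over
        cur.execute("SHOW SLAVE STATUS")
        lag = cur.fetchone()["Seconds_Behind_Master"]
    conn.close()

    # Once the lag hits 0, promote db2 to active (VIP/proxy/DNS) and repeat on db1.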

We’ve worked with MySQL databases as large as 750G in size and single user tables as large as 40 million records without problems. Table size, however, has to be taken into consideration for many operations and queries. But as long as your tables are indexed to fit the query, and you minimize table scans especially on joins, your MySQL database server will happily support these huge datasets.

Sean Hull discusses on Quora – What is Big Data and why is it important?