Big data scientist interview questions

Everybody wants to hire a data scientist these days. According to ReadWrite, the role is overhyped and overpaid. Hype aside, what’s a good approach to interviewing these hard-to-find people?

Here’s ReadWrite’s guide. Hilary Mason has an interview guide as well. Also take a look at Chris Pearson’s data scientist hiring guide.

Join 28,000 others and follow Sean Hull on twitter @hullsean.

While you’ll surely have technical questions to ask, we figure you may already have a handle on those. They’ll vary from business to business.

What we’ve put together is a series of questions that we hope will tease out some good stories, and underscore a candidate’s real-world experience. These are also great for the cross-section of folks involved in the hiring process: higher-level managers, HR & recruiters, plus technical folks who may have data near & dear to them.

1. What’s common?

What key metrics do you see firms repeatedly missing? Why are they important?

You’ve worked as a data scientist before, and run into a lot of problems at different firms. Inevitably, some of those repeat themselves. Give an example of a metric you see over and over that’s essential, but often neglected.

Also: Is the difference between dev & ops a four-letter word?

2. What’s your favorite?

What is your favorite KPI and why?

As a data scientist, you’ve probably approached different companies, and found a couple of indicators that you particularly like. Maybe they highlight potential for growth? Or lead to other interesting discoveries about the business?

Related: Is automation killing old-school operations?

3. Let’s talk dollars

Give an example of a financial benefit you brought to a firm. How & how much?

Give an example where a measurement you made, and a business change it informed had real ROI for the business. What was that discovery? How did the business make the change? What was the financial benefit to the bottom line?

Read: Do managers underestimate operational cost?

4. Business data discovery

Give an example where you discovered data the business didn’t know it had. What & How?

Sometimes businesses have stored assets that have been forgotten. Perhaps they’ve been archived, or a collection job was abandoned. Perhaps it’s a corner of Salesforce that hasn’t been evaluated. How did you bring the new data to light, and make use of it?

5. Why do you love data?

Why is data scientist your chosen career path?

This is an open-ended question, but should spark some stories. Perhaps the candidate enjoys working with tech, product & biz-ops equally? Why are their skills uniquely suited to the role over other technical careers?

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Our latest Why I don’t work with recruiters

Why you need a performance dashboard like StackExchange

Most startups talk about performance as crucial. But with all the other pressing business demands, it can be forgotten until it becomes a real problem.

Flipping through High Scalability today, I found a post about Stack Exchange’s performance dashboard.

The dashboard for Stack Exchange performance is truly a tectonic shift. They have done a tremendous job with the design, to make this all visually appealing.

But to focus just on the visual aesthetics would be to miss many of the other impacts to the business.

1. Highlight reliability to the business

Many dashboards, from Cacti to New Relic, present performance data. But they’re also quite technical and complicated to understand. This inhibits their usefulness across the business.

The dashboard at Stack Exchange boils performance down to the essentials. What customers are viewing, how quickly the site is serving them, and where bottlenecks are if any.

2. What’s our architecture?

Another thing their dashboard does is illustrate their infrastructure clearly.

I can’t count the number of startups I’ve worked at where there are extra services running, odd side utility boxes performing tasks, and general disorganization. In some cases engineering can’t tell you what a given service or server does.

By outlining the architecture here, they create a living network diagram that everyone benefits from.

Related: Is automation killing old-school operations?

3. Because Fred Wilson says so

If you’re not convinced by what Google says, consider Fred Wilson, who surely should know. He says speed is an essential feature. In fact *the* essential feature.

The 10 Golden Principles of Successful Web Apps from Carsonified on Vimeo.

Read: Do managers underestimate operational cost?

4. Focus on page loading times!

If you scroll to the very bottom of the dashboard, you have two metrics: homepage load time, and their “questions” page. The homepage is a metric everyone can look at, as many customers will arrive at your site through this portal. The questions page will be different for everyone. But there will be some essential page or business process that it highlights.

By sifting down to just these two metrics, we focus on what’s most important. All of this computing power, all these servers & networks are all working together to bring the fastest page load times possible!

5. Expose reliability to the customer

This performance page doesn’t just face the business. It also faces the customers. It lets them know how important speed is, and underscores how seriously the business takes its customers. Having an outage, or a spike that’s slowing you down? Customers have some transparency into what’s happening.

If you’re building a startup tech blog you need to ask yourself this question

I work at a lot of startups, and these days more and more are building tech blogs. With titles like Labs or Engineering at Acme Inc, these can be great ways to build your brand, and bring in strong talent.

So how do we make them succeed? It turns out many of the techniques that work for other blogs apply here, and regular attention can yield big gains.

1. Am I using snappy headlines?

Like it or not we live in a news world dominated by sites like Upworthy, Business Insider, Gawker & Huffpo. Ryan Holiday gained fame using a gonzo style as director of marketing at American Apparel. Ryan argues that old-style yellow journalism is back with a vengeance.

Clickbait aside, you *do* still need to write headlines that people will click. What often works is for your title to be a little sound bite, encapsulating the gist of your post, but leaving enough hook that people need to click. Don’t be afraid to push the envelope a bit.

Also: Which tech do startups use most?

2. Line up those share buttons & feedburner

Of course you want to make the posts easy as hell to share. Cross-posting on Twitter, LinkedIn, Facebook and wherever else your audience hangs out is a must. Use tools like Hootsuite & Buffer to line up a pipeline of content, and try different titles to see which are working.

You’ll also want to enable FeedBurner. Some folks will add your blog to Feedly. Subscriber counts there can be a good indication of how it is growing in popularity too.

Related: Do today’s startups assemble software at their own risk?

3. Watch & listen to google analytics

You’re going to keep an eye on traffic by installing a beacon into your page header. There are lots of solutions, GA being the obvious one because it’s free. But how to use it?

Ask yourself questions. Who are my readers? Where are they coming from? How long do they spend on average? Do some pages spur readers to read more? Is there copy that works better for readers? Are my readers converting?

It’ll take time if you’re new to the tool, but start with questions like those.

Read: Is automation killing old-school operations?

4. Optimize your SEO a little bit

Although you don’t want to go overboard here, you do want to pay some attention. Using keyword-rich titles and <h2> tags, along with WordPress SEO plugins that support other meta HTML tags, means you’ll be speaking the language search engines understand. Add tags & categories that are relevant to your content.

Don’t overdo it though. Stick to a handful of tags per post. If you add zillions with lots of word-order combinations & so forth, it may tip off the search engines in ways that work against you.

Check out: How to hire a developer that doesn’t suck

5. Search for untapped keywords

When I first started getting serious about blogging, I had an intern helping me with SEO. She did some searching with the Moz keyword research tools and found some gems. These are searches that internet users are doing, but for which there still is no great content.

For example if results showed “cool tech startups in gowanus brooklyn” had no strong results, then writing an article that covered this topic would be a winner right away.

These are big opportunities, because it means if you write directly for that search, you’ll rank highly for all those readers, and quickly grow traffic.

Read also: 5 things toxic to scalability

Is Zero downtime even possible on RDS?

Oh RDS, you offer such promise, but damn it if the devil isn’t always buried in the details.

Diving into a recent project, I’ve been looking at upgrading RDS MySQL. Major MySQL upgrades can be a bit messy. Since the entire engine is rebuilt, query performance can change, syntax can break, and triggers & stored procedures can have problems.

That’s not even getting into it with storage engines. Still have some tables on MyISAM? Beware.

The conclusion I would make is if you want zero downtime, or even nearly zero, you’re going to want to roll your own MySQL on EC2 instances.

Read: Why high availability is so very hard to deliver

1. How long did that upgrade take?

The first thing I set out to do was upgrade a test instance. One of the first questions my client asked: how long did that take? “Ummm… you know I can’t tell you clearly.” For an engineer this is the worst feeling. We live & die by finding answers. When your hands are tied, you really can’t say what’s going on behind the curtain.

While I’m sitting at the web dashboard, I feel like I’m trying to pick up a needle with thick leather gloves. Nothing to grasp here. At one point the dashboard was still spinning, and I was curious what was happening. I logged out and back in again, and found the entire upgrade step had already completed. I think that added five minutes to perceived downtime.

Sure I can look at the RDS instance log, and tell you when RDS logged various events. But when did the machine go offline, and when did it return for users? That’s a harder question to answer.

Without command line, I can’t monitor the process carefully, and minimize downtime. I can only give you a broad brush idea of what’s happening.

Also: RDS or MySQL 10 use cases

2. Did we need to restart the instance?

RDS insists on rebooting the instance itself every time it performs a “Modify” operation. Often restarting the MySQL process would have been enough! This is like hunting squirrels with a bazooka. Definitely overkill.

As a DBA, it’s frustrating to watch the minutes spin by while your hands are tied. At some point I’m starting to wonder… Why am I even here?

Related: Howto automate MySQL slow query analysis with Amazon RDS

3. EBS Snapshots are blunt instruments

RDS provides some protection against a failed upgrade. The process will automatically snapshot your volume before it begins. That’s great.

See also: Is Amazon RDS hard to manage

4. Even promoting a read-replica sucks

I also evaluated using a read-replica. Here you spin up a slave first. You then upgrade *THAT* box to 5.6 ahead of your master. While your master is still sending data to the slave, your downtime would in theory be very minimal. Put the master in read-only mode, wait a few seconds for the slave to catch up, switch the application to point to the slave, then promote it!

All that would work well with command line, as your instances don’t restart. But with RDS, it takes over seven long minutes!
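On a self-managed EC2 instance, that cutover can be sketched roughly as follows. This is only a sketch, assuming classic binlog replication; the application repoint itself happens outside MySQL:

```sql
-- On the master: freeze writes so the slave can catch up
SET GLOBAL read_only = ON;

-- On the slave: confirm replication has caught up
SHOW SLAVE STATUS\G       -- wait until Seconds_Behind_Master reaches 0

-- Repoint the application at the slave, then promote it
STOP SLAVE;
RESET SLAVE ALL;          -- discard the old master's coordinates (5.5.16+)
SET GLOBAL read_only = OFF;
```

With shell access, each of those steps takes seconds, which is exactly the control RDS takes away.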

Read this: 5 Reasons to move data to Amazon Redshift

5. RDS can upgrade to MySQL 5.6!

MySQL 5.6 introduced a new timestamp datatype which allows for fractional seconds. Great feature, but it means the on-disk data structures are different. Uh oh!

If you’re replicating from MySQL 5.5 to 5.6, it will break, because rows flow out of the master in the old format and collide with the 5.6-formatted datafiles! Not good.

The solution requires running ALTER commands on the master beforehand. That in turn locks up tables. So it turns out promoting a read-replica is a non-starter for 5.5 to 5.6. It doesn’t really save much.

All of this devil in the details stuff is terrible when you don’t have command line access.

Read: Are SQL databases dead?

Is there a devops talent gap?

New programming languages & services are being invented at a staggering pace. Hosting is changing, networking is changing, the race to market is quickening.

But what does all of this mean to the search for talent? Who understands all these components? Who is an expert in any one?

1. That new car smell

We all remember that time. You know when you drove out of the dealership with your brand new wheels. Driving down the road, you feel on top of the world. You start dreaming of all the fun times you’ll have in your new car. For days and weeks afterward you walk out to your car, open the door & sit inside. It all feels special. You kind of hang out there for a few minutes enjoying the smell before you drive off. Right?

Let’s be vigilant to remember the same thing happens, or rather is happening, in technology all the time. As we automate our infrastructures with Ansible, Puppet & Chef, and deploy continuous integration with Jenkins, Travis or Codeship, we should give pause. Each of these tools has its own syntax, its own bugs, its own community, its own speed of development & change, its own life.

Also: Does a four letter word divide dev & ops?

2. A lot of rushing

Google tells me the synonyms of agile are nimble, lithe, supple & acrobatic. So in a fast-moving world it’s no wonder agile is so big. Anything that allows us to respond to customers quicker & evolve our product faster is a good thing. Yes it is.

Over the years I’ve worked with a lot of clients & customers. Some right out of the gates are in a hurry. There is a sense of urgency even from the initial meeting. Although not in every case, sometimes these are the sign of the perpetually late. They end up throwing money around, throwing technology around, and all in a desperate attempt to plug a leaking ship.

In our race to automate & remain agile & nimble, we should also consider the future. Let’s attempt to find a balance & consider future implications of technology decisions & choices.

Read: Is automation killing old-school operations?

3. Hosting, what’s that?

For many of the startups I work with today, they’ve never deployed on anything but Amazon. There was no rack of computers in a closet & a T1 line, circa 1997. There were no Rackspace hosted servers or a colo in New Jersey, circa 2005. Right from the beginning it was all on-demand computing.

This shift has surely brought a lot of benefit. But no one can argue it isn’t still very new. And with newness there is a learning curve. And bugs & surprises.

Related: Does a devop need to practice the art of resistance?

4. More complexity in troubleshooting

The wild ride really begins when you’re troubleshooting performance problems. Running your database on RDS you say? How the heck do I get to the terminal and run “top”? Can I do an iostat?

And what does iostat output really mean in multi-tenant Amazon, where your disk is an EBS volume across an unknown & unfriendly network. Who knows why it just slowed to a crawl, then sped up dramatically a few minutes later.

Even fetching the relevant logfile can be complicated. For all the problems the cloud eliminates, it sure introduces a few of its own. And who is the expert, and how do you find them?

Read this: When fat fingers take down your database

5. More tech, fewer experts

I asked the question a few weeks back: do today’s startups require assembly of a lot of parts that no one really understands?

I’ve taken to browsing the stacks at the lovely StackShare site lately. There you can see what some of the top startups are using for their technology stacks. Docker, Yammer, Yelp, Stripe, Vine, Spotify & Stack Overflow are all there today.

There are new message queues like NSQ, markup like Markdown, & programming languages like CoffeeScript & Clojure. Even Java. Are people still building web apps in Java? No, please no!

While it’s wonderful to see such an explosion of innovation, I look at this from an operations perspective. In five years, when the first & second wave of developers at your startup have left, picture yourself trying to find talent in a long since out-of-fashion language like Dart or Swift. What’s more how do you untangle the mess you’ve now built?

Check this: Is the SQL database dead?

Best Howto posts on Scalable Startups

MySQL slow query on RDS

If you run MySQL as your backend datastore, is there one thing you can do to improve performance across the application?

Those SQL queries are surely key. And the quickest way to find the culprits is to regularly analyze your log. I’ve put together a howto & script for doing this on Amazon RDS.

Automate mysql slow query analysis on Amazon RDS
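As a taste of what that analysis looks like: with the RDS parameter group set to slow_query_log = 1 and log_output = TABLE, the slow queries land in a table you can query directly. A sketch, using the column names from stock MySQL:

```sql
-- Pull the ten slowest recent queries from the slow log table
SELECT start_time,
       query_time,
       rows_examined,
       LEFT(sql_text, 80) AS query_snippet
FROM mysql.slow_log
ORDER BY query_time DESC
LIMIT 10;
```

From there, the worst offenders are usually obvious candidates for new indexes or a rewrite.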

Vagrant & Amazon EC2

Among the automation tools getting a lot of attention these days are Chef, Puppet & Ansible for devops, plus Docker & Vagrant.

Can we use Vagrant to spin up EC2 instances? As it turns out, yes. It can be a great way to automate dev environments, and works in conjunction with Docker.

How to deploy on amazon EC2 with vagrant

Cache websites for speed

Is Fred Wilson right that speed is an essential feature? We certainly think so.

And besides tweaking & tuning the database, the next best way is caching. You cache objects at the browser, add a page cache and memcache, redis or elasticache. Here’s our howto.

5 tips to cache websites & boost speed

DB Change Management

Everyone uses version control for application code, whether it’s PHP, Ruby or Node.js. But are you using it for database changes?

DDL, those statements that create objects, should also be included in version control. But how do you do it properly? Database change management is one part art, but there are some helpful tools to get you on the right track.

With some luck you’ll be able to roll forward & backward to versions of your database schema just as easily as you can versions of your software.

5 tips better db change management

MySQL Scalability

MySQL is the big bad beast that still hobbles a lot of site performance. Here are some key tips, narrowed down to just the essentials.

5 ways to boost mysql scalability

Cloud Scalability

The cloud enables scalability, but is it ready out of the box? There are some key things to remember on your road to high scalability in the cloud.

3 ways to boost cloud scalability

Fortify MySQL Replication

MySQL replication is pretty awesome for what it is. Still there are gotchas & potholes. Here’s our guide to smooth sailing.

5 ways fortify mysql replication

MySQL replica with Hotbackup

If you’re building your own MySQL instances on EC2, you’ll also build your own replicas. Luckily there are some great tools that make this reliable & smooth. Install Percona’s hot backup tool & you’re off to the races.

Easier mysql replication using hotbackups

MySQL Backups

If you’ve forgotten all about backups since your cloud or managed solution does all that for you, think again! There are still things you should do in addition. At the very least run a fire drill & find out if all the parts are there for rebuild.

10 things to remember with mysql backups

5 reasons to move data to Amazon Redshift

Amazon is rolling out new database offerings at a rapid clip. I wondered Did MySQL and Mongodb just have a beautiful baby called Aurora? That was last month.

Another that’s been out for a while is the data warehouse offering called RedShift.

1. Old-fashioned SQL interface

Ok, yes Redshift can support petabyte databases and this in itself is staggering to consider. But just after you digest that little fact, you’ll probably discover that it’s SQL compatible.

This is a godsend. It means the platform can leverage all of the analytical tools already in the marketplace, ones your organization is already familiar with. Many are already certified on RedShift, such as Looker and ChartIO.

Also: Are SQL Databases Dead?

2. Lots of ways to load data

After you build your first cluster, the first question on your mind will be, “How do I get my data into RedShift?” Fortunately there are lots of ways.

Stage in S3 & use COPY

Everyone using AWS is already familiar with S3, and RedShift uses it as a staging ground. Create a bucket for your CSV or other datafiles, then parallel load them with the special COPY command.

For those coming from the Oracle world, this is like SQL*Loader, which doesn’t go through the SQL engine, but directly loads data as blocks into datafiles. Very fast, very parallel.
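A minimal COPY looks something like this. The bucket name and credentials below are placeholders, and I’m assuming gzip-compressed CSV files:

```sql
-- Parallel load CSV files staged in S3 into an existing table
COPY events
FROM 's3://my-staging-bucket/events/'
CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
CSV
GZIP;    -- drop this if the staged files aren't compressed
```

RedShift splits the file list across slices, so the more evenly sized files you stage, the more parallelism you get.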

AWS Data Pipeline

Some folks are leveraging the AWS Data Pipeline to copy MySQL tables straight into RedShift.

FlyData for Amazon MySQL RDS

I’m in the process of evaluating FlyData Sync. This is a service-based solution which connects to your Amazon RDS for MySQL instance, capturing binlog data much like Oracle’s GoldenGate does, and ships it across to RedShift for you.

If you have constantly changing data, this may be ideal, as you’re not limited to the one-shot data load implied by the basic COPY command solution.

Read: What is ETL and why is it important?

3. Very fast or very big nodes

There are essentially two types of compute nodes for RedShift. DW2 nodes are dense compute, running on SSD. As we all know, these are very fast solid-state drives, and bring huge disk I/O benefits. Perfect for a data warehouse. They cost about $1.50/Tb per hour.

The second type is DW1, the so-called dense storage nodes. These can scale up to a petabyte of storage. They run on traditional storage disks, so they aren’t SSD-fast. They’re also around $0.50/Tb per year. So a lot cheaper.

Amazon recommends that if you have less than 1Tb of data, you go with Dense Compute, or DW2. That makes sense, as you get SSD speed right out of the gates.

Related: What is a data warehouse?

4. distkeys, sortkeys & compression

The nice thing about NoSQL databases is you don’t have to jump through hoops to shard your data, as you would with a traditional database like MySQL. RedShift is similar: distribution is supported right out of the box.

When you create tables you’ll choose a distkey. You can only have one per table, so be sure it’s the column you join on most often. A timestamp field, or user_id, perhaps, would make sense. You’ll choose a diststyle as well: ALL means keep an entire copy of the table on each node, KEY means distribute rows based on the distkey, and EVEN, the default, spreads rows evenly across nodes.

RedShift also has sortkeys. You can have more than one of these on your table, and they are something like b-tree indexes. They order values, and speed up sorting.
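Here’s a sketch of what that DDL looks like. The table and columns are made up for illustration:

```sql
CREATE TABLE page_views (
    user_id    INTEGER,
    url        VARCHAR(2048),
    created_at TIMESTAMP
)
DISTSTYLE KEY
DISTKEY (user_id)        -- the column you join on most often
SORTKEY (created_at);    -- ordered on disk; speeds range scans & sorts
```

Choosing these up front matters, because changing a distkey later means rebuilding the table.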

Check: 8 Questions to ask an AWS expert

5. Compression, defragmentation & constraints

Being a columnar database, RedShift also supports column encodings, or compression. LZO is often used for varchar columns; bytedict and runlength are also common. One way to determine these is to load a sample of data, say 100,000 rows. From there you can run ANALYZE COMPRESSION on the table, and RedShift will make recommendations.

A much easier way however, is to use the COPY command with COMPUPDATE ON. During the initial load, this will tell RedShift to analyze data as it is loaded and set the column compression types. This is by far the most streamlined approach.
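Both approaches, sketched with the same hypothetical table and placeholder bucket as above:

```sql
-- Option 1: load a sample, then ask RedShift for encoding recommendations
ANALYZE COMPRESSION page_views;

-- Option 2: let the initial COPY pick encodings automatically
COPY page_views
FROM 's3://my-staging-bucket/page_views/'
CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
CSV
COMPUPDATE ON;
```

Note that COMPUPDATE only sets encodings on an empty table, so it’s a first-load trick, not an ongoing one.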

RedShift also supports table constraints, however they don’t restrict data. Sounds useless, right? Except they do inform the optimizer. What does that mean? If you know you have a primary key id column, tell RedShift about it. No, it won’t enforce the constraint, but since your source database does, you’re able to pass that information along to RedShift for optimizing queries.

You’ll also find some of the defragmentation options from Oracle & MySQL present in RedShift. There is VACUUM, which reorganizes the table & resets the high water mark while it stays online for updates. And then there is deep copy, which is more thorough and faster, but locks the table while it runs.
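Roughly, with the same hypothetical table name:

```sql
-- Online reorganization: re-sorts rows & reclaims deleted space
VACUUM page_views;

-- Deep copy: rebuild into a fresh table, then swap names.
-- Faster on badly fragmented tables, but the table is locked meanwhile.
CREATE TABLE page_views_new (LIKE page_views);
INSERT INTO page_views_new (SELECT * FROM page_views);
DROP TABLE page_views;
ALTER TABLE page_views_new RENAME TO page_views;
```

The CREATE TABLE LIKE form carries over the distkey, sortkey & encodings, which is why it beats a plain CREATE TABLE AS here.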

Related: Is Oracle killing MySQL?

Best of Scalability, Speed & Performance posts

Twitter IPO

Why did the Twitter IPO filing mention scalability?

It’s been a while since the twitter IPO, and they’ve had their ups and downs. An interesting little side note in the IPO filing mentioned speed, performance & scalability.

5 things toxic to scalability

5 Things toxic to scalability

Still one of our all time most popular articles, this post garnered 20,000 views alone. Covering the five biggest problems web applications face around scalability.

Pitfalls

5 Scalability pitfalls to avoid

Another twist on a popular theme, some of the common pitfalls startups stumble over on scalability.

Hire generalists

Are generalists better at scaling the web?

If you’re hiring to scale the web, think twice before hiring specialists. It may be the generalists that provide the most comprehensive help.

Scalability happiness

What one change promotes scalability happiness?

If there’s one thing that can help most websites with speed & performance, this has got to be it!

Is scalability big business?

Why is scalability such big business?

Scalability remains a challenge for many web startups. What’s the reason and does that make it big business?

Are CEOs hiding scalability problems?

Are Startup CEOs hiding scalability problems?

Are there technology choices that amount to sweeping problems under the rug?

5 ways startups misstep on scalability

5 Ways startups misstep on scalability

Missteps abound, here are some of the biggest for startups.

Did MySQL & Mongo have a beautiful baby called Aurora?

Amazon recently announced RDS Aurora, a new addition to their database-as-a-service offerings.

Here’s Mark Callaghan’s take on what’s happening under the hood and thoughts from Fusheng Han.

Amazon is uniquely positioned with RDS to take on offerings like Clustrix. So it’s definitely worth reading Dave Anselmi’s take on Aurora.

1. Big availability gains

One of the big improvements Aurora seems to offer is around availability. You can replicate with Aurora’s own replication, or alternatively with MySQL binlog-type replication. They’re also duplicating data twice in each of three availability zones, for six copies of your data.

All this is done over their SSD storage network which means it’ll be very fast indeed.

Read: What’s best RDS or MySQL? 10 Use Cases

2. SSD means 5x faster

The Amazon RDS Aurora FAQ claims it’ll be 5x faster than equivalent hardware, making use of its proprietary SSD storage network. This will be a welcome feature to anyone already running on MySQL or RDS for MySQL.

Also: Is MySQL talent in short supply?

3. Failover automation

Unplanned failover takes just a few minutes. Here customers will really benefit from the automation Amazon has built around this process. Existing customers can do all of this of course, but it typically requires operations teams to anticipate & script the necessary steps.

Related: Will Oracle Kill MySQL?

4. Incremental backups & recovery

The new Aurora supports incremental backups & point-in-time recovery. This is traditionally a fairly manual process. In my experience MySQL customers are either unaware of the feature, or not interested in using it due to complexity. Restore last night’s backup and avoid the hassle.

I predict automation around this will be a big win for customers.

Check out: Are SQL Databases dead?

5. Warm restarts

RDS Aurora separates the buffer cache from the MySQL process. Amazon has probably accomplished this by some recoding of the stock MySQL kernel. What that means is this cache can survive a restart. Your database will then start with a warm cache, avoiding any service brownout.

I would expect this is a feature that looks great on paper, but one customers will rarely benefit from.

See also: The Myth of Five Nines – Is high availability overrated?

Unanswered questions

The FAQ says point-in-time recovery up to the last five minutes. What happens to data in those five minutes?

Presumably aurora duplication & read-replicas provide this additional protection.

If Amazon implemented Aurora as a new storage engine, doesn’t that mean new code?

As with anything your mileage may vary, but InnoDB has been in the wild for many years. It is widely deployed, and thus tested in a variety of environments. Aurora may be a very new experiment.

Will real-world customers actually see 500% speedup?

Again your mileage may vary. Let’s wait & see!

Related: 5 Things toxic to scalability

How do you get prepared for Infrastructure Engineering jobs?

I just started contributing to a great site called Career Dean. It offers a forum where students and new college graduates can learn from those with established careers in industry.

A recent question…

Application infrastructure is not something we learned in my college, and it’s definitely not something I will learn anytime soon in my current job (I work as a mobile developer for a mid-sized startup). I also think it’s not something you can just goof around with in your own computer. 

Do companies prepare their software engineers when hiring infrastructure engineers, or do they all expect you to know your skills and tools? 

Also: Is automation killing old-school operations

For example, my guess is that Facebook has a huge infrastructure team making the site usable and fast for as many people as possible. Where can you learn those skills, or get prepared for that type of job? Do you think it is possible to self-learn those skills?

Here’s my take on some of this. Since the invention of Linux, experimenting with infrastructure has been within reach. In the present day there are some even better reasons to experiment & teach yourself about this important aspect of devops & backend server management.

Early Linux circa 1992

Before Linux (in the ’80s, we’re talking about) it was a lot harder. Into the ’90s Linux came on the scene, and you could cobble together parts, video card, motherboard, memory, IDE or SCSI bus & disks, and build a 486 tower. You could then start building Linux. Of course, everything had to be hand-rolled (compiled by hand & usually debugged)!

Also: Is five nines availability a myth in todays datacenters?

Present day virtualization

Fast forward 20 years, and it’s an incredible time to be messing with infrastructure. Why? Because virtualization means you can do it all right on your laptop.

Also: Are SQL databases dead?

What to learn

Start learning Vagrant. It automates the provisioning of virtual machines on your own desktop. You can boot those Linux boxes to your heart’s content, network between them, hack them, run services on them, and build your skills.

I’d also recommend digging into Docker. It is the lightning-fast younger brother of virtualization.

Also: Is Oracle trying to kill MySQL?

Fundamentals

You really need those fundamentals. Build some 1.x Linux kernels and see if you can get ’em running. That’ll teach you some hacking & troubleshooting skills. Find forums to get answers.

Also take a look at CoreOS. It has some really cool stuff around infrastructure management & automation.

Also: Is the art of resistance important to devops success?

After all of that, you might want to play around with Puppet or Chef. Learn how to set up continuous integration, Jenkins, etc.
