Will microservices just die already?


I was just reading Dave Kerr’s piece
The Death of Microservice Madness in 2018.

Not just because it has an awesome title, but because it was trending on news.ycombinator.com for a while, and that is always a good quality signal.

And I’m all about quality. 🙂

Join 37,000 others and follow Sean Hull on twitter @hullsean.

I quickly found that I agreed with him on a lot of points. There were also a bunch of serious criticisms in there that I hadn’t heard before.





Here are some of my comments on the piece:


Dave, this piece is genius. You hit on a lot of stuff here, and offered critical thought with such finesse. It’s not easy to stand up and be contrary to the trends!

o increased complexity for devs
– so true, setting up the entire suite of services on dev is tough
– and let’s not forget about integration testing, which also becomes tough

Check out: The Myth of five nines


o systems have poorly defined boundaries
– very true. We can split them neatly across teams at the start, but over time things get messy, and they overlap.

Read: Lambda & serverless interview questions


o complexities of state

– Do you use a monolithic db? If so the architecture isn’t really microservices.
– If each service has its own, transactions that touch multiple services become very tough.
– And what about backups for all these individual databases?
– How about at restore time? How do you manage them all to restore at a SINGLE POINT IN TIME?

Check out: my get started with serverless & lambda in 5 minutes guide


o Databases without schemas push logic into the application

– They sure do. And ones without complex joins do too. It’s a dirty little secret of NoSQL.
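To make that concrete, here’s a tiny Python sketch of what “logic pushed into the application” tends to look like: with no schema in the database, every code path that writes has to carry its own validation. The field names and rules here are hypothetical.

```python
# Hypothetical sketch: the database enforces nothing, so the app must.
REQUIRED_FIELDS = {"user_id", "email", "created_at"}

def validate_user_doc(doc):
    """Reject documents a schemaless store would happily accept."""
    missing = REQUIRED_FIELDS - set(doc)
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not isinstance(doc["user_id"], int):
        raise TypeError("user_id must be an integer")
    return doc

# every writer has to remember to call validate_user_doc() before saving
```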

Check out: How to hire a developer that doesn’t suck

o Versioning can be hard
– Absolutely. Sure, each service has its own version, but as Dave says you have to manage cross-version compatibility. If they are truly independent, this will drift over time in an unpredictably complex way.
– And what about backup versions?

Related: Lambda & serverless interview questions

o Distributed transactions
– With a monolithic db broken up into little pieces, sometimes… maybe often, you will need to do things across data in multiple services. Then what?
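Then you end up rolling your own coordination. One common answer is a saga: a chain of local commits with compensating actions if a later step fails. A bare-bones Python sketch, with entirely hypothetical service calls, might look like this:

```python
# Hypothetical saga sketch: each step commits locally in its own service,
# and a failure triggers the compensating calls in reverse order.

def reserve_inventory(order):  print("reserve", order)
def release_inventory(order):  print("release", order)
def charge_payment(order):     print("charge", order)
def refund_payment(order):     print("refund", order)
def create_shipment(order):    print("ship", order)
def cancel_shipment(order):    print("cancel", order)

def place_order(order):
    steps = [
        (reserve_inventory, release_inventory),
        (charge_payment, refund_payment),
        (create_shipment, cancel_shipment),
    ]
    completed = []
    try:
        for do, undo in steps:
            do(order)
            completed.append(undo)
    except Exception:
        for undo in reversed(completed):  # compensate for what succeeded
            undo(order)
        raise
```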

I like the graphic Dave put together. It’s great. I do like serverless too. I’m also critical of it. I wrote a piece 30 questions to ask a serverless fanboy 🙂

Also: Is AWS too complex for small dev teams?

Get more. I write one piece every month & share it through email. Tech, startups & innovation. My latest: Can daily notes help projects succeed?




Five More Things Deadly to Scalability

The.Rohit – Flickr

Join 19k others and follow Sean Hull on twitter @hullsean.

1. Slow Disk I/O – RAID 5 – Multi-tenant EBS

Disk is the foundation of all your servers, and the base of their performance. True, with larger and larger main memory much is available in cache, but a server still needs to constantly read from disk and flush things from memory. So it’s a very important component of performance and scalability.

What’s wrong with Raid 5?

Raid 5 was designed to give you more space using fewer disks. It’s often used in a server with few slots, or because ops misunderstand how badly it will impact performance. On a database server it can be particularly bad.

All writes see a performance hit. What’s worse, if you lose a disk, the RAID, though technically still online, will perform SO SLOWLY as to be effectively offline. And a rebuild takes many hours. Worse still is the risk of losing another drive during that rebuild. What if you have to order a drive and it takes a couple of days?

RAID 10 is the solution. Mirror each set of two drives, then stripe over those. Even with only four slots available, it’s worth it. Good read performance, good write performance, and protection.

What the heck is multi-tenant?

In the cloud, you share servers, network & disk just like you do apartments in a building. Hence the name. Amazon’s EBS or elastic block storage, extends this metaphor, offering you the welcome flexibility of a storage network. But your bottleneck can be fighting with other tenants on that same network.

Default servers do have this problem; however, AWS has addressed it with a little-known but VERY useful feature called Provisioned IOPS. It’s a technical name, but it means you can lock in reliable disk I/O. Just what the scalability doctor ordered.
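As a rough sketch of what that looks like in practice, here’s how you might create a Provisioned IOPS volume with Python and boto3. The region, availability zone, size and IOPS numbers are placeholders, not recommendations.

```python
# Sketch: create an EBS volume with Provisioned IOPS using boto3.
# All values below are placeholders for illustration only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,              # GiB
    VolumeType="io1",      # Provisioned IOPS SSD
    Iops=2000,             # the I/O rate you are locking in
)
print(volume["VolumeId"])
```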

Check out our original post: 5 Things Toxic to Scalability

2. Using the database for Queuing

MySQL is good at a lot of things, but it’s not ideal for managing application queues. Do you have a table like JOBS in your database, with a status column including values like “queued”, “working”, and “completed”? If so you’re probably using the database to queue work in your application.

It’s not a great use of MySQL because of locking problems that come up, as well as the search and scan to find the next task.

Luckily there are great solutions for developers. RabbitMQ is a great queuing option, as is Amazon’s SQS. What’s more, as external services they’re easier to scale.
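Here’s a rough Python sketch of that JOBS table replaced by SQS, using boto3. The queue URL and message body are placeholders.

```python
# Sketch: queue work in Amazon SQS instead of a status column in MySQL.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder

# producer: enqueue a job
sqs.send_message(QueueUrl=queue_url, MessageBody='{"job": "resize_image", "id": 42}')

# consumer: pull a job, do the work, then delete it from the queue
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("working on", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```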

[quote]
Scalability becomes key to your business as your customer base grows. But it doesn’t have to be impossible. Disk I/O, caching, queuing and searching are all key areas where you can make a big dent, in a manageable way. Juggle your technical debt too, and you’re golden!
[/quote]

Also take a look at: Why Generalists are Better at Scaling the Web

3. Using Database for full-text searching

Oracle has full-text search support, so why shouldn’t we expect the same in MySQL? Well MySQL *does* have this, but in many versions only with the old MyISAM storage engine. It has its own set of corruption problems, and isn’t really very performant.

Better to use a proven search solution like Apache Solr. It is built specifically for search, includes excellent library support for most modern web languages, and best of all is easy to SCALE! Just add more servers on your network, or distribute them globally.

For folks interested in the bleeding edge, fulltext is coming to InnoDB, the crash-safe & transactional storage engine, in 5.6. That said, you’re still probably better off going with an external solution like Solr, or Sphinx with the MySQL SphinxSE plugin.
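To give a flavor of the external route, here’s a small sketch using the pysolr client, assuming a Solr core is already running; the core name and documents are made up.

```python
# Sketch: index and query documents in Solr instead of MySQL fulltext.
import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/articles", timeout=10)  # core name is a placeholder

# index a couple of documents
solr.add([
    {"id": "1", "title": "Scaling MySQL", "body": "tips on disk I/O and caching"},
    {"id": "2", "title": "Full-text search", "body": "why Solr beats MyISAM fulltext"},
], commit=True)

# run a full-text query instead of LIKE '%...%' in MySQL
for doc in solr.search("fulltext"):
    print(doc["title"])
```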


How to find A Mythical MySQL DBA

4. Insufficient Caching at all layers

Cache, cache, and cache some more. Your webservers should use a solid memcache or other object cache between them & the database. All those little result sets will sit in resident memory, waiting for future web pages that need them.
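Here’s a minimal cache-aside sketch in Python, assuming a memcache daemon and the pymemcache client; the database call is a stand-in for your real query.

```python
# Cache-aside sketch: check memcache first, fall back to the database on a miss.
from pymemcache.client.base import Client

cache = Client(("127.0.0.1", 11211))

def query_db_for_user(user_id):
    return b"row-from-mysql"  # placeholder for the real database query

def get_user(user_id):
    key = f"user:{user_id}"
    row = cache.get(key)
    if row is None:                        # cache miss: hit the database once
        row = query_db_for_user(user_id)
        cache.set(key, row, expire=300)    # keep the result set for 5 minutes
    return row
```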

Next use a page cache such as varnish. This sits in front of the webserver; think of it as a mini-webserver that handles very simple pages, but in a very high-speed way. Like a pack of motorbikes weaving through an otherwise packed freeway, it frees your webserver to do more complex work.

Browser caching is also important. But you can’t get at your customers’ browsers, or can you? Well not directly, but you can instruct them what to cache. Do that with proper expires headers. Have your system administrator configure apache to support this.
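As an illustration, here’s one way to send those caching headers from application code using Flask; in practice the same thing is often done in apache with mod_expires instead, as noted above.

```python
# Sketch: tell browsers they may reuse responses, via Cache-Control headers.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_cache_headers(response):
    # the browser may keep this response for up to a week
    response.headers["Cache-Control"] = "public, max-age=604800"
    return response

@app.route("/")
def index():
    return "cached by the browser for a week"
```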

Also: Tweaking Disqus to Find Experts & Drive Traffic

5. Too much technical debt

Technical debt can bite. What is it? As you’re developing a new idea, you’ll build prototypes. As those get deployed to customers, change gets harder, and past things you glossed over become problems. One team leaves, another inherits the application, and the problems multiply. Over time you’re building your technical debt as your team spends more time supporting old code and fixing bugs, and less time building new features. At some point a rewrite of problem code becomes necessary.

Also: How I increased my blog pagerank to 5

Get more. Grab our exclusive monthly Scalable Startups. We share tips and special content. Here’s a sample