Last Updated: February 25, 2016 · passcod

Separate your worker/clock

When I started trying to deploy on the sides to Heroku, I quickly hit a problem, as my setup is somewhat delicate: I have three processes, two use MRI, the other JRuby. In development, I had a single Procfile which started everything. There were complex interactions with Bundler and Rbenv. Trying to make it fit onto Heroku took me a day before I gave up and started an EC2 instance... where I wasted another day. Setting up a production system is hard when you’re used to Heroku's git-push-deploy flow.
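For reference, the single development Procfile looked something like the sketch below; the exact commands, and which process ran on JRuby, are my guesses rather than the actual setup:

    web:    bundle exec rackup -p $PORT
    worker: RBENV_VERSION=jruby-1.7.0.RC2 bundle exec sidekiq -r ./workers.rb
    clock:  bundle exec clockwork clock.rb

One Heroku app only gets one Ruby runtime, which is presumably part of what made this delicate.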

So I split everything. It took surprisingly little time (just a few hours), and the architecture is much more solid. I use git submodules to share code, with interesting consequences. You can see it all on GitHub.
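Concretely, wiring the shared code into a consumer repo is just a couple of commands; the repository URL and the core/ path here are assumptions for illustration:

    git submodule add https://github.com/passcod/thesides-core.git core
    git submodule update --init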

Each of the three processes has its own repo, which pushes to its own Heroku instance and is managed separately. This leads to somewhat complex deploy scenarios whenever a database migration is in order, but otherwise it's great.
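For instance, a deploy that touches the schema might look roughly like this from within one of the consumer repos; the remote name, app name, and rake task are placeholders, not the actual setup:

    # Bump the shared core, commit the new submodule pointer, deploy.
    (cd core && git pull origin master)
    git commit -am "Bump thesides-core"
    git push heroku master

    # Run the migration once, against the shared database.
    heroku run rake db:migrate --app thesides-web

Repeat the bump-and-push for every repo that needs the new core, which is where the "somewhat complex" part comes in.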

Submodules are checked out at a specific commit.

The only things in thesides-core are database models and worker definitions. Because submodules are ‘frozen’, logic dictates I’d have to update it in all affected repos whenever I change something. Turns out this isn't completely true:

  • Sidekiq clients only need to know the workers’ names (and maybe their arity). So as long as I don't add workers, I only need to update the submodule in thesides-worker (my Sidekiq server), not the others (see the sketch after this list).
  • The clock actually fires Sidekiq jobs for everything (except one). So it never needs to know about the database. I only very rarely update thesides-clock's submodule.
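To make this concrete, here is a minimal sketch of the kind of thing thesides-core contains; the model, its fields, and the worker name are made up for illustration:

    # thesides-core: database models and worker definitions, nothing else.
    # (DataMapper.setup/finalize happen in the consuming app.)
    require 'data_mapper'
    require 'sidekiq'

    class Post
      include DataMapper::Resource
      property :id,    Serial
      property :title, String
    end

    class RefreshWorker
      include Sidekiq::Worker

      def perform(post_id)
        post = Post.get(post_id)
        # ... do the actual work against the record ...
      end
    end

A Sidekiq client (the web front-end or the clock, say) can then enqueue a job knowing nothing but the class name and its arguments, which is why its copy of the submodule can lag behind:

    # From a client process; no worker code needs to be loaded.
    Sidekiq::Client.push('class' => 'RefreshWorker', 'args' => [42])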

Zero-downtime deployment

When I deploy a new version of the web front-end, the worker is still running and isn't affected in the least. Even better, I can improve the worker without incurring user-visible (web) downtime. I deploy the clock rather rarely.

Application-driven meta-management

The clock, of course, pings all three services (including itself) regularly to ensure they stay up. But it also restarts the Sidekiq server every six hours to keep it fresh. Similarly, I can imagine a service noticing something is slow and requesting another dyno to be spun up... from within. This could bring in advanced, granular, automagic scaling. All very meta!
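As a rough illustration, the clock process can boil down to something like this; it assumes the clockwork gem and the Heroku Platform API, and the app names, URLs, and HEROKU_API_KEY are placeholders rather than my actual configuration:

    require 'clockwork'
    require 'net/http'

    module Clockwork
      # Ping every app (including this one) so they all stay up.
      every(10.minutes, 'ping.services') do
        %w[
          https://thesides-web.herokuapp.com/ping
          https://thesides-worker.herokuapp.com/ping
          https://thesides-clock.herokuapp.com/ping
        ].each { |url| Net::HTTP.get_response(URI(url)) }
      end

      # Restart the Sidekiq dyno every six hours to keep it fresh.
      every(6.hours, 'restart.worker') do
        uri = URI('https://api.heroku.com/apps/thesides-worker/dynos/worker.1')
        request = Net::HTTP::Delete.new(uri.request_uri)
        request['Accept'] = 'application/vnd.heroku+json; version=3'
        request['Authorization'] = "Bearer #{ENV['HEROKU_API_KEY']}"
        Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
      end
    end

The same pattern extends naturally to the scaling idea: a process can watch its own metrics and hit the same API to add or remove dynos.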

More languages and versions

I am currently running two MRI 1.9.3 instances and one JRuby 1.7.0.RC2 instance (I haven't upgraded yet). This is quite rubyish, but seeing how easy the split was, I am considering breaking out even more functionality and using languages suited to specific tasks instead of applying many different concepts within the same codebase. Event-based programming, for example, could be better suited to NodeJS.

Refactoring

This modular approach even makes high-level refactoring possible: I could progressively switch out DataMapper for another solution, even while modifying a table's schema. Perhaps more daring: I could probably take some services off Heroku entirely without significant downtime.