A task distribution and coordination library for use in clusters
$ cnpm install cluster-task-sharder 

Cluster Task Sharder


This module implements a simple cluster task distribution mechanism. Task orchestration is done via native locks on the file system, which greatly simplifies deployment - you only need a shared data directory (e.g. an NFS share); no centralized database or messaging infrastructure is needed. The mtime attribute is used to check for stale locks, and it is almost universally supported across filesystems.

There are two main features available: exclusive task attribution and sharding range attribution.

Exclusive task attribution

These are tasks that, at any given time, should have only one instance running. Simply call isTaskMine("myTaskName") with an arbitrary task name, and the task will be immediately assigned to a single instance. There's no other setup needed.

For example, imagine your system needs to periodically fetch weather information from an external source. It would be sufficient for one instance to fetch the information and update a database:

const cluster = require('cluster-task-sharder');

function tryWeatherUpdate(){
    if (cluster.isTaskMine('weatherUpdate')){
        // check and persist the information
    }
}

cluster.join({sharedDirectory: '/tmp/cluster'});
setInterval(tryWeatherUpdate, 1000);

Sharding range attribution

These are tasks that run concurrently across instances, but with each instance attending to a specific shard range. This allows you to share the load across the computing resources instead of limiting to a single instance.

For example, imagine a simple REST API that accepts new emails and later tries to send them asynchronously. Each new email received via the REST endpoint would be persisted with a random shard index (available by calling getNewShardIndex()). A background task would periodically sweep the pending emails within the shard range attributed to that instance (available by calling getMyShardRange()). This ensures the load can be evenly distributed across instances without competing for emails:

const cluster = require('cluster-task-sharder');

function createEmail(email){
    email.shardIndex = cluster.getNewShardIndex();
    // persist email
}
const cluster = require('cluster-task-sharder');

function sendPendingEmails(){
    let myShardRange = cluster.getMyShardRange();
    // send emails that are within this shard range (e.g. 0 to 1000)
}

cluster.join({sharedDirectory: '/tmp/cluster'});
setInterval(sendPendingEmails, 1000);


The module behaviour can be customized by passing options to the join() function.

  • sharedDirectory: the directory shared across instances (e.g. mounted NFS share). Default: /tmp/cluster.
  • heartbeatInterval: the interval between heartbeats in ms. Default: 500.
  • heartbeatTimeout: timeout in ms after which tasks are reassigned to other instances. Default: 2000.
  • checkInterval: interval in ms between checks for stale instances. Default: 1000.
  • pauseAfterWorkersetChange: interval in ms between a change in available instances and redistribution of tasks/sharding ranges. Set this high enough that any ongoing tasks can finish before being redistributed. Default: 5000.
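Putting the options together, a call to join() with every option spelled out might look like the following (the values shown are simply the defaults listed above):

```javascript
const cluster = require('cluster-task-sharder');

cluster.join({
    sharedDirectory: '/tmp/cluster',  // directory shared across instances
    heartbeatInterval: 500,           // ms between heartbeats
    heartbeatTimeout: 2000,           // ms before tasks are reassigned
    checkInterval: 1000,              // ms between stale-instance checks
    pauseAfterWorkersetChange: 5000   // ms grace period before redistribution
});
```

Note that heartbeatTimeout should comfortably exceed heartbeatInterval, otherwise a healthy but briefly delayed instance could be treated as dead.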


From a reliability standpoint, it is important to ensure your tasks are idempotent, and that any concurrent execution won't cause corruption of data. This is important in the event that one instance misbehaves (e.g. stops heartbeating and keeps executing a given task).
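One common way to get idempotency is to make each task a keyed write (an upsert), so that re-applying it produces the same state. A minimal sketch, using an in-memory Map as a stand-in for a real database:

```javascript
// Idempotent update sketch: running it twice yields the same state, so an
// overlapping execution by a misbehaving instance cannot corrupt the data.
const store = new Map();

function upsertWeather(city, reading) {
    // Keyed write: re-applying the same reading is harmless.
    store.set(city, reading);
}

upsertWeather('lisbon', { tempC: 21 });
upsertWeather('lisbon', { tempC: 21 }); // duplicate run, same result
console.log(store.size); // 1
```

The same idea applies to the email example above: marking an email as sent by its unique id means a second sweep of the same shard range cannot send it twice, provided the mark and the send are checked together.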


icon by alvianwijaya from the Noun Project
