Lock-free Exclusive Processing

The problem that I'd like to explore in this article is related to multithreaded processing, and it goes something like this: 

Design a component that executes tasks one after the other - there is no overlapping in processing. Tasks could be submitted by different threads.

Of course, the most obvious solution would be to synchronize the execution of tasks.

Each thread tries to acquire the lock on the shared object; if the lock is already held, the thread blocks and waits for it to be released. While waiting, the thread cannot do anything else. This approach creates a lot of contention, and on top of that, acquiring and releasing the lock is a relatively expensive operation, so it does not perform well either. However, we must admire the simplicity of the solution.
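As a minimal sketch of this lock-based approach (the class and method names are mine, not from the article), a synchronized method serializes all task execution:

```java
// Sketch of the lock-based variant: the intrinsic lock on `this`
// guarantees that no two tasks ever run concurrently.
class SynchronizedExecutor {

    public synchronized void submit(Runnable task) {
        // At most one thread can be inside this method at any time;
        // all other submitters block here until the lock is released.
        task.run();
    }
}
```

Every submitter that loses the race for the lock is parked until the current task finishes, which is exactly the contention the article describes.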

Is there a more performant solution that does not block the thread which wants to execute the task? 
Firstly, let's decouple task submission and task execution - a queue sounds like a perfect fit for the problem. Since multiple threads can submit to this queue, we must use a concurrent implementation of the queue.

Once we have the queue, we must process its elements - drain the queue. So, as long as there are tasks in the queue, we are polling and executing them.
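A minimal sketch of this producer/consumer split, assuming ConcurrentLinkedQueue as the concurrent queue implementation (class and method names are my own):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch: any thread may offer tasks; the drain loop polls
// until the queue is empty.
class TaskQueue {
    private final Queue<Runnable> tasks = new ConcurrentLinkedQueue<>();

    public void submit(Runnable task) {
        tasks.offer(task); // safe to call from any thread
    }

    public void drain() {
        Runnable task;
        // poll() returns null once the queue is empty
        while ((task = tasks.poll()) != null) {
            task.run();
        }
    }
}
```

Note that nothing here prevents two threads from draining at the same time yet; that is the exclusivity problem addressed next.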

What about exclusivity? How do we ensure that task executions do not overlap? We've learned that old-fashioned locks are expensive. Java (like most other languages) offers atomic values in its java.util.concurrent.atomic package. These values provide support for lock-free, thread-safe programming. Essentially, in a single operation, we can compare the current value of a variable and set a new value if the comparison succeeds, the so-called CAS (Compare-And-Swap).
For our use case, AtomicBoolean will provide exclusive execution. It serves as a gate: once acquired, it cannot be acquired again until it is released.
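The gate idea can be sketched like this (the Gate class is my own illustration): compareAndSet atomically flips the flag from false to true, and only one thread can win that race.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a CAS-based "gate": only one caller at a time can
// flip it from false to true.
class Gate {
    private final AtomicBoolean busy = new AtomicBoolean(false);

    // Returns true only for the single thread that wins the CAS;
    // everyone else gets false and can simply move on (no blocking).
    public boolean tryAcquire() {
        return busy.compareAndSet(false, true);
    }

    public void release() {
        busy.set(false);
    }
}
```

Unlike a lock, a thread that fails to acquire the gate does not wait; it returns immediately and is free to do other work.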

Tying things together

We have all the necessary parts to compose our component. When we receive a task, we put it in the queue and signal the draining of the queue. Only one thread at a time holds the right to drain the queue, and that exclusivity is guaranteed because only one thread can obtain the gate (flip the AtomicBoolean).
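Putting the queue and the gate together, the component might look like the following sketch (names are mine). Submitting enqueues the task and then attempts to drain; whoever wins the CAS drains everything, and the losers return immediately without blocking:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the complete component: at most one thread drains at a time.
class ExclusiveProcessor {
    private final Queue<Runnable> tasks = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean gate = new AtomicBoolean(false);

    public void submit(Runnable task) {
        tasks.offer(task);
        drain();
    }

    private void drain() {
        // Outer check + re-check: another thread may enqueue a task right
        // after we release the gate, so we loop until the queue is
        // observed empty while we do not hold the gate.
        while (!tasks.isEmpty() && gate.compareAndSet(false, true)) {
            try {
                Runnable task;
                while ((task = tasks.poll()) != null) {
                    task.run();
                }
            } finally {
                gate.set(false); // always release, even if a task throws
            }
        }
    }
}
```

The try/finally is important: if a task throws, the gate is still released and the component does not deadlock.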

If you are more of a visual person, here is a graphical schema of the algorithm.

[Figure: graphical schema of the algorithm]

Food for thought: why do we check whether the queue is empty while signaling and again while draining the queue?

Discussion

Currently, the thread that obtains the gate processes all tasks in the queue. In certain cases, this might not be the desired behavior: the submitting thread pays the cost of draining. However, this obstacle is easy to overcome: we can introduce a single-threaded Executor that drains the queue.
If throttling is a concern as well, a ScheduledExecutorService that schedules draining at predetermined intervals is a viable option.
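A sketch of the Executor-backed variant (names are mine). Because the executor is single-threaded, it provides the exclusivity by itself, so no gate is needed; for throttling, the `drainer.execute(this::drain)` call could instead be replaced with something like `scheduler.scheduleAtFixedRate(this::drain, 0, 100, TimeUnit.MILLISECONDS)` on a ScheduledExecutorService.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: a single-threaded executor owns the draining, so submitting
// threads never run tasks themselves.
class ExecutorBackedProcessor {
    private final Queue<Runnable> tasks = new ConcurrentLinkedQueue<>();
    private final ExecutorService drainer = Executors.newSingleThreadExecutor();

    public void submit(Runnable task) {
        tasks.offer(task);
        drainer.execute(this::drain); // hand off draining to the worker
    }

    private void drain() {
        Runnable task;
        while ((task = tasks.poll()) != null) {
            task.run();
        }
    }

    public void shutdown() {
        drainer.shutdown();
        try {
            drainer.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```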

Tasks that our component processes do not return any results. If we wanted the result of a task execution, we would need to change the signature of the submit method to return a CompletableFuture (and accept a Callable or a Supplier), and store its reference together with the task in the queue. Once we get the result from the task, we complete the CompletableFuture.
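A sketch of that result-returning variant (class name is mine): wrapping the Supplier and its CompletableFuture into one Runnable lets them travel through the queue together.

```java
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

// Sketch: submit returns a CompletableFuture that is completed
// (or completed exceptionally) when the task runs.
class ResultProcessor<T> {
    private final Queue<Runnable> tasks = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean gate = new AtomicBoolean(false);

    public CompletableFuture<T> submit(Supplier<T> task) {
        CompletableFuture<T> future = new CompletableFuture<>();
        tasks.offer(() -> {
            try {
                future.complete(task.get());
            } catch (Throwable t) {
                future.completeExceptionally(t);
            }
        });
        drain();
        return future;
    }

    private void drain() {
        while (!tasks.isEmpty() && gate.compareAndSet(false, true)) {
            try {
                Runnable task;
                while ((task = tasks.poll()) != null) {
                    task.run();
                }
            } finally {
                gate.set(false);
            }
        }
    }
}
```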

The queue implementation used in the example has no upper bound. This is not a good idea: it could easily blow up the heap. We should put an upper bound on the queue and decide on a strategy for what happens when the queue fills up.
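One way to sketch the bounded variant is to swap in a capacity-limited queue such as ArrayBlockingQueue; its non-blocking offer() fails fast when the queue is full, leaving the rejection strategy (drop, block, or report an error) to the caller:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: submission is bounded so the queue cannot grow without limit.
class BoundedSubmission {
    private final BlockingQueue<Runnable> tasks;

    BoundedSubmission(int capacity) {
        tasks = new ArrayBlockingQueue<>(capacity);
    }

    // Returns false when the queue is full; the caller chooses
    // what to do with the rejected task.
    public boolean submit(Runnable task) {
        return tasks.offer(task);
    }
}
```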

Conclusion

In most cases, opting for locking mechanisms is expensive. It is worth the effort to look for lock-free algorithms as a substitute for them.
