Deadlock Avoidance via Scheduling
Let's look at another solution to the problem of deadlock: avoiding deadlocks in the first place. Instead of deadlock prevention, in some scenarios deadlock avoidance is preferable. Avoidance requires some global knowledge of which locks various threads might grab during their execution, and subsequently schedules those threads in such a way as to guarantee that no deadlock can occur.
For example, assume we have two processors and four threads that must be scheduled upon them. Assume further we know that Thread 1 (T1) grabs locks L1 and L2 (in some order, at some point during its execution), T2 grabs L1 and L2 as well, T3 grabs just L2, and T4 grabs no locks at all. We can show these lock acquisition demands of the threads in tabular form:

         T1    T2    T3    T4
   L1    yes   yes   no    no
   L2    yes   yes   yes   no
A smart scheduler could thus compute that as long as T1 and T2 are never run at the same time, no deadlock could ever arise. Here is one such schedule:

   CPU 1    T3    T4
   CPU 2    T1    T2
Note that it is OK for (T3 and T1) or (T3 and T2) to overlap. Even though T3 grabs lock L2, it can never cause a deadlock by running concurrently with other threads, because it only grabs one lock.
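The chapter describes this idea in prose rather than code, but the scheduling decision can be sketched concretely. The sketch below is a hypothetical Python illustration (the names may_deadlock, schedule, and the greedy placement strategy are our own, not from the text): two threads can only deadlock with each other if they share at least two locks, since a circular wait requires each thread to hold one lock while waiting for another. A scheduler with global knowledge of lock demands can place any pair of conflicting threads on the same CPU, so they run serially and never overlap.

```python
# Hypothetical sketch of lock-aware static scheduling; illustrative only.

def may_deadlock(locks_a, locks_b):
    # A circular wait needs each thread to hold one lock while waiting for
    # another, so two threads can only deadlock if they share >= 2 locks.
    return len(set(locks_a) & set(locks_b)) >= 2

def schedule(demands, num_cpus=2):
    """Greedily assign threads to CPUs: a thread that may deadlock with an
    already-placed thread goes on that thread's CPU (forcing serial
    execution); otherwise it goes on the least-loaded CPU."""
    cpus = [[] for _ in range(num_cpus)]
    for t, locks in demands.items():
        placed = False
        for cpu in cpus:
            if any(may_deadlock(locks, demands[u]) for u in cpu):
                cpu.append(t)        # serialize with its conflict partner
                placed = True
                break
        if not placed:
            min(cpus, key=len).append(t)   # no conflicts: balance load
    return cpus

# The first example: T1 and T2 both grab L1 and L2, T3 grabs only L2.
demands = {"T1": {"L1", "L2"}, "T2": {"L1", "L2"},
           "T3": {"L2"}, "T4": set()}
print(schedule(demands))  # → [['T1', 'T2'], ['T3', 'T4']]
```

Running it reproduces the schedule above: T1 and T2 end up on the same CPU (and thus never run concurrently), while T3 and T4 fill the other CPU.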
Let’s look at one more example. In this one, there is more contention for the same resources (again, locks L1 and L2), as indicated by the following contention table:

         T1    T2    T3    T4
   L1    yes   yes   yes   no
   L2    yes   yes   yes   no
In particular, threads T1, T2, and T3 all need to grab both locks L1 and L2 at some point during their execution. Here is a possible schedule that guarantees that no deadlock could ever occur:

   CPU 1    T4
   CPU 2    T1    T2    T3
As you can see, static scheduling leads to a conservative approach where T1, T2, and T3 are all run on the same processor, and thus the total time to complete the jobs is lengthened considerably. Though it may have been possible to run these tasks concurrently, the fear of deadlock prevents us from doing so, and the cost is performance.
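The performance cost can be seen by applying the same pairwise-conflict rule to this heavier contention table. The self-contained sketch below (a hypothetical illustration; the conflicts helper is our own name) shows that every pair among T1, T2, and T3 shares both locks, so avoidance forces all three onto one CPU, tripling the serial portion of the schedule.

```python
# Hypothetical sketch: find all thread pairs that could deadlock.
# Each such pair must share >= 2 locks to form a circular wait.

def conflicts(demands):
    names = list(demands)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if len(demands[a] & demands[b]) >= 2]

# The second example: T1, T2, and T3 all grab both L1 and L2.
demands = {"T1": {"L1", "L2"}, "T2": {"L1", "L2"},
           "T3": {"L1", "L2"}, "T4": set()}
print(conflicts(demands))  # → [('T1', 'T2'), ('T1', 'T3'), ('T2', 'T3')]

# Since every pair conflicts, T1-T3 must run serially on one CPU: if each
# thread takes one time unit, the schedule takes 3 units instead of the
# 2 units possible when deadlock were not a concern.
```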
One famous example of an approach like this is Dijkstra’s Banker’s Algorithm, and many similar approaches have been described in the literature.