Round Robin
In this lesson, you will get familiar with the Round Robin (RR) scheduling policy.
Introduction
To solve this problem, we will introduce a new scheduling algorithm, classically referred to as Round-Robin (RR) scheduling. The basic idea is simple: instead of running jobs to completion, RR runs a job for a time slice (sometimes called a scheduling quantum) and then switches to the next job in the run queue. It repeatedly does so until the jobs are finished. For this reason, RR is sometimes called time-slicing.
To understand RR in more detail, let’s look at an example. Assume three jobs, A, B, and C, arrive at the same time in the system, and that they each wish to run for 5 seconds. An SJF scheduler runs each job to completion before running another (as seen in the figure below).
In contrast, RR with a time-slice of 1 second would cycle through the jobs quickly. Check out the figure below:
TIP: AMORTIZATION CAN REDUCE COSTS
The general technique of amortization is commonly used in systems when there is a fixed cost to some operation. By incurring that cost less often (i.e., by performing the operation fewer times), the total cost to the system is reduced. For example, if the time slice is set to 10 ms, and the context-switch cost is 1 ms, roughly 10% of the time is spent context switching and is thus wasted. If we want to amortize this cost, we can increase the time slice, e.g., to 100 ms. In this case, less than 1% of the time is spent context switching, and thus the cost of time-slicing has been amortized.
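To see the arithmetic, here is a tiny C sketch that computes the fraction of time lost to switching, assuming (as a simplification) that every slice of useful work is followed by exactly one context switch; it reproduces the roughly-10% and under-1% figures above.

```c
#include <stdio.h>

// Fraction of CPU time lost to context switching for a given time slice,
// assuming each slice of useful work is followed by one context switch.
static double overhead(double slice_ms, double switch_ms) {
    return switch_ms / (slice_ms + switch_ms);
}

int main(void) {
    double switch_cost = 1.0;  // context-switch cost: 1 ms

    printf("10 ms slice:  %.1f%% overhead\n", 100 * overhead(10.0, switch_cost));
    printf("100 ms slice: %.2f%% overhead\n", 100 * overhead(100.0, switch_cost));
    return 0;
}
```

Running this prints about 9.1% for the 10 ms slice and about 0.99% for the 100 ms slice, matching the "roughly 10%" and "less than 1%" figures above.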
The average response time of RR is (0 + 1 + 2) / 3 = 1 second; for SJF, the average response time is (0 + 5 + 10) / 3 = 5 seconds.
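As a quick sanity check, the following small C sketch encodes this scenario (three jobs arriving at t = 0, 5 seconds of work each, a 1-second RR slice) and computes both averages; response time here is measured from arrival to first run.

```c
#include <stdio.h>

// Verifies the response-time numbers above for three jobs that arrive
// together at t = 0 and each need 5 seconds of CPU time.
#define NUM_JOBS 3

int main(void) {
    int run_time = 5;  // seconds each job runs
    int slice    = 1;  // RR time slice in seconds

    double sjf_total = 0, rr_total = 0;
    for (int i = 0; i < NUM_JOBS; i++) {
        sjf_total += i * run_time;  // SJF: job i first runs at 0, 5, 10
        rr_total  += i * slice;     // RR:  job i first runs at 0, 1, 2
    }

    printf("SJF average response time: %.2f s\n", sjf_total / NUM_JOBS);
    printf("RR  average response time: %.2f s\n", rr_total / NUM_JOBS);
    return 0;
}
```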
As you can see, the length of the time slice is critical for RR. The shorter it is, the better the performance of RR under the response-time metric. However, making the time slice too short is problematic: suddenly the cost of context switching will dominate overall performance. Thus, deciding on the length of the time slice presents a trade-off to a system designer, making it long enough to amortize the cost of switching without making it so long that the system is no longer responsive.
Note that the cost of context switching does not arise solely from the OS actions of saving and restoring a few registers. When programs run, they build up a great deal of state in CPU caches, TLBs, branch predictors, and other on-chip hardware. Switching to another job causes this state to be flushed and new state relevant to the currently-running job to be brought in, which may exact a noticeable performance cost.
Trade-off
RR, with a reasonable time slice, is thus an excellent scheduler if response time is our only metric. But what about our old friend, turnaround time? Let’s look at our example above again. A, B, and C, each with running times of 5 seconds, arrive at the same time, and RR is the scheduler with a (long) 1-second time slice. We can see from the picture above that A finishes at 13, B at 14, and C at 15, for an average of 14. Pretty awful!

It is not surprising, then, that RR is indeed one of the worst policies if turnaround time is our metric. Intuitively, this should make sense: what RR is doing is stretching out each job as long as it can, by only running each job for a short bit before moving to the next. Because turnaround time only cares about when jobs finish, RR is nearly pessimal, even worse than simple FIFO in many cases.

More generally, any policy (such as RR) that is fair, i.e., that evenly divides the CPU among active processes on a small time scale, will perform poorly on metrics such as turnaround time. Indeed, this is an inherent trade-off: if you are willing to be unfair, you can run shorter jobs to completion, but at the cost of response time; if you instead value fairness, response time is lowered, but at the cost of turnaround time. This type of trade-off is common in systems; you can’t have your cake and eat it too.
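Here is a matching C sketch for turnaround time under the same scenario; the RR completion times rely on the simplifying assumption that all jobs are identical in length and arrive together, so job i finishes its final slice at (run_time - 1) * NUM_JOBS + i + 1 seconds.

```c
#include <stdio.h>

#define NUM_JOBS 3

// Turnaround time = completion time - arrival time (arrival is 0 here).
int main(void) {
    int run_time = 5;  // seconds each job runs

    // SJF: jobs run back to back, completing at 5, 10, and 15.
    double sjf_total = 0, finish = 0;
    for (int i = 0; i < NUM_JOBS; i++) {
        finish += run_time;
        sjf_total += finish;
    }

    // RR with a 1-second slice: since the jobs are identical and arrive
    // together, job i finishes at (run_time - 1) * NUM_JOBS + i + 1,
    // i.e., at 13, 14, and 15.
    double rr_total = 0;
    for (int i = 0; i < NUM_JOBS; i++)
        rr_total += (run_time - 1) * NUM_JOBS + i + 1;

    printf("SJF average turnaround: %.2f s\n", sjf_total / NUM_JOBS);
    printf("RR  average turnaround: %.2f s\n", rr_total / NUM_JOBS);
    return 0;
}
```

This prints an average turnaround of 10 seconds for SJF versus 14 for RR, confirming the numbers above.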
We have developed two types of schedulers. The first type (SJF, STCF) optimizes turnaround time but is bad for response time. The second type (RR) optimizes response time but is bad for turnaround time. And we still have two assumptions that need to be relaxed: assumption 4 (that jobs do no I/O) and assumption 5 (that the run-time of each job is known). Let’s tackle those assumptions next.
TIP: OVERLAP ENABLES HIGHER UTILIZATION
When possible, overlap operations to maximize the utilization of systems. Overlap is useful in many different domains, including when performing disk I/O or sending messages to remote machines; in either case, starting the operation and then switching to other work is a good idea, and improves the overall utilization and efficiency of the system.
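As a rough illustration of the idea, here is a minimal sketch using POSIX threads, where a 1-second sleep stands in for a slow disk read or network round trip (the timings are made up for illustration); the main thread keeps computing while the "I/O" is in flight. Compile with something like cc overlap.c -lpthread.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

// Simulate a slow I/O operation on a separate thread so the CPU
// can keep doing useful work in the meantime.
static void *slow_io(void *arg) {
    (void)arg;
    sleep(1);  // stand-in for a 1-second disk read or network round trip
    printf("I/O complete\n");
    return NULL;
}

int main(void) {
    pthread_t io_thread;
    pthread_create(&io_thread, NULL, slow_io, NULL);  // start the I/O ...

    long sum = 0;  // ... and overlap it with useful computation
    for (long i = 0; i < 100000000; i++)
        sum += i;
    printf("computation done: %ld\n", sum);

    pthread_join(io_thread, NULL);  // wait for the I/O before exiting
    return 0;
}
```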