Basic Concept: Tickets Represent Your Share
This lesson briefly explains the basic concept of lottery scheduling.
Underlying lottery scheduling is one very basic concept: tickets, which are used to represent the share of a resource that a process (or user or whatever) should receive. The percent of tickets that a process has represents its share of the system resource in question.
Example
Let’s look at an example. Imagine two processes, A and B, and further that A has 75 tickets while B has only 25. Thus, what we would like is for A to receive 75% of the CPU and B the remaining 25%.
Lottery scheduling achieves this probabilistically (but not deterministically) by holding a lottery every so often (say, every time slice). Holding a lottery is straightforward: the scheduler must know how many total tickets there are (in our example, there are 100). The scheduler then picks a winning ticket, which is a number from 0 to 99.
Assuming A holds tickets 0 through 74 and B tickets 75 through 99, the winning ticket simply determines whether A or B runs. The scheduler then loads the state of that winning process and runs it.
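Below is a minimal C sketch of one such lottery decision. The proc structure and the two-entry process list are hypothetical simplifications for illustration; a real scheduler would walk its own process list and then context-switch to the winner rather than print it.

```c
// Sketch: one lottery decision for two processes, A (75 tickets) and B (25).
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct proc {
    const char *name;
    int tickets;
};

int main(void) {
    struct proc procs[] = { { "A", 75 }, { "B", 25 } };
    int nprocs = 2;

    // The scheduler must know how many total tickets there are (here, 100).
    int total = 0;
    for (int i = 0; i < nprocs; i++)
        total += procs[i].tickets;

    srand((unsigned) time(NULL));
    int winner = rand() % total;   // winning ticket: a number from 0 to 99

    // Walk the process list, keeping a running ticket count, until the
    // count exceeds the winning ticket; that process wins the lottery.
    int counter = 0;
    for (int i = 0; i < nprocs; i++) {
        counter += procs[i].tickets;
        if (counter > winner) {
            printf("winning ticket %d -> run %s\n", winner, procs[i].name);
            break;   // a real scheduler would now run this process
        }
    }
    return 0;
}
```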
TIP: USE RANDOMNESS
One of the most beautiful aspects of lottery scheduling is its use of randomness. When you have to make a decision, using such a randomized approach is often a robust and simple way of doing so.
Random approaches have at least three advantages over more traditional decisions. First, random often avoids strange corner-case behaviors that a more traditional algorithm may have trouble handling. For example, consider the LRU replacement policy (studied in more detail in a future chapter on virtual memory); while often a good replacement algorithm, LRU attains worst-case performance for some cyclic-sequential workloads. Random, on the other hand, has no such worst case.
Second, random also is lightweight, requiring little state to track alternatives. In a traditional fair-share scheduling algorithm, tracking how much CPU each process has received requires per-process accounting, which must be updated after running each process. Doing so randomly necessitates only the most minimal of per-process state (e.g., the number of tickets each has).
Finally, random can be quite fast. As long as generating a random number is quick, making the decision is also, and thus random can be used in a number of places where speed is required. Of course, the faster the need, the more random tends towards pseudo-random.
Here is an example output of a lottery scheduler’s winning tickets:
63 85 70 39 76 17 29 41 36 39 10 99 68 83 63 62 43 0 49 12
Here is the resulting schedule:
A B A A B A A A A A A B A B A A A A A A
As you can see from the example, the use of randomness in lottery scheduling leads to probabilistic correctness in meeting the desired proportion, but no guarantee. In our example above, B only gets to run 4 out of 20 time slices (20%), instead of the desired 25% allocation. However, the longer these two jobs compete, the more likely they are to achieve the desired percentages.
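To see this convergence, here is a small C sketch (an illustration, not part of the original example) that runs the lottery for an increasing number of time slices and reports how often B wins, assuming A holds tickets 0 through 74 and B holds tickets 75 through 99. As the number of slices grows, B's share should approach the desired 25%.

```c
// Sketch: simulate many lotteries and watch B's CPU share approach 25%.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    int slices[] = { 20, 200, 2000, 200000 };
    srand((unsigned) time(NULL));

    for (int s = 0; s < 4; s++) {
        int b_wins = 0;
        for (int i = 0; i < slices[s]; i++) {
            int winner = rand() % 100;   // winning ticket: 0..99
            if (winner >= 75)            // tickets 75..99 belong to B
                b_wins++;
        }
        printf("%7d slices: B ran %.1f%% of the time\n",
               slices[s], 100.0 * b_wins / slices[s]);
    }
    return 0;
}
```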
TIP: USE TICKETS TO REPRESENT SHARES
One of the most powerful (and basic) mechanisms in the design of lottery (and stride) scheduling is that of the ticket. The ticket is used to represent a process’s share of the CPU in these examples but can be applied much more broadly.
Thus, if you are ever in need of a mechanism to represent a proportion of ownership, this concept just might be … (wait for it) … the ticket. For example, in more recent work on virtual memory management for hypervisors, Waldspurger shows how tickets can be used to represent a guest operating system’s share of memory (“Memory Resource Management in VMware ESX Server” by Carl A. Waldspurger, OSDI ’02, Boston, Massachusetts). That paper is the one to read about memory management in VMMs (a.k.a., hypervisors); in addition to being relatively easy to read, it contains numerous cool ideas about this new type of VMM-level memory management.