Lottery Scheduler


A scheduler must decide, deterministically and fairly, when to hand the next time slice to another process. Current scheduler designs such as CFS and ULE have several drawbacks:
1. At best, priority schemes are ad hoc: the top-ranking process always prevails, and a feedback loop is used to adjust priorities to attain a "fair share" only over the (very) long run.
2. Process priorities on Unix are dynamic, which makes scheduling behavior hard to predict.
3. Lower-priority tasks can be blocked indefinitely due to priority inversion.
4. The schedulers themselves are intricate, and it is debatable whether other tasks should ever be able to stop high-priority work.


Lottery scheduling is a probabilistic process scheduling algorithm. Each process is given a number of lottery tickets, and the scheduler holds a drawing to select a winning ticket at random; the process holding that ticket is given the CPU. The ticket distribution need not be uniform: a process holding more tickets than its competitors has a proportionally greater chance of winning and being chosen for execution. Starvation is avoided and probabilistic fairness is assured, because any process holding at least one ticket will eventually win a lottery. The ticket distribution and allocation mechanism determines the system's throughput and responsiveness.
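The core drawing can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the names `hold_lottery` and the 75/25 ticket split are made up for the example.

```python
import random

def hold_lottery(tickets):
    """Pick a winning client, weighted by ticket count.

    tickets: dict mapping client name -> number of tickets held.
    """
    total = sum(tickets.values())
    winner = random.randrange(total)   # draw a winning ticket number
    for client, count in tickets.items():
        if winner < count:
            return client              # the running partial sum passed the winner
        winner -= count

# A client with 3x the tickets wins roughly 3x as often:
tickets = {"A": 75, "B": 25}
wins = {"A": 0, "B": 0}
for _ in range(10_000):
    wins[hold_lottery(tickets)] += 1
```

Over many drawings, each client's share of wins converges to its share of tickets, which is exactly the probabilistic fairness claim above.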

Ticket transfers: This technique applies when one client is blocked waiting on a shared resource held by another client. The idea is to transfer the waiting client's tickets to the resource holder, making the holder more likely to be scheduled, finish its task faster, and release the resource sooner, which benefits both clients. A client must therefore be able to transfer its tickets to one or more clients on which it depends.
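A sketch of the transfer idea, under the assumption that ticket counts are plain integers; the `Client` class and function names are invented for illustration:

```python
class Client:
    def __init__(self, name, tickets):
        self.name = name
        self.tickets = tickets

def transfer(waiter, holder):
    """Move the blocked waiter's tickets to the resource holder."""
    moved = waiter.tickets
    waiter.tickets = 0
    holder.tickets += moved
    return moved  # remember how many to give back on wakeup

def restore(waiter, holder, moved):
    """Return the tickets once the resource is released."""
    holder.tickets -= moved
    waiter.tickets += moved

waiter = Client("waiter", 100)
holder = Client("holder", 10)
moved = transfer(waiter, holder)   # holder now competes with 110 tickets
restore(waiter, holder, moved)     # back to 100 / 10 after release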

Ticket inflation: A client can create new tickets for itself; this is seen as an alternative to explicit ticket transfers. On the one hand, this strategy may cause issues, because a client could monopolize a resource by printing many lottery tickets. On the other hand, inflation and deflation can be used to adjust resource allocations without explicit client-to-client communication, resulting in higher throughput.

Compensation tickets: If a client consumes only a fraction f of the resource unit allotted to it (for instance, its CPU quantum), it can be granted compensation tickets that inflate its allocation by 1/f until it next wins. This preserves fairness and improves responsiveness in process scheduling; interactive processes commonly show this behavior.
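The 1/f compensation rule can be made concrete with a small helper; the function name and the 100-ticket example are assumptions for illustration, not taken from the paper:

```python
def compensation_tickets(base_tickets, fraction_used):
    """Extra tickets for a client that used only `fraction_used` of its
    quantum: scaling its allocation by 1/f keeps its average CPU share
    equal to its ticket share."""
    assert 0 < fraction_used <= 1
    effective = base_tickets / fraction_used
    return effective - base_tickets  # the compensation on top of the base

# An interactive client with 100 tickets that blocks after 1/5 of its
# quantum runs with 100 / 0.2 = 500 effective tickets, i.e. 400 extra:
extra = compensation_tickets(100, 0.2)
```

A client that uses its whole quantum (f = 1) receives no compensation, so CPU-bound clients are unaffected.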

Experiment for Significance

The experiments aim to demonstrate fairness and flexible control when multiple clients contend for resources. The authors also discuss concurrency control, but they do not analyze the time to search for the winning ticket, which is O(log n) with a tree of partial ticket sums (and O(n) with a simple list).
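The O(log n) lookup can be sketched with a binary search over prefix sums of per-client ticket counts (a flat stand-in for the tree of partial sums; the function names and the four-client split are invented for the example):

```python
import bisect
import itertools
import random

def build_prefix_sums(ticket_counts):
    """Running totals over per-client ticket counts, so a winning
    ticket number maps to a client by binary search."""
    return list(itertools.accumulate(ticket_counts))

def find_winner(prefix, winning_ticket):
    """O(log n) lookup: index of the first prefix sum that exceeds
    the winning ticket number."""
    return bisect.bisect_right(prefix, winning_ticket)

counts = [10, 40, 25, 25]           # four clients, 100 tickets total
prefix = build_prefix_sums(counts)  # [10, 50, 75, 100]
winner = find_winner(prefix, random.randrange(prefix[-1]))
```

Client i owns the ticket range [prefix[i-1], prefix[i]), so ticket 9 maps to client 0 and ticket 10 to client 1.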

Experiment Fallacy

The Monte Carlo experiment demonstrates the dynamic control feature but does not give a good picture of how much each internal mechanism contributes. I think compensation tickets are unnecessary because, in the long run, the odds should balance out between clients; and since we must keep track of how many compensation tickets each client holds, there is a bookkeeping cost associated with them. On the other hand, I believe this is a good illustration of the fresh possibilities the system's dynamic behavior presents.
Also, the graphs were unreadable; normalized charts would have been better.


The novelty is that the paper describes a different scheduling system for time-sharing machines. Round-robin is probably the most straightforward scheduling algorithm: it keeps the resource busy and achieves high client throughput, but it is too simple and gives the system no control over how much of the resource each client receives. To address this, multilevel queue scheduling lets the user group clients and assign each group a distinct priority. We gain control in exchange for new issues like priority inversion and starvation. To address these in turn, mechanisms such as priority inheritance and aging have been added, and the scheduling algorithms have become ever more complex and challenging to comprehend.


It is a good approach, but too slow for today's operating systems because of its complexity. In a distributed system you would likely use round-robin plus consistent hashing rather than a scheduler with a non-constant selection algorithm; in other user-facing applications, however, this kind of scheduling has a place.
We also need to consider other metrics, such as workload and context-switch time, when deciding whether to use it; for example, implementing CFS, real-time (RT), and lottery schedulers and falling back between them for different workloads.