Computer Operating System - Chapter 5: CPU Scheduling

■ Basic Concepts
■ Scheduling Criteria
■ Scheduling Algorithms
■ Thread Scheduling
■ Multi-Processor Scheduling
■ Real-Time CPU Scheduling
■ Operating Systems Examples
■ Algorithm Evaluation 

  1. Chapter 5: Outline
     ■ Basic Concepts
     ■ Scheduling Criteria
     ■ Scheduling Algorithms
     ■ Thread Scheduling
     ■ Multi-Processor Scheduling
     ■ Real-Time CPU Scheduling
     ■ Operating Systems Examples
     ■ Algorithm Evaluation
     (Operating System Concepts – Silberschatz, Galvin and Gagne, ©2018)
  2. Basic Concepts
     ■ Almost all computer resources are scheduled before use
     ■ Maximum CPU utilization is obtained with multiprogramming
     ■ CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait
       ● A CPU burst is followed by an I/O burst
       ● The CPU burst distribution is of main concern
  3. CPU Scheduler
     ■ The CPU scheduler selects one process from among the processes in the ready queue, and allocates the CPU core to it
       ● The queue may be ordered in various ways: FIFO, priority, tree, linked list
     ■ CPU scheduling decisions may take place when a process:
       1. switches from running to waiting state
       2. switches from running to ready state
       3. switches from waiting to ready
       4. terminates
     ■ Scheduling under 1 and 4 is nonpreemptive
       ● No choice in terms of scheduling
     ■ All other scheduling is preemptive, and can result in race conditions
       ● Consider access to shared data
       ● Consider preemption while in kernel mode
       ● Consider interrupts occurring during crucial OS activities
  4. Scheduling Criteria
     ■ CPU utilization – keep the CPU as busy as possible
     ■ Throughput – number of processes that complete their execution per time unit
     ■ Turnaround time – amount of time to execute a particular process
     ■ Waiting time – amount of time a process spends waiting in the ready queue
     ■ Response time – amount of time from when a request was submitted until the first response is produced, not the time it takes to output that response (relevant for time-sharing and interactive systems)
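     As a quick way to relate these quantities (not on the slide): for a single process with arrival time a, first-dispatch time s, completion time c, and total CPU demand b, and assuming it performs no I/O, turnaround time = c − a, waiting time = (c − a) − b, and response time = s − a.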
  5. First-Come, First-Served (FCFS) Scheduling
     ■ Motivation: for simplicity, consider a FIFO-like policy
         Process   Burst Time (ms)
         P1        24
         P2        3
         P3        3
     ■ Suppose that the processes arrive at time 0 in the order: P1, P2, P3
     ■ The Gantt chart for the schedule is:
         | P1 (0–24) | P2 (24–27) | P3 (27–30) |
       ● Waiting time for P1 = 0; P2 = 24; P3 = 27
       ● Average waiting time = (0 + 24 + 27)/3 = 17 ms
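     A minimal sketch (not part of the original slides) that computes the FCFS waiting times for the burst times above:

        #include <stdio.h>

        /* FCFS: the waiting time of each process is the sum of the bursts
         * of all processes that arrived (and therefore run) before it. */
        int main(void) {
            int burst[] = {24, 3, 3};          /* P1, P2, P3, all arriving at t = 0 */
            int n = sizeof(burst) / sizeof(burst[0]);
            int elapsed = 0, total_wait = 0;

            for (int i = 0; i < n; i++) {
                total_wait += elapsed;          /* time this process spent in the ready queue */
                elapsed += burst[i];            /* the process now runs to completion */
            }
            printf("Average waiting time = %.2f ms\n", (double)total_wait / n);
            return 0;
        }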
  6. Shortest-Job-First (SJF) Scheduling
     ■ Motivation: moving a short process before a long one decreases the waiting time of the short process more than it increases the waiting time of the long process
       ● The shortest-next-CPU-burst algorithm
     ■ Associate with each process the length of its next CPU burst
       ● When the CPU is available, it is assigned to the process that has the smallest next CPU burst
       ● FCFS scheduling is used if the next CPU bursts of two processes are the same
     ■ SJF is provably optimal – it gives the minimum average waiting time for a given set of processes
       ● The difficulty is knowing the length of the next CPU request
       ● Could ask the user
  7. Determining Length of Next CPU Burst
     ■ Can only estimate the length – it should be similar to the previous one
       ● Then pick the process with the shortest predicted next CPU burst
     ■ Can be done by using exponential averaging of the measured lengths of previous CPU bursts, as follows:
       1. t_n = actual length of the nth CPU burst
       2. τ_{n+1} = predicted value for the next CPU burst
       3. α, 0 ≤ α ≤ 1
       4. Define: τ_{n+1} = α t_n + (1 − α) τ_n
     ■ α controls the relative weight of recent and past history in the prediction; it is commonly set to ½
     ■ The preemptive version is called Shortest-Remaining-Time-First (SRTF)
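     A small sketch (function and variable names are assumptions, not from the slides) of how a scheduler might maintain this per-process estimate with α = ½:

        #include <stdio.h>

        #define ALPHA 0.5   /* relative weight given to the most recent burst */

        /* Exponential averaging: tau_next = ALPHA * t_actual + (1 - ALPHA) * tau_prev */
        double predict_next_burst(double tau_prev, double t_actual) {
            return ALPHA * t_actual + (1.0 - ALPHA) * tau_prev;
        }

        int main(void) {
            double tau = 10.0;                              /* initial guess tau_0, e.g. a system default */
            double measured[] = {6, 4, 6, 4, 13, 13, 13};   /* observed CPU bursts, example values */

            for (int i = 0; i < (int)(sizeof measured / sizeof *measured); i++) {
                printf("predicted %.2f ms, observed %.0f ms\n", tau, measured[i]);
                tau = predict_next_burst(tau, measured[i]);
            }
            return 0;
        }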
  8. Examples of Exponential Averaging
     ■ α = 0
       ● τ_{n+1} = τ_n
       ● Recent history does not count
     ■ α = 1
       ● τ_{n+1} = t_n
       ● Only the actual last CPU burst counts
     ■ If we expand the formula, we get:
         τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^{n+1} τ_0
     ■ Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
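     A worked instance of the recurrence (values chosen for illustration): with α = ½ and an initial guess τ_0 = 10 ms, an observed burst t_0 = 6 ms gives τ_1 = ½·6 + ½·10 = 8 ms, and a subsequent burst t_1 = 4 ms gives τ_2 = ½·4 + ½·8 = 6 ms; each older burst's contribution is halved at every step, matching the expansion above.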
  9. Round Robin (RR) Scheduling
     ■ Motivation: a scheduling algorithm similar to FCFS, but with preemption added to enable the system to switch between processes
     ■ Each process gets a small unit of CPU time (time quantum q), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue
     ■ If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)·q time units (see the worked example below)
     ■ A timer interrupts every quantum to schedule the next process
     ■ Performance
       ● q large ⇒ behaves like FIFO (FCFS)
       ● q small ⇒ q must still be large with respect to the context-switch time, otherwise the overhead is too high
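     As a worked example (not on the slide), reusing the three processes from the FCFS example (P1 = 24 ms, P2 = 3 ms, P3 = 3 ms, all arriving at time 0) with q = 4 ms: the schedule is P1 (0–4), P2 (4–7), P3 (7–10), then P1 runs uninterrupted from 10 to 30. Waiting times are P1 = 6, P2 = 4, P3 = 7, so the average waiting time = (6 + 4 + 7)/3 ≈ 5.67 ms, compared with 17 ms under FCFS.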
  10. Time Quantum and Context Switch Time
  11. Priority Scheduling
      ■ Motivation: a priority number (integer) is associated with each process
      ■ The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority). Equal-priority processes are scheduled FCFS or RR
        ● Preemptive
        ● Nonpreemptive
      ■ SJF is priority scheduling where the priority is the inverse of the predicted next CPU burst time
      ■ Problem ≡ starvation – low-priority processes may never execute
        ● Solution ≡ aging – as time progresses, increase the priority of waiting processes (see the sketch below)
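      A minimal sketch of aging (field, constant, and function names are assumptions, not from the slides): periodically boost every process that has been waiting in the ready queue for too long.

        #include <stddef.h>

        /* Hypothetical PCB fields, for illustration only. */
        struct pcb {
            int priority;      /* smaller number = higher priority */
            int wait_ticks;    /* time spent in the ready queue since the last boost */
        };

        #define AGING_THRESHOLD 100
        #define MAX_PRIORITY    0     /* highest possible priority (smallest number) */

        /* Called periodically (e.g., from the timer interrupt): any process that has
         * waited at least AGING_THRESHOLD ticks gets its priority number lowered by
         * one, which raises its scheduling priority and prevents starvation. */
        void age_ready_queue(struct pcb *ready[], size_t n) {
            for (size_t i = 0; i < n; i++) {
                if (++ready[i]->wait_ticks >= AGING_THRESHOLD) {
                    if (ready[i]->priority > MAX_PRIORITY)
                        ready[i]->priority--;          /* boost one level */
                    ready[i]->wait_ticks = 0;
                }
            }
        }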
  12. Priority Scheduling w/ Round-Robin
      Process   Burst Time (ms)   Priority
      P1        4                 3
      P2        5                 2
      P3        8                 2
      P4        7                 1
      P5        3                 3
      ■ Run the process with the highest priority. Processes with the same priority run round-robin
      ■ Gantt chart with a 2 ms time quantum
        ● Average waiting time = ? (one possible schedule is worked below)
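      A worked sketch, assuming all five processes arrive at time 0 and the smallest priority number is highest: P4 (priority 1) runs first, 0–7. P2 and P3 (priority 2) then alternate with q = 2: P2 7–9, P3 9–11, P2 11–13, P3 13–15, P2 15–16, P3 16–20. Finally P1 and P5 (priority 3) alternate: P1 20–22, P5 22–24, P1 24–26, P5 26–27. Waiting times (completion time minus burst) are P1 = 22, P2 = 11, P3 = 12, P4 = 0, P5 = 24, giving an average of (22 + 11 + 12 + 0 + 24)/5 = 13.8 ms.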
  13. Example of Multilevel Queue
      ■ Prioritization based upon process type
  14. Example of Multilevel Feedback Queue
      ■ Three queues:
        ● Q0 – RR with time quantum 8 milliseconds
        ● Q1 – RR with time quantum 16 milliseconds
        ● Q2 – FCFS
      ■ Scheduling
        ● A new job enters queue Q0, which is served FCFS
          – When it gains the CPU, the job receives 8 milliseconds
          – If it does not finish in 8 milliseconds, the job is moved to queue Q1
        ● At Q1 the job is again served FCFS and receives 16 additional milliseconds
          – If it still does not complete, it is preempted and moved to queue Q2
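      A configuration sketch (type and field names are assumptions, not from the slides) describing the three queues above in code; an expired quantum demotes a job one level down:

        #include <limits.h>

        /* Per-level policy for a multilevel feedback queue;
         * INT_MAX models "no quantum", i.e. FCFS at that level. */
        struct mlfq_level {
            const char *name;
            int quantum_ms;      /* time slice before demotion to the next level */
        };

        static const struct mlfq_level levels[] = {
            { "Q0", 8       },   /* RR, 8 ms  */
            { "Q1", 16      },   /* RR, 16 ms */
            { "Q2", INT_MAX },   /* FCFS: run to completion unless preempted by Q0/Q1 work */
        };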
  15. POSIX Pthread Scheduling
      ■ The API allows specifying either PCS or SCS during thread creation
        ● PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
        ● PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling
      ■ Can be limited by the OS – Linux and macOS only allow PTHREAD_SCOPE_SYSTEM
      ■ Pthread IPC (Inter-process Communication) provides two functions for getting and setting the contention scope:
        ● pthread_attr_setscope(pthread_attr_t *attr, int scope)
        ● pthread_attr_getscope(pthread_attr_t *attr, int *scope)
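      The listing on the next slide is a continuation (its first half is on a slide not included here). A self-contained sketch of the whole program, with the setup details assumed rather than taken from the omitted slide, might look like this; compile with gcc -pthread:

        #include <pthread.h>
        #include <stdio.h>
        #define NUM_THREADS 5

        /* Each thread will begin control in this function */
        void *runner(void *param) {
            /* do some work ... */
            pthread_exit(0);
        }

        int main(void) {
            int i;
            pthread_t tid[NUM_THREADS];
            pthread_attr_t attr;

            /* get the default attributes, then request system contention scope (SCS) */
            pthread_attr_init(&attr);
            pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

            /* create the threads */
            for (i = 0; i < NUM_THREADS; i++)
                pthread_create(&tid[i], &attr, runner, NULL);

            /* now join on each thread */
            for (i = 0; i < NUM_THREADS; i++)
                pthread_join(tid[i], NULL);

            return 0;
        }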
  16. Pthread Scheduling API (Cont.)
        /* set the scheduling algorithm to PCS or SCS */
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

        /* create the threads */
        for (i = 0; i < NUM_THREADS; i++)
            pthread_create(&tid[i], &attr, runner, NULL);

        /* now join on each thread */
        for (i = 0; i < NUM_THREADS; i++)
            pthread_join(tid[i], NULL);
        }

        /* Each thread will begin control in this function */
        void *runner(void *param)
        {
            /* do some work */
            pthread_exit(0);
        }
  17. Multiple-Processor Scheduling (Cont.)
      ■ Symmetric multiprocessing (SMP) is where each processor is self-scheduling
      ■ Two possible strategies:
        ● All threads may be in a common ready queue (a)
        ● Each processor may have its own private queue of threads (b)
  18. Multithreaded Multicore System
      ■ Each core has more than one hardware thread
      ■ If one thread has a memory stall, switch to another thread!
  19. Multithreaded Multicore System
      ■ Two levels of scheduling:
        1. The operating system decides which software thread to run on a logical CPU
        2. Each core decides which hardware thread to run on the physical core
  20. Multiple-Processor Scheduling – Processor Affinity
      ■ When a thread has been running on one processor, that processor's cache holds the memory accesses made by the thread
      ■ We refer to this as the thread having affinity for that processor (i.e., "processor affinity")
      ■ Load balancing may work against processor affinity: a thread moved from one processor to another to balance load loses what it had in the cache of the processor it was moved off of
      ■ Soft affinity – the operating system attempts to keep a thread running on the same processor, but makes no guarantees
      ■ Hard affinity – allows a process to specify the set of processors it may run on (see the Linux sketch below)
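      One way to request hard affinity on Linux (an illustration, not from the slides) is the sched_setaffinity(2) system call; the CPU numbers here are arbitrary examples:

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        int main(void) {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(0, &set);           /* allow CPU 0 ... */
            CPU_SET(2, &set);           /* ... and CPU 2 only */

            /* pid 0 means "the calling thread"; this is the hard-affinity request */
            if (sched_setaffinity(0, sizeof(set), &set) == -1) {
                perror("sched_setaffinity");
                return 1;
            }
            printf("pinned to CPUs {0, 2}\n");
            return 0;
        }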
  21. Real-Time CPU Scheduling
      ■ Can present obvious challenges
      ■ Soft real-time systems – critical real-time tasks have the highest priority, but there is no guarantee as to when tasks will be scheduled
      ■ Hard real-time systems – a task must be serviced by its deadline
  22. Interrupt Latency
  23. Priority-based Scheduling
      ■ For real-time scheduling, the scheduler must support preemptive, priority-based scheduling
        ● But this only guarantees soft real-time behavior
      ■ For hard real-time, it must also provide the ability to meet deadlines
      ■ Processes have new characteristics: periodic ones require the CPU at constant intervals
        ● Each has processing time t, deadline d, and period p
        ● 0 ≤ t ≤ d ≤ p
        ● The rate of a periodic task is 1/p (see the worked example below)
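      For example (values assumed for illustration): a periodic task with processing time t = 20 ms, deadline d = 50 ms, and period p = 100 ms has rate 1/p = 10 per second and consumes t/p = 20% of the CPU; the scheduler must fit its 20 ms of work within the first 50 ms of every 100 ms period.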
  24. Missed Deadlines with Rate Monotonic Scheduling
      ■ Process P2 misses its deadline at time 80
  25. Proportional Share Scheduling
      ■ T shares are allocated among all processes in the system
      ■ An application receives N shares, where N < T
      ■ This ensures that each application will receive N / T of the total processor time (worked example below)
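      As a worked example (numbers assumed): if T = 100 shares and applications A, B, and C hold 50, 15, and 20 shares respectively, then A is guaranteed 50/100 = 50% of the processor time, B 15%, and C 20%, with the remaining 15 shares unallocated.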
  26. POSIX Real-Time Scheduling API
        #include <pthread.h>
        #include <stdio.h>
        #define NUM_THREADS 5

        int main(int argc, char *argv[])
        {
            int i, policy;
            pthread_t tid[NUM_THREADS];
            pthread_attr_t attr;

            /* get the default attributes */
            pthread_attr_init(&attr);

            /* get the current scheduling policy */
            if (pthread_attr_getschedpolicy(&attr, &policy) != 0)
                fprintf(stderr, "Unable to get policy.\n");
            else {
                if (policy == SCHED_OTHER)
                    printf("SCHED_OTHER\n");
                else if (policy == SCHED_RR)
                    printf("SCHED_RR\n");
                else if (policy == SCHED_FIFO)
                    printf("SCHED_FIFO\n");
            }
  27. Operating System Examples
      ■ Linux scheduling
      ■ Windows scheduling
      ■ Solaris scheduling
  28. Linux Scheduling in Version 2.6.23+
      ■ Completely Fair Scheduler (CFS)
      ■ Scheduling classes
        ● Each class has a specific priority
        ● The scheduler picks the highest-priority task in the highest scheduling class
        ● Rather than a quantum based on fixed time allotments, allocation is based on a proportion of CPU time (see the vruntime sketch below)
        ● Two scheduling classes are included, and others can be added
          – default
          – real-time
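      A simplified sketch of the idea behind CFS (type and field names are assumptions; the real kernel uses nice-to-weight tables and a red-black tree of runnable tasks): each task accumulates virtual runtime scaled by its weight, and the task with the smallest vruntime runs next.

        #include <stdint.h>

        /* Hypothetical, simplified view of a CFS-style scheduling entity. */
        struct sched_entity_sketch {
            uint64_t vruntime;   /* weighted virtual runtime, in nanoseconds */
            uint64_t weight;     /* larger weight = larger CPU share (derived from the nice value) */
        };

        #define NICE_0_WEIGHT 1024   /* weight of a nice-0 task */

        /* After a task runs for delta_ns of real time, charge it virtual time
         * inversely proportional to its weight: heavier tasks age more slowly,
         * so they are picked again sooner and receive a larger proportion of the CPU. */
        void account_runtime(struct sched_entity_sketch *se, uint64_t delta_ns) {
            se->vruntime += delta_ns * NICE_0_WEIGHT / se->weight;
        }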
  29. CFS Performance
  30. Linux Scheduling (Cont.)
      ■ Linux supports load balancing, but is also NUMA-aware
      ■ A scheduling domain is a set of CPU cores that can be balanced against one another
      ■ Domains are organized by what they share (e.g., cache memory); the goal is to keep threads from migrating between domains
  31. Windows Priority Classes
      ■ The Win32 API identifies several priority classes to which a process can belong:
        ● REALTIME_PRIORITY_CLASS, HIGH_PRIORITY_CLASS, ABOVE_NORMAL_PRIORITY_CLASS, NORMAL_PRIORITY_CLASS, BELOW_NORMAL_PRIORITY_CLASS, IDLE_PRIORITY_CLASS
        ● All are variable except REALTIME
      ■ A thread within a given priority class has a relative priority:
        ● TIME_CRITICAL, HIGHEST, ABOVE_NORMAL, NORMAL, BELOW_NORMAL, LOWEST, IDLE
      ■ Priority class and relative priority combine to give a numeric priority (see the usage sketch below)
      ■ The base priority is NORMAL within the class
      ■ If the quantum expires, the priority is lowered, but never below the base
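      A small Win32 sketch (illustrative only, with the class and relative priority chosen arbitrarily) of selecting a priority class for the current process and a relative priority for the current thread:

        #include <windows.h>
        #include <stdio.h>

        int main(void) {
            /* Process-wide class: ABOVE_NORMAL is one of the variable classes listed above. */
            if (!SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS))
                fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());

            /* Relative priority of this thread within that class. */
            if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_ABOVE_NORMAL))
                fprintf(stderr, "SetThreadPriority failed: %lu\n", GetLastError());

            return 0;
        }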
  32. Windows Priorities
  33. Solaris Dispatch Table
  34. Solaris Scheduling (Cont.)
      ■ The scheduler converts class-specific priorities into a per-thread global priority
        ● The thread with the highest global priority runs next
        ● It runs until it (1) blocks, (2) uses its time slice, or (3) is preempted by a higher-priority thread
        ● Multiple threads at the same priority are selected via RR
  35. Deterministic Evaluation
      ■ For each algorithm, calculate the minimum average waiting time
      ■ Simple and fast, but it requires exact numbers as input and applies only to those inputs
        ● FCFS is 28 ms
        ● Non-preemptive SJF is 13 ms
        ● RR is 23 ms
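      These figures are consistent with a workload of five processes arriving at time 0 with CPU bursts of 10, 29, 3, 7, and 12 ms and an RR quantum of 10 ms (assumed here for illustration): under FCFS the waiting times are 0, 10, 39, 42, and 49 ms, for an average of (0 + 10 + 39 + 42 + 49)/5 = 28 ms, while non-preemptive SJF runs the bursts in the order 3, 7, 10, 12, 29 for an average of (0 + 3 + 10 + 20 + 32)/5 = 13 ms.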
  36. Little's Formula
      ■ n = average queue length
      ■ W = average waiting time in the queue
      ■ λ = average arrival rate into the queue
      ■ Little's law – in steady state, the rate of processes leaving the queue must equal the rate of processes arriving, thus n = λ × W
        ● Valid for any scheduling algorithm and arrival distribution
      ■ For example, if on average 7 processes arrive per second and there are normally 14 processes in the queue, then the average wait time per process is W = n / λ = 14 / 7 = 2 seconds
  37. Evaluation of CPU Schedulers by Simulation
  38. Summary
      ■ CPU scheduling is the task of selecting a waiting process from the ready queue and allocating the CPU to it. The CPU is allocated to the selected process by the dispatcher.
      ■ Scheduling algorithms may be either preemptive (where the CPU can be taken away from a process) or nonpreemptive (where a process must voluntarily relinquish control of the CPU). Almost all modern operating systems are preemptive.
      ■ Scheduling algorithms can be evaluated according to the following five criteria: (1) CPU utilization, (2) throughput, (3) turnaround time, (4) waiting time, and (5) response time.
      ■ First-come, first-served (FCFS) scheduling is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes.
  39. Summary (Cont.)
      ■ Multilevel queue scheduling partitions processes into several separate queues arranged by priority, and the scheduler executes the processes in the highest-priority queue. Different scheduling algorithms may be used in each queue.
      ■ Multilevel feedback queues are similar to multilevel queues, except that a process may migrate between different queues.
      ■ Multicore processors place one or more CPUs on the same physical chip, and each CPU may have more than one hardware thread. From the perspective of the operating system, each hardware thread appears to be a logical CPU.
      ■ Load balancing on multicore systems equalizes loads between CPU cores, although migrating threads between cores to balance loads may invalidate cache contents and therefore may increase memory access times.
  40. Summary (Cont.)
      ■ Linux uses the Completely Fair Scheduler (CFS), which assigns a proportion of CPU processing time to each task. The proportion is based on the virtual runtime (vruntime) value associated with each task.
      ■ Windows scheduling uses a preemptive, 32-level priority scheme to determine the order of thread scheduling.
      ■ Solaris identifies six unique scheduling classes that are mapped to a global priority. CPU-intensive threads are generally assigned lower priorities (and longer time quanta), and I/O-bound threads are usually assigned higher priorities (with shorter time quanta).
      ■ Modeling and simulation can be used to evaluate a CPU scheduling algorithm.