3. Scheduling Strategies
• The goal of any scheduling strategy is to maximize CPU
usage and throughput while minimizing turnaround
time, waiting time, and response time.
• Here we focus on the problems of deciding which
process should use the CPU and when a process should
be removed from using the CPU.
• Scheduling Strategies:
1. First-Come-First-Served Strategy
2. Shortest-Job-First Strategy
3. Round-Robin Strategy
4. Priority Strategy
5. Multiple-Queuing Strategy
6. Real-time Strategy
1. First-Come-First-Served Strategy
• Provides the simplest way to schedule a CPU.
• It allows the first process that requests the CPU to use it
until the process is completed.
• When one process is using the CPU, other processes
that need the CPU simply queue up in the ready
queue.
• The process at the head of the ready queue is then the
next process to be scheduled.
• It is non-pre-emptive: processes are removed from
the CPU only when they enter the waiting state or
they have terminated.
1. First-Come-First-Served Strategy contd…
• Consider the following set of processes that arrive to use a
CPU in the order stated. An FCFS scheduler would
schedule them as shown in Figure 5.1.
• The throughput of our imaginary system is 4
processes in 53 time units.
• The average waiting time is 28 time units.
1. First-Come-First-Served Strategy contd…
• Consider a different ordering of the same processes,
scheduled in the same way.
• In this scenario, the throughput of our system is the
same: 4 processes in 53 time units.
• But the waiting time is much better: the average
waiting time for this example is 9.5 time units.
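The waiting-time arithmetic behind these two orderings can be sketched in a few lines of Python. The burst times below are assumptions chosen to be consistent with the figures quoted on the slides (4 processes totalling 53 time units, average waits of 28 and 9.5); the actual values appear only in the missing figure.

```python
def fcfs_waiting_times(bursts):
    """Under FCFS, each process waits for the total burst time of
    every process ahead of it in the ready queue."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Hypothetical burst times in arrival order (total: 53 time units)
arrival_order = [35, 3, 1, 14]
reordered = [3, 14, 1, 35]   # the "different ordering" of the same jobs

print(sum(fcfs_waiting_times(arrival_order)) / 4)  # 28.0
print(sum(fcfs_waiting_times(reordered)) / 4)      # 9.5
```

The sketch makes the slide's point concrete: throughput is identical in both cases, yet the average wait changes dramatically with arrival order alone.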
1. First-Come-First-Served Strategy contd…
• The FCFS strategy does not guarantee minimal waiting
or turnaround times, and its measures may vary
substantially depending on process execution times and
the order of arrival.
• Fairness concerns also weigh against this
scheduling strategy.
• FCFS is inherently unpredictable and may very
likely produce unfair schedules.
2. Shortest-Job-First Strategy
• If we always chose the process with the shortest
running time first, it would seem that we could
improve the measurements we are watching.
• As an example, consider a new set of processes:
2. Shortest-Job-First Strategy contd…
• The shortest-job-first (SJF) scheduling strategy is illustrated in the figure given below.
• This ordering produces the following measurements:
the average wait time is 10.75 time units.
• If we had used the FCFS strategy, the average
waiting time would have been 12.25 time units.
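A quick sketch of this comparison, using hypothetical burst times chosen to reproduce the averages quoted above (the actual values are in the figure):

```python
def waiting_times(bursts):
    """Waiting time of each process when the CPU runs the given
    bursts back to back, non-pre-emptively."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Hypothetical burst times in arrival order
arrival_order = [9, 8, 6, 20]
sjf_order = sorted(arrival_order)   # shortest job first

print(sum(waiting_times(arrival_order)) / 4)  # FCFS: 12.25
print(sum(waiting_times(sjf_order)) / 4)      # SJF:  10.75
```

Sorting by burst time is all SJF adds over FCFS, yet it is enough to lower the average wait for the same workload.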
2. Shortest-Job-First Strategy contd…
• While it is possible to prove that an SJF strategy
is optimal for average times, the strategy has
several issues:
1. It penalizes long processes simply for being
long.
2. It makes it possible to starve a process.
▫ Starvation occurs when a process is waiting in the
ready queue but never makes it to the running state.
As long as processes keep entering the queue with
running times shorter than its own, that process is
never run on the CPU.
3. Round-Robin Strategy
• Both FCFS and SJF are usually used as non-pre-emptive
strategies. However, we still have the
criterion of fairness to consider.
▫ Fairness is a measure of how much time each
process spends on the CPU.
▫ If we schedule processes to run to completion, or we
depend on processes to give up the CPU when they
can, we can make measurements but we can make no
statement about fairness.
• Fairness can only be assured when we use a pre-emptive
strategy.
▫ In a round-robin strategy, all processes are given the
same time slice and are pre-empted and placed back on the
ready queue in the order they ran on the CPU.
3. Round-Robin Strategy contd…
• For example, consider the processes below. Let’s say
the time slice in this system is 5 time units.
• A round-robin scheduling strategy would produce a
timeline like that shown in Figure 5.4.
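The mechanism can be sketched as a small simulation. The burst times below are assumptions (the slide's actual process set is in the missing figure); the time slice of 5 matches the slide.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the (pid, start, end) segments a round-robin scheduler
    produces for processes that are all ready at time 0."""
    ready = deque((pid, burst) for pid, burst in enumerate(bursts))
    t, timeline = 0, []
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)          # pre-empt after one slice
        timeline.append((pid, t, t + run))
        t += run
        if remaining > run:
            ready.append((pid, remaining - run))  # back of the ready queue
    return timeline

# Hypothetical burst times; time slice of 5 units as on the slide
print(round_robin([12, 5, 8], quantum=5))
```

Each process gets at most one quantum before being pre-empted and re-queued, which is exactly what gives round-robin its fairness property.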
3. Round-Robin Strategy contd…
• There is very little to manage about a round-robin
strategy.
▫ The only variable in the scheme is the length of the
time slice – the amount of time spent on the processor.
▫ Setting this time too short – say, close to the time
it takes to perform a context switch – is
counterproductive: it lets the context-switching time
dominate performance and lowers CPU efficiency.
• However, making the time slice too long takes away
the benefits of a round-robin strategy: response time
for short requests suffers, because the scheduler
degenerates toward FCFS.
• A round-robin strategy puts all processes on an
equal footing.
4. Priority Strategy
• A priority-scheduling strategy takes into account
that different processes have different importance placed
upon them.
• In priority scheduling, the process in the ready
queue with the highest priority is chosen to run.
• This type of scheduling can be either pre-emptive
or non-pre-emptive, as it is the choice of the next
process that defines a priority-scheduling strategy.
4. Priority Strategy contd…
• Priority scheduling makes certain requirements
of the operating system.
▫ The operating system must employ the concept of
process priority.
▫ The priority is an attribute of a process that
measures its importance relative to other processes
and allows the operating system to make decisions
about scheduling and about how long to keep a
process on a processor.
▫ In a pre-emptive scheduling environment, process
priority is a very useful attribute to assign to a
process.
▫ The priority of a process is usually given by a
number, which is kept in the process’s PCB.
4. Priority Strategy contd…
• It is usually required that the operating system be
allowed to manipulate priorities somehow.
▫ Priorities are set by the user or the process creator, but
the operating system is usually allowed to change priorities
dynamically.
▫ The operating system requires the ability to adjust the
priority of a process as it moves through its
execution.
• Priorities themselves usually take the form of numbers:
quantities that can be easily compared by the operating
system.
• Microsoft Windows assigns values from 0 to 31; various
Unix implementations assign negative as well as
positive values to reflect user (negative) and system
(positive) priority assignment.
• As these examples show, there is no general agreement on
assigning priority values.
4. Priority Strategy contd…
• As an example of priority scheduling, let’s say that
requests for the processor are made in the
following order:
• For the purposes of this example,
higher numbers mean higher
priority.
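Choosing the highest-priority ready process is naturally implemented with a heap. A minimal sketch, with hypothetical process names and priority values (higher number = higher priority, as in the example above):

```python
import heapq

class PriorityScheduler:
    """Pick the ready process with the highest priority value."""

    def __init__(self):
        self._heap = []

    def add(self, priority, pid):
        # heapq is a min-heap, so negate the priority to pop the
        # highest-priority process first
        heapq.heappush(self._heap, (-priority, pid))

    def next_process(self):
        return heapq.heappop(self._heap)[1]

# Hypothetical (priority, pid) requests in arrival order
sched = PriorityScheduler()
for prio, pid in [(3, "A"), (7, "B"), (1, "C"), (7, "D")]:
    sched.add(prio, pid)
print(sched.next_process())  # "B" (ties broken by pid)
```

Note that two processes share priority 7 here; the heap breaks the tie arbitrarily (by pid), which previews the problem the next slide addresses with multiple queues.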
5. Multiple-Queuing Strategy
• We have described priority scheduling as a
matter of choice: choosing the process with the
highest priority to schedule on the processor.
▫ In many operating systems, processes do not have
unique priorities. There may be many processes with
the same priority.
▫ Many system processes have the same high priority.
This means that the scheduler is eventually going to
have to choose between processes with the same
priority.
• Priority scheduling is often implemented with
priority queues.
• A priority queue holds processes of a certain
priority value or range of values.
5. Multiple-Queuing Strategy contd…
• The multiple-queuing scheduling strategy could
even use multiple strategies: different scheduling
strategies for different queues.
▫ Processes are either permanently assigned to a
specific queue, based on their characteristics upon
entering the system, or they can move between
queues, based on their changing characteristics as
they execute in the system.
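A minimal two-level sketch of the idea. The queue names, the priority threshold, and the rule "serve the high queue first" are all assumptions for illustration; a real implementation could also run a different strategy (say, round-robin vs. FCFS) inside each queue, as the slide notes.

```python
from collections import deque

class MultiLevelQueue:
    """Two priority queues: high-priority processes are always
    served before any low-priority process gets the CPU."""

    def __init__(self, threshold=16):
        self.threshold = threshold   # assumed cutoff between levels
        self.high = deque()
        self.low = deque()

    def add(self, pid, priority):
        # Permanent assignment based on a characteristic at entry
        queue = self.high if priority >= self.threshold else self.low
        queue.append(pid)

    def next_process(self):
        # Low queue is served only when the high queue is empty
        queue = self.high if self.high else self.low
        return queue.popleft()

mlq = MultiLevelQueue()
mlq.add("init", 31)
mlq.add("editor", 8)
mlq.add("daemon", 20)
print(mlq.next_process())  # "init"
```

This fixed assignment models the "permanently assigned" variant; a feedback variant would move a pid between `high` and `low` as its behavior changes.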
6. Real-time Strategy
• Real-time systems can be classified as one of two
different system types, each with different scheduling
needs.
▫ Hard real-time systems guarantee that time constraints are
met.
There is usually a specific amount of time specified
along with the process-scheduling request.
The system guarantees that the process is run in the specified
amount of time, or that the process is not run at all.
▫ Soft real-time systems place a priority on time-critical
processes and are less restrictive.
They do not guarantee performance; they give real-time processes
favorable treatment and keep certain latency times to a
minimum.
A real-time operating system must be able to assume that
scheduling overhead is restricted to a certain time.
4. Scheduling in Symbian OS
• Symbian OS is a mobile phone operating system that is
intended to have the functionality of a general-purpose
operating system.
• It can load arbitrary code and execute it at run time,
and it can interact with users through applications.
• At the same time, the operating system must
support real-time functionality, especially where
communication functions are concerned.
▫ Because of these real-time requirements, Symbian OS is
implemented as a real-time operating system.
4. Scheduling in Symbian OS contd…
• It is built to run on multiple phone platforms
without specialized hardware, so the operating
system is considered to be a soft real-time
system.
• It needs enough real-time capability to run the
protocols for mobile protocol stacks, such as
GSM and 3G.
4. Scheduling in Symbian OS contd…
• The combination of general-purpose functionality with
real-time system requirements means that the best
choice for implementation is a system that uses a static,
monotonic scheduling strategy, augmented by time
slices.
• Static, monotonic scheduling is a simple strategy to
use:
▫ It organizes processes with the shortest deadline first,
and the introduction of time slices means that
processes with the same deadline (or no deadline) can
be assigned time slices and scheduled using a
priority-scheduling scheme.
▫ There are 64 levels of priority in Symbian OS.
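The ordering rule above can be sketched as follows. The process names and deadline values are assumptions for illustration, not Symbian OS internals; processes without a deadline sort last, where the time-sliced priority scheme would take over.

```python
import math

def deadline_order(procs):
    """procs: list of (name, deadline) pairs; deadline=None means the
    process has no deadline. Shortest deadline runs first; processes
    with no deadline come after every deadline-bound process."""
    return sorted(procs, key=lambda p: math.inf if p[1] is None else p[1])

# Hypothetical processes: two with deadlines, two without
procs = [("ui", None), ("gsm_stack", 5), ("app", None), ("audio", 2)]
print(deadline_order(procs))
```

Because `sorted` is stable, processes with equal (or no) deadlines keep their relative order, which is where a priority scheme with time slices can arbitrate among them.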
4. Scheduling in Symbian OS contd…
• As we discussed before, a key to soft real-time
performance is predictable execution time.
• If an operating system can predict how long a
process will run, then a static, monotonic
scheduling strategy will work, since that strategy
makes some big assumptions about run time.
• Predicting execution time is based on the
process and several system characteristics.
4. Scheduling in Symbian OS contd…
• There are several important characteristics that must be
predictable, including:
• Latency times: an important benchmark is the latency of
handling interrupts – the time from an interrupt to a user
thread and from an interrupt to a kernel thread.
• The time to get information about processes: for example, the
time it takes to find the highest-priority thread in the ready
state.
• The time to move threads between queues and the CPU:
manipulating scheduling queues – for example, moving
processes to and from the ready queue – must be bounded.
▫ This functionality is used all the time, and it must have a bound
on it or the system cannot predict performance.
• Predicting these quantities is important and is reflected in the
design of the scheduler.