
INTERNAL AND EXTERNAL COMMANDS IN MS-DOS

MS-DOS Internal Commands: Internal commands in MS-DOS are commands
that are built into the command interpreter itself. These commands are available
directly from the command prompt without the need for external executable files.
Here are some commonly used internal commands in MS-DOS:

1. MD COMMAND (Make Directory) - The MD command is used to create a new
directory or subdirectory on the disk.
Syntax-
C:\>md (directory name)
Example-
C:\>md amir

2. CLS COMMAND (Clear Screen) - CLS is one of the internal DOS commands; it is
used to clear the DOS screen. Whatever output your commands have left on the
screen can be cleared with the CLS command.
Syntax-
C:\>cls

3. CD COMMAND (Change Directory) - CD is an internal DOS command used
to change the current directory (to enter a directory or to come out of it). If you want
to go from one directory to another, you can do so with the CD command.
Syntax-
C:\>cd (directory name)
Example-
C:\>cd amir
4. COPY CON COMMAND - The COPY CON command is used to create
a file. The only disadvantage of the command is that a file created by COPY CON
cannot be modified.
Syntax-
C:\>copy con (file name with extension)
Example-
C:\>copy con amir.txt

5. COPY COMMAND - The COPY command is used to copy a file to another
location, folder, or drive.
Syntax-
C:\>copy <file name> <new name>
C:\>copy <path\file name> <target drive>
Example-
C:\>copy C:\ABC\*.* D:\amir and press Enter.

6. DATE COMMAND - The DATE command is used to view the system's current
date. If you want to modify the date, you can easily do it with the DATE command.
Syntax-
C:\>date
Example-
C:\>date
The current date is: 10/12/2021
Enter the new date: (dd-mm-yy) 09/12/2021

7. TIME COMMAND - The TIME command is used to view the system's current
time. If you want to modify the time, you can easily do it with the TIME command.
Syntax-
C:\>time
Example-
C:\>time
The current time is: 23:23:27.63
Enter the new time: 23:25:50.43

8. DEL COMMAND - The DEL command is used to remove a file from the disk.
To delete files from any drive or folder, you specify the path and file name.
Syntax-
C:\>del (file name)      deletes only one file.
C:\ABC>del *.*           deletes all files from the ABC folder.

9. RD or RMDIR COMMAND (Remove Directory) - The RD or RMDIR command is used
to remove a particular directory or subdirectory from the disk. Only an empty directory
or subdirectory can be removed.
Syntax-
C:\>rd <directory name>
Example-
C:\>rd amir
10. REN COMMAND (Rename) - The REN command is used to change the name of
an existing file or folder. If you want to change any folder name, you can easily do so
with the REN command.
Syntax-
C:\>ren <old file name> <new file name>
Example-
C:\>ren abc amir

11. TYPE COMMAND - TYPE is an internal command used to view the contents of
a file. If you want to see what is saved in a file, you can view it with the TYPE command.
Syntax-
C:\>type <file name>
Example-
C:\>type amir.txt

12. VER COMMAND (Version) - If you want to see the version of your Windows operating
system, you can view the version information with the VER command.
Syntax-
C:\>ver

13. MOVE COMMAND - MOVE is an internal command used to rename a directory
(or to move files to another location).
Syntax-
C:\>move <old directory name> <new directory name>
Example-
C:\>move amir amir1

14. COLOR COMMAND - The COLOR command is used to change the default
colors of the command-line window. If you want to change the default background
color of the DOS or Windows command line, you can easily do so with the
COLOR command. The color attributes are as follows-

0 - Black      8 - Gray
1 - Blue       9 - Light Blue
2 - Green      A - Light Green
3 - Aqua       B - Light Aqua
4 - Red        C - Light Red
5 - Purple     D - Light Purple
6 - Yellow     E - Light Yellow
7 - White      F - Bright White

Syntax-
color (attribute)
Example-
C:\Users\>color D

MS-DOS External Commands: External commands in the context of operating
systems refer to commands that are not directly built into the command interpreter
(shell) or the operating system kernel. These commands are separate executable files
stored in specific directories that can be accessed and executed from the command
prompt or shell. Here are some of the most commonly used external commands in
MS-DOS, along with their syntax:

1. FORMAT:
Syntax: FORMAT [drive:] [/V:label] [/Q] [/F:size] [/S] [/U] [/C]
Explanation: Formats a disk or drive.

2. CHKDSK:
Syntax: CHKDSK [drive:] [/F] [/R]
Explanation: Checks a disk for errors and attempts to fix them.

3. DISKCOPY:
Syntax: DISKCOPY [drive1: [drive2:]] [/V] [/B]
Explanation: Copies the entire contents of one disk to another.

4. ATTRIB:
Syntax: ATTRIB [+R|-R] [+A|-A] [+S|-S] [+H|-H] [drive:][path][filename]
Explanation: Modifies file attributes such as Read-only, Archive, System, and Hidden.

5. XCOPY:
Syntax: XCOPY source [destination] [/E] [/S] [/V] [/P] [/Q]
Explanation: Copies files and directories, including subdirectories.

6. EDIT:
Syntax: EDIT [filename]
Explanation: Opens a simple text editor.

7. DEBUG:
Syntax: DEBUG
Explanation: A debugging tool that allows low-level programming and editing of binary
files.

8. TREE:
Syntax: TREE [drive:][path]
Explanation: Displays a graphical representation of the directory structure.

9. FDISK:
Syntax: FDISK
Explanation: A disk partitioning utility.

10. DELTREE:
Syntax: DELTREE [/Y] [drive:][path]
Explanation: Deletes a directory and all its contents.

11. MOVE:
Syntax: MOVE [drive:][path]filename1 [drive:][path]filename2
Explanation: Moves a file from one location to another.

12. FIND:
Syntax: FIND "string" [drive:][path]filename
Explanation: Searches for a specific string in a file or set of files.

13. SORT:
Syntax: SORT [drive:][path]filename [/O outputfile]
Explanation: Sorts the contents of a text file and displays the result or saves it to another
file.

14. COMP:
Syntax: COMP [drive1:][path1]filename1 [drive2:][path2]filename2 [/A] [/L]
Explanation: Compares the contents of two files byte by byte.

15. FC:
Syntax: FC [drive1:][path1]filename1 [drive2:][path2]filename2 [/A] [/L] [/N] [/C]
Explanation: Compares two files or sets of files and displays the differences between
them.

These commands cover a range of operations including disk management, file
manipulation, and text processing, and are frequently used in MS-DOS environments.

Hardware and software requirements for UNIX operating system

The hardware and software requirements for a UNIX operating system can vary
depending on the specific distribution and version of UNIX you are considering.
However, here are some general guidelines:

Hardware Requirements:
1. Processor: Most UNIX distributions support a wide range of processors,
including x86, x86-64, ARM, SPARC, and PowerPC.

2. Memory (RAM): The minimum RAM requirement varies, but typically a
UNIX system can run with 512 MB to 1 GB of RAM. However, for better
performance and to accommodate more processes, it is recommended to have at least
2-4 GB of RAM.

3. Storage: UNIX distributions generally require a minimum of several gigabytes
of disk space for the installation. The actual disk space required will depend on the
specific distribution and the software packages you choose to install.
4. Network Interface: A network interface card (NIC) is required if you want to
connect your UNIX system to a network.

Software Requirements:
1. UNIX Distribution: Choose a specific UNIX distribution based on your needs
and preferences. Some popular UNIX distributions include Linux (e.g., Ubuntu,
Fedora, CentOS), BSD (e.g., FreeBSD, OpenBSD), and Solaris.

2. Boot Loader: The UNIX distribution will typically come with a boot loader,
such as GRUB or LILO, which allows you to select the operating system to boot
when you start your computer.

3. File System: UNIX supports various file systems, including ext4, XFS, ZFS,
and UFS. The specific file system you use will depend on the UNIX distribution and
your requirements.

4. Graphical Environment (Optional): If you prefer a graphical user interface
(GUI), you may need to install a desktop environment, such as GNOME or KDE,
along with the required display drivers.
These are general requirements, and it's important to consult the documentation or
system requirements specific to the UNIX distribution you plan to use. Different
UNIX distributions may have additional or slightly different hardware and software
requirements.
Hardware and software requirements for LINUX OS
Linux is a UNIX-like operating system, and the hardware and software requirements
for Linux distributions can vary depending on the specific distribution and version
you choose. Here are some general guidelines:
Hardware Requirements:
Processor: Linux supports a wide range of processors, including x86, x86-64, ARM,
SPARC, and PowerPC. The specific requirements will depend on the distribution and
the software packages you plan to use.
Memory (RAM): The minimum RAM requirement varies, but most Linux
distributions can run with 1 GB to 2 GB of RAM. However, for smoother
performance and to accommodate more demanding applications, it is recommended
to have at least 4 GB of RAM.
Storage: Linux distributions typically require several gigabytes of disk space for
installation. The actual disk space needed will depend on the distribution and the
software packages you choose to install. A minimum of 10-20 GB of disk space is
generally recommended.
Network Interface: A network interface card (NIC) is required if you want to connect
your Linux system to a network.

Software Requirements:
Linux Distribution: Choose a specific Linux distribution based on your needs and
preferences. Some popular Linux distributions include Ubuntu, Fedora, CentOS,
Debian, and Linux Mint. Each distribution has its own hardware and software
requirements, so it's important to consult the specific documentation for the
distribution you choose.
Boot Loader: Linux distributions typically come with a boot loader, such as GRUB or
LILO, which allows you to select the operating system to boot when you start your
computer.
File System: Linux supports various file systems, including ext4, XFS, and Btrfs.
The specific file system you use will depend on the distribution and your
requirements.
Graphical Environment (Optional): If you prefer a graphical user interface (GUI), you may
need to install a desktop environment, such as GNOME, KDE, or XFCE, along with the
required display drivers.
Again, it's important to note that these are general requirements, and the specific hardware and
software requirements can vary depending on the Linux distribution you choose and the
intended use of your system. Always refer to the documentation or system requirements
provided by the specific Linux distribution for accurate and up-to-date information.
Hardware and software requirements for WINDOWS 7
The following are the recommended hardware and software requirements for Windows 7:

Hardware Requirements:
1. Processor: 1 GHz or faster processor (32-bit or 64-bit)

2. Memory (RAM): 1 GB RAM for 32-bit systems or 2 GB RAM for 64-bit systems

3. Storage: 16 GB of available hard disk space for 32-bit systems or 20 GB for 64-bit
systems

4. Graphics: DirectX 9 graphics device with WDDM 1.0 or higher driver.

Software Requirements:
1. Operating System: Windows 7 is the operating system itself.

2. Display: A monitor capable of at least 800x600 resolution (higher resolutions are
recommended).

3. Internet Connection: Some features of Windows 7, such as Windows Update, require an
internet connection.

4. Optional: DVD/CD drive if you plan to install Windows 7 from a disc.


It's important to note that these are the minimum recommended requirements, and for
optimal performance, it's often beneficial to have more powerful hardware, such as a faster
processor, more RAM, and a larger hard disk. Additionally, some software
applications or games may have their own specific requirements that go beyond the
minimum system requirements of Windows 7.
Hardware and Software requirements for WINDOWS 10
The following are the recommended hardware and software requirements for Windows 10:

Hardware Requirements:
1. Processor: 1 GHz or faster processor with at least 2 cores (64-bit)
2. Memory (RAM): 4 GB RAM or more
3. Storage: 64 GB of available hard disk space or more
4. Graphics: DirectX 9 or later with WDDM 2.0 driver
5. Display: A monitor capable of at least 800x600 resolution (higher resolutions are
recommended).
Software Requirements:
1. Operating System: Windows 10 is the operating system itself.
2. Internet Connection: Some features of Windows 10, such as Windows Update and online
services, require an internet connection.
3. Microsoft Account: While not mandatory, having a Microsoft account can enable access to
additional features and cloud-based services.

It's important to note that these are the recommended requirements, and the actual hardware
and software requirements may vary depending on the specific usage scenario and software
applications you intend to run. Certain resource-intensive applications or games may have
their own specific requirements beyond the minimum system requirements of Windows 10.
Hardware and software requirements for WINDOWS 11
The following are the recommended hardware and software requirements for Windows 11:

Hardware Requirements:
1. Processor: 1 GHz or faster with at least 2 cores on a 64-bit compatible processor.
2. Memory (RAM): 4 GB RAM or more.
3. Storage: 64 GB of available storage or more.
4. System Firmware: UEFI firmware with Secure Boot capability.
5. TPM Version: TPM version 2.0.
6. Graphics: DirectX 12 or later with a WDDM 2.0 driver.
7. Display: A high-definition (720p) display, 9" or larger diagonally, with 8 bits per color
channel.

Software Requirements:
1. Operating System: Windows 11 is the operating system itself.
2. Internet Connection: Some features and updates in Windows 11 require an internet
connection.
3. Microsoft Account: While not mandatory, having a Microsoft account can enable access to
additional features and cloud-based services.

It's important to note that Windows 11 introduces stricter hardware requirements compared
to previous versions of Windows. The TPM 2.0 requirement and specific processor, storage,
and display specifications need to be met for the installation and proper functioning of
Windows 11. Additionally, not all existing hardware that meets Windows 10 requirements
may be compatible with Windows 11. Microsoft provides a PC Health Check tool that can
help determine if your system meets the requirements for Windows 11.
System Calls in Operating System (OS)
A system call is a way for a user program to interface with the operating system. The
program requests several services, and the OS responds by invoking a series of system
calls to satisfy the request. A system call can be written in assembly language or a
high-level language like C or Pascal. System calls are predefined functions that the
operating system may directly invoke if a high-level language is used.
In this section, you will learn about system calls in the operating system, their types,
and how they are used.

What is a System Call?


A system call is a method for a computer program to request a service from the kernel
of the operating system on which it is running. A system call is a method of interacting
with the operating system via programs. A system call is a request from computer
software to an operating system's kernel.

The Application Program Interface (API) connects the operating system's functions
to user programs. It acts as a link between the operating system and a process, allowing
user-level programs to request operating system services. The kernel's services can only
be accessed using system calls. System calls are required for any programs that use
resources.
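As a minimal illustration (assuming a POSIX/UNIX environment), the C sketch below requests the kernel's write service directly through the write() system call, while printf() reaches the same service indirectly through the C library API:

#include <unistd.h>   /* write() system-call wrapper */
#include <stdio.h>    /* printf() library function   */

int main(void)
{
    /* Direct system call: ask the kernel to write 14 bytes to file descriptor 1 (stdout). */
    write(1, "Hello, kernel\n", 14);

    /* Library (API) call: printf() formats the text and internally ends up calling write(). */
    printf("Hello, library\n");
    return 0;
}

Both lines print a message, but only the first bypasses the C library and goes straight to the kernel interface.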
UNIX System calls for Process management

A system call named fork is used to create a new process as a duplicate of an
existing process.
The duplicate process gets copies of the parent's data, file descriptors, and registers.
The original process is called the parent process and the duplicate is called the
child process.
The fork call returns a value, which is zero in the child and equal to the child's PID
(Process Identifier) in the parent. System calls such as exit request the service of
terminating a process.
Loading a program, or replacing the original process image with a new one, requires the
exec call. The PID helps to distinguish between the child and parent processes. Here are
some common UNIX system calls for process management:

fork(): Creates a new process by duplicating the existing process. It returns twice: once in
the parent process (with the process ID of the child) and once in the child process (with a
return value of 0).

exec() family: Replaces the current process with a new process image.

execve(const char *path, char *const argv[], char *const envp[]): Loads and executes a
new program from the given file path, with the specified command-line arguments and
environment variables.

execl(const char *path, const char *arg0, ..., const char *argn, (char *) NULL): Executes a
new program from the given file path with a variable number of command-line arguments.

wait() and waitpid(): Used to wait for the termination of child processes.

wait(int *status): Suspends the current process until one of its child processes
terminates. It can retrieve the termination status of the child process.

waitpid(pid_t pid, int *status, int options): Suspends the current process
until the child process with the specified process ID terminates.

exit(): Terminates the current process and returns the exit status to the parent process.

getpid(): Returns the process ID of the current process.
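A minimal sketch (assuming a POSIX/UNIX system) that combines these calls: the parent fork()s a child, the child replaces its image with the ls program via execl(), and the parent wait()s for the child to terminate:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>   /* wait()                    */
#include <unistd.h>     /* fork(), execl(), getpid() */

int main(void)
{
    pid_t pid = fork();               /* duplicate the current process              */

    if (pid == 0) {
        /* Child: fork() returned 0. Replace the process image with /bin/ls.        */
        execl("/bin/ls", "ls", "-l", (char *) NULL);
        exit(1);                      /* reached only if execl() fails              */
    } else if (pid > 0) {
        /* Parent: fork() returned the child's PID.                                 */
        int status;
        wait(&status);                /* suspend until the child terminates         */
        printf("Parent %d reaped child %d\n", (int) getpid(), (int) pid);
    }
    return 0;
}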

UNIX system calls for File Management


File management system calls are used to handle files. Some examples of file
management operations include creating, deleting, opening, closing, reading, and writing
files. Here are several UNIX system calls commonly used for file management:

open(const char *path, int flags, mode_t mode): Opens a file specified by the file path,
with specified flags indicating the file's intended use (e.g., read, write, create, etc.) and the
file mode (permissions) if the file is being created.

close(int fd): Closes the file descriptor fd that was previously opened.

read(int fd, void *buf, size_t count): Reads data from the file associated with
the file descriptor fd into the buffer buf of size count.

write(int fd, const void *buf, size_t count): Writes data from the buffer
buf of size count to the file associated with the file descriptor fd.

lseek(int fd, off_t offset, int whence): Changes the file offset (position) of
the file associated with the file descriptor fd. The whence parameter specifies how the
offset is calculated (e.g., from the beginning, current position, or end of the file).

unlink(const char *path): Deletes (unlinks) the file specified by the file path.

rename(const char *oldpath, const char *newpath): Renames a file, changing
its name from oldpath to newpath.

chmod(const char *path, mode_t mode): Changes the permissions of a file
specified by the file path to the specified mode.
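As a minimal sketch (assuming a POSIX/UNIX system and a hypothetical input file named in.txt), the following C code combines open(), read(), write(), and close() to copy a file:

#include <fcntl.h>    /* open() and the O_* flags  */
#include <unistd.h>   /* read(), write(), close()  */

int main(void)
{
    char buf[512];
    ssize_t n;

    /* Open the source for reading; create/truncate the destination with rw-r--r-- permissions. */
    int in  = open("in.txt", O_RDONLY);
    int out = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0)
        return 1;

    /* Read a block from the source and write it to the destination until end of file. */
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, (size_t) n);

    close(in);
    close(out);
    return 0;
}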

UNIX system calls for input/output


The following are the UNIX system calls for I/O:

- Open: To open a file. Syntax: open (pathname, flag, mode).
- Create: To create a file. Syntax: create (pathname, mode).
- Close: To close a file. Syntax: close (filedes).
- Read: To read data from a file that is opened. Syntax: read (filedes, buffer, bytes).
- Write: To write data to a file that is opened. Syntax: write (filedes, buffer, bytes).
- Lseek: To position the file pointer at a given location in the file. Syntax: lseek (filedes, offset, from).
- Dup: To make a duplicate copy of an existing file descriptor. Syntax: dup (filedes).
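A small sketch (assuming a POSIX/UNIX system and a hypothetical file named log.txt) of how dup can redirect standard output: once descriptor 1 is closed, dup returns the lowest free descriptor, so the opened file takes the place of stdout:

#include <fcntl.h>    /* open()                  */
#include <unistd.h>   /* close(), dup(), write() */

int main(void)
{
    int fd = open("log.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    close(1);         /* free descriptor 1 (standard output)                     */
    dup(fd);          /* duplicate fd into the lowest free slot, which is now 1  */
    close(fd);        /* the original descriptor is no longer needed             */

    /* Anything written to descriptor 1 now goes to log.txt. */
    write(1, "redirected to the file\n", 23);
    return 0;
}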
SCHEDULING POLICIES

Shortest Job First


Shortest Job First (SJF) is a scheduling policy used in operating systems to prioritize
the execution of processes or tasks based on their expected execution time. In SJF, the
scheduler selects the process with the shortest burst time or duration to be executed
first. It assumes that shorter jobs will complete faster, resulting in better system
performance and reduced waiting times.

When a new process arrives, the scheduler compares its burst time with the remaining
burst times of the processes already in the system. If the new process has a shorter burst
time than the currently running process or the processes waiting in the queue, it is given
the highest priority and is scheduled for execution. This continues until all processes
are executed.

Advantages of Shortest Job First:

The advantages of Shortest Job First (SJF) scheduling policy include:

1. Minimized average waiting time: SJF has the potential to achieve the minimum
average waiting time among all scheduling algorithms when the job lengths are
known in advance. By executing shorter jobs first, processes spend less time waiting
in the ready queue, leading to reduced waiting times.
2. Optimal efficiency: SJF can provide optimal efficiency in terms of overall system
performance. By prioritizing shorter jobs, it maximizes the utilization of system
resources and ensures faster completion of processes. This can result in higher
throughput and improved response times.
3. Reduced turnaround time: Turnaround time refers to the time taken for a process
to complete its execution, including waiting time and execution time. SJF
scheduling aims to minimize the turnaround time by prioritizing shorter jobs,
leading to faster completion and improved system efficiency.
4. Improved response time: Response time is the time taken from when a process is
submitted until the first response is produced. SJF scheduling can reduce response
time by quickly executing shorter jobs, providing faster feedback to the users or
applications.
5. Favourable for interactive systems: SJF is particularly advantageous for
interactive systems where users expect quick responses. By giving priority to short
jobs, SJF can provide a more responsive and interactive user experience.
6. Efficient resource utilization: SJF maximizes the utilization of system resources
by executing shorter jobs first. This approach helps in utilizing CPU and other
resources efficiently, resulting in better system performance and resource
management.

Disadvantages of Shortest Job First:

Shortest Job First (SJF) scheduling policy has certain disadvantages, including:

1. Difficulty in predicting job duration: Accurately estimating the duration of a job
or process can be challenging. In many cases, job lengths are not known in advance
or may vary dynamically. If the estimated job durations are incorrect, it can lead to
unexpected delays and inefficient scheduling.
2. Possibility of starvation: Longer jobs or processes may suffer from starvation in an
SJF scheduling policy. Since the focus is on executing the shortest job first, longer
jobs can be continually delayed by shorter jobs that keep arriving. This can result in
longer jobs experiencing significant waiting times and reduced priority, affecting
system fairness.
3. Dependency on accurate job information: SJF relies heavily on having precise
information about the length of jobs or processes. If accurate job duration estimates
are not available, the scheduling decisions based on SJF may be suboptimal, leading
to inefficient resource utilization and longer waiting times.
4. Unsuitability for dynamic environments: SJF assumes that the job lengths are
known in advance or do not change over time. However, in dynamic environments
where jobs arrive unpredictably or their durations vary, SJF may not be practical. It
may be challenging to constantly update and revise job duration estimates, making
SJF less suitable for such scenarios.
5. Potential for increased overhead: Pre-emptive SJF, which allows the interruption
of currently executing processes, can introduce additional overhead due to the
context switching between processes. This overhead can reduce the overall
efficiency of the system.
6. Lack of fairness: SJF scheduling does not guarantee fairness in resource allocation.
Shorter jobs are given higher priority, which can result in longer jobs or processes
experiencing delays and reduced access to system resources. This can be
problematic in scenarios where fairness is an essential requirement.

Example of Shortest Job First:

Let's consider an example to illustrate how the Shortest Job First (SJF) scheduling policy
works. Assume we have the following three processes with their respective burst times
(execution times):

Process A: 6 units
Process B: 3 units
Process C: 2 units

With SJF scheduling, the order of execution would be as follows:


1. Process C (2 units)
2. Process B (3 units)
3. Process A (6 units)
In this example, SJF prioritizes the process with the shortest burst time. Process C, with
a burst time of 2 units, is the shortest job and is executed first. Then, Process B, with a
burst time of 3 units, is executed. Finally, Process A, with a burst time of 6 units, is
executed last.
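Assuming all three processes arrive at time 0, the waiting times under this order are:

Waiting time of Process C = 0 units
Waiting time of Process B = 2 units (it waits for C)
Waiting time of Process A = 2 + 3 = 5 units (it waits for C and B)
Average Waiting Time = (0 + 2 + 5) / 3 = 7 / 3 ≈ 2.33 units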

SJF aims to minimize the average waiting time by giving priority to shorter jobs. In this
case, Process C completes first, followed by Process B, and finally, Process A. By
executing the shorter jobs earlier, the overall waiting time is reduced, leading to
improved system performance.

It's important to note that SJF scheduling assumes accurate knowledge of job durations
in advance. If the burst times are not accurately estimated or if new processes arrive
with different burst times dynamically, the scheduling decisions may not be optimal.
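The selection rule itself is easy to express in code. The C sketch below is illustrative only: a non-pre-emptive SJF that assumes all jobs arrive at time 0 and that the burst times are known. It sorts the processes by burst time and accumulates the waiting times for the example above:

#include <stdio.h>
#include <stdlib.h>

struct proc { char name; int burst; };

/* Compare by burst time so qsort() places the shortest job first. */
static int by_burst(const void *a, const void *b)
{
    return ((const struct proc *) a)->burst - ((const struct proc *) b)->burst;
}

int main(void)
{
    struct proc p[] = { { 'A', 6 }, { 'B', 3 }, { 'C', 2 } };   /* the example above */
    int n = 3, elapsed = 0, total_wait = 0;

    qsort(p, n, sizeof p[0], by_burst);                         /* shortest job first */

    for (int i = 0; i < n; i++) {
        printf("Process %c waits %d units\n", p[i].name, elapsed);
        total_wait += elapsed;
        elapsed += p[i].burst;          /* every later job also waits for this one */
    }
    printf("Average waiting time = %.2f units\n", (double) total_wait / n);
    return 0;
}

Running it reproduces the order C, B, A and an average waiting time of about 2.33 units.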

First-Come First-Served
First-Come, First-Served (FCFS) is a scheduling policy used in operating systems to
determine the order in which processes or tasks are executed. In FCFS, the processes
are executed in the order they arrive in the ready queue. The process that arrives first is
allocated the CPU first, and subsequent processes are executed in the same order they
entered the queue.

Under FCFS scheduling, the CPU is assigned to the first process that arrives and
remains allocated to it until the process completes or enters a waiting state. If a process
voluntarily releases the CPU, such as for I/O operations, it goes back to the end of the
queue and waits for its turn to use the CPU again.
FCFS scheduling does not consider the burst time or execution time of processes. The
order of execution solely depends on the arrival order. Once a process starts executing,
it continues until it completes or enters a waiting state, regardless of the burst time or
the arrival of other processes.

Advantages of First-Come First-Served:

The advantages of First-Come, First-Served (FCFS) scheduling policy are as follows:

1. Simplicity: FCFS is easy to understand and implement. It follows a straightforward
rule of executing processes in the order they arrive. There are no complex algorithms
or calculations involved in determining the execution order.
2. Non-pre-emptive: FCFS is a non-pre-emptive scheduling policy, meaning that once
a process starts executing, it continues until it completes or enters a waiting state.
This characteristic can be desirable in certain scenarios where process continuity is
important or where context switching overhead is undesirable.
3. Fairness: FCFS ensures fairness in the sense that processes are executed in the same
order they arrive. It provides equal opportunity to all processes in terms of CPU
allocation, without any bias or prioritization based on process characteristics.
4. No starvation: FCFS scheduling guarantees that all processes will eventually get
their turn to execute. Since the processes are executed in the order they arrive, there
is no possibility of starvation, where a process is indefinitely delayed or starved of
CPU time.
5. Minimal overhead: FCFS scheduling involves minimal overhead in terms of
process scheduling. Once a process is assigned the CPU, it continues until
completion, eliminating the need for frequent context switches or priority
evaluations.
6. Predictability: FCFS provides a predictable execution order, as it is solely based on
the arrival order of processes. This predictability can be advantageous in certain
scenarios, such as real-time systems or when the order of execution affects the
outcome.

Disadvantages of First-Come First-Served:


First-Come, First-Served (FCFS) scheduling policy has certain disadvantages,
including:

1. Poor average waiting time: FCFS may result in longer average waiting times,
especially if processes with longer burst times arrive earlier. Since the processes are
executed in the order of arrival, processes with shorter burst times have to wait
behind longer processes, leading to increased waiting times for subsequent
processes.
2. Inefficiency in resource utilization: FCFS does not consider the burst time or
execution time of processes. As a result, a long-running process may occupy the
CPU for a significant duration, even if there are shorter processes waiting. This can
lead to inefficient utilization of CPU resources and potentially underutilization of
other system resources.
3. Lack of responsiveness: FCFS scheduling does not prioritize processes based on
their urgency or priority. If a high-priority process arrives after a long-running
process, it has to wait until all previously arrived processes are executed. This lack
of responsiveness can be problematic in scenarios where timely response is crucial,
such as real-time systems or interactive applications.
4. No consideration for process characteristics: FCFS treats all processes equally,
regardless of their specific characteristics or resource requirements. This lack of
consideration for individual process requirements can lead to suboptimal scheduling
decisions, such as allocating CPU time to a process that does not require it urgently
or that can execute in parallel with other processes.
5. Convoy effect: FCFS scheduling can suffer from the convoy effect, where a
long-running process holds up other shorter processes in the queue. This can occur if
a CPU-intensive process occupies the CPU, causing other processes to wait, even if
they have shorter execution times. It can result in overall system slowdown and
reduced throughput.
6. Lack of adaptability to dynamic situations: FCFS assumes that the arrival order
of processes remains static. In dynamic environments where new processes arrive
or the burst times of existing processes change, FCFS may not be the most suitable
scheduling policy. It does not dynamically adapt to changing priorities or job
characteristics.

Example of First-Come First-Served:


Let's consider an example to illustrate how the First-Come, First-Served (FCFS)
scheduling policy works. Assume we have three processes with their respective burst
times (execution times):

Process A: 8 units
Process B: 4 units
Process C: 6 units
With FCFS scheduling, the order of execution would be as follows:
1. Process A (8 units)
2. Process B (4 units)
3. Process C (6 units)

In this example, the processes are executed in the order they arrived. Process A arrived
first, so it gets the CPU first and runs for 8 units of time. Once Process A completes,
Process B, which arrived second, starts executing and runs for 4 units of time. Finally,
Process C, which arrived last, gets the CPU and runs for 6 units of time.
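Assuming all three processes arrive at essentially the same time (time 0) in the order A, B, C, the waiting times are:

Waiting time of Process A = 0 units
Waiting time of Process B = 8 units (it waits for A)
Waiting time of Process C = 8 + 4 = 12 units (it waits for A and B)
Average Waiting Time = (0 + 8 + 12) / 3 = 20 / 3 ≈ 6.67 units

For comparison, running the shortest jobs first (B, C, A) would give (0 + 4 + 10) / 3 ≈ 4.67 units, which illustrates how FCFS can inflate waiting times when long jobs arrive first.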

FCFS scheduling follows a simple rule of executing processes in the order they arrive.
It provides fairness as each process is executed in the same sequence it entered the
system. However, FCFS can lead to longer waiting times for subsequent processes if
earlier processes have longer burst times.
Priority-Scheduling
Priority scheduling is a scheduling policy used in operating systems to determine the
order in which processes or tasks are executed based on their priority levels. Each
process is assigned a priority value, which indicates its relative importance or urgency
compared to other processes.

In priority scheduling, the process with the highest priority is allocated the CPU first,
followed by processes with lower priority levels. The priority value can be assigned
based on various factors, such as the type of task, importance of the process, deadline
requirements, or system-defined criteria.

There are two types of priority scheduling:

1. Pre-emptive Priority Scheduling: In this type, a process with a higher priority can
pre-empt or interrupt the execution of a lower-priority process. If a higher-priority
process arrives or becomes ready to execute, it can pre-empt the currently executing
process and start executing immediately. This ensures that processes with higher
priority receive immediate attention.
2. Non-pre-emptive Priority Scheduling: In this type, once a process starts
executing, it continues until it completes or voluntarily releases the CPU. The
scheduler selects the highest-priority process from the ready queue and allocates the
CPU to it. The process keeps the CPU until it finishes its execution or enters a
waiting state, regardless of the arrival or priority of other processes.

Advantages of Priority Scheduling:


The advantages of priority scheduling, a scheduling policy based on priority levels,
include:

1. Improved responsiveness: Priority scheduling allows critical or high-priority
processes to receive immediate attention and CPU time. This leads to improved
responsiveness for important tasks, such as real-time systems or time-sensitive
applications, where timely execution is crucial.
2. Efficient resource allocation: By assigning priorities to processes, priority
scheduling ensures that high-priority processes are allocated resources, including
CPU time, promptly. This efficient resource allocation helps in meeting the
requirements and deadlines of critical processes.
3. Customizable scheduling: Priority scheduling provides flexibility by allowing
administrators or system designers to assign priority levels to processes based on
their specific needs and importance. This customization enables the scheduling
policy to align with the specific requirements of the system or application.
4. Fairness and equal opportunity: Priority scheduling ensures that higher-priority
processes receive preferential treatment in terms of CPU allocation. However, it
also ensures fairness by allowing lower-priority processes to execute once
higher-priority processes have completed or entered waiting states. This equal
opportunity for lower-priority processes maintains fairness in resource utilization.
5. Efficient utilization of system resources: By giving priority to critical processes,
priority scheduling optimizes the utilization of system resources. High-priority
processes that require immediate attention are given the necessary resources, which
can lead to improved system performance and throughput.
6. Flexibility in priority adjustments: Priority scheduling allows for dynamic
adjustments of process priorities. The priority levels of processes can be modified
during runtime based on changing requirements, allowing administrators or system
managers to adapt to evolving conditions.

Disadvantages of Priority Scheduling:


The disadvantages of priority scheduling, a scheduling policy based on priority levels,
include:

1. Starvation of lower-priority processes: If high-priority processes constantly
arrive or have long execution times, lower-priority processes may experience
starvation. Lower-priority processes might have to wait indefinitely or receive very
limited CPU time, which can lead to delays or inefficiencies in their execution.
2. Possibility of priority inversion: Priority inversion can occur when a low-priority
process holds a resource that a high-priority process requires. In such cases, the
low-priority process may continue to hold the resource, preventing the high-priority
process from executing and potentially causing delays or unexpected behaviour.
3. Risk of priority inversion due to synchronization: When processes need to
synchronize or communicate with each other, priority inversion can occur. If a
low-priority process holds a synchronization lock or resource required by a
higher-priority process, it can cause delays and disrupt the intended execution order.
4. Lack of fairness: Priority scheduling may not always guarantee fairness in resource
allocation. High-priority processes receive preferential treatment, potentially
causing lower-priority processes to wait for extended periods. This unfairness can
be problematic in certain scenarios or when equal opportunity for resource
allocation is required.
5. Possibility of priority inversion due to aging: To prevent starvation, priority
scheduling often incorporates aging mechanisms, where the priorities of waiting
processes gradually increase over time. However, this can introduce the risk of
priority inversion if the aging process is not properly designed or managed.
6. Difficulty in assigning accurate priorities: Assigning accurate and appropriate
priorities to processes can be challenging. Determining the relative importance or
urgency of processes accurately requires careful consideration and understanding
of the system requirements. Inaccurate or inappropriate priority assignments can
lead to inefficient resource utilization or suboptimal scheduling decisions.

Example of Priority Scheduling:
Let's consider an example to illustrate how priority scheduling works. Assume we have
four processes with their respective priorities and burst times (execution times):

Process    Priority    Burst Time
A          3           6 units
B          1           4 units
C          2           8 units
D          4           5 units

With priority scheduling (a smaller priority number indicates a higher priority), the order
of execution would be as follows:

1. Process B (Priority 1, 4 units)


2. Process C (Priority 2, 8 units)
3. Process A (Priority 3, 6 units)
4. Process D (Priority 4, 5 units)

In this example, the processes are executed in order of their priority values. Process B,
with the highest priority of 1, is executed first. It runs for 4 units of time. Then,
Process C, with a priority of 2, starts executing and runs for 8 units. After that, Process
A, with a priority of 3, is executed for 6 units. Finally, Process D, with the lowest
priority of 4, runs for 5 units.
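Assuming all four processes arrive at time 0 and the scheduling is non-pre-emptive, the waiting times under this order are:

Waiting time of Process B = 0 units
Waiting time of Process C = 4 units
Waiting time of Process A = 4 + 8 = 12 units
Waiting time of Process D = 4 + 8 + 6 = 18 units
Average Waiting Time = (0 + 4 + 12 + 18) / 4 = 34 / 4 = 8.5 units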

Priority scheduling ensures that higher-priority processes receive preferential treatment
and are executed before lower-priority processes. It allows critical or important
processes to be executed promptly, based on their assigned priorities.
Round-Robin Scheduling
Round Robin (RR) is a scheduling policy used in operating systems to allocate CPU
time to multiple processes or tasks in an interleaved manner. It is a pre-emptive
scheduling algorithm that divides the available CPU time into fixed time intervals
called time slices or quantum.

Each process is assigned a time slice, and the processes are executed in a cyclic order.
The CPU is allocated to each process for a fixed time quantum, typically ranging from
a few milliseconds to a few tens of milliseconds. Once a process has consumed its time
quantum, it is pre-empted and moved to the back of the ready queue, allowing the next
process to execute.

The cycle repeats until all processes complete their execution or enter a waiting state.
If a process completes before consuming its entire time quantum, it is removed from
the queue. If a process requires more time to execute than the allocated time quantum,
it is pre-empted and moved to the back of the queue to resume execution later.
Round Robin scheduling ensures fairness by providing equal opportunities for all
processes to execute. Each process gets a chance to utilize the CPU for a fixed time
quantum, regardless of its priority or execution time. This property makes Round Robin
particularly suitable for time-sharing systems or scenarios where fairness and
responsiveness are important.

Advantages of Round-Robin Scheduling:


The advantages of Round-Robin (RR) scheduling, a time-sharing scheduling policy,
include:

1. Fairness: Round Robin scheduling provides fairness by ensuring that each process
gets an equal opportunity to execute. The fixed time quantum guarantees that no
process monopolizes the CPU for an extended period, preventing starvation of
lower-priority processes. This fairness is particularly important in time-sharing
systems where multiple users or processes need to be served simultaneously.
2. Responsiveness: Round Robin scheduling offers good responsiveness to interactive
applications and processes. Since each process gets a chance to execute within a
fixed time quantum, even processes with lower priority or shorter burst times
receive timely CPU time, leading to improved system responsiveness and reduced
perceived latency.
3. Shared resource allocation: Round Robin facilitates efficient sharing of CPU
resources among multiple processes. Each process gets a fair share of CPU time,
ensuring that no single process dominates resource utilization. This is especially
beneficial in multi-user systems or scenarios where resources need to be distributed
equitably.
4. Simplicity and ease of implementation: Round Robin scheduling is relatively
simple to implement compared to more complex scheduling policies. It follows a
cyclic and predictable pattern of execution, making it easier to understand,
implement, and manage. Its simplicity also contributes to lower overhead and
computational complexity compared to other scheduling algorithms.
5. Predictable behaviour: Round Robin scheduling exhibits predictable behaviour
due to its fixed time quantum. This predictability makes it easier for system
administrators and users to estimate the response time and plan their tasks
accordingly.
6. Context switching overhead: Although context switching introduces overhead in
Round Robin scheduling, it allows for efficient time-sharing and responsiveness.
The fixed time quantum helps balance the overhead of context switching with fair
CPU allocation, ensuring that processes get reasonable CPU time without
unnecessary delays.
7. Pre-emptive scheduling: Round Robin is a pre-emptive scheduling policy,
meaning that processes can be pre-empted after their time quantum expires. This
pre-emptive nature allows for better control over CPU allocation and the ability to
prioritize certain processes or handle time-sensitive tasks effectively.
Round Robin scheduling is widely used in operating systems, particularly in
time-sharing systems and environments that require fairness, responsiveness, and
resource sharing. However, the choice of time quantum and consideration of system
characteristics are crucial to optimize its performance and achieve the desired tradeoffs
between fairness, responsiveness, and system overhead.

Disadvantages of Round-Robin Scheduling:


Round-robin scheduling, a widely used algorithm in operating systems, has several
disadvantages:

1. Poor response time: Round-robin scheduling gives each process an equal amount
of time to execute, regardless of its priority or resource requirements. This can lead
to poor response times for interactive processes that require immediate user input.
If a high-priority process gets a large burst of CPU time, it may cause a noticeable
delay for other processes.
2. Inefficient for long-running processes: Round-robin scheduling is not efficient for
long-running processes that need to execute for extended periods. The algorithm
divides CPU time equally among all processes, which means long-running processes
will have to give up the CPU frequently, leading to unnecessary context switching
overhead.
3. Inefficient utilization of CPU: Round-robin scheduling may result in inefficient
utilization of the CPU. If a process finishes its burst before the time quantum
expires, the remaining time is wasted. This can occur if a process completes its work
quickly or when there are many short processes in the system.
4. No priority differentiation: Round-robin scheduling does not consider the priority
of processes. All processes are treated equally, and each receives the same time
quantum. In scenarios where certain processes require more attention or have higher
priorities, round-robin may not be the most suitable scheduling algorithm.
5. Inefficient for resource-intensive processes: If there are resource-intensive
processes that require a significant amount of CPU time, round-robin scheduling
may not allocate enough time for them to complete their work efficiently. These
processes may end up getting interrupted frequently, leading to lower overall
performance.
6. High overhead due to context switching: Round-robin scheduling requires
frequent context switching between processes, as each process is given a fixed time
quantum. The overhead associated with saving and restoring process context can be
significant, especially if the number of processes is large or the time quantum is
small.
7. Inability to prioritize I/O-bound processes: Round-robin scheduling treats all
processes equally, regardless of whether they are CPU-bound or I/O-bound. This
can be problematic when dealing with I/O-bound processes, as they may spend a
significant amount of time waiting for I/O operations to complete. In such cases, it
would be more efficient to prioritize other processes that can make better use of the
CPU.

Example of Round-Robin Scheduling:

Consider the set of 5 processes whose arrival time and burst time are given below-

Process ID    Arrival time    Burst time
P1            0               5
P2            1               3
P3            2               1
P4            3               2
P5            4               3

If the CPU scheduling policy is Round Robin with time quantum = 2 units, the Gantt
chart is:

P1 (0-2), P2 (2-4), P3 (4-5), P1 (5-7), P4 (7-9), P5 (9-11), P2 (11-12), P1 (12-13), P5 (13-14)

Process ID    Exit time    Turnaround time    Waiting time
P1            13           13 - 0 = 13        13 - 5 = 8
P2            12           12 - 1 = 11        11 - 3 = 8
P3            5            5 - 2 = 3          3 - 1 = 2
P4            9            9 - 3 = 6          6 - 2 = 4
P5            14           14 - 4 = 10        10 - 3 = 7

Now,
• Average Turn Around time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 units

• Average Waiting Time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 units
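For reference, the schedule above can be reproduced with a short simulation. The C sketch below is illustrative only: it hard-codes the arrival and burst times listed above and a time quantum of 2 units, keeps a simple FIFO ready queue (new arrivals join the queue before a pre-empted process), and prints the Gantt-chart segments with each process's turnaround and waiting times:

#include <stdio.h>

#define N 5
#define QUANTUM 2

int main(void)
{
    int arrival[N] = { 0, 1, 2, 3, 4 };     /* process table from the example above */
    int burst[N]   = { 5, 3, 1, 2, 3 };
    int remaining[N], exit_time[N];
    int queue[64], head = 0, tail = 0;      /* simple FIFO ready queue */
    int admitted = 0, done = 0, time = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    while (done < N) {
        /* Admit every process that has arrived by the current time. */
        while (admitted < N && arrival[admitted] <= time)
            queue[tail++] = admitted++;

        if (head == tail) { time++; continue; }    /* CPU idle: advance the clock */

        int p   = queue[head++];                   /* next process in the ready queue */
        int run = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;

        printf("%2d-%2d  P%d\n", time, time + run, p + 1);   /* one Gantt-chart segment */
        time += run;
        remaining[p] -= run;

        /* Arrivals during this slice join the queue before a pre-empted process. */
        while (admitted < N && arrival[admitted] <= time)
            queue[tail++] = admitted++;

        if (remaining[p] > 0)
            queue[tail++] = p;                     /* quantum expired: back of the queue */
        else {
            exit_time[p] = time;                   /* process finished */
            done++;
        }
    }

    for (int i = 0; i < N; i++)
        printf("P%d: turnaround = %2d, waiting = %2d\n", i + 1,
               exit_time[i] - arrival[i], exit_time[i] - arrival[i] - burst[i]);
    return 0;
}

With these inputs the program prints the segments 0-2 P1, 2-4 P2, 4-5 P3, 5-7 P1, 7-9 P4, 9-11 P5, 11-12 P2, 12-13 P1, 13-14 P5, matching the Gantt chart and the turnaround and waiting times in the table above.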


It is important to note that the order of execution can vary based on the time quantum
and the specific implementation of the round-robin scheduling algorithm.
