5. COPY COMMAND - The COPY command is used to copy one or more files to another location, folder, or drive.
Syntax-
C:\> COPY <file name> <new name>
C:\> COPY <path\file name> <target drive>
Example-
C:\> COPY C:\ABC\*.* D:\amir and press Enter.
6. DATE COMMAND - The DATE command is used to view the system's current date. If you want to modify the date, you can easily do it with the DATE command.
Syntax-
C:\> date
Example-
C:\> date
The current date is: 10/12/2021
Enter the new date: (dd-mm-yy) 09/12/2021
7. TIME COMMAND - The TIME command is used to view the system's current time. If you want to modify the time, you can easily do it with the TIME command.
Syntax-
C:\> time
Example-
C:\> time
The current time is: 23:23:27.63
Enter the new time: 23:25:50.43
8. DEL COMMAND - The DEL command is used to remove a file from the disk. To delete files or a directory from any drive, you specify the path and folder.
Syntax-
C:\> DEL <file name>      deletes a single file.
C:\ABC> DEL *.*           deletes all files from the ABC folder.
12. VER COMMAND - If you want to check your Windows operating system version, you can display the version information with the VER command. Follow the syntax below-
Syntax-
C:\> ver
13. MOVE COMMAND - MOVE is an internal command used to move files or to rename a directory. Follow the syntax and example below-
Syntax-
C:\> move <old dir name> <new dir name>
Example-
C:\> move amir amir1
14. COLOR COMMAND - The COLOR command is used to change the default color of the command-line background. If you want to change the default background color of the DOS or Windows command line, you can easily change it with the COLOR command. Sample color attributes include:
0 - Black    8 - Gray
Syntax- color <attribute>
Example- C:\Users\> color D
1. FORMAT:
Syntax: FORMAT [drive:] [/V:label] [/Q] [/F:size] [/S] [/U] [/C]
Explanation: Formats a disk or drive.
2. CHKDSK:
Syntax: CHKDSK [drive:] [/F] [/R]
Explanation: Checks a disk for errors and attempts to fix them.
3. DISKCOPY:
Syntax: DISKCOPY [drive1: [drive2:]] [/V] [/B]
Explanation: Copies the entire contents of one disk to another.
4. ATTRIB:
Syntax: ATTRIB [+R | -R] [+A | -A] [+S | -S] [+H | -H] [drive:][path]filename
Explanation: Displays or changes the attributes of a file (read-only, archive, system, hidden).
5. XCOPY:
Syntax: XCOPY source [destination] [/E] [/S] [/V] [/P] [/Q]
Explanation: Copies files and directories, including subdirectories.
6. EDIT:
Syntax: EDIT [filename]
Explanation: Opens a simple text editor.
7. DEBUG:
Syntax: DEBUG
Explanation: A debugging tool that allows low-level programming and editing of binary
files.
8. TREE:
Syntax: TREE [drive:][path]
Explanation: Displays a graphical representation of the directory structure.
9. FDISK:
Syntax: FDISK
Explanation: A disk partitioning utility.
10. DELTREE:
Syntax: DELTREE [/Y] [drive:]path
Explanation: Deletes a directory and all of the files and subdirectories in it.
11. MOVE:
Syntax: MOVE [drive:][path]filename destination
Explanation: Moves files to another location or renames a directory.
12. FIND:
Syntax: FIND "string" [drive:][path]filename
Explanation: Searches for a specific string in a file or set of files.
14. COMP:
Syntax: COMP [drive1:][path1]filename1 [drive2:][path2]filename2 [/A] [/L]
Explanation: Compares the contents of two files byte by byte.
15. FC:
Syntax: FC [drive1:][path1]filename1 [drive2:][path2]filename2 [/A] [/L] [/N] [/C]
Explanation: Compares two files or sets of files and displays the differences between
them.
The hardware and software requirements for a UNIX operating system can vary
depending on the specific distribution and version of UNIX you are considering.
However, here are some general guidelines:
Hardware Requirements:
1. Processor: Most UNIX distributions support a wide range of processors,
including x86, x86-64, ARM, SPARC, and PowerPC.
Software Requirements:
1. UNIX Distribution: Choose a specific UNIX distribution based on your needs
and preferences. Some popular UNIX and UNIX-like distributions include Linux (e.g.,
Ubuntu, Fedora, CentOS), BSD (e.g., FreeBSD, OpenBSD), and Solaris.
2. Boot Loader: The UNIX distribution will typically come with a boot loader,
such as GRUB or LILO, which allows you to select the operating system to boot
when you start your computer.
3. File System: UNIX supports various file systems, including ext4, XFS, ZFS,
and UFS. The specific file system you use will depend on the UNIX distribution and
your requirements.
Hardware and software requirements for WINDOWS 7
The following are the recommended hardware and software requirements for Windows 7:
Hardware Requirements:
1. Processor: 1 GHz or faster processor (32-bit or 64-bit)
2. Memory (RAM): 1 GB RAM for 32-bit systems or 2 GB RAM for 64-bit systems
3. Storage: 16 GB of available hard disk space for 32-bit systems or 20 GB for 64-bit
systems
Software Requirements:
1. Operating System: Windows 7 is the operating system itself.
Hardware and software requirements for WINDOWS 10
The following are the recommended hardware and software requirements for Windows 10:
Hardware Requirements:
1. Processor: 1 GHz or faster processor with at least 2 cores (64-bit)
2. Memory (RAM): 4 GB RAM or more
3. Storage: 64 GB of available hard disk space or more
4. Graphics: DirectX 9 or later with WDDM 2.0 driver
5. Display: A monitor capable of at least 800x600 resolution (higher resolutions are
recommended).
Software Requirements:
1. Operating System: Windows 10 is the operating system itself.
2. Internet Connection: Some features of Windows 10, such as Windows Update and online
services, require an internet connection.
3. Microsoft Account: While not mandatory, having a Microsoft account can enable access to
additional features and cloud-based services.
It's important to note that these are the recommended requirements, and the actual hardware
and software requirements may vary depending on the specific usage scenario and software
applications you intend to run. Certain resource-intensive applications or games may have
their own specific requirements beyond the minimum system requirements of Windows 10.
Hardware and software requirements for WINDOWS 11
The following are the recommended hardware and software requirements for Windows 11:
Hardware Requirements:
1. Processor: 1 GHz or faster with at least 2 cores on a 64-bit compatible processor.
2. Memory (RAM): 4 GB RAM or more.
3. Storage: 64 GB of available storage or more.
4. System Firmware: UEFI firmware with Secure Boot capability.
5. TPM Version: TPM version 2.0.
6. Graphics: DirectX 12 or later with a WDDM 2.0 driver.
7. Display: A high-definition (720p) display, 9" or larger diagonally, with 8 bits per color
channel.
Software Requirements:
1. Operating System: Windows 11 is the operating system itself.
2. Internet Connection: Some features and updates in Windows 11 require an internet
connection.
3. Microsoft Account: While not mandatory, having a Microsoft account can enable access to
additional features and cloud-based services.
It's important to note that Windows 11 introduces stricter hardware requirements compared
to previous versions of Windows. The TPM 2.0 requirement and specific processor, storage,
and display specifications need to be met for the installation and proper functioning of
Windows 11. Additionally, not all existing hardware that meets Windows 10 requirements
may be compatible with Windows 11. Microsoft provides a PC Health Check tool that can
help determine if your system meets the requirements for Windows 11.
System Calls in Operating System (OS)
A system call is a way for a user program to interface with the operating system. The
program requests several services, and the OS responds by invoking a series of system
calls to satisfy the request. A system call can be written in assembly language or a
high-level language like C or Pascal. If a high-level language is used, system calls are
predefined functions that the operating system may directly invoke.
In this article, you will learn about the system calls in the operating system and discuss
their types and many other things.
The Application Program Interface (API) connects the operating system's functions
to user programs. It acts as a link between the operating system and a process, allowing
user-level programs to request operating system services. The kernel can only
be accessed through system calls, and system calls are required by any program that uses
system resources.
UNIX System calls for Process management
fork(): Creates a new child process that is a copy of the calling process.
exec(): Replaces the current process image with a new program, given the program name and a
number of command-line arguments.
wait() and waitpid(): Used to wait for the termination of child processes.
wait(int *status): Suspends the current process until one of its child processes
terminates. It can retrieve the termination status of the child process.
waitpid(pid_t pid, int *status, int options): Suspends the current process
until the child process with the specified process ID terminates.
exit(): Terminates the current process and returns the exit status to the parent process.
UNIX System calls for File management
open(const char *path, int flags, mode_t mode): Opens (or creates) the file at
path with the given access flags and permission mode.
read(int fd, void *buf, size_t count): Reads up to count bytes from the file
associated with the file descriptor fd into the buffer buf.
write(int fd, const void *buf, size_t count): Writes data from the buffer
buf of size count to the file associated with the file descriptor fd.
lseek(int fd, off_t offset, int whence): Changes the file offset (position) of
the file associated with the file descriptor fd. The whence parameter specifies how the
offset is calculated (e.g., from the beginning, current position, or end of the file).
unlink(const char *path): Deletes (unlinks) the file specified by the file path.
dup(int filedes): Duplicates an existing file descriptor, returning a new descriptor
that refers to the same open file.
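The file-management calls above can be exercised from Python, whose os module exposes thin wrappers over the corresponding UNIX system calls. The sketch below is illustrative only; the file name is a made-up scratch path, not anything from these notes.

```python
import os
import tempfile

# Scratch file in the system temp directory (illustrative path).
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")

fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)  # open(path, flags, mode)
os.write(fd, b"hello, system calls")               # write(fd, buf, count)

os.lseek(fd, 7, os.SEEK_SET)                       # lseek(fd, offset, whence)
data = os.read(fd, 6)                              # read(fd, buf, count)
print(data)                                        # b'system'

fd2 = os.dup(fd)                                   # dup(filedes): second descriptor
os.close(fd2)                                      # close both descriptors
os.close(fd)
os.unlink(path)                                    # unlink(path): delete the file
```

Note that os.read returns the bytes it read rather than filling a caller-supplied buffer, a small convenience difference from the C interface.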
SCHEDULING POLICIES
Shortest Job First (SJF)
Shortest Job First scheduling selects the process with the smallest burst time to run next.
In the pre-emptive variant, when a new process arrives, the scheduler compares its burst
time with the remaining burst times of the processes already in the system. If the new
process has a shorter burst time than the currently running process or the processes
waiting in the queue, it is given the highest priority and is scheduled for execution.
This continues until all processes are executed.
Advantages of SJF scheduling:
1. Minimized average waiting time: SJF has the potential to achieve the minimum
average waiting time among all scheduling algorithms when the job lengths are
known in advance. By executing shorter jobs first, processes spend less time waiting
in the ready queue, leading to reduced waiting times.
2. Optimal efficiency: SJF can provide optimal efficiency in terms of overall system
performance. By prioritizing shorter jobs, it maximizes the utilization of system
resources and ensures faster completion of processes. This can result in higher
throughput and improved response times.
3. Reduced turnaround time: Turnaround time refers to the time taken for a process
to complete its execution, including waiting time and execution time. SJF
scheduling aims to minimize the turnaround time by prioritizing shorter jobs,
leading to faster completion and improved system efficiency.
4. Improved response time: Response time is the time taken from when a process is
submitted until the first response is produced. SJF scheduling can reduce response
time by quickly executing shorter jobs, providing faster feedback to the users or
applications.
5. Favourable for interactive systems: SJF is particularly advantageous for
interactive systems where users expect quick responses. By giving priority to short
jobs, SJF can provide a more responsive and interactive user experience.
6. Efficient resource utilization: SJF maximizes the utilization of system resources
by executing shorter jobs first. This approach helps in utilizing CPU and other
resources efficiently, resulting in better system performance and resource
management.
Shortest Job First (SJF) scheduling policy also has certain disadvantages, including the
risk of starvation for long processes (newly arriving short jobs keep jumping ahead of
them) and its reliance on knowing or accurately estimating burst times in advance, which
is rarely possible in practice.
Let's consider an example to illustrate how Shortest Job First (SJF) scheduling policy
works. Assume we have the following three processes with their respective burst times
(execution times): Process A: 6 units
Process B: 3 units
Process C: 2 units
SJF aims to minimize the average waiting time by giving priority to shorter jobs. In this
case, Process C completes first, followed by Process B, and finally, Process A. By
executing the shorter jobs earlier, the overall waiting time is reduced, leading to
improved system performance.
It's important to note that SJF scheduling assumes accurate knowledge of job durations
in advance. If the burst times are not accurately estimated or if new processes arrive
with different burst times dynamically, the scheduling decisions may not be optimal.
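The worked example above can be checked with a short calculation. This is a minimal non-preemptive SJF sketch assuming, as the example does, that all three jobs arrive at time 0:

```python
# Burst times from the example: A = 6, B = 3, C = 2 (all arrive at time 0).
bursts = {"A": 6, "B": 3, "C": 2}

order = sorted(bursts, key=bursts.get)  # shortest burst first
clock = 0
waiting, turnaround = {}, {}
for p in order:
    waiting[p] = clock          # time spent in the ready queue before starting
    clock += bursts[p]
    turnaround[p] = clock       # completion time (arrival = 0)

print(order)                     # ['C', 'B', 'A']
print(waiting)                   # {'C': 0, 'B': 2, 'A': 5}
print(sum(waiting.values()) / 3)     # average waiting time ≈ 2.33 units
print(sum(turnaround.values()) / 3)  # average turnaround time = 6.0 units
```

Running C, then B, then A gives an average waiting time of about 2.33 units; any other order (for instance A first) would make it strictly larger, which is the point of the policy.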
First-Come First-Served
First-Come, First-Served (FCFS) is a scheduling policy used in operating systems to
determine the order in which processes or tasks are executed. In FCFS, the processes
are executed in the order they arrive in the ready queue. The process that arrives first is
allocated the CPU first, and subsequent processes are executed in the same order they
entered the queue.
Under FCFS scheduling, the CPU is assigned to the first process that arrives and
remains allocated to it until the process completes or enters a waiting state. If a process
voluntarily releases the CPU, such as for I/O operations, it goes back to the end of the
queue and waits for its turn to use the CPU again.
FCFS scheduling does not consider the burst time or execution time of processes. The
order of execution solely depends on the arrival order. Once a process starts executing,
it continues until it completes or enters a waiting state, regardless of the burst time or
the arrival of other processes.
1. Poor average waiting time: FCFS may result in longer average waiting times,
especially if processes with longer burst times arrive earlier. Since the processes are
executed in the order of arrival, processes with shorter burst times have to wait
behind longer processes, leading to increased waiting times for subsequent
processes.
2. Inefficiency in resource utilization: FCFS does not consider the burst time or
execution time of processes. As a result, a long-running process may occupy the
CPU for a significant duration, even if there are shorter processes waiting. This can
lead to inefficient utilization of CPU resources and potentially underutilization of
other system resources.
3. Lack of responsiveness: FCFS scheduling does not prioritize processes based on
their urgency or priority. If a high-priority process arrives after a long-running
process, it has to wait until all previously arrived processes are executed. This lack
of responsiveness can be problematic in scenarios where timely response is crucial,
such as real-time systems or interactive applications.
4. No consideration for process characteristics: FCFS treats all processes equally,
regardless of their specific characteristics or resource requirements. This lack of
consideration for individual process requirements can lead to suboptimal scheduling
decisions, such as allocating CPU time to a process that does not require it urgently
or that can execute in parallel with other processes.
5. Convoy effect: FCFS scheduling can suffer from the convoy effect, where a
long-running process holds up other shorter processes in the queue. This can occur if
a CPU-intensive process occupies the CPU, causing other processes to wait, even if
they have shorter execution times. It can result in overall system slowdown and
reduced throughput.
6. Lack of adaptability to dynamic situations: FCFS assumes that the arrival order
of processes remains static. In dynamic environments where new processes arrive
or the burst times of existing processes change, FCFS may not be the most suitable
scheduling policy. It does not dynamically adapt to changing priorities or job
characteristics.
Let's consider an example. Assume three processes arrive in the order A, B, C with the
following burst times:
Process A: 8 units
Process B: 4 units
Process C: 6 units
With FCFS scheduling, the order of execution would be as follows:
1. Process A (8 units)
2. Process B (4 units)
3. Process C (6 units)
In this example, the processes are executed in the order they arrived. Process A arrived
first, so it gets the CPU first and runs for 8 units of time. Once Process A completes,
Process B, which arrived second, starts executing and runs for 4 units of time. Finally,
Process C, which arrived last, gets the CPU and runs for 6 units of time.
FCFS scheduling follows a simple rule of executing processes in the order they arrive.
It provides fairness as each process is executed in the same sequence it entered the
system. However, FCFS can lead to longer waiting times for subsequent processes if
earlier processes have longer burst times.
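The waiting and turnaround times for the FCFS example above follow directly from the arrival order. A minimal sketch, assuming all three processes arrive at time 0 in the order given:

```python
# FCFS: processes in arrival order with their burst times from the example.
procs = [("A", 8), ("B", 4), ("C", 6)]

clock = 0
waiting, turnaround = {}, {}
for name, burst in procs:
    waiting[name] = clock       # each process waits for everything ahead of it
    clock += burst
    turnaround[name] = clock

print(waiting)                        # {'A': 0, 'B': 8, 'C': 12}
print(sum(waiting.values()) / 3)      # average waiting time ≈ 6.67 units
print(sum(turnaround.values()) / 3)   # average turnaround ≈ 12.67 units
```

Compare this with SJF on the same workload: running the jobs shortest-first (B, C, A) would cut the average waiting time, which illustrates why FCFS is fair but not optimal.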
Priority-Scheduling
Priority scheduling is a scheduling policy used in operating systems to determine the
order in which processes or tasks are executed based on their priority levels. Each
process is assigned a priority value, which indicates its relative importance or urgency
compared to other processes.
In priority scheduling, the process with the highest priority is allocated the CPU first,
followed by processes with lower priority levels. The priority value can be assigned
based on various factors, such as the type of task, importance of the process, deadline
requirements, or system-defined criteria.
There are two main types of priority scheduling:
1. Pre-emptive Priority Scheduling: In this type, a process with a higher priority can
pre-empt or interrupt the execution of a lower-priority process. If a higher-priority
process arrives or becomes ready to execute, it can pre-empt the currently executing
process and start executing immediately. This ensures that processes with higher
priority receive immediate attention.
2. Non-pre-emptive Priority Scheduling: In this type, once a process starts
executing, it continues until it completes or voluntarily releases the CPU. The
scheduler selects the highest-priority process from the ready queue and allocates the
CPU to it. The process keeps the CPU until it finishes its execution or enters a
waiting state, regardless of the arrival or priority of other processes.
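A minimal sketch of the non-pre-emptive case: the scheduler simply picks the highest-priority ready process and runs it to completion. The process names, priority values, and bursts below are illustrative, not from these notes, and the convention that a lower number means higher priority is an assumption.

```python
# (name, priority, burst) -- illustrative values; lower number = higher priority.
procs = [("P1", 2, 4), ("P2", 1, 3), ("P3", 3, 2)]

# Non-pre-emptive: sort once by priority and run each job to completion.
order = sorted(procs, key=lambda p: p[1])

clock, waiting = 0, {}
for name, _, burst in order:
    waiting[name] = clock       # time spent waiting for higher-priority jobs
    clock += burst

print([name for name, _, _ in order])   # ['P2', 'P1', 'P3']
print(waiting)                          # {'P2': 0, 'P1': 3, 'P3': 7}
```

In the pre-emptive variant the comparison would instead be made on every arrival, interrupting the running process if a higher-priority one appears.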
Round Robin Scheduling
Round Robin (RR) is a scheduling policy in which each process is assigned a time slice,
and the processes are executed in a cyclic order.
The CPU is allocated to each process for a fixed time quantum, typically ranging from
a few milliseconds to a few tens of milliseconds. Once a process has consumed its time
quantum, it is pre-empted and moved to the back of the ready queue, allowing the next
process to execute.
The cycle repeats until all processes complete their execution or enter a waiting state.
If a process completes before consuming its entire time quantum, it is removed from
the queue. If a process requires more time to execute than the allocated time quantum,
it is pre-empted and moved to the back of the queue to resume execution later.
Round Robin scheduling ensures fairness by providing equal opportunities for all
processes to execute. Each process gets a chance to utilize the CPU for a fixed time
quantum, regardless of its priority or execution time. This property makes Round Robin
particularly suitable for time-sharing systems or scenarios where fairness and
responsiveness are important.
1. Fairness: Round Robin scheduling provides fairness by ensuring that each process
gets an equal opportunity to execute. The fixed time quantum guarantees that no
process monopolizes the CPU for an extended period, preventing starvation of
lower-priority processes. This fairness is particularly important in time-sharing
systems where multiple users or processes need to be served simultaneously.
2. Responsiveness: Round Robin scheduling offers good responsiveness to interactive
applications and processes. Since each process gets a chance to execute within a
fixed time quantum, even processes with lower priority or shorter burst times
receive timely CPU time, leading to improved system responsiveness and reduced
perceived latency.
3. Shared resource allocation: Round Robin facilitates efficient sharing of CPU
resources among multiple processes. Each process gets a fair share of CPU time,
ensuring that no single process dominates resource utilization. This is especially
beneficial in multi-user systems or scenarios where resources need to be distributed
equitably.
4. Simplicity and ease of implementation: Round Robin scheduling is relatively
simple to implement compared to more complex scheduling policies. It follows a
cyclic and predictable pattern of execution, making it easier to understand,
implement, and manage. Its simplicity also contributes to lower overhead and
computational complexity compared to other scheduling algorithms.
5. Predictable behaviour: Round Robin scheduling exhibits predictable behaviour
due to its fixed time quantum. This predictability makes it easier for system
administrators and users to estimate the response time and plan their tasks
accordingly.
6. Context switching overhead: Although context switching introduces overhead in
Round Robin scheduling, it allows for efficient time-sharing and responsiveness.
The fixed time quantum helps balance the overhead of context switching with fair
CPU allocation, ensuring that processes get reasonable CPU time without
unnecessary delays.
7. Pre-emptive scheduling: Round Robin is a pre-emptive scheduling policy,
meaning that processes can be pre-empted after their time quantum expires. This
pre-emptive nature allows for better control over CPU allocation and the ability to
prioritize certain processes or handle time-sensitive tasks effectively.
Round Robin scheduling is widely used in operating systems, particularly in
time-sharing systems and environments that require fairness, responsiveness, and
resource sharing. However, the choice of time quantum and consideration of system
characteristics are crucial to optimize its performance and achieve the desired
trade-offs between fairness, responsiveness, and system overhead.
Disadvantages of Round Robin scheduling:
1. Poor response time: Round-robin scheduling gives each process an equal amount
of time to execute, regardless of its priority or resource requirements. This can lead
to poor response times for interactive processes that require immediate user input.
If a high-priority process gets a large burst of CPU time, it may cause a noticeable
delay for other processes.
2. Inefficient for long-running processes: Round-robin scheduling is not efficient for
long-running processes that need to execute for extended periods. The algorithm
divides CPU time equally among all processes, which means long-running processes
will have to give up the CPU frequently, leading to unnecessary context-switching
overhead.
3. Inefficient utilization of CPU: Round-robin scheduling may result in inefficient
utilization of the CPU. If a process finishes its burst before the time quantum
expires, the remaining time is wasted. This can occur if a process completes its work
quickly or when there are many short processes in the system.
4. No priority differentiation: Round-robin scheduling does not consider the priority
of processes. All processes are treated equally, and each receives the same time
quantum. In scenarios where certain processes require more attention or have higher
priorities, round-robin may not be the most suitable scheduling algorithm.
5. Inefficient for resource-intensive processes: If there are resource-intensive
processes that require a significant amount of CPU time, round-robin scheduling
may not allocate enough time for them to complete their work efficiently. These
processes may end up getting interrupted frequently, leading to lower overall
performance.
6. High overhead due to context switching: Round-robin scheduling requires
frequent context switching between processes, as each process is given a fixed time
quantum. The overhead associated with saving and restoring process context can be
significant, especially if the number of processes is large or the time quantum is
small.
7. Inability to prioritize I/O-bound processes: Round-robin scheduling treats all
processes equally, regardless of whether they are CPU-bound or I/O-bound. This
can be problematic when dealing with I/O-bound processes, as they may spend a
significant amount of time waiting for I/O operations to complete. In such cases, it
would be more efficient to prioritize other processes that can make better use of the
CPU.
Consider the set of 5 processes whose arrival times and burst times are given below-
Process   Arrival Time   Burst Time
P1        0              5
P2        1              3
P3        2              1
P4        3              2
P5        4              3
If the CPU scheduling policy is Round Robin with time quantum = 2 units, then the Gantt
Chart is:
| P1 | P2 | P3 | P1 | P4 | P5 | P2 | P1 | P5 |
0    2    4    5    7    9    11   12   13   14
Process   Exit Time   Turn Around Time   Waiting Time
P1        13          13 – 0 = 13        13 – 5 = 8
P2        12          12 – 1 = 11        11 – 3 = 8
P3        5           5 – 2 = 3          3 – 1 = 2
P4        9           9 – 3 = 6          6 – 2 = 4
P5        14          14 – 4 = 10        10 – 3 = 7
Now,
• Average Turn Around time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 unit
• Average Waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 unit
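The schedule above can be reproduced with a short simulation. This is a minimal sketch of Round Robin with quantum 2; it assumes, as in this example, that the CPU is never idle and that a newly arrived process enters the ready queue before a process pre-empted at the same instant:

```python
from collections import deque

# (name, arrival, burst) taken from the worked example.
procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 2), ("P5", 4, 3)]
quantum = 2

remaining = {name: burst for name, _, burst in procs}
arrivals = deque(sorted(procs, key=lambda p: p[1]))  # not yet arrived
ready = deque()
clock, gantt, finish = 0, [], {}

def admit(t):
    """Move every process that has arrived by time t into the ready queue."""
    while arrivals and arrivals[0][1] <= t:
        ready.append(arrivals.popleft()[0])

admit(0)
while ready:                      # CPU is never idle in this example
    name = ready.popleft()
    run = min(quantum, remaining[name])
    gantt.append(name)
    clock += run
    remaining[name] -= run
    admit(clock)                  # new arrivals queue BEFORE the pre-empted job
    if remaining[name] > 0:
        ready.append(name)        # quantum expired: back of the queue
    else:
        finish[name] = clock      # process complete: record exit time

turnaround = {name: finish[name] - arr for name, arr, _ in procs}
waiting = {name: turnaround[name] - burst for name, _, burst in procs}
print(gantt)                            # matches the Gantt chart above
print(sum(turnaround.values()) / 5)     # 8.6
print(sum(waiting.values()) / 5)        # 5.8
```

The simulated Gantt sequence P1, P2, P3, P1, P4, P5, P2, P1, P5 and the averages of 8.6 and 5.8 units agree with the hand computation.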