
OS : Q-Bank Unit : 01-02

What is Operating System? Explain different types of operating system.

Operating System (OS):

The main software that runs on a computer, managing its resources and allowing users to interact with it.
It controls hardware like the CPU, memory, and storage, ensuring they work together smoothly.
It provides interfaces for users and programs to interact with the computer.

Types of Operating Systems:

1. Single-tasking Operating System:
Runs only one program at a time.
Example: MS-DOS.
2. Multi-tasking Operating System:
Allows multiple programs to run simultaneously.
Examples: Windows, macOS, Linux.
3. Single-user Operating System:
Designed for use by one person at a time.
Example: Most personal computers run single-user
operating systems.
4. Multi-user Operating System:
Supports multiple users accessing the system
simultaneously.
Common in servers and mainframe computers.
Example: Unix/Linux servers.
5. Real-time Operating System (RTOS):
Responds to input instantly, with minimal delay.
Used in systems requiring immediate responses, like
robotics and aerospace.
Example: VxWorks, QNX.
6. Embedded Operating System:
Built into devices and often tailored to specific hardware.
Found in smartphones, IoT devices, and appliances.
Examples: Android (for smartphones), FreeRTOS (for
embedded systems).
7. Distributed Operating System:
Runs on multiple interconnected computers, working
together as a single system.
Enables resource sharing and communication across a
network.
Example: Google’s distributed operating system for its data
centers.
8. Network Operating System (NOS):
Manages network resources and provides services like file
sharing and printing.
Allows multiple computers to communicate and share
resources.
Example: Novell NetWare, Windows Server.

Define operating system. Explain the different views of operating system.

Explain evolution of operating system

Define process. Differentiate between a process and a program.

The term process (job) refers to program code that has been loaded into a computer's memory so that it can be executed by the central processing unit (CPU). A process can be described as an instance of a program running on a computer, or as an entity that can be assigned to and executed on a processor. A program becomes a process when it is loaded into memory, and thus is an active entity.

| Program | Process |
|---|---|
| A program contains a set of instructions designed to complete a specific task. | A process is an instance of an executing program. |
| A program is a passive entity; it resides in secondary memory. | A process is an active entity; it is created during execution and loaded into main memory. |
| A program exists at a single place and continues to exist until it is deleted. | A process exists for a limited span of time; it terminates after completing its task. |
| A program is a static entity. | A process is a dynamic entity. |
| A program has no resource requirement; it only needs memory space to store its instructions. | A process has high resource requirements; it needs CPU, memory, and I/O during its lifetime. |
| A program does not have a control block. | A process has its own control block, called the Process Control Block (PCB). |
| A program has two logical components: code and data. | In addition to program data, a process requires additional information for its management and execution. |
| A program does not change itself. | Many processes may execute a single program; the program code is the same, but the program data may differ. |
| A program contains instructions. | A process is a sequence of instruction execution. |

Explain different services provided by the operating system.

An operating system provides several services to users and applications running on a computer system. The main services provided by an operating system include:

1. Process Management: The operating system manages the execution of programs or processes on the system. It allocates
resources such as CPU time, memory, and I/O devices to
processes based on their priorities and scheduling algorithms. It
also provides mechanisms for inter-process communication and
synchronization.
2. Memory Management: The operating system manages the
memory resources of the system. It allocates memory to
processes and ensures that each process can access only its
own memory. It also provides mechanisms for virtual memory
management and memory protection to prevent unauthorized
access to memory.
3. Device Management: The operating system manages the
input/output (I/O) devices of the system. It provides a uniform
interface for device drivers and manages the allocation of
devices to processes. It also handles errors and interrupts
generated by the devices.
4. File System Management: The operating system manages the
storage resources of the system. It provides a file system that
organizes and manages data stored on disks or other storage
devices. It provides mechanisms for file access, protection, and
sharing among processes.
5. Security Management: The operating system manages the
security resources of the system. It provides mechanisms for
user authentication, access control, and auditing to ensure that
only authorized users can access system resources and data.
6. Network Management: The operating system manages the
network resources of the system. It provides mechanisms for
network configuration, communication, and security.
7. User Interface: The operating system provides a user interface
that allows users to interact with the system. The user interface
may be a command-line interface or a graphical user interface
(GUI).
8. Error Handling: The operating system provides mechanisms for
handling errors and exceptions generated by the system or by
applications running on the system. It also provides
mechanisms for logging and reporting errors and exceptions.
9. Performance Monitoring: The operating system provides tools
for monitoring system performance and resource utilization. It
allows users and system administrators to identify performance
bottlenecks and optimize system performance.
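A few of these services can be exercised directly from user space. The sketch below (illustrative, not from the source) touches process management and error handling via Python's subprocess module, which asks the OS to create, schedule, and reap a child process:

```python
import subprocess, sys, os

# Process management: the OS creates a child process and schedules it.
result = subprocess.run([sys.executable, "-c", "print('child ran')"],
                        capture_output=True, text=True)

# Error handling: the OS reports the child's exit status to the parent.
status = result.returncode

# Every process is given a unique ID by the OS.
pid = os.getpid()

print(result.stdout.strip(), status, pid > 0)  # child ran 0 True
```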

What is operating system? Give the view of OS as a resource manager.

What is Batch operating System? Discuss its advantages and disadvantages.

What is Time-sharing operating System? Discuss its advantages and disadvantages.

What is distributed operating System? Discuss its advantages and disadvantages.

What is Real-time operating System? Discuss its advantages and disadvantages.

Difference between process and thread.

| S.No. | Process | Thread |
|---|---|---|
| 1. | A process is any program in execution. | A thread is a segment of a process. |
| 2. | Takes more time to terminate. | Takes less time to terminate. |
| 3. | Takes more time to create. | Takes less time to create. |
| 4. | Takes more time for context switching. | Takes less time for context switching. |
| 5. | Less efficient in terms of communication. | More efficient in terms of communication. |
| 6. | Multiprogramming holds the concept of multiple processes. | Multiple threads do not need multiple programs in action, because a single process consists of multiple threads. |
| 7. | Processes are isolated. | Threads share memory. |
| 8. | Called a heavyweight process. | Lightweight: each thread in a process shares code, data, and resources. |
| 9. | Process switching uses an interface into the operating system. | Thread switching does not require a call into the operating system or an interrupt to the kernel. |
| 10. | If one process is blocked, the execution of other processes is not affected. | If a user-level thread is blocked, all other user-level threads of that process are blocked. |

Explain the objectives and functions of operating systems.

Explain different states of a process with a suitable diagram.

What is PCB? Discuss its major fields.

A Process Control Block (PCB) is a data structure used by operating systems to manage information about a particular process. The PCB contains information about the process, including its current state, priority, scheduling information, memory usage, and I/O status.

The PCB is a crucial component of an operating system, as it enables the operating system to manage multiple processes concurrently. When a process is created, the operating system creates a PCB for that process, which is used to store and update information about the process as it executes.

Here are some of the key pieces of information that are typically
stored in a PCB:

1. Process ID: A unique identifier for the process.
2. Program counter: The current value of the program counter, which indicates the address of the next instruction to be executed.
3. CPU registers: The values of the CPU registers used by the
process.
4. Memory management information: Information about the
memory that is being used by the process, including the base
and limit registers.
5. Process state: The current state of the process, such as ready,
running, blocked, or terminated.
6. Priority: The priority level of the process, which determines how much CPU time it is allocated.
7. I/O status: Information about any I/O operations that are
currently being performed by the process.

The operating system uses the information stored in the PCB to manage the execution of the process. For example, when a process is scheduled to run, the operating system retrieves the process's PCB and updates the CPU registers with the values stored in it. When the process is blocked, the operating system updates the PCB with information about the I/O operation that the process is waiting for.

Overall, the PCB is a critical component of an operating system's process management. It enables the operating system to manage the execution of multiple processes concurrently by providing a centralized location for storing and updating information about each process.
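The PCB fields listed above can be modeled as a simple record. The sketch below is illustrative only (field names are assumptions, not a real kernel structure):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                      # process ID
    state: str = "ready"          # ready / running / blocked / terminated
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU registers
    base: int = 0                 # memory-management info: base register
    limit: int = 0                # memory-management info: limit register
    priority: int = 0             # determines how much CPU time is allocated
    io_status: list = field(default_factory=list)  # pending I/O operations

# On a context switch, the OS saves the running process's registers into
# its PCB and later restores them from there to resume execution.
pcb = PCB(pid=42, priority=5)
pcb.state = "running"
pcb.registers = {"r0": 7, "sp": 0x8000}
print(pcb.pid, pcb.state)
```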

Explain the microkernel system architecture in detail.

Explain monolithic operating system structure.

Define a process. Explain the process state transition with a neat diagram.

A process is an instance of a program in execution. It is the basic unit of execution in an operating system. Each process has a unique identifier associated with it, known as the process_id.

The process state diagram for UNIX system is given below:

UNIX uses two categories of processes: system processes and user processes.
System processes operate in kernel mode and execute operating system code to perform administrative and housekeeping functions.
User processes operate in user mode to execute user programs and utilities.

The various states over here are:

Created: The process is newly created and is not ready to run.
User Running: The process is currently executing in user mode.
Kernel Running: The process is currently running in kernel
mode.
Ready to run, in memory: The process is ready to run and is
waiting for the Kernel to schedule it.
Ready to run, swapped: The process is ready to run, but is currently not in main memory and is waiting for the swapper to bring it into main memory.
Sleep, Swapped: A blocked state wherein the process is
awaiting an event and has been swapped to secondary storage.
Asleep in memory: A blocked state wherein the process is
waiting for an event to occur and is currently in the main
memory.
Pre-empted: The process is returning from kernel mode to user mode, but the kernel pre-empts it, performs a process switch, and schedules another process.
Zombie: The process no longer exists, but it leaves behind a record for its parent process to access.

Working:

A process after creation is not immediately put into the Ready state. Instead it is put into the Created state. This allows a process to be created even when there is not enough memory available to run it.
Then it can move to Ready to run in memory or Ready to run
swapped state based on the available memory.
Now, when it arrives in memory, based on the scheduling we
move to kernel mode or User mode as per the requirements.
Interrupts in between will send the system into a Sleep state
(Blocked).
When the process finishes execution, it invokes an Exit system
call and goes into Zombie state.
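The transitions described above can be summarized as a small state table. This is a rough sketch for study purposes (the state and event names are paraphrases, not kernel identifiers):

```python
# Transition table for the UNIX process states described above.
TRANSITIONS = {
    ("created", "enough memory"): "ready in memory",
    ("created", "not enough memory"): "ready swapped",
    ("ready swapped", "swap in"): "ready in memory",
    ("ready in memory", "schedule"): "kernel running",
    ("kernel running", "return to user"): "user running",
    ("user running", "syscall or interrupt"): "kernel running",
    ("kernel running", "sleep"): "asleep in memory",
    ("asleep in memory", "wakeup"): "ready in memory",
    ("kernel running", "preempt"): "preempted",
    ("kernel running", "exit"): "zombie",
}

def step(state, event):
    # Unknown events leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

s = step("created", "enough memory")   # -> "ready in memory"
s = step(s, "schedule")                # -> "kernel running"
s = step(s, "exit")                    # -> "zombie"
print(s)
```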

Explain Thread Life Cycle with diagram.

What is thread? Explain thread structure, and explain any one type of thread in detail.

What is thread and what are the differences between user-level threads and kernel supported threads?

A thread is a single sequence stream within a process. Threads have many of the same properties as processes, so they are called lightweight processes.

(Note: all points are not needed; 5-6 points are enough.)

| S.No. | Parameter | User-Level Thread | Kernel-Level Thread |
|---|---|---|---|
| 1. | Implemented by | User threads are implemented by users (a thread library). | Kernel threads are implemented by the operating system (OS). |
| 2. | Recognition | The operating system does not recognize user-level threads. | Kernel threads are recognized by the operating system. |
| 3. | Implementation | Implementation of user threads is easy. | Implementation of kernel-level threads is complicated. |
| 4. | Context switch time | Context switch time is less. | Context switch time is more. |
| 5. | Hardware support | Context switch requires no hardware support. | Hardware support is needed. |
| 6. | Blocking operation | If one user-level thread performs a blocking operation, the entire process is blocked. | If one kernel thread performs a blocking operation, another thread can continue execution. |
| 7. | Multithreading | Multithreaded applications cannot take advantage of multiprocessing. | Kernels can be multithreaded. |
| 8. | Creation and management | User-level threads can be created and managed more quickly. | Kernel-level threads take more time to create and manage. |
| 9. | Operating system | Any operating system can support user-level threads. | Kernel-level threads are operating-system specific. |
| 10. | Thread management | The thread library contains the code for thread creation, message passing, thread scheduling, data transfer, and thread destruction. | The application code contains no thread-management code; it is merely an API to kernel mode. The Windows operating system uses this approach. |
| 11. | Example | Java threads, POSIX threads. | Windows, Solaris. |
| 12. | Advantages | Simple and quick to create; can run on any operating system; perform better than kernel threads since no system calls are needed to create threads; switching between threads does not need kernel-mode privileges. | Threads belonging to the same process can be scheduled on different processors; kernel routines themselves can be multithreaded; when one thread is halted, the kernel can schedule another thread of the same process. |
| 13. | Disadvantages | Multithreaded applications cannot benefit from multiprocessing; if a single thread performs a blocking operation, the entire process is halted. | Transferring control from one thread to another requires a mode switch to kernel mode; they take more time to create and manage than user-level threads. |
| 14. | Memory management | Each thread has its own stack, but all threads share the same address space. | Kernel-level threads have their own kernel-managed stacks and state, so they are better isolated from each other. |
| 15. | Fault tolerance | Less fault-tolerant: if a user-level thread crashes, it can bring down the entire process. | Can be managed independently, so if one thread crashes it does not necessarily affect the others. |
| 16. | Resource utilization | Cannot take full advantage of system resources, as they have no direct access to system-level features like I/O operations. | Can access system-level features like I/O operations, so they can take full advantage of system resources. |
| 17. | Portability | More portable than kernel-level threads. | Less portable than user-level threads. |
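The "threads share memory" point above can be shown in a few lines. The sketch below (illustrative, not from the source) has a child thread write into an object that the main thread then reads; separate processes would each get their own copy instead:

```python
import threading

shared = []  # one object in the process's single address space

def worker():
    # The child thread writes to the very same list object the
    # main thread holds, because all threads share memory.
    shared.append("written by thread")

t = threading.Thread(target=worker)
t.start()
t.join()
print(shared)  # ['written by thread']
```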

Define term Scheduler, Scheduling and Scheduling Algorithm with example.

Define mutual exclusion. How can mutual exclusion be achieved?

Mutual exclusion is a concept in computer science and concurrent programming that ensures that only one process or thread can access a particular resource or critical section of code at any given time. This means that while one process is using the resource or executing the critical section, all other processes are prevented from accessing it, to avoid conflicts and maintain data integrity.

There are several methods or approaches to achieve mutual exclusion:

Locks: The lock method involves using a data structure or a variable to control access to a resource. The resource is locked when a process begins to access it, and unlocked when it is released.
Semaphores: A semaphore is a synchronization object that allows a limited number of processes to access a shared resource at the same time. It contains a value that is decremented and incremented by processes as they enter and leave the critical section.
Monitors: A monitor is an abstract data type that allows safe
access to shared resources by ensuring that only one process
can execute a critical section of code at a time.
Atomic Operations: Atomic operations are operations that are
indivisible and cannot be interrupted by other processes. They
are used to ensure that a critical section of code is executed as
a single, atomic unit.
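A minimal sketch of the lock approach (illustrative, not from the source): without the lock, the read-modify-write on the shared counter could interleave between threads; with it, only one thread is in the critical section at a time.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # entry section: acquire the lock
            counter += 1  # critical section
        # exit section: the lock is released when the `with` block ends

threads = [threading.Thread(target=increment, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 200000 with the lock; unpredictable without it
```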
Explain context switching.

Context switching is a fundamental operation performed by the operating system to manage multiple processes or threads concurrently on a single CPU core. It involves saving the current state of a running process or thread (its context) and restoring the state of another process or thread so that it can continue execution. Context switching allows the CPU to rapidly switch between different tasks, giving the illusion of parallel execution.

Here’s how context switching typically works:

1. Saving Context: When the CPU needs to switch from executing one process or thread to another, the current state of the executing process or thread is saved. This includes information such as the program counter (the address of the next instruction to execute), register values, stack pointer, and other relevant CPU state.
2. Loading Context: Once the current context is saved, the
operating system selects the next process or thread to execute.
It loads the saved context of the selected process or thread into
the CPU registers, allowing it to resume execution from where it
left off.
3. Execution: With the new context loaded, the CPU begins
executing instructions from the selected process or thread. The
process or thread continues running until it voluntarily
relinquishes the CPU (e.g., by blocking on I/O operations) or
until the operating system preempts it to allow another process
or thread to run.
4. Repeat: The process of saving the current context, loading the
context of the next process or thread, and resuming execution
continues in a cyclic manner, allowing the CPU to switch
between multiple tasks rapidly.
Context switching is a crucial mechanism for multitasking operating
systems, allowing them to efficiently utilize CPU resources by
sharing them among multiple processes or threads. However,
context switching comes with overhead, as saving and restoring
context requires CPU time and memory bandwidth. Minimizing
context switching overhead is essential for optimizing system
performance.
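The save/load/execute cycle above can be mimicked with Python generators, which is only a toy model of context switching, not an OS implementation: each "process" yields to give up the CPU, and the scheduler loop restores whichever context runs next.

```python
from collections import deque

def process(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"  # "run" one step, then relinquish the CPU

ready_queue = deque([process("A", 2), process("B", 2)])
trace = []
while ready_queue:
    ctx = ready_queue.popleft()   # load the context of the next process
    try:
        trace.append(next(ctx))   # execute until it yields the CPU
        ready_queue.append(ctx)   # save context: put it back in the queue
    except StopIteration:
        pass                      # process finished; drop its context

print(trace)  # ['A:0', 'B:0', 'A:1', 'B:1']
```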

What is System call? Discuss different types of system calls.

System calls are functions provided by the kernel that allow user-
level programs to interact with the operating system. They provide a
standardized interface for accessing system resources such as files,
memory, and I/O devices. System calls are invoked by user-level
programs using a special instruction called a trap or interrupt, which
switches the CPU from user mode to kernel mode. The kernel then
executes the requested system call and returns control back to the
user-level program.

Here are some examples of system calls in UNIX:

1. open(): This system call is used to open a file for reading, writing, or both. It takes a filename and a set of flags as arguments and returns a file descriptor, which is a unique identifier for the open file.
2. read(): This system call is used to read data from a file or a
device. It takes a file descriptor, a buffer, and the number of
bytes to read as arguments and returns the number of bytes
actually read.
3. write(): This system call is used to write data to a file or a
device. It takes a file descriptor, a buffer, and the number of
bytes to write as arguments and returns the number of bytes
actually written.
4. fork(): This system call is used to create a new process. It
duplicates the calling process, creating a new process with the
same memory and resources. The new process is called the
child process, while the original process is called the parent
process.
5. exec(): This system call is used to replace the current process
image with a new process image. It takes a filename and a set of
arguments as arguments and loads the specified program into
memory, replacing the current program.
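The open/write/read calls above can be exercised through Python's os module, which wraps the underlying UNIX system calls. A small sketch (the file path is illustrative):

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# open(): returns a file descriptor, a small integer identifying the file.
fd = os.open(path, os.O_CREAT | os.O_WRONLY)
# write(): returns the number of bytes actually written.
written = os.write(fd, b"hello")
os.close(fd)

fd = os.open(path, os.O_RDONLY)
# read(): returns up to the requested number of bytes.
data = os.read(fd, 100)
os.close(fd)

print(written, data)  # 5 b'hello'
```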

Write short note: 1) Semaphores 2) Monitors

1. Semaphores: Semaphores are synchronization primitives used in concurrent programming to control access to shared resources. They are essentially integer variables that support two fundamental operations: "wait" (P) and "signal" (V).
Semaphores can be used to enforce mutual exclusion,
coordinate the execution of concurrent processes, and prevent
race conditions. They play a crucial role in solving
synchronization problems, such as the producer-consumer
problem and the readers-writers problem.
2. Monitors: Monitors are high-level synchronization constructs
used to control access to shared resources in concurrent
programming. A monitor encapsulates shared data and the
procedures (or methods) that operate on that data. It ensures
mutual exclusion by allowing only one thread to execute within
the monitor at any given time. Monitors provide a structured
approach to synchronization and are easier to use than low-
level primitives like semaphores. They promote modular design
and help in writing concurrent programs that are easier to
understand and maintain.
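The wait (P) and signal (V) operations can be sketched with Python's threading.Semaphore. This is an illustrative example, not from the source: a semaphore initialized to 2 never admits more than two threads into the guarded region at once.

```python
import threading

sem = threading.Semaphore(2)   # counter initialized to 2
guard = threading.Lock()       # protects the bookkeeping below
in_use = 0
max_seen = 0

def use_resource():
    global in_use, max_seen
    sem.acquire()              # wait / P: decrement; block if counter is 0
    with guard:
        in_use += 1
        max_seen = max(max_seen, in_use)
    with guard:
        in_use -= 1
    sem.release()              # signal / V: increment; wake a waiter

threads = [threading.Thread(target=use_resource) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_seen <= 2)  # the semaphore never admits more than 2 at once
```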

Define: 1) Critical Section 2) Waiting Time 3) Race Condition
1. Race Condition: A race condition arises when two or more
processes or threads access a shared resource or variable
simultaneously, and the final outcome depends on the order of
their execution. These conditions lead to unpredictable results
and are notoriously difficult to reproduce and debug. Preventing
race conditions requires careful synchronization of access to
shared resources.
2. Waiting Time:
Waiting time, in the context of process scheduling, refers to
the amount of time a process spends in the ready state,
waiting to be assigned to the CPU for execution.
When a process is ready to execute but cannot do so
immediately due to other processes currently running or
the CPU being busy, it enters the ready queue and waits for
its turn.
Waiting time is an essential metric in process scheduling
algorithms, as minimizing waiting time helps improve
system efficiency and overall performance.
3. Critical Section: A critical section is a segment of a program
where a shared resource or variable is accessed by one process
or thread at a time. It’s designed to prevent race conditions by
ensuring exclusive access to the shared resource. Critical
sections are typically enclosed within locking mechanisms like
mutexes, semaphores, or monitors, which allow only one
process or thread to enter the critical section at a time. To
minimize contention and waiting time for other processes or
threads, critical sections should be kept as short as possible.
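The waiting-time definition above can be made concrete with a small first-come first-served calculation (the burst times are illustrative, not from the source): each process waits for the total burst time of everything ahead of it in the ready queue.

```python
bursts = [24, 3, 3]          # CPU burst times of P1, P2, P3 in ms

waiting = []
elapsed = 0
for b in bursts:
    waiting.append(elapsed)  # time this process spent in the ready state
    elapsed += b             # CPU is busy for this process's burst

avg_waiting = sum(waiting) / len(waiting)
print(waiting, avg_waiting)  # [0, 24, 27] 17.0
```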

Explain producer-consumer problem and solve it using semaphore. Write pseudo code for the same.

Explain the IPC Problem known as Dining Philosopher Problem.
Explain IPC Problem – Readers & Writers Problem.

The Readers and Writers problem is a classic synchronization problem in inter-process communication (IPC). The problem involves a shared resource, such as a file or database, that is accessed by multiple processes. The processes can be categorized as either readers or writers. Readers only read the resource, while writers both read and write to the resource.

The problem arises when multiple processes attempt to access the shared resource simultaneously. If a writer is currently writing to the resource, no other process should be allowed to read or write until the writer has finished. Similarly, if a reader is currently reading the resource, other readers should be allowed to read, but writers should be prevented from writing until all readers have finished.

To solve the Readers and Writers problem, a synchronization mechanism must be used to ensure mutual exclusion between readers and writers. Several approaches can be used to solve the problem:

First Reader Writer Problem: In this solution, the first process that arrives and requests access to the resource is given priority. If a reader is currently accessing the resource, other readers are allowed to read simultaneously. However, if a writer is accessing the resource, no other process can access it until the writer has finished.
Second Reader Writer Problem: In this solution, writers are given
priority over readers. If a writer requests access to the resource,
all other processes must wait until the writer has finished. If a
reader requests access to the resource while a writer is writing,
the reader must wait until the writer has finished.
Third Reader Writer Problem: In this solution, readers and writers are given equal priority. If a writer requests access to the resource, it is granted exclusive access. If a reader requests access while a writer is writing, the reader must wait. However, if no writers are waiting, readers are allowed to access the resource simultaneously.
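A compact sketch of the first solution (illustrative, not from the source): the first reader locks writers out, the last reader lets them back in, and a writer takes the resource semaphore exclusively.

```python
import threading

resource = threading.Semaphore(1)   # exclusive access for writers
mutex = threading.Lock()            # protects read_count
read_count = 0
log = []

def reader(i):
    global read_count
    with mutex:
        read_count += 1
        if read_count == 1:
            resource.acquire()      # first reader locks out writers
    log.append(f"reader {i} reading")
    with mutex:
        read_count -= 1
        if read_count == 0:
            resource.release()      # last reader lets writers in

def writer(i):
    resource.acquire()              # exclusive access
    log.append(f"writer {i} writing")
    resource.release()

threads = [threading.Thread(target=reader, args=(n,)) for n in range(3)]
threads += [threading.Thread(target=writer, args=(n,)) for n in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log), read_count)  # 5 0
```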

What is Mutex? Write a pseudo code to achieve mutual exclusion using mutex.

What do you mean by Deadlock Avoidance? Explain the use of Banker's Algorithm for Deadlock Avoidance with illustration.

The banker's algorithm is a resource-allocation and deadlock-avoidance algorithm. It tests for safety by simulating the allocation of the predetermined maximum possible amounts of all resources, then performs a "safe-state" check on the possible activities, before deciding whether the allocation should be allowed to continue.
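The safety check can be sketched as follows. The matrices below are a standard textbook example, not the exercise's missing snapshot: a state is safe if every process can finish in some order using the available resources plus whatever finished processes release.

```python
def is_safe(available, max_need, allocation):
    n = len(allocation)
    # Need = Max - Allocation, computed per process and resource type.
    need = [[m - a for m, a in zip(max_need[i], allocation[i])]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    order = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # Pretend process i runs to completion and releases
                # everything it currently holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                progressed = True
    return all(finished), order

available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
safe, order = is_safe(available, max_need, allocation)
print(safe, order)  # True [1, 3, 4, 0, 2]
```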

Consider the snapshot of the system with five processes and four types of resources A, B, C, D. The currently available set of resources is (1, 5, 2, 0). Find the content of the Need matrix. Is the system in a safe state?

Which are the necessary conditions for Deadlock? Explain Deadlock recovery in brief.

What is Deadlock? List the conditions that lead to deadlock. How can Deadlock be prevented?

Deadlock is a situation in which two or more competing actions are each waiting for the other to finish, preventing any of them from progressing. In other words, it is a scenario where two or more processes are unable to proceed because each is waiting for the other to release a resource.

Conditions that lead to deadlock, often referred to as the necessary conditions for deadlock, are:

1. Mutual Exclusion: At least one resource must be held in a non-shareable mode. This means that only one process can use the resource at a time.
2. Hold and Wait: A process must be holding at least one resource
and waiting to acquire additional resources that are currently
held by other processes.
3. No Preemption: Resources cannot be forcibly taken away from a
process; they must be released voluntarily by the process
holding them.
4. Circular Wait: There must exist a set of waiting processes such
that P0 is waiting for a resource held by P1, P1 is waiting for a
resource held by P2, …, and Pn is waiting for a resource held by
P0, creating a circular chain of waiting.

To prevent deadlock, several strategies can be employed:

1. Avoidance: Systematically ensure that the conditions necessary for deadlock cannot occur. This can be done using various algorithms, such as the Banker's algorithm, which ensures that resources are allocated in such a way that deadlock cannot occur.
2. Prevention: Design the system in such a way that one or more
of the necessary conditions for deadlock cannot occur. For
example, if we eliminate the “hold and wait” condition by
requiring processes to request all necessary resources at once,
deadlock can be prevented. However, this may lead to reduced
efficiency.
3. Detection and Recovery: Allow deadlock to occur, but detect it
when it happens and take corrective action to recover from it.
This might involve killing some processes or preemptively
releasing resources to break the deadlock. However, this
approach can be complex and may result in loss of work or
system instability.
4. Avoidance Heuristics: Employ heuristics to dynamically
allocate resources in such a way as to avoid the possibility of
deadlock. This approach may not guarantee deadlock
avoidance in all cases but can work well in practice.
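One common prevention technique breaks the circular-wait condition: every thread acquires locks in one fixed global order, so a cycle of waiters can never form. A brief sketch (illustrative, not from the source):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
ORDER = {id(lock_a): 0, id(lock_b): 1}   # global ordering of all locks

def acquire_in_order(*locks):
    # Always acquire in ascending global order, regardless of the
    # order the caller listed the locks in.
    for lk in sorted(locks, key=lambda l: ORDER[id(l)]):
        lk.acquire()

done = []

def task(name):
    # Every thread needs both locks, but because all threads take them
    # in the same order, no circular wait (and hence no deadlock) occurs.
    acquire_in_order(lock_b, lock_a)
    done.append(name)
    lock_b.release()
    lock_a.release()

threads = [threading.Thread(target=task, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(done))  # [0, 1, 2, 3]
```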

Difference between deadlock and starvation.

| S.No. | Deadlock | Starvation |
|---|---|---|
| 1. | All processes keep waiting for each other to complete, and none gets executed. | High-priority processes keep executing while low-priority processes are blocked. |
| 2. | Resources are blocked by the processes. | Resources are continuously utilized by high-priority processes. |
| 3. | Requires the necessary conditions: mutual exclusion, hold and wait, no preemption, and circular wait. | Occurs when priorities are assigned to processes. |
| 4. | Also known as circular wait. | Also known as livelock. |
| 5. | Can be prevented by avoiding the necessary conditions for deadlock. | Can be prevented by aging. |

What is RAG? Explain briefly.

Explain UNIX Multi-level feedback queue scheduling.

Find average waiting time for Shortest job first scheduling, and
Round robin scheduling algorithm.

Solve following by SJF preemptive and non-preemptive. Draw Gantt Chart, Average Waiting Time and Average Turnaround Time. Which one is better as per average turnaround time?

Consider the following set of processes with the length of CPU burst time given in milliseconds. Calculate average turnaround time and average waiting time for First-come first-served scheduling, Shortest job first scheduling and Priority scheduling algorithm.

Write a Shell Script to find factorial of given number.

```sh
#!/bin/bash

factorial() {
    if [ "$1" -eq 0 ] || [ "$1" -eq 1 ]; then
        echo 1
    else
        local n=$1
        local result=1
        while [ "$n" -gt 1 ]; do
            result=$((result * n))
            n=$((n - 1))
        done
        echo "$result"
    fi
}

# Main script
read -p "Enter a number to find factorial: " num

# Check if input is a non-negative integer
if ! [[ $num =~ ^[0-9]+$ ]]; then
    echo "Error: Please enter a non-negative integer."
    exit 1
fi

# Calculate and display factorial
fact=$(factorial "$num")
echo "Factorial of $num is: $fact"
```

Explain following Commands in UNIX: man, cat, sort, grep, chmod, head, tail, ls, mkdir … (All commands)
1. man: Short for “manual”, the man command displays the manual
pages (documentation) for Unix commands. For example, man
ls will display information about the ls command.
2. cat: The cat command is used to concatenate and display the
contents of files. It can also be used to create new files. For
example, cat file1.txt file2.txt will display the contents of
both file1.txt and file2.txt to the standard output.
3. sort: The sort command is used to sort the lines of text files
alphabetically or numerically. By default, it sorts in ascending
order. For example, sort file.txt will display the contents of
file.txt with its lines sorted alphabetically.
4. grep: The grep command is used to search for specific patterns
or text within files. It searches for a specified pattern and prints
the lines containing that pattern. For example, grep "pattern"
file.txt will display lines from file.txt that contain the
specified pattern.
5. chmod: The chmod command is used to change the permissions
of files and directories in Unix-like operating systems. It allows
users to specify who can read, write, and execute files. For
example, chmod +x script.sh will make script.sh executable.
6. head: The head command is used to display the beginning lines
of a file. By default, it displays the first 10 lines of a file. For
example, head file.txt will display the first 10 lines of file.txt.
7. tail: The tail command is used to display the ending lines of a
file. By default, it displays the last 10 lines of a file. For example,
tail file.txt will display the last 10 lines of file.txt.
8. ls: The ls command is used to list files and directories in a Unix-
like operating system. It displays the contents of the current
directory by default. For example, ls -l will display a detailed
listing of files and directories.
9. mkdir: The mkdir command is used to create new directories
(folders) in a Unix-like operating system. For example, mkdir
new_directory will create a new directory named new_directory.

Write a shell script to find the greatest number out of 3 numbers.

```sh
#!/bin/bash

# Note: -ge (not -gt) handles ties correctly, e.g. inputs 5 5 3.
find_greatest() {
    if [ "$1" -ge "$2" ] && [ "$1" -ge "$3" ]; then
        echo "$1"
    elif [ "$2" -ge "$1" ] && [ "$2" -ge "$3" ]; then
        echo "$2"
    else
        echo "$3"
    fi
}

# Main script
read -p "Enter the first number: " num1
read -p "Enter the second number: " num2
read -p "Enter the third number: " num3

# Check if inputs are valid numbers
if ! [[ $num1 =~ ^[0-9]+$ && $num2 =~ ^[0-9]+$ && $num3 =~ ^[0-9]+$ ]]; then
    echo "Error: Please enter valid numbers."
    exit 1
fi

# Find and display the greatest number
greatest=$(find_greatest "$num1" "$num2" "$num3")
echo "The greatest number is: $greatest"
```

Solve following by Round Robin process scheduling algorithm. Draw Gantt Chart, Average Waiting Time and Average Turnaround Time for time slice=4 and time slice=2.