Name: Rajat Chowdhry Roll Number: 520810922 Learning Center: 2017 Subject Code: BC0042 Subject: Operating Systems

Assignment No.: 1 Course: Bachelor Of Computer Application (II Semester) Date of Submission at the Learning Center: 8th Dec, 2009

Ques. 1 What is a kernel? What are the main components of the kernel?

Ans.

We can view the traditional UNIX operating system as being layered: everything below the system-call interface and above the physical hardware is the kernel. The kernel provides the file system, CPU scheduling, memory management, and other operating-system functions through system calls. It is further separated into a series of interfaces and device drivers, which have been added and expanded over the years as UNIX has evolved. Taken in sum, that is an enormous amount of functionality combined into one level; this monolithic structure was difficult to implement and maintain.
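As a hedged illustration of the system-call interface described above, the short Python sketch below uses the os module, whose low-level functions are thin wrappers over POSIX-style system calls (the scratch filename is arbitrary):

```python
import os

# A user program reaching kernel services through the system-call interface.
# (Python's os module wraps POSIX-style system calls.)

pid = os.getpid()  # process management: getpid()

# file system: open(), write(), close() on a scratch file
fd = os.open("scratch.bin", os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
written = os.write(fd, b"written via system calls\n")
os.close(fd)
os.remove("scratch.bin")  # file management: unlink()

print("process", pid, "wrote", written, "bytes through the kernel")
```

Each of these calls crosses from the low-privilege user program into the high-privilege kernel and back, which is exactly the boundary the layered view places at the system-call interface.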

Ques. 2 Describe in brief the functions of an operating system.

Ans.

Functions of an Operating System

Modern operating systems generally have the following major goals. Operating systems generally accomplish these goals by running processes in a low-privilege state and providing service calls that invoke the operating-system kernel in a high-privilege state.

To hide details of hardware: An abstraction is software that hides lower-level details and provides a set of higher-level functions. An operating system transforms the physical world of devices, instructions, memory, and time into a virtual world that is the result of abstractions built by the operating system. There are several reasons for abstraction:

· First, the code needed to control peripheral devices is not standardized. Operating systems provide subroutines called device drivers that perform operations on behalf of programs, for example input/output operations.
· Second, the operating system introduces new functions as it abstracts the hardware. For instance, the operating system introduces the file abstraction so that programs do not have to deal with disks.
· Third, the operating system transforms the computer hardware into multiple virtual computers, each belonging to a different program. Each running program is called a process, and each process views the hardware through the lens of abstraction.
· Fourth, the operating system can enforce security through abstraction.
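The file abstraction mentioned above can be sketched in a few lines: the program names a file and reads bytes back, while the kernel and its device drivers handle the underlying disk blocks. (A minimal sketch; the temporary path and file contents are arbitrary.)

```python
import os
import tempfile

# The file abstraction: the program works with names and bytes,
# never with disk sectors; device drivers handle those underneath.
path = os.path.join(tempfile.mkdtemp(), "note.txt")

with open(path, "w") as f:   # "create a file", not "allocate disk blocks"
    f.write("hello, kernel")

with open(path) as f:        # "read the file", not "seek the disk head"
    data = f.read()

print(data)
```

The same open/read/write interface works across very different storage devices, which is the hardware-hiding abstraction at work.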

Ques. 3 Discuss processes vs. threads.

Ans.

Processes vs. Threads

As mentioned earlier, in many respects threads operate in the same way as processes. Some of the similarities and differences are:

Similarities
· Like processes, threads share the CPU, and only one thread is running at a time.
· Like processes, threads within a process execute sequentially.
· Like processes, a thread can create children.
· And like processes, if one thread is blocked, another thread can run.

Differences
· Unlike processes, threads are not independent of one another.
· Unlike processes, all threads can access every address in the task.
· Unlike processes, threads are designed to assist one another. (Processes might or might not assist one another, because processes may originate from different users.)
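The key difference, that all threads can access every address in the task, can be demonstrated with a hedged Python sketch: four threads update one shared variable, cooperating through a lock (the counter and iteration count are arbitrary illustration values).

```python
import threading

counter = 0              # one variable in the task's shared address space
lock = threading.Lock()

def child():
    """A child thread of the process; it sees the same memory."""
    global counter
    for _ in range(1000):
        with lock:       # threads assist one another via a shared lock
            counter += 1

threads = [threading.Thread(target=child) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)           # all four threads updated the same address
```

Four separate processes could not update one plain variable this way; they would need an explicit inter-process communication mechanism, because each process has its own address space.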


Ques. 4 Use the First Come First Serve algorithm to schedule the processes given below:

Process:    P1  P2  P3  P4  P5
Burst Time:  7   5   8   2   3

Ans.
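One way to work the schedule out, assuming all five processes arrive at time 0 and are served in the order P1 through P5, is the following sketch, which computes each process's waiting and turnaround time under FCFS:

```python
# First Come First Serve: processes run to completion in arrival order.
processes = ["P1", "P2", "P3", "P4", "P5"]
burst = [7, 5, 8, 2, 3]         # burst times from the question

waiting, turnaround, clock = [], [], 0
for b in burst:
    waiting.append(clock)       # a process waits while earlier arrivals run
    clock += b
    turnaround.append(clock)    # with arrival at 0, completion = turnaround

for name, w, t in zip(processes, waiting, turnaround):
    print(f"{name}: waiting = {w}, turnaround = {t}")

print("average waiting time    =", sum(waiting) / len(waiting))       # 12.2
print("average turnaround time =", sum(turnaround) / len(turnaround)) # 17.2
```

Under these assumptions the Gantt order is P1 | P2 | P3 | P4 | P5 with completion times 7, 12, 20, 22, 25, giving an average waiting time of 12.2 and an average turnaround time of 17.2.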


Ques. 5 What do you mean by deadlock?

Ans.

Sometimes a process has to reserve more than one resource. For example, a process which copies a file from one tape to another generally requires two tape drives, and a process which deals with databases may need to lock multiple records. A deadlock is a situation in which two computer programs sharing the same resources are effectively preventing each other from accessing them, resulting in both programs ceasing to function.

The earliest computer operating systems ran only one program at a time, and all of the resources of the system were available to this one program. Later, operating systems ran multiple programs at once, interleaving them; programs were required to specify in advance what resources they needed so that they could avoid conflicts with other programs running at the same time. Eventually some operating systems offered dynamic allocation of resources: programs could request further allocations after they had begun running. This led to the problem of deadlock. Here is the simplest example:

· Program 1 requests resource A and receives it.
· Program 2 requests resource B and receives it.
· Program 1 requests resource B and is queued up, pending the release of B.
· Program 2 requests resource A and is queued up, pending the release of A.

Now neither program can proceed until the other releases a resource, and the operating system cannot know what action to take. At this point the only alternative is to abort (stop) one of the programs. Learning to deal with deadlocks had a major impact on the development of operating systems and the structure of databases: data was structured and the order of requests was constrained in order to avoid creating deadlocks.

In general, resources allocated to a process are not preemptable; this means that once a resource has been allocated to a process, there is no simple mechanism by which the system can take it back unless the process voluntarily gives it up or the system administrator kills the process. This can lead to deadlock. A set of processes or threads is deadlocked when each process or thread is waiting for a resource to be freed which is controlled by another.
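The four-step example above can be reproduced with two threads and two locks. This is a hedged sketch in which acquire timeouts stand in for the operating system aborting a program, so the demonstration terminates instead of hanging forever:

```python
import threading

lock_a = threading.Lock()       # resource A
lock_b = threading.Lock()       # resource B
b_held = threading.Event()
results = {}

lock_a.acquire()                # Program 1 requests resource A and receives it

def program2():
    lock_b.acquire()            # Program 2 requests resource B and receives it
    b_held.set()
    # Program 2 requests resource A; Program 1 holds it and never lets go,
    # so without the timeout this wait would last forever.
    results["p2_got_a"] = lock_a.acquire(timeout=0.5)
    lock_b.release()

t = threading.Thread(target=program2)
t.start()
b_held.wait()
# Program 1 requests resource B while Program 2 still holds it: circular wait.
results["p1_got_b"] = lock_b.acquire(timeout=0.1)
t.join()
lock_a.release()

print(results)                  # both requests fail; neither program could proceed
```

Each thread holds one resource while waiting for the other, so neither request can ever succeed; the timeouts merely break the wait, playing the role of the "abort one program" remedy described above.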

Ques. 6 What is the advantage of Direct Memory Access? Briefly explain the timing mechanisms that can be used in data transfer through DMA.

Ans.

Direct Memory Access

In most mini- and mainframe computer systems, a great deal of input and output occurs between the disk system and the processor. It would be very inefficient to perform these operations directly through the processor; it is much more efficient if such devices, which can transfer data at a very high rate, place the data directly into the memory, or take the data directly from the memory, without direct intervention from the processor. I/O performed in this way is usually called direct memory access, or DMA. The controller for a device employing DMA must be capable of generating address signals for the memory, as well as all of the memory control signals. The processor informs the DMA controller that data is available in (or is to be placed into) a block of memory locations starting at a certain address, and of the length of the data block.

There are two possibilities for the timing of the data transfer from the DMA controller to memory:

· The controller can cause the processor to halt if it attempts to access data in the same bank of memory into which the controller is writing. This is the fastest option for the I/O device, but may cause the processor to run more slowly, because the processor may have to wait until a full block of data is transferred.
· The controller can access memory in memory cycles which are not used by the particular bank of memory into which the DMA controller is writing data. This approach, called "cycle stealing," is perhaps the most commonly used. (In a processor with a cache that has a high hit rate, this approach may not slow the I/O transfer significantly.)
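The two timing options can be compared with a toy cycle-count model (hypothetical numbers; real memory systems are far more involved): in halt mode the whole block transfers first at full memory bandwidth, while with cycle stealing the controller takes one memory cycle in every few and the CPU keeps running in the rest.

```python
def halt_mode(block, cpu_work):
    """Controller halts the CPU: the whole block transfers first."""
    io_done = block                  # one word per memory cycle
    cpu_done = block + cpu_work      # CPU work resumes only afterwards
    return io_done, cpu_done

def cycle_stealing(block, cpu_work, steal_every=4):
    """Controller steals 1 memory cycle in every `steal_every`."""
    cycles = stolen = cpu_used = 0
    io_done = cpu_done = None
    while io_done is None or cpu_done is None:
        cycles += 1
        if cycles % steal_every == 0 and stolen < block:
            stolen += 1              # a cycle taken from the memory bank
            if stolen == block:
                io_done = cycles
        elif cpu_used < cpu_work:
            cpu_used += 1            # CPU proceeds in parallel
            if cpu_used == cpu_work:
                cpu_done = cycles
    return io_done, cpu_done

# Transfer an 8-word block while the CPU has 20 cycles of its own work.
print("halt mode      (io, cpu):", halt_mode(8, 20))       # (8, 28)
print("cycle stealing (io, cpu):", cycle_stealing(8, 20))  # (32, 26)
```

In this model, halt mode finishes the transfer fastest but delays the CPU, while cycle stealing slows the transfer yet lets the CPU finish its own work sooner, matching the trade-off described above.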