
Question 1 (General)

(a) Explain the important design issues associated with cache management.

Ans: When a system writes a datum to the cache, it must at some point write that datum to the backing store as well. The timing of this write is controlled by what is known as the write policy. There are two basic writing approaches:
• Write-through - the write is done synchronously both to the cache and to the backing store.
• Write-back (or write-behind) - initially, the write is done only to the cache. The write to the backing store is postponed until the cache blocks containing the data are about to be modified or replaced by new content.

(b) We have discussed two common models for inter-process communication – the message passing model and the shared-memory model. What are the strengths and weaknesses of the two approaches?

Ans: Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided. It is also easier to implement than shared memory for inter-computer communication. Shared memory allows maximum speed and convenience of communication, since it can be done at memory speeds when it takes place within a computer. Problems exist, however, in the areas of protection and synchronization between the processes sharing memory.

(c) Why does an operating system need to provide system calls?

Ans: There are certain tasks that can only be performed while running in kernel mode. System calls give user processes a controlled way to request that the kernel carry out those tasks on their behalf.

(d) In what way is the modular kernel approach the same as the layered kernel approach? In what way does it differ?

Ans: Similarities of the modular kernel approach to the layered approach:
• In the layered approach the operating system is divided into a number of layers; here it is divided into system-level and user-level programs.
• The modular kernel provides communication between the client program and the various services that are also running in user space.
Differences:
• It provides the services for message passing and process scheduling.
• It also handles low-level network communication and hardware interrupts.
• The modular kernel approach coordinates the message passing between client applications and application servers.

Question 2 (Process and threads)

(a) What is a process? Describe all of the components that constitute a process.

Ans: A process can be thought of as a program in execution. A process will need certain resources, such as CPU time, memory, files, and I/O devices, to accomplish its task. A process is more than the program code, which is sometimes known as the text section. It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers. A process generally also includes the process stack and a data section, which contains global variables. A process may also include a heap. (A small sketch of these regions follows part (b).)

(b) The Nachos Thread class contains the Yield() and Sleep() methods. Explain each method, and discuss their differences and similarities.

Ans: Sleep() causes the currently executing thread to sleep (temporarily cease execution). Yield() causes the currently executing thread object to temporarily pause and allow other threads to execute; Yield() is usually called when the current thread's quantum has expired. Both functions cause the running thread to give up the CPU. The difference is that a yielded thread remains ready to run, while a sleeping thread must be explicitly woken up before it can run again.
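The following short C program is not part of the original answer; it is a minimal sketch that makes the components listed in 2(a) visible by printing one address from each region (text, data, heap, stack). The symbol names are arbitrary, and the exact addresses and their ordering are platform-dependent.

    /* Sketch for Question 2(a): one address from each major region of a process. */
    #include <stdio.h>
    #include <stdlib.h>

    int global_counter = 42;                    /* data section (global variable) */

    void where_am_i(void) { }                   /* text section (program code)    */

    int main(void)
    {
        int local = 7;                          /* stack (automatic variable)     */
        int *dynamic = malloc(sizeof *dynamic); /* heap (run-time allocation)     */

        printf("text  : %p\n", (void *)where_am_i);
        printf("data  : %p\n", (void *)&global_counter);
        printf("heap  : %p\n", (void *)dynamic);
        printf("stack : %p\n", (void *)&local);

        free(dynamic);
        return 0;
    }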

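The next sketch is a rough POSIX analogue of the distinction in 2(b), not the Nachos implementation itself: sched_yield() stands in for Yield() (the thread gives up the CPU but stays runnable), while blocking on a condition variable stands in for Sleep() (the thread is not runnable until another thread wakes it).

    /* Rough POSIX analogue of Yield() vs Sleep(); not the Nachos code. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  wake = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    static void *sleeper(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!ready)                        /* like Sleep(): blocked, not runnable, */
            pthread_cond_wait(&wake, &lock);  /* until another thread wakes us up     */
        pthread_mutex_unlock(&lock);
        printf("sleeper woken up\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, sleeper, NULL);

        sched_yield();                        /* like Yield(): give up the CPU but    */
                                              /* stay on the ready queue              */
        pthread_mutex_lock(&lock);
        ready = 1;
        pthread_cond_signal(&wake);           /* the wake-up that ends the "sleep"    */
        pthread_mutex_unlock(&lock);

        pthread_join(t, NULL);
        return 0;
    }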
(c) Nachos threads can be used to run procedures within the kernel. Explain how a procedure is attached to a thread and how the thread is initialized so that context switching can occur.

Ans: A procedure is attached to a thread with Thread::Fork(), which takes the procedure and an argument. Fork() allocates a stack for the thread and initializes the thread's saved register state so that, when the scheduler first switches the thread in, execution begins in the attached procedure. When a Nachos thread does a context switch it must save two sets of registers: the user-program registers and the kernel registers of the thread being switched out, before a new thread can be switched in. This can be seen in the code for Scheduler::Run().

(d) Explain why it is faster to create or context switch a thread compared to a process.

Ans: In a process switch a lot of work must be done, such as storing the state of the old process in its PCB and loading the saved state of the new process; this takes time and is pure overhead to the CPU. In the case of threads, extensive sharing makes them inexpensive: we only have to store the register-set information of the thread, and no memory-management related work needs to be done.

Question 3 (Synchronization)

(a) Two implementations of semaphores have been discussed – a busy-wait implementation and a waiting-queue implementation. Explain both implementations and discuss the advantages and disadvantages of each.

Ans: In busy waiting, when a process calls acquire(), it continuously checks the lock to see when it becomes available again. While this is simple to implement, it can be inefficient because the process is using CPU time to constantly check a lock which another process has control of. In the wait-queue implementation, when a process calls acquire(), the OS places it on a wait queue. These processes will not be scheduled by the OS until the process currently using the lock calls release(). This is a more efficient approach, but the lock implementation needs to be very careful about how disabling and enabling interrupts is controlled as processes are put to sleep on the wait queue. (A sketch contrasting the two implementations appears after part (d).)

(b) What is mutual exclusion? How is mutual exclusion ensured using a semaphore?

Ans: Mutual exclusion is a way of making sure that if one process is using shared modifiable data, the other processes will be excluded from doing the same thing. The P (or wait or sleep or down) operation on semaphore S, written as P(S) or wait(S), operates as follows:

P(S): IF S > 0 THEN S := S - 1 ELSE (wait on S)

The V (or signal or wakeup or up) operation on semaphore S, written as V(S) or signal(S), operates as follows:

V(S): IF (one or more processes are waiting on S) THEN (let one of these processes proceed) ELSE S := S + 1

It is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed. Mutual exclusion on the semaphore S is enforced within P(S) and V(S). If several processes attempt a P(S) simultaneously, only one process will be allowed to proceed.

(c) What is a race condition? Give an example of a race condition.

Ans: A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, is called a race condition. For example, two processes concurrently incrementing a shared counter variable results in a race condition. (A runnable version of this example also appears after part (d).)

(d) What is a deadlock? Give an example of how a deadlock can occur.

Ans: In an operating system, a deadlock is a situation which occurs when a process enters a waiting state because a resource it has requested is being held by another waiting process, which in turn is waiting for another resource. If a process is unable to change its state indefinitely because the resources requested by it are being used by another waiting process, the system is said to be in a deadlock. For example, suppose a computer has three CD drives and three processes, and each of the three processes holds one of the drives. If each process now requests another drive, the three processes will be in a deadlock: each process is waiting for the "CD drive released" event, which can only be caused by one of the other waiting processes, so it results in a circular chain.
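To make the comparison in 3(a) concrete, here is a sketch of the two acquire()/release() strategies using C11 atomics and POSIX threads. The function names (spin_acquire, queue_acquire, and so on) are invented for illustration; in a real kernel the waiting-queue version would manipulate scheduler queues and interrupt state directly rather than use pthreads.

    /* Sketch for Question 3(a): two ways to implement acquire() on a lock. */
    #include <stdatomic.h>
    #include <pthread.h>

    /* Busy-wait (spin) version: simple, but burns CPU while waiting. */
    static atomic_flag spin = ATOMIC_FLAG_INIT;

    void spin_acquire(void)
    {
        while (atomic_flag_test_and_set(&spin))
            ;                                /* keep re-checking the lock */
    }

    void spin_release(void)
    {
        atomic_flag_clear(&spin);
    }

    /* Waiting-queue version: the caller is blocked until release() wakes it. */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  freed = PTHREAD_COND_INITIALIZER;
    static int held = 0;

    void queue_acquire(void)
    {
        pthread_mutex_lock(&m);
        while (held)
            pthread_cond_wait(&freed, &m);   /* sleep: not scheduled until release() */
        held = 1;
        pthread_mutex_unlock(&m);
    }

    void queue_release(void)
    {
        pthread_mutex_lock(&m);
        held = 0;
        pthread_cond_signal(&freed);         /* wake one waiter */
        pthread_mutex_unlock(&m);
    }

The condition-variable wait hides exactly the step the answer warns about: the waiter must be placed on the queue and descheduled atomically, which a kernel implementation achieves by carefully disabling and re-enabling interrupts.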

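The counter example from 3(c) can be written as a small pthreads program. The semaphore shows the fix described in 3(b); removing the sem_wait()/sem_post() pair exposes the race, and the final count then usually falls short of 2,000,000. This is an illustrative sketch, not code from the course.

    /* Sketch for Questions 3(b)/3(c): two threads share a counter. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static long counter = 0;
    static sem_t mutex;                 /* binary semaphore, initial value 1 */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            sem_wait(&mutex);           /* P(S): enter critical section      */
            counter++;                  /* read-modify-write on shared data  */
            sem_post(&mutex);           /* V(S): leave critical section      */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        sem_init(&mutex, 0, 1);
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);  /* 2000000 with the semaphore   */
        return 0;
    }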
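The CD-drive scenario from 3(d) can likewise be reproduced in code, with two mutexes standing in for the drives: each thread holds one resource and then requests the other, producing a circular wait. Again, this is an illustrative sketch only.

    /* Sketch for Question 3(d): two threads acquire the same two locks in
       opposite order; once each holds its first lock, both wait forever. */
    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t drive_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t drive_b = PTHREAD_MUTEX_INITIALIZER;

    static void *proc1(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&drive_a);   /* holds A ...          */
        sleep(1);                       /* widen the window     */
        pthread_mutex_lock(&drive_b);   /* ... and waits for B  */
        pthread_mutex_unlock(&drive_b);
        pthread_mutex_unlock(&drive_a);
        return NULL;
    }

    static void *proc2(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&drive_b);   /* holds B ...          */
        sleep(1);
        pthread_mutex_lock(&drive_a);   /* ... and waits for A  */
        pthread_mutex_unlock(&drive_a);
        pthread_mutex_unlock(&drive_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, proc1, NULL);
        pthread_create(&t2, NULL, proc2, NULL);
        pthread_join(t1, NULL);         /* never returns once deadlocked */
        pthread_join(t2, NULL);
        return 0;
    }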
Question 4 (Memory)

(a) The Intel Pentium processors support two page sizes, 4 KB and 4 MB. What are the advantages of having two page sizes?

Ans: The aim of the two-level paging scheme is to reduce the amount of RAM required for per-process page tables: the two-level scheme reduces the memory needed by requiring page tables only for those virtual memory regions actually used by a process. Extended paging is used to translate large contiguous linear address ranges into corresponding physical ones. When pages are 4 MB instead of 4 KB in size, the kernel can, in these cases, do without the intermediate page tables and thus save memory and preserve TLB entries. (A simplified translation sketch appears after part (d).)

(b) On a general-purpose operating system, why do programs use a logical address space and not the physical address space?

Ans: A logical address is the address generated by the CPU, whereas a physical address is the actual address of the process in memory. A logical address space ensures better portability of programs: different machines may not have the same physical address space, but they can have the same logical address space.

(c) What is a page fault? Describe the actions taken by the operating system when a page fault occurs.

Ans: A page fault occurs when an access is made to a page that has not been brought into main memory. The operating system verifies the memory access, aborting the program if it is invalid. If it is valid, a free frame is located and I/O is requested to read the needed page into the free frame. Upon completion of the I/O, the process table and page table are updated and the instruction is restarted.

(d) What is the locality model of program execution? How is it used to avoid thrashing?

Ans: The locality model states that, as a process executes, it moves from locality to locality. A locality is a set of pages that are actively used together, and a program is generally composed of several different localities, which may overlap. To avoid thrashing, processes must be given as many frames as they "need"; we can use the locality of reference principle to help determine how many frames a process needs.
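As an illustration of 4(a) and 4(b), the sketch below translates a 32-bit logical address either through a two-level page table (4 KB pages) or directly through a 4 MB "large page" directory entry. The structures and field names are simplified inventions for illustration, not the real Pentium formats.

    /* Sketch for Question 4(a)/(b): two-level translation with a 4 MB shortcut. */
    #include <stdint.h>

    struct pde {                 /* page-directory entry (simplified)            */
        int      present;
        int      large;          /* 1 => maps a whole 4 MB page directly          */
        uint32_t frame;          /* 4 MB frame number, or page-table base frame   */
    };

    struct pte {                 /* page-table entry (simplified)                 */
        int      present;
        uint32_t frame;          /* 4 KB frame number                             */
    };

    static struct pde page_directory[1024];
    static struct pte *page_tables[1024];    /* one second-level table per slot   */

    /* Returns the physical address for a logical one, or -1 on a page fault. */
    int64_t translate(uint32_t logical)
    {
        uint32_t dir    = logical >> 22;           /* top 10 bits     */
        uint32_t table  = (logical >> 12) & 0x3FF; /* middle 10 bits  */
        uint32_t offset = logical & 0xFFF;         /* low 12 bits     */

        struct pde d = page_directory[dir];
        if (!d.present)
            return -1;                             /* page fault */

        if (d.large)                               /* 4 MB page: no second level,  */
            return ((int64_t)d.frame << 22)        /* so fewer page-table entries  */
                 | (logical & 0x3FFFFF);           /* and fewer TLB entries needed */

        struct pte t = page_tables[dir][table];
        if (!t.present)
            return -1;                             /* page fault */

        return ((int64_t)t.frame << 12) | offset;
    }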

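The actions listed in 4(c) can also be written as C-style pseudocode. Every helper function here is hypothetical and stubbed out so the sketch compiles; a real kernel's versions are far more involved.

    /* Sketch for Question 4(c): OS-side steps on a page fault. */
    #include <stdbool.h>

    typedef unsigned long vaddr_t;
    typedef long frame_t;

    /* Hypothetical helpers (stubs for illustration only). */
    static bool    access_is_valid(vaddr_t va)                { (void)va; return true; }
    static frame_t find_free_frame(void)                      { return 0; }
    static void    read_page_from_disk(vaddr_t va, frame_t f) { (void)va; (void)f; }
    static void    update_page_table(vaddr_t va, frame_t f)   { (void)va; (void)f; }
    static void    abort_process(void)                        { }
    static void    restart_faulting_instruction(void)         { }

    void handle_page_fault(vaddr_t faulting_address)
    {
        if (!access_is_valid(faulting_address)) {     /* 1. verify the access         */
            abort_process();                          /*    invalid: abort the program */
            return;
        }
        frame_t frame = find_free_frame();            /* 2. locate a free frame        */
        read_page_from_disk(faulting_address, frame); /* 3. I/O to bring the page in   */
        update_page_table(faulting_address, frame);   /* 4. update process/page tables */
        restart_faulting_instruction();               /* 5. restart the instruction    */
    }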
Question 5 (File Systems)

(a) What is the purpose and result of executing the open() and close() system routines?

Ans: Open creates memory buffers, creates data control blocks and other data structures needed for the I/O, and enters the name in the directory. If the file is new, it also allocates space. Close outputs the last buffer of information and deletes the buffers, data control blocks, and other data structures.

(b) Name two (2) on-disk data structures that can be found in a file system, and briefly describe what they are used for.

Ans: The simplest method of implementing a directory is to use a linear list of file names with pointers to the data blocks; the linear list stores the directory entries. This method is simple to program but time-consuming to execute. Another data structure used for a file directory is a hash table: a linear list still stores the directory entries, but a hash data structure is also used. The hash table takes a value computed from the file name and returns a pointer to the file name in the linear list.

(c) There are three main methods to allocate storage for files: contiguous allocation, linked allocation, and indexed allocation. List the advantages and disadvantages of each method.

Ans: Advantages:
a. Contiguous allocation: Fastest, if no changes are to be made. Also easiest for random-access files.
b. Linked allocation: No external fragmentation. A file can grow without complications.
c. Indexed allocation: Supports direct access without external fragmentation.
Disadvantages:
a. Contiguous allocation: It is often difficult to find free space for a new file.
b. Linked allocation: The major problem is that it is inefficient to support direct access, so it is effective only for sequential-access files. To find the ith block of a file, we must start at the beginning of that file and follow the pointers until the ith block is reached. (A sketch of this traversal appears after part (d).)
c. Indexed allocation: A large overhead is required for the metadata.

(d) What is the difference between hard links and soft links in Unix? Unix will not allow the creation of a hard link to a directory, while a soft link can link to a directory. Why is this?

Ans: Hard links:
1. Hard links have the same inode number as the original file.
2. Hard links refer to the actual file contents.
3. You cannot create a hard link for a directory.
Soft links:
1. Soft links have different inode numbers.
2. A soft link contains the path of the original file, not its contents.
3. A soft link can link to a directory.
Hard links to directories are not permitted because they would lead to cycles. Once you allow cycles to form, you must perform a mark-and-sweep garbage collection to detect when isolated cycles of directories (no longer reachable from the root) can finally be deleted, and this is extremely expensive on disk. (A small demonstration of the inode difference appears at the end of this sheet.)
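The following sketch illustrates the main disadvantage of linked allocation noted in 5(c): reaching the ith block of a file means following i next-pointers from the first block, one (slow) disk read per hop. The block layout here is invented for illustration.

    /* Sketch for Question 5(c): finding the i-th block under linked allocation. */
    #define BLOCK_SIZE   512
    #define END_OF_FILE  (-1)

    struct disk_block {
        int  next;                        /* index of the next block, or END_OF_FILE */
        char data[BLOCK_SIZE - sizeof(int)];
    };

    static struct disk_block disk[4096];  /* stand-in for the on-disk blocks */

    /* Returns the index of the i-th block of a file whose first block is
       first_block, or END_OF_FILE if the file is shorter than that. */
    int find_ith_block(int first_block, int i)
    {
        int b = first_block;
        while (i-- > 0 && b != END_OF_FILE)
            b = disk[b].next;             /* one "disk read" per hop */
        return b;
    }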
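Finally, the inode behaviour described in 5(d) can be observed with the standard link(), symlink(), and lstat() calls. Run this sketch in an empty scratch directory; the file names are arbitrary.

    /* Sketch for Question 5(d): hard links share an inode, symbolic links get
       their own inode and store only a path. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <fcntl.h>

    static void show(const char *path)
    {
        struct stat st;
        if (lstat(path, &st) == 0)               /* lstat: do not follow symlinks */
            printf("%-14s inode %llu\n", path, (unsigned long long)st.st_ino);
    }

    int main(void)
    {
        int fd = open("original.txt", O_CREAT | O_WRONLY, 0644);
        if (fd >= 0) close(fd);

        link("original.txt", "hardlink.txt");    /* same inode, same contents     */
        symlink("original.txt", "softlink.txt"); /* new inode, stores only a path */

        show("original.txt");
        show("hardlink.txt");                    /* prints the same inode number  */
        show("softlink.txt");                    /* prints a different inode      */

        /* link("somedir", "dirlink") would fail: Unix forbids hard links to
           directories because they could create cycles. */
        return 0;
    }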