
8.11 Given six memory partitions of 300 KB, 600 KB, 350 KB, 200 KB, 750 KB, and 125 KB (in order), how would the first-fit, best-fit, and worst-fit algorithms place processes of size 115 KB, 500 KB, 358 KB, 200 KB, and 375 KB (in order)? Rank the algorithms in terms of how efficiently they use memory.

Ans:
First-fit:
a. 115 KB is put in the 300 KB partition, leaving (185 KB, 600 KB, 350 KB, 200 KB, 750 KB, 125 KB)
b. 500 KB is put in the 600 KB partition, leaving (185 KB, 100 KB, 350 KB, 200 KB, 750 KB, 125 KB)
c. 358 KB is put in the 750 KB partition, leaving (185 KB, 100 KB, 350 KB, 200 KB, 392 KB, 125 KB)
d. 200 KB is put in the 350 KB partition, leaving (185 KB, 100 KB, 150 KB, 200 KB, 392 KB, 125 KB)
e. 375 KB is put in the 392 KB partition, leaving (185 KB, 100 KB, 150 KB, 200 KB, 17 KB, 125 KB)

Best-fit:
a. 115 KB is put in the 125 KB partition, leaving (300 KB, 600 KB, 350 KB, 200 KB, 750 KB, 10 KB)
b. 500 KB is put in the 600 KB partition, leaving (300 KB, 100 KB, 350 KB, 200 KB, 750 KB, 10 KB)
c. 358 KB is put in the 750 KB partition, leaving (300 KB, 100 KB, 350 KB, 200 KB, 392 KB, 10 KB)
d. 200 KB is put in the 200 KB partition, leaving (300 KB, 100 KB, 350 KB, 0 KB, 392 KB, 10 KB)
e. 375 KB is put in the 392 KB partition, leaving (300 KB, 100 KB, 350 KB, 0 KB, 17 KB, 10 KB)

Worst-fit:
a. 115 KB is put in the 750 KB partition, leaving (300 KB, 600 KB, 350 KB, 200 KB, 635 KB, 125 KB)
b. 500 KB is put in the 635 KB partition, leaving (300 KB, 600 KB, 350 KB, 200 KB, 135 KB, 125 KB)
c. 358 KB is put in the 600 KB partition, leaving (300 KB, 242 KB, 350 KB, 200 KB, 135 KB, 125 KB)
d. 200 KB is put in the 350 KB partition, leaving (300 KB, 242 KB, 150 KB, 200 KB, 135 KB, 125 KB)
e. 375 KB must wait.

In this example, only worst-fit fails to satisfy a request. An argument could be made that best-fit is most efficient, as it leaves the largest holes after allocation. However, best-fit must examine the entire list of holes on every allocation (O(n)), whereas first-fit can stop at the first hole that is large enough, so first-fit is typically faster in practice.
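The three placements above can be reproduced with a short simulation. This is a sketch, not any particular textbook implementation; the function and variable names are my own.

```python
# Sketch of first-fit, best-fit, and worst-fit placement for the
# partition sizes in Exercise 8.11.

def allocate(partitions, requests, strategy):
    """Place each request into a partition hole; return (placements, holes)."""
    holes = list(partitions)              # remaining free space per partition
    placements = []
    for req in requests:
        candidates = [i for i, h in enumerate(holes) if h >= req]
        if not candidates:
            placements.append(None)       # request must wait
            continue
        if strategy == "first":
            i = candidates[0]             # first hole large enough
        elif strategy == "best":
            i = min(candidates, key=lambda c: holes[c])   # tightest hole
        else:                             # "worst": largest hole
            i = max(candidates, key=lambda c: holes[c])
        holes[i] -= req
        placements.append(i)
    return placements, holes

partitions = [300, 600, 350, 200, 750, 125]
requests = [115, 500, 358, 200, 375]

for s in ("first", "best", "worst"):
    placed, left = allocate(partitions, requests, s)
    print(s, placed, left)
```

Running it shows worst-fit leaving the 375 KB request unplaced (`None`), matching the answer above.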

8.14 On a system with paging, a process cannot access memory that it does not own. Why? How could the operating system allow access to other memory? Why should it or should it not?

Answer:

An address on a paging system consists of a logical page number and an offset. The physical page is found by searching a table, based on the logical page number, to produce a physical page number. Because the operating system controls the contents of this table, it can limit a process to accessing only those physical pages allocated to the process. There is no way for a process to refer to a page it does not own, because the page will not be in the page table. To allow such access, an operating system simply needs to allow entries for non-process memory to be added to the process's page table. This is useful when two or more processes need to exchange data: they just read and write to the same physical addresses (which may be at varying logical addresses). This makes for very efficient interprocess communication.

8.15 Explain why mobile operating systems such as iOS and Android do not support swapping.

Answer:
There are three reasons. First, these mobile devices typically use flash memory with limited capacity, so swapping is avoided because of this space constraint. Second, flash memory supports only a limited number of write operations before it becomes less reliable. Third, there is typically poor throughput between main memory and flash memory.

8.23 Consider a logical address space of 256 pages with a 4-KB page size, mapped onto a physical memory of 64 frames.

a. How many bits are required in the logical address?

b. How many bits are required in the physical address?

Answer:
a. 256 pages × 4 KB = 2^8 × 2^12 bytes = 2^20 bytes, so the logical address requires 20 bits (8 for the page number, 12 for the offset).
b. 64 frames × 4 KB = 2^6 × 2^12 bytes = 2^18 bytes, so the physical address requires 18 bits (6 for the frame number, 12 for the offset).
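The address-width arithmetic can be checked directly; a minimal sketch, assuming the counts are exact powers of two so the bit widths come out whole.

```python
# Address widths for Exercise 8.23: bits = log2(pages or frames) + log2(page size).
import math

page_size = 4096              # 4 KB pages
pages, frames = 256, 64

offset_bits = int(math.log2(page_size))                # 12-bit offset
logical_bits = int(math.log2(pages)) + offset_bits     # 8 + 12
physical_bits = int(math.log2(frames)) + offset_bits   # 6 + 12
print(logical_bits, physical_bits)
```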
8.28 Consider the following segment table:

Segment   Base   Length
0         219    600
1         2300   14
2         90     100
3         1327   580
4         1952   96

What are the physical addresses for the following logical addresses?

a. 0,430

b. 1,10

c. 2,500

d. 3,400

e. 4,112

Answer:

a. 219 + 430 = 649

b. 2300 + 10 = 2310

c. illegal reference (offset 500 ≥ segment length 100); trap to operating system

d. 1327 + 400 = 1727

e. illegal reference (offset 112 ≥ segment length 96); trap to operating system
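The translation rule behind these answers is simple: a reference (segment, offset) is legal only if the offset is less than the segment's length; otherwise the hardware traps. A sketch, using the segment table from the exercise:

```python
# Segment-table translation for Exercise 8.28.
# segment number -> (base, length)
segment_table = {0: (219, 600), 1: (2300, 14), 2: (90, 100),
                 3: (1327, 580), 4: (1952, 96)}

def translate(segment, offset):
    base, length = segment_table[segment]
    if offset >= length:
        return None          # illegal reference: trap to the OS
    return base + offset     # physical address

for seg, off in [(0, 430), (1, 10), (2, 500), (3, 400), (4, 112)]:
    phys = translate(seg, off)
    print((seg, off), "->", phys if phys is not None else "trap")
```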


9.6 An operating system supports a paged virtual memory. The central processor has a cycle time of 1 microsecond. It costs an additional 1 microsecond to access a page other than the current one. Pages have 1,000 words, and the paging device is a drum that rotates at 3,000 revolutions per minute and transfers 1 million words per second. The following statistical measurements were obtained from the system:

• One percent of all instructions executed accessed a page other than the current page.

• Of the instructions that accessed another page, 80 percent accessed a page already in memory.

• When a new page was required, the replaced page was modified 50 percent of the time.

Calculate the effective instruction time on this system, assuming that the system is running one process only and that the processor is idle during drum transfers.

Answer:

At 3,000 rpm, one drum revolution takes 20 ms, so the average rotational latency is 10 ms = 10,000 microseconds. Transferring a 1,000-word page at 1 million words per second takes 1,000 microseconds, so moving one page costs 11,000 microseconds.

effective instruction time = 0.99 × (1 µs)
+ 0.008 × (2 µs)
+ 0.001 × (2 × (10,000 µs + 1,000 µs))   [replaced page modified: write it out, then read the new page]
+ 0.001 × (10,000 µs + 1,000 µs)         [replaced page clean: read the new page only]
= (0.99 + 0.016 + 22.0 + 11.0) µs
≈ 34.0 µs
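The same arithmetic, spelled out step by step (all times in microseconds). As in the calculation above, the 1 µs instruction time is dropped from the faulting cases, where it is negligible next to the drum transfer.

```python
# Effective instruction time for Exercise 9.6 (times in microseconds).
cycle = 1                             # basic instruction time
extra = 1                             # extra cost to touch a non-current page
rotation = 60 / 3000 * 1e6            # 20,000 µs per drum revolution
latency = rotation / 2                # 10,000 µs average rotational latency
transfer = 1000 / 1_000_000 * 1e6     # 1,000 µs to move a 1,000-word page
page_io = latency + transfer          # 11,000 µs per page moved

eat = (0.99 * cycle                   # same page: 1% of instructions fault
       + 0.008 * (cycle + extra)      # other page, already in memory (1% × 80%)
       + 0.001 * 2 * page_io          # fault, replaced page modified (write + read)
       + 0.001 * page_io)             # fault, replaced page clean (read only)
print(round(eat, 3))                  # ≈ 34.006 µs
```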

9.16 Consider a system that uses pure demand paging.

a. When a process first starts execution, how would you characterize the page-fault rate?

b. Once the working set for a process is loaded into memory, how would you characterize the page-fault rate?

c. Assume that a process changes its locality and the size of the new working set is too large to be stored in available free memory. Identify some options system designers could choose from to handle this situation.
Answer:
a. If the system is using pure demand paging, the page-fault rate will be very high, because none of the pages the process needs will be in memory. If the system is using pre-paging, and the pre-paging algorithm guesses correctly, the page-fault rate will be low.

b. If the working set is in memory, the page-fault rate will be 0%.

c. The process needs more memory than is currently available. The system designer could:

1. Swap the entire process out until enough memory is available.

2. Allow the process to steal pages from other processes. However, since no free memory is available, this could mean that the total current memory demand is greater than the amount of physical memory. If so, stealing pages will lead to thrashing and wreck the performance of every process in the system.

3. Allow the process to continue to execute but not allocate any more frames to it. The process will have a very high page-fault rate and will appear to execute very slowly, but the other processes on the system will continue to execute normally (except when they fault).


Copy-on-write is a kernel feature that takes advantage of a system's paging hardware to avoid copying the contents of a frame when a page needs to be duplicated. When a page copy is requested, the OS creates a new page-table entry for the copied page. However, this entry doesn't point to a new frame. Instead, it points to the frame that holds the original page. Thus, two page-table entries point to the same frame. These page-table entries can be in the same process or in different processes. The process or processes access the "copied" page just like normal memory.

This scheme breaks down when a process attempts to write to the memory. At that point, the kernel must physically copy the page into a new frame and update the corresponding page table.

Since COW is implemented using demand paging, paging hardware is required. In addition, the system must be able to intercept writes to a COW page (usually through a trap) before the write hits memory.

Thus, we need some way of indicating that a page is a COW page. This can be implemented using an additional bit in the page-table entry, or by marking the page as unwritable so that any write attempt traps to the kernel, which can then tell a copy-on-write fault apart from a genuine protection violation.
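The bookkeeping described above can be sketched in a few lines. This is a toy model with invented names, not how any real kernel structures its page tables; real kernels do this in the page-fault handler using MMU protection bits and reference counts on shared frames.

```python
# Toy copy-on-write sketch: two page-table entries alias one frame until
# the first write, which forces a physical copy into a new frame.
frames = {}          # frame number -> page contents (a mutable list here)
next_frame = 0

def new_frame(contents):
    global next_frame
    frames[next_frame] = list(contents)
    next_frame += 1
    return next_frame - 1

def cow_duplicate(entry):
    """'Copy' a page by aliasing its frame and marking both entries COW."""
    entry["cow"] = True
    return {"frame": entry["frame"], "cow": True}

def write(entry, index, value):
    if entry["cow"]:                    # write to a COW page: trap
        entry["frame"] = new_frame(frames[entry["frame"]])  # real copy now
        entry["cow"] = False
    frames[entry["frame"]][index] = value

parent = {"frame": new_frame([1, 2, 3]), "cow": False}
child = cow_duplicate(parent)
assert parent["frame"] == child["frame"]   # one frame, two entries

write(child, 0, 99)                        # first write forces the copy
assert parent["frame"] != child["frame"]
print(frames[parent["frame"]], frames[child["frame"]])
```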

9.22 The page table shown in Figure 9.32 is for a system with 16-bit
virtual and physical addresses and with 4,096-byte pages. The reference
bit is set to 1 when the page has been referenced. Periodically, a thread
zeroes out all values of the reference bit. A dash for a page frame
indicates the page is not in memory. The page-replacement algorithm is
localized LRU, and all numbers are provided in decimal.

Page   Page Frame   Reference Bit
0      9            0
1      1            0
2      14           0
3      10           0
4      –            0
5      13           0
6      8            0
7      15           0
8      –            0
9      0            0
10     5            0
11     4            0
12     –            0
13     –            0
14     3            0
15     2            0

Figure 9.32 Page table for Exercise 9.22.

a. Convert the following virtual addresses (in hexadecimal) to the equivalent physical addresses. You may provide answers in either hexadecimal or decimal. Also set the reference bit for the appropriate entry in the page table.

• 0xE12C

• 0x3A9D

• 0xA9D9

• 0x7001

• 0xACA1
b. Using the above addresses as a guide, provide an example of a logical address (in hexadecimal) that results in a page fault.

c. From what set of page frames will the LRU page-replacement algorithm choose in resolving a page fault?
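With 4,096-byte pages, the low 12 bits of a 16-bit address are the offset and the high 4 bits are the page number. A sketch of the translation, with the page-to-frame map transcribed from Figure 9.32 (None marks a page that is not in memory):

```python
# 16-bit address translation for Exercise 9.22: 4-bit page number, 12-bit offset.
frame_of = {0: 9, 1: 1, 2: 14, 3: 10, 4: None, 5: 13, 6: 8, 7: 15,
            8: None, 9: 0, 10: 5, 11: 4, 12: None, 13: None, 14: 3, 15: 2}

def translate(vaddr):
    page, offset = vaddr >> 12, vaddr & 0xFFF
    frame = frame_of[page]
    if frame is None:
        return None                    # page fault
    return (frame << 12) | offset      # physical address

for v in (0xE12C, 0x3A9D, 0xA9D9, 0x7001, 0xACA1):
    p = translate(v)
    print(hex(v), "->", hex(p) if p is not None else "page fault")
```

Any address whose high nibble is 4, 8, 12 (0xC), or 13 (0xD) faults, since those pages have no frame.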

9.32 What is the cause of thrashing? How does the system detect
thrashing? Once it detects thrashing, what can the system do to eliminate
this problem?

Answer:
Thrashing is caused by processes not having enough frames to hold their working sets, so pages are continually faulted in and replaced. The system can detect thrashing by observing low CPU utilization together with a high level of paging activity. Once detected, it can reduce the degree of multiprogramming, for example by swapping one or more processes out entirely until the remaining processes' working sets fit in memory.
9.33 Is it possible for a process to have two working sets, one
representing data and another representing code? Explain.

Answer:
Yes. Instruction and data references often exhibit separate localities, so a process can have one working set for its code and another for its data. Many architectures reflect this distinction in hardware by providing separate instruction and data caches and TLBs.
10.9 None of the disk-scheduling disciplines, except FCFS, is truly fair (starvation may occur).

a. Explain why this assertion is true.

b. Describe a way to modify algorithms such as SCAN to ensure fairness.

c. Explain why fairness is an important goal in a time-sharing system.

d. Give three or more examples of circumstances in which it is important that the operating system be unfair in serving I/O requests.

Answer:
(a) New requests for the track over which the head currently resides can
theoretically arrive as quickly as these requests are being serviced.
(b) All requests older than some predetermined age could be "forced" to
the top of the queue, and an associated bit for each could be set to indicate
that no new request could be moved ahead of these requests. For SSTF,
the rest of the queue would have to be reorganized with respect to the last
of these "old" requests.
(c) To prevent unusually long response times.
(d) Paging and swapping should take priority over user requests. It may be
desirable for other kernel-initiated I/O, such as the writing of file system
metadata, to take precedence over user I/O. If the kernel supports real-time
process priorities, the I/O requests of those processes should be favored.
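The aging idea in part (b) can be sketched as follows: before each sweep, any request older than a threshold is serviced first in FCFS order, and only the remainder is served in SCAN order. The function, threshold, and request format here are illustrative, not a standard algorithm.

```python
# Sketch of SCAN with request aging to prevent starvation (Exercise 10.9b).
def schedule(requests, head, now, max_age):
    """requests: list of (track, arrival_time). Returns track service order."""
    old = [r for r in requests if now - r[1] > max_age]
    young = [r for r in requests if now - r[1] <= max_age]
    # Starved requests jump the queue and are served FCFS.
    order = [t for t, _ in sorted(old, key=lambda r: r[1])]
    # SCAN over the remainder: sweep up from the head, then back down.
    up = sorted(t for t, _ in young if t >= head)
    down = sorted((t for t, _ in young if t < head), reverse=True)
    return order + up + down

# Two requests (tracks 55 and 183) have waited longer than max_age, so they
# are served first even though the head is at track 100.
print(schedule([(55, 0), (183, 1), (37, 90), (122, 95)],
               head=100, now=100, max_age=50))
```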

10.10 Explain why SSDs often use an FCFS disk-scheduling algorithm.

Answer:
SSDs have no moving parts, so there is no seek time or rotational latency to minimize by reordering requests. A simple FCFS policy therefore performs well while keeping overhead low and avoiding starvation.
10.14 Describe some advantages and disadvantages of using SSDs as a caching tier and as a disk-drive replacement compared with using only magnetic disks.

Answer:
10.15 Compare the performance of C-SCAN and SCAN scheduling, assuming a uniform distribution of requests. Consider the average response time (the time between the arrival of a request and the completion of that request's service), the variation in response time, and the effective bandwidth. How does performance depend on the relative sizes of seek time and rotational latency?

Answer:
11.11 What are the advantages and disadvantages of providing
mandatory locks instead of advisory locks whose use is left to users’
discretion?

11.12 Provide examples of applications that typically access files according to the following methods:

• Sequential

• Random

Answer:
(a) Sequential: printing the contents of a file from beginning to end. (b) Random: printing the contents of record i; the record can be found using hashing or index techniques.

12.2 What problems could occur if a system allowed a file system to be mounted simultaneously at more than one location?

Answer:
There would be multiple paths to the same file, which could confuse users or encourage mistakes (deleting a file via one path deletes it under all the other paths).

12.3 Why must the bit map for file allocation be kept on mass storage,
rather than in main memory?

Answer:
In case of a system crash (memory failure), the free-space list would not be lost, as it would be if the bitmap had been stored only in main memory.

12.6 How do caches help improve performance? Why do systems not use more or larger caches if they are so useful?
Answer:
Caches allow components of differing speeds to communicate more efficiently by storing data from the slower device, temporarily, in a faster device (the cache). Caches are, almost by definition, more expensive than the device they are caching for, so increasing the number or size of caches would increase system cost.
