
CS 560: Operating Systems May 2, 2007

Spring 2007, Dr. Micah Beck Final Exam


There are seven questions on this exam, each worth 20 points. All parts of a question are
of equal weight, except as noted. The question on which you score the minimum number
of points will be discarded; the question on which you score the next-to-minimum
number of points will be treated as extra credit. The remaining five questions will be
summed to generate your score out of 100.

Write on blank sheets of paper, not on the exam. Write your name clearly at the top of
each page. Fasten the pages with a staple. Read the entire exam before you begin
writing. If you make any assumptions not stated in the question, write them down as part
of your answer. Ask questions if anything is unclear.

1. (20 points) Paged virtual memory

Consider a memory management system that translates 16-bit virtual addresses to 24-bit
physical addresses. These addresses are byte addresses; the memory consists of 16-bit
words. The system uses a two-level page table, with a 4-bit first-level page number, a
4-bit second-level page number, and an 8-bit offset within the page.

a. What is the maximum number of physical memory frames that can be
addressed by this memory management system?

2^4 first-level page table entries x 2^4 second-level page table entries = 2^8 frames
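The address-splitting arithmetic can be sketched in Python; the field widths come from the question, while the helper name and example address are made up for illustration:

```python
# Field widths from the question: 4-bit first-level page number,
# 4-bit second-level page number, 8-bit offset within the page.
FIRST_BITS, SECOND_BITS, OFFSET_BITS = 4, 4, 8

def split_virtual_address(va):
    """Split a 16-bit virtual address into (first, second, offset) fields."""
    offset = va & ((1 << OFFSET_BITS) - 1)
    second = (va >> OFFSET_BITS) & ((1 << SECOND_BITS) - 1)
    first = (va >> (OFFSET_BITS + SECOND_BITS)) & ((1 << FIRST_BITS) - 1)
    return first, second, offset

# Each (first, second) pair selects one page-table entry, i.e. one frame,
# so the tables can address 2^4 * 2^4 = 2^8 frames at once.
frames = (1 << FIRST_BITS) * (1 << SECOND_BITS)
print(frames)                         # 256
print(split_virtual_address(0xABCD))  # (10, 11, 205)
```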

b. Explain the circumstances under which the page tables in this system will
take up less memory than a single-level page table with an 8-bit page number
and an 8-bit offset. It may be useful to give an example, but this is not
required for full credit.

If most second-level page tables are empty, they need not be represented in this
system, and thus take up no memory.

The space taken is 16(1 + n) page table entries, where n is the number of
non-empty second-level page tables, while a single-level table with an 8-bit page
number has 2^8 = 256 entries. This system therefore takes less memory when
16(1 + n) < 256, i.e. when n < 15.
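To make the comparison concrete, here is a small Python sketch counting page-table entries under the sizes given in the question (entry counts only, since entry widths are the same in both schemes; the names are made up):

```python
TWO_LEVEL_TABLE = 1 << 4   # 16 entries per first- or second-level table
SINGLE_LEVEL = 1 << 8      # 256 entries for an 8-bit page number

def two_level_entries(n):
    """Entries used: one first-level table plus n non-empty second-level tables."""
    return TWO_LEVEL_TABLE * (1 + n)

# The two-level scheme uses less space while 16*(1 + n) < 256, i.e. n < 15.
print(max(n for n in range(16) if two_level_entries(n) < SINGLE_LEVEL))  # 14
```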

2. (20 points) Processor scheduling to minimize page faults

Consider a processor scheduling system for preemptive multitasking that is based on a
multi-level feedback queue.

• There are three queues, #1 being the highest priority and #3 being the lowest
priority.

o The scheduler will run processes in queue #1 first,
o If queue #1 is empty it will run processes in queue #2, and
o Only if both queues #1 and #2 are empty will it run processes in queue #3.

• Processes are moved between queues as follows:

o New processes are placed in queue #1. Any process that completes its
time slice without causing a page fault is placed at the end of queue #1.
o Any process that causes a page fault will be moved to the next
higher-numbered queue (processes in queue #3 remain there).
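The movement rules above can be sketched as a toy Python model (process names and helper names are hypothetical; this is not a full scheduler):

```python
from collections import deque

queues = {1: deque(), 2: deque(), 3: deque()}   # 1 = highest priority
level = {}                                      # process -> current queue

def admit(p):
    """New processes are placed in queue #1."""
    queues[1].append(p)
    level[p] = 1

def finish_slice(p, page_faulted):
    """Apply the movement rules after p's time slice."""
    queues[level[p]].remove(p)
    if page_faulted:
        level[p] = min(level[p] + 1, 3)   # demote; queue #3 stays put
    else:
        level[p] = 1                      # clean slice: back of queue #1
    queues[level[p]].append(p)

def next_to_run():
    """Run the first process in the highest-priority non-empty queue."""
    for q in (1, 2, 3):
        if queues[q]:
            return queues[q][0]
    return None

admit("A"); admit("B")
finish_slice("A", page_faulted=True)   # A drops to queue #2
print(next_to_run())                   # B (queue #1 outranks queue #2)
```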

a. Assuming that the virtual memory system has a fixed allocation of F frames to
each process, how might a programmer take advantage of the above
information to obtain favorable scheduling for their program?

By arranging the program so that its memory references fall into at most F
frames, a programmer can ensure that it never page faults, remains in queue #1,
and thus runs without interruption.

b. In what way can this scheduling scheme cause some processes to exhibit very
poor performance? Explain using a simple example.

A program running without interruption can cause starvation of all other
processes. An example would be a program in a tight infinite loop, which touches
few pages and so never page faults.

3. (20 points) Page replacement algorithms

a. Apply true LRU replacement to the following reference string of page number
references, assuming that 3 page frames are allocated to the process. Show
which page each frame holds (if any) after each reference.

Reference string:          3  2  4  5  4  1  2  4  3  4

Pages in memory, frame 1:  3  3  3  5  5  5  2  2  2  2
                 frame 2:  -  2  2  2  2  1  1  1  3  3
                 frame 3:  -  -  4  4  4  4  4  4  4  4
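The trace above can be checked with a short Python simulation of true LRU (a sketch that tracks the resident set in recency order rather than physical frame slots):

```python
def lru_trace(refs, n_frames):
    """Return the resident page set after each reference under true LRU."""
    recency = []          # least recently used first, most recent last
    history = []
    for page in refs:
        if page in recency:
            recency.remove(page)        # hit: refresh recency
        elif len(recency) == n_frames:
            recency.pop(0)              # miss with full frames: evict LRU
        recency.append(page)
        history.append(set(recency))
    return history

trace = lru_trace([3, 2, 4, 5, 4, 1, 2, 4, 3, 4], 3)
print(trace[3])    # {2, 4, 5}: page 3 was evicted on the reference to 5
print(trace[-1])   # {2, 3, 4}: final resident set, matching the table
```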

b. Apply the Second Chance Clock Algorithm to the same reference string.
When a frame is newly allocated, its use bit is initially set to 0. Show which
page each frame is allocated to (if any) and the value of each page’s use bit
after each reference.

The resident pages are the same as shown above for true LRU in this example.
The use bits and the location of the clock hand after each reference are as follows:

Reference string:      3  2  4  5  4  1  2  4  3  4

Frame 1 (page):        3  3  3  5  5  5  2  2  2  2
Frame 1 (use bit):     0  0  0  0  0  0  0  0  0  0
Frame 2 (page):        -  2  2  2  2  1  1  1  3  3
Frame 2 (use bit):     -  0  0  0  0  0  0  0  0  0
Frame 3 (page):        -  -  4  4  4  4  4  4  4  4
Frame 3 (use bit):     -  -  0  0  1  1  0  1  1  1
Clock hand at frame:   2  3  1  2  2  3  2  2  3  3
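A short Python sketch of the second-chance clock rules used above (use bit 0 on allocation, set to 1 on a hit, cleared when the hand passes):

```python
def clock_trace(refs, n_frames):
    """Simulate the second-chance clock; return final pages and use bits."""
    pages = [None] * n_frames
    use = [0] * n_frames
    hand = 0
    for page in refs:
        if page in pages:
            use[pages.index(page)] = 1         # hit: set the use bit
            continue
        while True:                            # page fault: find a victim
            if pages[hand] is None or use[hand] == 0:
                pages[hand] = page
                use[hand] = 0                  # newly allocated: bit is 0
                hand = (hand + 1) % n_frames
                break
            use[hand] = 0                      # second chance: clear, advance
            hand = (hand + 1) % n_frames
    return pages, use

pages, use = clock_trace([3, 2, 4, 5, 4, 1, 2, 4, 3, 4], 3)
print(pages)  # [2, 3, 4]: same resident pages as true LRU here
print(use)    # [0, 0, 1]: only page 4's use bit is set at the end
```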

4. (20 points) File Systems

a. Explain why the indexed method of disk space allocation in files performs
better than the linked method in implementing non-sequential access.

The indexed method concentrates all the links between blocks into a small number
of index blocks which can be cached in memory, whereas the linked method
embeds the links in the data blocks. For non-sequential access, every access may
require that a sequence of links be followed starting at the first block of the file.
Thus, the indexed method reduces the number of block accesses required to follow
this sequence of links.

b. What is one benefit of the linked method over the indexed method in
implementing large files?

The linked method separates the links that define one file from those that define
other files. Thus, with the linked method, a very large file does not enlarge any
link structure shared with other files beyond what will fit into memory, as can
happen with the indexed method.

c. Explain how the Unix inode combines the strengths of both methods in its use
of both direct blocks (similar to indexed) and indirect blocks (similar to
linked).

The inode is a block that maintains an index of links to the initial data blocks of
a file. For larger files, it then stores pointers to indirect blocks, which hold links
to more of the file's data blocks. Very large files are implemented using double and
triple indirect blocks to store pointers to the data blocks implementing the rest of
the file. The inode of a small file can be cached, providing good non-sequential
access, and if the file is large, its indirect blocks can be brought into memory as
needed. Because each file's indirect blocks are cached separately, reading the links
that implement a very large file need not affect the performance of access to
other, smaller files.
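As a concrete illustration of this layout, the maximum file size it supports can be computed; the block size, pointer size, and direct-pointer count below are hypothetical (they are not given in the question and vary across Unix filesystems):

```python
BLOCK = 4096                  # bytes per block (assumed)
PTR = 4                       # bytes per block pointer (assumed)
DIRECT = 12                   # direct pointers in the inode (assumed)
PER_INDIRECT = BLOCK // PTR   # 1024 pointers fit in one indirect block

# direct + single + double + triple indirect coverage, in data blocks
max_blocks = DIRECT + PER_INDIRECT + PER_INDIRECT**2 + PER_INDIRECT**3
max_bytes = max_blocks * BLOCK
print(max_bytes)  # about 4 TiB with these assumed parameters
```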

5. (20 points) Security

a. How can a buffer overflow condition be used to hijack a process and cause it
to execute code that is supplied by an attacker?

A buffer overflow condition allows input that is larger than the supplied buffer to
write into the data memory adjacent to the buffer, filling it with whatever contents
the attacker supplies. If the buffer is in stack memory, then the attacker can write
malicious code onto the stack, and overwrite the function return address (stored
on the stack) with the location of that code. Upon returning from the function, the
program will branch to the malicious code.

b. What is a Trojan Horse attack? Give one example of how such an attack can
be made in a Unix/Linux system.

A Trojan Horse attack runs a piece of code that presents the user with a trusted
interface (such as a login screen) that legitimately requires privileged information
(such as a user password). Upon receiving this information, the attack may
perform the expected operation on behalf of the user (in order to avoid detection)
but it also communicates the privileged information back to the attacker.

c. Shared secret cryptography such as DES/AES and public key cryptography
have different issues with the secure initial distribution of keys. Explain the
problem in each case.

Shared secret cryptography requires both sender and receiver to have the same
key, which must be communicated to them. However, without a preexisting
secure channel, the key may be intercepted, breaking the security of the scheme.

Public key cryptography allows a user's public key to be distributed freely
without the need for secrecy, as long as their private key remains a secret.
However, it is impossible to know that a particular public key belongs to a
particular user unless it can be authenticated. One way to authenticate the key is
to send it over a preexisting secure channel.
CS 560: Operating Systems May 2, 2007
Spring 2007, Dr. Micah Beck Final Exam
6. (20 points) End-to-End arguments
A secure disk driver encrypts every block before writing it out to the disk, and
decrypts it when it is read from the disk, using a key that is compiled into the
kernel. A user level secure I/O library encrypts data before calling the
kernel write() primitive and decrypts it after calling read(), using a key that the
user provides each time the program is run.

a. Give an “End-to-End” argument that the user level library is more secure.

The user level library allows the data to remain encrypted until after it has
been delivered to the reader’s process, avoiding schemes whereby the
operating system is attacked and unencrypted data is read from kernel
memory. Also, the decryption key is known only to the client, and not to the
individual who configures the kernel.

b. Also give an argument in favor of the secure driver without contradicting your
argument in part (a) above. The second argument can be on the basis of
performance, security, or any other important system property.

An example argument: a driver that performs encryption and decryption in
the kernel will decrypt data transferred from disk by file read-ahead, before it
has been explicitly read by the user. Because read-ahead increases resource
utilization, it can be performed most efficiently in the kernel, which can
schedule resources on behalf of all processes.

7. (20 points) Storage Systems

a. (5 pts) Explain why RAID 0 (striping) increases read performance but
decreases fault tolerance.

With RAID 0, a single large read accesses data on multiple disks in parallel.
This increases performance because the aggregate bandwidth of these disks is
greater than the bandwidth of a single disk. However, if any one disk fails, the
data striped across it is lost, so fault tolerance is decreased.

b. (5 pts) Explain how RAID 1 (mirroring) increases both read performance and
fault tolerance (surviving any single fault) at the cost of doubling disk usage.

With RAID 1, all data is written to two disks (doubling disk usage), so a single
read can access data on both of them, increasing performance as in RAID 0.
However, if one disk fails, the single remaining disk can still serve all requests,
which increases fault tolerance (although performance suffers after a failure).

c. (10 pts) RAID 4 stripes data blocks across N disks (for some value of N), and
for every N data blocks calculates a parity block which it stores on a dedicated
parity disk. How is the parity block calculated, and if there is a fault that
results in the loss of a data block, how can the data be recovered?

Let the N data blocks be denoted D_0, D_1, ..., D_(N-1).

The parity block is calculated using a bit-wise XOR operation:

P = D_0 ⊕ D_1 ⊕ ... ⊕ D_(N-1)

If one block (data or parity) is lost, let the remaining N blocks be denoted
B_0, B_1, ..., B_(N-1). The lost block L can then be recovered by calculating

L = B_0 ⊕ B_1 ⊕ ... ⊕ B_(N-1)
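The parity and recovery equations can be demonstrated with a few bytes in Python (the block contents are made up for illustration):

```python
def xor_blocks(blocks):
    """Bit-wise XOR of equal-sized blocks, as used for the parity block P."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"\x01\x02\x03\x04", b"\xff\x00\xff\x00", b"\x10\x20\x30\x40"]
parity = xor_blocks(data)                # P = D_0 xor D_1 xor D_2

# Lose D_1; the XOR of all remaining blocks (including P) reconstructs it.
recovered = xor_blocks([data[0], data[2], parity])
print(recovered == data[1])  # True
```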
