
1. Define Modularity and explain its need in computer programs.


Modularity is the degree to which a system's components may be separated and
recombined. The meaning of the word, however, can vary somewhat by context: in
biology, for example, modularity is the concept that organisms or metabolic pathways
are composed of modules.
Its need in computer programs
Modular programming is a software design technique that emphasizes separating the
functionality of a program into independent, interchangeable modules, such that
each contains everything necessary to execute only one aspect of the desired
functionality.
A module interface expresses the elements that are provided and required by the
module. The elements defined in the interface are detectable by other modules. The
implementation contains the working code that corresponds to the elements declared
in the interface. Modular programming is closely related to structured
programming and object-oriented programming, all having the same goal of
facilitating construction of large software programs and systems by decomposition
into smaller pieces, and all originating around the 1960s. While historically usage of
these terms has been inconsistent, today "modular programming" refers to high-level
decomposition of the code of an entire program into pieces, structured programming
to the low-level code use of structured control flow, and object-oriented programming
to the data use of objects, a kind of data structure. In object-oriented programming,
the use of interfaces as an architectural pattern to construct modules is known
as interface-based programming.
In software engineering, modularity refers to the extent to which a software/Web
application may be divided into smaller modules. Software modularity indicates that
the application's modules, taken together, are capable of serving a specified business
domain.
Modularity is successful because developers use prewritten code, which saves
resources. Overall, modularity provides greater software development manageability.
Modern business issues grow on a continuous basis - in terms of size, complexity and
demand. Enhanced software capability requirements force developers to enhance
developed systems with new functionalities. Software engineering modularity allows
typical applications to be divided into modules, as well as integration with similar
modules, which helps developers use prewritten code. Modules are divided based on
functionality, and programmers are not involved with the functionalities of other
modules. Thus, new functionalities may be easily programmed in separate modules.
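As a small illustration (the stack module and all names here are hypothetical, not taken from the text above), a C program might separate a module's interface from its implementation like this:

/* stack.h -- the module interface: only these declarations are visible to other modules */
#ifndef STACK_H
#define STACK_H

void stack_push(int value);
int stack_pop(void);

#endif

/* stack.c -- the module implementation: the working code behind the interface */
#include "stack.h"

#define MAX 100

static int items[MAX];   /* 'static' keeps this data private to the module */
static int top = 0;

void stack_push(int value)
{
    if (top < MAX)
        items[top++] = value;
}

int stack_pop(void)
{
    return (top > 0) ? items[--top] : -1;   /* -1 signals an empty stack here */
}

Any other module that includes stack.h can call stack_push() and stack_pop() without knowing how the data is stored.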
The benefits of using modular programming include:

Less code has to be written.

A single procedure can be developed for reuse, eliminating the need to retype the code many times.

Programs can be designed more easily because a small team deals with only a small part of the entire code.

Modular programming allows many programmers to collaborate on the same application.

The code is stored across multiple files.

Code is short, simple and easy to understand.

Errors can easily be identified, as they are localized to a subroutine or function.

The same code can be used in many applications.

The scoping of variables can easily be controlled.

2. Define Queue and explain how we can implement the Queue.


The queue is a data structure that has one important difference from a stack. A
stack is called a LIFO list (Last-In-First-Out) in that the last element pushed onto a
stack will be the first element popped off. A queue, on the other hand, is a FIFO list
(First-In-First-Out) in that the first element inserted in the queue will be the first
element removed. The easiest way to visualize a queue is to think of a line of
customers waiting to purchase theater tickets or see a bank teller. Usually the next
person to be served is the one who has been waiting the longest and late-comers are
added to the end of the line. (The British people "queue up" instead of waiting in
lines.)
In computer science, queues are used in operating systems for a timeshared
computer to keep track of tasks waiting for the processor. If all tasks have the same
priority, the one that receives the processor is the one that has been waiting the
longest. Queues are also used by operating systems to keep track of the list of jobs
waiting to be printed. We will describe the queue as an abstract data type and show
how to use a queue to represent a list of airline passengers waiting to see a ticket
agent. We will also implement an abstract data type for a queue. We will illustrate
how to use a process called simulation to determine the amount of time customers
spend waiting in queues. Lastly, we will introduce a variation on the queue known as
a priority queue.
The implementation file
First, let us study an implementation without any particular tricks. Here is a
possibility:
#include "queue.h"
#include "list.h"
#include < stdlib.h>
struct queue
{
list head;
list tail;
};
queue
queue_create(void)
{
queue q = malloc(sizeof(struct queue));

q -> head = q -> tail = NULL;


return q;
}
int
queue_empty(queue q)
{
return q -> head == NULL;
}
void
queue_enq(queue q, void *element)
{
if(queue_empty(q))
q -> head = q -> tail = cons(element, NULL);
else
{
q -> tail -> next = cons(element, NULL);
q -> tail = q -> tail -> next;
}
}
void *
queue_deq(queue q)
{
assert(!empty(q));
{
void *temp = q -> head -> element;
q -> head = cdr_and_free(q -> head);
return temp;
}
}
In the create operation, we use the fact that in C, the assignment operation is
an expression (and thus has a value) in order to shorten the code by one line. The
value of q->tail = NULL is itself NULL, which can be used as a value to assign to
q->head. A similar trick is used in the enq operation.
The head field in the queue structure points to the first cell of the list, which
corresponds to the head of the queue (from where elements are dequeued), and
the tail field points to the last cell of the list, which corresponds to the tail of the
queue (where elements are enqueued). Notice the special case necessary when the
queue is empty before an enqueue operation. After a dequeue operation resulting in
an empty queue, the tail field will point to a list cell that is no longer allocated. This
might be a problem if a garbage collector is used, as it might keep that cell (and
worse, its contents) alive even though it will never be used again. In that case,
another special case must be introduced in the dequeue operation, like this:
void *
queue_deq(queue q)
{
    assert(!queue_empty(q));
    {
        void *temp = q->head->element;
        q->head = cdr_and_free(q->head);
        if (queue_empty(q))
            q->tail = NULL;   /* do not leave tail pointing at a freed cell */
        return temp;
    }
}
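The queue code above relies on a separate list module ("list.h") whose contents are not shown in the text. A minimal sketch of what that module might provide, assuming each cell carries an element pointer and a next link as used by queue_enq and queue_deq, is:

#include <stdlib.h>

typedef struct cell *list;

struct cell
{
    void *element;   /* the data stored in this cell */
    list next;       /* the next cell, or NULL at the end of the list */
};

/* Allocate a new cell holding element, linked in front of rest. */
list cons(void *element, list rest)
{
    list c = malloc(sizeof(struct cell));
    c->element = element;
    c->next = rest;
    return c;
}

/* Free the given cell and return the remainder of the list. */
list cdr_and_free(list c)
{
    list rest = c->next;
    free(c);
    return rest;
}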

3. List the advantages and disadvantages of linear and linked representation of a tree.
Advantages and disadvantages of linear representation
Advantages:
1. This representation is very easy to understand.
2. This is the best representation for complete and full binary trees.
3. Programming is very easy.
4. It is very easy to move from a child to its parent and vice versa (see the index sketch after this list).
Disadvantages:
1. A lot of memory area is wasted.
2. Insertion and deletion of nodes needs a lot of data movement.
3. Execution time is high.
4. It is not suited for trees other than full and complete trees.
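As a rough sketch of how the linear (array) representation works, the nodes are stored level by level in an array, with index 1 holding the root; for the node at index i, the left child is at index 2i, the right child at index 2i+1, and the parent at index i/2. This is why moving between child and parent is easy, and why a sparse (non-complete) tree wastes many unused array slots.

#define MAXNODES 100

char tree[MAXNODES];   /* tree[1] is the root; empty slots represent missing nodes */

int leftchild(int i)  { return 2 * i; }       /* index of the left child  */
int rightchild(int i) { return 2 * i + 1; }   /* index of the right child */
int parent(int i)     { return i / 2; }       /* index of the parent      */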
Linked representation of a binary tree
In this representation, each node of a binary tree is defined as shown below
LEFT | DATA | RIGHT
The DATA field contains information. The LEFT field holds the address of the left child node and
the RIGHT field holds the address of the right child node. If there are no children, the address
fields hold a null value. The address of the first node is called the pointer of the tree. Only
through this address can we access the tree.
Consider the tree given below.
The linked list representation of the above tree is
Advantages and disadvantages of linked representation
Advantages
1. A particular node can be placed at any location in the memory.
2. Insertions and deletions can be made directly without data movements.
3. It is suitable for any type of tree.
4. It is flexible because the system takes care of allocating and freeing of nodes.
Disadvantages
1. It is difficult to understand.
2. Additional memory is needed for storing pointers.
3. Accessing a particular node is not easy.

4. Define binary tree traversal and explain any one traversal with an example.
A. Binary tree traversal:- Binary tree traversal is defined as the process of visiting all
the nodes in the binary tree once. The visit always starts from the root node. At each
node, three operations are possible:
1. To read (print) or write data in the node. It is denoted by the letter V.
2. To move to the left of that node. It is denoted by the letter L.
3. To move to the right of that node. It is denoted by the letter R.
The three types of traversal are:
a. In order traversal
b. Pre order traversal
c. Post order traversal
In order traversal (LVR)
This traversal is denoted by the letters LVR. The steps to be followed are:
a. Start from the root node and move left until there is no left child. Then visit
that last node (print the data).
b. Move right and then move left until there is no left child. Then visit that last node.
c. If it is not possible to move right, go back one node, visit the node (print the
data) and repeat step b.
This traversal is called in order traversal.
Example: consider the binary tree below.
Steps
Start from the root node A and move left till the last node, that is, up to node B, and
print the data B.
Move one step back and print the data A.
Then move right to node C and move left till the last node, that is, up to node D, and
print the data D.
Move one step back and print the data C.
Then move right to node E. Since there is no path to move further, print the data E.
Therefore the result of the in order traversal is B A D C E.
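A minimal C sketch of this in order traversal, assuming the node structure defined in the next part (a data field holding a single character plus lchild and rchild pointers), uses recursion instead of the explicit step-by-step walk:

#include <stdio.h>

/* In order (LVR) traversal: Left subtree, Visit the node, Right subtree. */
void inorder(struct node *root)
{
    if (root == NULL)
        return;
    inorder(root->lchild);       /* L : traverse the left subtree  */
    printf("%c ", root->data);   /* V : visit (print) the node     */
    inorder(root->rchild);       /* R : traverse the right subtree */
}

Called on the example tree above, this prints B A D C E.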
Binary tree creation:
The following steps are used to create the binary tree.
i) Define a self-referential structure to define the node.
ii) Allocate memory space for an empty node using the defined self-referential
structure.
iii) Create the binary tree node using steps i) and ii).
Step (i) - defining a self-referential structure to define a node.
A node can be defined as a self-referential structure as shown below.
struct node
{
    datatype data;
    struct node *lchild;
    struct node *rchild;
};
Where
node - the name of the structure.
datatype - any valid data type such as int, float, etc.
The above structure has three fields, namely:
data - variable to store data.
*lchild - pointer variable to store the address of the left child.
*rchild - pointer variable to store the address of the right child.
The figure given below shows the logical structure of the node defined by the above
structure.
lchild | data | rchild
Step (ii) Allocating a free node
Allocation is the process of allocating a free memory area of the defined struct node
type for storing the data value and the addresses of the left and right children. The
starting address of the allocated area is stored in a pointer variable. This can be
implemented in the C language using a function like the one given below.
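The original function is not reproduced in the text. A minimal sketch of such an allocation routine, assuming the struct node from step (i) with datatype standing for the chosen data type, might look like this:

#include <stdlib.h>

/* Allocate a free node, store the given data in it, and return its address. */
struct node *getnode(datatype data)
{
    struct node *p = malloc(sizeof(struct node));
    if (p != NULL)
    {
        p->data = data;
        p->lchild = NULL;   /* no left child yet  */
        p->rchild = NULL;   /* no right child yet */
    }
    return p;               /* NULL is returned if no memory is available */
}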
5. List and explain any five types of graphs.
Graphs are very useful and fairly common data structures. They are used to
describe a wide variety of relationships between objects and in practice can be
related to almost everything. As we will see later, trees are a subset of graphs
and lists are special cases of trees and thus of graphs, i.e. graphs represent
a generalized structure that allows modeling of a very large set of real-world situations.
Frequent use of graphs in practice has led to extensive research in "graph theory",
in which there is a large number of known problems, and for most of them
there are well-known solutions.
Types of Graphs:
Undirected Graphs
Directed Graphs
Vertex Labeled Graphs
Cyclic Graphs
Edge Labeled Graphs
Weighted Graphs
Directed Acyclic Graphs
Disconnected Graphs
1. Undirected Graphs.
In an undirected graph, the order of the vertices in the pairs in the Edge set
doesn't matter. Thus, if we view the sample graph above we could have written
the Edge set as {(4,6),(4,5),(3,4),(3,2),(2,5),(1,2),(1,5)}. Undirected graphs
usually are drawn with straight lines between the vertices.
The adjacency relation is symmetric in an undirected graph, so if u ~ v then it is
also the case that v ~ u.
2. Directed Graphs.
In a directed graph the order of the vertices in the pairs in the Edge set
matters. Thus u is adjacent to v only if the pair (u,v) is in the Edge set. For
directed graphs we usually use arrows for the arcs between vertices. An arrow
from u to v is drawn only if (u,v) is in the Edge set. The directed graph below
has the following parts (an adjacency-matrix sketch of this graph appears after
this list of graph types):
o The underlying set for the Vertices set is capital letters.
o The Vertices set = {A,B,C,D,E}
o The Edge set = {(A,B),(B,C),(D,C),(B,D),(D,B),(E,D),(B,E)}
Note that both (B,D) and (D,B) are in the Edge set, so the arc between B and D
is an arrow in both directions.
3. Vertex labeled Graphs.
In a labeled graph, each vertex is labeled with some data in addition to the
data that identifies the vertex. Only the identifying data is present in the pair
in the Edge set. This is similar to the (key, satellite) data distinction for sorting.
Here we have the following parts.
a. The underlying set for the keys of the Vertices set is the integers.
b. The underlying set for the satellite data is Color.
c. The Vertices set = {(2,Blue),(4,Blue),(5,Red),(7,Green),(6,Red),(3,Yellow)}
d. The Edge set = {(2,4),(4,5),(5,7),(7,6),(6,2),(4,3),(3,7)}
4. Cyclic Graphs.
A cyclic graph is a directed graph with at least one cycle. A cycle is a path
along the directed edges from a vertex back to itself. The vertex labeled graph
above has several cycles. One of them is 2 → 4 → 5 → 7 → 6 → 2.
5. Edge labeled Graphs.
An edge labeled graph is a graph where the edges are associated with labels.
One can indicate this by making the Edge set a set of triples. Thus
if (u,v,X) is in the Edge set, then there is an edge from u to v with label X.
Edge labeled graphs are usually drawn with the labels drawn adjacent to the
arcs specifying the edges.
Here we have the following parts.
a. The underlying set for the Vertices set is Color.
b. The underlying set for the edge labels is sets of Color.
c. The Vertices set = {Red,Green,Blue,White}
d. The Edge set = {(Red,White,{White,Green}), (White,Red,{Blue}),
(White,Blue,{Green,Red}), (Red,Blue,{Blue}), (Green,Red,
{Red,Blue,White}), (Blue,Green,{White,Green,Red})}
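As an illustration (a sketch, not taken from the text), the directed graph from part 2, with vertices {A,B,C,D,E}, can be stored in C as an adjacency matrix in which entry [u][v] is 1 exactly when the pair (u,v) is in the Edge set:

#define V 5   /* vertices A..E mapped to row/column indices 0..4 */

/* adjacency[u][v] == 1 means the arc (u,v) is in the Edge set */
int adjacency[V][V] = {
    /*        A  B  C  D  E */
    /* A */ { 0, 1, 0, 0, 0 },
    /* B */ { 0, 0, 1, 1, 1 },
    /* C */ { 0, 0, 0, 0, 0 },
    /* D */ { 0, 1, 1, 0, 0 },
    /* E */ { 0, 0, 0, 1, 0 },
};

An undirected graph would make this matrix symmetric, and a weighted graph would store the edge weight in place of 1.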
6. Explain:
1. Fixed block storage allocation
2. Variable block storage allocation.
Fixed block storage allocation: Fixed block storage allocation is the simplest case of
dynamic storage allocation. This is a straightforward method in which all the blocks
are identical in size. The user can decide the size of the block. The operating system
keeps a pointer called AVAIL, which points to the list of free memory blocks.
A user program communicates with the memory manager by means of two functions:
GETNODE (NODE) and RETURNNODE (ptr). The GETNODE() procedure is used to obtain a
free node from the AVAIL list, whereas RETURNNODE() is used to return a block of
memory whenever it is no longer required.

Whenever GETNODE() is executed, it checks for the availability of a free node in the
AVAIL list and takes a memory block from the AVAIL list.

Procedure RETURNNODE() is used to return the used memory block to the AVAIL list;
the block is added at the end of the list.

So far as the implementation of fixed block allocation is concerned, this is the
simplest strategy. But the main drawback of this strategy is the wastage of space. For
example, suppose each memory block is of size 1K (1024 bytes); now, for a request for
a memory block of size, say, 1.1K, we have to allocate 2 blocks (that is, 2K of memory
space), thus wasting 0.9K of memory space. Making the size of the block too small
reduces the wastage of space; however, it also reduces the overall performance of the
scheme.
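The GETNODE and RETURNNODE procedures themselves are not shown above. A minimal sketch of the two operations over a singly linked AVAIL list of fixed-size blocks (the procedure names follow the text; the code itself is only an assumed illustration) could be:

#include <stddef.h>

struct block
{
    struct block *next;   /* link to the next free block */
    /* ... the rest of the fixed-size block is usable space ... */
};

struct block *AVAIL;      /* head of the list of free blocks */

/* GETNODE: take one free block from the front of the AVAIL list. */
struct block *getnode(void)
{
    struct block *p = AVAIL;
    if (p != NULL)
        AVAIL = p->next;   /* unlink the block that is handed out */
    return p;              /* NULL means no free block is available */
}

/* RETURNNODE: append a no-longer-needed block at the end of the AVAIL list. */
void returnnode(struct block *p)
{
    p->next = NULL;
    if (AVAIL == NULL)
    {
        AVAIL = p;
        return;
    }
    struct block *q = AVAIL;
    while (q->next != NULL)   /* walk to the last free block */
        q = q->next;
    q->next = p;              /* add the returned block at the end */
}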

Variable block storage allocation: To overcome the disadvantages of fixed block
storage, blocks of variable sizes are used, as represented in figure 1.1. Here also,
linked lists play a vital role in the management of memory blocks. The procedures used
for allocation and deallocation of memory blocks from the variable block storage are
described below.

This procedure assumes the blocks of memory are stored in ascending order of their
sizes. The node structure maintains a field to store the size of the block, namely SIZE.
SIZEOF() is a method that returns the size of the node. The allocation procedure
initially checks for the NULL status of the AVAIL list and then searches for a block of
exactly the requested size and returns it; if the pool does not have a block of the
requested size, it returns a bigger block.

Procedure RETURNNODE() returns the used block of memory to the memory pool. Unlike
in the fixed block scheme, while returning a block it searches the existing pool for the
position where the block can be inserted, because we assume that the free pool memory
blocks are arranged in ascending order of size; the position is determined by the size
of the returning block, and the block is inserted there into the memory pool.
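A minimal sketch of such a variable-size GETNODE, assuming the AVAIL list is kept in ascending order of block size and each node carries a SIZE field (the structure and code here are assumptions for illustration, not the text's own procedure):

#include <stddef.h>

struct vblock
{
    int size;              /* SIZE: usable size of this block in bytes */
    struct vblock *next;   /* next free block, in ascending order of size */
};

struct vblock *AVAIL;      /* head of the free pool */

/* Return the first free block whose size is at least the requested size.
   Because the list is sorted, this is the smallest block that fits;
   it may be bigger than requested if no exact match exists. */
struct vblock *getnode(int size)
{
    struct vblock *prev = NULL, *p = AVAIL;

    while (p != NULL && p->size < size)   /* skip blocks that are too small */
    {
        prev = p;
        p = p->next;
    }
    if (p != NULL)                        /* unlink the chosen block from the pool */
    {
        if (prev == NULL)
            AVAIL = p->next;
        else
            prev->next = p->next;
    }
    return p;                             /* NULL if no block is big enough */
}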
7. What is the use of external storage devices?
External storage is any type of storage device that is connected to and controlled
by a computer but is not integrated within it. Generally the devices are peripheral
units such as disk drives or tape transports. An external storage device may be
shared by more than one computer.
This takes the form of a stand-alone device that is separate from the computer.
External drives are connected to the computer with a cable plugged into a suitable
interface such as a USB port. Data then passes back and forth across the interface.

Once an external drive is attached to the system, it appears as an extra drive letter
in the folder tree, for example, E drive or K drive. The user can transfer files in the
usual way by using the drag and drop method.
The main advantage of external drives is that they are portable and so data is easily
moved from one location to another. External drives also allow safe backup of
internally stored data.
The main disadvantage compared to an internal drive is that data transfer is slower, and
external drives also take up space around the computer. Constant plugging in and out can
also physically wear out the port over time.
External storage takes many forms, for example:
portable hard disks
magnetic tape
memory stick / flash drive
solid state memory cards
DVD or CDs
pen drive

1. Optical Drive (CD/ DVD)


CDs and DVDs are ideal for storing a list of songs, movies, media or software for
distribution or for giving to a friend due to the very low cost per disk. They do not
make good storage options for backups due to their shorter lifespan, small storage
space and slower read and write speeds.

Capacity CD : 650MB to 900MB


Capacity DVD : 4.7GB to 17.08GB
Advantages :

Low cost per disk

Disadvantages :

Relatively shorter life span than other storage options

Not as reliable as other storage options like external hard disk and SSD. One
damaged disk in a backup set can make the whole backup unusable.

2. Solid State Drive (SSD)


Solid State Drives look and function similarly to traditional mechanical/magnetic hard
drives, but the similarities stop there. Internally, they are completely different. They
have no moving parts or rotating platters. They rely solely on semiconductors and
electronics for data storage, making them more reliable and robust than traditional
magnetic drives. Having no moving parts also means that they use less power than
traditional hard drives and are much faster too.
With the prices of Solid State Drives coming down and their lower power usage, SSDs
are used extensively in laptops and mobile devices. External SSDs are also a viable
option for data backups.
Capacity : 64GB to 256GB
Connections : USB 2.0/3.0 and SATA
Advantages :

Faster read and write performance

More robust and reliable than traditional magnetic hard drives

Highly portable. Can be easily taken offsite


Disadvantages :

Still relatively expensive when compared to traditional hard drives

Storage space is typically less than that of traditional magnetic hard drives.
