
Name- Naveen Kumar

BACHELOR OF COMPUTER APPLICATION

ASSIGNMENT

Sem-3

BCA 301 (ENTREPRENEURSHIP DEVELOPMENT)


Q-1. Explain the concept of entrepreneurship and its relevance in the current scenario.

Ans-

Concept of Entrepreneurship

Before proceeding further, let us first understand the concept of entrepreneurship and
its importance to the economy. An entrepreneur is the sole owner and manager of his
business; the word itself comes from French and translates to “the one who undertakes”.

From an economics point of view, an entrepreneur is the one who bears all the risk of a
business. In return, he gets to enjoy all of the profits from the business as well.
While understanding the concept of entrepreneurship, we will also learn about the
importance of entrepreneurs in the economy: they bring new goods, services,
technologies, etc. to the market.

Four Key Elements of Entrepreneurship

After having studied the concept of entrepreneurship, now let us look at some key
elements that are necessary for entrepreneurship. We will be looking at four of the most
important elements.

Innovation

An entrepreneur is the key source of innovation and variation in an economy. It is
actually one of the most important tools of an entrepreneur's success. They use
innovation to exploit opportunities available in the market and overcome any threats.

This innovation can be a new product, service, technology, production technique,
marketing strategy, etc. Or innovation can involve doing something better and more
economically. Either way, in the concept of entrepreneurship, it is a key factor.


Risk-Taking

Entrepreneurship and risk-taking go hand in hand. One of the most important features
of entrepreneurship is that the whole business is run and managed by one person. So
there is no one to share the risks with.

Not taking any risks can stagnate a business, and excessive, impulsive risk-taking can
cause losses. So a good entrepreneur knows how to take and manage the risks of his
business. The willingness of an entrepreneur to take risks gives him a competitive
edge in the economy and helps him exploit the opportunities the economy provides.

Vision

Vision or foresight is one of the main driving forces behind any entrepreneur. It is the
energy that drives the business forward by using the foresight of the entrepreneur. It is
what gives the business an outline for the future – the tasks to complete, the risks to
take, the culture to establish, etc.

All the great entrepreneurs of the world are known to have great vision. This helps
them set out short-term and long-term goals for their business and also plan ways to
achieve these objectives.

Organization

In entrepreneurship, it is essentially a one-man show. The entrepreneur bears all the
risks and enjoys all the rewards. While he may have the help of employees and middle-
level management, he must be the one in ultimate control. This requires a lot of
organization and impeccable organizational skills.

An entrepreneur must be able to manage and organize his finances, his employees, his
resources, etc. So his organizational abilities are one of the most important elements of
entrepreneurship.

Q-2. Describe the characteristics of an entrepreneur in short.

Ans-

Here are the 10 Best Characteristics Of An Entrepreneur that one must nurture:

1) Creativity:
Creativity gives birth to something new, for without creativity, there is no
innovation possible. Entrepreneurs usually have the knack to pin down a lot of
ideas and act on them. Not every idea might be a hit, but the experience
obtained is gold.

Creativity helps in coming up with new solutions for the problems at hand and
allows one to think of solutions that are out of the box. It also gives an
entrepreneur the ability to devise new products for similar markets to the ones
he’s currently playing in.

2) Professionalism:

Professionalism is a quality which all good entrepreneurs must possess. An
entrepreneur's mannerisms and behavior with their employees and clientele go
a long way in developing the culture of the organization.

Along with professionalism comes reliability and discipline. Self-discipline enables
an entrepreneur to achieve their targets, be organized and set an example for
everyone.

Reliability results in trust and for most ventures, trust in the entrepreneur is what
keeps the people in the organization motivated and willing to put in their best.
Professionalism is one of the most important characteristics of an entrepreneur.

3) Risk-taking:

A risk-taking ability is essential for an entrepreneur. Without the will to explore
the unknown, one cannot discover something unique. And this uniqueness might
make all the difference. Risk-taking involves a lot of things. Using unorthodox
methods is also a risk. Investing in ideas nobody else believes in but you is a risk
too.

Entrepreneurs have a differentiated approach towards risks. Good entrepreneurs
are always ready to invest their time and money. But they always have a backup
for every risk they take.

For exploring the unknown, one must be bestowed with a trump card; a good
entrepreneur always has one. Evaluation of the risk to be undertaken is also
essential. Without knowing the consequences, a good entrepreneur wouldn't risk
it all.

4) Passion:

Your work should be your passion, so that when you work, you enjoy what you're
doing and stay highly motivated. Passion acts as a driving force with which you
are motivated to strive for better.

It also allows you the ability to put in those extra hours at the office, which can
make a difference. At the beginning of every entrepreneurial venture there are
hurdles, but your passion ensures that you are able to overcome these roadblocks
and forge ahead towards your goal.

5) Planning:

Perhaps this is the most important of all the steps required to run a show. Without
planning, everything would be a loose string; as they say, “If you fail to plan, you
plan to fail.”
Planning is strategizing the whole game ahead of time. It basically sums up all the
resources at hand and enables you to come up with a structure and a thought
process for how to reach your goal.

The next step involves how to make optimum use of these resources to weave
the cloth of success. Facing a situation or a crisis with a plan is always better, as it
provides guidelines with minimum to no damage incurred to a business. Planning
is one of the most important characteristics of an entrepreneur.

6) Knowledge:

Knowledge is the key to success. An entrepreneur should possess complete
knowledge of his niche or industry, for only with knowledge can a difficulty be
solved or a crisis tackled.

It enables him to keep track of the developments and the constantly changing
requirements of the market that he is in. Be it a new trend in the market, an
advancement in technology or even a new advertiser's entry, an entrepreneur
should keep himself abreast of it. Knowledge is the guiding force when it comes
to leaving the competition behind. New bits and pieces of information may
prove just as useful as a newly devised strategy.

He should know what his strengths & weaknesses are so that they can be worked
on and can result in a healthier organization.
A good entrepreneur will always try to increase his knowledge, which is why he is
always a learner. The better an entrepreneur knows his playground, the easier he
can play in it.

7) Social Skills:

A skillset is an arsenal with which an entrepreneur makes his business work.
Social skills are also needed to be a good entrepreneur. Overall, these make up
the qualities required for an entrepreneur to function.

Social Skills involve the following:

Relationship Building

Hiring and Talent Sourcing

Team Strategy Formulation

And many more.

8) Open-mindedness towards learning, people, and even failure:

An entrepreneur must be accepting. The true realization of which scenario or
event can be a useful opportunity is necessary. To recognize such openings, an
open-minded attitude is required.

An entrepreneur should be determined. He should face his losses with a positive
attitude and his wins humbly. Any good businessman will know not to frown on a
defeat. Try till you succeed is the right mentality. Failure is a step or a way which
didn't work according to the plan. A good entrepreneur takes the experience of
this setback and works even harder with the next goal in line.
This experience is inculcated through the process of accepted learning. Good
entrepreneurs know they can learn from every situation and person around them.
Information obtained can be used for the process of planning.

Learning with an open mind lets you look at your faults humbly. New information
always makes an entrepreneur question his current resolve. It also provides a new
perspective towards a particular aspect. Open-mindedness also enables you to
know and learn from your competition.

9) Empathy:

Perhaps the least discussed value in the world today is empathy, or having high
emotional intelligence. Empathy is the understanding of what goes on in
someone's mind. This is a skill that is worth a mention. A good entrepreneur should
know the strengths and weaknesses of every employee who works under him. You
must understand that it is the people who make the business tick! You've got to
deploy empathy towards your people.

Unhappy employees are not determined, and as an entrepreneur, it is up to you to
create a working environment where people are happy to come. To look after
their well-being, an entrepreneur should try to understand the situation of
employees. What can be a motivational factor? How can I make my employees
want to give their best? All this is understood through empathy.

Keeping a workplace light and happy is essential, for without empathy, an
entrepreneur can reach neither the hearts of employees nor the success he desires.
Empathy is one of the most important characteristics of an entrepreneur.
10) And lastly, the customer is everything:

A good entrepreneur will always know this; a business is all about the customer.
How you grab a customer’s attention is the first step. This can be done through
various mediums such as marketing and advertising.

It is also important that you know the needs of your customers. The product or
service which is being created by your organization needs to cater to the needs of
your consumers. Personalising a business for consumers will also boost the sales.

The ability to sell yourself to a potential customer is also required. Being ready
with the knowledge to please a customer is a way to build a successful business.

It isn't necessary that every entrepreneurial venture is a huge success. In addition
to a brilliant idea, viability is an equally important aspect of a business, which is
where having a business education can play an important role. All these
characteristics of an entrepreneur can be instilled in an individual.
Summer courses and full-time courses that can take young entrepreneurs through
the nuances of finance, marketing and organization become important and can
help in this journey. The Columbia Business School Summer Programme in
association with JBCN International School is one such opportunity budding
entrepreneurs can learn from.

Q-3. Give details about the factors conducive to India's economic growth.

Ans- Factors Determining Economic Development in India


Regarding the determinants of economic growth, Prof. Ragnar Nurkse observed
that “Economic development has much to do with human endowments, social
attitude, political condition and historical accidents.” Again, Prof. P.T. Bauer also
mentioned that “The main determinants of economic development are aptitude,
abilities, qualities, capacities and facilities.”

Thus the economic development of a country depends on both economic and
non-economic factors. The following are some of the economic and non-economic
factors determining the pace of economic development in a country like India.

A. Economic Factors:

The economic environment works as an important determinant of the economic
development of a country. It can determine the pace of economic development as
well as the rate of growth of the economy. This economic environment is
influenced by economic factors like population and manpower resources, natural
resources and their utilization, capital formation and accumulation, the
capital-output ratio, occupational structure, external resources, extent of the
market, investment pattern, technological advancement, development planning,
infrastructural facilities, suitable industrial relations, etc.

1. Population and Manpower Resources:

Population is considered an important determinant of economic growth. In this
respect, population works both as a stimulant and as a hurdle to economic
growth. Firstly, population provides labour and entrepreneurship as an important
factor service.

The natural resources of the country can be properly exploited with manpower
resources. With proper human capital formation, increasing mobility and division
of labour, manpower resources can provide useful support to economic
development.

On the other hand, a higher rate of growth of population increases the demand for
goods and services for consumption, leading to increasing consumption
requirements, a lesser balance for investment and export, lesser capital formation,
an adverse balance of trade, increasing demand for social and economic
infrastructural facilities and a higher unemployment problem.

Accordingly, a higher rate of population growth can put serious hurdles in the path
of economic development. Moreover, growth of population at a higher rate
usually eats up all the benefits of economic development, leading to a slow growth
of per capita income, as is seen in the case of India.

But it has also been argued by some modern economists that with the growing
momentum of economic development, the standard of living of the general masses
increases, which would ultimately create a better environment for the control of
population growth.

Q-4. What are the activities performed in Agripreneurship?

Ans- Agripreneurship Development as a Tool for the Upliftment of Agriculture

A shift from agriculture to agribusiness is an essential pathway to revitalize
Indian agriculture and to make it a more attractive and profitable venture.
Agripreneurship has the potential to contribute to a range of social and
economic developments such as employment generation, income generation,
poverty reduction and improvements in nutrition, health and overall food security
in the national economy. Agripreneurship has the potential to generate growth,
diversify income, and provide widespread employment and entrepreneurial
opportunities in rural areas. This paper mainly focuses on the basic concepts of
agripreneurship, entrepreneurship skills, and the needs of agripreneurship
development in India, along with the major reasons for promoting agripreneurship
development in the country.

Index Terms- Agripreneurship, Entrepreneurs, Entrepreneurship Skills, Potential
Areas, Employment Generation, Poverty Reduction and Agribusiness.
I. INTRODUCTION

The Indian economy is basically an agrarian economy. On 2.4 percent of the
world's land, India manages 17.5 percent of the world's population. At the time of
independence, more than half of the national income was contributed by
agriculture, and more than 70 percent of the total population was dependent
on agriculture (Pandey, 2013). Agriculture and allied sectors are considered to be
the mainstay of the Indian economy because they are important sources of raw
materials for industries and they create demand for many industrial products,
particularly fertilizers, pesticides, agricultural implements and a variety of
consumer goods (Bairwa et al., 2014a). Due to the changing socio-economic,
political, environmental and cultural dimensions across the world, farmers' and
nations' options for survival and for sustainably ensuring success in their changing
economic environments have become increasingly critical. It is also
worth noting that the emergence of free market economies globally has
resulted in the development of a new spirit of enterprise, “Agripreneurship”, and
the increased individual need to take responsibility for running one's own business
(Alex, 2011). Entrepreneurship is connected with finding ways and means to
create and develop a profitable farm business. The terms entrepreneurship and
agripreneurship are frequently used in the context of education and small business
formation in agriculture. Dollinger (2003) defines entrepreneurship in agriculture
as the creation of innovative economic organization for the purpose of growth or
gain under conditions of risk and uncertainty in agriculture. Gray (2002), on the
other hand, defines an entrepreneur as an individual who manages a business with
the intention of expanding the business and with the leadership and managerial
skills necessary for achieving those goals. In the face of growing unemployment
and poverty in rural areas and the slow growth of agriculture, there is a need for
entrepreneurship in agriculture for more productivity and profitability. The
Agripreneurship program is necessary to develop entrepreneurs and a management
workforce to cater to the agricultural industry across the world (Bairwa et al.,
2014b). Agripreneurship is greatly influenced mainly by the economic situation,
education and culture (Singh, 2013).
II. BASIC TERMINOLOGY RELATED TO AGRIPRENEURSHIP DEVELOPMENT

1- Agripreneurs – In general, agripreneurs should be proactive, curious, determined,
persistent, visionary, hardworking and honest, with integrity and strong management
and organizational skills. Agripreneurs are also known as entrepreneurs.
Entrepreneurs may be defined as innovators who drive change in the economy by
serving new markets or creating new ways of doing things. Thus, an agripreneur
is someone who undertakes a variety of activities in the agriculture sector in
order to be an entrepreneur.

2- Agripreneurship – Agripreneurship is the profitable marriage of agriculture and
entrepreneurship; it turns a farm into an agribusiness. The term Agripreneurship is
a synonym for entrepreneurship in agriculture and refers to agribusiness
establishment in agriculture and allied sectors.

3- Agriclinics – These are envisaged to provide expert advice and services to farmers
on technology, cropping practices, protection from pests and diseases, market
trends, prices of various crops in the markets, and also clinical services for animal
health, which would enhance the productivity of crops/animals and increase
farmers' income (Global Agrisystem, 2010).

4- Agribusiness Centres – These are envisaged to provide farm equipment on hire,
sale of inputs and other services. These centres will provide a package of input
facilities, consultancy and other services with the aim of strengthening the transfer
of technology and extension services, and also provide self-employment
opportunities to technically trained persons (Chandra Shekara, 2003).

III. NEED OF AGRIPRENEURSHIP DEVELOPMENT

Since the inception of the New Economic Reforms and the adoption of
liberalization, privatization and globalization (LPG) and the World Trade
Organization (WTO) in 1992–95, it has been expected that rural areas will grow at
par with urban areas. Performance of agriculture

Q-5. What is the role of institutions in idea generation?

Ans- Idea Generation – Techniques, Tools, Examples, Sources and Activities

Ideas form the base from which we start building up. They could be abstract,
concrete, or visual. An idea generation technique is a creative process of coming
up with solutions and ideas. It also involves developing these ideas and
communicating them.

Idea Generation in Entrepreneurship

Entrepreneurship is being able to create and run a business. In entrepreneurship,
idea generation is one of the main factors that lead to its success. The idea
thought of here should be able to solve a problem.

And along with being unique, the idea should also be easy to execute. For
example, let’s suppose you feel a lot of people have a problem understanding
legal jargon and legal proceedings.

So, in this case, your entrepreneurial idea could be setting up a platform that
caters to all the legal needs of people and helps them understand it easily.

Idea Generation in Product Development

Idea generation is the first step for any product development. This requires you to
look for feasible product options that can be executed. It is a very important step
for organizations to solve their problems.

It requires you to do market research and SWOT analysis. You should aim to come
up with an idea that is unique from your competitors and can be used profitably.

For example, self-sanitizing door handles can be a product that you look at. It is
unique and would be in high demand because of the current shift towards a
healthy lifestyle.
Idea Generation Process

The process may be different for different organizations and different people. But
there are three main steps in the process. It starts with the identification of the
question or the problem we need to solve.

After which we need to come up with ideas and probable solutions. Finally, in the
third stage, we select the most suitable idea and execute it. For example, let’s
suppose you are opening up a restaurant.

So firstly, you need to identify what question you need to answer. Let’s assume
you want to decide upon a name for the restaurant. Now you will use different
techniques (brainstorming, mind mapping, etc) to come up with ideas for names.

In the last step, you will choose the most appropriate name from the different
names you came up with in the second step.

BCA 302 (COMPUTER ORGANIZATION)

Q-1. What do you mean by absolute mode?

Ans- Mean Absolute Deviation Formula

The mean absolute deviation is the ratio of the sum of all absolute values of
deviation from a central measure to the total number of observations:

M.A.D. = (Σ absolute values of deviation from central measure) / (total number
of observations)
Calculate Mean Absolute Deviation

Steps to find the mean deviation from the mean:

(i) Find the mean of the given observations.

(ii) Calculate the absolute difference between each observation and the calculated mean.

(iii) Evaluate the mean of the differences obtained in the second step.

This gives you the mean deviation from the mean.
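As an illustration, the three steps above can be translated directly into code. The
following is a minimal C++ sketch (function and variable names are our own, not from
the original text):

#include <cmath>
#include <iostream>
#include <vector>

// Mean absolute deviation about the mean, following steps (i)-(iii) above.
double meanAbsoluteDeviation(const std::vector<double>& data) {
    double sum = 0.0;
    for (double x : data) sum += x;
    double mean = sum / data.size();          // step (i): the mean

    double devSum = 0.0;
    for (double x : data)
        devSum += std::fabs(x - mean);        // step (ii): absolute differences
    return devSum / data.size();              // step (iii): their mean
}

int main() {
    std::vector<double> marks = {85, 75, 80}; // the data set used in the example below
    std::cout << meanAbsoluteDeviation(marks) << std::endl;
    // mean = 80; |5| + |-5| + |0| = 10; M.A.D. = 10 / 3 ≈ 3.33
    return 0;
}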

Suppose that the deviation from a central value a is given as (x − a), where x is any
observation of the set of data. To find out the mean deviation, we need to find
the average of all the deviations from a in the given data set. Since the measure of
central tendency lies between the maximum and minimum values of the data
set, we can see that some deviations would be positive and the rest would be
negative. The sum of such deviations would give zero. Let us see an example to
make this point clearer.

Consider the following data set:

Name of student     Marks (in percentage)
Anmol               85
Kushagra            75
Garima              80

The mean of the given data is given as:

x̄ = (Σᵢ xᵢ) / n = (85 + 75 + 80) / 3 = 240 / 3 = 80
Now, if we calculate the deviation from the mean for the given values, we have:

Name of student     Marks (in percentage)     Deviation from mean (x − x̄)
Anmol               85                        5
Kushagra            75                        −5
Garima              80                        0

The sum of these deviations is 5 + (−5) + 0 = 0. This does not give us any idea about
the measure of variability of the data, which is the actual purpose of finding the mean
deviation. So, we find the absolute value of the deviation from the mean.

M.A.D. = (Σ absolute values of deviation from central measure) / (total number of observations)

Although any measure of central tendency can be used to calculate the mean absolute
deviation, the mean and the median are generally the most common ones.

In the upcoming discussions, we will discuss calculating deviations for the two types
of data, i.e. grouped and ungrouped data, for which the methods of calculating the
mean deviation from the mean differ.

Q-2. Write briefly about the fundamentals of a computer system.

Ans- Computer fundamentals can be described as the learning or studying of some
basic functions of computers, starting from their origin to the modern day. The study
of basic computer types along with their characteristics, advantages and
disadvantages is included in the learning of the fundamentals of computers.

Before shifting to advanced computer knowledge, it is highly recommended to be
aware of this topic thoroughly, as it will make you more confident and
comfortable while acquiring more advanced computer skills.

A computer can be defined or described as a machine or device which can work
with information: it can store, retrieve, manipulate, and process data.

The term computer is derived from the Latin word “computare”, which means to
calculate.

Therefore a computer can be further defined as a programmable machine that is
used for numerical calculations.

Some years back these devices/machines were used only for the purpose of
calculations, but presently they are widely used in all sections of human society.

Modern computers are incredibly advanced thanks to the upgrading and
enhancement of technologies.

They can store huge amounts of data in internal as well as external storage
units. The hard disk is the computer's main secondary storage device.

These days computer speed has dramatically increased: work or tasks which
used to take long hours to perform can be done in a few seconds. This is because of
the rapid development in the IT (Information Technology) sector, especially in the
computer hardware section.

The computer peripherals and devices manufactured these days are of the highest
quality at an affordable price.

Technology has made these devices perform more speedily than ever before;
this is an important characteristic of computer systems which has made them very
famous and a part of human lives.

It has also been observed that the life of modern peripherals and devices has
been extended due to the excellent quality of raw materials used in the
preparation of these devices.

For example, the warranty of a hard disk drive is typically 2 to 5 years.

The speed of a computer mainly depends upon factors such as the type of
motherboard you are using, the processor speed, and the RAM (Random
Access Memory).

Motherboard:: The computer motherboard is designed on a piece of PCB, which is
called a Printed Circuit Board, to which all other components are attached, such
as the hard disk, processor, RAM, etc.

Processor:: The processor is also called the CPU, which stands for Central Processing
Unit.

It is often called the heart or brain of the computer system.

RAM:: RAM stands for Random Access Memory, which is a temporary storage
medium; it is volatile memory.

It tends to lose data when the power is off.

The speed of the computer depends upon the RAM as well.

You can install more RAM to increase your computer's speed, but first
you have to check the compatibility with the motherboard and other
components of the device.

Hard Disk:: This is a permanent storage unit of a computer which can store data in
high volume, and you can retrieve the data whenever and wherever you need it.
HDDs are available in the market in huge storage capacities.

Basic Fundamental Functions of Computer

There are mainly four common functions of a computer system:

Input

Output

Processing

Storage

Input:: The computer receives data from input devices in the form of raw data;
later this data is processed into human-readable form with the help of other
computer devices.

The primary input devices of a computer system are:

Keyboard

Mouse

Scanner

Trackball

Lightpen

Joystick

Output:: The output devices of a computer receive data from the system and
present the processed data in human-readable form.

Some common output devices are:

Printers

Monitors
Speakers

Headphones

Projectors

Processing:: This is the core function of the modern-day personal computer.

When data is received from memory, it is transferred to the processor for
further processing.

Storage:: There are mainly two storage units of the personal computer (PC):

Primary Storage

Secondary Storage

Primary Storage:: Random Access Memory (RAM) is the primary storage unit of a
computer.

Secondary Storage:: Hard disk drives and pen drives are called secondary
storage units.


Different Types of Computer

The overall development of computers has reached new heights due to vast
improvements in modern technology.

The fundamentals of computers have changed rapidly. Computers are categorized
into four different types according to their speed, size, capabilities, and cost.

Super Computer

Mainframe

Mini

Micro

Super:: They are the fastest and most expensive computers compared to the others.
They require huge space for their installation.

Mainframe:: They are not as fast as supercomputers and are also very expensive;
they too require huge space for installation.

Mini:: They are smaller, cheaper, and slower compared to supercomputers and
mainframe computers.

Micro:: They are called personal computers (PCs).

Q-3. Explain the operations of stacks and queues.

Ans- We deal with data all the time, so how we store, organise or group our data
matters.

Data structures are tools used to store data in a structured way in a computer so
that it can be used efficiently. Efficient data structures play a vital role in
designing good algorithms, and data structures are considered key organising
factors in software design in some design methods and programming languages.
For example, in an English dictionary we can access any word easily because the
data is stored in a sorted way using a particular data structure.

In an online city map, data like landmark positions and road network connections
is shown geometrically on a two-dimensional plane.

Here, we will discuss the stack and queue data structures.

Stacks: A stack is a collection of elements that follows the LIFO order. LIFO stands
for Last In First Out, which means the element which is inserted most recently will
be removed first. Imagine a stack of trays on a table: when you put a tray there, you
put it on top, and when you remove one, you also remove it from the top.

A stack has the restriction that insertion and deletion of elements can only be done
from one end of the stack, and we call that position the top. The element at the top
position is called the top element. Insertion of an element is called PUSH and
deletion is called POP.

Operations on Stack:

push( x ) : insert element x at the top of stack.

void push (int stack[ ], int x, int n) {
    if (top == n-1) {       // top is at the last position of the stack: the stack is full
        cout << "Stack is full. Overflow condition!";
    }
    else {
        top = top + 1;      // incrementing top position
        stack[top] = x;     // inserting element at the incremented position
    }
}

pop( ) : removes element from the top of stack.

void pop (int stack[ ], int n) {
    if (isEmpty())
        cout << "Stack is empty. Underflow condition!" << endl;
    else
        top = top - 1;      // decrementing top's position detaches the last element
}

topElement ( ) : access the top element of stack.

int topElement (int stack[ ]) {
    return stack[top];      // the top element sits at index top
}

isEmpty ( ) : check whether the stack is empty or not.

bool isEmpty ( ) {
    if (top == -1)          // stack is empty
        return true;
    else
        return false;
}

size ( ) : tells the current size of stack.

int size ( ) {
    return top + 1;         // top is a 0-based index, so the size is top + 1
}

Implementation:

#include <iostream>

using namespace std;

int top = -1;   // globally defining the value of top, as the stack is empty

void push (int stack[ ], int x, int n) {
    if (top == n-1) {       // top is at the last position of the stack: the stack is full
        cout << "Stack is full. Overflow condition!" << endl;
    }
    else {
        top = top + 1;      // incrementing top position
        stack[top] = x;     // inserting element at the incremented position
    }
}

bool isEmpty ( ) {
    if (top == -1)          // stack is empty
        return true;
    else
        return false;
}

void pop (int stack[ ], int n) {
    if (isEmpty())
        cout << "Stack is empty. Underflow condition!" << endl;
    else
        top = top - 1;      // decrementing top's position detaches the last element
}

int size ( ) {
    return top + 1;
}

int topElement (int stack[ ]) {
    return stack[top];
}

// Now let's use these functions on a stack of capacity 3.

int main( ) {
    int stack[3];

    // pushing element 5 onto the stack
    push(stack, 5, 3);
    cout << "Current size of stack is " << size() << endl;

    push(stack, 10, 3);
    push(stack, 24, 3);
    cout << "Current size of stack is " << size() << endl;

    // as the stack is now full, further pushing shows the overflow condition
    push(stack, 12, 3);

    // accessing the top element
    cout << "The current top element in stack is " << topElement(stack) << endl;

    // now removing all the elements from the stack
    for (int i = 0; i < 3; i++)
        pop(stack, 3);
    cout << "Current size of stack is " << size() << endl;

    // as the stack is empty, further popping shows the underflow condition
    pop(stack, 3);

    return 0;
}

Output:

Current size of stack is 1

Current size of stack is 3

Stack is full. Overflow condition!

The current top element in stack is 24

Current size of stack is 0

Stack is empty. Underflow condition!

Queue:

A queue is a data structure that follows the FIFO principle. FIFO means First In
First Out, i.e. the element added first to the queue will be the one to be removed
first. Elements are always added at the back and removed from the front. Think of
it as a line of people waiting for a bus at the bus stand: the person who comes first
will be the first one to enter the bus.

Queue supports some fundamental functions:

Enqueue: Adds an element to the back of the queue if the queue is not full;
otherwise it prints "OverFlow".

void enqueue(int queue[], int element, int& rear, int arraySize) {
    if (rear == arraySize)          // queue is full
        printf("OverFlow\n");
    else {
        queue[rear] = element;      // add the element at the back
        rear++;
    }
}

Dequeue: Removes the element from the front of the queue if the queue is not
empty; otherwise it prints "UnderFlow".

void dequeue(int queue[], int& front, int rear) {
    if (front == rear)              // queue is empty
        printf("UnderFlow\n");
    else {
        queue[front] = 0;           // clear the front element
        front++;
    }
}
Front: Returns the front element of the queue.
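A minimal sketch of this operation, consistent in style with the enqueue and dequeue
snippets above (the function name is our own; it assumes the caller has already
checked that the queue is not empty):

int getFront(int queue[], int front) {
    return queue[front];            // the front element sits at index front
}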

Q-4. Illustrate the characteristics of some common memory technologies.

Ans- Semiconductor Memory Types & Technologies

Semiconductor memory is used in all forms of computer applications: there are
many types, technologies and terminologies - DRAM, SRAM, Flash, DDR3, DDR4,
DDR5, and more.


Semiconductor memory is used in any electronics assembly that uses computer
processing technology. It is the essential electronic component needed for any
computer-based PCB assembly.

In addition to this, memory cards have become commonplace items for
temporarily storing data - everything from the portable flash memory cards used
for transferring files, to semiconductor memory cards used in cameras, mobile
phones and the like.

The use of semiconductor memory has grown, and the size of these memory
cards has increased, as ever larger amounts of storage are needed.

To meet the growing needs for semiconductor memory, there are many types and
technologies that are used. As the demand grows new memory technologies are
being introduced and the existing types and technologies are being further
developed.

A variety of different memory technologies are available - each one suited to
different applications. Names such as ROM, RAM, EPROM, EEPROM, Flash
memory, DRAM, SRAM, SDRAM, as well as F-RAM and MRAM are available, and
new types are being developed to enable improved performance.

Terms like DDR3, DDR4, DDR5 and many more are seen and these refer to
different types of SDRAM semiconductor memory.

In addition to this the semiconductor devices are available in many forms - ICs for
printed board assembly, USB memory cards, Compact Flash cards, SD memory
cards and even solid state hard drives. Semiconductor memory is even
incorporated into many microprocessor chips as on-board memory.
Q-5. Define the term instruction pre-fetching.

Ans- Prefetching is the loading of a resource before it is required, to decrease the
time spent waiting for that resource. Examples include instruction prefetching,
where a CPU caches data and instruction blocks before they are executed, and a
web browser requesting copies of commonly accessed web pages. Prefetching
functions often make use of a cache.


Prefetching allows applications and hardware to maximize performance and
minimize wait times by preloading resources that users will need before they
request them.

Web browsers employ prefetching by preloading commonly accessed pages.
When the user navigates to such a page, it loads quickly because the browser is
pulling it from the cache. Some browser plugins download all of the hyperlinked
pages to attempt to speed up browsing.

Some operating systems, such as Windows, cache files that a program needs on
startup to make loading faster.
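As a software-level illustration (our own sketch, not from the original text), some
compilers expose explicit prefetch hints. The following C example uses GCC/Clang's
__builtin_prefetch to request data from memory a few iterations before it is used:

#include <stdio.h>

/* Sum an array while hinting the CPU to prefetch data 16 elements ahead
   of the current position. __builtin_prefetch is a GCC/Clang builtin;
   on other compilers the call can simply be removed. */
long sum(const int *a, int n) {
    long s = 0;
    for (int i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&a[i + 16]);  /* request the data early */
        s += a[i];
    }
    return s;
}

int main(void) {
    int a[100];
    for (int i = 0; i < 100; i++) a[i] = i;
    printf("%ld\n", sum(a, 100));            /* prints 4950 */
    return 0;
}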

BCA 303 (DATA STRUCTURES & ALGORITHMS)

Q-1. How can we work through an example of object-oriented programs?

Ans- Object-oriented programming (OOP) is a programming paradigm based on
the concept of "objects", which may contain data, in the form of fields, often
known as attributes; and code, in the form of procedures, often known as
methods. For example, a person is an object which has certain properties such as
height, gender, age, etc. It also has certain methods such as move, talk, and so on.

Object

This is the basic unit of object-oriented programming: both the data and the
functions that operate on that data are bundled as a unit called an object.

Class

When you define a class, you define a blueprint for an object. This doesn't
actually define any data, but it does define what the class name means, that is,
what an object of the class will consist of and what operations can be performed
on such an object.

OOP has four basic concepts on which it is totally based. Let's have a look at them
individually −

Abstraction − It refers to providing only essential information to the outside
world and hiding the background details. For example, a web server hides how it
processes the data it receives; the end user just hits the endpoints and gets the
data back.

Encapsulation − Encapsulation is the process of binding data members (variables,
properties) and member functions (methods) into a single unit. It is also a way of
restricting access to certain properties or components. The best example of
encapsulation is a class.

Inheritance − The ability to create a new class from an existing class is called
inheritance. Using inheritance, we can create a child class from a parent class
such that it inherits the properties and methods of the parent class and can have
its own additional properties and methods. For example, if we have a class
Vehicle that has properties like color, price, etc., we can create two classes like
Bike and Car from it that have those properties plus additional properties
specialized for them; a car has numberOfWindows while a bike does not. The same
applies to methods.

Polymorphism − The word polymorphism means having many forms. Typically,
polymorphism occurs when there is a hierarchy of classes and they are related by
inheritance. C++ polymorphism means that a call to a member function will cause
a different function to be executed depending on the type of object that invokes
the function.
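To make these concepts concrete, here is a minimal C++ sketch of the Vehicle/Car/Bike
example mentioned above (class and member names are illustrative):

#include <iostream>
#include <string>

// Parent class: common properties shared by all vehicles.
class Vehicle {
public:
    std::string color;
    double price;
    // A virtual function enables polymorphism: the call is resolved
    // by the actual object type at run time.
    virtual void describe() const { std::cout << "A vehicle" << std::endl; }
    virtual ~Vehicle() {}
};

// Child classes: inherit color and price, and add their own members.
class Car : public Vehicle {
public:
    int numberOfWindows = 4;   // specialized property a Bike does not have
    void describe() const override {
        std::cout << "A car with " << numberOfWindows << " windows" << std::endl;
    }
};

class Bike : public Vehicle {
public:
    void describe() const override { std::cout << "A bike" << std::endl; }
};

int main() {
    Car car;
    Bike bike;
    Vehicle* v = &car;
    v->describe();   // prints the Car version: polymorphism in action
    v = &bike;
    v->describe();   // prints the Bike version
    return 0;
}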

Object-Oriented Programming is a method of programming where programmers
define the type of data as well as the operations that the data can perform.

In a nutshell, Object-Oriented Programming is a simple engineering approach to
building software systems which model real-world entities using classes and objects.

So going further, the next question is…

What is an Object?

An object in terms of software programming is nothing but a real-world entity or
an abstract concept which has an identity, state, and behavior.

Q-2. Define Abstract Data Type with an example.

Ans- Abstract Data Types

An Abstract Data Type (ADT) is a type (or class) for objects whose behaviour is
defined by a set of values and a set of operations.

The definition of ADT only mentions what operations are to be performed but not
how these operations will be implemented. It does not specify how data will be
organized in memory and what algorithms will be used for implementing the
operations. It is called “abstract” because it gives an implementation-independent
view. The process of providing only the essentials and hiding the details is known
as abstraction.

The user of a data type does not need to know how that data type is implemented.
For example, we have been using primitive types like int, float and char with only
the knowledge of the operations that can be performed on them, without any idea
of how they are implemented. So a user only needs to know what a data type can
do, but not how it does it. Think of an ADT as a black box which hides the inner
structure and design of the data type. Now we'll define three ADTs, namely the
List ADT, Stack ADT and Queue ADT.

List ADT

The data is generally stored in key sequence in a list which has a head structure
consisting of a count, pointers, and the address of a compare function needed to
compare the data in the list.

The data node contains the pointer to a data structure and a self-referential
pointer which points to the next node in the list.

//List ADT Type Definitions

typedef struct node {
    void *DataPtr;              // pointer to the node's data
    struct node *link;          // self-referential pointer to the next node
} Node;

typedef struct {
    int count;                  // number of nodes in the list
    Node *pos;                  // current position, for traversal
    Node *head;                 // first node
    Node *rear;                 // last node
    int (*compare) (void *argument1, void *argument2);  // comparison function
} LIST;

A list contains elements of the same type arranged in sequential order, and the
following operations can be performed on the list.

get() – Return an element from the list at any given position.

insert() – Insert an element at any position of the list.

remove() – Remove the first occurrence of any element from a non-empty list.

removeAt() – Remove the element at a specified location from a non-empty list.

replace() – Replace an element at any position by another element.

size() – Return the number of elements in the list.

isEmpty() – Return true if the list is empty, otherwise return false.

isFull() – Return true if the list is full, otherwise return false.

Stack ADT

In the Stack ADT implementation, instead of the data being stored in each node, a
pointer to the data is stored.

The program allocates memory for the data, and its address is passed to the stack
ADT.
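Following the same style as the List ADT definitions above, a minimal sketch of what
the Stack ADT head structure might look like (an illustrative sketch, not the text's
official definition):

//Stack ADT Type Definitions (sketch)

typedef struct node {
    void *DataPtr;              // pointer to the user's data, not the data itself
    struct node *link;          // next node down the stack
} StackNode;

typedef struct {
    int count;                  // number of elements currently in the stack
    StackNode *top;             // pointer to the top node
} STACK;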

Q-3. What are the characteristics of single linked list in DSA?

Ans- A node in a singly linked list consists of two parts: a data part and a link part.
The data part of the node stores the actual information that is to be represented by
the node, while the link part stores the address of its immediate successor. A
one-way chain, or singly linked list, can be traversed only in one direction.

Linked List Basics

A linked list is a sequence of data structures which are connected together via
links.

A linked list is a sequence of links which contain items. Each link contains a
connection to another link. The linked list is the second most-used data structure
after the array. The following are important terms for understanding the concept
of a linked list.

Link − Each link of a linked list can store a data item called an element.

Next − Each link of a linked list contains a link to the next link, called Next.

LinkedList − A LinkedList contains the connection link to the first link, called First.

Linked List Representation

The following are the important points to be considered.

A LinkedList contains a link element called first.

Each link carries a data field (or fields) and a link field called next.

Each link is linked with its next link using its next link field.

The last link carries a link set to null to mark the end of the list.

Types of Linked List

The following are the various flavours of linked list.

Simple Linked List − Item navigation is forward only.

Doubly Linked List − Items can be navigated forward and backward.

Circular Linked List − The last item contains a link to the first element as next, and
the first element has a link to the last element as prev.
A linked list can be defined as a collection of objects called nodes that are randomly
stored in memory.

A node contains two fields, i.e. the data stored at that particular address and a
pointer which contains the address of the next node in memory.

The last node of the list contains a pointer to null.
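A minimal C++ sketch of such a node and a one-direction traversal (names are
illustrative):

#include <iostream>

// A node of a singly linked list: a data part and a link part.
struct Node {
    int data;
    Node* next;
};

// Traversal is possible in one direction only: follow next until null.
void printList(const Node* head) {
    for (const Node* cur = head; cur != nullptr; cur = cur->next)
        std::cout << cur->data << " ";
    std::cout << std::endl;
}

int main() {
    Node third = {30, nullptr};   // last node: the null pointer marks the end
    Node second = {20, &third};
    Node first = {10, &second};
    printList(&first);            // prints: 10 20 30
    return 0;
}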

Q-4. Define the Data Structure Operations.

Ans- A data structure is an organizational scheme, such as records or arrays, that
can be applied to data to facilitate interpreting the data or performing operations
on it. Data may be organized in many different ways: the logical or mathematical
model of a particular organization of data is called a data structure. The choice of
data model depends on two things. First, it must be rich enough in structure to
mirror the actual relationships of the data in the real world. On the other hand, the
structure should be simple enough that one can effectively process the data when
necessary.

Data are also organized into more complex types of structures. The study of such
data structures, which forms the subject matter of this text, includes the following
three steps:

(1) Logical or mathematical description of the structure.

(2) Implementation of the structure on a computer.

(3) Quantitative analysis of the structure, which includes determining the amount
of memory needed to store the structure and the time required to process it.
Data Structure Operations

Data are processed by means of certain operations on the data structure. The
choice of data structure depends largely on the frequency with which specific
operations are performed. This section introduces the reader to some of the most
frequently used of these operations.

(1) Traversing: Accessing each record exactly once so that certain items in the
record may be processed.

(2) Searching: Finding the location of a particular record with a given key value, or
finding the locations of all records which satisfy one or more conditions (see the
sketch after this list).

(3) Inserting: Adding a new record to the structure.

(4) Deleting: Removing a record from the structure.

(5) Sorting: Arranging the data or records in some logical order (ascending or
descending).

(6) Merging: Combining the records in two different sorted files into a single sorted
file.
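The following minimal C++ sketch (illustrative data, our own example) shows the
first two operations, traversing and searching, on an array of records:

#include <iostream>

int main() {
    int records[5] = {12, 7, 25, 7, 3};

    // (1) Traversing: access each record exactly once.
    for (int i = 0; i < 5; i++)
        std::cout << records[i] << " ";
    std::cout << std::endl;

    // (2) Searching: find the locations of all records with key value 7.
    for (int i = 0; i < 5; i++)
        if (records[i] == 7)
            std::cout << "key 7 found at index " << i << std::endl;
    return 0;
}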

Types of Data Structure

Data structures are classified into two types: linear and non-linear.

Linear: A data structure is said to be linear if its elements form a sequence. The
elements of a linear data structure can be represented by means of sequential
memory locations. The other way is to have the linear relationship between the
elements represented by means of pointers or links. Examples: arrays and linked
lists.
Q-5. Explain how function calls may be implemented using stacks for return
values.

Ans- Before we jump into the actual material, I want to briefly revisit the various
ways assembly language can access data in memory (i.e., addressing modes). The
following table is adapted from CSAPP (2nd edition):

Type         Form             Operand Value                    Name

Immediate    $Imm             Imm                              Immediate
Register     Ea               R[Ea]                            Register addressing
Memory       Imm              M[Imm]                           Direct addressing
Memory       (Ea)             M[R[Ea]]                         Indirect addressing
Memory       Imm(Eb,Ei,s)     M[Imm + R[Eb] + R[Ei]·s]         Scaled indexed addressing

In the above table:

Imm refers to a constant value, e.g. 0x8048d8e or 48.

Ex refers to a register, e.g. %eax.

R[Ex] refers to the value stored in register Ex.

M[x] refers to the value stored at memory address x.
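For instance (an illustrative example, not from the original table), the operand
8(%ebp,%esi,4) uses scaled indexed addressing: its value is M[8 + R[%ebp] + R[%esi]·4].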

Main course

Now, let's bring our main course onto the table: understanding how functions
work. I'll first clear up some terms we will use during the explanation. Then, we'll
take a look at the stack and understand how it supports function calls. Lastly, we'll
examine two assembly programs and understand the whole picture of function
calls.

Some terms

Let's first consider the key elements we need in order to form a function:

function name

A function's name is a symbol that represents the address where the function's
code starts.

function arguments

A function's arguments (aka parameters) are the data items that are explicitly
given to the function for processing. For example, in mathematics, there is a sin
function. If you were to ask a computer to find sin(2), sin would be the function's
name, and 2 would be the argument (or parameter).

local variables

Local variables are data storage that a function uses while processing and that is
thrown away when it returns. It's kind of like a scratch pad of paper: functions get
a new piece of paper every time they are activated, and they throw it away when
they are finished processing.

return address

The return address is an "invisible" parameter in that it isn't directly used during
the function. The return address is a parameter which tells the function where to
resume executing after the function is completed. This is needed because
functions can be called to do processing from many different parts of our
program, and the function needs to be able to get back to wherever it was called
from. In most programming languages, this parameter is passed automatically
when the function is called. In assembly language, the call instruction handles
passing the return address for you, and ret handles using that address to return
back to where you called the function from.

return value

The return value is the main method of transferring data back to the main
program. Most programming languages only allow a single return value for a
function.

THE PROCESSOR'S STACK OPERATION

There are two CPU registers that are important for the functioning of the stack;
they hold information that is necessary when accessing data residing in memory.
Their names are ESP and EBP on 32-bit systems. The ESP (Extended Stack
Pointer) holds the top stack address. ESP is modifiable either directly or
indirectly. Directly: by using direct operations, for example (Windows/Intel):

add esp, 0Ch

This instruction causes the stack to shrink by 12 bytes, while

sub esp, 0Ch

causes the stack to grow by 12 bytes. (This may seem confusing: the bigger the
ESP value, the smaller the stack, and vice versa, because the stack grows
downwards in memory as it gets bigger.)

Indirectly: by adding data elements to the stack with PUSH or removing data
elements from the stack with POP stack operations. For example:

push ebp ; Save ebp, put it on the stack

pop ebp ; Restore ebp, remove it from the stack

In addition to the stack pointer, which points to the top of the stack (lower
numerical address), it is often convenient to have a frame pointer (FP) which
holds an address that points to a fixed location within a frame. Looking at the
stack frame, local variables could be referenced by giving their offsets from ESP.
However, as data are pushed onto the stack and popped off the stack, these
offsets change, so the references to the local variables are not consistent.
Consequently, many compilers use another register, generally called the Frame
Pointer (FP), for referencing both local variables and parameters, because their
distances from FP do not change with PUSHes and POPs. On Intel CPUs, EBP
(Extended Base Pointer) is used for this purpose. On Motorola CPUs, any
address register except A7 (the stack pointer) will do. Because of the way the
stack grows, actual parameters have positive offsets and local variables have
negative offsets from FP. Let us examine a simple C program.

BCA 304 (DATA BASE MANAGEMENT SYSTEMS)

Q-1. What is a transaction in DBMS?

Ans- A transaction, in the context of a database, is a logical unit that is
independently executed for data retrieval or updates. Experts talk about a
database transaction as a “unit of work” that is achieved within a database design
environment.

In relational databases, database transactions must be atomic, consistent,
isolated and durable—summarized by the ACID acronym. Engineers have to look
at the build and use of a database system to figure out whether it supports the
ACID model or not. As newer kinds of database systems have emerged, the
question of how to handle transactions has become more complex.


In traditional relational database design, transactions are completed by COMMIT
or ROLLBACK SQL statements, which mark the end of a transaction (saved or
undone). The ACID acronym defines the properties of a database transaction, as
follows:

Atomicity: A transaction must be fully completed and saved (committed) or
completely undone (rolled back). A sale in a retail store database illustrates
atomicity: the sale consists of an inventory reduction and a record of incoming
cash. Both happen together or neither happens—it's all or nothing.

Consistency: The transaction must be fully compliant with the state of the
database as it was prior to the transaction. In other words, the transaction cannot
break the database’s constraints. For example, if a database table’s Phone
Number column can only contain numerals, then consistency dictates that any
transaction attempting to enter an alphabetical letter may not commit.

Isolation: Transaction data must not be available to other transactions until the
original transaction is committed or rolled back.

Durability: Transaction data changes must be available, even in the event of
database failure.

Transactions and Terminology

For reference, one of the easiest ways to describe a database transaction is that it
is any change in a database, any “transaction” between the database components
and the data fields that they contain.

However, the terminology becomes confusing because, in enterprise as a whole,
people are so used to referring to financial transactions as simply “transactions.”
That sets up a central conflict between tech-speak and the terminology of the
average person.

A database “transaction” is any change that happens. To talk about handling
financial transactions in database environments, the word “financial” should be
used explicitly. Otherwise, confusion can easily crop up. Database systems need
specific features, such as PCI compliance features, in order to handle financial
transactions specifically.

As databases have evolved, transaction handling systems have also evolved. A
new kind of database called NoSQL is one that does not depend on traditional
relational data relationships to operate.
While many NoSQL systems offer ACID compliance, others utilize processes like
snapshot isolation or may sacrifice some consistency for other goals. Experts
sometimes talk about a trade-off between consistency and availability, or similar
scenarios where consistency may be treated differently by modern database
environments. This type of question is changing how stakeholders look at
database systems, beyond the traditional relational database paradigms.

Q-2. Explain the difference between logical and physical data independence.

Ans- Difference between Physical and Logical Data Independence


Compare Physical Data Independence with Logical Data Independence:

1. Physical Data Independence lets the physical schema be modified without
causing the application program to be rewritten, whereas Logical Data
Independence lets the logical schema be modified without causing the application
program to be rewritten.

2. Physical Data Independence is easy to achieve; Logical Data Independence is
difficult to achieve.

3. Modifications at the physical level are only occasionally required, whereas
modifications at the logical level are required whenever the logical structure of
the database changes.

4. Physical Data Independence operates at the second level, while Logical Data
Independence operates at the first level.



Data Independence in DBMS :-

Data independence is the ability to modify the schema definition at a lower level without affecting the schema definition at the next higher level.

Two Level of Data Independence

1) Physical data independence

2) Logical data independence

Physical data independence:-

Physical data independence means hiding the details of the storage structure from user applications.

Logical data independence:-

Logical data independence means the logical schema can be changed without changing the external schema.

Q-3. Define the concept of object identity.

Ans- Object Identity

Object identity: An object retains its identity even if some or all of the values of
variables or definitions of methods change over time.

This concept of object identity is necessary in applications but does not apply to tuples of a relational database.

Object identity is a stronger notion of identity than typically found in programming languages or in data models not based on object orientation. Several forms of identity:

value: A data value is used for identity (e.g., the primary key of a tuple in a
relational database).

name: A user-supplied name is used for identity (e.g., file name in a file system).

built-in: A notion of identity is built into the data model or programming language, and no user-supplied identifier is required (e.g., in OO systems).

Object identity is typically implemented via a unique, system-generated OID. The value of the OID is not visible to the external user, but is used internally by the system to identify each object uniquely and to create and manage inter-object references.

There are many situations where having the system generate identifiers
automatically is a benefit, since it frees humans from performing that task.
However, this ability should be used with care. System-generated identifiers are
usually specific to the system, and have to be translated if data are moved to a
different database system. System-generated identifiers may be redundant if the
entities being modeled already have unique identifiers external to the system,
e.g., SIN#.

Object identity is a fundamental object orientation concept. With object identity, objects can contain or refer to other objects. Identity is a property of an object that distinguishes the object from all other objects in the application.

There are many techniques for identifying objects in programming languages, databases and operating systems. According to the authors, the most commonly used technique for identifying objects is user-defined names for objects. There are, of course, practical limitations to the use of variable names without the support of object identity.

Identifying an object with a unique key (also called an identifier key) is a method commonly used in database management systems. Using identifier keys for object identity confuses identity and data values. According to the authors, there are three main problems with this approach:
1. Modifying identifier keys. Identifier keys cannot be allowed to change, even
though they are user-defined descriptive data.

2. Non-uniformity. The main source of non-uniformity is that identifier keys in different tables have different types or different combinations of attributes. A more serious problem is that the attribute(s) to use for an identifier key may need to change.

3. "Unnatural" joins. The use of identifier keys causes joins to be used in retrievals
instead of simpler and more direct object retrievals.

There are many operations associated with identity. Since the state of an object is different from its identity, there are three types of equality predicates for comparing objects. The most obvious equality predicate is the identical predicate, which checks whether two objects are actually one and the same object. The two other equality predicates (shallow equal and deep equal) actually compare the states of objects. Shallow equal goes one level deep in comparing corresponding instance variables. Deep equal ignores identities and compares the values of corresponding base objects. As for copying objects, the counterparts of deep and shallow equality are deep and shallow copy.
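
These predicates can be illustrated with a short Python sketch (Python's "is" plays the role of the identical predicate, and copy.copy/copy.deepcopy are the shallow and deep copy counterparts; the Account class is invented for illustration):

import copy

class Account:
    def __init__(self, owner, tags):
        self.owner = owner  # simple value
        self.tags = tags    # mutable sub-object

a = Account("alice", ["vip"])
b = a                    # same identity: one and the same object
shallow = copy.copy(a)   # new object that shares the tags list
deep = copy.deepcopy(a)  # new object with its own copy of tags

print(a is b)                  # True  -> identical predicate
print(a is shallow)            # False -> distinct identity, equal state
print(shallow.tags is a.tags)  # True  -> shallow copy shares sub-objects
print(deep.tags is a.tags)     # False -> deep copy duplicates them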

Q-4. What is a relation? What are its characteristics?

Ans- The relational model stores data in the form of tables. This concept was proposed by Dr. E.F. Codd, a researcher at IBM, in 1970. The relational model consists of three major components:

1. The set of relations and set of domains that defines the way data can be
represented (data structure).

2. Integrity rules that define the procedure to protect the data (data integrity).

3. The operations that can be performed on data (data manipulation).


A relational model database is defined as a database that allows you to group its data items into one or more independent tables that can be related to one another by using fields common to each related table.

Relational database systems have the following characteristics:

• The whole of the data is conceptually represented as an orderly arrangement of data into rows and columns, called a relation or table.

• All values are scalar. That is, at any given row/column position in the relation there is one and only one value.

• All operations are performed on an entire relation and result in an entire relation, a concept known as closure.

Dr. Codd, when formulating the relational model, chose the term "relation" because it was comparatively free of connotations, unlike, for example, the word "table". It is a common misconception that the relational model is so called because relationships are established between tables. In fact, the name is derived from the relations on which it is based. Notice that the model requires only that data be conceptually represented as a relation; it does not specify how the data should be physically implemented. A relation is a relation provided that it is arranged in row and column format and its values are scalar. Its existence is completely independent of any physical representation.

Basic Terminology used in Relational Model

The figure shows a relation with the formal names of the basic components marked. The entire structure is, as we have said, a relation.

Tuples of a Relation

Each row of data is a tuple. Actually, each row is an n-tuple, but the “n-” is usually
dropped.

Q-5. List the extended relational operators and purpose of each.


Ans- The basic relational-algebra operations have been extended in several ways. A simple extension is to allow arithmetic operations as part of projection. An important extension is to allow aggregate operations, such as computing the sum of the elements of a set, or their average. Another important extension is the outer-join operation, which allows relational-algebra expressions to deal with null values, which model missing information.

Generalized Projection

The generalized-projection operation extends the projection operation by allowing arithmetic functions to be used in the projection list. The generalized projection operation has the form

Π F1, F2, ..., Fn (E)

where E is any relational-algebra expression, and each of F1, F2, ..., Fn is an arithmetic expression involving constants and attributes in the schema of E. As a special case, the arithmetic expression may be simply an attribute or a constant.

For example, suppose we have a relation credit-info, as in Figure 3.25, which lists the credit limit and expenses so far (the credit-balance on the account). If we want to find how much more each person can spend, we can write the following expression:

Π customer-name, limit − credit-balance (credit-info)

The attribute resulting from the expression limit − credit-balance does not have a name. We can apply the rename operation to the result of generalized projection in order to give it a name. As a notational convenience, renaming of attributes can be combined with generalized projection as illustrated below:

Π customer-name, (limit − credit-balance) as credit-available (credit-info)

The second attribute of this generalized projection has been given the name credit-available. Figure 3.26 shows the result of applying this expression to the relation in Figure 3.25.
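
Since the textbook figures are not reproduced here, the same generalized projection can be sketched in Python over an assumed credit-info relation represented as a list of dictionaries (the customer names and amounts are illustrative):

# Hypothetical credit-info relation: (customer_name, limit, credit_balance).
credit_info = [
    {"customer_name": "Curry", "limit": 2000, "credit_balance": 1750},
    {"customer_name": "Hayes", "limit": 1500, "credit_balance": 1500},
    {"customer_name": "Jones", "limit": 6000, "credit_balance": 700},
]

# Generalized projection with renaming:
# Π customer_name, (limit - credit_balance) as credit_available (credit_info)
result = [
    {"customer_name": t["customer_name"],
     "credit_available": t["limit"] - t["credit_balance"]}
    for t in credit_info
]
print(result)  # Curry can spend 250 more, Hayes 0, Jones 5300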

Aggregate Functions

Aggregate functions take a collection of values and return a single value as a result. For example, the aggregate function sum takes a collection of values and returns the sum of the values. Thus, the function sum applied on the collection

{1, 1, 3, 4, 4, 11}

returns the value 24. The aggregate function avg returns the average of the values. When applied to the preceding collection, it returns the value 4. The aggregate function count returns the number of elements in the collection, and returns 6 on the preceding collection. Other common aggregate functions include min and max, which return the minimum and maximum values in a collection; they return 1 and 11, respectively, on the preceding collection.

The collections on which aggregate functions operate can have multiple occurrences of a value; the order in which the values appear is not relevant. Such collections are called multisets. Sets are a special case of multisets where there is only one copy of each element.
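
The aggregate values quoted above are easy to verify; a multiset is modeled here as a plain Python list, since duplicates are allowed and order is irrelevant:

multiset = [1, 1, 3, 4, 4, 11]

print(sum(multiset))                  # 24  (sum)
print(sum(multiset) / len(multiset))  # 4.0 (avg)
print(len(multiset))                  # 6   (count)
print(min(multiset), max(multiset))   # 1 11 (min, max)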

Null Values

In this section, we define how the various relational algebra operations deal with
null values and complications that arise when a null value participates in an
arithmetic operation or in a comparison. As we shall see, there is often more than
one possible way of dealing with null values, and as a result our definitions can
sometimes be arbitrary. Operations and comparisons on null values should
therefore be avoided, where possible.
Since the special value null indicates “value unknown or nonexistent,” any
arithmetic operations (such as +, −, ∗, /) involving null values must return a null
result.

Similarly, any comparisons (such as <, <=, >, >=, ≠) involving a null value evaluate to the special value unknown; we cannot say for sure whether the result of the comparison is true or false, so we say that the result is the new truth value unknown.

Comparisons involving nulls may occur inside Boolean expressions involving the
and, or, and not operations. We must therefore define how the three Boolean
operations deal with the truth value unknown.

• and: (true and unknown) = unknown; (false and unknown) = false; (unknown
and unknown) = unknown.

• or: (true or unknown) = true; (false or unknown) = unknown; (unknown or unknown) = unknown.

• not: (not unknown) = unknown.
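
A minimal sketch of this three-valued logic in Python, using None to stand for the truth value unknown (an encoding chosen here purely for illustration):

# None represents the truth value "unknown".
def and3(p, q):
    if p is False or q is False:
        return False
    if p is None or q is None:
        return None
    return True

def or3(p, q):
    if p is True or q is True:
        return True
    if p is None or q is None:
        return None
    return False

def not3(p):
    return None if p is None else not p

print(and3(True, None))   # None (unknown)
print(and3(False, None))  # False
print(or3(True, None))    # True
print(not3(None))         # None (unknown)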

We are now in a position to outline how the different relational operations deal
with null values. Our definitions follow those used in the SQL language.

• select: The selection operation evaluates predicate P in σP (E) on each tuple t in E. If the predicate returns the value true, t is added to the result. Otherwise, if the predicate returns unknown or false, t is not added to the result.

• join: Joins can be expressed as a cross product followed by a selection. Thus, the
definition of how selection handles nulls also defines how join operations handle
nulls.

In a natural join, say r ⋈ s, we can see from the above definition that if two tuples, tr ∈ r and ts ∈ s, both have a null value in a common attribute, then the tuples do not match.
• projection: The projection operation treats nulls just like any other value when
eliminating duplicates. Thus, if two tuples in the projection result are exactly the
same, and both have nulls in the same fields, they are treated as duplicates.

The decision is a little arbitrary since, without knowing the actual value, we do not
know if the two instances of null are duplicates or not.

• union, intersection, difference: These operations treat nulls just as the projection operation does; they treat tuples that have the same values on all fields as duplicates even if some of the fields have null values in both tuples.

The behavior is rather arbitrary, especially in the case of intersection and difference, since we do not know if the actual values (if any) represented by the nulls are the same.

• generalized projection: We outlined how nulls are handled in expressions at the beginning of Section 3.3.4. Duplicate tuples containing null values are handled as in the projection operation.

• aggregate: When nulls occur in grouping attributes, the aggregate operation treats them just as in projection: If two tuples are the same on all grouping attributes, the operation places them in the same group, even if some of their attribute values are null.

When nulls occur in aggregated attributes, the operation deletes null values at
the outset, before applying aggregation. If the resultant multiset is empty, the
aggregate result is null.

Note that the treatment of nulls here is different from that in ordinary arithmetic expressions; we could have defined the result of an aggregate operation as null if even one of the aggregated values is null. However, this would mean that a single unknown value in a large group could make the aggregate result on the group null, and we would lose a lot of useful information.

• outer join: Outer join operations behave just like join operations, except on tuples that do not occur in the join result. Such tuples may be added to the result (depending on whether the operation is a left, right, or full outer join), padded with nulls.
BCA 305 (COMPUTER NETWORKS)

Q-1. What is Point-to-Point Communication?

Ans- Point-to-Point Protocol (PPP) is a communication protocol of the data link layer that is used to transmit multiprotocol data between two directly connected (point-to-point) computers. It is a byte-oriented protocol that is widely used in broadband communications having heavy loads and high speeds. Since it is a data link layer protocol, data is transmitted in frames. It is defined in RFC 1661.

Services Provided by PPP

The main services provided by Point - to - Point Protocol are −

Defining the frame format of the data to be transmitted.

Defining the procedure for establishing a link between two points and exchanging data.

Stating the method of encapsulation of network layer data in the frame.

Stating authentication rules of the communicating devices.

Providing address for network communication.

Providing connections over multiple links.

Supporting a variety of network layer protocols by providing a range of services.

Components of PPP

Point-to-Point Protocol is a layered protocol having three components −

Encapsulation Component − It encapsulates the datagram so that it can be transmitted over the specified physical layer.
Link Control Protocol (LCP) − It is responsible for establishing, configuring, testing, maintaining and terminating links for transmission. It also handles negotiation for the setup of options and the use of features by the two endpoints of the links.

Authentication Protocols (AP) − These protocols authenticate endpoints for use of services. The two authentication protocols of PPP are −

Password Authentication Protocol (PAP)

Challenge Handshake Authentication Protocol (CHAP)

Network Control Protocols (NCPs) − These protocols are used for negotiating the parameters and facilities for the network layer. There is one NCP for each higher-layer protocol supported by PPP. Some of the NCPs of PPP are −

Internet Protocol Control Protocol (IPCP)

OSI Network Layer Control Protocol (OSINLCP)

Internetwork Packet Exchange Control Protocol (IPXCP)

DECnet Phase IV Control Protocol (DNCP)

NetBIOS Frames Control Protocol (NBFCP)

IPv6 Control Protocol (IPV6CP)

PPP Frame

PPP is a byte-oriented protocol where each field of the frame is composed of one or more bytes. The fields of a PPP frame are −

Flag − 1 byte that marks the beginning and the end of the frame. The bit pattern
of the flag is 01111110.

Address − 1 byte which is set to 11111111 in case of broadcast.

Control − 1 byte set to the constant value 00000011 (0x03).

Protocol − 1 or 2 bytes that define the type of data contained in the payload field.
Payload − This carries the data from the network layer. The maximum length of
the payload field is 1500 bytes. However, this may be negotiated between the
endpoints of communication.

FCS − A 2-byte or 4-byte frame check sequence for error detection. The standard code used is CRC (cyclic redundancy check).

Byte Stuffing in PPP Frame − Byte stuffing is used in the PPP payload field whenever the flag sequence appears in the message, so that the receiver does not consider it as the end of the frame. The escape byte, 01111101, is stuffed before every byte that contains the same pattern as the flag byte or the escape byte. The receiver, on receiving the message, removes the escape bytes before passing the payload on to the network layer.
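
The stuffing rule as described above can be sketched in a few lines of Python (this follows the simplified description given here; real PPP, per RFC 1662, additionally XORs the escaped byte with 0x20):

FLAG, ESC = 0x7E, 0x7D  # 01111110 and 01111101

def stuff(payload: bytes) -> bytes:
    # Insert the escape byte before any flag or escape byte in the payload.
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def unstuff(data: bytes) -> bytes:
    # Receiver side: drop each escape byte, keep the byte that follows it.
    out, skip = bytearray(), False
    for b in data:
        if not skip and b == ESC:
            skip = True
            continue
        out.append(b)
        skip = False
    return bytes(out)

msg = bytes([0x41, FLAG, 0x42, ESC, 0x43])
assert unstuff(stuff(msg)) == msg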

Q-2. Describe how a hierarchical clustering algorithm works?

Ans- Hierarchical clustering algorithm

Hierarchical clustering algorithm is of two types:

i) Agglomerative Hierarchical clustering algorithm or AGNES (agglomerative nesting) and

ii) Divisive Hierarchical clustering algorithm or DIANA (divisive analysis).

Both algorithms are exactly the reverse of each other, so we will cover the Agglomerative Hierarchical clustering algorithm in detail.
Agglomerative Hierarchical clustering - This algorithm works by grouping the data one by one on the basis of the nearest distance measure among all the pairwise distances between the data points. The distances between data points are then recalculated, but which distance to consider once groups have been formed? For this there are many available methods. Some of them are:

1) single-nearest distance or single linkage.

2) complete-farthest distance or complete linkage.

3) average-average distance or average linkage.

4) centroid distance.

5) Ward's method - the sum of squared Euclidean distances is minimized.

This way we go on grouping the data until one cluster is formed. Then, on the basis of the dendrogram, we can decide how many clusters should actually be present.

Algorithmic steps for Agglomerative Hierarchical clustering

Let X = {x1, x2, x3, ..., xn} be the set of data points.

1) Begin with the disjoint clustering having level L(0) = 0 and sequence number m
= 0.

2) Find the least distance pair of clusters in the current clustering, say pair (r), (s),
according to d[(r),(s)] = min d[(i),(j)] where the minimum is over all pairs of
clusters in the current clustering.

3) Increment the sequence number: m = m + 1. Merge clusters (r) and (s) into a single cluster to form the next clustering m. Set the level of this clustering to L(m) = d[(r),(s)].

4) Update the distance matrix, D, by deleting the rows and columns corresponding to clusters (r) and (s) and adding a row and column corresponding to the newly formed cluster. The distance between the new cluster, denoted (r,s), and an old cluster (k) is defined in this way: d[(k), (r,s)] = min (d[(k),(r)], d[(k),(s)]).
5) If all the data points are in one cluster then stop, else repeat from step 2).
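
Assuming SciPy is available, the whole procedure above takes only a few lines; method="single" corresponds to single linkage, and the cut height plays the role of reading the dendrogram:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy data: two well-separated groups in the plane.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.0],
              [5.0, 5.0], [5.1, 5.2]])

Z = linkage(X, method="single")  # steps 1-5 above with single linkage
labels = fcluster(Z, t=2.0, criterion="distance")  # cut dendrogram at height 2
print(labels)  # e.g. [1 1 1 2 2] -> two clusters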

Divisive Hierarchical clustering - It is just the reverse of the Agglomerative Hierarchical approach.

Advantages

1) No a priori information about the number of clusters is required.

2) Easy to implement and gives best result in some cases.

Disadvantages

1) Algorithm can never undo what was done previously.

2) Time complexity of at least O(n² log n) is required, where 'n' is the number of data points.

3) Depending on the type of distance measure chosen for merging, different algorithms can suffer from one or more of the following:

i) Sensitivity to noise and outliers

ii) Breaking large clusters

iii) Difficulty handling different sized clusters and convex shapes

4) No objective function is directly minimized

5) Sometimes it is difficult to identify the correct number of clusters from the dendrogram.

Q-3. How does BGP-4 work?

Ans- Border Gateway Protocol (BGP) is a standardized exterior gateway protocol designed to exchange routing and reachability information among autonomous systems (AS) on the Internet.[1] BGP is classified as a path-vector routing protocol,[2] and it makes routing decisions based on paths, network policies, or rule-sets configured by a network administrator.

BGP used for routing within an autonomous system is called Interior Border
Gateway Protocol, Internal BGP (iBGP). In contrast, the Internet application of the
protocol is called Exterior Border Gateway Protocol, External BGP (eBGP).

Operation

BGP neighbors, called peers, are established by manual configuration among routers to create a TCP session on port 179. A BGP speaker sends 19-byte keep-alive messages every 30 seconds (protocol default value, tunable) to maintain the connection.[5] Among routing protocols, BGP is unique in using TCP as its transport protocol.

When BGP runs between two peers in the same autonomous system (AS), it is
referred to as Internal BGP (iBGP or Interior Border Gateway Protocol). When it
runs between different autonomous systems, it is called External BGP (eBGP or
Exterior Border Gateway Protocol). Routers on the boundary of one AS
exchanging information with another AS are called border or edge routers or
simply eBGP peers and are typically connected directly, while iBGP peers can be
interconnected through other intermediate routers. Other deployment topologies
are also possible, such as running eBGP peering inside a VPN tunnel, allowing two
remote sites to exchange routing information in a secure and isolated manner.

The main difference between iBGP and eBGP peering is in the way routes that
were received from one peer are typically propagated by default to other peers:

New routes learned from an eBGP peer are re-advertised to all iBGP and eBGP
peers.

New routes learned from an iBGP peer are re-advertised to all eBGP peers only.

These route-propagation rules effectively require that all iBGP peers inside an AS
are interconnected in a full mesh with iBGP sessions.
How routes are propagated can be controlled in detail via the route-maps
mechanism. This mechanism consists of a set of rules. Each rule describes, for
routes matching some given criteria, what action should be taken. The action
could be to drop the route, or it could be to modify some attributes of the route
before inserting it in the routing table.

Extensions negotiation

[Figure: BGP state machine]

During the peering handshake, when OPEN messages are exchanged, BGP
speakers can negotiate optional capabilities of the session,[6] including
multiprotocol extensions[7] and various recovery modes. If the multiprotocol
extensions to BGP are negotiated at the time of creation, the BGP speaker can
prefix the Network Layer Reachability Information (NLRI) it advertises with an
address family prefix. These families include the IPv4 (default), IPv6, IPv4/IPv6
Virtual Private Networks and multicast BGP. Increasingly, BGP is used as a
generalized signaling protocol to carry information about routes that may not be
part of the global Internet, such as VPNs.[8]

In order to make decisions in its operations with peers, a BGP peer uses a simple
finite state machine (FSM) that consists of six states: Idle; Connect; Active;
OpenSent; OpenConfirm; and Established. For each peer-to-peer session, a BGP
implementation maintains a state variable that tracks which of these six states
the session is in. The BGP defines the messages that each peer should exchange in
order to change the session from one state to another.

The first state is the Idle state. In the Idle state, BGP initializes all resources,
refuses all inbound BGP connection attempts and initiates a TCP connection to the
peer. The second state is Connect. In the Connect state, the router waits for the
TCP connection to complete and transitions to the OpenSent state if successful. If
unsuccessful, it starts the ConnectRetry timer and transitions to the Active state
upon expiration. In the Active state, the router resets the ConnectRetry timer to
zero and returns to the Connect state. In the OpenSent state, the router sends an
Open message and waits for one in return in order to transition to the
OpenConfirm state. Keepalive messages are exchanged and, upon successful
receipt, the router is placed into the Established state. In the Established state,
the router can send and receive: Keepalive; Update; and Notification messages to
and from its peer.
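
A highly simplified sketch of this six-state machine in Python; real implementations attach timers and message validation to every transition, and the event names here are illustrative, not taken from the RFC:

from enum import Enum, auto

class BGPState(Enum):
    IDLE = auto()
    CONNECT = auto()
    ACTIVE = auto()
    OPEN_SENT = auto()
    OPEN_CONFIRM = auto()
    ESTABLISHED = auto()

# (state, event) -> next state; a tiny subset of the real transition table.
TRANSITIONS = {
    (BGPState.IDLE, "start"): BGPState.CONNECT,
    (BGPState.CONNECT, "tcp_ok"): BGPState.OPEN_SENT,
    (BGPState.CONNECT, "retry_expired"): BGPState.ACTIVE,
    (BGPState.ACTIVE, "retry_reset"): BGPState.CONNECT,
    (BGPState.OPEN_SENT, "open_received"): BGPState.OPEN_CONFIRM,
    (BGPState.OPEN_CONFIRM, "keepalive_received"): BGPState.ESTABLISHED,
}

state = BGPState.IDLE
for event in ["start", "tcp_ok", "open_received", "keepalive_received"]:
    state = TRANSITIONS[(state, event)]
print(state)  # BGPState.ESTABLISHED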

BGP confederation

Confederations are sets of autonomous systems. In common practice,[15] only one of the confederation AS numbers is seen by the Internet as a whole. Confederations are used in very large networks where a large AS can be configured to encompass smaller, more manageable internal ASs.

The confederated AS is composed of multiple ASs. Each confederated AS alone has iBGP fully meshed and has connections to other ASs inside the confederation. Even though these ASs have eBGP peers to ASs within the confederation, the ASs exchange routing as if they used iBGP. In this way, the confederation preserves next hop, metric, and local preference information. To the outside world, the confederation appears to be a single AS. With this solution, the iBGP transit AS problem can be resolved, as iBGP requires a full mesh between all BGP routers: a large number of TCP sessions and unnecessary duplication of routing traffic.

Confederations can be used in conjunction with route reflectors. Both confederations and route reflectors can be subject to persistent oscillation unless specific design rules, affecting both BGP and the interior routing protocol, are followed.[16]

However, these alternatives can introduce problems of their own, including the
following:

route oscillation

sub-optimal routing
increase of BGP convergence time[17]

Additionally, route reflectors and BGP confederations were not designed to ease
BGP router configuration. Nevertheless, these are common tools for experienced
BGP network architects. These tools may be combined, for example, as a
hierarchy of route reflectors.

Q-4. What are the basic advantages of coaxial cable in LAN implementation?

Ans- GREMCO’s coaxial cables (coax cables) are produced for optimal use in high-
frequency ranges. We focus on achieving the best possible electrical values, such
as the attenuation, capacity and characteristic impedance. Due to the high-quality
outer conductor, the inner conductors are extremely well shielded from
interference radiation. Owing to our high storage capacity, GREMCO’s coaxial
cables are also available upon demand in different variations (RG400, RG142, etc.)
at prices naturally in line with market requirements.

They are found in many applications, such as in satellite dishes and aircraft GPS
receivers, due to the versatility of their possible uses.

Specific application possibilities & advantages

Very good electrical damping

High electrical capacitance

Excellent electrical characteristic impedance

Available upon demand

Broadband system—Coax has a sufficient frequency range to support multiple channels, which allows for much greater throughput.

Greater channel capacity—Each of the multiple channels offers substantial capacity. The capacity depends on where you are in the world. In the North American system, each channel in the cable TV system is 6MHz wide, according to the National Television Systems Committee (NTSC) standard. In Europe, with the Phase Alternate Line (PAL) standard, the channels are 8MHz wide. Within one of these channels, you can provision high-speed Internet access—that's how cable modems operate. But that one channel is now being shared by everyone using that coax from that neighborhood node, which can range from 200 to 2,000 homes.

Greater bandwidth—Compared to twisted-pair, coax provides greater bandwidth systemwide, and it also offers greater bandwidth for each channel. Because it has greater bandwidth per channel, it supports a mixed range of services. Voice, data, and even video and multimedia can benefit from the enhanced capacity.

Lower error rates—Because the inner conductor is in a Faraday shield, noise immunity is improved, and coax has lower error rates and therefore slightly better performance than twisted-pair. The bit error rate is generally 10⁻⁹ (i.e., 1 errored bit in 1 billion).

Greater spacing between amplifiers—Coax's cable shielding reduces noise and crosstalk, which means amplifiers can be spaced farther apart than with twisted-pair.


Q-5. How does the encryption affect performance of network?

Ans- The two main characteristics that identify and differentiate one encryption
algorithm from another are its ability to secure the protected data against attacks
and its speed and efficiency in doing so. This paper provides a performance
comparison between four of the most common encryption algorithms: DES, 3DES,
Blowfish and AES (Rijndael). The comparison has been conducted by running
several encryption settings to process different sizes of data blocks to evaluate
the algorithm's encryption/decryption speed. Simulation has been conducted
using C# language.

1. Introduction
As the importance and the value of data exchanged over the Internet or other media types increase, the search for the best solution to offer the necessary protection against data thieves' attacks, while providing these services in a timely manner, is one of the most active subjects in the security-related communities.

This paper tries to present a fair comparison between the most common and
used algorithms in the data encryption field. Since our main concern here is the
performance of these algorithms under different settings, the presented
comparison takes into consideration the behavior and the performance of the
algorithm when different data loads are used.

Section 2 will give a quick overview of cryptography and its main usages in our daily life; in addition, it will explain some of the most used terms in cryptography, along with a brief description of each of the compared algorithms, to allow the reader to understand the key differences between them. Section 3 will show the results achieved by other contributions and their conclusions. Section 4 will walk through the setup environment, the settings used and the system components. Section 5 illustrates the performance evaluation methodology and the settings chosen to allow a better comparison. Section 6 gives a thorough discussion of the simulation results, and finally Section 7 concludes this paper by summarizing the key points and other related considerations.

2. Cryptography: Overview

An overview of the main goals behind using cryptography will be discussed in this
section along with the common terms used in this field.

Cryptography is usually referred to as "the study of secrets", while nowadays it is most attached to the definition of encryption. Encryption is the process of converting plain text "unhidden" to a cryptic text "hidden" to secure it against data thieves. This process has another part where the cryptic text needs to be decrypted on the other end to be understood. Fig.1 shows the simple flow of commonly used encryption algorithms.
Fig.1 Encryption-Decryption Flow

As defined in RFC 2828 [RFC2828], a cryptographic system is "a set of cryptographic algorithms together with the key management processes that support use of the algorithms in some application context." This definition covers the whole mechanism that provides the necessary level of security, comprised of network protocols and data encryption algorithms.

2.1 Cryptography Goals

This section explains the five main goals behind using Cryptography.

Every security system must provide a bundle of security functions that can assure
the secrecy of the system. These functions are usually referred to as the goals of
the security system. These goals can be listed under the following five main
categories[Earle2005]:

Authentication: This means that before sending and receiving data using the system, the identities of the receiver and the sender should be verified.

Secrecy or Confidentiality: Usually this function (feature) is how most people identify a secure system. It means that only the authenticated people are able to interpret the message (data) content and no one else.

Integrity: Integrity means that the content of the communicated data is assured to be free from any type of modification between the end points (sender and receiver). The basic form of integrity is the packet checksum in IPv4 packets.

Non-Repudiation: This function implies that neither the sender nor the receiver
can falsely deny that they have sent a certain message.

Service Reliability and Availability: Secure systems usually get attacked by intruders, which may affect their availability and the type of service offered to their users. Such systems should provide a way to grant their users the quality of service they expect.
2.2 Block Ciphers and Stream Ciphers

One of the main categorization methods for commonly used encryption techniques is based on the form of the input data they operate on. The two types are the Block Cipher and the Stream Cipher. This section discusses the main features of the two types and their modes of operation, and compares them in terms of security and performance.

2.2.1 Block Cipher

Before starting to describe the key characteristics of block ciphers, the definition of the word cipher must be presented: "A cipher is an algorithm for performing encryption (reverse is decryption)" [Wikipedia-BC].

In this method, data is encrypted and decrypted in the form of blocks. In its simplest mode, you divide the plain text into blocks, which are then fed into the cipher system to produce blocks of cipher text.

ECB (Electronic Codebook Mode) is the basic form of block cipher, where data blocks are encrypted directly to generate their corresponding ciphered blocks (shown in Fig. 2). More discussion about modes of operation follows later.

Fig.2 Block Cipher ECB Mode.

2.2.2 Stream Ciphers

A stream cipher functions on a stream of data by operating on it bit by bit. A stream cipher consists of two major components: a key stream generator and a mixing function. The mixing function is usually just an XOR function, while the key stream generator is the main unit in the stream cipher encryption technique. For example, if the key stream generator produces a series of zeros, the outputted ciphered stream will be identical to the original plain text. Figure 3 shows the operation of the simple mode in stream cipher.
Fig. 3 Stream Cipher (Simple Mode)

2.3 Mode of Operations

This section explains the two most common modes of operation in block cipher encryption, ECB and CBC, with a quick visit to other modes.

There are many variants of block cipher, where different techniques are used to strengthen the security of the system. The most common methods are: ECB (Electronic Codebook Mode), CBC (Chain Block Chaining Mode), and OFB (Output Feedback Mode). ECB mode is the simplest. CBC mode uses the cipher block from the previous step of encryption in the current one, which forms a chain-like encryption process. OFB operates on the plain text in a way similar to the stream cipher described above, where the encryption key used in every step depends on the encryption key from the previous step.

There are many other modes like CTR (counter), CFB (Cipher Feedback), or 3DES
specific modes that are not discussed in this paper due to the fact that in this
paper the main concentration will be on ECB and CBC modes.

2.4 Symmetric and Asymmetric encryptions

Data encryption procedures are mainly categorized into two categories depending
on the type of security keys used to encrypt/decrypt the secured data. These two
categories are: Asymmetric and Symmetric encryption techniques

2.4.1 Symmetric Encryption

In this type of encryption, the sender and the receiver agree on a secret (shared)
key. Then they use this secret key to encrypt and decrypt their sent messages. Fig.
4 shows the process of symmetric cryptography. Node A and B first agree on the
encryption technique to be used in encryption and decryption of communicated
data. Then they agree on the secret key that both of them will use in this
connection. After the encryption setup finishes, node A starts sending its data
encrypted with the shared key, on the other side node B uses the same key to
decrypt the encrypted messages.
Fig.4 Symmetric Encryption

The main concern behind symmetric encryption is how to share the secret key securely between the two peers. If the key gets known for any reason, the whole system collapses. The key management for this type of encryption is troublesome, especially if a unique secret key is used for each peer-to-peer connection; then the total number of secret keys to be saved and managed for n nodes will be n(n-1)/2 [Edney2003].

2.4.2 Asymmetric Encryption

Asymmetric encryption is the other type of encryption where two keys are used.
To explain more, what Key1 can encrypt only Key2 can decrypt, and vice versa. It
is also known as Public Key Cryptography (PKC), because users tend to use two
keys: public key, which is known to the public, and private key which is known
only to the user. Figure 5 below illustrates the use of the two keys between node
A and node B. After agreeing on the type of encryption to be used in the
connection, node B sends its public key to node A. Node A uses the received
public key to encrypt its messages. Then when the encrypted messages arrive,
node B uses its private key to decrypt them.

Fig.5 Asymmetric Encryption

This capability surmounts the symmetric encryption problem of managing secret keys. On the other hand, this unique feature of public key encryption makes it mathematically more prone to attacks. Moreover, asymmetric encryption techniques are almost 1000 times slower than symmetric techniques, because they require more computational processing power [Edney2003][Hardjono2005].

To get the benefits of both methods, a hybrid technique is usually used. In this technique, asymmetric encryption is used to exchange the secret key, and symmetric encryption is then used to transfer data between sender and receiver.
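
Assuming the third-party Python "cryptography" package is installed, the hybrid technique can be sketched as follows: slow asymmetric RSA protects only a freshly generated symmetric session key, and the symmetric key protects the bulk data.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Receiver's long-lived asymmetric key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the bulk data with a one-off symmetric session key...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"bulk application data")

# ...and protect only the small session key with RSA.
wrapped_key = public_key.encrypt(session_key, oaep)

# Receiver: unwrap the session key, then decrypt the bulk data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'bulk application data'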
2.5 Compared Algorithms

This section intends to give the readers the necessary background to understand
the key differences between the compared algorithms.

DES: (Data Encryption Standard) was the first encryption standard to be recommended by NIST (National Institute of Standards and Technology). It is based on the IBM-proposed algorithm called Lucifer. DES became a standard in 1974 [TropSoft]. Since that time, many attacks and methods have been recorded that exploit the weaknesses of DES, which made it an insecure block cipher.

3DES: As an enhancement of DES, the 3DES (Triple DES) encryption standard was proposed. In this standard the encryption method is similar to the one in the original DES, but it is applied 3 times to increase the encryption level. However, it is a known fact that 3DES is slower than other block cipher methods.

AES: (Advanced Encryption Standard) is the new encryption standard recommended by NIST to replace DES. The Rijndael (pronounced "Rain Doll") algorithm was selected in 2000, after a competition, as the best encryption standard. Brute force attack is the only effective attack known against it, in which the attacker tries all possible character combinations to unlock the encryption. Both AES and DES are block ciphers.

Blowfish: It is one of the most common public domain encryption algorithms, provided by Bruce Schneier - one of the world's leading cryptologists, and the president of Counterpane Systems, a consulting firm specializing in cryptography and computer security.

Blowfish is a variable-key-length, 64-bit block cipher. The Blowfish algorithm was first introduced in 1993. This algorithm can be optimized in hardware applications, though it is mostly used in software applications. Though it suffers from a weak-keys problem, no attack is known to be successful against it [BRUCE1996][Nadeem2005].

In this section a brief description of the compared encryption algorithms has been introduced. These introductions provide the minimum information needed to distinguish the main differences between them.
3. Related Work Results

To give more perspective on the performance of the compared algorithms, this section discusses the results obtained from other resources.

One of the known cryptography libraries is Crypto++ [Crypto++]. The Crypto++ Library is a free C++ class library of cryptographic schemes. Currently the library consists of the following, some of which is other people's code, repackaged into classes.

Table 1 contains the speed benchmarks for some of the most commonly used
cryptographic algorithms. All were coded in C++, compiled with Microsoft Visual
C++ .NET 2003 (whole program optimization, optimize for speed, P4 code
generation), and ran on a Pentium 4 2.1 GHz processor under Windows XP SP 1.
386 assembly routines were used for multiple-precision addition and subtraction.
SSE2 intrinsics were used for multiple-precision multiplication.

It can be noticed from the table that not all the modes have been tried for all the
algorithms. Nonetheless, these results are good to have an indication about what
the presented comparison results should look like.

Also it is shown that Blowfish and AES have the best performance among others.
And both of them are known to have better encryption (i.e. stronger against data
attacks) than the other two.

Algorithm                 Megabytes (2^20 bytes) Processed   Time Taken   MB/Second
Blowfish                  256                                3.976        64.386
Rijndael (128-bit key)    256                                4.196        61.010
Rijndael (192-bit key)    256                                4.817        53.145
Rijndael (256-bit key)    256                                5.308        48.229
Rijndael (128) CTR        256                                4.436        57.710
Rijndael (128) OFB        256                                4.837        52.925
Rijndael (128) CFB        256                                5.378        47.601
Rijndael (128) CBC        256                                4.617        55.447
DES                       128                                5.998        21.340
(3DES) DES-XEX3           128                                6.159        20.783
(3DES) DES-EDE3           64                                 6.499        9.848

Table 1 Comparison results using Crypto++


[Nadeem2005] In this paper, the popular secret key algorithms including DES, 3DES, AES (Rijndael) and Blowfish were implemented, and their performance was compared by encrypting input files of varying contents and sizes. The algorithms were implemented in a uniform language (Java), using their standard specifications, and were tested on two different hardware platforms, to compare their performance.

Tables 2 and 3 show the results of their experiments, which they conducted on two different machines: a P-II 266 MHz and a P-4 2.4 GHz.

Table 3 Comparative execution times (in seconds) of encryption algorithms in ECB mode on a P-4 2.4 GHz machine

From the results it is easy to observe that Blowfish has an advantage over the other algorithms in terms of throughput. [Nadeem2005] also conducted a comparison between the algorithms in stream mode using CBC, but since this paper is more focused on block cipher, those results were omitted.

The results showed that Blowfish has a very good performance compared to the other algorithms. They also showed that AES has a better performance than 3DES and DES. Strikingly, they show as well that 3DES has almost 1/3 the throughput of DES; in other words, it needs 3 times longer than DES to process the same amount of data.

[Dhawan2002] has also done experiments comparing the performance of the different encryption algorithms implemented inside the .NET framework. Their results are close to the ones shown before (Figure 6).

Fig. 6 Comparison results using .NET implementations [Dhawan2002]

The comparison was performed on the following algorithms: DES, Triple DES (3DES), RC2 and AES (Rijndael). The results show that AES outperformed the other algorithms in both the number of requests processed per second under different user loads, and in the response time in different user-load situations.

This section gave an overview of comparison results achieved by other people in the field.

4. Simulation Setup

This section describes the simulation environment and the used system
components.

As mentioned, this simulation uses the provided classes in the .NET environment to simulate the performance of DES, 3DES and AES (Rijndael). The Blowfish implementation used here is the one provided by Markus Hahn [BlowFish.NET] under the name Blowfish.NET. This implementation is thoroughly tested and optimized to give the maximum performance for the algorithm.

The implementation uses managed wrappers for DES, 3DES and Rijndael available
in System.Security.Cryptography that wraps unmanaged implementations
available in CryptoAPI. These are DESCryptoServiceProvider,
TripleDESCryptoServiceProvider and RijndaelManaged respectively. There is only
a pure managed implementation of Rijndael available in
System.Security.Cryptography, which was used in the tests.

Table 4 shows the algorithms settings used in this experiment. These settings are
used to compare the results initially with the result obtained from [Dhawan2002].

Algorithm   Key Size (Bits)   Block Size (Bits)
DES         64                64
3DES        192               64
Rijndael    256               128
Blowfish    448               64

Table 4 Algorithms settings

3DES and AES support other settings, but these settings represent the maximum
security settings they can offer. Longer key lengths mean more effort must be put
forward to break the encrypted data security.

Since the evaluation test is meant to evaluate the results when using block ciphers, and due to the memory constraints on the test machine (1 GB), the test breaks the load data into smaller block sizes. The load data are divided into data blocks, which are created using the RandomNumberGenerator class available in the System.Security.Cryptography namespace.

5. Performance Evaluation Methodology


This section describes the techniques and simulation choices made to evaluate the performance of the compared algorithms. In addition, this section discusses the methodology-related parameters: system parameters, experiment factor(s), and the experiment's initial settings.

5.1 System Parameters

The experiments are conducted using 3500+ AMD 64bit processor with 1GB of
RAM. The simulation program is compiled using the default settings in .NET 2003
visual studio for C# windows applications. The experiments will be performed
couple of times to assure that the results are consistent and are valid to compare
the different algorithms.

5.2 Experiment Factors

In order to evaluate the performance of the compared algorithms, the parameters that the algorithms must be tested for must be determined. Since the security features of each algorithm, i.e., their strength against cryptographic attacks, are already known and discussed, the factor chosen here to determine performance is the algorithm's speed to encrypt/decrypt data blocks of various sizes.

5.3 Simulation Procedure

By considering different sizes of data blocks (0.5MB to 20MB) the algorithms were
evaluated in terms of the time required to encrypt and decrypt the data block. All
the implementations were exact to make sure that the results will be relatively
fair and accurate.

The Simulation program (shown below in Fig. 7) accepts three inputs: Algorithm, Cipher Mode and data block size. After a successful execution, the data generated, encrypted, and decrypted are shown. Notice that most of the characters cannot be displayed since they do not have a printable character representation. Another comparison is made after the successful encryption/decryption process to make sure that all the data are processed in the right way, by comparing the generated data (the original data blocks) and the decrypted data blocks generated from the process.
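
A rough modern equivalent of this measurement loop can be sketched with Python's "cryptography" package (AES only; the 4 MB block and 256-bit key are assumptions for illustration, not the paper's exact .NET setup):

import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)          # 256-bit AES key
data = os.urandom(4 * 2**20)  # 4 MB random data block

def encrypt_time(mode):
    enc = Cipher(algorithms.AES(key), mode).encryptor()
    start = time.perf_counter()
    enc.update(data)
    enc.finalize()
    return time.perf_counter() - start

print("ECB:", encrypt_time(modes.ECB()))
print("CBC:", encrypt_time(modes.CBC(os.urandom(16))))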

Fig.7 GUI of the simulation program

6. Simulation Results

This section will show the results obtained from running the simulation program
using different data loads. The results show the impact of changing data load on
each algorithm and the impact of Cipher Mode (Encryption Mode) used.

6.1 Performance Results with ECB

The first set of experiments was conducted using ECB mode; the results are shown in Figure 8 below. The results show the superiority of the Blowfish algorithm over the other algorithms in terms of processing time. They also show that AES consumes more resources when the data block size is relatively big. The results shown here differ from the results obtained by [Dhawan2002], since the data block sizes used here are much larger than the ones used in their experiment.

Another point to notice here is that 3DES always requires more time than DES because of its triple-phase encryption characteristic. Blowfish, although it has a long key (448 bits), outperformed the other encryption algorithms. DES and 3DES are known to have holes in their security mechanism; Blowfish and AES, on the other hand, do not have any so far.

These results have nothing to do with other loads on the computer, since each single experiment was conducted multiple times, resulting in almost the same expected result. The DES, 3DES and AES implementations in .NET are considered to be among the best in the market.
Fig.8 Performance Results with ECB Mode

6.2 Performance Results with CBC

As expected, CBC requires more processing time than ECB because of its key-chaining nature. The results shown in Fig. 9 indicate also that the extra time added is not significant for many applications, knowing that CBC is much better than ECB in terms of protection. The difference between the two modes is hard to see by the naked eye; the results showed that the average difference between ECB and CBC is 0.059896 seconds, which is relatively small.

Fig. 9 Performance Results with CBC Mode

This section showed the simulation results obtained by running the four compared encryption algorithms using different Cipher Modes. Different loads have been used to determine the processing power and performance of the compared algorithms.

7. Conclusion

The presented simulation results showed that Blowfish has a better performance than the other common encryption algorithms used. Blowfish has no known security weak points so far, which makes it an excellent candidate to be considered as a standard encryption algorithm. AES showed poor performance results compared to the other algorithms, since it requires more processing power. Using CBC mode added extra processing time, but overall it was relatively negligible, especially for applications that require more secure encryption of relatively large data blocks.

References
[RFC2828],"Internet Security Glossary", http://www.faqs.org/rfcs/rfc2828.html

[Nadeem2005] Aamer Nadeem et al., "A Performance Comparison of Data Encryption Algorithms," IEEE, 2005

[Earle2005] "Wireless Security Handbook," Auerbach Publications, 2005

[Dhawan2002] Priya Dhawan, "Performance Comparison: Security Design Choices," Microsoft Developer Network, October 2002. http://msdn2.microsoft.com/en-us/library/ms978415.aspx

[Edney2003] "Real 802.11 Security: Wi-Fi Protected Access and 802.11i," Addison Wesley, 2003

[Wikipedia-BC] "Block Cipher", http://en.wikipedia.org/wiki/Block_cipher

[Hardjono2005] "Security In Wireless LANS And MANS," Artech House Publishers, 2005

[TropSoft] "DES Overview", [Explains how DES works in details, features and
weaknesses]

[BRUCE1996] Bruce Schneier, "Applied Cryptography," John Wiley & Sons, Inc., 1996

[Crypto++]"Crypto++ benchmark",
http://www.eskimo.com/~weidai/benchmarks.html [Results of comparing tens of
encryption algorithms using different settings].

[BlowFish.NET] "Coder's Lagoon",http://www.hotpixel.net/software.html [List of


resources to be used under GNU]
BCA 306 (VISUAL PROGRAMMING)

Q-1. Describe Error Handling?

Ans- Error handling refers to the response and recovery procedures for error conditions present in a software application. In other words, it is the process of anticipating, detecting and resolving application errors, programming errors or communication errors. Error handling helps in maintaining the normal flow of program execution. In fact, many applications face numerous design challenges when considering error-handling techniques.

Techopedia Explains Error Handling

Error handling helps in handling both hardware and software errors gracefully and
helps execution to resume when interrupted. When it comes to error handling in
software, either the programmer develops the necessary codes to handle errors
or makes use of software tools to handle the errors. In cases where errors cannot
be classified, error handling is usually done with returning special error codes.
Special applications known as error handlers are available for certain applications
to help in error handling. These applications can anticipate errors, thereby helping
in recovering without actual termination of application.

There are four main categories of errors:

Logical errors

Generated errors

Compile-time errors

Runtime errors

Error-handling techniques for development errors include rigorous proofreading. Error-handling techniques for logic errors or bugs usually involve meticulous application debugging or troubleshooting. Error-handling applications can resolve runtime errors, or have their impact minimized, by adopting reasonable countermeasures depending on the environment. Most hardware applications include an error-handling mechanism which allows them to recover gracefully from unexpected errors.

As errors could be fatal, error handling is one of the crucial areas for application
designers and developers, regardless of the application developed or
programming languages used. In worst-case scenarios, the error handling
mechanisms force the application to log the user off and shut down the system.
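
As a generic illustration of anticipating and resolving runtime errors, here is a minimal sketch in Python (the file name and default values are invented; the same Try/Catch/Finally structure exists in most languages):

def read_ratio(path):
    try:
        with open(path) as f:
            a, b = (float(x) for x in f.readline().split())
        return a / b
    except FileNotFoundError:    # anticipated environment error
        print("input file missing; using default")
        return 0.0
    except ZeroDivisionError:    # anticipated data error
        print("divisor was zero; using default")
        return 0.0
    finally:
        print("attempt finished")  # runs whether or not an error occurred

print(read_ratio("numbers.txt"))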

Q-2. What is conditional statement?

Ans- In the study of logic, there are two types of statements: the conditional statement and the bi-conditional statement. These statements are formed by combining two statements, which are called compound statements. Suppose a statement is: if it rains, then we don't play. This is a combination of two statements. These types of statements are mainly used in computer programming languages such as C, C++, etc. Let us learn more here with examples.

Conditional Statement Definition

A conditional statement is represented in the form of "if…then". Let p and q be two statements; then the statements p and q can be written, as per different conditions, as:

p implies q

p is sufficient for q

q is necessary for p

p⇒q

Points to remember:

A conditional statement is also called an implication.

The sign of the logical connector of a conditional statement is →. For example, P → Q is pronounced as P implies Q.

The statement P → Q is false if P is true and Q is false; otherwise P → Q is true.
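
The truth condition just stated can be checked exhaustively with a short sketch; classically, P → Q is equivalent to (not P) or Q:

def implies(p, q):
    # P -> Q is false only when P is true and Q is false.
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))
# Output rows: T T T / T F F / F T T / F F T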

Q-3. Explain the For-Next loop? With example.

Ans- It repeats a group of statements a specified number of times, and a loop index counts the number of loop iterations as the loop executes.

The syntax for this loop construct is −

For counter [ As datatype ] = start To end [ Step step ]

[ statements ]

[ Continue For ]

[ statements ]

[ Exit For ]

[ statements ]

Next [ counter ]

Flow Diagram

Example


Module loops

Sub Main()

Dim a As Byte

' for loop execution


For a = 10 To 20

Console.WriteLine("value of a: {0}", a)

Next

Console.ReadLine()

End Sub

End Module

When the above code is compiled and executed, it produces the following result −

value of a: 10

value of a: 11

value of a: 12

value of a: 13

value of a: 14

value of a: 15

value of a: 16

value of a: 17

value of a: 18

value of a: 19

value of a: 20

If, for example, you want to use a step size of 2 to display only the even numbers between 10 and 20 −


Module loops
Sub Main()

Dim a As Byte

' for loop execution

For a = 10 To 20 Step 2

Console.WriteLine("value of a: {0}", a)

Next

Console.ReadLine()

End Sub

End Module

When the above code is compiled and executed, it produces the following result −

value of a: 10

value of a: 12

value of a: 14

value of a: 16

value of a: 18

value of a: 20

Q-4. Write short note on Auto Scrolling Form.

Ans- Although the trackpad is taking over as the primary pointing device in the
laptop form factor, many users still prefer the mouse as it provides better control
and more functionality when doing precision-based work like gaming or content
editing.
To increase user satisfaction, Windows often implements many additional
features. However, some of these may backfire, and Microsoft has often faced
backlash for releasing buggy and untested content. Windows 10 often
misbehaves, and many users report a multitude of errors, which hamper their use
of the device.

One such error that I shall discuss today is one where Windows 10 keeps scrolling
down automatically.

Change Windows Settings For Your Mouse

There is a feature in Windows 10 which I find very useful: inactive window scrolling. Personally, this lets me adjust the onscreen content without having to change the active window. However, this is also prone to bugs, and many users have reported that turning this setting on led to their mouse scrolling on its own.


To turn this setting off, you need to follow these steps:

Open the Windows Settings app. You can do so from the Start menu, or use the
keyboard shortcut Win + I.

Click on Devices.

From the left pane menu, click on Mouse.

In the right pane, turn off the toggle below 'Scroll inactive windows when I
hover over them'.

Restart your computer once this is done.

Fix 2: Check Physical Buttons

If the device keeps scrolling automatically without any user input, check
whether there are any problems with your keyboard, especially the down arrow
key and the Page Down key. Some users on Microsoft Community have reported that
there was indeed a hardware problem with their keyboard, which led to Windows
10 scrolling by itself without any mouse input. This can happen for many
reasons, such as dust or damage from use.

Make sure that there is no physical damage to your keyboard or your mouse
wheel. If there is, repair the device if you can, or take it to a service
center and get the keys replaced. Plug the repaired device back in and check
whether Windows 10 still keeps scrolling down.

Fix 3: Update Or Roll Back Mouse Drivers

If Windows 10 keeps scrolling down, rule out the possibility of an outdated
driver by checking for and installing any available mouse driver updates.

Note: this assumes that you are on the latest build of Windows and have all
updates installed.

To update the mouse drivers, here are the steps you can follow:

Open a Run Window by pressing Win + R.

Type devmgmt.msc and press Enter to open the device manager.

Q-5. Compare and contrast conditional operators and logical operators.

Ans- The conditional logical operators in C# are the conditional AND operator
(&&) and the conditional OR operator (||). They are conditional versions of the
Boolean logical operators (& and |).

Conditional logical operators are used in decision-making statements, which
determine the path of execution based on a condition specified as a combination
of multiple Boolean expressions. They help produce efficient code by skipping
unnecessary logic and saving execution time, especially in logical expressions
where multiple conditional operators are used.
Unlike the Boolean logical operators "&" and "|", which always evaluate both
operands, conditional logical operators evaluate the second operand only if
necessary. As a result, conditional logical operators are faster than Boolean
logical operators and are often preferred. Evaluation using the conditional
logical operators is called "short-circuit" or "lazy" evaluation.

Conditional logical operators are therefore also known as short-circuiting
logical operators.

How the Conditional Logical Operators Work

The conditional AND operator (&&) performs a logical AND of its Boolean
operands, and the second operand is evaluated only if necessary. It is similar
to the Boolean logical operator "&", except that when the first operand
evaluates to false, the second operand is not evaluated at all. This is because
the "&&" operation can be true only if both operands evaluate to true.

The conditional OR operator (||) performs a logical OR of its Boolean operands.
The second operand is not evaluated if the first operand evaluates to true. It
differs from the Boolean logical operator "|" by performing a "short-circuit"
evaluation, in which the second operand is skipped whenever the first operand
evaluates to true. This is because the result of the "||" operation is true if
either of the two operands evaluates to true.

For example, to validate that a number lies between an upper and a lower limit,
a logical AND operation can be performed on the two conditions checking the
limits, each expressed as a Boolean expression; a sketch of this appears below.
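
The explanation above uses C#'s && and ||; VB.NET, which the earlier loop
examples in this assignment use, has direct counterparts in AndAlso and OrElse
(as opposed to the non-short-circuiting And and Or). Here is a minimal sketch
of the range check and of short-circuit evaluation, with purely illustrative
variable names and limits:

Module ShortCircuitDemo
    Sub Main()
        Dim number As Integer = 42
        Dim lowerLimit As Integer = 10
        Dim upperLimit As Integer = 100

        ' Range validation: both conditions must hold, so they are
        ' combined with the short-circuiting AND operator.
        If number >= lowerLimit AndAlso number <= upperLimit Then
            Console.WriteLine("{0} is within the limits.", number)
        End If

        ' Short-circuiting in action: text is Nothing, so the first
        ' operand is False and text.Length is never evaluated. With the
        ' non-short-circuiting And operator, both operands would be
        ' evaluated and this check would throw a NullReferenceException.
        Dim text As String = Nothing
        If text IsNot Nothing AndAlso text.Length > 0 Then
            Console.WriteLine("text is non-empty.")
        Else
            Console.WriteLine("text is missing or empty.")
        End If
    End Sub
End Module

In C#, the same checks would be written with && in place of AndAlso.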

Conditional logical operators are left-associative, which means that where
these operators occur multiple times in an expression, they are evaluated in
order from left to right; for example, a && b && c is evaluated as
(a && b) && c.
