ASSIGNMENT
Sem-3
Ans-
Concept of Entrepreneurship
Before proceeding further, let us first understand the concept of entrepreneurship and
its importance to the economy. An entrepreneur is the sole owner and manager of his
business. The word itself comes from French and translates to "the one who undertakes."
From an economics point of view, an entrepreneur is the one who bears all the risk of a
business, and in return he gets to enjoy all of its profits as well.
While understanding the concept of entrepreneurship we will also learn about the
importance of entrepreneurs in the economy. They bring in new goods, services,
technologies, etc. to the market.
After having studied the concept of entrepreneurship, now let us look at some key
elements that are necessary for entrepreneurship. We will be looking at four of the most
important elements.
Innovation
Entrepreneurship is closely tied to innovation. Entrepreneurs bring new goods, services,
technologies and processes to the market, and this ability to offer something new is
what sets them apart.
Risk-Taking
Entrepreneurship and risk-taking go hand in hand. One of the most important features
of entrepreneurship is that the whole business is run and managed by one person. So
there is no one to share the risks with.
Not taking any risks can stagnate a business, while excessive or impulsive risk-taking
can cause losses. A good entrepreneur therefore knows how to take and manage the risks
of his business. This willingness to take risks gives entrepreneurs a competitive edge
in the economy and helps them exploit the opportunities it provides.
Vision
Vision or foresight is one of the main driving forces behind any entrepreneur. It is the
energy that drives the business forward by using the foresight of the entrepreneur. It is
what gives the business an outline for the future – the tasks to complete, the risks to
take, the culture to establish, etc.
The great entrepreneurs of the world are known for their vision. It helps them set
short-term and long-term goals for their business and plan ways to achieve those
objectives.
Organization
An entrepreneur must be able to manage and organize his finances, his employees, his
resources, etc. So his organizational abilities are one of the most important elements of
entrepreneurship.
Ans-
Here are the 10 Best Characteristics Of An Entrepreneur that one must nurture:
1) Creativity:
Creativity gives birth to something new; without creativity, no innovation is possible.
Entrepreneurs usually have the knack to pin down a lot of ideas and act on them. Not
every idea will be a hit, but the experience gained is gold.
Creativity helps in coming up with new solutions for the problems at hand and
allows one to think of solutions that are out of the box. It also gives an
entrepreneur the ability to devise new products for similar markets to the ones
he’s currently playing in.
2) Professionalism:
Reliability results in trust and for most ventures, trust in the entrepreneur is what
keeps the people in the organization motivated and willing to put in their best.
Professionalism is one of the most important characteristics of an entrepreneur.
3) Risk-taking:
Exploring the unknown requires a willingness to take risks, and a good entrepreneur
always has it. At the same time, evaluating the risk to be undertaken is essential:
without knowing the consequences, a good entrepreneur wouldn't risk it all.
4) Passion:
Your work should be your passion. So when you work, you enjoy what you’re
doing and stay highly motivated. Passion acts as a driving force, with which, you
are motivated to strive for better.
It also allows you to put in those extra hours in the office that can make a
difference. At the beginning of every venture there are hurdles, but your passion
ensures that you are able to overcome these roadblocks and forge ahead towards your
goal.
5) Planning:
Perhaps this is the most important of all the steps required to run a show. Without
planning, everything would be a loose string; as they say, "If you fail to plan, you
plan to fail."
Planning is strategizing the whole game ahead of time. It basically sums up all the
resources at hand and enables you to come up with a structure and a thought
process for how to reach your goal.
The next step is making optimum use of these resources to weave the cloth of success.
Facing a situation or a crisis with a plan is always better: it provides guidelines,
with minimum to no damage incurred to the business. Planning is one of the most
important characteristics of an entrepreneur.
6) Knowledge:
Knowledge enables an entrepreneur to keep track of the developments and the constantly
changing requirements of the market he is in. Be it a new trend in the market, an
advancement in technology or even a new advertiser's entry, an entrepreneur should keep
himself abreast of it. Knowledge is the guiding force when it comes to leaving the
competition behind; new bits and pieces of information may prove as useful as a newly
devised strategy.
He should know what his strengths & weaknesses are so that they can be worked
on and can result in a healthier organization.
A good entrepreneur will always try to increase his knowledge, which is why he is
always a learner. The better an entrepreneur knows his playground, the easier he
can play in it.
7) Social Skills:
Relationship building is the core social skill: an entrepreneur needs to build and
maintain relationships with employees, investors and customers to keep the venture
moving.
8) Open-Mindedness:
Learning with an open mind lets you look at your faults humbly. New information always
makes an entrepreneur question his current resolve, and it provides a new perspective
on a particular aspect. Open-mindedness also enables you to know and learn from your
competition.
9) Empathy:
Perhaps the least discussed value in the world today is empathy, or high emotional
intelligence. Empathy is the understanding of what goes on in someone's mind, and it is
a skill worth a mention. A good entrepreneur should know the strengths and weaknesses
of every employee who works under him. You must understand that it is the people who
make the business tick, so you have to deploy empathy towards your people.
10) Customer Focus:
A good entrepreneur always knows this: a business is all about the customer. Grabbing a
customer's attention is the first step, and this can be done through various mediums
such as marketing and advertising.
It is also important that you know the needs of your customers. The product or service
your organization creates needs to cater to those needs, and personalising the business
for consumers will also boost sales. The ability to sell yourself to a potential
customer is also required; being ready with the knowledge to please a customer is a way
to build a successful business.
Q-3. Give the detail about Factors Conducive to India's Economic Growth.
A. Economic Factors:
On the other hand, a higher rate of population growth increases the demand for goods
and services for consumption. This leaves a smaller balance for investment and export,
and it means lesser capital formation, an adverse balance of trade, increasing demand
for social and economic infrastructure, and a higher unemployment problem.
Accordingly, a higher rate of population growth can put serious hurdles in the path of
economic development. Moreover, population growth at a higher rate usually eats up the
benefits of economic development, leading to slow growth of per capita income, as is
seen in the case of India.
But it has also been argued by some modern economists that with the growing momentum of
economic development, the standard of living of the general masses increases, which
would ultimately create a better environment for the control of population growth.
business and with the leadership and managerial skills necessary for achieving
those goals. In the face of growing unemployment and poverty in rural areas and the
slow growth of agriculture, there is a need for entrepreneurship in agriculture to make
it more productive and profitable. The agripreneurship programme is necessary to
develop entrepreneurs and a management workforce to cater to the agricultural industry
across the world (Bairwa et al., 2014b). Agripreneurship is influenced mainly by the
economic situation, education and culture (Singh, 2013).
II. BASIC TERMINOLOGY RELATED TO AGRIPRENEURSHIP DEVELOPMENT
1- Agripreneurs – In general, agripreneurs should be proactive, curious, determined,
persistent, visionary, hard-working and honest, with integrity and strong management
and organizational skills. Agripreneurs are also known as entrepreneurs: innovators who
drive change in the economy by serving new markets or creating new ways of doing
things. Thus, an agripreneur is someone who undertakes a variety of activities in the
agriculture sector in order to be an entrepreneur.
2- Agripreneurship – Agripreneurship is the profitable marriage of agriculture and
entrepreneurship; it turns a farm into an agribusiness. The term is synonymous with
entrepreneurship in agriculture and refers to establishing agribusiness in agriculture
and allied sectors.
3- Agriclinics – These are envisaged to provide expert advice and services to farmers
on technology, cropping practices, protection from pests and diseases, market trends
and prices of various crops, and also clinical services for animal health, which would
enhance the productivity of crops and animals and increase farmers' income (Global
Agrisystem, 2010).
4- Agribusiness Centres – These are envisaged to provide farm equipment on hire, sale
of inputs and other services. These centres will provide a package of input facilities,
consultancy and other services with the aim of strengthening the transfer of technology
and extension services, and also provide self-employment opportunities to technically
trained persons (Chandra Shekara, 2003).
And along with being unique, the idea should also be easy to execute. For
example, let’s suppose you feel a lot of people have a problem understanding
legal jargon and legal proceedings.
So, in this case, your entrepreneurial idea could be setting up a platform that
caters to all the legal needs of people and helps them understand it easily.
Idea generation is the first step for any product development. This requires you to
look for feasible product options that can be executed. It is a very important step
for organizations to solve their problems.
It requires you to do market research and SWOT analysis. You should aim to come
up with an idea that is unique from your competitors and can be used profitably.
For example, self-sanitizing door handles can be a product that you look at. It is
unique and would be in high demand because of the current shift towards a
healthy lifestyle.
Idea Generation Process
The process may be different for different organizations and different people. But
there are three main steps in the process. It starts with the identification of the
question or the problem we need to solve.
After which we need to come up with ideas and probable solutions. Finally, in the
third stage, we select the most suitable idea and execute it. For example, let’s
suppose you are opening up a restaurant.
So firstly, you need to identify what question you need to answer. Let’s assume
you want to decide upon a name for the restaurant. Now you will use different
techniques (brainstorming, mind mapping, etc) to come up with ideas for names.
In the last step, you will choose the most appropriate name from the different names
you came up with in the second step.
The mean deviation is the ratio of the sum of the absolute values of the deviations
from a central measure to the total number of observations:

M.D. = ( Σ |x − a| ) / n

where a is the central value and n is the number of observations.
(ii) Calculate the difference between each observation and the calculated mean.
Suppose that the deviation from a central value a is given as (x − a), where x is any
observation in the data set. To find the mean deviation, we need the average of all the
deviations from a in the given data set. Since the measure of central tendency lies
between the maximum and minimum values of the data set, some deviations will be
positive and the rest negative, and the sum of such deviations gives zero. Let us see
an example to make this point clearer.
Name of student    Marks
Anmol              85
Kushagra           75
Garima             80

The mean of the marks is

x̄ = Σ xᵢ / n = (85 + 75 + 80) / 3 = 240 / 3 = 80
Now, if we calculate the deviation from mean for the given values, we have:
Name of student    Deviation from mean (x − x̄)
Anmol              85 − 80 = 5
Kushagra           75 − 80 = −5
Garima             80 − 80 = 0

The sum of these deviations is 5 + (−5) + 0 = 0. This does not give us any idea about
the variability of the data, which is the actual purpose of finding the mean deviation.
So we take the absolute value of each deviation from the mean.
Some years back these machines were used only for calculations, but presently they are
widely used in all sections of human society.
Modern computers are incredibly advanced thanks to continual upgrades and enhancements
in technology.
They can store huge amounts of data in internal as well as external storage units; the
hard disk is one such source of stored data.
These days computer speed has increased dramatically: work that used to take long hours
can now be done in a few seconds. This is because of the rapid development in the IT
(Information Technology) sector, especially in computer hardware.
The computer peripherals and devices manufactured these days are of high quality at
affordable prices. Technology has made these devices perform faster than ever before,
and this speed is an important characteristic of computer systems that has made them a
part of human lives.
It has also been observed that the life of modern peripherals and devices has been
extended due to the excellent quality of the raw materials used in preparing these
devices.
The speed of a computer mainly depends on factors such as the type of motherboard, the
processor speed and the RAM (Random Access Memory).
Processor:: The processor is also called the CPU, which stands for Central Processing
Unit.
RAM:: RAM stands for Random Access Memory; it is a temporary storage medium and a
volatile memory.
You can install more RAM to increase your computer's speed, but first you have to check
the compatibility of the motherboard and the other components of the device.
Hard Disk:: This is the permanent storage unit of a computer. It can store data in high
volume, and you can retrieve the data whenever and wherever you need it. HDDs are
available in the market in huge storage capacities.
Input
Output
Processing
Storage
Input:: The computer receives raw data from input devices; this data is later processed
and presented in human-readable form with the help of other computer devices.
Keyboard
Mouse
Scanner
Trackball
Lightpen
Joystick
Output:: The output devices receive processed data from the system and present it in
human-readable form.
Printers
Monitors
Speakers
Headphones
Projectors
Processing:: When data is received from the memory, the processor works on it and
transfers the resulting data or information onward.
Storage:: There are mainly two storage units in a personal computer [PC]:
Primary Storage
Secondary Storage
Primary Storage:: Random Access Memory [RAM] is the primary storage unit of a computer.
Secondary Storage:: Hard disk drives and pen drives are called secondary storage units.
Computers have changed rapidly. They are categorized into four different types
according to their speed, size, capabilities and cost.
Super Computer
Mainframe
Mini
Micro
Super:: They are the fastest and most expensive computers, and they require huge space
for installation.
Mainframe:: They are not as fast as supercomputers, but they also require huge space
for installation and are very expensive.
Mini:: They are smaller, cheaper and slower compared to super and mainframe computers.
Micro:: They are the smallest, cheapest and most common computers; personal computers
(PCs) fall into this category.
Ans- We deal with data all the time, so how we store, organise or group our data
matters.
Data structures are tools used to store data in a structured way in a computer so that
it can be used efficiently. Efficient data structures play a vital role in designing
good algorithms, and in some design methods and programming languages they are the key
organising factors in software design. Examples: in an English dictionary we can access
any word easily because the data is stored in sorted order using a particular data
structure; in an online city map, data such as landmark positions and road-network
connections are shown geometrically on a two-dimensional plane.
Stacks: A stack is a collection of elements that follows the LIFO (Last In, First Out)
order: the element inserted most recently is removed first. Imagine a stack of trays on
a table. When you put a tray there, you put it on top, and when you remove one, you
also remove it from the top.
A stack has the restriction that insertion and deletion of elements can be done only at
one end of the stack, and we call that position the top. The element at the top
position is called the top element. Insertion of an element is called PUSH and deletion
is called POP.
Operations on Stack:

void push ( int stack[ ], int x, int n ) {
    if ( top == n-1 ) { //if top is at the last position of the stack, the stack is full
        cout << "Overflow" << endl;
    }
    else {
        top++;
        stack[top] = x;
    }
}

void pop ( ) {
    if ( isEmpty ( ) )
        cout << "Underflow" << endl;
    else
        top = top - 1; //decrementing top's position will detach the last element from the stack
}

int topElement ( int stack[ ] ) {
    return stack[top];
}

bool isEmpty ( ) {
    if ( top == -1 )
        return true;
    else
        return false;
}

int size ( ) {
    return top + 1;
}
Implementation:
#include <iostream>
int top = -1; //globally defining the value of top ,as the stack is empty .
if ( top == n-1 ) //if top position is the last of position of stack, means stack
is full .
else
bool isEmpty ( )
return true ;
else
return false;
if( isEmpty ( ) )
else
{
top = top - 1 ; //decrementing top’s position will detach last element from
stack .
int size ( )
return top + 1;
int topElement ( )
int main( )
int stack[ 3 ];
push(stack , 5 , 3 ) ;
push (stack , 24 , 3) ;
//As now the stack is full ,further pushing will show overflow condition .
push(stack , 12 , 3) ;
cout << ”The current top element in stack is “ << topElement( ) << endl;
pop( );
//as stack is empty , now further popping will show underflow condition .
pop ( );
Output :
Current size of stack is 1
Queue:
A queue is a data structure that follows the FIFO (First In, First Out) principle: the
element added first to the queue is the one removed first. Elements are always added at
the back and removed from the front. Think of it as a line of people waiting for a bus
at the bus stand: the person who comes first is the first one to enter the bus.
Enqueue: Adds an element at the back of the queue if the queue is not full; otherwise
it prints "OverFlow".

void enqueue ( int queue[ ], int x, int n ) {
    if ( rear == n-1 )
        printf("OverFlow\n");
    else {
        rear++;
        queue[rear] = x;
    }
}
Dequeue: Removes the element from the front of the queue if the queue is not empty;
otherwise it prints "UnderFlow".

void dequeue ( ) {
    if ( front > rear )
        printf("UnderFlow\n");
    else {
        front++;
    }
}
The use of semiconductor memory has grown, and the capacity of these memory devices has
increased as larger and larger amounts of storage are needed.
To meet the growing needs for semiconductor memory, there are many types and
technologies that are used. As the demand grows new memory technologies are
being introduced and the existing types and technologies are being further
developed.
Terms like DDR3, DDR4, DDR5 and many more are seen and these refer to
different types of SDRAM semiconductor memory.
In addition to this the semiconductor devices are available in many forms - ICs for
printed board assembly, USB memory cards, Compact Flash cards, SD memory
cards and even solid state hard drives. Semiconductor memory is even
incorporated into many microprocessor chips as on-board memory.
Q-5. Define the term instruction pre-fetching.
Ans- Instruction pre-fetching is a technique in which the processor fetches
instructions from memory into a cache or prefetch queue before they are actually
needed, so that instruction fetch overlaps with the execution of earlier instructions
and the processor does not stall waiting for memory. The same idea appears at the
software level: some operating systems, such as Windows, prefetch (cache) the files a
program needs on startup to make loading faster.
Object
This is the basic unit of object-oriented programming. That is, both data and the
functions that operate on that data are bundled as a unit called an object.
Class
When you define a class, you define a blueprint for an object. This doesn't
actually define any data, but it does define what the class name means, that is,
what an object of the class will consist of and what operations can be performed
on such an object.
OOP has four basic concepts on which it is totally based. Let's have a look at them
individually −
Inheritance − The ability to create a new class from an existing class is called
Inheritance. Using inheritance, we can create a Child class from a Parent class
such that it inherits the properties and methods of the parent class and can have
its own additional properties and methods. For example, if we have a class
Vehicle that has properties like Color, Price, etc, we can create 2 classes like Bike
and Car from it that have those 2 properties and additional properties that are
specialized for them like a car has numberOfWindows while a bike cannot. Same
is applicable to methods.
An Abstract Data Type (ADT) is a type (or class) for objects whose behaviour is defined
by a set of values and a set of operations.
The definition of ADT only mentions what operations are to be performed but not
how these operations will be implemented. It does not specify how data will be
organized in memory and what algorithms will be used for implementing the
operations. It is called “abstract” because it gives an implementation-independent
view. The process of providing only the essentials and hiding the details is known
as abstraction.
The user of a data type does not need to know how it is implemented: for example, we
have been using primitive types like int, float and char with knowledge only of what
operations can be performed on them, without any idea of how they are implemented. So a
user only needs to know what a data type can do, not how it does it. Think of an ADT as
a black box that hides the inner structure and design of the data type. Now we'll
define three ADTs, namely the List ADT, Stack ADT and Queue ADT.
List ADT
The data is generally stored in key sequence in a list, which has a head structure
consisting of a count, pointers and the address of the compare function needed to
compare the data in the list.
The data node contains the pointer to a data structure and a self-referential
pointer which points to the next node in the list.
typedef struct node {
    void *DataPtr;
    struct node *link;
} Node;

typedef struct {
    int count;
    Node *pos;
    Node *head;
    Node *rear;
    int (*compare) (void *argument1, void *argument2);
} LIST;
A list contains elements of the same type arranged in sequential order, and the
following operations can be performed on it.
remove() – Remove the first occurrence of an element from a non-empty list.
Stack ADT
In a Stack ADT implementation, instead of the data being stored in each node, a pointer
to the data is stored. The program allocates memory for the data, and its address is
passed to the stack ADT.
Ans- A node in a singly linked list consists of two parts: a data part and a link part.
The data part of the node stores the actual information represented by the node, while
the link part stores the address of its immediate successor. A one-way chain, or singly
linked list, can be traversed in only one direction.
A linked list is a sequence of links which contain items; each link contains a
connection to another link. The linked list is the second most used data structure
after the array. The following terms are important for understanding the concept of a
linked list.
Link − Each link of a linked list can store a data item called an element.
Next − Each link of a linked list contains a link to the next link, called Next.
LinkedList − A LinkedList contains the connection link to the first link, called First.
Each link carries a data field (or fields) and a link field called next, and each link
is linked to its successor using this next field. The last link carries a null link to
mark the end of the list.
Doubly Linked List − Items can be navigated forward and backward way.
Circular Linked List − The last item contains a link to the first element as next, and
the first element has a link to the last element as prev.
A linked list can also be defined as a collection of objects called nodes that are
randomly stored in memory. A node contains two fields: the data stored at that
particular address, and a pointer containing the address of the next node in memory.
Data are also organized into more complex types of structures. The study of such
data structure, which forms the subject matter of the text, includes the following
three steps:
(3) Quantitative analysis of the structure, which includes determining the amount of
memory needed to store the structure and the time required to process it.
Data Structure Operations
The data appearing in our data structures are processed by means of certain operations.
In fact, the particular data structure that one chooses for a given situation depends
largely on the frequency with which specific operations are performed. This section
introduces the reader to some of the most frequently used of these operations.
(1) Traversing: Accessing each record exactly once so that certain items in the record
may be processed.
(2) Searching: Finding the location of a particular record with a given key value, or
finding the locations of all records which satisfy one or more conditions.
(3) Inserting: Adding a new record to the structure.
(4) Deleting: Removing a record from the structure.
(5) Sorting: Arranging the records in some logical order.
(6) Merging: Combining the records in two different sorted files into a single sorted
file.
Data structures are classified into two types: linear and non-linear.
Linear: A data structure is said to be linear if its elements form a sequence. The
elements of a linear data structure can be represented by means of sequential memory
locations, or the linear relationship between the elements can be represented by means
of pointers or links. Examples: arrays and linked lists.
Q-5. Explain how function calls may be implemented using stacks for return
values.
Ans- Before we jump into the actual material, let us briefly revisit the various ways
in which assembly language accesses data in memory (i.e., the addressing modes). The
following table is adapted from CSAPP (2nd edition):
Type        Form             Operand Value                Name
Immediate   $Imm             Imm                          Immediate
Register    Ea               R[Ea]                        Register addressing
Memory      Imm              M[Imm]                       Direct addressing
Memory      (Ea)             M[R[Ea]]                     Indirect addressing
Memory      Imm(Eb,Ei,s)     M[Imm + R[Eb] + R[Ei]·s]     Scaled indexed addressing

For example, an immediate operand Imm may be written as $0x8048d8e or $48; a register
operand such as %eax has the value R[%eax]; and a direct-addressing operand with
address x has the value M[x].
Main course
Now, let's bring our main course onto the table: understanding how function
works. I'll first clear up some terms we will use during the explanation. Then, we'll
take a look at the stack and understand how it supports function calls. Lastly, we'll
examine two assembly programs and understand the whole picture of function
calls.
Some terms
Let's first consider what the key elements we need in order to form a function:
function name
A function's name is a symbol that represents the address where the function's
code starts.
function arguments
A function's arguments (aka parameters) are the data items that are explicitly given to
the function for processing. For example, in mathematics there is a sin function; if
you were to ask for sin(2), then sin would be the function's name, and 2 would be the
argument.
local variables
Local variables are data storage that a function uses while processing and that is
thrown away when it returns. It's kind of like a scratch pad of paper: functions get a
new piece of paper every time they are activated, and they have to throw it away when
they are finished processing.
return address
The return address is an "invisible" parameter in that it isn't directly used during
the function. The return address is a parameter which tells the function where to
resume executing after the function is completed. This is needed because
functions can be called to do processing from many different parts of our
program, and the function needs to be able to get back to wherever it was called
from. In most programming languages, this parameter is passed automatically
when the function is called. In assembly language, the call instruction handles
passing the return address for you, and ret handles using that address to return
back to where you called the function from.
return value
The return value is the main method of transferring data back to the main program. Most
programming languages only allow a single return value per function.
That causes the stack to grow by 12 bytes. (Keep in mind that this may seem confusing:
the bigger the ESP value, the smaller the stack, because the stack grows downwards in
memory as it gets bigger.)
Indirectly: by adding data elements to the stack with PUSH, or removing data elements
from the stack with POP.
In addition to the stack pointer, which points to the top of the stack (the lower
numerical address), it is often convenient to have a frame pointer (FP) which holds an
address pointing to a fixed location within a frame. Local variables could be
referenced by giving their offsets from ESP; however, as data are pushed onto and
popped off the stack, these offsets change, so such references are not consistent.
Consequently, many compilers use another register, generally called the frame pointer,
for referencing both local variables and parameters, because their distances from FP do
not change with PUSHes and POPs. On Intel CPUs, EBP (Extended Base Pointer) is used for
this purpose; on Motorola CPUs, any address register except A7 (the stack pointer) will
do. Because of the way the stack grows, actual parameters have positive offsets and
local variables have negative offsets from FP. Let us examine a simple C program.
Consistency: The transaction must be fully compliant with the state of the
database as it was prior to the transaction. In other words, the transaction cannot
break the database’s constraints. For example, if a database table’s Phone
Number column can only contain numerals, then consistency dictates that any
transaction attempting to enter an alphabetical letter may not commit.
Isolation: Transaction data must not be available to other transactions until the
original transaction is committed or rolled back.
For reference, one of the easiest ways to describe a database transaction is that it
is any change in a database, any “transaction” between the database components
and the data fields that they contain.
Q-2. Explain the difference between logical and physical data independence.
Ans- Data independence is the capacity to modify the schema definition at one level
without affecting the schema definition at the next higher level.
Physical data independence is the capacity to modify the physical schema without
causing application programs to be rewritten; it hides the details of the storage
structure from user applications.
Logical data independence is the capacity to modify the logical schema without causing
application programs to be rewritten; the logical schema can change without affecting
the external schema.
Object identity: An object retains its identity even if some or all of the values of
variables or definitions of methods change over time.
This concept of object identity is necessary in applications but does not apply to
tuples of a relational database.
Value: A data value is used for identity (e.g., the primary key of a tuple in a
relational database).
Name: A user-supplied name is used for identity (e.g., a file name in a file system).
There are many situations where having the system generate identifiers
automatically is a benefit, since it frees humans from performing that task.
However, this ability should be used with care. System-generated identifiers are
usually specific to the system, and have to be translated if data are moved to a
different database system. System-generated identifiers may be redundant if the
entities being modeled already have unique identifiers external to the system,
e.g., SIN#.
Identifying an object with a unique key (also called an identifier key) is a method
that is commonly used in database management systems. Using identifier keys for
object identity confuses identity and data values. According to the authors, there
are three main problems with this approach:
1. Modifying identifier keys. Identifier keys cannot be allowed to change, even
though they are user-defined descriptive data.
3. "Unnatural" joins. The use of identifier keys causes joins to be used in retrievals
instead of simpler and more direct object retrievals.
There are many operations associated with identity. Since the state of an object is
different from its identity, there are three types of equality predicates for
comparing objects. The most obvious equality predicate is the identical predicate,
which checks whether two objects are actually one and the same object. The two
other equality predicates (shallow equal and deep equal) actually compare the
states of objects. Shallow equal goes one level deep in comparing corresponding
instance variables. Deep equal ignores identities and compares the values of
corresponding base objects. As far as copying objects, the counterparts of deep
and shallow equality are deep and shallow copy.
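A short Python sketch of these ideas (the Account class and its values are invented for illustration): the identical predicate corresponds to `is`, and shallow versus deep copy show the one-level/full-depth distinction described above:

```python
import copy

class Account:
    def __init__(self, tags):
        self.tags = tags  # a mutable instance variable

a = Account([1, 2])
b = a                    # two names for one object: same identity
c = Account([1, 2])      # equal state, but a different identity

# Identical predicate: are they one and the same object?
print(a is b)            # True
print(a is c)            # False

# An object keeps its identity even when its state changes.
a.tags.append(3)
print(a is b, b.tags)    # still True; b sees [1, 2, 3]

# Shallow vs deep copy mirror shallow/deep equality:
shallow = copy.copy(a)   # new object, but it shares the tags list
deep = copy.deepcopy(a)  # new object with its own copy of tags
print(shallow.tags is a.tags)  # True  (one level is shared)
print(deep.tags is a.tags)     # False (fully independent)
```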
Ans- The relational model stores data in the form of tables. This concept was
proposed by Dr. E.F. Codd, a researcher at IBM, in 1970. The relational model
consists of three major components:
1. The set of relations and set of domains that defines the way data can be
represented (data structure).
2. Integrity rules that define the procedure to protect the data (data integrity).
3. Operations that define how the data is manipulated (data manipulation).
In a relation, all values are scalar. That is, at any given row/column position in
the relation there is one and only one value.
Dr. Codd, when formulating the relational model, chose the term “relation”
because it was comparatively free of connotations, unlike, for example, the word
“table”. It is a common misconception that the relational model is so called
because relationships are established between tables. In fact, the name is derived
from the relations on which it is based. Notice that the model requires only that
data be conceptually represented as a relation; it does not specify how the data
should be physically implemented. A relation is a relation provided that it is
arranged in row and column format and its values are scalar. Its existence is
completely independent of any physical representation.
The figure shows a relation with the formal names of the basic components
marked. The entire structure is, as we have said, a relation.
Tuples of a Relation
Each row of data is a tuple. Actually, each row is an n-tuple, but the “n-” is usually
dropped.
Generalized Projection
For example, suppose we have a relation credit-info, as in Figure 3.25, which lists
the credit limit and expenses so far (the credit-balance on the account). If we
want to find how much more each person can spend, we can write the following
expression:

Π customer-name, limit − credit-balance (credit-info)
The attribute resulting from the expression limit − credit-balance does not have a
name. We can apply the rename operation to the result of generalized projection
in order to give it a name. As a notational convenience, renaming of attributes can
be combined with generalized projection as illustrated below:
The second attribute of this generalized projection has been given the name
credit- available. Figure 3.26 shows the result of applying this expression to the
relation in Figure 3.25.
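Assuming a small hand-made credit-info relation (the names and figures below are illustrative, not taken from Figure 3.25), the generalized projection with the renamed attribute can be sketched in Python:

```python
# credit_info tuples: (customer_name, limit, credit_balance) -- illustrative data.
credit_info = [
    ("Curry", 2000, 1750),
    ("Hayes", 1500, 1500),
    ("Jones", 6000,  700),
]

# Generalized projection: each output tuple carries the computed attribute
# limit - credit_balance, renamed to credit_available.
result = [
    {"customer_name": name, "credit_available": limit - balance}
    for name, limit, balance in credit_info
]
for row in result:
    print(row)
```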
Aggregate Functions
For example, applying the aggregate function sum to the collection
{1, 1, 3, 4, 4, 11}
returns the value 24. The aggregate function avg returns the average of the
values. When applied to the preceding collection, it returns the value 4. The
aggregate function count returns the number of elements in the collection,
and returns 6 on the preceding collection. Other common aggregate functions
include min and max, which return the minimum and maximum values in a
collection; they return 1 and 11, respectively, on the preceding collection.
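The aggregate values quoted above can be reproduced directly in Python:

```python
collection = [1, 1, 3, 4, 4, 11]  # the multiset from the text

print(sum(collection))                     # sum   -> 24
print(sum(collection) / len(collection))   # avg   -> 4.0
print(len(collection))                     # count -> 6
print(min(collection), max(collection))    # min, max -> 1 11
```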
Null Values
In this section, we define how the various relational algebra operations deal with
null values and complications that arise when a null value participates in an
arithmetic operation or in a comparison. As we shall see, there is often more than
one possible way of dealing with null values, and as a result our definitions can
sometimes be arbitrary. Operations and comparisons on null values should
therefore be avoided, where possible.
Since the special value null indicates “value unknown or nonexistent,” any
arithmetic operations (such as +, −, ∗, /) involving null values must return a null
result.
Similarly, any comparisons (such as <, <=, >, >=, ≠) involving a null value evaluate
to special value unknown; we cannot say for sure whether the result of the
comparison is true or false, so we say that the result is the new truth value
unknown.
Comparisons involving nulls may occur inside Boolean expressions involving the
and, or, and not operations. We must therefore define how the three Boolean
operations deal with the truth value unknown.
• and: (true and unknown) = unknown; (false and unknown) = false; (unknown
and unknown) = unknown.
• or: (true or unknown) = true; (false or unknown) = unknown; (unknown or
unknown) = unknown.
• not: (not unknown) = unknown.
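A minimal Python sketch of this three-valued logic, using None to stand for the truth value unknown and following the SQL rules (the or and not cases are filled in the same SQL style):

```python
# Three-valued logic: None stands in for the truth value "unknown".
def tv_and(p, q):
    if p is False or q is False:
        return False
    if p is None or q is None:
        return None
    return True

def tv_or(p, q):
    if p is True or q is True:
        return True
    if p is None or q is None:
        return None
    return False

def tv_not(p):
    return None if p is None else not p

print(tv_and(True, None))   # unknown
print(tv_and(False, None))  # False
print(tv_or(True, None))    # True
print(tv_or(False, None))   # unknown
print(tv_not(None))         # unknown
```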
We are now in a position to outline how the different relational operations deal
with null values. Our definitions follow those used in the SQL language.
• join: Joins can be expressed as a cross product followed by a selection. Thus, the
definition of how selection handles nulls also defines how join operations handle
nulls.
In a natural join, say r ⋈ s, we can see from the above definition that if two tuples, tr
∈ r and ts ∈ s, both have a null value in a common attribute, then the tuples do
not match.
• projection: The projection operation treats nulls just like any other value when
eliminating duplicates. Thus, if two tuples in the projection result are exactly the
same, and both have nulls in the same fields, they are treated as duplicates.
The decision is a little arbitrary since, without knowing the actual value, we do not
know if the two instances of null are duplicates or not.
When nulls occur in aggregated attributes, the operation deletes null values at
the outset, before applying aggregation. If the resultant multiset is empty, the
aggregate result is null.
Note that the treatment of nulls here is different from that in ordinary arithmetic
expressions; we could have defined the result of an aggregate operation as null if
even one of the aggregated values is null. However, this would mean a single
unknown value in a large group could make the aggregate result on the group to
be null, and we would lose a lot of useful information.
• outer join: Outer join operations behave just like join operations, except on
tuples that do not occur in the join result. Such tuples may be added to the result
(depending on whether the operation is the left outer join ⟕, the right outer join
⟖, or the full outer join ⟗), padded with nulls.
BCA 305 (COMPUTER NETWORKS)
Ans- Point-to-Point Protocol (PPP) is a communication protocol of the data link
layer that is used to transmit multiprotocol data between two directly connected
(point-to-point) computers. It is a byte-oriented protocol that is widely used in
broadband communications involving heavy loads and high speeds. Since it is a
data link layer protocol, data is transmitted in frames. It is defined in RFC 1661.
Among its services, PPP defines the procedure for establishing a link between two
points and for exchanging data.
Components of PPP
Network Control Protocols (NCPs) − These protocols are used for negotiating the
parameters and facilities for the network layer. There is one NCP for every
higher-layer protocol supported by PPP; for example, IPCP (Internet Protocol
Control Protocol) is the NCP for IP.
PPP Frame
PPP is a byte-oriented protocol where each field of the frame is composed of
one or more bytes. The fields of a PPP frame are −
Flag − 1 byte that marks the beginning and the end of the frame. The bit pattern
of the flag is 01111110.
Protocol − 1 or 2 bytes that define the type of data contained in the payload field.
Payload − This carries the data from the network layer. The maximum length of
the payload field is 1500 bytes. However, this may be negotiated between the
endpoints of communication.
FCS − A 2-byte or 4-byte frame check sequence for error detection. The
standard code used is CRC (cyclic redundancy check).
Byte Stuffing in PPP Frame − Byte stuffing is used in the PPP payload field
whenever the flag sequence appears in the message, so that the receiver does not
mistake it for the end of the frame. The escape byte, 01111101, is stuffed before
every byte that is the same as the flag byte or the escape byte. The receiver, on
receiving the message, removes the escape bytes before passing the payload on to
the network layer.
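The stuffing scheme described above can be sketched in a few lines of Python. This is a simplified sketch: real PPP (RFC 1662) additionally XORs each escaped byte with 0x20, a detail omitted here:

```python
FLAG = 0x7E    # 01111110, marks frame boundaries
ESCAPE = 0x7D  # 01111101, the escape byte

def stuff(payload: bytes) -> bytes:
    """Insert the escape byte before any flag/escape byte in the payload."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESCAPE):
            out.append(ESCAPE)
        out.append(b)
    return bytes(out)

def unstuff(data: bytes) -> bytes:
    """Receiver side: drop each escape byte, keep the byte that follows it."""
    out = bytearray()
    escaped = False
    for b in data:
        if not escaped and b == ESCAPE:
            escaped = True
            continue
        out.append(b)
        escaped = False
    return bytes(out)

msg = bytes([0x01, FLAG, 0x02, ESCAPE])
framed = stuff(msg)
print(framed.hex())                 # 017d7e027d7d
print(unstuff(framed) == msg)       # True: round trip recovers the payload
```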
Both these algorithms are exactly the reverse of each other, so we will cover the
Agglomerative Hierarchical clustering algorithm in detail.
Agglomerative Hierarchical clustering - This algorithm works by grouping the
data one by one on the basis of the nearest distance measure over all the pairwise
distances between the data points. The distance between groups is then
recalculated, but which distance should be considered once groups have been
formed? For this there are many available methods. Some of them are:
1) single-linkage (minimum) distance,
2) complete-linkage (maximum) distance,
3) average-linkage distance, and
4) centroid distance.
In this way we go on grouping the data until one cluster is formed. Then, on the
basis of the dendrogram graph, we can decide how many clusters are actually
present.
Let X = {x1, x2, x3, ..., xn} be the set of data points.
1) Begin with the disjoint clustering having level L(0) = 0 and sequence number m
= 0.
2) Find the least distance pair of clusters in the current clustering, say pair (r), (s),
according to d[(r),(s)] = min d[(i),(j)] where the minimum is over all pairs of
clusters in the current clustering.
3) Increment the sequence number: m = m + 1. Merge clusters (r) and (s) into a
single cluster to form the next clustering m. Set the level of this clustering to
L(m) = d[(r),(s)].
4) Update the distance matrix by deleting the rows and columns corresponding to
clusters (r) and (s) and adding a row and column for the newly formed cluster.
5) If all the data points are in one cluster, stop; otherwise, repeat from step 2.
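The merging loop above can be sketched in plain Python. This toy version uses 1-D points and single-linkage (minimum) distance, both chosen here purely for illustration:

```python
# Minimal single-linkage agglomerative clustering on 1-D points (illustrative).
def agglomerate(points, target_clusters=1):
    clusters = [[p] for p in points]          # step 1: every point is its own cluster
    while len(clusters) > target_clusters:
        # step 2: find the pair of clusters at least distance (single linkage)
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        # step 3: merge the closest pair into a single cluster
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

clusters = agglomerate([1, 2, 9, 10, 25], target_clusters=3)
print(clusters)  # [[1, 2], [9, 10], [25]]
```

Stopping at a chosen number of clusters stands in for cutting the dendrogram at a given level.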
Advantages
Disadvantages
1) The algorithm can never undo what was done previously.
2) A time complexity of at least O(n² log n) is required, where ‘n’ is the number of
data points.
3) Based on the type of distance matrix chosen for merging, different algorithms
can suffer from one or more of the following:
BGP used for routing within an autonomous system is called Interior Border
Gateway Protocol, Internal BGP (iBGP). In contrast, the Internet application of the
protocol is called Exterior Border Gateway Protocol, External BGP (eBGP).
Operation
When BGP runs between two peers in the same autonomous system (AS), it is
referred to as Internal BGP (iBGP or Interior Border Gateway Protocol). When it
runs between different autonomous systems, it is called External BGP (eBGP or
Exterior Border Gateway Protocol). Routers on the boundary of one AS
exchanging information with another AS are called border or edge routers or
simply eBGP peers and are typically connected directly, while iBGP peers can be
interconnected through other intermediate routers. Other deployment topologies
are also possible, such as running eBGP peering inside a VPN tunnel, allowing two
remote sites to exchange routing information in a secure and isolated manner.
The main difference between iBGP and eBGP peering is in the way routes that
were received from one peer are typically propagated by default to other peers:
New routes learned from an eBGP peer are re-advertised to all iBGP and eBGP
peers.
New routes learned from an iBGP peer are re-advertised to all eBGP peers only.
These route-propagation rules effectively require that all iBGP peers inside an AS
are interconnected in a full mesh with iBGP sessions.
How routes are propagated can be controlled in detail via the route-maps
mechanism. This mechanism consists of a set of rules. Each rule describes, for
routes matching some given criteria, what action should be taken. The action
could be to drop the route, or it could be to modify some attributes of the route
before inserting it in the routing table.
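The rule-list idea can be sketched as follows. This is a hypothetical illustration: the predicates, attribute names, and route representation are invented here, and real route-maps are router configuration, not application code:

```python
# Hypothetical route-map: ordered rules; the first matching rule decides the action.
def apply_route_map(route, rules):
    for match, action in rules:
        if match(route):
            return action(route)   # returning None means "drop the route"
    return route                   # no rule matched: accept unchanged

rules = [
    # drop anything whose AS path contains AS 64512
    (lambda r: 64512 in r["as_path"], lambda r: None),
    # lower the local preference of routes inside 10.0.0.0/8
    (lambda r: r["prefix"].startswith("10."), lambda r: {**r, "local_pref": 50}),
]

r1 = {"prefix": "10.1.0.0/16", "as_path": [64512, 64513], "local_pref": 100}
r2 = {"prefix": "10.2.0.0/16", "as_path": [64500], "local_pref": 100}
out1 = apply_route_map(r1, rules)
out2 = apply_route_map(r2, rules)
print(out1)  # None: dropped
print(out2)  # same route with local_pref modified to 50
```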
Extensions negotiation
During the peering handshake, when OPEN messages are exchanged, BGP
speakers can negotiate optional capabilities of the session,[6] including
multiprotocol extensions[7] and various recovery modes. If the multiprotocol
extensions to BGP are negotiated at the time of creation, the BGP speaker can
prefix the Network Layer Reachability Information (NLRI) it advertises with an
address family prefix. These families include the IPv4 (default), IPv6, IPv4/IPv6
Virtual Private Networks and multicast BGP. Increasingly, BGP is used as a
generalized signaling protocol to carry information about routes that may not be
part of the global Internet, such as VPNs.[8]
In order to make decisions in its operations with peers, a BGP peer uses a simple
finite state machine (FSM) that consists of six states: Idle; Connect; Active;
OpenSent; OpenConfirm; and Established. For each peer-to-peer session, a BGP
implementation maintains a state variable that tracks which of these six states
the session is in. BGP defines the messages that each peer should exchange in
order to change the session from one state to another.
The first state is the Idle state. In the Idle state, BGP initializes all resources,
refuses all inbound BGP connection attempts and initiates a TCP connection to the
peer. The second state is Connect. In the Connect state, the router waits for the
TCP connection to complete and transitions to the OpenSent state if successful. If
unsuccessful, it starts the ConnectRetry timer and transitions to the Active state
upon expiration. In the Active state, the router resets the ConnectRetry timer to
zero and returns to the Connect state. In the OpenSent state, the router sends an
Open message and waits for one in return in order to transition to the
OpenConfirm state. Keepalive messages are exchanged and, upon successful
receipt, the router is placed into the Established state. In the Established state,
the router can send and receive: Keepalive; Update; and Notification messages to
and from its peer.
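The six states and the happy-path transitions described above can be sketched as a table-driven FSM. The event names here are a simplification invented for illustration; real implementations track many more events and timers:

```python
# Toy sketch of the six-state BGP session FSM (hypothetical event names).
TRANSITIONS = {
    ("Idle",        "start"):                 "Connect",
    ("Connect",     "tcp_established"):       "OpenSent",
    ("Connect",     "connect_retry_expired"): "Active",
    ("Active",      "retry"):                 "Connect",
    ("OpenSent",    "open_received"):         "OpenConfirm",
    ("OpenConfirm", "keepalive_received"):    "Established",
}

def step(state, event):
    # any unexpected event drops the session back to Idle
    return TRANSITIONS.get((state, event), "Idle")

state = "Idle"
for event in ["start", "tcp_established", "open_received", "keepalive_received"]:
    state = step(state, event)
    print(state)
# the session ends in the Established state
```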
BGP confederation
However, these alternatives can introduce problems of their own, including the
following:
route oscillation
sub-optimal routing
increase of BGP convergence time[17]
Additionally, route reflectors and BGP confederations were not designed to ease
BGP router configuration. Nevertheless, these are common tools for experienced
BGP network architects. These tools may be combined, for example, as a
hierarchy of route reflectors.
Q-4. What are the basic advantages of coaxial cable in LAN implementation?
Ans- GREMCO’s coaxial cables (coax cables) are produced for optimal use in high-
frequency ranges. We focus on achieving the best possible electrical values, such
as the attenuation, capacity and characteristic impedance. Due to the high-quality
outer conductor, the inner conductors are extremely well shielded from
interference radiation. Owing to our high storage capacity, GREMCO’s coaxial
cables are also available upon demand in different variations (RG400, RG142, etc.)
at prices naturally in line with market requirements.
They are found in many applications, such as in satellite dishes and aircraft GPS
receivers, due to the versatility of their possible uses.
Ans- The two main characteristics that identify and differentiate one encryption
algorithm from another are its ability to secure the protected data against attacks
and its speed and efficiency in doing so. This paper provides a performance
comparison between four of the most common encryption algorithms: DES, 3DES,
Blowfish and AES (Rijndael). The comparison has been conducted by running
several encryption settings to process different sizes of data blocks to evaluate
the algorithms' encryption/decryption speed. The simulation has been conducted
using the C# language.
1. Introduction
As the importance and the value of data exchanged over the Internet or other
media types increase, the search for the best solution to offer the necessary
protection against data thieves' attacks, while providing these services in a
timely manner, is one of the most active subjects in the security-related
communities.
This paper tries to present a fair comparison between the most common and
used algorithms in the data encryption field. Since our main concern here is the
performance of these algorithms under different settings, the presented
comparison takes into consideration the behavior and the performance of the
algorithm when different data loads are used.
Section 2 will give a quick overview of cryptography and its main usages in our
daily life; in addition, it will explain some of the most used terms in
cryptography along with a brief description of each of the compared algorithms,
to allow the reader to understand the key differences between them. Section 3
will show the results achieved by other contributions and their conclusions.
Section 4 will walk through the setup environment, the settings, and the system
components used. Section 5 illustrates the performance evaluation methodology
and the chosen settings to allow a better comparison. Section 6 gives a thorough
discussion of the simulation results, and finally Section 7 concludes this paper
by summarizing the key points and other related considerations.
2. Cryptography: Overview
An overview of the main goals behind using cryptography will be discussed in this
section along with the common terms used in this field.
This section explains the five main goals behind using Cryptography.
Every security system must provide a bundle of security functions that can assure
the secrecy of the system. These functions are usually referred to as the goals of
the security system. These goals can be listed under the following five main
categories [Earle2005]:
Authentication: This means that before sending and receiving data using the
system, the receiver and sender identity should be verified.
Confidentiality: Usually this function (also called secrecy) is how most people
identify a secure system. It means that only the authenticated parties are able
to interpret the message content, and no one else.
Integrity: Integrity means that the content of the communicated data is assured
to be free from any type of modification between the end points (sender and
receiver). The basic form of integrity is the packet checksum in IPv4 packets.
Non-Repudiation: This function implies that neither the sender nor the receiver
can falsely deny that they have sent a certain message.
Service Reliability and Availability: Secure systems usually come under attack by
intruders, which may affect their availability and the type of service offered to
their users. Such systems should provide a way to grant their users the quality of
service they expect.
2.2 Block Ciphers and Stream Ciphers
Before starting to describe the key characteristics of block cipher, the definition of
cipher word must be presented. "A cipher is an algorithm for performing
encryption (reverse is decryption) "[Wikipedia-BC].
In this method, data is encrypted and decrypted in the form of blocks. In its
simplest mode, the plain text is divided into blocks which are then fed into the
cipher system to produce blocks of cipher text.
ECB (Electronic Codebook Mode) is the basic form of block cipher, where data
blocks are encrypted directly to generate their corresponding ciphered blocks
(shown in Fig. 2). Modes of operation are discussed in more detail later.
This section explains the two most common modes of operation in block cipher
encryption - ECB and CBC - with a quick visit to other modes.
There are many variants of block cipher, where different techniques are used to
strengthen the security of the system. The most common modes are: ECB
(Electronic Codebook Mode), CBC (Cipher Block Chaining Mode), and OFB (Output
Feedback Mode). ECB mode is the simplest of the three. The CBC mode uses the
cipher block from the previous step of encryption in the current one, which forms
a chain-like encryption process. OFB operates on plain text in a way similar to
the stream cipher described below, where the encryption key used in every step
depends on the encryption key from the previous step.
There are many other modes, like CTR (counter), CFB (Cipher Feedback), or
3DES-specific modes, that are not discussed in this paper, because the main
concentration here is on the ECB and CBC modes.
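The difference between direct encryption (ECB) and chained encryption (CBC) can be illustrated with a deliberately toy one-byte "cipher" (XOR with a fixed key; not secure, purely illustrative):

```python
# Toy 1-byte "block cipher" used only to contrast ECB and CBC chaining.
KEY = 0x5A
IV = 0x00

def encrypt_block(block):
    return block ^ KEY

def ecb_encrypt(blocks):
    # ECB: every block is encrypted independently
    return [encrypt_block(b) for b in blocks]

def cbc_encrypt(blocks):
    # CBC: each block is mixed with the previous cipher block before encryption
    out, prev = [], IV
    for b in blocks:
        c = encrypt_block(b ^ prev)
        out.append(c)
        prev = c
    return out

plaintext = [0x11, 0x11, 0x11]   # three identical plaintext blocks
ecb = ecb_encrypt(plaintext)
cbc = cbc_encrypt(plaintext)
print(ecb)  # [75, 75, 75]: identical blocks leak as identical ciphertext
print(cbc)  # [75, 0, 75]: chaining breaks the uniform pattern
```

This is exactly why the paper treats CBC as offering better protection than ECB despite its extra cost.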
Data encryption procedures are mainly categorized into two categories depending
on the type of security keys used to encrypt/decrypt the secured data. These two
categories are: Asymmetric and Symmetric encryption techniques.
In this type of encryption, the sender and the receiver agree on a secret (shared)
key. Then they use this secret key to encrypt and decrypt their sent messages. Fig.
4 shows the process of symmetric cryptography. Node A and B first agree on the
encryption technique to be used in encryption and decryption of communicated
data. Then they agree on the secret key that both of them will use in this
connection. After the encryption setup finishes, node A starts sending its data
encrypted with the shared key; on the other side, node B uses the same key to
decrypt the encrypted messages.
Fig.4 Symmetric Encryption
The main concern with symmetric encryption is how to share the secret key
securely between the two peers. If the key becomes known for any reason, the
whole system collapses. Key management for this type of encryption is
troublesome, especially if a unique secret key is used for each peer-to-peer
connection; then the total number of secret keys to be saved and managed for n
nodes will be n(n-1)/2 [Edney2003].
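The n(n-1)/2 key count is easy to check:

```python
# Number of pairwise secret keys needed for n nodes: n(n-1)/2
def symmetric_keys(n):
    return n * (n - 1) // 2

for n in (2, 10, 100):
    print(n, symmetric_keys(n))
# 2 -> 1, 10 -> 45, 100 -> 4950; with public-key cryptography each node
# needs only its own key pair, i.e. 2n keys in total.
```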
Asymmetric encryption is the other type of encryption where two keys are used.
To explain more, what Key1 can encrypt only Key2 can decrypt, and vice versa. It
is also known as Public Key Cryptography (PKC), because users tend to use two
keys: public key, which is known to the public, and private key which is known
only to the user. Figure 5 below illustrates the use of the two keys between node
A and node B. After agreeing on the type of encryption to be used in the
connection, node B sends its public key to node A. Node A uses the received
public key to encrypt its messages. Then when the encrypted messages arrive,
node B uses its private key to decrypt them.
To get the benefits of both methods, a hybrid technique is usually used. In this
technique, asymmetric encryption is used to exchange the secret key, and
symmetric encryption is then used to transfer data between sender and receiver.
2.5 Compared Algorithms
This section intends to give the readers the necessary background to understand
the key differences between the compared algorithms.
Blowfish is a variable-length-key, 64-bit block cipher. The Blowfish algorithm was
first introduced in 1993. This algorithm can be optimized in hardware
applications, though it is mostly used in software applications. Though it suffers
from a weak-keys problem, no attack is known to be successful against it
[BRUCE1996][Nadeem2005].
Table 1 contains the speed benchmarks for some of the most commonly used
cryptographic algorithms. All were coded in C++, compiled with Microsoft Visual
C++ .NET 2003 (whole program optimization, optimize for speed, P4 code
generation), and ran on a Pentium 4 2.1 GHz processor under Windows XP SP 1.
386 assembly routines were used for multiple-precision addition and subtraction.
SSE2 intrinsics were used for multiple-precision multiplication.
It can be noticed from the table that not all the modes have been tried for all the
algorithms. Nonetheless, these results give a good indication of what the
presented comparison results should look like.
It is also shown that Blowfish and AES have the best performance of the
algorithms listed, and both of them are known to have stronger encryption (i.e.
to be more resistant to data attacks) than the other two.
Algorithm            MB Processed   Time Taken (s)   MB/Second
Blowfish             256            3.976            64.386
(missing)            256            4.196            61.010
(missing)            256            4.817            53.145
(missing)            256            5.308            48.229
(missing)            256            4.436            57.710
(missing)            256            4.837            52.925
Rijndael (128) CFB   256            5.378            47.601
(missing)            256            4.617            55.447
DES                  128            5.998            21.340
(3DES) DES-XEX3      128            6.159            20.783
(3DES) DES-EDE3      64             6.499            9.848
Tables 2 and 3 show the results of their experiments, which they conducted on
two different machines: a P-II 266 MHz and a P-4 2.4 GHz.
From the results it is easy to observe that Blowfish has an advantage over other
algorithms in terms of throughput. [Nadeem2005] has also conducted comparison
between the algorithms in stream mode using CBC, but since this paper is more
focused on block cipher the results were omitted.
The results showed that Blowfish has a very good performance compared to
other algorithms. Also it showed that AES has a better performance than 3DES
and DES. Amazingly, it also shows that 3DES has almost 1/3 the throughput of
DES; in other words, it needs three times as long as DES to process the same
amount of data.
[Dhawan2002] has also done experiments for comparing the performance of the
different encryption algorithms implemented inside .NET framework. Their results
are close to the ones shown before (Figure 6).
Fig. 6 Comparison results using .NET implementations [Dhawan2002]
The comparison was performed on the following algorithms: DES, Triple DES
(3DES), RC2 and AES (Rijndael). The results show that AES outperformed the
other algorithms in both the number of requests processed per second under
different user loads, and in the response time in different user-load situations.
4. Simulation Setup
This section describes the simulation environment and the used system
components.
The implementation uses managed wrappers for DES and 3DES available in
System.Security.Cryptography that wrap the unmanaged implementations
available in CryptoAPI: DESCryptoServiceProvider and
TripleDESCryptoServiceProvider, respectively. Rijndael, in contrast, has a pure
managed implementation available in System.Security.Cryptography
(RijndaelManaged), which was used in the tests.
Table 4 shows the algorithms settings used in this experiment. These settings are
used to compare the results initially with the result obtained from [Dhawan2002].
Algorithm   Key Size (Bits)   Block Size (Bits)
DES         64                64
3DES        192               64
Rijndael    256               128
Blowfish    448               64
3DES and AES support other settings, but these settings represent the maximum
security settings they can offer. Longer key lengths mean more effort must be put
forward to break the encrypted data security.
Since the evaluation test is meant to evaluate the results when using a block
cipher, and due to the memory constraints on the test machine (1 GB), the test
breaks the load data into smaller blocks. The load data are divided into data
blocks that are created using the RandomNumberGenerator class available in the
System.Security.Cryptography namespace.
The experiments are conducted on an AMD 3500+ 64-bit processor with 1 GB of
RAM. The simulation program is compiled using the default settings in Visual
Studio .NET 2003 for C# Windows applications. The experiments are performed
a couple of times to ensure that the results are consistent and are valid for
comparing the different algorithms.
By considering different sizes of data blocks (0.5MB to 20MB) the algorithms were
evaluated in terms of the time required to encrypt and decrypt the data block. All
the implementations were exact to make sure that the results will be relatively
fair and accurate.
The simulation program (shown below in Fig. 7) accepts three inputs: algorithm,
cipher mode, and data block size. After a successful execution, the data
generated, encrypted, and decrypted are shown. Notice that most of the
characters cannot be displayed, since they have no printable character
representation.
Another comparison is made after the successful encryption/decryption process
to make sure that all the data are processed in the right way by comparing the
generated data (the original data blocks) and the decrypted data block generated
from the process.
6. Simulation Results
This section will show the results obtained from running the simulation program
using different data loads. The results show the impact of changing data load on
each algorithm and the impact of Cipher Mode (Encryption Mode) used.
The first set of experiments was conducted using ECB mode; the results are
shown in Figure 8 below. The results show the superiority of the Blowfish
algorithm
over other algorithms in terms of the processing time. It shows also that AES
consumes more resources when the data block size is relatively big. The results
shown here are different from the results obtained by [Dhawan2002] since the
data block sizes used here are much larger than the ones used in their
experiment.
Another point to notice here is that 3DES always requires more time than DES
because of its triple-phase encryption characteristic. Blowfish, although it has a
long key (448 bits), outperformed the other encryption algorithms. DES and 3DES
are known to have worm holes in their security mechanisms; Blowfish and AES, on
the other hand, do not have any so far.
These results have nothing to do with other loads on the computer, since each
single experiment was conducted multiple times, yielding almost exactly the same
results. The DES, 3DES and AES implementations in .NET are considered to be
among the best available.
Fig.8 Performance Results with ECB Mode
As expected, CBC requires more processing time than ECB because of its
key-chaining nature. The results shown in Fig. 9 indicate also that the extra time
added is not significant for many applications, knowing that CBC is much better
than ECB in terms of protection. The difference between the two modes is hard to
see with the naked eye; the results showed that the average difference between
ECB and CBC is 0.059896 seconds, which is relatively small.
This section showed the simulation results obtained by running the four
compared encryption algorithms using different cipher modes. Different loads
have been used to determine the processing power and performance of the
compared algorithms.
7. Conclusion
The presented simulation results showed that Blowfish performs better than the
other common encryption algorithms tested. Blowfish has no known security weak
points so far, which makes it an excellent candidate to be considered as a
standard encryption algorithm. AES showed poor performance results compared
to the other algorithms, since it requires more processing power. Using CBC
mode added extra processing time, but overall it was relatively negligible,
especially for applications that require more secure encryption of relatively
large data blocks.
References
[RFC2828] "Internet Security Glossary", http://www.faqs.org/rfcs/rfc2828.html
[Edney2003] "Real 802.11 Security: Wi-Fi Protected Access and 802.11i",
Addison-Wesley, 2003
[TropSoft] "DES Overview" [explains how DES works in detail, its features and
weaknesses]
[Bruce1996] Bruce Schneier, "Applied Cryptography", John Wiley & Sons, Inc.,
1996
[Crypto++] "Crypto++ benchmark",
http://www.eskimo.com/~weidai/benchmarks.html [results of comparing tens of
encryption algorithms using different settings]
Ans- Error handling refers to the response and recovery procedures for error
conditions present in a software application. In other words, it is the process of
anticipating, detecting and resolving application errors, programming errors or
communication errors. Error handling helps maintain the normal flow of program
execution. In fact, many applications face numerous design challenges when
considering error-handling techniques.
Error handling helps in dealing with both hardware and software errors
gracefully and allows execution to resume after an interruption. When it comes
to error handling in software, either the programmer develops the necessary
code to handle errors or makes use of software tools to handle them. In cases
where errors cannot be classified, error handling is usually done by returning
special error codes. Special applications known as error handlers are available
for certain applications to help with error handling. These applications can
anticipate errors, thereby helping the program recover without actually
terminating.
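As a minimal sketch of the error-code approach described above (the class and method names here are hypothetical, chosen only for illustration), a detected error can be resolved locally so that the normal flow of the program continues:

```java
public class ErrorHandlingDemo {
    // Returns a special error code (-1) when parsing fails, instead of
    // letting the error propagate and terminate the application.
    static int parseOrErrorCode(String input) {
        try {
            return Integer.parseInt(input);
        } catch (NumberFormatException e) {
            // the error is anticipated, detected, and resolved here;
            // execution resumes normally after this method returns
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrErrorCode("42"));            // prints 42
        System.out.println(parseOrErrorCode("not a number"));  // prints -1
    }
}
```

The caller only has to check the returned code, which is the recovery style mentioned above for errors that cannot be classified.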
Errors are commonly classified into four types:
Logical errors
Generated errors
Compile-time errors
Runtime errors
As errors can be fatal, error handling is one of the crucial areas for
application designers and developers, regardless of the application being
developed or the programming language used. In worst-case scenarios, the
error-handling mechanism forces the application to log the user off and shut
down the system.
Ans- In the study of logic, there are two types of statements: the conditional
statement and the bi-conditional statement. These are formed by combining two
statements and are therefore called compound statements. For example: "If it
rains, then we don't play." This is a combination of two statements. Such
statements are widely used in programming languages such as C and C++. Let us
learn more here with examples.
A conditional statement p ⇒ q can be read in any of the following equivalent
ways:
p implies q
p is sufficient for q
q is necessary for p
p ⇒ q
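Since p ⇒ q is logically equivalent to ¬p ∨ q (it is false only when p is true and q is false), the readings above can be checked with a small program. This is a minimal sketch; the class and method names are illustrative, not from the original text:

```java
public class ImplicationDemo {
    // p => q is equivalent to (!p || q): false only when p is true and q is false
    static boolean implies(boolean p, boolean q) {
        return !p || q;
    }

    public static void main(String[] args) {
        boolean[] values = {false, true};
        // print the full truth table for p => q
        for (boolean p : values)
            for (boolean q : values)
                System.out.println("p=" + p + " q=" + q + "  p=>q=" + implies(p, q));
    }
}
```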
The syntax of the For...Next loop in VB.Net is:
For counter [ As datatype ] = start To end [ Step step ]
   [ statements ]
   [ Continue For ]
   [ statements ]
   [ Exit For ]
   [ statements ]
Next [ counter ]
Example
Module loops
   Sub Main()
      Dim a As Byte
      ' for loop execution from 10 to 20
      For a = 10 To 20
         Console.WriteLine("value of a: {0}", a)
      Next
      Console.ReadLine()
   End Sub
End Module
When the above code is compiled and executed, it produces the following result −
value of a: 10
value of a: 11
value of a: 12
value of a: 13
value of a: 14
value of a: 15
value of a: 16
value of a: 17
value of a: 18
value of a: 19
value of a: 20
If you want to display only the even numbers between 10 and 20, for example,
you can use a step size of 2 −
Module loops
   Sub Main()
      Dim a As Byte
      For a = 10 To 20 Step 2
         Console.WriteLine("value of a: {0}", a)
      Next
      Console.ReadLine()
   End Sub
End Module
When the above code is compiled and executed, it produces the following result −
value of a: 10
value of a: 12
value of a: 14
value of a: 16
value of a: 18
value of a: 20
Ans- Although the trackpad is taking over as the primary pointing device in the
laptop form factor, many users still prefer the mouse, as it provides better
control and more functionality for precision work such as gaming or content
editing.
To increase user satisfaction, Windows often implements many additional
features. However, some of these may backfire, and Microsoft has often faced
backlash for releasing buggy and untested content. Windows 10 often
misbehaves, and many users report a multitude of errors, which hamper their use
of the device.
One such error that I shall discuss today is one where Windows 10 keeps scrolling
down automatically.
One feature in Windows 10 that I find very useful is inactive window scrolling.
Personally, this lets me adjust the on-screen content without having to change
the active window. However, this feature is also prone to bugs, and many users
have reported that turning this setting on led to their mouse scrolling on its
own.
To turn this feature off:
Open Windows Settings. You can do so from the Start menu, or use the keyboard
shortcut Win + I.
Click on Devices, then select Mouse.
In the right pane, turn off the toggle below 'Scroll inactive windows when I
hover over them'.
If the device keeps scrolling automatically without any user input, check if there
are any problems with your keyboard, especially the down arrow key and the
Page down key.
Some users on Microsoft Community have discussed that there was indeed a
hardware problem with their keyboard, which led to Windows 10 scrolling by
itself without any mouse input. This can happen due to many reasons like dust,
damage from use, etc.
Make sure that there is no physical damage to your keyboard or your mouse
wheel. If there is, repair the device if you can, or take it to a service
center and get the keys replaced. Plug in the repaired device and check whether
Windows 10 still keeps scrolling down.
To eliminate the possibility of an outdated driver, check for and install any
available mouse driver updates if Windows 10 keeps scrolling down.
Note: this assumes that you are on the latest build of Windows and have all
updates installed.
To update the mouse drivers, here are the steps you can follow:
Right-click the Start button and select Device Manager.
Expand the 'Mice and other pointing devices' section.
Right-click your mouse device, select 'Update driver', and let Windows search
automatically for drivers.
Q-5. Compare and contrast between conditional operator and logical operator.
The conditional AND operator (&&) performs a logical AND of its operands of
Boolean type, but evaluates the second operand only if necessary. It is similar
to the logical AND operator (&), except that when the first operand evaluates
to false, the second operand is not evaluated at all, whereas the logical
operator (&) always evaluates both operands. This short-circuit behaviour is
possible because the "&&" operation can be true only if both operands evaluate
to true.
For example, to validate a number to be within an upper and a lower limit, the
logical AND operation can be performed on the two conditions checking for the
upper and lower limit, which are expressed as Boolean expressions.
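The contrast between && and & can be observed with a helper that has a visible side effect (the class and method names below are hypothetical, chosen only to count evaluations):

```java
public class OperatorDemo {
    static int calls = 0;

    // upper-limit check with a side effect, so we can observe
    // whether the second operand was actually evaluated
    static boolean belowUpperLimit(int n) {
        calls++;
        return n <= 100;
    }

    public static void main(String[] args) {
        int n = -5;

        calls = 0;
        boolean a = (n >= 0) && belowUpperLimit(n); // conditional AND: short-circuits
        System.out.println("&& evaluated the second operand " + calls + " time(s)"); // 0

        calls = 0;
        boolean b = (n >= 0) & belowUpperLimit(n);  // logical AND: evaluates both
        System.out.println("&  evaluated the second operand " + calls + " time(s)"); // 1
    }
}
```

With n = -5 the lower-bound check fails, so && never calls the upper-limit check, while & calls it anyway; both expressions still evaluate to false.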
Conditional logical operators are left-associative, which means that when they
occur multiple times in an expression, they are evaluated in order from left to
right.