
1. a. What are the reasons for studying the concepts of programming languages?

Reasons for Studying Concepts of Programming Languages

Increased ability to express ideas.

 It is believed that the depth at which we think is influenced by the expressive power of the language in which we communicate our thoughts. It is difficult for people to conceptualize structures they can't describe, verbally or in writing.
 The language in which developers build S/W places limits on the kinds of control structures, data structures, and abstractions they can use.
 Awareness of a wider variety of P/L features can reduce such limitations in S/W development.
 Can language constructs be simulated in other languages that do not support those constructs directly?

Improved background for choosing appropriate languages

 Many programmers, when given a choice of languages for a new project, continue to use the language with which they are most familiar, even if it is poorly suited to the project.
 If these programmers were familiar with the other languages available, they would be in a better position to make informed language choices.

Greater ability to learn new languages

 Programming languages are still in a state of continuous evolution, which means continuous learning is essential.
 Programmers who understand the concepts of OO programming will have an easier time learning Java.
 Once a thorough understanding of the fundamental concepts of languages is acquired, it becomes easier to see how those concepts are incorporated into the design of the language being learned.
Understand the significance of implementation

 An understanding of implementation issues leads to an understanding of why languages are designed the way they are.
 This in turn leads to the ability to use a language more intelligently, as it was designed to be used.

Ability to design new languages

 The more languages you know, the better your understanding of programming language concepts becomes.
 In some cases, a language became widely used, at least in part, because those in positions to choose languages were not sufficiently familiar with P/L concepts.
 Many believe that ALGOL 60 was a better language than Fortran; however, Fortran was the most widely used. This is attributed to the fact that programmers and managers did not understand the conceptual design of ALGOL 60.

1. b. Define the different language evaluation criteria and explain them in brief.

Language Evaluation Criteria

Readability

 Software development was largely thought of in terms of writing code, measured in lines of code (LOC).
 Language constructs were designed more from the point of view of the computer than of the user.
 Because ease of maintenance is determined in large part by the readability of programs, readability became an important measure of the quality of programs and programming languages. The result was a crossover from a focus on machine orientation to a focus on human orientation.

The most important criterion: ease of use.

Overall simplicity - strongly affects readability

 Too many features make the language difficult to learn. Programmers tend to learn a subset of the language and ignore its other features (e.g., ALGOL 60).
 Multiplicity of features is also a complicating characteristic: having more than one way to accomplish a particular operation.

Ex - Java:

count = count + 1
count += 1
count++
++count

 Although the last two statements have slightly different meanings from each other and from the others, all four have the same meaning when used as stand-alone expressions.
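A small illustrative program (the class name is hypothetical) confirms that all four forms have the same effect when used as stand-alone statements:

```java
public class MultiplicityDemo {
    public static void main(String[] args) {
        int count = 0;
        count = count + 1;  // form 1
        count += 1;         // form 2
        count++;            // form 3 (post-increment)
        ++count;            // form 4 (pre-increment)
        // Each form incremented count by one, so after four statements:
        System.out.println(count);  // prints 4
    }
}
```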

 Operator overloading, where a single operator symbol has more than one meaning.
 Although this is a useful feature, it can lead to reduced readability if users are allowed to create their own overloadings and do not do it sensibly.

Orthogonality:

 Makes the language easy to learn and read.
 Meaning is context independent. Pointers should be able to point to any type of variable or data structure. A lack of orthogonality leads to exceptions to the rules of the language.
 A relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language.
 Every possible combination is legal and meaningful.
 The more orthogonal the design of a language, the fewer exceptions the language rules require.
 The most orthogonal programming language is ALGOL 68. Every language construct has a type, and there are no restrictions on those types.
 This extreme form of orthogonality leads to unnecessary complexity.


2. a. Describe the language design Trade-offs.
Language Design Trade-Offs
 Reliability vs. cost of execution: conflicting criteria. Ex: Java demands that all references to array elements be checked for proper indexing, but that leads to increased execution costs. C does not require index range checking, so a C program executes faster than an equivalent Java program, although Java programs are more reliable. The Java designers traded execution efficiency for reliability.
 Readability vs. writability: another conflicting criterion. Example: APL provides many powerful operators for array operands (and a large number of new symbols), allowing complex computations to be written in a compact program but at the cost of poor readability. Many APL operators can be used in a single long complex expression. APL is very writable but has poor readability.
 Writability (flexibility) vs. reliability: another conflicting criterion. Ex: C++ pointers are powerful and very flexible but not reliably used. Because of the potential reliability problems with pointers, they are not included in Java.
2. b. Mention the formal methods of describing syntax.
Formal Methods of Describing Syntax
The syntax of programming languages is described formally using grammars, most commonly the BNF notation for context-free grammars.
Context-free Grammars
Context-free grammars originally come from formal language theory. In computer science, context-free grammars are used to describe programming language syntax in terms of tokens.
Backus-Naur Form (BNF)
In computer science, BNF (Backus Normal Form or Backus-Naur Form) is a
notation technique for context-free grammars, often used to describe the
syntax of languages used in computing, such as computer programming
languages, document formats, instruction sets and communication protocols
(Wikipedia).
Basics of Metalanguage
A metalanguage is a language used to explain and elaborate another
language. In programming languages, BNF is the metalanguage.
BNF uses abstractions for syntactic structures. A simple Java assignment
statement, for example, might be represented by the abstraction <assign>.
The actual definition of <assign> may be given by
<assign> → <var> = <expression>

This statement is a rule.

 LHS (left-hand side) is the abstraction being defined.
 RHS (right-hand side) is the definition of the LHS; it consists of tokens, lexemes, and references to other abstractions.
 This particular rule specifies that the abstraction <assign> is defined
as an instance of the abstraction <var>, followed by the lexeme =,
followed by an instance of the abstraction <expression>.
 Nonterminal symbols are the abstractions in a BNF description.
 Terminal symbols are the lexemes and tokens of the rules in BNF.
 A collection of rules is a grammar in BNF.
 Nonterminal symbols may have definitions that represent two or more syntactic forms in the language. Multiple definitions can be written as separate rules, or combined into a single rule using the symbol |. For example, for the Java if-else statement:

<if_stmt> → if ( <logic_expr> ) <stmt>
<if_stmt> → if ( <logic_expr> ) <stmt> else <stmt>

Or, by joining them together:

<if_stmt> → if ( <logic_expr> ) <stmt>
          | if ( <logic_expr> ) <stmt> else <stmt>
Lists
BNF uses recursion in rules to represent lists. A rule is recursive if its LHS appears in its RHS. For example:

<ident_list> → identifier
             | identifier, <ident_list>

This defines <ident_list> as either a single token (identifier) or an identifier followed by a comma followed by another instance of <ident_list>.
Grammars and derivations
A grammar starts with a special nonterminal called the start symbol. A sequence of rule applications beginning at the start symbol is called a derivation. Usually, the start symbol is <program>. For example,

A Grammar for a Small Language

<program>    → begin <stmt_list> end
<stmt_list>  → <stmt>
             | <stmt> ; <stmt_list>
<stmt>       → <var> = <expression>
<var>        → A | B | C
<expression> → <var> + <var>
             | <var> – <var>
             | <var>
The language starts with begin and ends with end. Statements in the language are separated by semicolons. The language has only one type of statement, assignment, which assigns an expression to a variable. An expression is either an addition, a subtraction, or a single variable. There are only three variables in the language: A, B, and C.

If we derive a sentence of the language, it may become:
<program> => begin <stmt_list> end
=> begin <stmt> ; <stmt_list> end
=> begin <var> = <expression> ; <stmt_list> end
=> begin A = <expression> ; <stmt_list> end
=> begin A = <var> + <var> ; <stmt_list> end
=> begin A = B + <var> ; <stmt_list> end
=> begin A = B + C ; <stmt_list> end
=> begin A = B + C ; <stmt> end
=> begin A = B + C ; <var> = <expression> end
=> begin A = B + C ; B = <expression> end
=> begin A = B + C ; B = <var> end
=> begin A = B + C ; B = C end
This derivation starts with <program>. The symbol =>, which follows each step, is read as “derives.” Each successive string in the sequence is derived from the previous string by replacing one of the nonterminals with one of that nonterminal’s definitions. Another example:
A Grammar for Simple Assignment Statements
<assign> → <id> = <expr>
<id>     → A | B | C
<expr>   → <id> + <expr>
         | <id> * <expr>
         | (<expr>)
         | <id>

The grammar above derives only assignment statements. An <id> is one of the variables A, B, or C. An <expr> is built from addition, multiplication, and parentheses. For the statement
A=B*(A+C)
It is generated by the leftmost derivation as:
<assign> => <id> = <expr>
=> A = <expr>
=> A = <id> * <expr>
=> A = B * <expr>
=> A = B * (<expr>)
=> A = B * (<id> + <expr>)
=> A = B * (A + <expr>)
=> A = B * (A + <id>)
=> A = B * (A + C)
Parse Tree
A parse tree is a hierarchical representation of a derivation. Every internal node of a parse tree is labeled with a nonterminal symbol; every leaf is labeled with a terminal symbol. Every subtree of a parse tree describes one instance of an abstraction in the sentence.
If we take the above statement, A = B * ( A + C ), its parse tree is:

<assign>
├── <id>  A
├── =
└── <expr>
    ├── <id>  B
    ├── *
    └── <expr>
        ├── (
        ├── <expr>
        │   ├── <id>  A
        │   ├── +
        │   └── <expr>
        │       └── <id>  C
        └── )
Ambiguity
Ambiguity occurs when a grammar can generate two or more distinct parse trees for the same sentence. Say we have the statement:

A = B + C * A

and the following grammar for assignment statements:

<assign> → <id> = <expr>
<id>     → A | B | C
<expr>   → <expr> + <expr>
         | <expr> * <expr>
         | (<expr>)
         | <id>

This grammar produces two distinct parse trees for the statement (one grouping B + C first, the other C * A first), so the compiler cannot determine the intended structure.
Operator Precedence
The ambiguity problem can be solved by having separate rules for addition and multiplication. When they are separated, the operator with higher precedence (multiplication) appears lower in the parse tree. Below is the new grammar:
<assign> → <id> = <expr>
<id>     → A | B | C
<expr>   → <expr> + <term>
         | <term>
<term>   → <term> * <factor>
         | <factor>
<factor> → (<expr>)
         | <id>
If we use the above grammar for A = B + C * A, we get the following leftmost derivation:
<assign> => <id> = <expr>
=> A = <expr>
=> A = <expr> + <term>
=> A = <term> + <term>
=> A = <factor> + <term>
=> A = <id> + <term>
=> A = B + <term>
=> A = B + <term> * <factor>
=> A = B + <factor> * <factor>
=> A = B + <id> * <factor>
=> A = B + C * <factor>
=> A = B + C * <id>
=> A = B + C * A

The parse tree using the unambiguous grammar is:

<assign>
├── <id>  A
├── =
└── <expr>
    ├── <expr>
    │   └── <term>
    │       └── <factor>
    │           └── <id>  B
    ├── +
    └── <term>
        ├── <term>
        │   └── <factor>
        │       └── <id>  C
        ├── *
        └── <factor>
            └── <id>  A
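The unambiguous grammar maps directly onto a recursive-descent recognizer. Below is a minimal illustrative sketch in Java (the class and method names are hypothetical, not from the text); the left-recursive rules such as <expr> → <expr> + <term> are rewritten as loops, the standard transformation for recursive descent:

```java
// Illustrative recursive-descent recognizer for the unambiguous grammar above.
public class ExprParser {
    private final String input;
    private int pos = 0;

    ExprParser(String s) { input = s.replace(" ", ""); }

    private char peek() { return pos < input.length() ? input.charAt(pos) : '\0'; }

    boolean parse() { expr(); return pos == input.length(); }

    private void expr() {            // <expr> -> <term> { + <term> }
        term();
        while (peek() == '+') { pos++; term(); }
    }

    private void term() {            // <term> -> <factor> { * <factor> }
        factor();
        while (peek() == '*') { pos++; factor(); }
    }

    private void factor() {          // <factor> -> ( <expr> ) | <id>
        if (peek() == '(') {
            pos++;
            expr();
            if (peek() != ')') throw new RuntimeException("missing )");
            pos++;
        } else if ("ABC".indexOf(peek()) >= 0) {
            pos++;                   // <id> -> A | B | C
        } else {
            throw new RuntimeException("unexpected character: " + peek());
        }
    }

    public static void main(String[] args) {
        System.out.println(new ExprParser("B + C * A").parse());   // prints true
        System.out.println(new ExprParser("B * (A + C)").parse()); // prints true
    }
}
```

Because term() is called before the '+' loop in expr(), multiplication is grouped before addition, mirroring the precedence encoded in the grammar.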

3. a. Explain static and dynamic binding
Connecting a method call to the method body is known as binding.
There are two types of binding
1. Static Binding (also known as Early Binding).
2. Dynamic Binding (also known as Late Binding).
Understanding Type

Let's understand the types of variables, references, and objects.

1) Variables have a type

Each variable has a type; it may be primitive or non-primitive.

int data=30;

Here the data variable is of type int.
2) References have a type

class Dog{
 public static void main(String args[]){
  Dog d1; // Here d1 is a reference of type Dog
 }
}
3) Objects have a type

An object is an instance of a particular Java class, but it is also an instance of its superclass.

class Animal{}

class Dog extends Animal{
 public static void main(String args[]){
  Dog d1=new Dog();
 }
}

Here d1 is an instance of the Dog class, but it is also an instance of Animal.

Static binding

When the type of an object is determined at compile time (by the compiler), it is known as static binding.

If there is any private, final or static method in a class, there is static binding.

Example of static binding:

class Dog{
 private void eat(){System.out.println("dog is eating...");}

 public static void main(String args[]){
  Dog d1=new Dog();
  d1.eat(); // resolved at compile time; prints "dog is eating..."
 }
}

Dynamic binding
When the type of an object is determined at run time, it is known as dynamic binding.

Example of dynamic binding:

class Animal{
 void eat(){System.out.println("animal is eating...");}
}

class Dog extends Animal{
 void eat(){System.out.println("dog is eating...");}

 public static void main(String args[]){
  Animal a=new Dog();
  a.eat(); // overriding method chosen at run time; prints "dog is eating..."
 }
}
3. b. Write a note on variables and explain the different types of variables.

A variable is any property, characteristic, number, or quantity that increases or decreases over time or can take on different values in different situations (as opposed to constants, such as n, that do not vary).
Types of Variable

1. Qualitative Variables.
2. Quantitative Variables.
3. Discrete Variable.
4. Continuous Variable.
5. Dependent Variables.
6. Independent Variables.
7. Background Variable.
8. Moderating Variable.
9. Extraneous Variable.
10. Intervening Variable.
11. Suppressor Variable.
Qualitative Variables

An important distinction between variables is between the qualitative variable and the quantitative variable.

Qualitative variables are those that express a qualitative attribute such as hair color, religion, race, gender, social status, method of payment, and so on. The values of a qualitative variable do not imply a meaningful numerical ordering.

The value of the variable ‘religion’ (Muslim, Hindu, ..,etc.) differs qualitatively; no
ordering of religion is implied. Qualitative variables are sometimes referred to
as categorical variables.

For example, the variable sex has two distinct categories: ‘male’ and ‘female.’
Since the values of this variable are expressed in categories, we refer to this as a
categorical variable.

Similarly, place of residence may be categorized as being urban and rural and thus
is a categorical variable.

Categorical variables may again be described as nominal or ordinal.

Ordinal variables are those which can be logically ordered or ranked higher or lower than another but do not necessarily establish a numeric difference between each category, such as examination grades (A+, A, B+, etc.) or clothing sizes (extra large, large, medium, small).

Quantitative Variables

Quantitative variables, also called numeric variables, are those variables that are measured in terms of numbers. A simple example of a quantitative variable is a person’s age.

The age can take on different values because a person can be 20 years old, 35 years
old, and so on. Likewise, family size is a quantitative variable, because a family
might be comprised of one, two, three members, and so on.

That is, each of these properties or characteristics referred to above varies or differs from one individual to another. Note that these variables are expressed in numbers, for which we call them quantitative or sometimes numeric variables.

A quantitative variable is one for which the resulting observations are numeric and
thus possesses a natural ordering or ranking.

Discrete and Continuous Variables

Quantitative variables are again of two types: discrete and continuous.

Variables such as the number of children in a household or the number of defective items in a box are discrete variables, since the possible scores are discrete points on the scale.

For example, a household could have three or five children, but not 4.52 children.

Other variables, such as ‘time required to complete an MCQ test’ and ‘waiting
time in a queue in front of a bank counter,’ are examples of a continuous variable.
The time required in the above examples is a continuous variable, which could be,
for example, 1.65 minutes, or it could be 1.6584795214 minutes.

Of course, the practicalities of measurement preclude most measured variables from being truly continuous. No matter how close two observations might be, if the instrument of measurement is precise enough, a third observation can be found which will fall between the first two.

A continuous variable generally results from measurement and can assume countless values in the specified range.

Dependent and Independent Variables

In many research settings, there are two specific classes of variables that need to be
distinguished from one another, independent variable and dependent variable.

Many research studies are aimed at unraveling and understanding the causes of underlying phenomena or problems, with the ultimate goal of establishing a causal relationship between them.

Look at the following statements:

 Low intake of food causes underweight.
 Smoking enhances the risk of lung cancer.
 Level of education influences job satisfaction.
 Advertisement helps in sales promotion.
 The drug causes the improvement of a health problem.
 Nursing intervention causes more rapid recovery.
 Previous job experiences determine the initial salary.
 Blueberries slow down aging.
 The dividend per share determines share prices.

In each of the above queries, we have two variables: one independent and one
dependent. In the first example, ‘low intake of food’ is believed to have caused the
‘problem of underweight.’

It is thus the so-called independent variable. Underweight is the dependent variable because we believe that this ‘problem’ (the problem of underweight) has been caused by ‘the low intake of food’ (the factor).

Similarly, smoking, dividend, and advertisement all are independent variables, and
lung cancer, job satisfaction, and sales are dependent variables.
In general, an independent variable is manipulated by the experimenter or
researcher, and its effects on the dependent variable are measured.

Background Variable

In almost every study, we collect information such as age, sex, educational attainment, socioeconomic status, marital status, religion, place of birth, and the like. These variables are referred to as background variables.

These variables are often related to many independent variables so that they
influence the problem indirectly. Hence they are called background variables.

If the background variables are important to the study, they should be measured.
However, we should try to keep the number of background variables as few as
possible in the interest of the economy.

Moderating Variable

In any statement of relationships of variables, it is normally hypothesized that in some way, the independent variable ’causes’ the dependent variable to occur. In simple relationships, all other variables are extraneous and are ignored. In actual study situations, such a simple one-to-one relationship needs to be revised to take other variables into account to better explain the relationship.

This emphasizes the need to consider a second independent variable that is expected to have a significant contributory or contingent effect on the originally stated dependent-independent relationship. Such a variable is termed a moderating variable.

Suppose you are studying the impact of field-based and classroom-based training
on the work performance of the health and family planning workers, you consider
the type of training as the independent variable.

If you are focusing on the relationship between the age of the trainees and work
performance, you might use ‘type of training’ as a moderating variable.

Extraneous Variable

Most studies concern the identification of a single independent variable and the
measurement of its effect on the dependent variable.
But still, several variables might conceivably affect our hypothesized independent-
dependent variable relationship, thereby distorting the study. These variables are
referred to as extraneous variables.

Extraneous variables are not necessarily part of the study. They exert a
confounding effect on the dependent-independent relationship and thus need to be
eliminated or controlled for.

An example may illustrate the concept of extraneous variables. Suppose we are interested in examining the relationship between the work-status of mothers and breastfeeding duration.

It is not unreasonable in this instance to presume that the level of education of mothers, as it influences work-status, might have an impact on breastfeeding duration too.

Education is treated here as an extraneous variable. In any attempt to eliminate or control the effect of this variable, we may consider it a confounding variable.

An appropriate way of dealing with confounding variables is to follow a stratification procedure, which involves a separate analysis for the different levels of the confounding variable.

For this purpose, one can construct two crosstables: one for illiterate mothers and
the other for literate mothers. If we find a similar association between work status
and duration of breastfeeding in both the groups of mothers, then we conclude that
the educational level of mothers is not a confounding variable.

Intervening Variable

Often an apparent relationship between two variables is caused by a third variable.

For example, variables X and Y may be highly correlated, but only because X
causes the third variable, Z, which in turn causes Y. In this case, Z is
the intervening variable.

An intervening variable theoretically affects the observed phenomena but cannot be seen, measured, or manipulated directly; its effects can only be inferred from the effects of the independent and moderating variables on the observed phenomena.

In the work-status and breastfeeding relationship, we might view motivation or counseling as the intervening variable.

Thus, motive, job satisfaction, responsibility, behavior, justice are some of the
examples of intervening variables.

Suppressor Variable

In many cases, we have good reasons to believe that the variables of interest have a
relationship within themselves, but our data fail to establish any such relationship.
Some hidden factors may be suppressing the true relationship between the two
original variables.

Such a factor is referred to as a suppressor variable because it suppresses the actual relationship between the other two variables.

The suppressor variable suppresses the relationship by being positively correlated with one of the variables in the relationship and negatively correlated with the other. The true relationship between the two variables will reappear when the suppressor variable is controlled for.

Thus, for example, low age may pull education up but income down. In contrast, a
high age may pull income up but education down, effectively canceling out the
relationship between education and income unless age is controlled for.

4. a. Explain type checking. Explain strong typing with examples.


Type checking is the activity of ensuring that the operands of an operator are of compatible types. A compatible type is one that is legal for the operator, or one that language rules allow to be implicitly converted by compiler-generated code to a legal type. This automatic conversion is known as coercion. A type error is the application of an operator to an operand of an inappropriate type. To illustrate type checking, consider the following statement:

c := a + 3 * b;

Here b should be of a type that allows multiplication by an integer. Similarly, the operands for addition and assignment can be checked.
If all bindings of variables to types are static in a language, then type checking can nearly always be done statically. It is better to detect errors at compile time than at run time, because the earlier a correction is made, the less expensive it generally is. Static type checking therefore involves examining the program text, usually during translation.
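As a concrete illustration (a minimal Java sketch; the class name is hypothetical), the compiler coerces an int result to double where a double is expected, while a genuinely incompatible assignment is rejected at compile time:

```java
public class CoercionDemo {
    public static void main(String[] args) {
        int a = 3, b = 2;
        double c = a + 3 * b;   // int result 9 is coerced to double by compiler-generated code
        System.out.println(c);  // prints 9.0
        // boolean d = a + 3 * b;  // type error: int cannot be converted to boolean,
        //                         // caught statically, before the program ever runs
    }
}
```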
A strongly typed programming language is one in which each type of data, such as
integers, characters, hexadecimals and packed decimals, is predefined as part of the
programming language, and all constants or variables defined for a
given program must be described with one of the data types. Certain operations
might be allowable only with specific data types.

Strong typing refers to the enforcement of firm restrictions on mixing different data types and values. When there is a violation, errors, also known as exceptions, occur. Strongly typed programming languages use a language compiler to enforce data typing compliance.

Advantages and disadvantages of strongly typed programming languages

Strongly typed programming languages have their benefits and drawbacks.

A key advantage of strong data typing is that it enforces a rigorous set of rules to
ensure a certain consistency of results. Further, a compiler can quickly detect an
object being sent a message to which it will not respond, preventing runtime errors.
Other benefits include no runtime penalties for determining types, accelerated
development by detecting errors earlier and better-optimized code from the
compiler.

However, a major disadvantage of using a strongly typed programming language is that programmers lose flexibility. For example, strongly typed programming languages prevent the programmer from inventing a data type not anticipated by the developers of the programming language. It limits how creative one can be in using a given data type. Strong typing in a programming language also makes it more challenging to define collections of heterogeneous objects.
4. b. Define unconditional branching. Explain guarded commands.

Unconditional branching is when the programmer forces the execution of a program to jump to another part of the program. Theoretically, this can be done using a good combination of loops and if statements. In fact, as a programmer, you should always try to avoid such unconditional branching and use this technique only when it is very difficult to use a loop.

We can use “goto” statements (unconditional branch statements) to jump to a label in a program. The goto statement is used for unconditional branching, or transfer of the program execution to the labeled statement.

UNCONDITIONAL CONTROL STATEMENT:

 C supports an unconditional control statement: goto.
 goto is used to transfer control from one point to another in a C program.
 goto is a branching statement.
 It requires a label.
 goto is a keyword.
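Java, by contrast, reserves the keyword goto but provides no goto statement; its closest sanctioned analogue of an unconditional branch is the labeled break, which jumps out of a named enclosing loop. A small illustrative sketch (names are hypothetical):

```java
public class LabeledBreakDemo {
    public static void main(String[] args) {
        int found = -1;
        search:                          // label on the outer loop
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                if (i * 3 + j == 4) {
                    found = i * 3 + j;
                    break search;        // unconditionally exits BOTH loops at once
                }
            }
        }
        System.out.println(found);       // prints 4
    }
}
```

Unlike goto, the labeled break can only transfer control out of the labeled statement, never into arbitrary code, which preserves structured control flow.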

The term guarded command, as defined by Dijkstra (1975), is synonymous with a
conditionally executed statement. More precisely, a guarded command is the
combination of a condition (boolean expression) B and the (possibly compound)
statement S whose execution is controlled by B. In a sense, B "guards" the
execution of S. In Dijkstra's notation, a guarded command is represented as B → S.
In more common notation, the meaning of a guarded command is very much like
that of the simple selection structure (if statement): if B then S. Unlike the if
statement, however, a guarded command, by itself, is not a complete statement in a
programming language. Rather, it is one component of a more extensive control
structure containing one or more guarded commands. The most interesting
applications of guarded commands are those involving a set of n of them, for n > 1.
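The semantics of Dijkstra's guarded if (abort if no guard is true; otherwise nondeterministically execute one statement whose guard is true) can be sketched in Java. This is an illustrative model with hypothetical names, not Dijkstra's notation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class GuardedIfDemo {
    // One guarded command B -> S: a guard plus the statement it guards.
    record Guarded(boolean guard, Runnable action) {}

    // Dijkstra's guarded if: abort if no guard is true; otherwise pick
    // one true-guarded command nondeterministically and run it.
    static void guardedIf(List<Guarded> commands) {
        List<Runnable> open = new ArrayList<>();
        for (Guarded g : commands) if (g.guard()) open.add(g.action());
        if (open.isEmpty()) throw new IllegalStateException("abort: no guard is true");
        open.get(new Random().nextInt(open.size())).run();
    }

    public static void main(String[] args) {
        int x = 5, y = 3;
        guardedIf(List.of(
            new Guarded(x >= y, () -> System.out.println("max is " + x)),
            new Guarded(y >= x, () -> System.out.println("max is " + y))
        ));
        // Here only the first guard is true, so this prints "max is 5".
        // If x == y, both guards would be true and either command could run,
        // which is exactly the nondeterminism Dijkstra intended.
    }
}
```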

5.a. Local referencing environment.

You might also like