
Logic assignment

Chapter four: Introduction to other logics


 Modal logic, syntax of modal logic
 Fuzzy logic
 Intuitionistic logic
 Lukasiewicz logic
 Probabilistic logic

Compiler assignment

 Bottom-up parsing
 Code generator
Modal Logic
Modal logic studies reasoning that involves the use of the expressions ‘necessarily’ and ‘possibly’.
However, the term ‘modal logic’ is used more broadly to cover a family of logics with similar rules and a
variety of different symbols.

Modal logic is a type of formal logic primarily developed in the 1960s that extends
classical propositional and predicate logic to include operators expressing modality. A modal—a word
that expresses a modality—qualifies a statement. For example, the statement "John is happy" might be
qualified by saying that John is usually happy, in which case the term "usually" is functioning as a modal.
The traditional alethic modalities, or modalities of truth, include possibility ("Possibly, p", "It is possible
that p"), necessity ("Necessarily, p", "It is necessary that p"), and impossibility ("Impossibly, p", "It is
impossible that p").

A formal modal logic represents modalities using modal operators. For example, "It might rain today"
and "It is possible that rain will fall today" both contain the notion of possibility. In a modal logic this is
represented as an operator, "Possibly", attached to the sentence "It will rain today".

These systems are usually a propositional logic that has two new symbols:

□, which denotes necessity; and ◇, which denotes possibility.


Syntax
The syntax of modal logic is usually the syntax of propositional logic, with a new rule: if φ

is a well-formed formula, then □φ and ◇φ are well-formed formulas.

The symbols of modal logic consist of a countably infinite set P of propositional variables,
logical connectives, parentheses, and the modal operator □. The choice of logical
connectives depends on the development of propositional logic one wants to follow; below I
choose negation and implication. The set of modal formulas is defined recursively as follows.
Every propositional variable is a formula. If φ and ψ are formulas then so are ¬φ, φ → ψ, and
□φ. All formulas are obtained by repeated application of these constructions.

For example, □(φ → ψ) → ¬□φ is a modal formula when φ and ψ are propositional variables.


The formal proof system of modal logic includes the proof system for propositional logic as a
subset. Here I choose to use the Hilbert system, with the following axioms:

• φ → (ψ → φ);
• (implication distribution) (φ → (ψ → θ)) → ((φ → ψ) → (φ → θ));
• (¬φ → ¬ψ) → (ψ → φ),

and the modus ponens inference rule: from φ and φ → ψ infer ψ. The modal proof system must
also include rules and axioms for the modal operator. There is only one extra inference rule,
generalization (also called necessitation or □ introduction): from φ infer □φ. There are several
possibilities for extra axioms, resulting in different modal logics:

• the logic K is obtained by adding the distribution axiom □(φ → ψ) → (□φ → □ψ);
• the logic K4 is obtained by adding the distribution axiom and the K4 axiom □φ → □□φ;
• the provability logic GL is obtained by adding the distribution and K4 axioms and the Löb axiom □(□φ → φ) → □φ.
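The recursive definition and the K axiom above can be sketched as a small abstract syntax in code (an illustrative sketch; the class and function names are my own, not from any standard library):

```python
from dataclasses import dataclass

# Abstract syntax for modal formulas: propositional variables,
# negation, implication, and the box (necessity) operator.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Not:
    sub: object

@dataclass(frozen=True)
class Implies:
    left: object
    right: object

@dataclass(frozen=True)
class Box:
    sub: object

def show(f):
    """Render a modal formula as a string."""
    if isinstance(f, Var):
        return f.name
    if isinstance(f, Not):
        return "¬" + show(f.sub)
    if isinstance(f, Implies):
        return "(" + show(f.left) + " → " + show(f.right) + ")"
    if isinstance(f, Box):
        return "□" + show(f.sub)

# An instance of the distribution axiom K: □(φ → ψ) → (□φ → □ψ)
phi, psi = Var("φ"), Var("ψ")
k_axiom = Implies(Box(Implies(phi, psi)), Implies(Box(phi), Box(psi)))
print(show(k_axiom))  # (□(φ → ψ) → (□φ → □ψ))
```

Each constructor corresponds to one clause of the recursive definition, so every value built from them is a well-formed modal formula by construction.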

Fuzzy logic

Fuzzy logic is a set of mathematical principles for knowledge representation based on degrees
of membership. Unlike two-valued Boolean logic, fuzzy logic is multi-valued. It deals with
degrees of membership and degrees of truth. Fuzzy logic uses the continuum of logical values
between 0 (completely false) and 1 (completely true). Instead of just black and white, it
employs the spectrum of colours, accepting that things can be partly true and partly false at
the same time.

Fuzzy Logic is a form of multi-valued logic derived from fuzzy set theory to deal with reasoning that is
approximate rather than precise. Fuzzy logic is not a vague logic system, but a system of logic for dealing
with vague concepts. As in fuzzy set theory the set membership values can range (inclusively) between 0
and 1, in fuzzy logic the degree of truth of a statement can range between 0 and 1 and is not
constrained to the two truth values true/false as in classic predicate logic.
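Degrees of truth are combined with Zadeh's standard fuzzy connectives: AND is the minimum, OR is the maximum, and NOT is the complement. A minimal sketch (the example degrees are made-up values):

```python
# Zadeh's standard fuzzy connectives over degrees of truth in [0, 1]:
# AND is min, OR is max, NOT is the complement.
def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def f_not(a):
    return 1.0 - a

# "the load is large" to degree 0.7, "the fabric is delicate" to degree 0.4:
print(f_and(0.7, 0.4))  # 0.4
print(f_or(0.7, 0.4))   # 0.7
print(f_not(0.25))      # 0.75
```

Note that these connectives reduce to the ordinary Boolean ones when the degrees are restricted to 0 and 1.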

Examples of Fuzzy Logic

In a fuzzy logic washing machine, fuzzy logic detects the type and amount of laundry in the
drum and allows only as much water to enter the machine as is really needed for the loaded
amount. Less water heats up more quickly, which means less energy consumption. Additional
properties:

• Foam detection: Too much foam is compensated by an additional rinse cycle.

• Imbalance compensation: In the event of imbalance, the machine calculates the maximum possible spin
speed, sets this speed, and starts spinning.

• Water level adjustment: Fuzzy automatic water level adjustment adapts water and energy
consumption to the individual requirements of each wash programme, depending on the amount of
laundry and type of fabric.

Fuzzy logic is an approach to computing based on "degrees of truth" rather than the
usual "true or false" (1 or 0) Boolean logic on which the modern computer is based. 
In logic , fuzzy logic is a form of many-valued logic in which the truth value of variables
may be any real number between 0 and 1 both inclusive. It is employed to handle the
concept of partial truth, where the truth value may range between completely true and
completely false.[1] By contrast, in Boolean logic, the truth values of variables may only
be the integer values 0 or 1.
Fuzzification is the process of assigning the numerical input of a system to fuzzy sets
with some degree of membership. This degree of membership may be anywhere within
the interval [0,1]. If it is 0 then the value does not belong to the given fuzzy set, and if it
is 1 then the value completely belongs within the fuzzy set. Any value between 0 and 1
represents the degree of uncertainty that the value belongs in the set. These fuzzy sets
are typically described by words, and so by assigning the system input to fuzzy sets, we
can reason with it in a linguistically natural manner.
For example, suppose the meanings of the expressions cold, warm, and hot are represented
by functions mapping a temperature scale. A point on that scale has three "truth values",
one for each of the three functions. Picture a vertical line at a particular temperature,
with three arrows (truth values) gauging it. Since
the red arrow points to zero, this temperature may be interpreted as "not hot"; i.e. this
temperature has zero membership in the fuzzy set "hot". The orange arrow (pointing at
0.2) may describe it as "slightly warm" and the blue arrow (pointing at 0.8) "fairly cold".
Therefore, this temperature has 0.2 membership in the fuzzy set "warm" and 0.8
membership in the fuzzy set "cold". The degree of membership assigned for each fuzzy
set is the result of fuzzification.
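Fuzzification can be sketched with simple piecewise-linear membership functions. The breakpoints below (10–35 °C) are made-up values chosen so that 12 °C reproduces the 0.8 cold / 0.2 warm / 0.0 hot example:

```python
def tri_down(x, lo, hi):
    """Membership falling linearly from 1 at lo to 0 at hi."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def tri_up(x, lo, hi):
    """Membership rising linearly from 0 at lo to 1 at hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

# Hypothetical membership functions on a temperature scale (°C):
def cold(t):
    return tri_down(t, 10, 20)

def warm(t):
    return min(tri_up(t, 10, 20), tri_down(t, 25, 35))

def hot(t):
    return tri_up(t, 25, 35)

t = 12  # fuzzify one crisp input temperature
print(cold(t), warm(t), hot(t))  # 0.8 0.2 0.0
```

Fuzzifying the crisp input 12 °C thus assigns it membership 0.8 in "cold", 0.2 in "warm", and 0.0 in "hot", matching the arrows described above.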
Advantages of fuzzy logic

 This system can work with any type of input, whether it is imprecise, distorted, or noisy.
 The construction of fuzzy logic systems is easy and understandable.
 Fuzzy logic builds on the mathematical concepts of set theory, and its reasoning is quite simple.
 It provides a very efficient solution to complex problems in all fields of life, as it resembles human reasoning and
decision making.
 The algorithms can be described with little data, so little memory is required.

Fuzzy logic is defined as a many-valued logic in which the truth values of variables may be any
real number between 0 and 1. It handles the concept of partial truth. In real life, we may come across
a situation where we can't decide whether a statement is true or false. At such times, fuzzy logic offers
very valuable flexibility for reasoning.

Characteristics of Fuzzy Logic

Here, are some important characteristics of fuzzy logic:

 Flexible and easy to implement machine learning technique


 Helps you to mimic the logic of human thought
 Logic may have two values which represent two possible solutions
 Highly suitable method for uncertain or approximate reasoning
 Fuzzy logic views inference as a process of propagating elastic constraints
 Fuzzy logic allows you to build nonlinear functions of arbitrary complexity.
 Fuzzy logic should be built with the complete guidance of experts

Intuitionistic logic
A type of logic which rejects the law of excluded middle or, equivalently, the law of
double negation, and/or Peirce's law. It is the foundation of intuitionism.
Intuitionistic logic is designed to capture a kind of reasoning in which non-constructive
moves are disallowed. Proving the existence of an x satisfying ϕ(x) means that
you have to give a specific x, together with a proof that it satisfies ϕ.
Proving that ϕ or ψ holds requires that you can prove one or the other. Formally
speaking, intuitionistic logic is what you get if you restrict a proof system for classical
logic in a certain way. From the mathematical point of view, these are just formal
deductive systems, but, as already noted, they are intended to capture a kind of
mathematical reasoning. One can take this to be the kind of reasoning that is justified
on a certain philosophical view of mathematics (such as Brouwer’s intuitionism); one
can take it to be a kind of mathematical reasoning which is more “concrete” and
satisfying (along the lines of Bishop’s constructivism); and one can argue about whether
or not the formal description captures the informal motivation. But whatever
philosophical positions we may hold, we can study intuitionistic logic as a formally
presented logic; and for whatever reasons, many mathematical logicians find it
interesting to do so.

The language of intuitionistic propositional logic is the same as classical propositional


logic, but the meaning of formulas is different:
• propositional letters represent abstract problems
• more complex problems are formed by using the connectives
• solutions of abstract problems are called proofs
• it does not make sense to speak about a formula having a truth value
• we are only interested in how to prove formulas
The Brouwer–Heyting–Kolmogorov Interpretation

Proofs of complex formulas are given in terms of the proofs of their constituents:
• a proof of ϕ ∧ ψ is a proof of ϕ together with a proof of ψ
• a proof of ϕ ∨ ψ is a proof of ϕ or a proof of ψ
• a proof of ϕ → ψ is a procedure that can be seen to produce a proof of ψ from a proof
of ϕ
• there is no proof of ⊥
For three propositional letters a, b, c we can prove
• a → a Given a proof u of a, we can produce a proof of a, namely u itself.
• (a ∧ b) → a Assume we have a proof v of a ∧ b. Then we can extract from it a proof of
a, since it must contain both a proof of a and a proof of b.
• a → (a ∨ b)
• a → (b → a)
• (a → (b → c)) → (a → b) → (a → c)
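Under the proofs-as-programs (Curry–Howard) reading of BHK, each of the formulas above corresponds to a simple program. A sketch, representing a proof of a conjunction as a pair and a proof of a disjunction as a tagged value:

```python
# Under the proofs-as-programs reading: a proof of an implication is a
# function, a proof of a ∧ b is a pair, a proof of a ∨ b is a tagged value.

# a → a : the identity function
id_proof = lambda u: u

# (a ∧ b) → a : first projection of a pair
fst_proof = lambda v: v[0]

# a → (a ∨ b) : inject into the left branch of a disjunction
inl_proof = lambda u: ("left", u)

# a → (b → a) : the constant function (axiom K)
k_proof = lambda u: lambda w: u

# (a → (b → c)) → ((a → b) → (a → c)) : the S combinator
s_proof = lambda f: lambda g: lambda x: f(x)(g(x))

print(k_proof("proof of a")("proof of b"))  # proof of a
```

Each program is exactly the procedure described in the informal proofs above, e.g. id_proof returns the given proof u of a unchanged.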

 Intuitionistic logic is classical logic without the non-constructive reasoning principles:

 The law of excluded middle P ∨ ¬P
 Double negation elimination ¬¬P → P
 It often adds principles that contradict classical logic, such as the continuity principle.
 In consequence, proofs have to be carried out constructively:
 P ∨ Q: one has to specify whether it is P or Q that holds
 ∃x. P(x): one has to construct an x that satisfies P(x)

Lukasiewicz logic
Łukasiewicz's [1930] three-valued Aussagenkalkül allows propositions to have the values '1', '0', and '1/2'
(see [9]). For convenience in comparison with standard truth-value semantics, these will be interpreted
in what follows as 'true' (T), 'false' (F), and 'undetermined' (U). The exact axiomatization of
Łukasiewicz's logic is unimportant for present purposes, but several versions of the theory have been
offered. Proof of the internal determinacy metatheorem requires only a consideration of the logic's
characteristic nonstandard truth-value semantics, which can be given as truth tables or matrix
definitions by cases of some choice of primitive propositional connectives. Here negation and the
conditional are defined, to which the other truth functions are reducible in the usual way.

Łukasiewicz used only negation and implication as primitives.


Łukasiewicz's proof system is a particularly elegant example of this idea. It covers formulas whose only
logical operators are IMPLIES (→) and NOT (¬).

1) (¬P → P) → P

2) P →(¬P → Q)

3) (P → Q) → ((Q → R) → (P → R))

The only inference rule is modus ponens.
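The three-valued semantics can be sketched directly from the standard Łukasiewicz matrices for the two primitives, ¬p = 1 − p and p → q = min(1, 1 − p + q):

```python
from fractions import Fraction

# Łukasiewicz connectives over the truth values {0, 1/2, 1}
# (1 = true, 0 = false, 1/2 = undetermined):
def l_not(p):
    return 1 - p

def l_implies(p, q):
    return min(1, 1 - p + q)

U = Fraction(1, 2)   # the "undetermined" value
print(l_not(U))          # 1/2: negation fixes the middle value
print(l_implies(U, 0))   # 1/2
print(l_implies(U, U))   # 1: a value implies itself
```

Restricted to the values 0 and 1, these tables agree with classical negation and implication; the middle value 1/2 is what makes the semantics nonstandard.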


Probabilistic logic

The term probabilistic in our context refers to the use of probabilistic representations and reasoning
mechanisms grounded in probability theory.

 Integration of (propositional) logic- and probability-based representation and reasoning formalisms.
 Reasoning from logical constraints and interval restrictions for conditional probabilities (also
called conditional constraints).
 Reasoning from convex sets of probability distributions.
 Model-theoretic notion of logical entailment.

The aim of probabilistic logic (also probability logic and probabilistic reasoning) is to combine
the capacity of probability theory to handle uncertainty with the capacity of deductive logic to
exploit the structure of formal argument. The result is a richer and more expressive formalism
with a broad range of possible application areas. Probabilistic logics attempt to find a natural
extension of traditional logic truth tables: the results they define are derived through
probabilistic expressions instead. A difficulty with probabilistic logics is that they tend to
multiply the computational complexities of their probabilistic and logical components.
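A minimal sketch of the interval flavor of such reasoning: from point probabilities for A and B alone, only bounds (the Fréchet bounds) on the conjunction and disjunction follow, not point values:

```python
# Fréchet bounds: given only P(A) and P(B), the probability of the
# conjunction A and B is constrained to an interval, not a point value.
def conjunction_bounds(p_a, p_b):
    lower = max(0.0, p_a + p_b - 1.0)
    upper = min(p_a, p_b)
    return lower, upper

def disjunction_bounds(p_a, p_b):
    lower = max(p_a, p_b)
    upper = min(1.0, p_a + p_b)
    return lower, upper

print(conjunction_bounds(0.75, 0.5))  # (0.25, 0.5)
print(disjunction_bounds(0.75, 0.5))  # (0.75, 1.0)
```

The bounds are tight: which value in the interval is correct depends on the (unknown) dependence between A and B, which is exactly why probabilistic logics reason over sets of probability distributions.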
Compiler assignment
Bottom-up parsing
A parser is a compiler or interpreter component that breaks data into smaller
elements for easy translation into another language. A parser takes input in the
form of a sequence of tokens, interactive commands, or program instructions and
breaks them up into parts that can be used by other components in programming.

Bottom-up parsing starts from the leaf nodes of a tree and works upward until it reaches the
root node. Here, we start from a sentence and then apply production rules in reverse in order
to reach the start symbol.

A bottom-up parser constructs the parse tree for an input string beginning at the
bottom (the leaves) and working towards the top (the root). A bottom-up parser
reduces the string to the start symbol of the grammar. During a reduction, a specific
substring matching the right side (body) of a production is replaced by the
non-terminal at the head of that production. Bottom-up parsing constructs a
rightmost derivation in reverse order while scanning the input from left to right.

As the name suggests, bottom-up parsing starts with the input symbols and tries to construct
the parse tree up to the start symbol.

Example:
Input string : a + b * c
Production rules:
S → E
E → E + T
E → E * T
E → T
T → id

Let us start bottom-up parsing


a + b * c

Read the input and check if any production matches with the input:
a + b * c
T + b * c
E + b * c
E + T * c
E * c
E * T
E
S

Build the parse tree from leaves to root.
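The reduction sequence above can be checked mechanically. The sketch below (with T → a | b | c standing in for T → id applied to the identifiers a, b, c) verifies that each line follows from the previous one by replacing one production body with its head:

```python
# Each step of bottom-up parsing replaces one occurrence of a
# production body with its head non-terminal.
productions = {
    "S": ["E"],
    "E": ["E + T", "E * T", "T"],
    "T": ["a", "b", "c"],   # T -> a | b | c stands in for T -> id
}

def is_reduction(before, after):
    """Check that `after` results from `before` by one reverse production."""
    b, a = before.split(), after.split()
    for head, bodies in productions.items():
        for body in bodies:
            body_toks = body.split()
            n = len(body_toks)
            # try replacing the body at every position of `before`
            for i in range(len(b) - n + 1):
                if b[i:i + n] == body_toks and b[:i] + [head] + b[i + n:] == a:
                    return True
    return False

trace = ["a + b * c", "T + b * c", "E + b * c", "E + T * c",
         "E * c", "E * T", "E", "S"]
print(all(is_reduction(trace[i], trace[i + 1]) for i in range(len(trace) - 1)))  # True
```

Read bottom-to-top, the same trace is a derivation from S; read top-to-bottom, it is the sequence of reductions the parser performs.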


Bottom-up parsing can be defined as an
attempt to reduce the input string w to the start
symbol of grammar by tracing out the
rightmost derivations of w in reverse.

Bottom-up parsing is more general than top-down parsing

– And just as efficient

– Builds on ideas in top-down parsing

– Preferred method in practice

• Also called LR parsing

– L means that tokens are read left to right

– R means that it constructs a rightmost derivation

LR parsers don’t need left-factored grammars and can also handle left-recursive grammars
A bottom-up parser generates the parse tree for the given input string with the
help of grammar productions by compressing substrings into non-terminals, i.e. it
starts from the input string and ends at the start symbol. It uses the reverse of
the rightmost derivation.
Bottom-up parsers are further classified into 2 types: LR parsers and operator
precedence parsers.
(i). LR parser:
An LR parser is a bottom-up parser which generates the parse tree for the given string by using unambiguous grammar. It follows the
reverse of the rightmost derivation.
LR parsers are of 4 types:

(a). LR(0)
(b). SLR(1)
(c). LALR(1)

(d). CLR(1)

 (ii). Operator precedence parser:

It generates the parse tree from a given grammar
and string, but the only condition is that two
consecutive non-terminals and epsilon never
appear on the right-hand side of any production.

 Attempts to traverse a parse tree bottom up (post-order traversal)


 Reduces a sequence of tokens to the start symbol
 At each reduction step, the RHS of a production is replaced with LHS
 A reduction step corresponds to the reverse of a rightmost derivation
 Example: given the following grammar

E→E+T|T

T→T*F|F

F → ( E ) | id
A rightmost derivation for id + id * id is shown below:

E ⇒rm E + T ⇒rm E + T * F ⇒rm E + T * id

⇒rm E + F * id ⇒rm E + id * id ⇒rm T + id * id

⇒rm F + id * id ⇒rm id + id * id

Stack Implementation of a Bottom-Up Parser


A bottom-up parser uses an explicit stack in its implementation

The main actions are shift and reduce

* A bottom-up parser is also known as a shift-reduce parser

Four operations are defined: shift, reduce, accept, and error 

*Shift: parser shifts the next token on the parser stack

* Reduce: parser reduces the RHS of a production to its LHS. The handle always appears on top of the
stack

* Accept: parser announces a successful completion of parsing

*Error: parser discovers that a syntax error has occurred

The parser operates by:

* Shifting tokens onto the stack

* When a handle β is on top of stack, parser reduces β to LHS of production

* Parsing continues until an error is detected or input is reduced to start symbol


Shift-Reduce Parsing

Shift-reduce parsing uses two unique steps for bottom-up parsing. These steps are
known as shift-step and reduce-step.

 Shift step: The shift step refers to the advancement of the input pointer to the
next input symbol, which is called the shifted symbol. This symbol is pushed onto
the stack. The shifted symbol is treated as a single node of the parse tree.
 Reduce step: When the parser finds a complete grammar rule (RHS) and
replaces it with its (LHS), it is known as a reduce-step. This occurs when the top of the
stack contains a handle. To reduce, a POP function is performed on the stack,
which pops off the handle and replaces it with the LHS non-terminal symbol.
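A hand-written shift-reduce parser for the grammar E → E + T | T, T → T * F | F, F → id can be sketched as follows. The handle choices are hard-coded with one token of lookahead; a real LR parser would drive them from a parse table instead:

```python
def shift_reduce_parse(tokens):
    """Shift-reduce parse for E -> E+T | T ; T -> T*F | F ; F -> id.
    A hand-written sketch: handle choice uses one token of lookahead."""
    stack, actions = [], []
    pos = 0
    tokens = tokens + ["$"]   # end-of-input marker
    while True:
        look = tokens[pos]
        if stack and stack[-1] == "id":
            stack[-1] = "F"; actions.append("reduce F -> id")
        elif stack and stack[-1] == "F":
            if len(stack) >= 3 and stack[-3:-1] == ["T", "*"]:
                stack[-3:] = ["T"]; actions.append("reduce T -> T * F")
            else:
                stack[-1] = "T"; actions.append("reduce T -> F")
        elif stack and stack[-1] == "T" and look != "*":
            if len(stack) >= 3 and stack[-3:-1] == ["E", "+"]:
                stack[-3:] = ["E"]; actions.append("reduce E -> E + T")
            else:
                stack[-1] = "E"; actions.append("reduce E -> T")
        elif look == "$":
            # accept iff the input was reduced to the start symbol
            return stack == ["E"], actions

        else:
            stack.append(look); pos += 1; actions.append("shift " + look)

ok, actions = shift_reduce_parse(["id", "+", "id", "*", "id"])
print(ok)  # True
```

Printing the recorded actions for id + id * id shows the shifts and reductions tracing out, in reverse, exactly the rightmost derivation listed above.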

LR Parser

The LR parser is a non-recursive, shift-reduce, bottom-up parser. It handles a wide class of


context-free grammars, which makes it a very efficient syntax analysis technique. LR
parsers are also known as LR(k) parsers, where L stands for left-to-right scanning of the
input stream, R stands for the construction of a rightmost derivation in reverse, and k
denotes the number of lookahead symbols used to make decisions.

There are three widely used algorithms available for constructing an LR parser:

 SLR(1) – Simple LR Parser:


o Works on smallest class of grammar
o Few number of states, hence very small table
o Simple and fast construction
 LR(1) – LR Parser:
o Works on complete set of LR(1) Grammar
o Generates large table and large number of states
o Slow construction
 LALR(1) – Look-Ahead LR Parser:
o Works on intermediate size of grammar
o The number of states is the same as in SLR(1)
LL vs. LR

LL: Does a leftmost derivation.
LR: Does a rightmost derivation in reverse.

LL: Starts with the root nonterminal on the stack.
LR: Ends with the root nonterminal on the stack.

LL: Ends when the stack is empty.
LR: Starts with an empty stack.

LL: Uses the stack for designating what is still to be expected.
LR: Uses the stack for designating what is already seen.

LL: Builds the parse tree top-down.
LR: Builds the parse tree bottom-up.

LL: Continuously pops a nonterminal off the stack, and pushes the corresponding right hand side.
LR: Tries to recognize a right hand side on the stack, pops it, and pushes the corresponding nonterminal.

LL: Expands the non-terminals.
LR: Reduces the non-terminals.

LL: Reads the terminals when it pops one off the stack.
LR: Reads the terminals while it pushes them on the stack.

LL: Pre-order traversal of the parse tree.
LR: Post-order traversal of the parse tree.
Code generator
Code generation can be considered the final phase of compilation. Further optimization
can be applied to the generated code, but that can be seen as part of the code generation
phase itself. The code generated by the compiler is object code in some lower-level
programming language, for example, assembly language. We have seen that source code
written in a higher-level language is transformed into a lower-level language, resulting
in lower-level object code, which should have the following minimum properties:

 It should carry the exact meaning of the source code.


 It should be efficient in terms of CPU usage and memory management.

A code generator is expected to have an understanding of the target machine’s runtime


environment and its instruction set. The code generator should take the following things into
consideration to generate the code:

 Target language : The code generator has to be aware of the nature of the
target language for which the code is to be transformed. That language may
facilitate some machine-specific instructions to help the compiler generate the
code in a more convenient way. The target machine can have either CISC or
RISC processor architecture.
 IR Type : Intermediate representation has various forms. It can be in Abstract
Syntax Tree (AST) structure, Reverse Polish Notation, or 3-address code.
 Selection of instruction : The code generator takes Intermediate
Representation as input and converts (maps) it into target machine’s instruction
set. One representation can have many ways (instructions) to convert it, so it
becomes the responsibility of the code generator to choose the appropriate
instructions wisely.
 Register allocation : A program has a number of values to be maintained during
the execution. The target machine’s architecture may not allow all of the values
to be kept in the CPU memory or registers. Code generator decides what values
to keep in the registers. Also, it decides the registers to be used to keep these
values.
 Ordering of instructions : At last, the code generator decides the order in which
the instruction will be executed. It creates schedules for instructions to execute
them.
Code generator is used to produce the target code for three-address statements. It uses
registers to store the operands of the three address statement.

Consider the three-address statement x := y + z. It can be translated into the following sequence of instructions:

MOV y, R0

ADD z, R0

MOV R0, x
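As a sketch, such a sequence can be produced mechanically (gen_add is a hypothetical helper, not part of any compiler framework; it loads y into R0, adds z, and stores the result into x):

```python
def gen_add(x, y, z):
    """Naive target code for the three-address statement x := y + z,
    using a single register R0 (a sketch; real code generation consults
    register and address descriptors before emitting loads and stores)."""
    return [
        f"MOV {y}, R0",   # load y into R0
        f"ADD {z}, R0",   # R0 := R0 + z
        f"MOV R0, {x}",   # store the result into x
    ]

for line in gen_add("x", "y", "z"):
    print(line)
```

The final store can often be omitted when x is used again immediately, which is exactly the kind of decision the descriptors below are for.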

Register and Address Descriptors:


o A register descriptor contains the track of what is currently in each register. The register
descriptors show that all the registers are initially empty.
o An address descriptor is used to store the location where current value of the name can
be found at run time.

A code-generation algorithm:
The algorithm takes a sequence of three-address statements as input. For each three-address
statement of the form x := y op z, it performs the following actions:

1. Invoke a function getreg to find the location L where the result of the computation y op
z should be stored.
2. Consult the address descriptor for y to determine y', the current location of y. If the value
of y is currently both in memory and in a register, prefer the register as y'. If the value of y
is not already in L, generate the instruction MOV y', L to place a copy of y in L.
3. Generate the instruction OP z', L, where z' is the current location of z. If z is in both a
register and memory, prefer the register. Update the address descriptor of x to indicate that
x is in location L. If x is in L, update its descriptor and remove x from all other descriptors.
4. If the current values of y or z have no next uses, are not live on exit from the block, and
are in registers, alter the register descriptors to indicate that, after execution of x := y op z,
those registers will no longer contain y or z.
Generating Code for Assignment Statements:

The assignment statement d:= (a-b) + (a-c) + (a-c) can be translated into the following
sequence of three address code:

1. t := a-b
2. u := a-c
3. v := t+u
4. d := v+u
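A simplified version of the algorithm above, applied to this statement sequence, can be sketched as follows (the function names are my own; it assumes unlimited registers and no liveness information, so it is an illustration, not a full implementation):

```python
def generate(statements):
    """Sketch of code generation for three-address statements t := b op c,
    with a register descriptor tracking which value each register holds.
    (Simplified: unlimited registers, no liveness analysis, result
    overwrites the register holding the left operand.)"""
    opcodes = {"-": "SUB", "+": "ADD"}
    reg_of = {}              # value name -> register currently holding it
    code, next_reg = [], 0
    for target, left, op, right in statements:
        if left in reg_of:   # left operand already sits in a register
            reg = reg_of[left]
        else:                # otherwise load it into a fresh register
            reg = f"R{next_reg}"; next_reg += 1
            code.append(f"MOV {left}, {reg}")
        src = reg_of.get(right, right)
        code.append(f"{opcodes[op]} {src}, {reg}")
        # the register now holds the result, not the old value
        reg_of = {v: r for v, r in reg_of.items() if r != reg}
        reg_of[target] = reg
    return code

stmts = [("t", "a", "-", "b"), ("u", "a", "-", "c"),
         ("v", "t", "+", "u"), ("d", "v", "+", "u")]
for line in generate(stmts):
    print(line)
```

For this input the sketch emits MOV a, R0; SUB b, R0; MOV a, R1; SUB c, R1; ADD R1, R0; ADD R1, R0, reusing R0 for t, v, and d because each is last used where it is overwritten.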


Role of Code Generator


● From IR to target program.

● Must preserve the semantics of the source program.

– Meaning intended by the programmer in the original source program should carry forward in each
compilation stage until code-generation.

● Target code should be of high quality

– execution time or space or energy or …

● Code generator itself should run efficiently.

The main tasks of a code generator are:

1. instruction selection,

2. register allocation, and

3. instruction ordering.

Code Generator in Reality


● The problem of generating an optimal target program is undecidable.

● Several subproblems are NP-Hard (such as register allocation).

● Need to depend upon

– Approximation algorithms

– Heuristics

– Conservative estimates

(Diagram: Front end → Code optimizer → Code generator)

Issues in the Design of Code Generator


 The most important criterion is that it produces correct code
 Input to the code generator
 IR + Symbol table
 We assume front end produces low-level IR, i.e. values of names in it can be directly
manipulated by the machine instructions.
 Syntactic and semantic errors have been already detected
 The target program
 Common target architectures are: RISC, CISC and Stack based machines
 In this chapter we use a very simple RISC-like computer with addition of some CISC-like
addressing modes

INTRODUCTION

• The final phase of a compiler is the code generator.

• It receives an intermediate representation (IR) with supplementary information in the symbol
table.

• The requirements imposed on a code generator are severe.

• The target program must preserve the semantic meaning of the source program.

• The target program must be of high quality; that is, it must make effective use of the
available resources of the target machine.

• Compilers that need to produce efficient target programs include an optimization phase
prior to code generation.

• The optimizer maps the IR into IR from which more efficient code can be generated.

• In general, the code optimization and code-generation phases of a compiler, often referred to
as the back end, may make multiple passes over the IR before generating the target program.

• Code generator main tasks:


o Instruction selection
o Register allocation and assignment
o Instruction ordering

• Instruction selection involves choosing appropriate target- machine instructions to


implement the IR statements.

• Register allocation and assignment involves deciding what values to keep in which
registers.

• Instruction ordering involves deciding in what order to schedule the execution of


instructions.

ISSUES IN THE DESIGN OF A CODE GENERATOR

• The most important criterion for a code generator is that it produce correct code.

• Correctness takes on special significance because of the number of special cases that a
code generator might face.
Issues in Code Generation

• Input to the Code Generator

• Target Programs

• Memory Management

• Instruction Selection

• Register Allocation

• Choice of Evaluation Order

• Approaches to Code Generation

 The final phase of a compiler is code generator


 It receives an intermediate representation (IR) with supplementary information in symbol table
 Produces a semantically equivalent target program
 Code generator main tasks:
 Instruction selection
 Register allocation and assignment
 Instruction ordering
