
UNIT 1

In the field of programming languages, the concepts of syntax, semantics, and pragmatics are fundamental in understanding
how programming languages are structured and how they function. Let's explore each of these concepts in more detail:

1. Syntax: Syntax refers to the rules and structure of a programming language. It defines how statements and
expressions are formed and organized in order to create valid programs. Syntax rules specify the correct placement
of keywords, punctuation, and other elements in the language. A programming language's syntax often includes
rules for variable declaration, control flow structures (such as loops and conditionals), function definitions, and
more. Following the syntax rules of a programming language is essential to write programs that can be understood
and executed by a computer.
For example, in the programming language Python, the syntax for a simple "Hello, World!" program is:


print("Hello, World!")
Here, the print function is used to display the text "Hello, World!" on the console. The syntax specifies the use of
parentheses to enclose the text to be printed and the use of double quotes to denote a string literal.
2. Semantics: Semantics deals with the meaning or interpretation of programs. It defines how the statements and
expressions in a programming language are executed or evaluated. Semantics determine the behavior of programs,
including how variables are assigned values, how expressions are evaluated, and how control flow constructs alter
the flow of program execution. Semantics provide the rules for understanding the intended behavior of a program.
For example, consider the following code snippet in Python:


x = 5
y = 10
z = x + y
print(z)
In this code, the semantics of Python define that the variables x and y are assigned the values 5 and 10, respectively. The
expression x + y is evaluated, resulting in the value 15, which is then assigned to the variable z. Finally, the value of z is
printed to the console.
3. Pragmatics: Pragmatics refers to the practical aspects of programming languages, focusing on how programs are
written and used in real-world scenarios. It encompasses the context in which a program is written, the conventions
and best practices followed by developers, and the intended purpose and behavior of the program. Pragmatics
includes considerations such as code readability, maintainability, performance optimization, and adherence to
coding standards.
For example, in a programming language like Java, following pragmatic principles might involve writing code that is well-
organized, using meaningful variable names, and properly commenting the code to enhance readability and maintainability.
Pragmatic considerations also involve designing code that is efficient and meets specific performance requirements.

variables, expressions, and statements


1. Variable: A variable is a named storage location in a computer's memory that holds a value. It allows you to store and
manipulate data in your program. Variables have a name and a data type, which determines the kind of data that can be
stored in them (e.g., integers, floating-point numbers, strings, etc.). You can assign values to variables, read their values,
and modify them throughout the program.
For example, in Python, you can declare and assign a value to a variable like this:
x = 10
In this case, `x` is the variable, and it holds the value `10`.

2. Expression: An expression is a combination of values, variables, and operators that evaluate to a single value. It can be as
simple as a single constant value or a complex combination of operations. Expressions are used to perform computations
and produce results.
For example, in Python, you can have expressions like:
x + 5
Here, `x + 5` is an expression that adds the value of `x` to `5`. The result of this expression will depend on the current value
of `x`.

3. Statement: A statement is a complete instruction or command that performs a specific action in a program. Statements
can include variable declarations, assignments, function calls, loops, conditionals, and more. They control the flow of
execution and determine the behavior of the program.
For example, in Python, you can have statements like:
print("Hello, world!")
This statement prints the text "Hello, world!" to the console when executed.

Binding time spectrum


The concept of the "binding time spectrum" in programming languages refers to the different points in the software
development process at which certain characteristics or decisions become fixed or "bound." It describes a range of
possibilities in terms of when specific aspects of a program are determined or resolved. These aspects include variable
bindings, memory allocations, code generation, and other characteristics related to the execution of the program.

The terms "binding" and "binding time" are related to the process of associating names or variables with their
corresponding values or resources in a program. However, they refer to different aspects of this process:

1. Binding: Binding is the association or connection between a name or identifier and the entity it represents. It
establishes the relationship between a variable name and its value or a function name and its code. When a
program executes, it needs to resolve these bindings to access the appropriate data or execute the correct code.
For example, in the statement "x = 10;", the name "x" is bound to the value 10. In a function declaration like "def foo():",
the name "foo" is bound to the code that defines the function.

2. Binding Time: Binding time refers to the point in the program's life cycle when a particular binding is determined or
established. It can be classified into different phases or stages, such as compile time, load time, link time, and run
time.
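As a concrete sketch, Python makes several of these binding times visible: most bindings in Python are established late, at run time (the names below are illustrative):

```python
# In Python, most bindings are established at run time.

x = 10                     # the name x is bound to the value 10 when this line executes

def area(r, pi=3.14159):   # the default value of pi is bound once,
    return pi * r * r      # when the def statement executes (definition time)

print(area(2.0))           # which function body runs is resolved at call time

# Rebinding: the same name can later be bound to a different value,
# and in Python even the type binding is deferred to run time.
x = "now a string"
print(x)
```

By contrast, in a statically typed, compiled language such as C, the type of a variable is bound at compile time and cannot change at run time.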
Assignment
In programming, an assignment refers to the act of assigning a value to a variable. It is a fundamental operation that allows
you to store and manipulate data in your program. When you assign a value to a variable, you are essentially storing that
value in the variable's memory location.

In most programming languages, the assignment operator is denoted by the equals sign (=). The syntax for an assignment
statement typically follows this pattern:

variable = value;

Here, "variable" is the name of the variable to which you want to assign a value, and "value" represents the data that you
want to store in the variable. The value can be a constant, the result of an expression, or the contents of another variable.
Here's an example in Python:
x = 5
In this example, the variable "x" is assigned the value 5. After this assignment, the variable "x" will hold the value 5, and you
can use it in your program as needed.

Assignments can be performed with different types of values, such as numbers, strings, booleans, or even more complex
data structures like arrays or objects, depending on the programming language you're using.

Assignments are essential for manipulating and working with data in a program. By assigning values to variables, you can
store information, perform calculations, and modify data throughout your code.
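The forms of assignment described above can be sketched in Python:

```python
# Assigning different kinds of values to variables
count = 42                    # a constant (literal) value
total = count + 8             # the result of an expression
copy_of_total = total         # the contents of another variable
name = "Ada"                  # a string
flags = [True, False, True]   # a more complex structure (a list)

count = count + 1             # reassignment: the old value 42 is replaced by 43
print(count, total, copy_of_total)   # 43 50 50
```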

In programming, "i" and "r" are often used as variable names to represent different types of values, depending on the
context. Here are some common interpretations for "i" and "r":

1. "i" as an integer value: "i" is frequently used as a variable name to represent an integer value in loops, counters, or array
indices. For example, in a for loop, you might see:

for i in range(10):
    print(i)

In this case, "i" takes on integer values from 0 to 9 in each iteration of the loop.

2. "i" as a complex number: In mathematics, "i" denotes the imaginary unit. Python, however, writes the imaginary unit with a "j" suffix (following electrical-engineering convention, and avoiding confusion with "i" as a common loop variable):
c = 3 + 4j
Here, "3" is the real part, and "4j" is the imaginary part.

3. "r" as a floating-point or real number: "r" is sometimes used as a variable name to denote a real or floating-point value. It
could represent a radius, a ratio, or any other quantity that involves real numbers. For example:

radius = 5.0
circumference = 2 * 3.14159 * radius
print(circumference)

In this case, the variable (written out as "radius" rather than "r") represents the radius of a circle, and the circumference is calculated using the formula 2πr.

It's important to note that the choice of variable names is not limited to these interpretations. Programmers can use "i" and
"r" as variable names for various purposes, depending on the conventions or specific requirements of the programming
language and the context of the code.

"Environment" and "Stores"


In programming languages, the terms "environment" and "stores" are used to refer to different concepts. Let's
take a closer look at each of them:

1. Environment: In programming, an environment typically refers to a data structure that holds variables and their
associated values. It represents the context in which a program or a function is executed. The environment provides a way
to store and access variables and their bindings (values).

Depending on the programming language, an environment can take various forms. For example:

1. In languages like Python, each running program has a global environment, which is a dictionary-like structure that maps
variable names to their values. Additionally, functions can have their own local environments (sometimes referred to as
"lexical environments") that store the variables specific to that function.
2. In languages like JavaScript, environments are often implemented using a combination of lexical environments and
scope chains. Lexical environments represent the variables in a specific scope, while scope chains provide a way to
access variables from outer scopes.
The environment is crucial for managing variables and their values during program execution, enabling variable
assignment, retrieval, and scoping.

2. Stores: In programming language semantics, a store models the machine's memory: it is a mapping from storage locations (addresses) to the values currently held in them. The environment and the store together describe the state of a running program: the environment maps names to locations (or other denotations), and the store maps locations to values. Looking up a variable is therefore a two-step process: the name is resolved to a location via the environment, and the location is resolved to a value via the store.

This separation makes the effect of assignment precise: a declaration extends the environment with a new binding, while an assignment leaves the environment unchanged and updates the store at the variable's location. It also explains aliasing: when two names are bound to the same location, an assignment through one name changes the value seen through the other.

Overall, the environment records which names are in scope and what they refer to, while the store records the current contents of memory; together they underpin the semantics of assignment, parameter passing, and scoping.
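Environments are directly inspectable in Python (globals() and locals() return the current name-to-value mappings), and aliasing shows what happens when two names share the same underlying storage; a small sketch:

```python
x = "global value"

def demo():
    y = "local value"
    print(locals()["y"])     # the function's local environment maps y to its value
    print(globals()["x"])    # the module's global environment maps x to its value

demo()

# Aliasing: two names bound to one underlying object (one "cell" of storage).
a = [1, 2, 3]
b = a                # b is bound to the same list object as a
b.append(4)          # an update made through b ...
print(a)             # [1, 2, 3, 4] ... is visible through a, because they share storage
```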

Constants
1. a value that cannot be altered by the program during normal execution, i.e., the value is
constant.
2. when associated with an identifier, a constant is said to be “named,” although the terms
“constant” and “named constant” are often used interchangeably
3. contrasted with a variable, which is an identifier with a value that can be changed during
normal execution, i.e., the value is variable
4. useful for both programmers and compilers
• for programmers, they are a form of self-documenting code and allow reasoning about correctness
• for compilers, they allow compile-time and run-time checks that verify that constancy
assumptions are not violated, and allow or simplify some compiler optimizations.
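Python has no language-enforced constants; the convention is ALL_CAPS names, and typing.Final lets a static checker (not the interpreter) flag reassignment. A sketch under those assumptions:

```python
from typing import Final

PI: Final = 3.14159        # ALL_CAPS convention + Final annotation
MAX_USERS: Final[int] = 100

def circle_area(r: float) -> float:
    return PI * r * r

print(circle_area(1.0))    # 3.14159

# PI = 3.0   # would run, but a static checker such as mypy reports it as an
#            # error, and it violates the naming convention
```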
Initialization
1. assignment of an initial value for a data object or variable
2. manner in which initialization is performed depends on programming language, as well as
type, storage class, etc., of an object to be initialized
3. constructs which perform initialization are typically called initializers and initializer lists.
4. distinct from (and preceded by) declaration, although sometimes conflated in practice
5. done either by statically embedding the value at compile time, or else by assignment at run time
UNIT 2

Primitive data types are the most basic and fundamental types of data that can be used to represent values in a
programming language. These data types are typically provided by the programming language itself and are not
composed of other data types. Primitive data types are usually built into the language and have a fixed size and
behavior.

Non-primitive data types, also known as composite or reference data types, are data types that are derived from
primitive data types and can hold a collection of values or objects. These data types are typically defined by the
programming language or can be created by the programmer
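As an illustration, in Python the built-in numeric and boolean types play the role of basic types, while lists, dictionaries, and user-defined classes are composite types built from them (Python represents all of these as objects, so "primitive" is used loosely here):

```python
# Basic built-in types
age = 30             # int
height = 1.75        # float
active = True        # bool

# Composite types built from simpler values
scores = [90, 85, 77]                  # list of ints
person = {"name": "Ada", "age": 30}    # dict mixing strings and ints

class Point:                           # a user-defined composite type
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(2, 3)
print(type(age).__name__, type(scores).__name__, type(p).__name__)  # int list Point
```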

Pointers

Pointers are variables in programming languages that store the memory address of another variable. They are used to
indirectly access and manipulate data stored in memory. Pointers allow efficient memory management and provide
flexibility in programming.
In languages like C and C++, pointers are an integral part of the language and are widely used. They can be used to perform
operations such as passing parameters by reference, dynamically allocating memory, and implementing data structures like
linked lists, trees, and graphs.
How to use a pointer?
1. Declare a pointer variable.
2. Assign it the address of a variable using the unary address-of operator (&), which yields the
address of its operand.
3. Access the value stored at that address using the unary dereference operator (*), which yields the
value at the address given by its operand.

datatype *var_name;
int *ptr; // ptr can point to an address which holds int data
Wild pointer
In computer programming, a wild pointer refers to a pointer that is uninitialized or has been freed (deallocated) but is still
being used. It is called "wild" because its value is unpredictable or arbitrary, pointing to a memory location that may or may
not contain valid data.

Using a wild pointer can lead to unexpected behavior and crashes in a program. When a pointer is uninitialized, it does not
point to a valid memory location, so attempting to read or write through that pointer can result in accessing random or
invalid data. Similarly, if a pointer has been freed, the memory it was pointing to may have been deallocated and potentially
reused for other purposes, causing data corruption or access violations.

Handling wild pointers is essential to ensure the stability and security of your program. A wild pointer is an
uninitialized (or already freed) pointer that points to an arbitrary memory location, so dereferencing it leads to
undefined behavior. Here are some steps to handle wild pointers effectively:

1. Initialize Pointers: Always initialize pointers before using them. Assign them a valid memory address or set them to
NULL if they are not immediately assigned a valid value.

2. Null Check: Check if the pointer is NULL before dereferencing it. This check ensures that the pointer is pointing to a
valid memory location before accessing or modifying its value.

3. Avoid Uninitialized Pointers: Avoid using uninitialized pointers. Make sure to assign them a valid memory address
before using them.

4. Use Dynamic Memory Allocation Correctly: When using dynamic memory allocation functions like malloc, calloc,
or realloc, ensure that the allocated memory is assigned to the pointer correctly. Failure to do so can result in wild pointers.

Type conversion and type casting

Type conversion and type casting are related concepts, but they are not exactly the same thing.
Type Conversion:
Type conversion, also known as type coercion or implicit conversion, refers to the automatic conversion of one data
type to another by the programming language itself. In type conversion, the language automatically converts the value
of one type to another type, if it is necessary and possible. For example, if you assign an integer value to a variable of
type float, the language will automatically convert the integer to a floating-point value.

Type Casting:
Type casting, on the other hand, refers to the explicit conversion of one data type to another by the programmer. In
type casting, the programmer explicitly specifies the desired data type and instructs the programming language to
convert the value accordingly. This can involve narrowing or widening the data type. For example, if you have a variable
of type float and you want to treat it as an integer, you can use type casting to convert the value to an integer explicitly.

double x, y;
x = 3;          // implicit conversion (coercion)
y = (double) 5; // explicit conversion (casting)

In summary, type conversion happens automatically by the language itself, while type casting is an explicit conversion
done by the programmer.

Differences between Type Coercion and Type Casting


Type coercion and type casting are two methods of converting data types in Python, but they differ in several ways.
Some of the key differences between type coercion and type casting include:
1. Control: Type coercion is an implicit process, whereas type casting is explicit and performed by the programmer. Type
casting provides greater control over the conversion process, as the programmer can specify the desired data type.
2. Predictability: Type coercion can result in unexpected results, as it is performed according to predefined rules. Type
casting, on the other hand, is more predictable, as the programmer specifies the desired data type.
3. Loss of Information: Type coercion can result in loss of information or precision, especially when converting from a
higher precision data type to a lower precision data type. Type casting can also result in loss of information, but the
programmer has greater control over the process and can take measures to preserve necessary information.
4. Flexibility: Type coercion is limited to predefined rules, whereas type casting allows for more flexibility in converting
data types.
Advantages of Type Coercion:
1. Simplifies code: Type coercion is an implicit process, and as such, it can simplify code by eliminating the need for
explicit type casting.
2. Easy to use: Type coercion is easy to use, as it is performed automatically by the Python interpreter.
Disadvantages of Type Coercion:
1. Lack of control: Type coercion is an implicit process and as such, provides limited control over the type conversion
process.
2. Unexpected results: Type coercion can result in unexpected results, especially when working with large or complex
data sets.
3. Loss of information: Type coercion can result in loss of information or precision, especially when converting from a
higher precision data type to a lower precision data type.
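Both behaviors can be seen in Python (a small sketch):

```python
# Type coercion (implicit): the int is promoted to float in mixed arithmetic
result = 3 + 0.5
print(result, type(result).__name__)   # 3.5 float

# Type casting (explicit): the programmer requests the conversion
x = int(3.9)       # narrowing: the fractional part is lost (truncates toward zero)
y = float("5")     # converting a string to a float
print(x, y)        # 3 5.0
```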
Type equivalence
The notion of type equivalence refers to the comparison or equality of types in a programming language or type system.
Type equivalence determines whether two types are considered the same or compatible based on certain criteria.

In programming languages, types define the nature of data and how it can be manipulated. For example, in a statically
typed language, variables are assigned a specific type at compile time, and type checking is performed to ensure that
operations are performed on compatible types.

There are different ways in which type equivalence can be defined, and the specific rules depend on the programming
language and its type system. Here are some common notions of type equivalence:

1. Structural Equivalence: Two types are considered equivalent if their structures match. In this case, the internal
structure or members of the types are compared to determine equivalence. This is often used in languages like
Haskell or ML.

2. Name Equivalence: Two types are considered equivalent if they have the same name. In this case, the names of the
types are compared to determine equivalence. This is commonly used in languages like Java or C#.

3. Type Compatibility: Two types are considered equivalent if they can be used interchangeably in a program without
causing type errors. This notion of equivalence is based on the compatibility of operations and assignments
between the types.
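These notions can be contrasted in Python, where isinstance against an ordinary class is a name-based (nominal) check, while typing.Protocol performs a structural check; HasArea and Square below are illustrative names:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class HasArea(Protocol):       # structural: any type with a matching area() conforms
    def area(self) -> float: ...

class Square:                  # note: Square does NOT name HasArea as a base class
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:
        return self.side * self.side

s = Square(3.0)
print(isinstance(s, HasArea))  # True: the structure matches, despite no named relation
print(isinstance(s, Square))   # True: nominal check against the class itself
```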

Polymorphism

Polymorphism in Java is a concept by which we can perform a single action in different ways. Polymorphism is derived
from 2 Greek words: poly and morphs. The word "poly" means many and "morphs" means forms. So polymorphism
means many forms.
1) Function overloading
Function overloading is defined as using one function name for different purposes. Here, one function name performs many
tasks, distinguished by the function signature (the number and types of arguments). It is an example of compile-time
polymorphism because which function is to be called is decided at the time of compilation.

#include <iostream>
using namespace std;

void add(int a, int b)
{
    cout << "sum = " << (a + b);
}

void add(double a, double b)
{
    cout << endl << "sum = " << (a + b);
}

// Driver code
int main()
{
    add(10, 2);
    add(5.3, 6.2);
    return 0;
}

Output
sum = 12
sum = 11.5

Syntax for function overriding:

class Parent {
access_modifier:
    // overridden function
    return_type name_of_the_function() {}
};

class child : public Parent {
access_modifier:
    // overriding function
    return_type name_of_the_function() {}
};
Function overriding
When we say a function is overridden, it means that a new definition of that function is given in the derived class. Thus, in
function overriding, we have two definitions of the function: one in the base class and the other in the derived class. The
decision of which function to call is made at run time.
Function overriding is a feature in object-oriented programming that allows a subclass to provide its own implementation of
a method that is already defined in its superclass. When a subclass overrides a method, it provides a specialized
implementation of that method that is specific to the subclass.

In most object-oriented programming languages, including Java, C++, and Python, function overriding is achieved by
declaring a method with the same name, return type, and parameters in the subclass as the one defined in the superclass.
By doing so, the subclass effectively replaces the implementation of the method inherited from the superclass with its own
implementation.
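A minimal Python sketch of overriding and the run-time choice it enables (class names are illustrative):

```python
class Animal:
    def speak(self):
        return "some generic sound"

class Dog(Animal):
    def speak(self):              # overrides Animal.speak
        return "woof"

class Cat(Animal):
    def speak(self):              # overrides Animal.speak
        return "meow"

# Which speak() runs is decided at run time by the actual object's type.
for animal in [Animal(), Dog(), Cat()]:
    print(animal.speak())
# some generic sound
# woof
# meow
```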

Virtual functions

Virtual functions are a key feature of object-oriented programming (OOP) languages, such as C++ and Java. They allow a
class to provide a common interface for different derived classes and enable dynamic dispatch, which means the
appropriate function implementation is selected at runtime based on the actual object type.

In C++, virtual functions are declared in a base class and can be overridden by derived classes. The base class declares a
function as virtual using the virtual keyword, and the derived class provides its own implementation of the function with the
same signature
Inheritance in Java
Inheritance in Java is a mechanism in which one object acquires all the properties and behaviors of a parent object. It is an
important part of OOPs (Object Oriented programming system).

The idea behind inheritance in Java is that you can create new classes that are built upon existing classes. When you inherit
from an existing class, you can reuse methods and fields of the parent class. Moreover, you can add new methods and fields
in your current class also.
Inheritance represents the IS-A relationship which is also known as a parent-child relationship.
Why use inheritance in java
o For Method Overriding (so runtime polymorphism can be achieved).

o For Code Reusability.

Terms used in Inheritance


o Class: A class is a group of objects which have common properties. It is a template or blueprint from which objects
are created.
o Sub Class/Child Class: Subclass is a class which inherits the other class. It is also called a derived class, extended class,
or child class.
o Super Class/Parent Class: Superclass is the class from where a subclass inherits the features. It is also called a base
class or a parent class.
o Reusability: As the name specifies, reusability is a mechanism which facilitates you to reuse the fields and methods
of the existing class when you create a new class. You can use the same fields and methods already defined in the
previous class.

The syntax of Java Inheritance


1. class Subclass-name extends Superclass-name
2. {
3. //methods and fields
4. }

The extends keyword indicates that you are making a new class that derives from an existing class. The meaning of "extends"
is to increase the functionality.

In the terminology of Java, a class which is inherited is called a parent or superclass, and the new class is called child or subclass

Java Inheritance Example


Types of inheritance in java

On the basis of class, there can be three types of inheritance in java: single, multilevel and hierarchical. In java programming,
multiple and hybrid inheritance is supported through interface only. We will learn about interfaces later.

Note: Multiple inheritance is not supported in Java through class.

When one class inherits multiple classes, it is known as multiple inheritance.

1. Single inheritance: In single inheritance, a class inherits properties and behaviors from a single parent class. This means
that one class serves as the base or parent class, and another class derives from it as a child or derived class.

2. Multiple inheritance: Multiple inheritance allows a class to inherit properties and behaviors from more than one parent
class. In this case, a child class can have multiple base or parent classes, and it inherits the combined attributes and
methods of all the parent classes.

3. Multilevel inheritance: Multilevel inheritance involves a series of classes in a hierarchical manner, where each class
serves as the base class for the one below it. In this type of inheritance, the child class inherits properties and behaviors
from its immediate parent class, which in turn inherits from its own parent class, and so on.

4. Hierarchical inheritance: Hierarchical inheritance occurs when multiple classes inherit from a single base or parent class.
In this type of inheritance, the parent class serves as the common ancestor for multiple child classes, each of which may
have its own additional properties and behaviors.

5. Hybrid inheritance: Hybrid inheritance is a combination of multiple types of inheritance. It can be achieved by combining
any of the above types, such as multiple inheritance with multilevel inheritance or hierarchical inheritance.

6. Interface inheritance: Interface inheritance refers to the inheritance of method signatures or contract-like structures
from an interface. An interface defines a set of methods that a class must implement, and multiple classes can inherit
from the same interface.
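Unlike Java classes, Python classes do allow multiple inheritance, so several of the shapes above can be sketched in one place (class names are illustrative):

```python
class A:                  # base class
    def hello(self):
        return "A"

class B(A):               # single inheritance: B derives from A
    pass

class C(B):               # multilevel inheritance: C -> B -> A
    pass

class D(A):               # hierarchical inheritance: B and D share the parent A
    pass

class E(B, D):            # multiple inheritance (allowed in Python, unlike Java classes)
    pass

print(C().hello())        # "A", inherited through the chain C -> B -> A
print([cls.__name__ for cls in E.__mro__])   # Python's method resolution order for E
```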

Type parameterization

Type parameterization, also known as generics or parametric polymorphism, is a feature in programming languages that allows
the definition of generic types or functions that can operate on a variety of data types. It enables the creation of reusable code
components by abstracting over specific types and providing flexibility and type safety.

With type parameterization, you can define a class, interface, or method with one or more type parameters, which act as
placeholders for specific types. These type parameters can be used within the component's definition, allowing it to work with
different types without sacrificing type safety. When the component is used, the type parameters are replaced with actual types,
and the code is generated or compiled accordingly.
Type parameterization using generics:

public class Box<T> {
    private T content;

    public void setContent(T content) {
        this.content = content;
    }

    public T getContent() {
        return content;
    }
}

public class Main {
    public static void main(String[] args) {
        Box<Integer> integerBox = new Box<>();
        integerBox.setContent(42);
        int content = integerBox.getContent();
        System.out.println(content); // Output: 42

        Box<String> stringBox = new Box<>();
        stringBox.setContent("Hello, Generics!");
        String message = stringBox.getContent();
        System.out.println(message); // Output: Hello, Generics!
    }
}

In this example, the Box class is defined with a type parameter T. This allows the class to be instantiated with different
types, such as Integer and String. The setContent and getContent methods can then handle the specific type T accordingly.

Type parameterization provides several benefits:

1. Reusability: Generic components can be used with different data types, eliminating the need to rewrite similar code for
each type.
2. Type Safety: The compiler can enforce type constraints on the usage of generics, reducing the likelihood of runtime type
errors.
3. Abstraction: Generic types and functions can abstract over common behaviors or concepts, making code more modular
and expressive.

Abstract Data Types

1. Stack: A stack is a Last-In-First-Out (LIFO) data structure where elements are inserted and removed from one end,
called the top.
Stack ADT
1. In Stack ADT Implementation instead of data being stored in
each node, the pointer to data is stored.
2. The program allocates memory for the data and address is
passed to the stack ADT.
3. The head node and the data nodes are encapsulated in the ADT.
The calling function can only see the pointer to the stack.
4. The stack head structure also contains a pointer
to top and count of number of entries currently in stack.
5. push() – Insert an element at one end of the stack called top.
6. pop() – Remove and return the element at the top of the stack,
if it is not empty.
7. peek() – Return the element at the top of the stack without
removing it, if the stack is not empty.
8. size() – Return the number of elements in the stack.
2. Queue: A queue is a First-In-First-Out 9. isEmpty() – Return true if the stack is empty, otherwise return
(FIFO) data structure where elements false.
10. isFull() – Return true if the stack is full, otherwise return false.

2. Queue: In a queue, elements are inserted at one end, called the rear, and removed from the other end, called the front. The queue abstract data type (ADT) follows the basic design of the stack abstract data type.

Queue ADT
• Each node contains a void pointer to the data and a link pointer to the next element in the queue. The program’s responsibility is to allocate memory for storing the data.
• enqueue() – Insert an element at the end of the queue.
• dequeue() – Remove and return the first element of the queue, if the queue is not empty.
• peek() – Return the first element of the queue without removing it, if the queue is not empty.
• size() – Return the number of elements in the queue.
• isEmpty() – Return true if the queue is empty, otherwise return false.
• isFull() – Return true if the queue is full, otherwise return false.

3. Linked List: A linked list is a collection of nodes where each node contains data and a reference (or link) to the next node in the sequence.

List ADT
• The data is generally stored in key sequence in a list which has a head structure consisting of a count, pointers, and the address of a compare function needed to compare the data in the list.
• The List ADT functions are given below:
• get() – Return an element from the list at any given position.
• insert() – Insert an element at any position in the list.
• remove() – Remove the first occurrence of any element from a non-empty list.
• removeAt() – Remove the element at a specified location from a non-empty list.
• replace() – Replace an element at any position with another element.
• size() – Return the number of elements in the list.
• isEmpty() – Return true if the list is empty, otherwise return false.
• isFull() – Return true if the list is full, otherwise return false.

Features of ADT:
Abstract data types (ADTs) are a way of encapsulating data and operations on that data into a single unit. Some of the key features of ADTs include:
• Abstraction: The user does not need to know the implementation of the data structure; only the essentials are provided.
• Better Conceptualization: ADT gives us a better conceptualization of the real world.
• Robust: The program is robust and has the ability to catch errors.
• Encapsulation: ADTs hide the internal details of the data and provide a public interface for users to interact with the data. This allows for easier maintenance and modification of the data structure.
• Data Abstraction: ADTs provide a level of abstraction from the implementation details of the data. Users only need to know the operations that can be performed on the data, not how those operations are implemented.
• Data Structure Independence: ADTs can be implemented using different data structures, such as arrays or linked lists, without affecting the functionality of the ADT.
• Information Hiding: ADTs can protect the integrity of the data by allowing access only to authorized users and
operations. This helps prevent errors and misuse of the data.
• Modularity: ADTs can be combined with other ADTs to form larger, more complex data structures. This allows
for greater flexibility and modularity in programming.

Advantages:
• Encapsulation: ADTs provide a way to encapsulate data and operations into a single unit, making it easier to
manage and modify the data structure.
• Abstraction: ADTs allow users to work with data structures without having to know the implementation
details, which can simplify programming and reduce errors.
• Data Structure Independence: ADTs can be implemented using different data structures, which can make it
easier to adapt to changing needs and requirements.
• Information Hiding: ADTs can protect the integrity of data by controlling access and preventing unauthorized
modifications.
• Modularity: ADTs can be combined with other ADTs to form more complex data structures, which can
increase flexibility and modularity in programming.

Disadvantages:
• Overhead: Implementing ADTs can add overhead in terms of memory and processing, which can affect
performance.
• Complexity: ADTs can be complex to implement, especially for large and complex data structures.
• Learning Curve: Using ADTs requires knowledge of their implementation and usage, which can take time and
effort to learn.
• Limited Flexibility: Some ADTs may be limited in their functionality or may not be suitable for all types of data
structures.
• Cost: Implementing ADTs may require additional resources and investment, which can increase the cost of
development.
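The queue operations listed earlier can be sketched in Python. This is an illustrative bounded queue: the fixed capacity is an assumption made here so that isFull() is meaningful, not part of the ADT definition itself.

```python
class Queue:
    """A bounded queue ADT: insert at the rear, remove from the front."""

    def __init__(self, capacity=10):
        self._items = []            # internal storage, hidden from users
        self._capacity = capacity   # assumed fixed capacity for isFull()

    def enqueue(self, item):
        if self.isFull():
            raise OverflowError("queue is full")
        self._items.append(item)    # insert at the rear

    def dequeue(self):
        if self.isEmpty():
            raise IndexError("queue is empty")
        return self._items.pop(0)   # remove from the front

    def peek(self):
        if self.isEmpty():
            raise IndexError("queue is empty")
        return self._items[0]

    def size(self):
        return len(self._items)

    def isEmpty(self):
        return len(self._items) == 0

    def isFull(self):
        return len(self._items) == self._capacity

q = Queue(capacity=3)
q.enqueue("a")
q.enqueue("b")
print(q.dequeue())  # a
print(q.size())     # 1
```

Note how callers use only the public operations; the choice of a Python list as internal storage could be swapped for a linked list without changing the interface, which is exactly the data structure independence described above.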

Information hiding

Information hiding, also known as encapsulation, is a fundamental principle in programming languages that aims to restrict
access to certain components or details of a program. It is a way of organizing code and data structures to hide internal
implementation details from the outside world, allowing developers to focus on using the code without worrying about its
internal complexities.

In programming, information hiding is typically achieved through the use of access modifiers, such as public, private, and
protected, which control the visibility and accessibility of variables, functions, or classes. Let's explore some common
techniques used for information hiding in programming languages:

1. Encapsulation: Encapsulation is the process of bundling data and methods together within a class. By defining
member variables as private or protected and providing public methods (also known as getters and setters) to
access or modify them, encapsulation ensures that the internal state of an object remains hidden from external
entities.

2. Access Modifiers: Programming languages like Java, C++, C#, and others provide access modifiers to control the
visibility of class members. Public members are accessible from anywhere, private members are accessible only
within the class itself, and protected members are accessible within the class and its subclasses.

3. Interface and Abstraction: Interfaces define a contract for a class, specifying a set of methods that must be
implemented. By programming against interfaces rather than concrete implementations, you can hide the
underlying implementation details and switch implementations without affecting the client code. Abstraction refers
to the process of hiding unnecessary details and providing a simplified interface to interact with an object or
module.
4. Modularization: Splitting code into modules or components helps in isolating different parts of the program. By
exposing only necessary interfaces or APIs and hiding the internal implementation details, you can control the
access to specific functionalities.

5. Namespacing: Namespaces or packages group related classes, functions, or variables under a common hierarchical
namespace. This helps in organizing code and avoids naming conflicts. By controlling the visibility of namespaces,
you can hide specific components from being accessed outside the designated scope.

6. Information Hiding in Object-Oriented Programming: In object-oriented programming languages, classes and objects are used to encapsulate data and behavior. By defining private member variables and providing public methods to access and modify them, you can ensure that the internal state of an object is not directly accessible from outside.
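These ideas can be sketched in Python, which hides attributes by name mangling rather than enforced access keywords (the BankAccount class here is a hypothetical example, not a standard library API):

```python
class BankAccount:
    def __init__(self, balance=0):
        self.__balance = balance   # name-mangled: hidden from outside code

    def deposit(self, amount):     # public method: the only way to modify
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount

    def get_balance(self):         # getter exposing read-only access
        return self.__balance

acct = BankAccount(100)
acct.deposit(50)
print(acct.get_balance())   # 150
# acct.__balance would raise AttributeError: the state is hidden
```

Because callers can only go through deposit() and get_balance(), the class can validate every change and the internal representation can be altered later without breaking client code.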

Abstraction
Abstraction in programming refers to the process of simplifying complex systems by representing only the essential features
and hiding unnecessary details. It is a fundamental concept in computer science and is used to manage complexity and
improve the maintainability, scalability, and reusability of software.

In programming languages, abstraction is achieved through various mechanisms, such as classes, objects, functions,
modules, and interfaces. These mechanisms allow developers to create abstract representations of real-world entities,
concepts, or processes, and encapsulate their behavior and properties.

By using abstraction, programmers can focus on high-level concepts and functionality without worrying about the internal
implementation details. It provides a way to break down a complex system into smaller, more manageable parts, and allows
different components to interact with each other through well-defined interfaces.

Abstraction also promotes code reuse and modularity. By creating abstract classes or interfaces, developers can define a set
of common behaviors that can be inherited or implemented by multiple concrete classes. This allows for creating reusable
code components and enables a more flexible and adaptable system architecture.

In terms of inheritance in programming languages, visibility refers to the accessibility or scope of members (variables,
methods, etc.) of a class or object. It determines whether the members can be accessed or called from other classes or
objects.

The most common visibility modifiers used in programming languages are:

1. Public: Public members are accessible from anywhere, both within the class itself and from external classes or
objects. They can be accessed directly using the dot operator.
2. Protected: Protected members are accessible within the class itself and its subclasses (derived classes). They are
not directly accessible from external classes or objects. However, subclasses can access the protected members
inherited from the superclass.

3. Private: Private members are only accessible within the class itself. They cannot be accessed from external classes
or objects, including subclasses. Private members are often used to encapsulate implementation details and provide
data hiding.
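Python does not enforce these modifiers with keywords; by convention a single underscore marks a member as protected (for the class and its subclasses) and a double underscore makes it private via name mangling. A minimal sketch, with hypothetical class names:

```python
class Vehicle:
    def __init__(self):
        self._speed = 0           # "protected" by convention (single underscore)
        self.__vin = "UNKNOWN"    # "private": name-mangled to _Vehicle__vin

    def vin(self):                # public accessor for the private member
        return self.__vin

class Car(Vehicle):
    def accelerate(self):
        self._speed += 10         # a subclass may use the protected member
        return self._speed

c = Car()
print(c.accelerate())  # 10
print(c.vin())         # UNKNOWN
```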

Procedures and Modules

In programming languages, procedures and modules are two important concepts that help organize and structure code.

1. Procedures: Procedures, also known as functions or subroutines, are reusable blocks of code that perform a specific task.
They are defined with a name and can take input parameters (arguments) and return a value. By using procedures, you
can break down a complex program into smaller, manageable units, making the code more modular, readable, and easier
to maintain.

Procedures encapsulate a series of instructions that can be invoked (called) multiple times from different parts of the
program. They promote code reuse and eliminate redundancy by centralizing common functionality in one place. When a
procedure is called, the program transfers control to that procedure, executes the instructions within it, and then returns
to the calling location.
Here's a simple example of a procedure in Python that calculates the sum of two numbers:

def add_numbers(a, b):
    return a + b

result = add_numbers(3, 5)
print(result)  # Output: 8

2. Modules: Modules are files or libraries that contain a collection of related procedures, functions, and variables. They
provide a way to organize and group code logically, allowing for better code management, code reuse, and separation of
concerns.

Modules offer a level of abstraction, allowing developers to use predefined functionalities without needing to know the
internal implementation details. They help in organizing large-scale programs by breaking them down into smaller, self-
contained units.

Programming languages typically provide mechanisms to import and use modules in your code. You can either use built-
in modules provided by the programming language itself or create your own custom modules. Importing a module makes
its procedures, functions, and variables accessible to your code.

Here's an example of using the math module in Python to calculate the square root of a number:

import math

result = math.sqrt(16)
print(result)  # Output: 4.0
In this example, the math module provides the sqrt() function, which is used to calculate the square root of a number.

Class:

Class: A class is a blueprint or a template that defines the structure and behavior of objects. It encapsulates data (attributes)
and functions (methods) that define the properties and actions associated with objects of that class. For example, if you have
a class called "Car," it would define the characteristics (attributes) like color, model, and speed, as well as behaviors
(methods) such as accelerating, braking, and turning.
Object

Object: An object is an instance of a class. It represents a specific entity or element created based on the defined class.
When you create an object, it has its own set of attributes and can perform actions defined by the class's methods.
Continuing with the "Car" class example, you can create multiple objects such as "myCar," "yourCar," and "theirCar,"
each with its own unique set of attributes and behaviors.

Package

In the context of programming languages, a package refers to a collection of related modules or libraries that provide
additional functionality to a programming language. Packages are designed to organize and encapsulate code, making it
easier to manage and reuse.

1. Python: Python has a built-in package manager called pip, which allows you to install, manage, and use packages.
Some commonly used packages in Python include NumPy (numerical computing), pandas (data analysis), matplotlib
(data visualization), and requests (HTTP requests).
2. JavaScript: In JavaScript, packages are managed using npm (Node Package Manager) or yarn.
3. Java: In Java, packages are used to organize classes and provide a hierarchical namespace. Examples of Java
packages include java.util (contains utility classes), java.io (input/output operations), and javax.swing (graphical
user interface components).
Static and dynamic scope

Static and dynamic scope are two different approaches to variable binding in programming languages. They determine how
variables are resolved and accessed within a program.

1. Static Scope (also known as Lexical Scope): Static scope is determined by the program's structure and is usually fixed at
compile time. In static scoping, variable bindings are determined based on the location of their declaration in the source
code. The scope of a variable is determined by its block or function in which it is declared. Any references to that
variable within the same block or function, or in nested blocks or functions, will use the same variable binding. Static
scoping allows for early binding, as the bindings are determined before runtime.

Example (in a pseudo-code-like syntax):

var x = 10

function foo() {
    var y = 20
    bar()
}

function bar() {
    print(x) // Will print 10
    print(y) // Error: y is not defined in the scope of `bar`
}

foo()
In the example above, x is defined in the outer scope, so it is accessible from both foo and bar. However, y is defined within
the scope of foo and is not accessible in bar because it is not in the same or a nested scope.
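The same behavior can be observed directly in Python, which uses lexical scoping. Here the results are collected in a list so the outcome is easy to inspect:

```python
x = 10
log = []

def foo():
    y = 20
    bar()

def bar():
    log.append(x)      # 10: x is resolved in the enclosing (global) scope
    try:
        log.append(y)  # fails: y exists only in foo's local scope
    except NameError:
        log.append("y is not defined in the scope of bar")

foo()
print(log)  # [10, 'y is not defined in the scope of bar']
```

Even though bar() is called from inside foo(), it cannot see foo's local variable y, because lookup follows the program text, not the call chain.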

2. Dynamic Scope: Dynamic scope determines variable bindings based on the program's execution flow at runtime rather
than its structure. In dynamic scoping, the scope of a variable is determined by the calling context. When a function or
block is invoked, the variables it references are searched for in the calling context before searching in its own scope.
Dynamic scoping allows for late binding, as the bindings are determined dynamically at runtime.

Example (in a pseudo-code-like syntax):

var x = 10

function foo() {
    var y = 20
    bar()
}

function bar() {
    print(x) // Will print 10
    print(y) // Will print 20
}

foo()
In the example above, bar is called within the context of foo, so it has access to both x and y defined in foo. The variable
lookup is performed dynamically based on the calling context rather than the lexical structure.
It's worth noting that while static scope is more common in programming languages, dynamic scope is less frequently used
due to its potential for introducing unexpected behaviors and making code harder to reason about.
Implicit and Explicit sequencing
In programming languages, both implicit and explicit sequencing refer to the order in which statements or operations are
executed.

1. Implicit Sequencing:
Implicit sequencing is when the order of execution is determined by the structure of the program itself, without the need for
explicit instructions. This is often seen in imperative programming languages where statements are executed sequentially,
one after the other. Here's an example in Python:

x = 5
y = 10
z = x + y
print(z)
In the above code, the statements are implicitly sequenced. The assignment to `x` is executed first, followed by the
assignment to `y`. Finally, the addition of `x` and `y` is assigned to `z`, and then `z` is printed. The implicit sequencing is
determined by the order of the statements in the program.

2. Explicit Sequencing:
Explicit sequencing is when the programmer explicitly specifies the order in which statements or operations should be
executed. This is often achieved using control flow statements such as conditionals (if-else) or loops. Here's an example using
an explicit sequencing construct, an if statement, in Python:

x = 5
y = 10
if x > y:
    print("x is greater than y")
else:
    print("x is not greater than y")
In this code, the order of execution depends on the outcome of the condition `x > y`. If the condition is true, the first print
statement is executed. Otherwise, the second print statement is executed. The explicit sequencing is determined by the
control flow construct, in this case, the if-else statement.

Data Control vs. Sequence Control

• Data control: the control of the communication of data among the subprograms of a program.
  Sequence control: the control of the order of execution of operations, both primitive and user-defined.

• Data control is ruled by the dynamic and static scope rules for an identifier.
  Sequence control is ruled by notations in expressions and the hierarchy of operations.

• A data object can be made available through two methods: direct transmission and transmission through reference.
  Sequence control structures can be either explicit or implicit. Implicit sequence control structures are those represented by the language, and explicit ones are those that the programmer may optionally use.

• Data control structures may be categorized according to the referencing environment of data.
  Sequence control structures can be conventionally categorized into three groups: structures used in expressions, structures used between statements, and structures used between subprograms.

• Data control is concerned with the binding of identifiers to specific data objects and subprograms.
  Sequence control is concerned with decoding instructions and expressions into executable form.
Explicit sequencing allows for more flexible control over the flow of execution, as the programmer can conditionally execute
statements based on specific conditions or repeatedly execute statements using loops.

Sequence control
Sequence control in programming languages refers to the order in which statements or instructions are executed within
a program. By default, statements are executed sequentially, meaning that each statement is executed one after the
other, in the order they appear in the code.

However, programming languages also provide various control flow structures that allow you to alter the sequence of
execution. These control flow structures enable you to conditionally execute statements, repeat statements, or jump to
a different part of the code based on certain conditions. Here are some common control flow structures:

1. Conditional Statements: These statements allow you to execute a block of code based on a condition. The most
common conditional statement is the "if" statement, which checks a condition and executes a block of code if the
condition is true. It may also include "else" and "else if" clauses to handle different cases. Examples of conditional
statements include:

if condition:
    # Code to be executed if the condition is true
else:
    # Code to be executed if the condition is false

2. Loops: Loops allow you to repeatedly execute a block of code. There are typically two types of loops:

a. "for" loop: It iterates over a sequence of elements a predetermined number of times.


for variable in sequence:
    # Code to be executed in each iteration

b. "while" loop: It repeatedly executes a block of code as long as a condition is true.


while condition:
    # Code to be executed in each iteration

3. Jump Statements: These statements allow you to change the normal sequential flow of execution. Common jump
statements include:
a. "break" statement: Terminates the execution of a loop and transfers control to the next statement after the loop.
while condition:
    if some_condition:
        break

b. "continue" statement: Skips the remaining code in a loop iteration and proceeds to the next iteration.
for variable in sequence:
    if some_condition:
        continue
    # Code to be executed if the condition is false

c. "return" statement: Exits a function and returns a value to the caller.


def some_function():
    # Code
    return value

These control flow structures allow you to control the sequence of execution and make your programs more flexible
and powerful by handling different conditions and scenarios.
Block structure

In programming, a block structure refers to the organization of code within a subprogram or function. It involves
grouping related statements together within a block, which can be defined by various control structures. The block
structure helps in organizing code, improving readability, and controlling the flow of execution within the subprogram.

Several control structures are commonly used to create block structures within subprograms:

1. Sequential Execution: The simplest form of block structure is sequential execution, where statements are executed
one after another in the order they appear within the subprogram. There is no explicit block notation, but a logical
block is formed by the sequential ordering of statements.

Example:
subprogramName()
    Statement 1
    Statement 2
    Statement 3
    ...
    Statement n
end subprogramName

2. Conditional Structures: Conditional structures introduce decision-making capabilities in a subprogram, allowing certain blocks of code to execute conditionally based on the evaluation of a Boolean expression.

Example (using if-else):

subprogramName()
    if condition
        Statement 1
    else
        Statement 2
    end if
end subprogramName

Example (using switch-case):

subprogramName()
    switch variable
        case value1:
            Statement 1
            break
        case value2:
            Statement 2
            break
        default:
            Statement 3
    end switch
end subprogramName

3. Looping Structures: Looping structures allow repetitive execution of a block of code until a specified condition is met.
Example (using while loop):
subprogramName()
    while condition
        Statement 1
    end while
end subprogramName

Example (using for loop):

subprogramName()
    for i = startValue to endValue
        Statement 1
    end for
end subprogramName
4. Exception Handling: Exception handling structures help in dealing with exceptional conditions that may arise during
the execution of a subprogram. They allow the programmer to handle errors or abnormal situations gracefully.

subprogramName()
    try
        Statement 1
    catch exception
        Statement 2
    finally
        Statement 3
    end try
end subprogramName

These are some of the common control structures used to create block structures within subprograms. The specific
syntax and available control structures may vary depending on the programming language you are using.
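In Python, the try/catch/finally pseudocode above maps onto try/except/finally. A small sketch, using division by zero to stand in for the exceptional condition:

```python
def safe_divide(a, b):
    try:
        result = a / b                  # statement that may raise an exception
    except ZeroDivisionError:
        result = None                   # handle the exceptional condition
    finally:
        print("division attempted")     # always runs, error or not
    return result

print(safe_divide(10, 2))  # 5.0
print(safe_divide(10, 0))  # None
```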

Subprogram sequence control


Subprogram sequence control refers to the flow of execution within a program that includes subprograms or functions.
There are three common control mechanisms related to subprograms: call, return, and recursive calls.

1. Call: When a subprogram is called, the control is transferred from the calling program to the called subprogram. The
called subprogram is executed, and any necessary parameters are passed to it. After the execution of the subprogram is
completed, control returns to the point immediately following the call statement in the calling program.

2. Return: The return statement is used within a subprogram to transfer control back to the calling program. When a
return statement is encountered, the subprogram completes its execution, and control returns to the point in the
calling program from where it was called. Additionally, a return statement may also provide a value or result back to the
calling program if needed.

3. Recursive Calls: Recursion occurs when a subprogram calls itself either directly or indirectly. Recursive calls can be
useful for solving problems that can be divided into smaller subproblems of the same nature. In a recursive call, the
subprogram executes itself, creating a new instance of the subprogram. Each instance of the subprogram has its own
set of variables and parameters. Recursive calls continue until a termination condition is met, at which point the control
returns back through each level of the recursion until the initial call is completed.
Here's a simple example in Python to illustrate these concepts:
def countdown(n):
    if n > 0:
        print(n)
        countdown(n - 1)
    else:
        print("Go!")

print("Countdown:")
countdown(5)

In this example, the `countdown` function is defined to recursively print a countdown from a given number. When the
`countdown` function is called with a positive number, it prints the number and then calls itself with `n - 1`. This process
continues until `n` reaches 0, at which point the base case is triggered, and "Go!" is printed. The control then returns back
through each level of the recursion until the initial call is completed.
Output:
Countdown:
5
4
3
2
1
Go!

Note that recursive calls should always have a termination condition to avoid infinite recursion.
UNIT 4

Concurrent programming

Concurrent programming is a programming paradigm that focuses on designing and implementing programs that can
execute multiple tasks or processes simultaneously. It involves writing programs that can perform multiple independent
tasks concurrently, allowing for efficient utilization of system resources and improved program performance.

In traditional sequential programming, code execution occurs in a linear fashion, with one task or instruction executed after
another. However, in concurrent programming, multiple tasks or threads can run concurrently, potentially executing
simultaneously on different processors or processor cores.

Concurrency can be achieved through various techniques, such as:

1. Threads: Threads are lightweight execution units within a process that can run concurrently. They share the same
memory space and resources of the parent process and can communicate with each other through shared variables
or message passing.

2. Processes: Processes are independent instances of a program that can run concurrently. Each process has its own
memory space and resources, and communication between processes typically involves inter-process
communication (IPC) mechanisms, such as pipes, sockets, or shared files.

3. Parallelism: Concurrent programming can also involve parallel execution, where tasks are divided into smaller subtasks that can be executed simultaneously on multiple processors or cores. This can lead to significant performance improvements, especially on systems with multiple processors or cores.

Communication in concurrent programming refers to the exchange of data and synchronization of activities between
concurrently executing threads or processes. In concurrent programming, multiple threads or processes run simultaneously,
and they may need to communicate with each other to share information, coordinate their actions, or avoid conflicts.
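A common pattern for such communication in Python is message passing through a thread-safe queue. The sketch below uses a sentinel value to signal the end of the data stream (a simplification; real programs would add error handling):

```python
import threading
import queue

channel = queue.Queue()   # thread-safe FIFO used as a message channel

def producer():
    for i in range(3):
        channel.put(i)    # send data to the consumer
    channel.put(None)     # sentinel value signalling "no more data"

results = []

def consumer():
    while True:
        item = channel.get()   # blocks until a message arrives
        if item is None:
            break
        results.append(item * 10)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 10, 20]
```

Because the queue itself handles the locking, the producer and consumer never touch each other's state directly, which avoids the conflicts described above.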

Monitors

Monitors are a synchronization construct that allows multiple threads or processes to safely access shared resources or sections
of code. Monitors provide a higher level of abstraction compared to lower-level primitives like locks or semaphores, making it
easier to reason about and control concurrent access.

1. A monitor is essentially a module that encapsulates a shared resource and provides access to that resource through a set
of procedures. The procedures provided by a monitor ensure that only one process can access the shared resource at any
given time, and that processes waiting for the resource are suspended until it becomes available.
2. Monitors are used to simplify the implementation of concurrent programs by providing a higher-level abstraction that
hides the details of synchronization. Monitors provide a structured way of sharing data and synchronization information,
and eliminate the need for complex synchronization primitives such as semaphores and locks.
3. The key advantage of using monitors for process synchronization is that they provide a simple, high-level abstraction that
can be used to implement complex concurrent systems. Monitors also ensure that synchronization is encapsulated within
the module, making it easier to reason about the correctness of the system.

Monitors are supported by programming languages to achieve mutual exclusion between processes, for example Java synchronized methods. Java also provides the wait() and notify() constructs.
1. It is the collection of condition variables and procedures combined together in a special kind of module or a
package.
2. The processes running outside the monitor can’t access the internal variable of the monitor but can call
procedures of the monitor.
3. Only one process at a time can execute code inside monitors.
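Python has no built-in monitor keyword, but a monitor-like module can be sketched with a lock and a condition variable; this is an illustration of the idea, not the Java syntax, and the BoundedCounter class is hypothetical:

```python
import threading

class BoundedCounter:
    """Monitor-style class: all access to the shared count goes through
    methods that hold the same lock, so only one thread runs inside at a time."""

    def __init__(self, limit):
        self._limit = limit
        self._count = 0
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)

    def increment(self):
        with self._not_full:                   # enter the monitor
            while self._count >= self._limit:
                self._not_full.wait()          # suspend until space is available
            self._count += 1

    def decrement(self):
        with self._not_full:
            self._count -= 1
            self._not_full.notify()            # wake one waiting thread

    def value(self):
        with self._lock:
            return self._count

counter = BoundedCounter(limit=2)
counter.increment()
print(counter.value())  # 1
```

The internal count and lock are never exposed; threads can only call the monitor's procedures, mirroring points 1 through 3 above.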
Thread
A thread is a single sequential flow of execution of tasks within a process, so it is also known as a thread of execution or a thread of control. A process can contain more than one thread. Each thread of the same process uses a separate program counter, a stack of activation records, and control blocks. A thread is often referred to as a lightweight process.

Life cycle of thread:

1. New: A new thread has been created but is not yet running. A thread after creation and before invocation of the start() method is in the new state.
2. Runnable: A thread after invocation of the start() method is in the runnable state. A thread in the runnable state is available to the thread scheduler.
3. Running: A thread is in the running state once the thread scheduler selects it for execution.
4. Blocked: A thread that is alive but neither runnable nor running is in the blocked state. A thread can be blocked because of the suspend(), sleep(), or wait() methods, or implicitly by the JVM in order to perform I/O operations.
5. Dead: A thread is in the dead state after exiting the run() method. The stop() method can be used to forcefully kill a thread.

The process can be split down into so many threads. For example, in a browser, many tabs can be viewed as threads. MS
Word uses many threads - formatting text from one thread, processing input from another thread, etc.
In the operating system, there are two types of threads.
1. Kernel level thread.
2. User-level thread.

Types of Threads:
User Level Thread (ULT) – Implemented in a user-level library; these threads are not created using system calls. Thread switching does not need to call the OS or cause an interrupt to the kernel. The kernel doesn't know about user-level threads and manages them as if they were single-threaded processes.
Advantages of ULT –
1. Can be implemented on an OS that doesn’t support multithreading.
2. Simple representation since thread has only program counter, register set, stack space.
3. Simple to create since no intervention of kernel.
4. Thread switching is fast since no OS calls need to be made.
Limitations of ULT –
1. No or less co-ordination among the threads and Kernel.

2. If one thread causes a page fault, the entire process blocks.

Kernel Level Thread (KLT) – The kernel knows about and manages the threads. Instead of a thread table in each process, the kernel itself has a master thread table that keeps track of all the threads in the system. In addition, the kernel also maintains the traditional process table to keep track of processes. The OS kernel provides system calls to create and manage threads.
Advantages of KLT –
1. Since the kernel has full knowledge about the threads in the system, the scheduler may decide to give more time to processes having a large number of threads.
2. Good for applications that frequently block.
Limitations of KLT –
1. Slow and inefficient.
2. It requires a thread control block, so it is an overhead.

Any thread has the following components.

1. Program counter
2. Register set
3. Stack space
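As a minimal sketch, creating and joining a thread in Python looks like this (Python threads follow a similar lifecycle, though the states are managed by the interpreter and the OS rather than by explicit method names):

```python
import threading

def worker(name, results):
    # Body of the thread: runs concurrently with the main thread
    results.append(f"hello from {name}")

results = []
t = threading.Thread(target=worker, args=("worker-1", results))  # new
t.start()   # runnable: handed to the scheduler
t.join()    # main thread waits until the worker thread is dead
print(results)  # ['hello from worker-1']
```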
Process synchronization

Process synchronization and thread synchronization are two concepts related to coordinating the execution of
processes and threads in a concurrent system. They ensure that multiple processes or threads can share resources and
execute concurrently without interfering with each other or causing unexpected behavior.

Process Synchronization:
Process synchronization involves managing the execution order and access to shared resources among multiple
processes. In a multitasking operating system, processes run concurrently and may need to access shared resources
such as files, memory, or devices. The goal of process synchronization is to prevent conflicts and ensure the correct and
predictable execution of processes. Common mechanisms for process synchronization include:

1. Semaphores: Semaphores are integer variables used for signaling and mutual exclusion. They can be used to control
access to resources by allowing or blocking processes based on the value of the semaphore.

2. Mutex (Mutual Exclusion): Mutex is a synchronization primitive that allows only one process to access a shared
resource at a time. It provides mutual exclusion and ensures that only one process holds the lock on the resource.

3. Condition Variables: Condition variables are used for synchronization between processes based on certain
conditions. They allow processes to wait until a particular condition is satisfied before proceeding.

4. Monitors: Monitors are high-level synchronization constructs that encapsulate data and synchronization
operations. They provide a structured approach to process synchronization by ensuring that only one process can
execute a synchronized method at a time.
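The mutual-exclusion idea behind mechanism 2 can be sketched with Python's `threading.Lock` (the counter and thread count here are illustrative, not from the text):

```python
import threading

counter = 0
lock = threading.Lock()  # mutex: only one thread may hold it at a time

def increment(times):
    global counter
    for _ in range(times):
        # The critical section is guarded so the read-modify-write on
        # `counter` is never interleaved between threads.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no updates were lost
```

Without the lock, two threads could read the same old value of `counter` and one update would be lost.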
Structured Data
Structured data is data that conforms to a data model, has a well-defined structure, follows a consistent order and
can be easily accessed and used by a person or a computer program.
Structured data is usually stored in well-defined schemas such as databases. It is generally tabular, with columns and
rows that clearly define its attributes.
SQL (Structured Query Language) is often used to manage structured data stored in databases.
Characteristics of Structured Data:
• Data conforms to a data model and has an easily identifiable structure
• Data is stored in the form of rows and columns (example: a database table)
• Data is well organised, so the definition, format and meaning of data are explicitly known
• Data resides in fixed fields within a record or file
• Similar entities are grouped together to form relations or classes
• Entities in the same group have the same attributes
• Easy to access and query, so data can be easily used by other programs
• Data elements are addressable, so efficient to analyse and process
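A minimal sketch using Python's built-in `sqlite3` module (the table name and columns are made up for illustration) shows rows-and-columns structured data being queried with SQL:

```python
import sqlite3

# An in-memory database: a well-defined schema with fixed, typed fields.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (roll INTEGER, name TEXT, marks INTEGER)")
conn.executemany("INSERT INTO student VALUES (?, ?, ?)",
                 [(1, "Asha", 91), (2, "Ravi", 78)])

# SQL can address data elements directly because the structure is known.
rows = conn.execute("SELECT name FROM student WHERE marks > 80").fetchall()
print(rows)  # [('Asha',)]
conn.close()
```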

Scope of a variable

The scope of a variable in a programming language refers to the region of a program where the variable is accessible
and can be used. It determines where the variable can be declared, assigned a value, and accessed during the execution
of the program. The scope of a variable is defined by the block of code in which it is declared.

The most common types of variable scopes in programming languages are:

1. Global scope: Variables declared in the global scope are accessible from anywhere in the program. They are typically
declared outside of any function or block and can be accessed by all functions and blocks within the program.

2. Local scope: Variables declared within a specific block, such as a function or loop, have local scope. They are
accessible only within that block and any nested blocks within it. Local variables are typically used for temporary storage
or intermediate calculations.
3. Function scope: Variables declared within a function have function scope. They are accessible only within the
function and not outside of it. Function parameters and local variables fall into this category.

4. Block scope: Some programming languages, like JavaScript and C++, support block scope. Variables declared within a
block, which is defined by a pair of curly braces `{}`, have block scope. They are accessible only within that block and any
nested blocks within it. Block scope variables are commonly used in conditional statements and loops.

It's important to note that the scope of a variable also affects its lifetime, which refers to the duration for which the
variable exists in memory. Variables with global or static scope have a longer lifetime, while variables with local scope
have a shorter lifetime that ends when the block in which they are declared is exited.

Understanding variable scope is crucial for managing data and preventing naming conflicts within a program. It allows
for the efficient and organized use of variables in different parts of the code, ensuring their appropriate visibility and
accessibility.
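These scope rules can be sketched in Python, where a function body acts as a local scope (the variable names are illustrative):

```python
message = "global"          # global scope: visible everywhere below

def show():
    message = "local"       # local/function scope: shadows the global name
    return message

print(show())     # local
print(message)    # global: the function's assignment did not leak out
```

The local `message` exists only while `show()` runs, illustrating how scope also bounds a variable's lifetime.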

Operators
In programming languages, operators are symbols or keywords that perform various operations on operands (values or
variables). They allow you to manipulate data and control the flow of execution in your programs. Here are some commonly
used operators in programming languages:

1. Arithmetic Operators:

• Addition (+): Adds two operands.
• Subtraction (-): Subtracts the second operand from the first.
• Multiplication (*): Multiplies two operands.
• Division (/): Divides the first operand by the second.
• Modulo (%): Returns the remainder after division.
• Increment (++) and Decrement (--): Increases or decreases the value of an operand by 1.

2. Assignment Operators:

• Assignment (=): Assigns a value to a variable.
• Compound assignment operators (e.g., +=, -=, *=, /=): Perform an operation and assign the result to a
variable.

3. Comparison Operators:

• Equal to (==): Checks if two operands are equal.
• Not equal to (!=): Checks if two operands are not equal.
• Greater than (>), Less than (<), Greater than or equal to (>=), Less than or equal to (<=): Compare the values
of two operands.

4. Logical Operators:

• Logical AND (&&): Returns true if both operands are true.
• Logical OR (||): Returns true if either operand is true.
• Logical NOT (!): Negates the logical state of an operand.

5. Bitwise Operators:

• Bitwise AND (&): Performs a bitwise AND operation on the binary representation of two operands.
• Bitwise OR (|): Performs a bitwise OR operation on the binary representation of two operands.
• Bitwise XOR (^): Performs a bitwise XOR (exclusive OR) operation on the binary representation of two
operands.
• Bitwise NOT (~): Inverts the bits of an operand.
• Left shift (<<): Shifts the bits of the first operand to the left by the number of positions specified by the
second operand.
• Right shift (>>): Shifts the bits of the first operand to the right by the number of positions specified by the
second operand.

6. Conditional Operator (Ternary Operator):

• ?: Evaluates a condition and returns one of two values based on the result of the condition.
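A few of these operator families can be demonstrated in Python. Note that Python spells the logical operators `and`/`or`/`not` rather than `&&`/`||`/`!`, and writes the conditional (ternary) operator as an `if`/`else` expression:

```python
a, b = 7, 3

print(a + b, a % b)        # arithmetic: 10 1
print(a > b, a == b)       # comparison: True False
print(a & b, a << 1)       # bitwise: 3 14

# Conditional expression: value-if-true comes first in Python.
parity = "odd" if a % 2 else "even"
print(parity)              # odd
```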
Recursion
Recursion is a powerful concept in programming where a function or procedure calls itself during its execution. In other
words, a recursive function solves a problem by solving smaller instances of the same problem. It can be used to break
down complex problems into simpler sub-problems.

When using recursion, there are typically two key components:


1. Base case(s): These are the terminating conditions that define when the recursion should stop. They are necessary to
prevent infinite recursion and ensure that the function eventually reaches a base case where it no longer calls itself.
2. Recursive case(s): These are the conditions where the function calls itself with a smaller or simpler input. The
function continues to call itself until it reaches a base case.

Here's an example of a recursive function in Python that calculates the factorial of a number:

def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

In this example, the base case is when `n` equals 0, where the function returns 1. In the recursive case, the function calls
itself with a smaller input `n - 1` and multiplies the result by `n`. This process continues until the base case is reached.

Recursive rules, on the other hand, are commonly used in formal language theory and grammars. They define rules or
productions for generating or recognizing strings in a language. These rules can be defined in terms of themselves,
creating a recursive definition.

For example, consider a simplified grammar rule for a mathematical expression that allows addition:

expression ::= number | expression + number

In this grammar rule, `expression` can be defined as either a `number` or an `expression` followed by a plus sign (`+`)
and another `number`. This recursive definition allows for expressions of any length, such as `2 + 3`, `2 + 3 + 4`, and so
on.
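The recursive grammar rule above can be turned directly into a recursive Python function (a sketch; it assumes well-formed input containing only numbers and `+`):

```python
def evaluate(expression):
    # expression ::= number | expression "+" number
    head, plus, tail = expression.rpartition("+")
    if not plus:                          # base case: no "+" left, just a number
        return int(expression)
    return evaluate(head) + int(tail)     # recursive case: shrink the expression

print(evaluate("2 + 3 + 4"))  # 9
```

The recursive case mirrors the `expression + number` production, and the base case mirrors the `number` production.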

Recursion in programming languages and recursive rules in formal grammars are both based on the idea of defining
something in terms of itself. While recursion is a programming technique used to solve problems by breaking them
down into smaller instances, recursive rules define languages and grammars by referring to themselves.
Input and Output
Input and output, or I/O, is the communication between an information processing system, such as a
computer, and the outside world, possibly a human or another information processing system. Inputs are the
signals or data received by the system and outputs are the signals or data sent from it.
Every task we have the computer do happens inside the central processing unit (CPU) and the associated
memory. Once our program is loaded into memory and the operating system directs the CPU to start executing
our programming statements, the computer looks like this:

CPU – Memory – Input/Output Devices


Our program, now loaded into memory, has basically two areas:
• Machine instructions – our instructions for what we want done
• Data storage – the variables we are using in our program

Often our program contains instructions to interact with the input/output devices. We need to move data into (read)
and/or out of (write) the memory data area. A device is a piece of equipment that is electronically connected to the
memory so that data can be transferred between the memory and the device. Historically this was done with punched
cards and printouts. Tape drives were used for electronic storage. With time we migrated to using disk drives for
storage, with keyboards and monitors (monitor output being called soft copy) replacing punched cards and printouts
(called hard copy). Most computer operating systems, and by extension programming languages, have identified the
keyboard as the standard input device and the monitor as the standard output device. Often the keyboard and monitor
are treated as the default devices when no other specific device is indicated.
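The standard-device defaulting can be sketched in Python (`greet` is a made-up function name; the stream parameters make the defaults explicit, and here we redirect both to in-memory buffers instead of the keyboard and monitor):

```python
import io
import sys

def greet(stream_in=sys.stdin, stream_out=sys.stdout):
    # Read from the (default) standard input device and write to the
    # (default) standard output device; either can be redirected.
    name = stream_in.readline().strip()
    stream_out.write(f"Hello, {name}\n")

# Redirect both streams to in-memory buffers for demonstration.
fake_in, fake_out = io.StringIO("Ada\n"), io.StringIO()
greet(fake_in, fake_out)
print(fake_out.getvalue())  # Hello, Ada
```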

Program Control
Program control instructions are machine-code instructions, also written directly by programmers in assembly
language, that command the processor to act accordingly. These instructions are of various types. In a high-level
language, user code is translated into machine code, and the resulting instructions direct the processor to perform
the task.
Types of Program Control Instructions:
There are different types of program control instructions:

1. Compare Instruction:
A compare instruction is specifically provided, which is similar to a subtract instruction except that the result is not
stored anywhere, but flags are set according to the result.
Example:
CMP R1, R2 ;

2. Unconditional Branch Instruction:
It causes an unconditional change of execution sequence to a new location.
Example:
JUMP L2
Mov R3, R1 goto L2

3. Conditional Branch Instruction:
A conditional branch instruction is used to examine the values stored in the condition code register to determine
whether the specific condition exists and to branch if it does.
Example:
High-level code: if (x==y) goto L1;
Assembly code: BE R1, R2, L1 (the compiler allocates R1 for x and R2 for y)

4. Subroutines:
A subroutine is a program fragment that lives in user space and performs a well-defined task. It is invoked by another
user program and returns control to the calling program when finished.
Example:
CALL and RET

5. Halting Instructions:
• NOP Instruction – NOP is no operation. It causes no change in the processor state other than an advancement
of the program counter. It can be used to synchronize timing.
• HALT – It brings the processor to an orderly halt, remaining in an idle state until restarted by interrupt, trace,
reset or external action.

6. Interrupt Instructions:
An interrupt is a mechanism by which an I/O device or an instruction can suspend the normal execution of the
processor and get itself serviced.
• RESET – It resets the processor. This may include setting any or all registers to an initial value or setting the
program counter to a standard starting location.
• TRAP – It is a non-maskable, edge- and level-triggered interrupt. TRAP has the highest priority and is a
vectored interrupt.
• INTR – It is a level-triggered and maskable interrupt. It has the lowest priority and can be disabled by
resetting the processor.
Logic Program Design
A logic program design flowchart represents the sequence of steps involved in designing a logic program. Here is an
example of a flowchart outlining a typical logic program design process:
1. Start: Begin the logic program design process.
2. Identify Requirements: Gather and analyze the requirements for the logic program. This step involves
understanding the problem to be solved, the desired functionality, and any constraints or limitations.
3. Define Inputs and Outputs: Identify the inputs that the logic program will receive and the outputs it should produce.
This step helps in determining the scope and purpose of the program.
4. Design Logic: Determine the logical structure and algorithms required to achieve the desired functionality. Break
down the problem into smaller, manageable tasks and design the flow of operations.
5. Flowchart Representation: Create a flowchart to visualize the logic program design. Use symbols and connectors to
represent the sequence of steps, decisions, and loops involved in the program.
6. Program Flow: Define the flow of the program by connecting the steps and decisions in the flowchart. Ensure that
the program follows a logical order and handles all possible scenarios.
7. Error Handling: Incorporate error handling mechanisms into the program design. Identify potential errors or
exceptions that may occur during program execution and design appropriate error-handling routines.
8. Test and Debug: Test the logic program design to ensure it produces the expected outputs for different inputs.
Debug any errors or issues that arise during testing.
9. Refine and Improve: Review the logic program design for any areas that can be refined or improved. Consider
factors such as efficiency, maintainability, and user-friendliness.
10. Finalize Design: Once satisfied with the logic program design, finalize the flowchart and associated documentation.
11. Implement: Translate the logic program design into a specific programming language. Write the code based on the
flowchart and logic defined in the earlier steps.
12. Test and Validate: Test the implemented program to ensure it functions correctly and produces the desired outputs.
Validate the program against the original requirements.
13. Deploy: Once the logic program design has been tested and validated, deploy the program to the intended
environment or users.
14. Maintain and Update: Continuously maintain and update the logic program design as needed. Monitor its
performance, address any issues, and incorporate enhancements or modifications based on user feedback.
15. End: Complete the logic program design process.
Please note that this flowchart is a general guideline and can be customized or expanded based on the specific
requirements and complexity of the logic program being designed.
