
Programming & Employability Skills

B.Tech CSE VI Semester


Self-Work III
Name – Devyanshi Gupta
Enrolment Number A7605221088
Course BTECH CSE
Sec – B
Sem 6
Q1 Define a hash function. What are the common problems of hashing?
Ans A hash function is a mathematical function that takes an input (or "message") and
produces a fixed-size string of characters, typically represented as a hexadecimal number. The
output, commonly referred to as the hash code or hash value, serves as a compact fingerprint
of the input data. Hash functions are widely used in computer science and cryptography for
purposes such as indexing data structures, ensuring data integrity, and creating digital signatures.
Some common properties and characteristics of hash functions include:
 Deterministic: For the same input, a hash function always produces the same output.
 Fixed Output Size: The output length of a hash function is fixed, regardless of the size
of the input.
 Efficient: Hash functions are computationally efficient, enabling rapid processing of
large datasets.
 Avalanche Effect: A small change in the input data should result in a significantly
different hash value.
 Non-invertible: It should be computationally infeasible to reverse-engineer the original
input from its hash value.
 Collision Resistance: It should be unlikely for two different inputs to produce the
same hash value.
Common problems or challenges associated with hashing include:
 Collision Attacks: Despite the intention for hash functions to distribute hash values
uniformly across the output space, collisions can occur, where two different inputs
produce the same hash value. This poses a security risk in certain applications, such as
digital signatures and hash tables.
 Length Extension Attacks: Some hash functions are susceptible to length extension
attacks, where an attacker can append additional data to a given hash value without
knowing the original input, leading to unintended consequences or security
vulnerabilities.
 Preimage Attacks: In a preimage attack, an attacker attempts to find an input that
corresponds to a specific hash value. This undermines the non-invertibility property of
hash functions and can compromise data integrity or authentication mechanisms.
 Performance Overhead: Hash functions, particularly cryptographic hash functions
used in security-sensitive applications, can introduce computational overhead due to
their complexity and resource requirements.
 Hash Function Vulnerabilities: Vulnerabilities or weaknesses in the design or
implementation of hash functions can be exploited by attackers to bypass security
measures, compromise systems, or manipulate data.
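As a small illustration of the properties listed above, the sketch below uses Python's standard hashlib module with SHA-256. The choice of SHA-256 is ours for demonstration; any cryptographic hash function would show the same behaviour.

```python
import hashlib

def sha256_hex(message: str) -> str:
    """Return the SHA-256 digest of a string as a hex string."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

# Deterministic: the same input always yields the same digest.
assert sha256_hex("hello") == sha256_hex("hello")

# Fixed output size: 64 hex characters (256 bits) regardless of input length.
assert len(sha256_hex("a")) == len(sha256_hex("a" * 10_000)) == 64

# Avalanche effect: a one-character change produces a very different digest.
print(sha256_hex("hello"))
print(sha256_hex("hellp"))
```

Running this prints two digests that share almost no characters, even though the inputs differ by a single letter.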

Q2 Differentiate between Hashing and Hash Tables?


Ans Hashing is a technique: the process of mapping data of arbitrary size to fixed-size
values using a hash function. A hash table is a data structure built on that technique: it stores
key-value pairs in an array, uses the hash of a key to decide the index where its value is
stored, and resolves collisions with strategies such as chaining or open addressing. In short,
hashing is the mechanism, while a hash table is the container that applies it to provide
(on average) constant-time insertion, lookup, and deletion.
Q3 Discuss different types of searching.
Ans Common types of searching include linear (sequential) search, which scans elements
one by one and works even on unsorted data; binary search, which repeatedly halves a sorted
collection; interpolation search, which estimates the probe position from the value
distribution in a sorted array; and hash-based search, which uses a hash function to jump
directly to the bucket where a key should reside.

Q4 What would be the worst case asymptotic time complexity of the searching algorithms?


Ans The worst case asymptotic time complexity refers to the upper bound on the running
time of an algorithm in the most unfavorable scenario. For the common searching algorithms:
linear search is O(n) in the worst case, since the target may be absent or at the last position,
while binary search on a sorted array is O(log n), since the search interval halves at each step.
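The two worst cases can be seen directly in code. This sketch assumes the searching algorithms in question are linear and binary search, the two most commonly discussed:

```python
def linear_search(arr, target):
    # Worst case O(n): the target is absent or sits at the last position,
    # so every element must be examined.
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def binary_search(sorted_arr, target):
    # Worst case O(log n): the candidate interval halves on every iteration.
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        if sorted_arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(linear_search([3, 1, 4], 9))        # -1 (scanned all n elements)
print(binary_search([1, 3, 4, 7, 9], 7))  # 3
```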
Q5 Explain Asymptotic analysis of an algorithm.
Ans Asymptotic analysis is a method employed in computer science to describe the
efficiency of algorithms concerning their growth rates as the input size approaches infinity. It
provides a high level understanding of how the performance of an algorithm scales with
increasing input sizes and is particularly useful for comparing algorithms without delving into
the details of hardware, constant factors, or specific implementation details.
Key aspects of asymptotic analysis include:
1. Big O Notation (O):
 Big O notation is a widely used notation in asymptotic analysis to describe the upper
bound or worst case scenario of an algorithm's time complexity.
 For a function f(n), O(g(n)) represents an upper bound on the growth rate of f(n) as n
approaches infinity.
 It is used to express the upper limit of an algorithm's running time in the worst case
scenario.
2. Theta Notation (Θ):
 Theta notation represents both the upper and lower bounds of an algorithm's time
complexity.
 For a function f(n), Θ(g(n)) denotes that f(n) grows at the same rate as g(n) for
sufficiently large n.
3. Omega Notation (Ω):
 Omega notation represents the lower bound of an algorithm's time complexity.
 For a function f(n), Ω(g(n)) denotes that f(n) grows at least as fast as g(n) for
sufficiently large n.
Asymptotic analysis focuses on understanding the dominant factors influencing an
algorithm's efficiency rather than precise details. It allows for a high level comparison
between different algorithms and helps in choosing the most suitable algorithm based on the
problem requirements.
Common time complexities expressed in big O notation include O(1) for constant time, O(log
n) for logarithmic time, O(n) for linear time, O(n log n) for linearithmic time, O(n^2) for
quadratic time, and so on.
For example, if an algorithm has a time complexity of O(n^2), it means that the running time
grows quadratically with the size of the input. Asymptotic analysis is a powerful tool for
algorithmic design and analysis, providing insights into the scalability and efficiency of
algorithms on large datasets.
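The growth rates above can be made concrete by counting operations instead of measuring wall-clock time. The two functions below are toy stand-ins for an O(n) and an O(n^2) algorithm:

```python
def count_ops_linear(n):
    # Stand-in for an O(n) algorithm: one unit of work per element.
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def count_ops_quadratic(n):
    # Stand-in for an O(n^2) algorithm: one unit of work per pair of elements.
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops

# Doubling n doubles the linear count but quadruples the quadratic count,
# exactly as the asymptotic classes predict.
print(count_ops_linear(100), count_ops_linear(200))        # 100 200
print(count_ops_quadratic(100), count_ops_quadratic(200))  # 10000 40000
```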

Q6 What is Greedy algorithm? Explain.


Ans A greedy algorithm is an approach to problem solving that makes locally optimal
choices at each stage with the hope of finding a global optimum. In other words, a greedy
algorithm makes the best possible decision at each step without worrying about the
consequences in the future. The strategy is to choose the option that looks the best in the
current moment, with the expectation that this will lead to an optimal solution for the entire
problem.
Key characteristics of greedy algorithms include:
1. Greedy Choice Property:
 A greedy algorithm makes a series of choices, and at each step, it selects the option
that appears to be the most advantageous or beneficial at that moment.
 The choice made at each step is based solely on the information available at that
particular step, without considering the global picture.
2. Optimal Substructure:
 A problem exhibits optimal substructure if an optimal solution to the overall problem
can be constructed from optimal solutions of its subproblems.
 Greedy algorithms work well when a problem has optimal substructure, meaning that
an optimal solution to the whole problem can be constructed from optimal solutions
of its subproblems.
3. Doesn't Always Guarantee Global Optimality:
 While greedy algorithms are straightforward and easy to implement, they do not
always guarantee the globally optimal solution.
 Sometimes a greedy choice that seems optimal at the local level may lead to a
suboptimal solution overall.
Common examples of problems where greedy algorithms are applied include:
 Fractional Knapsack Problem: Given items with weights and values, the goal is to
select a combination of items to maximize the total value without exceeding a given
weight capacity.
 Huffman Coding: Used for lossless data compression, where characters are assigned
variable length codes to minimize the total encoding length.
 Dijkstra's Algorithm: Used for finding the shortest paths between nodes in a graph
with nonnegative edge weights.
 Prim's Algorithm and Kruskal's Algorithm: Used for finding minimum spanning trees
in a connected, undirected graph.
Greedy algorithms are efficient and often provide solutions that are close to the optimal.
However, careful analysis is required to ensure their correctness and to identify scenarios
where they may fail to find the global optimum.
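The fractional knapsack problem mentioned above is a classic case where the greedy choice (always take the item with the best value-to-weight ratio first) is provably optimal. A minimal sketch, with item values and weights chosen only for illustration:

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack.

    items: list of (value, weight) pairs; capacity: maximum total weight.
    Sort by value-to-weight ratio, take the best items first, and split
    the last item if it does not fit whole. This greedy strategy is
    optimal for the fractional variant (but not for 0/1 knapsack).
    """
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity == 0:
            break
        take = min(weight, capacity)       # take the whole item or a fraction
        total += value * (take / weight)
        capacity -= take
    return total

# Items (value, weight) with capacity 50; total value comes out to about 240.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))
```

Note that the same greedy rule fails for the 0/1 knapsack, which is why that variant requires dynamic programming instead.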

Q7 Explain the divide and conquer algorithm.


Ans The divide and conquer algorithm is a problem-solving strategy that involves
breaking a problem into smaller subproblems, solving them independently, and then
combining their solutions to obtain the solution to the original problem. The process consists
of three main steps: divide, conquer, and combine.
1. Divide: The problem is divided into smaller, more manageable subproblems. This step
continues until the subproblems are simple enough to be solved directly.
2. Conquer: Each subproblem is solved independently. This is typically done recursively,
applying the divide and conquer strategy to further break down each subproblem until a base
case is reached and a solution can be easily determined.
3. Combine: The solutions to the subproblems are combined to obtain the solution to the
original problem. This step involves merging or aggregating the results obtained from solving
the subproblems.
Key characteristics of divide and conquer algorithms include:
 Recursive Structure: Divide and conquer algorithms often exhibit a recursive
structure, where the same algorithm is applied to smaller instances of the problem.
 Efficiency: When implemented efficiently, divide and conquer algorithms can lead to
more efficient solutions compared to naive approaches, especially for problems with
large input sizes.
Examples:
Common examples of algorithms using the divide and conquer strategy include merge sort
and quicksort for sorting, binary search for searching, and various algorithms for matrix
multiplication.
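Merge sort is perhaps the cleanest illustration of the three steps. The sketch below labels the divide, conquer, and combine phases directly in the code:

```python
def merge_sort(arr):
    # Base case: lists of length 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    # Divide: split the list into two halves.
    mid = len(arr) // 2
    # Conquer: sort each half recursively.
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Each level of recursion does O(n) merging work across O(log n) levels, giving the well-known O(n log n) running time.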
Q8 Discuss with example minimum spanning tree.
Ans A spanning tree is a tree-like subgraph of a connected, undirected graph that
includes all the vertices of the graph. In layman's terms, it is a subset of the edges of the
graph that forms a tree (acyclic) in which every node of the graph is included.
A minimum spanning tree (MST) has all the properties of a spanning tree with the added
constraint of having the minimum possible total edge weight among all possible spanning
trees. As with spanning trees, a graph can have many possible MSTs. Algorithms such as
Prim's and Kruskal's are used to find MSTs of a graph.

Applications of Minimum Spanning Trees:


 Network design: Spanning trees can be used in network design to find the minimum
number of connections required to connect all nodes. Minimum spanning trees, in
particular, can help minimize the cost of the connections by selecting the cheapest
edges.
 Image processing: Spanning trees can be used in image processing to identify regions
of similar intensity or color, which can be useful for segmentation and classification
tasks.
 Biology: Spanning trees and minimum spanning trees can be used in biology to
construct phylogenetic trees to represent evolutionary relationships among species or
genes.
 Social network analysis: Spanning trees and minimum spanning trees can be used in
social network analysis to identify important connections and relationships among
individuals or groups.
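As a worked example, the sketch below implements Kruskal's algorithm on a small hypothetical graph (the vertex and weight choices are ours for illustration). It greedily adds the cheapest edge that does not create a cycle, using a union-find structure to detect cycles:

```python
def kruskal(num_vertices, edges):
    """Kruskal's MST algorithm. edges: list of (weight, u, v) tuples."""
    parent = list(range(num_vertices))  # union-find forest

    def find(x):
        # Find the root of x's component, with path halving for speed.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for weight, u, v in sorted(edges):   # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                     # edge joins two different components
            parent[ru] = rv              # union the components
            mst.append((u, v, weight))
            total += weight
    return mst, total

# Hypothetical graph on 4 vertices: (weight, u, v) triples.
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst, total = kruskal(4, edges)
print(mst, total)  # three edges, total weight 6
```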
Q9 Discuss with example shortest path problem.
Ans The shortest path problem involves finding the shortest path between two vertices (or
nodes) in a graph. Algorithms such as the Floyd-Warshall algorithm and different variations
of Dijkstra's algorithm are used to find solutions to the shortest path problem. Applications of
the shortest path problem include those in road networks, logistics, communications,
electronic design, power grid contingency analysis, and community detection.
There are many variants of the shortest path algorithms. They are:
 Single-source single-destination (1-1): Find the shortest path from source s to
destination v.
 Single-source all-destination(1-Many): Find the shortest path from s to each vertex v.
 Single-destination shortest-paths (Many-1): Find a shortest path to a given destination
vertex t from each vertex v.
 All-pairs shortest-paths problem (Many-Many): Find a shortest path from u to v for
every pair of vertices u and v.
Example:
In the given weighted graph, the shortest path from node A to F is A-C-E-D-F (cost = 20), as
its cost is less than that of the other paths, i.e. A-B-D-F (cost = 25) or A-B-C-E-D-F (cost = 27).
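A single-source all-destinations variant can be sketched with Dijkstra's algorithm. The graph below uses hypothetical edge weights of our own choosing (the weights of the figure's graph are not fully recoverable from the costs quoted above):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's single-source shortest paths.

    graph maps each node to a list of (neighbour, weight) pairs;
    all weights must be non-negative.
    """
    dist = {source: 0}
    heap = [(0, source)]                     # min-heap of (distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                         # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                 # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical directed graph with non-negative weights.
graph = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("E", 3)],
    "D": [("F", 2)],
    "E": [("D", 4)],
    "F": [],
}
print(dijkstra(graph, "A"))  # shortest distance from A to every node
```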
