ARBA MINCH UNIVERSITY INSTITUTE OF TECHNOLOGY
DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING
DATA STRUCTURE AND ALGORITHM REPORT
GROUP MEMBERS                ID
1. GRUM LEMMA                NSR/914/14
2. MERID MARKOS              NSR/1132/14
3. DAWIT DIYAU               NSR/2263/14
4. LUKAS MENGISTU            NSR/1741/14
5. TEFERA TESFA              NSR/2637/14
6. BINIYAM TEKALIGN          NSR/1526/14
7. FETIYA ABDELA             NSR/1627/14
8. PAULOS WONDIYE            NSR/1243/14
CHAPTER ONE
INTRODUCTION TO DATA STRUCTURES AND ALGORITHMS
1. Introduction
When we write a computer program, we’re essentially trying to solve a problem. To do that
well, we need two things: a smart way to organize the information we’re working with (this is
called a data structure), and a clear plan for what steps the program should follow (this is called
an algorithm). Think of it like organizing tools in a toolbox and having instructions for how to use them.
Before diving into coding, it's important to understand the problem clearly. We do this by focusing only on what matters and setting aside the rest; this is called abstraction. Once we've figured out what kind of data we're dealing with and what we need to do with it, we can create something called an Abstract Data Type (or ADT). This is like a blueprint that tells the program how to handle specific types of data in a structured and efficient way.
2. Explanation, Definition, and Purpose
A computer program is fundamentally designed to solve problems using two
essential components:
1. Data Structures – Techniques for organizing and storing data in a structured manner.
2. Algorithms – Precise, step-by-step procedures for processing data to produce desired results.
Abstraction
Abstraction is the process of simplifying complexity by focusing solely on relevant
information while disregarding unnecessary details. This approach aids in:
Identifying which data elements are involved.
Determining the operations required to solve the problem.
Abstract Data Types (ADT)
An Abstract Data Type (ADT) is a conceptual model that defines the behavior of data and the
operations performed on it, independent of implementation details. Examples of ADTs include
stacks, queues, and lists.
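For example, a stack ADT can be described purely by the operations it supports. The following is a minimal C++ sketch (an illustrative interface written for this report, not a standard library class); any array-based or linked-list-based implementation providing these operations satisfies the ADT:

template <typename T>
class StackADT {
public:
    virtual void push(const T& item) = 0;  // add an item on top of the stack
    virtual T pop() = 0;                   // remove and return the top item
    virtual T peek() const = 0;            // look at the top item without removing it
    virtual bool isEmpty() const = 0;      // true if the stack holds no items
    virtual ~StackADT() {}
};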
Importance of Data Structures
Utilizing appropriate data structures is essential for building reusable, efficient software
components. Proper structure selection enhances:
Code modularity and reusability.
Performance in terms of time and memory usage.
Commonly Used Data Structures
Arrays – Store elements in contiguous memory locations.
Linked Lists – Consist of nodes linked through pointers.
Stacks – Follow the Last-In-First-Out (LIFO) principle.
Queues – Operate on a First-In-First-Out (FIFO) basis.
Trees – Represent hierarchical data.
Graphs – Represent complex relationships between data points.
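As a small illustration (declarations sketched here for clarity, not taken from a specific codebase), an array and a singly linked list node could be declared in C++ as:

int scores[10];      // array: ten integers stored in contiguous memory

struct Node {        // linked list node: the data plus a pointer to the next node
    int data;
    Node* next;
};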
Algorithms and Their Analysis
An algorithm is a well-defined, finite sequence of instructions designed to solve a specific
problem efficiently.
Key Properties of Effective Algorithms
Finiteness: Every algorithm must reach an end.
Definiteness: Each instruction must be unambiguous.
Input/Output: Algorithms require input and generate output.
Effectiveness: All operations must be basic enough to execute feasibly.
Efficiency: Must utilize minimal computational resources.
Language Independence: Logic must be applicable across programming languages.
Algorithm Analysis
Analyzing an algorithm provides insight into its efficiency and resource usage. The main metrics
used are:
Time Complexity: The number of fundamental operations executed relative to input size.
Space Complexity: The total memory consumed during execution.
Why Not Use Clock Time?
Using real-world clock time is often unreliable due to variations in hardware, system load, and
input conditions. Instead, measuring the number of operations offers a more consistent basis
for evaluating algorithm performance.
Time Complexity Examples and Notation
Example 1: Linear Time Algorithm
int total(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum = sum + i;    // one addition per iteration, so n operations in total
    return sum;
}
Time Complexity: O(n)
Example 2: Cubic Sum
int sum(int n) {
    int partial_sum = 0;
    for (int i = 1; i <= n; i++)
        partial_sum = partial_sum + (i * i * i);   // add the cube of i
    return partial_sum;
}
Time Complexity: O(n)
Asymptotic Notations
Asymptotic analysis examines the growth of an algorithm’s running time as the input size
increases. The three most common notations are:
Big-O(O): Represents the upper bound of an algorithm’s running time, capturing the worst-case
scenario.
Big-Omega (Ω): Describes the lower bound or best-case performance.
Theta (Θ): Provides a tight bound, representing both the upper and lower limits.
Example: Given the function f(n)=3n²+4n+1, the dominant term is n², so:
f(n)=O(n²)
f(n)=Ω(n²)
f(n)=Θ(n²)
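To see concretely why the lower-order terms can be dropped (a short worked bound, added here for illustration): for every n ≥ 1 we have 4n ≤ 4n² and 1 ≤ n², so
3n² + 4n + 1 ≤ 3n² + 4n² + n² = 8n².
Thus f(n) ≤ c·n² holds with c = 8 and n₀ = 1, which is exactly the condition required for f(n) = O(n²).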
Comparing Data Structures & Final Thoughts
Array vs. Linked List
When inserting an element at the beginning:
Arrays require shifting existing elements, which increases the number of operations.
Linked Lists allow direct insertion without shifting, resulting in better performance for this operation (see the sketch below).
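A minimal C++ sketch of the two approaches (illustrative only; names are chosen here for clarity):

// Array: make room at index 0 by shifting every element one step to the right (O(n) operations).
// Assumes the array has spare capacity beyond the current n elements.
void insertFrontArray(int arr[], int& n, int value) {
    for (int i = n; i > 0; i--)
        arr[i] = arr[i - 1];
    arr[0] = value;
    n++;
}

// Linked list: link a new node in front of the current head (O(1) operations).
struct Node { int data; Node* next; };

void insertFrontList(Node*& head, int value) {
    head = new Node{value, head};
}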
Comparing Time Complexity
To evaluate performance, operations are executed across varying input sizes, and the number
of steps is measured. This approach reveals how an algorithm scales.
Example:
If inserting elements takes n steps for input size n, then f(n)=n.
Growth Function Illustration
Given f(n)=4n²+2n+7, the term n² dominates for large input values. Therefore, time complexity is approximated as O(n²).
Chapter Two
Simple Searching and Sorting Algorithms
Introduction
Objective:
The purpose of this chapter is to understand and implement basic searching and sorting algorithms, which are essential for managing and organizing data efficiently in computer programs.
Topics Covered:
✓ Searching Algorithms
   - Linear Search
   - Binary Search
✓ Sorting Algorithms
   - Insertion Sort
   - Selection Sort
   - Bubble Sort
Applications:
These algorithms are commonly used in areas such as:
-Searching names in a phone book
-Looking up data in databases
-Sorting records for quick access
-Arranging files or lists in ascending/descending order
Searching Algorithms
1. Linear (Sequential) Search
Explanation:
This algorithm scans each element in the array sequentially and compares it with the target
item. If a match is found, it returns the index; otherwise, it returns -1.
Time Complexity: O(n)
Efficiency: Works on both sorted and unsorted lists.
Example (C++-style pseudo code):
int Linear_Search(int list[], int n, int key) {   // n is the number of elements (assumed n >= 1)
    int index = 0;
    int found = 0;
    do {
        if (key == list[index])
            found = 1;           // match found at the current index
        else
            index++;             // move on to the next element
    } while (found == 0 && index < n);
    if (found == 0)
        index = -1;              // key is not in the list
    return index;
}
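A possible call of the function above (the array contents are chosen here purely for illustration):

int data[] = {7, 3, 9, 5};
int pos = Linear_Search(data, 4, 9);      // returns 2, the index of 9
int missing = Linear_Search(data, 4, 8);  // returns -1, since 8 is not in the array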
2. Binary Search
Explanation:
This algorithm is faster but requires a sorted array. It divides the array into halves and
eliminates the half in which the item cannot lie.
Time Complexity: O(log n)
Example:
To find 17 in [2, 5, 9, 12, 17, 37, 86]:
Middle = 12 (index 3) → 17 > 12 → search the right half
Middle = 37 (index 5) → 17 < 37 → search the left half
Middle = 17 (index 4) → found at index 4
Code Snippet:
int Binary_Search(int list[], int n, int key) {   // list must be sorted; n is the number of elements
    int left = 0, right = n - 1, mid;
    while (left <= right) {
        mid = (left + right) / 2;        // index of the middle element
        if (list[mid] == key)
            return mid;                  // key found
        else if (key < list[mid])
            right = mid - 1;             // discard the right half
        else
            left = mid + 1;              // discard the left half
    }
    return -1;                           // key is not in the list
}
Sorting Algorithms (Part 1)
1. Insertion Sort
Explanation:
Each item is compared with those before it and inserted into its correct position. It’s similar to
how people sort playing cards in hand.
Time Complexity: O(n²)
Best Use Case: Small data sets.
Steps:
Start from the second element and compare it backward with the elements before it.
Shift larger elements one position to the right as needed.
Insert the element into its correct position.
Example Code:
void insertion_sort(int list[], int n) {   // n is the number of elements
    for (int i = 1; i < n; i++) {
        int temp = list[i];                // element to be inserted
        int j = i;
        while (j > 0 && temp < list[j - 1]) {
            list[j] = list[j - 1];         // shift larger elements to the right
            j--;
        }
        list[j] = temp;                    // insert into its correct position
    }
}
Sorting Algorithms (Part 2)
2. Selection Sort
Explanation:
The algorithm selects the smallest element from the unsorted part and swaps it with the first
unsorted element.
Time Complexity: O(n²)
Efficiency: Simple and performs fewer swaps.
Example Code:
void selection_sort(int list[], int n) {   // n is the number of elements
    for (int i = 0; i < n; i++) {
        int smallest = i;                  // assume the first unsorted element is the smallest
        for (int j = i + 1; j < n; j++) {
            if (list[j] < list[smallest])
                smallest = j;              // remember the index of the smallest element
        }
        swap(list[i], list[smallest]);     // place it at the front of the unsorted part
    }
}
Bubble Sort
Explanation:
Adjacent elements are compared and swapped if they are in the wrong order. This continues
until no more swaps are needed.
Time Complexity: O(n²)
Efficiency: Very simple but slow for large lists.
Example Code:
void bubble_sort(int list[], int n) {      // n is the number of elements
    for (int i = 0; i < n; i++) {
        for (int j = n - 1; j > i; j--) {
            if (list[j] < list[j - 1]) {
                swap(list[j], list[j - 1]);   // bubble the smaller element toward the front
            }
        }
    }
}
Chapter 3
Stack in Data Structures
Definition
A stack is a linear data structure that follows the Last In First Out (LIFO) principle, meaning that
the last element added to the stack will be the first one to be removed. It can be visualized as a
collection of elements arranged in a vertical manner, similar to a stack of plates or boxes.
Explanation
In a stack, elements are added and removed from the same end, referred to as the "top" of the
stack. This structure supports two main operations:
Push: This operation adds an element to the top of the stack.
Pop: This operation removes the element from the top of the stack.
Additionally, stacks often include a third operation:
Peek (or Top): This operation allows you to view the top element of the stack without
removing it.
isEmpty: Returns true if the stack is empty; otherwise returns false.
isFull: Returns true if the stack is full; otherwise returns false.
Stacks are typically implemented using arrays or linked lists, and they are widely used in various
applications, such as:
• Function Call Management: When a function is called, its execution context is pushed onto
the call stack, and when it returns, its context is popped off.
• Expression Evaluation: Stacks can be used to evaluate expressions in postfix notation
(Reverse Polish Notation) or to convert infix expressions to postfix.
• Backtracking Algorithms: Stacks can help in exploring paths in problems like maze solving or
puzzle games
Examples
Here’s a simple example of how a stack operates:
1. Initialization: Start with an empty stack.
2. Push Operations:
• Push 5 → Stack: [5]
• Push 10 → Stack: [5, 10]
• Push 15 → Stack: [5, 10, 15]
3. Pop Operation:
• Pop → Removes 15 → Stack: [5, 10]
4. Peek Operation:
• Peek → Returns 10 (the top element) without removing it.
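The behaviour traced above can be reproduced with a simple array-based stack. The following is a minimal C++ sketch (the capacity MAX and the variable names are chosen here for illustration):

const int MAX = 100;
int stack_arr[MAX];
int top = -1;                         // index of the top element; -1 means the stack is empty

bool isEmpty() { return top == -1; }
bool isFull()  { return top == MAX - 1; }

void push(int x) {
    if (isFull()) return;             // stack overflow: nothing is added
    stack_arr[++top] = x;             // move top up, then store the new element
}

int pop() {
    if (isEmpty()) return -1;         // stack underflow: -1 used as an error value here
    return stack_arr[top--];          // return the top element, then move top down
}

int peek() { return stack_arr[top]; } // view the top element without removing it

Running push(5), push(10), push(15), then pop() and peek() on this sketch reproduces the trace above.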
Type of expression
The normal (or human) way of expressing mathematical expressions is called infix form,
e.g. 4+5*5.
However, there are other ways of representing the same expression, either by writing all
operators before their operands or after them,
e.g. 4 5 5 * + or + 4 * 5 5
This method is called Polish Notation (because this method was discovered by the Polish
mathematician Jan Lukasiewicz).
When the operators are written before their operands, it is called the prefix form
e.g. + 4 * 5 5
When the operators come after their operands, it is called postfix form (suffix form or reverse
polish notation)
e.g. 4 5 5 * +
Converting Infix to Postfix
To convert an infix expression to postfix form using a stack, the following rules are applied:
1. Operands are written directly to the output.
2. Left parentheses are pushed onto the stack.
3. Before an operator is pushed, any operators on top of the stack with precedence greater than or equal to the current operator are popped and written to the output; the current operator is then pushed.
4. When a right parenthesis is encountered, operators are popped and written to the output until the matching left parenthesis is found. Parentheses themselves are never written to the output.
5. At the end of the expression, any operators remaining on the stack are popped and written to the output.
Operator Priority in Order of Precedence:
1) Parentheses '(' – only popped when a matching ')' is found
2) All unary operators (sin, cos, ...)
3) Multiplication and division * /
4) Addition and subtraction + -
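The rules above can be turned into a short routine. The sketch below (an illustrative C++ version for single-character operands, written for this report; unary operators such as sin and cos are omitted) follows the same stack-based procedure:

#include <iostream>
#include <stack>
#include <string>
#include <cctype>
using namespace std;

// Higher number = higher precedence; '(' gets 0 so it is never popped by an ordinary operator.
int precedence(char op) {
    if (op == '*' || op == '/') return 2;
    if (op == '+' || op == '-') return 1;
    return 0;
}

string infixToPostfix(const string& infix) {
    stack<char> ops;
    string postfix;
    for (char c : infix) {
        if (isalnum(static_cast<unsigned char>(c))) {
            postfix += c;                          // operands go straight to the output
        } else if (c == '(') {
            ops.push(c);                           // left parentheses are always pushed
        } else if (c == ')') {
            while (!ops.empty() && ops.top() != '(') {
                postfix += ops.top();              // pop until the matching '('
                ops.pop();
            }
            if (!ops.empty()) ops.pop();           // discard the '(' itself
        } else {                                   // current character is an operator
            while (!ops.empty() && precedence(ops.top()) >= precedence(c)) {
                postfix += ops.top();              // pop operators of equal or higher precedence
                ops.pop();
            }
            ops.push(c);
        }
    }
    while (!ops.empty()) {                         // flush the remaining operators
        postfix += ops.top();
        ops.pop();
    }
    return postfix;
}

int main() {
    cout << infixToPostfix("4+5*5") << endl;       // prints 455*+, i.e. 4 5 5 * +
    return 0;
}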
Chapter 4: Queue
4.1. Introduction to Queues
A queue is a fundamental data structure in computer science used to store and manage data in a
linear order. It works on the First In First Out (FIFO) principle, where the element that is
inserted first is the one to be removed first. This behavior is similar to a queue in real life, such
as people waiting in line at a ticket counter.
In a queue, we keep track of two positions:
Front: This is the position from which elements are removed.
Rear: This is the position where new elements are added.
Queues are particularly useful in scenarios where order of processing is crucial, such as CPU
task scheduling, print spooling, and managing requests in web servers.
4.2. Basic Operations on Queues
There are two primary operations in a queue:
Enqueue: Adds an element at the rear end of the queue.
Dequeue: Removes an element from the front end of the queue.
Example of Queue Operations
Operation      Queue Content
Enqueue(B)     B
Enqueue(C)     B, C
Dequeue()      C
Enqueue(G)     C, G
Enqueue(F)     C, G, F
Dequeue()      G, F
Enqueue(A)     G, F, A
Dequeue()      F, A
Each operation modifies the state of the queue accordingly.
4.3. Implementation Using Arrays
Queues can be implemented using arrays. This simple implementation maintains a fixed-size
array and two pointers (FRONT and REAR) to manage data.
Key Variables:
int FRONT = -1; // Index of the front element
int REAR = -1; // Index of the rear element
int QUEUESIZE = 0; // Number of elements in the queue
int Num[MAX_SIZE]; // Array to hold queue elements
Enqueue Operation Logic
1. Check if REAR < MAX_SIZE - 1 to ensure space is available.
2. If yes:
o Increment REAR.
o Store the new element at Num[REAR].
o Increment QUEUESIZE.
o If FRONT == -1, set FRONT = 0.
3. Else:
o Queue overflow.
Dequeue Operation Logic
1. Check if QUEUESIZE > 0.
2. If yes:
o Access the element at Num[FRONT].
o Increment FRONT.
o Decrement QUEUESIZE.
3. Else:
o Queue underflow.
Code Implementation
const int MAX_SIZE = 100;
int FRONT = -1, REAR = -1, QUEUESIZE = 0;
int Num[MAX_SIZE];

void enqueue(int x) {
    if (REAR < MAX_SIZE - 1) {
        REAR++;
        Num[REAR] = x;               // store the new element at the rear
        QUEUESIZE++;
        if (FRONT == -1) FRONT = 0;  // first element: front now points to it
    } else {
        cout << "Queue Overflow";
    }
}

int dequeue() {
    if (QUEUESIZE > 0) {
        int x = Num[FRONT];          // take the element at the front
        FRONT++;
        QUEUESIZE--;
        return x;
    } else {
        cout << "Queue Underflow";
        return -1;
    }
}
4.4. Circular Queue Implementation
A problem with linear arrays is that they may run out of space even when unused positions are
available due to front shifts. To handle this, a circular queue wraps around the end of the array.
Circular Enqueue Logic
1. Check if QUEUESIZE < MAX_SIZE.
2. Increment REAR using modulo: REAR = (REAR + 1) % MAX_SIZE.
3. Insert the element and increment QUEUESIZE.
4. If FRONT == -1, set FRONT = 0.
Circular Dequeue Logic
1. Check if QUEUESIZE > 0.
2. Retrieve the element at Num[FRONT].
3. Increment FRONT = (FRONT + 1) % MAX_SIZE.
4. Decrement QUEUESIZE.
Code Implementation
void enqueue(int x) {
    if (QUEUESIZE < MAX_SIZE) {
        REAR = (REAR + 1) % MAX_SIZE;   // wrap around when the end of the array is reached
        Num[REAR] = x;
        QUEUESIZE++;
        if (FRONT == -1) FRONT = 0;
    } else {
        cout << "Queue Overflow";
    }
}

int dequeue() {
    if (QUEUESIZE > 0) {
        int x = Num[FRONT];
        FRONT = (FRONT + 1) % MAX_SIZE; // front also wraps around
        QUEUESIZE--;
        return x;
    } else {
        cout << "Queue Underflow";
        return -1;
    }
}
4.5. Priority Queue
In a priority queue, each element is associated with a priority. Elements with higher priority are
dequeued before those with lower priority.
Example:
If we assign higher priority to females:
Queue = [Abebe (F), Alemu (M), Hana (F)]
Successive Dequeue() calls will remove Hana before Alemu, because females have higher priority.
This requires modifying the dequeue logic to search for the element with the highest priority, as in the sketch below. Separating a queue into groups, described in the next section, likewise helps in processing different groups independently.
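A minimal sketch of such a dequeue, assuming an array-based queue of records (the Person struct, its field names, and the choice of 'F' as the higher priority are illustrative):

#include <string>
using namespace std;

struct Person {
    string name;
    char gender;                 // 'F' or 'M'; here 'F' is treated as the higher priority
};

Person Q[100];
int front = 0, rear = -1;        // simple linear queue: valid elements are Q[front..rear]

// Assumes the queue is not empty.
Person priorityDequeue() {
    int pick = front;            // default: ordinary FIFO behaviour
    for (int i = front; i <= rear; i++) {
        if (Q[i].gender == 'F') { pick = i; break; }   // the first female found gets priority
    }
    Person chosen = Q[pick];
    for (int i = pick; i < rear; i++)                  // close the gap left by the removal
        Q[i] = Q[i + 1];
    rear--;
    return chosen;
}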
4.6. Demerging Queues
Demerging is the process of separating a single queue into multiple queues based on a specific
property or key (e.g., gender, department).
Algorithm:
while (!PriorityQueue.empty()) {
Data = DequeuePriorityQueue();
if (Data.gender == 'F')
EnqueueFemale(Data);
else
EnqueueMale(Data);
}
4.7. Merging Queues
Merging is the opposite of demerging. It combines two or more queues into a single priority
queue based on defined rules.
Example:
Merge two queues:
Female queue: F1, F2
Male queue: M1, M2
Priority is given to females.
Resulting priority queue: F1, F2, M1, M2
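In the same pseudo-code style as the demerging algorithm above, the merge could be sketched as follows (the queue and function names are illustrative):

// Females have priority: empty the female queue first, then the male queue.
while (!FemaleQueue.empty())
    EnqueuePriorityQueue(DequeueFemale());
while (!MaleQueue.empty())
    EnqueuePriorityQueue(DequeueMale());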
4.8. Applications of Queues
Queues are widely used in computer systems for various purposes:
Print servers: Managing a queue of print jobs.
Disk drivers: Handling I/O requests.
Operating systems: Task scheduling and process management.
Call centers: Managing calls in waiting.
Simulations: Modeling queues of people or tasks.
4.9. Summary
A queue is a linear data structure based on FIFO behavior.
Basic operations include enqueue and dequeue.
Queues can be implemented using simple arrays or circular arrays.
Priority queues allow elements with higher priority to be dequeued first.
Demerging and merging queues help in managing groups of data more effectively.
Queues are essential in real-world applications including system scheduling, resource management, and simulations.
Chapter 5: Linked Lists
Introduction to Linked Lists
Linked lists are fundamental data structures widely used in computer science to efficiently
store and manage collections of data. Unlike arrays, which store elements in contiguous
memory locations, linked lists consist of individual elements called nodes, each containing two
parts: the data and a reference (or pointer) to the next node in the sequence. This dynamic
nature of linked lists allows for efficient insertion and deletion of elements, especially when
compared to arrays where resizing or shifting elements can be computationally expensive.
There are several types of linked lists, each designed to serve specific purposes. The most basic
form is the singly linked list, in which each node points only to the next node. This structure
enables simple traversal in one direction, from the head (first node) to the tail (last node). A
more advanced variation is the doubly linked list, where each node contains two references:
one to the next node and one to the previous. This allows for bi-directional traversal, making
certain operations, such as reverse traversal or deletion of a node, more efficient. Another
variation is the circular linked list, where the last node points back to the first, forming a
continuous loop.
Linked lists are particularly useful in situations where the size of the data set is unknown or
changes frequently. They are commonly used in applications like implementing stacks, queues,
and graph adjacency representations. Despite their flexibility, linked lists also have limitations.
For instance, they do not support constant-time random access to elements, as traversal from
the head is needed to reach any particular node.
Understanding linked lists is essential for grasping more advanced data structures and
algorithms. Their simplicity in concept, yet power in application, makes them a vital topic in
computer science education and software development.
Linked Lists: An In-Depth Explanation
A linked list is a fundamental data structure used in computer science to organize and store
data efficiently. Unlike arrays, which store elements in contiguous memory locations, linked lists
store elements in nodes that are scattered in memory, with each node containing a reference
(or link) to the next node in the sequence. This structure allows for dynamic memory allocation,
efficient insertion and deletion, and flexibility in managing data.
1. Structure of a Linked List
A linked list is made up of nodes. Each node typically contains two parts:
Data – the value or information the node holds.
Pointer (or link) – a reference to the next node in the list.
The first node is called the head, and the last node points to null (or None in Python), signifying
the end of the list. The list can be visualized as:
[Data Next] -> [Data Next] -> [Data Next] -> null
2. Types of Linked Lists
There are several types of linked lists:
a. Singly Linked List
This is the simplest form of a linked list where each node points to the next node. It allows
traversal in one direction only – from the head to the tail.
b. Doubly Linked List
In a doubly linked list, each node contains two pointers: one to the next node and one to the
previous node. This allows for traversal in both directions but uses more memory.
c. Circular Linked List
In this version, the last node points back to the first node instead of null, forming a circle.
Circular lists can be singly or doubly linked and are useful in applications that require
continuous cycling through a list (e.g., round-robin scheduling).
3. Advantages of Linked Lists
Dynamic Size: Unlike arrays, linked lists don’t require a predefined size. Nodes can be added or
removed as needed without reallocating or reorganizing the entire structure.
Efficient Insertion/Deletion: Inserting or deleting elements in the middle of a linked list is more
efficient than with arrays, as it doesn't require shifting elements.
Memory Utilization: Linked lists use memory more efficiently, especially when the number of
data elements is unknown or frequently changes.
4. Disadvantages of Linked Lists
Sequential Access: Unlike arrays, random access is not possible. To access an element, one
must traverse the list from the head.
Extra Memory: Each node requires additional memory to store the pointer.
Complexity: Linked lists require careful pointer management, making them more complex to
implement than arrays.
5. Basic Operations
a. Traversal
To traverse a linked list, you start at the head and move from one node to the next using the
next pointer until you reach null.
b. Insertion
At the beginning: Create a new node and point its next to the current head, then update the
head.
At the end: Traverse to the last node, create a new node, and set the last node’s next to the
new node.
At a specific position: Traverse to the desired position and adjust pointers accordingly.
c. Deletion
From the beginning: Update the head to point to the second node.
From the end: Traverse to the second-to-last node and set its next to null.
From a specific position: Traverse to the node before the one to be deleted and change its next
pointer to skip the node.
d. Searching
Traverse the list from the head, comparing each node's data with the target value until the
match is found or the list ends.
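As a small illustration of the deletion and searching operations just described (a C++ sketch assuming the same Node struct used in the example at the end of this chapter):

// Delete the first node: move the head forward and free the old head.
void deleteFromBeginning(Node*& head) {
    if (head == nullptr) return;              // empty list: nothing to delete
    Node* old = head;
    head = head->next;
    delete old;
}

// Search: walk the list from the head, comparing each node's data with the key.
bool search(Node* head, int key) {
    for (Node* cur = head; cur != nullptr; cur = cur->next)
        if (cur->data == key) return true;    // match found
    return false;                             // reached the end without a match
}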
6. Applications of Linked Lists
Dynamic memory management in operating systems
Implementing stacks and queues
Navigation systems like browser history
Polynomial operations and other complex data structures (e.g., hash tables, adjacency lists in
graphs)
7. Linked Lists vs. Arrays
Feature              Linked List            Array
Size                 Dynamic                Static
Memory Usage         Extra for pointers     Contiguous only
Access Time          O(n)                   O(1)
Insertion/Deletion   Efficient              Costly (shifting)
Cache-friendliness   Low                    High
Example 1
C++ Code: Singly Linked List
#include <iostream>
using namespace std;
struct Node {
int data;
Node* next;
};
void append(Node*& head, int value) {
    Node* newNode = new Node();
    newNode->data = value;
    newNode->next = nullptr;
    if (head == nullptr) {
        head = newNode;                  // empty list: the new node becomes the head
    } else {
        Node* temp = head;
        while (temp->next != nullptr)    // walk to the last node
            temp = temp->next;
        temp->next = newNode;            // link the new node at the end
    }
}
void display(Node* head) {
    Node* current = head;
    while (current != nullptr) {
        cout << current->data << " -> ";
        current = current->next;
    }
    cout << "NULL" << endl;
}

int main() {
    Node* head = nullptr;
    append(head, 10);
    append(head, 20);
    append(head, 30);
    display(head);   // Output: 10 -> 20 -> 30 -> NULL
    return 0;
}