INSERTION SORT

LINK: https://www.geeksforgeeks.org/insertion-sort/
Insertion sort is a simple sorting algorithm that works the way we sort
playing cards in our hands.
Algorithm
// Sort an arr[] of size n
insertionSort(arr, n)
1. Loop from i = 1 to n-1.
   a) Pick element arr[i] and insert it into the sorted sequence arr[0..i-1].

// C program for insertion sort


#include <stdio.h>
  
/* Function to sort an array using insertion sort*/
void insertionSort(int arr[], int n)
{
    int i, key, j;
    for (i = 1; i < n; i++) {
        key = arr[i];
        j = i - 1;
  
        /* Move elements of arr[0..i-1], that are
          greater than key, to one position ahead
          of their current position */
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}
  
// A utility function to print an array of size n
void printArray(int arr[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
}
  
/* Driver program to test insertion sort */
int main()
{
    int arr[] = { 12, 11, 13, 5, 6 };
    int n = sizeof(arr) / sizeof(arr[0]);
  
    insertionSort(arr, n);
    printArray(arr, n);
  
    return 0;
}

Time Complexity: O(n^2)

Auxiliary Space: O(1)

Boundary Cases: Insertion sort takes maximum time when the elements are
sorted in reverse order, and minimum time (order of n) when the elements are
already sorted.

Uses: Insertion sort is used when the number of elements is small. It can
also be useful when the input array is almost sorted, i.e., only a few
elements are misplaced in a large array.

Merge Sort
Link: https://www.geeksforgeeks.org/merge-sort/

Like QuickSort, Merge Sort is a Divide and Conquer algorithm. It divides the
input array in two halves, calls itself for the two halves and then merges
the two sorted halves.
The merge() function is used for merging two halves. merge(arr, l, m, r) is
the key process, which assumes that arr[l..m] and arr[m+1..r] are sorted and
merges the two sorted sub-arrays into one.

// INT_MAX from the <limits.h> library can be used as a sentinel


#include <stdio.h>
#include <stdlib.h>

void merge(int arr[], int l, int m, int r);
void mergeSort(int arr[], int l, int r);
void printArray(int A[], int size);

/* Driver program to test above functions */
int main()
{
    int arr[] = { 12, 11, 13, 5, 6, 7 };
    int arr_size = sizeof(arr) / sizeof(arr[0]);

    printf("Given array is \n");
    printArray(arr, arr_size);

    mergeSort(arr, 0, arr_size - 1);

    printf("\nSorted array is \n");
    printArray(arr, arr_size);
    return 0;
}

// Merges two subarrays of arr[].
// First subarray is arr[l..m]
// Second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
    int i, j, k;
    int n1 = m - l + 1;
    int n2 = r - m;

    /* create temp arrays */
    int left[n1], right[n2];

    /* copy data to temp arrays left[] and right[] */
    for (i = 0; i < n1; i++)
        left[i] = arr[l + i];
    for (j = 0; j < n2; j++)
        right[j] = arr[m + 1 + j];

    /* Merge the temp arrays back into arr[l..r] */
    i = 0; // Initial index of first subarray
    j = 0; // Initial index of second subarray
    k = l; // Initial index of merged subarray
    while (i < n1 && j < n2) {
        if (left[i] <= right[j]) {
            arr[k] = left[i];
            i++;
        } else {
            arr[k] = right[j];
            j++;
        }
        k++;
    }

    /* Copy the remaining elements of left[], if there are any */
    while (i < n1) {
        arr[k] = left[i];
        i++;
        k++;
    }

    /* Copy the remaining elements of right[], if there are any */
    while (j < n2) {
        arr[k] = right[j];
        j++;
        k++;
    }
}

/* l is the left index and r the right index of the
   sub-array of arr to be sorted */
void mergeSort(int arr[], int l, int r)
{
    if (l < r) {
        // Same as (l+r)/2, but avoids overflow for large l and r
        int m = l + (r - l) / 2;

        // sort first and second halves
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);

        merge(arr, l, m, r);
    }
}

/* UTILITY FUNCTIONS */
/* Function to print an array */
void printArray(int A[], int size)
{
    int i;
    for (i = 0; i < size; i++)
        printf("%d \t", A[i]);
}

Time Complexity: Merge Sort is a recursive algorithm, and its time
complexity can be expressed as the following recurrence relation:
T(n) = 2T(n/2) + O(n)
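Unrolling this recurrence level by level gives the familiar bound (a
standard sketch, with c the constant hidden in the O(n) term):

```latex
T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = \dots
     = 2^k \, T\!\left(n/2^k\right) + k\,cn
```

With k = log2(n) levels and T(1) = O(1), this gives T(n) = n*T(1) + c*n*log2(n) = O(n log n).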

Auxiliary Space: O(n)
Algorithmic Paradigm: Divide and Conquer
Applications of Merge Sort
Merge Sort is useful for sorting linked lists in O(n log n) time. In the
case of linked lists, the situation is different, mainly due to the
difference in the memory allocation of arrays and linked lists. Unlike
arrays, linked list nodes may not be adjacent in memory. Unlike an array, in
a linked list we can insert items in the middle with O(1) extra space and
O(1) time. Therefore, the merge operation of merge sort can be implemented
without extra space for linked lists.
In arrays, we can do random access, as elements are contiguous in memory.
Say we have an integer (4-byte) array A and the address of A[0] is x; then
to access A[i], we can directly access the memory at (x + i*4). Unlike
arrays, we cannot do random access in a linked list. Quick Sort requires a
lot of this kind of access. In a linked list, to access the i'th index, we
have to traverse every node from the head to the i'th node, as we don't have
a continuous block of memory. Therefore, the overhead increases for
quicksort. Merge sort accesses data sequentially and the need for random
access is low.

QuickSort
Link: https://www.geeksforgeeks.org/quick-sort/

Like Merge Sort, QuickSort is a Divide and Conquer algorithm. It picks an
element as pivot and partitions the given array around the picked pivot.
There are many different versions of quickSort that pick the pivot in
different ways:
1. Always pick the first element as pivot.
2. Always pick the last element as pivot (implemented below).
3. Pick a random element as pivot.
4. Pick the median as pivot.
The key process in quickSort is partition(). The target of partition() is:
given an array and an element x of the array as pivot, put x at its correct
position in the sorted array, put all smaller elements (smaller than x)
before x, and put all greater elements (greater than x) after x. All this
should be done in linear time.
Partition Algorithm
There can be many ways to do the partition. The logic is simple: we start
from the leftmost element and keep track of the index of smaller (or equal)
elements as i. While traversing, if we find a smaller element, we swap the
current element with arr[i]. Otherwise we ignore the current element.

#include <stdio.h>
#include <stdlib.h>

/* Partition: always takes the last element as pivot and compares the
   elements with it; elements smaller than the pivot end up on its left,
   larger ones on its right */
int partition(int a[], int p, int r)
{
    int x = a[r];   /* the pivot */
    int i = p - 1;
    int j, aux;

    for (j = p; j <= r - 1; j++) {
        if (a[j] <= x) {
            i = i + 1;
            /* swap the smaller element into the left part */
            aux = a[i];
            a[i] = a[j];
            a[j] = aux;
        }
    }
    /* put the pivot in its final position */
    aux = a[i + 1];
    a[i + 1] = a[r];
    a[r] = aux;
    return i + 1;
}

void quick_sort(int a[], int p, int r)
{
    if (p < r) {
        int q = partition(a, p, r);
        quick_sort(a, p, q - 1);
        quick_sort(a, q + 1, r);
    }
}

void read_array(int a[], int n)
{
    int i;
    printf("enter the array: \t");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("n = %d\n", n);
}

void printArray(int a[], int n)
{
    int i;
    printf("your array is: \t");
    for (i = 0; i < n; i++)
        printf("%d\t", a[i]);
    printf("\n");
}

int main()
{
    int a[50];
    int n;

    printf("how many numbers does the array have? : ");
    scanf("%d", &n);
    if (n < 1 || n > 50)
        return 1;

    read_array(a, n);
    printArray(a, n);

    quick_sort(a, 0, n - 1);

    printf("the sorted array is:\n");
    printArray(a, n);

    return 0;
}

// What I understood from quicksort:
// partition always takes the last element as pivot and compares the elements
// with the pivot from left to right. If an element is smaller than the pivot,
// it is swapped into the left part.
Analysis of QuickSort
Time taken by QuickSort can in general be written as follows:

T(n) = T(k) + T(n-k-1) + O(n)

The first two terms are for the two recursive calls; the last term is for
the partition process. k is the number of elements smaller than the pivot.
The time taken by QuickSort depends upon the input array and the partition
strategy. Following are three cases.
Worst Case: The worst case occurs when the partition process always picks
the greatest or smallest element as pivot. If we consider the above
partition strategy, where the last element is always picked as pivot, the
worst case occurs when the array is already sorted in increasing or
decreasing order.

Best Case: The best case occurs when the partition process always picks the
middle element as pivot.
Average Case:
To do average case analysis, we would need to consider all possible
permutations of the array and calculate the time taken by each, which does
not look easy. We can get an idea of the average case by considering the
case when partition puts O(n/10) elements in one set and O(9n/10) elements
in the other set.
Although the worst case time complexity of QuickSort is O(n^2), which is
more than that of many other sorting algorithms like Merge Sort and Heap
Sort, QuickSort is faster in practice, because its inner loop can be
efficiently implemented on most architectures and on most real-world data.
QuickSort can be implemented in different ways by changing the choice of
pivot, so that the worst case rarely occurs for a given type of data.
However, merge sort is generally considered better when the data is huge and
stored in external storage.

Activity Selection Problem | Greedy Algo-1


Greedy is an algorithmic paradigm that builds up a solution piece by piece,
always choosing the next piece that offers the most obvious and immediate
benefit. Greedy algorithms are used for optimization problems. An
optimization problem can be solved using Greedy if the problem has the
following property: At every step, we can make a choice that looks best at
the moment, and we get the optimal solution of the complete problem.

If a Greedy Algorithm can solve a problem, then it generally becomes the best
method to solve that problem as the Greedy algorithms are in general more
efficient than other techniques like Dynamic Programming. But Greedy
algorithms cannot always be applied.
Greedy algorithms are sometimes also used to get approximations for hard
optimization problems. For example, the Traveling Salesman Problem is an
NP-hard problem. A greedy choice for this problem is to pick the nearest
unvisited city from the current city at every step. This solution doesn't
always produce the optimal solution, but it can be used to get an
approximately optimal solution.

Let us consider the Activity Selection problem as our first example of
Greedy algorithms. Following is the problem statement.
You are given n activities with their start and finish times. Select the
maximum number of activities that can be performed by a single person,
assuming that a person can only work on a single activity at a time.

The greedy choice is to always pick the next activity whose finish time is
least among the remaining activities and whose start time is greater than or
equal to the finish time of the previously selected activity. We can sort
the activities by finish time so that the next candidate considered is
always the one with the minimum finish time.
1) Sort the activities according to their finishing time.
2) Select the first activity from the sorted array and print it.
3) Do the following for the remaining activities in the sorted array:
   a) If the start time of this activity is greater than or equal to the
      finish time of the previously selected activity, then select this
      activity and print it.
Time Complexity: It takes O(n log n) time if the input activities are not
sorted, and O(n) time when the input activities are given already sorted.
#include <stdio.h>
#include <stdlib.h>

/* this function checks whether the finish time of the current activity is
   compatible with the start time of the next activity to be chosen
   (activities are assumed sorted by finish time) */
void printMaxAct(int s[], int f[], int n)
{
    int j, i = 0;

    /* the first activity is always selected */
    printf("Selected activities: %d\t", i);
    for (j = 1; j < n; j++) {
        if (s[j] >= f[i]) {
            printf("%d\t", j);
            i = j;
        }
    }
    printf("\n");
}

int main()
{
    int n, i;
    int s[] = {10, 12, 20, 31};  /* s[] - start times */
    int f[] = {20, 25, 30, 32};  /* f[] - finish times */

    n = sizeof(s) / sizeof(s[0]); /* length of the array */
    printf("value of n = %d \n", n);
    printf("activities:\n");
    for (i = 0; i < n; i++)
        printf("{ %d, %d }\n", s[i], f[i]);

    printMaxAct(s, f, n);

    return 0;
}

HeapSort

Heap sort is a comparison-based sorting technique based on the Binary Heap
data structure. It is similar to selection sort, where we first find the
maximum element and place it at the end. We repeat the same process for the
remaining elements.
What is Binary Heap?
Let us first define a Complete Binary Tree. A complete binary tree is a
binary tree in which every level, except possibly the last, is completely
filled, and all nodes are as far left as possible (Source: Wikipedia).
A Binary Heap is a Complete Binary Tree where items are stored in a special
order such that the value in a parent node is greater (or smaller) than the
values in its two children nodes. The former is called a max heap and the
latter a min heap. The heap can be represented by a binary tree or an array.

Why array-based representation for Binary Heap?
Since a Binary Heap is a Complete Binary Tree, it can be easily represented
as an array, and the array-based representation is space-efficient. If the
parent node is stored at index i, the left child can be calculated as
2*i + 1 and the right child as 2*i + 2 (assuming the indexing starts at 0).
Heap Sort Algorithm for sorting in increasing order:
1. Build a max heap from the input data.
2. At this point, the largest item is stored at the root of the heap. Replace
it with the last item of the heap followed by reducing the size of heap by 1.
Finally, heapify the root of tree.
3. Repeat above steps while size of heap is greater than 1.

Notes:
Heap sort is an in-place algorithm.
Its typical implementation is not stable, but it can be made stable (See this).
Time Complexity: The time complexity of heapify is O(log n). The time
complexity of createAndBuildHeap() is O(n), and the overall time complexity
of Heap Sort is O(n log n).
Applications of HeapSort
1. Sort a nearly sorted (or K-sorted) array
2. Find the k largest (or smallest) elements in an array
The heap sort algorithm has limited uses because Quicksort and Mergesort are
better in practice. Nevertheless, the Heap data structure itself is widely
used.

#include <stdio.h>
#include <stdlib.h>

/* heapify the subtree rooted at index i; n is the size of the heap */
void heapify(int a[], int n, int i)
{
    int largest = i;    // initialize largest as root
    int l = 2 * i + 1;  // left  = 2*i+1
    int r = 2 * i + 2;  // right = 2*i+2
    int aux;

    // if left child is larger than root
    if (l < n && a[l] > a[largest])
        largest = l;

    // if right child is larger than largest so far
    if (r < n && a[r] > a[largest])
        largest = r;

    // if largest is not root
    if (largest != i) {
        aux = a[i];
        a[i] = a[largest];
        a[largest] = aux;

        // recursively heapify the affected sub-tree
        heapify(a, n, largest);
    }
}

// main function to do heap sort
void heapSort(int a[], int n)
{
    int i, aux;

    // build heap (rearrange array)
    for (i = n / 2 - 1; i >= 0; i--)
        heapify(a, n, i);

    // one by one extract an element from the heap
    for (i = n - 1; i > 0; i--) {
        // move current root to the end
        aux = a[0];
        a[0] = a[i];
        a[i] = aux;
        // call max heapify on the reduced heap
        heapify(a, i, 0);
    }
}

void printArray(int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d\t", a[i]);
    printf("\n");
}

int main()
{
    int a[] = {12, 11, 13, 5, 6, 7};
    int n = sizeof(a) / sizeof(a[0]);

    printf("initial array:\n");
    printArray(a, n);

    heapSort(a, n);

    printf("Sorted array is:\n");
    printArray(a, n);
    return 0;
}
Greedy Algorithm to find Minimum number of Coins
Given a value V, if we want to make change for V Rs, and we have infinite
supply of each of the denominations in Indian currency, i.e., we have
infinite supply of { 1, 2, 5, 10, 20, 50, 100, 500, 1000} valued coins/notes,
what is the minimum number of coins and/or notes needed to make the change?

Example:
Input: V = 70
Output: 2
We need a 50 Rs note and a 20 Rs note.

Input: V = 121
Output: 3
We need a 100 Rs note, a 20 Rs note and a
1 Rs coin.

The idea is a simple greedy algorithm: start from the largest possible
denomination and keep adding denominations while the remaining value is
greater than 0. Below is the complete algorithm.

1) Initialize result as empty.
2) Find the largest denomination that is smaller than or equal to V.
3) Add the found denomination to the result. Subtract the value of the
   found denomination from V.
4) If V becomes 0, print the result; else repeat steps 2 and 3 for the new
   value of V.
#include <stdio.h>
#include <stdlib.h>
#define COINS 9   /* number of denominations */
#define MAX 100   /* maximum number of coins in the result */

int coins[COINS] = {1, 2, 5, 10, 20, 50, 100, 500, 1000};

void findMin(int cost)
{
    int coinList[MAX] = {0};
    int i, k = 0;

    /* go from the largest denomination down to the smallest */
    for (i = COINS - 1; i >= 0; i--) {
        /* while our value is >= the coin at position i */
        while (cost >= coins[i] && k < MAX) {
            cost -= coins[i];         /* subtract the chosen coin */
            coinList[k++] = coins[i]; /* record it in the result list */
        } /* the while loop runs until this denomination is exhausted */
    }
    for (i = 0; i < k; i++)
        printf(" %d\t", coinList[i]);
    printf("\n");
}

int main()
{
    int i;
    int n = 93;

    printf("list of coins: ");
    for (i = 0; i < COINS; i++)
        printf("%d \t", coins[i]);

    printf("\nthe minimum number of coins for the value %d is:\n", n);
    findMin(n);
    return 0;
}

Cutting a Rod
Given a rod of length n inches and an array of prices that contains the
prices of all pieces of size smaller than or equal to n, determine the
maximum value obtainable by cutting up the rod and selling the pieces. For
example, if the length of the rod is 8 and the values of the different
pieces are given as follows, then the maximum obtainable value is 22 (by
cutting in two pieces of lengths 2 and 6):

length | 1 2 3 4 5 6 7 8
--------------------------------------------
price | 1 5 8 9 10 17 17 20

And if the prices are as follows, then the maximum obtainable value is 24
(by cutting in eight pieces of length 1):

length | 1 2 3 4 5 6 7 8
--------------------------------------------
price | 3 5 8 9 10 17 17 20
A naive solution for this problem is to generate all configurations of
different pieces and find the highest-priced configuration. This solution is
exponential in terms of time complexity. Let us see how this problem
possesses both important properties of a Dynamic Programming (DP) problem
and can be efficiently solved using Dynamic Programming.
1) Optimal Substructure:
We can get the best price by making a cut at different positions and
comparing the values obtained after a cut. We can recursively call the same
function for a piece obtained after a cut.
Let cutRod(n) be the required (best possible price) value for a rod of
length n. cutRod(n) can be written as follows.
cutRod(n) = max(price[i] + cutRod(n-i-1)) for all i in {0, 1 .. n-1}
2) Overlapping Subproblems
Following is simple recursive implementation of the Rod Cutting problem. The
implementation simply follows the recursive structure mentioned above.
It iterates through the whole problem recursively.

I will need to use minus infinity.
How to do that: include the <limits.h> library and use INT_MIN:

q = INT_MIN;

What I understood:

The previous value from the check is kept, and rods of different lengths
(their profit) are compared against the previous result.

cut_rod(0, n)

Memoized:

If we have, for example, a piece of 1 and a piece of 2, and then a piece of
2 and a piece of 1, the same thing is repeated, which is time-consuming.

Initially r[n] is initialized with minus infinity.

We check whether the value is already stored in the array; if so, the value
from the array is returned, otherwise a solution is searched for on another
branch (this is also done in the memoized auxiliary function).

Activity Selection - in the lab

static int na = 0; when declared static, the initialization executes only
once, and afterwards the variable keeps its value from one call to the next
(it preserves the value that it increments).

LECTURE - 12.11.2019 - lecture 6

HASH TABLES
Searching in a collection:
- trees
- hash tables
Key, value
Example with a dictionary
The HASH function receives a word as input and returns a value between 0 and
9, since those are the positions in my table.
It takes a key and determines the position where it should be in the table.
Collision: when two elements end up at the same position.
To resolve this, we need a method (function) that prevents/reduces
collisions.
The first way of resolving collisions is by chaining.
HASH_INSERT: take the key of the element being inserted, hash it to
determine its position, and insert the element directly at the head of the
list.
HASH_SEARCH: look in the list at that position for the element with key k.
LIST OPERATIONS:
1. Insert
2. Search
3. Delete

Collision resolution by open addressing

Disadvantage: once the table is full, that's it; we cannot store more than
what was allocated.
With a chained list I can store as many keys as I want; I am not limited by
the allocated space.

LIST VS HASH TABLE


SEMINAR: Doubly linked list with sentinel
See the link:
https://www.tutorialspoint.com/data_structures_algorithms/linked_list_algorithms.htm

makenull(), with sentinel
look it up online, because you were not paying attention
Geeks: https://www.geeksforgeeks.org/doubly-linked-list/

What I understood:
The successor of the head node is linked to the next element.
A node has 3 fields: next, key, prev.
SEARCH:
LIST_SEARCH(LIST L, KEY)  // L is an IN parameter; the function returns a pointer to the found node
    Nod_lista x           // Nod_lista is a pointer
    x := L.head
    WHILE x <> NULL AND x.key <> KEY DO  // while the key has not been found and the end of the list has not been reached
        x := x.next                      // x points to the next element
    END WHILE
    RETURN x  // when the key is found, or the list is exhausted, x is returned
END LIST_SEARCH
When the two conditions are no longer both satisfied, x becomes NULL; it
points to the value NULL because there are no more elements.

#include <stdlib.h> - provides NULL, malloc and many others

*next, *prev are pointers, so they are accessed with ->

Linked List | Set 1 (Introduction)

Like arrays, Linked List is a linear data structure. Unlike arrays, linked
list elements are not stored at a contiguous location; the elements are
linked using pointers.

Why Linked List?


Arrays can be used to store linear data of similar types, but arrays have the
following limitations.
1) The size of the arrays is fixed: So we must know the upper limit on the
number of elements in advance. Also, generally, the allocated memory is equal
to the upper limit irrespective of the usage.
2) Inserting a new element in an array of elements is expensive because the
room has to be created for the new elements and to create room existing
elements have to be shifted.

For example, in a system, if we maintain a sorted list of IDs in an array


id[].
id[] = [1000, 1010, 1050, 2000, 2040].
And if we want to insert a new ID 1005, then to maintain the sorted order, we
have to move all the elements after 1000 (excluding 1000).
Deletion is also expensive with arrays unless special techniques are used.
For example, to delete 1010 in id[], everything after 1010 has to be moved.
Advantages over arrays
1) Dynamic size
2) Ease of insertion/deletion
Drawbacks:
1) Random access is not allowed. We have to access elements sequentially
starting from the first node, so we cannot do binary search with linked
lists efficiently in the default implementation.
2) Extra memory space for a pointer is required with each element of the
list.
3) Not cache friendly. Since array elements are contiguous locations, there
is locality of reference which is not there in case of linked lists.
Representation:
A linked list is represented by a pointer to the first node of the linked
list. The first node is called the head. If the linked list is empty, then
the value of the head is NULL.
Each node in a list consists of at least two parts:
1) data
2) Pointer (Or Reference) to the next node
In C, we can represent a node using structures. Below is an example of a
linked list node with integer data.

Doubly Linked List | Set 1 (Introduction and Insertion)

A Doubly Linked List (DLL) contains an extra pointer, typically called the
previous pointer, together with the next pointer and data which are there in
a singly linked list.
Following are the advantages/disadvantages of a doubly linked list over a
singly linked list.
Advantages over singly linked list
1) A DLL can be traversed in both forward and backward direction.
2) The delete operation in DLL is more efficient if pointer to the node to be
deleted is given.
3) We can quickly insert a new node before a given node.
In singly linked list, to delete a node, pointer to the previous node is
needed. To get this previous node, sometimes the list is traversed. In DLL,
we can get the previous node using previous pointer.
Disadvantages over singly linked list
1) Every node of a DLL requires extra space for a previous pointer. It is
possible to implement a DLL with a single pointer though (See this and this).
2) All operations require an extra previous pointer to be maintained. For
example, in insertion, we need to modify previous pointers together with
next pointers. In the following functions for insertion at different
positions, we need 1 or 2 extra steps to set the previous pointer.
Insertion
A node can be added in four ways
1) At the front of the DLL
2) After a given node.
3) At the end of the DLL
4) Before a given node.
1) Add a node at the front: (A 5 steps process)
The new node is always added before the head of the given Linked List, and
the newly added node becomes the new head of the DLL. For example, if the
given Linked List is 10->15->20->25 and we add an item 5 at the front, then
the Linked List becomes 5->10->15->20->25. Let us call the function that
adds at the front of the list push(). The push() must receive a pointer to
the head pointer, because push must change the head pointer to point to the
new node.
2) Add a node after a given node: (A 7 steps process)
We are given a pointer to a node as prev_node, and the new node is inserted
after the given node.
3) Add a node at the end: (A 7 steps process)
The new node is always added after the last node of the given Linked List.
For example, if the given DLL is 5->10->15->20->25 and we add an item 30 at
the end, then the DLL becomes 5->10->15->20->25->30.
Since a Linked List is typically represented by its head, we have to
traverse the list till the end and then change the next of the last node to
the new node.
4) Add a node before a given node:

Steps
Let the pointer to this given node be next_node and the data of the new node
to be added be new_data.
1. Check whether next_node is NULL. If it is NULL, return from the function,
   because a new node cannot be added before NULL.
2. Allocate memory for the new node; let it be called new_node.
3. Set new_node->data = new_data.
4. Set the previous pointer of new_node to the previous node of next_node:
   new_node->prev = next_node->prev.
5. Set the previous pointer of next_node to new_node:
   next_node->prev = new_node.
6. Set the next pointer of new_node to next_node:
   new_node->next = next_node.
7. If the previous node of new_node is not NULL, set the next pointer of
   that previous node to new_node: new_node->prev->next = new_node.
8. Else, if the prev of new_node is NULL, it will be the new head node, so
   make (*head_ref) = new_node.

// A complete working C program to demonstrate all insertion methods


#include <stdio.h>
#include <stdlib.h>

// A linked list node


struct Node {
int data;
struct Node* next;
struct Node* prev;
};

/* Given a reference (pointer to pointer) to the head of a list


and an int, inserts a new node on the front of the list. */
void push(struct Node** head_ref, int new_data)
{
/* 1. allocate node */
struct Node* new_node = (struct Node*)malloc(sizeof(struct Node));

/* 2. put in the data */


new_node->data = new_data;

/* 3. Make next of new node as head and previous as NULL */


new_node->next = (*head_ref);
new_node->prev = NULL;

/* 4. change prev of head node to new node */


if ((*head_ref) != NULL)
(*head_ref)->prev = new_node;

/* 5. move the head to point to the new node */


(*head_ref) = new_node;
}

/* Given a node as prev_node, insert a new node after the given node */
void insertAfter(struct Node* prev_node, int new_data)
{
/*1. check if the given prev_node is NULL */
if (prev_node == NULL) {
printf("the given previous node cannot be NULL");
return;
}

/* 2. allocate new node */


struct Node* new_node = (struct Node*)malloc(sizeof(struct Node));

/* 3. put in the data */


new_node->data = new_data;

/* 4. Make next of new node as next of prev_node */


new_node->next = prev_node->next;

/* 5. Make the next of prev_node as new_node */


prev_node->next = new_node;

/* 6. Make prev_node as previous of new_node */


new_node->prev = prev_node;

/* 7. Change previous of new_node's next node */


if (new_node->next != NULL)
new_node->next->prev = new_node;
}

/* Given a reference (pointer to pointer) to the head


of a DLL and an int, appends a new node at the end */
void append(struct Node** head_ref, int new_data)
{
/* 1. allocate node */
struct Node* new_node = (struct Node*)malloc(sizeof(struct Node));

struct Node* last = *head_ref; /* used in step 5*/

/* 2. put in the data */


new_node->data = new_data;

/* 3. This new node is going to be the last node, so


make next of it as NULL*/
new_node->next = NULL;

/* 4. If the Linked List is empty, then make the new


node as head */
if (*head_ref == NULL) {
new_node->prev = NULL;
*head_ref = new_node;
return;
}
/* 5. Else traverse till the last node */
while (last->next != NULL)
last = last->next;

/* 6. Change the next of last node */


last->next = new_node;

/* 7. Make last node as previous of new node */


new_node->prev = last;

return;
}

// This function prints contents of linked list starting from the given node
void printList(struct Node* node)
{
struct Node* last = NULL;
printf("\nTraversal in forward direction \n");
while (node != NULL) {
printf(" %d ", node->data);
last = node;
node = node->next;
}

printf("\nTraversal in reverse direction \n");


while (last != NULL) {
printf(" %d ", last->data);
last = last->prev;
}
}

/* Driver program to test above functions */


int main()
{
/* Start with the empty list */
struct Node* head = NULL;

// Insert 6. So linked list becomes 6->NULL


append(&head, 6);

// Insert 7 at the beginning. So linked list becomes 7->6->NULL


push(&head, 7);

// Insert 1 at the beginning. So linked list becomes 1->7->6->NULL


push(&head, 1);

// Insert 4 at the end. So linked list becomes 1->7->6->4->NULL


append(&head, 4);
// Insert 8, after 7. So linked list becomes 1->7->8->6->4->NULL
insertAfter(head->next, 8);

printf("Created DLL is: ");


printList(head);

getchar();
return 0;
}

My code:
#include <stdio.h>
#include <stdlib.h>

typedef struct nod_lista {
    int cheie;
    struct nod_lista *next, *prev;
} t_nod_lista;

typedef struct {
    t_nod_lista *head;
} t_lista;

/* allocate the sentinel node (the malloc result must be assigned to
   L->head, otherwise the program crashes on the following lines) */
void makenull(t_lista *L)
{
    L->head = (t_nod_lista *)malloc(sizeof(t_nod_lista));
    L->head->next = NULL;
    L->head->prev = NULL;
}

t_nod_lista *list_search(t_lista L, int key)
{
    t_nod_lista *x = L.head->next;
    while (x != NULL && x->cheie != key) {
        x = x->next;
    }
    return x;
}

void list_delete(t_lista *L, t_nod_lista *X)
{
    if (X->prev != NULL) {
        X->prev->next = X->next;
    } else {
        L->head->next = X->next;
    }
    if (X->next != NULL) {
        X->next->prev = X->prev;
    }
}

void list_insert(t_lista *L, t_nod_lista *X)
{
    if (X == NULL) {
        return;
    }
    X->next = L->head->next;
    if (L->head->next != NULL) {
        L->head->next->prev = X;
    }
    L->head->next = X;
    X->prev = L->head;
}

void list_print(t_lista *L)
{
    t_nod_lista *x = L->head->next;
    if (x == NULL) {
        printf("the list is empty");
    }
    while (x != NULL) {
        printf("%d ", x->cheie);
        x = x->next;
    }
    printf("\n");
}

void list_free(t_lista *L)
{
    t_nod_lista *x = L->head->next;
    while (x != NULL) {
        list_delete(L, x);
        free(x);
        x = L->head->next;
    }
    free(L->head);
}

int main()
{
    int s, key;
    t_lista L;
    t_nod_lista *x;

    printf("Welcome to the program\n");
    makenull(&L);
    s = 1;
    while (s != 0) {
        printf("operation (1=insert, 2=search, 3=delete, 4=print, 0=exit): ");
        scanf("%d", &s);
        if (s == 1) {
            printf("key: ");
            scanf("%d", &key);
            x = (t_nod_lista *)malloc(sizeof(t_nod_lista));
            x->cheie = key;
            list_insert(&L, x);
        } else if (s == 2) {
            printf("key: ");
            scanf("%d", &key);
            x = list_search(L, key);
            if (x != NULL) {
                printf("key %d found\n", key);
            } else {
                printf("key not found\n");
            }
        } else if (s == 3) {
            printf("key: ");
            scanf("%d", &key);
            x = list_search(L, key);
            if (x != NULL) {
                list_delete(&L, x);
                free(x);
            } else {
                printf("key not found\n");
            }
        } else if (s == 4) {
            list_print(&L);
        }
    }
    list_free(&L);

    return 0;
}

Another approach that works but is a bit different + it does not free the
allocated space:
#include <stdio.h>
#include <stdlib.h>

// a linked list node:
struct Node {
    int data;
    struct Node* next;
    struct Node* prev;
};

/* Given a reference (pointer to pointer) to the head of a list and an int,
   inserts a new node at the front of the list */
void push(struct Node** head_ref, int new_data)
{
    // 1. allocate space for the node
    struct Node* new_node = (struct Node*)malloc(sizeof(struct Node));

    // 2. put the data in it
    new_node->data = new_data;

    // 3. make the new node's next the old head, and its prev NULL
    new_node->next = (*head_ref);
    new_node->prev = NULL;

    // 4. change the prev of the old head to the new node
    if ((*head_ref) != NULL)
        (*head_ref)->prev = new_node;

    // 5. move the head so it points to the new node
    (*head_ref) = new_node;
}

/* given a node as prev_node, insert a new node after the given node */
void insert_after(struct Node* prev_node, int new_data)
{
    // 1. check whether the given prev_node is NULL
    if (prev_node == NULL) {
        printf("the given previous node cannot be null ");
        return;
    }
    // 2. allocate memory for the new node
    struct Node* new_node = (struct Node*)malloc(sizeof(struct Node));
    // 3. put the data inside
    new_node->data = new_data;
    // 4. make the new node's next be prev_node's next
    new_node->next = prev_node->next;
    // 5. make prev_node's next the new node
    prev_node->next = new_node;
    // 6. make prev_node the prev of the new node
    new_node->prev = prev_node;
    // 7. change the prev of new_node's next node
    if (new_node->next != NULL)
        new_node->next->prev = new_node;
}

void append(struct Node** head_ref, int new_data)
{
    // 1. allocate memory for the node
    struct Node* new_node = (struct Node*)malloc(sizeof(struct Node));
    struct Node* last = *head_ref; // used in step 5
    // 2. put in the data
    new_node->data = new_data;
    // 3. this node will be the last node, so make its next NULL
    new_node->next = NULL;
    // 4. if the linked list is empty, make the new node the head
    if ((*head_ref) == NULL) {
        new_node->prev = NULL;
        *head_ref = new_node;
        return;
    }
    // 5. else traverse till the last node
    while (last->next != NULL)
        last = last->next;
    // 6. change the next of the last node
    last->next = new_node;
    // 7. make the last node the prev of the new node
    new_node->prev = last;
    return;
}

void printList(struct Node* node)
{
    struct Node* last = NULL; // stays NULL for an empty list
    printf("\nTraversal in forward direction \n");
    while (node != NULL) {
        printf("%d ", node->data);
        last = node;
        node = node->next;
    }
    printf("\nTraversal in reverse direction \n");
    while (last != NULL) {
        printf("%d ", last->data);
        last = last->prev;
    }
}

void deleteNode(struct Node** head_ref, struct Node* del)
{
    // base case: the list is empty or there is no node to delete
    if (*head_ref == NULL || del == NULL)
        return;
    // if the node to delete is the head of the list:
    if (*head_ref == del)
        *head_ref = del->next;
    // change next only if the node to delete is not the last element
    if (del->next != NULL)
        del->next->prev = del->prev;
    // change prev only if the node to delete is not the head
    if (del->prev != NULL)
        del->prev->next = del->next;
    // free the memory occupied by del
    free(del);
    return;
}

int main()
{
    // start with an empty list:
    struct Node* head = NULL;
    // insert 6; the list becomes 6->NULL
    append(&head, 6);
    // insert 7 at the front; the list becomes 7->6->NULL
    push(&head, 7);
    // insert 1 at the front; the list becomes 1->7->6->NULL
    push(&head, 1);
    // insert 4 at the end; the list becomes 1->7->6->4->NULL
    append(&head, 4);
    // insert 8 after 7; the list becomes 1->7->8->6->4->NULL
    insert_after(head->next, 8);
    printf("created DLL is: ");
    printList(head);
    deleteNode(&head, head);
    deleteNode(&head, head->next->next);
    printf("\nlist after deletion: \n");
    printList(head);

    return 0;
}

HASH TABLE
Collisions: handled by open addressing or by chaining.
A collision is when two keys land on the same position.

HASH_INIT(T[], M) // the table is T[0..M-1]
FOR j := 0 TO M - 1
    T[j] := -1
END FOR
END HASH_INIT
// initializes every element of the array with -1 so we know the slot is
// empty (we can use it)

// look up the linear probing function! (no idea what it is)
HASH_PROBEF(K, I, M) // the linear probing function
RETURN (HASH_PRIM(K, M) + I) MOD M
END HASH_PROBEF
// looks for a position at which to insert a number; it goes from 0 to M-1,
// returning a position
// usually it tries to put the number at the position given by key mod M if it can

If I have an array with 9 slots, not all occupied, and we try to insert the
number 39, it first tries the position hash_probef computes from the key; if
that slot is occupied it keeps probing, wrapping around and checking every
position until it finds a free slot, or the table is exhausted and it returns
-1, meaning the table is full.

NOTE: practice singly linked lists and hash tables...


https://www.geeksforgeeks.org/hashing-data-structure/
Collision Handling: Since a hash function gets us a small number for a big
key, there is possibility that two keys result in same value. The situation
where a newly inserted key maps to an already occupied slot in hash table is
called collision and must be handled using some collision handling technique.
Following are the ways to handle collisions:

 Chaining: The idea is to make each cell of hash table point to a linked
list of records that have same hash function value. Chaining is simple,
but requires additional memory outside the table.
 Open Addressing: In open addressing, all elements are stored in the
hash table itself. Each table entry contains either a record or NIL.
When searching for an element, we one by one examine table slots until
the desired element is found or it is clear that the element is not in
the table.

Code I wrote for collision handling via open addressing:
#include <stdio.h>

void hash_init(int T[], int m)
{
    int j;
    for (j = 0; j <= m - 1; j++)
        T[j] = -1;
}

int hash_prim(int k, int m)
{
    return (k % m);
}

int hash_probef(int k, int i, int m)
{
    /* linear probing; the original returned (hash_prim(k,m)+1)%m, so every
       probe after the first landed on the same slot -- it must use i */
    return (hash_prim(k, m) + i) % m;
}

int hash_insert(int T[], int k, int m)
{
    int i = 0, j;
    while (i != m) {             /* as long as we have not tried every slot */
        j = hash_probef(k, i, m); /* position at which to try the insert */
        if (T[j] == -1) {         /* -1 means the position is free */
            T[j] = k;             /* insert the key at the found position */
            return j;             /* return the position of the inserted key */
        }
        i = i + 1;                /* otherwise keep probing */
    }
    return -1;                    /* the table is full */
}

void hash_print(int T[], int m)
{
    int j;
    for (j = 0; j <= m - 1; j++)
        printf("\n %d", T[j]);
}

int hash_search(int T[], int k, int m)
{
    int i = 0, j;
    j = hash_probef(k, i, m);
    while (i != m && T[j] != -1) { /* stop at an empty slot or after m probes */
        if (T[j] == k)
            return j;
        i = i + 1;
        j = hash_probef(k, i, m);
    }
    return -1;                     /* error flag: key not found */
}

int hash_delete(int T[], int k, int m)
{
    int j;
    j = hash_search(T, k, m);
    if (j > -1) {
        T[j] = -1; /* note: a real table would use a separate DELETED marker,
                      since -1 here can break later probe sequences */
        return j;
    } else {
        return -1;
    }
}

int main()
{
    int T[100]; /* capacity; m read below must be at most 100
                   (the original T[5] could overflow) */
    int m, j, k;
    printf("how long is the table? : ");
    scanf("%d", &m);
    hash_init(T, m);

    printf("\nkey to insert: ");
    scanf("%d", &k);
    while (k != 0) {
        j = hash_insert(T, k, m);
        printf("key inserted at location %d", j);
        printf("\nnext key: ");
        scanf("%d", &k);
    }
    printf("the table with the inserted numbers is: ");
    hash_print(T, m);

    printf(" \nkey to search: ");
    scanf("%d", &k);
    j = hash_search(T, k, m);
    if (j > -1) {
        printf("key found at location: %d", j);
    } else {
        printf("KEY NOT FOUND");
    }
    printf("\nkey to delete: ");
    scanf("%d", &k);

    j = hash_delete(T, k, m);
    if (j > -1) {
        printf("\n key deleted from location %d", j);
        hash_print(T, m);
    } else {
        printf(" \n the key was not deleted");
    }

    return 0;
}
On insertion: when the table fills up, it returns -1.
Separate Chaining:
The idea is to make each cell of hash table point to a linked list of records
that have same hash function value.
Let us consider a simple hash function as “key mod 7” and sequence of keys as
50, 700, 76, 85, 92, 73, 101.
Advantages:
1) Simple to implement.
2) Hash table never fills up, we can always add more elements to the chain.
3) Less sensitive to the hash function or load factors.
4) It is mostly used when it is unknown how many and how frequently keys may
be inserted or deleted.
Disadvantages:
1) Cache performance of chaining is not good as keys are stored using a
linked list. Open addressing provides better cache performance as everything
is stored in the same table.
2) Wastage of Space (Some Parts of hash table are never used)
3) If the chain becomes long, then search time can become O(n) in the worst
case.
4) Uses extra space for links.
See this link:
https://www.sanfoundry.com/c-program-implement-hash-tables-chaining-doubly-linked-lists/

BINARY SEARCH TREES

What I understood: at insertion we have a value, e.g. 15, and we want to
insert elements into the structure; we make 15 the root, and we have left
child, right child, parent and root. When a number is to be inserted we check
whether it is smaller than the root, in which case it goes on the left, and
if it is larger it is inserted on the right. The next step is to create the
links: we make the inserted element's parent point to the root (in our case),
and at the root we create a link to the newly inserted node, which becomes
the root's child; and if another element is inserted that is smaller than the
root but larger than our child, then the number is attached to the right of
that child ...
Deleting a node: the happy case is when we want to delete a leaf child (the
end of the chain); then it is simple, I change the last parent's child
pointer to NULL.
The ugly case is when we want to delete an element that has children ... we
have to promote the child into the place of the element we want to delete. In
principle it is similar to deleting a node in a doubly linked list.
The even uglier case is when the node has two children: the successor of the
node to be deleted will be the smallest node that is still larger ... that
is, we go down the right branch and look for the smallest number under it on
the right; and there is the case where the element to be deleted is the
parent of the successor element, in which case we proceed one way, roughly as
with doubly linked lists ... and there is also the case where the parent of
the successor of the node to be deleted is a different node.
There is also the case where the node to be deleted is the root: same
principle, the only extra thing is that after the deletion the root pointer
points to the new successor node.

My code:

#include <stdio.h>
#include <stdlib.h>

typedef struct nod_arb {
    int key;
    struct nod_arb *left, *right, *parent;
} t_nod_arb;

typedef struct {
    t_nod_arb *root;
} t_arbore;

t_nod_arb* make_root(t_arbore *A, int key)
{
    A->root = (t_nod_arb*)malloc(sizeof(t_nod_arb));
    A->root->key = key; /* the original stored NULL here instead of the key */
    A->root->left = NULL;
    A->root->right = NULL;
    A->root->parent = NULL;
    return A->root; /* returns a reference to the root node */
}

t_nod_arb* create_node(int key)
{
    t_nod_arb *n;
    n = (t_nod_arb*)malloc(sizeof(t_nod_arb));
    n->key = key;
    n->left = NULL;
    n->right = NULL;
    n->parent = NULL;
    return n;
}

void inorder_walk(t_nod_arb *root)
{
    if (root != NULL) {
        inorder_walk(root->left);
        printf("%d ", root->key);
        inorder_walk(root->right);
    }
}

void preorder_walk(t_nod_arb *root)
{
    if (root != NULL) {
        printf("root.key : %d\t", root->key);
        preorder_walk(root->left);
        preorder_walk(root->right);
    }
}

void postorder_walk(t_nod_arb *root)
{
    if (root != NULL) {
        postorder_walk(root->left);
        postorder_walk(root->right);
        printf("%d ", root->key);
    }
}

/* the helpers below return node pointers; the original declared them
   as returning int, which truncates the pointer */
t_nod_arb* tree_min(t_nod_arb *n)
{
    while (n->left != NULL)
        n = n->left;
    return n;
}

t_nod_arb* tree_max(t_nod_arb *n)
{
    while (n->right != NULL)
        n = n->right;
    return n;
}

t_nod_arb* tree_succesor(t_nod_arb *n)
{
    t_nod_arb *y;
    if (n->right != NULL)
        return tree_min(n->right);
    y = n->parent;
    while (y != NULL && n == y->right) {
        n = y;
        y = y->parent;
    }
    return y;
}

t_nod_arb* tree_search(t_nod_arb *n, int key)
{
    if (n == NULL || key == n->key)
        return n;
    if (key < n->key)
        return tree_search(n->left, key);
    else
        return tree_search(n->right, key);
}

t_nod_arb* it_tree_search(t_nod_arb *n, int key)
{
    while (n != NULL && key != n->key) {
        if (key < n->key) /* the original compared key < n->left */
            n = n->left;
        else
            n = n->right;
    }
    return n;
}

void tree_insert(t_arbore *A, t_nod_arb *n)
{
    t_nod_arb *y, *x;
    y = NULL;
    x = A->root;
    while (x != NULL) {
        y = x;
        if (n->key < x->key)
            x = x->left;
        else
            x = x->right;
    }
    n->parent = y;
    if (y == NULL)
        A->root = n;
    else if (n->key < y->key)
        y->left = n;
    else
        y->right = n;
}

void transplant(t_arbore *A, t_nod_arb *u, t_nod_arb *v)
{
    if (u->parent == NULL)
        A->root = v;
    else if (u == u->parent->left)
        u->parent->left = v;
    else
        u->parent->right = v;
    if (v != NULL)
        v->parent = u->parent;
}

void tree_delete(t_arbore *A, t_nod_arb *n)
{
    t_nod_arb *y;
    if (n->left == NULL) {          /* n has at most one child, on the right */
        transplant(A, n, n->right);
    } else if (n->right == NULL) {  /* one child, on the left */
        transplant(A, n, n->left);
    } else {                        /* node n has 2 children */
        y = tree_min(n->right);
        if (y->parent != n) {
            transplant(A, y, y->right);
            y->right = n->right;
            y->right->parent = y;
        }
        transplant(A, n, y);
        y->left = n->left;
        y->left->parent = y;
    }
}

int main()
{
    t_arbore T; /* the original declared t_arbore T[20]; one tree is enough */
    int x;
    t_nod_arb *n, *s;
    printf("\nroot node with key x=");
    scanf("%d", &x);
    make_root(&T, x);
    printf("\nenter keys until 0 is read\n");
    printf("x=");
    scanf("%d", &x);
    while (x != 0) {
        n = create_node(x);
        tree_insert(&T, n);
        printf("x=");
        scanf("%d", &x);
    }
    printf("inorder_walk:\n");
    inorder_walk(T.root);
    printf("\npreorder_walk: ");
    preorder_walk(T.root);
    printf("\n");
    printf("Find the node with key x= ");
    scanf("%d", &x);

    n = it_tree_search(T.root, x);
    if (n != NULL) {
        printf("node with key %d found iteratively\n", n->key);
    } else {
        printf("node not found iteratively\n");
    }
    n = tree_search(T.root, x);
    if (n != NULL) {
        printf("node with key %d found recursively\n", n->key);
    } else {
        printf("node not found recursively\n");
    }
    n = tree_min(T.root);
    printf("\nTree minimum: %d\n", n->key);
    n = tree_max(T.root);
    printf("tree maximum: %d\n", n->key);

    printf("successor of x= ");
    scanf("%d", &x);
    n = tree_search(T.root, x);
    /* guard against a key that is not in the tree */
    s = (n != NULL) ? tree_succesor(n) : NULL;
    if (s != NULL) {
        printf("Successor node: %d \n", s->key);
    } else {
        printf(" NULL \n");
    }

    printf("delete the node with key x= ");
    scanf("%d", &x);
    n = tree_search(T.root, x);
    if (n != NULL) {
        tree_delete(&T, n);
        printf("\nnode deleted\n");
        inorder_walk(T.root);
    } else {
        printf("\n node not found!");
    }
    return 0;
}

Binary Tree | Set 1 (Introduction)

Trees: Unlike Arrays, Linked Lists, Stack and queues, which are linear data
structures, trees are hierarchical data structures.

Tree Vocabulary: The topmost node is called root of the tree. The elements
that are directly under an element are called its children. The element
directly above something is called its parent. For example, ‘a’ is a child of
‘f’, and ‘f’ is the parent of ‘a’. Finally, elements with no children are
called leaves.

tree
----
     j    <-- root
   /   \
  f     k
 / \     \
a   h     z  <-- leaves

Why Trees?
1. One reason to use trees might be because you want to store information
that naturally forms a hierarchy. For example, the file system on a computer:
file system
-----------
        /    <-- root
     /    \
   ...    home
         /    \
     ugrad   course
      /     /  |  \
    ...  cs101 cs112 cs113

2. Trees (with some ordering e.g., BST) provide moderate access/search
(quicker than Linked List and slower than arrays).
3. Trees provide moderate insertion/deletion (quicker than Arrays and slower
than Unordered Linked Lists).
4. Like Linked Lists and unlike Arrays, Trees don’t have an upper limit on
number of nodes as nodes are linked using pointers.

Main applications of trees include:


1. Manipulate hierarchical data.
2. Make information easy to search (see tree traversal).
3. Manipulate sorted lists of data.
4. As a workflow for compositing digital images for visual effects.
5. Router algorithms
6. Form of a multi-stage decision-making (see business chess).

Binary Tree: A tree whose elements have at most 2 children is called a binary
tree. Since each element in a binary tree can have only 2 children, we
typically name them the left and right child.

Binary Tree Representation in C: A tree is represented by a pointer to the
topmost node in the tree. If the tree is empty, then the value of root is NULL.
A Tree node contains following parts.
1. Data
2. Pointer to left child
3. Pointer to right child
Summary: Tree is a hierarchical data structure. Main uses of trees include
maintaining hierarchical data, providing moderate access and insert/delete
operations. Binary trees are special cases of tree where every node has at
most two children.

1) The maximum number of nodes at level 'l' of a binary tree is 2^(l-1).
Here level is the number of nodes on the path from the root to the node
(including root and node). The level of the root is 1.
This can be proved by induction.
For the root, l = 1, number of nodes = 2^(1-1) = 1.
Assume that the maximum number of nodes on level l is 2^(l-1).
Since in a Binary tree every node has at most 2 children, the next level
would have twice the nodes, i.e. 2 * 2^(l-1).

2) The maximum number of nodes in a binary tree of height 'h' is 2^h - 1.
Here the height of a tree is the maximum number of nodes on a root to leaf
path. The height of a tree with a single node is considered as 1.
This result can be derived from point 1 above. A tree has maximum nodes if
all levels have maximum nodes. So the maximum number of nodes in a binary
tree of height h is 1 + 2 + 4 + .. + 2^(h-1). This is a simple geometric
series with h terms and the sum of this series is 2^h - 1.
In some books, the height of the root is considered as 0. In this convention,
the above formula becomes 2^(h+1) - 1.

3) In a Binary Tree with N nodes, the minimum possible height or minimum
number of levels is ceil(Log2(N+1)).
This can be directly derived from point 2 above. If we consider the
convention where the height of a leaf node is considered as 0, then the above
formula for minimum possible height becomes ceil(Log2(N+1)) - 1.

4) A Binary Tree with L leaves has at least ceil(Log2(L)) + 1 levels.
A Binary tree has the maximum number of leaves (and minimum number of levels)
when all levels are fully filled. Let all leaves be at level l; then the
following is true for the number of leaves L:
L <= 2^(l-1)   [From Point 1]
l >= ceil(Log2(L)) + 1, where l is the minimum number of levels.

5) In a Binary tree where every node has 0 or 2 children, the number of leaf
nodes is always one more than the number of nodes with two children.
L = T + 1, where L = number of leaf nodes and T = number of internal nodes
with two children.

RED-BLACK TREE:

Red-Black Tree | Set 1 (Introduction)


Red-Black Tree is a self-balancing Binary Search Tree (BST) where every node
follows following rules.

1) Every node has a color either red or black.

2) Root of tree is always black.

3) There are no two adjacent red nodes (A red node cannot have a red parent
or red child).

4) Every path from a node (including root) to any of its descendant NULL node
has the same number of black nodes.

Why Red-Black Trees?


Most of the BST operations (e.g., search, max, min, insert, delete.. etc)
take O(h) time where h is the height of the BST. The cost of these operations
may become O(n) for a skewed Binary tree. If we make sure that height of the
tree remains O(Logn) after every insertion and deletion, then we can
guarantee an upper bound of O(Logn) for all these operations. The height of a
Red-Black tree is always O(Logn) where n is the number of nodes in the tree.

Comparison with AVL Tree


The AVL trees are more balanced compared to Red-Black Trees, but they may
cause more rotations during insertion and deletion. So if your application
involves many frequent insertions and deletions, then Red Black trees should
be preferred. And if the insertions and deletions are less frequent and
search is a more frequent operation, then AVL tree should be preferred over
Red-Black Tree.

How does a Red-Black Tree ensure balance?


A simple example to understand balancing is, a chain of 3 nodes is not
possible in the Red-Black tree. We can try any combination of colours and see
all of them violate Red-Black tree property.

A chain of 3 nodes is not possible in Red-Black Trees. Following are
NOT Red-Black Trees (the three differ only in node colours, which are not
shown in this text-only copy):
      30              30              30
     /  \            /  \            /  \
   20   NIL        20   NIL        20   NIL
  /  \            /  \            /  \
10   NIL        10   NIL        10   NIL
Violates        Violates        Violates
Property 4      Property 4      Property 3

Following are different possible Red-Black Trees with the above 3 keys:
        20                  20
       /  \                /  \
     10    30            10    30
    /  \  /  \          /  \  /  \
  NIL NIL NIL NIL     NIL NIL NIL NIL

From the above examples, we get some idea how Red-Black trees ensure balance.
Following is an important fact about balancing in Red-Black Trees.

Black Height of a Red-Black Tree :


Black height is number of black nodes on a path from root to a leaf. Leaf
nodes are also counted black nodes. From above properties 3 and 4, we can
derive, a Red-Black Tree of height h has black-height >= h/2.

Number of nodes from a node to its farthest descendant leaf is no more than
twice as the number of nodes to the nearest descendant leaf.

Every Red-Black Tree with n nodes has height <= 2 * Log2(n+1)

This can be proved using following facts:


1) For a general Binary Tree, let k be the minimum number of nodes on all
root to NULL paths; then n >= 2^k - 1 (e.g. if k is 3, then n is at least 7).
This expression can also be written as k <= Log2(n+1).

2) From property 4 of Red-Black trees and above claim, we can say in a Red-
Black Tree with n nodes, there is a root to leaf path with at-most Log2(n+1)
black nodes.

3) From property 3 of Red-Black trees, we can claim that the number of black
nodes in a Red-Black tree is at least ⌊ n/2 ⌋ where n is the total number of
nodes.

From the above 2 points, we can conclude that a Red-Black Tree with n
nodes has height <= 2 * Log2(n+1).

In this post, we introduced Red-Black trees and discussed how balance is
ensured. The hard part is to maintain balance when keys are added and
removed. We will soon be discussing insertion and deletion operations in
coming posts on the Red-Black tree.

A useful link:
https://algorithmtutor.com/Data-Structures/Tree/Red-Black-Trees/
What I understood:
Look at the animation from the lab, it helps...
The tree has to be balanced ...
Left rotate and right rotate: right rotate is exactly the inverse of left
rotate.
The rotation is done on the parent.
Fixup: we have 3 cases ...
When we want to insert a new node, a few things are checked and the insertion
depends on them: we check whether the parent of the node to be inserted has a
sibling (whether the tree is balanced there), and if it has one and it is
red, the node is inserted like this: the grandparent of the new node changes
its colour from black to red, the node's parent and the parent's sibling
become black, and the new node will be red. The next step is to rebalance the
tree.
When z is a right child, I do a left rotation; when z is a left child, I do a
right rotation.
After the rotation we have to rebalance again, look at the colours and change
them so that everything is right.

Red nodes have black descendants ... it is taboo to have 2 red nodes one
after the other.
The delete operation on red-black trees is not on the exam, only insert is.
Finish the rest at home.

Red-Black Tree | Set 2 (Insert)

In the previous post, we discussed the introduction to Red-Black Trees. In
this post, insertion is discussed.

In AVL tree insertion, we used rotation as a tool to do balancing after
insertion caused imbalance. In Red-Black tree, we use two tools to do
balancing.

1) Recoloring
2) Rotation
We try recoloring first; if recoloring doesn’t work, then we go for rotation.
Following is the detailed algorithm. The algorithm has mainly two cases
depending upon the color of the uncle. If the uncle is red, we do recoloring.
If the uncle is black, we do rotations and/or recoloring.

Color of a NULL node is considered as BLACK.

Let x be the newly inserted node.


1) Perform standard BST insertion and make the color of newly inserted nodes
as RED.

2) If x is root, change color of x as BLACK (Black height of the complete
tree increases by 1).

3) Do following if color of x’s parent is not BLACK and x is not root.


….a) If x’s uncle is RED (Grand parent must have been black from property 4)
……..(i) Change color of parent and uncle as BLACK.
……..(ii) color of grand parent as RED.
……..(iii) Change x = x’s grandparent, repeat steps 2 and 3 for new x.

….b) If x’s uncle is BLACK, then there can be four configurations for x, x’s
parent (p) and x’s grandparent (g) (This is similar to AVL Tree)
……..i) Left Left Case (p is left child of g and x is left child of p)
……..ii) Left Right Case (p is left child of g and x is right child of p)
……..iii) Right Right Case (Mirror of case i)
……..iv) Right Left Case (Mirror of case ii)

Following are the operations to be performed in the four subcases when uncle
is BLACK.
All four cases when Uncle is BLACK
Left Left Case (See g, p and x)

Left Right Case (See g, p and x)

Right Right Case (See g, p and x)

Right Left Case (See g, p and x)


Examples of Insertion

For the practical exam:

When we have hash tables and one of those weird functions in the problem
statement, then it is the hash probe function that changes.
For trees, when the statement says we need a binary tree that also has to be
balanced, then it has to be red-black.

GRAPHS:
Directed and undirected graphs ...
It is an array of doubly linked lists; we work just like with doubly linked
lists. We have a head, but this time it also holds a key, next and prev, and
we insert the key's neighbours the same way as with lists, except that the
makenull function is called first.

INSERT_VECINI(LISTA_ADIACENTA G, S)
    NOD_LISTA nod
    PRINT "Source node " + S + ": "
    G[S].head->cheie := S // accessed with a dot: G[S].head

It is a dot because head is a pointer to the list, i.e. it points to the
whole list.
S is the source node.
Graph Data Structure And Algorithms

A Graph is a non-linear data structure consisting of nodes and edges. The
nodes are sometimes also referred to as vertices and the edges are lines or
arcs that connect any two nodes in the graph. More formally, a Graph can be
defined as:
A Graph consists of a finite set of vertices (or nodes) and a set of edges
which connect a pair of nodes.

In the above Graph, the set of vertices V = {0,1,2,3,4} and the set of edges
E = {01, 12, 23, 34, 04, 14, 13}.
Graphs are used to solve many real-life problems. Graphs are used to
represent networks. The networks may include paths in a city or telephone
network or circuit network. Graphs are also used in social networks like
linkedIn, Facebook. For example, in Facebook, each person is represented with
a vertex(or node). Each node is a structure and contains information like
person id, name, gender, locale etc.

USEFUL LINKS:
https://www.geeksforgeeks.org/find-the-weight-of-the-minimum-spanning-tree/
https://www.geeksforgeeks.org/find-the-minimum-spanning-tree-with-alternating-colored-edges/

If a problem about minimum cost, houses, streets comes up, it's Kruskal.
Kruskal’s Minimum Spanning Tree using STL in C++

Given an undirected, connected and weighted graph, find Minimum Spanning Tree
(MST) of the graph using Kruskal’s algorithm.

Input : Graph as an array of edges
Output : Edges of MST are
         6 - 7
         2 - 8
         5 - 6
         0 - 1
         2 - 5
         2 - 3
         0 - 7
         3 - 4
         Weight of MST is 37
Note : There are two possible MSTs; the other MST includes edge 1-2 in place
of 0-7.
We have discussed below Kruskal’s MST implementations.
Greedy Algorithms | Set 2 (Kruskal’s Minimum Spanning Tree Algorithm)

Below are the steps for finding MST using Kruskal’s algorithm
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge. Check if it forms a cycle with the spanning
   tree formed so far. If a cycle is not formed, include this edge.
   Else, discard it.
3. Repeat step 2 until there are (V-1) edges in the spanning tree.
Here are some key points which will be useful for us in implementing the
Kruskal’s algorithm using STL.
1. Use a vector of edges which consists of all the edges in the graph;
   each item of the vector will contain 3 parameters: source, destination
   and the cost of the edge between the source and destination:
   vector<pair<int, pair<int, int> > > edges;
2. Here in the outer pair (i.e. pair<int, pair<int, int> >) the first
   element corresponds to the cost of an edge while the second element is
   itself a pair, and it contains the two vertices of the edge.
3. Use the inbuilt std::sort to sort the edges in non-decreasing order;
   by default the sort function sorts in non-decreasing order.
4. We use the Union-Find algorithm to check whether the current edge forms
   a cycle if it is added to the current MST. If yes, discard it; else
   include it (union).
Pseudo Code:
// Initialize result
mst_weight = 0
// Create V single item sets
for each vertex v
    parent[v] = v
    rank[v] = 0
Sort all edges into non-decreasing order by weight w
for each (u, v) taken from the sorted list E do
    if FIND-SET(u) != FIND-SET(v)
        print edge(u, v)
        mst_weight += weight of edge(u, v)
        UNION(u, v)

Applications of Minimum Spanning Tree Problem

Minimum Spanning Tree (MST) problem: Given connected graph G with positive
edge weights, find a min weight set of edges that connects all of the
vertices.
MST is fundamental problem with diverse applications.
Network design.
– telephone, electrical, hydraulic, TV cable, computer, road
The standard application is to a problem like phone network design. You have
a business with several offices; you want to lease phone lines to connect
them up with each other; and the phone company charges different amounts of
money to connect different pairs of cities. You want a set of lines that
connects all your offices with a minimum total cost. It should be a spanning
tree, since if a network isn’t a tree you can always remove some edges and
save money.
Approximation algorithms for NP-hard problems.
– traveling salesperson problem, Steiner tree
A less obvious application is that the minimum spanning tree can be used to
approximately solve the traveling salesman problem. A convenient formal way
of defining this problem is to find the shortest path that visits each point
at least once.
Note that if you have a path visiting all points exactly once, it’s a special
kind of tree. For instance in the example above, twelve of sixteen spanning
trees are actually paths. If you have a path visiting some vertices more than
once, you can always drop some edges to get a tree. So in general the MST
weight is less than the TSP weight, because it’s a minimization over a
strictly larger set.
On the other hand, if you draw a path tracing around the minimum spanning
tree, you trace each edge twice and visit all points, so the TSP weight is
less than twice the MST weight. Therefore this tour is within a factor of two
of optimal.
Indirect applications.
– max bottleneck paths
– LDPC codes for error correction
– image registration with Renyi entropy
– learning salient features for real-time face verification
– reducing data storage in sequencing amino acids in a protein
– model locality of particle interactions in turbulent fluid flows
– autoconfig protocol for Ethernet bridging to avoid cycles in a network
Cluster analysis
k clustering problem can be viewed as finding an MST and deleting the k-1
most
expensive edges.

USEFUL LINKS:
https://www.geeksforgeeks.org/kruskals-minimum-spanning-tree-algorithm-greedy-algo-2/
Prim’s Minimum Spanning Tree (MST) | Greedy Algo-5

We have discussed Kruskal’s algorithm for Minimum Spanning Tree. Like
Kruskal’s algorithm, Prim’s algorithm is also a Greedy algorithm. It starts
with an empty spanning tree. The idea is to maintain two sets of vertices.
The first set contains the vertices already included in the MST, the other
set contains the vertices not yet included. At every step, it considers all
the edges that connect the two sets, and picks the minimum weight edge from
these edges. After picking the edge, it moves the other endpoint of the edge
to the set containing MST.
A group of edges that connects two set of vertices in a graph is called cut
in graph theory. So, at every step of Prim’s algorithm, we find a cut (of two
sets, one contains the vertices already included in MST and other contains
rest of the vertices), pick the minimum weight edge from the cut and include
this vertex to MST Set (the set that contains already included vertices).
How does Prim's Algorithm Work? The idea behind Prim's algorithm is simple: a spanning tree means all vertices must be connected, so the two disjoint subsets of vertices (discussed above) must be connected to make a spanning tree, and they must be connected with the minimum-weight edge to make it a minimum spanning tree.
Algorithm
1) Create a set mstSet that keeps track of vertices already included in MST.
2) Assign a key value to all vertices in the input graph. Initialize all key
values as INFINITE. Assign key value as 0 for the first vertex so that it is
picked first.
3) While mstSet doesn’t include all vertices
….a) Pick a vertex u which is not there in mstSet and has minimum key value.
….b) Include u to mstSet.
….c) Update the key values of all vertices adjacent to u. For every adjacent vertex v, if the weight of edge u-v is less than the previous key value of v, set the key value of v to the weight of u-v.

The idea of using key values is to pick the minimum-weight edge crossing the cut. Key values matter only for the vertices not yet included in the MST; for each such vertex, the key value is the minimum weight of an edge connecting it to the set of vertices already in the MST.
Let us understand with the following example:

The set mstSet is initially empty and the keys assigned to the vertices are {0, INF, INF, INF, INF, INF, INF, INF}, where INF means infinity. Now pick the vertex with the minimum key value: vertex 0 is picked and included in mstSet, so mstSet becomes {0}. After including it, update the key values of its adjacent vertices. The adjacent vertices of 0 are 1 and 7; their key values are updated to 4 and 8. The following subgraph shows the vertices and their key values (only vertices with finite key values are shown); vertices included in the MST are shown in green.

Pick the vertex with the minimum key value that is not already in the MST (not in mstSet). Vertex 1 is picked and added to mstSet, so mstSet becomes {0, 1}. Update the key values of the vertices adjacent to 1: the key value of vertex 2 becomes 8.

Pick the vertex with the minimum key value that is not already in the MST (not in mstSet). We can pick either vertex 7 or vertex 2; say vertex 7 is picked. So mstSet becomes {0, 1, 7}. Update the key values of the vertices adjacent to 7: the key values of vertices 6 and 8 become finite (1 and 7 respectively).

Pick the vertex with the minimum key value that is not already in the MST (not in mstSet). Vertex 6 is picked, so mstSet becomes {0, 1, 7, 6}. Update the key values of the vertices adjacent to 6: the key values of vertices 5 and 8 are updated.

We repeat the above steps until mstSet includes all vertices of the given graph. Finally, we get the graph shown in the original figure.

Problem Solving for Minimum Spanning Trees (Kruskal's and Prim's)

Minimum spanning tree (MST) is an important topic for GATE, so we will discuss how to solve different types of questions based on MST. Before reading this section, you should understand the basics of MST and the two algorithms (Kruskal's algorithm and Prim's algorithm).
Type 1. Conceptual questions based on MST –
There are some important properties of MST on the basis of which conceptual questions can be asked:

 The number of edges in an MST with n nodes is (n-1).
 The weight of the MST of a graph is always unique; however, there may be different MSTs achieving this weight (if there are edges with equal weights).
 The weight of the MST is the sum of the weights of its edges.
 The maximum path length between two vertices is (n-1) in an MST with n vertices.
 There exists exactly one path between any two vertices in an MST.
 Removing any edge from an MST disconnects it.
 For a graph whose edge weights are all distinct, the MST is unique.

Que – 1. Let G be an undirected connected graph with distinct edge weights. Let emax be the edge with maximum weight and emin the edge with minimum weight. Which of the following statements is false? (GATE CS 2000)
(A) Every minimum spanning tree of G must contain emin.
(B) If emax is in a minimum spanning tree, then its removal must disconnect G.
(C) No minimum spanning tree contains emax.
(D) G has a unique minimum spanning tree.

Solution: As edge weights are distinct, there is a single edge emin, and it will always be added to the MST, so option (A) is true.
As a spanning tree has the minimum possible number of edges, removing any edge disconnects the graph, so option (B) is also true.
As all edge weights are distinct, G has a unique minimum spanning tree, so option (D) is correct.
Option (C) is false: emax can be part of the MST if every lighter alternative would create a cycle and fewer than (n-1) edges have been added before emax is considered (for example, when emax is a bridge).
Type 2. How to find the weight of the minimum spanning tree given the graph –
This is the simplest type of question based on MST. To solve it using Kruskal's algorithm:

 Arrange the edges in non-decreasing order of weight.
 Add edges one by one if they don't create a cycle, until we have n-1 edges, where n is the number of nodes in the graph.
Que – 2. Consider a complete undirected graph with vertex set {0, 1, 2, 3,
4}. Entry Wij in the matrix W below is the weight of the edge {i, j}. What is
the minimum possible weight of a spanning tree T in this graph such that
vertex 0 is a leaf node in the tree T? (GATE CS 2010)
(A) 7
(B) 8
(C) 9
(D) 10
Solution: Label the 5 vertices v1 to v5 (corresponding to 0 to 4). In the adjacency matrix of the graph, the edges arranged in non-decreasing order of weight are:
(v1,v2), (v1,v4), (v4,v5), (v3,v5), (v1,v5), (v2,v4), (v3,v4), (v1,v3), (v2,v5), (v2,v3)
Since vertex v1 must be a leaf node, it has exactly one edge incident to it, so we consider it last. Considering vertices v2 to v5, the edges in non-decreasing order are:
(v4,v5), (v3,v5), (v2,v4), (v3,v4), (v2,v5), (v2,v3)
Adding the first three edges (v4,v5), (v3,v5), (v2,v4) creates no cycle. Finally, we connect v1 to v2 using the edge (v1,v2). The total weight, the sum of these 4 edge weights, is 10.
Type 3. How many minimum spanning trees are possible using Kruskal's algorithm for a given graph –

 If all edge weights are distinct, the minimum spanning tree is unique.
 If two edges have the same weight, we have to consider both possibilities and enumerate the possible minimum spanning trees.
Que – 3. The number of distinct minimum spanning trees for the weighted graph
below is ____ (GATE-CS-2014)
(A) 4
(B) 5
(C) 6
(D) 7
Solution: There are 5 edges with weight 1, and adding all of them to the MST does not create a cycle.
As the graph has 9 vertices, we need 8 edges in total, of which 5 have already been added. Of the remaining 3, one edge is fixed (the one labelled f in the figure). For the remaining 2, one is to be chosen from c, d, or e, and the other from a or b; the remaining black edges would always create a cycle, so they are not considered. So the number of possible MSTs is 3*2 = 6.

Type 4. Out of the given sequences, which one is not the sequence of edges added to the MST using Kruskal's algorithm –
To solve this type of question, work out which sequences of edges Kruskal's algorithm can produce. The sequence that cannot be produced is the answer.
Que – 4. Consider the following graph:

Which one of the following is NOT the sequence of edges added to the minimum spanning tree using Kruskal's algorithm? (GATE-CS-2009)
(A) (b,e), (e,f), (a,c), (b,c), (f,g), (c,d)
(B) (b,e), (e,f), (a,c), (f,g), (b,c), (c,d)
(C) (b,e), (a,c), (e,f), (b,c), (f,g), (c,d)
(D) (b,e), (e,f), (b,c), (a,c), (f,g), (c,d)
Solution: Kruskal's algorithm adds edges in non-decreasing order of weight, so we first sort the edges in non-decreasing order of weight:
(b,e), (e,f), (a,c), (b,c), (f,g), (a,b), (e,g), (c,d), (b,d), (e,d), (d,f).
First it adds (b,e). Then it adds both (e,f) and (a,c), in either order, because they have the same weight and neither creates a cycle.
However, in option (D), (b,c) is added to the MST before (a,c), so it cannot be a sequence produced by Kruskal's algorithm.
Minimum spanning trees
Kruskal:
Obtains a minimum cost.
I must start from the edge with the smallest weight (this is mandatory).

PRIM:
I can start from any node.
Initialize all values with infinity and start with the first node; check the node's neighbours and look at their values, update the values (since they were set to infinity), choose the cheapest edge and remove the node from the queue (the steps repeat).
The idea is that every "house" ends up connected.
