Module 9.4

Algorithms in C

Master fundamental algorithms that power modern software. Learn sorting algorithms (bubble, selection, insertion, merge, quick sort), searching algorithms (linear, binary), and understand algorithm efficiency with Big O notation.

60 min read
Intermediate
Hands-on Examples
What You'll Learn
  • Sorting algorithms
  • Searching algorithms
  • Algorithm analysis (Big O)
  • Recursion in algorithms
  • Choosing the right algorithm
Contents
01

Algorithm Analysis

An algorithm is a step-by-step procedure for solving a problem. Algorithm analysis helps us understand how efficiently an algorithm uses resources (time and space) as input size grows. This understanding is crucial for choosing the right algorithm.

Concept

Big O Notation

Big O notation describes the upper bound of an algorithm's time or space complexity as input size (n) grows. It focuses on the dominant term, ignoring constants and lower-order terms.

Key insight: Big O describes how running time or memory scales as n grows (conventionally quoted for the worst case), not the exact running time.

Common Time Complexities

// O(1) - Constant Time
// Time doesn't change with input size
int getFirst(int arr[], int n) {
    return arr[0];  // Always one operation
}

// O(log n) - Logarithmic Time
// Halves the problem each step (binary search)
// 1000 elements → ~10 steps, 1M elements → ~20 steps

// O(n) - Linear Time
// Visits each element once
int sum(int arr[], int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {  // n iterations
        total += arr[i];
    }
    return total;
}

// O(n log n) - Linearithmic Time
// Efficient sorting algorithms (merge sort, quick sort)

// O(n²) - Quadratic Time
// Nested loops over data
void printPairs(int arr[], int n) {
    for (int i = 0; i < n; i++) {        // n iterations
        for (int j = 0; j < n; j++) {    // n iterations each
            printf("(%d,%d) ", arr[i], arr[j]);
        }
    }
}  // Total: n × n = n² operations

// O(2^n) - Exponential Time
// Doubles with each additional input (recursive fibonacci)
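To make the logarithmic case concrete, here is a small helper (a sketch, not part of the listings above) that counts how many halvings it takes to get from n down to 1 — the same work pattern binary search does:

```c
#include <assert.h>

// Counts how many times n can be halved before reaching 1.
// The count grows like log2(n): doubling n adds only one more step.
int halvingSteps(int n) {
    int steps = 0;
    while (n > 1) {
        n /= 2;
        steps++;
    }
    return steps;
}
```

halvingSteps(1000) is 9 and halvingSteps(1000000) is 19, matching the "~10 steps" and "~20 steps" figures quoted above.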

Complexity Comparison Table

Big O       Name          n=10   n=100      n=1000      Example
O(1)        Constant      1      1          1           Array access
O(log n)    Logarithmic   3      7          10          Binary search
O(n)        Linear        10     100        1,000       Linear search
O(n log n)  Linearithmic  33     664        9,966       Merge sort
O(n²)       Quadratic     100    10,000     1,000,000   Bubble sort
O(2^n)      Exponential   1,024  1.27×10³⁰  1.07×10³⁰¹  Naive recursion

Space Complexity

// Space complexity measures memory usage

// O(1) Space - In-place algorithm
void swap(int *a, int *b) {
    int temp = *a;  // Only one extra variable
    *a = *b;
    *b = temp;
}

// O(n) Space - Creates copy of input
int *copyArray(int arr[], int n) {
    int *copy = malloc(n * sizeof(int));  // n extra elements
    for (int i = 0; i < n; i++) {
        copy[i] = arr[i];
    }
    return copy;
}

// O(n) Space - Recursive call stack
int factorial(int n) {
    if (n <= 1) return 1;
    return n * factorial(n - 1);  // n stack frames
}
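The recursive factorial above costs O(n) stack space; an iterative rewrite (an illustrative variant, not from the listing) brings that down to O(1):

```c
#include <assert.h>

// Iterative factorial: same values as the recursive version above,
// but O(1) space because no call stack builds up.
long factorialIter(int n) {
    long result = 1;
    for (int i = 2; i <= n; i++) {
        result *= i;
    }
    return result;
}
```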
Practice Questions

Task: What is the time complexity of this function?

int mystery(int n) {
    int count = 0;
    for (int i = 1; i < n; i *= 2) {
        count++;
    }
    return count;
}
Solution:
// Answer: O(log n)
// 
// The loop variable i doubles each iteration: 1, 2, 4, 8, 16...
// It stops when i >= n
// After k iterations: i = 2^k
// Loop ends when 2^k >= n, so k = log₂(n)
// 
// Example: n = 16
// i values: 1, 2, 4, 8 → 4 iterations = log₂(16)
// (the loop exits before i = 16, since 16 < 16 is false)
02

Sorting Algorithms

Sorting arranges elements in a specific order (ascending or descending). Different algorithms have different trade-offs in terms of time complexity, space usage, and stability. Simple algorithms are easier to understand but slower for large datasets.

Bubble Sort - O(n²)

// Bubble Sort: Repeatedly swap adjacent elements if out of order
// Simple but inefficient - best for educational purposes

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        // Flag to optimize - stop if no swaps
        int swapped = 0;
        
        // Each pass "bubbles" largest element to end
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                // Swap adjacent elements
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                swapped = 1;
            }
        }
        
        // If no swaps, array is sorted
        if (!swapped) break;
    }
}

// Visualization:
// [5, 3, 8, 4, 2] - Original
// [3, 5, 4, 2, 8] - After pass 1 (8 bubbled to end)
// [3, 4, 2, 5, 8] - After pass 2 (5 in place)
// [3, 2, 4, 5, 8] - After pass 3
// [2, 3, 4, 5, 8] - After pass 4 (sorted!)

// Time: O(n²) worst/average, O(n) best (already sorted)
// Space: O(1) - in-place
// Stable: Yes (equal elements maintain relative order)

Selection Sort - O(n²)

// Selection Sort: Find minimum, place at beginning, repeat
// Fewer swaps than bubble sort, but still O(n²)

void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        // Find minimum element in unsorted part
        int minIdx = i;
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIdx]) {
                minIdx = j;
            }
        }
        
        // Swap minimum with first unsorted element
        if (minIdx != i) {
            int temp = arr[i];
            arr[i] = arr[minIdx];
            arr[minIdx] = temp;
        }
    }
}

// Visualization:
// [64, 25, 12, 22, 11] - Find min (11), swap with 64
// [11, 25, 12, 22, 64] - Find min (12), swap with 25
// [11, 12, 25, 22, 64] - Find min (22), swap with 25
// [11, 12, 22, 25, 64] - Find min (25), already in place
// [11, 12, 22, 25, 64] - Sorted!

// Time: O(n²) always (no early termination)
// Space: O(1) - in-place
// Stable: No (swapping can change relative order)

Insertion Sort - O(n²)

// Insertion Sort: Build sorted array one element at a time
// Excellent for small or nearly-sorted arrays

void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];  // Element to insert
        int j = i - 1;
        
        // Shift elements greater than key to the right
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        
        // Insert key at correct position
        arr[j + 1] = key;
    }
}

// Visualization (inserting each element into sorted portion):
// [5, |3, 8, 4, 2] - Insert 3: [3, 5, |8, 4, 2]
// [3, 5, |8, 4, 2] - Insert 8: [3, 5, 8, |4, 2]
// [3, 5, 8, |4, 2] - Insert 4: [3, 4, 5, 8, |2]
// [3, 4, 5, 8, |2] - Insert 2: [2, 3, 4, 5, 8]

// Time: O(n²) worst/average, O(n) best (already sorted)
// Space: O(1) - in-place
// Stable: Yes
// Note: Very fast for small n (<20) - used as base case in hybrid sorts
When to use simple sorts: Insertion sort is often the best choice for small arrays (n < 20) or nearly-sorted data. Many efficient algorithms like Timsort use insertion sort for small subarrays.
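The best/worst gap for insertion sort can be measured directly. Here is an instrumented variant (illustrative, not from the text) that counts element shifts — 0 for sorted input, n(n-1)/2 for reversed input:

```c
#include <assert.h>

// Insertion sort with a shift counter: returns how many element
// shifts occurred, exposing the O(n) best vs O(n²) worst gap.
long insertionSortShifts(int arr[], int n) {
    long shifts = 0;
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];  // shift larger element right
            j--;
            shifts++;
        }
        arr[j + 1] = key;
    }
    return shifts;
}
```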
Practice Questions

Task: Trace bubble sort on array [4, 2, 7, 1, 3]. Show array after each pass.

Solution:
// Initial: [4, 2, 7, 1, 3]

// Pass 1: Compare and swap adjacent pairs
// 4>2: swap → [2, 4, 7, 1, 3]
// 4<7: no swap
// 7>1: swap → [2, 4, 1, 7, 3]
// 7>3: swap → [2, 4, 1, 3, 7]
// After pass 1: [2, 4, 1, 3, 7] (7 in place)

// Pass 2:
// 2<4: no swap
// 4>1: swap → [2, 1, 4, 3, 7]
// 4>3: swap → [2, 1, 3, 4, 7]
// After pass 2: [2, 1, 3, 4, 7] (4, 7 in place)

// Pass 3:
// 2>1: swap → [1, 2, 3, 4, 7]
// 2<3: no swap
// After pass 3: [1, 2, 3, 4, 7] (sorted!)

// Pass 4: No swaps needed → algorithm terminates

Task: Modify insertion sort to sort in descending order.

Solution:
void insertionSortDesc(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        
        // Change > to < to reverse order
        while (j >= 0 && arr[j] < key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

// Test: [3, 1, 4, 1, 5]
// Result: [5, 4, 3, 1, 1]
03

Advanced Sorting

Efficient sorting algorithms achieve O(n log n) time complexity using divide-and-conquer strategies. Merge sort and quick sort are the workhorses of practical sorting.

Merge Sort - O(n log n)

// Merge Sort: Divide array, sort halves, merge sorted halves
// Guaranteed O(n log n) but requires O(n) extra space

void merge(int arr[], int left, int mid, int right) {
    int n1 = mid - left + 1;
    int n2 = right - mid;
    
    // Create temp arrays
    int *L = malloc(n1 * sizeof(int));
    int *R = malloc(n2 * sizeof(int));
    
    // Copy data to temp arrays
    for (int i = 0; i < n1; i++)
        L[i] = arr[left + i];
    for (int j = 0; j < n2; j++)
        R[j] = arr[mid + 1 + j];
    
    // Merge temp arrays back
    int i = 0, j = 0, k = left;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k++] = L[i++];
        } else {
            arr[k++] = R[j++];
        }
    }
    
    // Copy remaining elements
    while (i < n1) arr[k++] = L[i++];
    while (j < n2) arr[k++] = R[j++];
    
    free(L);
    free(R);
}

void mergeSort(int arr[], int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        
        // Sort first and second halves
        mergeSort(arr, left, mid);
        mergeSort(arr, mid + 1, right);
        
        // Merge sorted halves
        merge(arr, left, mid, right);
    }
}

// Usage
void mergeSortWrapper(int arr[], int n) {
    mergeSort(arr, 0, n - 1);
}

// Visualization:
//        [38, 27, 43, 3, 9, 82, 10]
//       /                         \
//   [38, 27, 43, 3]         [9, 82, 10]
//     /        \              /      \
// [38, 27]  [43, 3]       [9, 82]  [10]
//   / \      / \           / \       |
// [38][27] [43][3]      [9][82]   [10]
//   \ /      \ /           \ /       |
// [27, 38] [3, 43]       [9, 82]  [10]
//     \      /              \      /
//   [3, 27, 38, 43]      [9, 10, 82]
//         \                  /
//      [3, 9, 10, 27, 38, 43, 82]

// Time: O(n log n) always
// Space: O(n) for temp arrays
// Stable: Yes
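The merge step is the heart of the algorithm, so here it is in isolation (a sketch with a hypothetical name, mergeSorted): combining two already-sorted arrays in one linear pass, always taking the smaller head element.

```c
#include <assert.h>

// Merges two already-sorted arrays a and b into out in O(na + nb).
// Merge sort repeats exactly this step at every level of recursion.
void mergeSorted(const int a[], int na, const int b[], int nb, int out[]) {
    int i = 0, j = 0, k = 0;
    while (i < na && j < nb) {
        out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];  // <= keeps it stable
    }
    while (i < na) out[k++] = a[i++];  // copy leftovers from a
    while (j < nb) out[k++] = b[j++];  // copy leftovers from b
}
```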

Quick Sort - O(n log n) average

// Quick Sort: Pick pivot, partition around it, recursively sort
// Fastest in practice, but O(n²) worst case (rare with good pivot)

int partition(int arr[], int low, int high) {
    int pivot = arr[high];  // Choose last element as pivot
    int i = low - 1;        // Index of smaller element
    
    for (int j = low; j < high; j++) {
        // If current element is smaller than pivot
        if (arr[j] < pivot) {
            i++;
            // Swap arr[i] and arr[j]
            int temp = arr[i];
            arr[i] = arr[j];
            arr[j] = temp;
        }
    }
    
    // Place pivot in correct position
    int temp = arr[i + 1];
    arr[i + 1] = arr[high];
    arr[high] = temp;
    
    return i + 1;  // Return pivot's position
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        // Partition and get pivot position
        int pi = partition(arr, low, high);
        
        // Recursively sort elements before and after pivot
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

// Usage
void quickSortWrapper(int arr[], int n) {
    quickSort(arr, 0, n - 1);
}

// Visualization:
// 
// Partitioning [10, 7, 8, 9, 1, 5]:
// pivot = 5
// i starts at -1
// j=0: 10 > 5, no swap
// j=1: 7 > 5, no swap
// j=2: 8 > 5, no swap
// j=3: 9 > 5, no swap
// j=4: 1 < 5, i=0, swap arr[0] and arr[4] → [1, 7, 8, 9, 10, 5]
// Place pivot: swap arr[1] and arr[5] → [1, 5, 8, 9, 10, 7]
// pivot (5) is now at index 1
// Recursively sort [1] and [8, 9, 10, 7]

// Time: O(n log n) average, O(n²) worst (sorted/reverse sorted)
// Space: O(log n) for recursion stack
// Stable: No

Optimized Quick Sort

// Median-of-three pivot selection to avoid worst case
int medianOfThree(int arr[], int low, int high) {
    int mid = low + (high - low) / 2;
    
    // Sort low, mid, high
    if (arr[low] > arr[mid]) {
        int temp = arr[low]; arr[low] = arr[mid]; arr[mid] = temp;
    }
    if (arr[mid] > arr[high]) {
        int temp = arr[mid]; arr[mid] = arr[high]; arr[high] = temp;
    }
    if (arr[low] > arr[mid]) {
        int temp = arr[low]; arr[low] = arr[mid]; arr[mid] = temp;
    }
    
    // Place median at high-1 position
    int temp = arr[mid];
    arr[mid] = arr[high - 1];
    arr[high - 1] = temp;
    
    return arr[high - 1];  // Median now sits at high-1; partition arr[low..high-1] around it
}

// Hybrid approach: use insertion sort for small subarrays
void hybridQuickSort(int arr[], int low, int high) {
    if (high - low < 16) {
        // Use insertion sort for small arrays
        for (int i = low + 1; i <= high; i++) {
            int key = arr[i];
            int j = i - 1;
            while (j >= low && arr[j] > key) {
                arr[j + 1] = arr[j];
                j--;
            }
            arr[j + 1] = key;
        }
    } else {
        int pi = partition(arr, low, high);
        hybridQuickSort(arr, low, pi - 1);
        hybridQuickSort(arr, pi + 1, high);
    }
}
Practice Questions

Task: Trace the partition function on [3, 6, 8, 10, 1, 2, 1] with last element as pivot.

Solution:
// Array: [3, 6, 8, 10, 1, 2, 1]
// pivot = 1 (last element)
// i = -1

// j=0: 3 < 1? no
// j=1: 6 < 1? no
// j=2: 8 < 1? no
// j=3: 10 < 1? no
// j=4: 1 < 1? no (the comparison is strict, so the duplicate pivot stays put)
// j=5: 2 < 1? no

// No element is smaller than the pivot, so i stays at -1
// Place pivot: swap arr[i+1] and arr[high]
//      swap arr[0] and arr[6]
//      → [1, 6, 8, 10, 1, 2, 3]

// Final: [1, 6, 8, 10, 1, 2, 3]
// Pivot index returned: 0
// Worst-case split: nothing ends up left of the pivot, which is why
// a sorted (or all-small-pivot) input degrades quick sort to O(n²)
04

Searching Algorithms

Searching algorithms find a specific element in a collection. Linear search works on any data, while binary search requires sorted data but is much faster for large datasets.

Linear Search - O(n)

// Linear Search: Check each element sequentially
// Works on unsorted data

int linearSearch(int arr[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == target) {
            return i;  // Found at index i
        }
    }
    return -1;  // Not found
}

// Find all occurrences
int linearSearchAll(int arr[], int n, int target, int results[]) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        if (arr[i] == target) {
            results[count++] = i;
        }
    }
    return count;  // Number of occurrences
}

// Example:
// arr = [4, 2, 7, 1, 9, 3, 7]
// linearSearch(arr, 7, 7) returns 2 (first occurrence)
// linearSearchAll finds indices [2, 6]

// Time: O(n) worst/average, O(1) best (found at start)
// Space: O(1)

Binary Search - O(log n)

// Binary Search: Repeatedly divide search space in half
// REQUIRES SORTED ARRAY

int binarySearch(int arr[], int n, int target) {
    int left = 0;
    int right = n - 1;
    
    while (left <= right) {
        int mid = left + (right - left) / 2;  // Avoid overflow
        
        if (arr[mid] == target) {
            return mid;  // Found!
        } else if (arr[mid] < target) {
            left = mid + 1;  // Search right half
        } else {
            right = mid - 1;  // Search left half
        }
    }
    
    return -1;  // Not found
}

// Recursive version
int binarySearchRecursive(int arr[], int left, int right, int target) {
    if (left > right) return -1;  // Base case: not found
    
    int mid = left + (right - left) / 2;
    
    if (arr[mid] == target) return mid;
    
    if (arr[mid] < target) {
        return binarySearchRecursive(arr, mid + 1, right, target);
    } else {
        return binarySearchRecursive(arr, left, mid - 1, target);
    }
}

// Visualization: Find 23 in [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
//
// Step 1: left=0, right=9, mid=4
//         arr[4]=16 < 23, search right half
//         
// Step 2: left=5, right=9, mid=7
//         arr[7]=56 > 23, search left half
//         
// Step 3: left=5, right=6, mid=5
//         arr[5]=23 == 23, FOUND at index 5!
//
// Only 3 comparisons for 10 elements (vs up to 10 for linear search)

// Time: O(log n)
// Space: O(1) iterative, O(log n) recursive

Binary Search Variations

// Find first occurrence (lower bound)
int lowerBound(int arr[], int n, int target) {
    int left = 0, right = n - 1;
    int result = -1;
    
    while (left <= right) {
        int mid = left + (right - left) / 2;
        
        if (arr[mid] == target) {
            result = mid;
            right = mid - 1;  // Continue searching left
        } else if (arr[mid] < target) {
            left = mid + 1;
        } else {
            right = mid - 1;
        }
    }
    return result;
}

// Find last occurrence (upper bound)
int upperBound(int arr[], int n, int target) {
    int left = 0, right = n - 1;
    int result = -1;
    
    while (left <= right) {
        int mid = left + (right - left) / 2;
        
        if (arr[mid] == target) {
            result = mid;
            left = mid + 1;  // Continue searching right
        } else if (arr[mid] < target) {
            left = mid + 1;
        } else {
            right = mid - 1;
        }
    }
    return result;
}

// Count occurrences using bounds
int countOccurrences(int arr[], int n, int target) {
    int first = lowerBound(arr, n, target);
    if (first == -1) return 0;
    int last = upperBound(arr, n, target);
    return last - first + 1;
}

// Example: arr = [1, 2, 2, 2, 3, 4, 5]
// lowerBound(2) = 1, upperBound(2) = 3
// countOccurrences(2) = 3 - 1 + 1 = 3
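The worked example above can be verified directly. The two bound searches are repeated here compactly (renamed firstIndex/lastIndex so the snippet stands on its own):

```c
#include <assert.h>

// First occurrence: on a hit, keep searching left.
int firstIndex(const int arr[], int n, int target) {
    int left = 0, right = n - 1, result = -1;
    while (left <= right) {
        int mid = left + (right - left) / 2;
        if (arr[mid] == target) { result = mid; right = mid - 1; }
        else if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return result;
}

// Last occurrence: on a hit, keep searching right.
int lastIndex(const int arr[], int n, int target) {
    int left = 0, right = n - 1, result = -1;
    while (left <= right) {
        int mid = left + (right - left) / 2;
        if (arr[mid] == target) { result = mid; left = mid + 1; }
        else if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return result;
}

// Count occurrences from the two bounds.
int countOcc(const int arr[], int n, int target) {
    int first = firstIndex(arr, n, target);
    if (first == -1) return 0;
    return lastIndex(arr, n, target) - first + 1;
}
```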

Search Comparison

Aspect Linear Search Binary Search
Time Complexity O(n) O(log n)
Requires Sorted No Yes
Best For Small or unsorted data Large sorted data
1 million elements Up to 1,000,000 comparisons Up to 20 comparisons
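The "up to 20 comparisons" figure can be checked without allocating a million-element array by searching the implicit sorted array arr[i] = i and counting probes (an illustrative sketch):

```c
#include <assert.h>

// Binary search over the implicit sorted array arr[i] = i,
// counting probes. The probe count is at most floor(log2(n)) + 1.
int binarySearchProbes(int n, int target) {
    int left = 0, right = n - 1, probes = 0;
    while (left <= right) {
        int mid = left + (right - left) / 2;
        probes++;
        if (mid == target) return probes;       // arr[mid] == target
        else if (mid < target) left = mid + 1;  // search right half
        else right = mid - 1;                   // search left half
    }
    return probes;  // target not present
}
```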
Practice Questions

Task: Search for a target in a rotated sorted array. Example: [4,5,6,7,0,1,2] was originally [0,1,2,4,5,6,7].

Solution:
int searchRotated(int arr[], int n, int target) {
    int left = 0, right = n - 1;
    
    while (left <= right) {
        int mid = left + (right - left) / 2;
        
        if (arr[mid] == target) return mid;
        
        // Determine which half is sorted
        if (arr[left] <= arr[mid]) {
            // Left half is sorted
            if (target >= arr[left] && target < arr[mid]) {
                right = mid - 1;  // Target in left half
            } else {
                left = mid + 1;   // Target in right half
            }
        } else {
            // Right half is sorted
            if (target > arr[mid] && target <= arr[right]) {
                left = mid + 1;   // Target in right half
            } else {
                right = mid - 1;  // Target in left half
            }
        }
    }
    return -1;
}

// Example: [4,5,6,7,0,1,2], target=0
// left=0, right=6, mid=3: arr[3]=7, left half [4..7] sorted
//   0 not in [4, 7), search right → left=4
// left=4, right=6, mid=5: arr[5]=1, left half [0, 1] sorted (arr[4] <= arr[5])
//   0 in [0, 1), search left → right=4
// left=4, right=4, mid=4: arr[4]=0, FOUND!
05

Recursion in Algorithms

Recursion is a technique where a function calls itself to solve smaller subproblems. Many algorithms, especially divide-and-conquer ones like merge sort and quick sort, are naturally recursive.

Concept

Recursion

A recursive function is one that calls itself. Every recursive function needs: (1) a base case that stops recursion, and (2) a recursive case that breaks the problem into smaller subproblems.

Classic Recursive Examples

// Factorial: n! = n × (n-1) × ... × 1
int factorial(int n) {
    // Base case
    if (n <= 1) return 1;
    
    // Recursive case
    return n * factorial(n - 1);
}
// factorial(5) = 5 * 4 * 3 * 2 * 1 = 120

// Fibonacci: F(n) = F(n-1) + F(n-2)
int fibonacci(int n) {
    // Base cases
    if (n <= 0) return 0;
    if (n == 1) return 1;
    
    // Recursive case (inefficient - O(2^n))
    return fibonacci(n - 1) + fibonacci(n - 2);
}

// Efficient Fibonacci with memoization - O(n)
int fibMemo[100] = {0};
int fibonacciMemo(int n) {
    if (n <= 0) return 0;
    if (n == 1) return 1;
    
    if (fibMemo[n] != 0) return fibMemo[n];
    
    fibMemo[n] = fibonacciMemo(n - 1) + fibonacciMemo(n - 2);
    return fibMemo[n];
}

Recursion vs Iteration

// Sum of array - Recursive
int sumRecursive(int arr[], int n) {
    if (n == 0) return 0;
    return arr[n - 1] + sumRecursive(arr, n - 1);
}

// Sum of array - Iterative
int sumIterative(int arr[], int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += arr[i];
    }
    return total;
}

// Power function - Recursive O(log n)
long power(int base, int exp) {
    if (exp == 0) return 1;
    
    long half = power(base, exp / 2);
    
    if (exp % 2 == 0) {
        return half * half;
    } else {
        return base * half * half;
    }
}
// power(2, 10) = 1024 with only O(log n) multiplications instead of O(n)
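The savings can be counted explicitly. Below is the same squaring idea with a multiplication counter (illustrative names; the global counter exists only for the demonstration):

```c
#include <assert.h>

// Exponentiation by squaring, instrumented with a multiplication
// counter to make the O(log n) cost visible.
static int mults = 0;

long fastPower(int base, int exp) {
    if (exp == 0) return 1;
    long half = fastPower(base, exp / 2);
    if (exp % 2 == 0) {
        mults += 1;              // half * half
        return half * half;
    } else {
        mults += 2;              // base * half * half
        return base * half * half;
    }
}
```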

Tail Recursion

// Non-tail recursive factorial (must wait for recursive call)
int factorialNonTail(int n) {
    if (n <= 1) return 1;
    return n * factorialNonTail(n - 1);  // Operation after recursive call
}

// Tail recursive factorial (recursive call is last operation)
int factorialTailHelper(int n, int accumulator) {
    if (n <= 1) return accumulator;
    return factorialTailHelper(n - 1, n * accumulator);  // No operation after
}

int factorialTail(int n) {
    return factorialTailHelper(n, 1);
}

// Benefits: Tail recursion can be optimized by compilers
// to use constant stack space (tail call optimization)
Stack Overflow Warning: Deep recursion can cause stack overflow. For very large inputs, consider iterative solutions or increase stack size. Each recursive call adds a stack frame with local variables and return address.
Practice Questions

Task: Write a recursive function to find the sum of digits of a number.

Solution:
int sumDigits(int n) {
    // Handle negative numbers
    if (n < 0) n = -n;
    
    // Base case
    if (n < 10) return n;
    
    // Recursive case: last digit + sum of remaining digits
    return (n % 10) + sumDigits(n / 10);
}

// Trace: sumDigits(1234)
// = 4 + sumDigits(123)
// = 4 + 3 + sumDigits(12)
// = 4 + 3 + 2 + sumDigits(1)
// = 4 + 3 + 2 + 1
// = 10

Task: Solve Tower of Hanoi - move n disks from source to destination using auxiliary peg.

Solution:
void hanoi(int n, char from, char to, char aux) {
    if (n == 1) {
        printf("Move disk 1 from %c to %c\n", from, to);
        return;
    }
    
    // Move n-1 disks from source to auxiliary
    hanoi(n - 1, from, aux, to);
    
    // Move largest disk from source to destination
    printf("Move disk %d from %c to %c\n", n, from, to);
    
    // Move n-1 disks from auxiliary to destination
    hanoi(n - 1, aux, to, from);
}

// Usage: hanoi(3, 'A', 'C', 'B');
// Output:
// Move disk 1 from A to C
// Move disk 2 from A to B
// Move disk 1 from C to B
// Move disk 3 from A to C
// Move disk 1 from B to A
// Move disk 2 from B to C
// Move disk 1 from A to C
// Total moves for n disks: 2^n - 1
06

Graph Algorithms

Graphs model relationships between objects - from social networks to road maps. Graph algorithms solve problems like finding shortest paths, detecting cycles, and traversing all nodes efficiently.

Graph Representation

// Graph can be represented as:
// 1. Adjacency Matrix - O(V²) space, O(1) edge lookup
// 2. Adjacency List - O(V+E) space, efficient for sparse graphs

#define MAX_VERTICES 100

// Adjacency Matrix representation
typedef struct {
    int vertices;
    int matrix[MAX_VERTICES][MAX_VERTICES];
} GraphMatrix;

void initGraphMatrix(GraphMatrix *g, int v) {
    g->vertices = v;
    for (int i = 0; i < v; i++)
        for (int j = 0; j < v; j++)
            g->matrix[i][j] = 0;
}

void addEdgeMatrix(GraphMatrix *g, int src, int dest) {
    g->matrix[src][dest] = 1;
    g->matrix[dest][src] = 1;  // For undirected graph
}

// Adjacency List representation
typedef struct Node {
    int vertex;
    struct Node *next;
} Node;

typedef struct {
    int vertices;
    Node **adjList;
} GraphList;

GraphList *createGraph(int v) {
    GraphList *g = malloc(sizeof(GraphList));
    g->vertices = v;
    g->adjList = malloc(v * sizeof(Node*));
    for (int i = 0; i < v; i++)
        g->adjList[i] = NULL;
    return g;
}

void addEdgeList(GraphList *g, int src, int dest) {
    Node *newNode = malloc(sizeof(Node));
    newNode->vertex = dest;
    newNode->next = g->adjList[src];
    g->adjList[src] = newNode;
    
    // For undirected graph, add reverse edge
    newNode = malloc(sizeof(Node));
    newNode->vertex = src;
    newNode->next = g->adjList[dest];
    g->adjList[dest] = newNode;
}

Breadth-First Search (BFS) - O(V+E)

// BFS: Explore all neighbors before going deeper
// Uses queue - finds shortest path in unweighted graphs

void BFS(GraphList *g, int start) {
    int visited[MAX_VERTICES] = {0};
    int queue[MAX_VERTICES];
    int front = 0, rear = 0;
    
    // Start with source vertex
    visited[start] = 1;
    queue[rear++] = start;
    
    printf("BFS traversal: ");
    
    while (front < rear) {
        int current = queue[front++];
        printf("%d ", current);
        
        // Visit all adjacent vertices
        Node *temp = g->adjList[current];
        while (temp) {
            int adj = temp->vertex;
            if (!visited[adj]) {
                visited[adj] = 1;
                queue[rear++] = adj;
            }
            temp = temp->next;
        }
    }
    printf("\n");
}

// BFS for shortest path (unweighted graph)
int shortestPath(GraphList *g, int src, int dest) {
    int visited[MAX_VERTICES] = {0};
    int distance[MAX_VERTICES];
    int queue[MAX_VERTICES];
    int front = 0, rear = 0;
    
    for (int i = 0; i < g->vertices; i++)
        distance[i] = -1;
    
    visited[src] = 1;
    distance[src] = 0;
    queue[rear++] = src;
    
    while (front < rear) {
        int current = queue[front++];
        
        Node *temp = g->adjList[current];
        while (temp) {
            int adj = temp->vertex;
            if (!visited[adj]) {
                visited[adj] = 1;
                distance[adj] = distance[current] + 1;
                queue[rear++] = adj;
                
                if (adj == dest) return distance[adj];
            }
            temp = temp->next;
        }
    }
    return -1;  // No path found
}
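For a self-contained check of the level-by-level order, here is a compact BFS over an adjacency matrix (a sketch with a fixed size, not the list-based version above) that records the visit order:

```c
#include <assert.h>

#define NV 5  // illustrative fixed vertex count for this sketch

// BFS over an adjacency matrix: records visit order in order[]
// and returns the number of vertices reached from start.
int bfsOrder(int adj[NV][NV], int start, int order[NV]) {
    int visited[NV] = {0};
    int queue[NV];
    int front = 0, rear = 0, count = 0;
    visited[start] = 1;
    queue[rear++] = start;
    while (front < rear) {
        int v = queue[front++];
        order[count++] = v;
        for (int u = 0; u < NV; u++) {
            if (adj[v][u] && !visited[u]) {
                visited[u] = 1;
                queue[rear++] = u;
            }
        }
    }
    return count;
}
```

On the graph 0–1, 0–2, 1–3, 2–4, BFS from 0 visits level by level: 0, then 1 and 2, then 3 and 4.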

Depth-First Search (DFS) - O(V+E)

// DFS: Explore as deep as possible before backtracking
// Uses stack (or recursion) - good for cycle detection, topological sort

// Recursive DFS
void DFSUtil(GraphList *g, int v, int visited[]) {
    visited[v] = 1;
    printf("%d ", v);
    
    Node *temp = g->adjList[v];
    while (temp) {
        if (!visited[temp->vertex]) {
            DFSUtil(g, temp->vertex, visited);
        }
        temp = temp->next;
    }
}

void DFS(GraphList *g, int start) {
    int visited[MAX_VERTICES] = {0};
    printf("DFS traversal: ");
    DFSUtil(g, start, visited);
    printf("\n");
}

// Iterative DFS using stack
void DFSIterative(GraphList *g, int start) {
    int visited[MAX_VERTICES] = {0};
    int stack[MAX_VERTICES];
    int top = -1;
    
    stack[++top] = start;
    printf("DFS traversal: ");
    
    while (top >= 0) {
        int current = stack[top--];
        
        if (!visited[current]) {
            visited[current] = 1;
            printf("%d ", current);
            
            Node *temp = g->adjList[current];
            while (temp) {
                if (!visited[temp->vertex]) {
                    stack[++top] = temp->vertex;
                }
                temp = temp->next;
            }
        }
    }
    printf("\n");
}

// Cycle detection using DFS
int hasCycleUtil(GraphList *g, int v, int visited[], int parent) {
    visited[v] = 1;
    
    Node *temp = g->adjList[v];
    while (temp) {
        if (!visited[temp->vertex]) {
            if (hasCycleUtil(g, temp->vertex, visited, v))
                return 1;
        } else if (temp->vertex != parent) {
            return 1;  // Back edge found - cycle exists
        }
        temp = temp->next;
    }
    return 0;
}

int hasCycle(GraphList *g) {
    int visited[MAX_VERTICES] = {0};
    for (int i = 0; i < g->vertices; i++) {
        if (!visited[i]) {
            if (hasCycleUtil(g, i, visited, -1))
                return 1;
        }
    }
    return 0;
}

BFS vs DFS Comparison

Aspect BFS DFS
Data Structure Queue Stack / Recursion
Order Level by level Depth first
Shortest Path Yes (unweighted) No
Memory More (stores level) Less
Use Cases Shortest path, level order Cycle detection, topological sort
07

Dynamic Programming Basics

Dynamic Programming (DP) solves complex problems by breaking them into overlapping subproblems and storing results to avoid redundant computation. It transforms exponential algorithms into polynomial time solutions.

Concept

Dynamic Programming

Dynamic Programming is an optimization technique that solves problems with two key properties: Optimal Substructure (optimal solution built from optimal subsolutions) and Overlapping Subproblems (same subproblems solved multiple times).

Fibonacci: Recursion vs DP

// Naive Recursion - O(2^n) Time, O(n) Space
int fibNaive(int n) {
    if (n <= 1) return n;
    return fibNaive(n-1) + fibNaive(n-2);  // Same subproblems computed many times!
}
// fibNaive(5) computes fibNaive(2) THREE times!
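The blow-up is easy to measure: an instrumented variant (illustrative; the global counter exists only for the demonstration) counts every call, and the total is 2·F(n+1) − 1.

```c
#include <assert.h>

// Naive Fibonacci with a call counter, to make the exponential
// number of redundant calls visible.
static long calls = 0;

int fibCount(int n) {
    calls++;
    if (n <= 1) return n;
    return fibCount(n - 1) + fibCount(n - 2);
}
```

fibCount(20) triggers 21,891 calls to compute a single value; a memoized version needs only a number of calls linear in n.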

// Memoization (Top-Down DP) - O(n) Time, O(n) Space
// Note: "= {-1}" would only set memo[0] (the rest default to 0),
// so fill the whole array with memset(memo, -1, sizeof memo) first
int memo[100];
int fibMemo(int n) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];  // Already computed
    
    memo[n] = fibMemo(n-1) + fibMemo(n-2);
    return memo[n];
}

// Tabulation (Bottom-Up DP) - O(n) Time, O(n) Space
int fibTab(int n) {
    if (n <= 1) return n;
    
    int dp[n + 1];
    dp[0] = 0;
    dp[1] = 1;
    
    for (int i = 2; i <= n; i++) {
        dp[i] = dp[i-1] + dp[i-2];
    }
    return dp[n];
}

// Space Optimized - O(n) Time, O(1) Space
int fibOptimized(int n) {
    if (n <= 1) return n;
    
    int prev2 = 0, prev1 = 1, curr;
    for (int i = 2; i <= n; i++) {
        curr = prev1 + prev2;
        prev2 = prev1;
        prev1 = curr;
    }
    return prev1;
}

Classic DP Problem: Longest Common Subsequence

// LCS: Find longest subsequence common to two strings
// "ABCDGH" and "AEDFHR" → LCS = "ADH" (length 3)

// Recursive solution with memoization
int dp[100][100];  // fill with -1 before the first call: memset(dp, -1, sizeof dp)

int lcsRecursive(char *s1, char *s2, int m, int n) {
    if (m == 0 || n == 0) return 0;
    
    if (dp[m][n] != -1) return dp[m][n];
    
    if (s1[m-1] == s2[n-1]) {
        dp[m][n] = 1 + lcsRecursive(s1, s2, m-1, n-1);
    } else {
        int skip1 = lcsRecursive(s1, s2, m-1, n);
        int skip2 = lcsRecursive(s1, s2, m, n-1);
        dp[m][n] = (skip1 > skip2) ? skip1 : skip2;
    }
    return dp[m][n];
}

// Bottom-up tabulation
int lcsTabulation(char *s1, char *s2) {
    int m = strlen(s1);
    int n = strlen(s2);
    int dp[m + 1][n + 1];
    
    // Base case: empty string
    for (int i = 0; i <= m; i++) dp[i][0] = 0;
    for (int j = 0; j <= n; j++) dp[0][j] = 0;
    
    // Fill table
    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            if (s1[i-1] == s2[j-1]) {
                dp[i][j] = 1 + dp[i-1][j-1];
            } else {
                dp[i][j] = (dp[i-1][j] > dp[i][j-1]) ? 
                           dp[i-1][j] : dp[i][j-1];
            }
        }
    }
    return dp[m][n];
}

// Time: O(m*n), Space: O(m*n)
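The "ABCDGH"/"AEDFHR" example can be verified with a self-contained version of the tabulation (fixed-size table; assumes inputs of at most 100 characters):

```c
#include <assert.h>
#include <string.h>

// Bottom-up LCS length. dp[i][j] = LCS length of s1[0..i) and s2[0..j).
int lcsLen(const char *s1, const char *s2) {
    int m = (int)strlen(s1);
    int n = (int)strlen(s2);
    static int dp[101][101];  // assumes strings up to 100 chars
    for (int i = 0; i <= m; i++) {
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                dp[i][j] = 0;                       // empty-string base case
            else if (s1[i-1] == s2[j-1])
                dp[i][j] = 1 + dp[i-1][j-1];        // characters match
            else
                dp[i][j] = dp[i-1][j] > dp[i][j-1] ? dp[i-1][j] : dp[i][j-1];
        }
    }
    return dp[m][n];
}
```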

Classic DP Problem: 0/1 Knapsack

// Knapsack: Maximize value with weight constraint
// Given items with weights and values, fill knapsack maximally

int knapsack(int W, int weights[], int values[], int n) {
    int dp[n + 1][W + 1];
    
    for (int i = 0; i <= n; i++) {
        for (int w = 0; w <= W; w++) {
            if (i == 0 || w == 0) {
                dp[i][w] = 0;
            } else if (weights[i-1] <= w) {
                // Max of: include item or exclude item
                int include = values[i-1] + dp[i-1][w - weights[i-1]];
                int exclude = dp[i-1][w];
                dp[i][w] = (include > exclude) ? include : exclude;
            } else {
                dp[i][w] = dp[i-1][w];  // Can't include, too heavy
            }
        }
    }
    return dp[n][W];
}

// Example:
// weights = [1, 2, 3], values = [10, 15, 40], W = 5
// Items 1 and 3: weight = 4, value = 50
// Items 2 and 3: weight = 5, value = 55 ← optimal

DP Problem-Solving Steps

Top-Down (Memoization)
  1. Write recursive solution
  2. Identify overlapping subproblems
  3. Add memoization array/map
  4. Check memo before computing
Bottom-Up (Tabulation)
  1. Define state (what dp[i] means)
  2. Define recurrence relation
  3. Set base cases
  4. Fill table iteratively
When to use DP: Look for optimization problems (min/max) with choices at each step, counting problems, or problems that can be broken into similar smaller problems. Keywords: "minimum", "maximum", "count ways", "is it possible".
06

Algorithm Comparison

Choosing the right algorithm depends on data characteristics, size, and constraints. Here is a comprehensive comparison to help you make informed decisions.

Sorting Algorithms Comparison

Algorithm       Best        Average     Worst       Space     Stable
Bubble Sort     O(n)        O(n²)       O(n²)       O(1)      Yes
Selection Sort  O(n²)       O(n²)       O(n²)       O(1)      No
Insertion Sort  O(n)        O(n²)       O(n²)       O(1)      Yes
Merge Sort      O(n log n)  O(n log n)  O(n log n)  O(n)      Yes
Quick Sort      O(n log n)  O(n log n)  O(n²)       O(log n)  No
Heap Sort       O(n log n)  O(n log n)  O(n log n)  O(1)      No

When to Use Which Algorithm

Use Insertion Sort When
  • Array is small (n < 20)
  • Data is nearly sorted
  • Memory is limited
  • Online sorting (data arrives piece by piece)
Use Merge Sort When
  • Guaranteed O(n log n) needed
  • Stability required
  • Sorting linked lists
  • External sorting (data on disk)
Use Quick Sort When
  • Average case performance matters most
  • Memory is a concern (O(log n) space)
  • Cache performance important
  • General-purpose sorting
Use Binary Search When
  • Data is sorted
  • Random access available (arrays)
  • Searching will be done many times
  • Large dataset

Real-World Algorithm Usage

// C Standard Library: qsort() - typically uses optimized quicksort
#include <stdlib.h>

int compare(const void *a, const void *b) {
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);  // avoids the overflow that x - y can cause
}

void stdlibSort() {
    int arr[] = {64, 25, 12, 22, 11};
    int n = sizeof(arr) / sizeof(arr[0]);
    
    qsort(arr, n, sizeof(int), compare);
    // arr is now sorted: [11, 12, 22, 25, 64]
}

// C Standard Library: bsearch() - binary search
void stdlibSearch() {
    int arr[] = {11, 12, 22, 25, 64};  // Must be sorted!
    int n = sizeof(arr) / sizeof(arr[0]);
    int target = 22;
    
    int *result = bsearch(&target, arr, n, sizeof(int), compare);
    
    if (result) {
        printf("Found %d at index %td\n", *result, result - arr);
    } else {
        printf("Not found\n");
    }
}

Key Takeaways

Big O Matters

Algorithm choice impacts performance dramatically at scale

O(n log n) Sorting

Merge/Quick sort are optimal comparison-based sorts

Binary Search Power

O(log n) makes searching millions of items trivial

Recursion Pattern

Base case + recursive case solves complex problems

Trade-offs Exist

Time vs space, simplicity vs efficiency

Context Matters

Best algorithm depends on data size and characteristics

Knowledge Check

Quick Quiz

Test what you have learned about algorithms in C

1 What is the time complexity of binary search?
2 Which sorting algorithm has the best worst-case time complexity?
3 What does binary search require?
4 What are the two essential parts of a recursive function?
5 Which algorithm is typically fastest for sorting in practice?
6 How many comparisons does binary search need for 1,000,000 elements (worst case)?
Answer all questions to check your score