
Chapter 15: Recursion


Recursion

Core Idea of Recursion: At its simplest, recursion means a function calls itself to solve a smaller version of the same problem.

Prompt link: my private discussion for this chapter, using AI Studio with this chapter's main system prompt and the PDF of the chapter.

Two Key Ingredients:

  • Base Cases: These are the simplest instances of the problem that the function can solve directly, without further recursion. Intuition: “When do I know the answer without asking for more help?”
  • Progress (Recursive Call with different arguments): The function must call itself with arguments that move it closer to a base case. If it doesn’t make progress, it’ll loop forever.

Recursion: The “Ask a Mini-Me” Approach:

Imagine you’re a kid and your parent asks you to clean your entire room. It’s a disaster. Toys everywhere. You (the function): “Ugh, this is too much!”

Your thought process:

  • Base Case (Simplest Job): “Is there just ONE toy left on the floor?” If yes, you pick it up. Done! Easy.
  • Recursive Step (Making it Simpler): “Okay, there’s more than one toy. What if I pick up just one toy and put it away? Now the room is slightly cleaner. The remaining mess is a smaller version of the original problem.”
  • The “Call a Mini-Me”: “Hey, Mini-Me (you call yourself again), can you clean this slightly cleaner room?” Trusting Mini-Me: You assume Mini-Me will somehow get the job done. What you do after Mini-Me is done: Nothing! You already did your one piece of work (picking up one toy).

Why is recursion useful?

  • When the input itself is recursive (like file system directories, or a programming language’s grammar).
  • For problems like searching, enumeration (listing all possibilities), and divide-and-conquer.

Divide-and-Conquer vs. Recursion:

  • Divide-and-Conquer: A strategy. It breaks a problem into independent smaller subproblems of the same type, solves them, and combines their solutions. Merge Sort is a classic.
  • Recursion: A technique. It’s how you often implement divide-and-conquer. But recursion is more general. You might have only one subproblem (like binary search or factorial), or the subproblems might not be independent (hello, Dynamic Programming!).

Example: Counting Down

Let’s say you want to write a function that prints numbers from N down to 1. countdown(N):

Job: Print N, then N-1, then N-2, …, down to 1.


def countdown(n):
    if n == 0:  # Base case: we've already printed everything down to 1
        print("Blast off!")
        return      # This is important! Stop the recursion.
    
    print(n)          # Do a small piece of work
    countdown(n - 1)  # Ask a "mini-me" to do the rest (a smaller version)

countdown(3)
# Output:
# 3
# 2
# 1
# Blast off!

1. Recursion boot camp

  1. Euclidean Algorithm for GCD (Greatest Common Divisor)

    • Problem: Find the largest number that divides both x and y without a remainder.
    • Core Idea (from the book): GCD(x, y) is the same as GCD(y, x % y). (The original Euclidean algorithm repeatedly computes GCD(x - y, y) for x > y; repeated subtraction is exactly the modulo operation, and writing the call as GCD(y, x % y) keeps the arguments in a consistent order.)

Let’s think recursively: def gcd(x, y):


def gcd(x, y):
    # Base case: if y is 0, then x is the GCD.
    # Think: GCD(12, 0) -> what's the largest number that divides 12 and 0? It's 12.
    return x if y == 0 else gcd(y, x % y)
    
  • The “Work”: The % (modulo) operation is the work that makes the problem smaller.
  • “Smaller”: The numbers y and x % y are generally smaller than x and y. Specifically, x % y is guaranteed to be less than y.
  • Base Case: y == 0. When the second number is zero, the first number is the GCD. GCD(12, 0) = 12.

2. Mutilated Chessboard (Page 218)

This is a fantastic example of divide-and-conquer.

🧩 Problem

Cover an 8x8 board with one square missing (mutilated) using L-shaped triominoes (3 squares each).

  • Total squares: 63
  • Required triominoes: 21

πŸ’‘ EPI’s Reasoning

Don’t think n β†’ n+1. That’s a dead end.
Instead, think:
If I can solve for an n x n Mboard, can I solve for a 2n x 2n Mboard?

✨ The “Aha!” Insight (Figure 15.1(b))

  1. Imagine a 2n x 2n board.
  2. Divide it into four n x n quadrants.
  3. One quadrant contains the original missing square.
    • This is already an n x n Mboard (mutilated board) β€” by hypothesis, we can tile it!
  4. The other three quadrants are not Mboards yet.

🧠 Clever Step

  • Place one triomino in the center of the 2n x 2n board.
  • Align it such that it covers one corner square from each of the three “full” n x n quadrants.
  • This action creates a new missing square in each of the three quadrants, turning them into Mboards!

By hypothesis, we can now tile all four n x n Mboards β€” problem solved!
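The induction above translates directly into code. Below is a minimal sketch (the function name tile_board and the id-marking scheme are my own, not from the book): each tromino is marked with a unique positive id, and the missing square with -1.

```python
def tile_board(n, miss_r, miss_c):
    """Tile a 2^n x 2^n board with one square missing using L-trominoes."""
    size = 2 ** n
    board = [[0] * size for _ in range(size)]
    board[miss_r][miss_c] = -1
    counter = [0]  # mutable tromino-id counter shared by recursive calls

    def solve(top, left, size, mr, mc):
        # (mr, mc) is this sub-board's "missing" square; 1x1 is the base case
        if size == 1:
            return
        counter[0] += 1
        t = counter[0]
        half = size // 2
        for dr in (0, 1):
            for dc in (0, 1):
                if (top + dr * half <= mr < top + (dr + 1) * half
                        and left + dc * half <= mc < left + (dc + 1) * half):
                    qr, qc = mr, mc  # this quadrant already has the hole
                else:
                    # cover this quadrant's center-facing corner with the
                    # central tromino t, creating a new hole for it
                    qr = top + half - 1 + dr
                    qc = left + half - 1 + dc
                    board[qr][qc] = t
                solve(top + dr * half, left + dc * half, half, qr, qc)

    solve(0, 0, size, miss_r, miss_c)
    return board
```

The central tromino is the three marks made in the else branch of one call; the recursion then treats each quadrant as its own mutilated board.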


3. πŸ“˜ Table 15.1: Top Tips for Recursion (Page 219)

1. πŸ” Recursive Rules Input

If the problem description sounds recursive, then recursion is a natural fit.

Example: “A directory contains files and other directories.”

2. 🧭 Search, Enumerate, Divide-and-Conquer

These problem types are prime candidates for recursion:

  • πŸ” Search:
    “Is the item in the left half? Or the right half?”
    β†’ Example: Binary Search

  • πŸ“‹ Enumerate:
    “What if I pick this? What are the options then? What if I don’t pick this?”
    β†’ Example: Generating all subsets

  • 🧩 Divide-and-Conquer:
    Break the problem into smaller subproblems of the same type.

3. πŸ”„ Alternative to Nested Loops (of Undefined Depth)

When you don’t know how many nested loops you’ll need (e.g., problems with variable segments like IP address parsing),
recursion gracefully handles varying depth.

4. 🧱 Removing Recursion (Mimic the Call Stack)

If you’re asked to make a recursive solution iterative:

Use your own stack data structure to keep track of β€œwhat to do next.”
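As a tiny illustration of this tip, the countdown function from earlier can be made iterative with an explicit stack (a sketch; it collects output in a list instead of printing):

```python
def countdown_iterative(n):
    out = []
    stack = [n]  # each entry records a pending subproblem
    while stack:
        k = stack.pop()
        if k == 0:
            out.append("Blast off!")  # base case
        else:
            out.append(str(k))        # do the work for k...
            stack.append(k - 1)       # ...then schedule the smaller subproblem
    return out
```

The stack here plays exactly the role the call stack plays in the recursive version.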

5. 🎯 Tail Recursion

If the recursive call is the last operation in the function, it can often be converted into a loop.

Note: Python does not optimize tail recursion, but understanding it is still valuable.

6. πŸ’Ύ Caching Repeated Calls (Memoization)

If your recursive function recomputes the same result multiple times:

  • Store the result (e.g., fib(3) is needed by both fib(5) and fib(4))
  • This is the gateway to Dynamic Programming!
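In Python, functools.lru_cache gives memoization for free, so the classic exponential Fibonacci recursion becomes linear:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Without the cache this recomputes fib(k) exponentially many times;
    # with it, each fib(k) is computed exactly once.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```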

🧠 Problem 15.1: The Towers of Hanoi (Page 219)

This is a classic recursion problem.
If you understand this one, you’ve got a solid grasp of basic recursive thinking.

πŸ—οΈ The Setup

  • Three pegs:

    • P1 β†’ Source
    • P2 β†’ Destination
    • P3 β†’ Auxiliary / Helper
  • n disks of decreasing sizes stacked on P1:

    • Largest at the bottom, smallest at the top
  • Goal:
    Move all n disks from P1 to P2.

πŸ“ Rules

  1. Only one disk can be moved at a time.
  2. A disk can only be moved if it is the top disk on a peg.
  3. A larger disk can never be placed on top of a smaller disk.

🧾 Task

Write a program that prints the sequence of operations
required to move all n disks from P1 to P2, obeying the rules.

The recursive solution has a pattern:

If n > 0:

  1. Hanoi(n-1, src, aux, dest)
  2. Move disk n from src to dest
  3. Hanoi(n-1, aux, dest, src)

Define the recursive function:

compute_tower_hanoi_steps(num_rings_to_move, from_peg, to_peg, use_peg)

Base Case:

  • If num_rings_to_move == 0: do nothing.
  • If num_rings_to_move == 1: move the single disk from from_peg to to_peg.

Note: The EPI book uses an implicit base case with if num_rings_to_move > 0:. This works because:

  • When num_rings_to_move == 1, the first recursive call is with 0 disks (no action),
  • then it moves the disk,
  • then a second call with 0 disks (again, no action). This pattern is correct and avoids explicitly writing a base case. However, using num_rings_to_move == 1 explicitly can improve clarity when learning recursion.

Recursive Step (if num_rings_to_move > 1):

  1. Move (n - 1) disks from from_peg to use_peg using to_peg as a temporary: compute_tower_hanoi_steps(num_rings_to_move - 1, from_peg, use_peg, to_peg)
  2. Move the nth disk (the largest in this subproblem) from from_peg to to_peg: print("Move disk", num_rings_to_move, "from", from_peg, "to", to_peg) (In EPI: pegs[to_peg].append(pegs[from_peg].pop()); result.append(…))
  3. Move (n - 1) disks from use_peg to to_peg using from_peg as a temporary: compute_tower_hanoi_steps(num_rings_to_move - 1, use_peg, to_peg, from_peg)
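Putting the pattern together, here is a minimal sketch that records moves as (from_peg, to_peg) pairs rather than mutating peg lists as EPI's version does (the wrapper compute_tower_hanoi is my own addition):

```python
def compute_tower_hanoi_steps(num_rings_to_move, from_peg, to_peg, use_peg, result):
    if num_rings_to_move > 0:  # implicit base case: 0 rings means do nothing
        # 1. Move n-1 rings out of the way, onto the helper peg.
        compute_tower_hanoi_steps(num_rings_to_move - 1, from_peg, use_peg, to_peg, result)
        # 2. Move the largest ring directly.
        result.append((from_peg, to_peg))
        # 3. Move the n-1 rings from the helper peg onto the destination.
        compute_tower_hanoi_steps(num_rings_to_move - 1, use_peg, to_peg, from_peg, result)

def compute_tower_hanoi(num_rings):
    result = []
    compute_tower_hanoi_steps(num_rings, 1, 2, 3, result)  # P1 -> P2 via P3
    return result
```

For n rings this produces exactly 2^n - 1 moves, which is optimal.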

Backtracking and Recursion Prerequisites

What is Backtracking? It’s a Problem-Solving Technique (not a specific formula).

Think of backtracking as a systematic way to explore all possible solutions to a problem, especially when the problem involves a sequence of choices.

Imagine you’re trying to find your way out of a maze:

  1. You reach a junction (a decision point). You have multiple paths to choose from.
  2. You CHOOSE one path.
  3. You EXPLORE that path.
    • If this path leads to the exit (a solution!), great! You might record it.
    • If this path leads to a dead end (not a solution, or violates a rule), or if you’ve explored all it has to offer…
  4. You BACKTRACK: You return to the junction where you made your last choice.
  5. You UNDO your previous choice (mentally or actually “un-take” that path).
  6. You CHOOSE a different, unexplored path from that same junction and explore it.
  7. If all paths from a junction have been explored (and possibly led to dead ends), you backtrack even further to the previous junction.

Core Components of Backtracking:

  1. Problem Representation: The problem can be broken down into a sequence of decisions. The set of all possible decision sequences forms a “search space” or “state-space tree.”

    • N-Queens: Decision at each row = which column to place the queen.
    • Permutations: Decision at each position = which available item to place.
    • Power Set: Decision for each item = include it or exclude it.
  2. Recursive Structure: Backtracking is almost always implemented using recursion because it naturally handles the “explore deeper” and “return to previous state” aspects.

    • A function call represents making a set of choices up to a certain point and then trying to make the next choice.
  3. The “Try, Recurse, Undo” Pattern (The Heart of Backtracking): Inside a recursive backtracking function, for the current decision point:

    • Iterate through all possible CHOICES for the current step.
      • For each choice: a. Validate the choice: Can I make this choice given the previous choices? (e.g., Is it safe to place a queen here? Is this letter already used?). b. If choice is valid: i. MAKE the choice: Update the current state (e.g., place the queen, add the letter to the word). ii. EXPLORE (Recurse): Call the function recursively to make the next set of choices, building upon the current choice. iii. UNDO the choice (Backtrack): Revert the state to what it was before making this choice. This is CRUCIAL. It allows the loop to correctly try the next possible choice for the current decision point without being affected by the exploration of the previous choice.
  4. Base Cases:

    • Solution Found: The sequence of choices has led to a valid, complete solution. (e.g., N queens placed, word of K length formed). Record it.
    • Dead End / Invalid Path: The current path of choices cannot possibly lead to a solution, or it violates a fundamental constraint. (e.g., no safe spot for the current queen). The function returns, triggering backtracking in the caller.

Why is it called “Backtracking”? Because when a path doesn’t lead to a solution (or all solutions down that path have been found), the algorithm “backtracks” up the decision tree (i.e., recursive calls return) to a previous decision point to try a different option.

Is it a formula? No, it’s more like a template or a strategy for designing algorithms. The specific implementation details (how state is stored, how choices are made, how validity is checked, how “undo” happens) vary from problem to problem.

When to Think “Backtracking”:

  • You need to generate all possible configurations/solutions (permutations, combinations, subsets, placements).
  • The problem involves a sequence of decisions.
  • You can check the validity of a partial solution or a next choice.
  • You need to explore many possibilities, but some paths can be identified as “dead ends” early.

Backtracking is the general search technique. The “undo” step is what makes it backtracking rather than just a one-way greedy search. Without the “undo,” you’d explore one path and get stuck with its consequences. The “undo” allows you to say, “Okay, that path (or sub-path) is done; let’s rewind and try something else from my last major decision point.”

Goal

Find all 2-letter “words” using letters from “ABC”, where each letter is used at most once.
Expected Output: “AB”, “AC”, “BA”, “BC”, “CA”, “CB”

Recursive Function Let our recursive function be:
find_words(current_word, used_letters)

Base Case If len(current_word) == 2:

  • We’ve built a 2-letter word.
  • Add it to the list of solutions.
  • Return.

Recursive Step Iterate through each letter in "ABC":

  • Check if the letter is already in used_letters.
  • If the letter is not used:
    a. Place / Choose
    • Add the letter to current_word
    • Add the letter to used_letters
      b. Recurse
    • Call find_words(current_word, used_letters)
      c. Backtrack / Undo
    • Remove the letter from current_word
    • Remove the letter from used_letters

Backtracking ensures the state is clean when trying the next available letter.
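The recursive step above, as a runnable sketch (the signature is generalized slightly: it takes the letter pool and target length as parameters, and the results come out in the order listed above):

```python
def find_words(letters, k):
    """Return all k-letter words using each letter at most once."""
    results = []
    current, used = [], set()

    def backtrack():
        if len(current) == k:           # base case: word complete
            results.append("".join(current))
            return
        for letter in letters:
            if letter not in used:      # validate
                current.append(letter)  # place / choose
                used.add(letter)
                backtrack()             # recurse
                used.remove(letter)     # backtrack / undo
                current.pop()

    backtrack()
    return results
```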

Visualization of This Simpler Problem

Imagine you have 3 slots to pick from: A, B, C. You are building a 2-letter word.

1st letter choice:
Pick ‘A’
β†’ Now need 1 more letter.
β†’ Used: {‘A’}
β†’ Word so far: “A”

2nd letter choice (can’t be ‘A’):
Pick ‘B’
β†’ Word is “AB” β€” Done! (Base case)
Backtrack: Forget ‘B’
β†’ Word so far: “A”
β†’ Used: {‘A’}

Pick ‘C’
β†’ Word is “AC” β€” Done! (Base case)
Backtrack: Forget ‘C’
β†’ Word so far: “A”
β†’ Used: {‘A’}

Backtrack: Forget ‘A’
β†’ Word so far: ""
β†’ Used: {}

Pick ‘B’
β†’ Now need 1 more letter.
β†’ Used: {‘B’}
β†’ Word so far: “B”

2nd letter choice (can’t be ‘B’):
Pick ‘A’
β†’ Word is “BA” β€” Done!
Backtrack: Forget ‘A’
β†’ Word so far: “B”
β†’ Used: {‘B’}

Pick ‘C’
β†’ Word is “BC” β€” Done!
Backtrack: Forget ‘C’
β†’ Word so far: “B”
β†’ Used: {‘B’}

Backtrack: Forget ‘B’
β†’ Word so far: ""
β†’ Used: {}

Pick ‘C’
β†’ Now need 1 more letter.
β†’ Used: {‘C’}
β†’ Word so far: “C”

2nd letter choice (can’t be ‘C’):
Pick ‘A’
β†’ Word is “CA” β€” Done!
Backtrack: Forget ‘A’
β†’ Word so far: “C”
β†’ Used: {‘C’}

Pick ‘B’
β†’ Word is “CB” β€” Done!
Backtrack: Forget ‘B’
β†’ Word so far: “C”
β†’ Used: {‘C’}

Backtrack: Forget ‘C’
β†’ Word so far: ""
β†’ Used: {}

Here the undo step (backtrack) is EXPLICIT: we “forget” the letter (A/B/C) after exploring it.

Let’s look at the alphabet example again with the “Backtracking Template” in mind: find_words(current_word_list, used_letters_set) (Decision: What letter to pick next?)

  • Base Case (Solution Found): len(current_word_list) == K.
  • Iterate through choices: for letter in all_available_letters:
    • Validate: if letter not in used_letters_set:
      • Make Choice: current_word_list.append(letter), used_letters_set.add(letter)
      • Explore (Recurse): find_words(next_state...)
      • Undo Choice: used_letters_set.remove(letter), current_word_list.pop()

Key Characteristics/Benefits of Backtracking:

  • Finds All Solutions: If you let it run its course without early termination, it will systematically find all solutions.
  • Handles Constraints: The “validate choice” step allows you to prune the search space by not exploring paths that violate problem constraints.
  • Conceptually Simple (once grasped): The “try all options, and for each, recursively try all further options” is a natural way to think about exhaustive search.
  • Often a starting point: For many hard problems (NP-complete problems), backtracking is a fundamental approach. Optimizations (like memoization in dynamic programming, or better pruning heuristics) can then be built on top of it.

When is the O(NΒ²) Simple Iterative Solution “Good Enough”?

Preferred When:

  • The depth of choices is small and fixed (e.g., K = 2).
  • The state between choices is minimal.
  • Raw performance is critical and recursion’s function call overhead, though usually minor, matters.

Mental Model:

  • Iterative Nested Loops:

    • Best for fixed, shallow decision trees.
    • Like building with a fixed number of Lego blocks in a specific sequence.
  • Recursive Backtracking:

    • Best for variable-depth or complex decision trees.
    • Like exploring a maze with unknown turns and constraints.
    • BACKTRACKING HELPS FIND ALL POSSIBLE SOLUTIONS

β™› 15.2 GENERATE ALL NONATTACKING PLACEMENTS OF N-QUEENS (Page 221)

Goal: Place N queens on an NΓ—N chessboard such that no two queens attack each other.

🧩 The Problem:

  • Given an N Γ— N chessboard.
  • Place N queens on the board such that no two queens attack each other.

βš”οΈ Attack Rules:

  • Queens can attack horizontally, vertically, and diagonally.

🎯 Goal:

  • Return all distinct configurations (valid placements) of these N queens.

🧾 Task

Exact problem: place exactly one queen per row, and find ALL possible configurations.

Recursive Backtracking: the “choose, explore, undo” pattern

TLDR;

  • How it happens in N-Queens (EPI style):
    1. You are trying to place a queen in Row X.
    2. You try putting it in Column A. board_config[X] = A.
    3. You recursively try to solve for Row X+1 and beyond.
    4. When that recursion finishes and returns, you are still in the loop for Row X.
    5. Next, you try putting the queen for Row X in Column B.
    6. The “Undo”: board_config[X] = B overwrites board_config[X] = A. The choice of A for Row X is now gone for future explorations from Row X.

Recursive Function Idea: solve_queens(row_to_place_in, current_board_config)

  1. “Building the solution piece by piece” (like building the word letter by letter):

    • In N-Queens: We are trying to decide the column for the queen in the row_to_place_in.
    • This is like deciding the next letter for our word.
  2. “What choices do I have for the current piece?”

    • Alphabet problem: Choose any unused letter from ‘A’, ‘B’, ‘C’.
    • N-Queens: For the current row_to_place_in, try placing the queen in column 0, then column 1, then column 2, …, up to column N-1.
  3. solve_queens(row, placement) (where placement[r] stores the column of the queen in row r)

    • Base Case (Solution Found):

      • Alphabet: if len(current_word) == K_target_length: Solution found.
      • N-Queens: if row == N: We have successfully placed queens in row 0 to N-1. All N queens are on the board. This is a valid complete solution.
        • Action: Add a copy of placement to our list of results. Return.
    • Recursive Step (Trying to place a queen in the current row):

      • Loop through all possible choices for this step:
        • Alphabet: for each_letter in available_letters:

        • N-Queens: for col_choice from 0 to N-1: (This is trying each column for the queen in the current row)

        • Check if the choice is valid/allowed:

          • Alphabet: if each_letter not in used_letters:
          • N-Queens: if is_safe_to_place_queen_at(row, col_choice, placement_so_far):
            • The is_safe_to_place_queen_at function checks:
              1. Is col_choice already taken by a queen in a previous row (placement[prev_row] == col_choice)?
              2. Is (row, col_choice) on a diagonal with any queen in a previous row
                (abs(placement[prev_row] - col_choice) == abs(prev_row - row))?
        • If the choice IS VALID/SAFE: a. “Place” / Make the Choice:

          • Alphabet: current_word.append(each_letter), used_letters.add(each_letter)
          • N-Queens: placement[row] = col_choice (We’ve decided the queen for the current row goes in col_choice)

          b. “Explore” / Recurse (Try to complete the rest of the solution based on this choice):

          • Alphabet: find_words(current_word, used_letters) (which internally will try to fill the next letter position)
          • N-Queens: solve_queens(row + 1, placement) (Try to place queens for all subsequent rows, starting with the very next row)

          c. “Backtrack” / Undo the Choice (CRUCIAL for exploring other options for the current step):

          • Alphabet: current_word.pop(), used_letters.remove(each_letter)
          • N-Queens: This is where the EPI code is a bit more implicit.
          • When the solve_queens(row + 1, placement) call returns (meaning it has fully explored all possibilities from placing a queen at (row, col_choice), or it hit a dead end), the execution comes back to the for col_choice ... loop.
          • If the loop continues to the next col_choice for the same current row, the line placement[row] = new_col_choice will simply overwrite the previous col_choice. This acts as the “undo” for the specific assignment to placement[row].
          • If the for col_choice ... loop finishes for the current row (all columns tried), the solve_queens(row, placement) function itself returns. This signifies that all paths starting with the configuration of queens up to row-1 (that led to this call) have been explored for this particular row. The state placement[0...row-1] remains untouched by this returning function, allowing the caller (which was trying to place a queen in row-1) to potentially try a different column for row-1.
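The whole flow, as a compact sketch in EPI's style (a sketch, not the book's exact code: placement[row] holds the column of the queen in row row, and overwriting placement[row] is the implicit undo described above):

```python
def n_queens(n):
    """Return all nonattacking placements of n queens on an n x n board."""
    results, placement = [], [0] * n

    def is_safe(row, col):
        for prev_row in range(row):
            diff = abs(placement[prev_row] - col)
            if diff == 0 or diff == row - prev_row:  # same column or diagonal
                return False
        return True

    def solve(row):
        if row == n:                      # base case: queens in rows 0..n-1
            results.append(list(placement))
            return
        for col in range(n):              # try each column for this row
            if is_safe(row, col):
                placement[row] = col      # choose (overwriting is the "undo")
                solve(row + 1)            # explore subsequent rows

    solve(0)
    return results
```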

Q. Suppose the first choice already leads to valid placements. Isn’t the rest of the loop (returning, then trying placement[row] = new_col_choice and undoing our setup) wasted effort once we have a valid placement?

You’re asking: If a particular choice for placement[row] (say, col_A) leads to one or more valid complete solutions through the solve(row+1, ...) call, why does the for loop for row continue to try other columns (like col_B) for that same row? Isn’t that wasted effort if we’ve already found solutions?

The answer depends on the goal of the problem:

  • Goal: Find ALL distinct non-attacking placements (as in N-Queens).
    If the goal is to find all possible solutions, then no, it’s not wasted. We must continue exploring.

    • Just because placing a queen at (row, col_A) led to some solutions doesn’t mean that placing a queen at (row, col_B) (if safe) won’t also lead to other, different valid solutions.
    • The N-Queens problem specifically asks for all distinct nonattacking placements. To achieve this, the backtracking algorithm must systematically explore every possible valid path in the decision tree.
    • When solve(row+1, ...) returns after exploring the consequences of placement[row] = col_A, it has done its job for that branch. The for loop for row then must try placement[row] = col_B (and col_C, etc.) to see if those choices also lead to valid complete solutions.
      Each of these will be a distinct branch.
  • Goal: Find ANY ONE non-attacking placement (or determine if one exists).
    If the problem was modified to “find just one solution and then stop,” then yes, you could optimize.

    • In this scenario, your recursive function could return a boolean: true if a solution was found down that path, false otherwise.

15.3 GENERATE PERMUTATIONS (Page 222)

Problem: Given an array of distinct integers, generate all possible orderings (permutations) of those integers. No permutation should appear more than once.

Core Idea (Recursive Backtracking): To form a permutation of N items:

  1. Choose one item to be the first item in the permutation.
  2. Then, generate all permutations of the remaining N-1 items.
  3. Prepend the chosen first item to each of these N-1 item permutations.
  4. Repeat this by choosing every possible item as the “first” item.

Simplified Recursive Logic: generate_perms(index, current_array)

  • index: The current position in the permutation we are trying to fill (from 0 to N-1).
  • current_array: The array whose elements are being swapped to form permutations.
  1. Base Case:

    • If index == N-1 (or index == N depending on how you structure it): We’ve fixed elements for all positions from 0 to N-1. The current_array now holds one complete permutation.
    • Action: Add a copy of current_array to the results. Return.
  2. Recursive Step (For the current index): Iterate i from index to N-1 (these are the elements available to be placed at the current index).

    • Choose/Place: Swap current_array[index] with current_array[i]. This brings the element originally at current_array[i] into the index-th position.
    • Explore/Recurse: Call generate_perms(index + 1, current_array). This will generate all permutations for the rest of the array (from index + 1 onwards), given the choice made for index.
    • Undo/Backtrack: Swap current_array[index] with current_array[i] again. This restores the array to its state before this iteration’s choice, allowing the next iteration of the for loop (for i) to correctly pick a different element for the index-th position. This is crucial.
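The three steps above, as a runnable sketch of the swap-based approach (a sketch following EPI's directed_permutations idea; the outer wrapper is mine):

```python
def permutations(A):
    """Generate all permutations of A in place via swap/recurse/swap-back."""
    result = []

    def directed_permutations(index):
        if index == len(A) - 1:              # base case: all positions fixed
            result.append(list(A))           # copy the completed permutation
            return
        for i in range(index, len(A)):
            A[index], A[i] = A[i], A[index]  # choose: bring A[i] to position index
            directed_permutations(index + 1) # explore the rest of the array
            A[index], A[i] = A[i], A[index]  # undo: restore for the next i

    directed_permutations(0)
    return result
```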

Example: A = [1,2,3], directed_permutations_recursive(0)

  1. idx_to_fill = 0: (Think of idx_to_fill as “locking down” elements from left to right. Everything to the left of idx_to_fill is considered “fixed” for the current branch of recursion.)
    • i = 0: A=[1,2,3]. Swap A[0],A[0] (no change). Recurse d_p(1) with A=[1,2,3].
      • idx_to_fill = 1: (Processing A=[1,_,_])
        • i = 1: A=[1,2,3]. Swap A[1],A[1]. Recurse d_p(2) with A=[1,2,3].
          • idx_to_fill = 2. Base Case. Add [1,2,3] to result. Return.
        • Undo swap A[1],A[1]. A=[1,2,3].
        • i = 2: A=[1,2,3]. Swap A[1],A[2] -> A=[1,3,2]. Recurse d_p(2) with A=[1,3,2].
          • idx_to_fill = 2. Base Case. Add [1,3,2] to result. Return.
        • Undo swap A[1],A[2] -> A=[1,2,3].
      • d_p(1) returns.
    • Undo swap A[0],A[0]. A=[1,2,3].
    • i = 1: A=[1,2,3]. Swap A[0],A[1] -> A=[2,1,3]. Recurse d_p(1) with A=[2,1,3].
      • idx_to_fill = 1: (Processing A=[2,_,_])
        • i = 1: A=[2,1,3]. Swap A[1],A[1]. Recurse d_p(2) with A=[2,1,3].
          • Add [2,1,3] to result.
        • Undo.
        • i = 2: A=[2,1,3]. Swap A[1],A[2] -> A=[2,3,1]. Recurse d_p(2) with A=[2,3,1].
          • Add [2,3,1] to result.
        • Undo.
      • d_p(1) returns.
    • Undo swap A[0],A[1] -> A=[1,2,3].
    • i = 2: A=[1,2,3]. Swap A[0],A[2] -> A=[3,2,1]. Recurse d_p(1) with A=[3,2,1]. (generates [3,2,1] and [3,1,2])
    • Undo.
  • d_p(0) returns.

Complexity:

  • There are N! permutations.
  • Each permutation is of length N. Generating it and copying it takes O(N).
  • The recursion tree has roughly N! leaf nodes. The path to each leaf involves N levels of recursion. The work done at non-leaf nodes involves a loop and swaps.
  • Time: O(N * N!) because we spend O(N) work (loop and swaps) along the path to each of the N! permutations, and then O(N) to copy the result. More precisely, the number of recursive calls C(N) is N * C(N-1), leading to N! calls roughly. Each call involves a loop. The book derives it as sum(N! / (N-k)!) calls, which is O(N!).
  • Space: O(N) for the recursion call stack. O(N * N!) if storing all results.

The “swap, recurse, swap back” is the classic pattern for generating permutations in place.


15.4 GENERATE THE POWER SET (Page 224)

Problem: Given a set S (represented as a list/array for ordering purposes), return its power set. The power set is the set of all possible subsets of S, including the empty set and S itself.

Example: If S = {0, 1, 2}, its power set is: {{}, {0}, {1}, {2}, {0,1}, {0,2}, {1,2}, {0,1,2}}

Core Idea (Recursive Backtracking - “Take it or Leave it”): For each element in the input set, we have two choices:

  1. Include the element in the current subset we are building.
  2. Do NOT include the element in the current subset we are building.

We explore both choices recursively.

Recursive Logic: generate_power_set_recursive(index_of_element_to_consider, current_subset_being_built)

  • index_of_element_to_consider: The index of the element in the input set S that we are currently deciding whether to include or not.
  • current_subset_being_built: The subset formed so far based on decisions for elements S[0] through S[index-1].
  1. Base Case:

    • If index_of_element_to_consider == len(S): We have made a decision (include/exclude) for every element in the input set S.
    • Action: The current_subset_being_built is now one complete subset of S. Add a copy of it to our list of all subsets (the power set). Return.
  2. Recursive Step (For S[index_of_element_to_consider]):

    • Let element = S[index_of_element_to_consider].
    • Choice 1: EXCLUDE element
      • Don’t add element to current_subset_being_built.
      • Recursively call generate_power_set_recursive(index_of_element_to_consider + 1, current_subset_being_built). (Move to decide for the next element).
    • Choice 2: INCLUDE element
      • “Place”/Choose: Add element to current_subset_being_built.
      • Recursively call generate_power_set_recursive(index_of_element_to_consider + 1, current_subset_being_built). (Move to decide for the next element, with the current one included).
      • “Undo”/Backtrack: Remove element from current_subset_being_built. This is crucial so that when the “EXCLUDE” path for the next higher level of recursion (or other branches) is taken, current_subset_being_built is in the correct state.

EPI’s directed_power_set(to_be_selected, selected_so_far) (Page 225):

  • to_be_selected: Corresponds to my index_of_element_to_consider.
  • selected_so_far: Corresponds to my current_subset_being_built.
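A runnable sketch of this logic, using EPI's parameter names (a sketch, not the book's exact code):

```python
def generate_power_set(input_set):
    """Return all subsets of input_set via the include/exclude pattern."""
    power_set = []

    def directed_power_set(to_be_selected, selected_so_far):
        if to_be_selected == len(input_set):       # decided for every element
            power_set.append(list(selected_so_far))
            return
        # Choice 1: EXCLUDE input_set[to_be_selected]
        directed_power_set(to_be_selected + 1, selected_so_far)
        # Choice 2: INCLUDE it, recurse, then undo
        selected_so_far.append(input_set[to_be_selected])
        directed_power_set(to_be_selected + 1, selected_so_far)
        selected_so_far.pop()                      # backtrack

    directed_power_set(0, [])
    return power_set
```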

Example: input_set = [0, 1], directed_power_set_recursive(0, [])

  1. dps(0, []): (Deciding for element 0)
    • Exclude 0: Call dps(1, [])
      • dps(1, []): (Deciding for element 1)
        • Exclude 1: Call dps(2, [])
          • dps(2, []): idx(2) == len(2). Base Case! Add [] to results. Return.
        • Back in dps(1, []).
        • Include 1:
          • current_subset.append(1) -> current_subset is [1]
          • Call dps(2, [1])
            • dps(2, [1]): idx(2) == len(2). Base Case! Add [1] to results. Return.
          • current_subset.pop() -> current_subset is []
        • dps(1, []) returns.
    • Back in dps(0, []). current_subset is still [].
    • Include 0:
      • current_subset.append(0) -> current_subset is [0]
      • Call dps(1, [0])
        • dps(1, [0]): (Deciding for element 1)
          • Exclude 1: Call dps(2, [0])
            • dps(2, [0]): idx(2) == len(2). Base Case! Add [0] to results. Return.
          • Back in dps(1, [0]).
          • Include 1:
            • current_subset.append(1) -> current_subset is [0, 1]
            • Call dps(2, [0, 1])
              • dps(2, [0, 1]): idx(2) == len(2). Base Case! Add [0, 1] to results. Return.
            • current_subset.pop() -> current_subset is [0]
          • dps(1, [0]) returns.
      • current_subset.pop() -> current_subset is []
    • dps(0, []) returns.

Results: [ [], [1], [0], [0,1] ] (Order might vary based on recursion path, but all subsets are there).

Complexity:

  • There are 2^N subsets for a set of N elements.
  • Each subset can have up to N elements. Copying a subset takes O(N).
  • The recursion tree has 2^N leaf nodes (each representing a subset). The depth is N.
  • Time: O(N * 2^N) because we generate 2^N subsets, and for each, we might do O(N) work (e.g., appending to list, copying at the end). The number of nodes in the decision tree is 2^(N+1) - 1.
  • Space: O(N) for the recursion call stack (depth N) and O(N) for current_subset. If storing all results, then O(N * 2^N).

Alternative Iterative Approach (Bit Manipulation - Page 225):

  • If N is small (e.g., <= 64), each integer from 0 to 2^N - 1 can represent a subset.
  • The k-th bit of the integer corresponds to the k-th element of the set. If the bit is 1, include the element; if 0, exclude.
    • Example: S = {a,b,c} (N=3)
      • 0 (000_2) -> {}
      • 1 (001_2) -> {c} (here the rightmost bit is bit 0 and maps to c, bit 1 to b, bit 2 to a)
      • 2 (010_2) -> {b}
      • 7 (111_2) -> {a,b,c}
  • This is often very fast in practice. Complexity is O(N * 2^N) because you iterate 2^N numbers, and for each, you might iterate up to N bits to construct the subset.
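A minimal sketch of the bitmask idea (assumed names; here bit k simply maps to nums[k], so the labeling differs from the right-to-left example above, but the enumeration is the same):

```python
def power_set_bits(nums):
    """Enumerate all subsets of nums via bitmasks 0 .. 2^N - 1."""
    n = len(nums)
    subsets = []
    for mask in range(1 << n):  # 2^N masks
        # Bit k of mask decides whether nums[k] is included.
        subsets.append([nums[k] for k in range(n) if mask & (1 << k)])
    return subsets

print(power_set_bits(['a', 'b']))
# [[], ['a'], ['b'], ['a', 'b']]
```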

This “take it or leave it” recursive pattern is fundamental for many subset, combination, and related problems.


15.5 GENERATE ALL SUBSETS OF SIZE k (Page 226)

Problem: Given n and k, compute all subsets of size k from the set {1, 2, ..., n}. (The book uses {1,2,...,n} for the example, but the code can be adapted for any input set if needed by passing the set and an offset). This is also known as generating combinations: “n choose k”.

Example: n=5, k=2. Subsets of size 2 from {1,2,3,4,5}: {{1,2}, {1,3}, {1,4}, {1,5}, {2,3}, {2,4}, {2,5}, {3,4}, {3,5}, {4,5}}

Core Idea (Recursive Backtracking - Focused “Take it or Leave it”): We want to build a combination of size k. We can iterate through the numbers from 1 to n. For each number, we decide:

  1. Do we include this number in our current combination?
  2. If we include it, we then need to find k-1 more numbers from the remaining available numbers (i.e., numbers greater than the current one to avoid duplicates and maintain order).
  3. If we don’t include it, we need to find k numbers from the remaining available numbers.

To make it more structured and avoid duplicate combinations (like {1,2} and {2,1} which are the same set), we can enforce that the numbers in our partial_combination are always added in increasing order.

Recursive Logic: generate_combinations_recursive(start_number_to_consider, current_combination)

  • start_number_to_consider: The smallest number we can currently pick to add to our combination. This ensures we pick numbers in increasing order.
  • current_combination: The list of numbers picked so far for the current combination.
  1. Base Case 1 (Combination Complete):

    • If len(current_combination) == k: We have found a combination of the desired size k.
    • Action: Add a copy of current_combination to our list of results. Return.
  2. Base Case 2 (Not enough elements left to form a combination of size k):

    • If start_number_to_consider > n and len(current_combination) < k: We’ve run out of numbers to pick from, but haven’t reached size k. This path is a dead end. Return.
    • More precisely, if the number of remaining elements (n - start_number_to_consider + 1) is less than the number of elements we still need (k - len(current_combination)), then we can’t complete the combination. This check can prune branches earlier. (EPI code handles this implicitly with loop bounds).
  3. Recursive Step (For numbers starting from start_number_to_consider):

    • Iterate num_to_add from start_number_to_consider up to n.
      • (Optimization from EPI: num_to_add should not go so far that there aren’t enough remaining elements to complete a k-sized combination. The loop in EPI is while i <= n and num_remaining <= n - i + 1:, which captures this.)
      a. “Place”/Choose: Add num_to_add to current_combination.
      b. Explore/Recurse: Call generate_combinations_recursive(num_to_add + 1, current_combination).
        • We pass num_to_add + 1 as the next start_number_to_consider to ensure subsequent numbers are greater, maintaining sorted order and preventing duplicate combinations.
      c. “Undo”/Backtrack: Remove num_to_add from current_combination. This is crucial: it allows the loop to try the next num_to_add for the current position in the combination, and lets previous recursive calls explore different branches.

EPI’s directed_combinations(offset, partial_combination) (Page 226):

  • offset: Corresponds to my start_number_to_consider. (EPI uses 1-based indexing for numbers, so offset starts at 1).
  • partial_combination: Corresponds to my current_combination.
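Putting the pieces together, here is a runnable sketch in the spirit of EPI’s directed_combinations (the wrapper name generate_combinations is my own; the choose/explore/backtrack steps match the trace below):

```python
def generate_combinations(n, k):
    """All size-k subsets of {1, ..., n}, each built in increasing order."""
    result = []

    def directed_combinations(offset, partial_combination):
        # Base Case 1: combination complete.
        if len(partial_combination) == k:
            result.append(list(partial_combination))  # copy costs O(k)
            return
        num_remaining = k - len(partial_combination)
        i = offset
        # Prune: stop once fewer than num_remaining numbers are left.
        while i <= n and num_remaining <= n - i + 1:
            partial_combination.append(i)                      # choose
            directed_combinations(i + 1, partial_combination)  # explore
            partial_combination.pop()                          # backtrack
            i += 1

    directed_combinations(1, [])
    return result

print(generate_combinations(4, 2))
# [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
```

The while-loop bound is the early-pruning check from the optimization note: once i exceeds n - num_remaining + 1, there are not enough numbers left to finish a size-k combination.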

Example: n=4, k=2, directed_combinations_recursive(1, [])

  1. dcr(1, []): needed=2. end_loop_at = 4-2+1 = 3. Loop num_to_add from 1 to 3.
    • num_to_add = 1: combo=[1]. Call dcr(2, [1]).
      • dcr(2, [1]): needed=1. end_loop_at = 4-1+1 = 4. Loop num_to_add_inner from 2 to 4.
        • num_to_add_inner = 2: combo=[1,2]. Call dcr(3, [1,2]).
          • dcr(3, [1,2]): len==k. Base! Add [1,2]. Return.
        • combo.pop() -> [1].
        • num_to_add_inner = 3: combo=[1,3]. Call dcr(4, [1,3]).
          • dcr(4, [1,3]): len==k. Base! Add [1,3]. Return.
        • combo.pop() -> [1].
        • num_to_add_inner = 4: combo=[1,4]. Call dcr(5, [1,4]).
          • dcr(5, [1,4]): len==k. Base! Add [1,4]. Return.
        • combo.pop() -> [1].
        • Loop ends. dcr(2, [1]) returns.
      • combo.pop() -> [].
    • num_to_add = 2: combo=[2]. Call dcr(3, [2]).
      • dcr(3, [2]): needed=1. end_loop_at = 4-1+1 = 4. Loop num_to_add_inner from 3 to 4.
        • num_to_add_inner = 3: combo=[2,3]. Call dcr(4, [2,3]). Add [2,3].
        • combo.pop() -> [2].
        • num_to_add_inner = 4: combo=[2,4]. Call dcr(5, [2,4]). Add [2,4].
        • combo.pop() -> [2].
        • Loop ends. dcr(3, [2]) returns.
      • combo.pop() -> [].
    • num_to_add = 3: combo=[3]. Call dcr(4, [3]).
      • dcr(4, [3]): needed=1. end_loop_at = 4-1+1 = 4. Loop num_to_add_inner from 4 to 4.
        • num_to_add_inner = 4: combo=[3,4]. Call dcr(5, [3,4]). Add [3,4].
        • combo.pop() -> [3].
        • Loop ends. dcr(4, [3]) returns.
      • combo.pop() -> [].
    • Loop ends. dcr(1, []) returns.

Results: [[1,2], [1,3], [1,4], [2,3], [2,4], [3,4]]
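As a sanity check, the same six combinations come out of Python’s standard library, in the same lexicographic order:

```python
from itertools import combinations

# itertools.combinations yields tuples; convert to lists for comparison.
result = [list(c) for c in combinations(range(1, 5), 2)]
print(result)
# [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
```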

Complexity:

  • Number of combinations is “n choose k”, denoted C(n,k) = n! / (k! * (n-k)!).
  • Each combination has k elements.
  • Time: O(k * C(n,k)) because we generate C(n,k) combinations, and forming/copying each takes about O(k). The recursion depth is k.
  • Space: O(k) for the recursion stack and current_combination. O(k * C(n,k)) if storing all results.

This pattern is very similar to the power set, but with the added constraint on the size k and the optimization of only picking numbers greater than the previously picked one to ensure unique combinations.