Dynamic Programming Patterns
Dynamic programming (DP) solves optimization and counting problems by breaking them into overlapping subproblems, solving each once, and storing the results. The key insight is that optimal substructure holds: the optimal solution to the whole problem can be built from optimal solutions to subproblems.
Classic DP
0/1 Knapsack: given n items with weights and values, and a capacity W, choose items to maximize total value without exceeding W.
Digit DP
Digit DP counts integers in a range [a, b] that satisfy a digit-level constraint (e.g., digits are non-decreasing, digit sum divisible by m). The standard template uses a dfs(pos, state, limit) function with memoization.
Core idea: build the number digit by digit from the most significant position. At each position you may place any digit from 0 to 9 (or 0 to A[pos] if you are still tight against the upper bound). The limit flag tracks whether you are still constrained by the upper bound.
Critical rule: you can only memoize a state when limit == false. When limit == true the valid digit range is restricted to A[pos], so the same (pos, state) pair may have a different answer depending on the path taken—memoizing it would produce wrong results.
A common bug: if you write if (limit) instead of if (!limit) when checking whether to skip the memoization update, the cache still produces correct values, but the time complexity degrades from polynomial to exponential because the pruning never fires.
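A minimal sketch of the template under an illustrative constraint (count integers in [0, n] whose digit sum is divisible by m; the names countUpTo and mod_m are this sketch's, not a fixed convention):

```cpp
#include <cstring>

// Illustrative constraint: count integers in [0, n] whose digit sum
// is divisible by m (state fits in the table because m <= 200 here).
int A[20];
long long memo[20][200];
int mod_m;

// pos:   current digit position (most significant first, counting down)
// state: digit sum so far, reduced mod mod_m
// limit: are we still tight against the upper bound?
long long dfs(int pos, int state, bool limit) {
    if (pos < 0) return state == 0;           // all digits placed
    if (!limit && memo[pos][state] != -1)     // read the cache only when free
        return memo[pos][state];
    int hi = limit ? A[pos] : 9;              // tight => digit range restricted
    long long res = 0;
    for (int d = 0; d <= hi; d++)
        res += dfs(pos - 1, (state + d) % mod_m, limit && d == hi);
    if (!limit) memo[pos][state] = res;       // write the cache only when free
    return res;
}

long long countUpTo(long long n, int m) {     // count over [0, n]
    mod_m = m;
    memset(memo, -1, sizeof memo);
    int len = 0;
    for (; n > 0; n /= 10) A[len++] = n % 10; // A[0] = least significant digit
    if (len == 0) A[len++] = 0;               // n == 0
    return dfs(len - 1, 0, true);
}
```

To count over [a, b], call countUpTo(b, m) - countUpTo(a - 1, m).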
Tree DP
Tree DP defines state on each subtree and transitions up to the parent via DFS. The standard form uses f[u][0] / f[u][1] to represent two choices at each node (e.g., selected / not selected in an independent set).
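As a sketch, maximum-weight independent set on a tree in the f[u][0] / f[u][1] convention (the function name maxIndependentSet and the edge-list input format are this sketch's assumptions):

```cpp
#include <algorithm>
#include <utility>
#include <vector>
using namespace std;

// f[u][0]: best value in u's subtree if u is NOT selected
// f[u][1]: best value in u's subtree if u IS selected
vector<vector<int>> adj;
vector<int> w;
long long f[100005][2];

void dfs(int u, int parent) {
    f[u][0] = 0;
    f[u][1] = w[u];
    for (int v : adj[u]) {
        if (v == parent) continue;
        dfs(v, u);
        f[u][0] += max(f[v][0], f[v][1]); // u not taken: child is free
        f[u][1] += f[v][0];               // u taken: child must be skipped
    }
}

long long maxIndependentSet(int n, const vector<int>& weight,
                            const vector<pair<int,int>>& edges) {
    adj.assign(n, {});
    w = weight;
    for (auto [a, b] : edges) {
        adj[a].push_back(b);
        adj[b].push_back(a);
    }
    dfs(0, -1);                           // root the tree at node 0
    return max(f[0][0], f[0][1]);
}
```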
Interval DP
Interval DP solves problems over contiguous subarrays. The outer loop iterates over interval length; the inner loop over starting positions; the innermost loop over split points.
Graph Algorithms
BFS and DFS
BFS explores a graph level by level using a queue, finding shortest paths by hop count in O(V + E). DFS explores deeply before backtracking using a stack (or recursion), useful for topological order, cycle detection, and generating all paths. Multi-source BFS: initialize the queue with multiple source nodes simultaneously. All sources start at distance 0. This solves “nearest X to each cell” problems in a single BFS pass.
Dijkstra’s Algorithm
Dijkstra finds the single-source shortest paths in a graph with non-negative edge weights. It uses a greedy strategy: always relax the unvisited node with the smallest current distance.
Floyd-Warshall Algorithm
Floyd-Warshall computes all-pairs shortest paths in O(V³) using DP over intermediate nodes. After the algorithm finishes, dist[x][y] holds the shortest-path distance from x to y. Floyd-Warshall handles negative edges but not negative cycles. Run it on dense graphs or when you need all-pairs results.
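A minimal sketch of the triple loop (the INF sentinel and the adjacency-matrix input convention are assumptions of this sketch):

```cpp
#include <vector>
using namespace std;

// Large sentinel: INF + INF still fits in long long.
const long long INF = 1e18;

// dist is an n x n matrix: dist[x][y] = weight of edge x->y, INF if
// absent, 0 on the diagonal. Afterwards dist[x][y] is the shortest
// path length from x to y.
void floydWarshall(vector<vector<long long>>& dist) {
    int n = dist.size();
    for (int k = 0; k < n; k++)            // intermediate node
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}
```

Note the loop order: the intermediate node k must be the outermost loop.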
Topological Sort (Kahn’s Algorithm)
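Kahn’s algorithm repeatedly removes nodes of in-degree 0 from a DAG; if some nodes are never removed, the graph contains a cycle. A sketch (the edge-list input format and the empty-vector cycle signal are this sketch's choices):

```cpp
#include <queue>
#include <utility>
#include <vector>
using namespace std;

// Kahn's algorithm: pop a node with in-degree 0, append it to the
// order, and decrement its neighbors' in-degrees. Returns a
// topological order, or an empty vector if the graph has a cycle.
vector<int> topoSort(int n, const vector<pair<int,int>>& edges) {
    vector<vector<int>> adj(n);
    vector<int> indeg(n, 0), order;
    for (auto [u, v] : edges) {
        adj[u].push_back(v);
        indeg[v]++;
    }
    queue<int> q;
    for (int i = 0; i < n; i++)
        if (indeg[i] == 0) q.push(i);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        order.push_back(u);
        for (int v : adj[u])
            if (--indeg[v] == 0) q.push(v);
    }
    if ((int)order.size() != n) return {};  // cycle detected
    return order;
}
```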
Greedy Algorithms
A greedy algorithm makes a locally optimal choice at each step, hoping to find a global optimum. Greedy works when the greedy choice property holds (a locally optimal choice leads to a globally optimal solution) and the problem has optimal substructure.
Activity Selection / Maximum Non-Overlapping Intervals
Given intervals, select the maximum number of non-overlapping intervals. Greedy rule: always pick the interval that ends earliest. Sort by end time; scan left to right, adding an interval only if its start is ≥ the end of the last selected interval.
Invariant / Potential-Based Greedy
Some greedy problems require finding a non-obvious invariant. A classic example: given two arrays A and B that can be transformed via adjacent swaps, determine whether A can become B and find the minimum number of swaps. Key insight: define the potential (index + value) array. Arrays A and B can be transformed into each other if and only if their potential multisets are equal. To minimize swaps, use a greedy “nearest first” strategy combined with a BIT (Binary Indexed Tree / Fenwick Tree) to count inversions efficiently.
Common Greedy Patterns
| Problem type | Greedy rule |
|---|---|
| Interval scheduling | Sort by end time, pick earliest-ending non-conflicting interval |
| Interval covering | Sort by start, pick the interval with the farthest reach |
| Fractional knapsack | Sort by value/weight ratio, take greedily |
| Huffman coding | Always merge the two lowest-frequency nodes |
| Minimize maximum | Binary search on the answer + greedy feasibility check |
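As a concrete instance of the interval-scheduling row, a sketch of the earliest-end-first rule (the name maxNonOverlapping and the (start, end) pair format are illustrative):

```cpp
#include <algorithm>
#include <climits>
#include <utility>
#include <vector>
using namespace std;

// Earliest-end-first: sort by end time, then take every interval
// whose start is >= the end of the last interval taken.
int maxNonOverlapping(vector<pair<int,int>> intervals) { // (start, end)
    sort(intervals.begin(), intervals.end(),
         [](const pair<int,int>& a, const pair<int,int>& b) {
             return a.second < b.second;
         });
    int count = 0;
    long long lastEnd = LLONG_MIN;
    for (auto [s, e] : intervals)
        if (s >= lastEnd) {  // start >= last end: no overlap
            count++;
            lastEnd = e;
        }
    return count;
}
```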
Mathematical Algorithms
Fast Exponentiation (Binary Exponentiation)
Computes base^exp mod m in O(log exp) time by repeatedly squaring.
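A minimal sketch (assumes m is small enough that m * m fits in a long long):

```cpp
// Computes base^exp mod m by repeated squaring: O(log exp) multiplies.
// Assumes m*m fits in long long (m up to ~3e9).
long long binpow(long long base, long long exp, long long m) {
    long long result = 1 % m;      // 1 % m handles m == 1 correctly
    base %= m;
    while (exp > 0) {
        if (exp & 1)               // current bit set: multiply it in
            result = result * base % m;
        base = base * base % m;    // square for the next bit
        exp >>= 1;
    }
    return result;
}
```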
Modular Inverse
The modular inverse of a mod m is the value x such that a * x ≡ 1 (mod m). It exists if and only if gcd(a, m) = 1.
When m is prime, Fermat’s little theorem gives: a^(m-1) ≡ 1 (mod m), so the inverse is a^(m-2) mod m:
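A minimal sketch, assuming m is prime and a is not a multiple of m (the fast-power routine is inlined so the block stands alone):

```cpp
// Fast power: base^exp mod m by repeated squaring.
long long binpow(long long base, long long exp, long long m) {
    long long result = 1 % m;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = result * base % m;
        base = base * base % m;
        exp >>= 1;
    }
    return result;
}

// Modular inverse via Fermat's little theorem: a^(m-2) mod m.
// Valid only when m is prime and a is not a multiple of m.
long long modInverse(long long a, long long m) {
    return binpow(a, m - 2, m);
}
```

For a composite m with gcd(a, m) = 1, use the extended Euclidean algorithm instead.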
Matrix Exponentiation
When a recurrence can be expressed as matrix multiplication, fast exponentiation on matrices reduces O(n) DP to O(m³ log n), where m is the state dimension. Fibonacci example: the Fibonacci recurrence F(n) = F(n-1) + F(n-2) maps to:
[[F(n+1), F(n)], [F(n), F(n-1)]] = [[1, 1], [1, 0]]^n
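A sketch for the 2×2 Fibonacci case, combining matrix multiplication with binary exponentiation (the modulus is illustrative):

```cpp
#include <array>
using namespace std;

using Mat = array<array<long long, 2>, 2>;
const long long MOD = 1000000007;  // illustrative modulus

Mat mul(const Mat& a, const Mat& b) {
    Mat c{};                       // zero-initialized 2x2 product
    for (int i = 0; i < 2; i++)
        for (int k = 0; k < 2; k++)
            for (int j = 0; j < 2; j++)
                c[i][j] = (c[i][j] + a[i][k] * b[k][j]) % MOD;
    return c;
}

// F(0)=0, F(1)=1; [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]].
long long fib(long long n) {
    Mat result{{{1, 0}, {0, 1}}};  // identity matrix
    Mat base{{{1, 1}, {1, 0}}};
    for (; n > 0; n >>= 1) {       // binary exponentiation on matrices
        if (n & 1) result = mul(result, base);
        base = mul(base, base);
    }
    return result[0][1];           // F(n) mod MOD
}
```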