Dynamic Programming

Tags: Dynamic Programming, Optimization, Algorithm

What is Dynamic Programming?

Dynamic Programming (DP) is an optimization technique that solves complex problems by breaking them into overlapping subproblems and storing their solutions for future use. It applies when a problem has optimal substructure — the optimal solution to a problem can be constructed from optimal solutions of its subproblems.

DP can turn exponential-time recursive algorithms into polynomial-time ones. It was formalized by Richard Bellman in the 1950s and is used extensively in bioinformatics, economics, logistics, and AI.

When to Use Dynamic Programming

  • Optimal Substructure: The optimal solution of a larger problem can be built from optimal solutions of smaller subproblems
  • Overlapping Subproblems: The same subproblem is solved multiple times. Without DP, these repeated computations cause exponential slowdown
  • Well-Defined Recurrence: There is a clear recurrence relation that expresses the answer for a larger problem in terms of answers for smaller problems
  • Examples: Fibonacci, shortest paths, knapsack, edit distance, longest common subsequence
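The overlap in Fibonacci's subproblems can be made visible by counting how often the naive recursion recomputes the same value (an illustrative sketch; the call-counting dictionary is my own addition, not part of any standard API):

```python
def naive_fib(n, calls):
    """Naive recursive Fibonacci that records how often each n is computed."""
    calls[n] = calls.get(n, 0) + 1
    if n <= 1:
        return n
    return naive_fib(n - 1, calls) + naive_fib(n - 2, calls)

calls = {}
naive_fib(10, calls)
# Even for n = 10, fib(1) alone is recomputed dozens of times;
# with memoization, each subproblem would be solved exactly once.
print(calls[1])
```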

Two Approaches to DP

Memoization (Top-Down, Recursive)

Start from the top and recursively solve subproblems. Store results in a cache (memo table). When a subproblem is encountered again, return the cached result immediately.

// Pseudocode
function fib(n, memo = {}):
    if n in memo: return memo[n]
    if n <= 1: return n
    memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]
Pros: Only computes needed subproblems, so it is efficient when not all subproblems are required.
Cons: Recursive calls add overhead, and very deep recursion risks stack overflow.
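The pseudocode above maps almost directly to Python. One caveat: a mutable default argument like `memo = {}` is shared across top-level calls in real Python, so a common sketch uses `functools.lru_cache` instead:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # caches each fib(n) result after its first computation
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, reached with only O(n) distinct subproblems
```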
Tabulation (Bottom-Up, Iterative)

Start from the smallest subproblems and build upward. Fill a table iteratively from base cases to the final answer. No recursion overhead.

// Pseudocode
function fib(n):
    if n <= 1: return n
    dp = array of size n+1
    dp[0], dp[1] = 0, 1
    for i = 2 to n:
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
Pros: No recursion overhead, better cache locality, and no stack overflow risk.
Cons: May compute all subproblems even when only a subset is needed.
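The same tabulation as runnable Python. Since each entry depends only on the previous two, the table can also be collapsed to two variables (a common space optimization; both function names here are my own):

```python
def fib_tab(n):
    """Bottom-up Fibonacci: fill dp[0..n] from the base cases upward."""
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

def fib_two_vars(n):
    """O(1)-space variant: keep only the last two values."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_tab(10), fib_two_vars(10))  # 55 55
```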

DP vs. Related Techniques

Technique           | Subproblems Overlap?                        | Optimal Substructure? | Key Difference
Dynamic Programming | Yes (same subproblem solved multiple times) | Yes                   | Caches results to avoid redundant computation
Divide & Conquer    | No (subproblems are independent)            | Yes                   | No memoization needed; each subproblem is unique
Greedy              | No                                          | Sometimes             | Makes locally optimal choice at each step; may not yield global optimum
Backtracking        | No                                          | Yes                   | Explores all possibilities; exponential time, no caching
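The greedy-vs-DP distinction is concrete in coin change: with denominations {1, 3, 4}, picking the largest coin first misses the optimum for amount 6 (a sketch for illustration; function names are my own):

```python
def greedy_coins(coins, amount):
    """Largest-coin-first heuristic: fast, but not always optimal."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_coins(coins, amount):
    """Bottom-up DP: dp[a] = fewest coins needed to make amount a."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                dp[a] = min(dp[a], dp[a - c] + 1)
    return dp[amount] if dp[amount] != INF else None

print(greedy_coins([1, 3, 4], 6))  # 3  (4 + 1 + 1)
print(dp_coins([1, 3, 4], 6))      # 2  (3 + 3)
```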

Classic DP Problems in This Tool

  • Fibonacci: Compute the nth Fibonacci number. The classic DP example; demonstrates both memoization and tabulation.
  • 0/1 Knapsack: Maximize value within a weight capacity. Each item can be taken at most once.
  • Longest Increasing Subsequence (LIS): Find the longest subsequence whose elements are in increasing order.
  • Coin Change: Minimum number of coins needed to make a target amount, using unlimited coins of given denominations.
  • Edit Distance: Minimum edit operations (insert, delete, substitute) to transform one string into another.
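As one example from this list, the standard O(n²) tabulation for LIS (a minimal sketch under the usual recurrence, not this tool's actual implementation):

```python
def lis_length(nums):
    """dp[i] = length of the longest increasing subsequence ending at index i."""
    if not nums:
        return 0
    dp = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i]:          # nums[i] can extend a subsequence ending at j
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

print(lis_length([10, 9, 2, 5, 3, 7, 101, 18]))  # 4  (e.g. 2, 3, 7, 101)
```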