Dynamic Programming
- Use When
- Optimal substructure
- The optimal solution to the overall problem can be constructed from optimal solutions to its smaller subproblems
- Overlapping subproblems
- The space of distinct subproblems must be small (typically polynomial in the input size)
- Any recursive algorithm for the problem solves the same subproblems over and over rather than always generating new ones (see the naive Fibonacci sketch below)
- This allows the solutions of those subproblems to be cached and reused
- This is what sets it apart from divide and conquer
- Guarantees a correct, optimal solution; runtime is roughly the number of distinct subproblems times the work per subproblem, since each subproblem is solved only once
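As an illustration of overlapping subproblems, here is a minimal sketch using Fibonacci as a stand-in problem (the notes don't fix a specific example): plain recursion re-solves the same subproblems many times.

```python
# Naive recursive Fibonacci: has optimal substructure (fib(n) is built
# from fib(n-1) and fib(n-2)) but re-solves the same subproblems
# exponentially many times.
call_counts = {}

def fib_naive(n):
    call_counts[n] = call_counts.get(n, 0) + 1
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

fib_naive(10)
print(call_counts[2])  # fib(2) alone is recomputed 34 times for n = 10
```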
- Examples
- Fibonacci numbers, rod cutting, 0/1 knapsack, longest common subsequence, edit distance, shortest paths (Bellman-Ford)
- Methods
- Top-down: Memoization
- Store the result of each subproblem the first time it is solved, then reuse it on later calls
- Basically recursion with a cache (see the memoized sketch below)
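A minimal memoization sketch, again using Fibonacci as an assumed example; `functools.lru_cache` plays the role of the cache.

```python
from functools import lru_cache

# Top-down memoization: the same recursion, but each result is cached,
# so every subproblem is solved exactly once.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025, far beyond what the naive version can reach
```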
- Bottom-up: Tabulation
- Solve the smallest subproblems first (usually iteratively, filling a table) and combine their results to build up the final answer (see the tabulated sketch below)
- Disadvantage: it may compute subproblems that are never actually needed, because it fills in the whole table
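A corresponding bottom-up sketch for the same assumed Fibonacci example; the table is filled from the smallest index upward.

```python
# Bottom-up tabulation: fill a table from the smallest subproblems upward.
# Every entry up to n is computed, even though only table[n] is needed.
def fib_tab(n):
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_tab(50))  # 12586269025, same result with no recursion
```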