Watch 10 video solutions for Perfect Squares, a medium-level problem involving Math, Dynamic Programming, and Breadth-First Search. This walkthrough by NeetCode has 73,016 views. Want to try solving it yourself? Practice on FleetCode or read the detailed text solution.
Given an integer n, return the least number of perfect square numbers that sum to n.
A perfect square is an integer that is the square of an integer; in other words, it is the product of some integer with itself. For example, 1, 4, 9, and 16 are perfect squares while 3 and 11 are not.
Example 1:
Input: n = 12 Output: 3 Explanation: 12 = 4 + 4 + 4.
Example 2:
Input: n = 13 Output: 2 Explanation: 13 = 4 + 9.
Constraints:
1 <= n <= 10^4

Problem Overview: Given an integer n, return the minimum number of perfect square numbers (like 1, 4, 9, 16...) whose sum equals n. Each square can be used multiple times. The challenge is minimizing the count while exploring combinations efficiently.
Approach 1: Brute Force Recursion (Exponential Time, O(n) Space)
The brute force approach tries every possible perfect square ≤ n and recursively reduces the remaining value. For each square s, compute 1 + solve(n - s) and track the minimum result. This forms a large recursion tree because the same subproblems (like solve(12) or solve(7)) are recomputed many times. Time complexity grows exponentially in the worst case, roughly O((√n)^n), since each call branches into up to √n subcalls; the recursion depth can reach n (for example, when 1 is subtracted repeatedly), so auxiliary space is O(n). This approach demonstrates the structure of the problem but quickly becomes impractical for larger inputs.
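A minimal Python sketch of this recursion (the function name `num_squares_brute` is my own, not from the original walkthrough):

```python
import math

def num_squares_brute(n: int) -> int:
    """Return the least number of perfect squares summing to n (exponential time)."""
    if n == 0:
        return 0  # zero squares needed to form 0
    best = n  # upper bound: n copies of 1
    # Try every perfect square j*j <= n and recurse on the remainder.
    for j in range(1, math.isqrt(n) + 1):
        best = min(best, 1 + num_squares_brute(n - j * j))
    return best
```

Even small inputs like n = 12 recompute subproblems such as `num_squares_brute(7)` many times, which is exactly the redundancy the DP approach below eliminates.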
Approach 2: Dynamic Programming (O(n√n) Time, O(n) Space)
The optimized solution uses dynamic programming. Create a DP array where dp[i] stores the minimum number of perfect squares needed to form value i. Initialize dp[0] = 0, then iterate from 1 to n. For each i, try every square j*j ≤ i and update dp[i] = min(dp[i], dp[i - j*j] + 1). This works because every optimal solution for i builds on previously solved smaller states. The algorithm checks about √n squares for each number up to n, resulting in O(n√n) time and O(n) space.
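The DP recurrence above can be sketched as follows (a straightforward bottom-up implementation; variable names are my own):

```python
def num_squares(n: int) -> int:
    """dp[i] holds the minimum number of perfect squares that sum to i."""
    INF = float("inf")
    dp = [0] + [INF] * n  # dp[0] = 0; everything else starts unsolved
    for i in range(1, n + 1):
        j = 1
        # Try every square j*j <= i and build on the smaller state dp[i - j*j].
        while j * j <= i:
            dp[i] = min(dp[i], dp[i - j * j] + 1)
            j += 1
    return dp[n]
```

Each dp[i] is finalized before any larger index reads it, which is why the transition dp[i - j*j] + 1 always consults an already-optimal smaller state.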
This problem is also commonly interpreted as a shortest-path problem using breadth-first search. Treat each remainder as a node and subtract perfect squares as edges. BFS guarantees the first time you reach 0 corresponds to the minimum number of squares. That approach has similar O(n√n) complexity but models the problem as a graph instead of a DP table.
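A sketch of the BFS formulation, assuming the graph model described above (each remainder is a node, each square subtraction an edge; helper names are my own):

```python
from collections import deque

def num_squares_bfs(n: int) -> int:
    """Shortest path from n to 0, where each step subtracts one perfect square."""
    squares = []
    j = 1
    while j * j <= n:
        squares.append(j * j)
        j += 1
    visited = {n}
    queue = deque([(n, 0)])  # (remaining value, squares used so far)
    while queue:
        remainder, steps = queue.popleft()
        if remainder == 0:
            return steps  # first arrival at 0 is the minimum count
        for s in squares:
            nxt = remainder - s
            if nxt >= 0 and nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, steps + 1))
    return n  # unreachable for n >= 1, since 1 is always a valid square
```

Because BFS explores states level by level, reaching 0 at depth k proves no decomposition with fewer than k squares exists.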
The core insight comes from number decomposition and math properties of squares. Instead of enumerating combinations blindly, reuse results for smaller numbers. Every value builds on a previously computed optimal solution.
Recommended for interviews: The dynamic programming approach is what most interviewers expect. It clearly demonstrates state definition (dp[i]), transition (dp[i - j*j] + 1), and optimization over previous states. Brute force helps explain the problem structure, but the DP version shows you can eliminate redundant work and reason about time complexity.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Brute Force Recursion | Exponential | O(n) | Understanding the raw recursive structure of the problem |
| Dynamic Programming | O(n√n) | O(n) | General case and most common interview solution |
| Breadth-First Search | O(n√n) | O(n) | When modeling the problem as shortest path in a state graph |