Given an array of distinct integers candidates and a target integer target, return a list of all unique combinations of candidates where the chosen numbers sum to target. You may return the combinations in any order.
The same number may be chosen from candidates an unlimited number of times. Two combinations are unique if the frequency of at least one of the chosen numbers is different.
The test cases are generated such that the number of unique combinations that sum up to target is less than 150 combinations for the given input.
Example 1:
Input: candidates = [2,3,6,7], target = 7
Output: [[2,2,3],[7]]
Explanation: 2 and 3 are candidates, and 2 + 2 + 3 = 7. Note that 2 can be used multiple times. 7 is a candidate, and 7 = 7. These are the only two combinations.
Example 2:
Input: candidates = [2,3,5], target = 8
Output: [[2,2,2,2],[2,3,3],[3,5]]
Example 3:
Input: candidates = [2], target = 1
Output: []
Constraints:
1 <= candidates.length <= 30
2 <= candidates[i] <= 40
All elements of candidates are distinct.
1 <= target <= 40

Problem Overview: Given an array of distinct integers candidates and a target value, return all unique combinations where the chosen numbers sum to the target. Each number can be used an unlimited number of times. The challenge is generating combinations without duplicates while exploring all valid sums.
Approach 1: Backtracking (Exponential Time, Optimal for Enumeration)
This approach treats the problem as a recursive search tree. Start from index 0, repeatedly choose a candidate, add it to the current combination, and recursively explore the remaining target. If the running sum equals the target, record the combination. If it exceeds the target, stop exploring that path. Because candidates can be reused, the recursive call continues from the same index instead of moving forward.
The key insight is pruning: once the remaining target becomes negative, the branch cannot produce a valid combination. Backtracking efficiently explores possibilities while avoiding unnecessary permutations. Time complexity is exponential, typically O(2^T) in the worst case depending on the target and smallest candidate. Space complexity is O(T) for recursion depth and combination storage. This technique is a classic application of backtracking with recursive exploration over an array.
Approach 2: Dynamic Programming (O(n * target * k))
A dynamic programming strategy builds solutions for every intermediate sum from 0 to target. Use a DP array where dp[i] stores all combinations that produce sum i. Initialize dp[0] with an empty combination. For each candidate number, iterate through target values and extend previously built combinations by appending the candidate.
The advantage of this approach is structured bottom-up construction. Instead of exploring recursive paths, it reuses previously computed results. Time complexity is roughly O(n * target * k), where n is the number of candidates and k is the average number of combinations stored per sum. Space complexity is also O(target * k), since all partial combinations are stored. This is dynamic programming over sets of combinations rather than simple counts.
Recommended for interviews: The backtracking solution is what most interviewers expect. It shows you understand recursive exploration, pruning, and combination generation. Mentioning the dynamic programming variation demonstrates deeper problem-solving breadth, but implementing clean backtracking with correct pruning and duplicate control is usually sufficient for coding interviews.
This approach uses recursion and backtracking to generate all possible combinations by exploring each candidate. If a candidate is chosen, we explore further with the remaining target reduced by the candidate's value. We use backtracking to remove a candidate and try the next one. This way, we explore all possible combinations using depth-first search.
This C solution uses recursion to explore combinations and prunes a branch as soon as the remaining target drops below zero. If the remaining target reaches exactly zero, a valid combination has been found and is added to the result. The helper function findCombination iterates through the candidates, and reuse is allowed by recursing from the current candidate's index rather than the next one.
Time Complexity: O(2^T), where T is the target, as it potentially explores every combination of the candidates.
Space Complexity: O(T) for the recursion call stack and the path being explored.
We can also use a dynamic programming (DP) approach for the counting variant of the Combination Sum problem. We maintain a DP array where dp[s] stores the number of ways to form sum s using the candidates. Iterate through the candidates in the outer loop, and for each candidate update dp[s] for every sum s it can contribute to; the candidate-outer ordering ensures each unordered combination is counted exactly once.

This C implementation follows that principle: a combinationSum helper evaluates each candidate against the target sums from the candidate's value up to target, updating the dp array in place.
Time Complexity: Roughly O(T * N), for target T and candidates N.
Space Complexity: O(T) for the dp array.
We can first sort the array to facilitate pruning.
Next, we design a function dfs(i, s), which means starting the search from index i with a remaining target value of s. Here, i and s are both non-negative integers, the current search path is t, and the answer is ans.
In the function dfs(i, s), we first check whether s is 0. If it is, we add the current search path t to the answer ans and return. If s < candidates[i], then because the array is sorted, the elements at the current index and all following indices exceed the remaining target value s, so the path is invalid and we return directly. Otherwise, we search from index i over the range j ∈ [i, n), where n is the length of the array candidates. For each j, we add candidates[j] to the search path t, recursively call dfs(j, s - candidates[j]), and after the recursion returns, remove that element from t.
In the main function, we just need to call the function dfs(0, target) to get the answer.
The time complexity is O(2^n · n), and the space complexity is O(n). Here, n is the length of the array candidates. Due to pruning, the actual time complexity is much less than O(2^n · n).
We can also implement the function dfs(i, s) in another form. We first check whether s is 0; if so, we add the current search path t to the answer ans and return. If i >= n or s < candidates[i], the path is invalid, so we return directly. Otherwise, we consider two choices: not selecting the element at the current index, i.e., recursively calling dfs(i + 1, s), or selecting it, i.e., recursively calling dfs(i, s - candidates[i]).
The time complexity is O(2^n · n), and the space complexity is O(n). Here, n is the length of the array candidates. Due to pruning, the actual time complexity is much less than O(2^n · n).
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Backtracking | Exponential (≈ O(2^T), for target T) | O(T) | Standard interview solution when generating all valid combinations |
| Dynamic Programming (storing combinations) | O(n * target * k) | O(target * k) | When you want a bottom-up construction of combinations or reuse of intermediate sums |
| Dynamic Programming (counting only) | Roughly O(T * N), for target T and N candidates | O(T) | When only the number of combinations is needed |
| Sorting + Pruning + Backtracking | O(2^n · n), much less in practice | O(n) | When sorting up front is acceptable and tighter pruning is wanted |
| Sorting + Pruning + Backtracking (skip-or-take form) | O(2^n · n), much less in practice | O(n) | Alternative binary-branching formulation of the same search |