You are given two integers num1 and num2.
In one operation, you can choose an integer i in the range [0, 60] and subtract 2^i + num2 from num1.
Return the integer denoting the minimum number of operations needed to make num1 equal to 0.
If it is impossible to make num1 equal to 0, return -1.
Example 1:
Input: num1 = 3, num2 = -2
Output: 3
Explanation: We can make 3 equal to 0 with the following operations:
- We choose i = 2 and subtract 2^2 + (-2) from 3: 3 - (4 + (-2)) = 1.
- We choose i = 2 and subtract 2^2 + (-2) from 1: 1 - (4 + (-2)) = -1.
- We choose i = 0 and subtract 2^0 + (-2) from -1: (-1) - (1 + (-2)) = 0.
It can be proven that 3 is the minimum number of operations we need to perform.
Example 2:
Input: num1 = 5, num2 = 7
Output: -1
Explanation: It can be proven that it is impossible to make 5 equal to 0 with the given operation.
Constraints:
1 <= num1 <= 10^9
-10^9 <= num2 <= 10^9

Problem Overview: You start with two integers num1 and num2. In one operation you choose i in [0, 60] and subtract (2^i + num2) from num1. The goal is to reach exactly zero using the minimum number of operations.
Approach 1: Greedy Subtraction with Powers of 2 (O(60) time, O(1) space)
The key observation is that after k operations with chosen exponents i_1, ..., i_k, the total amount subtracted is (2^{i_1} + ... + 2^{i_k}) + k * num2. Reaching zero therefore requires x = num1 - k * num2 to be expressible as a sum of exactly k powers of two. This is possible exactly when popcount(x) <= k <= x: the set bits of x give the minimum number of powers needed, and any power 2^i with i > 0 can be split into two copies of 2^{i-1}, raising the count one at a time up to x (all ones).
You iterate over candidate operation counts k from 1 to 60 (2^60 exceeds any reachable value of x under the constraints). For each k, compute x = num1 - k * num2. A valid solution exists if x >= k and popcount(x) <= k, and the first k satisfying these conditions is the minimum number of operations. If num2 > 0, x eventually becomes negative and the check fails for every k, which yields -1. The check works because any x in that range can be decomposed into exactly k powers of two via its binary form, making the approach extremely fast.
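The feasibility condition for a given k can be sketched as a small C++ helper (the name `feasible` is illustrative, not from any library):

```cpp
// Returns true if num1 can be reduced to exactly 0 in k operations:
// the k chosen powers of two must sum to x = num1 - k * num2, which is
// possible exactly when x >= k and popcount(x) <= k.
bool feasible(long long num1, long long num2, int k) {
    long long x = num1 - (long long)k * num2;
    if (x < k) return false;  // k powers of two sum to at least k
    return __builtin_popcountll((unsigned long long)x) <= k;
}
```

For Example 1 (num1 = 3, num2 = -2): k = 1 gives x = 5 (two set bits, infeasible), k = 2 gives x = 7 (three set bits, infeasible), and k = 3 gives x = 9 (two set bits, feasible), so the answer is 3.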
Approach 2: Dynamic Programming (DP) for Minimization (O(K * B) time, O(K) space)
A more explicit strategy models the process with dynamic programming. Treat each operation as selecting a power 2^i and subtracting num2 along with it. The DP tracks the minimum operations required to form different sums of powers of two while accounting for the cumulative subtraction of num2. Each state represents how many operations have been used and the achievable remainder value.
You iterate over possible powers of two and update states similarly to a knapsack transition. While this approach demonstrates the structure of the problem, the state space grows quickly because powers can repeat and the target value changes with every operation. In practice, DP is slower and more complex than the greedy insight derived from binary decomposition.
Recommended for interviews: The greedy check using popcount and binary constraints is the expected solution. It shows strong understanding of bit manipulation and mathematical reasoning. Mentioning a DP formulation can demonstrate problem exploration, but the 60-iteration greedy check is what interviewers usually look for.
This approach checks each candidate operation count k = 1..60 in increasing order. For a given k, the k chosen powers of two must sum to x = num1 - k * num2, so a solution with k operations exists exactly when x >= k and popcount(x) <= k. If no k in [1, 60] satisfies the condition, we return -1.
The function computes x for each k, counts its set bits, and returns k as soon as the feasibility condition holds. If the loop finishes without success, it returns -1.
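A hedged C++ sketch of this greedy loop (the function name follows the usual LeetCode signature for this problem):

```cpp
// Greedy check over candidate operation counts k = 1..60.
// For each k, x = num1 - k * num2 must be writable as a sum of exactly
// k powers of two, i.e. x >= k and popcount(x) <= k.
int makeTheIntegerZero(int num1, int num2) {
    for (int k = 1; k <= 60; ++k) {
        long long x = (long long)num1 - (long long)k * num2;
        if (x >= k && __builtin_popcountll((unsigned long long)x) <= k)
            return k;
    }
    return -1;  // no k in [1, 60] works
}
```

Note the short-circuit: when x < k (in particular when x is negative, which happens for large k once num2 > 0), the popcount is never consulted, so the cast to unsigned is safe.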
Time Complexity: O(60) ≈ O(1); the loop runs at most 60 iterations, each doing O(1) work (a popcount and a comparison).
Space Complexity: O(1), only a few variables are used.
This approach leverages dynamic programming to store intermediate results of subproblems, reducing redundant calculations. We systematically explore the impact of subtracting each (2^i + num2) on num1, storing the minimum steps required to reach zero. This method illustrates the structure of the problem but requires far more memory.
This DP solution initializes an array where dp[v] is the minimum number of operations needed to reduce the value v to zero, with transition dp[v] = min(dp[v], dp[v - (2^i + num2)] + 1) whenever the step size is positive and at most v. Note that this is only tractable when num1 is small and num2 >= 0; under the full constraints (num1 up to 10^9, num2 possibly negative, so intermediate values can leave [0, num1]) the state space is impractical, which is why the greedy check is preferred.
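A minimal C++ sketch of this value-indexed DP, for conceptual illustration only. It assumes num2 >= 0 (so every step size 2^i + num2 is positive) and a small num1; `minOpsDP` is an illustrative name, not a library or LeetCode API.

```cpp
#include <vector>
#include <algorithm>

// dp[v] = minimum operations to reduce v to exactly 0, or INF if impossible.
// Transition: dp[v] = min over i of dp[v - (2^i + num2)] + 1, taken only
// when the step size is positive and does not overshoot v.
int minOpsDP(int num1, int num2) {
    const int INF = 1000000000;
    std::vector<int> dp(num1 + 1, INF);
    dp[0] = 0;
    for (int v = 1; v <= num1; ++v) {
        for (int i = 0; i <= 60; ++i) {
            long long step = (1LL << i) + num2;   // one operation removes this much
            if (step <= 0 || step > v) continue;  // stay inside the table
            int prev = dp[v - (int)step];
            if (prev != INF) dp[v] = std::min(dp[v], prev + 1);
        }
    }
    return dp[num1] == INF ? -1 : dp[num1];
}
```

For instance, minOpsDP(3, 0) finds 3 = 2 + 1, i.e. two operations, while minOpsDP(5, 7) returns -1 because every step size 2^i + 7 already exceeds 5, matching Example 2.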
Time Complexity: O(num1 * 61), for each num1 value, checks up to 61 possibilities for i.
Space Complexity: O(num1), required to store DP results for each subproblem.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Greedy Subtraction with Powers of 2 | O(60) | O(1) | Best general solution using bit manipulation and binary popcount checks |
| Dynamic Programming for Minimization | O(K * B) | O(K) | Useful for conceptual understanding or when modeling the process explicitly |