Watch 10 video solutions for Minimum Operations to Reduce an Integer to 0, a medium-level problem involving Dynamic Programming, Greedy, and Bit Manipulation. This walkthrough by Aryan Mittal has 4,745 views. Want to try solving it yourself? Practice on FleetCode or read the detailed text solution.
You are given a positive integer n, you can do the following operation any number of times:
- Add or subtract a power of 2 from n.
Return the minimum number of operations to make n equal to 0.
A number x is a power of 2 if x == 2^i where i >= 0.
Example 1:
Input: n = 39
Output: 3
Explanation: We can do the following operations:
- Add 2^0 = 1 to n, so now n = 40.
- Subtract 2^3 = 8 from n, so now n = 32.
- Subtract 2^5 = 32 from n, so now n = 0.
It can be shown that 3 is the minimum number of operations we need to make n equal to 0.
Example 2:
Input: n = 54
Output: 3
Explanation: We can do the following operations:
- Add 2^1 = 2 to n, so now n = 56.
- Add 2^3 = 8 to n, so now n = 64.
- Subtract 2^6 = 64 from n, so now n = 0.
So the minimum number of operations is 3.
Constraints:
1 <= n <= 10^5
Problem Overview: You are given an integer n. In one operation you can add or subtract any power of two (2^k). The goal is to reduce the number to exactly 0 using the minimum number of operations.
Approach 1: Binary Representation Manipulation (O(log n) time, O(1) space)
This approach works directly on the binary representation of n. Each power of two corresponds to a single bit, so the task becomes minimizing the number of bit adjustments needed to turn the number into zero. Iterate through the bits from least significant to most significant. When you encounter a run of consecutive 1s, performing a carry by adding a power of two costs a single operation and turns a block like 0111 into 1000, whereas subtracting costs one operation per set bit; for runs of two or more 1s the carry is never worse and usually saves operations. The algorithm essentially simulates how binary addition and subtraction propagate carries while scanning the bits. Because the number of bits in n is O(log n), the entire process runs in logarithmic time. This technique relies on understanding bit manipulation patterns and how binary carries affect optimal decisions.
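A minimal Python sketch of this bit-scanning idea, assuming an LSB-first scan with an explicit carry flag; the function name and the one-bit lookahead are illustrative choices, not the only way to implement the carry rule:

```python
def min_operations_bit_scan(n: int) -> int:
    # Scan bits from least to most significant while simulating a carry.
    ops = 0
    carry = 0
    i = 0
    while (n >> i) != 0 or carry:
        cur = ((n >> i) & 1) + carry
        if cur == 2:
            carry = 1            # position already cleared by the incoming carry
        elif cur == 1:
            if (n >> (i + 1)) & 1:
                carry = 1        # next bit is 1: add 2^i so the carry collapses the run
            else:
                carry = 0        # isolated 1: subtract 2^i
            ops += 1             # either choice costs exactly one operation
        else:
            carry = 0            # nothing to do at this position
        i += 1
    return ops


# Example: matches the walkthrough above.
print(min_operations_bit_scan(39))  # 3
print(min_operations_bit_scan(54))  # 3
```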
Approach 2: Greedy Power of Two Matching (O(log n) time, O(1) space)
The greedy strategy focuses on choosing the operation that moves n closest to zero at each step. Compute the lowest set bit using n & -n. That value represents the smallest power of two currently contributing to the number. The key decision is whether to subtract this lowbit or add it: if the bit directly above the lowest set bit is also 1, adding the lowbit triggers a carry that collapses the whole run of consecutive 1s into a single higher bit; otherwise the 1 is isolated and subtracting clears it. This greedy choice is safe because each operation only affects bits at or above the current lowest set bit, so lower positions stay zero and local decisions never create extra work later. The method repeatedly updates n until it becomes zero, performing at most one operation per set bit. The logic naturally connects with concepts from greedy algorithms and bit counting.
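A compact sketch of that greedy loop, assuming Python; the helper name is illustrative, and the lookahead checks the bit directly above the current lowbit:

```python
def min_operations_greedy(n: int) -> int:
    # Repeatedly handle the lowest set bit until n reaches zero.
    ops = 0
    while n:
        low = n & -n             # smallest power of two present in n
        if n & (low << 1):       # bit just above is also set: a run of 1s
            n += low             # add to trigger a carry that collapses the run
        else:
            n -= low             # isolated 1: just subtract it
        ops += 1
    return ops


# Usage mirroring the problem examples.
assert min_operations_greedy(39) == 3
assert min_operations_greedy(54) == 3
```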
Recommended for interviews: The greedy bit-based solution is what most interviewers expect. It shows you understand binary structure and can reason about carries in bit operations. A brute-force or dynamic programming idea demonstrates correctness thinking, but the greedy bit manipulation solution proves you can optimize using properties of powers of two.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Binary Representation Manipulation | O(log n) | O(1) | Best general solution when working directly with bit patterns and minimizing operations via carry handling |
| Greedy Power of Two Matching | O(log n) | O(1) | Preferred in interviews when explaining decisions using lowest set bit and greedy updates |
| Dynamic Programming (conceptual) | O(n log n) | O(n) | Useful for understanding the problem as a state transition problem but impractical for large n |
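For the dynamic programming row above, one way the state-transition view could be sketched: the state is the current value, the transition either adds or subtracts its lowest set bit, and memoization caches repeated states. This is an illustrative formulation with a hypothetical function name, not the only way to bound the state space.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_operations_dp(x: int) -> int:
    # State: the current value; transition: add or subtract its lowest set bit.
    if x == 0:
        return 0
    low = x & -x                 # lowest set bit of x
    if x == low:
        return 1                 # x is itself a power of two: one subtraction
    # Both successors have a strictly higher lowest set bit, so the recursion
    # terminates after at most bit_length(x) levels.
    return 1 + min(min_operations_dp(x - low), min_operations_dp(x + low))


print(min_operations_dp(39))  # 3
print(min_operations_dp(54))  # 3
```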