You are given a positive integer n. You can do the following operation any number of times:

- Add or subtract a power of 2 from n.

Return the minimum number of operations to make n equal to 0.

A number x is a power of 2 if x == 2^i for some integer i >= 0.
Example 1:
Input: n = 39
Output: 3
Explanation: We can do the following operations:
- Add 2^0 = 1 to n, so now n = 40.
- Subtract 2^3 = 8 from n, so now n = 32.
- Subtract 2^5 = 32 from n, so now n = 0.
It can be shown that 3 is the minimum number of operations we need to make n equal to 0.
Example 2:
Input: n = 54
Output: 3
Explanation: We can do the following operations:
- Add 2^1 = 2 to n, so now n = 56.
- Add 2^3 = 8 to n, so now n = 64.
- Subtract 2^6 = 64 from n, so now n = 0.
So the minimum number of operations is 3.
Constraints:
1 <= n <= 10^5

Problem Overview: You are given an integer n. In one operation you can add or subtract any power of two (2^k). The goal is to reduce the number to exactly 0 using the minimum number of operations.
Approach 1: Binary Representation Manipulation (O(log n) time, O(1) space)
This approach works directly on the binary representation of n. Each power of two corresponds to a single bit, so the task becomes minimizing the number of bit adjustments needed to turn the number into zero. Iterate through the bits from least significant to most significant. When you encounter consecutive 1s, it is often better to perform a carry by adding a power of two rather than subtracting repeatedly. This turns a block like 111 into 1000, reducing future operations. The algorithm essentially simulates how binary addition and subtraction propagate carries while scanning the bits. Because the number of bits in n is O(log n), the entire process runs in logarithmic time. This technique relies heavily on understanding bit manipulation patterns and how binary carries affect optimal decisions.
Approach 2: Greedy Power of Two Matching (O(log n) time, O(1) space)
The greedy strategy focuses on choosing the power of two that moves n closest to zero at each step. Compute the lowest set bit using n & -n. That value represents the smallest power of two currently contributing to the number. The key decision is whether subtracting that value or adding the next power of two leads to fewer remaining set bits. If subtracting creates a long chain of borrows (many adjacent 1s), adding the next power of two collapses the sequence via a carry. This greedy decision works because powers of two form a canonical representation of integers, making local improvements globally optimal. The method repeatedly updates n until it becomes zero, performing at most one operation per bit. The logic naturally connects with concepts from greedy algorithms and bit counting.
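The greedy decision described above can be sketched in Python as follows (an illustrative sketch; the function name is not from the original):

```python
def min_operations(n: int) -> int:
    """Greedily add or subtract the lowest set bit until n reaches 0."""
    ops = 0
    while n:
        low = n & -n          # lowest set bit (smallest power of 2 in n)
        if n & (low << 1):    # next higher bit is also set: a run of 1s
            n += low          # add: the carry collapses the run
        else:
            n -= low          # isolated 1: subtract it directly
        ops += 1
    return ops
```

For n = 39 (0b100111) this performs add 1, subtract 8, subtract 32 and returns 3, matching Example 1.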
Recommended for interviews: The greedy bit-based solution is what most interviewers expect. It shows you understand binary structure and can reason about carries in bit operations. A brute-force or dynamic programming idea demonstrates correctness thinking, but the greedy bit manipulation solution proves you can optimize using properties of powers of two.
This approach leverages the binary representation of the integer. Each bit in the binary form of the number denotes whether a particular power of 2 is present or not. Flipping each bit can effectively add or subtract the power of 2 it represents. We can make use of bitwise operations to achieve this and count the number of necessary operations.
The C implementation uses a while loop that runs until n becomes zero. During each iteration it inspects the least significant bit (n & 1). An isolated 1 is cleared by subtracting that power of two (one operation). If the bit is part of a run of consecutive 1s (the next bit, n & 2, is also set), it is cheaper to add that power of two instead, starting a carry that propagates through the run; the carry is tracked across iterations. The loop then right shifts n by one bit (n >>= 1) and continues. Note that simply counting and flipping every set bit is not sufficient: for n = 39 that would give 4 operations, while the true minimum is 3.
The time complexity is O(b), where b is the number of bits in the binary representation of n. The space complexity is O(1).
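A bit-by-bit scan with an explicit carry, as described above, might look like this (a minimal Python sketch; the name and carry bookkeeping are illustrative):

```python
def min_operations_scan(n: int) -> int:
    """Scan bits from least to most significant, simulating carries."""
    ops = carry = 0
    while n or carry:
        bit = (n & 1) + carry
        if bit == 2:          # 1 plus an incoming carry: carry keeps going
            carry = 1
        elif bit == 1:
            if n & 2:         # next bit is 1 too: add, starting a carry
                carry = 1
            else:             # isolated 1: subtract it
                carry = 0
            ops += 1          # either choice costs one operation
        else:                 # bit == 0: nothing to do here
            carry = 0
        n >>= 1
    return ops
```

For n = 54 (0b110110) the scan starts a carry at the first run of 1s, merges it into the second run, and counts 3 operations, matching Example 2.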
This method greedily matches powers of 2 and decides, at each step, whether to add or subtract. The binary form of the number guides which power to manipulate: runs of adjacent 1s are collapsed by adding a lower power (carrying into a single higher 1), while isolated 1s are removed by direct subtraction.
This greedy implementation finds the largest power of 2 not exceeding n (via log2) and evaluates two routes: subtract that power from n, or add up to its double (the next power of 2) and continue from the overshoot. Recursion takes the minimum operation count over these two branches.
The recursion depth is O(log n), so the space complexity is O(log n) for the call stack. Because both branches are explored at each step, the total number of calls can exceed O(log n) in the worst case; memoization keeps the work modest for the given constraint n <= 10^5.
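That branching recursion can be expressed as below (a sketch; `bit_length` is used as a float-safe equivalent of floor(log2(n)), and the function name is illustrative):

```python
def min_ops_recursive(n: int) -> int:
    """Branch on the two nearest powers of 2 and take the minimum."""
    if n == 0:
        return 0
    k = n.bit_length() - 1     # floor(log2(n)) without float rounding error
    low = 1 << k               # largest power of 2 <= n
    if low == n:
        return 1               # n is itself a power of 2: one subtraction
    high = low << 1            # smallest power of 2 > n
    # Either subtract `low` and finish off the remainder,
    # or add (high - n) to reach `high`, then subtract it.
    return 1 + min(min_ops_recursive(n - low), min_ops_recursive(high - n))
```

For n = 39 the cheaper branch is the remainder route (39 - 32 = 7, then 7 -> 8 -> 0), giving 3 operations.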
We scan the binary representation of n from the lowest bit, grouping consecutive 1s into runs. For each run we check its length: a run consisting of a single 1 can be eliminated with one operation (subtract that power of 2); a run longer than 1 takes two operations (add the lowest power in the run so a carry collapses it into a single higher 1, then subtract that). The carry produced by a long run may merge with the next run above it, so runs must be processed from low to high with the carry taken into account.
The time complexity is O(log n), and the space complexity is O(1). Here, n is the given integer in the problem.
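The run-of-1s counting described above can be sketched like this (illustrative Python; the carry from a long run is fed back into n so it can merge with the run above):

```python
def min_ops_runs(n: int) -> int:
    """Count operations per run of consecutive 1 bits, merging carries."""
    ops = 0
    while n:
        if n & 1 == 0:
            n >>= 1           # skip zeros between runs
            continue
        run = 0
        while n & 1:          # measure the current run of 1s
            run += 1
            n >>= 1
        if run == 1:
            ops += 1          # a single 1: one subtraction
        else:
            ops += 1          # a longer run: one addition triggers a carry
            n += 1            # the carried 1 may merge with the run above
    return ops
```

The second operation for a long run (subtracting the carried power) is charged later, when the carried lone 1 is processed; for n = 54 the two runs merge through the carry, so the total is 3 rather than 4.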
| Approach | Complexity |
|---|---|
| Binary Representation Manipulation | O(log n) time, O(1) space |
| Greedy Power of Two Matching | O(log n) time, O(1) space |
| Greedy + Bitwise Operation | O(log n) time, O(1) space |
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Binary Representation Manipulation | O(log n) | O(1) | Best general solution when working directly with bit patterns and minimizing operations via carry handling |
| Greedy Power of Two Matching | O(log n) | O(1) | Preferred in interviews when explaining decisions using lowest set bit and greedy updates |
| Dynamic Programming (conceptual) | O(n log n) | O(n) | Useful for understanding the problem as a state transition problem but impractical for large n |