You are given an integer array nums and three integers k, op1, and op2.
You can perform the following operations on nums:
1. Choose an index i and divide nums[i] by 2, rounding up to the nearest whole number. You can perform this operation at most op1 times, and not more than once per index.
2. Choose an index i and subtract k from nums[i], but only if nums[i] is greater than or equal to k. You can perform this operation at most op2 times, and not more than once per index.

Note: Both operations can be applied to the same index, but at most once each.
Return the minimum possible sum of all elements in nums after performing any number of operations.
Example 1:
Input: nums = [2,8,3,19,3], k = 3, op1 = 1, op2 = 1
Output: 23
Explanation:
Apply Operation 2 to nums[1] = 8, making nums[1] = 5.
Apply Operation 1 to nums[3] = 19, making nums[3] = 10.
The resulting array is [2, 5, 3, 10, 3], which has the minimum possible sum of 23 after applying the operations.

Example 2:
Input: nums = [2,4,3], k = 3, op1 = 2, op2 = 1
Output: 3
Explanation:
Apply Operation 1 to nums[0] = 2, making nums[0] = 1.
Apply Operation 1 to nums[1] = 4, making nums[1] = 2.
Apply Operation 2 to nums[2] = 3, making nums[2] = 0.
The resulting array is [1, 2, 0], which has the minimum possible sum of 3 after applying the operations.
Constraints:
- 1 <= nums.length <= 100
- 0 <= nums[i] <= 10^5
- 0 <= k <= 10^5
- 0 <= op1, op2 <= nums.length

Problem Overview: You are given an array of integers and a limited number of operations that modify element values. Each operation reduces a number in a specific way (for example, halving it or subtracting a constant). The goal is to apply these operations optimally so the total array sum becomes as small as possible.
Approach 1: Greedy with Priority Queue (O((n + op1 + op2) log n) time, O(n) space)
The greedy strategy always applies the operation that produces the largest immediate reduction in value. Push all elements into a max heap so the largest element is processed first. For each available operation, pop the current maximum, compute the reduction produced by the allowed operation, update the value, and push it back into the heap. This works well when the best choice at each step is the element producing the biggest decrease in the sum. The heap ensures efficient extraction and reinsertion while keeping the array effectively ordered by impact.
This approach is intuitive and fast in practice. You repeatedly evaluate which operation gives the maximum decrease and apply it to the largest candidate. Since heap operations take O(log n), the total cost depends on the number of operations performed.
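The strategy above can be sketched in Python as follows. The function name and per-index bookkeeping are illustrative; Python's heapq is a min-heap, so values are stored negated to simulate a max heap.

```python
import heapq

def min_sum_greedy_heap(nums, k, op1, op2):
    vals = nums[:]
    n = len(vals)
    used1 = [False] * n  # halving already applied at this index?
    used2 = [False] * n  # subtraction already applied at this index?
    # heapq is a min-heap, so store negated values for max extraction.
    heap = [(-v, i) for i, v in enumerate(vals)]
    heapq.heapify(heap)
    while heap and (op1 > 0 or op2 > 0):
        neg, i = heapq.heappop(heap)
        v = -neg
        # Reduction from halving: v - ceil(v / 2); from subtraction: k.
        gain1 = v - (v + 1) // 2 if op1 > 0 and not used1[i] else 0
        gain2 = k if op2 > 0 and not used2[i] and v >= k else 0
        if gain1 == 0 and gain2 == 0:
            continue  # no remaining operation helps this element
        if gain1 >= gain2:
            vals[i] = (v + 1) // 2
            used1[i] = True
            op1 -= 1
        else:
            vals[i] -= k
            used2[i] = True
            op2 -= 1
        heapq.heappush(heap, (-vals[i], i))  # re-insert the updated value
    return sum(vals)
```

On Example 1 this returns the expected 23; on Example 2 it returns 4 rather than the optimal 3, which illustrates why greedy choices alone can miss the optimum (see Approach 2).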
Approach 2: Dynamic Programming (O(n * op1 * op2) time, O(op1 * op2) space)
When operations interact or their order matters, greedy choices may not always lead to the global minimum. Dynamic programming models every valid state explicitly. Define a DP state representing the minimum sum achievable after processing the first i elements while using a certain number of operations of each type. For each element, evaluate all valid combinations: no operation, operation A, operation B, or both if allowed.
Transitions update the state by applying the operation’s transformation to the current number and adding the resulting value to the accumulated sum. The DP table ensures every combination of operations is considered exactly once, guaranteeing the globally minimal sum. This approach is more expensive but reliable when greedy assumptions break.
The state transitions iterate through elements and update remaining operation counts. This makes the solution deterministic and safe for edge cases where applying a smaller local reduction enables a larger future gain.
Recommended for interviews: Start by explaining the greedy intuition since minimizing the largest contributors to the sum is a natural first idea. Then discuss why overlapping operation choices may require a dynamic programming formulation. Interviewers typically expect you to recognize the greedy optimization quickly, but implementing a correct DP over an array with operation counts demonstrates stronger problem‑solving depth.
In this approach, we aim to minimize the array sum by applying operations greedily. The main idea is to sort the array and apply the operations that give the maximum immediate decrease in the sum, prioritizing larger numbers first.
Steps:

1. Sort the array in descending order so the largest values are considered first.
2. Apply the halving operation (Operation 1) to the largest elements, where halving removes the most from the sum.
3. Re-sort, then apply the subtraction operation (Operation 2) to the largest remaining values that are at least k.
4. Return the sum of the resulting array.

This way, the operations have maximal reduction impact on the sum.
The code sorts the array in descending order and applies the operations with the highest reduction in value first: the division operation where it can be applied, then the subtraction where it can still meaningfully reduce a value.
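A Python sketch of this sort-based greedy (the function name is illustrative):

```python
def min_sum_greedy_sort(nums, k, op1, op2):
    # Largest values first: halving a big number removes the most from the sum.
    arr = sorted(nums, reverse=True)
    # Apply the division operation (round up) to the op1 largest elements.
    for i in range(min(op1, len(arr))):
        arr[i] = (arr[i] + 1) // 2
    # Re-sort, then apply the subtraction to the largest values still >= k.
    arr.sort(reverse=True)
    remaining = op2
    for i in range(len(arr)):
        if remaining == 0:
            break
        if arr[i] >= k:
            arr[i] -= k
            remaining -= 1
    return sum(arr)
```

This reproduces Example 1's answer of 23, but like any pure greedy it can miss combinations where both operations on one element pay off (as in Example 2), which the dynamic programming approach handles.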
Time Complexity: O(n log n) due to sorting.
Space Complexity: O(1) besides the input array.
This approach uses dynamic programming to find a minimal configuration for the sum by exploring all combinations of operations applied.
Steps:

1. Define dp[a][b] as the minimum sum of the elements processed so far, using a halving operations and b subtraction operations.
2. For each element, extend every reachable state (a, b) with all four choices: no operation, halving only, subtraction only (if the value is at least k), or both (in whichever order is valid and cheaper).
3. After processing all elements, return the minimum value over all states.

This allows tracking the impact of each operation on the whole array efficiently.
The DP table tracks the minimal achievable sum over all valid applications of the operations. Each cell represents the best sum state given the number of operations of each type used so far.
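A Python sketch of this DP, using a rolling table so the space stays O(op1 * op2) (the function name is illustrative):

```python
def min_sum_dp(nums, k, op1, op2):
    INF = float("inf")
    # dp[a][b] = minimum sum of processed elements using a halvings
    # and b subtractions.
    dp = [[INF] * (op2 + 1) for _ in range(op1 + 1)]
    dp[0][0] = 0
    for num in nums:
        ndp = [[INF] * (op2 + 1) for _ in range(op1 + 1)]
        half = (num + 1) // 2  # divide by 2, rounding up
        for a in range(op1 + 1):
            for b in range(op2 + 1):
                cur = dp[a][b]
                if cur == INF:
                    continue
                # 1) No operation on this element.
                ndp[a][b] = min(ndp[a][b], cur + num)
                # 2) Halving only.
                if a < op1:
                    ndp[a + 1][b] = min(ndp[a + 1][b], cur + half)
                # 3) Subtraction only (requires num >= k).
                if b < op2 and num >= k:
                    ndp[a][b + 1] = min(ndp[a][b + 1], cur + num - k)
                # 4) Both operations, in whichever order is valid and cheaper.
                if a < op1 and b < op2:
                    best = INF
                    if num >= k:
                        best = min(best, (num - k + 1) // 2)  # subtract, then halve
                    if half >= k:
                        best = min(best, half - k)            # halve, then subtract
                    if best < INF:
                        ndp[a + 1][b + 1] = min(ndp[a + 1][b + 1], cur + best)
        dp = ndp
    return min(min(row) for row in dp)
```

This recovers the optimal answer for both examples, including Example 2, where applying both operations in the right order is what the greedy misses.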
Time Complexity: O(n*op1*op2). In this form, it iterates over op1 and op2 for each number.
Space Complexity: O(op1*op2) for the DP table.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Greedy with Max Heap | O((n + op1 + op2) log n) | O(n) | When each operation independently reduces the largest element and greedy decisions are optimal |
| Dynamic Programming | O(n * op1 * op2) | O(op1 * op2) | When operations interact or their order affects the final result |
Video solution: Minimum Array Sum | Leetcode 3366 | codestorywithMIK