You are given two 0-indexed integer arrays nums1 and nums2 of equal length. Every second, for all indices 0 <= i < nums1.length, the value of nums1[i] is incremented by nums2[i]. After this is done, you can do the following operation: choose an index 0 <= i < nums1.length and make nums1[i] = 0.

You are also given an integer x.
Return the minimum time in which you can make the sum of all elements of nums1 less than or equal to x, or -1 if this is not possible.
Example 1:
Input: nums1 = [1,2,3], nums2 = [1,2,3], x = 4
Output: 3
Explanation:
For the 1st second, we apply the operation on i = 0. Therefore nums1 = [0,2+2,3+3] = [0,4,6].
For the 2nd second, we apply the operation on i = 1. Therefore nums1 = [0+1,0,6+3] = [1,0,9].
For the 3rd second, we apply the operation on i = 2. Therefore nums1 = [1+1,0+2,0] = [2,2,0].
Now sum of nums1 = 4. It can be shown that these operations are optimal, so we return 3.
Example 2:
Input: nums1 = [1,2,3], nums2 = [3,3,3], x = 4
Output: -1
Explanation: It can be shown that the sum of nums1 will always be greater than x, no matter which operations are performed.
Constraints:
- 1 <= nums1.length <= 10^3
- 1 <= nums1[i] <= 10^3
- 0 <= nums2[i] <= 10^3
- nums1.length == nums2.length
- 0 <= x <= 10^6

Problem Overview: You are given two arrays nums1 and nums2. Every second, each element in nums1 increases by its corresponding value in nums2; after this increment, you may choose one index and set nums1[i] to 0. The goal is to determine the minimum number of seconds required so the total array sum becomes ≤ x.
The challenge comes from two competing effects: the array sum keeps increasing every second, but resetting an element removes both its current value and the growth it accumulated. Choosing the right indices and the right order is the key optimization problem.
Approach 1: Greedy Ordering with Sorting (O(n log n) + O(n²) evaluation)
The value of resetting an index grows over time because nums1[i] accumulates t * nums2[i] after t seconds. Elements with larger nums2[i] grow faster, so resetting them later produces a larger reduction. This observation leads to sorting indices by nums2 in ascending order so that slower-growing elements are considered earlier and fast-growing ones are delayed.
After sorting, simulate selecting elements over successive seconds while tracking how much total-sum reduction each reset produces. The greedy insight fixes the candidate order, while the evaluation checks whether the accumulated reduction is enough to push the total sum down to at most x. Sorting dominates preprocessing at O(n log n); evaluating every candidate number of resets t from 0 to n costs O(n²). Space complexity remains O(n).
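To make that evaluation concrete, here is a minimal Java sketch of the ordering heuristic. The class and method names are illustrative, and the choice of which t elements to reset (here, the t fastest growers, cleared slowest-to-fastest so the fastest grower is delayed the longest) is an assumption the prose above leaves open; the DP in Approach 2 removes that guesswork.

```java
import java.util.*;

class GreedyOrderingSketch {
    // Heuristic: sort indices by nums2 ascending, then for each candidate
    // number of resets t, reset the t fastest-growing elements at seconds
    // 1..t (slowest of them first) and check whether the sum fits under x.
    static int minTimeGreedy(int[] nums1, int[] nums2, int x) {
        int n = nums1.length;
        Integer[] idx = new Integer[n];
        for (int i = 0; i < n; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingInt(i -> nums2[i]));

        long s1 = 0, s2 = 0;
        for (int i = 0; i < n; i++) { s1 += nums1[i]; s2 += nums2[i]; }

        for (int t = 0; t <= n; t++) {            // try t resets in t seconds
            long reduction = 0;
            for (int k = 1; k <= t; k++) {        // k-th reset at second k
                int i = idx[n - t + k - 1];       // the t largest growers
                reduction += nums1[i] + (long) nums2[i] * k;
            }
            if (s1 + s2 * t - reduction <= x) return t;
        }
        return -1;                                // no t in [0, n] works
    }
}
```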
Approach 2: Dynamic Programming with Removal Array (O(n²) time, O(n) space)
A more precise formulation treats the problem like a knapsack-style dynamic programming optimization. First compute the base sum growth after t seconds: sum(nums1) + t * sum(nums2). Then maximize how much value you can remove by choosing t reset operations.
Sort elements by nums2 in ascending order and define dp[t] as the maximum reduction achievable using exactly t resets, where the k-th reset happens at second k. When processing element i, the transition adds the benefit of resetting that element at time t: nums1[i] + t * nums2[i]. Iterate t backwards so that each element is counted at most once.
After computing the DP values, check each possible time t: if sum(nums1) + t * sum(nums2) - dp[t] ≤ x, then t seconds is feasible, and the first such t is the answer. The algorithm runs in O(n²) time and uses O(n) extra space.
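A minimal Java sketch of this formulation, assuming the one-dimensional dp array described above (names are illustrative):

```java
import java.util.*;

class RemovalArrayDp {
    static int minTime(int[] nums1, int[] nums2, int x) {
        int n = nums1.length;
        // Pair (growth rate, start value), sorted ascending by growth rate so
        // faster growers become eligible for later, more valuable reset slots.
        int[][] pairs = new int[n][2];
        for (int i = 0; i < n; i++) pairs[i] = new int[]{nums2[i], nums1[i]};
        Arrays.sort(pairs, (a, b) -> Integer.compare(a[0], b[0]));

        long[] dp = new long[n + 1];   // dp[t]: max reduction with t resets
        for (int[] p : pairs) {
            for (int t = n; t >= 1; t--) {   // backwards: each element once
                dp[t] = Math.max(dp[t], dp[t - 1] + p[1] + (long) p[0] * t);
            }
        }

        long s1 = 0, s2 = 0;
        for (int i = 0; i < n; i++) { s1 += nums1[i]; s2 += nums2[i]; }
        for (int t = 0; t <= n; t++) {
            if (s1 + s2 * t - dp[t] <= x) return t;   // first feasible t
        }
        return -1;
    }
}
```

Iterating t backwards plays the same role as in 0/1 knapsack: it prevents the element being processed from occupying more than one reset slot.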
Recommended for interviews: The dynamic programming formulation is what most interviewers expect. Sorting by nums2 shows you understand the growth behavior, while the DP step proves you can convert the scheduling problem into a maximization problem. Mentioning the greedy intuition first often helps explain why the DP state transition works.
The core idea of this approach is to selectively zero elements in nums1 to prevent them from contributing to the sum in subsequent seconds. At each step, choose the element with the largest current increment effect, typically measured by nums2. This greedy strategy minimizes the impact of future increments. It does not guarantee an optimal solution, but it is a reasonable attempt in scenarios where heuristic approaches suffice.
Start with the initial sum of nums1. Each second, add nums2[i] to every nums1[i], then zero the maximum-impact index (impact here is measured by nums2). Update the total sum of nums1 and repeat until the sum is less than or equal to x, or return -1 if no solution is possible; a sketch follows the complexity notes below.
Time Complexity: O(n^2), due to the repeated summing of nums1 and element zeroing operation in the loop.
Space Complexity: O(1), as we're only using extra variables and modifying nums1 in place.
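A short Java simulation of this heuristic, for illustration only (the helper arrays are for clarity, and names are illustrative). As noted, it does not guarantee optimality: on Example 1 it returns -1 even though the answer is 3, which is exactly the gap the DP approaches close.

```java
class GreedySimulation {
    static int minTimeHeuristic(int[] nums1, int[] nums2, int x) {
        int n = nums1.length;
        long[] cur = new long[n];
        for (int i = 0; i < n; i++) cur[i] = nums1[i];
        boolean[] done = new boolean[n];

        for (int t = 0; t <= n; t++) {
            long sum = 0;
            for (long v : cur) sum += v;
            if (sum <= x) return t;            // feasible after t seconds
            if (t == n) break;                 // every index reset once already
            for (int i = 0; i < n; i++) cur[i] += nums2[i];   // growth step
            int best = -1;                     // max-impact index by nums2
            for (int i = 0; i < n; i++)
                if (!done[i] && (best < 0 || nums2[i] > nums2[best])) best = i;
            cur[best] = 0;
            done[best] = true;
        }
        return -1;
    }
}
```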
This approach uses a dynamic programming strategy built around a decision-based removal array: at each second it computes the optimal decision point for which elements to zero, so that present decisions minimize future increments. This yields a more adaptive solution than greedy reduction.
This Java solution fills a dynamic programming table over decision points, using two-dimensional indexing to store the minimal increments required to keep the running sum within bounds; future increments are minimized through this adaptive arrangement.
Time Complexity: O(n * x), which arises from dp table filling.
Space Complexity: O(n * x), considering the space taken by the dp table.
We notice that if we operate on the same number multiple times, only the last operation is meaningful; the earlier operations on that number merely give the other numbers more time to grow. Therefore, we operate on each number at most once, so the number of operations lies in $[0, n]$.
Let's assume that we have performed $j$ operations, and the indices of the numbers operated on are $i_1, i_2, \cdots, i_j$. For these $j$ operations, the amount by which each operation reduces the sum of the array elements is:

$$
\begin{aligned}
d_1 &= nums_1[i_1] + nums_2[i_1] \times 1 \\
d_2 &= nums_1[i_2] + nums_2[i_2] \times 2 \\
&\cdots \\
d_j &= nums_1[i_j] + nums_2[i_j] \times j
\end{aligned}
$$
From a greedy perspective, in order to maximize the reduction of the sum of array elements, we should let the larger elements in nums_2 be operated on as late as possible. Therefore, we can sort nums_1 and nums_2 in ascending order of the element values in nums_2.
Next, we consider the implementation of dynamic programming. We use f[i][j] to represent the maximum value that can reduce the sum of array elements for the first i elements of the array nums_1 with j operations. We can get the following state transition equation:
$$f[i][j] = \max\{f[i-1][j],\ f[i-1][j-1] + nums_1[i] + nums_2[i] \times j\}$$
Finally, we enumerate $j$ and find the smallest $j$ that satisfies $s_1 + s_2 \times j - f[n][j] \le x$, where $s_1$ and $s_2$ denote the sums of $nums_1$ and $nums_2$, respectively.
The time complexity is $O(n^2)$, and the space complexity is $O(n^2)$, where $n$ is the length of the array.
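As a concrete reference, here is a Java sketch that transcribes the recurrence above directly, keeping the full two-dimensional table f (hence the $O(n^2)$ space just noted); class and method names are illustrative.

```java
import java.util.*;

class SortingDpEditorial {
    static int minimumTime(int[] nums1, int[] nums2, int x) {
        int n = nums1.length;
        int[][] pairs = new int[n][2];
        for (int i = 0; i < n; i++) pairs[i] = new int[]{nums2[i], nums1[i]};
        Arrays.sort(pairs, (a, b) -> Integer.compare(a[0], b[0]));  // by nums2

        // f[i][j]: max reduction using j operations on the first i elements,
        // with the j-th operation performed at second j.
        long[][] f = new long[n + 1][n + 1];
        for (int i = 1; i <= n; i++) {
            for (int j = 0; j <= i; j++) {
                f[i][j] = f[i - 1][j];
                if (j > 0) {
                    f[i][j] = Math.max(f[i][j],
                        f[i - 1][j - 1] + pairs[i - 1][1] + (long) pairs[i - 1][0] * j);
                }
            }
        }

        long s1 = 0, s2 = 0;
        for (int i = 0; i < n; i++) { s1 += nums1[i]; s2 += nums2[i]; }
        for (int j = 0; j <= n; j++) {
            if (s1 + s2 * j - f[n][j] <= x) return j;  // smallest feasible j
        }
        return -1;
    }
}
```

On Example 1, f[3][3] = 20 (resets worth 2, 6, and 12 at seconds 1, 2, 3), so 6 + 6·3 - 20 = 4 ≤ x and the method returns 3.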
| Approach | Time Complexity | Space Complexity |
|---|---|---|
| Greedy Approach | O(n²) | O(1) |
| Dynamic Programming Approach with Removal Array | O(n · x) | O(n · x) |
| Sorting + Dynamic Programming | O(n²) | O(n²) |
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Greedy Ordering with Sorting | O(n log n) + O(n²) | O(n) | When analyzing reset order based on growth rate and testing feasible reductions |
| Dynamic Programming with Removal Array | O(n²) | O(n) | General optimal solution used in interviews and editorials |