Watch 4 video solutions for Minimum Time to Make Array Sum At Most x, a hard-level problem involving Array, Dynamic Programming, and Sorting. This walkthrough by codingMohan has 2,286 views. Want to try solving it yourself? Practice on FleetCode or read the detailed text solution.
You are given two 0-indexed integer arrays nums1 and nums2 of equal length. Every second, for all indices 0 <= i < nums1.length, the value of nums1[i] is incremented by nums2[i]. After this is done, you can do the following operation:
Choose an index 0 <= i < nums1.length and make nums1[i] = 0.
You are also given an integer x.
Return the minimum time in which you can make the sum of all elements of nums1 less than or equal to x, or -1 if this is not possible.
Example 1:
Input: nums1 = [1,2,3], nums2 = [1,2,3], x = 4 Output: 3 Explanation: For the 1st second, we apply the operation on i = 0. Therefore nums1 = [0,2+2,3+3] = [0,4,6]. For the 2nd second, we apply the operation on i = 1. Therefore nums1 = [0+1,0,6+3] = [1,0,9]. For the 3rd second, we apply the operation on i = 2. Therefore nums1 = [1+1,0+2,0] = [2,2,0]. Now sum of nums1 = 4. It can be shown that these operations are optimal, so we return 3.
Example 2:
Input: nums1 = [1,2,3], nums2 = [3,3,3], x = 4 Output: -1 Explanation: It can be shown that the sum of nums1 will always be greater than x, no matter which operations are performed.
Constraints:
1 <= nums1.length <= 10^3
1 <= nums1[i] <= 10^3
0 <= nums2[i] <= 10^3
nums1.length == nums2.length
0 <= x <= 10^6

Problem Overview: You are given two arrays nums1 and nums2. Every second, each element in nums1 increases by its corresponding value in nums2. During that second, you may choose one index i and set nums1[i] to 0. The goal is to determine the minimum number of seconds required so the total array sum becomes ≤ x.
The challenge comes from two competing effects: the array sum keeps increasing every second, but resetting an element removes both its current value and the growth it accumulated. Choosing the right indices and the right order is the key optimization problem.
Approach 1: Greedy Ordering with Sorting (O(n log n) + O(n²) evaluation)
The value of resetting an index grows over time because nums1[i] accumulates t * nums2[i] after t seconds. Elements with larger nums2[i] grow faster, so resetting them later produces a larger reduction. This observation leads to sorting indices by nums2 in ascending order so that slower-growing elements are considered earlier and fast-growing ones are delayed.
After sorting, simulate selecting elements over multiple seconds while tracking how much total sum reduction each reset can produce. The greedy insight fixes the order of candidates, while the evaluation checks whether the accumulated reduction is enough to push the total sum below x. Sorting costs O(n log n), and the evaluation is O(n²) because each of up to n candidate times considers every element. Space complexity remains O(n).
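The ordering intuition can be checked with a small numeric sketch (the element values here are illustrative, not from the problem's examples): for two elements reset at consecutive seconds, giving the later second to the element with the larger nums2 removes at least as much from the total.

```python
# Reduction from resetting element (a, b) at second t:
# by then it has grown to a + t * b, all of which is removed.
def reduction(a, b, t):
    return a + t * b

# Two elements with the same start value but different growth rates.
slow = (5, 1)   # (nums1[i], nums2[i]) with small growth
fast = (5, 10)  # same start value, large growth

# Reset the slow element at second 1 and the fast one at second 2 ...
slow_first = reduction(*slow, 1) + reduction(*fast, 2)
# ... versus the opposite order.
fast_first = reduction(*fast, 1) + reduction(*slow, 2)

print(slow_first, fast_first)  # prints "31 22"
```

Since 31 > 22, delaying the faster-growing element yields the larger total reduction, which is exactly why the candidates are sorted by nums2 in ascending order.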
Approach 2: Dynamic Programming with Removal Array (O(n²) time, O(n) space)
A more precise formulation treats the problem like a knapsack-style dynamic programming optimization. First compute the base sum growth after t seconds: sum(nums1) + t * sum(nums2). Then maximize how much value you can remove by choosing t reset operations.
Sort elements by nums2 in ascending order and define dp[t] as the maximum reduction achievable using exactly t resets. When processing element i, updating the DP state adds the benefit of resetting that element at time t: nums1[i] + t * nums2[i]. Iterate t backwards so each element is used at most once and earlier states are not overwritten.
After computing the DP values, check each possible time t. If sum(nums1) + t * sum(nums2) - dp[t] ≤ x, then t seconds is feasible, and the first such t is the answer. The algorithm runs in O(n²) time and uses O(n) extra space.
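Putting the pieces together, this is a sketch of the DP in Python (the function name and signature follow the LeetCode convention for this problem; treat it as an illustration of the recurrence above rather than the author's exact code):

```python
from typing import List

def minimumTime(nums1: List[int], nums2: List[int], x: int) -> int:
    n = len(nums1)
    # Pair each start value with its growth rate, sorted by nums2 ascending,
    # so faster-growing elements are considered for later reset slots.
    pairs = sorted(zip(nums2, nums1))
    # dp[t] = maximum total reduction achievable using exactly t resets.
    dp = [0] * (n + 1)
    for i, (b, a) in enumerate(pairs, start=1):
        # Iterate t backwards so each element is counted at most once.
        for t in range(i, 0, -1):
            dp[t] = max(dp[t], dp[t - 1] + a + t * b)
    s1, s2 = sum(nums1), sum(nums2)
    # After t seconds the sum is s1 + t*s2 minus the best reduction dp[t];
    # the first feasible t is the answer.
    for t in range(n + 1):
        if s1 + t * s2 - dp[t] <= x:
            return t
    return -1
```

On the two examples above, `minimumTime([1,2,3], [1,2,3], 4)` returns 3 and `minimumTime([1,2,3], [3,3,3], 4)` returns -1.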
Recommended for interviews: The dynamic programming formulation is what most interviewers expect. Sorting by nums2 shows you understand the growth behavior, while the DP step proves you can convert the scheduling problem into a maximization problem. Mentioning the greedy intuition first often helps explain why the DP state transition works.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Greedy Ordering with Sorting | O(n log n) + O(n²) | O(n) | When analyzing reset order based on growth rate and testing feasible reductions |
| Dynamic Programming with Removal Array | O(n²) | O(n) | General optimal solution used in interviews and editorials |