You are given two 1-indexed integer arrays, nums and changeIndices, having lengths n and m, respectively.
Initially, all indices in nums are unmarked. Your task is to mark all indices in nums.
In each second, s, in order from 1 to m (inclusive), you can perform one of the following operations:
- Choose an index i in the range [1, n] and decrement nums[i] by 1.
- Set nums[changeIndices[s]] to any non-negative value.
- Choose an index i in the range [1, n], where nums[i] is equal to 0, and mark index i.
- Do nothing.

Return an integer denoting the earliest second in the range [1, m] when all indices in nums can be marked by choosing operations optimally, or -1 if it is impossible.
Example 1:
Input: nums = [3,2,3], changeIndices = [1,3,2,2,2,2,3]
Output: 6
Explanation: In this example, we have 7 seconds. The following operations can be performed to mark all indices:
Second 1: Set nums[changeIndices[1]] to 0. nums becomes [0,2,3].
Second 2: Set nums[changeIndices[2]] to 0. nums becomes [0,2,0].
Second 3: Set nums[changeIndices[3]] to 0. nums becomes [0,0,0].
Second 4: Mark index 1, since nums[1] is equal to 0.
Second 5: Mark index 2, since nums[2] is equal to 0.
Second 6: Mark index 3, since nums[3] is equal to 0.
Now all indices have been marked. It can be shown that it is not possible to mark all indices earlier than the 6th second. Hence, the answer is 6.
Example 2:
Input: nums = [0,0,1,2], changeIndices = [1,2,1,2,1,2,1,2]
Output: 7
Explanation: In this example, we have 8 seconds. The following operations can be performed to mark all indices:
Second 1: Mark index 1, since nums[1] is equal to 0.
Second 2: Mark index 2, since nums[2] is equal to 0.
Second 3: Decrement index 4 by one. nums becomes [0,0,1,1].
Second 4: Decrement index 4 by one. nums becomes [0,0,1,0].
Second 5: Decrement index 3 by one. nums becomes [0,0,0,0].
Second 6: Mark index 3, since nums[3] is equal to 0.
Second 7: Mark index 4, since nums[4] is equal to 0.
Now all indices have been marked. It can be shown that it is not possible to mark all indices earlier than the 7th second. Hence, the answer is 7.
Example 3:
Input: nums = [1,2,3], changeIndices = [1,2,3]
Output: -1
Explanation: In this example, it can be shown that it is impossible to mark all indices, as we don't have enough seconds. Hence, the answer is -1.
Constraints:
- 1 <= n == nums.length <= 5000
- 0 <= nums[i] <= 10^9
- 1 <= m == changeIndices.length <= 5000
- 1 <= changeIndices[i] <= n

Problem Overview: You are given an array representing the work required before an index can be marked and a timeline that specifies which index becomes markable at each second. The task is to determine the earliest second when every index can be marked while respecting the operation constraints.
Approach 1: Dynamic Programming (O(n * m) time, O(n * m) space)
The dynamic programming formulation models each second as a decision point: either spend the second reducing remaining work for some index or perform a marking operation when the index becomes available. The state tracks how many indices have been processed and how many seconds have been used so far. By iterating through the timeline and updating transitions, the algorithm determines whether all indices can be completed within a given prefix of time. This approach is conceptually straightforward and clearly models the constraints, but the state space grows quickly with larger inputs.
This approach is useful for reasoning about correctness because it explicitly represents every choice. However, the quadratic state expansion makes it impractical for large timelines. In interview settings it mainly serves as a stepping stone toward a greedy or search-based optimization.
Approach 2: Binary Search + Greedy Heap Scheduling (O(m log n · log m) time, O(n) space)
The optimal strategy treats each marking opportunity as a deadline and checks whether the required preparation work can be scheduled before it. Perform a binary search over the answer: pick a candidate second t and verify whether all indices can be marked within the first t seconds. Feasibility is monotone in t (anything achievable in t seconds is achievable in t + 1), so binary search applies.
During the feasibility check, scan the timeline backwards and treat the first appearance of each index as the only useful chance to apply the fast "set to any value" operation to it. Maintain a heap (priority queue) of the indices chosen for that fast operation, keyed by the number of decrements it saves, along with a budget of spare seconds. When a new candidate appears but no spare second is available, compare it against the cheapest choice in the heap: if the new candidate saves more, evict the cheapest and take the new one instead. This greedy exchange keeps the set of fast-operation choices with the largest total saving, which minimizes the decrement work that must fit into the remaining seconds.
The combination of greedy scheduling and binary search drastically reduces the search space. Instead of simulating every possibility, the algorithm only verifies feasibility for candidate times. This leads to an efficient solution that scales to large inputs.
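The binary search plus backward greedy check described above can be sketched in Python. This is a minimal sketch, assuming 1-based values in changeIndices as in the problem statement; the function name `earliest_second_to_mark` and all variable names are illustrative rather than taken from a reference implementation:

```python
import heapq

def earliest_second_to_mark(nums, change_indices):
    """Binary search over the answer with a backward greedy feasibility check.

    nums is 0-indexed here; change_indices holds 1-based indices as in the
    problem statement.
    """
    n, m = len(nums), len(change_indices)
    total = sum(nums)

    # first_occurrence[i] = earliest second (1-based) at which index i+1 appears.
    first_occurrence = [0] * n
    for s in range(m, 0, -1):
        first_occurrence[change_indices[s - 1] - 1] = s

    def feasible(t):
        heap = []    # savings of indices we zero out with the fast operation
        saved = 0    # sum of the values currently in the heap
        free = 0     # spare seconds available for decrements and marks
        for s in range(t, 0, -1):
            idx = change_indices[s - 1] - 1
            if s == first_occurrence[idx] and nums[idx] > 0:
                if free > 0:
                    # Zero idx now; one later spare second will mark it.
                    free -= 1
                    heapq.heappush(heap, nums[idx])
                    saved += nums[idx]
                elif heap and heap[0] < nums[idx]:
                    # Greedy exchange: drop the cheapest choice, take idx.
                    saved += nums[idx] - heapq.heappushpop(heap, nums[idx])
                else:
                    free += 1
            else:
                free += 1
        # Indices not zeroed out need all their decrements plus one mark each.
        k = len(heap)
        return free >= (total - saved) + (n - k)

    lo, hi, ans = 1, m, -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            ans, hi = mid, mid - 1
        else:
            lo = mid + 1
    return ans
```

Each feasibility check is O(m log n) because the heap never holds more than n entries, and the binary search adds a log m factor on top.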
Recommended for interviews: The binary search with greedy heap check is the expected solution. Explaining the DP formulation first shows you understand the state transitions, but implementing the heap-based feasibility check demonstrates strong algorithmic reasoning and optimization skills.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Dynamic Programming | O(n * m) | O(n * m) | Good for understanding the state transitions and verifying correctness on smaller inputs |
| Binary Search + Greedy Heap | O(m log n · log m) | O(n) | Best for large constraints; efficiently finds the earliest feasible time |
Video solution: 3049. Earliest Second to Mark Indices II | Weekly Leetcode 386, by codingMohan.