There is a computer that can run an unlimited number of tasks at the same time. You are given a 2D integer array tasks where tasks[i] = [starti, endi, durationi] indicates that the ith task should run for a total of durationi seconds (not necessarily continuous) within the inclusive time range [starti, endi].
You may turn on the computer only when it needs to run a task. You can also turn it off if it is idle.
Return the minimum time during which the computer should be turned on to complete all tasks.
Example 1:
Input: tasks = [[2,3,1],[4,5,1],[1,5,2]]
Output: 2
Explanation:
- The first task can be run in the inclusive time range [2, 2].
- The second task can be run in the inclusive time range [5, 5].
- The third task can be run in the two inclusive time ranges [2, 2] and [5, 5].
The computer will be on for a total of 2 seconds.
Example 2:
Input: tasks = [[1,3,2],[2,5,3],[5,6,2]]
Output: 4
Explanation:
- The first task can be run in the inclusive time range [2, 3].
- The second task can be run in the inclusive time ranges [2, 3] and [5, 5].
- The third task can be run in the inclusive time range [5, 6].
The computer will be on for a total of 4 seconds.
Constraints:
- 1 <= tasks.length <= 2000
- tasks[i].length == 3
- 1 <= starti, endi <= 2000
- 1 <= durationi <= endi - starti + 1

Problem Overview: Each task is represented as [start, end, duration]. You can run tasks at any integer time within their window, and multiple tasks can share the same time slot. The goal is to choose the minimum number of active time points so every task receives at least duration units of execution within its allowed interval.
Approach 1: Sweep Line Algorithm (O(n log n) time, O(n) space)
This approach treats the timeline like a sequence of events and processes tasks in sorted order by their end time. Sorting ensures you satisfy the most restrictive deadlines first. As you scan tasks, maintain a structure (often a stack or ordered list) that stores the time points already activated. For each task, count how many of those chosen time points already fall inside [start, end]. If the count is less than duration, activate additional time points starting from the task's end and moving backward. This greedy sweep ensures new time points are as late as possible so they can satisfy future tasks. The sweep-line idea combined with sorting avoids redundant allocations and keeps the active timeline minimal.
Core operations include sorting tasks (O(n log n)), checking existing active times inside an interval using binary search, and pushing new time slots when needed. The algorithm works well because later time points maximize reuse across overlapping intervals. Related concepts appear frequently in sorting and array scheduling problems.
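The sorted-by-end sweep described above can be sketched as follows. This is a minimal illustration, not an official solution; the function name `min_on_time` is chosen here for clarity, and a sorted list with binary search stands in for the "ordered structure" of activated time points.

```python
from bisect import bisect_left, bisect_right, insort

def min_on_time(tasks):
    # Sort by end time so the tightest deadlines are handled first.
    tasks.sort(key=lambda t: t[1])
    on = []  # sorted list of time points where the computer is on
    for start, end, duration in tasks:
        # Count already-activated points inside [start, end] via binary search.
        have = bisect_right(on, end) - bisect_left(on, start)
        need = duration - have
        # Activate missing points as late as possible, from end backward,
        # so they are more likely to be reused by tasks with later deadlines.
        t = end
        while need > 0:
            if bisect_left(on, t) == bisect_right(on, t):  # t not yet active
                insort(on, t)
                need -= 1
            t -= 1
    return len(on)
```

For Example 1, the sweep activates two time points in total, matching the expected output of 2.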
Approach 2: Greedy Task Scheduling (O(n log n) time, O(n) space)
The most common interview solution uses a greedy scheduling strategy. First sort tasks by end. Maintain a boolean timeline or ordered structure that records which time slots are already used. For each task, iterate through the interval and count how many time points are already active. If the task still needs more execution time, add new active slots starting from end and moving backward until the required duration is met. Choosing the latest available slots preserves earlier ones for tasks with tighter windows.
This strategy is a classic greedy scheduling pattern: always satisfy the task with the earliest deadline while delaying work as much as possible. Implementations often combine sorting with binary search to quickly count existing slots in the interval. The technique mirrors interval scheduling problems commonly solved with greedy logic and occasionally binary search for efficient lookups.
Recommended for interviews: The greedy task scheduling approach is the expected solution. Interviewers want to see that you sort by end time and allocate execution slots from right to left. Explaining the sweep-line interpretation helps demonstrate deeper understanding of interval scheduling and timeline optimization.
This approach treats the answer as a set of marked time points on a timeline. After sorting tasks by end time, you sweep through them and mark the individual seconds during which the computer must be on; the total 'on' time is simply the number of marked seconds. This handles overlapping intervals naturally, because a second marked for one task is reused by every other task whose window contains it.
The code keeps a record of which time points are already active. For each task it counts the active points inside [start, end], and if that count falls short of duration, it marks additional points from end backward until the requirement is met.
Time Complexity: O(n log n) for sorting the tasks, plus the cost of scanning each task's interval for unused time points.
Space Complexity: O(n) to store the time marks for each task.
This strategy uses a greedy method: tasks are sorted by end time, and each task's required seconds are scheduled as late as possible within its availability range. This minimizes total on time by letting overlapping tasks share the same seconds and by keeping earlier slots free for tasks with tighter windows.
This implementation sorts tasks by their end time, then iterates over each task's interval, allocating any missing duration from the right without double-counting. A set tracks the unique active time points and simulates the scheduling process.
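A minimal sketch of this set-based simulation; the function name `min_active_seconds` is illustrative, not from any official solution.

```python
def min_active_seconds(tasks):
    # Greedy: process tasks in order of deadline, keep a set of active seconds.
    active = set()
    for start, end, duration in sorted(tasks, key=lambda t: t[1]):
        # Seconds in [start, end] that are already on count toward duration.
        remaining = duration - sum(1 for t in range(start, end + 1) if t in active)
        # Turn on additional seconds as late as possible.
        t = end
        while remaining > 0:
            if t not in active:
                active.add(t)
                remaining -= 1
            t -= 1
    return len(active)
```

The set gives O(1) membership checks, but each task still scans its whole window, which is what drives the complexity below.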
Time Complexity: O(n * w), where w is the maximum window length (end - start + 1), since each task's entire interval may be scanned.
Space Complexity: O(n) for the active times set.
We observe that the problem is equivalent to selecting duration integer time points in each interval [start,..,end], so that the total number of selected integer time points is minimized.
Therefore, we can first sort tasks in ascending order of end time end. Then we greedily make selections. For each task, we start from the end time end and choose the points as late as possible from back to front. This way, these points are more likely to be reused by later tasks.
In our implementation, we can use an array vis of length 2010 to record whether each time point has been selected. Then for each task, we first count the number of points cnt that have been selected in the interval [start,..,end], and then select duration - cnt points from back to front, while recording the number of selected points ans and updating the vis array.
Finally, we return ans.
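The steps above can be sketched in Python as follows; the names mirror the description (vis, cnt, ans), and the function name `find_minimum_time` is illustrative.

```python
def find_minimum_time(tasks):
    tasks.sort(key=lambda t: t[1])    # ascending by end time
    vis = [False] * 2010              # vis[i]: whether time point i is selected
    ans = 0
    for start, end, duration in tasks:
        cnt = sum(vis[start:end + 1])  # points already selected in [start, end]
        need = duration - cnt
        i = end
        while need > 0:               # select missing points from back to front
            if not vis[i]:
                vis[i] = True
                ans += 1
                need -= 1
            i -= 1
    return ans
```

On Example 2, the function selects four time points in total, matching the expected output of 4.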
The time complexity is O(n log n + n × m), and the space complexity is O(m), where n is the number of tasks and m is the length of the vis array. In this problem, m = 2010.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Sweep Line Algorithm | O(n log n) sorting + interval scans | O(n) | Useful when modeling the timeline as events and tracking activated time slots dynamically. |
| Greedy Task Scheduling | O(n · w), where w is the maximum window length | O(n) | Best general solution. Sort by end time and allocate slots from right to left to maximize reuse. |
| Greedy + Sorting (vis array) | O(n log n + n · m), m = 2010 | O(m) | Simple fixed-size timeline that matches this problem's small constraints. |