You are given n tasks labeled from 0 to n - 1 represented by a 2D integer array tasks, where tasks[i] = [enqueueTimei, processingTimei] means that the ith task will be available to process at enqueueTimei and will take processingTimei to finish processing.
You have a single-threaded CPU that can process at most one task at a time and will act in the following way:
- If the CPU is idle and there are no available tasks to process, the CPU remains idle.
- If the CPU is idle and there are available tasks, the CPU will choose the one with the shortest processing time. If multiple tasks have the same shortest processing time, it will choose the task with the smallest index.
- Once a task is started, the CPU will process the entire task without stopping.
- The CPU can finish a task then start a new one instantly.
Return the order in which the CPU will process the tasks.
Example 1:
Input: tasks = [[1,2],[2,4],[3,2],[4,1]]
Output: [0,2,3,1]
Explanation: The events go as follows:
- At time = 1, task 0 is available to process. Available tasks = {0}.
- Also at time = 1, the idle CPU starts processing task 0. Available tasks = {}.
- At time = 2, task 1 is available to process. Available tasks = {1}.
- At time = 3, task 2 is available to process. Available tasks = {1, 2}.
- Also at time = 3, the CPU finishes task 0 and starts processing task 2 as it is the shortest. Available tasks = {1}.
- At time = 4, task 3 is available to process. Available tasks = {1, 3}.
- At time = 5, the CPU finishes task 2 and starts processing task 3 as it is the shortest. Available tasks = {1}.
- At time = 6, the CPU finishes task 3 and starts processing task 1. Available tasks = {}.
- At time = 10, the CPU finishes task 1 and becomes idle.
Example 2:
Input: tasks = [[7,10],[7,12],[7,5],[7,4],[7,2]]
Output: [4,3,2,0,1]
Explanation: The events go as follows:
- At time = 7, all the tasks become available. Available tasks = {0,1,2,3,4}.
- Also at time = 7, the idle CPU starts processing task 4. Available tasks = {0,1,2,3}.
- At time = 9, the CPU finishes task 4 and starts processing task 3. Available tasks = {0,1,2}.
- At time = 13, the CPU finishes task 3 and starts processing task 2. Available tasks = {0,1}.
- At time = 18, the CPU finishes task 2 and starts processing task 0. Available tasks = {1}.
- At time = 28, the CPU finishes task 0 and starts processing task 1. Available tasks = {}.
- At time = 40, the CPU finishes task 1 and becomes idle.
Constraints:
- tasks.length == n
- 1 <= n <= 10^5
- 1 <= enqueueTimei, processingTimei <= 10^9

Problem Overview: You are given tasks where each task has an enqueueTime and processingTime. A single-threaded CPU can process only one task at a time and always picks the available task with the smallest processing time (breaking ties by index). The goal is to return the order in which the CPU processes the tasks.
Approach 1: Sort + Priority Queue Simulation (O(n log n) time, O(n) space)
This is the standard scheduling simulation used in most accepted solutions. First attach each task's original index, then sort the tasks by enqueueTime. Iterate through time while pushing all tasks whose enqueue time is less than or equal to the current time into a min-heap. The heap is ordered by (processingTime, index), so the CPU always selects the shortest job first with deterministic tie-breaking.
At each step, pop the top task from the heap (priority queue), append its index to the result, and advance the current time by its processing time. If the heap becomes empty but tasks remain, jump the current time to the next task's enqueue time. This approach efficiently models the scheduler and guarantees that every push/pop operation costs O(log n), giving an overall O(n log n) time complexity with O(n) extra space.
Approach 2: Multimap / TreeMap Scheduling (O(n log n) time, O(n) space)
Instead of explicitly sorting the array, store tasks in a structure ordered by enqueue time such as a multimap or TreeMap. This structure keeps tasks automatically sorted as they are inserted. Maintain another ordered structure (typically a min-heap) for currently available tasks prioritized by processing time and index.
Simulate the CPU clock the same way as the heap approach: move tasks from the time-ordered map into the processing heap whenever their enqueue time becomes available. Extract the shortest task, process it, and update the clock. Using a balanced tree avoids the explicit sort step but still performs ordered insertions and lookups in O(log n). Total complexity remains O(n log n) time with O(n) space.
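Python has no built-in TreeMap, so the sketch below approximates the ordered map with a dict that groups tasks by enqueue time plus a sorted list of the distinct arrival times (in Java, a `TreeMap<Integer, List<int[]>>` would play both roles). The function name `get_order_treemap` is my own label, not from the source.

```python
import heapq
from collections import defaultdict
from typing import List

def get_order_treemap(tasks: List[List[int]]) -> List[int]:
    # Group tasks by enqueue time; sorted keys stand in for a TreeMap's ordering.
    by_time = defaultdict(list)
    for i, (enq, dur) in enumerate(tasks):
        by_time[enq].append((dur, i))
    times = sorted(by_time)          # ascending enqueue times

    ready = []                       # min-heap of (processingTime, index)
    order = []
    clock = 0
    k = 0                            # next unconsumed enqueue time
    while len(order) < len(tasks):
        # Move every batch whose enqueue time has passed into the ready heap.
        while k < len(times) and times[k] <= clock:
            for item in by_time[times[k]]:
                heapq.heappush(ready, item)
            k += 1
        if not ready:
            clock = times[k]         # CPU idle: jump to the next arrival
            continue
        dur, idx = heapq.heappop(ready)
        order.append(idx)
        clock += dur                 # CPU is busy until this task finishes
    return order
```

On Example 1, `get_order_treemap([[1,2],[2,4],[3,2],[4,1]])` yields `[0,2,3,1]`, matching the walkthrough above.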
The key idea behind both solutions is separating two concerns: ordering tasks by arrival time and selecting the best available task to run. Arrival ordering is handled by sorting or a tree structure over the array, while task selection is handled by a priority queue.
Recommended for interviews: The sort + priority queue approach. Interviewers expect you to recognize this as a scheduling simulation similar to Shortest Job First (SJF). A brute-force simulation that rescans all available tasks at every step degrades to O(n^2), while the heap-based scheduler demonstrates strong understanding of priority queues and event-driven simulation.
This approach involves using a priority queue (or min-heap) to efficiently select the task with the shortest processing time. We will first sort the tasks based on their enqueue time. At each step, we'll enqueue tasks that are available and select the task with the shortest processing time for execution using the priority queue. This ensures optimal selection and execution of tasks, minimizing wait times.
The function starts by sorting tasks with indices by their enqueue time, then iteratively adds tasks to a heap that are available at the current time. It processes tasks based on the heap, which optimizes the selection of the shortest available task, ensuring tasks are processed efficiently.
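The description above can be sketched as a Python implementation (the snake_case name `get_order` mirrors the LeetCode `getOrder` signature):

```python
import heapq
from typing import List

def get_order(tasks: List[List[int]]) -> List[int]:
    # Attach original indices, then sort by enqueue time.
    indexed = sorted((enq, dur, i) for i, (enq, dur) in enumerate(tasks))
    heap = []                        # min-heap of (processingTime, index)
    order = []
    time = 0
    i = 0
    n = len(indexed)
    while len(order) < n:
        # Make every task that has arrived by `time` available.
        while i < n and indexed[i][0] <= time:
            enq, dur, idx = indexed[i]
            heapq.heappush(heap, (dur, idx))
            i += 1
        if heap:
            dur, idx = heapq.heappop(heap)
            order.append(idx)
            time += dur              # CPU is busy until the task finishes
        else:
            time = indexed[i][0]     # idle: jump to the next arrival
    return order
```

For instance, `get_order([[1,2],[2,4],[3,2],[4,1]])` returns `[0, 2, 3, 1]`, as in Example 1.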
Time Complexity: O(n log n) due to sorting and heap operations. Space Complexity: O(n) for storing tasks and heap.
This approach uses a structure like TreeMap (or Multimap) that supports ordering key-value pairs. It allows us to keep the tasks ordered by enqueue times and quickly find tasks that should start processing.
In Java, this is typically implemented by pairing the TreeMap with a priority queue that ranks currently available tasks by shortest processing time (breaking ties by index), polling batches of tasks from the map into the queue as the clock advances.
Time Complexity: O(n log n) because of sorting and the priority queue operations. Space Complexity: O(n) for storage.
First, we sort the tasks by enqueueTime in ascending order. Next, we use a priority queue (min heap) to maintain the currently executable tasks. The elements in the queue are (processingTime, index), which represent the execution time and the index of the task. We also use a variable t to represent the current time, initially set to 0.
Next, we simulate the execution process of the tasks.
If the current queue is empty, it means there are no executable tasks at the moment. We update t to the larger value between the enqueueTime of the next task and the current time t. Then, we add all tasks with enqueueTime less than or equal to t to the queue.
Then, we take out a task from the queue, add its index to the answer array, and update t to the sum of the current time t and the execution time of the current task.
We repeat the above process until every task has been added to the queue and the queue has been emptied, i.e., all tasks have been processed.
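The steps above can be condensed into a short Python sketch that mirrors the variables in the description (`t` for the current time, `q` for the min-heap of `(processingTime, index)` pairs); the function name is my own label:

```python
import heapq
from typing import List

def get_order_simulated(tasks: List[List[int]]) -> List[int]:
    # Sort (enqueueTime, processingTime, index) ascending by enqueue time.
    jobs = sorted((e, p, i) for i, (e, p) in enumerate(tasks))
    q, ans, t, j = [], [], 0, 0
    while len(ans) < len(jobs):
        if not q:
            # No executable task: advance t to the next arrival if needed.
            t = max(t, jobs[j][0])
        # Add all tasks with enqueueTime <= t to the queue.
        while j < len(jobs) and jobs[j][0] <= t:
            heapq.heappush(q, (jobs[j][1], jobs[j][2]))
            j += 1
        # Take out the shortest task and advance the clock.
        p, i = heapq.heappop(q)
        ans.append(i)
        t += p
    return ans
```

Note that when `q` is empty, `t = max(t, jobs[j][0])` guarantees at least one task is pushed before the pop, so the pop never fails.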
The time complexity is O(n log n), where n is the number of tasks, and the space complexity is O(n).
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Naive CPU Simulation | O(n^2) | O(1) | Conceptual understanding of the scheduling process; not practical for large inputs |
| Sort + Priority Queue | O(n log n) | O(n) | Standard optimal approach used in interviews and most editorial solutions |
| TreeMap / Multimap + Heap | O(n log n) | O(n) | Useful in languages with strong ordered map support like Java or C++ |
Video solution: Single-Threaded CPU - Priority Queue - Leetcode 1834 - Python (NeetCode)