You are given a list of blocks, where blocks[i] = t means that the i-th block needs t units of time to be built. A block can only be built by exactly one worker.
A worker can either split into two workers (number of workers increases by one) or build a block then go home. Both decisions cost some time.
The time cost of splitting one worker into two workers is given as an integer split. Note that if two workers split at the same time, they split in parallel, so the total cost is still split.
Output the minimum time needed to build all blocks.
Initially, there is only one worker.
Example 1:
Input: blocks = [1], split = 1
Output: 1
Explanation: We use 1 worker to build 1 block in 1 time unit.
Example 2:
Input: blocks = [1,2], split = 5
Output: 7
Explanation: We split the worker into 2 workers in 5 time units, then assign each of them to a block, so the cost is 5 + max(1, 2) = 7.
Example 3:
Input: blocks = [1,2,3], split = 1
Output: 4
Explanation: Split 1 worker into 2, then assign the first worker to the last block and split the second worker into 2. Then, use the two unassigned workers to build the first two blocks. The cost is 1 + max(3, 1 + max(1, 2)) = 4.
Constraints:
1 <= blocks.length <= 1000
1 <= blocks[i] <= 10^5
1 <= split <= 100

Problem Overview: You are given build times for several blocks and a fixed cost to split a worker into two. A worker can either build a block or split into two workers. The goal is to schedule splits and builds so all blocks finish in the minimum possible time.
Approach 1: Exhaustive Simulation / Recursive Scheduling (O(2^n) time, O(n) space)
A naive idea is to simulate every possible decision: either build a block now or split the worker to gain parallelism later. You recursively explore the order of splits and block assignments while tracking the maximum finishing time. This approach quickly becomes infeasible because the number of possible schedules grows exponentially with the number of blocks. It does help build intuition: splitting early helps when there are many long tasks, but unnecessary splits waste time.
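As a sketch of that decision space (the function name and the exact recursion shape are my own, not from the article), one can sort the blocks in descending order and, at each step, either have one worker build the longest remaining block or have every worker split in parallel. This assumes that splitting all current workers at once is never worse than splitting only some of them, since surplus workers can simply go home unused:

```python
import math

def min_time_exhaustive(blocks, split):
    """Exponential-time exploration of build-vs-split decisions."""
    blocks = sorted(blocks, reverse=True)  # longest block first
    n = len(blocks)

    def solve(i, workers):
        if i == n:
            return 0            # every block has been built
        if workers >= n - i:
            return blocks[i]    # enough workers: build the rest in parallel
        if workers == 0:
            return math.inf     # no worker left to build or split
        # Option 1: one worker builds the longest remaining block now,
        # while the others handle the rest; the slower side dominates.
        build = max(blocks[i], solve(i + 1, workers - 1))
        # Option 2: all workers split in parallel for one split cost.
        grow = split + solve(i, workers * 2)
        return min(build, grow)

    return solve(0, 1)
```

Without memoization the recursion revisits the same (i, workers) states exponentially often, which is exactly why this approach is impractical under the stated constraints.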
Approach 2: Greedy + Priority Queue (Optimal) (O(n log n) time, O(n) space)
The key insight is that the process can be viewed in reverse as repeatedly merging blocks, much like Huffman coding. If two blocks take a and b time, one split (costing split) creates the two workers that build them in parallel, so the merged pair finishes in max(a, b) + split. Use a priority queue so the cheapest blocks are always merged first.

Push all block times into a min heap. Repeatedly pop the two smallest values a and b; their merged finishing time is max(a, b) + split, which is simply b + split when a <= b. Push this result back into the heap and continue until only one value remains. That value is the minimum time needed to build all blocks.

This greedy strategy works because the longest jobs dominate the total completion time. Merging the cheapest pairs first means the largest build times absorb as few split costs as possible. Each heap operation runs in O(log n), producing a total complexity of O(n log n). The approach combines greedy scheduling with efficient priority-queue selection over the array of build times.
Recommended for interviews: Interviewers expect the Greedy + Priority Queue solution. Explaining the naive exponential scheduling idea first shows you understand the decision space. The heap-based greedy merge demonstrates the optimization insight and is the practical solution with O(n log n) complexity.
First, consider the case where there is only one block. There is no need to split the worker; it simply builds the block, so the time cost is blocks[0].

If there are two blocks, the worker must split into two, and each new worker builds one block. The time cost is split + max(blocks[0], blocks[1]).
If there are more than two blocks, at each step you must decide how many workers to split, which is hard to reason about going forward in time.

Instead, think in reverse: rather than splitting workers, merge blocks. Merging any two blocks i and j produces a new block whose build time is split + max(blocks[i], blocks[j]).

To let the most time-consuming blocks participate in as few merges as possible, we greedily merge the two cheapest blocks each time. Maintain a min heap, repeatedly take out the two smallest blocks and merge them, until only one block remains. The build time of that last block is the answer.
The time complexity is O(n log n), and the space complexity is O(n). Here, n is the number of blocks.
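The reverse-merging idea above can be sketched in a few lines of Python (the function name is mine; `heapq` provides the min heap):

```python
import heapq

def min_build_time(blocks, split):
    """Huffman-style greedy: repeatedly merge the two cheapest blocks."""
    heap = list(blocks)
    heapq.heapify(heap)              # min heap of build times
    while len(heap) > 1:
        a = heapq.heappop(heap)      # smallest remaining time
        b = heapq.heappop(heap)      # second smallest, so a <= b
        # One split yields the two workers that build this pair; the
        # merged "block" finishes when the slower one does: split + b.
        heapq.heappush(heap, split + b)
    return heap[0]
```

On Example 3 (blocks = [1,2,3], split = 1), the heap merges 1 and 2 into 3, then 3 and 3 into 4, matching the expected output.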
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Exhaustive Recursive Scheduling | O(2^n) | O(n) | Conceptual understanding of split decisions; impractical for real constraints |
| Greedy + Priority Queue (Min Heap) | O(n log n) | O(n) | Optimal solution for interviews and production; merges the cheapest blocks first so the longest tasks absorb the fewest splits |
1199 Minimum Time to Build Blocks • Kelvin Chandra