Minimum Number of Increasing Subsequence to Be Removed is a hard-level problem involving Array and Binary Search. Below is a detailed text solution.
Given an array of integers nums, you are allowed to perform the following operation any number of times:

Remove a strictly increasing subsequence from the array.

Your task is to find the minimum number of operations required to make the array empty.
Example 1:
Input: nums = [5,3,1,4,2]
Output: 3
Explanation:
We remove subsequences [1, 2], [3, 4], [5].
Example 2:
Input: nums = [1,2,3,4,5]
Output: 1
Example 3:
Input: nums = [5,4,3,2,1]
Output: 5
Constraints:
1 <= nums.length <= 10^5
1 <= nums[i] <= 10^5

Problem Overview: You are given an array and repeatedly remove strictly increasing subsequences. The goal is to minimize how many such subsequences are needed to remove the entire array.
The key observation comes from sequence partitioning theory: the minimum number of strictly increasing subsequences required to cover the array equals the length of the longest non‑increasing subsequence. Instead of explicitly constructing subsequences, you can compute this value efficiently.
Approach 1: Dynamic Programming (O(n^2) time, O(n) space)
A straightforward method computes the longest non‑increasing subsequence using classic DP. For each index i, iterate through all previous indices j < i. If nums[j] >= nums[i], extend the subsequence ending at j to i. Track the maximum length across all positions. The final length represents the minimum number of increasing subsequences needed to remove the array. This approach is easy to reason about but becomes slow for large inputs because every pair of indices is compared.
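The quadratic DP above can be sketched as follows (the function name min_removals_dp is illustrative, not from the original):

```python
def min_removals_dp(nums: list[int]) -> int:
    """Minimum strictly increasing subsequences needed to empty nums,
    computed as the length of the longest non-increasing subsequence."""
    n = len(nums)
    # dp[i] = length of the longest non-increasing subsequence ending at i
    dp = [1] * n
    for i in range(n):
        for j in range(i):
            # nums[j] >= nums[i] lets the subsequence ending at j extend to i
            if nums[j] >= nums[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)
```

On the examples above, min_removals_dp([5,3,1,4,2]) returns 3 (one longest non-increasing subsequence is [5, 3, 2]), matching the expected output.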
Approach 2: Greedy + Binary Search (O(n log n) time, O(n) space)
The optimal approach adapts the patience sorting technique used for LIS problems. Maintain an array tails where each value represents the largest possible ending value of a non‑increasing subsequence of a given length. Iterate through the array and use binary search to find the position where the current element should update tails. If the element extends the sequence, append it; otherwise replace the appropriate position. This greedy replacement keeps subsequence endings as flexible as possible and guarantees the longest non‑increasing subsequence length is found.
Binary search ensures each insertion or replacement runs in O(log n), giving overall O(n log n) time. The size of tails at the end equals the answer: the minimum number of increasing subsequences required to remove the array.
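A common way to implement this in Python is to negate each element, which turns "longest non-increasing subsequence" into "longest non-decreasing subsequence" and lets the standard bisect machinery apply. This is a sketch under that assumption; min_removals is an illustrative name:

```python
import bisect


def min_removals(nums: list[int]) -> int:
    """Length of the longest non-increasing subsequence of nums,
    which equals the minimum number of strictly increasing
    subsequences needed to remove the whole array."""
    # tails[k] is the negated ending value of a non-decreasing subsequence
    # of -nums of length k + 1; kept as small (i.e. as flexible) as possible.
    tails: list[int] = []
    for x in nums:
        # bisect_right allows equal values, so equal neighbors
        # (a non-strict step) still extend the subsequence
        i = bisect.bisect_right(tails, -x)
        if i == len(tails):
            tails.append(-x)   # extends the longest subsequence found so far
        else:
            tails[i] = -x      # greedy replacement keeps endings flexible
    return len(tails)
```

Each element triggers one binary search and one O(1) update, giving the stated O(n log n) time with O(n) extra space for tails.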
This pattern appears frequently in problems involving subsequence partitioning and ordering constraints. If you want to strengthen the underlying concepts, review array traversal patterns and binary search optimizations used in LIS-style problems under binary search.
Recommended for interviews: Interviewers expect the Greedy + Binary Search solution. Starting with the quadratic DP demonstrates understanding of subsequences, but optimizing it to O(n log n) using patience sorting shows strong algorithmic maturity and familiarity with sequence optimization techniques.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Dynamic Programming (Longest Non‑Increasing Subsequence) | O(n^2) | O(n) | Useful for understanding the subsequence relationship or when input size is small |
| Greedy + Binary Search (Patience Sorting) | O(n log n) | O(n) | Optimal approach for large arrays and typical interview expectations |