You are given a 2D integer matrix grid of size n x m, an integer array limits of length n, and an integer k. The task is to find the maximum sum of at most k elements from the matrix grid such that:
The number of elements taken from the ith row of grid does not exceed limits[i].
Return the maximum sum.
Example 1:
Input: grid = [[1,2],[3,4]], limits = [1,2], k = 2
Output: 7
Explanation:
4 + 3 = 7.

Example 2:
Input: grid = [[5,3,7],[8,2,6]], limits = [2,2], k = 3
Output: 21
Explanation:
7 + 8 + 6 = 21.
Constraints:
n == grid.length == limits.length
m == grid[i].length
1 <= n, m <= 500
0 <= grid[i][j] <= 10^5
0 <= limits[i] <= m
0 <= k <= min(n * m, sum(limits))

Problem Overview: You get a matrix of numbers and an array limits where limits[i] tells how many elements you can take from row i. From all allowed picks, choose at most k elements so the total sum is maximum.
Approach 1: Collect Candidates + Full Sort (O(nm log(nm)) time, O(nm) space)
Process each row independently. Sort the row in descending order and keep only its top limits[i] values, since any smaller value in that row can never help maximize the total. Add those values to a global list of candidates. Once all rows are processed, sort the candidate list in descending order and sum the largest k values. The logic is straightforward: reduce the matrix to only valid picks, then take the global top k. With up to n·m candidates, the final sort dominates at O(nm log(nm)) time, and storing the candidates takes O(nm) space.
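A minimal Python sketch of this approach (the function name max_sum_full_sort is illustrative, not from the original statement):

```python
def max_sum_full_sort(grid, limits, k):
    # Keep only each row's top limits[i] values; smaller row values
    # can never appear in an optimal selection.
    candidates = []
    for row, limit in zip(grid, limits):
        candidates.extend(sorted(row, reverse=True)[:limit])
    # Globally sort the valid picks and sum the largest k of them.
    candidates.sort(reverse=True)
    return sum(candidates[:k])
```

For Example 1, the candidate list becomes [4, 3, 2] and the top two values sum to 7.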
Approach 2: Greedy + Min-Heap (Priority Queue) (O(nm (log m + log k)) time, O(k) space)
This approach keeps only the best k elements seen so far. For each row, sort it in descending order and iterate over the first limits[i] values. Push each candidate into a min-heap; if the heap size exceeds k, remove the smallest element. The heap always stores the current top k values across all rows. The key insight: once the heap is full, any element smaller than its minimum cannot belong to the final answer. After processing all rows, sum the heap elements. Sorting each row costs O(m log m) and each heap operation costs O(log k), giving O(nm (log m + log k)) time overall while never storing more than k values.
The solution combines ideas from greedy algorithms, heap (priority queue), and sorting. Each row is treated as an independent pool of values, but the heap enforces the global constraint of selecting only the best k values overall.
Recommended for interviews: The Greedy + Min-Heap approach. Interviewers want to see that you limit each row to its best candidates and then maintain a global top-k structure. The full sorting approach demonstrates correct reasoning, but the heap solution shows stronger optimization and familiarity with priority queues.
We can use a priority queue (min-heap) pq to maintain the largest k elements.
Traverse each row, sort its elements in descending order, take the largest limits[i] values, and push them into pq. Whenever the size of pq exceeds k, pop the top (smallest) element of the heap.
Finally, sum the elements in pq.
The time complexity is O(n × m × (log m + log k)), and the space complexity is O(k). Here, n and m are the number of rows and columns of the matrix grid, respectively.
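The steps above can be sketched in Python's heapq module (the helper name max_sum_min_heap is my own):

```python
import heapq

def max_sum_min_heap(grid, limits, k):
    heap = []  # min-heap holding the best k values seen so far
    for row, limit in zip(grid, limits):
        # Only a row's top limits[i] values are valid candidates.
        for value in sorted(row, reverse=True)[:limit]:
            heapq.heappush(heap, value)
            if len(heap) > k:
                heapq.heappop(heap)  # evict the smallest; it can't be in the answer
    return sum(heap)
```

Note that when k == 0 the heap stays empty and the function correctly returns 0.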
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Collect Candidates + Full Sort | O(nm log(nm)) | O(nm) | Simplest implementation when constraints are small and memory is not a concern |
| Greedy + Min-Heap (Priority Queue) | O(nm (log m + log k)) | O(k) | Preferred when the candidate pool is large but only the top k values matter |