You are given two integers, m and k, and a stream of integers. Your task is to implement a data structure that calculates the MKAverage for the stream.
The MKAverage can be calculated using these steps:
1. If the number of elements in the stream is less than m, you should consider the MKAverage to be -1. Otherwise, copy the last m elements of the stream to a separate container.
2. Remove the smallest k elements and the largest k elements from the container.
3. Calculate the average value for the rest of the elements rounded down to the nearest integer.

Implement the MKAverage class:

- MKAverage(int m, int k) Initializes the MKAverage object with an empty stream and the two integers m and k.
- void addElement(int num) Inserts a new element num into the stream.
- int calculateMKAverage() Calculates and returns the MKAverage for the current stream rounded down to the nearest integer.

Example 1:
Input
["MKAverage", "addElement", "addElement", "calculateMKAverage", "addElement", "calculateMKAverage", "addElement", "addElement", "addElement", "calculateMKAverage"]
[[3, 1], [3], [1], [], [10], [], [5], [5], [5], []]
Output
[null, null, null, -1, null, 3, null, null, null, 5]
Explanation
MKAverage obj = new MKAverage(3, 1);
obj.addElement(3); // current elements are [3]
obj.addElement(1); // current elements are [3,1]
obj.calculateMKAverage(); // return -1, because m = 3 and only 2 elements exist.
obj.addElement(10); // current elements are [3,1,10]
obj.calculateMKAverage(); // The last 3 elements are [3,1,10].
// After removing the smallest and largest 1 element, the container will be [3].
// The average of [3] equals 3/1 = 3, return 3
obj.addElement(5); // current elements are [3,1,10,5]
obj.addElement(5); // current elements are [3,1,10,5,5]
obj.addElement(5); // current elements are [3,1,10,5,5,5]
obj.calculateMKAverage(); // The last 3 elements are [5,5,5].
// After removing smallest and largest 1 element the container will be [5].
// The average of [5] equals 5/1 = 5, return 5
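The walkthrough above can be reproduced with a straightforward brute-force version that simply sorts the last m elements on every query. This is a sketch (the class name MKAverageBrute is mine, chosen to distinguish it from the efficient design discussed below); it runs in O(m log m) per query, which is too slow for the full constraints but makes the definition concrete:

```python
from collections import deque

class MKAverageBrute:
    """Brute force: sort the last m elements on every query."""

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.window = deque(maxlen=m)  # automatically keeps only the last m elements

    def addElement(self, num):
        self.window.append(num)

    def calculateMKAverage(self):
        if len(self.window) < self.m:
            return -1
        # drop the k smallest and k largest, then floor-average the rest
        trimmed = sorted(self.window)[self.k : self.m - self.k]
        return sum(trimmed) // len(trimmed)
```

Running the sequence from Example 1 through this class yields -1, 3, and 5, matching the expected output.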
Constraints:
- 3 <= m <= 10^5
- 1 <= k*2 < m
- 1 <= num <= 10^5
- At most 10^5 calls will be made to addElement and calculateMKAverage.

Finding MK Average is a data stream design problem where you maintain the last m elements and compute the average after removing the k smallest and k largest values. The challenge lies in efficiently updating the structure as new elements arrive and old ones leave the sliding window.
A common strategy is to divide the window into three parts: the k smallest elements, the middle elements used for averaging, and the k largest elements. Using ordered sets, multisets, or balanced heaps allows efficient insertion, deletion, and rebalancing as the stream evolves. By maintaining a running sum of the middle section, the MK average can be computed in constant time when needed.
The key difficulty is keeping these partitions balanced when elements are added or removed. With properly managed ordered containers, each update operation can be handled in O(log m) time while using O(m) space for the sliding window.
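The three-partition strategy above can be sketched as follows. For brevity this sketch keeps each partition as a sorted Python list maintained with bisect; note that insort on a plain list costs O(m) per shift, so achieving the true O(log m) bound requires a balanced BST or an order-statistics container such as sortedcontainers.SortedList (the partition names lo/mid/hi and the _rebalance helper are my own naming):

```python
from collections import deque
from bisect import insort, bisect_left

class MKAverage:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.window = deque()                     # last m elements in arrival order
        self.lo, self.mid, self.hi = [], [], []   # three sorted partitions
        self.mid_sum = 0                          # running sum of the middle partition

    def _rebalance(self):
        # Restore the invariant: lo holds the k smallest, hi the k largest.
        while len(self.lo) > self.k:              # lo overfull: push its max into mid
            v = self.lo.pop()
            insort(self.mid, v)
            self.mid_sum += v
        while len(self.lo) < self.k and self.mid:  # lo underfull: pull mid's min
            v = self.mid.pop(0)
            self.mid_sum -= v
            insort(self.lo, v)
        while len(self.hi) > self.k:              # hi overfull: push its min into mid
            v = self.hi.pop(0)
            insort(self.mid, v)
            self.mid_sum += v
        while len(self.hi) < self.k and self.mid:  # hi underfull: pull mid's max
            v = self.mid.pop()
            self.mid_sum -= v
            insort(self.hi, v)

    def addElement(self, num):
        self.window.append(num)
        # Route the new element to the partition whose value range contains it.
        if self.lo and num <= self.lo[-1]:
            insort(self.lo, num)
        elif self.hi and num >= self.hi[0]:
            insort(self.hi, num)
        else:
            insort(self.mid, num)
            self.mid_sum += num
        if len(self.window) > self.m:             # evict the element leaving the window
            old = self.window.popleft()
            if self.lo and old <= self.lo[-1]:
                self.lo.pop(bisect_left(self.lo, old))
            elif self.hi and old >= self.hi[0]:
                self.hi.pop(bisect_left(self.hi, old))
            else:
                self.mid.pop(bisect_left(self.mid, old))
                self.mid_sum -= old
        self._rebalance()

    def calculateMKAverage(self):
        if len(self.window) < self.m:
            return -1
        # mid holds exactly m - 2k elements once the window is full
        return self.mid_sum // (self.m - 2 * self.k)
```

Because mid_sum is updated on every boundary move, the query itself is O(1); all the work happens during addElement as elements cross partition boundaries.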
| Approach | Time Complexity | Space Complexity |
|---|---|---|
| Ordered Sets / Multiset Partitioning | O(log m) per add operation, O(1) query | O(m) |
| Heap-Based Partitioning with Rebalancing | O(log m) per update | O(m) |
Use these hints if you're stuck. Try solving on your own first.
At each query, try to save and update the sum of the elements needed to calculate MKAverage.
You can use BSTs for fast insertion and deletion of the elements.
Yes, problems involving streaming data, sliding windows, and advanced data structure design are common in FAANG-style interviews. This question tests understanding of heaps, ordered containers, and efficient state maintenance in a data stream.
Ordered sets, multisets, or balanced binary search trees are commonly used because they support logarithmic insertion and deletion. Some implementations also combine heaps with lazy deletion to manage the smallest and largest partitions efficiently.
The optimal approach maintains the last m elements and splits them into three groups: the k smallest, the middle elements, and the k largest. Using ordered sets or balanced structures allows efficient insertion, removal, and rebalancing while keeping a running sum of the middle elements.
The difficulty comes from maintaining a sliding window with dynamic partitioning. Every insertion may require rebalancing elements across multiple structures while preserving correct counts and sums for the average calculation.