The problem involves tracking the frequency of IDs in a collection that changes over time. You have two integer arrays, nums and freq, of equal length n. Each element in nums represents an ID, and the corresponding element in freq indicates how many times that ID should be added to or removed from the collection at each step.
- If freq[i] is positive, it means freq[i] IDs with the value nums[i] are added to the collection at step i.
- If freq[i] is negative, it means -freq[i] IDs with the value nums[i] are removed from the collection at step i.

Return an array ans of length n, where ans[i] represents the count of the most frequent ID in the collection after the ith step. If the collection is empty after a step, ans[i] should be 0 for that step.
Example 1:
Input: nums = [2,3,2,1], freq = [3,2,-3,1]
Output: [3,3,2,2]
Explanation:
After step 0, we have 3 IDs with the value of 2. So ans[0] = 3.
After step 1, we have 3 IDs with the value of 2 and 2 IDs with the value of 3. So ans[1] = 3.
After step 2, we have 2 IDs with the value of 3. So ans[2] = 2.
After step 3, we have 2 IDs with the value of 3 and 1 ID with the value of 1. So ans[3] = 2.
Example 2:
Input: nums = [5,5,3], freq = [2,-2,1]
Output: [2,0,1]
Explanation:
After step 0, we have 2 IDs with the value of 5. So ans[0] = 2.
After step 1, there are no IDs. So ans[1] = 0.
After step 2, we have 1 ID with the value of 3. So ans[2] = 1.
Constraints:
- 1 <= nums.length == freq.length <= 10^5
- 1 <= nums[i] <= 10^5
- -10^5 <= freq[i] <= 10^5
- freq[i] != 0

Problem Overview: You receive two arrays: nums (IDs) and freq (frequency changes). At step i, the frequency of nums[i] changes by freq[i]. After applying each update, report the current maximum frequency among all IDs.
The challenge is maintaining the highest frequency dynamically while updates continuously modify counts. A naive scan after every update would be too slow, so efficient data structures are required to track frequencies in near real time.
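To make the cost of the naive approach concrete, here is a minimal brute-force sketch (the function name is ours): it applies each update in O(1) but then rescans every live count, which is O(n) per step and O(n^2) overall, too slow for n up to 10^5.

```python
from collections import defaultdict

def most_frequent_ids_bruteforce(nums, freq):
    """Naive baseline: apply each update, then scan all counts for the max."""
    cnt = defaultdict(int)
    ans = []
    for x, f in zip(nums, freq):
        cnt[x] += f
        if cnt[x] == 0:
            del cnt[x]  # drop IDs whose count returns to zero
        # O(n) scan of all live counts at every step -- the bottleneck.
        ans.append(max(cnt.values()) if cnt else 0)
    return ans
```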
Approach 1: Divide and Conquer (Segment Tree over ID Counts) (Time: O(n log n), Space: O(n))
This approach treats the problem as a dynamic range query. First maintain a running frequency map for each ID using a hash table. Each update modifies a single ID’s count. A segment tree or balanced structure tracks the maximum frequency among all IDs. Every update propagates through the tree in O(log n), while the maximum value can be queried in constant time from the root. This divide‑and‑conquer structure splits the range of IDs into segments and combines results from child nodes to compute the global maximum.
This approach works well when the ID space is large but updates are frequent. It keeps operations predictable and avoids repeated scanning of all IDs.
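A sketch of this idea, assuming coordinate compression over the distinct IDs (the function name and the iterative bottom-up tree layout are our choices, not mandated by the problem): each leaf holds one ID's current count, internal nodes hold the max of their children, and the root is the answer after each step.

```python
def most_frequent_ids_segtree(nums, freq):
    """Segment tree over compressed ID values; root holds the global max count."""
    ids = sorted(set(nums))
    pos = {v: i for i, v in enumerate(ids)}  # coordinate compression
    m = len(ids)
    size = 1
    while size < m:
        size *= 2
    tree = [0] * (2 * size)  # iterative (bottom-up) segment tree

    def update(i, value):
        i += size
        tree[i] = value  # set the leaf for this ID
        i //= 2
        while i:  # recompute maxima along the root-to-leaf path
            tree[i] = max(tree[2 * i], tree[2 * i + 1])
            i //= 2

    cnt = {}
    ans = []
    for x, f in zip(nums, freq):
        cnt[x] = cnt.get(x, 0) + f
        update(pos[x], cnt[x])
        ans.append(tree[1])  # root = current maximum count (0 if all empty)
    return ans
```

Each update touches O(log n) tree nodes, and reading the maximum is a single array access at the root.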
Approach 2: Iterative Approach Using Hash Map + Max Heap (Time: O(n log n), Space: O(n))
The more common solution maintains frequencies using a hash table and tracks the maximum using a max heap (priority queue). For every update, adjust the stored frequency of the ID in the map. Then push the new pair (frequency, id) into the heap. Because heap entries may become outdated after future updates, lazy deletion is used: repeatedly pop the heap top until the frequency matches the current value in the map.
This strategy ensures that the heap always reveals the correct maximum frequency. Each push and cleanup operation costs O(log n), and every element is inserted a limited number of times, keeping total complexity manageable.
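The strategy above can be sketched as follows, pushing (frequency, id) pairs and validating the heap top against the live map (function name is ours; Python's heapq is a min-heap, so counts are negated):

```python
import heapq
from collections import defaultdict

def most_frequent_ids_heap(nums, freq):
    """Hash map for live counts + max heap with lazy deletion of stale entries."""
    cnt = defaultdict(int)
    heap = []  # entries are (-count, id); negation makes heapq a max heap
    ans = []
    for x, f in zip(nums, freq):
        cnt[x] += f
        heapq.heappush(heap, (-cnt[x], x))  # push the new (count, id) pair
        # Lazy deletion: discard tops whose count no longer matches the map.
        while heap and -heap[0][0] != cnt[heap[0][1]]:
            heapq.heappop(heap)
        ans.append(-heap[0][0] if heap else 0)
    return ans
```

Each of the n steps does one push plus amortized pops, and every entry is pushed exactly once, so total work stays O(n log n).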
An alternative implementation replaces the heap with an ordered set or balanced tree keyed by frequency. That allows direct updates and quick retrieval of the largest frequency.
Recommended for interviews: The hash map + max heap approach is what most interviewers expect. It shows you can combine a frequency map with a priority queue and handle stale entries using lazy deletion. Discussing a simpler brute force scan demonstrates understanding, but implementing the heap-based solution proves you can optimize dynamic queries efficiently.
The divide and conquer strategy breaks a problem into smaller sub-problems until they become simple enough to solve directly. Here, the range of distinct IDs is split recursively into halves until each segment covers a single ID. Each leaf of the segment tree stores one ID's current count, each internal node stores the maximum over its segment, and the root therefore always holds the global maximum. A point update only touches the O(log n) nodes along one root-to-leaf path.
Time Complexity: O(n log n) for n updates.
Space Complexity: O(n) for the tree, plus O(log n) recursion stack in a recursive implementation.
An iterative implementation replaces recursion with explicit auxiliary data structures: the hash map and heap of Approach 2, or a bottom-up segment tree that walks parent indices in a loop. Avoiding system recursion sidesteps stack-overflow risk on large inputs and reduces function-call overhead, which matters in languages without deep default stacks.
Time Complexity: O(n log n)
Space Complexity: O(n) for the auxiliary structures.
We use a hash table cnt to record the current count of each ID, a hash table lazy to record how many stale copies of each count value remain in the heap, and a priority queue pq of count values to maintain the maximum.
For each operation (x, f), we first mark the old count as stale by incrementing lazy[cnt[x]], since that value (if present in pq) no longer reflects x's count. Then we update cnt[x] by adding f, and push the new cnt[x] into pq. Next, while the top of pq is a stale value (lazy[top] > 0), we decrement its lazy counter and pop it. Finally, if pq is non-empty after this cleanup, its top is the current maximum count and we append it to the answer array; otherwise we append 0.
The time complexity is O(n log n) and the space complexity is O(n), where n is the length of the array nums.
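The steps above translate almost line for line into the following sketch (function name is ours). Unlike the (frequency, id) variant, this one stores bare count values in the heap and uses the lazy table to cancel stale ones:

```python
import heapq
from collections import defaultdict

def most_frequent_ids(nums, freq):
    cnt = defaultdict(int)   # live count per ID
    lazy = defaultdict(int)  # stale count values pending removal from pq
    pq = []                  # max heap via negated counts
    ans = []
    for x, f in zip(nums, freq):
        lazy[cnt[x]] += 1    # the old count of x is now stale
        cnt[x] += f
        heapq.heappush(pq, -cnt[x])
        # Pop stale maxima until the top reflects a live count.
        while pq and lazy[-pq[0]] > 0:
            lazy[-pq[0]] -= 1
            heapq.heappop(pq)
        ans.append(-pq[0] if pq else 0)
    return ans
```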
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Brute Force Frequency Scan | O(n^2) | O(n) | Only for understanding the problem or very small inputs |
| Divide and Conquer (Segment Tree) | O(n log n) | O(n) | When frequent updates require structured range aggregation |
| Hash Map + Max Heap (Lazy Deletion) | O(n log n) | O(n) | General case; simple and widely used interview solution |
| Hash Map + Ordered Set | O(n log n) | O(n) | When a language provides efficient balanced tree or multiset operations |