Design a data structure to store the strings' count with the ability to return the strings with minimum and maximum counts.
Implement the AllOne class:
- `AllOne()` Initializes the object of the data structure.
- `inc(String key)` Increments the count of the string `key` by 1. If `key` does not exist in the data structure, insert it with count 1.
- `dec(String key)` Decrements the count of the string `key` by 1. If the count of `key` is 0 after the decrement, remove it from the data structure. It is guaranteed that `key` exists in the data structure before the decrement.
- `getMaxKey()` Returns one of the keys with the maximal count. If no element exists, return an empty string `""`.
- `getMinKey()` Returns one of the keys with the minimum count. If no element exists, return an empty string `""`.

Note that each function must run in O(1) average time complexity.
Example 1:
Input
["AllOne", "inc", "inc", "getMaxKey", "getMinKey", "inc", "getMaxKey", "getMinKey"]
[[], ["hello"], ["hello"], [], [], ["leet"], [], []]
Output
[null, null, null, "hello", "hello", null, "hello", "leet"]
Explanation
AllOne allOne = new AllOne();
allOne.inc("hello");
allOne.inc("hello");
allOne.getMaxKey(); // return "hello"
allOne.getMinKey(); // return "hello"
allOne.inc("leet");
allOne.getMaxKey(); // return "hello"
allOne.getMinKey(); // return "leet"
Constraints:
- 1 <= key.length <= 10
- key consists of lowercase English letters.
- It is guaranteed that for each call to dec, key is existing in the data structure.
- At most 5 * 10^4 calls will be made to inc, dec, getMaxKey, and getMinKey.

The key challenge in #432 All O`one Data Structure is supporting inc, dec, getMaxKey, and getMinKey operations in O(1) time. A common approach combines a hash table with a doubly-linked list. The hash map stores the mapping from each key to its corresponding node or bucket, enabling constant-time access when updating counts.
The doubly-linked list organizes buckets by frequency, where each node represents a specific count and holds all keys with that count. When a key is incremented or decremented, it moves to the adjacent bucket representing the updated frequency. If the required bucket does not exist, a new one is created and inserted in the correct position. This structure ensures that the head and tail of the list always represent the minimum and maximum frequencies.
By maintaining this ordered bucket structure and direct key references through hashing, all required operations can be executed efficiently with O(1) time and controlled space usage.
| Approach | Time Complexity | Space Complexity |
|---|---|---|
| Hash Map + Doubly Linked List Buckets | O(1) for inc, dec, getMaxKey, getMinKey | O(N) |
Ashish Pratap Singh
This approach involves using a hash map to store the frequency of each key and a doubly linked list (DLL) to keep track of the keys at each frequency level. Each node in the DLL represents a unique frequency and holds a set of keys that have the same frequency. This data structure allows us to efficiently update, delete, and access keys while keeping track of the frequencies.
Time Complexity: Each of the operations—inc, dec, getMaxKey, and getMinKey—takes O(1) on average.
Space Complexity: O(K) for storing the keys and their corresponding nodes, where K is the number of unique keys.
```cpp
#include <iterator>
#include <list>
#include <string>
#include <unordered_map>
#include <unordered_set>
using namespace std;

class AllOne {
public:
    AllOne() {
        // Sentinel bucket with count 0; new keys start here before
        // being moved forward to the count-1 bucket.
        bucketList.emplace_back(0);
        head = bucketList.begin();
    }

    void inc(string key) {
        if (!keyCountMap.count(key)) {
            keyCountMap[key] = head;
            head->keys.insert(key);
        }
        moveForward(key);
    }

    void dec(string key) {
        moveBackward(key);
    }

    string getMaxKey() {
        // The last bucket holds the maximal count; only the sentinel
        // remains when the structure is empty.
        if (bucketList.size() == 1) return "";
        return *bucketList.back().keys.begin();
    }

    string getMinKey() {
        // The first bucket after the sentinel holds the minimal count.
        if (bucketList.size() == 1) return "";
        return *next(head)->keys.begin();
    }

private:
    struct Bucket {
        int count;
        unordered_set<string> keys;
        Bucket(int c) : count(c) {}
    };
    list<Bucket> bucketList;                           // ascending by count
    unordered_map<string, list<Bucket>::iterator> keyCountMap;
    list<Bucket>::iterator head;                       // count-0 sentinel

    // Move key to the bucket with count + 1, creating it if missing.
    void moveForward(const string& key) {
        auto it = keyCountMap[key];
        auto itNext = next(it);
        if (itNext == bucketList.end() || itNext->count != it->count + 1) {
            itNext = bucketList.insert(itNext, Bucket(it->count + 1));
        }
        keyCountMap[key] = itNext;
        itNext->keys.insert(key);
        it->keys.erase(key);
        if (it->keys.empty() && it != head) {
            bucketList.erase(it);
        }
    }

    // Move key to the bucket with count - 1, removing the key
    // entirely when its count drops to 0.
    void moveBackward(const string& key) {
        auto it = keyCountMap[key];
        if (it->count == 1) {
            keyCountMap.erase(key);
        } else {
            auto itPrev = prev(it);
            if (itPrev->count != it->count - 1) {
                itPrev = bucketList.insert(it, Bucket(it->count - 1));
            }
            keyCountMap[key] = itPrev;
            itPrev->keys.insert(key);
        }
        it->keys.erase(key);
        if (it->keys.empty() && it != head) {
            bucketList.erase(it);
        }
    }
};
```
This solution uses an unordered map to jump directly to the bucket holding a key's current count. The list stores buckets of keys grouped by frequency in ascending order, so an increment or decrement only touches a bucket and its immediate neighbor, preserving O(1) complexity.
The Java version follows the same design: a hash map gives constant-time access to each key's bucket, while a doubly-linked list of frequency buckets keeps counts in order. Keys move forward and backward among adjacent buckets when incrementing and decrementing, respectively, which keeps the solution optimal in both speed and space.
Time Complexity: Each operation incurs only O(1) time on average due to the hash map use.
Space Complexity: O(K) in terms of storage for keys and nodes, where K is the number of unique keys.
```java
import java.util.
```
Yes, this problem is commonly asked in top tech company interviews because it tests system design thinking, data structure composition, and the ability to achieve strict O(1) operations using multiple structures together.
The optimal approach combines a hash table with a doubly-linked list of frequency buckets. The hash map gives constant-time access to each key, while the linked list maintains keys grouped by count, allowing O(1) updates and retrieval of minimum and maximum keys.
A doubly linked list allows efficient insertion and removal of frequency buckets while maintaining order. Since each bucket represents a count, keys can move to neighboring buckets during increment or decrement operations in constant time.
A combination of a hash map and a doubly-linked list is the most effective structure. The map tracks each key's location, while the linked list maintains ordered frequency buckets so keys can move between counts efficiently.
This Java implementation leverages a doubly linked list of buckets, where each bucket holds keys with the same frequency. The frequency is managed by the bucket list, enabling constant-time updates. Each operation (inc, dec, getMaxKey, and getMinKey) requires only local adjustments within a bucket, which makes this structure efficient and fast.