Design and implement a data structure for a Least Frequently Used (LFU) cache.
Implement the LFUCache class:
LFUCache(int capacity) Initializes the object with the capacity of the data structure.
int get(int key) Gets the value of the key if the key exists in the cache. Otherwise, returns -1.
void put(int key, int value) Updates the value of the key if present, or inserts the key if not already present. When the cache reaches its capacity, it should invalidate and remove the least frequently used key before inserting a new item. For this problem, when there is a tie (i.e., two or more keys with the same frequency), the least recently used key would be invalidated.
To determine the least frequently used key, a use counter is maintained for each key in the cache. The key with the smallest use counter is the least frequently used key.
When a key is first inserted into the cache, its use counter is set to 1 (due to the put operation). The use counter for a key in the cache is incremented whenever a get or put operation is called on it.
The functions get and put must each run in O(1) average time complexity.
Example 1:
Input
["LFUCache", "put", "put", "get", "put", "get", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [3], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, 3, null, -1, 3, 4]
Explanation
// cnt(x) = the use counter for key x
// cache=[] will show the last used order for tiebreakers (leftmost element is most recent)
LFUCache lfu = new LFUCache(2);
lfu.put(1, 1); // cache=[1,_], cnt(1)=1
lfu.put(2, 2); // cache=[2,1], cnt(2)=1, cnt(1)=1
lfu.get(1); // return 1
// cache=[1,2], cnt(2)=1, cnt(1)=2
lfu.put(3, 3); // 2 is the LFU key because cnt(2)=1 is the smallest, invalidate 2.
// cache=[3,1], cnt(3)=1, cnt(1)=2
lfu.get(2); // return -1 (not found)
lfu.get(3); // return 3
// cache=[3,1], cnt(3)=2, cnt(1)=2
lfu.put(4, 4); // Both 1 and 3 have the same cnt, but 1 is LRU, invalidate 1.
// cache=[4,3], cnt(4)=1, cnt(3)=2
lfu.get(1); // return -1 (not found)
lfu.get(3); // return 3
// cache=[3,4], cnt(4)=1, cnt(3)=3
lfu.get(4); // return 4
// cache=[4,3], cnt(4)=2, cnt(3)=3
Constraints:
1 <= capacity <= 10^4
0 <= key <= 10^5
0 <= value <= 10^9
At most 2 * 10^5 calls will be made to get and put.
In this approach, we use a combination of a HashMap (or Dictionary) and a double linked list to ensure O(1) operations for both get and put. The HashMap stores keys and values as well as a reference to the corresponding node in the double linked list, effectively allowing for quick updates and removals. The double linked list maintains the order of keys based on use frequency and recency to guarantee the correct eviction policy.
The Python solution involves two custom classes: Node and DoubleLinkedList. The Node class holds key-value pairs, frequency counts, and pointers to previous and next nodes. The DoubleLinkedList class is used to maintain the order based on frequency and recency of use.
The LFUCache class maintains two dictionaries: node_map which maps keys to nodes, and freq_map which maps frequencies to linked lists. The cache operations (get/put) are implemented to update the frequency of nodes appropriately and handle cache capacity by removing the least frequently and recently used nodes when needed.
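The approach above can be sketched as follows. This is a minimal illustrative implementation (names like `push_front` and `_touch` are choices for this sketch, not mandated by the problem): each frequency maps to its own doubly linked list ordered by recency, a `min_freq` field tracks the smallest frequency present, and eviction pops from the back of the `min_freq` list.

```python
class Node:
    """Doubly linked list node holding a key-value pair and its use counter."""
    def __init__(self, key=0, value=0):
        self.key, self.value = key, value
        self.freq = 1
        self.prev = self.next = None

class DoubleLinkedList:
    """Nodes sharing one frequency; front is most recently used, back is least."""
    def __init__(self):
        self.head, self.tail = Node(), Node()   # sentinels
        self.head.next, self.tail.prev = self.tail, self.head
        self.size = 0

    def push_front(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node
        self.size += 1

    def remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev
        self.size -= 1

    def pop_back(self):
        node = self.tail.prev
        self.remove(node)
        return node

class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.min_freq = 0
        self.node_map = {}   # key -> Node
        self.freq_map = {}   # frequency -> DoubleLinkedList

    def _touch(self, node):
        """Move node from its current frequency list to the next one."""
        old_list = self.freq_map[node.freq]
        old_list.remove(node)
        if node.freq == self.min_freq and old_list.size == 0:
            self.min_freq += 1
        node.freq += 1
        self.freq_map.setdefault(node.freq, DoubleLinkedList()).push_front(node)

    def get(self, key):
        if key not in self.node_map:
            return -1
        node = self.node_map[key]
        self._touch(node)
        return node.value

    def put(self, key, value):
        if self.capacity == 0:
            return
        if key in self.node_map:
            node = self.node_map[key]
            node.value = value
            self._touch(node)
            return
        if len(self.node_map) >= self.capacity:
            # Evict the least recently used key among the least frequent ones.
            victim = self.freq_map[self.min_freq].pop_back()
            del self.node_map[victim.key]
        node = Node(key, value)
        self.node_map[key] = node
        self.freq_map.setdefault(1, DoubleLinkedList()).push_front(node)
        self.min_freq = 1
```

Running the example from the problem statement against this sketch reproduces the expected outputs: `get(1)` returns 1, then after `put(3, 3)` evicts key 2, `get(2)` returns -1, and so on.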
Time Complexity: O(1) for get and put operations due to the use of HashMap and Doubly Linked List.
Space Complexity: O(capacity) due to storage of nodes and frequency mappings.
The OrderedDict based approach uses Python's built-in OrderedDict to efficiently track keys while maintaining the insertion order. By managing the dictionary's order, we can track the frequency of access and modify it when performing get or put operations. This may not offer the theoretical O(1) complexity for all operations but is a practical solution with simplicity in Python.
The OrderedDict solution keeps track of keys and their frequencies in separate dictionaries. key_freq_map maps the key to its frequency, while freq_map has OrderedDicts keyed by frequency with keys as their values. This way, we can update both frequency and recency of accesses.
Because OrderedDict is built into Python, the implementation stays concise and leans on existing functionality to manage ordering, even if not every operation runs in strict O(1) time.
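The OrderedDict layout described above can be sketched like this (helper names such as `_bump` are choices for this sketch): `key_freq_map` maps each key to its frequency, while `freq_map[f]` is an OrderedDict of key-value pairs at frequency `f`, with the least recently used key at the front so `popitem(last=False)` evicts it.

```python
from collections import defaultdict, OrderedDict

class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.min_freq = 0
        self.key_freq_map = {}                    # key -> frequency
        self.freq_map = defaultdict(OrderedDict)  # frequency -> {key: value}, LRU first

    def _bump(self, key):
        """Move key to the next frequency bucket; re-appending marks it most recent."""
        freq = self.key_freq_map[key]
        value = self.freq_map[freq].pop(key)
        if not self.freq_map[freq]:
            del self.freq_map[freq]
            if self.min_freq == freq:
                self.min_freq += 1
        self.key_freq_map[key] = freq + 1
        self.freq_map[freq + 1][key] = value
        return value

    def get(self, key):
        if key not in self.key_freq_map:
            return -1
        return self._bump(key)

    def put(self, key, value):
        if self.capacity == 0:
            return
        if key in self.key_freq_map:
            self._bump(key)
            self.freq_map[self.key_freq_map[key]][key] = value
            return
        if len(self.key_freq_map) >= self.capacity:
            # popitem(last=False) removes the least recently used key at min_freq.
            old_key, _ = self.freq_map[self.min_freq].popitem(last=False)
            del self.key_freq_map[old_key]
            if not self.freq_map[self.min_freq]:
                del self.freq_map[self.min_freq]
        self.key_freq_map[key] = 1
        self.freq_map[1][key] = value
        self.min_freq = 1
```

The design choice here is that OrderedDict already gives O(1) insertion, deletion, and front-pop, so one bucket per frequency is enough to encode both the frequency order and the recency tiebreak.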
Time Complexity: Average O(1) for both get and put, due to the effectiveness of OrderedDict for these operations.
Space Complexity: O(capacity) for storing nodes and their frequency mappings.
| Approach | Time Complexity | Space Complexity |
|---|---|---|
| HashMap and Double Linked List Approach | O(1) for get and put | O(capacity) |
| OrderedDict Based Approach | Average O(1) for get and put | O(capacity) |