Video solutions are available for LFU Cache, a hard-level problem involving Hash Table, Linked List, and Design; the walkthrough by take U forward has 199,028 views. Want to try solving it yourself? Practice on LeetCode or read the detailed text solution below.
Design and implement a data structure for a Least Frequently Used (LFU) cache.
Implement the LFUCache class:
- LFUCache(int capacity) Initializes the object with the capacity of the data structure.
- int get(int key) Gets the value of the key if the key exists in the cache; otherwise, returns -1.
- void put(int key, int value) Updates the value of the key if present, or inserts the key-value pair if not already present. When the cache reaches its capacity, it should invalidate and remove the least frequently used key before inserting the new item. For this problem, when there is a tie (i.e., two or more keys with the same frequency), the least recently used key is invalidated.

To determine the least frequently used key, a use counter is maintained for each key in the cache. The key with the smallest use counter is the least frequently used key.
When a key is first inserted into the cache, its use counter is set to 1 (due to the put operation). The use counter for a key in the cache is incremented whenever a get or put operation is called on it.
The functions get and put must each run in O(1) average time complexity.
Example 1:
Input
["LFUCache", "put", "put", "get", "put", "get", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [3], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, 3, null, -1, 3, 4]
Explanation
// cnt(x) = the use counter for key x
// cache=[] will show the last used order for tiebreakers (leftmost element is most recent)
LFUCache lfu = new LFUCache(2);
lfu.put(1, 1); // cache=[1,_], cnt(1)=1
lfu.put(2, 2); // cache=[2,1], cnt(2)=1, cnt(1)=1
lfu.get(1); // return 1
// cache=[1,2], cnt(2)=1, cnt(1)=2
lfu.put(3, 3); // 2 is the LFU key because cnt(2)=1 is the smallest, invalidate 2.
// cache=[3,1], cnt(3)=1, cnt(1)=2
lfu.get(2); // return -1 (not found)
lfu.get(3); // return 3
// cache=[3,1], cnt(3)=2, cnt(1)=2
lfu.put(4, 4); // Both 1 and 3 have the same cnt, but 1 is LRU, invalidate 1.
// cache=[4,3], cnt(4)=1, cnt(3)=2
lfu.get(1); // return -1 (not found)
lfu.get(3); // return 3
// cache=[3,4], cnt(4)=1, cnt(3)=3
lfu.get(4); // return 4
// cache=[4,3], cnt(4)=2, cnt(3)=3
Constraints:
1 <= capacity <= 10^4
0 <= key <= 10^5
0 <= value <= 10^9
At most 2 * 10^5 calls will be made to get and put.
Problem Overview: Design a cache that evicts the Least Frequently Used (LFU) key when capacity is full. Each operation get(key) and put(key, value) must run in constant time. When multiple keys share the same frequency, the least recently used among them is removed.
Approach 1: HashMap + Frequency Doubly Linked Lists (O(1) time, O(n) space)
The optimal design groups cache entries by their access frequency. Maintain a hash table mapping keys to nodes, and another map from frequency to a doubly linked list that stores nodes with that frequency. Each node stores key, value, and freq. When you call get, look up the node in O(1), remove it from its current frequency list, increment its frequency, and insert it into the next list. Track the current minimum frequency so eviction is constant time: remove the least recently used node from the list representing that frequency. This structure keeps frequency updates and eviction both O(1).
This approach relies on two invariants: (1) nodes are always stored in a list matching their current frequency, and (2) within each frequency list, order represents recency. Because insertion and removal in a linked list are constant time, both cache operations stay O(1). This is the standard interview solution and the one most production-grade LFU implementations follow.
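The design above can be sketched in Python. This is an illustrative implementation, not the walkthrough author's code: `Node`, `DList`, and the two maps (`nodes` for key lookup, `freq_lists` grouping nodes by frequency) are names chosen here. Each frequency bucket is a doubly linked list whose front is the most recently used node, so eviction pops from the back of the minimum-frequency list.

```python
class Node:
    """Doubly linked list node holding one cache entry."""
    def __init__(self, key=0, val=0):
        self.key, self.val, self.freq = key, val, 1
        self.prev = self.next = None

class DList:
    """Doubly linked list with sentinel head/tail; front = most recently used."""
    def __init__(self):
        self.head, self.tail = Node(), Node()
        self.head.next, self.tail.prev = self.tail, self.head
        self.size = 0

    def push_front(self, node):
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node
        self.size += 1

    def remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev
        self.size -= 1

    def pop_back(self):
        node = self.tail.prev
        self.remove(node)
        return node

class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.nodes = {}       # key -> Node
        self.freq_lists = {}  # frequency -> DList of nodes with that frequency
        self.min_freq = 0

    def _bump(self, node):
        """Move a node from its current frequency list into the next one."""
        lst = self.freq_lists[node.freq]
        lst.remove(node)
        # If the minimum-frequency list just emptied, the minimum rises by one.
        if node.freq == self.min_freq and lst.size == 0:
            self.min_freq += 1
        node.freq += 1
        self.freq_lists.setdefault(node.freq, DList()).push_front(node)

    def get(self, key):
        if key not in self.nodes:
            return -1
        node = self.nodes[key]
        self._bump(node)
        return node.val

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key in self.nodes:
            node = self.nodes[key]
            node.val = value
            self._bump(node)
            return
        if len(self.nodes) >= self.capacity:
            # Evict the LRU node within the minimum-frequency list.
            evicted = self.freq_lists[self.min_freq].pop_back()
            del self.nodes[evicted.key]
        node = Node(key, value)
        self.nodes[key] = node
        self.freq_lists.setdefault(1, DList()).push_front(node)
        self.min_freq = 1  # a brand-new key always has frequency 1
```

Running the problem's Example 1 against this sketch reproduces the expected output sequence, including both eviction cases (lowest frequency, and the LRU tiebreak).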
Approach 2: OrderedDict Based LFU (O(1) average time, O(n) space)
Python provides OrderedDict, which maintains insertion order and allows efficient deletion from both ends. You can maintain a dictionary mapping frequencies to OrderedDicts of keys. Another map stores key → (value, frequency). When a key is accessed, remove it from its current OrderedDict, increment its frequency, and insert it into the next one. Eviction happens by removing the first key from the OrderedDict corresponding to the minimum frequency.
This version avoids implementing your own doubly linked list and is easier to code in Python. The tradeoff is less explicit control over the underlying structure, but operations remain O(1) on average because OrderedDict internally uses a hash table and linked list.
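A possible sketch of the OrderedDict variant follows; the names `entries` (key → (value, frequency)) and `buckets` (frequency → OrderedDict of keys in recency order) are chosen for this example. `popitem(last=False)` removes the oldest key in the minimum-frequency bucket, which is exactly the LRU tiebreak.

```python
from collections import defaultdict, OrderedDict

class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}                        # key -> (value, frequency)
        self.buckets = defaultdict(OrderedDict)  # frequency -> keys, oldest first
        self.min_freq = 0

    def _touch(self, key):
        """Bump a key's frequency and move it to the back (most recent) of the next bucket."""
        value, freq = self.entries[key]
        del self.buckets[freq][key]
        if not self.buckets[freq]:
            del self.buckets[freq]
            if self.min_freq == freq:
                self.min_freq += 1
        self.entries[key] = (value, freq + 1)
        self.buckets[freq + 1][key] = None  # only key order matters; value lives in entries
        return value

    def get(self, key):
        if key not in self.entries:
            return -1
        return self._touch(key)

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key in self.entries:
            _, freq = self.entries[key]
            self.entries[key] = (value, freq)  # overwrite value, keep frequency
            self._touch(key)
            return
        if len(self.entries) >= self.capacity:
            # Evict the least recently used key in the minimum-frequency bucket.
            old_key, _ = self.buckets[self.min_freq].popitem(last=False)
            del self.entries[old_key]
        self.entries[key] = (value, 1)
        self.buckets[1][key] = None
        self.min_freq = 1
```

The bucket OrderedDicts store keys only (mapped to `None`); values stay in `entries` so a value update never disturbs recency bookkeeping by accident.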
Recommended for interviews: Interviewers usually expect the HashMap + frequency doubly linked list design. It demonstrates understanding of system design style data structures, constant-time eviction strategies, and careful state tracking. Mentioning the OrderedDict variation in Python is useful, but implementing the full structure shows deeper mastery.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| HashMap + Frequency Doubly Linked Lists | O(1) for get and put | O(n) | Standard optimal solution for LFU cache design problems and interviews |
| Frequency Map + OrderedDict (Python) | O(1) average | O(n) | Python implementations where built-in ordered dictionaries simplify LRU handling |
| Naive Frequency Tracking | O(n) eviction | O(n) | Conceptual baseline; scan keys to find the least frequently used item |
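For completeness, the naive baseline from the last table row can be sketched as follows (an illustrative example; class and field names are chosen here). Reads and writes do O(1) bookkeeping, but eviction scans every key to find the smallest (frequency, last-used) pair, so it degrades to O(n):

```python
class NaiveLFUCache:
    """Baseline LFU: O(1) get/put bookkeeping, O(n) scan at eviction time."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}   # key -> value
        self.freq = {}   # key -> use counter
        self.last = {}   # key -> logical timestamp of last use (LRU tiebreaker)
        self.clock = 0   # monotonically increasing logical time

    def _tick(self, key):
        self.clock += 1
        self.freq[key] += 1
        self.last[key] = self.clock

    def get(self, key):
        if key not in self.data:
            return -1
        self._tick(key)
        return self.data[key]

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key in self.data:
            self.data[key] = value
            self._tick(key)
            return
        if len(self.data) >= self.capacity:
            # O(n) scan: the victim has the smallest (frequency, last-used) pair.
            victim = min(self.data, key=lambda k: (self.freq[k], self.last[k]))
            for table in (self.data, self.freq, self.last):
                del table[victim]
        self.clock += 1
        self.data[key] = value
        self.freq[key] = 1
        self.last[key] = self.clock
```

This version is easy to get right and useful for cross-checking the O(1) designs, but the linear eviction scan violates the problem's O(1) requirement.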