Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.
Implement the LRUCache class:
LRUCache(int capacity) Initializes the LRU cache with positive size capacity.
int get(int key) Returns the value of the key if the key exists, otherwise returns -1.
void put(int key, int value) Updates the value of the key if the key exists. Otherwise, adds the key-value pair to the cache. If the number of keys exceeds the capacity from this operation, evicts the least recently used key.
The functions get and put must each run in O(1) average time complexity.
Example 1:
Input
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, null, -1, 3, 4]
Explanation
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1); // return 1
lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2); // return -1 (not found)
lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1); // return -1 (not found)
lRUCache.get(3); // return 3
lRUCache.get(4); // return 4
Constraints:
1 <= capacity <= 3000
0 <= key <= 10^4
0 <= value <= 10^5
At most 2 * 10^5 calls will be made to get and put.

The key challenge in #146 LRU Cache is designing a data structure that supports get and put operations in constant time while always removing the least recently used item when capacity is exceeded.
The optimal approach combines a Hash Table with a Doubly Linked List. The hash map provides O(1) access to cache nodes by key, while the doubly linked list maintains the order of usage. Whenever a key is accessed or inserted, its node is moved to the front of the list to mark it as recently used. The least recently used element is kept near the tail, allowing quick eviction when the cache exceeds capacity.
This design ensures that both lookup and update operations remain efficient while maintaining the usage order dynamically. The overall complexity for both get and put operations is O(1), making it suitable for high-performance caching systems.
| Approach | Time Complexity | Space Complexity |
|---|---|---|
| Hash Map + Doubly Linked List | O(1) for get and put | O(n) |
This approach combines a Doubly Linked List with a HashMap to maintain the LRU functionality. The doubly linked list keeps track of the order of elements. The head of the list represents the most recently used item, whereas the tail represents the least recently used item. Through a HashMap, we achieve O(1) access time complexity for updating and retrieving values. On cache misses that result in a new insertion when the cache is full, we evict the last node of the list.
Time Complexity: O(1) for both get and put operations due to the hashmap and linked list. Space Complexity: O(n), where n is the capacity of the cache for the hashmap and linked list.
import java.util.*;

class LRUCache {
    // Doubly linked list node; each cache entry lives in one of these.
    class Node {
        int key, value;
        Node prev, next;
        Node(int k, int v) {
            key = k;
            value = v;
        }
    }

    private final int capacity;
    private final Map<Integer, Node> cache;
    private final Node head, tail; // dummy sentinels: head.next is the MRU, tail.prev is the LRU

    public LRUCache(int capacity) {
        this.capacity = capacity;
        cache = new HashMap<>();
        head = new Node(0, 0);
        tail = new Node(0, 0);
        head.next = tail;
        tail.prev = head;
    }

    // Unlink a node from the list in O(1).
    private void remove(Node node) {
        Node prev = node.prev;
        Node next = node.next;
        prev.next = next;
        next.prev = prev;
    }

    // Insert a node right after the dummy head, marking it most recently used.
    private void addToHead(Node node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    public int get(int key) {
        if (cache.containsKey(key)) {
            Node node = cache.get(key);
            remove(node);
            addToHead(node);
            return node.value;
        }
        return -1;
    }

    public void put(int key, int value) {
        if (cache.containsKey(key)) {
            // Remove the stale entry from both the list and the map, so the
            // capacity check below cannot evict an unrelated key.
            remove(cache.get(key));
            cache.remove(key);
        }
        if (cache.size() == capacity) {
            cache.remove(tail.prev.key); // tail.prev is the least recently used node
            remove(tail.prev);
        }
        Node node = new Node(key, value);
        addToHead(node);
        cache.put(key, node);
    }
}

The Java solution uses a nested Node class to encapsulate each cache entry. The LRUCache class combines a HashMap with a doubly linked list (using dummy head and tail nodes) to keep every operation at constant time. Note that when put updates an existing key, the entry must be removed from both the list and the map before the capacity check; otherwise a put on an existing key in a full cache would wrongly evict another key.
This approach leverages built-in ordered dictionary structures available in some languages, such as Python's OrderedDict or JavaScript's Map. These structures maintain insertion order, which makes it easy to identify and evict the least recently used item.
Time Complexity: O(1) for get and put, since lookup, insertion, and deletion in these ordered structures run in constant time on average. Space Complexity: O(n), with n being the capacity of the cache.
class LRUCache {
    constructor(capacity) {
        this.capacity = capacity;
        this.map = new Map(); // Map iterates over keys in insertion order
    }

    get(key) {
        if (!this.map.has(key)) return -1;
        // Delete and re-insert to move the key to the most-recently-used position.
        const value = this.map.get(key);
        this.map.delete(key);
        this.map.set(key, value);
        return value;
    }

    put(key, value) {
        if (this.map.has(key)) {
            this.map.delete(key);
        } else if (this.map.size === this.capacity) {
            // The first key in iteration order is the least recently used.
            this.map.delete(this.map.keys().next().value);
        }
        this.map.set(key, value);
    }
}
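Java offers a comparable built-in: a LinkedHashMap constructed with accessOrder = true keeps entries ordered from least- to most-recently used, and removeEldestEntry can evict automatically. The sketch below (the class name LruViaLinkedHashMap is my own, not from the original solution) shows this shortcut in place of the hand-rolled list:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class LruViaLinkedHashMap {
    private final Map<Integer, Integer> map;

    LruViaLinkedHashMap(int capacity) {
        // accessOrder = true: get and put move the touched entry to the MRU end.
        map = new LinkedHashMap<Integer, Integer>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                // Called after every put; returning true evicts the LRU entry.
                return size() > capacity;
            }
        };
    }

    int get(int key) {
        Integer value = map.get(key); // counts as an access in access order
        return value == null ? -1 : value;
    }

    void put(int key, int value) {
        map.put(key, value); // removeEldestEntry handles eviction
    }
}
```

Running the example sequence from the problem statement against this class yields the same get results (1, -1, -1, 3, 4); it is a useful cross-check, though interviewers usually expect the explicit hash map plus doubly linked list design.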
Is LRU Cache a common interview question?
Yes, LRU Cache is a very common design question in FAANG and other top tech company interviews. It tests understanding of data structures, system design thinking, and the ability to combine multiple structures for optimal performance.
What is the optimal way to implement an LRU Cache?
The optimal approach combines a hash map with a doubly linked list. The hash map allows O(1) access to nodes by key, while the doubly linked list maintains the order of recently used elements. This design ensures both get and put operations run in constant time.
Why does an LRU Cache use a doubly linked list?
A doubly linked list allows efficient insertion, deletion, and movement of nodes in O(1) time. This is important for quickly updating the position of recently accessed items and removing the least recently used node from the tail.
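To make the O(1) claim concrete, here is a small standalone sketch (the names ListNode, unlink, and insertAfter are illustrative, not taken from the solution above) showing that removing or moving a node only rewires a constant number of pointers, regardless of list length:

```java
class DoublyLinkedDemo {
    static class ListNode {
        int key;
        ListNode prev, next;
        ListNode(int key) { this.key = key; }
    }

    // Unlink 'node' in O(1): only its two neighbors are touched.
    static void unlink(ListNode node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    // Splice 'node' right after 'anchor' in O(1): four pointer writes.
    static void insertAfter(ListNode anchor, ListNode node) {
        node.next = anchor.next;
        node.prev = anchor;
        anchor.next.prev = node;
        anchor.next = node;
    }

    public static void main(String[] args) {
        // Build head <-> a <-> b <-> tail with dummy sentinels.
        ListNode head = new ListNode(-1), tail = new ListNode(-1);
        head.next = tail;
        tail.prev = head;
        ListNode a = new ListNode(1), b = new ListNode(2);
        insertAfter(head, b);   // head, b, tail
        insertAfter(head, a);   // head, a, b, tail
        unlink(b);              // simulate "use b": detach it...
        insertAfter(head, b);   // ...and move it to the front
        System.out.println(head.next.key); // prints 2
    }
}
```

A singly linked list cannot do this: unlinking a node requires its predecessor, which costs O(n) to find without a prev pointer.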
What is the best data structure for an LRU Cache?
A combination of a hash table and a doubly linked list is considered the best data structure for LRU Cache. The hash table provides fast lookups, while the doubly linked list efficiently updates the order of recently used items.
This JavaScript solution utilizes the Map object. Map maintains the order of entries, enabling the quick updating and eviction needed for the LRU cache. Each get operation involves removing and reinserting the accessed element to retain ordering.