Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.
Implement the LRUCache class:
LRUCache(int capacity) Initializes the LRU cache with positive size capacity.
int get(int key) Returns the value of the key if the key exists, otherwise returns -1.
void put(int key, int value) Updates the value of the key if the key exists. Otherwise, adds the key-value pair to the cache. If the number of keys exceeds the capacity from this operation, evicts the least recently used key.
The functions get and put must each run in O(1) average time complexity.
Example 1:
Input
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, null, -1, 3, 4]
Explanation
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1); // return 1
lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2); // returns -1 (not found)
lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1); // return -1 (not found)
lRUCache.get(3); // return 3
lRUCache.get(4); // return 4
Constraints:
1 <= capacity <= 3000
0 <= key <= 10^4
0 <= value <= 10^5
At most 2 * 10^5 calls will be made to get and put.
Problem Overview: Design a cache that supports get(key) and put(key, value) in constant time while evicting the least recently used item when capacity is exceeded. Every read or write updates the usage order, so the most recently accessed key stays in the cache while the oldest unused key gets removed.
Approach 1: Doubly Linked List + HashMap (O(1) time, O(n) space)
The standard solution combines a hash table with a doubly linked list. The hash map stores key → node mappings so you can access cache entries in O(1). The doubly linked list maintains usage order: the head holds the most recently used item and the tail holds the least recently used. When get is called, locate the node via hash lookup, remove it from its current position, and move it to the front. When inserting with put, add the node to the front; if the cache exceeds capacity, remove the tail node and delete its key from the hash map. Doubly linked lists allow constant-time insertions and removals because each node has prev and next pointers. Overall complexity stays O(1) for both operations, with O(n) space for storing nodes.
Approach 2: Ordered Dictionary / Built-in Ordered Map (O(1) time, O(n) space)
Some languages provide a library structure that maintains insertion order, such as Python's OrderedDict or JavaScript's Map. These data structures internally maintain ordering while supporting constant-time lookups. The idea is simple: store key-value pairs in the ordered map and treat the end of the structure as the most recently used position. On every get, remove and reinsert the key so it moves to the end. On put, update the key if it exists or insert it as most recent; if the capacity is exceeded, remove the first key (the least recently used). This approach avoids implementing a linked list manually, making it shorter and easier for production code. Time complexity remains O(1) per operation with O(n) space for stored entries.
Recommended for interviews: The expected solution is the HashMap + Doubly Linked List design. It demonstrates understanding of constant-time operations and how two data structures work together to maintain ordering. Interviewers want to see how you maintain node references, update pointers, and evict the least recently used entry efficiently. The library-based ordered map solution is concise but hides the underlying design, so it is better as a follow-up after explaining the core system design idea.
This approach combines a Doubly Linked List with a HashMap to maintain the LRU functionality. The doubly linked list keeps track of the order of elements. The head of the list represents the most recently used item, whereas the tail represents the least recently used item. Through a HashMap, we achieve O(1) access time complexity for updating and retrieving values. On cache misses that result in a new insertion when the cache is full, we evict the last node of the list.
The Python solution defines a node class to handle key-value pairs and handles the LRU logic in the LRUCache class. The nodes are stored in a doubly linked list, allowing quick updates and removals. The cache (dictionary) maps keys to their nodes, providing O(1) time complexity for get and put operations.
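The design described above can be sketched as follows. This is a minimal Python implementation; the helper names (`_remove`, `_add_front`) and the use of sentinel head/tail nodes are one common way to structure it, not the only one:

```python
class Node:
    """Doubly linked list node holding a key-value pair."""
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}  # key -> Node, for O(1) lookup
        # Sentinel head/tail nodes remove edge cases: head.next is always
        # the most recently used node, tail.prev the least recently used.
        self.head = Node(0, 0)
        self.tail = Node(0, 0)
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        """Unlink a node from the list in O(1)."""
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_front(self, node):
        """Insert a node right after head (most recently used position)."""
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.cache:
            return -1
        node = self.cache[key]
        self._remove(node)     # unlink from its current position
        self._add_front(node)  # mark as most recently used
        return node.value

    def put(self, key, value):
        if key in self.cache:
            self._remove(self.cache[key])
        node = Node(key, value)
        self.cache[key] = node
        self._add_front(node)
        if len(self.cache) > self.capacity:
            lru = self.tail.prev  # least recently used node
            self._remove(lru)
            del self.cache[lru.key]
```

Tracing Example 1 against this sketch, the five get calls return 1, -1, -1, 3, and 4, matching the expected output.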
Time Complexity: O(1) for both get and put, thanks to the hash map lookup and constant-time linked list splicing. Space Complexity: O(n), where n is the cache capacity, for the hash map entries and linked list nodes.
This approach leverages built-in ordered dictionary data structures available in some languages. These structures inherently maintain the order of insertion, allowing for easy eviction of the least recently used item.
This Python solution uses the OrderedDict from the collections module. The inherent order tracking of OrderedDict facilitates moving accessed elements to the front or end of the dictionary easily, simplifying the logic for keeping track of least recently used items.
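A compact sketch of this approach using OrderedDict is shown below; move_to_end and popitem(last=False) both run in constant time, which keeps every operation O(1):

```python
from collections import OrderedDict


class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # oldest entry first, newest last

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used
```

Because the ordered map does the bookkeeping, the whole solution fits in a few lines, but it hides the pointer manipulation an interviewer typically wants to see.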
Time Complexity: O(1) for get and put, since OrderedDict's reordering and eviction operations (move_to_end and popitem) run in constant time. Space Complexity: O(n), with n being the capacity of the cache.
| Approach | Complexity |
|---|---|
| Doubly Linked List with HashMap | Time: O(1) for both get and put; Space: O(n) |
| Ordered Dictionary (Library Specific) | Time: O(1) for both get and put; Space: O(n) |
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Array/List with Linear Scan | O(n) | O(n) | Conceptual starting point when explaining cache eviction before optimizing. |
| HashMap + Doubly Linked List | O(1) | O(n) | Interview-standard solution supporting constant-time access and eviction. |
| Ordered Dictionary / Ordered Map | O(1) | O(n) | When language libraries provide ordered maps that simplify the implementation. |