Given a function fn, return a memoized version of that function.
A memoized function is a function that will never be called twice with the same inputs. Instead it will return a cached value.
fn can be any function and there are no constraints on what type of values it accepts. Inputs are considered identical if they are === to each other.
Example 1:
Input:
getInputs = () => [[2,2],[2,2],[1,2]]
fn = function (a, b) { return a + b; }
Output: [{"val":4,"calls":1},{"val":4,"calls":1},{"val":3,"calls":2}]
Explanation:
const inputs = getInputs();
const memoized = memoize(fn);
for (const arr of inputs) {
  memoized(...arr);
}
For the inputs of (2, 2): 2 + 2 = 4, and it required a call to fn().
For the inputs of (2, 2): 2 + 2 = 4, but those inputs were seen before so no call to fn() was required.
For the inputs of (1, 2): 1 + 2 = 3, and it required another call to fn() for a total of 2.
Example 2:
Input:
getInputs = () => [[{},{}],[{},{}],[{},{}]]
fn = function (a, b) { return ({...a, ...b}); }
Output: [{"val":{},"calls":1},{"val":{},"calls":2},{"val":{},"calls":3}]
Explanation:
Merging two empty objects will always result in an empty object. It may seem like there should only be 1 call to fn() because of cache-hits, however none of those objects are === to each other.
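The distinction is easy to check directly: object literals create a fresh reference each time, so structural equality never implies ===.

```javascript
// Structurally identical, but separate references: never ===.
const a = {};
const b = {};
console.log(a === b); // false -- two distinct objects, as in Example 2
console.log(a === a); // true  -- the same reference, as in Example 3
```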
Example 3:
Input:
getInputs = () => { const o = {}; return [[o,o],[o,o],[o,o]]; }
fn = function (a, b) { return ({...a, ...b}); }
Output: [{"val":{},"calls":1},{"val":{},"calls":1},{"val":{},"calls":1}]
Explanation:
Merging two empty objects will always result in an empty object. The 2nd and 3rd function calls result in a cache-hit, because every object passed in is the same reference.
Constraints:
1 <= inputs.length <= 10^5
0 <= inputs.flat().length <= 10^5
inputs[i][j] != NaN

Problem Overview: You need to implement a memoize utility that wraps a function and caches results for repeated calls. If the same set of arguments appears again, return the cached result instead of executing the function again. The challenge is handling different argument types, including arrays and objects, without collisions or incorrect cache hits.
Approach 1: Hash-Based Memoization (O(1) average time per call, O(n) space)
The simplest design uses a hash map where the key represents the argument list and the value stores the computed result. Convert the arguments into a stable key (for example, tuple packing in Python or serialization in JavaScript) and perform a hash lookup before executing the function. If the key already exists, return the cached value; otherwise compute the result and store it. Each lookup or insert in a hash map runs in O(1) average time, so repeated calls become constant time, and space grows linearly with the number of unique argument combinations. Note that under this problem's === identity semantics, a serialized key is only safe when the arguments are primitives: distinct objects that serialize identically (as in Example 2) would incorrectly share a cache entry.
Approach 2: Custom Key Generation for Complex Types (O(1) average lookup, O(n) space)
Languages like Java and C++ require more explicit handling because complex objects or arrays cannot always be used directly as map keys. Generate a deterministic cache key from the argument list—commonly by building a canonical string representation or hashing each parameter into a composite key object. Store results in a map using this generated key. The key insight is ensuring two logically identical argument lists produce the same key while different inputs never collide. This approach is widely used when implementing memoization utilities in strongly typed environments.
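One way to sketch such deterministic key generation in JavaScript (names like keyFor and memoizeWithKeys are illustrative, not from any library): primitives are keyed by type and value, while objects receive a unique id through a WeakMap, so distinct references never produce the same key.

```javascript
// Custom composite keys that preserve identity: primitives are keyed by
// type and value; objects (and functions) are keyed by a unique id
// assigned lazily through a WeakMap, so distinct references never collide.
const objectIds = new WeakMap();
let nextObjectId = 0;

function keyFor(value) {
  const isRef = value !== null &&
    (typeof value === 'object' || typeof value === 'function');
  if (isRef) {
    if (!objectIds.has(value)) objectIds.set(value, ++nextObjectId);
    return 'ref:' + objectIds.get(value);
  }
  return typeof value + ':' + String(value);
}

function memoizeWithKeys(fn) {
  const cache = new Map();
  return function (...args) {
    // JSON.stringify of the key array avoids delimiter collisions
    // between adjacent per-argument keys.
    const key = JSON.stringify(args.map(keyFor));
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}
```

Because the per-argument key includes the type, 'number:1' and 'string:1' stay distinct; the WeakMap also lets the id bookkeeping be garbage-collected along with the objects themselves.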
Another scalable design uses a nested map structure where each argument corresponds to a level in a tree of maps. This avoids serialization overhead and preserves identity semantics for objects. Such designs are common in high-performance function caching systems where argument structures are large or deeply nested.
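A minimal sketch of this nested-map design, assuming a Symbol sentinel to mark levels that hold a stored result:

```javascript
// Identity-preserving memoization via a trie of Maps: each argument
// selects the next level. Map key equality matches === for objects,
// so distinct references never collide. A Symbol marks stored results.
const RESULT = Symbol('result');

function memoize(fn) {
  const root = new Map();
  return function (...args) {
    let node = root;
    for (const arg of args) {
      if (!node.has(arg)) node.set(arg, new Map());
      node = node.get(arg);
    }
    if (!node.has(RESULT)) node.set(RESULT, fn(...args));
    return node.get(RESULT);
  };
}
```

Run against Examples 2 and 3 above, this yields three calls and one call respectively. A production variant might branch to WeakMap levels for object arguments so cached entries do not prevent garbage collection.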
Recommended for interviews: Interviewers expect a hash-based memoization approach with deterministic key generation. Start by explaining the straightforward cache map solution to demonstrate understanding of memoization. Then discuss how complex argument types require stable key construction or nested maps. Showing awareness of serialization tradeoffs and collision risks demonstrates strong system design thinking.
This approach uses a hash map to store computed results of the function, with keys generated from the input parameters. By serializing the inputs into a unique key (e.g., using JSON), we can easily check whether a computation for the same inputs has already been performed and, if so, retrieve the result from the cache.
In Python, a tuple of the arguments uniquely identifies each call: tuples are hashable, so they serve directly as dictionary (hash map) keys for stored results. If the same inputs repeat, the stored result is returned instead of calling the function again.
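The serialized-key solution can be sketched in JavaScript as follows (a minimal sketch; JSON keys honor === semantics only for primitive arguments, as in Example 1):

```javascript
// Serialized-key memoization: the argument list is turned into a JSON
// string and used as a Map key. Correct for primitive arguments only --
// distinct objects that serialize identically would share a cache entry.
function memoize(fn) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}

// Reproducing Example 1:
let calls = 0;
const memoized = memoize((a, b) => { calls += 1; return a + b; });
const output = [];
for (const arr of [[2, 2], [2, 2], [1, 2]]) {
  output.push({ val: memoized(...arr), calls });
}
console.log(JSON.stringify(output));
// → [{"val":4,"calls":1},{"val":4,"calls":1},{"val":3,"calls":2}]
```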
The time complexity of this solution is O(1) on average for each call due to hash map lookups. The space complexity is O(n) where n is the number of unique input sets to the function.
This approach is similar to hash-based memoization but emphasizes generating a custom key for complex input types that might not be directly hashable. This is particularly useful for composite objects or nested data structures where default serialization (as in JSON) is inadequate or inefficient.
In Java, the argument array can be captured in a List and used as a HashMap key, which makes object arrays feasible to handle: List.equals compares elements pairwise, and for plain objects that defaults to reference equality, matching the identity semantics required here.
The time complexity is O(1) on average for cache operations, assuming good hashCode implementations for objects. Space complexity is O(n) depending on unique input sets.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Hash-Based Memoization | O(1) average per call | O(n) | General case when arguments can be converted into a stable hash key |
| Custom Key Generation for Complex Types | O(1) average lookup | O(n) | When arguments include arrays or objects that require deterministic serialization or composite keys |
LeetCode | 2630. Memoize II | 30 Days of JavaScript