Alice and Bob continue their games with piles of stones. There are a number of piles arranged in a row, and each pile has a positive integer number of stones, piles[i]. The objective of the game is to end with the most stones.
Alice and Bob take turns, with Alice starting first.
On each player's turn, that player can take all the stones in the first X remaining piles, where 1 <= X <= 2M. Then, we set M = max(M, X). Initially, M = 1.
The game continues until all the stones have been taken.
Assuming Alice and Bob play optimally, return the maximum number of stones Alice can get.
Example 1:
Input: piles = [2,7,9,4,4]
Output: 10
Explanation:
If Alice takes one pile at the beginning, Bob takes two piles, then Alice takes two piles again, Alice gets 2 + 4 + 4 = 10 stones in total. If Alice takes two piles at the beginning, then Bob can take all three remaining piles, and Alice gets 2 + 7 = 9 stones in total. So we return 10 since it's larger.
Example 2:
Input: piles = [1,2,3,4,5,100]
Output: 104
Constraints:
1 <= piles.length <= 100

1 <= piles[i] <= 10^4

Problem Overview: You are given an array of piles where each element represents stones in that pile. Two players take turns taking the first X piles where 1 ≤ X ≤ 2M, and M updates to max(M, X) after each move. Both players play optimally. The goal is to compute the maximum stones Alice (the first player) can collect.
Approach 1: Dynamic Programming with Memoization (O(n^3) time, O(n^2) space)
This is a classic optimal-play problem from game theory. Define a state dp(i, M) representing the maximum stones the current player can obtain starting from pile index i with parameter M. For each state, try taking X piles where 1 ≤ X ≤ 2M. After taking those piles, the opponent plays optimally, so your gain equals totalRemaining - dp(i + X, max(M, X)). Precompute suffix sums so the stones remaining from any index can be calculated in O(1). Memoization stores results for each (i, M) pair to avoid recomputation. There are O(n^2) states and up to O(n) transitions per state, giving O(n^3) time.
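A sketch of the top-down recurrence in Python (the function name stoneGameII and the use of lru_cache are illustrative choices, not mandated by the problem):

```python
from functools import lru_cache

def stoneGameII(piles):
    n = len(piles)
    # suffix[i] = total stones in piles[i:], so "stones remaining" is an O(1) lookup
    suffix = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + piles[i]

    @lru_cache(maxsize=None)
    def dp(i, M):
        # If 2M covers everything left, take all remaining piles
        if i + 2 * M >= n:
            return suffix[i]
        best = 0
        for X in range(1, 2 * M + 1):
            # Current player gets the remainder minus the opponent's optimal haul
            best = max(best, suffix[i] - dp(i + X, max(M, X)))
        return best

    return dp(0, 1)
```

Running this on the two examples above returns 10 for [2,7,9,4,4] and 104 for [1,2,3,4,5,100].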
Approach 2: Bottom-Up Dynamic Programming (O(n^3) time, O(n^2) space)
The same state transition can be implemented iteratively. Create a DP table where dp[i][M] stores the maximum stones the current player can collect from index i. Iterate indices from the end of the array toward the start because future states depend on later piles. For each state, evaluate every possible X choice in the range 1..2M. Use a precomputed suffix sum so you can instantly compute the total remaining stones and subtract the opponent's optimal result. Bottom-up DP avoids recursion overhead and can be easier to reason about during debugging, though the asymptotic complexity remains O(n^3).
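The iterative form of the same recurrence might look like the following sketch (again, the function name is illustrative; the table is sized (n+1) x (n+1) since M never needs to exceed n):

```python
def stoneGameII(piles):
    n = len(piles)
    # suffix[i] = total stones in piles[i:]
    suffix = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + piles[i]

    # dp[i][M] = max stones the player to move can collect from piles[i:] given M
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    # Iterate indices from the end toward the start: dp[i][M] only depends on i + X > i
    for i in range(n - 1, -1, -1):
        for M in range(1, n + 1):
            if i + 2 * M >= n:
                # Everything left can be taken in one move
                dp[i][M] = suffix[i]
            else:
                for X in range(1, 2 * M + 1):
                    dp[i][M] = max(dp[i][M], suffix[i] - dp[i + X][max(M, X)])
    return dp[0][1]
```

The answer is dp[0][1]: Alice moves first from index 0 with M = 1. Note that in the else branch 2M < n - i, so max(M, X) stays within the table bounds.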
Recommended for interviews: The memoized top-down dynamic programming solution is the most common interview answer. It shows you recognize the problem as a minimax-style DP and know how to model states with (index, M). Starting with the recursive formulation demonstrates understanding of the game strategy, while memoization reduces exponential exploration to polynomial time. If time allows, converting the same recurrence to bottom-up DP shows deeper mastery of dynamic programming state transitions.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Dynamic Programming with Memoization | O(n^3) | O(n^2) | Best for interviews. Natural recursive formulation with caching of (index, M) states. |
| Bottom-Up Dynamic Programming | O(n^3) | O(n^2) | Useful when recursion depth is a concern or when implementing iterative DP tables. |