Watch 10 video solutions for Frog Jump, a hard-level problem involving Array and Dynamic Programming. This walkthrough by codestorywithMIK has 21,263 views. Want to try solving it yourself? Practice on FleetCode or read the detailed text solution.
A frog is crossing a river. The river is divided into some number of units, and at each unit, there may or may not exist a stone. The frog can jump on a stone, but it must not jump into the water.
Given a list of stone positions (in units) in sorted ascending order, determine if the frog can cross the river by landing on the last stone. Initially, the frog is on the first stone, and the first jump must be 1 unit.
If the frog's last jump was k units, its next jump must be either k - 1, k, or k + 1 units. The frog can only jump in the forward direction.
Example 1:
Input: stones = [0,1,3,5,6,8,12,17]
Output: true
Explanation: The frog can jump to the last stone by jumping 1 unit to the 2nd stone, then 2 units to the 3rd stone, then 2 units to the 4th stone, then 3 units to the 6th stone, 4 units to the 7th stone, and 5 units to the 8th stone.
Example 2:
Input: stones = [0,1,2,3,4,8,9,11]
Output: false
Explanation: There is no way to jump to the last stone as the gap between the 5th and 6th stone is too large.
Constraints:
2 <= stones.length <= 2000
0 <= stones[i] <= 2^31 - 1
stones[0] == 0
stones is sorted in a strictly increasing order.

Problem Overview: You are given a strictly increasing array of stone positions in a river. The frog starts on the first stone and must reach the last one. If the previous jump length was k, the next jump can only be k-1, k, or k+1. The task is to determine whether the frog can reach the final stone.
Approach 1: Dynamic Programming with Hash Map (Time: O(n^2), Space: O(n^2))
This approach tracks all valid jump lengths that can land on each stone. Use a hash map where the key is the stone position and the value is a set of jump sizes that can reach that stone. Start with the first stone having jump length 0. For every reachable jump size k, try the next jumps k-1, k, and k+1. If a resulting position exists in the map, record that jump size for the destination stone. The key insight is that multiple paths may reach the same stone with different jump lengths, so storing all possibilities avoids recomputation. Since each stone may track up to O(n) jump sizes, the total complexity becomes O(n^2). This technique combines state tracking and quick membership checks using a hash map, a common pattern in dynamic programming problems.
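The iterative hash-map DP described above can be sketched as follows; the function name `can_cross` and variable names are illustrative, not taken from the source:

```python
def can_cross(stones):
    # Map each stone position to the set of jump sizes that can land on it.
    jumps = {pos: set() for pos in stones}
    jumps[stones[0]].add(0)  # the frog starts on stone 0 with a "previous jump" of 0

    # Stones are given in ascending order, so by the time we process a stone,
    # every jump size that can reach it has already been recorded.
    for pos in stones:
        for k in jumps[pos]:
            for step in (k - 1, k, k + 1):
                # Only forward jumps to positions that actually hold a stone count.
                if step > 0 and pos + step in jumps:
                    jumps[pos + step].add(step)

    # The frog can cross iff some jump size lands on the last stone.
    return len(jumps[stones[-1]]) > 0
```

Each stone can accumulate at most O(n) distinct jump sizes, which is where the O(n^2) time and space bounds in the heading come from.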
Approach 2: DFS with Memoization (Time: O(n^2), Space: O(n^2))
This approach treats the problem as a state search. A state is defined by (currentStone, lastJump). From each state, recursively attempt jumps of lastJump - 1, lastJump, and lastJump + 1. Use a hash set for constant-time lookup of valid stone positions. Memoization stores failed states so the algorithm never explores the same combination twice. Without memoization, the recursion tree grows exponentially. With caching, the number of states is bounded by the number of stones times possible jump sizes, resulting in O(n^2) complexity. This method feels natural if you think of the problem as exploring reachable paths in a graph, using recursion and caching typical in dynamic programming and array traversal problems.
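A minimal sketch of the memoized DFS over (currentStone, lastJump) states; here `functools.lru_cache` stands in for an explicit memo table, and the names are assumptions for illustration:

```python
from functools import lru_cache

def can_cross_dfs(stones):
    stone_set = set(stones)  # constant-time lookup of valid stone positions
    last = stones[-1]

    @lru_cache(maxsize=None)  # memoize (position, last jump) states
    def dfs(pos, k):
        if pos == last:
            return True
        # Try jumps of k-1, k, and k+1; only positive, forward jumps are legal,
        # which also forces the very first jump (from k = 0) to be exactly 1 unit.
        for step in (k - 1, k, k + 1):
            if step > 0 and pos + step in stone_set and dfs(pos + step, step):
                return True
        return False

    return dfs(stones[0], 0)
```

Caching both successful and failed states bounds the search at one visit per (stone, jump size) pair, matching the O(n^2) analysis above.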
Recommended for interviews: The dynamic programming hash map solution is usually the expected answer. It demonstrates strong state modeling and efficient lookups. Explaining the DFS state transition first helps show intuition, but converting it into memoized DP shows deeper problem-solving skill and avoids exponential recursion.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Dynamic Programming with Hash Map | O(n^2) | O(n^2) | General solution with iterative state propagation and fast hash lookups |
| DFS with Memoization | O(n^2) | O(n^2) | When modeling the problem as recursive state exploration with caching |