Watch 10 video solutions for Maximum Energy Boost From Two Drinks, a medium-level problem involving Array and Dynamic Programming. This walkthrough by Ayush Rao has 993 views. Want to try solving it yourself? Practice on FleetCode or read the detailed text solution.
You are given two integer arrays energyDrinkA and energyDrinkB of the same length n by a futuristic sports scientist. These arrays represent the energy boosts per hour provided by two different energy drinks, A and B, respectively.
You want to maximize your total energy boost by drinking one energy drink per hour. However, if you want to switch from consuming one energy drink to the other, you need to wait for one hour to cleanse your system (meaning you won't get any energy boost in that hour).
Return the maximum total energy boost you can gain in the next n hours.
Note that you can start consuming either of the two energy drinks.
Example 1:
Input: energyDrinkA = [1,3,1], energyDrinkB = [3,1,1]
Output: 5
Explanation:
To gain an energy boost of 5, drink only the energy drink A (or only B).
Example 2:
Input: energyDrinkA = [4,1,1], energyDrinkB = [1,1,3]
Output: 7
Explanation:
To gain an energy boost of 7: drink energy drink A for the first hour, switch to drink B and lose the second hour's boost while cleansing, then gain drink B's boost in the third hour (4 + 0 + 3 = 7).
Constraints:
- n == energyDrinkA.length == energyDrinkB.length
- 3 <= n <= 10^5
- 1 <= energyDrinkA[i], energyDrinkB[i] <= 10^5

Problem Overview: You are given two arrays where A[i] and B[i] represent the energy gained from two different drinks at hour i. You can drink only one per hour. Switching drinks forces a one‑hour cooldown where no energy is gained. The goal is to maximize the total energy collected across all hours.
Approach 1: Dynamic Programming (O(n) time, O(n) space)
This problem naturally fits dynamic programming. Track two states: the best energy if the current hour ends with drink A or drink B. Let dpA[i] be the maximum energy achievable up to hour i when drinking A at that hour, and dpB[i] the equivalent for drink B. Continuing the same drink simply adds the current energy (dpA[i-1] + A[i]). Switching requires skipping one hour due to the cooldown, so the transition becomes dpB[i-2] + A[i] (and vice versa). Iterate through the array, updating both states at every step. The answer is max(dpA[n-1], dpB[n-1]). This approach systematically evaluates both choices at every hour and guarantees the optimal total. Time complexity is O(n) because each index is processed once, and space complexity is O(n) for the DP arrays.
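The recurrence above can be sketched directly in Python. This is a minimal illustration, not an official solution; the function name `max_energy_boost` is chosen here for clarity:

```python
def max_energy_boost(energyDrinkA, energyDrinkB):
    n = len(energyDrinkA)
    dpA = [0] * n  # dpA[i]: best total if hour i ends on drink A
    dpB = [0] * n  # dpB[i]: best total if hour i ends on drink B
    dpA[0] = energyDrinkA[0]
    dpB[0] = energyDrinkB[0]
    for i in range(1, n):
        # Either continue the same drink (dp[i-1] + boost), or switch:
        # the switch costs the previous hour, so we jump from i-2.
        switch_from_B = dpB[i - 2] if i >= 2 else 0
        switch_from_A = dpA[i - 2] if i >= 2 else 0
        dpA[i] = max(dpA[i - 1], switch_from_B) + energyDrinkA[i]
        dpB[i] = max(dpB[i - 1], switch_from_A) + energyDrinkB[i]
    return max(dpA[-1], dpB[-1])

# Example 1: staying on one drink is optimal
print(max_energy_boost([1, 3, 1], [3, 1, 1]))  # 5
# Example 2: drink A, cleanse, then drink B
print(max_energy_boost([4, 1, 1], [1, 1, 3]))  # 7
```

Note that all boosts are positive (per the constraints), so a switch is only ever worth taking when the post-cooldown gains outweigh the skipped hour, which the max over both transitions captures automatically.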
Approach 2: Greedy with Pruning (O(n) time, O(1) space)
A more space‑efficient variation observes that each state depends only on the previous one or two hours. Instead of storing full DP arrays, keep rolling variables representing the best totals for the last two positions for each drink. At each step, compute the gain for continuing the same drink or switching after the cooldown, then keep only the maximum candidate. Greedy pruning discards dominated states where continuing or switching would always produce a smaller total than another tracked option. Because only constant state is maintained, the algorithm runs in O(n) time with O(1) extra space. This version is preferred when memory usage matters or when implementing a concise interview solution.
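Replacing the DP arrays with rolling variables might look like the following sketch (variable names are illustrative; `best*` holds the totals for the previous hour and `prev*` the totals from two hours back, which the switch transition needs):

```python
def max_energy_boost_rolling(energyDrinkA, energyDrinkB):
    prevA = prevB = 0            # best totals two hours back
    bestA = energyDrinkA[0]      # best total ending on A last hour
    bestB = energyDrinkB[0]      # best total ending on B last hour
    for a, b in zip(energyDrinkA[1:], energyDrinkB[1:]):
        # Continue the same drink, or switch after the one-hour cooldown;
        # tuple assignment updates all four states in lockstep.
        bestA, bestB, prevA, prevB = (
            max(bestA, prevB) + a,
            max(bestB, prevA) + b,
            bestA,
            bestB,
        )
    return max(bestA, bestB)

print(max_energy_boost_rolling([4, 1, 1], [1, 1, 3]))  # 7
```

Only four integers are carried between iterations, so the O(1) space claim holds while the transitions remain identical to the array-based version.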
Recommended for interviews: The dynamic programming formulation is the expected explanation because it clearly models the cooldown constraint and state transitions. Writing the full DP first demonstrates solid reasoning about state definition and recurrence. After that, optimizing to rolling variables (the greedy‑pruned version) shows deeper understanding of DP space reduction.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Dynamic Programming | O(n) | O(n) | Best for explaining the recurrence and cooldown transition clearly in interviews |
| Greedy with Pruning (Rolling DP) | O(n) | O(1) | When optimizing memory or implementing a concise production solution |