The alternating sum of a 0-indexed array is defined as the sum of the elements at even indices minus the sum of the elements at odd indices.
For example, the alternating sum of [4,2,5,3] is (4 + 5) - (2 + 3) = 4.

Given an array nums, return the maximum alternating sum of any subsequence of nums (after reindexing the elements of the subsequence).
A subsequence of an array is a new array generated from the original array by deleting some elements (possibly none) without changing the remaining elements' relative order. For example, [2,7,4] is a subsequence of [4,2,3,7,2,1,4], while [2,4,2] is not.
Example 1:
Input: nums = [4,2,5,3]
Output: 7
Explanation: It is optimal to choose the subsequence [4,2,5] with alternating sum (4 + 5) - 2 = 7.
Example 2:
Input: nums = [5,6,7,8]
Output: 8
Explanation: It is optimal to choose the subsequence [8] with alternating sum 8.
Example 3:
Input: nums = [6,2,1,2,4,5]
Output: 10
Explanation: It is optimal to choose the subsequence [6,1,5] with alternating sum (6 + 5) - 1 = 10.
Constraints:
1 <= nums.length <= 10^5
1 <= nums[i] <= 10^5

Problem Overview: You are given an integer array and must choose a subsequence whose alternating sum is maximized. The alternating sum adds and subtracts elements in order: a0 - a1 + a2 - a3 + .... The subsequence may skip any elements but must preserve the original order. The challenge is deciding which numbers to include so that the positions with a plus sign carry large values while the positions with a minus sign carry small ones.
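To make the definition concrete, here is a small helper that computes the alternating sum of any chosen subsequence (the function `alternating_sum` is illustrative, not part of the problem statement):

```python
def alternating_sum(seq):
    """Add elements at even indices, subtract elements at odd indices."""
    return sum(x if i % 2 == 0 else -x for i, x in enumerate(seq))

print(alternating_sum([4, 2, 5, 3]))  # (4 + 5) - (2 + 3) = 4
print(alternating_sum([4, 2, 5]))     # (4 + 5) - 2 = 7
```

Note that the indices are those of the subsequence after reindexing, not the positions in the original array.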
Approach 1: Dynamic Programming (O(n) time, O(1) space)
This problem maps naturally to dynamic programming. While scanning the array, track two states: the best alternating sum when the next operation is addition (even index in the subsequence) and the best sum when the next operation is subtraction (odd). For each number x, you either include it or skip it. Including it transitions between states: adding x updates the even state, subtracting x updates the odd state.
The recurrence is straightforward: update the "add" state using the previous subtract state plus the current value, and update the "subtract" state using the previous add state minus the value. Because only the previous states matter, you store just two variables instead of a full DP table. This reduces space complexity to O(1) while keeping time complexity O(n).
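A minimal sketch of this recurrence in Python (the variable names `even` and `odd` are illustrative):

```python
def max_alternating_sum(nums):
    # even: best alternating sum of a subsequence whose next picked
    #       element would be added (it would land on an even index).
    # odd:  best alternating sum whose next picked element would be
    #       subtracted (it would land on an odd index).
    even, odd = 0, 0
    for x in nums:
        # Either skip x, or take it in the state it improves.
        even, odd = max(even, odd + x), max(odd, even - x)
    return even  # an optimal subsequence always ends on an added element

print(max_alternating_sum([4, 2, 5, 3]))  # 7
```

The tuple assignment updates both states from the previous iteration's values, which is exactly the "only the previous states matter" observation that collapses the DP table to two variables.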
Approach 2: Greedy with Two Accumulators (O(n) time, O(1) space)
A greedy interpretation leads to an even simpler implementation. Maintain two accumulators: add for the best sum when the next operation is addition and subtract for the best sum when the next operation is subtraction. Iterate through the array once. For each value, compute the best new states by either taking the element or skipping it.
The key insight: if a value improves the alternating pattern, include it immediately. Updating the accumulators effectively captures the same transitions as the DP solution but framed as a greedy decision process. Since each element is processed exactly once and only constant variables are used, the algorithm runs in O(n) time with O(1) space.
This approach is often easier to reason about because the two accumulators represent the best possible sums for even and odd positions in the subsequence. The logic mirrors classic stock-trading style DP problems where you alternate between "buy" and "sell" states.
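Following the stock-trading analogy, the same answer can also be computed by taking the first element and then collecting every positive rise between consecutive elements, just as unlimited-transaction stock problems collect every upward price move. This is an equivalent reformulation, sketched under the assumption that `nums` is non-empty (the constraints guarantee this):

```python
def max_alternating_sum_greedy(nums):
    # Start by "buying" the first element, then add every positive
    # rise nums[i] - nums[i-1]; each rise corresponds to subtracting
    # a local minimum and adding the following local maximum.
    total = nums[0]
    for prev, cur in zip(nums, nums[1:]):
        if cur > prev:
            total += cur - prev
    return total

print(max_alternating_sum_greedy([6, 2, 1, 2, 4, 5]))  # 10
```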
Recommended for interviews: The greedy two-accumulator version is what most interviewers expect. It demonstrates strong understanding of state transitions and optimization from dynamic programming to constant-space logic. Explaining the DP formulation first shows you understand the state definition, while deriving the greedy version highlights optimization skills.
In this approach, we use dynamic programming to keep track of the maximum alternating sum at each step. We maintain two variables to track the maximum sum of subsequences when the last index used was even ('even_idx_sum') and odd ('odd_idx_sum'). At each step, we can choose to include the current number at an even index or an odd index and update these two variables accordingly.
This implementation calculates the maximum alternating sum using two variables. We iterate over the array once, updating our even and odd index sums based on the current number and previously stored values. The approach ensures optimal results with respect to alternating sums.
Time Complexity: O(n) where n is the number of elements in nums.
Space Complexity: O(1) because we're using only two variables for calculations.
This greedy method maintains two accumulators that simulate adding a number at an even position or subtracting it at an odd position. At each step it decides whether pulling the current number into the alternating sequence improves either accumulator, building on the sums computed so far.

Using the accumulators 'change_to_even' and 'change_to_odd', the C implementation compares taking the current number against skipping it for each state and records the larger alternative, yielding the maximum alternating sum after a single pass.
Time Complexity: O(n) for linear passes over the array.
Space Complexity: O(1) since constant extra space is needed.
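The C code itself is not reproduced here; the following Python sketch mirrors the described update, reusing the accumulator names 'change_to_even' and 'change_to_odd' from the text:

```python
def max_alternating_sum(nums):
    change_to_even = 0  # best sum if the next chosen number is added
    change_to_odd = 0   # best sum if the next chosen number is subtracted
    for num in nums:
        # For each state, keep the better of skipping num or taking it.
        change_to_even, change_to_odd = (
            max(change_to_even, change_to_odd + num),
            max(change_to_odd, change_to_even - num),
        )
    return change_to_even
```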
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Dynamic Programming (State Transition) | O(n) | O(1) | Best for understanding the problem formally using DP states for add and subtract positions |
| Greedy with Two Accumulators | O(n) | O(1) | Preferred interview solution; simple implementation with constant memory |
Video solution: Maximum Alternating Subsequence Sum - Dynamic Programming - Leetcode 1911 - Python, by NeetCode.