You are given two positive integer arrays nums and target, of the same length.
In one operation, you can choose any two distinct indices i and j where 0 <= i, j < nums.length and:
nums[i] = nums[i] + 2 and nums[j] = nums[j] - 2.
Two arrays are considered similar if the frequency of each element is the same.
Return the minimum number of operations required to make nums similar to target. The test cases are generated such that nums can always be similar to target.
Example 1:
Input: nums = [8,12,6], target = [2,14,10]
Output: 2
Explanation: It is possible to make nums similar to target in two operations:
- Choose i = 0 and j = 2, nums = [10,12,4].
- Choose i = 1 and j = 2, nums = [10,14,2].
It can be shown that 2 is the minimum number of operations needed.
Example 2:
Input: nums = [1,2,5], target = [4,1,3]
Output: 1
Explanation: We can make nums similar to target in one operation:
- Choose i = 1 and j = 2, nums = [1,4,3].
Example 3:
Input: nums = [1,1,1,1,1], target = [1,1,1,1,1]
Output: 0
Explanation: The array nums is already similar to target.
Constraints:
n == nums.length == target.length
1 <= n <= 10^5
1 <= nums[i], target[i] <= 10^6

Problem Overview: You are given two integer arrays nums and target. In one operation, you can choose two indices in nums, add 2 to one element and subtract 2 from the other. The goal is to transform nums so it becomes similar to target, meaning both arrays contain the same multiset of values. The task is to compute the minimum number of operations required.
Approach 1: Greedy Approach with Sorting (O(n log n) time, O(n) space)
The key observation: adding or subtracting 2 never changes the parity of a number. Odd numbers must be matched with odd numbers, and even numbers with even numbers. Split both arrays into two groups by parity, sort the odd elements of nums and target, and do the same for the even elements. Then compare corresponding elements and accumulate the positive differences. Each operation adds 2 to exactly one element, so the sum of the positive differences divided by 2 is the operation count. Sorting ensures the smallest values are matched first, which minimizes the total adjustment; this is classic greedy matching combined with sorting.
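A minimal Python sketch of this greedy parity matching (the function name min_ops_greedy is illustrative, not from the original):

```python
def min_ops_greedy(nums, target):
    # Parity is invariant under +/-2, so odds can only become odds
    # and evens can only become evens: match each parity group
    # separately, smallest with smallest.
    nums_odd = sorted(x for x in nums if x % 2)
    nums_even = sorted(x for x in nums if x % 2 == 0)
    tgt_odd = sorted(x for x in target if x % 2)
    tgt_even = sorted(x for x in target if x % 2 == 0)

    # Each operation adds 2 to exactly one element, so summing the
    # positive gaps and halving them counts the operations.
    ops = 0
    for a, b in zip(nums_odd + nums_even, tgt_odd + tgt_even):
        ops += max(0, b - a)
    return ops // 2
```

On Example 1, the even groups sort to [6,8,12] and [2,10,14]; the positive gaps are 2 and 2, giving 4 // 2 = 2 operations.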
Approach 2: Frequency Matching by Counting and Remainder Groups (O(n log n) or O(n) time depending on map usage, O(n) space)
Another perspective is to track how much adjustment each element requires relative to its matching parity group. Separate elements into odd and even buckets, then sort or frequency-count them. Compute the difference between each pair of corresponding elements. Only positive differences matter because decreases are balanced by increases elsewhere due to the operation structure. Each increase of 2 contributes toward the total operation count. Instead of directly simulating operations, aggregate the total increments needed and divide by 2. Using counting or frequency maps reduces repeated scanning when many values repeat. This approach still relies on the parity constraint but focuses on difference aggregation rather than pairwise greedy matching.
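One way to sketch this counting variant in Python (names are illustrative; Counter stands in for any frequency map, and the sort-replacement pays off only when many values repeat):

```python
from collections import Counter

def min_ops_by_counting(nums, target):
    # Expand a frequency map back into an ordered list by walking
    # its sorted keys; with many duplicates this touches far fewer
    # distinct keys than a full element-wise sort.
    def ordered_by_count(values):
        freq = Counter(values)
        out = []
        for v in sorted(freq):
            out.extend([v] * freq[v])
        return out

    total_increase = 0
    for parity in (0, 1):
        a = ordered_by_count(x for x in nums if x % 2 == parity)
        b = ordered_by_count(x for x in target if x % 2 == parity)
        # Only positive differences count: every decrease elsewhere
        # is the -2 half of some operation's +2.
        for x, y in zip(a, b):
            total_increase += max(0, y - x)
    # Each operation contributes exactly one +2 toward the total.
    return total_increase // 2
```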
Recommended for interviews: The greedy sorting approach is the expected solution. It clearly shows you recognize the invariant that parity never changes and that matching sorted parity groups minimizes adjustment cost. Interviewers usually want to see the reasoning about parity first, then a clean greedy implementation using arrays or lists. Understanding the parity constraint comes from analyzing the allowed operation on the array, which is the core trick behind this hard problem.
This approach involves sorting the odd and even elements of 'nums' and 'target' separately; since each operation changes values by 2, parity never changes, so odd values can only become other odd values, and likewise for even values. The idea is to pair the sorted parity groups element by element and compute the difference needed to transform each element of 'nums' into the corresponding element of 'target'. We accumulate the absolute differences and divide by 4: one operation moves one element up by 2 and another down by 2, so it reduces the total absolute difference by 4, hence calculating the minimum number of operations required.
In Python, after sorting each parity group, we use the 'zip' function to traverse corresponding elements from both groups simultaneously. For each pair, we compute the absolute difference and accumulate it. Finally, this total difference is divided by 4, since one operation changes two elements by 2 units each, yielding the minimum number of operations required.
Time Complexity: O(n log n) due to the sorting operations.
Space Complexity: O(n), for the parity-separated copies of the arrays.
In this approach, consider the problem from a remainder perspective. Since adding 2 or subtracting 2 does not change the evenness or oddness of a number, the counts of odd and even entries in 'nums' and 'target' must already agree (the problem guarantees similarity is achievable). So instead of matching across the whole arrays, restrict attention to each parity group: collect the odd values of both arrays and the even values of both arrays, order each group (by sorting, or by walking a frequency count of the values), and compare corresponding elements.
Summing the positive differences within both groups and dividing by 2 (each operation contributes exactly one +2) gives the minimum number of operations required to match the odd and even distributions.
Time Complexity: O(n log n) with sorting; near-linear if a frequency count over the bounded value range replaces the sort.
Space Complexity: O(n), for the parity groups and their counters.
Notice that, because each operation will only increase or decrease the value of an element by 2, the parity of the element will not change.
Therefore, we can divide the arrays nums and target into two groups according to their parity, denoted as a_1 and a_2, and b_1 and b_2 respectively.
Then, we just need to pair the elements in a_1 with the elements in b_1, and pair the elements in a_2 with the elements in b_2, and then perform operations. During the pairing process, we can use a greedy strategy, pairing the smaller elements in a_i with the smaller elements in b_i each time, which can ensure the minimum number of operations. This can be directly implemented through sorting.
Since each operation can reduce the total absolute difference of the corresponding elements by 4 (one element moves 2 closer from below, another moves 2 closer from above), we accumulate the absolute difference at each corresponding position, and finally divide by 4 to get the answer.
The time complexity is O(n log n), where n is the length of the array nums.
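A Python sketch of this editorial approach (assuming in-place sorting of the inputs is acceptable; the function name follows the problem title):

```python
def make_similar(nums, target):
    # Sort with parity as the primary key: evens first, then odds,
    # each group ascending. Parity counts match between the arrays,
    # so positions line up within each group after this sort.
    nums.sort(key=lambda x: (x % 2, x))
    target.sort(key=lambda x: (x % 2, x))
    # Each operation moves one element up by 2 and another down by 2,
    # shrinking the total absolute difference by 4.
    return sum(abs(a - b) for a, b in zip(nums, target)) // 4
```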
| Approach | Time | Space |
|---|---|---|
| Greedy Approach with Sorting | O(n log n) | O(n) |
| Frequency Matching by Counting and Matching Remainders | O(n log n), near-linear with counting | O(n) |
| Odd-Even Classification + Sorting | O(n log n) | O(n) |
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Greedy Matching with Sorting | O(n log n) | O(n) | General case. Clean and reliable solution used in most interviews. |
| Frequency Matching by Parity Groups | O(n log n) or O(n) | O(n) | Useful when many duplicate values exist or when implementing difference aggregation. |