You are given a 0-indexed integer array nums and a positive integer k.
You can do the following operation on the array any number of times:
Choose two distinct indices i and j, and simultaneously update nums[i] to (nums[i] AND nums[j]) and nums[j] to (nums[i] OR nums[j]). Here, AND denotes the bitwise AND operation and OR denotes the bitwise OR operation.

You have to choose k elements from the final array and calculate the sum of their squares.
Return the maximum sum of squares you can achieve.
Since the answer can be very large, return it modulo 10^9 + 7.
Example 1:
Input: nums = [2,6,5,8], k = 2
Output: 261
Explanation: We can do the following operations on the array:
- Choose i = 0 and j = 3, then change nums[0] to (2 AND 8) = 0 and nums[3] to (2 OR 8) = 10. The resulting array is nums = [0,6,5,10].
- Choose i = 2 and j = 3, then change nums[2] to (5 AND 10) = 0 and nums[3] to (5 OR 10) = 15. The resulting array is nums = [0,6,0,15].
We can choose the elements 15 and 6 from the final array. The sum of squares is 15^2 + 6^2 = 261. It can be shown that this is the maximum value we can get.
Example 2:
Input: nums = [4,5,4,7], k = 3
Output: 90
Explanation: We do not need to apply any operations. We can choose the elements 7, 5, and 4 with a sum of squares: 7^2 + 5^2 + 4^2 = 90. It can be shown that this is the maximum value we can get.
Constraints:
1 <= k <= nums.length <= 10^5
1 <= nums[i] <= 10^9

Problem Overview: You are given an array of integers and can repeatedly apply an operation that replaces two numbers with their bitwise AND and OR. After performing operations, choose k elements so the sum of their squares is maximized. The key observation: these operations never change the total count of set bits at each bit position across the array.
Approach 1: Greedy Bit Distribution (O((n + k) * B) time, O(B) space)
Track how many times each bit appears across the array. Because the AND/OR operation only redistributes bits between numbers, the total count of every bit position remains constant. To maximize the sum of squares, construct the largest possible numbers greedily. For each of the k selections, iterate through bit positions and assign a bit if any occurrences remain, decrementing the count. This packs high-value bits together, which increases the square of the resulting number. Bit counting takes O(n * B) where B is the number of bits (≈31), and constructing k values takes O(k * B). This approach relies heavily on bit manipulation and a greedy strategy.
Approach 2: Priority Queue (Heap) Simulation (O((n + k) log n) time, O(n) space)
Another way to reason about the redistribution is to dynamically form the largest numbers using available bits and always process the current largest candidates. Maintain numbers in a max heap and repeatedly combine available bit contributions to grow the largest values before squaring them. The heap ensures the largest partially formed values are expanded first, which aligns with maximizing the square contribution. Each heap operation costs O(log n), making the overall complexity O((n + k) log n). This approach is useful if you prefer explicit element tracking using a priority queue instead of direct bit counting.
Recommended for interviews: The greedy bit-counting solution is the expected optimal approach. It demonstrates the critical insight that AND/OR operations conserve bit frequencies, allowing you to ignore the actual operations and work directly with bit counts. Interviewers typically look for this observation first, then the greedy construction that groups bits to maximize squared values.
The goal is to make the k largest elements of the array as large as possible, since they dominate the sum of squares. By repeatedly applying the operation to funnel bits into the largest elements, we can grow their values further.
Observe that the AND result strips set bits out of one operand while the OR result absorbs them into the other, so over repeated operations the smaller values shrink toward zero while the larger values accumulate bits. Once the final values are formed, sorting makes it easy to select the k largest squares.
This C code sorts the array nums in descending order so that we can select the largest k elements easily. It then calculates the sum of their squares, taking each term modulo 10^9 + 7 to avoid overflow.
Time Complexity: O(n log n), due to sorting the array. Space Complexity: O(1), outside of the sorting algorithm's space requirements.
This method employs a max heap to dynamically track the k largest elements in the array, which will allow efficient access without sorting the entire array. This is particularly useful for very large arrays where sorting might be inefficient.
This C implementation creates a max heap from the array nums and uses it to extract the largest elements iteratively. This approach is efficient for maintaining the largest elements without sorting the entire array.
Time Complexity: O(n + k log n), where we build the heap in O(n) and then fetch the k largest in O(k log n). Space Complexity: O(1) additional space.
According to the problem description, an operation changes nums[i] to nums[i] AND nums[j] and changes nums[j] to nums[i] OR nums[j]. Consider the numbers bit by bit. If the two bits at a position are both 1 or both 0, the operation leaves them unchanged. If they differ, after the operation nums[i]'s bit becomes 0 and nums[j]'s bit becomes 1; in other words, a 1 bit moves from nums[i] to nums[j]. Bits can therefore be redistributed among the numbers, but never created or destroyed.
We can use an array cnt to count the number of 1 bits at each position, and then assemble k numbers from those counts. To maximize the sum of squares, we should make individual numbers as large as possible. This is because, given two numbers with a > b, moving an amount c from b to a changes the sum of squares from a^2 + b^2 to (a + c)^2 + (b - c)^2 = a^2 + b^2 + 2c(a - b) + 2c^2 > a^2 + b^2. For example, with a = 3, b = 2, c = 2, the sum of squares rises from 13 to 25. Therefore, to maximize the sum of squares, we should always pack available bits into the largest possible number first.
The time complexity is O(n log M) and the space complexity is O(log M), where M is the maximum value in the array.
| Approach | Complexity |
|---|---|
| Greedy Approach to Maximize Final Elements | Time Complexity: O(n log n), due to sorting the array. Space Complexity: O(1), outside of the sorting algorithm's space requirements. |
| Priority Queue (Heap) Approach to Dynamically Track Largest Elements | Time Complexity: O(n + k log n), where we build the heap in O(n) and then fetch the k largest in O(k log n). Space Complexity: O(1) additional space. |
| Bitwise Operation + Greedy | Time Complexity: O(n log M), where M is the maximum value in the array. Space Complexity: O(log M). |
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Greedy Bit Distribution | O((n + k) * B) | O(B) | Best overall approach when bit limits are small (≈31). Most efficient and common interview solution. |
| Priority Queue (Heap) Simulation | O((n + k) log n) | O(n) | Useful when explicitly tracking element growth or when modeling redistribution dynamically. |
Video solution: 2897. Apply Operations on Array to Maximize Sum of Squares | Weekly Leetcode 366 (codingMohan)