You are given an integer array nums. The unique elements of an array are the elements that appear exactly once in the array.
Return the sum of all the unique elements of nums.
Example 1:
Input: nums = [1,2,3,2]
Output: 4
Explanation: The unique elements are [1,3], and the sum is 4.
Example 2:
Input: nums = [1,1,1,1,1]
Output: 0
Explanation: There are no unique elements, and the sum is 0.
Example 3:
Input: nums = [1,2,3,4,5]
Output: 15
Explanation: The unique elements are [1,2,3,4,5], and the sum is 15.
Constraints:
1 <= nums.length <= 100
1 <= nums[i] <= 100
Problem Overview: You receive an integer array nums. The task is to compute the sum of elements that appear exactly once in the array. Any number that appears more than once must be ignored. The challenge is efficiently identifying which values have frequency 1.
Approach 1: Frequency Count and Sum (O(n) time, O(n) space)
This approach uses a frequency map to count how many times each number appears. Iterate through the array once and store counts in a hash table; then iterate over the frequency map and add the values whose count equals 1. Hash table insertion and lookup both run in constant time on average, so the entire operation is linear. This method is straightforward, works well on unsorted arrays, and is widely used in problems involving element frequency tracking.
The key insight is separating counting from summation. Instead of trying to determine uniqueness while iterating, first build a complete frequency distribution. Once you know exact counts, computing the result becomes trivial. This pattern appears frequently in problems involving duplicates, histograms, or occurrence tracking.
Approach 2: Single Pass with Multiple Sets (O(n) time, O(n) space)
This approach tracks numbers using two sets during a single traversal. Maintain a unique set for numbers seen exactly once and a duplicates set for numbers seen more than once. As you iterate through the array, check whether the number has been seen before. If it appears for the first time, insert it into unique. If it appears again, remove it from unique and add it to duplicates. At the end, sum all values remaining in unique.
The advantage is that counting and filtering happen simultaneously in one pass. Set membership checks and updates are constant time on average because sets use hashing internally, which makes this conceptually similar to the counting approach but slightly more dynamic. The technique is common when you need to maintain "seen once" vs. "seen multiple times" elements on the fly.
Both strategies operate in linear time and rely on efficient lookups provided by a hash table. Since the input is simply an array, the algorithm focuses on tracking frequencies rather than modifying structure or order. Problems involving frequency counting are often categorized under counting techniques.
Recommended for interviews: The frequency counting approach is usually the expected solution. It is simple, easy to reason about, and clearly demonstrates understanding of hash maps and counting patterns. The set-based single-pass method is equally efficient but slightly less common. Interviewers typically want to see the frequency map solution first because it generalizes well to many duplicate-handling problems.
In this approach, we maintain a frequency count of each number using a hash map (or array if constraints are small). By keeping track of these counts, we can identify which numbers occur exactly once. A second pass over the frequency data allows us to sum the unique elements efficiently.
This C code uses a fixed-size count array to tally occurrences of each element in the input array. We then iterate through the count array and sum the unique elements (those with a count of 1).
Time Complexity: O(n) where n is the size of the input array. Space Complexity: O(1) since the count array has a constant size based on the constraints.
This approach involves using two sets: one to track elements seen so far and another to track duplicates. As we iterate over the array, we determine whether an element is being seen for the first time or has been repeated. The strategy uses set operations to track and sum unique values concurrently.
This C approach uses two arrays to track seen numbers and duplicates. As each number is processed, we adjust the sum conditionally, subtracting a value only once, on its second appearance.
Time Complexity: O(n) as the array is traversed once. Space Complexity: O(1) considering two fixed-size arrays are utilized.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Frequency Count and Sum | O(n) | O(n) | General case when counting occurrences in an unsorted array |
| Single Pass with Multiple Sets | O(n) | O(n) | When you want to track unique vs duplicate elements dynamically in one traversal |
Video solution: Sum of Unique Elements (LeetCode 1748) | Full solution with a frequency array • Nikhil Lohia