Given an integer array nums, return true if any value appears at least twice in the array, and return false if every element is distinct.
Example 1:
Input: nums = [1,2,3,1]
Output: true
Explanation:
The element 1 occurs at the indices 0 and 3.
Example 2:
Input: nums = [1,2,3,4]
Output: false
Explanation:
All elements are distinct.
Example 3:
Input: nums = [1,1,1,3,3,4,3,2,4,2]
Output: true
Constraints:
1 <= nums.length <= 10^5
-10^9 <= nums[i] <= 10^9

Problem Overview: You are given an integer array nums. The task is simple: return true if any value appears at least twice in the array, otherwise return false. The challenge is detecting duplicates efficiently without unnecessary comparisons.
Approach 1: Using a HashSet to Check for Duplicates (Time: O(n), Space: O(n))
The most efficient solution uses a hash table, typically implemented as a HashSet. Iterate through the array once. For each element, check whether it already exists in the set. Hash lookups run in constant time on average, so this check is fast. If the value is already present, a duplicate exists and you can immediately return true. Otherwise, insert the value into the set and continue scanning.
The key insight: a set automatically enforces uniqueness. By checking membership before insertion, duplicates are detected instantly without comparing against every previous element. The algorithm performs one pass through the array, making it linear time O(n). The trade‑off is extra memory because the set may store up to n elements, resulting in O(n) space.
This approach is the standard solution for problems involving fast membership checks in an array. It scales well for large inputs and is usually the first method expected in interviews.
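As a minimal sketch of the one-pass set check described above (the function name is ours, not taken from the original solution tabs):

```python
def contains_duplicate(nums):
    # Track values seen so far; set membership checks are O(1) on average.
    seen = set()
    for x in nums:
        if x in seen:
            return True  # x appeared earlier, so a duplicate exists
        seen.add(x)
    return False

print(contains_duplicate([1, 2, 3, 1]))  # True
print(contains_duplicate([1, 2, 3, 4]))  # False
```

The early return means the scan stops at the first repeated value, so the O(n) bound is a worst case that only occurs when all elements are distinct.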
Approach 2: Sorting the Array (Time: O(n log n), Space: O(1) or O(n))
Another method sorts the array first, then checks neighboring elements. After sorting, duplicates become adjacent. Iterate through the sorted array and compare nums[i] with nums[i - 1]. If they match at any point, a duplicate exists.
Sorting takes O(n log n) time using standard algorithms like quicksort or mergesort. The scan afterward is linear O(n), so overall complexity remains O(n log n). Space complexity depends on the sorting implementation: in‑place sorts such as heapsort use O(1) extra space, while mergesort-based implementations may use O(n).
This approach avoids explicit hash structures and can be useful when sorting is already required for later operations. It also appears frequently in problems related to sorting and duplicate detection patterns.
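A sketch of the sort-then-scan idea in Python (again, the function name is ours; sorting a copy keeps the caller's array untouched):

```python
def contains_duplicate_sorted(nums):
    # Sort a copy so equal values become adjacent, then compare neighbors.
    nums = sorted(nums)  # O(n log n); Python's Timsort may use O(n) extra space
    for i in range(1, len(nums)):
        if nums[i] == nums[i - 1]:
            return True
    return False

print(contains_duplicate_sorted([1, 1, 1, 3, 3, 4, 3, 2, 4, 2]))  # True
```

Sorting in place with `nums.sort()` instead would cut the extra copy but mutate the input, which is the O(1)-versus-O(n) space trade-off mentioned above.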
Recommended for interviews: The HashSet approach is the expected optimal answer. It demonstrates understanding of constant-time hash lookups and efficient duplicate detection in linear time. Mentioning the sorting alternative shows awareness of trade-offs: no auxiliary hash structure, at the cost of O(n log n) runtime. Strong candidates typically explain both, implement the hash-based solution, and discuss complexity clearly.
This approach leverages the properties of a HashSet (or similar data structures depending on the programming language), which allows for average O(1) time complexity for insertion and lookup operations. As you iterate over the array, you check if the current element is already in the HashSet. If it is, then a duplicate has been found, and you can return true immediately. If it’s not already in the HashSet, you add it. If no duplicates are found by the end of the array, you return false.
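In Python specifically, the same membership-based check is often written as a one-liner (a common shorthand, not part of the original solution tabs): building a set drops duplicates, so if the set is smaller than the list, a repeat existed.

```python
def contains_duplicate_oneliner(nums):
    # set(nums) keeps one copy of each value; a size mismatch means a duplicate.
    return len(set(nums)) < len(nums)
```

Note this variant always processes the whole array, whereas the explicit loop can return early on the first duplicate.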
In the C solution, we sort the array using qsort() and then check each adjacent pair of elements for duplication. This is efficient given the constraints, as sorting the array is O(n log n) and the subsequent scan is O(n).
Time Complexity: O(n log n) for the sorting-based C solution; the HashSet versions run in O(n).
Space Complexity: O(1) extra space for the in-place sort; the HashSet versions use O(n).
This approach involves sorting the array first, then checking for duplicates by comparing each element with its next neighbor. If duplicates exist, they will appear next to each other after sorting.
Here, qsort is used to sort the array, which allows for efficient comparison of adjacent elements to find duplicates. This leverages sorting's ability to bring identical elements together.
Time Complexity: O(n log n), due to sorting.
Space Complexity: O(1) extra space when the sort is performed in place.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| HashSet Duplicate Check | O(n) | O(n) | Best general solution. Fast membership checks with a hash table. |
| Sorting + Adjacent Comparison | O(n log n) | O(1) to O(n) | When sorting is acceptable anyway, or extra hash-table memory must be avoided. |