This approach leverages the properties of a HashSet (or a similar data structure, depending on the programming language), which offers average O(1) time complexity for insertion and lookup. As you iterate over the array, you check whether the current element is already in the HashSet: if it is, a duplicate has been found and you can return true immediately; if not, you add it. If no duplicate is found by the end of the array, you return false.
Time Complexity: O(n), since each element is inserted and looked up in average O(1) time.
Space Complexity: O(n), since in the worst case every element ends up in the set.
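A minimal sketch of this hash-set approach in C++, using std::unordered_set; the containsDuplicate signature is assumed to match the C++ sorting solution later in this section:

#include <vector>
#include <unordered_set>

// Hash-set approach (sketch): assumes the same signature as the
// sorting solution below. insert().second is false when the value
// was already present in the set.
bool containsDuplicate(std::vector<int>& nums) {
    std::unordered_set<int> seen;
    for (int x : nums) {
        if (!seen.insert(x).second) {
            return true;   // x was seen before: duplicate found
        }
    }
    return false;          // every element was distinct
}

C's standard library offers no hash set, so the C solution below falls back to sorting instead: that costs O(n log n) time for the sort, but needs only O(1) auxiliary space.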
#include <stdbool.h>
#include <stdlib.h>

/* Comparator for qsort. Comparing instead of subtracting avoids
   signed overflow, which *(int*)a - *(int*)b can trigger for
   large-magnitude values. */
int compare(const void *a, const void *b) {
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);
}

bool containsDuplicate(int* nums, int numsSize) {
    qsort(nums, numsSize, sizeof(int), compare);
    /* After sorting, any duplicates sit next to each other. */
    for (int i = 0; i < numsSize - 1; i++) {
        if (nums[i] == nums[i + 1]) {
            return true;
        }
    }
    return false;
}
In the C solution, we sort the array using qsort() and then check each adjacent pair of elements for equality. This is efficient given the constraints: sorting is O(n log n) and the subsequent scan is O(n). Note that the comparator orders the values with comparisons rather than the common subtraction idiom, since a - b can overflow for large-magnitude ints.
This approach sorts the array first, then checks for duplicates by comparing each element with its immediate successor. If duplicates exist, they will end up next to each other after sorting.
Time Complexity: O(n log n), due to sorting.
Space Complexity: O(1) auxiliary space; the sort is performed in place.
#include <vector>
#include <algorithm>
#include <cstddef>

bool containsDuplicate(std::vector<int>& nums) {
    std::sort(nums.begin(), nums.end());
    // After sorting, any duplicates are adjacent. Using i + 1 < size()
    // instead of i < size() - 1 keeps an empty vector safe, since
    // size() - 1 underflows for an unsigned size type.
    for (std::size_t i = 0; i + 1 < nums.size(); ++i) {
        if (nums[i] == nums[i + 1]) {
            return true;
        }
    }
    return false;
}
For C++, we use std::sort, which sorts the vector in place, followed by a linear sweep comparing successive elements. The loop condition i + 1 < nums.size() also keeps the function safe for an empty vector, where nums.size() - 1 would underflow as an unsigned value.
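As a quick usage sketch, assuming the containsDuplicate definition above is in scope (the sample values below are illustrative):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> nums = {1, 2, 3, 1};  // illustrative sample input
    std::cout << std::boolalpha
              << containsDuplicate(nums)   // prints "true": 1 appears twice
              << std::endl;
    return 0;
}

One design consequence worth keeping in mind: the sorting version reorders the caller's vector, so if the original element order matters, work on a copy or prefer the hash-set approach.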