Negation Marking: This approach exploits the constraint that every value lies between 1 and n, letting the input array itself track which values have been seen. For each element, we treat its absolute value as a 1-based index and negate the number stored at that index if it is positive. If the number at that index is already negative, the value mapping to that index has been seen before, so it is a duplicate.
Time Complexity: O(n) as it traverses the list once.
Space Complexity: O(1) as it uses no additional space apart from the output list.
#include <stdio.h>
#include <stdlib.h>

int* findDuplicates(int* nums, int numsSize, int* returnSize) {
    // In the worst case every element could be reported, so allocate numsSize slots.
    int *duplicates = (int *)malloc(numsSize * sizeof(int));
    *returnSize = 0;
    for (int i = 0; i < numsSize; i++) {
        // abs() undoes any earlier negation; values are 1..n, so subtract 1 for the index.
        int index = abs(nums[i]) - 1;
        if (nums[index] < 0) {
            // Index already visited: this value is a duplicate.
            duplicates[(*returnSize)++] = abs(nums[i]);
        } else {
            // Mark the index as visited by negating the value stored there.
            nums[index] = -nums[index];
        }
    }
    return duplicates;
}
The C implementation allocates the output array dynamically and negates values in place to detect duplicates, following the same index manipulation described in the Python explanation.
Index-Based Value Reordering: Here we try to move every value v into index v - 1 by repeated swapping (a cyclic sort). The swap loop for a position stops when the target slot already holds that same value; for a duplicate, this happens while the extra copy is still stranded at the wrong index. A second pass then reports every position i where the value is not i + 1, since the stranded values there are exactly the duplicates.
Time Complexity: O(n); each swap moves one element into its final slot, so there are at most n swaps overall.
Space Complexity: O(1) as no extra space is used apart from the output list.
using System.Collections.Generic;

public class Solution {
    public IList<int> FindDuplicates(int[] nums) {
        List<int> duplicates = new List<int>();
        // First pass: cyclic sort - swap each value v toward index v - 1
        // until the target slot already holds v (its home, or a duplicate copy).
        for (int i = 0; i < nums.Length; i++) {
            while (nums[i] != nums[nums[i] - 1]) {
                int temp = nums[i];
                nums[i] = nums[temp - 1];
                nums[temp - 1] = temp;
            }
        }
        // Second pass: any value not sitting at its own index is a duplicate.
        for (int i = 0; i < nums.Length; i++) {
            if (nums[i] != i + 1) {
                duplicates.Add(nums[i]);
            }
        }
        return duplicates;
    }
}
The C# implementation swaps elements within the input array until each value sits at its own index, then detects duplicates as the out-of-place elements in the second pass.