In this approach, we first traverse the array and count the occurrences of each element with a hash map. We then insert each count into a set; if a count is already present in the set, two elements share the same occurrence count, so the occurrences are not unique.
Time Complexity: O(n), where n is the length of the array.
Space Complexity: O(n), for the hash map of counts and the set of occurrence values.
using System;
using System.Collections.Generic;

public class Program {
    public static bool UniqueOccurrences(int[] arr) {
        Dictionary<int, int> countMap = new Dictionary<int, int>();
        foreach (int num in arr) {
            if (countMap.ContainsKey(num)) countMap[num]++;
            else countMap[num] = 1;
        }
        HashSet<int> occurrences = new HashSet<int>();
        foreach (int count in countMap.Values) {
            if (!occurrences.Add(count)) return false;
        }
        return true;
    }

    public static void Main() {
        int[] arr = {1, 2, 2, 1, 1, 3};
        Console.WriteLine(UniqueOccurrences(arr));
    }
}
The C# solution uses a Dictionary to count the occurrences and a HashSet to ensure all occurrence counts are unique.
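For comparison, the same hash-map-and-set idea translates directly to JavaScript; a minimal sketch mirroring the C# solution:

```javascript
function uniqueOccurrences(arr) {
  // Count occurrences of each element.
  const countMap = new Map();
  for (const num of arr) {
    countMap.set(num, (countMap.get(num) || 0) + 1);
  }
  // A Set rejects nothing on its own, so check membership before adding.
  const seen = new Set();
  for (const count of countMap.values()) {
    if (seen.has(count)) return false; // two elements share this count
    seen.add(count);
  }
  return true;
}
```

As in the C# version, the function exits early on the first duplicate count, so the set never grows larger than necessary.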
This alternative approach starts by counting occurrences just like the first one. It then stores the counts in a list, sorts the list, and scans adjacent elements for equality; any pair of equal neighbors indicates a duplicate occurrence count.
Time Complexity: O(n log n), due to sorting.
Space Complexity: O(n), for the count map and the list of occurrence counts.
function uniqueOccurrences(arr) {
  const countMap = new Map();
  for (const num of arr) {
    countMap.set(num, (countMap.get(num) || 0) + 1);
  }
  const occurrences = Array.from(countMap.values()).sort((a, b) => a - b);
  for (let i = 1; i < occurrences.length; i++) {
    if (occurrences[i] === occurrences[i - 1]) {
      return false;
    }
  }
  return true;
}

const arr = [1, 2, 2, 1, 1, 3];
console.log(uniqueOccurrences(arr));
The JavaScript sorting approach involves sorting the list of occurrence counts and checking for any duplicates in consecutive elements.
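A quick sanity check helps illustrate the failure case: when two different elements occur the same number of times, their counts land next to each other after sorting. A self-contained snippet (repeating the sort-based function from above):

```javascript
function uniqueOccurrences(arr) {
  const countMap = new Map();
  for (const num of arr) {
    countMap.set(num, (countMap.get(num) || 0) + 1);
  }
  const occurrences = Array.from(countMap.values()).sort((a, b) => a - b);
  for (let i = 1; i < occurrences.length; i++) {
    if (occurrences[i] === occurrences[i - 1]) return false;
  }
  return true;
}

// [1, 2, 2]: counts are 1 and 2 — distinct, so true.
console.log(uniqueOccurrences([1, 2, 2])); // true
// [1, 2]: both elements occur once, so the sorted counts [1, 1] collide.
console.log(uniqueOccurrences([1, 2])); // false
```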