This approach uses a hash map to count the frequency of each element. We then use a min-heap to keep track of the top k elements.
Time Complexity: O(n log k) — counting takes O(n), and the heap only ever holds k elements, so there is no full sort.
Space Complexity: O(n) for storing frequencies.
from collections import Counter
import heapq

def topKFrequent(nums, k):
    # Count how many times each element appears.
    count = Counter(nums)
    # nlargest maintains a heap of size k internally, so this is O(n log k).
    return heapq.nlargest(k, count.keys(), key=count.get)

if __name__ == '__main__':
    nums = [1, 1, 1, 2, 2, 3]
    k = 2
    print(topKFrequent(nums, k))  # [1, 2]

Python's collections.Counter builds the frequency map in a single pass, and heapq.nlargest selects the k keys with the highest counts without sorting the whole map.
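To make the intermediate values concrete, here is a minimal sketch of what Counter and nlargest produce for the sample input (run separately from the solution above):

from collections import Counter
import heapq

count = Counter([1, 1, 1, 2, 2, 3])
print(count)                                           # Counter({1: 3, 2: 2, 3: 1})
print(heapq.nlargest(2, count.keys(), key=count.get))  # [1, 2]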
This approach uses bucket sort: we create one bucket per possible frequency (1 through n) and then collect elements from the highest-frequency buckets until we have k of them.
Time Complexity: O(n) — counting, bucketing, and scanning the buckets are each linear in the input size.
Space Complexity: O(n).
#include <iostream>
#include <vector>
#include <unordered_map>

using namespace std;

vector<int> topKFrequent(vector<int>& nums, int k) {
    // Count how many times each element appears.
    unordered_map<int, int> freqMap;
    for (int num : nums) {
        freqMap[num]++;
    }

    // buckets[f] holds every element that appears exactly f times.
    // An element can appear at most nums.size() times.
    vector<vector<int>> buckets(nums.size() + 1);
    for (auto& p : freqMap) {
        buckets[p.second].push_back(p.first);
    }

    // Walk the buckets from the highest frequency down,
    // collecting elements until we have k of them.
    vector<int> result;
    for (int i = (int)buckets.size() - 1; i >= 0 && (int)result.size() < k; --i) {
        for (int num : buckets[i]) {
            result.push_back(num);
            if ((int)result.size() == k) break;
        }
    }
    return result;
}

int main() {
    vector<int> nums = {1, 1, 1, 2, 2, 3};
    int k = 2;
    vector<int> result = topKFrequent(nums, k);
    for (int num : result) {
        cout << num << " ";  // prints: 1 2
    }
    cout << endl;
    return 0;
}

Each element is placed in the bucket indexed by its frequency, so the highest-indexed non-empty buckets hold the most frequent elements. For the sample input, 1 lands in buckets[3], 2 in buckets[2], and 3 in buckets[1]; scanning from the top yields [1, 2] for k = 2.
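For readers following the Python solution above, here is a minimal Python sketch of the same bucket idea. It mirrors the C++ code rather than introducing anything new; the function name topKFrequent is kept only for consistency.

from collections import Counter

def topKFrequent(nums, k):
    count = Counter(nums)
    # buckets[f] holds the elements that appear exactly f times.
    buckets = [[] for _ in range(len(nums) + 1)]
    for num, freq in count.items():
        buckets[freq].append(num)
    # Collect from the highest-frequency buckets first.
    result = []
    for freq in range(len(buckets) - 1, 0, -1):
        for num in buckets[freq]:
            result.append(num)
            if len(result) == k:
                return result
    return result

print(topKFrequent([1, 1, 1, 2, 2, 3], 2))  # [1, 2]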