
Explanation:
To solve this problem, we can use a hash map (or dictionary) to count the frequency of each word in the given array. Once we have the frequencies, we build a list of the unique words and sort it by frequency in descending order, breaking ties alphabetically.
This keeps the solution simple: after sorting the unique words, the top k frequent words can be read straight off the front of the list.
Time Complexity: O(N + n log n), where N is the total number of words and n is the number of unique words; the n log n term comes from the sorting step.
Space Complexity: O(n), used for storing the unique words and their counts.
using System;
using System.Collections.Generic;
using System.Linq;

public class Solution {
    public IList<string> TopKFrequent(string[] words, int k) {
        Dictionary<string, int> countMap = new Dictionary<string, int>();
        foreach (string word in words) {
            if (!countMap.ContainsKey(word)) {
                countMap[word] = 0;
            }
            countMap[word]++;
        }

        var sortedList = countMap.OrderByDescending(x => x.Value).ThenBy(x => x.Key).Select(x => x.Key).ToList();
        return sortedList.GetRange(0, k);
    }
}
In C#, we use a Dictionary to count word occurrences. LINQ is employed to sort the dictionary keys (words) by frequency first, and by lexicographical order second. The result is retrieved by fetching the first k elements from the sorted list.
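For illustration, here is a minimal usage snippet for the class above; the sample input is hypothetical and only meant to show the expected output shape.

using System;

// Uses the Solution class defined above.
var solution = new Solution();
string[] words = { "i", "love", "leetcode", "i", "love", "coding" };
// Both "i" and "love" appear twice; "i" comes first alphabetically.
Console.WriteLine(string.Join(", ", solution.TopKFrequent(words, 2))); // i, love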
Explanation:
This method takes advantage of a min-heap to efficiently determine the top k frequent words. Instead of sorting the entire set of unique words, which incurs an O(n log n) cost, we maintain a min-heap of size k. As we iterate through the frequency map, we push each entry into the heap; whenever the heap's size exceeds k, we remove its top element (the least frequent word, or the lexicographically larger one when frequencies are tied).
This addresses the follow-up question by showing an alternative solution bounded by O(n log k) time.
Time Complexity: O(n log k), achieved by never letting the heap grow beyond k elements.
Space Complexity: O(n + k): O(n) for the frequency map and O(k) for the heap.
#include <vector>
#include <string>
#include <unordered_map>
#include <queue>
#include <algorithm>

class Solution {
public:
    // Orders the heap so the weakest candidate (lowest frequency, or the
    // lexicographically larger word on ties) sits at the top.
    struct Comp {
        bool operator()(const std::pair<int, std::string>& a, const std::pair<int, std::string>& b) const {
            if (a.first == b.first)
                return a.second < b.second;
            return a.first > b.first;
        }
    };

    std::vector<std::string> topKFrequent(std::vector<std::string>& words, int k) {
        std::unordered_map<std::string, int> count;
        for (const std::string& word : words) {
            count[word]++;
        }

        // Min-heap capped at k elements: push every (frequency, word) pair
        // and evict the weakest candidate whenever the size exceeds k.
        std::priority_queue<std::pair<int, std::string>, std::vector<std::pair<int, std::string>>, Comp> minHeap;
        for (const auto& [word, freq] : count) {
            minHeap.emplace(freq, word);
            if (minHeap.size() > static_cast<size_t>(k)) {
                minHeap.pop();
            }
        }

        // Pop from weakest to strongest, then reverse so the most frequent come first.
        std::vector<std::string> result;
        while (!minHeap.empty()) {
            result.push_back(minHeap.top().second);
            minHeap.pop();
        }
        std::reverse(result.begin(), result.end());
        return result;
    }
};
In this C++ example, we define a custom comparator so the priority queue behaves as a min-heap ordered by frequency first and lexicographic order second. By capping the heap at size k, we only ever keep the current best k candidates; once all words have been processed, the heap is emptied and the result reversed so the most frequent words come first.
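The same size-k heap strategy can also be sketched in C#. The snippet below is an illustrative sketch rather than part of the original solution set; it assumes .NET 6 or later, where PriorityQueue<TElement, TPriority> is available, and mirrors the ordering logic of the C++ Comp struct.

using System;
using System.Collections.Generic;

public class HeapSolution {
    public IList<string> TopKFrequent(string[] words, int k) {
        var countMap = new Dictionary<string, int>();
        foreach (string word in words) {
            countMap.TryGetValue(word, out int current);
            countMap[word] = current + 1;
        }

        // Min-heap ordered so the weakest candidate (lowest frequency,
        // then lexicographically largest word) is dequeued first.
        var minHeap = new PriorityQueue<string, (int freq, string word)>(
            Comparer<(int freq, string word)>.Create((a, b) => {
                if (a.freq != b.freq) return a.freq.CompareTo(b.freq);   // lower frequency evicted first
                return string.CompareOrdinal(b.word, a.word);            // larger word evicted first on ties
            }));

        foreach (var pair in countMap) {
            minHeap.Enqueue(pair.Key, (pair.Value, pair.Key));
            if (minHeap.Count > k) {
                minHeap.Dequeue(); // drop the current weakest candidate
            }
        }

        // Dequeue from weakest to strongest, then reverse for the final order.
        var result = new List<string>();
        while (minHeap.Count > 0) {
            result.Add(minHeap.Dequeue());
        }
        result.Reverse();
        return result;
    }
}

The comparer plays the same role as the C++ comparator: lower frequencies are dequeued first, and on ties the lexicographically larger word is evicted, so the k words that survive match the sorted-list answer.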