
Explanation:
To solve this problem, we can use a hash map (or dictionary) to count the frequency of each word in the given array. Once we have the frequencies, we create a list of the words and sort it by frequency in descending order, with the additional rule that words of equal frequency are ordered alphabetically.
This keeps the solution simple: after the sort (which operates only on the unique words), the top k frequent words are just the first k entries of the list.
Time Complexity: O(n log n), where n is the number of unique words; the sort dominates, while counting takes time linear in the total number of words.
Space Complexity: O(n), for storing the unique words and their counts.
#include <vector>
#include <string>
#include <unordered_map>
#include <algorithm>

class Solution {
public:
    std::vector<std::string> topKFrequent(std::vector<std::string>& words, int k) {
        // Count how many times each word appears.
        std::unordered_map<std::string, int> freqs;
        for (const auto& word : words) {
            ++freqs[word];
        }

        // Copy the map into a vector of (word, count) pairs so it can be sorted.
        std::vector<std::pair<std::string, int>> pairs(freqs.begin(), freqs.end());
        // Sort by descending frequency; break ties alphabetically.
        std::sort(pairs.begin(), pairs.end(), [](const auto& a, const auto& b) {
            return a.second > b.second || (a.second == b.second && a.first < b.first);
        });

        // The first k entries are the answer.
        std::vector<std::string> result;
        for (int i = 0; i < k; ++i) {
            result.push_back(pairs[i].first);
        }
        return result;
    }
};
In C++, we use an unordered_map to count the frequency of each word. We then convert the map into a vector of pairs so it can be sorted with a custom lambda that prioritizes frequency and falls back to lexicographical order. Finally, we take the first k words as the result.
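As a quick sanity check, here is a minimal, hypothetical driver (not part of the original solution) showing how the method above might be called; it assumes the Solution class is in scope, and the sample words and expected output are illustrative assumptions:

#include <iostream>

int main() {
    Solution solution;
    // Hypothetical sample input: "code" appears 3 times, "top" twice, "k" once.
    std::vector<std::string> words = {"code", "top", "code", "k", "top", "code"};
    for (const auto& w : solution.topKFrequent(words, 2)) {
        std::cout << w << '\n';  // Expected output: code, top
    }
    return 0;
}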
Explanation:
This method takes advantage of a min-heap to efficiently determine the top k frequent words. Instead of sorting the entire set of unique words, which incurs an O(n log n) cost, we maintain a min-heap of size at most k. As we iterate through the frequency map, we push each entry onto the heap; whenever the heap's size exceeds k, we pop the smallest element (the least frequent word, or the alphabetically larger one if frequencies are tied).
This addresses the follow-up question by showing an alternative solution bounded by O(n log k) time complexity.
Time Complexity: O(n log k), achieved by never letting the heap grow beyond k elements.
Space Complexity: O(n + k), for the frequency map plus the heap.
#include <vector>
#include <string>
#include <unordered_map>
#include <queue>
#include <algorithm>

class Solution {
public:
    // Comparator for a min-heap keyed on frequency: the least frequent pair
    // sits on top, and for equal frequencies the alphabetically larger word
    // sits on top (so it is evicted first, keeping the smaller one).
    struct Comp {
        bool operator()(const std::pair<int, std::string>& a, const std::pair<int, std::string>& b) const {
            if (a.first == b.first)
                return a.second < b.second;
            return a.first > b.first;
        }
    };

    std::vector<std::string> topKFrequent(std::vector<std::string>& words, int k) {
        // Count how many times each word appears.
        std::unordered_map<std::string, int> count;
        for (const std::string& word : words) {
            count[word]++;
        }

        // Keep at most k candidates in the heap; whenever it grows past k,
        // evict the weakest candidate from the top.
        std::priority_queue<std::pair<int, std::string>, std::vector<std::pair<int, std::string>>, Comp> minHeap;
        for (const auto& [word, freq] : count) {
            minHeap.emplace(freq, word);
            if (static_cast<int>(minHeap.size()) > k) {
                minHeap.pop();
            }
        }

        // The heap yields words from weakest to strongest, so reverse at the end.
        std::vector<std::string> result;
        while (!minHeap.empty()) {
            result.push_back(minHeap.top().second);
            minHeap.pop();
        }
        std::reverse(result.begin(), result.end());
        return result;
    }
};
In this C++ example, a custom comparator drives the priority queue (min-heap) using both frequency and lexicographic tie-breaking criteria. By capping the heap at size k, we only ever track the strongest candidates, and we produce the final list once the weaker words have been evicted from the heap.
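To see the tie-breaking in action, here is a small, hypothetical demo (again assuming the Solution class above is in scope; the input and expected output are illustrative assumptions). With k = 3, the two frequency-1 words compete for the last slot, and the comparator evicts the alphabetically larger one:

#include <iostream>

int main() {
    Solution solution;
    // Hypothetical tie-break demo: "i" and "love" both appear twice, so the
    // alphabetically smaller "i" must come first; of the frequency-1 words,
    // "leetcode" is evicted because it is alphabetically larger than "coding".
    std::vector<std::string> words = {"i", "love", "leetcode", "i", "love", "coding"};
    for (const auto& w : solution.topKFrequent(words, 3)) {
        std::cout << w << '\n';  // Expected output: i, love, coding
    }
    return 0;
}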