




Explanation:
To solve this problem, we can use a hash map (or dictionary) to count the frequency of each word in the given array. Once we have the frequencies, we can then create a list of words and sort them based on their frequency in descending order. An additional condition for sorting is that if two words have the same frequency, we sort them alphabetically.
Because the sort runs only over the unique words, extracting the top k frequent words is then just a matter of taking the first k entries of the sorted list.
Time Complexity: O(N + u log u), where N is the total number of words and u is the number of unique words; counting is linear and the sort dominates. (Note that the C implementation below replaces the hash map with a linear scan over the unique words, so its counting step is O(N·u).)
Space Complexity: O(u), for storing the unique words and their counts.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

struct Node {
    char *word;
    int count;
};

/* Sort by count descending; break ties lexicographically. */
int compare(const void *a, const void *b) {
    struct Node *nodeA = (struct Node *)a;
    struct Node *nodeB = (struct Node *)b;
    if (nodeA->count != nodeB->count)
        return nodeB->count - nodeA->count;
    return strcmp(nodeA->word, nodeB->word);
}

char** topKFrequent(char** words, int wordsSize, int k, int* returnSize) {
    struct Node nodes[501];   /* sized for the problem's bound of 500 words */
    int index = 0;

    /* Count each word with a linear scan over the unique words seen so far. */
    for (int i = 0; i < wordsSize; ++i) {
        int found = 0;
        for (int j = 0; j < index; ++j) {
            if (strcmp(nodes[j].word, words[i]) == 0) {
                nodes[j].count++;
                found = 1;
                break;
            }
        }
        if (!found) {
            nodes[index].word = words[i];
            nodes[index].count = 1;
            index++;
        }
    }

    qsort(nodes, index, sizeof(struct Node), compare);
    char **result = (char **)malloc(k * sizeof(char *));
    for (int i = 0; i < k; ++i) {
        result[i] = nodes[i].word;
    }
    *returnSize = k;
    return result;
}
In this C solution, we use an array of Node structs to record words and their counts. We iterate through the input array, updating frequency counts. After processing the input, sorting is done via qsort using a custom comparator that sorts first by frequency, then lexicographically. Finally, we collect the first k elements as the result.
Explanation:
This method takes advantage of a min-heap to efficiently determine the top k frequent words. Instead of sorting the entire set of words, which incurs an O(n log n) cost, we can maintain a min-heap of size k. As we iterate through the frequency map, we push elements into the heap. If heap size exceeds k, we remove the smallest element (i.e., least frequent word or alphabetically greater if frequencies are tied).
This addresses the Follow-up question, which asks for a solution bounded by O(n log k) time.
Time Complexity: O(n log k), achieved by never letting the heap grow beyond k elements, so each of the n heap operations costs O(log k).
Space Complexity: O(n + k), for the frequency map and the heap.
#include <algorithm>
#include <vector>
#include <string>
#include <unordered_map>
#include <queue>
class Solution {
public:
    struct Comp {
        bool operator()(const std::pair<int, std::string>& a, const std::pair<int, std::string>& b) const {
            if (a.first == b.first)
                return a.second < b.second;
            return a.first > b.first;
        }
    };
    std::vector<std::string> topKFrequent(std::vector<std::string>& words, int k) {
        std::unordered_map<std::string, int> count;
        for (const std::string& word : words) {
            count[word]++;
        }
        std::priority_queue<std::pair<int, std::string>, std::vector<std::pair<int, std::string>>, Comp> minHeap;
        for (const auto& [word, freq] : count) {
            minHeap.emplace(freq, word);
            if (minHeap.size() > static_cast<size_t>(k)) {
                minHeap.pop();
            }
        }
        std::vector<std::string> result;
        while (!minHeap.empty()) {
            result.push_back(minHeap.top().second);
            minHeap.pop();
        }
        std::reverse(result.begin(), result.end());
        return result;
    }
};
In this C++ example, we create a custom comparator to maintain a priority queue (min-heap) ordered by frequency and, on ties, by lexicographic rank. By limiting the heap to size k, we keep only the current top candidates; once the less frequent words have been evicted, popping the heap and reversing the result yields the answer in the required order.