This approach uses a frequency map that records, for each letter, the maximum count required by any single word in words2. A word in words1 is universal if it contains at least that many occurrences of every such letter, so we check each word in words1 against these maximum frequency requirements.
Time Complexity: O(N * S + M * T), where N and M are the numbers of words in words1 and words2, and S and T are the average word lengths in words1 and words2, respectively. The space complexity is O(1), since the auxiliary frequency maps are bounded by the fixed character set.
#include <vector>
#include <string>
#include <unordered_map>
#include <algorithm>
#include <iostream> // needed only for the example usage below
using namespace std;

vector<string> wordSubsets(vector<string>& words1, vector<string>& words2) {
    // Record the maximum count of each letter required by any single word in words2.
    unordered_map<char, int> max_freq;
    for (const string& word : words2) {
        unordered_map<char, int> freq;
        for (char letter : word) {
            freq[letter]++;
        }
        for (const auto& [letter, count] : freq) {
            max_freq[letter] = max(max_freq[letter], count);
        }
    }

    // Keep a word from words1 only if it meets every aggregated requirement.
    vector<string> result;
    for (const string& word : words1) {
        unordered_map<char, int> freq;
        for (char letter : word) {
            freq[letter]++;
        }
        bool is_universal = true;
        for (const auto& [letter, count] : max_freq) {
            if (freq[letter] < count) {
                is_universal = false;
                break;
            }
        }
        if (is_universal) {
            result.push_back(word);
        }
    }
    return result;
}

// Example usage
// int main() {
//     vector<string> words1 = {"amazon","apple","facebook","google","leetcode"};
//     vector<string> words2 = {"e","o"};
//     vector<string> result = wordSubsets(words1, words2);
//     for (const string& word : result) {
//         cout << word << " ";
//     }
//     return 0;
// }
In this C++ solution, we use unordered_map to track frequency counts. We first loop over words2 to build a frequency map holding the maximum requirement for each character. Then, for each word in words1, we check that its character counts meet or exceed every requirement; if they do, the word is added to the result.
In this approach, we build a character frequency requirement array from words2 and verify each word in words1 against it. Instead of a hash map, we use fixed-size arrays of length 26, whose constant size makes both the space usage and element access efficient.
Time Complexity: O(N * S + M * T), Space Complexity: O(1).
import java.util.ArrayList;
import java.util.List;
This Java solution uses an integer array of size 26 to store character frequencies, which keeps both space usage and lookups efficient. We compute the aggregate maximum frequency needed across words2, then check each word in words1 against this frequency array directly.
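A minimal sketch of this array-based approach follows; the String[] parameters, the List<String> return type, and the WordSubsets wrapper class are assumptions chosen for illustration, not a fixed signature.

import java.util.ArrayList;
import java.util.List;

public class WordSubsets {
    public static List<String> wordSubsets(String[] words1, String[] words2) {
        // Aggregate maximum required count for each of the 26 lowercase letters.
        int[] maxFreq = new int[26];
        for (String word : words2) {
            int[] freq = count(word);
            for (int i = 0; i < 26; i++) {
                maxFreq[i] = Math.max(maxFreq[i], freq[i]);
            }
        }

        // A word is universal if it meets every aggregated requirement.
        List<String> result = new ArrayList<>();
        for (String word : words1) {
            int[] freq = count(word);
            boolean universal = true;
            for (int i = 0; i < 26; i++) {
                if (freq[i] < maxFreq[i]) {
                    universal = false;
                    break;
                }
            }
            if (universal) {
                result.add(word);
            }
        }
        return result;
    }

    // Counts lowercase letter frequencies in a word.
    private static int[] count(String word) {
        int[] freq = new int[26];
        for (char c : word.toCharArray()) {
            freq[c - 'a']++;
        }
        return freq;
    }

    // Example usage, mirroring the commented C++ main above.
    public static void main(String[] args) {
        String[] words1 = {"amazon", "apple", "facebook", "google", "leetcode"};
        String[] words2 = {"e", "o"};
        System.out.println(wordSubsets(words1, words2)); // [facebook, google, leetcode]
    }
}

Indexing the arrays by c - 'a' avoids hashing overhead and keeps the per-word check to a constant 26 comparisons.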