In this approach, we compute the frequency of each word in both sentences. We then check for words that appear exactly once in one sentence and do not appear in the other sentence. This can be efficiently achieved by using hash maps (or dictionaries) to store these frequencies.
Time Complexity: O(n + m), where n and m are the lengths of s1 and s2 respectively, due to splitting and counting.
Space Complexity: O(n + m) for the two hash maps.
function uncommonFromSentences(s1, s2) {
    // Build a word -> frequency map for a single sentence.
    const countWords = (sentence) => {
        const words = sentence.split(' ');
        const map = {};
        for (const word of words) {
            map[word] = (map[word] || 0) + 1;
        }
        return map;
    };

    const count1 = countWords(s1);
    const count2 = countWords(s2);
    const result = [];

    // Keep words that appear exactly once in one sentence and never in the other.
    for (const word in count1) {
        if (count1[word] === 1 && !count2[word]) {
            result.push(word);
        }
    }
    for (const word in count2) {
        if (count2[word] === 1 && !count1[word]) {
            result.push(word);
        }
    }

    return result;
}
We define a helper function countWords to count the occurrences of words in a sentence. Using this function, we create word frequency maps for both sentences. We then find the uncommon words: those that appear exactly once in one map and not at all in the other.
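As a quick sanity check, here is a hypothetical usage example (the sentences below are made up for illustration):

// Illustrative input, not taken from the original problem.
const s1 = "apple apple banana cherry";
const s2 = "banana orange";

// "cherry" appears once in s1 and never in s2; "orange" appears once in s2 and never in s1.
// "apple" is repeated within s1, and "banana" appears in both sentences, so neither qualifies.
console.log(uncommonFromSentences(s1, s2)); // ["cherry", "orange"]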
This approach merges both sentences into a single stream of words and counts occurrences across the combined list. A word whose total count in the merged list is exactly 1 must appear once in one sentence and never in the other, so the words with count 1 are precisely the uncommon words.
Time Complexity: O(n + m), where n and m are the lengths of s1 and s2 respectively, for the merging and counting stages.
Space Complexity: O(n + m) for storing the word counts.
#include <unordered_map>
#include <vector>
#include <string>
#include <sstream>
using namespace std;

vector<string> uncommonFromSentences(string s1, string s2) {
    // Count every word across the concatenation of both sentences.
    unordered_map<string, int> wordCount;
    istringstream stream(s1 + " " + s2);
    string word;
    while (stream >> word) {
        wordCount[word]++;
    }
    // A total count of 1 means the word appears exactly once in one
    // sentence and never in the other.
    vector<string> result;
    for (auto &entry : wordCount) {
        if (entry.second == 1) {
            result.push_back(entry.first);
        }
    }
    return result;
}
We stream the concatenation of the two sentences and count each word's occurrences in a single map. Any word with a total count of 1 appears exactly once across both sentences, which is exactly the definition of an uncommon word, so we collect those words as the result.
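For readers following along in JavaScript from the first approach, a rough sketch of the same merged-counting idea might look like this (the function name uncommonFromSentencesMerged is ours, chosen to distinguish it from the earlier version):

// Sketch of the merged-counting idea in JavaScript (illustrative, not from the original article).
function uncommonFromSentencesMerged(s1, s2) {
    const wordCount = {};
    // Count every word across both sentences in a single pass.
    for (const word of (s1 + " " + s2).split(" ")) {
        wordCount[word] = (wordCount[word] || 0) + 1;
    }
    // Words with a total count of 1 are the uncommon words.
    return Object.keys(wordCount).filter((word) => wordCount[word] === 1);
}

console.log(uncommonFromSentencesMerged("apple apple banana cherry", "banana orange"));
// ["cherry", "orange"]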