In this approach, we compute the frequency of each word in both sentences. We then check for words that appear exactly once in one sentence and do not appear in the other sentence. This can be efficiently achieved by using hash maps (or dictionaries) to store these frequencies.
Time Complexity: O(n + m), where n and m are the lengths of s1 and s2 respectively, due to splitting and counting.
Space Complexity: O(n + m) for the two hash maps.
from collections import Counter

def uncommonFromSentences(s1, s2):
    # Count how often each word appears in each sentence.
    count1 = Counter(s1.split())
    count2 = Counter(s2.split())
    result = []

    # A word is uncommon if it appears exactly once in one sentence
    # and never in the other.
    for word in count1:
        if count1[word] == 1 and word not in count2:
            result.append(word)
    for word in count2:
        if count2[word] == 1 and word not in count1:
            result.append(word)

    return result
First, we split each sentence into words and count the occurrences using the Counter class from the collections module. We then iterate over both counts, check for words that meet the uncommon criteria, and collect them in a result list. For example, with s1 = "this apple is sweet" and s2 = "this apple is sour", the function returns ["sweet", "sour"].
This approach merges both sentences into a single list of words and counts occurrences in that combined list. A word whose total count is exactly one must appear once in one sentence and not at all in the other, so it is uncommon.
Time Complexity: O(n + m) for the merging and counting stages.
Space Complexity: O(n + m) for the storage of counts.
import java.util.*;
We create a map to hold word counts after merging both sentences, using split to build the combined word list. Words occurring exactly once in total are collected into the result list.
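A minimal Java sketch of this merged-count approach, assuming a LeetCode-style Solution class, single-space-separated words, and the same method name as the Python version above, could look like this:

import java.util.*;

class Solution {
    public String[] uncommonFromSentences(String s1, String s2) {
        // Merge both sentences and count each word's total occurrences.
        Map<String, Integer> count = new HashMap<>();
        for (String word : (s1 + " " + s2).split(" ")) {
            count.merge(word, 1, Integer::sum);
        }

        // A word with a total count of 1 appears exactly once in one
        // sentence and never in the other, so it is uncommon.
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Integer> entry : count.entrySet()) {
            if (entry.getValue() == 1) {
                result.add(entry.getKey());
            }
        }
        return result.toArray(new String[0]);
    }
}

Here Map.merge increments a word's count in a single call, which avoids an explicit containsKey check before updating the map.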