In this approach, we use a hash map (or dictionary) to store the frequency of each character in each word. We then maintain a shared table holding the minimum frequency of each character across all words processed so far; a character missing from any word drops to a count of zero and is therefore excluded from the result.
Time Complexity: O(N*K), where N is the number of words and K is the average word length. Space Complexity: O(1), since the frequency tables hold at most 26 entries for the lowercase alphabet and do not scale with input size.
from collections import Counter

def findCommonChars(words):
    # Seed the running intersection with the first word's counts.
    min_count = Counter(words[0])
    for word in words[1:]:
        # '&' keeps, per character, the smaller of the two counts.
        min_count &= Counter(word)
    # elements() repeats each character as many times as it survived.
    return list(min_count.elements())

if __name__ == "__main__":
    words = ["bella", "label", "roller"]
    print(findCommonChars(words))  # ['e', 'l', 'l']
In Python, the Counter class from the collections module manages the frequencies for us. The & operator intersects two Counters, keeping each character at the minimum of its two counts, so chaining the intersection across all words leaves only the characters common to every one.
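For example, intersecting the counts of the first two sample words works like this (a quick illustration, not part of the original listing):

from collections import Counter

# '&' keeps each shared character at the smaller of its two counts;
# characters absent from either word drop out entirely.
common = Counter("bella") & Counter("label")
print(common)  # counts: l -> 2, b -> 1, e -> 1, a -> 1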
We can alternatively represent frequencies with plain character arrays and update them as each word is processed: start from the first word's frequencies, then iteratively take the element-wise minimum with each remaining word, as sketched below.
Time Complexity: O(N*K). Space Complexity: O(1) for the constant-sized (26-element) arrays.
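A minimal sketch of this array-based version, assuming the inputs contain only lowercase English letters (the function name findCommonCharsArray is illustrative):

def findCommonCharsArray(words):
    # Seed the running minimum with the first word's letter frequencies.
    min_freq = [0] * 26
    for ch in words[0]:
        min_freq[ord(ch) - ord('a')] += 1

    for word in words[1:]:
        # Count the current word, then lower each slot to the smaller value.
        freq = [0] * 26
        for ch in word:
            freq[ord(ch) - ord('a')] += 1
        for i in range(26):
            min_freq[i] = min(min_freq[i], freq[i])

    # Expand the surviving counts back into individual characters.
    result = []
    for i, count in enumerate(min_freq):
        result.extend(chr(ord('a') + i) * count)
    return result

if __name__ == "__main__":
    print(findCommonCharsArray(["bella", "label", "roller"]))  # ['e', 'l', 'l']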
Here, the Python implementation uses fixed-size lists to maintain and update the frequency counts, taking the minimum across all word frequencies directly within the predictable 26-letter character space.