This approach uses a hash map to count the frequency of each word in the paragraph after lowercasing it and stripping punctuation. The word with the highest count that is not in the banned list is then returned as the result.
Time Complexity: O(N + M), where N is the length of the paragraph and M is the number of banned words. Space Complexity: O(N) for storing word frequencies.
import java.util.*;

public class Main {
    public static String mostCommonWord(String paragraph, String[] banned) {
        Set<String> bannedSet = new HashSet<>(Arrays.asList(banned));
        Map<String, Integer> wordFreq = new HashMap<>();

        // Replace every non-letter with a space, lowercase, then split on whitespace.
        String[] words = paragraph.replaceAll("[^a-zA-Z ]", " ").toLowerCase().split("\\s+");

        String result = "";
        int maxCount = 0;

        for (String word : words) {
            // Skip empty tokens (possible if the paragraph starts with punctuation) and banned words.
            if (!word.isEmpty() && !bannedSet.contains(word)) {
                wordFreq.put(word, wordFreq.getOrDefault(word, 0) + 1);
                if (wordFreq.get(word) > maxCount) {
                    maxCount = wordFreq.get(word);
                    result = word;
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        String paragraph = "Bob hit a ball, the hit BALL flew far after it was hit.";
        String[] banned = {"hit"};
        System.out.println(mostCommonWord(paragraph, banned)); // ball
    }
}
This Java solution uses a Set to store the banned words and a HashMap to count word frequencies. The paragraph is sanitized by replacing non-alphabetic characters with spaces and lowercasing it, then split into words; a word's count is updated only if it is not banned, and the most frequent word seen so far is tracked and returned.
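As a quick illustration of that sanitization step, the minimal sketch below (the class name NormalizeDemo is just for this example) runs the same normalization as mostCommonWord on the sample paragraph and prints the clean lowercase tokens that the counting loop iterates over:

import java.util.Arrays;

public class NormalizeDemo {
    public static void main(String[] args) {
        String paragraph = "Bob hit a ball, the hit BALL flew far after it was hit.";
        // Same normalization as in mostCommonWord: punctuation becomes spaces, the text is lowercased,
        // and consecutive whitespace collapses during the split.
        String[] words = paragraph.replaceAll("[^a-zA-Z ]", " ").toLowerCase().split("\\s+");
        System.out.println(Arrays.toString(words));
        // Prints: [bob, hit, a, ball, the, hit, ball, flew, far, after, it, was, hit]
    }
}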
This approach leverages the string manipulation and collection utilities built into the language for concise parsing and counting: words are extracted with a regular expression, normalized to lowercase, tallied in a dictionary, and the most frequent non-banned word is selected in a single aggregation pass.
Time Complexity: O(N + M), where N is the length of the paragraph and M is the number of banned words; the aggregation over the unique words is linear. Space Complexity: O(N).
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

class Program {
    public static string MostCommonWord(string paragraph, string[] banned) {
        var bannedSet = new HashSet<string>(banned);
        var wordCounts = new Dictionary<string, int>();

        // Extract lowercase words with a word-boundary regex
        // (verbatim string so \b and \w are not treated as escape sequences).
        var words = Regex.Matches(paragraph.ToLower(), @"\b\w+\b");

        foreach (Match match in words) {
            var word = match.Value;
            if (!bannedSet.Contains(word)) {
                if (!wordCounts.ContainsKey(word)) {
                    wordCounts[word] = 0;
                }
                wordCounts[word]++;
            }
        }

        // Single pass over the counts, keeping the entry with the larger value.
        // Assumes at least one non-banned word exists, as the problem guarantees.
        return wordCounts.Aggregate((a, b) => a.Value > b.Value ? a : b).Key;
    }

    static void Main() {
        string paragraph = "Bob hit a ball, the hit BALL flew far after it was hit.";
        string[] banned = {"hit"};
        Console.WriteLine(MostCommonWord(paragraph, banned)); // ball
    }
}
The C# solution uses a Regex to extract words, a Dictionary to count the non-banned ones, and LINQ's Aggregate, which scans the counts once and keeps the entry with the larger value, to return the most common non-banned word.