Given two strings first and second, consider occurrences in some text of the form "first second third", where second comes immediately after first, and third comes immediately after second.
Return an array of all the words third for each occurrence of "first second third".
Example 1:
Input: text = "alice is a good girl she is a good student", first = "a", second = "good" Output: ["girl","student"]
Example 2:
Input: text = "we will we will rock you", first = "we", second = "will" Output: ["we","rock"]
Constraints:
1 <= text.length <= 1000
text consists of lowercase English letters and spaces.
All the words in text are separated by a single space.
1 <= first.length, second.length <= 10
first and second consist of lowercase English letters.
text will not have any leading or trailing spaces.

Problem Overview: You receive a string text and two words first and second. The goal is to return every word that appears immediately after the sequence first second in the text. In other words, whenever the text contains the bigram first second, capture the next word.
Approach 1: Sliding Window (Time: O(n), Space: O(1) auxiliary, excluding the split words array and the output)
Split the sentence into an array of words, then scan it using a window of size three. At each index i, check whether words[i] == first and words[i+1] == second. If the condition holds, append words[i+2] to the result list. This works because every valid answer must appear exactly two positions after the start of the bigram. The algorithm performs a single pass over the words array, making it an efficient linear-time solution using simple iteration.
This technique is a classic pattern from string processing problems where you scan contiguous tokens with a fixed-size window. Since the algorithm only tracks a few variables while iterating, the auxiliary space remains constant.
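The fixed-window scan described above can be sketched as follows (the function and variable names are illustrative, not from any official solution):

```python
def occurrences_after_bigram(text, first, second):
    """Return every word that immediately follows the bigram 'first second'."""
    words = text.split()
    result = []
    # Check each consecutive triple words[i], words[i+1], words[i+2].
    for i in range(len(words) - 2):
        if words[i] == first and words[i + 1] == second:
            result.append(words[i + 2])
    return result
```

For Example 2, `occurrences_after_bigram("we will we will rock you", "we", "will")` returns `["we", "rock"]`: overlapping matches are handled naturally because the window advances one word at a time.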
Approach 2: Index Mapping with HashMap (Time: O(n), Space: O(n))
Another approach builds an index structure that maps each word pair to the list of words that follow it. Iterate through the words array and construct a key such as (words[i], words[i+1]). Store words[i+2] in a map entry for that pair. After preprocessing, retrieving the answer becomes a single hash lookup for the pair (first, second).
This approach uses a hash map to precompute relationships between adjacent words. It becomes useful if the same text is queried multiple times with different bigrams because the preprocessing step allows constant-time lookups afterward. The tradeoff is higher memory usage since all bigram relationships are stored.
Conceptually, this technique resembles building adjacency relationships in string or token streams. Each pair of words acts as a key that points to possible next tokens.
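A minimal sketch of this pair-to-successors index, assuming Python and illustrative names:

```python
from collections import defaultdict

def build_bigram_index(text):
    """Map each adjacent word pair to the list of words that follow it."""
    words = text.split()
    index = defaultdict(list)
    for i in range(len(words) - 2):
        index[(words[i], words[i + 1])].append(words[i + 2])
    return index

def query_bigram(index, first, second):
    # After preprocessing, each query is a single hash lookup.
    return index.get((first, second), [])
```

The O(n) preprocessing pays off only when the same text is queried repeatedly; for a one-off query the sliding window is simpler and uses less memory.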
Recommended for interviews: The sliding window solution is the expected answer. It is simple, readable, and runs in O(n) time with minimal space. Interviewers want to see that you recognize the pattern of scanning adjacent words and checking a fixed window. Mentioning the hash map indexing idea demonstrates deeper thinking about preprocessing and repeated queries, but the sliding window implementation is the most practical for a single query.
This approach scans the list of words with a simple loop, checking each consecutive triplet. If a triplet matches the pattern 'first second third', the third word is added to the result list.

We first split the text into words, then iterate over the list, examining each group of three consecutive words. If the first two match the given 'first' and 'second', we append the third word to the result.
Time Complexity: O(n), where n is the number of words in the text.
Space Complexity: O(n) for storing the words list.
This method involves creating a mapping of all occurrences of each word in the text, storing their indices in a hashmap. This way, comparisons can be done only when needed, reducing unnecessary checks.
First, we split the text and map each word to the list of its indices. By iterating over the indices of 'first', we can look ahead to check for 'second' and collect the word after it. This restricts comparisons to positions where 'first' actually occurs.
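A sketch of this index-based variant (a hypothetical implementation, assuming Python):

```python
from collections import defaultdict

def occurrences_by_index(text, first, second):
    """Find words after 'first second' via the recorded positions of 'first'."""
    words = text.split()
    positions = defaultdict(list)
    for i, word in enumerate(words):
        positions[word].append(i)
    result = []
    # Only positions where 'first' occurs need to be checked.
    for i in positions[first]:
        if i + 2 < len(words) and words[i + 1] == second:
            result.append(words[i + 2])
    return result
```

If 'first' is rare in the text, this skips most of the words; in the worst case (every word is 'first') it still performs O(n) work.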
Time Complexity: O(n), where n is the number of words.
Space Complexity: O(n), due to the hashmap storing word indices.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Sliding Window | O(n) | O(1) | Best for a single query on the text. Simple scan with minimal memory. |
| Index Mapping with HashMap | O(n) | O(n) | Useful when the same text is queried many times with different bigrams. |