You are building a string s of length n one character at a time, prepending each new character to the front of the string. The built strings are labeled from 1 to n, where the string of length i is labeled si.
For example, if s = "abaca", then s1 == "a", s2 == "ca", s3 == "aca", and so on. The score of si is the length of the longest common prefix between si and sn (note that s == sn).
Given the final string s, return the sum of the score of every si.
Example 1:
Input: s = "babab"
Output: 9
Explanation:
For s1 == "b", the longest common prefix is "b" which has a score of 1.
For s2 == "ab", there is no common prefix so the score is 0.
For s3 == "bab", the longest common prefix is "bab" which has a score of 3.
For s4 == "abab", there is no common prefix so the score is 0.
For s5 == "babab", the longest common prefix is "babab" which has a score of 5.
The sum of the scores is 1 + 0 + 3 + 0 + 5 = 9, so we return 9.
Example 2:
Input: s = "azbazbzaz"
Output: 14
Explanation:
For s2 == "az", the longest common prefix is "az" which has a score of 2.
For s6 == "azbzaz", the longest common prefix is "azb" which has a score of 3.
For s9 == "azbazbzaz", the longest common prefix is "azbazbzaz" which has a score of 9.
For all other si, the score is 0.
The sum of the scores is 2 + 3 + 9 = 14, so we return 14.
Constraints:
1 <= s.length <= 10^5
s consists of lowercase English letters.

In #2223 Sum of Scores of Built Strings, the goal is to compute the total score of all strings formed by repeatedly prepending characters of the original string. Because each si is simply a suffix of s, the score of si equals the length of the longest common prefix between that suffix and s itself. A brute-force comparison for every suffix takes O(n^2) time in the worst case, which is too slow for large inputs.
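As a baseline, here is a minimal sketch of the brute-force approach: for every suffix, extend a character-by-character match against the prefix of s. The function name is illustrative, not from the source.

```python
def sum_scores_brute(s: str) -> int:
    n = len(s)
    total = 0
    for i in range(n):  # suffix s[i:] corresponds to the built string of length n - i
        j = 0
        # extend the common prefix with s character by character
        while i + j < n and s[i + j] == s[j]:
            j += 1
        total += j
    return total

print(sum_scores_brute("babab"))      # 9
print(sum_scores_brute("azbazbzaz"))  # 14
```

This is O(n^2) in the worst case (e.g. s = "aaaa...a"), which motivates the faster techniques below.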
An optimized approach relies on efficient string matching techniques. One common strategy is to compute prefix matches for every suffix using algorithms similar to the Z-algorithm or by combining binary search with rolling hash. Rolling hash allows constant-time substring comparison after preprocessing, while binary search helps find the longest matching prefix length for each suffix.
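To make the rolling-hash idea concrete, here is a sketch using a polynomial hash with a single large prime modulus; for each suffix, binary search finds the longest length at which its hash equals the hash of the corresponding prefix. The constants (base 131, Mersenne-prime modulus) and function names are illustrative choices, not from the source.

```python
def sum_scores_hash(s: str) -> int:
    MOD = (1 << 61) - 1  # large prime modulus to make collisions unlikely
    BASE = 131
    n = len(s)
    # h[i] = hash of the prefix s[:i]; p[i] = BASE^i mod MOD
    h = [0] * (n + 1)
    p = [1] * (n + 1)
    for i, c in enumerate(s):
        h[i + 1] = (h[i] * BASE + ord(c)) % MOD
        p[i + 1] = p[i] * BASE % MOD

    def sub_hash(l: int, r: int) -> int:
        """Hash of s[l:r] in O(1) after preprocessing."""
        return (h[r] - h[l] * p[r - l]) % MOD

    total = 0
    for i in range(n):
        # binary search the largest length m with s[:m] == s[i:i+m]
        lo, hi = 0, n - i
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if sub_hash(0, mid) == sub_hash(i, i + mid):
                lo = mid
            else:
                hi = mid - 1
        total += lo
    return total

print(sum_scores_hash("babab"))  # 9
```

Each suffix costs O(log n) hash comparisons, for O(n log n) total. Note that hash equality is probabilistic; a double hash (two moduli) further reduces the collision risk if needed.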
Alternatively, advanced structures such as a suffix array with LCP information, or the prefix function from the KMP algorithm, can also compute longest common prefixes efficiently. These techniques reduce repeated comparisons and enable near-linear performance.
The overall goal is to reuse computed prefix information so that the total score can be calculated efficiently across all suffixes.
| Approach | Time Complexity | Space Complexity |
|---|---|---|
| Brute Force Prefix Comparison | O(n^2) | O(1) |
| Rolling Hash + Binary Search | O(n log n) | O(n) |
| Z-Algorithm / Prefix Matching | O(n) | O(n) |
| Suffix Array with LCP | O(n log n) | O(n) |
Use these hints if you're stuck. Try solving on your own first.
Each s_i is a suffix of the string s, so consider algorithms that can determine the longest prefix that is also a suffix.
Could you use the Z array from the Z algorithm to find the score of each s_i?
Rolling hash enables constant-time substring comparison after preprocessing. When combined with binary search, it helps determine the maximum prefix length that matches between two substrings. This approach improves performance compared to direct character-by-character comparison.
Yes, problems involving string matching, prefix computation, and hashing are common in FAANG-level interviews. #2223 Sum of Scores of Built Strings tests advanced string algorithms and optimization techniques. Practicing it helps strengthen understanding of Z-algorithm, hashing, and suffix-based methods.
The optimal approach typically uses a prefix-matching technique such as the Z-algorithm. It calculates the longest common prefix between the original string and each suffix in linear time. This avoids repeated comparisons and reduces the overall complexity to O(n).
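Following that idea, here is a minimal sketch of the Z-algorithm solution: z[i] is the length of the longest common prefix of s and s[i:], so the answer is simply the sum of the Z-array (with z[0] set to n for the full string). The function name is illustrative.

```python
def sum_scores(s: str) -> int:
    n = len(s)
    z = [0] * n
    z[0] = n  # the whole string trivially matches itself
    l = r = 0  # [l, r) is the rightmost segment known to match a prefix of s
    for i in range(1, n):
        if i < r:
            # reuse previously computed match information inside the Z-box
            z[i] = min(r - i, z[i - l])
        # extend the match by direct comparison where nothing is known yet
        while i + z[i] < n and s[z[i]] == s[i + z[i]]:
            z[i] += 1
        if i + z[i] > r:
            l, r = i, i + z[i]
    return sum(z)

print(sum_scores("babab"))      # 9
print(sum_scores("azbazbzaz"))  # 14
```

Each character is compared successfully at most once across the whole loop, which is what gives the O(n) time and O(n) space bounds listed in the table above.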
Efficient string-processing structures such as the Z-array, rolling hash tables, or suffix arrays work well for this problem. These tools allow fast substring comparisons and prefix matching. They significantly reduce the cost of checking each suffix against the original string.