There is a country of n cities numbered from 0 to n - 1. In this country, there is a road connecting every pair of cities.
There are m friends numbered from 0 to m - 1 who are traveling through the country. Each one of them will take a path consisting of some cities. Each path is represented by an integer array that contains the visited cities in order. The path may contain a city more than once, but the same city will not be listed consecutively.
Given an integer n and a 2D integer array paths where paths[i] is an integer array representing the path of the ith friend, return the length of the longest common subpath that is shared by every friend's path, or 0 if there is no common subpath at all.
A subpath of a path is a contiguous sequence of cities within that path.
Example 1:
Input: n = 5, paths = [[0,1,2,3,4],
[2,3,4],
[4,0,1,2,3]]
Output: 2
Explanation: The longest common subpath is [2,3].
Example 2:
Input: n = 3, paths = [[0],[1],[2]]
Output: 0
Explanation: There is no common subpath shared by the three paths.
Example 3:
Input: n = 5, paths = [[0,1,2,3,4],
[4,3,2,1,0]]
Output: 1
Explanation: The possible longest common subpaths are [0], [1], [2], [3], and [4]. All have a length of 1.
Constraints:
1 <= n <= 10^5
m == paths.length
2 <= m <= 10^5
sum(paths[i].length) <= 10^5
0 <= paths[i][j] < n
paths[i] does not list the same city twice in a row.

Problem Overview: You receive several paths representing sequences of cities visited by different travelers. Each path is an array of integers. The task is to find the maximum length of a subpath (contiguous segment) that appears in every path.
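To pin down what "longest common subpath" means before optimizing, here is a direct brute-force reference in Python (the function name is my own, and this is far too slow for the stated constraints; it only serves to make the definition concrete):

```python
def longest_common_subpath_brute(paths):
    """Try every subpath of the shortest path, longest first,
    and test whether it occurs contiguously in every other path."""
    base = min(paths, key=len)
    others = [p for p in paths if p is not base]
    for length in range(len(base), 0, -1):
        for start in range(len(base) - length + 1):
            cand = base[start:start + length]
            if all(any(p[i:i + length] == cand
                       for i in range(len(p) - length + 1))
                   for p in others):
                return length
    return 0

print(longest_common_subpath_brute([[0,1,2,3,4], [2,3,4], [4,0,1,2,3]]))  # 2
```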
Approach 1: Using Hash Map for Efficient Lookup (Binary Search + Rolling Hash) (Time: O(N log N), Space: O(N))
This approach combines binary search with rolling hashing. Instead of searching directly for the longest subpath, binary search on the answer length L. For each candidate length, compute rolling hashes for all subpaths of length L in every path. Store the hashes from the first path in a hash set, then intersect that set with the hashes from each subsequent path. If a common hash survives across all paths, length L is achievable. The rolling hash updates in constant time as the window slides, so each feasibility check costs O(N) overall. Because distinct subpaths can collide to the same hash value, careful implementations use a large prime modulus (and sometimes double hashing) to keep false positives negligible.
The key insight: checking whether a subpath of length L exists in all paths is easier than directly finding the maximum length. Binary search reduces the search space from all possible lengths to log N checks. Hash-based matching avoids expensive array comparisons. This technique is widely used in substring and sequence matching problems involving large datasets.
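A sketch of the binary search + rolling hash approach in Python (function names and the specific modulus/base choices are mine; a single 61-bit hash is assumed to be collision-free here, which holds with overwhelming probability but is not a formal guarantee):

```python
import random

MOD = (1 << 61) - 1                   # large Mersenne prime modulus
BASE = random.randrange(10**5, MOD)   # random base resists adversarial collisions

def has_common_subpath(paths, length):
    """True if every path contains some subpath of `length` with a shared
    rolling hash (hash collisions assumed negligible)."""
    if length == 0:
        return True
    common = None
    pow_l = pow(BASE, length, MOD)
    for path in paths:
        if len(path) < length:
            return False
        seen, h = set(), 0
        for i, city in enumerate(path):
            h = (h * BASE + city) % MOD          # push new city into the window
            if i >= length:
                h = (h - path[i - length] * pow_l) % MOD  # drop the oldest city
            if i >= length - 1:
                seen.add(h)
        common = seen if common is None else (common & seen)
        if not common:
            return False
    return True

def longest_common_subpath(n, paths):
    lo, hi = 0, min(len(p) for p in paths)
    while lo < hi:                    # binary search on the answer length
        mid = (lo + hi + 1) // 2
        if has_common_subpath(paths, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

print(longest_common_subpath(5, [[0,1,2,3,4], [2,3,4], [4,0,1,2,3]]))  # 2
```

The binary search works because feasibility is monotone: if a common subpath of length L exists, one of every shorter length exists too (any prefix of it).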
Approach 2: Using Sorting and Two-Pointer Technique (Suffix Array) (Time: O(N log N), Space: O(N))
This method builds a suffix array over the combined paths. Each suffix represents a starting point in one of the paths. After constructing and sorting suffixes lexicographically, compute the LCP (Longest Common Prefix) values between adjacent suffixes. A sliding window with two pointers scans the sorted suffix array while ensuring the window contains suffixes from all paths. The minimum LCP value within that window represents the longest shared prefix among those suffixes.
Sorting suffixes groups similar prefixes together, which makes shared subpaths easy to detect. The two-pointer window ensures that every path contributes at least one suffix to the current candidate group. The maximum valid LCP across such windows gives the final answer.
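The suffix-array idea can be sketched in Python as follows. This is an illustrative version, not the O(N log N) one: suffixes are sorted naively and LCPs are computed directly, which is fine for small inputs (a production version would use linear-time suffix construction, Kasai's LCP algorithm, and a sparse table for window minima; all names below are my own):

```python
from collections import Counter

def longest_common_subpath_sa(n, paths):
    # Concatenate paths, separated by unique sentinels >= n, so that no
    # common prefix of two suffixes can ever cross a path boundary.
    seq, owner = [], []
    for idx, p in enumerate(paths):
        for c in p:
            seq.append(c)
            owner.append(idx)
        seq.append(n + idx)           # unique separator for this path
        owner.append(-1)
    m = len(paths)

    # Naive suffix array: sort suffix start positions lexicographically.
    sa = sorted(range(len(seq)), key=lambda i: seq[i:])

    def lcp(i, j):                    # direct longest-common-prefix of two suffixes
        k = 0
        while i + k < len(seq) and j + k < len(seq) and seq[i + k] == seq[j + k]:
            k += 1
        return k

    adj = [lcp(sa[t - 1], sa[t]) for t in range(1, len(sa))]

    # Slide a window over the sorted suffixes; once it holds suffixes from
    # all m paths, the minimum adjacent LCP inside it is a common subpath length.
    best, left, covered = 0, 0, 0
    count = Counter()
    for right in range(len(sa)):
        o = owner[sa[right]]
        if o >= 0:
            count[o] += 1
            if count[o] == 1:
                covered += 1
        while covered == m:
            if right > left:
                best = max(best, min(adj[left:right]))
            o = owner[sa[left]]
            if o >= 0:
                count[o] -= 1
                if count[o] == 0:
                    covered -= 1
            left += 1
    return best

print(longest_common_subpath_sa(5, [[0,1,2,3,4], [2,3,4], [4,0,1,2,3]]))  # 2
```

The sentinels matter: because each separator value is unique and never appears inside any path, a shared prefix of two suffixes always stops before a path boundary, so every window minimum corresponds to a genuine subpath present in all covered paths.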
Recommended for interviews: The binary search + rolling hash solution is the most common interview answer. It demonstrates understanding of hash functions, sliding window hashing, and decision-based binary search. The suffix array method is more advanced and rarely required unless the interviewer specifically asks about suffix structures or large-scale string indexing.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Binary Search + Rolling Hash (Hash Map Lookup) | O(N log N) | O(N) | Best general solution for interviews and competitive programming |
| Suffix Array + Sorting + Two Pointers | O(N log N) | O(N) | When working with large sequence comparisons or advanced string processing |
LeetCode 1923. Longest Common Subpath