Watch 10 video solutions for Find Duplicate File in System, a medium-level problem involving Array, Hash Table, String. This walkthrough by Algorithms Made Easy has 4,073 views. Want to try solving it yourself? Practice on LeetCode or read the detailed text solution.
Given a list paths of directory info, including the directory path, and all the files with contents in this directory, return all the duplicate files in the file system in terms of their paths. You may return the answer in any order.
A group of duplicate files consists of at least two files that have the same content.
A single directory info string in the input list has the following format:
"root/d1/d2/.../dm f1.txt(f1_content) f2.txt(f2_content) ... fn.txt(fn_content)"

It means there are n files (f1.txt, f2.txt ... fn.txt) with content (f1_content, f2_content ... fn_content) respectively in the directory "root/d1/d2/.../dm". Note that n >= 1 and m >= 0. If m = 0, it means the directory is just the root directory.
The output is a list of groups of duplicate file paths. For each group, it contains all the file paths of the files that have the same content. A file path is a string that has the following format:
"directory_path/file_name.txt"
Example 1:
Input: paths = ["root/a 1.txt(abcd) 2.txt(efgh)","root/c 3.txt(abcd)","root/c/d 4.txt(efgh)","root 4.txt(efgh)"] Output: [["root/a/2.txt","root/c/d/4.txt","root/4.txt"],["root/a/1.txt","root/c/3.txt"]]
Example 2:
Input: paths = ["root/a 1.txt(abcd) 2.txt(efgh)","root/c 3.txt(abcd)","root/c/d 4.txt(efgh)"] Output: [["root/a/2.txt","root/c/d/4.txt"],["root/a/1.txt","root/c/3.txt"]]
Constraints:
1 <= paths.length <= 2 * 10^4
1 <= paths[i].length <= 3000
1 <= sum(paths[i].length) <= 5 * 10^5
paths[i] consist of English letters, digits, '/', '.', '(', ')', and ' '.
Problem Overview: Each input string represents a directory followed by multiple files and their contents, formatted like "root/a 1.txt(abcd) 2.txt(efgh)". You need to identify files with identical content and return their full paths grouped together. Only groups with more than one file count as duplicates.
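To make the input format concrete, one entry can be split on spaces and parentheses. A minimal parsing sketch (variable names are illustrative, not from the original):

```python
# Parse one directory-info string into (full_path, content) pairs.
entry = "root/a 1.txt(abcd) 2.txt(efgh)"
directory, *files = entry.split(" ")

parsed = []
for file_info in files:
    # "1.txt(abcd)" -> name "1.txt", content "abcd"
    name, _, content = file_info.partition("(")
    parsed.append((f"{directory}/{name}", content.rstrip(")")))

print(parsed)  # [('root/a/1.txt', 'abcd'), ('root/a/2.txt', 'efgh')]
```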
Approach 1: Using a HashMap / Dictionary (O(n * L) time, O(n * L) space)
This approach groups files by their content using a hash map. Iterate through each directory string, split it by spaces to separate the directory path from its files, and parse each file entry to extract the file name and the content inside parentheses. Build the full path using the directory and file name, then insert it into a map where the key is the file content and the value is a list of file paths. Because hash lookups are O(1) on average, grouping happens efficiently while scanning the input once.
After processing all entries, iterate through the map and collect only the lists whose size is greater than one. Those represent files that share identical content. The main cost comes from parsing strings and storing them, giving roughly O(n * L) time where L is the average length of each path string. This method relies heavily on fast hashing and string processing, making it a natural application of a hash table combined with careful string parsing.
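The hash map approach can be sketched as follows in Python; the function name and structure are one possible implementation of the grouping described above, not the only one:

```python
from collections import defaultdict

def find_duplicate(paths):
    """Group file paths by content; return only groups with 2+ files."""
    groups = defaultdict(list)  # content -> list of full paths
    for entry in paths:
        directory, *files = entry.split(" ")
        for file_info in files:
            # Split "name.txt(content)" at the opening parenthesis.
            name, _, content = file_info.partition("(")
            groups[content.rstrip(")")].append(f"{directory}/{name}")
    # Keep only contents shared by at least two files.
    return [group for group in groups.values() if len(group) > 1]

paths = ["root/a 1.txt(abcd) 2.txt(efgh)",
         "root/c 3.txt(abcd)",
         "root/c/d 4.txt(efgh)"]
print(find_duplicate(paths))
```

On Example 2 this yields the groups ["root/a/1.txt", "root/c/3.txt"] and ["root/a/2.txt", "root/c/d/4.txt"]; the problem accepts any order.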
Approach 2: Direct String Manipulation with 2D Array Comparison (O(n² * L) time, O(n * L) space)
Another way is to first parse all directory strings and store each file as a pair: [fullPath, content] inside a 2D array. Once every file entry is extracted, compare each file's content with every other file using nested loops. When two files share identical content, place their paths into the same duplicate group.
This approach uses only arrays and direct comparisons instead of hashing. The downside is the quadratic comparison step: for n files, the algorithm performs up to n² content checks. Each comparison may scan the entire string content, making the total cost roughly O(n² * L). While simpler conceptually, it becomes slow for large datasets. It mainly demonstrates how duplicate detection works without additional data structures.
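A sketch of the brute-force variant, under the same parsing assumptions as above (the `used` array is an illustrative detail to avoid placing a file into two groups):

```python
def find_duplicate_bruteforce(paths):
    # Step 1: flatten every entry into [full_path, content] pairs.
    files = []
    for entry in paths:
        directory, *infos = entry.split(" ")
        for info in infos:
            name, _, content = info.partition("(")
            files.append([f"{directory}/{name}", content.rstrip(")")])

    # Step 2: nested-loop comparison of contents (O(n^2) pairs).
    result = []
    used = [False] * len(files)
    for i in range(len(files)):
        if used[i]:
            continue
        group = [files[i][0]]
        for j in range(i + 1, len(files)):
            if not used[j] and files[j][1] == files[i][1]:
                group.append(files[j][0])
                used[j] = True
        if len(group) > 1:
            result.append(group)
    return result
```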
Recommended for interviews: The hash map grouping approach is the expected solution. It demonstrates efficient parsing, use of a dictionary for grouping, and linear scanning of the input. Interviewers want to see that you immediately map file content to paths instead of comparing every pair. The brute-force comparison still helps show baseline reasoning, but the hash-based grouping proves you understand how to reduce quadratic work to near-linear time.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| HashMap / Dictionary Grouping | O(n * L) | O(n * L) | General case; fastest way to group files by identical content |
| Direct String Comparison with 2D Array | O(n² * L) | O(n * L) | Educational baseline when avoiding hash tables or demonstrating brute-force grouping |