Given a list paths of directory info, including the directory path, and all the files with contents in this directory, return all the duplicate files in the file system in terms of their paths. You may return the answer in any order.
A group of duplicate files consists of at least two files that have the same content.
A single directory info string in the input list has the following format:
"root/d1/d2/.../dm f1.txt(f1_content) f2.txt(f2_content) ... fn.txt(fn_content)"

It means there are n files (f1.txt, f2.txt ... fn.txt) with content (f1_content, f2_content ... fn_content) respectively in the directory "root/d1/d2/.../dm". Note that n >= 1 and m >= 0. If m = 0, it means the directory is just the root directory.
The output is a list of groups of duplicate file paths. For each group, it contains all the file paths of the files that have the same content. A file path is a string that has the following format:
"directory_path/file_name.txt"
Example 1:
Input: paths = ["root/a 1.txt(abcd) 2.txt(efgh)","root/c 3.txt(abcd)","root/c/d 4.txt(efgh)","root 4.txt(efgh)"]
Output: [["root/a/2.txt","root/c/d/4.txt","root/4.txt"],["root/a/1.txt","root/c/3.txt"]]
Example 2:
Input: paths = ["root/a 1.txt(abcd) 2.txt(efgh)","root/c 3.txt(abcd)","root/c/d 4.txt(efgh)"]
Output: [["root/a/2.txt","root/c/d/4.txt"],["root/a/1.txt","root/c/3.txt"]]
Constraints:
1 <= paths.length <= 2 * 10^4
1 <= paths[i].length <= 3000
1 <= sum(paths[i].length) <= 5 * 10^5
paths[i] consist of English letters, digits, '/', '.', '(', ')', and ' '.
To find duplicate files by their content, we can use a HashMap (or Dictionary). For each directory info string, parse the directory path and the files with their contents. Use the content as the key in the map and append the file's full path to its value list. After parsing all inputs, the entries whose value list contains more than one path represent groups of duplicate files.
This Python solution uses defaultdict from the collections module to store lists of file paths with the same content. We split each path string into directory and file information, then iterate over each file's name and content to record them in our dictionary. Finally, we filter the dictionary to include only entries that have more than one file path, returning them as groups of duplicates.
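A minimal sketch of this dictionary-based approach (the function name `find_duplicate` is illustrative, not part of the problem statement):

```python
from collections import defaultdict

def find_duplicate(paths):
    # Map file content -> list of full paths whose file has that content
    content_to_paths = defaultdict(list)
    for entry in paths:
        parts = entry.split(" ")
        directory = parts[0]
        for file_info in parts[1:]:
            # "name.txt(content)" -> split at the first '(' and drop the ')'
            name, _, rest = file_info.partition("(")
            content = rest[:-1]
            content_to_paths[content].append(f"{directory}/{name}")
    # Keep only contents shared by at least two files
    return [group for group in content_to_paths.values() if len(group) > 1]
```

For Example 2 above, `find_duplicate(["root/a 1.txt(abcd) 2.txt(efgh)","root/c 3.txt(abcd)","root/c/d 4.txt(efgh)"])` groups `root/a/1.txt` with `root/c/3.txt` and `root/a/2.txt` with `root/c/d/4.txt`, in any order.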
Time Complexity: O(n), where n is the total number of characters in all file paths. We iterate over each character once.
Space Complexity: O(n) to store the lists of file paths in the dictionary.
This approach processes the string data directly and collects results in a 2D array. Each directory info string is parsed to extract the directory path, file names, and contents into standalone variables, and each full path is appended to the group matching its content. Instead of a hash map, parallel arrays aggregate identical files.

By substituting the hash map with straightforward string operations and arrays, this Python solution mimics hash-map behavior to achieve the same result. It uses slicing to extract filenames and contents, and builds the result by locating each content's group through array indexing.
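A minimal sketch of the array-based variant (the function name `find_duplicate_arrays` is illustrative). Note that the `content in contents` membership test is a linear scan, which is what replaces the hash lookup:

```python
def find_duplicate_arrays(paths):
    contents = []  # distinct contents seen so far
    groups = []    # groups[i] holds the paths whose content equals contents[i]
    for entry in paths:
        parts = entry.split(" ")
        directory = parts[0]
        for file_info in parts[1:]:
            # Slice "name.txt(content)" around the first '('
            open_paren = file_info.index("(")
            name = file_info[:open_paren]
            content = file_info[open_paren + 1:-1]
            full_path = directory + "/" + name
            if content in contents:
                # Linear scan stands in for the hash lookup
                groups[contents.index(content)].append(full_path)
            else:
                contents.append(content)
                groups.append([full_path])
    return [g for g in groups if len(g) > 1]
```

The design trade-off: the array scan avoids a hash map but costs a pass over the distinct contents per file, so in the worst case it is slower than the dictionary approach.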
Time Complexity: O(n) for parsing, where n is the total input character count; each content lookup additionally scans the array of distinct contents, so the worst case is O(n * d), where d is the number of distinct contents.
Space Complexity: O(n), maintaining the paths and intermediate arrays.
| Approach | Time Complexity | Space Complexity |
|---|---|---|
| Using a HashMap or Dictionary | O(n), where n is the total number of characters in all file paths; we iterate over each character once. | O(n) |
| Using Direct String Manipulation and 2D Array | O(n) parsing, one-pass evaluation over the total input character count. | O(n) |