
To find duplicate files by their content, we can use a HashMap (or Dictionary). For each directory info string, parse the directory path and the files with their contents. Use the content as the key in the map and store the full path of the file in its value list. After parsing all inputs, the map keys with more than one associated path represent duplicate files.
Time Complexity: O(n), where n is the total number of characters in all file paths. We iterate over each character once.
Space Complexity: O(n) to store the lists of file paths in the dictionary.
function findDuplicate(paths) {
  const contentMap = {};
  for (const path of paths) {
    const [root, ...fileData] = path.split(' ');
    for (const fileInfo of fileData) {
      const separatorIndex = fileInfo.indexOf('(');
      const name = fileInfo.substring(0, separatorIndex);
      const content = fileInfo.substring(separatorIndex + 1, fileInfo.length - 1);
      if (!contentMap[content]) {
        contentMap[content] = [];
      }
      contentMap[content].push(`${root}/${name}`);
    }
  }
  return Object.values(contentMap).filter(group => group.length > 1);
}

// Example Usage:
const paths = ["root/a 1.txt(abcd) 2.txt(efgh)","root/c 3.txt(abcd)","root/c/d 4.txt(efgh)"];
console.log(findDuplicate(paths));

This JavaScript solution uses a plain object to map file contents to the file paths that share them. Each path string is split to handle the directory and the file entries separately. We extract file names and contents from their name(content) encoding and append the full path to the map under the key matching the file content. Finally, the groups are filtered so that only those identifying duplicates (more than one path) are returned.
This approach processes the string data directly and collects the results into a 2D array (a vector of vectors). A string stream splits each input into the directory and its file entries, index-based substring operations extract file names and contents, and full paths are appended to a growing map keyed by content before the duplicate groups are gathered.
Time Complexity: O(n), where n is the total number of characters in the input; each character is processed once in a single pass.
Space Complexity: O(n), maintaining paths and intermediate arrays.
#include <vector>
#include <string>
#include <unordered_map>
#include <sstream>
using namespace std;
vector<vector<string>> findDuplicateWithArrays(const vector<string>& paths) {
    unordered_map<string, vector<string>> contentMap;
    for (const string& path : paths) {
        istringstream ss(path);
        string root;
        getline(ss, root, ' ');
        string file;
        while (getline(ss, file, ' ')) {
            size_t openBracket = file.find('(');
            string name = file.substr(0, openBracket);
            string content = file.substr(openBracket + 1, file.length() - openBracket - 2);
            contentMap[content].emplace_back(root + "/" + name);
        }
    }
    vector<vector<string>> result;
    for (const auto& elem : contentMap) {
        if (elem.second.size() > 1) {
            result.push_back(elem.second);
        }
    }
    return result;
}
}

This C++ code follows a direct string-indexing strategy. Names and contents are parsed through index-based operations that locate the delimiter characters, quickly building the full path strings. The collected data is then filtered and returned in the same way as in the hash-map approach.