
To find duplicate files by their content, we can use a HashMap (or Dictionary). For each directory info string, parse the directory path and the files with their contents. Use each file's content as the key in the map and append the file's full path to the list stored as its value. After parsing all inputs, the keys whose lists contain more than one path represent groups of duplicate files. For example, given "root/a 1.txt(abcd) 2.txt(abcd)", both root/a/1.txt and root/a/2.txt end up under the key "abcd".
Time Complexity: O(n), where n is the total number of characters across all input strings. Each character is processed once.
Space Complexity: O(n) to store the lists of file paths in the map.
#include <vector>
#include <string>
#include <unordered_map>
#include <sstream>

using namespace std;

vector<vector<string>> findDuplicate(vector<string>& paths) {
    unordered_map<string, vector<string>> contentMap;  // content -> full file paths
    for (const auto& path : paths) {
        stringstream ss(path);
        string root;
        getline(ss, root, ' ');  // first token is the directory path
        string file;
        while (getline(ss, file, ' ')) {  // remaining tokens look like "name(content)"
            auto pos = file.find('(');
            string name = file.substr(0, pos);
            string content = file.substr(pos + 1, file.size() - pos - 2);
            contentMap[content].push_back(root + "/" + name);
        }
    }
    vector<vector<string>> result;
    for (const auto& entry : contentMap) {
        if (entry.second.size() > 1) {  // only contents shared by two or more files
            result.push_back(entry.second);
        }
    }
    return result;
}
This C++ solution uses an unordered_map to group file paths by content. We parse each input string with a stringstream to separate the root directory from the file entries. For each file, we extract its name and content and add the full path to the map, indexed by content. Finally, we collect and return every content group that contains more than one path.
This alternative approach processes the string data directly and collects the results in plain arrays (a 2D list) instead of a hash map. Each input string is sliced to extract the directory, file names, and contents into standalone variables, and the corresponding full paths are appended to a growing list of groups, with identical contents aggregated through the arrays.
Time Complexity: O(n), where n is the total number of characters in the input, since the strings are evaluated in a single pass.
Space Complexity: O(n) to hold the paths and intermediate arrays.
def find_duplicate_with_arrays(
By replacing the hash map with straightforward string operations and lists, this Python solution mimics the map's grouping behavior and produces the same result. It uses slicing to extract file names and contents and builds the answer by tracking paths through list indices.
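The snippet above is cut off, so here is a minimal sketch of what such an array-based version could look like, assuming the same input format as the C++ solution and a single paths parameter (the parameter name is not given in the original and is assumed here). Two parallel lists, contents and groups, stand in for the hash map, which trades the constant-time lookup for a linear scan over previously seen contents.

def find_duplicate_with_arrays(paths):
    # paths: list of "dir f1(c1) f2(c2) ..." strings (parameter name assumed).
    # contents[i] holds a content string; groups[i] holds the full paths of
    # every file with that content (parallel lists instead of a hash map).
    contents = []
    groups = []
    for entry in paths:
        parts = entry.split(" ")
        root = parts[0]
        for file_info in parts[1:]:
            open_paren = file_info.index("(")
            name = file_info[:open_paren]
            content = file_info[open_paren + 1:-1]  # drop the trailing ")"
            full_path = root + "/" + name
            if content in contents:
                # Linear scan replaces the hash-map lookup.
                groups[contents.index(content)].append(full_path)
            else:
                contents.append(content)
                groups.append([full_path])
    # Only contents shared by more than one file count as duplicates.
    return [group for group in groups if len(group) > 1]

For example, find_duplicate_with_arrays(["root/a 1.txt(abcd) 2.txt(efgh)", "root/b 3.txt(abcd)"]) returns [["root/a/1.txt", "root/b/3.txt"]].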