Given a text file file.txt, transpose its content.
You may assume that each row has the same number of columns, and each field is separated by the ' ' character.
Example:
If file.txt has the following content:
name age
alice 21
ryan 30
Output the following:
name alice ryan
age 21 30
Problem Overview: The input is a text file where each line contains space-separated values. The task is to transpose the file so that columns become rows. If the file has m rows and n columns, the output should produce n rows where each row contains the values from the corresponding column of the original file.
Approach 1: Using Arrays for Transpose (O(m × n) time, O(m × n) space)
This method reads the file row by row and stores values in an array indexed by column position. In awk or similar shell tools, you iterate through each field using $i while scanning the current line. For every column index i, append the field value to an array entry that represents the transposed row. After processing all lines, iterate over the stored array indices and print the constructed rows. The key insight is grouping elements by column index during the read phase instead of trying to rearrange them later.
This approach mirrors a classic matrix transpose using arrays. Each column accumulates values across multiple lines, and the final print step outputs them as rows. Time complexity is O(m × n) because every cell in the file is visited once. Space complexity is also O(m × n) since the entire transposed structure is stored before printing.
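The array-based version above can be sketched in awk as follows. This is one possible implementation, not the only one; the `printf` line just recreates the example `file.txt` so the snippet is self-contained.

```shell
# Recreate the example input (any file with equal-width rows works).
printf 'name age\nalice 21\nryan 30\n' > file.txt

# Array-based transpose: store every cell, then emit each column as a row.
awk '
{
    for (i = 1; i <= NF; i++)
        cell[NR, i] = $i        # cell[row, col] holds the whole matrix
    cols = NF                   # column count (same on every line)
}
END {
    for (i = 1; i <= cols; i++) {
        line = cell[1, i]
        for (r = 2; r <= NR; r++)
            line = line " " cell[r, i]
        print line
    }
}
' file.txt
# → name alice ryan
# → age 21 30
```

Because the whole `cell` array is filled before anything is printed, this matches the O(m × n) space bound stated above.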
Approach 2: Using Stream Processing (O(m × n) time, O(n) space)
Stream-based solutions avoid storing the full matrix in memory. Tools like awk process the file as a stream and gradually build each column string. While scanning each line, iterate through fields using a loop from 1 to NF (number of fields). Maintain a column buffer for each index and append the current value with a separator. After all lines are processed, print the buffers sequentially.
This approach leverages typical shell text-processing workflows where data flows line by line instead of loading everything at once. Since only column accumulators are maintained, the number of stored entries is proportional to the number of columns rather than the number of cells. The algorithm still touches every element exactly once, so the time complexity remains O(m × n), while the bookkeeping overhead is O(n) accumulators (note that the buffers themselves still grow with the data they collect, since the output must eventually contain every value).
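A minimal awk sketch of the streaming version: one growing string per column, printed at the end. As before, the `printf` line is only there to set up the example input.

```shell
# Recreate the example input.
printf 'name age\nalice 21\nryan 30\n' > file.txt

# Streaming transpose: keep one accumulator string per column index.
awk '
{
    for (i = 1; i <= NF; i++)
        col[i] = (NR == 1) ? $i : col[i] " " $i   # append with a space
}
END {
    # NF in END is the field count of the last line; all lines match.
    for (i = 1; i <= NF; i++) print col[i]
}
' file.txt
# → name alice ryan
# → age 21 30
```

The ternary avoids a leading separator on the first line; everything else is a single pass over the input, which is why this style scales to files larger than what you would want to hold as a full 2D array.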
Shell interview problems like this emphasize command-line data transformation rather than complex algorithms. Understanding field iteration ($i), record processing, and streaming pipelines is more valuable than traditional data structures. These patterns appear frequently in string processing and log analysis tasks.
Recommended for interviews: The stream-processing approach is what most interviewers expect because it matches how shell utilities handle large files. Showing the array-based version first demonstrates understanding of matrix transpose logic, but implementing the streaming solution proves you know how to work efficiently with shell pipelines and awk.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Arrays for Transpose | O(m × n) | O(m × n) | When clarity is preferred and storing the full transposed structure in memory is acceptable |
| Stream Processing with awk | O(m × n) | O(n) | Large files or typical shell scripting workflows where streaming is more memory efficient |