An efficient way to solve this problem is to filter the Views table (conceptually, a self-join of the table with itself) on the condition that author_id equals viewer_id. This identifies the rows where authors viewed their own articles. From those rows, we select the distinct author IDs and return them in ascending order.
Time Complexity: O(n log n) - due to sorting the result.
Space Complexity: O(n) - storing distinct author IDs.
// Sample demonstration in C of the SQL logic
#include <stdio.h>

int main(void) {
    // Rows of the Views table: {article_id, author_id, viewer_id, view_date}
    int views[][4] = {
        {1, 3, 5, 20190801},
        {1, 3, 6, 20190802},
        {2, 7, 7, 20190801},
        {2, 7, 6, 20190802},
        {4, 7, 1, 20190722},
        {3, 4, 4, 20190721},
        {3, 4, 4, 20190721}
    };
    int num_of_views = 7;
    int found_authors[100] = {0};  // flag array doubles as a set of distinct IDs

    for (int i = 0; i < num_of_views; i++) {
        // A row where the author viewed their own article
        if (views[i][1] == views[i][2]) {
            found_authors[views[i][1]] = 1;
        }
    }

    // Scanning the flags in index order prints the IDs in ascending order
    printf("+------+\n| id   |\n+------+\n");
    for (int i = 0; i < 100; i++) {
        if (found_authors[i] == 1) {
            printf("| %-4d |\n", i);
        }
    }
    printf("+------+\n");
    return 0;
}
This is a conceptual approach that mimics the self-join operation in C and demonstrates the logic of matching author_id with viewer_id and filtering to distinct values. The fixed-size flag array simplifies the lookup, but a real implementation would need a dynamic structure.
An alternative implementation can use a data structure such as a set to track the authors who viewed their own articles. We iterate over the Views table and, whenever author_id equals viewer_id, insert the author into the set. Finally, we convert the set into a sorted list of distinct author IDs.
Time Complexity: O(n log n) - due to sorting the set elements.
Space Complexity: O(n) - to store the unique authors in memory.
views = [
    [1, 3, 5, 20190801],
    [1, 3, 6, 20190802],
    [2, 7, 7, 20190801],
    [2, 7, 6, 20190802],
    [4, 7, 1, 20190722],
    [3, 4, 4, 20190721],
    [3, 4, 4, 20190721],
]

# Collect authors who viewed their own article; the set removes duplicates
self_viewers = {author for _, author, viewer, _ in views if author == viewer}

# Sorted list of distinct author IDs, ascending
print(sorted(self_viewers))  # [4, 7]
The Python implementation uses a set to hold the unique author_ids for which the author is also the viewer. A set eliminates duplicates automatically and supports average O(1) insertions.