This approach uses sorting to calculate the h-index. The idea is to sort the array of citations in descending order. Then, find the maximum number h such that there are h papers with at least h citations. This can be efficiently determined by iterating over the sorted array.
Time Complexity: O(n log n) due to sorting, Space Complexity: O(1) since the sorting is in place.
#include <vector>
#include <algorithm>
#include <iostream>
using namespace std;

int hIndex(vector<int>& citations) {
    // Sort in descending order so citations[i] is the (i+1)-th highest count.
    sort(citations.begin(), citations.end(), greater<int>());
    for (int i = 0; i < (int)citations.size(); i++) {
        // The first position where the (i+1)-th paper has fewer than
        // i+1 citations means exactly i papers have at least i citations.
        if (citations[i] < i + 1) {
            return i;
        }
    }
    // Every paper has at least citations.size() citations.
    return citations.size();
}

int main() {
    vector<int> citations = {3, 0, 6, 1, 5};
    cout << "H-Index: " << hIndex(citations) << endl;
    return 0;
}

This C++ implementation sorts the citations in descending order and finds the h-index using the same logic as the C solution.
Given the constraints where citation counts do not exceed 1000 and the number of papers is at most 5000, a counting sort or bucket sort can be used. This approach builds a frequency array that counts how many papers have each citation value, then traverses that array from high values down to compute the h-index efficiently.
Time Complexity: O(n + m) where n is citationsSize and m is the maximum citation value, Space Complexity: O(m).
#include <stdio.h>
#include <string.h>

#define MAX_CITATIONS 1000

int hIndex(const int* citations, int citationsSize) {
    // Frequency array: count[c] = number of papers with exactly c citations.
    int count[MAX_CITATIONS + 1];
    memset(count, 0, sizeof(count));
    for (int i = 0; i < citationsSize; i++) {
        count[citations[i]]++;
    }
    // Accumulate from the highest citation value downward; at each step,
    // papers = number of papers with at least c citations.
    int papers = 0;
    for (int c = MAX_CITATIONS; c >= 0; c--) {
        papers += count[c];
        if (papers >= c) {
            return c;
        }
    }
    return 0;
}

int main(void) {
    int citations[] = {3, 0, 6, 1, 5};
    printf("H-Index: %d\n", hIndex(citations, 5));
    return 0;
}
This C implementation uses a frequency array to count papers for each citation value. It accumulates the counts from the high end downward and returns the largest value c for which at least c papers have c or more citations.