In this approach, we use Breadth-First Search (BFS) to traverse the binary tree level by level, computing the sum of the node values at each level. Each level sum is pushed into a min-heap; whenever the heap grows beyond size 'k', we pop its smallest element. At the end of the traversal, the heap therefore holds the 'k' largest level sums (or fewer, if the tree has fewer than 'k' levels), and the kth largest sum sits at the top of the min-heap.
Time Complexity: O(n log k), where 'n' is the number of nodes in the tree. We traverse all nodes once, and each heap operation takes O(log k).
Space Complexity: O(max(w, k)), where 'w' is the maximum width of the tree at any level (for the queue), and 'k' is the size of the heap.
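Before the full solution, the pruning idea can be seen in isolation on a plain array of sums. This is only a sketch: a sorted array stands in for the heap (so each insert here is O(k) rather than O(log k)), and the helper name kthLargest is illustrative, not part of the solution below.

```javascript
// Sketch: keep only the k largest values while scanning a stream of sums.
// A sorted ascending array stands in for the min-heap; the smallest kept
// value sits at index 0, just as it sits at the top of a min-heap.
function kthLargest(sums, k) {
  const keep = []; // holds at most k values, ascending
  for (const s of sums) {
    keep.push(s);
    keep.sort((a, b) => a - b);        // a real heap insert would be O(log k)
    if (keep.length > k) keep.shift(); // evict the smallest, like a heap pop
  }
  // the minimum of the k largest values is exactly the kth largest overall
  return keep.length === k ? keep[0] : -1;
}

console.log(kthLargest([5, 17, 13, 10], 2)); // 13
console.log(kthLargest([4], 2));             // -1 (fewer than k sums)
```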
function kthLargestLevelSum(root, k) {
  if (!root) return -1;
  const queue = [root];
  const minHeap = new MinHeap();
  let head = 0; // index-based dequeue keeps BFS O(n); Array.shift() is O(n) per call

  while (head < queue.length) {
    const levelLength = queue.length - head;
    let levelSum = 0;
    for (let i = 0; i < levelLength; i++) {
      const node = queue[head++];
      levelSum += node.val;
      if (node.left) queue.push(node.left);
      if (node.right) queue.push(node.right);
    }
    minHeap.push(levelSum);
    if (minHeap.size() > k) minHeap.pop(); // keep only the k largest sums
  }
  return minHeap.size() === k ? minHeap.peek() : -1;
}

class MinHeap {
  constructor() {
    this.heap = [];
  }
  push(val) {
    this.heap.push(val);
    this.bubbleUp();
  }
  pop() {
    const min = this.heap[0];
    const end = this.heap.pop();
    if (this.heap.length > 0) {
      this.heap[0] = end;
      this.bubbleDown();
    }
    return min;
  }
  bubbleUp() {
    let idx = this.heap.length - 1;
    let element = this.heap[idx];
    while (idx > 0) {
      let parentIdx = Math.floor((idx - 1) / 2);
      let parent = this.heap[parentIdx];
      if (element >= parent) break;
      this.heap[parentIdx] = element;
      this.heap[idx] = parent;
      idx = parentIdx;
    }
  }
  bubbleDown() {
    let idx = 0;
    let length = this.heap.length;
    let element = this.heap[0];
    while (true) {
      let leftChildIdx = 2 * idx + 1;
      let rightChildIdx = 2 * idx + 2;
      let leftChild, rightChild;
      let swap = null;
      if (leftChildIdx < length) {
        leftChild = this.heap[leftChildIdx];
        if (leftChild < element) {
          swap = leftChildIdx;
        }
      }
      if (rightChildIdx < length) {
        rightChild = this.heap[rightChildIdx];
        if ((swap === null && rightChild < element) || (swap !== null && rightChild < leftChild)) {
          swap = rightChildIdx;
        }
      }
      if (swap === null) break;
      this.heap[idx] = this.heap[swap];
      this.heap[swap] = element;
      idx = swap;
    }
  }
  peek() {
    return this.heap[0];
  }
  size() {
    return this.heap.length;
  }
}
The JavaScript solution employs a custom MinHeap class to efficiently maintain the 'k' largest level sums found during a BFS traversal of the binary tree.
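As a quick sanity check of the expected result, the level sums of a small sample tree can be computed directly and sorted. This snippet is self-contained and illustrative: the object-literal tree stands in for real TreeNode instances, and the levelSums helper is not part of the solution above.

```javascript
// Illustrative tree:        5
//                         /   \
//                        8     9
//                       / \   / \
//                      2   1 3   7
const tree = {
  val: 5,
  left:  { val: 8, left: { val: 2, left: null, right: null },
                   right: { val: 1, left: null, right: null } },
  right: { val: 9, left: { val: 3, left: null, right: null },
                   right: { val: 7, left: null, right: null } },
};

// Compute the sum of each level iteratively, one level at a time.
function levelSums(root) {
  const sums = [];
  let level = root ? [root] : [];
  while (level.length) {
    sums.push(level.reduce((s, n) => s + n.val, 0));
    level = level.flatMap(n => [n.left, n.right].filter(Boolean));
  }
  return sums;
}

const sums = levelSums(tree); // [5, 17, 13]
sums.sort((a, b) => b - a);   // [17, 13, 5]
console.log(sums[1]);         // 2nd largest level sum: 13
```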
This approach uses depth-first search (DFS), passing the current depth down the recursion so that each node's value can be accumulated into a per-level array of sums. Once the traversal is complete, we sort the list of sums in descending order. If the list has fewer than 'k' entries, we return -1; otherwise, we return the k-th largest value by index.
Time Complexity: O(n + d log d), where 'n' is the number of nodes and 'd' is the depth of the tree (to sort the level sums).
Space Complexity: O(d), representing the recursion stack and array for level sums, where 'd' is tree depth.
#include <algorithm>
#include <vector>
using namespace std;

struct TreeNode {
    int val;
    TreeNode *left;
    TreeNode *right;
    TreeNode(int x) : val(x), left(NULL), right(NULL) {}
};

void dfs(TreeNode* node, int level, vector<int>& levelSums) {
    if (!node) return;
    if (level == (int)levelSums.size()) levelSums.push_back(0); // first node seen at this depth
    levelSums[level] += node->val;
    dfs(node->left, level + 1, levelSums);
    dfs(node->right, level + 1, levelSums);
}

int kthLargestLevelSum(TreeNode* root, int k) {
    vector<int> levelSums; // for very large trees, long long avoids sum overflow
    dfs(root, 0, levelSums);
    if ((int)levelSums.size() < k) return -1;
    sort(levelSums.begin(), levelSums.end(), greater<int>());
    return levelSums[k - 1];
}
The C++ version accumulates the per-level sums with DFS, sorts them in descending order, and reads off the k-th largest sum directly from the sorted results.
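For comparison with the BFS solution earlier, the same DFS accumulation can be sketched in JavaScript. The object-literal tree is an illustrative stand-in for real TreeNode instances, and dfsSums is a hypothetical helper name.

```javascript
// Sketch: accumulate per-level sums with DFS, carrying the depth down.
function dfsSums(node, level, sums) {
  if (!node) return sums;
  if (level === sums.length) sums.push(0); // first node seen at this depth
  sums[level] += node.val;
  dfsSums(node.left, level + 1, sums);
  dfsSums(node.right, level + 1, sums);
  return sums;
}

// Illustrative tree:   1
//                     / \
//                    2   3
//                       /
//                      4
const root = {
  val: 1,
  left:  { val: 2, left: null, right: null },
  right: { val: 3, left: { val: 4, left: null, right: null }, right: null },
};

const sums = dfsSums(root, 0, []);
console.log(sums);    // [1, 5, 4]
sums.sort((a, b) => b - a);
console.log(sums[0]); // largest level sum: 5
```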