This method uses an array of buckets (separate chaining) to handle collisions; here each bucket is a plain array acting as the chain. Each index corresponds to a hash value, and when multiple keys hash to the same index, their key-value pairs are stored together in the bucket at that index, so colliding keys never overwrite one another.
Time Complexity: put, get, and remove are O(1) on average; in the worst case each is O(n), where n is the number of keys stored in a single bucket.
Space Complexity: O(N), where N is the number of distinct keys inserted, plus the fixed 1000-slot bucket array.
class MyHashMap {
    constructor() {
        this.size = 1000;
        // One bucket (an array of [key, value] pairs) per slot.
        this.map = Array.from({ length: this.size }, () => []);
    }

    put(key, value) {
        // Keys are assumed to be non-negative integers.
        const index = key % this.size;
        for (const pair of this.map[index]) {
            if (pair[0] === key) {
                pair[1] = value; // key already present: update in place
                return;
            }
        }
        this.map[index].push([key, value]); // new key: append to the bucket
    }

    get(key) {
        const index = key % this.size;
        for (const pair of this.map[index]) {
            if (pair[0] === key) {
                return pair[1];
            }
        }
        return -1; // key not found
    }

    remove(key) {
        const index = key % this.size;
        // Rebuild the bucket without the target key.
        this.map[index] = this.map[index].filter(pair => pair[0] !== key);
    }
}

This JavaScript solution builds a hash map from an array of arrays: each outer slot is a bucket of [key, value] pairs, and collisions are resolved via separate chaining. put scans the key's bucket and updates the value if the key is already present, appending the pair otherwise; get returns the stored value, or -1 for a missing key; remove uses filter to drop the target pair from its bucket.
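A quick usage sketch (the driver below is illustrative, not part of the original solution):

const map = new MyHashMap();
map.put(1, 1);
map.put(1001, 2);           // 1001 % 1000 === 1: shares bucket 1 with key 1
console.log(map.get(1));    // 1
console.log(map.get(1001)); // 2, found by scanning the shared bucket
map.remove(1);
console.log(map.get(1));    // -1, key removed
console.log(map.get(1001)); // 2, the rest of the bucket is unaffected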
Double hashing takes the opposite approach: instead of chaining colliding keys, it stores every entry directly in the table (open addressing) and avoids the clustering problem by computing two hash functions. When a collision occurs, the second hash determines the step size for probing, so keys that collide at the same index follow different probe sequences.
Time Complexity: O(1) on average for Put, Get, and Remove while the table is lightly loaded; O(n) in the worst case, where n is the table size, since a probe sequence may visit every slot.
Space Complexity: O(n), where n is the table size: every key-value slot is preallocated.
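To make the probe arithmetic concrete, here is a small illustrative sketch (the sample key 2018 and the printing loop are hypothetical; the two hash formulas and the prime table size match the implementation below):

using System;

public class ProbeDemo {
    public static void Main() {
        const int size = 1009;             // same prime table size as below
        int key = 2018;                    // sample key: 2018 % 1009 == 0
        int index = key % size;            // primary hash: starting slot 0
        int step = 1 + (key % (size - 1)); // secondary hash: step 3
        for (int i = 0; i < 5; i++) {      // first five probed slots
            Console.Write(index + " ");    // prints: 0 3 6 9 12
            index = (index + step) % size;
        }
    }
}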
using System;

public class MyHashMap {
    // Three slot states are needed: a plain bool cannot distinguish
    // "never used" from "deleted", and lookups must probe past
    // deleted slots rather than stop at them.
    private enum Slot { Empty, Occupied, Deleted }

    // A prime table size guarantees every step in [1, size - 1] is
    // coprime with size, so each probe sequence visits every slot.
    private const int size = 1009;

    private readonly int[][] map;
    private readonly Slot[] state;

    public MyHashMap() {
        map = new int[size][];
        state = new Slot[size];
        for (int i = 0; i < size; i++) {
            map[i] = new int[] { -1, -1 };
        }
    }

    private int PrimaryHash(int key) {
        return key % size;
    }

    // The second hash gives the probe step; it is never zero.
    private int SecondaryHash(int key) {
        return 1 + (key % (size - 1));
    }

    public void Put(int key, int value) {
        int index = PrimaryHash(key);
        int step = SecondaryHash(key);
        int firstFree = -1;
        for (int i = 0; i < size; i++) {
            if (state[index] == Slot.Occupied && map[index][0] == key) {
                map[index][1] = value;   // existing key: update in place
                return;
            }
            if (state[index] != Slot.Occupied && firstFree == -1) {
                firstFree = index;       // remember first reusable slot
            }
            if (state[index] == Slot.Empty) {
                break;                   // key cannot appear further on
            }
            index = (index + step) % size;
        }
        if (firstFree == -1) {
            throw new InvalidOperationException("hash map is full");
        }
        map[firstFree][0] = key;
        map[firstFree][1] = value;
        state[firstFree] = Slot.Occupied;
    }

    public int Get(int key) {
        int index = PrimaryHash(key);
        int step = SecondaryHash(key);
        for (int i = 0; i < size; i++) {
            if (state[index] == Slot.Empty) return -1;  // never stored
            if (state[index] == Slot.Occupied && map[index][0] == key) {
                return map[index][1];
            }
            index = (index + step) % size;              // probe past deletions
        }
        return -1;
    }

    public void Remove(int key) {
        int index = PrimaryHash(key);
        int step = SecondaryHash(key);
        for (int i = 0; i < size; i++) {
            if (state[index] == Slot.Empty) return;     // key is not present
            if (state[index] == Slot.Occupied && map[index][0] == key) {
                state[index] = Slot.Deleted;            // tombstone keeps chains intact
                return;
            }
            index = (index + step) % size;
        }
    }
}

In this C# double-hashing implementation, the two hash functions split the work across the MyHashMap operations: PrimaryHash picks the starting slot, and SecondaryHash supplies a key-specific probe step, so keys that collide on their first slot follow different probe sequences and clustering stays contained. Two details keep the structure correct: the prime table size guarantees each probe sequence cycles through every slot, and the Deleted marker (a tombstone) ensures that removing a key does not sever the probe chains of keys inserted after it.
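A short usage sketch (a hypothetical driver, assumed to sit in the same file as the class above so the using directive is shared):

public class Program {
    public static void Main() {
        var map = new MyHashMap();
        map.Put(1, 100);
        map.Put(1010, 200);                // 1010 % 1009 == 1: collides with key 1
        Console.WriteLine(map.Get(1));     // 100
        Console.WriteLine(map.Get(1010));  // 200, found by probing past slot 1
        map.Remove(1);
        Console.WriteLine(map.Get(1));     // -1, key removed
        Console.WriteLine(map.Get(1010));  // 200, tombstone keeps the chain intact
    }
}

Note that Get(1010) still succeeds after Remove(1) precisely because the removed slot is marked Deleted rather than reset to Empty.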