This approach uses an array of buckets (separate chaining) to handle collisions. Each key is hashed to an index, and if multiple keys hash to the same index, their key-value pairs are stored together in a list at that index, which lets us manage collisions without losing any entries.
Time Complexity: put, get, and remove each run in O(1) on average and O(n) in the worst case, where n is the number of keys chained at a particular index.
Space Complexity: O(N + B), where N is the number of distinct keys inserted into the hash map and B is the number of buckets (1000 here).
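For instance, a minimal sketch (the keys 17 and 1017 are arbitrary illustrations; the bucket count matches the implementation below) shows two keys landing in, and sharing, the same bucket:

# Two keys exactly 1000 apart collide under index = key % 1000.
size = 1000
buckets = [[] for _ in range(size)]
for key, value in [(17, 7), (1017, 8)]:
    buckets[key % size].append([key, value])
print(buckets[17])  # [[17, 7], [1017, 8]] -- both pairs are chained in bucket 17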
class MyHashMap:

    def __init__(self):
        self.size = 1000                              # number of buckets
        self.map = [[] for _ in range(self.size)]     # each bucket is a list of [key, value] pairs

    def put(self, key: int, value: int) -> None:
        index = key % self.size
        for kv in self.map[index]:
            if kv[0] == key:                          # key already present: update its value
                kv[1] = value
                return
        self.map[index].append([key, value])          # otherwise chain a new pair onto the bucket

    def get(self, key: int) -> int:
        index = key % self.size
        for kv in self.map[index]:
            if kv[0] == key:
                return kv[1]
        return -1                                     # key not found

    def remove(self, key: int) -> None:
        index = key % self.size
        for i, kv in enumerate(self.map[index]):
            if kv[0] == key:
                del self.map[index][i]                # drop the pair from its bucket
                return

The Python version uses an array of lists to implement separate chaining for collision resolution in the hash map. The put method updates or inserts a key-value pair, the get method searches for a key at the calculated index, and the remove method deletes the key-value pair if present.
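A short usage sketch (hypothetical driver code, not part of the original solution) confirms that colliding keys are stored, retrieved, and removed independently:

hash_map = MyHashMap()
hash_map.put(1, 100)
hash_map.put(1001, 200)      # 1001 % 1000 == 1, so it shares a bucket with key 1
print(hash_map.get(1))       # 100
print(hash_map.get(1001))    # 200
hash_map.remove(1)
print(hash_map.get(1))       # -1, key 1 is gone
print(hash_map.get(1001))    # 200, the chained entry is unaffected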
Double hashing avoids the clustering problem by combining two hash functions: if a collision occurs, a second hash determines the step size for probing, so colliding keys are spread across the table rather than piling up in neighbouring slots.
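As a rough sketch of how the probing unfolds (illustrative only; the two hash functions mirror the PrimaryHash and SecondaryHash used in the C# code below, and the key is an arbitrary example):

# Double hashing probe sequence: index_i = (h1(key) + i * h2(key)) % size.
size = 1000
h1 = lambda key: key % size              # primary hash: home slot
h2 = lambda key: 1 + (key % (size - 1))  # secondary hash: step size, never zero
key = 2017
print([(h1(key) + i * h2(key)) % size for i in range(4)])  # [17, 37, 57, 77]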
Time Complexity: O(1) on average for put, get, and remove thanks to effective hashing; O(n) in the worst case, where n is the table size and a probe may visit every slot.
Space Complexity: O(n) for the pre-allocated table of n key-value slots (n = 1000 here).
using System;

public class MyHashMap {
    private int[][] map;            // map[i] = { key, value } stored in slot i
    private bool[] occupied;        // slot currently holds a live entry
    private bool[] deleted;         // tombstone: slot was freed by Remove; keeps probe chains intact
    private const int size = 1000;  // note: a prime table size (e.g. 1009) guarantees the probe sequence can visit every slot

    public MyHashMap() {
        map = new int[size][];
        occupied = new bool[size];
        deleted = new bool[size];
        for (int i = 0; i < size; i++) {
            map[i] = new int[] { -1, -1 };
        }
    }

    private int PrimaryHash(int key) {
        return key % size;                  // home slot
    }

    private int SecondaryHash(int key) {
        return 1 + (key % (size - 1));      // probe step, always non-zero
    }

    public void Put(int key, int value) {
        int index = PrimaryHash(key);
        int step = SecondaryHash(key);
        int firstFree = -1;                 // first tombstone seen, reusable for insertion
        for (int i = 0; i < size; i++) {    // bound the probe so a full table cannot loop forever
            if (occupied[index]) {
                if (map[index][0] == key) { // key already stored: update in place
                    map[index][1] = value;
                    return;
                }
            } else if (deleted[index]) {
                if (firstFree == -1) firstFree = index;
            } else {
                // Never-used slot: the key cannot appear further along this chain,
                // so insert here (or reuse an earlier tombstone if one was found).
                int target = firstFree != -1 ? firstFree : index;
                map[target][0] = key;
                map[target][1] = value;
                occupied[target] = true;
                deleted[target] = false;
                return;
            }
            index = (index + step) % size;
        }
        if (firstFree != -1) {              // chain exhausted without a free slot: fall back to a tombstone
            map[firstFree][0] = key;
            map[firstFree][1] = value;
            occupied[firstFree] = true;
            deleted[firstFree] = false;
        }
    }

    public int Get(int key) {
        int index = PrimaryHash(key);
        int step = SecondaryHash(key);
        for (int i = 0; i < size; i++) {
            if (occupied[index] && map[index][0] == key) return map[index][1];
            if (!occupied[index] && !deleted[index]) return -1;   // end of probe chain: key absent
            index = (index + step) % size;
        }
        return -1;
    }

    public void Remove(int key) {
        int index = PrimaryHash(key);
        int step = SecondaryHash(key);
        for (int i = 0; i < size; i++) {
            if (occupied[index] && map[index][0] == key) {
                occupied[index] = false;
                deleted[index] = true;       // leave a tombstone so later entries in the chain stay reachable
                map[index][0] = -1;
                return;
            }
            if (!occupied[index] && !deleted[index]) return;      // key was never inserted
            index = (index + step) % size;
        }
    }
}

In this C# double-hashing version, the two hash functions work together across the core MyHashMap operations: whenever a primary slot is contested, the secondary hash supplies a distinct step size, which keeps probe sequences short even when many keys share the same home index.