This method employs an array of linked lists (chaining) to handle collisions. Each array index is a bucket selected by the hash code; if multiple keys hash to the same index, we store their entries in a linked list at that index, which lets us manage key collisions without losing data.
Time Complexity: O(1) on average for each operation (put, get, remove); O(n) in the worst case, where n is the number of keys chained at a particular index.
Space Complexity: O(N), where N is the number of distinct keys inserted into the hash map.
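For intuition, any two keys that differ by a multiple of the table size land in the same bucket. A minimal sketch, assuming the same 1000-bucket array the implementation below uses:

public class BucketIndexDemo {
    public static void main(String[] args) {
        final int SIZE = 1000;                 // number of buckets
        for (int key : new int[] {7, 1007, 2007}) {
            // All three keys map to bucket 7, so they would share one chain.
            System.out.println("key " + key + " -> bucket " + (key % SIZE));
        }
    }
}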
import java.util.LinkedList;

class MyHashMap {

    // A single key-value entry stored in a bucket's chain.
    private class Pair {
        int key, value;

        Pair(int k, int v) {
            key = k;
            value = v;
        }
    }

    private static final int SIZE = 1000;
    private LinkedList<Pair>[] map;

    @SuppressWarnings("unchecked")
    public MyHashMap() {
        map = new LinkedList[SIZE];
    }

    public void put(int key, int value) {
        int index = key % SIZE;
        if (map[index] == null) {
            map[index] = new LinkedList<>();
        }
        // If the key already exists in this chain, update it in place.
        for (Pair p : map[index]) {
            if (p.key == key) {
                p.value = value;
                return;
            }
        }
        map[index].add(new Pair(key, value));
    }

    public int get(int key) {
        int index = key % SIZE;
        if (map[index] != null) {
            for (Pair p : map[index]) {
                if (p.key == key) {
                    return p.value;
                }
            }
        }
        return -1; // key not found
    }

    public void remove(int key) {
        int index = key % SIZE;
        if (map[index] != null) {
            // Keys are unique per chain, so this deletes at most one entry.
            map[index].removeIf(p -> p.key == key);
        }
    }
}

This Java implementation uses an array of linked lists to store pairs of keys and values. Each bucket in the map is a linked list, so colliding keys can coexist.

- Pair holds a single key-value entry.
- put updates an entry in place if the key exists; otherwise, it appends a new entry to the chain.
- get scans the linked list at the computed index for the key, returning -1 if it is absent.
- remove deletes the key-value pair from its chain if it exists.
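As a quick sanity check, a small hypothetical driver (not part of the original listing) can exercise the class end to end, including two keys that collide in bucket 1:

public class Main {
    public static void main(String[] args) {
        MyHashMap map = new MyHashMap();
        map.put(1, 10);
        map.put(1001, 20);                  // collides with key 1 in bucket 1
        System.out.println(map.get(1));     // 10
        System.out.println(map.get(1001));  // 20, found by walking the chain
        map.put(1, 15);                     // overwrites the existing value
        System.out.println(map.get(1));     // 15
        map.remove(1);
        System.out.println(map.get(1));     // -1, key no longer present
    }
}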
An alternative approach is double hashing, a form of open addressing. It avoids the clustering problem by combining two hash functions: the first picks the starting index, and when a collision occurs, the second determines the step size used to probe for the next open slot.
Time Complexity: O(1) on average for put, get, and remove with a well-distributed hash; O(n) in the worst case, where n is the table size, when a long probe sequence must be scanned.
Space Complexity: O(N), where N is the table size, since all key and value slots are allocated up front.
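To make the probing concrete, here is a minimal Java sketch (mirroring the 1000-slot table and the two hash functions of the C# implementation below) that prints the first few slots a key would probe after a collision:

public class DoubleHashingDemo {
    public static void main(String[] args) {
        final int SIZE = 1000;
        int key = 2024;
        int index = key % SIZE;              // primary hash: starting slot (24)
        int step = 1 + (key % (SIZE - 1));   // secondary hash: step size (27)
        for (int i = 0; i < 5; i++) {
            System.out.println("probe " + i + ": slot " + index);
            index = (index + step) % SIZE;   // visits 24, 51, 78, 105, 132
        }
    }
}

Because the step size depends on the key, two keys that collide at the same starting slot usually follow different probe paths, which is what breaks up clusters.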
using System;

public class MyHashMap {
    private int[][] map;
    private bool[] occupied;   // slot currently holds a live entry
    private bool[] deleted;    // tombstone: slot held an entry that was removed
    private const int size = 1000;

    public MyHashMap() {
        map = new int[size][];
        occupied = new bool[size];
        deleted = new bool[size];
        for (int i = 0; i < size; i++) {
            map[i] = new int[] { -1, -1 };
        }
    }

    // First hash: picks the starting slot.
    private int PrimaryHash(int key) {
        return key % size;
    }

    // Second hash: yields a non-zero step size in [1, size - 1].
    // Note: with size = 1000 (not prime), some probe sequences revisit
    // slots; the bounded loops below guarantee termination regardless.
    private int SecondaryHash(int key) {
        return 1 + (key % (size - 1));
    }

    public void Put(int key, int value) {
        int index = PrimaryHash(key);
        int step = SecondaryHash(key);
        int firstFree = -1; // first empty or tombstoned slot on the probe path
        for (int i = 0; i < size; i++) {
            if (occupied[index] && map[index][0] == key) {
                map[index][1] = value; // key already present: update in place
                return;
            }
            if (!occupied[index]) {
                if (firstFree == -1) firstFree = index;
                // A never-used slot means the key cannot appear further on.
                if (!deleted[index]) break;
            }
            index = (index + step) % size;
        }
        if (firstFree == -1)
            throw new InvalidOperationException("Hash table is full.");
        map[firstFree][0] = key;
        map[firstFree][1] = value;
        occupied[firstFree] = true;
        deleted[firstFree] = false;
    }

    public int Get(int key) {
        int index = PrimaryHash(key);
        int step = SecondaryHash(key);
        for (int i = 0; i < size; i++) {
            // A never-used slot ends the probe sequence; a tombstone does
            // not, since entries may have been placed beyond it.
            if (!occupied[index] && !deleted[index]) return -1;
            if (occupied[index] && map[index][0] == key) return map[index][1];
            index = (index + step) % size;
        }
        return -1;
    }

    public void Remove(int key) {
        int index = PrimaryHash(key);
        int step = SecondaryHash(key);
        for (int i = 0; i < size; i++) {
            if (!occupied[index] && !deleted[index]) return; // key absent
            if (occupied[index] && map[index][0] == key) {
                occupied[index] = false;
                deleted[index] = true; // tombstone keeps later chains intact
                map[index][0] = -1;
                return;
            }
            index = (index + step) % size;
        }
    }
}

In this C# double-hashing implementation, the two hash functions work in tandem: the primary hash picks a starting slot, and the key-dependent step from the secondary hash sends colliding keys down different probe paths, which keeps clusters from forming.