Cosmic Module


Qubits of DPK

March 16, 2026

Core DSA

1️⃣ LC #49 — Group Anagrams

Core Idea

Anagrams share the same character frequency. Use frequency array as HashMap key to group them.
Brute Force → O(n² · L) Time | O(n) Space:
For each word, compare it against every other word to check whether they are anagrams.
Optimized → O(nL) Time | O(nL) Space:
java
class Solution {
    public List<List<String>> groupAnagrams(String[] strs) {
        HashMap<String, List<String>> map = new HashMap<>();

        for (String word : strs) {
            int[] frequency = new int[26];
            for (char character : word.toCharArray()) {
                frequency[character - 'a']++;
            }

            String key = Arrays.toString(frequency);
            map.computeIfAbsent(key, k -> new ArrayList<>()).add(word);
        }

        return new ArrayList<>(map.values());
    }
}

Dry Run

Input: strs = ["eat", "tea", "tan", "ate", "nat", "bat"]
Output: [["eat","tea","ate"],["tan","nat"],["bat"]]

Key Tricks

  • Arrays.toString(int[26]) converts frequency array to consistent string key
  • computeIfAbsent(key, k -> new ArrayList<>()).add(word) — cleanest one-liner for this pattern
  • All anagrams of same group produce IDENTICAL frequency array → identical key
  • Alternative key: sort the word (Arrays.sort(chars)) — simpler but O(L log L) per word

30-Second Interview Explanation

Anagrams share the same character frequencies. Build a frequency array of size 26 for each word, convert to string using Arrays.toString() as a HashMap key. All anagrams map to the same key and get grouped together. Return all map values. Time O(nL), Space O(nL).

⚠️ Interview Traps

  • Using sorted string as key — works but O(L log L) vs O(L) for frequency approach
  • Forgetting Arrays.toString() — directly using int[] as key FAILS (arrays compare by reference in Java, not value)
  • Not knowing computeIfAbsent — verbose alternative uses getOrDefault + put
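The verbose getOrDefault + put alternative can be spelled out; a minimal sketch with an illustrative class name (GroupByKey), using the sorted-word key for brevity:

```java
import java.util.*;

class GroupByKey {
    // Same grouping as map.computeIfAbsent(key, k -> new ArrayList<>()).add(word),
    // spelled out with getOrDefault + put
    static Map<String, List<String>> group(String[] words) {
        Map<String, List<String>> map = new HashMap<>();
        for (String word : words) {
            char[] chars = word.toCharArray();
            Arrays.sort(chars);
            String key = new String(chars);
            List<String> bucket = map.getOrDefault(key, new ArrayList<>());
            bucket.add(word);
            map.put(key, bucket); // re-put of the same reference is harmless
        }
        return map;
    }
}
```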

Most Common Follow-ups

Q1: Why can't you use int[] directly as the HashMap key?
In Java, arrays use reference equality, not value equality. Two distinct int[] arrays with identical contents are NOT equal as HashMap keys; they hash to different buckets. Arrays.toString() converts the array to a String, which uses value equality. A List<Integer> key would also work, since List defines value-based equals/hashCode; beware, though, that Arrays.asList(int[]) returns a one-element List<int[]>, not a List<Integer>.
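The pitfall is easy to demonstrate; a minimal sketch with illustrative class and method names:

```java
import java.util.Arrays;
import java.util.HashMap;

class ArrayKeyDemo {
    // Looking up a second array with identical contents misses:
    // int[] keys compare by reference, not by value
    static boolean arrayLookupHits() {
        HashMap<int[], String> map = new HashMap<>();
        map.put(new int[]{1, 0, 2}, "group");
        return map.containsKey(new int[]{1, 0, 2});
    }

    // Arrays.toString produces a String key, which compares by value
    static boolean stringLookupHits() {
        HashMap<String, String> map = new HashMap<>();
        map.put(Arrays.toString(new int[]{1, 0, 2}), "group");
        return map.containsKey(Arrays.toString(new int[]{1, 0, 2}));
    }
}
```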
Q2: What's the alternative key approach and its trade-off?
Sort each word alphabetically — "eat", "tea", "ate" all sort to "aet". Use that as key.
java
char[] chars = word.toCharArray();
Arrays.sort(chars);
String key = new String(chars);
Simpler code but O(L log L) per word instead of O(L). For short words the difference is negligible, but frequency array is asymptotically better.
Q3: What if input contains empty strings?
Empty string "" produces int[26] of all zeros → Arrays.toString gives "[0,0,...,0]". All empty strings group together correctly. Edge case handled automatically.

2️⃣ LC #128 — Longest Consecutive Sequence

Core Idea

A number is a sequence START only if num - 1 does NOT exist in the set. Expand only from starts — each element visited at most twice = O(n).
Brute Force → O(n²) Time | O(n) Space:
For each element, scan array checking for consecutive sequence.
Optimized → O(n) Time | O(n) Space:
java
class Solution {
    public int longestConsecutive(int[] nums) {
        HashSet<Integer> set = new HashSet<>();
        for (int num : nums) {
            set.add(num);
        }

        int longestSequence = 0;

        for (int num : set) {
            // Only start counting from sequence starts
            if (!set.contains(num - 1)) {
                int currentNumber = num;
                int currentLength = 1;

                while (set.contains(currentNumber + 1)) {
                    currentNumber++;
                    currentLength++;
                }

                longestSequence = Math.max(longestSequence, currentLength);
            }
        }

        return longestSequence;
    }
}

Dry Run

Input: nums = [100, 4, 200, 1, 3, 2]
Set = {100, 4, 200, 1, 3, 2}
Output: 4 (sequence 1,2,3,4)

Key Tricks

  • num - 1 not in set → sequence start. This is the key insight
  • Iterate over set not nums — avoids processing duplicates
  • Each element visited at most twice total (once as start check, once in expansion) → O(n)
  • Do NOT sort — sorting makes it O(n log n) unnecessarily

30-Second Interview Explanation

Add all numbers to a HashSet. For each number, check if it's a sequence start (num-1 not in set). From each start, expand forward counting consecutive numbers. Track max length. Iterating set avoids duplicates. Each element visited at most twice → O(n) total. Time O(n), Space O(n).

⚠️ Interview Traps

  • Sorting approach → O(n log n), interviewer will ask for O(n)
  • Iterating array instead of set → duplicates cause repeated work
  • Starting expansion from non-starts → O(n²) worst case

Most Common Follow-ups

Q1: Why is this O(n) and not O(n²)?
Because we only expand from sequence STARTS. Each number is visited at most twice: once when checking whether it's a start, and once during expansion from its start. Non-starts are skipped in O(1). Total work across all expansions = O(n).
Q2: What if array has duplicates?
Iterating over the HashSet automatically handles duplicates — each unique value appears once in the set. Even if nums = [1,1,2,2,3], the set is {1,2,3} and we only process each value once.
Q3: Can you do it without extra space?
Only by sorting: O(n log n) time, O(1) extra space. Scan the sorted array counting consecutive runs, skipping duplicates with nums[i] == nums[i-1]. Achieving O(n) time requires the O(n)-space HashSet; there is no known O(n)-time, O(1)-space solution.
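The sort-based variant can be sketched as follows (class name illustrative; Arrays.sort itself uses O(log n) stack space, so "O(1) space" means no auxiliary data structure):

```java
import java.util.Arrays;

class LongestConsecutiveSorted {
    static int longestConsecutive(int[] nums) {
        if (nums.length == 0) return 0;
        Arrays.sort(nums); // O(n log n), but no extra HashSet
        int longest = 1, current = 1;
        for (int i = 1; i < nums.length; i++) {
            if (nums[i] == nums[i - 1]) continue;      // skip duplicates
            if (nums[i] == nums[i - 1] + 1) current++; // extend the run
            else current = 1;                          // run breaks, restart
            longest = Math.max(longest, current);
        }
        return longest;
    }
}
```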

3️⃣ LC #560 — Subarray Sum Equals K

Core Idea

Prefix sum. If prefixSum[j] - prefixSum[i] == k, subarray [i+1, j] sums to k. Store prefix sum frequencies in HashMap.
Brute Force → O(n²) Time | O(1) Space:
Check every subarray sum.
Optimized → O(n) Time | O(n) Space:
java
class Solution {
    public int subarraySum(int[] nums, int k) {
        HashMap<Integer, Integer> prefixCount = new HashMap<>();
        prefixCount.put(0, 1);
        int prefixSum = 0, count = 0;
        for (int num : nums) {
            prefixSum += num;
            count += prefixCount.getOrDefault(prefixSum - k, 0);
            prefixCount.merge(prefixSum, 1, Integer::sum);
        }
        return count;
    }
}

Dry Run

Input: nums = [1, 2, 3], k=3
Output: 2 (subarrays [1,2] and [3])

Key Tricks

  • Look up prefixSum - k in the map
  • prefixCount.put(0, 1) — handles subarrays starting from index 0
  • Look up BEFORE updating map — prevents using current element twice
  • Sliding window doesn't work — negative numbers allowed

30-Second Interview Explanation

If prefixSum[j] - prefixSum[i] = k, subarray [i+1,j] sums to k. Store frequency of prefix sums seen so far. At each step look up (prefixSum - k). Time O(n), Space O(n).

⚠️ Interview Traps

  • Sliding window doesn't work — negative numbers break monotonicity
  • Look up before update — not after
  • Forgetting put(0, 1) — misses subarrays starting at index 0

Most Common Follow-ups

Q1: Why can't you use sliding window?
Sliding window requires monotonic behavior. With negative numbers, expanding right can decrease sum and shrinking left can increase it — no consistent direction. HashMap prefix sum approach handles negatives correctly.
Q2: LC #974 — Subarray Sums Divisible by K connection?
Same prefix sum pattern. For exact sum k: look up prefixSum - k. For divisible by k: store remainders, look up same remainder. Identical mathematical insight.
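A sketch of the LC #974 variant described above (class name illustrative). The one extra wrinkle: Java's % can yield negative remainders, which must be normalized before the lookup:

```java
import java.util.HashMap;

class SubarraysDivByK {
    static int subarraysDivByK(int[] nums, int k) {
        HashMap<Integer, Integer> remainderCount = new HashMap<>();
        remainderCount.put(0, 1); // empty prefix has remainder 0
        int prefixSum = 0, count = 0;
        for (int num : nums) {
            prefixSum += num;
            // Java's % can return negatives; normalize into [0, k)
            int remainder = ((prefixSum % k) + k) % k;
            // Same remainder seen before => subarray between them is divisible by k
            count += remainderCount.getOrDefault(remainder, 0);
            remainderCount.merge(remainder, 1, Integer::sum);
        }
        return count;
    }
}
```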

4️⃣ LC #523 — Continuous Subarray Sum

Core Idea

Prefix sum mod k. Same remainder at two indices means subarray between them is divisible by k. Must have length >= 2.
Optimized → O(n) Time | O(k) Space:
java
class Solution {
    public boolean checkSubarraySum(int[] nums, int k) {
        HashMap<Integer, Integer> remainderIndex = new HashMap<>();
        remainderIndex.put(0, -1);
        int prefixSum = 0;
        for (int i = 0; i < nums.length; i++) {
            prefixSum += nums[i];
            int remainder = prefixSum % k;
            if (remainderIndex.containsKey(remainder)) {
                if (i - remainderIndex.get(remainder) >= 2) return true;
            } else {
                remainderIndex.put(remainder, i);
            }
        }
        return false;
    }
}

Dry Run

Input: nums = [23, 2, 4, 6, 7], k=6
Output: true (subarray [2, 4] sums to 6, a multiple of k = 6, with length ≥ 2)

Key Tricks

  • Store FIRST occurrence of each remainder (don't overwrite)
  • remainderIndex.put(0, -1) — handles subarrays starting from index 0
  • Check i - firstIndex >= 2 — at least 2 elements required

30-Second Interview Explanation

Same remainder at two prefix sums means divisible-by-k subarray between them. Store first occurrence of each remainder. On revisit, check length >= 2. Time O(n), Space O(min(n,k)).

⚠️ Interview Traps

  • Overwriting earlier index → might miss valid long subarrays
  • put(0, -1) not put(0, 0) — ensures length check works for subarrays starting at index 0

Most Common Follow-ups

Q1: Why store first occurrence and never update?
Maximize subarray length — first occurrence gives maximum distance. Overwriting with later indices could cause us to miss valid length >= 2 subarrays.
Q2: Why -1 as initial index for remainder 0?
If the prefix sum at index 1 is divisible by k (remainder 0), the subarray is nums[0..1]. Length = 1 - (-1) = 2, which passes the >= 2 check. Using 0 instead would give length 1 - 0 = 1, incorrectly rejecting a valid two-element subarray.

5️⃣ LC #347 — Top K Frequent Elements

Core Idea

Count frequencies. Bucket sort by frequency (bucket index = frequency). Scan from highest bucket to collect k elements.
HashMap + MinHeap → O(n log k) Time | O(n) Space:
java
class Solution {
    public int[] topKFrequent(int[] nums, int k) {
        HashMap<Integer, Integer> freq = new HashMap<>();
        for (int num : nums) freq.merge(num, 1, Integer::sum);
        PriorityQueue<Integer> minHeap = new PriorityQueue<>((a, b) -> freq.get(a) - freq.get(b));
        for (int num : freq.keySet()) {
            minHeap.offer(num);
            if (minHeap.size() > k) minHeap.poll();
        }
        int[] result = new int[k];
        for (int i = k - 1; i >= 0; i--) result[i] = minHeap.poll();
        return result;
    }
}
Bucket Sort → O(n) Time | O(n) Space:
java
class Solution {
    public int[] topKFrequent(int[] nums, int k) {
        HashMap<Integer, Integer> freq = new HashMap<>();
        for (int num : nums) freq.merge(num, 1, Integer::sum);
        List<Integer>[] buckets = new List[nums.length + 1];
        for (int num : freq.keySet()) {
            int f = freq.get(num);
            if (buckets[f] == null) buckets[f] = new ArrayList<>();
            buckets[f].add(num);
        }
        int[] result = new int[k];
        int index = 0;
        for (int f = buckets.length - 1; f >= 0 && index < k; f--) {
            if (buckets[f] == null) continue;
            for (int num : buckets[f]) {
                if (index < k) result[index++] = num;
            }
        }
        return result;
    }
}

Dry Run

Input: nums = [1,1,1,2,2,3], k=2
Frequency map: {1:3, 2:2, 3:1}
Scan from bucket 3 downward: collect 1 (freq 3), then 2 (freq 2). k=2 done.
Output: [1, 2]

Key Tricks

  • Bucket index = frequency value (max possible frequency = n)
  • Scan high to low, collect until k elements
  • Bucket sort approach beats heap: O(n) vs O(n log k)

30-Second Interview Explanation

Count frequencies with HashMap. Bucket sort by frequency — bucket index is the frequency, max = n. Scan buckets high to low, collect first k. Time O(n), Space O(n).

⚠️ Interview Traps

  • Using max-heap of all elements → O(n log n), defeats purpose
  • Not knowing bucket sort when interviewer asks for O(n)

Most Common Follow-ups

Q1: Min-heap vs bucket sort tradeoffs?
Min-heap O(n log k) — good for streaming data or when k << n, doesn't need O(n) extra array. Bucket sort O(n) — optimal for static arrays, max frequency bounded by n.
Q2: LC #692 — Top K Frequent Words?
Same frequency counting. Custom comparator: higher frequency wins; on tie, lexicographically smaller string wins. Use max-heap with Comparator.comparingInt((String w) -> -freq.get(w)).thenComparing(Comparator.naturalOrder()).
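That LC #692 approach can be sketched with exactly the comparator above (class name illustrative):

```java
import java.util.*;

class TopKWords {
    static List<String> topKFrequent(String[] words, int k) {
        Map<String, Integer> freq = new HashMap<>();
        for (String w : words) freq.merge(w, 1, Integer::sum);
        // Max-heap by frequency: negated count puts higher frequency first;
        // ties broken by natural (lexicographic) String order
        PriorityQueue<String> heap = new PriorityQueue<>(
            Comparator.comparingInt((String w) -> -freq.get(w))
                      .thenComparing(Comparator.naturalOrder()));
        heap.addAll(freq.keySet());
        List<String> result = new ArrayList<>();
        for (int i = 0; i < k && !heap.isEmpty(); i++) result.add(heap.poll());
        return result;
    }
}
```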

6️⃣ LC #438 — Find All Anagrams in a String

Core Idea

Fixed sliding window of size p.length(). Compare frequency arrays. Add right char, remove left char each slide.
Optimized → O(n) Time | O(1) Space:
java
class Solution {
    public List<Integer> findAnagrams(String s, String p) {
        List<Integer> result = new ArrayList<>();
        if (s.length() < p.length()) return result;
        int[] pFreq = new int[26], windowFreq = new int[26];
        for (char c : p.toCharArray()) pFreq[c - 'a']++;
        for (int i = 0; i < p.length(); i++) windowFreq[s.charAt(i) - 'a']++;
        if (Arrays.equals(pFreq, windowFreq)) result.add(0);
        for (int i = p.length(); i < s.length(); i++) {
            windowFreq[s.charAt(i) - 'a']++;
            windowFreq[s.charAt(i - p.length()) - 'a']--;
            if (Arrays.equals(pFreq, windowFreq)) result.add(i - p.length() + 1);
        }
        return result;
    }
}

Dry Run

Input: s = "cbaebabacd", p = "abc"
pFreq: a=1, b=1, c=1
Output: [0, 6]

Key Tricks

  • Fixed window size = p.length()
  • Arrays.equals() on int[26] = O(26) = O(1) effectively
  • Initialize first window before the main loop

30-Second Interview Explanation

Fixed sliding window of size p.length(). Maintain window frequency array. Each slide: add new rightmost char, remove leftmost. Compare with p's frequency array. Time O(n), Space O(1).

⚠️ Interview Traps

  • Not initializing first window before the loop → missing first valid anagram
  • Sorting each window → O(n * L log L), too slow

Most Common Follow-ups

Q1: LC #567 — Permutation in String (same problem, return boolean)?
Identical approach. Return true as soon as Arrays.equals(pFreq, windowFreq). No index collection needed.
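A minimal sketch of that LC #567 variant (class and parameter names illustrative; here p is the pattern and s the search string):

```java
import java.util.Arrays;

class PermutationInString {
    static boolean checkInclusion(String p, String s) {
        int m = p.length(), n = s.length();
        if (n < m) return false;
        int[] pFreq = new int[26], winFreq = new int[26];
        for (char c : p.toCharArray()) pFreq[c - 'a']++;
        for (int i = 0; i < n; i++) {
            winFreq[s.charAt(i) - 'a']++;                 // extend window right
            if (i >= m) winFreq[s.charAt(i - m) - 'a']--; // drop window left
            // Early exit on the first matching window
            if (i >= m - 1 && Arrays.equals(pFreq, winFreq)) return true;
        }
        return false;
    }
}
```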
Q2: Optimize to avoid Arrays.equals each step?
Track matches counter (number of characters where window freq == p freq). Update matches when a char's frequency changes. Return/add when matches == 26.
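The matches-counter optimization can be sketched as follows (class name illustrative); it replaces the 26-slot comparison on every slide with O(1) bookkeeping per character change:

```java
import java.util.*;

class FindAnagramsFast {
    static List<Integer> findAnagrams(String s, String p) {
        List<Integer> result = new ArrayList<>();
        int n = s.length(), m = p.length();
        if (n < m) return result;
        int[] pFreq = new int[26], winFreq = new int[26];
        for (char c : p.toCharArray()) pFreq[c - 'a']++;
        // matches = number of letters whose window count equals p's count;
        // the window is an anagram exactly when matches == 26
        int matches = 0;
        for (int i = 0; i < 26; i++) if (winFreq[i] == pFreq[i]) matches++;
        for (int i = 0; i < n; i++) {
            int in = s.charAt(i) - 'a';
            if (winFreq[in] == pFreq[in]) matches--;
            winFreq[in]++;
            if (winFreq[in] == pFreq[in]) matches++;
            if (i >= m) { // shrink window from the left
                int out = s.charAt(i - m) - 'a';
                if (winFreq[out] == pFreq[out]) matches--;
                winFreq[out]--;
                if (winFreq[out] == pFreq[out]) matches++;
            }
            if (i >= m - 1 && matches == 26) result.add(i - m + 1);
        }
        return result;
    }
}
```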

7️⃣ LC #706 — Design HashMap

Core Idea

Array of buckets (LinkedLists). Hash function maps key to bucket index. Handle collisions with chaining.
Optimized → O(1) avg Time | O(n) Space:
java
class MyHashMap {
    private static final int SIZE = 1000;
    private LinkedList<int[]>[] buckets;

    public MyHashMap() {
        buckets = new LinkedList[SIZE];
    }

    private int hash(int key) { return key % SIZE; }

    public void put(int key, int value) {
        int index = hash(key);
        if (buckets[index] == null) buckets[index] = new LinkedList<>();
        for (int[] pair : buckets[index]) {
            if (pair[0] == key) { pair[1] = value; return; }
        }
        buckets[index].add(new int[]{key, value});
    }

    public int get(int key) {
        int index = hash(key);
        if (buckets[index] == null) return -1;
        for (int[] pair : buckets[index])
            if (pair[0] == key) return pair[1];
        return -1;
    }

    public void remove(int key) {
        int index = hash(key);
        if (buckets[index] == null) return;
        buckets[index].removeIf(pair -> pair[0] == key);
    }
}

Dry Run

Input: put(1,1), put(2,2), get(1), get(3), put(2,1), get(2), remove(2), get(2)
Output: [1, -1, 1, -1]

Key Tricks

  • SIZE=1000 with chaining balances space vs collision rate
  • Check for existing key in put → update instead of duplicate
  • removeIf cleans up chains cleanly

30-Second Interview Explanation

Array of buckets. Hash key to bucket index. Handle collisions with LinkedList chaining. Put checks existing key first to avoid duplicates. Get/remove scan chain. Time O(1) avg, O(n) worst, Space O(n).

⚠️ Interview Traps

  • Not handling duplicate key on put → multiple entries for same key
  • Not returning -1 for missing key

Most Common Follow-ups

Q1: What's the difference between chaining and open addressing?
Chaining: each bucket is a list, collisions appended. Simple but extra memory for pointers. Open addressing: on collision, probe for next empty slot. Better cache performance but trickier deletion (need tombstone markers).
Q2: How would you choose the bucket SIZE?
Typically a prime number (reduces hash collisions) slightly larger than expected n. Load factor monitoring: when n/SIZE > 0.75, resize to 2*SIZE and rehash all entries.
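The resize-and-rehash step can be sketched; a minimal version that doubles a power-of-two capacity rather than re-priming, with the class name ResizableMap purely illustrative:

```java
import java.util.LinkedList;

class ResizableMap {
    private LinkedList<int[]>[] buckets = new LinkedList[8];
    private int size = 0;

    private int hash(int key, int capacity) { return Math.floorMod(key, capacity); }

    public void put(int key, int value) {
        // Load factor 0.75: grow before the chains get long
        if (size + 1 > buckets.length * 3 / 4) resize();
        int index = hash(key, buckets.length);
        if (buckets[index] == null) buckets[index] = new LinkedList<>();
        for (int[] pair : buckets[index])
            if (pair[0] == key) { pair[1] = value; return; }
        buckets[index].add(new int[]{key, value});
        size++;
    }

    public int get(int key) {
        int index = hash(key, buckets.length);
        if (buckets[index] == null) return -1;
        for (int[] pair : buckets[index])
            if (pair[0] == key) return pair[1];
        return -1;
    }

    // Double capacity and rehash every entry into the new bucket array
    private void resize() {
        LinkedList<int[]>[] old = buckets;
        buckets = new LinkedList[old.length * 2];
        for (LinkedList<int[]> chain : old) {
            if (chain == null) continue;
            for (int[] pair : chain) {
                int index = hash(pair[0], buckets.length);
                if (buckets[index] == null) buckets[index] = new LinkedList<>();
                buckets[index].add(pair);
            }
        }
    }
}
```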

8️⃣ LC #705 — Design HashSet

Core Idea

Array of buckets (LinkedLists). Same as HashMap but only stores keys, no values.
Optimized → O(1) avg Time | O(n) Space:
java
class MyHashSet {
    private static final int SIZE = 1000;
    private LinkedList<Integer>[] buckets;

    public MyHashSet() {
        buckets = new LinkedList[SIZE];
    }

    private int hash(int key) { return key % SIZE; }

    public void add(int key) {
        int index = hash(key);
        if (buckets[index] == null) buckets[index] = new LinkedList<>();
        if (!buckets[index].contains(key)) buckets[index].add(key);
    }

    public void remove(int key) {
        int index = hash(key);
        if (buckets[index] != null) buckets[index].remove(Integer.valueOf(key));
    }

    public boolean contains(int key) {
        int index = hash(key);
        return buckets[index] != null && buckets[index].contains(key);
    }
}

Dry Run

Input: add(1), add(2), contains(1), contains(3), add(2), contains(2), remove(2), contains(2)
Output: [true, false, true, false]

Key Tricks

  • Check contains before add — prevents duplicates
  • Integer.valueOf(key) in remove — removes by value not index
  • Identical structure to HashMap, just no value stored

30-Second Interview Explanation

Array of buckets with LinkedList chaining. Hash key to bucket. Add checks for existing first. Contains scans chain. Remove by value. Time O(1) avg, Space O(n).

⚠️ Interview Traps

  • list.remove(key) removes by INDEX — must use Integer.valueOf(key) to remove by value

Most Common Follow-ups

Q1: HashMap vs HashSet implementation difference?
HashMap stores [key, value] pairs. HashSet stores only keys. HashSet is internally a HashMap with a dummy value (in Java's actual implementation, it uses new HashMap<>() with PRESENT as the dummy value).
Q2: How to implement a thread-safe version?
Use synchronized on each method or use ReentrantReadWriteLock — read lock for contains, write lock for add/remove. Or use ConcurrentHashMap internally.
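A minimal sketch of the ReentrantReadWriteLock approach mentioned above (class name illustrative); ConcurrentHashMap.newKeySet() would be the off-the-shelf alternative:

```java
import java.util.HashSet;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ConcurrentSet {
    private final HashSet<Integer> set = new HashSet<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public void add(int key) {
        lock.writeLock().lock(); // writers are exclusive
        try { set.add(key); } finally { lock.writeLock().unlock(); }
    }

    public void remove(int key) {
        lock.writeLock().lock();
        try { set.remove(key); } finally { lock.writeLock().unlock(); }
    }

    public boolean contains(int key) {
        lock.readLock().lock(); // concurrent readers don't block each other
        try { return set.contains(key); } finally { lock.readLock().unlock(); }
    }
}
```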

9️⃣ LC #454 — 4Sum II

Core Idea

Split 4 arrays into 2 pairs. Store all sums of first pair in HashMap. Count how many sums in second pair are the negatives.
Brute Force → O(n⁴) Time | O(1) Space:
Four nested loops checking every (a, b, c, d) combination.
Optimized → O(n²) Time | O(n²) Space:
java
class Solution {
    public int fourSumCount(int[] nums1, int[] nums2, int[] nums3, int[] nums4) {
        HashMap<Integer, Integer> sumAB = new HashMap<>();
        for (int a : nums1)
            for (int b : nums2)
                sumAB.merge(a + b, 1, Integer::sum);
        int count = 0;
        for (int c : nums3)
            for (int d : nums4)
                count += sumAB.getOrDefault(-(c + d), 0);
        return count;
    }
}

Dry Run

Input: nums1=[1,2], nums2=[-2,-1], nums3=[-1,2], nums4=[0,2]
All sums of nums1+nums2:
  • 1+(-2)=-1, 1+(-1)=0, 2+(-2)=0, 2+(-1)=1
sumAB: {-1:1, 0:2, 1:1}
All sums of nums3+nums4:
  • (-1)+0=-1 → look for -(-1)=1 → count+=1
  • (-1)+2=1 → look for -1 → count+=1
  • 2+0=2 → look for -2 → count+=0
  • 2+2=4 → look for -4 → count+=0
Output: 2

Key Tricks

  • Split 4 arrays into 2 pairs: O(n⁴) → O(n²)
  • Store first pair sums, look for negatives in second pair
  • -(c+d) is the complement needed for zero total

30-Second Interview Explanation

Split 4 arrays into 2 pairs. Store all pairwise sums of first pair in HashMap. For each pairwise sum of second pair, look up its negative in the map. Count total matches. Time O(n²), Space O(n²).

⚠️ Interview Traps

  • Trying to do all 4 together → O(n⁴)
  • Looking up c+d instead of -(c+d)

Most Common Follow-ups

Q1: Why does splitting into 2 pairs work?
a+b+c+d=0 means a+b = -(c+d). So we store all a+b sums and check if -(c+d) exists. Reduces 4 nested loops to 2 pairs of nested loops: O(n⁴) → O(n²).
Q2: Can you extend this idea to k-Sum with k arrays?
Yes — split k arrays into two halves. Compute all pairwise sums for each half. Then use the complement lookup. O(n^(k/2)) time — meet-in-the-middle approach.