Cosmic Module D
Qubits of DPK
March 16, 2026
Core DSA
1️⃣ LC #3 — Longest Substring Without Repeating Characters
Core Idea
Sliding window with HashSet. Expand right always. When duplicate found, shrink from left until duplicate removed.
Brute Force → O(n²) Time | O(n) Space:
Check every substring for uniqueness.
Optimized → O(n) Time | O(n) Space:
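A minimal sketch of the optimized window in Java (the `Solution` class and method signature follow LeetCode's convention):

```java
import java.util.HashSet;
import java.util.Set;

class Solution {
    public int lengthOfLongestSubstring(String s) {
        Set<Character> window = new HashSet<>();
        int left = 0, best = 0;
        for (int right = 0; right < s.length(); right++) {
            char c = s.charAt(right);
            // Shrink from the left until the duplicate is removed
            while (window.contains(c)) {
                window.remove(s.charAt(left++));
            }
            window.add(c);
            best = Math.max(best, right - left + 1); // current window size
        }
        return best;
    }
}
```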
Dry Run
Input: s = "abcabcbb"
Output: 3
Key Tricks
- Window = right - left + 1
- Shrink from left until duplicate removed — don't jump directly
- HashSet gives O(1) contains/remove
- Optimized variant: use HashMap<char, lastIndex> to jump left pointer directly
30-Second Interview Explanation
Sliding window with HashSet tracking unique chars in current window. Expand right always. When duplicate detected, shrink from left removing chars until duplicate gone. Window size at each step = candidate answer. Time O(n), Space O(min(n, alphabet)).
⚠️ Interview Traps
- Not shrinking properly — jumping left pointer directly without removing intervening chars from set
- Off-by-one in window size: right - left + 1 not right - left
Most Common Follow-ups
Q1: Can you optimize the shrinking with HashMap?
Yes — store char → last seen index. When duplicate found, jump left = max(left, lastSeen[char] + 1). Avoids repeated shrinking. O(n) time, same space.
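A sketch of the jump variant (class name `SolutionJump` is illustrative, not LeetCode's):

```java
import java.util.HashMap;
import java.util.Map;

class SolutionJump {
    public int lengthOfLongestSubstring(String s) {
        Map<Character, Integer> lastSeen = new HashMap<>();
        int left = 0, best = 0;
        for (int right = 0; right < s.length(); right++) {
            char c = s.charAt(right);
            if (lastSeen.containsKey(c)) {
                // max() prevents left from moving backwards on stale indices
                left = Math.max(left, lastSeen.get(c) + 1);
            }
            lastSeen.put(c, right);
            best = Math.max(best, right - left + 1);
        }
        return best;
    }
}
```

The `Math.max` guard matters on inputs like `"abba"`: without it, the stale index of `'a'` would pull `left` back inside the window.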
Q2: What if only lowercase letters?
Use int[26] instead of HashSet. Same logic, constant space O(1).
2️⃣ LC #146 — LRU Cache
Core Idea
Combine HashMap (O(1) lookup) + Doubly Linked List (O(1) insert/delete). Most recently used at head, LRU at tail.
Optimized → O(1) Time per get/put | O(capacity) Space:
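A sketch of the HashMap + doubly linked list design described above (LeetCode's `LRUCache` interface):

```java
import java.util.HashMap;
import java.util.Map;

class LRUCache {
    // Node stores the key too, so eviction can remove it from the map
    private static class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<Integer, Node> map = new HashMap<>();
    // Dummy head/tail eliminate null checks at the list ends
    private final Node head = new Node(0, 0);
    private final Node tail = new Node(0, 0);

    public LRUCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    private void remove(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void insertAtFront(Node node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    public int get(int key) {
        Node node = map.get(key);
        if (node == null) return -1;
        remove(node);           // moving to front marks it recently used
        insertAtFront(node);
        return node.value;
    }

    public void put(int key, int value) {
        Node node = map.get(key);
        if (node != null) {
            node.value = value;
            remove(node);
            insertAtFront(node);
            return;
        }
        if (map.size() == capacity) {
            Node lru = tail.prev;   // least recently used lives at the tail
            remove(lru);
            map.remove(lru.key);
        }
        Node fresh = new Node(key, value);
        map.put(key, fresh);
        insertAtFront(fresh);
    }
}
```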
Dry Run
Input: capacity=2, operations: put(1,1), put(2,2), get(1), put(3,3), get(2)
Output: [1, -1]
Key Tricks
- Dummy head and tail nodes eliminate null checks
- Node stores key too — needed when evicting LRU to remove from HashMap
- Every get AND put moves node to front (marks as recently used)
30-Second Interview Explanation
HashMap for O(1) key lookup. Doubly linked list for O(1) insertion/deletion. Most recent at head, LRU at tail. On get: move to front. On put: insert at front, evict tail if over capacity. Time O(1), Space O(capacity).
⚠️ Interview Traps
- Forgetting to store key in Node — can't remove from HashMap on eviction without it
- Not using dummy head/tail — lots of null pointer errors
- Not moving node to front on GET (only on put)
Most Common Follow-ups
Q1: Why doubly linked list and not singly?
Deletion requires access to the previous node. With singly linked list, deleting an arbitrary node requires O(n) traversal to find prev. Doubly linked list gives O(1) deletion from any position.
Q2: Can you use Java's built-in LinkedHashMap?
Yes — LinkedHashMap with accessOrder=true maintains access order automatically.
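A sketch of the LinkedHashMap route (wrapper class name is illustrative). The three-argument constructor's `accessOrder=true` flag reorders entries on every access, and overriding `removeEldestEntry` makes the map evict automatically:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class LruViaLinkedHashMap {
    private final LinkedHashMap<Integer, Integer> map;

    LruViaLinkedHashMap(int capacity) {
        // accessOrder=true: iteration order = least -> most recently accessed
        map = new LinkedHashMap<>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > capacity; // evict LRU entry on overflow
            }
        };
    }

    public int get(int key) { return map.getOrDefault(key, -1); }

    public void put(int key, int value) { map.put(key, value); }
}
```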
But interviewers usually want the manual implementation to test your data structure knowledge.
3️⃣ LC #974 — Subarray Sums Divisible by K
Core Idea
Prefix sum + modulo. If prefix[j] % k == prefix[i] % k, then subarray [i+1, j] is divisible by k. Count pairs with same remainder.
Optimized → O(n) Time | O(k) Space:
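A sketch of the remainder-counting approach (class name is illustrative; assumes k > 0, as LeetCode guarantees):

```java
class SubarraysDivByK {
    public int subarraysDivByK(int[] nums, int k) {
        int[] remainderCount = new int[k];
        remainderCount[0] = 1;  // empty prefix: subarrays starting at index 0
        int prefixMod = 0, count = 0;
        for (int num : nums) {
            // Normalize: Java's % can return negative remainders
            prefixMod = ((prefixMod + num) % k + k) % k;
            count += remainderCount[prefixMod]; // pair with every earlier equal remainder
            remainderCount[prefixMod]++;
        }
        return count;
    }
}
```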
Dry Run
Input: nums = [4,5,0,-2,-3,1], k=5
Output: 7
Key Tricks
- ((prefixSum % k) + k) % k — handles negative remainders in Java
- remainderCount[0] = 1 — accounts for subarrays starting from index 0
- Each new occurrence of same remainder = that many new valid subarrays
30-Second Interview Explanation
If two prefix sums have same remainder mod k, their difference is divisible by k. Count pairs with matching remainders using frequency array. Handle negatives with (rem % k + k) % k. Time O(n), Space O(k).
⚠️ Interview Traps
- Java's % returns negative values for negative numbers — must normalize
- Forgetting remainderCount[0] = 1 — misses subarrays starting at index 0
Most Common Follow-ups
Q1: Why does same remainder mean divisible by k?
If prefix[j] % k == prefix[i] % k, then (prefix[j] - prefix[i]) % k == 0. And prefix[j] - prefix[i] = sum of subarray [i+1, j]. So that subarray sum is divisible by k.
Q2: How to handle negative numbers in Java modulo?
Java's % operator returns negative remainders for negative dividends. Fix: (n % k + k) % k. The + k shifts negative remainders to positive range, and the second % k brings values that were already positive back down.
4️⃣ GFG — Count Subarrays with XOR = K
Core Idea
Prefix XOR. If prefixXOR[j] ^ prefixXOR[i] == k, then subarray [i+1, j] has XOR = k. Equivalent: prefixXOR[i] == prefixXOR[j] ^ k.
Optimized → O(n) Time | O(n) Space:
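A sketch of the prefix-XOR counting approach (class and method names are illustrative; GFG's signature differs slightly):

```java
import java.util.HashMap;
import java.util.Map;

class SubarraysWithXorK {
    public int countSubarraysWithXor(int[] arr, int k) {
        Map<Integer, Integer> prefixCount = new HashMap<>();
        prefixCount.put(0, 1);  // empty prefix: subarrays starting at index 0
        int prefixXor = 0, count = 0;
        for (int num : arr) {
            prefixXor ^= num;
            // A subarray ending here has XOR k iff some earlier prefix == prefixXor ^ k
            count += prefixCount.getOrDefault(prefixXor ^ k, 0);
            prefixCount.merge(prefixXor, 1, Integer::sum);
        }
        return count;
    }
}
```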
Dry Run
Input: arr = [4,2,2,6,4], k=6
Output: 4
Key Tricks
- XOR property: a ^ b = c implies a ^ c = b
- prefixXOR[i] == prefixXOR[j] ^ k → look up prefixXOR ^ k in map
- prefixCount.put(0, 1) — handles subarrays starting at index 0
30-Second Interview Explanation
Prefix XOR analog of prefix sum. If prefix[j] ^ prefix[i] == k, subarray has XOR = k. Rearranging: look for prefix[j] ^ k in the map. Count occurrences. Time O(n), Space O(n).
⚠️ Interview Traps
- Forgetting prefixCount.put(0, 1) — misses subarrays starting from index 0
- Confusing the lookup: look for prefixXOR ^ k, not just k
Most Common Follow-ups
Q1: Why is the lookup prefixXOR ^ k and not something else?
We want subarray XOR = k, meaning prefix[j] ^ prefix[i] = k. Rearranging using XOR property (a ^ b = c → a = b ^ c): prefix[i] = prefix[j] ^ k. So we look up prefix[j] ^ k in the map of past prefix XORs.
Q2: Connection to Subarray Sum Equals K (LC #560)?
Identical pattern. For sum: look up prefixSum - k. For XOR: look up prefixXOR ^ k. Both store prefix frequency in a map and check for the complement needed to form a valid subarray.
5️⃣ LC #41 — First Missing Positive
Core Idea
Place each number at its correct index (nums[i] should be at index nums[i]-1). Scan for first mismatch.
Brute Force → O(n log n): Sort, then scan.
Optimized → O(n) Time | O(1) Space:
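A sketch of the cyclic-placement approach (class name is illustrative; LeetCode's method signature):

```java
class FirstMissingPositive {
    public int firstMissingPositive(int[] nums) {
        int n = nums.length;
        for (int i = 0; i < n; i++) {
            // Swap only in-range values not already at their home index nums[i]-1;
            // the last condition also breaks infinite loops on duplicates
            while (nums[i] >= 1 && nums[i] <= n && nums[nums[i] - 1] != nums[i]) {
                int target = nums[i] - 1;
                int tmp = nums[target];
                nums[target] = nums[i];
                nums[i] = tmp;
            }
        }
        for (int i = 0; i < n; i++) {
            if (nums[i] != i + 1) return i + 1; // first slot with wrong value
        }
        return n + 1; // 1..n all present
    }
}
```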
Dry Run
Input: nums = [3, 4, -1, 1], n=4
Placement pass: [3, 4, -1, 1] → [-1, 4, 3, 1] → [-1, 1, 3, 4] → [1, -1, 3, 4] (each in-range value swapped to index value-1)
Scan pass: index 0 holds 1 ✓; index 1 holds -1, expected 2 ✗
Output: 2
Key Tricks
- Only swap if nums[i] is in range [1, n] AND not already at correct position
- nums[nums[i]-1] != nums[i] prevents infinite loop on duplicates
- Answer is always in range [1, n+1] — pigeonhole principle
30-Second Interview Explanation
Answer is in range [1, n+1]. Place each number at its correct index using cyclic swaps. Scan for first mismatch — that index+1 is the answer. If all placed correctly, answer is n+1. Time O(n), Space O(1).
⚠️ Interview Traps
- Not checking nums[nums[i]-1] != nums[i] → infinite loop on duplicates like [1,1]
- Forgetting the return n+1 case — when [1,2,3,...,n] are all present
Most Common Follow-ups
Q1: Why is the answer always in [1, n+1]?
Pigeonhole principle: array has n elements. If all of 1..n are present, answer is n+1. If any of 1..n is missing, that's the answer. Numbers outside [1,n] (negatives, zeros, >n) are irrelevant.
Q2: Why doesn't the while loop make this O(n²)?
Each element is swapped at most once to its correct position. Once in place, it's never moved again. Total swaps across all iterations ≤ n. Amortized O(n).
6️⃣ LC #560 — Subarray Sum Equals K
Core Idea
Prefix sum. If prefixSum[j] - prefixSum[i] == k, subarray [i+1, j] sums to k. Store prefix sum frequencies.
Brute Force → O(n²) Time | O(1) Space:
Check every subarray.
Optimized → O(n) Time | O(n) Space:
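A sketch of the prefix-frequency approach (class name is illustrative; LeetCode's method signature):

```java
import java.util.HashMap;
import java.util.Map;

class SubarraySumEqualsK {
    public int subarraySum(int[] nums, int k) {
        Map<Integer, Integer> prefixCount = new HashMap<>();
        prefixCount.put(0, 1);  // empty prefix: subarrays starting at index 0
        int prefixSum = 0, count = 0;
        for (int num : nums) {
            prefixSum += num;
            // Look up BEFORE updating the map
            count += prefixCount.getOrDefault(prefixSum - k, 0);
            prefixCount.merge(prefixSum, 1, Integer::sum);
        }
        return count;
    }
}
```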
Dry Run
Input: nums = [1, 2, 3], k=3
Output: 2 (subarrays [1,2] and [3])
Key Tricks
- Look up prefixSum - k in the map
- prefixCount.put(0, 1) — handles subarrays starting from index 0
- Look up BEFORE updating map — otherwise the current prefix can match itself (wrongly counting an empty subarray when k = 0)
30-Second Interview Explanation
If prefixSum at j minus prefixSum at i equals k, subarray [i+1,j] = k. Store frequency of prefix sums. At each step look up (prefixSum - k) in map. Time O(n), Space O(n).
⚠️ Interview Traps
- Sliding window doesn't work here — negative numbers allowed
- Look up before update — not after
Most Common Follow-ups
Q1: Why can't you use sliding window?
Sliding window requires monotonic behavior — when you expand right, sum increases; shrink left, decreases. With negative numbers this breaks: adding a number can decrease the sum, so you can't know whether to expand or shrink.
Q2: LC #974 — Subarray Sums Divisible by K connection?
Identical prefix sum pattern. For exact sum k: look up prefixSum - k. For divisible by k: look up same remainder. Both rely on the same mathematical insight about prefix differences.
7️⃣ LC #523 — Continuous Subarray Sum
Core Idea
Prefix sum mod k. If two prefix sums have same remainder and indices are at least 2 apart, subarray between them has sum divisible by k.
Optimized → O(n) Time | O(k) Space:
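A sketch of the first-occurrence remainder map (class name is illustrative; assumes k > 0 and non-negative nums, per LeetCode's current constraints):

```java
import java.util.HashMap;
import java.util.Map;

class ContinuousSubarraySum {
    public boolean checkSubarraySum(int[] nums, int k) {
        Map<Integer, Integer> firstIndex = new HashMap<>();
        firstIndex.put(0, -1);  // empty prefix "ends" at index -1
        int prefixMod = 0;
        for (int i = 0; i < nums.length; i++) {
            prefixMod = (prefixMod + nums[i]) % k;
            Integer seen = firstIndex.get(prefixMod);
            if (seen != null) {
                if (i - seen >= 2) return true; // at least 2 elements between them
            } else {
                firstIndex.put(prefixMod, i);   // keep FIRST occurrence only
            }
        }
        return false;
    }
}
```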
Dry Run
Input: nums = [23, 2, 4, 6, 7], k=6
Output: true (subarray [2,4] sums to 6)
Key Tricks
- Store FIRST occurrence of each remainder (don't overwrite)
- remainderIndex.put(0, -1) — handles subarrays starting from index 0
- Check i - firstIndex >= 2 — subarray must have at least 2 elements
30-Second Interview Explanation
Same remainder at two prefix sums means divisible-by-k subarray between them. Store first occurrence of each remainder. On revisit, check length >= 2. Time O(n), Space O(min(n,k)).
⚠️ Interview Traps
- Overwriting earlier index → might miss valid long subarrays
- put(0, -1) not put(0, 0) — index -1 ensures length check works for subarrays starting at 0
Most Common Follow-ups
Q1: Why store first occurrence and not update?
We want the LONGEST possible subarray. First occurrence gives maximum distance. If we overwrote with later indices, we might miss valid subarrays of length >= 2.
Q2: Why -1 as the initial index for remainder 0?
If the prefix sum at index 0 is divisible by k (remainder 0), the candidate subarray is nums[0..0] with length 0 - (-1) = 1, which correctly fails the length >= 2 check. Initializing with -1 makes the check i - (-1) = i + 1 work uniformly: the earliest index where remainder 0 can pass is i = 1, giving length 2.
8️⃣ LC #347 — Top K Frequent Elements
Core Idea
Count frequencies with HashMap. Use min-heap of size k to track top k. Or bucket sort by frequency for O(n).
HashMap + Min-Heap → O(n log k) Time | O(n) Space:
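A sketch of the size-k min-heap variant (class name is illustrative; heap entries are `{value, frequency}` pairs):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

class TopKFrequentHeap {
    public int[] topKFrequent(int[] nums, int k) {
        Map<Integer, Integer> freq = new HashMap<>();
        for (int num : nums) freq.merge(num, 1, Integer::sum);
        // Min-heap by frequency; the root is the weakest current candidate
        PriorityQueue<int[]> heap =
            new PriorityQueue<>((a, b) -> Integer.compare(a[1], b[1]));
        for (Map.Entry<Integer, Integer> e : freq.entrySet()) {
            heap.offer(new int[]{e.getKey(), e.getValue()});
            if (heap.size() > k) heap.poll(); // keep only the top k
        }
        int[] result = new int[k];
        for (int i = k - 1; i >= 0; i--) result[i] = heap.poll()[0];
        return result;
    }
}
```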
Bucket Sort → O(n) Time | O(n) Space:
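A sketch of the bucket-sort variant (class name is illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class TopKFrequentBuckets {
    public int[] topKFrequent(int[] nums, int k) {
        Map<Integer, Integer> freq = new HashMap<>();
        for (int num : nums) freq.merge(num, 1, Integer::sum);
        // buckets[f] = all values occurring exactly f times (max frequency = n)
        List<List<Integer>> buckets = new ArrayList<>();
        for (int i = 0; i <= nums.length; i++) buckets.add(new ArrayList<>());
        for (Map.Entry<Integer, Integer> e : freq.entrySet()) {
            buckets.get(e.getValue()).add(e.getKey());
        }
        int[] result = new int[k];
        int idx = 0;
        // Scan from highest frequency down until k elements are collected
        for (int f = nums.length; f >= 1 && idx < k; f--) {
            for (int val : buckets.get(f)) {
                result[idx++] = val;
                if (idx == k) break;
            }
        }
        return result;
    }
}
```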
Dry Run
Input: nums = [1,1,1,2,2,3], k=2
Frequency map: {1:3, 2:2, 3:1}
Bucket sort (index = frequency): bucket[3] = {1}, bucket[2] = {2}, bucket[1] = {3}
Scan from right (highest freq): pick 1 (freq 3), then 2 (freq 2). k=2 reached.
Output: [1, 2]
Key Tricks
- Bucket index = frequency value (max frequency = n)
- Scan buckets from high to low, collect until k elements gathered
- Bucket sort gives O(n) vs O(n log k) for heap
30-Second Interview Explanation
Count frequencies with HashMap. Bucket sort by frequency (bucket index = frequency, max = n). Scan buckets high to low, collect first k elements. Time O(n), Space O(n).
⚠️ Interview Traps
- Using max-heap of all elements → O(n log n), defeats purpose
- Not knowing bucket sort approach when interviewer asks for O(n)
Most Common Follow-ups
Q1: What's the difference between min-heap and bucket sort approach?
Min-heap O(n log k) — good for streaming data or when k << n. Bucket sort O(n) — optimal for static arrays but requires O(n) extra space and only works when max frequency is bounded by n.
Q2: LC #692 — Top K Frequent Words (same idea, string comparison)?
Same frequency counting. But for equal frequencies, sort alphabetically. Use max-heap with custom comparator: higher frequency wins; on tie, lexicographically smaller string wins.
9️⃣ LC #438 — Find All Anagrams in a String
Core Idea
Sliding window of size p.length(). Compare frequency arrays. Move window right, add new char, remove oldest char.
Optimized → O(n) Time | O(1) Space:
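A sketch of the fixed-window frequency comparison (class name is illustrative; assumes lowercase letters only, per the problem's constraints):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class FindAnagrams {
    public List<Integer> findAnagrams(String s, String p) {
        List<Integer> result = new ArrayList<>();
        if (s.length() < p.length()) return result;
        int[] pFreq = new int[26], window = new int[26];
        // Initialize the first window of size p.length() before the loop
        for (int i = 0; i < p.length(); i++) {
            pFreq[p.charAt(i) - 'a']++;
            window[s.charAt(i) - 'a']++;
        }
        if (Arrays.equals(pFreq, window)) result.add(0);
        for (int right = p.length(); right < s.length(); right++) {
            window[s.charAt(right) - 'a']++;              // add new char
            window[s.charAt(right - p.length()) - 'a']--; // drop oldest char
            if (Arrays.equals(pFreq, window)) result.add(right - p.length() + 1);
        }
        return result;
    }
}
```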
Dry Run
Input: s = "cbaebabacd", p = "abc"
pFreq: a=1, b=1, c=1
Output: [0, 6]
Key Tricks
- Fixed window size = p.length()
- Add rightmost char, remove leftmost char each slide
- Arrays.equals() compares frequency arrays in O(26) = O(1)
30-Second Interview Explanation
Fixed sliding window of size p.length(). Maintain frequency array of window. Slide right: add new char, remove oldest. Compare with p's frequency. Time O(n), Space O(1).
⚠️ Interview Traps
- Sorting each window → O(n * L log L), too slow
- Not initializing first window before the loop
Most Common Follow-ups
Q1: LC #567 — Permutation in String (same problem, return boolean)?
Identical approach. Just return true as soon as match found instead of collecting indices.
Q2: Optimize to avoid Arrays.equals() each step?
Track a matches counter over the 26 letters. When a letter's window frequency comes to match p's, increment matches; when it diverges, decrement. The window is an anagram exactly when matches == 26 — no full array comparison needed per step.
🔟 LC #290 — Word Pattern
Core Idea
Bijection between pattern chars and words. Two HashMaps — charToWord and wordToChar — both directions must match.
Optimized → O(n) Time | O(n) Space:
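A sketch of the two-map bijection (class name is illustrative; LeetCode's method signature):

```java
import java.util.HashMap;
import java.util.Map;

class WordPattern {
    public boolean wordPattern(String pattern, String s) {
        String[] words = s.split(" ");
        if (pattern.length() != words.length) return false; // length check upfront
        Map<Character, String> charToWord = new HashMap<>();
        Map<String, Character> wordToChar = new HashMap<>();
        for (int i = 0; i < pattern.length(); i++) {
            char c = pattern.charAt(i);
            String w = words[i];
            // Both directions must agree for a true bijection
            if (charToWord.containsKey(c) && !charToWord.get(c).equals(w)) return false;
            if (wordToChar.containsKey(w) && wordToChar.get(w) != c) return false;
            charToWord.put(c, w);
            wordToChar.put(w, c);
        }
        return true;
    }
}
```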
Dry Run
Input: pattern = "abba", s = "dog cat cat dog"
Output: true
Failure case: pattern = "aa", s = "dog cat"
Output: false
Key Tricks
- Same as Isomorphic Strings but at word level
- Two maps enforcing bijection both ways
- Length check upfront — pattern.length() != words.length
30-Second Interview Explanation
Bijection between pattern chars and words. Two maps — charToWord and wordToChar — enforce one-to-one mapping in both directions. Same as Isomorphic Strings at word level. Time O(n*L), Space O(n).
⚠️ Interview Traps
- One map alone → misses many-to-one mapping
- Not checking length mismatch upfront
Most Common Follow-ups
Q1: Connection to LC #205 — Isomorphic Strings?
Identical concept — char-to-char bijection vs char-to-word bijection. Same two-map approach. Only difference: values in the map are strings instead of characters, so use .equals() for comparison not ==.
Q2: What if pattern has more characters than words?
Handled by length check: if (pattern.length() != words.length) return false. Returns false immediately.